3ds Max 2010 / 2012 workspace files parsing - xna

We are in the process of sorting out which 3ds Max files we need to keep as "source" for an XNA game engine application.
There are .X files (DirectX model files) we need to keep, of course. That much I understand.
However, our 3D developer bailed on us and left us in the lurch, so I am now stuck with the inglorious task of deciding which files to keep.
We've got a bunch of utilities, PDF documentation and other, what I would consider, "noise" files commingled with the necessary stuff, like .X, .MAX, .VRMESH (I think?) or other such files.
Any pointers on what to watch out for when taking an inventory of what we've got, what we need to keep, and what we could get rid of or at least archive (which we have anyway)?
Thanks...
Michael

In 3ds Max there's a feature called Asset Tracking. It lets you see (and change) all the texture paths for the scene. Take the latest version of every 3ds Max scene, go through the asset list and copy each referenced texture to the desired folder. Note that it's possible to use relative paths here.
I'm sure next time you'll provide your artists with a directive on folder structure, source control and general organisation!

Related

How to identify what projects have been affected by a code change

I have a large application to manage consisting of three or four executables and as many as fifty .dlls. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, version numbers, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat) (easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of StackOverflow forbid me to ask for recommended software, but if anyone has any positive experiences of continuous integration tools that would help - great
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
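For illustration, a minimal sketch of how that cross-reference could be automated, in C#. It assumes the detailed segment map in the Delphi .map file tags each segment with M=<UnitName> (check the exact layout for your compiler version), and that you have already produced a plain list of changed unit names, e.g. by post-processing "svn diff --summarize" output; the changedUnits.txt file name is only a placeholder.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class MapFileUnits
{
    // Extract unit names from a Delphi linker .map file. This assumes the
    // "Detailed map of segments" section tags each segment with "M=<UnitName>";
    // verify the format for your Delphi version.
    static IEnumerable<string> UnitsInMap(string mapFilePath)
    {
        var unitPattern = new Regex(@"\bM=(\S+)");
        return File.ReadLines(mapFilePath)
                   .Select(line => unitPattern.Match(line))
                   .Where(m => m.Success)
                   .Select(m => m.Groups[1].Value)
                   .Distinct(StringComparer.OrdinalIgnoreCase);
    }

    static void Main(string[] args)
    {
        // changedUnits.txt (placeholder name): one changed unit name per line,
        // derived from your revision control system for the two revisions.
        var changed = new HashSet<string>(File.ReadAllLines("changedUnits.txt"),
                                          StringComparer.OrdinalIgnoreCase);

        // Each command-line argument is a .map file emitted next to an exe/dll.
        foreach (var mapFile in args)
        {
            var affected = UnitsInMap(mapFile).Where(changed.Contains).ToList();
            if (affected.Count > 0)
                Console.WriteLine($"{Path.GetFileName(mapFile)}: retest ({string.Join(", ", affected)})");
        }
    }
}
```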
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label or some such. But it is asking too much to be able to ask a computer to make such a judgement. At some point you need a human to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool which supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo, etc.
Another suggestion - based on experience with Maven - is to make the general software architecture as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing file sizes is a bad idea, because a file's size can stay the same despite lots of changes being made to it (as long as there is the same amount of text in a .pas file, its size won't change). Instead you could check each file's last modification time, or compute a hash value such as an MD5 hash for comparison (which can be quite slow).
Then you need to generate a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with such an approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
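To make the approach concrete, here is a minimal sketch in C# of the hash-based change detection combined with a hand-maintained dependency map. The project names, unit paths and the hashes.txt file are only placeholders; a real tool would generate the map from your project files or from the linker's .map output, and it needs a recent .NET runtime (Convert.ToHexString requires .NET 5+).

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class RetestAdvisor
{
    // Hypothetical, hand-maintained dependency map: binary -> units it uses.
    static readonly Dictionary<string, string[]> ProjectUnits = new()
    {
        ["App1.exe"] = new[] { @"src\UMain.pas", @"src\UShared.pas" },
        ["Lib1.dll"] = new[] { @"src\UShared.pas", @"src\ULib.pas" },
    };

    // Hash the file contents rather than trusting size or timestamps.
    static string Md5Of(string path)
    {
        using var md5 = MD5.Create();
        using var stream = File.OpenRead(path);
        return Convert.ToHexString(md5.ComputeHash(stream));
    }

    static void Main()
    {
        // Hashes from the previous run, stored as "path=hash" lines.
        var previous = File.Exists("hashes.txt")
            ? File.ReadAllLines("hashes.txt")
                  .Select(l => l.Split('='))
                  .ToDictionary(p => p[0], p => p[1], StringComparer.OrdinalIgnoreCase)
            : new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        var allUnits = ProjectUnits.Values.SelectMany(u => u).Distinct().ToList();
        var current  = allUnits.ToDictionary(u => u, Md5Of, StringComparer.OrdinalIgnoreCase);

        var changed = allUnits
            .Where(u => !previous.TryGetValue(u, out var h) || h != current[u])
            .ToHashSet(StringComparer.OrdinalIgnoreCase);

        // Walk the dependency map to see which binaries are affected.
        foreach (var (project, units) in ProjectUnits)
            if (units.Any(changed.Contains))
                Console.WriteLine($"{project} needs a rebuild and retest");

        // Record current hashes for the next run.
        File.WriteAllLines("hashes.txt", allUnits.Select(u => $"{u}={current[u]}"));
    }
}
```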
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper integration of Git into the project manager itself could be quite powerful, thanks to Git's support for branching/sub-branching (each project is its own branch, each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, but it is done through SVN, which unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.

Blackberry source code files?

We outsourced the development of Blackberry 5, 6, and 7 apps. Please bear in mind that I have absolutely no knowledge of Blackberry development at all.
Development is complete, and they have sent us the source code - a collection of .cod, .csl, .cso, .debug, .jad and .rapc files.
I would at least like to review the code in terms of its consistency and standards - somewhat as a measure of the quality. Clean code is not something specific to any one platform.
I have tried to open each of these files in Notepad, but found no source code.
Please advise me on what I need before I go pay them a visit.
The files you have been given are the files that are created as part of the build of your project and the resultant executable files. There is no source included here.
In a BB OS Build, regardless of the development environment used, the Java source files will all have the suffix .java, and the assets (images etc.) will have a suffix appropriate to the asset (like .png). If you don't see these file suffixes, then you have not been given the actual source. You should be able to view the java files using Notepad, the other files will open in an appropriate application (like paint).
To get the complete source, you should ask for the full 'project' files for your development. This will be a directory with a number of subdirectories. The actual names used and the structure will depend on the development tool. If your developer is using Eclipse, then the two important directories are called src for source and res (resources) for the assets. If they are using another development environment, then the directories might have different names. So you should ask them what development environment they are using too.
Two other points:
1) If you are paying for this development and wish to review the code, but are not familiar with Java, then I would recommend that you pay someone to review the code who has knowledge of BB Java. There are two reasons for this:
(a) you will not be able to form a judgement on the appropriateness of the code without some understanding of Java, and
(b) you will not understand if the correct BB Java approaches have been used.
You need to be cautious about this, because programmers will always find fault in other developers' code. The question is how significant the faults are.
2) Some developers might be wary of giving source to their client while some payment is outstanding.

Content Management Differences from XNA 3 to XNA 4

In XNA 4, the Content structure seems much different than it was in 3 (there's the new Content project there by default with the project I created). I'm essentially just trying to understand what the purpose of the structural change was. Am I required to put all my content in the "content project?" Is it just supposed to help me be neater? Is this what people mean when they're talking about the "content pipeline?"
Thanks!
The XNA 4 content project is the project that will store all the assets in the game (textures, sounds, etc.).
The content pipeline, as MSDN describes, is "A set of processes applied when a game that includes art assets is built. The process starts with an art asset in its original form as a file, and continues to its transformation as data that can be retrieved and used within an XNA Game Studio game through the XNA Framework Class Library. "
Content can be loaded using a ContentManager. Here you can find the basics to using it.
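For context, a minimal sketch of that pattern in an XNA 4 Game class; the asset names ("Textures/player", "Sounds/explosion") are hypothetical and simply mirror where the items sit inside the content project.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D playerTexture;
    SoundEffect explosion;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        // Asset names are resolved relative to this directory,
        // i.e. the built output of the content project.
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        // The string is the asset name: its path inside the content project,
        // without the file extension. These particular names are examples.
        playerTexture = Content.Load<Texture2D>("Textures/player");
        explosion     = Content.Load<SoundEffect>("Sounds/explosion");
    }
}
```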

How to use an .xnb file in MonoGame on Windows 8?

I've got an .xnb file from a Windows Phone project made on Windows 7. I'd like to use the same asset in a Windows 8 Metro app. I've got MSVS 2012 RC on Windows 8 with a project from the MonoGameWindowsMetroApplication template. I put the .xnb file in the project Assets folder and tried to load it, but I get an exception telling me that the asset can't be loaded. What properties or configuration do I need to be able to use the .xnb file?
While MonoGame is still working on the Content Pipeline piece of the framework, you can simply use the same Content Pipeline that you used for the Windows Phone 7 game to build your content, and then use the resulting .xnb files in your Windows 8 MonoGame project.
I have a blog tutorial series on building XNA games with MonoGame for Windows 8, and if you look at Part 3, I provide a step-by-step walkthrough of accomplishing this. My hands-on lab and information on my blog was reviewed and approved by the MonoGame team.
See info here: http://blogs.msdn.com/b/tarawalker/archive/2013/01/04/windows-8-game-development-using-c-xna-and-monogame-3-0-building-a-shooter-game-walkthrough-part-3-updating-graphics-using-content-pipeline-with-monogame.aspx
The runtime Xna framework has many type reader objects that read the various xnb files. Some of the type readers are for xnb files that were processed from fbx models, others are for xnb files processed from 2D image files, etc.
When you call content.Load<T>(xnb file name), the content manager uses the appropriate reader to read the byte information of the xnb file into the appropriate Xna class. The byte information in an xnb file is designed to be used by a specific Xna class and no other. There are no Xna classes in Metro.
All this is to say that there are no type readers in the Metro environment that read these xnb files. You would have to write your own. It's not that hard though; that's similar to what I did. I now process an fbx file into xnb by building an Xna project, then my Xna project writes the runtime model's data to my own binary file, which I read from my Metro app. But I could've simply read the xnb as well; I just would have had to study the ContentManager's and Model class's type readers to see how the byte info was laid out.
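As a rough illustration of that export step (not the only way to do it), assuming a very simple case: a single mesh part, VertexPositionNormalTexture vertices and a 16-bit triangle-list index buffer. Real models usually need more bookkeeping (multiple meshes, bone transforms, other vertex formats).

```csharp
using System.IO;
using Microsoft.Xna.Framework.Graphics;

static class MeshExporter
{
    // Runs inside the XNA project: dump one mesh part's vertex and index data
    // to a custom binary file that the Metro app can read back with a
    // BinaryReader in the same order.
    public static void ExportMesh(Model model, string path)
    {
        ModelMeshPart part = model.Meshes[0].MeshParts[0];

        var vertices = new VertexPositionNormalTexture[part.NumVertices];
        part.VertexBuffer.GetData(vertices);

        var indices = new ushort[part.PrimitiveCount * 3];
        part.IndexBuffer.GetData(indices);

        using (var writer = new BinaryWriter(File.Create(path)))
        {
            writer.Write(vertices.Length);
            writer.Write(indices.Length);

            foreach (var v in vertices)
            {
                writer.Write(v.Position.X); writer.Write(v.Position.Y); writer.Write(v.Position.Z);
                writer.Write(v.Normal.X);   writer.Write(v.Normal.Y);   writer.Write(v.Normal.Z);
                writer.Write(v.TextureCoordinate.X); writer.Write(v.TextureCoordinate.Y);
            }

            foreach (var index in indices)
                writer.Write(index);
        }
    }
}
```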
After all is said and done, it just means I don't have to learn the FBX SDK, which would be the standard (and arguably preferred) way of bringing in model info.
EDIT - sorry, didn't clue into the use of the mono framework. I don't know anything about that... Maybe they do have a way of importing xnb files.
Are you on the develop3D branch? I think you need to add a dummy Content project, add your assets to that project (rather than directly to your game project), build that project first, and then reference it from your MonoGame project.
I didn't succeed with that yesterday, as I'm using the current stable release, but one of the MonoGame guys told me to do just that. Here are some instructions:
https://github.com/mono/MonoGame/wiki/MonoGame-Content-Processing
In MonoGame the content pipeline is not present, at least as of now.
It's very easy.
1) Create an XNA project in VS2010.
2) Add all your assets there.
3) Rebuild your project, go to bin/x86/debug/content and copy everything from there.
Now go to your VS2012 MonoGame project's bin/x86/debug folder and look for a content folder; if it's there, copy all the content into it, otherwise create a content folder and copy the content into it.
Now, without changing even a single line of code, just run your project and you will find everything working great!
Let me know if you find any difficulty doing that.

Build config file into executable?

I am currently working on a little graphics demo (using DirectX) which is primarily based around an HLSL shader I am writing. Using D3DX10CreateEffectFromFile, I am loading (and compiling) the shader at runtime, as I find it easier for tweaking.
However, once I am done I'd like to do some combination of the following:
Pre-compile the shader so the demo starts up faster for the user
Bury (compile into the executable) the compiled shader (or maybe just the source if necessary)
Primarily, I want to do this because I want the demo to just be one file that can be very easily copied around.
One thing I could easily do is just put the source text right into a .cpp file, but that would be very tedious if I needed to update it later.
Is it possible to do something like this (using Visual Studio, DirectX, HLSL)?
As pointed out in that link, you can simply add it as a binary resource to the exe.
Personally, though, I'd go with something like having a big binary file. The start of the file has a table of contents: basically a shader ID and an offset. The offset then corresponds to where the compiled binary data starts. You can also put 4 bytes at the top of each compiled shader that say how long it is. Inserting a new shader can get troublesome, though, as it does require moving a fair whack of data around, but seeing as it's an offline process it's not really a problem.
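The layout itself is language-agnostic; as one possible sketch, an offline packer along those lines could look like this (the shader IDs and .fxo file names are placeholders, and the demo would read the table back at startup with plain file I/O in whatever language it uses).

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Packs pre-compiled shader blobs into a single file:
//   count
//   count * { shaderId, offset }      (offset measured from the start of the file)
//   per shader: { length, compiled bytes }
// All integers are 32-bit little-endian.
class ShaderPacker
{
    static void Main()
    {
        // Placeholder IDs and pre-compiled shader files.
        var shaders = new List<(int Id, string Path)>
        {
            (1, "phong.fxo"),
            (2, "blur.fxo"),
        };

        using var output = new BinaryWriter(File.Create("shaders.pak"));

        // Table of contents: entry count, then one (id, offset) pair per shader.
        output.Write(shaders.Count);
        int headerSize = sizeof(int) + shaders.Count * 2 * sizeof(int);

        var blobs = new List<byte[]>();
        int offset = headerSize;
        foreach (var (id, path) in shaders)
        {
            var bytes = File.ReadAllBytes(path);
            blobs.Add(bytes);
            output.Write(id);
            output.Write(offset);
            offset += sizeof(int) + bytes.Length;   // 4-byte length prefix + data
        }

        // Body: each compiled shader prefixed with its length.
        foreach (var bytes in blobs)
        {
            output.Write(bytes.Length);
            output.Write(bytes);
        }
    }
}
```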

Resources