Can a 3dsmax model be exported to include lighting effects? - xna

I created a really basic cylinder, added a material, and applied a glow effect.
Can I export the model so that it includes the glow effect and looks like the rendering?
http://imgur.com/VaNJLj4
Clarification:
Can I export the model to .fbx or .x and have it contain the lighting information, so that if I import it into Unity or XNA the model will look like the rendering?

"glowing" is really a post-process kind of effect. Actually a blur. There are quite a few tutorials on how to do this in XNA, but I doubt that you can easily export this from you modeling software (as in not possible at all).
The reason is that doing it usually requires setting up multiple rendertargets, custom shader, etc, which you will have to do yourself.
The reason you need multiple rendertargets;
When you render a model, only the pixels WITHIN the (visibly) outer vertices are processed by the pixel-shader. Hence, you can't render a smooth "fade-out" outside the model itself as would be the case in your picture.
What you usually do is you use a shader that renders your object normally, but also renders a "glow-color" to an other rendertarget.
When all models have finished rendering, you do a blur-effect on this second RT.
Then you blend your main RT with the blurred glow-RT.
This is VERY superficial, and I havent done it in AGES, so please DO check out some tutorials. Also, this bloom-sample basicly does the same thing, but on the entire scene, I think: http://xbox.create.msdn.com/en-US/education/catalog/sample/bloom

Add the glow in 3ds Max with filters; then it will render automatically.
A minor note: 3ds Max is a very big program with lots of possibilities, so take your time exploring it. Believe me, it will take time.

Related

Valid technique for scalable graphics on iOS?

A little background: I'm working on an iOS app that has a variety of status icons for various states. These icons are used in a variety of places and sizes, including as UITableViewCell imageViews, as custom MKMapAnnotations, and in a few other spots. I actually have a couple of sets, which include a more static status icon as well as ones that have dynamic text injected into the design.
So at first I went the conventional route of using static raster assets, but because the sizes were dynamic this wasn't always the best solution, and I wasn't thrilled with the quality of the scaling using CGAffineTransforms. So instead I changed gears a bit and tried something else:
Created a custom UIView subclass for each high-level class of icon. It takes as input the model object that it derives the status from (I suppose I could have also just used an enum and loaded this into some kind of model constructor, but this is how I did it) so it can decide what it needs to draw, then does the necessary drawing in drawRect. Since all of the drawing is based on the view bounds, it scales to any reasonable dimensions.
Created a Category which has class method constructors that take the model inputs as well as the size you want to use and construct the custom views.
Since I also wanted the option to have rasterized versions of these icons to plug into certain places (such as a UITableViewCell imageView), I also created constructors that build the view and return a UIImage using the fast iOS 7 snapshotting functions.
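For illustration, the core of the approach is roughly this (a simplified Swift sketch of the idea; the real implementation takes the model object rather than an enum, and the type and method names here are just placeholders):

    import UIKit

    // Hypothetical status type standing in for the real model-derived state.
    enum StatusKind {
        case ok, warning, error
    }

    // A resolution-independent icon: all geometry is derived from bounds,
    // so the same view renders cleanly at any size.
    final class StatusIconView: UIView {
        let status: StatusKind

        init(status: StatusKind, frame: CGRect) {
            self.status = status
            super.init(frame: frame)
            backgroundColor = .clear
            contentMode = .redraw   // redraw on resize instead of scaling the layer
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func draw(_ rect: CGRect) {
            // Everything is sized relative to bounds, never to absolute points.
            let inset = bounds.width * 0.1
            let circle = UIBezierPath(ovalIn: bounds.insetBy(dx: inset, dy: inset))
            fillColor(for: status).setFill()
            circle.fill()
        }

        private func fillColor(for status: StatusKind) -> UIColor {
            switch status {
            case .ok:      return .green
            case .warning: return .orange
            case .error:   return .red
            }
        }

        // Rasterized version for places like UITableViewCell.imageView,
        // produced on demand so the vector drawing stays the source of truth.
        func snapshotImage() -> UIImage? {
            UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
            defer { UIGraphicsEndImageContext() }
            _ = drawHierarchy(in: bounds, afterScreenUpdates: true)
            return UIGraphicsGetImageFromCurrentImageContext()
        }
    }

The snapshot path hands a plain UIImage to places like table view cells, while everything else can use the live view at whatever size it needs.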
So what does this give me? Well, here are the pros and cons that I can see.
Pros
Completely scalable graphics that can easily be used in a variety of different scenarios and contexts.
Easy compatibility with adding dynamic info to the graphics, such as text. Because I have the exact shape data for everything I'm drawing, I don't need to guesstimate the bounds for a text box, since I know how everything is laid out.
Compatibility with situations where I might want a rasterized asset, while I still get all the advantages of the dynamic view, since I'm not rasterizing it until I need it.
Reduces the size of the application since I don't need to include raster assets.
Cons
The workflow for creating the drawing code in the first place isn't ideal. For simple stuff I can do it straight in code, but for more complex things I'll need to create the vector asset in Illustrator or Sketch, then bring it into PaintCode and clean up the generated drawing code into something more streamlined. This is not an ideal process.
So the question is: does anyone have any better suggestions for how to deal with this sort of situation? I haven't found an enormous amount of material on techniques for this sort of thing, and I'm wondering if I'm missing a better way of handling it, or if there are any hidden gotchas here. Performance doesn't seem to be an issue in my testing of this approach, but I haven't tested it on the iPad 3 or iPhone 4 yet, so there could still be some unknowns.
You could try SVGKit, which draws SVG files and can export to a UIImage if desired.

How to make short film with WebGL

Is it possible to make a short film using WebGL? I see tons of examples of animating an object or trigger-based animation, but nothing like a film. I am new to this field.
WebGL is just a graphics library. You'll need an animation engine (or a game engine that has animation built in), and you'll need an authoring program to make the animation.
You might try babylon.js.
Theoretically you could make an animation in Blender, 3ds Max, or Maya, export to FBX, and import it through the converters included in the engine. I suspect it's not set up to handle whole 3D scenes as is, though.
Three.js might do it as well, but I suspect it also doesn't handle full scenes directly out of the 3D program.
I suggest you start small. Make a simple animated scene using a few primitives and see if you can export it into one of those libraries.
Inka3D, which is a Maya-to-WebGL exporter, has been used to create so-called demos, which are close to short films. These are called "Azathioprine", "Radiotherapy", and "70s". You can simply use Maya as usual, with some limitations, and make your short film. See www.inka3d.com for links to the demos.

Composed animations, sprites in iOS

Let's say I want to display a customizable (2D, cartoon-like) character, where some properties, e.g. eye color, hair style, clothing, etc., can be chosen from a predefined set of options. Now I want to animate the character. What's the best way to deal with the customization?
1) For example, I could make a sprite sheet for each combination of properties. That's not very memory-efficient and not very flexible, but it probably gives the best performance.
2) I could compose the character from various layers, where each property only affects one layer. Thus, I could make a sprite-sheet for the body, a collection of sprite-sheets for the eyes (one for each eye color) etc.
2a) In that case, I could merge the selected sprite-sheets in order to generate a single sprite-sheet containing the animation of the customized character.
2b) Alternatively, I could keep the sprite-sheets separate and try to animate them simultaneously as layers (a rough sketch of this follows after these options). I fear that this might become a problem performance-wise.
3) I could try to modify the layers programmatically, e.g. use a sprite-sheet for the eyes as a mask and map some texture onto it before merging it down to a single sprite-sheet. I would think this is a very flexible approach when it comes to simple properties like eye color, but it might become difficult for things like hairstyle. I am aware that this depends a lot on the character, and that a general answer is probably difficult.
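To make option 2b concrete, here is a rough sketch of the layered idea, written with SpriteKit purely for illustration (I'm not committed to any particular framework, and the atlas names are invented):

    import SpriteKit

    // Hypothetical customization choices; in practice these would come from
    // the character's model. The atlas names are placeholders.
    struct CharacterAppearance {
        var bodyAtlas = "body_default"
        var eyesAtlas = "eyes_blue"
        var hairAtlas = "hair_short"
    }

    // Option 2b: the character is a parent node with one child sprite per
    // customizable property. Each layer runs its own frame animation, so
    // changing e.g. the eye color only means swapping one child's textures.
    final class CharacterNode: SKNode {
        init(appearance: CharacterAppearance) {
            super.init()
            addLayer(atlasNamed: appearance.bodyAtlas, zPosition: 0)
            addLayer(atlasNamed: appearance.eyesAtlas, zPosition: 1)
            addLayer(atlasNamed: appearance.hairAtlas, zPosition: 2)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        private func addLayer(atlasNamed name: String, zPosition: CGFloat) {
            let atlas = SKTextureAtlas(named: name)
            let frames = atlas.textureNames.sorted().map { atlas.textureNamed($0) }
            guard let firstFrame = frames.first else { return }

            let layer = SKSpriteNode(texture: firstFrame)
            layer.zPosition = zPosition
            // All layers use the same frame count and timing so they stay in sync.
            let animation = SKAction.repeatForever(
                SKAction.animate(with: frames, timePerFrame: 1.0 / 12.0))
            layer.run(animation)
            addChild(layer)
        }
    }

Swapping a property then just means replacing one layer's textures instead of regenerating a full sprite-sheet for the whole character.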
I assume that my problem is not new, so there is probably a standard approach to it.
Concerning the platform, I'm particularly interested in iOS and would like to avoid OpenGL (well, I'm open-minded). Maybe there is a nice framework that can help me here?
Thanks!
Depending on what you're working on, you might want to create part or all of the animations in another tool, such as Flash. It is much easier to work in a visual environment.
Then there are tools that take SWF files and create sprite sheets that you would then animate in cocos2d.
That is a common game creation workflow.
You probably want to take a look at how to create sprites in cocos2d.
Cocos2d comes with a set of tools that help you animate single parts, and it offers abstractions for composing parts (like CCBatchNode or CCNode). There are also tools that help you pack sprites into sprite sheets (e.g. Texture Packer) and develop levels (e.g. Level Helper).
Cocos2d is an open-source framework and is widely used. There is also cocos3d, but I have never used it :).

CGPathRef and PDF

Is there a way to draw a complex shape with an application like CorelDraw or Adobe Flash, save it or export it as a PDF, and then open it with Core Graphics in iOS?
The idea is to draw a shape, as a vector, with CorelDraw for example, so that it is just the path, with no color or fill. Then I would open it directly with Core Graphics, add it as a CGPath to the context, and be able to manipulate it: fill it with a solid color, a gradient, or a pattern.
The bottom line is, I am looking for a way to draw a complex shape in a user-friendly environment, like Corel or Flash, and export it as a vector that can be manipulated in Core Graphics. Any suggestions or help are really appreciated.
Thanks.
SVGKit doesn't work exactly the way I need either, although I should say it is nicely done. There are also other resources that I found, and I'll leave them here for future reference, in case anyone stops by this post looking for a solution.
Converting SVG Paths to Objective-C Paths: Good for simple paths; strokes and fills can be manipulated later by using protocols. Complex paths get mixed up.
SVGKit: Good for creating images and animating them later through the course of the program. However, strokes, fills, and paths cannot be manipulated.
Opacity: You can export as source code, so you have more control over strokes, paths, and fills. As the path gets more complex, it is harder to manage the code manually. The other problem is that at export time the program adds resolution-dependent code. It can be a pain to go through 300+ lines of code and fix it so that it is not resolution-dependent. But the final product wouldn't get mixed up and can be manipulated via protocols. Layers are CGLayers, not CALayers.
If, as you say, you've got PDF files (from Corel, or another app), you can display them using CoreGraphics.
Take a look at the:
CGPDFDocument class
CGPDFPage class
Then there is the CGContextDrawPDFPage function, which you can use to draw a PDF page into a given graphics context, typically in the drawLayer:inContext: method of a UIView subclass.
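For example, a minimal sketch of the drawing side might look like this (Swift, assuming a one-page PDF named "shape.pdf" in the app bundle; draw(_:) is used here instead of drawLayer:inContext: for brevity):

    import UIKit

    // Draws the first page of a bundled PDF (assumed name: "shape.pdf")
    // into the view, scaled to fit the view's bounds.
    final class PDFShapeView: UIView {
        override func draw(_ rect: CGRect) {
            guard
                let context = UIGraphicsGetCurrentContext(),
                let url = Bundle.main.url(forResource: "shape", withExtension: "pdf"),
                let document = CGPDFDocument(url as CFURL),
                let page = document.page(at: 1)   // CGPDFDocument pages are 1-based
            else { return }

            context.saveGState()

            // PDF coordinates are flipped relative to UIKit, so flip the context.
            context.translateBy(x: 0, y: bounds.height)
            context.scaleBy(x: 1, y: -1)

            // Scale the page's media box to fit this view's bounds.
            let pageRect = page.getBoxRect(.mediaBox)
            let scale = min(bounds.width / pageRect.width,
                            bounds.height / pageRect.height)
            context.scaleBy(x: scale, y: scale)

            context.drawPDFPage(page)
            context.restoreGState()
        }
    }

Note that this draws the page as a whole; Core Graphics does not directly expose the individual paths inside the PDF as CGPaths, which is the limitation the other suggestions work around.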
There isn't really a built-in way to load CGPaths from files, but you might want to take a look at SVGKit. Pretty much every modern vector drawing app can produce SVG files.

Problems with some textures in XNA

I have made a model using SketchUp, and have tested rendering it using Blender, and it looks great.
However, loading it in XNA has two problems:
1. One of the textures becomes see-through. It is not entirely transparent, but items behind it, on the inside of the model, are visible (this is not the case in Blender).
2. I have a rounded part on the model that is divided into smaller parts, and the texture gets out of sync (the positioning is all wrong).
I have tested exporting the model to 3ds and then using Blender to save it as FBX (to eliminate any problems with SketchUp).
I have also tried using Autodesk's FBX Converter; same problems =(
I'm using myModel.Draw(World, View, Projection); to render the model.
Any suggestions?
/Jimmy
1) Sounds like a backface culling issue; try this:
device.RenderState.CullMode = CullMode.None; (try the CW and CCW variants)
Also make sure the depth buffer is enabled.
2) This may be a similar problem to an issue I had with Blender. When you copy the bones, try gModel.CopyBoneTransformsTo(transforms); as well as gModel.CopyAbsoluteBoneTransformsTo(transforms);
