Drawing a 2D HUD messes up rendering of my 3D models? - xna

I'm using XNA 3.1.
I have recently created a 2D heads-up display (HUD) by adding a component to my game with Components.Add(myComponent). The HUD looks fine, showing a 2D map, crosshairs and a framerate counter. The thing is, whenever the HUD is on screen, the 3D objects in the game no longer draw correctly.
Objects farther from my player are sometimes drawn on top of closer ones, and models lose definition when I walk past them. When I remove the HUD, everything draws normally.
Are there any known issues regarding this that I should be aware of? How should I draw a 2D HUD over my 3D game area?
This is what it looks like without a GameComponent:
And here's how it looks with a GameComponent (in this case it's just some text offscreen in the upper-left corner that shows framerate); notice how the tree in the back appears in front of the tree closer to the camera:

You have to enable the depth buffer:
// XNA 3.1: turn depth testing and depth writes back on
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// XNA 4.0: DepthStencilState.Default enables both depth test and depth writes
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
SpriteBatch.Begin alters the state of the graphics pipeline:
SpriteBatch render states for XNA 3.1
SpriteBatch render states for XNA 4.0
In both versions depth buffering is disabled; that is what causes the issue.
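For reference, here is roughly what the parameterless SpriteBatch.Begin() leaves behind in XNA 4.0 (the 3.1 version sets the equivalent individual RenderState values):
// States set by SpriteBatch.Begin() in XNA 4.0 (default overload):
GraphicsDevice.BlendState = BlendState.AlphaBlend;
GraphicsDevice.DepthStencilState = DepthStencilState.None;    // depth test and writes off
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;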
Again, I cannot stress this enough:
Always make sure that ALL render states are correctly set before drawing any geometry.
BlendState
DepthStencilState
RasterizerState
Viewports
RenderTargets
Shaders
SamplerStates
Textures
Constants
Educate yourself on the purpose of each state and each stage in the rendering pipeline. If in doubt, try resetting everything to default.
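As a minimal sketch in XNA 4.0 terms (DrawMyModels and DrawMyHud are hypothetical placeholders, and spriteBatch is the usual field from the game template; in 3.1 you would reset the corresponding GraphicsDevice.RenderState values instead):
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    // Restore the states SpriteBatch changed before drawing 3D geometry.
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
    GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

    DrawMyModels();          // hypothetical 3D drawing

    // Draw the HUD last; SpriteBatch.Begin sets its own states again.
    spriteBatch.Begin();
    DrawMyHud(spriteBatch);  // hypothetical 2D drawing
    spriteBatch.End();

    base.Draw(gameTime);
}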

Related

OpenGL: far objects appearing on top of near objects

I'm trying to put together a quick demo using iOS GLKit to render a retail store map with OpenGL, starting from the source CAD files. I was able to render the walls and aisles in 2D, then programmatically add some artificial depth to create a series of cubes. All of this looks fine when looking top down, but I noticed that when I turned on the floor (with a z-value well below the aisles and walls), some of those objects are actually rendered under the floor:
...however if you rotate the model you can see that nothing is actually below the floor and some of the aisles are rendering outside of the wall:
You can view the code at StoreMapGLKitViewController.m, it all seems pretty simple to me, but I'm sure I'm making some kind of OpenGL rookie mistake.
So when you are messing with the Z values and z = 0 for everything, I'd imagine you'd still be able to see some of your walls and aisles, but they would also hang out the bottom a bit. As long as you don't care about that (it's a demo, right?), that should be fine for now, I would think.
Turns out the depth buffer wasn't set up correctly, so the depth test wasn't doing anything. Adding the code below fixed it.
GLKView *view = (GLKView *)self.view;
// Request a 24-bit depth buffer for the view's drawable (there is none by default).
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
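Note that the depth buffer only helps if depth testing is enabled and the buffer is cleared every frame; a minimal sketch using the methods from the standard GLKViewController template:
- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    glEnable(GL_DEPTH_TEST);  // without this the depth buffer is never consulted
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear depth along with color
    // ... draw the walls, aisles and floor ...
}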

SceneKit - How to use different blend modes - e.g. additive blending? [duplicate]

I can't see an obvious way to change the blending function (glBlendFunc) for a SceneKit node or geometry; it doesn't seem to be part of the material, and it isn't very obvious from the SceneKit documentation how it organises render passes.
Do I need to make a render delegate for the node which just changes the GL blending mode, or do I need to somehow set up different render passes? (It's not obvious from the documentation how I even control things like render passes.)
Will It Blend? - SceneKit
Yes! In iOS 9 and OS X 10.11 (currently in beta), blendMode is an attribute on materials, so you can render any SceneKit content with additive, multiplicative, or other kinds of blending.
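For example, in Objective-C (blendMode and SCNBlendModeAdd are the actual SceneKit API; beamNode is a placeholder for your own node):
beamNode.geometry.firstMaterial.blendMode = SCNBlendModeAdd;  // additive blending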
But while you're still supporting earlier OS versions... SceneKit in iOS 8.x and OS X 10.8 through 10.10 doesn't offer an API for blend modes.
There are a couple of options you can look at for working around this.
1. Set the GL state yourself
If you call glBlendFunc and friends before SceneKit draws, SceneKit will render using the blend state you've selected. The trick is setting the state at an appropriate time for drawing your blended content and leaving the state as SceneKit expects for un-blended content.
If you set your GL state in renderer:willRenderScene:atTime: and unset it in renderer:didRenderScene:atTime:, you'll apply blending to the entire scene. Probably not what you want. And you can't use a node renderer delegate for only the node you want blended because then SceneKit won't render your node content.
If you can find a good way to wedge those calls in, though, they should work. Try rendering the related nodes with a custom program and setting your state in handleBindingOfSymbol:usingBlock:, maybe?
2. Use Programmable Blending (iOS only)
The graphics hardware in iOS devices supports reading the color value of a destination fragment in the shader. You can combine this value with the color you intend to write in any number of ways — for example, you can create Photoshop-style blend modes.
In SceneKit, you can use this with a fragment shader modifier: read from gl_LastFragData and write to _output. The example below uses that to do a simple additive blend.
#pragma transparent
#extension GL_EXT_shader_framebuffer_fetch : require
#pragma body
// gl_LastFragData[0] is the color already in the framebuffer (framebuffer fetch).
_output.color = gl_LastFragData[0] + _output.color;
From what I can tell after several hours of experimenting, there is no way to actually set the blend mode used to render a piece of geometry, or to control the overall blend mode used to render a pass using SCNTechnique.
SceneKit appears to have only two blending modes: blending off when it considers the material opaque, and a "transparent" blending mode (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) when it considers a material transparent. This is bad news if you want to render things like glows, because there doesn't seem to be any way to get the (GL_ONE, GL_ONE) blend mode you'd want for rendering light beams or glows.
However, I've found a hack to get around this which doesn't give you proper control over blending, but which works if you're wanting to render glowing things like light beams:
Because SceneKit uses the (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blend mode, all you should have to do is render your geometry with an alpha channel of 0. Unfortunately, it's not that simple, because the default SceneKit shader discards fragments with an alpha channel of 0, so nothing would actually get rendered. A quick-and-dirty workaround is to use a diffuse colour map with an alpha channel of 1 (out of 255, assuming an 8-bit-per-channel map). Because the alpha channel is nearly zero, pretty much all of the background image will show through. This mostly works, but because the alpha isn't quite zero it still produces noticeable artefacts in bright areas.
So to work around this problem you can just use a standard texture map with a solid alpha channel, but attach a shader modifier at the SCNShaderModifierEntryPointFragment entry point which simply sets the alpha channel of the output colour to zero. This works because fragment shader modifiers run after the zero-alpha culling.
Here's that shader modifier in its entirety:
#pragma transparent
#pragma body
// With (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blending, alpha 0 leaves the destination
// intact, so the source colour is simply added on top.
_output.color.a = 0.0;
Note the #pragma transparent declaration on the first line; it's necessary to force SceneKit to use its transparent blending mode even when it otherwise wouldn't.
This is not a complete solution, because it's not real control over blending; it's only a useful hack for producing light-beam glows and the like, and the shading process certainly isn't as optimal as it could be, but it works well for this case.
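For completeness, attaching that modifier from Objective-C looks something like this (shaderModifiers and SCNShaderModifierEntryPointFragment are the real SceneKit API; the string simply repeats the modifier above):
material.shaderModifiers = @{ SCNShaderModifierEntryPointFragment :
    @"#pragma transparent\n"
    @"#pragma body\n"
    @"_output.color.a = 0.0;" };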

OpenGL ES IOS Texture 2D drawing upside down

I am writing a small 2D game for iOS using OpenGL, as a way to get comfortable with it. I am using the Texture2D class from the CrashLanding demo to generate text for the score. When the text is drawn, it is upside down. There are comments in the code about the texture being loaded upside down, but I cannot find a way to render it the correct way. Any help would be greatly appreciated.
OpenGL and your image-loading code do not agree on where the origin is. OpenGL puts it in the lower-left corner, while your picture starts in the upper-left corner. You can apply a transform to the picture in your app, like the CrashLanding demo does. Or, even simpler, pre-flip the image in an editing program such as Photoshop; this works if the image will only ever be used as an OpenGL texture in this app. If you need to display the same picture elsewhere, you'll need to keep a non-flipped version, or figure out how to apply the transform.
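If you would rather fix it at render time, flipping the T texture coordinates on the quad also works. A minimal sketch for the fixed-function OpenGL ES 1.1 pipeline that CrashLanding uses (the quad layout is an assumption about your geometry):
// Vertically flipped texture coordinates: t runs from 1 to 0 instead of 0 to 1.
static const GLfloat texCoords[] = {
    0.0f, 1.0f,   // bottom-left of quad samples top-left of texture
    1.0f, 1.0f,   // bottom-right samples top-right
    0.0f, 0.0f,   // top-left samples bottom-left
    1.0f, 0.0f,   // top-right samples bottom-right
};
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);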

iOS : Creating a 3D Compass

I want to make a 3D metal compass in iOS which will have a movable cover.
That is, when you touch it with three fingers and move them upward, the cover moves with your fingers, and after a certain distance it opens. Once you pull it down with three fingers again, it closes. I have attached a sketch of what I'm thinking.
Is it possible using core animations and CALayers? Or would I have to use OpenGL ES?
First you should obviously create a textured 3D model in an app like 3ds Max or Maya, then export it to some suitable format. The simplest is OBJ (there are lots of examples of how to load it). There are two options for the animation:
Do the animation manually by rotating the cover object. It's probably the easiest way.
Create the animation in your 3D editor and then interpolate between frames. This way you can get a much more realistic result. However, in this case the OBJ format is not suitable, but COLLADA is; to load it I suggest using the Assimp library.
And if you don't need any advanced interaction, another option is pseudo-3D: just pre-render all the compass animation frames and play them back as a 2D texture animation.

How do I go about implementing sprite masking?

Using DirectX I'm rendering textured polygons (orthographically) so they act as HUD sprites. Now I'm not sure how I would go about implementing sprite masking in this system.
So basically, say I have a sprite: how can I make it render only in a given portion of the screen which I define, so that if part of it moves outside that portion you don't see it?
Scissor Test.
http://msdn.microsoft.com/en-us/library/ee422196(VS.85).aspx
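A minimal sketch in Direct3D 9 terms (assuming an IDirect3DDevice9 pointer named device; the rectangle values are placeholders):
// Only pixels inside this rect are written; anything outside is clipped away.
RECT mask = { 100, 50, 300, 150 };   // left, top, right, bottom
device->SetScissorRect(&mask);
device->SetRenderState(D3DRS_SCISSORTESTENABLE, TRUE);

// ... draw the HUD sprite here ...

device->SetRenderState(D3DRS_SCISSORTESTENABLE, FALSE);  // restore for normal drawing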
You're looking for what is called a viewport. Considering you did not specify which DirectX version and which language you're using, I'll have to point you to the DirectX 9 spec.
