I'm trying to put together a quick demo using iOS GLKit to render a retail store map in OpenGL from the source CAD files. I was able to render the walls and aisles in 2D, then programmatically add some artificial depth to create a series of cubes. All of this looks fine when viewed top down, but I noticed that when I turned on the floor (with a z-value that is well below the aisles and walls), some of those objects are actually rendered under the floor:
...however, if you rotate the model, you can see that nothing is actually below the floor and that some of the aisles are rendering outside of the wall:
You can view the code in StoreMapGLKitViewController.m. It all seems pretty simple to me, but I'm sure I'm making some kind of OpenGL rookie mistake.
So when you are messing with the Z values, and z = 0 for all the things, I'd imagine you'd still be able to see some of your walls and aisles, but they would also hang out the bottom a bit. As long as you don't care about that (it's a demo, right?) then that should be fine for now, I would think.
Turns out the depth buffer wasn't set up correctly, so the depth test wasn't doing anything. Adding the code below fixed it.
GLKView *view = (GLKView *)self.view;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
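The drawableDepthFormat only allocates the buffer, though. Depth testing also has to be enabled and the depth buffer cleared every frame; a minimal sketch, assuming a setupGL method and the usual glkView:drawInRect: override:

// in setupGL, after the EAGLContext is made current
glEnable(GL_DEPTH_TEST);

// in glkView:drawInRect:, clear depth as well as color each frame
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);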
Although I am quite experienced with most frameworks in iOS, I have no clue when it comes to 3D modelling. I even worked with SpriteKit, but never with something like SceneKit.
Now a customer wants a very ambitious menu involving a 3D object, an 'icosahedron' to be exact. I want it to look something like this:
So I just want to draw the lines, and grey out the 'see-through' lines on the back. Eventually I want the user to be able to freely rotate the object in 3D.
I already found this question with an example project attached, but this just draws a simple cube: Stroke Width with a SceneKit line primitive type
I have no clue how to approach a more complex shape.
Any help in the right direction would be appreciated! I don't even need to use SceneKit, but it seemed like the best approach to me. Any other suggestions are welcome.
To build an icosahedron, you can use SCNSphere and set its geodesic property to YES.
Using shader modifiers to draw the wireframe (as described in Stroke Width with a SceneKit line primitive type) is a good idea.
But in your case a line is not always plain or always dotted; it depends on the orientation of the icosahedron. To solve that, you can rely on gl_FrontFacing to determine whether the edge belongs to a front-facing or a back-facing triangle.
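A minimal sketch of the geodesic geometry setup (the radius, segment count, and node wiring here are illustrative assumptions, not values from a real project):

// a geodesic SCNSphere is built from subdivided icosahedron faces
SCNSphere *icosahedron = [SCNSphere sphereWithRadius:1.0];
icosahedron.geodesic = YES;
icosahedron.segmentCount = 1; // assumption: a low segment count keeps the mesh close to a plain icosahedron
SCNNode *node = [SCNNode nodeWithGeometry:icosahedron];
[scene.rootNode addChildNode:node];

The shader-modifier wireframe from the linked question can then be attached to this geometry's material.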
Am I the only one who's having issues with the new bodyWithTexture function of SKPhysicsBody?
I'm new to iOS development and maybe it's me, but I'm trying to create a game where I need to detect if a ball is inside a path.
I'm loading both from images dynamically (as the levels proceed, the paths get more and more complex), and I'm setting a physics body on both the ball (bodyWithCircle) and the dynamic path, which is a PNG file of a path with all the rest as transparent background. I'm using the new bodyWithTexture function (yes, I know it's supported only under iOS 8), and after assigning bit masks I've defined a contact between the ball and the path, so I'm informed of begin/end contact.
SKSpriteNode *lvlPath = [SKSpriteNode spriteNodeWithImageNamed:currentLevel.imagePath];
lvlPath.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2);
lvlPath.physicsBody = [SKPhysicsBody bodyWithTexture:lvlPath.texture size:lvlPath.frame.size];
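For reference, the ball side of the setup looks roughly like this (the category constants, image name, and radius here are placeholders, not taken from the actual project):

static const uint32_t ballCategory = 0x1 << 0; // hypothetical bit masks for illustration
static const uint32_t pathCategory = 0x1 << 1;

SKSpriteNode *ball = [SKSpriteNode spriteNodeWithImageNamed:@"ball"];
ball.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:ball.size.width / 2];
ball.physicsBody.categoryBitMask = ballCategory;
ball.physicsBody.contactTestBitMask = pathCategory;

lvlPath.physicsBody.categoryBitMask = pathCategory;
lvlPath.physicsBody.contactTestBitMask = ballCategory;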
Now, for simple paths like a straight line, it works great. Once it comes to complicated paths, the mechanism goes crazy, at least in my simulator (running iOS 8).
I've created another simple app just to check this issue, and saw that it goes crazy when the path is a complex shape. When the ball enters the path in one direction it seems to work (begin/end contact), but going in the reverse direction it suddenly acts weird: while still inside the path it reports that contact has ended, and then it seemingly randomly flips between begin and end contact.
Help... since the levels are loaded dynamically, this is a really cool feature for me, saving me from defining every level as a CGPathRef and creating a polygon for each level (and perhaps for each device).
thanks all,
Eyal
edit
example screenshot:
https://www.dropbox.com/s/e8v9g1kajtvakfq/screenshot%20bodywithtexture.jpg?dl=0
In this example the ball with the arrow is inited using bodyWithCircle, and the C-shaped object is inited using bodyWithTexture. I'm debug printing "didBeginContact" and "didEndContact", and it freaks out in the top line there: you can see it says "didEndContact" while the two objects are definitely in contact. If I jiggle it (I'm moving it with the cursor) it suddenly flips to "didBeginContact".
With simpler objects (like horizontal/vertical lines with round corners) it works perfectly.
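The debug printing mentioned above is just the standard SKPhysicsContactDelegate callbacks, roughly like this (assuming the scene is registered as physicsWorld.contactDelegate):

- (void)didBeginContact:(SKPhysicsContact *)contact {
    NSLog(@"didBeginContact: %@ <-> %@", contact.bodyA.node.name, contact.bodyB.node.name);
}

- (void)didEndContact:(SKPhysicsContact *)contact {
    NSLog(@"didEndContact: %@ <-> %@", contact.bodyA.node.name, contact.bodyB.node.name);
}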
I'm using Ray Wenderlich's tutorials to make a simple OpenGL ES 2 app using GLKit, and I've come across some problems.
I changed the sample code to display two cubes by adding vertex and index data to the existing vertex and index data structs. It works, and draws two cubes to the screen.
The problem is that when the new cube is behind the old one, it shows through. However, when the old cube is behind the new one, it doesn't show through.
Perhaps my depth testing is messed up?
I can't post images because of my reputation :(
Here's a link to the source code though:
https://www.dropbox.com/s/4xrq3gmnmbcz02m/EthanGillCubeSnap.zip
Any help is much appreciated!
On line 279 of HelloGLKitViewController.m I added the line below and it rendered correctly:
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
You need to make sure to set the depth buffer format on your GLKView, or else no depth buffer will be created, which is what was happening to you before.
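In the tutorial's structure that line belongs in viewDidLoad, right after the GLKView gets its context; a sketch based on the standard GLKit boilerplate rather than the exact file:

- (void)viewDidLoad {
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24; // without this, no depth buffer is allocated
    [EAGLContext setCurrentContext:self.context];

    // ... create shaders/buffers and glEnable(GL_DEPTH_TEST) as in the tutorial
}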
I'm using XNA 3.1
I have recently created a 2D heads-up display (HUD) by adding a component to my game with Components.Add(myComponent). The HUD looks fine, showing a 2D map, crosshairs and a framerate counter. The thing is, whenever the HUD is on screen, the 3D objects in the game no longer draw correctly.
Something farther from my player might get drawn in front of something closer, and the models sometimes lose definition when I walk past them. When I remove the HUD, everything is drawn normally.
Are there any known issues regarding this that I should be aware of? How should I draw a 2D HUD over my 3D game area?
This is what it looks like without a GameComponent:
And here's how it looks with a GameComponent (in this case it's just some text offscreen in the upper left corner that shows the framerate). Notice how the tree in the back appears in front of the tree closer to the camera:
You have to enable the depth buffer:
// XNA 3.1
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// XNA 4.0
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
SpriteBatch.Begin alters the state of the graphics pipeline:
SpriteBatch render states for XNA 3.1
SpriteBatch render states for XNA 4.0
In both versions depth buffering is disabled, which is what causes the issue.
Again, I cannot stress this enough:
Always make sure that ALL render states are correctly set before drawing any geometry.
BlendState
DepthStencilState
RasterizerState
Viewports
RenderTargets
Shaders
SamplerStates
Textures
Constants
Educate yourself on the purpose of each state and each stage in the rendering pipeline. If in doubt, try resetting everything to default.
We are working on a Three.js based WebGL project and have trouble understanding how transparency is handled in WebGL. The image shows a double-sided surface drawn with alpha = 0.7, which behaves correctly on its right side. However, closer to the middle, strange artifacts appear, and on the left side the transparency does not seem to work at all.
http://emilaxelsson.se/sandbox/vis1/alpha.png
The problem can also be seen here:
http://emilaxelsson.se/sandbox/vis1/
Has anyone seen anything similar before? What could the reason be?
Your problem is that transparent objects need to be sorted and rendered in back-to-front order (if you change the opacity of your mesh from 0.7 (transparent) to 1.0 (opaque), you can see that the z-buffer works just fine).
See:
http://www.opengl.org/wiki/Transparency_Sorting
http://www.opengl.org/archives/resources/faq/technical/transparency.htm (15.050)
In your case it might be less trivial to solve, since I assume that you only have one mesh.
Edit: Just to summarize the discussion below: it is possible to achieve correct rendering of such a double-sided transparent mesh. To do this, you need to create 6 versions of the mesh, corresponding to the 6 sides of a cube. Each version needs to be sorted in back-to-front order based on its side of the cube (front, back, left, right, top, bottom).
When rendering, choose the correct mesh (based on the camera viewing direction) and render only that single mesh.
The easy solution for your case (based on the picture you attached), without going into expensive sorting and multiple meshes, is to disable the depth test and enable face culling. That produces acceptable results if you do not have any opaque objects in front of the mesh.