Greetings each and all.
I've been struggling with OpenGL ES 2.0 and a particular problem for the last few days now. I'm looking to implement a Geometry Wars clone, for the iPhone, for fun and to learn this technology. My background in 3D programming is fairly good, although mainly concentrated around vector mathematics rather than draw calls towards the graphics API, as I've been working with DirectX on and off for the last couple of years. The problem, however, is that I've mainly been working with big meshes, loading, translating and transforming them in several ways, and now I find myself in a position where I want to handle small meshes, and lots of them.
The objects are triangles, rectangles, hexagons etc. and I want the ability to modify them all separately (e.g. making one edge wavy or pulsating). When I've worked with multiple big meshes I've made separate draw calls for them, easily attaching shaders and their respective parameters, but in this case I would like to render it all in one call, and that's where my knowledge fails me.
So, to clarify my question: how do you modify small meshes, preferably stored in one vertex array, individually, and render them all at once using shaders with OpenGL ES 2.0?
Although code examples are more than welcome, a "simple" explanation would be enough to get me started. I assume I'm missing something trivial here and any help is greatly appreciated.
Thanks in advance,
Karl
Sounds like instancing (and instanced arrays) could be the answer to your problem, although it isn't part of core OpenGL ES 2.0, so on iOS you would need the relevant extensions (ES 3.0 has it built in). This way you can render many copies of the same geometry with per-instance data (like a specific texture index or sub-texture, or shader parameters). But of course, you cannot render different objects with completely different shaders in one draw call.
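For illustration, here is a rough sketch of what instanced rendering might look like, assuming ES 3.0 (on ES 2.0 the same calls exist with an EXT suffix when the EXT_instanced_arrays/EXT_draw_instanced extensions are available). The attribute locations would come from glGetAttribLocation on your program; the InstanceData layout is made up for this example.

#import <OpenGLES/ES3/gl.h>

typedef struct { GLfloat x, y; GLfloat scale; GLfloat colorIndex; } InstanceData;

void drawManyHexagons(GLuint hexagonVBO, GLsizei vertexCount,
                      GLuint positionAttrib, GLuint instanceAttrib,
                      const InstanceData *instances, GLsizei instanceCount)
{
    // Shared geometry: one hexagon, uploaded once into hexagonVBO.
    glBindBuffer(GL_ARRAY_BUFFER, hexagonVBO);
    glEnableVertexAttribArray(positionAttrib);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);

    // Per-instance data: one InstanceData per hexagon.
    GLuint instanceVBO;
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    glBufferData(GL_ARRAY_BUFFER, instanceCount * sizeof(InstanceData),
                 instances, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(instanceAttrib);
    glVertexAttribPointer(instanceAttrib, 4, GL_FLOAT, GL_FALSE,
                          sizeof(InstanceData), 0);
    // Advance this attribute once per instance instead of once per vertex.
    glVertexAttribDivisor(instanceAttrib, 1);

    // One draw call renders every hexagon; the vertex shader reads the
    // per-instance attribute to position/scale/color each copy.
    glDrawArraysInstanced(GL_TRIANGLE_FAN, 0, vertexCount, instanceCount);

    glVertexAttribDivisor(instanceAttrib, 0);
    glDeleteBuffers(1, &instanceVBO);
}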
Otherwise, the much simpler (and probably less optimized) glMultiDrawArrays/Elements renders multiple completely different geometries in one call (note that it isn't in core ES 2.0 either, only via the EXT_multi_draw_arrays extension), but you cannot tell which triangle belongs to which object in the shader, and I also doubt that it gives much of a performance boost.
For SCNFloor, if reflectivity is set to 1 and reflectionFalloffEnd is big enough, it will act like a mirror.
My question is: how do I apply this to other geometries (say, a plane or a box)? I want to have a mirror in my game.
I have done quite a bit of research on how to make reflections using SceneKit.
Here are the different leads I found (sadly, they will all need a serious amount of code and research):
Screen-Space reflections
Pros:
Cheap
Easy to make
Cons:
Doesn't always look great
I'm not sure how to output a normal pass with SCNTechnique
Parallax-mapped cubemaps
Pros:
Cheap
Looks amazing
Cons:
No real time objects unless using an image proxy
No good code sample online, will need research
Not quite sure how to use it with SCNProgram
Two cameras + Stencil
Pros:
Realistic
Real time
Almost built in
Cons:
No documentation of the pointOfView of SCNTechnique
No documentation on Stencils
Needs to render the scene twice
OpenGL mirror
Pros:
Actually duplicates the geometry, so very accurate
This is the technique used by SCNFloor (I think)
Cons:
Geometry can clip with the mirror plane (happens with SCNFloor)
Unusable on anything other than a plane
Needs OpenGL Code
4 Cameras linked to a Cubemap
Pros:
Easy to set up
Real time
Works on any object
Very popular technique in modern Video Games
Cons:
I have no idea if this would work
Will need to render the scene 5 times for a single mirror
Not very accurate depending on object
My conclusion is that we need more help on using SCNTechnique. We could build amazing things with it but the lack of documentation and examples is a big problem.
If you could specify what kind of mirror you have in mind, I'll be happy to help you choose the best way to go.
I know this is an old question, but I wanted to share what I have done. I created a gist on GitHub that contains the code and explains how it works.
It basically attaches six cameras to a node and automatically creates a cubemap that is then used as the reflective property of the object. The main downside is that it won't work with physically based materials, but in order to simulate roughness, it blurs the cubemap according to the roughness value you set. It works well in real time, and you can set how often the cubemaps update so that you don't affect the framerate of your game too much. It can also handle many different reflective objects, and it automatically stops updating nodes that you can't see.
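For reference, SceneKit material properties accept a cube map given as an array of six images, so once you have captured (or pre-rendered) the six faces you can wire them into any geometry's reflective slot. A rough Objective-C sketch, with placeholder image names:

#import <SceneKit/SceneKit.h>
#import <UIKit/UIKit.h>

// Rough sketch: SceneKit accepts a cube map as an array of six images
// (+X, -X, +Y, -Y, +Z, -Z). The image names are placeholders; with the
// six-camera approach above they would be the captured face images.
SCNNode *makeMirrorBox(void)
{
    SCNMaterial *mirror = [SCNMaterial material];
    mirror.reflective.contents = @[ [UIImage imageNamed:@"px"], [UIImage imageNamed:@"nx"],
                                    [UIImage imageNamed:@"py"], [UIImage imageNamed:@"ny"],
                                    [UIImage imageNamed:@"pz"], [UIImage imageNamed:@"nz"] ];
    mirror.reflective.intensity = 1.0;

    SCNBox *box = [SCNBox boxWithWidth:1 height:1 length:1 chamferRadius:0];
    box.firstMaterial = mirror;
    return [SCNNode nodeWithGeometry:box];
}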
This is currently not supported on other geometry types. Please file an enhancement request with Apple.
I have just delved into the world of Metal, and I thought that I'd got the hang of it! But then it occurred to me that if I wanted to make a game, then static objects moving around a screen wouldn't suffice. So my question is, 'Is it possible to create animations for models with Metal?'
I have looked at using other APIs, such as SpriteKit, and SceneKit, but I found that they do not support shaders, and are not as powerful as Metal.
The only way I can think of to go about this is by creating 60 different models and then loading each one, one after the other, to give a 'stop-motion' kind of effect, but I think this would probably be incredibly inefficient, and I was hoping there was an easier answer?
Thanks a lot!
Yes, there are other, more efficient ways to do animation. But before getting into that, a warning: it really looks like you're barking up the wrong tree here.
Metal is a (conceptually) very low-level interface. You use Metal to talk (almost) directly to the GPU, so to work with it you need to think (sort of) like a GPU: in terms of data buffers, vertex transformations, etc. You seem to be working at a much higher conceptual level, so you're probably better served by one of the high-level game engines: SpriteKit for 2D or SceneKit for 3D. (Or a third party engine like Cocos or Unity.) Metal, on the other hand, is better suited for building those game engines.
SpriteKit and SceneKit do support shaders. Look at SKShader and SCNShadable in the docs (and be sure to click the "More" links to read the full overviews). SceneKit also supports character animations (aka skeletal animation aka skinning): typically one designs and rigs a model for animation in an external authoring tool (Maya, Blender, etc), then uses SceneKit to work with the animations at run time.
It is possible to do things like GPU-based skeletal animation in Metal. But I haven't seen any tutorials or similar written about it yet, probably because Metal is such a new technology. Fundamentally, though, it'd be based on the same sorts of techniques you'd use for skeletal animation in OpenGL or Direct3D — and much has been written about animation for those technologies. If you're willing to invest the time and energy to work at a low level, adapting the subject matter from GL/D3D tutorials is relatively easy.
You can do skeletal animation in Metal; SceneKit would be using the GPU to deform the mesh as well. But to do it in Metal you would need to pass skin weights, along with bone matrices for the bind pose and the transformations of the bones as they animate, then calculate the new vertex positions based on these. In fact, I think you need the inverse of the bind pose matrices. Each mesh vertex is then transformed by a weighted sum of transformations dictated by the skin weights.
I tried it but screwed it up somehow; it didn't deform properly. I don't know if I'd obtained the wrong matrices from my custom script to grab animation data from Blender, or whether there was a bug in my shader maths or in the weights.
It was probably close, but with all the possible things that I may have got wrong in the process it was difficult to fix so I abandoned it in the end.
It's probably easier to stick with SceneKit and let Apple take care of the rest, or use an existing game engine such as Unity.
Then again, if you want a challenge, I'm sure it's possible, just a little tricky. You could try it on the CPU first to make sure the maths is OK, then port it to the GPU to make it faster.
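Along the lines of that CPU-first suggestion, here is a minimal sketch of the skinning maths in plain C using GLKit's math types. The names and the fixed four influences per vertex are just for illustration: each output position is the weighted sum of the bind-pose position taken through each bone's (animated transform multiplied by inverse bind transform).

#import <GLKit/GLKMath.h>

// Minimal CPU skinning sketch (illustrative names, up to 4 influences per vertex).
// skinnedPos = sum_i( weight_i * boneTransform_i * inverseBind_i * bindPos )
void skinVertices(const GLKVector3 *bindPositions,      // bind-pose positions
                  const int *boneIndices,               // 4 bone indices per vertex
                  const float *boneWeights,             // 4 weights per vertex (sum to 1)
                  const GLKMatrix4 *boneTransforms,     // current pose, model space
                  const GLKMatrix4 *inverseBindTransforms,
                  GLKVector3 *outPositions,
                  int vertexCount)
{
    for (int v = 0; v < vertexCount; v++) {
        GLKVector3 result = GLKVector3Make(0, 0, 0);
        for (int i = 0; i < 4; i++) {
            float w = boneWeights[v * 4 + i];
            if (w <= 0.0f) continue;
            int b = boneIndices[v * 4 + i];
            // Take the vertex from model space into the bone's space (inverse bind),
            // then back out through the bone's animated transform.
            GLKMatrix4 skinMatrix = GLKMatrix4Multiply(boneTransforms[b],
                                                       inverseBindTransforms[b]);
            GLKVector3 p = GLKMatrix4MultiplyVector3WithTranslation(skinMatrix,
                                                                    bindPositions[v]);
            result = GLKVector3Add(result, GLKVector3MultiplyScalar(p, w));
        }
        outPositions[v] = result;
    }
}

Once the CPU version produces correct results, the same per-vertex sum can be moved into a Metal vertex function with the bone matrices in a buffer.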
SceneKit does support shaders. And the object that manages the relationship between skeletal animations and the nodes and geometries they animate is SCNSkinner from SceneKit.
Typically, you create a skinned model using, for example, Autodesk Maya, and save it, along with animations that use the skeleton, in a scene file. You load the model from the scene file and pose or animate it in your app, either by using animation objects also loaded from the scene file or by directly manipulating the nodes in the skeleton. That's it.
Watch this seven-part video about Blender's skeletal system and how to use it in SceneKit.
convenience init(baseGeometry: SCNGeometry?,            // the geometry to be skinned (the character mesh)
                 bones: [SCNNode],                      // the skeleton's bone nodes
                 boneInverseBindTransforms: [NSValue]?, // inverse bind transforms (NSValue-wrapped SCNMatrix4, one per bone)
                 boneWeights: SCNGeometrySource,        // per-vertex bone weights (influence on the geometry)
                 boneIndices: SCNGeometrySource         // per-vertex bone indices (mapping into the bones array)
)
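In practice you rarely build an SCNSkinner by hand; when you load a rigged model from a scene file, SceneKit creates the skinner for you. A rough Objective-C sketch of that workflow, where the file name, node name and animation identifier ("character.dae", "CharacterRoot", "walk") are placeholders:

#import <SceneKit/SceneKit.h>

SCNNode *loadAnimatedCharacter(void)
{
    SCNScene *scene = [SCNScene sceneNamed:@"character.dae"];
    SCNNode *character = [scene.rootNode childNodeWithName:@"CharacterRoot" recursively:YES];

    // SceneKit built the skinner when it imported the rigged model.
    NSLog(@"Skinner drives %lu bones", (unsigned long)character.skinner.bones.count);

    // Load an animation stored in the scene file and play it on the character.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"character" withExtension:@"dae"];
    SCNSceneSource *source = [SCNSceneSource sceneSourceWithURL:url options:nil];
    CAAnimation *walk = [source entryWithIdentifier:@"walk" withClass:[CAAnimation class]];
    [character addAnimation:walk forKey:@"walk"];
    return character;
}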
I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to Textures and then rendering these textures on top of each other in the framebuffer, but I am not understanding clearly what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes this task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (hopefully with code examples of each step because I haven't understood spotty instructions that expect me to just know what is needed for each step)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader, then render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL itself, not anything else. Apple doesn't provide its own OpenGL documentation because it's an open standard whose specs are freely published; Apple assumes you're already familiar with it.
OpenGL ES 2.0 spec
manual pages
I think you are having trouble because you don't have an understanding of GL-specific terms. The spec describes them very well and clearly, so please read the spec. That will save you a LOT of time. Otherwise you will keep running into trouble.
Also, I'd like to introduce a site which has a very nice conceptual description of the OpenGL pipeline.
http://www.songho.ca/opengl/
This site targets desktop GL, and some of the API may differ a little; please focus on the conceptual understanding.
For more tutorials, google with appropriate keywords like OpenGL ES 2.0 tutorial (or how-to). Here's an example link that should be helpful, and there are many more tutorials out there. If the spec is too boring, it's also fine to have some fun with tutorials first.
Update
I'd like to say one more thing. IMO, OpenGL is all about drawing triangles. Everything is ultimately converted into triangles in 3D space to represent some shape; everything else exists only for optimization. And in most cases, GL chooses batch processing as its major optimization strategy, because the overhead of each draw call is not affordable for most games.
It's hard to start with OpenGL ES because it's an optimized version of desktop GL, so all the convenient or easy drawing features are stripped out. This is the same even in recent versions of desktop GL.
So there's no drawOneTriangle function. Instead, GL works something like this:
make a buffer
put a list of many triangles in it
select the buffer for the next draw
draw all the triangles in the current buffer at once
delete the buffer
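To make that concrete, here is a minimal ES 2.0 sketch of the buffer workflow, assuming positionAttrib is an attribute location obtained from your linked shader program (error checks omitted):

#import <OpenGLES/ES2/gl.h>

void drawTriangles(GLuint positionAttrib)
{
    const GLfloat vertices[] = {   // two triangles, x/y pairs
        -0.5f, -0.5f,   0.5f, -0.5f,   0.0f,  0.5f,
        -0.9f, -0.9f,  -0.6f, -0.9f,  -0.75f, -0.6f,
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);                                   // make a buffer
    glBindBuffer(GL_ARRAY_BUFFER, vbo);                      // select it
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
                 vertices, GL_STATIC_DRAW);                  // put the triangle list there

    glEnableVertexAttribArray(positionAttrib);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);

    glDrawArrays(GL_TRIANGLES, 0, 6);                        // draw all triangles at once

    glDeleteBuffers(1, &vbo);                                // delete the buffer
}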
By using a buffer, you don't need to dispatch duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there is no drawOneTriangleWithTexture function for using textures either. Instead, you have to:
make a buffer (a texture object)
put the list of pixels (a bitmap) in it
select the buffer for the next draw
draw all the triangles with the texture pixel data in the current buffers
delete the buffer
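And a matching minimal sketch of the texture side, where pixels stands for RGBA bitmap data you already have in memory:

#import <OpenGLES/ES2/gl.h>

GLuint makeTexture(const void *pixels, GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);                                   // make a texture object
    glBindTexture(GL_TEXTURE_2D, tex);                        // select it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);       // put the pixel data there
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

// Before drawing: bind the texture to a unit and point the shader's sampler at it.
//   glActiveTexture(GL_TEXTURE0);
//   glBindTexture(GL_TEXTURE_2D, tex);
//   glUniform1i(samplerUniform, 0);
//   ...then draw the triangles from the vertex buffer as before.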
All the seemingly overly complex stuff in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature (well, actually you already mentioned this…).
You can use a rendered image as a texture source. To do this,
you need to bind a framebuffer with a texture object rather than a renderbuffer object, using glFramebufferTexture2D.
Once you have rendered to the texture, switch the framebuffer to the final buffer, bind the texture you drew (along with any others), and perform the final drawing. You need two framebuffers: one for render-to-texture, and one for the final output.
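A minimal sketch of that setup, assuming ES 2.0; this is also what "attach texture 1" and "bind texture 1, then draw it" refer to (the variable names layerFBO/viewFBO are illustrative):

#import <OpenGLES/ES2/gl.h>

// Returns a framebuffer whose color attachment is *outTexture; draw your
// "layer" into it, then switch back to the view's framebuffer and use the
// texture like any other when compositing (error checks mostly omitted).
GLuint makeTextureFramebuffer(GLsizei width, GLsizei height, GLuint *outTexture)
{
    glGenTextures(1, outTexture);
    glBindTexture(GL_TEXTURE_2D, *outTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);         // empty texture to render into
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *outTexture, 0);    // "attach the texture"

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error in real code
    }
    return fbo;
}

// Usage sketch:
//   glBindFramebuffer(GL_FRAMEBUFFER, layerFBO);   // render the layer...
//   ...draw the spheres...
//   glBindFramebuffer(GL_FRAMEBUFFER, viewFBO);    // ...then composite:
//   glBindTexture(GL_TEXTURE_2D, layerTexture);    // "bind texture 1"
//   ...draw a full-screen quad sampling it with your blend shader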
I've been researching this problem I have and I can't seem to understand it well enough to solve it so I thought I might as well throw it out there and the intelligent bunch might have some ideas. :P
Basically I have been working on an iPhone project for a while where I have the luxury of using all the newest frameworks and targeting 5.1. So I've been using GLKit and the GLKBaseEffect, which has been working just fine for me. The reason I started out with GLKBaseEffect rather than writing my own shaders is that I don't know GLSL well. However, the requirements have become more precise and the base effect just doesn't seem to cut it any longer.
Since I am already doing all my transforms using the base effect I would prefer if I could keep my base effect intact but add glsl-type shaders on top if that makes any sense.
My old approach looks something like this (this is in a loop rendering all objects, where an object contains such things as transforms, a mesh and some other things less important for this problem, such as textures, materials and so on):
ObjectBase *obj = [ResourceManager.shared getObjectNamed:name inScene:sceneName];
GLKMatrix4 modelView = effect.transform.modelviewMatrix;
effect.transform.modelviewMatrix = GLKMatrix4Multiply(effect.transform.modelviewMatrix, obj.transform);
[effect prepareToDraw];
[obj render];
effect.transform.modelviewMatrix = modelView;
Here we fetch an object to render, transform (i.e. translate, rotate and scale) it, and then render it; the rendering itself fetches the mesh for the object, binds the buffers and draws it.
So far so good.
What I would like to do however is that during the [obj render]; call I would like the object to also do something like glUseProgram(someProgram); adding more specialized shader code.
I guess one could argue that I am trying to use the base effect for my vertex shaders and want to use "normal" shaders for my fragment shaders. At least that's what I think I want to do.
I have been trying some things.
I tried to create just the fragment shader and call glUseProgram on it; however, it said that I need one vertex and one fragment shader when setting up and compiling the program. I've also tried to create an empty vertex shader, which didn't turn out very well; I don't know exactly what happens with that, but I am guessing that it overrules the base effect.
In the end, I am leaning toward accepting that it's probably best to throw out the base effects and just write my own shaders all the way. I just feel like it's a lot of work out the window, so I wanted to see how much of it I can save.
I do understand that my understanding of shaders is the part that gives me the most problems, so please be patient with that fact.
I just wanted to give my conclusions for anyone interested in them.
What I've done is actually thrown out the GLKBaseEffect altogether and implemented my own shader code.
My biggest problem was that I didn't really understand that it's all or nothing, so to speak.
Please note that I might be wrong, so any corrections to where I am wrong will be greatly appreciated; I really don't want to fool anyone reading this.
What I found out during my endeavors is a couple of key-points:
GLKBaseEffect is meant to mimic the fixed-function pipeline as seen in earlier versions of OpenGL ES. Hence it wraps the common shader code so you don't really have to care too much about it. You get basic functionality, but it's not really very extensible.
You can still use the neat features of GLKit, such as the texture loader, the math library and so on, if you write your own shader code. So if you want something more complicated or customizable (bump mapping, toon shading and so on), it is totally worth rewriting the boilerplate code needed to render properly (see the sketch after this list). What I did at first was use GLKBaseEffect to orient myself in the scene, since it's quite comfortable and easy to use. However, when I wanted to do more (tangent-space normal mapping) I kind of got stuck, since I couldn't add to the shader program handled by the GLKBaseEffect.
Shaders are really not as scary as I always thought! I just had no idea what they really meant, and I'm surprised that I had read so much about them and still hadn't understood that shaders are basically programs REPLACING the fixed-function pipeline. Simple as that.
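For anyone looking for the boilerplate mentioned above, here is a minimal sketch of what replaces GLKBaseEffect: compile and link your own vertex/fragment pair, then feed it the same GLKit matrices you were already computing. The shader source is deliberately trivial and the names (projectionMatrix, viewMatrix, etc.) are illustrative.

#import <GLKit/GLKit.h>
#import <OpenGLES/ES2/gl.h>

static const char *kVertexSrc =
    "attribute vec4 position;                          \n"
    "uniform mat4 modelViewProjection;                 \n"
    "void main() { gl_Position = modelViewProjection * position; }";

static const char *kFragmentSrc =
    "precision mediump float;                          \n"
    "uniform vec4 color;                               \n"
    "void main() { gl_FragColor = color; }";

static GLuint compileShader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);                // check GL_COMPILE_STATUS in real code
    return shader;
}

GLuint buildProgram(void)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, compileShader(GL_VERTEX_SHADER, kVertexSrc));
    glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, kFragmentSrc));
    glBindAttribLocation(program, 0, "position");
    glLinkProgram(program);                 // check GL_LINK_STATUS in real code
    return program;
}

// Per object, instead of [effect prepareToDraw]:
//   glUseProgram(program);
//   GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMatrix,
//                        GLKMatrix4Multiply(viewMatrix, obj.transform));
//   glUniformMatrix4fv(glGetUniformLocation(program, "modelViewProjection"),
//                      1, GL_FALSE, mvp.m);
//   glUniform4f(glGetUniformLocation(program, "color"), 1, 0, 0, 1);
//   [obj render];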
That's enough rant I guess, just wanted to follow up and add what bits and pieces I've collected this far.
Just as you discovered, you can't use only a fragment shader and leave out the vertex shader. This is because the two have different tasks. Vertex shaders deal with the per-vertex aspects: transforming the vertex data, passing along texture coordinates (UVs) and so on, so that the faces (triangles) can be assembled. Fragment shaders deal with what exactly will be drawn at each pixel on the screen (or in the viewport). When you provide only a fragment shader, you are not telling OpenGL what your vertex data is; you are only telling it to do something with the pixels. And those pixels hold nothing/gibberish (I am not sure which), since your vertex shader did not do anything.
When using GLKBaseEffect, a call to the [yourEffect prepareToDraw] method takes care of the shaders etc.
If you just wish to use a stock shader pair, why not use the one provided in the Xcode OpenGL Game template? When you run it, it shows two cubes, one rendered using GLKit and the other the normal way, though I think it will not be enough for most effects. In case you wish to know more about shaders, you can have a look at the NeHe GLSL introduction article. It is about GLSL and how you can write and use shaders in your code. You might also want to have a look at Diney Bomfim's All About Shaders articles and this page.
Using GLKit is nice in most cases, since it saves you from writing lots of tedious, repetitive code. For example, you do not have to deal with so many image formats, with their different color encodings and bits per pixel, when you can just use GLKTextureLoader.
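As a quick illustration of that last point, loading a texture for use with your own shaders can be as short as this (the asset name "brick.png" is a placeholder):

#import <GLKit/GLKit.h>

// GLKTextureLoader does the image decoding and GL upload for you.
GLuint loadBrickTexture(void)
{
    NSString *path = [[NSBundle mainBundle] pathForResource:@"brick" ofType:@"png"];
    NSError *error = nil;
    GLKTextureInfo *info = [GLKTextureLoader textureWithContentsOfFile:path
                                                               options:nil
                                                                 error:&error];
    if (!info) {
        NSLog(@"Texture load failed: %@", error);
        return 0;
    }
    glBindTexture(info.target, info.name);   // ready to sample from your own shader
    return info.name;
}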
In 3D terrain that consists of thousands of cubes (e.g. Minecraft), what is a good way to handle each block in terms of location and rendering? More specifically, I know that drawing a primitive cube and world-transforming it everywhere in DirectX 9 is probably a ridiculous way to accomplish this, since there are so many performance issues, so I was wondering what a more reasonable method would be.
Should each cube be a mesh that's copied many times, or is there a way to create the appropriate meshes from the data in your vertex buffer?
I found this article that walks through some of the theory behind implementing what I want to implement, but I've never used octrees before so I wasn't able to take too much from the source code. If octrees are indeed the way to go, where is a good starting point to learn about them? Most of my google searches only turned up blog posts about theory with little or no implementation examples.
It seems like using voxels would be useful in doing this, but like with octrees, I'm coming from no experience here, so I don't really know what to study first.
Anyway, thanks for any advice/resources/book names you can spare. I'm sure it's obvious, but I'm still very new to 3D programming, so I appreciate your help.
First off, if you're using Minecraft as your reference, think of its use of chunks and relate it to oct-trees. Minecraft divides its world into smaller chunks to handle the massive amount of information that needs to be stored, so use oct-trees to organize the data you will store. Goz has a very accurate description of how oct-trees and quad-trees work, so use his information as a reference.
Another thing to consider is that you don't actually want to draw every cube to the screen, as this will eat up your framerate. Use object culling to draw only the visible cubes. Again, think of Minecraft: have you ever encountered a glitch where you can see through the blocks and under the world? This is because Minecraft only draws the top layer of blocks. With this many objects on screen, it would be a worthwhile investment to look into object culling using both the camera frustum and occlusion queries.
For information on using DirectX I would recommend any book by Frank Luna. I own this book myself and it never leaves my side when programming in DirectX. http://www.amazon.com/Introduction-Game-Programming-Direct-9-0c/dp/1598220160/ref=sr_1_3?ie=UTF8&qid=1332478780&sr=8-3
I highly recommend this book as I've learned almost everything I know about DirectX from it.
Upon a Google search I found this link that discusses occlusion culling, because Luna only covers frustum culling, not occlusion culling. I hear the Programming Gems series mentioned a lot, but I can't personally attest to it. http://http.developer.nvidia.com/GPUGems/gpugems_ch29.html
Hope this helps.
Oct-trees are fairly simple, especially axis-aligned ones like those in Minecraft.
It is basically just a 3D extension of the quad-tree. You may find it easier to learn about Quad-trees first.
To give you a quick overview of a quad-tree: basically you start off with a square. Now imagine placing a much smaller square inside that square. If you wish to build a quad-tree representing it, you first divide the original square into 4 equal-sized squares.
Next you check each quadrant, and if the smaller square is in that quadrant, you split that quadrant into 4 smaller squares. Then you check those 4 quadrants, choose the one containing the smaller square, and subdivide again. Eventually your smaller square will be wholly contained in one or more quadrants inside quadrants inside quadrants (etc). You have now built your quad-tree.
Now if you imagine you are searching for a specific square inside the larger square you can quickly see the bonus of a quad-tree. Instead of searching every possible square in the quad tree (equivalent to searching every pixel in a texture) you can now check the first 4 quadrants to see if they contain it. If one does you can check its 4 sub quadrants and so on until you find the smallest quadrant wholly containing your square (or pixel). This way you end up doing many fewer tests to find your object.
Now an oct-tree is basically the same thing but instead of encoding squares in squares you now encode cubes in cubes. Every cube can be split into 8 smaller octants (and hence the name oct-tree).
Oct-trees have the advantage that, by knowing which octant you are starting in, you can easily cast rays through the oct-tree to find collisions (as an octant is either full, partially full or empty). If an octant is empty, you pass right through it and then check the octant on the other side. If it is partially full, you check its sub-octants and so on, until you either find a full octant (i.e. you've hit a solid cube and you render it) or you pass through the octant entirely, in which case there is no cube to render. This is how Minecraft works (I'm guessing, anyway ;)). It's also a good way of quickly rendering voxel data, which more people are looking into these days as a possible future rendering mechanism.
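To make the structure concrete, here is a bare-bones sketch of an axis-aligned oct-tree node in plain C; a real implementation would add things like node pooling, ray casting and block removal.

#include <stdlib.h>

// Each node covers a cubic region; a node is either empty, a solid leaf,
// or split into 8 child octants.
typedef struct OctreeNode {
    float cx, cy, cz;               // center of this cube
    float halfSize;                 // half the edge length
    int   solid;                    // leaf flag: 1 = full cube here
    struct OctreeNode *children[8]; // NULL when not subdivided
} OctreeNode;

static OctreeNode *nodeCreate(float cx, float cy, float cz, float halfSize)
{
    OctreeNode *n = calloc(1, sizeof *n);
    n->cx = cx; n->cy = cy; n->cz = cz; n->halfSize = halfSize;
    return n;
}

// Insert a block at (x, y, z), subdividing until the node is block-sized.
static void octreeInsert(OctreeNode *node, float x, float y, float z, float blockSize)
{
    if (node->halfSize * 2.0f <= blockSize) { node->solid = 1; return; }

    float quarter = node->halfSize * 0.5f;
    int ix = x >= node->cx, iy = y >= node->cy, iz = z >= node->cz;
    int index = (ix << 2) | (iy << 1) | iz;      // which of the 8 octants

    if (!node->children[index]) {
        node->children[index] = nodeCreate(node->cx + (ix ? quarter : -quarter),
                                           node->cy + (iy ? quarter : -quarter),
                                           node->cz + (iz ? quarter : -quarter),
                                           quarter);
    }
    octreeInsert(node->children[index], x, y, z, blockSize);
}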
Hope that's some help! :)
Oct-trees and quad-trees are useful for culling sections of your geometry to render. Minecraft uses 16x16x16 render blocks to break up the terrain into manageable pieces.
Another technique to consider is instancing. Instancing is where you tell the GPU to render an object multiple times in different locations. It's used for crowd rendering, trees, anything where the geometry is the same, but you have lots of them.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb173349(v=vs.85).aspx
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html
Here is an article where the writer duplicates the Minecraft renderer in OpenGL 4. While the code won't apply directly to your case, the techniques (culling cubes that are surrounded, etc.) can be applied to a DirectX renderer.
http://codeflow.org/entries/2010/dec/09/minecraft-like-rendering-experiments-in-opengl-4/
Don't be fooled by the blocky graphics and low-quality textures: Minecraft has an extremely complex renderer, and you'll need to come up with ways to handle the sheer number of items involved. For example, even a "small" part of the world, say 100x100x100 blocks, is 1 million blocks. Pushing each block to the GPU as a separate mesh would kill your GPU. The Minecraft renderer is far more complex than most first-person shooters when you get down to the technology.