In Three.js, what is the most performant way to display a large number of cubes on an XYZ grid with the WebGL renderer, in terms of which rendering methods, lights, settings, and materials to use? The cubes should support receiving and casting shadows from a DirectionalLight -- or I can precalculate the side colors if that helps and is possible -- but they don't have any texture or special rotation. Thanks!
Be sure to merge your geometry. It helps a LOT: up to 60 times faster. Here is a post explaining why; it contains a demo actually showing the difference.
http://learningthreejs.com/blog/2011/10/05/performance-merging-geometry/
Another thing is to remove duplicated faces where applicable. For example, the three.js Minecraft demo removes hidden faces from the geometry. See http://mrdoob.github.com/three.js/examples/webgl_geometry_minecraft.html for the demo and https://github.com/mrdoob/three.js/blob/master/examples/webgl_geometry_minecraft.html#L106 for the source.
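To make the merging concrete, here's a minimal sketch in TypeScript, assuming a recent three.js build (the helper is named mergeGeometries in current versions; older builds call it mergeBufferGeometries). The grid size and material are placeholders:

```typescript
import * as THREE from 'three';
import { mergeGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Bake each cube's grid position into its vertices, so the whole grid
// becomes one geometry and one draw call instead of thousands.
const base = new THREE.BoxGeometry(1, 1, 1);
const pieces: THREE.BufferGeometry[] = [];
for (let x = 0; x < 50; x++) {
  for (let z = 0; z < 50; z++) {
    pieces.push(base.clone().translate(x, 0, z));
  }
}

const cubes = new THREE.Mesh(
  mergeGeometries(pieces),
  new THREE.MeshLambertMaterial({ color: 0x88cc88 }) // cheap, still lit
);
cubes.castShadow = cubes.receiveShadow = true; // shadows from a DirectionalLight
```

The Minecraft demo goes a step further and simply never generates the faces that are buried between two solid cubes before merging.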
Related
I'm trying to implement a moving light source in WebGL.
I understand that the normalMatrix is the key to managing the lighting equation in the fragment shader but am not winning the battle to set it up correctly. The only tutorial I can find is the excellent "Introduction to Computer Graphics"
and he says:
It turns out that you need to drop the fourth row and the fourth column and then take something called the "inverse transpose" of the resulting 3-by-3 matrix. You don't need to know what that means or why it works.
I think I do need to understand this to really master this baby.
So I'm looking for guidance on how and why to use mat3.normalFromMat4.
(P.S. I have achieved the moving light source using three.js, but its handling of texture maps degrades the images too much for my application. In raw WebGL I can achieve the desired resolution.)
For me, from reading this discussion, the answer appears to be simple: mat3.normalFromMat4 (the inverse transpose of the upper-left 3x3 of the model-view matrix) is only required if you scale the object non-uniformly (i.e. more in one dimension than the others) after the normals have been computed. Under a non-uniform scale, transforming normals by the plain model-view matrix would tilt them so they are no longer perpendicular to their surfaces; the inverse transpose corrects exactly that.
Since I'm not doing that, it's a non-issue for me.
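For reference, a small sketch of the call in question, assuming the gl-matrix library (the uniform name uNormalMatrix is just an example):

```typescript
import { mat3, mat4 } from 'gl-matrix';

// Build a model-view matrix that includes a NON-uniform scale,
// the one case where the inverse transpose actually matters.
const modelView = mat4.create();
mat4.rotateY(modelView, modelView, Math.PI / 4);
mat4.scale(modelView, modelView, [2, 1, 1]);

// Drops the 4th row/column and takes the inverse transpose of the
// remaining 3x3, exactly as the quoted tutorial describes.
const normalMatrix = mat3.create();
mat3.normalFromMat4(normalMatrix, modelView);

// Upload it and use it in the vertex shader as:
//   v_normal = uNormalMatrix * a_normal;
// gl.uniformMatrix3fv(uNormalMatrixLoc, false, normalMatrix);
```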
I'm currently learning how to use Metal and having some difficulty using the stencil buffer – possibly because it's the wrong solution for the problem I have.
The problem: I have a tree of 2D render nodes (quads) that I'm rendering with Metal. For some quads, I'd like to enable a 'clipping mask' that clips the rendering of all their sub-nodes to within their bounds.
I imagined that this might be a good use-case for the stencil attachment (Metal is my first foray into low-level graphics APIs) but am not 100% sure.
Having set up the depth/stencil attachment, however, I've got no idea what to actually do with it. Is it even possible to implement this idea of nested clipping masks with this method?
My rough idea for how it might work would be:
Set up a pipeline state for each quad as usual.
Set up a couple of depth-stencil states: one for tree elements that will clip, and one for nodes that won't (write masks of 0xFF and 0x00 respectively).
Begin a render pass as usual and begin traversing the tree.
If a node should clip, use the clipping depth-stencil state; otherwise, the non-clipping one.
Any idea if this is the right approach?
Any thoughts as to the specifics of tackling this problem, e.g. configuration of the MTLStencilDescriptor with its read/write masks and its various comparison operations and functions? How would I set the stencilReferenceValue on the render command encoder? Increment it at each level of the tree?
EDIT: A similar question attempts to tackle this problem in OpenGL (although the solution comes with its own compromises), so it appears that the above problem can be tackled using a stencil attachment.
The solution in the linked question notes "by giving each level in your scene graph a higher number than the last, you can assign each level its own stencil mask," but comes with the caveat: "Of course, if two siblings at the same depth level overlap, or simply are too close, then you've got a problem."
Is there a better way of achieving this with the capabilities that Metal provides? An idea of/pointers to a recommended algorithm/method to do this within the capabilities of Metal's API would be appreciated!
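Not an authoritative Metal answer, but here is the nested-clip stencil logic sketched in TypeScript against WebGL's stencil API, which maps almost one-to-one onto MTLStencilDescriptor's compare function, operations, and read/write masks (with the reference value supplied via setStencilReferenceValue and, as guessed above, incremented per tree level). Incrementing on the way down and decrementing on the way back up also avoids the overlapping-siblings caveat from the linked answer:

```typescript
// Assumes the stencil buffer is cleared to 0 and `level` starts at 1
// for the first clipping node. drawMaskQuad/drawChildren are callbacks
// supplied by the tree traversal.
function drawClippedSubtree(gl: WebGL2RenderingContext, level: number,
                            drawMaskQuad: () => void, drawChildren: () => void) {
  gl.enable(gl.STENCIL_TEST);

  // 1. Draw the clipping quad invisibly: pass only where the stencil
  //    already equals the parent level, and increment it to `level`.
  gl.stencilFunc(gl.EQUAL, level - 1, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.INCR);
  gl.colorMask(false, false, false, false);
  drawMaskQuad();
  gl.colorMask(true, true, true, true);

  // 2. Draw children: pass only where the stencil equals this level,
  //    i.e. inside every ancestor's clip quad; leave the buffer alone.
  gl.stencilFunc(gl.EQUAL, level, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
  drawChildren();

  // 3. On the way back up, decrement the clipped region so siblings at
  //    the parent level are unaffected (redraw the mask quad with DECR).
  gl.stencilFunc(gl.EQUAL, level, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.DECR);
  gl.colorMask(false, false, false, false);
  drawMaskQuad();
  gl.colorMask(true, true, true, true);
}
```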
In 3D terrain that consists of thousands of cubes (e.g. Minecraft), what is a way to handle each block in terms of location and rendering? More specifically, I know that drawing a primitive cube and world-transforming it everywhere in DirectX 9 is probably a ridiculous way to accomplish this, since there are so many performance issues, so I was wondering what a more reasonable method would be.
Should each cube be a mesh that's copied many times, or is there a way to create the appropriate meshes from the data in your vertex buffer?
I found this article that walks through some of the theory behind what I want to implement, but I've never used octrees before, so I wasn't able to take too much from the source code. If octrees are indeed the way to go, where is a good starting point to learn about them? Most of my Google searches only turned up blog posts about theory, with little or no implementation examples.
It seems like using voxels would be useful here, but as with octrees, I'm coming in with no experience, so I don't really know what to study first.
Anyway, thanks for any advice/resources/book names you can spare. I'm sure it's obvious, but I'm still very new to 3D programming, so I appreciate your help.
First off, if you're using Minecraft as your reference, think of its use of chunks and relate it to oct-trees. Minecraft divides its world into smaller chunks to handle the massive amount of information that needs to be stored, and oct-trees are one way to organize that data. Goz has a very accurate description of how oct-trees and quad-trees work, so use his information as a reference.
Another thing to consider is that you don't actually want to draw every cube to the screen, as this will eat up your framerate. Use object culling to draw only the visible cubes. Again, think of Minecraft: have you ever encountered a glitch where you can see through the blocks and under the world? That's because Minecraft only draws the top layer of blocks. With this many objects on screen, it would be a worthwhile investment to look into object culling using both the camera frustum and occlusion queries.
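As a sketch of the frustum half of that, here it is in TypeScript using three.js helpers for brevity (the function and names are illustrative; in DirectX 9 you'd extract the six planes from the view-projection matrix and test each chunk's bounding box the same way):

```typescript
import * as THREE from 'three';

// Test whole chunks, not individual cubes: one box test can cull
// thousands of blocks at once.
function visibleChunks(camera: THREE.PerspectiveCamera,
                       chunks: THREE.Box3[]): THREE.Box3[] {
  const frustum = new THREE.Frustum();
  frustum.setFromProjectionMatrix(
    new THREE.Matrix4().multiplyMatrices(
      camera.projectionMatrix, camera.matrixWorldInverse)
  );
  return chunks.filter((box) => frustum.intersectsBox(box));
}
```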
For information on using DirectX I would recommend any book by Frank Luna. I own one myself and it never leaves my side when programming in DirectX. http://www.amazon.com/Introduction-Game-Programming-Direct-9-0c/dp/1598220160/ref=sr_1_3?ie=UTF8&qid=1332478780&sr=8-3
I highly recommend this book as I've learned almost everything I know about DirectX from it.
Upon a Google search I found this link that discusses occlusion culling, since Luna covers only frustum culling. I hear the Programming Gems series mentioned a lot, but I can't personally attest to it. http://http.developer.nvidia.com/GPUGems/gpugems_ch29.html
Hope this helps.
Oct-trees are fairly simple, especially axis-aligned ones like those in Minecraft.
An oct-tree is basically just a 3D extension of the quad-tree, so you may find it easier to learn about quad-trees first.
To give you a quick overview of a quad-tree: basically you start off with a square. Now imagine placing a much smaller square inside that square. If you wish to build a quad-tree representing it, you first divide the original square into 4 equal-sized quadrants.
Next you check each quadrant, and if the smaller square is in that quadrant, you split it into 4 smaller squares. Then you check those 4 quadrants, choose the one containing the square, and subdivide again. Eventually your smaller square will be wholly contained in one or more quadrants inside quadrants inside quadrants (etc). You have now built your quad-tree.
Now if you imagine you are searching for a specific square inside the larger square, you can quickly see the bonus of a quad-tree. Instead of searching every possible square (equivalent to searching every pixel in a texture), you check the first 4 quadrants to see which contains it. If one does, you check its 4 sub-quadrants, and so on, until you find the smallest quadrant wholly containing your square (or pixel). This way you end up doing far fewer tests to find your object.
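A minimal sketch of that structure in TypeScript (all names are illustrative, and the max depth is an arbitrary cutoff):

```typescript
interface Rect { x: number; y: number; w: number; h: number; }

// True when rect `inner` lies wholly inside rect `outer`.
const contains = (outer: Rect, inner: Rect) =>
  inner.x >= outer.x && inner.y >= outer.y &&
  inner.x + inner.w <= outer.x + outer.w &&
  inner.y + inner.h <= outer.y + outer.h;

class QuadTree {
  children: QuadTree[] = [];
  items: Rect[] = [];

  constructor(public bounds: Rect, private depth = 0, private maxDepth = 8) {}

  // Push each item down to the smallest quadrant that wholly contains it.
  insert(item: Rect): void {
    if (this.depth < this.maxDepth) {
      if (this.children.length === 0) this.subdivide();
      for (const child of this.children) {
        if (contains(child.bounds, item)) { child.insert(item); return; }
      }
    }
    // Straddles a quadrant boundary (or max depth reached): keep it here.
    this.items.push(item);
  }

  // The search described above: follow the one quadrant that wholly
  // contains the target until no child does.
  smallestQuadrantContaining(target: Rect): QuadTree {
    for (const child of this.children) {
      if (contains(child.bounds, target)) {
        return child.smallestQuadrantContaining(target);
      }
    }
    return this;
  }

  private subdivide(): void {
    const { x, y, w, h } = this.bounds;
    const [hw, hh] = [w / 2, h / 2];
    for (const [cx, cy] of [[x, y], [x + hw, y], [x, y + hh], [x + hw, y + hh]]) {
      this.children.push(
        new QuadTree({ x: cx, y: cy, w: hw, h: hh }, this.depth + 1, this.maxDepth)
      );
    }
  }
}
```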
Now an oct-tree is basically the same thing, but instead of encoding squares in squares you encode cubes in cubes. Every cube can be split into 8 smaller octants (hence the name oct-tree).
Oct-trees have the advantage that, by knowing which octant you are starting in, you can easily cast rays through the oct-tree to find collisions (as an octant is either full, partially full, or empty). If an octant is empty, you pass right through it and check the octant on the other side. If it is partially full, you check its sub-octants, and so on, until you either find a full octant (i.e. you've hit a solid cube and you render it) or you pass through the octant entirely, in which case there is no cube to render. This is how Minecraft works (I'm guessing, anyway ;)). It is also a good way of quickly rendering voxel data, which more people are looking into these days as a possible future rendering mechanism.
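A sketch of that ray traversal, with the same caveat that the OctNode shape is invented for illustration (empty octants are skipped outright, partially full ones are descended nearest-first):

```typescript
type Vec3 = [number, number, number];

interface OctNode {
  min: Vec3;
  max: Vec3;
  state: 'empty' | 'full' | 'partial';
  children?: OctNode[]; // the 8 sub-octants when state === 'partial'
}

// Standard slab test: entry distance of the ray into the box, or null on
// a miss. invDir is 1/direction per component, precomputed once per ray.
function entryDistance(origin: Vec3, invDir: Vec3, min: Vec3, max: Vec3): number | null {
  let tNear = -Infinity;
  let tFar = Infinity;
  for (let i = 0; i < 3; i++) {
    let t0 = (min[i] - origin[i]) * invDir[i];
    let t1 = (max[i] - origin[i]) * invDir[i];
    if (t0 > t1) [t0, t1] = [t1, t0];
    tNear = Math.max(tNear, t0);
    tFar = Math.min(tFar, t1);
  }
  return tNear <= tFar && tFar >= 0 ? Math.max(tNear, 0) : null;
}

// Returns the first fully solid octant the ray hits, or null.
function raycast(node: OctNode, origin: Vec3, invDir: Vec3): OctNode | null {
  if (node.state === 'empty') return null; // pass straight through
  if (entryDistance(origin, invDir, node.min, node.max) === null) return null;
  if (node.state === 'full') return node;  // hit a solid cube: render it
  const hits = node.children!
    .map((c) => ({ c, t: entryDistance(origin, invDir, c.min, c.max) }))
    .filter((h): h is { c: OctNode; t: number } => h.t !== null)
    .sort((a, b) => a.t - b.t);
  for (const h of hits) {
    const found = raycast(h.c, origin, invDir);
    if (found) return found;
  }
  return null;
}
```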
Hope that's some help! :)
Oct-trees and quad-trees are useful for culling sections of your geometry to render. Minecraft uses 16x16x16 render blocks to break up the terrain into manageable pieces.
Another technique to consider is instancing. Instancing is where you tell the GPU to render an object multiple times in different locations. It's used for crowd rendering, trees, anything where the geometry is the same, but you have lots of them.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb173349(v=vs.85).aspx
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html
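For a concrete feel of the API, here is the same GPU feature through three.js's InstancedMesh (assuming a reasonably recent three.js build; the DirectX version in the first link follows the same pattern of shared geometry plus a per-instance data stream):

```typescript
import * as THREE from 'three';

// One draw call for 10,000 cubes; only the per-instance matrix differs.
const COUNT = 100 * 100;
const cubes = new THREE.InstancedMesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshLambertMaterial({ color: 0xffffff }),
  COUNT
);

const m = new THREE.Matrix4();
let i = 0;
for (let x = 0; x < 100; x++) {
  for (let z = 0; z < 100; z++) {
    m.setPosition(x, 0, z); // grid placement only, no rotation needed
    cubes.setMatrixAt(i++, m);
  }
}
cubes.instanceMatrix.needsUpdate = true;
```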
Here is an article where the writer duplicates the Minecraft renderer in OpenGL 4. While the code won't apply to your case, the techniques (culling cubes that are surrounded, etc.) can be applied to a DirectX renderer.
http://codeflow.org/entries/2010/dec/09/minecraft-like-rendering-experiments-in-opengl-4/
Don't be fooled by the blocky graphics and the low-quality textures. Minecraft is an extremely complex renderer, and you'll need to come up with ways to handle the sheer number of items involved. For example, even a "small" part of the world, say 100x100x100 blocks, is 1 million blocks. Pushing each block to the GPU as a separate mesh would kill your GPU. The Minecraft renderer is far more complex than most first-person shooters when you get down to the technology.
I'm trying to detect a texture using OpenCV. The texture is similar to that of the bristles of a paintbrush, so in an image it appears as many little lines close together. I've tried using Hough lines to distinguish the texture from other things, but it hasn't been working out too well, as too many false positives are detected. Beyond that, I've had ideas about using template matching as well as fast Fourier transforms, but I haven't tried testing or implementing them.
So, would anyone else have any idea of a possible method to do this? Maybe use some other line detector or an edge detector? Or would that bring up too many false positives?
This texture should be detectable in a cluttered scene, and the algorithm for doing so should be relatively fast, since I want to track it in real time if possible. Sorry for not being able to post a sample of the texture I want (not enough rep, lol), but you can simply search for paintbrush/paintbrush texture if you really need to see what it looks like. If you've seen a paintbrush before, it should be pretty obvious which part I'm talking about (the part with the brush).
Thanks in advance, really appreciate it.
Greetings each and all.
I've been struggling with OpenGL ES 2.0 and a particular problem for the last few days now. I'm looking to implement a Geometry Wars clone for the iPhone, for fun and to learn this technology. My background in 3D programming is fairly good, although concentrated mainly around vector mathematics rather than draw calls to the graphics API, as I've been working with DirectX on and off for the last couple of years. The problem, however, is that I've mainly been working with bigger meshes, loading, translating, and transforming them in several ways, and now I find myself in a position where I want to handle small meshes, and lots of them.
The objects are triangles, rectangles, hexagons, etc., and I want the ability to modify them all separately (e.g. making an edge wavy, or pulsating). When I've worked with multiple big meshes I've made separate draw calls for them, easily attaching shaders and their respective parameters, but in this case I would like to render it all in one call, and that's where my knowledge fails me.
So, to clarify my question: how do you modify small meshes, preferably stored in one vertex array, individually, and render them all at once using shaders with OpenGL ES 2.0?
Although code examples are more than welcome, a "simple" explanation would be enough to get me started. I assume I'm missing something trivial here, and any help is greatly appreciated.
Thanks in advance,
Karl
Sounds like instancing (and instanced arrays) could be an answer to your problem, although note that it isn't part of core OpenGL ES 2.0 (it became core in ES 3.0; on ES 2.0 you would need an extension such as EXT_instanced_arrays, where available). This way you can render many copies of the same geometry with per-instance data (like a specific texture index, sub-texture, or shader parameters). But of course, you cannot render different objects with completely different shaders in one draw call.
Otherwise, the much simpler (and maybe much less optimized) glMultiDrawArrays/glMultiDrawElements renders multiple completely different geometries in one call (again via an extension on ES, EXT_multi_draw_arrays), but you cannot tell which triangle belongs to which object in the shader, and I also doubt it gives that much of a performance boost.
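As a sketch of the instanced path, here it is in TypeScript against WebGL 2, whose API mirrors OpenGL ES 3.0, where instancing is core (the EXT_instanced_arrays entry points on ES 2.0 work the same way). Attribute locations are assumed to come from the caller's linked program:

```typescript
// Draws N copies of one small triangle, each at its own 2D offset,
// in a single instanced draw call.
function drawInstancedTriangles(gl: WebGL2RenderingContext,
                                offsets: Float32Array, // 2 floats per instance
                                positionLoc: number, offsetLoc: number) {
  // Shared triangle vertices, reused by every instance.
  const tri = new Float32Array([0, 0.1, -0.1, -0.1, 0.1, -0.1]);
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, tri, gl.STATIC_DRAW);
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

  // Per-instance offsets: divisor 1 means the attribute advances once
  // per instance instead of once per vertex.
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.DYNAMIC_DRAW);
  gl.enableVertexAttribArray(offsetLoc);
  gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 0, 0);
  gl.vertexAttribDivisor(offsetLoc, 1);

  gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, offsets.length / 2);
}
```

In the vertex shader, you'd read the per-instance attribute alongside the shared position (e.g. gl_Position = vec4(a_position + a_offset, 0.0, 1.0);), and any per-object animation parameter (wave phase, pulse amount) can travel in the same per-instance buffer.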