Displaying a series of RGB array images in DirectX or Vulkan

Hello.
I have the following problem. I have a function written in C++, and it returns an array of RGB pixels representing a single image. Each subsequent call to this function returns a new array representing the next frame of an animation. I want to display this animation frame by frame in a window, and I want this to be as fast as possible.
I have used GDI+ but could not get satisfactory performance, so I have become interested in the DirectX and Vulkan libraries. Unfortunately, at first glance these libraries seem downright overwhelming to learn. Moreover, the tutorials I found focus on 3D graphics, which I don't need because, as I wrote earlier, I only want to display a series of RGB pixel arrays.
And this is where my question comes in.
Is there a boilerplate project on Github or anywhere else that I could use? If not, does anyone here know of a tutorial that, without going into learning 3D graphics, would help me to learn DirectX or Vulkan enough so that I could write the code I need myself?
Thank you in advance for your reply.
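Not a ready-made boilerplate, but the core of the Direct3D 11 route is small enough to sketch: create a texture with D3D11_USAGE_DYNAMIC and CPU write access, map it every frame, copy the new pixel array in, and draw a full-screen textured quad. Below is a minimal, hedged sketch of the per-frame upload only; it assumes the device, context, and an RGBA8 dynamic texture already exist, and since GPU formats are typically 4 bytes per pixel, an RGB source would first need expanding to RGBA.

```cpp
#include <d3d11.h>
#include <cstdint>
#include <cstring>

// Per-frame upload of a CPU-generated pixel buffer into a dynamic texture
// (DXGI_FORMAT_R8G8B8A8_UNORM, D3D11_USAGE_DYNAMIC, D3D11_CPU_ACCESS_WRITE).
void UploadFrame(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex,
                 const uint8_t* rgba, int width, int height)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        // The driver may pad each texture row, so copy row by row
        // using the returned RowPitch rather than width * 4.
        for (int y = 0; y < height; ++y)
            std::memcpy(static_cast<uint8_t*>(mapped.pData) + y * mapped.RowPitch,
                        rgba + y * width * 4, width * 4);
        ctx->Unmap(tex, 0);
    }
    // Then draw a full-screen quad sampling this texture and Present().
}
```

Everything else (device and swap-chain creation, the quad vertices, a trivial pair of shaders) is one-time setup that any basic D3D11 "textured quad" sample covers.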

Related

how to use mat3.normalFromMat4

I'm trying to implement a moving light source in WebGL.
I understand that the normalMatrix is the key to managing the lighting equation in the fragment shader but am not winning the battle to set it up correctly. The only tutorial I can find is the excellent "Introduction to Computer Graphics"
and he says:
It turns out that you need to drop the fourth row and the fourth column and then take something called the "inverse transpose" of the resulting 3-by-3 matrix. You don't need to know what that means or why it works.
I think I do need to understand this to really master this baby.
So I'm looking for guidance on how and why to use mat3.normalFromMat4.
(PS. I have achieved the moving light source using Three.js, but its handling of texture maps degrades the images too much for my application. In WebGL I can achieve the desired resolution.)
For me, from reading this discussion, the answer appears to be simple. mat3.normalFromMat4 is only required if you scale the object non-uniformly (i.e. more in one dimension than the others) or shear it after the normals have been computed.
Since I'm not doing that, it's a null issue.
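The "why" behind the inverse transpose is short enough to derive (a math sketch using standard definitions, nothing specific to gl-matrix). A normal $n$ is characterized by being perpendicular to every tangent $t$ of the surface: $n^\top t = 0$. Tangents transform by the upper-left $3\times 3$ block $M$ of the model-view matrix (which is why the fourth row and column are dropped), so $t' = M t$. If the transformed normal is $n' = N n$, perpendicularity must be preserved:

$$ n'^{\top} t' \;=\; (N n)^{\top} (M t) \;=\; n^{\top} N^{\top} M\, t \;=\; 0 \quad \text{for all } t \text{ with } n^{\top} t = 0, $$

which holds whenever $N^{\top} M = I$, i.e. $N = (M^{-1})^{\top}$: the inverse transpose. For pure rotations (and uniform scales) $M^{-1}$ equals $M^{\top}$ up to a scalar factor, so $(M^{-1})^{\top}$ is just $M$ again, which is exactly why mat3.normalFromMat4 only changes anything under non-uniform scaling or shear.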

Rendering "layers" in OpenGL ES

I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to textures and then rendering these textures on top of each other in the framebuffer, but I don't clearly understand what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes the task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example, how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (Hopefully with code examples for each step, because I haven't had luck with spotty instructions that expect me to already know what each step requires.)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader and render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL itself, not anything else. Apple doesn't provide any OpenGL documentation because it's an open standard, so the specs are freely published; Apple assumes you're already familiar with it.
OpenGL ES 2.0 spec
manual pages
I think you're having trouble because you don't yet understand GL-specific terms. The spec describes them very clearly, so please read it; that will save you a lot of time. Otherwise the trouble will keep coming back.
I'd also like to recommend a site with a very nice conceptual description of the OpenGL pipeline.
http://www.songho.ca/opengl/
This site targets desktop GL, so some of the API may differ a little; please focus on the conceptual understanding.
For more tutorials, google a proper keyword like "OpenGL ES 2.0 tutorial" (or how-to). There are many tutorials out there, and if the spec is too dry, it's also good to have some fun with them.
Update
I'd like to say one more thing. IMO, OpenGL is all about drawing triangles: everything is ultimately converted into triangles in 3D space to represent some shape, and nearly everything else exists only for optimization. In most cases GL chooses batch processing as its major optimization strategy, because the overhead of each draw call is not affordable for most games.
OpenGL ES is hard to start with because it's a trimmed-down version of desktop GL, so all the convenient, easy drawing features are stripped out. The same is true of recent versions of desktop GL.
So there's no drawOneTriangle function. Instead, GL has something like the following (a code sketch follows the list):
make a buffer,
put a list of many triangles in it,
select the buffer for the next draw,
draw all triangles in the current buffer at once,
delete the buffer.
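A minimal sketch of that buffer workflow in OpenGL ES 2.0 / C++; it assumes a current GL context and an already linked program, and the attribute name aPosition is made up:

```cpp
#include <GLES2/gl2.h>

// Two triangles as interleaved x,y pairs.
static const GLfloat triangles[] = {
    -0.5f, -0.5f,   0.5f, -0.5f,   0.0f,  0.5f,
    -0.9f, -0.9f,  -0.6f, -0.9f,  -0.75f, -0.6f,
};

void drawTriangles(GLuint program) {
    GLuint pos = (GLuint)glGetAttribLocation(program, "aPosition");

    GLuint vbo;
    glGenBuffers(1, &vbo);                        // make a buffer
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           // select it for what follows
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangles),
                 triangles, GL_STATIC_DRAW);      // put the triangle list there

    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 6);             // draw them all at once

    glDeleteBuffers(1, &vbo);                     // delete the buffer
}
```

In real code you would keep the buffer alive across frames instead of creating and deleting it per draw; the create/delete here only mirrors the list above.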
By using buffers, you avoid dispatching duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there is no drawOneTriangleWithTexture function for using textures either. Instead, you have to (again, a sketch follows the list):
make a buffer,
put a list of many pixels (a bitmap) in it,
select the buffer for the next draw,
draw all triangles with the texture pixel data in the current buffers,
delete the buffer.
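The texture version of the same pattern, again as a hedged ES 2.0 sketch; pixels is assumed to be a 64x64 RGBA bitmap and vertexCount the size of the already-selected vertex buffer:

```cpp
void drawWithTexture(const unsigned char* pixels, GLsizei vertexCount) {
    GLuint tex;
    glGenTextures(1, &tex);                             // make a (texture) buffer
    glBindTexture(GL_TEXTURE_2D, tex);                  // select it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,  // put the pixel list there
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);         // draw with the bound texture

    glDeleteTextures(1, &tex);                          // delete it
}
```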
Everything that looks overly complex in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature. (Well, actually you already mentioned this…)
You can use a rendered image as a texture source. To do this, you need to bind a framebuffer with a texture object, rather than a renderbuffer object, attached to it, using a function like glFramebufferTexture2D (the ES 2.0 variant). Once you have rendered into the texture, switch the framebuffer to the final one, bind the texture you drew (along with any others), and perform the final drawing. You need two framebuffers: one for render-to-texture and one for the final output.
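A hedged ES 2.0 sketch of both steps; w, h and viewFramebuffer are placeholders (on iOS the final framebuffer is the one created for the view, not 0), and the quoted comments map back to the phrases you asked about:

```cpp
// Step 1: create a texture and attach it to an FBO as the color target.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        // allocate, no data yet
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);        // "attach texture 1"
// ... draw the layer's content here; it lands in tex ...

// Step 2: switch to the final framebuffer and composite the layer.
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glBindTexture(GL_TEXTURE_2D, tex);                    // "bind texture 1, then draw it"
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw a full-screen quad sampling tex ...
```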

Handling blocks in Minecraft-style terrain (d3d/c++)

In 3D terrain that consists of thousands of cubes (e.g. Minecraft), what is a good way to handle each block in terms of location and rendering? More specifically, I know that drawing a cube primitive and world-transforming it everywhere in DirectX 9 is probably a ridiculous way to accomplish this, since there are so many performance issues, so I was wondering what a more reasonable method would be.
Should each cube be a mesh that's copied many times, or is there a way to create the appropriate meshes from the data in your vertex buffer?
I found this article that walks through some of the theory behind what I want to implement, but I've never used octrees before, so I wasn't able to take much from the source code. If octrees are indeed the way to go, where is a good starting point for learning about them? Most of my Google searches only turned up blog posts about theory with little or no implementation examples.
It seems like using voxels would be useful in doing this, but as with octrees, I have no experience here, so I don't really know what to study first.
Anyway, thanks for any advice/resources/book names you can spare. I'm sure it's obvious, but I'm still very new to 3D programming, so I appreciate your help.
First off, since you're using Minecraft as your reference, think of its use of chunks and relate them to oct-trees. Minecraft divides its world into smaller chunks to handle the massive amount of information that needs to be stored, so use oct-trees to organize that data. Goz has a very accurate description of how oct-trees and quad-trees work, so use his information as a reference.
Another thing to consider is that you don't actually want to draw every cube to the screen, as this will eat up your framerate. Use object culling to draw only the visible cubes. Again, think of Minecraft: have you ever encountered a glitch where you can see through the blocks and under the world? That's because Minecraft only draws the top layer of blocks. With this many objects on screen, it's a worthwhile investment to look into object culling using both the camera frustum and occlusion queries.
For information on using DirectX I would recommend any book by Frank Luna. I own this book myself and it never leaves my side when programming in DirectX. http://www.amazon.com/Introduction-Game-Programming-Direct-9-0c/dp/1598220160/ref=sr_1_3?ie=UTF8&qid=1332478780&sr=8-3
I highly recommend this book as I've learned almost everything I know about DirectX from it.
From a Google search I found this link discussing occlusion culling, since Luna covers only frustum culling, not occlusion culling. I hear the Programming Gems series mentioned a lot, but I can't attest to it personally. http://http.developer.nvidia.com/GPUGems/gpugems_ch29.html
Hope this helps.
Oct-trees are fairly simple, especially axis-aligned ones like those in Minecraft.
It is basically just a 3D extension of the quad-tree, so you may find it easier to learn about quad-trees first.
To give you a quick overview of a quad-tree: basically you start off with a square. Now imagine placing a much smaller square inside that square. If you wish to build a quad-tree representing this, you first divide the original square into 4 equally sized squares.
Next you check each quadrant, and if the smaller square is in that quadrant, you split that quadrant into 4 smaller squares. You then check those 4 quadrants, choose the one containing the smaller square, and subdivide again. Eventually your smaller square will be wholly contained in one or more quadrants inside quadrants inside quadrants (etc). You have now built your quad-tree.
Now if you imagine you are searching for a specific square inside the larger square you can quickly see the bonus of a quad-tree. Instead of searching every possible square in the quad tree (equivalent to searching every pixel in a texture) you can now check the first 4 quadrants to see if they contain it. If one does you can check its 4 sub quadrants and so on until you find the smallest quadrant wholly containing your square (or pixel). This way you end up doing many fewer tests to find your object.
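A compact point-quadtree sketch of that subdivide-and-descend idea in C++ (it stores points rather than squares for brevity; the capacity and all names are arbitrary):

```cpp
#include <memory>
#include <vector>

struct Point { float x, y; };

struct QuadTree {
    float cx, cy, half;                  // center and half-extent of this square
    std::vector<Point> points;
    std::unique_ptr<QuadTree> child[4];  // null until subdivided
    static constexpr std::size_t kCap = 4;

    QuadTree(float cx_, float cy_, float half_) : cx(cx_), cy(cy_), half(half_) {}

    // Index of the quadrant containing p: 0..3.
    int quadrant(Point p) const { return (p.x >= cx) + 2 * (p.y >= cy); }

    void insert(Point p) {
        if (!child[0] && points.size() < kCap) { points.push_back(p); return; }
        if (!child[0]) subdivide();
        child[quadrant(p)]->insert(p);   // descend into the containing quadrant
    }

    void subdivide() {
        float q = half / 2;
        child[0] = std::make_unique<QuadTree>(cx - q, cy - q, q);
        child[1] = std::make_unique<QuadTree>(cx + q, cy - q, q);
        child[2] = std::make_unique<QuadTree>(cx - q, cy + q, q);
        child[3] = std::make_unique<QuadTree>(cx + q, cy + q, q);
        for (Point p : points) child[quadrant(p)]->insert(p);
        points.clear();
    }
};
```

A lookup then checks at most 4 children per level instead of every stored item, which is the "many fewer tests" benefit described above.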
Now an oct-tree is basically the same thing but instead of encoding squares in squares you now encode cubes in cubes. Every cube can be split into 8 smaller octants (and hence the name oct-tree).
Oct-trees have the advantage that, by knowing which octant you are starting in, you can easily cast rays through the oct-tree to find collisions, since an octant is either full, partially full, or empty. If an octant is empty you pass right through it and check the octant on the other side. If it is partially full you check its sub-octants, and so on, until you either find a full octant (i.e. you've hit a solid cube and you render it) or you pass through the octant entirely, in which case there is no cube to render. This is how Minecraft works (I'm guessing, anyway ;)). It is also a good way of quickly rendering voxel data, which more people are looking into these days as a possible future rendering mechanism.
Hope that's some help! :)
Oct-trees and quad-trees are useful for culling sections of your geometry to render. Minecraft uses 16x16x16 render blocks to break up the terrain into manageable pieces.
Another technique to consider is instancing. Instancing is where you tell the GPU to render an object multiple times in different locations. It's used for crowd rendering, trees, anything where the geometry is the same, but you have lots of them.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb173349(v=vs.85).aspx
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html
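A hedged Direct3D 9 sketch of the stream-frequency instancing that the MSDN link above describes; CubeVertex, InstanceData, and a vertex declaration binding both streams are assumed to already exist:

```cpp
// Stream 0: one cube's geometry (24 vertices / 12 triangles).
// Stream 1: one InstanceData entry per cube (e.g. its world position).
device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numCubes);
device->SetStreamSource(0, cubeVB, 0, sizeof(CubeVertex));
device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
device->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));
device->SetIndices(cubeIB);
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 24, 0, 12); // draws all numCubes

// Reset the stream frequencies when done.
device->SetStreamSourceFreq(0, 1);
device->SetStreamSourceFreq(1, 1);
```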
Here is an article where the writer duplicates the Minecraft renderer in OpenGL 4. While the code won't apply to your case, the techniques (culling cubes that are surrounded, etc.) can be applied to a DirectX renderer.
http://codeflow.org/entries/2010/dec/09/minecraft-like-rendering-experiments-in-opengl-4/
Don't be fooled by the blocky graphics and low-quality textures. Minecraft is an extremely complex renderer, and you'll need to come up with ways to handle the sheer number of items involved. For example, even a "small" part of the world, say 100x100x100 blocks, is 1 million blocks. Pushing each block to the GPU as a separate mesh would kill your GPU. The Minecraft renderer is far more complex than most first-person shooters when you get down to the technology.

Detecting Textures in OpenCV?

I'm trying to detect a texture using OpenCV. The texture would be similar to that of the bristles of a paintbrush, so in an image it would appear as many little lines packed together. I've tried using Hough lines to distinguish the texture from other things, but it hasn't been working out too well, as too many false positives are detected. Other than that, I've had ideas about using template matching as well as fast Fourier transforms, but I haven't tried testing or implementing them.
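For reference, a hedged sketch of the Hough-lines approach described above in OpenCV/C++; the file name and every threshold are made-up values that would need tuning per scene:

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE); // hypothetical input
    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);

    // Short minimum length and small gap, to pick up dense bristle strokes.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 30, 10, 3);

    // A region with an unusually high density of short, roughly parallel
    // segments is then a candidate for the brush texture.
    return 0;
}
```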
So, would anyone else have any idea of a possible method to do this? Maybe use some other line detector or an edge detector? Or would that bring up too many false positives?
This texture should be detectable in a cluttered scene, and the algorithm should be relatively fast, since I want it to be tracked in real time if possible. Sorry for not being able to post a sample of the texture I want (too little rep, lol), but you can simply search for paintbrush/paintbrush texture if you really need to see what it looks like. If you've seen a paintbrush before, it should be pretty obvious which part I'm talking about (the part with the brush).
Thanks in advance, really appreciate it.

Rendering lots of smaller objects in OpenGL ES 2.0

Greetings each and all.
I've been struggling with OpenGL ES 2.0 and a particular problem for the last few days. I'm looking to implement a Geometry Wars clone for the iPhone, for fun and to learn the technology. My background in 3D programming is fairly good, although it is mainly concentrated on vector mathematics rather than draw calls to the graphics API, as I've been working with DirectX on and off for the last couple of years. The problem, however, is that I've mainly been working with bigger meshes, loading, translating and transforming them in several ways, and now I find myself in a position where I want to handle small meshes, and lots of them.
The objects are triangles, rectangles, hexagons, etc., and I want the ability to modify them all separately (e.g. making the outer edge wavy or pulsating). When I've worked with multiple big meshes I've made separate draw calls for them, easily attaching shaders and their respective parameters, but in this case I would like to render them all in one call, and that's where my knowledge fails me.
So, to clarify my question: how do you modify small meshes, preferably stored in one vertex array, individually, and render them all at once using shaders with OpenGL ES 2.0?
Although code examples are more than welcome, a "simple" explanation would be enough to get me started. I assume I'm missing something trivial here, and any help is greatly appreciated.
Thanks in advance,
Karl
Sounds like instancing (and instanced arrays) could be an answer to your problem, although it might be too advanced for iOS or ES in general to support. This way you can render many copies of the same geometry with per-instance data (like a specific texture index, sub-texture, or shader parameters). But of course, you cannot render different objects with completely different shaders in one draw call.
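A hedged sketch of instanced arrays via the EXT_instanced_arrays extension (it is not core in ES 2.0, so check for it at runtime); all buffer, struct, and attribute names here are made up:

```cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// Per-instance data: position offset plus a phase the vertex shader can use
// to make each hexagon's edge wave or pulse differently.
struct Instance { GLfloat x, y, phase; };

void drawHexagons(GLuint program, GLuint hexVB, GLuint instVB, GLsizei count) {
    GLuint pos  = (GLuint)glGetAttribLocation(program, "aPosition");
    GLuint inst = (GLuint)glGetAttribLocation(program, "aInstance");

    glBindBuffer(GL_ARRAY_BUFFER, hexVB);            // shared hexagon mesh
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, 0);

    glBindBuffer(GL_ARRAY_BUFFER, instVB);           // one Instance per hexagon
    glEnableVertexAttribArray(inst);
    glVertexAttribPointer(inst, 3, GL_FLOAT, GL_FALSE, sizeof(Instance), 0);
    glVertexAttribDivisorEXT(inst, 1);               // advance once per instance

    // 8 vertices as a fan: center, 6 corners, first corner repeated.
    glDrawArraysInstancedEXT(GL_TRIANGLE_FAN, 0, 8, count);
}
```

All the instances share one mesh and one shader, but each can still animate independently through its phase attribute, which matches the "modify them all separately, render in one call" goal.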
Otherwise, the much simpler (and maybe much less optimized) glMultiDrawArrays/Elements functions render multiple completely different geometries in one call, but you cannot tell which triangle belongs to which object in the shader, and I also doubt they give much of a performance boost.
