In the past day of my foray into OpenGL ES 2.0, while attempting to apply two projective textures -- one sprite animation and one video file texture -- to a skybox, I started simply pounding my hands on the keyboard like stubs, and miraculously it all started working.
However, the texture created from the video file is flipped upside-down. In other words, the texture coordinates for (0,0) seem to be mapping to (0,1), and vice-versa.
The function which creates the video file texture from a CVImageBufferRef, CVOpenGLESTextureCacheCreateTextureFromImage(), includes a parameter "CFDictionaryRef textureAttributes."
CVOpenGLESTextureCache.h helpfully explains: "A CFDictionaryRef containing attributes to be used for creating the CVOpenGLESTexture objects. May be NULL."
I immediately thought of GLKTextureLoader, which lets you pass in an options dictionary, with one of the available options (GLKTextureLoaderOriginBottomLeft) used to flip the texture vertically.
So, I'm a bit confused on two points:
Will passing in a CFDictionaryRef of attributes allow me to easily change things about the texture, like rotation? Or does it somehow mean 'attribute' in the shader sense? (I doubt it means the shader sense, but I also think it's odd that it calls them attributes rather than options.)
Is there a list somewhere of the key/value pairs that will tell it to do useful things?
I wanted to look into this before finding some other way to flip it, since if it's possible to do it here, it seems like the most straightforward approach, assuming the procedure really is parallel to GLKTextureLoader's options.
After reading through Apple's RosyWriter sample code again, I've realized that after creating the texture with CVOpenGLESTextureCacheCreateTextureFromImage(), they flip the texture around by modifying the texture coordinates of the vertices.
Since I'm projecting the texture and computing the texture coordinates in the vertex shader, I think the simplest solution for me will be to flip the actual movie file asset before I drop it into Xcode, so that's probably what I'll do for each movie. I just realized what a simple solution that is: that way I don't need to fork my vertex shader code into versions for projections that do and don't need to be rotated.
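For anyone who does want the RosyWriter-style fix in code instead, here is a minimal sketch of the idea (the array names are mine, not Apple's): leave the quad's vertex positions alone and just flip the V texture coordinates.

    #include <OpenGLES/ES2/gl.h>

    /* Full-screen quad for a GL_TRIANGLE_STRIP draw. */
    static const GLfloat quadVertices[] = {
        -1.0f, -1.0f,   /* bottom-left  */
         1.0f, -1.0f,   /* bottom-right */
        -1.0f,  1.0f,   /* top-left     */
         1.0f,  1.0f,   /* top-right    */
    };

    /* Unflipped coordinates would be (0,0), (1,0), (0,1), (1,1).
       Swapping the V values flips the video texture right side up. */
    static const GLfloat flippedTexCoords[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
    };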
I'd still really appreciate clarification on the attributes argument though, if anybody has info on that.
In my game I have to use SpriteSortMode.Texture because I have a lot of objects sharing only a few textures, so I can't afford to use SpriteSortMode.BackToFront.
The problem is that this means I can't draw in layers, unless I call SpriteBatch.Begin again with the exact same settings for each layer, which is what I'm currently doing.
I only need three draw layers: a tileset surface, objects like rocks or characters on the surface, and the UI.
Other solutions I've found are using textured quads (which supposedly also improves tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.
I have a question and hope you can help me with it. I've been busy making a game with XNA and have recently started getting into shaders (HLSL). There's a shader that I like, use, and would like to improve.
The shader creates an outline by drawing the back faces of a model (in black) and translating each vertex along its normal. Now, for a smooth-shaded model, this is fine. I, however, am using flat-shaded models (I'm posting this from my phone, but if anyone is confused about what that means, I can upload a picture later). This means that the vertices are translated along the normal of their corresponding face, resulting in visible gaps between the faces.
Now, the question is: is there a way to calculate (either in the shader or in XNA) how I should translate each vertex, or is the only viable option making a copy of the 3D model, but with smooth shading?
Thanks in advance, hope you can educate me!
EDIT: Alternatively, I could load only a smooth-shaded model and try to flat-shade it in the shader. That would, however, mean I'd have to be able to find all the vertices of the corresponding face, add their normals, and normalize the result. Is there a way to do this?
EDIT 2: So far, I've found a few options that don't seem to work in the end: setting "shademode" in HLSL is now deprecated, and setting the fillmode to "wireframe" would be neat (while culling front faces), if only I were able to set the line thickness.
I'm working on a new idea here. I could maybe iterate through vertices, find their position on the screen, and draw 2d lines between those points using something like the RoundLine library. I'm going to try this, and see if it works.
Ok, I've been working on this problem for a while and found something that works quite nicely.
Instead of doing a lot of complex mathematics to draw 2D lines at the right depth, I simply did the following:
Set a rasterizer state that culls front faces, draws in wireframe, and has a slightly negative depth bias.
Now draw the model in all black (I modified my shader for this).
Set a rasterizer state that culls back faces, draws in FillMode.Solid, and has a depth bias of 0.
Now draw the model normally.
Since we can't change the thickness of wireframe lines, we're left with a very slim outline. For my purposes, this was actually not too bad.
I hope this information is useful to somebody later on.
I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to textures and then rendering these textures on top of each other in the framebuffer, but I am not understanding clearly what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes this task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example, how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (hopefully with code examples of each step because I haven't understood spotty instructions that expect me to just know what is needed for each step)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader and render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL, not anything else. Apple doesn't provide its own OpenGL documentation because it's an open standard whose specs are freely published; Apple assumes you're already familiar with them.
OpenGL ES 2.0 spec
manual pages
I think you are having trouble because you don't yet have an understanding of GL-specific terms. The spec describes them very well and clearly, so please read the spec; it will save you a LOT of time. Otherwise you will keep running into trouble.
Also, I'd like to point you to a site that has a very nice conceptual description of the OpenGL pipeline.
http://www.songho.ca/opengl/
This site targets desktop GL, so some of the API may differ a little; please focus on the conceptual understanding.
For more tutorials, search with the proper keywords, like "OpenGL ES 2.0 tutorial" (or how-to). There are many of them, and they should be helpful. If the spec is too boring, it's also fine to have some fun with the tutorials.
Update
I'd like to say one more thing. IMO, OpenGL is all about drawing triangles. Everything is ultimately converted into triangles in 3D space to represent some shape; almost everything else exists only for optimization. And in most cases, GL chooses batch processing as its major optimization strategy, because the overhead of individual draw calls is not affordable for most games.
It's hard to start with OpenGL ES because it's a streamlined version of desktop GL, so all the convenient, easy drawing features are stripped off. The same is true even of recent versions of desktop GL.
So there's no drawOneTriangle function. Instead, GL works something like this:
make a buffer,
put a list of many triangles in it,
select the buffer for the next draw,
draw all the triangles in the current buffer at once,
delete the buffer.
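As a rough sketch of what those steps look like with actual GL ES 2.0 calls (the attribute location and the triangle data below are placeholders of mine, not from any particular sample):

    #include <OpenGLES/ES2/gl.h>

    /* Three 2D positions forming one triangle; in practice the buffer
       would hold many triangles. */
    static const GLfloat triangles[] = {
         0.0f,  0.5f,
        -0.5f, -0.5f,
         0.5f, -0.5f,
    };

    void drawTrianglesOnce(GLuint positionAttrib)   /* attribute location from your shader */
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);                                  /* make a buffer               */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);                     /* select it for what follows  */
        glBufferData(GL_ARRAY_BUFFER, sizeof(triangles),
                     triangles, GL_STATIC_DRAW);                /* put the triangle list there */

        glEnableVertexAttribArray(positionAttrib);
        glVertexAttribPointer(positionAttrib, 2, GL_FLOAT,
                              GL_FALSE, 0, 0);                  /* describe the layout         */
        glDrawArrays(GL_TRIANGLES, 0, 3);                       /* draw everything at once     */

        glDeleteBuffers(1, &vbo);                               /* delete the buffer           */
    }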
By using a buffer, you avoid dispatching duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there's no drawOneTriangleWithTexture function for using textures. Instead, you have to:
make a buffer (a texture object),
put a list of many pixels in it (the bitmap),
select the buffer for the next draw,
draw all the triangles with the texture pixel data in the current buffers,
delete the buffer.
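Again as a rough sketch, the same pattern with the texture calls (the pixel data here is a made-up 2x2 checker; same header as above):

    /* 2x2 RGBA checker as stand-in pixel data. */
    static const GLubyte pixels[] = {
        255, 255, 255, 255,     0,   0,   0, 255,
          0,   0,   0, 255,   255, 255, 255, 255,
    };

    GLuint tex;
    glGenTextures(1, &tex);                                   /* make a texture object  */
    glBindTexture(GL_TEXTURE_2D, tex);                        /* select it              */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);          /* put the pixels there   */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* ... draw your triangles here; the fragment shader samples this texture ... */

    glDeleteTextures(1, &tex);                                /* delete it              */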
All the seemingly overly complex stuff in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature (well, actually you already mentioned this…).
You can use a rendered image as a texture source. To do this,
you need to attach a texture object to the framebuffer, rather than a renderbuffer object, using a function like glFramebufferTexture2D.
Once you've rendered to the texture, switch the framebuffer to the final buffer, bind the texture you just drew (along with any others), and perform the final drawing. You need two framebuffers: one for render-to-texture, and one for the final output.
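Here is a minimal sketch of that two-pass setup in GL ES 2.0; the size, the handle names, and defaultFBO are placeholders of mine, and error checking is omitted:

    #include <OpenGLES/ES2/gl.h>

    GLuint layerTex, layerFBO;
    GLuint defaultFBO = 0;   /* placeholder: on iOS, use the framebuffer your view/EAGL layer created */

    /* Create the texture the first pass will render into. */
    glGenTextures(1, &layerTex);
    glBindTexture(GL_TEXTURE_2D, layerTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Create a framebuffer and attach the texture as its color buffer. */
    glGenFramebuffers(1, &layerFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, layerFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, layerTex, 0);

    /* Pass 1: render the layer (e.g. the spheres) into the texture. */
    glViewport(0, 0, 512, 512);
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the layer's geometry here ... */

    /* Pass 2: switch to the on-screen framebuffer, bind the layer texture,
       and composite it with whatever blending shader you like. */
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
    glBindTexture(GL_TEXTURE_2D, layerTex);
    /* ... draw a full-screen quad that samples layerTex ... */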
I'm relatively new to 3D development and am currently using ActionScript, Stage3D, and AGAL to learn. I'm trying to create a scene with a simple procedural mesh that is flat shaded. However, I'm stuck on exactly how I should be passing surface normals to the shader for the lighting. I would really like to just use a single surface normal for each triangle and do flat, even shading for each. I know it's easy to achieve better-looking lighting with normals for each vertex, but this is the look I'm after.
Since the shader normally processes every vertex, not every triangle, is it possible for me to just pass a single normal per triangle, rather than one per vertex? Is my thinking completely off here? If anyone had a working example of doing simple, flat shading I'd greatly appreciate it.
I'm digging up an old question here since I stumbled on it via google and can see there is no accepted answer.
Stage3D does not have an equivalent of the "GL_FLAT" option for its shader engine. What this means is that the fragment shader program always receives a "varying" (interpolated) value computed from the outputs of the three respective vertices (via the vertex program). If you want flat shading, you basically have only one option:
Create three unique vertices for each triangle and set the normal for each vertex to the face normal of the triangle. This way, each vertex will calculate the same lighting and result in the same vertex color. When the fragment shader interpolates, it will be interpolating three identical values, resulting in flat shading.
This is pretty lame. The requirement of unique vertices per triangle means you can't share vertices between triangles. This will definitely increase your vertex count, causing increased delays during your VertexBuffer3D uploads as well as overall lower frame rates. However, I have not seen a better solution anywhere.
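For what it's worth, here is a small sketch of that approach (plain C, since the math is language-neutral; the struct and function names are mine, and the same idea applies when filling a VertexBuffer3D): compute one face normal per triangle via a cross product and write it into three unique vertices.

    #include <math.h>

    typedef struct { float px, py, pz;  float nx, ny, nz; } Vertex;

    /* Compute the face normal of triangle (a, b, c) and store it in all
       three output vertices. Assumes the triangle is not degenerate. */
    void makeFlatTriangle(const float a[3], const float b[3], const float c[3],
                          Vertex out[3])
    {
        float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        float n[3] = { u[1]*v[2] - u[2]*v[1],
                       u[2]*v[0] - u[0]*v[2],
                       u[0]*v[1] - u[1]*v[0] };
        float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        const float *pos[3] = { a, b, c };
        for (int i = 0; i < 3; ++i) {
            out[i].px = pos[i][0]; out[i].py = pos[i][1]; out[i].pz = pos[i][2];
            out[i].nx = n[0]/len;  out[i].ny = n[1]/len;  out[i].nz = n[2]/len;
        }
    }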
I'm just trying to better understand the DirectX pipeline. I'm curious whether depth buffers are mandatory in order to get things to work, or whether it's just a buffer you need if you want objects to appear behind one another.
The depth buffer is not mandatory. In a 2D game, for example, there is usually no need for it.
You need a depth buffer if you want objects to appear behind each other, but still want to be able to draw them in arbitrary order.
If you draw all triangles from back to front, and none of them intersect, then you could do without the depth buffer. However, it's generally easier to skip the depth sorting and just use the depth buffer anyway.
Depth buffers are not mandatory. They simply solve the following problem: suppose you have an object near the camera which is drawn first. Then, after that is already drawn, you want to draw an object which is far away but occupies the same on-screen position as the nearby object. Without a depth buffer, it gets drawn on top, which looks wrong. With a depth buffer, it is obscured, because the GPU figures out it's behind something else that has already been drawn.
You can turn them off and deal with that yourself, e.g. by drawing back to front (but this has other problems that depth buffering solves), which is easy in 2D games. Alternatively, for some reason you might want that overdraw as some kind of effect. But a depth buffer is by no means necessary for basic rendering.
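To illustrate the back-to-front alternative, here is a tiny sketch (plain C, with made-up types) of the painter's-algorithm sort you would do each frame before issuing draw calls without a depth buffer:

    #include <stdlib.h>

    typedef struct { float viewDepth; /* distance from the camera */ } Drawable;

    /* Sort farthest-first so nearer objects are drawn last and cover the rest. */
    static int compareByDepthDescending(const void *pa, const void *pb)
    {
        float a = ((const Drawable *)pa)->viewDepth;
        float b = ((const Drawable *)pb)->viewDepth;
        return (a < b) - (a > b);   /* descending: far objects first */
    }

    void drawScene(Drawable *objects, size_t count)
    {
        qsort(objects, count, sizeof(Drawable), compareByDepthDescending);
        for (size_t i = 0; i < count; ++i) {
            /* submit the draw call for objects[i] here, back to front */
        }
    }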