In TWGL, why are "setBuffersAndAttributes" and "drawBufferInfo" two separate calls?

In TWGL, why do I have to pass the buffer information first to setBuffersAndAttributes and then pass it again to drawBufferInfo? I'm very new to WebGL and am just trying to understand the pipeline. Why are these two separate calls? In which scenario would I first set the buffer information and then do something else before drawing it, or not draw it at all, or draw different buffer information?

why are these two separate calls, in which scenario would I first set the buffer information and then do something else before drawing it or not draw it at all or draw different buffer information?
The most common reason would be drawing many of the same thing.
twgl.setBuffersAndAttributes(gl, someProgramInfo, someBufferInfo);
// uniforms shared by all instances, like projection or view
twgl.setUniforms(someProgramInfo, sharedUniforms);
for (const instance of instances) {
  // uniforms unique to this instance, like material, texture, or world matrix
  twgl.setUniforms(someProgramInfo, instance.uniforms);
  twgl.drawBufferInfo(gl, someBufferInfo);
}
The only thing twgl.drawBufferInfo does is call gl.drawArrays or gl.drawElements, and the only info it needs to do that is:
the draw count, as in how many vertices to process
whether or not the data is indexed (in other words, whether it should call gl.drawArrays or gl.drawElements)
if it's indexed, what type the indices are: UNSIGNED_BYTE, UNSIGNED_SHORT, or UNSIGNED_INT
Its sole purpose is so you don't have to change your code from gl.drawArrays to gl.drawElements, or vice versa, if you change the data from non-indexed to indexed.
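Put differently, the whole function is a small dispatch. Here is a minimal sketch of that logic (not TWGL's actual source), assuming a bufferInfo with numElements, indices, and elementType fields, which is how TWGL's BufferInfo objects are shaped:
// rough sketch of what twgl.drawBufferInfo boils down to
function drawBufferInfoSketch(gl, bufferInfo, type = gl.TRIANGLES, offset = 0) {
  const numElements = bufferInfo.numElements;
  if (bufferInfo.indices) {
    // indexed data: elementType is UNSIGNED_BYTE, UNSIGNED_SHORT, or UNSIGNED_INT
    gl.drawElements(type, numElements, bufferInfo.elementType, offset);
  } else {
    // non-indexed data
    gl.drawArrays(type, offset, numElements);
  }
}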

Related

WebGL Multi-Render Target: For drawBuffers, what does gl.BACK do?

I am working on my Multiple Render Target pipeline and I came across a curiosity in the docs that I don't fully understand, and an hour of googling hasn't helped me find a clear answer.
You utilize gl.drawBuffers([...]) to link the locations used in your shader to the actual color attachments in your framebuffer. So, most of the expected parameters make sense:
gl.NONE - makes the shader output for this location NOT go to any color attachment in the FBO
gl.COLOR_ATTACHMENT[0 - 15] - makes the shader output for this location go to the specified color attachment
But then we have this mysterious target (from the docs):
gl.BACK: Fragment shader output is written into the back color buffer.
I don't think I understand what the back color buffer is, especially relative to the currently attached FBO. As far as I know you don't specify a 'back color buffer' when making an FBO... so what does this mean? What is this 'back color buffer'?
In WebGL the backbuffer is effectively "the canvas". It's called the backbuffer because sometimes there is a frontbuffer. Canvases in WebGL are double buffered: one buffer is whatever is currently visible, the other is the buffer you're currently drawing to.
You can't use [gl.BACK, gl.COLOR_ATTACHMENT0].
When writing to a framebuffer, each entry can only be the attachment at the same index, or gl.NONE. For example, imagine you have 4 attachments. Then the array you pass to drawBuffers is as follows:
gl.drawBuffers([
  gl.COLOR_ATTACHMENT0,  // or gl.NONE
  gl.COLOR_ATTACHMENT1,  // or gl.NONE
  gl.COLOR_ATTACHMENT2,  // or gl.NONE
  gl.COLOR_ATTACHMENT3,  // or gl.NONE
]);
You cannot swap attachments around:
gl.drawBuffers([
  gl.NONE,
  gl.COLOR_ATTACHMENT0,  // ERROR! This has to be COLOR_ATTACHMENT1 or NONE
]);
You can't use gl.BACK when a framebuffer is bound. gl.BACK is only for when you're writing to the canvas, in other words when the framebuffer binding is null, as in gl.bindFramebuffer(gl.FRAMEBUFFER, null):
gl.drawBuffers([
  gl.BACK,  // or gl.NONE
]);
Note: drawBuffers state is part of the state of each framebuffer (and of the canvas itself).
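To make the framebuffer-vs-canvas split concrete, here is a hedged WebGL2 sketch; texA and texB stand in for textures you have already created and sized:
// MRT setup: the drawBuffers setting is remembered per framebuffer
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texA, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, texB, 0);
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

// back to the canvas (framebuffer = null): only gl.BACK or gl.NONE is valid here
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.drawBuffers([gl.BACK]);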

Sample Count In Metal

I am drawing a texture in a quad with sample count 4, and then I am drawing a triangle with sample count 4. I feel there is no need to draw the texture quad with sample count 4; it affects performance. Is it possible to use different sample counts in a single program?
It's not possible to use different MSAA sample counts with a single render pipeline state or within a single render pass (render command encoder), because each of these objects is immutably configured with its sample count. To achieve MSAA, the render pass has one or more attachments that must be resolved to produce a final image. If you need different sample counts for different draw calls (i.e., you want to draw some MSAA passes and some non-MSAA passes), first perform your multisample passes, then load the resolveTextures of the final MSAA pass as the textures of the corresponding attachments in subsequent passes (using a loadAction of .load), and then do your non-MSAA drawing.

iOS, OpenGL ES 2.0: using multiple textures, but only getting one active texture unit

I'm developing an OpenGL ES application for iOS.
I'm trying to blend two textures in my shader, but I always get only one active texture unit.
I have generated two textures and linked them with two "sampler2D" uniforms in the fragment shader.
I set them to units 0 and 1 by using glUniform1f();
And I have bound the textures using a loop:
for (int i = 0; i < 2; i++)
{
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, textures[i]);
}
But when I draw the OpenGL frame, only one unit shows as active in the debugging tool's screenshot.
So, what have I been doing wrong?
The way I read the output of that tool (I have not used it), the left pane shows the currently active texture unit. There is always exactly one active texture unit, corresponding to your last call of glActiveTexture(). This means that after you call:
glActiveTexture(GL_TEXTURE0 + i);
the value in the left circled field will be the value of i.
The right pane shows the textures bound to each texture unit. Since you bound textures to unit 0 and 1 with the loop shown in your question, it shows a texture (with id 201) bound to texture unit 0, and a texture (with id 202) bound to texture unit 1.
So as far as I can tell, the state shown in the screenshot represents exactly what you set based on your description and code fragment.
Based on the wording in your question, you might be under the impression that glActiveTexture() enables texture units. That is not the case. glActiveTexture() only specifies which texture unit subsequent glBindTexture() calls operate on.
Which textures are used is then determined by the values you set for the sampler uniforms of your shader program, and by the textures you bound to the corresponding texture units. The value of the currently active texture unit has no influence on the draw call, only on texture binding.
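For reference, here is a minimal sketch of that setup in WebGL-style JavaScript (the GL ES 2.0 calls are the same ones used in the question; the names program, textures, u_texture0, and u_texture1 are illustrative). Note that sampler uniforms take an integer texture-unit index, so they are set with uniform1i:
gl.useProgram(program);
// tell each sampler which texture unit to read from
gl.uniform1i(gl.getUniformLocation(program, "u_texture0"), 0);
gl.uniform1i(gl.getUniformLocation(program, "u_texture1"), 1);
// bind one texture to each unit; the "active" unit only affects binding
for (let i = 0; i < 2; i++) {
  gl.activeTexture(gl.TEXTURE0 + i);
  gl.bindTexture(gl.TEXTURE_2D, textures[i]);
}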

Storing game data for 2D game like Star Ocean Second Story and Legend of Mana

I'm trying to go for a 2D game like Legend of Mana and Star Ocean Second Story rather than a tile-based 2D game.
OVERVIEW
Currently, the way I'm going about building my game is like so:
I have geometry files and texture files. Geometry files store width, height, XY position, Texture ID number to use and texture name to use.
The way I'm trying to do is:
I will have a scene manager object that loads "scene" objects. Each scene object stores a list of all geometry objects in that scene as well as texture and sound objects (to be implemented).
Each scene is rendered using a vertex array, whereby the scene manager object calls a method like "get scene vertices by scene name", which returns a pointer to an array of GLfloats (GLfloat *), and this pointer gets used in OpenGL's glVertexPointer() function.
When I want to update each character's position (like the hero for example), my aim is to use the "hero" game objects in the current scene and call a function like:
Hero.Move(newXPosition, newYPosition);
which will actually alter the hero geometry object's vertex data. So it will do something like:
for (int i = 0; i < 8; i += 2)  // step by 2: vertexList holds X,Y pairs
{
    vertexList[i]     += newXPosition;  // X of this vertex
    vertexList[i + 1] += newYPosition;  // Y of this vertex
}
This way, when I go to render the scene in the render frame, it will render the entire scene again with the updated vertex coordinates.
Each object in the game will just be a quadrilateral with a texture mapped to it.
THE PROBLEM
I'm using Objective C for my programming and OpenGL. This is for iOS platform.
I have been successful thus far using this method to render 1 object.
The problem I have is I'm using an NSMutableDictionary, a data structure that uses key-value pairs, to store geometry instance objects in the scene object class. Dictionaries in Objective-C don't return their entries in the same order every time the code is run; they come back in arbitrary order.
Because of this, I am having trouble combining the vertex array data from every geometry object in the scene object and passing out one single vertex pointer to GLfloats.
Each geometry object stores its own array of 8 vertex values (4 pairs of X,Y coordinate values). I would like each geometry object to manage its own vertices (so I can use Move, Rotate, etc. as mentioned earlier), and at the same time I would like my scene object to be able to output a single pointer to the vertex data of all geometry objects in the current scene, for use in OpenGL's glVertexPointer() function.
I am trying to avoid calling OpenGL's glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) multiple times, as in: draw hero, draw map, draw AI agents, draw BG objects. That would not be very efficient. Minimizing the number of GL draw calls as much as possible (to 1 draw call, preferably), especially on limited hardware like the iPhone, is what was suggested when I was reading about OpenGL game development.
SUGGESTIONS?
What is the best practice way of going about doing what I'm trying to do?
Should I use a SQL database to store my game data, like geometry vertex data, and load JUST 1 scene into iPhone memory from the SQL database file on the iPhone's disk?
You can use a list of lists to keep track of draw layer and order, and use the dictionary solely for fast lookup, as in the sketch below.
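A minimal sketch of that idea, in JavaScript for brevity (the structure carries over to Objective-C with NSMutableArray and NSMutableDictionary; all names here are illustrative):
const layers = [[], []];   // layers[0] = background, layers[1] = actors; draw order is explicit
const byName = new Map();  // name -> object, used only for fast lookup

function addObject(layerIndex, name, object) {
  layers[layerIndex].push(object);  // stable, explicit draw order
  byName.set(name, object);         // O(1) lookup; iteration order never matters
}

// build one combined vertex array per frame, in a stable order,
// so a single glVertexPointer/glDrawArrays-style call can consume it
function gatherVertices() {
  const vertices = [];
  for (const layer of layers) {
    for (const obj of layer) {
      vertices.push(...obj.vertexList);  // each object still owns its 8 values
    }
  }
  return new Float32Array(vertices);
}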
What I don't understand is why you don't use Cocos2D, which happens to be built on the scene-manager paradigm. Think of all the development time you will save...
I have worked at a company that made wonderful games. It eventually died because they kept putting out buggy games, due to a lack of development time for debugging. They did, however, find time to create a graphics rendering engine.
I thought, and still think, they had the wrong priorities. It seems to me you are making the same mistake: are you trying to make an iOS game, or are you trying to learn how to build a 2D game engine for iOS?
Note: Cocos2D is Open Source, you can therefore read it, now that you have thought about the process to create such an engine.

HLSL: Handle lack of TexCoords?

I'm in the process of writing my first few shaders, usually writing a shader to accomplish a feature once I realize that the main XNA library doesn't support it.
The trouble I'm running into is that not all of my models in a particular scene have texture data in them, and I can't figure out how to handle that. The main XNA libraries seem to handle it by using a wrapper class for BasicEffect, loading it through the content manager and selectively enabling or disabling texture processing accordingly.
How difficult is it to accomplish this for a custom shader? What I'm writing is a generic "hue shift" effect; that is, I want whatever gets drawn with this technique to have its texture colors (if any) and its vertex color hue-shifted by a certain degree. Do I need to write separate shaders, one with textures and one without? If so, when I'm looping through my MeshParts, is there any way to detect whether a given part has texture coordinates so that I can apply the correct effect?
Yes, you will need separate shaders, or rather different "techniques" - it can still be the same effect and use much of the same code. You can see how BasicEffect (at least the pre-XNA 4.0 version) does it by reading the source code.
To detect whether or not a model mesh part has texture coordinates, try this:
// Note: this allocates an array, so do it at load-time
var elements = meshPart.VertexBuffer.VertexDeclaration.GetVertexElements();
bool result = elements.Any(e =>
    e.VertexElementUsage == VertexElementUsage.TextureCoordinate);
The way the content pipeline sets up its BasicEffect is via BasicMaterialContent. The BasicEffect.TextureEnabled property is simply turned on if Texture is set.
