Metal: Instance a Fragment Function?

I’m a beginner to MSL, I’ve written a pair of vertex and fragment shaders. The fragment shader takes a material argument to implement phong lighting. I then learned about instancing and modified my vertex function with the instance_id tag, but it seems MSL has no analogous tag for my material argument in the fragment shader. I could put material in my vertex uniform, but I’d like instances to be able to have different materials. I could save the instance_id in my rasterizer struct and pass a material buffer to the fragment shader, but that ruins (my understanding of) the point of instancing. Is there a way to instance a fragment function, or a good-practice alternative?

Related

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen demos on WebGL that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with a corresponding color buffer. Can I have some rectangles with a texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but about OpenGL and 3D graphics in general.
The most common way to interact is to set attributes at setup time and to set uniforms both at setup time and at run time.
In general, the answer to all of your questions is "use an engine".
Picture it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is a JS API that allows low-level communication with the GPU (remember, low level); and then you have the shader, which is a GPU program you must provide, and it works only with specific data.
Doing anything more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool is what we call an engine. An engine usually bundles together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides fairly complicated logic to build only a single pair, or a few pairs, of shaders from a few simple engine API calls, because switching shader programs at run time is expensive.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's have a buffer that we call a vertex buffer. We put various data into the vertex buffer. Data doesn't go in as individual values but as sets; each unique piece of data in a set is called an attribute. An attribute can have any meaning for its vertex that the vertex shader or fragment shader code decides.
If we have a buffer full of data for triangles, it is possible to add, for example, an attribute that says whether a specific vertex should texture the triangle or not, and to do the texturing logic in the shader. Note that the attribute layout must be the same size for every vertex, so the textured triangles will take up the same space as the non-textured ones.
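As a rough illustration of that idea, here is a sketch of such a GLSL pair embedded in TypeScript strings; the attribute and uniform names (a_useTexture, u_texture, and so on) are made up for illustration, not taken from any particular demo.
// Hypothetical GLSL pair: a per-vertex "useTexture" attribute is passed to the
// fragment shader as a varying and decides whether the texture is sampled.
const vertexSrc = `
attribute vec2 a_position;
attribute vec2 a_texCoord;
attribute vec4 a_color;
attribute float a_useTexture;   // 0.0 = flat color, 1.0 = textured

varying vec2 v_texCoord;
varying vec4 v_color;
varying float v_useTexture;

void main() {
  gl_Position = vec4(a_position, 0.0, 1.0);
  v_texCoord = a_texCoord;
  v_color = a_color;
  v_useTexture = a_useTexture;
}`;

const fragmentSrc = `
precision mediump float;

uniform sampler2D u_texture;

varying vec2 v_texCoord;
varying vec4 v_color;
varying float v_useTexture;

void main() {
  vec4 texColor = texture2D(u_texture, v_texCoord);
  // Blend between the flat vertex color and the texture sample.
  gl_FragColor = mix(v_color, texColor, v_useTexture);
}`;
Untextured vertices simply carry a flag of 0.0 and unused texture coordinates, which is what keeps every vertex the same size.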
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program at run time with the WebGL API function useProgram:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
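A minimal sketch of switching programs between draw calls (the program handles and vertex counts here are hypothetical; buffer and attribute setup is elided):
// Hypothetical sketch: two compiled WebGL programs, e.g. one for colored
// rectangles and one for textured ones, switched between draw calls.
function drawScene(
  gl: WebGLRenderingContext,
  colorProgram: WebGLProgram,
  textureProgram: WebGLProgram,
  colorVertexCount: number,
  texturedVertexCount: number
): void {
  // Draw the flat-colored geometry first.
  gl.useProgram(colorProgram);
  // ...bind the buffers/attributes that colorProgram expects here...
  gl.drawArrays(gl.TRIANGLES, 0, colorVertexCount);

  // Switch programs, then draw the textured geometry.
  gl.useProgram(textureProgram);
  // ...bind the buffers/attributes/texture that textureProgram expects...
  gl.drawArrays(gl.TRIANGLES, 0, texturedVertexCount);
}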
How to draw wireframe on and off? Can it be combined with textures? In
other words, is it possible to write a shader that can turn features
like wireframe on and off with a flag, or does it take two different
calls to two different shaders?
WebGL has no fill/wireframe polygon mode like desktop OpenGL, but it can draw the same vertices with line primitives (gl.LINES, gl.LINE_STRIP), and the primitive type is chosen per draw call, independent of the shader program. It is also possible to write a shader that draws a wireframe-like effect and control it with a flag (the flag can be either a uniform or an attribute).
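A hedged sketch of the two-pass approach, assuming you have already built a second index buffer that lists the triangle edges:
// Hypothetical sketch: the same vertex buffer drawn twice, once filled and
// once as a wireframe, by switching the primitive type and index buffer.
function drawWithOptionalWireframe(
  gl: WebGLRenderingContext,
  triangleIndices: WebGLBuffer,   // indices describing triangles
  triangleIndexCount: number,
  lineIndices: WebGLBuffer,       // indices describing the triangle edges
  lineIndexCount: number,
  wireframe: boolean
): void {
  // Filled pass.
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndices);
  gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0);

  if (wireframe) {
    // Wireframe pass: same vertices, different primitive type and indices.
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, lineIndices);
    gl.drawElements(gl.LINES, lineIndexCount, gl.UNSIGNED_SHORT, 0);
  }
}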
All the demos I have seen use an index buffer with triangles. Is Quads
no longer supported in WebGL? Obviously for some things triangles
would be needed, but if I have a bunch of rectangles it would be nice
not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; there is no quad primitive. Presumably this is because triangles keep the rasterizer and drivers simpler, so a rectangle is drawn as two triangles, usually via an index buffer or a triangle strip.
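For example, a single rectangle as two indexed triangles might be set up like this (a sketch; the coordinates and names are illustrative only):
// Hypothetical sketch: one rectangle expressed as two triangles.
// Four unique corner vertices, six indices.
const rectVertices = new Float32Array([
  // x,    y
  -0.5, -0.5,   // 0: bottom-left
   0.5, -0.5,   // 1: bottom-right
   0.5,  0.5,   // 2: top-right
  -0.5,  0.5,   // 3: top-left
]);

const rectIndices = new Uint16Array([
  0, 1, 2,   // first triangle
  0, 2, 3,   // second triangle
]);

function uploadRect(gl: WebGLRenderingContext): void {
  const vbo = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferData(gl.ARRAY_BUFFER, rectVertices, gl.STATIC_DRAW);

  const ibo = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, rectIndices, gl.STATIC_DRAW);

  // Later: gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);
}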
For all three of the above scenarios, if I want to change the points,
the color, the texture, or the transparency, am I correct in
understanding the glSubBuffer will allow replacing data currently in
the buffer with new data.
I would say it is relatively rare to update buffer data every frame, since it can slow a program down, so do it sparingly. The WebGL name for that call is bufferSubData (glBufferSubData in OpenGL), and yes, it lets you replace part of the data currently in a buffer with new data.
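A minimal sketch of such a partial update with bufferSubData, assuming a hypothetical layout of four corners of two floats each per rectangle packed into one shared buffer:
// Hypothetical sketch: overwrite part of an existing vertex buffer in place,
// e.g. to move one rectangle without re-uploading everything.
function updateRectPosition(
  gl: WebGLRenderingContext,
  vbo: WebGLBuffer,
  rectIndex: number,            // which rectangle inside the shared buffer
  newVertices: Float32Array     // replacement data for that rectangle
): void {
  const FLOATS_PER_RECT = 8;    // assumed layout: 4 corners * (x, y)
  const byteOffset = rectIndex * FLOATS_PER_RECT * Float32Array.BYTES_PER_ELEMENT;

  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  // Replaces only the bytes starting at byteOffset; the rest of the buffer
  // is untouched, so no reallocation happens.
  gl.bufferSubData(gl.ARRAY_BUFFER, byteOffset, newVertices);
}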
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes, it is called a scene graph; it is widely used and can also be combined with other techniques such as display lists.
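As a rough sketch of what such a scene-graph node could look like (the names and fields here are invented for illustration, not taken from any particular engine):
// Hypothetical sketch of a minimal scene-graph node: each node owns its own
// appearance state and children, so "change the color of rectangle N" becomes
// a single property update that the renderer picks up on the next frame.
class SceneNode {
  children: SceneNode[] = [];
  color: [number, number, number, number] = [1, 1, 1, 1];
  texture: WebGLTexture | null = null;   // null = draw with flat color
  visible = true;

  constructor(public name: string) {}

  add(child: SceneNode): SceneNode {
    this.children.push(child);
    return child;
  }

  // Walk the graph depth-first and hand each visible node to the renderer.
  traverse(visit: (node: SceneNode) => void): void {
    if (!this.visible) return;
    visit(this);
    for (const child of this.children) child.traverse(visit);
  }
}

// Usage: flip the state, then redraw.
const root = new SceneNode("root");
const rect = root.add(new SceneNode("rect0"));
rect.color = [1, 0, 0, 1];               // recolor
// rect.texture = someLoadedTexture;     // or swap in a texture
root.traverse((node) => { /* issue the draw call for this node */ });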

What is the difference between shaders with the extensions .shader, .vsh, and .fsh?

What is the difference between files named "name.shader", "name.vsh", and "name.fsh" in SceneKit? When I use some shaders in my project, my model ends up looking like a purple mask. What should I do?
There are three kinds of shader files you will encounter when working with SceneKit.
As in every OpenGL app, there are vertex shaders and fragment shaders. Vertex shaders often have the .vert or .vsh extension and fragment shaders often have the .frag or .fsh extension. These shaders are used with the SCNProgram class.
In addition, SceneKit exposes the concept of shader modifiers, which often have the .shader extension. Shader modifiers can affect either a vertex or a fragment shader and are used with the SCNShadable protocol.
These file extensions are just conventions and could really be anything you want.

Use single vertex buffer or many?

I'm implementing a 2D game with lots of independent rectangular game pieces of various dimensions. The dimensions of each piece do not change between frames. Most of the pieces will display an image and share the same fragment shader. I am new to WebGL and it is not clear to me what the best strategy is for managing vertex buffers in regard to performance for this situation.
Is it better to use a single vertex buffer (quad) to represent all of the game's pieces and then rescale those vertices in the vertex shader for each piece? Or, should I define a separate static vertex buffer for each piece?
The GPU is a state machine, and switching states is expensive (even more so when done through WebGL, because of the additional layer of validation introduced by the WebGL implementation), so binding vertex buffers is expensive.
It's good practice to reduce API calls to a minimum.
Even when you have multiple distinct objects, you still want to use a single vertex buffer and use the offset parameters of the drawArrays or drawElements methods.
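A minimal sketch of that single-buffer approach, assuming each piece is two triangles (six unindexed vertices) packed into one shared buffer; the Piece interface and names are made up for illustration:
// Hypothetical sketch: all game pieces packed into one vertex buffer, each
// piece drawn with its own first-vertex offset instead of rebinding buffers.
interface Piece {
  firstVertex: number;   // offset into the shared buffer, in vertices
  vertexCount: number;   // 6 for a rectangle made of two triangles
}

function drawPieces(
  gl: WebGLRenderingContext,
  sharedVbo: WebGLBuffer,
  positionLocation: number,
  pieces: Piece[]
): void {
  // Bind the shared buffer and describe its layout once.
  gl.bindBuffer(gl.ARRAY_BUFFER, sharedVbo);
  gl.enableVertexAttribArray(positionLocation);
  gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

  // One draw call per piece, with no further buffer binds.
  for (const piece of pieces) {
    gl.drawArrays(gl.TRIANGLES, piece.firstVertex, piece.vertexCount);
  }
}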
Here is a list of API calls ordered by decreasing expensiveness(top is most expensive):
FrameBuffer
Program
Texture binds
Vertex format
Vertex bindings
Uniform updates
For more information you can watch the great talk "Beyond Porting: How Modern OpenGL Can Radically Reduce Driver Overhead" by Cass Everitt and John McDonald, which is also where the list above comes from.
While those measurements were done on Nvidia hardware, the ordering is a good guideline for AMD and Intel graphics hardware as well.

GPUImageLookupFilter with intensity control?

I am using Brad Larson's GPUImage framework for my project.
I was trying to find a way to add intensity control to GPUImageLookupFilter and came across
https://github.com/BradLarson/GPUImage/issues/1485
gl_FragColor = mix(textureColor, vec4(vec3(newColor),1.0), mixTexture);
the "textureColor" is the original texture, and "newColor" is the LookupFilter result, and mixTexture is the Alpha value which is (0 ~ 1.0), you can think it as intensity variable.
I do not know how to implement this,
I have no knowledge of how to implement OpenGL shaders. Could anyone tell me where to add this code to implement intensity control to GPUImageLookupFilter?
Every GPUImage filter has its own fragment shader. These are defined at the top of the .m file for that filter as string constants. In the case of GPUImageLookupFilter, that's the kGPUImageLookupFragmentShaderString at the top of GPUImageLookupFilter.m.
These fragment shaders are C-like programs that have a few unique attributes when compared to standard C, but should still be reasonably easy to follow once you've seen a few examples.
As was pointed out in that issue, if you want this kind of an intensity control (which I don't include by default for performance reasons), you'll want to create a new filter class and copy over the code from GPUImageLookupFilter into it (of which there is little beyond the fragment shader). There are two versions of the fragment shader, one for Mac (generally just without the precision qualifiers) and one for iOS. At the bottom of both of those is a line that outputs the final color. You'll want to modify that to use a mix() operation like is described above.
You'll also need to add a property to adjust this intensity value, if you don't want to hardcode an intensity value. For that, you'll need to set up a matching uniform in the fragment shader to take in this property. Look at the GPUImageBrightnessFilter for a simple example of a property that matches to a uniform in a fragment shader.
I'd recommend looking through the fragment shader code provided for the various filters, to see how these are written and how they work. Most people are able to pick up the fundamentals just by examining the various shaders already in the framework.

Simple flat shading using Stage3D/AGAL

I'm relatively new to 3D development and am currently using Actionscript, Stage3D and AGAL to learn. I'm trying to create a scene with a simple procedural mesh that is flat shaded. However, I'm stuck on exactly how I should be passing surface normals to the shader for the lighting. I would really like to just use a single surface normal for each triangle and do flat, even shading for each. I know it's easy to achieve better looking lighting with normals for each vertex, but this is the look I'm after.
Since the shader normally processes every vertex, not every triangle, is it possible for me to just pass a single normal per triangle, rather than one per vertex? Is my thinking completely off here? If anyone had a working example of doing simple, flat shading I'd greatly appreciate it.
I'm digging up an old question here since I stumbled on it via google and can see there is no accepted answer.
Stage3D does not have an equivalent of the "GL_FLAT" option for its shader engine. What this means is that the fragment shader program always receives a "varying", or interpolated, value computed from the outputs of the three respective vertices (via the vertex program). If you want flat shading, you have basically only one option:
Create three unique vertices for each triangle and set the normal for
each vertex to the face normal of the triangle. This way, each vertex
will calculate the same lighting and result in the same vertex color.
When the fragment shader interpolates, it will be interpolating three
identical values, resulting in flat shading.
This is pretty lame. The requirement of unique vertices per triangle means you can't share vertices between triangles. This will definitely increase your vertex count, causing increased delays during your VertexBuffer3D uploads as well as overall lower frame rates. However, I have not seen a better solution anywhere.
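For reference, here is a sketch of that vertex-duplication step, written in TypeScript only because the idea is API-independent; the function name and data layout are illustrative, not part of Stage3D.
// Hypothetical sketch: expand an indexed mesh into unshared vertices, stamping
// the face normal of each triangle onto all three of its vertices. The
// fragment stage then interpolates three identical normals, giving flat shading.
function expandToFlatShaded(
  positions: number[],   // x, y, z per vertex
  indices: number[]      // 3 indices per triangle
): { positions: number[]; normals: number[] } {
  const outPositions: number[] = [];
  const outNormals: number[] = [];

  for (let i = 0; i < indices.length; i += 3) {
    const [a, b, c] = [indices[i], indices[i + 1], indices[i + 2]];
    const pa = positions.slice(a * 3, a * 3 + 3);
    const pb = positions.slice(b * 3, b * 3 + 3);
    const pc = positions.slice(c * 3, c * 3 + 3);

    // Face normal = normalize(cross(pb - pa, pc - pa)).
    const u = [pb[0] - pa[0], pb[1] - pa[1], pb[2] - pa[2]];
    const v = [pc[0] - pa[0], pc[1] - pa[1], pc[2] - pa[2]];
    let n = [
      u[1] * v[2] - u[2] * v[1],
      u[2] * v[0] - u[0] * v[2],
      u[0] * v[1] - u[1] * v[0],
    ];
    const len = Math.hypot(n[0], n[1], n[2]) || 1;
    n = [n[0] / len, n[1] / len, n[2] / len];

    // Emit three unshared vertices, all carrying the same face normal.
    outPositions.push(...pa, ...pb, ...pc);
    outNormals.push(...n, ...n, ...n);
  }

  return { positions: outPositions, normals: outNormals };
}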
