In OpenGL ES or Metal, I render something to the screen using vertex/fragment shaders. But immediately after the fragment shader is done, I need to pass a few vertices and draw polylines connecting them. How do I do this? In other words, is it possible to chain the output of shaders with basic OpenGL commands that draw polygons? I could in principle draw the lines by implementing additional logic in the fragment shader, but that would involve a lot of calculation and if-then-else branching, which I think is not a very clean approach.
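One hedged sketch of the usual approach: nothing is chained inside the shaders at all; you simply issue a second draw call with its own (trivial) vertex/fragment pair and let the GPU rasterize the lines on top. In Metal this is two draws on the same render command encoder; the pipeline states, buffers, and vertex counts below are hypothetical placeholders:

```swift
import Metal

// Sketch only: `meshPipeline`, `linePipeline`, `meshBuffer`, `lineBuffer`
// and the vertex counts are assumed to be set up elsewhere.
func encodeFrame(encoder: MTLRenderCommandEncoder,
                 meshPipeline: MTLRenderPipelineState,
                 linePipeline: MTLRenderPipelineState,
                 meshBuffer: MTLBuffer,
                 lineBuffer: MTLBuffer) {
    // Draw 1: the normal scene, through its own vertex/fragment shaders.
    encoder.setRenderPipelineState(meshPipeline)
    encoder.setVertexBuffer(meshBuffer, offset: 0, index: 0)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 36)

    // Draw 2: polylines on top, drawn with a second (trivial) shader pair.
    // No fragment-shader trickery needed; the GPU rasterizes the lines.
    encoder.setRenderPipelineState(linePipeline)
    encoder.setVertexBuffer(lineBuffer, offset: 0, index: 0)
    encoder.drawPrimitives(type: .lineStrip, vertexStart: 0, vertexCount: 8)
}
```

The OpenGL ES equivalent is the same idea: a second glUseProgram followed by a glDrawArrays with GL_LINE_STRIP.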
What is the difference between files named "name.shader", "name.vsh", and "name.fsh" in SceneKit? When I call some shaders in my project, my model looks like it is covered by a purple mask. What should I do?
There are three kinds of shaders when working with SceneKit.
As in every OpenGL app, there are vertex shaders and fragment shaders. Vertex shaders often have the .vert or .vsh extension, and fragment shaders often have the .frag or .fsh extension. These shaders are used with the SCNProgram class.
In addition, SceneKit exposes the concept of shader modifiers, which often have the .shader extension. Shader modifiers can affect either a vertex or a fragment shader and are used with the SCNShadable protocol.
These file extensions are only conventions; they could really be anything you want.
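A minimal Swift sketch of both approaches (the Metal function names and the modifier snippet are placeholder assumptions; in practice you would use one approach or the other, since SCNProgram replaces SceneKit's whole shading pipeline):

```swift
import SceneKit

let material = SCNMaterial()

// Approach 1: replace the whole pipeline with SCNProgram.
// "myVertex" and "myFragment" are assumed to exist in the app's
// Metal shader library.
let program = SCNProgram()
program.vertexFunctionName = "myVertex"
program.fragmentFunctionName = "myFragment"
material.program = program

// Approach 2: patch SceneKit's built-in shaders with a shader modifier
// (SCNShadable), typically loaded from a .shader file. Illustrative
// snippet: invert the output color.
material.shaderModifiers = [
    .fragment: "_output.color.rgb = 1.0 - _output.color.rgb;"
]
```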
I was wondering if it is worth it to use shaders to draw a 2D texture in XNA. I am asking because with OpenGL it is much faster if you use GLSL.
Everything on a modern GPU is drawn using a shader.
The old immediate-mode rendering (ie: glBegin/glVertex) gets converted to something approximating buffers and shaders somewhere in the driver. This is why using GLSL is "faster" - because it's closer to the metal, you're not going through a conversion layer.
For a modern API, like XNA, everything is already built around "buffers and shaders".
In XNA, SpriteBatch provides its own shader. The source code for the shader is available here. The shader itself is very simple: The vertex shader is a single matrix multiplication to transform the vertices to the correct raster locations. The pixel shader simply samples from your sprite's texture.
You can't really do much to make SpriteBatch's shader faster - there's almost nothing to it. There are some things you can do to make the buffering behaviour faster in specific circumstances (for example: if your sprites don't change between frames) - but this is kind of advanced. If you're experiencing performance issues with SpriteBatch, be sure you're using it properly in the first place. For what it does, SpriteBatch is already extremely well optimised.
For more info on optimisation, see this answer.
If you want to pass a custom shader into SpriteBatch (eg: for a special effect) use this overload of Begin and pass in an appropriate Effect.
I'm supposed to have a 2D program using WebGL, with a picture over the canvas as a texture, and above it I draw lines, also using WebGL. The thing is, the two use different shader programs, and whenever I'm drawing the lines everything gets messed up. Can anyone help?
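A hedged sketch of the usual fix, assuming the problem is state leaking between the two programs: re-bind the buffer and attribute state every time you switch programs, since useProgram switches only the program, not the bindings. WebGL mirrors OpenGL ES 2.0, so the same pattern is sketched here against OpenGL ES in Swift (all handles and attribute locations are hypothetical placeholders):

```swift
import OpenGLES

// Sketch only: programs, VBOs and attribute locations are assumed
// to have been created and queried elsewhere.
func drawFrame(texProgram: GLuint, lineProgram: GLuint,
               quadVBO: GLuint, lineVBO: GLuint,
               quadPosLoc: GLuint, linePosLoc: GLuint) {
    // Draw 1: the textured background quad with its own program and state.
    glUseProgram(texProgram)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), quadVBO)
    glEnableVertexAttribArray(quadPosLoc)
    glVertexAttribPointer(quadPosLoc, 2, GLenum(GL_FLOAT),
                          GLboolean(GL_FALSE), 0, nil)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)

    // Draw 2: the lines on top. Re-bind everything; the second draw
    // must not inherit stale bindings from the first.
    glUseProgram(lineProgram)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), lineVBO)
    glEnableVertexAttribArray(linePosLoc)
    glVertexAttribPointer(linePosLoc, 2, GLenum(GL_FLOAT),
                          GLboolean(GL_FALSE), 0, nil)
    glDrawArrays(GLenum(GL_LINE_STRIP), 0, 16)
}
```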
I was playing a bit with the awesome GPUImage framework and was able to reproduce some "convex"-like effects with fragment shaders.
However, I'm wondering if it's possible to get some more complex plane curving in 3D using GPUImage or any other OpenGL render-to-texture approach.
The effect I'm trying to achieve looks like this one - is there any chance I can get something like it using the depth buffer and a vertex shader, or do I just need to develop a more sophisticated fragment shader emulating the Z coordinate?
This is what I get now using only a fragment shader and some periodic bulging.
Thanks
Well, another thought: maybe it's possible to prototype a curved surface in some 3D modeling app and somehow map the texture to it?
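That last idea is workable; a minimal sketch in SceneKit, assuming a curved mesh exported from a modeling app (the file name "curved.scn", node name "plane", and image name are hypothetical):

```swift
import SceneKit
import UIKit

// Load a curved plane authored in a 3D modeling app.
let scene = SCNScene(named: "curved.scn")!
let plane = scene.rootNode.childNode(withName: "plane", recursively: true)!

// Map the source image onto the curved surface; the GPU then does
// real 3D curving with proper depth instead of a fragment-shader fake.
plane.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "photo.png")
```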
Is it possible to optimise OpenGL ES 2.0 drawing by using dirty rectangles?
In my case, I have a 2D app that needs to draw a background texture (full screen on iPad), followed by the contents of several VBOs on each frame. The problem is that these VBOs can potentially contain millions of vertices, taking anywhere up to a couple of seconds to draw everything to the display. However, only a small fraction of the display would actually be updated each frame.
Is this optimisation possible, and how (or perhaps more appropriately, where) would this be implemented? Would some kind of clipping plane need to be passed into the vertex shader?
If you set an area with glViewport, clipping is adjusted accordingly. This, however, happens after the vertex shader stage, just before rasterization. Since the GL cannot know the result of your own vertex program, it cannot cull any vertices before running the vertex program; after that stage, it does. How efficiently it does this depends on the actual GPU.
Thus, for full performance, you have to sort and split your objects into smaller (eg. rectangularly bounded) tiles and test them against the field of view yourself.
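A hedged sketch of that tiling idea in Swift, using CGRect for the bounds tests; the Tile type and drawTile are placeholders for however the VBOs are actually organised:

```swift
import CoreGraphics

// Hypothetical tile: a rectangular chunk of geometry with its own VBO.
struct Tile {
    let bounds: CGRect    // screen-space bounding rectangle of the tile
    let vbo: UInt32       // handle of the buffer holding this tile's vertices
}

// Placeholder for the actual GL draw: bind tile.vbo, set up the
// attribute pointers, then glDrawArrays(...) as usual.
func drawTile(_ tile: Tile) { /* ... */ }

// Draw only tiles whose bounds intersect the region that changed,
// instead of pushing millions of vertices through the vertex shader.
func drawDirty(tiles: [Tile], dirtyRect: CGRect) {
    for tile in tiles where tile.bounds.intersects(dirtyRect) {
        drawTile(tile)
    }
}
```

Combined with glScissor to restrict rasterization to the dirty rectangle, this avoids both the vertex and the fragment cost of the unchanged regions.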