So I have a cocos2d iOS app which uses OpenGL ES 2.0. I've got a fragment shader where I'm currently just grabbing vec4 lastFragColor = gl_LastFragData[0]; and manipulating it.
But what I'm really wondering is if/how I can access the neighbouring fragments of the current one, so that I can do convolution-type effects like a Gaussian blur.
The answer is no. You can't read neighbouring fragments using this GL extension; gl_LastFragData[0] only exposes the current fragment's previous colour.
Render the scene to an FBO and use the resulting texture as the input to a Gaussian blur pass instead.
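For reference, the blur pass itself might look something like this -- a minimal sketch of the horizontal half of a separable Gaussian, where the varying and uniform names (v_texCoord, u_texture, u_texelWidth) are my own placeholders, not part of cocos2d:

```glsl
// Fragment shader: horizontal pass of a separable Gaussian blur.
// u_texture is the FBO colour attachment; u_texelWidth = 1.0 / textureWidth.
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform float u_texelWidth;

void main()
{
    // 5-tap Gaussian weights (centre + two mirrored taps); sum is ~1.0.
    float w[3];
    w[0] = 0.383; w[1] = 0.242; w[2] = 0.061;

    vec4 sum = texture2D(u_texture, v_texCoord) * w[0];
    for (int i = 1; i < 3; i++) {
        vec2 offset = vec2(float(i) * u_texelWidth, 0.0);
        sum += texture2D(u_texture, v_texCoord + offset) * w[i];
        sum += texture2D(u_texture, v_texCoord - offset) * w[i];
    }
    gl_FragColor = sum;
}
```

A second, identical pass with the offsets applied vertically completes the blur; two 1-D passes are much cheaper than one full 2-D kernel.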
I need to claw back some performance in my project, so I thought of implementing clipping and, later on, backface culling.
So I'm checking in my vertex shader whether the vertex is facing me: if it is, I render it, and if not, I don't. But how do I say "don't render this" in a vertex shader?
Same with the clipping: how do I say "only paint this section" in the vertex shader? Or am I getting something wrong here? I'm quite new to OpenGL, and my project is for iPhone, so it's OpenGL ES 2.0.
Individual vertices don't face frontward or backward. When 3 or more vertices meet, the plane (triangle) they form faces front or back depending on the winding convention (clockwise/anticlockwise).
You just have to enable culling; you don't do anything different in your shaders.
Some APIs of interest (these should be more or less the same on iOS, which is not 100% compliant with OpenGL ES):
glDisable / glEnable with argument GL_CULL_FACE
glCullFace with argument GL_FRONT, GL_BACK or GL_FRONT_AND_BACK
glFrontFace with argument GL_CW or GL_CCW
For the clipping part, you could use discard in your fragment shader.
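As a sketch of that idea (the u_clipRect uniform is made up for illustration): discard runs per fragment, after rasterization, so it won't save you vertex work the way real culling does, but it does let you paint only a section:

```glsl
// Fragment shader: clip away everything outside an illustrative rectangle.
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec4 u_clipRect;   // (minX, minY, maxX, maxY) -- hypothetical uniform

void main()
{
    if (v_texCoord.x < u_clipRect.x || v_texCoord.y < u_clipRect.y ||
        v_texCoord.x > u_clipRect.z || v_texCoord.y > u_clipRect.w) {
        discard;   // this fragment is never written to the framebuffer
    }
    gl_FragColor = texture2D(u_texture, v_texCoord);
}
```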
I'm trying to implement a particle system (using OpenGL ES 2.0), where each particle is made up of a quad with a simple texture;
the red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part: I'd like each particle to have a blend mode much like Photoshop's "Overlay". I tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I'd need information about the current color of the fragment, so that I can calculate a new color based on the current and texture colors.
I also thought about using a framebuffer object, but I guess I'd need to re-render my framebuffer object into a texture for every particle, every frame, since I need the calculated fragment color wherever particles overlap each other.
I've found the math and other information regarding the Overlay calculation, but I'm having a hard time figuring out which direction to take to implement it.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping for an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
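For the Overlay mode asked about here, the per-channel math would look something like this (a sketch; the varying, uniform, and texture names are illustrative):

```glsl
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_texture;

void main()
{
    vec4 src = texture2D(u_texture, v_texCoord);  // foreground (the particle)
    vec4 dst = gl_LastFragData[0];                // what's already in the buffer

    // Photoshop Overlay, applied per channel:
    // 2ab where the background is dark, 1 - 2(1-a)(1-b) where it's light.
    vec3 multiply = 2.0 * dst.rgb * src.rgb;
    vec3 screen   = 1.0 - 2.0 * (1.0 - dst.rgb) * (1.0 - src.rgb);
    vec3 overlay  = mix(multiply, screen, step(0.5, dst.rgb));

    gl_FragColor = vec4(overlay, src.a);
}
```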
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures and then drawing a quad with a shader that blends them. Compared to that, framebuffer fetch saves an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- it's a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
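For completeness, my reading of the extension spec is that a #version 300 es shader marks its fragment output as inout rather than reading gl_LastFragData -- treat this as a sketch and double-check against the spec:

```glsl
#version 300 es
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

in vec2 v_texCoord;
uniform sampler2D u_texture;

// Declaring the output inout makes the current framebuffer value readable.
inout vec4 fragColor;

void main()
{
    vec4 src = texture(u_texture, v_texCoord);
    fragColor = abs(src - fragColor);   // the same Difference blend as above
}
```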
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE
Equation: GL_ADD
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's Overlay mode is defined so that, for a background value a and a foreground value b, the output pixel is f(a, b) = 2ab if a < 0.5, and f(a, b) = 1 - 2(1 - a)(1 - b) otherwise.
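That formula is easy to check numerically; here's a throwaway Python version of it (per channel, with values in [0, 1]):

```python
def overlay(a, b):
    """Photoshop Overlay: a is the background value, b the foreground."""
    if a < 0.5:
        return 2.0 * a * b          # darkens where the background is dark
    return 1.0 - 2.0 * (1.0 - a) * (1.0 - b)  # lightens where it's light

# A dark background pulls the result down, a light one pushes it up:
print(overlay(0.25, 0.5))  # 0.25
print(overlay(0.75, 0.5))  # 0.75
# Overlaying mid-grey is (nearly) the identity:
print(overlay(0.4, 0.5))   # 0.4
```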
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.
I was wondering if it is worth using shaders to draw a 2D texture in XNA. I'm asking because with OpenGL it is much faster if you use GLSL.
Everything on a modern GPU is drawn using a shader.
For old immediate-mode rendering (i.e. glBegin/glVertex), the driver converts the calls to something approximating buffers and shaders internally. This is why using GLSL is "faster": it's closer to the metal, and you're not going through a conversion layer.
For a modern API, like XNA, everything is already built around "buffers and shaders".
In XNA, SpriteBatch provides its own shader. The source code for the shader is available here. The shader itself is very simple: The vertex shader is a single matrix multiplication to transform the vertices to the correct raster locations. The pixel shader simply samples from your sprite's texture.
You can't really do much to make SpriteBatch's shader faster - there's almost nothing to it. There are some things you can do to make the buffering behaviour faster in specific circumstances (for example: if your sprites don't change between frames) - but this is kind of advanced. If you're experiencing performance issues with SpriteBatch, be sure you're using it properly in the first place. For what it does, SpriteBatch is already extremely well optimised.
For more info on optimisation, see this answer.
If you want to pass a custom shader into SpriteBatch (eg: for a special effect) use this overload of Begin and pass in an appropriate Effect.
I was playing a bit with the awesome GPUImage framework and was able to reproduce some "convex"-like effects with fragment shaders.
However, I'm wondering if it's possible to get some more complex plane curving in 3D using GPUImage or any other OpenGL render-to-texture setup.
The effect I'm trying to achieve looks like this one - is there any chance I can get something like it using the depth buffer and a vertex shader, or do I just need to develop a more sophisticated fragment shader that emulates the Z coordinate?
This is what I get now using only a fragment shader and some periodic bulging:
Thanks
Another thought: maybe it's possible to prototype a curved surface in some 3D modeling app and somehow map the texture onto it?
I'm relatively new to WebGL, and OpenGL too for that matter, but in recent days I've filled up most my time writing a little game for it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then using that texture on the object, but it appears that if you render your scene to a texture, you can't also render it to the screen. And beyond that, if you want to render to a texture, the texture must be a power of two (which many screen resolutions are not).
Any help would be appreciated.
You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be of a power-of-two size in OpenGL ES 2.0 (which is the graphics API WebGL uses). But non-power-of-two textures can't have mipmapping or texture-wrapping.
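The distortion pass itself might look something like this (a sketch; the uniform names are made up): render the scene into an FBO texture, then draw a full-screen quad whose fragment shader perturbs its lookup coordinates.

```glsl
// Fragment shader for a heat-wave style distortion pass over a scene texture.
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_scene;   // the scene, rendered into an FBO texture
uniform float u_time;        // animation time in seconds

void main()
{
    // Offset the lookup coordinate with a small travelling sine wave.
    vec2 offset = vec2(sin(v_texCoord.y * 40.0 + u_time * 5.0) * 0.005, 0.0);
    gl_FragColor = texture2D(u_scene, v_texCoord + offset);
}
```

Restricting the offset to fragments covered by the distorting object (e.g. via a mask texture) would limit the effect to the area behind it.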
I believe you can also modify individual canvas pixels directly. That might be a good way to ripple a small area, but it probably won't be GPU-accelerated.