I am using Brad Larson's GPUImage framework for my project.
I was trying to find a way to add an intensity control to GPUImageLookupFilter and came across this issue:
https://github.com/BradLarson/GPUImage/issues/1485
gl_FragColor = mix(textureColor, vec4(vec3(newColor),1.0), mixTexture);
the "textureColor" is the original texture, and "newColor" is the LookupFilter result, and mixTexture is the Alpha value which is (0 ~ 1.0), you can think it as intensity variable.
I do not know how to implement this, since I have no experience writing OpenGL shaders. Could anyone tell me where to add this code to get an intensity control on GPUImageLookupFilter?
Every GPUImage filter has its own fragment shader. These are defined at the top of the .m file for that filter as string constants. In the case of GPUImageLookupFilter, that's the kGPUImageLookupFragmentShaderString at the top of GPUImageLookupFilter.m.
These fragment shaders are C-like programs that have a few unique attributes when compared to standard C, but should still be reasonably easy to follow once you've seen a few examples.
As was pointed out in that issue, if you want this kind of intensity control (which I don't include by default for performance reasons), you'll want to create a new filter class and copy the code from GPUImageLookupFilter into it (of which there is little beyond the fragment shader). There are two versions of the fragment shader, one for Mac (generally just without the precision qualifiers) and one for iOS. At the bottom of both of those is a line that outputs the final color. You'll want to modify that to use a mix() operation as described above.
You'll also need to add a property to adjust this intensity, if you don't want to hardcode a value. For that, you'll need to set up a matching uniform in the fragment shader to take in this property. Look at GPUImageBrightnessFilter for a simple example of a property that matches up with a uniform in a fragment shader.
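For illustration, here is a minimal sketch of how the end of the iOS fragment shader could look once an intensity uniform is added. The names textureColor and newColor are the ones the stock lookup shader already uses; the uniform name intensity is just an example, and on the Objective-C side it would be wired to a property the same way brightness is wired up in GPUImageBrightnessFilter:

uniform lowp float intensity;   // new uniform backed by the property: 0.0 = original image, 1.0 = full lookup

// ...the rest of the stock lookup shader stays the same, except the final line,
// which changes from outputting the lookup result directly to:
gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.a), intensity);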
I'd recommend looking through the fragment shader code provided for the various filters, to see how these are written and how they work. Most people are able to pick up the fundamentals just by examining the various shaders already in the framework.
Related
Using WebGL, I need to perform 3 passes to render my scene. Each pass runs the same geometry and shaders but has differing values for some uniforms and textures.
I seem to have two choices: have a single "program" and set all of the uniforms and textures for each pass, or have 3 "programs" each containing the same shaders, set all the necessary uniforms/textures once per program, and then just switch programs for each pass. This means I would do one useProgram call per pass instead of many setUniform calls per pass.
Is this second technique likely to be faster because it avoids very many setUniform calls, or is changing the program very expensive? I've done some trials, but with the very simple geometry I have at the moment I don't see any difference in performance because setup costs overwhelm any differences.
Is there any reason to prefer one technique over the other?
Just send different values via glUniform if the shader programs are the same.
Switching between programs is generally slower than changing the value of a uniform.
In any case, an uber shader program (with a list of flag uniforms like useLighting, useAlphaMap) is in most cases not a good idea.
@gman
We are talking about WebGL (GLES 2.0), where we don't have UBOs (uniform buffer objects).
@top
Summing up: try to avoid rebinding shader programs (but it's not the end of the world), and don't create one uber shader!
When you have large amounts of textures to rebind, texture atlasing should be the fastest solution, so you don't need to rebind textures or programs. Different textures can then be selected by modifying uniforms that represent texCoord offsets.
Modifying such uniforms can be optimized even further:
You should consider moving frequently modified uniforms to attributes. Usually their data is supplied through vertexAttribPointer calls, but you can also use constant values when the attribute arrays are disabled: instead of uniformXXX(), use vertexAttribXXX() functions to specify their constant values.
I think the best example is a light position. Normally you'd have to update the uniform in ALL programs that use it every time the light position changes. In contrast, with such 'attributed' uniforms you can specify the attribute value once, globally, whenever your light moves (see the sketch after the pros and cons below).
Pros:
This method is best suited when you have many programs that need to share uniforms; since we can't use uniform buffers in WebGL, it seems to be the only reasonable substitute.
Cons:
Of course the available space for such 'attributed' uniforms is much smaller than for regular uniforms, but it can still speed things up a lot if you move even part of your uniforms over.
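As a rough sketch of the idea (all names here are made up): declare the shared value as an attribute in each vertex shader, leave its attribute array disabled, and set a constant value for it from JavaScript. As long as every program binds it to the same attribute index (e.g. with bindAttribLocation before linking), one vertexAttrib call updates the value seen by all of them.

attribute vec3 aPosition;
attribute vec3 aLightPosition;   // 'attributed' uniform: no buffer bound, constant value set from JS
uniform mat4 uModelViewProjection;
varying vec3 vLightPosition;

void main()
{
    vLightPosition = aLightPosition;   // forwarded for per-fragment lighting
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}

// JavaScript side (illustrative):
//   gl.disableVertexAttribArray(lightLoc);   // no per-vertex data for this attribute
//   gl.vertexAttrib3f(lightLoc, x, y, z);    // constant value used for every vertex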
I'm trying to implement a particle system (using OpenGL ES 2.0) where each particle is made up of a quad with a simple texture in which the red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part is that I'd like each particle to have a blend mode much like Photoshop's "Overlay"; I've tried many different combinations with glBlendFunc() but without luck.
I don't understand how I could implement this in a fragment shader, since I'd need information about the current color of the fragment in the framebuffer, so that I can calculate a new color based on the current color and the texture color.
I also thought about using a framebuffer object, but I guess I would need to re-render the framebuffer object into a texture for each particle, every frame, since I need the calculated fragment color where particles overlap each other.
I've found the math and other information regarding the Overlay calculation, but I have a hard time figuring out which direction to go to implement this:
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to get an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
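And here is a rough sketch of the Overlay mode the question asks about, done the same way. The sampler and varying names are assumptions; per channel it computes 2ab when the background value a is below 0.5, and 1 - 2(1 - a)(1 - b) otherwise:

#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

uniform sampler2D particleTexture;   // assumed name for this particle's texture
varying vec2 textureCoordinate;      // assumed varying from the vertex shader

void main()
{
    vec4 src = texture2D(particleTexture, textureCoordinate); // incoming fragment (foreground b)
    vec3 dst = gl_LastFragData[0].rgb;                         // already in the framebuffer (background a)
    vec3 low  = 2.0 * dst * src.rgb;                           // branch for a < 0.5
    vec3 high = 1.0 - 2.0 * (1.0 - dst) * (1.0 - src.rgb);     // branch for a >= 0.5
    gl_FragColor = vec4(mix(low, high, step(0.5, dst)), src.a);
}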
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures, then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- this method is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE
Equation: GL_ADD
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's Overlay mode is defined so that, for a background value a and a foreground value b, the output pixel f(a, b) is 2ab if a < 0.5, and 1 - 2(1 - a)(1 - b) otherwise.
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.
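As a hedged sketch of that last suggestion (names assumed): after additively accumulating the particles into a texture, draw a full-screen quad that pushes each channel through the curve you'd get from overlaying a value onto itself, i.e. f(x, x) = 2x² below 0.5 and 1 - 2(1 - x)² above:

precision mediump float;

uniform sampler2D sceneTexture;   // assumed: the additively blended result
varying vec2 textureCoordinate;

void main()
{
    vec3 x = texture2D(sceneTexture, textureCoordinate).rgb;
    vec3 low  = 2.0 * x * x;                         // branch for x < 0.5
    vec3 high = 1.0 - 2.0 * (1.0 - x) * (1.0 - x);   // branch for x >= 0.5
    gl_FragColor = vec4(mix(low, high, step(0.5, x)), 1.0);
}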
I'm relatively new to 3D development and am currently using Actionscript, Stage3D and AGAL to learn. I'm trying to create a scene with a simple procedural mesh that is flat shaded. However, I'm stuck on exactly how I should be passing surface normals to the shader for the lighting. I would really like to just use a single surface normal for each triangle and do flat, even shading for each. I know it's easy to achieve better looking lighting with normals for each vertex, but this is the look I'm after.
Since the shader normally processes every vertex, not every triangle, is it possible for me to just pass a single normal per triangle, rather than one per vertex? Is my thinking completely off here? If anyone had a working example of doing simple, flat shading I'd greatly appreciate it.
I'm digging up an old question here since I stumbled on it via google and can see there is no accepted answer.
Stage3D does not have an equivalent of the "GL_FLAT" option for its shader engine. What this means is that the fragment shader program always receives a "varying", or interpolated, value computed from the outputs of the three respective vertices (via the vertex program). If you want flat shading, you have basically only one option:
Create three unique vertices for each triangle and set the normal for each vertex to the face normal of the triangle. This way, each vertex will calculate the same lighting and result in the same vertex color. When the fragment shader interpolates, it will be interpolating three identical values, resulting in flat shading.
This is pretty lame. The requirement of unique vertices per triangle means you can't share vertices between triangles. This will definitely increase your vertex count, causing increased delays during your VertexBuffer3D uploads as well as overall lower frame rates. However, I have not seen a better solution anywhere.
I've been researching this problem I have, and I can't seem to understand it well enough to solve it, so I thought I might as well throw it out there and the intelligent bunch might have some ideas. :P
Basically, I have been working on an iPhone project for a while where I have the luxury of using all the newest frameworks and targeting 5.1. So I've been using GLKit and GLKBaseEffect, which has been working just fine for me. The reason I started out with GLKBaseEffect rather than writing my own shaders is that I don't know GLSL well. However, the requirements have become more precise and the base effect just doesn't seem to cut it any longer.
Since I am already doing all my transforms using the base effect I would prefer if I could keep my base effect intact but add glsl-type shaders on top if that makes any sense.
My old approach looks something like this (this is in a loop rendering all objects, where an object contains such things as transforms, a mesh and some other things less important for this problem, such as textures, materials and so on):
ObjectBase *obj = [ResourceManager.shared getObjectNamed:name inScene:sceneName];
GLKMatrix4 modelview = effect.transform.modelviewMatrix;
effect.transform.modelviewMatrix = GLKMatrix4Multiply(effect.transform.modelviewMatrix, obj.transform);
[effect prepareToDraw];
[obj render];
effect.transform.modelviewMatrix = modelview;
Here we fetch an object to render, transform (i.e. translate, rotate and scale) it, and then render it; the rendering itself fetches the mesh for the object, binds the buffers and draws it.
So far so good.
What I would like to do however is that during the [obj render]; call I would like the object to also do something like glUseProgram(someProgram); adding more specialized shader code.
I guess one could argue that I am trying to use the base effect for my vertex shaders and want to use "normal" shaders for my fragment shaders. At least that's what I think I want to do.
I have been trying some things.
I tried to create just the fragment shader and call glUseProgram on it; however, it said that I need one vertex and one fragment shader when setting up and compiling the program. I've also tried to create an empty vertex shader, which didn't turn out very well; I don't know exactly what happens with that, but I am guessing that it overrules the base effect.
I am leaning toward accepting that, in the end, it's probably best to throw out the base effect and just write my own shaders all the way. I just feel like it's a lot of work out the window, so I wanted to see how much of it I can save.
I do understand that my understanding of shaders is the part that gives me the most problems, so please be patient with that fact.
I just wanted to give my conclusions for anyone interested in them.
What I've done is actually thrown out the GLKBaseEffect altogether and implemented my own shader code.
My biggest problem was that I didn't really understand that it's all or nothing, so to speak.
I might be wrong, so any corrections where I am wrong will be greatly appreciated; I really don't want to mislead anyone reading this.
What I found out during my endeavors is a couple of key-points:
GLKBaseEffect is meant to mimic the fixed-function pipeline as seen in earlier versions of OpenGL ES. Hence it wraps the common shader code so you don't really have to care too much about it. You get basic functionality, but it's not really very extensible.
You can still use the neat features of GLKit such as texture loader, the math library and so on if you write your own shader-code. So if you want something more complicated or customizable (bump mapping, toon shading and so on) it is totally worth rewriting the boiler-plate code needed to render properly. What I did at first was that I used the GLKBaseEffect to orient in the scene since it's quite comfortable and easy to use. However when I wanted to do more (tangent-space normal mapping) I kind of got stuck since I couldn't add to the shader program handled by the GLKBaseEffect.
Shaders are really not as scary as I always thought! I just had no idea what they really were, and I'm surprised that I've read so much about them and still hadn't understood that, basically, shaders are programs REPLACING the fixed-function pipeline. Simple as that.
That's enough rant I guess, just wanted to follow up and add what bits and pieces I've collected this far.
Just as you discovered, you can't just use a fragment shader and leave out the vertex shader. This is because both have different tasks. Vertex shaders deal with the per-vertex aspects: transforming the vertex data, texture coordinates (UVs), etc., which ultimately determines the faces (triangles) that get drawn. Fragment shaders deal with what exactly will be drawn at each pixel on the screen (or in the viewport). When you provide only a fragment shader, you are not saying what your vertex data is; you are only telling OpenGL to do something to the pixels. And those pixels hold nothing/gibberish (I am not sure which) since your vertex shader did not do anything.
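To make that concrete, a minimal GLSL ES pair of the sort that stands in for GLKBaseEffect's basic behaviour might look like this (all names are illustrative):

// Vertex shader: transform the vertex and pass the texture coordinate through.
attribute vec4 position;
attribute vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
varying vec2 vTexCoord;

void main()
{
    vTexCoord = texCoord;
    gl_Position = modelViewProjectionMatrix * position;
}

// Fragment shader: decide the color of each covered pixel, here a plain texture lookup.
precision mediump float;
uniform sampler2D diffuseTexture;
varying vec2 vTexCoord;

void main()
{
    gl_FragColor = texture2D(diffuseTexture, vTexCoord);
}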
When using GLKBaseEffect, a call to the [yourEffect prepareToDraw] method takes care of the shaders etc.
If you just wish to use a stock shader pair, why not use the one provided in the Xcode OpenGL Game template? When you run it, it shows two cubes, one rendered using GLKit and the other the normal way. Though I think it will not be enough for most effects. In case you wish to know more about shaders, you can have a look at the NeHe GLSL introduction article. It is about GLSL and how you can write and use shaders in your code. You might also want to have a look at Diney Bomfim's All About Shaders articles and this page.
Using GLKit is nice in most cases, since it saves you from writing lots of useless, repetitive code. For example, you do not have to deal with so many image formats with different color encodings and bits per pixel (per format) when you can just use GLKTextureLoader.
I am doing a bit of work on some of our HLSL shaders, trying to get them to work in SM2.0. I've nearly succeeded but one of our shaders accepts a parameter:
float alignment : VFACE
My understanding from MSDN is this is an automatic var calculated in case I need it, but it's not supported under SM2.0... so, how might I reproduce this? I'm not a shader programmer so any (pseudo) code would be really helpful. I understand what VFACE does, but not how I might calculate it myself in a pixel shader, or in a VS and pass it into the PS. Calculating it per-pixel sounds expensive so maybe someone can show a skeleton to calculate it in a VS and use it in a PS?
You can't, because VFACE gives the orientation of the triangle (back- or front-facing), and the VS and PS stages have no access to the whole primitive (unlike the GS stage in SM4/5).
The only way is to render your geometry in two passes (one with back-face culling, the other with front-face culling) and pass a constant value to the shader that matches what VFACE would have indicated.