I want to port some image processing work to OpenGL ES for performance. I have a very simple thresholding algorithm in place, but I'd like to apply additional filters to the image (such as contrast).
My first thought was to use multiple fragment shaders. However, I'd like this to run quickly, so would that cause a lot of state changes? The only method I've read about is to render to a texture and then call glUseProgram multiple times.
Is there a more efficient way to do this? Ideally, I'd like to perform a contrast stretch and a histogram balance as part of the steps.
Unless I can combine this into a single shader, would an FBO work for me here?
I'm a bit new to OpenGL (in case you couldn't tell).
Thanks!
Simon
You can't "merge" fragment shaders unless you do it manually, so the only sane choice is to do "ping-pong" rendering using FBOs: you have two FBOs, draw into one while reading the other's texture, then swap them and repeat, switching fragment shaders between passes.
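In raw OpenGL ES 2.0 the core of that loop might look roughly like the sketch below. This is an illustration only: it assumes you have already created two FBOs (fboA, fboB) with color textures texA and texB attached, compiled one program per filter into programs[] (numPasses of them), and have a drawFullscreenQuad() helper that draws a quad sampling texture unit 0; all of those names are placeholders.

    GLuint readTex  = texA;   // initially holds the source image
    GLuint writeFbo = fboB;   // fboB has texB attached, fboA has texA attached
    GLuint writeTex = texB;

    for (int i = 0; i < numPasses; i++) {
        glBindFramebuffer(GL_FRAMEBUFFER, writeFbo);  // render into the "write" buffer
        glUseProgram(programs[i]);                    // threshold, contrast stretch, ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, readTex);        // sample the previous pass's result
        drawFullscreenQuad();

        // Swap read/write roles for the next pass.
        GLuint t = readTex; readTex = writeTex; writeTex = t;
        writeFbo = (writeFbo == fboA) ? fboB : fboA;
    }
    // readTex now holds the final image; draw it to the default framebuffer to display it.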
Ping-pong rendering is quite nice for this purpose, actually! Here's a code sample that works within LibGDX, where a 'batch' object does the actual drawing to the screen and its output can be captured with FrameBuffers:
    FrameBuffer ping = fbo; // the framebuffer containing your rendered texture
    FrameBuffer pong = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, false); // preallocated scratch buffer

    for (ShaderProgram shader : shaders) {
        pong.begin();
        Gdx.gl.glClearColor(0, 0, 0, 0);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear the target before drawing into it
        batch.begin();
        batch.setShader(shader);
        batch.draw(ping.getColorBufferTexture(), 0, 0, width, height);
        batch.end();
        pong.end();

        // Swap roles: what we just wrote becomes the input for the next pass.
        FrameBuffer tmp = ping;
        ping = pong;
        pong = tmp;
    }

    // After the loop the latest result is in 'ping'.
    batch.setShader(null); // back to the default shader for the final draw
    batch.begin();
    batch.draw(ping.getColorBufferTexture(), 0, 0, width, height);
    batch.end();
In my original version the 'clear' call on pong actually created a new framebuffer each pass, which is expensive, so preallocating the two framebuffers (as above) offers some speedup. This is more of a follow-up to Matias' answer than an answer in itself, but it wouldn't fit in a comment.
I have a simple program that renders a couple of 3D objects using Direct3D 9 and HLSL. I'm just starting off with HLSL and have no experience with 3D rendering.
I am able to change the texture and color of the models and fade between two textures without problems; however, I was wondering what the best way would be to simply fade a 3D object out (blend it with the background). I assume it wouldn't be done by fading between two textures (using lerp), since I want the object to fade into the entire background, and there could be many different textures behind it.
I'm using the LPD3DXEFFECT as my effect class, DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way I could think of would be to get the color already on screen at that pixel before applying any changes, and then do calculations on it with the color sampled from the model's texture to produce a faded pixel. However, after looking around the internet, it does not appear to be possible to read the current color of a pixel from within an HLSL pixel shader.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
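If you go the last-rendered-frame route, a rough Direct3D 9 sketch of copying the backbuffer into a texture and handing it to the effect might look like the following. It is only an illustration: 'device', 'effect', 'width', 'height', and the "SceneTexture" parameter name are all assumed to exist in your code, and error checking is omitted.

    // Create a render-target texture once, at the backbuffer's size.
    LPDIRECT3DTEXTURE9 sceneTex = NULL;
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sceneTex, NULL);

    // Each frame, after the rest of the scene has been drawn,
    // copy the current backbuffer into that texture.
    LPDIRECT3DSURFACE9 backBuffer = NULL;
    LPDIRECT3DSURFACE9 texSurface = NULL;
    device->GetBackBuffer(0, 0, D3DBACKBUFFERTYPE_MONO, &backBuffer);
    sceneTex->GetSurfaceLevel(0, &texSurface);
    device->StretchRect(backBuffer, NULL, texSurface, NULL, D3DTEXF_NONE);
    texSurface->Release();
    backBuffer->Release();

    // Make it available to the shader so it can sample the "background" behind the object.
    effect->SetTexture("SceneTexture", sceneTex);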
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else that has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a given pixel that is to be rendered, with the existing pixel on the screen. There is a formula that is followed: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, all of these "variables" being groups of 4 float values, in the format (R, G, B, A).
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I have to set a few render states on my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above; these are the factors that multiply the object and background pixel colors. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is available in the MS Docs link above), but in order to achieve what I wanted (simply a way to fade an object into the background with an alpha value going from 0 to 1), I figured the flags should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting these, before running my shaders and rendering, I just needed to set one more render state: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to toggle this between TRUE and FALSE without changing anything else and saw no rendering problems (although my project was very simple, so it could cause issues in larger projects). You can then pass any arguments you want, such as the alpha value, to the HLSL shader as a global variable (I did it using SetValue()).
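Gathered in one place, the device-side setup described above looks roughly like this (a sketch: 'device' is the LPDIRECT3DDEVICE9, 'effect' is the LPD3DXEFFECT, and the "alpha" parameter name is just an example):

    // Enable frame-buffer alpha blending with the fade factors from the formula above:
    // FinalColor = ObjectPixelColor * SrcAlpha + BackgroundPixelColor * (1 - SrcAlpha)
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    // Pass the fade amount (0.0 = invisible, 1.0 = fully opaque) to the shader as a global.
    float fadeAlpha = 0.5f;
    effect->SetFloat("alpha", fadeAlpha); // or SetValue(), as described above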
Going back to my HLSL shader: after these changes, returning a float4 color from my pixel shader (taken from the tex2D() function) with an alpha value between 0 and 1 yielded the correct fade, provided there are no other issues (another issue that I had but hadn't realized at the time was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects).
I'm sure there is probably a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.
I'm using SceneKit on iOS and I have a geometry I want to render as a wireframe. So basically I want to draw only the lines, so no textures.
I figured out that I could use the shaderModifiers property of the SCNMaterial in use to accomplish this. Example of a shader modifier:
material.shaderModifiers = [
SCNShaderModifierEntryPointFragment: "_output.color.rgb = vec3(1.0) - _output.color.rgb;"
]
This example apparently simply inverts the output colors. I know nothing about this 'GLSL' language I have to use for the shader fragment.
Can anybody tell me what code I should use as the shader fragment to only draw near the edges, to make the geometry look like a wireframe?
Or maybe there is a whole other approach to render a geometry as a wireframe. I would love to hear it.
Try setting the material fillMode to .lines (iOS 11+, and macOS 10.13+):
sphereNode.geometry?.firstMaterial?.fillMode = .lines
Now it is possible (at least in Cocoa) with:
gameView.debugOptions.insert(SCNDebugOptions.showWireframe)
or you can toggle it interactively by enabling the statistics overlay with:
gameView.showsStatistics = true
(gameView is an instance of SCNView)
This is not (quite) an answer, because this is a question without an easy answer.
Doing wireframe rendering entirely in shader code is a lot more difficult than it seems like it should be, especially on mobile where you don't have a geometry shader. The problem is that the vertex shader (and subsequently the fragment shader) just doesn't have the information needed to know where polygon edges are.
I know nothing about this 'GLSL' language I have to use for the shader fragment.
If you really want to tackle this problem, you'll need to learn some more about GLSL (the OpenGL Shading Language). There are loads of books and tutorials out there for that.
Once you've got some GLSL under your belt, take a look at some of the questions (like this one pulled from the Related sidebar) and other stuff people have written about the problem. (Note that when you're looking for mobile-specific limitations, OpenGL ES has the same limitations as WebGL on the desktop.)
With SceneKit, you have the additional wrinkle that you probably don't have a barycentric-coordinates vertex attribute (aka SCNGeometrySource) for the geometry you're working with, and you probably don't want to do the hard work of generating one. In OS X, you can use an SCNProgram with a geometryShader to add barycentric coordinates before the vertex/fragment shaders run — but then you have to do your own shading (i.e. you can't piggyback on the SceneKit shading like you can with shader modifiers). And that isn't available in iOS — the hardware there doesn't do geometry shaders. You might be able to fake it using texture coordinates if those happen to be lined up right in your geometry.
It might be easier to just draw the object using lines — try making a new SCNGeometry from the sources and elements of your original (solid) geometry, but when recreating the SCNGeometryElement, use SCNPrimitiveTypeLine.
I'm working on an app based on Apple's GLPaint sample code. I've changed the clear color to transparent black and have added an opacity slider; however, when I mix colors together with a low opacity setting they don't mix the way I'm expecting. They seem to mix the way light mixes, not the way paint mixes. Here is an example of what I mean (an image comparing my "Current Result" with the "Desired Result").
The "Desired Result" was obtained by rendering each color separately, reading it back with glReadPixels, and merging it with the previously rendered image (i.e. using Apple's default blending).
However, merging each frame with the previous one that way is too time-consuming to be done on the fly. How can I get OpenGL to blend the colors properly? I've been researching online for quite a while and have yet to find a solution that works for me; please let me know if you need any other info to help!
From the looks of it, with your current setup there is no easy solution. For what you are trying to do, you need custom shaders, which is not possible using just GLKit.
Luckily you can mix GLKit and OpenGL ES.
My recommendation would be to:
1. Stop using GLKit for everything except setting up your rendering surface with GLKView (which is tedious without GLKit).
2. Use an OpenGL program with custom shaders to draw to a texture that is backing an FBO (see the sketch after this list).
3. Use a second program with custom shaders that does post-processing (after drawing the above texture to a quad, which is then rendered to the screen).
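A minimal sketch of step 2's render-to-texture setup in raw OpenGL ES 2.0 could look like this. Everything here is illustrative: 'width', 'height', 'drawProgram', 'postProcessProgram', and 'defaultFramebuffer' (the GLKView's framebuffer) are assumed to be set up elsewhere, and error handling is omitted.

    // Create a texture to render into, plus an FBO that targets it.
    GLuint targetTexture = 0, offscreenFbo = 0;
    glGenTextures(1, &targetTexture);
    glBindTexture(GL_TEXTURE_2D, targetTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, &offscreenFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTexture, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error
    }

    // Drawing pass: render the brush strokes into the texture with the first program.
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo);
    glViewport(0, 0, width, height);
    glUseProgram(drawProgram);
    // ... issue draw calls for the strokes ...

    // Post-processing pass: draw a fullscreen quad to the screen, sampling targetTexture.
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    glUseProgram(postProcessProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, targetTexture);
    // ... draw the quad ...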
A good starting point would be to load up the OpenGL template that comes with Xcode and start modifying it. Be warned: if you don't understand shaders, the code there will make little sense. It draws two cubes, one using GLKit and one without, using custom shaders.
References to start learning:
Intro to shaders
Rendering to a Texture
Shader Toy - This should help you experiment with your post processing frag shader.
GLEssentials example - This shows how to render to a texture using OpenGL (a bit outdated).
Finally, if you are really serious about using OpenGL ES to its full potential, you really should invest the time to read through the OpenGL ES 2.0 Programming Guide. Even though it is 6 years old, it is still relevant and the only book I've found that explains all the concepts correctly.
Your "Current Result" is additive color, which is how OpenGL is supposed to work. Mixing colors the way paint mixes would be subtractive color. You don't have control over this with OpenGL ES 1.1, but you could write a custom fragment shader for OpenGL ES 2.0 that would do subtractive color. If you are blending texture images on iOS, you need to know whether the image data has been premultiplied by alpha or not in order to blend correctly. OpenGL ES expects the non-premultiplied format.
You need to write this code in the function that is called on color change, and each time you need to set the blend function:
    CGFloat red, green, blue;
    // set red, green, blue to the desired color combination
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); // premultiplied-alpha blending
    glColor4f(red * kBrushOpacity,
              green * kBrushOpacity,
              blue * kBrushOpacity,
              kBrushOpacity);
To learn more about what you can do with glBlendFunc, use this link.
Please let me know whether it works or not. It works for me.
I'm new to OpenGL ES and am teaching myself how to program iOS games. I'm currently playing with a project that I would like to put a HUD over, with some custom text. I don't want to do this using a UILabel, and I currently have no idea how to use quads to cut up a PNG full of characters and assemble them into text for display. I would like the end result to be providing a simple string to a command/method and having the output displayed using the textures/bitmap for the quads. Say glPrint("Hello World");. Would anyone be able to guide me in the proper direction? There doesn't seem to be a single good tutorial on how to do this for OpenGL ES 2.0 (just OpenGL). I also want to try to avoid using 3rd-party APIs. I really need/want to understand how to tackle this.
When I was getting started with OpenGL ES for my current 2D project I used Ray's tutorial, which helped me get a handle on rendering textured 2D quads. In conjunction with his 3D OpenGL ES tutorial, you might be able to piece together what you want to do. Note that you probably wouldn't render every single quad separately as in the tutorial, since that is very inefficient. Instead, you would gather all of the vertices of the characters into two big arrays/vertex buffers and batch-render the characters. The basic flow for each frame would look something like this: first, pass a normal perspective projection matrix for 3D rendering, get the vertex information for your 3D scene to your shaders, and render the 3D scene (this part you've already done). For the text, immediately after: pass in an orthographic projection matrix, bind your font texture (generally generated earlier with the GLKTextureLoader class) to the active texture unit, generate two big arrays of texture and geometric vertices for the characters (or update the VBOs if the text has changed), pass those in, and then batch-render all of the letters at once using either glDrawArrays or glDrawElements (which requires indices).
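To make that flow concrete, a rough per-frame skeleton of the text pass in OpenGL ES 2.0 might look like the sketch below. The names ('textProgram', 'orthoMatrix', 'fontTexture', 'textVbo', 'textIbo', the attribute locations, and 'characterCount') are placeholders for things you would set up yourself.

    // ... the 3D scene has already been rendered at this point ...

    // Switch to the text shader and an orthographic projection for 2D drawing.
    glUseProgram(textProgram);
    glUniformMatrix4fv(glGetUniformLocation(textProgram, "projection"), 1, GL_FALSE, orthoMatrix);

    // Bind the font atlas texture (e.g. loaded earlier with GLKTextureLoader).
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, fontTexture);
    glUniform1i(glGetUniformLocation(textProgram, "fontSampler"), 0); // cache these locations in real code

    // One interleaved VBO holding position + texcoord for every character quad,
    // rebuilt (glBufferData/glBufferSubData) only when the text changes.
    glBindBuffer(GL_ARRAY_BUFFER, textVbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, textIbo);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void *)0);
    glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void *)(2 * sizeof(float)));
    glEnableVertexAttribArray(positionAttrib);
    glEnableVertexAttribArray(texCoordAttrib);

    // Draw every character quad in one call (6 indices per character).
    glDrawElements(GL_TRIANGLES, 6 * characterCount, GL_UNSIGNED_SHORT, 0);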
Also, as I'm new to OpenGL myself, some of this may be wrong or inefficient. I've yet to use OpenGL ES to render anything 3D, so I'm not sure what other state changes (enabling, disabling, etc.) besides a different projection matrix might be needed between rendering your 3D scene and the 2D scene (text).
It seems that drawing text using only OpenGL is a relatively difficult and tedious task, so if you just want to render a HUD overlay displaying frame rates and other things you are much better off using UILabels and saving yourself the trouble, especially if your project is not very complex. This also prevents you from having to deal with wrapping, kerning, font sizes, colors, different languages and a load of other stuff that greatly complicates text rendering if you need anything more complex.
Rather than tracking the location of each letter, why not use Core Graphics to draw your entire string into a bitmap, then upload that as a texture? You'd just need to get the dimensions from your bitmap to know what size quad to draw for that text string.
Within my open source GPUImage framework, I have an input class called a GPUImageUIElement that does something similar. The relevant code from that input is as follows:
    // Size of the layer in pixels (accounts for the Retina scale factor; see below).
    CGSize layerPixelSize = [self layerSizeInPixels];

    // Allocate a zeroed RGBA buffer and wrap it in a bitmap context.
    GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Flip the context vertically, scale from points to pixels, and render the layer into it.
    CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
    CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
    [layer renderInContext:imageContext];
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);

    // Upload the rendered bitmap (BGRA byte order) into the previously created texture.
    glBindTexture(GL_TEXTURE_2D, outputTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    free(imageData);
This code takes a CALayer (either directly or from the backing layer of a UIView) and renders its contents to a texture. I've already initialized the texture before this, so the code sets up a bitmap context, renders the layer into that context using -renderInContext:, and then uploads that bitmap to the texture for use in OpenGL ES.
The helper method -layerSizeInPixels just accounts for the current Retina scale factor as follows:
- (CGSize)layerSizeInPixels;
{
CGSize pointSize = layer.bounds.size;
return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
If you used a UILabel for your view and had it autosize to fit its text, you could set the text on it, use the above to render and upload your texture, and then take the pixel size of the element to determine your quad size. However, it would probably be more efficient to just draw the text yourself using -drawAtPoint:withFont: or the like with an NSString.
Using Core Graphics to render your text makes it easy to manipulate the text as an NSString and use all of Core Graphics' typesetting capabilities instead of rolling your own.
I'm in the process of writing my first few shaders, usually writing a shader to accomplish a feature whenever I realize that the main XNA library doesn't support it.
The trouble I'm running into is that not all of my models in a particular scene have texture data in them, and I can't figure out how to handle that. The main XNA libraries seem to handle it by using a wrapper class for BasicEffect, loading it through the content manager and selectively enabling or disabling texture processing accordingly.
How difficult is it to accomplish this for a custom shader? What I'm writing is a generic "hue shift" effect; that is, I want whatever gets drawn with this technique to have its texture colors (if any) and its vertex color hue shifted by a certain degree. Do I need to write separate shaders, one with textures and one without? If so, when I'm looping through my MeshParts, is there any way to detect whether a given part has texture coordinates so that I can apply the correct effect?
Yes, you will need separate shaders, or rather different "techniques" - it can still be the same effect and use much of the same code. You can see how BasicEffect (at least the pre-XNA 4.0 version) does it by reading the source code.
To detect whether or not a model mesh part has texture coordinates, try this:
// Note: this allocates an array, so do it at load-time
var elements = meshPart.VertexBuffer.VertexDeclaration.GetVertexElements();
bool result = elements.Any(e =>
e.VertexElementUsage == VertexElementUsage.TextureCoordinate);
The way the content pipeline sets up its BasicEffect is via BasicMaterialContent. The BasicEffect.TextureEnabled property is simply turned on if Texture is set.