iOS GLSL. Is There A Way To Create An Image Histogram Using a GLSL Shader?

Elsewhere on StackOverflow a question was asked regarding a depth buffer histogram - Create depth buffer histogram texture with GLSL.
I am writing an iOS image-processing app and am intrigued by this question but unclear on the answer provided. So, is it possible to create an image histogram using the GPU via GLSL?

Yes, it is, although it's a little more challenging on iOS than you'd think. I've generated and plotted a red histogram entirely on the GPU, running against a live video feed.
Tommy's suggestion in the question you link is a great starting point, as is this paper by Scheuermann and Hensley. What's suggested there is to use scattering to build up a histogram for color channels in the image. Scattering is a process where you pass in a grid of points to your vertex shader, and then have that shader read the color at that point. The value of the desired color channel at that point is then written out as the X coordinate (with 0 for the Y and Z coordinates). Your fragment shader then draws out a translucent, 1-pixel-wide point at that coordinate in your target.
That target is a 1-pixel-tall, 256-pixel-wide image, with each width position representing one color bin. By writing out a point with a low alpha channel (or low RGB values) and then using additive blending, you can accumulate a higher value for each bin based on the number of times that specific color value occurs in the image. These histogram pixels can then be read for later processing.
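In GLSL terms, the vertex shader for that scatter pass would look roughly like the sketch below. The names here are mine, and this is the textbook form of the technique, which reads the input texture in the vertex shader; that read is exactly the part that causes trouble on iOS, as described next.

    attribute vec2 samplingCoordinate; // one grid point per pixel to be sampled
    uniform sampler2D inputImageTexture;

    void main()
    {
        // Read the pixel at this grid point and use its red value as the bin index
        highp float colorValue = texture2DLod(inputImageTexture, samplingCoordinate, 0.0).r;

        // Spread the 0.0-1.0 bin value across the -1.0 to 1.0 X range of the 256-pixel-wide target
        gl_Position = vec4(colorValue * 2.0 - 1.0, 0.0, 0.0, 1.0);
        gl_PointSize = 1.0;
    }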
The major problem with doing this in shaders on iOS is that, despite reports to the contrary, Apple clearly states that texture reads in a vertex shader will not work on iOS. I tried this with all of my iOS 5.0 devices, and none of them were able to perform texture reads in a vertex shader (the screen just goes black, with no GL errors being thrown).
To work around this, I found that I could read the raw pixels of my input image (via glReadPixels() or the faster texture caches) and pass those bytes in as vertex data with a GL_UNSIGNED_BYTE type. The following code accomplishes this:
// Read back the current frame's pixels so they can be fed in as vertex data
glReadPixels(0, 0, inputTextureSize.width, inputTextureSize.height, GL_RGBA, GL_UNSIGNED_BYTE, vertexSamplingCoordinates);

[self setFilterFBO];
[filterProgram use];

glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);

// Additive blending accumulates a count in each histogram bin
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glEnable(GL_BLEND);

// Stride over the pixel bytes so that only a fraction of the pixels are sampled
glVertexAttribPointer(filterPositionAttribute, 4, GL_UNSIGNED_BYTE, 0, (_downsamplingFactor - 1) * 4, vertexSamplingCoordinates);
glDrawArrays(GL_POINTS, 0, inputTextureSize.width * inputTextureSize.height / (CGFloat)_downsamplingFactor);

glDisable(GL_BLEND);
In the above code, you'll notice that I employ a stride to only sample a fraction of the image pixels. This is because the lowest opacity or greyscale level you can write out is 1/256, meaning that each bin becomes maxed out once more than 255 pixels in that image have that color value. Therefore, I had to reduce the number of pixels processed in order to bring the range of the histogram within this limited window. I'm looking for a way to extend this dynamic range.
The shaders used to do this are as follows, starting with the vertex shader:
attribute vec4 position;

void main()
{
    // 0.0078125 = 2/256: spread the 0-255 byte value across the -1.0 to 1.0 clip space X axis, one position per bin
    gl_Position = vec4(-1.0 + (position.x * 0.0078125), 0.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}
and finishing with the fragment shader:
uniform highp float scalingFactor;

void main()
{
    // Each plotted point adds scalingFactor to its bin via the additive blending set up above
    gl_FragColor = vec4(scalingFactor);
}
A working implementation of this can be found in my open source GPUImage framework. Grab and run the FilterShowcase example to see the histogram analysis and plotting for yourself.
There are some performance issues with this implementation, but it was the only way I could think of to do this on the GPU on iOS. I'm open to other suggestions.

Yes, it is. It may not be the best approach, but it is the best one available on iOS, since OpenCL is not supported. You'll lose some elegance, and your code probably won't be as straightforward, but almost all OpenCL features can be achieved with shaders.
If it helps, DirectX 11 comes with an FFT example for compute shaders. See the DX11 August SDK Release Notes.

Related

How do I draw a polygon in an info-beamer node.lua program?

I have started experimenting with the info-beamer software for Raspberry Pi. It appears to have support for displaying PNGs, text, and video, but when I see GLSL primitives, my first instinct is to draw a texture-mapped polygon.
Unfortunately, I can't find the documentation that would allow me to draw so much as a single triangle using the shaders. I have made a few toys using GLSL, so I'm familiar with the pipeline of setting transform matrices and drawing triangles that are filtered by the vertex and fragment shaders.
I have grepped around in info-beamer-nodes on GitHub for examples of GL drawing, but the relevant examples have so far escaped my notice.
How do I use info-beamer's GLSL shaders on arbitrary UV mapped polygons?
Based on the comment by the author of info-beamer, it is clear that functions for drawing arbitrary triangles are not available in info-beamer 0.9.1.
The specific effect I was going to attempt was a rectangle that faded to transparent at the margins. Fortunately the 30c3-room/ example in the info-beamer-nodes sources illustrates a technique where we draw an image as a rectangle that is filtered by the GL fragment shader. The 1x1 white PNG is a perfectly reasonable template whose color can be replaced by the calculations of the shader in my application.
While arbitrary triangles are not available, UV-mapped rectangles (and rotated rectangles) are supported and are suitable for many use cases.
I used the following shader:
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform float margin_h;
uniform float margin_v;

void main()
{
    // Distance from the nearest horizontal and vertical edges, scaled by the margin sizes
    float q = min((1.0 - TexCoord.s) / margin_h, TexCoord.s / margin_h);
    float r = min((1.0 - TexCoord.t) / margin_v, TexCoord.t / margin_v);
    float p = min(q, r);

    // Alpha fades to 0 at the edges and saturates at 1 beyond the margin
    gl_FragColor = vec4(0, 0, 0, p);
}
and this Lua in my node.render():
y = phase * 30 + center.y

shader:use {
    margin_h = 0.03;
    margin_v = 0.2;
}
white:draw(x - 20, y - 20, x + 700, y + 70)
shader:deactivate()

font:write(x, y, "bacon " .. (phase), 50, 1, 1, 0, 1)

HLSL correct pixel position in deferred shading

In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get an accurate pixel position. I am wondering, is there an HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions.
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
Also, you have an alternative: using SV_Position and Load, just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen.
Here is the full documentation of a TextureObject.

Does the input texture to a fragment shader change as the shader runs?

I'm trying to implement the Atkinson dithering algorithm in a fragment shader in GLSL using our own Brad Larson's GPUImage framework. (This might be one of those things that's impossible, but I don't know enough to determine that yet, so I'm just going ahead and doing it anyway.)
The Atkinson algo dithers grayscale images into pure black and white as seen on the original Macintosh. Basically, I need to investigate a few pixels around my pixel and determine how far away from pure black or white each is and use that to calculate a cumulative "error;" that error value plus the original value of the given pixel determines whether it should be black or white. The problem is that, as far as I could tell, the error value is (almost?) always zero or imperceptibly close to it. What I'm thinking might be happening is that the texture I'm sampling is the same one that I'm writing to, so that the error ends up being zero (or close to it) because most/all of the pixels I'm sampling are already black or white.
Is this correct, or are the textures that I'm sampling from and writing to distinct? If the former, is there a way to avoid that? If the latter, then might you be able to spot anything else wrong with this code? 'Cuz I'm stumped, and perhaps don't know how to debug it properly.
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec3 dimensions;

void main()
{
    highp vec2 relevantPixels[6];
    relevantPixels[0] = vec2(textureCoordinate.x, textureCoordinate.y - 2.0);
    relevantPixels[1] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y - 1.0);
    relevantPixels[2] = vec2(textureCoordinate.x, textureCoordinate.y - 1.0);
    relevantPixels[3] = vec2(textureCoordinate.x + 1.0, textureCoordinate.y - 1.0);
    relevantPixels[4] = vec2(textureCoordinate.x - 2.0, textureCoordinate.y);
    relevantPixels[5] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y);

    highp float err = 0.0;
    for (mediump int i = 0; i < 6; i++) {
        highp vec2 relevantPixel = relevantPixels[i];
        // #todo Make sure we're not sampling a pixel out of scope. For now this
        // doesn't seem to be a failure (?!).
        lowp vec4 pointColor = texture2D(inputImageTexture, relevantPixel);
        err += ((pointColor.r - step(.5, pointColor.r)) / 8.0);
    }

    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp float hue = step(.5, textureColor.r + err);
    gl_FragColor = vec4(hue, hue, hue, 1.0);
}
There are a few problems here, but the largest one is that Atkinson dithering can't be performed efficiently within a fragment shader. This kind of dithering is a sequential process, dependent on the results of the fragments above and before it. A fragment shader can only write to its own fragment in OpenGL ES, not to neighboring ones as is required by the Python implementation you point to.
For potential shader-friendly dither implementations, see the question "Floyd–Steinberg dithering alternatives for pixel shader."
You also normally can't write to and read from the same texture, although Apple did add some extensions in iOS 6.0 that let you write to a framebuffer and read from that written value in the same render pass.
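For reference, the relevant extension is EXT_shader_framebuffer_fetch, which lets a fragment shader read the color previously written to the framebuffer at its own position as gl_LastFragData[0]. A rough sketch of the mechanism (the uniform and varying names follow my usual GPUImage conventions, and this isn't dithering code, just the read-back):

    #extension GL_EXT_shader_framebuffer_fetch : require

    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;

    void main()
    {
        // gl_LastFragData[0] holds the color already in the framebuffer at this fragment,
        // so it can be combined with a newly computed value within a single render pass
        lowp vec4 previousColor = gl_LastFragData[0];
        lowp vec4 newColor = texture2D(inputImageTexture, textureCoordinate);
        gl_FragColor = mix(previousColor, newColor, 0.5);
    }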
As to why you're seeing odd error results, the coordinate system within a GPUImage filter is normalized to the range 0.0 - 1.0. When you try to offset a texture coordinate by adding 1.0, you're reading past the end of the texture (which is then clamped to the value at the edge by default). This is why you see me using texelWidth and texelHeight values as uniforms in other filters that require sampling from neighboring pixels. These are calculated as a fraction of the overall image width and height.
I'd also not recommend doing texture coordinate calculation within the fragment shader, as that will lead to a dependent texture read and really slow down the rendering. Move that up to the vertex shader, if you can.
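Putting those last two points together, the pattern looks roughly like the following vertex shader sketch (the texelWidth / texelHeight uniforms and the extra varyings follow the naming I use in GPUImage's neighbor-sampling filters; only two neighbors are shown here for brevity):

    attribute vec4 position;
    attribute vec4 inputTextureCoordinate;

    uniform highp float texelWidth;  // 1.0 / image width
    uniform highp float texelHeight; // 1.0 / image height

    varying vec2 textureCoordinate;
    varying vec2 leftTextureCoordinate;
    varying vec2 topTextureCoordinate;

    void main()
    {
        gl_Position = position;

        // Compute the neighbor coordinates here, so the fragment shader can sample
        // them directly without doing any coordinate math of its own
        textureCoordinate = inputTextureCoordinate.xy;
        leftTextureCoordinate = inputTextureCoordinate.xy + vec2(-texelWidth, 0.0);
        topTextureCoordinate = inputTextureCoordinate.xy + vec2(0.0, -texelHeight);
    }

The fragment shader then reads texture2D(inputImageTexture, leftTextureCoordinate) and so on, avoiding the dependent texture read.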
Finally, to answer your title question, usually you can't modify a texture as it is being read, but the iOS texture cache mechanism sometimes allows you to overwrite texture values as a shader is working its way through a scene. This leads to bad tearing artifacts usually.
@GarrettAlbright For the 1-Bit Camera app I ended up simply iterating over the image data using raw memory pointers and (rather) tightly optimized C code. I looked into NEON intrinsics and the Accelerate framework, but any parallelism really screws up an algorithm of this nature, so I didn't use it.
I also toyed around with the idea of doing a decent enough approximation of the error distribution on the GPU first, and then doing the thresholding in another pass, but I never got anything but rather ugly noise dither from those experiments. There are some papers around covering other ways of approaching diffusion dithering on the GPU.

GPGPU programming with OpenGL ES 2.0

I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1.
I am able to write the GLSL fragment shader for processing the pixels as I've been trying out different things in an effect designer app.
I am not sure, however, how I should do the other part of the task. That is, I'd like to be working on the image in image coordinates and then outputting the result to a texture. I am aware of the gl_FragCoord variable.
As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the pixel shader is applied once to each pixel in the image and renders to a texture or something. But how can I achieve that, considering there's depth that may make things somewhat awkward for me...
I'd be very grateful if anyone could help me with this rather simple task, as I am really frustrated with myself.
UPDATE:
It seems I'll have to use an FBO, getting one like this: glBindFramebuffer(...)
Use this tutorial; it targets OpenGL 2.0, but most features are also available in ES 2.0. The only thing I have doubts about is floating-point textures.
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
Basically, you need 4 vertex positions (as vec2) of a quad (with corners (-1,-1) and (1,1)) passed as a vertex attribute.
You don't really need a projection, because the shader doesn't need one.
Create an FBO, bind it and attach the target surface. Don't forget to check the completeness status.
Bind the shader, set up input textures and draw the quad.
Your vertex shader may look like this:
#version 130
in vec2 at_pos;
out vec2 tc;

void main() {
    tc = (at_pos + vec2(1.0)) * 0.5;      // texture coordinates
    gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
}
And a fragment shader:
#version 130
in vec2 tc;
uniform sampler2D unit_in;

void main() {
    vec4 v = texture2D(unit_in, tc);
    gl_FragColor = v; // replace this with your per-pixel processing
}
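Note that the snippets above are written for desktop GLSL (#version 130 with in/out qualifiers); OpenGL ES 2.0 shaders use attribute/varying instead and require a default float precision in the fragment shader. A roughly equivalent ES 2.0 pair would be:

    // Vertex shader (OpenGL ES 2.0)
    attribute vec2 at_pos;
    varying vec2 tc;

    void main() {
        tc = (at_pos + vec2(1.0)) * 0.5;      // map the -1..1 quad to 0..1 texture coordinates
        gl_Position = vec4(at_pos, 0.0, 1.0); // no projection needed
    }

    // Fragment shader (OpenGL ES 2.0)
    precision mediump float;
    varying vec2 tc;
    uniform sampler2D unit_in;

    void main() {
        vec4 v = texture2D(unit_in, tc);
        gl_FragColor = v; // replace with your per-pixel processing
    }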
If you want an example, I created this project for iOS devices for processing frames of video grabbed from the camera using OpenGL ES 2.0 shaders. I explain more about it in my writeup here.
Basically, I pull in the BGRA data for a frame and create a texture from that. I then use two triangles to generate a rectangle and map the texture on that. A shader is used to directly display the image onscreen, perform some effect on the image and display it, or perform some effect on the image while in an offscreen FBO. In the last case, I can then use glReadPixels() to pull in the image for some CPU-based processing, but ideally I want to fix this so that the processed image is just passed on as a texture to the next set of shaders.
You should also check out ogles_gpgpu, which even supports Android systems. An overview of this topic is given in the publication Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You can do more advanced GPGPU things with OpenGL ES 3.0 now. Check out this post for an example. Apple now also has the Metal API, which allows even more GPU compute operations. Both OpenGL ES 3.x and Metal are only supported by newer devices with an A7 chip.

Writing texture data onto depth buffer

I'm trying to implement the technique described at: Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
    // write a value to the depth map
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
That stores depth in the red channel. So you've partially recreated the old depth texture extension — you'll have an image with brighter red in pixels that are closer and darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;

void main()
{
    // read a value from the depth map
    lowp vec3 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy).rgb;

    // discard the current fragment if the value stored in the depth map is closer
    if (colourFromDepthMap.r > gl_FragCoord.w) discard;

    // ... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
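For illustration, the packing that is commonly used (not necessarily the exact scheme in that document) spreads the depth value across all four 8-bit channels and reassembles it when sampling, recovering most of the precision lost by the red-only sketch above. It does need highp float support in the fragment shader:

    // Pack a depth value in [0, 1) into four 8-bit channels
    highp vec4 packDepth(highp float depth)
    {
        highp vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }

    // Recover the depth value from a packed RGBA sample
    highp float unpackDepth(highp vec4 enc)
    {
        return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }

The comparison in the second shader above would then test unpackDepth(texture2D(depthMap, ...)) rather than the red channel alone.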
