iOS OpenGL image invert

I'm trying to create an inverse of an image as part of an app.
I have implemented some image processing using openGL, following the GLImageProcessing demo from Apple.
My image is loaded from a .jpg and uploaded into a texture. I can happily modify contrast and brightness.
However, I am having unsatisfactory results when trying to create an inverse image (swap black for white).
I've tried a few approaches, but nothing works well on the device.
So far, I have tried:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
this works well in the simulator, but on the device produces unstable results.
I have also tried:
glBlendFunc(GL_ONE_MINUS_SRC_COLOR, GL_ONE_MINUS_DST_COLOR);
this produces the same unstable results as before.
Finally, I have tried:
glLogicOp(GL_XOR);
this simply toggles between the image and a black screen.
Can you suggest a (relatively) simple solution to invert my image?

How about this shader:
precision mediump float;
uniform sampler2D texture;
varying vec2 textureCoordinate;

void main() {
    vec4 color = texture2D(texture, textureCoordinate);
    float inverted = 1.0 - color.r;                // invert the red channel
    vec4 inverted_vec = vec4(vec3(inverted), 1.0); // replicate across RGB (grayscale result)
    gl_FragColor = clamp(inverted_vec, 0.0, 1.0);
}

If you can use OpenGL ES 2.0 (which you should be able to do safely, given the tiny fraction of 1.1-only devices in the wild), a simple way to do this would be to use my open source GPUImage framework and its GPUImageColorInvertFilter.
The following code loads your image from disk, creates a color invert filter, applies that filter, and then spits out a UIImage for you to use:
// Load the source image, create the invert filter, and filter the image
UIImage *inputImage = [UIImage imageNamed:@"image.jpg"];
GPUImageColorInvertFilter *stillImageFilter = [[GPUImageColorInvertFilter alloc] init];
UIImage *quickFilteredImage = [stillImageFilter imageByFilteringImage:inputImage];
If you want to just display the resulting image to the screen, or use it as a texture, you can instead direct the filter to a GPUImageView or GPUImageTextureOutput to avoid some Core Graphics overhead (due to the generation of the output UIImage).
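A minimal sketch of that routing, assuming a GPUImageView named gpuImageView has already been placed in your interface:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageColorInvertFilter *invertFilter = [[GPUImageColorInvertFilter alloc] init];
[stillImageSource addTarget:invertFilter];
[invertFilter addTarget:gpuImageView]; // render straight to the view, no UIImage round trip
[stillImageSource processImage];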

If you have no transparency, I suggest you first draw a white rectangle where the image should be (no texture, just pure white color). Then use glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR), disable writes to the alpha channel with glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE), and draw the grayscale texture. This should give you the desired effect if you have no need for transparency; otherwise you should do the same using an FBO with a texture attached. Also, after you draw the texture, re-enable the alpha channel.
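A rough sketch of that two-pass approach; the drawQuad and drawTexturedQuad helpers here are placeholders for your own geometry submission:
glDisable(GL_TEXTURE_2D);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);                // pass 1: pure white rectangle
drawQuad();
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR);     // dst = dst * (1 - src): over white this yields 1 - src
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE); // don't touch alpha
drawTexturedQuad();                               // pass 2: the grayscale texture
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  // re-enable alpha writes
glDisable(GL_BLEND);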

Related

Photoshop like brush with OpenCV

I am trying to add a blurred brush stroke with OpenCV.
If I use cv.add or cv.addWeighted, I only get half of the red color (it looks pink),
but I want the red to cover the underlying picture, not blend with it.
If I use copyTo or clone, I can't get the blurred edge, so how should I do it?
The background of your brush image is black. If you were in Photoshop and set that brush layer's blend mode to Screen, the red gradient would show through and the black background would become transparent.
Here are a couple of relevant posts:
How does photoshop blend two images together?
Trying to translate formula for blending mode
Alternatively, if you're using OpenCV 3.0, you can try seamlessClone.
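For reference, a Screen blend is simple to express directly in OpenCV. A sketch, assuming the usual org.opencv imports and two 8-bit BGR Mats of equal size named base and brush:
// Screen blend: result = 255 - (255 - base) * (255 - brush) / 255
Mat baseInv = new Mat(), brushInv = new Mat(), screened = new Mat();
Core.bitwise_not(base, baseInv);
Core.bitwise_not(brush, brushInv);
Core.multiply(baseInv, brushInv, screened, 1.0 / 255.0);
Core.bitwise_not(screened, screened); // black in the brush leaves the base untouched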
I found a trick to do it: simply copy the area to a new Mat, draw the circle and blur it on the new Mat, then use addWeighted to blend it back into the image with whatever alpha I need.
// Grab a 40x40 region of interest around the brush position
Mat roi = myImage.submat(y - 20, y + 20, x - 20, x + 20);
Mat mix = roi.clone();
Imgproc.circle(mix, new Point(20, 20), 12, color, -1); // solid brush dot
Imgproc.GaussianBlur(mix, mix, new Size(21, 21), 0);   // soften the edge
Core.addWeighted(mix, alpha, roi, beta, 0.0, mix);     // blend with the original pixels
mix.copyTo(roi);                                       // write the result back into the image

GPUImage GPUImageFalseColorFilter giving wrong color and coloring transparent areas

I have a green button with a white icon and title. I am trying to use the GPUImage library that I just learned about to change the green to blue, but keep the white as white. Here is my code:
UIImage *inputImage = [UIImage imageNamed:@"pause-button"];
GPUImageFalseColorFilter *colorSwapFilter = [[GPUImageFalseColorFilter alloc] init];
colorSwapFilter.firstColor = (GPUVector4){0.0f, 0.0f, 1.0f, 1.0f};
colorSwapFilter.secondColor = (GPUVector4){1.0f, 1.0f, 1.0f, 1.0f};
UIImage *filteredImage = [colorSwapFilter imageByFilteringImage:inputImage];
There are two problems:
1. The resulting image is a pale purple instead of blue, almost as though the blue is being overlaid at 50% opacity or something and the original green was set to white.
2. The button isn't a rectangle (more of an oval), and the transparent areas of the PNG (the corners) are now filled in with a semi-transparent blue (well, pale purple actually). Basically, the button is now a rectangle with a darker oval in the middle.
Am I using this filter incorrectly? Do I have to do some pre-processing before using this filter?
The GPUImageFalseColorFilter is probably not what you want to use to alter the hue of something. It's a reimplementation of the Core Image filter of the same name, which first reduces an image to its luminance and then replaces white with one color and black with another. Instead of a grayscale image, you get a variable mix between these two colors. I also don't think it respects alpha channels at present.
You might want something more like a GPUImageHueFilter (again, not sure if it respects alpha) or a GPUImageLookupFilter. You might need to build a custom filter to locate a color within a certain threshold (look at the chroma keying ones for that) and to replace that with your given color. Hue changes might do the job, though.
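For example, a hue rotation attempt might look like the following sketch; the 90-degree value is an assumption you would tune until the green lands on blue:
UIImage *inputImage = [UIImage imageNamed:@"pause-button"];
GPUImageHueFilter *hueFilter = [[GPUImageHueFilter alloc] init];
hueFilter.hue = 90.0; // degrees of rotation around the color wheel; adjust to taste
UIImage *filteredImage = [hueFilter imageByFilteringImage:inputImage];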

How to encode emission or specular info in the alpha of an OpenGL texture

I have an OpenGL texture with a UV map on it. I've read about using the alpha channel to store some other value, which saves needing to load an extra map from somewhere. For example, you could store specular info (shininess) or an emission map in the alpha, since you only need a float for that and the alpha isn't otherwise being used.
So I tried it. Writing the shader isn't the problem. I have all that part worked out. The problem is just getting all 4 channels in to the texture like I want.
I have all the maps, so in Photoshop I put the base map in the RGB and the emission map in the alpha. But when you save as PNG, the alpha either doesn't save (if you add it as a new channel) or it trashes the RGB by premultiplying the transparency into the RGB (if you apply the map as a mask).
Apparently PNG files support transparency but not alpha channels per se. So there doesn't appear to be a way to control all 4 channels.
But I have read about doing this. So what format can I save in from Photoshop that I can load with my image loader on the iPhone?
NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:type];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
Does this method accept other file formats? Like TIFF which would allow me to control all 4 channels?
I could use texturetool to make a PVR, but from the docs it appears to also take a PNG as input.
EDIT:
First, to be clear, this is on the iPhone.
It might be Photoshop's fault. Like I said, there are two ways to set up the document in my version of Photoshop (CC 14.2, Mac) that I can find. One is to manually add a new channel and paste the map in there; it shows up as a red overlay. The second is to add a mask, option-click it, and paste the alpha in there. In that case it shows the alpha as transparency, with the checkerboard in the alpha-zero areas. When I save as PNG, the alpha option greys out.
And when I load the PNG back into Photoshop, it appears to be premultiplied. I can't get back to my full RGB data in Photoshop.
Is there a different tool I can use to merge the two maps into a PNG that will store it as PNG-32?
TIFF won't work because it doesn't store alpha either. Maybe I was thinking of TGA.
I also noticed this in my loader...
GLuint width = (GLuint)CGImageGetWidth(image.CGImage);
GLuint height = (GLuint)CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Allocate an RGBA8 buffer and draw the image into it via Core Graphics
void *imageData = malloc( height * width * 4 );
CGContextRef thisContext = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
if (flipImage)
{
    CGContextTranslateCTM (thisContext, 0, height);
    CGContextScaleCTM (thisContext, 1.0, -1.0);
}
CGColorSpaceRelease( colorSpace );
CGContextClearRect( thisContext, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( thisContext, CGRectMake( 0, 0, width, height ), image.CGImage );
// Bind the target texture before uploading; binding after glTexImage2D (as the
// original code did) uploads the pixels to whatever texture was bound previously
glBindTexture(GL_TEXTURE_2D, textureInfo[texIndex].texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease( thisContext );
free( imageData );
When I create that context the option is kCGImageAlphaPremultipliedLast.
Maybe I do need to try the glkit loader, but it appears that my png is premultiplied.
It is possible to create a PNG with an alpha channel, but you will not be able to read that PNG using the built-in iOS APIs without premultiplication. The core issue is that Core Graphics only supports premultiplied alpha, for performance reasons. You also have to be careful to disable Xcode's optimization of PNGs attached to the project file, because it does the premultiplication at compile time.
What you could do is compile and link in your own copy of libpng after turning off the Xcode PNG processing, and then read the file directly with libpng at the C level. But honestly, this is kind of a waste of time.
Just save one image with the RGB values and another as grayscale, with the extra data stored as 0-255 gray values. Those grayscale values can mean anything you want, and you will not have to worry about premultiplication messing things up. Your OpenGL code will just need to read from two textures instead of one, which is not a big deal.
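A minimal fragment shader sketch of that two-texture approach; the uniform and varying names here are assumptions, and the emission term is just one example of what the grayscale data could mean:
uniform sampler2D baseMap;      // RGB color texture
uniform sampler2D emissionMap;  // grayscale texture carrying the extra channel
varying vec2 textureCoordinate;

void main() {
    vec3 base = texture2D(baseMap, textureCoordinate).rgb;
    float emission = texture2D(emissionMap, textureCoordinate).r; // 0-255 stored as 0.0-1.0
    gl_FragColor = vec4(base + base * emission, 1.0);             // brighten by the emission amount
}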

Preventing antialiasing from CALayer renderInContext

I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected) - if I examine the image at the pixel level it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be anti-aliasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene; it's pure black and white, and the dimensions of the "screenshot" image are the same as the one I am generating programmatically, but I cannot seem to figure out where the grey haloing is coming from.
UIView *printView = fullView;
// Scale 0.0 means "use the device's screen scale"
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of my renderInContext to a monochrome printer, having grey pixels causes some ugly artifacting due to the printer's dithering algorithm.
So, how can I get renderInContext to produce the same pixel-level output of my views as a real device screenshot - i.e. just black and white as is what is in my original scene?
It turns out the problem was related to the resolution of the underlying UIImage being used by the UIImageView. The UIImage was a CGImage created using a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView however I am using an iOS device with a retina display.
Because the CGImage dimensions were specified in non-retina size, renderInContext was upscaling the CGImage and apparently this upscaling behaves differently than what is done by the actual screen rendering. (For some reason the real screen rendering upscaled without adding any grey pixels.)
To fix this, I created my CGImage with double the dimension of the UIImageView, then my call to renderInContext produces a much better black and white image. There are still a few grey pixels in some of the white area, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0; with that, the UIImageView's black pixels rendered with no grey halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the retina display), the grey haloing appeared.
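For reference, the diagnostic described above amounts to this one-line change to the snippet from the question:
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 1.0); // force 1.0 instead of the retina default of 2.0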
I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest.
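In code (a sketch; set these before calling renderInContext):
printView.layer.magnificationFilter = kCAFilterNearest;
printView.layer.minificationFilter = kCAFilterNearest;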
Are the images displayed in UIImageView instances? Is printView their superview?

WebGL transparent black

I have a strange problem that I can't figure out when trying to do blending in WebGL. Black is rendered fully transparent, and everything with shades of grey in it is also rendered semi-transparent. I have set it to use the alpha channel as the source for transparency, and in some respects it works: everything that isn't black/grey renders differently when I change the alpha value, but even when I set the alpha to 1, black is still displayed as transparent.
This is how I enable transparency:
this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE);
this.gl.enable(this.gl.BLEND);
this.gl.disable(this.gl.DEPTH_TEST);
And the part of the shader that does transparency:
gl_FragColor = vec4(texColor.rgb * vLightWeight, texColor.a * uAlpha);
where texColor is the texture color that is being sampled, vLightWeight is the shadowing that is being calculated in the vertex shader, and uAlpha the uniform which I use for transparency.
I think you should have gl.ONE_MINUS_SRC_ALPHA where you currently have gl.ONE. With gl.ONE as the destination factor the blend is additive: the background is kept at full strength and the source color is simply added on top, so black (which adds nothing) disappears completely no matter what its alpha is.
gl.blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA );
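Applied to the setup code from the question, that would be (sketch):
this.gl.blendFunc(this.gl.SRC_ALPHA, this.gl.ONE_MINUS_SRC_ALPHA);
this.gl.enable(this.gl.BLEND);
this.gl.disable(this.gl.DEPTH_TEST);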
