UIImage/CGImage changing my pixel color - ios

I have an image that is totally white in its RGB components, with varying alpha -- so, for example, 0xFFFFFF09 in RGBA format. But when I load this image with either UIImage or CGImage APIs, and then draw it in a CGBitmapContext, it comes out grayscale, with the RGB components set to the value of the alpha -- so in my example above, the pixel would come out 0x09090909 instead of 0xFFFFFF09. So an image that is supposed to be white, with varying transparency, comes out essentially black with transparency instead. There's nothing wrong with the PNG file I'm loading -- various graphics programs all display it correctly.
I wondered whether this might have something to do with my use of kCGImageAlphaPremultipliedFirst, but I can't experiment with it because CGBitmapContextCreate fails with other values.
The ultimate purpose here is to get pixel data that I can upload to a texture with glTexImage2D. I could use libPNG to bypass iOS APIs entirely, but any other suggestions? Many thanks.

White on a black background with an alpha of x IS a grey value corresponding to x in all the components. That's how multiplicative (premultiplied-alpha) blending works.
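If what you need for glTexImage2D is the straight-alpha data, one workaround is to accept the premultiplied draw and then un-premultiply the buffer by hand. A minimal sketch in C++ (the function name is made up; the 8-bit round trip loses some precision at low alpha values):

#include <CoreGraphics/CoreGraphics.h>
#include <vector>

// Sketch: draw a CGImage into a premultiplied RGBA context, then divide the
// color channels by alpha to approximate the original straight-alpha pixels.
std::vector<unsigned char> straightAlphaPixels(CGImageRef image) {
    size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
    std::vector<unsigned char> buf(w * h * 4);
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buf.data(), w, h, 8, w * 4, rgb,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);
    if (!ctx) return {};
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);
    CGContextRelease(ctx);
    for (size_t i = 0; i < buf.size(); i += 4) {
        unsigned a = buf[i + 3];
        if (a == 0) continue;   // nothing to recover at alpha 0
        buf[i + 0] = (unsigned char)(buf[i + 0] * 255 / a);
        buf[i + 1] = (unsigned char)(buf[i + 1] * 255 / a);
        buf[i + 2] = (unsigned char)(buf[i + 2] * 255 / a);
    }
    return buf;   // suitable for glTexImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, ...)
}

For the example pixel above, the premultiplied 0x09090909 comes back as 9 * 255 / 9 = 255 in each color channel, i.e. 0xFFFFFF09 again.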

Related

iOS how to mask the image background color

I want to do the following in my iOS app:
1. The user draws something on white paper.
2. My app lets the user capture the drawn image. The capture includes the white paper as background.
3. Finally, from the captured image I need to mask out the white background color and get just the drawing into a UIImage object.
I have completed steps 1 and 2, but I have no idea how to do the last step. Is there an OpenCV library that I can use with my iOS app?
Any help would be really appreciated.
Well, since OpenCV itself is THE library, I guess that you are looking for a way to do that with OpenCV.
First, convert the input image to cv::Mat, which is the data type OpenCV uses to represent an image.
Then, assuming the background is white, threshold the Mat to separate the background from whatever the user drew. In the example below, this operation makes the background black, and every pixel that is not black represents something the user has drawn.
Finally, convert the resulting Mat back to UIImage: iterate over the Mat and copy every pixel that is not black into the UIImage, so the UIImage contains only what the user drew.
A better idea is to iterate over the thresholded Mat, figure out which pixels are not black, and instead of copying them directly to the new UIImage, copy those (x,y) pixels from the original UIImage. That way you keep the original colors, which gives a more realistic result.
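A minimal sketch of that pipeline in C++ with OpenCV (the file names and the threshold value 200 are assumptions for illustration; the UIImage/Mat conversions are left out):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("captured_drawing.png");   // step 2's capture
    cv::Mat gray, mask;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    // THRESH_BINARY_INV turns bright (paper) pixels black and the darker
    // drawing white, giving a mask of "what the user drew".
    cv::threshold(gray, mask, 200, 255, cv::THRESH_BINARY_INV);
    // Keep the original colors: use the mask as the alpha channel so the
    // white background becomes transparent.
    cv::Mat bgra;
    cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);
    std::vector<cv::Mat> channels;
    cv::split(bgra, channels);
    channels[3] = mask;
    cv::merge(channels, bgra);
    cv::imwrite("drawing_only.png", bgra);
    return 0;
}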

opengl es2 premultiplied vs straight alpha + blending

OpenGL ES 2 on the iPhone. Some objects are made with multiple layers of sprites with alpha.
Then I also have UI elements that are composited together from various sprites, which I fade in/out over top of everything else. I do the fading by adjusting the alpha in the shader.
My textures are PNGs with alpha made in Photoshop. I deliberately don't premultiply them. I want them to be straight alpha, but in my app they act as if they're premultiplied: I can see a dark edge around a white sprite drawn over a white background.
If I set my blend mode to:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
The elements composite nicely together, no weird edges. But when elements are fading out they POP at the end: they start to fade but never go all the way down to alpha zero. So at the end of the fadeout animation, when I remove the elements, they "pop off" because they're not completely faded out.
If I switch my blend mode to:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The elements fade up and down nicely. But any white on white element has a dark edge around it from what looks like alpha premultiplication. See the white circle drawn over top of the white box:
But the other blends in the scene look good to me. The other transparent objects blend nicely.
Another important note is that my shader handles both opacity and colorizing for elements. For everything that is drawn, I multiply the texture color by an element color, and the final alpha by an opacity value:
void main(void)
{
    // Sample the sprite texture, then tint by the element color and
    // scale the alpha by the element's opacity.
    vec4 texColor = texture2D(u_textureSampler, v_fragmentTexCoord0);
    gl_FragColor = vec4(texColor.rgb * u_baseColor.rgb,
                        texColor.a * u_baseColor.a * u_opacity);
}
This allows me to take a white object in my sprite sheet and make it any color I want. Or darken objects by using a baseColor of grey.
Every time I think I understand these blend modes something like this comes up and I start doubting myself.
Is there some other blend mode combo that will have smooth edges on my sprites and also support alpha fading / blending in the shader?
I'm assuming the GL_SRC_ALPHA is needed to blend using alpha in the shader.
Or is my problem that I need to use something other than PSD to save my sprite sheet? Cause that would be almost impossible at this point.
UPDATE:
I think the answer might just be NO, that there is no way to do what I want. The 2nd blend mode should be correct. But it's possible that the RGB is being double-multiplied by the alpha somewhere, or that it's premultiplied in the source file. I even tried premultiplying the alpha myself in the shader above by adding:
gl_FragColor.rgb *= gl_FragColor.a;
But that just makes the fades look bad, as things turn grey as they fade out. If I premultiply the alpha myself and use the other blend mode above, things appear about the same as in my original: they fade out without popping, but you can still see the halo.
Here's a great article on how to avoid dark fringes with straight alpha textures http://www.realtimerendering.com/blog/gpus-prefer-premultiplication/
If you're using mip-maps, that might be why your straight alpha textures have dark fringes -- filtering a straight alpha image can cause that to happen, and the best solution is to pre-multiply the image before the mip-maps are created. There's a common hack fix as well described in that article, but seriously consider pre-multiplying instead.
Using straight alpha to create the textures is often necessary and preferred, but it's still better to pre-multiply them as part of a build step, or during the texture load than to keep them as straight-alpha in memory. I'm not sure about OpenGL ES, but I know WebGL lets you pre-multiply textures on the fly during load by using gl.pixelStorei with a gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL argument.
Another possibility: if you're compositing many elements, your straight-alpha blending function is incorrect. To do a correct "over" operator with a straight-alpha image, you need to use this blending function; maybe this is what you were looking for:
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
The common straight-alpha blending function you referred to (gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)) does not handle the destination alpha channel correctly. You have to use separate blend functions for color and alpha when blending a straight-alpha source if you intend to composite many layers using an "over" operator. (Think about how you probably don't want to interpolate the alpha channel: it should always end up more opaque than both the source and the destination.) And take special care when you blend, because the result of a straight-alpha blend is a premultiplied image! So if you use the result later, you still have to be prepared to do premultiplied blending. For a longer explanation, I wrote about this here: https://limnu.com/webgl-blending-youre-probably-wrong/
The nice thing about using premultiplied images & blending is that you don't have to use separate blend funcs for color & alpha, and you automatically avoid a lot of these issues. You can & should create straight-alpha textures, but then pre-multiply them before or during load, and use premult blending (glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)) throughout your code.
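Putting the two workable setups side by side, a sketch in C++ (the function names and the RGBA8 premultiply loop are illustrative; the iOS header path is OpenGLES/ES2/gl.h, elsewhere it's GLES2/gl2.h):

#include <OpenGLES/ES2/gl.h>
#include <cstddef>

// Option 1: straight-alpha source with a correct "over" operator.
// Separate funcs so the destination alpha isn't interpolated incorrectly.
void useStraightAlphaBlending() {
    glEnable(GL_BLEND);
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                        GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);
}

// Option 2: premultiply the RGBA8 buffer once at load time, then blend
// with the premultiplied function.
void premultiplyAndBlend(unsigned char *pixels, size_t width, size_t height) {
    for (size_t i = 0; i < width * height * 4; i += 4) {
        unsigned a = pixels[i + 3];
        pixels[i + 0] = (unsigned char)(pixels[i + 0] * a / 255);
        pixels[i + 1] = (unsigned char)(pixels[i + 1] * a / 255);
        pixels[i + 2] = (unsigned char)(pixels[i + 2] * a / 255);
    }
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}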
AFAIK, glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); is for premultiplied alpha, and it should work well if you use colors as is. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); is for straight alpha.
Many texture-loading frameworks implicitly convert images into premultiplied-alpha format. This is because many of them re-draw the image into a new image, and CGBitmapContext doesn't support straight-alpha (non-premultiplied) images. Consequently, they usually generate premultiplied-alpha images. So please look at your texture-loading code and check whether it converts to premultiplied format.
Also, Photoshop (of course, Adobe's) implicitly erases the color information of fully transparent (alpha = 0) pixels when exporting to PNG. If you use linear or any other texture filtering, the GPU will sample neighboring pixels, and the colors of transparent pixels will bleed into pixels at the edges. But Photoshop has already erased that color information, so those pixels end up with effectively random color values.
Theoretically, this color bleeding can be fixed by keeping correct color values in the transparent pixels. With Photoshop, however, there is no practical way to export a PNG that keeps those color values, because Photoshop doesn't respect invisible pixels. (A dedicated Photoshop PNG-exporter plug-in would be required to export them correctly; I couldn't find an existing one that supports this.)
Premultiplied alpha is good enough for just displaying the image, but it won't work well if you do any color manipulation in shaders, because colors are stored in integer form, which usually doesn't have enough precision to restore the original color value. If you need precise color manipulation, use straight alpha, and avoid Photoshop.
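A worked example of that precision loss with 8-bit integer channels (the values are chosen for illustration):

#include <cstdio>

int main() {
    // Straight color 200 at alpha 10, stored premultiplied in 8 bits:
    unsigned premult = 200 * 10 / 255;        // 7 (rounded down)
    // Un-premultiplying cannot restore the original value:
    unsigned restored = premult * 255 / 10;   // 178, not 200
    std::printf("premultiplied=%u restored=%u\n", premult, restored);
    return 0;
}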
Update
Here's my test result with @SlippD.Thompson's test code on the iPhone simulator (64-bit / iOS 7.x):
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 2048 bytes/row.
cgContext with CGImageAlphaInfo 0: (null)
cgContext with CGImageAlphaInfo 1: <CGContext 0x1092301f0>
cgContext with CGImageAlphaInfo 2: <CGContext 0x1092301f0>
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 2048 bytes/row.
cgContext with CGImageAlphaInfo 3: (null)
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaFirst; 2048 bytes/row.
cgContext with CGImageAlphaInfo 4: (null)
cgContext with CGImageAlphaInfo 5: <CGContext 0x1092301f0>
cgContext with CGImageAlphaInfo 6: <CGContext 0x1092301f0>
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 0-component color space; kCGImageAlphaOnly; 2048 bytes/row.
cgContext with CGImageAlphaInfo 7: (null)
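The referenced test code itself isn't shown above; a minimal C++ sketch that would produce output of that shape is (the buffer dimensions are arbitrary, chosen so that 512 * 4 matches the 2048 bytes/row in the log):

#include <CoreGraphics/CoreGraphics.h>
#include <cstdio>

int main() {
    const size_t w = 512, h = 512;
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    // CGImageAlphaInfo values 0..7: None, PremultipliedLast, PremultipliedFirst,
    // Last, First, NoneSkipLast, NoneSkipFirst, Only.
    for (uint32_t info = kCGImageAlphaNone; info <= kCGImageAlphaOnly; info++) {
        CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w * 4, rgb, info);
        std::printf("cgContext with CGImageAlphaInfo %u: %p\n", info, (void *)ctx);
        if (ctx) CGContextRelease(ctx);
    }
    CGColorSpaceRelease(rgb);
    return 0;
}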

Resize png image in Delphi - incorrect alpha channel

I am resizing png images which might have alpha channel.
Everything works well, with one exception:
I get some gray pixels around the transparent areas.
The original image doesn't have any drop shadows.
Is there a way to fix this / work it around?
I am using SmoothResize by Gustavo Daud (see the first answer to this question) to resize the PNG image.
I cannot provide the code that I am using as I did not write it and do not have the author's permission to publish it.
I suspect that this is caused by two things: funny RGBA values in the PNG and naive resizing code.
You need to check your PNG's contents. You are looking for the RGB values in the transparent areas. Despite transparent areas having alpha at 0, they still carry RGB info. In your case I would expect the transparent areas to be filled with a black RGB value, and that is what can cause the grey outline after a naive resize. Example: what happens if the code resizes two adjacent pixels (0,0,0,0) and (255,255,255,255) into one? Both pixels contribute 50%, so the result is (128,128,128,128), which is semi-transparent grey. The same thing happens when you upscale by e.g. x1.5: the added pixel in between the original two will be grey. Usually this does not happen, because image-editing software is smart enough to fill those invisible pixels with the color of the nearest visible pixel.
You can try to "fix" the PNG by filling the transparent areas with white (or whatever color borders the visible parts of your images).
Another approach is to use more advanced resizing code (write it or find a library) that ignores the RGB values of transparent pixels, e.g. by weighting each pixel's contribution by its alpha, as sketched below.
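To make the difference concrete, a C++ sketch of the two ways to average a pair of RGBA pixels (the struct and function names are made up for illustration):

struct RGBA { unsigned char r, g, b, a; };

// Naive average: transparent black + opaque white = semi-transparent grey.
RGBA naiveAverage(RGBA p, RGBA q) {
    return { (unsigned char)((p.r + q.r) / 2), (unsigned char)((p.g + q.g) / 2),
             (unsigned char)((p.b + q.b) / 2), (unsigned char)((p.a + q.a) / 2) };
}

// Alpha-weighted average: the RGB of a fully transparent pixel contributes
// nothing, so the visible color stays clean.
RGBA alphaWeightedAverage(RGBA p, RGBA q) {
    unsigned wsum = p.a + q.a;
    if (wsum == 0) return RGBA{0, 0, 0, 0};
    return { (unsigned char)((p.r * p.a + q.r * q.a) / wsum),
             (unsigned char)((p.g * p.a + q.g * q.a) / wsum),
             (unsigned char)((p.b * p.a + q.b * q.a) / wsum),
             (unsigned char)(wsum / 2) };
}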

Preserve color of transparent pixels using CGContextDrawImage?

I'm loading image data from a TIFF stored on disk in to a buffer which I subsequently use to create an OpenGL texture. I'm getting at the data by writing to a CGContext. The original image is 100% white on every single pixel. The only thing that changes from one pixel to the next is the alpha value.
When I write to the CGContext, the color of the transparent pixels isn't preserved. "Why do you care about the color of transparent pixels" you ask? When the image is scaled, the color of the transparent pixels can become visible, creating ugly dark outline artifacts.
I've tried reading the data directly from the CGImage into a buffer and using that buffer to create my texture (using CGImageGetDataProvider(image)), but this only works in cases where the color space of the CGImage is RGBA. Presumably, CGContextDrawImage handles converting from one color space to another.
Is there any way I can tell CGContextDrawImage to preserve the color of transparent pixels? Or am I going to have to load my images some other way?
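For reference, a C++ sketch of that direct path (the function name is made up; this is only safe once you've verified the CGImage really is 8-bit RGBA, and it ignores bytes-per-row padding for brevity):

#include <CoreGraphics/CoreGraphics.h>
#include <OpenGLES/ES2/gl.h>

// Read the CGImage's backing bytes without redrawing, so the RGB values of
// fully transparent pixels survive untouched.
void uploadWithoutRedraw(CGImageRef image) {
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 (GLsizei)CGImageGetWidth(image), (GLsizei)CGImageGetHeight(image),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, CFDataGetBytePtr(data));
    CFRelease(data);
}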

Convert to grayscale and reduce the size

I am trying to develop an OCR in VB6 and I have some problems with the BMP format. I have been investigating the OCR process and the first step is to convert the image to "black and white" with a threshold. The conversion process is easy to understand and I have done it. However, I'm trying to reduce the size of the resulting image, because it uses fewer colors (each pixel has only 256 possible values in grayscale). In the original image I have 3 colors (red, green and blue) but now I only need one color (the grayscale value). At the moment I have achieved the conversion, but the resulting grayscale images have the same size as the original color image (I assign the same value to all three channels).
I have tried to modify the header of the BMP file, but I haven't achieved anything, and now I don't understand how it works. For example, if I convert the image with Paint, the offset specified in the header changes its value. If the header is constant, why does the offset change?
The thing is that a grey-scale bitmap image is the same size as a color bitmap image, because the data used to store a grey color takes just as much space as a color.
The only difference is that grey is just the same value three times, (160,160,160) for example, while a color is something like (123,200,60). The grey values are just a small subset of the RGB field.
You can trim down the size after converting to grey-scale by converting the image from 24-bit to 16-bit or 8-bit, for example. Whether that is already supplied for you depends on what you are using to do the conversion; otherwise you'll have to write it yourself.
You can also try using something other than BMP images. PNG files are lossless too, and would save space even in the 24-bit version. Image-processing libraries usually give you several options for output formats; otherwise you can probably find a library that does this for you.
You can write your own conversion using a "lockbits" method. It takes a while to understand how to lock/unlock bits correctly, but the effort is worth it, and once you have the code working you'll see how it can be applied to other scenarios. For example, using a lock/unlock bits technique you can access the pixel values of a bitmap, copy those pixel values into an array, manipulate the array, and then copy the modified array back into the bitmap. That's much faster than calling GetPixel() and SetPixel(). It's still not the fastest image-manipulation code one can write, but it's relatively easy to implement and maintain.
It's been a while since I've written VB6 code, but Bob Powell's site often has good examples, and he has a page about lock bits:
https://web.archive.org/web/20121203144033/http://www.bobpowell.net/lockingbits.htm
In a pinch you could create a new Bitmap of the appropriate format and call SetPixel() for every pixel:
Every pixel (x,y) in your 24-bit color image will have a color value (r,g,b)
After conversion to a 24-bit gray image, each pixel (x,y) will have three equal color-channel values; that can be expressed as (n,n,n), as Willem wrote in his reply. If all three colors R,G,B have the same value, then you can say that value is the "grayscale" value of that pixel. This is the same shade of gray that you will see in your final 8-bit bitmap.
Call SetPixel for each pixel (x,y) in a newly created 8-bit bitmap that is the same width and height as the original color image.
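As a sketch of that channel-collapsing step (in C++ rather than VB6, with illustrative names): once R, G, and B are equal, one byte per pixel is enough.

#include <cstddef>
#include <cstdint>
#include <vector>

// Collapse a 24-bit buffer whose channels are already equal (r == g == b)
// into an 8-bit grayscale buffer: one byte per pixel instead of three.
std::vector<uint8_t> toGray8(const uint8_t *rgb, int width, int height) {
    std::vector<uint8_t> gray(static_cast<size_t>(width) * height);
    for (size_t i = 0; i < gray.size(); i++) {
        gray[i] = rgb[i * 3];   // channels are equal, so any one will do
    }
    return gray;
}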
