Resize png image in Delphi - incorrect alpha channel

I am resizing png images which might have alpha channel.
Everything works well, with one exception:
I get some gray pixels around the transparent areas.
The original image doesn't have any drop shadows.
Is there a way to fix this or work around it?
I am using SmoothResize by Gustavo Daud (see the first answer to this question) to resize the PNG image.
I cannot provide the code that I am using as I did not write it and do not have the author's permission to publish it.

I suspect this is caused by two things: odd RGBA values in the PNG and naive resizing code.
You need to check your PNG contents. You are looking at the RGB values in the transparent areas. Even though transparent areas have alpha at 0, they still carry RGB information. In your case I would expect that the transparent areas are filled with a black RGB value, and that is what causes the grey outline after a naive resize. Example: what happens if the code resizes 2 adjacent pixels, (0,0,0,0) and (255,255,255,255), into one? Both pixels contribute 50%, so the result is (128,128,128,128), which is semi-transparent grey. The same thing happens when you upscale by e.g. x1.5: the added pixel in between the original two will be grey. Usually this does not happen because image-editing software is smart enough to fill those invisible pixels with the color of the nearest visible pixel.
You can try to "fix" the PNG by filling the transparent areas with white (or another color that appears on the border of your images).
Another approach is to use more advanced resizing code (write it or find a library) that ignores the RGB values of transparent pixels (e.g. by taking RGB from the closest non-transparent pixel).
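For illustration, here is a minimal sketch of the weighting idea (this is not the SmoothResize code; the RGBA struct and function name are made up for the example): when averaging pixels, weight each pixel's RGB contribution by its alpha, so fully transparent pixels contribute no color.

    // Average two RGBA pixels, weighting each pixel's RGB by its alpha
    // so that fully transparent pixels contribute no color.
    struct RGBA { var r, g, b, a: Double }   // components in 0...255

    func alphaWeightedAverage(_ p: RGBA, _ q: RGBA) -> RGBA {
        let totalAlpha = p.a + q.a
        guard totalAlpha > 0 else {            // both fully transparent:
            return RGBA(r: 0, g: 0, b: 0, a: 0)
        }
        // Each pixel's color counts in proportion to its alpha.
        return RGBA(r: (p.r * p.a + q.r * q.a) / totalAlpha,
                    g: (p.g * p.a + q.g * q.a) / totalAlpha,
                    b: (p.b * p.a + q.b * q.a) / totalAlpha,
                    a: totalAlpha / 2)         // plain average of alpha
    }

    // Naive averaging of (0,0,0,0) and (255,255,255,255) gives grey
    // (128,128,128,128); alpha weighting keeps the color white instead:
    let mixed = alphaWeightedAverage(RGBA(r: 0, g: 0, b: 0, a: 0),
                                     RGBA(r: 255, g: 255, b: 255, a: 255))
    // mixed is (255, 255, 255, 127.5): semi-transparent white, no grey.

The same effect can be achieved by premultiplying the whole image by alpha before resizing and unpremultiplying afterwards.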

Related

Metal - How to overlap textures based on color

I'm trying to use a render pass descriptor to draw two grayscale textures. I am drawing a black square first, then a light gray square after. The second square partially covers the first.
With this setup, the light gray square will always appear in front of the black square because it was drawn most recently in the render pass. However, I would like to know if there is a way to draw the black square above the light gray one based on its brightness. Since the squares only partially overlap, is there a way to still have the black square appear on top simply because it has a darker pixel value?
Currently it looks something like this, where the gray square is drawn second so it appears on top.
What I would like is to be able to still draw the gray square second, but have it appear underneath based on the pixel brightness, like so:
I think MTLBlendOperationMin will do what you want: https://developer.apple.com/documentation/metal/mtlblendoperation/mtlblendoperationmin?language=objc
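For reference, enabling it looks roughly like this (a Swift sketch; the rest of the pipeline setup is assumed):

    import Metal

    // Sketch: with .min, blending keeps the per-channel minimum of the
    // source and destination fragments, so the darker square "wins"
    // wherever the two overlap, regardless of draw order.
    let descriptor = MTLRenderPipelineDescriptor()
    // ... set vertex/fragment functions and pixelFormat as usual ...
    let attachment = descriptor.colorAttachments[0]!
    attachment.isBlendingEnabled = true
    attachment.rgbBlendOperation = .min
    attachment.alphaBlendOperation = .min
    // Min/max blend operations don't apply the blend factors, but the
    // factors still need to hold valid values.
    attachment.sourceRGBBlendFactor = .one
    attachment.destinationRGBBlendFactor = .one
    attachment.sourceAlphaBlendFactor = .one
    attachment.destinationAlphaBlendFactor = .one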

Detecting "surrounded" areas in a binary image

From another operation, I get a B/W (binary) image which has white and black areas. Now I want to find and flood-fill the black areas that are completely surrounded by white and are not touching the image border.
The "brute-force" approach i used, which is basically iterating over all pixels( all but the "border" rows/cols), if it finds a black one, I look at the neighbours ( mark them as "visited" ) and if they are black recursively go to their neighbours. And if I only hit white pixels and don't end up at a border I floodfill the area.
This can take a while on a high resolution image.
Is there a faster way to do this that is not too complicated?
Thank you.
As you have a binary image, you can perform a connected component labeling of the black components. All the components found are surrounded by white. Then you go along the borders in order to find the components that touch the border, and you delete them.
Another, simpler and faster solution would be to go along the borders, and as soon as you find a black pixel, start a seed fill that expands through all connected black pixels and turns them white. Doing that, you delete all the black components touching the borders. Only the black components that do not touch the borders will remain.
If most black areas are not touching the border, doing the reverse is likely faster (but equally complicated).
From the border, mark every reachable pixel (reachable meaning you can get to the border via only black pixels). After this, do a pass over the whole image. Anything black and not visited is a surrounded area.
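A sketch of that border-first pass (assuming the image is given as a 2D Bool array where true means black; an explicit queue is used instead of recursion so large regions can't blow the stack):

    // Mark every black pixel reachable from the border via black pixels
    // (BFS), then flood-fill whatever black pixels were never reached:
    // those are exactly the surrounded areas.
    func fillSurroundedAreas(_ black: inout [[Bool]]) {
        guard !black.isEmpty else { return }
        let h = black.count, w = black[0].count
        var reachable = Array(repeating: Array(repeating: false, count: w),
                              count: h)
        var queue: [(Int, Int)] = []

        // Seed with every black pixel on the image border.
        for y in 0..<h {
            for x in 0..<w
            where (y == 0 || y == h - 1 || x == 0 || x == w - 1) && black[y][x] {
                reachable[y][x] = true
                queue.append((y, x))
            }
        }

        // Expand through 4-connected black neighbours.
        var head = 0
        while head < queue.count {
            let (y, x) = queue[head]; head += 1
            for (dy, dx) in [(-1, 0), (1, 0), (0, -1), (0, 1)] {
                let ny = y + dy, nx = x + dx
                if ny >= 0, ny < h, nx >= 0, nx < w,
                   black[ny][nx], !reachable[ny][nx] {
                    reachable[ny][nx] = true
                    queue.append((ny, nx))
                }
            }
        }

        // Anything still black but never reached is surrounded: make it white.
        for y in 0..<h {
            for x in 0..<w where black[y][x] && !reachable[y][x] {
                black[y][x] = false
            }
        }
    }

This visits each pixel a constant number of times, so it is linear in the image size.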

Preserve color of transparent pixels using CGContextDrawimage?

I'm loading image data from a TIFF stored on disk in to a buffer which I subsequently use to create an OpenGL texture. I'm getting at the data by writing to a CGContext. The original image is 100% white on every single pixel. The only thing that changes from one pixel to the next is the alpha value.
When I write to the CGContext, the color of the transparent pixels isn't preserved. "Why do you care about the color of transparent pixels" you ask? When the image is scaled, the color of the transparent pixels can become visible, creating ugly dark outline artifacts.
I've tried reading the data directly from the CGImage into a buffer and using that buffer to create my texture (using CGImageGetDataProvider(image)), but this only works in cases where the color space of the CGImage is RGBA. Presumably, CGContextDrawImage handles converting from one color space to another.
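(For reference, the direct-read path described here looks roughly like this in Swift; a sketch, only useful when the image's native layout already matches what the texture upload expects:)

    import CoreGraphics
    import Foundation

    // Read a CGImage's backing bytes directly, bypassing CGContextDrawImage.
    // The bytes come back in the image's native format, which may not be
    // the RGBA layout the texture upload expects.
    func rawPixelData(of image: CGImage) -> Data? {
        guard let provider = image.dataProvider,
              let cfData = provider.data else { return nil }
        return cfData as Data
    }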
Is there any way I can tell CGContextDrawImage to preserve the color of transparent pixels? Or am I going to have to load my images some other way?

UIImage/CGImage changing my pixel color

I have an image that is totally white in its RGB components, with varying alpha -- so, for example, 0xFFFFFF09 in RGBA format. But when I load this image with either UIImage or CGImage APIs, and then draw it in a CGBitmapContext, it comes out grayscale, with the RGB components set to the value of the alpha -- so in my example above, the pixel would come out 0x09090909 instead of 0xFFFFFF09. So an image that is supposed to be white, with varying transparency, comes out essentially black with transparency instead. There's nothing wrong with the PNG file I'm loading -- various graphics programs all display it correctly.
I wondered whether this might have something to do with my use of kCGImageAlphaPremultipliedFirst, but I can't experiment with it because CGBitmapContextCreate fails with other values.
The ultimate purpose here is to get pixel data that I can upload to a texture with glTexImage2D. I could use libPNG to bypass iOS APIs entirely, but any other suggestions? Many thanks.
White on a black background with an alpha of x IS a grey value corresponding to x in all the components. That's how premultiplied alpha works.
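In other words, premultiplication stores each channel as channel × alpha / 255, which is why 0xFFFFFF09 comes back as 0x09090909:

    // Premultiplied alpha: channel = channel * alpha / 255.
    // White (0xFF) with alpha 0x09 -> 0xFF * 0x09 / 0xFF = 0x09 in every
    // channel, exactly the 0x09090909 pixel described in the question.
    func premultiply(_ channel: Int, alpha: Int) -> Int {
        channel * alpha / 255
    }
    let grey = premultiply(0xFF, alpha: 0x09)   // 9, i.e. 0x09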

Cleaning up speckles around text in a scanned image

I've tried -noise radius and -noise geometry and they don't seem to do what I want at all. I have some b&w images (TIFF G4 Fax compression) with lots of noise around the characters. This noise takes the form of pixel blobs that are 1 pixel wide in most cases.
My desire is to do the following 3 steps (in this order):
Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Whiteout all black pixels that are 1 pixel tall (white pixels above and below)
Whiteout all black pixels that are 1 pixel wide (white pixels to the left and right)
Do I have to write code to do this, or can ImageMagick pull it off? If it can, how do you specify the geometry to do it?
Lacking a lot of good answers here, I put this one to the ImageMagick forum and their response was really good. You can read it here: ImageMagick Forum.
Morphology proved to be the best answer.
Blur then sharpen would be the normal technique for speckle noise.
ImageMagick can do both of these - you might have to play with the amount of blurring.
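For example (hedged: kernel sizes need tuning to your scans, and an aggressive kernel will also thin character strokes):

    # A morphological close with a small kernel removes black specks on a
    # white background; Square:1 is a 3x3 kernel.
    convert input.tif -morphology Close Square:1 output.tif

    # ImageMagick also ships a dedicated operator for this kind of noise.
    convert input.tif -despeckle output.tif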
