Detecting "surrounded" areas in a binary image - image-processing

From a previous operation I get a B/W (binary) image with white and black areas. Now I want to find and flood-fill the black areas that are completely surrounded by white and do not touch the image border.
The brute-force approach I used is basically to iterate over all pixels (all but the border rows/columns); when I find a black one, I look at its neighbours (marking them as visited) and, if they are black, recursively visit their neighbours in turn. If during this I only ever hit white pixels and never end up at the border, I flood-fill the area.
This can take a while on a high-resolution image.
Is there a faster way to do this that is not too complicated?
Thank you.

As you have a binary image, you can perform a connected-component labeling of the black components. All the components found are surrounded by white, except the ones that reach the image border. So you go along the borders to find the components that touch the border, and you delete them.
Another, simpler and faster solution is to go along the borders and, as soon as you find a black pixel, use it as a seed and flood-fill with white until every black pixel connected to it has turned white. Doing that, you delete all the black components touching the borders; only the black components that do not touch the borders remain.
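A minimal sketch of the labeling approach, assuming Python with OpenCV and an 8-bit image where black is 0 and white is 255 (file names and variable names are illustrative, not from the question):

    import cv2
    import numpy as np

    img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

    # Label the black components (invert so that black becomes the foreground).
    num_labels, labels = cv2.connectedComponents((img == 0).astype(np.uint8))

    # Collect the labels of components that appear on any border row/column.
    border_labels = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])

    # Fill every black component that does NOT touch the border with white.
    for lbl in range(1, num_labels):
        if lbl not in border_labels:
            img[labels == lbl] = 255

    cv2.imwrite("filled.png", img)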

If most black areas do not touch the border, doing the reverse is likely faster (but equally complicated).
Starting from the border, mark every reachable pixel (reachable meaning you can get to it from the border via black pixels only). After this, do one pass over the whole image: anything that is black and not visited is a surrounded area.
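A rough sketch of this border-marking idea, again assuming Python with OpenCV, black = 0 and white = 255; cv2.floodFill with FLOODFILL_MASK_ONLY is used only to record which black pixels are reachable from the border:

    import cv2
    import numpy as np

    img = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
    h, w = img.shape

    # The floodFill mask must be 2 pixels larger than the image.
    mask = np.zeros((h + 2, w + 2), np.uint8)

    # Mark (in the mask only) everything reachable from black border pixels.
    for x in range(w):
        for y in (0, h - 1):
            if img[y, x] == 0 and mask[y + 1, x + 1] == 0:
                cv2.floodFill(img, mask, (x, y), 0, flags=cv2.FLOODFILL_MASK_ONLY)
    for y in range(h):
        for x in (0, w - 1):
            if img[y, x] == 0 and mask[y + 1, x + 1] == 0:
                cv2.floodFill(img, mask, (x, y), 0, flags=cv2.FLOODFILL_MASK_ONLY)

    # Anything still black and not marked is a surrounded area: fill it white.
    surrounded = (img == 0) & (mask[1:-1, 1:-1] == 0)
    img[surrounded] = 255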

Related

Calculating center of symmetry between zones in an image in Matlab

I have images such as the one below (I am pasting only a sketch here) where I want to calculate the center of symmetry and the displacement between the two marked zones in the image (marked in red and blue). Could anyone suggest a simple algorithm for this? (Please note the signal is symmetric under 180-degree rotation.)
The idea is to calculate the center of symmetry between the red and blue zones.
Here is my algorithm approach:
Distinguish the red and blue pixels in the image by thresholding: let's say set the blue pixels to white (255), the red pixels to black (0), and the rest of the image to gray (125) in a single-channel image.
Check all of the pixels horizontally in the middle row of the image.
While scanning, when you hit a red pixel, start searching for a blue one. If you hit another red pixel first, restart the search for blue from there.
Once you hit a blue pixel after a red one, you can easily calculate the midpoint between the two. Then restart the process.
Here is a demonstration of search line:
Note: since the zones are dashed lines, your scan row may pass through a gap and miss them. To work around this, try a couple of different rows.
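A small Python sketch of this scan (the colour thresholds, file name and BGR assumption are mine; in Matlab the same loop can be written over the image matrix):

    import cv2

    img = cv2.imread("sketch.png")            # assumed 8-bit BGR image
    h, w = img.shape[:2]

    def classify(px):
        # Very rough colour test: returns "red", "blue" or None.
        b, g, r = int(px[0]), int(px[1]), int(px[2])
        if r > 150 and g < 100 and b < 100:
            return "red"
        if b > 150 and g < 100 and r < 100:
            return "blue"
        return None

    row = h // 2                               # middle row; try other rows for dashed lines
    centers = []
    last_red_x = None
    for x in range(w):
        c = classify(img[row, x])
        if c == "red":
            last_red_x = x                     # hitting red again restarts the search
        elif c == "blue" and last_red_x is not None:
            centers.append((last_red_x + x) // 2)   # midpoint between the red and blue hits
            last_red_x = None

    print(centers)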

Metal - How to overlap textures based on color

I'm trying to use a render pass descriptor to draw two grayscale textures. I am drawing a black square first, then a light gray square after. The second square partially covers the first.
With this setup, the light gray square will always appear in front of the black square because it was drawn most recently in the render pass. However, I would like to know if there is a way to draw the black square above the light gray one based on its brightness. Since the squares only partially overlap, is there a way to still have the black square appear on top simply because it has a darker pixel value?
Currently it looks something like this, where the gray square is drawn second so it appears on top.
What I would like is to be able to still draw the gray square second, but have it appear underneath based on the pixel brightness, like so:
I think MTLBlendOperationMin will do what you want: https://developer.apple.com/documentation/metal/mtlblendoperation/mtlblendoperationmin?language=objc

Resize png image in Delphi - incorrect alpha channel

I am resizing png images which might have alpha channel.
Everything works well, with one exception:
I get some gray pixels around the transparent areas.
The original image doesn't have any drop shadows.
Is there a way to fix this / work it around?
I am using SmoothResize by Gustavo Daud (see the first answer to this question) to resize the PNG image.
I cannot provide the code that I am using as I did not write it and do not have the author's permission to publish it.
I suspect that this is caused by two things: "funny" RGBA values in the PNG and naive resizing code.
You need to check your PNG's contents. You are looking for the RGB values in the transparent areas. Despite transparent areas having alpha at 0, they still carry RGB information. In your case I would expect the transparent areas to be filled with a black RGB value, and that is what causes the grey outline after a naive resize. Example: what happens when the code resizes two adjacent pixels (0,0,0,0) and (255,255,255,255) into one? Both pixels contribute 50%, so the result is (128,128,128,128), which is semi-transparent grey. The same thing happens when you upscale by e.g. x1.5: the added pixel in between the original two will be grey. Usually this does not happen because image-editing software is smart enough to fill those invisible pixels with the colour of the nearest visible pixel.
You can try to "fix" the PNG by filling the transparent areas with white (or whatever colour lies at the edges of your images).
Another approach is to use more advanced resizing code (write it or find a library) that ignores the RGB values of transparent pixels (e.g. by taking the RGB from the closest non-transparent pixel); see the sketch below for one such approach.
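One common way to make the resize ignore the RGB of fully transparent pixels is to average in premultiplied alpha. Below is a minimal sketch in Python/NumPy (not Delphi), with an illustrative 2x box downscale; the file names are assumptions:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("icon.png").convert("RGBA"), dtype=np.float64)

    # Premultiply: weight each pixel's RGB by its alpha, so fully transparent
    # pixels (alpha = 0) contribute nothing to the average.
    alpha = img[..., 3:4] / 255.0
    premul = img[..., :3] * alpha

    # Naive 2x downscale by averaging 2x2 blocks (illustrative only).
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    premul_small = premul[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2, 3).mean(axis=(1, 3))
    alpha_small = alpha[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2, 1).mean(axis=(1, 3))

    # Un-premultiply: divide the averaged RGB by the averaged alpha.
    rgb_small = premul_small / np.maximum(alpha_small, 1e-6)

    out = np.dstack([rgb_small, alpha_small * 255.0]).clip(0, 255).astype(np.uint8)
    Image.fromarray(out, "RGBA").save("icon_small.png")

With this weighting, a transparent black pixel next to an opaque white one averages to a half-transparent white pixel rather than a half-transparent grey one.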

Identifying white regions using cvMinMaxLoc()

My application will be processing 12-bit binary images that I am getting from a camera. The same image is shown below as a JPEG.
The task is to identify each white glowing region.
These 4 regions appear as a bunch, at random positions, in each image.
It can be assumed that the 4 white regions always move together from one image to the next.
Each point has a very high intensity compared to the black or near-black background. Each point is actually not a single pixel but a 14 x 14 ROI.
Also the height of the image is 200 pixels.
The distances between the white regions are always fixed.
If I apply cvMinMaxLoc(), I will get only one location, which is the brightest one.
How do I identify each region?
What you can do is the following:
Step 1: Use threshold() to get a black-and-white image with at least one white blob per glowing region.
Step 2: On the thresholded image, apply minMaxLoc() to get the first white region, then use floodFill() to get rid of that region by painting it black.
Step 3: Repeat step 2 until you have found all the white regions. (Each white connected component is found exactly once, because you paint it black afterwards.)
If your white regions are not connected after threshold(), you can use dilate() to make them connected.
If you want to detect the centre of the white regions, you can also use erode() after step 1.
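A rough sketch of these steps in Python with OpenCV (the threshold value, the number of regions and the file name are assumptions):

    import cv2

    img = cv2.imread("spots.png", cv2.IMREAD_GRAYSCALE)   # assumed 8-bit copy of the 12-bit frame

    # Step 1: threshold so each glowing region leaves at least one white blob.
    _, bw = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

    centers = []
    for _ in range(4):                         # the question expects 4 regions
        _, max_val, _, max_loc = cv2.minMaxLoc(bw)
        if max_val == 0:                       # no white pixels left
            break
        centers.append(max_loc)
        # Step 2: paint the found region black so the next minMaxLoc finds another one.
        cv2.floodFill(bw, None, max_loc, 0)

    print(centers)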

How to overlay a picture with a given mask

I want to overlay an image onto a given image. I have created a mask of the area where I can put this picture:
Image Hosted by ImageShack.us http://img560.imageshack.us/img560/1381/roih.jpg
The problem is that the white area contains a black area where I can't put objects.
How can I efficiently calculate where the subimage can be placed? I know about functions like PointPolygonTest, but it takes very long.
EDIT:
The overlay image must be placed somewhere in the white area.
For example at the position of the blue rectangle.
Image Hosted by ImageShack.us http://img513.imageshack.us/img513/5756/roi2d.jpg
If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In this case, in order to get the valid positions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After the erosion, all valid centre positions will be white.
The image you show, however, has no 200*200 region that is entirely white, so I must have misunderstood...
But if you want to find the position with the least black under the sub-image, you could apply a blur instead of an erosion and look for the maximal-intensity pixel in the blurred mask.
In both cases you then insert the sub-image so that its centre is at the position of the maximal-intensity pixel of the eroded/blurred mask, as in the sketch below.
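A minimal sketch of the erosion/blur idea in Python with OpenCV (the 200*200 sub-image size and the file name are assumptions):

    import cv2
    import numpy as np

    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # white = allowed area
    sub_h, sub_w = 200, 200                                # size of the image to insert (assumed)

    # Erode with a kernel as big as the sub-image: a pixel stays white only if
    # the whole sub-image centred on it fits inside the white area.
    kernel = np.ones((sub_h, sub_w), np.uint8)
    valid = cv2.erode(mask, kernel)

    ys, xs = np.nonzero(valid)
    if len(xs) > 0:
        cy, cx = int(ys[0]), int(xs[0])        # any white pixel is a valid centre
    else:
        # Nothing fits entirely: blur instead and take the brightest pixel,
        # i.e. the centre position with the least black under the sub-image.
        blurred = cv2.blur(mask, (sub_w, sub_h))
        _, _, _, (cx, cy) = cv2.minMaxLoc(blurred)

    top_left = (cx - sub_w // 2, cy - sub_h // 2)
    print(top_left)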
Edit:
If you are interested in finding the position that is the most distant from any black pixel, you can define the sub-image's centre as the location of the maximal value of the distance transform of the mask.
Good luck,
