My application will be processing 12-bit binary images coming from a camera. The same image is shown below as a JPEG.
The task is to identify each white glowing region.
The 4 regions, as a group, appear at a random position in each image.
It can be assumed that the 4 white regions always move together from one image to the next.
Each point has a very high intensity compared to the black or near-black background. Each point is actually not a single pixel but a 14 x 14 ROI.
Also, the height of the image is 200 pixels.
The distances between the white regions are always fixed.
If I apply cvMinMaxLoc(), I get only one location, which is the brightest one.
How do I identify each region?
What you can do is the following:
1. Use threshold() to get a black-and-white image with at least one white dot per white region.
2. On the thresholded image, apply minMaxLoc() to find the first white region, then use floodFill() to get rid of that region by painting it black.
3. Repeat step 2 until you have found all the white regions. (Each white connected component is found exactly once, because you paint it black afterwards.)
If your white regions are not connected after threshold(), you can use dilate() to connect them.
If you want to detect the centres of the white regions, you can also apply erode() after step 1 to shrink each region towards its centre.
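The loop in steps 1-3 can be sketched without OpenCV, on a plain 2D array standing in for the cv::Mat. This is only an illustration of the logic; in real code you would call threshold(), minMaxLoc() and floodFill() instead of the hand-rolled versions below:

```cpp
#include <vector>
#include <queue>
#include <utility>

using Image = std::vector<std::vector<int>>;

// Erase the connected white component containing (r, c) by painting it 0
// (what floodFill() with a black fill value does).
static void floodErase(Image& img, int r, int c) {
    std::queue<std::pair<int,int>> q;
    q.push({r, c});
    img[r][c] = 0;
    while (!q.empty()) {
        auto [y, x] = q.front(); q.pop();
        const int dy[] = {1, -1, 0, 0}, dx[] = {0, 0, 1, -1};
        for (int k = 0; k < 4; ++k) {
            int ny = y + dy[k], nx = x + dx[k];
            if (ny >= 0 && ny < (int)img.size() &&
                nx >= 0 && nx < (int)img[0].size() && img[ny][nx] > 0) {
                img[ny][nx] = 0;
                q.push({ny, nx});
            }
        }
    }
}

// Threshold, then repeatedly take the brightest pixel (minMaxLoc) and
// erase its region (floodFill) until no white pixels remain.
std::vector<std::pair<int,int>> findRegions(Image img, int thresh) {
    for (auto& row : img)
        for (auto& v : row) v = (v >= thresh) ? v : 0;
    std::vector<std::pair<int,int>> centers;
    for (;;) {
        int best = 0, br = -1, bc = -1;
        for (int r = 0; r < (int)img.size(); ++r)
            for (int c = 0; c < (int)img[0].size(); ++c)
                if (img[r][c] > best) { best = img[r][c]; br = r; bc = c; }
        if (br < 0) break;            // no white pixels left
        centers.push_back({br, bc});
        floodErase(img, br, bc);      // paint the whole region black
    }
    return centers;
}
```

With four 14 x 14 blobs in the camera image, the loop runs exactly four times and returns one location per region.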
I have images such as below (I am pasting only a sketch here) where I want to calculate the center of symmetry and the displacement between the 2 marked zones in the image (marked in red and blue). Could anyone suggest a simple algorithm for this? (Please note the signal is symmetric under 180-degree rotation.)
The idea is to calculate the center of symmetry between the red and blue zones
Here is my algorithm approach:
Distinguish the red and blue pixels in the image by thresholding: say, set the blue pixels to white (255), the red pixels to black (0), and the rest of the image to gray (125) in a single-channel image.
Check all the pixels horizontally along the middle row of the image.
While scanning, if you hit a red pixel, start searching for a blue one. However, if you hit another red pixel first, restart the search for blue from there.
If, after a red pixel, you hit a blue pixel, you can easily calculate the middle of the red-blue span. Then restart the process.
Here is a demonstration of the search line:
Note: since the zones are dashed lines, the scan line may unluckily pass through a gap. To fix this, try a couple of different random rows.
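The row scan in steps 2-4 can be sketched on a single row of the thresholded image, using the 0/125/255 codes from step 1. The helper below is an illustration only, not OpenCV API:

```cpp
#include <vector>

// Scan one image row stored as a plain array. Pixel codes follow the
// thresholding step above: red = 0, blue = 255, background = 125.
// Returns the midpoint of each red..blue span found.
std::vector<int> scanRow(const std::vector<int>& row) {
    std::vector<int> centers;
    int lastRed = -1;
    for (int x = 0; x < (int)row.size(); ++x) {
        if (row[x] == 0) {
            lastRed = x;                           // remember (or restart from) the red hit
        } else if (row[x] == 255 && lastRed >= 0) {
            centers.push_back((lastRed + x) / 2);  // middle of the red..blue span
            lastRed = -1;                          // restart the search
        }
    }
    return centers;
}
```

Running this over a few randomly chosen rows (to cope with the dashed-line gaps) and averaging the returned midpoints gives an estimate of the centre of symmetry.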
I have an image with a gentle gradient background and sharp foreground features that I want to detect. I wish to find green arcs in the image. However, the green arcs are only green relative to the background (they are semitransparent). The green arcs are also only one or two pixels wide.
As a result, for each pixel of the image, I want to subtract the average color of the surrounding pixels. The surrounding 5x5 or 10x10 pixels would be sufficient. This should leave the foreground features relatively untouched but remove the soft background, and I can then select the green channel for further processing.
Is there any way to do this efficiently? It doesn't have to be particularly accurate, but I want to run it at 30Hz on a 1080p image.
You can achieve this by doing a simple box blur on the image and then subtracting that from the original image:
blur(img, blurred, Size(15,15)); // box blur; kernel size roughly matches the background scale
diff = img - blurred;            // keeps sharp features, removes the soft gradient
I tried this on a 1920x1080 image and the blur took 25ms and the subtract took 15ms on a decent spec iMac.
If your background is not changing fast, you could calculate the blur every few frames in a second thread and keep re-using it until you recalculate it. Then you would only have the 15ms subtraction to do for each of your 30fps frames, rather than 45ms of processing.
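The blur-then-subtract idea can be sketched without OpenCV on a plain grayscale array (a stand-in for cv::blur followed by img - blurred). The box blur here uses a (2r+1)x(2r+1) kernel with edge clamping, and the subtraction saturates at 0 like OpenCV's does:

```cpp
#include <vector>
#include <algorithm>

using Gray = std::vector<std::vector<int>>;

// High-pass filter: subtract a box-blurred copy from the original.
Gray highPass(const Gray& img, int r) {
    int h = img.size(), w = img[0].size();
    Gray out(h, std::vector<int>(w));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            long sum = 0; int n = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int yy = std::clamp(y + dy, 0, h - 1);  // clamp at edges
                    int xx = std::clamp(x + dx, 0, w - 1);
                    sum += img[yy][xx]; ++n;
                }
            int blurred = (int)(sum / n);
            out[y][x] = std::max(0, img[y][x] - blurred);   // saturate at 0
        }
    return out;
}
```

A thin bright feature survives the subtraction while the smooth background goes to zero. (For 30Hz on 1080p you would use OpenCV's separable blur() rather than this naive O(r^2)-per-pixel loop.)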
Depending on how smooth the background gradient is, edge detection followed by dilation might work well for you.
Edge detection will output the 3 lines that you've shown in the sample and discard the background gradient (again, depending on smoothness). Dilating the image then will thicken the edges so that your line contours are more significant.
You can superimpose this edge map on your original image as:
Mat src, edges;
// perform edge detection on src and store the result in edges
Mat result;
cvtColor(edges, edges, COLOR_GRAY2BGR); // change the single-channel mask to a 3-channel image
subtract(edges, src, result);
subtract(edges, result, result);
The result should contain only the 3 lines with 3 colours. From here on you can apply color filtering to select the green one.
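The double subtract works because of 8-bit saturation: with saturating arithmetic, edges - (edges - src) equals src wherever the edge mask is 255 and 0 wherever it is 0, i.e. it masks src by the edge map. A per-pixel scalar sketch of that identity:

```cpp
#include <algorithm>

// Saturating 8-bit subtraction, as OpenCV's subtract() performs on CV_8U.
int satSub(int a, int b) { return std::max(0, a - b); }

// edge is a mask value (0 or 255), src the original pixel value.
int maskBySubtract(int edge, int src) {
    int tmp = satSub(edge, src);   // 255 - src, or saturates to 0 if edge == 0
    return satSub(edge, tmp);      // 255 - (255 - src) = src, or 0 - 0 = 0
}
```

So the result keeps the original colours of the pixels under the edge map and blacks out everything else, which is exactly the superimposing step described above.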
From performing another operation, I get a B/W (binary) image with white and black areas. Now I want to find and flood-fill the black areas that are completely surrounded by white and do not touch the image border.
The brute-force approach I used basically iterates over all pixels (all but the border rows/columns); when it finds a black one, it looks at the neighbours (marking them as visited) and, if they are black, recursively visits their neighbours. If the search only hits white pixels and never reaches the border, I flood-fill the area.
This can take a while on a high-resolution image.
Is there a not-too-complicated faster way to do this?
Thank you.
As you have a binary image, you can perform connected-component labeling of the black components. Then you go along the borders to find the components that touch the border and delete them; all the remaining components are surrounded by white.
Another, simpler and faster solution is to go along the borders and, as soon as you find a black pixel, use it as a seed for a flood fill that paints white every black pixel connected to it. Doing that, you delete all the black components touching the borders; only the black components that do not touch the border remain.
If most black areas are not touching the border, doing the reverse is likely faster (but equally complicated).
From the border, mark every reachable pixel (reachable meaning you can get to the border via only black pixels). Then do a single pass over the whole image: anything black and not visited is a surrounded area.
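The border-seeded approach above can be sketched on a plain 2D array (0 = black, 255 = white) instead of a cv::Mat: flood from every black border pixel, then any black pixel never reached is enclosed by white and can be filled. This is linear in the number of pixels, unlike the per-pixel recursive search:

```cpp
#include <vector>
#include <queue>
#include <utility>

using Bin = std::vector<std::vector<int>>;  // 0 = black, 255 = white

void fillEnclosed(Bin& img) {
    int h = img.size(), w = img[0].size();
    std::vector<std::vector<bool>> seen(h, std::vector<bool>(w, false));
    std::queue<std::pair<int,int>> q;
    auto seed = [&](int y, int x) {
        if (img[y][x] == 0 && !seen[y][x]) { seen[y][x] = true; q.push({y, x}); }
    };
    for (int x = 0; x < w; ++x) { seed(0, x); seed(h - 1, x); }  // top/bottom rows
    for (int y = 0; y < h; ++y) { seed(y, 0); seed(y, w - 1); }  // left/right columns
    while (!q.empty()) {                 // BFS over border-connected black pixels
        auto [y, x] = q.front(); q.pop();
        const int dy[] = {1, -1, 0, 0}, dx[] = {0, 0, 1, -1};
        for (int k = 0; k < 4; ++k) {
            int ny = y + dy[k], nx = x + dx[k];
            if (ny >= 0 && ny < h && nx >= 0 && nx < w) seed(ny, nx);
        }
    }
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (img[y][x] == 0 && !seen[y][x]) img[y][x] = 255;  // enclosed: fill white
}
```

In OpenCV you would get the same effect by calling floodFill() from each black border pixel onto an inverted copy, then combining the copy with the original.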
Background subtraction.
MOG and MOG2 turned out to be unhelpful because they assume the first frame is the background.
So I did frame-by-frame subtraction, like this:
My problem is now to paint only the detected object white.
Btw, I did try the built-in findContours() method and obtained thousands of contours in the image.
Regarding findContours(), you may be misled: the method assumes a binarized image as input. If the input is not strictly binary, it treats every non-zero pixel as 1, regardless of its colour or grey value (see the findContours documentation).
So if your image is only nearly binarized and you observe black and white regions, the black regions are treated as background and the non-black (non-zero) ones as foreground regions. findContours() does nothing more or less than "marking" coherent foreground pixels (yes, the regions), so you get a list of vectors (a vector of points for each detected region).
For detecting the whole bus as one object, you may want to look up convexHull.
This is (if I recall correctly) a list of vertices too, describing a region that contains all the (previously found) regions. So you may need to subtract outliers first (like the piece of street or shadow at the bottom of your image).
also interesting: convexityDefects
and: approxPolyDP
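What convexHull() computes can be shown as a minimal Andrew monotone-chain sketch on plain points (OpenCV's version takes a vector of cv::Point and is what you would actually call):

```cpp
#include <vector>
#include <algorithm>

struct Pt { long x, y; };

// Cross product of (o->a) x (o->b); positive for a counter-clockwise turn.
static long cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

std::vector<Pt> convexHull(std::vector<Pt> pts) {
    std::sort(pts.begin(), pts.end(),
              [](const Pt& a, const Pt& b) {
                  return a.x < b.x || (a.x == b.x && a.y < b.y);
              });
    int n = pts.size();
    if (n < 3) return pts;
    std::vector<Pt> hull(2 * n);
    int k = 0;
    for (int i = 0; i < n; ++i) {                 // build lower hull
        while (k >= 2 && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (int i = n - 2, t = k + 1; i >= 0; --i) { // build upper hull
        while (k >= t && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);                           // last point repeats the first
    return hull;
}
```

Feeding it the points of all the small contours found on the bus would yield one polygon enclosing them, which is why outliers (street, shadow) have to be removed first.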
I want to overlay an image onto a given image. I have created a mask with an area where I can put this picture:
http://img560.imageshack.us/img560/1381/roih.jpg
The problem is, that the white area contains a black area, where I can't put objects.
How can I efficiently calculate where the subimage must be put? I know about functions like pointPolygonTest(), but it takes very long.
EDIT:
The overlay image must be put somewhere in the white area.
For example at the place from the blue rectangle.
http://img513.imageshack.us/img513/5756/roi2d.jpg
If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In this case, in order to find the valid positions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After the erosion, all valid positions are white.
The image you show, however, has no 200x200 region that is entirely white, so I must have misunderstood...
But if you want to find the position with the least black in the mask, you could apply a blur instead of an erosion and look for the maximal-intensity pixel in the blurred mask.
In both cases you want to insert the sub-image so that its centre is at the position of the maximal-intensity pixel of the eroded/blurred mask.
Edit:
If you are interested in finding the region that would be the most distant from any black pixel to put the sub-image, you can define its centre as the maximal value of the distance transform of the mask.
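The distance-transform idea can be sketched on a plain 0/255 mask array with the classic two-pass sweep, computing for each white pixel the city-block distance to the nearest black pixel (OpenCV's distanceTransform() does this with better metrics); the pixel with the maximal value is then the centre farthest from any black pixel:

```cpp
#include <vector>
#include <algorithm>

using Mask = std::vector<std::vector<int>>;  // 0 = black, 255 = white

Mask distTransform(const Mask& m) {
    int h = m.size(), w = m[0].size();
    const int INF = h + w + 1;               // larger than any possible distance
    Mask d(h, std::vector<int>(w));
    for (int y = 0; y < h; ++y)              // forward pass: top-left neighbours
        for (int x = 0; x < w; ++x) {
            if (m[y][x] == 0) { d[y][x] = 0; continue; }
            d[y][x] = INF;
            if (y > 0) d[y][x] = std::min(d[y][x], d[y-1][x] + 1);
            if (x > 0) d[y][x] = std::min(d[y][x], d[y][x-1] + 1);
        }
    for (int y = h - 1; y >= 0; --y)         // backward pass: bottom-right neighbours
        for (int x = w - 1; x >= 0; --x) {
            if (y + 1 < h) d[y][x] = std::min(d[y][x], d[y+1][x] + 1);
            if (x + 1 < w) d[y][x] = std::min(d[y][x], d[y][x+1] + 1);
        }
    return d;
}
```

Scanning the result for its maximum gives the position where the sub-image centre is as far as possible from every black pixel in the mask.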
Good luck,