How to remove the unwanted black region from a binary image?

I have binarized an image and counted its black and white pixels. From the counts I compute the ratio of black pixels, i.e. R = (number of black pixels / (number of black pixels + number of white pixels)) * 100. I use this result to decide whether an eye is open or closed: if R > 20% the eye is open, otherwise it is closed. But when I calculate this ratio it does not come out as expected. I think there might be an error in the black and white pixel counts, caused by an unwanted black region in the image, or there might be a problem with the thresholding. I am using Otsu's method to threshold the image.
While researching this topic I also tried `openInput=bwareaopen(bw, 80)`, but it did not work well to remove the unwanted black area. Kindly help me remove it.
close all
clear all
I=imread('op.jpg');
I=rgb2gray(I);
thres_level=graythresh(I); % find the threshold level of image
bw=im2bw(I,thres_level); % converts an image into binary
figure, imshow(bw);
totnumpix=numel(bw); % calculate total no of pixels in image
nwhite_open=sum(bw(:)); % count the white pixels in the image
nblack_open=totnumpix-nwhite_open; % count the black pixels in the image
R=(nblack_open/(nblack_open+nwhite_open))*100

Area opening erases connected components whose area is smaller than the parameter you pass, so in your case it will likely not help. Note also that bwareaopen removes small white (true) components; to remove small black regions you must apply it to the complement, e.g. `~bwareaopen(~bw, 80)`.
I think that the black/white ratio is not the solution. I would either:
Detect the iris (there are multiple effective papers/posts on the subject, mainly using the Hough transform), and if the detection fails, conclude that the eye is closed (a rough sketch follows this list).
If you have the original color image, use skin detection (simple color thresholding in the HSV color space); the proportion of skin pixels will be higher when the eye is closed.
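For illustration, a rough OpenCV sketch of the first option, treating a failed circular Hough detection as a closed eye. All parameters here (blur size, thresholds, radius range) are assumptions to tune on real images:

#include <opencv2/opencv.hpp>
using namespace cv;

bool isEyeOpen(const Mat& eyeGray)
{
    Mat blurred;
    GaussianBlur(eyeGray, blurred, Size(9, 9), 2); // suppress noise before Hough
    std::vector<Vec3f> circles;
    HoughCircles(blurred, circles, HOUGH_GRADIENT,
                 1,                  // accumulator resolution
                 blurred.rows / 4.0, // min distance between detected centres
                 100, 30,            // Canny high / accumulator thresholds
                 10, 60);            // plausible iris radius range (assumed)
    return !circles.empty(); // no circle found -> treat the eye as closed
}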

Related

Lane detection with brightness changes and shadows on lanes?

I am currently working on a lane detection project, where the input is an RGB road image "img" from a racing game, and the output is the same image annotated with colored lines drawn on the detected lanes.
The steps are as follows (a rough OpenCV sketch of the pipeline appears after the list):
1. Convert the RGB image "img" to an HSL image, then apply a white color mask (only white lanes are expected in the image): any pixel whose color falls outside the white range is zeroed out. Call the output of this step "white_img".
2. Convert "white_img" to grayscale, producing "gray_img".
3. Apply Gaussian blurring to "gray_img" to make the edges smoother, so that fewer noisy edges are detected, producing "smoothed_img".
4. Apply edge detection on "smoothed_img", producing "edge_img".
5. Crop "edge_img" by selecting a region of interest (ROI), approximately the lower half of the image, producing "roi_img".
6. Finally, apply the Hough transform on "roi_img" to detect the lines that will be considered the detected lanes.
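For illustration, a rough OpenCV/C++ sketch of the pipeline above; the white range, blur size, Canny thresholds, and Hough parameters are all assumptions to tune, and OpenCV's HLS channel order is (H, L, S):

#include <opencv2/opencv.hpp>
using namespace cv;

Mat detectLanes(const Mat& img)
{
    // Step 1: white colour mask in HLS space.
    Mat hls, mask;
    cvtColor(img, hls, COLOR_BGR2HLS);
    inRange(hls, Scalar(0, 200, 0), Scalar(180, 255, 255), mask);
    Mat white_img = Mat::zeros(img.size(), img.type());
    img.copyTo(white_img, mask); // zero everything outside the white range

    // Steps 2-3: grayscale, then Gaussian smoothing.
    Mat gray_img, smoothed_img;
    cvtColor(white_img, gray_img, COLOR_BGR2GRAY);
    GaussianBlur(gray_img, smoothed_img, Size(5, 5), 0);

    // Step 4: edge detection.
    Mat edge_img;
    Canny(smoothed_img, edge_img, 50, 150);

    // Step 5: keep only the lower half of the image as the ROI.
    Mat roi_img = edge_img.clone();
    roi_img(Rect(0, 0, roi_img.cols, roi_img.rows / 2)).setTo(0);

    // Step 6: probabilistic Hough transform, then draw the lines.
    std::vector<Vec4i> lines;
    HoughLinesP(roi_img, lines, 1, CV_PI / 180, 50, 40, 20);
    Mat annotated = img.clone();
    for (const Vec4i& l : lines)
        line(annotated, Point(l[0], l[1]), Point(l[2], l[3]),
             Scalar(0, 0, 255), 3);
    return annotated;
}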
The biggest problems I am facing now are brightness changes and shadows on the lanes. In a dark image with shadows on the lanes, the lane color can become very dark. I tried widening the accepted white color range in step 1, which worked well for such images. But in a bright image with no shadows on the lanes, most of the image survives step 1, and the output contains many things irrelevant to the lanes.
Examples of input images:
Medium brightness, without shadows on the lanes
Low brightness, with shadows on the lanes
High brightness, without shadows on the lanes
Any help to deal with these issues will be appreciated. Thanks in advance.

How to remove the local average color from an image with OpenCV

I have an image with a gentle gradient background and sharp foreground features that I want to detect: green arcs. However, the arcs are only green relative to the background (they are semi-transparent), and they are only one or two pixels wide.
As a result, for each pixel of the image I want to subtract the average color of the surrounding pixels; a 5x5 or 10x10 neighborhood would be sufficient. This should leave the foreground features relatively untouched while removing the soft background, and I can then select the green channel for further processing.
Is there any way to do this efficiently? It doesn't have to be particularly accurate, but I want to run it at 30Hz on a 1080p image.
You can achieve this by doing a simple box blur on the image and then subtracting that from the original image:
Mat blurred, diff;
blur(img, blurred, Size(15,15)); // local average colour over a 15x15 window
diff = img - blurred;            // subtract the local average from each pixel
I tried this on a 1920x1080 image; the blur took 25 ms and the subtraction took 15 ms on a decently specced iMac.
If your background is not changing fast, you could calculate the blur once every few frames in a second thread and keep re-using it until the next recalculation; then you would only have the 15 ms subtraction to do for each frame of your 30 fps, rather than 45 ms of processing (a sketch follows below).
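For illustration, a single-threaded sketch of that caching idea (the threaded version just moves the blur off the capture loop). grabFrame() is a hypothetical capture function, and the refresh interval N is an assumption to tune against how fast the background drifts:

#include <opencv2/opencv.hpp>
using namespace cv;

extern Mat grabFrame(); // hypothetical: returns the next 1080p frame

void processStream()
{
    const int N = 10; // assumed refresh interval, in frames
    Mat blurred;
    for (int i = 0; ; ++i) {
        Mat img = grabFrame();
        if (i % N == 0)
            blur(img, blurred, Size(15, 15)); // refresh cached background
        Mat diff = img - blurred;             // only the ~15 ms subtraction per frame
        // ... select the green channel of diff for further processing
    }
}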
Depending on how smooth the background gradient is, edge detection followed by dilation might work well for you.
Edge detection will output the 3 lines you've shown in the sample and discard the background gradient (again, depending on its smoothness). Dilating the edge image will then thicken the edges so that the line contours become more significant.
You can superimpose this edge map on your original image as follows:
Mat src, edges;
// perform edge detection on src and output in edges, e.g. with Canny
dilate(edges, edges, Mat()); // thicken the edges as described above
Mat result;
cvtColor(edges, edges, COLOR_GRAY2BGR); // expand the mask to a 3-channel image
subtract(edges, src, result);    // inverted colours at edge pixels, black elsewhere
subtract(edges, result, result); // recover the original colours at edge pixels only
The result should contain only the 3 lines, each in its original colour. From here you can apply colour filtering to select the green one.

How to detect large galaxies using thresholding?

I'm required to create a map of galaxies based on the following image,
http://www.nasa.gov/images/content/690958main_p1237a1.jpg
Basically I need to smooth the image first using a mean filter and then apply thresholding to it.
However, I'm also asked to detect only the large galaxies in the image. So what should I adjust, the smoothing mask or the thresholding, in order to achieve that goal?
Both: by smoothing the picture first, the pixels around the smaller galaxies "blend" with the black space and thus shift to a lower intensity value. This lower intensity can then be thresholded away, leaving only the bright centres of the bigger galaxies (see the sketch below).
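For illustration, a minimal OpenCV sketch of this mean-filter-then-threshold idea; the kernel size and threshold level are assumptions to tune, and the filename is a placeholder:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Load the galaxy image as grayscale (placeholder filename).
    Mat img = imread("galaxies.jpg", IMREAD_GRAYSCALE);

    // A large mean (box) filter blends small galaxies into the dark
    // background, lowering their intensity; 15x15 is an assumption.
    Mat smoothed;
    blur(img, smoothed, Size(15, 15));

    // A fairly high fixed threshold then keeps only the bright centres
    // of the larger galaxies; the level 100 is an assumption to tune.
    Mat large_galaxies;
    threshold(smoothed, large_galaxies, 100, 255, THRESH_BINARY);

    imwrite("large_galaxies.png", large_galaxies);
    return 0;
}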

How to overlay a picture with a given mask

I want to overlay an image onto a given image. I have created a mask marking the area where I can put the picture:
http://img560.imageshack.us/img560/1381/roih.jpg
The problem is that the white area contains a black area where I can't put objects.
How can I efficiently calculate where the subimage can be placed? I know about functions like PointPolygonTest, but they take very long.
EDIT:
The overlay image must be put somewhere in the white area.
For example, at the place marked by the blue rectangle.
http://img513.imageshack.us/img513/5756/roi2d.jpg
If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In this case, in order to find valid regions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After erosion, all valid centre positions will be white.
The image you show, however, has no 200*200 region that is entirely white, so I may have misunderstood...
But if you want to find the region with the least black in the mask, you could apply a blur instead of an erosion and look for the maximal-intensity pixel in the blurred mask.
In both cases, you want to insert the sub-image so that its centre lies on the maximal-intensity pixel of the eroded/blurred mask (a sketch of the erosion variant follows).
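As an illustration, a minimal OpenCV sketch of the erosion variant, assuming the mask is an 8-bit binary image and the sub-image is 200x200 (both assumptions):

#include <opencv2/opencv.hpp>
using namespace cv;

Point findPlacement(const Mat& mask)
{
    Mat valid;
    Mat kernel = getStructuringElement(MORPH_RECT, Size(200, 200));
    erode(mask, valid, kernel); // a pixel stays white only if its whole
                                // 200x200 neighbourhood is white
    double maxVal;
    Point maxLoc;
    minMaxLoc(valid, nullptr, &maxVal, nullptr, &maxLoc);
    // maxVal > 0 means a fully white placement exists; maxLoc is then a
    // valid centre. With blur() instead of erode(), maxLoc becomes the
    // centre with the least black in range.
    return maxLoc;
}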
Edit:
If you are interested in finding the position that is most distant from any black pixel, you can define the sub-image centre as the location of the maximal value of the distance transform of the mask.
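A minimal sketch of this variant, assuming an 8-bit mask; the level used to binarize it is an assumption:

#include <opencv2/opencv.hpp>
using namespace cv;

Point farthestFromBlack(const Mat& mask)
{
    Mat bin, dist;
    threshold(mask, bin, 128, 255, THRESH_BINARY); // ensure a binary mask
    distanceTransform(bin, dist, DIST_L2, 3);      // distance to nearest black pixel
    double maxVal;
    Point maxLoc;
    minMaxLoc(dist, nullptr, &maxVal, nullptr, &maxLoc);
    return maxLoc; // centre of the placement farthest from any black pixel
}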
Good luck,

Cleaning up speckles around text in a scanned image

I've tried -noise radius and -noise geometry and they don't seem to do what I want at all. I have some b&w images (TIFF, G4 fax compression) with lots of noise around the characters. This noise takes the form of pixel blobs that are 1 pixel wide in most cases.
My desire is to do the following 3 steps (in this order):
White out all black pixels that are 1 pixel wide (white pixels to the left and right)
White out all black pixels that are 1 pixel tall (white pixels above and below)
White out all black pixels that are 1 pixel wide (white pixels to the left and right)
Do I have to write code to do this, or can ImageMagick pull it off? If it can, how do you specify the geometry to do it?
Lacking a lot of good answers here, I put this question to the ImageMagick forum, and their response was really good. You can read it here: ImageMagick Forum
Morphology proved to be the best answer.
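For reference, here is a rough OpenCV sketch of the same morphology idea (not the ImageMagick command from the forum thread, which is not reproduced here): closing the white background over 3x1 and 1x3 line kernels whites out black runs that are one pixel wide or one pixel tall, matching the three steps in the question. The filename is a placeholder:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("scan.tif", IMREAD_GRAYSCALE);
    Mat h = getStructuringElement(MORPH_RECT, Size(3, 1));
    Mat v = getStructuringElement(MORPH_RECT, Size(1, 3));
    morphologyEx(img, img, MORPH_CLOSE, h); // step 1: 1-px-wide black runs
    morphologyEx(img, img, MORPH_CLOSE, v); // step 2: 1-px-tall black runs
    morphologyEx(img, img, MORPH_CLOSE, h); // step 3: repeat the width pass
    imwrite("clean.tif", img);
    return 0;
}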
Blurring and then sharpening would be the normal technique for speckle noise.
ImageMagick can do both of these; you might have to play with the amount of blurring.
