How to overlay a picture with a given mask - opencv

I want to overlay an image on a given image. I have created a mask marking the area where I can put this picture:
Mask image: http://img560.imageshack.us/img560/1381/roih.jpg
The problem is that the white area contains a black region where I can't place objects.
How can I efficiently calculate where the sub-image can be placed? I know about functions like pointPolygonTest, but it takes very long.
EDIT:
The overlay image must be placed somewhere in the white area.
For example, at the position of the blue rectangle.
Example position: http://img513.imageshack.us/img513/5756/roi2d.jpg

If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In this case, in order to get valid regions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After erosion, all valid regions will be white.
The image you show, however, has no 200x200 region that is entirely white, so I may have misunderstood...
But if you want to calculate the region with the least black in the mask, you could apply a blur instead of an erosion and look for the maximum-intensity pixel in the blurred mask.
In both cases you want to insert the sub-image so that its centre is at the maximum-intensity pixel of the eroded/blurred mask.
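For example, a minimal Python/OpenCV sketch of this idea (the file names are placeholders and the mask is assumed to be a single-channel 8-bit image):
import cv2
import numpy as np

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)   # placeholder: white = allowed area
overlay = cv2.imread('overlay.png')                   # placeholder: image to insert
h, w = overlay.shape[:2]

# Erode with a kernel the size of the overlay: a pixel stays white only if the
# whole overlay-sized neighbourhood around it is white in the mask.
eroded = cv2.erode(mask, np.ones((h, w), np.uint8))
# If no fully white region exists, use cv2.blur(mask, (w, h)) here instead.

# Any remaining white pixel is a valid centre; minMaxLoc picks the brightest one.
minVal, maxVal, minLoc, maxLoc = cv2.minMaxLoc(eroded)
cx, cy = maxLoc                      # best centre (only valid if maxVal == 255 after erosion)
x0, y0 = cx - w // 2, cy - h // 2    # top-left corner for pasting the overlay
# e.g. background[y0:y0 + h, x0:x0 + w] = overlay   (bounds checking omitted)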
Edit:
If you are interested in finding the region that would be the most distant from any black pixel to put the sub-image in, you can place its centre at the location of the maximum of the distance transform of the mask.
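Along the same lines, a short sketch of the distance-transform variant (again with a placeholder file name):
import cv2

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)   # distance of each white pixel to the nearest black pixel
_, maxDist, _, centre = cv2.minMaxLoc(dist)
# 'centre' is the best centre for the sub-image; maxDist is its clearance in pixels.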
Good luck,

Related

What does normalizedPath refer to and how can I draw it over an image in iOS?

I have two images: a monochrome one which is a mask, and another one in full color. What I need to do is find the CGRect of the mask (its white pixels) in the full color one.
What I did first is find the contour of the mask using the Vision framework. This returns a CGPath which is normalised. How can I translate this path into coordinates in the other image? Both have been scaled the same way to make them the same size, so the translation should be "easy", but I can't figure it out.

How to remove the unwanted black region from binary image?

I have binarized an image and calculated its black and white pixels. After the pixel calculation, I compute the ratio of black pixels, i.e. R = (number of black pixels / (number of white pixels + number of black pixels)) * 100. I use this result to decide whether an eye is open or closed: if R > 20%, the eye is in the open state, otherwise it is closed. But when I calculate this ratio, it does not come out as I expect. I think there might be an error in the calculation of the black and white pixels, due to an unwanted black region in the image, or there might be a problem with the thresholding. I am using Otsu's method for thresholding the image.
While researching this topic I also tried `openInput = bwareaopen(bw, 80)`, but it does not work well for removing the unwanted black area. Kindly help me out in removing the unwanted area.
close all
clear all
I=imread('op.jpg');
I=rgb2gray(I);
thres_level=graythresh(I); % find the threshold level of image
bw=im2bw(I,thres_level); % converts an image into binary
figure, imshow(bw);
totnumpix=numel(bw); % calculate total no of pixels in image
nwhite_open=sum(bw(:)); % count the white pixels in the image (bw is logical, white = 1)
nblack_open=totnumpix-nwhite_open; % count the black pixels in the image
R=(nblack_open/(nblack_open+nwhite_open))*100
The area opening erases regions whose area is smaller than the parameter you enter. In your case, it will not likely help.
I think that the black/white ratio is not the solution. I would either:
Detect the iris (there are multiple effective papers/posts on the subject, mainly using the Hough transform); if the detection fails, then the eye is closed.
If you have the original color image, use skin detection (simple color thresholding in the HSV color space): when the eye is closed, the ratio of skin-colored pixels in the eye region will be high. A rough sketch of this option follows below.
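The sketch assumes you can use Python/OpenCV rather than MATLAB; the HSV bounds are common rule-of-thumb values for skin and will need tuning for your images:
import cv2
import numpy as np

eye = cv2.imread('eye_region.jpg')            # placeholder: color crop of the eye region
hsv = cv2.cvtColor(eye, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))  # rough skin range

skin_ratio = cv2.countNonZero(skin) / float(skin.size)
eye_closed = skin_ratio > 0.6                 # the 0.6 threshold is a guess; tune it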

Is there a way to detect near-rectangles in opencv?

I want to find the shapes that look most like rectangles. The first image is the original image, with shapes which could be rectangles but are not perfect ones. The green rectangles in the second image are what I want. So is there a way to do this with opencv? I've tried Hough lines but the results are not good.
The source image:
What I want is to find the shapes that look most like rectangles among these, like the ones marked in green.
What I want:
A very simple approach is, once you have a rectangular bounding box around your shape, to count the percentage of pixels inside the box which are white.
The higher the percentage of white pixels, the closer the shape is to a rectangle.
To get the bounding boxes you should take a look at either findContours from OpenCV or some blob-extraction algorithm; you will find plenty of questions regarding those.
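A minimal Python/OpenCV sketch of this idea (it assumes a black-and-white input with the shapes in white; the file name and the 0.9 threshold are placeholders, and findContours returns an extra value in some OpenCV versions):
import cv2

binary = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

vis = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)            # for drawing the results
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    roi = binary[y:y + h, x:x + w]
    fill_ratio = cv2.countNonZero(roi) / float(w * h)     # fraction of white pixels in the box
    if fill_ratio > 0.9:                                  # threshold is up to you
        cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 2)   # mark the candidate in green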
Edit:
Maybe you should first get the Minimum bounding rectangles of the shapes and then do this kind of heuristic:
Shrink the rectangle dimensions until the white-pixel percentage inside the rectangle reaches some threshold defined by you (like 90% of white pixels inside the rectangle).
To get the Minimum bounding rectangle (the smallest rectangle which contains the whole shape), you might check this tutorial:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
One thing that might also help is computing the difference in size between the minimum bounding rectangle and the maximum inner rectangle (the biggest rectangle you can fit inside the white shape). The less difference there is between those rectangles' properties (width, height, area, centre coordinates), the closer the shape is to a rectangle.
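A sketch of just the minimum-bounding-rectangle part of this (the shrink and inner-rectangle steps are left out, as they take more code): the fraction of the rotated minAreaRect covered by the shape already gives a useful rectangularity score.
import cv2

for cnt in contours:                                 # the contours from the sketch above
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(cnt)
    if rw > 0 and rh > 0:
        rectangularity = cv2.contourArea(cnt) / (rw * rh)
        # close to 1.0 -> the shape almost fills its minimum bounding rectangle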

How to detect large galaxies using thresholding?

I'm required to create a map of galaxies based on the following image,
http://www.nasa.gov/images/content/690958main_p1237a1.jpg
Basically I need to smooth the image first using a mean filter then apply thresholding to the image.
However, I'm also asked to detect only the large galaxies in the image. So how should I adjust the smoothing mask or the thresholding in order to achieve that goal?
Both: by smoothing the picture first, the pixels around smaller galaxies will "blend" with the black space and thus shift to a lower intensity value. This lower intensity can then be thresholded away, leaving only the white centres of the bigger galaxies.
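A minimal Python/OpenCV sketch of that pipeline (the kernel size and threshold are placeholders to tune; a bigger kernel suppresses smaller galaxies more strongly):
import cv2

img = cv2.imread('690958main_p1237a1.jpg', cv2.IMREAD_GRAYSCALE)   # the NASA image
smoothed = cv2.blur(img, (15, 15))                                 # mean filter
_, large = cv2.threshold(smoothed, 200, 255, cv2.THRESH_BINARY)    # keep only bright cores
# 'large' now contains mostly the centres of the bigger galaxies.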

rotated crop in opencv

I am trying to crop a picture right along the contour. The object is detected using SURF features, and then I want to crop the image exactly as detected.
When I crop, some outside boundaries of other objects are included. I want to crop along the green line below. OpenCV has RotatedRect, but I am unsure whether it is good for cropping.
Is there a way to crop exactly along the green line?
I assume you got your example from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. What you can do is find the minimum axis-aligned bounding box around the green bounding box, crop it from the image, use the inverted homography matrix (H.inv()) to transform that sub-image into a new image (call cv::warpPerspective), and then crop your green bounding box (it should be axis-aligned in your new image).
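Roughly, in Python/OpenCV (this assumes you already have the homography H, the scene image, and the size (obj_w, obj_h) of the object/template image from that tutorial; warping the whole scene instead of the cropped box keeps the sketch short):
import cv2
import numpy as np

# Warping the scene with the inverted homography maps the green quadrilateral
# back onto an axis-aligned rectangle the size of the object image.
rectified = cv2.warpPerspective(scene_img, np.linalg.inv(H), (obj_w, obj_h))
# 'rectified' is the cropped, de-rotated object.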
You can get the equations of the four lines from their end points. Use these equations to check whether a given pixel lies within the green box, i.e. whether it lies between the left and right lines and between the top and bottom lines. Run this over the entire image and reset anything that doesn't lie within the box to black.
Not sure about in-built functionality to do this, but this simple methodology is guaranteed to work. For higher accuracy, you may want to consider sub-pixel checks.
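As it happens, cv2.fillConvexPoly achieves the same masking without writing the per-pixel test yourself (a sketch, assuming the four green corner points are available as 'corners'):
import cv2
import numpy as np

mask = np.zeros(scene_img.shape[:2], np.uint8)
cv2.fillConvexPoly(mask, np.int32(corners), 255)            # white inside the green box
masked = cv2.bitwise_and(scene_img, scene_img, mask=mask)   # everything outside goes black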
