I'm trying to use OpenCV to overlay two images together.
Input 1 background (b.jpg):
Input 2 foreground (f.jpg):
Desired output:
Real output:
The idea is to overlay the circular part of the foreground onto the background.
I'm using the code:
Mat background = imread("b.jpg");
Mat foreground = imread("f.jpg");

// Black mask of the same size/type as the foreground, with a filled anti-aliased circle
Mat mask{foreground.size(), foreground.type(), Scalar::all(0)};
circle(mask, Point{foreground.cols / 2, foreground.rows / 2}, foreground.cols / 2 - 10, Scalar::all(255), -1, CV_AA);
imwrite("mask.jpg", mask);

// Copy the foreground onto the background wherever the mask is non-zero
foreground.copyTo(background, mask);
imwrite("overlay.jpg", background);
For the mask itself, I can see a perfect circle drawn with a very smooth edge.
But as soon as I call copyTo with the circular mask, the resulting image has abrupt edges, and the anti-aliasing seems to be completely missing.
Is there a way to make copyTo honor the anti-aliasing? Or is there an easier way to achieve the same output?
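copyTo treats every non-zero mask pixel as fully opaque, so the smooth gradient at the circle's edge is discarded. One possible workaround (a sketch, not taken from the original post) is to use the anti-aliased mask as a per-pixel alpha and blend the two images manually, assuming both images have the same size:

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat background = imread("b.jpg");
    Mat foreground = imread("f.jpg");

    // Single-channel anti-aliased circular mask (LINE_AA is CV_AA in OpenCV 2.x)
    Mat mask(foreground.size(), CV_8UC1, Scalar::all(0));
    circle(mask, Point(foreground.cols / 2, foreground.rows / 2),
           foreground.cols / 2 - 10, Scalar::all(255), -1, LINE_AA);

    // Treat the mask as per-pixel alpha: out = alpha * fg + (1 - alpha) * bg
    Mat alpha;
    mask.convertTo(alpha, CV_32F, 1.0 / 255.0);
    cvtColor(alpha, alpha, COLOR_GRAY2BGR);   // replicate alpha across the three channels

    Mat fgF, bgF;
    foreground.convertTo(fgF, CV_32F);
    background.convertTo(bgF, CV_32F);

    Mat invAlpha = Scalar::all(1.0) - alpha;
    Mat blended = fgF.mul(alpha) + bgF.mul(invAlpha);
    blended.convertTo(blended, CV_8U);

    imwrite("overlay.jpg", blended);
    return 0;
}

This keeps the smooth transition at the circle's border, at the cost of a few float conversions.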
I am trying to detect two concentric circles using OpenCV in Android. The big outer circle is red, and the smaller inner circle is blue. The idea is to detect the big circle while the distance is long, and to detect the inner circle as the distance becomes short.
Sample picture
I am using simple code:
Mat matRed = new Mat();
// matHsv is the current frame already converted to HSV
Core.inRange(matHsv, getScalar(hue - HUE_D, saturation - SAT_D, brightness - BRIGHT_D),
        getScalar(hue + HUE_D, saturation + SAT_D, brightness + BRIGHT_D), matRed);
// here we have a black-and-white image
Imgproc.GaussianBlur(matRed, matRed, new Size(0, 0), 6, 6);

Mat matCircles = new Mat();
// param2 is the accumulator threshold, tuned elsewhere
Imgproc.HoughCircles(matRed, matCircles, CV_HOUGH_GRADIENT, 1, matRed.rows() / 8, 100, param2, 0, 0);
After calling inRange we have a white ring on a black background. The HoughCircles function detects only the inner black circle.
How can I make it detect the outer white circle instead?
Without seeing a sample image (or being quite sure what you mean by 'detect big circle while distance is long and detect inner circle as the distance becomes short'), this is somewhat of a guess, but I'd suggest using Canny edge detection to get the boundaries of your circles and then using contours to extract the edges. You can use the contour hierarchy to determine which is inside which if you need to extract one or the other.
Additionally, given the circles are different colours, you might want to look at using inRange to segment based on colour; for example, this post from PyImageSearch contains a Python application which does colour-based tracking.
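A rough sketch of that approach in C++ (the Java bindings expose the same calls; the input file name and the Canny thresholds here are made up and would need tuning):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main()
{
    Mat img = imread("rings.jpg");   // hypothetical input image with the two circles

    // Edge map of both circle boundaries
    Mat gray, edges;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    GaussianBlur(gray, gray, Size(5, 5), 1.5);
    Canny(gray, edges, 50, 150);

    // Contours plus hierarchy, so we know which contour lies inside which
    std::vector<std::vector<Point>> contours;
    std::vector<Vec4i> hierarchy;
    findContours(edges, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // hierarchy[i][3] is the index of the parent contour, or -1 for an outer one
        bool outer = hierarchy[i][3] < 0;
        drawContours(img, contours, (int)i,
                     outer ? Scalar(0, 255, 0) : Scalar(255, 0, 0), 2);
    }

    imshow("contours", img);
    waitKey();
    return 0;
}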
Hi, I have attached the image below with a yellow bounding box. Is there any algorithm (or sequence of algorithms) in OpenCV by which I can detect the yellow pixels and create an ROI mask (which will block out all the pixels outside of it)?
You can do:
Find the yellow polygon
Fill the inside of the polygon
Copy only the inside of the polygon to a black-initialized image
Find the yellow polygon
Unfortunately, you used anti-aliasing to draw the yellow line, so the yellow color is not pure yellow but has a wider range of values due to interpolation. This also affects the final result, since some non-yellow pixels will be included in the result image. You can easily correct this by not using anti-aliasing.
So the best option is to convert the image to the HSV color space (where it's easier to segment a single color) and keep only values in a range around pure yellow.
If you don't use anti-aliasing, you don't even need to convert to HSV and can simply keep the points whose value is pure yellow.
Fill the inside of the polygon
You can use floodFill to fill the polygon. You need a starting point for that. Since we don't know whether a given point is inside the polygon (and taking the barycenter may not be safe, since the polygon is not convex), we can safely assume that the point (0,0), i.e. the top-left corner of the image, is outside the polygon. We can then fill the outside of the polygon, and invert the result.
Copy only the inside of the polygon to a black-initialized image
Once you have the mask, simply use copyTo with that mask to copy the content under the non-zero mask pixels onto a black image.
Here is the full code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");

    // Convert to HSV color space
    Mat3b hsv;
    cvtColor(img, hsv, COLOR_BGR2HSV);

    // Get yellow pixels
    Mat1b polyMask;
    inRange(hsv, Scalar(29, 220, 220), Scalar(31, 255, 255), polyMask);

    // Fill outside of polygon
    floodFill(polyMask, Point(0, 0), Scalar(255));

    // Invert (inside of polygon filled)
    polyMask = ~polyMask;

    // Create a black image
    Mat3b res(img.size(), Vec3b(0, 0, 0));

    // Copy only masked part
    img.copyTo(res, polyMask);

    imshow("Result", res);
    waitKey();

    return 0;
}
Result:
NOTES
Please note that there are some almost-yellow pixels in the result image. This is due to anti-aliasing, as explained above.
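If those fringe pixels matter, one option (a hypothetical refinement, not part of the original answer) is to erode the mask slightly just before the img.copyTo(res, polyMask) call, so the anti-aliased border of the line is left out of the copy:

// Shrink the mask a little to drop the anti-aliased fringe of the yellow line
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
erode(polyMask, polyMask, kernel, Point(-1, -1), 2);   // ~2 px; tune the iterations as needed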
I have a simple colorful image taken by a camera, and I need to detect some red circles in it very accurately. The circles have different radii and should be distinguishable. There are also some black circles in the photo.
Here is the procedure I followed:
1 - Convert from RGB to HSV.
2 - Determine the "red" upper and lower bands:
lower_red = np.array([100, 50, 50])
upper_red = np.array([179, 255, 255])
3 - Create a mask.
4 - Apply cv2.GaussianBlur to smooth the mask and reduce noise.
5 - Detect the remaining circles by running cv2.HoughCircles on the mask with different radii (I have a radius range).
Problem: When I create the mask, the quality is not good enough, and therefore the circles are detected with the wrong radii.
Attachments include main photo, mask, and detected circles.
Can anybody help me set all pixels except the red ones to black? In other words, create a high-quality mask.
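For reference, the pipeline described above could look roughly like this in C++ (a sketch only; the same calls exist in cv2 for Python). The file name, the thresholds, and the radius range are placeholders, and red is split into two hue ranges because it wraps around the hue axis in HSV:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main()
{
    Mat img = imread("photo.jpg");   // hypothetical input image

    // 1-2: convert to HSV and threshold the red band
    Mat hsv;
    cvtColor(img, hsv, COLOR_BGR2HSV);

    // Red wraps around hue 0/180, so combine a low and a high range
    Mat maskLow, maskHigh, mask;
    inRange(hsv, Scalar(0, 50, 50), Scalar(10, 255, 255), maskLow);
    inRange(hsv, Scalar(160, 50, 50), Scalar(179, 255, 255), maskHigh);
    mask = maskLow | maskHigh;   // 3: the combined mask

    // 4: smooth the mask to reduce noise
    GaussianBlur(mask, mask, Size(9, 9), 2);

    // 5: detect circles within a known radius range (placeholder values)
    std::vector<Vec3f> circles;
    HoughCircles(mask, circles, HOUGH_GRADIENT, 1, mask.rows / 8, 100, 30, 10, 100);

    for (const Vec3f& c : circles)
        circle(img, Point(cvRound(c[0]), cvRound(c[1])), cvRound(c[2]), Scalar(0, 255, 0), 2);

    imshow("detected", img);
    waitKey();
    return 0;
}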
I am developing face feature detection in my project.
So far I have implemented detecting the face, then finding the eyes within the face.
I want to crop the eyes, which are circular.
circle( mask, center, radius, cv::Scalar(255,255,255), -1, 8, 0 );
image.copyTo( dst, mask );
With the above code, I am able to mask the image with black, leaving only the eye region. Now I want to crop only the eye region.
Can anybody help me out with this issue? Please check the image below.
Cropping, by definition, means cutting an axis-aligned rectangle from a larger image, leaving a smaller image.
If you want to "crop" a region that is not an axis-aligned rectangle, you will have to use a mask. The mask can be the size of the full image (this is sometimes convenient), or as small as the smallest bounding (axis-aligned) rectangle containing all the pixels you want to leave visible.
This mask can be binary, meaning that it indicates whether or not a pixel is visible, or it can be an alpha mask which indicates the degree of transparency of any pixel within it, with 0 indicating a non-visible pixel and (for an 8-bit mask image) 255 indicating full opacity.
In your example above you can get the sub-image ROI (Region-Of-Interest) like this:
cv::Mat eyeImg = image(cv::Rect(center.x - radius, // ROI x-offset, left coordinate
center.y - radius, // ROI y-offset, top coordinate
2*radius, // ROI width
2*radius)); // ROI height
Note that eyeImg is not a copy, but refers to the same pixels within image. If you want a copy, add a .clone() at the end.
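If you also want the area outside the circle blacked out within that rectangle, one way (a sketch that builds on the circular mask from your code, assuming the mask is single-channel CV_8UC1 and the ROI lies fully inside the image) is to crop both the image and the mask to the same ROI and then use copyTo:

cv::Rect roi(center.x - radius, center.y - radius, 2 * radius, 2 * radius);

cv::Mat eyeImg  = image(roi);   // view into the original image
cv::Mat eyeMask = mask(roi);    // matching view into the circular mask

cv::Mat eyeOnly = cv::Mat::zeros(eyeImg.size(), eyeImg.type());
eyeImg.copyTo(eyeOnly, eyeMask);   // copy only the pixels inside the circle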
I want to do background subtraction on a video file using an OpenCV method. Right now I'm able to do the background subtraction, but the problem is that I can't get the output in color: everything that comes out after subtracting the background is in grayscale :(. I want to get the color information into the foreground, i.e. the output that results after the background has been subtracted.
Can I do it using a masking technique, like the following procedure I'm thinking about?
Capture Input -- InputFrame (RGB)
Process InputFrame
Subtract background, store foreground in TempFrame (which is coming in grayscale :( )
Create a mask using TempFrame
Apply the created mask to the InputFrame
Get colored foreground as OutFrame
I'm stuck on doing the masking with OpenCV. I'm a complete beginner in OpenCV. Please help me to overcome this.
Thanks in advance.
http://vimeo.com/27477093
code is here
http://code.google.com/p/derin-deli-mavi/downloads/detail?name=denemeOpenCv23.zip&can=2&q=
To get a colored foreground, just copy the image using the foreground mask:
// image.copyTo(foreground,foreground);
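Fleshed out a little (a sketch only: it assumes OpenCV's built-in MOG2 background subtractor rather than whatever the linked archive uses, and the video file name is made up):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("video.avi");   // hypothetical input video
    cv::Ptr<cv::BackgroundSubtractorMOG2> subtractor = cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask, coloredFg;
    while (cap.read(frame))
    {
        // Single-channel foreground mask (shadows are marked 127 by default)
        subtractor->apply(frame, fgMask);

        // Keep the color information only where the mask is non-zero
        coloredFg = cv::Mat::zeros(frame.size(), frame.type());
        frame.copyTo(coloredFg, fgMask);

        cv::imshow("colored foreground", coloredFg);
        if (cv::waitKey(30) == 27)   // Esc to quit
            break;
    }
    return 0;
}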
Okay, I don't understand how TempFrame (your foreground) could be greyscale if you are using background subtraction. You must be using a very special algorithm. But assuming TempFrame is greyscale, then you would do this:
cv::Mat mask = tempFrame > 0.5;
cv::Mat outFrame;
capturedFrame.copyTo(outFrame, mask);
That is OpenCV 2.0 code above. The number 0.5 is a threshold; you'll need to set it to something appropriate. If you're not using floating-point images, you'd probably set it to 128 or something like that. This is the same thing in OpenCV 1.1 code:
CvMat* mask = cvCreateMat(tempFrame->rows, tempFrame->cols, CV_8UC1);
cvCmpS(tempFrame, 0.5, mask, CV_CMP_GT);   // mask = tempFrame > 0.5
CvMat* outFrame = cvCreateMat(capturedFrame->rows, capturedFrame->cols, CV_32FC3);
cvCopy(capturedFrame, outFrame, mask);