Merging Images by using Mat and MatOfPoint

Here is what I am trying to do:
Detect rectangles in a Mat (containing the background image data) using findContours() - done.
Fill those rectangles with images from my PC.
I now have iteration code that finds each rectangle, and inside it I have a MatOfPoint holding the rectangle's four vertices.
Given a Mat of the image that needs to be inserted, is there a way to merge that Mat into the background image's Mat, using the vertices known from the MatOfPoint? No aspect-ratio preservation is needed; the image only has to fit into the rectangle I have found.
[Sketch attached]
(I attach this sketch to help your understanding, but please bear in mind that I am very new to OpenCV; if I have misunderstood a basic concept of OpenCV, please let me know.)
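One possible approach (a sketch of my own, not from this thread): compute a perspective transform from the inserted image's corners to the rectangle's four vertices with getPerspectiveTransform(), warp the image into a canvas the size of the background with warpPerspective(), then copy it over the background through a polygon mask. Below is a minimal C++ sketch (the Java bindings expose the same functions under Imgproc/Core); file names and vertex coordinates are placeholders, and the vertices are assumed ordered top-left, top-right, bottom-right, bottom-left:
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat background = imread("background.png"); // placeholder file name
    Mat patch = imread("patch.png");           // image to insert (placeholder)

    // Hypothetical rectangle vertices, as they might come from a MatOfPoint,
    // ordered top-left, top-right, bottom-right, bottom-left.
    std::vector<Point2f> dst = { Point2f(100,100), Point2f(300,100),
                                 Point2f(300,250), Point2f(100,250) };
    std::vector<Point2f> src = { Point2f(0,0), Point2f(patch.cols-1,0),
                                 Point2f(patch.cols-1,patch.rows-1),
                                 Point2f(0,patch.rows-1) };

    // Map the patch corners onto the rectangle; no aspect ratio is preserved.
    Mat H = getPerspectiveTransform(src, dst);
    Mat warped;
    warpPerspective(patch, warped, H, background.size());

    // Copy the warped patch over the background only inside the rectangle.
    Mat mask(background.size(), CV_8U, Scalar(0));
    std::vector<Point> poly(dst.begin(), dst.end());
    fillConvexPoly(mask, poly, Scalar(255));
    warped.copyTo(background, mask);

    imwrite("merged.png", background);
    return 0;
}
Since no proportions need to be maintained, the transform simply stretches the patch to fill the quadrilateral exactly.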

Related

Region of interest in fingerprint image

I am working on a fingerprint recognition project with OpenCV. Currently I need to extract the inner region of the fingerprint (the ellipse in the image), but I am not sure how to do it.
Any suggestion is appreciated.
EDIT:
I need to check whether a fingerprint from a sensor device and another from an identification card match or not. The fingerprint from the sensor is shown on the left, and the one from the identification card on the right. In order to validate them, this fingerprint has to be segmented (the area outside the ellipse provides no useful information and indeed adds "noise" for this purpose).
Thank you.
API55's comment is correct; for clarity:
create a mask (white inside the ellipse and black outside; you can do this with the ellipse function and -1 for the thickness), then copy the image using the mask (bitwise_and for Python or copyTo for C++ should do it)... you will always have a square image, but everything outside the ellipse will be black (or whatever color you want).
These steps are pretty much spot on:
Create your circular mask in the correct place in the image.
Copy the image using that mask.
Your new image contains the masked data, and black everywhere else.
Below is an example of how to implement this in code (lovingly borrowed from here):
#include <opencv2/opencv.hpp>
using namespace cv;

Mat img = imread("small1.png", 0); // load as grayscale
Rect region(10,10,40,40); // example roi
Mat roi(img, region); // part of img
Mat mask(Size(40,40), CV_8U, Scalar(0)); // all black
circle(mask, Point(20,20), 20, Scalar(255), -1, LINE_AA); // filled white circle
Mat circRoi;
bitwise_and(roi, roi, circRoi, mask); // retain only pixels inside the circle
//
// now you can do your intensity calculation on circRoi
//
imshow("circle masked", mask);
imshow("masked roi", circRoi);
waitKey();
Useful Links
Why ROIs don't have to be circular but Mats do
Older code example, useful for learning the theory, but I wouldn't recommend implementing it using IplImage
Creating a custom ROI of any shape or size

How can we extract region bound by a contour in OpenCV?

I am new to OpenCV and was trying to extract the region bound by the largest contour. It may be a simple question, but I am not able to figure it out. I tried googling too, without any luck.
I would:
Use contourArea() to find the largest closed contour.
Use boundingRect() to get the bounds of that contour.
Draw the contour using drawContours() (with thickness set to -1 to fill the contour) and use this as a mask.
Use the mask to set all pixels in the original image not in the ROI to (0,0,0).
Use the bounding rectangle to extract just that area from the original image.
Here is a good explanation of what you want to develop. Basically you have to:
apply a threshold to a copy of the original image;
use findContours; the output is a vector<vector<Point>> that stores the contours;
iterate over the contours to find the largest.
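Putting the two answers together, a minimal C++ sketch (the file name and threshold value are assumptions, not from the question):
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat image = imread("input.png"); // placeholder file name
    Mat gray, binary;
    cvtColor(image, gray, COLOR_BGR2GRAY);
    threshold(gray, binary, 128, 255, THRESH_BINARY); // threshold value is a guess

    // findContours may modify its input, so work on a copy.
    std::vector<std::vector<Point> > contours;
    findContours(binary.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 0;

    // Iterate over the contours to find the one with the largest area.
    int largest = 0;
    double maxArea = 0;
    for (int i = 0; i < (int)contours.size(); i++) {
        double area = contourArea(contours[i]);
        if (area > maxArea) { maxArea = area; largest = i; }
    }

    // Draw it filled (thickness -1) to build a mask, black out everything
    // outside it, then crop to the contour's bounding rectangle.
    Mat mask(image.size(), CV_8U, Scalar(0));
    drawContours(mask, contours, largest, Scalar(255), -1);
    Mat extracted(image.size(), image.type(), Scalar::all(0));
    image.copyTo(extracted, mask);
    Mat cropped = extracted(boundingRect(contours[largest]));

    imwrite("largest_contour.png", cropped);
    return 0;
}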

Stretch region of image through opencv or opengl in iOS

I am trying to create a double chin in the fat-face image, as shown in my desired result image below.
I have morphed the normal face into a fat face by wrapping an image on a mesh and deforming the mesh.
Original image
Wrapped image on mesh grid with vertex points displaced
Current result image
I tried a lot of mesh-point arrangements but could not get a result like the one shown in the first image.
Any ideas on how to achieve this with OpenGL or OpenCV on iOS?
It's obvious from the first image that an effect has been added to produce the double or triple chin.
It actually looks like either a preset image blended into the original, or a scaled and stretched version of the original chin blended into the warped image.
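If it is indeed a pre-rendered chin blended in, a minimal OpenCV C++ sketch of such a blend might look like the following (the file names, overlay position, and 0.6/0.4 weights are all made-up values):
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat face = imread("warped_face.png");  // the mesh-warped image (placeholder)
    Mat chin = imread("chin_overlay.png"); // pre-rendered chin patch (placeholder)

    // Hypothetical location of the chin region within the face image.
    Rect region(120, 300, chin.cols, chin.rows);
    Mat roi = face(region);

    // Blend the overlay into the region in place; the weights are arbitrary.
    addWeighted(roi, 0.6, chin, 0.4, 0.0, roi);

    imwrite("double_chin.png", face);
    return 0;
}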

Compare histograms of specific areas of two images? OpenCV

Basically, I want to be able to compare two histograms, but not of whole images, just specific areas. I have image A with a specific rectangular region on it that I want to compare to another image B. Is there a way to get the histogram of a definable rectangular region of an image? I have the x and y position of the rectangular area, as well as its width and height, and want to get its histogram. I'm using OpenCV with Python.
Sorry if that isn't very clear :(
(I'm setting up a program that takes a picture of a circuit board and checks each solder pad for consistency with an image of a perfect board. If one pad is off, the program raises a flag saying that specific pad is off by x percent, not the whole board.)
Note: The following is in C++, but I think it is not hard to find the equivalent functions for Python.
You can find the histogram of an image using this tutorial; for example, running it on the Lena image produces the histogram shown there.
In your case, since you have the rectangle coordinates, you can just extract the ROI of the image:
// C++ code
cv::Mat image = cv::imread("lena.png", 0);
cv::Rect roiRect = cv::Rect(150, 150, 250, 250);
cv::Mat imageRoi = image(roiRect);
and then find the histogram of just the ROI in the same way as above:
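For example, following the tutorial's calcHist usage (a sketch; the 256 bins and [0,256) range are typical defaults for a grayscale image, not values from the question):
// C++ code, continuing from the ROI snippet above
int histSize = 256;             // one bin per gray level
float range[] = { 0, 256 };     // upper bound is exclusive
const float* histRange = { range };
cv::Mat roiHist;
cv::calcHist(&imageRoi, 1, 0, cv::Mat(), roiHist, 1, &histSize, &histRange);
cv::normalize(roiHist, roiHist, 0, 1, cv::NORM_MINMAX);
// roiHist can now be compared against another board's ROI histogram,
// e.g. with cv::compareHist(roiHist, referenceHist, CV_COMP_CORREL).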
Is this what you wanted (in theory at least), or did I misunderstand?

OpenCV cvRemap Cropping Image

I am very new to OpenCV (2.1), so please keep that in mind.
I managed to calibrate the cheap web camera that I am using (with a wide-angle attachment), using the checkerboard calibration method to produce the intrinsic matrix and distortion coefficients.
I then have no trouble feeding these values back in and producing image maps, which I apply to a video feed to correct the incoming images.
I run into an issue, however. I know that when it warps/corrects the image, it creates several skewed sections and then crops the image to remove any black areas. My question is: can I view the complete warped image, including the regions that have black areas? Below is an example of the black regions with skewed sections I was trying to convey, in case my terminology is off:
An image better conveying the regions I am talking about can be found here. This image was discovered in this post.
Currently: cvRemap() basically returns the yellow box in the image linked above, but I want to see the whole image, as there is relevant data I am looking to get out of it.
What I've tried: applying a scale conversion to the image map to fit the complete image (including the stretched parts) into the frame:
CvMat *intrinsic = (CvMat*)cvLoad( "Intrinsics.xml" );
CvMat *distortion = (CvMat*)cvLoad( "Distortion.xml" );
cvInitUndistortMap( intrinsic, distortion, mapx, mapy );
cvConvertScale(mapx, mapx, 1.25, -shift_x); // Some sort of scale conversion
cvConvertScale(mapy, mapy, 1.25, -shift_y); // applied to the image map
cvRemap(distorted,undistorted,mapx,mapy);
The cvConvertScale, when I think I have aligned the x/y shift correctly (by guessing and checking), somehow distorts the image map, making the correction useless. There may be some math involved here that I am not correctly following/understanding.
Does anyone have any other suggestions to solve this problem, or ideas about what I might be doing wrong? I've also tried writing my own code to fix the distortion issues, but let's just say OpenCV already knows how to do it well.
From memory, you need to use InitUndistortRectifyMap(cameraMatrix, distCoeffs, R, newCameraMatrix, map1, map2), of which InitUndistortMap is a simplified version:
cvInitUndistortMap( intrinsic, distort, map1, map2 )
is equivalent to:
cvInitUndistortRectifyMap( intrinsic, distort, identityMatrix, intrinsic, map1, map2 )
The new parameters are R and newCameraMatrix. R specifies an additional transformation (e.g. a rotation) to perform; just set it to the identity matrix.
The parameter of interest to you is newCameraMatrix. In InitUndistortMap this is the same as the original camera matrix, but you can use it to get the scaling effect you're talking about.
You get the new camera matrix with GetOptimalNewCameraMatrix(cameraMat, distCoeffs, imageSize, alpha,...). You basically feed in intrinsic, distort, your original image size, and a parameter alpha (along with containers to hold the result matrix, see documentation). The parameter alpha will achieve what you want.
I quote from the documentation:
The function computes the optimal new camera matrix based on the free scaling parameter. By varying this parameter the user may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistortion result will likely have some black pixels corresponding to “virtual” pixels outside of the captured distorted image. The original camera matrix, distortion coefficients, the computed new camera matrix, and the newImageSize should be passed to InitUndistortRectifyMap to produce the maps for Remap.
So for the extreme example with all the black bits showing, you want alpha=1.
In summary:
call cvGetOptimalNewCameraMatrix with alpha=1 to obtain newCameraMatrix;
use cvInitUndistortRectifyMap with R set to the identity matrix and newCameraMatrix set to the one you just calculated;
feed the new maps into cvRemap.
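Putting the summary together in the old C API used above (a sketch: the image size and map allocation are assumptions, the distorted/undistorted frames come from the question's code, and cvGetOptimalNewCameraMatrix may require a slightly newer OpenCV than 2.1, so check your version's headers):
// Load the calibration results as in the question.
CvMat *intrinsic  = (CvMat*)cvLoad( "Intrinsics.xml" );
CvMat *distortion = (CvMat*)cvLoad( "Distortion.xml" );

CvSize size = cvSize(640, 480); // your capture size (assumption)
CvMat *newCameraMatrix = cvCreateMat(3, 3, CV_32FC1);

// alpha = 1 keeps every source pixel, so the black border regions stay visible.
cvGetOptimalNewCameraMatrix(intrinsic, distortion, size, 1.0,
                            newCameraMatrix, size, NULL, 0);

// R is the identity: no extra rotation between views.
CvMat *R = cvCreateMat(3, 3, CV_32FC1);
cvSetIdentity(R);

IplImage *mapx = cvCreateImage(size, IPL_DEPTH_32F, 1);
IplImage *mapy = cvCreateImage(size, IPL_DEPTH_32F, 1);
cvInitUndistortRectifyMap(intrinsic, distortion, R, newCameraMatrix, mapx, mapy);

// Remap as before; 'distorted' and 'undistorted' are the existing frames.
cvRemap(distorted, undistorted, mapx, mapy);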
