Region of interest in fingerprint image - opencv

I am working on a fingerprint recognition project with OpenCV. Currently I need to extract the inner region of the fingerprint (the ellipse in the image), but I am not sure how to do it.
Any suggestion is appreciated.
EDIT:
I need to check whether a fingerprint from a sensor device matches a fingerprint from an identification card. The fingerprint from the sensor is shown on the left, the one from the identification card on the right. To compare them, the fingerprint must be segmented first: the area outside the ellipse provides no useful information and in fact adds "noise" for this purpose.
Thank you.

@API55's comment is correct. For clarity:
create a mask (white inside the ellipse and black outside); you can do this with the ellipse function and -1 as the thickness. Then copy the image using the mask (bitwise_and for Python or copyTo for C++ should do it)... you will always have a rectangular image, but you will have black (or whatever color you want) outside the ellipse
These steps are pretty much spot on:
Create your circular mask in the correct place in the image
Copy the image using that mask
Your new image contains the masked data, and black data everywhere else.
Below is an example of how to implement this in code (lovingly borrowed from here):
#include <opencv2/opencv.hpp>
using namespace cv;

Mat img = imread("small1.png", IMREAD_GRAYSCALE); // load as grayscale
Rect region(10, 10, 40, 40);              // example ROI
Mat roi(img, region);                     // header pointing into part of img
Mat mask(Size(40, 40), CV_8U, Scalar(0)); // all black
circle(mask, Point(20, 20), 20, Scalar(255), -1, LINE_AA); // filled white circle
Mat circRoi;
bitwise_and(roi, roi, circRoi, mask); // retain only pixels inside the circle
//
// now you can do your intensity calculation on circRoi
//
imshow("circle mask", mask);
imshow("masked roi", circRoi);
waitKey();
Useful links:
Why ROIs don't have to be circular but Mats do
An older code example, useful for learning the theory, but I wouldn't recommend implementing it with IplImage
Creating a custom ROI of any shape or size

Related

How to make a mask of everything outside the result of FindContours()

I've detected the triangular shape of a traffic sign. To use this image for further processing, I'd like to create a mask that makes the background white (or black, it doesn't really matter).
To detect this shape I used:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(input.GetImage(), contours, hierarchy, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
With the DrawContours method, I can easily draw them, with the correct result: image of the contours.
How do I make a mask so that I can clear everything outside the contours?
You can use the contours you have and draw them to another, similar-sized empty image if all you want is the outline.
If you want them filled, you can use the fillPoly function to fill the contours directly with a specified color, and then apply a binary threshold to make all pixels with that color white and all others black.
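For illustration, here is a minimal sketch of that idea in OpenCV C++ (the question uses Emgu CV, whose CvInvoke wrappers expose the same functions); the file name and threshold values are placeholder assumptions:

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    // Hypothetical input: the image in which the sign's contour was found
    Mat input = imread("sign.png", IMREAD_GRAYSCALE);
    Mat binary;
    threshold(input, binary, 128, 255, THRESH_BINARY);

    std::vector<std::vector<Point>> contours;
    findContours(binary, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    // Draw the contours filled (thickness = FILLED) onto an empty image:
    // the result is a mask that is white inside the contours, black outside.
    Mat mask = Mat::zeros(input.size(), CV_8U);
    drawContours(mask, contours, -1, Scalar(255), FILLED);

    // Everything outside the contours is now cleared by copying with the mask.
    Mat result;
    input.copyTo(result, mask);
    return 0;
}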

Can't determine document edges from camera with OpenCV

I need to find the edges of a document held in the user's hands.
1) Original image from camera:
2) Then I convert the image to grayscale:
3) Then I blur it:
4) Then I find edges using Canny:
5) Then I use dilate:
As you can see in the last image, the contour around the map is torn and not fully determined. What is my mistake, and how can I solve the problem so that the outline of the document is detected completely?
This is the code I use:
final Mat mat = new Mat();
sourceMat.copyTo(mat);
//convert the image to black and white
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_BGR2GRAY);
//blur to enhance edge detection
Imgproc.GaussianBlur(mat, mat, new Size(5, 5), 0);
if (isClicked) saveImageFromMat(mat, "blur", "blur");
// Canny edge detection (produces an 8-bit binary edge map)
int thresh = 128;
Imgproc.Canny(mat, mat, thresh, thresh * 2);
//dilate helps to connect nearby line segments
Imgproc.dilate(mat, mat,
        Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3)),
        new Point(-1, -1), // anchor at the kernel center
        2,                 // iterations
        1,                 // borderType (1 = BORDER_REPLICATE)
        new Scalar(1));    // borderValue
This answer is based on my above comment. If someone is holding the document, you cannot see the edge that is behind the user's hand. So, any method for detecting the outline of the document must be robust to some missing parts of the edge.
I suggest using a variant of the Hough transform to detect the document. The Wikipedia article about the Hough transform makes it sound quite scary (as Wikipedia often does with mathematical subjects), but don't be discouraged, actually they are not too difficult to understand or implement.
The original Hough transform detected straight lines in images. As explained in this OpenCV tutorial, any straight line in an image can be defined by 2 parameters: an angle θ and a distance r of the line from the origin, via the line equation r = x·cos θ + y·sin θ. So you quantize these 2 parameters, and create a 2D array with one cell for every possible line that could be present in your image. (The finer the quantization you use, the larger the array you will need, but the more accurate the positions of the found lines will be.) Initialize the array to zeros. Then, for every pixel that is part of an edge detected by Canny, you determine every line (θ, r) that the pixel could be part of, and increment the corresponding bin. After processing all pixels, you will have, for each bin, a count of how many pixels were detected on the line corresponding to that bin. Counts which are high enough probably represent real lines in the image, even if parts of the line are missing. So you just scan through the bins to find bins which exceed the threshold.
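To make the voting scheme concrete, here is a minimal sketch (not from the original answer) of such an accumulator in C++, assuming edges is the binary CV_8U output of Canny:

#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

// Vote in a (theta, r) accumulator for every edge pixel, as described above.
Mat houghAccumulator(const Mat& edges)
{
    const int numTheta = 180;                 // 1-degree quantization
    const double maxR = std::hypot(edges.cols, edges.rows);
    const int rOffset = (int)std::ceil(maxR); // r can be negative
    Mat acc = Mat::zeros(2 * rOffset + 1, numTheta, CV_32S);

    for (int y = 0; y < edges.rows; ++y)
        for (int x = 0; x < edges.cols; ++x)
            if (edges.at<uchar>(y, x))
                for (int t = 0; t < numTheta; ++t)
                {
                    double theta = t * CV_PI / numTheta;
                    int r = cvRound(x * std::cos(theta) + y * std::sin(theta));
                    acc.at<int>(r + rOffset, t)++; // one vote for line (theta, r)
                }

    // Bins whose count exceeds a threshold correspond to detected lines,
    // even when parts of the line are occluded.
    return acc;
}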
OpenCV contains Hough detectors for straight lines and circles, but not for rectangles. You could either use the line detector and check for 4 lines that form the edges of your document, or you could write your own Hough detector for rectangles, perhaps using the paper Jung 2004 for inspiration. Rectangles have at least 5 degrees of freedom (2D position, scale, aspect ratio, and rotation angle), and the memory requirement for a 5D array obviously goes up pretty fast. But since the range of each parameter is limited (i.e., the document's aspect ratio is known, and you can assume the document will be well centered and not rotated much), it is probably feasible.
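If you go the built-in line-detector route, the probabilistic Hough transform is the easiest starting point. A minimal C++ sketch (the question's code is Java, but the Imgproc wrappers mirror these calls; the file name and all parameter values are guesses to tune):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    Mat gray = imread("document.jpg", IMREAD_GRAYSCALE); // placeholder file name
    Mat edges;
    GaussianBlur(gray, edges, Size(5, 5), 0);
    Canny(edges, edges, 50, 150);

    // Detect line segments; threshold/minLineLength/maxLineGap need tuning.
    std::vector<Vec4i> lines;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 20);

    // Visualize candidates; the document edges should appear among the
    // strongest lines even where the hand occludes part of the border.
    Mat vis;
    cvtColor(gray, vis, COLOR_GRAY2BGR);
    for (const Vec4i& l : lines)
        line(vis, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0, 0, 255), 2);
    return 0;
}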

Merging Images by using Mat and MatOfPoint - opencv

Here is what I am trying to do.
Detect rectangles from a Mat (containing the background image data) using findContours() - done
Fill those rectangles with an image from my PC
Now I have iteration code finding each rectangle, and inside it I have a MatOfPoint holding the 4 vertices of the rectangle.
I am wondering: if I have a Mat of the image which needs to be inserted, is there a way to merge this Mat into the background image's Mat, using the vertices known from the MatOfPoint? No aspect-ratio preservation is needed; the only requirement is fitting the image into a rectangle I have found.
SKETCH
(I attach this sketch to help your understanding, but please note that I am very new to OpenCV, so if I have misunderstood a basic concept, please let me know)
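For context, one common OpenCV way to fit an image into a quadrilateral with known vertices is getPerspectiveTransform plus warpPerspective. Below is a minimal C++ sketch (the question uses the Java API, where the same functions exist); the file names, the hard-coded vertices, and the vertex ordering are all illustrative assumptions:

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    Mat background = imread("background.jpg"); // placeholder file names
    Mat insert = imread("insert.jpg");

    // The 4 rectangle vertices found via findContours, hard-coded here as an
    // example; order assumed top-left, top-right, bottom-right, bottom-left.
    Point2f dst[4] = { {100, 50}, {300, 60}, {310, 220}, {90, 210} };
    Point2f src[4] = { {0, 0},
                       {(float)insert.cols - 1, 0},
                       {(float)insert.cols - 1, (float)insert.rows - 1},
                       {0, (float)insert.rows - 1} };

    // Warp the insert image onto the quad (no aspect-ratio preservation).
    Mat H = getPerspectiveTransform(src, dst);
    Mat warped;
    warpPerspective(insert, warped, H, background.size());

    // Build a mask of the quad and copy the warped pixels into the background.
    Mat mask = Mat::zeros(background.size(), CV_8U);
    std::vector<Point> poly = { {100, 50}, {300, 60}, {310, 220}, {90, 210} };
    fillConvexPoly(mask, poly, Scalar(255));
    warped.copyTo(background, mask);
    return 0;
}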

Opencv Detect boundary and ROI mask

Hi, I have attached the image below with a yellow bounding box. Is there an algorithm (or a sequence of algorithms) in OpenCV by which I can detect the yellow pixels and create an ROI mask (which will block out all the pixels outside of it)?
You can do:
Find the yellow polygon
Fill the inside of the polygon
Copy only the inside of the polygon to a black-initialized image
Find the yellow polygon
Unfortunately, you used anti-aliasing to draw the yellow line, so the yellow color is not pure yellow but covers a wider range due to interpolation. This also affects the final result, since some not-quite-yellow pixels will be included in the result image. You can easily avoid this by not using anti-aliasing.
So the best option is to convert the image to the HSV color space (where it's easier to segment a single color) and keep only values in a range around pure yellow.
If you don't use anti-aliasing, you don't even need to convert to HSV; simply keep the points whose value is pure yellow.
Fill the inside of the polygon
You can use floodFill to fill the polygon. You need a starting point for that. Since we don't know whether a given point is inside the polygon (and taking the barycenter may not be safe, since the polygon is not convex), we can safely assume that the point (0,0), i.e. the top-left corner of the image, is outside the polygon. So we flood-fill the outside of the polygon instead, and then invert the result.
Copy only the inside of the polygon to a black-initialized image
Once you have the mask, simply use copyTo with that mask to copy onto a black image the content under the non-zero pixels of the mask.
Here is the full code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");

    // Convert to HSV color space
    Mat3b hsv;
    cvtColor(img, hsv, COLOR_BGR2HSV);

    // Get yellow pixels
    Mat1b polyMask;
    inRange(hsv, Scalar(29, 220, 220), Scalar(31, 255, 255), polyMask);

    // Fill outside of polygon
    floodFill(polyMask, Point(0, 0), Scalar(255));

    // Invert (inside of polygon filled)
    polyMask = ~polyMask;

    // Create a black image
    Mat3b res(img.size(), Vec3b(0, 0, 0));

    // Copy only masked part
    img.copyTo(res, polyMask);

    imshow("Result", res);
    waitKey();

    return 0;
}
Result:
NOTES
Please note that there are some almost-yellow pixels in the result image. This is due to the anti-aliasing, as explained above.

OpenCV cvRemap Cropping Image

So I am very new to OpenCV (2.1), so please keep that in mind.
I managed to calibrate my cheap web camera (with a wide-angle attachment), using the checkerboard calibration method to produce the intrinsic and distortion coefficients.
I then have no trouble feeding these values back in and producing image maps, which I then apply to a video feed to correct the incoming images.
I run into an issue, however. I know that when it warps/corrects the image, it creates several skewed sections, and then formats the image to crop out any black areas. My question then is: can I view the complete warped image, including some regions that have black areas? Below is an example of the black regions with skewed sections I am trying to convey, in case my terminology is off:
An image better conveying the regions I am talking about can be found here! This image was discovered in this post.
Currently: cvRemap() returns basically the yellow box in the image linked above, but I want to see the whole image, as there is relevant data I am looking to get out of it.
What I've tried: applying a scale conversion to the image map to fit the complete image (including stretched parts) into the frame:
CvMat *intrinsic = (CvMat*)cvLoad( "Intrinsics.xml" );
CvMat *distortion = (CvMat*)cvLoad( "Distortion.xml" );
cvInitUndistortMap( intrinsic, distortion, mapx, mapy );
cvConvertScale(mapx, mapx, 1.25, -shift_x); // Some sort of scale conversion
cvConvertScale(mapy, mapy, 1.25, -shift_y); // applied to the image map
cvRemap(distorted,undistorted,mapx,mapy);
When I think I have aligned the x/y shift correctly (by guessing and checking), cvConvertScale somehow distorts the image map, making the correction useless. There might be some math involved here that I am not correctly following/understanding.
Does anyone have any other suggestions to solve this problem, or ideas about what I might be doing wrong? I've also tried writing my own code to fix the distortion, but let's just say OpenCV already knows how to do it well.
From memory, you need to use InitUndistortRectifyMap(cameraMatrix,distCoeffs,R,newCameraMatrix,map1,map2), of which InitUndistortMap is a simplified version.
cvInitUndistortMap( intrinsic, distort, map1, map2 )
is equivalent to:
cvInitUndistortRectifyMap( intrinsic, distort, identity matrix, intrinsic, map1, map2 )
The new parameters are R and newCameraMatrix. R specifies an additional transformation (e.g. a rotation) to perform (just set it to the identity matrix).
The parameter of interest to you is newCameraMatrix. In InitUndistortMap this is the same as the original camera matrix, but you can use it to get that scaling effect you're talking about.
You get the new camera matrix with GetOptimalNewCameraMatrix(cameraMat, distCoeffs, imageSize, alpha,...). You basically feed in intrinsic, distort, your original image size, and a parameter alpha (along with containers to hold the result matrix, see documentation). The parameter alpha will achieve what you want.
I quote from the documentation:
The function computes the optimal new camera matrix based on the free scaling parameter. By varying this parameter the user may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistortion result will likely have some black pixels corresponding to “virtual” pixels outside of the captured distorted image. The original camera matrix, distortion coefficients, the computed new camera matrix and the newImageSize should be passed to InitUndistortRectifyMap to produce the maps for Remap.
So for the extreme example with all the black bits showing you want alpha=1.
In summary:
Call cvGetOptimalNewCameraMatrix with alpha=1 to obtain the newCameraMatrix.
Use cvInitUndistortRectifyMap with R set to the identity matrix and newCameraMatrix set to the one you just calculated.
Feed the new maps into cvRemap.
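Putting the summary together, here is a minimal sketch using the equivalent modern C++ API (the question's code uses the legacy C API, but the calls map one-to-one); loading of the calibration data is assumed to happen elsewhere:

#include <opencv2/opencv.hpp>
using namespace cv;

// 'cameraMatrix' and 'distCoeffs' are assumed to be the calibration results
// the question loads from Intrinsics.xml and Distortion.xml.
Mat undistortFull(const Mat& distorted, const Mat& cameraMatrix, const Mat& distCoeffs)
{
    Size imgSize = distorted.size();

    // alpha = 1 keeps every source pixel, so the skewed black regions stay visible.
    Mat newCamMat = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imgSize, 1.0);

    // An empty Mat() for R is treated as the identity transformation.
    Mat map1, map2;
    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), newCamMat,
                            imgSize, CV_16SC2, map1, map2);

    Mat undistorted;
    remap(distorted, undistorted, map1, map2, INTER_LINEAR);
    return undistorted;
}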
