Removing ROI from image using JavaCV - opencv

I am learning JavaCV and want to extract parts of images dynamically, based on color.
To identify the region I need to extract, I outline it with a color. Is there any way I can extract the ROI based on that color outline? Any help is appreciated.
Here is the Sample Image

It is quite simple. Since your figure has 4 corners, you can follow these steps:
1. Identify the orientation of the image and store the corner points in a MatOfPoint2f in a specific order (clockwise or anti-clockwise). For this you can use Math.atan2(p1(y) - centerpoint(y), p1(x) - centerpoint(x)) and then sort the points according to the result of that expression; find the center point by averaging all the x-coords and y-coords, or by any method you prefer.
2. Create a MatOfPoint2f containing the corner coordinates of the result size you want the cropped image in.
3. Use Imgproc.getPerspectiveTransform() to compute the transform between the two point sets.
4. Finally, use Imgproc.warpPerspective() to obtain the desired output (a sketch of steps 2-4 is given below).
And for creating the border of the ROI, the best way to go is to threshold the image using a specific color range, so as to extract only those parts of the spectrum which are required.
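Here is a minimal sketch of steps 2-4 with OpenCV's Java bindings, assuming the four corner points have already been found and ordered as in step 1 (the class and method names and the output size are placeholders; the native library must be loaded first, e.g. with System.loadLibrary(Core.NATIVE_LIBRARY_NAME)):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class PerspectiveCrop {
    // srcCorners must be ordered consistently (e.g. top-left, top-right,
    // bottom-right, bottom-left); width/height is the desired output size.
    static Mat cropQuad(Mat image, Point[] srcCorners, int width, int height) {
        MatOfPoint2f src = new MatOfPoint2f(srcCorners);
        MatOfPoint2f dst = new MatOfPoint2f(
                new Point(0, 0),
                new Point(width - 1, 0),
                new Point(width - 1, height - 1),
                new Point(0, height - 1));
        // Step 3: 3x3 matrix mapping the outlined quad onto the output rectangle.
        Mat transform = Imgproc.getPerspectiveTransform(src, dst);
        // Step 4: resample the pixels into the output rectangle.
        Mat cropped = new Mat();
        Imgproc.warpPerspective(image, cropped, transform, new Size(width, height));
        return cropped;
    }
}
```

Note that getPerspectiveTransform() only computes the 3x3 matrix; warpPerspective() is what actually resamples the pixels into the cropped output.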

Related

find rectangle coordinates in a given image

I'm trying to blindly detect signals in a spectrum.
One way that came to my mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all of the horizontal rectangles in an image? (The heights of the rectangles do not matter to me.)
An example image will be uploaded. (Note: I know that all the rectangles are horizontal.)
I would appreciate any other suggestions for this purpose.
E.g. I want the algorithm to give me 9 centers and 9 widths for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this is not the case with unaligned rectangles, since they are not clearly separated). The idea is to first compute the average color of each line and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance and apply a threshold. You can remove some artefacts using a median/blur filter beforehand.
Then you can simply scan the resulting 1D array of binary values to locate where each rectangle starts/stops. The center of each rectangle is ((x_start + x_end) / 2, (y_start + y_end) / 2).
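A minimal sketch of the column-profile part of this idea with OpenCV's Java bindings; the row profile is handled the same way. The file name, the fixed threshold and reading the waterfall directly as grayscale (instead of subtracting the blue background and computing luminance) are simplifying assumptions:

```java
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class RectangleProfiles {
    public static void main(String[] args) {
        // Assumed input: the waterfall rendered as a grayscale image with a dark background.
        Mat img = Imgcodecs.imread("waterfall.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Average intensity of each column.
        double[] colMean = new double[img.cols()];
        for (int x = 0; x < img.cols(); x++) {
            double sum = 0;
            for (int y = 0; y < img.rows(); y++) sum += img.get(y, x)[0];
            colMean[x] = sum / img.rows();
        }

        // Threshold the 1D profile into a binary "signal present" array.
        double threshold = 50.0; // assumed value; tune for your data
        boolean[] active = new boolean[colMean.length];
        for (int x = 0; x < colMean.length; x++) active[x] = colMean[x] > threshold;

        // Scan for runs of active columns: each run is one rectangle along x.
        int start = -1;
        for (int x = 0; x <= active.length; x++) {
            boolean on = x < active.length && active[x];
            if (on && start < 0) start = x;
            if (!on && start >= 0) {
                double center = (start + x - 1) / 2.0;
                int width = x - start;
                System.out.printf("center x = %.1f, width = %d%n", center, width);
                start = -1;
            }
        }
    }
}
```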

Detecting contours of predefined shape with OpenCV

I'm working on a project which locates the Machine Readable Zone on ID cards.
For this I need to do some pre-processing to extract the ID card from a scanned image; the cards are typically placed randomly on a white page. I'm able to locate the majority of the cards by using histogram equalization with CLAHE before a contour detection. But in some cases the border around the MRZ is totally invisible (white on white), as shown in the attached image.
I'd like to detect rectangles of a predefined shape, since I know the shape of the ID card will always be the same, but so far I wasn't able to find a way to do something like this with OpenCV.
Basically, what I need is to find two rectangles of a fixed ratio that best match the two cards on the scan.
I'm wondering if I need to try OpenCV matchers or if there is a simpler way to accomplish this kind of detection.
The solution to your problem is likely going to be matrix transformations. The concept is to pinpoint 4 coordinates on the card that can be easily detected using OpenCV, such as the rectangle colored in blue & cyan.
Store the coordinates of the card with the predefined shape in an array, with one corner of the card at (0, 0). Also store the coordinates of the blue & cyan rectangle in an array. With the two arrays you can find the perspective transform between them using the cv2.getPerspectiveTransform method.
Using the perspective transform found, you can detect the coordinates of the whole card every time you detect the coordinates of the blue & cyan rectangle.
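A minimal sketch of this idea, here written against OpenCV's Java bindings rather than cv2 (Imgproc.getPerspectiveTransform and Core.perspectiveTransform play the roles of the cv2 calls; all names are made up for illustration):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

public class CardFromLandmark {
    // templateLandmark: corners of the blue & cyan rectangle in the template frame,
    // templateCard: corners of the whole card in that same frame (card corner at 0,0),
    // detectedLandmark: corners of the blue & cyan rectangle found in the scan.
    static MatOfPoint2f locateCard(MatOfPoint2f templateLandmark,
                                   MatOfPoint2f templateCard,
                                   MatOfPoint2f detectedLandmark) {
        // Perspective transform mapping template coordinates onto scan coordinates.
        Mat transform = Imgproc.getPerspectiveTransform(templateLandmark, detectedLandmark);
        // Map the template card corners into the scan with the same transform.
        MatOfPoint2f cardInScan = new MatOfPoint2f();
        Core.perspectiveTransform(templateCard, cardInScan, transform);
        return cardInScan;
    }
}
```

The key point is that the transform is computed only from the easily detected landmark, but it can then be applied to any template coordinates, including the outline of the whole card.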

How can we extract region bound by a contour in OpenCV?

I am new to OpenCV and I was trying to extract the region bound by the largest contour. It may be a simple question, but I am not able to figure it out. I tried googling too, without any luck.
I would:
1. Use contourArea() to find the largest closed contour.
2. Use boundingRect() to get the bounds of that contour.
3. Draw the contour using drawContours() (with thickness set to -1 to fill the contour) and use this as a mask.
4. Use the mask to set all pixels in the original image that are not in the ROI to (0,0,0).
5. Use the bounding rectangle to extract just that area from the original image.
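A minimal sketch of those steps with OpenCV's Java bindings, assuming the image has already been turned into a binary Mat (for example by thresholding) from which the contours can be found:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class LargestContourRegion {
    // image: the original (color) image; binary: a thresholded version of it.
    static Mat extractLargestRegion(Mat image, Mat binary) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Pick the contour with the largest area.
        int largest = 0;
        double maxArea = -1;
        for (int i = 0; i < contours.size(); i++) {
            double area = Imgproc.contourArea(contours.get(i));
            if (area > maxArea) { maxArea = area; largest = i; }
        }

        // Filled mask of the largest contour (thickness -1 fills it).
        Mat mask = Mat.zeros(image.size(), CvType.CV_8UC1);
        Imgproc.drawContours(mask, contours, largest, new Scalar(255), -1);

        // Black out everything outside the ROI, then crop to the bounding rectangle.
        Mat masked = Mat.zeros(image.size(), image.type());
        image.copyTo(masked, mask);
        Rect bounds = Imgproc.boundingRect(contours.get(largest));
        return new Mat(masked, bounds);
    }
}
```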
Here it is well explained what you want to develop.
Basically you have to:
1. Apply a threshold to a copy of the original image;
2. Use findContours; the output is a vector<vector<Point>> that stores the contours;
3. Iterate over the contours to find the largest one.
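A minimal sketch of those three steps in the Java bindings; the fixed threshold value of 128 is an assumption, and List<MatOfPoint> is the Java counterpart of the C++ vector<vector<Point>>:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class FindLargestContour {
    static MatOfPoint largestContour(Mat gray) {
        // 1. Threshold a copy of the original image.
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 128, 255, Imgproc.THRESH_BINARY);

        // 2. findContours: the output list of point sets.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(),
                Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        // 3. Iterate over the contours to find the largest one.
        MatOfPoint largest = null;
        double maxArea = -1;
        for (MatOfPoint contour : contours) {
            double area = Imgproc.contourArea(contour);
            if (area > maxArea) { maxArea = area; largest = contour; }
        }
        return largest;
    }
}
```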

Get position and size of an object in a binary image

Does someone have an idea how to get the size and the position of an object? The object is detected in a binary image as white pixels:
For example: Detected / Original
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/segmentation/2_sal/0_12_12171.jpg
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/comparison/orig/0_12_12171.jpg
I know about the CvMoments method, but I don't know how to use it in this case.
By the way: how can I make my mask cleaner?
Simple algorithm:
Delete small areas of white pixels using morphological operations (erosion).
Use findContours to find all contours.
Use countNonZero or contourArea to find the area of each contour.
Cycle through all the points of each contour and find their mean. This will be the center of the contour.
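A minimal sketch of that algorithm with OpenCV's Java bindings (the 3x3 erosion kernel is an assumed choice):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class ObjectPositionSize {
    static void describeObjects(Mat binary) {
        // Remove small white specks with an erosion (3x3 kernel is an assumed choice).
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Mat cleaned = new Mat();
        Imgproc.erode(binary, cleaned, kernel);

        // Find all contours of the remaining white regions.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(cleaned, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);

        for (MatOfPoint contour : contours) {
            // Size of the object.
            double area = Imgproc.contourArea(contour);

            // Center: mean of all contour points.
            double sumX = 0, sumY = 0;
            Point[] pts = contour.toArray();
            for (Point p : pts) { sumX += p.x; sumY += p.y; }
            Point center = new Point(sumX / pts.length, sumY / pts.length);

            System.out.println("area = " + area + ", center = " + center);
        }
    }
}
```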
If the object is the tree, you should delete the small areas using morphology, as Astor wrote.
An alternative for finding the mass and the mass center is to use moments:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments
As the documentation says, m00 is the mass.
It also gives the formulas for the mass center.
This approach works only when your object is the only thing remaining in the image after segmentation.
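A minimal sketch of the moments approach with the Java bindings; the centroid formulas x = m10/m00 and y = m01/m00 are the standard ones from the linked documentation:

```java
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

public class MassCenter {
    static Point massCenter(Mat binaryMask) {
        // Moments of the binary image: m00 is the mass (the white area).
        Moments m = Imgproc.moments(binaryMask, true);
        // Mass center: x = m10 / m00, y = m01 / m00.
        return new Point(m.get_m10() / m.get_m00(), m.get_m01() / m.get_m00());
    }
}
```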

Stretch an image to fit in any quadrangle

The application PhotoFiltre has an option to stretch part of an image. You select a rectangular shape, and you can then grab and move the vertices somewhere else to make any quadrangle; the selected part of the image stretches along with them. Hopefully these images make my point a little clearer:
Is there a general algorithm which can handle this? I would like to obtain the same effect on an HTML5 canvas: given an image and the resulting corner points, I would like to be able to draw the stretched image in such a way that it fills the new quadrangle neatly.
A while ago I asked something similar, where the solution was to divide the image up into triangles and stretch each triangle so that its three points correspond to the three points in the original image. This technique turned out to be rather expensive, and I would like to know if there is a more general method of accomplishing this.
I would like to use this in a 3D renderer, but I would like to work with a (2D) quadrangle.
I don't know whether PhotoFiltre internally also uses triangles, or whether it uses another (cheaper) algorithm to stretch an image like this.
Does someone perhaps know if there is a cheaper or more general method/algorithm to stretch a rectangular image, so that it fills a quadrangle given four points?
The normal method is to start with the destination, pick an appropriate grid size and then, for each point in the new shape, calculate the corresponding point in the source image (possibly with interpolation, depending on the quality you need). A sketch of this backward mapping is given below.
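A minimal sketch of that backward mapping, here using OpenCV's Java bindings only to obtain the 3x3 perspective matrix; the nearest-neighbour sampling and the corner ordering are simplifying assumptions:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class BackwardMapping {
    // dstQuad: the four target corners in output-image coordinates, ordered
    // top-left, top-right, bottom-right, bottom-left (assumed ordering).
    static Mat stretchIntoQuad(Mat src, MatOfPoint2f dstQuad, int outWidth, int outHeight) {
        MatOfPoint2f srcRect = new MatOfPoint2f(
                new Point(0, 0),
                new Point(src.cols() - 1, 0),
                new Point(src.cols() - 1, src.rows() - 1),
                new Point(0, src.rows() - 1));
        // 3x3 matrix that maps output (destination) coordinates back to source coordinates.
        Mat h = Imgproc.getPerspectiveTransform(dstQuad, srcRect);
        double[] m = new double[9];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                m[r * 3 + c] = h.get(r, c)[0];

        Mat out = Mat.zeros(outHeight, outWidth, src.type());
        for (int y = 0; y < outHeight; y++) {
            for (int x = 0; x < outWidth; x++) {
                // Apply the homography; (sx, sy) is the corresponding source point.
                double w  = m[6] * x + m[7] * y + m[8];
                double sx = (m[0] * x + m[1] * y + m[2]) / w;
                double sy = (m[3] * x + m[4] * y + m[5]) / w;
                if (sx >= 0 && sx < src.cols() && sy >= 0 && sy < src.rows()) {
                    out.put(y, x, src.get((int) sy, (int) sx)); // nearest-neighbour sample
                }
            }
        }
        return out;
    }
}
```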
Affine transform.
Given four points for the "stretched" figure and four points for the figure it should match (e.g. a rectangle), an affine transform provides the spatial mapping you need. For each point (x1,y1) in the original image there is a corresponding point (x2,y2) in the second, "stretched" image.
For each integer-valued pixel (x2, y2) in the stretched image, use the affine transform to find the corresponding real-valued point (x1, y1) in the original image and apply its color to (x2,y2).
http://demonstrations.wolfram.com/AffineTransform/
You'll find sample code for Java and other languages online. .NET has the Matrix class.
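One caveat: mapping a rectangle onto an arbitrary quadrangle in general requires a perspective (projective) transform, since a purely affine map can only produce parallelograms. A minimal sketch with OpenCV's Java bindings (the corner ordering and output size are assumptions); warpPerspective performs internally the same per-pixel backward mapping as the earlier sketch:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class StretchToQuad {
    // Warps the whole source image so that its corners land on the four given points.
    static Mat stretch(Mat src, Point topLeft, Point topRight,
                       Point bottomRight, Point bottomLeft, Size outputSize) {
        MatOfPoint2f srcCorners = new MatOfPoint2f(
                new Point(0, 0),
                new Point(src.cols() - 1, 0),
                new Point(src.cols() - 1, src.rows() - 1),
                new Point(0, src.rows() - 1));
        MatOfPoint2f dstCorners = new MatOfPoint2f(topLeft, topRight, bottomRight, bottomLeft);
        Mat transform = Imgproc.getPerspectiveTransform(srcCorners, dstCorners);
        Mat out = new Mat();
        Imgproc.warpPerspective(src, out, transform, outputSize);
        return out;
    }
}
```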
