I have used convex hull and convexity defects and found the points in the hand as shown below.
With the above point information available, how can I crop the region marked in red (knuckle), as shown in the image below?
My intention is to detect the Knuckles in the hand.
Note: the green region is drawn using drawContours. Is it possible to use this region to crop the red-marked area (knuckle)? How can I crop these regions?
Update (26/2/2014):
I have found the contour points shown below. With this information in hand, is it possible to find the knuckle region? Is there any way to find it using these points?
Since you already know the position of the red region, all you want is to crop it?
It's very easy: you just need to set a ROI (region of interest) and copy this region to another image, like this (sketched here in Python/OpenCV; varRedRectangle is assumed to hold the known (x, y, w, h) of the red region):
x, y, w, h = varRedRectangle        # the known red rectangle
img2 = img1[y:y+h, x:x+w].copy()    # copy just that ROI into a new image
# (no ROI reset needed: slicing leaves img1 untouched)
If your question is how to detect the red section, then, as with any image-recognition task, you will need to put in a lot of work: there are tons of ways to do it, and nobody here will find them for you.
Hope it helps!
If your idea is to detect those red areas, you can use the following simple idea (a sketch follows the steps).
Get the edge image and remove the edges outside the green boundary.
Apply a horizontal histogram (row-wise projection) to separate the strips.
In each strip, take a vertical histogram (column-wise projection) and locate the bins with values within a neighbourhood of the peak (let's call these peak bins).
The longest contiguous sequence of peak bins should give the answer.
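A rough NumPy sketch of those steps, assuming edges is already the binary edge image masked to the green boundary, and using a made-up tolerance peak_tol for the "neighbourhood of the peak":

import numpy as np

def find_knuckle_runs(edges, peak_tol=0.8):
    # Horizontal histogram: rows containing edge pixels form the strips.
    edge_rows = np.where(edges.sum(axis=1) > 0)[0]
    # Split contiguous row ranges into separate strips.
    strips = np.split(edge_rows, np.where(np.diff(edge_rows) > 1)[0] + 1)
    results = []
    for rows in strips:
        if len(rows) == 0:
            continue
        col_hist = edges[rows[0]:rows[-1] + 1].sum(axis=0)  # vertical histogram
        peak_bins = col_hist >= peak_tol * col_hist.max()   # bins near the peak
        # Longest contiguous run of peak bins = candidate knuckle span.
        best, start = (0, 0), None
        for i, p in enumerate(np.append(peak_bins, False)):
            if p and start is None:
                start = i
            elif not p and start is not None:
                if i - start > best[1] - best[0]:
                    best = (start, i)
                start = None
        results.append((rows[0], rows[-1], best[0], best[1]))
    return results  # (row_start, row_end, col_start, col_end) per strip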
I'm trying to blindly detect signals in a spectrum.
One way that came to my mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all the horizontal rectangles in an image? (The heights of the rectangles don't matter to me.)
An example image will be uploaded (note: I know that all rectangles are horizontal).
I would appreciate any other suggestion for this purpose.
E.g., I want the algorithm to give me 9 centers and 9 widths for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this would not be the case with unaligned rectangles, since they would not be clearly separated). The idea is first to compute the average color of each line and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance, and apply a threshold. You can remove some artefacts using a median/blur filter beforehand.
Then you can just scan the resulting 1D array of binary values to locate where each rectangle starts/stops. The center of each rectangle is ((x_start+x_end)/2, (y_start+y_end)/2).
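A rough NumPy/OpenCV sketch of the column-wise half of this idea (the filename, the smoothing window, and the 0.3 threshold factor are placeholders; the row-wise pass is symmetric):

import cv2
import numpy as np

img = cv2.imread("waterfall.png")            # hypothetical input image
col_mean = img.mean(axis=0)                  # average color of each column, shape (W, 3)
bg = np.median(col_mean, axis=0)             # estimate of the background (blue) color
lum = np.abs(col_mean - bg).sum(axis=1)      # "luminance" of the background-subtracted profile
lum = np.convolve(lum, np.ones(5) / 5, mode="same")  # small blur to remove artefacts
mask = (lum > 0.3 * lum.max()).astype(int)   # threshold into a binary 1D array

# Scan the 1D array for the start/stop of each rectangle.
edges = np.diff(np.concatenate(([0], mask, [0])))
starts, stops = np.where(edges == 1)[0], np.where(edges == -1)[0]
for x_start, x_end in zip(starts, stops):
    print("center:", (x_start + x_end) / 2, "width:", x_end - x_start)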
I'm working on a project which locates the Machine Readable Zone on ID cards.
For this I need to do some preprocessing to extract the ID cards from a scanned image; they are typically placed at random on a white page. I'm able to locate the majority of the cards by using histogram equalization with CLAHE before contour detection. But in some cases the border around the MRZ is totally invisible (white on white), as shown in the attached image.
I'd like to detect a rectangle of a predefined shape, as I know the shape of the ID card will always be the same, but so far I haven't been able to find a way to do something like this with OpenCV.
Basically what I need is to find two rectangles of a fixed aspect ratio that best match the two cards on the scan.
I'm wondering if I need to try OpenCV matchers or if there is a simpler way to accomplish this kind of detection.
The solution to your problem is likely a matrix transformation. The concept is to pinpoint 4 coordinates on the card that can be easily detected using OpenCV, such as the rectangle colored in blue & cyan.
Have the coordinates of the card with the predefined shape stored in an array, with a corner of the card at (0, 0). Also store the coordinates of the blue & cyan rectangle in an array. With the two arrays you can find the perspective transform between them using the cv2.getPerspectiveTransform method.
Using the perspective transform found, you can compute the coordinates of the whole card every time you detect the coordinates of the blue & cyan rectangle.
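A minimal sketch of that idea; every coordinate below is a made-up placeholder for the template card and for one detection:

import cv2
import numpy as np

# Corners of the blue & cyan rectangle on the template card, with one
# corner of the card at (0, 0) (units: pixels, assumed).
template_marker = np.float32([[50, 40], [250, 40], [250, 90], [50, 90]])
# The same marker's corners as detected in the scanned image.
detected_marker = np.float32([[412, 310], [598, 325], [595, 372], [409, 356]])

# Map template coordinates onto scan coordinates.
M = cv2.getPerspectiveTransform(template_marker, detected_marker)

# Corners of the full card in template coordinates (known fixed shape).
card = np.float32([[0, 0], [300, 0], [300, 190], [0, 190]]).reshape(-1, 1, 2)
# Project them into the scan to locate the whole card.
card_in_scan = cv2.perspectiveTransform(card, M)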
I am trying to find a reliable method to calculate the corner points of a container. The idea is to calculate the container's center point from these corner points for the localization of a robot: the calculated center point will be the robot's destination when it picks up the container. I am looking for any suggestions on how to calculate the corner points, or for any way to calculate the center point directly. Up to this point the PCL library (C/C++) has been used for processing the 3D data.
The image below is the screenshot of the container.
Thanks in advance.
(Image: point cloud after applying a passthrough filter)
I did the following things:
I binarized the image (black pixels = 0, green pixels = 1),
inverted the image (black pixels = 1, green pixels = 0),
eroded the image with a 3x3 kernel N times and dilated it with the same kernel M times.
Left: N=2, M=1; Right: N=6, M=6
After that:
I computed contours of all non-zero areas and
removed the contour that surrounded the entire image.
These are the contours that remained:
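For reference, a rough OpenCV/Python version of the whole pipeline might look like this (the filename, the BGR bounds for "green", and N = M = 6 are assumptions):

import cv2
import numpy as np

img = cv2.imread("container.png")                 # hypothetical input
# Binarize: green pixels -> 255, everything else -> 0 (BGR bounds are a guess).
green = cv2.inRange(img, (0, 100, 0), (120, 255, 120))
inv = cv2.bitwise_not(green)                      # invert the binary image
kernel = np.ones((3, 3), np.uint8)
inv = cv2.erode(inv, kernel, iterations=6)        # N = 6
inv = cv2.dilate(inv, kernel, iterations=6)       # M = 6
# Contours of all non-zero areas (OpenCV 4.x return signature).
contours, _ = cv2.findContours(inv, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# Drop the contour that surrounds the entire image.
h, w = inv.shape
contours = [c for c in contours if cv2.contourArea(c) < 0.95 * w * h]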
I do not know what a "typical" input image looks like in your case. Since I only have access to one sample image, I would rather not speculate about a "general solution" that would suit you. But to solve this particular case, you could analyze every contour in the following way (a sketch follows the list):
compute the rotated rectangle that fits best around your contour (you need something similar to minAreaRect from OpenCV)
compute the areas of the rectangle and of the contour interior
if the difference between contour area and the area of the rotated bounding rectangle is small, the contour has approximately rectangular shape
find the contour that is both rectangular and satisfies some other condition (for example, the typical area of the container). Assume that this one belongs to the container and compute its center.
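A hedged sketch of that per-contour test in OpenCV/Python (the 0.85 fill ratio and the area range are invented thresholds; the input filename is a placeholder):

import cv2

mask = cv2.imread("binarized.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary input
contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    rect = cv2.minAreaRect(c)            # best-fitting rotated rectangle
    rect_area = rect[1][0] * rect[1][1]
    cnt_area = cv2.contourArea(c)
    if rect_area == 0:
        continue
    # Approximately rectangular if the contour fills most of its rotated box,
    # and plausibly the container if its area is in an expected range (assumed).
    if cnt_area / rect_area > 0.85 and 5000 < cnt_area < 50000:
        print("container center:", rect[0])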
I am not claiming that this is a solution that will work well in real-world scenarios. It is also not fast. You should view it as a "sketch" that shows how to extract some useful information.
I assume the wheels keep the cart at a known offset from the floor, and that you can identify the floor. Filter out all points which are too close to the floor (this will remove the wheels and everything but the cart, which will limit the data and simplify later steps).
If you can isolate the cart, you could take a simple average point (centroid); alternatively, if that is not precise enough, you could try finding the bounding box of the isolated cart (min/max in the primary directions) and then take the centroid of that bounding box (this should be more accurate, but will still need a slight vertical offset due to the top handles).
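A rough NumPy sketch of both variants (the file name and the wheel clearance are assumptions):

import numpy as np

points = np.load("cart_cloud.npy")           # hypothetical (N, 3) array of XYZ points
floor_offset = 0.15                          # assumed cart clearance above the floor (m)
cart = points[points[:, 2] > floor_offset]   # drop floor and wheel points

centroid = cart.mean(axis=0)                 # simple average point
# Alternative: centroid of the axis-aligned bounding box (min/max per axis).
bbox_center = (cart.min(axis=0) + cart.max(axis=0)) / 2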
If you cannot isolate the cart, or the other methods are not working well, you could try using PCL sample consensus, specifically SACMODEL_LINE. This will be an involved strategy but will give very solid results: basically, run through and find each line, then subtract its members from the cloud so as to find the next best line. After you have your 4 primary cart lines, use their parameters to find your centroid. (This would also be robust against random items being in or on the cart, as well as carts of various sizes, assuming they always have linear, perpendicular walls.)
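This is not PCL's actual API, but a rough NumPy sketch of the find-a-line/subtract-inliers loop on a top-down (x, y) projection of the cart points (the iteration count and tolerance are guesses):

import numpy as np

def ransac_line(pts, iters=200, tol=0.01):
    # Return an inlier mask for the best line found by random sampling.
    rng = np.random.default_rng(0)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        a, b = pts[rng.choice(len(pts), 2, replace=False)]
        d = b - a
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # unit normal
        inliers = np.abs((pts - a) @ n) < tol    # point-to-line distance test
        if inliers.sum() > best.sum():
            best = inliers
    return best

pts = np.load("cart_cloud.npy")[:, :2]           # hypothetical cloud, projected to 2D
wall_midpoints = []
for _ in range(4):                               # the 4 primary cart lines
    inl = ransac_line(pts)
    wall_midpoints.append(pts[inl].mean(axis=0))
    pts = pts[~inl]                              # subtract members, find the next line
centroid_xy = np.mean(wall_midpoints, axis=0)    # cart centroid from the walls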
What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look divided in the middle. Is it to find the center of one image so that the other is overlapped just halfway? I have looked into the documentation of OpenCV but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities: in the middle of the image, where the distance is greatest, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is the stable width of a stroke. The distance transform run on segmented text can confirm this. The corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape to rotate it about, but you can also rotate it about any other point. The difference will be just a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or by fitting an ellipse (as a rotated rectangle):
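// fit an ellipse to the 2D point set; the angle of the returned
// rotated rectangle gives the dominant orientation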
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation that works on a single binary image and fundamentally measures, from every empty point (zero pixel), the distance to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point), as well as a labelled connected-components image (a Voronoi diagram). There is an example of it in operation here.
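A small, self-contained illustration in Python/OpenCV (the synthetic rectangle is purely for demonstration):

import cv2
import numpy as np

# Synthetic binary image: a rectangular "boundary" contour on a black background.
img = np.zeros((200, 200), np.uint8)
cv2.rectangle(img, (40, 40), (160, 160), 255, 2)

# OpenCV measures, for each non-zero pixel, the distance to the nearest zero
# pixel, so invert first: boundary -> 0, empty space -> 255.
inv = cv2.bitwise_not(img)
dist = cv2.distanceTransform(inv, cv2.DIST_L2, 5)   # Euclidean, 5x5 mask

# The variant with labels also returns a Voronoi-style label image.
dist2, labels = cv2.distanceTransformWithLabels(inv, cv2.DIST_L2, 5,
                                                labelType=cv2.DIST_LABEL_PIXEL)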
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes, Iterative Closest Point, or RANSAC.
Please can someone explain how to identify the areas shown in red and blue colors in the following image? I tried to use the cvFindContours() method but it didn't give the expected result.
Input image
Expected result
I'd like to know whether there are any other methods to identify or calculate the area of this kind of contour. Please be kind enough to share a simple code example of this.
The function floodFill also returns the filled area as its return value. So one thing you can do is raster-scan each pixel: each time you reach an untouched pixel, colour its region with some colour (black) and store the area of that region along with the pixel coordinates; continue until the whole image is covered.
In the end you will have a set of areas, with the coordinates of one pixel in each region.
Should you need to recover a specific region, you can use floodFill again, colouring that region with a specific colour.
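A minimal Python/OpenCV sketch of that scan (the filename is a placeholder, and it assumes the regions of interest are non-black):

import cv2
import numpy as np

img = cv2.imread("contours.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
regions = []                          # (area, one pixel coordinate) per region
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        if img[y, x] != 0:            # an untouched (not yet blackened) pixel
            # floodFill colours the region black and returns its area in pixels.
            area, _, _, _ = cv2.floodFill(img, mask, (x, y), 0)
            regions.append((area, (x, y)))

# To recover a specific region later, flood-fill from its stored pixel again:
# cv2.floodFill(img, None, regions[0][1], 255)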