I have been working on a door detector in an uncontrolled environment. I used Mask R-CNN to get the mask and bounding boxes. Though the class accuracy is very good, the mask is not very accurate. Also, the expected result is a bounding quadrilateral (the box connecting the exact 4 corners of the door), but the bounding box returned is rectangular. I tried taking the mask, finding the convex hull, and then simplifying the hull with cv2.approxPolyDP to get the quadrilateral. In some cases this gives great output where the mask is accurate, but it doesn't work in all cases, specifically when there is occlusion or the mask obtained is not very uniform around the edges.
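For reference, here is roughly what my hull + approxPolyDP step looks like (a minimal sketch, assuming OpenCV 4.x and a single-channel uint8 mask; the helper name and the epsilon sweep are just illustrative):

```python
import cv2
import numpy as np

def mask_to_quad(mask, min_eps=0.02, max_eps=0.10):
    """Approximate a binary mask (uint8, 0/255) with a 4-corner polygon.
    Returns a (4, 2) array of corners, or None if no quadrilateral is found."""
    # OpenCV 4.x signature: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    cnt = max(contours, key=cv2.contourArea)      # keep the largest blob (the door)
    hull = cv2.convexHull(cnt)
    peri = cv2.arcLength(hull, True)
    # sweep epsilon until approxPolyDP collapses the hull to exactly 4 corners
    for frac in np.linspace(min_eps, max_eps, 9):
        approx = cv2.approxPolyDP(hull, frac * peri, True)
        if len(approx) == 4:
            return approx.reshape(4, 2)
    return None
```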
Using an oriented bounding box detector also gives a rotated rectangle, not a quadrilateral. I would like some suggestions on how I can achieve the required result.
TIA
Related
I'm working on a project which locates the Machine Readable Zone on ID cards.
For this I need to do some pre-processing to extract the ID card from a scanned image; the cards are typically placed at random positions on a white page. I'm able to locate the majority of the cards by using histogram equalization with CLAHE before a contour detection. But in some cases the border around the MRZ is totally invisible (white on white), as shown in the attached image.
I'd like to detect a rectangle of a predefined shape, as I know the shape of the ID card will always be the same, but so far I wasn't able to find a way to do something like this with OpenCV.
Basically what I need is to find two rectangles of a fixed ratio that best match the two cards on the scan.
I'm wondering if I need to try OpenCV matchers or if there is a simpler way to accomplish this kind of detection.
The solution to your problem is likely going to be matrix transformations. The concept is to pinpoint 4 coordinates on the card that can be easily detected using OpenCV, such as the rectangle colored in blue & cyan.
Store the coordinates of the card with its predefined shape in an array, with one corner of the card at (0, 0). Also store the coordinates of the blue & cyan rectangle in an array. With the two arrays you can find the perspective transform between them using the cv2.getPerspectiveTransform method.
Using the perspective transform found, you can detect the coordinates of the whole card every time you detect the coordinates of the blue & cyan rectangle.
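A minimal sketch of that idea, assuming you already have the four corners of the blue & cyan rectangle from your detection step (all coordinates below are placeholders; substitute your card's real reference layout):

```python
import cv2
import numpy as np

# Corners of the detected blue & cyan rectangle in the scan (placeholder values).
detected = np.float32([[412, 305], [655, 310], [650, 480], [408, 475]])

# The same four corners in the card's own reference frame, with one card corner
# at (0, 0). These offsets are illustrative; use your card's real layout.
reference = np.float32([[120, 60], [300, 60], [300, 180], [120, 180]])

# Map reference-card coordinates into the scan.
M = cv2.getPerspectiveTransform(reference, detected)

# Project the full card outline (known fixed size, e.g. 856 x 540 units) into the scan.
card_outline = np.float32([[[0, 0]], [[856, 0]], [[856, 540]], [[0, 540]]])
card_in_scan = cv2.perspectiveTransform(card_outline, M)
```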
I have two images, one is the result of applying an affine transform to the other.
I can register them using homography by extracting the points using the ORB_create function in OpenCV.
However, I want to calculate the Affine matrix needed for this transformation.
Is there any way of doing it simply by having the two images?
Detect a rotated rectangle and use its corners to get your transformation matrix.
Use getPerspectiveTransform or getAffineTransform.
Edit: regarding rotated rectangle detection :
Please check this OpenCV tutorial on how to find contours and detect rotated rectangles: Creating Bounding rotated boxes and ellipses for contours
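A rough sketch of that pipeline, assuming two grayscale images img1_gray and img2_gray and that the corners returned by boxPoints end up in corresponding order between the two images (in practice you may need to sort or match them consistently):

```python
import cv2
import numpy as np

def corners_of_largest_rotated_rect(gray):
    """Threshold, find the largest contour, and return the 4 corners of its
    rotated bounding box as a float32 (4, 2) array."""
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)
    return np.float32(cv2.boxPoints(cv2.minAreaRect(cnt)))

src_pts = corners_of_largest_rotated_rect(img1_gray)
dst_pts = corners_of_largest_rotated_rect(img2_gray)

# Affine needs exactly 3 point pairs; perspective uses all 4.
A = cv2.getAffineTransform(src_pts[:3], dst_pts[:3])
H = cv2.getPerspectiveTransform(src_pts, dst_pts)
```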
I need to find the size or coordinates of a rectangle that is displayed as a quadrilateral in a 3D image. The quadrilateral is on a plane that lines up with 3d world vanishing points. To clarify, the quadrilateral IS a rectangle in the 3D world, and that's the rectangle I want the size of.
I do not need to get all the textures and make a new image. I also do not know the coordinates of the target rectangle as required by the homography (perspective transformation) solutions I've seen, because I don't know the aspect ratio it's supposed to have.
I've read through this thread: proportions of a perspective-deformed rectangle, and the author seems to have found an algorithm that works. However, I've read other research papers that claim to calculate a homography, yet they don't say how they did it. Also, it seems like such a basic function that there should be something for it in the existing OpenCV library.
Thanks.
After some color detection, binary thresholding, and using cvFindContours() and drawing the contours and detected blue rectangle on the image I have:
My problem is to do some simple collision avoidance (the blue rectangle in the center cannot hit the red "walls"). It would be helpful for my purposes to have the red wall contours approximated with rectangles. However, using a simple cvBoundingRect and drawing red rectangles around the white contours, I get:
The edges are a little cropped off, but you may get the idea of what to expect when using a bounding rectangle for the contours: the entire contour is used to compute the bounding rectangle, hence the large overlapping rectangles. What I would like is for the wall contours to be divided into multiple bounding rectangles, so that the left wall is approximated as one rectangle, the right wall as another, the forward wall as another, etc., as in my illustrative rendition below:
Any help in doing so would be greatly appreciated.
Detecting lines (typically with Hough or RANSAC), together with the other information you have about the problem, should be enough, maybe even overkill. For instance, starting with the image below at left, we get the image below at right.
But if you have the above image at left (which you should already have), the problem is already solved. Just draw both the internal and external contours of the walls and you are set.
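A minimal sketch of the line-detection step, assuming walls_binary is your thresholded single-channel wall image; the Canny and Hough parameters below are placeholders you would need to tune:

```python
import cv2
import numpy as np

# Edge map of the wall image (walls_binary is assumed to be a uint8 binary image).
edges = cv2.Canny(walls_binary, 50, 150)

# Probabilistic Hough transform: returns individual line segments along the wall edges.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=10)

# Draw the detected segments for inspection.
vis = cv2.cvtColor(walls_binary, cv2.COLOR_GRAY2BGR)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)
```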
I found contours on two images of the same object and I want to find the displacement and rotation of this object. I've tried using the rotated bounding boxes of these contours and then their angles and center points, but the rotation of a bounding box doesn't describe the contour rotation correctly, because it is the same for angles a+0, a+90, a+180, etc. degrees.
Is there any other good way to find the rotation and displacement of contours? Maybe some use of the convex hull or convexity defects? I've read about matching contours in Learning OpenCV, but it hasn't helped. Could someone give an example?
//edit:
Maybe there is some way to use something similar to Freeman chain codes for this? But I can't figure out the algorithm at the moment. Making a chain of angles between sequential points and then checking for a sequence match isn't working well...
If the object has convexity defects, you could choose one defect and make a vector from the centroid of the first contour to the centroid of this defect.
Then you could check the defects in the second contour and match the one that you used before. Again, make a vector from the centroid of the contour to the centroid of the matched defect.
From these you get two segments (vectors) from which you can obtain a displacement and a rotation.
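A rough sketch of that idea, assuming cnt1 and cnt2 are the two contours, that both actually have convexity defects, and that the deepest defect is the one you matched (in practice you would match defects more carefully):

```python
import cv2
import numpy as np

def contour_centroid(cnt):
    """Centroid of a contour from its image moments."""
    m = cv2.moments(cnt)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def deepest_defect_point(cnt):
    """Return the farthest point of the deepest convexity defect of a contour.
    Note: cv2.convexityDefects returns None for a convex contour."""
    hull_idx = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull_idx)
    far_idx = defects[defects[:, 0, 3].argmax(), 0, 2]   # index of the deepest defect point
    return cnt[far_idx, 0].astype(float)

c1, c2 = contour_centroid(cnt1), contour_centroid(cnt2)
v1 = deepest_defect_point(cnt1) - c1
v2 = deepest_defect_point(cnt2) - c2

displacement = c2 - c1                                    # translation of the object
rotation = np.degrees(np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]))  # signed angle
```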