Motion-blur-robust edge detection - OpenCV

I need to detect squares in an image (for AR marker detection). The squares are rotated in 3D, so the projection I'm seeing isn't really a square but a four-sided polygon. My problem is that the polygons I need to detect are moving, so they are subject to motion blur. The squares are black with a white margin, so there is high contrast.
My approach for detection was to detect edges (Canny, for example), find contours, approximate polygons, and filter them by the number of sides and maybe some other geometric constraints.
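Roughly, the pipeline looks like this (a minimal Python sketch; the Canny thresholds, epsilon factor, and area cut-off are placeholder values I would still have to tune):

    import cv2

    img = cv2.imread("frame.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # OpenCV 4.x returns (contours, hierarchy); 3.x returns three values.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    quads = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        # Keep convex four-sided polygons above a minimum area.
        if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 100:
            quads.append(approx)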
What approach would you recommend for detecting edges on an image with motion blur?
Thanks

I would use Harris corner detection to detect the corner points and then use a Hough transform to detect the lines. Using the positions of the corners and lines, it is possible to recover the polygons.
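A minimal sketch of that combination in OpenCV Python (the Harris parameters, the corner threshold, and the Hough threshold are placeholder values to tune):

    import cv2
    import numpy as np

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Harris response; threshold against the maximum to pick corner candidates.
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(harris > 0.01 * harris.max())  # (row, col) pairs

    # Hough transform on an edge map to find the lines between the corners.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)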


Reverse image distortion

Say I have the following image, which shows a circle and a square captured by a camera positioned at 30° from the ground.
This is the scene from an orthogonal POV:
This is the camera:
Is it possible to reverse the distortion in order to obtain the second image (orthogonal POV) from the first one (distorted image) without knowing the camera angle?
Regards
To invert a perspective transformation, also known as a homography, you need to identify 8 parameters. For this you need to know the (x, y) coordinates of four points in both the original and the undistorted image.
A possibility is to use the four corners of the square, but this won't be very accurate. Alternatively, use two corners and the tangency points of the lines from these corners to the circle.
If you don't know the relative sizes of the square and the circle and their distance, you are a little stuck.
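A minimal sketch of the rectification in OpenCV Python, assuming the four correspondences have already been identified (the file name and all coordinates below are placeholders):

    import cv2
    import numpy as np

    # The same four physical points: where they are in the distorted image,
    # and where they should land in the rectified (orthogonal) view.
    src = np.float32([[120, 310], [420, 300], [390, 120], [150, 130]])  # distorted image
    dst = np.float32([[100, 400], [400, 400], [400, 100], [100, 100]])  # orthogonal view

    H = cv2.getPerspectiveTransform(src, dst)  # exactly 4 point pairs, 8 parameters
    img = cv2.imread("distorted.png")
    rectified = cv2.warpPerspective(img, H, (500, 500))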

Easiest/most robust shape for OpenCV to detect, for Intersection over Union of two objects

I am trying to measure the precision of my marker tracking algorithm by post-processing a video.
My algorithm is: find a printed planar marker in a video stream and place a virtual marker at that position. I am working with AR.
Here are two frames of such a video:
Virtual Marker on top of detected marker
Virtual Marker with offset to actual marker
I want to calculate the Intersection over Union / Jaccard index of the actual marker and the virtual marker. For the first picture it would give me ~98% and for the second ~1/5th %. This will give me the quality of my algorithm: how precisely and how well it works.
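For reference, once both marker regions are available as binary masks, the Jaccard index itself is straightforward (a minimal sketch):

    import numpy as np

    def jaccard(mask_a, mask_b):
        # Both masks are boolean arrays of the same shape, True inside a marker.
        intersection = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return intersection / union if union else 0.0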
I want to get the position and rotation of both markers in each frame with OpenCV and calculate the Jaccard index. As you can see, though, if I directly place a virtual marker on top of the paper marker, I will make it difficult for myself (with OpenCV) to detect them.
My idea is not to place the white marker on top of the actual marker, but to place an easily detectable "thing" with a specific color or shape at an offset to the marker, let's say 10 cm to the right. Then I subtract the offset. In the best-case scenario, the position and rotation of the actual marker and of the "thing" (with the offset subtracted) will then be the same.
But what should I use as the easily detectable "thing"? I don't have enough experience with OpenCV to know what (colored?) shape I should use. The augmentation can go in front of, behind, or to the left or right of the actual marker at any time during the video, and it should do two things:
Not hinder the detection of the actual marker, as currently shown in the pictures
Be easily detectable itself
Help would be much appreciated!
Assuming you have enough white background around the visual marker:
You could use colored circles, for example in red, green, blue and black.
Use OpenCV blob detection [1] to detect all blobs and filter for circular ones.
Look up the average color values of the detected blobs and filter for the colors of the circles.
Alternatively, you could filter the whole image for each color and do blob detection on the filtered images, but this is slower.
Find the centroid (≈ center point) of each blob using the moments of the blob contours; see [2], "Center of multiple blobs in an Image".
Now you have the four pixel positions of your circles. If you know the world coordinates of your light-projected circles, you can use solvePnP to get a pose from this.
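A minimal sketch of these steps using the color-filtering variant (the HSV range, world coordinates, and camera matrix are placeholder values, and the four centroids are assumed to be matched in order to the world points):

    import cv2
    import numpy as np

    img = cv2.imread("frame.png")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # e.g. a red-ish range

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # Four circle centers matched to known world coordinates (placeholders, metres):
    image_pts = np.array(centroids[:4], dtype=np.float64)
    world_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]])
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])  # placeholder intrinsics
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)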
Knowing the correct world coordinates is tricky in your case because you project the circles onto a surface with light. This involves some 3D geometry: you need to know the transformation from the camera coordinate system to the pattern projector coordinate system, and the projection parameters of your projector.
I guess you send the projected pattern as an image to the projector. You can then model the projector as a camera with a certain camera matrix (basically field of view and center point). Naturally you know the pixel coordinates of the projected circles, so you can compute rays in 3D space (in the projector coordinate system); as a starting point, see [3]. Intersecting [4] these rays with the correct surface plane (in the projector coordinate system) gives you the 3D coordinates of the projected circle pattern in the projector coordinate system. Transform these to the camera coordinate system using your known transformation, then use OpenCV's solvePnP to determine the pose of the projected light marker.
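The intersection itself is a few lines of vector math (a sketch with placeholder values; ray_dir would come from back-projecting a projector pixel):

    import numpy as np

    ray_origin = np.zeros(3)                    # projector optical center
    ray_dir = np.array([0.1, -0.05, 1.0])       # back-projected pixel direction (placeholder)
    plane_point = np.array([0.0, 0.0, 2.0])     # any point on the surface plane (placeholder)
    plane_normal = np.array([0.0, 0.0, -1.0])   # plane normal (placeholder)

    t = np.dot(plane_point - ray_origin, plane_normal) / np.dot(ray_dir, plane_normal)
    point_3d = ray_origin + t * ray_dir         # 3D position of the projected circle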
How to get the surface plane?
If your setup is static, you could run visual marker detection on all recorded images and use the mean or median of the marker poses as the surface plane. Not sure what this implies for your evaluation, though.
[1] https://www.learnopencv.com/blob-detection-using-opencv-python-c/
[2] https://www.learnopencv.com/find-center-of-blob-centroid-using-opencv-cpp-python/
[3] https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
[4] https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm

Rectangle detection only detecting large rectangles, not smaller ones? iOS, Swift

I am using Vision to detect rectangles, but it only seems to detect larger rectangles that are more square than rectangular. Is there a way to detect longer rectangles?
You have to lower the minimumAspectRatio of the VNDetectRectanglesRequest, which is 0.5 by default. The green rectangle seems to have a much lower aspect ratio.

Rectangular approximation of contours

After some color detection, binary thresholding, and using cvFindContours(), and drawing the contours and the detected blue rectangle on the image, I have:
My problem is to do some simple collision avoidance (the blue rectangle in the center cannot hit the red "walls"). It would be helpful for my purposes to have the red wall contours approximated with rectangles. However, using a simple cvBoundingRect and drawing red rectangles around the white contours, I get:
The edges are a little cropped off, but you can get the idea of what to expect when using a bounding rectangle for the contours: the entire contour is used for the approximation of the bounding rectangle, hence the large overlapping rectangles. What I would like is for the wall contours to be divided into multiple bounding rectangles, such that the left wall is approximated as one rectangle, the right wall as another, the forward wall as another, and so on, as in my illustrative rendition below:
Any help in doing so would be greatly appreciated.
Detecting lines (typically with Hough or RANSAC), together with some other information you have about the problem, should be enough, maybe even overkill. For instance, starting with the below image at left, we get the below image at right.
But if you have the above image at left (which you should have already), the problem is already solved: just draw both the internal and external contours of the walls and you are set.
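A minimal sketch of the Hough option in OpenCV Python (the file name, threshold, and gap parameters are placeholders to tune on the binary wall image):

    import cv2
    import numpy as np

    walls = cv2.imread("walls_binary.png", cv2.IMREAD_GRAYSCALE)
    segments = cv2.HoughLinesP(walls, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=10)

    # Each straight wall section comes back as one (x1, y1, x2, y2) segment,
    # which can then be grouped into per-wall rectangles.
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            cv2.line(walls, (x1, y1), (x2, y2), 128, 2)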

what is the relationship between image edges and gradient?

Can anybody help me interpret the following? "Edge points may be located by the maxima of the module of the gradient, and the direction of edge contour is orthogonal to the direction of the gradient."
Paul R has given you an answer, so I'll just add some images to help make the point.
In image processing, when we refer to a "gradient" we usually mean the change in brightness over a series of pixels. You can create gradient images using software such as GIMP or Photoshop.
Here's an example of a linear gradient from black (left) to white (right):
The gradient is "linear" meaning that the change in intensity is directly proportional to the distance between pixels. This particular gradient is smooth, and we wouldn't say there is an "edge" in this image.
If we plot the brightness of the gradient vs. X-position (left to right), we get a plot that looks like this:
Here's an example of an object on a background. The edges are a bit fuzzy, but this is common in images of real objects. The pixel brightness does not change from black to white from one pixel to the next: there is a gradient that includes shades of gray. This is not obvious since you typically have to zoom into a photo to see the fuzzy edge.
In image processing we can find those edges by looking at sharp transitions (sharp gradients) from one brightness to another. If we zoom into the upper left corner of that box, we can see that there is a transition from white to black over just a few pixels. This transition is a gradient, too. The difference is that the gradient is located between two regions of constant color: white on the left, black on the right.
The red arrow shows the direction of the gradient from background to foreground: pixels are light on the left, and as we move in the +x direction the pixels become darker. If we plot the brightness sampled along the arrow, we'll get something like the following plot, with red squares representing the brightness for a specific pixel. The change isn't linear, but instead will look like one side of a bell curve:
The blue line segment is a rough approximation of the slope of the curve at its steepest. The "true" edge point is the point at which the slope is steepest along the gradient corresponding to the edge of an object.
Gradient magnitude and direction can be calculated using horizontal and vertical Sobel filters. You can then calculate the direction of the gradient as:
gradientAngle = atan2(gradientY, gradientX)
(Using atan2 rather than the arctangent of the ratio preserves the quadrant of the gradient direction and avoids division by zero when gradientX is zero.)
The gradient will be steepest when it is perpendicular to the edge of the object.
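A minimal sketch of this computation in OpenCV Python (the file name is a placeholder):

    import cv2
    import numpy as np

    gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical derivative

    magnitude = np.sqrt(gx ** 2 + gy ** 2)  # large at sharp transitions
    angle = np.arctan2(gy, gx)              # gradient direction, in radians
    # The edge contour runs orthogonal to this direction: angle + pi / 2.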
If you look at some black and white images of real scenes, you can zoom in, look at individual pixel values, and develop a good sense of how these principles apply.
Object edges typically result in a step change in intensity. So if you take the derivative of intensity it will have a large (positive or negative) value at edges and a smaller value elsewhere. If you can identify the direction of steepest gradient then this will be at right angles to (orthogonal to) the object edge.
