OpenCV: How to detect a rhombus in an image?

I have an image of a plane that has undergone a perspective transform.
I need to detect the center of each white rhombus, or the rhombus itself.
Here are some examples:
As I understand it, the problem could be solved by simple template matching if we rectified the image, but I need to do this automatically.
Are there any functions in OpenCV suitable for this task? Any other ideas?

Here are two quick tests I just did without correcting the perspective issue.
Pure mathematical morphology:
1. Extract the red channel.
2. Big white top-hat, in order to detect all the bright areas but without the big bright reflection.
3. Small white top-hat, in order to detect only the thin lines between the rhombi.
4. Result of 2 minus result of 3. The lines between the rhombi are then thinner or even gone.
5. Opening to clean up the final result.
Here are two results: Image1 and Image2. The main issue is that the rhombi do not all have the same size (different magnification and perspective), which can be problematic for the mathematical morphology.
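A minimal OpenCV sketch of this pipeline, assuming the input file name and the structuring-element sizes (both would need tuning to the actual rhombus and reflection sizes):

    import cv2

    img = cv2.imread("plane.jpg")                       # hypothetical input image
    red = img[:, :, 2]                                  # step 1: red channel (OpenCV stores BGR)

    big = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (41, 41))
    small = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))

    tophat_big = cv2.morphologyEx(red, cv2.MORPH_TOPHAT, big)      # step 2: all bright areas, large reflection removed
    tophat_small = cv2.morphologyEx(red, cv2.MORPH_TOPHAT, small)  # step 3: only the thin bright lines

    diff = cv2.subtract(tophat_big, tophat_small)       # step 4: rhombi kept, separating lines suppressed

    clean = cv2.morphologyEx(diff, cv2.MORPH_OPEN,      # step 5: opening to clean the result
                             cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    cv2.imwrite("rhombi.png", clean)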
So here is another solution, using the Hough transform:
You start with the resulting image of step 3 from the previous algorithm.
You apply a Hough transform.
Here are the results: Hough1 and Hough2. Then you have to filter out the lines that do not touch a rhombus, and you can use my first algorithm for that. Even if the first algorithm does not detect every rhombus, it will find most of them, which is enough to identify the lines touching the rhombi. The line intersections will then be the centroids you are looking for.
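A sketch of this variant, assuming the probabilistic Hough transform (cv2.HoughLinesP) and placeholder thresholds; it starts from the small top-hat image of step 3 above:

    import cv2
    import numpy as np

    # tophat_small is the step-3 image from the morphology sketch above
    tophat_small = cv2.imread("tophat_small.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(tophat_small, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)    # parameters are guesses
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            print("grid line:", (x1, y1), "->", (x2, y2))
    # keep only the lines touching detected rhombi, then intersect them to get the centroids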

Related

Any way to get strongest edge local to a contour line using cv2 or scikit-image?

I am working on accurately segmenting objects from an image.
I have found contour lines by using a simple rectangular prism in HSV space as a color filter (followed by some morphological operations on the resulting mask to clear up noise). I found this approach to be better than applying Canny edge detection to the whole image, as that just picked up a lot of other edges I don't care about.
Is there a way to refine the contour line I have extracted so that it clips to the strongest local edge, kind of like Adobe Photoshop's smart cropping utility?
Here's an image of what I mean:
You can see a boundary between the sky blue and the gray. The dark blue is a drawn-on contour. I'd like to somehow clip this to the nearby edge. It also looks like there are other lines in the gray region, so I think the algorithm should do some sort of more global optimisation to ensure that the "clipping" action doesn't jump randomly between my boundary of interest and the nearby lines.
Here are some ideas to try:
Morphological snakes: https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_morphsnakes.html
Active contours: https://scikit-image.org/docs/dev/auto_examples/edges/plot_active_contours.html
Whatever livewire is doing under the hood: https://github.com/PyIFT/livewire-gui
Based on this comment, the last one is the most useful.
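As a rough sketch of the active-contour idea (the second link), assuming placeholder file names, HSV bounds, and snake weights: the contour from the color filter seeds the snake, which is then pulled onto the strongest nearby edge.

    import cv2
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    img = cv2.imread("scene.png")                               # hypothetical input
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (90, 40, 40), (130, 255, 255))      # placeholder HSV prism
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # the largest contour of the color mask becomes the initial snake
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    init = max(cnts, key=cv2.contourArea).squeeze()[:, ::-1].astype(float)  # (row, col), as recent scikit-image expects

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) / 255.0
    snake = active_contour(gaussian(gray, 3), init,
                           alpha=0.015, beta=10, gamma=0.001)   # weights need tuning
    # 'snake' is the refined (row, col) contour, clipped to the strongest nearby edge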

Detect semi-transparent rectangular overlays on images

I have images that contain a semi-transparent rectangular overlay, similar to the following images: Image1 Image2. My goal is to detect whether these rectangular boxes exist (their location doesn't matter). These boxes will always have edges parallel to the sides of the image.
Assumptions:
The transfer function describing how the transparent rectangles are drawn is not known
The sides of the rectangles will also be parallel to the image borders
Attempted Solution 1: Color Detection
So far, I've tried color detection via cv2.threshold as well as band-pass filters with cv2.inRange() in multiple color spaces (HSV, LUV, XYZ, etc.). The issue with color detection is that I also capture too much noise to effectively isolate just the pixels of the transparent area. I tried combining the masks using cv2.bitwise_and, but still can't tune the noise down to a negligible level. I also tried isolating only large groups of pixels using morphological transformations, but this still fails.
Attempted Solution 2: Edge Detection + Edge Validation
My second try at detecting the box involved applying cv2.bilateralFilter and then generating Hough lines via cv2.Canny and cv2.HoughLinesP. Although I detect a significant number of edges related to the transparent box, I also get many miscellaneous edges.
To filter out false edges, I take each line segment and check a few sample pixels on its left and right sides. By applying something similar to what I believe the transfer function is (cv2.addWeighted), I checked whether I could reproduce similar values. Unfortunately, this also doesn't work well enough to tell the difference between edges from the transparent box and "real" edges. Result From Edge Detection
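For reference, a condensed sketch of that attempt; the thresholds and the assumed overlay opacity are placeholders, and only axis-aligned segments are checked:

    import cv2
    import numpy as np

    img = cv2.imread("overlay.png")                             # hypothetical input
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    smooth = cv2.bilateralFilter(gray, 9, 75, 75)
    edges = cv2.Canny(smooth, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=5)

    alpha = 0.6                                                 # assumed overlay opacity
    candidates = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if x1 != x2 and y1 != y2:                           # keep axis-aligned segments only
                continue
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            if y1 == y2:                                        # horizontal edge: sample above and below
                a = gray[max(cy - 4, 0), cx]
                b = gray[min(cy + 4, gray.shape[0] - 1), cx]
            else:                                               # vertical edge: sample left and right
                a = gray[cy, max(cx - 4, 0)]
                b = gray[cy, min(cx + 4, gray.shape[1] - 1)]
            lo, hi = sorted((int(a), int(b)))
            # keep the edge if the darker side looks like the brighter side dimmed by alpha
            if hi > 0 and abs(lo / hi - alpha) < 0.1:
                candidates.append((x1, y1, x2, y2))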
Any thoughts on how I might detect these boxes would be highly appreciated!

Detect missing squares in color calibration chart opencv python

I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success by using thresholding and then extracting square contours. Sadly, though, colors that are too close to each other either mix together or do not get detected.
The code in its current form:
https://pastebin.com/embed_js/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where the lines are incomplete because the color differences are too small. I have tried to use dilation to mitigate these issues, and it does work to a degree, but not enough to detect all the squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
Hough lines; sadly, no lines were detected here.
Centroids of contours, but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
Corner detection; corners were found, but I was unsuccessful in finding a practical way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10 pixel sample near the center of each physical square's image will give you 100 color samples, which is plenty for any reasonable calibration procedure.
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
If you insist on using geometry, I'd do template matching in scale+angle space. It is reasonable to assume that the chart will be mostly facing the camera and only slightly rotated, so you only need to estimate the scale and a small rotation about the axis orthogonal to the chart.
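A sketch of the k-means idea, assuming the chart covers most of the frame and using the 24 patches of a standard Macbeth chart as the cluster count (the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("macbeth.jpg")                    # hypothetical input; chart roughly fills the frame
    samples = img.reshape(-1, 3).astype(np.float32)    # one BGR sample per pixel

    K = 24                                             # number of patches in a standard Macbeth chart
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1.0)
    _, labels, centers = cv2.kmeans(samples, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

    print(centers)                                     # estimated BGR color of each patch cluster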

Are there other methods to detect circles apart from HoughCircles

I am trying to detect circular road signs and I have some issues.
The HoughCircles function detects circles in a grayscale image; however, with the same parameters but the image binarized (the circle is still perfectly visible), it does not detect any circle. I do not know why it fails so often with a binarized image. Any ideas why I have this issue with binary images?
To try to correct that, I set the dp parameter to 2 and changed the threshold. In the binary image I now detect circles, but I also get a lot of false positives. I do not understand what the dp parameter is, or how to use it.
If there is no way to make it work, I would like to know if there is any other way of detecting circles in an image.
Hough generally works well with bad data - partial or obscured circles and noise.
But it is sensitive to the tuning parameters (max, min diameter, number of votes for a result).
Typically you could run Hough to find all possible circles and then examine each candidate circle, e.g. by checking the distance from the center to points on the circumference. Or you could look at the diameters of the circles found and then refine your diameter/vote bins, especially if this is a video stream and you expect the circles to be similar in future frames.
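A sketch of that workflow; the HoughCircles parameters and the edge-support threshold are guesses that would need tuning (dp is the inverse ratio of the accumulator resolution to the image resolution, so dp=2 halves it):

    import cv2
    import numpy as np

    gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)         # hypothetical input
    gray = cv2.medianBlur(gray, 5)

    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=2, minDist=40,
                               param1=100, param2=60, minRadius=10, maxRadius=80)

    edges = cv2.Canny(gray, 50, 150)
    if circles is not None:
        for cx, cy, r in np.round(circles[0]).astype(int):
            # check how many points on the circumference actually lie on an edge
            angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
            xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, edges.shape[1] - 1)
            ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, edges.shape[0] - 1)
            support = np.count_nonzero(edges[ys, xs]) / len(angles)
            if support > 0.5:
                print("likely circle:", (cx, cy, r), "edge support:", round(support, 2))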

Extract coordinates from image file

How can I get an array of coordinates of a (drawn) line in an image? The coordinates should be relative to the image borders. Input: *.img. Output: an array of coordinates (with a fixed step). Is there any third-party software to do this? For example, there is a high contrast difference: a white background and a black line, or red and green, etc.
Example:
Oh, you mean non-straight lines. You need to define a "line". Intuitively, you might mean a connected area of the image with a high aspect ratio between the length of its medial axis and the distance between the medial axis and the edges (i.e. relatively long and narrow, even if it winds around). Possible approach:
1. Threshold or select by color. Perhaps select by color based on a histogram of colors, or posterize as described here: Adobe Photoshop-style posterization and OpenCV, then call scipy.ndimage.measurements.label()
2. For each area above, skeletonize. Helpful tutorial: "Skeletonization using OpenCV-Python". However, you will likely need the distance to the edges as well, so use skimage.morphology.medial_axis(..., return_distance=True)
3. Do some kind of cleanup/filtering on the skeleton to remove short branches, etc. Thinking about your particular use, and assuming your lines don't loop around, you can just find the longest single path in the skeleton. This is where you can also decide whether a shape is a "line" or not, based on how long the longest path in its skeleton is relative to the distance to the edges. Not sure how best to do that in OpenCV, but "Analyze Skeleton" in Fiji/ImageJ will let you filter by branch length.
4. What is left is the most elongated medial axis of the original "line" shape. You can resample that to some step that you prefer, or fit it with a spline, etc.
Due to the nature of what you want to do, it is hard to come up with a sample code that will work on a range of images. This is likely to require some careful tuning. I recommend using a small set of images (corpus), running any version of your algo on them and checking the results manually until it is pretty good, then trying it on a large corpus.
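A rough sketch of the threshold + label + medial-axis part under those caveats; the file name, threshold, and the crude "is it a line?" test are placeholders, and the branch pruning and ordered resampling are left to a tool such as Fiji's Analyze Skeleton:

    import numpy as np
    from scipy import ndimage
    from skimage import color, io
    from skimage.morphology import medial_axis

    img = color.rgb2gray(io.imread("drawing.png"))     # hypothetical input: dark line on a light background
    mask = img < 0.5                                   # simple threshold; replace with color selection if needed

    labels, n = ndimage.label(mask)                    # one label per connected blob
    for i in range(1, n + 1):
        blob = labels == i
        skel, dist = medial_axis(blob, return_distance=True)
        if not skel.any():
            continue
        length = skel.sum()                            # skeleton length in pixels
        width = 2 * dist[skel].mean()                  # rough average stroke width
        if length > 5 * width:                         # crude "line" test: much longer than wide
            ys, xs = np.nonzero(skel)
            coords = np.column_stack([xs, ys])         # (x, y) relative to the image origin
            print("line-like blob", i, coords[::10])   # every 10th skeleton pixel (not ordered along the path)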
EDIT: Original answer, only works for straight lines:
You probably want to use the Hough transform (OpenCV tutorial).
Python sample code: Horizontal Line detection with OpenCV
EDIT: Related question with sample code to skeletonize: How can I get a full medial-axis line with its perpendicular lines crossing it?
