I have a road skeleton (from thinning), as shown below:
The original image was:
I couldn't find a pruning function in Julia. Is there a way to remove small segments (streets) from continuous longer ones (main roads)? The red dots show the corner points, whose indices are known.
I have two images of a shooting target (4 meters by 4 meters), divided into sections of 0.5 meter by 0.5 meter squares. The images are taken before and after a firing trial. The target already has bullet holes on it before the firing. Moreover, there is some clutter on or in front of the target (fixing screws and steel lines to hold the target straight). Let us assume all bullet holes are visible in both images. How can I programmatically identify bullet holes by comparing the before and after images? Can you suggest tools, libraries, or algorithm steps?
A possible approach would consist in the following steps:
perform image registration in order to have both images seen from the same angle. Here, you'll need to find the combination of rotation, scaling and translation that relates one view to the other. See for example http://scikit-image.org/docs/dev/auto_examples/transform/plot_matching.html#example-transform-plot-matching-py, which determines the transformation from a set of points of interest (corners, for example). (The transformation you need might be a bit more complex than the one in the example, since for your images the rotation is in 3D, not only in 2D.)
once you have aligned the images, you can try different approaches. One of them is to detect the holes in both images with a segmentation method. Since the holes seem to be lighter, you can try thresholding the image (http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_local_otsu.html) and maybe cleaning the result with mathematical morphology (http://www.scipy-lectures.org/packages/scikit-image/index.html#mathematical-morphology). Then, for each hole in the after image, you can try to match it with a hole in the before image, for example by picking the closest center of mass in the before image and computing the cross-correlation between patches around the hole in the two images.
I've given a few links to scikit-image examples, but OpenCV is often cited as the reference library for computer vision.
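If it helps, here is a minimal sketch of the detection-and-matching step with scikit-image and SciPy. It assumes the images are already registered and that holes are lighter than the target; the filenames and the 10 px matching tolerance are placeholders:

```python
import numpy as np
from scipy import ndimage
from skimage import io, filters, morphology

before = io.imread("target_before.png", as_gray=True)  # hypothetical filenames,
after = io.imread("target_after.png", as_gray=True)    # assumed already registered

def hole_centers(img):
    # Holes appear lighter than the target: global Otsu threshold
    mask = img > filters.threshold_otsu(img)
    # Remove small speckles and clutter with a morphological opening
    mask = morphology.opening(mask, morphology.disk(2))
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

centers_before = hole_centers(before)
centers_after = hole_centers(after)

# New holes: centers in "after" with no nearby center in "before"
new = [c for c in centers_after
       if len(centers_before) == 0
       or np.min(np.linalg.norm(centers_before - c, axis=1)) > 10]  # 10 px tolerance
print(f"{len(new)} new hole(s) detected")
```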
I want to split apart the many images contained inside one image.
My method:
1- Quantize image using pyramid segmentation.
2- Extract contour from image.
3- Accumulate horizontal & vertical edges.
4- Compute the intersection of the horizontal & vertical lines.
What is your suggestion for this problem?
Please refer to the sample images here.
I assume the number and size of the images are variable (if they aren't, you can simply cut at known distances).
Since the joints between different images will generate contrast, you can use Canny edge detection plus standard line detection. You can find a good tutorial here. Also check the documentation of HoughLines & HoughLinesP.
After detecting the lines, you can discard all the non-horizontal and non-vertical ones. Then you can find the positions of, and distances between, the horizontal and vertical lines in order to compute the sub-image boundaries.
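A minimal sketch of this with OpenCV in Python; the filename and the Canny/Hough/angle thresholds are assumptions to tune:

```python
import cv2
import numpy as np

img = cv2.imread("collage.png")  # hypothetical input image
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)

# Probabilistic Hough transform; thresholds need tuning for your image size
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)

xs, ys = [], []
for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
    if abs(y1 - y2) < 3:    # nearly horizontal line
        ys.append((y1 + y2) // 2)
    elif abs(x1 - x2) < 3:  # nearly vertical line
        xs.append((x1 + x2) // 2)

# The clustered x/y positions give the sub-image boundaries
print(sorted(set(xs)), sorted(set(ys)))
```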
How can I get an array of coordinates of a (drawn) line in an image? The coordinates should be relative to the image borders. Input: *.img. Output: an array of coordinates (with a fixed step). Is there any 3rd-party software to do this? For example, there is a high contrast difference: a white background and a black line, or red and green, etc.
Example:
Oh, you mean non-straight lines. You need to define a "line". Intuitively, you might mean a connected area of the image with a high aspect ratio between the length of its medial axis and the distance between the medial axis and the edges (i.e. relatively long and narrow, even if it winds around). A possible approach:
Threshold or select by color. Perhaps select by color based on a histogram of colors, or posterize as described here: Adobe Photoshop-style posterization and OpenCV, then call scipy.ndimage.measurements.label()
For each area above, skeletonize. Helpful tutorial: "Skeletonization using OpenCV-Python". However, you will likely need the distance to the edges as well, so use skimage.morphology.medial_axis(..., return_distance=True)
Do some kind of cleanup/filtering on the skeleton to remove short branches, etc. Thinking about your particular use, and assuming your lines don't loop around, you can just find the longest single path in the skeleton. This is also where you can decide whether a shape is a "line" or not, based on how long the longest path in its skeleton is relative to the distance to the edges. I'm not sure how best to do that in OpenCV, but "Analyze Skeleton" in Fiji/ImageJ will let you filter by branch length.
What is left is the most elongated medial axis of the original "line" shape. You can resample that to some step that you prefer, or fit it with a spline, etc.
Due to the nature of what you want to do, it is hard to come up with a sample code that will work on a range of images. This is likely to require some careful tuning. I recommend using a small set of images (corpus), running any version of your algo on them and checking the results manually until it is pretty good, then trying it on a large corpus.
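That said, here is a rough starting-point sketch of the threshold, label, and medial-axis steps with SciPy and scikit-image; the filename, the dark-line-on-light-background assumption, the 100-pixel component cutoff and the resampling step are all placeholders:

```python
import numpy as np
from scipy import ndimage
from skimage import io, morphology

img = io.imread("line.png", as_gray=True)  # hypothetical input
mask = img < 0.5                           # assumes a dark line on a light background

# Keep only sizable connected components (drop noise)
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
mask = np.isin(labels, 1 + np.flatnonzero(sizes > 100))

# Medial axis plus the distance to the nearest edge at each skeleton pixel
skel, dist = morphology.medial_axis(mask, return_distance=True)

# Coordinates of skeleton pixels, relative to the image borders.
# Note: they are not ordered along the path; ordering them (and trimming
# short branches) is the cleanup step described above.
ys, xs = np.nonzero(skel)
coords = np.column_stack([xs, ys])
step = 5  # placeholder fixed step
print(coords[::step])
```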
EDIT: Original answer, only works for straight lines:
You probably want to use the Hough transform (OpenCV tutorial).
Python sample code: Horizontal Line detection with OpenCV
EDIT: Related question with sample code to skeletonize: How can I get a full medial-axis line with its perpendicular lines crossing it?
I am a newbie to OpenCV. I would like to work on a small project to track the rotation speed of a gear (using a webcam). However, so far I have no idea how to approach this.
The posted image shows a machine which contains two 'big' gears. What I am interested in is only the gear on the left-hand side (the red line I highlighted).
Link
My plan is:
Extract the gear region of interest.
Mask all unrelated regions, so the masked image shows the left gear only (ROI).
.....
The problem is: how can I locate/extract/mask the ROI? I went through some examples of cvMatchTemplate(), but it doesn't support rotation or scaling. Because a webcam is used, the captured image may be scaled or rotated. cvFindContours() will extract all contours in the image rather than just the ROI.
If you know the gear beforehand, you can use a picture of it to extract keypoints with SIFT, SURF, FAST or any corner detection algorithm. Then do as follows:
1- Apply FAST on every frame to detect keypoints.
2- Extract SIFT descriptors from those keypoints.
3- Match the detected points in the scene with the points you previously extracted from the reference image. You can use the FLANN matcher for this.
4- Those matches will define a region in the scene containing the gear you are looking for.
This is not trivial so you will need to look for information in OpenCV documentation for using all these functions.
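To make the steps concrete, here is a rough sketch; the filenames and the 0.7 ratio-test threshold are assumptions, and SIFT_create requires OpenCV 4.x (older builds use xfeatures2d):

```python
import cv2
import numpy as np

template = cv2.imread("gear.png", cv2.IMREAD_GRAYSCALE)  # reference picture of the gear
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # one webcam frame

fast = cv2.FastFeatureDetector_create()
sift = cv2.SIFT_create()

# Steps 1-2: FAST keypoints, SIFT descriptors, on both images
kp_t = fast.detect(template, None)
kp_t, des_t = sift.compute(template, kp_t)
kp_f = fast.detect(frame, None)
kp_f, des_f = sift.compute(frame, kp_f)

# Step 3: FLANN matching (KD-tree) with Lowe's ratio test
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des_t, des_f, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

# Step 4: project the template outline into the frame (needs >= 4 good matches)
src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = template.shape
region = cv2.perspectiveTransform(
    np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2), H)
print(region)  # corners of the region containing the gear
```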
I have a bunch of "simple" images and I want to check whether they are similar to each other. I compare them using template matching (cv::matchTemplate) and the results are quite good.
Now I want to fine-tune my program, and I face a problem. For example, I have two images which look very much alike. The only differences are that one has a thicker line and that the digit in front of the item is different. When both images are small, a one-pixel difference in line thickness makes a big difference in the template matching result. When the line thicknesses are the same and the only difference is the front digit, I get a matching result of about 0.98 with CV_TM_CCORR_NORMED on a successful match. When the line thickness is different, the matching result is about 0.95.
I cannot decrease my threshold value below 0.98 because some other similar images have same line thickness.
Here are example images:
So what options do I have?
I have tried:
dilating both the original and the template
eroding both
applying morphologyEx to both
calculating keypoints and comparing them
finding corners
But no big success yet. Are those images so simple that detecting "good features" is hard?
Any help is very welcome.
Thank you!
EDIT:
Here are some other example images. The ones my program considers similar are put in the same zip folder.
ZIP
A possible way might be to thin the two images so that every line is one pixel wide, since the differing thickness is causing your main problem with similarity.
The procedure would be to first binarize/threshold the images, then apply a thinning operation to both so that each has a stroke width of 1 px. Then use the usual template matching that gave you good results before.
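A minimal sketch of that pipeline, using scikit-image's thin() for the thinning step rather than the OpenCV implementations linked below; the filenames and the dark-strokes-on-light-background assumption are placeholders:

```python
import cv2
import numpy as np
from skimage.morphology import thin

def binarize_and_thin(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Binarize; assumes dark strokes on a light background
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Thin every stroke down to 1 px width
    return thin(bw > 0).astype(np.uint8) * 255

image = binarize_and_thin("image.png")     # hypothetical filenames
templ = binarize_and_thin("template.png")  # must be no larger than image

res = cv2.matchTemplate(image, templ, cv2.TM_CCORR_NORMED)
print(res.max())  # compare against your 0.98 threshold as before
```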
In case you'd like more details on the thinning/skeletonization of binary images here are a few OpenCV implementations posted on various discussion forums and OpenCV groups:
OpenCV code for thinning (Guo and Hall algo, works with CvMat inputs)
The JR Parker implementation using OpenCV
Possibly more efficient code here (uses OpenCV optimized access methods a lot, however most of the page is in Japanese!)
And lastly a brief overview of thinning in case you're interested.
You need something more elementary here; there isn't much reason to go for fancy methods. Your figures are already binary, and their shapes are very similar overall.
One initial idea: consider the upper and bottom points in an image and form an upper hull and a bottom hull (simply a hull, not a convex hull or anything else). A point is an upper point (respectively, bottom point) if, for a given column i, it is the first point starting from the top (bottom) of the image that is not a background point in column i. Also, your image is mostly one single connected component (in some cases there are separated vertical bars, but that is fine), so you can discard small components easily. This step is important for your situation because some of your figures contain noise that is irrelevant to the rest of the image. Considering a connected component with fewer than 100 points to be small, these are the hulls you get for the respective images included in the question:
The blue line indicates the upper hull, the green line the bottom hull. If it is not apparent, when we consider the regional maxima and regional minima of these hulls, we obtain the same number in both of them. Furthermore, they are all very close except for some displacement along the y axis. If we consider the mean x position of the extrema and plot the lines of both images together, we get the following figure. In this case, the blue and green lines are for the second image, and the red and cyan lines for the first. Red dots are at the mean x coordinate of the regional minima, and blue dots the same but for the regional maxima (these are our points of interest). (The following image has been resized for better visualization.)
As you can see, you get many nearly overlapping points without doing anything. We can do even less, i.e. not even care about this overlap, and classify your images in the trivial way: if an image a and another image b have the same number of regional maxima in the upper hull, the same number of regional minima in the upper hull, the same number of regional maxima in the bottom hull, and the same number of regional minima in the bottom hull, then a and b belong to the same class. Doing this for all your images, all of them are correctly grouped except for the following situation:
In this case we have only 3 maxima and 3 minima for the upper hull in the first image, while there are 4 maxima and 4 minima in the second. Below are the plots for the hulls and points of interest obtained:
As you can notice, in the second upper hull there are two extrema very close together. Smoothing this curve eliminates both extrema, making the images match by the trivial method. Also note that if you draw a rectangle around your images, this method will say they are all equal. In that case you will want to compare multiple hulls, discarding the points in the current hull and constructing new ones. Nevertheless, this method is able to group all your images correctly, given that they are all very simple and mostly noise-free.
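For illustration, here is a rough sketch of the hull extraction and the trivial extrema-count classification; the filenames, the 0.5 binarization threshold and the 100-point component cutoff are assumptions, and since argrelextrema uses strict comparisons, plateaus may require the smoothing mentioned above:

```python
import numpy as np
from scipy import ndimage
from scipy.signal import argrelextrema
from skimage import io

def hull_signature(path, min_size=100):
    img = io.imread(path, as_gray=True)
    mask = img < 0.5                      # dark figure on a light background
    # Discard small connected components (noise)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    mask = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    # Upper/bottom hull: first/last foreground row in each non-empty column
    cols = np.flatnonzero(mask.any(axis=0))
    upper = np.array([np.flatnonzero(mask[:, c])[0] for c in cols])
    bottom = np.array([np.flatnonzero(mask[:, c])[-1] for c in cols])
    # Count regional maxima/minima of each hull; argrelextrema compares
    # strictly, so plateau extrema may need smoothing first
    def extrema(h):
        return (len(argrelextrema(h, np.greater)[0]),
                len(argrelextrema(h, np.less)[0]))
    return extrema(upper) + extrema(bottom)

# Trivial classification: same 4-tuple of extrema counts -> same class
print(hull_signature("a.png") == hull_signature("b.png"))
```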
As far as I can tell, the difficulty is when the shape is the same and only the size is different. A simple hack approach could be:
- Subtract the images, then erode. If the shapes were the same but one slightly bigger, subtraction will leave only the edges, which will be thin and will vanish with erosion as noise.
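A minimal sketch of that idea; the filenames, difference threshold and kernel size are assumptions to tune:

```python
import cv2
import numpy as np

a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs, same size
b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)

# Same shape at slightly different thickness leaves only thin edge residue
diff = cv2.absdiff(a, b)
_, diff = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)

# Erosion wipes out the thin residue; anything that survives is a real difference
eroded = cv2.erode(diff, np.ones((3, 3), np.uint8))
print("similar" if cv2.countNonZero(eroded) == 0 else "different")
```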
A somewhat more formal approach would be to take the contours, then the approximate polygons, and do an invariant comparison (Hu moments, etc.).
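For example, with OpenCV's matchShapes, which compares Hu-moment invariants internally (filenames are placeholders):

```python
import cv2

a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)
_, a_bw = cv2.threshold(a, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
_, b_bw = cv2.threshold(b, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours_a, _ = cv2.findContours(a_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours_b, _ = cv2.findContours(b_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# matchShapes compares Hu-moment invariants; a lower score means more similar
score = cv2.matchShapes(max(contours_a, key=cv2.contourArea),
                        max(contours_b, key=cv2.contourArea),
                        cv2.CONTOURS_MATCH_I1, 0)
print(score)
```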