Detecting blobs of uniform colour with OpenCV

I am working on a program to detect split fields for remote sensing (i.e. more than one colour/field type within each image, where each image corresponds to the land owned by one farmer). My approach so far has been to read in images, posterize them with a clustering algorithm, then analyse the colours and shapes present to 'score' each image and decide whether more than one type of field is present. My program works reasonably well, although there are still quite a few obvious splits that it fails to detect.
Up until now I have been doing this using only standard libraries in C++, but I now think I should be using OpenCV or something similar, and I was wondering which techniques to start with. I see there are some image segmentation and blob detection algorithms, but I'm not sure they are applicable because the boundary between fields tends to be blurred or low in contrast. The following are some sample images that I would expect my program to detect as 'split':
(True colour Landsat)
http://imgur.com/m9qWBcq
http://imgur.com/OwqvUvs
Does anyone have thoughts on how I could go about solving this problem in a different way? Thanks!
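For reference, here is a rough sketch of the kind of clustering-based posterization I mean, written with OpenCV's k-means (this is not my actual code; K and the termination criteria are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread('field.png')                      # hypothetical input image
pixels = img.reshape(-1, 3).astype(np.float32)

K = 4                                              # illustrative cluster count
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# map every pixel to its cluster centre to posterize the image
posterized = centers[labels.ravel()].astype(np.uint8).reshape(img.shape)
cv2.imwrite('posterized.png', posterized)
```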

1) Convert to HSV and take the H channel, or use the grayscale form. Apply a median filter to smooth the fields if the images are high-resolution.
2) Extract the histogram and find all its peaks. These peaks indicate the differently coloured fields.
3) (A) Now you can use simple thresholding around these peak values and then find Canny edges for trapezium or similar shapes.
--OR--
(B) Find Canny edges around each peak value, i.e. for a peak at value x, find edges in the range (x - dx) to (x + dx), where dx is a small value to be found experimentally.
4) Now you can count the contours at the different levels/peaks.
I haven't written this out in full because the language is not specified and all these constructs are readily available in OpenCV. It's fun to learn. Feel free to ask further. Happy coding.
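That said, a rough Python sketch of steps 1-4 (closer to variant B) could look like the following; the Canny thresholds, the peak test, and dx are all illustrative values you must tune:

```python
import cv2
import numpy as np

img = cv2.imread('field.png')                          # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h = cv2.medianBlur(hsv[:, :, 0], 5)                    # step 1: H channel + median filter

hist = cv2.calcHist([h], [0], None, [180], [0, 180]).ravel()   # step 2: histogram
# naive peak picking: a bin that beats both neighbours and a minimum count (500 is a guess)
peaks = [i for i in range(1, 179)
         if hist[i] > hist[i - 1] and hist[i] > hist[i + 1] and hist[i] > 500]

dx = 5                                                 # step 3(B): band around each peak
for p in peaks:
    band = cv2.inRange(h, p - dx, p + dx)
    edges = cv2.Canny(band, 50, 150)
    # step 4: contour count per level/peak
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print('peak', p, '->', len(contours), 'contours')
```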

Try the implementations of the MSER algorithm in MserFeatureDetector.
The original algorithm was designed for grayscale pictures, and I don't have good experience with the colour version of it, so try some preprocessing of the original frames to generate grayscale images according to your needs.
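A minimal sketch of running MSER on a grayscale frame with the Python bindings (parameters are left at their defaults; tune them in MSER_create for your imagery):

```python
import cv2

img = cv2.imread('parcel.png')                     # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                           # tune delta / min & max area here
regions, bboxes = mser.detectRegions(gray)

# draw a box around each maximally stable region
for x, y, w, h in bboxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('mser_regions.png', img)
```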

Related

How does Skimage's Hough_Ellipse work exactly?

I am currently working on a Bachelor's thesis to recognize & count eggs in a hen nest using computer vision methods. The eggs can be (partially) occluded by hens for a while and can be positioned at different rotations. My current ideas are an elliptic Hough transform and an AI solution using YOLO; for tracking, I am still researching :)
However, reading through skimage's tutorial about Hough_Ellipse() and trying to find resources, I am currently at a dead end, which leaves me with the following questions:
What is an accumulator threshold value exactly?
What is the accuracy and bin size on the minor axis?
How do all these parameters work together with min_size & max_size to find ellipses?
(min_size is minimal major axis length & max_size is maximal minor axis length)
Isn't it the case that the major & minor axes can change?
For the transform, I am currently using grayscaling -> Gaussian blurring -> Canny detection.
The result of the preprocessing looks like this at the moment:
Preprocessed Image
The preprocessing shows that there is - indeed - an ellipse. I am unsure whether fitEllipse() from OpenCV will end up helping me detect ellipses, especially when they are partially occluded by hens and the eggs can appear at different rotations.
Furthermore, how do I figure out the individual parameters for the Hough transform?
PS: If anyone has better ideas aside from AI to test out, I'd be happy to try out more things :)
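For reference, my current understanding of how these parameters fit together, based on the skimage gallery example (all values are guesses to be tuned):

```python
from skimage import color, io
from skimage.feature import canny
from skimage.transform import hough_ellipse
from skimage.draw import ellipse_perimeter

img = color.rgb2gray(io.imread('nest.png'))        # hypothetical input
edges = canny(img, sigma=2.0)

# accuracy  : bin size of the accumulator on the minor axis (coarser = faster)
# threshold : minimum accumulator value (votes) a candidate ellipse needs
# min_size  : minimal major axis length; max_size : maximal minor axis length
result = hough_ellipse(edges, accuracy=20, threshold=250, min_size=40, max_size=120)

if len(result):
    result.sort(order='accumulator')               # best-supported candidate last
    best = list(result[-1])
    yc, xc, a, b = (int(round(v)) for v in best[1:5])
    rr, cc = ellipse_perimeter(yc, xc, a, b, best[5])   # pixels of the found ellipse
```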

Gauge percent similarity of line drawings in OpenCV

I'm attempting to take two images in OpenCV, both drawn with a program like MS Paint or a simple drawing app, and gauge their percent similarity to each other. I have passing familiarity with some OpenCV image processing methods, but the approaches I've tried so far haven't been effective.
What I've thought of doing so far has been:
Simplest approach - comparing the two images pixel by pixel. This is easy to code up but not scale / rotation invariant, and the solution needs to be able to recognize an imperfect version of a drawing.
Hausdorff distance. This seems ready-made for this problem, and I read a couple of other Stack Overflow posts about using it, but it takes in contours and I'm not sure how to extract contours from one image and match them to contours in another. One of the images might be empty, or they might be drastically different.
Feature extraction / matching - the approach I've tried so far has been to use feature detectors (ORB, AKAZE) paired with a FLANN-based matcher, which has given extremely poor results. I'm currently switching to SIFT/SURF with brute-force matching, but it doesn't seem to have worked very well either.
What are other possible computer vision algorithms I can try? I attached two sample images that are representative of what I'm trying to compare (and I did grayscale the image before processing, so color is not a factor).
Thank you!
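For the Hausdorff option, one sketch I'm considering is to pool all contour points from each image into a single point set and compare the sets with SciPy (file names are placeholders):

```python
import cv2
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def drawing_points(path):
    """Collect all contour points of a binarized line drawing into one point set."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None                                # empty drawing
    return np.vstack([c.reshape(-1, 2) for c in contours])

a = drawing_points('drawing_a.png')                # hypothetical files
b = drawing_points('drawing_b.png')
if a is not None and b is not None:
    # symmetric Hausdorff distance between the two point sets
    d = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    print('Hausdorff distance:', d)
```

This still isn't scale or rotation invariant, so presumably the point sets would need centring and scale normalisation first.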

OCR detection with OpenCV

I'm trying to create a simple OCR engine using OpenCV. I have this image: https://dl.dropbox.com/u/63179/opencv/test-image.png
I have saved all possible characters as images and am trying to detect these images in the input image.
From here I need to identify the code. I have been trying matchTemplate and FAST detection. Both seem to fail (or more likely: I'm doing something wrong).
When I used the matchTemplate method I found the edges of both the input image and the reference images using Sobel. This provided a working result but the accuracy was not good enough.
When using the FAST method it seems like I can't get any interesting descriptors from the cvExtractSURF method.
Any recommendations on the best way to read this kind of code?
UPDATE 1 (2012-03-20)
I have made some progress. I'm trying to find the bounding rects of the characters, but the matrix font is killing me. See the samples below:
My font: https://dl.dropbox.com/u/63179/opencv/IMG_0873.PNG
My font filled in: https://dl.dropbox.com/u/63179/opencv/IMG_0875.PNG
Other font: https://dl.dropbox.com/u/63179/opencv/IMG_0874.PNG
As seen in the samples, I can find the bounding rects for a less complex font, and if I can fill in the space between the dots in my font it also works. Is there a way to achieve this with OpenCV? If I can find the bounding box of each character, recognizing the character would be much simpler.
Any ideas?
Update 2 (2012-03-21)
Ok, I had some luck with finding the bounding boxes. See image:
https://dl.dropbox.com/u/63179/opencv/IMG_0891.PNG
I'm not sure where to go from here. I tried to use matchTemplate, but I guess that is not a good option in this case? I guess it is better suited to searching for an exact match in a bigger picture?
I tried to use SURF, but when I try to extract the descriptors with cvExtractSURF for each bounding box I get 0 descriptors... Any ideas?
What method would be most appropriate to use to be able to match the bounding box against a reference image?
You're going the hard way with FAST+SURF, because they were not designed for this task.
In particular, FAST detects corner-like features that are ubiquitous in structure-from-motion but far less present in OCR.
Two suggestions:
Maybe build a feature vector from the number and locations of FAST keypoints. I think you can rapidly check whether these features are discriminant enough, and if so, train a classifier from that.
(The one I would choose myself.) Partition your image samples into smaller squares. Compute only the SURF descriptor for each square and concatenate all of them to form the feature vector for a given sample. Then train a classifier with these feature vectors.
Note that option 2 works with any descriptor that you can find in OpenCV (SIFT, SURF, FREAK...).
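A rough sketch of option 2, using SIFT instead of SURF (SURF now lives in opencv-contrib); the grid and patch sizes are assumptions:

```python
import cv2
import numpy as np

def grid_descriptor(img, grid=4, patch=16):
    """Concatenate one SIFT descriptor per cell of a grid x grid partition."""
    sift = cv2.SIFT_create()
    h, w = img.shape[:2]
    # one keypoint at the centre of each cell, sized to cover the cell
    kps = [cv2.KeyPoint(w * (j + 0.5) / grid, h * (i + 0.5) / grid, patch)
           for i in range(grid) for j in range(grid)]
    _, desc = sift.compute(img, kps)
    return desc.reshape(-1)                        # feature vector for a classifier

sample = cv2.imread('char.png', cv2.IMREAD_GRAYSCALE)   # hypothetical sample
vec = grid_descriptor(sample)
print(vec.shape)                                   # grid*grid*128 for SIFT
```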
Answer to update 1
Here is a little trick that senior people taught me when I started.
On your image with the dots, you can project your binarized data to the horizontal and vertical axes.
By searching for holes (disconnections) in the projected patterns, you are likely to recover almost all the bounding boxes in your example.
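A sketch of the projection trick (the dilation kernel and the single-text-line assumption are mine, for illustration):

```python
import cv2
import numpy as np

gray = cv2.imread('code.png', cv2.IMREAD_GRAYSCALE)     # hypothetical input
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# a small dilation first can merge the dots of the matrix font
binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))

def runs(profile):
    """Split a 1-D projection into (start, end) runs separated by holes (zeros)."""
    on = profile > 0
    edges = np.flatnonzero(np.diff(on.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(profile)]))
    return [(bounds[i], bounds[i + 1])
            for i in range(len(bounds) - 1) if on[bounds[i]]]

rows = binary.sum(axis=1)                               # horizontal projection
cols = binary.sum(axis=0)                               # vertical projection
(y0, y1), = runs(rows)                                  # assumes one line of text
boxes = [(x0, y0, x1, y1) for (x0, x1) in runs(cols)]   # one box per character
```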
Answer to update 2
At this point, you're back to my initial answer: SURF will be of no use here.
Instead, a standard way is to binarize each bounding box (to 0 - 1 depending on background/letter), normalize the bounding boxes to a standard size, and train a classifier from here.
There are several tutorials and blog posts on the web about how to do digit recognition using neural networks or SVMs; you just have to replace the digits with your letters.
Your work is almost done! Training and using a classifier is tedious but straightforward.
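A minimal sketch of that pipeline with OpenCV's built-in SVM; train_crops, train_labels and unknown_crop are placeholders for your own labelled data:

```python
import cv2
import numpy as np

SIZE = 20                                          # normalized box size

def to_vector(box_img):
    """Binarize and resize one character crop to a fixed-length vector."""
    _, b = cv2.threshold(box_img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    b = cv2.resize(b, (SIZE, SIZE), interpolation=cv2.INTER_AREA)
    return (b.reshape(-1) / 255.0).astype(np.float32)

# train_crops / train_labels: your labelled character crops (placeholders)
samples = np.vstack([to_vector(c) for c in train_crops])
labels = np.array(train_labels, dtype=np.int32)

svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)

# classify one unknown bounding-box crop
_, pred = svm.predict(to_vector(unknown_crop).reshape(1, -1))
```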

Shape context matching in OpenCV

Does OpenCV have an implementation of shape context matching? I've found only the matchShapes() function, which does not work for me. I want to get a set of corresponding features from shape context matching. Is it a good idea to use it to compare and find the rotation and displacement of a detected contour in two different images?
Some example code would also be very helpful.
I want to detect, for example, a pink square, and in the second case a pen. Other examples could be squares with holes, stars, etc.
The basic steps of image processing are:
Image Acquisition > Preprocessing > Segmentation > Representation > Recognition
What you are asking for seems to lie within the representation part of this general pipeline. You want some features that describe the objects you are interested in, right? Before sharing what I've done for simple hand-gesture recognition, I would like you to consider what you actually need: a lot of the time, simplicity will make things much easier. Consider a fixed colour on your objects; consider background subtraction (these two mainly tie into preprocessing and segmentation). As for representation, which features are you interested in, and can you eliminate the need for some of them?
My project group and I took a simple approach to preprocessing and segmentation, choosing a green glove for our hand. Here's an example of the glove, camera and detection on the screen:
We used a threshold on convexity defects, tuned to find the defects between fingers, and we calculated the side ratio of a rotated rectangular bounding box to see how square our blob is. With only four different hand gestures chosen, we are able to distinguish them using just these two features.
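Roughly, the two features look like this in OpenCV's Python API (mask is assumed to be the binary glove segmentation; the 20 px depth threshold is an illustrative value):

```python
import cv2
import numpy as np

# mask: binary segmentation of the glove, e.g. from cv2.inRange on a green hue range
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

# feature 1: number of deep convexity defects (the gaps between fingers)
hull = cv2.convexHull(cnt, returnPoints=False)
defects = cv2.convexityDefects(cnt, hull)
n_fingers = 0
if defects is not None:
    depths = defects[:, 0, 3] / 256.0              # defect depth in pixels
    n_fingers = int(np.sum(depths > 20))           # 20 px threshold is an assumption

# feature 2: side ratio of the rotated bounding box ("how square is the blob?")
(_, _), (w, h), _ = cv2.minAreaRect(cnt)
squareness = min(w, h) / max(w, h)
```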
The functions we used, and the measurements, are all available in the OpenCV documentation on structural analysis, and for access to values in vectors (which we used a lot), see the documentation for vectors in C++.
I hope you can use the train of thought put into this; if you want more specific info I'll be happy to comment. Enjoy.

How to match texture similarity in images?

What are the ways to quantify the texture of a portion of an image? I'm trying to detect areas in an image that are similar in texture: a sort of measure of "how closely similar are they?"
So the question is what information about the image (edge, pixel value, gradient etc.) can be taken as containing its texture information.
Please note that this is not based on template matching.
Wikipedia doesn't give much detail on actually implementing any of the texture analyses.
Do you want to find two distinct areas in the image that look the same (same texture), or match a texture in one image to another?
The second is harder due to different radiometry.
Here is a basic scheme of how to measure similarity of areas.
1. Write a function that takes an area of the image as input and calculates a scalar value, like average brightness. This scalar is called a feature.
2. Write more such functions to obtain about 8-30 features, which together form a vector that encodes information about the area in the image.
3. Calculate such a vector for both areas that you want to compare.
4. Define a similarity function which takes two vectors and outputs how much they are alike.
You need to focus on steps 2 and 4.
Step 2: Use the following features: std() of brightness, some kind of corner detector, an entropy filter, a histogram of edge orientations, a histogram of FFT frequencies (x and y directions). Use colour information if available.
Step 4: You can use cosine similarity, min-max, or weighted cosine.
After you implement about 4-6 such features and a similarity function, start to run tests. Look at the results and try to understand why or where it doesn't work, then add a specific feature to cover that case.
For example, if you see that a texture with big blobs is regarded as similar to a texture with tiny blobs, then add a morphological filter that calculates the density of objects with size > 20 sq. pixels.
Iterate this process of identifying a problem and designing a specific feature about 5 times, and you will start to get very good results.
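A small sketch of steps 1-4 with three of the suggested features; patch_a and patch_b stand in for the two grayscale areas you want to compare:

```python
import cv2
import numpy as np

def texture_features(patch):
    """A few of the suggested scalar features for one grayscale patch."""
    feats = [patch.std()]                                   # brightness spread
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi))
    feats += list(hist / (hist.sum() + 1e-9))               # edge-orientation histogram
    mag = np.abs(np.fft.fft2(patch))
    feats += [mag[0, 1:8].sum(), mag[1:8, 0].sum()]         # low FFT freqs, x and y
    return np.array(feats, dtype=np.float32)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

a = texture_features(patch_a)                               # patch_a/patch_b: your areas
b = texture_features(patch_b)
print(cosine_similarity(a, b))
```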
I'd suggest using wavelet analysis. Wavelets are localized in both time and frequency, and multiresolution analysis gives a better signal representation than the FT does.
There is a paper explaining a wavelet approach for texture description, and also a comparison method.
You might need to slightly modify an algorithm to process images of arbitrary shape.
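To give a flavour of the idea, the energies of the detail subbands from a multilevel 2-D DWT make a simple texture descriptor (a sketch with PyWavelets; the wavelet and level count are arbitrary choices):

```python
import numpy as np
import pywt

def wavelet_energy_features(img, wavelet='db2', levels=3):
    """Energy of each detail subband across a multilevel 2-D DWT."""
    coeffs = pywt.wavedec2(img.astype(np.float32), wavelet, level=levels)
    feats = []
    for cH, cV, cD in coeffs[1:]:                  # skip the approximation band
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)
```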
An interesting approach for this is to use Local Binary Patterns.
Here is a basic example and some explanations: http://hanzratech.in/2015/05/30/local-binary-patterns.html
See that method as one of the many different ways to get features from your pictures. It corresponds to the 2nd step of DanielHsH's method.
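A minimal sketch of the LBP histogram feature with scikit-image (P and R are the usual starting values, not requirements):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Normalized histogram of uniform LBP codes for one grayscale patch."""
    lbp = local_binary_pattern(gray, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-9)
```

Comparing two such histograms (e.g. with the cosine similarity above) then gives a texture-similarity score.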
