OpenCV - Dynamically find HSV ranges for color - ios

When given an image such as this:
Not knowing the color of the object in the image, I would like to be able to automatically find the best H, S and V ranges to threshold the object itself, in order to get a result such as this:
In this example, I manually found the values and thresholded the image using cv::inRange. The output I'm looking for is the best H, S and V ranges (a min and max value for each, six integer values in total) to threshold the given object in the image, without knowing in advance what color the object is. I need to use these values later on in my code.
Key points to remember:
- All given images will be of the same size.
- All given images will have the same dark background.
- All the objects I'll put in the images will be of full color.
I could brute-force over all possible permutations of the 6 HSV range values, threshold each one, and find a clever way to figure out when the best blob was found (blob size, maybe?). That seems like a very cumbersome, slow and highly inefficient solution, though.
What would be a good way to approach this? I did some research and found that OpenCV has some machine learning capabilities, but I need to have the actual 6 values at the end of the process, not just a thresholded image.
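Given the constraints listed above (same size, same dark background, fully colored objects), one non-learning sketch is to segment the object by brightness and read the six values off the object pixels. Everything here is hypothetical: `find_hsv_range`, `dark_v_thresh` and `margin` are made-up names and tuning parameters, not an established method.

```python
import numpy as np

def find_hsv_range(hsv_img, dark_v_thresh=40, margin=5):
    """Estimate (min, max) per H, S, V channel for the object in an image
    with a dark background. hsv_img: HxWx3 uint8 array in HSV order.
    dark_v_thresh and margin are hypothetical tuning parameters.
    (A real OpenCV version would clamp H to 179 for 8-bit images.)"""
    h, s, v = hsv_img[..., 0], hsv_img[..., 1], hsv_img[..., 2]
    # Object pixels are the ones clearly brighter than the dark background.
    mask = v > dark_v_thresh
    lo = [max(int(c[mask].min()) - margin, 0) for c in (h, s, v)]
    hi = [min(int(c[mask].max()) + margin, 255) for c in (h, s, v)]
    return lo, hi

# Tiny synthetic demo: a uniformly colored "object" patch on a dark background.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[3:7, 3:7] = (30, 200, 220)
lo, hi = find_hsv_range(img)
```

The two returned triples are exactly the six integers one would pass to cv::inRange.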

You could create a small two-layer neural network for the task of dynamic HSV masking.
Steps:
- Create/generate ground-truth annotations for each image and its HSV range for the required object.
- Design a small neural network with at least 1 convolutional layer and 1 fully connected layer.
- Input: the mask of the image after applying the HSV range from the ground truth (m×n).
- Output: an m×n binary mask of the image.
- Post-processing: multiply the mask with the original image to get the required object highlighted.
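The post-processing step (mask times image) can be sketched with NumPy; `apply_mask` is a hypothetical helper name:

```python
import numpy as np

def apply_mask(image, mask):
    """Multiply a binary mask into an image so only the detected
    object remains visible. image: HxWx3 uint8, mask: HxW of 0/1."""
    return image * mask[..., None].astype(image.dtype)

# Demo: a flat gray image with a 2x2 region kept by the mask.
img = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
out = apply_mask(img, mask)
```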

What color model is used for the param.blobColor? Is it BGR or HSV?

params.blobColor = 44;
//I am going to find blobs for skin color
Please refer to the OpenCV documentation
http://docs.opencv.org/trunk/d0/d7a/classcv_1_1SimpleBlobDetector.html
It says:
The class implements a simple algorithm for extracting blobs from an
image:
Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold
(exclusive) with distance thresholdStep between neighboring
thresholds.
...
This class performs several filtrations of returned blobs. You should
set filterBy* to true/false to turn on/off corresponding filtration.
Available filtrations:
By color. This filter compares the intensity of a binary image at the
center of a blob to blobColor. If they differ, the blob is filtered
out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to
extract light blobs.
blobColor is a byte value; it does not actually represent a color, so applying color models doesn't make sense. It's neither HSV nor BGR.
According to this tutorial: https://www.learnopencv.com/blob-detection-using-opencv-python-c/
This filter operation was, or maybe still is, broken.
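For illustration only, the comparison the documentation describes can be mimicked like this (`passes_color_filter` is a hypothetical helper, not an OpenCV API):

```python
import numpy as np

def passes_color_filter(binary_img, center, blob_color):
    """Compare the binary image's intensity at the blob center to
    blobColor, as the SimpleBlobDetector docs describe the filter."""
    y, x = center
    return int(binary_img[y, x]) == blob_color

# A single light blob in a dark binary image, centered at (2, 2).
binary = np.zeros((5, 5), dtype=np.uint8)
binary[2, 2] = 255
light_ok = passes_color_filter(binary, (2, 2), 255)  # blobColor = 255
dark_ok = passes_color_filter(binary, (2, 2), 0)     # blobColor = 0
```

This is why only 0 and 255 are meaningful values: anything else (like 44) can never equal a pixel of a binary image.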

Using Image recognition inside a Video with Vuforia and Unity

I'm trying to do image recognition/tracking inside of a video file played from Unity. Is it possible to do image recognition on a video file (not augmented reality) using the Vuforia API?
If not, does anyone else have any suggestions on how to accomplish this?
Thanks!
If you want to recognize a particular frame in your video stream, the simplest effective solution is matching histograms of your sample frame and the frames in your video stream. I don't know whether it can be done using the Vuforia API, but if you are interested in implementing the image-processing algorithm yourself, the process is quite simple:
1) Convert your sample image to grayscale (if it's a color image).
2) Calculate the image histogram for a certain number of bins.
3) Store this histogram in a variable.
4) Now run your video file in a loop, extract frames from it, apply the above three steps, and get a histogram of the same size as the sample image's.
5) Find the distance between the two histograms using a simple sum of squares and set a similarity threshold; if the distance is less than your threshold, the frame is quite similar to the sample image.
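The five steps above can be sketched with NumPy (`gray_hist` and `hist_distance` are hypothetical helper names; the similarity threshold would be applied to the returned distance):

```python
import numpy as np

def gray_hist(img, bins=32):
    """Histogram of a grayscale image, normalized so image size
    does not influence the comparison."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def hist_distance(h1, h2):
    """Sum-of-squares distance between two same-size histograms."""
    return float(np.sum((h1 - h2) ** 2))

# Demo on synthetic grayscale data standing in for a sample frame.
sample = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
same = hist_distance(gray_hist(sample), gray_hist(sample))
other = hist_distance(gray_hist(sample), gray_hist(255 - sample))
```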
Another approach may be:
1) Find the color covariance matrix of the input sample (if it's a color image).
2) To do this, convert each color channel (R, G, B) into a column vector and put them column-wise in a single variable, e.g. [R,G,B].
3) Take the column-wise mean and subtract it from each value of the respective column (centralize your data around the mean).
4) Now transpose your 3-column matrix and multiply:
Cov = [R,G,B]^T * [R,G,B];
5) The above gives you a 3-by-3 matrix.
6) Do the above for each frame and find the distance between the covariance matrix of the sample image and that of the query frame; set a threshold on it to measure similarity.
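The covariance steps can be sketched as follows (`color_covariance` is a hypothetical helper; dividing by the number of pixels is one normalization choice):

```python
import numpy as np

def color_covariance(img):
    """3x3 covariance matrix of the R, G, B channels of an HxWx3 image."""
    X = img.reshape(-1, 3).astype(np.float64)  # one column per channel
    X -= X.mean(axis=0)                        # centralize around the mean
    return X.T @ X / len(X)                    # Cov = X^T * X (normalized)

# Demo: distance between a frame's covariance matrix and itself is zero.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
cov = color_covariance(frame)
dist = float(np.linalg.norm(color_covariance(frame) - cov))
```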
A further extension of the above may be finding the eigenvalues of the covariance matrix and then using them as features for the similarity calculation.
You can also try extracting a color histogram rather than a grayscale histogram.
For more complex situations you can go with key-point detection and matching approaches.
Thank You

How to classify RGB images based on color?

For RGB images I split each one into an R band, G band and B band, then thresholded each color band and took the average of the three bands. Is this procedure wrong? I am not getting correct results.
What is the correct procedure to classify similar images with different colors based on their RGB values, so that I can get classes of differently colored images?
Thanks
The method you described is not wrong, but there are more reliable ways. The difficulty with RGB is that the response is complex when the intensity of the lighting changes. Instead, I would convert the image to the Lab colour space. I would then set a threshold on a and b to find the colours. You also need to make sure L has a minimum value, since a and b are poorly defined for black colours.
If you are classifying images with multiple colours I would consider using two histograms of a and b and measuring the Euclidean distance between your test image and the training set.
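A sketch of that comparison, assuming the images have already been converted to Lab (e.g. with cv2.cvtColor and COLOR_BGR2LAB, which maps a and b into 0-255 for 8-bit images); `ab_signature` and `ab_distance` are hypothetical helpers:

```python
import numpy as np

def ab_signature(lab_img, bins=16):
    """Concatenated, normalized histograms of the a and b channels
    of an HxWx3 Lab image; the L channel is deliberately ignored."""
    a_hist, _ = np.histogram(lab_img[..., 1], bins=bins, range=(0, 256))
    b_hist, _ = np.histogram(lab_img[..., 2], bins=bins, range=(0, 256))
    sig = np.concatenate([a_hist, b_hist]).astype(np.float64)
    return sig / sig.sum()

def ab_distance(img1, img2):
    """Euclidean distance between the a/b signatures of two Lab images."""
    return float(np.linalg.norm(ab_signature(img1) - ab_signature(img2)))

# Demo on synthetic data standing in for a Lab image.
rng = np.random.default_rng(2)
lab = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
d_same = ab_distance(lab, lab)
```

At classification time you would compute `ab_distance` against each training image (or each class centroid) and pick the smallest.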

Threshold to amplify black lines

Given an image (like the one given below) I need to convert it into a binary image (black and white pixels only). This sounds easy enough, and I have tried two thresholding functions. The problem is I can't get perfect edges using either of them. Any help would be greatly appreciated.
The filters I have tried are the Euclidean distance in the RGB and HSV spaces.
Sample image:
Here it is after running an RGB threshold filter (at 40%; there are more artefacts after this).
Here it is after running an HSV threshold filter (at 30% the paths become barely visible, but it is clearly unusable because of the noise).
The code I am using is pretty straightforward: change the input image to the appropriate color space and check the Euclidean distance to the black color,
sqrt(R*R + G*G + B*B)
since I am comparing with black (0, 0, 0).
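Written out as a sketch (`black_distance_mask` and the threshold value are hypothetical):

```python
import numpy as np

def black_distance_mask(img, thresh):
    """Per pixel, sqrt(R^2 + G^2 + B^2) is the Euclidean distance from
    black (0, 0, 0); pixels closer than thresh become black (0)."""
    dist = np.sqrt(np.sum(img.astype(np.float64) ** 2, axis=-1))
    return np.where(dist < thresh, 0, 255).astype(np.uint8)

# Demo: one near-black pixel and one bright pixel.
img = np.array([[[10, 10, 10], [200, 200, 200]]], dtype=np.uint8)
mask = black_distance_mask(img, thresh=100.0)
```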
Your problem appears to be the variation in lighting over the scanned image which suggests that a locally adaptive thresholding method would give you better results.
The Sauvola method calculates the value of a binarized pixel based on the mean and standard deviation of pixels in a window of the original image. This means that if an area of the image is generally darker (or lighter) the threshold will be adjusted for that area and (likely) give you fewer dark splotches or washed-out lines in the binarized image.
http://www.mediateam.oulu.fi/publications/pdf/24.p
I also found a method by Shafait et al. that implements the Sauvola method with greater time efficiency. The drawback is that you have to compute two integral images of the original, one at 8 bits per pixel and the other potentially at 64 bits per pixel, which might present a problem with memory constraints.
http://www.dfki.uni-kl.de/~shafait/papers/Shafait-efficient-binarization-SPIE08.pdf
I haven't tried either of these methods, but they do look promising. I found Java implementations of both with a cursory Google search.
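A minimal sketch of the Sauvola threshold T = m·(1 + k·(s/R − 1)), computed from the two integral images (of the image and of its square) that make the Shafait variant fast. The window size and k below are typical values, not taken from either paper:

```python
import numpy as np

def sauvola_threshold(img, w=5, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k * (s / R - 1)),
    where m and s are the mean and std-dev of the w x w window,
    both derived from integral images (cumulative sums)."""
    img = img.astype(np.float64)
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    # Integral images of the padded image and of its square.
    I = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    I2 = np.pad(p * p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, wd = img.shape
    area = w * w

    def box(S):
        # Windowed sum for every pixel from an integral image.
        return (S[w:w + h, w:w + wd] - S[:h, w:w + wd]
                - S[w:w + h, :wd] + S[:h, :wd])

    m = box(I) / area
    s = np.sqrt(np.maximum(box(I2) / area - m * m, 0))
    return m * (1 + k * (s / R - 1))

# Demo: on a flat image (s = 0) the threshold collapses to m * (1 - k).
img = np.full((8, 8), 100, dtype=np.uint8)
T = sauvola_threshold(img)
binary = (img > T).astype(np.uint8) * 255
```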
Running an adaptive threshold over the V channel of the HSV color space should produce brilliant results. The best results come with a window larger than 11x11; don't forget to choose a negative value for the threshold constant.
Adaptive thresholding basically is:
if (pixel value + constant > average pixel value in the window around the pixel)
    Pixel_Binary = 1;
else
    Pixel_Binary = 0;
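Following that pseudocode literally, a windowed-mean version can be sketched as below (`adaptive_mean_threshold` is a hypothetical helper with its own small positive constant; a real implementation would use cv2.adaptiveThreshold):

```python
import numpy as np

def adaptive_mean_threshold(img, w=11, c=2):
    """Binary(x) = 255 if pixel + c > mean of the w x w window around it,
    i.e. the pseudocode above with a small positive constant c."""
    img = img.astype(np.float64)
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    # Windowed mean via brute-force shifted sums (fine for a sketch).
    acc = np.zeros_like(img)
    for dy in range(w):
        for dx in range(w):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean = acc / (w * w)
    return np.where(img + c > mean, 255, 0).astype(np.uint8)

# Demo: a dark horizontal line on a bright background survives thresholding
# as black (0) while the flat background comes out white (255).
img = np.full((20, 20), 200, dtype=np.uint8)
img[8:12, :] = 50
out = adaptive_mean_threshold(img)
```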
Due to the noise and the illumination variation you may need adaptive local thresholding (thanks to Beaker for his answer, too).
I therefore tried the following steps:
Convert the image to grayscale.
Do mean or median local thresholding; I used 10 for the window size and 10 for the intercept constant and got this image (smaller values might also work):
Please refer to http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm if you need more information on these techniques.
To make sure the thresholding was working fine, I skeletonized the result to see if there were any line breaks. This skeleton may be the one needed for further processing.
To get rid of the remaining noise you can just keep the longest connected component in the skeletonized image.
Thank you.
You probably want to do this as a three-step operation.
Use leveling, not just thresholding: take the input and scale the intensities (gamma-correct) with parameters that simply dull the mid tones, without removing the darks or the lights (your RGB threshold is too strong, for instance; you lost some of your lines).
Edge-detect the resulting image using a small convolution kernel (5x5 should be more than enough for binary images). Use a simple [1 2 3 2 1 ; 2 3 4 3 2 ; 3 4 5 4 3 ; 2 3 4 3 2 ; 1 2 3 2 1] kernel (normalised).
Threshold the resulting image. You should now have a much better binary image.
You could try a black top-hat transform. This involves subtracting the image from the closing of the image. I used a structuring-element window size of 11 and a constant threshold of 0.1 (25.5 on a 0-255 scale).
You should get something like:
Which you can then easily threshold:
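A sketch of the black top-hat (closing minus image) with a flat structuring element; `_morph` and `black_tophat` are hypothetical helpers, and production code would use cv2.morphologyEx with cv2.MORPH_BLACKHAT:

```python
import numpy as np

def _morph(img, w, op):
    """Grayscale morphology with a flat w x w structuring element:
    op = np.max gives dilation, op = np.min gives erosion."""
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    h, wd = img.shape
    stack = [p[dy:dy + h, dx:dx + wd] for dy in range(w) for dx in range(w)]
    return op(np.stack(stack), axis=0)

def black_tophat(img, w=11):
    """closing(img) - img: bright wherever the image has thin dark detail."""
    closing = _morph(_morph(img, w, np.max), w, np.min)  # dilate, then erode
    return closing - img

# Demo: a thin dark line on a bright background lights up in the top-hat.
img = np.full((20, 20), 200, dtype=np.int32)
img[10, :] = 50
th = black_tophat(img)
lines = np.where(th > 25, 255, 0)   # the 0.1 threshold, on a 0-255 scale
```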
Best of luck.

OpenCv: Get over the lightness effect on the color histogram

I'm using OpenCv. For the purpose of comparison, I have to fetch data about the color histogram of an image.
In detail, I have a large number of images which I organize into many subsets, each subset consisting of a group of similar images. My goal is to be able to take a new image and determine the subset it belongs to, based on color similarity.
Now, I know how to build the histogram of an image, but my problem is how to reduce as much as possible the effect of the image's lightness on the color histogram. I have thought about using cvEqualizeHist() before calculating the histogram, but since I'm pretty new to OpenCv I'm not sure what the best way is.
Any advice is very much appreciated.
Transform your image from RGB to HSV color space using cvtColor() with the CV_BGR2HSV or CV_RGB2HSV option. H, S and V stand for Hue, Saturation and Value (intensity) respectively. Use color histograms in this HSV space, and use only a couple of bins for the V channel.
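A sketch of such a histogram with coarse V binning (`hsv_signature` is a hypothetical helper; note that OpenCV stores H as 0-179 for 8-bit images, and cv2.calcHist could build the same thing):

```python
import numpy as np

def hsv_signature(hsv_img, h_bins=18, s_bins=8, v_bins=2):
    """3-D HSV histogram with many H/S bins but only 2 V bins, so
    lightness contributes little to the comparison. Normalized to sum 1."""
    h, _ = np.histogramdd(
        hsv_img.reshape(-1, 3).astype(np.float64),
        bins=(h_bins, s_bins, v_bins),
        range=((0, 180), (0, 256), (0, 256)),  # OpenCV 8-bit H is 0-179
    )
    return (h / h.sum()).ravel()

# Demo on synthetic data standing in for an HSV image.
rng = np.random.default_rng(3)
hsv = rng.integers(0, 180, (32, 32, 3), dtype=np.uint8)
sig = hsv_signature(hsv)
```

Signatures of two images can then be compared with any histogram distance to decide which subset the new image belongs to.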
