Traffic Signal Detection with a Cascade Classifier in OpenCV

I am working on "Controlling a Raspberry Pi's GPIO pins according to changes in traffic lights (red, green, yellow)". Right now, I am focusing only on the traffic light detection part. For that, I am using a cascade classifier with Haar features.
I have 2000 negative sample images, which I have converted to grayscale and resized to 120 x 120. I also have ONE positive image of a traffic signal (40 x 120), from which I am generating 2000 positive samples. Finally, I am training my classifier using 2000 positive samples and 1000 negative samples with 10 stages.
My output for some test images looks like the following:
Output 1
output 2
Output 3
Image from which I created positive samples:
positive image
I have some questions/doubts and need some suggestions to improve or modify my classifier.
1) Do I need to use more than one positive image to create the samples?
2) Why am I not able to detect all the traffic signals in the above images?
3) Am I doing something wrong with the image size/shape or anything else?
4) Please correct me on this point if I am wrong: to draw a rectangle over the traffic signal, I am using the cv2.rectangle function with constant height/width parameters, and that's the ONLY reason it draws a big rectangle regardless of how near/far the traffic signal is in the image! Any suggestions on how to change this size dynamically?
Thank you.

To me, it looks like your classifier has not learned enough.
1) I strongly suggest taking 20-50 sample images of traffic lights instead of one. You can still generate thousands of training samples from them.
2) Most likely because of inadequate training, but you should also check the parameters in the detection stage. What minimum and maximum sizes have you set for detection?
3) You don't have to reshape or resize the image, so that should not be the problem.
The detector returns the position (x, y) and the size (width, height) of every object it detects, so you can set the rectangle size dynamically instead of using a constant width and height. Please refer to the OpenCV Haar face detection example in the language of your choice.
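For the dynamic rectangle size in particular, here is a minimal sketch of using the (x, y, w, h) values returned by detectMultiScale; the cascade/image file names and the size limits are placeholders, not values from the question:

import cv2

# Hypothetical file names; substitute your own trained cascade and test image.
cascade = cv2.CascadeClassifier('traffic_light_cascade.xml')
img = cv2.imread('test_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detection comes back as (x, y, w, h), so the rectangle follows the
# detected object's size instead of a hard-coded width/height.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(10, 30), maxSize=(80, 240))
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('result.jpg', img)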

Related

Haar Classifier positive image set clarification

Could you please help me understand several points related to Haar classifier training:
1) Should the positive images contain only the training object, or can they contain other objects as well? For example, if I want to recognize a traffic sign, should the positive image contain only the traffic sign, or can it also contain the highway?
2) There are two ways of creating the samples vector file: one uses an info file, which contains the object coordinates in each positive image, and the other just takes a list of positives and negatives. Which one is better?
3) How do you usually create the info file that contains the object coordinates in the positive images? Can image clipper generate the object coordinates?
Also, does dlib's histogram of oriented gradients (HOG) detector provide better results than a Haar classifier?
My target is traffic sign detection on a Raspberry Pi.
Thanks
The positive sample (not necessarily the image) should contain only the object. Sometimes it is not possible to get the right aspect ratio for each positive sample; then you either add some background or crop away some of the object boundary. The final detector will detect regions with the aspect ratio of your positive samples, so if you use a lot of background around all of your positive samples, your final detector will probably not detect a region containing just your traffic sign, but a region with a lot of background around it.
AFAIK, the positive samples must be provided via a .vec file, which is created with opencv_createsamples.exe, and you'll need a description file (where in the images are your positive samples?). I typically preprocess my labeled training samples and crop away all the background, so that I end up with intermediate images where the positive sample fills the whole image and the image already has the right aspect ratio. I then fill a text file with basically "folder/filename.png 1 0 0 width height" for each of those intermediate images and create a .vec file from them. But the other way, using real ROI information from full-size images, should give the same quality.
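As a rough sketch (folder and file names here are placeholders, not from the answer above), the info file for such pre-cropped intermediate images can be generated like this:

import os
import cv2

# Hypothetical layout: 'positives/' holds the cropped intermediate images
# in which the object fills the whole frame.
with open('positives.info', 'w') as info:
    for name in sorted(os.listdir('positives')):
        path = os.path.join('positives', name)
        img = cv2.imread(path)
        if img is None:
            continue
        h, w = img.shape[:2]
        # createsamples info format: <path> <object count> <x> <y> <width> <height>
        info.write('{} 1 0 0 {} {}\n'.format(path, w, h))

# The .vec file is then built with the opencv_createsamples tool, for example:
#   opencv_createsamples -info positives.info -vec positives.vec -num <N> -w 24 -h 24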
Be aware that if you don't fix the same aspect ratio for all positive samples, you'll stretch your objects, which may or may not be a problem for your task.
Also keep in mind that you can create additional positive samples by warping/transforming your images. opencv_createsamples can do that for you, but I have never really used it, so I'm not sure whether training benefits from such samples.

Pixel wise classification using Convolutional Neural Network?

The question is conceptual. I basically understand how the MNIST example works: the feedforward net takes an image as input and outputs a predicted label from 0 to 9.
I'm working on a project that, ideally, will take an image as input and, for every pixel in that image, output the probability of that pixel having a certain label or not.
So my input, for example, is of size 600 * 800 * 3 pixels, and my output would be 600 * 800, where every single entry is a probability.
How can I design a pipeline for that using a convolutional neural network? I'm working with TensorFlow. Thanks
Elaboration:
Basically, I want to label every pixel as either foreground or background (i.e. the probability of the pixel being foreground). My intuition is that in the convolutional layers, the neurons will be able to pick up information from a patch around each pixel and finally be able to tell how likely that pixel is to be foreground.
Although it wouldn't be very efficient, a naive method would be to color a window (say, 5 px x 5 px) of pixels black, record the probabilities for each output class, then slide the window over a bit and record again. This would be repeated until the window has passed over the whole image.
Now we have some interesting information. For each window position, we know the delta of the probability distribution over the labels compared to the probabilities when the classifier received the whole image. That delta corresponds to how much that region contributed to the classifier's decision.
If you want this mapped down to a per-pixel level for visualization purposes, you could use a stride length of 1 pixel when sliding the window and map the probability delta to the centermost pixel of the window.
Note that you don't want to make the window too small, otherwise the deltas will be too small to make a difference. Also, you'll probably want to be a bit smart about how you choose the color of the window so the window itself doesn't appear to be a feature to the classifier.
Edit in response to your elaboration:
This would still work for what you're trying to do. In fact, it becomes a bit nicer: instead of keeping all the label probability deltas separate, you would sum them. This gives you a measurement that tells you "how much does this region make the image more like a number" (or, in your case, the foreground). Also, you wouldn't measure the deltas against the uncovered image, but rather against the vector of probabilities where P(x) = 0 for each label.
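A rough sketch of the sliding-occlusion idea described above; predict_probs is a placeholder for whatever trained model you have (anything that maps one image to a vector of class probabilities), and the deltas here are taken against the uncovered image as in the first part of the answer:

import numpy as np

def occlusion_map(image, predict_probs, window=5, stride=1, fill=0.0):
    # predict_probs(img) is a hypothetical hook for your trained classifier:
    # it should return a 1-D vector of class probabilities for one image.
    h, w = image.shape[:2]
    base = predict_probs(image)                   # probabilities for the untouched image
    heat = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = fill   # blank out one window
            probs = predict_probs(occluded)
            # Summed probability delta, mapped to the window's center pixel.
            heat[y + window // 2, x + window // 2] = np.sum(base - probs)
    return heat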

collect negative samples of adaboost algorithm for face detection

The Viola-Jones AdaBoost method is very popular for face detection. We need lots of positive and negative samples to train a face detector.
The rule for collecting positive samples is simple: images which contain faces. But the rule for collecting negative samples is not so clear: images which do not contain faces.
There are so many scenes that do not contain faces (sky, rivers, houses, animals, etc.). Which should I collect? How can I know when I have collected enough negative samples?
One suggested idea for negative samples: take the positive samples, crop out the face region, and use the remaining parts as negative samples. Does this work?
You have asked many questions inside your thread.
Amount of samples. As a rule of thumb: when you train a detector, you need roughly a few thousand positive and negative examples per stage. A typical detector has 10-20 stages, and each stage reduces the number of negatives by a factor of 2. So you will need roughly 3,000-10,000 positive examples and ~5,000,000 to 100,000,000 negative examples.
Which negatives to take. A rule of thumb: you need to find faces in a given environment, so take that environment as your negative examples. For instance, if you are trying to detect faces of students sitting in a classroom, then take negative examples from images of the classroom (walls, windows, human bodies, clothes, etc.). Taking images of the moon or of the sky will probably not help you. If you don't know your environment, then just take as many different natural images as possible (under different lighting conditions).
Should you take facial parts (like an eye or a nose) as negatives? You can, but it is definitely not enough to take only those negatives. The real strength of the detector will come from the negative images that represent the typical background of the faces.
How to collect/generate negative samples. You don't actually need many negative images. You can take 1,000 images and generate 10,000,000 negative samples from them. Here is how. Suppose you take a 1-megapixel photo of a car, 1000x1000 pixels. Suppose you want to train your face detector to work at a resolution of 20x20 pixels (like OpenCV does). Cut your 1000x1000 image into 20x20 pieces and you get 2,500 pieces (50x50 of them). That is how a single big image yields 2,500 negative examples. Now take the same big image and cut it into 10x10 pieces; you get an additional 10,000 negative examples. Each of these is 10x10 pixels, and you can enlarge it by a factor of 2 so that all the samples have the same size. You can repeat this process as much as you want (cutting the input image into pieces of different sizes). Mathematically speaking, if your image is of size NxN, you can generate O(N^4) negative examples from it by taking every possible rectangle inside it.
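A minimal sketch of this tiling idea (the patch sizes, output resolution, and paths are illustrative, not prescribed by the answer):

import os
import cv2

def cut_negatives(big_image_path, out_dir, patch_sizes=(20, 10), final_size=20):
    # Cut one large, face-free image into many fixed-size negative patches.
    img = cv2.imread(big_image_path, cv2.IMREAD_GRAYSCALE)
    os.makedirs(out_dir, exist_ok=True)
    count = 0
    for size in patch_sizes:
        for y in range(0, img.shape[0] - size + 1, size):
            for x in range(0, img.shape[1] - size + 1, size):
                patch = img[y:y + size, x:x + size]
                # Rescale so every negative sample ends up the same size.
                patch = cv2.resize(patch, (final_size, final_size))
                cv2.imwrite(os.path.join(out_dir, 'neg_%06d.png' % count), patch)
                count += 1
    return count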
In the point above about generating negatives, I described how to take a single big image and cut it into a large number of negative examples. I must warn you that negative examples should not have high covariance, so I don't recommend taking only one image and generating 1 million negative examples from it. As a rule of thumb, create a library of 1,000 images (or download random images from Google), verify that none of them contains faces, and crop about 10,000 negative examples from each image; now you have a decent 10,000,000 negative examples. Train your detector. In the next step you can cut each image into ~50,000 (partially overlapping) pieces and thus grow your number of negatives to 50 million. You will start getting very good results with this.
Final enhancement step of the detector. Once you have a reasonably good detector, run it on many images. It will produce false detections (detecting a face where there is no face). Gather all those false detections, add them to your negative set, and retrain the detector. The more such iterations you do, the better your detector becomes.
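A sketch of this hard-negative mining loop (the cascade file and folder names are hypothetical):

import glob
import os
import cv2

detector = cv2.CascadeClassifier('detector.xml')      # your current detector
os.makedirs('hard_negatives', exist_ok=True)

count = 0
for path in glob.glob('negatives_src/*.jpg'):          # images known to contain no faces
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Anything detected here is by definition a false positive.
    for (x, y, w, h) in detector.detectMultiScale(gray):
        cv2.imwrite('hard_negatives/fp_%06d.png' % count, gray[y:y + h, x:x + w])
        count += 1
# Add the collected patches to the negative set and retrain.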
Real numbers. The best face detectors today (like Facebook's) use hundreds of millions of positive examples and billions of negatives. As positive examples they take not only frontal faces but faces in many orientations, with different facial expressions (smiling, shouting, angry, ...), different age groups, different genders, different races (Caucasian, Black, Thai, Chinese, ...), with or without glasses/hats/sunglasses/make-up, etc. You will not be able to compete with the best, so don't get angry if your detector misses some faces.
Good luck

SVM classifier testing works only on images bigger than the training set

I'm implementing, for the first time, software for object detection in static images. My first goal is to detect simple circles; then I'll move on to more complex objects. Unfortunately, it seems I have a problem when validating my classifier.
My choice was to use a HOG descriptor (using OpenCV) and an SVM as the classifier (using SVMlight). The code compiles and works, but there is something that seems odd to me, probably concerning the SVM.
I have:
a training set composed of 5 images (48x48 px) of different circles and 5 images (48x48 px) of non-circles (I know that is too few to get a solid classifier, but up to now it's just to test that everything works)
a test set composed of 4 images of 48x48 px (with circles as big as the ones used for training) and 1 much bigger image (765x600 px) with circles of multiple sizes and other geometric shapes.
What happens is that:
the circles in the test set are not detected when the images are 48x48, even though the test set contains some images that were used in the training phase.
in the 765x800 image (which contains circles of all sizes), the circles which are the same size as the ones in the training set, or bigger, are correctly identified.
I'm using the following parameters:
hog: winSize=48x48px, winStride=4x4px, cellSize=4px, blockSize=8px, blockStride=4x4px
classifier: SVM regression with a linear kernel and C=0.01 (RBF results are worse than linear).
This is the API call which performs the detection, with the parameters I'm using:
vector<Rect> found;
double hitThreshold = 0.;      // threshold on the SVM decision function (tolerance)
Size win_stride(4, 4);         // detection window stride (winStride from above)
Size padding(32, 32);          // padding added around each detection window
double scale = 1.05;           // image pyramid scale step
int groupThreshold = 2;        // grouping threshold for merging overlapping detections
hog.detectMultiScale(testImg, found, hitThreshold, win_stride, padding, scale, groupThreshold);
Is there any reason why the circles in the 48x48 px images are not detected while the circles in the bigger image are? I expected the 48x48 px images to be correctly classified in order to validate the classifier. I added the bigger image only after nothing was detected in the 48x48 px images.
Besides, what seems even stranger is that the 48x48 px test set contains some images that were also used in the training set, and I would expect those to be identified, yet they are not! (I know that the training set and the test set must be different, but I did that only after nothing was detected.)
This is my first experience with HOG descriptors and SVMs, so it might not be working because of a configuration error or the choice of images.
Any help is welcome!
Thanks in advance :)

Determine if an image needs contrasting automatically in OpenCV

OpenCV has a handy cvEqualizeHist() function that works great on faded/low-contrast images.
However, when an already high-contrast image is given, the result is a low-contrast one. I understand the reason: the histogram gets spread out evenly, and so on.
The question is: how do I tell the difference between a low-contrast and a high-contrast image?
I'm operating on grayscale images and setting their contrast properly so that thresholding them won't delete the text I'm supposed to extract (that's a different story).
Suggestions welcome, especially on how to find out whether the majority of the pixels in the image are light gray (which would mean that histogram equalization should be performed).
Please help!
EDIT: Thanks everyone for the many informative answers. The standard deviation calculation was sufficient for my requirements, so I'm taking that as the answer to my query.
You can probably just use a simple statistical measure of the image to determine whether it has sufficient contrast. The variance of the image would be a good starting point: if it is below a certain threshold (to be determined empirically), you can consider the image "low contrast".
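A minimal sketch of that check (the file name and the threshold value are placeholders to be tuned on your own images):

import cv2
import numpy as np

def needs_equalization(gray_image, std_threshold=40.0):
    # Low standard deviation (and thus variance) suggests a low-contrast image.
    return np.std(gray_image) < std_threshold

img = cv2.imread('page.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
if needs_equalization(img):
    img = cv2.equalizeHist(img)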
If you're adjusting contrast just so you can threshold later on, you may be able to avoid the contrast adjustment step altogether by setting your threshold adaptively using Otsu's method.
If you're still interested in finding out the image contrast, then read on.
There are a number of different ways to calculate "contrast". Often, those metrics are applied locally rather than to the entire image, to make the result more sensitive to image content:
Divide the image into adjacent, non-overlapping neighborhoods.
Pick neighborhood sizes that approximate the size of the features in your image (e.g. if your main feature is horizontal text, make the neighborhoods tall enough to capture two lines of text, and just as wide).
Apply the metric to each neighborhood individually.
Threshold the metric results to separate low- and high-variance blocks. This prevents things like large blank areas of the page from skewing your contrast estimates.
From there, you can use a number of features to determine contrast:
The proportion of high metric blocks to low metric blocks
High metric block mean
Intensity distance between the high and low metric blocks (using means, modes, etc)
This may serve as a better indication of image contrast than global image variance alone. Here's why:
(Two example images, not shown here: the first with stddev 50.6, the second with stddev 7.9.)
The two images are perfectly in contrast (the grey background is just there to make it obvious it's an image), but their standard deviations (and thus variance) are completely different.
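A rough sketch of the block-wise approach described above (the block size and variance threshold are illustrative and should be matched to the size of the features in your images):

import numpy as np

def block_contrast_features(gray, block=32, low_var_threshold=20.0):
    # Split the image into non-overlapping blocks and compute per-block statistics.
    h, w = gray.shape[:2]
    stds, means = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block].astype(np.float32)
            stds.append(patch.std())
            means.append(patch.mean())
    stds, means = np.array(stds), np.array(means)
    busy = stds > low_var_threshold        # ignore blank regions of the page
    return {
        # Proportion of high-metric blocks among all blocks.
        'high_block_fraction': float(busy.mean()),
        # Mean of the metric over the high-metric blocks.
        'high_block_mean_std': float(stds[busy].mean()) if busy.any() else 0.0,
    }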
Calculate the cumulative histogram of the image.
Fit a linear regression to the cumulative histogram, of the form y(x) = A*x + B.
Calculate the RMSE of real_cumulative_frequency(x) - y(x).
If that RMSE is close to zero, the image is already equalized. (This means that for equalized images the cumulative histogram must be linear.)
The idea is taken from here.
EDIT:
I've illustrated this approach in my blog (C example code included).
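A small sketch of that check (the RMSE threshold is a placeholder to be tuned on your data; the code assumes 8-bit grayscale input):

import cv2
import numpy as np

def is_already_equalized(gray, rmse_threshold=0.05):
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    cdf = np.cumsum(hist) / hist.sum()          # cumulative frequency, normalized to [0, 1]
    x = np.arange(256)
    # Least-squares fit of the line y = A*x + B to the cumulative histogram.
    A, B = np.polyfit(x, cdf, 1)
    rmse = np.sqrt(np.mean((cdf - (A * x + B)) ** 2))
    return rmse < rmse_threshold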
There is support for this in skimage: skimage.exposure.is_low_contrast (see the reference documentation).
Example:
>>> image = np.linspace(0, 0.04, 100)
>>> is_low_contrast(image)
True
>>> image[-1] = 1
>>> is_low_contrast(image)
True
>>> is_low_contrast(image, upper_percentile=100)
False
