I'm trying to implement a license plate detection algorithm, so far I have narrowed it down to a few interest regions:
My next step would be to classify each interest region and ignore the false ones. I was thinking I could check each region for characters: if the region contains some characters, it is a plate; otherwise, it's a false region. How would I go about checking for characters?
Another approach I can think of is to use PCA to determine whether a region contains a plate, but I have no idea how to do that in OpenCV.
Text detection is not an easy task at all. It may be harder than the whole plate detection pipeline you are building. I can suggest a simple, somewhat hacky approach:
Find contours inside each region.
Find the bounding rectangle around each contour.
Delete very small or very large rectangles.
Check whether the remaining contours are arranged along a straight line. The region whose contours line up in a roughly linear way is the plate (see the sketch below).
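A minimal Python/OpenCV sketch of that idea; the size thresholds, the minimum blob count and the line-fit tolerance are made-up values you would have to tune for your images:

```python
import cv2
import numpy as np

def looks_like_plate(region_bgr, min_area=50, max_area=5000, max_dev=0.15):
    """Return True if the region contains several character-sized contours
    whose centers lie roughly on one straight line."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV 4 return signature (OpenCV 3 also returns the image)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    centers = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = w * h
        # drop very small or very large rectangles
        if area < min_area or area > max_area:
            continue
        centers.append((x + w / 2.0, y + h / 2.0))

    if len(centers) < 4:          # too few character-like blobs
        return False

    # fit a line through the centers and measure how far they scatter from it
    pts = np.array(centers, dtype=np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
    d = np.abs(vy * (pts[:, 0] - x0) - vx * (pts[:, 1] - y0))
    return np.mean(d) < max_dev * region_bgr.shape[0]
```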
There is a technique called MSER (maximally stable extremal regions). It detects connected blobs of similar colour (which individual letters usually are). You then classify these blobs to tell whether or not they are letters; with that classification step on top it's called CSER (class-specific extremal regions).
See OpenCV doc, Wikipedia
Also there is a Quasi-Linear version of the algorithm, if you feel like implementing it yourself.
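If you just want to see what MSER gives you, OpenCV ships an implementation. A minimal sketch (the input file name is hypothetical and the letter/non-letter classification is left out):

```python
import cv2

img = cv2.imread("plate_candidate.png")      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                     # default parameters
regions, bboxes = mser.detectRegions(gray)   # one bounding box per stable blob

vis = img.copy()
for x, y, w, h in bboxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imshow("MSER regions", vis)
cv2.waitKey(0)
```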
I have to implement contour detection of a full human body (from feet to head, in several poses such as raising hands, etc.) using OpenCV. I managed to compile and run the code I found here https://gist.github.com/yoggy/1470956, but it only draws a rectangle around the body, not the exact contour. Can anyone help me with identifying and displaying the contour itself?
Thanks!!
I'm afraid the answer to this question is:
There's no algorithm that can do this perfectly.
Computer vision has not developed to that extent yet. Take a look at recent papers in CVPR and PAMI and you will find that most algorithms are "rectangle", or more specifically, bounding-box based, both in terms of human labeling and algorithmic detection.
It is true that you can find the contours within the bounding box. However, the computer just doesn't know which contour belongs to the specified object.
I suggest you search for "human pose estimation" for further information.
One approach that might work is background subtraction:
http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html
This would work for video, but perhaps also for single images in a scenario where you were in a controlled (fixed camera) environment and had an image of the pose as well as an image of the background with no one present.
You can then use the function findContours within the returned bounding box:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
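A rough Python sketch combining the two ideas: MOG2 background subtraction to get a foreground mask, then findContours on that mask. The video file name is hypothetical, and "largest contour = person" is a simplification:

```python
import cv2

cap = cv2.VideoCapture("walking.mp4")        # hypothetical video file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = subtractor.apply(frame)
    # clean up the foreground mask a little
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        body = max(contours, key=cv2.contourArea)   # assume the person is the largest blob
        cv2.drawContours(frame, [body], -1, (0, 255, 0), 2)

    cv2.imshow("contour", frame)
    if cv2.waitKey(30) & 0xFF == 27:                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```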
First of all, I've only been learning about image processing, neural networks, etc. by myself for a couple of weeks, so I'm really new and far from a pro. Sorry for my bad English.
There's an image/photo of my drawing. I want to get the coordinates of each object/shape (black dot) and the number next to it; the number indicates the sequence number of the dot.
How do I get that? How do I detect the dots? Shape recognition for the dots? Handwritten digit recognition for the numbers? Then segmentation to get the positions? Or should I use template matching? But every dot has a slightly different shape because it is hand-drawn. Should I use a neural network? In a NN the input neurons usually correspond to the pixels of a character in order to recognize it, right? Can I feed a picture of a character or a drawn dot into each neuron to recognize my whole picture?
I'm very new, so I really need your advice; correct me if I'm wrong! Please tell me what I must learn, what I must do, and what I must use.
Thank you very much. :'D
This is a difficult problem which can't be solved by a quick solution.
Here is how I would approach it:
Get a better picture. Your image is very noisy and is taken in low light with high ISO. Use a better camera and better lighting conditions so you can get the background to be as white as possible and the dots as black as possible. Try to maximize the contrast.
Threshold the image so that all the background is white and the dots and numbers are black. Maybe you could apply some erosion and/or dilation to help connect the dark edges together.
Detect the rectangle somehow and set your work area to be inside the rectangle (crop the rest of the image so that you are left with the area inside the rectangle). You could do this by detecting the contours in the image and then the contour that has the largest area is the rectangle (because it's the largest object in the image). Of course, this is not the only way. See this: OpenCV find contours
Once you are left with only the dots, circles and numbers you need to find a way to detect them and discriminate between them. You could again find all contours (or maybe you've found them all in the previous step). You need to figure out a way to decide whether a certain contour is a circle, a filled circle (dot) or a number. This is a problem in its own right. Maybe you could count the white/black pixels in the contour's bounding box: dots have more black pixels than circles and numbers. You also need to do something about numbers that touch dots (like the number 5 in your image).
Once you know what is a dot, circle or number you could use an OCR library (Tesseract or any other OCR lib) to try and recognize the numbers. You could also use a neural network library (maybe trained with the MNIST dataset) to recognize the digits. A good one would be a convolutional neural network similar to LeNet-5.
As you can see, this is a problem that requires many different steps to solve, and many different components are involved. The steps I suggested might not be the best, but with some work I think it can be solved.
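As a starting point, here is a hedged Python/OpenCV sketch of the thresholding, rectangle-cropping and blob-discrimination steps above. The file name and all cutoff values are invented and would need tuning on your own images:

```python
import cv2
import numpy as np

img = cv2.imread("drawing.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input

# threshold so the background is white (0 after inversion) and ink is black (255)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

# crop to the largest contour, assumed to be the surrounding rectangle
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
work = binary[y:y + h, x:x + w]

# classify each remaining blob by how "filled" its bounding box is
contours, _ = cv2.findContours(work, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    bx, by, bw, bh = cv2.boundingRect(c)
    if bw * bh < 25:                     # ignore tiny specks
        continue
    fill = cv2.countNonZero(work[by:by + bh, bx:bx + bw]) / float(bw * bh)
    if fill > 0.7:
        label = "dot"                    # mostly filled -> solid dot
    elif fill < 0.3:
        label = "circle or digit"        # mostly empty -> outline or thin strokes
    else:
        label = "unknown"
    print(label, (x + bx, y + by))
```

The "circle or digit" blobs could then be passed to Tesseract or a digit classifier as described above.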
I'm doing research in the field of emotion recognition. For this purpose I need to detect and classify particular face details like eyes, nose, mouth, etc. The standard OpenCV function for this is detectMultiScale(), but its disadvantage is that it returns a list of rectangles (video), while I'm mostly interested in particular key points: the corners of the mouth, upper and lower points, edges, etc. (video).
So, how do they do it? OpenCV is ideal, but other solutions are ok too.
To analyse such precise points, you can use Active Appearance Models (AAM). Your second video seems to be done with an AAM. Check out the Wikipedia link above, where you can find a lot of AAM tools and APIs.
On the other hand, if you can detect the mouth using a Haar cascade, apply colour filtering. Obviously the lips and the surrounding region differ in colour, so you can get a precise model of the lips and find their edges.
Check out this paper: Lip Contour Extraction
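As a rough illustration of the colour-filtering idea, here is a Python/OpenCV sketch. The HSV range below is a made-up guess for reddish lips and will certainly need tuning per person and lighting, and the input is assumed to be a mouth ROI already cut out by the Haar cascade:

```python
import cv2
import numpy as np

mouth = cv2.imread("mouth_roi.png")              # hypothetical mouth ROI
hsv = cv2.cvtColor(mouth, cv2.COLOR_BGR2HSV)

# keep reddish pixels; red wraps around hue 0, so use two ranges
mask1 = cv2.inRange(hsv, (0, 60, 60), (12, 255, 255))
mask2 = cv2.inRange(hsv, (168, 60, 60), (180, 255, 255))
mask = cv2.bitwise_or(mask1, mask2)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# the largest blob in the mask should be the lips; its contour gives the lip edge
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    lips = max(contours, key=cv2.contourArea)
    cv2.drawContours(mouth, [lips], -1, (0, 255, 0), 1)

cv2.imshow("lip contour", mouth)
cv2.waitKey(0)
```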
Does OpenCV have an implementation of shape context matching? I've only found the matchShapes() function, which does not work for me. From shape context matching I want to get a set of corresponding features. Is it a good idea to use it to compare contours and find the rotation and displacement of a detected contour across two different images?
Also, some example code would be very helpful.
I want to detect, for example, a pink square, and in the second case a pen. Other examples could be squares with some holes, stars, etc.
The basic steps of image processing are
Image Acquisition > Preprocessing > Segmentation > Representation > Recognition
And what you are asking for seems to lie within the representation part of this general pipeline. You want some features that describe the objects you are interested in, right? Before sharing what I've done for simple hand-gesture recognition, I would like you to consider what you actually need. A lot of the time, simplicity will make things much easier. Consider a fixed colour on your objects, and consider background subtraction (these two mainly tie into preprocessing and segmentation). As for representation: what features are you interested in, and can you drop the need for some of them?
My project group and I took a simple approach to preprocessing and segmentation, choosing a green glove for our hand. Here's an example of the glove, camera and detection on the screen:
We used a threshold on convexity defects, tuned to find the defects between fingers, and we computed the side ratio of a rotated rectangular bounding box to see how square our blob is. With only four different hand gestures chosen, we are able to distinguish them with just these two features.
The functions and measurements we used are all available in the OpenCV documentation on structural analysis, and accessing values in vectors (which we used a lot) is covered in the C++ documentation for std::vector.
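Our code used the C++ API, but as an outline here is a hedged Python sketch of the two measurements; the defect-depth threshold is a placeholder value:

```python
import cv2

def hand_features(mask):
    """Return (finger_defect_count, box_ratio) for a binary hand mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)

    # convexity defects: deep defects correspond to the gaps between fingers
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    finger_gaps = 0
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth / 256.0 > 20:       # depth is in 1/256 pixel units; 20 px is a guess
                finger_gaps += 1

    # rotated bounding box: ratio close to 1 means a roughly square blob (e.g. a fist)
    (cx, cy), (w, h), angle = cv2.minAreaRect(hand)
    ratio = min(w, h) / max(w, h) if max(w, h) > 0 else 0.0

    return finger_gaps, ratio
```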
I hope you can use the train of thought put into this; if you want more specific info, I'll be happy to comment. Enjoy.
For a project I have to detect a pattern and track it in space despite rotation, noise, etc.
It's highlighted with IR light and recorded with an IR camera:
Picture: https://i.stack.imgur.com/RJuVS.png
As in this picture, it will only be a very simple shape, and we can choose which one we're going to use.
I need direction on how to go about recognizing these shapes, please.
What I do currently is thresholding and erosion to get a cleaner shape, and then contour detection and polygon approximation.
What should I do then? I tried Hu moments, but the results weren't good at all.
Could you please give me a global approach to recognize and track such a pattern in space?
Can you choose which shape to project?
If so, I would recommend using a few concentric circles. Then, using the Hough transform for circles, you can easily find the center of the shape even when tracking is extremely hard (large movement / low frame rate).
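A minimal Python/OpenCV sketch of the Hough-circle idea; the file name and all parameters below are guesses you would tune for your IR images:

```python
import cv2
import numpy as np

frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR frame
blur = cv2.medianBlur(frame, 5)                             # reduce noise before Hough

circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=30, minRadius=5, maxRadius=80)

if circles is not None:
    circles = np.round(circles[0]).astype(int)   # each row is (x, y, radius)
    # concentric circles share (almost) the same center, so average the detections
    cx, cy = circles[:, 0].mean(), circles[:, 1].mean()
    print("pattern center:", cx, cy)
```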
If you must use a rectangular shape, then there is a good open-source project which does that. It is part of a project to read street signs and auto-translate them.
Here is a link: http://code.google.com/p/signfinder/
This source is not large and it would be easy to cut out the relevant part.
It uses OpenCV's "good features to track" in the CornerFinder module.
Hope it helped
It is possible; you need the following steps: thresholding the image, some morphological enhancement,
blob extraction and normalization of blob size, blob shape analysis, and comparison of the analysis results with the pattern that you want to track.
There are many methods for blob shape analysis. Simple methods: geometric dimensions, area, perimeter, circularity measurement, bit quads and others (for example, William K. Pratt, "Digital Image Processing", chapter 18). Complex methods: spatial moments, template matching, neural networks and others.
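A small Python/OpenCV sketch of the simple measurements (area, perimeter, circularity) for each extracted blob; the input file name and the circularity cutoff are arbitrary examples:

```python
import cv2
import numpy as np

mask = cv2.imread("threshold_result.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary image
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)         # True = closed contour
    if area < 20 or perimeter == 0:            # skip tiny blobs
        continue
    # circularity: 1.0 for a perfect circle, smaller for elongated shapes
    circularity = 4 * np.pi * area / (perimeter * perimeter)
    x, y, w, h = cv2.boundingRect(c)
    print("blob at", (x, y), "area", area, "circularity", round(circularity, 2))
    if circularity > 0.8:
        print("  -> looks like the circular pattern")
```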
In any event, it is very hard to answer precisely without knowing the pattern shapes that you want to track.
hope it helped