Opencv how to ignore small parts on image - image-processing

I need a little help with OpenCV; I'm a beginner and don't know all the functions yet.
I'm trying to do OCR on my licence plate, a Brazilian plate. After some image processing with cvCvtColor, cvCanny, cvFindContours and cvDrawContours, I get images like this:
It's a fake image; I put it together because I don't want to publish my real plate on the web. The real image contains only black and white. I painted some parts of this example because those are the parts I want to ignore: red is the city name, yellow is the hyphen separator, and green is the hole used to fix the plate to the car. Is there a way to ignore these small parts and keep only the big ones, so that after this filter I can run my OCR processing? Any help?

I'm not sure if it helps in other situations, but here you can remove small contours using erosion, or simply use contourArea to compute each contour's area and discard the contour if its area is too small.

Related

Image Processing - Film negative cutting

I'm trying to figure out how to automatically cut images like the one below (a negative film): basically, I want to remove the blank parts at the top and at the bottom. I'm not looking for complete code, I just want to understand a way to do it. The language is not important at this point, although I think this kind of thing is often done in Python.
I think there are several ways to do that, ranging from simple to complex. I would say you can see the problem either as detecting white rectangles or as segmenting the image.
I can suggest OpenCV (which is available in more than one language, Python among them); you can have a look here at the image processing examples.
First we need to find the white part, then remove it.
Finding the white part
Thresholding
Let's start with an easy one: thresholding
Thresholding means dividing the image into two parts (usually black and white). You do that by selecting a threshold (in your case, the threshold would be towards white, or black if you invert the image). By doing so, however, you may also threshold parts of the image you want to keep (for example the chickens and the white area above the kid). Do you have information about the position of the white stripes? Are they always at the top and bottom of the image? If so, you can apply the thresholding operation only to the top 25% and bottom 25% of the image. You would then most likely not "ruin" the rest of the image.
Finding the white rectangle
If that does not work, or you would like to try something else, you can see the white stripes as rectangles and try to find their contours. You can see how in this tutorial. In this case you do not get a binary image, but a bounding box for each white area. You will most likely find the chickens too, but by looking at the bounding boxes it is easy to tell which ones are correct and which are not. You can also check this by calculating the area of each bounding box (width * height) and keeping only the big ones.
Removing the part
Once you have the binary image (white part and not-white part) or the bounding boxes, you have to crop the image. This too can be done in several ways, but I think the easiest is to keep just the central part of the image. For example, if the image is H pixels tall, you would keep only the pixels from H1 (the height of the first white stripe) down to H-H2 (where H2 is the height of the second white stripe). There is no tutorial on cropping, but there are several questions here on SO, for example this one.
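In NumPy-backed OpenCV images, that crop is just an array slice (H1 and H2 below are made-up measured stripe heights):

```python
import numpy as np

img = np.arange(100 * 10).reshape(100, 10)  # stand-in image, H = 100 rows
H1, H2 = 20, 15                             # measured stripe heights (assumed)
cropped = img[H1 : img.shape[0] - H2]       # keep rows H1 .. H-H2
print(cropped.shape)  # (65, 10)
```

Slicing returns a view, so no pixel data is copied until you need it.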
Additional notes
You could use more advanced segmentation algorithms, such as Watershed, or even learn and use advanced techniques such as machine learning to do this (here is an article); as you can see, the rabbit hole is pretty deep in this case. But I believe that would be overkill, and the easy techniques should already give you results here.
Hope this was helpful and have fun!

relief image - analysis of homogeneity, some algorithms?

I have images of a relief map. I would like to write a program that can analyze this map and identify the typical situations shown in the figure below.
Red color represents the lowest value, purple color - the highest. The region of interest is inside the white square. Do not pay attention to black and white circles - they serve for personal purposes not related to this question.
The color itself is not critical here, what is really crucial is the "form" of the maximum, i.e. the way it is located on the map. We can definitely tell the maximum on the left image while it is quite blurred in the right one (because it touches other areas which share similar color).
What I want my program to do is to distinguish between these two completely different cases, i.e. identify whether area inside white square is "reliable" or not (in terms of its "blurriness").
But I do not know what algorithms I should search for. Of course I can do this analysis manually comparing values at each point to others points, but I would like to use some established and robust algorithms if they exist.
Honestly speaking, I thought of using the algorithms that find contours on binarized images, but that does not seem robust.
Thank you in advance.
P.S. I am using OpenCV, so if you know that something is already implemented in it, it would help if you told me.
UPD: I am not interested only in the situation inside the white square; I would also like to know what happens outside it, and how that compares to the region inside.

Merging with background while thresholding

I am doing a project on a license plate recognition system, but I am having trouble segmenting the license plate characters. I have tried cvAdaptiveThreshold() with different window sizes, Otsu's method, and Niblack's algorithm, but in most cases the license plate characters merge with the background.
Sample images and outputs are given below.
In the first image, all the license plate characters are connected by a white line along the bottom, so thresholding alone couldn't extract the characters. How can I extract characters from images like this?
In the second image, noise in the background merges with the foreground, which connects all the characters together. How can I segment the characters in this type of image? Is there any segmentation algorithm that can handle the second image?
Preprocessing: find the big black areas in your image and mark them as background.
Do this with a threshold, for example. Another way would be findContours (with contourArea to get the size of each result).
This way you know which areas you can colour black after step 1.
Use Otsu (top image, right column, blue title background).
Colour everything you know to be background black.
Use opening/closing or erode/dilate (I'm not sure which will work better) to get rid of small lines and to refine your results.
Alternatively, you could run edge detection and merge all areas that are "close together", like the second 3 in your example. You could check whether areas are close together using the distance between the bounding boxes of your contours.
ps: I don't think you should blur your image, since it already seems to be pretty small.

IDEAL solution to separate text from background?

Suppose I have gray-scale photographic pictures of sheets of text. Each sheet of paper is exactly white and the text is exactly black.
Unfortunately, the lighting is not uniform, perspective shading occurs, and the sheets of paper may be curved. Of course, there is also some small high-frequency noise in the image.
I AM SURE that there should be a nearly IDEAL solution to separate text and background in this situation.
So what is it? :)
I don't believe it is impossible, or even hard, to turn such gray-scale images into nearly perfect black-and-white pictures. I can't prove this, but I judge by my own perception: I need no intelligence to read such pictures by eye. They can be in any language, even an unfamiliar one, but I will SEE exactly what is written.
So, how to teach computer to do the same?
UPDATE
Consider original image
Any global thresholding will cause artefacts (1) and non-uniform text representation (2).
I need some thresholding that looks at local statistics.
Switch to adaptive thresholding.
Here you will find some introduction - http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm
Adaptive thresholding is designed to deal with exactly this kind of problem.

Simple OpenCV example to measure Size of Object on a screen

Following up on my other question: do you know a good example in OpenCV, with a simple black/white calibration picture and appropriate detection algorithms?
I just want to show some B&W image on a screen, take a picture of that image from afar, and calculate the size of the shown image, in order to calculate the distance to said screen.
Before I reinvent the wheel, I reckon this is so easy that it could be achieved in many different ways in OpenCV, yet I thought I'd ask whether there's a preferred approach, possibly with some sample code.
(I already have some face-detection code running using Haar cascade XML files.)
PS: I already have the resolution/DPI part of my screen covered, so I know how big a picture would be in cm on my screen.
EDIT:
I'll make it real simple, I need:
A pattern that is easily recognizable in an image. Right now I'm experimenting with a checkerboard; the people who made ARDefender used one.
An appropriate algorithm that gives me the exact pixel coordinates of pattern 1) in a picture, using OpenCV.
Well, it's hard to say which image is best for recognition: under different illumination, any color can be interpreted as another color. A simple example:
As you can see, both traffic signs have a red border, yet in one image the upper sign's border is obviously not red.
So in my opinion you should use an image with many different colors (like a rainbow). You also said it should be easily recognizable from different angles; that's why a circle is the best shape for it.
That's why your image should look like this:
So the idea for detecting such an object is the following:
Do color segmentation for each color (blue, red, green, etc.). Use the HSV color space for this.
Detect circles of each specific color in the image.
The area with the biggest count of circles is likely your object.
You just have to take pictures of your B&W object from several known distances (1 m, 2 m, 3 m, ...) and then, for each distance, measure the size of your object in the corresponding image.
From that data, you will be able to fit a linear function giving you the distance from the size in pixels (y = ax + b should do ;) ), translate it into your code and you're done.
Cheers
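A quick sketch of the calibration fit described above, with made-up measurements. One note: under a pinhole camera model, the pixel size scales as 1/distance, so fitting distance against 1/size keeps the relation genuinely linear:

```python
import numpy as np

distances = np.array([1.0, 2.0, 3.0, 4.0])         # metres, calibration shots
sizes_px = np.array([400.0, 200.0, 133.3, 100.0])  # measured object size

# Fit distance = a * (1/size) + b
a, b = np.polyfit(1.0 / sizes_px, distances, 1)

estimate = a / 250.0 + b   # predict distance for a new 250 px measurement
print(round(estimate, 1))  # 1.6 (metres)
```

Once a and b are known, every new frame needs only one size measurement to yield a distance estimate.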
