Merging with background while thresholding - opencv

I am doing a project on License plate recognition system.
But I am facing a problem in segmenting the license plate characters.
I have tried cvAdaptiveThreshold() with different window sizes, as well as
Otsu's and Niblack's algorithms.
But in most cases the license plate characters merge with the
background.
Sample images and outputs are given below.
In the first image, all the license plate characters are connected by a white line along the bottom, so with a thresholding algorithm I could not extract the characters. How can I extract the characters from images like this?
In the second image, noise in the background merges with the foreground, which connects all the characters together. How can I segment the characters in these types of images?
Is there any segmentation algorithm that can segment the characters in the second image?

1. Preprocessing: find big black areas in your image and mark them as background. You can do this with a threshold, for example; another way might be to use findContours (and contourArea to get the size of each result). This way you know which areas you can colour black in step 3.
2. Use OTSU (top image, right column, blue title background).
3. Colour everything you know to be background black.
4. Use opening/closing or erode/dilate (not sure which will work better) to get rid of small lines and to refine your results. A sketch of these steps is given below.
Alternatively, you could do edge detection and merge all areas that are "close together", like the second 3 in your example. You could check whether areas are close together using the distance between the bounding boxes of your contours.
PS: I don't think you should blur your image, since it seems to be pretty small already.
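A minimal sketch of steps 1-4 in Python with OpenCV (assuming OpenCV 4.x; the file name, threshold and area cutoff are illustrative placeholders, not values from the question):

import cv2
import numpy as np

img = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Step 1: find big dark areas and build a background mask from them
_, dark = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)  # illustrative value
contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
big = [c for c in contours if cv2.contourArea(c) > 500]  # illustrative area cutoff
bg_mask = np.zeros_like(img)
cv2.drawContours(bg_mask, big, -1, 255, cv2.FILLED)

# Step 2: Otsu's threshold on the whole image
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Step 3: colour everything known to be background black
bw[bg_mask == 255] = 0

# Step 4: opening to remove thin lines connecting the characters
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)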

Related

Image Processing - Film negative cutting

I'm trying to figure out how to automatically cut some images like the one below (this is a negative film); basically, I want to remove the blank parts at the top and at the bottom. I'm not looking for complete code for it, I just want to understand a way to do it. The language is not important at this point, but I think this kind of thing is normally accomplished with Python.
I think there are several ways to do that, ranging from simple to complex. You can see the problem as detecting white rectangles or as segmenting the image, I would say.
I can suggest OpenCV (which is available in more than one language, Python among them); you can have a look here at the image processing examples.
First we need to find the white part, then remove it.
Finding the white part
Thresholding
Let's start with an easy one: thresholding
Thresholding means dividing the image into two parts (usually black and white). You can do that by selecting a threshold (in your case, the threshold would be towards white, or black if you invert the image). By doing so, however, you may also threshold some parts of the image itself (for example the chickens and the white part above the kid). Do you have information about the position of the white stripes? Are they always at the top and bottom of the image? If so, you can apply the thresholding operation only to the top 25% and bottom 25% of the image, as in the sketch below. You would then most likely not "ruin" the image.
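A minimal sketch of this idea in Python with OpenCV (the file name and the threshold value 200 are illustrative assumptions):

import cv2
import numpy as np

img = cv2.imread("negative.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
h = img.shape[0]
strip = h // 4  # only threshold the top and bottom 25%

# Threshold towards white; 200 is an illustrative, scan-dependent value
_, top = cv2.threshold(img[:strip], 200, 255, cv2.THRESH_BINARY)
_, bottom = cv2.threshold(img[h - strip:], 200, 255, cv2.THRESH_BINARY)

mask = np.zeros_like(img)  # white (255) marks candidate blank areas
mask[:strip] = top
mask[h - strip:] = bottom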
Finding the white rectangle
If that does not work or you would like to try something else, you can see the white stripes as rectangles and try to find their contours. You can see how in this tutorial. In this case you do not get a binary image, but a bounding box for each white area. You will most likely find the chickens too, but by looking at the bounding boxes it is easy to tell which ones are correct and which are not. You can also check this by calculating the areas of the bounding boxes (width * height) and keeping only the big ones, as sketched below.
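A rough sketch of this alternative (assuming OpenCV 4.x; the threshold and the area cutoff are again illustrative):

import cv2

img = cv2.imread("negative.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
_, white = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(white, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only bounding boxes whose area (width * height) is large
boxes = [cv2.boundingRect(c) for c in contours]
stripes = [(x, y, w, h) for (x, y, w, h) in boxes if w * h > 0.05 * img.size]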
Removing the part
Once you have the binary image (white part and not-white part) or the bounding boxes, you have to crop the image. This can also be done in several ways, but I think the easiest one is to just select the central part of the image. For example, if the image has H pixels vertically, you would keep only the rows from H1 (the height of the first white stripe) to H - H2 (where H2 is the height of the second white stripe); a short slicing sketch follows. There is no tutorial on cropping, but there are several questions here on SO, for example this one.
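In Python the crop is just array slicing (H1 and H2 below are hypothetical stripe heights):

import cv2

img = cv2.imread("negative.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
H = img.shape[0]
H1, H2 = 40, 40  # stripe heights found in the previous step (illustrative)
cropped = img[H1:H - H2]  # keep only the central rows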
Additional notes
You could use more advanced segmentation algorithms, such as watershed, or even learn and use advanced techniques such as machine learning (here is an article); as you can see, the rabbit hole is pretty deep in this case. But I believe that would be overkill, and the easy techniques should already give you some results in this case.
Hope this was helpful and have fun!

How to segment part of an image so that the edges are smooth?

I have an input image as follows and wish to segment the parts into regions. I also want the segmented parts to be not just the pixels which contribute to the solid color, but also the edge anti-aliasing between the edge of the region and the next region.
Does there exist any filter or method to segment the image in this way? The important part is that the resulting segmented part must contain the edge anti-aliasing between it and the next regions. A correct solution is shown in yellow.
In these two images I zoomed the pixels to be large so the edge anti-aliasing between region edges can be seen clearly.
An example output that I want for the yellow region is shown.
For a definition of "edge anti-aliasing" see https://markpospesel.wordpress.com/2012/03/30/efficient-edge-antialiasing/
I'm not sure what exactly you want. For example, would some pixels belong to two segments? If that is the case, then I'm relatively sure you will have to do something on your own. Otherwise, the following might work:
Opening and Closing
Opening and closing are two morphological operations which will smooth the borders.
Clustering
There are many clustering algorithms. They are what you want for non-semantic segmentation (for semantic segmentation, you might want to read my literature survey). One example is
P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient Graph-Based Image Segmentation."
I would simply give those algorithms a try and see if one directly works.
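For instance, the Felzenszwalb-Huttenlocher algorithm is available in scikit-image; a minimal sketch (the file name and parameter values are illustrative):

from skimage import io, segmentation

img = io.imread("input.png")  # hypothetical file name
# scale controls segment size, sigma pre-smooths, min_size merges tiny segments
labels = segmentation.felzenszwalb(img, scale=100, sigma=0.8, min_size=50)

Note that this produces one hard label per pixel; recovering the anti-aliased edge fractions asked about above would still need an extra step along the segment borders.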
Other clustering algorithms:
K-means
DBSCAN
CLARANS
AGNES
DIANA

How to get the position (x,y) and number of particular objects or shapes in a hand-drawn image?

First: I have been learning about image processing, NN, etc. by myself for just a couple of weeks, so I'm really new and really far from being a pro. And sorry for my bad English.
There is an image (a photo of my drawing). I want to get the coordinates of the objects/shapes (black dots) and the numbers around them; each number indicates the sequence number of a dot.
How do I get these? How do I detect the dots? Shape recognition for the dots? Handwritten digit recognition for the numbers? Then segmentation to get the positions? Or should I use template matching? But every dot has a slightly different shape because of the hand drawing. Should I use a neural network? In a NN, there is usually one input neuron per pixel to recognize a character, right? Can I feed a picture of a character or a drawn dot to the network to recognize my whole picture?
I'm very new, so I really need your advice. Correct me if I'm wrong! Please tell me what I should learn, what I should do, and what I should use.
Thank you very much. :'D
This is a difficult problem which can't be solved with a quick fix.
Here is how I would approach it:
1. Get a better picture. Your image is very noisy and was taken in low light with high ISO. Use a better camera and better lighting conditions so you can get the background as white as possible and the dots as black as possible. Try to maximize the contrast.
2. Threshold the image so that all the background is white and the dots and numbers are black. Maybe you could apply some erosion and/or dilation to help connect the dark edges together.
3. Detect the rectangle somehow and set your work area to be inside it (crop the rest of the image away so that you are left with the area inside the rectangle). You could do this by finding the contours in the image; the contour with the largest area is the rectangle (because it's the largest object in the image). Of course, this is not the only way. See this: OpenCV find contours
4. Once you are left with only the dots, circles and numbers, you need a way to detect them and discriminate between them. You could again find all contours (or maybe you've found them all in the previous step). You need a way to decide whether a certain contour is a circle, a filled circle (dot) or a number; this is a problem in its own right. Maybe you could count the white/black pixels in the contour's bounding box: dots have more black pixels than circles and numbers do. You also need to do something about numbers that connect with dots (like the number 5 in your image).
5. Once you know what is a dot, a circle or a number, you could use an OCR library (Tesseract or any other OCR lib) to try to recognize the numbers. You could also use a neural network library (maybe trained on the MNIST dataset) to recognize the digits. A good one would be a convolutional neural network similar to LeNet-5.
As you can see, this is a problem that requires many different steps to solve, and many different components are involved. The steps I suggested might not be the best, but with some work I think it can be solved.
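A rough sketch of steps 2-4 in Python with OpenCV (assuming OpenCV 4.x; the file name, margin and fill cutoff are illustrative assumptions):

import cv2

img = cv2.imread("drawing.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Step 2: threshold (inverted, so the ink becomes white for contour finding)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Step 3: the largest contour should be the rectangle; crop to its inside
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
inside = bw[y + 5:y + h - 5, x + 5:x + w - 5]  # a small margin drops the border

# Step 4: classify contours by how "filled" their bounding box is
contours, _ = cv2.findContours(inside, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    cx, cy, cw, ch = cv2.boundingRect(c)
    fill = cv2.contourArea(c) / float(cw * ch)
    if fill > 0.6:  # illustrative cutoff: filled dots cover most of their box
        print("dot at", (x + 5 + cx, y + 5 + cy))
    # otherwise it is a circle or a digit: hand it to OCR / a classifier (step 5)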

OpenCV: how to ignore small parts in an image

I need a little help with OpenCV; I'm a beginner and don't know all the functions yet.
I'm trying to do OCR on my licence plate (it's a Brazilian plate). After some image processing like cvCvtColor, cvCanny, cvFindContours and cvDrawContours, I get images like this:
It's a fake image; I mocked it up because I don't want to publish my real plate on the web. In my real image there is only black and white. I painted some parts in this example because I want to ignore those parts: red is a city name, yellow is the hyphen separator, and green is the hole for fixing the plate to the car. I need to know if there is a way to ignore these small parts and recognize only the big parts, so that after this filter I can do my OCR processing. Any help?
I'm not sure if it helps in other situations, but in this situation you can remove small contours using erosion, or simply use contourArea to calculate each contour's area (and remove a contour if its area is too small), as sketched below.
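A minimal sketch of the contourArea approach in Python with OpenCV (assuming OpenCV 4.x; the file name and minimum area are illustrative):

import cv2
import numpy as np

img = cv2.imread("plate_bw.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

min_area = 100  # illustrative; tune to the size of the parts to ignore
big = [c for c in contours if cv2.contourArea(c) >= min_area]

# Redraw only the big contours onto a clean image
clean = np.zeros_like(img)
cv2.drawContours(clean, big, -1, 255, cv2.FILLED)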

How to compensate for uneven illumination in a photograph of a printed page?

I am trying to teach my camera to be a scanner: I take pictures of printed text and then convert them to bitmaps (and then to djvu and OCR'ed). I need to compute a threshold for which pixels should be white and which black, but I'm stymied by uneven illumination. For example if the pixels in the center are dark enough, I'm likely to wind up with a bunch of black pixels in the corners.
What I would like to do, under relatively simple assumptions, is compensate for uneven illumination before thresholding. More precisely:
Assume one or two light sources, maybe one with gradual change in light intensity across the surface (ambient light) and another with an inverse square (direct light).
Assume that the white parts of the paper all have the same reflectivity/albedo/whatever.
Find some algorithm to estimate degree of illumination at each pixel, and from that recover the reflectivity of each pixel.
From a pixel's reflectivity, classify it as white or black.
I have no idea how to write an algorithm to do this. I don't want to fall back on least-squares fitting since I'd somehow like to ignore the dark pixels when estimating illumination. I also don't know if the algorithm will work.
All helpful advice will be upvoted!
EDIT: I've definitely considered chopping the image into pieces that are large enough so they still look like "text on a white background" but small enough so that illumination of a single piece is more or less even. I think if I then interpolate the thresholds so that there's no discontinuity across sub-image boundaries, I will probably get something halfway decent. This is a good suggestion, and I will have to give it a try, but it still leaves me with the problem of where to draw the line between white and black. More thoughts?
EDIT: Here are some screen dumps from GIMP showing different histograms and the "best" threshold value (chosen by hand) for each histogram. In two of the three a single threshold for the whole image is good enough. In the third, however, the upper left corner really needs a different threshold:
I'm not sure if you still need a solution after all this time, but if you do: a few years ago my team and I photographed about 250,000 pages with a camera and converted them to (almost black and white) grey-scale images, which we then DjVu'ed (and also made PDFs of).
(See The catalogue and complete collection of photographic facsimiles of the 1144 paper transcripts of the French Institute of Pondicherry.)
We also ran into the problem of uneven illumination. We came up with a simple, unsophisticated solution which worked very well in practice. This solution should also work for creating black and white images rather than grey scale (as I'll describe).
The camera and lighting setup
a) We taped an empty picture frame to the top of a table to keep our pages in the exact same position.
b) We put a camera on a tripod, also on top of the table, above and pointing down at the taped picture frame. On a bar about a foot wide, attached to the external flash holder on top of the camera, we mounted two "modelling lights". These can be purchased at any good camera shop and are designed to provide even illumination. The camera was shaded from the lights by putting a small cardboard box around each modelling light. We photographed in greyscale, which we then further processed. (Our pages were old browned paper with blue ink writing, so your case should be simpler.)
Processing of the images
We used the free software package IrfanView.
This software has a batch mode which can simultaneously do color correction, change the bit depth and crop the images. We would take the photograph of a page and then, in interactive mode, adjust the brightness, contrast and gamma settings until it was close to black and white. (We used greyscale, but by setting the bit depth to 2 you will get black and white when you batch-process all the pages.)
After determining the best color correction, we interactively cropped a single image and noted the cropping settings. We then set all these settings in the batch mode window and processed the pages for one book.
Creating DjVu images.
We used the free DjVu Solo 3.1 to create the DjVu images. It has several modes for creating DjVu images; the mode which creates black and white images didn't work well for us for photographs, but the "photo" mode did.
We didn't OCR (since the images were handwritten Sanskrit), but as long as the letters are evenly illuminated I think your OCR software should ignore big black areas like the gap in a two-page spread. You can always get rid of the black between a two-page spread, or at the edges, by cropping the pages twice: once for the left-hand pages and once for the right-hand pages. IrfanView also lets you number your pages cleverly so you can re-merge them in the correct order, i.e. rename your pages something like page-xxxA for left-hand pages and page-xxxB for right-hand pages, and the pages will then sort correctly by name.
If you still need a solution I hope some of the above is useful to you.
I would recommend calibrating the camera, considering that your lighting setup is fixed (that is, the lights do not move between pictures) and your camera is greyscale (not colour).
Take a picture of a white sheet of paper which covers the whole workable area of your "scanner". Store this picture; it tells you what white paper looks like at each pixel. Now, when you take a picture of a document to scan, you can reload your "white reference picture" and even out the illumination before performing a threshold.
Let's call the white reference REF, the picture DOC, the evenly illuminated picture EVEN, and the maximum value of a pixel MAX (for 8-bit imaging it is 255). For each pixel:
EVEN = DOC * (MAX/REF)
Notes:
Beware of the parentheses: most image processing libraries use the image pixel type for computations on pixel values, and a simple multiplication will overflow your pixels. If necessary, write the loop yourself and use a 32-bit integer for the intermediate computations.
The white reference image can be smoothed before being used in the process. Any smoothing or blurring filter will do; don't hesitate to apply it aggressively.
The MAX value in the formula above represents the target pixel value in the resulting image. Using the maximum pixel value targets a bright white, but you can adjust this value to target a lighter grey.
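A minimal sketch of this flat-field correction in Python with OpenCV (the file names, blur size and the final Otsu step are illustrative assumptions):

import cv2
import numpy as np

MAX = 255  # maximum pixel value for 8-bit images, as in the formula above
ref = cv2.imread("white_reference.png", cv2.IMREAD_GRAYSCALE)  # hypothetical names
doc = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)

# Smooth the white reference aggressively, as suggested in the notes
ref = cv2.GaussianBlur(ref, (51, 51), 0)

# EVEN = DOC * (MAX / REF), computed in float32 to avoid 8-bit overflow
ref = np.maximum(ref.astype(np.float32), 1.0)  # guard against division by zero
even = np.clip(doc.astype(np.float32) * (MAX / ref), 0, MAX).astype(np.uint8)

# A single global threshold (e.g. Otsu) now works on the evened-out image
_, bw = cv2.threshold(even, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)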
Well, usually the image processing I do is highly time-sensitive, so a complex algorithm like the one you're seeking wouldn't work. But... have you considered chopping the image up into smaller pieces and re-scaling each sub-image? That should make the 'dark' pixels stand out fairly well even in an image with variable lighting conditions (I am assuming here that you are talking about a standard mostly-white page with dark text).
It's a cheat, but a lot easier than the 'right' way you're suggesting.
This might be horrendously slow, but what I'd recommend is to break the scanned surface into quarters/16ths and re-color them so that the average greyscale level is similar across the page (it might break if you have pages with large margins, though). A sketch of this idea follows.
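A minimal sketch in Python (the file name and the 4x4 grid are illustrative; this shifts each tile's mean grey level to the global mean rather than fully re-scaling):

import cv2
import numpy as np

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
h, w = img.shape
ny, nx = 4, 4  # 16ths, as suggested above

target = img.mean()  # bring every tile's mean grey level to the global mean
for ty in range(ny):
    for tx in range(nx):
        ys = slice(ty * h // ny, (ty + 1) * h // ny)
        xs = slice(tx * w // nx, (tx + 1) * w // nx)
        img[ys, xs] += target - img[ys, xs].mean()

out = np.clip(img, 0, 255).astype(np.uint8)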
I assume that you are taking images of (relatively) small black letters on a white background.
One approach could be to "remove" the small black objects while keeping the illumination variations of the background. This gives an estimate of how the image is illuminated, which can be used for normalizing the original image. It is often enough to subtract the original image from the illumination estimate (a bottom-hat filter, so the dark text becomes bright) and then do a threshold-based segmentation.
This approach is based on grey-scale morphological filters, and could be implemented in MATLAB like below:
img = imread('filename.png');
% Estimate the illumination: grey-scale closing removes dark objects
% smaller than the structuring element, leaving only the background
illumination = imclose(img, strel('disk', 10));
% Bottom-hat: subtract the image from the estimate, so the dark text
% becomes bright on a dark background
imgCorrected = illumination - img;
% graythresh returns a normalized level in [0,1]; im2bw applies it
thresholdValue = graythresh(imgCorrected);
bw = im2bw(imgCorrected, thresholdValue);
For an example with real images, take a look at this guide from MathWorks. For further reading about the use of morphological image analysis, this book by Pierre Soille can be recommended.
Two algorithms come to my mind:
High-pass to alleviate the low-frequency illumination gradient
Local threshold with an appropriate radius
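For the local threshold, OpenCV's adaptiveThreshold is one option; a minimal sketch (the file name, window size and offset are illustrative and need tuning):

import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
# blockSize (51) is the local window, i.e. the "radius"; C (10) is
# subtracted from the local mean before comparing
bw = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 51, 10)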
Adaptive thresholding is the keyword. Quote from a 2003 article by R. Fisher, S. Perkins, A. Walker, and E. Wolfart: "This more sophisticated version of thresholding can accommodate changing lighting conditions in the image, e.g. those occurring as a result of a strong illumination gradient or shadows."
ImageMagick's -lat option can do it, for example:
convert -lat 50x50-2000 input.jpg output.jpg
You could try using an edge detection filter, then a floodfill algorithm, to distinguish the background from the foreground. Interpolate the floodfilled region to determine the local illumination; you may also be able to modify the floodfill algorithm to use the local background value to jump across lines and fill boxes and so forth.
You could also try a threshold hysteresis with a rate-of-change control. Here is the link to the normal threshold hysteresis. Set the first threshold to a typical white value, and the second threshold to less than the lowest white value in the corners.
The difference is that you want to check the difference between pixels for all values between the first and second thresholds. Ideally, if the difference is positive, act normally; if it is negative, only threshold if the difference is small.
This will be able to compensate for lighting variations, but will ignore the large changes between the background and the text.
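For reference, standard hysteresis thresholding (without the rate-of-change modification described above) is available in scikit-image; a minimal sketch with illustrative values:

from skimage.filters import apply_hysteresis_threshold
from skimage.io import imread

img = imread("page.png", as_gray=True)  # hypothetical file name; values in [0, 1]
# First threshold: a typical white value; second: below the darkest "white"
high, low = 0.8, 0.5  # illustrative values, tune per image
paper = apply_hysteresis_threshold(img, low, high)  # True where paper
text = ~paper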
Why don't you use simple opening and closing operations?
Try this, and just look at the results:
src: source image
src - open(src)
close(src) - src
Look especially at the close(src) - src result.
Using different window sizes, you will get the background of the image.
I think this helps.
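In OpenCV these two residues are the top-hat and black-hat transforms; a minimal sketch (the file name and window size are illustrative):

import cv2

src = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))  # window size to tune

tophat = cv2.morphologyEx(src, cv2.MORPH_TOPHAT, kernel)      # src - open(src)
blackhat = cv2.morphologyEx(src, cv2.MORPH_BLACKHAT, kernel)  # close(src) - src
background = cv2.morphologyEx(src, cv2.MORPH_CLOSE, kernel)   # background estimate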
