What's the proper way to make OpenALPR read plates at different angles? - image-processing

I'm using the OpenALPR library to read license plates, but I'm having trouble reading plates at different angles, like in the image below. My question is: what's the proper way to handle this? Image processing to straighten the image before submitting it, such as a homography?
OpenALPR settings such as max_plate_angle_degrees, max_detection_input_width, max_detection_input_height, etc., or training tesseract-ocr with cropped images at different angles? I have no code to show because I'm looking for direction on how to approach this.

Yes, homography is the general term for what you are looking for, but you might find better resources by searching for "geometric transformation" or "projective/perspective transformation".
Here are some starting resources that might help:
Understanding Homography (a.k.a Perspective Transformation) - Towards Data Science
Geometric Transformation of Image - OpenCV Tutorials
Applying perspective transformation and homography - Packt (soft paywall)
I don't know OpenALPR, and can't find clear documentation anywhere, so I can't comment on it. But the settings you've listed (max_plate_angle_degrees, max_detection_input_width, max_detection_input_height) do not sound like what you are looking for.
And I personally don't recommend training Tesseract OCR directly on unprocessed images (uncropped, angled license plates). Typically, you would run an image-processing pipeline first and pass the result to Tesseract OCR as the final step. However, I haven't touched Tesseract since version 3 back in 2018, so the current version 5 might fare better...
Regardless, I would imagine a traditional, end-to-end image processing pipeline like this:
1. Image capture
2. Preprocessing of image brightness, contrast, and colors
3. Cascade classifiers to quickly detect and locate license plates (this also filters out images without license plates)
4. Crop the image to the detected license plate
5. Edge and corner detection
6. Identify control points with a Hough transform (finding the 4 corners of the license plate through line intersections)
7. Estimate the homography matrix with linear algebra (most image processing libraries already provide a function for this)
8. Projective transform of the image from step 4 with the estimated homography matrix
9. Tesseract OCR on the transformed image to extract the license number
You would likely implement fewer steps than this, as most libraries provide higher-level functionality that handles much of it automatically.

Related

Image processing LABVIEW

How do I use feature detection to measure dimensions and locate a circle/line/rectangle in an image in LabVIEW?
For example, let's say I load an image into LabVIEW; I want LabVIEW to detect whether it contains any shapes!
I used LabVIEW in high school for a robotics team, and we developed a lot of real-time image tracking code that did pretty much that.
What we did was create a small system that took an image, checked it for pixels of a specific hue, saturation, and luminosity, then grouped them together by creating a convex hull around those pixels. We then scored the convex hull on its linear averages, and that was compared against expected results.
And once you have the convex hull, you can perform some particle analysis, and a few calculations later you have your dimensions.
You could use the function "IMAQ Find Circles" to locate circles. For lines and rectangles I'd probably write something on my own. Segment the image using IMAQ Threshold and let "IMAQ Particle Analysis" give you characteristics of the resulting blobs.
A sample image of what you're trying to achieve would help to understand the problem you're facing. Please upload one.
Also refer to the image processing manuals for LabVIEW. These two are pretty good and give a lot of examples on how to process images:
NI Vision for LabVIEW User Manual:
http://www.ni.com/pdf/manuals/371007b.pdf
NI Vision Concepts Manual:
http://www.ni.com/pdf/manuals/372916e.pdf
LabVIEW has some methods to find shapes; you can find all of them in the machine vision part of the Vision Assistant, but you need some post-processing to use such algorithms afterwards.
You can use geometric matching and other VIs to find any shape you want.

Comparing similar images as photographs -- detecting difference, image diff

The situation is somewhat different from anything I have been able to find already asked, and is as follows: if I took a photo of two similar images, I'd like to be able to highlight the differing features in the two images. For example, the following two halves of a children's spot-the-difference game:
The differences in the images will be bits missing/added and/or colour changes, the kind of differences that would be easily detectable from the original image files by nothing cleverer than a pixel-by-pixel comparison. However, since they're subject to the fluctuations of light and the imprecision of photography, I'll need a far more lenient/clever algorithm.
As you can see, the images won't necessarily line up perfectly if overlaid.
This question is tagged language-agnostic as I expect answers that point me towards relevant algorithms, however I'd also be interested in current implementations if they exist, particularly in Java, Ruby, or C.
The following approach should work. All of these functionalities are available in OpenCV. Take a look at this example for computing homographies.
Detect keypoints in the two images using a corner detector.
Extract descriptors (SIFT/SURF) for the keypoints.
Match the keypoints and compute a homography using RANSAC, that aligns the second image to the first.
Apply the homography to the second image, so that it is aligned with the first.
Now simply compute the pixel-wise difference between the two images, and the difference image will highlight everything that has changed from the first to the second.
My general approach would be to use optical flow to align both images and perform a pixel-by-pixel comparison once they are aligned.
However, for the specifics, standard optical flow implementations (OpenCV etc.) are likely to fail if the two images differ significantly, as in your case. If that indeed fails, there are more recent optical flow techniques that are supposed to work even when the images are drastically different. For instance, you might want to look at the SIFT Flow paper by Ce Liu et al., which addresses this problem with sparse correspondences.

Shape context matching in OpenCV

Does OpenCV have an implementation of shape context matching? I've found only the matchShapes() function, which does not work for me. I want to get a set of corresponding features from shape context matching. Is it a good idea to use it to compare, and to find the rotation and displacement of a detected contour across two different images?
Also, some example code would be very helpful for me.
I want to detect for example pink square, and in the second case pen. Other examples could be squares with some holes, stars etc.
The basic steps of image processing are
Image Acquisition > Preprocessing > Segmentation > Representation > Recognition
And what you are asking for seems to lie within the representation part of this general algorithm. You want features that describe the objects you are interested in, right? Before sharing what I've done for simple hand-gesture recognition, I would like you to consider what you actually need. A lot of the time, simplicity will make things much easier. Consider a fixed color on your objects, and consider background subtraction (these two mainly tie into preprocessing and segmentation). As for representation, which features are you interested in, and can you exclude the need for some of them?
My project group and I took a simple approach to preprocessing and segmentation, choosing a green glove for our hand. Here's an example of the glove, camera, and detection on screen:
We used a threshold on convexity defects, tuned to find the defects between fingers, and we calculated the aspect ratio of a rotated rectangular bounding box to see how square our blob is. With only four different hand gestures chosen, we are able to distinguish them with just these two features.
The functions we used and the measurements are all available in the OpenCV documentation on structural analysis, and access to values in vectors (which we used a lot) is covered in the C++ documentation for vectors.
I hope you can use the train of thought put into this; if you want more specific info, I'll be happy to comment. Enjoy.

Image processing [duplicate]

I need to count boxes in a warehouse by using edge detection techniques; images will be taken from a 3D model of a warehouse, and the proposed system will use 3 images at 3 different angles to cover the whole area of the warehouse.
As I have no prior experience in image processing, I'm a bit confused about which algorithm to use.
For a quick start I would suggest looking at those two:
Sobel operator
Canny operator
These are the most widely used edge detection filters with pretty good results.
If you are starting to learn computer vision, you should also learn about typical operations in image processing and convolution.
The OpenCV library is a great library which implements many algorithms of computer vision, including the two operators mentioned above.
Check out AForge. It has full C# implementation of some edge detection algorithms.
Take a look at the Image Processing Library for C++ question. You can find several useful links there. The suggested libraries come not only with algorithm descriptions but also with implementations.
OpenCV has a very nice algorithm which detects closed contours in an image and returns them as lists of points. You can then throw away all contours which don't have 4 points, and then check some constraints of the remaining ones (aspect ratio of the rectangles, etc...) to find your remaining box sides. This should at least solve the image processing part of your problem, though turning this list of contours into a count of boxes in your warehouse is going to be tough.
Check here for the OpenCV function:
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
http://opencv.willowgarage.com/documentation/drawing_functions.html#drawcontours
The 'Sujoy filter' is claimed to outperform the Sobel filter for edge detection. Here's the Julia implementation (with a link to the paper): Sujoy Filter

