Image preprocessing for Tesseract - OpenCV

I'm building a business card scanner for my final examination in digital image processing, and I would like to ask how I should preprocess a photo of a business card so that Tesseract can recognize the text. I have tried a lot of things, like erosion, dilation, and thresholding, but I can't get a good result... Can you help me?
Thank you
Marco

If your concern is only about the text recognition and not about implementing the preprocessing yourself, consider using ScanTailor. It is an excellent pre-processing tool, and it is open source.
If you want to implement the pre-processing yourself, you might want to have a look at this paper, especially the skew correction and the background estimation. The results of the algorithms described there are good, and ScanTailor uses some of them.

I would recommend the open source C++ image processing library OpenCV in combination with the free, open source Optical Character Recognition (OCR) library Tesseract.
Since the information about your problem isn't very specific, I can only answer in general terms.
The main procedure in OCR is:
perform some kind of preprocessing on the image
text detection to get your ROI (Region of interest, the region containing your text)
character recognition (take the text-only image and use it as input for Tesseract)
A few words about Tesseract:
There is a lot of information about the library available online. It is a Google open source library, used for the Google Books OCR effort. It can also handle layout analysis of your image, but it isn't perfect at this, so doing the preprocessing yourself and using Tesseract only for the actual character recognition part can lead to better results. Feel free to ask if you still have questions, or if I misunderstood your question.
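A minimal sketch of such a preprocessing step with OpenCV is shown below. The function name and all parameter values are my own illustrative choices, starting points to tune rather than known-good settings for business cards:

    #include <opencv2/imgproc/imgproc.hpp>

    // Greyscale -> light denoising -> adaptive binarization, a common
    // minimal pipeline before handing an image to Tesseract.
    cv::Mat preprocessForTesseract(const cv::Mat& photoBGR) {
        cv::Mat gray, blurred, binary;
        cv::cvtColor(photoBGR, gray, CV_BGR2GRAY);
        cv::GaussianBlur(gray, blurred, cv::Size(3, 3), 0); // suppress sensor noise
        // Adaptive thresholding copes with the uneven lighting typical of
        // photos; the block size (odd) and offset are the two knobs to tune.
        cv::adaptiveThreshold(blurred, binary, 255,
                              CV_ADAPTIVE_THRESH_GAUSSIAN_C,
                              CV_THRESH_BINARY, 31, 15);
        return binary; // feed this image to Tesseract
    }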

Related

Optical character recognition - how does it work?

I became interested in symbol recognition recently and started reading about it on the Internet. I found plenty of information about the preprocessing and segmentation stages, but all of that is just a preliminary step before the transformation from image to string, and every note I found online pointed me to a ready-made solution, like Tesseract, which does all the work behind an interface. However, I am interested in a detailed description of this process, and I want to understand all the steps of this transformation.
Can anybody give me some links to exhaustive literature or articles on this topic? For example, the algorithm behind Tesseract's image_to_string() function. I will be thankful for any help.
The most straightforward starting point is the GitHub page of Tesseract, especially its wiki.
Or, if you want to recognize a specific symbol, you can build your own recognizer using neural networks; follow this step-by-step tutorial.

Face landmark extraction in OpenCV 3.0. Can anyone suggest any good open source libraries that will allow me to extract facial landmarks?

I am currently using OpenCV 3.0 in the hope that I will be able to create a program that does three things. First, it finds faces within a live video feed. Second, it extracts the locations of facial landmarks using ASM or AAM. Finally, it uses an SVM to classify the facial expression on the person's face in the video.
I have done a fair amount of research into this but can't find the most suitable open source AAM or ASM library for this purpose. Also, if possible, I would like to be able to train the AAM or ASM to extract the specific face landmarks I require. For example, all the numbered points in the picture linked below:
www.imgur.com/XnbCZXf
If there are any alternatives to what I have suggested that would achieve the required functionality, feel free to suggest them to me.
Thanks in advance for any answers; all advice is welcome to help me along with this project.
In the comments, I see that you are opting to train your own face landmark detector using the dlib library. You had a few questions regarding what training set dlib used to generate their provided "shape_predictor_68_face_landmarks.dat" model.
Some pointers:
The author (Davis King) stated that he used the annotated images from the iBUG 300-W dataset. This dataset has a total of 11,167 images annotated with the 68-point convention. As a standard trick, he also mirrors each image to effectively double the training set size, i.e. 11,167 × 2 = 22,334 images. Here's a link to the dataset: http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/
Note: the iBUG 300-W dataset includes two datasets that are not freely/publicly available: XM2VTS and FRGCv2. Unfortunately, these images make up a majority of the iBUG 300-W set (7,310 images, or 65.5%).
The original paper only trained on the HELEN, AFW, and LFPW datasets. So you ought to be able to generate a reasonably good model from only the publicly available images (HELEN, LFPW, AFW, iBUG), i.e. 3,857 images.
If you Google "one millisecond face alignment kazemi", the paper (and project page) will be the top hits.
You can read more about the details of the training procedure by reading the comments section of this dlib blog post. In particular, he briefly discusses the parameters he chose for training: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
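If you do end up training your own predictor, the code follows the pattern below. This sketch is adapted from dlib's train_shape_predictor example program; the XML file name is a placeholder for your own annotated dataset, and the parameter values are the ones from that example, not necessarily what was used for the distributed model:

    #include <dlib/image_processing.h>
    #include <dlib/data_io.h>

    int main() {
        using namespace dlib;

        // Images plus their landmark annotations, in dlib's XML dataset format.
        array<array2d<unsigned char> > images;
        std::vector<std::vector<full_object_detection> > shapes;
        load_image_dataset(images, shapes, "my_landmarks_training.xml");

        shape_predictor_trainer trainer;
        trainer.set_oversampling_amount(300); // values from dlib's example;
        trainer.set_nu(0.05);                 // see the blog comments for a
        trainer.set_tree_depth(2);            // discussion of the real settings

        shape_predictor sp = trainer.train(images, shapes);
        serialize("my_shape_predictor.dat") << sp;
        return 0;
    }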
With the size of the training set in mind (thousands of images), I don't think you will get acceptable results with just a handful of images. Fortunately, there are many publicly available face datasets out there, including the dataset linked above :)
Hope that helps!
AAM and ASM are pretty old school, and their results are a little disappointing.
Most facial landmark trackers use cascades of patches or deep learning. DLib performs quite well (and is BSD-licensed), as shown in this demo; there are others on GitHub, as well as a number of APIs, such as this one, which is free to use.
You can also take a look at my project, which uses C++/OpenCV/DLib, implements all the functionality you mentioned, and is fully operational.
Try Stasm 4.0.0. It gives approximately 77 points on the face.
I advise you to use the FaceTracker library. It is written in C++ using OpenCV 2.x. You won't be disappointed with it.

Image pre-processing in OCR

Our project is all about OCR, and based on my research, the image goes through a pre-processing stage before the character recognition is performed. I know we could use OpenCV for that, but our rules don't allow us to use it.
My question is: can someone tell me the step-by-step pre-processing pipeline and the best methods/algorithms to use?
From what I know so far:
1. YUV luminance
2. Greyscale
3. Otsu thresholding
4. Binarization
5. Hough transform
Original image > YUV luminance > greyscale > what's next?
thanks!
In some of my older blog posts, I addressed some parts of your questions:
Binarization on various image qualities from mobile cameras:
http://www.ocr-it.com/guide-to-better-mobile-images-from-cell-phone-camera-for-higher-quality-ocr
Image pre-processing and segmentation for better OCR:
http://www.ocr-it.com/user-scenario-process-digital-camera-pictures-and-ocr-to-extract-specific-numbers
In reality, there is no universal step-by-step recipe, in my experience. You could use the original image for OCR if you wanted to, which means no pre-processing is necessary at all. Yes, pre-processing will help, but it depends on the source and type of your images (which you did not specify). For example, a typical office document scanned on a professional scanner with Kofax VRS requires no pre-processing before OCR. A mobile camera image requires a lot of pre-processing. A picture from a parking garage camera will also require a lot of pre-processing, but with different steps and algorithms than the mobile camera picture.
My suggestion: decide what the next major limiting factor in your images is, pre-process against it, and then look for the next correctable issue.
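Since your rules exclude OpenCV, here is a minimal sketch of one step from your own list, Otsu's threshold selection, implemented directly on an 8-bit greyscale buffer; the function name and buffer layout are illustrative choices:

    #include <array>
    #include <cstdint>
    #include <vector>

    // Otsu's method: pick the threshold that maximizes the between-class
    // variance of the grey-level histogram. Input: 8-bit greyscale pixels.
    int otsuThreshold(const std::vector<std::uint8_t>& pixels) {
        std::array<long, 256> hist = {};
        for (std::uint8_t p : pixels) ++hist[p];

        double total = static_cast<double>(pixels.size());
        double sumAll = 0.0;
        for (int i = 0; i < 256; ++i) sumAll += i * static_cast<double>(hist[i]);

        double sumBg = 0.0, weightBg = 0.0, bestVar = -1.0;
        int bestT = 0;
        for (int t = 0; t < 256; ++t) {
            weightBg += hist[t];                    // background pixel count
            if (weightBg == 0) continue;
            double weightFg = total - weightBg;     // foreground pixel count
            if (weightFg == 0) break;
            sumBg += t * static_cast<double>(hist[t]);
            double meanBg = sumBg / weightBg;
            double meanFg = (sumAll - sumBg) / weightFg;
            double var = weightBg * weightFg * (meanBg - meanFg) * (meanBg - meanFg);
            if (var > bestVar) { bestVar = var; bestT = t; }
        }
        return bestT; // binarize: pixel > bestT -> white, else black
    }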

iOS: Real Time OCR on top of live camera feed (similar to iTunes Redeem Gift Card)

Is there a way to accomplish something similar to what the iTunes and App Store Apps do when you redeem a Gift Card using the device camera, recognizing a short string of characters in real time on top of the live camera feed?
I know that in iOS 7 there is now the AVMetadataMachineReadableCodeObject class which, AFAIK, only represents barcodes. I'm more interested in detecting and reading the contents of a short string. Is this possible using publicly available API methods, or some other third party SDK that you might know of?
There is also a video of the process in action:
https://www.youtube.com/watch?v=c7swRRLlYEo
Best,
I'm working on a project that does something similar to the Apple App Store redeem-with-camera feature you mentioned.
A great starting place on processing live video is a project I found on GitHub. This is using the AVFoundation framework and you implement the AVCaptureVideoDataOutputSampleBufferDelegate methods.
Once you have the image stream (video), you can use OpenCV to process the frames. You need to determine the area of the image you want to OCR before you run it through Tesseract. You have to play with the filtering, but the broad steps you take with OpenCV are as follows (see the sketch after this list):
Convert the images to greyscale using cv::cvtColor(inputMat, outputMat, CV_RGBA2GRAY);
Threshold the images to eliminate unnecessary elements. You specify the threshold value to eliminate, and then set everything else to black (or white).
Determine the lines that form the boundary of the box (or whatever you are processing). You can either create a "bounding box" if you have eliminated everything but the desired area, or use the HoughLines algorithm (or the probabilistic version, HoughLinesP). Using this, you can determine line intersection to find corners, and use the corners to warp the desired area to straighten it into a proper rectangle (if this step is necessary in your application) prior to OCR.
Process the portion of the image with Tesseract OCR library to get the resulting text. It is possible to create training files for letters in OpenCV so you can read the text without Tesseract. This could be faster but also could be a lot more work. In the App Store case, they are doing something similar to display the text that was read overlaid on top of the original image. This adds to the cool factor, so it just depends on what you need.
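To make those broad steps concrete, here is a rough sketch in OpenCV C++; every numeric parameter is a placeholder you would tune against your own camera footage:

    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // Rough sketch of the broad steps above; all numeric parameters are
    // illustrative starting points, not tuned values.
    cv::Mat isolateTextArea(const cv::Mat& frameRGBA) {
        cv::Mat gray, binary;
        cv::cvtColor(frameRGBA, gray, CV_RGBA2GRAY);          // step 1: greyscale
        cv::threshold(gray, binary, 0, 255,
                      CV_THRESH_BINARY | CV_THRESH_OTSU);     // step 2: binarize

        // Step 3: find strong line segments that could bound the text box.
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(binary, lines, 1, CV_PI / 180,
                        80 /* votes */, 30 /* min length */, 10 /* max gap */);

        // From here: intersect the lines to get four corners, then use
        // cv::getPerspectiveTransform + cv::warpPerspective to straighten
        // the region before handing it to Tesseract.
        return binary;
    }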
Some other hints:
I used the book "Instant OpenCV" to get started quickly with this. It was pretty helpful.
Download OpenCV for iOS from OpenCV.org/downloads.html
I have found adaptive thresholding to be very useful; you can read all about it by searching for "OpenCV adaptiveThreshold". Also, if you have an image with very little in between the light and dark elements, you can use Otsu's binarization, which automatically determines the threshold value based on the histogram of the greyscale image.
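For reference, the two binarization calls look like this in OpenCV C++ (the block size and offset for the adaptive version are illustrative starting points):

    #include <opencv2/imgproc/imgproc.hpp>

    void binarizeBothWays(const cv::Mat& gray, cv::Mat& adaptive, cv::Mat& otsu) {
        // Per-neighbourhood threshold: robust to uneven lighting across the frame.
        cv::adaptiveThreshold(gray, adaptive, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C,
                              CV_THRESH_BINARY, 11, 2);
        // Otsu: one global threshold chosen from the histogram
        // (the 0 passed as the threshold is computed automatically).
        cv::threshold(gray, otsu, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
    }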
This Q&A thread seems to consistently be one of the top search hits for the topic of OCR on iOS, but it is fairly out of date, so I thought I'd post some additional resources that I've found useful as of the time of writing this post:
Vision Framework
https://developer.apple.com/documentation/vision
As of iOS 11, you can now use the included CoreML-based Vision framework for things like rectangle or text detection. I've found that I no longer need to use OpenCV with these capabilities included in the OS. However, note that text detection is not the same as text recognition or OCR so you will still need another library like Tesseract (or possibly your own CoreML model) to translate the detected parts of the image into actual text.
SwiftOCR
https://github.com/garnele007/SwiftOCR
If you're just interested in recognizing alphanumeric codes, this OCR library claims significant speed, memory consumption, and accuracy improvements over Tesseract (I have not tried it myself).
ML Kit
https://firebase.google.com/products/ml-kit/
Google has released ML Kit as part of its Firebase suite of developer tools, in beta at the time of writing this post. Similar to Apple's CoreML, it is a machine learning framework that can use your own trained models, but it also has pre-trained models for common image processing tasks, as the Vision framework does. Unlike the Vision framework, it also includes a model for on-device text recognition of Latin characters. Currently, use of this library is free for on-device functionality, with charges for using Google's cloud/SaaS API offerings. I have opted to use this in my project, as the speed and accuracy of recognition seem quite good, and I will also be creating an Android app with the same functionality, so having a single cross-platform solution is ideal for me.
ABBYY Real-Time Recognition SDK
https://rtrsdk.com/
This commercial SDK for iOS and Android is free to download for evaluation and limited commercial use (up to 5000 units as of time of writing this post). Further commercial use requires an Extended License. I did not evaluate this offering due to its opaque pricing.
'Real time' is just a series of images. You don't even need to process all of them, just enough to broadly represent the motion of the device (or the change in camera position). There is nothing built into the iOS SDK to do what you want, but you can use a third-party OCR library (like Tesseract) to process the images you grab from the camera.
I would look into Tesseract. It's an open source OCR library that takes image data and processes it. You can add different regular expressions and look only for specific characters as well. It isn't perfect, but in my experience it works pretty well. Also, it can be installed as a CocoaPod if you're into that sort of thing.
If you want to capture that in real time, you might be able to use GPUImage to catch frames in the live feed and do processing on the incoming images to speed up Tesseract, by applying different filters or reducing the size or quality of the incoming images.
There's a project similar to that on github: https://github.com/Devxhkl/RealtimeOCR

OpenCV detect numbers

I'm using OpenCV on the iPhone and need to detect numbers in an image. I split the image into smaller images so each image has only one number (1-9). All numbers are printed, NOT handwritten.
What would be the best approach to figure out the numbers with OpenCV?
UPDATE:
I have successfully found the numbers and extracted them. They look like this:
http://img198.imageshack.us/img198/5671/101ht.jpg
http://img824.imageshack.us/img824/539/606yu.jpg
When they are extracted, they are all the same size, and so on. I have saved a bunch of images and put them in an OCR dir, where they are categorized by number, like ocr/1/100.jpg, 101.jpg... and ocr/2/200.jpg, 201.jpg...
Then I was going to use the same approach as in the Basic OCR tutorial: http://blog.damiles.com/?p=93
However, I'm programming for iPhone and can't use C++ code (compile errors and so on), and I don't have access to highgui.
I tried using cvMatchTemplate() to match a bunch of images, but it seems to work pretty badly...
Any other ideas I can try?
You could start by reading about Principal Component Analysis (PCA), Fisher's Linear Discriminant Analysis (LDA), and Support Vector Machines (SVMs). These are classification methods that are extremely useful for OCR, and there are libraries in every language, including C++, Python, C#, etc.
It turns out that OpenCV already includes excellent implementations of PCA and SVMs [dead link]. I haven't seen any OpenCV code examples for OCR, but you can use a modified version of face classification to perform character classification. An excellent resource for face recognition code for OpenCV is this website [dead link].
If the numbers are printed, the job is quite simple: you just need to figure out a nice set of features to match. If the numbers are all in one font, you can get away with this approach:
Extract the number
Find the bounding box
Scale the image down to something like 10x8, trying to match the aspect ratio
Do this for a small training set and take the 'average' image for each number
For new images, follow the steps above, but the last step is just an absolute image difference against each of the number templates. Then take the sum of the differences (the pixels in the difference image); the template with the minimum sum is your number.
All of the above are basic OpenCV operations.
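A sketch of that template-matching loop in OpenCV C++ (the 10x8 size matches the suggestion above; the function and variable names are mine):

    #include <opencv2/imgproc/imgproc.hpp>
    #include <limits>
    #include <vector>

    // Classify a cropped digit by absolute difference against per-digit
    // "average" templates, all pre-scaled to the same small size.
    int classifyDigit(const cv::Mat& digit, const std::vector<cv::Mat>& templates) {
        cv::Mat resized;
        cv::resize(digit, resized, cv::Size(10, 8));   // match the template size

        int best = -1;
        double bestScore = std::numeric_limits<double>::max();
        for (int d = 0; d < (int)templates.size(); ++d) {
            cv::Mat diff;
            cv::absdiff(resized, templates[d], diff);  // per-pixel |a - b|
            double score = cv::sum(diff)[0];           // sum over the difference image
            if (score < bestScore) { bestScore = score; best = d; }
        }
        return best; // index of the closest template = the recognized digit
    }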
Basically, your problem is just to classify a feature vector, which is the set of pixel intensities after some preprocessing steps. You can use any classifier for this task, e.g. neural networks, which have an implementation inside OpenCV. You might also try the C library libsvm for Support Vector Machines.
There is a good site related to this problem with a lot of papers and a training database.
Perhaps the simplest and most convenient way is to use an SVM as the ML algorithm
http://opencv.willowgarage.com/documentation/cpp/support_vector_machines.html
and the grey images themselves as feature vectors, as sketched below.
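A sketch of that idea with OpenCV 2.x's CvSVM interface, where each training row is one flattened grey image (the sizes and the helper name are illustrative):

    #include <opencv2/ml/ml.hpp>
    #include <opencv2/core/core.hpp>

    // Each row of trainData is one grey digit image flattened to a float
    // vector (e.g. 10x8 -> 1x80); each row of labels is the digit it shows.
    void trainAndPredict(const cv::Mat& trainData, const cv::Mat& labels,
                         const cv::Mat& sample) {
        CvSVMParams params;
        params.svm_type    = CvSVM::C_SVC;   // plain classification
        params.kernel_type = CvSVM::LINEAR;  // start simple, then try RBF

        CvSVM svm;
        svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);

        float digit = svm.predict(sample);   // sample: one flattened image row
        (void)digit;                         // use the result in your app
    }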
Objective-C++?
Try renaming your .m files to .mm, and you can then use C++ in your iPhone project.
Convolutional Neural Networks are by far the best algorithms for handwritten digits. They are deployed in most real systems, such as USPS mail sorting. Here are a few papers explaining the algorithms:
http://yann.lecun.com/exdb/lenet/
This is a nice open source OCR demo for iPhone. Hope it is useful to you.
Simple Digit Recognition OCR in OpenCV-Python
This might help you out. Converting the code from Python to C++ is not a difficult task, since the OpenCV APIs are the same for both.
Tesseract is also a nice free OCR engine that is readily available for iPhone and allows you to use your own sets of training images:
http://tinsuke.wordpress.com/2011/11/01/how-to-compile-and-use-tesseract-3-01-on-ios-sdk-5/
HOG + SVM (try playing with different kernels), for example:
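As a sketch of the feature-extraction half of that suggestion, HOG descriptors can be computed like this and then fed to an SVM; the window/block/cell sizes are illustrative and just need to be mutually consistent:

    #include <opencv2/objdetect/objdetect.hpp>
    #include <vector>

    // Compute a HOG descriptor for one 20x20 digit image; the resulting
    // float vector becomes one row of the SVM's training matrix.
    std::vector<float> hogFeatures(const cv::Mat& digit20x20) {
        cv::HOGDescriptor hog(
            cv::Size(20, 20),   // window: the whole digit image
            cv::Size(10, 10),   // block size
            cv::Size(5, 5),     // block stride
            cv::Size(5, 5),     // cell size
            9);                 // orientation bins
        std::vector<float> descriptor;
        hog.compute(digit20x20, descriptor);
        return descriptor;
    }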
