After spending over 10 hours compiling Tesseract against libc++ so it works with OpenCV, I'm having trouble getting any meaningful results. I'm trying to use it for digit recognition; the image data I'm passing in is a small (50x50) square image containing either one digit or none.
I've tried both the eng and equ tessdata (from Google Code); the results differ, but neither recognizes any digits correctly. Using the eng data I get '4\n\n' or '\n\n' as the result most of the time (even when there is no digit in the image), with confidence anywhere from 1 to 99.
Using the equ data I get '\n\n' with confidence 0-4.
I also tried binarizing the image, and the results are more or less the same. I don't think binarization should be necessary, though, since the images are already filtered quite well.
I'm assuming something is wrong, because these images should be easy to recognize compared to even the simplest of the example images.
Here's the code:
Initialization:
_tess = new TessBaseAPI();
_tess->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
_tess->SetVariable("tessedit_char_whitelist", "0123456789");
_tess->SetVariable("classify_bln_numeric_mode", "1");
Recognition:
char *text = _tess->TesseractRect(imageData, (int)bytes_per_pixel, (int)bytes_per_line, 0, 0, (int)imageSize.width, (int)imageSize.height);
I'm getting no errors. TESSDATA_PREFIX is set properly and I've tried different methods for recognition. imageData looks ok when inspected.
Here are some sample images:
http://imgur.com/a/Kg8ar
Should this work with the regular training data?
Any help is appreciated; this is my first time trying Tesseract out, and I could have missed something.
EDIT:
I've found this:
_tess->SetPageSegMode(PSM_SINGLE_CHAR);
I'm assuming it must be used in this situation; I tried it, but got the same results.
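For completeness, here's roughly the same flow using the SetImage/GetUTF8Text variant of the API instead of TesseractRect (just a sketch; the dimension variables stand in for whatever the buffer actually uses):

_tess = new TessBaseAPI();
_tess->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
_tess->SetVariable("tessedit_char_whitelist", "0123456789");
_tess->SetPageSegMode(PSM_SINGLE_CHAR); // one digit (or nothing) per image
// Same buffer and strides as the TesseractRect call above
_tess->SetImage(imageData, (int)imageSize.width, (int)imageSize.height,
                (int)bytes_per_pixel, (int)bytes_per_line);
char *text = _tess->GetUTF8Text();
int confidence = _tess->MeanTextConf(); // mean confidence, 0-100
delete [] text;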
I think Tesseract is overkill for this task. You would be better off with a simple neural network trained explicitly on your images. At my company, we recently tried to use Tesseract on iOS for an OCR task (scanning utility bills with the camera), but it was too slow and inaccurate for our purposes (scanning took more than 30 seconds on an iPhone 4, at a tremendously low FPS). In the end, I trained a neural network specifically for our target font, and this solution not only beat Tesseract (it could scan flawlessly even on an iPhone 3GS) but also a commercial ABBYY OCR engine that we were given a sample of.
This course's material would be a good starting point in machine learning.
I am working on a project where my task is to identify a machine part by the part number written on a label attached to it or engraved on its surface. One example of each (label and engraving) is shown in the figures below.
My task is to recognize the 9- or 10-character alphanumeric part number (03C 997 032 D in the first image and 357 955 531 in the second). This seems like an easy task; however, I am having trouble distinguishing the useful information from the rest of the image, i.e. both images contain many other numbers and characters, and I want to focus only on the part numbers. I have tried many things, but with no success so far. Does anyone know image preprocessing methods, or an ML/DL model, that I should apply to get the desired result?
Thanks in advance!
JD
You can use OCR to get all the characters from the image and then use regular expressions to extract the desired patterns.
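A minimal sketch of that idea (the OCR string and the pattern here are hypothetical; adjust the regex to the exact format your part numbers follow):

#include <iostream>
#include <regex>
#include <string>

int main() {
    // Pretend this string came from Tesseract's GetUTF8Text()
    std::string ocr = "Motor housing 03C 997 032 D rev 2 QC 7781";
    // Three blocks of three alphanumerics, plus an optional letter suffix
    std::regex partNo("[0-9A-Z]{3} [0-9A-Z]{3} [0-9A-Z]{3}( [A-Z])?");
    for (std::sregex_iterator it(ocr.begin(), ocr.end(), partNo), end;
         it != end; ++it)
        std::cout << it->str() << std::endl; // prints "03C 997 032 D"
}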
You can use an OCR engine, like Tesseract.
You may also want to clean the images before running the text-recognition system, performing some filtering to remove noise and extra information, such as:
Convert to grayscale (colors are not relevant, are they?)
Crop to region of interest
Canny Filter
A good start could be one of these tutorials:
OpenCV OCR with Tesseract (Python API)
Recognizing text/number with OpenCV (C++ API)
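And a rough sketch of those cleaning steps with the OpenCV C++ API (the file name and ROI rectangle are made up; in practice the crop region would come from wherever you locate the label):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("part_label.jpg");       // hypothetical input
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);      // 1. grayscale
    cv::Mat roi = gray(cv::Rect(100, 50, 400, 120));  // 2. crop to ROI
    cv::Mat edges;
    cv::Canny(roi, edges, 50, 150);                   // 3. Canny filter
    cv::imwrite("preprocessed.png", edges);           // feed this to the OCR
}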
I need to scan a handwritten font. I am using Tesseract OCR v3.02 for that. I have trained the OCR using box files and added dictionary words as well, but I am still not able to get 100% accuracy.
I am trying to scan the following image
So far, the text file I am obtaining looks like this:
We each took baths before bed. Bigfoot was so large that he had to use the bathtub more like a sink trying to clean
up his best. he left a pool of water around the usa It was a mess and I mopped it and removed all of the fur
left behind before taking a bath of my Own. I have to admit it was cguite disgusting. “It”s cguite simple.” Me
assured her. “Mere, I”ll show you.”Iimmy‘s was the favorite burger joint among us kids. Sreat burgers. and
there were video games and pinball machines. Jimmy”s has been around since the fift"es. all our parents used to
go there when they were young. “‘Mot exactly.""Another blanket. Ben. please."“I really hope so. because I feel so
Any help to improve the OCR?
I am trying to determine whether a food package has an error or not. For example:
whether the "McDonald's" logo has misprints, such as a wrong label or wrong colors (I cannot post a picture).
What should I do? Please help me!
It's not a trivial task by any stretch of the imagination. Two images of the same identical object will always differ according to lighting conditions, perspective, shooting angle, etc.
Basically you need to:
1. Process the two images into "digested" data: dominant colors, shapes, etc.
2. Design and run your own similarity algorithm between the two objects
You may want to look at feature detectors in OpenCV: SURF, SIFT, etc.
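For instance, a sketch of that approach using ORB as a freely licensed stand-in for SURF/SIFT (OpenCV 3.x-style API; the file names are placeholders):

#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    // Reference logo vs. a photo of the package under test
    cv::Mat ref  = cv::imread("reference_logo.png", cv::IMREAD_GRAYSCALE);
    cv::Mat test = cv::imread("package_photo.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(ref,  cv::noArray(), kp1, d1);
    orb->detectAndCompute(test, cv::noArray(), kp2, d2);

    // ORB descriptors are binary, so match with Hamming distance
    cv::BFMatcher matcher(cv::NORM_HAMMING, true /*crossCheck*/);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    // Crude similarity: fraction of reference keypoints that found a match
    double score = kp1.empty() ? 0.0 : (double)matches.size() / kp1.size();
    std::cout << "similarity ~ " << score << std::endl;
}

A low score on a package whose reference scores high against itself would hint at a misprint, though you would still need to tune a threshold on real data.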
I just found your question while searching for something else, so I think I may be too late.
If not, I think your problem can easily be solved; the solution has existed for years and is called Sikuli.
Although it is intended for testing purposes, I have been using it in exactly the way you need: comparing a reference image and a production image. Based on OpenCV, it does this very well.
I'm using the cv::PCA class for a face recognition project. I convert photos of faces into one-row vectors, concatenate them into one big array, and feed that to PCA to obtain a new space in which I can try to use distances for recognition. The problem is that computing the PCA from scratch every time I start the program is really time consuming (almost five minutes). I figured out that I need to save the computed PCA to disk and load it when the program starts again.
And here is the problem. As far as I can see, all cv::Mat objects in cv::PCA are of type CV_32F. When I try to save one as a normal picture, it gets converted to an 8-bit image and some data is lost. When I use XML/YAML persistence, the generated file is really big and data also seems to be lost (I saved it, loaded it into another structure, and ran cerr<<sum(pca_orginal.mean==pca_loaded.mean)[0]<<endl to check how big the difference is). Right now I'm trying std::ofstream::write with the std::ofstream::binary flag, and istream::read, but there are some type issues: out.write(_pca.mean.data, _pca.mean.rows*_pca.mean.cols*4 /*CV_32F -> 4*CV_8U*/); generates error: no matching function for call to ‘std::basic_ofstream<char, std::char_traits<char> >::write(uchar*&, int)’. I've also heard about the OpenEXR library and its file format, but I would rather avoid using additional libraries. I'm using OpenCV 2.3.1 and OpenCV 2.2.
edit:
I'm sorry for the confusion. I misread the cv::Mat operator== description and thought it works the opposite way from how it actually does, so sum(pca_orginal.mean==pca_loaded.mean)[0] giving 0 is the worst possible result, not the best. It means that XML/YAML persistence works fine apart from generating huge files. Also, after using a C-style cast I was able to make the binary streams work, but the generated files are also big (over 150 MB).
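For reference, the compile error came from ofstream::write expecting a const char*, so an explicit cast (the C++-style spelling of the C cast mentioned above) is enough:

out.write(reinterpret_cast<const char*>(_pca.mean.data),
          _pca.mean.rows * _pca.mean.cols * sizeof(float)); // CV_32F = 4 bytes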
In the C interface, there are functions cvSave and cvLoad for saving arbitrary matrices. There are probably C++ interface counterparts, too.
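In the C++ interface the counterpart is cv::FileStorage. A minimal sketch for a cv::PCA (note that a file name ending in .gz, e.g. pca.yml.gz, makes OpenCV compress the output, which should help with the file-size problem):

#include <string>
#include <opencv2/opencv.hpp>

void savePCA(const std::string& path, const cv::PCA& pca) {
    cv::FileStorage fs(path, cv::FileStorage::WRITE);
    fs << "mean" << pca.mean
       << "eigenvectors" << pca.eigenvectors
       << "eigenvalues" << pca.eigenvalues;
}

void loadPCA(const std::string& path, cv::PCA& pca) {
    cv::FileStorage fs(path, cv::FileStorage::READ);
    fs["mean"] >> pca.mean;
    fs["eigenvectors"] >> pca.eigenvectors;
    fs["eigenvalues"] >> pca.eigenvalues;
}

Unlike saving as an image, this keeps the full CV_32F precision.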
I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map
proj_x, proj_y <--> cam_x, cam_y for scattered point pairs
My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:
[pgridx, pgridy] = meshgrid(allprojxpts, allprojypts);
fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy);
fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy);
and the reverse for the camera-to-projector mapping.
Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which is apparently too much for LabVIEW to handle).
I then started looking through OpenCV and found the cvRemap function. Unfortunately, that function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick.
So my question is one of the following:
1) How can I interpolate from scattered points to a grid in OpenCV? (i.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?)
2) How can I perform an image transform that maps image 1 to image 2, given a scattered mapping of points from coordinate system 1 to coordinate system 2? This could be implemented in either LabVIEW or OpenCV.
Note: I am tagging this post delaunay, because that's one method of doing scattered interpolation, but the better tag would be "scattered interpolation".
So this ends up being a set of specific fixes for bugs in LabVIEW 8.5. Nevertheless, since they're poorly documented and I've spent a day of pain on them, I figure I'll post them so someone else googling this problem will come across them.
1) Meshgrid bombs. I don't know when this was fixed, but it's definitely a bug in 8.5. Solution: use the meshgrid-like function on the Interpolation & Extrapolation palette instead, or upgrade to LabVIEW 2009, which apparently works (thanks Underflow).
2) Griddata is defective in 8.5. This is badly documented. The 8.6 upgrade notes mention a problem with griddata and the "cubic" setting, but it is in fact also a problem with the default "linear" setting. Solutions, in descending order of kludginess: 1) pass the 'v4' flag, which does some kind of spline interpolation and does not have the bugs; 2) upgrade to at least version 8.6; 3) beat the NI engineers with reeds until they document bugs properly.
3) I was able to use the OpenCV remap function to do the actual transformation from one image to another. I first tried just using the built-in interp2 VI, but it choked on large arrays and gave me out-of-memory errors. On the other hand, it is fairly straightforward to map an IMAQ image to an IPL image, so this isn't that bad, apart from the addition of the outside library.
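For anyone following the same path, a sketch of that last step with the OpenCV 1.0 C API (file names are placeholders; mapx/mapy hold, for each destination pixel, the source coordinates produced by the griddata fit):

IplImage* src  = cvLoadImage("camera.png", 1 /* color */);
IplImage* dst  = cvCloneImage(src);
IplImage* mapx = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);
IplImage* mapy = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);
/* ... fill mapx/mapy with the fitted fitcx/fitcy values from LabVIEW ... */
cvRemap(src, dst, mapx, mapy,
        CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));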