OCR on image - iOS

Is it possible to perform OCR on an image (for example, from assets) instead of live video with Anyline, Microblink or other SDKs?
Tesseract is not an option due to my limited time.
I've tested it, but the results are very poor. I know that they can be improved with OpenCV or something, but I have to keep a deadline.
EDIT:
This is an example of what the image looks like when it reaches the OCR SDK.

I am not sure about the others, but you can use the Microblink SDK for reading from a single image. It is documented here.
Reading from a video stream will give much better results, but it all depends on what exactly you are trying to do. What are you trying to read?
For reading barcodes or MRZ from, e.g., identity documents, it works pretty well. For raw-text OCR it is not quite as good, but it is not really intended for that anyway.

https://github.com/garnele007/SwiftOCR
Machine-learning based, trainable on different fonts, characters, etc., and free.
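For reference, usage is about as small as it gets. This is a sketch based on the project's README; `cardImage` stands in for whatever UIImage you load from your assets:

```swift
import SwiftOCR
import UIKit

let swiftOCR = SwiftOCR()

// `cardImage` is a placeholder for the image you load from assets.
let cardImage = UIImage(named: "card")!

// Recognition runs asynchronously; the completion handler receives the text.
swiftOCR.recognize(cardImage) { recognizedString in
    print(recognizedString)
}
```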

Related

What is the way to parse a string of a well-known format from an image on iOS (is there a library created specifically for this purpose)?

Local travel cards in Saint-Petersburg, Russia have got huge id numbers that aren't easy to read and type into a web page when topping up the card online. So I want to build a small app that would take a photo of a travel card and parse the number out.
The task is a bit easier than free-form recognition:
- the card has a very well known size
- the id numbers have a known size, are located in a very well known location on the card, and are numbers only, no letters (okay, I think there are two variations, and maybe they will add 1-2 more in the future)
- even the font is known in advance
- even the first several digits are the same for most of the cards (so far only two prefixes are used)
How would you do it? Are there any libraries tuned not for general OCR, but for a "hinted" OCR like I need?
Best regards,
Artem.
P.S. Actually, a free/cheap web service for this task would also be good enough.
Yes. Google maintains an OCR library called Tesseract, and there is an iOS SDK on GitHub that you can import into your application. Its documentation explains how to set it up in your app. It has methods that return the text of the card as a string. BUT it will be ALL of the text on the card. So the best thing to do would be to:
1 "clip" the original image to extract a sub image that displays only the portion of the card you wish to get the numbers from.
2 Process this sub image through Tesseract to retrieve the string you are looking for.
3 Then parse through the string and pick out the data that you need.
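A rough sketch of those three steps, assuming the gali8/Tesseract-OCR-iOS wrapper (`G8Tesseract`); the crop rectangle and the digit whitelist are placeholder values you would tune to your card layout:

```swift
import TesseractOCR
import UIKit

// Sketch of the clip -> OCR -> parse pipeline. The crop rectangle is a
// hypothetical value; measure the real one from the card layout.
func readCardNumber(from cardPhoto: UIImage) -> String? {
    // 1. Clip: crop the photo down to the region containing the id number.
    let numberRegion = CGRect(x: 40, y: 300, width: 520, height: 60)
    guard let cropped = cardPhoto.cgImage?.cropping(to: numberRegion) else { return nil }

    // 2. Process: run Tesseract on the sub-image only.
    guard let tesseract = G8Tesseract(language: "eng") else { return nil }
    tesseract.charWhitelist = "0123456789" // the id is digits only
    tesseract.image = UIImage(cgImage: cropped)
    tesseract.recognize()

    // 3. Parse: keep just the digits from whatever came back.
    return tesseract.recognizedText?.filter { $0.isNumber }
}
```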
But just be warned, it can be a bit quirky. This SDK tends to recognize words best from images that are scanned, not "taken a picture of". Although it is an advanced piece of technology, it isn't perfect. So to get it to work as well as possible for you, try to get scanned copies of the originals.
Best of luck.
The ideal solution for you would have three components:
1) Detection of the card. This is useful because with detection, end users have a much easier time actually using the scanner, since they can place the phone above the card in an arbitrary orientation.
2) Accurate OCR component. Ideally, customizable for the exact font you have on the card and for the exact position on the card.
3) Parsing mechanism. This lets you obtain the exact string written on the card without writing a huge amount of OCR post-processing code.
The BlinkID SDK has all of this. It has a preset for detecting cards in the ID-1 format, it has an integrated OCR engine, and it provides a RegexParser, where you can define the exact format of the text you're trying to extract from the document.
BlinkID was initially built for scanning ID documents, which have very similar properties to the problem you're trying to solve.
Note: I'm one of the developers working on BlinkID.
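Whichever SDK produces the raw OCR string, the parsing step (point 3 above) on its own can be a plain regular expression over the result. A generic Foundation sketch, with a made-up prefix/length pattern standing in for the real card format:

```swift
import Foundation

// Hypothetical pattern: one of two known prefixes followed by a run of
// digits. Substitute the real prefixes and length for your cards.
let pattern = "(9643|9640)[0-9]{10,15}"

func extractCardId(from ocrText: String) -> String? {
    guard let regex = try? NSRegularExpression(pattern: pattern) else { return nil }
    let fullRange = NSRange(ocrText.startIndex..., in: ocrText)
    guard let match = regex.firstMatch(in: ocrText, options: [], range: fullRange),
          let range = Range(match.range, in: ocrText) else { return nil }
    return String(ocrText[range])
}
```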

iOS - Object recognition in images

This is a known area and OpenCV might well be involved, but I would still be starting from scratch.
How has something like Evernote's Scannable app been developed? I mean, how does it automatically recognize a document using the camera and then extract it?
What are the UIKit frameworks involved here, and which libraries may have been used? Any nice articles or blogs? How does one go about understanding this?
This tutorial might be what you need. Although the tutorial is in Python, all of these functions are available in the iOS bindings.
Here are the results you will get.
Once you have the ROI, i.e. the page, you should run OCR to detect the characters. For this you can use Tesseract, and this tutorial might be helpful.
For anyone coming here now, there are better solutions. CIDetector does precisely this. To have it work on a live camera feed, you'd use it on the live CIImages generated by AVFoundation (rendered using Metal or OpenGL).
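For illustration, a minimal sketch of the CIDetector approach on a still image: detect the most prominent rectangle and flatten it with a perspective correction, the way a scanner app does:

```swift
import CoreImage

// Find the most prominent rectangle (e.g. a document page) in a still image
// and flatten it with a perspective correction, as a scanner app would.
func extractDocument(from image: CIImage) -> CIImage? {
    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let page = detector?.features(in: image).first as? CIRectangleFeature else {
        return nil
    }
    // Warp the detected quadrilateral into a flat, axis-aligned image,
    // ready to be handed to an OCR step.
    return image.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": CIVector(cgPoint: page.topLeft),
        "inputTopRight": CIVector(cgPoint: page.topRight),
        "inputBottomLeft": CIVector(cgPoint: page.bottomLeft),
        "inputBottomRight": CIVector(cgPoint: page.bottomRight)
    ])
}
```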

Converting an image to Doc

I am trying to make an application which creates an editable document file (doc or pdf) from an image. I am planning to use Tesseract for extraction of the text, but I am not yet sure how to get the basic formatting of the text (size, bold, italic, underline) and the images that might be present in the document image. I am planning to use J2EE to make a web-based app (I have to use J2EE). I think I might be able to recognize the components and formatting of the document using OpenCV, but I am not really sure.
Given that you are planning to use Tesseract for the basic OCR capabilities, try looking into its hOCR-formatted output. That includes quite a lot of additional information about font size, font face, position, etc.
You can find a description of hOCR here:
https://docs.google.com/document/d/1QQnIQtvdAC_8n92-LhwPcjtAUFwBlzE8EWnKAxlgVf0/preview#heading=h.e903b9bca924
If that doesn't work out, it depends on how much effort you want to put into Tesseract. Its internal APIs (available in Java via Tess4J, among others) do provide much of the information that you would need to reconstruct the page layout.
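If you go the hOCR route, the output is XHTML, so any XML parser can walk it. A minimal sketch using Foundation's XMLParser; the sample fragment and its `x_fsize` field are illustrative, and Tess4J exposes the same data on the Java side:

```swift
import Foundation

// Walk an hOCR (XHTML) document and collect each word with its `title`
// attribute, which carries "bbox x0 y0 x1 y1" plus optional font details.
final class HOCRWordCollector: NSObject, XMLParserDelegate {
    private var pendingTitle: String?
    private(set) var words: [(text: String, title: String)] = []

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String]) {
        // hOCR marks individual words with class="ocrx_word".
        if attributeDict["class"] == "ocrx_word" {
            pendingTitle = attributeDict["title"]
        }
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        if let title = pendingTitle {
            words.append((string, title))
            pendingTitle = nil
        }
    }
}

// Illustrative fragment; real output comes from Tesseract's hOCR mode.
let sampleHOCR = """
<div class='ocr_page'>
  <span class='ocrx_word' title='bbox 100 200 250 240; x_fsize 12'>Hello</span>
</div>
"""

let parser = XMLParser(data: Data(sampleHOCR.utf8))
let collector = HOCRWordCollector()
parser.delegate = collector
_ = parser.parse()
collector.words.forEach { print($0.text, "->", $0.title) }
```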

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether real-time audio input matches one of 144 given (and comfortably distinct) phoneme pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this, and it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far greater, more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
P.S. I need a solution which runs pretty much in real-time: even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then it glows to show whether they are singing the correct pitch.
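To be concrete about the capture side, this is roughly what I have in mind on iOS: a live microphone tap via AVAudioEngine, with `classifyPhonemePair` as a placeholder for whatever matcher gets suggested:

```swift
import AVFoundation

// Placeholder for whatever matcher (Sphinx, HTK model, custom DSP) ends up
// doing the classification.
func classifyPhonemePair(_ buffer: AVAudioPCMBuffer) {
    // ...analyse samples, update the blink/glow UI on the main thread...
}

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Deliver microphone buffers continuously; latency is roughly
// bufferSize / sampleRate (about 23 ms at 44.1 kHz and 1024 frames).
input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    classifyPhonemePair(buffer)
}

try engine.start()
```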
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.

Automated Webcam Application / Hardware Problems

I am starting to develop an automated webcam application. The goal is to automatically take pictures, do some image processing and then upload the results to an FTP site. All of these tasks seem simple.
However, I am having a hard time finding a decent camera. I don't want to use a simple webcam or HD webcam, because the image quality of still frames isn't very good.
I'm also having a hard time finding an affordable digital camera that supports USB snapshot or control.
My second concern is the development itself. I'm not quite sure which programming language to use. I have experience with AS3, Processing, Java, and some simple C++ and OpenCV.
Do you have a clue?
Regarding the camera: there are pretty good webcams around, some with HD quality. Look at the cameras from Logitech (I tested their API and it is quite good); an HD camera retails for $99, which is very cheap. If you are looking for something better, I would go with Nikon, as they also have a pretty good API for C#/C++. You can get a basic SLR with a simple 28mm lens for $500. Don't use a PowerShot, as Canon stopped supporting their API. Whatever camera you decide to buy, make sure a proper API is available, maintained, and free.
Regarding development, I would go with C#/Java as they are easier than C++. There are quite a lot of image-processing libraries for C#/Java; just make sure that the camera comes with an API that fits your chosen language.
Good luck.
Generally (from experience), most USB cameras that show up as an imaging device in Windows can be used with JAI [Java Advanced Imaging]. Additionally, on the .NET/C++ side, the same cameras can be used through DirectShow as a capture device. Java/C# will make development easier, but expect to lose some performance (even with the best of optimizations). Also, you can only perform up to the speed of the camera and of the data line running from the camera to the computer (USB 1.0 will seriously limit a decent framerate).
First, get the image into RAM:
If you are using CHDK, I suggest you get the image copied from camera memory to RAM using the scripting languages supported by CHDK - you can get help with this from the CHDK forum, http://chdk.setepontos.com/index.php.
Or, if that's difficult, you can continuously copy the image to the hard disk and load it into RAM from there. (You need to take care of, i.e. delete, the mass of images that accumulates on the hard disk in a short period of time!)
This sounds like a 'brute force' approach, but it will get your work going while you are researching the correct approach.
Perform the image processing:
Once the image is in RAM, you can apply your image-processing algorithms as usual, e.g. using the OpenCV library.
Hope this helps.
