I am new to programming in general and have recently gotten into app development. I am using Tesseract for iOS and I can't seem to figure out how to get it to read only numbers.
There is an identical question posted by Alex G here, and I have the same files/problem as he does. The answer, apparently, is:
Go to the tessdata\configs\digits file. If you're using the API, then call SetVariable("tessedit_char_whitelist", "0123456789-."); You use the SetVariable API if you want to programmatically call Tesseract, via Objective-C, for instance.
Except I still do not understand how to do this. Where is this command line? I am calling Tesseract from Xcode, not from a command line. I also do not have this tessdata\configs\digits file.
If somebody could please help me I would really appreciate it.
Thanks!
Ted
If you follow the How To: Compile and Use Tesseract (3.01) on iOS (SDK 5) article, you would place the SetVariable statement after the Init call, as follows:
tesseract->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
tesseract->SetVariable("tessedit_char_whitelist", "0123456789-.");
I am new to iOS development!
And I'm working on a project that deals with shift scheduling problems.
I was reading an online paper in which the authors mentioned that they used CPLEX to solve their linear programming problems.
So I'm wondering: is there anything I have to know to solve my scheduling constraints with CPLEX but get the results in my Swift code (Xcode)?
What you could also try is to use CPLEX in the cloud, which can be called from iOS.
You can find an example at https://developer.ibm.com/docloud/blog/2016/03/17/docloud-and-bluemix-demo/
You can try that example on your smartphone and then have a look at the how-to.
Regards
CPLEX offers libraries written in C and C++.
Xcode allows the use of these kinds of libraries, so I think you'll be able to work with them inside your Swift project.
Hope this helps!
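To make the bridging idea concrete, here is a minimal Swift sketch. The assumptions: you write a thin C wrapper around the CPLEX Callable Library (the `solve_schedule` function below is hypothetical, not part of CPLEX) and declare its prototype in your bridging header, so only a plain C signature crosses into Swift.

```swift
// Hypothetical C wrapper, declared in the bridging header as:
//   double solve_schedule(const double *shiftHours, int count);
// Internally it would call the CPLEX Callable Library
// (CPXopenCPLEX, CPXcreateprob, CPXlpopt, CPXgetobjval, ...).

let shiftHours: [Double] = [8, 8, 4, 6]

// Swift arrays bridge to C pointers via withUnsafeBufferPointer.
let objective = shiftHours.withUnsafeBufferPointer { buffer in
    solve_schedule(buffer.baseAddress, Int32(buffer.count))
}
print("Optimal objective value: \(objective)")
```

Keeping all CPLEX headers inside the C wrapper avoids exposing its C++ parts to Swift, which Swift cannot import directly.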
Hello, I have found two links about AVCam:
Apple: https://developer.apple.com/library/ios/samplecode/AVCam/History/History.html
alex-chan/AVCamSwift: https://github.com/alex-chan/AVCamSwift
The first link has demo files that work perfectly, but it's in Objective-C. Can someone point me to documentation on converting Objective-C to Swift?
From the second link I have downloaded the files, but the project will not run on my iPhone 4S. Can someone tell me why?
I would like to have a Swift version so I can easily adopt it into my Swift build. Thanks again, SO!
If you have a working Objective-C version, why not just import it with a bridging header? There is no single document about converting Objective-C to Swift; if you really want to convert it, you are going to need to do it line by line.
Also, what exactly are you trying to do? Get a live camera feed displayed? These docs have been ported to Swift and would suit that purpose, but you would need to get the input port first.
https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVCaptureVideoPreviewLayer_Class/index.html#//apple_ref/occ/clm/AVCaptureVideoPreviewLayer/layerWithSession:
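If a live preview is all you need, a minimal Swift sketch looks like the following. It uses current AVFoundation API names (which differ slightly from the Swift 1 era this question dates from); error handling and the camera-usage Info.plist entry are omitted.

```swift
import AVFoundation
import UIKit

// Minimal sketch: show a live camera feed full-screen in a view controller.
final class CameraViewController: UIViewController {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Get the default video camera and attach it as the session input.
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // The preview layer renders whatever the session captures.
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        session.startRunning()
    }
}
```

This is roughly what the AVCam sample does for its preview, minus the capture and focus logic.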
Update on how to import Obj-C headers: a newer way to create a bridging header that isn't talked about much yet is to just create any normal .m or Objective-C file, then select Yes when Xcode asks about creating a bridging header; it configures everything for you.
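For reference, the bridging header itself is just a list of imports. Assuming you want the AVCam controller visible from Swift and its header is named AVCamViewController.h (a hypothetical name; use whatever the sample's headers are actually called), it would contain:

```objc
// ProjectName-Bridging-Header.h (Xcode names and registers this file for you)
// Import any Objective-C headers you want visible from Swift:
#import "AVCamViewController.h"
```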
That being said, it may be worthwhile to play around with the basics a bit more and follow a few guides before attempting to implement this type of feature, if you are having trouble following the links.
Here is a random application-creation guide http://www.ioscreator.com/tutorials/calculator-tutorial-in-ios8-with-swift that should teach you a lot. I would recommend following and reading through material like this until you have a bit more of a footing and can come up with a more exact question. No one here is going to rewrite the Apple program for you, and your questions are extremely broad.
I'm trying to learn how to do bluetooth streaming on the iOS. In the sample code mentioned in Technical Q&A QA1753 there is a reference to another sample code called SRVResolver:
If you want the callbacks to run on a specific run loop, you can use DNSServiceRefSockFD to get the DNS-SD socket, wrap that in a CFSocket, and then schedule the CFSocket on the run loop. The SRVResolver sample code shows an example of this.
http://developer.apple.com/library/mac/#samplecode/SRVResolver/
However, that link no longer exists on the Apple dev site, and I couldn't find an example of it anywhere else on the web. Can anyone direct me to where I can find it?
SRVResolver does not seem to exist in OS X 10.8 docset. It can be found in 10.6 and 10.7 docsets.
In 10.8, there's the DNSSDObjects example, which looks similar, but I haven't looked closely at what it does. QA1753 was updated to refer to this new sample.
Googling for SRVResolver filetype:m did not produce any results, but older docsets should still be available for download from within Xcode's Preferences.
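The technique QA1753 describes (DNSServiceRefSockFD to get the socket, wrap it in a CFSocket, schedule it on a run loop) is short enough to sketch. The following Swift version is an illustration of the pattern, not the actual SRVResolver code; error handling is omitted.

```swift
import Foundation
import dnssd

// Sketch: schedule an already-created DNSServiceRef (e.g. from
// DNSServiceQueryRecord) on the current run loop, per QA1753.
func scheduleOnRunLoop(_ sdRef: DNSServiceRef) {
    let fd = DNSServiceRefSockFD(sdRef)   // underlying DNS-SD socket

    // Pass sdRef through the CFSocket context, since the C-style
    // callback below cannot capture Swift variables.
    var context = CFSocketContext()
    context.info = UnsafeMutableRawPointer(sdRef)

    let socket = CFSocketCreateWithNative(
        nil, fd, CFSocketCallBackType.readCallBack.rawValue,
        { _, _, _, _, info in
            // The socket became readable: let DNS-SD deliver the
            // pending result to the callback you registered earlier.
            _ = DNSServiceProcessResult(OpaquePointer(info!))
        },
        &context)

    let source = CFSocketCreateRunLoopSource(nil, socket, 0)
    CFRunLoopAddSource(CFRunLoop.getCurrent(), source, CFRunLoopMode.defaultMode)
}
```

DNSSDObjects wraps the same idea in Objective-C classes, which is why QA1753 now points there instead.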
I've been searching for a whole day for how to set up PanoramaGL, and none of the answers I found addressed my questions. Maybe that's because I am new to iOS development and started with iOS 5 and all the cool ARC features. I did find a pretty comprehensive guide at http://www.codeproject.com/Articles/60635/Panorama-360-iPod-Touch-iPhone but it's a little out of date for me; I cannot follow it in Xcode 4.3 with the iOS 5.0 SDK.
So here is the question: assuming PanoramaGL and HelloPanorama work perfectly fine in whatever Xcode and SDK versions they were created with, is there a way, without any code modification, that I can import the library and use the API in my app developed for iOS 5? Of course I don't mind some minor modification, and I did dive into the code and comment out all the retain/release calls, but weird errors keep popping up. I really need help here.
If it finally turns out to be impossible to reuse it in iOS 5.0, I will probably need to write the whole thing line by line with my own understanding of the complicated panorama algorithm...
Thanks so much for the help!
It seems someone is working on another library based on PanoramaGL. It works on iOS 5.
See http://code.google.com/p/tk-panorama/
The new version of PanoramaGL 0.1 r2 was released, please check http://code.google.com/p/panoramagl/. This version runs on iOS >= 4.x and supports Hotspots.
Please check the HelloPanoramaGL example.
I'm currently using Tesseract for iOS, following Nolan Brown's example. It works OK, but I need it to start picking up a new font (which I have in .ttf format) that will always be numbers.
I have found questions on Stack Overflow about Tesseract learning fonts, which all point to the Google guides on how to teach Tesseract a new font using the command line. But I'm already using a compiled copy of the lib from Nolan's example.
How can I teach Tesseract a new font? Will I need to recompile the lib for iOS? How do I do this?
You might try training a new "traineddata" file using these instructions.
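Training happens entirely outside the app, so no iOS recompilation is needed: you produce a .traineddata file on your desktop and bundle it into the tessdata folder your app ships. A rough sketch of the Tesseract 3.x command-line pipeline follows; the language code `num` and font name `myfont` are example names, not anything from your project.

```shell
# Describe the font's style flags (italic, bold, etc.; all zero here).
echo "myfont 0 0 0 0 0" > font_properties

# Generate a box file from a page image rendered in the new font.
tesseract num.myfont.exp0.tif num.myfont.exp0 batch.nochop makebox

# ...hand-correct num.myfont.exp0.box in a box editor, then train:
tesseract num.myfont.exp0.tif num.myfont.exp0 nobatch box.train
unicharset_extractor num.myfont.exp0.box
mftraining -F font_properties -U unicharset -O num.unicharset num.myfont.exp0.tr
cntraining num.myfont.exp0.tr

# Rename the outputs with the language prefix, then combine them
# into num.traineddata, which goes into the app's tessdata folder.
mv inttemp num.inttemp; mv pffmtable num.pffmtable
mv normproto num.normproto; mv shapetable num.shapetable
combine_tessdata num.
```

Since your font is digits-only, pairing the new traineddata with the tessedit_char_whitelist variable from the earlier answer should further constrain the output.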