LeadTools SDK implementation in iOS

I need to implement MICR and OCR reading from cheques in iOS via LeadTools, but I am unable to find a LeadTools iOS tutorial for beginners to start from. Please help me understand how to start a LeadTools SDK implementation in iOS.

It is not clear from your question whether you found the tutorial but faced problems with it, or you haven't found it to begin with.
Either way, here's the best approach to implementing OCR and MICR recognition on iOS using LEADTOOLS:
First, you need to install the OS X/iOS toolkit package. If you don't already own the toolkit, you can download the free evaluation edition from www.leadtools.com.
After that, follow this tutorial:
https://www.leadtools.com/help/leadtools/v18/dh/to/leadtools.topics~leadtools.topics.gettingstartedwithleadtoolsforiososx.html
If you run into problems with any step, or get errors when you complete the tutorial, contact LEAD support with full details of the problem.
Once you get the starting tutorial up and running, move on to the two demos for OCR and MICR. Their source code is installed with the package in these locations:
Examples/Xcode/iOS/OcrDemo
Examples/Xcode/iOS/MicrDemo
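For orientation, here is a rough Swift sketch of the shape an OCR/MICR flow takes with LEADTOOLS. All class and method names in it (LTOcrEngineManager, createPage, and so on) are assumptions written as comments, not verified API; treat the installed OcrDemo and MicrDemo sources as the authoritative reference.

import UIKit
// import Leadtools     // module names are assumptions; check the demo projects
// import LeadtoolsOcr

// Hypothetical sketch only: the real LEADTOOLS iOS class/method names may differ.
func recognizeMicrLine(from image: UIImage) -> String? {
    // 1. Create and start an OCR engine, e.g.:
    //    let engine = LTOcrEngineManager.createEngine(.advantage)
    // 2. Restrict recognition to the MICR (E-13B) characters used on cheques
    //    by selecting the MICR character set in the engine settings.
    // 3. Wrap the UIImage in a page, recognize it, and return the text, e.g.:
    //    let page = engine.createPage(from: image)
    //    page.recognize(nil)
    //    return page.text
    return nil // placeholder until the real API calls are filled in from the demos
}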

Related

How to apply IBM CPLEX to my iOS projects?

I am new to iOS development, and I'm working on a project that deals with shift scheduling problems.
I was reading an online paper in which the authors mentioned that they used CPLEX to solve their linear programming problems.
So I'm wondering: is there anything I need to know to run my scheduling constraints on CPLEX but get the results in Swift code (Xcode)?
What you could also try is to use CPLEX in the cloud, which can be called from iOS.
You can find an example at https://developer.ibm.com/docloud/blog/2016/03/17/docloud-and-bluemix-demo/
You can try that example on your smartphone and then have a look at the how-to.
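As a rough illustration of that cloud approach, here is a minimal Swift sketch that posts a model to a solver REST endpoint and prints the reply. The URL, header name, and JSON payload below are hypothetical placeholders, not the actual DOcplexcloud contract; the linked demo shows the real request flow.

import Foundation

// Placeholder endpoint and API-key header; substitute the real DOcplexcloud values.
let endpoint = URL(string: "https://example.com/docloud/jobs")!
var request = URLRequest(url: endpoint)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.setValue("YOUR_API_KEY", forHTTPHeaderField: "X-Api-Key") // assumed header name

// A toy scheduling model wrapped in JSON, just to show the round trip.
let body = ["model": "maximize x + y subject to x + 2y <= 14"]
request.httpBody = try? JSONSerialization.data(withJSONObject: body)

URLSession.shared.dataTask(with: request) { data, _, error in
    if let data = data, let reply = String(data: data, encoding: .utf8) {
        print("Solver replied: \(reply)") // parse the real JSON in practice
    } else if let error = error {
        print("Request failed: \(error)")
    }
}.resume()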
Regards
CPLEX offers some libraries written in C and C++.
Xcode allows the use of this kind of library, so I think you'll be able to work with them inside your Swift project.
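To illustrate that native route: C libraries reach Swift through a bridging header. Assuming the CPLEX Callable Library headers and a static library built for your target were linked into the project (CPLEX does not officially ship iOS binaries, so this is purely a sketch), the wiring would look roughly like this:

// In MyProject-Bridging-Header.h (set under Build Settings > Objective-C Bridging Header):
//   #include <ilcplex/cplex.h>   // assumes the CPLEX headers and library are linked

import Foundation

// CPXopenCPLEX/CPXcloseCPLEX are the Callable Library's standard open/close calls;
// whether a CPLEX build exists for your target platform is an assumption here.
func openSolver() {
    var status: Int32 = 0
    var env = CPXopenCPLEX(&status)   // create a CPLEX environment
    guard status == 0, env != nil else {
        print("Failed to open CPLEX, status \(status)")
        return
    }
    print("CPLEX environment ready")  // build and solve your model here
    CPXcloseCPLEX(&env)               // release the environment when done
}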
Hope this helps!

Can someone please explain the differences between Cocos2d-Swift, SpriteBuilder, Xcode and CocoaPods?

I'm completely confused and I don't know where to start asking questions. I tried googling, but the terminology is confusing and I'm not sure what any of these things does (except for Xcode). Can someone explain like I'm 5?
I'm on the cocos2d-swift website, and after reading the getting started section it says "From this point onwards, using SpriteBuilder is optional." I don't know what they mean by that.
How do each of these correlate with each other?
Also, how are an API documentation browser and a code snippet manager useful to an everyday iOS developer?
cocos2d-swift is a framework that enables you to build things like sprite-based games quickly.
SpriteBuilder is a tool that helps you build your own multilayered sprites (images and animations grouped into a single package -- e.g. Mario, a Goomba, a Fireflower fireball, etc.).
Xcode is a developer environment in which you write your source code, compile, distribute, and test.
CocoaPods is a tool that fetches and manages framework/SDK dependencies.
You would use CocoaPods to fetch the cocos2d-swift framework so that you could build a sprite-based game in Xcode using sprites you generated in SpriteBuilder.
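Concretely, the CocoaPods part is driven by a small Ruby file called a Podfile in your project root. A minimal sketch might look like this (the pod name below is an assumption; check cocoapods.org for the exact spelling):

platform :ios, '8.0'

target 'MyGame' do
  pod 'cocos2d-swift'   # assumed pod name; verify it on cocoapods.org
end

Running pod install then fetches the framework and generates an .xcworkspace, which you open from that point on instead of the .xcodeproj.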
Not sure what Cocos2d is, but Swift is the latest programming language from Apple for both OS X and iOS development.
http://en.wikipedia.org/wiki/Swift_(programming_language)
SpriteBuilder is a framework used to create games for iOS very quickly. Think of it as a game engine.
http://www.spritebuilder.com/about
Xcode is the IDE (integrated development environment) that you use when writing native OS X and iOS applications. It's awesome!
CocoaPods is a way to load in third-party libraries and frameworks without having to manually install them on your own. It also makes it very easy to keep the frameworks up to date. Pods also make your project more portable, as it's much easier to install an application with multiple dependencies via Pods.
http://cocoapods.org
A documentation browser is good if you want to have access to documentation while offline. However, I almost always use Google to find what I'm looking for regardless of what technology I'm working on. Google is just the best way to search.
Finally, I'd start off with this book. I read the first edition years ago, and it made things very easy for me to understand.
http://www.bignerdranch.com/we-write/ios-programming.html
Hope this helps!
Here are some basics:
Xcode (a program) - Most of your iOS development will happen here: coding, creating the app, etc.
Think of an SDK as a suite of commands or tools you can use through its APIs (API - application programming interface).
Cocos2d (an SDK) - A game engine: a software development kit for creating games. You would pull this library of code and tools into Xcode to use it.
SpriteBuilder (an SDK) - A suite of tools for building games. Just like Cocos2d, you would pull this into Xcode to make use of it as you code.
CocoaPods - A tool for linking/loading SDKs into Xcode and easily updating them.
Moral of the story: Xcode is the software you will use for everything. Everything else is just additional libraries of code you can pull in.

Looking for an iOS version of ShowcaseView

I'm looking for a library similar to https://github.com/amlcurran/ShowcaseView
I want to be able to implement forced tutorials on new users who open my application for the first time. Does anyone know where I can find a library that helps make this easy?
You can try https://github.com/rahuliyer95/iShowcase
It is a quite similar iOS implementation of ShowcaseView for Android (https://github.com/amlcurran/ShowcaseView).
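Whichever showcase library you pick, the "only on the first launch" part is usually just a small UserDefaults check. Here is a minimal Swift sketch; showTutorialOverlay() is a hypothetical helper standing in for whatever call your chosen library (iShowcase or another) actually exposes:

import UIKit

final class WelcomeViewController: UIViewController {
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let defaults = UserDefaults.standard
        // Run the forced tutorial only the first time the app is opened.
        if !defaults.bool(forKey: "hasSeenTutorial") {
            showTutorialOverlay()
            defaults.set(true, forKey: "hasSeenTutorial")
        }
    }

    private func showTutorialOverlay() {
        // Hypothetical: call into iShowcase (or a similar library) here
        // to highlight the UI elements you want to walk the user through.
    }
}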

Adding poi in wikitude phonegap app

I am working on an app in which I've used the Wikitude AR Cordova plugin. I've implemented it successfully, and the app runs fine on the device, but with no POIs. I want to add POIs to the app; what is the procedure for doing that?
Thanks
You can have a look at the examples and tutorials that are included in the Wikitude SDK package you downloaded. The tutorial series in particular should give you a good overview of how to use the Architect API to create objects in AR. There is also a quick start guide which shows you what you need to do to create a simple AR experience.

Implementing ASIFT in Android

I am new to both OpenCV and Android. I have to detect objects in my project, so I have decided to use ASIFT. However, the code they have given here is very lengthy: it contains lots of C files, and it doesn't have OpenCV support.
Some searching on SO itself suggested that it is easier to connect the ASIFT code to the OpenCV library, but I can't figure out how to do that. Can anyone help me with a link, or by telling me the steps I should follow to add ASIFT to my OpenCV library, which I can then use in my Android application?
Also, I would like to know whether using the Android NDK along with JNI to call the C files, or using the Android SDK along with a prebuilt binary package, would be the more suitable option for my Android project (object detection)?
Finally, I solved my problem by using the source code given on the ASIFT developers' website. I built all the source files together into my own library using make, and then called the required functions from the library using JNI.
It works for me, but execution takes approximately 2 minutes on an Android device. Does anyone have ideas about ways to reduce the running time?
They used very simple and slow brute-force matching (just as a proof of concept). You can use the FLANN library instead, and it will help a lot. http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
