Scanner API for iOS [closed] - ios

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
I'm looking for a scanner library to embed in a new app I'm developing, so that it can scan documents (to PDF or other formats) using the built-in camera of the iPhone/iPad. Is anybody aware of such a library (available for free or cheap, of course)? Thanks in advance.

The Genius Scan SDK allows developers to integrate a document scanning module into mobile apps. Both iOS and Android.
It's a full-fledged image processing SDK rather than a bare API: it includes the core features needed for capturing documents on mobile (edge detection, distortion correction, several kinds of image enhancement, and PDF/JPEG generation). It doesn't require installing third-party apps.
The SDK is native for iOS and Android, and doesn't support OCR at this time. It's derived from the application of the same name (free on both app stores), which provides a good preview of the SDK.

So you are looking for an Optical Character Recognition (OCR) iOS SDK.
There are two basic types of OCR SDKs: offline and cloud-based.
Here are a few options:
ABBYY, which is a great OCR engine; they have both an offline and a cloud-based solution. So far it is the best OCR engine for iOS, with very good performance and very good precision, but it isn't cheap. You have to contact the sales team and provide information about your project to get a demo SDK.
Tesseract (iOS wrapper here), which is Google's open-source project. It is free, but its performance is considerably worse than ABBYY's engine. It is very flexible and has a big community.
There are also some more that I don't have any experience with:
Pixelnetica
OCR Api Service
VeryPDF Cloud
First you should let the user take a good photo of the desired document, then crop and scale it for the most accurate picture, and only after that submit it to the OCR engine.
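That capture/crop/submit pipeline is largely handled by libraries such as OpenCV, but one small step is easy to sketch: before doing the perspective warp, the four detected corners of the document must be put into a consistent order. A minimal, language-agnostic illustration in pure Python (the point coordinates are made up for the example):

```python
def order_corners(points):
    """Order four (x, y) corner points as top-left, top-right,
    bottom-right, bottom-left, ready for a perspective warp."""
    # The coordinate sum is smallest at the top-left corner and
    # largest at the bottom-right; the difference (y - x) is
    # smallest at the top-right and largest at the bottom-left.
    by_sum = sorted(points, key=lambda p: p[0] + p[1])
    top_left, bottom_right = by_sum[0], by_sum[3]
    by_diff = sorted(points, key=lambda p: p[1] - p[0])
    top_right, bottom_left = by_diff[0], by_diff[3]
    return [top_left, top_right, bottom_right, bottom_left]

corners = [(90, 110), (10, 100), (15, 5), (80, 10)]
print(order_corners(corners))  # [(15, 5), (80, 10), (90, 110), (10, 100)]
```

In a real app, the ordered corners would then feed something like OpenCV's `getPerspectiveTransform`/`warpPerspective` to produce the deskewed page.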

Can we deliver HEIF images through a CDN? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
I'd like to deliver HEIF images through a CDN such as Amazon CloudFront after the release of iOS 11.
But we can't use Nokia's HEIF implementation for commercial purposes because of its license.
So I'm looking for another way to encode HEIF images.
According to Introducing HEIF and HEVC, HEIF images can only be created by iOS devices that have the A10 Fusion chip:
we currently only have HEIF encode support and hardware on iOS with minimum configuration being the A10 Fusion chip, an example, of which, is the iPhone 7 and the iPhone 7 Plus.
Can we deliver HEIF images that are created by an iPhone 7 etc. through a CDN?
Is this a patent infringement?
Apple is only talking about hardware-level encoding support. HEIF is a format developed by MPEG and isn't Apple-controlled. Usability and support are still limited, but AFAIK there aren't any technical or legal reasons why you can't use it anywhere (if supported):
More information and links to C++ and JS libraries here: https://nokiatech.github.io/heif/
The licensing issue you're concerned about is only for Nokia's reference implementation. My guess is that Apple is using their own implementation. Regardless, it's not something you need to be concerned about.
If for some reason you are looking to create HEIF images yourself, there's at least one open-source implementation currently available for commercial use. Specifically, GPAC. Though its license (LGPL) does have its own set of potential drawbacks for commercial projects.
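On the delivery side, the practical issue is mostly content negotiation: many clients don't accept HEIF, so you typically keep a JPEG fallback and choose a variant based on the client's `Accept` header (and make sure the CDN includes that header in its cache key so the two variants are cached separately). A minimal sketch in Python; the file names are placeholders, not a real API:

```python
def pick_variant(accept_header):
    """Choose which stored image variant to serve, falling back to
    JPEG for clients that don't advertise HEIF support."""
    # Strip quality parameters like ";q=0.8" from each media type.
    accepted = {part.split(";")[0].strip() for part in accept_header.split(",")}
    if "image/heic" in accepted or "image/heif" in accepted:
        return ("photo.heic", "image/heic")
    return ("photo.jpg", "image/jpeg")

print(pick_variant("image/heic,image/jpeg,*/*"))  # ('photo.heic', 'image/heic')
print(pick_variant("image/jpeg,*/*"))             # ('photo.jpg', 'image/jpeg')
```

With CloudFront specifically, this kind of negotiation means configuring the distribution to vary the cache on the `Accept` header (or doing the selection at the origin).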

What is the best Augmented Reality Environment to use to develop mobile app? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 4 years ago.
I need to develop a prototype of an augmented reality app for research purpose.
This is my first time with an augmented reality application, and I only have basic knowledge of Android.
The application is an Android application that uses video overlay to display over the scene after detecting a target. The only problem is that the videos are obtained from another user (through another application that lets that user record a video). The tutorials and examples I found attach the video to the target at development time, not at run time by the user, and that's not my case.
Since either way I have to invest some time to learn how to develop it, what's your recommended environment? Android Studio with Vuforia? Vuforia with Unity? Or another SDK? I would also appreciate any similar tutorials and samples.
It's just a prototype, so I'm not looking for high quality. Easy and less time-consuming is my interest.
Unity 3D with Vuforia is easier for a beginner to understand. For more advanced functions you might want to use Android Studio. I am not sure how the user can attach a video to the target, since normally the developer has to include the video file in the Assets folder to attach it to the target; that is how it is done in Unity. I hope someone can give you a better insight into this. But I have worked with Unity/Vuforia, and it is a pretty comfortable and easy environment for a beginner. All the best for your prototype :)

Is Google Tango the first/only augmented reality SDK to provide targetless tracking? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
I'm trying to learn how to build an augmented reality app to place a 3D object into the world without the need for a "target". So far, pretty much every augmented reality framework seems to only work with targets: Vuforia, Wikitude, etc. Some have slight extensions on target tracking, such as Vuforia's "Extended Tracking" and "Smart Terrain" features, but in the end these are afflicted by the same limitation -- the SDKs are hopelessly obsessed with "targets".
Then I came across Google Tango. It's hard to tell exactly, but it appears to be the only option I could find which offers placing 3D objects into the real world and allowing the user to walk around with the 3D objects staying in place (relative to the world).
Am I correct in my assumption that Google Tango is the only option for this? If I'm going to spend $512 on a Tango development kit, I want to first make sure there weren't other augmented reality libraries I could have used for this.
Kudan provides markerless tracking via SLAM. They have a free trial version, and no special hardware is necessary.
Markerless tracking is not new. 13th Lab's PointCloud SDK provided markerless tracking a few years ago, but they removed that offering when their implementation was licensed exclusively by a third party.
The SLAM algorithm is neither new nor proprietary. Anyone can implement it from the CalTech academic paper, though it's a graduate-level problem to tackle!
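To give a flavor of what a SLAM-style tracker does internally, here is a toy one-dimensional predict/update loop (a minimal Kalman filter). This is only an illustration of the state-estimation idea at the heart of such systems, not a real SLAM implementation; all the numbers are invented:

```python
def kalman_step(x, p, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D Kalman filter.
    x, p  -- current position estimate and its variance
    u     -- motion (odometry) input for the predict step
    z     -- sensor measurement for the update step
    q, r  -- process and measurement noise variances"""
    # Predict: move the estimate by u, grow the uncertainty by q.
    x, p = x + u, p + q
    # Update: blend in the measurement, weighted by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Walk one unit per step while taking noisy position readings.
x, p = 0.0, 1.0
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    x, p = kalman_step(x, p, u, z)
print(round(x, 2))  # 3.0 -- close to the true position
```

A real SLAM system estimates a full 6-DoF camera pose plus a map of landmarks simultaneously, which is where the graduate-level difficulty comes in.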
Check out http://easyar.com for markerless tracking and other cool features. It is free and open source.
Wikitude also has markerless tech. I have not tried it, so I can't tell how well it works.
Vuforia had something called Smart Terrain, which was more of an interaction with the environment. But the tech is quite old, and I am not sure they have kept it going over the last couple of years.
Actually, no one can really claim to have fully working markerless tracking; it only works under certain conditions. You need hardware fast enough to process all the calculations, and limited usage conditions such as no fast movement and an ideal environment.
Microsoft is also coming up with the HoloLens, but the hardware is $3,000 and for developers only at the moment.

Alternatives to Parse for a Social App? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I recently delved into app development in Xcode and decided to develop a social app. I was using Parse as my MBaaS for a while, but unfortunately they are closing down. I was wondering, in your experience, what are the best alternatives to Parse? I'm basically looking for something that is:
1) Easy to use
2) Well documented
3) Backed by lots of tutorials
4) Able to handle lots of RPS and users.
The app I am developing is a social app, so if you have any specific recommendation for an app of that type, that would help tremendously. Also it is important to note that I have no backend development experience, so it would be a challenge to develop my own.
Thanks again
There are several Parse alternatives out there right now:
AWS Mobile Hub - a direct Parse replacement that recently came out from AWS. Although it is in beta, AWS is a well-respected platform that supports many huge companies like Netflix and Yelp.
Firebase - (acquired by Google) Firebase offers a great solution for real-time communication and data storage. It's perfect if what you are doing is mainly data & realtime (chat, games, collaboration, etc.), but it's not very flexible for other things (e.g. payments, SMS, push notifications): firebase.com
RapidAPI - a backend platform that allows saving data and integrating APIs. It is based on blocks, so each basic action is represented by a block, and you can combine blocks to create logic. It has a bit of a higher learning curve, but it's probably more flexible.
BackAnd - a platform that lets you create an AngularJS-ready backend for your app. It's really good if you are working on AngularJS web apps and your data is stored on Amazon RDS.
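As an aside on Firebase: its Realtime Database also exposes a REST interface where a database path maps to a URL ending in `.json`, which is handy if you want to poke at your data without an SDK. A minimal sketch of building such a request (the project name `my-app` and the data are placeholders, and no network call is made here):

```python
import json

def firebase_put(base_url, path, data):
    """Build the URL and JSON body for a Firebase Realtime Database
    REST write (the actual request would be an HTTP PUT)."""
    url = f"{base_url}/{path.strip('/')}.json"
    body = json.dumps(data, sort_keys=True)
    return url, body

url, body = firebase_put("https://my-app.firebaseio.com",
                         "/users/alice", {"name": "Alice", "score": 10})
print(url)   # https://my-app.firebaseio.com/users/alice.json
print(body)  # {"name": "Alice", "score": 10}
```

In a real app you would of course use the official iOS SDK rather than raw REST, and add authentication to the request.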
Baasbox is a good alternative to Parse. A lot of the features used in Parse are there (push messaging, etc.), so migrating an app is relatively straightforward. They provide APIs for Android, iOS and JavaScript.
One of the main advantages it has over Parse is that you can host it yourself (there is also a hosted option available, but it's not free).
http://www.baasbox.com
Out of all the available backends, we found this to be most similar to Parse.

How to track real-world objects using the iPhone camera? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I am an iOS developer, and I am new to building an object tracking feature. I have searched many links on tracking real objects, but mostly found material on image tracking, image matching and the like, whereas I would like to track real-world objects. I have also gone through the site below:
http://developers.arlab.com/me
It helps a lot with tracking images, image matching, etc., but doesn't cover object tracking. If anybody can suggest a good tutorial or has any sample source code for object tracking, please share.
Thanks in advance for your support.
You can use the OpenCV framework for object tracking. There are many nice tutorials and lots of documentation on the internet.
Below I have listed some; please check if they help you.
Official Site
OpenCV is released under a BSD license and hence it’s free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications...
http://opencv.org/
You can download the SDK from here
Demo Github project
https://github.com/Itseez/opencv
https://github.com/atduskgreg/opencv-processing
These are some examples; you can find many more.
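OpenCV's tracking APIs do the heavy lifting for you, but the core idea behind the simplest tracking-by-detection approach is template matching: search each frame for the patch that best matches a reference template. A toy pure-Python sketch using sum of absolute differences over a tiny grayscale grid (the pixel values are invented for the example):

```python
def find_template(frame, template):
    """Locate a template in a grayscale frame by brute-force
    sum-of-absolute-differences; returns (row, col) of the best match."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    # Slide the template over every valid position in the frame.
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            score = sum(
                abs(frame[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

frame = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(find_template(frame, template))  # (1, 1) -- the bright patch
```

OpenCV's `cv2.matchTemplate` implements the same idea efficiently (with better similarity metrics such as normalized cross-correlation), and its dedicated tracker classes handle appearance changes over time.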
