Image recognition for text in React Native - iOS

This may be a crazy question, but I've seen it done with apps. Is there any kind of API that can be used to recognize the text within an image (the way Chase recognizes numbers on a check)? Or is there an API that can be used to search (let's say Google) for information based off an image? An example would be if I took a picture of a business logo, Google would search for a business listing that fits that logo.
I know it's a crazy question, but I want to know if it can even be done. If it can, can it be used with React Native? Thanks!

The React Native Tesseract package only supports Android. iOS support is pending, but there is no timeline for when it will be done.
The pure JavaScript implementation of Tesseract (Tesseract.js) would offer cross-platform support in React Native.
http://tesseract.projectnaptha.com/
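For reference, here is a minimal sketch of what recognition with Tesseract.js could look like in plain JavaScript. This assumes the Tesseract.js v2 API; the image path and language code are placeholders, and wiring it into a React Native project may need extra setup:

```javascript
// Minimal Tesseract.js sketch (assumes the v2 API; the image path is a placeholder).
const Tesseract = require('tesseract.js');

Tesseract.recognize('./business-card.png', 'eng', {
  logger: info => console.log(info.status, info.progress), // optional progress output
})
  .then(({ data: { text } }) => {
    console.log('Recognized text:', text);
  })
  .catch(err => console.error('OCR failed:', err));
```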

Related

How to design a multi platform video conference/chat app?

I am a developer who is still learning. I want to design an app which allows multiple people to have video conferences/chats simultaneously, something like Zoom. I know I can design native apps specific to Android as well as iOS, but I am still learning Android development and have no idea about iOS code. I searched and found that we can have hybrid apps with React, Node.js or Angular.js, and they work on different platforms. But as I'm a newbie I need suggestions as well as guidance. What I'm expecting in my app are the following things:
Should support all video resolutions and audio qualities, and should work in low and high network scenarios
Should be low on power/processor usage
Should not have any external hardware dependency
Should work on any device
Should have a chat option during the conference, even in a multi-person conference
Should have sign-in and non-sign-in options to join a conference
Can be a browser- and/or app-based interface
Should have encrypted network communication
Should have an audio/video recording feature
Should have screen/file sharing capabilities
Should allow audio-to-closed-captioning during chat (multilingual)
Should be able to host multiple concurrent conferences with multiple participants in each conference
I know it's a tedious task to involve everything I discussed, but I need guidance on how to do this.
I have already told you my expectations, so now I want to know what steps I need to take: how to start as well as where to start, what language/library I should choose, and whether a hybrid app would be a good idea or whether I should go for native apps. As I said earlier, I am a learner, so I am going to learn each and everything to get my project done, whether it's React or Node or Angular or whatever experienced developers are going to suggest here. I know my question may look broad or even vague, but I am asking only because I see Stack Overflow as a group of supportive, accomplished coders. Hope you guys will help me in getting my project done. Thank you!
OK, then you have a lot of work to do. I will point you to some references which should give you a good start, and I will try to keep this as short as possible.
As you mentioned, WebRTC is the way to go.
With WebRTC, you can add real-time communication capabilities to your application that works on top of an open standard. It supports video, voice, and generic data to be sent between peers, allowing developers to build powerful voice- and video-communication solutions. The technology is available on all modern browsers as well as on native clients for all major platforms.
This blog explains how WebRTC functions in detail - https://medium.com/#anto.christo.20/understanding-web-real-time-communication-webrtc-d4cec5a43f2f
This blog explains how to build peer-to-peer video calling in Android - https://medium.com/#anto.christo.20/understanding-web-real-time-communication-webrtc-d4cec5a43f2f
https://webrtc.org/ also contains a lot of head-start material, including sample code.
Once you have done this, you can add other features on top of it.
Now, this will take care of peer-to-peer, but if you want to build multi-user functionality from scratch there is some extra work required, as mentioned in this answer: how to build multi-user video chatting web app using webRTC, node.js and socket.io
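To make the peer-to-peer part concrete, here is a rough browser-side JavaScript sketch of setting up an RTCPeerConnection and sending an offer. The `signaling` and `remoteVideoElement` arguments are hypothetical stand-ins for your own signaling channel (e.g. a socket.io connection) and a video element in the page:

```javascript
// Rough browser-side WebRTC sketch. `signaling` and `remoteVideoElement` are
// placeholders for your own signaling channel and a <video> tag in the page.
async function startCall(signaling, remoteVideoElement) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Capture the local camera/microphone and send those tracks to the peer.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Trickle ICE candidates to the other peer via the signaling channel.
  pc.onicecandidate = event => {
    if (event.candidate) signaling.send({ candidate: event.candidate });
  };

  // Render whatever the remote peer sends back.
  pc.ontrack = event => {
    remoteVideoElement.srcObject = event.streams[0];
  };

  // Create and send the offer; the remote side responds with createAnswer().
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ offer });

  return pc;
}
```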

Unity3D - OCR Number Recognition

Our initial use case called for writing an application in Unity3D (written solely in C# and deployed to both iOS and Android simultaneously) that allowed a mobile phone user to hold their camera up to the title of a magazine article, use OCR to read the title, and then we would process that title on the backend to get related stories. Vuforia was far and away the best for this use case because of its fast native character recognition.
After the initial application was demoed a bit, more potential uses came up. Any use case that needed only A-z characters recognized was easy in Vuforia, but the second it called for number recognition we had to look elsewhere, because Vuforia does not support number recognition (now or anywhere in the near future).
Attempted Workarounds:
Google Cloud Vision - works great, but not native, and camera images are sometimes quite large, so not nearly as fast as we require. We even thought about using the OpenCV Unity asset to identify the numbers and then send multiple much smaller API calls, but that is still not native and adds one extra step.
Following instructions from SO to use a .NET wrapper for Tesseract - this would probably work great, but after building and trying to bring the external DLLs into Unity I receive the error ".Net Assembly Not Found" (most likely an issue with the version of .NET the DLLs were compiled against).
Install Tesseract from source on a server and then create our own API - honestly, it is unclear why we tried this when Google's works so well and is actively maintained.
Has anyone run into this same problem in Unity and ultimately found a good solution?
Vuforia on its own doesn't provide any system to detect numbers, just letters. To solve this problem I followed the strategy below (just for numbers near a known image):
1. Recognize the image.
2. Capture a screenshot just after the target image is recognized (this screenshot must contain the numbers).
3. Send the screenshot to an OCR web service and get the response.
4. Extract the numbers from the response.
5. Use these numbers to do whatever you need and show the AR info.
This approach solves the problem, but it doesn't work like a charm. Its success depends on the quality of the screenshot and the OCR service.
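To illustrate the OCR-service and number-extraction steps (3 and 4), here is a hedged JavaScript sketch; the Unity capture side would be C#, and the endpoint URL and response shape are placeholders for whichever OCR service you actually use:

```javascript
// Hedged sketch of steps 3 and 4: post the screenshot to an OCR web service
// and pull the digits out of the returned text. The endpoint URL and the
// response shape are placeholders; adapt them to your OCR service.
const fs = require('fs');

async function extractNumbers(screenshotPath) {
  const image = fs.readFileSync(screenshotPath).toString('base64');

  const response = await fetch('https://example.com/ocr', { // placeholder endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image }),
  });
  const { text } = await response.json(); // assumed response shape

  // Keep only the digit groups, e.g. "Issue 42, page 107" -> ["42", "107"].
  return text.match(/\d+/g) || [];
}
```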

How do we display an image/color in Lua

I am having this problem, and I am not able to figure out the solution.
I wish to display an image in some window if possible (not necessary though), and then move it across the page by sending events from the keyboard.
The problem is I can't use LÖVE framework, as we can't integrate it into our setup.
So I would require the Lua APIs to do so.
Is anyone here aware about it? Also do I have to install some kind of extra library to support color and image operations?
Thanks for sharing the knowledge.
Lua is quite a bare-bones language to start with, so there is no built-in image support whatsoever. But this also goes for almost all other programming languages; image support is typically something contained in supplementary libraries.
You need to install some library providing GUI functionality (like IUP), or use an application integrating Lua with graphical libraries (like murgaLua, Löve, ...).
From the tags you attached to your question it seems that you're using an embedded platform. It might be useful for people to know which, in order to provide more useful answers.

Is Adobe Flash Builder 4.5 good enough for cross-platform applications?

I am planning to develop an Outlook-based app for multiple platforms (iPhone, BlackBerry PlayBook and Android). I recently heard about the Adobe Flash Builder 4.5.1 update that supports these platforms using the Flex framework and ActionScript. I would like to know the hidden pros and cons of this tool for developing cross-platform apps.
A few queries:
Is it compatible with iOS 5?
Does it give an exact native look across platforms?
Is it fast and responsive for touch-based events?
Can we include third-party SDKs in a Flash Builder project, like 3D OpenGL and external libraries?
Does it support all UI controls on all platforms?
Any challenges or hidden disadvantages beyond the above queries would be highly appreciated...
Thanks in advance...
My company is developing cross-platform business apps in Flex and found that the performance is good enough for our purposes. I would imagine an email app would be the same, but if you are trying to do streaming or voice-quality work, I suspect you might want to write to the hardware layer.
Flex does a pretty good job of supporting native look and feel, but you need to think about, design, and structure your app up front (i.e. do the gestures in your app design match the feel of those environments?). If you don't design for it, it won't happen.
By and large, having one code base will reap large benefits. But if you've not written cross-platform applications before (Linux/Windows/Unix/Mac, etc.), you might find more of a learning curve in your thought process and design process.

Are there any limitations of Flex 4.5 mobile apps on iOS?

I've looked at a few demos from Adobe that show apps built with Flex 4.5 running on iOS, from simple list views to video capture. This has made me wonder if it's an effective solution for building cross-platform mobile apps.
For those of you who have taken Flex 4.5 mobile for a spin, what is your impression of its capability and performance on iOS? Is there anything you can't do with Flex 4.5 mobile that you could with a native app? Are there any limitations?
The mobile story in Flex is quite strong, in my opinion. It is what has attracted me to the platform and what seems to be bringing life into the Flex community.
The experience of developing apps for Android and iOS is quite fantastic, actually. The velocity with which you can develop is blazing, and the abstractions provided by Flex (data binding, state management, skinning, etc.) give you the ability to totally rock your app.
The performance is better than I had expected. It is not as great as a native app, but it certainly gets the job done. The ability to share code and UIs between Android and iOS more than makes up for it in most cases.
There are, however, limitations. For one, you are not using the native widget set. You are using the Flex widget set. This means that you do not get the native look/feel. For this reason, it is best to build apps that look like YOUR app... not a Flex app or a native app. There are lots of popular examples in the app stores that work this way... and a Flex app pretty much requires it in my opinion.
There are also a lot of APIs not available to you. Flex provides hardware abstractions for the most popular APIs (video, audio, accelerometer, positioning, webkit, etc) but platform specific APIs are still missing (contacts, calendar, system notifications, etc).
At that point, it is worth asking what your app needs to do. Does it have a lot of native interfacing? If so, Flex might not be right for you. Find the APIs you need to talk to and make sure Flex has an abstraction for you. If it is a data-centric display/edit app, then Flex is a strong fit.
Hope this helps :)
