Is it possible to recognise light patterns on iOS?

Is there a native iOS SDK to do so?
Use case:
Detect light patterns (e.g. on/off blinking) using the smartphone camera
Background information:
Apple acquired Metaio last year, so I presume that at some point we will have such an SDK. For now, I presume the best way to achieve this is either a third-party SDK, or capturing images and processing them myself (if the images are simple enough that a simple algorithm can be applied).

You could take a look at Kudan AR: https://www.kudan.eu/
They currently offer an SDK for iOS, but not yet for Android. Their tracking quality is phenomenally good, but I do not know whether it is appropriate for your goals. It would be best to talk to them and ask whether their tracking fits your needs.
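For the on/off use case specifically, you may not even need a third-party SDK: you can read frames from an AVCaptureVideoDataOutput and threshold the average luminance yourself, then decode the pattern from the timing of the transitions. Below is a minimal Swift sketch; the class name, threshold, and sampling stride are my own illustrative choices, not a standard API.

    import AVFoundation

    // Hedged sketch: sample the average luminance of each camera frame and
    // threshold it to recover a simple on/off light pattern. Threshold and
    // stride need tuning for your lighting conditions.
    final class LightPatternDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        private let session = AVCaptureSession()
        private let queue = DispatchQueue(label: "light-pattern-queue")
        private let brightnessThreshold = 0.6   // 0...1, illustrative value
        private var lastState: Bool?
        var onStateChange: ((Bool) -> Void)?    // true = light on

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: camera)
            let output = AVCaptureVideoDataOutput()
            // Bi-planar YCbCr: plane 0 is luma (Y), one byte per pixel.
            output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
            output.setSampleBufferDelegate(self, queue: queue)
            if session.canAddInput(input) { session.addInput(input) }
            if session.canAddOutput(output) { session.addOutput(output) }
            session.startRunning()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            CVPixelBufferLockBaseAddress(buffer, .readOnly)
            defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
            guard let base = CVPixelBufferGetBaseAddressOfPlane(buffer, 0) else { return }
            let width = CVPixelBufferGetWidthOfPlane(buffer, 0)
            let height = CVPixelBufferGetHeightOfPlane(buffer, 0)
            let rowBytes = CVPixelBufferGetBytesPerRowOfPlane(buffer, 0)
            let luma = base.assumingMemoryBound(to: UInt8.self)

            // Subsample every 8th pixel; full resolution is unnecessary here.
            var total = 0, samples = 0
            for y in stride(from: 0, to: height, by: 8) {
                for x in stride(from: 0, to: width, by: 8) {
                    total += Int(luma[y * rowBytes + x])
                    samples += 1
                }
            }
            let average = Double(total) / Double(samples) / 255.0
            let isOn = average > brightnessThreshold
            if isOn != lastState {
                lastState = isOn
                onStateChange?(isOn)   // timestamp these transitions to decode a pattern
            }
        }
    }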

Related

iOS Programmatically take a picture with camera based on what camera sees

I am currently working on an iOS app that can take a picture programmatically via a custom button, using AVFoundation classes such as AVCaptureDevice.
The new requirement is that the camera should automatically take a picture when the camera session detects something specific. For example, if the camera is open and I line up an apple to fill a certain circular area of the capture screen, it should take the picture automatically. You can see this auto-capture feature in some banking apps when you submit a mobile check deposit.
Does anyone know of existing libraries (open-source or proprietary) that can analyze images in real time while a user is taking a picture?
The first thing you are going to need to do is decide how you want to detect the apple. You can do this using shape detection, image recognition, or various other methods. This is important because you need to know the approach you want to take before you can identify the best way to implement it.
Once you know how you are going to identify the apple, the easiest way to do real-time image processing like this would be to use an existing augmented reality SDK. For example:
http://www.wikitude.com/products/wikitude-sdk/
http://artoolkit.org/
https://developer.vuforia.com/
If you are feeling really adventurous you could roll your own using AForge or a similar library. I have taken this approach in the past for basic shape detection projects.
Edit
The reason I suggest using an existing AR SDK is that they generally provide a lot of the glue between the camera feed and their API for you, which takes a lot of legwork out of the equation. Even though you won't be using any of the actual "augmentation" part of their SDKs, you can still take advantage of the detection part.
No matter what approach you take, you can think about it in the simplest terms: looking at a picture and figuring out whether the item you want is in that picture. How do you decide? In most cases you look for a specific shape or pattern.
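As an aside, since this question was asked Apple has shipped the Vision framework (iOS 11+), which covers the detection side without a full AR SDK. Here is a hedged Swift sketch of the trigger logic, using VNDetectRectanglesRequest as a stand-in detector (rectangle detection suits a check; an apple would need e.g. a Core ML classifier). The capture-session wiring is omitted.

    import AVFoundation
    import Vision

    // Hedged sketch: run a Vision request on each camera frame and fire the
    // photo capture once the target is found with high confidence.
    final class AutoCaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let photoOutput = AVCapturePhotoOutput()   // assumed wired into your AVCaptureSession
        private var hasCaptured = false

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard !hasCaptured,
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            let request = VNDetectRectanglesRequest { [weak self] request, _ in
                guard let self = self, !self.hasCaptured,
                      let rect = (request.results as? [VNRectangleObservation])?.first,
                      rect.confidence > 0.9 else { return }
                // Optionally check that rect.boundingBox lines up with your
                // on-screen guide circle before firing.
                self.hasCaptured = true
                DispatchQueue.main.async {
                    self.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
                }
            }
            try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
        }
    }

    extension AutoCaptureController: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            // photo.fileDataRepresentation() holds the captured image data.
        }
    }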

Realtime augmented reality color changing?

I am developing an augmented reality application that recognizes objects and changes their colors in real time (for example, the color of the walls of a house). Can I use the Vuforia SDK for this, or are there other, better SDKs to use?
Basically yes - Vuforia is able to detect pre-defined target images and let you know when something was detected, and what you do then is up to you. However, it depends on the objects you plan to recognize - Vuforia does not allow just any image to be detected; the image must have enough features. You can read about this here: https://developer.vuforia.com/library/articles/Solution/Natural-Features-and-Ratings
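Note that the recoloring step itself is independent of Vuforia: once detection tells you where the target is, the color change is ordinary image processing. A minimal Swift sketch using Core Image's built-in CIHueAdjust filter (masking the effect to the detected region, e.g. a wall, is left out):

    import CoreImage
    import UIKit

    // Hedged sketch: rotate the hue of an image with the built-in CIHueAdjust
    // filter. A real app would mask this to the detected region rather than
    // recolor the whole frame.
    func recolored(_ image: UIImage, hueRadians: Float) -> UIImage? {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CIHueAdjust") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(hueRadians, forKey: kCIInputAngleKey)
        guard let output = filter.outputImage,
              let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }

    // Usage: recolored(wallImage, hueRadians: .pi / 2)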

Is it possible to view panoramic image using google cardboard in iOS application?

I have been searching for ways to integrate the Google Cardboard SDK in iOS. One way is using Unity, but I am looking for a way to integrate the Cardboard SDK directly into an iOS app so that I can view a panoramic image in it. Is there any way to do that?
I am looking for an iOS alternative for this project: Link Here
Okay, I've spent a few days getting CardboardSDK-iOS to do what I want (which is like the "Exhibit" demo in the Google Cardboard app), and I'm pretty pleased with it. I'm guessing that it's pretty faithful to the original, but since I'm not familiar with the original, I can't say for sure.
But I can say that it's not just a case of dropping a panoramic data set in. You need to do a bit of work to display the required stereo image pair, in OpenGL, depending on where the viewer has their head pointing. If you understand 3D transforms and how OpenGL works, and you've got your data prepared correctly, it should not be too onerous to get it working.
Of course, this is all done in Xcode in Objective-C/C++, not in Java. And I'm assuming that by "panoramic image" you mean a hemispherical stereo data set, which should give you something like what you see in Google's Cardboard "Urban Hike" demo.
Hope this helps!
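To make the "depending on where the viewer has their head pointing" part concrete, here is a hedged Swift sketch of the same idea using SceneKit instead of raw OpenGL: an equirectangular panorama is mapped onto the inside of a sphere and the camera orientation is driven by Core Motion. For Cardboard you would render the scene twice, once per eye, with a small horizontal offset; the quaternion mapping may need axis remapping depending on interface orientation. "panorama" is an assumed asset name.

    import UIKit
    import SceneKit
    import CoreMotion

    // Hedged sketch: panorama textured on the inside of a sphere, camera at the
    // origin, orientation fed from the device attitude.
    final class PanoramaViewController: UIViewController {
        private let motion = CMMotionManager()
        private let cameraNode = SCNNode()

        override func viewDidLoad() {
            super.viewDidLoad()
            let scnView = SCNView(frame: view.bounds)
            view.addSubview(scnView)

            let scene = SCNScene()
            let sphere = SCNSphere(radius: 10)
            sphere.firstMaterial?.diffuse.contents = UIImage(named: "panorama")
            sphere.firstMaterial?.isDoubleSided = true   // show the inside faces
            scene.rootNode.addChildNode(SCNNode(geometry: sphere))

            cameraNode.camera = SCNCamera()               // sits at the origin, inside the sphere
            scene.rootNode.addChildNode(cameraNode)
            scnView.scene = scene

            // Feed head orientation into the camera at 60 Hz.
            motion.deviceMotionUpdateInterval = 1.0 / 60.0
            motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
                guard let q = data?.attitude.quaternion else { return }
                self?.cameraNode.orientation = SCNQuaternion(x: Float(q.x), y: Float(q.y),
                                                             z: Float(q.z), w: Float(q.w))
            }
        }
    }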

Is it possible to use Vuforia without a camera?

Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter that returns the coordinates of an image target found in it. Does that exist?
It is unfortunately not possible. I've been looking for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that camera management is fully handled internally by the Vuforia SDK, probably to make it easier to use, as managing the camera ourselves is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain in the ass (especially on Android, where some devices don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution for your use case: it is mainly an augmented-reality SDK focused on real-time tracking, which implies working with a camera stream... so using it for "simple" image recognition on still images looks like real overkill!
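For what it's worth, if simple image recognition on still images is really all that is needed, newer Apple APIs can do it without any camera session: Vision's feature prints (iOS 13+) compare two images directly. A hedged Swift sketch follows; note it yields a similarity score, not target coordinates (locating a target would need feature or template matching, e.g. with OpenCV).

    import UIKit
    import Vision

    // Hedged sketch: Vision feature prints give an image-similarity score for
    // two still images, with no capture session involved. Smaller distance =
    // more similar. This does not return target coordinates.
    func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
        let request = VNGenerateImageFeaturePrintRequest()
        try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    }

    func similarityDistance(_ a: CGImage, _ b: CGImage) throws -> Float {
        guard let printA = try featurePrint(for: a),
              let printB = try featurePrint(for: b) else { return .infinity }
        var distance: Float = 0
        try printA.computeDistance(&distance, to: printB)
        return distance
    }

    // Usage: try similarityDistance(photo.cgImage!, reference.cgImage!)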

Basic iOS motion detectors

Is there a framework or a library that covers the basic motion gestures performed by the user, like moving the device up or down, tossing the device, flipping it, etc., based on the accelerometer in iOS?
The Core Motion framework is the best choice.
You can find a nice tutorial here:
http://nscookbook.com/2013/03/ios-programming-recipe-19-using-core-motion-to-access-gyro-and-accelerometer/
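Note that Core Motion has no built-in "toss" or "flip" events; you subscribe to device-motion updates and derive the gestures yourself. A hedged Swift sketch (the thresholds are illustrative and need tuning):

    import Foundation
    import CoreMotion

    // Hedged sketch: derive simple gestures from CMDeviceMotion updates.
    final class MotionGestureDetector {
        private let motion = CMMotionManager()

        func start() {
            guard motion.isDeviceMotionAvailable else { return }
            motion.deviceMotionUpdateInterval = 1.0 / 50.0
            motion.startDeviceMotionUpdates(to: .main) { data, _ in
                guard let m = data else { return }
                // userAcceleration is gravity-compensated, in units of g.
                let a = m.userAcceleration
                let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
                if magnitude > 2.0 {
                    print("Toss / sharp movement detected")
                }
                // Gravity along +z in the device frame means the screen faces down.
                if m.gravity.z > 0.9 {
                    print("Device flipped face down")
                }
            }
        }

        func stop() { motion.stopDeviceMotionUpdates() }
    }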
