Creating a panoramic photo - iOS

I want to create a panoramic photo app, or something that can stitch multiple photos together (much like Google Photo Sphere), but before I start I want to get a bit more information on how it is done.
Is it done using the UIImagePickerController framework?
Are there any other useful APIs or anything else out there I can use?
Can somebody give me a brief overview of how this works?

There is no native iOS API with a stitching algorithm. You should dig into the third-party OpenCV library and check its Stitcher documentation.
Basic key steps of a stitching algorithm:
Detect keypoints in each input image (e.g. Harris corners) and extract invariant descriptors (e.g. SIFT)
Match the descriptors between images
Using RANSAC, calculate the homography matrix and apply the transformation
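
For reference, here is a minimal sketch of that pipeline using OpenCV's high-level cv::Stitcher API (OpenCV 3.3+ signature), which runs keypoint detection, descriptor matching, RANSAC estimation, warping and blending internally; the input file names are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main() {
    // Load the input photos (placeholder file names).
    std::vector<cv::Mat> images;
    for (const char* name : {"img1.jpg", "img2.jpg", "img3.jpg"}) {
        cv::Mat img = cv::imread(name);
        if (!img.empty()) images.push_back(img);
    }

    // cv::Stitcher wraps the whole pipeline described above: keypoint
    // detection, descriptor matching, RANSAC-based estimation, warping
    // and blending.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(images, pano);

    if (status != cv::Stitcher::OK) return 1;
    cv::imwrite("panorama.jpg", pano);
    return 0;
}
```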

Related

Suggest a library to create spherical panoramas

I want to create a site that takes source images and builds a spherical panorama from them. I plan to use an existing library, but I do not know which one has this feature. I looked at OpenCV, but I could not tell whether it can create spherical panoramas from a set of photos. Perhaps someone has experience with this issue.

How do I generate stereo images from a mono camera?

I have a stationary mono camera which captures single image frames at some fps.
Assume the camera is not allowed to move; how do I generate a stereo image pair from the captured single image frame? Are there any algorithms for this? If so, are they available in OpenCV?
To get a stereo image, you need a stereo camera, i.e. a camera with two calibrated lenses, so you cannot get a stereo image from a single camera with traditional techniques.
However, with the magic of deep learning, you can estimate a depth image from a single camera.
And no, there is no built-in OpenCV function to do that.
The most common use of these techniques is in 3D TVs, which often offer 2D-to-3D conversion, and thus mono-to-stereo conversion.
Various algorithms are used for this; you can look at this state-of-the-art report.
There is also an optical way to do this.
If you can add binocular prisms/mirrors to your camera objective, then you could obtain a real stereoscopic image from a single camera. That, of course, needs access to the camera and setting up the optics. It also introduces some problems, such as incorrect auto-focusing, the need for image calibration, etc.
You can also merge red/cyan-filtered images together to maintain the camera's full resolution (see the sketch after this answer).
Here is a publication which might be helpful: Stereo Panorama with a Single Camera.
You might also want to have a look at the OpenCV camera calibration module and at this page.
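
As an illustration of the red/cyan merge mentioned above, here is a minimal OpenCV sketch that combines a stereo pair into an anaglyph; the input file names are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // A stereo pair, e.g. captured through a prism/mirror rig (placeholder names).
    cv::Mat left  = cv::imread("left.jpg");
    cv::Mat right = cv::imread("right.jpg");
    if (left.empty() || right.empty() || left.size() != right.size()) return 1;

    // OpenCV stores images as BGR: index 0 = blue, 1 = green, 2 = red.
    std::vector<cv::Mat> l, r;
    cv::split(left, l);
    cv::split(right, r);

    // Red channel from the left eye, green and blue from the right eye.
    std::vector<cv::Mat> channels = {r[0], r[1], l[2]};
    cv::Mat anaglyph;
    cv::merge(channels, anaglyph);
    cv::imwrite("anaglyph.jpg", anaglyph);
    return 0;
}
```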

Wikitude/AR SDKs to pick out objects in 3D space

I'm looking at integrating an AR kit into our iOS app so we can use the camera to scan a room or field of view for objects. For example, if you were to bring up the camera, it would highlight the separate objects in the room and allow them to be clicked and "added" into the system.
Does anyone know if this is achievable with the current AR kits, or with anything else out there? They all seem to require that the objects you are looking for are pre-defined and loaded into a database so the app can find them. I'm hoping it can pick out the objects in real time. It doesn't need to know any details about the actual object, just that it can be picked out from the base scenery.
Any ideas?
The OpenCV library (iOS) contains many algorithms for comparing different image blobs. If you want to match a simple template to find objects, try the Viola-Jones algorithm and so-called Haar cascades. OpenCV ships with a trained collection of templates in XML files, for example for detecting faces. OpenCV also contains a training utility, so you can generate cascades for other kinds of objects.
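
Here is a minimal sketch of running a pre-trained Haar cascade with OpenCV; the cascade path and the input image name are assumptions for illustration:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/objdetect.hpp>
#include <vector>

int main() {
    // Load a pre-trained cascade; the path assumes the XML file shipped
    // with OpenCV has been copied next to the executable.
    cv::CascadeClassifier cascade;
    if (!cascade.load("haarcascade_frontalface_default.xml")) return 1;

    cv::Mat frame = cv::imread("room.jpg");  // placeholder camera frame
    if (frame.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Scan the image at multiple scales; each hit is a bounding box for
    // a region that matched the cascade.
    std::vector<cv::Rect> hits;
    cascade.detectMultiScale(gray, hits, 1.1, 3);

    for (const cv::Rect& r : hits)
        cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
    cv::imwrite("detected.jpg", frame);
    return 0;
}
```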
Some example projects:
https://github.com/alexmac/alcexamples/blob/master/OpenCV-2.4.2/doc/user_guide/ug_traincascade.rst Cascade Classifier Training
https://github.com/lukagabric/iOS-OpenCV Example code for detecting Colors and Circle shapes
https://github.com/BloodAxe/OpenCV-Tutorial Feature Detection (SURF, ORB, FREAK)
https://github.com/foundry/OpenCVSquaresSL Square Detection using Pyramid scaling, Canny, contours, contour simplification

Send images to Kinect as input instead of the camera feed

I want to send my own images to a Kinect SDK, either OpenNI or the Microsoft Kinect SDK, so it can tell me the position of a user's hand or head and ...
I don't want to use the Kinect's camera feed. The images come from a paper, I want to do some image processing on them, and I need to work on exactly the same images, so I cannot use my own body as input to the Kinect camera.
I don't mind whether it is the Microsoft Kinect SDK or OpenNI; it just needs to be able to take my RGB and depth images as input instead of the Kinect camera's.
Is it possible? If yes, how can I do it?
I have the same question. I want to feed a Kinect face-detection app with images read from the hard drive and have it return the Animation Units of the recognized face. I want to train a classifier for facial emotion recognition using Animation Units as input features.
Thanks,
Daniel.

Image stitching to generate a panorama with OpenCV

I'm experimenting with OpenCV to stitch several images into a panorama, but these pictures are taken at different angles. What I want to do is project all the images onto a cylindrical surface, then use SIFT to match the features and get the transform matrix. How should I do it? Is there an OpenCV interface to do that (project all the images onto a cylindrical surface, given that I don't know any of the camera parameters)?
In the OpenCV samples folder there is a program called stitching_detailed.cpp. It implements the whole pipeline for creating panoramas, including feature extraction, matching, warping, blending, etc.
You should have a look at it:
https://github.com/Itseez/opencv/blob/master/samples/cpp/stitching_detailed.cpp
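
For the cylindrical projection step specifically, here is a minimal sketch using cv::detail::CylindricalWarper, assuming a guessed focal length since the camera parameters are unknown; the file name and focal value are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/detail/warpers.hpp>

int main() {
    cv::Mat img = cv::imread("input.jpg");  // placeholder file name
    if (img.empty()) return 1;

    // Without calibration data, guess the focal length in pixels and
    // place the principal point at the image center.
    float f = 700.0f;  // assumption: tune per camera
    cv::Mat K = (cv::Mat_<float>(3, 3) <<
                 f, 0, img.cols / 2.0f,
                 0, f, img.rows / 2.0f,
                 0, 0, 1);
    cv::Mat R = cv::Mat::eye(3, 3, CV_32F);  // identity: no rotation

    // Project the image onto a cylinder of radius f.
    cv::detail::CylindricalWarper warper(f);
    cv::Mat warped;
    warper.warp(img, K, R, cv::INTER_LINEAR, cv::BORDER_CONSTANT, warped);
    cv::imwrite("cylindrical.jpg", warped);
    return 0;
}
```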
