https://developers.google.com/vr/discover/360-degree-media
I have spent three days trying to find something that records panoramic videos without special hardware for my iOS app; the Google 360° media page above was the only thing I could find.
Has anyone used Google or anything else to capture 360 videos without special hardware? Is it even possible?
Any feedback on this will be greatly appreciated
Thank you
You can capture 360° panoramic photos and videos with normal cameras if you have 6 of them. You need to arrange them like the faces of a cube, so that each camera is oriented at 90° to its four neighbors. This will allow you to capture the equivalent of a cube map. You can convert a cube map to equirectangular (which is what most 360° editing software uses) using conversion software, or you can write it yourself if you're so inclined.
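If you do want to roll your own, here is a rough sketch of the idea in Swift, working on plain pixel arrays: for each equirectangular output pixel, compute the viewing direction, pick the cube face with the dominant axis, and sample it. The face ordering and orientation conventions below are assumptions, since they vary between tools.

```swift
import Foundation

// Rough sketch of a cube-map -> equirectangular conversion, assuming each of the
// six face images is already available as a square, row-major array of RGBA pixels.
// Face order (+X, -X, +Y, -Y, +Z, -Z) and the orientation conventions below are
// assumptions; they vary between tools, so expect to flip an axis or two.

struct Face {
    let pixels: [UInt32]   // size x size pixels, row-major
    let size: Int

    // u, v in [0, 1]; nearest-neighbour lookup for brevity
    func sample(u: Double, v: Double) -> UInt32 {
        let x = min(size - 1, max(0, Int(u * Double(size))))
        let y = min(size - 1, max(0, Int(v * Double(size))))
        return pixels[y * size + x]
    }
}

// faces: [+X, -X, +Y, -Y, +Z, -Z]
func cubeToEquirectangular(faces: [Face], width: Int, height: Int) -> [UInt32] {
    var out = [UInt32](repeating: 0, count: width * height)
    for j in 0..<height {
        for i in 0..<width {
            // Spherical coordinates for this output pixel.
            let lon = (Double(i) / Double(width)) * 2.0 * Double.pi - Double.pi   // -π..π
            let lat = Double.pi / 2.0 - (Double(j) / Double(height)) * Double.pi  // π/2..-π/2

            // Viewing direction for this pixel.
            let x = cos(lat) * sin(lon)
            let y = sin(lat)
            let z = cos(lat) * cos(lon)

            // Pick the face with the dominant axis, then project the other two
            // components onto that face to get texture coordinates in [-1, 1].
            let ax = abs(x), ay = abs(y), az = abs(z)
            let faceIndex: Int
            let u: Double
            let v: Double
            if ax >= ay && ax >= az {
                faceIndex = x > 0 ? 0 : 1
                u = (x > 0 ? -z : z) / ax
                v = -y / ax
            } else if ay >= az {
                faceIndex = y > 0 ? 2 : 3
                u = x / ay
                v = (y > 0 ? z : -z) / ay
            } else {
                faceIndex = z > 0 ? 4 : 5
                u = (z > 0 ? x : -x) / az
                v = -y / az
            }

            // Remap [-1, 1] to [0, 1] and sample the chosen face.
            out[j * width + i] = faces[faceIndex].sample(u: (u + 1) / 2, v: (v + 1) / 2)
        }
    }
    return out
}
```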
There also exist hardware extensions for the iPhone that capture 360° equirectangular panoramas. Some are reasonably cheap, and some are fairly compact.
For just photos, you can take the 6 shots with a single camera, but you need to align it correctly. There are mounts to make it easier.
The iPhone 7 Plus and 8 Plus (and X) have a mode in the native camera app called "Portrait mode", which simulates a bokeh effect by using depth data to blur the background.
I want to add the capability to take photos with this effect in my own app.
I can see that in iOS 11, depth data is available. But I have no idea how to use this to achieve the effect.
Am I missing something -- is it possible to turn on this effect somewhere and just get the image with it applied, rather than having to try and make this complicated algorithm myself?
cheers
Unfortunately, Portrait mode and Portrait Lighting aren't open to developers as of iOS 11, so you would have to implement a similar effect on your own. The sessions "Capturing Depth in iPhone Photography" and "Image Editing with Depth" from this year's WWDC go into detail on how to capture and edit images with depth data.
There are two sample projects on the developer site that show how to capture and visualize depth data using a Metal shader, and how to detect faces using AVFoundation. You could definitely use these to get started! If you search for AVCam in the Guides and Sample Code they should be the first two that come up (I would post the links, but Stack Overflow is only letting me add two).
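For reference, here is a minimal sketch of what requesting depth data looks like with AVCapturePhotoOutput on iOS 11. Session and input setup are omitted, and turning the depth map into a portrait-style blur is still up to you (e.g. by masking a CIFilter with it).

```swift
import AVFoundation

// Minimal sketch of requesting depth data alongside a photo on iOS 11+ with a
// compatible device (e.g. the dual camera). The capture session and camera input
// are assumed to be configured elsewhere.
final class DepthCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()

    func configure(session: AVCaptureSession) {
        if session.canAddOutput(photoOutput) {
            session.addOutput(photoOutput)
        }
        // Depth delivery must be enabled on the output before it can be requested per photo.
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    }

    func capture() {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let depth = photo.depthData else { return }
        // depth.depthDataMap is a CVPixelBuffer you could use to drive a blur mask.
        print("Got depth map:", depth.depthDataMap)
    }
}
```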
Is there a way to take a picture with both the telephoto lens and the wide-angle lens of the iPhone 7 Plus?
I explored the different methods, but the best I can come up with is to change the camera by removing the AVCaptureDeviceTypeBuiltInTelephotoCamera input and adding the AVCaptureDeviceTypeBuiltInWideAngleCamera input. This takes about 0.5 seconds, however, and I would like to capture simultaneously. From a hardware point of view it should be possible, since Apple is doing the same when using the AVCaptureDeviceTypeBuiltInDuoCamera.
Does anybody know other methods to capture a photo from both cameras at (almost) the same time?
Thanks!
I wanted to capture from both cameras too, but what I've found is this:
"When you are using the AVCaptureDeviceTypeBuiltInDualCamera that automatically switches between wide and tele, they are synchronized to the same clock. Simultaneous running of the AVCaptureDeviceTypeBuiltInTelephotoCamera and AVCaptureDeviceTypeBuiltInWideAngleCamera cameras is not supported."
Source - https://forums.developer.apple.com/thread/63347
I am interested in VR and trying to get a bit more information. I want to create a similar experience on iOS where I can take a 360 image and view it on an iOS device by tilting the phone around, using the device's gyroscope; as I tilt the phone, it pans around the 360 image (like Google Street View, where you can use the tilt gesture).
And something similar to this app: http://bubb.li/
Can anybody give a brief overview of how this would be doable, and point to any sources, APIs, etc. that could help me achieve this?
Much appreciated.
Two options here: You can use a dedicated device to capture the image for you, or you can write some code to stitch together multiple images taken from the iOS device as you move it around a standing point.
I've used the Ricoh Theta for this (no affiliation). They have a 360 viewer in the SDK for mapping 360 images to a sphere that works exactly as you've asked.
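That said, you don't strictly need the Ricoh SDK for the viewing part. Here is a minimal sketch of the same idea with SceneKit and CoreMotion: map the equirectangular image onto the inside of a sphere, put the camera at the center, and drive its orientation from device motion. The image name and the axis correction at the end are assumptions you will likely need to adjust.

```swift
import UIKit
import SceneKit
import CoreMotion

// Minimal sketch of a photo-sphere viewer: an equirectangular image mapped onto the
// inside of a sphere, with device motion driving the camera. The image name
// "pano.jpg" and the axis correction at the end are assumptions to tune for your
// panoramas and interface orientation.
final class PanoViewController: UIViewController {
    private let motionManager = CMMotionManager()
    private let cameraNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        let sceneView = SCNView(frame: view.bounds)
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(sceneView)

        let scene = SCNScene()
        sceneView.scene = scene

        // Big sphere textured with the equirectangular panorama, viewed from inside.
        // You may also need to mirror it (e.g. a negative X scale on the node) so
        // the image isn't flipped left-to-right.
        let sphere = SCNSphere(radius: 10)
        sphere.firstMaterial?.diffuse.contents = UIImage(named: "pano.jpg")
        sphere.firstMaterial?.isDoubleSided = true
        scene.rootNode.addChildNode(SCNNode(geometry: sphere))

        // Camera sits at the centre of the sphere.
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3Zero
        scene.rootNode.addChildNode(cameraNode)

        // Pan the view by rotating the camera with the device attitude.
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let q = motion?.attitude.quaternion else { return }
            self.cameraNode.orientation = SCNQuaternion(x: Float(q.x), y: Float(q.y),
                                                        z: Float(q.z), w: Float(q.w))
            // Fixed correction so "phone held upright" looks at the horizon;
            // -90° about X is a common starting point, but treat it as a guess to tune.
            self.cameraNode.eulerAngles.x -= .pi / 2
        }
    }
}
```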
Assuming you've figured out how to create 360 photospheres, you can use Unity, Unreal, and probably other development platforms to create navigation between the locations you captured.
Here is a tutorial that looks pretty detailed for doing this in Unity:
https://tutorialsforvr.com/creating-virtual-tour-app-in-vr-using-unity/
One pro of doing this in something like Unity or Unreal is that, once you have navigation between multiple photo spheres working, it's fairly easy to add animation or other interactive elements. I've seen interactive stories done with 360 video using this method.
(I see that the question is from a fairly long time ago, but it was one of the top results when I looked for this same topic)
I am creating a simple camera app and I want to add 'image stabilization' so that the image does not twitch when hands are shaking. Is it possible to do this in iOS?
You can do this by getting the raw frames from the camera and only using a cropped subset of each frame, then programmatically shifting that crop window from frame to frame to counteract the shake. Needless to say, this is a large amount of work and should only be undertaken if you know what you are doing or want to have the most impressive video/picture-taking app.
The iPhone 6 Plus has optical stabilization built into the hardware, and that is, I believe, what the AVFoundation link in the previous comment is referring to.
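If the system's built-in video stabilization is enough for you, turning it on through AVFoundation is far less work than cropping frames yourself. A minimal sketch, assuming you already have a capture session with a movie file output:

```swift
import AVFoundation

// Minimal sketch of enabling AVFoundation's built-in video stabilization rather
// than shifting a crop window yourself. Availability depends on the device and
// format, so it is checked before being enabled; the surrounding session/output
// setup is assumed to exist already.
func enableStabilization(on output: AVCaptureMovieFileOutput) {
    guard let connection = output.connection(with: .video) else { return }
    if connection.isVideoStabilizationSupported {
        // .auto lets the system pick the best mode available (.standard,
        // .cinematic, etc.) for the current device and format.
        connection.preferredVideoStabilizationMode = .auto
    }
}
```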
It looks like when I shoot video with UIImagePickerControllerQualityTypeMedium, on an iPod Touch it comes out 480x360, but on an iPhone 4 it's something higher (can't say just what as I don't have one handy at the moment) and on an iPad 2 presumably the same as the 4, if not something different again.
I'd like to shoot the same quality on all devices -- I have to add some frames and titles, and it'll make my life a lot easier if I just have to code that for one resolution. Is there any way to determine what the different UIImagePickerControllerQualityType values correspond to at run time? (Apart from shooting video with each and then examining the result, that is.)
Or is my only choice to use UIImagePickerControllerQualityType640x480?
If you need more customization/power on iOS than you get with the higher-level objects, such as UIImagePickerController, it is recommended to work at the next lower level: the AV Foundation framework. Apple has some excellent documentation on AV Foundation programming that should come in handy for that purpose.
Unfortunately, even there you are limited to capturing at 640x480 if you want a standard resolution across all devices. There is, however, a great chart available at the same link (the anchors are broken in the docs, so Ctrl+F to "Capturing Still Images") that lists the resolutions the various devices produce under each quality setting.
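If you go the AV Foundation route, here is a minimal sketch of pinning every device to the same capture size via a session preset; the fallback choice is just an assumption.

```swift
import AVFoundation

// Minimal sketch of fixing the capture resolution via an AVCaptureSession preset
// instead of UIImagePickerController's quality types. Inputs/outputs are omitted.
func makeFixedResolutionSession() -> AVCaptureSession {
    let session = AVCaptureSession()
    if session.canSetSessionPreset(.vga640x480) {
        session.sessionPreset = .vga640x480   // same size on every device
    } else {
        session.sessionPreset = .medium       // device-dependent fallback
    }
    // ... add AVCaptureDeviceInput / AVCaptureMovieFileOutput as usual ...
    return session
}
```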
Your most solid bet, assuming 640x480 is too small, is to work out some sort of scaling step so that your frames and titles adapt to whatever resolution each device captures.