What is a possible solution to set custom exposure on a dual camera device? - ios

My code sets AVCaptureExposureModeCustom for AVCaptureDevicePositionBack. However, while adding support for the iPhone 7 Plus, I found I can set it for AVCaptureDeviceTypeBuiltInTelephotoCamera but not for AVCaptureDeviceTypeBuiltInDuoCamera.
What could be a possible solution to let the user set exposure for the duo camera?

AVCaptureDeviceTypeBuiltInDuoCamera doesn't support RAW capture or manual controls. If you want manual control, you have to select the wide-angle or telephoto camera.
When you use the dual camera capture device, RAW capture and most manual controls are not available. To use these features, specifically select either the wide-angle or telephoto capture device.
Apple documentation link
The real reason you cannot use manual controls is that with the dual camera device, the system automatically chooses which camera to use during capture and can combine data from both cameras to improve the capture output.
Check the iOS Device Compatibility reference for more info.
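For illustration, here is a minimal Swift sketch of that fallback, assuming the back wide-angle or telephoto device is acceptable; the helper names and the example shutter/ISO values are mine, not from the original question:
import AVFoundation

func deviceSupportingCustomExposure() -> AVCaptureDevice? {
    // Prefer a physical camera; the dual (duo) camera does not expose manual controls.
    let types: [AVCaptureDevice.DeviceType] = [.builtInWideAngleCamera, .builtInTelephotoCamera]
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: types,
                                                     mediaType: .video,
                                                     position: .back)
    return discovery.devices.first { $0.isExposureModeSupported(.custom) }
}

func applyCustomExposure(to device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    // Illustrative values; clamp the duration to the active format's supported range in real code.
    let duration = CMTime(value: 1, timescale: 125) // 1/125 s
    let iso = min(max(400, device.activeFormat.minISO), device.activeFormat.maxISO)
    device.setExposureModeCustom(duration: duration, iso: iso, completionHandler: nil)
}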

Related

ios ARKit 3 with iPad Pro 2020, how to use front camera data with back camera tracking?

The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new iPad Pro 2020 is unable to use this feature (probably because the LiDAR camera draws a lot more power). This is a bit of a step back.
Here is an updated reference in the example project:
guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
    fatalError("This sample code requires iOS 13 / iPadOS 13, and an iOS device with a front TrueDepth camera. Note: 2020 iPads do not support user face-tracking while world tracking.")
}
There is also a forum conversation suggesting that this is an unintended hardware limitation.
It looks like the mobile technology is not "there yet" for both. However, for my use case I just wanted to be able to switch between front and back tracking modes seamlessly, without needing to reconfigure the tracking space. For example, I would like a button to toggle between "now you track and see my face" mode and "world tracking" mode.
There are two cases: either it's possible or it's impossible, and there may be alternative approaches depending on which it is.
Is it possible, or would switching AR tracking modes require setting up the tracking space again? If so, how would that be achieved?
If it's impossible:
Even if I don't get face-tracking during world-tracking, is there a way to get a front-facing camera feed that I can use with the Vision framework, for example?
Specifically: how do I enable back-facing tracking and get front and back facing camera feeds simultaneously, and disable one or the other selectively? If it's possible even without front-facing tracking and only the basic feed, this will work.
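For reference, on devices that do support combined tracking, enabling user face tracking alongside world tracking looks roughly like the following sketch (it assumes a plain ARSession driven directly, and does not work around the iPad Pro 2020 limitation described above):
import ARKit

func runWorldTrackingWithFaceTracking(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        // The back camera drives world tracking; ARFaceAnchor updates arrive from the front camera.
        configuration.userFaceTrackingEnabled = true
    }
    session.run(configuration)
}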

Raw depth map SDK for iPhone X

I did some searching and found various examples and documentation on iPhone X Face ID and how it can be used for things like authentication and animated emoji.
I wanted to check whether there is an API/SDK to get the raw depth map from the iPhone X sensor into an app.
From my understanding, the depth calculation is done based on the projected pattern. This can be used to get a depth profile of any object in front of the sensor. (It might depend on the texture of the object.)
You'll need at least the iOS 11.1 SDK in Xcode 9.1 (both in beta as of this writing). With that, builtInTrueDepthCamera becomes one of the camera types you use to select a capture device:
let device = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)
Then you can go on to set up an AVCaptureSession with the TrueDepth camera device, and can use that capture session to capture depth information much like you can with the back dual camera on iPhone 7 Plus and 8 Plus:
Turn on depth capture for photos with AVCapturePhotoOutput.isDepthDataDeliveryEnabled, then snap a picture with AVCapturePhotoSettings.isDepthDataDeliveryEnabled. You can read the depthData from the AVCapturePhoto object you receive after the capture, or turn on embedsDepthDataInPhoto if you just want to fire and forget (and read the data from the captured image file later).
Get a live feed of depth maps with AVCaptureDepthDataOutput. That one is like the video data output; instead of recording directly to a movie file, it gives your delegate a timed sequence of image (or in this case, depth) buffers. If you're also capturing video at the same time, AVCaptureDataOutputSynchronizer might be handy for making sure you get coordinated depth maps and color frames together.
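Here is a condensed sketch of both options, assuming an AVCaptureSession already configured with the builtInTrueDepthCamera input; the class name, queue label, and delegate wiring are illustrative, not taken from the original answer:
import AVFoundation

final class DepthCaptureSketch: NSObject, AVCaptureDepthDataOutputDelegate {
    let photoOutput = AVCapturePhotoOutput()
    let depthOutput = AVCaptureDepthDataOutput()

    func configureOutputs(on session: AVCaptureSession) {
        session.beginConfiguration()

        // Option 1: depth alongside still photos.
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

        // Option 2: a live stream of depth maps.
        if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))

        session.commitConfiguration()
    }

    func capturePhotoWithDepth(delegate: AVCapturePhotoCaptureDelegate) {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        // The delegate reads photo.depthData in photoOutput(_:didFinishProcessingPhoto:error:).
        photoOutput.capturePhoto(with: settings, delegate: delegate)
    }

    // Streaming depth frames arrive here.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthMap: CVPixelBuffer = depthData.depthDataMap
        _ = depthMap // process or convert as needed
    }
}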
As Apple's Device Compatibility documentation notes, you need to select the builtInTrueDepthCamera device to get any of these depth capture options. If you select the front-facing builtInWideAngleCamera, it becomes like any other selfie camera, capturing only photo and video.
Just to emphasize: from an API point of view, capturing depth with the front-facing TrueDepth camera on iPhone X is a lot like capturing depth with the back-facing dual cameras on iPhone 7 Plus and 8 Plus. So if you want a deep dive on how all this depth capture business works in general, and what you can do with captured depth information, check out the WWDC17 Session 507: Capturing Depth in iPhone Photography talk.

Simultaneous pictures with iPhone 7 Plus cameras

Is there a way to take a picture with the telephoto lens and the wide-angle lens of the iPhone 7 Plus?
I explored the different methods, but the best I could come up with is to change the camera by removing the AVCaptureDeviceTypeBuiltInTelephotoCamera input and adding the input from AVCaptureDeviceTypeBuiltInWideAngleCamera. This takes about 0.5 seconds; however, I would like to capture simultaneously. From a hardware point of view, it should be possible, since Apple is doing the same thing when using AVCaptureDeviceTypeBuiltInDuoCamera.
Does anybody know other methods to capture a photo from both cameras at (almost) the same time?
Thanks!
I wanted to capture from both cameras too, but what I've found is this:
When you are using the AVCaptureDeviceTypeBuiltInDualCamera that automatically switches between wide and tele, they are synchronized to the same clock. Simultaneous running of the AVCaptureDeviceTypeBuiltInTelephotoCamera and AVCaptureDeviceTypeBuiltInWideAngleCamera cameras is not supported.
Source - https://forums.developer.apple.com/thread/63347
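For completeness, the input-swapping fallback described in the question amounts to roughly this sketch (device lookup and error handling simplified; the function name is illustrative):
import AVFoundation

func switchToWideAngle(in session: AVCaptureSession,
                       removing currentInput: AVCaptureDeviceInput) {
    guard let wide = AVCaptureDevice.default(.builtInWideAngleCamera,
                                             for: .video, position: .back),
          let wideInput = try? AVCaptureDeviceInput(device: wide) else { return }

    session.beginConfiguration()
    session.removeInput(currentInput)          // e.g. the telephoto input
    if session.canAddInput(wideInput) {
        session.addInput(wideInput)
    }
    session.commitConfiguration()              // the reconfiguration takes a noticeable moment
}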

How do I fire a camera connected via USB programmatically?

I want to make something like they have at US DMVs, where you sit down and it takes your picture, maybe like Photo Booth.
I want to connect a high-end camera via USB, fire the camera, and get the picture.
There's the Picture Transfer Protocol (http://en.wikipedia.org/wiki/Picture_Transfer_Protocol), a nasty little thing. All the cameras I've held in my hands so far that claimed to have proper PTP support failed it somewhere. But in theory one can use PTP to remote-control a camera, i.e. trigger the shutter, retrieve the picture, and so on.
Rather than reimplementing the whole thing, I recommend you get a readily usable PTP library. There are some open-source ones listed at http://ptp.sourceforge.net
The easiest method is probably to use OpenCV: http://opencv.willowgarage.com/wiki/
If you need a high-end camera: most digital SLRs have a tethered mode where you can control the camera, fire the shutter, and retrieve the image data. Each camera maker has a proprietary (but normally free) SDK.
For a webcam-type camera: these normally run in video mode, so you simply grab an image out of the video stream; as PaulR says, use OpenCV.

AV Foundation camera preview layer gets zoomed in, how to zoom out?

The application I am currently working on has as its main functionality continuous QR/barcode scanning using the ZXing library (http://code.google.com/p/zxing/). For continuous frame capture I initialize the AVCaptureSession, AVCaptureVideoOutput, and AVCaptureVideoPreviewLayer as described in Apple's Q&A (http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html).
My problem is that when I run the camera preview, the image I see through the video device is much larger (about 1.5x) than the image seen through the iPhone's still camera. Our customer needs to hold the iPhone about 5 cm from the barcode when scanning, but at that distance the whole QR code isn't visible and the decoding fails.
Why does the video camera on the iPhone 4 enlarge the image (as seen through the AVCaptureVideoPreviewLayer)?
This is a function of the AVCaptureSession video preset, accessible by using the .sessionPreset property. For example, after configuring your captureSession, but before starting it, you would add
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
See the documentation here:
iOS Reference Document
The default preset for video is 1280x720 (I think) which is a lower resolution than the max supported by the camera. By using the "Photo" preset, you're getting the raw camera data.
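In Swift, the same fix with the preset-availability check looks roughly like this (a sketch; the function name is an assumption):
import AVFoundation

func useFullFieldOfView(for session: AVCaptureSession) {
    session.beginConfiguration()
    // The photo preset captures the full sensor field of view,
    // matching what the still camera shows.
    if session.canSetSessionPreset(.photo) {
        session.sessionPreset = .photo
    }
    session.commitConfiguration()
}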
You see the same behaviour with the built-in iPhone Camera app. Switch between still and video capture modes and you'll notice that the default zoom level changes. You see a wider view in still mode, whereas video mode zooms in a bit.
My guess is that continuous video capture needs to use a smaller area of the camera sensor to work optimally. If it used the whole sensor perhaps the system couldn't sustain 30 fps. Using a smaller area of the sensor gives the effect of "zooming in" to the scene.
I am answering my own question again. This was not answered even on the Apple Developer forum, so I filed a technical support request with Apple directly, and they replied that this is a known issue that will be fixed in a future version. So there is nothing we can do but wait and see.
