Is it possible to get calibration data (AVCapturePhoto.cameraCalibrationData) for the ultra-wide camera?
The documentation says:
Camera calibration data is present only if you specified the cameraCalibrationDataDeliveryEnabled and dualCameraDualPhotoDeliveryEnabled settings when requesting capture.
but dualCameraDualPhotoDeliveryEnabled has been deprecated.
I tried setting cameraCalibrationDataDeliveryEnabled for builtInDualWideCamera and builtInUltraWideCamera, without any success.
The calibration data is meant to give you information about the intrinsics of multiple cameras in a virtual-camera capture scenario. This used to be the dual camera (introduced with the iPhone X), but with the release of the iPhone 11 Pro the API changed its naming. The setting is now called isVirtualDeviceConstituentPhotoDeliveryEnabled, and you can specify the set of cameras that should be involved in the capture with virtualDeviceConstituentPhotoDeliveryEnabledDevices.
Note that the calibration data only seems to be available for virtual devices with at least two cameras involved (so builtInDualCamera, builtInDualWideCamera and builtInTripleCamera).
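A minimal sketch of how that might look, assuming device is a virtual device (e.g. builtInDualWideCamera) already attached to a running AVCaptureSession and photoOutput is its AVCapturePhotoOutput (the function name and session setup are placeholders):

    import AVFoundation

    func captureWithCalibrationData(device: AVCaptureDevice,
                                    photoOutput: AVCapturePhotoOutput,
                                    delegate: AVCapturePhotoCaptureDelegate) {
        // Constituent photo delivery must be enabled on the output first,
        // and it requires a virtual device with at least two constituent cameras.
        if photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported {
            photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
        }

        let settings = AVCapturePhotoSettings()
        if photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled {
            settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
        }

        // Calibration data delivery should report as supported once constituent
        // delivery is enabled on the output.
        if photoOutput.isCameraCalibrationDataDeliverySupported {
            settings.isCameraCalibrationDataDeliveryEnabled = true
        }

        // Each photo handed to the delegate's didFinishProcessingPhoto callback
        // should then carry its cameraCalibrationData.
        photoOutput.capturePhoto(with: settings, delegate: delegate)
    }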
I have a project involving LiDAR scanning. The scans are not very detailed, so we have a lot of problems scanning objects. Along the way I found an app called 3DscannerApp, which offers settings such as resolution and max depth for LiDAR scanning. Could I get this information about resolution and max depth when I use ARWorldTrackingConfiguration?
Apple provides a sample project for putting 3D content or face filters on people's faces. The 3D content tracks the face anchor and moves according to it. But this feature is only supported on devices that have a TrueDepth camera; for example, we cannot use ARSCNFaceGeometry without TrueDepth. How do Facebook or third-party SDKs like Banuba make this work on devices without a depth camera?
As far as I know, using MediaPipe to get a face mesh is the only possibility without a TrueDepth camera.
There's a bit of a delay when detecting planes using ARCore. That's fine, but what do you do when you want to place an object on a plane as the user pans the phone?
With the delay, the object pops onto the screen after the plane is detected, rather than appearing as the user pans, which isn't realistic.
Let's compare the two leading AR SDKs.
LiDAR scanner in iOS devices for ARKit 4.0
Since the official release of ARKit 3.5 there has been support for the brand-new Light Detection And Ranging (LiDAR) scanner, which considerably reduces the time required to detect vertical and/or horizontal planes (it operates at nanosecond speed). Apple has implemented this sensor on the rear camera of the 2020 iPad Pro. The LiDAR scanner (which is basically direct ToF) gives us an almost instant polygonal mesh of the real-world environment in an AR app, which is suitable for the People/Objects Occlusion feature, precise Z-depth object placement, and complex collision shapes for dynamics. The working distance of Apple's LiDAR scanner is up to 5 meters. The LiDAR scanner also helps you detect planes in poorly lit rooms with no feature points on the walls or floor.
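As a quick illustration, here is roughly how those LiDAR-driven features are switched on in an ARKit world-tracking session (arView is a placeholder for whatever ARView or ARSCNView hosts the session):

    import ARKit

    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]

    // Polygonal mesh of the environment (LiDAR devices only).
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // Per-pixel depth for precise Z-depth object placement (ARKit 4+).
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }

    // People Occlusion on supported devices.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }

    arView.session.run(configuration)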
iToF cameras in Android Devices for ARCore 1.18
A 3D indirect Time-of-Flight sensor is a sort of scannerless LiDAR. It also surveys the surrounding environment and accurately measures distance. Although LiDARs and iToFs are almost the same thing at their core, the scanner type is more accurate because it uses multiple laser pulses rather than just one large flash laser pulse. In the Android world, Huawei and Samsung, for instance, include scannerless 3D iToF sensors in their smartphones; the Google Pixel 4 doesn't have an iToF camera. The working distance of an iToF sensor is up to 5 meters and more. Let's see what Google says about its brand-new Depth API:
Google's Depth API uses a depth-from-motion algorithm to create depth maps, which you can obtain using the acquireDepthImage() method. This algorithm takes multiple device images from different angles and compares them to estimate the distance to every pixel as a user moves their phone. If the device has an active depth sensor, such as a time-of-flight (iToF) sensor, that data is automatically included in the processed depth. This enhances the existing depth map and enables depth even when the camera is not moving. It also provides better depth on surfaces with few or no features, such as white walls, or in dynamic scenes with moving people or objects.
Recommendations
When you're using an AR app built on ARCore without iToF sensor support, you need to detect planes in a well-lit environment containing rich and unique wall and floor textures (you shouldn't rely on repetitive textures, such as polka-dot patterns). You can also use the Augmented Images feature to quickly get anchors with the help of an image detection algorithm.
Conclusion
Plane detection is a very fast stage when you're using LiDAR or iToF sensors. But for devices without LiDAR or iToF (i.e. when you're using ARKit 3.0 or lower, or ARCore 1.17 or lower), there will be some delay at the plane detection stage.
If you need more details about the LiDAR scanner, read my story on Medium.
How can we access front-facing camera images with ARCamera or ARSCNView, and is it possible to record an ARSCNView just like a regular camera recording?
Regarding the front-facing camera: in short, no.
ARKit offers two basic kinds of AR experience:
World Tracking (ARWorldTrackingConfiguration), using the back-facing camera, where a user looks "through" the device at an augmented view of the world around them. (There's also AROrientationTrackingConfiguration, which is a reduced quality version of world tracking, so it still uses only the back-facing camera.)
Face Tracking (ARFaceTrackingConfiguration), supported only with the front-facing TrueDepth camera on iPhone X, where the user sees an augmented view of themselves in the front-facing camera view. (As @TawaNicolas notes, Apple has sample code here... which, until the iPhone X actually becomes available, you can read but not run.)
In addition to the hardware requirement, face tracking and world tracking are mostly orthogonal feature sets. So even though there's a way to use the front-facing camera (on iPhone X only), it doesn't give you an experience equivalent to what you get with the back-facing camera in ARKit.
Regarding video recording in the AR experience: you can use ReplayKit in an ARKit app the same as in any other app.
If you want to record just the camera feed, there isn't a high-level API for that, but in theory you might have some success feeding the pixel buffers you get in each ARFrame to an AVAssetWriter.
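A rough, untested sketch of that idea (ARFrameRecorder is a made-up name; timing and error handling are simplified):

    import ARKit
    import AVFoundation

    final class ARFrameRecorder {
        private let writer: AVAssetWriter
        private let input: AVAssetWriterInput
        private let adaptor: AVAssetWriterInputPixelBufferAdaptor
        private var firstTimestamp: TimeInterval?

        init(outputURL: URL, width: Int, height: Int) throws {
            writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            input = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoWidthKey: width,   // e.g. 1920 for the captured image
                AVVideoHeightKey: height  // e.g. 1440 for the captured image
            ])
            input.expectsMediaDataInRealTime = true
            adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
            writer.add(input)
        }

        // Call from ARSessionDelegate's session(_:didUpdate:) with each new frame.
        func append(_ frame: ARFrame) {
            if firstTimestamp == nil {
                firstTimestamp = frame.timestamp
                guard writer.startWriting() else { return }
                writer.startSession(atSourceTime: .zero)
            }
            guard input.isReadyForMoreMediaData, let start = firstTimestamp else { return }
            let time = CMTime(seconds: frame.timestamp - start, preferredTimescale: 600)
            // capturedImage is the raw camera CVPixelBuffer (no SceneKit content).
            if !adaptor.append(frame.capturedImage, withPresentationTime: time) {
                // A real implementation would surface this dropped frame.
            }
        }

        func finish(completion: @escaping () -> Void) {
            input.markAsFinished()
            writer.finishWriting(completionHandler: completion)
        }
    }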
As far as I know, ARKit with the front-facing camera is only supported on the iPhone X.
Here's Apple's sample code regarding this topic.
If you want to access the UIKit or AVFoundation cameras, you still can, but separately from ARSCNView. For example, I'm loading UIKit's UIImagePickerController from an IBAction; it's a little awkward to do so, but it works for my purposes (loading/creating image and video assets).
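For illustration, a minimal sketch of that setup (the class name and outlet wiring are hypothetical):

    import UIKit

    class MediaPickerViewController: UIViewController,
                                     UIImagePickerControllerDelegate,
                                     UINavigationControllerDelegate {

        // Wired to a button in Interface Builder.
        @IBAction func pickMedia(_ sender: Any) {
            let picker = UIImagePickerController()
            picker.sourceType = .photoLibrary
            picker.mediaTypes = ["public.image", "public.movie"]  // images and videos
            picker.delegate = self
            present(picker, animated: true)
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            picker.dismiss(animated: true)
            // Handle info[.originalImage] or info[.mediaURL] here.
        }
    }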
I am working with the Xtion Pro Live on Ubuntu 12.04 with OpenCV 2.4.10. I want to do object recognition in daylight.
So far I have achieved object recognition indoors by producing a depth map and a disparity map. When I go outdoors, the maps mentioned above are black and I cannot perform object recognition.
I would like to ask whether the Asus Xtion Pro Live can work outdoors.
If it cannot, is there a way to fix it (through code) in order to do object detection outdoors?
I have searched around and found that I should get another stereoscopic camera. Could anyone help?
After some research I discovered that the Xtion Pro Live stereoscopic camera cannot be used outdoors because of its IR sensor. This sensor is responsible for producing the depth map and is affected by sunlight, so there are no clear results. Without clear results, creating depth and disparity maps with proper values is impossible.