Changing ARSCNView background - ios

iOS 13.1.3 on iPad Pro
It seems the same problem exists on previous iOS versions as well.
Using AVCaptureDevice as SCNScene background content
My app uses the front camera to create an AR face in iOS. During the first tour, the app uses the default SCNView with the camera input. After the first tour, I set the ARSCNView background to a UIImage. After doing this, I can't go back to the previous state by setting scnview.scene.background to nil or to the input device.
How can I bring it back to the previous state that shows the camera input?
First, I set the background as below, which is shown successfully.
sceneView.scene.background.contents = UIImage(named: "bruin.jpeg")
Then, after 15 seconds, I set it with the code below, but I get a static image rather than the live video preview.
DispatchQueue.main.async {
    let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video,
                                                position: .front)!
    self.sceneView.scene.background.contents = captureDevice
}
I get this error in output:
// SceneKit Error: Could not get pixel buffer (CVPixelBufferRef)

Related

Difference between builtInDualCamera and builtInDualWideCamera

I know only a little about the iPhone's cameras, and I'm a bit confused about the differences between builtInDualCamera and builtInDualWideCamera (and likewise builtInWideAngleCamera and builtInUltraWideCamera).
builtInDualCamera
A device that consists of a wide-angle and telephoto camera.
builtInDualWideCamera
A device that consists of two cameras of fixed focal length, one
ultrawide angle and one wide angle.
I guess builtInDualWideCamera is like the iPhone 11's camera and builtInDualCamera is like the iPhone X's camera... Is that correct?
I'm working on a camera app (basically using video), and I'm trying to configure the camera when a user opens the app's camera screen. I tried the code in this article, which basically picks which camera to use. So my code below just checks whether the device has 3 cameras > 2 cameras > 1 camera, and uses one of them when configuring the capture session. However, the device types include two similar-looking values, like builtInDualCamera and builtInDualWideCamera (and also builtInWideAngleCamera and builtInUltraWideCamera). I want to know which iPhone's camera corresponds to builtInWideAngleCamera and builtInUltraWideCamera. I added a screenshot as well; is it like the difference between the iPhone X's camera and the iPhone 11's camera? (I mean, the iPhone 11 has two separate cameras, whereas the iPhone X also has two cameras but in a different arrangement.)
import AVFoundation

class CameraManager {
    static let shared = CameraManager()

    let discoverySession = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back)

    func getBestDevice() -> AVCaptureDevice? {
        let devices = discoverySession.devices
        guard !devices.isEmpty else { fatalError("Missing capture devices.") }
        return devices.first
    }
}
and use it like:
CameraManager.shared.getBestDevice()
If I have three cameras (.builtInTripleCamera, .builtInDualCamera, .builtInWideAngleCamera) in the discoverySession property to check which camera to use, will every iPhone camera fall into one of them? If the device's camera is builtInWideAngleCamera, do I need to add .builtInWideAngleCamera to the discoverySession property in order to use the builtInWideAngleCamera?
If you look at the DeviceType page, the cameras are listed from the most basic to the most advanced:
builtInWideAngleCamera has existed on all iOS devices for a long time,
builtInDualCamera exists from the iPhone 7 Plus onward,
and builtInTripleCamera starts with the iPhone 11 Pro.
But it seems you don't need a discovery session at all. You seem to want to choose "the best back camera for taking videos", so your case falls under the "Quickly Choose a Default Device" use case on the page you referenced.
In your case you will have for: .video, position: .back, while the first parameter depends on the kind of video you want to take. For example, you could ask for builtInTripleCamera if available, then builtInDualCamera, and then settle on builtInWideAngleCamera as the minimal option. Or you may decide that people who don't have builtInTripleCamera just can't use your app:
if let device = AVCaptureDevice.default(.builtInTripleCamera,
                                        for: .video, position: .back) {
    return device
} else if let device = AVCaptureDevice.default(.builtInDualCamera,
                                               for: .video, position: .back) {
    return device
} else if let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video, position: .back) {
    return device
} else {
    fatalError("Missing expected back camera device.")
}

Force Redraw of AVPlayerLayer when it is paused on iOS 13

I apply real-time effects using Core Image to video that is played using AVPlayer. The problem is that when the player is paused, the filters are not applied if you tweak the filter parameters with a slider.
let videoComposition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { [weak self] request in
    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampedToExtent()
    let output: CIImage
    if let filteredOutput = self?.runFilters(source, filters: array)?.cropped(to: request.sourceImage.extent) {
        output = filteredOutput
    } else {
        output = source
    }
    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
As a workaround, I used this answer, which worked until iOS 12.4 but no longer works in iOS 13 beta 6. I'm looking for a solution that works on iOS 13.
After reporting this as a bug to Apple and getting some helpful feedback, I have a fix:
player.currentItem?.videoComposition = player.currentItem?.videoComposition?.mutableCopy() as? AVVideoComposition
The explanation I got was:
AVPlayer redraws a frame when AVPlayerItem's videoComposition property gets a new instance or, even if it is the same instance, when a property of the instance has been modified.
As a result, forcing a redraw can be achieved by making a 'new' instance simply by copying the existing instance.
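For context, here is a minimal sketch of where that line might be called, assuming a UISlider drives the filter parameters (the class, property, and action names are illustrative, not from the original answer):
import AVFoundation
import UIKit

final class PlayerFilterController {
    let player: AVPlayer
    var filterIntensity: Float = 0          // assumed filter parameter

    init(player: AVPlayer) {
        self.player = player
    }

    // Hypothetical slider action: update the filter parameter, then force a redraw
    // while paused by reassigning a copy of the same video composition.
    @objc func sliderValueChanged(_ slider: UISlider) {
        filterIntensity = slider.value
        if player.timeControlStatus == .paused {
            player.currentItem?.videoComposition =
                player.currentItem?.videoComposition?.mutableCopy() as? AVVideoComposition
        }
    }
}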

swift AVCapturePhotoOutput capturePhoto hangs preview

I'm showing the preview at 1080 x 1440 and capturing photos at maximum resolution (3024 x 4032) and quality on an iPhone 8 Plus with this code:
capturePhotoOutput?.capturePhoto(with: configurePhotoSettings(), delegate: self)
with photo settings:
private func configurePhotoSettings() -> AVCapturePhotoSettings {
    let photoSettings = AVCapturePhotoSettings()
    photoSettings.isHighResolutionPhotoEnabled = true
    photoSettings.isAutoStillImageStabilizationEnabled = (capturePhotoOutput?.isStillImageStabilizationSupported)!
    photoSettings.isAutoDualCameraFusionEnabled = (capturePhotoOutput?.isDualCameraFusionSupported)!
    return photoSettings
}
I do this one by one (like a sequential shooting mode), and the preview freezes briefly each time, even if I do nothing in didFinishProcessingPhoto.
I'm looking for a solution to make capturing smooth, maybe on a background thread, but currently I'm stuck.
The reason the preview hangs is the automatic still image stabilization feature.
You just need to turn it off to keep the preview smooth while capturing a photo:
photoSettings.isAutoStillImageStabilizationEnabled = false
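Applied to the configurePhotoSettings() helper from the question, that might look like this (a sketch; it assumes the same capturePhotoOutput property as above):
private func configurePhotoSettings() -> AVCapturePhotoSettings {
    let photoSettings = AVCapturePhotoSettings()
    photoSettings.isHighResolutionPhotoEnabled = true
    // Disable auto still image stabilization so the preview does not pause while capturing.
    photoSettings.isAutoStillImageStabilizationEnabled = false
    photoSettings.isAutoDualCameraFusionEnabled = capturePhotoOutput?.isDualCameraFusionSupported ?? false
    return photoSettings
}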

Using AVCaptureDevice as SCNScene background content

During the SceneKit: What's New presentation at WWDC 2017 (44:19) it was stated that we can now use an AVCaptureDevice as background content for an SCNScene.
Snippet from the presentation:
let captureDevice: AVCaptureDevice = ...
scene.background.contents = captureDevice
However, the following code
let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
scene.background.contents = captureDevice
produces an error:
[SceneKit] Error: Cannot get pixel buffer (CVPixelBufferRef)
I also tried to create and start an AVCaptureSession and then use the device, but it gives the same result.
What might be the issue here?
Edit
This bug seems to be fixed in iOS 11.2
Original answer
This appears to be a bug in SceneKit.
If that works for you, a workaround is to use an ARSCNView. It gives you access to all of the SceneKit APIs, and it automatically draws the video feed as the scene's background.
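For reference, a minimal sketch of that workaround, assuming a plain view controller hosts the ARSCNView (class and property names are illustrative):
import ARKit
import UIKit

final class CameraBackgroundViewController: UIViewController {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.scene = SCNScene()
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // While the session runs, ARSCNView renders the camera feed as the scene background.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}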

Knowing resolution of AVCaptureSession's session presets

I'm accessing the camera in iOS and using session presets as so:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting with this preset (especially because it will differ depending on the device). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to get this programmatically so that I'm not just relying on magic numbers.
So, something like this (theoretically):
[captureSession resolutionForPreset:AVCaptureSessionPresetMedium];
which might return a CGSize of { width: 360, height: 480 }. I have not been able to find any such API; so far I've had to resort to waiting for my first captured image and querying it then (which, for other reasons in my program flow, is not good).
I am no AVFoundation pro, but I think the way to go is:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureInput *input = [captureSession.inputs objectAtIndex:0]; // maybe search the input in array
AVCaptureInputPort *port = [input.ports objectAtIndex:0];
CMFormatDescriptionRef formatDescription = port.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
I'm not sure about the last step and I didn't try it myself. Just found that in the documentation and think it should work.
Searching for CMVideoDimensions in Xcode you'll find the RosyWriter example project. Have a look at that code (I don't have time to do that now).
You can programmatically get the resolution from activeFormat before capture begins, though not before adding inputs and outputs: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
private func getCaptureResolution() -> CGSize {
    // Define default resolution
    var resolution = CGSize(width: 0, height: 0)

    // Get current video device
    let curVideoDevice = useBackCamera ? backCameraDevice : frontCameraDevice

    // Determine whether the video is in portrait orientation
    let portraitOrientation = orientation == .portrait || orientation == .portraitUpsideDown

    // Get video dimensions
    if let formatDescription = curVideoDevice?.activeFormat.formatDescription {
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        resolution = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
        if portraitOrientation {
            resolution = CGSize(width: resolution.height, height: resolution.width)
        }
    }

    // Return resolution
    return resolution
}
FYI, I attach here an official reply from Apple.
This is a follow-up to Bug ID# 13201137.
Engineering has determined that this issue behaves as intended based on the following information:
There are several problems with the included code:
1) The AVCaptureSession has no inputs.
2) The AVCaptureSession has no outputs.
Without at least one input (added to the session using [AVCaptureSession addInput:]) and a compatible output (added using [AVCaptureSession addOutput:]), there will be no active connections, and therefore the session won't actually run the input device. It doesn't need to -- there are no outputs to which to deliver any camera data.
3) The JAViewController class assumes that the video port's -formatDescription property will be non-nil as soon as [AVCaptureSession startRunning] returns.
There is no guarantee that the format description will be updated with the new camera format as soon as startRunning returns. -startRunning starts up the camera and returns when it is completely up and running, but doesn't wait for video frames to be actively flowing through the capture pipeline, which is when the format description would be updated.
You're just querying too fast. If you waited a few milliseconds more, it would be there. But the right way to do this is to listen for the AVCaptureInputPortFormatDescriptionDidChangeNotification.
4) Your JAViewController class creates a PVCameraInfo object in retrieveCameraInfo: and asks it a question, then lets it fall out of scope, where it is released and dealloc'ed.
Therefore, the session doesn't have long enough to run to satisfy your dimensions request. You stop the camera too quickly.
We consider this issue closed. If you have any questions or concern regarding this issue, please update your report directly (http://bugreport.apple.com).
Thank you for taking the time to notify us of this issue.
Best Regards,
Developer Bug Reporting Team
Apple Worldwide Developer Relations
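As a side note, here is a small sketch of the approach Apple recommends above: observe the format-description-change notification instead of querying right after startRunning (the wrapper function name is just for illustration):
import AVFoundation

// Wait for the port's format description to be updated instead of querying
// right after startRunning() returns.
func observeFormatDescription(for session: AVCaptureSession) -> NSObjectProtocol {
    let observer = NotificationCenter.default.addObserver(
        forName: .AVCaptureInputPortFormatDescriptionDidChange,
        object: nil,
        queue: .main
    ) { notification in
        guard let port = notification.object as? AVCaptureInput.Port,
              let formatDescription = port.formatDescription else { return }
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        print("Video dimensions: \(dimensions.width) x \(dimensions.height)")
    }
    session.startRunning()
    return observer // keep this token and remove the observer when you are done
}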
According to Apple, there's no API for that. It stinks, I've had the same problem.
Maybe you can provide a list of all possible preset resolutions for every iPhone model and check which device model the app is running on, using something like this...
[[UIDevice currentDevice] platformType] // ex: UIDevice4GiPhone
[[UIDevice currentDevice] platformString] // ex: #"iPhone 4G"
However, you have to update the list for each newer device model. Hope this helps :)
If the preset is .photo, the returned size is the still photo size, not the preview video size.
If the preset is not .photo, the returned size is the video size, not the captured photo size.
if self.session.sessionPreset != .photo {
    // return video size, not captured photo size
    let format = videoDevice.activeFormat
    let formatDescription = format.formatDescription
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
} else {
    // other way to get video size
}
@Christian Beer's answer is a good way for a specified preset. My way is good for the active preset.
The best way to do what you want (get a known video or image format) is to set the format of the capture device.
First find the capture device you want to use:
if #available(iOS 10.0, *) {
    captureDevice = defaultCamera()
} else {
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device as AnyObject).hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if (device as AnyObject).position == AVCaptureDevice.Position.back {
                captureDevice = device as AVCaptureDevice
            }
        }
    }
}
self.autoLevelWindowCenter = ALCWindow.frame
if captureDevice != nil && currentUser != nil {
    beginSession()
}
}
func defaultCamera() -> AVCaptureDevice? {
    if #available(iOS 10.0, *) { // only use the wide angle camera, never the dual camera
        if let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                                for: AVMediaType.video,
                                                position: .back) {
            return device
        } else {
            return nil
        }
    } else {
        return nil
    }
}
Then find the formats that the device can use:
let options = captureDevice!.formats
var supportable = options.first as! AVCaptureDevice.Format
for format in options {
    let testFormat = format
    let description = testFormat.description
    if description.contains("60 fps") && description.contains("1280x 720") {
        supportable = testFormat
    }
}
You can do more complex parsing of the formats, but you might not care.
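If you do want something sturdier than matching the description string, one option is to inspect each format's dimensions and frame-rate ranges directly (a sketch using the same 1280x720 at 60 fps target as above):
// Pick a format by checking its dimensions and supported frame rates directly.
let preferred = captureDevice!.formats.first { format in
    let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
    let supports60fps = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
    return dims.width == 1280 && dims.height == 720 && supports60fps
}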
Then just set the device to that format:
do {
    try captureDevice?.lockForConfiguration()
    captureDevice!.activeFormat = supportable
    // setup other capture device stuff like autofocus, frame rate, ISO, shutter speed, etc.
    captureDevice!.unlockForConfiguration()

    // add the device to an active CaptureSession
    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
} catch {
    print("Could not configure capture device: \(error)")
}
You may want to look at the AVFoundation docs and tutorials on AVCaptureSession, as there are lots of things you can do with the output as well. For example, you can convert the result to .mp4 using AVAssetExportSession so that you can post it to YouTube, etc.
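As a rough sketch of that export step (sourceURL and outputURL are assumed to come from your own app):
import AVFoundation

// Export a recorded asset to .mp4 with AVAssetExportSession.
func exportToMP4(sourceURL: URL, outputURL: URL) {
    let asset = AVAsset(url: sourceURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else { return }
    export.outputFileType = .mp4
    export.outputURL = outputURL
    export.exportAsynchronously {
        switch export.status {
        case .completed:
            print("Exported to \(outputURL)")
        case .failed, .cancelled:
            print("Export failed: \(String(describing: export.error))")
        default:
            break
        }
    }
}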
Hope this helps
Apple uses a 4:3 ratio for the iPhone camera.
You can use this ratio to get the frame size of the captured video by fixing either the width or the height constraint of the AVCaptureVideoPreviewLayer and setting the aspect ratio constraint to 4:3.
In the left image, the width was fixed to 300px and the height was retrieved by setting the 4:3 ratio, and it was 400px.
In the right image, the height was fixed to 300px and width was retrieved by setting the 3:4 ratio, and it was 225px.
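A minimal Auto Layout sketch of the width-fixed case described above, where previewContainerView is an assumed name for the view hosting the AVCaptureVideoPreviewLayer:
// Fix the width at 300 pt and derive the height from the 4:3 ratio (300 * 4/3 = 400).
previewContainerView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    previewContainerView.widthAnchor.constraint(equalToConstant: 300),
    previewContainerView.heightAnchor.constraint(equalTo: previewContainerView.widthAnchor,
                                                 multiplier: 4.0 / 3.0)
])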
