Use an ARSCNView on devices that don't support ARKit

Is it possible to use an ARSCNView, configure it with an ARWorldTrackingConfiguration on devices that support it, and configure it another way on devices that don't (A8 chip and lower), while still having the camera video render in the background?
if ARWorldTrackingConfiguration.isSupported {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = .horizontal
    sceneView.session.run(config)
} else if AROrientationTrackingConfiguration.isSupported {
    let config = AROrientationTrackingConfiguration()
    sceneView.session.run(config)
} else {
    print("not supported")
    // what here? <<<
}

You need a running ARSession to drive the camera feed to an ARSCNView; to run an ARSession you need a supported ARConfiguration; and all of those configurations require an A9 or later chip.
However, if your below-A9 fallback plan is to have a SceneKit view that doesn't get any of ARKit's motion tracking... you don't need ARKit. Just use a regular SceneKit view (SCNView). To make the camera feed show up behind your SceneKit content, find the AVCaptureDevice for the camera you want, and pass that to the background.contents of the scene in your view.
(Using a capture device as a SceneKit background is new in iOS 11. It doesn't appear to be in the docs (yet?), but it's described in the WWDC17 SceneKit session.)
By the way, there aren't any devices that support orientation tracking but don't support world tracking, so the multiple-branch if statement in your question is sort of overkill.
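For the non-ARKit fallback described above, a minimal sketch might look like the following. It assumes your view can be a plain SCNView with a scene already attached; the function name and the choice of back wide-angle camera are illustrative, not from the question.

import SceneKit
import ARKit
import AVFoundation

func configureTracking(for sceneView: SCNView) {
    if ARWorldTrackingConfiguration.isSupported, let arView = sceneView as? ARSCNView {
        // A9 and up: use ARKit as usual.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        arView.session.run(config)
    } else {
        // Below A9: no ARKit, so just put the live camera feed behind the SceneKit content
        // by assigning a capture device to the scene background (the iOS 11 feature noted above).
        if let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
            sceneView.scene?.background.contents = camera
        }
    }
}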

The code change below resolved this issue for me. The broken code:
let device = sceneView.device!
let maskGeometry = ARSCNFaceGeometry(device: device)!
mask = Mask(geometry: maskGeometry, maskType: maskType)
Replace it with:
let device = MTLCreateSystemDefaultDevice()
let maskGeometry = ARSCNFaceGeometry(device: device!)!
mask = Mask(geometry: maskGeometry)

Related

Choosing suitable camera for barcode scanning when using AVCaptureDeviceTypeBuiltInTripleCamera

I've had some barcode scanning code in my iOS app for many years now. Recently, users have begun complaining that it doesn't work with an iPhone 13 Pro.
During investigation, it seemed that I should be using the built in triple camera if available. Doing that did fix it for iPhone 13 Pro but subsequently broke it for iPhone 12 Pro, which seemed to be working fine with the previous code.
How are you supposed to choose a suitable camera for all devices? It seems bizarre to me that Apple has suddenly made it so difficult to use this previously working code.
Here is my current code. The "fallback" section is what the code has used for years.
_session = [[AVCaptureSession alloc] init];
// Must use macro camera for barcode scanning on newer devices, otherwise the image is blurry
if (@available(iOS 13.0, *)) {
    AVCaptureDeviceDiscoverySession *discoverySession =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInTripleCamera]
                                                                mediaType:AVMediaTypeVideo
                                                                 position:AVCaptureDevicePositionBack];
    if (discoverySession.devices.count == 0) {
        // no BuiltInTripleCamera
        _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    } else {
        _device = discoverySession.devices.firstObject;
    }
} else {
    // Fallback on earlier versions
    _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
The accepted answer works, but not all the time. Because lenses have different minimum focus distances, it is harder for the device to focus on small barcodes: you have to hold the device too close (closer than the minimum focus distance), so it never autofocuses on them. This used to work with older lenses whose minimum focus distance was 10-12 cm, but newer lenses, especially those on the iPhone 14 Pro with a 20 cm minimum focus distance, are problematic.
The solution is ideally to use AVCaptureDeviceTypeBuiltInWideAngleCamera and set videoZoomFactor on the AVCaptureDevice to zoom in a little so the barcode is nicely in focus. The zoom value should be calculated from the input video properties and the minimum barcode size.
For details, please refer to this WWDC 2021 video, where they address exactly this issue: https://developer.apple.com/videos/play/wwdc2021/10047/?time=133.
Here is the implementation of a class that sets the zoom factor on a device; it works for me. Instantiate it with your device instance and call applyAutomaticZoomFactorIfNeeded() just before you commit your capture session configuration.
///
/// Calling this method will automatically zoom the device to compensate for its minimum focus distance. That distance becomes a problem
/// when the barcode being scanned is too small or the device's minimum focus distance is too large (roughly 20 cm on the iPhone 14 Pro and Pro Max,
/// 15 cm on the iPhone 13 Pro, 12 cm or less on older iPhones). By zooming the input, the device can keep the preview in focus and complete the scan more easily.
///
/// - See https://developer.apple.com/videos/play/wwdc2021/10047/?time=133 for a more detailed explanation and
/// - See https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
///   for implementation instructions.
///
@available(iOS 15.0, *)
final class DeviceAutomaticVideoZoomFactor {

    enum Errors: Error {
        case minimumFocusDistanceUnknown
        case deviceLockFailed
    }

    private let device: AVCaptureDevice
    private let minimumCodeSize: Float

    init(device: AVCaptureDevice, minimumCodeSize: Float) {
        self.device = device
        self.minimumCodeSize = minimumCodeSize
    }

    ///
    /// Optimize the user experience for scanning QR codes down to smaller sizes (determined by `minimumCodeSize`, for example 2x2 cm).
    /// When scanning a QR code of that size, the user may need to get closer than the camera's minimum focus distance to fill the rect of interest.
    /// To have the QR code both fill the rect and still be in focus, we may need to apply some zoom.
    ///
    func applyAutomaticZoomFactorIfNeeded() throws {
        let deviceMinimumFocusDistance = Float(self.device.minimumFocusDistance)
        guard deviceMinimumFocusDistance != -1 else {
            throw Errors.minimumFocusDistanceUnknown
        }

        Logger.logIfStaging("Video Zoom Factor", "using device: \(self.device)")
        Logger.logIfStaging("Video Zoom Factor", "device minimum focus distance: \(deviceMinimumFocusDistance)")

        /*
            Set an initial square rect of interest that is 100% of the view's shortest side.
            This means that the region of interest will appear in the same spot regardless
            of whether the app starts in portrait or landscape.
        */
        let formatDimensions = CMVideoFormatDescriptionGetDimensions(self.device.activeFormat.formatDescription)
        let rectOfInterestWidth = Double(formatDimensions.height) / Double(formatDimensions.width)
        let deviceFieldOfView = self.device.activeFormat.videoFieldOfView
        let minimumSubjectDistanceForCode = self.minimumSubjectDistanceForCode(fieldOfView: deviceFieldOfView,
                                                                               minimumCodeSize: self.minimumCodeSize,
                                                                               previewFillPercentage: Float(rectOfInterestWidth))

        Logger.logIfStaging("Video Zoom Factor", "minimum subject distance: \(minimumSubjectDistanceForCode)")

        guard minimumSubjectDistanceForCode < deviceMinimumFocusDistance else {
            return
        }

        let zoomFactor = deviceMinimumFocusDistance / minimumSubjectDistanceForCode
        Logger.logIfStaging("Video Zoom Factor", "computed zoom factor: \(zoomFactor)")

        try self.device.lockForConfiguration()
        self.device.videoZoomFactor = CGFloat(zoomFactor)
        self.device.unlockForConfiguration()

        Logger.logIfStaging("Video Zoom Factor", "applied zoom factor: \(self.device.videoZoomFactor)")
    }

    private func minimumSubjectDistanceForCode(fieldOfView: Float,
                                               minimumCodeSize: Float,
                                               previewFillPercentage: Float) -> Float {
        /*
            Given the camera horizontal field of view, we can compute the distance (mm) to make a code
            of minimumCodeSize (mm) fill the previewFillPercentage.
        */
        let radians = self.degreesToRadians(fieldOfView / 2)
        let filledCodeSize = minimumCodeSize / previewFillPercentage
        return filledCodeSize / tan(radians)
    }

    private func degreesToRadians(_ degrees: Float) -> Float {
        return degrees * Float.pi / 180
    }
}
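For reference, a usage sketch based on the description above; the session, videoDevice, and the 20 mm minimum code size are placeholders, not values from the answer:

session.beginConfiguration()
// ... add the camera input and metadata/video outputs here ...
if #available(iOS 15.0, *) {
    let zoom = DeviceAutomaticVideoZoomFactor(device: videoDevice, minimumCodeSize: 20)
    try? zoom.applyAutomaticZoomFactorIfNeeded() // on failure, keep the default zoom
}
session.commitConfiguration()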
Thankfully, with the help of Reddit, I was able to figure out that the solution is simply to replace
AVCaptureDeviceTypeBuiltInTripleCamera
with
AVCaptureDeviceTypeBuiltInWideAngleCamera
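In Swift, that fix amounts to something like the sketch below (a rough equivalent of the Objective-C code above, not the asker's exact code):

// Prefer the wide-angle back camera; fall back to the default video device.
let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                 mediaType: .video,
                                                 position: .back)
let device = discovery.devices.first ?? AVCaptureDevice.default(for: .video)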

Is there a way to display camera images without using AVCaptureVideoPreviewLayer?

Is there a way to display camera images without using AVCaptureVideoPreviewLayer?
I want to do a screen capture, but I cannot get it to work.
session = AVCaptureSession()
camera = AVCaptureDevice.default(
    AVCaptureDevice.DeviceType.builtInWideAngleCamera,
    for: AVMediaType.video,
    position: .front) // position: .front

do {
    input = try AVCaptureDeviceInput(device: camera)
} catch let error as NSError {
    print(error)
}

if session.canAddInput(input) {
    session.addInput(input)
}

let previewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraView.backgroundColor = UIColor.red
previewLayer.frame = cameraView.bounds
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
cameraView.layer.addSublayer(previewLayer)

session.startRunning()
I am currently trying to broadcast a screen capture that composites the camera image with some UIViews. However, if I use AVCaptureVideoPreviewLayer, the camera image is not included in the screen capture. Therefore, I want to display the camera image in a way that can be screen captured.
Generally, views that are rendered directly on the GPU may not be redrawn on the CPU. This includes things like OpenGL content and these preview layers.
The "screen capture" redraws the screen into a new context on the CPU, which obviously misses the GPU-rendered part.
You should play around with adding outputs to the session, which will give you images, or rather CMSampleBuffer shots, that may be used to generate the image.
There are plenty of ways to do this, but you will most likely need to go a step lower. You can add an output to your session to receive samples directly. Doing this takes a bit of code, so please refer to some other posts like this one. The key point is that you will have a didOutputSampleBuffer method that feeds you CMSampleBufferRef objects, which may be used to construct pretty much anything in terms of images.
In your case I assume you are aiming to get a UIImage from the sample buffer. That again takes a bit of code, so refer to some other post like this one.
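A rough sketch of that approach (the queue label and the CIContext-based conversion are my assumptions, not taken from the linked posts):

import AVFoundation
import UIKit

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()
    private(set) var latestImage: UIImage?

    // Attach a video data output to the existing session so we receive raw frames.
    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    // Called for every frame; convert the sample buffer into a UIImage.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        latestImage = UIImage(cgImage: cgImage)
    }
}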
To put it all together, you could simply use an image view and drop the preview layer. As you get sample buffers, you create images and update the image view. I am not sure what the performance of this would be, but I would discourage doing it. If the image itself is enough for your case, then you don't need a view snapshot at all.
But IF you do:
At snapshot time, create the image. Then overlay your preview layer with an image view showing the generated image (add it as a subview). Then create the snapshot and remove the image view, all in a single chunk:
func snapshot() -> UIImage? {
    let imageView = UIImageView(frame: self.previewPanelView.bounds)
    imageView.image = self.imageFromLatestSampleBuffer()
    imageView.contentMode = .scaleAspectFill // Not sure
    self.previewPanelView.addSubview(imageView)
    let image = createSnapshot()
    imageView.removeFromSuperview()
    return image
}
Let us know how things turn out once you've tried it, and what did or did not work.

Setting lighting in ARKit framework

OK, I'm new to SceneKit and ARKit here, and I just want any models I add to my scene to have a certain, bright lighting. I have tried all the different combinations of ARSCNView's automatic lighting-update settings, but the only thing that really makes a discernible difference is autoenablesDefaultLighting:
func setup() {
    antialiasingMode = .multisampling4X
    //autoenablesDefaultLighting = true
    preferredFramesPerSecond = 60
    contentScaleFactor = 1.3

    if let camera = pointOfView?.camera {
        camera.wantsHDR = true
        camera.wantsExposureAdaptation = true
        camera.exposureOffset = -1
        camera.minimumExposure = -1
        camera.maximumExposure = 3
    }
}
Regardless of the lighting estimated from the camera (which I know ARKit can do), I just want to set one lighting setup that is always used. I want my scene contents to be lit like this:
Is this possible? What would I set sceneView.scene.lightingEnvironment equal to in order to achieve this effect?
According to the docs, you should be able to create an SCNNode at a position and then attach an SCNLight to it:
https://developer.apple.com/documentation/scenekit/scnnode
https://developer.apple.com/documentation/scenekit/scnlight
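A minimal sketch of that suggestion (the intensity and angle values are just illustrative):

let lightNode = SCNNode()
let light = SCNLight()
light.type = .directional
light.intensity = 1500                     // brighter than the 1000-lumen default
light.color = UIColor.white
lightNode.light = light
lightNode.eulerAngles = SCNVector3(x: -Float.pi / 3, y: 0, z: 0)  // tilt the light down onto the content
sceneView.scene.rootNode.addChildNode(lightNode)

With autoenablesDefaultLighting turned off, a fixed light node like this gives the same lighting regardless of what the camera sees.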

Is 5.1 channel positional audio output in Sprite Kit possible?

I'm trying to play positional audio using the front and back channels in Sprite Kit, and testing on an Apple TV device.
I'm using the following code:
let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a")
let music = SKAudioNode(URL: musicURL!)
addChild(music)
music.positional = true
music.position = CGPoint(x: 0, y: 0)
let moveForward = SKAction.moveToY(1024, duration: 2)
let moveBack = SKAction.moveToY(-1024, duration: 2)
let sequence = SKAction.sequence([moveForward, moveBack])
let repeatForever = SKAction.repeatActionForever(sequence)
music.runAction(repeatForever)
What I want to accomplish is a sound that pans from the front to the back channels but Sprite Kit seems to be using just the 2 channel stereo output.
If I use moveToX instead of moveToY I get a sound panning from left to right.
I'm surely missing some initialization code to signal I want a 5.1 sound output, but I'm not sure if the SKAudioNode positional feature only works for 2 channel stereo output.
Is positional audio with more than 2 channels achievable in Sprite Kit or should I resort to AVFoundation or even OpenAL for this?
I have tried similar code with SceneKit and it seems that it also uses only 2 channels for positional audio.
A sound can't be positioned across more than the 2 stereo channels this way. You should not use an SKAudioNode; use AVFoundation directly to play the sound.
First you have to set up the audio session to use a 5.1 channel output layout:
let session = AVAudioSession.sharedInstance()
try session.setCategory(AVAudioSessionCategoryPlayback)
try session.setActive(true)
try session.setPreferredOutputNumberOfChannels(6)
Then wire up an AVAudioEnvironmentNode configured to output to those 6 channels.
A starting point can be found in this existing answer:
https://stackoverflow.com/a/35657416/563802
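A rough sketch of that wiring with AVAudioEngine (the file name, position, and error handling are placeholders, and whether you actually get 5.1 output still depends on the session setup above and the hardware):

import AVFoundation

func playPositionalAudio() throws {
    let engine = AVAudioEngine()
    let environment = AVAudioEnvironmentNode()
    engine.attach(environment)

    // Connect the environment node to the output using the hardware's channel layout.
    let outputFormat = engine.outputNode.outputFormat(forBus: 0)
    engine.connect(environment, to: engine.outputNode, format: outputFormat)

    // Play a source positioned in 3D space; note that only mono inputs are spatialized.
    let player = AVAudioPlayerNode()
    engine.attach(player)
    let url = Bundle.main.url(forResource: "music", withExtension: "m4a")!
    let file = try AVAudioFile(forReading: url)
    engine.connect(player, to: environment, format: file.processingFormat)
    player.position = AVAudio3DPoint(x: 0, y: 0, z: -5)   // in front of the listener; move it to pan front/back

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
}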

Background for different screen sizes

I am quite new to SpriteKit development and I am trying to develop my first game.
I have implemented some edges (to avoid the ball going out of the screen) using the visual Scene editor (GameScene.sks) and I had to specify the size of the Scene (640x960).
Now, using the code, I would like to change the background according to the device width/height (because I cannot limit to 640x960).
So, I need to use:
scene.scaleMode = SKSceneScaleMode.Fill
In order to stretch the edges of the scene for different devices
BUT
I would like the backgrounds to keep their original scale (not be stretched).
Is that possible at all?
This is the code I am using to set up a different background for each device:
if skView.bounds.width == 768.0 {
    let backgroundTexture = SKTexture(imageNamed: "bg_768.jpg")
    let background = SKSpriteNode(texture: backgroundTexture)
    background.size.width = skView.bounds.width
    background.size.height = skView.bounds.height
    background.position = CGPointMake(CGRectGetMidX(skView.frame), CGRectGetMidY(skView.frame))
    scene.addChild(background)
} else {
    // set other backgrounds
}
But if the scene gets stretched, they get stretched too!
Is there a way to avoid this?
Thank you!
