Switching cameras slow in AVCaptureSession - iOS

I've looked at many other questions like this, and tried a lot of the solutions, but this case is a bit different. I'm using AVCaptureVideoDataOutputSampleBufferDelegate so that I can apply CIFilters to the live video feed. I'm using the following method to change cameras:
func changeCameras() {
    captureSession.stopRunning()
    var desiredPosition: AVCaptureDevicePosition?
    if front {
        desiredPosition = AVCaptureDevicePosition.Back
    } else {
        desiredPosition = AVCaptureDevicePosition.Front
    }
    let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as? [AVCaptureDevice]
    for device in devices! {
        if device.position == desiredPosition {
            self.captureSession.beginConfiguration()
            do {
                let input = try AVCaptureDeviceInput(device: device)
                for oldInput in self.captureSession.inputs {
                    print(oldInput)
                    self.captureSession.removeInput(oldInput as! AVCaptureInput)
                }
                print(input)
                self.captureSession.addInput(input)
                self.captureSession.commitConfiguration()
                dispatch_async(dispatch_get_main_queue(), { () -> Void in
                    self.captureSession.startRunning()
                })
            } catch {
                print("evic failed")
            }
        }
    }
    front = !front
}
The methods that I am using to set up the camera (called in viewDidLoad) and receive the sampleBuffer from the delegate are here: https://gist.github.com/JoeyBodnar/17e22e3c04093caa54cf240ed8b1b601.
One problem is that when pressing the button to change cameras, the screen freezes for a solid 4-5 seconds before the feed switches. I've tried the method above, as well as running the entire function on a separate queue, and it still takes a long time. I've never had this problem when switching cameras with a regular AVCaptureVideoPreviewLayer, so I suspect this may be caused in part by the fact that I'm using the sample buffer delegate, but I can't quite piece together how or why. Any help is appreciated. Thanks!
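For reference, a common approach (a minimal sketch in current Swift, not the poster's code) keeps the session running and swaps only the input inside a beginConfiguration()/commitConfiguration() pair on a dedicated session queue; captureSession, sessionQueue, and front are assumed to exist elsewhere:
// Minimal sketch (current Swift API, not the poster's code): swap camera inputs
// on a dedicated session queue without stopping the session.
// `captureSession`, `sessionQueue`, and `front` are assumed to be defined elsewhere.
sessionQueue.async {
    let newPosition: AVCaptureDevice.Position = self.front ? .back : .front
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: newPosition),
          let newInput = try? AVCaptureDeviceInput(device: device) else { return }

    self.captureSession.beginConfiguration()
    self.captureSession.inputs.forEach { self.captureSession.removeInput($0) }
    if self.captureSession.canAddInput(newInput) {
        self.captureSession.addInput(newInput)
    }
    self.captureSession.commitConfiguration() // no stopRunning()/startRunning() needed
    self.front.toggle()
}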

Related

How to get the reason why isFlashAvailable and isTorchAvailable return false?

So I'm building a custom camera app that has a flash button, and I need to update the device's flash mode according to the user's tap. I came across the isFlashAvailable property for flash mode and isTorchAvailable for torch mode. They seem pretty straightforward, but if one of these properties returns false, I need to know the reason why the flash or torch is unavailable. Is there any suggestion on how to get a specific reason in this case?
The docs only say that it can happen when,
for example, the device overheats and needs to cool off.
That's fine if it's the only possible reason, but I'm not sure, and maybe you have a suggestion about this?
And here are some snippets of my implementation:
/// Updates the device's flash to on, auto, or off and returns whether it succeeded.
@discardableResult
public func updateFlash(mode: AVCaptureDevice.FlashMode) -> Bool {
    guard let device = AVCaptureDevice.default(for: .video),
          device.hasFlash,
          device.isFlashAvailable else { return false }
    flashMode = mode
    return true
}

/// Updates the device's torch to on, auto, or off and returns whether it succeeded.
@discardableResult
public func updateTorch(mode: AVCaptureDevice.TorchMode) -> Bool {
    guard let device = AVCaptureDevice.default(for: .video),
          device.hasTorch,
          device.isTorchAvailable,
          device.isTorchModeSupported(mode) else { return false }
    do {
        try device.lockForConfiguration()
        device.torchMode = mode
        device.unlockForConfiguration()
        return true
    } catch {
        return false
    }
}
Reference
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624627-isflashavailable#declaration
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624626-istorchavailable
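The documentation doesn't enumerate the reasons, but both properties are documented as key-value observable, so one option (a sketch, not a full answer) is to watch for availability changes and react when the flag flips back, for example after the device cools down; device and flashButton below are hypothetical names for illustration only:
// Hedged sketch: observe availability changes via KVO and keep the UI in sync.
// `device` and `flashButton` are hypothetical; keep `flashObservation` alive
// for as long as you want to receive updates.
let flashObservation = device.observe(\.isFlashAvailable, options: [.initial, .new]) { device, _ in
    DispatchQueue.main.async {
        flashButton.isEnabled = device.isFlashAvailable
    }
}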

How can I switch between the iPhone 11 Pro cameras using AVFoundation in Xcode?

I've just started out learning Swift in Xcode and am creating a simple camera app to get up and running. I have a button that switches between the front- and back-facing cameras working, but I want to add the option to also switch between the Telephoto and the Ultra Wide lens on the iPhone 11 Pro.
I have created the functions to run a new capture session for each lens (if detected), but I was just wondering how I can call these functions from the UIButton action.
The thing that's got me scratching my head is that the if statement used to switch between the front and back camera says if input.device.position == .back {
This only specifies whether the camera is front or back, not which lens it is. What would be an efficient way to make the button cycle from the front camera to the Wide, the Tele, the Ultra Wide, and back to the front each time it's pressed?
Apologies for any misuse of terminology, I'm very new to coding in Swift. Thank you!
{
    guard let CurrentCameraInput: AVCaptureInput = CaptureSession?.inputs.first else {
        return
    }
    if let input = CurrentCameraInput as? AVCaptureDeviceInput {
        if input.device.position == .back {
            SwitchToFrontCamera()
        }
        if input.device.position == .front {
            SwitchToBackCamera()
        }
    }
}
Welcome!
Check out this initializer for AVCaptureDevice. You can specify the DeviceType you want to use, like .builtInUltraWideCamera or .builtInTelephotoCamera.
You can also use an AVCaptureDevice.DiscoverySession to get a list of all capture devices available to your app.
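For example, a small sketch (assuming iOS 13+ and hardware that actually has these lenses) of listing the individual back cameras and requesting one directly:
// Sketch: list the specific back-camera lenses this device has.
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera, .builtInUltraWideCamera],
    mediaType: .video,
    position: .back)

for device in discovery.devices {
    print(device.deviceType, device.localizedName)
}

// Or request a particular lens directly; this returns nil if the hardware lacks it.
let ultraWide = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back)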

Frame dropping when using AVAssetWriter?

I’m working on an app that processes video frames, draws effects on each frame, and saves the result. When saving the video using AVAssetWriter I get stutters in the resulting video, but when I reduce the amount of processing on every frame, the stutter is reduced.
Writing and processing happen on separate queues.
Every processed frame is dispatched to a writing queue.
Here is the code:
_writingQueue.async {
    autoreleasepool {
        synchronized(self) {
            if self._status.rawValue >= VideoRecordingModelStatus.finishingRecordingPart1.rawValue {
                return
            }
            if !self._haveStartedSession {
                self._assetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                self._haveStartedSession = true
            }
            let input = (mediaType == AVMediaType.video) ? self._videoInput : self._audioInput
            while !(input?.isReadyForMoreMediaData ?? false) {}
            let success = input!.append(sampleBuffer)
            if !success {
                let error = self._assetWriter?.error
                synchronized(self) {
                    self.transitionToStatus(.failed, error: error as NSError?)
                }
            }
        }
    }
}
Resulting video
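For what it's worth, a commonly suggested adjustment when the frames come from a live capture source (an assumption; the question doesn't say) is to mark the writer inputs as real-time and to drop a frame when the input isn't ready, rather than busy-waiting on the writing queue. A minimal sketch reusing the question's names:
// Hedged sketch (not the poster's code): configure the inputs as real-time
// once, before calling startWriting().
_videoInput?.expectsMediaDataInRealTime = true
_audioInput?.expectsMediaDataInRealTime = true

// On the writing queue, check readiness once instead of spinning on it;
// dropping a frame here is usually cheaper than blocking the queue.
_writingQueue.async {
    guard input.isReadyForMoreMediaData else { return } // drop rather than block
    if !input.append(sampleBuffer) {
        // inspect _assetWriter?.error and transition to a failed state
    }
}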

"__CFRunLoopModeFindSourceForMachPort returned NULL" messages when using AVAudioPlayer

We're working on a SpriteKit game. In order to have more control over sound effects, we switched from using SKAudioNodes to having some AVAudioPlayers. While everything seems to be working well in terms of game play, frame rate, and sounds, we're seeing occasional error(?) messages in the console output when testing on physical devices:
... [general] __CFRunLoopModeFindSourceForMachPort returned NULL for mode 'kCFRunLoopDefaultMode' livePort: #####
It doesn't seem to really cause any harm when it happens (no sound glitches or hiccups in frame rate or anything), but not understanding exactly what the message means and why it's happening is making us nervous.
Details:
The game is all standard SpriteKit, all events driven by SKActions, nothing unusual there.
The uses of AVFoundation stuff are the following. Initialization of app sounds:
class Sounds {
    let soundQueue: DispatchQueue

    init() {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print(error.localizedDescription)
        }
        soundQueue = DispatchQueue.global(qos: .background)
    }

    func execute(_ soundActions: @escaping () -> Void) {
        soundQueue.async(execute: soundActions)
    }
}
Creating various sound effect players:
guard let player = try? AVAudioPlayer(contentsOf: url) else {
    fatalError("Unable to instantiate AVAudioPlayer")
}
player.prepareToPlay()
Playing a sound effect:
let pan = stereoBalance(...)
sounds.execute {
    if player.pan != pan {
        player.pan = pan
    }
    player.play()
}
The AVAudioPlayers are all for short sound effects with no looping, and they get reused. We create about 25 players total, including multiple players for certain effects when they can repeat in quick succession. For a particular effect, we rotate through the players for that effect in a fixed sequence. We have verified that whenever a player is triggered, its isPlaying is false, so we're not trying to invoke play on something that's already playing.
The message doesn't appear that often. Over the course of a 5-10 minute game with possibly thousands of sound effects, we see it maybe 5-10 times.
The message seems to occur most commonly when a bunch of sound effects are being played in quick succession, but it doesn't feel like it's 100% correlated with that.
Not using the dispatch queue (i.e., having sounds.execute just call soundActions() directly) doesn't fix the issue (though that does cause the game to lag significantly). Changing the dispatch queue to some of the other priorities like .utility also doesn't affect the issue.
Making sounds.execute just return immediately (i.e., don't actually call the closure at all, so there's no play()) does eliminate the messages.
We did find the source code that's producing the message at this link:
https://github.com/apple/swift-corelibs-foundation/blob/master/CoreFoundation/RunLoop.subproj/CFRunLoop.c
but we don't understand it except at an abstract level, and are not sure how run loops are involved in the AVFoundation stuff.
Lots of googling has turned up nothing helpful. And as I indicated, it doesn't seem to be causing noticeable problems at all. It would be nice to know why it's happening though, and either how to fix it or to have certainty that it won't ever be an issue.
We're still working on this, but have experimented enough that it's clear how we should do things. Outline:
Use the scene's audioEngine property.
For each sound effect, make an AVAudioFile for reading the audio's URL from the bundle. Read it into an AVAudioPCMBuffer. Stick the buffers into a dictionary that's indexed by sound effect.
Make a bunch of AVAudioPlayerNodes, attach() them to the audioEngine, and connect(playerNode, to: audioEngine.mainMixerNode). At the moment we're creating these dynamically, searching through our current list of player nodes to find one that's not playing and making a new one if none is available. That probably has more overhead than needed, since we have to have callbacks to observe when a player node finishes whatever it's playing and set it back to a stopped state. We'll try switching to just a fixed maximum number of active sound effects and rotating through the players in order.
To play a sound effect, grab the buffer for the effect, find a non-busy playerNode, and do playerNode.scheduleBuffer(buffer, ...). And playerNode.play() if it's not currently playing.
I may update this with some more detailed code once we have things fully converted and cleaned up. We still have a couple of long-running AVAudioPlayers that we haven't switched to use AVAudioPlayerNode going through the mixer. But anyway, pumping the vast majority of sound effects through the scheme above has eliminated the error message, and it needs far less stuff sitting around since there's no duplication of the sound effects in-memory like we had before. There's a tiny bit of lag, but we haven't even tried putting some stuff on a background thread yet, and maybe not having to search for and constantly start/stop players would even eliminate it without having to worry about that.
Since switching to this approach, we've had no more runloop complaints.
Edit: Some example code...
import SpriteKit
import AVFoundation

enum SoundEffect: String, CaseIterable {
    case playerExplosion = "player_explosion"
    // lots more

    var url: URL {
        guard let url = Bundle.main.url(forResource: self.rawValue, withExtension: "wav") else {
            fatalError("Sound effect file \(self.rawValue) missing")
        }
        return url
    }

    func audioBuffer() -> AVAudioPCMBuffer {
        guard let file = try? AVAudioFile(forReading: self.url) else {
            fatalError("Unable to instantiate AVAudioFile")
        }
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length)) else {
            fatalError("Unable to instantiate AVAudioPCMBuffer")
        }
        do {
            try file.read(into: buffer)
        } catch {
            fatalError("Unable to read audio file into buffer, \(error.localizedDescription)")
        }
        return buffer
    }
}
class Sounds {
    var audioBuffers = [SoundEffect: AVAudioPCMBuffer]()
    // more stuff

    init() {
        for effect in SoundEffect.allCases {
            preload(effect)
        }
    }

    func preload(_ sound: SoundEffect) {
        audioBuffers[sound] = sound.audioBuffer()
    }

    func cachedAudioBuffer(_ sound: SoundEffect) -> AVAudioPCMBuffer {
        guard let buffer = audioBuffers[sound] else {
            fatalError("Audio buffer for \(sound.rawValue) was not preloaded")
        }
        return buffer
    }
}

class Globals {
    // Sounds loaded once and shared among all scenes in the game
    static let sounds = Sounds()
}
class SceneAudio {
    let stereoEffectsFrame: CGRect
    let audioEngine: AVAudioEngine
    var playerNodes = [AVAudioPlayerNode]()
    var nextPlayerNode = 0
    // more stuff

    init(stereoEffectsFrame: CGRect, audioEngine: AVAudioEngine) {
        self.stereoEffectsFrame = stereoEffectsFrame
        self.audioEngine = audioEngine
        do {
            try audioEngine.start()
            let buffer = Globals.sounds.cachedAudioBuffer(.playerExplosion)
            // We got up to about 10 simultaneous sounds when really pushing the game
            for _ in 0 ..< 10 {
                let playerNode = AVAudioPlayerNode()
                playerNodes.append(playerNode)
                audioEngine.attach(playerNode)
                audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
                playerNode.play()
            }
        } catch {
            logging("Cannot start audio engine, \(error.localizedDescription)")
        }
    }

    func soundEffect(_ sound: SoundEffect, at position: CGPoint = .zero) {
        guard audioEngine.isRunning else { return }
        let buffer = Globals.sounds.cachedAudioBuffer(sound)
        let playerNode = playerNodes[nextPlayerNode]
        nextPlayerNode = (nextPlayerNode + 1) % playerNodes.count
        playerNode.pan = stereoBalance(position)
        playerNode.scheduleBuffer(buffer)
    }

    func stereoBalance(_ position: CGPoint) -> Float {
        guard stereoEffectsFrame.width != 0 else { return 0 }
        guard position.x <= stereoEffectsFrame.maxX else { return 1 }
        guard position.x >= stereoEffectsFrame.minX else { return -1 }
        return Float((position.x - stereoEffectsFrame.midX) / (0.5 * stereoEffectsFrame.width))
    }
}
class GameScene: SKScene {
    var audio: SceneAudio!
    // lots more stuff

    // somewhere in initialization
    // gameFrame is the area where action takes place and which
    // determines panning for stereo sound effects
    audio = SceneAudio(stereoEffectsFrame: gameFrame, audioEngine: audioEngine)

    func destroyPlayer(_ player: SKSpriteNode) {
        audio.soundEffect(.playerExplosion, at: player.position)
        // more stuff
    }
}

AVFoundation: toggle camera fails at canAddInput

I am trying to add a rotate camera function with AVFoundation to allow the user to toggle between the front-facing and back-facing cameras.
As shown in the code below, I've put in some println() statements and all the values seem legit, but the code always drops into the failing else clause when testing canAddInput().
I've tried setting the sessionPreset (which is set in another function that initializes the session beforehand) to various values, including AVCaptureSessionPresetHigh and AVCaptureSessionPresetLow, but that didn't help.
@IBAction func rotateCameraPressed(sender: AnyObject) {
    // Loop through all the capture devices to find the right ones
    var backCameraDevice: AVCaptureDevice?
    var frontCameraDevice: AVCaptureDevice?
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Define devices
            if (device.position == AVCaptureDevicePosition.Back) {
                backCameraDevice = device as? AVCaptureDevice
            } else if (device.position == AVCaptureDevicePosition.Front) {
                frontCameraDevice = device as? AVCaptureDevice
            }
        }
    }
    // Assign found devices to the corresponding inputs
    var backInput: AVCaptureDeviceInput?
    var frontInput: AVCaptureDeviceInput?
    var error: NSError?
    if let backDevice = backCameraDevice {
        println("Back device is \(backDevice)")
        backInput = AVCaptureDeviceInput(device: backDevice, error: &error)
    }
    if let frontDevice = frontCameraDevice {
        println("Front device is \(frontDevice)")
        frontInput = AVCaptureDeviceInput(device: frontDevice, error: &error)
    }
    // Now rotate the camera
    isBackCamera = !isBackCamera // toggle camera position
    if isBackCamera {
        // remove front and add back
        captureSession!.removeInput(frontInput)
        if let bi = backInput {
            println("Back input is \(bi)")
            if captureSession!.canAddInput(bi) {
                captureSession!.addInput(bi)
            } else {
                println("Cannot add back input!")
            }
        }
    } else {
        // remove back and add front
        captureSession!.removeInput(backInput)
        if let fi = frontInput {
            println("Front input is \(fi)")
            if captureSession!.canAddInput(fi) {
                captureSession!.addInput(fi)
            } else {
                println("Cannot add front input!")
            }
        }
    }
}
The problem seems to be that the inputs derived from the devices found in the iteration don't actually match the input stored in the captureSession. This appears to be a new thing, since all the code I've seen posted about this finds and removes the input for the current camera by iterating through the list of devices, as I've done in my code.
That doesn't seem to work anymore - well, at least not in the code I posted, which is based on all the sources I've been able to dig up (all of which happen to be in Objective-C). The reason canAddInput() fails is that removeInput() never succeeds; the fact that it doesn't raise the usual error about not being able to have multiple input devices is strange (since that would have helped with the debugging).
Anyway, the fix is to not remove the input derived from the found device (which used to work). Instead, remove the input that is actually present by going through the captureSession.inputs array and calling removeInput() on that.
To scrunch all that babble into code, here's what I did:
for ii in captureSession!.inputs {
    captureSession!.removeInput(ii as! AVCaptureInput)
}
And that did the trick! :)
