I am using ReplayKit to get a video stream. The capture handler is called continuously, but rpSampleType is often something other than video. I only want the video buffer.
Here is my code:
RPScreenRecorder.shared().startCapture(handler: { (cmSampleBuffer, rpSampleType, error) in
    if CMSampleBufferDataIsReady(cmSampleBuffer) {
        switch rpSampleType {
        case RPSampleBufferType.video:
            // create the CVPixelBuffer
            let pixelBuffer = CMSampleBufferGetImageBuffer(cmSampleBuffer)!
            let rtcpixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        default:
            print("sample has no matching type")
        }
    }
}) { (error) in
    print(error?.localizedDescription)
}
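For context, startCapture delivers app-audio and mic-audio samples through the same handler, so getting a sample type other than .video on most callbacks is expected rather than an error. A minimal sketch that filters for video only (the print calls are placeholders for whatever you do with the pixel buffer, e.g. wrapping it in RTCCVPixelBuffer as above):

import ReplayKit

RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, sampleType, error in
    guard error == nil, CMSampleBufferDataIsReady(sampleBuffer) else { return }
    switch sampleType {
    case .video:
        // Only .video samples carry an image buffer.
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Hand the CVPixelBuffer to the rest of the pipeline here.
            print("got video frame: \(CVPixelBufferGetWidth(pixelBuffer))x\(CVPixelBufferGetHeight(pixelBuffer))")
        }
    case .audioApp, .audioMic:
        // Audio arrives through the same handler; ignore it if you only need video.
        break
    default:
        break
    }
}) { error in
    if let error = error {
        print("startCapture failed: \(error.localizedDescription)")
    }
}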
Related
I've already asked a question without any responses here:
How do I record changes on a CIImage to a video using AVAssetWriter?
But perhaps my question needs to be simpler. My Google search has been fruitless. How do I capture video of a changing CIImage in real time, without using the camera?
Using captureOutput, I get a CMSampleBuffer, which I can make into a CVPixelBuffer. AVAssetWriterInput's mediaType is set to video, but I think it expects compressed video. In addition, I'm not clear if the AVAssetWriterInput expectsMediaDataInRealTime property should be set to true or not.
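For what it's worth, the writer input does not need pre-compressed video if frames go through a pixel buffer adaptor, and for frames produced live, expectsMediaDataInRealTime is normally set to true. A minimal setup sketch (outputURL, width, and height are placeholders I'm assuming, not values from this project):

import AVFoundation

// A sketch, not the project's actual setup.
func makeWriter(outputURL: URL, width: Int, height: Int) throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    // Live, uncompressed frames pushed from a display loop: real-time is the usual choice.
    input.expectsMediaDataInRealTime = true

    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])

    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    // Per frame: adaptor.append(pixelBuffer, withPresentationTime: presentationTime)
    return (writer, adaptor)
}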
It seems like it should be fairly simple, but everything I've attempted makes my AVAssetWriter's status change to failed.
Here is my last attempt at making this work. Still failing:
@objc func importLivePreview() {
    guard var importedImage = importedDryCIImage else { return }
    DispatchQueue.main.async() {
        // apply filter to camera image
        // this is what makes the CIImage appear that it is changing
        importedImage = self.applyFilterAndReturnImage(ciImage: importedImage, orientation: UIImage.Orientation.right, currentCameraRes: currentCameraRes!)

        if self.videoIsRecording &&
            self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
            guard let writer: AVAssetWriter = self.assetWriter, writer.status == .writing else {
                return
            }

            guard let cv: CVPixelBuffer = self.buffer(from: importedImage) else {
                print("CVPixelBuffer could not be created.")
                return
            }

            self.MTLContext?.render(importedImage, to: cv)

            self.currentSampleTime = CMTimeMakeWithSeconds(0.1, preferredTimescale: 1000000000)

            guard let currentSampleTime = self.currentSampleTime else {
                return
            }

            let success = self.assetWriterPixelBufferInput?.append(cv, withPresentationTime: currentSampleTime)
            if success == false {
                print("Pixel Buffer input failed")
            }
        }

        guard let MTLView = self.MTLCaptureView else {
            print("MTLCaptureView is not found or nil.")
            return
        }

        // update the MTKView with the changed CIImage so the user can see the changed image
        MTLView.image = importedImage
    }
}
I got it working. The problem was that I wasn't offsetting currentSampleTime. This example doesn't have accurate offsets, but it shows that the increment needs to be added onto the last sample time.
@objc func importLivePreview() {
    guard var importedImage = importedDryCIImage else { return }
    DispatchQueue.main.async() {
        // apply filter to camera image
        // this is what makes the CIImage appear that it is changing
        importedImage = self.applyFilterAndReturnImage(ciImage: importedImage, orientation: UIImage.Orientation.right, currentCameraRes: currentCameraRes!)

        if self.videoIsRecording &&
            self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
            guard let writer: AVAssetWriter = self.assetWriter, writer.status == .writing else {
                return
            }

            guard let cv: CVPixelBuffer = self.buffer(from: importedImage) else {
                print("CVPixelBuffer could not be created.")
                return
            }

            self.MTLContext?.render(importedImage, to: cv)

            guard let currentSampleTime = self.currentSampleTime else {
                return
            }

            // offset currentSampleTime
            let sampleTimeOffset = CMTimeMakeWithSeconds(0.1, preferredTimescale: 1000000000)
            self.currentSampleTime = CMTimeAdd(currentSampleTime, sampleTimeOffset)

            print("currentSampleTime = \(String(describing: currentSampleTime))")

            let success = self.assetWriterPixelBufferInput?.append(cv, withPresentationTime: currentSampleTime)
            if success == false {
                print("Pixel Buffer input failed")
            }
        }

        guard let MTLView = self.MTLCaptureView else {
            print("MTLCaptureView is not found or nil.")
            return
        }

        // update the MTKView with the changed CIImage so the user can see the changed image
        MTLView.image = importedImage
    }
}
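As a follow-up note, the fixed 0.1 s increment is only an approximation of real frame timing. One alternative I would consider (an assumption on my part, not part of the fix above) is deriving the presentation time from the host clock, inside the recording branch:

// Assumes a recordingStartTime property was captured with CACurrentMediaTime() when recording
// began, and that the writer session was started at time zero.
let elapsedSeconds = CACurrentMediaTime() - recordingStartTime
currentSampleTime = CMTimeMakeWithSeconds(elapsedSeconds, preferredTimescale: 1_000_000_000)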
All of the camera tweaks I want to use are working except the zoom factor. I am lost as to why this is happening... any ideas? The custom exposure and focus settings work fine. Did something change in iOS that I'm not aware of?
captureSession = AVCaptureSession()
captureSession?.sessionPreset = AVCaptureSessionPresetPhoto
stillImageOutput = AVCapturePhotoOutput()

let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

do {
    do {
        try device?.lockForConfiguration()
        device?.setFocusModeLockedWithLensPosition(focusValue, completionHandler: { (time) -> Void in })
        device?.setExposureModeCustomWithDuration(CMTimeMake(1, exposureValue), iso: ISOValue, completionHandler: { (time) -> Void in })
        let zoomFactor: CGFloat = 16
        device?.videoZoomFactor = zoomFactor
        device?.unlockForConfiguration()
    } catch {
        print(error)
    }

    stillImageOutput.isHighResolutionCaptureEnabled = true

    let input = try AVCaptureDeviceInput(device: device)
    if captureSession.canAddInput(input) {
        captureSession.addInput(input)

        if captureSession.canAddOutput(stillImageOutput) {
            captureSession.addOutput(stillImageOutput)
            captureSession.startRunning()

            let captureVideoLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer.init(session: captureSession)
            captureVideoLayer.frame = self.previewView.bounds
            captureVideoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

            self.previewView.layer.insertSublayer(captureVideoLayer, at: 0)
        }
    }
} catch {
    print(error)
}
It turns out I was just applying my device settings at the wrong point. If the lockForConfiguration try block is moved below the capture session setup, it works as intended.
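In other words, a sketch of that reordering, reusing the names from the question: configure and start the session first, then lock the device and apply zoom, focus, and exposure.

captureSession = AVCaptureSession()
captureSession?.sessionPreset = AVCaptureSessionPresetPhoto
stillImageOutput = AVCapturePhotoOutput()
stillImageOutput.isHighResolutionCaptureEnabled = true

let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

do {
    let input = try AVCaptureDeviceInput(device: device)
    if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
        captureSession.addInput(input)
        captureSession.addOutput(stillImageOutput)
        captureSession.startRunning()
        // preview layer setup as in the question
    }

    // Lock and configure the device only after the session is set up.
    try device?.lockForConfiguration()
    device?.setFocusModeLockedWithLensPosition(focusValue, completionHandler: { (time) -> Void in })
    device?.setExposureModeCustomWithDuration(CMTimeMake(1, exposureValue), iso: ISOValue, completionHandler: { (time) -> Void in })
    device?.videoZoomFactor = 16
    device?.unlockForConfiguration()
} catch {
    print(error)
}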
Using Apple's example code for Photo & Video Acquisition (i.e. AVFoundation), I tried to change the device zoom of my iPhone camera in code.
With the help of user2345335, I realised that the code location where you place your zoom manipulation matters, and also that you must call device.lockForConfiguration() prior to any videoDevice manipulation! Both are important (code location and locking!).
Here is the link where the original Apple example can be downloaded:
(AVFoundation Apple Code example: Link)
Here is the code excerpt of the original Apple example with MY CODE THAT MANIPULATES THE ZOOM inserted at the correct spot :)
(Swift-4.2 / Xcode 10.0, iOS 11.0 SDK)
// Call this on the session queue.
private func configureSession() {
    if setupResult != .success {
        return
    }

    // ... missing original code (not important for this illustration)...

    // Add video input.
    do {
        var defaultVideoDevice: AVCaptureDevice?

        // ... missing original code (not important for this illustration)...

        if session.canAddInput(videoDeviceInput) {
            session.addInput(videoDeviceInput)
            self.videoDeviceInput = videoDeviceInput

            // ... missing original code (not important for this illustration)...
        } else {
            print("Could not add video device input to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        // !!!!!!!!!!!!!!!! MY CODE THAT MANIPULATES THE ZOOM !!!!!!!!!!!!!!!!!!!!!!!!!!!!
        // !!!!!!!!!!!!!!!! PLACE IT HERE AND ZOOM WILL WORK !!!!!!!!!!!!!!!!!!!!!!!!!!!!
        guard let device = defaultVideoDevice else { return }
        do {
            try device.lockForConfiguration()
            defer { device.unlockForConfiguration() }
            device.videoZoomFactor = 10.0
        } catch {
            debugPrint(error)
        }
        // !!!!!!!!!!!!!! END OF MY CODE THAT MANIPULATES THE ZOOM !!!!!!!!!!!!!!!!!!!!!!!!!!
    } catch {
        print("Could not create video device input: \(error)")
        setupResult = .configurationFailed
        session.commitConfiguration()
        return
    }

    // Add audio input.
    do {
        let audioDevice = AVCaptureDevice.default(for: .audio)
        let audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice!)

        if session.canAddInput(audioDeviceInput) {
            session.addInput(audioDeviceInput)
        } else {
            print("Could not add audio device input to the session")
        }
    } catch {
        print("Could not create audio device input: \(error)")
    }

    // Add photo output.
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)

        photoOutput.isHighResolutionCaptureEnabled = true
        photoOutput.isLivePhotoCaptureEnabled = photoOutput.isLivePhotoCaptureSupported
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        livePhotoMode = photoOutput.isLivePhotoCaptureSupported ? .on : .off
        depthDataDeliveryMode = photoOutput.isDepthDataDeliverySupported ? .on : .off
    } else {
        print("Could not add photo output to the session")
        setupResult = .configurationFailed
        session.commitConfiguration()
        return
    }

    session.commitConfiguration()
}
I have faced an issue with the zoom factor not working on some devices (usually the newer ones with a wide-angle lens) when output.isDepthDataDeliveryEnabled was not set, and therefore the default value was true. The problem was present only when AVCaptureSession.Preset was set to .photo.
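A minimal sketch of that workaround, assuming a standard AVCapturePhotoOutput setup (session and photoOutput are placeholder names): explicitly turn depth data delivery off before relying on videoZoomFactor with the .photo preset.

session.beginConfiguration()
session.sessionPreset = .photo

if session.canAddOutput(photoOutput) {
    session.addOutput(photoOutput)
    // Explicitly disable depth data delivery so the zoom factor behaves
    // the same on wide-lens devices (the workaround described above).
    if photoOutput.isDepthDataDeliverySupported {
        photoOutput.isDepthDataDeliveryEnabled = false
    }
}
session.commitConfiguration()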
I'm a newbie to Swift! I am trying to implement an app that converts speech to text using the speech recognizer.
Problem
SFSpeechRecognizer().isAvailable is false
private let request = SFSpeechAudioBufferRecognitionRequest()
private var task: SFSpeechRecognitionTask?
private let engine = AVAudioEngine()

func recognize() {
    guard let node = engine.inputNode else {
        return
    }
    let recordingFormat = node.outputFormat(forBus: 0)
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.request.append(buffer)
    }

    engine.prepare()
    do {
        try engine.start()
    } catch {
        return print(error)
    }

    guard let systemRecognizer = SFSpeechRecognizer() else {
        return
    }

    if !systemRecognizer.isAvailable {
        self.log(.debug, msg: "Entered this condition and stopped!")
        return
    }
}
Question
I am not sure why it stops in the simulator. Does the microphone work in the iPhone simulator?
Update
I tried testing with an audio file using the code below:
let audioFile = Bundle.main.url(forResource: "create_activity", withExtension: "m4a", subdirectory: "Sample Recordings")
let recognitionRequest = SFSpeechURLRecognitionRequest(url: audioFile!)
I'm getting an error which says: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
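For completeness, this is roughly how that URL request gets handed to a recognition task (a sketch reusing the file name from the snippet above; it does not by itself explain the 1101 error):

import Speech

func recognizeFromBundledFile() {
    guard let audioFile = Bundle.main.url(forResource: "create_activity",
                                          withExtension: "m4a",
                                          subdirectory: "Sample Recordings") else {
        print("Audio file not found in bundle")
        return
    }

    // Check availability before starting the task.
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        print("Speech recognition is not available")
        return
    }

    let recognitionRequest = SFSpeechURLRecognitionRequest(url: audioFile)
    recognizer.recognitionTask(with: recognitionRequest) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
        } else {
            print("Recognition error: \(String(describing: error))")
        }
    }
}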
It looks like the simulator gained access to the microphone with iOS 11.
Unfortunately I was not able to find any documentation confirming that, but I can confirm this behaviour with the following code sample. It works perfectly fine on the iOS 11 simulator, but does nothing on the iOS 10 simulator (or earlier).
import UIKit
import Speech

class ViewController: UIViewController {

    private var recognizer = SFSpeechRecognizer()
    private var request = SFSpeechAudioBufferRecognitionRequest()
    private let engine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()
        requestPermissions()
    }

    private func requestPermissions() {
        //
        // Do not forget to add `NSMicrophoneUsageDescription` and `NSSpeechRecognitionUsageDescription` to `Info.plist`
        //

        // Request recording permission
        AVAudioSession.sharedInstance().requestRecordPermission { allowed in
            if allowed {
                // Request speech recognition authorization
                SFSpeechRecognizer.requestAuthorization { status in
                    switch status {
                    case .authorized: self.prepareSpeechRecognition()
                    case .notDetermined, .denied, .restricted: print("SFSpeechRecognizer authorization status: \(status).")
                    }
                }
            } else {
                print("AVAudioSession record permission: \(allowed).")
            }
        }
    }

    private func prepareSpeechRecognition() {
        // Check if recognizer is available (has failable initializer)
        guard let recognizer = recognizer else {
            print("SFSpeechRecognizer not supported.")
            return
        }

        // Prepare recognition task
        recognizer.recognitionTask(with: request) { (result, error) in
            if let result = result {
                print("SFSpeechRecognizer result: \(result.bestTranscription.formattedString)")
            } else {
                print("SFSpeechRecognizer error: \(String(describing: error))")
            }
        }

        // Install tap to audio engine input node
        let inputNode = engine.inputNode
        let busNumber = 0
        let recordingFormat = inputNode.outputFormat(forBus: busNumber)
        inputNode.installTap(onBus: busNumber, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.request.append(buffer)
        }

        // Prepare and start audio engine
        engine.prepare()
        do {
            try engine.start()
        } catch {
            return print(error)
        }
    }
}
Do not forget to add NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription to Info.plist.
My goal is to use an AVCaptureSession to programmatically lock focus, capture one image, activate the flash, then capture a second image after some delay.
I have managed to get the captures to work using an AVCaptureSession instance and an AVCaptureStillImageOutput. However, the images I get when calling captureStillImageAsynchronouslyFromConnection(_:completionHandler:) are 1920 x 1080, not the full 12 megapixel image my iPhone 6S camera is capable of.
Here is my capture function:
func captureImageFromStream(completion: (result: UIImage) -> Void)
{
    if let stillOutput = self.stillImageOutput {
        var videoConnection: AVCaptureConnection?
        for connection in stillOutput.connections {
            for port in connection.inputPorts! {
                if port.mediaType == AVMediaTypeVideo {
                    videoConnection = connection as? AVCaptureConnection
                    break
                }
            }
            if videoConnection != nil {
                break
            }
        }

        if videoConnection != nil {
            stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
                (imageDataSampleBuffer, error) -> Void in

                if error == nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)

                    if let image = UIImage(data: imageData) {
                        completion(result: image)
                    }
                }
                else {
                    NSLog("ImageCapture Error: \(error)")
                }
            }
        }
    }
}
What modifications should I make to capture the image I'm looking for? I'm new to Swift, so please excuse any beginner mistakes I've made.
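As an aside on the other half of the goal (locking focus and turning the flash on between the two captures), here is a rough sketch using the same pre-iOS 10 AVCaptureDevice API style as the question; lockFocusAndSetFlash and its parameters are my own names, not from the project:

// Sketch (Swift 2 style to match the question); device is the active AVCaptureDevice.
func lockFocusAndSetFlash(device: AVCaptureDevice, flashOn: Bool) {
    do {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        // Freeze focus so both captures are taken with the same focus position.
        if device.isFocusModeSupported(.Locked) {
            device.focusMode = .Locked
        }
        // Toggle the flash between the first and second capture.
        if device.hasFlash && device.isFlashModeSupported(flashOn ? .On : .Off) {
            device.flashMode = flashOn ? .On : .Off
        }
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}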
Before you addOutput the stillImageOutput and startRunning, you need to set your capture session preset to photo:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
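Roughly, the setup order then looks like this (a sketch in the question's Swift 2 style; the highResolutionStillImageOutputEnabled line is optional and is my addition, not part of the answer above):

let captureSession = AVCaptureSession()

// Set the photo preset before adding the still image output and before startRunning().
captureSession.sessionPreset = AVCaptureSessionPresetPhoto

let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
do {
    let input = try AVCaptureDeviceInput(device: device)
    if captureSession.canAddInput(input) {
        captureSession.addInput(input)
    }
} catch {
    print("Could not create device input: \(error)")
}

let stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.highResolutionStillImageOutputEnabled = true   // optional: ask for full-resolution stills
if captureSession.canAddOutput(stillImageOutput) {
    captureSession.addOutput(stillImageOutput)
}

captureSession.startRunning()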
I am wondering why my capture session is starting up slowly when the app starts. This doesn't happen every single time I start the app, so I am not sure if it is just other variables of the actual phone or something else. I am not a very good concurrency/parallel programmer, so it is more than likely my crappy coding :(
I would GREATLY appreciate it if someone could identify what is making it slow sometimes. I have read that all calls on a capture session can be blocking, so I have tried my best to dispatch those calls to another queue without introducing any race conditions. I was learning how to go about coding this way in Swift from here.
Here is my code where I initialize and start everything up (my queues are serial queues):
/**************************************************************************
VIEW DID LOAD
***************************************************************************/
override func viewDidLoad() {
    super.viewDidLoad()

    println("Initializing the cameraCaptureDevice with MediaTypeVideo")

    //------INIT CAMERA CAPTURE DEVICE TO BEGIN WITH------
    self.cameraCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    println("Done initializing camera")

    var error1: NSError? = nil

    println("Getting array of available capture devices")
    //------GRAB ALL OF THE DEVICES------
    let devices = AVCaptureDevice.devices()

    //------FIND THE CAMERA MATCHING THE POSITION------
    for device in devices {
        if device.position == self.cameraCapturePosition {
            self.cameraCaptureDevice = device as? AVCaptureDevice
            println("Back camera has been added")
            self.usingBackCamera = true
        }
    }

    //------ INIT MOVIE FILE OUTPUT ------
    self.movieFileOutput = AVCaptureMovieFileOutput()

    //------SET UP PREVIEW LAYER-----
    self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
    if let preview = self.videoPreviewLayer {
        println("Video Preview Layer set")
        preview.videoGravity = AVLayerVideoGravityResizeAspectFill
    }
    else {
        println("Video Preview Layer is nil!!! Could not set AVLayerVideoGravityResizeAspectFill")
    }
    println("Camera successully can display")

    //------SET JPEG OUTPUT------
    println("Setting JPEG Output")
    self.stillImageOutput = AVCaptureStillImageOutput()
    let outputSettings = [ AVVideoCodecKey : AVVideoCodecJPEG ]
    if let imageOutput = self.stillImageOutput {
        imageOutput.outputSettings = outputSettings
    }
    else {
        println("still image output is nil, could notset output settings")
    }
    println("Successfully configured JPEG Ouput")

    //------SET MOVIE FILE OUPUT MAX DURATION AND MIN FREE DISK SPACE------
    println("Setting Movie File Max Duration")
    let maxDuration: CMTime = CMTimeMakeWithSeconds(self.totalTime, self.preferredTimeScale)
    if let movieOutput = self.movieFileOutput {
        movieOutput.maxRecordedDuration = maxDuration
        println("Successully set movie file max duration")
        println("Setting movie file minimun byte space")
        movieOutput.minFreeDiskSpaceLimit = self.minFreeSpace
        println("Successfully added minium free space")
    }
    else {
        println("Movie file output is nil, could not set maximum recording duration or minimum free space")
    }

    //------ GRAB THE DEVICE'S SUPPORTED FRAME RATE RANGES ------
    if let device = self.cameraCaptureDevice {
        println("Setting frame rates")
        let supportedFrameRateRanges = device.activeFormat.videoSupportedFrameRateRanges
        for range in supportedFrameRateRanges {
            // Workaround until finding a better way
            // frame rate should be 1 - 30
            if (range.minFrameRate >= 1 || range.minFrameRate <= 30) == true && (range.maxFrameRate <= 30 || range.maxFrameRate >= 1) == true {
                println("Frame rate is supported")
                self.frameRateSupported = true
            }
            else {
                println("Frame rate is not supported")
                self.frameRateSupported = false
            }
        }

        var error: NSError?
        if frameRateSupported && device.lockForConfiguration(&error) {
            device.activeVideoMaxFrameDuration = self.frameDuration
            device.activeVideoMinFrameDuration = self.frameDuration
            device.unlockForConfiguration()
            println("SUCCESS")
        }
        else {
            println("frame rate is not supported or there was an error")
            if let err = error {
                println("There was an error setting framerate: \(err.description)")
            }
            else {
                println("Frame rate is not supported")
            }
        }
    }
    else {
        println("camera capture device is nil, could not set frame rate")
    }

    //------ INIT AUDIO CAPTURE DEVICE ------
    self.audioCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
    var error2: NSError? = nil
    let audioDeviceInput = AVCaptureDeviceInput(device: self.audioCaptureDevice, error: &error2)

    //------ADD CAMERA CAPTURE DEVICE TO CAPTURE SESSION INPUT------
    if let captureDevice = self.cameraCaptureDevice {
        if error1 == nil {
            println("Trying to add video input")
            self.videoDeviceInput = AVCaptureDeviceInput(device: captureDevice, error: &error1)
        }
        else {
            println("Could not create video input")
        }
    }
    else {
        println("Could not create camera capture device")
    }

    //------ ADD INPUTS AND OUTPUTS AS WELL AS OTHER SESSION CONFIGURATIONS------
    dispatch_async(self.sessionQueue) {
        println("Trying to add audio output")
        if let input = audioDeviceInput {
            self.session.addInput(audioDeviceInput)
            println("Successfully added audio output")
        }
        else {
            println("Could not create audio input")
        }

        if self.session.canAddInput(self.videoDeviceInput) {
            self.session.addInput(self.videoDeviceInput)
            println("Successfully added video input")
        }
        else {
            println("Could not add video input")
        }

        println("initializing video capture session")

        //----- SET THE IMAGE QUALITY / RESOLUTION -----
        //Options:
        //  AVCaptureSessionPresetHigh - Highest recording quality (varies per device)
        //  AVCaptureSessionPresetMedium - Suitable for WiFi sharing (actual values may change)
        //  AVCaptureSessionPresetLow - Suitable for 3G sharing (actual values may change)
        //  AVCaptureSessionPreset640x480 - 640x480 VGA (check its supported before setting it)
        //  AVCaptureSessionPreset1280x720 - 1280x720 720p HD (check its supported before setting it)
        //  AVCaptureSessionPresetPhoto - Full photo resolution (not supported for video output)
        if self.session.canSetSessionPreset(AVCaptureSessionPresetHigh) {
            println("Capture Session preset is set to High Quality")
            self.session.sessionPreset = AVCaptureSessionPresetHigh
        }
        else {
            println("Capture Session preset is set to Medium Quality")
            self.session.sessionPreset = AVCaptureSessionPresetMedium
        }

        //------ADD JPEG OUTPUT AND MOVIE FILE OUTPUT TO SESSION OUTPUT------
        println("Adding still image and movie file output")
        if self.session.canAddOutput(self.stillImageOutput) && self.session.canAddOutput(self.movieFileOutput) {
            self.session.addOutput(self.stillImageOutput)
            self.session.addOutput(self.movieFileOutput)
            println("Successfully added outputs")
        }
        else {
            //------ IF OUTPUTS COULD NOT BE ADDED, THEN APP SHOULD NOT RUN ON DEVICE!!!!! ------
            println("Could Not Add still image and movie file output")
        }

        //------WE CALL A METHOD AS IT ALSO HAS TO BE DONE AFTER CHANGING CAMERA------
        self.setCameraOutputProperties()

        //------DISPLAY PREVIEW LAYER------
        if let videoLayer = self.videoPreviewLayer {
            self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer)
            println("Video Preview Layer Added as sublayer")
            self.videoPreviewLayer!.frame = self.videoPreviewView.layer.frame
            println("Video Preview frame set")
        }
        else {
            println("videoPreviewLayer is nil, could not add sublayer or set frame")
        }

        self.view.sendSubviewToBack(self.videoPreviewView)
    }
}
/**************************************************************************
VIEW DID APPEAR
***************************************************************************/
override func viewDidAppear(animated: Bool) {
    println("About to start the capture session")

    //------INITIALIZE THE CAMERA------
    dispatch_async(self.startSessionQueue) {
        if self.beenHereBefore == false {
            println("Have not seen this view before.... starting the session")

            //------ START THE PREVIEW SESSION ------
            self.startSession()

            /*
            CHECK TO MAKE SURE THAT THIS CODE IS REALLY NEEDED FOR AUTHORIZATION
            */
            // ----- SET MEDIA TYPE ------
            var mediaTypeVideo = AVMediaTypeVideo
            AVCaptureDevice.requestAccessForMediaType(mediaTypeVideo, completionHandler: { (granted) -> Void in
                //------ GRANTED ACCESS TO MEDIATYPE ------
                if granted {
                    self.deviceAuthorized = AVAuthorizationStatus.Authorized
                }
                //------ NOT GRANTED ACCESS TO MEDIATYPE ------
                else {
                    dispatch_async(dispatch_get_main_queue()) {
                        UIAlertView(title: "CopWatch", message: "CopWatch does not have permission to use the camera, please change your privacy settings.", delegate: self, cancelButtonTitle: "OK")
                        self.deviceAuthorized = AVAuthorizationStatus.Denied
                        dispatch_resume(dispatch_get_main_queue())
                    }
                }
            })
        }
        else {
            println("Been Here Before")
            self.session.startRunning()
        }

        self.weAreRecording = false
    }
}
And here is the method that starts the video preview:
/**************************************************************************
START SESSION
**************************************************************************/
func startSession() {
    println("Checking to see if the session is already running before starting the session")

    //------ START SESSION IF IT IS NOT ALREADY RUNNING------
    if !self.session.running {
        //------START CAMERA------
        println("Session is not already running, starting the session now")
        self.session.startRunning()
        self.isSessionRunning = true
        println("Capture Session initiated")
    }
    else {
        println("Session is already running, no need to start it again")
    }
}
It seems that I have found the answer.
I was adding the videoPreviewLayer as a sublayer and sending its container view to the back of the view hierarchy inside the asynchronous dispatch call. Apparently the application did not like this, and it made startup very, very slow.
I moved this code
//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
    self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer)
    println("Video Preview Layer Added as sublayer")
    self.videoPreviewLayer!.frame = self.videoPreviewView.layer.frame
    println("Video Preview frame set")
}
else {
    println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
self.view.sendSubviewToBack(self.videoPreviewView)
up to here like this:
//------SET UP PREVIEW LAYER-----
self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
if let preview = self.videoPreviewLayer {
    println("Video Preview Layer set")
    preview.videoGravity = AVLayerVideoGravityResizeAspectFill
}
else {
    println("Video Preview Layer is nil!!! Could not set AVLayerVideoGravityResizeAspectFill")
}
println("Camera successully can display")

//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
    self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer)
    println("Video Preview Layer Added as sublayer")
    self.videoPreviewLayer!.frame = self.videoPreviewView.layer.frame
    println("Video Preview frame set")
    self.view.sendSubviewToBack(self.videoPreviewView)
}
else {
    println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
I should have been able to see this issue, but I guess this is what happens when you optimize at the wrong times. Now it is pretty responsive.
Moral of the story: if you are programming with AVFoundation, don't set up and add your video preview layer as a subview of your view controller's view from an asynchronous queue.
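If the layer setup really has to stay inside the session queue block, an alternative (a sketch using the same names as above) is to hop just the UI work back to the main queue:

dispatch_async(self.sessionQueue) {
    // ... session configuration stays on the background queue ...

    // Layer and view work belongs on the main thread.
    dispatch_async(dispatch_get_main_queue()) {
        if let videoLayer = self.videoPreviewLayer {
            self.videoPreviewView.layer.addSublayer(videoLayer)
            videoLayer.frame = self.videoPreviewView.layer.frame
            self.view.sendSubviewToBack(self.videoPreviewView)
        }
    }
}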