Programmatically Capture Maximum Resolution Image using AVCaptureSession (iOS)

My goal is to use an AVCaptureSession to programmatically lock focus, capture one image, activate the flash, then capture a second image after some delay.
I have managed to get the captures to work using an AVCaptureSession instance and an AVCaptureStillImageOutput. However, the images I get when calling captureStillImageAsynchronouslyFromConnection(_:completionHandler:) are 1920 x 1080, not the full 12 megapixel image my iPhone 6S camera is capable of.
Here is my capture function:
func captureImageFromStream(completion: (result: UIImage) -> Void) {
    if let stillOutput = self.stillImageOutput {
        var videoConnection: AVCaptureConnection?
        for connection in stillOutput.connections {
            for port in connection.inputPorts! {
                if port.mediaType == AVMediaTypeVideo {
                    videoConnection = connection as? AVCaptureConnection
                    break
                }
            }
            if videoConnection != nil {
                break
            }
        }
        if videoConnection != nil {
            stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
                (imageDataSampleBuffer, error) -> Void in
                if error == nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                    if let image = UIImage(data: imageData) {
                        completion(result: image)
                    }
                } else {
                    NSLog("ImageCapture Error: \(error)")
                }
            }
        }
    }
}
What modifications should I make to capture the image I'm looking for? I'm new to Swift, so please excuse any beginner mistakes I've made.

Before you addOutput the stillImageOutput and startRunning, you need to set your capture session preset to photo:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
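For reference, here is a minimal sketch of how that fits together, using the names from the question; the focus-lock and flash part at the end is only an illustration of the stated goal, not part of the accepted answer:

// Set the preset *before* adding the still image output and starting the session,
// otherwise stills come back at the video resolution (e.g. 1920 x 1080).
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
if captureSession.canAddOutput(stillImageOutput) {
    captureSession.addOutput(stillImageOutput)
}
captureSession.startRunning()

// Illustrative only: locking focus and enabling the flash for the second capture,
// assuming `device` is the AVCaptureDevice backing the session's video input.
do {
    try device.lockForConfiguration()
    if device.isFocusModeSupported(.Locked) {
        device.focusMode = .Locked
    }
    if device.hasFlash && device.isFlashModeSupported(.On) {
        device.flashMode = .On
    }
    device.unlockForConfiguration()
} catch {
    NSLog("Could not lock device for configuration: \(error)")
}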

Related

How to get the video stream only through ReplayKit (iOS, Swift)

I am using ReplayKit to get a video stream. The capture handler is called continuously, but rpSampleType often comes back as something other than video. I only want the video buffer.
Here is my code:
RPScreenRecorder.shared().startCapture(handler: { (cmSampleBuffer, rpSampleType, error) in
    if CMSampleBufferDataIsReady(cmSampleBuffer) {
        switch rpSampleType {
        case RPSampleBufferType.video:
            // create the CVPixelBuffer
            let pixelBuffer = CMSampleBufferGetImageBuffer(cmSampleBuffer)!
            let rtcpixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        default:
            print("sample has no matching type")
        }
    }
}) { (error) in
    print(error?.localizedDescription)
}

How to save the previewed (Zoomed) image as Photo using AVFoundation?

So, I've figured out how to make a zoom effect with CATransform3DMakeScale(2.4, 2.4, 2.4), but now I'm having problems saving the zoomed preview image. Here is my code as I do the zooming:
// init camera device
let captureDevice : AVCaptureDevice? = initCaptureDevice()
print("camera was initialized")
// Prepare output
initOutput()
if (captureDevice != nil) {
    let deviceInput : AVCaptureInput? = initInputDevice(captureDevice: captureDevice!)
    if (deviceInput != nil) {
        initSession(deviceInput: deviceInput!)
        // Preview Size
        let previewLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        previewLayer.frame = self.view.bounds
        previewLayer.transform = CATransform3DMakeScale(2.4, 2.4, 2.4)
        imagePreviewScale = previewLayer.contentsScale
        self.view.layer.addSublayer(previewLayer)
        self.session?.startRunning()
    }
}
Now I tried to save the previewed zoomed image like so:
let videoConnection : AVCaptureConnection? = self.imageOutput?.connection(withMediaType: AVMediaTypeVideo)
if (videoConnection != nil) {
    videoConnection?.videoScaleAndCropFactor = imagePreviewScale
    self.imageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (imageDataSampleBuffer, error) -> Void in
        if (imageDataSampleBuffer != nil) {
            // Capture JPEG
            let imageData : NSData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer) as NSData
            // JPEG
            let image = UIImage(data: imageData as Data)
and added the line
imagePreviewScale = previewLayer.contentsScale
But still nothing happens. Can anyone please tell me how to save exactly the zoomed picture that is shown in the preview?
The problem is that previewLayer.contentsScale is 1, so you're setting videoScaleAndCropFactor to 1.
You need to set it to
videoConnection?.videoScaleAndCropFactor = 2.4
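In other words, drive both the preview transform and the capture connection from the same zoom constant, rather than from previewLayer.contentsScale. A small sketch, with property names following the question and zoomFactor as an illustrative constant:

let zoomFactor: CGFloat = 2.4

// Zoom the preview...
previewLayer.transform = CATransform3DMakeScale(zoomFactor, zoomFactor, zoomFactor)

// ...and apply the same factor to the still image connection so the saved photo matches.
if let videoConnection = self.imageOutput?.connection(withMediaType: AVMediaTypeVideo) {
    // The factor must stay within the connection's supported range.
    videoConnection.videoScaleAndCropFactor = min(zoomFactor, videoConnection.videoMaxScaleAndCropFactor)
    // ...then capture with captureStillImageAsynchronously(from:completionHandler:) as in the question.
}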

Does an iPhone 6 have a front-camera flash?

When I set the flashMode for my front camera and then call
let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo)
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: process)
I get the following error message:
error while capturing still image: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x12eeb7200 {Error Domain=NSOSStatusErrorDomain Code=-16800 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-16800), NSLocalizedDescription=The operation could not be completed}
If I don't set the camera's flashMode and then call:
let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo)
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: process)
the front camera takes a picture and doesn't throw the error. So I wonder: does a front-camera flash exist on iPhones? It should, considering that Snapchat has one, and the default camera app on an iPhone has a front-camera flash. So I'm not entirely sure what's going on. Currently, this is how I set up my camera:
func getCameraStreamLayer() -> CALayer? {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    currentCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    stillImageOutput = AVCaptureStillImageOutput()
    stillImageOutput!.outputSettings = [ AVVideoCodecKey: AVVideoCodecJPEG ]
    if let input = try? AVCaptureDeviceInput(device: currentCamera) as AVCaptureDeviceInput {
        if captureSession!.canAddInput(input) && captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.addInput(input)
            captureSession!.addOutput(stillImageOutput)
        }
    }
    return AVCaptureVideoPreviewLayer(session: captureSession)
}

func toggleFlash() {
    flash = !flash
    if flash {
        for case let (device as AVCaptureDevice) in AVCaptureDevice.devices() {
            if device.hasFlash && device.flashAvailable {
                if device.isFlashModeSupported(.On) {
                    do {
                        try device.lockForConfiguration()
                        device.flashMode = .On
                        device.unlockForConfiguration()
                    } catch {
                        print("Something went wrong")
                    }
                }
            }
        }
    } else { // turn off flash
    }
}

func photograph(process: (CMSampleBuffer!, NSError!) -> ()) {
    let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo)
    stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: process)
}

func flipCamera() {
    guard let session = captureSession where session.running == true else {
        return
    }
    session.beginConfiguration()
    let currentCameraInput = session.inputs[0] as! AVCaptureDeviceInput
    session.removeInput(currentCameraInput)
    let newCamera = {
        let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
        for case let device as AVCaptureDevice in devices {
            if (device.position == .Front && currentCameraInput.device.position == .Back) {
                return device
            }
            if (device.position == .Back && currentCameraInput.device.position == .Front) {
                return device
            }
        }
        return nil
    }() as AVCaptureDevice?
    currentCamera = newCamera!
    if let newVideoInput = try? AVCaptureDeviceInput(device: newCamera) {
        captureSession?.addInput(newVideoInput)
    }
    captureSession?.commitConfiguration()
}
I'm not sure what I should do. I've tried to create a new capture session and then lock and then set the flashMode for the camera. I still get the same error.
The iPhone 6 does not have a front-facing camera flash; however, the iPhone 6s and up do.
There are "hack" solutions in the App Store that flash the screen brightly to generate light in front-facing mode, but there's no actual flash.
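If you want a comparable effect on devices without a true front flash, a common workaround is to briefly light the scene with the screen itself before capturing. A rough sketch, reusing the question's photograph function; the overlay view, brightness handling, and delay are my own illustration:

// Screen-based "front flash": show a white overlay at full brightness, capture, then restore.
// Assumes this runs on the main thread inside the question's view controller.
let previousBrightness = UIScreen.mainScreen().brightness
let flashView = UIView(frame: view.bounds)
flashView.backgroundColor = UIColor.whiteColor()
view.addSubview(flashView)
UIScreen.mainScreen().brightness = 1.0

// Give the screen a moment to light the scene before capturing.
let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(0.2 * Double(NSEC_PER_SEC)))
dispatch_after(delay, dispatch_get_main_queue()) {
    self.photograph { sampleBuffer, error in
        // ...handle the captured frame as usual...
        dispatch_async(dispatch_get_main_queue()) {
            flashView.removeFromSuperview()
            UIScreen.mainScreen().brightness = previousBrightness
        }
    }
}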

AVCaptureStillImageOutput & UIImagePNGRepresentation

I am having a hard time with something I think shouldn't be so difficult, so I presume I must be looking at the problem from the wrong angle.
In order to understand how AVCaptureStillImageOutput and the camera work I made a tiny app.
This app is able to take a picture and save it as a PNG file (I do not want JPEG). The next time the app is launched, it checks if a file is present and if it is, the image stored inside the file is used as the background view of the app. The idea is rather simple.
The problem is that it does not work. If someone can tell me what I am doing wrong that will be very helpful.
I would like the picture to appear as the background the same way it looked on the display when it was taken, but it comes out rotated or at the wrong scale, etc.
Here is the relevant code (I can provide more information if ever needed).
The viewDidLoad method:
override func viewDidLoad() {
    super.viewDidLoad()
    // For the photo capture:
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
    // Select the appropriate capture devices:
    for device in AVCaptureDevice.devices() {
        // Make sure this particular device supports video.
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera.
            if (device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    tapGesture = UITapGestureRecognizer(target: self, action: Selector("takePhoto"))
    self.view.addGestureRecognizer(tapGesture)
    let filePath = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("BackGroundImage#2x.png")
    if !NSFileManager.defaultManager().fileExistsAtPath(filePath) { return }
    let bgImage = UIImage(contentsOfFile: filePath),
        bgView = UIImageView(image: bgImage)
    self.view.addSubview(bgView)
}
The method to handle the picture taking:
func takePhoto() {
    if !captureSession.running {
        beginPhotoCaptureSession()
        return
    }
    if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
        stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
            (imageDataSampleBuffer, error) -> Void in
            if error == nil {
                var localImage = UIImage(fromSampleBuffer: imageDataSampleBuffer)
                UIGraphicsBeginImageContext(localImage!.size)
                CGContextRotateCTM(UIGraphicsGetCurrentContext(), CGFloat(M_PI_2))
                //localImage!.drawAtPoint(CGPointZero)
                localImage!.drawAtPoint(CGPoint(x: -localImage!.size.height, y: -localImage!.size.width))
                //localImage!.drawAtPoint(CGPoint(x: -localImage!.size.width, y: -localImage!.size.height))
                localImage = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()
                localImage = resizeImage(localImage!, toSize: self.view.frame.size)
                if let data = UIImagePNGRepresentation(localImage!) {
                    let bitMapName = "BackGroundImage#2x"
                    let filename = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("\(bitMapName).png")
                    data.writeToFile(filename, atomically: true)
                    print("Picture saved: \(bitMapName)\n\(filename)")
                }
            } else {
                print("Error on taking a picture:\n\(error)")
            }
        }
    }
    captureSession.stopRunning()
    previewLayer!.removeFromSuperlayer()
}
The method to start the AVCaptureSession:
func beginPhotoCaptureSession() {
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice)
        captureSession.addInput(input)
    } catch let error as NSError {
        // Handle any errors:
        print(error)
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.frame = self.view.layer.frame
    self.view.layer.addSublayer(previewLayer!)
    captureSession.startRunning()
    stillImageOutput.outputSettings = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
    if captureSession.canAddOutput(stillImageOutput) {
        captureSession.addOutput(stillImageOutput)
    }
}
As an example, here is a picture taken with the app:
Now here is what I get as the background of the app when it is relaunched:
If it was working correctly, the two pictures would be similar.
I can't see any rotation in the screenshot, but the scale is a problem, and it comes from the way you draw the image in your takePhoto function. You can try:
func takePhoto() {
    if !captureSession.running {
        beginPhotoCaptureSession()
        return
    }
    if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
        stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
            (imageDataSampleBuffer, error) -> Void in
            if error == nil {
                if let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer) {
                    let bitMapName = "BackGroundImage#2x"
                    let filename = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("\(bitMapName).png")
                    data.writeToFile(filename, atomically: true)
                    print("Picture saved: \(bitMapName)\n\(filename)")
                }
            } else {
                print("Error on taking a picture:\n\(error)")
            }
        }
    }
    captureSession.stopRunning()
    previewLayer!.removeFromSuperlayer()
}
For those who might hit the same issue at some point, I post below what I fixed in the code to make it work. There may still be some improvement needed to support all possible orientations, but it's OK for a start.
if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        if error == nil {
            var localImage = UIImage(fromSampleBuffer: imageDataSampleBuffer)
            var imageSize = CGSize(width: UIScreen.mainScreen().bounds.height * UIScreen.mainScreen().scale,
                                   height: UIScreen.mainScreen().bounds.width * UIScreen.mainScreen().scale)
            localImage = resizeImage(localImage!, toSize: imageSize)
            imageSize = CGSize(width: imageSize.height, height: imageSize.width)
            UIGraphicsBeginImageContext(imageSize)
            CGContextRotateCTM(UIGraphicsGetCurrentContext(), CGFloat(M_PI_2))
            localImage!.drawAtPoint(CGPoint(x: 0.0, y: -imageSize.width))
            localImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            if let data = UIImagePNGRepresentation(localImage!) {
                data.writeToFile(self.bmpFilePath, atomically: true)
            }
        } else {
            print("Error on taking a picture:\n\(error)")
        }
    }
}
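A simpler alternative (not part of the original post): UIImagePNGRepresentation writes the underlying bitmap without applying the UIImage's imageOrientation, which is why a portrait capture comes back rotated, whereas the JPEG representation keeps the orientation in its metadata. Instead of rotating the context by hand, you can normalize the image once by redrawing it, since drawInRect applies the orientation for you. A sketch; the normalizedImage helper name is my own:

import UIKit

// Hypothetical helper: redraw the image so its pixel data matches its display orientation.
// After this, UIImagePNGRepresentation produces an upright bitmap.
func normalizedImage(image: UIImage) -> UIImage {
    if image.imageOrientation == .Up {
        return image // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    image.drawInRect(CGRect(origin: CGPointZero, size: image.size))
    let upright = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return upright
}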

Capture Session Starts Up Slow

I am wondering why my capture session starts up slowly when the app launches. This doesn't happen every single time I start the app, so I am not sure whether it is just down to other variables on the actual phone or something else. I am not a very good concurrency/parallel programmer, so it is more than likely my crappy coding :(
I would GREATLY appreciate it if someone could identify what is making it slow sometimes. I have read that all calls on a capture session can be blocking, so I have tried my best to dispatch those calls to another queue without introducing any race conditions. I was learning how to go about coding this way in Swift from here.
Here is my code where I initialize and start everything up (my queues are serial queues):
/**************************************************************************
VIEW DID LOAD
***************************************************************************/
override func viewDidLoad() {
    super.viewDidLoad()
    println("Initializing the cameraCaptureDevice with MediaTypeVideo")
    //------INIT CAMERA CAPTURE DEVICE TO BEGIN WITH------
    self.cameraCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    println("Done initializing camera")
    var error1: NSError? = nil
    println("Getting array of available capture devices")
    //------GRAB ALL OF THE DEVICES------
    let devices = AVCaptureDevice.devices()
    //------FIND THE CAMERA MATCHING THE POSITION------
    for device in devices {
        if device.position == self.cameraCapturePosition {
            self.cameraCaptureDevice = device as? AVCaptureDevice
            println("Back camera has been added")
            self.usingBackCamera = true
        }
    }
    //------ INIT MOVIE FILE OUTPUT ------
    self.movieFileOutput = AVCaptureMovieFileOutput()
    //------SET UP PREVIEW LAYER-----
    self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
    if let preview = self.videoPreviewLayer {
        println("Video Preview Layer set")
        preview.videoGravity = AVLayerVideoGravityResizeAspectFill
    } else {
        println("Video Preview Layer is nil!!! Could not set AVLayerVideoGravityResizeAspectFill")
    }
    println("Camera successully can display")
    //------SET JPEG OUTPUT------
    println("Setting JPEG Output")
    self.stillImageOutput = AVCaptureStillImageOutput()
    let outputSettings = [ AVVideoCodecKey : AVVideoCodecJPEG ]
    if let imageOutput = self.stillImageOutput {
        imageOutput.outputSettings = outputSettings
    } else {
        println("still image output is nil, could notset output settings")
    }
    println("Successfully configured JPEG Ouput")
    //------SET MOVIE FILE OUPUT MAX DURATION AND MIN FREE DISK SPACE------
    println("Setting Movie File Max Duration")
    let maxDuration: CMTime = CMTimeMakeWithSeconds(self.totalTime, self.preferredTimeScale)
    if let movieOutput = self.movieFileOutput {
        movieOutput.maxRecordedDuration = maxDuration
        println("Successully set movie file max duration")
        println("Setting movie file minimun byte space")
        movieOutput.minFreeDiskSpaceLimit = self.minFreeSpace
        println("Successfully added minium free space")
    } else {
        println("Movie file output is nil, could not set maximum recording duration or minimum free space")
    }
    //------ GRAB THE DEVICE'S SUPPORTED FRAME RATE RANGES ------
    if let device = self.cameraCaptureDevice {
        println("Setting frame rates")
        let supportedFrameRateRanges = device.activeFormat.videoSupportedFrameRateRanges
        for range in supportedFrameRateRanges {
            // Workaround until finding a better way
            // frame rate should be 1 - 30
            if (range.minFrameRate >= 1 || range.minFrameRate <= 30) == true && (range.maxFrameRate <= 30 || range.maxFrameRate >= 1) == true {
                println("Frame rate is supported")
                self.frameRateSupported = true
            } else {
                println("Frame rate is not supported")
                self.frameRateSupported = false
            }
        }
        var error: NSError?
        if frameRateSupported && device.lockForConfiguration(&error) {
            device.activeVideoMaxFrameDuration = self.frameDuration
            device.activeVideoMinFrameDuration = self.frameDuration
            device.unlockForConfiguration()
            println("SUCCESS")
        } else {
            println("frame rate is not supported or there was an error")
            if let err = error {
                println("There was an error setting framerate: \(err.description)")
            } else {
                println("Frame rate is not supported")
            }
        }
    } else {
        println("camera capture device is nil, could not set frame rate")
    }
    //------ INIT AUDIO CAPTURE DEVICE ------
    self.audioCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
    var error2: NSError? = nil
    let audioDeviceInput = AVCaptureDeviceInput(device: self.audioCaptureDevice, error: &error2)
    //------ADD CAMERA CAPTURE DEVICE TO CAPTURE SESSION INPUT------
    if let captureDevice = self.cameraCaptureDevice {
        if error1 == nil {
            println("Trying to add video input")
            self.videoDeviceInput = AVCaptureDeviceInput(device: captureDevice, error: &error1)
        } else {
            println("Could not create video input")
        }
    } else {
        println("Could not create camera capture device")
    }
    //------ ADD INPUTS AND OUTPUTS AS WELL AS OTHER SESSION CONFIGURATIONS------
    dispatch_async(self.sessionQueue) {
        println("Trying to add audio output")
        if let input = audioDeviceInput {
            self.session.addInput(audioDeviceInput)
            println("Successfully added audio output")
        } else {
            println("Could not create audio input")
        }
        if self.session.canAddInput(self.videoDeviceInput) {
            self.session.addInput(self.videoDeviceInput)
            println("Successfully added video input")
        } else {
            println("Could not add video input")
        }
        println("initializing video capture session")
        //----- SET THE IMAGE QUALITY / RESOLUTION -----
        // Options:
        //   AVCaptureSessionPresetHigh - Highest recording quality (varies per device)
        //   AVCaptureSessionPresetMedium - Suitable for WiFi sharing (actual values may change)
        //   AVCaptureSessionPresetLow - Suitable for 3G sharing (actual values may change)
        //   AVCaptureSessionPreset640x480 - 640x480 VGA (check its supported before setting it)
        //   AVCaptureSessionPreset1280x720 - 1280x720 720p HD (check its supported before setting it)
        //   AVCaptureSessionPresetPhoto - Full photo resolution (not supported for video output)
        if self.session.canSetSessionPreset(AVCaptureSessionPresetHigh) {
            println("Capture Session preset is set to High Quality")
            self.session.sessionPreset = AVCaptureSessionPresetHigh
        } else {
            println("Capture Session preset is set to Medium Quality")
            self.session.sessionPreset = AVCaptureSessionPresetMedium
        }
        //------ADD JPEG OUTPUT AND MOVIE FILE OUTPUT TO SESSION OUTPUT------
        println("Adding still image and movie file output")
        if self.session.canAddOutput(self.stillImageOutput) && self.session.canAddOutput(self.movieFileOutput) {
            self.session.addOutput(self.stillImageOutput)
            self.session.addOutput(self.movieFileOutput)
            println("Successfully added outputs")
        } else {
            //------ IF OUTPUTS COULD NOT BE ADDED, THEN APP SHOULD NOT RUN ON DEVICE!!!!! ------
            println("Could Not Add still image and movie file output")
        }
        //------WE CALL A METHOD AS IT ALSO HAS TO BE DONE AFTER CHANGING CAMERA------
        self.setCameraOutputProperties()
        //------DISPLAY PREVIEW LAYER------
        if let videoLayer = self.videoPreviewLayer {
            self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer)
            println("Video Preview Layer Added as sublayer")
            self.videoPreviewLayer!.frame = self.videoPreviewView.layer.frame
            println("Video Preview frame set")
        } else {
            println("videoPreviewLayer is nil, could not add sublayer or set frame")
        }
        self.view.sendSubviewToBack(self.videoPreviewView)
    }
}
/**************************************************************************
VIEW DID APPEAR
***************************************************************************/
override func viewDidAppear(animated: Bool) {
    println("About to start the capture session")
    //------INITIALIZE THE CAMERA------
    dispatch_async(self.startSessionQueue) {
        if self.beenHereBefore == false {
            println("Have not seen this view before.... starting the session")
            //------ START THE PREVIEW SESSION ------
            self.startSession()
            /*
            CHECK TO MAKE SURE THAT THIS CODE IS REALLY NEEDED FOR AUTHORIZATION
            */
            // ----- SET MEDIA TYPE ------
            var mediaTypeVideo = AVMediaTypeVideo
            AVCaptureDevice.requestAccessForMediaType(mediaTypeVideo, completionHandler: { (granted) -> Void in
                //------ GRANTED ACCESS TO MEDIATYPE ------
                if granted {
                    self.deviceAuthorized = AVAuthorizationStatus.Authorized
                }
                //------ NOT GRANTED ACCESS TO MEDIATYPE ------
                else {
                    dispatch_async(dispatch_get_main_queue()) {
                        UIAlertView(title: "CopWatch", message: "CopWatch does not have permission to use the camera, please change your privacy settings.", delegate: self, cancelButtonTitle: "OK")
                        self.deviceAuthorized = AVAuthorizationStatus.Denied
                        dispatch_resume(dispatch_get_main_queue())
                    }
                }
            })
        } else {
            println("Been Here Before")
            self.session.startRunning()
        }
        self.weAreRecording = false
    }
}
and here is the method that starts the video preview
/**************************************************************************
START SESSION
**************************************************************************/
func startSession() {
    println("Checking to see if the session is already running before starting the session")
    //------ START SESSION IF IT IS NOT ALREADY RUNNING------
    if !self.session.running {
        //------START CAMERA------
        println("Session is not already running, starting the session now")
        self.session.startRunning()
        self.isSessionRunning = true
        println("Capture Session initiated")
    } else {
        println("Session is already running, no need to start it again")
    }
}
It seems that I have found the answer.
I was adding the videoPreviewLayer as a sublayer and sending the preview view to the back of the view hierarchy inside the asynchronous dispatch call. Apparently the application did not like this, and it caused things to be very, very slow to start up.
I moved this code
//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
    self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer)
    println("Video Preview Layer Added as sublayer")
    self.videoPreviewLayer!.frame = self.videoPreviewView.layer.frame
    println("Video Preview frame set")
} else {
    println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
self.view.sendSubviewToBack(self.videoPreviewView)
up to here like this:
//------SET UP PREVIEW LAYER-----
self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
if let preview = self.videoPreviewLayer {
    println("Video Preview Layer set")
    preview.videoGravity = AVLayerVideoGravityResizeAspectFill
} else {
    println("Video Preview Layer is nil!!! Could not set AVLayerVideoGravityResizeAspectFill")
}
println("Camera successully can display")
//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
    self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer)
    println("Video Preview Layer Added as sublayer")
    self.videoPreviewLayer!.frame = self.videoPreviewView.layer.frame
    println("Video Preview frame set")
    self.view.sendSubviewToBack(self.videoPreviewView)
} else {
    println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
I should have been able to see this issue, but I guess this is what happens when you optimize at the wrong times. Now it is pretty responsive.
Moral of the story: if you are programming with AVFoundation, don't set up and attach your video preview layer to the current view controller's view from an asynchronous queue; do that work on the main queue.
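As a general pattern (my own sketch, not part of the original answer; the queue label is illustrative): keep the blocking session configuration and startRunning calls on a serial background queue, and hop back to the main queue for any layer or view work:

// A serial queue for session work; UI work stays on the main queue.
// Property names (session, videoPreviewLayer, videoPreviewView) follow the question.
let sessionQueue = dispatch_queue_create("com.example.sessionQueue", DISPATCH_QUEUE_SERIAL)

dispatch_async(sessionQueue) {
    // Blocking calls: configure inputs/outputs and start the session off the main thread.
    self.session.beginConfiguration()
    // ...addInput / addOutput / sessionPreset as in viewDidLoad...
    self.session.commitConfiguration()
    self.session.startRunning()

    dispatch_async(dispatch_get_main_queue()) {
        // Layer and view work belongs on the main queue.
        self.videoPreviewView.layer.addSublayer(self.videoPreviewLayer!)
        self.videoPreviewLayer!.frame = self.videoPreviewView.layer.bounds
        self.view.sendSubviewToBack(self.videoPreviewView)
    }
}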
