Swift - Realtime images from cam to server - iOS

I hope someone can help me!
I am making an app that sends camera frames to a server, and the server does some processing on them. The app sends 5-8 images per second (as NSData).
I have tried two different ways to do this; both work, but each has a different problem.
I will explain both situations, and maybe someone can help me.
The first approach uses AVCaptureVideoDataOutput.
Code below:
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetiFrame960x540
captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
output.setSampleBufferDelegate(self, queue: cameraQueue)
captureSession.addOutput(output)
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
viewPreview?.layer.addSublayer(videoPreviewLayer!)
captureSession.startRunning()
This view controller conforms to:
AVCaptureMetadataOutputObjectsDelegate
AVCaptureVideoDataOutputSampleBufferDelegate
and implements the delegate method:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef!, fromConnection connection: AVCaptureConnection!)
{
let imagen: UIImage = imageFromSampleBuffer(sampleBuffer)
let dataImg: NSData = UIImageJPEGRepresentation(imagen, 1.0)
//Here I send the NSData to server correctly.
}
This calls imageFromSampleBuffer, which converts the sample buffer to a UIImage:
func imageFromSampleBuffer(sampleBuffer :CMSampleBufferRef) -> UIImage {
let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
CVPixelBufferLockBaseAddress(imageBuffer, 0)
let baseAddress: UnsafeMutablePointer<Void> = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, Int(0))
let bytesPerRow: Int = CVPixelBufferGetBytesPerRow(imageBuffer)
let width: Int = CVPixelBufferGetWidth(imageBuffer)
let height: Int = CVPixelBufferGetHeight(imageBuffer)
let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
let bitsPerComponent: Int = 8
var bitmapInfo = CGBitmapInfo((CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue) as UInt32)
let newContext: CGContextRef = CGBitmapContextCreate(baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) as CGContextRef
let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)
// The pixel buffer was locked above but never unlocked; unlock it once the CGImage has been created.
CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)!
return resultImage
}
That is the end of the first approach. The problem is "infinite memory use": the app crashes after about two minutes.
When I debug, the problem is in the UIImageJPEGRepresentation(imagen, 1.0) call. Is there any way to release that memory after using the method?
The second way (and I think the best one I found) uses AVCaptureStillImageOutput.
Code below:
var stillImageOutput: AVCaptureStillImageOutput = AVCaptureStillImageOutput()
if session.canAddOutput(stillImageOutput){
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
session.addOutput(stillImageOutput)
self.stillImageOutput = stillImageOutput
}
var timer = NSTimer.scheduledTimerWithTimeInterval(0.2, target: self, selector: Selector("methodToBeCalled"), userInfo: nil, repeats: true)
func methodToBeCalled(){
dispatch_async(self.sessionQueue!, {
// Update the orientation on the still image output video connection before capturing.
let videoOrientation = (self.previewView.layer as! AVCaptureVideoPreviewLayer).connection.videoOrientation
self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo).videoOrientation = videoOrientation
self.stillImageOutput!.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo), completionHandler: {
(imageDataSampleBuffer: CMSampleBuffer!, error: NSError!) in
if error == nil {
let dataImg: NSData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
//Here I send the NSData to server correctly.
}else{println(error)}
})
})
}
This works perfectly and without memory leaks, but when the app takes the screenshot (the still image), the phone plays the typical "photo taken" shutter sound, and I cannot allow that. Is there any way to do this without the sound?
If someone needs the full code, I can share the links where I found it.
Thanks a lot!

Did you ever manage to solve this problem yourself?
I stumbled upon this question because I am converting an Objective-C project with an AVCaptureSession to Swift. What my code does differently is discard late frames on the AVCaptureVideoDataOutput; perhaps this is what is causing your memory problem.
output.alwaysDiscardsLateVideoFrames = true
Insert this line right after you define the video data output and before you create the queue:
let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
output.alwaysDiscardsLateVideoFrames = true
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
I am, of course, referring to the first of your two solutions.
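Another thing commonly suggested for this memory pattern (a sketch, not from the original posts; sendToServer stands in for whatever upload code is already there) is wrapping the per-frame work in an explicit autoreleasepool, so the temporary UIImage and NSData are drained after every frame instead of piling up on the capture queue:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef!, fromConnection connection: AVCaptureConnection!) {
    autoreleasepool {
        // Temporaries created here are released when the pool drains, once per frame.
        let imagen: UIImage = self.imageFromSampleBuffer(sampleBuffer)
        if let dataImg = UIImageJPEGRepresentation(imagen, 1.0) {
            self.sendToServer(dataImg)   // placeholder for the existing upload code
        }
    }
}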

There is no way to prevent users from taking a screenshot. Not even Snapchat can do that.

Related

Memory Crash when detecting object with MLModel

I have made an MLModel in CreateML that will detect hockey pucks in images. I use the camera on the phone to take a video, and while it is being recorded, I convert each frame to a CGImage and try to detect pucks in each frame.
At first, when I received the memory crashes, I tried removing a trajectory detection I was running at the same time, but this made no change. When monitoring memory usage at runtime, my app uses a small and consistent amount of memory; it is "Other processes" that goes over the limit, which is quite confusing. I also removed a for loop that filtered out objects with low confidence (below 0.5), but this did not have an effect either.
Being new to MLModel and machine learning, can anybody steer me in the right direction? Please let me know if any more details are needed or if I missed something. I will attach all of the code because it is only 100 lines or so, and it may be important for context. However, the initializeCaptureSession method and captureOutput method are probably the ones to look at.
import UIKit
import AVFoundation
import ImageIO
import Vision
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
var cameraPreviewLayer: AVCaptureVideoPreviewLayer?
var camera: AVCaptureDevice?
var microphone: AVCaptureDevice?
let session = AVCaptureSession()
var videoDataOutput = AVCaptureVideoDataOutput()
var audioDataOutput = AVCaptureAudioDataOutput()
@IBOutlet var trajectoriesLabel: UILabel!
@IBOutlet var pucksLabel: UILabel!
override func viewDidLoad() {
super.viewDidLoad()
initializeCaptureSession()
// Do any additional setup after loading the view.
}
// Lazily create a single instance of VNDetectTrajectoriesRequest.
private lazy var request: VNDetectTrajectoriesRequest = {
// Build the request first, then configure it; referring to `request` inside its own initializer would recurse.
let request = VNDetectTrajectoriesRequest(frameAnalysisSpacing: .zero, trajectoryLength: 10, completionHandler: completionHandler)
request.objectMinimumNormalizedRadius = 0.0
request.objectMaximumNormalizedRadius = 0.5
return request
}()
// AVCaptureVideoDataOutputSampleBufferDelegate callback.
func captureOutput(_ output: AVCaptureOutput,
didOutput sampleBuffer: CMSampleBuffer,
from connection: AVCaptureConnection) {
// Process the results.
do {
let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else{
print("cannot make pixelbuffer for image conversion")
return
}
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
guard let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else{
print("cannot make context for image conversion")
return
}
guard let cgImage = context.makeImage() else{
print("cannot make cgimage for image conversion")
return
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
let model = try VNCoreMLModel(for: PucksV7(configuration: MLModelConfiguration()).model)
let request = VNCoreMLRequest(model: model)
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try? handler.perform([request])
guard let pucks = request.results as? [VNDetectedObjectObservation] else{
print("Could not convert detected pucks")
return
}
DispatchQueue.main.async {
self.pucksLabel.text = "Pucks: \(pucks.count)"
}
try requestHandler.perform([request])
} catch {
// Handle the error.
}
}
func completionHandler(request: VNRequest, error: Error?) {
//identify results
guard let observations = request.results as? [VNTrajectoryObservation] else { return }
// Process the results; label updates must happen on the main queue.
DispatchQueue.main.async {
self.trajectoriesLabel.text = "Trajectories: \(observations.count)"
}
}
func initializeCaptureSession(){
session.sessionPreset = .hd1920x1080
camera = AVCaptureDevice.default(for: .video)
microphone = AVCaptureDevice.default(for: .audio)
do{
session.beginConfiguration()
//adding camera
let cameraCaptureInput = try AVCaptureDeviceInput(device: camera!)
if session.canAddInput(cameraCaptureInput){
session.addInput(cameraCaptureInput)
}
//output
let queue = DispatchQueue(label: "output")
if session.canAddOutput(videoDataOutput) {
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
videoDataOutput.setSampleBufferDelegate(self, queue: queue)
session.addOutput(videoDataOutput)
}
let captureConnection = videoDataOutput.connection(with: .video)
// Always process the frames
captureConnection?.isEnabled = true
do {
try camera!.lockForConfiguration()
camera!.unlockForConfiguration()
} catch {
print(error)
}
session.commitConfiguration()
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraPreviewLayer?.videoGravity = .resizeAspectFill
cameraPreviewLayer?.frame = view.bounds
cameraPreviewLayer?.connection?.videoOrientation = .landscapeRight
view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
DispatchQueue.global(qos: .background).async {
self.session.startRunning()
}
} catch {
print(error.localizedDescription)
}
}
}
Execution speed: you are dispatching work faster than it can be processed.
In my experience (not on this platform), object detection using a CNN is not fast enough to process every frame from the camera in real time at 30 fps.
With hardware acceleration, like the Apple Neural Engine, it is possible (I have an FPGA on my desk that does this task in real time in "hardware" using 15 watts).
I would suggest processing every 50th frame and speeding it up until it fails.
The other issue is image size. To be performant, the image must be as small as possible while still allowing the feature to be detected.
The larger the input image, the more convolution layers are required. Most models work in the smaller ranges, around 200x200 pixels.
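To illustrate the frame-skipping suggestion concretely (a sketch, not the poster's code; the frameCounter property and the interval of 50 are assumptions):
var frameCounter = 0   // hypothetical counter property on the view controller

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    frameCounter += 1
    // Drop everything except every 50th frame before doing any Vision/Core ML work.
    guard frameCounter % 50 == 0 else { return }
    // ... run the detection request on this sampleBuffer as before ...
}
Creating the VNCoreMLModel once outside the callback, instead of on every frame, also reduces the per-frame cost considerably.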

Realtime zoom from AVCaptureVideoPreviewLayer

I am implementing a camera application. I initiate the camera as follows:
let input = try AVCaptureDeviceInput(device: captureDevice!)
captureSession = AVCaptureSession()
captureSession?.addInput(input)
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
previewView.layer.insertSublayer(videoPreviewLayer!, at: 0)
Now I want to have a small rectangle on top of the preview layer. In that rectangle area, I want to zoom into a specific part of the preview layer. To do this, I add a new UIView on top of the other views, but I don't know how to display a specific area from the preview layer (e.g. with a zoom factor of 2).
The following figure shows what I want to have:
How can I do it?
Finally, I found a solution.
The idea is to extract the real-time frames from the camera output, then use a UIImageView to show the enlarged frame. Here is the portion of code that adds a video output:
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer"))
guard captureSession.canAddOutput(videoOutput) else { return }
captureSession.addOutput(videoOutput)
and we need to implement a delegate function:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
guard let uiImage = imageFromSampleBuffer(sampleBuffer: sampleBuffer) else { return }
DispatchQueue.main.async { [unowned self] in
self.delegate?.captured(image: uiImage)
}
}
// Note: `context` is assumed to be a CIContext created once and kept on the class, e.g. let context = CIContext().
private func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
let ciImage = CIImage(cvPixelBuffer: imageBuffer)
guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
return UIImage(cgImage: cgImage)
}
The code was taken from this article.
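The enlarging step itself is not shown above; one way to sketch it (using the same shared CIContext assumed earlier and Swift 4 naming; the centered crop and the zoomFactor value are illustrative, not part of the original answer):
private func zoomedImage(from sampleBuffer: CMSampleBuffer, zoomFactor: CGFloat = 2.0) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    // Crop the central region whose size is 1/zoomFactor of the full frame.
    let full = ciImage.extent
    let cropRect = CGRect(x: full.midX - full.width / (2 * zoomFactor),
                          y: full.midY - full.height / (2 * zoomFactor),
                          width: full.width / zoomFactor,
                          height: full.height / zoomFactor)
    let cropped = ciImage.cropped(to: cropRect)
    // Rendering the cropped region and showing it in the small UIImageView makes it appear zoomed.
    guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}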

How can I apply filter for each frame of a video in AVCaptureSession?

I am writing an app which needs to apply a filter to video captured using AVCaptureSession. The filtered output is written to an output file. I am currently using CIFilter and CIImage to filter each video frame.
Here is the code:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
...
let pixelBuffer = CMSampleBufferGetImageBuffer(samples)!
let options = [kCVPixelBufferPixelFormatTypeKey as String : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
let cameraImage = CIImage(cvImageBuffer: pixelBuffer, options: options)
let filter = CIFilter(name: "CIGaussianBlur")!
filter.setValue((70.0), forKey: kCIInputRadiusKey)
filter.setValue(cameraImage, forKey: kCIInputImageKey)
let result = filter.outputImage!
var pixBuffer:CVPixelBuffer? = nil;
let fmt = CVPixelBufferGetPixelFormatType(pixelBuffer)
CVPixelBufferCreate(kCFAllocatorSystemDefault,
CVPixelBufferGetWidth(pixelBuffer),
CVPixelBufferGetHeight(pixelBuffer),
fmt,
CVBufferGetAttachments(pixelBuffer, .shouldPropagate),
&pixBuffer);
CVBufferPropagateAttachments(pixelBuffer, pixBuffer!)
let eaglContext = EAGLContext(api: EAGLRenderingAPI.openGLES3)!
eaglContext.isMultiThreaded = true
let contextOptions = [kCIContextWorkingColorSpace : NSNull(), kCIContextOutputColorSpace: NSNull()]
let context = CIContext(eaglContext: eaglContext, options: contextOptions)
CVPixelBufferLockBaseAddress( pixBuffer!, CVPixelBufferLockFlags(rawValue: 0))
context.render(result, to: pixBuffer!)
CVPixelBufferUnlockBaseAddress( pixBuffer!, CVPixelBufferLockFlags(rawValue: 0))
var timeInfo = CMSampleTimingInfo(duration: sampleBuffer.duration,
presentationTimeStamp: sampleBuffer.presentationTimeStamp,
decodeTimeStamp: sampleBuffer.decodeTimeStamp)
var sampleBuf:CMSampleBuffer? = nil;
CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault,
pixBuffer!,
samples.formatDescription!,
&timeInfo,
&sampleBuf)
// write to video file
let ret = assetWriterInput.append(sampleBuf!)
...
}
The ret from AVAssetWriterInput.append is always false. What am I doing wrong here? Also, the approach I am using is very inefficient: a few temporary copies are created along the way. Is it possible to do it in place?
I used almost the same code and had the same problem. As I found out, there was something wrong with the pixel buffer created for rendering. append(sampleBuffer:) was always returning false and assetWriter.error was
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could
not be completed" UserInfo={NSUnderlyingError=0x17024ba30 {Error
Domain=NSOSStatusErrorDomain Code=-12780 "(null)"},
NSLocalizedFailureReason=An unknown error occurred (-12780),
NSLocalizedDescription=The operation could not be completed}
They say this is a bug (as described here), already posted: https://bugreport.apple.com/web/?problemID=34574848.
But unexpectedly I found that the problem goes away when using the original pixel buffer for rendering. See the code below:
let sourcePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let sourceImage = CIImage(cvImageBuffer: sourcePixelBuffer)
let filter = CIFilter(name: "CIGaussianBlur", withInputParameters: [kCIInputImageKey: sourceImage])!
let filteredImage = filter.outputImage!
var pixelBuffer: CVPixelBuffer? = nil
let width = CVPixelBufferGetWidth(sourcePixelBuffer)
let height = CVPixelBufferGetHeight(sourcePixelBuffer)
let pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer)
let attributes = CVBufferGetAttachments(sourcePixelBuffer, .shouldPropagate)!
CVPixelBufferCreate(nil, width, height, pixelFormat, attributes, &pixelBuffer)
CVBufferPropagateAttachments(sourcePixelBuffer, pixelBuffer!)
var filteredPixelBuffer = pixelBuffer! // this never works
filteredPixelBuffer = sourcePixelBuffer // 0_0
let context = CIContext(options: [kCIContextOutputColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!])
context.render(filteredImage, to: filteredPixelBuffer) // modifying original image buffer here!
let presentationTimestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
var timing = CMSampleTimingInfo(duration: kCMTimeInvalid, presentationTimeStamp: presentationTimestamp, decodeTimeStamp: kCMTimeInvalid)
var processedSampleBuffer: CMSampleBuffer? = nil
var formatDescription: CMFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(nil, filteredPixelBuffer, &formatDescription)
CMSampleBufferCreateReadyWithImageBuffer(nil, filteredPixelBuffer, formatDescription!, &timing, &processedSampleBuffer)
print(assetInput!.append(processedSampleBuffer!))
Sure, we all know you are not supposed to modify a sample buffer, but somehow this approach produces normally processed video. The trick is dirty, and I can't say whether it will hold up when you have a preview layer or some concurrent processing routines.
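Not from either poster, but worth noting as a cleaner route around the same error: if the AVAssetWriterInput is wrapped in an AVAssetWriterInputPixelBufferAdaptor created with sourcePixelBufferAttributes, you can render into buffers drawn from its pixelBufferPool instead of allocating a fresh CVPixelBuffer per frame. A rough sketch, where adaptor and ciContext are assumed to already exist:
func appendFiltered(_ filteredImage: CIImage, from sampleBuffer: CMSampleBuffer) {
    // The pool is only available once writing has started.
    guard let pool = adaptor.pixelBufferPool else { return }
    var outputBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &outputBuffer)
    guard let buffer = outputBuffer, adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    ciContext.render(filteredImage, to: buffer)   // draw the filtered frame into the pooled buffer
    adaptor.append(buffer, withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
}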

How to save a live video capture where CIFilter is applied, in Swift 3.0?

I have an imageView and, beneath it, a "record" button. When the app starts, video capture begins (just capture, no saving anywhere) using an AVCaptureSession object. Tapping the record button should start recording (actual saving); tapping it again should stop it.
Plain video capture starts with the code below.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self as AVCaptureVideoDataOutputSampleBufferDelegate, queue: DispatchQueue(label: "sample buffer delegate"))
captureSession.addOutput(videoOutput)
captureSession.startRunning()
Below is the record button action
@IBAction func recordFunc() {
if(startRecording) {
//Initialise the saving video tools
let fileUrl = docsurl.appendingPathComponent("newMovie1.mp4")
//docsurl has the document directory path
videoWriter = try? AVAssetWriter(outputURL: fileUrl, fileType: AVFileTypeQuickTimeMovie)
let outputSettings: [String : Any] = [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : NSNumber(value: Float(outputSize.width)), AVVideoHeightKey : NSNumber(value: Float(outputSize.height))]
writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
writerInput.expectsMediaDataInRealTime = true
videoWriter.add(writerInput)
//Now, the killer line for saving the video
adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: nil)
videoWriter.startWriting()
videoWriter.startSession(atSourceTime: kCMTimeZero)
}
else {
//Video saving is complete !!
self.writerInput.markAsFinished()
self.videoWriter.finishWriting { () -> Void in
print("FINISHED.")
}
}
}
In the above code, adapter (of type AVAssetWriterInputPixelBufferAdaptor) plays a special role in saving the video.
Now, the decisive code block causing the pain is here: the AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput() method.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!){
connection.videoOrientation = AVCaptureVideoOrientation(rawValue: UIApplication.shared.statusBarOrientation.rawValue)!
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
let comicEffect = CIFilter(name: "CIComicEffect")! // The great 'CIFilter'
//Below 2 lines calls a function which detects a face and attaches two cartoon eye images to the user's eye positions.
let leftEyeImage = eyeImage(cameraImage: cameraImage!, backgroundImage: cameraImage!, leftEye: true)
let rightEyeImage = eyeImage(cameraImage: cameraImage!, backgroundImage: leftEyeImage, leftEye: false)
comicEffect.setValue(rightEyeImage, forKey: kCIInputImageKey)
let outputImage = comicEffect.value(forKey: kCIOutputImageKey) as! CIImage
let filteredImage = UIImage(ciImage: outputImage)
//Feed the imageView, the UIImage thing 'filteredImage'
DispatchQueue.main.async(){
self.imgViewActual.image = filteredImage
}
//The MAIN line, only responsible for saving is below.
self.adapter.append(somePixelBuffr , withPresentationTime: someTime)
}
So, somePixelBuffr = ??? and someTime = ???
Or is there anything I should add or modify here? Please suggest.
Constraints: Solution in Swift >=3.0, Video capturing should not lag.
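There is no answer to this one in the thread, but for context, what usually fills those two placeholders is a pixel buffer holding the rendered filter output and the source frame's presentation timestamp. A sketch under those assumptions (ciContext is a hypothetical reusable CIContext property; the pixel buffer attributes are illustrative, not taken from the question):
// Inside captureOutput, after `outputImage` (the filtered CIImage) has been produced.
let someTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
var somePixelBuffr: CVPixelBuffer?
let attrs = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(outputImage.extent.width),
                    Int(outputImage.extent.height),
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &somePixelBuffr)
if let buffer = somePixelBuffr, writerInput.isReadyForMoreMediaData {   // also guard this with your recording flag
    ciContext.render(outputImage, to: buffer)   // rasterize the filtered frame into the buffer
    adapter.append(buffer, withPresentationTime: someTime)
}
Because these frames carry real capture timestamps, starting the writer session at the first frame's presentation time rather than kCMTimeZero avoids a long empty lead-in in the saved file.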

Recording video using AVFoundation Swift

I created an AVCaptureSession and manipulate each frame (morphing the user's face, adding layers, etc.). How can I turn those frames into a video that can be saved to the camera roll?
Here's how I set up the AVCaptureSession:
func setupCapture() {
let session : AVCaptureSession = AVCaptureSession()
session.sessionPreset = AVCaptureSessionPreset640x480
let device : AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
let deviceInput : AVCaptureDeviceInput = try! AVCaptureDeviceInput(device: device)
if session.canAddInput(deviceInput) {
session.addInput(deviceInput)
}
stillImageOutput = AVCaptureStillImageOutput()
videoDataOutput = AVCaptureVideoDataOutput()
let rgbOutputSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(unsignedInt: kCMPixelFormat_32BGRA)]
videoDataOutput.videoSettings = rgbOutputSettings
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL)
videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
if session.canAddOutput(videoDataOutput) {
session.addOutput(videoDataOutput)
}
videoDataOutput.connectionWithMediaType(AVMediaTypeVideo).enabled = false
effectiveScale = 1.0
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.backgroundColor = UIColor.blackColor().CGColor
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
let rootLayer : CALayer = previewView.layer
rootLayer.masksToBounds = true
previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(previewLayer)
session.startRunning()
}
Then I use the CMSampleBuffer to get a CIImage, which I add my effects to:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
let pixelBuffer : CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
let attachments : CFDictionaryRef = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, pixelBuffer, CMAttachmentMode( kCMAttachmentMode_ShouldPropagate))!
let ciImage : CIImage = CIImage(CVPixelBuffer: pixelBuffer, options: attachments as? [String : AnyObject])
How can I record video doing this?
I found a solution to your problem. I too am working in Swift and had the same question. I'm thankful you posted this question because it helped me a lot. I was able to successfully process frames and then write them to the camera roll. I have not perfected it, but it is possible.
It turns out that once you start messing around with AVCaptureVideoDataOutput, you allegedly lose the ability to use AVCaptureMovieFileOutput; see the discussion here.
A concise solution to your problem can be found here. This is the version I implemented: feed the sampleBuffer to the AVAssetWriterInput object.
A more verbose guide to solving the problem describes exactly which parts (i.e. AVAssetWriter, AVAssetWriterInput, outSettings, etc.) are needed and what they look like.
The links I posted are in Objective-C but can be translated into Swift. I hope you have already solved your problem.
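Since the linked examples are not reproduced here, a minimal sketch of the pieces that answer refers to, in current Swift naming (the class name, output URL, and 640x480 settings are placeholders, not taken from any of the posts):
import AVFoundation

final class FrameRecorder {
    private let writer: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecH264,
                                       AVVideoWidthKey: 640,
                                       AVVideoHeightKey: 480]
        writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        writerInput.expectsMediaDataInRealTime = true
        writer.add(writerInput)
        writer.startWriting()
    }

    // Call from captureOutput(_:didOutput:from:) with each sample buffer.
    func append(_ sampleBuffer: CMSampleBuffer) {
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if !sessionStarted {
            writer.startSession(atSourceTime: time)
            sessionStarted = true
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }

    func finish(completion: @escaping () -> Void) {
        writerInput.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
Note that appending the raw sample buffers saves the unmodified camera frames; to persist the filtered versions instead, the pixel-buffer-adaptor approach discussed in the previous questions applies.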
