I created an AVCaptureSession and manipulate each frame (morphing the user's face, adding layers, etc.). How can I turn those frames into a video that can be saved to the camera roll?
Here's how I set up the AVCaptureSession:
func setupCapture() {
    let session: AVCaptureSession = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPreset640x480
    let device: AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    let deviceInput: AVCaptureDeviceInput = try! AVCaptureDeviceInput(device: device)
    if session.canAddInput(deviceInput) {
        session.addInput(deviceInput)
    }
    stillImageOutput = AVCaptureStillImageOutput()
    videoDataOutput = AVCaptureVideoDataOutput()
    let rgbOutputSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(unsignedInt: kCMPixelFormat_32BGRA)]
    videoDataOutput.videoSettings = rgbOutputSettings
    videoDataOutput.alwaysDiscardsLateVideoFrames = true
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL)
    videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
    }
    videoDataOutput.connectionWithMediaType(AVMediaTypeVideo).enabled = false
    effectiveScale = 1.0
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.backgroundColor = UIColor.blackColor().CGColor
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
    let rootLayer: CALayer = previewView.layer
    rootLayer.masksToBounds = true
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)
    session.startRunning()
}
Then I use the CMSampleBuffer to get a CIImage, which I add my effects to.
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let pixelBuffer: CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let attachments: CFDictionaryRef = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, pixelBuffer, CMAttachmentMode(kCMAttachmentMode_ShouldPropagate))!
    let ciImage: CIImage = CIImage(CVPixelBuffer: pixelBuffer, options: attachments as? [String: AnyObject])
    // ... apply effects to ciImage here ...
}
How can I record video while doing this?
I found a solution to your problem. I too am working in Swift and had the same question. I'm thankful you posted this question because it helped me a lot. I was able to successfully process frames and then write them to the camera roll. I have not perfected it, but it is possible.
It turns out that once you start using AVCaptureVideoDataOutput, you apparently lose the ability to use AVCaptureMovieFileOutput at the same time; see the discussion here.
A concise solution to your problem can be found here. This is the version I implemented. Feed the sampleBuffer to the AVAssetWriterInput object.
A more detailed guide to solving the problem describes exactly which parts (i.e. AVAssetWriter, AVAssetWriterInput, the output settings, etc.) are needed and what they look like.
The links I posted are in Objective-C but can be translated into Swift. I hope you have already solved your problem.
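To make the approach concrete, here is a minimal sketch of the AVAssetWriter pipeline those links describe. It uses current Swift syntax (the question above is older Swift), and the property names (assetWriter, writerInput, pixelBufferAdaptor, ciContext) and the chosen settings are my own assumptions, not taken from the linked guides:

import AVFoundation
import CoreImage

// Illustrative names; wire these into your own class.
var assetWriter: AVAssetWriter!
var writerInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
let ciContext = CIContext()
var sessionStarted = false

func startRecording(to url: URL, width: Int, height: Int) throws {
    assetWriter = try AVAssetWriter(outputURL: url, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    writerInput.expectsMediaDataInRealTime = true
    pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: writerInput,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: width,
            kCVPixelBufferHeightKey as String: height
        ])
    assetWriter.add(writerInput)
    assetWriter.startWriting()
}

// Call this from captureOutput(...) with your filtered CIImage and the frame's sample buffer.
func append(_ filtered: CIImage, from sampleBuffer: CMSampleBuffer) {
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if !sessionStarted {
        assetWriter.startSession(atSourceTime: time)
        sessionStarted = true
    }
    guard writerInput.isReadyForMoreMediaData,
          let pool = pixelBufferAdaptor.pixelBufferPool else { return }
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    guard let buffer = pixelBuffer else { return }
    ciContext.render(filtered, to: buffer)                        // draw the processed frame
    pixelBufferAdaptor.append(buffer, withPresentationTime: time) // hand it to the writer
}

func stopRecording() {
    writerInput.markAsFinished()
    assetWriter.finishWriting {
        // The .mov file is complete; save it to the camera roll,
        // e.g. with UISaveVideoAtPathToSavedPhotosAlbum or PHPhotoLibrary.
    }
}

Note that the session is started with the first frame's timestamp rather than kCMTimeZero, so the written movie doesn't begin with an empty gap.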
Related
I'm trying to capture camera frames in real time to be processed with Firebase ML Kit. I've successfully displayed the camera view, but I can't seem to get the captureOutput delegate function to be called.
P.S. I'm new to iOS development.
private func startLiveVideo() {
    self.session.sessionPreset = AVCaptureSession.Preset.photo
    let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
    let deviceInput = try! AVCaptureDeviceInput(device: captureDevice!)
    self.session.addInput(deviceInput)

    let deviceOutput = AVCaptureVideoDataOutput()
    deviceOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    deviceOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    self.session.addOutput(AVCaptureVideoDataOutput())

    let imageLayer = AVCaptureVideoPreviewLayer(session: session)
    imageLayer.frame = CGRect(x: 0, y: 0, width: self.imageView.frame.size.width + 100, height: self.imageView.frame.size.height)
    imageLayer.videoGravity = .resizeAspectFill
    imageView.layer.addSublayer(imageLayer)

    self.session.startRunning()
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    print("Frame captured")
}
You set the sample-buffer delegate on this output:
let deviceOutput = AVCaptureVideoDataOutput()
deviceOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
deviceOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
but then add a different, newly created instance to the session here:
self.session.addOutput(AVCaptureVideoDataOutput())
so replace that line with:
self.session.addOutput(deviceOutput)
It worked just fine after converting to Swift 5.
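For completeness, here is roughly what the corrected setup looks like as a whole. This is a sketch in current Swift; the canAddInput/canAddOutput guards are my additions, not part of the original code:

private func startLiveVideo() {
    session.sessionPreset = .photo

    guard let captureDevice = AVCaptureDevice.default(for: .video),
          let deviceInput = try? AVCaptureDeviceInput(device: captureDevice),
          session.canAddInput(deviceInput) else { return }
    session.addInput(deviceInput)

    let deviceOutput = AVCaptureVideoDataOutput()
    deviceOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    deviceOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if session.canAddOutput(deviceOutput) {
        // Add the SAME instance that carries the delegate, not a fresh one.
        session.addOutput(deviceOutput)
    }

    let imageLayer = AVCaptureVideoPreviewLayer(session: session)
    imageLayer.frame = imageView.bounds
    imageLayer.videoGravity = .resizeAspectFill
    imageView.layer.addSublayer(imageLayer)

    session.startRunning()
}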
I am trying to process real-time video from the iPhone camera using the delegate method of AVCaptureVideoDataOutputSampleBufferDelegate.
The video gets filtered, but its orientation comes out wrong and its proportions look distorted.
I use the following code to process the video:
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
    guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
    captureSession.addInput(input)

    let dataOutput = AVCaptureVideoDataOutput()
    dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    captureSession.addOutput(dataOutput)

    let preview = AVCaptureVideoPreviewLayer(session: captureSession)
    preview.frame = cview.frame
    cview.layer.addSublayer(preview)

    captureSession.startRunning()
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: imageBuffer!)
    let comicEffect = CIFilter(name: "CIComicEffect")
    comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)
    let filteredImage = UIImage(ciImage: comicEffect!.value(forKey: kCIOutputImageKey) as! CIImage)
    DispatchQueue.main.async {
        self.image.image = filteredImage
    }
}
And it returns the following output:
To make the picture easier to compare, I removed the comicEffect:
The correct proportions should look like this:
May I know how I should solve this problem?
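No answer was posted here, but one common cause (and the following is only a sketch of a possible fix, not a verified solution for this exact project) is that AVCaptureVideoDataOutput delivers buffers in landscape orientation by default, and the UIImageView then stretches the landscape frame into its portrait bounds. Forcing the connection to portrait and letting the image view preserve the aspect ratio usually addresses both symptoms:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Ask the connection for portrait buffers (the default is landscape).
    if connection.isVideoOrientationSupported && connection.videoOrientation != .portrait {
        connection.videoOrientation = .portrait
    }
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: imageBuffer)

    let comicEffect = CIFilter(name: "CIComicEffect")!
    comicEffect.setValue(cameraImage, forKey: kCIInputImageKey)
    guard let result = comicEffect.outputImage else { return }
    let filteredImage = UIImage(ciImage: result)

    DispatchQueue.main.async {
        // Preserve the aspect ratio instead of stretching to the view's bounds.
        self.image.contentMode = .scaleAspectFill
        self.image.clipsToBounds = true
        self.image.image = filteredImage
    }
}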
I have an imageView and, beneath it, a 'record' button. When the app starts, video capture starts (just capture, no saving anywhere) using an AVCaptureSession object. Tapping the record button should start recording (actual saving); tapping it again should stop.
Plain video capturing starts with the code below.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self as AVCaptureVideoDataOutputSampleBufferDelegate, queue: DispatchQueue(label: "sample buffer delegate"))
captureSession.addOutput(videoOutput)
captureSession.startRunning()
Below is the record button action
@IBAction func recordFunc() {
    if startRecording {
        // Initialise the video saving tools
        let fileUrl = docsurl.appendingPathComponent("newMovie1.mp4")
        // docsurl has the document directory path
        videoWriter = try? AVAssetWriter(outputURL: fileUrl, fileType: AVFileTypeQuickTimeMovie)

        let outputSettings: [String : Any] = [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : NSNumber(value: Float(outputSize.width)), AVVideoHeightKey : NSNumber(value: Float(outputSize.height))]
        writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
        writerInput.expectsMediaDataInRealTime = true
        videoWriter.add(writerInput)

        // Now, the killer line for saving the video
        adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: nil)

        videoWriter.startWriting()
        videoWriter.startSession(atSourceTime: kCMTimeZero)
    }
    else {
        // Video saving is complete!
        self.writerInput.markAsFinished()
        self.videoWriter.finishWriting { () -> Void in
            print("FINISHED.")
        }
    }
}
In the above code, adapter (of type AVAssetWriterInputPixelBufferAdaptor) plays the key role in saving the video.
Now the decisive code block, the one causing the pain, is here: the AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput() method.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    connection.videoOrientation = AVCaptureVideoOrientation(rawValue: UIApplication.shared.statusBarOrientation.rawValue)!

    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

    let comicEffect = CIFilter(name: "CIComicEffect")! // The great 'CIFilter'

    // The two lines below call a function which detects a face and attaches two cartoon eye images at the user's eye positions.
    let leftEyeImage = eyeImage(cameraImage: cameraImage!, backgroundImage: cameraImage!, leftEye: true)
    let rightEyeImage = eyeImage(cameraImage: cameraImage!, backgroundImage: leftEyeImage, leftEye: false)

    comicEffect.setValue(rightEyeImage, forKey: kCIInputImageKey)
    let outputImage = comicEffect.value(forKey: kCIOutputImageKey) as! CIImage
    let filteredImage = UIImage(ciImage: outputImage)

    // Feed the UIImage 'filteredImage' to the imageView
    DispatchQueue.main.async {
        self.imgViewActual.image = filteredImage
    }

    // The MAIN line, the one responsible for saving, is below.
    self.adapter.append(somePixelBuffr, withPresentationTime: someTime)
}
So, somePixelBuffr = ??? and someTime = ???
Or is there anything I should add or modify here? Please suggest.
Constraints: a solution in Swift >= 3.0; video capture should not lag.
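No accepted answer appears here, but the two placeholders map onto existing APIs: someTime is the presentation timestamp the capture pipeline already stamped on the sample buffer, and somePixelBuffr is a CVPixelBuffer you render the filtered CIImage into. A rough sketch follows; my assumptions are a shared CIContext, an adapter created with non-nil sourcePixelBufferAttributes (so it vends a pixel-buffer pool), and startSession(atSourceTime:) called with the first frame's timestamp instead of kCMTimeZero, otherwise the movie starts with a long empty gap:

let ciContext = CIContext() // create once, not per frame

func appendFilteredFrame(_ filtered: CIImage, from sampleBuffer: CMSampleBuffer) {
    // someTime: the frame's own timestamp from the capture pipeline.
    let someTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

    guard writerInput.isReadyForMoreMediaData,
          let pool = adapter.pixelBufferPool else { return }

    // somePixelBuffr: a buffer from the adaptor's pool, filled with the filtered image.
    var somePixelBuffr: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &somePixelBuffr)
    guard let pixelBuffer = somePixelBuffr else { return }
    ciContext.render(filtered, to: pixelBuffer)

    adapter.append(pixelBuffer, withPresentationTime: someTime)
}

Here `filtered` would be the CIImage produced by the comic filter (outputImage), not the UIImage you hand to the image view.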
I'm trying to build an app which will capture frames from the camera and process them with OpenCV before saving those files to the device, but at a specific frame rate.
What I'm stuck on at the moment is the fact that AVCaptureVideoDataOutputSampleBufferDelegate doesn't appear to respect the AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration settings.
captureOutput is called far more often than the 2 frames per second the settings above should produce.
Do you happen to know how one could achieve this, with or without the delegate?
ViewController:
override func viewDidLoad() {
    super.viewDidLoad()
}

override func viewDidAppear(animated: Bool) {
    setupCaptureSession()
}

func setupCaptureSession() {
    let session: AVCaptureSession = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPreset1280x720

    let videoDevices: [AVCaptureDevice] = AVCaptureDevice.devices() as! [AVCaptureDevice]
    for device in videoDevices {
        if device.position == AVCaptureDevicePosition.Back {
            let captureDevice: AVCaptureDevice = device
            do {
                try captureDevice.lockForConfiguration()
                captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 2)
                captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 2)
                captureDevice.unlockForConfiguration()

                let input: AVCaptureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
                if session.canAddInput(input) {
                    try session.addInput(input)
                }

                let output: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
                let dispatch_queue: dispatch_queue_t = dispatch_queue_create("streamoutput", nil)
                output.setSampleBufferDelegate(self, queue: dispatch_queue)
                session.addOutput(output)

                session.startRunning()

                let previewLayer = AVCaptureVideoPreviewLayer(session: session)
                previewLayer.connection.videoOrientation = .LandscapeRight

                let previewBounds: CGRect = CGRectMake(0, 0, self.view.frame.width/2, self.view.frame.height+20)
                previewLayer.backgroundColor = UIColor.blackColor().CGColor
                previewLayer.frame = previewBounds
                previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                self.imageView.layer.addSublayer(previewLayer)

                self.previewMat.frame = CGRectMake(previewBounds.width, 0, previewBounds.width, previewBounds.height)
            } catch _ {
            }
            break
        }
    }
}

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    self.wrapper.processBuffer(self.getUiImageFromBuffer(sampleBuffer), self.previewMat)
}
So I've figured out the problem.
In the header comments of AVCaptureDevice.h, above the activeVideoMinFrameDuration property, it states:
On iOS, the receiver's activeVideoMinFrameDuration resets to its default value under the following conditions:
- The receiver's activeFormat changes
- The receiver's AVCaptureDeviceInput's session's sessionPreset changes
- The receiver's AVCaptureDeviceInput is added to a session
The last bullet point was causing my problem, so doing the following solved the problem for me:
do {
    let input: AVCaptureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
    if session.canAddInput(input) {
        try session.addInput(input)
    }

    // Configure the frame rate AFTER adding the input to the session.
    try captureDevice.lockForConfiguration()
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 2)
    captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 2)
    captureDevice.unlockForConfiguration()

    let output: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
    let dispatch_queue: dispatch_queue_t = dispatch_queue_create("streamoutput", nil)
    output.setSampleBufferDelegate(self, queue: dispatch_queue)
    session.addOutput(output)
I hope someone can help me!
I am making an app that sends frames from the camera to a server, and the server does some processing. The app sends 5-8 images per second (in NSData format).
I have tried different ways to do that; the two methods below work but have different problems.
I will explain those situations, and maybe someone can help me.
The first approach I tried uses AVCaptureVideoDataOutput.
Code below:
let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetiFrame960x540
captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))

let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
output.setSampleBufferDelegate(self, queue: cameraQueue)
captureSession.addOutput(output)

videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
viewPreview?.layer.addSublayer(videoPreviewLayer)
captureSession.startRunning()
This view controller adopts the following delegate protocols:
AVCaptureMetadataOutputObjectsDelegate
AVCaptureVideoDataOutputSampleBufferDelegate
and implements the delegate method:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef!, fromConnection connection: AVCaptureConnection!) {
    let imagen: UIImage = imageFromSampleBuffer(sampleBuffer)
    let dataImg: NSData = UIImageJPEGRepresentation(imagen, 1.0)
    // Here I send the NSData to the server correctly.
}
This calls imageFromSampleBuffer, which converts the sample buffer to a UIImage:
func imageFromSampleBuffer(sampleBuffer: CMSampleBufferRef) -> UIImage {
    let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    let baseAddress: UnsafeMutablePointer<Void> = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, Int(0))
    let bytesPerRow: Int = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width: Int = CVPixelBufferGetWidth(imageBuffer)
    let height: Int = CVPixelBufferGetHeight(imageBuffer)

    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent: Int = 8
    var bitmapInfo = CGBitmapInfo((CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue) as UInt32)
    let newContext: CGContextRef = CGBitmapContextCreate(baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) as CGContextRef
    let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)

    // Balance the earlier lock (missing in the original snippet).
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)!
    return resultImage
}
That's the end of the first approach. The problem is runaway memory use, and the app crashes after about 2 minutes.
I debugged it and the problem is in the UIImageJPEGRepresentation(imagen, 1.0) call. Is there any way to release the memory after using that method?
The second (and, I think, the best) way I found is to use AVCaptureStillImageOutput.
Code below:
var stillImageOutput: AVCaptureStillImageOutput = AVCaptureStillImageOutput()
if session.canAddOutput(stillImageOutput) {
    stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
    session.addOutput(stillImageOutput)
    self.stillImageOutput = stillImageOutput
}

var timer = NSTimer.scheduledTimerWithTimeInterval(0.2, target: self, selector: Selector("methodToBeCalled"), userInfo: nil, repeats: true)

func methodToBeCalled() {
    dispatch_async(self.sessionQueue!, {
        // Update the orientation on the still image output video connection before capturing.
        let videoOrientation = (self.previewView.layer as! AVCaptureVideoPreviewLayer).connection.videoOrientation
        self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo).videoOrientation = videoOrientation

        self.stillImageOutput!.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo), completionHandler: {
            (imageDataSampleBuffer: CMSampleBuffer!, error: NSError!) in
            if error == nil {
                let dataImg: NSData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                // Here I send the NSData to the server correctly.
            } else {
                println(error)
            }
        })
    })
}
This works perfectly and without memory leaks, but when the app takes a screenshot, the phone makes the typical 'take a photo' sound, and I cannot allow it. Is there any way to do this without making the sound?
If someone needs the code, I can share the links where I found it.
Thanks a lot!
Did you ever manage to solve this problem yourself?
I stumbled upon this question because I am converting an Objective-C project with an AVCaptureSession to Swift. What my code does differently is discard late frames in the AVCaptureVideoDataOutput; perhaps this is what's causing your memory problem.
output.alwaysDiscardsLateVideoFrames = true
Insert this line right after you define the video data output and before you create the queue:
let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
output.alwaysDiscardsLateVideoFrames = true
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
I am, of course, referring to the first of your two solutions.
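If that alone doesn't curb the memory growth, another thing worth trying (a sketch, not something I have verified against this exact project) is wrapping the per-frame work in an explicit autoreleasepool. The delegate runs on a background dispatch queue, so autoreleased objects such as the per-frame UIImage and JPEG data can accumulate until the queue drains. Shown here in current Swift syntax, reusing your imageFromSampleBuffer helper:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    autoreleasepool {
        let image = imageFromSampleBuffer(sampleBuffer)
        // jpegData(compressionQuality:) is the modern name for UIImageJPEGRepresentation.
        if let data = image.jpegData(compressionQuality: 1.0) {
            // Send data to the server here.
        }
    } // everything autoreleased for this frame is drained here, once per frame
}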
There is no way to disallow users from taking a screenshot. Not even Snapchat can do that.