CMSampleBuffer from AVCaptureVideoDataOutput unexpectedly found nil - iOS

I am trying to convert the frames I'm getting from the AVCaptureVideoDataOutput delegate (as CMSampleBuffer) to UIImage. However, I'm getting a fatal error: unexpectedly found nil while unwrapping an Optional value. Can someone tell me what is wrong with my code? I am assuming that there is something wrong with my sampleBufferToUIImage function.
Function to convert CMSampleBuffer to UIImage:
func sampleBufferToUIImage(sampleBuffer: CMSampleBuffer) -> UIImage {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
    let width = CVPixelBufferGetWidth(imageBuffer!)
    let height = CVPixelBufferGetHeight(imageBuffer!)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)

    // *********Getting the error from this line***********
    let quartzImage = context!.makeImage()

    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let image = UIImage(cgImage: quartzImage!)
    return image
}
Delegate where I'm reading frames:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if count <= 0 {
        // Calling my function to convert to UIImage.
        let image = sampleBufferToUIImage(sampleBuffer: sampleBuffer)
        let imageData = UIImagePNGRepresentation(image)
        uploadImage(jpgData: imageData)
    }
    count = count + 1
}
Setting up the AVCaptureSession:
func setupCameraSession() {
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
    // Declare AVCaptureDevice to the default (back camera). The "as" cast removes the optional.
    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice
    do {
        let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        if captureSession.canAddInput(deviceInput) {
            captureSession.addInput(deviceInput)
        }
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        if captureSession.canAddOutput(dataOutput) {
            captureSession.addOutput(dataOutput)
        }
    } catch {
    }
}

Try this; it works for me on Swift 3:
// Sample buffer handling delegate function
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
    videoImage = UIImage(ciImage: myCIimage)
    // This delegate runs on the capture queue; UIKit updates belong on main
    DispatchQueue.main.async {
        self.uIimage.image = self.videoImage
    }
}
// AV Session
func startVideoDisplay() {
    cameraCaptureSession.beginConfiguration()
    do {
        let tryDeviceInput = try AVCaptureDeviceInput(device: cameraDevice)
        cameraCaptureSession.addInput(tryDeviceInput)
    } catch { print(error.localizedDescription) }

    caViewLayer = AVCaptureVideoPreviewLayer(session: cameraCaptureSession)
    view.layer.addSublayer(caViewLayer)

    let myQueue = DispatchQueue(label: "se.paredes.FunAV", qos: .userInteractive, attributes: .concurrent)
    let theOutput = AVCaptureVideoDataOutput()
    theOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)]
    theOutput.alwaysDiscardsLateVideoFrames = true
    theOutput.setSampleBufferDelegate(self, queue: myQueue)
    if cameraCaptureSession.canAddOutput(theOutput) {
        cameraCaptureSession.addOutput(theOutput)
    }
    cameraCaptureSession.commitConfiguration()
    // Start the session only after the configuration is committed
    cameraCaptureSession.startRunning()
}

AVCaptureVideoDataOutput's video setting was incorrect. The bitmap context in sampleBufferToUIImage is built for packed 32-bit BGRA pixels, but the output was configured to deliver bi-planar YCbCr frames, so makeImage() returns nil. Change kCVPixelFormatType_420YpCbCr8BiPlanarFullRange to kCVPixelFormatType_32BGRA.
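For reference, a minimal sketch of the fix in setupCameraSession(), reusing the question's own names:

// Deliver packed BGRA frames, which is what the CGContext in
// sampleBufferToUIImage expects (32-bit, little-endian, alpha first)
dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]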

Related

CMSampleBuffer automatically rotate every time

Here I try to select a video from the photo library and read it frame by frame as sample buffers, so that I can crop or rotate later. But the problem is that the CMSampleBuffer comes out rotated by default. The variables I use for initialization are:
var asset:AVAsset! //load asset from url
var assetReader:AVAssetReader!
var assetVideoOutput: AVAssetReaderTrackOutput! // add assetreader for video
var assetAudioOutput: AVAssetReaderTrackOutput! // add assetreader for audio
var readQueue: DispatchQueue!
The settings for the previous variables look like this:
func resetRendering() {
    do {
        assetReader = try AVAssetReader(asset: asset)
    } catch {
        print(error)
    }
    var tracks = asset.tracks(withMediaType: .video)
    var track: AVAssetTrack!
    if tracks.count > 0 {
        track = tracks[0]
    }
    let decompressionVideoSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    assetVideoOutput = AVAssetReaderTrackOutput(track: track, outputSettings: decompressionVideoSettings)
    tracks = asset.tracks(withMediaType: .audio)
    if tracks.count > 0 {
        track = tracks[0]
    }
    let audioReadSettings = [AVFormatIDKey: kAudioFormatLinearPCM]
    assetAudioOutput = AVAssetReaderTrackOutput(track: track, outputSettings: audioReadSettings)
    if assetAudioOutput.responds(to: #selector(getter: AVAssetReaderOutput.alwaysCopiesSampleData)) {
        assetAudioOutput.alwaysCopiesSampleData = false
    }
    if assetReader.canAdd(assetAudioOutput) {
        assetReader.add(assetAudioOutput)
    }
    if assetVideoOutput.responds(to: #selector(getter: AVAssetReaderOutput.alwaysCopiesSampleData)) {
        assetVideoOutput.alwaysCopiesSampleData = false
    }
    if assetReader.canAdd(assetVideoOutput) {
        assetReader.add(assetVideoOutput)
    }
}
Now, when I try to read the video frame by frame and convert each frame into an image, the frame is automatically rotated by 90 degrees. The conversion extension from sample buffer to UIImage looks like this:
extension CMSampleBuffer {
    var uiImage: UIImage? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        // Unlock on every exit path, including the guard failures below
        defer { CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0)) }
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
        guard let context = CGContext(data: baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo.rawValue) else { return nil }
        guard let cgImage = context.makeImage() else { return nil }
        return UIImage(cgImage: cgImage)
    }
}
(Screenshots omitted: the resulting photo comes out rotated 90 degrees relative to the actual photo.)
Update
CGImage doesn't carry an orientation property, so you have to set the orientation manually when converting to UIImage. We read the orientation from the video track and apply it to the UIImage.
Note that the right/left or up/down cases below might be swapped for your footage; please try it and check.
// Get the orientation once, outside of per-frame image processing, to avoid redundant work
let videoTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first!
let videoSize = videoTrack.naturalSize
let transform = videoTrack.preferredTransform
var orientation = UIImage.Orientation.right
switch (transform.tx, transform.ty) {
case (0, 0): orientation = .up
case (videoSize.width, videoSize.height): orientation = .right
case (0, videoSize.width): orientation = .left
default: orientation = .down
}

// Then apply the orientation during image processing, e.g. in captureOutput
let uiImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
Hope this helps.
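If you convert frames in more than one place, the switch packages naturally into a small helper (a sketch using the same mapping as above; as noted, the cases may need swapping for your footage):

func imageOrientation(for track: AVAssetTrack) -> UIImage.Orientation {
    let size = track.naturalSize
    let transform = track.preferredTransform
    switch (transform.tx, transform.ty) {
    case (0, 0): return .up
    case (size.width, size.height): return .right
    case (0, size.width): return .left
    default: return .down
    }
}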

Google Face Detection crashing when converting to image and trying to detect face

I am creating a custom camera with filters. It crashes when I add the following code, without showing any exception.
// Setting video output
func setupBuffer() {
    videoBuffer = AVCaptureVideoDataOutput()
    videoBuffer?.alwaysDiscardsLateVideoFrames = true
    videoBuffer?.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32RGBA)]
    videoBuffer?.setSampleBufferDelegate(self, queue: DispatchQueue.main)
    captureSession?.addOutput(videoBuffer)
}
public func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if connection.videoOrientation != .portrait {
        connection.videoOrientation = .portrait
    }
    guard let image = GMVUtility.sampleBufferTo32RGBA(sampleBuffer) else {
        print("No Image 😂")
        return
    }
    pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate) as! [String: Any]?)
    CameraView.filter = CIFilter(name: "CIPhotoEffectProcess")
    CameraView.filter?.setValue(ciImage, forKey: kCIInputImageKey)
    let cgimg = CameraView.context.createCGImage(CameraView.filter!.outputImage!, from: ciImage.extent)
    DispatchQueue.main.async {
        self.preview.image = UIImage(cgImage: cgimg!)
    }
}
But it's crashing on:
guard let image = GMVUtility.sampleBufferTo32RGBA(sampleBuffer) else {
    print("No Image 😂")
    return
}
When I pass an image created from a CIImage, it doesn't recognize the face in the image.
The complete code file is at https://www.dropbox.com/s/y1ewd1sh18h3ezj/CameraView.swift.zip?dl=0
1) Create a separate queue for the buffer.
fileprivate var videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
2) Set up the buffer with this (note the pixel format is 32BGRA, not 32RGBA):
videoBuffer = AVCaptureVideoDataOutput()
videoBuffer?.alwaysDiscardsLateVideoFrames = true
videoBuffer?.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)]
videoBuffer?.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
captureSession?.addOutput(videoBuffer)
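The underlying issue, as far as I can tell: AVCaptureVideoDataOutput on iOS only delivers 32BGRA and the two bi-planar 420 YpCbCr formats, so requesting kCVPixelFormatType_32RGBA gives GMVUtility nothing usable. A quick sketch to verify what your device's output actually supports:

// Print the pixel formats this output can deliver on the current device
let output = AVCaptureVideoDataOutput()
for case let format as NSNumber in output.availableVideoCVPixelFormatTypes {
    print(String(format: "supported pixel format: 0x%08x", format.uint32Value))
}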

How to set AVCaptureSession.sessionPreset to 720x1280px (or a 1/1.77 scale of anything)

I am using Swift 3 and can't change my resolution to custom values. When I use AVCaptureSessionPresetMedium etc., it doesn't fit the screen scale (1/1.77).
let output = AVCaptureVideoDataOutput()
output.setSampleBufferDelegate(self, queue: sampleQueue)
let metaOutput = AVCaptureMetadataOutput()
metaOutput.setMetadataObjectsDelegate(self, queue: faceQueue)
session.beginConfiguration()
// Desired resolution: 720x1280px
// session.sessionPreset = AVCaptureSessionPresetMedium
if session.canAddInput(input) {
    session.addInput(input)
}
if session.canAddOutput(output) {
    output.alwaysDiscardsLateVideoFrames = true
    session.addOutput(output)
    connection1 = output.connection(withMediaType: AVMediaTypeVideo)
    connection1?.preferredVideoStabilizationMode = AVCaptureVideoStabilizationMode.auto
    connection1?.videoOrientation = .portrait
    connection1?.isVideoMirrored = true
}
if session.canAddOutput(metaOutput) {
    output.alwaysDiscardsLateVideoFrames = true
    session.addOutput(metaOutput)
    connection2 = metaOutput.connection(withMediaType: AVMediaTypeMetadata)
    connection2?.preferredVideoStabilizationMode = AVCaptureVideoStabilizationMode.auto
    connection2?.videoOrientation = .portrait
    connection2?.isVideoMirrored = true
}
You should use the AVCaptureSessionPreset1280x720 preset. The presets are denoted in landscape, but a 1280x720 capture is the same as 720x1280; the only difference is the orientation. For example, with an app that supports rotation:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
{
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
    print(cameraImage?.extent ?? "")
}
This will print (0.0, 0.0, 1280.0, 720.0) when in landscape, and (0.0, 0.0, 720.0, 1280.0) when in portrait.
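Applied to the question's configuration code, the change is one line (a sketch; the canSetSessionPreset check is just a guard against unsupported presets):

session.beginConfiguration()
// 1280x720 capture; use the connection's videoOrientation for 720x1280 portrait
if session.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
    session.sessionPreset = AVCaptureSessionPreset1280x720
}
session.commitConfiguration()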

AVCaptureStillImageOutput.pngStillImageNSDataRepresentation?

I am working with AVCaptureStillImageOutput for the first time, and I save a JPEG image at some point.
Instead of a JPEG image, I would like to save a PNG image. What do I need to do for that?
I have these 3 lines of code in the app:
let stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey:AVVideoCodecJPEG]
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
Is there a simple way to modify those lines to get what I want?
After browsing the net, it seems like the answer is NO (unless I have not been lucky enough); nevertheless, I still believe there must be some good solution.
There is sample code in the AVFoundation Programming Guide that shows how to convert a CMSampleBuffer to a UIImage (under Converting CMSampleBuffer to a UIImage Object). From there, you can use UIImagePNGRepresentation(image) to encode it as PNG data.
Here is a Swift translation of that code:
extension UIImage
{
    // Translated from <https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/06_MediaRepresentations.html#//apple_ref/doc/uid/TP40010188-CH2-SW4>
    convenience init?(fromSampleBuffer sampleBuffer: CMSampleBuffer)
    {
        guard let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        if CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess { return nil }
        defer { CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly) }

        let context = CGBitmapContextCreate(
            CVPixelBufferGetBaseAddress(imageBuffer),
            CVPixelBufferGetWidth(imageBuffer),
            CVPixelBufferGetHeight(imageBuffer),
            8,
            CVPixelBufferGetBytesPerRow(imageBuffer),
            CGColorSpaceCreateDeviceRGB(),
            CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)

        guard let quartzImage = CGBitmapContextCreateImage(context) else { return nil }
        self.init(CGImage: quartzImage)
    }
}
Here is a Swift 4 version of the above code:
extension UIImage
{
    convenience init?(fromSampleBuffer sampleBuffer: CMSampleBuffer)
    {
        guard let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        if CVPixelBufferLockBaseAddress(imageBuffer, .readOnly) != kCVReturnSuccess { return nil }
        defer { CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly) }

        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(imageBuffer),
            width: CVPixelBufferGetWidth(imageBuffer),
            height: CVPixelBufferGetHeight(imageBuffer),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
            else { return nil }
        guard let quartzImage = context.makeImage() else { return nil }
        self.init(cgImage: quartzImage)
    }
}
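Usage might then look like this (a sketch; sampleBuffer is whatever your capture callback hands you):

if let image = UIImage(fromSampleBuffer: sampleBuffer),
    let pngData = UIImagePNGRepresentation(image) {
    // pngData is ready to write to disk or upload
}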

Make an UIImage from a CMSampleBuffer

This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I'm simply wondering why I can't convert it like this:
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage * imageFromCoreImageLibrary = [CIImage imageWithCVPixelBuffer: pixelBuffer];
UIImage * imageForUI = [UIImage imageWithCIImage: imageFromCoreImageLibrary];
It seems a lot simpler because it works for YCbCr color spaces, as well as RGBA and others. Is there something wrong with that code?
With Swift 3 and iOS 10 (AVCapturePhotoOutput):
Includes:
import UIKit
import CoreData
import CoreMotion
import AVFoundation
Create a UIView for the preview and link it to the main class:
@IBOutlet var preview: UIView!
Create this to set up the camera session (kCVPixelFormatType_32BGRA is important!):
lazy var cameraSession: AVCaptureSession = {
    let s = AVCaptureSession()
    s.sessionPreset = AVCaptureSessionPresetHigh
    return s
}()

lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let previewl: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.cameraSession)
    previewl.frame = self.preview.bounds
    return previewl
}()
func setupCameraSession() {
    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice
    do {
        let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        cameraSession.beginConfiguration()
        if (cameraSession.canAddInput(deviceInput) == true) {
            cameraSession.addInput(deviceInput)
        }
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        if (cameraSession.canAddOutput(dataOutput) == true) {
            cameraSession.addOutput(dataOutput)
        }
        cameraSession.commitConfiguration()
        let queue = DispatchQueue(label: "fr.popigny.videoQueue", attributes: [])
        dataOutput.setSampleBufferDelegate(self, queue: queue)
    }
    catch let error as NSError {
        NSLog("\(error), \(error.localizedDescription)")
    }
}
In viewWillAppear:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    setupCameraSession()
}
In viewDidAppear:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    preview.layer.addSublayer(previewLayer)
    cameraSession.startRunning()
}
Create a function to capture output:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Here you collect each frame and process it
    let ts: CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    self.mycapturedimage = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
}
Here is the code that converts a kCVPixelFormatType_32BGRA CMSampleBuffer to a UIImage. The key thing is the bitmapInfo, which must correspond to 32BGRA: 32-bit, little-endian, with premultiplied-first alpha:
func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)
    // Get the base address of the pixel buffer
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)
    // Get the number of bytes per row for the pixel buffer
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
    // Get the pixel buffer width and height
    let width = CVPixelBufferGetWidth(imageBuffer!)
    let height = CVPixelBufferGetHeight(imageBuffer!)
    // Create a device-dependent RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Create a bitmap graphics context with the sample buffer data
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
    // Create a Quartz image from the pixel data in the bitmap graphics context
    let quartzImage = context?.makeImage()
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)
    // Create an image object from the Quartz image
    let image = UIImage(cgImage: quartzImage!)
    return image
}
For JPEG images:
Swift 4:
let buff: CMSampleBuffer ... // assuming you have a CMSampleBuffer
if let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buff, previewPhotoSampleBuffer: nil) {
    let image = UIImage(data: imageData) // here you have a UIImage
}
Use the following code to convert an image from a pixel buffer.
Option 1:
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef myImage = [context
    createCGImage:ciImage
    fromRect:CGRectMake(0, 0,
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer))];
UIImage *uiImage = [UIImage imageWithCGImage:myImage];
Option 2:
// Lock the pixel buffer while reading its base address
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
int w = CVPixelBufferGetWidth(pixelBuffer);
int h = CVPixelBufferGetHeight(pixelBuffer);
int r = CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r / w;
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
UIGraphicsBeginImageContext(CGSizeMake(w, h));
CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    int maxY = h;
    for (int y = 0; y < maxY; y++) {
        for (int x = 0; x < w; x++) {
            int offset = bytesPerPixel * ((w * y) + x);
            data[offset] = buffer[offset];     // R
            data[offset+1] = buffer[offset+1]; // G
            data[offset+2] = buffer[offset+2]; // B
            data[offset+3] = buffer[offset+3]; // A
        }
    }
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
I wrote a simple extension for use with Swift 4.x/3.x to produce a UIImage from a CMSampleBuffer.
This also handles scaling and orientation, though you can just accept default values if they work for you.
import UIKit
import AVFoundation

extension CMSampleBuffer {
    func image(orientation: UIImageOrientation = .up,
               scale: CGFloat = 1.0) -> UIImage? {
        if let buffer = CMSampleBufferGetImageBuffer(self) {
            let ciImage = CIImage(cvPixelBuffer: buffer)
            return UIImage(ciImage: ciImage,
                           scale: scale,
                           orientation: orientation)
        }
        return nil
    }
}
- If it can obtain buffer data from the sample buffer, it proceeds; otherwise nil is returned.
- Using the buffer, it initializes a CIImage.
- It returns a UIImage initialized with the ciImage value, along with the scale and orientation values. If none are provided, the defaults of up and 1.0 are used, respectively.
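Usage inside a capture delegate might look like this (a sketch using the Swift 4 delegate signature; previewImageView is a hypothetical image view, and the orientation you need depends on your setup):

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // .right is typical for portrait back-camera video
    guard let frame = sampleBuffer.image(orientation: .right) else { return }
    DispatchQueue.main.async {
        self.previewImageView.image = frame
    }
}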
To all: don't use methods like this:
private let context = CIContext()

private func imageFromSampleBuffer2(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
They eat much more CPU and take more time to convert. Use the solution from https://stackoverflow.com/a/40193359/7767664 instead.
Don't forget to set the following settings for AVCaptureVideoDataOutput:
videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String): NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
//videoOutput.alwaysDiscardsLateVideoFrames = true
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "MyQueue"))
The convert method:
func imageFromSampleBuffer(_ sampleBuffer: CMSampleBuffer) -> UIImage {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)
    // Get the base address of the pixel buffer
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)
    // Get the number of bytes per row for the pixel buffer
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
    // Get the pixel buffer width and height
    let width = CVPixelBufferGetWidth(imageBuffer!)
    let height = CVPixelBufferGetHeight(imageBuffer!)
    // Create a device-dependent RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Create a bitmap graphics context with the sample buffer data
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
    // Create a Quartz image from the pixel data in the bitmap graphics context
    let quartzImage = context?.makeImage()
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)
    // Create an image object from the Quartz image
    let image = UIImage(cgImage: quartzImage!)
    return image
}
Swift 5.0:
if let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
    let ciimage = CIImage(cvImageBuffer: cvImageBuffer)
    let context = CIContext()
    if let cgImage = context.createCGImage(ciimage, from: ciimage.extent) {
        let uiImage = UIImage(cgImage: cgImage)
    }
}
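If you go this route for every frame, cache the CIContext instead of creating one per call, since constructing the context is the expensive part (a sketch; FrameConverter is a hypothetical wrapper type):

final class FrameConverter {
    // One reusable CIContext for all conversions
    private let context = CIContext()

    func uiImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvImageBuffer: buffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}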
This is going to come up a lot in connection with the iOS 10 AVCapturePhotoOutput class. Suppose the user wants to snap a photo and you call capturePhoto(with:delegate:) and your settings include a request for a preview image. This is a splendidly efficient way to get a preview image, but how are you going to display it in your interface? The preview image arrives as a CMSampleBuffer in your implementation of the delegate method:
func capture(_ output: AVCapturePhotoOutput,
             didFinishProcessingPhotoSampleBuffer buff: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
You need to transform the CMSampleBuffer, previewPhotoSampleBuffer, into a UIImage. How are you going to do that? Like this:
if let prev = previewPhotoSampleBuffer {
    if let buff = CMSampleBufferGetImageBuffer(prev) {
        let cim = CIImage(cvPixelBuffer: buff)
        let im = UIImage(ciImage: cim)
        // and now you have a UIImage! do something with it ...
    }
}
A Swift 4 / iOS 11 version of Popigny's answer:
import Foundation
import AVFoundation
import UIKit
class ViewController: UIViewController {
    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    let cameraPreview = UIView(frame: .zero)
    let progressIndicator = ProgressIndicator()

    override func viewDidLoad() {
        super.viewDidLoad()
        setupVideoPreview()
        do {
            try setupCaptureSession()
        } catch {
            let errorMessage = String(describing: error)
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
        }
    }

    private func setupCaptureSession() throws {
        let deviceDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
        let devices = deviceDiscovery.devices
        guard let captureDevice = devices.first else {
            let errorMessage = "No camera available"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }
        let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
        captureSession.addInput(captureDeviceInput)
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
        captureSession.startRunning()
        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }
    }

    private func setupVideoPreview() {
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.bounds = view.bounds
        previewLayer.position = CGPoint(x: view.bounds.midX, y: view.bounds.midY)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        cameraPreview.layer.addSublayer(previewLayer)
        cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(capturePhoto)))
        cameraPreview.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(cameraPreview)

        let viewsDict = ["cameraPreview": cameraPreview]
        view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
        view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
    }

    @objc func capturePhoto(_ sender: UITapGestureRecognizer) {
        progressIndicator.add(toView: view)
        let photoOutputSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        photoOutput.capturePhoto(with: photoOutputSettings, delegate: self)
    }

    func saveToPhotosAlbum(_ image: UIImage) {
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(photoWasSavedToAlbum), nil)
    }

    @objc func photoWasSavedToAlbum(_ image: UIImage, _ error: Error?, _ context: Any?) {
        alert(message: "Photo saved to device photo album")
    }

    func alert(title: String? = nil, message: String? = nil) {
        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated: true)
    }
}
extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let photoData = photo.fileDataRepresentation() else {
            let errorMessage = "Photo capture did not provide output data"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }
        guard let image = UIImage(data: photoData) else {
            let errorMessage = "could not create image to save"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }
        saveToPhotosAlbum(image)
        progressIndicator.hide()
    }
}
A full example project to see this in context: https://github.com/cruinh/CameraCapture
