I am using WebRTC to capture video from the user's camera. At some point I want to get the current picture as a UIImage and save it to the photo library. I am using localVideoView to show the video from the local camera, but when I try to take a screenshot of that view, it is empty (just a blue background).
This is my code for taking the screenshot:
func screenShotMethod() {
    DispatchQueue.main.async {
        // Create the UIImage
        UIGraphicsBeginImageContext(self.localVideoView!.frame.size)
        self.localVideoView?.layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // Save it to the camera roll
        UIImageWriteToSavedPhotosAlbum(image!, nil, nil, nil)
    }
}
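The view stays empty because WebRTC's video views (RTCEAGLVideoView / RTCMTLVideoView) draw with OpenGL or Metal, and layer.render(in:) cannot capture that content. One alternative, sketched below under the assumption that you use the GoogleWebRTC pod, have access to the local RTCVideoTrack, and that the capturer delivers CVPixelBuffer-backed frames (all assumptions about your setup), is to attach an extra renderer to the track and convert its latest frame to a UIImage:

import WebRTC
import UIKit

// Hypothetical helper: attach with localVideoTrack.add(grabber),
// then read grabber.lastImage whenever a snapshot is needed.
final class FrameGrabber: NSObject, RTCVideoRenderer {
    private(set) var lastImage: UIImage?
    private let context = CIContext()

    func setSize(_ size: CGSize) {
        // Not needed for snapshots.
    }

    func renderFrame(_ frame: RTCVideoFrame?) {
        // Assumes CVPixelBuffer-backed frames; other buffer types are not handled here.
        guard let buffer = frame?.buffer as? RTCCVPixelBuffer else { return }
        let ciImage = CIImage(cvPixelBuffer: buffer.pixelBuffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        lastImage = UIImage(cgImage: cgImage)
    }
}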
Here is sample code to capture a photo, without a camera preview, during a WebRTC video call. You have to invoke the TakePicture() method via a signal or a button tap.
import AVFoundation
import WebRTC
import UIKit
import Foundation

class CallViewController: UIViewController {

    var captureSession: AVCaptureSession?

    func TakePicture() {
        DispatchQueue.main.async { [self] in
            captureSession = AVCaptureSession()
            captureSession!.beginConfiguration()
            let photoOutput = AVCapturePhotoOutput()
            photoOutput.isHighResolutionCaptureEnabled = true
            photoOutput.isLivePhotoCaptureEnabled = false
            if let captureDevice = AVCaptureDevice.default(for: .video) {
                do {
                    let input = try AVCaptureDeviceInput(device: captureDevice)
                    if captureSession!.canAddInput(input) {
                        captureSession!.addInput(input)
                    }
                } catch let error {
                    print("Failed to create camera input: \(error)")
                }
                if captureSession!.canAddOutput(photoOutput) {
                    captureSession!.addOutput(photoOutput)
                }
                let cameraLayer = AVCaptureVideoPreviewLayer()
                cameraLayer.session = captureSession
                captureSession!.commitConfiguration()
                captureSession!.startRunning()
                let photoSettings = AVCapturePhotoSettings()
                //photoSettings.flashMode = .auto // check device properties before turning on flash
                photoSettings.photoQualityPrioritization = .balanced
                photoOutput.capturePhoto(with: photoSettings, delegate: self)
            }
        }
    }
}
extension CallViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        captureSession!.stopRunning()
        captureSession = nil
        let imageData = photo.fileDataRepresentation()
        // Do the rest with the image bytes
    }
}
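To finish the original goal of saving the frame to the photo library, the delegate body above could continue along these lines (a minimal sketch; it assumes the app has photo-library add permission, i.e. NSPhotoLibraryAddUsageDescription in Info.plist):

// Inside photoOutput(_:didFinishProcessingPhoto:error:)
if let imageData = photo.fileDataRepresentation(),
   let image = UIImage(data: imageData) {
    // Writes the captured still to the camera roll.
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}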
Related
I have made a custom camera and want to overlay another image over it. I am using AVKit to get the custom camera. I was able to overlay the image when I was using the built-in camera. Below is the code I have for the custom camera; "newImage" is the image that I would like to overlay over the camera.
import UIKit
import AVKit

class liveView: UIViewController, AVCapturePhotoCaptureDelegate {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var captureImageView: UIImageView!

    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var newImage: UIImage!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                // videoPreviewLayer?.frame = self.newImage.accessibilityFrame
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspect
        videoPreviewLayer.connection?.videoOrientation = .portrait
        previewView.layer.addSublayer(videoPreviewLayer)
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession.startRunning()
            DispatchQueue.main.async {
                self.videoPreviewLayer.frame = self.previewView.bounds
            }
        }
    }

    @IBAction func didTakePhoto(_ sender: UIBarButtonItem) {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else { return }
        let image = UIImage(data: imageData)
        captureImageView.image = image
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }
}
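Since the goal is to overlay newImage on top of the live camera, one straightforward approach (a sketch, assuming the overlay only needs to sit above the preview and be baked into the captured photo; addOverlay() and composite(_:with:) are hypothetical helper names) is to add an image view above the preview layer and composite the overlay onto the captured UIImage:

// Call after setupLivePreview(): shows newImage above the preview layer.
func addOverlay() {
    let overlayView = UIImageView(image: newImage)
    overlayView.frame = previewView.bounds
    overlayView.contentMode = .scaleAspectFit
    previewView.addSubview(overlayView) // a subview added afterwards appears above the preview sublayer
}

// Composites the overlay onto a captured photo so it is part of the saved result.
// Note: this stretches the overlay to the photo's size; adjust the rect if you need
// to preserve the overlay's aspect ratio.
func composite(_ photo: UIImage, with overlay: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: photo.size)
    return renderer.image { _ in
        photo.draw(in: CGRect(origin: .zero, size: photo.size))
        overlay.draw(in: CGRect(origin: .zero, size: photo.size), blendMode: .normal, alpha: 1.0)
    }
}

In photoOutput(_:didFinishProcessingPhoto:error:) you would then assign captureImageView.image = composite(image, with: newImage) instead of the raw image.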
I am making an iOS app where a user takes a picture, and then I want to use Google's MLKit from Firebase to detect text in the picture. I have set up a custom camera UIViewController that we'll call CameraViewController. There is a simple button that the user presses to take a picture. I have followed Firebase's documentation, here, but MLKit is not working for me. Here is the code I have for your reference, and then we'll talk about what the problem is.
1. Here are my imports, class delegates, and outlets:
import UIKit
import AVFoundation
import Firebase

class CameraViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var capturePhotoOutput: AVCapturePhotoOutput?

    @IBOutlet var previewView: UIView!
    @IBOutlet var captureButton: UIButton!
}
2. In viewDidLoad, I set up the "previewView" so that the user has a viewfinder:
override func viewDidLoad() {
    super.viewDidLoad()
    let captureDevice = AVCaptureDevice.default(for: .video)!
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice)
        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        videoPreviewLayer?.frame = view.layer.bounds
        previewView.layer.addSublayer(videoPreviewLayer!)
        captureSession?.startRunning()
        capturePhotoOutput = AVCapturePhotoOutput()
        capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        captureSession?.addOutput(capturePhotoOutput!)
    } catch {
        print(error)
    }
}
3. Here is my action for the button that takes the picture:
@IBAction func captureButtonTapped(_ sender: Any) {
    guard let capturePhotoOutput = self.capturePhotoOutput else { return }
    let photoSettings = AVCapturePhotoSettings()
    photoSettings.isAutoStillImageStabilizationEnabled = true
    photoSettings.isHighResolutionPhotoEnabled = true
    photoSettings.flashMode = .off
    capturePhotoOutput.capturePhoto(with: photoSettings, delegate: self)
}
4. This is where I receive the captured picture in the didFinishProcessingPhoto delegate method and start using MLKit:
func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    guard error == nil,
        let photoSampleBuffer = photoSampleBuffer else {
        print("Error capturing photo: \(String(describing: error))")
        return
    }
    guard let imageData =
        AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
        return
    }
    let capturedImage = UIImage.init(data: imageData, scale: 1.0)
    captureNormal()
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
        self.captureSession?.stopRunning()
        self.processText(with: capturedImage!)
        // Here is where I call the function processText where MLKit is run
    }
}
5. Lastly, here is my processText(with:) function that uses MLKit:
func processText(with image: UIImage) {
    let vision = Vision.vision()
    let textRecognizer = vision.onDeviceTextRecognizer()
    let visionImage = VisionImage(image: image)
    textRecognizer.process(visionImage) { result, error in
        if error != nil {
            print("MLKIT ERROR - \(error)")
        } else {
            let resultText = result?.text
            print("MLKIT RESULT - \(resultText)")
        }
    }
}
OK, that was a lot; thank you for reading all of it. The problem is that this does not work. I do get a proper UIImage in step 4, so it's not that. Here's a screenshot of an example of what I am trying to scan...
MLKit should be able to detect this text easily, but every time I try, result?.text is printed as nil. I'm out of ideas. Does anyone have any ideas on how to fix this? If so, thanks a lot!
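For what it's worth, a common cause of nil results with on-device text recognition is image orientation: a photo straight from AVCapturePhotoOutput is usually rotated relative to its pixel data, so the recognizer sees the text sideways. A hedged sketch of passing the orientation explicitly via VisionImageMetadata (this assumes the FirebaseMLVision API from the question's era and a back-camera photo; the mirrored cases are omitted):

// Maps UIImage.Orientation to the EXIF-style orientation MLKit expects.
func visionOrientation(from orientation: UIImage.Orientation) -> VisionDetectorImageOrientation {
    switch orientation {
    case .up:    return .topLeft
    case .right: return .rightTop
    case .down:  return .bottomRight
    case .left:  return .leftBottom
    default:     return .topLeft // mirrored cases not handled in this sketch
    }
}

// In processText(with:), attach the metadata before calling process(_:):
// let metadata = VisionImageMetadata()
// metadata.orientation = visionOrientation(from: image.imageOrientation)
// visionImage.metadata = metadata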
I'm using an iPhone 7+ with iOS 11 installed, and I'm trying to adapt some code that captures regular images to also capture depth.
When I call capturePhotoOutput?.isDepthDataDeliverySupported, it returns false. I was under the impression I would be able to use my iPhone 7+ to capture depth.
Am I missing a permission in Info.plist? Or have I made a more fundamental error?
//
// RecorderViewController.swift
import UIKit
import AVFoundation

class RecorderViewController: UIViewController {

    @IBOutlet weak var previewView: UIView!

    @IBAction func onTapTakePhoto(_ sender: Any) {
        // Make sure capturePhotoOutput is valid
        guard let capturePhotoOutput = self.capturePhotoOutput else { return }
        // Get an instance of AVCapturePhotoSettings class
        let photoSettings = AVCapturePhotoSettings()
        // Set photo settings for our need
        photoSettings.isAutoStillImageStabilizationEnabled = true
        photoSettings.isHighResolutionPhotoEnabled = true
        photoSettings.flashMode = .auto
        // Call capturePhoto method by passing our photo settings and a
        // delegate implementing AVCapturePhotoCaptureDelegate
        capturePhotoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var capturePhotoOutput: AVCapturePhotoOutput?

    override func viewDidLoad() {
        super.viewDidLoad()
        //let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
        let captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInDualCamera, for: .video, position: .back)
        do {
            let input = try AVCaptureDeviceInput(device: captureDevice!)
            captureSession = AVCaptureSession()
            captureSession?.addInput(input)
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            previewView.layer.addSublayer(videoPreviewLayer!)
            capturePhotoOutput = AVCapturePhotoOutput()
            capturePhotoOutput?.isHighResolutionCaptureEnabled = true
            if (capturePhotoOutput?.isDepthDataDeliverySupported)! {
                capturePhotoOutput?.isDepthDataDeliveryEnabled = true
            } else {
                print("DEPTH NOT SUPPORTED!")
            }
            // Set the output on the capture session
            captureSession?.addOutput(capturePhotoOutput!)
            captureSession?.startRunning()
        } catch {
            print(error)
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

extension RecorderViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ captureOutput: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?,
                     previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                     resolvedSettings: AVCaptureResolvedPhotoSettings,
                     bracketSettings: AVCaptureBracketedStillImageSettings?,
                     error: Error?) {
        // Make sure we get a photo sample buffer
        guard error == nil,
            let photoSampleBuffer = photoSampleBuffer else {
            print("Error capturing photo: \(String(describing: error))")
            return
        }
        // Convert the photo sample buffer to JPEG image data using AVCapturePhotoOutput
        guard let imageData =
            AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
            return
        }
        // Initialise a UIImage with our image data
        let capturedImage = UIImage.init(data: imageData, scale: 1.0)
        if let image = capturedImage {
            // Save our captured image to the photos album
            UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
        }
    }
}
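A likely culprit here is ordering: isDepthDataDeliverySupported tends to report false until the photo output has been added to a session whose input is the depth-capable (dual) camera, and depth capture also expects the .photo session preset. In the code above the check happens before addOutput(_:). A sketch of the reordered configuration, using the same property names as above (hedged rather than guaranteed, since exact requirements vary by iOS version):

captureSession = AVCaptureSession()
captureSession?.sessionPreset = .photo // depth delivery expects the photo preset

if let dualCamera = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: dualCamera) {
    captureSession?.addInput(input)
}

capturePhotoOutput = AVCapturePhotoOutput()
captureSession?.addOutput(capturePhotoOutput!) // add the output *before* checking support

if capturePhotoOutput?.isDepthDataDeliverySupported == true {
    capturePhotoOutput?.isDepthDataDeliveryEnabled = true
} else {
    print("DEPTH NOT SUPPORTED!")
}

When capturing, the AVCapturePhotoSettings also needs photoSettings.isDepthDataDeliveryEnabled = true for depth data to arrive in the delegate callback.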
I am trying to figure out how to record a video using AVFoundation in Swift. I have gotten as far as creating a custom camera, but I only figured out how to take still pictures with it and I can't figure out how to record video. Hope you can help me figure this one out.
I want to hold down the takePhotoButton to record the video, and then have it previewed where I currently preview my still photos. Your help will really help me continue my project. Thanks a lot!
import UIKit
import AVFoundation

@available(iOS 10.0, *)
class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let photoSettings = AVCapturePhotoSettings()
    var audioPlayer = AVAudioPlayer()
    var captureSession = AVCaptureSession()
    var videoDeviceInput: AVCaptureDeviceInput!
    var previewLayer = AVCaptureVideoPreviewLayer()
    var frontCamera: Bool = false
    var captureDevice: AVCaptureDevice!
    var takePhoto = false

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        prepareCamera()
    }

    func prepareCamera() {
        captureSession.sessionPreset = AVCaptureSessionPresetPhoto
        if let availableDevices = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .back).devices {
            captureDevice = availableDevices.first
            beginSession()
        }
    }

    func frontCamera(_ front: Bool) {
        let devices = AVCaptureDevice.devices()
        do {
            try captureSession.removeInput(AVCaptureDeviceInput(device: captureDevice!))
        } catch {
            print("Error")
        }
        for device in devices! {
            if (device as AnyObject).hasMediaType(AVMediaTypeVideo) {
                if front {
                    if (device as AnyObject).position == AVCaptureDevicePosition.front {
                        captureDevice = device as? AVCaptureDevice
                        do {
                            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
                        } catch {}
                        break
                    }
                } else {
                    if (device as AnyObject).position == AVCaptureDevicePosition.back {
                        captureDevice = device as? AVCaptureDevice
                        do {
                            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
                        } catch {}
                        break
                    }
                }
            }
        }
    }

    func beginSession() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            if let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
                self.previewLayer = previewLayer
                containerView.layer.addSublayer(previewLayer as? CALayer ?? CALayer())
                self.previewLayer.frame = self.view.layer.frame
                self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                captureSession.startRunning()

                let dataOutput = AVCaptureVideoDataOutput()
                dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)]
                dataOutput.alwaysDiscardsLateVideoFrames = true
                if captureSession.canAddOutput(dataOutput) {
                    captureSession.addOutput(dataOutput)
                    photoSettings.isHighResolutionPhotoEnabled = true
                    photoSettings.isAutoStillImageStabilizationEnabled = true
                }
                captureSession.commitConfiguration()

                let queue = DispatchQueue(label: "com.NightOut.captureQueue")
                dataOutput.setSampleBufferDelegate(self, queue: queue)
            }
        } catch {
            print(error)
        }
    }

    @IBAction func takePhoto(_ sender: Any) {
        takePhoto = true
        photoSettings.isHighResolutionPhotoEnabled = true
        photoSettings.isAutoStillImageStabilizationEnabled = true
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        if takePhoto {
            takePhoto = false
            if let image = self.getImageFromSampleBuffer(buffer: sampleBuffer) {
                let photoVC = UIStoryboard(name: "Main", bundle: nil).instantiateViewController(withIdentifier: "PhotoVC") as! PhotoPreviewViewController
                photoVC.takenPhoto = image
                DispatchQueue.main.async {
                    self.present(photoVC, animated: true, completion: {
                        self.stopCaptureSession()
                    })
                }
            }
        }
    }

    func getImageFromSampleBuffer(buffer: CMSampleBuffer) -> UIImage? {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
            let context = CIContext()
            let imageRect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(pixelBuffer), height: CVPixelBufferGetHeight(pixelBuffer))
            if let image = context.createCGImage(ciImage, from: imageRect) {
                return UIImage(cgImage: image, scale: UIScreen.main.scale, orientation: .leftMirrored)
            }
        }
        return nil
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    func stopCaptureSession() {
        self.captureSession.stopRunning()
        if let inputs = captureSession.inputs as? [AVCaptureDeviceInput] {
            for input in inputs {
                self.captureSession.removeInput(input)
            }
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    @IBAction func DismissButtonAction(_ sender: UIButton) {
        UIView.animate(withDuration: 0.1, animations: {
            self.DismissButton.transform = CGAffineTransform.identity.scaledBy(x: 0.8, y: 0.8)
        }, completion: { (finish) in
            UIView.animate(withDuration: 0.1, animations: {
                self.DismissButton.transform = CGAffineTransform.identity
            })
        })
        performSegue(withIdentifier: "Segue", sender: nil)
    }
}
Detecting that the button is held down and then released can be done in different ways. The easiest is to add targets for the .touchDown and .touchUpInside control events on the capture button, as below.
aButton.addTarget(self, action: Selector("holdRelease:"), forControlEvents: UIControlEvents.TouchUpInside);
aButton.addTarget(self, action: Selector("HoldDown:"), forControlEvents: UIControlEvents.TouchDown)
//target functions
func HoldDown(sender:UIButton)
{
// Start recording the video
}
func holdRelease(sender:UIButton)
{
// Stop recording the video
}
There are other ways as well, such as adding a long-press gesture recognizer to the button or view and starting/stopping based on the recognizer's state (a sketch of that approach follows). More info can be found in another SO answer: UIButton with hold down action and release action.
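A minimal sketch of the gesture-recognizer variant (startRecording() and stopRecording() are placeholders for whatever your recording code ends up being):

// In viewDidLoad (or wherever the button is configured):
// let longPress = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress(_:)))
// takePhotoButton.addGestureRecognizer(longPress)

@objc func handleLongPress(_ recognizer: UILongPressGestureRecognizer) {
    switch recognizer.state {
    case .began:
        startRecording()   // hypothetical: begin movie capture while the finger is down
    case .ended, .cancelled, .failed:
        stopRecording()    // hypothetical: finish when the finger lifts
    default:
        break
    }
}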
Video Recording
You need to add an AVCaptureMovieFileOutput to your capture session and use startRecording(to:recordingDelegate:) to start the video recording.
Things to notice
Implement the AVCaptureFileOutputRecordingDelegate methods to be notified when recording starts and finishes.
The file path should be meaningful, which means you should supply a path your app has permission to write to.
Put code like this inside the holdDown() method to start recording:
let videoFileOutput = AVCaptureMovieFileOutput()
self.captureSession?.addOutput(videoFileOutput)

let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0] as URL
let filePath = documentsURL.appendingPathComponent("tempMovie.mov")

videoFileOutput.startRecording(to: filePath, recordingDelegate: self)
To stop recording, call videoFileOutput.stopRecording().
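For completeness, here is a sketch of the delegate conformance mentioned above, using the modern method names and assuming the recording lives in the question's CameraViewController:

extension CameraViewController: AVCaptureFileOutputRecordingDelegate {

    func fileOutput(_ output: AVCaptureFileOutput, didStartRecordingTo fileURL: URL, from connections: [AVCaptureConnection]) {
        print("Started recording to \(fileURL)")
    }

    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        if let error = error {
            print("Recording failed: \(error)")
            return
        }
        // The movie file is now at outputFileURL; preview it, upload it,
        // or save it to the photo library as needed.
    }
}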
You need to use AVCaptureMovieFileOutput. Add an AVCaptureMovieFileOutput to your capture session using addOutput(_:).
Starting a Recording
You start recording a QuickTime movie using startRecording(to:recordingDelegate:). You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the fileOutput(_:didFinishRecordingTo:from:error:) method.
See docs for more info.
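Since the URL must not point to an existing file, a common pattern (sketched here, not tied to any particular project layout) is to generate a fresh temporary URL for each recording:

// Builds a unique .mov URL in the temporary directory for each recording.
func newRecordingURL() -> URL {
    let fileName = UUID().uuidString + ".mov"
    return FileManager.default.temporaryDirectory.appendingPathComponent(fileName)
}

// Usage:
// movieFileOutput.startRecording(to: newRecordingURL(), recordingDelegate: self)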
I am working on a custom camera app, and the tutorial I am following uses AVCaptureStillImageOutput, which is deprecated in iOS 10. I have set up the camera and am now stuck on how to take the photo.
Here is my full view controller where I have the camera:
import UIKit
import AVFoundation

var cameraPos = "back"

class View3: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet weak var clickButton: UIButton!
    @IBOutlet var cameraView: UIView!

    var session: AVCaptureSession?
    var stillImageOutput: AVCapturePhotoOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        clickButton.center.x = cameraView.bounds.width / 2
        loadCamera()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    }

    @IBAction func clickCapture(_ sender: UIButton) {
        if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo) {
            // This is where I need help
        }
    }

    @IBAction func changeDevice(_ sender: UIButton) {
        if cameraPos == "back" {
            cameraPos = "front"
        } else {
            cameraPos = "back"
        }
        loadCamera()
    }

    func loadCamera() {
        session?.stopRunning()
        videoPreviewLayer?.removeFromSuperlayer()

        session = AVCaptureSession()
        session!.sessionPreset = AVCaptureSessionPresetPhoto

        var backCamera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .front)
        if cameraPos == "back" {
            backCamera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
        }

        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
        } catch let error1 as NSError {
            error = error1
            input = nil
            print(error!.localizedDescription)
        }

        if error == nil && session!.canAddInput(input) {
            session!.addInput(input)
            stillImageOutput = AVCapturePhotoOutput()
            if session!.canAddOutput(stillImageOutput) {
                session!.addOutput(stillImageOutput)
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
                videoPreviewLayer?.frame = cameraView.bounds
                videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
                videoPreviewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(videoPreviewLayer!)
                session!.startRunning()
            }
        }
    }
}
This is where I need help:
@IBAction func clickCapture(_ sender: UIButton) {
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo) {
        // This is where I need help
    }
}
I have gone through the answer here: How to use AVCapturePhotoOutput, but I do not understand how to incorporate that code into this code, as it involves declaring a new class.
You are almost there.
For Output as AVCapturePhotoOutput
Check out AVCapturePhotoOutput documentation for more help.
These are the steps to capture a photo:
1. Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
2. Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
3. Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
Put the code below in your clickCapture method, and don't forget to conform to and implement the delegate in your class (a sketch of the delegate method follows the snippet).
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [
    kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
    kCVPixelBufferWidthKey as String: 160,
    kCVPixelBufferHeightKey as String: 160
]
settings.previewPhotoFormat = previewFormat
self.cameraOutput.capturePhoto(with: settings, delegate: self)
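A sketch of the delegate side, assuming iOS 11+ where the AVCapturePhoto-based callback is available (cameraOutput above corresponds to stillImageOutput in the question's code):

extension View3: AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        if let error = error {
            print("Capture failed: \(error)")
            return
        }
        guard let imageData = photo.fileDataRepresentation(),
              let image = UIImage(data: imageData) else { return }
        // Use the image: show it in an image view, save it, etc.
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    }
}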
For Output as AVCaptureStillImageOutput
If you intend to snap a photo from the video connection, you can follow the steps below.
Step 1: Get the connection
if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
    // ...
    // Code for photo capture goes here...
}
Step 2: Capture the photo
Call the captureStillImageAsynchronouslyFromConnection function on the stillImageOutput. The sampleBuffer represents the data that is captured.
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
    // ...
    // Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
})
Step 3: Process the Image Data
We will need to take a few steps to process the image data found in sampleBuffer in order to end up with a UIImage that we can insert into our captureImageView and easily use elsewhere in our app.
if sampleBuffer != nil {
    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
    let dataProvider = CGDataProviderCreateWithCFData(imageData)
    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
    let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
    // ...
    // Add the image to captureImageView here...
}
Step 4: Save the image
Based on your needs, either save the image to the photo gallery or show it in an image view.
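For example, saving to the photo gallery can be as simple as the following (assuming the usual photo-library permission is in place):

UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil) // writes the UIImage to the camera roll
// or, to display it instead:
// captureImageView.image = image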
For more details, check out the Create custom camera view guide, under Snap a Photo.