AVCaptureSession to capture image in Swift - iOS

I have created an AVCaptureSession to capture video output and display it to the user via a UIView. Now I want to be able to tap a button (the takePhoto method) and display the still image from the session in a UIImageView. I have tried iterating through each device's connections and saving the output, but that hasn't worked. The code I have is below:
let captureSession = AVCaptureSession()
var stillImageOutput: AVCaptureStillImageOutput!

@IBOutlet var imageView: UIImageView!
@IBOutlet var cameraView: UIView!

// If we find a device we'll store it here for later use
var captureDevice: AVCaptureDevice?

override func viewDidLoad() {
    // Do any additional setup after loading the view, typically from a nib.
    super.viewDidLoad()
    println("I AM AT THE CAMERA")
    captureSession.sessionPreset = AVCaptureSessionPresetLow
    self.captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    self.stillImageOutput = AVCaptureStillImageOutput()
    self.captureSession.addOutput(self.stillImageOutput)

    var err: NSError? = nil
    self.captureSession.addInput(AVCaptureDeviceInput(device: self.captureDevice, error: &err))
    if err != nil {
        println("error: \(err?.localizedDescription)")
    }

    var previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    self.cameraView.layer.addSublayer(previewLayer)
    previewLayer?.frame = self.cameraView.layer.frame
    captureSession.startRunning()
}

@IBAction func takePhoto(sender: UIButton) {
    self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo)) { (buffer: CMSampleBuffer!, error: NSError!) -> Void in
        var image = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
        var data_image = UIImage(data: image)
        self.imageView.image = data_image
    }
}
}

You should try performing the session setup (adding the inputs and outputs) on a separate queue before starting the session. In Apple's documentation they state:
Important: The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive). See AVCam for iOS for the canonical implementation example.
Try using a dispatch block in the session-creation method, such as the one below:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), { // 1: session setup off the main queue
    var err: NSError? = nil
    self.captureSession.addOutput(self.stillImageOutput)
    self.captureSession.addInput(AVCaptureDeviceInput(device: self.captureDevice, error: &err))
    self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    if err != nil {
        println("error: \(err?.localizedDescription)")
    }

    var previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    previewLayer?.frame = self.cameraView.layer.bounds
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill

    // startRunning() is the blocking call, so keep it off the main queue too
    self.captureSession.startRunning()

    dispatch_async(dispatch_get_main_queue(), { // 2: back to the main queue for UI work
        // 3: add the preview layer to the view hierarchy
        self.cameraView.layer.addSublayer(previewLayer)
    })
})
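Note that the documentation quoted above asks for a serial queue, while this snippet uses the concurrent global queue. A minimal sketch of the same setup on a dedicated serial queue (the queue label is illustrative, not from the original answer):

let sessionQueue = dispatch_queue_create("com.example.sessionQueue", DISPATCH_QUEUE_SERIAL)
dispatch_async(sessionQueue, {
    // Configure inputs and outputs, then start the session, all off the main queue.
    self.captureSession.startRunning()
})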

Related

Why is AVCaptureSession method canAddOutput returning false?

I'm trying to build a camera app, and I'm setting up my capture session within viewDidLoad() in my main view controller. For some reason, whenever I run the app on my phone, the AVCaptureSession method canAddOutput evaluates to false:
var captureSession: AVCaptureSession!
var photoOutput: AVCapturePhotoOutput!
var previewLayer: AVCaptureVideoPreviewLayer!

//MARK: Outlets
@IBOutlet weak var previewView: UIView!

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    //Ask permission to camera
    let device = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo, completionHandler: { (granted: Bool) in
        if granted {
            print("granted")
            //Set up session
            if let input = try? AVCaptureDeviceInput(device: device) {
                print("Input = device")
                if self.captureSession.canAddInput(input) {
                    self.captureSession.addInput(input)
                    print("Input added to capture session")
                    if self.captureSession.canAddOutput(self.photoOutput) {
                        print("Output added to capture session")
                        self.captureSession.addOutput(self.photoOutput)
                        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
                        self.previewLayer.frame = self.previewView.bounds
                        self.previewView.layer.addSublayer(self.previewLayer!)
                        self.captureSession.startRunning()
                        print("Session is running")
                    }
                }
            }
        } else {
            print("Goodbye")
        }
    })
}
Unfortunately, I can only get it to print up until "Input added to capture session". Any suggestions would help - thanks!
You have to remove any outputs previously added to the session; you can use a for loop for that:

for output in captureSession.outputs {
    captureSession.removeOutput(output)
}

Then try to add the new output, as in the sketch below.
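A minimal sketch of that step. Note that photoOutput is declared but never assigned in the question's code, so it must also be created before canAddOutput(_:) is checked; that initialization is an addition of mine, not part of the original answer:

// Assumption: photoOutput has not been initialized anywhere yet.
photoOutput = AVCapturePhotoOutput()
if captureSession.canAddOutput(photoOutput) {
    captureSession.addOutput(photoOutput)
    print("Output added to capture session")
}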

Capturing image and setting it to UIImageView

I am trying to capture an image and set it to the UIImageView; to create the camera I have the following code:
class HomeController: BaseController, UIImagePickerControllerDelegate {
    var detector: AFDXDetector?
    var captureSession: AVCaptureSession?
    var stillImageOutput: AVCapturePhotoOutput?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var camera: AVCaptureDevice!

    @IBOutlet weak var cameraBtn: UIButton!
    @IBOutlet weak var cameraView: UIView!
    @IBOutlet weak var cameraImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        startCamera()
    }

    func startCamera() {
        do {
            captureSession = AVCaptureSession()
            camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .front)
            captureSession?.sessionPreset = AVCaptureSessionPreset1280x720
            let input = try AVCaptureDeviceInput(device: camera)
            if (captureSession?.canAddInput(input))! {
                captureSession?.addInput(input)
                stillImageOutput = AVCapturePhotoOutput()
                if (captureSession?.canAddOutput(stillImageOutput))! {
                    print("output added")
                    captureSession?.canAddOutput(stillImageOutput)
                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
                    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                    cameraView.layer.addSublayer(previewLayer!)
                    captureSession?.startRunning()
                }
            }
        } catch {
        }
    }

    @IBAction func cameraBtnPressed(_ sender: Any) {
        if (stillImageOutput?.connection(withMediaType: AVMediaTypeVideo)) != nil {
            print("video connection detected")
        }
    }
}
For some reason, the print statement "video connection detected" never executes, although the camera is working.
Does anyone know why?
Within the if statement checking captureSession?.canAddOutput(stillImageOutput), the body calls canAddOutput again instead of actually attaching the output; change that second captureSession?.canAddOutput(stillImageOutput) to addOutput, as in the sketch below.
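A minimal sketch of the corrected block inside startCamera():

if (captureSession?.canAddOutput(stillImageOutput))! {
    print("output added")
    // addOutput(_:) actually attaches the output; canAddOutput(_:) only checks whether it can be added.
    captureSession?.addOutput(stillImageOutput)
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    cameraView.layer.addSublayer(previewLayer!)
    captureSession?.startRunning()
}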

Unable to use AVCapturePhotoOutput to capture photo (Swift + Xcode)

I am working on a custom camera app, and the tutorial I am following uses AVCaptureStillImageOutput, which is deprecated in iOS 10. I have set up the camera and am now stuck on how to take the photo.
Here is the full view controller where I have the camera:
import UIKit
import AVFoundation

var cameraPos = "back"

class View3: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    @IBOutlet weak var clickButton: UIButton!
    @IBOutlet var cameraView: UIView!

    var session: AVCaptureSession?
    var stillImageOutput: AVCapturePhotoOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        clickButton.center.x = cameraView.bounds.width/2
        loadCamera()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    }

    @IBAction func clickCapture(_ sender: UIButton) {
        if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo) {
            // This is where I need help
        }
    }

    @IBAction func changeDevice(_ sender: UIButton) {
        if cameraPos == "back" {
            cameraPos = "front"
        } else {
            cameraPos = "back"
        }
        loadCamera()
    }

    func loadCamera() {
        session?.stopRunning()
        videoPreviewLayer?.removeFromSuperlayer()
        session = AVCaptureSession()
        session!.sessionPreset = AVCaptureSessionPresetPhoto

        var backCamera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .front)
        if cameraPos == "back" {
            backCamera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
        }

        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
        } catch let error1 as NSError {
            error = error1
            input = nil
            print(error!.localizedDescription)
        }

        if error == nil && session!.canAddInput(input) {
            session!.addInput(input)
            stillImageOutput = AVCapturePhotoOutput()
            if session!.canAddOutput(stillImageOutput) {
                session!.addOutput(stillImageOutput)
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
                videoPreviewLayer?.frame = cameraView.bounds
                videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
                videoPreviewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(videoPreviewLayer!)
                session!.startRunning()
            }
        }
    }
}
This is where I need help:
@IBAction func clickCapture(_ sender: UIButton) {
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo) {
        // This is where I need help
    }
}
I have gone through the answer here, How to use AVCapturePhotoOutput, but I do not understand how to incorporate that code into mine, as it involves declaring a new class.
You are almost there.
For Output as AVCapturePhotoOutput
Check out the AVCapturePhotoOutput documentation for more help.
These are the steps to capture a photo:
1. Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
2. Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
3. Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
Put the code below in your clickCapture method, and don't forget to conform to and implement the AVCapturePhotoCaptureDelegate protocol in your class.
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [
    kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
    kCVPixelBufferWidthKey as String: 160,
    kCVPixelBufferHeightKey as String: 160
]
settings.previewPhotoFormat = previewFormat
self.stillImageOutput?.capturePhoto(with: settings, delegate: self)
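A sketch of the matching delegate callback, using the iOS 10 / Swift 3 delegate method signature (the extension on the question's View3 class is my addition; what you do with the resulting image afterwards is up to you):

extension View3: AVCapturePhotoCaptureDelegate {
    func capture(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                 previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
        if let sampleBuffer = photoSampleBuffer,
            let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer),
            let image = UIImage(data: imageData) {
            // Use the captured image here, e.g. show it in an image view.
            print("Captured image of size \(image.size)")
        } else {
            print("Error capturing photo: \(String(describing: error))")
        }
    }
}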
For Output as AVCaptureStillImageOutput
If you intend to snap a photo from the video connection, you can follow the steps below.
Step 1: Get the connection
if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
    // ...
    // Code for photo capture goes here...
}
Step 2: Capture the photo
Call the captureStillImageAsynchronouslyFromConnection function on the stillImageOutput. The sampleBuffer represents the data that is captured.
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
    // ...
    // Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
})
Step 3: Process the Image Data
We will need to take a few steps to process the image data found in sampleBuffer in order to end up with a UIImage that we can insert into our captureImageView and easily use elsewhere in our app.
if sampleBuffer != nil {
    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
    let dataProvider = CGDataProviderCreateWithCFData(imageData)
    let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
    let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
    // ...
    // Add the image to captureImageView here...
}
Step 4: Save the image
Based on your needs, either save the image to the photo gallery or show it in an image view, as sketched below.
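For instance (a sketch; captureImageView is the image view referenced in Step 3, and the photo-album call mirrors the UIImageWriteToSavedPhotosAlbum usage shown later on this page):

// Show the image in an image view...
self.captureImageView.image = image
// ...or save it to the photo gallery.
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)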
For more details, check out the Create custom camera view guide under Snap a Photo.

AVFoundation camera focus stops working after switching from barcode scanner

I have an application that uses RSBarcodes_Swift to scan barcodes.
When the scan is successful, I need to segue to a camera view controller to take pictures. The problem I have is that after the barcode scanner, AVFoundation tap-to-focus stops working. What I mean by this is that whenever I tap to focus, the camera just briefly tries to focus and then resets to the default lens position. When I don't use the barcode scanner and go directly to the camera, everything works fine. I have also tried a couple of other barcode scanners, but the result is the same. Is there any way to reset the camera usage, or dispose of the barcode scanner when I'm done with it?
This is the code I use to present the camera with tap-to-focus capability:
import Foundation
import AVFoundation

public class CameraViewController: UIViewController {
    var backCamera: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var stillImageOutput: AVCaptureStillImageOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var previewView: UIView?

    public override func viewWillAppear(animated: Bool) {
        super.viewWillAppear(animated)
        previewView = UIView()
        previewView!.frame = UIScreen.mainScreen().bounds

        let shortTap = UITapGestureRecognizer(target: self, action: #selector(shortTapRecognize))
        shortTap.numberOfTapsRequired = 1
        shortTap.numberOfTouchesRequired = 1
        previewView!.addGestureRecognizer(shortTap)
        self.view.addSubview(previewView!)

        captureSession = AVCaptureSession()
        captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
        backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera!)
        } catch let error1 as NSError {
            error = error1
            input = nil
            print(error!.localizedDescription)
        }

        if error == nil && captureSession!.canAddInput(input) {
            captureSession!.addInput(input)
            stillImageOutput = AVCaptureStillImageOutput()
            stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if captureSession!.canAddOutput(stillImageOutput) {
                captureSession!.addOutput(stillImageOutput)
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                videoPreviewLayer!.frame = previewView!.bounds
                videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
                videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                previewView!.layer.addSublayer(videoPreviewLayer!)
                captureSession!.startRunning()
            }
        }
    }

    public override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
    }

    public override func viewDidLoad() {
        super.viewDidLoad()
    }

    public override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func shortTapRecognize(tap: UITapGestureRecognizer) {
        if tap.state == UIGestureRecognizerState.Ended {
            let pointInPreview = tap.locationInView(tap.view)
            let pointInCamera = videoPreviewLayer!.captureDevicePointOfInterestForPoint(pointInPreview)
            if backCamera!.focusPointOfInterestSupported {
                do {
                    try backCamera!.lockForConfiguration()
                } catch let error as NSError {
                    print(error.localizedDescription)
                }
                backCamera!.focusPointOfInterest = pointInCamera
                backCamera!.focusMode = .AutoFocus
                backCamera!.unlockForConfiguration()
            }
        }
    }
}

Get a sublayer with Swift 2 and Xcode 7

I would like to save a sublayer, but I can't get the right target.
I want to save the layer that is displayed on screen, which contains the camera view. When I retrieve it, I get a white image, so I conclude that I'm targeting the wrong layer when capturing in the last function.
Here is my code:
@IBOutlet weak var blutEffect: UIVisualEffectView!
@IBOutlet weak var background: UIView! // display view

var previewLayer = AVCaptureVideoPreviewLayer()
var captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    camera()
}

@IBAction func takePhoto(sender: AnyObject) {
    captureSession.stopRunning()
}

func camera() {
    captureSession = AVCaptureSession()
    previewLayer = AVCaptureVideoPreviewLayer()
    var captureDevice: AVCaptureDevice?
    let devices = AVCaptureDevice.devices()
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
    background.layer.sublayers?.removeAll()

    for device in devices {
        if (device.hasMediaType(AVMediaTypeVideo)) {
            if (device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    do {
                        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
                    } catch _ as NSError {
                        print("ERROR")
                    }
                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    self.background.layer.addSublayer(previewLayer)
                    previewLayer.frame = self.background.layer.frame
                    captureSession.startRunning()
                }
            }
        }
    }
}

func screenShotMethod() {
    UIGraphicsBeginImageContext(background.layer.frame.size)
    background.layer.renderInContext(UIGraphicsGetCurrentContext())
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // Save it to the camera roll
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
Thanks for your help!
