Get output from AVCaptureSession in Swift to send to server - iOS

I've managed to write some code that opens the camera and previews the video. I now want to capture the frames from the output and send them to a server, ideally encoded as H.264.
Here's what I've got:
import UIKit
import AVFoundation

class ViewController: UIViewController {

    let captureSession = AVCaptureSession()
    var previewLayer : AVCaptureVideoPreviewLayer?

    // If we find a device we'll store it here for later use
    var captureDevice : AVCaptureDevice?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        let devices = AVCaptureDevice.devices()
        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {
                // Finally check the position and confirm we've got the back camera
                if(device.position == AVCaptureDevicePosition.Back) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        println("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    }

    func beginSession() {
        var err : NSError? = nil
        captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
        if err != nil {
            println("error: \(err?.localizedDescription)")
        }
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        self.view.layer.addSublayer(previewLayer)
        previewLayer?.frame = self.view.layer.frame
        captureSession.startRunning()
    }
}
This opens the camera successfully and I can preview the footage.
I've found some Objective-C code that looks like it gets the output, but I don't know how to convert it to Swift. It uses AVCaptureVideoDataOutput, AVAssetWriter, AVAssetWriterInput and AVAssetWriterInputPixelBufferAdaptor to write frames out to an H.264-encoded movie file.
Can I use AVCaptureVideoDataOutput and AVCaptureMovieFileOutput at the same time?
Can someone help convert it, or give me pointers on how to get the frames out of my current code?

Apple has a sample project, AVCam, in Objective-C that works with these things.
Here's another question on SO about using AVCam in Swift.
I personally used this https://github.com/alex-chan/AVCamSwift, and it's fine. I only had to convert it to the latest Swift syntax in Xcode and it worked.
Another suggestion is to use the Objective-C code that you found and import it into your Swift code through a bridging header.
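For the frame-grabbing part specifically, here is a minimal sketch in current Swift syntax using AVCaptureVideoDataOutput and its sample buffer delegate. The class name, the queue label and the encode/send step are illustrative only; encoding to H.264 (e.g. with VideoToolbox) and the upload depend on your server.

import AVFoundation
import CoreMedia

final class FrameCaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let frameQueue = DispatchQueue(label: "camera.frames") // illustrative label

    func configure() throws {
        session.sessionPreset = .high

        // Pick the back wide-angle camera and attach it as an input
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        // Deliver every frame to this object on a background queue
        videoOutput.setSampleBufferDelegate(self, queue: frameQueue)
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

        session.startRunning()
    }

    // Called once per captured frame
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Hand pixelBuffer to an H.264 encoder (e.g. VideoToolbox) and send the result to the server.
        _ = pixelBuffer
    }
}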


UIButton to capture image not responding

I am new to iOS development and I am trying to follow this tutorial to learn how to capture images and then save them on a server for my purposes:
https://medium.com/@rizwanm/swift-camera-part-2-c6de440a9404
The tutorial is great; I am only initializing the camera and trying to snap the image (not doing the QR part).
Unfortunately, when I hit the capture button, the image is not being taken or saved to my gallery and I cannot understand why. I use Xcode 8.3 and I am running iOS 10.3 on my iPhone.
I have connected the button to the view controller and wired it to the onTapTakePhoto action as shown in the code below. Please advise as to why this might be happening.
import UIKit
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var CaptureButton: UIButton!

    // helps transfer data between one or more input devices like the camera
    var captureSession: AVCaptureSession?

    // capturePhotoOutput will help us snap a live photo
    var capturePhotoOutput: AVCapturePhotoOutput?

    // helps render the camera view finder in the ViewController
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        CaptureButton.layer.cornerRadius = CaptureButton.frame.size.width / 2
        CaptureButton.clipsToBounds = true

        // set up the camera here
        let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        // AVCaptureDeviceInput serves as the middleman that attaches the input device
        // to the capture session; creating it can fail, so wrap it in do/catch
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object
            let input = try AVCaptureDeviceInput(device: captureDevice)

            // Initialize the captureSession object
            captureSession = AVCaptureSession()

            // Set the input device on the capture session
            captureSession?.addInput(input)

            // set up the preview view to see the live feed
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            previewView.layer.addSublayer(videoPreviewLayer!)

            // finally start the capture session
            captureSession?.startRunning()

            // get an instance of the AVCapturePhotoOutput class
            capturePhotoOutput = AVCapturePhotoOutput()
            capturePhotoOutput?.isHighResolutionCaptureEnabled = true

            // set the output on the capture session
            captureSession?.addOutput(capturePhotoOutput)
        } catch {
            print(error)
            return
        }
    }
    @IBAction func onTapTakePhoto(_ sender: UIButton) {
        // Make sure capturePhotoOutput is valid
        guard let capturePhotoOutput = self.capturePhotoOutput else { return }

        // Get an instance of the AVCapturePhotoSettings class
        let photoSettings = AVCapturePhotoSettings()

        // Set photo settings for our need
        photoSettings.isAutoStillImageStabilizationEnabled = true
        photoSettings.isHighResolutionPhotoEnabled = true
        photoSettings.flashMode = .auto

        // Call capturePhoto by passing our photo settings and a delegate implementing AVCapturePhotoCaptureDelegate
        capturePhotoOutput.capturePhoto(with: photoSettings, delegate: self)
    }
}
// to get the captured image
extension ViewController : AVCapturePhotoCaptureDelegate {
    func capture(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                 previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
        // Make sure we got a photo sample buffer
        guard error == nil,
            let photoSampleBuffer = photoSampleBuffer else {
            print("Error capturing photo: \(String(describing: error))")
            return
        }

        // Convert the photo sample buffer to JPEG image data using AVCapturePhotoOutput
        guard let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
            return
        }

        // Initialise a UIImage with our image data
        let capturedImage = UIImage.init(data: imageData, scale: 1.0)
        if let image = capturedImage {
            // Save our captured image to the photos album
            UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
        }
    }
}
One thing you should check is whether you have added the permission to access the camera in the Info.plist file.
You have to make a key-value entry for the key
Privacy - Photo Library Usage Description (NSPhotoLibraryUsageDescription)
if you are using the gallery, and
Privacy - Camera Usage Description (NSCameraUsageDescription)
for using the camera itself.
Refer to this for more details.
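As a quick sanity check, you can also read those keys back from the bundle at runtime; this sketch only prints whether the raw keys behind the two privacy entries are present (the key names are the standard ones, the check itself is just illustrative):

let cameraUsage = Bundle.main.object(forInfoDictionaryKey: "NSCameraUsageDescription") as? String
let photoUsage = Bundle.main.object(forInfoDictionaryKey: "NSPhotoLibraryUsageDescription") as? String
print("Camera usage string: \(cameraUsage ?? "MISSING")")
print("Photo Library usage string: \(photoUsage ?? "MISSING")")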
If that doesn't help, put a breakpoint on the button's action and check the code flow. If the breakpoint is not hit after pressing the button, there is probably an issue with the button's action outlet; try recreating the outlet connection.
First of all, with the help of breakpoints, check which section of your code is not being executed. There might be a problem with your button connection.
OR
Try this one
https://www.youtube.com/watch?v=994Hsi1zs6Q
OR
this one in Objective-C
https://github.com/omergul/LLSimpleCamera
Thank you for the answers :)
The problem was a missing connection from the storyboard to my button's action code. It feels like such a silly mistake in retrospect.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if (info[UIImagePickerControllerOriginalImage] as? UIImage) != nil {
    }
    dismiss(animated: true, completion: {
        // set your captured image on the UIButton here
    })
}

iOS 10, camera permission - black screen for "camera app"

Of course with iOS 10, you now have to add the camera usage description to the Info.plist to use the phone's camera. On first launch, the user is asked for permission, and all is well.
BUT we have a client app that is a "camera app": when you launch the app it immediately launches the camera, and while the app is running the camera is running and shown fullscreen. The code to do so is the usual way, see below.
The problem is the first launch of the app on a phone: the user is asked the question and says yes. But then the camera is just black on the devices we have tried. It does not crash (as it would if you forgot the plist item); it just goes black and stays black.
If the user quits the app and launches it again, it's fine; everything works.
What is the workflow for a "camera app"? I can't see a good solution, but there must be one for the various camera apps out there which immediately go to a fullscreen camera when you launch the app.
class CameraPlane: UIViewController {
    ...
    func cameraBegin() {
        captureSession = AVCaptureSession()
        captureSession!.sessionPreset = AVCaptureSessionPresetPhoto

        let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
        } catch let error1 as NSError {
            error = error1
            input = nil
        }
        if error != nil {
            print("probably on simulator? no camera?")
            return
        }

        if captureSession!.canAddInput(input) == false {
            print("capture session problem?")
            return
        }
        captureSession!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) == false {
            print("capture session with stillImageOutput problem?")
            return
        }
        captureSession!.addOutput(stillImageOutput)

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
        // or, AVLayerVideoGravityResizeAspect
        fixConnectionOrientation()
        view.layer.addSublayer(previewLayer!)
        captureSession!.startRunning()
        previewLayer!.frame = view.bounds
    }
Note: it's likely the OP's code was actually working correctly in terms of the new iOS 10 permission string, and the OP had another problem causing the black screen.
From the code you've posted, I can't tell why you experience this kind of behavior. I can only give you the code that is working for me.
This code also runs on iOS 9. Note that I am loading the camera in viewDidAppear to make sure that all constraints are set.
import UIKit
import AVFoundation

class ViewController: UIViewController {

    // the view where the camera feed is shown
    @IBOutlet weak var cameraView: UIView!

    var captureSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSessionPresetPhoto
        return session
    }()

    var sessionOutput = AVCaptureStillImageOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice]
        guard let backCamera = (devices?.first { $0.position == .back }) else {
            print("The back camera is currently not available")
            return
        }

        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)

                sessionOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
                if captureSession.canAddOutput(sessionOutput) {
                    captureSession.addOutput(sessionOutput)
                    captureSession.startRunning()

                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    previewLayer.connection.videoOrientation = .portrait
                    cameraView.layer.addSublayer(previewLayer)
                    previewLayer.position = CGPoint(x: cameraView.frame.width / 2, y: cameraView.frame.height / 2)
                    previewLayer.bounds = cameraView.frame
                }
            }
        } catch {
            print("Could not create a AVCaptureSession")
        }
    }
}
If you want the camera to show fullscreen, you can simply use view instead of cameraView. In my camera implementation the camera feed does not cover the entire view; there is still some navigation UI.
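For the fullscreen case, one option (a sketch assuming the same previewLayer property as above) is to size the layer from the root view and keep it in sync whenever layout changes:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Fullscreen preview: follow the root view's bounds on every layout pass.
    previewLayer.frame = view.bounds
}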
What's happening is that on that first launch, you're activating the camera before the system can check the permissions and display the appropriate UIAlertController. What you'd want to do is wrap this code in a check of the camera permission status (AVAuthorizationStatus), and if permission has not been determined yet, ask for it before displaying the camera. See this question for more help.
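A minimal sketch of that idea in current Swift syntax (the cameraBegin() call refers to the method from the question; the wrapper method name is just for illustration):

extension CameraPlane {
    func cameraBeginWhenAuthorized() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            cameraBegin()
        case .notDetermined:
            // First launch: ask, and only start the camera once the user has answered.
            AVCaptureDevice.requestAccess(for: .video) { [weak self] granted in
                DispatchQueue.main.async {
                    if granted { self?.cameraBegin() }
                }
            }
        default:
            // .denied / .restricted: show an explanation or a link to Settings instead.
            print("Camera access not available")
        }
    }
}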

Can an AVCaptureFileOutputRecordingDelegate be added to a UIView subclass?

I am having an issue creating a capture session in a custom UIView. I set the delegate like this
class Camera: UIView, AVCaptureFileOutputRecordingDelegate, AVAudioRecorderDelegate {
}
and then I set everything up and set the delegate like this
self.recordingDelegate? = self
captureSession.sessionPreset = AVCaptureSessionPresetHigh

let devices = AVCaptureDevice.devices()
for device in devices {
    if (device.hasMediaType(AVMediaTypeVideo)) {
        if(device.position == AVCaptureDevicePosition.Back) {
            captureDevice = device as? AVCaptureDevice
            if captureDevice != nil {
                beginSession()
            }
        }
    }
}
and all goes well. However, in the beginSession function:
func beginSession() {
    let err : NSError? = nil
    do {
        self.captureSession.addInput(try AVCaptureDeviceInput(device: self.captureDevice!))
    }
    catch {
        print("dang")
    }
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    ...
The catch is thrown when I try to add the capture device input, therefore it is not being added, and I cannot figure out why.
All of the code I am currently using was working fine when I had it inside a UIViewController, but when I switched it over to a subclass of UIView it stopped working. Any help would be appreciated; if more code is needed, let me know. Thank you!
I figured it out: the iOS device I was using did not have the camera enabled for some reason, therefore the input could not be added, which made the preview layer unable to capture any data.
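For future readers, the quickest way to see why addInput fails in a case like this is to log the thrown error instead of checking the unused err variable; a small sketch assuming the same captureSession and captureDevice properties:

func beginSession() {
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice!)
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
        } else {
            print("The session cannot accept this input")
        }
    } catch {
        // Prints the real reason, e.g. a restricted or unavailable camera.
        print("Could not create the device input: \(error)")
    }
}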

Capturing image with AVFoundation

I am using AVFoundation for the camera.
This is my live preview:
It looks good. When the user presses the button, I create a snapshot on the same screen (like Snapchat).
I am using the following code for capturing the image and showing it on the screen:
self.stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer : CMSampleBuffer!, _) in
    let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
    let pickedImage: UIImage = UIImage(data: imageDataJpeg)!
    self.captureSession.stopRunning()
    self.previewImageView.frame = CGRect(x:0, y:0, width:UIScreen.mainScreen().bounds.width, height:UIScreen.mainScreen().bounds.height)
    self.previewImageView.image = pickedImage
    self.previewImageView.layer.zPosition = 100
}
After the user captures an image, the screen looks like this:
Please look at the marked area. It was not visible on the live preview screen (screenshot 1).
In other words, the live preview is not showing everything. I am sure my live preview works well because I compared it with other camera apps and everything was the same as my live preview screen, so I guess I have a problem with the captured image.
I am creating the live preview with the following code:
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto

    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera
            if(device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    let err : NSError? = nil
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    } catch {
    }
    captureSession.addOutput(stillOutput)
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.cameraLayer.layer.addSublayer(previewLayer!)
    previewLayer?.frame = self.cameraLayer.frame
    captureSession.startRunning()
}
My cameraLayer :
How can I resolve this problem?
Presumably you are using an AVCaptureVideoPreviewLayer. So it sounds like this layer is incorrectly placed or incorrectly sized, or it has the wrong AVLayerVideoGravity setting. Part of the image is off the screen or cropped; that's why you don't see that part of what the camera sees while you are previewing.
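If you want the preview to show everything the sensor captures rather than a cropped view, a sketch using the same constants and the cameraLayer view from the question would be:

previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
// ResizeAspect letterboxes the frame instead of cropping it, so nothing is hidden.
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
previewLayer?.frame = self.cameraLayer.bounds
self.cameraLayer.layer.addSublayer(previewLayer!)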
OK, I found the solution. I used
captureSession.sessionPreset = AVCaptureSessionPresetHigh
instead of
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
and then the problem was fixed.
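For reference, a defensive way to apply that preset (the aspect-ratio remark is an assumption about why the switch helped: High delivers video-sized 16:9 frames, which match an AspectFill preview more closely than the full-sensor 4:3 Photo preset):

// Only apply the preset if the session supports it.
if captureSession.canSetSessionPreset(AVCaptureSessionPresetHigh) {
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
}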

Unable to get devices using AVCaptureDevice

I've managed to find some code that should give me access to the devices of a phone (such as the camera). The issue is that when I run the code from Xcode (and print the different devices), I get an empty array.
Here is what I wrote:
import UIKit
import AVFoundation

class ViewController: UIViewController {

    let captureSession = AVCaptureSession()
    var previewLayer : AVCaptureVideoPreviewLayer?

    // If we find a device we'll store it here for later use
    var captureDevice : AVCaptureDevice?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        let devices = AVCaptureDevice.devices()
        println(devices)

        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {
                // Finally check the position and confirm we've got the back camera
                if(device.position == AVCaptureDevicePosition.Back) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        println("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    }

    func beginSession() {
        var err : NSError? = nil
        captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
        if err != nil {
            println("error: \(err?.localizedDescription)")
        }
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        self.view.layer.addSublayer(previewLayer)
        previewLayer?.frame = self.view.layer.frame
        captureSession.startRunning()
    }
}
Do you have any ideas as to why I am getting an empty array?
If you're only running it in the simulator, the array will always be empty because it has no physical hardware to choose from. In fact, if you try to access physical hardware inside the Simulator, it will crash. If you plug a device in and still get an empty array, let me know.
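If you want the project to build and run in the Simulator without touching the camera path, one option in current Swift is a compile-time guard (the setupCamera() call is hypothetical; substitute your own session setup):

#if targetEnvironment(simulator)
    print("Running in the Simulator: AVCaptureDevice.devices() will be empty")
#else
    setupCamera()
#endif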
First check the current status of the authorization:
AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
For more detail you can read this article.
To check the current status of authorization in Swift 4:
AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
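Putting the Swift 4 pieces together, a minimal sketch that checks authorization and then looks up the back camera with AVCaptureDevice.DiscoverySession (the replacement for the deprecated AVCaptureDevice.devices()); the function name is illustrative:

import AVFoundation

func findBackCamera() -> AVCaptureDevice? {
    // Without authorization the device list will not be useful.
    guard AVCaptureDevice.authorizationStatus(for: .video) == .authorized else {
        print("Camera access not authorized yet")
        return nil
    }
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                     mediaType: .video,
                                                     position: .back)
    return discovery.devices.first
}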
