I am new to iOS development and I am following this tutorial to learn how to capture images and then save them to a server:
https://medium.com/@rizwanm/swift-camera-part-2-c6de440a9404
The tutorial is great and I am only initializing the camera and trying to snap the image (not doing the QR part).
Unfortunately, when I hit the 'capture' button, the image is not being taken or saved in my gallery and I am not able to understand why. I use Xcode 8.3 and I am running iOS 10.3 on my iPhone.
I have connected the button with the view controller and called it in the function onTapTakePhoto as shown in the code below. Please advise as to why this might be happening.
import UIKit
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var CaptureButton: UIButton!

    // helps transfer data between one or more input devices like the camera
    var captureSession: AVCaptureSession?

    // instance variable; capturePhotoOutput will help us snap a live photo
    var capturePhotoOutput: AVCapturePhotoOutput?

    // helps render the camera view finder in the ViewController
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        CaptureButton.layer.cornerRadius = CaptureButton.frame.size.width / 2
        CaptureButton.clipsToBounds = true

        // set up the camera here
        let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        // AVCaptureDeviceInput serves as a middleman to attach the input device to the capture session;
        // there is a chance the input device is unavailable, so wrap it in do/catch
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object
            let input = try AVCaptureDeviceInput(device: captureDevice)
            // Initialize the captureSession object
            captureSession = AVCaptureSession()
            // Set the input device on the capture session
            captureSession?.addInput(input)

            // set up the preview view to see the live feed
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            previewView.layer.addSublayer(videoPreviewLayer!)

            // finally start the capture session
            captureSession?.startRunning()

            // get an instance of the AVCapturePhotoOutput class
            capturePhotoOutput = AVCapturePhotoOutput()
            capturePhotoOutput?.isHighResolutionCaptureEnabled = true
            // set the output on the capture session
            captureSession?.addOutput(capturePhotoOutput)
        } catch {
            print(error)
            return
        }
    }

    @IBAction func onTapTakePhoto(_ sender: UIButton) {
        // Make sure capturePhotoOutput is valid
        guard let capturePhotoOutput = self.capturePhotoOutput else { return }
        // Get an instance of AVCapturePhotoSettings class
        let photoSettings = AVCapturePhotoSettings()
        // Set photo settings for our need
        photoSettings.isAutoStillImageStabilizationEnabled = true
        photoSettings.isHighResolutionPhotoEnabled = true
        photoSettings.flashMode = .auto
        // Call capturePhoto, passing our photo settings and a delegate implementing AVCapturePhotoCaptureDelegate
        capturePhotoOutput.capturePhoto(with: photoSettings, delegate: self)
    }
}
// to get the captured image
extension ViewController : AVCapturePhotoCaptureDelegate {
    func capture(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
                 previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
        // Make sure we got a photo sample buffer
        guard error == nil,
            let photoSampleBuffer = photoSampleBuffer else {
                print("Error capturing photo: \(String(describing: error))")
                return
        }

        // Convert the photo sample buffer to JPEG image data using AVCapturePhotoOutput
        guard let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
            return
        }

        // Initialize a UIImage with our image data
        let capturedImage = UIImage.init(data: imageData, scale: 1.0)
        if let image = capturedImage {
            // Save our captured image to the photos album
            UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
        }
    }
}
One thing you should check is whether you have added the permission to access the camera in the Info.plist file.
You have to make a key-value entry for the key
Privacy - Photo Library Usage Description
(if you are using the gallery)
and
Privacy - Camera Usage Description
for using the camera itself.
Refer to this for more details.
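For reference, here is a minimal Swift sketch of checking and requesting camera access at runtime, using the Swift 3 era AVFoundation names the question already uses; the Info.plist entries above correspond to the raw keys NSCameraUsageDescription and NSPhotoLibraryUsageDescription, and the function name here is just an example:

import AVFoundation

// Assumes NSCameraUsageDescription (Privacy - Camera Usage Description) is already
// in the Info.plist; on iOS 10, starting a capture session without it crashes the app.
func ensureCameraAccess(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(forMediaType: AVMediaTypeVideo) {
    case .authorized:
        completion(true)
    case .notDetermined:
        // Triggers the system prompt that shows your usage description string
        AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
            completion(granted)
        }
    default:
        // .denied or .restricted: the user has to change this in Settings
        completion(false)
    }
}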
If it doesn't work, put a breakpoint in the button's action and check the code flow. If the breakpoint is not hit after pressing the button, there could be an issue with the button's action outlet; try remaking the outlet action.
First of all, with the help of breakpoints, check which section of your code is not being executed. There might be a problem with your button connection.
OR
Try this one
https://www.youtube.com/watch?v=994Hsi1zs6Q
OR
this one in Objective-C
https://github.com/omergul/LLSimpleCamera
Thank you for the answers :)
The problem was a missing connection from the storyboard to my action button code. Feels like such a silly mistake in retrospect.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
        dismiss(animated: true, completion: {
            // write your code to set `pickedImage` on the UIButton here
        })
    }
}
Background
I am using AVCaptureSession with AVCapturePhotoOutput to save captures as JPEG images.
let captureSession = AVCaptureSession()
let stillImageOutput = AVCapturePhotoOutput()
var captureDevice : AVCaptureDevice?

...

func setupCamera() {
    captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
    if (captureDevice != nil) {
        captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice!))
        if captureSession.canAddOutput(stillImageOutput) {
            captureSession.addOutput(stillImageOutput)
        }
    }
}
The AVCaptureDevice is set to automatically and continuously adjust exposure settings
func configureCamera() {
    do {
        try captureDevice?.lockForConfiguration()
        captureDevice?.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice?.unlockForConfiguration()
    } catch let error as NSError {
        // Errors handled here...
    }
}
The capture is started by
func capture() {
    // Get an instance of AVCapturePhotoSettings class
    let photoSettings = AVCapturePhotoSettings()

    // Set photo settings
    photoSettings.isAutoStillImageStabilizationEnabled = true
    photoSettings.flashMode = .off

    // Call capturePhoto method by passing photo settings and a
    // delegate implementing AVCapturePhotoCaptureDelegate
    stillImageOutput.capturePhoto(with: photoSettings, delegate: self)
}
The parent class conforms to AVCapturePhotoCaptureDelegate and handles the photo output:
// Delegate
func photoOutput(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?,
                 previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
    // Make sure there is a photo sample buffer
    guard error == nil,
        let photoSampleBuffer = photoSampleBuffer else {
            // Errors handled here
            return
    }

    // Convert the photo sample buffer to JPEG image data using AVCapturePhotoOutput
    guard let imageData =
        AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
            return
    }

    let capturedImage = UIImage.init(data: imageData, scale: 1.0)
    if let image = capturedImage {
        // save photo ...
    }
}
And everything works as it should, however...
Problem
I need to know the exposure duration and ISO values that were used for each capture. The values vary because the camera is set to automatically adjust exposure and it has to be like that.
I know the metadata of the capture holds these values but I can't figure out how to access them.
The exposure duration and ISO values are necessary for fine-tuning the exposure to achieve optimal results. After fine-tuning, the capture is started with these manual exposure values:
captureDevice?.setExposureModeCustom(duration: customTime, iso: customISO, completionHandler: nil)
Instead of getting the used ISO and exposure duration from the capture metadata, I read these values just before capturing a photo. When doing it this way, it is important to check that the exposure has finished adjusting.
Just before calling the capture, check that the auto exposure is not adjusting:
while (captureDevice?.isAdjustingExposure)! {
    usleep(100000) // wait 100 msec
}
Read the current exposure parameters
let current_exposure_duration : CMTime = (captureDevice?.exposureDuration)!
let current_exposure_ISO : Float = (captureDevice?.iso)!
And then take a photo
stillImageOutput.capturePhoto(with: photoSettings, delegate: self)
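Putting those pieces together, here is a minimal sketch of the whole sequence, assuming it lives in the same class as the code above (so captureDevice, stillImageOutput and the delegate conformance are the ones already set up); the function name is just an example:

// Sketch only: wait for auto exposure to settle, record the exposure values
// that will be used, then trigger the capture.
func captureWithExposureReadout() {
    // check that the auto exposure is not adjusting
    while (captureDevice?.isAdjustingExposure)! {
        usleep(100000) // wait 100 msec
    }

    // read the current exposure parameters
    let currentExposureDuration : CMTime = (captureDevice?.exposureDuration)!
    let currentExposureISO : Float = (captureDevice?.iso)!
    print("Exposure: \(CMTimeGetSeconds(currentExposureDuration)) s, ISO: \(currentExposureISO)")

    // and then take a photo
    let photoSettings = AVCapturePhotoSettings()
    photoSettings.isAutoStillImageStabilizationEnabled = true
    photoSettings.flashMode = .off
    stillImageOutput.capturePhoto(with: photoSettings, delegate: self)
}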
I'm trying to capture RAW files with AVFoundation. However, I'm getting an empty array from __availableRawPhotoPixelFormatTypes.
Here is my snippet
if self._photoOutput == nil {
    self._photoOutput = AVCapturePhotoOutput()
    print(self._photoOutput!.__availableRawPhotoPixelFormatTypes)
}
And the output is an empty array: []
What may cause this?
Here are three things that will cause the availableRawPhotoPixelFormatTypes array to be empty:
You are reading the availableRawPhotoPixelFormatTypes property before adding your _photoOutput to an AVCaptureSession with a video source.
You are using the dual camera input. If so, you can't capture RAW images.
You are using the front camera. If so, you can't capture RAW images.
Here is some modified sample code from an excellent Apple guide (see link below). I have copied from several places and updated it slightly for brevity, simplicity, and a better overview:
let session = AVCaptureSession()
let photoOutput = AVCapturePhotoOutput()

private func configureSession() {
    // Get camera device.
    guard let videoCaptureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
        print("Unable to get camera device.")
        return
    }

    // Create a capture input.
    guard let videoInput = try? AVCaptureDeviceInput(device: videoCaptureDevice) else {
        print("Unable to obtain video input for default camera.")
        return
    }

    // Make sure inputs and output can be added to session.
    guard session.canAddInput(videoInput) else { return }
    guard session.canAddOutput(photoOutput) else { return }

    // Configure the session.
    session.beginConfiguration()
    session.sessionPreset = .photo
    session.addInput(videoInput)
    // availableRawPhotoPixelFormatTypes is empty.
    session.addOutput(photoOutput)
    // availableRawPhotoPixelFormatTypes should not be empty.
    session.commitConfiguration()
}

private func capturePhoto() {
    // Photo settings for RAW capture.
    let rawFormatType = kCVPixelFormatType_14Bayer_RGGB

    // At this point the array should not be empty (session has been configured).
    guard photoOutput.availableRawPhotoPixelFormatTypes.contains(NSNumber(value: rawFormatType).uint32Value) else {
        print("No available RAW pixel formats")
        return
    }

    let photoSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormatType)
    photoOutput.capturePhoto(with: photoSettings, delegate: self)
}
// MARK: - AVCapturePhotoCaptureDelegate methods

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingRawPhoto rawSampleBuffer: CMSampleBuffer?,
                 previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
    guard error == nil, let rawSampleBuffer = rawSampleBuffer else {
        print("Error capturing RAW photo: \(String(describing: error))")
        return
    }

    // Do something with the rawSampleBuffer.
}
Apple's Photo Capture Guide:
https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/PhotoCaptureGuide/index.html
The availableRawPhotoPixelFormatTypes property:
https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/1778628-availablerawphotopixelformattype
iPhone camera capabilities:
https://developer.apple.com/library/content/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html
One supplement to @Thomas's answer:
If you change AVCaptureDevice.activeFormat, AVCaptureSession.sessionPreset will be set to inputPriority automatically. In that situation, availableRawPhotoPixelFormatTypes will be empty too.
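A small sketch of that interaction; session, photoOutput, device and format are assumed to be the already configured session, photo output, capture device and some AVCaptureDevice.Format chosen elsewhere (all placeholders here):

import AVFoundation

// Sketch only: demonstrates the side effect described above.
func switchToCustomFormat(_ format: AVCaptureDevice.Format,
                          on device: AVCaptureDevice,
                          session: AVCaptureSession,
                          photoOutput: AVCapturePhotoOutput) throws {
    try device.lockForConfiguration()
    device.activeFormat = format          // silently sets the session preset to .inputPriority
    device.unlockForConfiguration()

    print(session.sessionPreset == .inputPriority)                 // true after the change
    print(photoOutput.availableRawPhotoPixelFormatTypes.isEmpty)   // also true, per the note above
    // RAW capture needs the .photo preset, so a custom activeFormat and RAW capture
    // don't mix; setting sessionPreset = .photo again would bring the RAW formats back
    // but would also override the custom format.
}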
I need access to my device's camera data (not image data; I already have that), such as the "pinhole" fx and fy and anything else I can possibly get.
Currently, I'm using AVFoundation's 'AVCaptureSession' with a custom UI. But previously I used 'UIImagePickerController' which has a delegate method called
imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any])
I was able to retrieve the taken photograph from the "info" dictionary. It also gave me very detailed information regarding the camera and its capabilities. As it stands, I don't know how to retrieve the same detailed information during 'AVCaptureSession' photography. Please help.
You need to use AVCapturePhotoOutput and not AVCaptureStillImageOutput.
First, the code below needs the following variables in your class:
private let photoOutput = AVCapturePhotoOutput()
private var inProgressPhotoCaptureDelegates = [Int64 : MyAVPhotoCaptureDelegate]()
private let sessionQueue = DispatchQueue(label: "session queue", attributes: [], target: nil) // Communicate with the session
private var videoDeviceOrientation : AVCaptureVideoOrientation = .portrait // this needs updated as the device orientation changes
Add the photo output during AVCaptureSession setup:
// Add photo output.
if session.canAddOutput(photoOutput) {
    session.addOutput(photoOutput)
    self.photoOutput.isHighResolutionCaptureEnabled = true
}
Your capturePhoto() function should be set up as follows:
func capturePhoto(aspectRatio : Float, metaData : NSDictionary?) {
    sessionQueue.async {
        // Update the photo output's connection to match the video orientation of the video preview layer.
        if let photoOutputConnection = self.photoOutput.connection(with: AVMediaType.video) {
            photoOutputConnection.videoOrientation = self.videoDeviceOrientation
        }

        // Capture a JPEG photo with flash set to off and high resolution photo enabled.
        let photoSettings = AVCapturePhotoSettings()
        photoSettings.flashMode = .off
        photoSettings.isHighResolutionPhotoEnabled = true
        if photoSettings.availablePreviewPhotoPixelFormatTypes.count > 0 {
            photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String : photoSettings.availablePreviewPhotoPixelFormatTypes.first!]
        }

        // Use a separate object for the photo capture delegate to isolate each capture life cycle.
        let photoCaptureDelegate = MyAVPhotoCaptureDelegate(requestedPhotoSettings: photoSettings, completed: { [unowned self] photoCaptureDelegate in
            // When the capture is complete, remove the reference to the photo capture delegate so it can be deallocated.
            self.sessionQueue.async { [unowned self] in
                self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
            }
        })

        /*
         The photo output keeps a weak reference to the photo capture delegate, so
         we store it in a dictionary to maintain a strong reference to this object
         until the capture is completed.
         */
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = photoCaptureDelegate
        self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)
    }
}
The MyAVPhotoCaptureDelegate class referenced in the capturePhoto() function above needs to be set up as follows:
class MyAVPhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {

    let requestedPhotoSettings: AVCapturePhotoSettings
    private let completed : (MyAVPhotoCaptureDelegate) -> ()
    private var photoData: Data? = nil

    init(requestedPhotoSettings: AVCapturePhotoSettings, completed: @escaping (MyAVPhotoCaptureDelegate) -> ()) {
        self.requestedPhotoSettings = requestedPhotoSettings
        self.completed = completed
        super.init()
    }

    private func didFinish() {
        completed(self)
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let photoSampleBuffer = photoSampleBuffer {
            if let exif = CMGetAttachment(photoSampleBuffer, kCGImagePropertyExifDictionary as NSString, nil) {
                if let exifDictionary = exif as? NSMutableDictionary {
                    // view exif data
                }
            }
            photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        }
        else {
            print("Error capturing photo: \(String(describing: error))")
            return
        }
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings, error: Error?) {
        // Use PHPhotoLibrary to save photoData to the photo library
        ...
        didFinish()
    }
}
Most of this code comes from my version of AVCam with the specifics of my implementation removed. I have left out the code for saving to the photo library, but you can extract that from the sample code. You can view the EXIF data at the point where I have commented "view exif data".
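For completeness, a hedged sketch of the photo library save that the didFinishCaptureFor callback above leaves out, using PHPhotoLibrary (the helper name is just an example and error handling is kept minimal; remember the photo library usage description in Info.plist):

import Photos

// Sketch only: saves the captured JPEG data to the photo library.
// `photoData` is the Data produced in didFinishProcessingPhoto above.
func save(photoData: Data) {
    PHPhotoLibrary.requestAuthorization { status in
        guard status == .authorized else { return }
        PHPhotoLibrary.shared().performChanges({
            let creationRequest = PHAssetCreationRequest.forAsset()
            creationRequest.addResource(with: .photo, data: photoData, options: nil)
        }, completionHandler: { success, error in
            if !success {
                print("Error saving photo to library: \(String(describing: error))")
            }
        })
    }
}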
I am using Swift 3, Xcode 8.2.
I have a custom camera view which displays the video feed fine and a button that I want to act as a shutter. When the user taps on the button, I want a picture taken and it to be displayed on the screen. (e.g. like a Snapchat or Facebook Messenger style camera behavior)
Here is my code:
import UIKit
import AVFoundation

class CameraVC: UIViewController, AVCapturePhotoCaptureDelegate {

    // this is where the camera feed from the phone is going to be displayed
    @IBOutlet var cameraView : UIView!

    var shutterButton : UIButton = UIButton.init(type: .custom)

    // manages capture activity and coordinates the flow of data from input devices to capture outputs.
    var capture_session = AVCaptureSession()

    // a capture output for use in workflows related to still photography.
    var session_output = AVCapturePhotoOutput()

    // preview layer that we will have on our view so users can see the photo we took
    var preview_layer = AVCaptureVideoPreviewLayer()

    // still picture image is what we show as the picture taken, frozen on the screen
    var still_picture_image : UIImage!

    ... // more code in viewWillAppear that sets up the camera feed

    // called when the shutter button is pressed
    func shutterButtonPressed() {
        // get the actual video feed and take a photo from that feed
        session_output.capturePhoto(with: AVCapturePhotoSettings.init(format: [AVVideoCodecKey : AVVideoCodecJPEG]), delegate: self as AVCapturePhotoCaptureDelegate)
    }

    func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        // take the session output, get the buffer, and create an image from that buffer
        if let sampleBuffer = photoSampleBuffer,
            let previewBuffer = previewPhotoSampleBuffer,
            let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
            print("Here") // doesn't get here
        }
    }
It doesn't seem to print "Here" when running this code, and I can't find any Swift 3 tutorials on how to display this image. I'm guessing I want to take the imageData and assign it to my still_picture_image and overlay that over the camera feed somehow.
Any help or a pointer in the right direction would be a great help.
EDIT
After adding the following to my code:
if let error = error {
    print(error.localizedDescription)
}
But I still don't get any error printed.
Add the following code into your delegate method to print out the error being thrown:
if let error = error {
    print(error.localizedDescription)
}
Once you get your error resolved, I think this post should help you to extract the image: Taking photo with custom camera Swift 3
Okay, I figured out my problem:
First, drag a UIImageView onto the storyboard and have it take up the entire screen. This is where the still picture will be displayed after pressing the shutter button.
Create that variable in the code and link it.
#IBOutlet weak var stillPicture : UIImageView!
Then, in viewDidLoad make sure that you insert the UIImageView on top of the camera view.
self.view.insertSubview(stillPicture, aboveSubview: your_camera_view)
This is the function that is called when the shutter button is clicked:
func shutterButtonPressed() {
    let settings = AVCapturePhotoSettings()

    let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
    let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
                         kCVPixelBufferWidthKey as String: 160,
                         kCVPixelBufferHeightKey as String: 160]

    settings.previewPhotoFormat = previewFormat
    session_output.capturePhoto(with: settings, delegate: self)
}
Then, in your capture delegate:
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    if let error = error {
        print(error.localizedDescription)
    }

    // take the session output, get the buffer, and create an image from that buffer
    if let sampleBuffer = photoSampleBuffer, let previewBuffer = previewPhotoSampleBuffer, let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
        // this is the image that the user has taken!
        let takenImage : UIImage = UIImage(data: dataImage)!
        stillPicture?.image = takenImage
    } else {
        print("Error setting up photo capture")
    }
}
Is there any way to record the iOS screen programmatically? I mean whatever activity you are doing, like tapping buttons or scrolling table views.
Even if a video is playing, will that be captured as well, along with the other activity?
I have tried these:
https://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
https://github.com/alskipp/ASScreenRecorder
but these libraries don't provide good video quality, and I need high-quality video.
The issue is that when a video is playing in the background and I capture the screen, the recording is not smooth: it shows one frame of the video, then the next frame after 3-4 seconds, and so on. The quality of the video is also poor; it's blurred.
As of iOS 9, it looks like ReplayKit is available to greatly simplify this.
https://developer.apple.com/reference/replaykit
https://code.tutsplus.com/tutorials/ios-9-an-introduction-to-replaykit--cms-25458
Update: This may be less relevant now that iOS 11 has a built-in screen recorder, but the following Swift 3 code worked for me:
import ReplayKit

@IBAction func toggleRecording(_ sender: UIBarButtonItem) {
    let r = RPScreenRecorder.shared()

    guard r.isAvailable else {
        print("ReplayKit unavailable")
        return
    }

    if r.isRecording {
        self.stopRecording(sender, r)
    }
    else {
        self.startRecording(sender, r)
    }
}

func startRecording(_ sender: UIBarButtonItem, _ r: RPScreenRecorder) {
    r.startRecording(handler: { (error: Error?) -> Void in
        if error == nil { // Recording has started
            sender.title = "Stop"
        } else {
            // Handle error
            print(error?.localizedDescription ?? "Unknown error")
        }
    })
}

func stopRecording(_ sender: UIBarButtonItem, _ r: RPScreenRecorder) {
    r.stopRecording(handler: { previewViewController, error in
        sender.title = "Record"

        if let pvc = previewViewController {
            if UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiom.pad {
                pvc.modalPresentationStyle = UIModalPresentationStyle.popover
                pvc.popoverPresentationController?.sourceRect = CGRect.zero
                pvc.popoverPresentationController?.sourceView = self.view
            }

            pvc.previewControllerDelegate = self
            self.present(pvc, animated: true, completion: nil)
        }
        else if let error = error {
            print(error.localizedDescription)
        }
    })
}

// MARK: RPPreviewViewControllerDelegate

func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
    previewController.dismiss(animated: true, completion: nil)
}
ReplayKit is available, although you are not allowed to access the resulting video. The only way I've found so far is to take a number of screenshots (store them in an array of images) and then convert these images into a video. It's not very efficient from a performance standpoint, but it might work when you don't really need a 30/60 fps screen recording and are OK with 6-20 fps. Here's the full example.
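As a starting point for that approach, here is a minimal sketch of grabbing one frame of a view as a UIImage (the function name is just an example; assembling the collected images into a video with AVAssetWriter is a separate step not shown here):

import UIKit

// Sketch only: captures the current contents of a view as a UIImage.
// Call it repeatedly (e.g. from a timer or CADisplayLink) and append the results to your image array.
func snapshot(of view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, UIScreen.main.scale)
    defer { UIGraphicsEndImageContext() }
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
    return UIGraphicsGetImageFromCurrentImageContext()
}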
Check out ScreenCaptureView; this has video-recording support built in (see link).
What this does is it saves the contents of a UIView to a UIImage. The author suggests you can save a video of the app in use by passing the frames through AVCaptureSession.
I believe it hasn't been tested with an OpenGL subview, but assuming that it works you might be able to modify it slightly to include audio and then you'd be set.
AVCaptureSession Sample
AVCaptureSession Reference
import UIKit
import AVFoundation

class ViewController: UIViewController {
    let captureSession = AVCaptureSession()
    let stillImageOutput = AVCaptureStillImageOutput()
    var error: NSError?

    override func viewDidLoad() {
        super.viewDidLoad()

        let devices = AVCaptureDevice.devices().filter{ $0.hasMediaType(AVMediaTypeVideo) && $0.position == AVCaptureDevicePosition.Back }
        if let captureDevice = devices.first as? AVCaptureDevice {
            captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
            captureSession.sessionPreset = AVCaptureSessionPresetPhoto
            captureSession.startRunning()

            stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if captureSession.canAddOutput(stillImageOutput) {
                captureSession.addOutput(stillImageOutput)
            }

            if let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
                previewLayer.bounds = view.bounds
                previewLayer.position = CGPointMake(view.bounds.midX, view.bounds.midY)
                previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

                let cameraPreview = UIView(frame: CGRectMake(0.0, 0.0, view.bounds.size.width, view.bounds.size.height))
                cameraPreview.layer.addSublayer(previewLayer)
                cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "saveToCamera:"))
                view.addSubview(cameraPreview)
            }
        }
    }

    func saveToCamera(sender: UITapGestureRecognizer) {
        if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
            stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
                (imageDataSampleBuffer, error) -> Void in
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                UIImageWriteToSavedPhotosAlbum(UIImage(data: imageData), nil, nil, nil)
            }
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}
You can use this library to record a view: screen-cap-view, available on GitHub and written in Objective-C.
And to use it in Swift:
--> Drag and drop the .m and .h files into your Xcode project.
--> Make a bridging header file and import the class in it: #import "IAScreenCaptureView.h"
--> Then give a view this class in the Identity Inspector and make an IBOutlet for that view, something like this:
@IBOutlet weak var contentView: IAScreenCaptureView!
--> Then finally, simply start and stop the recording of the view wherever and whenever you want:
For starting the recording: contentView.startRecording()
For stopping the recording: contentView.stopRecording()
// Hope this helps. Happy coding. \o/ , ¯\_(ツ)_/¯ , (╯°□°)╯︵ ┻━┻