AVCaptureStillImageOutput & UIImagePNGRepresentation - iOS

I am having a hard time with something I think shouldn't be so difficult, so I presume I must be looking at the problem from the wrong angle.
In order to understand how AVCaptureStillImageOutput and the camera work I made a tiny app.
This app is able to take a picture and save it as a PNG file (I do not want JPEG). The next time the app is launched, it checks if a file is present and if it is, the image stored inside the file is used as the background view of the app. The idea is rather simple.
The problem is that it does not work. If someone can tell me what I am doing wrong, that would be very helpful.
I would like the picture to appear as the background exactly the way it looked on the display when it was taken, but instead it comes back rotated, at the wrong scale, and so on.
Here is the relevant code (I can provide more information if ever needed).
The viewDidLoad method:
override func viewDidLoad() {
    super.viewDidLoad()
    // For the photo capture:
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
    // Select the appropriate capture devices:
    for device in AVCaptureDevice.devices() {
        // Make sure this particular device supports video.
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera.
            if (device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    tapGesture = UITapGestureRecognizer(target: self, action: Selector("takePhoto"))
    self.view.addGestureRecognizer(tapGesture)
    let filePath = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("BackGroundImage#2x.png")
    if !NSFileManager.defaultManager().fileExistsAtPath(filePath) { return }
    let bgImage = UIImage(contentsOfFile: filePath),
        bgView = UIImageView(image: bgImage)
    self.view.addSubview(bgView)
}
The method to handle the picture taking:
func takePhoto() {
    if !captureSession.running {
        beginPhotoCaptureSession()
        return
    }
    if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
        stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
            (imageDataSampleBuffer, error) -> Void in
            if error == nil {
                var localImage = UIImage(fromSampleBuffer: imageDataSampleBuffer)
                UIGraphicsBeginImageContext(localImage!.size)
                CGContextRotateCTM(UIGraphicsGetCurrentContext(), CGFloat(M_PI_2))
                //localImage!.drawAtPoint(CGPointZero)
                localImage!.drawAtPoint(CGPoint(x: -localImage!.size.height, y: -localImage!.size.width))
                //localImage!.drawAtPoint(CGPoint(x: -localImage!.size.width, y: -localImage!.size.height))
                localImage = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()
                localImage = resizeImage(localImage!, toSize: self.view.frame.size)
                if let data = UIImagePNGRepresentation(localImage!) {
                    let bitMapName = "BackGroundImage#2x"
                    let filename = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("\(bitMapName).png")
                    data.writeToFile(filename, atomically: true)
                    print("Picture saved: \(bitMapName)\n\(filename)")
                }
            } else {
                print("Error on taking a picture:\n\(error)")
            }
        }
    }
    captureSession.stopRunning()
    previewLayer!.removeFromSuperlayer()
}
The method to start the AVCaptureSession:
func beginPhotoCaptureSession() {
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice)
        captureSession.addInput(input)
    } catch let error as NSError {
        // Handle any errors:
        print(error)
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.frame = self.view.layer.frame
    self.view.layer.addSublayer(previewLayer!)
    captureSession.startRunning()
    stillImageOutput.outputSettings = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
    if captureSession.canAddOutput(stillImageOutput) {
        captureSession.addOutput(stillImageOutput)
    }
}
As an example here is an image of a picture taken with the app:
Now here is what I get as the background of the app when it is relaunched:
If it were working correctly, the two pictures would be similar.

I can't see any rotation in the screenshot, but the scale problem comes from the way you redraw the image in takePhoto. You can try:
func takePhoto() {
    if !captureSession.running {
        beginPhotoCaptureSession()
        return
    }
    if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
        stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
            (imageDataSampleBuffer, error) -> Void in
            if error == nil {
                if let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer) {
                    let bitMapName = "BackGroundImage#2x"
                    let filename = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("\(bitMapName).png")
                    data.writeToFile(filename, atomically: true)
                    print("Picture saved: \(bitMapName)\n\(filename)")
                }
            } else {
                print("Error on taking a picture:\n\(error)")
            }
        }
    }
    captureSession.stopRunning()
    previewLayer!.removeFromSuperlayer()
}
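One thing to watch with the snippet above: jpegStillImageNSDataRepresentation returns JPEG bytes, so this writes JPEG data into a file that merely has a .png extension. Since the question explicitly asks for PNG, a minimal (untested) variant re-encodes through UIImage, reusing the question's toolBox helper; the rotation and resizing from the question's own code would still be needed:
if let data = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer),
    image = UIImage(data: data),
    pngData = UIImagePNGRepresentation(image) {
    // Re-encode as PNG so the bytes match the .png extension used above.
    let filename = self.toolBox.getDocumentsDirectory().stringByAppendingPathComponent("BackGroundImage#2x.png")
    pngData.writeToFile(filename, atomically: true)
}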

For those who might hit the same issue at some point, here is what I changed in the code to make it work. There may still be some improvement needed to support all possible orientations, but it's OK for a start.
if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        if error == nil {
            var localImage = UIImage(fromSampleBuffer: imageDataSampleBuffer)
            var imageSize = CGSize(width: UIScreen.mainScreen().bounds.height * UIScreen.mainScreen().scale,
                                   height: UIScreen.mainScreen().bounds.width * UIScreen.mainScreen().scale)
            localImage = resizeImage(localImage!, toSize: imageSize)
            imageSize = CGSize(width: imageSize.height, height: imageSize.width)
            UIGraphicsBeginImageContext(imageSize)
            CGContextRotateCTM(UIGraphicsGetCurrentContext(), CGFloat(M_PI_2))
            localImage!.drawAtPoint(CGPoint(x: 0.0, y: -imageSize.width))
            localImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            if let data = UIImagePNGRepresentation(localImage!) {
                data.writeToFile(self.bmpFilePath, atomically: true)
            }
        } else {
            print("Error on taking a picture:\n\(error)")
        }
    }
}
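As a starting point for the "all possible orientations" caveat above, one hedged option is to pick the rotation angle from the current device orientation before drawing. The angles below are an untested sketch and may need adjusting per device and camera:
// Hypothetical helper: rotation to apply to the captured still for the current device orientation.
func rotationAngleForCurrentOrientation() -> CGFloat {
    switch UIDevice.currentDevice().orientation {
    case .Portrait:           return CGFloat(M_PI_2)
    case .PortraitUpsideDown: return CGFloat(-M_PI_2)
    case .LandscapeLeft:      return 0.0
    case .LandscapeRight:     return CGFloat(M_PI)
    default:                  return CGFloat(M_PI_2)
    }
}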

Related

How do I capture video of a changing CIImage without using the camera?

I've already asked a question without any responses here:
How do I record changes on a CIImage to a video using AVAssetWriter?
But perhaps my question needs to be simpler. My Google search has been fruitless. How do I capture video of a changing CIImage in real time, without using the camera?
Using captureOutput, I get a CMSampleBuffer, which I can make into a CVPixelBuffer. AVAssetWriterInput's mediaType is set to video, but I think it expects compressed video. In addition, I'm not clear on whether the AVAssetWriterInput's expectsMediaDataInRealTime property should be set to true or not.
It seems like it should be fairly simple, but everything I've attempted causes my AVAssetWriter's status to become failed.
Here is my last attempt at making this work. Still failing:
@objc func importLivePreview(){
guard var importedImage = importedDryCIImage else { return }
DispatchQueue.main.async(){
// apply filter to camera image
// this is what makes the CIImage appear that it is changing
importedImage = self.applyFilterAndReturnImage(ciImage: importedImage, orientation: UIImage.Orientation.right, currentCameraRes:currentCameraRes!)
if self.videoIsRecording &&
self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
guard let writer: AVAssetWriter = self.assetWriter, writer.status == .writing else {
return
}
guard let cv:CVPixelBuffer = self.buffer(from: importedImage) else {
print("CVPixelBuffer could not be created.")
return
}
self.MTLContext?.render(importedImage, to: cv)
self.currentSampleTime = CMTimeMakeWithSeconds(0.1, preferredTimescale: 1000000000)
guard let currentSampleTime = self.currentSampleTime else {
return
}
let success = self.assetWriterPixelBufferInput?.append(cv, withPresentationTime: currentSampleTime)
if success == false {
print("Pixel Buffer input failed")
}
}
guard let MTLView = self.MTLCaptureView else {
print("MTLCaptureView is not found or nil.")
return
}
// update the MTKView with the changed CIImage so the user can see the changed image
MTLView.image = importedImage
}
}
I got it working. The problem was that I wasn't offsetting currentSampleTime. This example doesn't have accurate offsets, but it shows that the offset needs to be added onto the last time.
@objc func importLivePreview(){
guard var importedImage = importedDryCIImage else { return }
DispatchQueue.main.async(){
// apply filter to camera image
// this is what makes the CIImage appear that it is changing
importedImage = self.applyFilterAndReturnImage(ciImage: importedImage, orientation: UIImage.Orientation.right, currentCameraRes:currentCameraRes!)
if self.videoIsRecording &&
self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
guard let writer: AVAssetWriter = self.assetWriter, writer.status == .writing else {
return
}
guard let cv:CVPixelBuffer = self.buffer(from: importedImage) else {
print("CVPixelBuffer could not be created.")
return
}
self.MTLContext?.render(importedImage, to: cv)
guard let currentSampleTime = self.currentSampleTime else {
return
}
// offset currentSampleTime
let sampleTimeOffset = CMTimeMakeWithSeconds(0.1, preferredTimescale: 1000000000)
self.currentSampleTime = CMTimeAdd(currentSampleTime, sampleTimeOffset)
print("currentSampleTime = \(String(describing: currentSampleTime))")
let success = self.assetWriterPixelBufferInput?.append(cv, withPresentationTime: currentSampleTime)
if success == false {
print("Pixel Buffer input failed")
}
}
guard let MTLView = self.MTLCaptureView else {
print("MTLCaptureView is not found or nil.")
return
}
// update the MTKView with the changed CIImage so the user can see the changed image
MTLView.image = importedImage
}
}
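Since the answer itself notes the fixed 0.1 s offset isn't accurate, one hedged refinement is to derive the presentation time from the wall-clock time that has actually elapsed since recording started. Here recordingStartTime is a hypothetical property, set once when recording begins:
// Set once when recording starts, e.g. in the record action:
//     self.recordingStartTime = CACurrentMediaTime()

// Then, per frame, instead of adding a fixed 0.1 s:
let elapsed = CACurrentMediaTime() - self.recordingStartTime
self.currentSampleTime = CMTimeMakeWithSeconds(elapsed, preferredTimescale: 1_000_000_000)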

Downsized Image gets blurry after being copied to Pasteboard - Swift 3.0

I am capturing an image which is then placed in a small imageView. The picture is not blurry in the small imageView, but when I copy it to the clipboard I resize it to match the imageView's size, and it is blurry when I paste.
Here is the code:
import UIKit
import AVFoundation
class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
@IBOutlet weak var cameraView: UIView!
@IBOutlet weak var imageView: UIImageView!
var session: AVCaptureSession?
var stillImageOutput: AVCaptureStillImageOutput?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?
var captureDevice:AVCaptureDevice?
var imagePicker: UIImagePickerController!
override func viewDidLoad() {
super.viewDidLoad()
alignment()
tapToCopy()
}
override func viewWillAppear(_ animated: Bool) {
session = AVCaptureSession()
session!.sessionPreset = AVCaptureSessionPresetPhoto
let videoDevices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo)
for device in videoDevices!{
let device = device as! AVCaptureDevice
if device.position == AVCaptureDevicePosition.front {
captureDevice = device
}
}
//We will make a new AVCaptureDeviceInput and attempt to associate it with our backCamera input device.
//There is a chance that the input device might not be available, so we will set up a try catch to handle any potential errors we might encounter.
var error: NSError?
var input: AVCaptureDeviceInput!
do {
input = try AVCaptureDeviceInput(device: captureDevice)
} catch let error1 as NSError {
error = error1
input = nil
print(error!.localizedDescription)
}
if error == nil && session!.canAddInput(input) {
session!.addInput(input)
// ...
// The remainder of the session setup will go here...
stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if session!.canAddOutput(stillImageOutput) {
session!.addOutput(stillImageOutput)
// ...
// Configure the Live Preview here...
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
cameraView.layer.addSublayer(videoPreviewLayer!)
session!.startRunning()
}
}
}
func alignment() {
let height = view.bounds.size.height
let width = view.bounds.size.width
cameraView.bounds.size.height = height / 10
cameraView.bounds.size.width = height / 10
cameraView.layer.cornerRadius = height / 20
imageView.bounds.size.height = height / 10
imageView.bounds.size.width = height / 10
imageView.layer.cornerRadius = height / 20
imageView.clipsToBounds = true
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
videoPreviewLayer!.frame = cameraView.bounds
}
@IBAction func takePic(_ sender: Any) {
if (stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)) != nil {
let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
// ...
// Code for photo capture goes here...
stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
// ...
// Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
if sampleBuffer != nil {
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
let dataProvider = CGDataProvider(data: imageData as! CFData)
let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
let image = UIImage(cgImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.right)
// ...
// Add the image to captureImageView here...
self.imageView.image = self.resizeImage(image: image, newHeight: self.view.bounds.size.height / 10)
}
})
}
}
func tapToCopy() {
let imageTap = UITapGestureRecognizer(target: self, action: #selector(self.copyToClipboard(recognizer:)))
imageTap.numberOfTapsRequired = 1
imageView.isUserInteractionEnabled = true
imageView.addGestureRecognizer(imageTap)
}
func copyToClipboard(recognizer: UITapGestureRecognizer) {
UIPasteboard.general.image = self.resizeImage(image: imageView.image!, newHeight: self.view.bounds.size.height / 10)
}
func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
let scale = newHeight / image.size.height
let newWidth = image.size.width * scale
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
}
First of all, you are saying:
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
Never, never, never call UIGraphicsBeginImageContext. Just pretend it doesn't exist. Always call UIGraphicsBeginImageContextWithOptions instead. It takes two extra parameters, which should just about always be false and 0. Things will be a lot better when you make that change, because the image will contain scale information that you are currently stripping out.
Another problem is that you are resizing the same image twice in succession — once to display the image in the image view, and then again resizing it some more when you pull the image from the image view and put it on the pasteboard. Don't do that! Instead, store the original image, without resizing. Later, you can put that on the pasteboard — or resize the original image so that you are only resizing it once, totally separate from the image in the image view.
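Applying that advice to the question's resizeImage, a sketch (untested) might look like this; the only change is the context call and its two extra parameters:
func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
    let scale = newHeight / image.size.height
    let newWidth = image.size.width * scale
    // false = not opaque; 0 = use the device's screen scale instead of stripping it.
    UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0)
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
Call it once, on the stored original, at the moment you put the image on the pasteboard, rather than resizing the already-resized image view contents a second time.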

Simulate AVLayerVideoGravityResizeAspectFill: crop and center video to mimic preview without losing sharpness

Based on this SO post, the code below rotates, centers, and crops a video captured live by the user.
The capture session uses AVCaptureSessionPresetHigh for the preset value, and the preview layer uses AVLayerVideoGravityResizeAspectFill for video gravity. This preview is extremely sharp.
The exported video, however, is not as sharp, ostensibly because scaling from the 1920x1080 resolution for the back camera on the 5S to 320x568 (target size for the exported video) introduces fuzziness from throwing away pixels?
Assuming there is no way to scale from 1920x1080 to 320x568 without some fuzziness, the question becomes: how to mimic the sharpness of the preview layer?
Somehow Apple is using an algorithm to convert a 1920x1080 video into a crisp-looking preview frame of 320x568.
Is there a way to mimic this with either AVAssetWriter or AVAssetExportSession?
func cropVideo() {
// Set start time
let startTime = NSDate().timeIntervalSince1970
// Create main composition & its tracks
let mainComposition = AVMutableComposition()
let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
// Get source video & audio tracks
let videoPath = getFilePath(curSlice!.getCaptureURL())
let videoURL = NSURL(fileURLWithPath: videoPath)
let videoAsset = AVURLAsset(URL: videoURL, options: nil)
let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
let videoSize = sourceVideoTrack.naturalSize
// Get rounded time for video
let roundedDur = floor(curSlice!.getDur() * 100) / 100
let videoDur = CMTimeMakeWithSeconds(roundedDur, 100)
// Add source tracks to composition
do {
try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
} catch {
print("Error with insertTimeRange while exporting video: \(error)")
}
// Create video composition
// -- Set video frame
let outputSize = view.bounds.size
let videoComposition = AVMutableVideoComposition()
print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")
// -- Set parent layer
let parentLayer = CALayer()
parentLayer.frame = CGRectMake(0, 0, outputSize.width, outputSize.height)
parentLayer.contentsGravity = kCAGravityResizeAspectFill
// -- Set composition props
videoComposition.renderSize = CGSize(width: outputSize.width, height: outputSize.height)
videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))
// -- Create video composition instruction
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoDur)
// -- Use layer instruction to match video to output size, mimicking AVLayerVideoGravityResizeAspectFill
let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
let videoTransform = getResizeAspectFillTransform(videoSize, outputSize: outputSize)
videoLayerInstruction.setTransform(videoTransform, atTime: kCMTimeZero)
// -- Add layer instruction
instruction.layerInstructions = [videoLayerInstruction]
videoComposition.instructions = [instruction]
// -- Create video layer
let videoLayer = CALayer()
videoLayer.frame = parentLayer.frame
// -- Add sublayers to parent layer
parentLayer.addSublayer(videoLayer)
// -- Set animation tool
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)
// Create exporter
let outputURL = getFilePath(getUniqueFilename(gMP4File))
let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
exporter.outputURL = NSURL(fileURLWithPath: outputURL)
exporter.outputFileType = AVFileTypeMPEG4
exporter.videoComposition = videoComposition
exporter.shouldOptimizeForNetworkUse = true
exporter.canPerformMultiplePassesOverSourceMediaData = true
// Export to video
exporter.exportAsynchronouslyWithCompletionHandler({
// Log status
let asset = AVAsset(URL: exporter.outputURL!)
print("Exported slice video. Tracks: \(asset.tracks.count). Duration: \(CMTimeGetSeconds(asset.duration)). Size: \(exporter.estimatedOutputFileLength). Status: \(getExportStatus(exporter)). Output URL: \(exporter.outputURL!). Export time: \( NSDate().timeIntervalSince1970 - startTime).")
// Tell delegate
//delegate.didEndExport(exporter)
self.curSlice!.setOutputURL(exporter.outputURL!.lastPathComponent!)
gUser.save()
})
}
// Returns transform, mimicking AVLayerVideoGravityResizeAspectFill, that converts video of <inputSize> to one of <outputSize>
private func getResizeAspectFillTransform(videoSize: CGSize, outputSize: CGSize) -> CGAffineTransform {
// Compute ratios between video & output sizes
let widthRatio = outputSize.width / videoSize.width
let heightRatio = outputSize.height / videoSize.height
// Set scale to larger of two ratios since goal is to fill output bounds
let scale = widthRatio >= heightRatio ? widthRatio : heightRatio
// Compute video size after scaling
let newWidth = videoSize.width * scale
let newHeight = videoSize.height * scale
// Compute translation required to center image after scaling
// -- Assumes CoreAnimationTool places video frame at (0, 0). Because scale transform is applied first, we must adjust
// each translation point by scale factor.
let translateX = (outputSize.width - newWidth) / 2 / scale
let translateY = (outputSize.height - newHeight) / 2 / scale
// Set transform to resize video while retaining aspect ratio
let resizeTransform = CGAffineTransformMakeScale(scale, scale)
// Apply translation & create final transform
let finalTransform = CGAffineTransformTranslate(resizeTransform, translateX, translateY)
// Return final transform
return finalTransform
}
320x568 video taken with Tim's code:
640x1136 video taken with Tim's code:
Try this. Start a new Single View project in Swift, replace the ViewController with this code and you should be good to go!
I've set up a previewLayer which is a different size from the output, change it at the top of the file.
I added some basic orientation support. It outputs slightly different sizes for landscape vs. portrait. You can specify whatever video dimensions you like here and it should work fine.
Check out the videoSettings dictionary (line 278ish) for the codec and size of the output file. You can also add other settings here to deal with key frame intervals etc. to tweak the output size.
I added a recording image to show when it's recording (tap starts, tap ends); you'll need to add an asset called recording to Assets.xcassets (or comment out line 106 where it tries to load it).
That's pretty much it. Good luck!
Oh, and it dumps the video into the app's Documents directory, so you'll need to go to Window / Devices and download the container to see the video easily. In the TODO there's a section where you could hook in and copy the file to the Photo Library (makes testing WAY easier).
import UIKit
import AVFoundation
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)
var recordingImage : UIImageView = UIImageView()
var previewLayer : AVCaptureVideoPreviewLayer?
var audioQueue : dispatch_queue_t?
var videoQueue : dispatch_queue_t?
let captureSession = AVCaptureSession()
var assetWriter : AVAssetWriter?
var assetWriterInputCamera : AVAssetWriterInput?
var assetWriterInputAudio : AVAssetWriterInput?
var outputConnection: AVCaptureConnection?
var captureDeviceBack : AVCaptureDevice?
var captureDeviceFront : AVCaptureDevice?
var captureDeviceMic : AVCaptureDevice?
var sessionSetupDone: Bool = false
var isRecordingStarted = false
//var recordingStartedTime = kCMTimeZero
var videoOutputURL : NSURL?
var captureSize: CGSize = CGSizeMake(1280, 720)
var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)
var captureDeviceTrigger = true
var captureDevice: AVCaptureDevice? {
get {
return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
}
}
override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
return UIInterfaceOrientationMask.AllButUpsideDown
}
override func shouldAutorotate() -> Bool {
if isRecordingStarted {
return false
}
if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
return false
}
if let cameraPreview = self.previewLayer {
if let connection = cameraPreview.connection {
if connection.supportsVideoOrientation {
switch UIDevice.currentDevice().orientation {
case .LandscapeLeft:
connection.videoOrientation = .LandscapeRight
case .LandscapeRight:
connection.videoOrientation = .LandscapeLeft
case .Portrait:
connection.videoOrientation = .Portrait
case .FaceUp:
return false
case .FaceDown:
return false
default:
break
}
}
}
}
return true
}
override func viewDidLoad() {
super.viewDidLoad()
setupViewControls()
//self.recordingStartedTime = kCMTimeZero
// Setup capture session related logic
videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)
setupCaptureDevices()
pre_start()
}
//MARK: UI methods
func setupViewControls() {
// TODO: I have an image (red circle) in an Assets.xcassets. Replace the following with your own image
recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
recordingImage.image = UIImage(named: "recording")
recordingImage.hidden = true
self.view.addSubview(recordingImage)
// Setup tap to record and stop
let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
tapGesture.numberOfTapsRequired = 1
self.view.addGestureRecognizer(tapGesture)
}
func didGetTapped(selector: UITapGestureRecognizer) {
if self.isRecordingStarted {
self.view.gestureRecognizers![0].enabled = false
recordingImage.hidden = true
self.stopRecording()
} else {
recordingImage.hidden = false
self.startRecording()
}
self.isRecordingStarted = !self.isRecordingStarted
}
func switchCamera(selector: UIButton) {
self.captureDeviceTrigger = !self.captureDeviceTrigger
pre_start()
}
//MARK: Video logic
func setupCaptureDevices() {
let devices = AVCaptureDevice.devices()
for device in devices {
if device.hasMediaType(AVMediaTypeVideo) {
if device.position == AVCaptureDevicePosition.Front {
captureDeviceFront = device as? AVCaptureDevice
NSLog("Video Controller: Setup. Front camera is found")
}
if device.position == AVCaptureDevicePosition.Back {
captureDeviceBack = device as? AVCaptureDevice
NSLog("Video Controller: Setup. Back camera is found")
}
}
if device.hasMediaType(AVMediaTypeAudio) {
captureDeviceMic = device as? AVCaptureDevice
NSLog("Video Controller: Setup. Audio device is found")
}
}
}
func alertPermission() {
let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to Camera and Microphone", preferredStyle: UIAlertControllerStyle.Alert)
permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
print("Video Controller: Permission for camera/mic denied. Going to settings")
UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
print(UIApplicationOpenSettingsURLString)
}))
presentViewController(permissionAlert, animated: true, completion: nil)
}
func pre_start() {
NSLog("Video Controller: pre_start")
let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)
if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
self.alertPermission()
pre_start()
return
}
if (videoPermission == AVAuthorizationStatus.Authorized) {
self.start()
return
}
AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted :Bool) -> Void in
self.pre_start()
})
}
func start() {
NSLog("Video Controller: start")
if captureSession.running {
captureSession.beginConfiguration()
if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
captureSession.removeInput(currentInput)
}
do {
try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
} catch {
print("Video Controller: begin session. Error adding video input device")
}
captureSession.commitConfiguration()
return
}
do {
try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
} catch {
print("Video Controller: start. error adding device: \(error)")
}
if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
self.previewLayer = layer
layer.videoGravity = AVLayerVideoGravityResizeAspect
if let layerConnection = layer.connection {
if UIDevice.currentDevice().orientation == .LandscapeRight {
layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
} else if UIDevice.currentDevice().orientation == .LandscapeLeft {
layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
} else if UIDevice.currentDevice().orientation == .Portrait {
layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
}
}
// TODO: Set the output size of the Preview Layer here
layer.frame = previewFrame
self.view.layer.insertSublayer(layer, atIndex: 0)
}
let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
captureSession.addOutput(videoOutput)
if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
self.outputConnection = connection
}
let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
let audioOutput = AVCaptureAudioDataOutput()
audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
captureSession.addOutput(audioOutput)
captureSession.startRunning()
}
func getAssetWriter() -> AVAssetWriter? {
NSLog("Video Controller: getAssetWriter")
let fileManager = NSFileManager.defaultManager()
let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
guard let documentDirectory: NSURL = urls.first else {
print("Video Controller: getAssetWriter: documentDir Error")
return nil
}
let local_video_name = NSUUID().UUIDString + ".mp4"
self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)
guard let url = self.videoOutputURL else {
return nil
}
self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)
guard let writer = self.assetWriter else {
return nil
}
let videoSettings: [String : AnyObject] = [
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : captureSize.width,
AVVideoHeightKey : captureSize.height,
]
assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
assetWriterInputCamera?.expectsMediaDataInRealTime = true
writer.addInput(assetWriterInputCamera!)
let audioSettings : [String : AnyObject] = [
AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
AVNumberOfChannelsKey : 2,
AVSampleRateKey : NSNumber(double: 44100.0)
]
assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
assetWriterInputAudio?.expectsMediaDataInRealTime = true
writer.addInput(assetWriterInputAudio!)
return writer
}
func configurePreset() {
NSLog("Video Controller: configurePreset")
if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
captureSession.sessionPreset = AVCaptureSessionPreset1280x720
} else {
captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
}
}
func startRecording() {
NSLog("Video Controller: Start recording")
captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT
if let connection = self.outputConnection {
if connection.supportsVideoOrientation {
if UIDevice.currentDevice().orientation == .LandscapeRight {
connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
NSLog("orientation: right")
} else if UIDevice.currentDevice().orientation == .LandscapeLeft {
connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
NSLog("orientation: left")
} else {
connection.videoOrientation = AVCaptureVideoOrientation.Portrait
NSLog("orientation: portrait")
}
}
}
if let writer = getAssetWriter() {
self.assetWriter = writer
let recordingClock = self.captureSession.masterClock
writer.startWriting()
writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
}
}
func stopRecording() {
NSLog("Video Controller: Stop recording")
if let writer = self.assetWriter {
writer.finishWritingWithCompletionHandler{Void in
print("Recording finished")
// TODO: Handle the video file, copy it from the temp directory etc.
}
}
}
//MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
if !self.isRecordingStarted {
return
}
if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
dispatch_async(audioQueue!) {
audio.appendSampleBuffer(sampleBuffer)
}
return
}
if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
dispatch_async(videoQueue!) {
camera.appendSampleBuffer(sampleBuffer)
}
}
}
}
Additional Edit Info
It seems from our additional conversation in the comments that what you want is to reduce the file size of the output video while keeping the dimensions as high as you can (to retain quality). Remember, the size you give a layer on screen is in POINTS, not PIXELS; for example, a 320x568-point layer on a 2x device covers 640x1136 pixels. You're writing an output file in pixels, so it's not a 1:1 comparison to the iPhone screen's reference units.
To reduce the size of the output file, you have two easy options:
Reduce the resolution - but if you go too small, you'll lose quality on playback, especially if the video gets scaled up again when played. Try 640x360 or 720x480 for the output pixels.
Adjust the compression settings. The iPhone has default settings that typically produce a higher quality (larger output file size) video.
Replace the video settings with these options and see how you go:
let videoSettings: [String : AnyObject] = [
    AVVideoCodecKey : AVVideoCodecH264,
    AVVideoWidthKey : captureSize.width,
    AVVideoHeightKey : captureSize.height,
    AVVideoCompressionPropertiesKey : [
        AVVideoAverageBitRateKey : 2000000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
        AVVideoMaxKeyFrameIntervalKey : 90,
    ]
]
The AVVideoCompressionPropertiesKey settings tell AVFoundation how to actually compress the video. The lower the bit rate, the higher the compression (and therefore the better it streams and the less disk space it uses, BUT the lower the quality). The max key frame interval is how often an uncompressed (key) frame is written out; setting it higher (in a ~30 frames per second video, 90 is once every 3 seconds) also reduces quality but decreases size. You'll find the constants referenced here https://developer.apple.com/library/prerelease/ios/documentation/AVFoundation/Reference/AVFoundation_Constants/index.html#//apple_ref/doc/constant_group/Video_Settings

AVCaptureSession Image Is Size Of Screen [closed]

I'm using AVCaptureSession to create a camera and I'm trying to take a picture with it. Here is the code that loads the camera...
func reloadCamera() {
cameraView.backgroundColor = UIColor.clearColor()
captureSession = AVCaptureSession()
captureSession!.sessionPreset = AVCaptureSessionPresetHigh
let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
if (camera == false) {
let videoDevices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
for device in videoDevices {
let device = device as! AVCaptureDevice
if device.position == AVCaptureDevicePosition.Front {
captureDevice = device
break
} else {
captureDevice = backCamera
}
}
} else {
captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
}
do {
let input = try AVCaptureDeviceInput(device: captureDevice)
if captureSession!.canAddInput(input) {
captureSession!.addInput(input)
stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if captureSession!.canAddOutput(stillImageOutput) {
captureSession!.addOutput(stillImageOutput)
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
previewLayer?.frame = cameraView.bounds
cameraView.layer.addSublayer(previewLayer!)
captureSession!.startRunning()
}
}
} catch let error as NSError {
// Handle any errors
print(error)
}
}
Here is how I take a photo...
func didPressTakePhoto(){
toggleFlash()
if let videoConnection = stillImageOutput?.connectionWithMediaType(AVMediaTypeVideo){
videoConnection.videoOrientation = (previewLayer?.connection.videoOrientation)!
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {
(sampleBuffer, error) in
if sampleBuffer != nil {
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
self.capturedImage = UIImage(data: imageData)
if self.camera == true {
self.capturedImage = UIImage(CGImage: self.capturedImage.CGImage!, scale: 1.0, orientation: UIImageOrientation.Right)
} else {
self.capturedImage = UIImage(CGImage: self.capturedImage.CGImage!, scale: 1.0, orientation: UIImageOrientation.LeftMirrored)
}
self.tempImageView.image = self.capturedImage
UIImageWriteToSavedPhotosAlbum(self.capturedImage, nil, nil, nil);
self.tempImageView.hidden = false
self.goButton.hidden = false
self.cameraView.hidden = true
self.removeImageButton.hidden = false
self.captureButton.hidden = true
self.flashChanger.hidden = true
self.switchCameraButton.hidden = true
}
})
}
}
But what's happening is that the picture that is taken is as large as the entire screen (like Snapchat), and I only want it to be as big as the UIView I'm taking it from. Tell me if you need any more information. Thank you!
First of all, you are the one who is setting the capture session's preset to AVCaptureSessionPresetHigh. If you don't need that, don't do that; use a smaller-size preset. For example, use AVCaptureSessionPreset640x480 to get a smaller size.
Second, no matter what the size of the resulting photo may be, reducing it to the size you need, and displaying it at that size, is entirely up to you. Ultimately, it's just a matter of the size and content mode of your image view, though it is always better to reduce the image size as well, in order to avoid wasting memory (as you would be doing by displaying an image that is significantly larger than what the user is shown).
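A hedged sketch of both suggestions combined, using the question's Swift 2-era naming (scaledImage is a hypothetical helper, and the 640x480 preset is just an example):
// If the full-resolution photo isn't needed, a smaller preset keeps things light:
captureSession!.sessionPreset = AVCaptureSessionPreset640x480

// Hypothetical helper: downscale once to the target view's size (in points);
// passing 0 for scale keeps the device's screen scale so the result stays sharp.
func scaledImage(image: UIImage, toSize size: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.drawInRect(CGRect(origin: CGPointZero, size: size))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}

// ...then in didPressTakePhoto:
self.tempImageView.contentMode = .ScaleAspectFill
self.tempImageView.image = scaledImage(self.capturedImage, toSize: self.tempImageView.bounds.size)
The image view then shows a copy that matches its own bounds, while self.capturedImage keeps the full-resolution original if you still need it elsewhere.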

How to take multiple photos in a sequence (1s delay each), using Swift on iOS8.1?

I am trying to take few photos after a single user click on a Camera preview so I can present them and user can pick one that was timed best or use all in a "film strip" mode. The expected user experience is: "I open a camera, take a picture and then I see 5 pictures taken second by second. I do not have to press the 'take picture' button 5 times, one is just enough to start the sequence".
I am new to iOS and Swift and I base my work on a 'Swift Recipes' book (https://www.safaribooksonline.com/library/view/ios-8-swift/9781491908969/).
The source code to take a single photo is:
controller = UIImagePickerController()
if let theController = controller{
theController.sourceType = .Camera
theController.mediaTypes = [kUTTypeImage as NSString]
theController.allowsEditing = true
theController.delegate = self
presentViewController(theController, animated: true, completion: nil)
}
plus the relevant image handling (via func imagePickerController). I tried playing with the controller object above, but failed miserably :( I believe this is not the right way to get what I'm looking for, but I'm struggling to find the right one. Any help will be appreciated.
I don't know if there is a mode for this behavior in UIImagePickerController. If I were you I would use AVFoundation to implement this.
import UIKit
import AVFoundation
class ViewController: UIViewController {
var connection: AVCaptureConnection!
var output: AVCaptureStillImageOutput!
var videoPreviewLayer: AVCaptureVideoPreviewLayer!
@IBOutlet var videPreviewView: UIView!
override func viewDidLoad() {
super.viewDidLoad()
self.createCamera()
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
self.videoPreviewLayer.bounds = self.videPreviewView.bounds
self.videoPreviewLayer.position = CGPoint(x: CGRectGetMidX(self.videPreviewView.bounds), y: CGRectGetMidY(self.videPreviewView.bounds))
}
func createCamera() {
let captureSession = AVCaptureSession()
if captureSession.canSetSessionPreset(AVCaptureSessionPresetHigh) {
captureSession.sessionPreset = AVCaptureSessionPresetHigh
} else {
println("Error: Couldn't set preset = \(AVCaptureSessionPresetHigh)")
return
}
let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
var error: NSError?
let inputDevice = AVCaptureDeviceInput.deviceInputWithDevice(cameraDevice, error: &error) as AVCaptureDeviceInput
if let error = error {
println("Error: \(error)")
return
}
if captureSession.canAddInput(inputDevice) {
captureSession.addInput(inputDevice)
} else {
println("Error: Couldn't add input device")
return
}
let imageOutput = AVCaptureStillImageOutput()
if captureSession.canAddOutput(imageOutput) {
captureSession.addOutput(imageOutput)
} else {
println("Error: Couldn't add output")
return
}
// Store imageOutput. We will need it to take photo
self.output = imageOutput
let connection = imageOutput.connections.first as AVCaptureConnection!
// Store this connection in property. We will need it when we take image.
self.connection = connection
connection.videoOrientation = AVCaptureVideoOrientation.Portrait
captureSession.startRunning()
// This will preview the camera
let videoLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
videoLayer.contentsScale = UIScreen.mainScreen().scale
self.videPreviewView.layer.addSublayer(videoLayer)
// Store this layer instance in property. We will place it into a view
self.videoPreviewLayer = videoLayer
}
func takePhoto(completion: (UIImage?, NSError?) -> Void) {
self.output.captureStillImageAsynchronouslyFromConnection(self.connection) { buffer, error in
if let error = error {
completion(nil, error)
} else {
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
let image = UIImage(data: imageData, scale: UIScreen.mainScreen().scale)
completion(image, nil)
}
}
}
}
Now all you have to do is create an action that calls takePhoto a few times in a row. You can use NSTimer to do this.
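The question asks for five shots roughly a second apart, so a rough sketch of that timer-driven burst might look like the following (photoTimer, capturedPhotos, shotsTaken and startPhotoBurst are hypothetical names, not part of the answer's code):
var photoTimer: NSTimer?          // hypothetical: drives the burst
var capturedPhotos = [UIImage]()  // hypothetical: collects the burst results
var shotsTaken = 0

func startPhotoBurst() {
    capturedPhotos.removeAll()
    shotsTaken = 0
    // One shot per second, as described in the question.
    photoTimer = NSTimer.scheduledTimerWithTimeInterval(1.0, target: self,
        selector: "burstTick", userInfo: nil, repeats: true)
}

func burstTick() {
    shotsTaken += 1
    if shotsTaken >= 5 {
        photoTimer?.invalidate()
        photoTimer = nil
    }
    takePhoto { image, error in
        if let image = image {
            self.capturedPhotos.append(image)
            // Present the film strip once all five completions have arrived.
        }
    }
}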
