The app is crashing at random points in this function. I believe I need to scale the image down, but I am not sure. The only requirements I have for the image are that it remains square and decently sized, because it needs to be big enough to span the entire screen's width.
Here is an error that sometimes comes along with the crash:
warning: could not load any Objective-C class information. This will significantly reduce the quality of type information available.
@IBAction func didPressTakePhoto(sender: UIButton) {
    self.previewLayer?.connection.enabled = false
    if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil) {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

                var image = UIImage()
                if UIDevice.currentDevice().orientation == .Portrait {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Up)
                } else if UIDevice.currentDevice().orientation == .LandscapeRight {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Down)
                }

                //Crop the image to a square
                let imageSize: CGSize = image.size
                let width: CGFloat = imageSize.width
                let height: CGFloat = imageSize.height
                if width != height {
                    let newDimension: CGFloat = min(width, height)
                    let widthOffset: CGFloat = (width - newDimension) / 2
                    let heightOffset: CGFloat = (height - newDimension) / 2
                    UIGraphicsBeginImageContextWithOptions(CGSizeMake(newDimension, newDimension), false, 0.0)
                    image.drawAtPoint(CGPointMake(-widthOffset, -heightOffset), blendMode: .Copy, alpha: 1.0)
                    image = UIGraphicsGetImageFromCurrentImageContext()
                    let imageData: NSData = UIImageJPEGRepresentation(image, 0.1)!
                    UIGraphicsEndImageContext()
                    self.captImage = UIImage(data: imageData)!
                }
            }
            self.performSegueWithIdentifier("fromCustomCamera", sender: self)
        })
    }
}
This code is running in my viewDidAppear and stillImageOutput is returning nil when I take a photo.
if self.isRunning == false {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    var error: NSError?
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        print(error)
        input = nil
    }

    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.addOutput(stillImageOutput)

            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewView.layer.addSublayer(previewLayer!)

            captureSession!.startRunning()
            self.isRunning = true
        }
    }
}
Fixed it. It was actually crashing because my images were way too big; I had to compress them.
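For reference, here is a minimal sketch of the kind of downscale-and-compress step that resolves this sort of memory pressure. The helper name, the target width, and the JPEG quality are illustrative assumptions, not part of the original code:

    // Sketch only: shrink the cropped square to roughly screen width before storing it.
    // The target width and JPEG quality are assumptions to tune.
    func compressedSquareImage(image: UIImage) -> UIImage? {
        let targetWidth = UIScreen.mainScreen().bounds.width
        let targetSize = CGSizeMake(targetWidth, targetWidth)

        // Redraw at the smaller size; scale 0.0 keeps the device's screen scale.
        UIGraphicsBeginImageContextWithOptions(targetSize, false, 0.0)
        image.drawInRect(CGRect(origin: CGPointZero, size: targetSize))
        let resized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // Re-encode as JPEG to cut the footprint further.
        guard let data = UIImageJPEGRepresentation(resized, 0.7) else { return nil }
        return UIImage(data: data)
    }

Calling something like this on the cropped square before assigning self.captImage keeps the image square and screen-width sized without carrying the full-resolution capture around.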
I'm currently having an issue with AVCaptureStillImageOutput: when I try to take a picture, the image is nil. My attempts at debugging have found that the captureStillImageAsynchronously method isn't being called at all, and I haven't been able to test whether the sample buffer is nil or not. I'm using this method to feed the camera image into another method that combines the camera image and another image into a single image. The thread fails during that last method, and when I try to examine the image from the capture method it is unavailable. What do I need to do to get the camera capture working?
public func capturePhotoOutput() -> UIImage
{
    var image: UIImage = UIImage()
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
    {
        print("Video Connection established ---------------------")
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil)
            {
                print("Sample Buffer not nil ---------------------")
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let camImage = UIImage(cgImage: cgImageRef!, scale: CGFloat(1.0), orientation: UIImageOrientation.right)
                image = camImage
            }
            else
            {
                print("nil sample buffer ---------------------")
            }
        })
    }
    if (stillImageOutput?.isCapturingStillImage)!
    {
        print("image capture in progress ---------------------")
    }
    else
    {
        print("capture not in progress -------------------")
    }
    return image
}
EDIT: Added the method below, where the camera image is being used.
func takePicture() -> UIImage
{
    /*
    videoComponent!.getVideoController().capturePhotoOutput
    { (image) in
        //Your code
        guard let topImage = image else
        {
            print("No image")
            return
        }
    }
    */
    let topImage = videoComponent!.getVideoController().capturePhotoOutput() //overlay + Camera
    let bottomImage = captureTextView() //text
    let size = CGSize(width: (topImage.size.width), height: (topImage.size.height) + (bottomImage.size.height))
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    topImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: (topImage.size.height)))
    bottomImage.draw(in: CGRect(x: (size.width - bottomImage.size.width)/2, y: (topImage.size.height), width: bottomImage.size.width, height: (bottomImage.size.height)))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
If you use an async method, the function will return the wrong value, because the async call is still in progress when the function returns. You can use a completion block instead, like this (note the completion closure must be marked @escaping, since it is called from the asynchronous capture handler):
public func capturePhotoOutput(completion: @escaping (UIImage?) -> ())
{
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
    {
        print("Video Connection established ---------------------")
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil)
            {
                print("Sample Buffer not nil ---------------------")
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let camImage = UIImage(cgImage: cgImageRef!, scale: CGFloat(1.0), orientation: UIImageOrientation.right)
                completion(camImage)
            }
            else
            {
                completion(nil)
            }
        })
    }
    else
    {
        completion(nil)
    }
}
How to use it:
capturePhotoOutput
{ (image) in
    guard let topImage = image else {
        print("No image")
        return
    }
    //Your code
}
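One extra point worth hedging: the capture completion handler is not guaranteed to arrive on the main thread, so if the returned image goes straight into UIKit it is safer to hop to the main queue first. A small sketch; the imageView outlet here is purely illustrative:

    capturePhotoOutput { (image) in
        DispatchQueue.main.async {
            guard let topImage = image else {
                print("No image")
                return
            }
            // UIKit work belongs on the main queue.
            self.imageView.image = topImage // `imageView` is an assumed outlet
        }
    }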
Edit:
func takePicture()
{
    videoComponent!.getVideoController().capturePhotoOutput
    { (image) in
        guard let topImage = image else
        {
            print("No image")
            return
        }
        let bottomImage = self.captureTextView() //text
        let size = CGSize(width: (topImage.size.width), height: (topImage.size.height) + (bottomImage.size.height))
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        topImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: (topImage.size.height)))
        bottomImage.draw(in: CGRect(x: (size.width - bottomImage.size.width)/2, y: (topImage.size.height), width: bottomImage.size.width, height: (bottomImage.size.height)))
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        self.setPicture(image: newImage)
    }
}

func setPicture(image: UIImage)
{
    //Your code after takePicture
}
Background:
I'm running a Swift 2 application with the following two options.
Option A:
The user can enter a number to sign in. In this case, his/her picture is shown in a UIImageView.
Option B:
The user can use an NFC tag to sign in. In this case, the UIImageView is replaced with a camera layer that shows live camera stream and uses CIContext to capture an image on a button press.
Problem:
The issue I'm facing is that sometimes, when I choose option A (not using the camera layer), the app crashes. Since I'm unable to reproduce the crash deterministically, I have hit a dead end trying to understand why the app is crashing.
EDIT: The camera layer is used in both options but is hidden in option A.
Crashlytics generates the following crash log:
0 libswiftCore.dylib specialized _fatalErrorMessage(StaticString, StaticString, StaticString, UInt) -> () + 44
1 CameraLayerView.swift line 20 CameraLayerView.init(coder : NSCoder) -> CameraLayerView?
2 CameraLayerView.swift line 0 #objc CameraLayerView.init(coder : NSCoder) -> CameraLayerView?
3 UIKit -[UIClassSwapper initWithCoder:] + 248
32 UIKit UIApplicationMain + 208
33 AppDelegate.swift line 17 main
34 libdispatch.dylib (Missing)
I've checked line #20 in CameraLayerView, but it is just an initialization statement:
private let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
Mentioned below is the CameraLayerView file. Any help would be appreciated
var captureSession = AVCaptureSession()
var sessionOutput = AVCaptureVideoDataOutput()
var previewLayer = AVCaptureVideoPreviewLayer()

private var pixelBuffer: CVImageBuffer!
private var attachments: CFDictionary!
private var ciImage: CIImage!
private let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
private var imageOptions: [String: AnyObject]!

var faceFound = false
var image: UIImage!

override func layoutSubviews() {
    previewLayer.position = CGPoint(x: self.frame.width/2, y: self.frame.height/2)
    previewLayer.bounds = self.frame
    self.layer.borderWidth = 2.0
    self.layer.borderColor = UIColor.redColor().CGColor
}

func loadCamera() {
    let camera = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
    for device in camera {
        if device.position == .Front {
            do {
                for input in captureSession.inputs {
                    captureSession.removeInput(input as! AVCaptureInput)
                }
                for output in captureSession.outputs {
                    captureSession.removeOutput(output as! AVCaptureOutput)
                }
                previewLayer.removeFromSuperlayer()
                previewLayer.session = nil

                let input = try AVCaptureDeviceInput(device: device as! AVCaptureDevice)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)

                    sessionOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey): Int(kCVPixelFormatType_32BGRA)]
                    sessionOutput.setSampleBufferDelegate(self, queue: dispatch_get_global_queue(Int(QOS_CLASS_BACKGROUND.rawValue), 0))
                    sessionOutput.alwaysDiscardsLateVideoFrames = true

                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        captureSession.sessionPreset = AVCaptureSessionPresetPhoto
                        captureSession.startRunning()

                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

                        switch UIDevice.currentDevice().orientation.rawValue {
                        case 1:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                            break
                        case 2:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.PortraitUpsideDown
                            break
                        case 3:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                            break
                        case 4:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                            break
                        default:
                            break
                        }

                        self.layer.addSublayer(previewLayer)
                    }
                }
            } catch {
                print("Error")
            }
        }
    }
}

func takePicture() -> UIImage {
    self.previewLayer.removeFromSuperlayer()
    self.captureSession.stopRunning()
    return image
}

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)
    ciImage = CIImage(CVPixelBuffer: pixelBuffer!, options: attachments as? [String: AnyObject])

    if UIDevice.currentDevice().orientation == .PortraitUpsideDown {
        imageOptions = [CIDetectorImageOrientation: 8]
    } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
        imageOptions = [CIDetectorImageOrientation: 3]
    } else if UIDevice.currentDevice().orientation == .LandscapeRight {
        imageOptions = [CIDetectorImageOrientation: 1]
    } else {
        imageOptions = [CIDetectorImageOrientation: 6]
    }

    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: ciContext, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = faceDetector.featuresInImage(ciImage, options: imageOptions)

    if features.count == 0 {
        if faceFound == true {
            faceFound = false
            dispatch_async(dispatch_get_main_queue()) {
                self.layer.borderColor = UIColor.redColor().CGColor
            }
        }
    } else {
        if UIDevice.currentDevice().orientation == .PortraitUpsideDown {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Left)
        } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Down)
        } else if UIDevice.currentDevice().orientation == .LandscapeRight {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Up)
        } else {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Right)
        }

        if faceFound == false {
            faceFound = true
            for feature in features {
                if feature.isKindOfClass(CIFaceFeature) {
                    dispatch_async(dispatch_get_main_queue()) {
                        self.layer.borderColor = UIColor.greenColor().CGColor
                    }
                }
            }
        }
    }
}
I tested a theory and it worked. Since ciContext was being initialised with view initialisation, it seemed like the app was crashing due to a race condition. I moved the initialisation for ciContext into my loadCamera method and it hasn't crashed since.
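In code terms, the change amounts to turning the stored constant into a property that is created inside loadCamera instead of at view initialisation. The sketch below shows the shape of that fix, under the assumption that loadCamera is the only entry point that needs the context:

    // Deferred creation instead of the stored `let` initialised at init time.
    // Implicitly unwrapped so the existing call sites stay unchanged.
    private var ciContext: CIContext!

    func loadCamera() {
        if ciContext == nil {
            // Create the context only when the camera path is actually used.
            ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
        }
        // ... existing camera setup ...
    }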
UPDATE
Another thing I noticed was that, in various tutorials and blog posts on the internet, the statement let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2)) was split into two separate statements, such that it became
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext)
I still don't know exactly what was causing the app to crash in the first place, but these two changes seem to have fixed the problem.
CORRECT ANSWER
Finally found the culprit. In the view controller that used ciContext, I had a timer that was not being invalidated and was therefore keeping a strong reference to the view controller. On every subsequent visit, a new view controller would be created while the previous one was never released from memory. This caused memory usage to build up over time. Once it passed a certain threshold, the ciContext initialiser would return nil because of low memory, which would in turn crash the app.
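The fix itself is mundane: invalidate the timer when the controller goes away so it releases its strong reference. A sketch of that, with an assumed property name for the timer:

    // The repeating NSTimer retains its target (the view controller), so it must be
    // invalidated or the controller, and everything it owns, leaks on every visit.
    var refreshTimer: NSTimer? // assumed name for the offending timer

    override func viewWillDisappear(animated: Bool) {
        super.viewWillDisappear(animated)
        refreshTimer?.invalidate() // break the timer -> view controller reference
        refreshTimer = nil
    }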
I am using CIDetector to detect a face in a UIImage. I am getting the face rect correctly, but when I crop the image to the detected face rect, it is not showing in my image view.
I have already checked that my image is not nil.
Here is my code:
@IBAction func detectFaceOnImageView(_: UIButton) {
    let image = myImageView.getFaceImage()
    myImageView.image = image
}
extension UIView {
    func getFaceImage() -> UIImage? {
        let faceDetectorOptions: [String: AnyObject] = [CIDetectorAccuracy: CIDetectorAccuracyHigh as AnyObject]
        let faceDetector: CIDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: faceDetectorOptions)!
        let viewScreenShotImage = generateScreenShot(scaleTo: 1.0)
        if viewScreenShotImage.cgImage != nil {
            let sourceImage = CIImage(cgImage: viewScreenShotImage.cgImage!)
            let features = faceDetector.features(in: sourceImage)
            if features.count > 0 {
                var faceBounds = CGRect.zero
                var faceImage: UIImage?
                for feature in features as! [CIFaceFeature] {
                    faceBounds = feature.bounds
                    let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
                    faceImage = UIImage(ciImage: faceCroped)
                }
                return faceImage
            } else {
                return nil
            }
        } else {
            return nil
        }
    }

    func generateScreenShot(scaleTo: CGFloat = 3.0) -> UIImage {
        let rect = self.bounds
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context = UIGraphicsGetCurrentContext()
        self.layer.render(in: context!)
        let screenShotImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        let aspectRatio = screenShotImage.size.width / screenShotImage.size.height
        let resizedScreenShotImage = screenShotImage.scaleImage(toSize: CGSize(width: self.bounds.size.height * aspectRatio * scaleTo, height: self.bounds.size.height * scaleTo))
        return resizedScreenShotImage!
    }
}
For more information, I am attaching screenshots of the values.
Screen Shot 1
Screen Shot 2
Screen Shot 3
Try this: a UIImage created directly from a CIImage has no underlying bitmap, so a UIImageView often won't display it. Render the cropped CIImage into a CGImage first:
let faceCroped: CIImage = sourceImage.cropping(to: faceBounds)
//faceImage = UIImage(ciImage: faceCroped)
let cgImage: CGImage = {
    let context = CIContext(options: nil)
    return context.createCGImage(faceCroped, from: faceCroped.extent)!
}()
faceImage = UIImage(cgImage: cgImage)
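As a side note, CIContext is comparatively expensive to create, so if getFaceImage runs often it may be worth creating the context once and reusing it rather than building a new one on every call. A sketch of that idea; the helper type is an assumption, not part of the answer above:

    // One shared Core Image context instead of a fresh one per crop.
    final class FaceImageRenderer {
        static let shared = FaceImageRenderer()
        private let context = CIContext(options: nil)

        func cgImage(from ciImage: CIImage) -> CGImage? {
            return context.createCGImage(ciImage, from: ciImage.extent)
        }
    }

    // Inside getFaceImage():
    // if let cg = FaceImageRenderer.shared.cgImage(from: faceCroped) {
    //     faceImage = UIImage(cgImage: cg)
    // }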
I am capturing an image which is then placed in a small imageView. The picture is not blurry in the small imageView. When I copy it to the clipboard, I resize the picture so that it is the same size as the imageView, but now it is blurry when I paste.
Here is the code:
import UIKit
import AVFoundation

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {

    @IBOutlet weak var cameraView: UIView!
    @IBOutlet weak var imageView: UIImageView!

    var session: AVCaptureSession?
    var stillImageOutput: AVCaptureStillImageOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var captureDevice: AVCaptureDevice?
    var imagePicker: UIImagePickerController!

    override func viewDidLoad() {
        super.viewDidLoad()
        alignment()
        tapToCopy()
    }

    override func viewWillAppear(_ animated: Bool) {
        session = AVCaptureSession()
        session!.sessionPreset = AVCaptureSessionPresetPhoto

        let videoDevices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo)
        for device in videoDevices! {
            let device = device as! AVCaptureDevice
            if device.position == AVCaptureDevicePosition.front {
                captureDevice = device
            }
        }

        //We will make a new AVCaptureDeviceInput and attempt to associate it with our backCamera input device.
        //There is a chance that the input device might not be available, so we will set up a try catch to handle any potential errors we might encounter.
        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error1 as NSError {
            error = error1
            input = nil
            print(error!.localizedDescription)
        }

        if error == nil && session!.canAddInput(input) {
            session!.addInput(input)
            // ...
            // The remainder of the session setup will go here...
            stillImageOutput = AVCaptureStillImageOutput()
            stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if session!.canAddOutput(stillImageOutput) {
                session!.addOutput(stillImageOutput)
                // ...
                // Configure the Live Preview here...
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
                videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
                videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(videoPreviewLayer!)
                session!.startRunning()
            }
        }
    }

    func alignment() {
        let height = view.bounds.size.height
        let width = view.bounds.size.width
        cameraView.bounds.size.height = height / 10
        cameraView.bounds.size.width = height / 10
        cameraView.layer.cornerRadius = height / 20
        imageView.bounds.size.height = height / 10
        imageView.bounds.size.width = height / 10
        imageView.layer.cornerRadius = height / 20
        imageView.clipsToBounds = true
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        videoPreviewLayer!.frame = cameraView.bounds
    }

    @IBAction func takePic(_ sender: Any) {
        if (stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)) != nil {
            let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
            // ...
            // Code for photo capture goes here...
            stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
                // ...
                // Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
                if sampleBuffer != nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProvider(data: imageData as! CFData)
                    let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
                    let image = UIImage(cgImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.right)
                    // ...
                    // Add the image to captureImageView here...
                    self.imageView.image = self.resizeImage(image: image, newHeight: self.view.bounds.size.height / 10)
                }
            })
        }
    }

    func tapToCopy() {
        let imageTap = UITapGestureRecognizer(target: self, action: #selector(self.copyToClipboard(recognizer:)))
        imageTap.numberOfTapsRequired = 1
        imageView.isUserInteractionEnabled = true
        imageView.addGestureRecognizer(imageTap)
    }

    func copyToClipboard(recognizer: UITapGestureRecognizer) {
        UIPasteboard.general.image = self.resizeImage(image: imageView.image!, newHeight: self.view.bounds.size.height / 10)
    }

    func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
        let scale = newHeight / image.size.height
        let newWidth = image.size.width * scale
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
        image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
First of all, you are saying:
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
Never, never, never call UIGraphicsBeginImageContext. Just pretend it doesn't exist. Always call UIGraphicsBeginImageContextWithOptions instead. It takes two extra parameters, which should just about always be false and 0. Things will be a lot better when you make that change, because the image will contain scale information that you are currently stripping out.
Another problem is that you are resizing the same image twice in succession — once to display the image in the image view, and then again resizing it some more when you pull the image from the image view and put it on the pasteboard. Don't do that! Instead, store the original image, without resizing. Later, you can put that on the pasteboard — or resize the original image so that you are only resizing it once, totally separate from the image in the image view.
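Putting both points together, a sketch of what the resize and copy paths might look like; the capturedImage property is an assumption added to illustrate "store the original", not something from the question's code:

    var capturedImage: UIImage? // keep the full-resolution capture, not just the resized copy

    func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
        let scale = newHeight / image.size.height
        let newWidth = image.size.width * scale
        // false = not opaque, 0.0 = use the device's screen scale (the key change).
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0.0)
        image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }

    // In the capture completion handler: keep the original, resize once for display.
    // self.capturedImage = image
    // self.imageView.image = self.resizeImage(image: image, newHeight: self.view.bounds.size.height / 10)

    // When copying, work from the stored original rather than re-resizing the image view's image.
    func copyToClipboard(recognizer: UITapGestureRecognizer) {
        guard let original = capturedImage else { return }
        UIPasteboard.general.image = resizeImage(image: original, newHeight: view.bounds.size.height / 10)
    }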
I made an AVFoundation camera to crop square images based on fsaint's answer: Cropping AVCaptureVideoPreviewLayer output to a square. The sizing of the photo is great, that works perfectly. However, the image quality is noticeably degraded (see below: the first image is the preview layer showing good resolution, the second is the degraded image that was captured). It definitely has to do with what happens in processImage, as the image resolution is fine without it, just not the right aspect ratio. The documentation on image processing is pretty bare, any insights are GREATLY appreciated!!
Setting up camera:
func setUpCamera() {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    if ((backCamera?.hasFlash) != nil) {
        do {
            try backCamera.lockForConfiguration()
            backCamera.flashMode = AVCaptureFlashMode.Auto
            backCamera.unlockForConfiguration()
        } catch {
            // error handling
        }
    }

    var error: NSError?
    var input: AVCaptureDeviceInput!
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        input = nil
    }

    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.sessionPreset = AVCaptureSessionPresetHigh;
            captureSession!.addOutput(stillImageOutput)

            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewVideoView.layer.addSublayer(previewLayer!)

            captureSession!.startRunning()
        }
    }
}
Snapping photo:
@IBAction func onSnapPhotoButtonPressed(sender: UIButton) {
    if let videoConnection = self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        self.stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil) {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)

                self.processImage(image)
                self.clearPhotoButton.hidden = false
                self.nextButton.hidden = false
                self.view.bringSubviewToFront(self.imageView)
            }
        })
    }
}
Process image to square:
func processImage(image: UIImage) {
    let deviceScreen = previewLayer?.bounds
    let width: CGFloat = (deviceScreen?.size.width)!
    UIGraphicsBeginImageContext(CGSizeMake(width, width))
    let aspectRatio: CGFloat = image.size.height * width / image.size.width
    image.drawInRect(CGRectMake(0, -(aspectRatio - width) / 2.0, width, aspectRatio))
    let smallImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let cropRect = CGRectMake(0, 0, width, width)
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(smallImage.CGImage, cropRect)!
    imageView.image = UIImage(CGImage: imageRef)
}
There are a few things wrong with your processImage() function.
First of all, you're creating a new graphics context with UIGraphicsBeginImageContext().
According to the Apple docs on this function:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Because the scale factor is 1.0, it is going to look pixelated when displayed on-screen, as the screen's resolution is (most likely) higher.
You want to be using the UIGraphicsBeginImageContextWithOptions() function, and pass 0.0 for the scale factor. According to the docs on this function, for the scale argument:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
For example:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)
Your output should now look nice and crisp, as it is being rendered with the same scale as the screen.
Second of all, there's a problem with the width you're passing in.
let width:CGFloat = (deviceScreen?.size.width)!
UIGraphicsBeginImageContext(CGSizeMake(width, width))
You shouldn't be passing in the width of the screen here; it should be the width of the image. For example:
let width:CGFloat = image.size.width
You will then have to change the aspectRatio variable to take the screen width, such as:
let aspectRatio:CGFloat = image.size.height * (deviceScreen?.size.width)! / image.size.width
Third of all, you can simplify your cropping function significantly.
func processImage(image: UIImage) {
    let screenWidth = UIScreen.mainScreen().bounds.size.width
    let width: CGFloat = image.size.width
    let height: CGFloat = image.size.height
    let aspectRatio = screenWidth / width

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(screenWidth, screenWidth), false, 0.0) // create context
    let ctx = UIGraphicsGetCurrentContext()
    CGContextTranslateCTM(ctx, 0, (screenWidth - (aspectRatio * height)) * 0.5) // shift context up, to create a squared 'frame' for your image to be drawn in
    image.drawInRect(CGRect(origin: CGPointZero, size: CGSize(width: screenWidth, height: height * aspectRatio))) // draw image
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    imageView.image = img
}
There's no need to draw the image twice; you just need to translate the context up and then draw the image once.