iPhone image captured from camera rotates automatically (Swift / iOS)

Programmatically, I capture an image from the camera in my app. The capture itself works fine, but when I move to another view and then dismiss it, the captured image rotates into landscape. This only happens with images captured from the camera; when I fetch an image from the photo library, there is no issue.
This is my original image:
[screenshot]
And this is the rotated image:
[screenshot]
Code Below:
var captureSesssion: AVCaptureSession!
var cameraOutput: AVCapturePhotoOutput!
var previewLayer: AVCaptureVideoPreviewLayer!
var device: AVCaptureDevice!
var currentViewControl = UIViewController()
var tempImageHolder = UIImage()

func beginCamera(_ targetView: UIView, _ currentVC: UIViewController) {
    currentViewControl = currentVC
    // Do any additional setup after loading the view, typically from a nib.
    captureSesssion = AVCaptureSession()
    captureSesssion.sessionPreset = AVCaptureSession.Preset.photo
    cameraOutput = AVCapturePhotoOutput()
    if #available(iOS 11.1, *) {
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera, .builtInTrueDepthCamera], mediaType: AVMediaType.video, position: .back).devices
        device = availableDevices.first
    } else {
        // Fallback on earlier versions
        device = AVCaptureDevice.default(for: .video)!
    }
    if let input = try? AVCaptureDeviceInput(device: device!) {
        if captureSesssion.canAddInput(input) {
            captureSesssion.addInput(input)
            if captureSesssion.canAddOutput(cameraOutput) {
                captureSesssion.addOutput(cameraOutput)
                let connection = cameraOutput.connection(with: AVFoundation.AVMediaType.video)
                connection?.videoOrientation = .portrait
                previewLayer = AVCaptureVideoPreviewLayer(session: captureSesssion)
                self.previewLayer.frame = targetView.frame
                self.previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
                //setOutputOrientation()
                targetView.layer.addSublayer(previewLayer)
                captureSesssion.startRunning()
            }
        } else {
            print("issue here : captureSesssion.canAddInput")
        }
    } else {
        print("some problem here")
    }
    previewLayer?.frame.size = targetView.frame.size
}

func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
    if let error = error {
        // We got back an error!
        let alert = UIAlertController(title: "Error", message: error.localizedDescription, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        currentViewControl.present(alert, animated: true)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    let imageData = photo.fileDataRepresentation()
    let imageSize: Int = imageData!.count
    print(imageSize)
    tempImageHolder = UIImage(data: imageData!)!
    // The line below works fine when taking images in landscape left, but images
    // taken in landscape right do not display properly (screenshots below):
    tempImageHolder = UIImage(cgImage: tempImageHolder.cgImage!, scale: tempImageHolder.scale, orientation: .up)
}
When taking images in landscape left:
Input: [screenshot]
Output: [screenshot]
When taking images in landscape right:
Input: [screenshot]
Output: [screenshot]
UIImage(cgImage: tempImageHolder.cgImage!, scale: tempImageHolder.scale, orientation: .up)
The line above works fine when taking images in landscape left, but when taking images in landscape right the image does not display properly (screenshots above).
Can someone please explain how to rotate the image correctly? Any help would be greatly appreciated.

This is a really annoying problem, and I really cannot believe AVFoundation does not have anything easier for us. But in my apps I do this by keeping track of the current device orientation. When the photo output comes back with an image, rotate the image using that orientation to make sure everything is pictured upward.
class MyViewController: UIViewController {
    var currentOrientation: UIDeviceOrientation = .portrait

    override func viewDidLoad() {
        super.viewDidLoad()
        NotificationCenter.default.addObserver(self, selector: #selector(orientationChanged), name: UIDevice.orientationDidChangeNotification, object: nil)
    }

    @objc func orientationChanged() {
        currentOrientation = UIDevice.current.orientation
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let img = photo.cgImageRepresentation()?.takeUnretainedValue() else { return }
        let temp = CIImage(cgImage: img)
        var ciImage = temp
        switch currentOrientation {
        case .portrait:
            ciImage = temp.oriented(forExifOrientation: 6)
        case .landscapeRight:
            ciImage = temp.oriented(forExifOrientation: 3)
        case .landscapeLeft:
            ciImage = temp.oriented(forExifOrientation: 1)
        default:
            break
        }
        guard let cgImage = CIContext(options: nil).createCGImage(ciImage, from: ciImage.extent) else { return }
        let fixedImage = UIImage(cgImage: cgImage)
    }
}
In the end, use fixedImage for your needs. Please note that I subscribe to notification center for changes in orientation.
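One caveat (my addition, not part of the original answer): UIDevice only posts orientationDidChangeNotification after you opt in, so viewDidLoad should also contain something like:
// Required before UIDevice.orientationDidChangeNotification is delivered.
UIDevice.current.beginGeneratingDeviceOrientationNotifications()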

You should set the orientation on the output connection before capturing the photo.
// set the image orientation in output
if let photoOutputConnection = self.photoOutput.connection(with: .video) {
    photoOutputConnection.videoOrientation = videoPreviewLayerOrientation!
}
self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureProcessor) // capture image
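videoPreviewLayerOrientation is not defined in the snippet above; a minimal sketch of one way to obtain it, assuming a previewLayer property like the one in the question's code:
// Mirror the preview layer's current orientation onto the photo output,
// falling back to portrait when no connection is available.
let videoPreviewLayerOrientation: AVCaptureVideoOrientation =
    previewLayer.connection?.videoOrientation ?? .portrait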

enum CGImageOrientation {
    case up
    case down
    case left
    case right
    case upMirrored
    case downMirrored
    case leftMirrored
    case rightMirrored
}

extension CGImage {
    func orientImage(_ imageOrientation: CGImageOrientation) -> CGImage? {
        return orientImageWithTransform(imageOrientation).0
    }

    func orientImageWithTransform(_ imageOrientation: CGImageOrientation) -> (CGImage?, CGAffineTransform) {
        var transform = CGAffineTransform.identity
        if imageOrientation == .up { return (self.copy(), transform) }

        let size = CGSize(width: width, height: height)
        let newSize = [.left, .leftMirrored, .right, .rightMirrored].contains(imageOrientation)
            ? CGSize(width: size.height, height: size.width) : size

        // Guard that we have a color space and a Core Graphics context.
        guard let colorSpace = self.colorSpace,
            // The new graphics context uses the transformed width and height.
            let ctx = CGContext(data: nil, width: Int(newSize.width), height: Int(newSize.height),
                                bitsPerComponent: self.bitsPerComponent, bytesPerRow: 0,
                                space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return (nil, transform) }

        // OK, now the actual work of constructing the transform and creating the new image.
        switch imageOrientation {
        case .down, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: size.height)
            transform = transform.rotated(by: CGFloat.pi)
        case .left, .leftMirrored:
            transform = transform.translatedBy(x: size.height, y: 0)
            transform = transform.rotated(by: CGFloat.pi / 2)
        case .right, .rightMirrored:
            transform = transform.translatedBy(x: 0, y: size.width)
            transform = transform.rotated(by: -CGFloat.pi / 2)
        case .up, .upMirrored:
            break
        }
        if [.upMirrored, .downMirrored, .leftMirrored, .rightMirrored].contains(imageOrientation) {
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        }
        ctx.concatenate(transform)
        // Interestingly, drawing uses the original (pre-transform) width and height.
        ctx.draw(self, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        return (ctx.makeImage(), transform)
    }
}
To use:
var tempImage = image
guard let cgImage = image?.cgImage else { return }
if let orientedImage = cgImage.orientImage(.upMirrored) {
    tempImage = UIImage(cgImage: orientedImage)
}
let imageView = UIImageView(image: tempImage)

The problem there is that you are discarding your image metadata (including the orientation) when converting your UIImage to a CGImage. You need to render a new UIImage before accessing its cgImage property, as I showed in this post:
if let data = photo.fileDataRepresentation(), let image = UIImage(data: data) {
    let renderedImage = UIGraphicsImageRenderer(size: image.size, format: image.imageRendererFormat).image { _ in
        image.draw(at: .zero)
    }
    // use renderedImage here
}
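A minimal sketch (my arrangement, not necessarily the original poster's) of how this fits the AVCapturePhotoCaptureDelegate callback from the question:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard let data = photo.fileDataRepresentation(), let image = UIImage(data: data) else { return }
    // Re-drawing bakes the EXIF orientation into the pixel data,
    // so the result's imageOrientation is .up.
    let renderedImage = UIGraphicsImageRenderer(size: image.size, format: image.imageRendererFormat).image { _ in
        image.draw(at: .zero)
    }
    tempImageHolder = renderedImage
}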

Try the following function to fix the orientation:
extension UIImage {
    func makeFixOrientation() -> UIImage {
        if self.imageOrientation == UIImage.Orientation.up {
            return self
        }
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return normalizedImage
    }
}
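Hypothetical usage with the question's tempImageHolder:
// Normalize the captured photo so its pixel data is oriented .up.
tempImageHolder = tempImageHolder.makeFixOrientation()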

Related

captureStillImageAsynchronously Issue

I'm currently having an issue with AVCaptureStillImageOutput where, when I try to take a picture, the image is nil. My bug-fixing attempts so far have found that the captureStillImageAsynchronously method isn't being called at all, and I haven't been able to test whether the sample buffer is nil or not. I'm using this method to feed the camera image into another method that combines the camera image and another image into a single image. The thread fails during that last method. When I try to examine the image from the capture method, it is unavailable. What do I need to do to get the camera capture working?
public func capturePhotoOutput() -> UIImage {
    var image: UIImage = UIImage()
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo) {
        print("Video Connection established ---------------------")
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if sampleBuffer != nil {
                print("Sample Buffer not nil ---------------------")
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let camImage = UIImage(cgImage: cgImageRef!, scale: CGFloat(1.0), orientation: UIImageOrientation.right)
                image = camImage
            } else {
                print("nil sample buffer ---------------------")
            }
        })
    }
    if (stillImageOutput?.isCapturingStillImage)! {
        print("image capture in progress ---------------------")
    } else {
        print("capture not in progress -------------------")
    }
    return image
}
EDIT: Added the method below, where the camera image is used.
func takePicture() -> UIImage {
    /*
    videoComponent!.getVideoController().capturePhotoOutput { (image) in
        //Your code
        guard let topImage = image else {
            print("No image")
            return
        }
    }
    */
    let topImage = videoComponent!.getVideoController().capturePhotoOutput() //overlay + Camera
    let bottomImage = captureTextView() //text
    let size = CGSize(width: topImage.size.width, height: topImage.size.height + bottomImage.size.height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    topImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: topImage.size.height))
    bottomImage.draw(in: CGRect(x: (size.width - bottomImage.size.width) / 2, y: topImage.size.height, width: bottomImage.size.width, height: bottomImage.size.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
If you use an async method, the function returns before the async call completes, so you get the wrong value. Use a completion block instead, like this:
public func capturePhotoOutput(completion: (UIImage?) -> ()) {
    if let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo) {
        print("Video Connection established ---------------------")
        stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) in
            if sampleBuffer != nil {
                print("Sample Buffer not nil ---------------------")
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProvider(data: imageData! as CFData)
                let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let camImage = UIImage(cgImage: cgImageRef!, scale: CGFloat(1.0), orientation: UIImageOrientation.right)
                completion(camImage)
            } else {
                completion(nil)
            }
        })
    } else {
        completion(nil)
    }
}
How to use it:
capturePhotoOutput { (image) in
    guard let topImage = image else {
        print("No image")
        return
    }
    //Your code
}
Edit:
func takePicture() {
    videoComponent!.getVideoController().capturePhotoOutput { (image) in
        guard let topImage = image else {
            print("No image")
            return
        }
        let bottomImage = self.captureTextView() //text
        let size = CGSize(width: topImage.size.width, height: topImage.size.height + bottomImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        topImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: topImage.size.height))
        bottomImage.draw(in: CGRect(x: (size.width - bottomImage.size.width) / 2, y: topImage.size.height, width: bottomImage.size.width, height: bottomImage.size.height))
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        self.setPicture(image: newImage)
    }
}

func setPicture(image: UIImage) {
    //Your code after takePicture
}

Can't get screenshot of only the UIView which shows camera (AVCapturePhotoOutput) in Swift

I want to save a screenshot of the view that shows the camera output (AVCapturePhotoOutput).
I set up cameraView (which shows the camera feed) and add another UIView on top of it that has some parts like a UIButton.
But when I save it, the picture saved to the camera roll is just a white view.
(When I tried to save a screenshot of self.view, it worked well.)
How can I solve this?
class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {
    //for camera
    var captureDevice: AVCaptureDevice!
    var captureSesssion: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var videoConnection: AVCaptureConnection!
    var topView: TopView! //on cameraView. It has some buttons and imageViews.
    var cameraView: UIView! //shows the camera's view

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = UIColor.black
        cameraView = UIView(frame: CGRect(x: 0.0, y: 0.0, width: self.view.frame.size.width, height: self.view.frame.size.height))
        self.view.addSubview(cameraView)
        captureSesssion = AVCaptureSession()
        stillImageOutput = AVCapturePhotoOutput()
        captureSesssion.sessionPreset = AVCaptureSessionPreset1280x720
        captureDevice = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .front)
        do {
            let input = try AVCaptureDeviceInput(device: captureDevice)
            if captureSesssion.canAddInput(input) {
                captureSesssion.addInput(input)
                if captureSesssion.canAddOutput(stillImageOutput) {
                    captureSesssion.addOutput(stillImageOutput)
                    captureSesssion.startRunning()
                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSesssion)
                    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
                    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                    previewLayer?.connection.automaticallyAdjustsVideoMirroring = false
                    previewLayer?.connection.isVideoMirrored = true
                    cameraView.layer.addSublayer(previewLayer!)
                    previewLayer?.position = CGPoint(x: self.view.frame.width / 2, y: self.view.frame.height / 2)
                    previewLayer?.bounds = self.view.frame
                }
            }
        } catch {
            print(error)
        }
        topView = TopView(frame: CGRect(x: 0, y: 0, width: 320, height: 568))
        topView.viewController = self
        self.view.addSubview(topView)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        captureSesssion.stopRunning() //for taking a still picture
        let layer = UIApplication.shared.keyWindow?.layer
        let scale = UIScreen.main.scale
        UIGraphicsBeginImageContextWithOptions((layer?.frame.size)!, false, scale)
        cameraView.layer.render(in: UIGraphicsGetCurrentContext()!)
        let screenshot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        //save the screenshot to the camera roll
        UIImageWriteToSavedPhotosAlbum(screenshot!, nil, nil, nil)
    }
}
UPDATE:
Added code making a CAShapeLayer, but it can't show the camera view, and when it finishes saving the image, the image is still only a white picture....
cameraView = UIView(frame: CGRect(x: 0.0, y: 0.0, width: self.view.frame.size.width, height: self.view.frame.size.height))
self.view.addSubview(cameraView)
let cropRectOverlay = CAShapeLayer() //added
cameraView.layer.mask = cropRectOverlay //added
cameraView.layer.addSublayer(cropRectOverlay) //added
...
previewLayer?.connection.automaticallyAdjustsVideoMirroring = false
previewLayer?.connection.isVideoMirrored = true
cameraView.layer.addSublayer(cropRectOverlay) //added
...
UIGraphicsBeginImageContextWithOptions(cameraView.bounds.size, false, 0) //added
cameraView.layer.render(in: UIGraphicsGetCurrentContext()!) //added
let image = UIGraphicsGetImageFromCurrentImageContext() //added
UIGraphicsEndImageContext() //added
//When resultImageView was added, it showed a black view.
//let resultImageView = UIImageView(frame: self.view.frame)
//resultImageView.image = image
//self.view.addSubview(resultImageView)
UIImageWriteToSavedPhotosAlbum(image!, nil, nil, nil)
Image from capture session
let stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
if captureSession.canAddOutput(stillImageOutput) {
    captureSession.addOutput(stillImageOutput)
}

func captureImage() {
    let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo)
    stillImageOutput.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (ingDataBuffer, error) in
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(ingDataBuffer)
        self.imgView.image = UIImage(data: imageData!)
    })
}
Screenshot of UIView
UIGraphicsBeginImageContextWithOptions(cameraView.bounds.size, false, 0)
cameraView.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
AVCapturePhotoCaptureDelegate
Set delegate to self
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: NSError?) {
    if let sampleBuffer = photoSampleBuffer, let previewBuffer = previewPhotoSampleBuffer, let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
        print(UIImage(data: dataImage)?.size as Any)
    }
}
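For context (my note, not the answerer's): AVCaptureVideoPreviewLayer draws its video outside the normal Core Animation render path, so cameraView.layer.render(in:) yields a blank image, which is why the saved picture is white. A sketch of one way around it, assuming the photo has already been captured via the output above and topView is the overlay:
// Composite the overlay view on top of the captured camera image.
func composite(cameraImage: UIImage, overlay topView: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(topView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    // Draw the camera photo first, then the overlay's view hierarchy above it.
    cameraImage.draw(in: topView.bounds)
    topView.drawHierarchy(in: topView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}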

Downsized Image gets blurry after being copied to Pasteboard - Swift 3.0

I am capturing an image which is then placed in a small imageView. The picture is not blurry in the small imageView, but when I copy it to the clipboard, I resize the picture so that it is the same size as the imageView, and now it is blurry when I paste.
Here is the code:
import UIKit
import AVFoundation

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {

    @IBOutlet weak var cameraView: UIView!
    @IBOutlet weak var imageView: UIImageView!
    var session: AVCaptureSession?
    var stillImageOutput: AVCaptureStillImageOutput?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var captureDevice: AVCaptureDevice?
    var imagePicker: UIImagePickerController!

    override func viewDidLoad() {
        super.viewDidLoad()
        alignment()
        tapToCopy()
    }

    override func viewWillAppear(_ animated: Bool) {
        session = AVCaptureSession()
        session!.sessionPreset = AVCaptureSessionPresetPhoto
        let videoDevices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo)
        for device in videoDevices! {
            let device = device as! AVCaptureDevice
            if device.position == AVCaptureDevicePosition.front {
                captureDevice = device
            }
        }
        //We will make a new AVCaptureDeviceInput and attempt to associate it with our backCamera input device.
        //There is a chance that the input device might not be available, so we will set up a try-catch to handle any potential errors we might encounter.
        var error: NSError?
        var input: AVCaptureDeviceInput!
        do {
            input = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error1 as NSError {
            error = error1
            input = nil
            print(error!.localizedDescription)
        }
        if error == nil && session!.canAddInput(input) {
            session!.addInput(input)
            // The remainder of the session setup will go here...
            stillImageOutput = AVCaptureStillImageOutput()
            stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if session!.canAddOutput(stillImageOutput) {
                session!.addOutput(stillImageOutput)
                // Configure the Live Preview here...
                videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
                videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
                videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(videoPreviewLayer!)
                session!.startRunning()
            }
        }
    }

    func alignment() {
        let height = view.bounds.size.height
        let width = view.bounds.size.width
        cameraView.bounds.size.height = height / 10
        cameraView.bounds.size.width = height / 10
        cameraView.layer.cornerRadius = height / 20
        imageView.bounds.size.height = height / 10
        imageView.bounds.size.width = height / 10
        imageView.layer.cornerRadius = height / 20
        imageView.clipsToBounds = true
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        videoPreviewLayer!.frame = cameraView.bounds
    }

    @IBAction func takePic(_ sender: Any) {
        if (stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)) != nil {
            let videoConnection = stillImageOutput!.connection(withMediaType: AVMediaTypeVideo)
            // Code for photo capture goes here...
            stillImageOutput?.captureStillImageAsynchronously(from: videoConnection, completionHandler: { (sampleBuffer, error) -> Void in
                // Process the image data (sampleBuffer) here to get an image file we can put in our captureImageView
                if sampleBuffer != nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                    let dataProvider = CGDataProvider(data: imageData as! CFData)
                    let cgImageRef = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
                    let image = UIImage(cgImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.right)
                    // Add the image to captureImageView here...
                    self.imageView.image = self.resizeImage(image: image, newHeight: self.view.bounds.size.height / 10)
                }
            })
        }
    }

    func tapToCopy() {
        let imageTap = UITapGestureRecognizer(target: self, action: #selector(self.copyToClipboard(recognizer:)))
        imageTap.numberOfTapsRequired = 1
        imageView.isUserInteractionEnabled = true
        imageView.addGestureRecognizer(imageTap)
    }

    func copyToClipboard(recognizer: UITapGestureRecognizer) {
        UIPasteboard.general.image = self.resizeImage(image: imageView.image!, newHeight: self.view.bounds.size.height / 10)
    }

    func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
        let scale = newHeight / image.size.height
        let newWidth = image.size.width * scale
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
        image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
First of all, you are saying:
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
Never, never, never call UIGraphicsBeginImageContext. Just pretend it doesn't exist. Always call UIGraphicsBeginImageContextWithOptions instead. It takes two extra parameters, which should just about always be false and 0. Things will be a lot better when you make that change, because the image will contain scale information that you are currently stripping out.
Another problem is that you are resizing the same image twice in succession — once to display the image in the image view, and then again resizing it some more when you pull the image from the image view and put it on the pasteboard. Don't do that! Instead, store the original image, without resizing. Later, you can put that on the pasteboard — or resize the original image so that you are only resizing it once, totally separate from the image in the image view.
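A sketch of both fixes applied to the code above (Swift 3 to match; the originalImage property is my hypothetical addition for holding the unresized capture):
var originalImage: UIImage? // hypothetical property: the full-resolution capture

func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
    let scale = newHeight / image.size.height
    let newSize = CGSize(width: image.size.width * scale, height: newHeight)
    // false = not opaque; 0 = render at the main screen's scale, keeping scale info.
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
    image.draw(in: CGRect(origin: .zero, size: newSize))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}

func copyToClipboard(recognizer: UITapGestureRecognizer) {
    // Resize once, from the stored original (set e.g. in takePic's completion
    // handler with `self.originalImage = image`), not from the image view.
    guard let original = originalImage else { return }
    UIPasteboard.general.image = resizeImage(image: original, newHeight: view.bounds.size.height / 10)
}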

App is crashing silently during custom camera (Swift)

The app is crashing at random points in this function. I believe I need to scale the image down, but I am not sure. The only requirements I have for the image are that it remains a square and that it remains decently sized, because it needs to be big enough to fill the entire screen's width.
Here is an error that sometimes comes along with the crash:
warning: could not load any Objective-C class information. This will significantly reduce the quality of type information available.
@IBAction func didPressTakePhoto(sender: UIButton) {
    self.previewLayer?.connection.enabled = false
    if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if sampleBuffer != nil {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                var image = UIImage()
                if UIDevice.currentDevice().orientation == .Portrait {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Up)
                } else if UIDevice.currentDevice().orientation == .LandscapeRight {
                    image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Down)
                }
                //Crop the image to a square
                let imageSize: CGSize = image.size
                let width: CGFloat = imageSize.width
                let height: CGFloat = imageSize.height
                if width != height {
                    let newDimension: CGFloat = min(width, height)
                    let widthOffset: CGFloat = (width - newDimension) / 2
                    let heightOffset: CGFloat = (height - newDimension) / 2
                    UIGraphicsBeginImageContextWithOptions(CGSizeMake(newDimension, newDimension), false, 0.0)
                    image.drawAtPoint(CGPointMake(-widthOffset, -heightOffset), blendMode: .Copy, alpha: 1.0)
                    image = UIGraphicsGetImageFromCurrentImageContext()
                    let imageData: NSData = UIImageJPEGRepresentation(image, 0.1)!
                    UIGraphicsEndImageContext()
                    self.captImage = UIImage(data: imageData)!
                }
            }
            self.performSegueWithIdentifier("fromCustomCamera", sender: self)
        })
    }
}
This code is running in my viewDidAppear and stillImageOutput is returning nil when I take a photo.
if self.isRunning == false {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    var error: NSError?
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        print(error)
        input = nil
    }
    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
            self.isRunning = true
        }
    }
}
Fixed it. The reason it was crashing was actually that my images were way too big. I had to compress them.
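What "compress" means here is not shown in the answer; a minimal sketch in the question's Swift 2 style, re-encoding the cropped capture as a lower-quality JPEG before keeping it (0.5 is an arbitrary quality factor):
// Re-encode as JPEG to shrink the in-memory footprint of the capture.
let jpegData = UIImageJPEGRepresentation(image, 0.5)!
self.captImage = UIImage(data: jpegData)!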

AVFoundation camera image quality degraded upon processing

I made an AVFoundation camera to crop square images based on @fsaint's answer: Cropping AVCaptureVideoPreviewLayer output to a square. The sizing of the photo is great; that works perfectly. However, the image quality is noticeably degraded (see below: the first image is the preview layer showing good resolution, the second is the degraded image that was captured). It definitely has to do with what happens in processImage:, as the image resolution is fine without it, just not the right aspect ratio. The documentation on image processing is pretty bare; any insights are GREATLY appreciated!!
Setting up camera:
func setUpCamera() {
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    if (backCamera?.hasFlash) != nil {
        do {
            try backCamera.lockForConfiguration()
            backCamera.flashMode = AVCaptureFlashMode.Auto
            backCamera.unlockForConfiguration()
        } catch {
            // error handling
        }
    }
    var error: NSError?
    var input: AVCaptureDeviceInput!
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        input = nil
    }
    if error == nil && captureSession!.canAddInput(input) {
        captureSession!.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
        if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.sessionPreset = AVCaptureSessionPresetHigh
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewVideoView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
        }
    }
}
Snapping photo:
@IBAction func onSnapPhotoButtonPressed(sender: UIButton) {
    if let videoConnection = self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        self.stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if sampleBuffer != nil {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
                let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)
                self.processImage(image)
                self.clearPhotoButton.hidden = false
                self.nextButton.hidden = false
                self.view.bringSubviewToFront(self.imageView)
            }
        })
    }
}
Process image to square:
func processImage(image: UIImage) {
    let deviceScreen = previewLayer?.bounds
    let width: CGFloat = (deviceScreen?.size.width)!
    UIGraphicsBeginImageContext(CGSizeMake(width, width))
    let aspectRatio: CGFloat = image.size.height * width / image.size.width
    image.drawInRect(CGRectMake(0, -(aspectRatio - width) / 2.0, width, aspectRatio))
    let smallImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let cropRect = CGRectMake(0, 0, width, width)
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(smallImage.CGImage, cropRect)!
    imageView.image = UIImage(CGImage: imageRef)
}
There are a few things wrong with your processImage() function.
First of all, you're creating a new graphics context with UIGraphicsBeginImageContext().
According to the Apple docs on this function:
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
Because the scale factor is 1.0, it is going to look pixelated when displayed on-screen, as the screen's resolution is (most likely) higher.
You want to be using the UIGraphicsBeginImageContextWithOptions() function, and pass 0.0 for the scale factor. According to the docs on this function, for the scale argument:
If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
For example:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)
Your output should now look nice and crisp, as it is being rendered with the same scale as the screen.
Second of all, there's a problem with the width you're passing in.
let width:CGFloat = (deviceScreen?.size.width)!
UIGraphicsBeginImageContext(CGSizeMake(width, width))
You shouldn't be passing in the width of the screen here, it should be the width of the image. For example:
let width:CGFloat = image.size.width
You will then have to change the aspectRatio variable to take the screen width, such as:
let aspectRatio:CGFloat = image.size.height * (deviceScreen?.size.width)! / image.size.width
Third of all, you can simplify your cropping function significantly.
func processImage(image: UIImage) {
    let screenWidth = UIScreen.mainScreen().bounds.size.width
    let width: CGFloat = image.size.width
    let height: CGFloat = image.size.height
    let aspectRatio = screenWidth / width
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(screenWidth, screenWidth), false, 0.0) // create context
    let ctx = UIGraphicsGetCurrentContext()
    CGContextTranslateCTM(ctx, 0, (screenWidth - (aspectRatio * height)) * 0.5) // shift context up, to create a squared 'frame' for your image to be drawn in
    image.drawInRect(CGRect(origin: CGPointZero, size: CGSize(width: screenWidth, height: height * aspectRatio))) // draw image
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    imageView.image = img
}
There's no need to draw the image twice; you only need to translate the context up and then draw the image once.
