How to change uiimageview content mode at runtime? - ios

I have a UIViewController with a UIImageView. If I tap the screen, I want the UIImageView to change its contentMode. But I found out that this does not work with some images, mainly those from an AVCaptureSession.
Screenshots:
Aspect fill
Aspect fit
I also found out that it works fine when I change the device orientation to landscape and back. But when I tap the screen, it stops working again.
Aspect fit after changing orientation to landscape and back (this is how I want it to look every time in aspect fit)
My code:
CameraController:
class CameraController: UIViewController {
private var captureSession = AVCaptureSession()
private var captureDevice: AVCaptureDevice!
private var capturePhotoOutput = AVCapturePhotoOutput()
private var previewLayer: AVCaptureVideoPreviewLayer!
override func viewDidLoad() {
super.viewDidLoad()
setupCaptureSession()
setupCaptureDevice()
setupInputAndOutput()
setupPreviewLayer()
startCaptureSession()
setupLayout()
}
private func setupCaptureSession() {
captureSession.sessionPreset = .photo
}
private func setupCaptureDevice() {
guard let device = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices.first else { return }
captureDevice = device
}
private func setupInputAndOutput() {
do {
let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
captureSession.addInput(captureDeviceInput)
let captureSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
capturePhotoOutput.setPreparedPhotoSettingsArray([captureSettings], completionHandler: nil)
captureSession.addOutput(capturePhotoOutput)
} catch {
print(error.localizedDescription)
}
}
private func setupPreviewLayer() {
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.connection?.videoOrientation = .portrait
previewLayer.frame = view.frame
view.layer.insertSublayer(previewLayer, at: 0)
}
private func startCaptureSession() {
captureSession.startRunning()
}
private func setupLayout() {
let captureButton = UIButton(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
captureButton.backgroundColor = .white
captureButton.layer.cornerRadius = 22
captureButton.addTarget(self, action: #selector(didPressCaptureButton), for: .touchUpInside)
captureButton.center.x = view.center.x
captureButton.center.y = view.frame.height - 50
view.addSubview(captureButton)
}
@objc private func didPressCaptureButton() {
let settings = AVCapturePhotoSettings()
capturePhotoOutput.capturePhoto(with: settings, delegate: self)
}
}
extension CameraController: AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard let imageData = photo.fileDataRepresentation() else { return }
guard let image = UIImage(data: imageData) else { return }
print("Image size: ", image.size)
let previewController = PreviewController()
previewController.image = image
present(previewController, animated: true, completion: {
self.captureSession.stopRunning()
})
}
}
PreviewController:
class PreviewController: UIViewController {
var imageView: UIImageView!
var image: UIImage!
override func viewDidLoad() {
super.viewDidLoad()
setupImageView()
}
func setupImageView() {
imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
view.addSubview(imageView)
imageView.addConstraintsToFillSuperview()
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
let contentMode: UIViewContentMode = imageView.contentMode == .scaleAspectFill ? .scaleAspectFit : .scaleAspectFill
imageView.contentMode = contentMode
}
}
What am I doing wrong here?
Thank you!

You can change the image view's contentMode at runtime with:
self.imageView.contentMode = newContentMode
self.imageView.setNeedsDisplay()

Related

Crop CGRect from UIImage taken from camera

I have a view controller which takes a photo with a circular view in the center.
After taking a photo, I need to crop to the CGRect with which I created the circular view. I need to crop the rectangle, not the circle.
I tried https://stackoverflow.com/a/57258806/12411655 and many other solutions, but none of them crop the CGRect that I need.
How do I convert the CGRect in the view's coordinates to UIImage's coordinates?
class CircularCameraViewController: UIViewController {
var captureSession: AVCaptureSession!
var capturePhotoOutput: AVCapturePhotoOutput!
var cropRect: CGRect!
public lazy var shutterButton: ShutterButton = {
let button = ShutterButton()
button.translatesAutoresizingMaskIntoConstraints = false
button.addTarget(self, action: #selector(capturePhoto), for: .touchUpInside)
return button
}()
private lazy var cancelButton: UIButton = {
let button = UIButton()
button.setTitle("Cancel", for: .normal)
button.translatesAutoresizingMaskIntoConstraints = false
button.addTarget(self, action: #selector(dismissCamera), for: .touchUpInside)
return button
}()
private lazy var flashButton: UIButton = {
let image = UIImage(named: "flash", in: Bundle(for: ScannerViewController.self), compatibleWith: nil)?.withRenderingMode(.alwaysTemplate)
let button = UIButton()
button.setImage(image, for: .normal)
button.translatesAutoresizingMaskIntoConstraints = false
button.addTarget(self, action: #selector(toggleFlash), for: .touchUpInside)
button.tintColor = .white
return button
}()
override func viewDidLoad() {
super.viewDidLoad()
setupCamera()
setupPhotoOutput()
setupViews()
setupConstraints()
captureSession.startRunning()
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
}
override func viewWillDisappear(_ animated: Bool) {
captureSession.stopRunning()
}
private func setupCamera() {
let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
var input: AVCaptureDeviceInput
do {
input = try AVCaptureDeviceInput(device: captureDevice!)
} catch {
fatalError("Error configuring capture device: \(error)");
}
captureSession = AVCaptureSession()
captureSession.addInput(input)
// Setup the preview view.
let videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
videoPreviewLayer.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer)
let camPreviewBounds = view.bounds
cropRect = CGRect(
x: camPreviewBounds.minX + (camPreviewBounds.width - 150) * 0.5,
y: camPreviewBounds.minY + (camPreviewBounds.height - 150) * 0.5,
width: 150,
height: 150
)
let path = UIBezierPath(roundedRect: camPreviewBounds, cornerRadius: 0)
path.append(UIBezierPath(ovalIn: cropRect))
let layer = CAShapeLayer()
layer.path = path.cgPath
layer.fillRule = CAShapeLayerFillRule.evenOdd;
layer.fillColor = UIColor.black.cgColor
layer.opacity = 0.5;
view.layer.addSublayer(layer)
}
private func setupViews() {
view.addSubview(shutterButton)
view.addSubview(flashButton)
view.addSubview(cancelButton)
}
private func setupConstraints() {
var cancelButtonConstraints = [NSLayoutConstraint]()
var shutterButtonConstraints = [NSLayoutConstraint]()
var flashConstraints = [NSLayoutConstraint]()
shutterButtonConstraints = [
shutterButton.centerXAnchor.constraint(equalTo: view.centerXAnchor),
shutterButton.widthAnchor.constraint(equalToConstant: 65.0),
shutterButton.heightAnchor.constraint(equalToConstant: 65.0)
]
flashConstraints = [
flashButton.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 24.0),
flashButton.topAnchor.constraint(equalTo: view.topAnchor, constant: 30)
]
if #available(iOS 11.0, *) {
cancelButtonConstraints = [
cancelButton.leftAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leftAnchor, constant: 24.0),
view.safeAreaLayoutGuide.bottomAnchor.constraint(equalTo: cancelButton.bottomAnchor, constant: (65.0 / 2) - 10.0)
]
let shutterButtonBottomConstraint = view.safeAreaLayoutGuide.bottomAnchor.constraint(equalTo: shutterButton.bottomAnchor, constant: 8.0)
shutterButtonConstraints.append(shutterButtonBottomConstraint)
} else {
cancelButtonConstraints = [
cancelButton.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 24.0),
view.bottomAnchor.constraint(equalTo: cancelButton.bottomAnchor, constant: (65.0 / 2) - 10.0)
]
let shutterButtonBottomConstraint = view.bottomAnchor.constraint(equalTo: shutterButton.bottomAnchor, constant: 8.0)
shutterButtonConstraints.append(shutterButtonBottomConstraint)
}
NSLayoutConstraint.activate(cancelButtonConstraints + shutterButtonConstraints + flashConstraints)
}
private func setupPhotoOutput() {
capturePhotoOutput = AVCapturePhotoOutput()
capturePhotoOutput.isHighResolutionCaptureEnabled = true
captureSession.addOutput(capturePhotoOutput!)
}
@objc func dismissCamera() {
self.dismiss(animated: true, completion: nil)
}
@objc private func toggleFlash() {
if let avDevice = AVCaptureDevice.default(for: AVMediaType.video) {
if (avDevice.hasTorch) {
do {
try avDevice.lockForConfiguration()
} catch {
print("Failed to lock device for configuration: \(error)")
}
if avDevice.isTorchActive {
avDevice.torchMode = AVCaptureDevice.TorchMode.off
} else {
avDevice.torchMode = AVCaptureDevice.TorchMode.on
}
}
// unlock your device
avDevice.unlockForConfiguration()
}
}
}
extension CircularCameraViewController : AVCapturePhotoCaptureDelegate {
@objc private func capturePhoto() {
let photoSettings = AVCapturePhotoSettings()
photoSettings.isAutoStillImageStabilizationEnabled = true
photoSettings.isHighResolutionPhotoEnabled = true
photoSettings.flashMode = .auto
// Set ourselves as the delegate for `capturePhoto`.
capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
}
@available(iOS 11.0, *)
func photoOutput(_ output: AVCapturePhotoOutput,
didFinishProcessingPhoto photo: AVCapturePhoto,
error: Error?) {
guard error == nil else {
fatalError("Failed to capture photo: \(String(describing: error))")
}
guard let imageData = photo.fileDataRepresentation() else {
fatalError("Failed to convert pixel buffer")
}
guard let image = UIImage(data: imageData) else {
fatalError("Failed to convert image data to UIImage")
}
guard let croppedImg = image.cropToRect(rect: cropRect) else {
fatalError("Failed to crop image")
}
UIImageWriteToSavedPhotosAlbum(croppedImg, nil, nil, nil);
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
guard error == nil, let photoSample = photoSampleBuffer else {
fatalError("Failed to capture photo: \(String(describing: error))")
}
guard let imgData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSample, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
fatalError("Failed to get image data: \(String(describing: error))")
}
guard let image = UIImage(data: imgData) else {
fatalError("Failed to convert image data to UIImage: \(String(describing: error))")
}
}
}
UIImage extension:
func cropToRect(rect: CGRect!) -> UIImage? {
let scaledRect = CGRect(x: rect.origin.x * self.scale, y: rect.origin.y * self.scale, width: rect.size.width * self.scale, height: rect.size.height * self.scale);
guard let imageRef: CGImage = self.cgImage?.cropping(to:scaledRect)
else {
return nil
}
let croppedImage: UIImage = UIImage(cgImage: imageRef, scale: self.scale, orientation: self.imageOrientation)
return croppedImage
}
When cropping an image, you need to scale the crop rect from the view's coordinate space into the image's coordinate space.
Also, when capturing from the camera, you need to take .imageOrientation into account.
Try changing your UIImage extension to this:
extension UIImage {
func cropToRect(rect: CGRect, viewSize: CGSize) -> UIImage? {
var cr = rect
switch self.imageOrientation {
case .right, .rightMirrored, .left, .leftMirrored:
// rotate the crop rect if needed
cr.origin.x = rect.origin.y
cr.origin.y = rect.origin.x
cr.size.width = rect.size.height
cr.size.height = rect.size.width
default:
break
}
let imageViewScale = max(self.size.width / viewSize.width,
self.size.height / viewSize.height)
// scale the crop rect
let cropZone = CGRect(x:cr.origin.x * imageViewScale,
y:cr.origin.y * imageViewScale,
width:cr.size.width * imageViewScale,
height:cr.size.height * imageViewScale)
// Perform cropping in Core Graphics
guard let cutImageRef: CGImage = self.cgImage?.cropping(to:cropZone)
else {
return nil
}
// Return image to UIImage
let croppedImage: UIImage = UIImage(cgImage: cutImageRef, scale: self.scale, orientation: self.imageOrientation)
return croppedImage
}
}
and change your call in photoOutput() to:
guard let croppedImg = image.cropToRect(rect: cropRect, viewSize: view.frame.size) else {
fatalError("Failed to crop image")
}
Since your code is using the full view, that should work fine. If you change it to use a different-sized view as your videoPreviewLayer, then use that view's size instead of view.frame.size.
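As a standalone sanity check of the scaling math, here is a hypothetical example with made-up numbers (mine, not from the answer) that mirrors the aspect-fill scaling used in cropToRect(rect:viewSize:). It needs only Foundation, so it runs without UIKit:

```swift
import Foundation

// Scale a crop rect from view coordinates to image-pixel coordinates.
// .resizeAspectFill scales the image by the larger of the two ratios,
// which is exactly the max(...) used in the answer above.
func scaledCropRect(_ rect: CGRect, imageSize: CGSize, viewSize: CGSize) -> CGRect {
    let scale = max(imageSize.width / viewSize.width,
                    imageSize.height / viewSize.height)
    return CGRect(x: rect.origin.x * scale,
                  y: rect.origin.y * scale,
                  width: rect.size.width * scale,
                  height: rect.size.height * scale)
}

// A 3024x4032 photo shown aspect-fill in a 375x667 view:
// scale = max(3024/375, 4032/667) = 8.064
let r = scaledCropRect(CGRect(x: 100, y: 200, width: 150, height: 150),
                       imageSize: CGSize(width: 3024, height: 4032),
                       viewSize: CGSize(width: 375, height: 667))
print(r) // a 150 pt crop becomes 150 * 8.064 = 1209.6 px wide
```

Note that a 150 pt square in the view maps to well over a thousand pixels in the image, which is why cropping with the unscaled rect appears to grab the wrong region.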

view.frame change on orientation change - no change

I am developing an app with SwiftUI - using some UIKit components - that has Picture in Picture, and I am trying to keep the picture in a specific corner of the screen when rotating the device. In order to do this I need to change the position of the frame on the view, and so I have registered with UIDevice.orientationDidChangeNotification and when this notification comes through I change the view frame in the UIViewController like this:
@objc func onViewDidTransition() {
view.frame = CGRect(x: 100, y: 100, width: 200, height: 200)
self.cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.landscapeRight
}
However, this doesn't seem to do anything. The frame doesn't change at all, as though the UIView has kept the old frame and assigning the new CGRect does nothing. When I inspect the object via autocomplete on view.frame., there is no option for x or y, as though these properties are not changeable after initialisation. Is this right? Is there no way for me to change the position of the frame?
EDIT: Code.
The frame is setup in the setup() function of CustomCameraController
func setup() {
view.frame = CGRect(x: 225, y: 0, width: 150, height: 150)
And later it is modified in onViewDidTransition()
@objc func onViewDidTransition() {
view.frame = CGRect(x: 662, y: 0, width: 150, height: 150)
This should put the picture in the top-right corner (iPhone X) when transitioned to landscape, but it doesn't. The image stays at 200 pt from the left edge.
Minimum reproducible ContentView code
import SwiftUI
import AVFoundation
struct CustomCameraPhotoView: View {
@State private var image: Image?
@State private var showingCustomCamera = false
@State private var showImagePicker = false
@State private var inputImage: UIImage?
@State private var url: URL?
var body: some View {
CustomCameraView(image: self.$inputImage)
}
}
struct CustomCameraView: View {
@Binding var image: UIImage?
@State var didTapCapture: Bool = false
var body: some View {
ZStack() {
CustomCameraRepresentable(image: self.$image, didTapCapture: $didTapCapture)
}
}
}
struct CustomCameraRepresentable: UIViewControllerRepresentable {
@Environment(\.presentationMode) var presentationMode
@Binding var image: UIImage?
@Binding var didTapCapture: Bool
func makeUIViewController(context: Context) -> CustomCameraController {
let controller = CustomCameraController()
controller.delegate = context.coordinator
return controller
}
func updateUIViewController(_ cameraViewController: CustomCameraController, context: Context) {
if(self.didTapCapture) {
cameraViewController.didTapRecord()
}
}
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
class Coordinator: NSObject, UINavigationControllerDelegate, AVCapturePhotoCaptureDelegate {
let parent: CustomCameraRepresentable
init(_ parent: CustomCameraRepresentable) {
self.parent = parent
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
parent.didTapCapture = false
if let imageData = photo.fileDataRepresentation() {
parent.image = UIImage(data: imageData)
}
parent.presentationMode.wrappedValue.dismiss()
}
}
}
class CustomCameraController: UIViewController {
var image: UIImage?
var captureSession = AVCaptureSession()
var backCamera: AVCaptureDevice?
var frontCamera: AVCaptureDevice?
var currentCamera: AVCaptureDevice?
var photoOutput: AVCapturePhotoOutput?
var cameraPreviewLayer: AVCaptureVideoPreviewLayer?
//DELEGATE
var delegate: AVCapturePhotoCaptureDelegate?
func didTapRecord() {
let settings = AVCapturePhotoSettings()
photoOutput?.capturePhoto(with: settings, delegate: delegate!)
}
override func viewDidLoad() {
super.viewDidLoad()
setup()
}
func setup() {
view.frame = CGRect(x: 225, y: 0, width: 150, height: 150)
view.layer.cornerRadius = 40
view.layer.masksToBounds = true
view.backgroundColor = .white
setupCaptureSession()
setupDevice()
setupInputOutput()
setupPreviewLayer()
startRunningCaptureSession()
NotificationCenter.default.addObserver(self, selector: #selector(onViewDidTransition), name: UIDevice.orientationDidChangeNotification, object: nil)
}
deinit {
NotificationCenter.default.removeObserver(self, name: UIDevice.orientationDidChangeNotification, object: nil)
}
@objc func onViewDidTransition() {
view.frame = CGRect(x: 662, y: 0, width: 150, height: 150)
self.cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.landscapeRight
//UIDevice.current.orientation
}
func setupCaptureSession() {
let cameraMediaType = AVMediaType.video
let cameraAuthorizationStatus = AVCaptureDevice.authorizationStatus(for: cameraMediaType)
switch cameraAuthorizationStatus {
case .denied: break
case .authorized: break
case .restricted: break
case .notDetermined:
// Prompting user for the permission to use the camera.
AVCaptureDevice.requestAccess(for: cameraMediaType) { granted in
if granted {
print("Granted access to \(cameraMediaType)")
} else {
print("Denied access to \(cameraMediaType)")
}
}
@unknown default:
break
}
captureSession.sessionPreset = AVCaptureSession.Preset.iFrame1280x720
}
func setupDevice() {
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera],
mediaType: AVMediaType.video,
position: AVCaptureDevice.Position.front)
for device in deviceDiscoverySession.devices {
switch device.position {
case AVCaptureDevice.Position.front:
self.frontCamera = device
case AVCaptureDevice.Position.back:
self.backCamera = device
default:
break
}
}
self.currentCamera = self.frontCamera
}
func setupInputOutput() {
do {
let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
captureSession.addInput(captureDeviceInput)
photoOutput = AVCapturePhotoOutput()
photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])], completionHandler: nil)
captureSession.addOutput(photoOutput!)
} catch {
print(error)
}
}
func setupPreviewLayer()
{
self.cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
self.cameraPreviewLayer?.cornerRadius = 40
self.cameraPreviewLayer?.masksToBounds = true
self.cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
self.cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
self.cameraPreviewLayer?.frame = self.view.frame
self.view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
}
func startRunningCaptureSession(){
captureSession.startRunning()
}
}
Dumb mistake. I was modifying the frame of the View:
view.frame = CGRect(x: 662, y: 0, width: 150, height: 150)
When I should have been looking at the frame of the camera preview layer:
self.cameraPreviewLayer?.frame = self.view.frame
I have changed the code to modify the camera preview layer and leave the view frame alone, and now it works.
self.cameraPreviewLayer?.frame = CGRect(x: 662, y: 0, width: 150, height: 150)
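To avoid hard-coding x: 662, the corner position can be computed from the current bounds. A minimal sketch (the helper name and inset parameter are mine, not from the question); it uses only Foundation's CGRect math:

```swift
import Foundation

// Hypothetical helper: compute a frame pinned to the top-right corner
// of the given bounds, with an optional inset from the edges.
func topRightFrame(in bounds: CGRect, size: CGSize, inset: CGFloat = 0) -> CGRect {
    CGRect(x: bounds.maxX - size.width - inset,
           y: bounds.minY + inset,
           width: size.width,
           height: size.height)
}

// Landscape iPhone X bounds are 812 x 375 pt, so 812 - 150 = 662,
// matching the hard-coded value above.
let frame = topRightFrame(in: CGRect(x: 0, y: 0, width: 812, height: 375),
                          size: CGSize(width: 150, height: 150))
print(frame) // (662.0, 0.0, 150.0, 150.0)
```

In onViewDidTransition() you could then assign something like self.cameraPreviewLayer?.frame = topRightFrame(in: view.bounds, size: CGSize(width: 150, height: 150)), which keeps working across device sizes.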

When user allow using photo I can't see any thing in swift 3

I have a collection view with images from the photo library, but there is a problem the first time the user allows photo access: when the app runs for the first time and the user grants access to photos, the user can't see any images and has to dismiss that view controller and come back again to see them.
Here is the code:
import UIKit
import Photos
class typeandtranslateViewController: UIViewController , UIImagePickerControllerDelegate , UINavigationControllerDelegate , UICollectionViewDelegate, UICollectionViewDataSource , UITextFieldDelegate {
static var checkTextField = Bool()
@IBOutlet var backgroundimg: UIImageView!
@IBOutlet var frontimg: UIImageView!
@IBOutlet weak var typeView: UIView!
let arr_img = NSMutableArray()
let arr_selected = NSMutableArray()
@IBOutlet var collview: UICollectionView!
@IBOutlet weak var sefareshTitleTextField: UITextField!
@IBAction func caneraButton(_ sender: UIButton) {
if UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.camera) {
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.sourceType = UIImagePickerControllerSourceType.camera
imagePicker.allowsEditing = false
self.present(imagePicker, animated: true, completion: nil)
}
print("Camera!")
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
let selectedImage = info[UIImagePickerControllerOriginalImage] as! UIImage
UIImageWriteToSavedPhotosAlbum(selectedImage, self, nil, nil)
dismiss(animated: true, completion: nil)
print("save Image ")
}
@IBOutlet weak var viewCamera: UIView!
override func viewDidLoad() {
super.viewDidLoad()
print("Text Field Condition ")
if sefareshTitleTextField!.text! == "" {
typeandtranslateViewController.checkTextField = false
print("sefaresh title is nill")
} else if sefareshTitleTextField!.text! != "" {
typeandtranslateViewController.checkTextField = true
print("sefaresh title isnt nill")
}
self.sefareshTitleTextField.delegate = self
collview?.allowsMultipleSelection = true
let allPhotosOptions : PHFetchOptions = PHFetchOptions.init()
allPhotosOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
let allPhotosResult = PHAsset.fetchAssets(with: .image, options: allPhotosOptions)
allPhotosResult.enumerateObjects({ (asset, idx, stop) in
self.arr_img.add(asset)
})
self.typeView.layer.cornerRadius = self.typeView.frame.size.height/50
self.typeView.layer.borderWidth = 1
self.typeView.layer.borderColor = UIColor.clear.cgColor
self.typeView.clipsToBounds = true
self.viewCamera.layer.cornerRadius = 5
self.viewCamera.layer.borderWidth = 1
self.viewCamera.layer.borderColor = UIColor.clear.cgColor
self.viewCamera.clipsToBounds = true
self.tabBarController?.tabBar.isHidden = true
self.navigationController?.isNavigationBarHidden = true
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.light)
let blurView = UIVisualEffectView(effect: blurEffect)
blurView.frame = CGRect(x: self.backgroundimg.frame.origin.x, y: self.backgroundimg.frame.origin.y, width: self.backgroundimg.frame.size.width, height: self.backgroundimg.frame.size.height)
blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
self.backgroundimg.addSubview(blurView)
}
func getAssetThumbnail(asset: PHAsset, size: CGFloat) -> UIImage {
let retinaScale = UIScreen.main.scale
let retinaSquare = CGSize(width: size * retinaScale, height: size * retinaScale)//CGSizeMake(size * retinaScale, size * retinaScale)
let cropSizeLength = min(asset.pixelWidth, asset.pixelHeight)
let square = CGRect(x: 0, y: 0, width: cropSizeLength, height: cropSizeLength)//CGRectMake(0, 0, CGFloat(cropSizeLength), CGFloat(cropSizeLength))
let cropRect = square.applying(CGAffineTransform(scaleX: 1.0/CGFloat(asset.pixelWidth), y: 1.0/CGFloat(asset.pixelHeight)))
let manager = PHImageManager.default()
let options = PHImageRequestOptions()
var thumbnail = UIImage()
options.isSynchronous = true
options.deliveryMode = .highQualityFormat
options.resizeMode = .exact
options.normalizedCropRect = cropRect
manager.requestImage(for: asset, targetSize: retinaSquare, contentMode: .aspectFit, options: options, resultHandler: {(result, info)->Void in
thumbnail = result!
})
return thumbnail
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func textFieldShouldReturn(_ textField: UITextField) -> Bool {
self.view.endEditing(true);
if sefareshTitleTextField!.text == "" {
typeandtranslateViewController.checkTextField = false
print("sefaresh title is nill")
} else if sefareshTitleTextField!.text! != "" {
typeandtranslateViewController.checkTextField = true
print("sefaresh title isnt nill")
}
return false;
}
func textFieldDidBeginEditing(_ textField: UITextField) {
if sefareshTitleTextField!.text! == "" {
typeandtranslateViewController.checkTextField = false
print("sefaresh title is nill")
} else if sefareshTitleTextField!.text! != "" {
typeandtranslateViewController.checkTextField = true
print("sefaresh title isnt nill")
}
}
//MARK:
//MARK: Collectioview methods
func collectionView(_ collectionView: UICollectionView,
numberOfItemsInSection section: Int) -> Int {
return arr_img.count
}
func collectionView(_ collectionView: UICollectionView,
cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "celll",
for: indexPath)
let imgview : UIImageView = cell.viewWithTag(20) as! UIImageView
imgview.image = self.getAssetThumbnail(asset: self.arr_img.object(at: indexPath.row) as! PHAsset, size: 150)
let selectView : UIImageView = cell.viewWithTag(22) as! UIImageView
if arr_selected.contains(indexPath.row){
selectView.image = UIImage(named: "Select.png")
}else{
selectView.image = UIImage(named: "radioCircleButton.png")
}
cell.layer.cornerRadius = 5
cell.layer.borderWidth = 1
cell.layer.borderColor = UIColor.clear.cgColor
cell.clipsToBounds = true
return cell
}
var selectedIndexes = [NSIndexPath]() {
didSet {
collview.reloadData()
}
}
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath)
{
if arr_selected.contains(indexPath.row){
arr_selected.remove(indexPath.row)
}else{
arr_selected.add(indexPath.row)
}
self.collview.reloadData()
}
override func viewDidAppear(_ animated: Bool) {
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.dark)
let blurView = UIVisualEffectView(effect: blurEffect)
blurView.frame = backgroundimg.bounds
backgroundimg.addSubview(blurView)
backgroundimg.frame = self.view.bounds
}
@IBAction func backToTheMainCustom(_ sender: UIButton) {
performSegue(withIdentifier: "backToTheMainCustom", sender: self)
sefareshTitleTextField!.text! = ""
typeandtranslateViewController.checkTextField = false
}
}
First you need to ask the user for permission to access the photo library. If the request happens for the first time, wait for the answer and then open the UIImagePickerController again. Please review the following code:
let photosAccess = PHPhotoLibrary.authorizationStatus()
switch photosAccess {
case .notDetermined:
// First time here. Request the access
PHPhotoLibrary.requestAuthorization({status in
if status == .authorized{
// Access was just granted
// Open library here
}
})
case .authorized:
// Open library here
case .denied, .restricted:
// Photos access is not granted.
// Good place to take user to app settings.
}
The same goes for the camera:
AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { response in
if response {
DispatchQueue.main.async {
// Show camera UI here
}
} else {
DispatchQueue.main.async {
// Access is restricted
}
}
}

How to Add Live Camera Preview to UIView

I've run into a problem I'm trying to solve within a UIView's boundary: is there any way to add a camera preview to a UIView, and add other content (buttons, labels, etc.) on top of that UIView?
I tried using the AVFoundation framework, but there is not enough documentation for Swift.
UPDATED TO SWIFT 5
You can try something like this:
import UIKit
import AVFoundation
class ViewController: UIViewController{
var previewView : UIView!
var boxView:UIView!
let myButton: UIButton = UIButton()
//Camera Capture requiered properties
var videoDataOutput: AVCaptureVideoDataOutput!
var videoDataOutputQueue: DispatchQueue!
var previewLayer:AVCaptureVideoPreviewLayer!
var captureDevice : AVCaptureDevice!
let session = AVCaptureSession()
override func viewDidLoad() {
super.viewDidLoad()
previewView = UIView(frame: CGRect(x: 0,
y: 0,
width: UIScreen.main.bounds.size.width,
height: UIScreen.main.bounds.size.height))
previewView.contentMode = UIView.ContentMode.scaleAspectFit
view.addSubview(previewView)
//Add a view on top of the cameras' view
boxView = UIView(frame: self.view.frame)
myButton.frame = CGRect(x: 0, y: 0, width: 200, height: 40)
myButton.backgroundColor = UIColor.red
myButton.layer.masksToBounds = true
myButton.setTitle("press me", for: .normal)
myButton.setTitleColor(UIColor.white, for: .normal)
myButton.layer.cornerRadius = 20.0
myButton.layer.position = CGPoint(x: self.view.frame.width/2, y:200)
myButton.addTarget(self, action: #selector(self.onClickMyButton(sender:)), for: .touchUpInside)
view.addSubview(boxView)
view.addSubview(myButton)
self.setupAVCapture()
}
override var shouldAutorotate: Bool {
if (UIDevice.current.orientation == UIDeviceOrientation.landscapeLeft ||
UIDevice.current.orientation == UIDeviceOrientation.landscapeRight ||
UIDevice.current.orientation == UIDeviceOrientation.unknown) {
return false
}
else {
return true
}
}
@objc func onClickMyButton(sender: UIButton) {
print("button pressed")
}
}
// AVCaptureVideoDataOutputSampleBufferDelegate protocol and related methods
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate{
func setupAVCapture(){
session.sessionPreset = AVCaptureSession.Preset.vga640x480
guard let device = AVCaptureDevice
.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
for: .video,
position: AVCaptureDevice.Position.back) else {
return
}
captureDevice = device
beginSession()
}
func beginSession(){
var deviceInput: AVCaptureDeviceInput!
do {
deviceInput = try AVCaptureDeviceInput(device: captureDevice)
guard deviceInput != nil else {
print("error: cant get deviceInput")
return
}
if self.session.canAddInput(deviceInput){
self.session.addInput(deviceInput)
}
videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.alwaysDiscardsLateVideoFrames=true
videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue)
if session.canAddOutput(self.videoDataOutput){
session.addOutput(self.videoDataOutput)
}
videoDataOutput.connection(with: .video)?.isEnabled = true
previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
let rootLayer :CALayer = self.previewView.layer
rootLayer.masksToBounds=true
previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(self.previewLayer)
session.startRunning()
} catch let error as NSError {
deviceInput = nil
print("error: \(error.localizedDescription)")
}
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
// do stuff here
}
// clean up AVCapture
func stopCamera(){
session.stopRunning()
}
}
Here I use a UIView called previewView to start the camera, and then I add a new UIView called boxView which sits above previewView. I add a UIButton to boxView.
IMPORTANT
Remember that in iOS 10 and later you need to ask the user for permission before accessing the camera. You do this by adding a usage key to your app's Info.plist together with a purpose string, because if you fail to declare the usage, your app will crash the first time it attempts the access.
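For the camera, that key is NSCameraUsageDescription; in the Info.plist source it looks like this (the purpose string below is just an example, use your own wording):

```xml
<key>NSCameraUsageDescription</key>
<string>This app needs camera access to show a live camera preview.</string>
```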
Here's a screenshot to show the Camera access request
Swift 4
Condensed version of mauricioconde's solution
You can use this as a drop in component:
//
// CameraView.swift

import Foundation
import AVFoundation
import UIKit

final class CameraView: UIView {

    private lazy var videoDataOutput: AVCaptureVideoDataOutput = {
        let v = AVCaptureVideoDataOutput()
        v.alwaysDiscardsLateVideoFrames = true
        v.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        v.connection(with: .video)?.isEnabled = true
        return v
    }()

    private let videoDataOutputQueue: DispatchQueue = DispatchQueue(label: "JKVideoDataOutputQueue")

    private lazy var previewLayer: AVCaptureVideoPreviewLayer = {
        let l = AVCaptureVideoPreviewLayer(session: session)
        l.videoGravity = .resizeAspect
        return l
    }()

    private let captureDevice: AVCaptureDevice? = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)

    private lazy var session: AVCaptureSession = {
        let s = AVCaptureSession()
        s.sessionPreset = .vga640x480
        return s
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        commonInit()
    }

    private func commonInit() {
        contentMode = .scaleAspectFit
        beginSession()
    }

    private func beginSession() {
        do {
            guard let captureDevice = captureDevice else {
                fatalError("Camera doesn't work on the simulator! You have to test this on an actual device!")
            }
            let deviceInput = try AVCaptureDeviceInput(device: captureDevice)
            if session.canAddInput(deviceInput) {
                session.addInput(deviceInput)
            }
            if session.canAddOutput(videoDataOutput) {
                session.addOutput(videoDataOutput)
            }
            layer.masksToBounds = true
            layer.addSublayer(previewLayer)
            previewLayer.frame = bounds
            session.startRunning()
        } catch let error {
            debugPrint("\(self.self): \(#function) line: \(#line). \(error.localizedDescription)")
        }
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer.frame = bounds
    }
}

extension CameraView: AVCaptureVideoDataOutputSampleBufferDelegate {}
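A minimal usage sketch; the view controller below is hypothetical, just to show how the component drops in:

```swift
import UIKit

class CameraScreenViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Fill the whole screen with the camera view; CameraView's
        // layoutSubviews keeps the preview layer sized to its bounds.
        let cameraView = CameraView(frame: view.bounds)
        cameraView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(cameraView)
    }
}
```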
iOS 13/14 and Swift 5.3:
private var imageVC: UIImagePickerController?
and then call showCameraVC() when you want to show the camera view
func showCameraVC() {
    self.imageVC = UIImagePickerController()
    if UIImagePickerController.isCameraDeviceAvailable(.front) {
        self.imageVC?.sourceType = .camera
        self.imageVC?.cameraDevice = .front
        self.imageVC?.showsCameraControls = false

        let screenSize = UIScreen.main.bounds.size
        let cameraAspectRatio = CGFloat(4.0 / 3.0)
        let cameraImageHeight = screenSize.width * cameraAspectRatio
        let scale = screenSize.height / cameraImageHeight
        self.imageVC?.cameraViewTransform = CGAffineTransform(translationX: 0, y: (screenSize.height - cameraImageHeight) / 2)
        self.imageVC?.cameraViewTransform = self.imageVC!.cameraViewTransform.scaledBy(x: scale, y: scale)

        self.imageVC?.view.frame = CGRect(x: 0, y: 0, width: screenSize.width, height: screenSize.height)
        self.view.addSubview(self.imageVC!.view)
        self.view.sendSubviewToBack(self.imageVC!.view)
    }
}
The camera view will also be fullscreen (the other answers leave a letterboxed view).
Swift 3:
@IBOutlet weak var cameraContainerView: UIView!
var imagePickers: UIImagePickerController?
In viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    addImagePickerToContainerView()
}
Add Camera Preview to the container view:
func addImagePickerToContainerView() {
    imagePickers = UIImagePickerController()
    if UIImagePickerController.isCameraDeviceAvailable(UIImagePickerControllerCameraDevice.front) {
        imagePickers?.delegate = self
        imagePickers?.sourceType = UIImagePickerControllerSourceType.camera

        // add as a child view controller
        addChildViewController(imagePickers!)
        // add the child's view as a subview
        self.cameraContainerView.addSubview((imagePickers?.view)!)
        imagePickers?.view.frame = cameraContainerView.bounds
        imagePickers?.allowsEditing = false
        imagePickers?.showsCameraControls = false
        imagePickers?.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    }
}
On the custom button action:
@IBAction func cameraButtonPressed(_ sender: Any) {
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        imagePickers?.takePicture()
    } else {
        // Camera not available.
    }
}
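Note that takePicture() delivers the photo through the delegate, so the view controller also needs to adopt UIImagePickerControllerDelegate and UINavigationControllerDelegate. A sketch in current Swift spelling (the class name ViewController is an assumption; the Swift 3 signature used a [String: Any] info dictionary instead):

```swift
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // The captured photo arrives under the .originalImage key.
        if let image = info[.originalImage] as? UIImage {
            // use the captured image here
        }
    }
}
```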
Swift 5
Easy way:
import UIKit
import AVFoundation

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {

    // Camera capture required properties
    var imagePickers: UIImagePickerController?
    @IBOutlet weak var customCameraView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        addCameraInView()
    }

    func addCameraInView() {
        imagePickers = UIImagePickerController()
        if UIImagePickerController.isCameraDeviceAvailable(UIImagePickerController.CameraDevice.rear) {
            imagePickers?.delegate = self
            imagePickers?.sourceType = UIImagePickerController.SourceType.camera

            // add as a child view controller
            addChild(imagePickers!)
            // add the child's view as a subview
            self.customCameraView.addSubview((imagePickers?.view)!)
            imagePickers?.view.frame = customCameraView.bounds
            imagePickers?.allowsEditing = false
            imagePickers?.showsCameraControls = false
            imagePickers?.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        }
    }

    @IBAction func cameraButtonPressed(_ sender: Any) {
        if UIImagePickerController.isSourceTypeAvailable(.camera) {
            imagePickers?.takePicture()
        } else {
            // Camera not available.
        }
    }
}

Set up camera on the background of UIView

I'm trying to set up the camera as the background of a UIView in a UIViewController, so that I can draw on top of it.
How can I do that?
UPDATED TO SWIFT 5
You could try something like this:
I add two UIViews to my UIViewController's main view: one called previewView (for the camera) and another called boxView (which sits above the camera view).
class ViewController: UIViewController {

    var previewView: UIView!
    var boxView: UIView!

    // Camera capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var captureDevice: AVCaptureDevice!
    let session = AVCaptureSession()
    var currentFrame: CIImage!
    var done = false

    override func viewDidLoad() {
        super.viewDidLoad()
        previewView = UIView(frame: CGRect(x: 0, y: 0, width: UIScreen.main.bounds.size.width, height: UIScreen.main.bounds.size.height))
        previewView.contentMode = .scaleAspectFit
        view.addSubview(previewView)

        // Add a box view
        boxView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 200))
        boxView.backgroundColor = UIColor.green
        boxView.alpha = 0.3
        view.addSubview(boxView)

        self.setupAVCapture()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        if !done {
            session.startRunning()
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override var shouldAutorotate: Bool {
        if (UIDevice.current.orientation == UIDeviceOrientation.landscapeLeft ||
            UIDevice.current.orientation == UIDeviceOrientation.landscapeRight ||
            UIDevice.current.orientation == UIDeviceOrientation.unknown) {
            return false
        } else {
            return true
        }
    }
}
// AVCaptureVideoDataOutputSampleBufferDelegate protocol and related methods
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func setupAVCapture() {
        session.sessionPreset = AVCaptureSession.Preset.vga640x480
        guard let device = AVCaptureDevice
            .default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                     for: .video,
                     position: AVCaptureDevice.Position.front) else {
            return
        }
        captureDevice = device
        beginSession()
        done = true
    }

    func beginSession() {
        var deviceInput: AVCaptureDeviceInput!
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
            guard deviceInput != nil else {
                print("error: can't get deviceInput")
                return
            }
            if self.session.canAddInput(deviceInput) {
                self.session.addInput(deviceInput)
            }
            videoDataOutput = AVCaptureVideoDataOutput()
            videoDataOutput.alwaysDiscardsLateVideoFrames = true
            videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
            videoDataOutput.setSampleBufferDelegate(self, queue: self.videoDataOutputQueue)
            if session.canAddOutput(self.videoDataOutput) {
                session.addOutput(self.videoDataOutput)
            }
            videoDataOutput.connection(with: AVMediaType.video)?.isEnabled = true

            self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
            self.previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect

            let rootLayer: CALayer = self.previewView.layer
            rootLayer.masksToBounds = true
            self.previewLayer.frame = rootLayer.bounds
            rootLayer.addSublayer(self.previewLayer)
            session.startRunning()
        } catch let error as NSError {
            deviceInput = nil
            print("error: \(error.localizedDescription)")
        }
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        currentFrame = self.convertImageFromCMSampleBufferRef(sampleBuffer)
    }

    // clean up AVCapture
    func stopCamera() {
        session.stopRunning()
        done = false
    }

    func convertImageFromCMSampleBufferRef(_ sampleBuffer: CMSampleBuffer) -> CIImage {
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciImage: CIImage = CIImage(cvImageBuffer: pixelBuffer)
        return ciImage
    }
}
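If you need a UIImage from currentFrame (for example, to hand it to a UIImageView or draw on it), one common route is through a CIContext. A minimal sketch; reusing a single context matters because creating one per frame is expensive:

```swift
import CoreImage
import UIKit

let ciContext = CIContext() // create once and reuse across frames

func uiImage(from ciImage: CIImage) -> UIImage? {
    // Render the CIImage into a CGImage, then wrap it in a UIImage.
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```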
You can replace the boxView's frame with the main view's frame and skip setting its backgroundColor. This way you can use this view to add more subviews.
IMPORTANT
Remember that in iOS 10 and later you need to ask the user for permission before accessing the camera. You do this by adding the NSCameraUsageDescription key to your app's Info.plist together with a purpose string; if you fail to declare the usage, your app will crash the first time it tries to access the camera.
Here's a screenshot to show the Camera access request
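Besides the Info.plist key, you can also request authorization explicitly before starting the session, so the permission prompt doesn't fire at an awkward moment. A sketch using AVCaptureDevice.requestAccess (the completion handler runs on an arbitrary queue, hence the hop back to main):

```swift
import AVFoundation

AVCaptureDevice.requestAccess(for: .video) { granted in
    DispatchQueue.main.async {
        if granted {
            // safe to call setupAVCapture() and start the session
        } else {
            // explain why the camera is needed, or point the user to Settings
        }
    }
}
```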
I hope this can help!
Another way: an SCNView is useful for augmented-reality-style applications.
1. Create a preview layer with AVFoundation, then add the preview layer as a sublayer of the view's layer.
2. Create and customize a scene view, then add the scene view as a subview of the view.
3. Create and customize a scene, then assign it to the scene view's scene property.
// 1. Create a preview layer with AVFoundation, then add it as a sublayer of the view's layer.
self.previewLayer!.frame = view.layer.bounds
view.clipsToBounds = true
view.layer.addSublayer(self.previewLayer!)

// 2. Create and customize a scene view, then add it as a subview of the view.
let sceneView = SCNView()
sceneView.frame = view.bounds
sceneView.backgroundColor = UIColor.clear
view.addSubview(sceneView)

// 3. Create and customize a scene, then assign it to the scene view's scene property.
let scene = SCNScene()
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true

let boxGeometry = SCNBox(width: 800, height: 400, length: 1.0, chamferRadius: 1.0)
let yellow = UIColor.yellow
let semi = yellow.withAlphaComponent(0.3)
boxGeometry.firstMaterial?.diffuse.contents = semi

let boxNode = SCNNode(geometry: boxGeometry)
scene.rootNode.addChildNode(boxNode)
sceneView.scene = scene
One easy way of doing this is to add an overlay view to a UIImagePickerController and hide its default controls.
The other way is to use AVFoundation, which gives you many more options and much more freedom.
The choice depends on your needs.

Resources