Capture picture automatically in iOS

My requirement is to write a sample iOS app that would automatically capture a camera picture. Using the various Stack Overflow links provided, I implemented the code below.
My CameraViewController.h class is defined as follows:
@interface CameraViewController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
@property (strong, nonatomic) IBOutlet UIImageView *ImageView;
@end
And CameraViewController.m has the following code:
-(void)viewDidAppear:(BOOL)animated
{
NSLog(#"Setting the background now");
UIImagePickerController *picker = [[UIImagePickerController alloc] init];picker.delegate = self;
picker.allowsEditing = YES;
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModePhoto;
picker.cameraDevice = UIImagePickerControllerCameraDeviceRear;
picker.showsCameraControls = NO;
picker.navigationBarHidden = NO;
picker.toolbarHidden = NO;
[self presentViewController:picker animated:YES completion:NULL];
NSLog(#"Taking the picture now");
[picker takePicture];
}
-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
NSLog(#"Entered the case of finishing pictures");
}
- (void) imagePickerControllerDidCancel: (UIImagePickerController *) picker
{
NSLog(#"Entered the case of cancel");
}
The above code successfully launches the camera, but I am not sure whether the takePicture API actually manages to take a picture. I do not see any saved pictures in the Photos app on my iPad, so I assume that the picture has not been taken.
Can someone please tell me if my code above is correct, or what I need to do to automate pressing the capture button once the camera controls are displayed?

[Please go to 'Using UIImagePickerController to Select Pictures and Take Photos' in the Apple documentation for the property cameraOverlayView of class UIImagePickerController for a complete example application that does what you need, and more.]
You specified your CameraViewController as adopting the UIImagePickerControllerDelegate protocol and thus you must implement two messages:
- (void) imagePickerController: (UIImagePickerController *) picker
didFinishPickingMediaWithInfo: (NSDictionary *) info;
and
- (void) imagePickerControllerDidCancel: (UIImagePickerController *) picker;
As the iOS documentation describes, the NSDictionary* info has a key UIImagePickerControllerOriginalImage which will return the UIImage. Access it as something like:
UIImage *snapshot = (UIImage *) [info objectForKey: UIImagePickerControllerOriginalImage];
Since your plan is to take a picture automatically (without user interaction) using takePicture, be sure to specify:
picker.showsCameraControls = NO;

You need to implement the UIImagePickerControllerDelegate's imagePickerController:didFinishPickingMediaWithInfo: method.
After that, look inside the mediaInfo dictionary and there's a UIImage inside it you can use.
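For illustration, a minimal sketch of that delegate method in modern Swift, assuming you also want the photo saved so it appears in the Photos app (saving requires the photo library add permission):
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
// The captured photo is delivered under the .originalImage key
if let image = info[.originalImage] as? UIImage {
// Optionally persist it so it shows up in the Photos app
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
picker.dismiss(animated: true, completion: nil)
}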

I know this is old, but a better alternative to using a timer (see the comments from the accepted answer) would be to implement the completion handler instead of passing in NULL.
[self presentViewController:picker animated:YES completion:^{
NSLog(#"Taking the picture now");
[picker takePicture];
}];
That way, the picture is taken consistently every time, and you don't waste time adding an unnecessary delay.

**You can automatically capture both camera images and video recordings by using this code.**
import UIKit
import AVFoundation
import MobileCoreServices
class ViewController: UIViewController, UIGestureRecognizerDelegate {
let captureSession = AVCaptureSession()
var captureDevice : AVCaptureDevice?
var imagePicker = UIImagePickerController()
var flagVideoRecording = false
var arrImages = [UIImage]()
var countVideoRecording = 0
var labelTime = UILabel()
var timer: Timer?
override func viewDidLoad() {
super.viewDidLoad()
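// Re-trigger a capture whenever the camera session (re)starts, so shots repeat automatically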
NotificationCenter.default.addObserver(self, selector: #selector(actionRepeatCapturing), name: .AVCaptureSessionDidStartRunning, object: nil)
}
@objc func actionRepeatCapturing() {
flagVideoRecording = false
startCapturingBothImageAndRecordView()
}
//MARK:- UIButton's Action
@IBAction func actionCaptureImage(_ sender: UIButton) {
flagVideoRecording = false
if AVCaptureDevice.authorizationStatus(for: AVMediaType.video) == AVAuthorizationStatus.authorized {
startCapturingBothImageAndRecordView()
} else {
AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (granted: Bool) -> Void in
if granted == true {
self.startCapturingBothImageAndRecordView()
} else {
DispatchQueue.main.async {
self.alertToEncourageAccessInitially("Camera access required for capturing photos!", actionTitle: "Allow Camera")
}
}
})
}
}
@IBAction func actionCaptureVideo(_ sender: UIButton) {
flagVideoRecording = true
if AVCaptureDevice.authorizationStatus(for: AVMediaType.video) == AVAuthorizationStatus.authorized {
switch AVAudioSession.sharedInstance().recordPermission {
case AVAudioSession.RecordPermission.granted:
self.startCapturingBothImageAndRecordView()
case AVAudioSession.RecordPermission.denied:
self.alertToEncourageAccessInitially("Microphone access required for record your voice!", actionTitle: "Allow Microphone")
case AVAudioSession.RecordPermission.undetermined:
AVAudioSession.sharedInstance().requestRecordPermission({ (granted) in
if granted {
self.startCapturingBothImageAndRecordView()
} else {
self.alertToEncourageAccessInitially("Microphone access required for record your voice!", actionTitle: "Allow Microphone")
}
})
default:
break
}
} else {
AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (granted: Bool) -> Void in
if granted == true {
switch AVAudioSession.sharedInstance().recordPermission {
case AVAudioSession.RecordPermission.granted:
self.startCapturingBothImageAndRecordView()
case AVAudioSession.RecordPermission.denied:
self.alertToEncourageAccessInitially("Microphone access required for record your voice!", actionTitle: "Allow Microphone")
case AVAudioSession.RecordPermission.undetermined:
AVAudioSession.sharedInstance().requestRecordPermission({ (granted) in
if granted {
self.startCapturingBothImageAndRecordView()
} else {
self.alertToEncourageAccessInitially("Microphone access required for record your voice!", actionTitle: "Allow Microphone")
}
})
default:
break
}
} else {
DispatchQueue.main.async {
self.alertToEncourageAccessInitially("Camera access required for record video", actionTitle: "Allow Camera")
}
}
})
}
}
}
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
func startCapturingBothImageAndRecordView() {
if UIImagePickerController.isSourceTypeAvailable(UIImagePickerController.SourceType.camera) {
debugPrint("captureVideoPressed and camera available.")
imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.sourceType = .camera
if flagVideoRecording {
imagePicker.mediaTypes = [kUTTypeMovie as String]
imagePicker.allowsEditing = false
imagePicker.showsCameraControls = false
let viewTime = UIView(frame: CGRect(x: 0, y: 0, width: self.view.frame.width, height: 40))
viewTime.backgroundColor = .black
viewTime.alpha = 0.1
labelTime = UILabel(frame: CGRect(x: self.view.frame.width/2-50, y: 10, width: 100, height: 25))
labelTime.font = UIFont.boldSystemFont(ofSize: 17)
labelTime.text = "00.00:00"
labelTime.textColor = .white
labelTime.textAlignment = .center
labelTime.backgroundColor = .red
imagePicker.view.addSubview(viewTime)
imagePicker.view.addSubview(labelTime)
self.timer = Timer.scheduledTimer(timeInterval: 1,
target: self,
selector: #selector(self.actionStopVideoRecording),
userInfo: nil,
repeats: true)
} else {
imagePicker.allowsEditing = false
imagePicker.showsCameraControls = false
}
} else {
debugPrint("Camera not available.")
return
}
self.present(self.imagePicker, animated: true, completion: {
if self.flagVideoRecording {
self.imagePicker.startVideoCapture()
} else {
self.imagePicker.takePicture()
}
})
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
if flagVideoRecording {
if let videoFileURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL {
debugPrint(videoFileURL)
// let data = try Data(contentsOf: videoFileURL, options: .mappedIfSafe)
// debugPrint(data)
}
self.dismiss(animated: true, completion: nil)
} else {
if let pickedImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage{
arrImages.append(pickedImage)
}
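// Pause briefly between shots; stop after 5 images, otherwise re-post the
// session-start notification to trigger another automatic capture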
sleep(1)
if arrImages.count >= 5 {
self.dismiss(animated: true, completion: nil)
} else {
NotificationCenter.default.post(name: .AVCaptureSessionDidStartRunning, object: nil, userInfo: nil)
}
}
}
@objc func actionStopVideoRecording() {
countVideoRecording += 1
labelTime.text = countVideoRecording == 10 ? "00:00:\(countVideoRecording)":"00:00:0\(countVideoRecording)"
if countVideoRecording == 10 {
imagePicker.stopVideoCapture()
timer?.invalidate()
timer = nil
}
}
}
extension ViewController {
func alertToEncourageAccessInitially(_ msgString: String, actionTitle: String) {
let alert = UIAlertController(
title: "IMPORTANT",
message: msgString,
preferredStyle: UIAlertController.Style.alert
)
alert.addAction(UIAlertAction(title: "Cancel", style: .default, handler: nil))
alert.addAction(UIAlertAction(title: actionTitle, style: .destructive, handler: { (alert) -> Void in
if let url = URL(string: UIApplication.openSettingsURLString) {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
}))
present(alert, animated: true, completion: nil)
}
}


UIImagePickerController camera keeps open after closing

I wrote a UIViewControllerRepresentable for a VideoRecordingView in SwiftUI:
import SwiftUI
import AVFoundation
import Photos
struct VideoRecordingView: UIViewControllerRepresentable {
@Binding var videoURL: URL?
@Environment(\.viewController) private var viewControllerHolder: UIViewController?
let imagePickerController: UIImagePickerController
init(videoURL: Binding<URL?>) {
self._videoURL = videoURL
imagePickerController = UIImagePickerController()
}
func makeUIViewController(context: UIViewControllerRepresentableContext<VideoRecordingView>) -> UIImagePickerController {
imagePickerController.allowsEditing = true
imagePickerController.sourceType = .camera
imagePickerController.mediaTypes = ["public.movie"]
imagePickerController.videoMaximumDuration = .infinity
if #available(iOS 14.0, *) {
imagePickerController.videoQuality = .typeHigh
}
imagePickerController.delegate = context.coordinator
return imagePickerController
}
func updateUIViewController(_ uiViewController: UIImagePickerController, context: UIViewControllerRepresentableContext<VideoRecordingView>) {}
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
var parent: VideoRecordingView
init(_ parent: VideoRecordingView) {
self.parent = parent
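// Stop recording if the app resigns active while the picker is presented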
NotificationCenter.default.addObserver(forName: UIApplication.willResignActiveNotification, object: nil, queue: nil) { notification in
parent.imagePickerController.stopVideoCapture()
// parent.viewControllerHolder?.dismiss(animated: true, completion: nil)
}
}
func requestAuthorizationToPhotos(completionHandler: @escaping (Bool) -> Void) {
guard PHPhotoLibrary.authorizationStatus() != .authorized else {
completionHandler(true)
return
}
PHPhotoLibrary.requestAuthorization { status in
completionHandler(status == .authorized)
}
}
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
picker.stopVideoCapture()
parent.viewControllerHolder?.dismiss(animated: true, completion: nil)
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
picker.stopVideoCapture()
if let videoURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL {
DispatchQueue.global(qos: .background).async {
let fileURL = videoURL.copyFileToTempDirectory()
DispatchQueue.main.async {
self.parent.videoURL = fileURL
}
self.requestAuthorizationToPhotos { granted in
if (granted && UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(videoURL.path)) {
UISaveVideoAtPathToSavedPhotosAlbum(videoURL.path, nil, nil, nil)
}
}
}
}
parent.viewControllerHolder?.dismiss(animated: true, completion: nil)
}
}
}
I call imagePickerController.stopVideoCapture() and dismiss the picker when I close it.
After closing it, I send the app to the background and return to the foreground again. At this point, a green light appears in the status bar, which means the camera is in use, even though the picker is closed, and I do not know which part of my code is still using the camera.
Please help me. Thanks :)
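One thing worth checking (an observation about the code above, not a confirmed diagnosis): the block-based observer registered in Coordinator.init is never removed, and its closure strongly captures parent (and through it the UIImagePickerController), which can keep the picker, and thus the camera, alive. NotificationCenter.addObserver(forName:object:queue:using:) returns a token that must be passed to removeObserver(_:). A minimal sketch of the usual pattern, with hypothetical names:
import UIKit
final class Coordinator: NSObject {
private var willResignObserver: NSObjectProtocol?
override init() {
super.init()
// Keep the returned token so the observer (and its captures) can be released later
willResignObserver = NotificationCenter.default.addObserver(
forName: UIApplication.willResignActiveNotification,
object: nil, queue: nil) { _ in
// stop video capture here
}
}
deinit {
if let observer = willResignObserver {
NotificationCenter.default.removeObserver(observer)
}
}
}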

Photo capture permission problems in iOS 11

So here's my problem. I am trying to create a screen in which there is a UIImageView and a UIButton. When the user presses the button, the camera app opens, you take a photo and if you press "Use Photo" in the Camera app, you are returned to my app's screen and the photo is placed in the UIImageView I mentioned previously.
What happens so far is that when I press the "Use Photo" button, the image is correctly placed in my UIImageView but then the app crashes with the following error:
This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSPhotoLibraryAddUsageDescription key with a string value explaining to the user how the app uses this data.
What I've done so far is:
1. Placed the key "Privacy - Photo Library Usage Description" with the value "$(PRODUCT_NAME) uses Library in order to process the photos you captured." in the Info.plist file (also checked how it is written in source form and it's correct according to the Apple Developer Documentation).
2. Also placed the key "Privacy - Camera Usage Description" with the value "$(PRODUCT_NAME) uses Cameras" in the Info.plist file.
3. Checked under "TARGETS->->Info->Custom iOS Target Properties" that the two key/value pairs I mentioned in steps 1 and 2 exist.
I will provide you with my code so far:
import UIKit
import Vision
import MobileCoreServices
import AVFoundation
import Photos
class ViewController: UIViewController, UIImagePickerControllerDelegate,
UINavigationControllerDelegate {
var newMedia: Bool?
@IBAction func captureImageButtonPressed(_ sender: Any) {
//let imageName : String = "dolphin"
//randomImageView.image = UIImage.init(named:imageName)
if UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.camera) {
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.sourceType = UIImagePickerControllerSourceType.camera
imagePicker.mediaTypes = [kUTTypeImage as String]
imagePicker.allowsEditing = false
self.present(imagePicker, animated: true, completion: nil)
newMedia = true
}
}
@IBAction func classifyButtonPressed(_ sender: UIButton) {
performVisionRequest()
}
@IBOutlet weak var randomImageView: UIImageView!
@IBOutlet weak var classificationLabel: UILabel!
override func viewDidLoad() {
super.viewDidLoad()
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
}
func performVisionRequest() {
let start = DispatchTime.now()
let model = Resnet50()
let request = VNImageRequestHandler(cgImage: randomImageView.image!.cgImage!, options: [:])
do {
let m = try VNCoreMLModel(for: model.model)
let coreMLRequest = VNCoreMLRequest(model: m) { (request, error) in
guard let observation = request.results?.first as? VNClassificationObservation else { return }
let stop = DispatchTime.now()
let nanoTime = stop.uptimeNanoseconds - start.uptimeNanoseconds
let timeInterval = Double(nanoTime)
self.classificationLabel.text = "\(observation.identifier) (\(observation.confidence * 100)%) in \(timeInterval) seconds."
}
try request.perform([coreMLRequest])
} catch {
print(error)
}
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
let mediaType = info[UIImagePickerControllerMediaType] as! NSString
self.dismiss(animated: true, completion: nil)
if mediaType.isEqual(to: kUTTypeImage as String) {
let image = info[UIImagePickerControllerOriginalImage] as! UIImage
randomImageView.image = image
if (newMedia == true) {
UIImageWriteToSavedPhotosAlbum(image, self, #selector(ViewController.image(image:didFinishSavingWithError:contextInfo:)), nil)
}
} else if mediaType.isEqual(to: kUTTypeMovie as String) {
// Code to support video here
}
}
@objc func image(image: UIImage, didFinishSavingWithError error: NSErrorPointer, contextInfo: UnsafeRawPointer) {
if error != nil {
let alert = UIAlertController(title: "Save Failed",
message: "Failed to save image",
preferredStyle: UIAlertControllerStyle.alert)
let cancelAction = UIAlertAction(title: "OK",
style: .cancel, handler: nil)
alert.addAction(cancelAction)
self.present(alert, animated: true,
completion: nil)
}
}
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
self.dismiss(animated: true, completion: nil)
}
}
Any idea why I get the above error? Thank you very much in advance for your time.
NSPhotoLibraryAddUsageDescription was added in iOS 11.
Please add "Privacy - Photo Library Additions Usage Description" in info.plist with a usage description (string), like you did for the other privacy permissions.
Ref: https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/CocoaKeys.html
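In source form, the new entry looks like this (the description string here is only a placeholder):
<key>NSPhotoLibraryAddUsageDescription</key>
<string>$(PRODUCT_NAME) saves the photos you capture to your photo library.</string>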

iOS camera freezes when previewing video when there is a camera overlay

I am attempting to overlay a view over a video capture session in UIImagePickerController. The overlay works fine, but when the app gets to the screen where the user can "retake", "play" or "use video", the app crashes and gives the error:
2017-04-16 21:33:04.129212-0400 ChugMug[429:59833] libMobileGestalt MobileGestalt.c:2690: statfs(/mnt4): No such file or directory
2017-04-16 21:33:04.129871-0400 ChugMug[429:59833] libMobileGestalt MobileGestalt.c:2587: SInt64 NANDSize(): No kIOMediaSizeKey found for disk0!
2017-04-16 21:33:09.352085-0400 ChugMug[429:60065] [MediaRemote] Error Operation requires a client callback to have been registered. requesting playback queue
The code is quite simple: when the overlay is commented out, the video preview screen and buttons work fine, but when the overlay is present the app freezes at that screen.
Here is the code for the camera and the overlay:
func startMediaBrowserFromViewController(viewController: UIViewController, usingDelegate delegate: UINavigationControllerDelegate & UIImagePickerControllerDelegate) -> Bool {
// 1
if UIImagePickerController.isSourceTypeAvailable(.savedPhotosAlbum) == false {
return false
}
// 2
let mediaUI = UIImagePickerController()
mediaUI.sourceType = .savedPhotosAlbum
mediaUI.mediaTypes = [kUTTypeMovie as NSString as String]
mediaUI.allowsEditing = true
mediaUI.delegate = delegate
// 3
present(mediaUI, animated: true, completion: nil)
return true
}
func startCameraFromViewController(viewController: UIViewController, withDelegate delegate: UIImagePickerControllerDelegate & UINavigationControllerDelegate) -> Bool {
if UIImagePickerController.isSourceTypeAvailable(.camera) == false {
return false
}
cameraController.sourceType = .camera
cameraController.mediaTypes = [kUTTypeMovie as NSString as String]
cameraController.allowsEditing = false
cameraController.delegate = delegate
cameraController.showsCameraControls = true
//customView stuff
let customViewController = CustomOverlayViewController(
nibName:"CustomOverlayViewController",
bundle: nil
)
let customView = customViewController.view //as! CustomOverlayView
customView?.frame = cameraController.view.frame
present(cameraController, animated: true, completion: {
self.cameraController.cameraOverlayView = customView
customViewController.cameraLabel.text = "Camera Label"
self.cameraController.startVideoCapture()
})
return true
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
let mediaType = info[UIImagePickerControllerMediaType] as! NSString
dismiss(animated: true, completion: nil)
// Handle a movie capture
if mediaType == kUTTypeMovie {
guard let path = (info[UIImagePickerControllerMediaURL] as! NSURL).path else { return }
if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
UISaveVideoAtPathToSavedPhotosAlbum(path, self, nil, nil)
}
}
}
I have no clue what is causing this strange error and can't find anything similar. I hope someone can help me.

Front facing camera in UIImagePickerController

I am developing a front-facing camera app on the iPad 2 using UIImagePickerController.
When I capture an image, it shows as flipped from left to right.
How do I correct this?
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
{
UIImagePickerController *imgPkr = [[UIImagePickerController alloc] init];
imgPkr.delegate = self;
imgPkr.sourceType = UIImagePickerControllerSourceTypeCamera;
imgPkr.cameraDevice=UIImagePickerControllerCameraDeviceFront;
UIImageView *anImageView=[[UIImageView alloc] initWithImage:[UIImage imageNamed:[NSString stringWithFormat:@"select%d.png",val]]];
anImageView.frame = CGRectMake(0, 0, anImageView.image.size.width, anImageView.image.size.height);
imgPkr.cameraOverlayView = anImageView;
[theApp.TabViewControllerObject presentModalViewController:imgPkr animated:YES];
[imgPkr release];
}
You can flip the image from the source image using this:
UIImage *flippedImage = [UIImage imageWithCGImage:picture.CGImage scale:picture.scale orientation:UIImageOrientationLeftMirrored];
Edit: Added Swift code
let flippedImage = UIImage(CGImage: picture.CGImage, scale: picture.scale, orientation:.LeftMirrored)
I had the same problem - and the solution above only got me half the answer, because the user had to approve the mirrored image before getting to the next page of my app - where I use the captured image after flipping it.
To solve this I had to flip the camera view whenever I switch to the front facing camera:
- (IBAction)flipCamera:(id)sender {
if(cameraUI.cameraDevice == UIImagePickerControllerCameraDeviceFront)
{
cameraUI.cameraDevice = UIImagePickerControllerCameraDeviceRear;
}
else {
cameraUI.cameraDevice = UIImagePickerControllerCameraDeviceFront;
}
cameraUI.cameraViewTransform = CGAffineTransformScale(cameraUI.cameraViewTransform, -1, 1);
}
Just to expand on this great answer, some typical complete code, Dec 2013, iOS 7 / Xcode 5. Does everything. You just need an icon (cameraToggle.PNG in the example).
-(void)showTheDeviceCamera
{
if ( ! [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] )
return;
// self.cameraController is a UIImagePickerController
self.cameraController = [[UIImagePickerController alloc] init];
self.cameraController.delegate = (id)self;
self.cameraController.mediaTypes = @[(NSString *)kUTTypeImage];
self.cameraController.allowsEditing = YES;
self.cameraController.sourceType = UIImagePickerControllerSourceTypeCamera;
[self presentViewController:self.cameraController animated:YES completion:NULL];
// Add front-rear toggle button MANUALLY, IF NECESSARY
// (You seem to usually get it for free, on iPhone, but
// need to add manually on an iPad.)
UIView *buttonView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"cameraToggle"]];
[buttonView sizeToFit];
buttonView.userInteractionEnabled = YES;
[self.cameraController.view addSubview:buttonView];
UITapGestureRecognizer *tap =
[[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(_frontRearButtonClicked)];
tap.numberOfTapsRequired = 1;
[buttonView addGestureRecognizer:tap];
// we'll add it at the top right .. could be anywhere you want
buttonView.center = CGPointMake(
self.cameraController.view.frame.size.width-buttonView.frame.size.width,
3.0 * buttonView.frame.size.height
);
}
-(void)_frontRearButtonClicked
{
[UIView transitionWithView:self.cameraController.view
duration:1.0
options:UIViewAnimationOptionAllowAnimatedContent | UIViewAnimationOptionTransitionFlipFromLeft
animations:^{
if ( self.cameraController.cameraDevice == UIImagePickerControllerCameraDeviceRear )
self.cameraController.cameraDevice = UIImagePickerControllerCameraDeviceFront;
else
self.cameraController.cameraDevice = UIImagePickerControllerCameraDeviceRear;
} completion:NULL];
}
As with the other answers, I had the same problem. As Yonatan Betzer mentioned, just flipping the final image is only half the answer, because the preview image displayed by the UIImagePickerController when you take a picture with the front camera is still inverted (mirrored).
Yonatan Betzer's answer works great, but he did not mention how or where to put the action to change the camera device.
Based on some code from the internet, I created a Pod to get this desired behavior:
https://github.com/lucasecf/LEMirroredImagePicker
Once installed, you just have to call these two lines of code together with your UIImagePickerController:
self.mirrorFrontPicker = [[LEMirroredImagePicker alloc] initWithImagePicker:pickerController];
[self.mirrorFrontPicker mirrorFrontCamera];
And that's it, as simple as that. You can find more information in the README at the GitHub link.
Just to add how I achieved this without subclassing UIImagePickerController and without adding extra buttons to the camera view.
Simply listen for this notification which is fired several times whenever the camera is changed:
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(cameraChanged:)
name:@"AVCaptureDeviceDidStartRunningNotification"
object:nil];
Then use this method to flip the camera view:
- (void)cameraChanged:(NSNotification *)notification
{
if(imagePicker.cameraDevice == UIImagePickerControllerCameraDeviceFront)
{
imagePicker.cameraViewTransform = CGAffineTransformIdentity;
imagePicker.cameraViewTransform = CGAffineTransformScale(imagePicker.cameraViewTransform, -1, 1);
} else {
imagePicker.cameraViewTransform = CGAffineTransformIdentity;
}
}
I know this question is really old, but it seems like this is still a common problem. Just set a CGAffineTransform on the cameraViewTransform property of a UIImagePickerController object.
let picker = UIImagePickerController()
picker.cameraViewTransform = CGAffineTransformScale(picker.cameraViewTransform, -1, 1)
Updated "bandog" answer for swift 4
let picker = UIImagePickerController()
picker.cameraViewTransform = picker.cameraViewTransform.scaledBy(x: -1, y: 1)
It took me a few hours, but I think I got there. Here is a working Swift 5.2 solution for getting the correct image (both in the ImagePicker preview and in the output).
//Registering to get a notification when the user takes a picture
override func viewDidLoad() {
super.viewDidLoad()
NotificationCenter.default.addObserver(forName: NSNotification.Name(rawValue: "_UIImagePickerControllerUserDidCaptureItem"), object: nil, queue: nil) { (notification) in
self.changePhotoOrientation()
}
}
//Changing image orientation for ImagePicker preview
func changePhotoOrientation() {
var subviews: [UIView] = [imagePicker.view]
while (!subviews.isEmpty) {
let subview = subviews.removeFirst()
subviews += subview.subviews
if (subview.isKind(of: UIImageView.self)) {
subview.transform = self.imagePicker.cameraViewTransform.scaledBy(x: -1, y: 1)
}
}
}
//Changing image orientation for the output image
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
if let userPickedImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage {
image = UIImage(cgImage: userPickedImage.cgImage!, scale: userPickedImage.scale, orientation: .leftMirrored)
}
}
It looks like AVCaptureDeviceDidStartRunningNotification is no longer available as a means of detecting camera device changes. Also, the cameraDevice property on UIImagePickerController doesn't work with KVO. However, it's still possible to detect camera device changes, as shown below (though long-term support for this solution isn't guaranteed as we're using KVO on a property that isn't explicitly marked as KVO-compliant).
import AVFoundation
var context = 0
override func viewDidLoad() {
super.viewDidLoad()
// Register for notifications
let notificationCenter = NSNotificationCenter.defaultCenter()
notificationCenter.addObserver(self, selector: #selector(handleCaptureSessionDidStartRunning(_:)), name: AVCaptureSessionDidStartRunningNotification, object: nil)
notificationCenter.addObserver(self, selector: #selector(handleCaptureSessionDidStopRunning(_:)), name: AVCaptureSessionDidStopRunningNotification, object: nil)
}
deinit {
NSNotificationCenter.defaultCenter().removeObserver(self)
}
func handleCaptureSessionDidStartRunning(notification: NSNotification) {
guard let session = notification.object as? AVCaptureSession else { return }
session.addObserver(self, forKeyPath: "inputs", options: [ .Old, .New ], context: &context)
}
func handleCaptureSessionDidStopRunning(notification: NSNotification) {
guard let session = notification.object as? AVCaptureSession else { return }
session.removeObserver(self, forKeyPath: "inputs")
}
override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
if context == &self.context {
if let inputs = change?[NSKeyValueChangeNewKey] as? [AnyObject], captureDevice = (inputs.first as? AVCaptureDeviceInput)?.device {
switch captureDevice.position {
case .Back: print("Switched to back camera")
case .Front: print("Switched to front camera")
case .Unspecified: break
}
}
} else {
super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
}
}
Swift 4+ version:
import AVFoundation
var context = 0
override func viewDidLoad() {
super.viewDidLoad()
// Register for notifications
let notificationCenter = NotificationCenter.default
notificationCenter.addObserver(forName: NSNotification.Name(rawValue: "AVCaptureSessionDidStartRunningNotification"), object: nil, queue: nil) { [weak self] notification in
self?.handleCaptureSessionDidStartRunning(notification: notification)
}
notificationCenter.addObserver(forName: NSNotification.Name(rawValue: "AVCaptureSessionDidStopRunningNotification"), object: nil, queue: nil) { [weak self] notification in
self?.handleCaptureSessionDidStopRunning(notification: notification)
}
}
deinit {
NotificationCenter.default.removeObserver(self)
}
func handleCaptureSessionDidStartRunning(notification: Notification){
guard let session = notification.object as? AVCaptureSession else { return }
session.addObserver(self, forKeyPath: "inputs", options: [ .old, .new ], context: &context)
}
func handleCaptureSessionDidStopRunning(notification: Notification){
guard let session = notification.object as? AVCaptureSession else { return }
session.removeObserver(self, forKeyPath: "inputs")
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if context == &self.context {
if let inputs = change?[NSKeyValueChangeKey.newKey] as? [AnyObject], let captureDevice = (inputs.first as? AVCaptureDeviceInput)?.device {
switch captureDevice.position {
case .back: print("Switched to back camera")
case .front: print("Switched to front camera")
case .unspecified: break
}
}
} else {
super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
}
}
Full working example in Swift, which answers the initial question of this post (tested on an iPhone 5c running iOS 8.2):
import UIKit
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, UIActionSheetDelegate {
#IBOutlet var myUIImageView: UIImageView!
var myUIImagePickerController: UIImagePickerController!
override func viewDidLoad() {
super.viewDidLoad()
}
override func viewWillAppear(animated: Bool) {
println("viewWillAppear(animated: Bool) method called.")
super.viewWillAppear(animated)
NSNotificationCenter.defaultCenter().removeObserver(self)
}
override func viewWillDisappear(animated: Bool) {
println("viewWillDisappear(animated: Bool) method called.")
super.viewWillDisappear(animated)
NSNotificationCenter.defaultCenter().addObserver(self, selector: "cameraChanged:", name: "AVCaptureDeviceDidStartRunningNotification", object: nil)
}
/* UIImagePickerControllerDelegate Section */
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [NSObject : AnyObject]) {
if(self.myUIImagePickerController.sourceType == UIImagePickerControllerSourceType.Camera) {
self.myUIImageView.image = info[UIImagePickerControllerEditedImage] as? UIImage
} else {
self.myUIImageView.image = info[UIImagePickerControllerOriginalImage] as? UIImage
}
self.dismissViewControllerAnimated(true, completion: nil)
}
func imagePickerControllerDidCancel(picker: UIImagePickerController) {
self.dismissViewControllerAnimated(true, completion: nil)
}
/*
You can choose to use one of the UIResponder methods:
touchesBegan, touchesMoved, touchesEnded etc, in order to detect the touch
on the UIImageView.
*/
override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
let touch: UITouch? = touches.anyObject() as? UITouch
if (touch?.view == myUIImageView) {
println("myUIImageView has been tapped by the user.")
self.takingAPictureUsingTheCamera()
}
}
func takingAPictureUsingTheCamera() {
self.myUIImagePickerController = UIImagePickerController()
self.myUIImagePickerController.delegate = self // Set the delegate
self.myUIImagePickerController.sourceType = UIImagePickerControllerSourceType.Camera
self.myUIImagePickerController.cameraDevice = UIImagePickerControllerCameraDevice.Front
// self.myUIImagePickerController.editing = true
self.myUIImagePickerController.allowsEditing = true
self.presentViewController(self.myUIImagePickerController, animated: true, completion: nil)
}
func cameraChanged(notification: NSNotification) {
println("cameraChanged(notification: NSNotification) method called.")
self.myUIImagePickerController.cameraViewTransform = CGAffineTransformIdentity
if(self.myUIImagePickerController.cameraDevice == UIImagePickerControllerCameraDevice.Front){
self.myUIImagePickerController.cameraViewTransform = CGAffineTransformScale(self.myUIImagePickerController.cameraViewTransform, -1, 1)
}
}
}// End class

How can I scan barcodes on iOS? [closed]

How can I simply scan barcodes on iPhone and/or iPad?
We produced the 'Barcodes' application for the iPhone. It can decode QR Codes. The source code is available from the zxing project; specifically, you want to take a look at the iPhone client and the partial C++ port of the core library. The port is a little old, from circa the 0.9 release of the Java code, but should still work reasonably well.
If you need to scan other formats, like 1D formats, you could continue the port of the Java code within this project to C++.
EDIT: Barcodes and the iPhone code in the project were retired around the start of 2014.
As of the release of iOS 7, you no longer need to use an external framework or library. The iOS ecosystem with AVFoundation now fully supports scanning almost every code, from QR over EAN to UPC.
Just have a look at the Tech Note and the AVFoundation programming guide. AVMetadataObjectTypeQRCode is your friend.
Here is a nice tutorial which shows it step by step:
iPhone QR code scan library iOS7
Just a little example on how to set it up:
#pragma mark -
#pragma mark AVFoundationScanSetup
- (void) setupScanner;
{
self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
self.input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
self.session = [[AVCaptureSession alloc] init];
self.output = [[AVCaptureMetadataOutput alloc] init];
[self.session addOutput:self.output];
[self.session addInput:self.input];
[self.output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
self.output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
self.preview = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
self.preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
self.preview.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height);
AVCaptureConnection *con = self.preview.connection;
con.videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
[self.view.layer insertSublayer:self.preview atIndex:0];
}
There are two major libraries:
ZXing, a library written in Java and then ported to Objective-C / C++ (QR code only). Another port to ObjC has been done by TheLevelUp: ZXingObjC
ZBar an open source software for reading bar codes, C based.
According to my experiments, ZBar is far more accurate and fast than ZXing, at least on iPhone.
You can find another native iOS solution below, using Swift 4 and Xcode 9. The native AVFoundation framework is used in this solution.
The first part is a subclass of UIViewController with the related setup and handler functions for AVCaptureSession.
import UIKit
import AVFoundation
class BarCodeScannerViewController: UIViewController {
let captureSession = AVCaptureSession()
var videoPreviewLayer: AVCaptureVideoPreviewLayer!
var initialized = false
let barCodeTypes = [AVMetadataObject.ObjectType.upce,
AVMetadataObject.ObjectType.code39,
AVMetadataObject.ObjectType.code39Mod43,
AVMetadataObject.ObjectType.code93,
AVMetadataObject.ObjectType.code128,
AVMetadataObject.ObjectType.ean8,
AVMetadataObject.ObjectType.ean13,
AVMetadataObject.ObjectType.aztec,
AVMetadataObject.ObjectType.pdf417,
AVMetadataObject.ObjectType.itf14,
AVMetadataObject.ObjectType.dataMatrix,
AVMetadataObject.ObjectType.interleaved2of5,
AVMetadataObject.ObjectType.qr]
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
setupCapture()
// set observer for UIApplicationWillEnterForeground, so we know when to start the capture session again
NotificationCenter.default.addObserver(self,
selector: #selector(willEnterForeground),
name: .UIApplicationWillEnterForeground,
object: nil)
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
// this view is no longer topmost in the app, so we don't need a callback if we return to the app.
NotificationCenter.default.removeObserver(self,
name: .UIApplicationWillEnterForeground,
object: nil)
}
// This is called when we return from another app to the scanner view
@objc func willEnterForeground() {
setupCapture()
}
func setupCapture() {
var success = false
var accessDenied = false
var accessRequested = false
let authorizationStatus = AVCaptureDevice.authorizationStatus(for: .video)
if authorizationStatus == .notDetermined {
// permission dialog not yet presented, request authorization
accessRequested = true
AVCaptureDevice.requestAccess(for: .video,
completionHandler: { (granted:Bool) -> Void in
self.setupCapture();
})
return
}
if authorizationStatus == .restricted || authorizationStatus == .denied {
accessDenied = true
}
if initialized {
success = true
} else {
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera,
.builtInTelephotoCamera,
.builtInDualCamera],
mediaType: .video,
position: .unspecified)
if let captureDevice = deviceDiscoverySession.devices.first {
do {
let videoInput = try AVCaptureDeviceInput(device: captureDevice)
captureSession.addInput(videoInput)
success = true
} catch {
NSLog("Cannot construct capture device input")
}
} else {
NSLog("Cannot get capture device")
}
}
if success {
DispatchQueue.global().async {
self.captureSession.startRunning()
DispatchQueue.main.async {
let captureMetadataOutput = AVCaptureMetadataOutput()
self.captureSession.addOutput(captureMetadataOutput)
let newSerialQueue = DispatchQueue(label: "barCodeScannerQueue") // in iOS 11 you can use main queue
captureMetadataOutput.setMetadataObjectsDelegate(self, queue: newSerialQueue)
captureMetadataOutput.metadataObjectTypes = self.barCodeTypes
self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
self.videoPreviewLayer.videoGravity = .resizeAspectFill
self.videoPreviewLayer.frame = self.view.layer.bounds
self.view.layer.addSublayer(self.videoPreviewLayer)
}
}
initialized = true
} else {
// Only show a dialog if we have not just asked the user for permission to use the camera. Asking permission
// sends its own dialog to the user
if !accessRequested {
// Generic message if we cannot figure out why we cannot establish a camera session
var message = "Cannot access camera to scan bar codes"
#if (arch(i386) || arch(x86_64)) && (!os(macOS))
message = "You are running on the simulator, which does not hae a camera device. Try this on a real iOS device."
#endif
if accessDenied {
message = "You have denied this app permission to access to the camera. Please go to settings and enable camera access permission to be able to scan bar codes"
}
let alertPrompt = UIAlertController(title: "Cannot access camera", message: message, preferredStyle: .alert)
let confirmAction = UIAlertAction(title: "OK", style: .default, handler: { (action) -> Void in
self.navigationController?.popViewController(animated: true)
})
alertPrompt.addAction(confirmAction)
self.present(alertPrompt, animated: true, completion: nil)
}
}
}
func handleCapturedOutput(metadataObjects: [AVMetadataObject]) {
if metadataObjects.count == 0 {
return
}
guard let metadataObject = metadataObjects.first as? AVMetadataMachineReadableCodeObject else {
return
}
if barCodeTypes.contains(metadataObject.type) {
if let metaDataString = metadataObject.stringValue {
captureSession.stopRunning()
displayResult(code: metaDataString)
return
}
}
}
func displayResult(code: String) {
let alertPrompt = UIAlertController(title: "Bar code detected", message: code, preferredStyle: .alert)
if let url = URL(string: code) {
let confirmAction = UIAlertAction(title: "Launch URL", style: .default, handler: { (action) -> Void in
UIApplication.shared.open(url, options: [:], completionHandler: { (result) in
if result {
NSLog("opened url")
} else {
let alertPrompt = UIAlertController(title: "Cannot open url", message: nil, preferredStyle: .alert)
let confirmAction = UIAlertAction(title: "OK", style: .default, handler: { (action) -> Void in
})
alertPrompt.addAction(confirmAction)
self.present(alertPrompt, animated: true, completion: {
self.setupCapture()
})
}
})
})
alertPrompt.addAction(confirmAction)
}
let cancelAction = UIAlertAction(title: "Cancel", style: .cancel, handler: { (action) -> Void in
self.setupCapture()
})
alertPrompt.addAction(cancelAction)
present(alertPrompt, animated: true, completion: nil)
}
}
The second part is the extension of our UIViewController subclass conforming to AVCaptureMetadataOutputObjectsDelegate, where we catch the captured outputs.
extension BarCodeScannerViewController: AVCaptureMetadataOutputObjectsDelegate {
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
handleCapturedOutput(metadataObjects: metadataObjects)
}
}
Update for Swift 4.2:
.UIApplicationWillEnterForeground changes to UIApplication.willEnterForegroundNotification.
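For example, the observer registration from viewDidAppear above becomes:
NotificationCenter.default.addObserver(self,
selector: #selector(willEnterForeground),
name: UIApplication.willEnterForegroundNotification,
object: nil)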
If support for the iPad 2 or iPod Touch is important for your application, I'd choose a barcode scanner SDK that can decode barcodes in blurry images, such as our Scandit barcode scanner SDK for iOS and Android. Decoding blurry barcode images is also helpful on phones with autofocus cameras because the user does not have to wait for the autofocus to kick in.
Scandit comes with a free community price plan and also has a product API that makes it easy to convert barcode numbers into product names.
(Disclaimer: I'm a co-founder of Scandit)
The problem with the iPhone camera is that the first models (of which there are tons in use) have a fixed-focus camera that cannot take in-focus pictures at distances under 2 ft. The images are blurry and distorted, and if taken from a greater distance there is not enough detail/information in the barcode.
A few companies have developed iPhone apps that can compensate for that by using advanced de-blurring technologies. You can find those applications on the Apple App Store: pic2shop, RedLaser and ShopSavvy. All of the companies have announced that they also have SDKs available - some for free or on very preferential terms; check that out.
With Swift 5 it's simple and super fast!
You just need to add the CocoaPod "BarcodeScanner". Here is the full code:
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '12.0'
target 'Simple BarcodeScanner'
do
pod 'BarcodeScanner'
end
Make sure to add the camera permission to your .plist file:
<key>NSCameraUsageDescription</key>
<string>Camera usage description</string>
Then add the scanner and handle the result in your ViewController this way:
import UIKit
import BarcodeScanner
class ViewController: UIViewController, BarcodeScannerCodeDelegate, BarcodeScannerErrorDelegate, BarcodeScannerDismissalDelegate {
override func viewDidLoad() {
super.viewDidLoad()
let viewController = BarcodeScannerViewController()
viewController.codeDelegate = self
viewController.errorDelegate = self
viewController.dismissalDelegate = self
present(viewController, animated: true, completion: nil)
}
func scanner(_ controller: BarcodeScannerViewController, didCaptureCode code: String, type: String) {
print("Product's Bar code is :", code)
controller.dismiss(animated: true, completion: nil)
}
func scanner(_ controller: BarcodeScannerViewController, didReceiveError error: Error) {
print(error)
}
func scannerDidDismiss(_ controller: BarcodeScannerViewController) {
controller.dismiss(animated: true, completion: nil)
}
}
If you still have any questions or challenges, please check the sample application here with full source code.
I believe this can be done using AVFoundation; here is sample code to do it:
import UIKit
import AVFoundation
class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate
{
@IBOutlet weak var lblQRCodeResult: UILabel!
@IBOutlet weak var lblQRCodeLabel: UILabel!
var objCaptureSession:AVCaptureSession?
var objCaptureVideoPreviewLayer:AVCaptureVideoPreviewLayer?
var vwQRCode:UIView?
override func viewDidLoad() {
super.viewDidLoad()
self.configureVideoCapture()
self.addVideoPreviewLayer()
self.initializeQRView()
}
func configureVideoCapture() {
let objCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
var error:NSError?
let objCaptureDeviceInput: AnyObject!
do {
objCaptureDeviceInput = try AVCaptureDeviceInput(device: objCaptureDevice) as AVCaptureDeviceInput
} catch let error1 as NSError {
error = error1
objCaptureDeviceInput = nil
}
objCaptureSession = AVCaptureSession()
objCaptureSession?.addInput(objCaptureDeviceInput as! AVCaptureInput)
let objCaptureMetadataOutput = AVCaptureMetadataOutput()
objCaptureSession?.addOutput(objCaptureMetadataOutput)
objCaptureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
objCaptureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
}
func addVideoPreviewLayer() {
objCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: objCaptureSession)
objCaptureVideoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
objCaptureVideoPreviewLayer?.frame = view.layer.bounds
self.view.layer.addSublayer(objCaptureVideoPreviewLayer!)
objCaptureSession?.startRunning()
self.view.bringSubviewToFront(lblQRCodeResult)
self.view.bringSubviewToFront(lblQRCodeLabel)
}
func initializeQRView() {
vwQRCode = UIView()
vwQRCode?.layer.borderColor = UIColor.redColor().CGColor
vwQRCode?.layer.borderWidth = 5
self.view.addSubview(vwQRCode!)
self.view.bringSubviewToFront(vwQRCode!)
}
func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
if metadataObjects == nil || metadataObjects.count == 0 {
vwQRCode?.frame = CGRectZero
lblQRCodeResult.text = "QR Code wasn't found"
return
}
let objMetadataMachineReadableCodeObject = metadataObjects[0] as! AVMetadataMachineReadableCodeObject
if objMetadataMachineReadableCodeObject.type == AVMetadataObjectTypeQRCode {
let objBarCode = objCaptureVideoPreviewLayer?.transformedMetadataObjectForMetadataObject(objMetadataMachineReadableCodeObject as AVMetadataMachineReadableCodeObject) as! AVMetadataMachineReadableCodeObject
vwQRCode?.frame = objBarCode.bounds;
if objMetadataMachineReadableCodeObject.stringValue != nil {
lblQRCodeResult.text = objMetadataMachineReadableCodeObject.stringValue
}
}
}
}
Here is some simple code:
func scanbarcode()
{
view.backgroundColor = UIColor.blackColor()
captureSession = AVCaptureSession()
let videoCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
let videoInput: AVCaptureDeviceInput
do {
videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
} catch {
return
}
if (captureSession.canAddInput(videoInput)) {
captureSession.addInput(videoInput)
} else {
failed();
return;
}
let metadataOutput = AVCaptureMetadataOutput()
if (captureSession.canAddOutput(metadataOutput)) {
captureSession.addOutput(metadataOutput)
metadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypePDF417Code]
} else {
failed()
return
}
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession);
previewLayer.frame = view.layer.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
view.layer.addSublayer(previewLayer);
view.addSubview(closeBtn)
view.addSubview(backimg)
captureSession.startRunning();
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func failed() {
let ac = UIAlertController(title: "Scanning not supported", message: "Your device does not support scanning a code from an item. Please use a device with a camera.", preferredStyle: .Alert)
ac.addAction(UIAlertAction(title: "OK", style: .Default, handler: nil))
presentViewController(ac, animated: true, completion: nil)
captureSession = nil
}
override func viewWillAppear(animated: Bool) {
super.viewWillAppear(animated)
if (captureSession?.running == false) {
captureSession.startRunning();
}
}
override func viewWillDisappear(animated: Bool) {
super.viewWillDisappear(animated)
if (captureSession?.running == true) {
captureSession.stopRunning();
}
}
func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
captureSession.stopRunning()
if let metadataObject = metadataObjects.first {
let readableObject = metadataObject as! AVMetadataMachineReadableCodeObject;
AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
foundCode(readableObject.stringValue);
}
// dismissViewControllerAnimated(true, completion: nil)
}
func foundCode(code: String) {
var createAccountErrorAlert: UIAlertView = UIAlertView()
createAccountErrorAlert.delegate = self
createAccountErrorAlert.title = "Alert"
createAccountErrorAlert.message = code
createAccountErrorAlert.addButtonWithTitle("ok")
createAccountErrorAlert.addButtonWithTitle("Retry")
createAccountErrorAlert.show()
NSUserDefaults.standardUserDefaults().setObject(code, forKey: "barcode")
NSUserDefaults.standardUserDefaults().synchronize()
ItemBarcode = code
print(code)
}
override func prefersStatusBarHidden() -> Bool {
return true
}
override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
return .Portrait
}
If you are developing for iOS 10.2+ with Swift 4, you can try my solution. I mixed up this and this tutorial and came up with a ViewController which scans a QR code and prints it out. I also have a Switch in my UI to toggle the camera light, which might be helpful as well. For now I have only tested it on an iPhone SE; please let me know if it doesn't work on newer iPhones.
Here you go:
import UIKit
import AVFoundation
class QRCodeScanner: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
let captureSession: AVCaptureSession = AVCaptureSession()
var videoPreviewLayer: AVCaptureVideoPreviewLayer?
let qrCodeFrameView: UIView = UIView()
var captureDevice: AVCaptureDevice?
override func viewDidLoad() {
super.viewDidLoad()
// Get the back-facing camera for capturing videos
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera], mediaType: AVMediaType.video, position: .back)
captureDevice = deviceDiscoverySession.devices.first
if captureDevice == nil {
print("Failed to get the camera device")
return
}
do {
// Get an instance of the AVCaptureDeviceInput class using the previous device object.
let input = try AVCaptureDeviceInput(device: captureDevice!)
// Set the input device on the capture session.
captureSession.addInput(input)
// Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session.
let captureMetadataOutput = AVCaptureMetadataOutput()
captureSession.addOutput(captureMetadataOutput)
// Set delegate and use the default dispatch queue to execute the call back
captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
captureMetadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.qr]
// Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
if let videoPreviewLayer = videoPreviewLayer {
videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
videoPreviewLayer.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer)
// Start video capture.
captureSession.startRunning()
if let hasFlash = captureDevice?.hasFlash, let hasTorch = captureDevice?.hasTorch {
if hasFlash && hasTorch {
view.bringSubview(toFront: bottomBar)
try captureDevice?.lockForConfiguration()
}
}
}
// QR Code Overlay
qrCodeFrameView.layer.borderColor = UIColor.green.cgColor
qrCodeFrameView.layer.borderWidth = 2
view.addSubview(qrCodeFrameView)
view.bringSubview(toFront: qrCodeFrameView)
} catch {
// If any error occurs, simply print it out and don't continue any more.
print("Error: \(error)")
return
}
}
// MARK: Buttons and Switch
@IBAction func switchFlashChanged(_ sender: UISwitch) {
do {
if sender.isOn {
captureDevice?.torchMode = .on
} else {
captureDevice?.torchMode = .off
}
}
}
// MARK: AVCaptureMetadataOutputObjectsDelegate
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
// Check if the metadataObjects array is not nil and it contains at least one object.
if metadataObjects.count == 0 {
qrCodeFrameView.frame = CGRect.zero
return
}
// Get the metadata object.
let metadataObj = metadataObjects[0] as! AVMetadataMachineReadableCodeObject
if metadataObj.type == AVMetadataObject.ObjectType.qr {
// If the found metadata is equal to the QR code metadata then update the status label's text and set the bounds
let barCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj)
qrCodeFrameView.frame = barCodeObject!.bounds
print("QR Code: \(metadataObj.stringValue)")
}
}
}
You can check ZBarSDK to read QR codes and EAN/ISBN codes; it's simple to integrate. Try the following code.
- (void)scanBarcodeWithZBarScanner
{
// ADD: present a barcode reader that scans from the camera feed
ZBarReaderViewController *reader = [ZBarReaderViewController new];
reader.readerDelegate = self;
reader.supportedOrientationsMask = ZBarOrientationMaskAll;
ZBarImageScanner *scanner = reader.scanner;
// TODO: (optional) additional reader configuration here
// EXAMPLE: disable rarely used I2/5 to improve performance
[scanner setSymbology: ZBAR_I25
config: ZBAR_CFG_ENABLE
to: 0];
//Get the return value from controller
[reader setReturnBlock:^(BOOL value) {
}];
}
and in didFinishPickingMediaWithInfo we get the barcode value.
- (void) imagePickerController: (UIImagePickerController*) reader
didFinishPickingMediaWithInfo: (NSDictionary*) info
{
// ADD: get the decode results
id<NSFastEnumeration> results =
[info objectForKey: ZBarReaderControllerResults];
ZBarSymbol *symbol = nil;
for(symbol in results)
// EXAMPLE: just grab the first barcode
break;
// EXAMPLE: do something useful with the barcode data
barcodeValue = symbol.data;
// EXAMPLE: do something useful with the barcode image
barcodeImage = [info objectForKey:UIImagePickerControllerOriginalImage];
[_barcodeIV setImage:barcodeImage];
//set the values for to TextFields
[self setBarcodeValue:YES];
// ADD: dismiss the controller (NB dismiss from the *reader*!)
[reader dismissViewControllerAnimated:YES completion:nil];
}
The simplest way is to use a 3rd-party framework with a minimal UI that can be improved. Check QRCodeScanner83.
You can simply use the following code (check the documentation on how to create the view controller in your storyboard):
import QRCodeScanner83
guard let vc = UIStoryboard(name: "Main", bundle: nil).instantiateViewController(identifier: "CodeScannerViewController") as? CodeScannerViewController else {
return
}
vc.callbackCodeScanned = { code in
print("SCANNED CODE: \(code)")
vc.dismiss(animated: true, completion: nil)
}
self.present(vc, animated: true, completion: nil)
