How to scan for QR codes on button press? - iOS

I am using the code provided by https://www.hackingwithswift.com/example-code/media/how-to-scan-a-qr-code to make my own scanning app, but I would like scanning to occur on a button press. To do this, I moved the viewDidLoad() part from the tutorial into its own function:
func cameraScanningLayer(){
view.backgroundColor = UIColor.blackColor()
captureSession = AVCaptureSession()
let videoCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
let videoInput: AVCaptureDeviceInput
do {
videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
} catch {
return
}
if (captureSession.canAddInput(videoInput)) {
captureSession.addInput(videoInput)
} else {
failed();
return;
}
let metadataOutput = AVCaptureMetadataOutput()
if (captureSession.canAddOutput(metadataOutput)) {
captureSession.addOutput(metadataOutput)
metadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
// need to scan barcode + QRcode
metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode,AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypePDF417Code,AVMetadataObjectTypeCode128Code,AVMetadataObjectTypeCode39Code]
} else {
failed()
return
}
// Previewlayer with camera
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession);
previewLayer.frame = viewForLayer.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
viewForLayer.layer.addSublayer(previewLayer);
captureSession.startRunning();
}
And a button action calls the function:
func buttonScanAction() {
print("Scan")
scanEnabled = true // I'd like to use some kind of bool/switch here
self.cameraScanningLayer()
}
The problems I have are:
1) On load, the camera is not in view.
2) After the button is pressed, the camera is in view, but it always scans automatically.
So I thought of using a global:
var scanEnabled: Bool = false
Then, when the button is clicked, set it to true and the scanning is enabled.
For reference here is a sketch:
EDIT
My quick fix, which might not be the right way to do it: I took the whole metadataOutput block (from let metadataOutput = AVCaptureMetadataOutput() down to the else { failed(); return }) and wrapped it in an if statement:
if (scanEnabled == true) {
let metadataOutput = AVCaptureMetadataOutput()
if (captureSession.canAddOutput(metadataOutput)) {
captureSession.addOutput(metadataOutput)
metadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
// need to scan barcodes + QR codes
metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode,AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypePDF417Code,AVMetadataObjectTypeCode128Code,AVMetadataObjectTypeCode39Code]
scanEnabled = false
} else {
failed()
return
}
}

Author of that tutorial here. My method was to use a dedicated scanning view controller, but I guess you want to unify that with your existing view controller – and that's fine. Both approaches work.
If you want to show the camera interface all the time (even when not actively recognising QR codes) then your plan to use a boolean to track whether scanning is enabled is a good one. My example code has a foundCode() method that gets called, and also calls dismissViewControllerAnimated() when codes are found.
In your version, you need to make foundCode() do all the work of stopping the scan, handling the dismissal, etc. You can then add a check for your scanEnabled boolean in one place.
Something like this ought to do it:
func foundCode(code: String) {
if scanEnabled == true {
print(code)
captureSession.stopRunning()
AudioServicesPlaySystemSound(SystemSoundID(kSystemSoundID_Vibrate))
dismissViewControllerAnimated(true, completion: nil)
}
}
If you wanted to, you could move the scanEnabled check up into didOutputMetadataObjects to save the unnecessary method call.
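For instance, a minimal sketch of that, using the Swift 2-era delegate signature to match the rest of the code here (scanEnabled and foundCode() are the names already in use above):
func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
    // Ignore anything the camera sees until the button has enabled scanning
    guard scanEnabled else { return }

    if let readableObject = metadataObjects.first as? AVMetadataMachineReadableCodeObject {
        foundCode(readableObject.stringValue)
    }
}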

Thanks @alex for this question. I also use the excellent class created by twostraws (very useful, Paul, thank you so much) and also needed to trigger code scanning only from a button action. My solution was the following:
I define metadataOutput as a property and only set its delegate in the button action:
var metadataOutput: AVCaptureMetadataOutput!
In the viewDidLoad method:
metadataOutput = AVCaptureMetadataOutput()
if (captureSession.canAddOutput(metadataOutput)) {
captureSession.addOutput(metadataOutput)
// This line was removed: metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
metadataOutput.metadataObjectTypes = [.qr]
} else {
failed()
return
}
Then, in the button action, I attach the delegate:
func buttonScanAction() {
print("Scan")
metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
}
When I navigate away from the view, I stop the camera and remove the delegate like this:
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
if (captureSession?.isRunning == true) {
captureSession.stopRunning()
}
metadataOutput.setMetadataObjectsDelegate(nil, queue: DispatchQueue.main)
}
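For completeness, here is a sketch of the matching delegate callback in the same view controller (modern signature, matching the DispatchQueue/.qr style above). It detaches the delegate again after the first hit, so the next scan needs another button press; found(code:) stands in for whatever handling you already have:
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    guard let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
        let stringValue = object.stringValue else { return }

    // Detach the delegate so nothing more is reported until the button is pressed again
    output.setMetadataObjectsDelegate(nil, queue: DispatchQueue.main)

    // found(code:) is whatever handling you already do, e.g. from the tutorial's class
    found(code: stringValue)
}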

Related

Device torch turns off when starting AVCaptureSession

I'm using AVCaptureSession to capture video.
I want to keep the torch on during the whole session, but once the session is started, the light automatically turns off.
There are a lot of posts here showing how to turn on the torch. That works, unless the capture session is started.
Here's the way I start the session:
guard let camera = AVCaptureDevice.default(for: .video) else { return }
self.captureSession.beginConfiguration()
let deviceInput = try AVCaptureDeviceInput(device: camera)
self.captureSession.addInput(deviceInput)
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "com.axelife.axcapturer.samplebufferdelegate"))
self.captureSession.addOutput(videoOutput)
try camera.setLight(on: true)
self.captureSession.commitConfiguration()
DispatchQueue(label: "capturesession").async {
self.captureSession.startRunning()
}
And my code to turn on the light
extension AVCaptureDevice {
func setLight(on: Bool) throws {
try self.lockForConfiguration()
if on {
try self.setTorchModeOn(level: 1)
}
else {
self.torchMode = .off
}
self.unlockForConfiguration()
}
}
With that code, the light turns on for less than 0.5 seconds, then turns back off automatically.
OK, I figured it out.
The torch simply must be turned on after the session has started.
So instead of:
try camera.setLight(on: true)
self.captureSession.commitConfiguration()
DispatchQueue(label: "capturesession").async {
self.captureSession.startRunning()
}
just do
self.captureSession.commitConfiguration()
DispatchQueue(label: "capturesession").async {
self.captureSession.startRunning()
try? camera.setLight(on: true) // the async closure isn't a throwing context, so handle or ignore the error here
}
For me, the torch turns off when I change the videoOrientation of the capture preview layer, so I turn it on again after that happens. (The DispatchQueue.main.async is important; for some reason it doesn't work if I leave it out.)
previewConnection.videoOrientation = orientation
do {
try captureDevice.lockForConfiguration()
} catch {
print(error)
}
DispatchQueue.main.async { [weak self] in
self?.captureDevice.torchMode = .on
self?.captureDevice.unlockForConfiguration()
}

Camera preview layer does not show up Swift 4

I am creating a ViewController in which I want a fairly small UIView in the corner to display the camera preview. I am using a function to do this. However, when I pass the small UIView into the function, the camera preview does not show up. The weird thing is that if I tell the function to display the preview on self.view, everything works fine and I can see the camera preview. For this reason I think the problem is with the way I insert the layer, or something similar.
Here is the function I am using to display the preview...
func displayPreview(on view: UIView) throws {
guard let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }
self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
self.previewLayer?.connection?.videoOrientation = .portrait
view.layer.insertSublayer(self.previewLayer!, at: 0)
self.previewLayer?.frame = view.frame
}
I call this function from inside another function which handles setting up the capture session and other similar things.
func configureCameraController() {
cameraController.prepare {(error) in
if let error = error {
print("ERROR")
print(error)
}else{
}
print("hello")
try! self.cameraController.displayPreview(on: self.mirrorView)
}
}
configureCameraController()
How can I get the camera preview layer to show up on the smaller UIView?
Can you try adding the following
let rootLayer: CALayer = self.yourSmallerView.layer
rootLayer.masksToBounds = true
self.previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(self.previewLayer)
in place of
view.layer.insertSublayer(self.previewLayer!, at: 0)
Also ensure yourSmallerView.contentMode = UIViewContentMode.scaleToFill.
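For reference, here is roughly how that suggestion looks folded back into the displayPreview(on:) method from the question (just a sketch, assuming the same captureSession/previewLayer properties and CameraControllerError type):
func displayPreview(on view: UIView) throws {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        throw CameraControllerError.captureSessionIsMissing
    }

    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.videoGravity = .resizeAspectFill
    previewLayer.connection?.videoOrientation = .portrait

    view.layer.masksToBounds = true
    // Use bounds, not frame: the layer lives in the smaller view's own coordinate space
    previewLayer.frame = view.bounds
    view.layer.addSublayer(previewLayer)
    self.previewLayer = previewLayer
}
The key difference is frame vs bounds: frame is expressed in the superview's coordinate space, while a sublayer of the small view needs that view's bounds.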

How to record IOS screen programmatically

Is there any way to record the iOS screen programmatically, meaning whatever activity you are doing, like tapping buttons or scrolling table views?
Even if a video is playing, will that be captured too, along with the other activity?
I have tried these:
https://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
https://github.com/alskipp/ASScreenRecorder
but these libraries don't provide quality video, and I need quality video.
The issue is that when a video is playing in the background and I capture the screen, the recording is not smooth: it shows one frame of the video, then the next frame 3-4 seconds later, and so on. The quality is also poor and blurred.
As of iOS 9, it looks like ReplayKit is available to greatly simplify this.
https://developer.apple.com/reference/replaykit
https://code.tutsplus.com/tutorials/ios-9-an-introduction-to-replaykit--cms-25458
Update: This may be less relevant now that iOS 11 has a built-in screen recorder, but the following Swift 3 code worked for me:
import ReplayKit
@IBAction func toggleRecording(_ sender: UIBarButtonItem) {
let r = RPScreenRecorder.shared()
guard r.isAvailable else {
print("ReplayKit unavailable")
return
}
if r.isRecording {
self.stopRecording(sender, r)
}
else {
self.startRecording(sender, r)
}
}
func startRecording(_ sender: UIBarButtonItem, _ r: RPScreenRecorder) {
r.startRecording(handler: { (error: Error?) -> Void in
if error == nil { // Recording has started
sender.title = "Stop"
} else {
// Handle error
print(error?.localizedDescription ?? "Unknown error")
}
})
}
func stopRecording(_ sender: UIBarButtonItem, _ r: RPScreenRecorder) {
r.stopRecording( handler: { previewViewController, error in
sender.title = "Record"
if let pvc = previewViewController {
if UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiom.pad {
pvc.modalPresentationStyle = UIModalPresentationStyle.popover
pvc.popoverPresentationController?.sourceRect = CGRect.zero
pvc.popoverPresentationController?.sourceView = self.view
}
pvc.previewControllerDelegate = self
self.present(pvc, animated: true, completion: nil)
}
else if let error = error {
print(error.localizedDescription)
}
})
}
// MARK: RPPreviewViewControllerDelegate
func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
previewController.dismiss(animated: true, completion: nil)
}
ReplayKit is available, although you are not allowed to access the resulting video file directly. The only way I've found so far is to take a series of screenshots (stored in an array of images) and then convert those images to a video. It's not very efficient from a performance standpoint, but it might work when you don't really need 30/60 fps screen recording and are OK with 6-20 fps. Here's the full example.
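If a low frame rate is acceptable, a rough sketch of that screenshot approach could look like the following (timer-driven snapshots of a view; turning the collected images into a movie with AVAssetWriter is not shown, and note that drawHierarchy won't pick up some video/Metal content):
import UIKit

final class ViewFrameGrabber {
    private(set) var frames: [UIImage] = []
    private var timer: Timer?

    // Capture roughly `fps` snapshots per second of the given view.
    func start(capturing view: UIView, fps: Double = 10) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / fps, repeats: true) { [weak self, weak view] _ in
            guard let self = self, let view = view else { return }
            let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
            let image = renderer.image { _ in
                view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
            }
            self.frames.append(image)
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }
}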
Check out ScreenCaptureView; it has video-recording support built in (see link).
What this does is it saves the contents of a UIView to a UIImage. The author suggests you can save a video of the app in use by passing the frames through AVCaptureSession.
I believe it hasn't been tested with an OpenGL subview, but assuming that it works you might be able to modify it slightly to include audio and then you'd be set.
AVCaptureSession Sample
AVCaptureSession Reference
import UIKit
import AVFoundation
class ViewController: UIViewController {
let captureSession = AVCaptureSession()
let stillImageOutput = AVCaptureStillImageOutput()
var error: NSError?
override func viewDidLoad() {
super.viewDidLoad()
let devices = AVCaptureDevice.devices().filter{ $0.hasMediaType(AVMediaTypeVideo) && $0.position == AVCaptureDevicePosition.Back }
if let captureDevice = devices.first as? AVCaptureDevice {
captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
captureSession.startRunning()
stillImageOutput.outputSettings = [AVVideoCodecKey:AVVideoCodecJPEG]
if captureSession.canAddOutput(stillImageOutput) {
captureSession.addOutput(stillImageOutput)
}
if let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
previewLayer.bounds = view.bounds
previewLayer.position = CGPointMake(view.bounds.midX, view.bounds.midY)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
let cameraPreview = UIView(frame: CGRectMake(0.0, 0.0, view.bounds.size.width, view.bounds.size.height))
cameraPreview.layer.addSublayer(previewLayer)
cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action:"saveToCamera:"))
view.addSubview(cameraPreview)
}
}
}
func saveToCamera(sender: UITapGestureRecognizer) {
if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
(imageDataSampleBuffer, error) -> Void in
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
UIImageWriteToSavedPhotosAlbum(UIImage(data: imageData), nil, nil, nil)
}
}
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
}
}
You can use this library to record a view: screen-cap-view, available on GitHub and written in Objective-C.
And to use it in Swift:
--> Drag and drop the .m and .h files into your Xcode project.
--> Make a bridging header and import the file in it: #import "IAScreenCaptureView.h"
--> Then give the view this class in Interface Builder and create an IBOutlet for it, something like this:
@IBOutlet weak var contentView: IAScreenCaptureView!
--> Then finally, simply start and stop recording of the view wherever and whenever you want:
To start the recording: contentView.startRecording()
To stop the recording: contentView.stopRecording()
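Putting those steps together, the wiring might look roughly like this (the outlet and action names are just examples; startRecording()/stopRecording() are the methods mentioned above):
import UIKit

class RecordingViewController: UIViewController {
    @IBOutlet weak var contentView: IAScreenCaptureView!
    private var isCapturing = false

    @IBAction func toggleRecording(_ sender: UIButton) {
        if isCapturing {
            contentView.stopRecording()
        } else {
            contentView.startRecording()
        }
        isCapturing = !isCapturing
    }
}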
Hope this helps, happy coding.

Crash in AVCaptureSession when adding an AVCaptureDeviceInput

I have a weird crash showing on Crashlytics when setting up a camera session.
The stacktrace shows that the crash occurred at the method addInput.
func setupCamSession(){
self.captureSession = AVCaptureSession()
self.cameraView.setSession(self.captureSession)
self.sessionQueue = dispatch_queue_create("com.myapp.camera_queue", DISPATCH_QUEUE_SERIAL)
self.setupResult = .Success
switch AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo){
case .Authorized:
break //already we set it to success
case .NotDetermined:
// The user has not yet been presented with the option to grant video access.
// We suspend the session queue to delay session setup until the access request has completed
dispatch_suspend(self.sessionQueue)
AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted) -> Void in
if ( !granted ) {
self.setupResult = .CameraNotAuthorized
}
dispatch_resume(self.sessionQueue)
})
default:
self.setupResult = .CameraNotAuthorized
}
dispatch_async(self.sessionQueue){
if self.setupResult != .Success{
return
}
//link input to captureSession
guard let videoDevice = self.deviceWithMediaType(AVMediaTypeVideo, position: AVCaptureDevicePosition.Back) else{
AppLog("Video Device Unavailable")
self.setupResult = .SessionConfigurationFailed
return
}
var videoDeviceInput: AVCaptureDeviceInput!
do {
videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
}catch {
AppLog("Could not create video device input")
}
/////////////////////////////////////////////////////
self.captureSession.beginConfiguration()
if self.captureSession.canAddInput(videoDeviceInput){
self.captureSession.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
self.videoDevice = videoDevice
dispatch_async(dispatch_get_main_queue()){
//update the cameraView layer on the main thread
let previewLayer : AVCaptureVideoPreviewLayer = self.cameraView.layer as! AVCaptureVideoPreviewLayer
previewLayer.connection.videoOrientation = AVCaptureVideoOrientation(ui:UIApplication.sharedApplication().statusBarOrientation)
}
}else{
AppLog("Could not add video device input to the session")
self.setupResult = .SessionConfigurationFailed
}
//link output to captureSession
let stillImageOutput = AVCaptureStillImageOutput()
if self.captureSession.canAddOutput(stillImageOutput){
self.captureSession.addOutput(stillImageOutput)
self.stillImageOutput = stillImageOutput
stillImageOutput.outputSettings = [AVVideoCodecKey : AVVideoCodecJPEG]
}else{
AppLog("Could not add still image output to the session")
self.setupResult = .SessionConfigurationFailed
}
self.captureSession.commitConfiguration()
/////////////////////////////////////////////////////
}
}
func runSession(){
dispatch_async(self.sessionQueue){
switch self.setupResult!{
case .Success:
self.videoDeviceInput!.device.addObserver(self, forKeyPath: "adjustingFocus", options: NSKeyValueObservingOptions.New, context: nil)
self.captureSession.addObserver(self, forKeyPath: "running", options: [.New], context: &SessionRunningContext)
self.captureSession.startRunning()
self.captureSessionRunning = self.captureSession.running
if !self.captureSessionRunning {
self.captureSession.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)
self.videoDeviceInput?.device?.removeObserver(self, forKeyPath: "adjustingFocus", context: nil)
}
default:
// Handle errors.
break
}
}
}
func stopCaptureSession(){
dispatch_async(self.sessionQueue){
if self.setupResult == .Success{
if self.captureSessionRunning{
self.captureSession.stopRunning()
self.videoDeviceInput?.device?.removeObserver(self, forKeyPath: "adjustingFocus", context: nil)
self.captureSession.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)
}
self.captureSessionRunning = false
}
}
}
setupCamSession() is called in viewDidLoad, runSession() in viewWillAppear, and I also call stopCaptureSession() in viewWillDisappear. Everything related to the camera session is dispatched on a background serial queue.
The crash doesn't happen 100% of the time, and I am unable to reproduce it on the device I use.
Thanks
Make sure you are removing observers on deinit. I saw this occurring when coming back to the camera capture screen and I didn't remove the observer for adjustingFocus. Once I removed that in deinit all was well.
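Something like this, as a sketch using the property names from the question (only remove observers that were actually added, otherwise KVO throws):
deinit {
    // Mirrors the observers added in runSession(); captureSessionRunning is the
    // same flag the question already uses to track whether they are registered.
    if captureSessionRunning {
        videoDeviceInput?.device?.removeObserver(self, forKeyPath: "adjustingFocus", context: nil)
        captureSession.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)
    }
}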
Had the same problem. It was resolved after adding a usage description for the Privacy - Camera Usage Description key (NSCameraUsageDescription) in the Info.plist file. This answer contains tips on how to set up the description:
Request Permission for Camera and Library in iOS 10 - Info.plist

How are barcodes and QR codes recognized without capturing the image?

I wonder how barcodes and QR codes (even characters) are recognized without explicitly capturing an image. I have seen in many apps that when we hold the device over a QR code or barcode, the app automatically recognizes it and starts processing. Is there some scanning mechanism used for this? How can it be achieved? What mechanisms are involved?
Thanks in advance.
1) The phone camera is launched by the library; it autofocuses and keeps scanning until it finds and decodes the info in the image shown by the camera.
2) That info is parsed by the library, and it gives you the result.
The decoded info is the information encoded in the bar code itself.
For a QR code, the data is laid out as a square; for a barcode, the data is laid out as vertical lines.
The library has all the logic for detecting the type of code and decoding it according to its format.
Please read the QR code / barcode library docs, or implement one yourself and learn.
You can use AVCaptureSession, e.g.:
let session = AVCaptureSession()
var qrPayload: String?
var started = false
func startSession() {
guard !started else { return }
started = true
let output = AVCaptureMetadataOutput()
output.setMetadataObjectsDelegate(self, queue: .main)
let device: AVCaptureDevice?
if #available(iOS 10.0, *) {
device = AVCaptureDevice
.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
.devices
.first
} else {
device = AVCaptureDevice.devices().first { $0.position == .back }
}
guard
let camera = device,
let input = try? AVCaptureDeviceInput(device: camera),
session.canAddInput(input),
session.canAddOutput(output)
else {
// handle failures here
return
}
session.addInput(input)
session.addOutput(output)
output.metadataObjectTypes = [.qr]
let videoLayer = AVCaptureVideoPreviewLayer(session: session)
videoLayer.frame = view.bounds
videoLayer.videoGravity = .resizeAspectFill
view.layer.addSublayer(videoLayer)
session.startRunning()
}
And extend your view controller to conform to AVCaptureMetadataOutputObjectsDelegate:
extension QRViewController: AVCaptureMetadataOutputObjectsDelegate {
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
guard
qrPayload == nil,
let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
let string = object.stringValue
else { return }
qrPayload = string
print(string)
// perhaps dismiss this view controller now that you’ve succeeded
}
}
Note, I’m testing to make sure that the qrPayload is nil because I find that you can see metadataOutput(_:didOutput:from:) get called a few times before the view controller is dismissed.
