I have multiple views and some zoom buttons that need to change the camera preview output, so I think it would be right to use a singleton for the session initialization. But I have no idea how to do that and can't find any good information. Could someone help me, please?
UPDATE
Okay, I managed to write something, but I don't know whether it's correct. Here is the code:
protocol Singleton: class {
static var sharedInstance: Self { get }
}
final class AVFSessionSingleton: Singleton {
static let sharedInstance = AVFSessionSingleton()
private init() {
session = newVideoCaptureSession()!
}
var session: AVCaptureSession!
var imageOutput : AVCaptureStillImageOutput?
//FUNCTION
func newVideoCaptureSession () -> AVCaptureSession? {
func initCaptureDevice() -> AVCaptureDevice? {
var captureDevice: AVCaptureDevice?
let devices: NSArray = AVCaptureDevice.devices() as NSArray
for device: Any in devices {
if (device as AnyObject).position == AVCaptureDevicePosition.back {
captureDevice = device as? AVCaptureDevice
}
}
print("device inited")
return captureDevice
}
func initOutput() {
self.imageOutput = AVCaptureStillImageOutput()
}
func initInputDevice(captureDevice : AVCaptureDevice) -> AVCaptureInput? {
var deviceInput : AVCaptureInput?
do {
deviceInput = try AVCaptureDeviceInput(device: captureDevice)
}
catch _ {
deviceInput = nil
}
return deviceInput
}
func initSession(deviceInput: AVCaptureInput) {
self.session = AVCaptureSession()
self.session?.sessionPreset = AVCaptureSessionPresetPhoto
self.session?.addInput(deviceInput)
self.session?.addOutput(self.imageOutput!)
}
// Build the session by running the helpers defined above.
guard let captureDevice = initCaptureDevice(),
    let deviceInput = initInputDevice(captureDevice: captureDevice) else { return nil }
initOutput()
initSession(deviceInput: deviceInput)
return session
}
}
So now I want to call it in a way that lets me manage the layouts for a preview. Any suggestions, please?
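Roughly, what I imagine calling it from each view controller would look like is something along these lines (this is only a sketch; PreviewViewController and the previewView outlet are placeholders, not part of my code above):
import UIKit
import AVFoundation

class PreviewViewController: UIViewController {
    @IBOutlet weak var previewView: UIView! // placeholder outlet for the preview area
    private var previewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Every view controller reuses the one shared session.
        let sharedSession = AVFSessionSingleton.sharedInstance.session
        let layer = AVCaptureVideoPreviewLayer(session: sharedSession)
        layer.videoGravity = AVLayerVideoGravityResizeAspectFill
        layer.frame = previewView.bounds
        previewView.layer.addSublayer(layer)
        previewLayer = layer
        sharedSession?.startRunning()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Keep the preview layer sized to the view.
        previewLayer?.frame = previewView.bounds
    }
}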
I am implementing video recording functionality in my iOS application.
I am also using ReplayKit to record the full screen, instead of the camera's default capture.
Within that, there is a requirement to customize:
(1) Resolution
(2) FPS (frame rate)
(3) Bit rate
Of these, I am currently working on (1) Resolution and (2) FPS.
For this I have set the resolution and FPS as coded below.
class PreviewView: UIView {
private var captureSession: AVCaptureSession?
private var shakeCountDown: Timer?
let videoFileOutput = AVCaptureMovieFileOutput()
var recordingDelegate:AVCaptureFileOutputRecordingDelegate!
var recorded = 0
var secondsToReachGoal = 30
var videoDevice: AVCaptureDevice?
var onRecord: ((Int, Int)->())?
var onReset: (() -> ())?
var onComplete: (() -> ())?
//MARK:- Screen Recording Variables
let recorder = RPScreenRecorder.shared()
var isRecording = false
init() {
super.init(frame: .zero)
var allowedAccess = false
let blocker = DispatchGroup()
blocker.enter()
AVCaptureDevice.requestAccess(for: .video) { flag in
allowedAccess = flag
blocker.leave()
}
blocker.wait()
recorder.isMicrophoneEnabled = true
if !allowedAccess {
print("!!! NO ACCESS TO CAMERA")
return
}
// setup session
let session = AVCaptureSession()
session.beginConfiguration()
videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
for: .video, position: .back)
guard videoDevice != nil, let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!), session.canAddInput(videoDeviceInput) else {
print("!!! NO CAMERA DETECTED")
return
}
session.addInput(videoDeviceInput)
session.commitConfiguration()
self.captureSession = session
//MARK: Test Cases
//Setup the resolution
captureSession?.sessionPreset = AVCaptureSession.Preset.inputPriority
// Setup the frame
videoDevice?.set(frameRate: 20) // 1 to 30 FPS
}
override class var layerClass: AnyClass {
AVCaptureVideoPreviewLayer.self
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
var videoPreviewLayer: AVCaptureVideoPreviewLayer {
return layer as! AVCaptureVideoPreviewLayer
}
override func didMoveToSuperview() {
super.didMoveToSuperview()
recordingDelegate = self
} }
And, to set the frame rate (FPS), I have created an extension as below:
extension AVCaptureDevice {
func set(frameRate: Double) {
var isFPSSupported = false
do {
let supportedFrameRange = activeFormat.videoSupportedFrameRateRanges
for range in supportedFrameRange {
if (range.maxFrameRate >= Double(frameRate) && range.minFrameRate <= Double(frameRate)) {
isFPSSupported = true
break
}
}
if isFPSSupported {
try lockForConfiguration()
activeVideoMaxFrameDuration = CMTimeMake(value: 1, timescale: Int32(frameRate))
activeVideoMinFrameDuration = CMTimeMake(value: 1, timescale: Int32(frameRate))
unlockForConfiguration()
}
} catch {
print("lockForConfiguration error: \(error.localizedDescription)")
}
}
}
I have checked preset scenarios for the session such as:
(1) AVCaptureSession.Preset.inputPriority
(2) AVCaptureSession.Preset.hd1280x720
etc...
But the saved video comes out as shown in the screenshots below:
As the screenshot shows, the FPS is 59.88, which is not what was set through the code (I set 20 FPS).
The second question is: how can we set the resolution?
Because in all the preset scenarios the resolution always comes out as
828 x 1792
How can we achieve this?
Any help would be appreciated.
Thanks in advance
We have fragmented movie data that comes in through a secure WebSocket (wss://). We write it into a temp.mp4 and simultaneously play the fragments that have been written to the file, using AVFragmentedAsset and AVFragmentedAssetMinder. This works as expected in the simulator, but on a device the duration of the asset is never updated and the .AVAssetDurationDidChange notification is never posted. I'm not sure what the issue could be; I've already spent two days on it with no luck. Can someone help me with this, please?
Implementation:
public class WssPlayerSource: WebSocketDelegate {
static let ASSET_MINDER = AVFragmentedAssetMinder()
private let socketClient: WebSocketStreamingClient
private var tempMovieFile:FileHandle? = nil
private var startedPlay = false
private var fragmentedAsset: AVFragmentedAsset? = nil
let player = AVPlayer()
init(withUrl url: URL) {
socketClient = WebSocketStreamingClient(wssUrl: url)
socketClient.delegate = self
do {
self.tempMovieFile = try getTemporaryMovieFileWriter()
}catch {
print("Error opening file")
}
NotificationCenter.default.addObserver(self, selector: #selector(onVideoUpdate), name: .AVAssetDurationDidChange, object: nil)
socketClient.connect()
}
// Socket delegate
public func didReceive(event: WebSocketEvent, client: WebSocket) {
switch event {
case .binary(let data):
self.tempMovieFile?.write(data)
if !startedPlay {
startedPlay = true
DispatchQueue.global().async { [weak self] in
self?.didReceivedInitialData()
}
}
break;
default:
break;
}
}
func didReceivedInitialData() {
fragmentedAsset = AVFragmentedAsset(url: getTemporaryMovieFile()!)
fragmentedAsset?.loadValuesAsynchronously(forKeys: ["duration", "containsFragments", "canContainFragments"], completionHandler: {
WssPlayerSource.ASSET_MINDER.mindingInterval = 1
WssPlayerSource.ASSET_MINDER.addFragmentedAsset(self.fragmentedAsset!)
self.player.replaceCurrentItem(with: AVPlayerItem(asset: self.fragmentedAsset!))
self.player.play()
})
}
@objc
func onVideoUpdate() {
print("Video duration updated..")
// This is not called in device.. but in simulator it works as expected
}
func stop() {
socketClient.forceDisconnect()
NotificationCenter.default.removeObserver(self)
if let asset = self.fragmentedAsset {
WssPlayerSource.ASSET_MINDER.removeFragmentedAsset(asset)
}
}
}
And
class ViewController: UIViewController {
@IBOutlet var playerView: PlayerView!
private static let SOCKET_URL = "wss://..."
private var playerSource:WssPlayerSource? = nil
private var playerLayer:AVPlayerLayer? = nil
override func viewDidLoad() {
super.viewDidLoad()
self.playerSource = WssPlayerSource(withUrl: URL(string: ViewController.SOCKET_URL)!)
self.playerLayer = AVPlayerLayer(player: self.playerSource!.player)
self.playerLayer?.videoGravity = .resize
self.playerView.layer.addSublayer(self.playerLayer!)
}
}
I am trying to build a document scanner that can read text off of any document or card. However, it sometimes has trouble identifying text correctly on a credit card. The accuracy is decent, but there is definitely room for improvement. I used the Vision framework's text recognition (VNRecognizeTextRequest) with all the standard settings, which are the right ones for setting up text recognition.
This is what I had to set up the text recognition request:
textRecognitionRequest = VNRecognizeTextRequest(completionHandler: { (request, error) in
if let results = request.results, !results.isEmpty {
if let requestResults = request.results as? [VNRecognizedTextObservation] {
var foundText = ""
for observation in requestResults {
guard let candidate = observation.topCandidates(1).first else { continue }
foundText.append(candidate.string + "\n")
}
}
}
})
textRecognitionRequest.recognitionLevel = .accurate
textRecognitionRequest.usesLanguageCorrection = true
Does anyone have any suggestions for improving the identification programmatically by either pre-processing or post-processing the scan at some point?
UPDATE: I've made a fully open source project that may help you do exactly what you need. Check it out: https://github.com/ethanwa/credit-card-scanner-and-validator
You can't do much to improve accuracy beyond adding some preset values to specifically look for, which doesn't make sense with CC numbers so I won't even bother showing that code. You'll need to rely on Apple to improve their text recognition model as iOS iterates for it to truly improve.
What I suggest in the meantime are these two things you can do:
Do validation on the credit card numbers you think you're receiving. For example, Visa starts with 4, MasterCard starts with 5, Discover with 6, Amex with 3, and so on. They also have specific lengths and a checksum digit, so a Luhn check like the sketch below can reject bad reads. See here: https://www.freeformatter.com/credit-card-number-generator-validator.html
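Here is a minimal sketch of such a Luhn check (this helper is my own illustration, not part of the project linked above):
func isLikelyValidCardNumber(_ candidate: String) -> Bool {
    // Card numbers are 12-19 digits and must pass the Luhn checksum.
    let digits = candidate.compactMap { $0.wholeNumberValue }
    guard digits.count == candidate.count, (12...19).contains(digits.count) else { return false }
    var sum = 0
    // Walk from the rightmost digit, doubling every second digit.
    for (index, digit) in digits.reversed().enumerated() {
        if index % 2 == 1 {
            let doubled = digit * 2
            sum += doubled > 9 ? doubled - 9 : doubled
        } else {
            sum += digit
        }
    }
    return sum % 10 == 0
}
With a check like that, the recognition loop can simply discard any candidate string that doesn't validate, which is exactly what the ISBN example below does with its check digit.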
Keep iterating over a camera feed until you get a number that validates. I'm not sure if you are currently just taking one picture of the card and processing that image (which it sounds like you are doing), but you should be processing many images per second until you get a valid CC number. This is most likely how Apple does it when you add a card via Apple Pay on your phone, or how banking apps deposit checks digitally (finding valid routing and account numbers).
Here's an example of what I mean...
I wrote this code that can pick out and validate ISBN numbers (basically 10- and 13-digit numbers that catalog books, which include a check digit for validation) in any given text, and it will keep looking until it finds all the numbers and then validates them. It works extremely well and is very fast. Check out this Swift 5.3 code:
import UIKit
import Vision
import Photos
import AVFoundation
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
var recognizedText = ""
var finalText = ""
var image: UIImage?
var processing = false
@IBOutlet weak var nameLabel: UILabel!
@IBOutlet weak var setLabel: UILabel!
@IBOutlet weak var numberLabel: UILabel!
lazy var textDetectionRequest: VNRecognizeTextRequest = {
let request = VNRecognizeTextRequest(completionHandler: self.handleDetectedText)
request.recognitionLevel = .accurate
request.usesLanguageCorrection = false
return request
}()
private let videoOutput = AVCaptureVideoDataOutput()
private let captureSession = AVCaptureSession()
private lazy var previewLayer: AVCaptureVideoPreviewLayer = {
let preview = AVCaptureVideoPreviewLayer(session: self.captureSession)
preview.videoGravity = .resizeAspect
return preview
}()
// MARK: AV
override func viewDidLoad() {
super.viewDidLoad()
self.addCameraInput()
self.addVideoOutput()
}
private func addCameraInput() {
let device = AVCaptureDevice.default(for: .video)!
let cameraInput = try! AVCaptureDeviceInput(device: device)
self.captureSession.addInput(cameraInput)
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
self.previewLayer.frame = self.view.bounds
}
private func addVideoOutput() {
self.videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA)] as [String : Any]
self.videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "my.image.handling.queue"))
self.captureSession.addOutput(self.videoOutput)
}
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
{
if !processing
{
guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
debugPrint("unable to get image from sample buffer")
return
}
print("did receive image frame")
// process image here
self.processing = true
let ciimage : CIImage = CIImage(cvPixelBuffer: frame)
let theimage : UIImage = self.convert(cmage: ciimage)
self.image = theimage
processImage()
}
}
// Convert CIImage to UIImage
func convert(cmage:CIImage) -> UIImage
{
let context:CIContext = CIContext.init(options: nil)
let cgImage:CGImage = context.createCGImage(cmage, from: cmage.extent)!
let image:UIImage = UIImage.init(cgImage: cgImage)
return image
}
// AV
func processImage()
{
DispatchQueue.main.async {
self.nameLabel.text = ""
self.setLabel.text = ""
self.numberLabel.text = ""
}
guard let image = image, let cgImage = image.cgImage else { return }
let requests = [textDetectionRequest]
let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .right, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
do {
try imageRequestHandler.perform(requests)
} catch let error {
print("Error: \(error)")
}
}
}
fileprivate func handleDetectedText(request: VNRequest?, error: Error?)
{
self.finalText = ""
if let error = error {
print(error.localizedDescription)
self.processing = false
return
}
guard let results = request?.results, results.count > 0 else {
print("No text was found.")
self.processing = false
return
}
if let requestResults = request?.results as? [VNRecognizedTextObservation] {
self.recognizedText = ""
for observation in requestResults {
guard let candidate = observation.topCandidates(1).first else { return }
self.recognizedText += candidate.string
self.recognizedText += " "
}
var replaced = self.recognizedText.replacingOccurrences(of: "-", with: "")
replaced = String(replaced.filter { !"\n\t\r".contains($0) })
let replacedArr = replaced.components(separatedBy: " ")
for here in replacedArr
{
let final = here.trimmingCharacters(in: CharacterSet.whitespacesAndNewlines)
if (final.count == 10 || final.count == 13) && final.containsISBNnums && Validate.isbn(final) // validate barcode
{
self.finalText += final
print(final)
self.captureSession.stopRunning()
DispatchQueue.main.async {
self.previewLayer.removeFromSuperlayer()
}
break
}
}
DispatchQueue.main.async {
self.numberLabel.text = self.finalText
}
}
self.processing = false
}
// MARK: Buttons
// This is a live camera view that will start a capture session
@IBAction func takePhoto(_ sender: Any) {
self.view.layer.addSublayer(self.previewLayer)
self.captureSession.startRunning()
}
@IBAction func choosePhoto(_ sender: Any) {
presentPhotoPicker(type: .photoLibrary)
}
fileprivate func presentPhotoPicker(type: UIImagePickerController.SourceType) {
let controller = UIImagePickerController()
controller.sourceType = type
controller.delegate = self
present(controller, animated: true, completion: nil)
}
}
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
dismiss(animated: true, completion: nil)
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
dismiss(animated: true, completion: nil)
image = info[.originalImage] as? UIImage
processImage()
}
}
extension String {
var containsISBNnums: Bool {
guard self.count > 0 else { return false }
let nums: Set<Character> = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "X"]
return Set(self).isSubset(of: nums)
}
}
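One note on the code above: it calls Validate.isbn(_:), which isn't included in the answer. As a rough sketch (my own reconstruction, not the author's actual implementation), an ISBN-10/ISBN-13 check-digit validator could look like this:
enum Validate {
    // Returns true if the string is a valid ISBN-10 or ISBN-13, based on its check digit.
    static func isbn(_ code: String) -> Bool {
        let chars = Array(code.uppercased())
        if chars.count == 10 {
            // ISBN-10: weighted sum (weights 10 down to 1, X = 10 allowed in the last place) divisible by 11.
            var sum = 0
            for (index, char) in chars.enumerated() {
                let value: Int
                if char == "X" && index == 9 {
                    value = 10
                } else if let digit = char.wholeNumberValue {
                    value = digit
                } else {
                    return false
                }
                sum += (10 - index) * value
            }
            return sum % 11 == 0
        } else if chars.count == 13 {
            // ISBN-13: alternating weights 1 and 3, total divisible by 10.
            var sum = 0
            for (index, char) in chars.enumerated() {
                guard let digit = char.wholeNumberValue else { return false }
                sum += digit * (index % 2 == 0 ? 1 : 3)
            }
            return sum % 10 == 0
        }
        return false
    }
}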
I have set up my custom camera and have already coded the video preview. There is a button on the screen that I want to use to start capturing video when it is pressed, but I don't know how to go about it. Everything else so far is set up and working fine.
In the start-recording button function, I just need the code necessary to capture the video and save it. Thank you.
class CameraViewController: UIViewController, AVAudioRecorderDelegate {
@IBOutlet var recordOutlet: UIButton!
@IBOutlet var recordLabel: UILabel!
@IBOutlet var cameraView: UIView!
var tempImage: UIImageView?
var captureSession: AVCaptureSession?
var stillImageOutput: AVCapturePhotoOutput?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?
var currentCaptureDevice: AVCaptureDevice?
var usingFrontCamera = false
/* This is the function I want to use to start
recording a video */
@IBAction func recordingButton(_ sender: Any) {
}
It seems as though Apple prefers developers to use the default camera UI for capturing video. If you are OK with that, I found a tutorial online with code to help: https://www.raywenderlich.com/94404/play-record-merge-videos-ios-swift.
You can scroll down to the "recording video" section and it will walk you through it with code.
Here's some of what it says: "
import MobileCoreServices
You’ll also need to adopt the same protocols as PlayVideoViewController, by adding the following to the end of the file:
extension RecordVideoViewController: UIImagePickerControllerDelegate {
}
extension RecordVideoViewController: UINavigationControllerDelegate {
}
Add the following code to RecordVideoViewController:
func startCameraFromViewController(viewController: UIViewController, withDelegate delegate: protocol<UIImagePickerControllerDelegate, UINavigationControllerDelegate>) -> Bool {
if UIImagePickerController.isSourceTypeAvailable(.Camera) == false {
return false
}
var cameraController = UIImagePickerController()
cameraController.sourceType = .Camera
cameraController.mediaTypes = [kUTTypeMovie as NSString as String]
cameraController.allowsEditing = false
cameraController.delegate = delegate
presentViewController(cameraController, animated: true, completion: nil)
return true
}
This method follows the same logic as in PlayVideoViewController, but it accesses the .Camera source type instead so it can record video.
Now add the following to record(_:):
startCameraFromViewController(self, withDelegate: self)
You are again in familiar territory. The code simply calls startCameraFromViewController(_:withDelegate:) when you tap the “Record Video” button.
Build and run to see what you’ve got so far.
Go to the Record screen and press the “Record Video” button. Instead of the Photo Gallery, the camera UI opens. Start recording a video by tapping the red record button at the bottom of the screen, and tap it again when you’re done recording."
Cheers,
Theo
Here is working code. You will need to handle optional values and errors properly in a real project, but you can use the following code as an example:
//
// ViewController.swift
// CustomCamera
//
// Created by Taras Chernyshenko on 6/27/17.
// Copyright © 2017 Taras Chernyshenko. All rights reserved.
//
import UIKit
import AVFoundation
import AssetsLibrary
class CameraViewController: UIViewController,
AVCaptureAudioDataOutputSampleBufferDelegate,
AVCaptureVideoDataOutputSampleBufferDelegate {
@IBOutlet var recordOutlet: UIButton!
@IBOutlet var recordLabel: UILabel!
@IBOutlet var cameraView: UIView!
var tempImage: UIImageView?
private var session: AVCaptureSession = AVCaptureSession()
private var deviceInput: AVCaptureDeviceInput?
private var previewLayer: AVCaptureVideoPreviewLayer?
private var videoOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
private var audioOutput: AVCaptureAudioDataOutput = AVCaptureAudioDataOutput()
private var videoDevice: AVCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
private var audioConnection: AVCaptureConnection?
private var videoConnection: AVCaptureConnection?
private var assetWriter: AVAssetWriter?
private var audioInput: AVAssetWriterInput?
private var videoInput: AVAssetWriterInput?
private var fileManager: FileManager = FileManager()
private var recordingURL: URL?
private var isCameraRecording: Bool = false
private var isRecordingSessionStarted: Bool = false
private var recordingQueue = DispatchQueue(label: "recording.queue")
var captureSession: AVCaptureSession?
var stillImageOutput: AVCapturePhotoOutput?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?
var currentCaptureDevice: AVCaptureDevice?
var usingFrontCamera = false
/* This is the function I want to use to start
recording a video */
@IBAction func recordingButton(_ sender: Any) {
if self.isCameraRecording {
self.stopRecording()
} else {
self.startRecording()
}
self.isCameraRecording = !self.isCameraRecording
}
override func viewDidLoad() {
super.viewDidLoad()
self.setup()
}
private func setup() {
self.session.sessionPreset = AVCaptureSessionPresetHigh
self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.mov")
if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
_ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
}
self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
fileType: AVFileTypeQuickTimeMovie)
let audioSettings = [
AVFormatIDKey : kAudioFormatAppleIMA4,
AVNumberOfChannelsKey : 1,
AVSampleRateKey : 16000.0
] as [String : Any]
let videoSettings = [
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : UIScreen.main.bounds.size.width,
AVVideoHeightKey : UIScreen.main.bounds.size.height
] as [String : Any]
self.videoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo,
outputSettings: videoSettings)
self.audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio,
outputSettings: audioSettings)
self.videoInput?.expectsMediaDataInRealTime = true
self.audioInput?.expectsMediaDataInRealTime = true
if self.assetWriter!.canAdd(self.videoInput!) {
self.assetWriter?.add(self.videoInput!)
}
if self.assetWriter!.canAdd(self.audioInput!) {
self.assetWriter?.add(self.audioInput!)
}
self.deviceInput = try? AVCaptureDeviceInput(device: self.videoDevice)
if self.session.canAddInput(self.deviceInput) {
self.session.addInput(self.deviceInput)
}
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
self.previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
let rootLayer = self.view.layer
rootLayer.masksToBounds = true
self.previewLayer?.frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
rootLayer.insertSublayer(self.previewLayer!, at: 0)
self.session.startRunning()
DispatchQueue.main.async {
self.session.beginConfiguration()
if self.session.canAddOutput(self.videoOutput) {
self.session.addOutput(self.videoOutput)
}
self.videoConnection = self.videoOutput.connection(withMediaType: AVMediaTypeVideo)
if self.videoConnection?.isVideoStabilizationSupported == true {
self.videoConnection?.preferredVideoStabilizationMode = .auto
}
self.session.commitConfiguration()
let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
let audioIn = try? AVCaptureDeviceInput(device: audioDevice)
if self.session.canAddInput(audioIn) {
self.session.addInput(audioIn)
}
if self.session.canAddOutput(self.audioOutput) {
self.session.addOutput(self.audioOutput)
}
self.audioConnection = self.audioOutput.connection(withMediaType: AVMediaTypeAudio)
}
}
private func startRecording() {
if self.assetWriter?.startWriting() != true {
print("error: \(self.assetWriter?.error.debugDescription ?? "")")
}
self.videoOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
}
private func stopRecording() {
self.videoOutput.setSampleBufferDelegate(nil, queue: nil)
self.audioOutput.setSampleBufferDelegate(nil, queue: nil)
self.assetWriter?.finishWriting {
print("saved")
}
}
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer
sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
if !self.isRecordingSessionStarted {
let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
self.assetWriter?.startSession(atSourceTime: presentationTime)
self.isRecordingSessionStarted = true
}
let description = CMSampleBufferGetFormatDescription(sampleBuffer)!
if CMFormatDescriptionGetMediaType(description) == kCMMediaType_Audio {
if self.audioInput!.isReadyForMoreMediaData {
print("appendSampleBuffer audio");
self.audioInput?.append(sampleBuffer)
}
} else {
if self.videoInput!.isReadyForMoreMediaData {
print("appendSampleBuffer video");
if !self.videoInput!.append(sampleBuffer) {
print("Error writing video buffer");
}
}
}
}
}
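One caveat about the answer above: stopRecording() only finalizes the temporary file.mov. If "save it" should mean the user's photo library, a minimal sketch of a follow-up method you could add to CameraViewController (assuming the app already has photo-library add permission; the helper name is my own) would be:
import Photos

private func saveToPhotoLibrary(fileURL: URL) {
    PHPhotoLibrary.shared().performChanges({
        // Register the finished movie file as a new video asset in the photo library.
        _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }, completionHandler: { saved, error in
        print(saved ? "video saved to photo library" : "saving failed: \(String(describing: error))")
    })
}
It could then be called from the finishWriting completion, e.g. self.saveToPhotoLibrary(fileURL: self.recordingURL!).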
I am making an app about books.
In the app, I want the app to auto-fill the book info by reading the ISBN (barcode).
(screenshot: the two views)
There are two classes: one is 'UploadMain', the other is 'ScanView'.
I can get the ISBN by scanning, but I have a problem passing the data from ScanView to UploadMain.
In ScanView I have used optional binding like below:
if let UploadVC = self.storyboard?.instantiateViewControllerWithIdentifier("UploadMain") as? UploadMain {
UploadVC.ISBNstring = self.detectionString!
}
Code for the UploadMain class:
override func viewDidLoad(){
super.viewDidLoad()
ISBN.delegate = self
}
override func viewWillAppear(animated: Bool){
ISBN.text = ISBNstring
}
I don't know what the problem is with my code.
Full code of UploadMain:
import UIKit
import Foundation
class UploadMain: UIViewController,UITextFieldDelegate {
var ISBNstring: String = ""
var TitleString: String = ""
var AuthorString: String = ""
var PubString: String = ""
var PriceSting: String = ""
@IBOutlet weak var ISBN: UITextField!
@IBOutlet weak var bookTitle: UITextField!
@IBOutlet weak var bookAuthor: UITextField!
@IBOutlet weak var bookPub: UITextField!
@IBOutlet weak var bookPrice: UITextField!
override func viewDidLoad(){
super.viewDidLoad()
ISBN.delegate = self
}
override func viewWillAppear(animated: Bool){
ISBN.text = ISBNstring
}
@IBAction func Upload(sender: AnyObject) {
dismissViewControllerAnimated(true, completion: nil)
}
}
ScanView class
import UIKit
import AVFoundation
import Foundation
class ScanView : UIViewController, AVCaptureMetadataOutputObjectsDelegate {
let session : AVCaptureSession = AVCaptureSession()
var previewLayer : AVCaptureVideoPreviewLayer!
var detectionString : String!
let apiKey : String = "---------dddddd"
override func viewDidLoad() {
super.viewDidLoad()
// For the sake of discussion this is the camera
let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
// Create a nilable NSError to hand off to the next method.
// Make sure to use the "var" keyword and not "let"
var error : NSError? = nil
var input: AVCaptureDeviceInput = AVCaptureDeviceInput()
do {
input = try AVCaptureDeviceInput(device: device) as AVCaptureDeviceInput
} catch let myJSONError {
print(myJSONError)
}
// If our input is not nil then add it to the session, otherwise we're kind of done!
if input != AVCaptureDeviceInput() {
session.addInput(input)
}
else {
// This is fine for a demo, do something real with this in your app. :)
print(error)
}
let output = AVCaptureMetadataOutput()
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
session.addOutput(output)
output.metadataObjectTypes = output.availableMetadataObjectTypes
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = self.view.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.view.layer.addSublayer(previewLayer)
// Start the scanner. You'll have to end it yourself later.
session.startRunning()
}
// This is called when we find a known barcode type with the camera.
func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
var highlightViewRect = CGRectZero
var barCodeObject : AVMetadataObject!
let barCodeTypes = [AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code]
// The scanner is capable of capturing multiple 2-dimensional barcodes in one scan.
for metadata in metadataObjects {
for barcodeType in barCodeTypes {
if metadata.type == barcodeType {
barCodeObject = self.previewLayer.transformedMetadataObjectForMetadataObject(metadata as! AVMetadataMachineReadableCodeObject)
highlightViewRect = barCodeObject.bounds
detectionString = (metadata as! AVMetadataMachineReadableCodeObject).stringValue
self.session.stopRunning()
self.alert(detectionString)
// Call the Daum Book API
let apiURI = NSURL(string: "https://apis.daum.net/search/book?apikey=\(apiKey)&q=\(detectionString)&searchType=isbn&output=json")
let apidata : NSData? = NSData(contentsOfURL: apiURI!)
NSLog("API Result = %@", NSString(data: apidata!, encoding: NSUTF8StringEncoding)!)
if let UploadVC = self.storyboard?.instantiateViewControllerWithIdentifier("UploadMain") as? UploadMain {
UploadVC.ISBNstring = self.detectionString!
}
break
}
}
}
print(detectionString)
self.navigationController?.popViewControllerAnimated(true)
}
func alert(Code: String){
let actionSheet:UIAlertController = UIAlertController(title: "Barcode", message: "\(Code)", preferredStyle: UIAlertControllerStyle.Alert)
// for alert add .Alert instead of .Action Sheet
// start copy
let firstAlertAction:UIAlertAction = UIAlertAction(title: "OK", style: UIAlertActionStyle.Default, handler:
{
(alertAction:UIAlertAction!) in
// action when pressed
self.session.startRunning()
})
actionSheet.addAction(firstAlertAction)
self.presentViewController(actionSheet, animated: true, completion: nil)
}
}
In ScanView you are creating a new instance of UploadMain that is not in the window hierarchy, so the data never reaches the UploadMain that is actually on screen. To solve your problem you need to create a protocol and pass a delegate conforming to it into ScanView. So create a protocol like this:
protocol IsbnDelegate {
func passData(isbnStr: String)
}
Now adopt this protocol in UploadMain and implement its passData method there, like below:
class UploadMain: UIViewController,UITextFieldDelegate,IsbnDelegate {
//your code
//Add this method
func passData(isbnStr: String) {
self.ISBN.text = isbnStr
}
//Also override prepareForSegue like this
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
let destVC = segue.destinationViewController as! ScanView
destVC.delegate = self
}
}
After that, add a delegate property to ScanView and change the code of ScanView like this:
import UIKit
import AVFoundation
import Foundation
class ScanView : UIViewController, AVCaptureMetadataOutputObjectsDelegate {
let session : AVCaptureSession = AVCaptureSession()
var previewLayer : AVCaptureVideoPreviewLayer!
var detectionString : String!
let apiKey : String = "---------dddddd"
var delegate: IsbnDelegate?
override func viewDidLoad() {
super.viewDidLoad()
// For the sake of discussion this is the camera
let device = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
// Create a nilable NSError to hand off to the next method.
// Make sure to use the "var" keyword and not "let"
var error : NSError? = nil
var input: AVCaptureDeviceInput = AVCaptureDeviceInput()
do {
input = try AVCaptureDeviceInput(device: device) as AVCaptureDeviceInput
} catch let myJSONError {
print(myJSONError)
}
// If our input is not nil then add it to the session, otherwise we're kind of done!
if input != AVCaptureDeviceInput() {
session.addInput(input)
}
else {
// This is fine for a demo, do something real with this in your app. :)
print(error)
}
let output = AVCaptureMetadataOutput()
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
session.addOutput(output)
output.metadataObjectTypes = output.availableMetadataObjectTypes
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = self.view.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.view.layer.addSublayer(previewLayer)
// Start the scanner. You'll have to end it yourself later.
session.startRunning()
}
// This is called when we find a known barcode type with the camera.
func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {
var highlightViewRect = CGRectZero
var barCodeObject : AVMetadataObject!
let barCodeTypes = [AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code]
// The scanner is capable of capturing multiple 2-dimensional barcodes in one scan.
for metadata in metadataObjects {
for barcodeType in barCodeTypes {
if metadata.type == barcodeType {
barCodeObject = self.previewLayer.transformedMetadataObjectForMetadataObject(metadata as! AVMetadataMachineReadableCodeObject)
highlightViewRect = barCodeObject.bounds
detectionString = (metadata as! AVMetadataMachineReadableCodeObject).stringValue
self.session.stopRunning()
self.alert(detectionString)
// Call the Daum Book API
let apiURI = NSURL(string: "https://apis.daum.net/search/book?apikey=\(apiKey)&q=\(detectionString)&searchType=isbn&output=json")
let apidata : NSData? = NSData(contentsOfURL: apiURI!)
NSLog("API Result = %@", NSString(data: apidata!, encoding: NSUTF8StringEncoding)!)
// Here we pass the data from ScanView to UploadMain via the delegate
self.delegate?.passData(self.detectionString!)
break
}
}
}
print(detectionString)
self.navigationController?.popViewControllerAnimated(true)
}
func alert(Code: String){
let actionSheet:UIAlertController = UIAlertController(title: "Barcode", message: "\(Code)", preferredStyle: UIAlertControllerStyle.Alert)
// for alert add .Alert instead of .Action Sheet
// start copy
let firstAlertAction:UIAlertAction = UIAlertAction(title: "OK", style: UIAlertActionStyle.Default, handler:
{
(alertAction:UIAlertAction!) in
// action when pressed
self.session.startRunning()
})
actionSheet.addAction(firstAlertAction)
self.presentViewController(actionSheet, animated: true, completion: nil)
}
}
For more detail about protocols, follow these links:
1) Apple Documentation
2) Tutorial 1
3) Tutorial 2
Hope this will help you.
To me it appears you are trying to pass data back from the view controller.
When you call
if let UploadVC = self.storyboard?.instantiateViewControllerWithIdentifier("UploadMain") as? UploadMain {
UploadVC.ISBNstring = self.detectionString!
}
You are creating a new instance of that view controller, and it is successfully receiving the data (but you can never use it).
What you actually want to do is send the data back to the view controller that is already on screen.
I could write it out for you, but there are actually some great tutorials:
text
video