I am developing an application that scans barcodes, but some characters mess everything up for me. The same problem occurred on Android and I fixed it there, but I can't fix it in Swift the same way.
I have tried multiple libraries and native ways to generate a Code 128 barcode image from a provided String. It works for everything except special characters like '¿'. I tried everything I found after googling the problem, but I still could not fix it.
extension UIImage {
    convenience init?(barcode: String) {
        let data = barcode.data(using: .ascii)
        guard let filter = CIFilter(name: "CICode128BarcodeGenerator") else {
            return nil
        }
        filter.setValue(data, forKey: "inputMessage")
        guard let ciImage = filter.outputImage else {
            return nil
        }
        self.init(ciImage: ciImage)
    }
}
let barcode = UIImage(barcode: "some text")
Everything works fine when scanning this exact barcode image from a card and saving the value. The scanner even says that ";038388¿" is of type Code 128, but when I try to generate a Code 128 barcode image out of that value, it somehow has a problem with the "¿" character.
Code 128 is defined as only capable of encoding ASCII characters, and ASCII does not contain the "¿" character.
The conversion let data = barcode.data(using: .ascii) therefore fails and returns nil, so the filter never receives an input message and outputImage is nil as well.
I would recommend catching this early using code like
guard let data = barcode.data(using: .ascii) else {
    return nil
}
I have built an app which fetches contacts from the phonebook and saves their name and photo. To save the photo I've used the following code:
if let imageData = contact.thumbnailImageData {
    imageStr = String(describing: UIImage(data: imageData)!)
} else {
    imageStr = "null"
}
and when I print imageStr using print("IMGSTR: \(imageStr)") I get the following output:
IMGSTR: <UIImage:0x283942880 anonymous {1080, 1080} renderingMode=automatic>
Now I'm stuck on how to set this string on the UIImageView. I tried
imageview.image = UIImage(named: imageStr)
but it shows nothing.
Could someone please help me with how to set the string <UIImage:0x283942880 anonymous {1080, 1080} renderingMode=automatic> on a UIImageView?
No need to convert it to a String. UserDefaults supports Data objects. Store the photo as Data, and when setting it on a UIImageView use let image = UIImage(data: imageData)
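For illustration, a minimal sketch of that approach, reusing contact and imageview from the question ("contactPhoto" is just an example key name):
// Saving: persist the raw image bytes, not a string description of the UIImage.
if let imageData = contact.thumbnailImageData {
    UserDefaults.standard.set(imageData, forKey: "contactPhoto")
}

// Loading: rebuild the UIImage straight from the stored Data.
if let imageData = UserDefaults.standard.data(forKey: "contactPhoto") {
    imageview.image = UIImage(data: imageData)
}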
If you want to convert an instance of Data to a String, you should use the String(decoding:as:) initializer, like this: let str = String(decoding: data, as: UTF8.self).
Kind of new to Swift in general, but I'm trying to make a simple RAW camera app for fun. Apple's documentation says that to configure a photo output, you do
let query = photoOutput.isAppleProRAWEnabled ?
    { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) } :
    { AVCapturePhotoOutput.isBayerRAWPixelFormat($0) }

// Retrieve the RAW format, favoring Apple ProRAW when enabled.
guard let rawFormat =
        photoOutput.availableRawPhotoPixelFormatTypes.first(where: query) else {
    fatalError("No RAW format found.")
}
but I've been getting an error on the first let statement which says "'isAppleProRAWEnabled' is only available in iOS 14.3 or newer." Is there any way to force it to check for ProRAW, even when not on iOS 14.3? I'm not even interested in using ProRAW, but I can't figure out how to get rid of the check and just select the classic RAW format (which I think is the Bayer format). If anyone knows a workaround, that would be great!
You can query for the Bayer RAW format as below:
let rawFormatQuery = { AVCapturePhotoOutput.isBayerRAWPixelFormat($0) }

guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: rawFormatQuery) else {
    fatalError("No RAW format found.")
}
Then you set your photo settings using the raw format:
let photoSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat,
                                           processedFormat: processedFormat)
Finally, you call your capture delegate as described in the Apple documentation (which I think is where you got the code above).
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/capturing_photos_in_raw_and_apple_proraw_formats
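Putting those pieces together, a minimal sketch (photoOutput is assumed to be an AVCapturePhotoOutput already attached to a configured session, makeBayerRAWSettings is a hypothetical helper name, and the HEVC processed format is just an example choice):
import AVFoundation

func makeBayerRAWSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings? {
    // Query only for the classic Bayer RAW format; no ProRAW involved.
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes
        .first(where: { AVCapturePhotoOutput.isBayerRAWPixelFormat($0) }) else {
        return nil
    }
    // Pair the RAW capture with an HEVC-processed companion image.
    return AVCapturePhotoSettings(rawPixelFormatType: rawFormat,
                                  processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc])
}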
I am using the Firebase Cloud Vision (ML) API to read images.
I am able to get information about an image back, but it is not specific.
Example: when I take and upload a picture of a MacBook, it gives output like "notebook, laptop, electronic device, etc.".
But I want to get its brand name, like Apple MacBook.
I have seen a few apps doing this.
I could not find any information regarding this, so I am posting here.
Please suggest or guide me if anyone has come across this.
My code:
func pickedImage(image: UIImage) {
    imageView.image = image
    imageView.contentMode = .scaleAspectFit
    guard let image = imageView.image else { return }

    // let onCloudLabeler = Vision.vision().cloudImageLabeler(options: options)
    let onCloudLabeler = Vision.vision().cloudImageLabeler()

    // Define the metadata for the image.
    let imageMetadata = VisionImageMetadata()
    imageMetadata.orientation = .topLeft

    // Initialize a VisionImage object with the given UIImage.
    let visionImage = VisionImage(image: image)
    visionImage.metadata = imageMetadata

    onCloudLabeler.process(visionImage) { labels, error in
        guard error == nil, let labels = labels, !labels.isEmpty else {
            // [START_EXCLUDE]
            let errorString = error?.localizedDescription ?? "No results returned."
            print("Label detection failed with error: \(errorString)")
            // self.showResults()
            // [END_EXCLUDE]
            return
        }

        // [START_EXCLUDE]
        var results = [String]()
        let resultsText = labels.map { label -> String in
            results.append(label.text)
            return "Label: \(label.text), " +
                "Confidence: \(label.confidence ?? 0), " +
                "EntityID: \(label.entityID ?? "")"
        }.joined(separator: "\n")
        // self.showResults()
        // [END_EXCLUDE]

        print(results.count)
        print(resultsText)
        self.labelTxt.text = results.joined(separator: ",")
        results.removeAll()
    }
}
If you've seen other apps doing something that your app doesn't do, those other apps are likely using a different ML model than the one you're using.
If you want to accomplish the same using ML Kit for Firebase, you can use a custom model that you either trained yourself or got from another source.
As Puf said, the apps you saw are probably using their own custom ML model. ML Kit now supports creating custom image classification models from your own training data. Check out the AutoML Vision Edge functionality here: https://firebase.google.com/docs/ml-kit/automl-vision-edge
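As a rough sketch of what loading a custom AutoML model can look like, assuming the API names from the linked Firebase docs at the time (these changed across ML Kit SDK versions, so treat every identifier and the "manifest.json" file name here as assumptions to verify against your SDK):
import FirebaseMLVision
import FirebaseMLVisionAutoML

func makeCustomLabeler() -> VisionImageLabeler? {
    // "manifest.json" is the manifest exported with your AutoML Vision Edge
    // model and bundled into the app; the file name is an assumption.
    guard let manifestPath = Bundle.main.path(forResource: "manifest", ofType: "json") else {
        return nil
    }
    let localModel = AutoMLLocalModel(manifestPath: manifestPath)
    let options = VisionOnDeviceAutoMLImageLabelerOptions(localModel: localModel)
    options.confidenceThreshold = 0.5 // only surface reasonably confident labels
    return Vision.vision().onDeviceAutoMLImageLabeler(options: options)
}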
I've set up an AVCaptureSession with a video data output and am attempting to use iOS 11's Vision framework to read QR codes. The camera is set up like basically any AVCaptureSession; I will abbreviate and just show setting up the output.
let output = AVCaptureVideoDataOutput()
output.setSampleBufferDelegate(self, queue: captureQueue)
captureSession.addOutput(output)
// I did this to get the CVPixelBuffer to be oriented in portrait.
// I don't know if it's needed and I'm not sure it matters anyway.
output.connection(with: .video)!.videoOrientation = .portrait
So the camera is up and running as always. Here is the code I am using to perform a VNImageRequestHandler for QR codes.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])

    let qrRequest = VNDetectBarcodesRequest { request, error in
        let barcodeObservations = request.results as? [VNBarcodeObservation]
        guard let qrCode = barcodeObservations?.flatMap({ $0.barcodeDescriptor as? CIQRCodeDescriptor }).first else { return }
        if let code = String(data: qrCode.errorCorrectedPayload, encoding: .isoLatin1) {
            debugPrint(code)
        }
    }
    qrRequest.symbologies = [.QR]

    try! imageRequestHandler.perform([qrRequest])
}
I am using a QR code that encodes http://www.google.com as a test. The debugPrint line prints out:
AVGG\u{03}¢ò÷wwrævöövÆRæ6öÐì\u{11}ì
I have tested this same QR code with the AVCaptureMetadataOutput that has been around for a while, and that method decodes the QR code correctly. So my question is: what have I missed to get the output that I am getting?
(Obviously I could just use the AVCaptureMetadataOutput as a solution, because I can see that it works. But that doesn't help me learn how to use the Vision framework.)
Most likely the problem is here:
if let code = String(data: qrCode.errorCorrectedPayload, encoding: .isoLatin1)
Try to use .utf8.
Also, I would suggest looking at the raw output of errorCorrectedPayload without any encoding. Maybe it already has the correct encoding.
The definition of errorCorrectedPayload says:
-- QR Codes are formally specified in ISO/IEC 18004:2006(E). Section 6.4.10 "Bitstream to codeword conversion" specifies the set of 8-bit codewords in the symbol immediately prior to splitting the message into blocks and applying error correction. --
This seems to work fine with VNBarcodeObservation.payloadStringValue instead of decoding VNBarcodeObservation.barcodeDescriptor by hand.
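For illustration, the completion handler from the question rewritten around payloadStringValue (the rest of the capture setup unchanged):
let qrRequest = VNDetectBarcodesRequest { request, error in
    guard let observations = request.results as? [VNBarcodeObservation] else { return }
    // payloadStringValue decodes the QR payload for you, so no manual
    // work with the barcode descriptor or string encodings is needed.
    if let payload = observations.first?.payloadStringValue {
        debugPrint(payload) // prints "http://www.google.com" for the test code
    }
}
qrRequest.symbologies = [.QR]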
I'm using Xcode 8.3 with Swift 3. I have written a method named pdfFromData(data:) to build a PDF document from Data. Whenever I build my project, the build never finishes because of this method: the compiler hangs when it starts compiling the particular file where I wrote pdfFromData(data:). (In Xcode 8.2 with Swift 3 it worked fine.) Whenever I comment out this method and build, everything works fine.
func pdfFromData(data: Data) -> CGPDFDocument? { // Form a PDF document from the data.
    if let pdfData = data as? CFData {
        if let provider = CGDataProvider(data: pdfData) {
            let pdfDocument = CGPDFDocument(provider)
            return pdfDocument
        }
    }
    return nil
}
What's wrong with this method? I want to build my project with this method as well. Thanks in advance.
I tried debugging your issue. This is what I found out:
if let pdfData = data as? CFData {
}

The conditional cast from Data to CFData in the line above is what takes so long to type-check during the build.

Replacing it with the following piece of code significantly reduces your build time:

let pdfNsData = NSData(data: data) // convert `Data` to `NSData`
let cfPdfData = pdfNsData as CFData // `NSData` bridges to `CFData`, so a plain `as` cast always succeeds
NSData and CFData are toll-free bridged.
Please let me know if there's any doubt.
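Putting it together, a sketch of the whole method with that workaround applied:
import CoreGraphics
import Foundation

func pdfFromData(data: Data) -> CGPDFDocument? {
    // Bridge through NSData instead of using `data as? CFData`,
    // which is what stalled the Swift 3 type checker.
    let cfData = NSData(data: data) as CFData
    guard let provider = CGDataProvider(data: cfData) else {
        return nil
    }
    return CGPDFDocument(provider)
}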