I am trying to detect a barcode in a user-selected image. I am able to detect a QR code in the image, but I cannot find anything related to scanning a barcode from an image. The code I am using to detect a QR code in an image is below:
func detectQRCode(_ image: UIImage?) -> [CIFeature]? {
    if let image = image, let ciImage = CIImage(image: image) {
        let context = CIContext()
        var options: [String: Any] = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let qrDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: context, options: options)
        if ciImage.properties.keys.contains(kCGImagePropertyOrientation as String) {
            options = [CIDetectorImageOrientation: ciImage.properties[kCGImagePropertyOrientation as String] ?? 1]
        } else {
            options = [CIDetectorImageOrientation: 1]
        }
        let features = qrDetector?.features(in: ciImage, options: options)
        return features
    }
    return nil
}
When I go into the documentation of CIDetectorTypeQRCode, it says:
/* Specifies a detector type for barcode detection. */
#available(iOS 8.0, *)
public let CIDetectorTypeQRCode: String
Though this is the QR code type, the documentation says it can detect barcodes as well. Fine. But when I use the same function to decode a barcode, it returns an empty array of features. Even if it returned some features, how would I convert them to the barcode alternative of CIQRCodeFeature? I do not see any barcode alternative in the documentation of CIQRCodeFeature. I know you can do this with the ZBar SDK, but I am trying not to use any third-party library here. Or is it mandatory in this case?
Please help. Thanks a lot.
You can use the Vision framework.
Barcode detection request code:
var vnBarCodeDetectionRequest: VNDetectBarcodesRequest {
    let request = VNDetectBarcodesRequest { (request, error) in
        if let error = error as NSError? {
            print("Error in detecting - \(error)")
            return
        } else {
            guard let observations = request.results as? [VNDetectedObjectObservation] else {
                return
            }
            print("Observations are \(observations)")
        }
    }
    return request
}
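To get at the decoded value, the Vision counterpart of CIQRCodeFeature is VNBarcodeObservation. A minimal sketch of what reading it inside the same completion handler could look like:

// Sketch: VNBarcodeObservation carries the decoded payload and symbology.
if let barcodes = request.results as? [VNBarcodeObservation] {
    for barcode in barcodes {
        print("Symbology: \(barcode.symbology.rawValue)")         // e.g. EAN-13, Code 128, QR
        print("Payload: \(barcode.payloadStringValue ?? "none")") // the decoded string
    }
}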
The function to which you pass the image:
func createVisionRequest(image: UIImage) {
    guard let cgImage = image.cgImage else {
        return
    }
    // `cgImageOrientation` is a helper property, not part of UIKit;
    // see the extension sketched below.
    let requestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: image.cgImageOrientation, options: [:])
    let vnRequests = [vnBarCodeDetectionRequest]
    DispatchQueue.global(qos: .background).async {
        do {
            try requestHandler.perform(vnRequests)
        } catch let error as NSError {
            print("Error in performing Image request: \(error)")
        }
    }
}
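Note that cgImageOrientation is not a built-in UIImage property; the code above assumes a helper extension along these lines (a sketch of what it presumably does):

import UIKit
import ImageIO

// Hypothetical helper assumed by createVisionRequest: maps UIKit's image
// orientation to the CGImagePropertyOrientation value that Vision expects.
extension UIImage {
    var cgImageOrientation: CGImagePropertyOrientation {
        switch imageOrientation {
        case .up:            return .up
        case .down:          return .down
        case .left:          return .left
        case .right:         return .right
        case .upMirrored:    return .upMirrored
        case .downMirrored:  return .downMirrored
        case .leftMirrored:  return .leftMirrored
        case .rightMirrored: return .rightMirrored
        @unknown default:    return .up
        }
    }
}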
I'm attempting to crop a UIImage in iOS using saliency via VNGenerateObjectnessBasedSaliencyImageRequest().
I'm following the documentation provided by Apple here https://developer.apple.com/documentation/vision/2908993-vnimagerectfornormalizedrect and working off of this tutorial https://betterprogramming.pub/cropping-areas-of-interest-using-vision-in-ios-e83b5e53440b.
I'm also referencing this project https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency.
This is the code I currently have in place.
static func cropImage(_ image: UIImage, completionHandler: @escaping (UIImage?, String?) -> Void) -> Void {
    guard let originalImage = image.cgImage else { return }
    let saliencyRequest = VNGenerateObjectnessBasedSaliencyImageRequest()
    let requestHandler = VNImageRequestHandler(cgImage: originalImage, orientation: .right, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try requestHandler.perform([saliencyRequest])
            guard let results = saliencyRequest.results?.first else { return }
            if let observation = results as? VNSaliencyImageObservation {
                let salientObjects = observation.salientObjects
                if let ciimage = CIImage(image: image) {
                    // Convert the normalized bounding box into image coordinates.
                    let salientRect = VNImageRectForNormalizedRect((salientObjects?.first!.boundingBox)!,
                                                                   Int(ciimage.extent.size.width),
                                                                   Int(ciimage.extent.size.height))
                    let croppedImage = ciimage.cropped(to: salientRect)
                    let cgImage = iOSVisionHelper.convertCIImageToCGImage(inputImage: croppedImage)
                    if cgImage != nil {
                        let thumbnail = UIImage(cgImage: cgImage!)
                        completionHandler(thumbnail, nil)
                    } else {
                        completionHandler(nil, "Unable to crop image")
                    }
                }
            }
        } catch {
            completionHandler(nil, error.localizedDescription)
        }
    }
}
static func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
This is working pretty well, except it doesn't seem to adjust the height of the image. It crops the sides perfectly, but not the top or bottom.
Here are examples of the original image and the cropped result.
This is what the iOS demo app found at https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency generates.
Any help would be very much appreciated.
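One possible explanation (a guess based on the code above, not a verified fix): the request handler is told the pixels are rotated (orientation: .right), so the normalized boundingBox is returned in the upright coordinate space, while the CIImage used for cropping is built from the un-rotated UIImage, whose width and height are swapped relative to that space. Applying the same orientation to the CIImage before converting the rect keeps the two coordinate spaces consistent:

// Sketch under that assumption: orient the CIImage the same way the
// VNImageRequestHandler was told the pixels are oriented.
if let box = observation.salientObjects?.first?.boundingBox,
   let base = CIImage(image: image) {
    let oriented = base.oriented(.right)  // match the handler's orientation
    let salientRect = VNImageRectForNormalizedRect(box,
                                                   Int(oriented.extent.size.width),
                                                   Int(oriented.extent.size.height))
    let croppedImage = oriented.cropped(to: salientRect)
    // ... convert to CGImage/UIImage exactly as in the question
}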
I have a UIImage generated from a CanvasView.
I want to use the featureprintObservationForImage function on it. However, it seems to take a URL, and I am trying to provide a UIImage. How can I get around this?
Here is my code:
// Getting the image
UIGraphicsBeginImageContextWithOptions(theCanvasView.bounds.size, false, UIScreen.main.scale)
theCanvasView.drawHierarchy(in: theCanvasView.bounds, afterScreenUpdates: true)
let image2 = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// Setting up the observation for the image
let drawing = featureprintObservationForImage(atURL: Bundle.main.url(image2)!)
I am getting the error on the last line that says:
Cannot convert value of type 'UIImage?' to expected argument type
'String'
Any ideas?
Assuming that you need to get a VNFeaturePrintObservation instance, you can pass an image instead of a URL by using VNImageRequestHandler.
Assuming that the featureprintObservationForImage method is (or looks something like) this:
func featureprintObservationForImage(atURL url: URL) -> VNFeaturePrintObservation? {
    let requestHandler = VNImageRequestHandler(url: url, options: [:])
    let request = VNGenerateImageFeaturePrintRequest()
    do {
        try requestHandler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Vision error: \(error)")
        return nil
    }
}
You could have a different version, like this:
func featureprintObservationForImage(_ image: CIImage?) -> VNFeaturePrintObservation? {
    guard let ciImage = image else {
        return nil
    }
    let requestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    let request = VNGenerateImageFeaturePrintRequest()
    do {
        try requestHandler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Vision error: \(error)")
        return nil
    }
}
The differences in the second one are:
The signature of the method: it takes an optional CIImage instead of a URL.
The initializer of the requestHandler.
Therefore:
let drawing = featureprintObservationForImage(image2.flatMap { CIImage(image: $0) })
Note that image2?.ciImage would likely be nil here: UIImage.ciImage is only non-nil for images created from a CIImage, and UIGraphicsGetImageFromCurrentImageContext() returns a CGImage-backed image, so CIImage(image:) is the safer conversion.
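If the goal is to compare the drawing with another image, a sketch of how two such observations are typically used (the helper name here is made up):

// Sketch: comparing two feature prints; a smaller distance means more similar.
func distance(between first: VNFeaturePrintObservation,
              and second: VNFeaturePrintObservation) -> Float? {
    var distance = Float(0)
    do {
        try first.computeDistance(&distance, to: second)
        return distance
    } catch {
        print("Distance error: \(error)")
        return nil
    }
}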
I'm using the Vision framework to detect faces with the iPhone's front camera. My code looks like:
func detect(_ cmSampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(cmSampleBuffer) else { return }
    var requests: [VNRequest] = []
    let requestLandmarks = VNDetectFaceLandmarksRequest { request, _ in
        DispatchQueue.main.async {
            guard let results = request.results as? [VNFaceObservation] else { return }
            print(results)
        }
    }
    requests.append(requestLandmarks)
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .leftMirrored)
    do {
        try handler.perform(requests)
    } catch {
        print(error)
    }
}
However, I noticed that when I move my face horizontally, the coordinates change vertically, and vice versa. The image below can help to understand.
If anyone can help me, I'd appreciate it; I'm going crazy over this.
For some reason, removing
let connectionVideo = videoDataOutput.connection(with: AVMediaType.video)
connectionVideo?.videoOrientation = AVCaptureVideoOrientation.portrait
from my AVCaptureVideoDataOutput setup solved the problem 🤡
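A plausible explanation (an assumption, not verified): orientation: .leftMirrored describes the landscape buffers that AVCaptureVideoDataOutput delivers by default for the front camera. Forcing the connection to portrait rotated the pixels while the handler was still told .leftMirrored, so the two axes ended up swapped. If you wanted to keep the portrait connection, passing an orientation that matches it should be equivalent:

// Sketch under the assumption above: with the connection forced to .portrait
// the buffer is already upright, so tell Vision so (.upMirrored rather than
// .up because the front camera's picture is mirrored).
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .upMirrored)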
I see different behavior on iOS 11 vs. iOS 12.
On iOS 11, I get the file path of the shared files in the completion handler.
On iOS 12, I get a URL domain error. But if I handle it based on the type (e.g. UIImage), then I get the file content.
Does this behavior occur only on the simulator, or on a device as well?
Do we need to handle this per iOS version?
Yes, you will get both (a file path or data) on a device as well. You do not need to add any check for the iOS version.
Please follow my code. It is in Swift, but you can understand it.
func share() {
    // Requires `import MobileCoreServices` for kUTTypeImage.
    let inputItem = extensionContext!.inputItems.first! as! NSExtensionItem
    let attachment = inputItem.attachments!.first as! NSItemProvider
    if attachment.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
        attachment.loadItem(forTypeIdentifier: kUTTypeImage as String, options: [:]) { (data, error) in
            var image: UIImage?
            if let someURl = data as? URL {
                image = UIImage(contentsOfFile: someURl.path)   // iOS 11 style: a file URL
            } else if let someImage = data as? UIImage {
                image = someImage                               // iOS 12 style: the image itself
            }
            if let someImage = image {
                guard let compressedImagePath = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first?.appendingPathComponent("shareImage.jpg", isDirectory: false) else {
                    return
                }
                // In Swift 4.2+ this is someImage.jpegData(compressionQuality: 1).
                let compressedImageData = UIImageJPEGRepresentation(someImage, 1)
                guard (try? compressedImageData?.write(to: compressedImagePath)) != nil else {
                    return
                }
            } else {
                print("bad share data")
            }
        }
    }
}
Below is code from a share extension that is the iOS part of a Flutter app.
I have literally hours of experience working with Xcode, so please pardon my noob mistakes.
This is the viewDidLoad method from my SLComposeServiceViewController implementation:
override func viewDidLoad() {
    super.viewDidLoad()
    let content = self.extensionContext!.inputItems[0] as! NSExtensionItem
    let contentTypeImage = kUTTypeImage as String
    let contentTypeText = kUTTypeText as String
    for attachment in content.attachments as! [NSItemProvider] {
        if attachment.hasItemConformingToTypeIdentifier(contentTypeImage) {
            // Verify that the content type is image.
            attachment.loadItem(forTypeIdentifier: contentTypeImage, options: nil) { data, error in
                if error == nil {
                    let url = data as! NSURL
                    if let imageData = NSData(contentsOf: url as URL) {
                        let image = UIImage(data: imageData as Data)
                        // Do something with the image.
                        if thingGoWrong {
                            // Show error message.
                            self.showErrorMessage(text: "Failed to read image.")
                            return
                        }
                        if finalOperationSucceeded {
                            self.extensionContext!.completeRequest(returningItems: nil, completionHandler: nil)
                        }
                    }
                } else {
                    // Display error dialog for unsupported content. Though we should never receive any such thing.
                    self.showErrorMessage(text: error?.localizedDescription)
                }
            }
        }
        if attachment.hasItemConformingToTypeIdentifier(contentTypeText) {
            attachment.loadItem(forTypeIdentifier: contentTypeText, options: nil) { data, error in
                if error == nil {
                    let text = data as! String
                    // Do something with the text.
                    if textOpSucceeded {
                        self.extensionContext!.completeRequest(returningItems: nil, completionHandler: nil)
                    }
                }
            }
        }
    }
}
The text portion of the share extension works as expected, but if I try to send an image to the app, I get this response.
Note:
The same code runs fine when testing on iOS 11.4.
I've tested on an iPhone 6S simulator with iOS 12.0, where it failed.
Have you tried converting the data directly to a UIImage?
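A minimal sketch of what that could look like inside the same loadItem callback, handling both the iOS 11 URL case and the iOS 12 cases (the variable names mirror those in the question):

attachment.loadItem(forTypeIdentifier: contentTypeImage, options: nil) { data, error in
    var image: UIImage?
    if let url = data as? URL, let imageData = try? Data(contentsOf: url) {
        image = UIImage(data: imageData)       // iOS 11: a file URL
    } else if let directImage = data as? UIImage {
        image = directImage                    // iOS 12 may hand back the image itself
    } else if let rawData = data as? Data {
        image = UIImage(data: rawData)         // or the raw bytes
    }
    // ... proceed with `image`, or show an error if it is still nil
}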