How can I scan a 2D barcode using the ZBar library on iOS? Do I need to set a symbol type (like ZBAR_QRCODE) to scan 2D barcodes? I've checked other questions related to scanning, but none gives a correct answer for 2D barcodes. Can anyone help me with this? I've attached a 2D barcode image for reference.
The barcode you're trying to scan is PDF417. It looks like ZBar doesn't support it:
ZBar is an open source software suite for reading bar codes from various sources, such as video streams, image files and raw intensity sensors. It supports many popular symbologies (types of bar codes) including EAN-13/UPC-A, UPC-E, EAN-8, Code 128, Code 39, Interleaved 2 of 5 and QR Code.
I suggest you use some other library for scanning PDF417.
If you target iOS 7.0+, your simplest option is to use MTBBarcodeScanner, which is a neat wrapper around the built-in AVFoundation framework.
If you also target iOS 6.0, or need a more robust solution (for example, scanning US driver's licenses, or a solution that also works on Android), try the PDF417.mobi SDK.
If you use CocoaPods, you can easily try the PDF417.mobi SDK by running
pod try PPpdf417
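For the MTBBarcodeScanner route, a minimal sketch might look like the following. The method names are from my recollection of the library's README (and PDF417 decoding itself comes from AVFoundation's AVMetadataObjectTypePDF417Code), so double-check them against the current API:

```objc
#import "MTBBarcodeScanner.h"

// Scan only PDF417 codes, previewing into an existing UIView.
MTBBarcodeScanner *scanner =
    [[MTBBarcodeScanner alloc] initWithMetadataObjectTypes:@[AVMetadataObjectTypePDF417Code]
                                               previewView:self.previewView];

[MTBBarcodeScanner requestCameraPermissionWithSuccess:^(BOOL granted) {
    if (!granted) { return; }
    NSError *error = nil;
    [scanner startScanningWithResultBlock:^(NSArray *codes) {
        AVMetadataMachineReadableCodeObject *code = codes.firstObject;
        NSLog(@"Scanned PDF417 payload: %@", code.stringValue);
        [scanner stopScanning];
    } error:&error];
}];
```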
I'm attempting to update a Cordova app to read CODABAR-format barcodes.
The barcode scanning plugin in use on iOS relies on the AVFoundation framework to set up an AVCaptureSession to activate the camera and intercept image frames.
Most of the Cordova plugins and iOS tutorials around the web use this method, and attach an AVCaptureMetadataOutput instance to specify which barcode formats we're interested in.
e.g.:
AVCaptureMetadataOutput *outputItems = [[AVCaptureMetadataOutput alloc] init];
[outputItems setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:outputItems];
outputItems.metadataObjectTypes = [outputItems availableMetadataObjectTypes];
Unfortunately, CODABAR is not one of the supported formats.
Once the plugin receives the frames, it uses ZXing to process the image. ZXing supports all the formats I want, but since AVCaptureMetadataOutput doesn't let you specify CODABAR, my plugin never receives the images.
Is there an alternative to using an AVCaptureSession to process frames on the camera?
Am I missing a way to force the frames to be sent through despite the "unblessed" barcode format?
Ah. Found it, I think.
The codebase still has references to the ZXing calls, but the plugin was switched away from using them without removing the old code, which led me down the wrong path.
Before it changed, it was using AVCaptureVideoDataOutput to process the frames itself.
Then it changed to using AVCaptureMetadataOutput instead, delegating the image processing to AVFoundation instead of ZXing.
It looks like, to add CODABAR support, I may need to reverse this, since AVFoundation doesn't do CODABAR.
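As a sketch of that reversal (standard AVFoundation calls; the ZXing hand-off is left as a comment because the exact decoder entry point depends on which port you use), assuming a `captureSession` already exists:

```objc
// Capture raw frames instead of letting AVFoundation decode them,
// so every frame can be handed to a CODABAR-capable ZXing decoder.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoOutput.alwaysDiscardsLateVideoFrames = YES;
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("barcode.frames", DISPATCH_QUEUE_SERIAL)];
[captureSession addOutput:videoOutput];

// AVCaptureVideoDataOutputSampleBufferDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Convert pixelBuffer into whatever luminance/image type your ZXing
    // port expects, then run its decoder (with CODABAR enabled) here.
}
```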
UPDATE: This question is about a year old now, but to tidy it off, I went ahead with the solution I suggested above, since there didn't appear to be any other way to do this on iOS. Here's part of my comment on the plugin's GitHub issue:
It turns out that:
Old versions of this plugin, on iOS, used an (ancient) snapshot of the ZXing C++ port. That library didn't support CODABAR at the time.
At some point, the iOS version switched to using the AVFoundation framework instead, delegating the barcode decoding to AVCaptureMetadataOutput. This framework doesn't support CODABAR either. The switch was necessary due to memory leaks found in the old C++ ZXing approach, discovered when iOS 10 landed on everyone.
I've attempted to integrate this plugin with a more recent Objective-C port of ZXing. This does support CODABAR, along with a few extra formats currently missing from the iOS version of this plugin.
My attempt is over here:
https://github.com/otherchirps/phonegap-plugin-barcodescanner
My iOS project is now scanning CODABAR barcodes using this version of the plugin.
This was based on the earlier efforts to do the same found here:
https://github.com/dually8/BarcodeScanner
I recently started to create an iOS version of my Android app and I've found myself struggling to find a method of extracting dominant colours from an image.
Over in Android there's a palette library provided by Google that allows you to do just that. (https://developer.android.com/reference/android/support/v7/graphics/Palette.html)
I can't seem to find any libraries that will let me do that on iOS, however.
It seems my friend uses this library: https://github.com/pixelogik/ColorCube
Take a look; it seems to do what you expect.
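For reference, usage is roughly like this; the method and flag names are from the ColorCube README as I remember it, so verify them against the version you install:

```objc
#import "CCColorCube.h"

CCColorCube *colorCube = [[CCColorCube alloc] init];
// Extract up to 4 visually distinct dominant colors from a UIImage.
NSArray *dominantColors = [colorCube extractColorsFromImage:image
                                                      flags:CCOnlyDistinctColors
                                                      count:4];
for (UIColor *color in dominantColors) {
    NSLog(@"Dominant color: %@", color);
}
```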
I am not sure if this question has been asked before or not, but I want to know what frameworks do I need to explore in order to do augmented reality with image recognition for iOS.
Basically to build something like this, http://www.youtube.com/watch?v=GbplSdh0lGU
I am using Wikitude's SDK which enables me to use it in PhoneGap as well. Wikitude uses Vuforia's SDK for Image Recognition. Compare Wikitude and Vuforia for their features!
Here is a good SDK which I know for Augmented Reality. You will find tutorials and demos there.
Note: Integrating the Vuforia SDK into your app is difficult, so you can go with the metaio SDK instead; however, modification (changing the target image) is easier in Vuforia.
http://www.metaio.com/sdk/
https://developer.vuforia.com/resources/sample-apps
I am developing an image processing application on CentOS with OpenCV, coding in C/C++. My intention is to have a single development platform for Linux and iOS (iPad).
So if I start the development in a Linux environment with OpenCV installed (in C/C++), can I use the same code on iOS without going for Objective-C? I don't want to put in dual effort for iOS and Linux, so how do I achieve this?
It looks like it's possible. Compiling and running C/C++ on iOS is no problem, but you'll need some Objective-C for the UI. If you pay some attention to the layering/abstraction of your modules, you should be able to share most, if not all, of the core code between the platforms.
See my detailed answer to this question:
iOS:Retrieve rectangle shaped image from the background image
Basically, you can keep most of your C++ code portable between platforms if you keep your user interface code separate. On iOS, all of the UI should be pure Objective-C, while your OpenCV image processing can be pure C++ (which would be exactly the same on Linux). On iOS you would make a thin Objective-C++ wrapper class that mediates between the Objective-C side and the C++ side. All it really does is translate image formats between them and send data in and out of C++ for processing.
I have a couple of simple examples on GitHub you might want to take a look at: OpenCVSquares and OpenCVStitch. These are based on C++ samples distributed with OpenCV; you should compare the C++ in those projects with the original samples to see how much alteration was required (hint: not much).
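The wrapper pattern looks roughly like this. `processMat` stands in for your portable C++ core, and the conversion helpers are the ones OpenCV ships for iOS (their header location varies between OpenCV versions), so treat this as a sketch:

```objc
// ImageProcessor.h -- pure Objective-C interface; no C++ types leak out,
// so plain Objective-C classes can import it.
@interface ImageProcessor : NSObject
- (UIImage *)processImage:(UIImage *)input;
@end

// ImageProcessor.mm -- Objective-C++ (.mm extension), so it may include OpenCV.
#import "ImageProcessor.h"
#import <opencv2/imgcodecs/ios.h>  // UIImageToMat / MatToUIImage helpers

@implementation ImageProcessor
- (UIImage *)processImage:(UIImage *)input {
    cv::Mat mat;
    UIImageToMat(input, mat);   // translate the image format going in
    processMat(mat);            // hypothetical portable C++ core, shared with Linux
    return MatToUIImage(mat);   // translate the image format coming out
}
@end
```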
I'm working on a BlackBerry app which will be used in cafes and restaurants, and one of the features is QR code scanning.
Is there any way to make the camera autofocus the QR code?
Searching, I found FocusControl, which looks like what I'm looking for. Unfortunately, it's only available since OS 5.0.
I wonder how to achieve the same thing on OS 4.5, 4.6, and 4.7.
Any suggestions?
You can't.
In OS versions prior to 5.0, you can launch the native camera app and listen for recently created image files (FileSystemJournalListener). This way, if the BB has autofocus, the image has more definition.
However, there's no way to do this on every BB, unless you apply some image-processing algorithm of your own after taking the picture.