How to decode from a 1D barcode image - iOS

The code below only works for decoding a QR code image; it does not work for 1D barcodes.
I don't want to use any third-party library.
Is it possible to get a CIQRCodeFeature (or something equivalent) for a 1D barcode image?
Your help is appreciated.
func scanCodeFromImage(image: UIImage) -> String? {
    guard let detector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]),
          let ciImage = CIImage(image: image),
          let features = detector.features(in: ciImage) as? [CIQRCodeFeature] else { return nil }
    var qrCodeText = ""
    for feature in features {
        if let message = feature.messageString {
            qrCodeText += message
        }
    }
    return qrCodeText
}

No third-party libraries are needed for this; Apple provides the AVFoundation framework.
Follow this tutorial for a better understanding of how to get it working. All you have to do is change the type of barcode you want to scan, and you can select several different types or just one. This can be done with still images or straight from the camera, without a third-party library; for a still image, see the sketch below.
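If you want to decode a 1D barcode straight from a UIImage rather than the camera, one option is the Vision framework. A minimal sketch, assuming iOS 11+ (CIDetector only exposes a QR-code detector, but VNDetectBarcodesRequest handles 1D symbologies as well):

import UIKit
import Vision

// Minimal sketch: decode barcodes (1D formats as well as QR) from a still image.
// By default the request looks for every symbology Vision supports.
func scanBarcode(from image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(nil)
        return
    }
    let request = VNDetectBarcodesRequest { request, _ in
        // Take the payload of the first barcode Vision found, if any.
        let payload = (request.results as? [VNBarcodeObservation])?
            .compactMap { $0.payloadStringValue }
            .first
        DispatchQueue.main.async { completion(payload) }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}

If you know which formats to expect (EAN-13, Code 128, ...), you can restrict request.symbologies to speed things up, but the defaults are fine for a quick test.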

Related

How performant is CIFilter for displaying several grayscale images in a tableview?

I am using the following to convert images to grayscale before showing them on a UITableView using UIImageView:
extension UIImage {
    var noir: UIImage? {
        let contextForGrayscale = CIContext(options: nil)
        guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        if let output = currentFilter.outputImage,
           let cgImage = contextForGrayscale.createCGImage(output, from: output.extent) {
            return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        }
        return nil
    }
}
Since I am showing these images in a UITableView using UIImageView, each image is being grayscaled as the user scrolls. On my iPhone 13, the performance seems very good and I don't see any lag. However, I am curious how good its performance is on an old device. I don't have an old device, so I am unable to test it.
Is this a performant way to grayscale on the fly and display them? Is there anything I can do to make it better?
Is there a way to make my phone slower for performance testing, to sort of simulate an older device?
If performance / memory pressure doesn't seem to be an issue, I'd just not worry about it. If it is a problem you could use NSCache.
I'd do the caching outside the extension, but for the sake of the code example:
extension UIImage {
    private static var noirCache = NSCache<NSString, UIImage>()

    func makeNoirImage(identifier: String) -> UIImage? {
        if let cachedImage = UIImage.noirCache.object(forKey: identifier as NSString) {
            return cachedImage
        }
        let contextForGrayscale = CIContext(options: nil)
        guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        if let output = currentFilter.outputImage,
           let cgImage = contextForGrayscale.createCGImage(output, from: output.extent) {
            let noirImage = UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
            UIImage.noirCache.setObject(noirImage, forKey: identifier as NSString)
            return noirImage
        }
        return nil
    }
}
Also, check out this article https://nshipster.com/image-resizing/
You could, in addition to creating this new image, also create a thumbnail right sized for its display image view and use the built in caching mechanisms for this. This would save some memory and performance overall. But again, if it's not an issue, I'd be happier to just have the simpler code and no caching!
Oh, one more thing. You could use this https://developer.apple.com/documentation/uikit/uitableviewdatasourceprefetching to create the image ahead of time, asynchronously, before the cell is displayed, so it's ready to go by the time the table asks for the cell at the given index path. Thinking about it, this is probably the simplest / nicest solution here; see the sketch below.
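A minimal sketch of that prefetching idea, assuming a hypothetical PhotosViewController with a photos: [UIImage] data source and the NSCache-backed makeNoirImage(identifier:) helper above (the type and property names are illustrative):

extension PhotosViewController: UITableViewDataSourcePrefetching {
    func tableView(_ tableView: UITableView, prefetchRowsAt indexPaths: [IndexPath]) {
        for indexPath in indexPaths {
            let photo = photos[indexPath.row]
            // Render off the main thread; the NSCache inside makeNoirImage means
            // cellForRowAt gets an instant cache hit when the row scrolls on screen.
            // In real code, use a stable identifier for the photo rather than the row index.
            DispatchQueue.global(qos: .userInitiated).async {
                _ = photo.makeNoirImage(identifier: "\(indexPath.row)")
            }
        }
    }
}

Remember to set tableView.prefetchDataSource = self (in viewDidLoad, for example) so the table actually calls this.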

Rare crashes when setting UIImageView with a UIImage backed with CIImage

First of all, I want to emphasize that this bug concerns only about 1% of the user base according to Firebase Crashlytics.
I have an xcassets catalog with many HEIC images.
I need to display some of those images as such (original version) and some of them blurred.
Here is the code to load and display a normal image or a blurred image.
// Original image
self.imageView.image = UIImage(named: "officeBackground")!
// Blurred image
self.imageView.image = AssetManager.shared.blurred(named: "officeBackground")
I use a manager to cache the blurred images so that I don't have to re-generate them every time I display them.
final class AssetManager {
    static let shared = AssetManager()
    private var blurredBackground = [String: UIImage]()

    func blurred(named name: String) -> UIImage {
        if let cachedImage = self.blurredBackground[name] {
            return cachedImage
        }
        let blurred = UIImage(named: name)!.blurred()!
        self.blurredBackground[name] = blurred
        return blurred
    }
}
And finally the blur code
extension UIImage {
    func blurred() -> UIImage? {
        let ciimage: CIImage? = self.ciImage ?? CIImage(image: self)
        guard let input = ciimage else { return nil }
        let blurredImage = input.clampedToExtent()
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 13])
            .cropped(to: input.extent)
        return UIImage(ciImage: blurredImage, scale: self.scale, orientation: .up)
    }
}
And here are the 2 types of crashes I get
CoreFoundation with CFAutorelease. Crashlytics shows this additional info about it:
crash_info_entry_0:
*** CFAutorelease() called with NULL ***
CoreImage with recursive_render. Crashlytics also shows this additional info about it:
crash_info_entry_0:
Cache Stats: count=14 size=100MB non-volatile=0B peakCount=28 peakSize=199MB peakNVSize=50MB
The only common point I found between all users is that they have between 30 and 150 MB of free RAM at the time of the crash (according to Firebase, if that info is even reliable?).
At this point, I am honestly clueless. It seems like a bug with CoreImage / CoreFoundation with how it handles CIImage in memory.
The weird thing is that, because I'm using the AssetManager to cache the blurred images, I know that at the time of the crash the user already has a cached version available in RAM, and yet setting the UIImageView with the cached image crashes because of low memory (?!). Why is the system even trying to allocate memory to do this?
In my experience, using a UIImage that is created directly from a CIImage is very unreliable and buggy. The main reason is that a CIImage is not really a bitmap image, but rather a recipe containing the instructions for creating an image. It is up to the consumer of the UIImage to know that it's backed by a CIImage and render it properly. UIImageView theoretically does that, but I've seen many reports here on SO that it's somewhat unreliable. And as ohglstr correctly pointed out, caching that UIImage doesn't help much, since it still needs to be rendered every time it's used.
I recommend you use a CIContext to render the blurred images yourself and cache the result. You could for instance do that in your AssetManager:
final class AssetManager {
    static let shared = AssetManager()
    private var blurredBackground = [String: UIImage]()
    private let ciContext = CIContext()

    func blurred(named name: String) -> UIImage {
        if let cachedImage = self.blurredBackground[name] {
            return cachedImage
        }
        // blurred() returns a CIImage-backed UIImage; grab the CIImage and render it once.
        let ciImage = UIImage(named: name)!.blurred()!.ciImage!
        let cgImage = self.ciContext.createCGImage(ciImage, from: ciImage.extent)!
        let blurred = UIImage(cgImage: cgImage)
        self.blurredBackground[name] = blurred
        return blurred
    }
}

Can I generate a QR code that contains both URL and text values?

This question has been asked, but I'm not quite asking the same thing. Using iOS Swift, I am trying to store 2 values in a QR code. One is a URL of an app on the store. The other is a string that can be picked up by that app (it has its own scanner with logic to obtain the string value). The second part works fine, as I can easily parse the whole string. I tried putting a comma between the values, and it almost works, but I get a "Can't connect to the App Store" message when I use a generic scanner. It picks up the URL and tries to connect, but the extra data seems to screw it up. If I take out the comma and string, the URL then works.
Here is a subset of my code...
override func viewDidLoad() {
    super.viewDidLoad()
    let payload = "https://apps.apple.com/ca/app/.../,<my string value>"
    let image = generateQRCode(from: payload)
    qrCodeImage.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
Does anyone know if this is even possible? e.g. could I use json or a vcard, or would the generic scanner not be able to pick out the URL?
You can add the data you want as a query parameter on the QR code's URL, unless there are privacy issues with the data you're appending to the URL. A sketch of that approach follows.
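A minimal sketch of the query-parameter approach, reusing the generateQRCode(from:) function from the question; the parameter name "data" is illustrative and the App Store URL is still the placeholder from the question:

// Build the payload as one URL; URLComponents takes care of percent-encoding.
var components = URLComponents(string: "https://apps.apple.com/ca/app/.../")!
components.queryItems = [URLQueryItem(name: "data", value: "<my string value>")]
let payload = components.url!.absoluteString
qrCodeImage.image = generateQRCode(from: payload)

// In your own app's scanner, read the value back out of the scanned string:
let scannedValue = URLComponents(string: payload)?
    .queryItems?
    .first(where: { $0.name == "data" })?
    .value

A generic scanner then sees a plain App Store URL and opens it, while your own scanner can parse the query item back out.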

Converting a Vision VNTextObservation to a String

I'm looking through the Apple's Vision API documentation and I see a couple of classes that relate to text detection in UIImages:
1) class VNDetectTextRectanglesRequest
2) class VNTextObservation
It looks like they can detect characters, but I don't see a means to do anything with the characters. Once you've got characters detected, how would you go about turning them into something that can be interpreted by NSLinguisticTagger?
Here's a post that is a brief overview of Vision.
Thank you for reading.
This is how to do it ...
//
//  ViewController.swift
//

import UIKit
import Vision
import CoreML

class ViewController: UIViewController {

    // HOLDS OUR INPUT
    var inputImage: CIImage?

    // RESULT FROM OVERALL RECOGNITION
    var recognizedWords: [String] = [String]()

    // RESULT FROM RECOGNITION
    var recognizedRegion: String = String()

    // OCR-REQUEST
    lazy var ocrRequest: VNCoreMLRequest = {
        do {
            // THIS MODEL IS TRAINED BY ME FOR FONT "Inconsolata" (Numbers 0...9 and UpperCase Characters A..Z)
            let model = try VNCoreMLModel(for: OCR().model)
            return VNCoreMLRequest(model: model, completionHandler: self.handleClassification)
        } catch {
            fatalError("cannot load model")
        }
    }()

    // OCR-HANDLER
    func handleClassification(request: VNRequest, error: Error?) {
        guard let observations = request.results as? [VNClassificationObservation]
            else { fatalError("unexpected result") }
        guard let best = observations.first
            else { fatalError("cant get best result") }
        self.recognizedRegion = self.recognizedRegion.appending(best.identifier)
    }

    // TEXT-DETECTION-REQUEST
    lazy var textDetectionRequest: VNDetectTextRectanglesRequest = {
        return VNDetectTextRectanglesRequest(completionHandler: self.handleDetection)
    }()

    // TEXT-DETECTION-HANDLER
    func handleDetection(request: VNRequest, error: Error?) {
        guard let observations = request.results as? [VNTextObservation]
            else { fatalError("unexpected result") }

        // EMPTY THE RESULTS
        self.recognizedWords = [String]()

        // NEEDED BECAUSE OF DIFFERENT SCALES
        let transform = CGAffineTransform.identity.scaledBy(x: (self.inputImage?.extent.size.width)!, y: (self.inputImage?.extent.size.height)!)

        // A REGION IS LIKE A "WORD"
        for region: VNTextObservation in observations {
            guard let boxesIn = region.characterBoxes else {
                continue
            }

            // EMPTY THE RESULT FOR REGION
            self.recognizedRegion = ""

            // A "BOX" IS THE POSITION IN THE ORIGINAL IMAGE (SCALED FROM 0...1.0)
            for box in boxesIn {
                // SCALE THE BOUNDING BOX TO PIXELS
                let realBoundingBox = box.boundingBox.applying(transform)

                // TO BE SURE
                guard (inputImage?.extent.contains(realBoundingBox))!
                    else { print("invalid detected rectangle"); return }

                // SCALE THE POINTS TO PIXELS
                let topleft = box.topLeft.applying(transform)
                let topright = box.topRight.applying(transform)
                let bottomleft = box.bottomLeft.applying(transform)
                let bottomright = box.bottomRight.applying(transform)

                // LET'S CROP AND RECTIFY
                let charImage = inputImage?
                    .cropped(to: realBoundingBox)
                    .applyingFilter("CIPerspectiveCorrection", parameters: [
                        "inputTopLeft": CIVector(cgPoint: topleft),
                        "inputTopRight": CIVector(cgPoint: topright),
                        "inputBottomLeft": CIVector(cgPoint: bottomleft),
                        "inputBottomRight": CIVector(cgPoint: bottomright)
                    ])

                // PREPARE THE HANDLER
                let handler = VNImageRequestHandler(ciImage: charImage!, options: [:])

                // SOME OPTIONS (TO PLAY WITH..)
                self.ocrRequest.imageCropAndScaleOption = VNImageCropAndScaleOption.scaleFill

                // FEED THE CHAR-IMAGE TO OUR OCR-REQUEST - NO NEED TO SCALE IT - VISION WILL DO IT FOR US !!
                do {
                    try handler.perform([self.ocrRequest])
                } catch { print("Error") }
            }

            // APPEND RECOGNIZED CHARS FOR THAT REGION
            self.recognizedWords.append(recognizedRegion)
        }

        // THATS WHAT WE WANT - PRINT WORDS TO CONSOLE
        DispatchQueue.main.async {
            self.PrintWords(words: self.recognizedWords)
        }
    }

    func PrintWords(words: [String]) {
        // VOILA'
        print(recognizedWords)
    }

    func doOCR(ciImage: CIImage) {
        // PREPARE THE HANDLER
        let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])

        // WE NEED A BOX FOR EACH DETECTED CHARACTER
        self.textDetectionRequest.reportCharacterBoxes = true
        self.textDetectionRequest.preferBackgroundProcessing = false

        // FEED IT TO THE QUEUE FOR TEXT-DETECTION
        DispatchQueue.global(qos: .userInteractive).async {
            do {
                try handler.perform([self.textDetectionRequest])
            } catch {
                print("Error")
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.

        // LET'S LOAD AN IMAGE FROM RESOURCE
        let loadedImage: UIImage = UIImage(named: "Sample1.png")! // TRY Sample2, Sample3 too

        // WE NEED A CIIMAGE - NOT NEEDED TO SCALE
        inputImage = CIImage(image: loadedImage)!

        // LET'S DO IT
        self.doOCR(ciImage: inputImage!)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
You'll find the complete project here; the trained model is included!
SwiftOCR
I just got SwiftOCR to work with small sets of text.
https://github.com/garnele007/SwiftOCR
uses
https://github.com/Swift-AI/Swift-AI
which uses a NeuralNet-MNIST model for text recognition.
TODO : VNTextObservation > SwiftOCR
Will post an example of it using VNTextObservation once I have the two connected.
OpenCV + Tesseract OCR
I tried to use OpenCV + Tesseract but got compile errors, then found SwiftOCR.
SEE ALSO : Google Vision iOS
Note: the Google Vision Text Recognition Android SDK has text detection, but there is also an iOS CocoaPod, so keep an eye on it, as text recognition should eventually be added on iOS.
https://developers.google.com/vision/text-overview
// Correction: just tried it, but only the Android version of the SDK supports text detection.
If you subscribe to releases at https://libraries.io/cocoapods/GoogleMobileVision (click SUBSCRIBE TO RELEASES), you can see when TextDetection is added to the iOS part of the CocoaPod.
Apple finally updated Vision to do OCR. Open a playground and dump a couple of test images in the Resources folder. In my case, I called them "demoDocument.jpg" and "demoLicensePlate.jpg".
The new class is called VNRecognizeTextRequest. Dump this in a playground and give it a whirl:
import Vision

enum DemoImage: String {
    case document = "demoDocument"
    case licensePlate = "demoLicensePlate"
}

class OCRReader {
    func performOCR(on url: URL?, recognitionLevel: VNRequestTextRecognitionLevel) {
        guard let url = url else { return }
        let requestHandler = VNImageRequestHandler(url: url, options: [:])

        let request = VNRecognizeTextRequest { (request, error) in
            if let error = error {
                print(error)
                return
            }
            guard let observations = request.results as? [VNRecognizedTextObservation] else { return }

            for currentObservation in observations {
                let topCandidate = currentObservation.topCandidates(1)
                if let recognizedText = topCandidate.first {
                    print(recognizedText.string)
                }
            }
        }
        request.recognitionLevel = recognitionLevel

        try? requestHandler.perform([request])
    }
}

func url(for image: DemoImage) -> URL? {
    return Bundle.main.url(forResource: image.rawValue, withExtension: "jpg")
}

let ocrReader = OCRReader()
ocrReader.performOCR(on: url(for: .document), recognitionLevel: .fast)
There's an in-depth discussion of this from WWDC19
Adding my own progress on this, in case anyone has a better solution:
I've successfully drawn the region boxes and character boxes on screen. Apple's Vision API is actually very performant. You have to transform each frame of your video into an image and feed it to the recogniser; that is much more accurate than feeding the pixel buffer from the camera directly.
// This lives inside the AVCaptureVideoDataOutputSampleBufferDelegate callback
// captureOutput(_:didOutput:from:), which supplies `sampleBuffer`.
if #available(iOS 11.0, *) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    var requestOptions: [VNImageOption: Any] = [:]
    if let camData = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        requestOptions = [.cameraIntrinsics: camData]
    }

    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                    orientation: 6,
                                                    options: requestOptions)

    let request = VNDetectTextRectanglesRequest(completionHandler: { (request, _) in
        guard let observations = request.results else { print("no result"); return }
        let result = observations.map({ $0 as? VNTextObservation })
        DispatchQueue.main.async {
            self.previewLayer.sublayers?.removeSubrange(1...)
            for region in result {
                guard let rg = region else { continue }
                self.drawRegionBox(box: rg)
                if let boxes = region?.characterBoxes {
                    for characterBox in boxes {
                        self.drawTextBox(box: characterBox)
                    }
                }
            }
        }
    })
    request.reportCharacterBoxes = true
    try? imageRequestHandler.perform([request])
}
Now I'm trying to actually recognise the text. Apple doesn't provide any built-in OCR model, and I want to use CoreML to do that, so I'm trying to convert Tesseract's trained data to a CoreML model.
You can find Tesseract models here: https://github.com/tesseract-ocr/tessdata. I think the next step is to write a coremltools converter that supports that type of input and outputs a .coreML file.
Or, you can link to TesseractiOS directly and try to feed it with your region boxes and character boxes you get from the Vision API.
Thanks to a GitHub user, you can test an example: https://gist.github.com/Koze/e59fa3098388265e578dee6b3ce89dd8
- (void)detectWithImageURL:(NSURL *)URL
{
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithURL:URL options:@{}];
    VNDetectTextRectanglesRequest *request = [[VNDetectTextRectanglesRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        if (error) {
            NSLog(@"%@", error);
        }
        else {
            for (VNTextObservation *textObservation in request.results) {
                // NSLog(@"%@", textObservation);
                // NSLog(@"%@", textObservation.characterBoxes);
                NSLog(@"%@", NSStringFromCGRect(textObservation.boundingBox));
                for (VNRectangleObservation *rectangleObservation in textObservation.characterBoxes) {
                    NSLog(@" |-%@", NSStringFromCGRect(rectangleObservation.boundingBox));
                }
            }
        }
    }];
    request.reportCharacterBoxes = YES;
    NSError *error;
    [handler performRequests:@[request] error:&error];
    if (error) {
        NSLog(@"%@", error);
    }
}
The thing is, the result is an array of bounding boxes for each detected character. From what I gathered from Vision's session, I think you are supposed to use CoreML to detect the actual chars.
Recommended WWDC 2017 talk: Vision Framework: Building on Core ML (haven't finished watching it either), have a look at 25:50 for a similar example called MNISTVision
Here's another nifty app demonstrating the use of Keras (TensorFlow) to train an MNIST model for handwriting recognition with CoreML: Github
I'm using Google's Tesseract OCR engine to convert the images into actual strings. You'll have to add it to your Xcode project using CocoaPods. Although Tesseract will perform OCR even if you simply feed it the whole image containing the text, the way to make it perform better/faster is to use the detected text rectangles to feed it only the pieces of the image that actually contain text, which is where Apple's Vision framework comes in handy.
Here's a link to the engine:
Tesseract OCR
And here's a link to the current stage of my project that has text detection + OCR already implemented:
Out Loud - Camera to Speech
Hope these can be of some use. Good luck!
For those still looking for a solution I wrote a quick library to do this. It uses both the Vision API and Tesseract and can be used to achieve the task the question describes with one single method:
func sliceaAndOCR(image: UIImage, charWhitelist: String, charBlackList: String = "", completion: @escaping ((_: String, _: UIImage) -> Void))
This method will look for text in your image and return both the string it found and a slice of the original image showing where the text was found.
Firebase ML Kit does it for iOS (and Android) with their on-device Vision API and it outperforms Tesseract and SwiftOCR.

Consistent binary data from images in Swift

For a small project, I'm making an iOS app which should do two things:
take a picture
take a hash from the picture data (and print it to the xcode console)
Then, I want to export the picture to my laptop and confirm the hash. I tried exporting via AirDrop, Photos.app, email and iCloud (Photos.app compresses the photo and iCloud transforms it into a .png).
The problem is, I can't reproduce the hash. This means that the exported picture differs from the picture in the app. There are some variables I tried to rule out one by one. To get NSData from a picture, one can use the UIImagePNGRepresentation and UIImageJPEGRepresentation functions, forcing the image into a format representation before extracting the data. To be honest, I'm not completely sure what these functions do (other than converting to NSData), but they clearly do something different from each other, because they give different results compared to each other and compared to the exported data (which is .jpg).
Some things are unclear to me about what Swift/Apple is doing to my (picture) data upon exporting. I read in several places that Apple transforms (or deletes) the EXIF data, but it is unclear to me which part. I tried to anticipate this by explicitly removing the EXIF data myself before hashing, both in the app (via the function ImageHelper.removeExifData, found here) and via exiftool on the command line, but to no avail.
I also tried hashing an existing photo on my phone. I had a photo sent to me by mail, but hashing it in my app and on the command line gave different results. A string gave identical results in the app and on the command line, so the hash function(s) are not the problem.
So my questions are:
Is there a way to prevent transformation when exporting a photo
Are there alternatives to UIImagePNGRepresentation / UIImageJPEGRepresentation functions
(3. Is this at all possible or is iOS/Apple too much of a black box?)
Any help or pointers to more documentation is greatly appreciated!
Here is my code
//
//  ViewController.swift
//  camera test
//

import UIKit
import ImageIO
import MessageUI    // needed for MFMailComposeViewControllerDelegate
// CC_SHA256 comes from CommonCrypto, which (in Swift 2) has to be exposed via a bridging header.

// extension on NSData format, to enable conversion to String type
extension NSData {
    func toHexString() -> String {
        var hexString: String = ""
        let dataBytes = UnsafePointer<CUnsignedChar>(self.bytes)
        for (var i: Int = 0; i < self.length; ++i) {
            hexString += String(format: "%02X", dataBytes[i])
        }
        return hexString
    }
}

// function to remove EXIF data from image
class ImageHelper {
    static func removeExifData(data: NSData) -> NSData? {
        guard let source = CGImageSourceCreateWithData(data, nil) else {
            return nil
        }
        guard let type = CGImageSourceGetType(source) else {
            return nil
        }
        let count = CGImageSourceGetCount(source)
        let mutableData = NSMutableData(data: data)
        guard let destination = CGImageDestinationCreateWithData(mutableData, type, count, nil) else {
            return nil
        }
        // Check the keys for what you need to remove
        // As per documentation, if you need a key removed, assign it kCFNull
        let removeExifProperties: CFDictionary = [String(kCGImagePropertyExifDictionary): kCFNull, String(kCGImagePropertyOrientation): kCFNull]
        for i in 0..<count {
            CGImageDestinationAddImageFromSource(destination, source, i, removeExifProperties)
        }
        guard CGImageDestinationFinalize(destination) else {
            return nil
        }
        return mutableData
    }
}

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate, MFMailComposeViewControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!

    // creates var for picture
    var imagePicker: UIImagePickerController!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    // calls Camera function and outputs picture to imagePicker
    @IBAction func cameraAction(sender: UIButton) {
        imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .Camera
        presentViewController(imagePicker, animated: true, completion: nil)
    }

    // calls camera app, based on cameraAction
    func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
        imagePicker.dismissViewControllerAnimated(true, completion: nil)
        imageView.image = info[UIImagePickerControllerOriginalImage] as? UIImage
    }

    // calls photoHash function based on button hashAction
    @IBAction func hashAction(sender: AnyObject) {
        photoHash()
    }

    // converts latest picture to binary to sha256 hash and outputs to console
    func photoHash() {
        let img = ImageHelper.removeExifData(UIImagePNGRepresentation(imageView.image!)!)
        let img2 = ImageHelper.removeExifData(UIImageJPEGRepresentation(imageView.image!, 1.0)!)
        let imgHash = sha256_bin(img!)
        let imgHash2 = sha256_bin(img2!)
        print(imgHash)
        print(imgHash2)
        // write image to photo library
        UIImageWriteToSavedPhotosAlbum(imageView.image!, nil, nil, nil)
    }

    // Digests binary data from picture into sha256 hash, output: hex string
    func sha256_bin(data: NSData) -> String {
        var hash = [UInt8](count: Int(CC_SHA256_DIGEST_LENGTH), repeatedValue: 0)
        CC_SHA256(data.bytes, CC_LONG(data.length), &hash)
        let res = NSData(bytes: hash, length: Int(CC_SHA256_DIGEST_LENGTH))
        let resString = res.toHexString()
        return resString
    }
}
Specifications:
MacBook Pro retina 2013, OS X 10.11.5
Xcode version 7.3.1
Swift 2
iPhone 5S
hash on command line via shasum -a 256 filename.jpg
Since posting my question last week, I have learned that Apple separates the image data from the metadata (the image data is what is stored in the UIImage object), so hashing the UIImage object will never give the same hash as one computed on the command line (or in Python, or wherever). This is because in Python/Perl/etc. the metadata is still present (even with a tool like Exiftool the EXIF data is standardized but still there), whereas in the app environment the metadata is simply gone. (I guess this has something to do with low-level vs. high-level languages, but I'm not sure.)
Although there are some ways to access the EXIF data (or metadata in general) of a UIImage, it is not easy. This is a feature to protect the privacy (among other things) of the user.
Solution
I have found a solution to our specific problem via a different route: it turns out that iOS does keep all of a photo's image data and metadata together in one place on disk. Using the Photos API, I can get access to it with this call (I found this in an answer on SO, but I just don't remember how I ended up there; if you recognise this snippet, please let me know):
// requires: import Photos
func getLastPhoto() {
    let fetchOptions = PHFetchOptions()
    fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
    let fetchResult = PHAsset.fetchAssetsWithMediaType(PHAssetMediaType.Image, options: fetchOptions)
    if let lastAsset: PHAsset = fetchResult.lastObject as? PHAsset {
        let manager = PHImageManager.defaultManager()
        let imageRequestOptions = PHImageRequestOptions()
        manager.requestImageDataForAsset(lastAsset, options: imageRequestOptions) {
            (imageData: NSData?, dataUTI: String?,
             orientation: UIImageOrientation,
             info: [NSObject : AnyObject]?) -> Void in
            // Doing stuff to the NSData in imageData
        }
    }
}
By sorting on creation date and taking the last entry, I get (obviously) the most recent photo. And as long as I don't load it into an imageView, I can do whatever I want with the data (send it to a hash function, in this case).
So the flow is as follows: the user takes a photo, and the photo is saved to the library and shown in the imageView. The user then presses the hash button, upon which the most recently added photo (the one in the imageView) is fetched from disk, metadata and all. I can then export the photo from the library via AirDrop (for now; an HTTPS request at a later stage) and reproduce the hash on my laptop; see the sketch below.
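For completeness, a minimal sketch (Swift 2 era, matching the code above) of what "doing stuff to the NSData" looks like in my case, reusing sha256_bin from the question; it assumes getLastPhoto() lives in the same view controller, and the print is only there to compare against shasum on the laptop:

manager.requestImageDataForAsset(lastAsset, options: imageRequestOptions) {
    (imageData: NSData?, dataUTI: String?,
     orientation: UIImageOrientation,
     info: [NSObject : AnyObject]?) -> Void in
    if let data = imageData {
        // These are the same bytes that end up in the exported file,
        // so this matches shasum -a 256 exported.jpg on the laptop.
        print(self.sha256_bin(data))
    }
}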
