Consistent binary data from images in Swift - ios

For a small project, I'm making an iOS app which should do two things:
take a picture
take a hash from the picture data (and print it to the Xcode console)
Then, I want to export the picture to my laptop and confirm the hash. I tried exporting via AirDrop, Photos.app, email and iCloud (Photos.app compresses the photo and iCloud transforms it into a .png).
Problem is, I can't reproduce the hash. This means that the exported picture differs from the picture in the app. There are several variables here, which I tried to rule out one by one. To get NSData from a picture, one can use the UIImagePNGRepresentation and UIImageJPEGRepresentation functions, which force the image into a format representation before extracting the data. To be honest, I'm not completely sure what these functions do (other than converting to NSData), but they clearly do something: they give different results compared to each other and compared to the exported data (which is a .jpg).
It is also unclear to me what Swift/Apple does to my (picture) data upon exporting. I read in several places that Apple transforms (or deletes) the EXIF data, but which part is unclear to me. I tried to anticipate this by explicitly removing the EXIF data myself before hashing, both in the app (via the function ImageHelper.removeExifData, found here) and via exiftool on the command line, but to no avail.
I also tried hashing an existing photo on my phone. I had a photo sent to me by mail, but hashing it in my app and on the command line gave different results. Hashing a plain string gave identical results in the app and on the command line, so the hash function(s) are not the problem.
So my questions are:
Is there a way to prevent transformation when exporting a photo?
Are there alternatives to the UIImagePNGRepresentation / UIImageJPEGRepresentation functions?
(3. Is this at all possible, or is iOS/Apple too much of a black box?)
Any help or pointers to more documentation is greatly appreciated!
Here is my code
//
//  ViewController.swift
//  camera test
//
import UIKit
import ImageIO
import MessageUI // needed for MFMailComposeViewControllerDelegate
// Note: CC_SHA256 comes from CommonCrypto, imported via an Objective-C bridging header.

// extension on NSData, to enable conversion to a hex String
extension NSData {
    func toHexString() -> String {
        var hexString: String = ""
        let dataBytes = UnsafePointer<CUnsignedChar>(self.bytes)
        for i in 0..<self.length {
            hexString += String(format: "%02X", dataBytes[i])
        }
        return hexString
    }
}
// helper to remove EXIF data from image data
class ImageHelper {
    static func removeExifData(data: NSData) -> NSData? {
        guard let source = CGImageSourceCreateWithData(data, nil) else {
            return nil
        }
        guard let type = CGImageSourceGetType(source) else {
            return nil
        }
        let count = CGImageSourceGetCount(source)
        let mutableData = NSMutableData(data: data)
        guard let destination = CGImageDestinationCreateWithData(mutableData, type, count, nil) else {
            return nil
        }
        // Check the keys for what you need to remove.
        // As per the documentation, to remove a key, assign it kCFNull.
        let removeExifProperties: CFDictionary = [String(kCGImagePropertyExifDictionary): kCFNull, String(kCGImagePropertyOrientation): kCFNull]
        for i in 0..<count {
            CGImageDestinationAddImageFromSource(destination, source, i, removeExifProperties)
        }
        guard CGImageDestinationFinalize(destination) else {
            return nil
        }
        return mutableData
    }
}
class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate, MFMailComposeViewControllerDelegate {
    @IBOutlet weak var imageView: UIImageView!
    // creates var for the picture
    var imagePicker: UIImagePickerController!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    // presents the camera and outputs the picture to imagePicker
    @IBAction func cameraAction(sender: UIButton) {
        imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .Camera
        presentViewController(imagePicker, animated: true, completion: nil)
    }

    // delegate callback for the camera, triggered by cameraAction
    func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
        imagePicker.dismissViewControllerAnimated(true, completion: nil)
        imageView.image = info[UIImagePickerControllerOriginalImage] as? UIImage
    }

    // calls the photoHash function from the hash button
    @IBAction func hashAction(sender: AnyObject) {
        photoHash()
    }

    // converts the latest picture to binary data, hashes it with SHA-256 and prints to the console
    func photoHash() {
        let img = ImageHelper.removeExifData(UIImagePNGRepresentation(imageView.image!)!)
        let img2 = ImageHelper.removeExifData(UIImageJPEGRepresentation(imageView.image!, 1.0)!)
        let imgHash = sha256_bin(img!)
        let imgHash2 = sha256_bin(img2!)
        print(imgHash)
        print(imgHash2)
        // write image to the photo library
        UIImageWriteToSavedPhotosAlbum(imageView.image!, nil, nil, nil)
    }

    // digests binary data from the picture into a SHA-256 hash, output: hex string
    func sha256_bin(data: NSData) -> String {
        var hash = [UInt8](count: Int(CC_SHA256_DIGEST_LENGTH), repeatedValue: 0)
        CC_SHA256(data.bytes, CC_LONG(data.length), &hash)
        let res = NSData(bytes: hash, length: Int(CC_SHA256_DIGEST_LENGTH))
        return res.toHexString()
    }
}
Specifications:
MacBook Pro Retina 2013, OS X 10.11.5
Xcode 7.3.1
Swift 2
iPhone 5s
hash on the command line via shasum -a 256 filename.jpg

Since posting my question last week, I learned that Apple separates the image data from the metadata: a UIImage object only holds the decoded image data, not the original file bytes. So hashing data re-encoded from a UIImage will never match a hash computed on the command line (or in Python, or wherever) over the exported file. For python/perl/etc. the metadata is still present in the file (even after a tool such as exiftool standardizes or strips the EXIF, the file container is still there), whereas in the app environment the metadata is simply not part of the UIImage at all.
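To convince myself, here is a minimal sketch of the difference (Swift 2 era; the bundled photo.jpg and the reuse of the sha256_bin helper from the code above are illustrative assumptions):
if let path = NSBundle.mainBundle().pathForResource("photo", ofType: "jpg"),
       fileData = NSData(contentsOfFile: path),
       image = UIImage(data: fileData) {
    // hash of the container bytes on disk (what shasum sees)
    print(sha256_bin(fileData))
    // hash of the re-encoded UIImage: decoded pixels, re-compressed, no metadata
    print(sha256_bin(UIImageJPEGRepresentation(image, 1.0)!))
}
The two printed digests differ, even though the pixels are "the same" picture.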
Although there are some ways to access the EXIF data (or metadata in general) of a UIImage, it is not easy. This is a feature to protect the privacy (among other things) of the user.
Solution
I have found a solution to our specific problem via a different route: it turns out that iOS does save all the image data and metadata together in one place on disk for a photo. By using the Photos API, I can get access to these with the following call (I found this in an answer on SO, but I just don't remember how I ended up there. If you recognise this snippet, please let me know):
// requires: import Photos
func getLastPhoto() {
    let fetchOptions = PHFetchOptions()
    fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
    let fetchResult = PHAsset.fetchAssetsWithMediaType(PHAssetMediaType.Image, options: fetchOptions)
    if let lastAsset: PHAsset = fetchResult.lastObject as? PHAsset {
        let manager = PHImageManager.defaultManager()
        let imageRequestOptions = PHImageRequestOptions()
        manager.requestImageDataForAsset(lastAsset, options: imageRequestOptions) {
            (imageData: NSData?, dataUTI: String?, orientation: UIImageOrientation, info: [NSObject : AnyObject]?) -> Void in
            // Do stuff with the NSData in imageData (here: hash it, see below)
        }
    }
}
By sorting on creation date in ascending order, the last object is (obviously) the most recent photo. And as long as I don't load it into an imageView, I can do with the data what I want (send it to a hash function, in this case).
So the flow is as follows: the user takes a photo, which is saved to the library and shown in the imageView. The user then presses the hash button, upon which the most recently added photo (the one in the imageView) is fetched from disk, metadata and all. I can then export the photo from the library via AirDrop (for now; an https request at a later stage) and reproduce the hash on my laptop.
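In this app, the "stuff" in that completion handler is just hashing; a minimal sketch of the handler body, reusing the sha256_bin helper from the code above:
// inside the requestImageDataForAsset result handler
if let data = imageData {
    // these are the on-disk bytes, metadata included, so the digest
    // matches `shasum -a 256 filename.jpg` run on the exported file
    print(self.sha256_bin(data))
}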

Related

Can I generate a QR code that contains both URL and text values?

This question has been asked before, but I'm not quite asking the same thing. Using Swift on iOS, I am trying to store two values in a QR code. One is the URL of an app on the App Store; the other is a string that can be picked up by that app (it has its own scanner with logic to obtain the string value). The second part works fine, as I can easily parse the whole string. I tried putting a comma between the values, and it almost works, but I get a "Can't connect to the App Store" message when I use a generic scanner: it picks up the URL and tries to connect, but the extra data seems to screw it up. If I take out the comma and the string, the URL works.
Here is a subset of my code...
override func viewDidLoad() {
    super.viewDidLoad()
    let payload = "https://apps.apple.com/ca/app/.../,<my string value>"
    let image = generateQRCode(from: payload)
    qrCodeImage.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
Does anyone know if this is even possible? E.g. could I use JSON or a vCard, or would a generic scanner not be able to pick out the URL?
You can add the data you want as a query parameter on the QR code's URL, unless there are privacy issues with the data you're appending to the URL.
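A minimal sketch of that idea (hedged: the query parameter name "payload" is hypothetical, and the app URL is the elided one from the question; URLComponents handles the percent-encoding):
import Foundation

// Carry the extra value as a query item on the same URL, so a generic
// scanner just opens the store link while the custom app reads the parameter.
var components = URLComponents(string: "https://apps.apple.com/ca/app/...")!
components.queryItems = [URLQueryItem(name: "payload", value: "<my string value>")]
let qrString = components.url!.absoluteString

// reading it back in the custom scanner:
let value = URLComponents(string: qrString)?
    .queryItems?
    .first(where: { $0.name == "payload" })?
    .value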

How to decode from 1D barcode image

The code below only works for decoding a QR code image; it is not applicable to 1D barcodes.
I don't want to use any third-party library.
Is it possible to get a CIQRCodeFeature (or something equivalent) for a 1D barcode image?
Your help is appreciated.
func scanCodeFromImage(image: UIImage) -> String? {
    guard let detector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]),
          let ciImage = CIImage(image: image),
          let features = detector.features(in: ciImage) as? [CIQRCodeFeature] else { return nil }
    var qrCodeText = ""
    for feature in features {
        if let message = feature.messageString {
            qrCodeText += message
        }
    }
    return qrCodeText
}
No third-party libraries are needed for this; Apple provides the AVFoundation framework.
Follow this tutorial for a better understanding of how to get it working. All you have to do is change the type of barcode you want to scan, and you can select several different types or just one. This can be done with images or straight from the camera, without using a third-party library.
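For decoding from a still image specifically, one alternative worth mentioning (not from the answer above, but also first-party) is the Vision framework (iOS 11+), whose barcode detector handles 1D symbologies such as EAN-13 and Code 128 in addition to QR; a minimal sketch:
import UIKit
import Vision

// Decode any supported barcode (1D or 2D) from a still image.
func scanBarcode(from image: UIImage) -> String? {
    guard let cgImage = image.cgImage else { return nil }
    let request = VNDetectBarcodesRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        return nil
    }
    // Each observation carries its decoded payload, if any.
    let payloads = (request.results as? [VNBarcodeObservation])?
        .compactMap { $0.payloadStringValue } ?? []
    return payloads.isEmpty ? nil : payloads.joined(separator: "\n")
}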

Saving multiple images to Parse

So I have an array of images I've accessed from my xcassets for demonstration purposes. There are 150 images I'm trying to save to my Parse server at one time, using the Parse frameworks. Here is the code I have so far. The problem is that my app's CPU goes to 100% in the tests and then drops to 0. Also, the images aren't saving to Parse. I was hoping someone could help me find an efficient way to save 150 images to Parse.
var imageNameList: [String] {
    var imageNameList2: [String] = []
    for i in 0...149 {
        let imageName = String(format: "pic_%03d", Int(i))
        imageNameList2.append(imageName)
    }
    return imageNameList2
}

@IBAction func Continue(_ sender: Any) {
    for imageName in imageNameList {
        let objectForSave = PFObject(className: "Clo")
        let object: UIImage = UIImage(named: imageName)!
        let tilesPF = imageNameList.map({ (name: String) -> PFObject in
            let data = UIImagePNGRepresentation(object)!
            let file = PFFile(data: data)
            let tile = PFObject(className: "Tile")
            tile["tile"] = file
            return tile
        })
        objectForSave["tiles"] = tilesPF
        objectForSave.saveInBackground(block: { responseObject, error in
            // you'll want to save the object ID of the PFObject if you want to retrieve a specific image later
        })
    }
}
The trouble is that the tight for-loop launches all of those requests concurrently, causing some part of the HTTP stack to bottleneck.
Instead, run the requests sequentially, as follows (in my best approximation of Swift)...
func doOne(imageName: String, completion: @escaping (Bool) -> Void) {
    let objectForSave = PFObject(className: "Clo")
    let object: UIImage = UIImage(named: imageName)!
    // ... OP code that forms the request
    objectForSave.saveInBackground(block: { responseObject, error in
        completion(error == nil)
    })
}

func doMany(imageNames: [String], completion: @escaping (Bool) -> Void) {
    guard let nextName = imageNames.first else {
        completion(true) // empty to-do list: everything succeeded
        return
    }
    doOne(imageName: nextName) { success in
        if success {
            self.doMany(imageNames: Array(imageNames.dropFirst()), completion: completion)
        } else {
            completion(false)
        }
    }
}
In English, just in case I goofed the Swift: the idea is to factor out a single request into its own function with a completion handler. Build a second function that takes an array of arguments for the network requests and uses that array like a to-do list: do the first item on the list and, when it completes, call itself recursively to do the remaining items.
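A hypothetical call site, replacing the OP's original loop in the Continue action:
@IBAction func Continue(_ sender: Any) {
    // kick off the sequential upload of all 150 images
    doMany(imageNames: imageNameList) { success in
        print(success ? "all saves finished" : "a save failed; stopped early")
    }
}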

download images using alamofire - iOS

I'm using Alamofire to fetch data from a server and then put it in an array of CarType objects, where CarType is my struct. What I get from the server is name, id and iconUrl. From the iconUrls I want to download the icons and put them in icon. After that I'll use icon and name in a collection view. My Alamofire request is:
var info = [CarType]()

Alamofire.request(.GET, "url")
    .responseJSON { response in
        // assuming SwiftyJSON here, since the original snippet iterates a `json` value
        let json = JSON(response.result.value ?? NSNull())
        for (_, subJson): (String, JSON) in json["result"] {
            let name = subJson["name"].string
            let iconUrl = subJson["icon"].string
            let id = subJson["id"].int
            info.append(CarType(name: name!, id: id!, iconUrl: iconUrl!, icon: UIImage()))
        }
    }
my struct is:
import Foundation
import UIKit

struct CarType {
    var name: String
    var id: Int
    var iconUrl: String
    var icon: UIImage
}
I want to download the images before using them in the collectionView.
How can I download the images (using AlamofireImage) and put them in the related CarType's icon property?
What you are asking for is really bad practice in a mobile app. Just as an example: you make a request and get, say, 20 items in an array; to put a UIImage into every model you would have to make 20 more requests, and you don't even know whether your users will ever actually view those icons.
Instead, you should fetch an image only when the cell (I guess you will be displaying those icons in a cell) is displayed. For this purpose you can use libs like SDWebImage (Objective-C) or Kingfisher (Swift), which provide extensions for UIImageView that make it simple to fetch and display an image, as in the sketch below. These libs also cache the downloaded images.
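A minimal sketch of the Kingfisher route (hedged: the cell and its iconImageView / nameLabel outlets are hypothetical names, not from the question):
import Kingfisher

// inside collectionView(_:cellForItemAt:)
let carType = info[indexPath.item]
cell.nameLabel.text = carType.name
if let url = URL(string: carType.iconUrl) {
    // downloads asynchronously, caches, and sets the image on arrival
    cell.iconImageView.kf.setImage(with: url)
}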
Also, another suggestion, on object mapping: currently you are mapping JSON to your model manually. There are a lot of good libs that can automate the object-mapping process for you, for example ObjectMapper.
Hope this was helpful. Good luck!
I have done a similar thing in a UITableView, adding the following in the cellForRow method:
// inside tableView(_:cellForRowAt:)
getDataFromUrl(urlString) { (data, response, error) -> Void in
    if error == nil {
        // Convert the downloaded data into a UIImage object
        let image = UIImage(data: data!)
        if image != nil {
            // Store the image in our cache
            self.imageCacheProfile[urlString] = image
            // Update the cell on the main thread
            DispatchQueue.main.async(execute: {
                cell.imgvwProfile?.image = image
            })
        }
        else {
            cell.imgvwProfile!.image = UIImage(named: "user")
        }
    }
}
func getDataFromUrl(_ strUrl: String, completion: @escaping ((_ data: Data?, _ response: URLResponse?, _ error: NSError?) -> Void)) {
    let url: URL = URL(string: strUrl)!
    let request = URLRequest(url: url)
    URLSession.shared.dataTask(with: request) { data, response, err in
        // hand the result back to the caller
        completion(data, response, err as NSError?)
    }.resume()
}
You also need to declare the image cache dictionary to store the downloaded images (named imageCacheProfile to match its use above).
var imageCacheProfile = [String: UIImage]()
You can use the above code in your method and it should just work.

loadPreviewImageWithOptions options dictionary

I'm writing an iOS app share extension, and I wanted to obtain a large preview image. After some effort, I was able to make this code work:
class ShareViewController: SLComposeServiceViewController {

    // This is the result handler for the call to loadPreviewImageWithOptions
    let imageHandler: NSItemProviderCompletionHandler = { [unowned self]
        (result: NSSecureCoding?, error: NSError!) in
        if result is UIImage {
            let image = result as! UIImage
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                self.imageView.image = image
                self.imageView.contentMode = UIViewContentMode.ScaleAspectFill
                self.imageView.clipsToBounds = true
            })
        }
    }

    // Find the shared item and preview it.
    for item: AnyObject in self.extensionContext!.inputItems {
        let inputItem = item as! NSExtensionItem
        for provider: AnyObject in inputItem.attachments! {
            let provider = provider as! NSItemProvider
            // I want a preview image as large as the device.
            var options_dict = [NSObject: AnyObject]()
            options_dict[NSItemProviderPreferredImageSizeKey] = NSValue(CGSize: CGSize(width: 960, height: 540))
            provider.loadPreviewImageWithOptions(options_dict, completionHandler: imageHandler)
        }
    }
    ...
}
I obtain an image, but its size is always 84 x 79 pixels. According to the NSItemProvider documentation, the options dictionary should support a preview image size:
options - A dictionary of keys and values that provide information about the item, such as the size of an image. For a list of possible keys, see Options Dictionary Key.
And under Options Dictionary Key on the same page:
NSItemProviderPreferredImageSizeKey -
A key specifying the dimensions of an image in pixels. The value of this key is an NSValue object containing a CGSize or NSSize data type.
There is one clue:
Keys are used in the dictionary passed to the options parameter of a NSItemProviderLoadHandler block.
So maybe I have to call or override loadItemForTypeIdentifier with the size option, and then call loadPreviewImageWithOptions? I'm trying this now.
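For reference, a hedged sketch of that idea (Swift 2 era; kUTTypeImage comes from MobileCoreServices, and whether the provider honors the size key is exactly what I'm testing):
import MobileCoreServices

// Ask for the item itself, passing the preferred-size option;
// the result may arrive as a file URL, raw NSData, or a UIImage.
let options: [NSObject: AnyObject] = [NSItemProviderPreferredImageSizeKey:
    NSValue(CGSize: CGSize(width: 960, height: 540))]
if provider.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
    provider.loadItemForTypeIdentifier(kUTTypeImage as String, options: options) {
        (item: NSSecureCoding?, error: NSError!) in
        if let url = item as? NSURL, data = NSData(contentsOfURL: url), image = UIImage(data: data) {
            print(image.size) // confirm we got more than 84 x 79
        }
    }
}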
