loadPreviewImageWithOptions options dictionary - iOS

I'm writing an iOS app share extension, and I wanted to obtain a large preview image. After some effort, I was able to make this code work:
class ShareViewController: SLComposeServiceViewController {

    // This is the result handler for the call to loadPreviewImageWithOptions.
    let imageHandler: NSItemProviderCompletionHandler = { [unowned self]
        (result: NSSecureCoding?, error: NSError!) in
        if result is UIImage {
            let image = result as! UIImage
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                self.imageView.image = image
                self.imageView.contentMode = UIViewContentMode.ScaleAspectFill
                self.imageView.clipsToBounds = true
            })
        }
    }

    // Find the shared item and preview it.
    for item: AnyObject in self.extensionContext!.inputItems {
        let inputItem = item as! NSExtensionItem
        for provider: AnyObject in inputItem.attachments! {
            let provider = provider as! NSItemProvider
            // I want a preview image as large as the device.
            var options_dict = [NSObject: AnyObject]()
            options_dict[NSItemProviderPreferredImageSizeKey] = NSValue(CGSize: CGSize(width: 960, height: 540))
            provider.loadPreviewImageWithOptions(options_dict, completionHandler: imageHandler)
        }
    }
    ...
}
I obtain an image, but its size is always 84 x 79 pixels. According to the NSItemProvider documentation, the options dictionary should support a preview image size:
options - A dictionary of keys and values that provide information about the item, such as the size of an image. For a list of possible keys, see Options Dictionary Key.
And under Options Dictionary Key on the same page:
NSItemProviderPreferredImageSizeKey -
A key specifying the dimensions of an image in pixels. The value of this key is an NSValue object containing a CGSize or NSSize data type.
There is one clue:
Keys are used in the dictionary passed to the options parameter of a NSItemProviderLoadHandler block.
So maybe I have to call or override loadItemForTypeIdentifier with the size option, and then call loadPreviewImageWithOptions? I'm trying this now.
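This is roughly what I am about to try (an untested sketch; kUTTypeImage and the MobileCoreServices import are my assumption about the shared item's type):
// needs: import MobileCoreServices
let options = [NSItemProviderPreferredImageSizeKey: NSValue(CGSize: CGSize(width: 960, height: 540))]
if provider.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
    // Load the item with the size option first...
    provider.loadItemForTypeIdentifier(kUTTypeImage as String, options: options, completionHandler: { (item, error) in
        // ...then request the preview with the same options.
        provider.loadPreviewImageWithOptions(options, completionHandler: imageHandler)
    })
}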

Related

On iOS, will there be a race condition between two UIDragItem.itemProvider.loadObject calls?

I am implementing dragging and dropping images from, let's say, Safari into a collectionView of my own app. I need both the UIImage and the image URL from the dragItem.
The way to get either of them is UIDragItem.itemProvider.loadObject. But since loadObject(ofClass:completionHandler:) runs asynchronously, how can I make sure there will be no race condition in the code below?
(Note: both calls are in the performDropWith func; I want to make sure they execute in order because I need to do work immediately after the second one finishes.)
var imageARAndURL = [String: Any]()

_ = item.dragItem.itemProvider.loadObject(ofClass: UIImage.self) { (provider, error) in
    if let image = provider as? UIImage {
        let aspectRatio = image.size.height / image.size.width
        print(aspectRatio)
        imageARAndURL["aspectRatio"] = aspectRatio
    }
}

_ = item.dragItem.itemProvider.loadObject(ofClass: URL.self) { (provider, error) in
    if let url = provider {
        print(url.imageURL)
        imageARAndURL["imageURL"] = url.imageURL
        print(imageARAndURL)
    }
}
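One common way to coordinate the two loads is a DispatchGroup: enter before each load, leave in each completion handler, and do the follow-up work in notify. A minimal sketch (the DispatchGroup and the hop to the main queue are my additions; the names, including the imageURL helper, come from the question):
let group = DispatchGroup()
var imageARAndURL = [String: Any]()

group.enter()
_ = item.dragItem.itemProvider.loadObject(ofClass: UIImage.self) { provider, _ in
    // Completion handlers may run on an arbitrary queue, so hop to main
    // before touching shared state.
    DispatchQueue.main.async {
        if let image = provider as? UIImage {
            imageARAndURL["aspectRatio"] = image.size.height / image.size.width
        }
        group.leave()
    }
}

group.enter()
_ = item.dragItem.itemProvider.loadObject(ofClass: URL.self) { url, _ in
    DispatchQueue.main.async {
        if let url = url {
            imageARAndURL["imageURL"] = url.imageURL
        }
        group.leave()
    }
}

group.notify(queue: .main) {
    // Both loads have finished; it is now safe to use imageARAndURL.
    print(imageARAndURL)
}
Note that this does not force the two loads to run in any particular order; it only guarantees that the notify block runs after both completion handlers have finished, which is exactly the "do work immediately after the second one finishes" requirement.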

Consistent binary data from images in Swift

For a small project, I'm making an iOS app which should do two things:
take a picture
take a hash from the picture data (and print it to the xcode console)
Then, I want to export the picture to my laptop and confirm the hash. I tried exporting via AirDrop, Photos.app, email and iCloud (Photos.app compresses the photo and iCloud transforms it into a .png).
Problem is, I can't reproduce the hash. This means that the exported picture differs from the picture in the app. There are some variables I tried to rule out one by one. To get NSData from a picture, one can use the UIImagePNGRepresentation and UIImageJPEGRepresentation functions, which force the image into a format representation before extracting the data. To be honest, I'm not completely sure what these functions do (other than converting to NSData), but they clearly do something different, because they give different results compared to each other and compared to the exported data (which is .jpg).
It is unclear to me what Swift/Apple does to my (picture) data upon exporting. I read in several places that Apple transforms (or deletes) the EXIF data, but it is unclear to me which part. I tried to anticipate this by explicitly removing the EXIF data myself before hashing, both in the app (via the function ImageHelper.removeExifData, found here) and via exiftool on the command line, but to no avail.
I also tried hashing an existing photo on my phone: I had a photo sent to me by mail, but hashing it in my app and on the command line gave different results. Hashing a string gave identical results in the app and on the command line, so the hash function(s) are not the problem.
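(For reference, this is the kind of string sanity check I mean, reusing the sha256_bin helper from my code below:)
// Hash a known string in the app...
let data = "hello".dataUsingEncoding(NSUTF8StringEncoding)!
print(sha256_bin(data))
// ...and compare with the command line (case-insensitively; the helper
// prints uppercase hex):
//   echo -n "hello" | shasum -a 256
//   2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824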
So my questions are:
Is there a way to prevent transformation when exporting a photo
Are there alternatives to UIImagePNGRepresentation / UIImageJPEGRepresentation functions
(3. Is this at all possible or is iOS/Apple too much of a black box?)
Any help or pointers to more documentation is greatly appreciated!
Here is my code
//
//  ViewController.swift
//  camera test

import UIKit
import ImageIO
import MessageUI   // for MFMailComposeViewControllerDelegate
// Note: CC_SHA256 comes from CommonCrypto, which needs
// #import <CommonCrypto/CommonCrypto.h> in the bridging header.

// Extension on NSData, to enable conversion to a hex String
extension NSData {
    func toHexString() -> String {
        var hexString: String = ""
        let dataBytes = UnsafePointer<CUnsignedChar>(self.bytes)
        for (var i: Int = 0; i < self.length; ++i) {
            hexString += String(format: "%02X", dataBytes[i])
        }
        return hexString
    }
}

// Helper to remove EXIF data from an image
class ImageHelper {
    static func removeExifData(data: NSData) -> NSData? {
        guard let source = CGImageSourceCreateWithData(data, nil) else {
            return nil
        }
        guard let type = CGImageSourceGetType(source) else {
            return nil
        }
        let count = CGImageSourceGetCount(source)
        let mutableData = NSMutableData(data: data)
        guard let destination = CGImageDestinationCreateWithData(mutableData, type, count, nil) else {
            return nil
        }
        // Check the keys for what you need to remove.
        // As per the documentation, to remove a key, assign it kCFNull.
        let removeExifProperties: CFDictionary = [String(kCGImagePropertyExifDictionary): kCFNull,
                                                  String(kCGImagePropertyOrientation): kCFNull]
        for i in 0..<count {
            CGImageDestinationAddImageFromSource(destination, source, i, removeExifProperties)
        }
        guard CGImageDestinationFinalize(destination) else {
            return nil
        }
        return mutableData
    }
}

class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate, MFMailComposeViewControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!

    // Holds the picker that supplies the picture
    var imagePicker: UIImagePickerController!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    // Presents the camera; the result arrives in the delegate callback below
    @IBAction func cameraAction(sender: UIButton) {
        imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .Camera
        presentViewController(imagePicker, animated: true, completion: nil)
    }

    // Delegate callback: shows the picture that was just taken
    func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
        imagePicker.dismissViewControllerAnimated(true, completion: nil)
        imageView.image = info[UIImagePickerControllerOriginalImage] as? UIImage
    }

    // Calls photoHash() when the hash button is pressed
    @IBAction func hashAction(sender: AnyObject) {
        photoHash()
    }

    // Converts the latest picture to binary data, hashes it with SHA-256 and prints to the console
    func photoHash() {
        let img = ImageHelper.removeExifData(UIImagePNGRepresentation(imageView.image!)!)
        let img2 = ImageHelper.removeExifData(UIImageJPEGRepresentation(imageView.image!, 1.0)!)
        let imgHash = sha256_bin(img!)
        let imgHash2 = sha256_bin(img2!)
        print(imgHash)
        print(imgHash2)
        // write image to photo library
        UIImageWriteToSavedPhotosAlbum(imageView.image!, nil, nil, nil)
    }

    // Digests binary data into a SHA-256 hash, output as a hex string
    func sha256_bin(data: NSData) -> String {
        var hash = [UInt8](count: Int(CC_SHA256_DIGEST_LENGTH), repeatedValue: 0)
        CC_SHA256(data.bytes, CC_LONG(data.length), &hash)
        let res = NSData(bytes: hash, length: Int(CC_SHA256_DIGEST_LENGTH))
        return res.toHexString()
    }
}
Specifications:
MacBook Pro Retina 2013, OS X 10.11.5
Xcode 7.3.1
Swift 2
iPhone 5s
hash on the command line via shasum -a 256 filename.jpg
Since posting my question last week, I have learned that Apple separates the image data from the metadata (the image data is what is stored in the UIImage object), so hashing the UIImage object will never result in a hash that matches one digested on the command line (or in Python, or wherever). That is because on the command line the metadata is still present (even with a tool such as exiftool the EXIF data is standardized, but still there), whereas in the app environment the EXIF data is simply not there.
Although there are some ways to access the EXIF data (or metadata in general) of a UIImage, it is not easy. This is a feature that protects (among other things) the privacy of the user.
Solution
I have found a solution to our specific problem via a different route: it turns out that iOS does save all the image data and metadata of a photo in one place on disk. By using the Photos API, I can get access to them with this call (I found this in an answer on SO, but I just don't remember how I ended up there; if you recognise this snippet, please let me know):
func getLastPhoto() {
    // Requires `import Photos`
    let fetchOptions = PHFetchOptions()
    fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
    let fetchResult = PHAsset.fetchAssetsWithMediaType(PHAssetMediaType.Image, options: fetchOptions)
    if let lastAsset: PHAsset = fetchResult.lastObject as? PHAsset {
        let manager = PHImageManager.defaultManager()
        let imageRequestOptions = PHImageRequestOptions()
        manager.requestImageDataForAsset(lastAsset, options: imageRequestOptions) {
            (imageData: NSData?, dataUTI: String?, orientation: UIImageOrientation, info: [NSObject : AnyObject]?) -> Void in
            // Do whatever you need with the raw NSData in imageData
        }
    }
}
Sorted by creation date, the last entry is (obviously) the most recent photo. And as long as I don't load it into an imageView, I can do with the data what I want (sending it to a hash function in this case).
So the flow is as follows: the user takes a photo, the photo is saved to the library and shown in the imageView. The user then presses the hash button, upon which the most recently added photo (the one in the imageView) is fetched from disk, metadata and all. I can then export the photo from the library via AirDrop (for now; an HTTPS request at a later stage) and reproduce the hash on my laptop.
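To make the hashing step concrete, this is roughly what it looks like inside the callback (my wiring, reusing sha256_bin from the code above):
manager.requestImageDataForAsset(lastAsset, options: imageRequestOptions) {
    (imageData: NSData?, dataUTI: String?, orientation: UIImageOrientation, info: [NSObject : AnyObject]?) -> Void in
    // imageData holds the on-disk file bytes (pixels plus metadata), so this
    // hash should match `shasum -a 256` run against the exported file.
    if let imageData = imageData {
        print(self.sha256_bin(imageData))
    }
}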

Work with multi-dimensional array dynamically as UIImage array

I'm working on an app with paged photos inside scroll views, like in the Photos app: swipe right to get the older photos and swipe left to get the newer photos, until the end of the photos.
So I am getting the photos from the web as a multi-dimensional array:
imageArray[indexPath.row][0] as? String
That's in ViewController1, which shows all the images in a collection view.
When the user taps a photo I segue and show the image larger in ViewController2, so swiping left shows the newer photos and swiping right the older photos stored in the array.
But I need to convert my two-dimensional array to a one-dimensional one and use it dynamically, to end up with something like this:
pageImages = [UIImage(named: "photo1.png")!,
              UIImage(named: "photo2.png")!,
              UIImage(named: "photo3.png")!,
              UIImage(named: "photo4.png")!,
              UIImage(named: "photo5.png")!]
How is it possible to do this?
Could I say something like:
pageImages = [UIImage(named: thewholearray)]?
I first tried to convert to a one-dimensional array, but I failed:
var imageArray: NSArray = []
var mybigarray: NSArray = []

for (var i = 0; i <= self.imageArray.count; i++) {
    self.mybigarray = self.imageArray[i][0] as! NSArray
}
which generates this casting error:
Could not cast value of type '__NSCFString' (0x196806958) to 'NSArray' (0x196807308).
You can use the map function to extract the image names from your multi-dimensional array.
var imageNames: [String] = imageArray.map({
    $0[0] as! String
})
The map function iterates through all array entries like a for-in loop.
The statement in the closure determines the corresponding entry of your new imageNames array.
EDIT:
If you don't use Swift arrays:
var imageNames: NSMutableArray = []
for image in imageArray {
    imageNames.addObject(image[0] as! String)
}
I looked at the tutorial and I think this will help, together with the answer from Daniel. Keep Daniel's answer and use the URLs from the array with the extension below.
Add this to create your images from a URL:
extension UIImageView {
    public func imageFromUrl(urlString: String) {
        if let url = NSURL(string: urlString) {
            let request = NSURLRequest(URL: url)
            NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue.mainQueue()) {
                (response: NSURLResponse!, data: NSData!, error: NSError!) -> Void in
                if data != nil {
                    self.image = UIImage(data: data)
                } else {
                    self.image = UIImage()
                }
            }
        }
    }
}
Use this in the PagedImageScrollViewController:
newPageView.imageFromUrl(pageImages[page])
pageImages is your array of Strings.

Swift array: how to append element references in order to update the elements later on from an async request?

[UPDATE]
I have added an actual code snippet in order to make my question clear.
Say we want to store UIImages, fetched from the internet, into an array.
I have this code snippet:
// Somewhere in a loop
{
    var story = Story()
    story.imgUrl = "http:\(imgUrl)"

    /// Download the image, and replace it at the top
    if let imgUrl = story.imgUrl {
        if let url = NSURL(string: imgUrl) {
            let request = NSURLRequest(URL: url)
            NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue.mainQueue()) {
                (response, data, error) -> Void in
                if let data = data {
                    story.image = UIImage(data: data)
                    var i = 0
                    for a in self.Stories {
                        print("iv image \(i++) is \(a.image)")
                    }
                    print("Received image for story as \(story.image) into story \(story)")
                    // Should one set the image view?
                    if let _ = self.imageView {
                        if let indexPath = self.tableView?.indexPathForSelectedRow {
                            if stories.count == indexPath.section { // is this the currently selected section?
                                self.imageView?.image = story.image
                                self.imageView?.hidden = false
                                print("Set imageView with story \(story.tag.string)")
                            }
                        }
                    }
                }
            }
        }
    }
    stories.append(story)
}

/// Switch the data source
self.Stories = stories
This doesn't store the image property value in the destination array.
Though the image is fine inside the block, if I iterate through the destination array, the image is nil:
image: Optional(<UIImage: 0x7ff14b6e4b20> size {100, 67} orientation 0 scale 1.000000))
iv image 0 is nil
How can I achieve the above functionality?
[INITIAL QUESTION]
Say we want to store elements, i.e. UIImages, which I have fetched from the internet. I have this code snippet:
var array = []
let imageView = UIImageView(image: nil)
array.append(imageView)

// and later on, in an async block
imageView.image = image
This doesn't store the image property value in the array.
How could I do that?
Gotcha!
The point is that Story was defined as a struct, and structs are passed by value, unlike classes.
To make the code work, I just changed struct Story {} to class Story {}, and voila!
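A minimal illustration of the difference (a hypothetical cut-down Story, not the full type from the question):
struct StoryStruct { var image: UIImage? = nil }
class  StoryClass  { var image: UIImage? = nil }

var s = StoryStruct()
let structs = [s]        // the array stores a *copy* of the struct
s.image = UIImage()
print(structs[0].image)  // nil: the copy in the array never sees the update

let c = StoryClass()
let classes = [c]        // the array stores a *reference*
c.image = UIImage()
print(classes[0].image)  // non-nil: same instance, so the update is visible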

Loading image in Swift from Parse

I'm successfully pulling data from Parse into Swift, but my images don't seem to be working the way I'm doing it.
In my cellForRowAtIndexPath method, I do the following:
var event: AnyObject? = eventContainerArray[indexPath.row]
if let unwrappedEvent: AnyObject = event {
    let eventTitle = unwrappedEvent["title"] as? String
    let eventDate = unwrappedEvent["date"] as? String
    let eventDescription = unwrappedEvent["description"] as String
    let eventImage = unwrappedEvent["image"] as? UIImage
    println(eventImage)
    if eventImage != nil {
        cell.loadItem(date: eventDate!, title: eventTitle!, description: eventDescription, image: eventImage!)
    } else {
        let testImage: UIImage = UIImage(named: "test-image.png")!
        cell.loadItem(date: eventDate!, title: eventTitle!, description: eventDescription, image: testImage)
    }
}
return cell
}
I'm using println() with my PFQuery, and I see this as part of the object being loaded: image = "<PFFile: 0x7fee62420b00>";
So the title, date, description, etc. all load fine as part of the eventContainerArray above, but eventImage is nil every time. In the code above, it always falls back to loading test-image.png because the image comes up nil. Am I handling that PFFile improperly? Not sure why it's not working.
Thanks!
Your image is likely a PFFile object. You will need to load it to get a UIImage out of it.
let userImageFile = unwrappedEvent["image"] as PFFile
userImageFile.getDataInBackgroundWithBlock {
    (imageData: NSData!, error: NSError!) -> Void in
    if error == nil {
        let eventImage = UIImage(data: imageData)
        // do something with the image here
    }
}
I'm not familiar with Parse, but given what you're seeing from println(), it looks like what you've received isn't a UIImage but some sort of file wrapper. This makes sense; you aren't going to receive Objective-C objects over the network. So you are getting a file, but the conditional downcast (x as? UIImage) returns nil because you don't have a UIImage.
Instead you're going to need to cast to PFFile, and then probably create a UIImage with UIImage(data: file.getData) or something similar. I'm not familiar with exactly how Parse works.
Edit: here's a related question that might be helpful: I can't get image from PFFile
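Putting both answers together, this is roughly what the cellForRowAtIndexPath branch could look like (my sketch, reusing the names from the question; real code would also need to guard against the cell being reused before the download finishes):
if let userImageFile = unwrappedEvent["image"] as? PFFile {
    userImageFile.getDataInBackgroundWithBlock {
        (imageData: NSData!, error: NSError!) -> Void in
        // The bytes arrive asynchronously; only then can the UIImage be built.
        if error == nil && imageData != nil {
            if let eventImage = UIImage(data: imageData) {
                cell.loadItem(date: eventDate!, title: eventTitle!, description: eventDescription, image: eventImage)
            }
        }
    }
}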
