iOS: Preserve JFIF/EXIF data when saving to camera roll

Original Problem
I am trying to save an image into the camera roll while preserving all the original EXIF/JFIF data. I'd like the image as saved to the camera roll to be identical to the original file byte-for-byte. When I save the image via [ALAssetsLibrary writeImageDataToSavedPhotosAlbum:metadata:completionBlock:], the original EXIF data is preserved, but JFIF is stripped out.
ALAssetsLibrary *library = [[[ALAssetsLibrary alloc] init] autorelease];
[library writeImageDataToSavedPhotosAlbum:imageData metadata:nil completionBlock:^(NSURL *assetURL, NSError *error) {
    [self imageDidFinishSavingToCameraRollWithError:error];
}];
I tried using the iphone-exif project to parse out the JFIF data and explicitly pass it in via the metadata parameter:
EXFJpeg *jpegScanner = [[EXFJpeg alloc] init];
[jpegScanner scanImageData:imageData];
EXFJFIF *jfif = [jpegScanner jfif];
NSMutableDictionary *jfifMetadata = [[NSMutableDictionary alloc] init];
[jfifMetadata setObject:[jfif version] forKey:(NSString *)kCGImagePropertyJFIFVersion];
...
NSMutableDictionary *metadata = [[NSMutableDictionary alloc] init];
[metadata setObject:jfifMetadata forKey:(NSString*)kCGImagePropertyJFIFDictionary];
But doing so results in the same file, byte-for-byte, as passing a nil metadata dictionary, meaning the JFIF data is still being stripped by iOS.
According to Wikipedia:
Formally, the Exif and JFIF standards are incompatible. This is because both specify that their particular application segment (APP0 for JFIF, APP1 for Exif) must be the first in the image file. In practice, many programs and digital cameras produce files with both application segments included. This will not affect the image decoding for most decoders, but poorly designed JFIF or Exif parsers may not recognise the file properly.
Is there any way to save the original file to the camera roll byte-for-byte, or will iOS always ignore JFIF data if EXIF is present?
Update: 2/26/13
I've filed <rdar://13291591> with Apple. I also created a sample project demonstrating the issue: http://spolet.to/3J0l1u3w0R2e
The sample app has three buttons: one for a JPEG with JFIF data, one for a JPEG without JFIF data, and one for a PNG. When the button is tapped, the corresponding image will be hashed and saved to the photo library. The resulting ALAsset will then be hashed as well.
Results:
The ALAsset for the saved PNG has the same hash as the original file.
The ALAssets for the saved JPEGs do not have the same hashes as their original counterparts.
Digging into this, it appears that the 'Brightness Value', 'Components Configuration', 'Thumbnail Length' and 'Thumbnail Image' attributes within the ALAsset EXIF data have been modified by the OS. Further, the original JFIF data (for the image with JFIF) has been stripped.
Expected Results:
All three files should have the same hash when saved into the photo library.

Related

Crop Captured RAW Photo and save to file iOS [duplicate]

I want to build an iOS 10 app that lets you shoot a RAW (.dng) image, edit it, and then save the edited .dng file to the camera roll. By combining code from Apple's 2016 "AVCamManual" and "RawExpose" sample apps, I've gotten to the point where I have a CIFilter containing the RAW image along with the edits.
However, I can't figure out how to save the resulting CIImage to the camera roll as a .dng file. Is this possible?
A RAW file is "raw" output direct from a camera sensor, so the only way to get it is directly from a camera. Once you've processed a RAW file, what you have is by definition no longer "raw", so you can't go back to RAW.
To extend the metaphor presented at WWDC where they introduced RAW photography... a RAW file is like the ingredients for a cake. When you use Core Image to create a viewable image from the RAW file, you're baking the cake. (And as noted, there are many different ways to bake a cake from the same ingredients, corresponding to the possible options for processing RAW.) But you can't un-bake a cake — there's no going back to original ingredients, much less a way that somehow preserves the result of your processing.
Thus, the only way to store an image processed from a RAW original is to save the processed image in a bitmap image format. (Use JPEG if you don't mind lossy compression, PNG or TIFF if you need lossless, etc.)
If you're writing the results of an edit to PHPhotoLibrary, use JPEG (high quality / less compressed if you prefer), and Photos will store your edit as a derived result, allowing the user to revert to the RAW original. You can also describe the set of filters you applied in PHAdjustmentData saved with your edit — with adjustment data, another instance of your app (or Photos app extension) can reconstruct the edit using the original RAW data plus the filter settings you save, then allow a user to alter the filter parameters to create a different processed image.
Note: There is a version of the DNG format called Linear DNG that supports non-RAW (or "not quite RAW") images, but it's rather rare in practice, and Apple's imaging stack doesn't support it.
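To make the Photos-framework flow described above concrete, here is a minimal sketch of committing an edit together with PHAdjustmentData. This is an illustration, not rickster's code: `asset` (a PHAsset), `jpegData` (the rendered edit), `settingsData` (your serialized filter parameters), and the format identifier are all assumed placeholders.
import Photos

// Sketch: save a rendered edit plus adjustment data so the RAW original survives.
asset.requestContentEditingInput(with: nil) { input, _ in
    guard let input = input else { return }
    let output = PHContentEditingOutput(contentEditingInput: input)
    // Hypothetical identifier/version; use your own reverse-DNS string.
    output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.raw-edit",
                                             formatVersion: "1.0",
                                             data: settingsData)
    try? jpegData.write(to: output.renderedContentURL)
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest(for: asset).contentEditingOutput = output
    }, completionHandler: nil)
}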
Unfortunately DNG isn't supported as an output format in Apple's ImageIO framework. See the output of CGImageDestinationCopyTypeIdentifiers() for a list of supported output types:
(
"public.jpeg",
"public.png",
"com.compuserve.gif",
"public.tiff",
"public.jpeg-2000",
"com.microsoft.ico",
"com.microsoft.bmp",
"com.apple.icns",
"com.adobe.photoshop-image",
"com.adobe.pdf",
"com.truevision.tga-image",
"com.sgi.sgi-image",
"com.ilm.openexr-image",
"public.pbm",
"public.pvr",
"org.khronos.astc",
"org.khronos.ktx",
"com.microsoft.dds",
"com.apple.rjpeg"
)
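(The exact list varies by OS version. If you want to check on your own device, a short Swift snippet will print it:)
import ImageIO

// Prints the UTIs that CGImageDestination can write on the current OS.
for uti in CGImageDestinationCopyTypeIdentifiers() as NSArray {
    print(uti)
}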
This answer comes late, but it may help others with the problem. This is how I saved a raw photo to the camera roll as a .dng file.
Just to note, I captured the photo using the camera with AVFoundation.
import Photos
import AVFoundation
// Read the photo data in as a Data object
let photoData = AVCapturePhotoOutput.dngPhotoDataRepresentation(forRawSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)

// Put it into a temporary file
let temporaryDNGFileURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("temp.dng")
try! photoData?.write(to: temporaryDNGFileURL)

// Get access to the photo library
PHPhotoLibrary.requestAuthorization { status in
    if status == .authorized {
        // Perform changes to the library
        PHPhotoLibrary.shared().performChanges({
            let options = PHAssetResourceCreationOptions()
            options.shouldMoveFile = true
            // Write the raw DNG as a new photo asset
            PHAssetCreationRequest.forAsset().addResource(with: .photo, fileURL: temporaryDNGFileURL, options: options)
        }, completionHandler: { success, error in
            if let error = error { print(error) }
        })
    } else {
        print("Can't access photo album")
    }
}
Hope it helps.
The only way to get DNG data as of the writing of this response (iOS 10.1) is:
AVCapturePhotoOutput.dngPhotoDataRepresentation(forRawSampleBuffer: CMSampleBuffer, previewPhotoSampleBuffer: CMSampleBuffer?)
Note that the OP refers to Core Image. As rickster mentioned, CI works on processed image data and therefore only offers processed image results (JPEG, TIFF):
CIContext.writeJPEGRepresentation(of:to:colorSpace:options:)
CIContext.writeTIFFRepresentation(of:to:format:colorSpace:options:)
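For example, a minimal sketch of the JPEG variant, assuming `ciImage` is your processed CIImage and `outputURL` is a writable file URL (both placeholders):
import CoreImage

let context = CIContext()
// sRGB is a common choice; use the color space that matches your pipeline.
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
do {
    try context.writeJPEGRepresentation(of: ciImage, to: outputURL, colorSpace: colorSpace)
} catch {
    print("JPEG write failed: \(error)")
}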

get image metadata without decoding image data (iOS)

I have an image file in my app directory. It was not encoded with the iOS default encoder but with our own encoders.
The format can be JPEG (EXIF) or TIFF. The files have been checked for compliance with the standards, so for this post we can treat them as any image.
I want to get metadata of the image but don't want to decode it. I have been suggested:
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)localImageData, nil);
CFDictionaryRef cfMetadata = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, nil);
NSDictionary *metaDataDic = [NSDictionary dictionaryWithDictionary:(__bridge_transfer NSDictionary *)cfMetadata];
CFRelease(imageSourceRef); // balance the Create above; __bridge (not __bridge_retained) avoids leaking localImageData
What I want to know is 'does this decode the image?'
I know there is an incremental option which is used when transferring images to server. Is there a way to stop the decoding after it gets the EXIF info?
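For what it's worth, a hedged Swift sketch of the same approach, passing kCGImageSourceShouldCache to discourage caching of decoded pixels (assumes `imageData` holds the file bytes; whether header parsing alone counts as "decoding" is exactly the open question here):
import ImageIO

// Copying properties reads the container headers/metadata; no CGImage is created here.
let options = [kCGImageSourceShouldCache as String: false] as CFDictionary
if let source = CGImageSourceCreateWithData(imageData as CFData, options),
   let props = CGImageSourceCopyPropertiesAtIndex(source, 0, options) as? [String: Any] {
    print(props[kCGImagePropertyExifDictionary as String] ?? "no EXIF found")
}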

Download Image with AFHTTP and save it to ALAsset

Does anyone know how to save an image to the Asset Library? I know that by saving it to the Asset Library, the image will also be available in the iPad's Gallery.
I know how to get the file:
AFHTTPRequestOperation *requestOperation = [[AFHTTPRequestOperation alloc] initWithRequest:downloadRequest];
[requestOperation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    // How to make the filepath?
    operation.outputStream = [NSOutputStream outputStreamToFileAtPath:filePath append:NO];
    [operation.responseData writeToFile:filePath atomically:YES];
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    NSLog(@"Download failed: %@", error);
}];
In order to write an image into the assets library, you can use any of the three methods below (defined in class ALAssetsLibrary):
– writeImageToSavedPhotosAlbum:orientation:completionBlock:
– writeImageDataToSavedPhotosAlbum:metadata:completionBlock:
– writeImageToSavedPhotosAlbum:metadata:completionBlock:
This requires that your image is represented as a UIImage, a CGImageRef, or an NSData object.
For example, you might save your image using an NSOutputStream associated with a temporary file. Then, create and initialize a UIImage with that file and write it into the assets library using one of the methods above.
Alternatively, load the image as an NSData object (if it fits comfortably into memory) and again use the appropriate method above, along with the meta information.
See also: ALAssetsLibrary Class Reference
Edit:
If you want to get more information about how to setup the metadata parameter, you may find this blog post valuable: Adding metadata to iOS images the easy way.
And if you want to add Geolocations to your image metadata on SO: Saving Geotag info with photo on iOS4.1
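ALAssetsLibrary has since been deprecated (iOS 9). For comparison, a minimal sketch of the same "save the downloaded bytes" idea with the Photos framework, assuming `imageData` is the downloaded Data (placeholder name):
import Photos

PHPhotoLibrary.shared().performChanges({
    // Creating the asset from the original bytes keeps the file's metadata intact.
    let request = PHAssetCreationRequest.forAsset()
    request.addResource(with: .photo, data: imageData, options: nil)
}, completionHandler: { success, error in
    if let error = error { print("Save failed: \(error)") }
})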

When should I use UIImageJPEGRepresentation and UIImagePNGRepresentation for uploading different image formats to the server?

In my application I have to send images of different formats to the server (it must be all file formats that can be read by the UIImage class) https://developer.apple.com/library/ios/#documentation/uikit/reference/UIImage_Class/Reference/Reference.html
And the problem is: I don't know when I should use each of these methods. Of course, it's obvious that for .png images I need to use UIImagePNGRepresentation and for .jpg/.jpeg UIImageJPEGRepresentation. But what about other formats (.tiff, .gif, etc.)? There are only two methods for image manipulation, yet so many formats.
You say:
Of course it's obvious that for .png images I need to use UIImagePNGRepresentation and for .jpg/.jpeg UIImageJPEGRepresentation.
No, that's not necessarily the case. If you have some original "digital asset", rather than creating a UIImage and then using one of those two functions to create the NSData that you'll upload, you will often just load the NSData from the original asset and bypass the round-trip through a UIImage entirely. If you do this, you don't risk the loss of data that converting to a UIImage, and then back again, can cause.
There are some additional considerations, though:
Meta data:
These UIImageXXXRepresentation functions strip the image of its metadata. Sometimes that's a good thing (e.g. you don't want uploaded photos of your children or expensive gadgets to include the GPS locations where malcontents could identify where the shot was taken). In other cases, you don't want the metadata to be thrown away (e.g. the date of the original shot, which camera was used, etc.).
You should make an explicit decision as to whether you want the metadata stripped or not. If not, don't round-trip your image through a UIImage; rather, use the original asset.
Image quality loss and/or file size considerations:
I'm particularly not crazy about UIImageJPEGRepresentation because it uses lossy compression. If you use a compressionQuality value smaller than 1.0, you can lose some image quality (modest quality loss for values close to 1.0, more significant loss with lower values). And if you use a compressionQuality of 1.0, you mitigate much of the JPEG quality loss, but the resulting NSData can often be bigger than the original asset (at least if the original was, itself, a compressed JPEG or PNG), resulting in slower uploads.
UIImagePNGRepresentation doesn't introduce compression-based data loss, but depending upon the image, you may still lose data (e.g. if the original file was a 48-bit TIFF or used a colorspace other than sRGB).
It's a question of whether you are ok with some image quality loss and/or larger file size during the upload process.
Image size:
Sometimes you don't want to upload the full resolution image. For example, you might be using a web service that wants images no bigger than 800px per side. Or if you're uploading a thumbnail, they might want something even smaller (e.g. 32px x 32px). By resizing images, you can make the upload much smaller and thus much faster (though with obvious quality loss). But if you use an image resizing algorithm, then creating a PNG or JPEG using these UIImageXXXRepresentation functions would be quite common.
In short, if I'm trying to minimize the data/quality loss, I would upload the original asset if it's in a format that the server accepts, and I'd use UIImagePNGRepresentation (or UIImageJPEGRepresentation with a quality setting of 1.0) if the original asset was not in a format accepted by the server. But the choice of using these UIImageXXXRepresentation functions is a question of your business requirements and what the server accepts.
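To put that decision in code form, a rough sketch (the helper and the `serverAcceptsOriginalFormat` flag are made up for illustration):
import UIKit

// Prefer the original bytes; re-encode only when the server can't accept them.
func dataToUpload(originalData: Data, serverAcceptsOriginalFormat: Bool) -> Data? {
    if serverAcceptsOriginalFormat {
        return originalData // no round-trip: metadata and quality preserved
    }
    guard let image = UIImage(data: originalData) else { return nil }
    // Lossless re-encode, but metadata is stripped and the file may grow.
    return UIImagePNGRepresentation(image)
}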
Rob points out a lot of very good things to consider when working with images (+1); however, here is an example of how to create TIFFs and GIFs, as you asked:
First, you need to link against the ImageIO framework (under the Build Phases of your app).
Next you need to #import <ImageIO/ImageIO.h> at the top of your file.
Then, the following code will convert the image for you:
// Get a reference to the image that you already have stored on disk somehow.
// If it isn't stored on disk, then you can use CGImageSourceCreateWithData() to create it from an NSData representation of your image.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"01" withExtension:@"jpg"];
CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);

// Create a URL referencing the Application Support directory. We will save the new image there.
NSFileManager *fm = [NSFileManager defaultManager];
NSURL *suppurl = [fm URLForDirectory:NSApplicationSupportDirectory
                            inDomain:NSUserDomainMask
                   appropriateForURL:nil
                              create:YES
                               error:NULL];

// Append the name of the output file to the app support directory.
// For TIFF, change the extension in the next line to .tiff.
NSURL *gifURL = [suppurl URLByAppendingPathComponent:@"mytiff.gif"];

// Create the destination for the new image.
// For TIFF, use @"public.tiff" as the second argument in the next line (instead of @"com.compuserve.gif").
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)gifURL,
                                                             (__bridge CFStringRef)@"com.compuserve.gif",
                                                             1,
                                                             NULL);
CGImageDestinationAddImageFromSource(dest, src, 0, NULL);

// Write the image data to the URL.
bool ok = CGImageDestinationFinalize(dest);
if (!ok)
    NSLog(@"Unable to create gif file.");

// Cleanup
CFRelease(src);
CFRelease(dest);
This was adapted from the code in this book.

iOS saving photo to Camera Roll does not preserve EXIF/GPS metadata

I know a UIImage can be saved into Camera Roll with UIImageWriteToSavedPhotosAlbum, but this approach strips all metadata from the original file (EXIF, GPS data, etc). Is there any way to save the original file, rather than just the image data into the iOS device's Camera Roll?
Edit: I guess I should have been a bit more specific. The aim is to save a duplicate of an existing JPEG file into a user's Camera Roll. What's the most efficient way to do this?
Depending on how you have your image to save, you can choose one of the methods provided by ALAssetsLibrary.
– writeImageDataToSavedPhotosAlbum:metadata:completionBlock:
– writeImageToSavedPhotosAlbum:metadata:completionBlock:
(depending on whether you have the image as an actual UIImage or as NSData)
http://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAssetsLibrary_Class/Reference/Reference.html
Note that you have to set the correct keys in the metadata dictionary, or the values might not be saved correctly.
Here is an example for the GPS information:
NSDictionary *gpsDict = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithFloat:fabs(loc.coordinate.latitude)], kCGImagePropertyGPSLatitude,
                         ((loc.coordinate.latitude >= 0) ? @"N" : @"S"), kCGImagePropertyGPSLatitudeRef,
                         [NSNumber numberWithFloat:fabs(loc.coordinate.longitude)], kCGImagePropertyGPSLongitude,
                         ((loc.coordinate.longitude >= 0) ? @"E" : @"W"), kCGImagePropertyGPSLongitudeRef,
                         [formatter stringFromDate:[loc timestamp]], kCGImagePropertyGPSTimeStamp,
                         [NSNumber numberWithFloat:fabs(loc.altitude)], kCGImagePropertyGPSAltitude,
                         nil];
And here is a list of the keys:
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Reference/CGImageProperties_Reference/Reference/reference.html#//apple_ref/doc/uid/TP40005103
UIImagePickerControllerDelegate is what you're looking for.
Starting in iOS 4.0, you can save still-image metadata, along with a still image, to the Camera Roll. To do this, use the writeImageToSavedPhotosAlbum:metadata:completionBlock: method of the Assets Library framework. See the description for the UIImagePickerControllerMediaMetadata key.
UIImagePickerControllerDelegate Protocol Reference
For a Swift solution that uses the Photos API (ALAssetLibrary is deprecated in iOS 9), you can see my solution to this problem here, including sample code.
With the Photos API, the key thing to note is that the .location property of a PHAsset does NOT embed the CLLocation metadata into the file itself, so using an EXIF viewer will not turn up any results.
To get around this, you must embed any metadata changes directly into the Data of the image itself before writing it to the camera roll using the Photos API (or, for iOS versions prior to 9, you must write a temporary file using the Data with the embedded metadata and create the PHAsset from the file's URL).
Also note that the act of converting image Data to a UIImage appears to strip metadata, so be careful with that.
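A rough sketch of that embedding step using ImageIO; the function and its inputs are illustrative, with `metadata` being a CGImageProperties-style dictionary:
import ImageIO

func embed(metadata: [String: Any], in imageData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let type = CGImageSourceGetType(source) else { return nil }
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, type, 1, nil) else { return nil }
    // Re-wrap the original image, applying the merged properties.
    CGImageDestinationAddImageFromSource(destination, source, 0, metadata as CFDictionary)
    return CGImageDestinationFinalize(destination) ? output as Data : nil
}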

Resources