Crop Captured RAW Photo and save to file iOS [duplicate]

I want to build an iOS 10 app that lets you shoot a RAW (.dng) image, edit it, and then save the edited .dng file to the camera roll. By combining code from Apple's 2016 "AVCamManual" and "RawExpose" sample apps, I've gotten to the point where I have a CIFilter containing the RAW image along with the edits.
However, I can't figure out how to save the resulting CIImage to the camera roll as a .dng file. Is this possible?

A RAW file is "raw" output direct from a camera sensor, so the only way to get it is directly from a camera. Once you've processed a RAW file, what you have is by definition no longer "raw", so you can't go back to RAW.
To extend the metaphor presented at WWDC where they introduced RAW photography... a RAW file is like the ingredients for a cake. When you use Core Image to create a viewable image from the RAW file, you're baking the cake. (And as noted, there are many different ways to bake a cake from the same ingredients, corresponding to the possible options for processing RAW.) But you can't un-bake a cake — there's no going back to original ingredients, much less a way that somehow preserves the result of your processing.
Thus, the only way to store an image processed from a RAW original is to save the processed image in a bitmap image format. (Use JPEG if you don't mind lossy compression, PNG or TIFF if you need lossless, etc.)
If you're writing the results of an edit to PHPhotoLibrary, use JPEG (high quality / less compressed if you prefer), and Photos will store your edit as a derived result, allowing the user to revert to the RAW original. You can also describe the set of filters you applied in PHAdjustmentData saved with your edit — with adjustment data, another instance of your app (or Photos app extension) can reconstruct the edit using the original RAW data plus the filter settings you save, then allow a user to alter the filter parameters to create a different processed image.
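For instance, a minimal sketch of committing such an edit via PHContentEditingOutput (the format identifier, the serialized settings, and the renderedJPEG data are all placeholders, not Apple-prescribed values):

import Photos

// A minimal sketch, assuming `asset` is the PHAsset being edited,
// `renderedJPEG` is your processed JPEG data, and `encodedFilterSettings`
// is your own serialization of the filter parameters (both hypothetical).
asset.requestContentEditingInput(with: nil) { input, _ in
    guard let input = input else { return }
    let output = PHContentEditingOutput(contentEditingInput: input)
    // Record which filters you applied so the edit can be reconstructed later.
    output.adjustmentData = PHAdjustmentData(
        formatIdentifier: "com.example.rawedit",  // hypothetical identifier
        formatVersion: "1.0",
        data: encodedFilterSettings)
    // Photos requires a rendered (processed) result alongside the adjustment data.
    try? renderedJPEG.write(to: output.renderedContentURL)

    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetChangeRequest(for: asset)
        request.contentEditingOutput = output
    }, completionHandler: nil)
}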
Note: There is a version of the DNG format called Linear DNG that supports non-RAW (or "not quite RAW") images, but it's rather rare in practice, and Apple's imaging stack doesn't support it.

Unfortunately DNG isn't supported as an output format in Apple's ImageIO framework. See the output of CGImageDestinationCopyTypeIdentifiers() for a list of supported output types:
(
"public.jpeg",
"public.png",
"com.compuserve.gif",
"public.tiff",
"public.jpeg-2000",
"com.microsoft.ico",
"com.microsoft.bmp",
"com.apple.icns",
"com.adobe.photoshop-image",
"com.adobe.pdf",
"com.truevision.tga-image",
"com.sgi.sgi-image",
"com.ilm.openexr-image",
"public.pbm",
"public.pvr",
"org.khronos.astc",
"org.khronos.ktx",
"com.microsoft.dds",
"com.apple.rjpeg"
)
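For reference, you can dump this list on your own device, since the supported set can change between OS versions:

import ImageIO

// Print the output formats ImageIO can write on the current OS version.
let identifiers = CGImageDestinationCopyTypeIdentifiers() as NSArray
print(identifiers)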

This answer comes late, but it may help others with the problem. This is how I saved a raw photo to the camera roll as a .dng file.
Just to note, I captured the photo using the camera with AVFoundation.
import AVFoundation
import Photos

// Get the DNG file data from the RAW sample buffers delivered by the
// AVCapturePhotoCaptureDelegate callback.
let photoData = AVCapturePhotoOutput.dngPhotoDataRepresentation(
    forRawSampleBuffer: sampleBuffer,
    previewPhotoSampleBuffer: previewPhotoSampleBuffer)

// Put it into a temporary file; Photos imports DNG from a file URL.
let temporaryDNGFileURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("temp.dng")
try? photoData?.write(to: temporaryDNGFileURL)

// Get access to the photo library, then import the file.
PHPhotoLibrary.requestAuthorization { status in
    guard status == .authorized else {
        print("Can't access the photo library")
        return
    }
    // Perform changes to the library.
    PHPhotoLibrary.shared().performChanges({
        let options = PHAssetResourceCreationOptions()
        // Move (rather than copy) the temp file into the library.
        options.shouldMoveFile = true
        // Write the RAW file as the asset's photo resource.
        PHAssetCreationRequest.forAsset()
            .addResource(with: .photo, fileURL: temporaryDNGFileURL, options: options)
    }, completionHandler: { _, error in
        if let error = error { print(error) }
    })
}
Hope it helps.

The only way to get DNG data as of the writing of this response (iOS 10.1) is:
AVCapturePhotoOutput.dngPhotoDataRepresentation(forRawSampleBuffer: CMSampleBuffer, previewPhotoSampleBuffer: CMSampleBuffer?)
Note that the OP refers to Core Image. As rickster mentioned, CI works on processed image data, and therefore only offers processed image results (JPEG, TIFF):
CIContext.writeJPEGRepresentation(of:to:colorSpace:options:)
CIContext.writeTIFFRepresentation(of:to:format:colorSpace:options:)
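For example, a minimal sketch of writing a processed CIImage out as JPEG (filteredImage and outputURL are placeholders for your own values):

import CoreImage

// A minimal sketch: render the filtered CIImage to a JPEG file.
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
try? context.writeJPEGRepresentation(of: filteredImage,
                                     to: outputURL,
                                     colorSpace: colorSpace,
                                     options: [:])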

Related

WebP encodedData loads for 30+ seconds

iOS version: 13.1
iPhone: X
I'm currently using DBAttachmentPickerController to choose from a variety of images; the problem comes when I take a picture directly from the camera and try to upload it to our server. The SDImageWebPCoder.shared.encodedData call takes about 30 seconds, more or less. The same image in the Android app takes about 2-3 seconds.
Here is the code I use
let attachmentPickerController = DBAttachmentPickerController(finishPicking: { attachmentArray in
    self.images = attachmentArray
    var currentImage = UIImage()
    self.images[0].loadOriginalImage(completion: { image in
        self.userImage.image = image
        currentImage = image!
    })
    // We transform it to WebP
    let webpData = SDImageWebPCoder.shared.encodedData(with: currentImage, format: .webP, options: nil)
    self.api.editImageUser(data: webpData!)
}, cancel: nil)
attachmentPickerController.mediaType = DBAttachmentMediaType.image
attachmentPickerController.allowsSelectionFromOtherApps = true
attachmentPickerController.present(on: self)
Should I change the Pod I'm using? Should I just compress it? Or am I doing something wrong?
WebP encoding is relatively slow: it uses software encoding with the (complicated) VP8 compression algorithm, compared to the hardware-accelerated JPEG/PNG encoding on Apple's SoCs.
picture directly from the camera
The original image taken by the iPhone camera may be really large, like 4K resolution. If you don't pre-scale it before encoding, it can take much more time.
The suggestions can be like this:
Try the options such as compressionQuality; a higher value costs more time but compresses more. By default it's 1.0, which is the highest and most time-consuming.
Try to pre-scale the original image. For images from the Photos library, you can always use the API to control the size. Or you can use SDWebImage's transform method, like -[UIImage sd_resizedImage:].
Do all the encoding on a background thread; never block the main thread (see the sketch after this list).
If none of these is suitable, the better solution is to use the JPEG or PNG format instead of WebP, then transcode the JPEG/PNG to WebP in your image server's code. Server-side processing is always the best idea for this kind of thing.
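Putting the pre-scale and background-thread suggestions together, a minimal sketch might look like this (the 1280px cap, the 0.8 quality, and the uploadAsWebP helper name are all assumptions, not part of the original code):

import UIKit
import SDWebImageWebPCoder

// A sketch, not a drop-in fix: pre-scale, then encode WebP off the main thread.
func uploadAsWebP(_ image: UIImage, maxDimension: CGFloat = 1280,
                  completion: @escaping (Data?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Pre-scale: a 4K camera image encodes far slower than a ~1280px one.
        let scale = min(1, maxDimension / max(image.size.width, image.size.height))
        let targetSize = CGSize(width: image.size.width * scale,
                                height: image.size.height * scale)
        let scaled = UIGraphicsImageRenderer(size: targetSize).image { _ in
            image.draw(in: CGRect(origin: .zero, size: targetSize))
        }
        // A lower compressionQuality trades file size for encoding speed.
        let data = SDImageWebPCoder.shared.encodedData(
            with: scaled, format: .webP,
            options: [.encodeCompressionQuality: 0.8])
        DispatchQueue.main.async { completion(data) }
    }
}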
If you're interested in a real benchmark comparing JPEG/PNG (hardware) and WebP (software), you can try my benchmark demo here to help you make your decision.
https://github.com/dreampiggy/ModernImageFormatBenchmark

Detecting that iOS image data is HEIF or HEIC

My server doesn't support the HEIF format. So I need to transform it to JPEG before uploading from my app.
I do this:
UIImage *image = [UIImage imageWithData:imageData];
NSData *data = UIImageJPEGRepresentation(image, 1.0);
But how can I know that the data is HEIF (or HEIC)? I can look at a file:
([filePath hasSuffix:@".HEIC"] || [filePath hasSuffix:@".heic"])
But I don't think it's a good answer. Is there any other solution?
Both existing answers have good recommendations, but to attempt to tell the whole story...
UIImage doesn't represent an image file or even binary data in an image-file format. A UIImage is best thought of as an abstract representation of the displayable image encoded in that data — that is, a UIImage is the result of the decoding process. By the time you have a UIImage object, it doesn't care what file format it came from.
So, as #Ladislav's answer notes, if you have a UIImage already and you just want to get data in a particular image file format, call one of the convenience functions for converting a UIImage into file-formatted data. As its name might suggest, UIImageJPEGRepresentation returns data appropriate for writing to a JPEG file.
If you already have a UIImage, UIImageJPEGRepresentation is probably your best bet, since you can use it regardless of the original image format.
As #ScottCorscadden implies, if you don't have a UIImage (yet) because you're working at a lower level such that you have access to the original file data, then you'll need to inspect that data to divine its format, or ask whatever source you got the data from for metadata describing its format.
If you want to inspect the data itself, you're best off reading up on the HEIF format standards. See nokiatech, MPEG group, or wikipedia.
There's a lot going on in the HEIF container format and the possible kinds of media that can be stored within, so deciding if you have not just a HEIF file, but an HEIF/HEVC file compatible with this-or-that viewer could be tricky. Since you're talking about excluding things your server doesn't support, it might be easier to code from the perspective of including only the things that your server does support. That is, if you have data with no metadata, look for something like the JPEG magic number 0xffd8ff, and use that to exclude anything that isn't JPEG.
Better, though, might be to look for metadata. If you're picking images from the Photos library with PHImageManager.requestImageData(for:options:resultHandler:), the second parameter to your result handler is the Uniform Type Identifier for the image data: for HEIF and HEIC files, public.heif, public.heif-standard, and public.heic have been spotted in the wild.
(Again, though, if you're looking for "images my sever doesn't support", you're better off checking for the formats your server does support and rejecting anything not on that list, rather than trying to identify all the possible unsupported formats.)
When you are sending the image to your server, you are most likely decoding the UIImage and sending it as Data, so just do
let data = UIImageJPEGRepresentation(image, 0.9)
Just decide what quality works best for you; here it is 0.9.
A bit late to the party, but other than checking the extension (after the last dot), you can also check for the "magic number", aka the file signature. Bytes 5 to 8 should give you the constant "ftyp". The following 4 bytes would be the major brand, which I believe is one of "mif1", "heic" and "heix".
For example, the first 12 bytes of a .heic image would be:
00 00 00 18 66 74 79 70 6d 69 66 31
which, after removing the zeros and trimming the result, decodes literally to ftypmif1.
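A minimal sketch of that check (the brand list here is an assumption; extend it with whatever brands you need to treat as HEIF):

import Foundation

// Inspect the ISO BMFF 'ftyp' box to spot HEIF/HEIC data.
func looksLikeHEIF(_ data: Data) -> Bool {
    guard data.count >= 12 else { return false }
    // Bytes 4..7 of an ISO base media file spell "ftyp".
    guard String(data: data.subdata(in: 4..<8), encoding: .ascii) == "ftyp" else { return false }
    // Bytes 8..11 hold the major brand.
    let brand = String(data: data.subdata(in: 8..<12), encoding: .ascii)
    return ["heic", "heix", "mif1", "msf1"].contains(brand ?? "")
}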
Well, you could look at magic bytes - JPEG and PNG certainly are known, and I seem to see some references that HEIF (.heic) starts with a NUL byte. If you're using any of the PHImageManager methods like requestImageDataForAsset:options:resultHandler:, that resultHandler will be passed a NSString * _Nullable dataUTI reference. There's a decent WWDC video/slides on this (possibly here) that suggests that if the UTI is not kUTTypeJPEG, you convert it (and the slides have some lower-level sample code in Swift to do it that preserves orientation too).
I should also mention, if you have control at your app layer and all uploads come from there, do all this there.
If you're using Photos framework and are importing images from photo library, there's a solution that was mentioned briefly during WWDC17. First, import core services:
import MobileCoreServices
Then, when you request the image, check the UTType that is returned as a second parameter to your block:
// asset: PHAsset
PHImageManager.default().requestImageData(for: asset, options: nil) { imageData, dataUTI, orientation, info in
    guard let dataUTI = dataUTI else { return }
    if !(UTTypeConformsTo(dataUTI as CFString, kUTTypeJPEG) || UTTypeConformsTo(dataUTI as CFString, kUTTypePNG)) {
        // imageData is neither JPEG nor PNG; possibly subject for transcoding
    }
}
Other UTTypes can be found here

Saving DNG (raw photos) taken on another camera to the iOS Photo Library

I'm writing an app that reads DNG files from my Ricoh GR II and saves them to iOS 10's Photo Library, with code like this:
let photoLibrary = PHPhotoLibrary.shared()
photoLibrary.performChanges({
    PHAssetChangeRequest.creationRequestForAsset(from: image)
}) { (success: Bool, error: Error?) -> Void in
    if success {
        print("Saving photo ok")
    } else {
        print("Error writing to photo library: \(error!.localizedDescription)")
    }
}
And I got the error below:
ImageIO: PluginForUTType:316: file format 'com.adobe.raw-image' doesn't support writing
Error writing to photo library: The operation couldn’t be completed. (Cocoa error -1.)
I guess maybe iOS only supports DNGs taken on an iPhone?
PHAssetChangeRequest.creationRequestForAsset(from: image) starts from a UIImage, so it wouldn't get the original file into your library anyway. A UIImage is the displayable result of reading and decoding an image file; by the time you have a UIImage it doesn't know whether it came from a JPEG or a DNG or a GIF or was rendered at run time via CGBitmapContext or whatever. When you try to save via creationRequestForAssetFromImage:, you're taking that end result and turning it back into a file — whatever kind of file that method wants. (Probably JPEG.)
If you want to put an actual DNG file into the photo library, you'll need to use a Photos framework method that takes original files, not decoded images. Furthermore, since not every Photos client can deal with RAW DNGs, Photos requires that every DNG file you put in the library be accompanied by a JPEG representation that non-RAW-supporting apps (sadly, including the Photos app itself) can see.
Luckily, there's an API for that.
PHPhotoLibrary.shared().performChanges({
    let creationRequest = PHAssetCreationRequest.forAsset()
    let creationOptions = PHAssetResourceCreationOptions()
    creationOptions.shouldMoveFile = true
    creationRequest.addResource(with: .photo, data: jpegData, options: nil)
    creationRequest.addResource(with: .alternatePhoto, fileURL: dngFileURL, options: creationOptions)
}, completionHandler: completionHandler)
PHAssetCreationRequest is for creating assets from the underlying resources - one or more image files, video files, some combination thereof (for Live Photos), etc. The photo and alternatePhoto resources are how you provide a DNG file and its accompanying JPEG preview. And the shouldMoveFile option is good for if you don't want to blow your device storage from copying the file from your app sandbox into the Photos library storage — good for big resources like DNGs and 4K video.
(The code snippet is from Apple's Photo Capture guide.)
That said, while Apple's RAW processing supports images from all sorts of third-party cameras, it doesn't look like their list includes any Ricoh models. (Not even this kind.)
That doesn't prevent you from storing Ricoh DNGs in the Photos library, though — it just means the only apps that will be able to usefully read them from the library will need their own Ricoh RAW processing support to see anything but the preview JPEG.

iOS saving photo to Camera Roll does not preserve EXIF/GPS metadata

I know a UIImage can be saved into Camera Roll with UIImageWriteToSavedPhotosAlbum, but this approach strips all metadata from the original file (EXIF, GPS data, etc). Is there any way to save the original file, rather than just the image data into the iOS device's Camera Roll?
Edit: I guess I should have been a bit more specific. The aim is to save a duplicate of an existing JPEG file into a user's Camera Roll. What's the most efficient way to do this?
Depending on how you have your image to save, you can choose one of the methods provided by ALAssetsLibrary:
– writeImageDataToSavedPhotosAlbum:metadata:completionBlock:
– writeImageToSavedPhotosAlbum:metadata:completionBlock:
(Depending on if you have the image as an actual UIImage, or as NSData)
http://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAssetsLibrary_Class/Reference/Reference.html
Note that you have to set the correct keys in the dictionary, or the values might not be saved correctly.
Here is an example for the GPS information:
NSDictionary *gpsDict = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithFloat:fabs(loc.coordinate.latitude)], kCGImagePropertyGPSLatitude,
    ((loc.coordinate.latitude >= 0) ? @"N" : @"S"), kCGImagePropertyGPSLatitudeRef,
    [NSNumber numberWithFloat:fabs(loc.coordinate.longitude)], kCGImagePropertyGPSLongitude,
    ((loc.coordinate.longitude >= 0) ? @"E" : @"W"), kCGImagePropertyGPSLongitudeRef,
    [formatter stringFromDate:[loc timestamp]], kCGImagePropertyGPSTimeStamp,
    [NSNumber numberWithFloat:fabs(loc.altitude)], kCGImagePropertyGPSAltitude,
    nil];
When you pass this to the write method, nest it under the kCGImagePropertyGPSDictionary key in the top-level metadata dictionary. And here is a list of the keys:
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Reference/CGImageProperties_Reference/Reference/reference.html#//apple_ref/doc/uid/TP40005103
UIImagePickerControllerDelegate is what you're looking for.
Starting in iOS 4.0, you can save still-image metadata, along with a still image, to the Camera Roll. To do this, use the writeImageToSavedPhotosAlbum:metadata:completionBlock: method of the Assets Library framework. See the description for the UIImagePickerControllerMediaMetadata key.
UIImagePickerControllerDelegate Protocol Reference
For a Swift solution that uses the Photos API (ALAssetLibrary is deprecated in iOS 9), you can see my solution to this problem here, including sample code.
With the Photos API, the key thing to note is that the .location property of a PHAsset does NOT embed the CLLocation metadata into the file itself, so using an EXIF viewer will not turn up any results.
To get around this, you must embed any metadata changes directly into the Data of the image itself before writing it to the camera roll using the Photos API (or, for iOS versions prior to 9, you must write a temporary file using the Data with the embedded metadata and create the PHAsset from the file's URL).
Also note that the act of converting image Data to a UIImage appears to strip metadata, so be careful with that.
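A minimal sketch of the embed-then-save approach (assuming imageData is JPEG data and metadata is a CGImageProperties-style dictionary, with GPS/EXIF sub-dictionaries already nested):

import Photos
import ImageIO
import MobileCoreServices

// Embed the metadata into the image bytes themselves, then hand the
// merged data to the Photos framework.
func saveToCameraRoll(imageData: Data, metadata: [CFString: Any]) {
    let merged = NSMutableData()
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let destination = CGImageDestinationCreateWithData(merged, kUTTypeJPEG, 1, nil)
    else { return }
    // Copy the image across, attaching (or overwriting) the metadata keys.
    CGImageDestinationAddImageFromSource(destination, source, 0, metadata as CFDictionary)
    _ = CGImageDestinationFinalize(destination)

    PHPhotoLibrary.shared().performChanges({
        PHAssetCreationRequest.forAsset()
            .addResource(with: .photo, data: merged as Data, options: nil)
    }, completionHandler: nil)
}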

get yuv planar format image from camera - iOS

I am using AVFoundation to capture still images from the camera (capturing still images, not video frames) using captureStillImageAsynchronouslyFromConnection. This gives me a buffer of type CMSampleBuffer, which I am calling imageDataSampleBuffer.
As far as I have understood, this buffer can contain any type of data related to media, and the type of data is determined when I am configuring the output settings.
For the output settings, I make a dictionary with the value AVVideoCodecJPEG for the key AVVideoCodecKey.
There is no other codec option. But when I read the AVFoundation Programming Guide > Media Capture, I can see that 420f, 420v, BGRA, and jpeg are the available encoded formats supported for the iPhone 3GS (which I am using).
I want to get the yuv420 (i.e. 420v) formatted image data into the imageSampleBuffer. Is that possible?
If I print availableImageDataCodecTypes, I get only JPEG.
If I print availableImageDataCVPixelFormatTypes, I get three numbers: 875704422, 875704438, 1111970369.
Is it possible that these three numbers map to 420f, 420v, BGRA?
If yes, which key should I modify in my output settings?
I tried putting the value: [NSNumber numberWithInt:875704438] for key: (id)kCVPixelBufferPixelFormatTypeKey.
Would it work?
If yes, how do I extract this data from the imageSampleBuffer?
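For what it's worth, a sketch of both of those steps, using the era-appropriate (now long-deprecated) AVCaptureStillImageOutput; the names here are illustrative, not tested against a 3GS:

import AVFoundation

// Ask the still-image output for uncompressed 420v pixel buffers instead of JPEG:
let stillOutput = AVCaptureStillImageOutput()
stillOutput.outputSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)  // '420v'
]

// Then, in the capture completion handler, pull the planes out of the buffer:
func copyPlanes(from sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    let lumaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)    // Y plane
    let chromaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)  // interleaved CbCr plane
    // ... copy the plane bytes out here before unlocking ...
    _ = (lumaBase, chromaBase)
    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
}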
Also, in which format is a UIImage stored? Can it be any format? Is it just NSData with some extra info which makes it interpreted as an image?
I have been trying to use this method:
Raw image data from camera like "645 PRO"
I am saving the data using writeToFile and have been trying to open it using IrfanView, but I am unable to verify whether or not the saved file is in YUV format, because IrfanView gives an error that it is unable to read the headers.
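As an aside, those three numbers are OSType FourCC codes: four ASCII characters packed big-endian into a UInt32, so a quick sketch can decode them:

import CoreVideo

// Decode an OSType pixel-format number back into its four-character code.
func fourCC(_ code: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCC(875704422))   // "420f" = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
print(fourCC(875704438))   // "420v" = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
print(fourCC(1111970369))  // "BGRA" = kCVPixelFormatType_32BGRA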
