How to attach metadata to UIImage without saving to library - ios

I have a function that captures a few images and adds them to an NSMutableArray, rawImages, of UIImage objects. I then pass rawImages to another object that applies some image processing to the images in the array. During that processing, I need to retrieve the exposure time from the EXIF metadata.
I have been able to get the EXIF data from the CMSampleBuffer when capturing with AVFoundation. Is there a way to attach this EXIF data to the images in the rawImages array?
(PS: I know that one can save the images to the photo library and then read them back along with their metadata, but I don't want to do that.)
Any help is much appreciated! THANKS!
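For context, my current thinking is that UIImage itself has no slot for EXIF, so the metadata has to travel either in a parallel array or embedded in the image bytes. Here is a rough sketch of the embedding route using ImageIO's CGImageDestinationCreateWithData (the helper name is my own):

import UIKit
import ImageIO
import MobileCoreServices

// Hypothetical helper: embed an EXIF dictionary into JPEG data in memory,
// so the metadata travels with the bytes instead of with a bare UIImage.
func jpegData(from image: UIImage, exif: [String: Any]) -> Data? {
    guard let cgImage = image.cgImage else { return nil }
    let data = NSMutableData()
    guard let dest = CGImageDestinationCreateWithData(data as CFMutableData,
                                                      kUTTypeJPEG, 1, nil) else { return nil }
    // EXIF tags go under the kCGImagePropertyExifDictionary key.
    let properties = [kCGImagePropertyExifDictionary as String: exif]
    CGImageDestinationAddImage(dest, cgImage, properties as CFDictionary)
    guard CGImageDestinationFinalize(dest) else { return nil }
    return data as Data
}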

Related

ObjCInstance to PIL image

I am designing a face recognition system for a course.
Currently, for conversion, I use Image.writeToFile_atomically_(filename, True), which saves the image to disk so I can then open it, but this is very slow.
I need to be able to convert an ObjCInstance into a PIL Image or ui.Image without saving the image.
Edit:
My problem is that the way I convert the ObjCInstance is slow, because I save the image to disk and then open it with PIL. I would like a faster way to convert an ObjCInstance into a PIL Image, preferably within the script.

Metadata on UIImage not using the camera roll - Swift

Simply put, I am trying to add GPS metadata to a UIImage that is then stored in a custom folder. I do not want this image to appear in the camera roll.
It appears that the Photos framework won't help, as it requires the image to be added to the camera roll.
Other code I have seen online uses a lot of unmanaged code, so it doesn't feel like a very "Swift" way of doing things.
Does anyone have any resources that show how to modify the metadata of a UIImage? Or am I looking at this incorrectly?
Thanks

How to compose a "right" oriented UIImage with NSData and NSDictionary captured from AVCaptureDevice?

I'm using AVCaptureDevice to capture an image from the camera. After setting up the session and capturing, I get an NSData * and an NSDictionary * object. However, because the EXIF metadata is separated from the image data, when I convert the NSData * to a UIImage * and put it in a UIImageView *, the image is misoriented.
I know there are plenty of fix-up methods to get the image right, and I'm using one now, but when the image resolution is high (i.e., 3000 × 4000), the fix is slow: it takes around 500 milliseconds on my iPhone 5. After running the Time Profiler in Instruments, most of the time is spent writing data, which I guess is inevitable.
So I'm wondering: am I doing the right thing? Shouldn't there be an easier yet more efficient way to solve a problem like this? Please help me out of here, big thanks!
This is possible without actually rotating the image bytes by using the ImageIO framework. You want to create an image destination with CGImageDestinationCreateWithURL, then add your image data along with its EXIF data with CGImageDestinationAddImage.
There are a couple of examples floating around on SO, like this one.
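For example, a minimal sketch of that approach in Swift; the function and parameter names are illustrative, standing in for the NSData and NSDictionary from the capture callback, and it uses CGImageDestinationAddImageFromSource, the data-to-data variant of the CGImageDestinationAddImage call mentioned above:

import Foundation
import ImageIO
import MobileCoreServices

// Sketch: write captured JPEG data to disk with its metadata attached,
// without decoding or rotating the pixels. AddImageFromSource copies the
// image from the source and merges in the supplied properties.
func writeImage(_ imageData: Data, metadata: [String: Any], to url: URL) -> Bool {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let dest = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeJPEG, 1, nil)
    else { return false }
    CGImageDestinationAddImageFromSource(dest, source, 0, metadata as CFDictionary)
    return CGImageDestinationFinalize(dest)
}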
If all you want to do is display the image, then you can create a UIImage with a specified orientation from the data using:
+ (UIImage *)imageWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
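In Swift this corresponds to the UIImage(cgImage:scale:orientation:) initializer. A sketch, assuming you map the EXIF orientation tag to a UIImage.Orientation yourself:

import UIKit

// Sketch: display captured data with the right orientation without touching
// the pixel bytes; UIImageView honors the orientation flag at display time.
func displayImage(from data: Data, orientation: UIImage.Orientation, in imageView: UIImageView) {
    guard let base = UIImage(data: data), let cgImage = base.cgImage else { return }
    imageView.image = UIImage(cgImage: cgImage, scale: base.scale, orientation: orientation)
}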

ALAsset of an Image taken from Camera without saving it

Hi, I was wondering if there's a way to get the ALAsset for an image taken with the camera, but without saving it...
I've come across various examples that use writeImageToSavedPhotosAlbum and then fetch the ALAsset, but I don't deem it necessary to save the image to the camera roll; I was just wondering if this could be done otherwise.
No ALAsset exists until the image has been successfully saved to the photo library. Until then you just have a UIImage that came from the picker. There is no mandate that the image be saved into the library; any decision about whether to save it should be based on what the app tells the user and whether the user would naturally expect to find the image in the library after taking / saving it in the app.

Best way to store and load Freehand Drawing Lines for every image

I found the post Draw Lines Load From Plist in iPhone SDK about saving and loading your freehand drawing from a plist.
In my app I am able to draw on a captured photo and save it as a new image in my photo library. But I want to save just the photo, without the drawing, to my photo library, and be able to load my freehand drawing lines manually whenever I load the original photo.
In the linked post he saves his line coordinates in a plist. Is it efficient to create a new plist for every photo? Any ideas? Please help :(
I would really appreciate any ideas you might have! Thank you
The reason he uses line coordinates is that this is the form in which the device receives the input in the first place (points, either relative to the previous one or absolute). So unless the line coordinates end up larger than a compressed pixel array (PNG, JPG, etc.), this is the best way. For simplicity, though, I would just keep that format.
If size is a problem for you, just use zlib (http://www.zlib.net/) to compress/uncompress the data - it's available on all mobile devices.
You could also use other transformations to reduce the size of the data prior to zlib compression. For instance, if every coordinate is stored relative to the previous one, the values are small, so you can encode them in a smaller data type (a byte instead of an integer, say) or a variable-length one (e.g., one byte at a time, with a continuation bit indicating whether another byte follows).
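A sketch of both ideas together, in Swift, using delta encoding plus Apple's Compression framework for the zlib step (coordinates are rounded to whole points, and handling of jumps larger than a signed byte is glossed over):

import CoreGraphics
import Foundation
import Compression

// Sketch: delta-encode stroke points into one byte per coordinate, then
// zlib-compress the stream. Assumes consecutive points are close together.
func pack(points: [CGPoint]) -> Data {
    guard !points.isEmpty else { return Data() }
    var raw = Data()
    var prev = CGPoint.zero
    for p in points {
        // Each coordinate becomes a small signed delta from the previous point.
        raw.append(UInt8(bitPattern: Int8(clamping: Int(p.x - prev.x))))
        raw.append(UInt8(bitPattern: Int8(clamping: Int(p.y - prev.y))))
        prev = p
    }
    // Compress the delta stream with the zlib (deflate) algorithm.
    let dstSize = raw.count + 64
    var dst = [UInt8](repeating: 0, count: dstSize)
    let written = raw.withUnsafeBytes { src in
        compression_encode_buffer(&dst, dstSize,
                                  src.bindMemory(to: UInt8.self).baseAddress!,
                                  raw.count, nil, COMPRESSION_ZLIB)
    }
    return written > 0 ? Data(dst.prefix(written)) : raw
}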
