UIImage from NSInputStream - iOS

I'm downloading a 2400x1600 image from Parse and I don't want to hold all that data in memory at once. Parse's PFFile object has a convenient method to get its data as an NSInputStream, so when the download finally completes I end up with an NSInputStream.
So now I want to use that NSInputStream to get my UIImage. It should work like UIImage's contents-of-file method, i.e. without loading the whole image into memory at once.
I think that writing the NSInputStream to a file and then using UIImage's contents-of-file method would work fine in my case, but I have found no way to write a file from an NSInputStream.
Any code example or some guideline would be really appreciated.
Thanks in advance.

To accomplish this you can set up an NSOutputStream and stream the received data to a file. Create your output stream using initToFileAtPath:append: with YES for append. In your input stream callback, pass the data to your output stream by calling write:maxLength: (read more in the docs). Once the stream is complete, you have the full image on file without ever holding it fully in memory.
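A minimal sketch of that pattern, assuming inputStream is the stream you got from Parse and using a blocking read loop (in a run-loop driven setup you would instead do one read per NSStreamEventHasBytesAvailable callback; the path is a placeholder):

// A placeholder path to write the image to.
NSString *imagePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"photo.jpg"];

NSOutputStream *outputStream = [[NSOutputStream alloc] initToFileAtPath:imagePath append:YES];
[outputStream open];
[inputStream open];

uint8_t buffer[4096];
NSInteger bytesRead;
// read:maxLength: blocks until bytes arrive; returns 0 at end, negative on error.
while ((bytesRead = [inputStream read:buffer maxLength:sizeof(buffer)]) > 0) {
    [outputStream write:buffer maxLength:bytesRead];
}

[inputStream close];
[outputStream close];

// The full image is now on disk and can be loaded without an in-memory NSData copy.
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];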
Henri's answer above is more appropriate since you're using Parse, but this is the general solution.

Parse gives an example of this in its iOS/OS X documentation.
Retrieving the image back involves calling one of the getData variants on the PFFile. Here we retrieve the image file off another UserPhoto named anotherPhoto:
PFFile *userImageFile = anotherPhoto[@"imageFile"];
[userImageFile getDataInBackgroundWithBlock:^(NSData *imageData, NSError *error) {
    if (!error) {
        UIImage *image = [UIImage imageWithData:imageData];
    }
}];
Now, I don't quite see the reason for you to use NSInputStream, for two main reasons:
NSInputStream is, as the name suggests, meant for reading data in as a stream, not just for fetching it from somewhere.
Streaming is for scenarios in which you want to do something with the data as it comes in; from your description it seems you only care about the data once the download has completed.
In short, you should be using the aforementioned way, unless you truly care about the way the data is loaded in, for example wanting to manipulate it as it comes in (highly unlikely in the case you describe).
As to having it all in memory at once: the dimensions you give are not that large. Yes, you could stream it into a file, but assuming you want to show it full-size in the app, the memory problem would appear at some point anyway, i.e. you would just be postponing the inevitable. If you don't need it full-size, then it might be a good idea to chop the source image into tiles and use those instead; it is far quicker to download specific tiles, and easier on memory.

Related

iOS - NSFileManager - How to deal with low disk space

When writing data to a file (e.g. thumbnails for caching, user data, etc.), how do you deal with the fact that the device may be unable to write your data because the disk is full?
Will NSFileManager throw an exception in case of low disk space?
What's the designated way to deal with this and to inform my user that there's very little disk space left for his data? (I'm saving a fair amount of different data at different places in my app and searching for a common way to deal with it.)
You mentioned in the comments that you want to save an NSDictionary. If you only want to know whether the file was saved successfully, you can inspect the return value of the writeToFile:atomically: method.
Return Value
YES if the file is written successfully, otherwise NO.
More information can be found under NSDictionary's Storing Dictionaries section.
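For example (filePath is a placeholder):

BOOL success = [myDictionary writeToFile:filePath atomically:YES];
if (!success) {
    // The write failed, but this API gives no NSError explaining why.
}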
Alternatively,
If you want a more detailed error for the failure (such as out of disk space, folder does not exist, etc.), you can convert the NSDictionary to NSData before saving it.
NSDictionary to NSData:
NSData *myData = [NSKeyedArchiver archivedDataWithRootObject:myDictionary];
NSData to NSDictionary:
NSDictionary *myDictionary = (NSDictionary *)[NSKeyedUnarchiver unarchiveObjectWithData:myData];
The benefit is that you will also have access to the API -writeToFile:options:error:.
If there is an error writing out the data, upon return contains an NSError object that describes the problem.
More detail can be found under the Storing Data section of NSData.
I think that's the best you can do in case there is a low disk space problem on the device.
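A minimal sketch of that route (filePath is again a placeholder; NSFileWriteOutOfSpaceError is Foundation's error code for a full disk):

NSData *myData = [NSKeyedArchiver archivedDataWithRootObject:myDictionary];

NSError *error = nil;
BOOL success = [myData writeToFile:filePath options:NSDataWritingAtomic error:&error];
if (!success) {
    if (error.code == NSFileWriteOutOfSpaceError) {
        // Tell the user the device is out of disk space.
    }
    NSLog(@"Write failed: %@", error.localizedDescription);
}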

How do I convert NSData to a PDF?

I have PDF files stored on Parse.com; I want to download them and set them as images. I have googled around trying to find out how to do this but I'm still clueless. I have got my Parse object downloading successfully; the PDF file is stored in the field "image":
//DOWNLOAD IMAGE CODE
PFFile *image = object[@"image"];
[image getDataInBackgroundWithBlock:^(NSData *data, NSError *error) {
    //we have data, now we want to convert it to a UIImage
}];
I just have no idea what to do with the data. Can someone please give me some pointers? Thanks.
I haven't done this, but I think what you need to do is to first create a data provider out of your data using a call like CGDataProviderCreateWithCFData(), then use the data provider to create a PDF object using CGPDFDocumentCreateWithProvider.
These are Core Foundation calls, so you need to do manual memory management of the CF objects (ARC doesn't manage CF objects).
You should be able to Google around using those terms and find code to do what you want.
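A hedged sketch of that route (assuming data is the NSData from the block above, and that you want the first page rendered into a UIImage):

// Create a PDF document from the in-memory data.
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGPDFDocumentRef document = CGPDFDocumentCreateWithProvider(provider);
CGDataProviderRelease(provider); // manual release: ARC doesn't manage CF objects

// Render page 1 (CGPDF pages are 1-indexed) into an image context.
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);
CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Fill with white and flip the coordinate system (PDF origin is bottom-left).
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, pageRect);
CGContextTranslateCTM(context, 0, pageRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawPDFPage(context, page);
UIImage *pageImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGPDFDocumentRelease(document);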
Are the files stored on Parse as images or PDFs?
It sounds to me like you've got actual PDF files on Parse. You can't just convert them to an image, because they aren't images.
If they're PDF files you can save them to disk and then view them with a QLPreviewController (Quick Look):
https://developer.apple.com/library/prerelease/ios/documentation/NetworkingInternet/Reference/QLPreviewController_Class/index.html#//apple_ref/occ/instp/QLPreviewController/currentPreviewItem
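A minimal sketch of that approach (assuming self is a UIViewController conforming to QLPreviewControllerDataSource, and self.localPDFURL is a hypothetical file URL the PDF data was saved to):

#import <QuickLook/QuickLook.h>

- (void)presentPDFPreview
{
    QLPreviewController *preview = [[QLPreviewController alloc] init];
    preview.dataSource = self;
    [self presentViewController:preview animated:YES completion:nil];
}

- (NSInteger)numberOfPreviewItemsInPreviewController:(QLPreviewController *)controller
{
    return 1;
}

- (id<QLPreviewItem>)previewController:(QLPreviewController *)controller
                    previewItemAtIndex:(NSInteger)index
{
    return self.localPDFURL; // NSURL conforms to QLPreviewItem
}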
If they're actually images then you simply use:
[UIImage imageWithData:data]

RestKit CoreData and UIImage

I'm using Rest Kit with Core Data, one of the Core Data entities has an attribute 'image' that has a binary type.
I'm still in mockup stage so the image is populated with this code:
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:@"http://lorempixel.com/60/60/people"]]];
entry.image = UIImagePNGRepresentation(image);
Another tab has a collection view that uses fetchedResultsController.
After creating a new entity, if I only save the context the image works fine.
But if I push the entity to the web server using 'postObject:' the image is corrupted when it comes back from the server. I've confirmed the server receives the same string representation of the image "<2f396a2f 34414151 536b5a4a 52674142 ... 6a6e502f 32513d3d>" and stores it directly into a MySQL column of type long blob and at all points the string representation is the same.
But when the collection view is populated via a RestKit server call, the entity's image is invalid. I think the issue is that the data is being converted into the data representation of the description of the data.
Does anyone have a working example with images? The only thing I can think of is that I need to add a custom transformation, but the documentation and examples are lacking as far as how to actually implement one.
RestKit is storing the plain NSData for the image in Core Data - it has no idea what else you might want to do with it. Generally you don't want to manage images directly in Core Data or using RestKit.
Generally, store the path of the image in Core Data and the file itself on disk. Download the images asynchronously (from URLs which would also be stored in Core Data).
For uploading, you could make RestKit upload the data, but you probably actually want a file upload or a conversion to base64. You will need to write some code for this (which you could have RestKit pick up by using the key of a method name that returns the appropriate data). A similar process will work for mapping the data in.
RestKit data transformers are hard to make work in this situation as you are converting between data and strings and they are too general to be able to intercept accurately.
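A sketch of the store-the-path approach, reusing the question's image and entry (the imagePath string attribute is hypothetical, replacing the binary image attribute):

// Save the image bytes to disk and keep only the path in Core Data.
NSString *filename = [[NSUUID UUID].UUIDString stringByAppendingPathExtension:@"png"];
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
[UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
entry.imagePath = path; // hypothetical string attribute replacing the binary one

// Later, load lazily from disk instead of from the managed object:
UIImage *saved = [UIImage imageWithContentsOfFile:entry.imagePath];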

UIImagePNGRepresentation memory never released

I realize this question has been asked before but I didn't find any answers that actually resolved it, so...asking again.
I'm using ARC. My app takes photos from an AVCaptureSession at periodic intervals and saves them to core data. I get the NSData object by calling UIImagePNGRepresentation(). As this is happening, memory steadily climbs and the app eventually quits due to memory pressure...but there are no leaks. Instruments shows that for each photo, 500k is being malloced and it never gets released.
The answer I keep coming across is to wrap UIImagePNGRepresentation, or indeed the whole method body, in an autoreleasepool, but this did not help.
I am reasonably certain that the UIImagePNGRepresentation call is the culprit, because when I comment it out, no more memory problems (or images to save).
Would appreciate any help here...is there another way to grab NSData from a UIImage? Is this just another SDK bug we have to live with?
-(void)photoTimerFired:(NSTimer*)timer
{
    ...
    ManagedDataPoint *lastPoint = [_currentSession.dataPoints lastObject];
    _lastImage = [_imageCapturer singleImage];
    Photo *newPhoto = [NSEntityDescription insertNewObjectForEntityForName:@"Photo"
                                                    inManagedObjectContext:self.managedObjectContext];
    // Line below is the culprit.
    newPhoto.photoData = UIImagePNGRepresentation(_lastImage);
    newPhoto.managedDataPoint = lastPoint;
}
I think this page of the Core Data Programming Guide is worth reading in your case; it talks about the care needed to minimize overhead when storing BLOBs (binary data) like images in Core Data.
Especially the part about creating a dedicated entity that holds only your binary attribute / image, separating it from the other attributes of your primary entity and allowing it to "fault", so that it is only loaded from the database into memory when the attribute is actually referenced/used.
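A sketch of that pattern (the Photo, imageBlob, and data names are hypothetical):

// Photo has a to-one relationship 'imageBlob' to a PhotoData entity whose
// only attribute is 'data'. The related object stays a Core Data fault
// until the attribute is actually touched.
- (UIImage *)imageForPhoto:(Photo *)photo
{
    // Reading other Photo attributes does not pull the image bytes in;
    // only this access fires the fault and loads the binary data.
    NSData *bytes = photo.imageBlob.data;
    return [UIImage imageWithData:bytes];
}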
Another thing to try is to check the "Use External Storage" checkbox on the binary attribute of the entity that holds the image. In practice the image then won't be saved as a BLOB directly in your SQLite database; it will be saved in an external file instead, with the attribute holding only a reference (path) to that file, limiting the growth of your database (and the corruption risk as it grows in size).
(Hopefully it will also reduce your memory footprint, by not keeping the image in memory while the NSManagedObject is alive but the attribute is still a fault...?)
Note: all of this "External Storage" behavior is totally transparent to your code: you still access the attribute as if it directly contained the binary data.

Is it possible to set only the metadata of an ALAsset with writeModifiedImageDataToSavedPhotosAlbum or setImageData

Both the writeModifiedImageDataToSavedPhotosAlbum and setImageData methods on ALAsset take image data (as an NSData object) and metadata (as an NSDictionary). I've got everything working to inject additional metadata into an ALAsset that's already in the camera roll (written by our app, and therefore editable by it), but I would love not to have to read the entire image data of the original just to pass it back completely unmodified to either of these calls.
Is there any way to modify only the metadata of an ALAsset without paying the memory penalty of reading the entire image data? I've tried passing nil to imageData (despite this not being a documented option) and it did not work.
Berend
