How do I access image data as an iOS app developer in Xcode? - ios

I am VERY new to Xcode and even to the iOS platform in general. I have a lot of image processing experience from other development environments and am looking to apply it to iOS apps. I have noticed that nothing is mentioned for Xcode regarding accessing image data and/or directly modifying it. Many tutorial authors seem to use an image picker, but I have never seen them say or show how to access the image data.
An answer to this would be great. Guidance would be most appreciated. Thanks.

UIImage provides the ability to display images from several different formats, but it's immutable -- you can't change the data it uses and expect to see the image change. Instead, you'll get an image in one of the underlying types and work with that. You'll probably want to read up on both Core Graphics (Quartz) and Core Image. For example, you can use UIImage's -CGImage method to get the image as a CGImage and then use the Core Graphics CGImage...() functions to find out about the image data (format, bit depth, etc) and get the actual bits.
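For instance, here is a minimal sketch of pulling the raw pixel bytes out of a UIImage by redrawing its CGImage into a bitmap context with a known layout (the helper name PixelDataForImage is just for illustration):

#import <UIKit/UIKit.h>

static NSData *PixelDataForImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw into a known RGBA8888 context so the byte layout is predictable,
    // regardless of the source image's own pixel format.
    size_t bytesPerRow = width * 4;
    NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels.mutableBytes, width, height,
                                                 8, bytesPerRow, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return pixels; // 4 bytes per pixel: R, G, B, A (premultiplied)
}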

ALAssetsLibrary (not AVAssetLibrary) can help you access and modify image data. Define a result block that receives the asset, then ask the library for the asset at the photo's URL:
ALAssetsLibraryAssetForURLResultBlock resultBlock = ^(ALAsset *photoAsset)
{
    // The result block receives the image's ALAsset; get or modify the
    // image data through its default representation.
    ALAssetRepresentation *imageRepresentation = [photoAsset defaultRepresentation];
};

ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
[assetsLibrary assetForURL:photoUrl resultBlock:resultBlock failureBlock:nil];
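Inside the result block you can then, for example, pull a CGImage or the photo's metadata out of the representation (a sketch of two common accessors):

// Full-resolution CGImage, suitable for Core Graphics processing:
CGImageRef fullImage = [imageRepresentation fullResolutionImage];
// The photo's metadata dictionary (EXIF, GPS, etc.):
NSDictionary *metadata = [imageRepresentation metadata];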
Hope this is what you are looking for.

Related

How do I convert NSData to a PDF?

I have PDF files stored on Parse.com, and I want to download them and set them as images. I have googled around trying to find out how to do this, but I'm still clueless. I have got my Parse object downloading successfully; the PDF file is stored in the field "image".
// DOWNLOAD IMAGE CODE
PFFile *image = object[@"image"];
[image getDataInBackgroundWithBlock:^(NSData *data, NSError *error) {
    // We have data; now we want to convert it to a UIImage.
}];
I just have no idea what to do with the data. Can someone please give me some pointers? Thanks.
I haven't done this, but I think what you need to do is first create a data provider from your data with CGDataProviderCreateWithCFData(), then use the data provider to create a PDF object with CGPDFDocumentCreateWithProvider().
These are Core Foundation-style calls, so you need to do manual memory management of the CF objects (ARC doesn't manage CF objects).
You should be able to Google around using those terms and find code to do what you want.
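Putting that together, a minimal sketch might look like this, rendering the first page into a UIImage (error handling omitted):

CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGPDFDocumentRef pdf = CGPDFDocumentCreateWithProvider(provider);
CGDataProviderRelease(provider);

CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 1); // PDF pages are 1-indexed
CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Fill with white and flip the coordinate system; PDF pages draw bottom-up.
CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
CGContextFillRect(ctx, pageRect);
CGContextTranslateCTM(ctx, 0, pageRect.size.height);
CGContextScaleCTM(ctx, 1, -1);
CGContextDrawPDFPage(ctx, page);
UIImage *pageImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGPDFDocumentRelease(pdf); // CF object: release manually, ARC won't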
Are the files stored on Parse as images or PDFs?
It sounds to me like you've got actual PDF files on Parse. You can't just convert them to an image, because they aren't images.
If they're PDF files, you can save them to disk and then view them with QLPreviewController:
https://developer.apple.com/library/prerelease/ios/documentation/NetworkingInternet/Reference/QLPreviewController_Class/index.html#//apple_ref/occ/instp/QLPreviewController/currentPreviewItem
If they're actually images then you simply use:
[UIImage imageWithData:data]

Check PhotoLibrary if Image Already Exists

I am creating an app to save images from the net to the photo library. Now I want to check, every time before saving an image to the photo library, whether that image is already there.
I can fetch the names of the images stored in the photo library, but those names are different from the original names, so I don't know how to compare the current image with the already existing images!
Please help me!
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    [group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *innerStop) {
        // Log the stored filename of each saved photo.
        NSLog(@"%@", [[asset defaultRepresentation] filename]);
    }];
} failureBlock:^(NSError *error) {
    NSLog(@"%@", error.description);
}];
Basically, there is no such API in the iOS documentation, as far as I know.
What you can do, which is not quite the same as what you originally wanted, is store an individual value (*) in your application for every image the user selects throughout the app's lifecycle. Based on this array of information, you can tell whether a new selection is the same as a previous one. But still, you cannot do this for the full photo library.
(*) Below is a useful thread on SO about detecting image similarity and/or creating a value (hash) that describes an image and can be used to compare images later; a small sketch of the idea follows the links.
My favourite is pHash.
https://softwareengineering.stackexchange.com/questions/92830/how-to-know-if-two-images-are-the-same
or see this:
Compare two images to check if they are same
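For illustration, here is a minimal average-hash sketch (a much simpler cousin of pHash; the AverageHash helper is hypothetical): scale the image to 8x8 grayscale, then set one bit per pixel depending on whether it is brighter than the mean. Store the hash when you save an image, and compare by Hamming distance later.

static uint64_t AverageHash(UIImage *image)
{
    const size_t side = 8;
    uint8_t pixels[64] = {0};

    // Downscale into an 8x8 grayscale bitmap.
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(pixels, side, side, 8, side,
                                             gray, (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(ctx, CGRectMake(0, 0, side, side), image.CGImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);

    NSUInteger sum = 0;
    for (size_t i = 0; i < 64; i++) sum += pixels[i];
    uint8_t mean = (uint8_t)(sum / 64);

    // One bit per pixel: brighter than the mean or not.
    uint64_t hash = 0;
    for (size_t i = 0; i < 64; i++)
        hash = (hash << 1) | (pixels[i] > mean ? 1 : 0);
    return hash;
}

// Similar images differ in only a few bits (small Hamming distance):
// int distance = __builtin_popcountll(AverageHash(a) ^ AverageHash(b));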

UIImage from NSInputStream

I'm downloading a 2400x1600 image from Parse and I don't want to hold all that data in memory at once. The PFFile object from Parse has a convenient method to get the NSData as an NSInputStream, so when the data is finally downloaded I end up with an NSInputStream.
Now I want to use that NSInputStream to get my UIImage. It should work like creating a UIImage with imageWithContentsOfFile:, i.e. without the whole image being loaded into memory at once.
I think writing the NSInputStream to a file and then using UIImage's imageWithContentsOfFile: would work fine in my case, but I have found no way to write to a file from an NSInputStream.
Any code example or guideline would be really appreciated.
Thanks in advance.
To accomplish this you can set up an NSOutputStream and stream the received data to a file. Create your output stream using initToFileAtPath:append: with YES for append. In your input-stream callback, pass the data to your output stream by calling write:maxLength: (read more in the docs). Once the stream is complete, you have the full image on file without ever holding it fully in memory.
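A synchronous sketch of that pump (imagePath is illustrative, and in a real event-driven setup the write:maxLength: call would live in your NSStreamDelegate callback instead):

// Assumes inputStream has already been opened.
NSOutputStream *output = [[NSOutputStream alloc] initToFileAtPath:imagePath append:YES];
[output open];

uint8_t buffer[4096];
NSInteger bytesRead;
while ((bytesRead = [inputStream read:buffer maxLength:sizeof(buffer)]) > 0) {
    [output write:buffer maxLength:(NSUInteger)bytesRead];
}
[output close];

// The image can now be loaded lazily from disk:
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];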
Henri's answer is more appropriate since you're using Parse, but this is the general solution.
Parse gives an example of this in its iOS/OS X documentation.
Retrieving the image back involves calling one of the getData variants on the PFFile. Here we retrieve the image file off another UserPhoto named anotherPhoto:
PFFile *userImageFile = anotherPhoto[@"imageFile"];
[userImageFile getDataInBackgroundWithBlock:^(NSData *imageData, NSError *error) {
    if (!error) {
        UIImage *image = [UIImage imageWithData:imageData];
    }
}];
Now, I don't quite see the reason for you to use NSInputStream, mainly for two reasons:
NSInputStream is supposedly meant for INPUTTING data, not taking it from somewhere.
NSInputStream is meant for streaming, i.e. for scenarios in which you want to do something with the data as it comes in; from your description it seems you only care about the data once it has completed downloading.
In short, you should use the aforementioned way unless you truly care about how the data is loaded in, for example if you want to manipulate it as it comes in (highly unlikely in the case you describe).
As for having it all in memory at once: the dimensions you give are not that large. Yes, you could stream it into a file, but assuming you want to show the image full-size in the app, the memory problem would appear at some point anyway, i.e. you would just be postponing the inevitable. If that is not the case (not showing it full-size), then it might be a good idea to chop the source image up into tiles and use those instead; it is far quicker to download specific tiles, and easier on memory.

Image handling rails + iOS Core Data

I am developing an iOS app that will have a number of images. These images are associated with Show and Band objects. I am fairly new to iOS development.
On the server, the Shows and Bands are associated with a list of images. Currently I am storing the images with the following info:
height: integer, width: integer, imageType: string, imageData: binary
First question is: should there be more?
Secondly, I am persisting the Show and Band objects using Core Data. I do not want to persist the images, because I would quickly run out of memory; I will store them in the cache directory instead. My second question is: how should the Show and Band objects keep track of the images? Should I have Image objects in the model with a to-many relationship to Shows and Bands? These Image objects would perhaps only contain height, width, imageType, and a path to where the cached image should be. My idea is that if an image is not found in the cache directory, it gets the imageData from the server.
What is the right way to do this?
UPDATE
I also plan on pinging the server with HEAD to check if the imageData has been updated before getting the cached version.
You can actually just store the images directly with Core Data and not have to mess with documents at all.
This has been answered in other places but I'll put the code here anyway:
To save:
NSData *imageData = UIImagePNGRepresentation(yourUIImage);
[newManagedObject setValue:imageData forKey:@"image"];
and to load:
NSManagedObject *selectedObject = [[self fetchedResultsController] objectAtIndexPath:indexPath];
UIImage *image = [UIImage imageWithData:[selectedObject valueForKey:#"image"]];
[[newCustomer yourImageView] setImage:image];
The way you should model your data depends on what kind of relationship the images will have with bands/shows.
Do shows and bands each relate to multiple images? Can a show have images that are related to multiple bands?
Ultimately you may want an Image Core Data entity that has a one-to-one or one-to-many relationship with bands/shows, depending on your answers to these questions. You will also want to add the inverse relationship so that, for example, when you look up a show you can also access the set of images associated with it.
It might be easier to help you if you provide more detail about the relationships between your objects.
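If you instead go the route the question describes (storing only metadata and a file name in Core Data, and caching the bytes on disk), a minimal sketch of the lookup might be as follows. The imageObject attributes fileName and remoteURL are hypothetical, and the synchronous fetch is only for brevity:

NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                           NSUserDomainMask, YES) firstObject];
NSString *path = [cachesDir stringByAppendingPathComponent:imageObject.fileName];
UIImage *image = [UIImage imageWithContentsOfFile:path];
if (!image) {
    // Cache miss: fetch from the server and write it back to the cache directory.
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:imageObject.remoteURL]];
    [data writeToFile:path atomically:YES];
    image = [UIImage imageWithData:data];
}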

Is it possible to set only the metadata of an ALAsset with writeModifiedImageDataToSavedPhotosAlbum or setImageData

Both writeModifiedImageDataToSavedPhotosAlbum and setImageData methods in ALAsset take both image data (in the form of an NSData object) and metadata (in the form of an NSDictionary object). I've got everything working to inject additional metadata into an ALAsset that's already in the camera roll (obviously written by our app, therefore editable by it), but what I would love to do is to not have to first read the entire image data for the original just to pass it completely unmodified to either of these calls.
Is there any way to modify only the metadata of an ALAsset without paying the memory penalty of reading the entire image data? I've tried passing nil to imageData (despite this not being a documented option) and it did not work.
Berend
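For reference, the working-but-memory-heavy path the question describes looks roughly like this; reading the bytes through the representation appears to be unavoidable (the injected orientation key is just an example):

#import <ImageIO/ImageIO.h>

ALAssetRepresentation *rep = [asset defaultRepresentation];
// Read the full image data -- this is exactly the memory cost in question.
uint8_t *buffer = malloc((size_t)rep.size);
NSError *error = nil;
NSUInteger read = [rep getBytes:buffer fromOffset:0 length:(NSUInteger)rep.size error:&error];
NSData *imageData = [NSData dataWithBytesNoCopy:buffer length:read freeWhenDone:YES];

// Merge the new fields into the existing metadata and write everything back.
NSMutableDictionary *metadata = [[rep metadata] mutableCopy];
metadata[(NSString *)kCGImagePropertyOrientation] = @1; // example injected field
[asset setImageData:imageData metadata:metadata completionBlock:^(NSURL *assetURL, NSError *err) {
    // The asset in the camera roll now carries the updated metadata.
}];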
