I'm trying to create a CGImageSourceRef from a CGDataProviderRef. The CGDataProviderRef comes from a UIImage that was created with UIGraphicsGetImageFromCurrentImageContext. When I do this and check the status of my image source, it's returning kCGImageStatusUnknownType.
I've tried giving it a hint with kCGImageSourceTypeIdentifierHint but to no avail.
Does anyone know how I can create this CGImageSource?
Thanks!
You tell the source that you are giving it PNG data, but you are not. The data provider that you get from the CGImageGetDataProvider function provides raw pixel data, not data in some external format.
The purpose of a CGImageSource is to decode data in some external format (such as PNG) into CGImages. It can't handle pixel data; when you have pixel data, you can just create the CGImage from it directly. Of course, given that you already have the final image, I'm not sure why you're then trying to create it again from any kind of data at all.
If you want to generate some external data to subsequently (as in, after you release this image, possibly in another process) be loaded back in by a CGImageSource, use a CGImageDestination.
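For illustration, a minimal sketch of that direction, assuming image is the CGImageRef you already have (kUTTypePNG comes from MobileCoreServices; the variable names are mine):
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Encode the existing CGImage into real PNG data via a CGImageDestination.
NSMutableData *pngData = [NSMutableData data];
CGImageDestinationRef destination =
    CGImageDestinationCreateWithData((__bridge CFMutableDataRef)pngData, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, image, NULL);
CGImageDestinationFinalize(destination); // writes the encoded bytes into pngData
CFRelease(destination);
// pngData now holds PNG data that a CGImageSource can later decode.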
Based on another question, base64-encoded image data is about 37% larger than the raw data.
Therefore consider this case.
An API client responds with an array of objects.
Each object contains a lot of properties, etc., but the one that matters is a property with the key ImageString64, which returns an image with a maximum width and height of 300x300.
Desired solution:
I want the fastest, yet memory-friendly, way to decode those images.
Notes to consider:
I do know how to decode them; the question is where, and why. Does it even matter?
Options I have (you can add yours if you have a better one):
1- inside the request response callback
2- pass those objects with their ImageString64 as plain strings, and then decode them inside the UIViewController
More general information about the response:
1- the response array contains between 6 and 9 objects at most.
2- each response object has 17 keys.
If you need to work with the base64-decoded image, I would transform it neither in the request callback nor in the view controller.
You should have some kind of internal repository from which your view controllers get the Object instances. The repository contains the raw data, with the base64 string.
When you access one of the object instances from the repository, you should transform the base64 string into a UIImage instance. The view controller should only ever see the UIImage.
To get good performance when transforming and showing the data, you should do this transform before trying to display it, but as mentioned, only if it's needed.
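For example, a minimal sketch of that transform, assuming the property is a plain base64 string (no data-URI prefix) exposed as imageString64 (the names are illustrative):
// Decode once, inside the repository, and cache the resulting UIImage.
NSData *imageData = [[NSData alloc] initWithBase64EncodedString:object.imageString64
                                                        options:NSDataBase64DecodingIgnoreUnknownCharacters];
UIImage *image = imageData ? [UIImage imageWithData:imageData] : nil;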
If you want to display the data in a UITableView or UICollectionView, you should use the prefetching protocols UICollectionViewDataSourcePrefetching (https://developer.apple.com/documentation/uikit/uicollectionviewdatasourceprefetching/prefetching_collection_view_data) or UITableViewDataSourcePrefetching (https://developer.apple.com/documentation/uikit/uitableviewdatasourceprefetching).
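A minimal sketch of the prefetching side, assuming a hypothetical repository method that runs the decode shown above on a background queue and caches the UIImage:
- (void)collectionView:(UICollectionView *)collectionView
    prefetchItemsAtIndexPaths:(NSArray<NSIndexPath *> *)indexPaths {
    for (NSIndexPath *indexPath in indexPaths) {
        // prepareImageForItemAtIndex:completion: is illustrative, not a real API.
        [self.repository prepareImageForItemAtIndex:indexPath.item completion:nil];
    }
}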
What's your use case for displaying the images?
(I'm working on a Xamarin.Forms project targeting iOS, but I get the feeling I need to use monotouch to accomplish my goals.)
I have an array I get from a function, and this array represents some bitmap data I would like to save and display later. I've been looking at the documentation, and I don't see a path from bytes to bitmap to file that is as clear as I'd expect. In WinForms, I'd create a System.Drawing.Bitmap, then use the Save() method. In Xamarin for Android, Android.Graphics.Bitmap has a Compress() method that lets me output to a stream. I'm having trouble figuring out what the equivalent in Xamarin iOS would be.
The pixel data is 32 bits per pixel, 8 bits per component, ARGB. I've got as far as figuring out I can create a CGImage from that, currently via a CGBitmapContext. I can get a UIImage from that, but the next obvious method is UIImage.SaveToPhotosAlbum(), which isn't what I want to do. I see AsPNG() returns an NSData, and that seems like a way to go. I don't quite have it working yet (because for some reason the provider decided to output int[] instead of byte[]), but I'm curious: is there an easier way to go from raw pixel data to a PNG file?
You can use ImageIO, which gives you a little more control over what to serialize.
You still need to get yourself a CGImage to work with though.
Use one of the three CGImageDestination.Create overloads to select the kind of output you want to produce (save to a NSMutableData object, to an NSUrl, or to your own data provider), add the image, and then close the CGImageDestination.
Something like this:
var storage = new NSMutableData ();
var dest = CGImageDestination.Create (storage, MobileCoreServices.UTType.PNG, imageCount: 1);
dest.AddImage (yourImage);
dest.Close (); // This saves the data
You can change the parameter in Create for imageCount to other numbers if you want to create images that have multiple frames, in that case, you will also call AddImage repeatedly.
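Tying it back to the question's raw ARGB data, a hedged sketch (SavePixelsAsPng is a hypothetical helper; CoreGraphics generally expects premultiplied alpha for bitmap contexts, hence PremultipliedFirst):
using System.IO;
using CoreGraphics;
using Foundation;
using ImageIO;

static void SavePixelsAsPng (byte[] pixels, int width, int height, string path)
{
    // 32bpp, 8 bits per component, alpha first, matching the question's layout.
    using (var colorSpace = CGColorSpace.CreateDeviceRGB ())
    using (var context = new CGBitmapContext (pixels, width, height, 8, width * 4,
            colorSpace, CGImageAlphaInfo.PremultipliedFirst))
    using (var cgImage = context.ToImage ())
    using (var storage = new NSMutableData ())
    using (var dest = CGImageDestination.Create (storage, MobileCoreServices.UTType.PNG, imageCount: 1)) {
        dest.AddImage (cgImage);
        dest.Close (); // flushes the encoded PNG into storage
        File.WriteAllBytes (path, storage.ToArray ());
    }
}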
If you are looking to write the image to a file then it would be quite simple:
var bytes = image.AsPNG().ToArray();
await stream.WriteAsync(bytes, 0, bytes.Length);
I'm using Rest Kit with Core Data, one of the Core Data entities has an attribute 'image' that has a binary type.
I'm still in mockup stage so the image is populated with this code:
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:@"http://lorempixel.com/60/60/people"]]];
entry.image = UIImagePNGRepresentation(image);
Another tab has a collection view that uses fetchedResultsController.
After creating a new entity, if I only save the context the image works fine.
But if I push the entity to the web server using 'postObject:' the image is corrupted when it comes back from the server. I've confirmed the server receives the same string representation of the image "<2f396a2f 34414151 536b5a4a 52674142 ... 6a6e502f 32513d3d>" and stores it directly into a MySQL column of type long blob and at all points the string representation is the same.
But when the collection view is populated using a server call via RestKit, the entity's image is invalid. I think the issue is that the data is being converted into the data representation of the description of the data.
Does anyone have a working example with images? The only thing I can think of is that I need to add a custom transformation, but the documentation and examples are lacking as far as how to actually implement one.
RestKit is storing the plain NSData for the image in Core Data - it has no idea what else you might want to do with it. Generally you don't want to manage images directly in Core Data or using RestKit.
Instead, store the path of the image in Core Data and the file itself on disk. Download the images asynchronously (from URLs which would also be stored in Core Data).
For uploading, you could make RestKit upload the raw data, but you probably actually want a file upload or a conversion to base64. You will need to write some code for this (which you could have RestKit pick up by mapping from the key of a method that returns the appropriate data). A similar process will work for mapping the data in.
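A minimal sketch of such an accessor, matching the question's entity (the category and key names are mine):
@interface Entry (Upload)
- (NSString *)imageBase64;
@end

@implementation Entry (Upload)
- (NSString *)imageBase64 {
    // Derived value a RestKit request mapping could pick up under this key.
    return [self.image base64EncodedStringWithOptions:0];
}
@end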
RestKit data transformers are hard to make work in this situation as you are converting between data and strings and they are too general to be able to intercept accurately.
I'm downloading a 2400x1600 image from Parse and I don't want it to hold all that data in memory at once. PFFile object from Parse has a convenient method to get NSData as NSInputStream so when the data is finally downloaded I end up with a NSInputStream.
So now I want to use that NSInputStream to get my UIImage. It should work like creating a UIImage with the contentsOfFile method, i.e. without loading the whole image into memory at once.
I think that writing the NSInputStream to a file and then using UIImage's contentsOfFile method should work fine in my case, but I have found no way to write to a file from an NSInputStream.
Any code example or some guideline would be really appreciated.
Thanks in advance.
To accomplish this you can set up an NSOutputStream to then stream the received data to a file. Create your output stream using initToFileAtPath:append: with YES for append. In your input stream callback, pass the data to your output stream by calling write:maxLength: (read more in the docs). Once the stream is complete, you then have the full image on file without ever having it fully in memory.
Henri's answer above is more appropriate since you're using Parse, but this is the general solution.
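A minimal synchronous sketch of that general approach, assuming input is your already-opened NSInputStream and path is a writable file path (real code should also handle stream errors and run this off the main thread):
NSOutputStream *output = [NSOutputStream outputStreamToFileAtPath:path append:YES];
[output open];
uint8_t buffer[16 * 1024];
NSInteger bytesRead;
while ((bytesRead = [input read:buffer maxLength:sizeof(buffer)]) > 0) {
    NSInteger written = 0;
    while (written < bytesRead) {
        NSInteger result = [output write:buffer + written maxLength:bytesRead - written];
        if (result <= 0) { break; } // stream error; bail out in real code
        written += result;
    }
}
[output close];
// The image can now be loaded lazily from disk.
UIImage *image = [UIImage imageWithContentsOfFile:path];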
In the iOS/OS X documentation, Parse gives this example.
Retrieving the image back involves calling one of the getData variants on the PFFile. Here we retrieve the image file off another UserPhoto named anotherPhoto:
PFFile *userImageFile = anotherPhoto[@"imageFile"];
[userImageFile getDataInBackgroundWithBlock:^(NSData *imageData, NSError *error) {
    if (!error) {
        UIImage *image = [UIImage imageWithData:imageData];
    }
}];
Now, I don't quite see the reason for you to use NSInputStream, for two main reasons:
NSInputStream is meant for reading data in as a stream, not for handing you a finished blob from somewhere
NSInputStream is meant for streaming, i.e. for scenarios in which you want to do something with the data as it comes in; from your description it seems you only ever care about the data once it has completed the download.
In short, you should use the aforementioned way, unless you truly care about how the data is loaded in, for example if you want to manipulate it as it comes in (highly unlikely in the case you describe).
As for having it all in memory at once, the dimensions you give are not that large. Yes, you could stream it into a file, but assuming you want to show the image full-size in the app, the memory problem would appear at some point anyway, i.e. you would just be postponing the inevitable. If that is not the case (not showing it full-size), then it might be a good idea to chop the source image up into tiles and use those instead; it is far quicker to download specific tiles, and easier on memory.
The writeModifiedImageDataToSavedPhotosAlbum and setImageData methods in ALAsset both take image data (in the form of an NSData object) and metadata (in the form of an NSDictionary object). I've got everything working to inject additional metadata into an ALAsset that's already in the camera roll (obviously written by our app, and therefore editable by it), but what I would love to do is avoid having to read the entire image data for the original just to pass it completely unmodified to either of these calls.
Is there any way to modify only the metadata of an ALAsset without paying the memory penalty of reading the entire image data? I've tried passing nil to imageData (despite this not being a documented option) and it did not work.
Berend