How do I best convert raw pixel data to a PNG file? (iOS)

(I'm working on a Xamarin.Forms project targeting iOS, but I get the feeling I need to use MonoTouch to accomplish my goals.)
I have an array I get from a function, and this array represents some bitmap data I would like to save and display later. I've been looking at the documentation, and I don't see a path from bytes to bitmap to file that is as clear as I'd expect. In WinForms, I'd create a System.Drawing.Bitmap, then use the Save() method. In Xamarin for Android, Android.Graphics.Bitmap has a Compress() method that lets me output to a stream. I'm having trouble figuring out what the equivalent in Xamarin iOS would be.
The pixel data is 32 bits per pixel, 8 bits per component, ARGB. I've got as far as figuring out I can create a CGImage from that, currently via a CGBitmapContext. I can get a UIImage from that, but the next obvious method is UIImage.SaveToPhotosAlbum(), which isn't what I want to do. I see AsPNG() returns an NSData, and that seems like a way to go. I don't quite have it working yet (because for some reason the provider decided to output int[] instead of byte[]), but I'm curious: is there an easier way to get from raw pixel data to a PNG file?

You can use ImageIO that gives you a little bit more control over what to serialize.
You still need to get yourself a CGImage to work with though.
Use one of the three CGImageDestination.Create overloads to select the kind of output you want to produce (save to a NSMutableData object, to an NSUrl, or to your own data provider), add the image, and then close the CGImageDestination.
Something like this:
var storage = new NSMutableData ();
var dest = CGImageDestination.Create (storage, MobileCoreServices.UTType.PNG, imageCount: 1);
dest.AddImage (yourImage);
dest.Close (); // This saves the data
You can pass a different imageCount to Create if you want to produce an image with multiple frames; in that case you also call AddImage once per frame.

If you are looking to write the image to a file, then it is quite simple:
// 'image' is a UIImage; 'stream' is any writable Stream (e.g. a FileStream)
var bytes = image.AsPNG ().ToArray ();
await stream.WriteAsync (bytes, 0, bytes.Length);

Related

How to delete image from array of strings?

I have a problem with deleting images from my app. I have an array of strings, which are images converted to base64 strings. I get that array back from the API, and I'm stuck when I want to delete the one picture the user has selected.
I tried to delete it with the filter and map methods, but that didn't solve the problem. Here is my try:
func deleteImage(image: UIImageView) {
    for img in newAdedImages {
        newAdedImages = newAdedImages.filter({ $0 !== image })
        newAdedImages.append(img)
    }
}
Bad code
As @vadian already pointed out, the code you've posted doesn't make sense, because you are trying to filter an array of strings with an instance of UIImageView. Also, you are re-appending each string to an array which already contains that string, which means you will end up with a lot of duplicates.
Possible solution
You could check how the base64 string is used to create the UIImage that is used in the UIImageView; then you can try to reverse the process and extract the base64 string from the UIImage. After that you can filter the newAddedImages array by comparing string values.
Check this answer on SO: https://stackoverflow.com/a/47610733/4949050
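A minimal sketch of that string-comparison approach, kept to Foundation only. The function name removeImage and the sample strings are illustrative, and it assumes you can recover the base64 string behind the tapped image (see the linked answer):

```swift
import Foundation

// Filter the array of base64 strings directly, instead of trying to
// compare strings against a UIImageView reference.
func removeImage(base64ToDelete: String, from images: [String]) -> [String] {
    return images.filter { $0 != base64ToDelete }
}

var newAddedImages = ["aGVsbG8=", "d29ybGQ=", "Zm9v"]
newAddedImages = removeImage(base64ToDelete: "d29ybGQ=", from: newAddedImages)
// newAddedImages == ["aGVsbG8=", "Zm9v"]
```

Unlike the original loop, nothing is re-appended, so no duplicates are created.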
Try the === operator,
or add some property to identify your image; all UIView subclasses also have a .tag property which can be used as an identifier.
Update: if you are trying to compare a base64 string with a UIImageView, then it seems like you're doing something wrong; it's better to store UIImageViews instead of base64 strings. Think of your app in abstractions: all the UI/visual parts belong to the view abstraction, and the data (e.g. base64 strings coming from the server) belongs to the data abstraction, so you shouldn't mix them up. I don't know the context of your task, but here are some pointers I can give:
1) get base64 strings from the service/API/etc. (this is the data abstraction)
2) use some helper (e.g. a Swift class with a function) to make a UIImage (there goes the view abstraction)
3) operate on your UIViews as you wish
But this is a very simplified explanation; I highly recommend reading more about architecture patterns, such as MVVM for example.

Base64 Or UIImage.

Based on another question, a base64-encoded image is roughly 37% larger than the raw binary data.
With that in mind, consider this case:
An API client responds with an array of objects.
Each object contains a lot of properties, but the one that matters is a property with the key ImageString64, which holds an image with a max width & height of 300x300.
Desired solution:
I want the fastest, yet memory-friendly, way to decode those images.
Notes to consider:
I do know how to decode them; the question is where, and why — does it even matter?
Options I have: // You can add yours if you have a better one
1- inside the request's response callback
2- pass those objects around with their ImageString64 as plain strings, and decode them inside the UIViewController.
More general information about the response:
1- the maximum number of objects in a response is between 6 and 9.
2- each response object has 17 keys.
If you need to work with the base64-decoded image, I would transform it neither in the request callback nor in the view controller.
You should have some kind of internal repository from which your view controllers get the object instances. The repository contains the raw data, with the base64 string.
When you access one of the object instances from the repository, that is the point to transform its base64 string into a UIImage instance. The view controller should only ever see the UIImage.
To get good performance for transforming and showing the data, you should do this transform before trying to display it, but, as mentioned, only if it's needed.
If you want to display the data in an UITableView or UICollectionView you should use the prefetching protocols UICollectionViewDataSourcePrefetching (https://developer.apple.com/documentation/uikit/uicollectionviewdatasourceprefetching/prefetching_collection_view_data) or UITableViewDataSourcePrefetching (https://developer.apple.com/documentation/uikit/uitableviewdatasourceprefetching).
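As a sketch of that repository idea (all names are illustrative): the raw base64 strings stay in the repository, each one is decoded at most once, and the decoded result is cached. In a real app the cached value would be a UIImage built from the Data; plain Data stands in here so the sketch stays platform-neutral:

```swift
import Foundation

// Repository that owns the raw base64 strings and decodes lazily,
// caching each decoded payload so repeated access is cheap.
final class PhotoRepository {
    private let base64Strings: [String]
    private var cache: [Int: Data] = [:]

    init(base64Strings: [String]) {
        self.base64Strings = base64Strings
    }

    // The caller (a view controller) only ever sees decoded bytes.
    func imageData(at index: Int) -> Data? {
        if let cached = cache[index] { return cached }
        guard index < base64Strings.count,
              let data = Data(base64Encoded: base64Strings[index]) else { return nil }
        cache[index] = data
        return data
    }
}

let repo = PhotoRepository(base64Strings: ["aGVsbG8="]) // base64 of "hello"
let decoded = repo.imageData(at: 0)
```

With 6-9 objects of ~300x300 images per response, decoding everything eagerly in the response callback is also viable; the lazy/cached variant simply avoids paying for images that are never displayed.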
What's your use case for displaying the images?

UIImage from NSInputStream

I'm downloading a 2400x1600 image from Parse, and I don't want to hold all that data in memory at once. The PFFile object from Parse has a convenient method to get the NSData as an NSInputStream, so when the data is finally downloaded I end up with an NSInputStream.
Now I want to use that NSInputStream to get my UIImage. It should work like creating a UIImage with the contents-of-file method, i.e. without the whole image being loaded into memory at once.
I think that writing to a file from the NSInputStream and then using UIImage's contents-of-file method should work fine in my case, but I have found no way to write to a file from an NSInputStream.
Any code example or some guideline would be really appreciated.
Thanks in advance.
To accomplish this you can set up an NSOutputStream and stream the received data to a file. Create your output stream using initToFileAtPath:append: with YES for append. In your input stream callback, pass the data to your output stream by calling write:maxLength: (read more in the docs). Once the stream is complete, you have the full image on file without ever having had it fully in memory.
Henri's answer above is more appropriate since you're using Parse, but this is the general solution.
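The steps above can be sketched in Swift with Foundation's stream types (the function name and buffer size are illustrative; in the Parse case the input stream would come from the PFFile). This variant owns the whole copy in one loop, so it opens the output with append: false, whereas the append: YES in the answer applies when you write incrementally from repeated callbacks:

```swift
import Foundation

// Drain an InputStream into a file via an OutputStream, so the full
// payload never has to sit in memory at once.
func copy(stream input: InputStream, toFileAtPath path: String) -> Bool {
    guard let output = OutputStream(toFileAtPath: path, append: false) else {
        return false
    }
    input.open()
    output.open()
    defer { input.close(); output.close() }

    var buffer = [UInt8](repeating: 0, count: 16 * 1024)
    while input.hasBytesAvailable {
        let read = input.read(&buffer, maxLength: buffer.count)
        if read < 0 { return false }   // read error
        if read == 0 { break }         // end of stream
        var written = 0
        while written < read {         // handle partial writes
            let n = output.write(&buffer[written], maxLength: read - written)
            if n <= 0 { return false } // write error
            written += n
        }
    }
    return true
}
```

Because InputStream(data:) behaves the same as a network-backed stream, the copy logic is easy to exercise in isolation.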
In the documentation on iOS/OS X, Parse gives this example:
Retrieving the image back involves calling one of the getData variants on the PFFile. Here we retrieve the image file off another UserPhoto named anotherPhoto:
PFFile *userImageFile = anotherPhoto[@"imageFile"];
[userImageFile getDataInBackgroundWithBlock:^(NSData *imageData, NSError *error) {
    if (!error) {
        UIImage *image = [UIImage imageWithData:imageData];
    }
}];
Now, I don't quite see the reason for you to use NSInputStream, mainly for two reasons:
NSInputStream is meant for reading data in, not for writing it out somewhere.
NSInputStream is meant for streaming, i.e. for scenarios in which you want to do something with the data as it is coming in; from your description it seems you only care about the data once the download has completed.
In short, you should be using the aforementioned way, unless you truly care about the way the data is loaded in, for example wanting to manipulate it as it comes in (highly unlikely in the case you describe).
As to having it all in memory at once: the dimensions you give are not that large. Yes, you could stream it into a file, but assuming you want to show it full-size in the app, the memory problem would appear at some point anyway, i.e. you would just be postponing the inevitable. If that is not the case (not showing it full-size), then it might be a good idea to chop the source image up into tiles and use those instead; it is far quicker to download specific tiles, and easier on memory.
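The tiling idea boils down to splitting the image into a grid of fixed-size pixel rectangles so that only the visible tiles need to be downloaded or decoded. A minimal sketch of the grid computation (the function name is illustrative; edge tiles may be smaller than tileSize):

```swift
import Foundation

// Compute the pixel rectangle of every tile in a grid covering an image.
func tileRects(imageWidth: Int, imageHeight: Int, tileSize: Int) -> [(x: Int, y: Int, w: Int, h: Int)] {
    var rects: [(x: Int, y: Int, w: Int, h: Int)] = []
    var y = 0
    while y < imageHeight {
        var x = 0
        while x < imageWidth {
            // Clamp edge tiles to the image bounds.
            rects.append((x, y, min(tileSize, imageWidth - x), min(tileSize, imageHeight - y)))
            x += tileSize
        }
        y += tileSize
    }
    return rects
}

// The 2400x1600 image from the question splits into 6 x 4 = 24 tiles of 400px.
let rects = tileRects(imageWidth: 2400, imageHeight: 1600, tileSize: 400)
// rects.count == 24
```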

kCGImageStatusUnknownType when calling CGImageSourceCreateWithDataProvider from UIGraphicsGetImageFromCurrentImageContext

I'm trying to create a CGImageSourceRef from a CGDataProviderRef. The CGDataProviderRef comes from a UIImage that was created with UIGraphicsGetImageFromCurrentImageContext. When I do this and check the status of my image source, it's returning kCGImageStatusUnknownType.
I've tried giving it a hint with kCGImageSourceTypeIdentifierHint but to no avail.
Does anyone know how I can create this CGImageSource?
Thanks!
You tell the source that you are giving it PNG data, but you are not. The data provider that you get from the CGImageGetDataProvider function provides raw pixel data, not data in some external format.
The purpose of a CGImageSource is to decode data in some external format (such as PNG) into CGImages. It can't handle pixel data; when you have pixel data, you can just create the CGImage from it directly. Of course, given that you already have the final image, I'm not sure why you're then trying to create it again from any kind of data at all.
If you want to generate some external data to subsequently (as in, after you release this image, possibly in another process) be loaded back in by a CGImageSource, use a CGImageDestination.

Converting image to byte array? [closed]

Closed 12 years ago.
What is the need/purpose of converting an image to a byte array?
Why do we need to do this?
What's the purpose of converting an Image to a byte array?
Saving an image to disk
Serializing the image to send to a web service
Saving the image to a database
Serializing the image for processing
...are just a few that I can think of off the top of my head.
I think the most common use would be saving an image in a database (a VARBINARY(MAX) column).
Most common is to persist the image in the database as a BLOB (binary large object), but it could also be used in web services, where you could take a byte array as a method argument and then convert it back to an image for whatever reason.
The only time I've ever needed to do this was to compare two images pixel-by-pixel to see if they were identical (as part of an automated test suite). Converting to bytes and pinning the memory allowed me to use an unsafe block in C# to do a direct pointer-based comparison, which was orders of magnitude faster than GetPixel.
Recently I wrote some code to get a hash from an image; here is how:
private static readonly MD5 md5 = MD5.Create(); // System.Security.Cryptography
private ImageConverter c = new ImageConverter();

private byte[] getBitmapHash(Bitmap hc) {
    return md5.ComputeHash(c.ConvertTo(hc, typeof(byte[])) as byte[]);
}
Here is this in context. Serializing an image, or adding it to a database in raw byte format (without a MIME type), is not always the sensible choice, but you can do it. Image processing and cryptography are the most likely places where this is useful.
To further generalize what Brad said: Serialization and (probably) a basis for object comparison.
It is also helpful if you have an image in memory and want to send it to someone via a protocol (HTTP, for example). A perfect example would be the method "AddBytesForUpload" in the Chilkat HTTPRequest class [http://www.chilkatsoft.com/refdoc/vbnetHttpRequestRef.html].
Why would you ever need to do this, you may ask... Well, let's assume we have a directory of images we want to auto-upload to ImageShack, but we want to make some modifications beforehand, like putting our domain name at the bottom right. With the byte array, you load the image in memory, make the modifications you need, then simply attach that stream to the HTTPRequest object. Without it, you would first need to save to a file before uploading, which in turn either creates a new file you then need to delete afterwards, or overwrites the original, which is not always ideal.
public byte[] imageToByteArray(System.Drawing.Image image)
{
    using (var ms = new MemoryStream())
    {
        image.Save(ms, System.Drawing.Imaging.ImageFormat.Gif);
        return ms.ToArray();
    }
}
If it is an image file, use File.ReadAllBytes().
