Saving CGImage with image properties - iOS

I'm working on a photo app that needs to save images with user specified properties such as number of inches across the long edge at a specific resolution, say 400dpi or whatever the user specifies. So, I need to write out a CGImage with the ability to set image dimensions, resolution, colorspace, etc. I haven't yet found a way to do this at the CGImage level. I'm currently using the iOS port of ImageMagick and as amazing as it is, I'd like to use methods intrinsic to iOS.
Any help would be greatly appreciated....

CGImage is an immutable, opaque, in-memory image representation.
I believe what you are looking for instead is the Image I/O functionality, located in the aptly titled ImageIO.framework. Create a CGImageDestination, then specify its properties using CGImageDestinationSetProperties.
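For example, something along these lines (a minimal Swift sketch; the writeImage helper name is mine) writes a CGImage out with explicit DPI metadata. Per-image properties like DPI can also be passed directly to CGImageDestinationAddImage; CGImageDestinationSetProperties applies properties to the destination as a whole.
import Foundation
import CoreGraphics
import ImageIO
import MobileCoreServices  // kUTTypeTIFF

// Hypothetical helper: write a CGImage to `url` with explicit DPI metadata.
// The pixel dimensions and color space come from the CGImage itself; the DPI
// keys only set the resolution recorded in the file.
func writeImage(_ image: CGImage, to url: URL, dpi: Double) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeTIFF, 1, nil)
        else { return false }
    let properties: [CFString: Any] = [
        kCGImagePropertyDPIWidth: dpi,
        kCGImagePropertyDPIHeight: dpi
    ]
    CGImageDestinationAddImage(destination, image, properties as CFDictionary)
    return CGImageDestinationFinalize(destination)
}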

Related

SceneKit, what is the most efficient way to load and replace textures on the fly? UIImage, MTLTexture, MDLTexture, URL?

I need to load and replace various textures at runtime as quickly as possible, while keeping memory usage as low as possible. Unused old textures should be removed from VRAM ASAP. Rewriting everything in Metal might help, but that's a lot of work.
As we can see from Apple's documents, the texture could be:
UIImage / NSImage / MTLTexture / MDLTexture / String / URL
So, which one is best? I tested some of them, but they behave differently on macOS and iOS.
File paths and URLs are better because they don't pre-allocate memory for the texture data. When dealing with a UIImage or CGImageRef, for instance, SceneKit has no way to discard the original image data on your behalf, and that data will likely not match the color space and/or pixel format SceneKit wants to use internally.
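To illustrate (a minimal Swift sketch; the asset names are placeholders), handing SceneKit a URL lets it decide when and how to load and convert the texture:
import SceneKit

// Minimal sketch: assign a file URL instead of a pre-decoded UIImage, so SceneKit
// can load and convert the texture lazily in its preferred internal format.
// "texture" / "texture_v2" are placeholder asset names.
let material = SCNMaterial()
material.diffuse.contents = Bundle.main.url(forResource: "texture", withExtension: "png")

// Replacing the texture later is just another assignment; the old contents
// become eligible for release once nothing references them.
material.diffuse.contents = Bundle.main.url(forResource: "texture_v2", withExtension: "png")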

Large images with Direct2D

I am currently developing an application for the Windows Store that does real-time image processing using Direct2D. It must support various image sizes. The first problem I have faced is how to handle situations where the image is larger than the maximum supported texture size. After some research and documentation reading, I found VirtualSurfaceImageSource as a solution. The idea was to load the image as an IWICBitmap, then create a render target with CreateWICBitmapRenderTarget (which, as far as I know, is not hardware accelerated). After some drawing operations I wanted to display the result on screen by invalidating the corresponding region in the VirtualSurfaceImageSource, or when the NeedUpdate callback fires. I supposed it would be possible to do this by creating an ID2D1Bitmap (hardware accelerated) and calling CopyFromRenderTarget with the render target created by CreateWICBitmapRenderTarget and the invalidated region as bounds, but the method returns D2DERR_WRONG_RESOURCE_DOMAIN. Another reason for using IWICBitmap is that one of the algorithms in the application must be able to update the pixels of the image.
The question is: why doesn't this logic work? Is this the right way to achieve my goal with Direct2D? Also, since the render target created with CreateWICBitmapRenderTarget is not hardware accelerated, what is the best solution if I want to do my image processing on the GPU with images larger than the maximum allowed texture size?
Thank you in advance.
You are correct that images larger than the texture limit must be handled in software.
However, the question to ask is whether or not you need that entire image every time you render.
You can use hardware acceleration to render a portion of the large image that is loaded in a software target.
For example:
Use ID2D1RenderTarget::CreateSharedBitmap to make a bitmap that can be used by different resources.
Then create an ID2D1BitmapRenderTarget and render the large bitmap into it (making sure to call BeginDraw, Clear, DrawBitmap, EndDraw). Both the bitmap and the render target can be cached for use by successive calls.
Then copy from that render target into a regular ID2D1Bitmap, taking only the portion that will fit into texture memory, using the ID2D1Bitmap::CopyFromRenderTarget method.
Finally, draw that to the real render target with pRT->DrawBitmap.

Handle large images in iOS

I want to allow the user to select a photo, without limiting the size, and then edit it.
My idea is to create a thumbnail of the large photo with the same size as the screen for editing, and then, when the editing is finished, use the large photo to make the same edit that was performed on the thumbnail.
When I use UIGraphicsBeginImageContext to create a thumbnail image, it causes a memory issue.
I know it's hard to edit the whole large image directly due to hardware limits, so I want to know if there is a way to downsample the large image to less than 2048x2048 without memory issues.
I found that Android has a BitmapFactory class with an inSampleSize option that can downsample a photo. How can this be done on iOS?
You need to handle the image loading using UIImage, which doesn't actually load the image data into memory, and then create a bitmap context at the size of the resulting image that you want (so this determines the amount of memory used). Then you need to iterate a number of times, drawing tiles from the original image (this is where parts of the image data are loaded into memory) using CGImageCreateWithImageInRect into the destination context with CGContextDrawImage.
See this sample code from Apple.
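If you can't use Apple's sample directly, here is a rough Swift sketch of the same idea under my own assumptions (the function name and the 512-row tile height are illustrative): draw the source strip by strip into a destination context that is only as large as the result you want.
import UIKit

// Rough sketch of the tiling approach described above. Only one source strip at a
// time needs to be decoded, while the destination context stays at the small target size.
func downsample(_ source: CGImage, to targetSize: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(targetSize, true, 1.0)
    defer { UIGraphicsEndImageContext() }

    let scaleX = targetSize.width / CGFloat(source.width)
    let scaleY = targetSize.height / CGFloat(source.height)
    let tileHeight = 512  // rows of the source per tile; tune for your memory budget

    var y = 0
    while y < source.height {
        let rows = min(tileHeight, source.height - y)
        // CGImageCreateWithImageInRect: only this strip of the source gets decoded.
        guard let tile = source.cropping(to: CGRect(x: 0, y: y, width: source.width, height: rows))
            else { break }
        let destRect = CGRect(x: 0,
                              y: CGFloat(y) * scaleY,
                              width: CGFloat(source.width) * scaleX,
                              height: CGFloat(rows) * scaleY)
        UIImage(cgImage: tile).draw(in: destRect)
        y += rows
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}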
Large images don't fit in memory. So loading them into memory to then resize them doesn't work.
To work with very large images you have to tile them. There are lots of solutions out there already; for example, see if this can solve your problem:
https://github.com/dhoerl/PhotoScrollerNetwork
I implemented my own custom solution, but that was specific to our environment, where we already had an image tiler running server-side and I could just request specific tiles of large images (made a server for it; it's really cool).
The reason tiling works is that you basically only ever keep the visible pixels in memory, and there aren't that many of those. All tiles not currently visible are kept out in the disk cache, or flash-memory cache as it were.
Take a look at this work by Trevor Harmon. It improved my app's performance. I believe it will work for you too.
https://github.com/coryalder/UIImage_Resize

What is the most memory-efficient way of downscaling images on iOS?

In a background thread, my application needs to read images from disk, downscale them to the screen size (1024x768 or 2048x1536), and save them back to disk. The original images are mostly from the Camera Roll, but some of them may be larger (e.g. 3000x3000).
Later, in a different thread, these images will frequently get downscaled to different sizes around 500x500 and saved to the disk again.
This leads me to wonder: what is the most efficient way to do this in iOS, performance and memory-wise? I have used two different APIs:
using CGImageSource and CGImageSourceCreateThumbnailAtIndex from ImageIO;
drawing to CGBitmapContext and saving results to disk with CGImageDestination.
Both worked for me but I'm wondering if they have any difference in performance and memory usage. And if there are better options, of course.
While I can't definitely say it will help, I think it's worth trying to push the work to the GPU. You can either do that yourself by rendering a textured quad at a given size, or use GPUImage and its resizing capabilities. While it has some texture-size limitations on older devices, it should have much better performance than a CPU-based solution.
With libjpeg-turbo you can use the scale_num and scale_denom fields of jpeg_decompress_struct, and it will decode only the needed blocks of the image. It gave me 250 ms for decoding plus scaling in a background thread on a 4S, going from a 3264x2448 original image (from the camera, with the image data already in memory) to the iPhone's display resolution. I guess that's OK for an image that large, but still not great.
(And yes, that is memory efficient. You can decode and store the image almost line by line)
What you said on Twitter does not match your question.
If you are having memory spikes, look at Instruments to figure out what is consuming the memory. Just the data alone for your high resolution image is 10 megs, and your resulting images are going to be about 750k, if they contain no alpha channel.
The first issue is keeping memory usage low. For that, make sure that all of the images you load are disposed of as soon as you are done using them; that ensures the underlying C/Objective-C API releases the memory immediately instead of waiting for the GC to run. So, something like:
using (var img = UIImage.FromFile ("...")){
    using (var scaled = Scaler (img)){
        scaled.Save (...);
    }
}
As for the scaling, there are a variety of ways of scaling the images. The simplest way is to create a context, then draw on it, and then get the image out of the context. This is how MonoTouch's UIImage.Scale method is implemented:
public UIImage Scale (SizeF newSize)
{
    UIGraphics.BeginImageContext (newSize);
    Draw (new RectangleF (0, 0, newSize.Width, newSize.Height));
    var scaledImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return scaledImage;
}
The performance will be governed by the context features that you enable. For example, a higher-quality scaling would require changing the interpolation quality:
context.InterpolationQuality = CGInterpolationQuality.High;
The other option is to run your scaling not on the CPU, but on the GPU. To do that, you would use the CoreImage API and use the CIAffineTransform filter.
As to which one is faster, that is something left for someone else to benchmark:
CGImage Scale (string file)
{
    var ciimage = CIImage.FromCGImage (UIImage.FromFile (file).CGImage);
    // Create an AffineTransform that scales the image to half its size
    var transform = CGAffineTransform.MakeScale (0.5f, 0.5f);
    var affineTransform = new CIAffineTransform () {
        Image = ciimage,
        Transform = transform
    };
    var output = affineTransform.OutputImage;
    var context = CIContext.FromOptions (null);
    return context.CreateCGImage (output, output.Extent);
}
If either of the two is more efficient, it'll be the former.
When you create a CGImageSource you create just what the name says — some sort of opaque thing from which an image can be obtained. In your case it'll be a reference to a thing on disk. When you ask ImageIO to create a thumbnail you explicitly tell it "do as much as you need to output this many pixels".
Conversely if you draw to a CGBitmapContext then at some point you explicitly bring the whole image into memory.
So the second approach definitely has the whole image in memory at once at some point. Conversely the former needn't necessarily (in practice there'll no doubt be some sort of guesswork within ImageIO as to the best way to proceed). So across all possible implementations of the OS either the former will be advantageous or there'll be no difference between the two.
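For reference, a minimal Swift sketch of the ImageIO route (the function name and option choices are mine):
import Foundation
import CoreGraphics
import ImageIO

// Ask ImageIO for a thumbnail no larger than `maxPixelSize` on its long edge,
// without decoding the full-resolution image first.
func downscaledImage(at url: URL, maxPixelSize: Int) -> CGImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
        kCGImageSourceCreateThumbnailWithTransform: true  // respect EXIF orientation
    ]
    return CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
}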
I would try using a C-based library like Leptonica. I'm not sure whether iOS optimizes Core Graphics with the relatively new Accelerate framework, but Core Graphics probably has more overhead involved just to resize an image. Finally, if you want to roll your own implementation, try using vImageScale_??format?? backed by some memory-mapped files; I can't see anything being faster.
http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vImage/Introduction/Introduction.html
PS. Also make sure to check the compiler optimization flags.
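In case it helps, here is a rough Swift sketch of the vImage route for 8-bit, 4-channel pixels (the helper name and pixel format are my assumptions; other formats use the corresponding vImageScale_* variant). It assumes the decoded source fits in memory and leaves out the memory-mapped file part of the suggestion.
import Accelerate
import CoreGraphics

// Render the source into an RGBA bitmap context, wrap both buffers as
// vImage_Buffers, and let vImageScale_ARGB8888 do the resampling.
func vImageDownscale(_ image: CGImage, to size: CGSize) -> CGImage? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue

    func makeContext(width: Int, height: Int) -> CGContext? {
        return CGContext(data: nil, width: width, height: height, bitsPerComponent: 8,
                         bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo)
    }

    guard let srcContext = makeContext(width: image.width, height: image.height),
          let dstContext = makeContext(width: Int(size.width), height: Int(size.height)),
          let srcData = srcContext.data, let dstData = dstContext.data
        else { return nil }

    srcContext.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))

    var src = vImage_Buffer(data: srcData,
                            height: vImagePixelCount(image.height),
                            width: vImagePixelCount(image.width),
                            rowBytes: srcContext.bytesPerRow)
    var dst = vImage_Buffer(data: dstData,
                            height: vImagePixelCount(Int(size.height)),
                            width: vImagePixelCount(Int(size.width)),
                            rowBytes: dstContext.bytesPerRow)

    guard vImageScale_ARGB8888(&src, &dst, nil, vImage_Flags(kvImageHighQualityResampling)) == kvImageNoError
        else { return nil }

    return dstContext.makeImage()
}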
If you want to save memory, you can read the source image tile by tile, compress each tile, and save it to the destination.
There is an example from Apple that implements this approach.
https://developer.apple.com/library/ios/samplecode/LargeImageDownsizing/Introduction/Intro.html
You can download this project and run it. It uses MRC, so you can use it very smoothly.
Hope it helps. :)

iOS Image Processing from JPEG

I have a stupid question. I want to load a JPEG file and do some image processing. For the processing, the image must be accessible pixel by pixel (as unsigned char*), or in .bmp format. I want to know how I can do this.
After processing, I want to save the file as .bmp or .jpg. How can I do that?
Thanks very much and sorry for my poor English.
The link in the comments is useful but not exactly what I think you want to do. You will have to read up a bit on Quartz, Apple's technology for drawing and image processing. Apple has great documentation on this, but it's not a simple one-hour effort; plan on at least a day, maybe more.
Assume you start with a UIImage (I'm guessing; it's not all that critical):
get a CGImageRef to that image (possibly [UIImage CGImage])
create a CGBitmapContext that is large enough to render that image into (you can ask a CGImage for its width, height, etc.).
if you want to ultimately create a PNG with alpha, you will need to create a bitmap context big enough for alpha, even if the JPEG image does not have it. If you want another JPEG your job is a bit easier.
render the image into the bitmap context (there is a CGContextDraw... routine for that)
Now your image is contained in a bitmap that you can read and modify. The layout will probably be r-g-b unless you specified alpha, in which case you will have to determine where the alpha byte is.
Once you have modified the bitmap, you can create a new CGImage from it, and from that you can get a UIImage if you want.
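Putting those steps together, a rough Swift sketch (the pixel layout, 8-bit RGBA with premultiplied alpha, and the sample edit are my own assumptions for illustration):
import UIKit

// Render the image into an RGBA bitmap context, modify the raw bytes,
// and rebuild a UIImage from the result.
func processPixels(of image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height

    // Step 2: a bitmap context big enough for the image (with alpha).
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,  // let Core Graphics choose
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
          let buffer = context.data
        else { return nil }

    // Step 3: render the image into the bitmap context.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Step 4: touch the pixels. Example edit: invert the red channel.
    let pixels = buffer.bindMemory(to: UInt8.self, capacity: context.bytesPerRow * height)
    for row in 0..<height {
        for col in 0..<width {
            let offset = row * context.bytesPerRow + col * 4
            pixels[offset] = 255 - pixels[offset]
        }
    }

    // Step 5: make a new CGImage from the modified bitmap, then a UIImage.
    guard let output = context.makeImage() else { return nil }
    return UIImage(cgImage: output)
}
To write the result back out as a .jpg, UIImageJPEGRepresentation (or UIImage's jpegData in newer SDKs) will encode it; for .bmp you would go through Image I/O (CGImageDestination) with a BMP type identifier, assuming the platform supports that encoder.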
PS: You will find plenty of sample code and posts about this if you search, so you will not be totally on your own.
