I used the Time Profiler tool to identify that 95% of the time is spent calling the function CGContextDrawImage.
In my app a lot of duplicate images are repeatedly being chopped from a sprite map and drawn to the screen. I was wondering if it was possible to cache the output of CGContextDrawImage in an NSMutableDictionary; then, if the same sprite is requested again, it can just be pulled from the cache rather than doing all the work of clipping and rendering it again. This is what I've got, but I have not been too successful:
Definitions
if (cache == NULL) cache = [[NSMutableDictionary alloc] init];
//Identifier based on the name of the sprite and location within the sprite.
NSString *identifier = [NSString stringWithFormat:@"%@-%d", filename, frame];
Adding to cache
UIGraphicsBeginImageContext(clipRect.size); //open an image context to capture into (implied by the Get/End calls below)
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect clippedRect = CGRectMake(0, 0, clipRect.size.width, clipRect.size.height);
CGContextClipToRect(context, clippedRect);
//create a rect equivalent to the full size of the image
//offset the rect by the X and Y we want to start the crop
//from in order to cut off anything before them
CGRect drawRect = CGRectMake(clipRect.origin.x * -1,
                             clipRect.origin.y * -1,
                             atlas.size.width,
                             atlas.size.height);
//draw the image to our clipped context using our offset rect
CGContextDrawImage(context, drawRect, atlas.CGImage);
[cache setValue:UIGraphicsGetImageFromCurrentImageContext() forKey:identifier];
UIGraphicsEndImageContext();
Rendering a cached sprite
There is probably a better way to render a CGImage, which is my ultimate caching goal, but at the moment I'm just looking to successfully render the cached image out. However, this has not been successful:
UIImage* cachedImage = [cache objectForKey:identifier];
if(cachedImage){
NSLog(@"Cached %@", identifier);
CGRect imageRect = CGRectMake(0,
                              0,
                              cachedImage.size.width,
                              cachedImage.size.height);
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0);
else
UIGraphicsBeginImageContext(imageRect.size);
//Use draw for now just to see if the image renders out OK
CGContextRef context = UIGraphicsGetCurrentContext(); //draw into the image context opened above
CGContextDrawImage(context, imageRect, cachedImage.CGImage);
UIGraphicsEndImageContext();
}
Yes, it's possible to cache a rendered image. Below is a sample of how it's done:
+ (UIImage *)getRenderedImage:(UIImage *)image targetSize:(CGSize)targetSize
{
    CGRect targetRect = CGRectIntegral(CGRectMake(0, 0, targetSize.width, targetSize.height)); // should be used by your drawing code
    CGImageRef imageRef = image.CGImage; // should be used by your drawing code
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // TODO: draw and clip your image here onto context
    // (CGContextDrawImage / CGContextClipToRect calls)
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
This way, you get a rendered copy of the resource image. Because you have the context during rendering, you are free to do anything you want; you just need to determine the output size beforehand.
The resulting image is just an instance of UIImage that you can put into NSMutableDictionary for later use.
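For illustration, here is a minimal sketch of the look-up-then-render flow. The spriteCache dictionary, the CachedSprite function, and the SpriteRenderer class assumed to host the method above are all hypothetical names; the identifier scheme is the one from the question:

// Hypothetical cache wrapper around the rendering method above.
static NSMutableDictionary *spriteCache = nil;

UIImage *CachedSprite(NSString *filename, int frame, UIImage *atlas, CGRect clipRect)
{
    if (spriteCache == nil) spriteCache = [[NSMutableDictionary alloc] init];
    NSString *identifier = [NSString stringWithFormat:@"%@-%d", filename, frame];
    UIImage *sprite = [spriteCache objectForKey:identifier];
    if (sprite == nil) {
        // Render once, cache, then reuse on every later request.
        sprite = [SpriteRenderer getRenderedImage:atlas targetSize:clipRect.size];
        [spriteCache setObject:sprite forKey:identifier];
    }
    return sprite;
}

On iOS, NSCache can be swapped in for the dictionary so cached sprites are evicted automatically under memory pressure.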
Related
I am trying to merge two different images into a single image and save it to the photo gallery, but the sticker I am trying to merge onto the image gets stretched and is not properly aligned to the X and Y I want in the image.
The code I am using is below:
- (UIImage *)buildImage:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    CGFloat scale = image.size.width / _workingView.width;
    NSLog(@"%f", scale);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
    [_workingView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *tmp = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tmp;
}
In this code, _workingView is the view containing the sticker image, and the image argument is the main image we pass into the function.
I would be very thankful for your help. Thanks, guys.
CGFloat width, height;
UIImage *inputImage;   // input image to be composited over new image as example
UIImage *stickerImage; // sticker image to composite on top (referenced below)
// create a new bitmap image context at the device resolution (retina/non-retina)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), YES, 0.0);
// get context
CGContextRef context = UIGraphicsGetCurrentContext();
// push context to make it current
// (need to do this manually because we are not drawing in a UIView)
UIGraphicsPushContext(context);
// drawing code comes here- look at CGContext reference
// for available operations
// this example draws the inputImage into the context
[inputImage drawInRect:CGRectMake(0, 0, width, height)];
[stickerImage drawInRect:CGRectMake(yourX, yourY, otherWidth, otherHeight)];
// pop context
UIGraphicsPopContext();
// get a UIImage from the image context- enjoy!!!
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up drawing environment
UIGraphicsEndImageContext();
I have an app using the camera to take pictures. As soon as the picture is taken, I reduce the size of the image coming from the camera.
Running the method for reducing the size of the image, makes the memory usage peaks from 21 MB to 61 MB, sometimes near 69MB!
I have added @autoreleasepool to every method involved in this process. Things improved a little bit, but not as much as I expected. I don't expect the memory usage to triple when reducing an image, especially because the new image being produced is smaller.
These are the methods I have tried:
- (UIImage *)reduceImage:(UIImage *)image toSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0.0, size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, size.width, size.height), image.CGImage);
        UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return scaledImage;
    }
}
and also
- (UIImage *)reduceImage:(UIImage *)image toSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }
}
There is no difference at all between these two.
NOTE: the original image is 3264 x 2448 pixels x 4 bytes/pixel = 32 MB, and the final image is 1136 x 640, i.e. 2.9 MB... sum both numbers and you get 35 MB, not 70!
Is there a way to reduce the size of the image without making the memory usage peak into the stratosphere? Thanks.
BTW, and out of curiosity: is there a way to reduce an image's dimensions without using Quartz?
The answer is here. It uses Core Graphics and about 30-40% less memory:
#import <ImageIO/ImageIO.h>
- (UIImage *)resizedImageToRect:(CGRect)thumbRect
{
    CGImageRef imageRef = [self CGImage]; // self, since this lives in a UIImage category (see below)
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate
    // (see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section):
    // only RGB 8-bit images with alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast,
    // kCGImageAlphaPremultipliedFirst, and kCGImageAlphaPremultipliedLast,
    // plus a few other oddball image kinds, are supported.
    // The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                 // width
        thumbRect.size.height,                // height
        CGImageGetBitsPerComponent(imageRef), // really needs to always be 8
        4 * thumbRect.size.width,             // rowbytes
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );
    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);
    // Get an image from the context and wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);
    return result;
}
This is added as a category on UIImage.
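On the aside about resizing without drawing through Quartz yourself: ImageIO (imported above but otherwise unused) can decode a downscaled thumbnail straight from the file, so the full-size bitmap never has to live in memory. A minimal sketch, assuming the original camera JPEG is still on disk at a hypothetical fileURL:

// fileURL is a hypothetical NSURL pointing at the original JPEG on disk.
UIImage *ThumbnailFromFile(NSURL *fileURL, CGFloat maxPixelSize)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
    if (source == NULL) return nil;
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform   : @YES, // honor EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize          : @(maxPixelSize)
    };
    CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (thumb == NULL) return nil;
    UIImage *result = [UIImage imageWithCGImage:thumb];
    CGImageRelease(thumb);
    return result;
}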
I've a PNG loaded into a UIImage. I want to get a portion of the image based on a path (i.e. it might not be rectangular). Say it might be some shape with arcs, etc., like a drawing path.
What would be the easiest way to do that?
Thanks.
I haven't run this, so it may not be perfect, but this should give you an idea.
UIImage *imageToClip = //get your image somehow
CGPathRef yourPath = //get your path somehow
CGImageRef imageRef = [imageToClip CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, CGImageGetColorSpace(imageRef), kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
// Clip first, then draw: the clip only affects drawing that happens after it
CGContextAddPath(context, yourPath);
CGContextClip(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef clippedImageRef = CGBitmapContextCreateImage(context);
UIImage *clippedImage = [UIImage imageWithCGImage:clippedImageRef]; //your final, masked image
CGImageRelease(clippedImageRef);
CGContextRelease(context);
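A hedged usage sketch for the snippet above; the oval shape and image name are made up for illustration. Note that a UIBezierPath is built in UIKit's top-left coordinate space while the bitmap context above has a bottom-left origin, so an asymmetric path may come out vertically flipped unless you flip the CTM first:

// Clip a PNG to an oval path (illustrative names, not from the original post).
UIImage *imageToClip = [UIImage imageNamed:@"photo.png"];
CGRect bounds = CGRectMake(0, 0,
                           CGImageGetWidth(imageToClip.CGImage),
                           CGImageGetHeight(imageToClip.CGImage));
UIBezierPath *oval = [UIBezierPath bezierPathWithOvalInRect:bounds];
CGPathRef yourPath = oval.CGPath; // feed this into the snippet above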
The easiest way is to add a category on UIImage with the following method:
- (UIImage *)scaleToRect:(CGRect)rect {
    // Create a bitmap graphics context
    // This will also set it as the current context
    UIGraphicsBeginImageContext(rect.size);
    // Draw the scaled image in the current context
    [self drawInRect:rect];
    // Create a new image from the current context
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context from the stack
    UIGraphicsEndImageContext();
    // Return our new scaled image
    return scaledImage;
}
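For completeness, a sketch of the surrounding category boilerplate and a call site; the category name Scale and the bigImage variable are illustrative:

// UIImage+Scale.h
@interface UIImage (Scale)
- (UIImage *)scaleToRect:(CGRect)rect;
@end

// Usage: downscale a large camera image to 1136 x 640
UIImage *small = [bigImage scaleToRect:CGRectMake(0, 0, 1136, 640)];

Note that UIGraphicsBeginImageContext always creates a 1.0-scale context; if you want Retina output, UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0) preserves the screen scale instead.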
I'm trying to overlay a picture taken from the camera with some other preset images. The problem is that a picture from the camera might be 8 MP, which is huge in terms of memory usage, and some of the overlays might cover the whole image.
I tried multiple ways to merge them all into one image.
UIGraphicsBeginImageContextWithOptions(_imageView.image.size, NO, 1.0f);
[_containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
or
CGContextDrawImage(UIGraphicsGetCurrentContext(), (CGRect){CGPointZero, _imageView.image.size}, _imageView.image.CGImage);
CGContextDrawImage(UIGraphicsGetCurrentContext(), (CGRect){CGPointZero, _imageView.image.size}, *other image views*);
or
UIImage *image = _imageView.image;
CGImageRef imageRef = image.CGImage;
size_t imageWidth = (size_t)image.size.width;
size_t imageHeight = (size_t)image.size.height;
CGContextRef context = CGBitmapContextCreate(NULL, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef), CGImageGetBitmapInfo(imageRef));
CGRect rect = (CGRect){CGPointZero, {imageWidth, imageHeight}};
CGContextDrawImage(context, rect, imageRef);
CGContextDrawImage(context, rect, **other images**);
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImage *resultImage = nil;
NSURL *url = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"x.jpg"]];
CFURLRef URLRef = CFBridgingRetain(url);
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(URLRef, kUTTypeJPEG, 1, NULL);
CFRelease(URLRef); // balance the CFBridgingRetain above
if (destination != NULL)
{
    CGImageDestinationAddImage(destination, newImageRef, NULL);
    if (CGImageDestinationFinalize(destination))
    {
        resultImage = [[UIImage alloc] initWithContentsOfFile:url.path];
    }
    CFRelease(destination);
}
CGImageRelease(newImageRef);
All of these work, but essentially double the current memory usage.
Is there any way to compose them all together without the need to create a new context? Maybe save all the images to the file system and do the merging there without actually consuming tons of memory? Or maybe even render to the file system tile by tile?
Any suggestion or pointer on where to go from here?
Thanks
Check out the following code. You can send multiple images to the following method; however, as you are facing memory issues, I suggest you call the same method repeatedly to merge multiple images, as shown in the sketch after the code. This process might take more time, though.
- (CGImageRef)mergedImageFromImageOne:(UIImage *)imageOne andImageTwo:(UIImage *)imageTwo
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CGSize imageSize = imageOne.size;
    UIGraphicsBeginImageContext(imageSize);
    [imageOne drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
    [imageTwo drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height) blendMode:kCGBlendModeNormal alpha:1];
    CGImageRef imageRefNew = CGImageCreateWithImageInRect(UIGraphicsGetImageFromCurrentImageContext().CGImage, CGRectMake(0, 0, imageSize.width, imageSize.height));
    UIGraphicsEndImageContext();
    [pool release];
    return imageRefNew;
}
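A sketch of the repeated-call pattern suggested above; baseImage and overlays are illustrative names. Since the method returns a +1 CGImageRef (it comes from CGImageCreateWithImageInRect), each intermediate result must be released:

// Merge several overlays one at a time to keep peak memory low.
UIImage *merged = baseImage;
for (UIImage *overlay in overlays) {
    CGImageRef mergedRef = [self mergedImageFromImageOne:merged andImageTwo:overlay];
    merged = [UIImage imageWithCGImage:mergedRef];
    CGImageRelease(mergedRef); // balance the +1 reference returned by the method
}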
Hope this helps.
Is it possible to apply an alpha value to only a region of a single image?
The easiest solution would be to break the image apart and save the alpha as part of a PNG, then organize the image views to be flush against each other (a sketch of that approach follows the code below).
Otherwise, I wrote this quick code in a regular view that does the same with an image. (I'm relatively new to Core Graphics, so I'm sure there are better ways of doing this; also, in my example the images are side by side.)
- (void)drawRect:(CGRect)rect {
    // GET THE CONTEXT, THEN FLIP THE COORDS (my view is 189 points tall)
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flip = CGAffineTransformMake(1, 0, 0, -1, 0, 189);
    CGContextConcatCTM(context, flip);
    // GET THE IMAGE REF
    UIImage *targetImage = [UIImage imageNamed:@"test.jpg"];
    CGImageRef imageRef = targetImage.CGImage;
    // SET THE COORDS
    CGRect imageCoords = CGRectMake(0, 0, 116, 189);
    CGRect imageCoordsTwo = CGRectMake(116, 0, 117, 189);
    // CUT UP THE IMAGE INTO TWO IMAGES
    CGImageRef firstImage = CGImageCreateWithImageInRect(imageRef, imageCoords);
    CGImageRef secondImage = CGImageCreateWithImageInRect(imageRef, imageCoordsTwo);
    // DRAW FIRST IMAGE, SAVE THE STATE, THEN SET THE TRANSPARENCY AMOUNT
    CGContextDrawImage(context, imageCoords, firstImage);
    CGContextSaveGState(context);
    CGContextSetAlpha(context, .4f);
    // DRAW SECOND IMAGE, RESTORE THE STATE
    CGContextDrawImage(context, imageCoordsTwo, secondImage);
    CGContextRestoreGState(context);
    // TIDY UP
    CGImageRelease(firstImage);
    CGImageRelease(secondImage);
}
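And here is the sketch of the first suggestion mentioned above: split the picture into separate PNGs and butt the image views up against each other, applying alpha per view. The image names, frames, and containerView are illustrative:

// Two halves of the picture saved as separate PNGs, laid out flush side by side;
// the alpha is applied to one whole view rather than a region of one image.
UIImageView *leftHalf = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"test-left.png"]];
leftHalf.frame = CGRectMake(0, 0, 116, 189);
UIImageView *rightHalf = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"test-right.png"]];
rightHalf.frame = CGRectMake(116, 0, 117, 189);
rightHalf.alpha = 0.4f; // only this part appears translucent
[containerView addSubview:leftHalf];
[containerView addSubview:rightHalf];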