Is drawRect "wasteful" when cropping? Is there an alternative?

Let's say you have an original image that is
200 high, 100 wide
Let's say you want to draw only a square of it. Let's say, just the bottom square.
Let's say you want to draw it on to a new small image that is
20 high, 20 wide
Of course, you simply do this:
CGRect imageRect = CGRectMake(0, -20, 20, 40);  // whole image scaled to 20 x 40, shifted up so only the bottom square lands on the 20 x 20 canvas
.. begin graphics context ..
[originalImage drawInRect:imageRect];
With drawInRect:, you supply a rectangle with the same proportions as the whole original image, but expressed in the coordinate space of the new canvas. No problem.
BUT:
in the example, you are drawing THE WHOLE ORIGINAL IMAGE -- THE WHOLE 200 HEIGHT on to the new small square.
(Of course the "top half" misses the new canvas, and you only get the bottom half on the new canvas -- which is what you wanted.)
My impression is iOS renders or calculates the "whole" original image, and it only "puts on" the bottom half (in the example) on to the new canvas.
This seems very wasteful.
IS THERE A FASTER WAY TO DO THIS?
It seems like there should be a command, something like this:
drawThisPartOfTheOriginalImage: (0,100 to 100,200)
ontoThisPartOfTheNewCanvas: (0,0 to 20,20)
What's the situation? Is there a more efficient command than drawInRect: when you are only drawing a small part of the original image? Cheers
The CGContextClipToRect approach (it doesn't help!)
I experimented with CGContextClipToRect as Peter suggested below.
CGContextClipToRect indeed restricts drawing to the area you specify on your "result" canvas. I simply set it to the size of that result canvas (20 x 20 in the example above). To repeat, the aim is to have iOS save time by not pointlessly rendering the part of the original that never lands on the canvas.
This example is for a 2000 x 2000 original drawn onto a 500 x 500 result (i.e., only the top-left quarter of the original ends up on the result).
In fact, notice that it is slightly slower when you include the CGContextClipToRect, again suggesting iOS "knows when to stop" anyway.
// no need to "overdraw"... quickener turned OFF
//CGContextRef c = UIGraphicsGetCurrentContext();
//CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.443669
// no need to "overdraw"... quickener turned ON
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.461845
As you can see, it's actually a hair slower with the CGContextClipToRect trick added.
For the record, here is the exact routine used to crop an image:
-(UIImage *)simplishTopCrop:(UIImage *)fromImage
{
    // check for zero fromImage.size.width etc etc
    CGSize resultSize = CGSizeMake(640, 640);
    CGFloat scale = MAX(
        resultSize.width / fromImage.size.width,
        resultSize.height / fromImage.size.height);
    CGFloat width = fromImage.size.width * scale;
    CGFloat height = fromImage.size.height * scale;
    CGRect imageRect = CGRectMake(0, 0, width, height);

    UIGraphicsBeginImageContextWithOptions(resultSize, NO, 0);
    // INSERT 'CGContextClipToRect' TRICK ABOVE, RIGHT HERE
    [fromImage drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

This is where clipping comes in. Clip to your dirty rect, then draw the whole image into your bounds. The clipping path will keep the rest of the image at least from appearing, and hopefully from being composited or sampled at all.
If your profiling in Instruments finds that that is not efficient enough, you might try cropping the image itself, using CGImageCreateWithImageInRect, and then drawing that image into your dirty rect. You may want to keep your cropped image around and only throw it away when the rect changes. One way or the other, cropping the image may be more efficient—but don't forget to profile both before and after to prove that.
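For illustration, here is a minimal Objective-C sketch of the CGImageCreateWithImageInRect route for the 100 x 200 example above. The crop rectangle is in the CGImage's pixel coordinates, and image scale and orientation are ignored for simplicity.
// Crop the bottom 100 x 100 of the 100 x 200 original, then draw only that into a 20 x 20 context.
CGRect cropRect = CGRectMake(0, 100, 100, 100);            // pixel coordinates of the underlying CGImage
CGImageRef croppedRef = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);

UIGraphicsBeginImageContextWithOptions(CGSizeMake(20, 20), NO, 0);
[cropped drawInRect:CGRectMake(0, 0, 20, 20)];             // only the cropped pixels are sampled here
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageCreateWithImageInRect typically references the original pixel data rather than copying it, so the crop itself is cheap; profile both versions to see whether the overall routine actually gets faster.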

Related

Rotated Image gets distorted and blurry?

I use an image view:
@IBOutlet weak var imageView: UIImageView!
to paint an image and also another image which has been rotated. It turns out that the rotated image has very bad quality. In the following image the glasses in the yellow box are not rotated. The glasses in the red box are rotated by 4.39 degrees.
Here is the code I use to draw the glasses:
UIGraphicsBeginImageContext(imageView.image!.size)
imageView.image!.drawInRect(CGRectMake(0, 0, imageView.image!.size.width, imageView.image!.size.height))
var drawCtxt = UIGraphicsGetCurrentContext()
var glassImage = UIImage(named: "glasses.png")
let yellowRect = CGRect(...)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.yellowColor().CGColor)
CGContextStrokeRect(drawCtxt, yellowRect)
CGContextDrawImage(drawCtxt, yellowRect, glassImage!.CGImage)
// paint the rotated glasses in the red square
CGContextSaveGState(drawCtxt)
CGContextTranslateCTM(drawCtxt, centerX, centerY)
CGContextRotateCTM(drawCtxt, 4.398 * CGFloat(M_PI) / 180)
var newRect = yellowRect
newRect.origin.x = -newRect.size.width / 2
newRect.origin.y = -newRect.size.height / 2
CGContextAddRect(drawCtxt, newRect)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.redColor().CGColor)
CGContextSetLineWidth(drawCtxt, 1)
// draw the red rect
CGContextStrokeRect(drawCtxt, newRect)
// draw the image
CGContextDrawImage(drawCtxt, newRect, glassImage!.CGImage)
CGContextRestoreGState(drawCtxt)
How can I rotate and paint the glasses without losing quality or get a distorted image?
You should use UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) to create the initial context. Passing in 0.0 as the scale will default to the scale of the current screen (e.g., 2.0 on an iPhone 6 and 3.0 on an iPhone 6 Plus).
See this note on UIGraphicsBeginImageContext():
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
As others have pointed out, you need to set up your context to allow for retina displays.
Aside from that, you might want to use a source image that is larger than the target display size and scale it down. (2X the pixel dimensions of the target image would be a good place to start.)
Rotating to odd angles is destructive. The graphics engine has to map a grid of source pixels onto a different grid where they don't line up. Perfectly straight lines in the source image are no longer straight in the destination image, etc. The graphics engine has to do some interpolation, and a source pixel might be spread over several pixels, or less than a full pixel, in the destination image.
By providing a larger source image you give the graphics engine more information to work with. It can better slice and dice those source pixels into the destination grid of pixels.
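A minimal sketch combining both suggestions, shown in Objective-C (the Swift call takes the same three arguments; the sizes and rects here are made up):
// Scale 0.0 means "use the device's screen scale", so the bitmap gets 2x / 3x pixels
// instead of the 1x you get from plain UIGraphicsBeginImageContext().
CGSize targetSize = CGSizeMake(320, 320);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

CGContextTranslateCTM(ctx, targetSize.width / 2, targetSize.height / 2);
CGContextRotateCTM(ctx, 4.398 * M_PI / 180.0);             // the rotation now resamples at full resolution

// Draw a source image that is larger than the space it occupies on screen;
// scaling it down hides most of the interpolation artifacts.
UIImage *glasses = [UIImage imageNamed:@"glasses.png"];
[glasses drawInRect:CGRectMake(-100, -40, 200, 80)];       // made-up rect, centered on the rotated origin

UIImage *composed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();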

How can I use Open GL ES to crop image?

So I face some problems when I deal with image cropping. I am aware of two possible ways: UIGraphicsBeginImageContextWithOptions combined with the drawAtPoint:blendMode: method, and CGImageCreateWithImageInRect. They both work but have serious disadvantages: the first takes a lot of time (about 7 seconds in my case) and memory (I receive a memory warning) when cropping an image taken with the iPhone camera; the second ends up with a rotated image, so you need a bunch of code to work around that behavior, which I don't want. What I want to know is how, for instance, Apple's built-in edit function in the Photos app works, or Aviary, or any other photo editor. Consider Apple's editor (iOS 8): you can rotate the image and change the cropping rectangle, they even blur(!) outside the cropping rect, and so on, yet when you apply the crop it takes at most 8 MB of memory and happens immediately. How do they do this?
The only thought I have is that they use the GPU (Aviary, for instance). So, to combine all these questions into one: how can I use OpenGL to make cropping a less painful operation? I've never worked with it, so any tutorials, links, and sources are welcome. Thank you in advance.
As already mentioned, this should most likely not be done with OpenGL, but even so...
The problem most people have is computing the rectangle in which the image should be displayed, and the solution looks something like this:
- (CGRect)fillSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width / source.size.height > target.size.width / target.size.height)
    {
        // keep target height and make the width larger
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width / source.size.height), target.size.height);
        return CGRectMake((target.size.width - newSize.width) * .5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height larger
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height / source.size.width));
        return CGRectMake(.0f, (target.size.height - newSize.height) * .5f, newSize.width, newSize.height);
    }
}

- (CGRect)fitSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width / source.size.height < target.size.width / target.size.height)
    {
        // keep target height and make the width smaller
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width / source.size.height), target.size.height);
        return CGRectMake((target.size.width - newSize.width) * .5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height smaller
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height / source.size.width));
        return CGRectMake(.0f, (target.size.height - newSize.height) * .5f, newSize.width, newSize.height);
    }
}
I did not test this.
Or, since you are on iOS, simply create an image view with the desired size, set the image on it, and then take a screenshot of the view, as sketched below.
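A rough sketch of that image-view route, assuming a hypothetical sourceImage and a 500 x 500 target: let UIImageView do the aspect-fill cropping, then snapshot its layer.
UIImageView *cropView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
cropView.contentMode = UIViewContentModeScaleAspectFill;   // crops to the view's bounds
cropView.clipsToBounds = YES;
cropView.image = sourceImage;

UIGraphicsBeginImageContextWithOptions(cropView.bounds.size, YES, 0);
[cropView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();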

Generating a 54 megapixel image on iPhone 4/4S and iPad 2

I'm currently working on a project that must generate a 9000x6000-pixel collage from 15 photos. The problem I'm facing is that when I finish drawing, I get an empty image (those 15 images are not drawn into the context).
This problem is only present on devices with 512MB of RAM, like the iPhone 4/4S or iPad 2, and I think it is caused by the system not being able to allocate enough memory for the app. When I run this line: UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f); the app's memory usage rises by 216MB and the total memory usage reaches ~240MB of RAM.
The thing I cannot understand is why on Earth the images I'm trying to draw within the for loop are not always rendered into the currentContext. I emphasize the word always, because in only one of 30 tests were the images rendered (without changing a line of code).
Question nr. 2: If this is a problem caused by the system because it cannot allocate enough memory, is there any other way to generate this image, like a CGContextRef backed by a file output stream, so that it won't keep the image in the memory?
This is the code:
CGSize outputSize = CGSizeMake(9000, 6000);
BOOL opaque = YES;
UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(currentContext, [UIColor blackColor].CGColor);
CGContextFillRect(currentContext, CGRectMake(0, 0, outputSize.width, outputSize.height));
for (NSUInteger i = 0; i < strongSelf.images.count; i++)
{
    @autoreleasepool
    {
        AGAutoCollageImageData *imageData = (AGAutoCollageImageData *)strongSelf.layout.images[i];
        CGRect destinationRect = CGRectMake(floorf(imageData.destinationRectangle.origin.x * scaleXRatio),
                                            floorf(imageData.destinationRectangle.origin.y * scaleYRatio),
                                            floorf(imageData.destinationRectangle.size.width * scaleXRatio),
                                            floorf(imageData.destinationRectangle.size.height * scaleYRatio));
        CGRect sourceRect = imageData.sourceRectangle;
        // Draw clipped image
        CGImageRef clippedImageRef = CGImageCreateWithImageInRect(((ALAsset *)strongSelf.images[i]).defaultRepresentation.fullScreenImage, sourceRect);
        CGContextDrawImage(currentContext, destinationRect, clippedImageRef);
        CGImageRelease(clippedImageRef);
    }
}
// Pull the image from our context
strongSelf.result = UIGraphicsGetImageFromCurrentImageContext();
// Pop the context
UIGraphicsEndImageContext();
P.S.: The console doesn't show anything but 'memory warnings', which are to be expected.
Sounds like a cool project.
Tactic: try also releasing imageData at the end of every loop iteration (explicitly, after releasing the clippedImageRef).
Strategic:
If you do need to support such "low" RAM requirements with such "high" input, maybe you should consider two alternative options (see the sketch after this list):
Compress (obviously): even minimal JPEG compression, invisible to the naked eye, can go a long way.
Split: never "really" merge the image. Keep an array-backed data structure that represents the big image, and build utilities for the presentation logic.
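A rough sketch combining both ideas, under the assumption that the collage only ever needs to be shown or exported piecewise: render the 9000 x 6000 canvas as a grid of tiles, write each tile out as a JPEG, and never hold the full bitmap in memory. The tile size, the file naming, and the drawCollagePhotosInContext: helper (your existing per-image loop, minus the context creation) are all hypothetical.
CGSize tileSize = CGSizeMake(3000, 3000);                  // a 3 x 2 grid covers 9000 x 6000
for (NSUInteger row = 0; row < 2; row++) {
    for (NSUInteger col = 0; col < 3; col++) {
        @autoreleasepool {
            UIGraphicsBeginImageContextWithOptions(tileSize, YES, 1.0f);
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            // Shift the origin so the photos draw themselves in full-collage
            // coordinates while only this tile's pixels are actually kept.
            CGContextTranslateCTM(ctx, -((CGFloat)col) * tileSize.width, -((CGFloat)row) * tileSize.height);
            [self drawCollagePhotosInContext:ctx];          // hypothetical helper
            UIImage *tile = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            NSString *name = [NSString stringWithFormat:@"tile_%lu_%lu.jpg", (unsigned long)row, (unsigned long)col];
            NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:name];
            // ~36 MB of bitmap per tile in memory at a time, far less on disk.
            [UIImageJPEGRepresentation(tile, 0.8f) writeToFile:path atomically:YES];
        }
    }
}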

Quartz2D: How to convert a clipping rect to an inverted mask at runtime?

Given:
a CGContextRef (ctx) with frame {0,0,100,100}
and a rect (r), with frame {25,25,50,50}
It's easy to clip the context to that rect:
CGContextClipToRect(ctx, r);
to mask out the red area below (red == mask):
But I want to invert this clipping rect to convert it into a clipping mask. The desired outcome is to mask the red portion below (red == mask):
I want to do this programmatically at runtime.
I do not want to manually prepare a bitmap image to ship statically with my app.
Given ctx and r, how can this be done at runtime most easily/straightforwardly?
Read about fill rules in the “Filling a Path” section of the Quartz 2D Programming Guide.
In your case, the easiest thing to do is use the even-odd fill rule. Create a path consisting of your small rectangle, and a much larger rectangle:
CGContextBeginPath(ctx);
CGContextAddRect(ctx, CGRectMake(25,25,50,50));
CGContextAddRect(ctx, CGRectInfinite);
Then, intersect this path into the clipping path using the even-odd fill rule:
CGContextEOClip(ctx);
You could clip the context with CGContextClipToRects() by passing the rects that make up the red frame you want, as sketched below.
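A minimal sketch of that idea for the {0,0,100,100} context and the {25,25,50,50} rect r from the question; the four bands tile everything around the hole.
CGRect bounds = CGRectMake(0, 0, 100, 100);
CGRect bands[4] = {
    CGRectMake(0, 0, bounds.size.width, CGRectGetMinY(r)),                                              // band on the minY side of r
    CGRectMake(0, CGRectGetMaxY(r), bounds.size.width, bounds.size.height - CGRectGetMaxY(r)),          // band on the maxY side of r
    CGRectMake(0, CGRectGetMinY(r), CGRectGetMinX(r), r.size.height),                                   // band on the minX side of r
    CGRectMake(CGRectGetMaxX(r), CGRectGetMinY(r), bounds.size.width - CGRectGetMaxX(r), r.size.height) // band on the maxX side of r
};
CGContextClipToRects(ctx, bands, 4);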
Can you just do all your painting as normal, and then do:
CGContextClearRect(ctx, r);
after everything has been done?
Here is a helpful extension for implementing rob's answer
extension UIBezierPath {
    func addClipInverse() {
        let paths = UIBezierPath()
        paths.append(self)
        paths.append(.init(rect: .infinite))
        paths.usesEvenOddFillRule = true
        paths.addClip()
    }
}

Colorizing image ignores alpha channel — why and how to fix?

Here's what I'm trying to do: On the left is a generic, uncolorized RGBA image that I've created off-screen and cached for speed (it's very slow to create initially, but very fast to colorize with any color later, as needed). It's a square image with a circular swirl. Inside the circle, the image has an alpha/opacity of 1. Outside the circle, it has an alpha/opacity of 0. I've displayed it here inside a UIView with a background color of [UIColor scrollViewTexturedBackgroundColor]. On the right is what happens when I attempt to colorize the image by filling a solid red rectangle over the top of it after setting CGContextSetBlendMode(context, kCGBlendModeColor).
That's not what I want, nor what I expected. Evidently, colorizing a completely transparent pixel (e.g., alpha value of 0) results in the full-on fill color for some strange reason, rather than remaining transparent as I would have expected.
What I want is actually this:
Now, in this particular case, I can set the clipping region to a circle, so that the area outside the circle remains untouched — and that's what I've done here as a workaround.
But in my app, I also need to be able to colorize arbitrary shapes where I don't know the clipping/outline path. One example is colorizing white text by overlaying a gradient. How is this done? I suspect there must be some way to do it efficiently — and generally, with no weird path/clipping tricks — using image masks... but I have yet to find a tutorial on this. Obviously it's possible because I've seen colored-gradient text in other games.
Incidentally, what I can't do is start with a gradient and clip/clear away parts I don't need — because (as shown in the example above) my uncolorized source images are, in general, grayscale rather than pure white. So I really need to start with the uncolorized image and then colorize it.
p.s. — kCGBlendModeMultiply also has the same flaws / shortcomings / idiosyncrasies when it comes to colorizing partially transparent images. Does anyone know why Apple decided to do it that way? It's as if the Quartz colorizing code treats RGBA(0,0,0,0) as RGBA(0,0,0,1), i.e., it completely ignores and destroys the alpha channel.
One approach that will work is to construct a mask from the original image and then call CGContextClipToMask() before rendering your image with the multiply blend mode set. Here is the Core Graphics code that would set the mask before drawing the image to be colorized.
CGContextRef context = [frameBuffer createBitmapContext];
CGRect bounds = CGRectMake( 0.0f, 0.0f, width, height );
CGContextClipToMask(context, bounds, maskImage.CGImage);
CGContextDrawImage(context, bounds, greyImage.CGImage);
The slightly more tricky part will be to take the original image and generate a maskImage. What you can do for that is write a loop that will examine each pixel and write either a black or white pixel as the mask value. If the original pixel in the image to color is completely transparent, then write a black pixel, otherwise write a white pixel. Note that the mask value will be a 24BPP image. Here is some code to give you the right idea.
uint32_t *inPixels = (uint32_t *) MEMORY_ADDR_OF_ORIGINAL_IMAGE;
uint32_t *maskPixels = malloc(numPixels * sizeof(uint32_t));
uint32_t *maskPixelsPtr = maskPixels;

for (int rowi = 0; rowi < height; rowi++) {
    for (int coli = 0; coli < width; coli++) {
        uint32_t inPixel = *inPixels++;
        uint32_t inAlpha = (inPixel >> 24) & 0xFF;
        uint32_t cval = 0;
        if (inAlpha != 0) {
            cval = 0xFF;
        }
        uint32_t outPixel = (0xFF << 24) | (cval << 16) | (cval << 8) | cval;
        *maskPixelsPtr++ = outPixel;
    }
}
You will of course need to fill in all the details and create the graphics contexts and so on. But the general idea is to simply create your own mask to filter out drawing of the red parts around the outside of the circle.
