I have an MKMapView where I want to grey out parts of the map. More specifically, I want to have some circles and rectangles which are displayed normally and the rest of the map has a semi-transparent grey layer. Something like this:
For that, I think that I should subclass MKOverlay and MKOverlayRenderer. As Apple suggests, in my MKOverlayRenderer subclass I should override the drawMapRect:zoomScale:inContext: method and draw my stuff using Core Graphics. My question is: how can I draw the following with Core Graphics?
I have spent some hours looking at masking and clipping using Core Graphics, but I haven't found anything similar to this. The QuartzDemo has some examples of clipping and masking. I guess clipping with either the even-odd or nonzero winding number rules won't work for me, as the rectangles and circles are dynamic. I think I have to create a mask somehow, but I can't figure out how. The QuartzDemo creates a mask out of an image. How could I create a mask using rectangles and circles? Is there any other way I could approach this?
Thank you
You should be able to set up a context with transparency, add the skinny rectangles and circles without stroking them, add a rectangle around the whole context, and then fill the combined path with the darker color. You'll need to look into fill rules (even-odd versus non-zero winding) to make sure that the larger area is what gets filled rather than the smaller joined shapes.
I guess clipping with either the even-odd or nonzero winding number rules won't work for me, as the rectangles and circles are dynamic.
This shouldn't matter.
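For example, a minimal sketch of that even-odd idea in an MKOverlayRenderer subclass might look like the following; holeCircleRect and holeRect are placeholders for wherever your dynamic shapes end up in drawing coordinates.
- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    // Rect covering this tile in the renderer's drawing coordinates.
    CGRect fullRect = [self rectForMapRect:mapRect];

    CGContextSaveGState(context);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 0.5); // semi-transparent grey

    // One path: the outer rectangle plus the "hole" shapes.
    CGContextAddRect(context, fullRect);
    CGContextAddEllipseInRect(context, holeCircleRect); // placeholder
    CGContextAddRect(context, holeRect);                // placeholder

    // The even-odd rule fills the outer area and leaves the holes clear.
    CGContextEOFillPath(context);
    CGContextRestoreGState(context);
}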
Well, I worked it out myself. I have to create my own mask: draw the "hole" shapes, capture them as a CGImageRef, and invert that image to produce the mask. Here is a code snippet that should work if you add it to the QuartzDemo project -> QuartzClipping.m -> QuartzClippingView class, at the end of the drawInContext: method.
-(void)drawInContext:(CGContextRef)context {
// ...
CGContextSaveGState(context);
// dimension of the square mask
int dimension = 20;
// create mask
UIGraphicsBeginImageContextWithOptions(CGSizeMake(dimension, dimension), NO, 0.0f);
CGContextRef newContext = UIGraphicsGetCurrentContext();
// draw overlapping circle holes
CGContextFillEllipseInRect(newContext, CGRectMake(0, 0, 10, 10));
CGContextFillEllipseInRect(newContext, CGRectMake(0, 7, 10, 10));
// capture the hole shapes as an image; this will become the basis of the mask
CGImageRef mask = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();
// the inverted mask is what we need
CGImageRef invertedMask = [self invertMask:mask dimension:dimension];
// height is the view height already available in QuartzClippingView
CGRect rectToDraw = CGRectMake(210.0, height - 290.0, 90.0, 90.0);
// everything drawn in rectToDraw after this will have two holes
CGContextClipToMask(context, rectToDraw, invertedMask);
// drawing a red rectangle for this demo
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextFillRect(context, rectToDraw);
CGImageRelease(mask);
CGImageRelease(invertedMask);
CGContextRestoreGState(context);
}
// taken from the QuartzMaskingView below
- (CGImageRef)invertMask:(CGImageRef)originalMask dimension:(int)dimension{
// Process the original image to extract its alpha channel as a mask.
// Allocate data
NSMutableData *data = [NSMutableData dataWithLength:dimension * dimension * 1];
// Create a bitmap context
CGContextRef context = CGBitmapContextCreate([data mutableBytes], dimension, dimension, 8, dimension, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
// Set the blend mode to copy to avoid any alteration of the source data
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the image to extract the alpha channel
CGContextDrawImage(context, CGRectMake(0.0, 0.0, dimension, dimension), originalMask);
// Now the alpha channel has been copied into our NSData object above, so discard the context and let's make an image mask.
CGContextRelease(context);
// Create a data provider for our data object (NSMutableData is toll-free bridged to CFMutableDataRef, which is compatible with CFDataRef)
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFMutableDataRef)data);
// Create our new mask image with the same size as the original image
CGImageRef invertedMask = CGImageMaskCreate(dimension, dimension, 8, 8, dimension, dataProvider, NULL, YES);
CGDataProviderRelease(dataProvider);
return invertedMask;
}
Any easier/more efficient solution is welcome :)
Related
I am drawing image on a custom UIView. On resizing the view, the drawing performance goes down and it starts lagging.
My image drawing code is below:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
UIBezierPath *bpath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, width, height)];
CGContextAddPath(context, bpath.CGPath);
CGContextClip(context);
CGContextDrawImage(context, [self bounds], image.CGImage);
}
Is this approach correct?
You would be better off using Instruments to find the bottleneck than asking on here.
However, what you will probably find is that every time the frame changes slightly the entire view will be redrawn.
If you're just using drawRect: to clip the view into an oval (I guess there's an image behind it or something), then you would be better off using a CAShapeLayer.
Create a CAShapeLayer, give it a CGPath, and set it as the mask of view.layer.
Then you can change the path on the CAShapeLayer and it will update. You'll find (I think) that it performs much better too.
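A rough sketch of that approach, assuming imageView, ovalRect, and newOvalRect already exist in your view controller:
// Clip the image view to an oval via a mask layer (no drawRect: needed).
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.path = [UIBezierPath bezierPathWithOvalInRect:ovalRect].CGPath;
imageView.layer.mask = maskLayer;

// Later, when the size changes, just swap the path instead of redrawing:
maskLayer.path = [UIBezierPath bezierPathWithOvalInRect:newOvalRect].CGPath;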
If your height and width are the same, you could just use a UIImageView instead of a custom view, and get the circular clipping by setting properties on the image view's layer. That approach draws nicely and quickly.
Just set up a UIImageView (called "image" in my example) and then have your view controller do this once:
image.layer.cornerRadius = image.bounds.size.width / 2.0;
image.layer.masksToBounds = YES;
I'm trying to zoom and translate an image on the screen.
here's my drawRect:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetShouldAntialias(context, NO);
CGContextScaleCTM (context, senderScale, senderScale);
[self.image drawAtPoint:CGPointMake(imgposx, imgposy)];
CGContextRestoreGState(context);
}
When senderScale is 1.0, moving the image (imgposx/imgposy) is very smooth. But if senderScale has any other value, performance takes a big hit and the image stutters when I move it.
The image I am drawing is a UIImage object. I create its context with
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
draw a simple stroked UIBezierPath into it, and then grab the result:
self.image = UIGraphicsGetImageFromCurrentImageContext();
Am I doing something wrong? Turning off the anti-aliasing did not improve things much.
Edit:
I tried this:
rectImage = CGRectMake(0, 0, self.frame.size.width * senderScale, self.frame.size.height * senderScale);
[image drawInRect:rectImage];
but it was just as slow as the other method.
If you want this to perform well, you should let the GPU do the heavy lifting by using CoreAnimation instead of drawing the image in your -drawRect: method. Try creating a view and doing:
myView.layer.contents = (__bridge id)self.image.CGImage;
Then zoom and translate it by manipulating the UIView relative to its superview. If you draw the image in -drawRect: you're making it do the hard work of blitting the image for every frame. Doing it via CoreAnimation only blits once, and then subsequently lets the GPU zoom and translate the layer.
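A minimal sketch of that, reusing the question's senderScale / imgposx / imgposy values and assuming myView is a plain UIView already in your hierarchy:
// Hand the pre-rendered image to the layer once.
myView.layer.contents = (__bridge id)self.image.CGImage;

// Zoom by transforming the view and translate by moving its center;
// Core Animation composites this on the GPU, so no drawRect: work per frame.
myView.transform = CGAffineTransformMakeScale(senderScale, senderScale);
myView.center = CGPointMake(imgposx, imgposy);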
I'm trying to add a feature in my iPhone app that allows users to record the screen. To do this, I used an open source class called ScreenCaptureView that compiles a series of screenshots of the main view through the renderInContext: method. However, this method ignores masked CALayers, which is key to my app.
EDIT:
I need a way to record the screen so that the masks are included. Although this question specifically asks for a way to create the illusion of an image mask, I'm open to any other modifications I can make to record the screen successfully.
APP DESCRIPTION
In the app, I take a picture and create the effect of a moving mouth by animating the position of the jaw region, as shown below.
Currently, I have the entire face as one CALayer, and the chin region as a separate CALayer. To make the chin layer, I mask the chin region from the complete face using a CGPath. (This path is an irregular shape and must be dynamic).
- (CALayer *)getChinLayerFromPic:(UIImage *)pic frame:(CGRect)frame {
CGMutablePathRef mPath = CGPathCreateMutable();
// p0, p2, v1, v2 are facial landmark points computed elsewhere
CGPathMoveToPoint(mPath, NULL, p0.x, p0.y);
CGPoint midpt = CGPointMake( (p2.x + p0.x)/2, (p2.y+ p0.y)/2);
CGPoint c1 = CGPointMake(2*v1.x - midpt.x, 2*v1.y - midpt.y); //control points
CGPoint c2 = CGPointMake(2*v2.x - midpt.x, 2*v2.y - midpt.y);
CGPathAddQuadCurveToPoint(mPath, NULL, c1.x, c1.y, p2.x, p2.y);
CGPathAddQuadCurveToPoint(mPath, NULL, c2.x, c2.y, p0.x, p0.y);
CALayer *chin = [CALayer layer];
CAShapeLayer *chinMask = [CAShapeLayer layer];
chin.frame = frame;
chin.contents = (id)[pic CGImageWithProperOrientation];
chinMask.path = mPath;
chin.mask = chinMask;
CGPathRelease(mPath);
return chin;
}
I then animate the chin layer with a path animation.
As mentioned before, the renderInContext: method ignores the mask, and returns an image of the entire face instead of just the chin. Is there any way I can create an illusion of masking the chin? I would like to use CALayers if possible, since it would be most convenient for animations. However, I'm open to any ideas, including other ways to capture the video. Thanks.
EDIT:
I'm turning the cropped chin into a UIImage, and then setting that new image as the layer's contents, instead of directly masking the layer. However, the cropped region is the reverse of the specified path.
CALayer *chin = [CALayer layer];
chin.frame = frame;
CGImageRef imageRef = [pic CGImage];
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
int targetWidth = frame.size.width;
int targetHeight = frame.size.height;
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
// mPath is the same chin path built in getChinLayerFromPic:frame:
CGContextAddPath(bitmap, mPath);
CGContextClip(bitmap);
CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *chinPic = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
chin.contents = (id)[chinPic CGImageWithProperOrientation];
Why don't you draw the CALayer of the chin into a separate CGImage and make a new UIImage with that?
Then you can add this image to a separate UIImageView, which you can just move around with a UIPanGestureRecognizer, for example.
I suggest that you draw each part separately (you don't need masks): the face without a chin, and the chin, both with transparent pixels around them. Then just draw the chin on top and move it along your path.
Please let me know if this wasn't helpful to you. Regards.
If you just need this in order to capture the screen, it is much easier to use a programme such as
http://www.airsquirrels.com/reflector/
that connects your device to the computer via AirPlay and records the stream on the computer. This particular programme has screen recording built in, which is extremely convenient. I think there is a trial version you can use for recordings of up to 10 minutes.
Have you tried taking a look at the question "layer.renderInContext doesn't take layer.mask into account?"?
In particular (notice the coordinate flip):
//Make the drawing right with coordinate switch
CGContextTranslateCTM(context, 0, cHeight);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, CGRectMake(maskLayer.frame.origin.x * mod, maskLayer.frame.origin.y * modTwo, maskLayer.frame.size.width * mod,maskLayer.frame.size.height * modTwo), maskLayer.image.CGImage);
//Reverse the coordinate switch
CGAffineTransform ctm = CGContextGetCTM(context);
ctm = CGAffineTransformInvert(ctm);
CGContextConcatCTM(context, ctm);
I'm using this code to colorize some images of a UIButton subclass:
UIImage *img = [self imageForState:controlState];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContextWithOptions(img.size, NO, 0.0f);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[self.buttonColor setFill];
CGContextSetAllowsAntialiasing(context, true);
CGContextSetShouldAntialias(context, true);
// translate/flip the graphics context (for transforming from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to screen and draw the original image
CGContextSetBlendMode(context, kCGBlendModeScreen);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
// set a mask that matches the shape of the image, then draw the colored image
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return the colored image
[self setImage:coloredImg forState:controlState];
But the images come out with rough edges. I've tried the screen, lighten, and plusLighter blend modes, because some of the images have white parts that I want to stay white; the only parts I want colorized are the black areas. I've attached the original button images and the colorized results, and I can't get the edges to look good.
When I had them as white images colorized with the multiply blend mode, it looked much better, but I want to use black so I can use one method for colorizing images with and without white in them. I tried enabling anti-aliasing, but that didn't help either; it looks like it just isn't anti-aliasing. I haven't worked with Core Graphics enough to know what's up with it.
EDIT
Here's what the original PNGs look like:
and here's what it should look like:
and here's what it does look like:
The size is different, but you can see the bad quality around the edges.
Maybe your original icons (PNGs?) are just "too sharp"? Could you show us? You draw the image at its original size without resizing, so the problem could be there right from the start.
I'm not sure what you are trying to accomplish here. Are you trying to round the edges of the images? If so, you are better off changing the corner radius of the UIButton's layer. Since UIButton is a subclass of UIView, you can access its layer property, change the border color, and round its corners.
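Something along these lines, assuming a UIButton named button (the values are illustrative):
// Round the corners and clip the content to the rounded shape.
button.layer.cornerRadius = 8.0;
button.layer.masksToBounds = YES;
// Optionally give the rounded edge a colored border.
button.layer.borderWidth = 1.0;
button.layer.borderColor = [UIColor darkGrayColor].CGColor;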
I am working in iOS and have been presented with this problem. I will be receiving a relatively large set of data (a 500x500 or greater C-style array). I need to construct what is essentially an X/Y plot from this data, where each datapoint in the 500x500 grid corresponds to a color based on its value. The data changes over time, producing an animation of sorts, so the calculations have to be fast enough to redraw whenever a new set of data comes in.
So basically, for every point in the array, I need to figure out which color it should map to, then figure out the square to draw on the grid to represent the data. If my grid were 768x768 pixels, but I have a 500x500 dataset, then each datapoint would represent about a 1.5x1.5 rectangle (that's rounded, but I hope you get the idea).
I tried this by creating a new view class and overriding drawRect. However, that met with horrible performance with anything much over a 20x20 dataset.
I have seen some suggestions about writing to image buffers, but I have not been able to find any examples of doing that (I'm pretty new to iOS). Do you have any suggestions or could you point me to any resources which could help?
Thank you for your time,
Darryl
Here's some code that you can put in a method that will generate and return a UIImage in an offscreen context. To improve performance, try to come up with ways to minimize the number of iterations, such as making your "pixels" bigger, or only drawing a portion that changes.
UIGraphicsBeginImageContext(size); // Use your own image size here
CGContextRef context = UIGraphicsGetCurrentContext();
// push context to make it current
// (need to do this manually because we are not drawing in a UIView)
//
UIGraphicsPushContext(context);
for (CGFloat x = 0.0; x < size.width; x += 1.0) {
    for (CGFloat y = 0.0; y < size.height; y += 1.0) {
        // Set your color here
        CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(context, CGRectMake(x, y, 1.0, 1.0));
    }
}
// pop context
//
UIGraphicsPopContext();
// get a UIImage from the image context- enjoy!!!
//
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
[outputImage retain];
// clean up drawing environment
//
UIGraphicsEndImageContext();
return [outputImage autorelease];
If you want to be fast, you shouldn't call explicit drawing functions for every pixel. You can allocate memory and use CGBitmapContextCreate to build an image with the data in that memory, which is basically a byte array. Do your calculations and write the color information (A-R-G-B) directly into that buffer. You have to do the maths on your own, though, regarding sub-pixel accuracy (blending).
I don't have a complete example at hand, but searching for CGBitmapContextCreate should point you in the right direction.
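A rough illustration of the direct-buffer idea follows; values (the incoming 500x500 data as a flat C array) and colorForValue() (your value-to-ARGB mapping) are placeholders, not real APIs.
size_t width = 500, height = 500;
size_t bytesPerRow = width * 4;
uint8_t *buffer = calloc(height * bytesPerRow, 1);

// Write one ARGB pixel per datapoint straight into the buffer.
// (With kCGImageAlphaPremultipliedFirst the RGB bytes must be premultiplied
// by alpha; that is trivially true if alpha is always 255.)
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint32_t argb = colorForValue(values[y * width + x]);
        uint8_t *pixel = buffer + y * bytesPerRow + x * 4;
        pixel[0] = (argb >> 24) & 0xFF;  // alpha
        pixel[1] = (argb >> 16) & 0xFF;  // red
        pixel[2] = (argb >>  8) & 0xFF;  // green
        pixel[3] =  argb        & 0xFF;  // blue
    }
}

// Wrap the buffer in a bitmap context and pull a UIImage out of it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *plot = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(buffer);
Hand plot to a UIImageView (or a layer's contents) and just swap the image each time a new dataset arrives.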