Setting the context using CGContextRef [duplicate] - ios

CGContextRef, CGPoint, and CGSize
I am trying to use a custom method where I have to pass in a CGContextRef, a CGPoint, and a CGSize:
CGPoint p1 = {10, 10};
CGSize size;
CGContextRef context = UIGraphicsGetCurrentContext();
[self drawArrowWithContext:context atPoint:p1 withSize:size lineWidth:400 arrowHeight:400];
When I run the application I get this error:
Jan 21 21:41:56 Alexs-ipad Splash-it[1497] : CGContextDrawPath: invalid context 0x0
The problem must be in the context, but I can't find a solution anywhere on the internet. This code is supposed to call a method that draws an arrow.
Thanks for any help.

UIGraphicsGetCurrentContext() only returns a valid context when one has already been set up for you.
That basically means this code needs to run inside drawRect:, or you need to create an image context yourself using UIGraphicsBeginImageContext.
Update: the drawRect: method
drawRect: is a special method called on each UIView that gives you an access point for custom drawing with Core Graphics. The most common approach is to create a custom UIView subclass (in your case, an ArrowView) and override drawRect: with your drawing code.
- (void)drawRect:(CGRect)rect
{
    CGPoint p1 = {10, 10};
    CGSize size = CGSizeMake(400, 400); // give size a real value; it was uninitialized
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self drawArrowWithContext:context atPoint:p1 withSize:size lineWidth:400 arrowHeight:400];
}
Update: the image context
A second way to tap into custom Core Graphics drawing is to create an image context and then harvest its results.
You start by creating an image context, run your drawing code, then convert the result into a UIImage you can add to your existing views.
UIGraphicsBeginImageContext(CGSizeMake(400.0, 400.0));
CGPoint p1 = {10, 10};
CGSize size = CGSizeMake(400, 400); // give size a real value; it was uninitialized
CGContextRef context = UIGraphicsGetCurrentContext();
[self drawArrowWithContext:context atPoint:p1 withSize:size lineWidth:400 arrowHeight:400];
// converts your context into a UIImage
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
// balance the Begin call; don't leave the context on the stack
UIGraphicsEndImageContext();
// adds that image into an image view and sticks it on the screen
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
[self.view addSubview:imageView];

Related

UIGraphicsGetCurrentContext vs UIGraphicsBeginImageContext/UIGraphicsEndImageContext

I am new to these parts of the iOS API, and here are some questions that are causing an infinite loop in my mind:
Why does ..BeginImageContext take a size but ..GetCurrentContext does not? If ..GetCurrentContext has no size, where does it draw, and what are its bounds?
Why did they need two contexts, one for images and one for general graphics? Isn't an image context already a graphics context? What was the reason for the separation? (I am trying to know what I don't know.)
UIGraphicsGetCurrentContext() returns a reference to the current graphics context. It doesn't create one. This is important to remember, because in that light you can see why it doesn't take a size parameter: the current context is simply whatever size it was created with.
UIGraphicsBeginImageContext(aSize) is for creating graphics contexts at the UIKit level outside of UIView's drawRect: method.
Here is where you would use them.
If you had a subclass of UIView you could override its drawRect: method like so:
- (void)drawRect:(CGRect)rect
{
    // the graphics context was created for you by UIView
    // you can now perform your custom drawing below
    // this gets you the current graphics context
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // set the fill color to blue
    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    // fill your custom view with a blue rect
    CGContextFillRect(ctx, rect);
}
In this case, you didn't need to create the graphics context. It was created for you automatically and allows you to perform your custom drawing in the drawRect: method.
Now, in another situation, you might want to perform some custom drawing outside of the drawRect: method. Here you would use UIGraphicsBeginImageContext(aSize).
You could do something like this:
UIBezierPath *circle = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 200, 200)];
UIGraphicsBeginImageContext(CGSizeMake(200, 200));
// this gets the graphics context
CGContextRef context = UIGraphicsGetCurrentContext();
// you can stroke and/or fill
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
[circle fill];
[circle stroke];
// now get the image from the context
UIImage *bezierImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *bezierImageView = [[UIImageView alloc] initWithImage:bezierImage];
I hope this helps to clear things up. Also, you should generally be using UIGraphicsBeginImageContextWithOptions(size, opaque, scale) instead. For further explanation of custom drawing with graphics contexts, see my answer here.
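As a hedged illustration of the WithOptions variant (the size and colors below are placeholders, not from the question): passing 0.0 as the scale uses the device's screen scale, so the output is sharp on Retina displays.
// scale 0.0 means "use the main screen's scale"; NO means the bitmap keeps alpha
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0.0);
UIBezierPath *circle = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 200, 200)];
[[UIColor lightGrayColor] setFill];
[[UIColor blueColor] setStroke];
[circle fill];
[circle stroke];
UIImage *retinaImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();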
You are slightly confused here.
As the name suggests, UIGraphicsGetCurrentContext grabs the CURRENT context, and thus it doesn't need a size: it grabs an existing context and returns it to you.
So when is there an existing context? Always? No. When the screen renders a frame, a context is created. This context is available inside drawRect:, which is called to draw the view.
Normally your functions aren't called from within drawRect:, so they don't have a context available. This is when you call UIGraphicsBeginImageContext.
When you do that, you create an image context; you can then grab that context with UIGraphicsGetCurrentContext and work with it. That also means you have to remember to end it with UIGraphicsEndImageContext.
To clear things up further: if you modify the context in drawRect:, your changes will show up on screen. In your own function, your changes don't show up anywhere; you have to extract the image in the context through the UIGraphicsGetImageFromCurrentImageContext() call.
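Put together, the lifecycle this answer describes looks like the following sketch; the size and the fill are placeholders, not anything from the question:
UIGraphicsBeginImageContext(CGSizeMake(100, 100));
CGContextRef ctx = UIGraphicsGetCurrentContext(); // valid between Begin/End
CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor); // placeholder drawing
CGContextFillRect(ctx, CGRectMake(0, 0, 100, 100));
UIImage *result = UIGraphicsGetImageFromCurrentImageContext(); // harvest the pixels
UIGraphicsEndImageContext(); // always balance the Begin call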
Hope this helps!

Zoom effect in a UIView image

When the user touches the image, tapping on a certain body part should present a magnified image of that area. I'd like to know of any third-party frameworks that address this kind of feature, or code snippets (e.g. which gesture recognizers to use) that could help me implement it.
Question 2: I also have to add a dynamic, clickable label at the point where the touch happens and ends (like the wrist label in the image), so that clicking the label takes the user from this screen to a separate view. How can I make this possible?
In your drawRect method, mask off a circle (using a monochrome bitmap containing the 'mask' of your magnifying glass) and draw your subject view in there with a 2x scale transform. Then draw a magnifying glass image over that and you're done.
- (void) drawRect: (CGRect) rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect bounds = self.bounds;
    CGImageRef mask = [UIImage imageNamed: @"loupeMask"].CGImage;
    UIImage *glass = [UIImage imageNamed: @"loupeImage"];
    CGContextSaveGState(context);
    CGContextClipToMask(context, bounds, mask);
    CGContextFillRect(context, bounds);
    // apply the 2x magnification before drawing the subject
    CGContextScaleCTM(context, 2.0, 2.0);
    //draw your subject view here
    CGContextRestoreGState(context);
    [glass drawInRect: bounds];
}
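For the second question (the tappable label), here is a rough sketch; handleTap:, showBodyPartDetail:, and the "Wrist" title are all hypothetical names, not from the original post. A UITapGestureRecognizer supplies the touch point, and a UIButton placed there acts as the clickable label:
// Hypothetical setup in your view controller (imageView is assumed
// to be the body-image view from the question):
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handleTap:)];
self.imageView.userInteractionEnabled = YES; // UIImageView defaults to NO
[self.imageView addGestureRecognizer:tap];

- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:self.imageView];
    // A UIButton serves as the "clickable label" placed at the touch point.
    UIButton *label = [UIButton buttonWithType:UIButtonTypeSystem];
    [label setTitle:@"Wrist" forState:UIControlStateNormal]; // placeholder text
    [label sizeToFit];
    label.center = point;
    [label addTarget:self
              action:@selector(showBodyPartDetail:) // hypothetical navigation method
    forControlEvents:UIControlEventTouchUpInside];
    [self.imageView addSubview:label];
}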

CGContext: how do I erase pixels (e.g. kCGBlendModeClear) outside of a bitmap context?

I'm trying to build an eraser tool using Core Graphics, and I'm finding it incredibly difficult to make a performant eraser - it all comes down to:
CGContextSetBlendMode(context, kCGBlendModeClear)
If you google around for how to "erase" with Core Graphics, almost every answer comes back with that snippet. The problem is it only (apparently) works in a bitmap context. If you're trying to implement interactive erasing, I don't see how kCGBlendModeClear helps you - as far as I can tell, you're more or less locked into erasing on and off-screen UIImage/CGImage and drawing that image in the famously non-performant [UIView drawRect].
Here's the best I've been able to do:
-(void)drawRect:(CGRect)rect
{
    if (drawingStroke) {
        if (eraseModeOn) {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
            CGContextRef context = UIGraphicsGetCurrentContext();
            [eraseImage drawAtPoint:CGPointZero];
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeClear);
            CGContextSetLineWidth(context, ERASE_WIDTH);
            CGContextStrokePath(context);
            curImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [curImage drawAtPoint:CGPointZero];
        } else {
            [curImage drawAtPoint:CGPointZero];
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeNormal);
            CGContextSetStrokeColorWithColor(context, lineColor.CGColor);
            CGContextStrokePath(context);
        }
    } else {
        [curImage drawAtPoint:CGPointZero];
    }
}
Drawing a normal line (!eraseModeOn) is acceptably performant; I'm blitting my off-screen drawing buffer (curImage, which contains all previously drawn strokes) to the current CGContext, and I'm rendering the line (path) being currently drawn. It's not perfect, but hey, it works, and it's reasonably performant.
However, because kCGBlendModeClear apparently does not work outside of a bitmap context, I'm forced to:
1. Create a bitmap context (UIGraphicsBeginImageContextWithOptions).
2. Draw my offscreen buffer (eraseImage, which is derived from curImage when the eraser tool is turned on, so for argument's sake pretty much the same as curImage).
3. Render the "erase line" (path) currently being drawn into the bitmap context (using kCGBlendModeClear to clear pixels).
4. Extract the entire image into the offscreen buffer (curImage = UIGraphicsGetImageFromCurrentImageContext();)
5. Finally, blit the offscreen buffer to the view's CGContext.
That's horrible, performance-wise. Using Instruments' Time tool, it's painfully obvious where the problems with this method are:
UIGraphicsBeginImageContextWithOptions is expensive
Drawing the offscreen buffer twice is expensive
Extracting the entire image into an offscreen buffer is expensive
So naturally, the code performs horribly on a real iPad.
I'm not really sure what to do here. I've been trying to figure out how to clear pixels in a non-bitmap context, but as far as I can tell, relying on kCGBlendModeClear is a dead-end.
Any thoughts or suggestions? How do other iOS drawing apps handle erase?
Additional Info
I've been playing around with a CGLayer approach, as it does appear that CGContextSetBlendMode(context, kCGBlendModeClear) will work in a CGLayer, based on a bit of googling I've done.
However, I'm not super hopeful that this approach will pan out. Drawing the layer in drawRect (even using setNeedsDisplayInRect) is hugely non-performant; Core Graphics chokes while rendering each path in the layer via CGContextDrawLayerAtPoint (according to Instruments). As far as I can tell, using a bitmap context is definitely preferable here in terms of performance - the only problem, of course, being the above question (kCGBlendModeClear not working after I blit the bitmap context to the main CGContext in drawRect).
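For reference, the CGLayer approach mentioned above looks roughly like the following sketch; currentPath is the same path variable used in the surrounding code, and the other names are placeholders:
// Create the layer once, matched to the destination context, and reuse it.
CGContextRef viewContext = UIGraphicsGetCurrentContext();
CGLayerRef drawingLayer = CGLayerCreateWithContext(viewContext, self.bounds.size, NULL);
// Strokes (or erases, via kCGBlendModeClear) go into the layer's own context...
CGContextRef layerContext = CGLayerGetContext(drawingLayer);
CGContextSetBlendMode(layerContext, kCGBlendModeClear);
CGContextAddPath(layerContext, currentPath);
CGContextStrokePath(layerContext);
// ...and the layer is composited into the view from drawRect:.
CGContextDrawLayerAtPoint(viewContext, CGPointZero, drawingLayer);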
I've managed to get good results by using the following code:
- (void)drawRect:(CGRect)rect
{
    if (drawingStroke) {
        if (eraseModeOn) {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextBeginTransparencyLayer(context, NULL);
            [eraseImage drawAtPoint:CGPointZero];
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, ERASE_WIDTH);
            CGContextSetBlendMode(context, kCGBlendModeClear);
            CGContextSetStrokeColorWithColor(context, [[UIColor clearColor] CGColor]);
            CGContextStrokePath(context);
            CGContextEndTransparencyLayer(context);
        } else {
            [curImage drawAtPoint:CGPointZero];
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, self.lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeNormal);
            CGContextSetStrokeColorWithColor(context, self.lineColor.CGColor);
            CGContextStrokePath(context);
        }
    } else {
        [curImage drawAtPoint:CGPointZero];
    }
    self.empty = NO;
}
The trick was to wrap the following in CGContextBeginTransparencyLayer / CGContextEndTransparencyLayer calls:
1. Blitting the erase background image into the context.
2. Drawing the "erase" path on top of the erase background image, using kCGBlendModeClear.
Since both the erase background image's pixel data and the erase path are in the same layer, this has the effect of clearing the pixels.
2D graphics APIs follow a painting paradigm. When you are painting, it's hard to remove paint you've already put on the canvas, but super easy to add more paint on top. The blend modes in a bitmap context give you a way to do something hard (scrape paint off the canvas) in a few lines of code. Those few lines don't make it a cheap operation, which is why it performs slowly.
The easiest way to fake clearing pixels, without the offscreen bitmap buffering, is to paint the background of your view over the image.
-(void)drawRect:(CGRect)rect
{
    if (drawingStroke) {
        CGColorRef lineCgColor = lineColor.CGColor;
        if (eraseModeOn) {
            // Use a concrete background color to display erasing. You could use
            // the backgroundColor property of the view, or define a color here.
            lineCgColor = [[self backgroundColor] CGColor];
        }
        [curImage drawAtPoint:CGPointZero];
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextAddPath(context, currentPath);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, lineWidth);
        CGContextSetBlendMode(context, kCGBlendModeNormal);
        CGContextSetStrokeColorWithColor(context, lineCgColor);
        CGContextStrokePath(context);
    } else {
        [curImage drawAtPoint:CGPointZero];
    }
}
The more difficult (but more correct) way is to do the image editing on a background serial queue in response to an editing event. When you get a new action, you do the bitmap rendering in the background to an image buffer. When the buffered image is ready, you call setNeedsDisplay to allow the view to be redrawn during the next update cycle. This is more correct as drawRect: should be displaying the content of your view as quickly as possible, not processing the editing action.
@interface ImageEditor : UIView
@property (nonatomic, strong) UIImage *imageBuffer;
@property (nonatomic, strong) dispatch_queue_t serialQueue;
@end

@implementation ImageEditor

- (dispatch_queue_t)serialQueue
{
    if (_serialQueue == nil)
    {
        _serialQueue = dispatch_queue_create("com.example.com.imagebuffer", DISPATCH_QUEUE_SERIAL);
    }
    return _serialQueue;
}

- (void)editingAction
{
    dispatch_async(self.serialQueue, ^{
        CGSize bufferSize = [self.imageBuffer size];
        UIGraphicsBeginImageContext(bufferSize);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextDrawImage(context, CGRectMake(0, 0, bufferSize.width, bufferSize.height), [self.imageBuffer CGImage]);
        // Do editing action: draw a clear line, solid line, etc.
        self.imageBuffer = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    [self.imageBuffer drawAtPoint:CGPointZero];
}

@end
The key is CGContextBeginTransparencyLayer: use clearColor and set CGContextSetBlendMode(context, kCGBlendModeClear);
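Distilled, the pattern that answer points at is a sketch like this (backgroundImage and erasePath are placeholder names, not from the thread):
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextBeginTransparencyLayer(context, NULL);
[backgroundImage drawAtPoint:CGPointZero];          // the pixels to erase from
CGContextSetBlendMode(context, kCGBlendModeClear);  // clear instead of paint
CGContextSetStrokeColorWithColor(context, [UIColor clearColor].CGColor);
CGContextAddPath(context, erasePath);
CGContextStrokePath(context);
CGContextEndTransparencyLayer(context);             // composites the cleared result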

Method returning image visible on iPhone screen intermittently returns nil

It looks like I might have found an answer to one of my earlier problems, and I'd be happy to post the solution on SO, but I first need to confirm it works properly.
The problem is that it seems to work most of the time, though not always. I've isolated the problematic code: it's a method I created whose purpose is to return a UIImage of what is currently visible on the device's screen. It looks like this:
+ (UIImage *)getImageVisibleOnScreenWith:(CGRect)boundingRect rotationAngle:(CGFloat)angle scalingRatio:(CGFloat)scale entireImageView:(UIImageView *)imageView actualVisibleView:(UIView *)visibleView {
    // Create a graphics context the size of the bounding rectangle
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Rotate and translate the context
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
    transform = CGAffineTransformRotate(transform, angle);
    transform = CGAffineTransformScale(transform, scale, -scale);
    CGContextConcatCTM(context, transform);
    // Draw the image into the context
    CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
    // Get an image from the context
    UIImage *viewImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
    // Clean up
    UIGraphicsEndImageContext();
    // Get the image currently on the screen (it's an intersection of specific UIImageViews)
    CGRect visibleImageRect = CGRectIntersection(imageView.frame, visibleView.frame);
    UIImage *visibleImage = (__bridge UIImage *)(CGImageCreateWithImageInRect((__bridge CGImageRef)(viewImage), visibleImageRect));
    return visibleImage;
}
I pass the result of this method on to another one, and I've noticed it sometimes returns nil - for no apparent reason, or at least I couldn't find one.
As usual, any ideas and help will be appreciated; also let me know if you need to see more code or if anything is unclear about the purpose of this method.

UIGraphicsGetCurrentContext seems to return nil

I'm trying to convert individual PDF pages into PNGs, and it worked perfectly until UIGraphicsGetCurrentContext suddenly started returning nil.
I'm trying to retrace my steps, but I'm not quite sure at which point this started happening. My frame is not 0, which I've seen can cause this problem, but other than that everything "looks" correct.
Here's the beginning of my code.
_pdf = CGPDFDocumentCreateWithURL((__bridge CFURLRef)_pdfFileUrl);
CGPDFPageRef myPageRef = CGPDFDocumentGetPage(_pdf, pageNumber);
CGRect aRect = CGPDFPageGetBoxRect(myPageRef, kCGPDFCropBox);
CGRect bRect = CGRectMake(0, 0, height / (aRect.size.height / aRect.size.width), height);
UIGraphicsBeginImageContext(bRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
Anybody have any idea what else might be causing the nil context?
It doesn't have to be called from drawRect:.
You can also call it after UIGraphicsBeginImageContext(bRect.size);
Check that bRect.size is not (0, 0) in the following line:
UIGraphicsBeginImageContext(bRect.size);
In my case, that was the reason the context returned on the following line was null.
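A quick guard for that case might look like this sketch (bRect comes from the question's code; the error handling is a placeholder):
// Bail out early if the computed size is empty; a zero-sized
// image context yields a nil current context.
if (CGSizeEqualToSize(bRect.size, CGSizeZero)) {
    NSLog(@"bRect.size is zero - cannot create an image context");
    return; // or handle the error however fits your flow
}
UIGraphicsBeginImageContext(bRect.size);
CGContextRef context = UIGraphicsGetCurrentContext(); // now non-nil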
Are you calling UIGraphicsGetCurrentContext() inside the drawRect: method? As far as I know, it can only be called within drawRect:; otherwise it will just return nil.
Indeed, it is possible to keep a CGContextRef for reuse after it has been set in the drawRect: method.
The point is: you need to push the context onto the stack before using it from anywhere else. Otherwise, the current context will be 0x0.
1. Add the ivars:
@interface RenderView : UIView {
    CGContextRef visualContext;
    BOOL renderFirst;
}
2. In your @implementation, first set renderFirst to TRUE before the view appears on screen, then:
-(void) drawRect:(CGRect) rect {
    if (renderFirst) {
        visualContext = UIGraphicsGetCurrentContext();
        renderFirst = FALSE;
    }
}
3. Render something to the context after the context has been set:
-(void) renderSomethingToRect:(CGRect) rect {
    UIGraphicsPushContext(visualContext);
    // For instance:
    CGContextSetRGBFillColor(visualContext, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(visualContext, rect);
}
Here is an example exactly matching this thread's case:
- (void) drawImage: (CGImageRef) img inRect: (CGRect) aRect {
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    visualContext = UIGraphicsGetCurrentContext();
    CGContextConcatCTM(visualContext, CGAffineTransformMakeTranslation(-aRect.origin.x, -aRect.origin.y));
    CGContextClipToRect(visualContext, aRect);
    CGContextDrawImage(visualContext, aRect, img);
    // this can be used for drawing the image on a CALayer
    self.layer.contents = (__bridge id) img;
    [CATransaction flush];
    UIGraphicsEndImageContext();
}
And drawing an image using the context captured earlier in this post:
-(void) drawImageOnContext: (UIImage *) someIm onPosition: (CGPoint) aPos {
    UIGraphicsPushContext(visualContext);
    CGContextDrawImage(visualContext, CGRectMake(aPos.x, aPos.y, someIm.size.width, someIm.size.height), someIm.CGImage);
}
Do not call the UIGraphicsPopContext() function until you have finished rendering your objects to the context.
It seems that the CGContextRef is removed from the top of the graphics stack automatically when the calling method finishes.
Anyway, this example is something of a hack - not planned or endorsed by Apple. The solution is very unstable and works only with direct method calls inside a single UIView that is at the top of the screen. With performSelector calls, the context does not render any results to the screen. So I suggest using a CALayer as the rendering target instead of using the graphics context directly.
Hope it helps.
