I've struggled with an undo function for a few days and I really have no idea how to solve this problem.
I'm making a drawing feature on a UIViewController that uses a custom UIImageView.
The problem is that when I trigger the undo function, the previous drawing is removed correctly, but when I draw again, the removed drawing reappears along with the current drawing.
Here is my code.
If you can see any problem, please let me know! It would be really helpful.
Thank you!
- (IBAction)Pan:(UIPanGestureRecognizer *)pan {
    CGContextRef ctxt = [self drawingContext];
    CGContextBeginPath(ctxt);
    CGContextMoveToPoint(ctxt, previous.x, previous.y);
    CGContextAddLineToPoint(ctxt, current.x, current.y);
    CGContextStrokePath(ctxt);

    CGImageRef img = CGBitmapContextCreateImage(ctxt);
    self.customImageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    previous = current;
    ...
}
- (CGContextRef)drawingContext {
    if(!context) { // context is an instance variable of type CGContextRef
        CGImageRef image = self.customImageView.image.CGImage;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        if(!colorSpace) return NULL;

        context = CGBitmapContextCreate(NULL,
            CGImageGetWidth(image), CGImageGetHeight(image),
            8, CGImageGetWidth(image) * 4,
            colorSpace, kCGImageAlphaPremultipliedLast
        );
        CGColorSpaceRelease(colorSpace);
        if(!context) return NULL;

        CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, self.customImageView.image.size.height));
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, 0.0, self.customImageView.image.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, (CGRect){CGPointZero, self.customImageView.image.size}, image);
        CGContextRestoreGState(context);

        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, 4.f);
        CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    }
    return context;
}
I'm not sure that what you're trying to do is supported by NSUndoManager.
What I'd personally do is something like this:
Create an NSMutableArray to contain all of your drawing strokes. A struct with two CGPoint values in it should do for this task. If you want to go fancy, you could add a UIColor instance to set the colour and more variables for other style information as well.
When your pan gesture starts, record the point at which it starts. When it ends, record that point, create an instance of your struct and add it to the array.
Have a render procedure that clears the graphics context, iterates through the array, and draws to the screen.
For undo functionality, all you need to do is remove the last entry from the array and re-render.
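A minimal sketch of this approach might look like the following (illustrative only; a tiny class stands in for the struct so it can go straight into the NSMutableArray, and names like strokes, startPoint and customImageView are assumptions, not your actual code):
@interface Stroke : NSObject
@property (nonatomic) CGPoint start;
@property (nonatomic) CGPoint end;
@end

@implementation Stroke
@end

// In the view controller, with NSMutableArray *strokes and CGPoint startPoint as ivars:

- (IBAction)pan:(UIPanGestureRecognizer *)pan
{
    CGPoint point = [pan locationInView:self.customImageView];
    if (pan.state == UIGestureRecognizerStateBegan) {
        startPoint = point;                            // remember where the stroke starts
    } else if (pan.state == UIGestureRecognizerStateEnded) {
        Stroke *stroke = [Stroke new];                 // record the finished stroke
        stroke.start = startPoint;
        stroke.end = point;
        [strokes addObject:stroke];
        [self render];
    }
}

- (void)undo
{
    [strokes removeLastObject];                        // drop the most recent stroke
    [self render];
}

- (void)render
{
    // Redraw everything from scratch into a fresh image, so removed strokes never come back.
    UIGraphicsBeginImageContextWithOptions(self.customImageView.bounds.size, NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 4.f);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    for (Stroke *stroke in strokes) {
        CGContextMoveToPoint(ctx, stroke.start.x, stroke.start.y);
        CGContextAddLineToPoint(ctx, stroke.end.x, stroke.end.y);
    }
    CGContextStrokePath(ctx);
    self.customImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}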
I'm using this code:
UIImageScanlineFloodfill
to do a UIImage color flood fill, and everything works great. It's filling the area I want with the color I want.
The next step is to get a new image with only the filled color and not the rest of the image, preferably with a transparent background. I've tried some things, but without success, since my C skills are very limited. Any suggestions on what to do?
Update:
I've managed to recreate the flood fill by doing the following:
Adding the points to an array:
[pointsArr addObject:[NSValue valueWithCGPoint: CGPointMake(x, y)]];
Then recreate the pixels like this:
UIGraphicsBeginImageContextWithOptions(self.originalImage.size, NO, self.originalImage.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, color.CGColor);
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextSetLineWidth(context, 2.0);

NSValue *firstValue = [pointsArr firstObject];
CGPoint prevValue = firstValue.CGPointValue;
for (NSValue *value in pointsArr) {
    CGContextMoveToPoint(context, prevValue.x, prevValue.y);
    CGContextAddLineToPoint(context, value.CGPointValue.x, value.CGPointValue.y);
    prevValue = value.CGPointValue;
}
CGContextDrawPath(context, kCGPathFillStroke);

UIImage *floodFilled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But this makes the process really slow on a high-res image - up to 20-30 seconds. I tried it with a UIBezierPath as well, but it's even slower. It goes fairly fast if I use a line width of 1, but then I get some white spots. Would love some help.
I have a code block which aims to capture a snapshot of PDF-based custom views, one for each page. To accomplish this, I create a view controller in a loop and iterate over the pages. The problem is that even though the view controller is released, the custom view is not released and still shows as live in Instruments. Since the loop runs many times, this blows up the memory (up to 500 MB for 42 living instances) and crashes.
Here is the iteration code:
do
{
    __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);

    CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
    [__pageViewController addMainLayers];

    [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];

    UIImage *__snapshotImage = [__pageViewController captureSnapshot];
    [[AOAssetManager sharedManager] saveImage:__snapshotImage
                         forPublicationBundle:_publicationTileViewController.publication.bundle
                                       pageID:(__bridge NSString *)__pageID];

    [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];

    __snapshotImage = nil;
    __pageViewController = nil;

    ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
_generatingSnapshots = NO;
And here is the captureSnapshot method:
- (UIImage *)captureSnapshot
{
    CGRect rect = [self.view bounds];

    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];

    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return capturedImage;
}
Instruments:
Edit for further details:
The code below is from CUIPDFView, a subclass of UIView:
- (void)drawRect:(CGRect)rect
{
    [self drawInContext:UIGraphicsGetCurrentContext()];
}

- (void)drawInContext:(CGContextRef)context
{
    CGRect drawRect = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, self.bounds.size.height);
    CGContextSetRGBFillColor(context, 1.0000, 1.0000, 1.0000, 1.0f);
    CGContextFillRect(context, drawRect);

    // PDF page drawing expects a lower-left coordinate system, so we flip the coordinate system
    // before we start drawing.
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // Grab the first PDF page
    CGPDFPageRef page = CGPDFDocumentGetPage(_pdfDocument, _pageNumberToUse);

    // We're about to modify the context CTM to draw the PDF page where we want it,
    // so save the graphics state in case we want to do more drawing
    CGContextSaveGState(context);

    // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page.
    // It will scale down to fit, including any base rotations necessary to display the PDF page correctly.
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);

    // And apply the transform.
    CGContextConcatCTM(context, pdfTransform);

    // Finally, we draw the page and restore the graphics state for further manipulations!
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
}
When I delete the drawRect: implementation, the memory allocation problem goes away, but obviously the PDF is no longer drawn.
Try putting an @autoreleasepool inside your loop:
do
{
    @autoreleasepool
    {
        __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);

        CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
        [__pageViewController addMainLayers];

        [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];

        UIImage *__snapshotImage = [__pageViewController captureSnapshot];
        [[AOAssetManager sharedManager] saveImage:__snapshotImage
                             forPublicationBundle:_publicationTileViewController.publication.bundle
                                           pageID:(__bridge NSString *)__pageID];

        [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];

        __snapshotImage = nil;
        __pageViewController = nil;

        ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
    }
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
This will flush the autorelease pool each time through the loop, releasing all the autoreleased objects.
I would be surprised if this is actually a problem with ARC. An object that is still alive still has a strong reference to the layer.
What does [AOAssetManager saveImage:...] do with the image? Are you sure it is not holding onto it?
Is _pageWithID: doing something that is keeping a pointer to CUIPageViewController around?
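One quick way to check whether the controller actually goes away (a diagnostic sketch, assuming ARC; adapt the names to your own code):
__weak CUIPageViewController *weakController = nil;
@autoreleasepool {
    CUIPageViewController *controller = [self _pageWithID:__pageID];
    weakController = controller;
    // ... capture the snapshot, save the image, etc. ...
}
// If something (the asset manager, a block, a cache) still retains the controller,
// this prints the object instead of "deallocated".
NSLog(@"controller after pool: %@", weakController ? (id)weakController : @"deallocated");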
I'm trying to build an eraser tool using Core Graphics, and I'm finding it incredibly difficult to make a performant eraser - it all comes down to:
CGContextSetBlendMode(context, kCGBlendModeClear)
If you google around for how to "erase" with Core Graphics, almost every answer comes back with that snippet. The problem is that it only (apparently) works in a bitmap context. If you're trying to implement interactive erasing, I don't see how kCGBlendModeClear helps you - as far as I can tell, you're more or less locked into erasing on an off-screen UIImage/CGImage and drawing that image in the famously non-performant [UIView drawRect].
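For reference, the typical use of that snippet looks something like this (an illustrative sketch against an offscreen image context; image, erasePath and ERASE_WIDTH are placeholder names):
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[image drawAtPoint:CGPointZero];                 // existing drawing
CGContextAddPath(ctx, erasePath);                // the finger's erase path
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, ERASE_WIDTH);
CGContextSetBlendMode(ctx, kCGBlendModeClear);   // punches transparent holes in the image
CGContextStrokePath(ctx);
UIImage *erased = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();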
Here's the best I've been able to do:
-(void)drawRect:(CGRect)rect
{
    if (drawingStroke) {
        if (eraseModeOn) {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
            CGContextRef context = UIGraphicsGetCurrentContext();
            [eraseImage drawAtPoint:CGPointZero];
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeClear);
            CGContextSetLineWidth(context, ERASE_WIDTH);
            CGContextStrokePath(context);
            curImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [curImage drawAtPoint:CGPointZero];
        } else {
            [curImage drawAtPoint:CGPointZero];
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeNormal);
            CGContextSetStrokeColorWithColor(context, lineColor.CGColor);
            CGContextStrokePath(context);
        }
    } else {
        [curImage drawAtPoint:CGPointZero];
    }
}
Drawing a normal line (!eraseModeOn) is acceptably performant; I'm blitting my off-screen drawing buffer (curImage, which contains all previously drawn strokes) to the current CGContext, and I'm rendering the line (path) being currently drawn. It's not perfect, but hey, it works, and it's reasonably performant.
However, because kCGBlendModeClear apparently does not work outside of a bitmap context, I'm forced to:
Create a bitmap context (UIGraphicsBeginImageContextWithOptions).
Draw my offscreen buffer (eraseImage, which is actually derived from curImage when the eraser tool is turned on - so really pretty much the same as curImage for arguments sake).
Render the "erase line" (path) currently being drawn to the bitmap context (using kCGBlendModeClear to clear pixels).
Extract the entire image into the offscreen buffer (curImage = UIGraphicsGetImageFromCurrentImageContext();)
And then finally blit the offscreen buffer to the view's CGContext
That's horrible, performance-wise. Using Instruments' Time Profiler, it's painfully obvious where the problems with this method are:
UIGraphicsBeginImageContextWithOptions is expensive
Drawing the offscreen buffer twice is expensive
Extracting the entire image into an offscreen buffer is expensive
So naturally, the code performs horribly on a real iPad.
I'm not really sure what to do here. I've been trying to figure out how to clear pixels in a non-bitmap context, but as far as I can tell, relying on kCGBlendModeClear is a dead-end.
Any thoughts or suggestions? How do other iOS drawing apps handle erase?
Additional Info
I've been playing around with a CGLayer approach, as it does appear that CGContextSetBlendMode(context, kCGBlendModeClear) will work in a CGLayer based on a bit of googling I've done.
However, I'm not super hopeful that this approach will pan out. Drawing the layer in drawRect (even using setNeedsDisplayInRect) is hugely non-performant; Core Graphics chokes while rendering each path in the layer in CGContextDrawLayerAtPoint (according to Instruments). As far as I can tell, using a bitmap context is definitely preferable here in terms of performance - the only problem, of course, being the above question (kCGBlendModeClear not working after I blit the bitmap context to the main CGContext in drawRect).
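For reference, here is roughly what I mean by the CGLayer approach (a sketch only; drawingLayer would be a CGLayerRef instance variable, and the other names come from the code above):
CGContextRef viewCtx = UIGraphicsGetCurrentContext();
if (drawingLayer == NULL) {
    drawingLayer = CGLayerCreateWithContext(viewCtx, self.bounds.size, NULL);
}
CGContextRef layerCtx = CGLayerGetContext(drawingLayer);
CGContextSaveGState(layerCtx);
CGContextAddPath(layerCtx, currentPath);
CGContextSetLineCap(layerCtx, kCGLineCapRound);
CGContextSetLineWidth(layerCtx, ERASE_WIDTH);
CGContextSetBlendMode(layerCtx, kCGBlendModeClear);   // clearing does work inside the layer
CGContextStrokePath(layerCtx);
CGContextRestoreGState(layerCtx);
CGContextDrawLayerAtPoint(viewCtx, CGPointZero, drawingLayer);   // composite back into the view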
I've managed to get good results by using the following code:
- (void)drawRect:(CGRect)rect
{
    if (drawingStroke) {
        if (eraseModeOn) {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextBeginTransparencyLayer(context, NULL);
            [eraseImage drawAtPoint:CGPointZero];

            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, ERASE_WIDTH);
            CGContextSetBlendMode(context, kCGBlendModeClear);
            CGContextSetStrokeColorWithColor(context, [[UIColor clearColor] CGColor]);
            CGContextStrokePath(context);
            CGContextEndTransparencyLayer(context);
        } else {
            [curImage drawAtPoint:CGPointZero];
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, self.lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeNormal);
            CGContextSetStrokeColorWithColor(context, self.lineColor.CGColor);
            CGContextStrokePath(context);
        }
    } else {
        [curImage drawAtPoint:CGPointZero];
    }
    self.empty = NO;
}
The trick was to wrap the following into CGContextBeginTransparencyLayer / CGContextEndTransparencyLayer calls:
Blitting the erase background image to the context
Drawing the "erase" path on top of the erase background image, using kCGBlendModeClear
Since both the erase background image's pixel data and the erase path are in the same layer, it has the effect of clearing the pixels.
2D graphics APIs follow a painting paradigm. When you are painting, it's hard to remove paint you've already put on the canvas, but super easy to add more paint on top. The blend modes with a bitmap context give you a way to do something hard (scrape paint off the canvas) with few lines of code. Those few lines of code do not make it a cheap operation, though, which is why it performs slowly.
The easiest way to fake clearing out pixels without having to do the offscreen bitmap buffering is to paint the background of your view over the image.
-(void)drawRect:(CGRect)rect
{
    if (drawingStroke) {
        CGColorRef lineCgColor = lineColor.CGColor;
        if (eraseModeOn) {
            // Use a concrete background color to display erasing. You could use the
            // backgroundColor property of the view, or define a color here.
            lineCgColor = [[self backgroundColor] CGColor];
        }
        [curImage drawAtPoint:CGPointZero];
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextAddPath(context, currentPath);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, lineWidth);
        CGContextSetBlendMode(context, kCGBlendModeNormal);
        CGContextSetStrokeColorWithColor(context, lineCgColor);
        CGContextStrokePath(context);
    } else {
        [curImage drawAtPoint:CGPointZero];
    }
}
The more difficult (but more correct) way is to do the image editing on a background serial queue in response to an editing event. When you get a new action, you do the bitmap rendering in the background to an image buffer. When the buffered image is ready, you call setNeedsDisplay to allow the view to be redrawn during the next update cycle. This is more correct as drawRect: should be displaying the content of your view as quickly as possible, not processing the editing action.
@interface ImageEditor : UIView

@property (nonatomic, strong) UIImage *imageBuffer;
@property (nonatomic, strong) dispatch_queue_t serialQueue;

@end

@implementation ImageEditor

- (dispatch_queue_t)serialQueue
{
    if (_serialQueue == nil)
    {
        _serialQueue = dispatch_queue_create("com.example.com.imagebuffer", DISPATCH_QUEUE_SERIAL);
    }
    return _serialQueue;
}

- (void)editingAction
{
    dispatch_async(self.serialQueue, ^{
        CGSize bufferSize = [self.imageBuffer size];

        UIGraphicsBeginImageContext(bufferSize);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextDrawImage(context, CGRectMake(0, 0, bufferSize.width, bufferSize.height), [self.imageBuffer CGImage]);

        // Do editing action, draw a clear line, solid line, etc.

        self.imageBuffer = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    [self.imageBuffer drawAtPoint:CGPointZero];
}

@end
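For the eraser case in the question, the "Do editing action" placeholder might be filled in roughly like this (a sketch; currentPath and ERASE_WIDTH are names borrowed from the question, and kCGBlendModeClear works here because this is a bitmap context):
CGContextAddPath(context, currentPath);             // the erase path recorded from touches
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineWidth(context, ERASE_WIDTH);
CGContextSetBlendMode(context, kCGBlendModeClear);  // clears pixels in the buffered image
CGContextStrokePath(context);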
The key is CGContextBeginTransparencyLayer, using clearColor, and setting CGContextSetBlendMode(context, kCGBlendModeClear);
I know this question is answered quite a bit, but I have a different situation, it seems. I'm trying to write a top-level function where I can take a screenshot of my app at any time, be it OpenGL ES or UIKit, and I won't have access to the underlying classes to make any changes.
The code I've been trying works for UIKit, but returns a black screen for the OpenGL ES parts:
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);

        for (UIView *subview in window.subviews)
        {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *)subview.layer;
            if ([eaglLayer respondsToSelector:@selector(drawableProperties)]) {
                NSLog(@"responds");
                /*eaglLayer.drawableProperties = @{
                    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
                    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
                };*/
                UIImageView *glImageView = [[UIImageView alloc] initWithImage:[self snapshotx:subview]];
                glImageView.transform = CGAffineTransformMakeScale(1, -1);
                [glImageView.layer renderInContext:context];

                //CGImageRef iref = [self snapshot:subview withContext:context];
                //CGContextDrawImage(context, CGRectMake(0.0, 0.0, 640, 960), iref);
            }
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
}

// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

return image;
and
- (UIImage *)snapshotx:(UIView *)eaglview
{
    GLint backingWidth, backingHeight;

    //glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    //don't know how to access the renderbuffer if i can't directly access the below code

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(
        width,
        height,
        8,
        32,
        width * 4,
        colorspace,
        // Fix from Apple implementation
        // (was: kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast).
        kCGBitmapByteOrderDefault,
        ref,
        NULL,
        true,
        kCGRenderingIntentDefault
    );

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Any advice on how to mix the two without having the ability to modify the classes in the rest of the application?
Thanks!
I see what you are trying to do there, and it is not really a bad concept. There does seem to be one big problem though: you cannot just call glReadPixels at any time you want. First of all, you should make sure the buffer is full with the data (pixels) you really need, and second, it should be called on the same thread that the GL part is working on...
If the GL views are not yours, you might have big trouble calling that screenshot method: you need to call some method that will trigger binding its internal context, and if it is animating you will have to know when the cycle is done to ensure that the pixels you receive are the same as the ones presented on the view.
Anyway, if you get past all of that, you will still probably need to "jump" through different threads or wait for a cycle to finish. In that case I suggest you use blocks that return the screenshot image, passed as a method parameter, so you can catch it whenever it is returned. That being said, it would be best if you could override some methods on the GL views to be able to return the screenshot image via a callback block, and write some recursive system.
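For example, a rough sketch of such a block-based method (all names here are illustrative assumptions, including the queue that is supposed to own the GL context):
typedef void (^SnapshotCompletion)(UIImage *snapshot);

- (void)snapshotOfGLView:(UIView *)glView completion:(SnapshotCompletion)completion
{
    // Hop onto whatever thread/queue owns the GL context before touching glReadPixels.
    dispatch_async(self.glQueue, ^{
        UIImage *image = [self snapshotx:glView];      // reads the pixels on the GL thread
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(image);         // hand the result back whenever it is ready
        });
    });
}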
To sum it up, you need to anticipate multithreading, setting the context, binding the correct frame buffer, and waiting for everything to be rendered. All of this may make it impossible to write a screenshot method that simply works for any application, view, or system without overriding some internal methods.
Note that you are simply not allowed to take a screenshot of the whole screen (like the one taken by pressing the home and lock buttons at the same time) from within your application. The UIView part is so easy to turn into an image because a UIView is redrawn into a graphics context independently of the screen; it is as if you could take some GL pipeline, bind it to your own buffer and context, and draw it there, which would let you grab its screenshot independently and on any thread.
Actually, I'm trying to do something similar: I'll post in full when I've ironed it out, but in brief:
use your superview's layer's renderInContext method
in the subviews which use openGL, implement the layer delegate's drawLayer:inContext: method
to render your view into the context, use a CVOpenGLESTextureCacheRef
Your superview's layer will call renderInContext: on each of its sublayers - by implementing the delegate method, your GLView can respond for its layer.
Using a texture cache is much, much faster than glReadPixels: that will probably be a bottleneck.
Sam
I'm trying to convert individual PDF pages into PNGs here, and it's worked perfectly until UIGraphicsGetCurrentContext suddenly started returning nil.
I'm trying to retrace my steps here, but I'm not quite sure that I know at which point this happened. My frame is not 0, which I see might create this problem, but other than that everything "looks" correct.
Here's the beginning of my code.
_pdf = CGPDFDocumentCreateWithURL((__bridge CFURLRef)_pdfFileUrl);
CGPDFPageRef myPageRef = CGPDFDocumentGetPage(_pdf, pageNumber);
CGRect aRect = CGPDFPageGetBoxRect(myPageRef, kCGPDFCropBox);
CGRect bRect = CGRectMake(0, 0, height / (aRect.size.height / aRect.size.width), height);
UIGraphicsBeginImageContext(bRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
Anybody have any idea what else might be causing the nil context?
It doesn't have to be called from drawRect:.
You can also call it after UIGraphicsBeginImageContext(bRect.size);
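For example (a minimal sketch, assuming bRect.size is nonzero):
UIGraphicsBeginImageContext(bRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();   // non-nil here, even outside drawRect:
// ... draw the PDF page into the context ...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();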
Check that, in the following line, bRect.size is not (0, 0):
UIGraphicsBeginImageContext(bRect.size);
In my case, this was the reason why the context returned on the following line was nil.
Are you calling UIGraphicsGetCurrentContext() inside of the drawRect method? As far as I know, it can only be called within drawRect, otherwise it will just return nil.
Indeed, it is possible to keep a CGContextRef object around for reuse after it has been set in the drawRect: method.
The point is that you need to push the context onto the stack before using it from anywhere else; otherwise, the current context will be 0x0.
1. Add:
@interface RenderView : UIView {
    CGContextRef visualContext;
    BOOL renderFirst;
}
2. In your @implementation, first set renderFirst to TRUE before the view has appeared on the screen, then:
-(void) drawRect:(CGRect) rect {
    if (renderFirst) {
        visualContext = UIGraphicsGetCurrentContext();
        renderFirst = FALSE;
    }
}
3. Render something to the context after the context has been set:
-(void) renderSomethingToRect:(CGRect) rect {
    UIGraphicsPushContext(visualContext);
    // For instance
    CGContextSetRGBFillColor(visualContext, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(visualContext, rect);
}
Here is an example exactly matching the thread case:
- (void) drawImage: (CGImageRef) img inRect: (CGRect) aRect {
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    visualContext = UIGraphicsGetCurrentContext();
    CGContextConcatCTM(visualContext, CGAffineTransformMakeTranslation(-aRect.origin.x, -aRect.origin.y));
    CGContextClipToRect(visualContext, aRect);
    CGContextDrawImage(visualContext, aRect, img);
    // this can be used for drawing image on CALayer
    self.layer.contents = (__bridge id) img;
    [CATransaction flush];
    UIGraphicsEndImageContext();
}
And here is drawing an image using the context that was stored earlier in this post:
-(void) drawImageOnContext: (CGImageRef) someIm onPosition: (CGPoint) aPos {
    UIGraphicsPushContext(visualContext);
    CGContextDrawImage(visualContext, CGRectMake(aPos.x, aPos.y,
                                                 CGImageGetWidth(someIm),
                                                 CGImageGetHeight(someIm)), someIm);
}
Do not call UIGraphicsPopContext() function until you need the context to render your objects.
It seems that CGContextRef is being removed from the top of the graphic stack automatically when the calling method finishes.
Anyway, this example seems to be something of a hack - not planned or proposed by Apple. The solution is very unstable and works only with direct method message calls inside the single UIView that is on top of the screen. In the case of performSelector calls, the context does not render any results to the screen. So I suggest using a CALayer as the rendering target instead of using the graphics context directly.
Hope it helps.