I have a code block that captures snapshots of PDF-based custom views, one per page. To accomplish this I create a view controller in a loop and iterate over the pages. The problem is that even after the view controller is released, its custom view is not released and shows up as live in Instruments. Since the loop runs many times, this exhausts memory (up to 500 MB with 42 live views) and the app crashes.
Here is the iteration code:
do
{
    __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
    CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
    [__pageViewController addMainLayers];
    [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
    UIImage *__snapshotImage = [__pageViewController captureSnapshot];
    [[AOAssetManager sharedManager] saveImage:__snapshotImage
                         forPublicationBundle:_publicationTileViewController.publication.bundle
                                       pageID:(__bridge NSString *)__pageID];
    [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
    __snapshotImage = nil;
    __pageViewController = nil;
    ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);

_generatingSnapshots = NO;
And here is the captureSnapshot method:
- (UIImage *)captureSnapshot
{
    CGRect rect = [self.view bounds];
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return capturedImage;
}
Instruments: (screenshot not included)
Edit for further details:
The code below is from CUIPDFView, a subclass of UIView:
- (void)drawRect:(CGRect)rect
{
    [self drawInContext:UIGraphicsGetCurrentContext()];
}

- (void)drawInContext:(CGContextRef)context
{
    CGRect drawRect = CGRectMake(self.bounds.origin.x, self.bounds.origin.y,
                                 self.bounds.size.width, self.bounds.size.height);
    CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0f);
    CGContextFillRect(context, drawRect);

    // PDF page drawing expects a lower-left coordinate system, so we flip the
    // coordinate system before we start drawing.
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // Grab the first PDF page
    CGPDFPageRef page = CGPDFDocumentGetPage(_pdfDocument, _pageNumberToUse);

    // We're about to modify the context CTM to draw the PDF page where we want it,
    // so save the graphics state in case we want to do more drawing.
    CGContextSaveGState(context);

    // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a
    // PDF page. It will scale down to fit, including any base rotations necessary
    // to display the PDF page correctly.
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);

    // And apply the transform.
    CGContextConcatCTM(context, pdfTransform);

    // Finally, we draw the page and restore the graphics state for further manipulations!
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
}
When I delete the drawRect: implementation, the memory allocation problem goes away, but then of course the view can no longer render the PDF.
Try putting an @autoreleasepool inside your loop:
do
{
    @autoreleasepool
    {
        __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
        CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
        [__pageViewController addMainLayers];
        [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
        UIImage *__snapshotImage = [__pageViewController captureSnapshot];
        [[AOAssetManager sharedManager] saveImage:__snapshotImage
                             forPublicationBundle:_publicationTileViewController.publication.bundle
                                           pageID:(__bridge NSString *)__pageID];
        [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
        __snapshotImage = nil;
        __pageViewController = nil;
        ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
    }
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
This will flush the autorelease pool each time through the loop, releasing all the autoreleased objects.
I would be surprised if this is actually a problem with ARC. An object that is still alive still holds a strong reference to its layer, so if the view is alive, the layer will be too.
What does [AOAssetManager saveImage:...] do with the image? Are you sure it is not holding onto it?
Is _pageWithID: doing something that keeps a pointer to the CUIPageViewController around?
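One cheap way to check is with a weak reference; a minimal sketch, assuming ARC and reusing the names from the question:
// If something still retains the controller, the weak reference will
// remain non-nil after the strong reference is cleared.
__weak CUIPageViewController *weakController = __pageViewController;
__pageViewController = nil;
dispatch_async(dispatch_get_main_queue(), ^{
    if (weakController != nil) {
        NSLog(@"Controller still alive; something holds a strong reference.");
    }
});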
Related
I've struggled with an undo function for a few days and I really have no idea how to solve this problem.
I'm building a drawing feature in a UIViewController that uses a custom UIImageView.
The problem is that when I trigger undo, the previous drawing disappears as expected, but when I draw again, the removed drawing reappears along with the current drawing.
Here is my code.
If you can see any problem, please let me know! It would be really helpful.
Thank you!
- (IBAction)Pan:(UIPanGestureRecognizer *)pan {
    CGContextRef ctxt = [self drawingContext];
    CGContextBeginPath(ctxt);
    CGContextMoveToPoint(ctxt, previous.x, previous.y);
    CGContextAddLineToPoint(ctxt, current.x, current.y);
    CGContextStrokePath(ctxt);
    CGImageRef img = CGBitmapContextCreateImage(ctxt);
    self.customImageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    previous = current;
    ...
}
- (CGContextRef)drawingContext {
    if (!context) { // context is an instance variable of type CGContextRef
        CGImageRef image = self.customImageView.image.CGImage;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        if (!colorSpace) return NULL;
        context = CGBitmapContextCreate(NULL,
                                        CGImageGetWidth(image), CGImageGetHeight(image),
                                        8, CGImageGetWidth(image) * 4,
                                        colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (!context) return NULL;
        CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, self.customImageView.image.size.height));
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, 0.0, self.customImageView.image.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, (CGRect){CGPointZero, self.customImageView.image.size}, image);
        CGContextRestoreGState(context);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, 4.f);
        CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    }
    return context;
}
I'm unsure whether what you're trying to do is supported by NSUndoManager.
What I'd personally do is something like this:
Create an NSMutableArray to contain all of your drawing strokes. A struct with two CGPoint members (wrapped in an NSValue so it can be stored in the array) should do for this task. If you want to go fancy, you could add a UIColor instance to set the colour, and more variables for other style information as well.
When your Pan gesture starts, record the point at which it starts. When it ends, record that point, create an instance of your struct and add it to the array.
Have a render procedure that clears the graphics context, iterates through the array and draws to the screen.
For undo functionality, all you need to do is remove the last entry from the array and re-render (a minimal sketch follows this list).
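Here is a minimal sketch of that approach. All names are hypothetical, and the stroke struct is wrapped in NSValue so it can live in the array:
typedef struct {
    CGPoint start;
    CGPoint end;
} LineStroke;

@interface StrokeCanvasView : UIView
@property (nonatomic, strong) NSMutableArray *strokes; // NSValue-wrapped LineStrokes
@end

@implementation StrokeCanvasView

- (NSMutableArray *)strokes {
    if (!_strokes) _strokes = [NSMutableArray array];
    return _strokes;
}

- (void)addStrokeFrom:(CGPoint)start to:(CGPoint)end {
    LineStroke stroke = { start, end };
    [self.strokes addObject:[NSValue valueWithBytes:&stroke objCType:@encode(LineStroke)]];
    [self setNeedsDisplay];
}

- (void)undoLastStroke {
    if (self.strokes.count > 0) {
        [self.strokes removeLastObject];
        [self setNeedsDisplay]; // re-render without the removed stroke
    }
}

- (void)drawRect:(CGRect)rect {
    // The backing store is cleared before drawRect: runs (the default),
    // so we simply redraw every stroke still in the array.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 4.f);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    for (NSValue *value in self.strokes) {
        LineStroke stroke;
        [value getValue:&stroke];
        CGContextMoveToPoint(ctx, stroke.start.x, stroke.start.y);
        CGContextAddLineToPoint(ctx, stroke.end.x, stroke.end.y);
    }
    CGContextStrokePath(ctx);
}

@end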
I see a sharp increase in memory usage (from 39 MB to 186 MB on iPad) when the CGContextFillRect call in the code below executes. Is there something wrong here?
My application eventually crashes.
PS: Surprisingly, the memory spike is seen on 3rd and 4th gen iPads and not on the 2nd gen iPad.
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        [self setBackgroundColor:[UIColor clearColor]];
    }
    return self;
}

- (id)initWithFrame:(CGRect)iFrame andHollowCircles:(NSArray *)iCircles {
    self = [super initWithFrame:iFrame];
    if (self) {
        [self setBackgroundColor:[UIColor clearColor]];
        self.circleViews = iCircles;
    }
    return self;
}

- (void)drawHollowPoint:(CGPoint)iHollowPoint withRadius:(NSNumber *)iRadius {
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(currentContext, self.circleRadius.floatValue);
    [[UIColor whiteColor] setFill];
    CGContextAddArc(currentContext, iHollowPoint.x, iHollowPoint.y, iRadius.floatValue, 0, M_PI * 2, YES);
    CGContextFillPath(currentContext);
}

- (void)drawRect:(CGRect)rect {
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGContextSaveGState(currentContext);
    CGRect aRect = [self.superview bounds];
    [[UIColor whiteColor] setFill];
    CGContextFillRect(currentContext, aRect);
    CGContextSaveGState(currentContext);
    [[UIColor blackColor] setFill];
    CGContextFillRect(currentContext, aRect);
    CGContextRestoreGState(currentContext);
    for (MyCircleView *circleView in self.circleViews) {
        [self drawHollowPoint:circleView.center withRadius:circleView.circleRadius];
    }
    CGContextTranslateCTM(currentContext, 0, self.bounds.size.height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextSaveGState(currentContext);
}
This code doesn't quite make sense; I assume you've removed parts of it? You create a blank alpha mask and then throw it away.
If the above code is really what you're doing, you don't really need to draw anything. You could just create a 12MB memory area and fill it with repeating 1 0 0 0 (opaque black in ARGB) and then create an image off of that. But I assume you're actually doing more than that.
Likely you have this view configured with contentScaleFactor set to match the scale from UIScreen, and this view is very large. 3rd and 4th gen iPads have a Retina display, so the scale is 2, and the memory required to draw a view is 4x as large.
That said, you should only expect about 12 MB to hold a full-screen image (2048 × 1536 × 4 bytes). The fact that you're seeing roughly 10x that suggests something more is going on, but I suspect it's still related to drawing too many copies.
If possible, you can step the scale down to 1 to make retina and non-retina behave the same.
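For instance (a hedged sketch; dropping to scale 1 trades sharpness for a backing store a quarter of the size on Retina devices):
// Force non-Retina rendering for this view only.
view.contentScaleFactor = 1.0;
view.layer.contentsScale = 1.0;
[view setNeedsDisplay];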
EDIT:
Your edited code is very different from your original code. There's no attempt to make an image in this code. I've tested it out as best I can, and I don't see any surprising memory spike. But there are several oddities:
- You're not correctly balancing CGContextSaveGState with CGContextRestoreGState. That actually might cause a memory problem.
- Why are you drawing the rect first all in white and then all in black?
- Your rect is [self.superview bounds]. That's in the wrong coordinate space; you almost certainly mean [self bounds].
- Why do you flip the context right before returning from drawRect: and then save the graphics state? This doesn't make sense at all.
I would assume your drawRect: would look like this:
- (void)drawRect:(CGRect)rect {
    [[UIColor blackColor] setFill];
    UIRectFill(rect); // You're only responsible for drawing the area given in `rect`
    for (CircleView *circleView in self.circleViews) {
        [self drawHollowPoint:circleView.center withRadius:circleView.circleRadius];
    }
}
It looks like I may have found an answer to one of my earlier problems, and I'd be happy to post the solution on SO, though I first need to confirm it works properly.
The problem is that it only seems to work most of the time, not always. I've isolated the problematic code: it's a method I created whose purpose is to return a UIImage of what is currently visible on the device's screen. It looks like this:
+ (UIImage *)getImageVisibleOnScreenWith:(CGRect)boundingRect rotationAngle:(CGFloat)angle scalingRatio:(CGFloat)scale entireImageView:(UIImageView *)imageView actualVisibleView:(UIView *)visibleView {
    // Create a graphics context the size of the bounding rectangle
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate and translate the context
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, boundingRect.size.width / 2, boundingRect.size.height / 2);
    transform = CGAffineTransformRotate(transform, angle);
    transform = CGAffineTransformScale(transform, scale, -scale);
    CGContextConcatCTM(context, transform);

    // Draw the image into the context
    CGContextDrawImage(context, CGRectMake(-imageView.image.size.width / 2, -imageView.image.size.height / 2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);

    // Get an image from the context
    UIImage *viewImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];

    // Clean up
    UIGraphicsEndImageContext();

    // Get the image currently on the screen (it's an intersection of specific UIImageViews)
    CGRect visibleImageRect = CGRectIntersection(imageView.frame, visibleView.frame);
    UIImage *visibleImage = (__bridge UIImage *)(CGImageCreateWithImageInRect((__bridge CGImageRef)(viewImage), visibleImageRect));

    return visibleImage;
}
I pass the result of this method on to another one, and noticed it sometimes returns nil, for no apparent reason; at least I couldn't find one.
As usual, any ideas and help will be appreciated; also let me know if you need to see more code or if anything about the method's purpose is unclear.
I know this question is answered quite a bit, but I seem to have a different situation. I'm trying to write a top-level function where I can take a screenshot of my app at any time, be it OpenGL ES or UIKit, and I won't have access to the underlying classes to make any changes.
The code I've been trying works for UIKit, but returns a black screen for the OpenGL ES parts:
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        for (UIView *subview in window.subviews)
        {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *)subview.layer;
            if ([eaglLayer respondsToSelector:@selector(drawableProperties)]) {
                NSLog(@"responds");
                /*eaglLayer.drawableProperties = @{
                    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
                    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
                };*/
                UIImageView *glImageView = [[UIImageView alloc] initWithImage:[self snapshotx:subview]];
                glImageView.transform = CGAffineTransformMakeScale(1, -1);
                [glImageView.layer renderInContext:context];

                //CGImageRef iref = [self snapshot:subview withContext:context];
                //CGContextDrawImage(context, CGRectMake(0.0, 0.0, 640, 960), iref);
            }
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
}

// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

return image;
and
- (UIImage *)snapshotx:(UIView *)eaglview
{
    GLint backingWidth, backingHeight;

    //glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    //don't know how to access the renderbuffer if I can't directly access the below code

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    8,
                                    32,
                                    width * 4,
                                    colorspace,
                                    // Fix from Apple implementation
                                    // (was: kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast).
                                    kCGBitmapByteOrderDefault,
                                    ref,
                                    NULL,
                                    true,
                                    kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else
    {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Any advice on how to mix the two without having the ability to modify the classes in the rest of the application?
Thanks!
I see what you're trying to do there, and it's not really a bad concept. There does seem to be one big problem, though: you cannot just call glReadPixels at any time you want. First, you should make sure the buffer is full of the data (pixels) you actually need, and second, it must be called on the same thread the GL rendering happens on...
If the GL views are not yours, you may have real trouble calling that screenshot method: you need to call something that triggers binding of the view's internal context, and if it is animating, you will have to know when the render cycle is done to ensure that the pixels you receive match the ones presented on the view.
Anyway, if you get past all that, you will still probably need to "jump" between threads or wait for a render cycle to finish. In that case I suggest you use blocks that return the screenshot image, passed as a method parameter, so you can catch the image whenever it is returned. That said, it would be best if you could override some methods on the GL views to return the screenshot image via a callback block, and build a recursive system around that.
To sum it up, you need to anticipate multithreading, setting the context, binding the correct framebuffer, and waiting for everything to be rendered. This may well make it impossible to create a screenshot method that simply works for any application, view, and system without overriding some internal methods.
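A minimal sketch of the block-based approach suggested above; everything here is an assumption (the glRenderQueue property and the snapshotx: helper from the question), not a drop-in solution:
typedef void (^GLSnapshotHandler)(UIImage *snapshot);

- (void)snapshotGLView:(UIView *)glView completion:(GLSnapshotHandler)completion
{
    // Assumption: self.glRenderQueue is the serial dispatch queue the GL
    // rendering runs on, so the read happens between frames on that thread.
    dispatch_async(self.glRenderQueue, ^{
        glFinish(); // make sure the framebuffer contains the finished frame
        UIImage *image = [self snapshotx:glView]; // the glReadPixels helper above
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(image);
        });
    });
}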
Note that you are simply not allowed to take a screenshot of the whole screen (like the one taken by pressing the home and lock buttons at the same time) from within your application. The UIView part is so easy to capture because a UIView is redrawn into a graphics context independently of the screen; it is as if you could take some GL pipeline, bind it to your own buffer and context, and draw into it, which would let you grab its screenshot independently and on any thread.
Actually, I'm trying to do something similar. I'll post in full when I've ironed it out, but in brief:
- use your superview's layer's renderInContext: method
- in the subviews that use OpenGL, implement the layer delegate's drawLayer:inContext: method
- to render your view into the context, use a CVOpenGLESTextureCacheRef
Your superview's layer will call renderInContext: on each of its sublayers; by implementing the delegate method, your GLView responds for its layer.
Using a texture cache is much, much faster than glReadPixels, which will probably be a bottleneck.
Sam
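For illustration, here is a rough sketch of the texture-cache idea mentioned above. It assumes an existing EAGLContext (eaglContext), an already-bound framebuffer, and pixel dimensions width and height; it is only an outline, not a complete implementation:
#import <CoreVideo/CoreVideo.h>

// Create a texture cache tied to the GL context (assumed to exist).
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Create an IOSurface-backed pixel buffer; its memory is shared with the GPU,
// so no glReadPixels copy is needed later.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

// Wrap the pixel buffer in a GL texture.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// Attach it as the color target, render the scene into it, then lock the
// pixel buffer and read CVPixelBufferGetBaseAddress() to build a CGImage.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       CVOpenGLESTextureGetTarget(texture),
                       CVOpenGLESTextureGetName(texture), 0);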
I'm trying to convert individual PDF pages into PNGs here, and it worked perfectly until UIGraphicsGetCurrentContext suddenly started returning nil.
I'm trying to retrace my steps, but I'm not quite sure at which point this started happening. My frame is not zero-sized, which I've seen can cause this problem, but other than that everything "looks" correct.
Here's the beginning of my code:
_pdf = CGPDFDocumentCreateWithURL((__bridge CFURLRef)_pdfFileUrl);
CGPDFPageRef myPageRef = CGPDFDocumentGetPage(_pdf, pageNumber);
CGRect aRect = CGPDFPageGetBoxRect(myPageRef, kCGPDFCropBox);
CGRect bRect = CGRectMake(0, 0, height / (aRect.size.height / aRect.size.width), height);
UIGraphicsBeginImageContext(bRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
Anybody have any idea what else might be causing the nil context?
It doesn't have to be called from drawRect:.
You can also call it after UIGraphicsBeginImageContext(bRect.size);.
Check, in the following line, that bRect.size is not (0, 0):
UIGraphicsBeginImageContext(bRect.size);
In my case, this was the reason why the context returned on the following line was nil.
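A small defensive sketch of that check (hypothetical; it reuses the names from the question and assumes a method that returns an image or nil):
// If the PDF box math produced an empty rect, UIGraphicsGetCurrentContext()
// would return nil after this call.
if (CGRectIsEmpty(bRect)) {
    NSLog(@"bRect is empty; cannot create an image context");
    return nil;
}
UIGraphicsBeginImageContext(bRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();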
Are you calling UIGraphicsGetCurrentContext() inside of the drawRect method? As far as I know, it can only be called within drawRect, otherwise it will just return nil.
Indeed, it is possible to reuse a CGContextRef after it has been set in the drawRect: method.
The point is that you need to push the context onto the stack before using it from anywhere else; otherwise, the current context will be 0x0.
1. Add:
@interface RenderView : UIView {
    CGContextRef visualContext;
    BOOL renderFirst;
}
2. In your @implementation, first set renderFirst to TRUE before the view has appeared on the screen, then:
-(void) drawRect:(CGRect) rect {
    if (renderFirst) {
        visualContext = UIGraphicsGetCurrentContext();
        renderFirst = FALSE;
    }
}
3. Render something to the context after the context has been set:
-(void) renderSomethingToRect:(CGRect) rect {
    UIGraphicsPushContext(visualContext);
    // For instance
    CGContextSetRGBFillColor(visualContext, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(visualContext, rect);
}
Here is an example exactly matching the thread case:
- (void) drawImage:(CGImageRef) img inRect:(CGRect) aRect {
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    visualContext = UIGraphicsGetCurrentContext();
    CGContextConcatCTM(visualContext, CGAffineTransformMakeTranslation(-aRect.origin.x, -aRect.origin.y));
    CGContextClipToRect(visualContext, aRect);
    CGContextDrawImage(visualContext, aRect, img);
    // this can be used for drawing the image on a CALayer
    self.layer.contents = (__bridge id) img;
    [CATransaction flush];
    UIGraphicsEndImageContext();
}
And drawing an image onto the context that was captured earlier in this post (the parameter is a UIImage, since the body uses its size and CGImage properties):
-(void) drawImageOnContext:(UIImage *) someIm onPosition:(CGPoint) aPos {
    UIGraphicsPushContext(visualContext);
    CGContextDrawImage(visualContext, CGRectMake(aPos.x,
                                                 aPos.y, someIm.size.width,
                                                 someIm.size.height), someIm.CGImage);
}
Do not call UIGraphicsPopContext() until you have finished rendering your objects into the context.
It seems that the CGContextRef is removed from the top of the graphics stack automatically when the calling method finishes.
Anyway, this example is something of a hack, not something planned or endorsed by Apple. The solution is very unstable and works only with direct message sends inside a single UIView that is at the top of the screen. In the case of performSelector: calls, the context does not render any results to the screen. So I suggest using a CALayer as the rendering target instead of using the graphics context directly.
Hope it helps.