Taking an iOS screenshot mixing OpenGL ES and UIKit from a parent class - ios

I know this question has been answered quite a bit, but I seem to have a different situation. I'm trying to write a top-level function where I can take a screenshot of my app at any time, be it OpenGL ES or UIKit, and I won't have access to the underlying classes to make any changes.
The code I've been trying works for UIKit, but returns a black screen for the OpenGL ES parts:
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);

        for (UIView *subview in window.subviews)
        {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *) subview.layer;
            if ([eaglLayer respondsToSelector:@selector(drawableProperties)]) {
                NSLog(@"responds");
                /*eaglLayer.drawableProperties = @{
                    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
                    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
                };*/
                UIImageView *glImageView = [[UIImageView alloc] initWithImage:[self snapshotx:subview]];
                glImageView.transform = CGAffineTransformMakeScale(1, -1);
                [glImageView.layer renderInContext:context];
                //CGImageRef iref = [self snapshot:subview withContext:context];
                //CGContextDrawImage(context, CGRectMake(0.0, 0.0, 640, 960), iref);
            }
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
}

// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
and
- (UIImage *)snapshotx:(UIView *)eaglview
{
    GLint backingWidth, backingHeight;

    //glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    //don't know how to access the renderbuffer if i can't directly access the below code

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    8,
                                    32,
                                    width * 4,
                                    colorspace,
                                    // Fix from Apple implementation
                                    // (was: kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast).
                                    kCGBitmapByteOrderDefault,
                                    ref,
                                    NULL,
                                    true,
                                    kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else
    {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Any advice on how to mix the two without having the ability to modify the classes in the rest of the application?
Thanks!

I see what you tried to do there, and it is not really a bad concept. There does seem to be one big problem though: you cannot just call glReadPixels at any time you want. First, you should make sure the buffer is full of the data (pixels) you actually need, and second, it should be called on the same thread the GL rendering runs on...
If the GL views are not yours, you may have real trouble calling that screenshot method: you need to call some method that triggers binding their internal context, and if a view is animating you will have to know when the render cycle is done to ensure that the pixels you receive are the same as the ones presented on the view.
Anyway, if you get past all of that, you will still probably need to "jump" across different threads or wait for a cycle to finish. In that case I suggest you use blocks that return the screenshot image, passed as a method parameter so you can catch it whenever it is returned. That said, it would be best if you could override some methods on the GL views to return the screenshot image via a callback block, and write some recursive system on top of that.
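As a minimal sketch of that block idea (assuming you can add a method to the GL view, and that renderQueue, glContext, and framebuffer are hypothetical handles to the queue its GL work runs on, its EAGLContext, and its framebuffer object):
typedef void (^SnapshotCompletion)(UIImage *snapshot);

- (void)captureSnapshotWithCompletion:(SnapshotCompletion)completion
{
    // Hop onto the thread/queue the GL rendering runs on (assumed property).
    dispatch_async(self.renderQueue, ^{
        // Bind this view's own context and framebuffer before reading back.
        [EAGLContext setCurrentContext:self.glContext];              // assumed EAGLContext property
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, self.framebuffer);  // assumed GLuint ivar
        glFinish(); // make sure the current frame has finished rendering

        UIImage *image = [self snapshotx:self]; // the read-back method from the question

        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image);
        });
    });
}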
To sum it up, you need to anticipate multithreading, setting the context, binding the correct framebuffer, and waiting for everything to be rendered. This may well make it impossible to create a screenshot method that simply works for any application, view, or system without overriding some internal methods.
Note that you are simply not allowed to take a whole-screen screenshot (like the one taken by pressing the home and lock buttons at the same time) from within your application. The UIView part is so easy to capture because UIView is redrawn into a graphics context independently of the screen; it is as if you could take some GL pipeline, bind it to your own buffer and context, and draw there, which would let you grab its snapshot independently, on any thread.

Actually, I'm trying to do something similar: I'll post in full when I've ironed it out, but in brief:
use your superview's layer's renderInContext method
in the subviews which use OpenGL, implement the layer delegate's drawLayer:inContext: method
to render your view into the context, use a CVOpenGLESTextureCacheRef
Your superview's layer will call renderInContext: on each of its sublayers - by implementing the delegate method, your GLView can respond for its layer.
Using a texture cache is much, much faster than glReadPixels, which would probably be a bottleneck.
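Sam's texture-cache version will be faster; as a simpler illustration of just the delegate wiring, here is a sketch that assumes the GL view has some framebuffer read-back helper (called snapshotImage here purely for illustration):
// On the GL view, which acts as its own layer's delegate.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    UIImage *snapshot = [self snapshotImage]; // assumed read-back helper

    // Core Graphics' y-axis is flipped relative to UIKit, so flip the
    // context before drawing the CGImage to keep the snapshot upright.
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextDrawImage(ctx, self.bounds, snapshot.CGImage);
    CGContextRestoreGState(ctx);
}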
Sam

Related

CGBitmapContextCreate with undo not working well

I've struggled with an undo function for a few days and I really have no idea how to solve this problem.
I'm implementing a drawing function on a UIViewController that uses a custom UIImageView.
The problem is that when I trigger the undo function, the previous drawing disappears as expected, but when I draw again, the removed drawing reappears along with the current drawing.
Here is my code.
If you can see any problem, please let me know! It would be really helpful.
Thank you!
- (IBAction)Pan:(UIPanGestureRecognizer *)pan {
    CGContextRef ctxt = [self drawingContext];
    CGContextBeginPath(ctxt);
    CGContextMoveToPoint(ctxt, previous.x, previous.y);
    CGContextAddLineToPoint(ctxt, current.x, current.y);
    CGContextStrokePath(ctxt);

    CGImageRef img = CGBitmapContextCreateImage(ctxt);
    self.customImageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    previous = current;
    ...
}
- (CGContextRef)drawingContext {
    if (!context) { // context is an instance variable of type CGContextRef
        CGImageRef image = self.customImageView.image.CGImage;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        if (!colorSpace) return NULL;
        context = CGBitmapContextCreate(NULL,
                                        CGImageGetWidth(image), CGImageGetHeight(image),
                                        8, CGImageGetWidth(image) * 4,
                                        colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (!context) return NULL;
        CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, self.customImageView.image.size.height));
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, 0.0, self.customImageView.image.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, (CGRect){CGPointZero, self.customImageView.image.size}, image);
        CGContextRestoreGState(context);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, 4.f);
        CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    }
    return context;
}
I'm not sure that what you're trying to do is supported by NSUndoManager.
What I'd personally do is something like this:
Create an NSMutableArray to contain all of your drawing strokes. A struct with two CGPoint members should do for this task. If you want to go fancy, you could add a UIColor instance to set the colour, and more variables for other style information as well.
When your pan gesture starts, record the point at which it starts. When it ends, record that point, create an instance of your struct, and add it to the array.
Have a render procedure that clears the graphics context, iterates through the array, and draws each stroke to the screen.
For undo functionality, all you need to do is remove the last entry from the array and re-render (see the sketch below).
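A minimal sketch of that approach, reusing the context and customImageView from the question and assuming a hypothetical strokes mutable array property:
typedef struct {
    CGPoint start;
    CGPoint end;
} Stroke;

// When the pan gesture ends, box the stroke and remember it.
- (void)recordStrokeFrom:(CGPoint)start to:(CGPoint)end {
    Stroke s = (Stroke){start, end};
    [self.strokes addObject:[NSValue valueWithBytes:&s objCType:@encode(Stroke)]];
}

// Undo: drop the last stroke and replay the rest.
- (void)undoLastStroke {
    if (self.strokes.count == 0) return;
    [self.strokes removeLastObject];
    [self redrawStrokes];
}

- (void)redrawStrokes {
    CGContextRef ctxt = [self drawingContext];
    // Clear everything previously stroked into the bitmap context.
    // (If you keep a base image, redraw it here before replaying the strokes.)
    CGContextClearRect(ctxt, (CGRect){CGPointZero, self.customImageView.image.size});
    for (NSValue *value in self.strokes) {
        Stroke s;
        [value getValue:&s];
        CGContextBeginPath(ctxt);
        CGContextMoveToPoint(ctxt, s.start.x, s.start.y);
        CGContextAddLineToPoint(ctxt, s.end.x, s.end.y);
        CGContextStrokePath(ctxt);
    }
    CGImageRef img = CGBitmapContextCreateImage(ctxt);
    self.customImageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
}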

UIView to CGContext coordinate system in landscape orientation

I am trying to capture screen shots and make a movie. Everything works fine in portrait mode, but my app is only a landscape app.
I am using this method to add the UIImage.CGImageRef into the video:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image withSize:(CGSize)frameSize
{
    CVPixelBufferRef pxbuffer;
    if (pixelBufferPool == NULL) {
        NSLog(@"pixelBufferPool is null!");
        return NULL;
    }
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"failed to create pixel buffer pool pixel buffer %i", status);
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
                                                 frameSize.height, 8, 4 * frameSize.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);

    //CGAffineTransform t = CGAffineTransformIdentity;
    //t = CGAffineTransformRotate(t, 0/*M_PI_2*/);
    //t = CGAffineTransformScale(t, -1.0, 1.0);
    //CGContextConcatCTM(context, t);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
The video looks as if the images are skewed. I know it's because the coordinate systems of UIImage and CGContext are different, hence the skewed frames, but I don't know how to fix it.
I tried rotating, scaling, etc., but everything needs to be applied in the correct order.
What is the coordinate system of a UIImage in a landscape app, and the coordinate system of the CGContext?
What is the correct transform that I need?
Actually, if the app launches in landscape mode, the frame of the view should already have the correct size, so you don't need to rotate your context.
You only have to flip the y-axis of the context, as follows:
CGContextScaleCTM(c, 1, -1);
CGContextTranslateCTM(c, 0, -size.height);
Could this be happening because the size of the CGContext doesn't match the size of the CVPixelBuffer? When you draw your image, you're probably walking past the end of each row and starting to draw pixels on the following row, because your landscape image is wider than the pixel buffer.
What if you created your CGContext with a size based on the pixel buffer's size, and then applied your rotation transform only when the pixel buffer size doesn't match frameSize?
Or, alternatively, when you're in landscape mode, re-create the pixel buffer pool with a different size that matches the landscape dimensions.
(I'm not familiar with the CoreVideo API, so that's just a guess, though)
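Along the lines of that guess, here is a rough sketch of sizing the bitmap context from the pixel buffer itself rather than from frameSize (the CVPixelBuffer getters used here are standard; the y-flip matches the other answer):
// Use the pixel buffer's own geometry so the row stride always matches.
size_t bufferWidth  = CVPixelBufferGetWidth(pxbuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pxbuffer);
size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(pxbuffer);

CGContextRef context = CGBitmapContextCreate(pxdata, bufferWidth, bufferHeight,
                                             8, bytesPerRow, rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);

// Flip the y-axis so the CGImage lands right side up in the buffer.
CGContextTranslateCTM(context, 0, bufferHeight);
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context, CGRectMake(0, 0, bufferWidth, bufferHeight), image);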

UIImageView rotation without moving Image

Hi,
I want to rotate my UIImageView without moving the whole "png". The code below is only a test to see what happens:
_fanImage.transform = CGAffineTransformMakeRotation(45);
It turns, but the whole image moves. What can I do so that this doesn't happen?
You can try something like this: rotate the UIImage rather than the UIImageView.
- (UIImage *)imageWithTransform:(CGAffineTransform)transform {
    CGRect rect = CGRectMake(0, 0, self.size.height, self.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                self.size.width,
                                                self.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, rect, imageRef);

    // Get the resized image from the context and a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
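Presumably that method lives in a UIImage category; if so, usage would look something like this (note the degrees-to-radians conversion, as the answer below points out):
// Hypothetical usage, assuming imageWithTransform: is in a UIImage category:
CGFloat radians = 45.0 / 180.0 * M_PI;
_fanImage.image = [_fanImage.image imageWithTransform:CGAffineTransformMakeRotation(radians)];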
I think you mean that you want your image view to rotate around its center point. Is that right? If so, that's what a view should do by default.
You should do a search on "Translating, Scaling, and Rotating Views" in Xcode and read the resulting article.
Note that all of iOS's angles are specified in radians, not degrees.
Your sample images aren't really helpful, since we can't see the frame that the image view is drawn into. It's almost impossible to tell what your image views are doing and what they are supposed to be doing instead based on the pictures you linked from your dropbox.
A full 360 degrees is 2π radians.
You should use
CGFloat degrees = 45;
CGFloat radians = degrees/180*M_PI;
_fanImage.transform = CGAffineTransformMakeRotation(radians);
That will fix the rotation amount for your code, but probably not the rotation position.

ARC does not release CALayer

I have a code block that aims to capture a snapshot of PDF-based custom views for each page. To accomplish it, I create a view controller in a loop and iterate. The problem is that even though the view controller is released, the custom view isn't, and it shows up as still live in Instruments. Since the loop iterates many times, this blows up memory (up to 500 MB for 42 living objects) and crashes.
Here is the iteration code;
do
{
    __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
    CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
    [__pageViewController addMainLayers];
    [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
    UIImage *__snapshotImage = [__pageViewController captureSnapshot];
    [[AOAssetManager sharedManager] saveImage:__snapshotImage
                         forPublicationBundle:_publicationTileViewController.publication.bundle
                                       pageID:(__bridge NSString *)__pageID];
    [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
    __snapshotImage = nil;
    __pageViewController = nil;
    ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
_generatingSnapshots = NO;
And here is the captureSnapshot method:
- (UIImage *)captureSnapshot
{
    CGRect rect = [self.view bounds];
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return capturedImage;
}
Instruments: (screenshot not included)
Edit for further details:
The code below is from CUIPDFView, a subclass of UIView:
- (void)drawRect:(CGRect)rect
{
    [self drawInContext:UIGraphicsGetCurrentContext()];
}

- (void)drawInContext:(CGContextRef)context
{
    CGRect drawRect = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, self.bounds.size.height);
    CGContextSetRGBFillColor(context, 1.0000, 1.0000, 1.0000, 1.0f);
    CGContextFillRect(context, drawRect);

    // PDF page drawing expects a Lower-Left coordinate system, so we flip the coordinate system
    // before we start drawing.
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    // Grab the first PDF page
    CGPDFPageRef page = CGPDFDocumentGetPage(_pdfDocument, _pageNumberToUse);
    // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing
    CGContextSaveGState(context);
    // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any
    // base rotations necessary to display the PDF page correctly.
    CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
    // And apply the transform.
    CGContextConcatCTM(context, pdfTransform);
    // Finally, we draw the page and restore the graphics state for further manipulations!
    CGContextDrawPDFPage(context, page);
    CGContextRestoreGState(context);
}
When I delete the drawRect method implementation, the memory allocation problem goes away, but obviously it can no longer draw the PDF.
Try putting an @autoreleasepool inside your loop:
do
{
    @autoreleasepool
    {
        __pageDictionary = CFDictionaryGetValue(_allPages, __pageID);
        CUIPageViewController *__pageViewController = [self _pageWithID:__pageID];
        [__pageViewController addMainLayers];
        [[APP_DELEGATE assetManager] temporarilyPasteSnapshotSource:__pageViewController.view];
        UIImage *__snapshotImage = [__pageViewController captureSnapshot];
        [[AOAssetManager sharedManager] saveImage:__snapshotImage
                             forPublicationBundle:_publicationTileViewController.publication.bundle
                                           pageID:(__bridge NSString *)__pageID];
        [[APP_DELEGATE assetManager] removePastedSnapshotSource:__pageViewController.view];
        __snapshotImage = nil;
        __pageViewController = nil;
        ind += 6 * 0.1 / CFDictionaryGetCount(_allPages);
    }
}
while (![(__bridge NSString *)(__pageID = CFDictionaryGetValue(__pageDictionary,
                                                               kMFMarkupKeyPageNextPageID)) isMemberOfClass:[NSNull class]]);
This will flush the autorelease pool each time through the loop, releasing all the autoreleased objects.
I would be surprised if this is actually a problem with ARC. An object that is still alive still holds a strong reference to the layer.
What does [AOAssetManager saveImage:...] do with the image? Are you sure it is not holding onto it?
Is _pageWithID: doing something that keeps a pointer to the CUIPageViewController around?
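One quick way to test that (a sketch, assuming you can edit the loop body): keep only a weak reference after nil-ing the strong one, then log it later; if it's still non-nil, something else is retaining the controller.
__weak CUIPageViewController *weakController = __pageViewController;
__pageViewController = nil;
dispatch_async(dispatch_get_main_queue(), ^{
    // Non-nil here means some other object still holds a strong reference.
    NSLog(@"controller still alive? %@", weakController);
});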

How to get a programmatic screenshot of map overlay which is an image in iOS

Goal: get a screenshot of a map with overlays (MKOverlayView) as well as the annotations.
What I have done:
I have a map view, an overlay view, and an annotation view present, all handled by one controller.
I am using the code provided by Apple to take the screenshot:
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        //for(self.mapView.overlayView.layer.)
        //[self.mapView.overlayView.layer renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}

// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem is that the above code captures only the map view and the annotation view. The overlay view is not present.
Also, to clarify, my overlay view (MKOverlayView) is an image, and it does not show up in the screenshot. I have not used OpenGL. I can see the annotation view (the default pin), but the screenshot does not capture the overlay tiles. I have been working on this for a long time. Any help would be much appreciated! Thanks!
Here is what my controller looks like:
->> MYController
->> MKMapViewOBject
->> MKMapViewOverlayView
->> MKMapOverlay
->> MKAnnotation
->> MKAnnotationView
So am I having a problem with my renderInContext:context?
More information:
The following code does the tile drawing in my overlay view:
UIGraphicsPushContext(context);
NSData *data = //get data
if (data)
{
    UIImage *img = [[UIImage alloc] initWithData:data];
    [img drawInRect:drawRect blendMode:kCGBlendModeNormal alpha:0.4];
    [img release];
}
UIGraphicsPopContext();
The problem was in drawMapRect:. It is called with different MKMapRect values on different threads when the map draws the overlay, so Apple expects your code to be thread-safe and able to run for multiple different visible map rects at once.
I am working on creating the overlays based on the given MKMapRect now, i.e. the visible portion of the overlay.
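For reference, a sketch of what a thread-safe drawMapRect:zoomScale:inContext: might look like, assuming a hypothetical tileImageForMapRect: helper that fetches the tile image without touching shared mutable state:
- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    // Hypothetical helper: must be safe to call from any thread and must
    // not mutate state shared with other concurrently drawn map rects.
    UIImage *tile = [self tileImageForMapRect:mapRect];
    if (!tile) return;

    // rectForMapRect: converts the map rect into this overlay view's space.
    CGRect drawRect = [self rectForMapRect:mapRect];
    UIGraphicsPushContext(context);
    [tile drawInRect:drawRect blendMode:kCGBlendModeNormal alpha:0.4];
    UIGraphicsPopContext();
}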
