I'm trying to take a screenshot with UIActivityViewController and save it into Photos on an iPhone/iPad device. In the simulator everything shows up correctly, but when I switch to a real device only part of the image appears. Here are the screenshots:
In the simulator (you can see a background, a green line and a star image)
On a real device (you can see only the star image; everything else is gone)
I merge those three different UIImages into one image so that I can take a screenshot.
First I merge the background image (the bridge UIImage) with the star image.
-(UIImage*)mergeUIImageView:(UIImage*)bkgound
FrontPic:(UIImage*)fnt
FrontPicX:(CGFloat)xPos
FrontPicY:(CGFloat)yPos
FrontPicWidth:(CGFloat)picWidth
FrontPicHeight:(CGFloat)picHeight
FinalSize:(CGSize)finalSize
{
UIGraphicsBeginImageContext(CGSizeMake([UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width));
// bkgound - is the bridge image
[bkgound drawInRect:CGRectMake(0, 0, [UIScreen mainScreen].bounds.size.height, [UIScreen mainScreen].bounds.size.width)];
// fnt - is the star image
[fnt drawInRect:CGRectMake(xPos, yPos, picWidth, picHeight)];
// merged image
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
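For reference, a call site looks roughly like this (the image names and coordinates are placeholders for illustration):
UIImage *bridge = [UIImage imageNamed:@"bridge"]; // background picture
UIImage *star = [UIImage imageNamed:@"star"]; // foreground picture
UIImage *merged = [self mergeUIImageView:bridge
FrontPic:star
FrontPicX:100.0f
FrontPicY:50.0f
FrontPicWidth:60.0f
FrontPicHeight:60.0f
FinalSize:[UIScreen mainScreen].bounds.size];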
Then I merge this picture with the OpenGL-rendered picture, which is the green line.
a) First I convert the OpenGL framebuffer contents to a UIImage using this function
-(UIImage *) glToUIImage {
float scaleFactor = [[UIScreen mainScreen] scale];
CGRect screen = [[UIScreen mainScreen] bounds];
CGFloat image_height = screen.size.width * scaleFactor;
CGFloat image_width = screen.size.height * scaleFactor;
NSInteger myDataLength = image_width * image_height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, image_width, image_height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < image_height; y++)
{
for(int x = 0; x < image_width * 4; x++)
{
buffer2[(int)((image_height - 1 - y) * image_width * 4 + x)] = buffer[(int)(y * 4 * image_width + x)];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * image_width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(image_width, image_height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
}
b)Then I merge this opengl image with my above image(bridge + star) by using the same function above
-(UIImage*)screenshot
{
// get opengl image from above function
UIImage *image = [self glToUIImage];
CGRect pos = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(image.size);
[image drawInRect:pos];
[self.background.image drawInRect:pos];
UIImage* final = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return final;
}
And it works great in the simulator (iPhone, iPad, iPhone with Retina and iPad with Retina, version 6.0). However, when I switch to a real device (iPhone 4/4S/5, iPad 2/mini/Retina) it only shows the star image. The Xcode version is 4.6.3, the base SDK is the latest iOS (6.1) and the iOS deployment target is 5.0. Could you guys tell me how to fix it? Thanks.
The problem is that iOS 6.0 will not keep the buffer all the time; it erases it after the frame is presented. However, the screenshot reads its data from that buffer, which is why I kept getting a black background. Adding a category so the device keeps the buffer contents (retained backing) solves this problem.
@interface CAEAGLLayer (Retained)
@end
@implementation CAEAGLLayer (Retained)
- (NSDictionary *)drawableProperties
{
return @{kEAGLDrawablePropertyRetainedBacking : @(YES)};
}
@end
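Alternatively, instead of the category you can set the same property directly when you configure the layer, before its renderbuffer storage is allocated (a sketch; it assumes your view's layerClass is CAEAGLLayer):
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.drawableProperties = @{kEAGLDrawablePropertyRetainedBacking : @YES,
kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8};
Note that retained backing can have a performance cost, so you may prefer to enable it only while screenshots are needed.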
Related
I want to ask about an image-processing mechanism. I am developing an iOS app which uses OpenGL ES for hand-writing on a view. I have a save function that converts the view, with all its drawing, to an image and saves it to the Photo Library.
I can convert the content of the view to an image easily using the code below.
(Note: the following code is not the problem. Its purpose is just to convert the content of the view to an image, and it works perfectly; I show it here for reference.)
// Get the size of the backing CAEAGLLayer
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != &UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
} else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
The problem is that I want to determine whether the view has any drawing at all. If there is no drawing, there is nothing to save (saving a blank image is useless), so my idea is to check whether the image contains any non-transparent pixel.
My solution
Convert my drawing view to an image (its pixels have an alpha channel)
Check whether the image has any pixel with a non-zero alpha value
If yes, the user has actually drawn something -> save
If no, the user has drawn nothing, or has erased everything -> don't save
I know the brute-force approach of walking through all the pixels, but it seems like the worst way and should only be used if there is no more efficient alternative.
So, is there an efficient way to check this?
I found that the brute-force approach is not as slow as I thought. It takes less than about 200 milliseconds to walk through all the pixel data of an image the size of an iPad Pro screen, and likewise on an iPad mini 2.
So I think using brute force is acceptable.
Following is the code used for the check:
CGImageRef imageRef = [selfImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger total = width * height * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(total, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef tempContext = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(tempContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(tempContext);
// Now your rawData contains the image data in the RGBA8888 pixel format
BOOL empty = YES;
for (int i = 0 ; i < total ;) {
CGFloat alpha = ((CGFloat) rawData[i + 3] ) / 255.0f;
// CGFloat red = ((CGFloat) rawData[i] ) / alpha;
// CGFloat green = ((CGFloat) rawData[i + 1] ) / alpha;
// CGFloat blue = ((CGFloat) rawData[i + 2] ) / alpha;
i += bytesPerPixel;
if (alpha != 0) {
empty = NO;
break;
}
}
free(rawData);
if (empty) {
//Do something
} else {
//Do other thing
}
If there is any improvement or a more efficient algorithm, please post it here; I would really appreciate it.
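One possible improvement I am considering (an untested sketch, not part of the code above): draw the image into an alpha-only bitmap context so there is only one byte per pixel to scan instead of four:
- (BOOL)imageHasVisiblePixels:(UIImage *)image
{
CGImageRef imageRef = [image CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
// alpha-only context: colorspace is NULL, 8 bits per component, 1 byte per pixel
uint8_t *alphaData = (uint8_t *)calloc(width * height, sizeof(uint8_t));
CGContextRef ctx = CGBitmapContextCreate(alphaData, width, height, 8, width,
NULL, (CGBitmapInfo)kCGImageAlphaOnly);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(ctx);
BOOL hasVisiblePixels = NO;
for (size_t i = 0; i < width * height; i++) {
if (alphaData[i] != 0) {
hasVisiblePixels = YES;
break;
}
}
free(alphaData);
return hasVisiblePixels;
}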
I'm developing a universal app for iOS which will dynamically generate its own full-screen bitmaps (a pointer to 32-bit pixel data in a byte buffer). It reacts to touch events and needs to draw responsively as the user touches (e.g. zooms/pans). At the start of my app I can see that the display is scaled by 2x on my Retina iPad and iPod touch. My code currently creates and displays bitmaps correctly, but at half the native resolution of the display. I can see the native resolution using the view's nativeBounds, but I would like to create and display my bitmaps at native resolution without any scaling. I tried changing the transform scale in the drawRect: method, but it didn't work correctly. Below is my drawRect: code:
- (void)drawRect:(CGRect)rect
{
UInt32 * pixels;
pixels = (UInt32 *)[TheFile thePointer];
NSUInteger width = [TheFile iScreenWidth];
NSUInteger height = [TheFile iScreenHeight];
NSUInteger borderX = [TheFile iBorderX];
NSUInteger borderY = [TheFile iBorderY];
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixels, width-borderX*2, height-borderY*2, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
CGContextSetInterpolationQuality(gtx, kCGInterpolationNone); // does this speed it up?
// Create a rect to display
CGRect imageRect = CGRectMake(borderX, borderY, width - borderX*2, height - borderY * 2);
// Need to repaint the background that would show through (black)
if (borderX != 0 || borderY != 0)
{
[[UIColor blackColor] set];
UIRectFill(rect);
}
// Transform image (flip right side up)
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0, -1.0);
// Draw the image
CGContextDrawImage(context, imageRect, myimage); //image.CGImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(gtx);
CGImageRelease(myimage);
} /* drawRect() */
Edit: The answer below fixes both the performance issue (by using a UIImageView) and the scaling issue (by setting the proper display scale when initializing the UIImage). When the UIImage scale matches the display scale, the bitmap is displayed 1:1 at the native resolution of the device.
The problem with your code is that after the result image is created, the code draws that image into the current graphics context configured for drawRect:. It is the CPU that does this drawing, which is why it takes 70 ms. Rendering the image with a UIImageView, or setting it as the contents of a layer, is not handled by the CPU; it is handled by the GPU, which is very good at this kind of work and therefore much faster. Since drawRect: also causes Core Animation to create a backing bitmap, which is useless in this case, you should create the image without drawRect: at all:
- (UIImage *)drawBackground {
UIImage *output;
UInt32 * pixels;
pixels = (UInt32 *)[TheFile thePointer];
NSUInteger borderX = [TheFile iBorderX];
NSUInteger borderY = [TheFile iBorderY];
NSUInteger width = [TheFile iScreenWidth] - 2 * borderX;
NSUInteger height = [TheFile iScreenHeight] - 2 * borderY;
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGFloat scale = [UIScreen mainScreen].scale;
// create bitmap graphic context
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrder32Big);
// create image
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
output = [UIImage imageWithCGImage:myimage
scale:scale
orientation:UIImageOrientationUp];
// clean up
CGColorSpaceRelease(colorSpace);
CGContextRelease(gtx);
CGImageRelease(myimage);
return output;
}
When the user triggers an event, suppose you use a gesture recognizer:
- (IBAction)handleTap:(UITapGestureRecognizer *)tap {
UIImage *background = [self drawBackground];
// when the background view is a UIImageView
self.backgroundView.image = background;
// self.backgroundView should have already set up in viewDidLoad
// based on your code snippet, you may need to configure background color
// self.backgroundView.backgroundColor = [UIColor blackColor];
// do other configuration if needed...
// when the background view is a plain UIView (or another UIView subclass)
self.backgroundView.layer.contents = (id)background.CGImage;
// unlike UIImageView, the size of the background view must exactly match
// the size of the background image; otherwise the image will be scaled.
}
I wrote this function in a test project:
static UIImage *drawBackground() {
// allocate bitmap buffer
CGFloat scale = [UIScreen mainScreen].scale;
CGRect screenBounds = [UIScreen mainScreen].bounds;
NSUInteger borderWidth = 1 * 2; // width of border is 1 pixel
NSUInteger width = scale * CGRectGetWidth(screenBounds) - borderWidth;
NSUInteger height = scale * CGRectGetHeight(screenBounds) - borderWidth;
NSUInteger bytesPerPixel = 4;
// test resolution begin
// tested on an iPhone 4 (320 x 480 points, 640 x 960 pixels), iOS 7.1
// the image is rendered by an UIImageView which covers whole screen.
// the content mode of UIImageView is center, which doesn't cause scaling.
width = scale * 310;
height = scale * 240;
// test resolution end
UInt32 *pixels = malloc((size_t)(width * height * bytesPerPixel));
// manipulate bitmap buffer
NSUInteger count = width * height;
unsigned char *byte = (unsigned char *)pixels;
for (int i = 0; i < count; i = i + 1) {
byte[0] = 100;
byte = byte + 1;
byte[0] = 100;
byte = byte + 1;
byte[0] = 0;
byte = byte + 2;
}
// create bitmap grahpic context
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrder32Big);
// create image
CGImageRef myimage = CGBitmapContextCreateImage(gtx);
UIImage *output = [UIImage imageWithCGImage:myimage
scale:scale
orientation:UIImageOrientationUp];
// clean up
CGColorSpaceRelease(colorSpace);
CGContextRelease(gtx);
CGImageRelease(myimage);
free(pixels);
return output;
}
I tested it on an iPhone 4 device and it seems fine to me. This is the screenshot:
I'm using VLCKit to play video in my app, and I need to be able to take a screenshot of the video at certain points. This is the code I'm using:
-(NSData*)generateThumbnail{
int s = 1;
UIScreen* screen = [UIScreen mainScreen];
if ([screen respondsToSelector:@selector(scale)]) {
s = (int) [screen scale];
}
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int width = viewport[2];
int height = [_profile.resolution integerValue];//viewport[3];
int myDataLength = width * height * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
for(int y1 = 0; y1 < height; y1++) {
for(int x1 = 0; x1 <width * 4; x1++) {
buffer2[(height - 1 - y1) * width * 4 + x1] = buffer[y1 * 4 * width + x1];
}
}
free(buffer);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
UIImage *image = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
NSData *thumbAsData = UIImageJPEGRepresentation(image, 5);
return thumbAsData;
}
To be honest, I have no idea how most of this works. I copied it from somewhere a while ago (I don't remember the source). It mostly works, but parts of the image frequently seem to be missing.
Can someone point me in the right direction? Most of the other posts I see regarding OpenGL screenshots are fairly old, and don't seem to apply.
Thanks.
I wrote a class to work around this problem.
Basically you take a screenshot directly from the screen, and then, if you want, you can take just a part of the image and also scale it.
Taking a screenshot of the screen captures everything: UIKit, OpenGL, AVFoundation, etc.
Here is the class: https://github.com/matteogobbi/MGScreenshotHelper/
Below are the useful functions, but I suggest you download (and star :D) my helper class directly ;)
/* Get the screenshot of the screen (useful when you have UIKit elements and OpenGL or AVFoundation stuff */
+ (UIImage *)screenshotFromScreen
{
CGImageRef UIGetScreenImage(void);
CGImageRef screen = UIGetScreenImage();
UIImage* screenImage = [UIImage imageWithCGImage:screen];
CGImageRelease(screen);
return screenImage;
}
/* Get the screenshot of a determinate rect of the screen, and scale it to the size that you want. */
+ (UIImage *)getScreenshotFromScreenWithRect:(CGRect)captureRect andScaleToSize:(CGSize)newSize
{
UIImage *image = [[self class] screenshotFromScreen];
image = [[self class] cropImage:image withRect:captureRect];
image = [[self class] scaleImage:image toSize:newSize];
return image;
}
#pragma mark - Other methods
/* Get a portion of an image */
+ (UIImage *)cropImage:(UIImage *)image withRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return croppedImage;
}
/* Scale an image */
+ (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize
{
UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
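For example, assuming the class is named MGScreenshotHelper as in the repository, a call site might look like this (the rect and size are just examples; note that cropImage: uses CGImageCreateWithImageInRect, which works in pixels, so on Retina screens you may need to multiply the rect by the screen scale):
// screenshot of the top-left 200x200 area of the screen, scaled down to 100x100
UIImage *thumb = [MGScreenshotHelper getScreenshotFromScreenWithRect:CGRectMake(0, 0, 200, 200)
andScaleToSize:CGSizeMake(100, 100)];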
OK, it turns out this is a problem only in the Simulator. On a device it seems to work 98% of the time.
I am new to OpenGL ES 2.0 development. The UIImage I get from a screenshot looks good on non-Retina devices (iPhone 4 and iPad), but the screenshot I get from Retina devices appears enlarged. Here is the code I used.
-(UIImage *) glToUIImage {
CGSize size = self.view.frame.size;
// the reason I set the height and width up-side-down is because my
// screenshot captured in landscape mode.
int image_height = (int)size.width;
int image_width = (int)size.height;
NSInteger myDataLength = image_width * image_height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, image_width, image_height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < image_height; y++)
{
for(int x = 0; x < image_width * 4; x++)
{
buffer2[(image_height - 1 - y) * image_width * 4 + x] = buffer[y * 4 * image_width + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * image_width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(image_width, image_height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
}
// screenshot function: combines my OpenGL image with the background image and
// saves the result into Photos.
-(UIImage*)screenshot
{
UIImage *image = [self glToUIImage];
CGRect pos = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(image.size);
[image drawInRect:pos];
[self.background.image drawInRect:pos];
UIImage* final = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// final picture I saved into Photos.
return final;
}
The function works, but the OpenGL image only shows partially on Retina devices. How can I solve this problem? Thanks!
Your code assumes the view size is measured in pixels, but it isn't; it is in points. You need to convert to the actual pixel size for each device. UIScreen has a scale property for this.
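A minimal sketch of the fix, reusing the variable names from the question (the rest of glToUIImage stays the same):
CGFloat scale = [UIScreen mainScreen].scale; // 1.0 on non-Retina, 2.0 on Retina
CGSize size = self.view.frame.size; // points
int image_height = (int)(size.width * scale); // pixels (landscape, as in the question)
int image_width = (int)(size.height * scale);
NSInteger myDataLength = image_width * image_height * 4;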
I am having this strange problem...
I have to capture the screen data and convert it into an image using the following code. The code works fine on the iPhone/iPad Simulator and on an iPhone device, but not on the iPad.
The iPhone device is running iOS 3.1.1 and the iPad is running iOS 4.2.
- (UIImage *)screenshotImage {
CGRect screenBounds = [[UIScreen mainScreen] bounds];
int backingWidth = screenBounds.size.width;
int backingHeight =screenBounds.size.height;
NSInteger myDataLength = backingWidth * backingHeight * 4;
GLuint *buffer = (GLuint *) malloc(myDataLength);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer);
for(int y = 0; y < backingHeight / 2; y++) {
for(int xt = 0; xt < backingWidth; xt++) {
GLuint top = buffer[y * backingWidth + xt];
GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + xt];
buffer[(backingHeight - 1 - y) * backingWidth + xt] = top;
buffer[y * backingWidth + xt] = bottom;
}
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData);
const int bitsPerComponent = 8;
const int bitsPerPixel = 4 * bitsPerComponent;
const int bytesPerRow = 4 * backingWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(backingWidth,backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
// myImage = [self addIconToImage:myImage];
return myImage;
}
Any idea what's going wrong?
Those two lines don't match
NSInteger myDataLength = backingWidth * backingHeight * 4;
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer);
GL_RGBA4 means 4 bits per channel, but you're allocating for 8 bits per channel, so the two don't match. The format glReadPixels expects here is GL_RGBA (together with GL_UNSIGNED_BYTE), which matches the 4 bytes per pixel you allocate; some drivers may tolerate the wrong token and behave as if GL_RGBA had been passed, which would explain why it appears to work on one device but not the other.
Also make sure you're reading from the correct buffer (front vs. back vs. any accidentally bound FBO). I recommend reading from the back buffer before doing the buffer swap.
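For example (a sketch assuming an OpenGL ES 1.1 setup like the snapshot code below, where _context and _colorRenderbuffer are whatever your class calls them):
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// ... draw the frame ...
// read the pixels before the buffer swap, while the buffer still holds the frame
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
[_context presentRenderbuffer:GL_RENDERBUFFER_OES];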
For iOS 4 or later I am using the multisampling technique for anti-aliasing. glReadPixels() cannot read directly from a multisampled FBO; you need to resolve it into a single-sampled buffer and then read from that. Please refer to the following post:
Reading data using glReadPixel() with multisampling
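In outline, the resolve step looks like this (a sketch using the APPLE_framebuffer_multisample extension; msaaFramebuffer and resolveFramebuffer are whatever your code calls its two FBOs):
// blit the multisampled framebuffer into the single-sampled (resolve) framebuffer
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();
// then read from the resolved framebuffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, resolveFramebuffer);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);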
Screenshot code from the OpenGL ES Apple documentation:
- (UIImage*)snapshot:(UIView*)eaglview
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}