Here's the question in brief:
For some layer compositing, I have to render an OpenGL texture in a CGContext. What's the fastest way to do that?
Thoughts so far:
Obviously, calling renderInContext won't capture OpenGL content, and glReadPixels is too slow.
For some 'context', I'm calling this method in a delegate class of a layer:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
I've considered using a CVOpenGLESTextureCache, but that requires an additional rendering, and it seems like some complicated conversion would be necessary post-rendering.
Here's my (terrible) implementation right now:
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte *) malloc(dataLength * sizeof(GLubyte));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
CGFloat scale = self.contentScaleFactor;
NSInteger widthInPoints, heightInPoints;
widthInPoints = width / scale;
heightInPoints = height / scale;
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
For anyone curious, the method shown above is not the fastest way.
When a UIView is asked for its contents, it will ask its layer (usually a CALayer) to draw them for it. The exception is OpenGL-based views, which use a CAEAGLLayer (a subclass of CALayer): the same request returns nothing. No drawing happens.
So, if you call:
[someUIView.layer drawInContext:someContext];
it will work, while
[someOpenGLView.layer drawInContext:someContext];
won't.
This also becomes an issue if you're asking a superview of any OpenGL-based view for its content: it will recursively ask each of its subviews for theirs, and any subview that uses a CAEAGLLayer will hand back nothing (you'll see a black rectangle).
I set out above to find an implementation of a delegate method of CALayer, drawLayer:inContext:, which I could use in any OpenGL-based view so that the view object itself (rather than the layer) would provide its contents. The delegate method is called automatically: Apple expects it to work this way.
Where performance isn't an issue, you can implement a variation of a simple snapshot method in your view. The method would look like this:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
GLint backingWidth, backingHeight;
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
CGContextDrawImage(ctx, self.bounds, iref);
// Clean up so the pixel buffer and CG objects don't leak on every pass
free(data);
CGDataProviderRelease(ref);
CGColorSpaceRelease(colorspace);
CGImageRelease(iref);
}
BUT! This is not performance-effective.
glReadPixels, as noted just about everywhere, is not a fast call. Starting in iOS 5, Apple exposed CVOpenGLESTextureCacheRef - basically, a shared buffer that can be used both as a CVPixelBufferRef and as an OpenGL texture. Originally, it was designed to be used as a way of getting an OpenGL texture from a video frame: now it's more often used in reverse, to get a video frame from a texture.
So a much better implementation of the above idea is to render into a texture backed by a CVPixelBufferRef (wrapped via CVOpenGLESTextureCacheCreateTextureFromImage), get direct access to those pixels, wrap them in a CGImage which you cache, and draw that cached image into your context in the delegate method above.
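The pixel buffer, texture cache and offscreen framebuffer have to be created once up front. The original answer doesn't show that setup, so here is a minimal sketch of it; the names (textureCache, renderTexture, renderTarget, layerRenderingFramebuffer) are assumptions chosen to match the render pass below, error handling is omitted, and note that the pixel format/byte order you choose here must match the bitmap info you later pass to CGImageCreate (with a BGRA buffer, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst is the matching combination).
#import <CoreVideo/CoreVideo.h>
// Ivars (or statics) shared with the render pass below
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureRef renderTexture;
CVPixelBufferRef renderTarget;
GLuint layerRenderingFramebuffer;
- (void) setUpRenderTargetWithSize:(CGSize)size eaglContext:(EAGLContext *)eaglContext {
    // The cache is tied to the GL context that will render into the texture
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);
    // IOSurface backing lets the GPU render into memory the CPU can later read without glReadPixels
    NSDictionary *attrs = [NSDictionary dictionaryWithObject:[NSDictionary dictionary]
                                                      forKey:(id)kCVPixelBufferIOSurfacePropertiesKey];
    CVPixelBufferCreate(kCFAllocatorDefault, (size_t)size.width, (size_t)size.height,
                        kCVPixelFormatType_32BGRA, (CFDictionaryRef)attrs, &renderTarget);
    // Wrap the pixel buffer as an OpenGL ES texture
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget,
                                                 NULL, GL_TEXTURE_2D, GL_RGBA,
                                                 (GLsizei)size.width, (GLsizei)size.height,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);
    // Attach the texture to a framebuffer: rendering into it fills renderTarget's pixels
    glGenFramebuffers(1, &layerRenderingFramebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, layerRenderingFramebuffer);
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
}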
The code is below. On each rendering pass, you draw your texture into the texture cache, which is linked to the CVPixelBufferRef:
- (void) renderToCGImage {
// Setup the drawing
[ochrContext useProcessingContext];
glBindFramebuffer(GL_FRAMEBUFFER, layerRenderingFramebuffer);
glViewport(0, 0, (int) self.frame.size.width, (int) self.frame.size.height);
[ochrContext setActiveShaderProgram:layerRenderingShaderProgram];
// Do the actual drawing
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, self.inputTexture);
glUniform1i(layerRenderingInputTextureUniform, 4);
glVertexAttribPointer(layerRenderingShaderPositionAttribute, 2, GL_FLOAT, 0, 0, kRenderTargetVertices);
glVertexAttribPointer(layerRenderingShaderTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, kRenderTextureVertices);
// Draw and finish up
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFinish();
// Try running this code asynchronously to improve performance
dispatch_async(PixelBufferReadingQueue, ^{
// Lock the base address (can't get the address without locking it).
CVPixelBufferLockBaseAddress(renderTarget, 0);
// Get a pointer to the pixels
uint32_t * pixels = (uint32_t*) CVPixelBufferGetBaseAddress(renderTarget);
// Wrap the pixel data in a data-provider object.
CGDataProviderRef pixelWrapper = CGDataProviderCreateWithData(NULL, pixels, CVPixelBufferGetDataSize(renderTarget), NULL);
// Get a color-space ref... can't this be done only once?
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
// Release the existing CGImage
CGImageRelease(currentCGImage);
// Get a CGImage from the data (the CGImage is used in the drawLayer: delegate method above)
currentCGImage = CGImageCreate(self.frame.size.width,
self.frame.size.height,
8,
32,
4 * self.frame.size.width,
colorSpaceRef,
kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
pixelWrapper,
NULL,
NO,
kCGRenderingIntentDefault);
// Clean up
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
CGDataProviderRelease(pixelWrapper);
CGColorSpaceRelease(colorSpaceRef);
});
}
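The background queue referenced above isn't shown in the original snippet; presumably it is a serial queue created once, along these lines (the name matches the ivar used in renderToCGImage):
// Created once, e.g. in init - serial so pixel-buffer reads never overlap
PixelBufferReadingQueue = dispatch_queue_create("com.example.pixelBufferReadingQueue", DISPATCH_QUEUE_SERIAL);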
And then implement the delegate method very simply:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
CGContextDrawImage(ctx, self.bounds, currentCGImage);
}
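To tie this back to the original compositing question: with drawLayer:inContext: implemented this way, rendering a containing layer into a CGContext should now pick up the GL content as well. A hedged usage sketch (containerView stands in for whatever superview of the GL view you are compositing):
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();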
Related
I want to take a snapshot of the content of the CCGLView in my view controller and display the resulting image in the same view controller.
Right now I'm using the following method to do so:
-(UIImage *) drawableToCGImage{
GLint backingWidth2, backingHeight2;
//backingHeight2=self.glView.frame.size.height;
//backingWidth2=self.glView.frame.size.width;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, self.glView.colorRenderBuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = self.glView.frame.origin.x, y = self.glView.frame.origin.y, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.glView.contentScaleFactor;
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
But it works only in the simulator; when I test it on a device, I don't get the content of the CCGLView. Why doesn't this method give the snapshot on the device? Or is there any other way to get it done?
I don't know why the previous method didn't work, but I found another way of doing it, and it's less expensive too :). I'm using the following method:
- (UIImage *)snapshot:(UIView *)view{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
For more info, see the following link: https://developer.apple.com/library/ios/qa/qa1817/_index.html
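For the use case in the question (snapshotting the CCGLView and showing the result in the same view controller), calling it would look roughly like this (glView and imageView are assumed outlet names, not from the original post):
// Capture the GL view and display the result elsewhere in the same view controller
UIImage *snapshotImage = [self snapshot:self.glView];
self.imageView.image = snapshotImage;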
I draw a figure on a CAEAGLLayer:
glDisable(GL_DEPTH_TEST);
glEnable(GL_POINT_SPRITE_OES);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR);
glClearColor(1.0, 1.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
It looks like this: src image
Then I take a snapshot of this context with this code:
- (UIImage*)snapshot:(UIView*)eaglview
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
and the result has color distortion: dst image.
What could be the problem?
If I change
glClearColor(1.0, 1.0, 1.0, 0.0);
to
glClearColor(0.0, 0.0, 0.0, 0.0);
the snapshot comes out without color distortion, but the figure's color changes: dst image
glEnable(GL_POINT_SPRITE_OES);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR);
//glClearColor(1.0, 1.0, 1.0, 0.0);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
glUseProgram(programHandle);
I wonder if your blend function is wrong - is this better?
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'm trying to take a screenshot on my iPad with OpenGL ES. This does work, but there are blank spots in it. These blank spots seem to be the rendered object. I've tried using the other buffers as well, but none of them seem to contain the actual 3D object.
I'm using the example code of String SDK.
Image of the issue:
EAGLView.m
- (void)createFramebuffer
{
if (context && !defaultFramebuffer)
{
[EAGLContext setCurrentContext:context];
// Handle scale
if ([self respondsToSelector:@selector(setContentScaleFactor:)])
{
float screenScale = [UIScreen mainScreen].scale;
self.contentScaleFactor = screenScale;
}
// Create default framebuffer object.
glGenFramebuffers(1, &defaultFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
// Create color render buffer and allocate backing store.
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
// Create and attach depth buffer
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
// Bind color buffer
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}
}
Screenshot code
EAGLView.m
- (UIImage*)snapshot:(UIView*)eaglview
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
//glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
String_OGL_TutorialViewController.m
- (void)render
{
[(EAGLView *)self.view setFramebuffer];
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
const int maxMarkerCount = 10;
struct MarkerInfoMatrixBased markerInfo[10];
int markerCount = [stringOGL getMarkerInfoMatrixBased: markerInfo maxMarkerCount: maxMarkerCount];
for (int i = 0; i < markerCount; i++)
{
float diffuse[4] = {0, 0, 0, 0};
diffuse[markerInfo[i].imageID % 3] = 1;
if ([context API] == kEAGLRenderingAPIOpenGLES2)
{
glUseProgram(program);
glUniform4fv(uniforms[UNIFORM_COLOR], 1, diffuse);
const float translationMatrix[16] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, -cubeScale, 1};
float modelViewMatrix[16];
float modelViewProjectionMatrix[16];
[String_OGL_TutorialViewController multiplyMatrix: translationMatrix withMatrix: markerInfo[i].transform into: modelViewMatrix];
[String_OGL_TutorialViewController multiplyMatrix: modelViewMatrix withMatrix: projectionMatrix into: modelViewProjectionMatrix];
glUniformMatrix4fv(uniforms[UNIFORM_MVP], 1, GL_FALSE, modelViewProjectionMatrix);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, ((float *)NULL) + 6 * 4 * 3);
// Validate program before drawing. This is a good check, but only really necessary in a debug build.
// DEBUG macro must be defined in your debug configurations if that's not already the case.
#if defined(DEBUG)
if (![self validateProgram:program])
{
NSLog(@"Failed to validate program: %d", program);
return;
}
#endif
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, NULL);
}
else
{
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 12, NULL);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 12, ((float *)NULL) + 6 * 4 * 3);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(markerInfo[i].transform);
glTranslatef(0, 0, -cubeScale);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, NULL);
}
UIImage *img = [(EAGLView *)self.view snapshot: self.view];
UIImageWriteToSavedPhotosAlbum(img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
[stringOGL pause];
}
}
I've been challenged with the task of putting an oddly sized image (with fixed proportion, though) on a GL_QUAD (well, a GL_TRIANGLE_STRIP resem--you got the point) and that seemed fairly easy to me at first, except for the part where I need to do this in iOS (4.2+). The solution is awkwardly easy anyway: just take the image, make a texture out of it, map it to the correct vertices and you're good to go.
As you may very well know, OpenGL ES textures are required to have width and height to be powers of 2, like 2, 4, 8, ..., 256, 512... (not sure this holds for regular OpenGL but I think it does... anyway, doesn't matter).
Since I have to download these images from the Intertubes (actually, the YouTube) I can't really do anything beforehand, so I have these 480x360 images (if I remember it correctly) and I have to splat them on my triangle strips. Fortunately we have texture mapping which allows us to select portions of the texture to be mapped where we want, so the obvious solution would be to (optionally up/downsize) and pad with some matte color the source image, and live with it.
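As an aside, the "select portions of the texture" part mentioned above comes down to the texture coordinates you hand to the quad. A sketch using the sizes discussed later in this question (256x192 content padded into a 256x256 texture); which edge ends up holding the matte padding depends on how the image is drawn into the bitmap context, so treat the t values as illustrative:
// Content spans the full width (s: 0..1) but only 192 of the 256 rows (t: 0..0.75)
static const GLfloat paddedTexCoords[] = {
    0.0f, 0.0f,     // vertex 0
    1.0f, 0.0f,     // vertex 1
    0.0f, 0.75f,    // vertex 2  (192.0f / 256.0f)
    1.0f, 0.75f,    // vertex 3
};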
Enter iOS. I get the data from the Intertubes, I happily build the corresponding UIImage, then I make another UIImage (yes, I know, bear with me, I'll optimize it later) just scaled down to the nearest power-of-2 in width, preserving aspect, so let's say 256x192. Then I make a bitmap context, paint it black (or, for what matters, any other colour, but I think you can see why I chose black in this case), draw the UIImage (a CGImage) on it, and return the UIImage built using the aforementioned bitmap context.
I am now the happy owner of a 256x256 image ready to be mapped on my GL_TRIANGLE_STRIP. Except that it does not work. I tried with a prepared 512x512 image and it worked flawlessly. The code I'm pasting here does not include the retrieval of the image from YouTube, I just saved it locally to rule out networking problems. Also, I'm not including the GL code as it's clearly working.
- (void)viewDidLoad {
images = [[NSMutableArray alloc] init];
//NSURL *url = [NSURL URLWithString:@"http://i.ytimg.com/vi/d2wVgzXWE9Y/0.jpg"];
NSString *path = [[NSBundle mainBundle] pathForResource:@"opengl_texture" ofType:@"jpg"];
NSData *texData = [NSData dataWithContentsOfFile:path];
UIImage *rawImage = [[UIImage alloc] initWithData:texData];
float newWidth = (float)(1 << (int)floor(log2f(rawImage.size.width)));
// Scale means the scale of the current image relative to the resulting image.
float scale = rawImage.size.width / newWidth;
UIImage *midImage = [UIImage imageWithCGImage:[rawImage CGImage] scale:scale orientation:UIImageOrientationUp];
NSLog(@"%f %f %f", midImage.size.width, midImage.size.height, scale);
[rawImage release];
UIImage *image = [self padImage:midImage withColor:[UIColor redColor]];
NSLog(@"%f %f", image.size.width, image.size.height);
[images addObject:image];
textures = malloc(sizeof(GLuint));
glGenTextures(1, textures);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(width * height * 4);
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4*width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[midImage release];
[image release];
[texData release];
}
- (UIImage *)padImage:(UIImage *)image withColor:(UIColor *)color {
CGFloat size = round(image.size.width);
NSLog(@"%f", size);
CGContextRef bContext = [self createBitmapContextOfSize:CGSizeMake(size, size)];
CGContextSetFillColorWithColor(bContext, [color CGColor]);
CGContextFillRect(bContext, CGRectMake(0, 0, size, size));
CGContextDrawImage(bContext, CGRectMake(0, 0, size, size), [image CGImage]);
UIImage *result = [UIImage imageWithCGImage:CGBitmapContextCreateImage(bContext)];
CGContextRelease(bContext);
return result;
}
- (CGContextRef) createBitmapContextOfSize:(CGSize) size {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (size.width * 4);
bitmapByteCount = (bitmapBytesPerRow * size.height);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL) {
fprintf (stderr, "Memory not allocated!");
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
size.width,
size.height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
if (context == NULL) {
free (bitmapData);
fprintf (stderr, "Context not created!");
return NULL;
}
CGContextSetAllowsAntialiasing (context, NO);
CGColorSpaceRelease( colorSpace );
return context;
}
Please don't bother mentioning obvious memory management issues unless you think they are the core of the problem. As for the "error message" or whatever: no, there's no such thing, the whole app just crashes.
Ok, now you can collectively smack my face with a large trout.
The problem was actually memory management: specifically, I was releasing objects that were created with convenience (autoreleasing) constructors (namely midImage and texData). Those don't transfer ownership to me, while explicit creation (alloc+init and friends) does, so only the latter should be released. How many times have I already crashed against this? Lots. Was that enough? Obviously not.
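For reference, a short illustration of the ownership rule under manual reference counting (variable names mirror the ones above; someCGImage stands in for the CGImage being wrapped):
// Convenience constructors return autoreleased objects you do NOT own - don't release them:
NSData *texData = [NSData dataWithContentsOfFile:path];      // not owned
UIImage *midImage = [UIImage imageWithCGImage:someCGImage];  // not owned
// alloc+init (and new/copy/retain) return objects you DO own - release them when done (pre-ARC):
UIImage *rawImage = [[UIImage alloc] initWithData:texData];
// ... use rawImage ...
[rawImage release];                                          // owned, so this release is correct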
Second question: where can I find a large post-it, like 1x1m at least?
I am having this strange problem...
I need to capture the screen data and convert it into an image using the following code. This code works fine in the iPhone/iPad simulator and on an iPhone device, but not on the iPad.
The iPhone device is running iOS 3.1.1 and the iPad is on iOS 4.2...
- (UIImage *)screenshotImage {
CGRect screenBounds = [[UIScreen mainScreen] bounds];
int backingWidth = screenBounds.size.width;
int backingHeight =screenBounds.size.height;
NSInteger myDataLength = backingWidth * backingHeight * 4;
GLuint *buffer = (GLuint *) malloc(myDataLength);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer);
for(int y = 0; y < backingHeight / 2; y++) {
for(int xt = 0; xt < backingWidth; xt++) {
GLuint top = buffer[y * backingWidth + xt];
GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + xt];
buffer[(backingHeight - 1 - y) * backingWidth + xt] = top;
buffer[y * backingWidth + xt] = bottom;
}
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData);
const int bitsPerComponent = 8;
const int bitsPerPixel = 4 * bitsPerComponent;
const int bytesPerRow = 4 * backingWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(backingWidth,backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
// myImage = [self addIconToImage:myImage];
return myImage;
}
Any idea what's going wrong?
These two lines don't match:
NSInteger myDataLength = backingWidth * backingHeight * 4;
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer);
GL_RGBA4 means 4 bits per channel, however you're allocating for 8 bits per channel. The format argument of glReadPixels here should be GL_RGBA, which together with GL_UNSIGNED_BYTE gives the 8 bits per channel you allocated for. On the iPhone GL_RGBA4 may be unsupported and fall back to GL_RGBA.
Also make sure you're reading from the correct buffer (front vs. left vs. any (accidentally) bound FBOs). I recommend reading from the back buffer before doing the buffer swap.
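Putting both points together, a hedged sketch of the corrected readback: 8 bits per channel, done while the framebuffer you rendered to is still bound, and before presenting (context and the render/frame buffer names are whatever your view already uses; under ES1 the token would be GL_RENDERBUFFER_OES):
NSInteger myDataLength = backingWidth * backingHeight * 4;           // 4 bytes per pixel (RGBA, 8 bits each)
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// ... flip the rows and build the CGImage as in the question ...
[context presentRenderbuffer:GL_RENDERBUFFER];                       // present only after reading back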
For iOS 4 or later, I'm using the multisampling technique for anti-aliasing. glReadPixels() cannot read directly from a multisampled FBO; you need to resolve it to a single-sampled buffer and then read from that. Please refer to the following post:
Reading data using glReadPixel() with multisampling
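For reference, the resolve step that post describes looks roughly like this (a sketch; msaaFramebuffer and resolveFramebuffer are hypothetical names for your multisampled and single-sampled FBOs):
// Resolve the multisampled framebuffer into the single-sampled one (APPLE_framebuffer_multisample)
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();
// Now glReadPixels can read from the resolved buffer
glBindFramebuffer(GL_FRAMEBUFFER, resolveFramebuffer);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);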
Snapshot code from the OpenGL ES Apple documentation:
- (UIImage*)snapshot:(UIView*)eaglview
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}