OpenGL snapshot changes color - ios

I draw a figure on a CAEAGLLayer:
glDisable(GL_DEPTH_TEST);
glEnable(GL_POINT_SPRITE_OES);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR);
glClearColor(1.0, 1.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
It looks like this: (src image)
Then I take a snapshot of the context with this code:
- (UIImage*)snapshot:(UIView*)eaglview
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
and here I get color distortion: (dst image).
What could be the problem?

If I change
glClearColor(1.0, 1.0, 1.0, 0.0);
to
glClearColor(0.0, 0.0, 0.0, 0.0);
the snapshot comes out without color distortion, but the shape itself ends up a different color. (dst image) The setup is now:
glEnable(GL_POINT_SPRITE_OES);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR);
//glClearColor(1.0, 1.0, 1.0, 0.0);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
glUseProgram(programHandle);

I wonder if your blend equation is wrong - is this better?
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
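For context, here is what the two blend-factor choices compute per channel (a sketch of the fixed-function blend stage with the default add equation, not code from the question):
// glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR):
//   out.rgb = src.rgb * src.rgb + dst.rgb * (1 - src.rgb)
//   out.a   = src.a   * src.a   + dst.a   * (1 - src.a)
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
//   out.rgb = src.rgb * src.a   + dst.rgb * (1 - src.a)
//   out.a   = src.a   * src.a   + dst.a   * (1 - src.a)
The clear color likely matters for the same reason: the snapshot wraps the pixels as kCGImageAlphaPremultipliedLast, and a buffer cleared to white with alpha 0.0 holds invalid premultiplied values (RGB larger than alpha), which CoreGraphics distorts. Clearing to (0, 0, 0, 0) is valid premultiplied transparent black, which would explain why the distortion disappears.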

Related

CoreGraphics Image render from OpenGL has black background

I am unable to render an image from an OpenGL context with a transparent background in CoreGraphics.
The rendered image has a black background.
This is the draw code
GLint default_frame_buffer = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &default_frame_buffer);
if (default_frame_buffer == 0) {
target = createBMGLRenderTarget(width, height);
setFiltering(target, BMGL_BilinearFiltering);
glBindFramebuffer(GL_FRAMEBUFFER, target->framebuffer);
}
glViewport(0, 0, width, height);
if (background) {
CGFloat red, green, blue, alpha;
[background getRed:&red green:&green blue:&blue alpha:&alpha];
glClearColor(red, green, blue, alpha);
} else {
glClearColor(0.f, 0.f, 0.f, 0.f);
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawer.draw(); // placeholder for the app's actual draw calls
glFlush();
glFinish();
GLubyte *data = (GLubyte *)malloc(width * height * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, width * height * 4, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef imgRef = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, NO, kCGRenderingIntentDefault);
UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
CGContextRef cgContext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
CGContextDrawImage(cgContext, CGRectMake(0, 0, size.width, size.height), imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
free(data);
CGDataProviderRelease(ref);
CGColorSpaceRelease(colorspace);
CGImageRelease(imgRef);
I have tried specifically setting opaque to false, as well as different blend modes, but it still adds a black background to the originally clear image. I am able to give the GLKView a transparent background, but rendering the image and then drawing its contents into a CGImage doesn't work.
Does anyone know why this is?
I believe your problem is in how you create the UIImage from raw RGBA data. To confirm this, check what your data holds at a pixel you know should be transparent: data[pixelIndex*4 + 3] should be zero wherever you expect transparency. If it is not zero, the issue is on the OpenGL side.
Anyway, the most probable reason your image is not transparent is that you are declaring the alpha as premultiplied with kCGImageAlphaPremultipliedLast. Try using kCGBitmapByteOrder32Big | kCGImageAlphaLast instead.
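A minimal sketch of that check, assuming data and width as in the snippet above (pixelX and pixelY are whatever coordinates you expect to be transparent; remember glReadPixels returns rows bottom-up relative to UIKit):
NSInteger pixelIndex = pixelY * width + pixelX;
GLubyte alpha = data[pixelIndex * 4 + 3];
NSLog(@"alpha at (%ld, %ld) = %u", (long)pixelX, (long)pixelY, alpha); // 0 means the GL output is transparent here
One more thing worth checking in the snippet itself: UIGraphicsBeginImageContextWithOptions is called with YES for the opaque parameter, and an opaque bitmap context composites transparent pixels onto black, so pass NO there if the transparency should survive the final draw.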

Fastest way to render OpenGL texture into CGContext

Here's the question in brief:
For some layer compositing, I have to render an OpenGL texture in a CGContext. What's the fastest way to do that?
Thoughts so far:
Obviously, calling renderInContext won't capture OpenGL content, and glReadPixels is too slow.
For some 'context', I'm calling this method in a delegate class of a layer:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
I've considered using a CVOpenGLESTextureCache, but that requires an additional rendering, and it seems like some complicated conversion would be necessary post-rendering.
Here's my (terrible) implemention right now:
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte *) malloc(dataLength * sizeof(GLubyte));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
CGFloat scale = self.contentScaleFactor;
NSInteger widthInPoints, heightInPoints;
widthInPoints = width / scale;
heightInPoints = height / scale;
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
For anyone curious, the method shown above is not the fastest way.
When a UIView is asked for its contents, it asks its layer (usually a CALayer) to draw them for it. The exception is OpenGL-based views, which use a CAEAGLLayer (a subclass of CALayer): the same method is called, but it returns nothing. No drawing happens.
So, if you call:
[someUIView.layer drawInContext:someContext];
it will work, while
[someOpenGLView.layer drawInContext:someContext];
won't.
This also becomes an issue if you're asking a superview of any OpenGL-based view for its content: it will recursively ask each of its subviews for theirs, and any subview that uses a CAEAGLLayer will hand back nothing (you'll see a black rectangle).
I set out above to find an implementation of a delegate method of CALayer, drawLayer:inContext:, which I could use in any OpenGL-based views so that the view object itself would provide its contents (rather than the layer). The delegate method is called automatically: Apple expects it to work this way.
Where performance isn't an issue, you can implement a variation of a simple snapshot method in your view. The method would look like this:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
GLint backingWidth, backingHeight;
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
CGContextDrawImage(ctx, self.bounds, iref);
// Clean up (the snippet as originally posted leaked all of these)
free(data);
CGDataProviderRelease(ref);
CGColorSpaceRelease(colorspace);
CGImageRelease(iref);
}
BUT! This is not efficient.
glReadPixels, as noted just about everywhere, is not a fast call. Starting in iOS 5, Apple exposed CVOpenGLESTextureCacheRef - basically, a shared buffer that can be used both as a CVPixelBufferRef and as an OpenGL texture. Originally, it was designed to be used as a way of getting an OpenGL texture from a video frame: now it's more often used in reverse, to get a video frame from a texture.
So a much better implementation of the above idea is to use the CVPixelBufferRef you get from CVOpenGLESTextureCacheCreateTextureFromImage, get direct access to those pixels, draw them into a CGImage which you cache and which is drawn into your context in the delegate method above.
The code is below. On each rendering pass, you draw your texture into the texture cache, which is linked to the CVPixelBufferRef:
- (void) renderToCGImage {
// Setup the drawing
[ochrContext useProcessingContext];
glBindFramebuffer(GL_FRAMEBUFFER, layerRenderingFramebuffer);
glViewport(0, 0, (int) self.frame.size.width, (int) self.frame.size.height);
[ochrContext setActiveShaderProgram:layerRenderingShaderProgram];
// Do the actual drawing
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, self.inputTexture);
glUniform1i(layerRenderingInputTextureUniform, 4);
glVertexAttribPointer(layerRenderingShaderPositionAttribute, 2, GL_FLOAT, 0, 0, kRenderTargetVertices);
glVertexAttribPointer(layerRenderingShaderTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, kRenderTextureVertices);
// Draw and finish up
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFinish();
// Try running this code asynchronously to improve performance
dispatch_async(PixelBufferReadingQueue, ^{
// Lock the base address (can't get the address without locking it).
CVPixelBufferLockBaseAddress(renderTarget, 0);
// Get a pointer to the pixels
uint32_t * pixels = (uint32_t*) CVPixelBufferGetBaseAddress(renderTarget);
// Wrap the pixel data in a data-provider object.
CGDataProviderRef pixelWrapper = CGDataProviderCreateWithData(NULL, pixels, CVPixelBufferGetDataSize(renderTarget), NULL);
// Get a color-space ref... can't this be done only once?
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
// Release the existing CGImage
CGImageRelease(currentCGImage);
// Get a CGImage from the data (the CGImage is used in the drawLayer: delegate method above)
currentCGImage = CGImageCreate(self.frame.size.width,
self.frame.size.height,
8,
32,
4 * self.frame.size.width,
colorSpaceRef,
kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
pixelWrapper,
NULL,
NO,
kCGRenderingIntentDefault);
// Clean up
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
CGDataProviderRelease(pixelWrapper);
CGColorSpaceRelease(colorSpaceRef);
});
}
And then implement the delegate method very simply:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
CGContextDrawImage(ctx, self.bounds, currentCGImage);
}

rendering a UIImage using openGL

I use the following code to render a UIImage using openGL. the Texture2D class is from Apple, so I assume it's correct. The image does not get displayed. I just get the background color produced by glClearColor. My app is based on GLpaint sample code from Apple, so the setup is correct and I am able to draw lines using openGL just fine.
Is this render code below missing something?
- (void) render:(UIImage *)image
{
[EAGLContext setCurrentContext:context];
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// texturing will need these
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_TEXTURE_2D);
glOrthof(0, backingWidth, 0, backingHeight, -1, 1);
glMatrixMode(GL_MODELVIEW);
glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
Texture2D *texture = [[Texture2D alloc] initWithImage:image];
[texture drawInRect: self.frame];
// This application only creates a single color renderbuffer which is already bound at this point.
// This call is redundant, but needed if dealing with multiple renderbuffers.
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
update: replaced the broken link in the original answer with actual code.
Here is the code that worked for me:
-(void) mergeWithImage:(UIImage*) image
{
if(image==nil)
{
return;
}
glPushMatrix();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // full white, no tint; color components clamp to [0, 1], so the 256s in the original behaved as 1.0
GLuint stampTexture; // = texture.id;
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glGenTextures(1, &stampTexture);
glBindTexture(GL_TEXTURE_2D, stampTexture);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
GLuint imgwidth = CGImageGetWidth(image.CGImage);
GLuint imgheight = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( imgheight * imgwidth * 4 );
CGContextRef context2 = CGBitmapContextCreate( imageData, imgwidth, imgheight, 8, 4 * imgwidth, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGContextTranslateCTM (context2, 0, imgheight);
CGContextScaleCTM (context2, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context2, CGRectMake( 0, 0, imgwidth, imgheight ) );
CGContextTranslateCTM( context2, 0, imgheight - imgheight );
CGContextDrawImage( context2, CGRectMake( 0, 0, imgwidth, imgheight ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgwidth, imgheight, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context2);
free(imageData);
static const GLfloat texCoords[] = {
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 0.0
};
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
/*
These arrays would need to be changed if the size of the paint view changes. You must make sure that all image input is 64x64, 256x256, 512x512 or 1024x1024. Here we are using 512, but you can use 1024 as follows:
use the numbers:
{
0.0, height, 0.0,
1024, height, 0.0,
0.0, height-1024, 0.0,
1024, height-1024, 0.0
}
*/
NSLog(#"height of me: %f", self.bounds.size.height);
static const GLfloat vertices[] = {
0.0, 643, 0.0,
1024, 643, 0.0,
0.0, -381, 0.0,
1024, -381, 0.0
};
static const GLfloat normals[] = {
0.0, 0.0, 1024,
0.0, 0.0, 1024,
0.0, 0.0, 1024,
0.0, 0.0, 1024
};
glBindTexture(GL_TEXTURE_2D, stampTexture);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glPopMatrix();
glDeleteTextures( 1, &stampTexture );
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
-(void) addImageToView{
UIImage* imageToAdd = [UIImage imageNamed:@"IMAGE_TO_ADD_TO_GL_VIEW.png"];
// all images added to the paining view MUST be 512x512.
// You can also add smaller images (even transformed ones) this way - just add the image to a UIView and then grab that view's graphics context
UIView* imageView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 512, 512)];
UIImageView* subView = [[UIImageView alloc] initWithImage:imageToAdd];
[imageView addSubview:subView];
UIImage* blendedImage =nil;
UIGraphicsBeginImageContext(imageView.frame.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
blendedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self mergeWithImage: blendedImage ];
}
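As the comments above note, OpenGL ES 1.1 only guarantees power-of-two texture dimensions, which is where the 512x512 requirement comes from. A small helper like this (a convenience sketch, not part of the original answer) can be used to pick or validate sizes:
// Round n up to the next power of two, e.g. 600 -> 1024.
static GLuint nextPowerOfTwo(GLuint n) {
    GLuint p = 1;
    while (p < n) p <<= 1;
    return p;
}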

glReadPixels white spots issues

I'm trying to take a screenshot on my iPad with OpenGL ES. This works, but there are blank spots in the result. The blank spots seem to be where the rendered object should be. I've tried using the other buffers as well, but none of them seems to contain the actual 3D object.
I'm using the example code of String SDK.
Image of the issue:
EAGLView.m
- (void)createFramebuffer
{
if (context && !defaultFramebuffer)
{
[EAGLContext setCurrentContext:context];
// Handle scale
if ([self respondsToSelector:@selector(setContentScaleFactor:)])
{
float screenScale = [UIScreen mainScreen].scale;
self.contentScaleFactor = screenScale;
}
// Create default framebuffer object.
glGenFramebuffers(1, &defaultFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
// Create color render buffer and allocate backing store.
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
// Create and attach depth buffer
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
// Bind color buffer
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
NSLog(#"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}
}
Screenshot code
EAGLView.m
- (UIImage*)snapshot:(UIView*)eaglview
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
//glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
String_OGL_TutorialViewController.m
- (void)render
{
[(EAGLView *)self.view setFramebuffer];
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
const int maxMarkerCount = 10;
struct MarkerInfoMatrixBased markerInfo[10];
int markerCount = [stringOGL getMarkerInfoMatrixBased: markerInfo maxMarkerCount: maxMarkerCount];
for (int i = 0; i < markerCount; i++)
{
float diffuse[4] = {0, 0, 0, 0};
diffuse[markerInfo[i].imageID % 3] = 1;
if ([context API] == kEAGLRenderingAPIOpenGLES2)
{
glUseProgram(program);
glUniform4fv(uniforms[UNIFORM_COLOR], 1, diffuse);
const float translationMatrix[16] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, -cubeScale, 1};
float modelViewMatrix[16];
float modelViewProjectionMatrix[16];
[String_OGL_TutorialViewController multiplyMatrix: translationMatrix withMatrix: markerInfo[i].transform into: modelViewMatrix];
[String_OGL_TutorialViewController multiplyMatrix: modelViewMatrix withMatrix: projectionMatrix into: modelViewProjectionMatrix];
glUniformMatrix4fv(uniforms[UNIFORM_MVP], 1, GL_FALSE, modelViewProjectionMatrix);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, ((float *)NULL) + 6 * 4 * 3);
// Validate program before drawing. This is a good check, but only really necessary in a debug build.
// DEBUG macro must be defined in your debug configurations if that's not already the case.
#if defined(DEBUG)
if (![self validateProgram:program])
{
NSLog(#"Failed to validate program: %d", program);
return;
}
#endif
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, NULL);
}
else
{
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 12, NULL);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 12, ((float *)NULL) + 6 * 4 * 3);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(markerInfo[i].transform);
glTranslatef(0, 0, -cubeScale);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, NULL);
}
UIImage *img = [(EAGLView *)self.view snapshot: self.view];
UIImageWriteToSavedPhotosAlbum(img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
[stringOGL pause];
}
}

Trouble Displaying Textures OpenGL ES 1.1

I'm working on a simple little game for the iPhone, and I'd like to use textures, however I can't quite seem to get it working...
After some research I found this page and this site. Both are great references and taught me a bit about textures. However, after loading a texture using either function, I can't get the texture displayed. Here's what my code looks like:
Very Simple Texture Display Function (not working)
void drawTexture(GLuint texture, float x, float y, float w, float h)
{
glBindTexture(GL_TEXTURE_2D,texture);
GLfloat box[] = {x,y+h, x+w,y+h, x,y, x+w,y};
GLfloat tex[] = {0,0, 1,0, 1,1, 0,1};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0,box);
glTexCoordPointer(2, GL_FLOAT, 0, tex);
glDrawArrays(GL_TRIANGLE_STRIP,0,4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Normally I'd not create an array every single frame just to display an image, but this is just an example. When I run this function I get nothing: blank, no image (unless, of course, I'd previously enabled a color array and hadn't disabled it afterwards).
Second Simple Display Function (this one uses a quick little class)
void draw_rect(RectObject* robj){
glVertexPointer(2, GL_FLOAT, 0, [robj vertices]);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, [robj colors]);
glEnableClientState(GL_COLOR_ARRAY);
if ([robj texture] != -1){
glEnable(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glClientActiveTexture([robj texture]);
glTexCoordPointer(2, GL_FLOAT, 0, defaultTexCoord);
glBindTexture(GL_TEXTURE_2D, [robj texture]);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_TEXTURE_2D);
glDisable(GL_TEXTURE_COORD_ARRAY);
}
This function on the other hand, does change the display, instead of outputting the texture however it outputs a black square...
Setup Background
In my init function I'm calling
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
Two LONG Texture Loading Functions
struct Texture2D LoadImage(NSString* path)
{
struct Texture2D tex;
tex.texture = -1;
// Id for texture
GLuint texture;
// Generate textures
glGenTextures(1, &texture);
// Bind it
glBindTexture(GL_TEXTURE_2D, texture);
// Set a few parameters to handle drawing the image
// at lower and higher sizes than original
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
//NSString *path = [[NSString alloc] initWithUTF8String:imagefile.c_str()];
path = [[NSBundle mainBundle] pathForResource:path ofType:#""];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
return tex;
// Get Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Allocate memory for image
void *imageData = malloc( height * width * 4 );
CGContextRef imgcontext = CGBitmapContextCreate(
imageData, width, height, 8, 4 * width, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( imgcontext,
CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( imgcontext, 0, height - height );
CGContextDrawImage( imgcontext,
CGRectMake( 0, 0, width, height ), image.CGImage );
// Generate texture in opengl
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
// Release context
CGContextRelease(imgcontext);
// Free Stuff
free(imageData);
[image release];
[texData release];
// Create and return texture
tex.texture=texture;
tex.width=width;
tex.height=height;
return tex;
}
GLuint makeTexture(NSString* path){
GLuint texture[1]={-1};
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
path = [[NSBundle mainBundle] pathForResource:path ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(#"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
return texture[0];
}
If you could point me in the right direction it would be much appreciated.
First of all, your draw_rect function has an error. Don't call glClientActiveTexture, it is used for multi-texturing and you don't need it. Calling it with a texture object will either bind some really strange texture unit or, most likely, result in an error.
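(For reference, glClientActiveTexture selects which texture unit subsequent glTexCoordPointer calls feed when multi-texturing; a sketch of its actual use, not something this code needs:)
glClientActiveTexture(GL_TEXTURE1);               // takes a texture unit enum, not a texture object name
glTexCoordPointer(2, GL_FLOAT, 0, texCoordsForUnit1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);               // restore the default unit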
And in the drawTexture function you are actually drawing the triangles in clockwise order. Assuming you didn't flip the y-direction in the projection matrix or something similar, if you have back-face culling enabled your whole geometry will get culled away. Try calling glDisable(GL_CULL_FACE), although back-face culling should be disabled by default. Or even better, change your vertices to counter-clockwise ordering:
box[] = { x,y+h, x,y, x+w,y+h, x+w,y };
You also have a mismatch of texture coordinates to vertices in your drawTexture function, but this shouldn't cause the texture not to be drawn, but rather just look a bit strange. Considering the changes to counter-clockwise ordering from the last paragraph, the texture coordinates should be:
tex[] = { 0.0f,1.0f, 0.0f,0.0f, 1.0f,1.0f, 1.0f, 0.0f };
EDIT: Your draw_rect function is also messing up the state, because you enable the vertex and color arrays, but then don't disable them again when you are finished with rendering. When you now want to draw something different without a color array (like in drawTexture), the color array is still enabled and uses some arbitrary data. So you should add
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
right after
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
in draw_rect.
EDIT: And you should also wrap the drawTexture function in a pair of glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D). You enable texturing in the initialization code, which is wrong. You should set all neccessary state right before rendering, especially such highly object-dependent state like texturing. For example once you call draw_rect before drawTexture, you end up with disabled texturing, although you enabled it in the initialization code and thought it to be always enabled. Do you see that this is not a good idea?
EDIT: I just spotted another error. In draw_rect you call glEnable and glDisable with GL_TEXTURE_COORD_ARRAY, which is wrong. You have to use glEnableClientState and glDisableClientState for enabling/disabling vertex arrays, like you did in drawTexture.
So as a little mid-way conclusion your functions should actually look like:
void drawTexture(GLuint texture, float x, float y, float w, float h)
{
glBindTexture(GL_TEXTURE_2D,texture);
glEnable(GL_TEXTURE_2D);
GLfloat box[] = {x,y+h, x+w,y+h, x,y, x+w,y};
GLfloat tex[] = {0,0, 1,0, 1,1, 0,1};
glTexCoordPointer(2, GL_FLOAT, 0, tex);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, box);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP,0,4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
void draw_rect(RectObject* robj)
{
if ([robj texture] != -1)
{
glBindTexture(GL_TEXTURE_2D, [robj texture]);
glEnable(GL_TEXTURE_2D);
glTexCoordPointer(2, GL_FLOAT, 0, defaultTexCoord);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}
glColorPointer(4, GL_UNSIGNED_BYTE, 0, [robj colors]);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, [robj vertices]);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
if ([robj texture] != -1)
{
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
}
If one of the textures works and the other doesn't, could it be a problem with the texture file?
Dimensions can sometimes trick you. Try using the same file (the one that works) for both textures and see if that solves it. If it does, it's a problem with the texture file.
The function does work:
void drawTexture(GLuint texture, float x, float y, float w, float h)
{
glBindTexture(GL_TEXTURE_2D,texture);
glEnable(GL_TEXTURE_2D);
GLfloat box[] = {x,y+h, x+w,y+h, x,y, x+w,y};
GLfloat tex[] = {0,0, 1,0, 1,1, 0,1};
glTexCoordPointer(2, GL_FLOAT, 0, tex);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, box);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP,0,4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
but the texture coordinates have to match the vertex order, so we should change the code
from
GLfloat box[] = {x,y+h, x+w,y+h, x,y, x+w,y};
GLfloat tex[] = {0,0, 1,0, 1,1, 0,1};
to
GLfloat box[] = {x,y+h, x+w,y+h, x,y, x+w,y};
GLfloat tex[] = { 0.0f,1.0f, 1.0f,1.0f, 0.0f,0.0f, 1.0f, 0.0f };
Thanks, Stack Overflow, and thanks for your help.
Good luck!
