I have a GLKView in which I try to draw a couple of cubes; I create textures from a view and map them onto the cubes. However, when I run the app on a Retina device, the textures are correctly sized but look terrible. I have tried setting the contentScaleFactor of the GLKView to the scale of the main screen, to no avail. I have also tried multiplying the buffer's dimensions by the scale, which produced textures that looked crisp, but they were only 1/4 of the original size...
Without further ado, here is what I have done (without the multiplication mentioned above):
GLKView
- (void)setupGL {
UIScreen *mainScreen = [UIScreen mainScreen];
const CGFloat scale = mainScreen.scale;
self.contentScaleFactor = scale;
self.layer.contentsScale = scale;
glGenFramebuffers(1, &defaultFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, self.bounds.size.width, self.bounds.size.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
glGenRenderbuffers(1, &colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, self.bounds.size.width, self.bounds.size.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);
glEnable(GL_DEPTH_TEST);
}
Here is how I load the textures:
// make space for an RGBA image of the view
GLubyte *pixelBuffer = (GLubyte *)malloc(
4 *
cV.bounds.size.width *
cV.bounds.size.height);
// create a suitable CoreGraphics context
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context =
CGBitmapContextCreate(pixelBuffer,
cV.bounds.size.width, cV.bounds.size.height,
8, 4*cV.bounds.size.width,
colourSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colourSpace);
// draw the view to the buffer
[cV.layer renderInContext:context];
// upload to OpenGL
glTexImage2D(GL_TEXTURE_2D, 0,
GL_RGBA,
cV.bounds.size.width, cV.bounds.size.height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The answer to this question can be found here
How to create a CGBitmapContext which works for Retina display and not wasting space for regular display?
What I basically did was multiply the texture and buffer dimensions by the screen's scale factor. Because that alone still yielded a texture at 1/4 of the intended size, I also had to scale the context by the scale factor:
CGContextScaleCTM(context, scaleFactor, scaleFactor);
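Putting it together, here is a minimal sketch of that approach, assuming cV is the UIView being captured and that the rest of the texture setup stays as shown above:
// Allocate the pixel buffer at pixel (scaled) dimensions...
const CGFloat scaleFactor = [UIScreen mainScreen].scale;
const size_t pixelWidth  = (size_t)(cV.bounds.size.width  * scaleFactor);
const size_t pixelHeight = (size_t)(cV.bounds.size.height * scaleFactor);
GLubyte *pixelBuffer = (GLubyte *)malloc(4 * pixelWidth * pixelHeight);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelBuffer,
                                             pixelWidth, pixelHeight,
                                             8, 4 * pixelWidth,
                                             colourSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colourSpace);
// ...and scale the CTM so the view, which draws in points, fills the whole
// pixel-sized buffer instead of only a quarter of it.
CGContextScaleCTM(context, scaleFactor, scaleFactor);
[cV.layer renderInContext:context];
// Upload at pixel dimensions.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)pixelWidth, (GLsizei)pixelHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
free(pixelBuffer);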
I'm currently drawing objects (images, rectangles) on iPhone with OpenGL ES 2.0.
There are two modes :
A) Without FBO :
Draw objects
Render to screen
B) With FBO
Bind FBO
Draw objects
Render FBO to screen
And the scene draw order is :
Draw background with red (or black) color (1, 0, 0, 1) with glClearColor
Draw texture with transparency color (1, 1, 1, 0.5)
Here are the results (left without FBO, right with FBO):
1) Image without transparency: both are the same
2) Transparency set to 0.5, red background: the two are different
3) Transparency set to 0.5, black background: the right one looks the same as in 1) without transparency
Here's how I create the FBO :
GLint maxRenderBufferSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRenderBufferSize);
GLuint textureWidth = (GLuint)self.size.width;
GLuint textureHeight = (GLuint)self.size.height;
if(maxRenderBufferSize <= (GLint)textureWidth || maxRenderBufferSize <= (GLint)textureHeight)
@throw [NSException exceptionWithName:TAG
reason:@"FBO cannot allocate that much space"
userInfo:nil];
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &fboBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTexture, 0);
glBindRenderbuffer(GL_RENDERBUFFER, fboBuffer);
GLuint status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
@throw [NSException exceptionWithName:TAG
reason:@"Failed to initialize fbo"
userInfo:nil];
Here's my fragment shader :
gl_FragColor = (v_Color * texture2D(u_Texture, v_TexCoordinate));
Found the problem: this line in my render-FBO-to-window function was the culprit:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I just removed it because I don't need alpha blending in this step.
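For reference, a rough sketch of what that final composite pass might look like with blending disabled (names like defaultFramebuffer and the full-screen quad draw are placeholders, not the project's actual code):
// Composite the FBO texture to the default framebuffer without blending,
// so the alpha already baked into the FBO is not applied a second time.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glDisable(GL_BLEND);
glBindTexture(GL_TEXTURE_2D, fboTexture);
// ... draw a full-screen quad sampling fboTexture ...
glEnable(GL_BLEND); // restore whatever blend state the scene pass expects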
I am trying to render an image to a texture in OpenGL ES to implement an off-screen Sobel filter. Following this link http://www.songho.ca/opengl/gl_fbo.html, I create a result texture named sobelTexture and a depth renderbuffer named depthBuffer, then attach them to GL_COLOR_ATTACHMENT0 and GL_DEPTH_ATTACHMENT, respectively, of a newly created framebuffer. The code is shown below.
// Create a texture associated with the frame buffer to render image into
GLuint sobelTexture;
glGenTextures(1, &sobelTexture);
glBindTexture(GL_TEXTURE_2D, sobelTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glBindTexture(GL_TEXTURE_2D, 0);
GLuint depthBuffer;
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// Create frame buffer
GLuint frameBuffer;
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sobelTexture, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
However, when I check the framebuffer status using the code below, it returns something other than GL_FRAMEBUFFER_COMPLETE.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
NSLog(@"! ERROR: frame buffer is not completed\n");
exit(1);
}
Moreover, if I change the data type GL_FLOAT in the glTexImage2D() call to GL_UNSIGNED_BYTE, the framebuffer is marked as complete. But I cannot use that, because the output color components in my fragment shader are of float type.
How could I fix this problem?
Thanks in advance for any help.
I have an OpenGL program for iOS.
I would like to have a repeated texture. Normally this is not a big deal for me, because GL_REPEAT does a fine job.
You can see the problem comparing the next two images:
The first image is a simulator screenshot; everything works fine.
The second image is an iPad screenshot; the texture is repeated once and then clamped to the edge.
Notice that the image should be repeated 4 times in each direction. The red area is where the texture coordinate in a direction is > 1.0. So the device shows the image (normal area), repeats it once (red area), and then clamps it.
Here is what I do to render the quad.
First I set up the texture:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
_location = location;
_texture = [self setupTextureByUIImage:[UIImage imageNamed:name]];
_uniform = uniform;
The setupTextureByUIImage: method is:
- (GLuint) setupTextureByUIImage: (UIImage*) image {
// We want to display images
glActiveTexture(_location);
GLuint texture;
// Generate textures
glGenTextures(1, &texture);
// Bind it
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
// Get Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Allocate memory for image
void *imageData = malloc( height * width * 4 );
CGContextRef imgcontext = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( imgcontext, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( imgcontext, 0, height - height );
CGContextDrawImage( imgcontext, CGRectMake( 0, 0, width, height ), image.CGImage );
// Generate texture in opengl
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
// Release context
CGContextRelease(imgcontext);
// Free Stuff
free(imageData);
return texture;
}
After setup I begin to draw an object.
First I call makeActiveAndBind.
Then I draw.
Then I call unbind.
- (void) makeActiveAndBind
{
glActiveTexture(_location);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glUniform1i(_uniform, _location - GL_TEXTURE0);
}
- (void) unbind
{
glBindTexture(GL_TEXTURE_2D, 0);
}
Does anyone have any idea about this strange behavior?
The texture coordinates of the cube are [0,4]x[0,4].
Simple question: is it possible to load a texture asynchronously with iOS and OpenGL ES?
Here is my loading method, called on a separate thread:
//Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
//Create context
void *imageData = malloc(height * width * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
//Prepare image
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
//Dispatch OpenGL stuff on main thread
dispatch_sync(dispatch_get_main_queue(), ^{
//Bind texture
glBindTexture(GL_TEXTURE_2D, name);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
});
//Release
CGContextRelease(context);
free(imageData);
If I don't dispatch the OpenGL calls on the main thread, my textures won't be displayed...
The same question applies to the glDeleteTextures call...
Any ideas?
You need to use the same context on your background thread that you're using on the main one. For this, use setCurrentContext:.
So, on the main thread, create a new thread (the simplest way, as an example) and pass in the main context:
[self performSelectorInBackground:@selector(loadTextureWithContext:) withObject:[EAGLContext currentContext]];
And the creation code:
-(void) loadTextureWithContext:(EAGLContext*) main_context {
[EAGLContext setCurrentContext: main_context];
//Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
//Create context
void *imageData = malloc(height * width * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
//Prepare image
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
//Bind texture
glBindTexture(GL_TEXTURE_2D, name);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Release
CGContextRelease(context);
free(imageData);
[EAGLContext setCurrentContext: nil];
}
As an option, you can also create a new context that shares the same EAGLSharegroup as the main one, as sketched below.
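A rough sketch of that sharegroup approach (assuming mainContext is the context used on the main thread; the texture-creation code itself stays the same as above):
// A second context in the same sharegroup: textures created on the
// background thread become visible to the main context.
EAGLContext *mainContext = [EAGLContext currentContext]; // captured on the main thread
EAGLContext *bgContext = [[EAGLContext alloc] initWithAPI:mainContext.API
                                               sharegroup:mainContext.sharegroup];
// On the background thread:
[EAGLContext setCurrentContext:bgContext];
// ... create and fill the texture with glTexImage2D as above ...
glFlush(); // make sure the texture data is visible to the other context
[EAGLContext setCurrentContext:nil];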
I'm writing a particle system that uses point sprites in OpenGL ES 1.1 on iOS. Everything works great until I try to texture the point sprites... when I render, each sprite is colored by the top left pixel of the texture I'm loading (rather than displaying the image). I've tried different images and different sizes and always get the same result.
setup code (taken from GLPaint example):
CGImageRef brushImage;
CGContextRef brushContext;
size_t width, height;
GLubyte *brushData;
brushImage = [UIImage imageNamed:@"Particle.png"].CGImage;
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
if(brushImage) {
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(brushContext, CGRectMake(0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
CGContextRelease(brushContext);
glGenTextures(1, &brushTexture);
glBindTexture(GL_TEXTURE_2D, brushTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
free(brushData);
}
and the render code:
glLoadIdentity();
glEnable(GL_BLEND);
glTranslatef(0.0f,0.0f,0.0f);
glClearColor(0.0, 0.0, 0.0, 1.0f);
glClear(GL_COLOR_BUFFER_BIT );
glEnableClientState(GL_POINT_SPRITE_OES);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, brushTexture);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// took this out as incorrect call glEnableClientState(GL_POINT_SMOOTH);
glVertexPointer(2, GL_FLOAT, sizeof(ColoredVertexData2D), &vertexData[0].vertex);
glColorPointer(4, GL_FLOAT, sizeof(ColoredVertexData2D), &vertexData[0].color);
glPointSizePointerOES(GL_FLOAT, sizeof(ColoredVertexData2D), &vertexData[0].size);
glDrawArrays(GL_POINTS, 0,1000);
When texturing point sprites, do you have to specify texture coordinates, and if so, how?
No coordinates need to be specified, but you do have to make the correct calls to enable point sprites:
glEnable(GL_POINT_SPRITE_OES) instead of glEnableClientState(GL_POINT_SPRITE_OES) did the trick.
I'm going to go bone up on the difference.
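For reference, a minimal sketch of the corrected state setup (the pointers and draw call stay exactly as above; only the enables change):
// GL_POINT_SPRITE_OES is server-side state, so it is toggled with glEnable,
// not glEnableClientState; the per-point size, however, is a client array.
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glEnableClientState(GL_POINT_SIZE_ARRAY_OES);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glDrawArrays(GL_POINTS, 0, 1000);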