iOS OpenGL texture GL_REPEAT repeats only two times

I have an OpenGL ES program for iOS.
I would like to have a repeated texture. Normally this is not a big deal for me, because GL_REPEAT does a fine job.
You can see the problem by comparing the next two images:
The first image is a simulator screenshot. Everything works fine.
The second image is an iPad screenshot. The texture is repeated once and then clamped to the edge.
Notice that the image is repeated 4 times in each direction. The red area is the area where the texture coordinate in a direction is > 1.0. So the device shows the image (normal area), repeats the image (red area) and then clamps the image.
So I will show what I do to render the quad.
I set up the texture:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
_location = location;
_texture = [self setupTextureByUIImage:[UIImage imageNamed:name]];
_uniform = uniform;
The setupTextureByUIImage method is:
- (GLuint) setupTextureByUIImage: (UIImage*) image {
// We want to display images
glActiveTexture(_location);
GLuint texture;
// Generate textures
glGenTextures(1, &texture);
// Bind it
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
// Get Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Allocate memory for image
void *imageData = malloc( height * width * 4 );
CGContextRef imgcontext = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( imgcontext, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( imgcontext, 0, height - height );
CGContextDrawImage( imgcontext, CGRectMake( 0, 0, width, height ), image.CGImage );
// Generate texture in opengl
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
// Release context
CGContextRelease(imgcontext);
// Free Stuff
free(imageData);
return texture;
}
After setup I begin to draw an object.
First I call makeActiveAndBind.
Then I draw.
Then I call unbind.
- (void) makeActiveAndBind
{
glActiveTexture(_location);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glUniform1i(_uniform, _location - GL_TEXTURE0);
}
- (void) unbind
{
glBindTexture(GL_TEXTURE_2D, 0);
}
Does anyone have any idea about this strange behavior?
The texture coordinates of the cube are [0,4] x [0,4].
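For reference, the quad's texture coordinates and the per-frame draw order look roughly like this (the instance name and the attribute setup are only illustrative, not my exact code):
// Texture coordinates spanning [0,4] in both directions
static const GLfloat quadTexCoords[] = {
    0.0f, 0.0f,
    4.0f, 0.0f,
    0.0f, 4.0f,
    4.0f, 4.0f
};
// Per-frame order
[myTexture makeActiveAndBind];   // activate the unit, bind, set GL_REPEAT
// ... set up position/texcoord attributes and call glDrawArrays ...
[myTexture unbind];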

Related

OpenGL ES - Colors are different with or without FBO

I'm currently drawing objects (images, rectangles) on iPhone with OpenGL ES 2.0.
There are two modes:
A) Without FBO :
Draw objects
Render to screen
B) With FBO
Bind FBO
Draw objects
Render FBO to screen
And the scene draw order is:
Draw the background with a red (or black) color (1, 0, 0, 1) using glClearColor
Draw the texture with a transparency color of (1, 1, 1, 0.5)
Here are the results (left without FBO, right with FBO):
1) Image without transparency: both are the same
2) Transparency set to 0.5, red background: both are different
3) Transparency set to 0.5, black background: the right one is the same as in 1) without transparency
Here's how I create the FBO:
GLint maxRenderBufferSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRenderBufferSize);
GLuint textureWidth = (GLuint)self.size.width;
GLuint textureHeight = (GLuint)self.size.height;
if(maxRenderBufferSize <= (GLint)textureWidth || maxRenderBufferSize <= (GLint)textureHeight)
@throw [NSException exceptionWithName:TAG
reason:@"FBO cannot allocate that much space"
userInfo:nil];
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &fboBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTexture, 0);
glBindRenderbuffer(GL_RENDERBUFFER, fboBuffer);
GLuint status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
@throw [NSException exceptionWithName:TAG
reason:@"Failed to initialize fbo"
userInfo:nil];
Here's my fragment shader:
gl_FragColor = (v_Color * texture2D(u_Texture, v_TexCoordinate));
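For completeness, the whole fragment shader is essentially just that line; a minimal sketch of it, with the varying/uniform declarations assumed from the names above:
precision mediump float;
uniform sampler2D u_Texture;
varying vec4 v_Color;
varying vec2 v_TexCoordinate;
void main()
{
    gl_FragColor = (v_Color * texture2D(u_Texture, v_TexCoordinate));
}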
Found the problem: this line in my render-FBO-to-window function was the culprit:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I just removed it because I don't need alpha blending in this step.
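In other words, the FBO texture is now drawn to the window without blending, roughly like this (a sketch, assuming blending is disabled or simply never enabled for this pass):
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window framebuffer
glDisable(GL_BLEND);                    // no alpha blending needed in this pass
glBindTexture(GL_TEXTURE_2D, fboTexture);
// ... draw the full-screen quad with the FBO texture ...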

OpenGL ES 2.0 textures for retina display?

I have got a GLKView where I try to draw a couple of cubes; I create textures from a view and map them onto the cubes. However, when I start the app on a retina device, the textures are correctly sized but they look terrible. I have tried to set the contentScaleFactor of the GLKView to the scale of the main screen - to no avail. I have also tried to multiply the buffer's dimensions by the scale, which resulted in textures that looked crisp, but were only 1/4 of the original size...
Without further ado, here is what I have done (without the above-mentioned multiplication):
GLKView
- (void)setupGL {
UIScreen *mainScreen = [UIScreen mainScreen];
const CGFloat scale = mainScreen.scale;
self.contentScaleFactor = scale;
self.layer.contentsScale = scale;
glGenFramebuffers(1, &defaultFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, self.bounds.size.width, self.bounds.size.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
glGenRenderbuffers(1, &colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, self.bounds.size.width, self.bounds.size.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);
glEnable(GL_DEPTH_TEST);
}
Here I load the textures
// make space for an RGBA image of the view
GLubyte *pixelBuffer = (GLubyte *)malloc(
4 *
cV.bounds.size.width *
cV.bounds.size.height);
// create a suitable CoreGraphics context
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context =
CGBitmapContextCreate(pixelBuffer,
cV.bounds.size.width, cV.bounds.size.height,
8, 4*cV.bounds.size.width,
colourSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colourSpace);
// draw the view to the buffer
[cV.layer renderInContext:context];
// upload to OpenGL
glTexImage2D(GL_TEXTURE_2D, 0,
GL_RGBA,
cV.bounds.size.width, cV.bounds.size.height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The answer to this question can be found here:
How to create a CGBitmapContext which works for Retina display and not wasting space for regular display?
What I basically did was to multiply the texture and the buffer dimensions by the screen's scale factor, and because this alone only yielded a texture that was 1/4 of the size, I had to scale the context by the scale factor as well:
CGContextScaleCTM(context, scaleFactor, scaleFactor);
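Putting that together, the texture creation from the view now looks roughly like this (a sketch; scaleFactor stands for [UIScreen mainScreen].scale and cV is the view being rendered, as above):
const CGFloat scaleFactor = [UIScreen mainScreen].scale;
GLuint texWidth  = cV.bounds.size.width  * scaleFactor;
GLuint texHeight = cV.bounds.size.height * scaleFactor;
GLubyte *pixelBuffer = (GLubyte *)malloc(4 * texWidth * texHeight);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelBuffer, texWidth, texHeight,
                                             8, 4 * texWidth, colourSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colourSpace);
// Scale the context so the view renders at retina resolution into the larger buffer
CGContextScaleCTM(context, scaleFactor, scaleFactor);
[cV.layer renderInContext:context];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);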

OpenGL ES (iPhone): render from file

Sorry for my English.
I want to display video from a file, where the frames are 4 bytes per pixel, BGRA, 1280x720.
On the Mac I just pulled out a frame and drew it with glDrawPixels, and that works, but in OpenGL ES everything is different.
Here's the code from the Mac:
int pos = 0;
NSData *data = [[NSData alloc] initWithContentsOfFile:@"video.raw"];
glViewport(0,0,width,height);
glLoadIdentity();
glOrtho(0, width, 0, height, -1.0, 1.0);
glPixelZoom(1, -1);
glClear(GL_COLOR_BUFFER_BIT);
//glRasterPos2i(0, height);
glRasterPos2i(0, 0);
glDrawPixels(1280, 720, GL_BGRA, GL_UNSIGNED_BYTE, [data bytes]+pos);
glFinish();
Push that data into a texture with glTexSubImage2D and render the texture. Note though that the texture has to be a power of two, so in your case you can make it (2048, 1024) but update only the (1280, 720) part:
CGSize videoSize = CGSizeMake(1280.0f, 720.0f); // your frame size
CGSize textureSize;
GLuint dimension = 1;
while (videoSize.width > dimension) {
dimension <<= 1;
}
textureSize = CGSizeMake(dimension, .0f);
dimension = 1;
while (videoSize.height > dimension) {
dimension <<= 1;
}
textureSize = CGSizeMake(textureSize.width, dimension);
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureSize.width, textureSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
GLfloat textureCoordinates[] = {
.0f, .0f,
.0f, videoSize.height/textureSize.height,
videoSize.width/textureSize.width, .0f,
videoSize.width/textureSize.width, videoSize.height/textureSize.height
};
To update the texture:
void *data; // points to the current frame's pixel data read from the file
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
Then just draw your textured quad.
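A minimal sketch of that last step with the fixed-function pipeline (ES 1.1), assuming an orthographic projection where coordinates are in pixels; with ES 2.0 you would feed the same positions and textureCoordinates through vertex attributes instead:
GLfloat vertices[] = {
    .0f,             .0f,
    .0f,             videoSize.height,
    videoSize.width, .0f,
    videoSize.width, videoSize.height
};
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);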

Same texture binding on every "quad"

I have 6 squares made up of 2 triangles each, and each square is supposed to have a different texture mapped onto it. Instead, every square ends up with the last bound texture on it instead of its own. Here's my drawView and setupView:
- (void)drawView:(GLView*)view
{
glBindTexture(GL_TEXTURE_2D, texture[0]);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
static const Vertex3D vertices[] = {
{0,0, 1}, //TL
{ 1024,0, 1}, //TR
{0,-1024, 1}, //BL
{ 1024.0f, -1024.0f, 1} //BR
};
static const GLfloat texCoords[] = {
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 0.0
};
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
- (void)setupView:(GLView*)view {
// Bind the number of textures we need.
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_GENERATE_MIPMAP,GL_TRUE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glLoadIdentity();
NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"jpg"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(@"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
// Flip the Y-axis
CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
}
You're always using texture[0], so you will indeed get the same texture every time. You need to pass the id of the texture you want to glBindTexture().
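A minimal sketch of what that looks like (the array size and faceIndex are illustrative, not from your code):
// In setupView: create one texture object per square and upload each image to its own texture
GLuint texture[6];
glGenTextures(6, texture);
for (int i = 0; i < 6; i++) {
    glBindTexture(GL_TEXTURE_2D, texture[i]);
    // ... set filter/wrap parameters and call glTexImage2D with the image for square i ...
}
// In drawView: bind the texture that belongs to the square you are about to draw
glBindTexture(GL_TEXTURE_2D, texture[faceIndex]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);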
I think the problem is related to the texture binding, and in particular to this line:
glBindTexture(GL_TEXTURE_2D, texture[0]);
Double-check that you use the right GLuint value for the texture binding.
Are you using shaders? If so, double-check them as well, though that is most probably not the issue.
I suggest using a texture atlas so you don't kill the overall engine's performance by binding a different texture on the GPU for every draw.
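With the atlas approach, the idea is to bind one big texture once and select the sub-image per quad purely through texture coordinates, roughly like this (the 3x2 layout and faceIndex are just an example, not something from your code):
// Assume a 3x2 atlas holding all six square images; select image faceIndex via UVs only
int faceIndex = 0;                        // which of the six squares to draw
int col = faceIndex % 3, row = faceIndex / 3;
const GLfloat cellW = 1.0f / 3.0f;
const GLfloat cellH = 1.0f / 2.0f;
GLfloat u0 = col * cellW, v0 = row * cellH;
GLfloat atlasTexCoords[] = {
    u0,         v0 + cellH,   // TL
    u0 + cellW, v0 + cellH,   // TR
    u0,         v0,           // BL
    u0 + cellW, v0            // BR
};
glTexCoordPointer(2, GL_FLOAT, 0, atlasTexCoords);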

OpenGL ES Async texture loading

Simple question: is it possible to load textures asynchronously with iOS and OpenGL ES?
Here is my loading method, called on a separate thread:
//Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
//Create context
void *imageData = malloc(height * width * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
//Prepare image
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
//Dispatch OpenGL stuff on main thread
dispatch_sync(dispatch_get_main_queue(), ^{
//Bind texture
glBindTexture(GL_TEXTURE_2D, name);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
});
//Release
CGContextRelease(context);
free(imageData);
If I don't dispatch the OpenGL calls on the main thread, my textures won't be displayed...
Same question for the glDeleteTextures call...
Any idea?
You need to use the same context on your background thread that you're using on the main one. For this use setCurrentContext:.
So on the main thread, create a new thread (as an example, the simplest way) and pass in the main context:
[self performSelectorInBackground: @selector(loadTextureWithContext:) withObject: [EAGLContext currentContext]];
And the creation code:
-(void) loadTextureWithContext:(EAGLContext*) main_context {
[EAGLContext setCurrentContext: main_context];
//Image size
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
//Create context
void *imageData = malloc(height * width * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
//Prepare image
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
//Bind texture
glBindTexture(GL_TEXTURE_2D, name);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Release
CGContextRelease(context);
free(imageData);
[EAGLContext setCurrentContext: nil];
}
As an option, you can also create a new context that shares the same EAGLSharegroup with the main one.
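A sketch of that sharegroup variant (mainContext stands for the context you render with; the upload itself is the same code as above):
// Create a second context in the same sharegroup; textures created on it
// become visible to the main context as well
EAGLContext *bgContext = [[EAGLContext alloc] initWithAPI:[mainContext API]
                                               sharegroup:[mainContext sharegroup]];
[EAGLContext setCurrentContext:bgContext];
// ... create and upload the texture exactly as above ...
glFlush();   // make sure the upload has finished before the main context samples the texture
[EAGLContext setCurrentContext:nil];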
