Render CoreGraphics to an OpenGL texture on iOS

Core Graphics on iOS is very easy to use, but is it possible to take the output of Core Graphics and put it into an OpenGL texture?
The final goal is to use CGContextDrawPDFPage to render PDFs with good performance and write the result into a specific texture ID with
OpenGL.glBindTexture(GL_TEXTURE_2D, TextureNativeId);
It looks like Core Graphics is not able to render directly into a specific "native texture id".

Yes, you can, by rendering your Core Graphics content to a bitmap context and uploading that to a texture. The following is code that I use to draw a UIImage to a Core Graphics context, but you could replace the CGContextDrawImage() portion with your own drawing code:
GLubyte *imageData = (GLubyte *) calloc(1, (int)pixelSizeOfImage.width * (int)pixelSizeOfImage.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeOfImage.width, (int)pixelSizeOfImage.height, 8, (int)pixelSizeOfImage.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), [newImageSource CGImage]);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeOfImage.width, (int)pixelSizeOfImage.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData); // the bytes have been copied into the texture, so the CPU-side buffer can be released
This assumes that you've created your texture using code like the following:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
For rapidly changing content, you might want to look into iOS 5.0's texture caches (CVOpenGLESTextureCacheCreateTextureFromImage() and the like), which might let you render directly to the bytes for your texture. However, I've found that the overhead for creating and rendering to a texture with a texture cache makes this slightly slower for rendering a single image, so if you don't need to continually update this the code above is probably your fastest route.
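For the PDF case in the original question, the CGContextDrawImage() line above can be swapped for CGContextDrawPDFPage(). A minimal sketch, assuming pdfPage is a CGPDFPageRef you have already obtained (for example via CGPDFDocumentGetPage()), and that imageContext and pixelSizeOfImage are the same as above with the drawing done before the context is released:
// Map the page's crop box into the bitmap; depending on how your texture
// coordinates are set up, you may also need to flip the context vertically.
CGAffineTransform transform = CGPDFPageGetDrawingTransform(pdfPage, kCGPDFCropBox,
    CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), 0, true);
CGContextConcatCTM(imageContext, transform);
CGContextDrawPDFPage(imageContext, pdfPage);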

Related

How to restore a GL_RENDERBUFFER?

I am working on storing and restoring my OpenGL ES based application's state.
I have a function that saves the GL_RENDERBUFFER contents by dumping the data with the following code:
glBindFramebuffer(GL_FRAMEBUFFER, fboTextureBufferData.framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, fboTextureBufferData.colorbuffer);
GLint x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
I don't see a glWritePixels function. What is the best way to repopulate the GL_RENDERBUFFER with the GLubyte data populated above? An example would be greatly appreciated.
EDIT 3:
Here is how I am attempting to configure the texture render buffer, and the function used to draw it. As noted in the code, if I specify GL_COLOR_ATTACHMENT1 for the glFramebufferTexture2D parameter, the stored pixel data is restored but I can't get any updates to draw. But if I use GL_COLOR_ATTACHMENT0 instead, I get drawing updates but no pixel data restored.
I have tried various combinations (for instance, also using GL_COLOR_ATTACHMENT1 for the glFramebufferRenderbuffer parameter), but then I get an invalid framebuffer error when attempting to render. It seems I am so close, but I can't figure out how to get both restoring and rendering working together.
- (bool)configureRenderTextureBuffer {
[EAGLContext setCurrentContext:context];
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glGenFramebuffers(1, &fboTextureBufferData.framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, fboTextureBufferData.framebuffer);
glGenRenderbuffers(1, &fboTextureBufferData.colorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, fboTextureBufferData.colorbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, fboTextureBufferData.colorbuffer);
// Generate texture name, stores in .textureID
glGenTextures(1, &fboTextureBufferData.textureID);
glBindTexture(GL_TEXTURE_2D, fboTextureBufferData.textureID);
////////////////// Read Existing texture data //////////////////
NSString *dataPath = [TDTDeviceUtilitesLegacy documentDirectory]; //This just returns the app's document directory
NSData *data = [NSData dataWithContentsOfFile:[NSString stringWithFormat:@"%@/buffer.data", dataPath]];
GLubyte *pixelData = (GLubyte*)[data bytes];
// If I use GL_COLOR_ATTACHMENT1 here, my existing pixel data is restored
// but no drawing occurs. If I use GL_COLOR_ATTACHMENT0, then data isn't
// restored but drawing updates work
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, fboTextureBufferData.textureID, 0 );
// Populate with existing data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, backingWidth, backingHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, &pixelData[0]); //&image[0]
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
if(status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"failed to make complete render texture framebuffer object %x", status);
return false;
}
return true;
}
Here is the code for rendering. The viewFramebuffer is attached to GL_COLOR_ATTACHMENT0 and is used so the texture frame buffer can be zoomed and positioned inside the view.
- (void)renderTextureBuffer {
//Bind the texture frame buffer, if I don't use this, I can't get it to draw
//glBindFramebuffer(GL_FRAMEBUFFER, fboTextureBufferData.framebuffer);
//If I use this instead of binding the framebuffer above, I get no drawing and black background
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTextureBufferData.textureID, 0);
renderParticlesToTextureBuffer();
//Bind the view frame buffer.
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
drawFboTexture();
[context presentRenderbuffer:GL_RENDERBUFFER];
}
If you need to write data directly to a render target, using a renderbuffer is not a good option. In this case, it's much better to use a texture instead.
Using a texture as a FBO attachment works very similarly to using a renderbuffer. Where you currently use glRenderbufferStorage() to allocate a renderbuffer of the needed dimensions, you create a texture instead, and allocate its storage with:
GLuint texId = 0;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0);
Then you attach it to the framebuffer with:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
Now, if you later want to fill your render target with data, you can simply call glTexImage2D() again, or better yet glTexSubImage2D() if the size is unchanged.
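Applied to the save/restore question above, re-uploading the bytes previously read back with glReadPixels() might look roughly like this (a sketch, assuming the texture-backed FBO described above and that data still holds backingWidth * backingHeight * 4 RGBA bytes):
glBindTexture(GL_TEXTURE_2D, texId);
// The format/type must match what glReadPixels() produced (GL_RGBA / GL_UNSIGNED_BYTE).
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);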

How to pass the data argument in glTexImage2D when using GLKit to load textures?

I am loading the texture.png texture by using GLKit as per the code below:
// Setup texture
CGImageRef imageRef = [[UIImage imageNamed:@"texture.png"] CGImage];
GLKTextureInfo *texInfo = [GLKTextureLoader textureWithCGImage:imageRef options:nil error:NULL];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texInfo.name);
// Set parameters that control texture sampling for the bound
// texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texInfo.width, texInfo.height, 0, GL_RGB, GL_UNSIGNED_BYTE, XXXX);
XXXX is the data argument, which specifies a pointer to the image data in memory. The problem is that I am using GLKit to load the texture, but I couldn't find anything in the Apple documentation about retrieving the data from the GLKTextureInfo class. Does anyone know how I can fix that?
If you've called [GLKTextureLoader textureWithCGImage:options:error:], you don't need to upload the bitmap data. It's already happened, so the call to glTexImage2D() is unnecessary.
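In other words, the setup can be reduced to something like the following (a sketch with error handling omitted; GLKTextureLoader creates and fills the texture object for you):
NSError *error = nil;
CGImageRef imageRef = [[UIImage imageNamed:@"texture.png"] CGImage];
GLKTextureInfo *texInfo = [GLKTextureLoader textureWithCGImage:imageRef options:nil error:&error];
glActiveTexture(GL_TEXTURE0);
glBindTexture(texInfo.target, texInfo.name); // already populated; no glTexImage2D() needed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);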

OpenGL ES warnings

I'm using OpenGL ES to display YUV420 video in a UIView. I'm following this thread (CADisplayLink OpenGL rendering breaks UIScrollView behaviour) to achieve 30 fps. Everything seems to be okay and the playback is smooth. However, when I run my app through Instruments I get a few warnings:
Logical Buffer Load - Summary => slow framebuffer load
GPU Wait on Texture - Summary => CPU wait for GPU on Texture Upload
Texture Upload Non-Optimal GPU Utilization - Summary => Mid-frame texture upload
In my callback I do the following:
[EAGLContext setCurrentContext:_context];
glClear(GL_COLOR_BUFFER_BIT);
// load the color components into OpenGL
glBindTexture(GL_TEXTURE_2D, _textures[TEXTURE_Y]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frameWidth, frameHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (UInt8*)yuvFrame.luma.bytes);
glBindTexture(GL_TEXTURE_2D, _textures[TEXTURE_U]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frameWidth/2, frameHeight/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (UInt8*)yuvFrame.chromaB.bytes);
glBindTexture(GL_TEXTURE_2D, _textures[TEXTURE_V]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frameWidth/2, frameHeight/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (UInt8*)yuvFrame.chromaR.bytes);
// draw
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(indices[0]), GL_UNSIGNED_SHORT, 0);
[_context presentRenderbuffer:GL_RENDERBUFFER];
And prior to this call, in the init method of the UIView, I generate the textures and set some params, like this:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glGenTextures(NUM_TEXTURES, _textures);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textures[TEXTURE_Y]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(_uniformSamplers[SAMPLER_Y], TEXTURE_Y);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textures[TEXTURE_U]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(_uniformSamplers[SAMPLER_U], TEXTURE_U);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _textures[TEXTURE_V]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(_uniformSamplers[SAMPLER_V], TEXTURE_V);
Although, like I said, the playback seems to perform pretty well, I would like to fix those warnings. I tried several things without success. The only way I was able to remove them was by calling glGenTextures(NUM_TEXTURES, _textures) / glDeleteTextures(NUM_TEXTURES, _textures) at the beginning/end of my callback, but I don't think that is the right way.
Does anyone have any suggestions?
The first warning on your list can sometimes be a false positive in Instruments, so it may not be a real issue at all. However, it may indicate that you haven't properly cleared out the previous state of the framebuffer, so you could check to make sure that you don't also have a depth buffer that needs to be cleared in addition to your GL_COLOR_BUFFER_BIT.
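For example, if your framebuffer also has a depth attachment, the clear at the top of the callback would need to cover both buffers:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);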
The other two are simply telling you that it's taking a while to upload your image data via glTexImage2D() on each frame. If you're doing this on iOS 5.0, you could look at using texture caches to speed this upload process (CVOpenGLESTextureCacheCreate() and friends). They might help you squeeze out a little extra performance.
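As a rough illustration of the texture cache route (a sketch only; it assumes your frames arrive as IOSurface-backed CVPixelBufferRefs, for example straight from AVFoundation, and the variable names here are illustrative):
// One-time setup, given your existing EAGLContext *_context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &textureCache);
// Per frame: wrap the pixel buffer's luma plane instead of copying it with glTexImage2D().
CVOpenGLESTextureRef lumaTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_LUMINANCE, frameWidth, frameHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);
glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture));
// ...draw, then release the frame's texture and flush the cache:
CFRelease(lumaTexture);
CVOpenGLESTextureCacheFlush(textureCache, 0);
The chroma planes would be wrapped the same way, using the appropriate plane index for your pixel format.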

Rendering to texture on iOS OpenGL ES—works on simulator, but not on device

In order to improve the performance of my OpenGL ES application for the iPad, I was planning to draw a rarely updated but rendertime-heavy element to a texture, so I can just use the texture unless the element has to be redrawn. However, while the texture is mapped correctly on both the simulator and the device, only on the simulator is something actually rendered into the texture.
The following is the code that I added to the project. While setting up the scene, I create the buffers and the texture needed:
int width = 768;
int height = 270;
// Prepare texture for off-screen rendering.
glGenTextures(1, &wTexture);
glBindTexture(GL_TEXTURE_2D, wTexture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
glClearColor(.9f, .3f, .6f, 1.0f); // DEBUG
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
// Depth attachment buffer, always needed.
glGenRenderbuffersOES(1, &wDepth);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, wDepth);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
width, height);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, 0);
// Create FBO for render-to-texture.
glGenFramebuffersOES(1, &wBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wBuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES,
GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, wTexture, 0);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, wDepth);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
Calling glCheckFramebufferStatusOES() on the new FBO (before it is unbound, of course) yields a 'framebuffer complete' return value on both the simulator and the device. Note that I set the pink clear colour for the texture in order to confirm that the texture is actually rendered; the problem is in fact simply that the texture is never drawn into.
Whenever the texture needs to be redrawn, I do this before rendering the element:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, width, height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ...
and the following after the actual rendering:
// ...
glPopMatrix();
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
Finally, every time the screen is redrawn I map the texture to a quad at the appropriate position on the screen, like so:
float Vertices[] = {
-65.0f, -100.0f, .0f,
-65.0f, 100.0f, .0f,
-10.0f, -100.0f, .0f,
-10.0f, 100.0f, .0f};
float Texture[] = {.0f, .0f, 1.0f, .0f, .0f, 1.0f, 1.0f, 1.0f};
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, wTexture);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glVertexPointer(3, GL_FLOAT, 0, Vertices);
glTexCoordPointer(2, GL_FLOAT, 0, Texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
On the iPhone and iPad simulators (4.2, 4.3), the code works as expected. I see the dynamically rendered texture displayed at the respective position, of course with a pink instead of a transparent background due to my debugging statement. On my iPad 4.2 device, however, only the pink rectangle is rendered, not what should have been drawn into it during the render-to-texture step. Thus, the texture is rendered to the screen correctly, but for some reason, on the device the render-to-texture code fails to actually render anything to the texture.
I suppose I am using some functionality that is not available on the device, or am making an erroneous assumption somewhere, but I can't figure out what it is. I also tried running it through the OpenGL ES Analyzer, but it gives me nothing but some basic performance optimisation tips. Where do I need to look for the problem?
I was using MSAA in my project, and found that the problem disappeared when I disabled it. This led me to discover this other question where the same problem is discussed (but not solved).
The problem seems to be that if multisampling is enabled for your main framebuffer, all of your custom FBOs have to use multisampling as well. You cannot render to a normal non-multisampled GL_TEXTURE_2D, and a multi-sampled GL_TEXTURE_2D_MULTISAMPLE is not available on OpenGL ES 2.
In order to fix the problem, I modified my render-to-texture code the same way I modified my main rendering code to enable multisampling. In addition to the three buffer objects created in the code from the question, I create three more for the multi-sampled rendering:
glGenFramebuffersOES(1, &wmBuffer);
glGenRenderbuffersOES(1, &wmColor);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wmBuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, wmColor);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, wmColor);
glGenRenderbuffersOES(1, &wmDepth);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, wmDepth);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, wmDepth);
Before rendering to the texture, I bind the new MSAA buffer:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wmBuffer);
Finally, after rendering, I resolve the MSAA FBO into the texture FBO the same way I do for my main rendering framebuffer:
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, wmBuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, wBuffer);
glResolveMultisampleFramebufferAPPLE();
GLenum attachments[] = {GL_DEPTH_ATTACHMENT_OES, GL_COLOR_ATTACHMENT0_OES, GL_STENCIL_ATTACHMENT_OES};
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 3, attachments);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
The textures are now rendered correctly (and the performance is great!)

Unable to load multiple textures in OpenGL ES 2.0

I'm attempting to load two textures and pass them on to two samplers in my shader; however, both samplers return the first texture I load. Furthermore, if I don't load a texture into GL_TEXTURE0, both samplers return black. I've reduced it to a single sampler/texture and still have the same issue:
GLuint texture1;
glGenTextures(2, &texture1);
glActiveTexture(GL_TEXTURE0); //if this is GL_TEXTURE1 or any other, I get black
glBindTexture(GL_TEXTURE_2D, texture1);
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
[self LoadTextureData: [[NSBundle mainBundle] pathForResource:@"normals" ofType:@"png"]];
glUniform1i(uniforms[SAMPLER_1], 1); //whether this is 0 or 1 or anything else doesn't seem to make any difference for both samplers.
return TRUE;
Any ideas on what I'm doing wrong?
I'm reasonably sure my sampler indices are correct, as is my shader. LoadTextureData boils down to a glTexImage2D.
Edit:
This is how I attempt to load and set the two textures
//texture 1
GLuint texture1;
glGenTextures(1, &texture1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
[self LoadTextureData: [[NSBundle mainBundle] pathForResource:@"normals" ofType:@"png"]];
//texture 2
GLuint texture2;
glGenTextures(1, &texture2);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
[self LoadTextureData: [[NSBundle mainBundle] pathForResource:@"bricks" ofType:@"jpg"]];
glUniform1i(uniforms[SAMPLER_1], 0);
glUniform1i(uniforms[SAMPLER_2], 1);
The result is that both samplers return the 'normals' image.
LoadTextureData, assignment of sampler IDs, and my shader can be found at http://paste2.org/p/1514200
The long and short of it: make sure you assign textures to samplers after the shader is loaded. I was tricked into thinking uniforms[] was assigned correctly because I was only checking it after all loading had completed.
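A sketch of the ordering that matters here (the program handle and uniform names are illustrative): glUniform1i() affects the currently active program, so the lookups and assignments have to happen after glUseProgram().
glUseProgram(program); // link/load the shader first, then make it current
GLint sampler1 = glGetUniformLocation(program, "sampler1"); // hypothetical uniform names
GLint sampler2 = glGetUniformLocation(program, "sampler2");
glUniform1i(sampler1, 0); // texture unit GL_TEXTURE0
glUniform1i(sampler2, 1); // texture unit GL_TEXTURE1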
GLuint texture1;
glGenTextures(2, &texture1);
You generate two textures, but allocate a variable for only one.
Try this:
GLuint textures[2];
glGenTextures(2, textures);
This just happened to me and I spent two or three hours trying to figure it out. Your answer was right for you, but this is what I was doing...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureInfo[textureIndex].texture);
glUniform1i(glPrograms[currentProgram].glUniforms[U_textureSampler], 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureInfo[textureIndex].texture);
glUniform1i(glPrograms[currentProgram].glUniforms[U_secondTextureSampler], 0);
My entire 3 hours of debugging, testing things, cursing, trying to figure out why BOTH textures were the same thing... was because of the 0 instead of a 1 on the 2nd one.
It should have been:
glUniform1i(glPrograms[currentProgram].glUniforms[U_secondTextureSampler], 1);
It's not just GL_TEXTURE0 versus GL_TEXTURE1 that selects the second texture; the 1 passed to glUniform1i() matters too.
Hope it saves someone else a couple of hours.
