Setting up and accessing multiple textures in OpenGL ES on iOS

I am doing some heavy computation with OpenGL ES on iOS. Currently I am trying to implement lookup tables using textures and texelFetch(), and I have succeeded in making one, but only one at a time. How can I make multiple sampler2D uniforms available in GLSL? I tried the following code, but it gives weird results.
[Update: After reading the official tutorial #13, I tried reformatting the code, but it still does not work. It seems both sampler2D uniforms refer to the first texture that was set up, even though I checked that the data pointers point to different content.]
In Objective-C, I tried something like this:
int lookupTabSize = 2000;
GLuint tab0Handle;
glGenTextures(1, &tab0Handle);
glBindTexture(GL_TEXTURE_2D, tab0Handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, lookupTabSize,1, 0, GL_RED, GL_FLOAT,[self getConstant:0]); // Pointer checked
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLuint tab0ID = glGetUniformLocation(program, "lookupTab0");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tab0Handle);
glUniform1f(tab0ID,1);
GLuint tab1Handle;
glGenTextures(1, &tab1Handle);
glBindTexture(GL_TEXTURE_2D, tab1Handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, lookupTabSize,1, 0, GL_RED, GL_FLOAT,[self getConstant:1]); // Pointer checked
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLuint tab1ID = glGetUniformLocation(program, "lookupTab1");
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tab1Handle);
glUniform1f(tab1ID,0);
In GLSL, I just output the lookup content, but it seems both tables refer to the same data source, though they shouldn't, as I double-checked the references. Any ideas about how I should do this correctly? Some sample code would be much appreciated.
#version 300 es
uniform sampler2D lookupTab0;
uniform sampler2D lookupTab1;
in int InV0;
out float OutV0;
float f(int ptIn){
float valueSum = 0.0f;
ivec2 pos = ivec2(ptIn, 0);
valueSum = valueSum + texelFetch(lookupTab0,pos,0).r;
valueSum = valueSum + texelFetch(lookupTab1,pos,0).r;
return valueSum;
}
void main(){
OutV0 = f(InV0);
}

You need to use glUniform1i() to set sampler uniforms. So these two calls:
glUniform1f(tab0ID,1);
...
glUniform1f(tab1ID,0);
will not work.
Also, the values appear to be swapped, since you're binding lookup table 0 to texture unit GL_TEXTURE0, and lookup table 1 to GL_TEXTURE1. So the correct calls are:
glUniform1i(tab0ID, 0);
...
glUniform1i(tab1ID, 1);
This is based on the following, found on page 66 in section "2.12.6 Uniform Values" of the ES 3.0 spec:
Only the Uniform1i{v} commands can be used to load sampler values
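Putting the two fixes together, a minimal sketch of the corrected setup (uniform locations are GLint, and the program must be current when the uniforms are set) would be:
// Bind each lookup table to its own texture unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tab0Handle);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tab1Handle);

// Sampler uniforms receive the *texture unit index*, set with glUniform1i().
GLint tab0ID = glGetUniformLocation(program, "lookupTab0");
GLint tab1ID = glGetUniformLocation(program, "lookupTab1");
glUseProgram(program);       // uniforms apply to the currently bound program
glUniform1i(tab0ID, 0);      // lookupTab0 reads from GL_TEXTURE0
glUniform1i(tab1ID, 1);      // lookupTab1 reads from GL_TEXTURE1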

Related

RGBA4444 texture becomes black box on iPhone

I'm planning to use RGBA4444 textures to reduce memory usage on iPhone, but to my surprise, the created texture always shows up as a black box on screen; the same code works fine on Win10. Here is the code I use to create the RGBA4444 texture:
{
const int texSize = 256;
vector<unsigned char> bytes(texSize * texSize * 2, 0xFF);
GLuint id = 0;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, texSize, texSize, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, &bytes[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
I'm totally at a loss. RGBA4444 is natively supported by iPhone, right?
Tested on an iPhone 5s and an iPad 3; both show a black box.
Thank you.
Okay, it turns out that the internalFormat is incorrect:
glTexImage2D(GL_TEXTURE_2D
, 0
, GL_RGBA4 // <---- should be GL_RGBA instead of GL_RGBA4 on iPhone
, texSize
, texSize
, 0
, GL_RGBA
, GL_UNSIGNED_SHORT_4_4_4_4
, &bytes[0]);
On Win10 it works fine when internalFormat is GL_RGBA4 (GLAD, ES 2.0), but according to the glTexImage2D reference, the acceptable internalFormats are GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA, so even on Win10 the correct internalFormat for RGBA4444 data should still be GL_RGBA.

Create And Write GL_ALPHA texture (OpenGL ES 2)

I'm new to OpenGL so I'm not sure how to do this.
Currently I'm doing this to create an alpha texture on iOS:
GLuint framebuffer, renderBufferT;
glGenFramebuffers(1, &framebuffer);
glGenTextures(1, &renderBufferT);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glBindTexture(GL_TEXTURE_2D, renderBufferT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderBufferT, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
{
NSLog(@"createAlphaBufferWithSize not complete %x", status);
return NO;
}
But it returns an error: GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
And I also wonder how to write to this texture in the fragment shader. Is it simply the same as with RGBA, like this:
gl_FragColor = vec1(0.5);
My intention is to use an efficient texture, because there is a lot of texture reading in my code, while I only need one color component.
Thanks for any direction where I might go with this.
I'm not an iOS guy but that error indicates your OpenGL ES driver (PowerVR) does not support rendering to the GL_ALPHA format. I have not seen any OpenGL ES drivers that will do that on any platform. You can create GL_ALPHA textures to use with OpenGL ES using the texture compression tool in the PowerVR SDK, but I think the smallest format you can render to will be 16 bit color - if that is even available.
A better way to make textures efficient is to use compressed formats because the GPU decodes them with dedicated hardware. You really need the PowerVR Texture compression tool. It is a free download:
http://www.imgtec.com/powervr/insider/sdkdownloads/sdk_licence_agreement.asp
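For reference, uploading a compressed texture produced by that tool uses glCompressedTexImage2D rather than glTexImage2D; a rough sketch (the .pvr header parsing is omitted, and pvrtcData/pvrtcSize are placeholders for whatever the tool produced) might look like this:
// Sketch only: assumes a 256x256 PVRTC 4bpp RGBA payload already loaded into memory.
// GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG comes from GL_IMG_texture_compression_pvrtc.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       256, 256, 0, (GLsizei)pvrtcSize, pvrtcData);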
So I still haven't found the answers to my questions above, but I found a workaround.
In essence, since each pixel comprises 4 color components and I only need one for alpha, I use a single texture to store the 4 different logical alpha textures that I need. It takes a little effort to maintain these logical alpha textures.
And to draw into this one texture that contains 4 logical alpha textures, I use a shader and a sign-bit mask that marks which color component I intend to write to.
The blend function I use is (GL_ONE, GL_ONE_MINUS_SRC_COLOR), and the fragment shader looks like this:
uniform sampler2D uTexture;
uniform lowp vec4 uSignBit;
varying mediump vec2 vTexCoord;
void main()
{
lowp vec4 textureColor = texture2D(uTexture, vTexCoord);
gl_FragColor = uSignBit * textureColor.a;
}
So, when I intend to write an alpha value from some texture into logical alpha texture number 2, I write:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
glUniform4f(uSignBit, 0.0, 1.0, 0.0, 0.0);
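If it helps, that bookkeeping can be wrapped in a small helper (hypothetical; it assumes uSignBitLocation is the uniform location obtained with glGetUniformLocation, and that the logical textures are numbered 0 to 3 across the R, G, B, A channels):
// Hypothetical helper: select which logical alpha texture (0..3 -> R,G,B,A) the
// next draw writes to, by setting the uSignBit mask and the blend function.
static void selectLogicalAlphaTexture(GLint uSignBitLocation, int index)
{
    GLfloat mask[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    mask[index] = 1.0f;
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
    glUniform4fv(uSignBitLocation, 1, mask);
}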

Weird GLSL float color value in fragment shader on iOS

I am trying to write a simple GLSL fragment shader on an iPad 2, and I am running into a strange issue with the way OpenGL seems to represent an 8-bit "red" value once a pixel value has been converted into a float as part of the texture upload. What I want to do is pass in a texture that contains a large number of 8-bit table indexes and a 32bpp table of the actual pixel values.
My texture data looks like this:
// Lookup table stored in a texture
const uint32_t pixel_lut_num = 7;
uint32_t pixel_lut[pixel_lut_num] = {
// 0 -> 3 = w1 -> w4 (w4 is pure white)
0xFFA0A0A0,
0xFFF0F0F0,
0xFFFAFAFA,
0xFFFFFFFF,
// 4 = red
0xFFFF0000,
// 5 = green
0xFF00FF00,
// 6 = blue
0xFF0000FF
};
uint8_t indexes[4*4] = {
0, 1, 2, 3,
4, 4, 4, 4,
5, 5, 5, 5,
6, 6, 6, 6
};
Each texture is then bound and the texture data is uploaded like so:
GLuint texIndexesName;
glGenTextures(1, &texIndexesName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texIndexesName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED_EXT, width, height, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, indexes);
GLuint texLutName;
glGenTextures(1, &texLutName);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texLutName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixel_lut_num, 1, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixel_lut);
I am confident the texture setup and uniform values are working as expected, because the fragment shader is mostly working with the following code:
varying highp vec2 coordinate;
uniform sampler2D indexes;
uniform sampler2D lut;
void main()
{
// normalize to (RED * 42.5) then lookup in lut
highp float val = texture2D(indexes, coordinate.xy).r;
highp float normalized = val * 42.5;
highp vec2 lookupCoord = vec2(normalized, 0.0);
gl_FragColor = texture2D(lut, lookupCoord);
}
The code above takes an 8-bit index and looks up a 32bpp BGRA pixel value in lut. The part that I do not understand is where this 42.5 value is defined in OpenGL. I found this scale value through trial and error, and I have confirmed that the output colors for each pixel are correct with the 42.5 value (meaning the index for each lut lookup is right). But how exactly does OpenGL come up with this value?
In looking at this OpenGL man page, I find mention of two color constants, GL_c_SCALE and GL_c_BIAS, that seem to be used when converting the 8-bit "index" value to the floating point value used internally by OpenGL. Where are these constants defined, and how could I query their values at runtime or compile time? Is the actual floating point value of the "index" table the real issue here? I am at a loss to understand why the texture2D(indexes, ...) call returns this funky value. Is there some other way to get an int or float value for the index that works on iOS? I tried looking at 1D textures, but they do not seem to be supported.
Your color index values will be accessed as 8-bit UNORM values, so the range [0, 255] is mapped to the floating point interval [0, 1]. When you access your LUT texture, the texcoord range is also [0, 1]. But currently you only have a width of 7, so with your magic value of 42.5 you end up with the following:
INTEGER INDEX: 0: FP: 0.0 -> TEXCOORD: 0.0 * 42.5 == 0.0
INTEGER INDEX: 6: FP: 6.0/255.0 -> TEXCOORD: (6.0/255.0) * 42.5 == 0.9999999...
That mapping is close, but not 100% correct, since you do not map to the texel centers.
To get the correct mapping (see this answer for details), you would need something like:
INTEGER INDEX: 0: FP: 0.0 -> TEXCOORD: 0.0 + 1.0/(2.0 *n)
INTEGER INDEX: n-1: FP: (n-1)/255.0 -> TEXCOORD: 1.0 - 1.0/(2.0 *n)
where n is pixel_lut_num from your code above.
So a single scale value is not enough; you actually need an additional offset. The correct values would be:
scale  = (255 * (1 - 1/n)) / (n - 1) == 36.428...
offset = 1 / (2*n) == 0.0714...
One more thing: you shouldn't use GL_LINEAR as the minification filter for the LUT texture.
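In practice you would compute those two values on the CPU from the table size and hand them to the shader, rather than hard-coding 42.5. A small host-side sketch (uLutScale, uLutOffset, and program are hypothetical names, not part of the original code):
// Derive the index->texcoord mapping for an n-entry LUT stored in an n x 1 texture,
// so the shader can do:
//   gl_FragColor = texture2D(lut, vec2(val * uLutScale + uLutOffset, 0.5));
// (0.5 in y samples the center of the single-texel-high row.)
const GLfloat n = (GLfloat)pixel_lut_num;                          // 7 in the example above
const GLfloat scale  = (255.0f * (1.0f - 1.0f / n)) / (n - 1.0f);  // ~36.43 for n == 7
const GLfloat offset = 1.0f / (2.0f * n);                          // ~0.0714 for n == 7
glUseProgram(program);                                             // program = the linked shader
glUniform1f(glGetUniformLocation(program, "uLutScale"), scale);
glUniform1f(glGetUniformLocation(program, "uLutOffset"), offset);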

Depth texture as color attachment

I'd like to attach a depth texture as a color attachment to a framebuffer. (I'm on iOS, and GL_OES_depth_texture is supported.)
So I set up a texture like this:
glGenTextures(1, &TextureName);
glBindTexture(GL_TEXTURE_2D, TextureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, ImageSize.Width, ImageSize.Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, 0);
glGenFramebuffers(1, &ColorFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, ColorFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, TextureName, 0);
But now if I check the framebuffer status, I get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
What am I doing wrong here?
I also tried some combinations with GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24_OES, and GL_DEPTH_COMPONENT32_OES, but none of these worked (GL_OES_depth24 is also supported).
You can't. Textures with depth internal formats can only be attached to depth attachments. Textures with color internal formats can only be attached to color attachments.
As the previous answer mentioned, you cannot attach a texture with a depth format as a color surface. Looking at your comment, what you're really after is rendering to a 1-channel float format.
You could look at http://www.khronos.org/registry/gles/extensions/OES/OES_texture_float.txt, which adds float texture formats.
You can then initialize the texture as an alpha map, which only has one channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, ImageSize.Width, ImageSize.Height, 0, GL_ALPHA, GL_FLOAT, 0);
This may or may not work depending on which extensions your device supports.
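A minimal sketch of that approach, guarded by an extension check (whether the float alpha texture is actually renderable still depends on the driver, so keep the glCheckFramebufferStatus() check from the code above; strstr needs <string.h>):
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
if (extensions != NULL && strstr(extensions, "GL_OES_texture_float") != NULL) {
    // Single-channel float texture, then try it as the color attachment.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, ImageSize.Width, ImageSize.Height,
                 0, GL_ALPHA, GL_FLOAT, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, TextureName, 0);
}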

Multisampled rendering to texture

I am working with the following architecture:
OpenGL ES 2 on iOS
Two EAGL contexts with the same ShareGroup
Two threads (server, client = main thread); the server renders stuff to textures, the client displays the textures using simple textured quads.
Additional details on the server thread (working code)
An FBO is created during initialization:
void init(void) {
glGenFramebuffers(1, &fbo);
}
The render loop of the server looks roughly like this:
GLuint loop(void) {
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0,0,width,height);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Framebuffer completeness check omitted
glClear(GL_COLOR_BUFFER_BIT);
// actual drawing code omitted
// the drawing code bound other textures, so..
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, GL_NONE);
glFlush();
return tex;
}
All this works fine so far.
New (buggy) code
Now I want to add multisampling to the server thread, using the GL_APPLE_framebuffer_multisample extension, and I modified the initialization code like this:
void init(void) {
glGenFramebuffers(1, &resolve_fbo);
glGenFramebuffers(1, &sample_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
glGenRenderbuffers(1, &sample_colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sample_colorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sample_colorRenderbuffer);
// Framebuffer completeness check (sample_fbo) omitted
glBindRenderbuffer(GL_RENDERBUFFER, GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, GL_NONE);
}
The main loop has been changed to:
GLuint loop(void) {
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
glViewport(0,0,width,height);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glClear(GL_COLOR_BUFFER_BIT);
// actual drawing code omitted
glBindFramebuffer(GL_FRAMEBUFFER, resolve_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Framebuffer completeness check (resolve_fbo) omitted
// resolve multisampling
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolve_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sample_fbo);
glResolveMultisampleFramebufferAPPLE();
// the drawing code bound other textures, so..
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, GL_NONE);
glFlush();
return tex;
}
What I see now is that a texture contains data from multiple loop() calls, blended together. I guess I'm either missing an 'unbind' of some sort, or probably a glFinish() call (I previously had such a problem at a different point: I set texture data with glTexImage2D() and used it right afterwards, and that required a glFinish() call to force the texture to be updated).
However, inserting a glFinish() after the drawing code didn't change anything here.
Oh, never mind, such a stupid mistake. I omitted the detail that the loop() method actually contains a for loop and renders multiple textures; the mistake was that I bound the sample FBO only once before this loop, so after the first iteration the resolve FBO was the one bound.
Moving the FBO binding inside the loop fixed the problem.
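In other words, the inner per-texture loop has to re-bind the multisample FBO at the top of every iteration, because the resolve step leaves resolve_fbo bound. Roughly (textureCount is a placeholder for however many textures are rendered per pass):
for (int i = 0; i < textureCount; ++i) {
    // Rebind the multisample FBO every iteration; the resolve below leaves
    // resolve_fbo bound, which is what caused the blended-together textures.
    glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
    glViewport(0, 0, width, height);
    // ... create the target texture 'tex' and draw, as in loop() above ...
    glBindFramebuffer(GL_FRAMEBUFFER, resolve_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolve_fbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sample_fbo);
    glResolveMultisampleFramebufferAPPLE();
}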
Anyway, thanks to all the readers, and sorry for wasting your time :)
