Create And Write GL_ALPHA texture (OpenGL ES 2) - iOS

I'm new to OpenGL, so I'm not sure how to do this.
Currently I'm doing this to create an alpha texture on iOS:
GLuint framebuffer, renderBufferT;
glGenFramebuffers(1, &framebuffer);
glGenTextures(1, &renderBufferT);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glBindTexture(GL_TEXTURE_2D, renderBufferT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderBufferT, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
{
NSLog(#"createAlphaBufferWithSize not complete %x", status);
return NO;
}
But it returns an error: GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
I also wonder how to write to this texture in the fragment shader. Is it simply the same as with RGBA, like this:
gl_FragColor = vec1(0.5);
My intention is to use a more efficient texture, because there is so much texture reading in my code and I only need one color component.
Thanks for any direction where I might go with this.

I'm not an iOS guy but that error indicates your OpenGL ES driver (PowerVR) does not support rendering to the GL_ALPHA format. I have not seen any OpenGL ES drivers that will do that on any platform. You can create GL_ALPHA textures to use with OpenGL ES using the texture compression tool in the PowerVR SDK, but I think the smallest format you can render to will be 16-bit color - if that is even available.
A better way to make textures efficient is to use compressed formats because the GPU decodes them with dedicated hardware. You really need the PowerVR Texture compression tool. It is a free download:
http://www.imgtec.com/powervr/insider/sdkdownloads/sdk_licence_agreement.asp
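For reference, uploading a PVRTC-compressed image produced by that tool is a single call. This is only a sketch; pvrtcData and pvrtcSize are placeholders for the payload you read out of the compressed file:
// upload 4-bpp PVRTC data (iOS requires PVRTC textures to be square and power-of-two sized)
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       width, height, 0, pvrtcSize, pvrtcData);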

So I still haven't found the answers to my questions above, but I found a workaround.
In essence, since each pixel comprises 4 color components and I only need a single alpha component, I use one texture to store the 4 different logical alpha textures that I need. It takes a little effort to maintain these logical alpha textures.
To draw into this one texture that contains 4 logical alpha textures, I use a shader with a "sign bit" uniform that marks which color component I intend to write to.
The blend I used is
(GL_ONE, GL_ONE_MINUS_SRC_COLOR),
so the destination becomes src + dst * (1 - src): channels where the source is zero are left untouched, and only the selected channel accumulates the new alpha value.
And the fragment shader is like this:
uniform sampler2D uTexture;
uniform lowp vec4 uSignBit;
varying highp vec2 vTexCoord;
void main()
{
lowp vec4 textureColor = texture2D(uTexture, vTexCoord);
gl_FragColor = uSignBit * textureColor.a;
}
So, when I intend to write the alpha value of some texture into logical alpha texture number 2, I write:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
glUniform4f(uSignBit, 0.0, 1.0, 0.0, 0.0);
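Reading one of the logical alpha textures back out works the same way in reverse: sample the packed texture and select the channel with a mask. This is just a sketch of the idea; uPackedTexture and uChannelMask are hypothetical names playing the same role as uTexture and uSignBit above:
uniform sampler2D uPackedTexture; // the RGBA texture holding the 4 logical alpha layers
uniform lowp vec4 uChannelMask;   // e.g. (0.0, 1.0, 0.0, 0.0) selects logical texture 2
varying highp vec2 vTexCoord;
void main()
{
    lowp vec4 texel = texture2D(uPackedTexture, vTexCoord);
    gl_FragColor = vec4(dot(texel, uChannelMask)); // extract the selected channel
}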

Related

I have encountered strange behaviour with an OpenGL shader in iOS

I am writing an OpenGL shader in iOS to apply image effects. I need to use a map to look up pixels from an image. Here is the code for my shader:
precision lowp float;
uniform sampler2D u_Texture;
uniform sampler2D u_Map; //map
varying highp vec2 v_TexCoordinate;
void main()
{
//get the pixel
vec3 texel = texture2D(u_Map, v_TexCoordinate).rgb;
gl_FragColor = vec4(texel, 1.0);
}
Above is a test shader which should simply display the map being used.
Now, here is the behaviour of the above shader with two maps:
MAP_1 (256 x 1 pixels): the output shows the map as expected.
MAP_2 (256 x 3 pixels): the output is a black image.
So, while the 256 x 1 map works as it should, the 256 x 3 map produces a black image. I have tested this with other maps too and found that it comes down to the pixel height of the map.
Here is how I am loading the map:
+ (GLuint)loadImage:(UIImage *)image{
//Convert Image to Data
GLubyte* imageData = malloc(image.size.width * image.size.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, image.size.width, image.size.height, 8, image.size.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, image.size.width, image.size.height), image.CGImage);
//Release Objects
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
//load into texture
GLuint textureHandle;
glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_2D, textureHandle);
[GLToolbox checkGLError:#"glTextureHandle"];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.size.width, image.size.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
[GLToolbox checkGLError:#"glTexImage2D"];
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//Free Image Data
free(imageData);
return textureHandle;
}
I am not sure why this is happening. Maybe I am doing something wrong in loading the map of size 256x3. Please show me how to fix this issue.
Thanks in advance.
In OpenGL ES 2, non-power-of-two textures are required to have no mipmaps (you're fine there) and to use GL_CLAMP_TO_EDGE wrapping (I think this is your problem). Quoting the documentation:
Similarly, if the width or height of a texture image are not powers of two and either the GL_TEXTURE_MIN_FILTER is set to one of the functions that requires mipmaps or the GL_TEXTURE_WRAP_S or GL_TEXTURE_WRAP_T is not set to GL_CLAMP_TO_EDGE, then the texture image unit will return (R, G, B, A) = (0, 0, 0, 1).
You don't set the wrap mode, but from the same document:
Initially, GL_TEXTURE_WRAP_S is set to GL_REPEAT.
To fix, set the wrap mode to GL_CLAMP_TO_EDGE, or use a 256x4 texture instead of 256x3 (I'd lean toward the latter unless there's some obstacle to doing so, GPUs love powers of two!).
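In code, that just means setting the wrap parameters right after binding the texture in loadImage: (a minimal sketch):
// required for non-power-of-two textures in OpenGL ES 2
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);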

Setting up and accessing multiple textures in OpenGL

I am doing some heavy computation with OpenGL ES on iOS. Currently, I am trying to implement lookup tables using textures and texelFetch(), and have succeeded in making one, but only one at a time. How do I make multiple sampler2Ds available in GLSL? I tried the following code but it gives weird results.
[Update: After reading the official tutorial #13, I tried reworking the code, but it still does not work. It seems both sampler2Ds refer to the first one set, even though I checked that the data pointers point to different content.]
In Objective-C, I tried something like this:
int lookupTabSize = 2000;
GLuint tab0Handle;
glGenTextures(1, &tab0Handle);
glBindTexture(GL_TEXTURE_2D, tab0Handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, lookupTabSize,1, 0, GL_RED, GL_FLOAT,[self getConstant:0]); // Pointer checked
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLuint tab0ID = glGetUniformLocation(program, "lookupTab0");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tab0Handle);
glUniform1f(tab0ID,1);
GLuint tab1Handle;
glGenTextures(1, &tab1Handle);
glBindTexture(GL_TEXTURE_2D, tab1Handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, lookupTabSize,1, 0, GL_RED, GL_FLOAT,[self getConstant:1]); // Pointer checked
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLuint tab1ID = glGetUniformLocation(program, "lookupTab1");
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tab1Handle);
glUniform1f(tab1ID,0);
In GLSL, I just output the lookup content, but it seems both tables refer to the same data source, though they shouldn't, as I double-checked the references. Any ideas about how I should do this correctly? Some sample code would be much appreciated.
#version 300 es
uniform sampler2D lookupTab0;
uniform sampler2D lookupTab1;
in int InV0;
out float OutV0;
float f(int ptIn){
float valueSum = 0.0f;
ivec2 pos = ivec2(ptIn, 0);
valueSum = valueSum + texelFetch(lookupTab0,pos,0).r;
valueSum = valueSum + texelFetch(lookupTab1,pos,0).r;
return valueSum;
}
void main(){
OutV0 = f(InV0);
}
You need to use glUniform1i() to set sampler uniforms. So these two calls:
glUniform1f(tab0ID,1);
...
glUniform1f(tab1ID,0);
will not work.
Also, the values appear to be swapped, since you're binding lookup table 0 to texture unit GL_TEXTURE0, and lookup table 1 to GL_TEXTURE1. So the correct calls are:
glUniform1i(tab0ID, 0);
...
glUniform1i(tab1ID, 1);
This is based on the following found on page 66, in the section "2.12.6 Uniform Values", of the ES 3.0 spec:
Only the Uniform1i{v} commands can be used to load sampler values
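Putting it together, a corrected setup might look like this (a sketch reusing the names from the question; texture creation stays as it was):
glUseProgram(program); // the program must be current when calling glUniform*
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tab0Handle);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tab1Handle);
// sampler uniforms take the texture unit index, set with glUniform1i
glUniform1i(glGetUniformLocation(program, "lookupTab0"), 0);
glUniform1i(glGetUniformLocation(program, "lookupTab1"), 1);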

Achieving a persistence effect in GLKit view

I have a GLKit view set up to draw a solid shape, a line and an array of points which all change every frame. The basics of my drawInRect method are:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
}
This works fine; each array contains around 2000 points, but my iPad seems to have no problem rendering it all at 60fps.
The issue now is that I would like the lines to fade away slowly over time, instead of disappearing with the next frame, making a persistence or phosphor-like effect. The solid shape and the points must not linger, only the line.
I've tried the brute-force method (as used in Apple's example project aurioTouch): storing the data from the last 100 frames and drawing all 100 lines every frame, but this is too slow. My iPad can't render more than about 10fps with this method.
So my question is: can I achieve this more efficiently using some kind of frame or render buffer which accumulates the color of previous frames? Since I'm using GLKit, I haven't had to deal directly with these things before, and so don't know much about them. I've read about accumulation buffers, which seem to do what I want, but I've heard that they are very slow and anyway I can't tell whether they even exist in OpenGL ES 3, let alone how to use them.
I'm imagining something like the following (after setting up some kind of storage buffer):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw contents of storage buffer
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// multiply the alpha value of each pixel in the storage buffer by 0.9 to fade
// draw line again, this time into the storage buffer
// draw points
glDrawArrays(GL_POINTS, ...);
}
Is this possible? What are the commands I need to use (in particular, to combine the contents of the storage buffer and change its alpha)? And is this likely to actually be more efficient than the brute-force method?
I ended up achieving the desired result by rendering to a texture, as described for example here. The basic idea is to set up a custom framebuffer and attach a texture to it. I then render the line that I want to persist into this framebuffer (without clearing it) and render the whole framebuffer as a texture into the default framebuffer (which is cleared every frame). Instead of clearing the custom framebuffer, I render a slightly opaque quad over the whole screen to make the previous contents fade out a little every frame.
The relevant code is below; setting up the framebuffer and persistence texture is done in the init method:
// vertex data for fullscreen textured quad (x, y, texX, texY)
GLfloat persistVertexData[16] = {-1.0, -1.0, 0.0, 0.0,
-1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 1.0, 0.0,
1.0, 1.0, 1.0, 1.0};
// setup texture vertex buffer
glGenBuffers(1, &persistVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(persistVertexData), persistVertexData, GL_STATIC_DRAW);
// create texture for persistence data and bind
glGenTextures(1, &persistTexture);
glBindTexture(GL_TEXTURE_2D, persistTexture);
// provide an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1536, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// create frame buffer for persistence data
glGenFramebuffers(1, &persistFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// attach the texture as the color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, persistTexture, 0);
// check for errors
NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Error: persistence framebuffer incomplete!");
// initialize default frame buffer pointer
defaultFrameBuffer = -1;
and in the glkView:drawInRect: method:
// get default frame buffer id
if (defaultFrameBuffer == -1)
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);
// clear screen
glClear(GL_COLOR_BUFFER_BIT);
// DRAW PERSISTENCE
// bind persistence framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// render full screen quad to fade
glEnableVertexAttribArray(...);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, 0.0, 0.0, 0.0, 0.01);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// add most recent line
glBindBuffer(GL_ARRAY_BUFFER, dataVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, color[0], color[1], color[2], 0.8*color[3]);
glDrawArrays(...);
// return to normal framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// switch to texture shader
glUseProgram(textureProgram);
// bind texture
glBindTexture(GL_TEXTURE_2D, persistTexture);
glUniform1i(textureTextureU, 0);
// set texture vertex attributes
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glEnableVertexAttribArray(texturePositionA);
glEnableVertexAttribArray(textureTexCoordA);
glVertexAttribPointer(self.shaderBridge.texturePositionA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glVertexAttribPointer(self.shaderBridge.textureTexCoordA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (const GLvoid *)(2*sizeof(GLfloat)));
// draw fullscreen quad with texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// DRAW NORMAL FRAME
glUseProgram(normalProgram);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
The texture shaders are very simple: the vertex shader just passes the texture coordinate to the fragment shader:
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(void)
{
gl_Position = aPosition;
vTexCoord = aTexCoord;
}
and the fragment shader reads the fragment color from the texture:
uniform highp sampler2D uTexture;
varying highp vec2 vTexCoord;
void main(void)
{
gl_FragColor = texture2D(uTexture, vTexCoord);
}
Although this works, it doesn't seem very efficient, causing the renderer utilization to rise close to 100%. It only seems better than the brute-force approach when the number of lines drawn each frame exceeds 100 or so. If anyone has any suggestions on how to improve this code, I would be very grateful!

Migrating from glReadPixels to CVOpenGLESTextureCache

Currently, I use glReadPixels in an iPad app to save the contents of an OpenGL texture attached to a framebuffer, which is terribly slow. The texture has a size of 1024x768, and I plan on supporting Retina display at 2048x1536. The retrieved data is saved to a file.
After reading from several sources, using CVOpenGLESTextureCache seems to be the only faster alternative. However, I could not find any guide or documentation as a good starting point.
How do I rewrite my code so it uses CVOpenGLESTextureCache? What parts of the code need to be rewritten? Using third-party libraries is not a preferred option unless there is already documentation on how to do this.
Code follows below:
//Generate a framebuffer for drawing to the texture
glGenFramebuffers(1, &textureFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
//Create the texture itself
glGenTextures(1, &drawingTexture);
glBindTexture(GL_TEXTURE_2D, drawingTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_EXT, pixelWidth, pixelHeight, 0, GL_RGBA32F_EXT, GL_UNSIGNED_BYTE, NULL);
//When drawing to or reading the texture, change the active buffer like that:
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
//When the data of the texture needs to be retrieved, use glReadPixels:
GLubyte *buffer = (GLubyte *) malloc(pixelWidth * pixelHeight * 4);
glReadPixels(0, 0, pixelWidth, pixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)buffer);
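For orientation only, here is a rough sketch of the CVOpenGLESTextureCache route: render into a texture backed by an IOSurface-backed CVPixelBuffer, then read the pixels by locking the pixel buffer instead of calling glReadPixels. eaglContext is assumed to be your current EAGLContext, and all error checking is omitted:
#import <CoreVideo/CoreVideo.h>

CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);
// an IOSurface-backed pixel buffer is what makes the rendered pixels directly readable on the CPU
NSDictionary *attrs = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef renderTarget;
CVPixelBufferCreate(kCFAllocatorDefault, pixelWidth, pixelHeight, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &renderTarget);
// wrap the pixel buffer in an OpenGL ES texture
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA, pixelWidth, pixelHeight,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);
// attach it to the framebuffer in place of drawingTexture
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);
// ... draw as before, then read the result without glReadPixels:
glFinish();
CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
GLubyte *pixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget); // rows may be padded
// ... write the BGRA pixels (bytesPerRow bytes per row) to the file ...
CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);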

Weird GLSL float color value in fragment shader on iOS

I am trying to write a simple GLSL fragment shader on an iPad 2 and I am running into a strange issue with the way OpenGL seems to represent an 8-bit "red" value once a pixel value has been converted into a float as part of the texture upload. What I want to do is pass in a texture that contains a large number of 8-bit table indexes and a 32bpp table of the actual pixel values.
My texture data looks like this:
// Lookup table stored in a texture
const uint32_t pixel_lut_num = 7;
uint32_t pixel_lut[pixel_lut_num] = {
// 0 -> 3 = w1 -> w4 (w4 is pure white)
0xFFA0A0A0,
0xFFF0F0F0,
0xFFFAFAFA,
0xFFFFFFFF,
// 4 = red
0xFFFF0000,
// 5 = green
0xFF00FF00,
// 6 = blue
0xFF0000FF
};
uint8_t indexes[4*4] = {
0, 1, 2, 3,
4, 4, 4, 4,
5, 5, 5, 5,
6, 6, 6, 6
};
Each texture is then bound and the texture data is uploaded like so:
GLuint texIndexesName;
glGenTextures(1, &texIndexesName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texIndexesName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED_EXT, width, height, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, indexes);
GLuint texLutName;
glGenTextures(1, &texLutName);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texLutName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixel_lut_num, 1, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixel_lut);
I am confident the texture setup and uniform values are working as expected, because the fragment shader is mostly working with the following code:
varying highp vec2 coordinate;
uniform sampler2D indexes;
uniform sampler2D lut;
void main()
{
// normalize to (RED * 42.5) then lookup in lut
highp float val = texture2D(indexes, coordinate.xy).r;
highp float normalized = val * 42.5;
highp vec2 lookupCoord = vec2(normalized, 0.0);
gl_FragColor = texture2D(lut, lookupCoord);
}
The code above takes an 8-bit index and looks up a 32bpp BGRA pixel value in lut. The part that I do not understand is where this 42.5 value is defined in OpenGL. I found this scale value through trial and error, and I have confirmed that the output colors for each pixel are correct (meaning the index for each lut lookup is right) with the 42.5 value. But how exactly does OpenGL come up with this value?
Looking at this OpenGL man page, I find mention of two color constants, GL_c_SCALE and GL_c_BIAS, that seem to be used when converting the 8-bit "index" value to a floating-point value used internally by OpenGL. Where are these constants defined, and how could I query their values at runtime or compile time? Is the actual floating-point value of the "index" table the real issue here? I am at a loss to understand why the texture2D(indexes, ...) call returns this funky value; is there some other way to get an int or float value for the index that works on iOS? I tried looking at 1D textures but they do not seem to be supported.
Your color index values will be accessed as 8-bit UNORMs, so the range [0,255] is mapped to the floating-point interval [0,1]. When you access your LUT texture, the texcoord range is also [0,1]. But currently, you only have a width of 7. So with your magic value of 42.5, you end up with the following:
INTEGER INDEX: 0: FP: 0.0 -> TEXCOORD: 0.0 * 42.5 == 0.0
INTEGER INDEX: 6: FP: 6.0/255.0 -> TEXCOORD: (6.0/255.0) * 42.5 == 0.9999999...
That mapping is close, but not 100% correct, since you do not map to texel centers.
To get the correct mapping (see this answer for details), you would need something like:
INTEGER INDEX: 0: FP: 0.0 -> TEXCOORD: 0.0 + 1.0/(2.0 *n)
INTEGER INDEX: n-1: FP: (n-1)/255.0 -> TEXCOORD: 1.0 - 1.0/(2.0 *n)
where n is pixel_lut_num from your code above.
So, a single scale value is not enough, you actually need an additional offset. The correct values would be:
scale= (255 * (1 - 1/n)) / (n-1) == 36.428...
offset= 1/(2.0*n) == 0.0714...
One more thing: you shouldn't use GL_LINEAR for the LUT minification texture filter.
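Putting the scale and offset into the shader, a corrected lookup could be sketched like this (assuming n = 7, i.e. pixel_lut_num from the question):
varying highp vec2 coordinate;
uniform sampler2D indexes;
uniform sampler2D lut;
void main()
{
    const highp float n = 7.0;                                 // pixel_lut_num
    highp float scale  = 255.0 * (1.0 - 1.0 / n) / (n - 1.0);  // ~36.43
    highp float offset = 1.0 / (2.0 * n);                      // ~0.0714
    highp float index = texture2D(indexes, coordinate.xy).r;   // 8-bit UNORM in [0,1]
    gl_FragColor = texture2D(lut, vec2(index * scale + offset, 0.0));
}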
