I'm having trouble displaying pixel-based art (think retro tiles and sprites) with OpenGL ES 1.1 on the iPhone.
Tiles are represented using 8 bytes (1 byte per row), with each bit indicating whether a pixel is set.
For example, a tile showing the number 8:
0 0 0 0 0 0 0 0 ->
0 0 1 1 1 0 0 0 ->   xxx
0 1 0 0 0 1 0 0 ->  x   x
0 1 0 0 0 1 0 0 ->  x   x
0 0 1 1 1 0 0 0 ->   xxx
0 1 0 0 0 1 0 0 ->  x   x
0 1 0 0 0 1 0 0 ->  x   x
0 0 1 1 1 0 0 0 ->   xxx
I convert this to OpenGL points on the iPhone and draw them with
glDrawArrays(GL_POINTS, 0, 64);
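The conversion is roughly the following (a trimmed-down sketch of what I do; tile, tileX and tileY are placeholders, and I assume bit 7 of each row byte is the leftmost pixel):
// Sketch: build one GL point per pixel; set pixels are black, unset white.
GLfloat verts[64 * 2];
GLubyte colors[64 * 4];
for (int row = 0; row < 8; row++) {
    for (int col = 0; col < 8; col++) {
        int i = row * 8 + col;
        verts[i * 2 + 0] = tileX + col;   // one GL unit per pixel
        verts[i * 2 + 1] = tileY + row;
        int on = (tile[row] >> (7 - col)) & 1;
        colors[i * 4 + 0] = colors[i * 4 + 1] = colors[i * 4 + 2] = on ? 0 : 255;
        colors[i * 4 + 3] = 255;
    }
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
glDrawArrays(GL_POINTS, 0, 64);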
The logic is correct, but it doesn't give the retro effect; I was looking for more of a blocky style. I read that I can turn off point smoothing so that points are rendered as squares (I think it was glDisable(GL_POINT_SMOOTH), but I'm not sure it affects ES, since nothing changed).
Possible solutions I found for related problems:
Use a framebuffer to render at a smaller resolution and then scale it up into the render buffer. I don't know how this is done or whether it will work.
Create an image from the pixels, create a texture from that image and finally render that texture.
Possible solutions I thought of:
For each pixel, draw two pixels instead, both horizontally and vertically.
Draw each pixel as a square using triangles.
Use glPointSize: it gives the correct effect when set to 2, but the coordinates are then messed up and aligning becomes harder.
Ultimately I would like the tiles to be presented in that crisp, blocky, retro style.
This is more about me understanding how OpenGL and pixels work, and I'm using a Game Boy emulator to work this out. If someone suggests creating the graphics manually and loading them as textures, that's not a feasible answer here.
There are quite a few ways of doing this, and I would suggest the first one you already found: draw the scene to a smaller buffer and then redraw it to the canvas.
What you are looking for here is an FBO (framebuffer object). Find some examples of how to create an FBO and attach a texture to it; this gives you a buffer with whatever dimensions you choose. One common issue is that you will most likely need a POT texture (power-of-two dimensions: 2, 4, 8, 16..., so a 64x128 buffer for instance), so to work with a different size you should set the viewport (glViewport) to use only the part of the buffer you need.
In the end this gives you a low-resolution texture which can be drawn to the canvas (view). How you draw into it is something you should experiment with. Points may not be the best solution; even in your case I would use lines between the points you defined in your example. At this point you must choose whether to draw with or without antialiasing; to enable it, look into multisampling on iOS.
Once you have the texture the shape was drawn into, you need to redraw it to the view, which is pretty much drawing a full-screen texture. Again there are multiple ways of doing it. The most powerful tool here is the texture parameters: GL_NEAREST discards all color interpolation, so the squares stay visible; GL_LINEAR (or trilinear) does some interpolation, and the result will probably be closer to what you want to achieve. You can then again play around with multisampling to add antialiasing and get a better result.
So the tools at your disposal here are:
Different FBO buffer sizes
Antialiasing on the FBO
Texture parameters
Antialiasing when redrawing to canvas
As for the FBO, setting one up is one of the easier parts:
Generate frame buffer (glGenFramebuffers)
Bind the frame buffer (glBindFramebuffer)
Create a texture (glGenTextures) and bind it (glBindTexture)
Set texture data glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, twidth, theight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
Attach the texture to the frame buffer glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture.textureID, 0);
Now, when you draw into the texture you bind the FBO, and when you draw to the main buffer you bind the main framebuffer again.
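Put together, the setup looks roughly like this (a sketch; on ES 1.1 these calls are the glGenFramebuffersOES/GL_FRAMEBUFFER_OES extension variants, and viewFramebuffer stands for whatever on-screen framebuffer your view uses):
GLuint fbo, fboTexture;
const GLsizei twidth = 256, theight = 256;   // POT size; use glViewport for the part you need

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);   // keeps the blocky look
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, twidth, theight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fboTexture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer
}

// ... draw the low-resolution scene here ...

glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);   // back to the on-screen framebuffer
// ... now draw a quad textured with fboTexture ...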
Since this is quite a broad answer (as is the question): if you implement this and run into additional questions or issues, it is best to create separate questions and perhaps link them in the comments.
Good luck.
I'm not sure if my question was unclear, but to draw pixels to the screen you have to create a texture, pass the pixel data into it, and then render that texture onto the screen. It is the equivalent of glDrawPixels.
The code would be:
#define W 255,255,255
#define G 192,192,192
//8 x 8 tile with 3 bytes for each pixel RGB format
GLubyte pixels[8 * 8 * 3] = {
W,W,W,W,W,W,W,W,
W,W,G,G,G,W,W,W,
W,G,W,W,W,G,W,W,
W,G,W,W,W,G,W,W,
W,W,G,G,G,W,W,W,
W,G,W,W,W,G,W,W,
W,G,W,W,W,G,W,W,
W,W,G,G,G,W,W,W
};
somewhere in setup:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 8, 8, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then draw the texture as usual:
glActiveTexture(GL_TEXTURE0);
glUniform1i([program uniformLocation:@"s_texture"], 0);
glBindTexture(GL_TEXTURE_2D, tex);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, v);
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE, 0, t);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, i);
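For completeness, the arrays v, t and i referenced above just describe a textured quad; one possible set of values (a sketch assuming a full-screen quad in normalized device coordinates) would be:
// Full-screen quad: positions in NDC, texture coordinates, and the six
// indices consumed by glDrawElements above.
const GLfloat v[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
    -1.0f,  1.0f,
};
const GLfloat t[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f,
};
const GLubyte i[] = { 0, 1, 2, 0, 2, 3 };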
To get a sort of index of the elements drawn on the screen, I've created a framebuffer into which objects are drawn with solid colors, using a color attachment of type GL_R32UI.
The framebuffer I created has two renderbuffers attached: one for color and one for depth. Here is a schematic of how it was created, using Python:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
rbo = glGenRenderbuffers(2) # GL_DEPTH_COMPONENT16 and GL_COLOR_ATTACHMENT0
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0])
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo[0])
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1])
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[1])
glBindRenderbuffer(GL_RENDERBUFFER, 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
I read the indexes back with glReadPixels like this:
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glReadPixels(x, y, threshold, threshold, GL_RED_INTEGER, GL_UNSIGNED_INT, r_data)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
The code works perfectly, I have no problem with that.
But for debugging, I'd like to see the indexes on the screen.
With the data obtained below, how could I display the result of drawing the indices (unsigned int) on the screen?
active_fbo = glGetIntegerv(GL_FRAMEBUFFER_BINDING)
my_indices_fbo = my_fbo
my_rbo_depth = rbo[0]
my_rbo_color = rbo[1]
## how mix my_rbo_color and cur_fbo??? ##
glBindFramebuffer(GL_FRAMEBUFFER, active_fbo)
glBlitFramebuffer transfers a rectangle of pixel values from a region of the read framebuffer to a region of the draw framebuffer:
glBindFramebuffer( GL_READ_FRAMEBUFFER, my_fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, active_fbo );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST );
Note that you have to be careful: a GL_INVALID_OPERATION error will occur if the read buffer contains unsigned integer values and any draw buffer does not. Since the internal format of the framebuffer's color attachment is GL_R32UI and the internal format of the drawing buffer is usually something like GL_RGBA8, this may not work, or it may not do what you expect.
But you can create a framebuffer with a texture attached to its color plane and use that texture as input to a post pass, where you draw a quad over the whole canvas.
First you have to create the texture with the same size as the framebuffer:
ColorMap0 = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, ColorMap0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
You have to attach the texture to the frame buffer:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorMap0, 0);
When you have drawn the scene, unbind the framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Now you can use the texture as input for a final pass. Simply bind the texture, enable 2D texturing, and draw a quad over the whole canvas. The quad should range from (-1, -1) to (1, 1), with texture coordinates ranging from (0, 0) to (1, 1). Of course you can use a shader for this, with a texture sampler uniform in the fragment shader (for a GL_R32UI texture that sampler has to be a usampler2D); you can then read the texel from the texture and write to the fragment in any way you want.
Extension to the answer
If performance is not important, you can convert the buffer on the CPU and draw it to the canvas after reading the framebuffer with glReadPixels. For that you can leave your code as it is and read the framebuffer with glReadPixels, but you have to convert the buffer to a format appropriate to the drawing buffer; I suggest the internal format GL_RGBA8 or GL_RGB8. You then have to create a new texture with the converted buffer data.
debugTexturePlane = ...;
debugTexture = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, debugTexturePlane);
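One possible way to fill debugTexturePlane on the CPU (a hypothetical sketch; the index-to-color mapping is arbitrary, it only has to make different indices visually distinguishable):
// r_data: the width * height GLuint indices read back with glReadPixels.
// debugTexturePlane: width * height * 3 bytes (GL_RGB, GL_UNSIGNED_BYTE).
for (int p = 0; p < width * height; ++p) {
    GLuint  index = r_data[p];
    GLubyte *px   = debugTexturePlane + p * 3;
    px[0] = (GLubyte)(index * 37);    // arbitrary "hash" so indices get distinct colors
    px[1] = (GLubyte)(index * 59);
    px[2] = (GLubyte)(index * 83);
}
// If width is not a multiple of 4, also call glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
// before the glTexImage2D call above.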
From here on you have two possibilities.
Either you create a new framebuffer and attach the texture to its color plane
debugFbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, debugFbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, debugTexture, 0);
and use glBlitFramebuffer as described above to copy from the debug framebuffer to the drawing buffer. This should not cause any problems, because the internal formats of the two buffers now match.
Or you draw a textured quad over the whole viewport. The code may look like this (old school):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
glEnd();
In the following simple code, I load 1-channel data into a texture. I use glTexImage2D() with GL_LUMINANCE (which is a 1-channel format) and GL_UNSIGNED_BYTE, so it should take one byte per pixel. I allocate a buffer whose size equals the number of pixels (2 x 2) to hold the input pixel data (the pixel values don't matter for our purposes).
When you run the following code with Address Sanitizer enabled, it detects a heap buffer overflow in the call to glTexImage2D(), saying that it tried to read beyond the bounds of the heap-allocated buffer:
#import <OpenGLES/ES2/gl.h>
//...
EAGLContext* context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
GLsizei width = 2, height = 2;
void *data = malloc(width * height); // contents don't matter for now
glTexImage2D(GL_TEXTURE_2D,
             0,                 // level
             GL_LUMINANCE,      // internal format
             width,
             height,
             0,                 // border
             GL_LUMINANCE,      // format
             GL_UNSIGNED_BYTE,  // type
             data);
This is 100% reproducible and happens on both the iOS simulator and a device. Only if I increase the size of the buffer to 6 does it not overflow (2 bytes more than the expected size of 4).
Sizes of 1x1 and 4x4 don't seem to have this problem, but 2x2 and 3x3 do. It seems kind of arbitrary.
What is wrong?
I have solved it thanks to @genpfault's comment.
I need to set the unpack alignment to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Specifically, the unpack alignment determines the alignment for the start of each row. The default value is 4. Since my rows don't have any special alignment, and there are no gaps between row bytes, the alignment should be 1.
The first row will always be aligned, because malloc returns 16-byte-aligned buffers. But the second and subsequent rows were misaligned with the default alignment of 4 unless the row length was a multiple of 4 (this explains why 2x2 and 3x3 don't work but 4x4 does). 1x1 happens to work because it has no second row.
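In other words, the number of bytes GL reads per row is computed roughly like this (a sketch of the rule, not actual driver code):
// Bytes GL assumes per row when unpacking client memory.
size_t unpackRowStride(size_t width, size_t bytesPerPixel, size_t alignment)
{
    size_t row = width * bytesPerPixel;
    return (row + alignment - 1) / alignment * alignment;   // round up to the alignment
}
// 2x2 GL_LUMINANCE with the default alignment of 4: the first row is padded
// to 4 bytes, plus 2 bytes for the last row = 6 bytes read from a 4-byte
// buffer, which is exactly the overflow reported above. With an alignment
// of 1 the stride is 2 bytes and only 4 bytes are read.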
I'm new to OpenGL so I'm not sure how to do this.
Currently I'm doing this to create an alpha texture on iOS:
GLuint framebuffer, renderBufferT;
glGenFramebuffers(1, &framebuffer);
glGenTextures(1, &renderBufferT);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glBindTexture(GL_TEXTURE_2D, renderBufferT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderBufferT, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
{
NSLog(@"createAlphaBufferWithSize not complete %x", status);
return NO;
}
But it returns an error: GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
And I also wonder how to write to this texture in the fragment shader. Is it simply the same as with RGBA, like this:
gl_FragColor = vec1(0.5);
My intention is to use an efficient texture, because there is so much texture reading in my code, while I only need one color component.
Thanks for any direction on where I might go with this.
I'm not an iOS guy, but that error indicates your OpenGL ES driver (PowerVR) does not support rendering to the GL_ALPHA format. I have not seen any OpenGL ES drivers that will do that on any platform. You can create GL_ALPHA textures to use with OpenGL ES using the texture compression tool in the PowerVR SDK, but I think the smallest format you can render to will be 16-bit color, if that is even available.
A better way to make textures efficient is to use compressed formats because the GPU decodes them with dedicated hardware. You really need the PowerVR Texture compression tool. It is a free download:
http://www.imgtec.com/powervr/insider/sdkdownloads/sdk_licence_agreement.asp
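For reference, a texture compressed with that tool is uploaded with glCompressedTexImage2D instead of glTexImage2D, roughly like this (a sketch of the 4bpp PVRTC case; pvrtcData stands for the raw data loaded from the tool's output file, the size formula follows Apple's PVRTC rules, and PVRTC textures must be square with power-of-two dimensions):
// pvrtcData: the raw 4bpp PVRTC payload exported by the PowerVR tool.
GLsizei dataSize = MAX(width, 8) * MAX(height, 8) * 4 / 8;
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       width, height, 0, dataSize, pvrtcData);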
So I still haven't found the answers to my questions above, but I found a workaround.
In essence, since each pixel comprises 4 color components and I only need one for alpha, I use one texture to store the 4 different logical alpha textures that I need. It takes a little effort to maintain these logical alpha textures.
And to draw into this one texture that contains the 4 logical alpha textures, I use a shader with a "sign bit" uniform that marks which color component I intend to write to.
The blend function I use is (GL_ONE, GL_ONE_MINUS_SRC_COLOR), and the fragment shader looks like this:
uniform lowp vec4 uSignBit;
void main()
{
lowp vec4 textureColor = texture2D(uTexture,vTexCoord);
gl_FragColor = uSignBit * textureColor.a;
}
So, when I intend to write the alpha value of some texture into logical alpha texture number 2, I write:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
glUniform4f(uSignBit, 0.0, 1.0, 0.0, 0.0);
I am trying to write a simple GLSL fragment shader on an iPad 2, and I am running into a strange issue with the way OpenGL seems to represent an 8-bit "red" value once a pixel has been converted to a float as part of the texture upload. What I want to do is pass in one texture that contains a large number of 8-bit table indexes and another containing a 32bpp table of the actual pixel values.
My texture data looks like this:
// Lookup table stored in a texture
const uint32_t pixel_lut_num = 7;
uint32_t pixel_lut[pixel_lut_num] = {
// 0 -> 3 = w1 -> w4 (w4 is pure white)
0xFFA0A0A0,
0xFFF0F0F0,
0xFFFAFAFA,
0xFFFFFFFF,
// 4 = red
0xFFFF0000,
// 5 = green
0xFF00FF00,
// 6 = blue
0xFF0000FF
};
uint8_t indexes[4*4] = {
0, 1, 2, 3,
4, 4, 4, 4,
5, 5, 5, 5,
6, 6, 6, 6
};
Each texture is then bound and the texture data is uploaded like so:
GLuint texIndexesName;
glGenTextures(1, &texIndexesName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texIndexesName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED_EXT, width, height, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, indexes);
GLuint texLutName;
glGenTextures(1, &texLutName);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texLutName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixel_lut_num, 1, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixel_lut);
I am confident the texture setup and uniform values are working as expected, because the fragment shader is mostly working with the following code:
varying highp vec2 coordinate;
uniform sampler2D indexes;
uniform sampler2D lut;
void main()
{
// normalize to (RED * 42.5) then lookup in lut
highp float val = texture2D(indexes, coordinate.xy).r;
highp float normalized = val * 42.5;
highp vec2 lookupCoord = vec2(normalized, 0.0);
gl_FragColor = texture2D(lut, lookupCoord);
}
The code above takes an 8-bit index and looks up a 32bpp BGRA pixel value in the lut. The part that I do not understand is where this 42.5 value is defined in OpenGL. I found this scale value through trial and error, and I have confirmed that the output colors for each pixel are correct (meaning the index used for each lut lookup is right) with the 42.5 value. But how exactly does OpenGL come up with this value?
Looking at the OpenGL man pages, I find mention of two color constants, GL_c_SCALE and GL_c_BIAS, that seem to be used when converting the 8-bit "index" value to the floating-point value used internally by OpenGL. Where are these constants defined, and how could I query their values at runtime or compile time? Is the actual floating-point value of the "index" texture the real issue here? I am at a loss to understand why the texture2D(indexes, ...) call returns this funky value; is there some other way to get an int or float value for the index that works on iOS? I tried looking at 1D textures, but they do not seem to be supported.
Your color index values are accessed as 8-bit UNORMs, so the range [0, 255] is mapped to the floating-point interval [0, 1]. When you access your LUT texture, the texcoord range is also [0, 1], but the LUT currently only has a width of 7. So with your magic value of 42.5 you end up with the following:
INTEGER INDEX: 0: FP: 0.0 -> TEXCOORD: 0.0 * 42.5 == 0.0
INTEGER INDEX: 6: FP: 6.0/255.0 -> TEXCOORD: (6.0/255.0) * 42.5 == 0.9999999...
That mapping is close, but not 100% correct, since you do not map to texel centers.
To get the correct mapping (see this answer for details), you would need something like:
INTEGER INDEX: 0:   FP: 0.0         -> TEXCOORD: 0.0 + 1.0/(2.0*n)
INTEGER INDEX: n-1: FP: (n-1)/255.0 -> TEXCOORD: 1.0 - 1.0/(2.0*n)
where n is pixel_lut_num from your code above.
So a single scale value is not enough; you actually need an additional offset. The correct values would be:
scale= (255 * (1 - 1/n)) / (n-1) == 36.428...
offset= 1/(2.0*n) == 0.0714...
One more thing: you shouldn't use GL_LINEAR for the LUT's minification texture filter.
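For n = 7, the formulas above give scale ≈ 36.43 and offset ≈ 0.0714. On the CPU side you could compute the two uniforms like this (a small sketch; the shader would then use val * scale + offset as the lookup coordinate):
// Map index k, stored as k/255.0 in the red channel, onto the center of
// texel k of the LUT, i.e. (k + 0.5) / n.
const float n      = 7.0f;                                      // pixel_lut_num
const float scale  = 255.0f * (1.0f - 1.0f / n) / (n - 1.0f);   // ~36.43
const float offset = 1.0f / (2.0f * n);                         // ~0.0714
// in the fragment shader: lookupCoord = vec2(val * scale + offset, 0.0)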
case 1:
I create the texture by
D3DXCreateTexture(device, width, height, 0, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture)
and update the texture with white color.
D3DLOCKED_RECT lr;
HRESULT hr = texture->LockRect(0, &lr, NULL, 0);
ConvertRGB2RGBA(width, height, pixels, stride, (unsigned char*)(lr.pBits), lr.Pitch);
texture->UnlockRect(0);
The render result is not pure white, though; what I want is pure white over the whole surface.
The z value of all the vertices equals 0.0f.
case 2:
If I create the texture by
D3DXCreateTextureFromFile(device, "e:\\test.bmp", &texture);
and do not update the texture, it displays absolutely correctly.
case 3:
If I create the texture from the file as in case 2 and update it as in case 1, the result is incorrect: some of the test.bmp content still shows through slightly.
conclusion:
There must be something wrong with how I update the texture. What is it?
SOLVED! Change the levels param to 1, then it works:
D3DXCreateTexture(device, width, height, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture)
Congratulations. When the mip-levels argument of D3DXCreateTexture is set to 0, a full mipmap chain is created. If you want to use mipmaps, your function ConvertRGB2RGBA should fill not only the top-level texture but the lower levels as well.
So if you don't need mipmaps, change the levels param to 1 and it works:
D3DXCreateTexture(device, width, height, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture)
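Alternatively, if you do want to keep the full mipmap chain (levels = 0), every level has to be updated, not just level 0, roughly like this (a sketch that fills each level with opaque white; a real version would run ConvertRGB2RGBA per level instead):
// Fill every mip level of the A8R8G8B8 texture, not only level 0.
DWORD levelCount = texture->GetLevelCount();
for (DWORD level = 0; level < levelCount; ++level)
{
    D3DSURFACE_DESC desc;
    texture->GetLevelDesc(level, &desc);

    D3DLOCKED_RECT lr;
    if (SUCCEEDED(texture->LockRect(level, &lr, NULL, 0)))
    {
        for (UINT y = 0; y < desc.Height; ++y)
            memset((unsigned char*)lr.pBits + y * lr.Pitch, 0xFF, desc.Width * 4);
        texture->UnlockRect(level);
    }
}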