I'm planning to use RGBA4444 textures to reduce memory usage on the iPhone, but to my surprise, the created texture always shows up as a black box on screen; the same code works fine on Win10. Here is the code I use to create the RGBA4444 texture:
{
const int texSize = 256;
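// 2 bytes per texel for RGBA4444; filling with 0xFF should produce an opaque white texture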
vector<unsigned char> bytes(texSize * texSize * 2, 0xFF);
GLuint id = 0;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, texSize, texSize, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, &bytes[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
I'm totally at a loss. RGBA4444 is natively supported by the iPhone, right?
Tested on an iPhone 5s and an iPad 3; both show black boxes.
Thank you.
Okay, it turns out that the internalFormat is incorrect:
glTexImage2D(GL_TEXTURE_2D
, 0
, GL_RGBA4 // <---- should be GL_RGBA instead of GL_RGBA4 on iPhone
, texSize
, texSize
, 0
, GL_RGBA
, GL_UNSIGNED_SHORT_4_4_4_4
, &bytes[0])
;
On Win10 it works fine when internalFormat is GL_RGBA4 (GLAD, ES 2.0), but according to this glTexImage2D reference, the acceptable internal formats are GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA, so even on Win10 the correct internalFormat for RGBA4444 should still be GL_RGBA.
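For anyone hitting the same thing, the mistake is easy to catch at upload time. A minimal sketch (plain ES 2.0, nothing assumed beyond the code above):
// corrected upload: in ES 2.0 internalFormat must equal format (GL_RGBA here)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texSize, texSize, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, &bytes[0]);
GLenum uploadErr = glGetError();
if (uploadErr != GL_NO_ERROR)
{
    // with GL_RGBA4 as internalFormat, ES 2.0 drivers report an error here
    printf("glTexImage2D failed: 0x%x\n", uploadErr);
}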
Related
I'm trying to render a camera-captured frame using OpenGL ES 2.0 on iOS, then write the frame into a video file using AVAssetWriter.
So I created a CVPixelBufferRef from an AVAssetWriterInputPixelBufferAdaptor and created a CVOpenGLESTextureRef from the CVPixelBufferRef using the following code:
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
mRenderTextureCache,
mOutputPixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
mOutputDimensions.width,
mOutputDimensions.height,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&mOutputTexture);
if (err != kCVReturnSuccess || !mOutputTexture)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage with output texture %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(mOutputTexture), CVOpenGLESTextureGetName(mOutputTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Then I bind this texture to the framebuffer's color attachment.
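Roughly like this (a minimal sketch; mFramebuffer stands in for my framebuffer object name):
glBindFramebuffer(GL_FRAMEBUFFER, mFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       CVOpenGLESTextureGetTarget(mOutputTexture),
                       CVOpenGLESTextureGetName(mOutputTexture), 0);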
But when I render and capture a GPU frame, I always get an OpenGL error saying my framebuffer color attachment's internal format is not texture-filterable.
So what's wrong with the code? Did I set the texture's internal format wrong?
And the strange thing is that the resulting video is correct.
Also, I noticed that rendering to a texture is a little slower than rendering to a renderbuffer; is that caused by this OpenGL error?
Currently I'm loading a texture with this code:
GLKTextureInfo * t = [GLKTextureLoader textureWithContentsOfFile:path options:@{GLKTextureLoaderGenerateMipmaps: [NSNumber numberWithBool:YES]} error:&error];
But the result is not that good when I scale down the image (jagged edges).
Can I create my own mipmaps using image software like Adobe Illustrator? And what are the rules for doing that?
And how do I load those images in code?
Thanks!
-- Edited --
Thanks for the answer, I got it working using:
GLuint texName;
glGenTextures(1, &texName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// load image data here
...
// set up the mipmap chain by hand: dimensions halve at every level, from 256x256 at level 0 down to 1x1 at level 8
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData0);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA, 128,128, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData1);
...
glTexImage2D(GL_TEXTURE_2D, 8, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData8);
Yes, you can manually make the mipmaps and upload them yourself. If you're using Illustrator, presumably it has some method to output an image at a particular resolution. I'm not that familiar with Illustrator, so I don't know how that part works.
Once you have the various resolutions, you can upload them as levels of your main texture. You upload the original image with glTexImage2D() as usual, and then upload the additional mipmap levels with glTexImage2D() as well, setting the level parameter to the appropriate values. For example:
glTexImage2D (GL_TEXTURE_2D, level, etc...);
where level is the mipmap level for this particular image. Note that you will probably have to set the various texture parameters appropriately for mipmaps, such as:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, <whatever the max is here>);
See the section on mipmaps on this page for details. (Note that GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL don't exist in OpenGL ES 2.0; there you simply supply the complete chain down to 1x1, as in the edit above.)
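Putting it together, the upload can be a simple loop. This is just a sketch; loadImageRGBA() is a hypothetical helper returning the pixels you exported at each size:
// upload a complete mipmap chain; each level is half the size of the previous one
int size = 256;
for (int level = 0; size >= 1; ++level, size /= 2)
{
    const unsigned char *pixels = loadImageRGBA(size); // hypothetical loader
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}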
I'd like to attach a depth texture as a color attachment to the framebuffer. (I'm on iOS and GL_OES_depth_texture is supported.)
So I set up a texture like this:
glGenTextures(1, &TextureName);
glBindTexture(GL_TEXTURE_2D, TextureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, ImageSize.Width, ImageSize.Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, 0);
glGenFramebuffers(1, &ColorFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, ColorFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, TextureName, 0);
But now if I check the framebuffer status I get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
What am I doing wrong here?
I also tried some combinations with GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24_OES, and GL_DEPTH_COMPONENT32_OES, but none of these worked (GL_OES_depth24 is also supported).
You can't. Textures with depth internal formats can only be attached to depth attachments. Textures with color internal formats can only be attached to color attachments.
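In other words, only the matching attachment point is valid. A minimal sketch of what does work with the setup above:
// a depth-format texture may only go on the depth attachment point
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, TextureName, 0);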
As the previous answer mentioned, you cannot attach a texture with a depth format as a color surface. Now, looking at your comment, you're really after rendering to a one-channel float format.
You could look at http://www.khronos.org/registry/gles/extensions/OES/OES_texture_float.txt, which allows you to create textures with float components.
You can then initialize the texture as an alpha map, which has only one channel:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, ImageSize.Width, ImageSize.Height, 0, GL_ALPHA, GL_FLOAT, 0);
This may or may not work depending on what extensions are supported by your device.
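A quick runtime check is worth doing before relying on it; a sketch:
// check the extension string before using GL_FLOAT texture data
const char *exts = (const char *)glGetString(GL_EXTENSIONS);
if (exts != NULL && strstr(exts, "GL_OES_texture_float") != NULL)
{
    // safe to create the GL_ALPHA / GL_FLOAT texture shown above
}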
The following code works without errors on iOS 4, but on iOS 5 the CHECK_GL_ERROR macro prints 0x500 (GL_INVALID_ENUM) after glTexImage2D.
I searched for info on this but did not find anything useful.
GLuint depthTexId = 0;
glGenTextures(1, &depthTexId);
CHECK_GL_ERROR();
glActiveTexture(GL_TEXTURE0);
CHECK_GL_ERROR();
glBindTexture(GL_TEXTURE_2D, depthTexId);
CHECK_GL_ERROR();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
CHECK_GL_ERROR();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
CHECK_GL_ERROR();
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
CHECK_GL_ERROR();
I've tried changing the parameters of glTexImage2D, but with no success.
Why does it behave this way? What is the difference between iOS 4 and iOS 5?
How do I fix this?
I replaced GL_UNSIGNED_BYTE with GL_UNSIGNED_INT, and the code began to work on both iOS 4 and iOS 5.
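That is, the working call is:
// GL_OES_depth_texture accepts GL_UNSIGNED_SHORT or GL_UNSIGNED_INT for depth data, not GL_UNSIGNED_BYTE
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);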
I am working with the following architecture:
OpenGL ES 2 on iOS
Two EAGL contexts with the same ShareGroup
Two threads (server, client = main thread); the server renders stuff to textures, the client displays the textures using simple textured quads.
Additional details on the server thread (working code)
An fbo is created during initialization:
void init(void) {
glGenFramebuffers(1, &fbo);
}
The render loop of the server looks roughly like this:
GLuint loop(void) {
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0,0,width,height);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Framebuffer completeness check omitted
glClear(GL_COLOR_BUFFER_BIT);
// actual drawing code omitted
// the drawing code bound other textures, so..
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, GL_NONE);
glFlush();
return tex;
}
All this works fine so far.
New (buggy) code
Now I want to add multisampling to the server thread using the GL_APPLE_framebuffer_multisample extension, so I modified the initialization code like this:
void init(void) {
glGenFramebuffers(1, &resolve_fbo);
glGenFramebuffers(1, &sample_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
glGenRenderbuffers(1, &sample_colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sample_colorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sample_colorRenderbuffer);
// Framebuffer completeness check (sample_fbo) omitted
glBindRenderbuffer(GL_RENDERBUFFER, GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, GL_NONE);
}
The main loop has been changed to:
GLuint loop(void) {
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
glViewport(0,0,width,height);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glClear(GL_COLOR_BUFFER_BIT);
// actual drawing code omitted
glBindFramebuffer(GL_FRAMEBUFFER, resolve_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Framebuffer completeness check (resolve_fbo) omitted
// resolve multisampling
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolve_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sample_fbo);
glResolveMultisampleFramebufferAPPLE();
// the drawing code bound other textures, so..
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, GL_NONE);
glFlush();
return tex;
}
What I see now is that a texture contains data from multiple loop() calls, blended together. I guess I'm either missing an 'unbind' of some sort, or probably a glFinish() call. (I previously had such a problem at a different point: I set texture data with glTexImage2D() and used it right afterwards, and that required a glFinish() call to force the texture to be updated.)
However, inserting a glFinish() after the drawing code didn't change anything here.
Oh, never mind, such a stupid mistake. I omitted the detail that the loop() method actually contains a for loop and renders multiple textures; the mistake was that I bound the sample fbo only before this loop, so after the first iteration the resolve fbo was still bound.
Moving the fbo binding inside the loop fixed the problem.
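In sketch form (loop body abbreviated; textureCount is just a placeholder for however many textures get rendered):
for (int i = 0; i < textureCount; ++i)
{
    glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo); // was outside the loop before
    // ... render, resolve into resolve_fbo, generate mipmaps ...
}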
Anyway, thanks to all the readers, and sorry for wasting your time :)