Unable to render square in DirectX 9

// this is the function used to render a single frame
void render_frame(void)
{
    init_graphics();

    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(255, 0, 0), 1.0f, 0);
    d3ddev->BeginScene();

    // select which vertex format we are using
    d3ddev->SetFVF(CUSTOMFVF);

    // select the vertex buffer to display
    d3ddev->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));

    // copy the vertex buffer to the back buffer
    d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
    // d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 1);

    d3ddev->EndScene();
    d3ddev->Present(NULL, NULL, NULL, NULL);
}
// this is the function that puts the 3D models into video RAM
void init_graphics(void)
{
    // create the vertices using the CUSTOMVERTEX struct
    CUSTOMVERTEX vertices[] =
    {
        { 100.f, 0.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
        { 300.f, 0.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
        { 300.f, 80.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
        { 100.f, 80.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
    };

    // create a vertex buffer interface called v_buffer
    d3ddev->CreateVertexBuffer(6 * sizeof(CUSTOMVERTEX),
                               0,
                               CUSTOMFVF,
                               D3DPOOL_MANAGED,
                               &v_buffer,
                               NULL);

    VOID* pVoid; // a void pointer

    // lock v_buffer and load the vertices into it
    v_buffer->Lock(0, 0, (void**)&pVoid, 0);
    memcpy(pVoid, vertices, sizeof(vertices));
    v_buffer->Unlock();
}
I can't render a square for some reason. I've searched for an hour but can't find the answer.
https://i.imgur.com/KCKZSrJ.jpg
Does anybody know how to render it? I'm using DirectX 9.
I tried using DrawIndexedPrimitive, but it has the same result.

There are likely a few things going on here:
You do not set a vertex or pixel shader, so you are using the legacy fixed-function render pipeline. This pipeline requires you to set the view/projection matrices with SetTransform. Since you haven't done that, the vertex positions you provide in 'screen space' don't mean what you think they mean. See The Direct3D Transformation Pipeline.
You are not setting the backface culling mode via SetRenderState, so it defaults to D3DCULL_CCW (i.e. cull counter-clockwise-wound triangles). As such, your vertex order results in one of the triangles being rejected. You may want to call SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); while getting started.
You are using TRIANGLESTRIP with only 4 points. You may find it easier to get it correct initially by using TRIANGLELIST and 6 points; a combined sketch follows below.
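Putting those three points together, a minimal sketch of the fixes (assuming the usual d3dx9 helpers and, purely for illustration, a 640x480 backbuffer) could be run once after device creation:
D3DXMATRIX identity, ortho;
D3DXMatrixIdentity(&identity);
// pixel-space orthographic projection so positions like (100, 80) mean pixels
D3DXMatrixOrthoOffCenterLH(&ortho, 0.0f, 640.0f, 0.0f, 480.0f, 0.0f, 1.0f);
d3ddev->SetTransform(D3DTS_WORLD, &identity);
d3ddev->SetTransform(D3DTS_VIEW, &identity);
d3ddev->SetTransform(D3DTS_PROJECTION, &ortho);
// don't reject either triangle while getting started
d3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
// fixed-function lighting would otherwise make the unlit vertices come out black
d3ddev->SetRenderState(D3DRS_LIGHTING, FALSE);
// with 6 vertices in the buffer, two triangles drawn as a list:
// d3ddev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);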

Related

Drawing 2D bitmap in OpenGL ES (iOS)

I've been struggling for hours trying to render a simple 2D bitmap in OpenGL ES (iOS). While in OpenGL I could simply use glDrawPixels, it doesn't exist in OpenGL ES, and neither does glBegin. It seems glVertexPointer is now deprecated too.
(Note: the bitmap I'm rendering is constantly changing at 60 FPS, so glDrawPixels is a better solution than using textures)
I failed to find any documented sample code that draws a bitmap using current APIs.
So, to put it shortly: given an array of pixels (in RGBX format, for example), how do I render it, potentially scaled using nearest neighbor, using OpenGL ES?
The short answer is to render a textured quad and implement a model matrix to perform various transforms (e.g. scaling).
How to render a textured quad
First you'll need to build a VBO with your quad's vertex positions:
float[] positions = {
+0.5f, +0.5f, +0f, // top right
-0.5f, +0.5f, +0f, // top left
+0.5f, -0.5f, +0f, // bottom right
-0.5f, -0.5f, +0f // bottom left
};
int positionVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(positions), GL_STATIC_DRAW);
Then pass the necessary info to your vertex shader:
int positionAttribute = glGetAttribLocation(shader, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, false, 0, 0);
Now we'll do the same thing but with the quad's texture coordinates:
float[] texcoords = {
1f, 0f, // top right
0f, 0f, // top left
1f, 1f, // bottom right
0f, 1f // bottom left
};
int texcoordVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texcoordVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(texcoords), GL_STATIC_DRAW);
int textureAttribute = glGetAttribLocation(shader, "texcoord");
glEnableVertexAttribArray(textureAttribute);
glVertexAttribPointer(textureAttribute, 2, GL_FLOAT, false, 0, 0);
You could interleave this data into a single VBO but I'll leave that to the reader. Regardless we've submitted all the quad vertex data to the GPU and told the shader how to access it.
Next we build our texture buffer assuming we have an object called image:
int texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.getWidth(), image.getHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.getPixels());
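Since the question specifically asks for nearest-neighbor scaling, it is also worth setting the filtering (and, for non-power-of-two sizes on ES 2.0, the wrap mode) while the texture is still bound; a small addition to the above:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);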
And pass that info to the shaders:
int textureUniform = glGetUniformLocation(shader, "image");
glUniform1i(textureUniform, 0);
Check out open.gl's page on Textures for more information.
Finally, the shaders:
vertex.glsl
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 uv;
void main()
{
gl_Position = vec4(position, 1.0);
uv = texcoord;
}
fragment.glsl
varying vec2 uv;
uniform sampler2D image;
void main()
{
gl_FragColor = texture2D(image, uv);
}
Given no other GL state changes, this will render the image as a textured quad (the original answer included a screenshot of the result here).
Note: Since I don't have access to an iOS development environment currently this sample is written in Java. The principle is the same however.
EDIT: How to build the shader program
A shader program is composed of a series of shaders. The bare minimum is a vertex and a fragment shader. This is how we would build a shader program from the two shaders above:
String vertexSource = loadShaderSource("vertex.glsl"); // helper that reads the file into a String
int vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, vertexSource);
glCompileShader(vertexShader);
String fragmentSource = loadShaderSource("fragment.glsl");
int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, fragmentSource);
glCompileShader(fragmentShader);
int shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
Once created you would communicate with it via glVertexAttribPointer and glUniform.
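One thing the snippet above never does is check the compile or link status; a short check (a sketch, assuming LWJGL-style bindings like the rest of this answer) makes broken shaders much easier to diagnose:
if (glGetShaderi(vertexShader, GL_COMPILE_STATUS) == GL_FALSE) {
    // print the driver's compile log for this shader
    System.err.println(glGetShaderInfoLog(vertexShader, 1024));
}
if (glGetProgrami(shaderProgram, GL_LINK_STATUS) == GL_FALSE) {
    System.err.println(glGetProgramInfoLog(shaderProgram, 1024));
}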

DirectX 11 models with even indices not rendered

I have a problem with DirectX 11 rendering: if I try to render more than one model, I only see the models with an odd index. All models rendered with an even index are not visible.
My code is based on the RasterTek tutorials:
m_dx->BeginScene(0.0f, 0.0f, 0.0f, 1.0f);
{
m_camera->Render();
XMMATRIX view;
m_camera->GetViewMatrix(view);
XMMATRIX world;
m_dx->GetWorldMatrix(world);
XMMATRIX projection;
m_dx->GetProjectionMatrix(projection);
XMMATRIX ortho;
m_dx->GetOrthoMatrix(ortho);
world = XMMatrixTranslation(-2, 0, -4);
m_model->Render(m_dx->GetDeviceContext());
m_texture_shader->Render(m_dx->GetDeviceContext(), m_model->GetIndicesCount(), world, view, projection,
m_model->GetTexture());
world = XMMatrixTranslation(2, 0, -2);
m_model->Render(m_dx->GetDeviceContext());
m_texture_shader->Render(m_dx->GetDeviceContext(), m_model->GetIndicesCount(), world, view, projection,
m_model->GetTexture());
world = XMMatrixTranslation(0, 0, -3);
m_model->Render(m_dx->GetDeviceContext());
m_texture_shader->Render(m_dx->GetDeviceContext(), m_model->GetIndicesCount(), world, view, projection,
m_model->GetTexture());
}
m_dx->EndScene();
Model render method:
UINT stride, offset;
stride = sizeof(VertexPosTextureNormal);
offset = 0;
device_context->IASetVertexBuffers(0, 1, &m_vertex_buffer, &stride, &offset);
device_context->IASetIndexBuffer(m_index_buffer, DXGI_FORMAT_R32_UINT, 0);
device_context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
Shader render method:
world = XMMatrixTranspose(world);
view = XMMatrixTranspose(view);
projection = XMMatrixTranspose(projection);
D3D11_MAPPED_SUBRESOURCE mapped_subres;
RETURN_FALSE_IF_FAILED(context->Map(m_matrix_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped_subres));
MatrixBuffer* data = (MatrixBuffer*)mapped_subres.pData;
data->world = world;
data->view = view;
data->projection = projection;
context->Unmap(m_matrix_buffer, 0);
context->VSSetConstantBuffers(0, 1, &m_matrix_buffer);
context->PSSetShaderResources(0, 1, &texture);
// render
context->IASetInputLayout(m_layout);
context->VSSetShader(m_vertex_shader, NULL, 0);
context->PSSetShader(m_pixel_shader, NULL, 0);
context->PSSetSamplers(0, 1, &m_sampler_state);
context->DrawIndexed(indices, 0, 0);
What could be the reason for this?
Thank you.
This code -
world = XMMatrixTranspose(world);
view = XMMatrixTranspose(view);
projection = XMMatrixTranspose(projection);
is transposing the same matrices in place each time it is called, so they only hold the correct values on alternate calls. The world matrix is rebuilt each time in the calling code, but the view and projection matrices get double-transposed and are therefore wrong on alternate draws.
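A minimal fix, as a sketch (keeping the rest of the posted shader render method unchanged), is to transpose into locals instead of overwriting the matrices that are reused across draw calls; passing the matrices by value would have the same effect:
// Transpose into local copies so the caller's view/projection are never modified
// and every draw call uploads correctly transposed matrices.
XMMATRIX worldT = XMMatrixTranspose(world);
XMMATRIX viewT = XMMatrixTranspose(view);
XMMATRIX projectionT = XMMatrixTranspose(projection);
D3D11_MAPPED_SUBRESOURCE mapped_subres;
RETURN_FALSE_IF_FAILED(context->Map(m_matrix_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped_subres));
MatrixBuffer* data = (MatrixBuffer*)mapped_subres.pData;
data->world = worldT;
data->view = viewT;
data->projection = projectionT;
context->Unmap(m_matrix_buffer, 0);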

Render Large Texture To Smaller Renderbuffer

I have a render buffer that is 852x640 and a texture that is 1280x720. When I render the texture, it is getting cropped, not just stretched. I know the aspect ratio needs correcting, but how can I get it so that the full texture displays in the render buffer?
//-------------------------------------
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glGenRenderbuffers(1, &renderBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBufferHandle);
//-------------------------------------
static const GLfloat squareVertices[] = {
-1.0f, 1.0f,
1.0f, 1.0f,
-1.0f, -1.0f,
1.0f, -1.0f
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(#"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glViewport(0, 0, renderBufferWidth, renderBufferHeight); // setting this to 1280x720 fixes the aspect ratio but still crops
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glUseProgram(shaderPrograms[PASSTHROUGH]);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, horizontalFlipTextureCoordinates);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext presentRenderbuffer:GL_RENDERBUFFER];
EDIT
I'm still running into issues. I've included more source. Basically, I need the entire raw input texture to display in wide screen while also writing the raw texture to disk.
When rendering to a smaller texture, things are automatically scaled; is this not the case with a renderbuffer?
I guess I could make another passthrough to a smaller texture, but that would slow things down.
First of all, keep glViewport(0, 0, renderBufferWidth, renderBufferHeight); with 852x640.
The problem is in your squareVertices: it looks like they hold coordinates that represent the texture size. You need to set them relative to the renderbuffer size.
The idea is that the texture is mapped onto your squareVertices rect, so you can render a texture of any size mapped to a rect of any size; the texture image will be scaled to fit the rect.
[Update: square vertices]
In your case it should be:
{
    0.0f, (float)renderBufferHeight/frameHeight,
    (float)renderBufferWidth/frameWidth, (float)renderBufferHeight/frameHeight,
    0.0f, 0.0f,
    (float)renderBufferWidth/frameWidth, 0.0f,
};
But this is not a good solution in general. In theory, the rectangle's size on screen is determined by the vertex positions and the transformation matrix: each vertex is multiplied by the matrix before it is rendered to the screen. It looks like you don't set an OpenGL projection matrix; with a correct orthographic projection, your vertices could use pixel-equivalent positions.
Being new to OpenGL myself, I remember that a texture to be mapped should have power-of-two dimensions,
e.g. an image resolution of 256x256 or 512x512.
You can then scale the image using the gl.glScalef(x, y, z) function as required:
get the height and width and put them into your glScalef call.
Try this; I hope it works.
Try these functions. My answer can be validated against the info at songhoa.ca.com.
glGenFramebuffers()
void glGenFramebuffers(GLsizei n, GLuint* ids)
The first parameter is the number of framebuffers to create; the second is a pointer to a GLuint variable or array in which to store the generated IDs. It returns the IDs of unused framebuffer objects. ID 0 means the default framebuffer, which is the window-system-provided framebuffer.
An FBO may be deleted by calling glDeleteFramebuffers(GLsizei n, const GLuint* ids) when it is no longer used.
glBindFramebuffer()
Once an FBO is created, it has to be bound before it is used:
void glBindFramebuffer(GLenum target, GLuint id)
The first parameter is the target, which should be GL_FRAMEBUFFER; the second parameter is the ID of a framebuffer object.
Once an FBO is bound, all OpenGL framebuffer operations affect the currently bound framebuffer object. The object ID 0 is reserved for the default window-system-provided framebuffer; therefore, to unbind the current FBO, bind ID 0 in glBindFramebuffer().
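A minimal usage sketch of the two calls described above (the names here are illustrative, not from the question's code):
GLuint fbo;
glGenFramebuffers(1, &fbo);              // create one framebuffer object ID
glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // framebuffer operations now target fbo
// ... attach a renderbuffer or texture, then render ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);    // back to the default framebuffer
glDeleteFramebuffers(1, &fbo);           // delete it when no longer needed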
Try using those, or at least visit the link, which could help you a lot. Sorry, I'm not experienced in OpenGL, but I wanted to contribute the link and explain the two functions; I think you can use the info to write your code.
Oh boy, so the answer is that this was working all along ;) It turns out the high-resolution preset mode on the iPhone 4 actually covers less area than the medium-resolution preset. This threw me for a loop until Brigadir suggested what I should have done first all along: check the GPU snapshots.
I figured out the aspect ratio issue too by hacking the appropriate code in the GPUImage framework. https://github.com/bradLarson/GPUImage

GLKBaseEffect not loading texture (texture appears black on object)

I'm using GLKit in an OpenGL project. Everything is based on GLKView and GLKBaseEffect (no custom shaders). In my project I have several views that contain GLKViews for showing 3D objects, and occasionally several of those views can be "open" at once (i.e. in the modal view stack).
While everything has been working great until now, in a new view I was creating I needed a textured rectangle to simulate a measuring tape for the 3D world of my app. For some unknown reason, in that view only, the texture isn't loaded correctly into the OpenGL context: GLKTextureLoader loads the texture fine, but when drawing, the rectangle is black, and looking at the OpenGL frame in the debugger I can see that an empty texture is bound (there's a reference to a texture, but it's all zeroed out or null).
The shape I'm drawing is defined by the following (it was originally a triangle strip, but I switched to a triangle list to make sure that's not the issue):
static const GLfloat initTape[] = {
-TAPE_WIDTH / 2.0f, 0, 0,
TAPE_WIDTH / 2.0f, 0, 0,
-TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
TAPE_WIDTH / 2.0f, 0, 0,
TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
-TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
};
static const GLfloat initTapeTex[] = {
0, 0,
1, 0,
0, 1.0,
1, 0,
1, 1,
0, 1,
};
I set the effect variable as:
effect.transform.modelviewMatrix = modelview;
effect.light0.enabled = GL_FALSE;
// Projection setup
GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(self.fov), ratio, 0.1f, 1000.0f);
effect.transform.projectionMatrix = projection;
// Set the color of the wireframe.
if (tapeTex == nil) {
NSError* error;
tapeTex = [GLKTextureLoader textureWithContentsOfFile:[[[NSBundle mainBundle] URLForResource:@"ruler_texture" withExtension:@"png"] path] options:nil error:&error];
}
effect.texture2d0.enabled = GL_TRUE;
effect.texture2d0.target = GLKTextureTarget2D;
effect.texture2d0.envMode = GLKTextureEnvModeReplace;
effect.texture2d0.name = tapeTex.name;
And the rendering loop is:
[effect prepareToDraw];
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, COORDS, GL_FLOAT, GL_FALSE, 0, tapeVerts);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, tapeTexCoord);
glDrawArrays(GL_TRIANGLES, 0, TAPE_VERTS);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
I've also tested the texture itself in another view with other objects and it works fine, so it's not the texture file fault.
Any help would be greatly appreciated, as I'm stuck on this issue for over 3 days.
Update: Also, there are no glErrors during the rendering loop.
After many, many days I've finally found my mistake: when using multiple OpenGL contexts, it's important to create the GLKTextureLoader with a sharegroup, or else the textures aren't necessarily loaded into the right context.
Instead of using the class method textureWithContentsOfFile:, every context needs its own GLKTextureLoader initialized with context.sharegroup, and only that texture loader should be used for that view. (Textures can actually be shared between different contexts, but I didn't need that feature of sharegroups.)
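A rough sketch of what that looks like (assuming the view's EAGLContext is reachable as self.context and reusing the ruler_texture asset; note the instance methods load asynchronously, unlike the class method above):
GLKTextureLoader *loader =
    [[GLKTextureLoader alloc] initWithSharegroup:self.context.sharegroup];
NSString *path = [[NSBundle mainBundle] pathForResource:@"ruler_texture" ofType:@"png"];
[loader textureWithContentsOfFile:path
                          options:nil
                            queue:NULL // completion block runs on the main queue
                completionHandler:^(GLKTextureInfo *tex, NSError *error) {
                    // keep the texture info; its .name is what goes into
                    // effect.texture2d0.name before drawing
                    tapeTex = tex;
                }];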
Easy tutorial http://games.ianterrell.com/how-to-texturize-objects-with-glkit/
I think it will help you.

glGetAttribLocation/glGetUniformLocation returns 0, causing EXC_BAD_ACCESS on glDrawArrays call (iOS)

I am creating a simple OpenGL ES 2.0 application for iOS, and I get EXC_BAD_ACCESS whenever I call glDrawArrays. I found that this was occurring when I had previously called glEnableVertexAttribArray for my two attributes (position and color), and then found that glGetAttribLocation was returning 1 for position and 0 for color, and that glGetUniformLocation was returning 0 for my MVP matrix. I am not sure if 0 is a valid value, or why glEnableVertexAttribArray appears to be causing EXC_BAD_ACCESS when glDrawArrays is called.
Here is my code:
compileShaders function:
-(void)compileShaders {
GLuint vertShader = [self compileShader:@"Shader" ofType:GL_VERTEX_SHADER];
GLuint fragShader = [self compileShader:@"Shader" ofType:GL_FRAGMENT_SHADER];
GLuint program = glCreateProgram();
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
glLinkProgram(program);
GLint success;
glGetProgramiv(program, GL_LINK_STATUS, &success);
if (success == GL_FALSE) {
GLchar messages[256];
glGetProgramInfoLog(program, sizeof(messages), 0, &messages[0]);
NSLog(#"%#", [NSString stringWithUTF8String:messages]);
exit(1);
}
glUseProgram(program);
_positionSlot = glGetAttribLocation(program, "position");
_colorSlot = glGetAttribLocation(program, "color"); //Returns 0
_mvpSlot = glGetUniformLocation(program, "MVP"); //Returns 0
if (!_positionSlot || !_colorSlot || !_mvpSlot) {
NSLog(#"Failed to retrieve the locations of the shader variables:\n Position:%i\n Color:%i\n MVP:%i", _positionSlot, _colorSlot, _mvpSlot); //Prints out the values of 1, 0, 0
}
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
My render function:
-(void)render:(CADisplayLink *)displayLink {
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
GLfloat mvp[16] = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f,
};
glUniformMatrix4fv(_mvpSlot, 1, 0, mvp);
glViewport(0, 0, width, height);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(_colorSlot, 4, GL_UNSIGNED_BYTE, GL_FALSE, 0, colors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
And my vertex shader:
attribute vec4 position;
attribute vec4 color;
uniform mat4 MVP;
varying vec4 v_color;
void main(void) {
gl_Position = MVP * position;
v_color = color;
}
0 is a valid value; -1 is the "error" or "not found" value. The space for uniform locations and attribute locations is separate, so it's fine to have both a uniform and an attribute at location 0.
You should post the vertices and colors arrays so their sizes can be checked; a mismatch there is a likely cause of the crash.
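As a sketch of what the location check could look like instead (only -1 indicates a missing variable; the slot names are the ones from the question):
if (_positionSlot == -1 || _colorSlot == -1 || _mvpSlot == -1) {
    NSLog(@"Failed to retrieve shader variable locations: position=%i color=%i MVP=%i",
          _positionSlot, _colorSlot, _mvpSlot);
}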
I have the same issue and no solution; however, I have tried to debug it extensively, so I thought it might be helpful to share.
I traced the issue down to the glCreateProgram() and glCreateShader() calls. They both return zero, which is the error return. However, in both cases glGetError() returns GL_NO_ERROR. In fact, no error pops up all the way through the compile and link chain following the create calls.
I have the same symptom that glGetAttribLocation and the uniform lookups always return zero (rather than -1 for error!). But I can even feed in broken shader files and it will not lead to a compile error or any log output. The source string was checked to be fed correctly into the compile.
I am completely dumbfounded by the behavior, because glCreateProgram clearly fails but doesn't set an error, and it is not at all clear to me why it fails. What is especially confusing is that a whole chain of GL function calls doesn't return errors. I get my first error when the program calls a GL function that operates on a uniform, such as glUniformMatrix4fv() or glUniform1i(). But clearly things are broken well before then, because all uniforms return as 0 even though they cannot all be zero.
Edit: I found the solution in my case. OpenGL ES seems to be quite flexible about when framebuffers are set up, but I needed to force framebuffer setup before starting with shaders to get rid of the issue. E.g. if one follows the old EAGLView template, framebuffers are created in layoutSubviews; however, that is too late for a typical initialization of shaders in, say, initWithCoder. Otherwise the symptoms are as described above.
