I am trying to draw multiple triangles using OpenGL ES on iOS. I create a vertex array of float values with the following structure
{x, y, z, r, g, b, a}
for each vertex. Final array for one triangle is:
{x1, y1, z1, r1, g1, b1, a1, x2, y2, z2, r2, g2, b2, a2, x3, y3, z3,
r3, g3, b3, a3}
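For clarity, this interleaved layout corresponds to a C struct roughly like the following (the struct and field names are only illustrative):
// Illustrative struct matching the {x, y, z, r, g, b, a} layout above.
typedef struct {
    GLfloat position[3]; // x, y, z
    GLfloat color[4];    // r, g, b, a
} ColoredVertex;
// 7 floats per vertex, so the stride for glVertexAttribPointer is
// 7 * sizeof(GLfloat), i.e. sizeof(ColoredVertex).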
Here is my update method:
-(void)update {
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 1.0, 100.0);
self.effect.transform.projectionMatrix = projectionMatrix;
}
and render:
-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
[self drawShapes]; // here I fill vertices array
glClearColor(0.65f, 0.65f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int numItems = 3 * trianglesCount;
glBindVertexArrayOES(vao);
[self.effect prepareToDraw];
glUseProgram(shaderProgram);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize, convertedVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, false, stride, 0);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, false, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(GLKVertexAttribColor);
glDrawArrays(GL_TRIANGLES, 0, numItems);
}
Context setup. Here I bind my vertex array and generate vertex buffer:
-(void)setupContext
{
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if(!self.context) {
NSLog(@"Failed to create OpenGL ES Context");
}
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
[EAGLContext setCurrentContext:self.context];
self.effect = [[GLKBaseEffect alloc] init];
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glGenVertexArraysOES(1, &vao);
glBindVertexArrayOES(vao);
glGenBuffers(1, &vbo);
}
Fragment and vertex shaders are pretty simple:
//fragment
varying lowp vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
//vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
varying lowp vec4 vColor;
void main(void) {
gl_Position = vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
Result: the triangles aren't shown.
Where is the mistake? I guess the problem is with the projection matrix. Here is the GitHub link to the Xcode project.
Downloaded your code and tried it out. I see the purplish screen and no triangles, so I'm guessing that's the problem. I see two things that could be the problem:
1) You'll need to pass glBufferData the total number of bytes you're sending it, like this: glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize * numItems, convertedVerts, GL_DYNAMIC_DRAW);. Any data related to how to chunk the data stays in glVertexAttribPointer. (A combined sketch is at the end of this answer.)
2) That doesn't seem to be the only thing, since I still can't get triangles to show up. I've never used GLKit before (I just have a little experience with OpenGL on the desktop). That said, if I replace GLKVertexAttribPosition and GLKVertexAttribColor with 0 and 1 respectively, and apply the glBufferData fix from 1, I see artifacts flashing on the simulator screen when I move the mouse. So there's got to be something fishy with those enum values and glVertexAttribPointer.
Edit - clarification for 2:
After changing the glBufferData line as described in 1, I also modified the glEnableVertexAttribArray lines so they looked like this:
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, false, stride, 0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, false, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
After both of those changes I can see red triangles flickering on the screen. A step closer, since I couldn't see anything before. But I haven't been able to figure it out any further than that :(
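For what it's worth, here is a rough sketch of the render body with both changes applied (assuming 7 floats per vertex, and that vertexPositionAttribute and vertexColorAttribute hold the attribute locations actually used by your shader program):
int numItems = 3 * trianglesCount;
GLsizei stride = 7 * sizeof(float); // x, y, z, r, g, b, a
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Total size in bytes = bytes per vertex * number of vertices.
glBufferData(GL_ARRAY_BUFFER, stride * numItems, convertedVerts, GL_DYNAMIC_DRAW);
// Use the same location for glVertexAttribPointer and glEnableVertexAttribArray.
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, GL_FALSE, stride, 0);
glEnableVertexAttribArray(vertexPositionAttribute);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, GL_FALSE, stride, (GLvoid *)(3 * sizeof(float)));
glEnableVertexAttribArray(vertexColorAttribute);
glDrawArrays(GL_TRIANGLES, 0, numItems);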
I've been struggling for hours trying to render a simple 2D bitmap in OpenGL ES (iOS). While in OpenGL I could simply use glDrawPixels, it doesn't exist in OpenGL ES, neither does glBegin. Seems like glVertexPointer is now deprecated too.
(Note: the bitmap I'm rendering is constantly changing at 60 FPS, so glDrawPixels is a better solution than using textures)
I failed to find any documented sample code that draws a bitmap using current APIs.
So to put it shortly: given an array of pixels (in RGBX format, for example), how do I render it, potentially scaled using nearest neighbor, using OpenGL ES?
The short answer is to render a textured quad and implement a model matrix to perform various transforms (e.g. scaling).
How to render a textured quad
First you'll need to build a VBO with your quad's vertex positions:
float[] positions = {
+0.5f, +0.5f, +0f, // top right
-0.5f, +0.5f, +0f, // top left
+0.5f, -0.5f, +0f, // bottom right
-0.5f, -0.5f, +0f // bottom left
};
int positionVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(positions), GL_STATIC_DRAW);
Then pass the necessary info to your vertex shader:
int positionAttribute = glGetAttribLocation(shader, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, false, 0, 0);
Now we'll do the same thing but with the quad's texture coordinates:
float[] texcoords = {
1f, 0f, // top right
0f, 0f, // top left
1f, 1f, // bottom right
0f, 1f // bottom left
};
int texcoordVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texcoordVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(texcoords), GL_STATIC_DRAW);
int textureAttribute = glGetAttribLocation(shader, "texcoord");
glEnableVertexAttribArray(textureAttribute);
glVertexAttribPointer(textureAttribute, 2, GL_FLOAT, false, 0, 0);
You could interleave this data into a single VBO, but I'll leave that to the reader (a rough sketch follows below). Regardless, we've submitted all the quad vertex data to the GPU and told the shader how to access it.
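Since the question targets iOS, a rough C sketch of that interleaving might look like this (names are illustrative; positionAttribute and textureAttribute are the locations queried above):
// Interleaved quad: 3 position floats followed by 2 texcoord floats per vertex.
static const GLfloat quadVertices[] = {
    //  x      y     z     u     v
     0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // top right
    -0.5f,  0.5f, 0.0f, 0.0f, 0.0f, // top left
     0.5f, -0.5f, 0.0f, 1.0f, 1.0f, // bottom right
    -0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // bottom left
};
GLuint quadVBO;
glGenBuffers(1, &quadVBO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);
GLsizei stride = 5 * sizeof(GLfloat);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, stride, (const GLvoid *)0);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(textureAttribute, 2, GL_FLOAT, GL_FALSE, stride, (const GLvoid *)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(textureAttribute);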
Next we create our texture, assuming we have an object called image:
int texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.getWidth(), image.getHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.getPixels());
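Note that with the GL defaults (a mipmapped minification filter) and no mipmaps generated, this texture will be incomplete and sample as black, so you'd normally also set the filtering and wrap modes right after glTexImage2D, e.g.:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // nearest neighbor, as requested
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);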
And pass that info to the shaders:
int textureUniform = glGetUniformLocation(shader, "image");
glUniform1i(textureUniform, 0);
Check out open.gl's page on Textures for more information.
Finally, the shaders:
vertex.glsl
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 uv;
void main()
{
gl_Position = vec4(position, 1.0);
uv = texcoord;
}
fragment.glsl
varying vec2 uv;
uniform sampler2D image;
void main()
{
gl_FragColor = texture2D(image, uv);
}
Given no other GL state changes, this will render the image mapped onto the quad.
Note: Since I don't have access to an iOS development environment currently this sample is written in Java. The principle is the same however.
EDIT: How to build the shader program
A shader program is composed of a set of shaders; the bare minimum is a vertex shader and a fragment shader. This is how we would build a shader program from the two shaders above:
String vertexSource = loadShaderSource("vertex.glsl");
int vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, vertexSource);
glCompileShader(vertexShader);
String fragmentSource = loadShaderSource("fragment.glsl");
int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, fragmentSource);
glCompileShader(fragmentShader);
int shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
Once created you would communicate with it via glVertexAttribPointer and glUniform.
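The snippets above skip error checking; in C (matching the iOS code elsewhere in this thread), a minimal sketch of verifying the link status looks like this, and the same pattern with glGetShaderiv/GL_COMPILE_STATUS applies after each glCompileShader:
GLint linked = 0;
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &linked);
if (!linked) {
    GLchar log[512];
    glGetProgramInfoLog(shaderProgram, sizeof(log), NULL, log);
    // Inspect 'log' to see why linking failed.
}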
So I am currently learning OpenGL ES 2.0 with iOS and working on a maze game. The maze is randomly generated (so not a loaded model) and my struggle is texturing the walls and floor of the maze. My approach is to just treat the maze as a series of cubes, and I have code that draws the individual faces of a cube separately (so I can create a path by simply leaving some faces out).
Using capture GPU frame, I have confirmed that the texture is indeed loading in correctly, the data in the frame buffers is correct and that I'm not getting any errors. I can see my other lighting effects (so the face isn't completely black), but no texture appears.
Here is how I've defined my cube faces
GLfloat rightCubeVertexData[] =
{
0.5f, -0.5f, -0.5f,
0.5f, -0.5f, 0.5f,
0.5f, 0.5f, -0.5f,
0.5f, 0.5f, -0.5f,
0.5f, -0.5f, 0.5f,
0.5f, 0.5f, 0.5f,
};
GLfloat rightCubeNormalData[] =
{
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
};
GLfloat rightCubeTexCoords[] =
{
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0,
};
The other faces are defined essentially the same way (except they are in one array each, splitting up the position, normals, and tex coords was just something I tried; I'm just trying to get one face to texture and then I'll expand to the rest).
Here is how I load the data into the buffer
glGenVertexArraysOES(1, &_rightVertexArray);
glBindVertexArrayOES(_rightVertexArray);
glGenBuffers(3, _rightVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeVertexData), rightCubeVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeNormalData), rightCubeNormalData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeTexCoords), rightCubeTexCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
Again, using three buffers was an experiment, the rest are defined in one buffer with an offset.
Here is how I load textures
crateTexture = [self setupTexture:@"crate.jpg"];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, crateTexture);
glUniform1i(uniforms[UNIFORM_TEXTURE], 0);
// Load in and set up texture image (adapted from Ray Wenderlich)
- (GLuint)setupTexture:(NSString *)fileName
{
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
if (!spriteImage) {
NSLog(@"Failed to load image %@", fileName);
exit(1);
}
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte *spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texName;
}
Then, at the appropriate time, I simply call glDrawArrays to draw the face. I am completely stumped on this, and it is probably a very silly error, but any help anybody could provide would be much appreciated.
P.S. Here is my fragment shader
varying vec3 eyeNormal;
varying vec4 eyePos;
varying vec2 texCoordOut;
uniform sampler2D texture;
uniform vec3 flashlightPosition;
uniform vec3 diffuseLightPosition;
uniform vec4 diffuseComponent;
uniform float shininess;
uniform vec4 specularComponent;
uniform vec4 ambientComponent;
void main()
{
vec4 ambient = ambientComponent;
vec3 N = normalize(eyeNormal);
float nDotVP = max(0.0, dot(N, normalize(diffuseLightPosition)));
vec4 diffuse = diffuseComponent * nDotVP;
vec3 E = normalize(-eyePos.xyz);
vec3 L = normalize(flashlightPosition - eyePos.xyz);
vec3 H = normalize(L+E);
float Ks = pow(max(dot(N, H), 0.0), shininess);
vec4 specular = Ks*specularComponent;
if( dot(L, N) < 0.0 ) {
specular = vec4(0.0, 0.0, 0.0, 1.0);
}
gl_FragColor = (ambient + diffuse + specular) * texture2D(texture, texCoordOut);
//gl_FragColor = ambient + diffuse + specular;
gl_FragColor.a = 1.0;
}
And yes, all the uniform names are correct and correspond to something in the main code.
EDIT: Here is the vertex shader
precision mediump float;
attribute vec4 position;
attribute vec3 normal;
attribute vec2 texCoordIn;
varying vec3 eyeNormal;
varying vec4 eyePos;
varying vec2 texCoordOut;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
void main()
{
eyeNormal = (normalMatrix * normal);
eyePos = modelViewMatrix * position;
texCoordOut = texCoordIn;
gl_Position = modelViewProjectionMatrix * position;
}
To sum up the procedure worked out in the comments...
There is much that can go wrong when dealing with textures and it is good to know how to pinpoint where the issue is.
What to be careful with for the texture itself:
Check if you did set the parameters such as
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Check that you set the sampler uniform with glUniform1i(uniformLocation, 0), where the last parameter is the texture unit (the one made active via glActiveTexture) and not the texture ID.
Other checks include whether the uniform name is correct and whether the texture is bound. If possible, also inspect in the debugger whether the texture is properly loaded.
Besides that, there is a good chance your texture coordinates are messed up; this is a very common issue. To debug it, replace the color fetched from the texture in your fragment shader with the texture coordinate itself, e.g. replace texture2D(texture, texCoordOut) with vec4(texCoordOut.x, texCoordOut.y, .0, 1.0). Since the texture coordinates should be in the range [0,1], you should see nice gradients between red and green in your scene. If you do not see them, your texture coordinates are messed up: if everything is black, your coordinates are all zero; if most of it is yellow, your coordinates are most likely too large.
In your case the texture-coordinate output was all black, which means you were always sampling the first pixel of the texture and thus getting a constant color in your scene. What to check at this point is:
Are the coordinates you push to the GPU correct?
Is the pointer set correctly, i.e. glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0)) (check all the parameters)?
Is the attribute enabled with glEnableVertexAttribArray(GLKVertexAttribTexCoord0)?
Is the attribute bound to the shader program when it is compiled and linked?
In your case you had forgotten to bind the texture coordinate attribute, which is quite easy to miss.
From the information given it was impossible to find this mistake directly, but note the procedure above for pinpointing where the issue actually lies; it may come in handy in the future as well.
Turns out I had forgotten to bind the attribute location when compiling the shader. Needed to add the line
glBindAttribLocation(_program, GLKVertexAttribTexCoord0, "texCoordIn");
to the load shaders method.
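For anyone hitting the same thing: glBindAttribLocation only takes effect when the program is linked, so it has to be called before glLinkProgram. A minimal sketch of that ordering (the vertShader/fragShader names are just for context):
glAttachShader(_program, vertShader);
glAttachShader(_program, fragShader);
// Bind attribute locations *before* linking; calls made after
// glLinkProgram have no effect until the program is relinked.
glBindAttribLocation(_program, GLKVertexAttribPosition, "position");
glBindAttribLocation(_program, GLKVertexAttribNormal, "normal");
glBindAttribLocation(_program, GLKVertexAttribTexCoord0, "texCoordIn");
glLinkProgram(_program);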
My vertexShader:
attribute vec4 vertexPosition;
attribute vec2 vertexTexCoord;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
gl_Position = modelViewProjectionMatrix * vertexPosition;
texCoord = vertexTexCoord;
}
My fragmentShader:
precision mediump float;
varying vec2 texCoord;
uniform sampler2D texSampler2D;
void main()
{
gl_FragColor = texture2D(texSampler2D, texCoord);
}
Init Shader:
if (shader2D == nil) {
shader2D = [[Shader2D alloc] init];
shader2D.shaderProgramID = [ShaderUtils compileShaders:vertexShader2d :fragmentShader2d];
if (0 < shader2D.shaderProgramID) {
shader2D.vertexHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexPosition");
shader2D.textureCoordHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexTexCoord");
shader2D.mvpMatrixHandle = glGetUniformLocation(shader2D.shaderProgramID, "modelViewProjectionMatrix");
shader2D.texSampler2DHandle = glGetUniformLocation(shader2D.shaderProgramID,"texSampler2D");
}
else {
NSLog(@"Could not initialise shader2D");
}
}
return shader2D;
Rendering:
GLKMatrix4 mvpMatrix;
mvpMatrix = [self position: position];
mvpMatrix = GLKMatrix4Multiply([QCARutils getInstance].projectionMatrix, mvpMatrix);
glUseProgram(shader.shaderProgramID);
glVertexAttribPointer(shader.vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)vertices);
glVertexAttribPointer(shader.textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)texCoords);
glEnableVertexAttribArray(shader.vertexHandle);
glEnableVertexAttribArray(shader.textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [texture textureID]);
glUniformMatrix4fv(shader.mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&mvpMatrix);
glUniform1i(shader.texSampler2DHandle, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)indices);
glDisableVertexAttribArray(shader.vertexHandle);
glDisableVertexAttribArray(shader.textureCoordHandle);
It seems to work properly when each texture coordinate corresponds to one and only one vertex coordinate (number of texCoords == number of vertices).
My question: does OpenGL assign a texture coordinate to one and only one vertex? In other words, when texture coordinates and vertex coordinates are not in one-to-one correspondence, what will the rendering result be?
Yes, there needs to be a one-to-one correspondence between vertices and texCoords -- all information passed down the OpenGL pipeline is per-vertex, so every normal and every texCoord must have a vertex.
Note, however, that you can (and will often need to) have multiple texCoords, normals, or other per-vertex data for the same point in space: e.g. if you're wrapping a texture map around a sphere, there will be a "seam" where the ends of the rectangular texture meet. At those spots you'll need to have multiple vertices that occupy the same point.
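As a tiny illustration, a seam vertex duplicated with two different texture coordinates could look like this (interleaved x, y, z, u, v; the numbers are purely illustrative):
// The same point in space appears twice, once for each side of the seam:
GLfloat seamVertices[] = {
    // x     y     z     u     v
    0.0f, 1.0f, 0.0f, 1.0f, 0.5f, // end of the texture's right edge
    0.0f, 1.0f, 0.0f, 0.0f, 0.5f, // start of the texture's left edge
};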
If anyone can shed light on what's going wrong here, perhaps a misordering of gl commands or some other incompatible command sequence, I would be tremendously grateful for your assistance. I have been trying to get this code working all day with no success, despite much Google research and poring over examples in "OpenGL ES 2.0 Programming Guide".
I'm trying to use a Vertex Buffer Object and custom shaders in OpenGL ES 2.0 on iPhone. I am attempting to interleave vertex data from a series of custom structures of the following type:
typedef struct {
float x, y; // Position.
float radius;
float colR,colG,colB,colA; // Color rgba components.
} VType;
The position, radius and color bytes are to be considered for vertex location, pointsize and color respectively. Ids for these are initialised:
ID_ATT_Pos = 0;
ID_ATT_Radius = 1;
ID_ATT_Color = 2;
// Note: I have also tried values of 1,2,3 but no difference.
The stride for these is specified in each glVertexAttribPointer call.
It is intended that each vertex be drawn at its x,y position with the specified color and a pointsize of its radius. Associated with each aforementioned attribute is a vertex shader attribute, these are "a_position","a_color" and "a_radius". Here are the vertex and fragment shaders:
VertexShader.txt
attribute vec2 a_position;
attribute vec4 a_color;
attribute float a_radius;
varying vec4 v_color;
void main()
{
gl_Position = vec4(a_position,0.0,1.0);
gl_PointSize = a_radius;
v_color = a_color;
}
FragmentShader.txt
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
void main()
{
gl_FragColor = vec4(1.0,0.0,0.0,1.0);
}
I wonder if a projection matrix is required in the vertex shader? All the points I create are 2D in the dimensions of the iPhone screen, and as you can see they each have 'z,w' appended as '0.0,1.0' in the vertex shader above.
Remaining core code to setup the VBO and render using glDrawElements is listed below. When running this code it is visibly apparent that the glClear has been successful and indeed NSLog print confirms this, however no VType vertices are drawn in the viewport by the DrawFrame code listed below. The vertex coordinates are well within screen dimensions e.g. x,y: (92, 454).
Note that any undeclared variables in the following code are class properties, and of appropriate type so e.g. 'vao' is GLuint, 'vbos' is GLuint[2], 'program' is a GLuint program handle. I have also left out the boilerplate OpenGL setup code, which has been tested with different code internals and shown to work.
Load Shader Code
-(GLuint)loadShaderType:(GLenum)type From:(NSString*)shaderFile {
GLuint shader;
GLint compiled;
// Create and compile vertex shader.
NSString *filepath = [[NSBundle mainBundle] pathForResource:shaderFile ofType:@"txt"];
const GLchar *shaderSrc = (GLchar *)[[NSString stringWithContentsOfFile:filepath encoding:NSUTF8StringEncoding error:nil] UTF8String];
if (!shaderSrc) {
NSLog(@"Failed to load vertex shader");
return 0;
}
// Create shader object.
shader = glCreateShader(type);
if (shader == 0) return 0;
// Load shader source.
glShaderSource(shader, 1, &shaderSrc, NULL);
// Compile shader.
glCompileShader(shader);
// Check compile status.
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
GLint infoLen = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 1) {
char * infoLog = (char*)malloc(sizeof(char)*infoLen);
glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
NSLog(@"Error compiling shader:\n%s\n",infoLog);
free(infoLog);
}
glDeleteShader(shader);
return 0;
}
return shader;
}
Initialisation Code
GLfloat screenHeight = [[UIScreen mainScreen] bounds].size.height;
GLfloat screenWidth = [[UIScreen mainScreen] bounds].size.width;
glViewport(0, 0, screenWidth, screenHeight);
glGenVertexArraysOES(1, &vao);
glBindVertexArrayOES(vao);
// Generate buffer, bind to use now, set initial data.
glGenBuffers(2, vbos);
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glBufferData(GL_ARRAY_BUFFER, vxBufSize, squidVxs, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbos[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ixBufSize, squidIxs, GL_STATIC_DRAW);
glEnableVertexAttribArray(ID_ATT_Pos); // Pos
glVertexAttribPointer(ID_ATT_Pos, 2, GL_FLOAT, GL_FALSE, sizeof(VType), BUFFER_OFFSET(0));
glEnableVertexAttribArray(ID_ATT_Radius);// Radius
glVertexAttribPointer(ID_ATT_Radius, 1, GL_FLOAT, GL_FALSE, sizeof(VType), BUFFER_OFFSET(sizeof(float)*2));
glEnableVertexAttribArray(ID_ATT_Color);// Color
glVertexAttribPointer(ID_ATT_Color, 4, GL_FLOAT, GL_FALSE, sizeof(VType), BUFFER_OFFSET(sizeof(float)*3));
GLuint shaders[2];
shaders[0] = [self loadShaderType:GL_VERTEX_SHADER From:@"VertexShader"];
shaders[1] = [self loadShaderType:GL_FRAGMENT_SHADER From:@"FragmentShader"];
program = glCreateProgram();
glAttachShader(program, shaders[0]);
glAttachShader(program, shaders[1]);
glBindAttribLocation(program, ID_ATT_Pos, "a_position");
glBindAttribLocation(program, ID_ATT_Radius, "a_radius");
glBindAttribLocation(program, ID_ATT_Color, "a_color");
glLinkProgram(program);
GLint linked;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (!linked) {
GLint infoLen = 0;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 1) {
char* infoLog = (char*)malloc(sizeof(char)*infoLen);
glGetProgramInfoLog(program, infoLen, NULL, infoLog);
NSLog(@"Error linking program:\n%s\n",infoLog);
free(infoLog);
}
glDeleteProgram(program);
}
DrawFrame Code
// Note: Framebuffer swapping is taken care of before/after these
// lines, and has been shown to work.
glClearColor(0.33f, 0.0f, 0.33f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(program);
glDrawElements(GL_POINTS, numPoints, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
Let me know if any other information is needed, thank you for your time.
You do not necessarily need a projection matrix for your 2d vertices, but you definitely need to transform all 3 coordinates into the [-1,1] range (which you have already done for z by setting it to 0). These coordinates are then transformed by GL with the current viewport transformation (that should usually match your screen dimensions). So if the coordinates are in screen dimensions, then transform them into [-1,1] in the shader, or just use [-1,1] coordinates in the application (which would also be more resolution agnostic).
EDIT: And I don't know how OpenGL ES handles this, but in desktop GL (at least up to 2.1) you need to call glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) for the vertex shader to be able to change the point size.
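A minimal sketch of the screen-to-[-1,1] conversion mentioned above, done on the CPU before filling the VBO (assuming x and y are in screen units with the origin at the top-left, and screenWidth/screenHeight as in your initialisation code):
// Map screen coordinates into the [-1, 1] clip-space range.
float ndcX = (x / screenWidth) * 2.0f - 1.0f;
float ndcY = 1.0f - (y / screenHeight) * 2.0f; // flip Y: GL's +Y points up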
I am creating a simple OpenGL ES 2.0 application for iOS, and I get an EXC_BAD_ACCESS crash whenever I call glDrawArrays. I found that this was occurring when I had previously called glEnableVertexAttribArray for my two attributes (position and color), and then found that glGetAttribLocation was returning 1 for position and 0 for color, and that glGetUniformLocation was returning 0 for my MVP matrix. I am not sure if 0 is a valid value, or why glEnableVertexAttribArray appears to be causing EXC_BAD_ACCESS when glDrawArrays is called.
Here is my code:
compileShaders function:
-(void)compileShaders {
GLuint vertShader = [self compileShader:#"Shader" ofType:GL_VERTEX_SHADER];
GLuint fragShader = [self compileShader:#"Shader" ofType:GL_FRAGMENT_SHADER];
GLuint program = glCreateProgram();
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
glLinkProgram(program);
GLint success;
glGetProgramiv(program, GL_LINK_STATUS, &success);
if (success == GL_FALSE) {
GLchar messages[256];
glGetProgramInfoLog(program, sizeof(messages), 0, &messages[0]);
NSLog(@"%@", [NSString stringWithUTF8String:messages]);
exit(1);
}
glUseProgram(program);
_positionSlot = glGetAttribLocation(program, "position");
_colorSlot = glGetAttribLocation(program, "color"); //Returns 0
_mvpSlot = glGetUniformLocation(program, "MVP"); //Returns 0
if (!_positionSlot || !_colorSlot || !_mvpSlot) {
NSLog(@"Failed to retrieve the locations of the shader variables:\n Position:%i\n Color:%i\n MVP:%i", _positionSlot, _colorSlot, _mvpSlot); //Prints out the values of 1, 0, 0
}
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
}
My render function:
-(void)render:(CADisplayLink *)displayLink {
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
GLfloat mvp[16] = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f,
};
glUniformMatrix4fv(_mvpSlot, 1, 0, mvp);
glViewport(0, 0, width, height);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(_colorSlot, 4, GL_UNSIGNED_BYTE, GL_FALSE, 0, colors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
And my vertex shader:
attribute vec4 position;
attribute vec4 color;
uniform mat4 MVP;
varying vec4 v_color;
void main(void) {
gl_Position = MVP * position;
v_color = color;
}
0 is a valid value, -1 is the "error" or "not found" value. The space for uniform locations and attrib locations is separate, so it's fine to have both a location 0 uniform and a location 0 attribute.
You should post the vertices and colors arrays so we can check whether they have the right sizes; that is a likely cause of the crash.
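In other words, a location check along these lines would avoid treating 0 as a failure (a sketch, assuming the slots are declared as GLint):
if (_positionSlot == -1 || _colorSlot == -1 || _mvpSlot == -1) {
    NSLog(@"Failed to retrieve the locations of the shader variables:\n Position:%i\n Color:%i\n MVP:%i",
          _positionSlot, _colorSlot, _mvpSlot);
}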
I have the same issue and no solution; however, I have tried to debug it extensively, so I thought it might be helpful to share what I found.
I traced the issue down to glCreateProgram() and glCreateShader() calls. They both return zero, which is the error return. However in both cases glGetError() returns GL_NO_ERROR. In fact no error pops up all the way through the compile and link chain following the create calls.
I have the same symptom: glGetAttribLocation and the uniform lookups always return zero (rather than -1 for an error!). I can even feed in broken shader files and it will not lead to a compile error or any log info. I verified that the correct source string was being fed into the compile call.
I am completely dumbfounded by the behavior, because glCreateProgram clearly errors out yet doesn't set an error, and it is not clear to me at all why it errors. What is especially confusing is that a whole chain of GL function calls don't return errors either. I get my first error when the program calls a GL function that operates on a uniform, such as glUniformMatrix4fv() or glUniform1i(). But clearly things are broken way before then, because all uniforms return as 0 even though they cannot all be zero.
Edit: I found the solution in my case. OpenGL ES seems to be quite flexible about when framebuffers are set up, but I needed to force framebuffer setup before starting with the shaders to get rid of the issue. E.g. if you follow the old EAGLView template, framebuffers are created in layoutSubviews; however, that is too late for a typical initialization of shaders in, say, initWithCoder. Otherwise the symptoms are the ones described above.