I'm drawing a few lines using OpenGL ES, and I need to change their thickness smoothly from 1 pixel to 3 pixels, but glLineWidth doesn't let me set a line width between 1.0 and 2.0.
Is it possible?
Here is my code:
- (void)setupGL
{
[EAGLContext setCurrentContext:self.context];
self.effect = [[GLKBaseEffect alloc] init];
glEnable(GL_DEPTH_TEST);
glGenVertexArraysOES(1, &_vertexArray); // create the VAO that drawInRect binds
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(thinLines), thinLines, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glBindVertexArrayOES(0);
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArrayOES(_vertexArray);
self.effect.constantColor = GLKVector4Make(lineR, lineG, lineB, 1.0f);
[self.effect prepareToDraw];
glLineWidth(1 + scaleQ);
glDrawArrays(GL_LINES, 0, thinLinesCount*2);
}
OpenGL ES (including 3.0) does not support antialiased lines. From the documentation for glLineWidth:
The actual width is determined by rounding the supplied width to the nearest integer.
So, unfortunately, you can't change line thickness smoothly with glLineWidth.
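If you need fractional or smoothly animated widths, a common workaround is to draw each segment as a thin quad instead of a GL_LINES primitive and scale the quad yourself. A minimal sketch, assuming 2D endpoints a and b (GLKVector2) and a hypothetical halfWidth equal to half the desired thickness in the same units as your vertices:
// Expand a 2D segment (a -> b) into a quad of arbitrary width.
GLKVector2 dir = GLKVector2Normalize(GLKVector2Subtract(b, a));
GLKVector2 normal = GLKVector2Make(-dir.y, dir.x); // perpendicular to the segment
GLKVector2 offset = GLKVector2MultiplyScalar(normal, halfWidth);
GLfloat quad[] = {
    a.x + offset.x, a.y + offset.y,
    a.x - offset.x, a.y - offset.y,
    b.x + offset.x, b.y + offset.y,
    b.x - offset.x, b.y - offset.y,
};
// Upload with glBufferData as before, then draw with
// glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) instead of GL_LINES.
This costs four vertices per segment instead of two, but the width is no longer tied to glLineWidth's integer rounding.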
I'm trying to reproduce, in a GLKViewController, the effect from a stencil buffer tutorial in which a stencil is used for clipping.
In my drawInRect method I have:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
[self.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, Position));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *) offsetof(Vertex, TexCoord));
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_NEVER, 1, 0xFF);
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP); // draw 1s on test fail (always)
// draw stencil pattern
glStencilMask(0xFF);
glClear(GL_STENCIL_BUFFER_BIT); // needs mask=0xFF
glDrawElements(GL_TRIANGLES, 1*6, GL_UNSIGNED_SHORT, 0);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilMask(0x00);
// draw only where stencil's value is 1
glStencilFunc(GL_EQUAL, 1, 0xFF);
glDrawElements(GL_TRIANGLES, nLiveSquares*6, GL_UNSIGNED_SHORT, 0);
glDisable(GL_STENCIL_TEST);
}
The vertexbuffer contains info for squares, which hold textures.
The effect I'm trying to get is to have the first square define the stencil shape, so that when I draw the scene, anything lying outside of the first square is clipped. Hence I call glDrawElements twice: first to draw the clipping square, then to draw all the squares.
I also have another method for the OpenGL set-up, called once at the start of the program. As suggested by an answer to a related Stack Overflow question, this includes:
GLuint depthStencilRenderbuffer;
glGenRenderbuffers(1, &depthStencilRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencilRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRenderbuffer);
Running this results in nothing except a screen containing the grey clear color. It appears that everything has been stenciled out (or is not drawn to the screen).
Solution:
i) I changed the initialization to:
GLuint depthStencilRenderbuffer;
glGenRenderbuffersOES(1, &depthStencilRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH24_STENCIL8_OES,
self.view.bounds.size.width,
self.view.bounds.size.height);
I'm not sure exactly what the difference is, but this works. (The key difference: the original snippet never called glRenderbufferStorage, so no storage was ever allocated for the depth/stencil bits.)
ii) In the setup I use:
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
Thanks to @rickster for his suggestion.
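For reference, these GLKView properties belong wherever the view is configured. A minimal sketch, assuming the standard GLKViewController template's viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    view.drawableStencilFormat = GLKViewDrawableStencilFormat8;
    [self setupGL];
}
With the drawable formats set this way, GLKView creates and attaches the depth and stencil renderbuffers itself, so the manual glFramebufferRenderbuffer calls above become unnecessary.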
I am trying and failing to draw a colored line using GLKit / OpenGL ES 2.0. I can draw the line, but it comes out black.
My current approach uses a single VBO containing both position and color data.
Set-up:
typedef struct {
float Position[2];
float Color[4];
} LinePoint;
LinePoint line[] =
{
{{0.0, 0.0},{1.0,0.0,0.0,1}},
{{2.0, 3.0},{1.0,0.0,0.0,1}}
};
glGenBuffers(1, &_lineBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _lineBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(LinePoint)*2,line,GL_STATIC_DRAW);
In my drawInRect method:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
[EAGLContext setCurrentContext:self.context];
[self.effect prepareToDraw];
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D); // note: GL_TEXTURE_2D is not a valid glEnable/glDisable cap in ES 2.0
glBindBuffer(GL_ARRAY_BUFFER, _lineBuffer);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition,2,GL_FLOAT,GL_FALSE,sizeof(LinePoint),(const GLvoid *) offsetof(LinePoint, Position));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor,4,GL_FLOAT,GL_FALSE,sizeof(LinePoint),(const GLvoid *) offsetof(LinePoint, Color));
// Set the line width (the implementation clamps this to GL_ALIASED_LINE_WIDTH_RANGE)
glLineWidth(50.0);
// Render the line
glDrawArrays(GL_LINE_STRIP, 0, 2);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
}
I am trying to combine 2D and 3D in OpenGL ES 2.0, and even though there are many questions here on OpenGL ES 1.0 and some on 2.0, I am having trouble figuring this out. For the 2D I am working from this tutorial: http://www.raywenderlich.com/9743/how-to-create-a-simple-2d-iphone-game-with-opengl-es-2-0-and-glkit-part-1 and for the 3D I am using the existing rotating-cube Xcode template.
I am getting EXC_BAD_ACCESS on the second glDrawArrays call (somehow it's acting as if the original buffer were still bound; is there a way to unbind it before drawing the 2D texture?) with the following render function. It works if I comment out these lines:
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
What's going on? Thanks in advance.
glEnable(GL_DEPTH_TEST);
// glGenVertexArraysOES(1, &_vertexArrayNum);
// glBindVertexArrayOES(_vertexArrayNum);
glGenBuffers(1, &_vertexBufferNum);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferNum);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glBindVertexArrayOES(0);
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
// baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
// Compute the model view matrix for the object rendered with GLKit
// GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeRotation(_rotation, 0, 1, 0);
//modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
self.effect.transform.modelviewMatrix = modelViewMatrix;
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArrayOES(_vertexArrayNum);
// Render the object with GLKit
// [self.effect prepareToDraw];
// glDrawArrays(GL_TRIANGLES, 0, 36);
modelViewMatrix = GLKMatrix4MakeScale(2.0f, 2.0f, 2.0f);
projectionMatrix = GLKMatrix4MakeOrtho(0, 480, 0, 320, -1024, 1048);
self.effect.transform.projectionMatrix = projectionMatrix;
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
[self.player render];
UPDATE:
Added glEnableClientState(GL_VERTEX_ARRAY); around the cube-drawing part, which got rid of the bad-access error, but now neither the cube nor the sprite is drawn (unless, of course, I comment out the cube code altogether).
UPDATE 2:
So the problem is obviously that after I call glDrawArrays to draw the cube (which works fine), I call glDrawArrays again as follows. But somehow it's still trying to render the previous array?
// 1
self.effect.texture2d0.name = self.textureInfo.name;
self.effect.texture2d0.enabled = YES;
// 2
[self.effect prepareToDraw];
// 3
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
// 4
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset + offsetof(TexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(TexturedVertex), (void *) (offset + offsetof(TexturedVertex, textureVertex)));
// 5
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Stupid me. It works as soon as I do:
glBindBuffer(GL_ARRAY_BUFFER, 0);
to unbind the buffer. I kept looking for a glUnbind-something call and forgot that OpenGL works off of whatever is currently bound. The 2D texture is now being applied to the cube, but I'm sure that's an easy fix :) Thank you all for the help!
Texture issue fixed :)
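For anyone hitting the same crash: the last argument of glVertexAttribPointer is interpreted as a byte offset into whatever VBO is bound to GL_ARRAY_BUFFER, so with the cube's buffer still bound, a client-memory pointer becomes a huge bogus offset and the draw call faults. A minimal sketch of the working pattern, reusing the names from the snippets above:
// Draw the cube from its VBO (the pointer argument is an offset here).
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferNum);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glDrawArrays(GL_TRIANGLES, 0, 36);

// Unbind before pointing attributes at client memory for the sprite.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE,
                      sizeof(TexturedVertex),
                      (void *)(offset + offsetof(TexturedVertex, geometryVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);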
I am drawing a pixel using GLKit. I can successfully draw the pixel at coordinates (10, 10) if I have:
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Prepare the effect for rendering
[self.effect prepareToDraw];
GLfloat points[] =
{
10.0f, 10.0f,
};
glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
GLuint bufferObjectNameArray;
glGenBuffers(1, &bufferObjectNameArray);
glBindBuffer(GL_ARRAY_BUFFER, bufferObjectNameArray);
glBufferData(
GL_ARRAY_BUFFER,
sizeof(points),
points,
GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(
GLKVertexAttribPosition,
2,
GL_FLOAT,
GL_FALSE,
2*4,
NULL);
glDrawArrays(GL_POINTS, 0, 1);
But I want to decide at runtime how many pixels to draw and exactly where, so I tried the following. It draws the pixel at (10, 0) instead; something's wrong here:
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Prepare the effect for rendering
[self.effect prepareToDraw];
GLfloat *points = (GLfloat*)malloc(sizeof(GLfloat) * 2);
for (int i=0; i<2; i++) {
points[i] = 10.0f;
}
glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
GLuint bufferObjectNameArray;
glGenBuffers(1, &bufferObjectNameArray);
glBindBuffer(GL_ARRAY_BUFFER, bufferObjectNameArray);
glBufferData(
GL_ARRAY_BUFFER,
sizeof(points),
points,
GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(
GLKVertexAttribPosition,
2,
GL_FLOAT,
GL_FALSE,
2*4,
NULL);
glDrawArrays(GL_POINTS, 0, 1);
Kindly help me out.
Edit:
Problem
Actually, the problem is that I can't figure out the difference between:
GLfloat points[] =
{
10.0f, 10.0f,
};
AND
GLfloat *points = (GLfloat*)malloc(sizeof(GLfloat) * 2);
for (int i=0; i<2; i++) {
points[i] = 10.0f;
}
My bet is that the problem is the sizeof(points) in the glBufferData call: applied to a pointer, it returns the size of the pointer itself (4 bytes on 32-bit iOS), not the size of the allocated array. Only the first float is uploaded, the second coordinate reads back as 0, and the point lands at (10, 0). You should pass the real size of your array, the same byte count you computed in the malloc call.
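A minimal fix along those lines, tracking the byte count explicitly:
// sizeof(GLfloat) * 2 is the real payload size; sizeof(points) would
// only measure the pointer.
const size_t byteCount = sizeof(GLfloat) * 2;
GLfloat *points = (GLfloat *)malloc(byteCount);
points[0] = 10.0f;
points[1] = 10.0f;
glBufferData(GL_ARRAY_BUFFER, byteCount, points, GL_STATIC_DRAW);
free(points); // safe: glBufferData copies the data into the buffer object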
I am building an iOS game with OpenGL ES 2.0 that contains a scene with around 100 different 'objects' which I need to transform independently of one another. I am new to OpenGL ES, so I initially thought I should start with OpenGL ES 1.1 and later migrate the app to 2.0 to take advantage of the enhanced visual effects.
However, just as I was getting comfortable with 1.1, I realized, partly based on Apple's very 2.0-centric Xcode template for OpenGL, that fighting the move straight into 2.0 was going to be more trouble than it's worth, so I bit the bullet.
Now I'm finding it difficult to grasp the concepts well enough to apply simple transforms to my vertex array objects independently of one another. I read in the OpenGL ES 2.0 docs that the best way to do this is to use multiple VBOs, one for each vertex array. However, I can't seem to transform one of them without affecting the entire scene.
I started the project with the Apple OpenGL ES template and streamlined it, such that the setupGL method looks like this:
-(void)setupGL
{
// Setup
[EAGLContext setCurrentContext:self.context];
[self loadShaders];
self.effect = [[GLKBaseEffect alloc] init];
self.effect.light0.enabled = GL_TRUE;
self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f);
glEnable(GL_DEPTH_TEST); // do depth comparisons and update the depth buffer
// Initialize first buffer
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(gPyramidVertexData), gPyramidVertexData, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glBindVertexArrayOES(0);
// Initialize second buffer
glGenVertexArraysOES(1, &_vertexArray2); // generate one name; passing 2 with a single GLuint writes past it
glBindVertexArrayOES(_vertexArray2);
glGenBuffers(1, &_vertexBuffer2);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer2);
glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glBindVertexArrayOES(0);
}
I know that's incredibly sloppy code: I basically copy-and-pasted the buffer-creation portion to set myself up for experimenting with two different vertex arrays in separate buffers. It appears to have worked, as I can render both buffers in the drawInRect method:
-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glBindVertexArrayOES(_vertexArray);
glUseProgram(_program);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
glDrawArrays(GL_TRIANGLES, 0, numberOfVerteces);
glBindVertexArrayOES(_vertexArray2);
glUseProgram(_program);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
glDrawArrays(GL_TRIANGLES, 0, numberOfVerteces2);
}
My problem comes when I try to apply transforms to one vertex array without affecting the other. I tried adding transforms in the above method just before the glDrawArrays calls, without success; I think that may only work with 1.1. It appears my transforms must be done within the update method:
-(void)update
{
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation( 0.0f, 0.0f, -4.0f);
baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _sceneRotation, 1.0f, 1.0f, 1.0f);
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, zoomFactor);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _objectRotation, 0.0f, 1.0f, 0.0f);
modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
_normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
}
However, this is where I get really confused: it's not at all clear how I could modify this method to apply transforms to one VBO or the other. It's not even clear that the above code refers to any one particular buffer at all.
Eventually I would like to expand my game by moving the code for each game object into an instance of a custom class, which would handle the transforms, etc. But I would like to know if I'm completely misguided in my approach before doing so.
In case this is helpful to anyone having similar issues, I found the answers I was looking for on Ian Terrell's excellent blog, Games by Ian Terrell.
In particular, this tutorial and its accompanying sample code were exactly what I needed to get a grip on this very complicated subject. Hope this helps!
GLKMatrix4 _modelViewProjectionMatrix[5]; // declared at the start
In - (void)update, use:
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(-1.0f, 1.0f, -1.0f / aspect, 1.0f / aspect, -10.0f, 10.0f);
for (int i=0; i<5;i++) {
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(i*0.2+ 0.3f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, GLKMatrix4MakeZRotation(0.0 - _rotation));
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, GLKMatrix4MakeXRotation(0.0 - _rotation));
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, GLKMatrix4MakeYRotation(0.20 - _rotation));
_modelViewProjectionMatrix[i] = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
}
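To complete the picture, a sketch of the matching draw side, assuming the template's uniforms array and shader program, reusing the question's names, and assuming all five objects are drawn from the same vertex array:
// In glkView:drawInRect:, upload each object's matrix before its draw call.
glBindVertexArrayOES(_vertexArray);
glUseProgram(_program);
for (int i = 0; i < 5; i++) {
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX],
                       1, 0, _modelViewProjectionMatrix[i].m);
    glDrawArrays(GL_TRIANGLES, 0, numberOfVerteces);
}
Each object then gets its own model-view-projection matrix while sharing the geometry, which is the independent-transform behavior the question is after.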