I have recently started a new iOS project based on the OpenGL sample. I have added my own camera movement code, and an NSMutableArray that contains instances of a Block class (currently holding only a 3D position). I have modified the drawing code to draw an instance of the cube model included with the sample for each entry in this array.
When I navigate the camera, all of the cubes behave as I expect (they stay in their original places while I move the camera around them), except that one cube seems to slide slightly relative to the others when the camera changes. Whenever I move the camera (position or rotation), this block stays in the same general location, but it slides slightly in the opposite direction of the camera movement. When the camera is stationary it is exactly where it should be.
This block always seems to be the last one drawn. If I add a conditional to skip that block it picks a different one.
I've gone through my code over and over again, and I can't see why one block should behave differently than the others.
Here is all of the relevant code:
ViewMatrix = GLKMatrix4MakeRotation(CameraRotation.y, 1.0f, 0.0f, 0.0f);
ViewMatrix = GLKMatrix4Rotate(ViewMatrix, CameraRotation.x, 0.0f, 1.0f, 0.0f);
ViewMatrix = GLKMatrix4TranslateWithVector3(ViewMatrix, CameraPosition);
ViewMatrix = GLKMatrix4Translate(ViewMatrix, 0.0f, -1.5f, 0.0f);
glBindVertexArrayOES(_vertexArray);
for (int i = 0; i < blocks.count; i++) {
    Block *b = [blocks objectAtIndex:i];
    if (true) {
        [self.effect prepareToDraw];

        GLKMatrix4 ModelViewMatrix = GLKMatrix4MakeTranslation(b.position.x, b.position.y, b.position.z);
        ModelViewMatrix = GLKMatrix4Multiply(ViewMatrix, ModelViewMatrix);
        self.effect.transform.modelviewMatrix = ModelViewMatrix;

        glDrawArrays(GL_TRIANGLES, 0, 36);
    }
}
As you can see, all of the blocks are drawn with the exact same code. Why would one behave differently?
The answer to your problem is simple: you are calling prepareToDraw before you configure the effect.
You should always configure the effect first and then call prepareToDraw. So just move the [self.effect prepareToDraw] to right before glDrawArrays(..).
Hope it helps
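To make the mechanism concrete, here is a tiny standalone C analogy (the `Effect` struct and function names are illustrative stand-ins, not GLKit API): `prepareToDraw` snapshots the effect's current properties into the program state that the draw call actually reads, so a modelview matrix assigned after the snapshot is ignored until the next prepare. That is why, in the loop above, each draw renders with the matrix set for the previous block, and one block per frame ends up using a stale matrix from the previous frame, lagging the camera.

```c
#include <assert.h>

/* Illustrative stand-ins for GLKBaseEffect behaviour (not real GLKit types). */
typedef struct {
    float modelviewMatrix;  /* the property you assign on the effect          */
    float programState;     /* what prepareToDraw copies into the GL program  */
} Effect;

static void prepareToDraw(Effect *e) { e->programState = e->modelviewMatrix; }

/* Stands in for glDrawArrays: renders with whatever was last prepared. */
static float draw(const Effect *e)   { return e->programState; }
```

With the buggy ordering, the value set after `prepareToDraw` is invisible to `draw`; with the corrected ordering, it is picked up.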
Basically what I'm doing is making a simple finger drawing application. I have a single class that takes the input touch points and does all the fun work of turning those touch points into bezier curves, calculating vertices from those, etc. That's all working fine.
The only interesting constraint I'm working with is that I need strokes to blend on top of each other, but not with themselves. Imagine a scribbly line that crosses itself and has 50% opacity. Where the line crosses itself, there should be no visible blending (it should all look like the same color). However, the line SHOULD blend with the rest of the drawing below it.
To accomplish this, I'm using two textures. A back texture and a scratch texture. While the line is actively being updated (during the course of the stroke), I disable blending, draw the vertices on the scratch texture, then enable blending, and draw the back texture and scratch texture into my frame buffer. When the stroke is finished, I draw the scratch texture into the back texture, and we're ready to start the next stroke.
This all works very smoothly on a newer device, but on older devices the frame rate takes a severe hit. From some testing, it seems that the biggest performance hit is in drawing the textures to the frame buffer, because they're relatively large textures (due to the iPhone's retina resolution).
Does anybody have any hints on some strategies to work around this? I'm happy to provide more specifics or code, I'm just not sure where to start.
I am using OpenGL ES 2.0, targeting iOS 7.0, but testing on an iPhone 4S.
The following is code I'm using to draw into the framebuffers:
- (void)drawRect:(CGRect)rect
{
    [self drawRect:rect
         ofTexture:_backTex
       withOpacity:1.0];

    if (_activeSpriteStroke)
    {
        [self drawStroke:_activeSpriteStroke
         intoFrameBuffer:0];
    }
}
Those rely on the following few methods:
- (void)drawRect:(CGRect)rect
       ofTexture:(GLuint)tex
     withOpacity:(CGFloat)opacity
{
    _texShader.color = GLKVector4Make(1.0, 1.0, 1.0, opacity);
    [_texShader prepareToDraw];

    glBindTexture(GL_TEXTURE_2D, tex);
    glBindVertexArrayOES(_texVertexVAO);
    glBindBuffer(GL_ARRAY_BUFFER, _texVertexVBO);

    [self bufferTexCoordsForRect:rect];

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindVertexArrayOES(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
}
- (void)drawStroke:(AHSpriteStroke *)stroke
   intoFrameBuffer:(GLuint)frameBuffer
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    [self renderStroke:stroke
           ontoTexture:_scratchTex
         inFrameBuffer:_scratchFrameBuffer];

    if (frameBuffer == 0)
    {
        [self bindDrawable];
    }
    else
    {
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    }

    [self setScissorRect:_activeSpriteStroke.boundingRect];
    glEnable(GL_SCISSOR_TEST);

    [self drawRect:self.bounds
         ofTexture:_scratchTex
       withOpacity:stroke.lineOpacity];

    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);
}
- (void)renderStroke:(AHSpriteStroke *)stroke
         ontoTexture:(GLuint)tex
       inFrameBuffer:(GLuint)framebuffer
{
    glBindFramebuffer(GL_FRAMEBUFFER, _msFrameBuffer);
    glBindTexture(GL_TEXTURE_2D, tex);

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);

    [stroke render];

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, framebuffer);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msFrameBuffer);
    glResolveMultisampleFramebufferAPPLE();

    const GLenum discards[] = { GL_COLOR_ATTACHMENT0 };
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, discards);

    glBindTexture(GL_TEXTURE_2D, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
And a couple of the helper methods just for completeness so you can follow it:
- (void)bufferTexCoordsForRect:(CGRect)rect
{
    AHTextureMap textureMaps[4] =
    {
        [self textureMapForPoint:CGPointMake(CGRectGetMinX(rect), CGRectGetMinY(rect))
                          inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMaxX(rect), CGRectGetMinY(rect))
                          inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMinX(rect), CGRectGetMaxY(rect))
                          inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMaxX(rect), CGRectGetMaxY(rect))
                          inRect:self.bounds]
    };

    glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(AHTextureMap), textureMaps, GL_DYNAMIC_DRAW);
}
- (AHTextureMap)textureMapForPoint:(CGPoint)point
                            inRect:(CGRect)outerRect
{
    CGPoint pt = CGPointApplyAffineTransform(point, CGAffineTransformMakeScale(self.contentScaleFactor, self.contentScaleFactor));
    return (AHTextureMap) { { pt.x, pt.y }, { point.x / outerRect.size.width, 1.0 - (point.y / outerRect.size.height) } };
}
From what I understand, you are drawing each quad in a separate draw call.
If your stroke consists of a lot of quads (from sampling the bezier curve), your code will make many draw calls per frame.
Having many draw calls in OpenGL ES 2 on older iOS devices will probably create a bottleneck on the CPU,
because draw calls in OpenGL ES 2 can have a lot of overhead in the driver:
the driver organizes the draw calls you make into something the GPU can digest, and it does that organization on the CPU.
If you intend to draw many quads to simulate a brush stroke, you should update a vertex buffer to contain many quads and then draw it with one draw call, instead of making a draw call per quad.
You can verify that your bottleneck is in the CPU with the Time Profiler instrument.
Check whether the CPU is spending most of its time in the OpenGL draw call methods or in your own functions.
If the CPU spends most of its time in the OpenGL draw call methods, it is likely because you are making too many draw calls per frame.
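The batching suggestion can be sketched in plain C (the types and function name are illustrative, not a real engine API): pack four vertices per quad into one array and six indices per quad into another, upload both with glBufferData once, and replace N draw calls with a single glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_SHORT, 0):

```c
#include <assert.h>

typedef struct { float x, y; float u, v; } Vertex2D;

/* Fill `verts` (4 per quad) and `indices` (6 per quad, two triangles)
   so all quads can be drawn with ONE glDrawElements call.
   Each quad is given as { x, y, w, h }. */
static void buildQuadBatch(const float (*quads)[4],
                           int quadCount,
                           Vertex2D *verts,
                           unsigned short *indices)
{
    for (int q = 0; q < quadCount; q++) {
        float x = quads[q][0], y = quads[q][1];
        float w = quads[q][2], h = quads[q][3];

        Vertex2D *v = &verts[q * 4];
        v[0] = (Vertex2D){ x,     y,     0.0f, 0.0f };
        v[1] = (Vertex2D){ x + w, y,     1.0f, 0.0f };
        v[2] = (Vertex2D){ x,     y + h, 0.0f, 1.0f };
        v[3] = (Vertex2D){ x + w, y + h, 1.0f, 1.0f };

        unsigned short base = (unsigned short)(q * 4);
        unsigned short *i = &indices[q * 6];
        i[0] = base;     i[1] = base + 1; i[2] = base + 2;  /* first triangle  */
        i[3] = base + 2; i[4] = base + 1; i[5] = base + 3;  /* second triangle */
    }
}
```

Rebuilding these two arrays each frame is cheap compared to the per-draw-call driver overhead it removes, and GL_DYNAMIC_DRAW is a reasonable usage hint for the buffers.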
I have a GLKView (OpenGL ES 2.0) between a navigation bar at the top and a toolbar at the bottom of my iOS app window. I have implemented pinch zoom using UIPinchGestureRecognizer, but when I zoom out a good extent, my view runs over the top navigation bar. Surprisingly, the view does not go over the toolbar at the bottom. I wonder what I'm doing wrong.
Here are the viewport settings I'm using:
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
and here's the update and the pinch handler:
- (void)update {
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.01f, 10.0f);
    self.effect.transform.projectionMatrix = projectionMatrix;

    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -6.0f);
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;
}

- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1.0;
}
First, you don't need to call glViewport when drawing with GLKView to its builtin framebuffer -- it does that for you automatically before calling your drawing method (drawRect: in a GLKView subclass, or glkView:drawInRect: if you're doing your drawing from the view's delegate). That's not your problem, though -- it's just redundant state setting (which Instruments or the Xcode Frame Debugger will probably tell you about when you use them).
If you want to zoom in on the contents of the view rather than resizing the view, you'll need to change how you're drawing those contents. Luckily, you're already set up well for doing that because you're already adjusting the ModelView and Projection matrices in your update method. Those control how vertices are transformed from model to screen space -- and part of that transformation includes a notion of a "camera" you can adjust to affect how near/far the objects in your scene appear.
In 3D rendering (as in real life), there are two ways to "zoom":
Move the camera closer to / farther from the point it's looking at. The translation matrix you're using for your modelViewMatrix is what sets the camera distance (it's the z parameter you currently have fixed at -6.0). Keep track of / change a distance in your pinch recognizer handler and use it when creating the modelViewMatrix if you want to zoom this way.
Change the camera's field of view angle -- this is what happens when you adjust the zoom lens on a real camera. This is part of the projectionMatrix (the first parameter, currently fixed at 65 degrees). Keep track of / change the field of view angle in your pinch recognizer handler and use it when creating the projectionMatrix if you want to zoom this way.
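Either approach can be sketched as plain C helpers (the function names and clamp limits are illustrative): keep a persistent distance or field-of-view value, update it from recognizer.scale inside handlePinch instead of scaling the view's transform, and feed the result into GLKMatrix4MakeTranslation / GLKMatrix4MakePerspective in update.

```c
#include <assert.h>

/* Dolly zoom: move the camera closer to / farther from the scene. */
static float zoomDistance(float distance, float pinchScale) {
    distance /= pinchScale;              /* pinch out (scale > 1) moves closer */
    if (distance < 2.0f)  distance = 2.0f;   /* don't pass through the model  */
    if (distance > 20.0f) distance = 20.0f;  /* don't fly off to infinity     */
    return distance;
}

/* Lens zoom: change the perspective field of view angle. */
static float zoomFov(float fovDegrees, float pinchScale) {
    fovDegrees /= pinchScale;            /* pinch out narrows the FOV */
    if (fovDegrees < 10.0f)  fovDegrees = 10.0f;
    if (fovDegrees > 120.0f) fovDegrees = 120.0f;
    return fovDegrees;
}
```

Dividing by the pinch scale makes pinching outward (scale > 1) zoom in for both styles; the clamps keep the camera from passing through the model and the FOV from degenerating.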
I am using OpenGL ES 2.0 to draw a rectangle. Initially the viewport is such that I am looking from above, and I can see my rectangle as expected.
Then I start rotating this rectangle about the x-axis. When the angle of rotation equals -90 deg (or +90 deg when rotating in the other direction), the rectangle disappears.
What I expect to see is the bottom surface of the rectangle when I rotate past +/-90 deg, but instead the view disappears. It reappears when the total rotation angle reaches -270 deg (or +270 deg), when the upper surface is just about to be shown again.
How do I ensure that I can see the rectangle all along (both the upper and lower surfaces have to be visible while rotating)?
Here' the relevant piece of code:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if ([touches count] == 1) {
        CGPoint currLoc = [touch locationInView:self];
        CGPoint lastLoc = [touch previousLocationInView:self];
        CGPoint diff = CGPointMake(lastLoc.x - currLoc.x, lastLoc.y - currLoc.y);

        rotX = -1 * GLKMathDegreesToRadians(diff.y / 2.0);
        rotY = -1 * GLKMathDegreesToRadians(diff.x / 2.0);
        totalRotationX += ((rotX * 180.0f) / 3.141592f);
        NSLog(@"rotX: %f, rotY: %f, totalRotationX: %f", rotX, rotY, totalRotationX);

        // rotate around x axis
        GLKVector3 xAxis = GLKMatrix4MultiplyVector3(GLKMatrix4Invert(_rotMatrix, &isInvertible), GLKVector3Make(1, 0, 0));
        _rotMatrix = GLKMatrix4Rotate(_rotMatrix, rotX, xAxis.v[0], 0, 0);
    }
}

- (void)update {
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0, 0, -6.0f);
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;

    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0, 10.0f);
    self.effect.transform.projectionMatrix = projectionMatrix;
}
- (void)setupGL {
    NSLog(@"setupGL");
    isInvertible = YES;
    totalRotationX = 0;

    [EAGLContext setCurrentContext:self.context];
    glEnable(GL_CULL_FACE);

    self.effect = [[GLKBaseEffect alloc] init];

    // New lines
    glGenVertexArraysOES(1, &_vertexArray);
    glBindVertexArrayOES(_vertexArray);

    // Old stuff
    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);

    glGenBuffers(1, &_indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);

    glViewport(0, 0, self.frame.size.width, self.frame.size.height);

    // New lines (were previously in draw)
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, Position));
    glEnableVertexAttribArray(GLKVertexAttribColor);
    glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, Color));

    _rotMatrix = GLKMatrix4Identity;

    // New line
    glBindVertexArrayOES(0);

    initialized = 1;
}
I am a newbie to OpenGL, and I am using GLKit along with OpenGL ES 2.0.
Thanks.
There are many causes for things not rendering in OpenGL. In this case, it was back-face culling (see the comments on the question). Back-face culling is useful because it lets the GPU skip triangles facing away from the camera and save some rasterization/fragment processing time. Since many meshes/objects are watertight and you'd never want to see the inside anyway, it's uncommon to actually want two-sided shading. This functionality starts with defining the front and back of a triangle, which is done by the order the vertices are given in (sometimes called the winding direction). glFrontFace chooses which direction, clockwise or counter-clockwise, counts as front-facing; glCullFace chooses whether to cull front or back faces (I guess some could argue there's not much point in having both); and finally you enable/disable it:
glEnable(GL_CULL_FACE); //discards triangles facing away from the camera
glDisable(GL_CULL_FACE); //default, two-sided rendering
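A quick way to check which way a projected triangle winds is its signed screen-space area (standalone C; the helper name is illustrative). With counter-clockwise front faces (OpenGL's default), a positive area means front-facing:

```c
#include <assert.h>

/* Signed area of a screen-space triangle: positive for counter-clockwise
   winding (OpenGL's default front face), negative for clockwise. */
static float signedArea(float ax, float ay, float bx, float by,
                        float cx, float cy)
{
    return 0.5f * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}
```

When the rectangle in the question rotates past +/-90 deg, the projected winding of its triangles flips sign, so with GL_CULL_FACE enabled they are discarded until the rotation brings the front face around again.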
Some other things I check for when geometry isn't visible include...
Is the geometry colour the same as the background? Choosing a non-black/white background can be handy here.
Is the geometry actually drawn within the viewing volume? Throw in a simple object (immediate mode helps) and maybe use identity projection/modelview matrices to rule them out.
Is the viewing volume correct? Near/far planes too far apart (causing z-fighting) or a 0.0f near plane are common issues. Also, when switching to a perspective projection matrix, anything drawn on the Z=0 plane won't be visible any more.
Is blending enabled with everything transparent?
Is the depth buffer not being cleared, causing subsequent frames to be discarded?
In fixed-pipeline rendering, are glTranslate/glRotate transforms being carried over from the previous frame, causing objects to shoot off into the distance? Always keep a glLoadIdentity at the top of the display function.
Is the rendering loop structured correctly: clear/draw/swap buffers?
Of course there are heaps more: geometry shaders not outputting anything, vertex shaders transforming all vertices to the same position (so the triangles are degenerate), fragment shaders calling discard when they shouldn't, VBO binding/indexing issues, etc. Checking GL errors is a must, but it never catches all mistakes.
I want to draw a simple square the size of the full screen using glDrawArrays in cocos2d. When retina is disabled everything draws as expected, but when it is enabled everything is half as big as it should be (it seems the coordinate system used with glDrawArrays is in pixels, not points).
Other draw functions work as expected, but since I am drawing complicated shapes I have to use glDrawArrays, because it is much faster.
Any ideas how to solve this?
- (void)draw
{
    CGPoint box[4];
    CGPoint boxTex[4];
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    //float boxSize = winSize.width;

    box[0] = ccp(0, winSize.height);              // top left
    box[1] = ccp(0, 0);                           // bottom left
    box[2] = ccp(winSize.width, winSize.height);  // top right
    box[3] = ccp(winSize.width, 0);               // bottom right

    boxTex[0] = ccp(0, 1);
    boxTex[1] = ccp(0, 0);
    boxTex[2] = ccp(1, 1);
    boxTex[3] = ccp(1, 0);

    // texture background
    glBindTexture(GL_TEXTURE_2D, self.sprite.texture.name);
    glVertexPointer(2, GL_FLOAT, 0, box);
    glTexCoordPointer(2, GL_FLOAT, 0, boxTex);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Yes, the drawing is done in pixels, so to render properly on the retina display as well you need to multiply your vertices by CC_CONTENT_SCALE_FACTOR():
for (int i = 0; i < 4; i++)   // scale all four vertices
    box[i] = ccpMult(box[i], CC_CONTENT_SCALE_FACTOR());
CC_CONTENT_SCALE_FACTOR() returns 2 on retina devices instead of 1, so using it should take care of the scaling.
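As a standalone sketch of the point-to-pixel conversion (plain C; the Point2 type and helper name are illustrative, not cocos2d API):

```c
#include <assert.h>

typedef struct { float x, y; } Point2;

/* Convert point coordinates to pixel coordinates for a vertex array.
   scaleFactor plays the role of CC_CONTENT_SCALE_FACTOR():
   2 on retina devices, 1 otherwise. */
static void scaleToPixels(Point2 *pts, int count, float scaleFactor) {
    for (int i = 0; i < count; i++) {   /* note: every vertex, not count - 1 */
        pts[i].x *= scaleFactor;
        pts[i].y *= scaleFactor;
    }
}
```

On a non-retina device the scale factor is 1 and the call is a no-op, so the same code path works everywhere.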
I'm using [NSString drawInRect:] to draw text to a texture and everything works fine until I add drop shadows. They appear correctly, but are often clipped by the rect I'm drawing to.
The issue is that [NSString sizeWithFont:] doesn't know about the drop shadows since they are applied via CGContextSetShadowWithColor(...).
Here is the code I'm using (fluff removed):
CGSize dim = [theString sizeWithFont:uifont];
...
CGContextSetRGBFillColor(context, 1.0f, 1.0f, 1.0f, 1.0f);
CGContextTranslateCTM(context, 0.0f, dim.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextSetShadowWithColor(context, CGSizeMake(dropOffset.x, dropOffset.y), dropBlur, dropColorRef);
...
[theString drawInRect:CGRectMake(0.0f, 0.0f, dim.width, dim.height) withFont:uifont lineBreakMode:UILineBreakModeWordWrap alignment:align];
I've tried expanding dim to take the drop shadow and blur into account and that mostly works, but sometimes the expanded rect causes the line to be wrapped completely different due to the extra space that was added.
Is there a better way to be finding the size of the texture/rect needed to draw to (or to draw the string) than I'm using?
You just need to keep track of two different rects.
The rect that will contain the text, which you pass to -[NSString drawInRect:]. Call this stringBounds.
The expanded/offset rect that contains the shadow. Call this shadowBounds.
Make your texture the size of shadowBounds.
When you draw the text, you'll need to translate by shadowBounds.origin - stringBounds.origin. (Or possibly the reverse -- it depends on exactly what you do, in which order. You'll know it when you get it.)
Then do [theString drawInRect:stringBounds ...].
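If it helps to make the rect bookkeeping concrete, here is a standalone C sketch (the Rect type and shadowBounds helper are illustrative; in real code CGRectUnion and CGRectInset do the same job): offset the string rect by the shadow offset, inflate it by the blur radius, and union it with the original.

```c
#include <assert.h>

typedef struct { float x, y; float w, h; } Rect;

/* Expand a text rect so it also contains the text's shadow:
   offset by (dx, dy), inflate by the blur radius, union with the original. */
static Rect shadowBounds(Rect text, float dx, float dy, float blur) {
    Rect s = { text.x + dx - blur, text.y + dy - blur,
               text.w + 2.0f * blur, text.h + 2.0f * blur };

    float minX = s.x < text.x ? s.x : text.x;
    float minY = s.y < text.y ? s.y : text.y;
    float maxX = (s.x + s.w > text.x + text.w) ? s.x + s.w : text.x + text.w;
    float maxY = (s.y + s.h > text.y + text.h) ? s.y + s.h : text.y + text.h;
    return (Rect){ minX, minY, maxX - minX, maxY - minY };
}
```

Size the texture to shadowBounds, then draw the string translated by (stringBounds.origin - shadowBounds.origin) so both the text and its shadow land inside without changing the line wrapping.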