GLKView zooming over Navigation Bar - iOS

I have a GLKView (OpenGL ES 2.0) between a navigation bar at the top and a toolbar at the bottom of my iOS app window. I have implemented pinch zoom using UIPinchGestureRecognizer, but when I zoom out to a good extent, my view runs over the top navigation bar. Surprisingly, the view does not go over the toolbar at the bottom. I wonder what I'm doing wrong.
Here's the viewport setting I'm using:
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
and here are the update method and the pinch handler:
-(void) update {
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.01f, 10.0f);
    self.effect.transform.projectionMatrix = projectionMatrix;

    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -6.0f);
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;
}
-(IBAction) handlePinch: (UIPinchGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1.0;
}

First, you don't need to call glViewport when drawing with GLKView to its builtin framebuffer -- it does that for you automatically before calling your drawing method (drawRect: in a GLKView subclass, or glkView:drawInRect: if you're doing your drawing from the view's delegate). That's not your problem, though -- it's just redundant state setting (which Instruments or the Xcode Frame Debugger will probably tell you about when you use them).
If you want to zoom in on the contents of the view rather than resizing the view, you'll need to change how you're drawing those contents. Luckily, you're already set up well for doing that because you're already adjusting the ModelView and Projection matrices in your update method. Those control how vertices are transformed from model to screen space -- and part of that transformation includes a notion of a "camera" you can adjust to affect how near/far the objects in your scene appear.
In 3D rendering (as in real life), there are two ways to "zoom":
Move the camera closer to / farther from the point it's looking at. The translation matrix you're using for your modelViewMatrix is what sets the camera distance (it's the z parameter you currently have fixed at -6.0). Keep track of / change a distance in your pinch recognizer handler and use it when creating the modelViewMatrix if you want to zoom this way.
Change the camera's field of view angle -- this is what happens when you adjust the zoom lens on a real camera. This is part of the projectionMatrix (the first parameter, currently fixed at 65 degrees). Keep track of / change the field of view angle in your pinch recognizer handler and use it when creating the projectionMatrix if you want to zoom this way.
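Here's a minimal sketch of both approaches, based on the update method from the question (the _cameraDistance and _fieldOfView ivars are hypothetical names you'd add; pick one option and leave the other commented out). Note the affine scaling of the view itself is gone -- that's what was pushing the view over the navigation bar:
// Hypothetical ivars, initialized to match the question's fixed values:
// float _cameraDistance = 6.0f;
// float _fieldOfView = 65.0f;

-(IBAction) handlePinch: (UIPinchGestureRecognizer *)recognizer {
    // Option 1: dolly the camera. Pinching out (scale > 1) brings it closer.
    _cameraDistance /= recognizer.scale;
    _cameraDistance = MAX(0.5f, MIN(_cameraDistance, 9.5f)); // stay within the near/far planes

    // Option 2: change the field of view instead (uncomment to try):
    // _fieldOfView /= recognizer.scale;
    // _fieldOfView = MAX(10.0f, MIN(_fieldOfView, 120.0f));

    recognizer.scale = 1.0; // reset so each callback delivers an incremental scale
}

-(void) update {
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    self.effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(_fieldOfView), aspect, 0.01f, 10.0f);

    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -_cameraDistance);
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;
}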

Related

CATransform3D making my UITextView disappear?

So I have a UITextView that's supposed to be visually sitting on the bottom edge of the screen and then stretching up and back "into" the screen, "Star Wars" opening crawl-style.
After much googling etc, I feel like I have what looks like the right code for the job... but instead of setting up the perspective view I was looking for, this is just making the UITextView totally disappear!
The text view is set up in a storyboard with springs/struts (no autolayout) such that it's pinned at the top and bottom of the main view, about 20px in from each side, and the springs are active in both directions. Its outlet is hooked up to self.infoTextView. It shows up as I'd expect if I don't apply any transformations to it.
But when I fire off the code below in viewDidLoad, the text view just disappears completely. I'm sure I'm missing something, but I can't seem to figure out what it is.
CGRect frame = self.infoTextView.layer.frame;
self.infoTextView.layer.anchorPoint = CGPointMake(.5f, 1.0f);
self.infoTextView.layer.frame = frame;
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = 1.0 / 500;
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform, 1.57, 0, 1, 0);
self.infoTextView.layer.transform = rotationAndPerspectiveTransform;
thanks!
There are two things you'll need to modify in your code:
The rotation axis. You're rotating around the Y axis, and 1.57 radians is roughly 90 degrees, which turns the layer edge-on to the viewer -- that's why it disappears completely. To get the effect you're after, change the rotate transform to turn around the X axis.
The sign. To get the "Star Wars opening crawl" effect you'll need to set a negative angle or a negative perspective.
Also, you'd probably want to set a stronger perspective (a smaller m34 denominator) to achieve a more dramatic effect.
Here's an example based on your transform code:
CGFloat angle = -45; // negative angle (per the note above) leans the text away from the viewer
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = 1.0 / 200; // stronger perspective than 1.0 / 500
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform, angle / 180.0 * M_PI, 1, 0, 0); // degrees to radians, around X
self.infoTextView.layer.transform = rotationAndPerspectiveTransform;

Manipulating UIView layer's CATransform3D and UIView's CGAffineTransform at the same time

This might sound like a weird question, but what I am trying to do isn't very strange. I am currently resizing a UIView via the view's CGAffineTransform like this:
self.selectedController.view.transform = CGAffineTransformScale(CGAffineTransformIdentity, resizeRatioValue, resizeRatioValue);
Where the resizeRatioValue is a value between 0.7-1.0, depending on the location of the gesture. This works great. With a pan gesture, I am able to shrink down my view beautifully.
Now, I would like to add another twist to this. As the view is shrinking, I would like to apply a rotation to the view (similar to the coverflow effect) so that it rotates and shrinks at the same time.
I can rotate the view just fine using this code:
float angle = 45.0 * progressRatio;
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = 1.0 / 2000; // perspective
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform,
                                                      angle * M_PI / 180.0, 0.0f, 1.0f, 0.0f);
self.selectedController.view.layer.transform = rotationAndPerspectiveTransform;
But when I put both of these together, it's one or the other. Whichever one occurs second is the one that works. I can't get them to coexist.
What can I do to make it so that these can both work together? Or is there a completely different approach that would be better suited for what I am trying to do?
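For what it's worth (this wasn't answered in the thread; it's a sketch based on how UIKit documents these properties): a view's transform property is just the affine portion of its layer's transform, so both assignments write to the same underlying layer transform and whichever runs second replaces the first. One way out is to fold the scale into the same CATransform3D:
// Sketch: compose scale and rotation into a single CATransform3D,
// using resizeRatioValue and progressRatio from the question.
float angle = 45.0 * progressRatio;
CATransform3D transform = CATransform3DIdentity;
transform.m34 = 1.0 / 2000; // perspective
transform = CATransform3DRotate(transform, angle * M_PI / 180.0, 0.0f, 1.0f, 0.0f);
transform = CATransform3DScale(transform, resizeRatioValue, resizeRatioValue, 1.0f); // fold the scale in
self.selectedController.view.layer.transform = transform;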

GLKit one object moves odd when moving camera

I have recently started a new iOS project based off of the OpenGL sample. I have added my own camera movement code, and I have added an NSMutableArray that contains instances of Blocks (currently only containing a 3D position). I have modified the drawing code to draw instances of the cube model included with the sample from this array.
When I navigate the camera all of the cubes behave as I expect (they all stay in their original places, but I move the camera around them), except one cube seems to slide a little apart from the others when the camera changes. Whenever I move the camera (position or rotation) this block does stay in the same general location, but it slides slightly in the opposite direction of the camera movement. When the camera is stationary it is perfectly where it should be.
This block always seems to be the last one drawn. If I add a conditional to skip that block it picks a different one.
I've gone through my code over and over again, and I can't see why one block should behave differently than the others.
Here is all of the relevant code:
ViewMatrix = GLKMatrix4MakeRotation(CameraRotation.y, 1.0f, 0.0f, 0.0f);
ViewMatrix = GLKMatrix4Rotate(ViewMatrix, CameraRotation.x, 0.0f, 1.0f, 0.0f);
ViewMatrix = GLKMatrix4TranslateWithVector3(ViewMatrix, CameraPosition);
ViewMatrix = GLKMatrix4Translate(ViewMatrix, 0.0f, -1.5f, 0.0f);

glBindVertexArrayOES(_vertexArray);

for (int i = 0; i < blocks.count; i++) {
    Block *b = [blocks objectAtIndex:i];
    if (true) {
        [self.effect prepareToDraw];
        GLKMatrix4 ModelViewMatrix = GLKMatrix4MakeTranslation(b.position.x, b.position.y, b.position.z);
        ModelViewMatrix = GLKMatrix4Multiply(ViewMatrix, ModelViewMatrix);
        self.effect.transform.modelviewMatrix = ModelViewMatrix;
        glDrawArrays(GL_TRIANGLES, 0, 36);
    }
}
As you can see, all of the blocks are drawn with the exact same code. Why would one behave differently?
The answer to your problem is simple: you should call prepareToDraw after you configure the effect, not before.
You should always configure the effect first and then call prepareToDraw. So just move the [self.effect prepareToDraw] to right before glDrawArrays(..).
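Applied to the loop from the question, the corrected ordering looks like this:
for (int i = 0; i < blocks.count; i++) {
    Block *b = [blocks objectAtIndex:i];
    GLKMatrix4 ModelViewMatrix = GLKMatrix4MakeTranslation(b.position.x, b.position.y, b.position.z);
    ModelViewMatrix = GLKMatrix4Multiply(ViewMatrix, ModelViewMatrix);
    self.effect.transform.modelviewMatrix = ModelViewMatrix;
    [self.effect prepareToDraw]; // now picks up this block's modelview matrix
    glDrawArrays(GL_TRIANGLES, 0, 36);
}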
Hope it helps

How do I map the x co-ordinate of a pan gesture to the rotation of cube?

I'm developing an iPhone app where the main views are presented to the user on the surfaces of a cube. Users switch views by rotating the cube with a pan gesture.
To achieve this I am using the GKLCubeController class from this GitHub project.
In terms of adding views to a cube and rotating, it works fine. However the angular rotation of the cube doesn't map correctly to the current x position of the finger as it pans across the screen.
The problem is that the cube rotation lags behind the finger movement by about ½ second, making the cube feel ‘heavy’, as illustrated in this short screencast.
The code handling the rotation is shown below:
-(void)panHandler:(UIPanGestureRecognizer *)panner {
    CGPoint translatedPoint = [panner translationInView:self.view.window];
    CGFloat halfWidth = self.view.bounds.size.width / 2.0;

    // save our starting points
    if ([panner state] == UIGestureRecognizerStateBegan) {
        startingX = translatedPoint.x;
        if (!transformLayer) {
            transformLayer = [[CATransformLayer alloc] init];
            transformLayer.frame = self.view.layer.bounds;
            for (UIView *viewToTranslate in views) {
                [viewToTranslate removeFromSuperview];
                [transformLayer addSublayer:viewToTranslate.layer];
            }
            // add in this new layer
            [self.view.layer addSublayer:transformLayer];
        }
    } else if ([panner state] == UIGestureRecognizerStateEnded) {
        ...
    } else {
        // instantly adjust our transformation layer
        CATransform3D transform = CATransform3DIdentity;
        transform.m34 = kPerspective;
        double percentageOfWidth = (translatedPoint.x - startingX) / self.view.frame.size.width;
        transform = CATransform3DTranslate(transform, 0, 0, -halfWidth);
        double adjustmentAngle = percentageOfWidth * M_PI_2 + startingAngle;
        transform = CATransform3DRotate(transform, adjustmentAngle, 0, 1, 0);
        transform = CATransform3DTranslate(transform, 0, 0, halfWidth);
        transformLayer.transform = transform;
        finishingAngle = adjustmentAngle;
    }
}
I've a suspicion the problem is something to do with the conversion of the CGPoint.x returned by UIPanGestureRecognizer translationInView: to a rotation angle. Can anyone confirm whether this is the case, and suggest what the correct maths should be for mapping the touch position x to the rotation of a cube such that the cube edge tracks the finger motion as it pans across the screen?
There are two issues here:
The major performance issue here is the way this class performs the transform of the sides of the cube. It gives each side of the cube a complicated transform, and then, as you drag the cube around, it takes the relevant sides of the cube, adds them to a CATransformLayer, and performs a complicated transform upon that layer (thus, when you look at the individual sides of the cube, you're applying a transform of a transform).
I pulled out that CATransformLayer logic, and updated the transform for the individual sides, and it was dramatically more responsive.
By the way, you might still want to employ something like this CATransformLayer logic when you animate the letting go of the rotated cube, as that's an excellent way of synchronizing the animation of the individual sides of the cube (otherwise you get some separation between the sides of the cube during the animation). But while dragging, there's too much of a performance hit.
As you continue to refine this, there are possibly other optimizations that can be done, but my testing suggests that getting rid of a transformation on a complicated transformation made a huge impact on performance.
And, by the way, make sure to test this on a device, not the simulator, as the simulator's graphics performance is very different from that of the device.
A minor factor that might contribute a slight initial delay in responsiveness may be the inherent delay in UIPanGestureRecognizer (which looks for a certain amount of movement before recognizing the gesture as a pan, so that other gestures such as taps and the like can trigger if appropriate). It's a modest delay and a very small part of your performance problem, but for the quickest of response times, you might not want to use the UIPanGestureRecognizer. Either subclass your own, or use a UILongPressGestureRecognizer with a minimumPressDuration of 0.0, and you can get instantaneous response to the gesture.
You'll see this respond more quickly to movement (but it's also a gesture that doesn't play well with others: if you have tap gestures or the like inside the view, they won't be triggered).
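A minimal sketch of that zero-delay recognizer setup (pressHandler: is a hypothetical selector; in the handler you'd track locationInView: deltas yourself, since a long-press recognizer has no translationInView:):
UILongPressGestureRecognizer *presser =
    [[UILongPressGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(pressHandler:)];
presser.minimumPressDuration = 0.0; // fire immediately, skipping the pan-detection delay
presser.allowableMovement = CGFLOAT_MAX; // don't fail the gesture when the finger moves
[self.view addGestureRecognizer:presser];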

iOS Convert TouchBegan coordinates to OpenGL ES Coordinates

New to OpenGL ES here.
I'm using the following code to detect where I tapped in a GLKView (OpenGL ES 2.0). I would like to know if I touched my OpenGL drawn objects. It's all 2D.
How do I convert the coordinates I am getting to OpenGL ES 2.0 coordinates, which are seemingly -1.0 to 1.0 based? Are there already built in functions to do so?
Thanks.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self.view bounds];
    UITouch *touch = [[event touchesForView:self.view] anyObject];
    CGPoint location = [touch locationInView:self.view];
    NSLog(@"x: %f y: %f", location.x, location.y);
}
-1 to 1 is clipping space. If your coordinates are still in clipping space when they're displayed on the screen, I'd say you forgot to convert them using a projection matrix. If you're using GLKBaseEffect (which I don't recommend in the long run, since it tends to leak memory everywhere) then you need to set <baseEffect>.transform.projectionMatrix to a matrix that will convert the space correctly. For example:
self.effect = [[GLKBaseEffect alloc] init];
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, <width>, 0, <height>, 0.0f, 1.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
width and height would be the width and height of the device's screen/your GLKView/etc. This is automatically applied to the coordinates you pass in so that you can use normal coordinates ranging from 0 to <width> on the x axis and 0 to <height> on the y axis, with the origin in the lower left corner of the screen.
If you are using custom shaders like I am then you can pass in the projection matrix as a uniform using:
glUniformMatrix4fv(shaderLocations.projectionMatrix, 1, 0, projection.m);
where projection is the matrix and shaderLocations.projectionMatrix is the identifier for the uniform (its name, as it were). You then need to multiply your position by the projection matrix in the vertex shader.
Once you've converted away from clipping space, either by passing in the matrix manually or by setting the correct property on GLKBaseEffect, the only difference between OpenGL space and UIKit space is that the y axis is flipped. I convert touches I receive through the touches methods and gesture recognizers like this:
CGPoint openGLTouch = CGPointMake(touch.x, self.view.bounds.size.height - touch.y);
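Putting the pieces together, here's a sketch of a touch handler that logs GL-space coordinates (assuming the ortho projection above, so one GL unit equals one point):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event touchesForView:self.view] anyObject];
    CGPoint location = [touch locationInView:self.view];

    // UIKit's origin is top-left with y increasing downward; the ortho
    // projection above puts GL's origin at the bottom-left, so flip y.
    CGPoint glPoint = CGPointMake(location.x, self.view.bounds.size.height - location.y);
    NSLog(@"GL x: %f y: %f", glPoint.x, glPoint.y);
}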
I'll try my best to clarify if you have any questions but keep in mind I'm relatively new to OpenGL myself. :)
