Rotation along the X and Y axes - iOS

I'm using GLKit along with the PowerVR library for my OpenGL ES 2.0 3D app. The 3D scene loads with several meshes, which simulate a garage environment, with a car in the center of the garage. I am trying to add touch handling to the app so the user can rotate the room around (e.g., to see all 4 walls surrounding the car). I also want to allow a rotation on the X axis, though limited to a small range: basically they can see from a little bit of the top of the car down to just above floor level.
I am able to rotate on the Y OR on the X, but not both. As soon as I rotate on both axes, the car is thrown off-axis and isn't level with the camera anymore. I wish I could explain this better, but hopefully you guys will understand.
Here is my touches implementation:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    CGPoint lastLoc = [touch previousLocationInView:self.view];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);

    float rotX = -1 * GLKMathDegreesToRadians(diff.x / 4.0);
    float rotY = GLKMathDegreesToRadians(diff.y / 5.0);

    PVRTVec3 xAxis = PVRTVec3(1, 0, 0);
    PVRTVec3 yAxis = PVRTVec3(0, 1, 0);

    PVRTMat4 yRotMatrix, xRotMatrix;

    // create rotation matrices with angle
    PVRTMatrixRotationXF(yRotMatrix, rotY);
    PVRTMatrixRotationYF(xRotMatrix, -rotX);

    _rotationY = _rotationY * yRotMatrix;
    _rotationX = _rotationX * xRotMatrix;
}
Here's my update method:
- (void)update {
    // Use the loaded effect
    m_pEffect->Activate();

    PVRTVec3 vFrom, vTo, vUp;
    VERTTYPE fFOV;

    vUp.x = 0.0f;
    vUp.y = 1.0f;
    vUp.z = 0.0f;

    // We can get the camera position, target and field of view (fov) with GetCameraPos()
    fFOV = m_Scene.GetCameraPos(vFrom, vTo, 0);

    /*
        We can build the world view matrix from the camera position, target and an up vector.
        For this we use PVRTMat4LookAtRH().
    */
    m_mView = PVRTMat4::LookAtRH(vFrom, vTo, vUp);

    // rotate the camera based on the user's swipe in the X direction (THIS WORKS)
    m_mView = m_mView * _rotationX;

    // Calculates the projection matrix
    bool bRotate = false;
    m_mProjection = PVRTMat4::PerspectiveFovRH(fFOV, (float)1024.0/768.0, CAM_NEAR, CAM_FAR, PVRTMat4::OGL, bRotate);
}
I've tried multiplying the new X rotation matrix into the current scene rotation first, and then multiplying the new Y rotation matrix second. I've tried the reverse of that, thinking the order of multiplication was my problem. That didn't help. Then I tried adding the new X and Y rotation matrices together before multiplying into the current rotation, but that didn't work either. I feel that I'm close, but at this point I'm just out of ideas.
Can you guys help? Thanks. -Valerie
Update: In an effort to solve this, I'm trying to simplify it a little. I've updated the above code, removing any limit in the range of the Y rotation. Basically I calculate the X and Y rotation based on the user swipe on the screen.
If I understand this correctly, I think I want to rotate the View matrix (camera/eye) with the calculation for the _rotationX.
I think I need to use the World matrix (origin 0,0,0) for the _rotationY calculation. I'll try and get some images of exactly what I'm talking about.

Wahoo, got this working! I rotated the view matrix (created by the LookAt method) with the X rotation matrix, and rotated the model-view matrix with the Y rotation matrix.
Here's the modified Update method:
- (void)update {
    PVRTVec3 vFrom, vTo, vUp;
    VERTTYPE fFOV;

    // We can get the camera position, target and field of view (fov) with GetCameraPos()
    fFOV = m_Scene.GetCameraPos(vFrom, vTo, 0);

    /*
        We can build the world view matrix from the camera position, target and an up vector.
        For this we use PVRTMat4LookAtRH().
    */
    m_mView = PVRTMat4::LookAtRH(vFrom, vTo, PVRTVec3(0.0f, 1.0f, 0.0f));

    // rotate on the X axis (finger swipe Y direction)
    m_mView = m_mView * _rotationY;

    // Calculates the projection matrix
    m_mProjection = PVRTMat4::PerspectiveFovRH(fFOV, (float)1024.0/768.0, CAM_NEAR, CAM_FAR, PVRTMat4::OGL, false);
}
Here's the modified touch moved method:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    CGPoint lastLoc = [touch previousLocationInView:self.view];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);

    float rotX = -1 * GLKMathDegreesToRadians(diff.x / 2.5);
    float rotY = GLKMathDegreesToRadians(diff.y / 2.5);

    PVRTMat4 rotMatrixX, rotMatrixY;

    // create rotation matrices with angle
    PVRTMatrixRotationYF(rotMatrixX, -rotX);
    PVRTMatrixRotationXF(rotMatrixY, rotY);

    _rotationX = _rotationX * rotMatrixX;
    _rotationY = _rotationY * rotMatrixY;
}
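To bring back the limited up/down range mentioned at the top of the question, one option (just a sketch, not part of the working code above; _pitchAngle is an assumed new ivar and the limit values are made up) is to accumulate the X-axis angle and clamp it before building rotMatrixY inside touchesMoved:

    // Sketch only: clamp the accumulated pitch to an assumed range and apply just the clamped delta.
    float minPitch = GLKMathDegreesToRadians(-10.0f); // lower limit (assumed value)
    float maxPitch = GLKMathDegreesToRadians(20.0f);  // upper limit (assumed value)
    float clampedPitch = MAX(minPitch, MIN(maxPitch, _pitchAngle + rotY));
    PVRTMatrixRotationXF(rotMatrixY, clampedPitch - _pitchAngle); // only the allowed delta
    _pitchAngle = clampedPitch;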

Related

Retrieve ARVector3 from touch on cameraView

I've been playing around with the enhanced samples and read the full SDK documentation, but I can't figure out this problem. I'm trying to convert a touch on self.cameraView to an ARVector3 so I can move the 3D model to that position. At the moment I'm trying to convert the CGPoint from the tapGesture to the world ARNode, but no luck so far.
float x = [gesture locationInView:self.cameraView].x;
float y = [gesture locationInView:self.cameraView].y;
CGPoint gesturePoint = CGPointMake(x, y);

ARVector3 *newPosition = [arbiTrack.world nodeFromViewPort:gesturePoint];

ARNode *touchNode = [ARNode nodeWithName:@"touchNode"];
ARImageNode *targetImageNode = [[ARImageNode alloc] initWithImage:[UIImage imageNamed:@"drop.png"]];
[touchNode addChild:targetImageNode];
[targetImageNode scaleByUniform:1];
touchNode.position = newPosition;

[arbiTrack.world addChild:touchNode];
This results in the following situation:
And seen on my iPhone:
Why does a touch in the upper left corner of self.cameraView end up as the point that is closest by? I actually want to tap the screen and get an X, Y, Z (ARVector3) coordinate back.

GLKit Object not rotating properly

I have a 3D object.
I am rotating it with touches like this:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    CGPoint lastLoc = [touch previousLocationInView:self.view];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);

    float rotX = -1 * GLKMathDegreesToRadians(diff.y / 2.0);
    float rotY = GLKMathDegreesToRadians(diff.x / 3.0);

    bool isInvertible;
    GLKVector3 yAxis = GLKMatrix4MultiplyAndProjectVector3(GLKMatrix4Invert(_rotMatrix, &isInvertible), GLKVector3Make(0, 0, 1));
    _rotMatrix = GLKMatrix4Rotate(_rotMatrix, rotY, yAxis.x, yAxis.y, yAxis.z);

    GLKVector3 xAxis = GLKVector3Make(1, 0, 0);
    _rotMatrix = GLKMatrix4Rotate(_rotMatrix, rotX, xAxis.x, xAxis.y, xAxis.z);
}
and setting the matrices like this:
_modelViewMatrix = GLKMatrix4Identity;
_modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, 0.0f, -60.0f);
_modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, 5.5f, -4.0f);
// I know I can do this with one call
// rotation steps: the first is so the air conditioner faces the camera,
// the second is the transform matrix coming from the finger movements
// I'm turning off the 90 degree rotation
_modelViewMatrix = GLKMatrix4RotateX(_modelViewMatrix, GLKMathDegreesToRadians(90.0f));
_modelViewMatrix = GLKMatrix4Multiply(_modelViewMatrix, _rotMatrix);
_modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, -5.5f, +4.0f);
self.reflectionMapEffect.transform.modelviewMatrix = _modelViewMatrix;
I am translating the modelViewMatrix to the object's centre, rotating it, then translating back, then translating -65 on Z. But every time I try it, it rotates around the same vector. I think the object has its own centre, and it is rotating around both its own centre and the scene's centre.
How can I change the object's centre in code, or how can I rotate this object properly?
The way the matrix multiplication works is in terms of the object's base vectors. You can imagine it as looking from a first-person perspective (from the object/model, that is). If you first move the object (translate) and then rotate, the object will still be at the same position but facing a different direction, which means it will not simulate orbiting. If you change the order of operations to rotate first and then move, it will simulate orbiting (but rotating as well). For instance, if you rotate the model to face your right and then translate forward, it will appear to have translated to your right. So a true orbit consists of first rotating by some angle, then translating by the radius, and then rotating by the same negative angle. Again, try looking at it from the model's perspective.
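To make that concrete, here is a minimal sketch of the rotate / translate-by-radius / rotate-by-the-negative-angle sequence, using the same GLKit calls as in the question; the angle and radius values are made up for illustration only:

    // Sketch: orbit by sandwiching a translation between a rotation and its inverse.
    float angle  = GLKMathDegreesToRadians(30.0f);   // assumed orbit angle
    float radius = 60.0f;                            // assumed orbit radius

    GLKMatrix4 orbit = GLKMatrix4Identity;
    orbit = GLKMatrix4RotateY(orbit, angle);                  // rotate by some angle
    orbit = GLKMatrix4Translate(orbit, 0.0f, 0.0f, -radius);  // translate by the radius
    orbit = GLKMatrix4RotateY(orbit, -angle);                 // rotate by the same negative angle

    _modelViewMatrix = GLKMatrix4Multiply(_modelViewMatrix, orbit);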
I hope this helps, as you did not explain exactly what it is you want to accomplish.

Converting iPad Coordinates to OpenGL ES 2.0 Scene Coordinates? iOS

I have an OpenGL ES 2.0 scene which contains only 2D objects. I am applying the following two matrices:
width = 600;
CC3GLMatrix * projection = [CC3GLMatrix matrix];
height = width * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-width/2 andRight:width/2 andBottom:-height/2 andTop:height/2 andNear:4 andFar:10];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
CC3GLMatrix * modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(xTranslation ,yTranslation, -7)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);
In the touchesBegan method I am then trying to map the touch point coordinates to OpenGL ES 2.0 scene coordinates:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"Touches Began");
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchPoint = [touch locationInView:self];

    float differentialWidth = (768 - width) / 2;   // Accounts for if OpenGL view is less than iPad width or height.
    float differentialHeight = (1024 - height) / 2;

    float openGlXPoint = ((touchPoint.x - differentialWidth) - (width/2));
    float openGlYPoint = ((touchPoint.y - differentialHeight) - (width/2));

    NSLog(@"X in Scene Touched is %f", openGlXPoint);
    CGPoint finalPoint = CGPointMake(openGlXPoint, openGlYPoint);

    for (SquareObject *square in squareArray) {
        if (CGRectContainsPoint(stand.bounds, finalPoint)) {
            NSString *messageSquare = [NSString stringWithFormat:@"Object name is %@", square.Name];
            UIAlertView *message = [[UIAlertView alloc] initWithTitle:@"Touched"
                                                              message:messageSquare
                                                             delegate:nil
                                                    cancelButtonTitle:@"OK"
                                                    otherButtonTitles:nil];
            [message show];
        }
    }
}
This code works in that it returns OpenGL coordinates - for example, clicking in the middle of the screen successfully returns 0,0. The problem, however (I think), is that I somehow need to account for the zoom scale of the scene, as an object drawn with an origin of 150,0 does not match where I click on the iPad (which returns 112,0 using the above code). Can anyone suggest how I can correct this?
Thanks!
This might be overkill for a 2D app, but the way you would typically do this for a 3D app is to make two vectors, a "far point" and a "near point", unproject them both using GLKMathUnproject (or whatever other math library you want), then subtract the near point from the far point to get a ray in object coordinates. You can then test that ray for intersection against the geometry alone, without having to worry about projection or modelview matrices. Here's an example:
bool testResult;
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
GLKVector3 nearPt = GLKMathUnproject(GLKVector3Make(tapLoc.x, tapLoc.y, 0.0), modelViewMatrix, projectionMatrix, &viewport[0] , &testResult);
GLKVector3 farPt = GLKMathUnproject(GLKVector3Make(tapLoc.x, tapLoc.y, 1.0), modelViewMatrix, projectionMatrix, &viewport[0] , &testResult);
farPt = GLKVector3Subtract(farPt, nearPt);
//now you can test if the farPt ray intersects the geometry of interest, perhaps
//using a method like the one described here http://www.cs.virginia.edu/~gfx/Courses/2003/ImageSynthesis/papers/Acceleration/Fast%20MinimumStorage%20RayTriangle%20Intersection.pdf
In your case projectionMatrix is probably the identity since you are working in two dimensions, and modelViewMatrix is the scales, translates, rotates, shears, etc you've applied to your object.
Also, in case you were unaware, what you are asking for is often referred to as "picking," and if you enter "OpenGL picking" into Google you may find better info on the subject than what you might have gotten before with just "converting coordinates."

Problems with offsets in portrait mode Cocos2d

I am working from Ray Wenderlich's tutorial on rotating turrets in cocos2d (see here: http://www.raywenderlich.com/25791/rotating-turrets-how-to-make-a-simple-iphone-game-with-cocos2d-2-x-part-2). I need my game to be in portrait mode, and I have managed to position the turret correctly:
The turret manages to shoot right, but not left. Here is my code:
- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (_nextProjectile != nil) return;

    // Choose one of the touches to work with
    UITouch *touch = [touches anyObject];
    CGPoint location = [self convertTouchToNodeSpace:touch];

    // Set up initial location of projectile
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    _nextProjectile = [[CCSprite spriteWithFile:@"projectile2.png"] retain];
    _nextProjectile.position = ccp(160, 20);

    // Determine offset of location to projectile
    CGPoint offset = ccpSub(location, _nextProjectile.position);

    // Bail out if you are shooting down or backwards
    if (offset.x <= 0) return;

    // Determine where you wish to shoot the projectile to
    int realX = winSize.width + (_nextProjectile.contentSize.width/2);
    float ratio = (float) offset.y / (float) offset.x;
    int realY = (realX * ratio) + _nextProjectile.position.y;
    CGPoint realDest = ccp(realX, realY);

    // Determine the length of how far you're shooting
    int offRealX = realX - _nextProjectile.position.x;
    int offRealY = realY - _nextProjectile.position.y;
    float length = sqrtf((offRealX*offRealX)+(offRealY*offRealY));
    float velocity = 480/1; // 480pixels/1sec
    float realMoveDuration = length/velocity;

    // Determine angle to face
    float angleRadians = atanf((float)offRealY / (float)offRealX);
    float angleDegrees = CC_RADIANS_TO_DEGREES(angleRadians);
    float cocosAngle = -1 * angleDegrees;
    float rotateDegreesPerSecond = 180 / 0.5; // Would take 0.5 seconds to rotate 180 degrees, or half a circle
    float degreesDiff = _player.rotation - cocosAngle;
    float rotateDuration = fabs(degreesDiff / rotateDegreesPerSecond);

    [_player runAction:
     [CCSequence actions:
      [CCRotateTo actionWithDuration:rotateDuration angle:cocosAngle],
      [CCCallBlock actionWithBlock:^{
         // OK to add now - rotation is finished!
         [self addChild:_nextProjectile];
         [_projectiles addObject:_nextProjectile];

         // Release
         [_nextProjectile release];
         _nextProjectile = nil;
      }],
      nil]];

    // Move projectile to actual endpoint
    [_nextProjectile runAction:
     [CCSequence actions:
      [CCMoveTo actionWithDuration:realMoveDuration position:realDest],
      [CCCallBlockN actionWithBlock:^(CCNode *node) {
         [_projectiles removeObject:node];
         [node removeFromParentAndCleanup:YES];
      }],
      nil]];

    _nextProjectile.tag = 2;
}
Thanks for the help!
You are checking the X axis instead of the Y axis:
// Bail out if you are shooting down or backwards
if (offset.x <= 0) return;
Did you actually set the application to run in portrait mode or have you just rotated the simulator and repositioned the turret?
If you didn't explicitly set the app to run in portrait, your x and y coordinates will be swapped (x will run from the iOS home button to the top of the phone, not across as you would expect).
If it is converted properly I have answered this question before :)
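For reference, one common way to lock a cocos2d 2.x app to portrait is via the standard UIKit autorotation overrides; this is only a sketch and the exact placement depends on the template used (typically the root/navigation view controller):

    // Hypothetical placement: in the root view controller of the cocos2d template.
    - (NSUInteger)supportedInterfaceOrientations {
        return UIInterfaceOrientationMaskPortrait;                       // iOS 6+
    }

    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        return UIInterfaceOrientationIsPortrait(interfaceOrientation);   // iOS 5 and earlier
    }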
The issue here is that you've copy-pasted the math instead of editing it properly for your purposes. There are some assumptions in Ray's code that rely on you always shooting to the right of the turret instead of up, down, or left.
Here's the math code you should be looking at:
// Determine offset of location to projectile
CGPoint offset = ccpSub(location, _nextProjectile.position);
// Bail out if you are shooting down or backwards
if (offset.x <= 0) return;
Note here that you will have an offset.x less than 0 if the tap location is to the left of the turret, so this is an assumption you took from Ray but did not revise. As gheesse said, for your purposes this check should use offset.y, as you don't want them shooting south of the projectile's original location. But this is only part of the problem here.
// Determine where you wish to shoot the projectile to
int realX = winSize.width + (_nextProjectile.contentSize.width/2);
Here's your other big issue. You did not revise Ray's math for determining where the projectile should go. In Ray's code, the projectile always ends up at a location off the screen to the right, so he uses the width of the screen and the projectile's size to determine the real location he wants the projectile to reach. This is causing your issue, since your game does not share the assumption that the projectile will always head right - yours will always go up (hint: code similar to this should be used for your realY).
float ratio = (float) offset.y / (float) offset.x;
int realY = (realX * ratio) + _nextProjectile.position.y;
Again, Ray makes assumptions in his math for his game and you haven't corrected them in this realY. Your code has the turret turning in ways that will affect the realX coordinate instead of the realY, which is the coordinate that Ray's always-shoots-right turret needed to affect.
You need to sit down and re-do the math.
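Not part of the original answer, but as a rough, untested sketch of what that re-done math might look like for a turret at the bottom of a portrait screen (variable names taken from the question, logic mirrored from Ray's rightward-shooting version):

    // Bail out if you are shooting down or backwards (vertical check for portrait)
    if (offset.y <= 0) return;

    // Aim at a point just off the top of the screen instead of off the right edge
    int realY = winSize.height + (_nextProjectile.contentSize.height / 2);
    float ratio = (float)offset.x / (float)offset.y;
    int realX = ((realY - _nextProjectile.position.y) * ratio) + _nextProjectile.position.x;
    CGPoint realDest = ccp(realX, realY);

The angle-to-face calculation further down would need the same kind of X/Y swap.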

iOS Convert TouchBegan coordinates to OpenGL ES Coordinates

New to OpenGL ES here.
I'm using the following code to detect where I tapped in a GLKView (OpenGL ES 2.0). I would like to know if I touched my OpenGL drawn objects. It's all 2D.
How do I convert the coordinates I am getting to OpenGL ES 2.0 coordinates, which are seemingly -1.0 to 1.0 based? Are there already built in functions to do so?
Thanks.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self.view bounds];
    UITouch *touch = [[event touchesForView:self.view] anyObject];
    CGPoint location = [touch locationInView:self.view];
    NSLog(@"x: %f y: %f", location.x, location.y);
}
-1 to 1 is clipping space. If your coordinate space is still in clipping space when it displays on the screen, I'd say you forgot to convert the spaces using a projection matrix. If you're using GLKBaseEffect (which I don't recommend later down the road, since it tends to leak memory everywhere) then you need to set <baseEffect>.transform.projectionMatrix to a matrix that will convert the space correctly. For example,
GLKBaseEffect* effect = [[GLKBaseEffect alloc] init];
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, <width>, 0, <height>, 0.0f, 1.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
width and height would be the width and height of the device's screen/your GLKView/etc. This is automatically applied to the coordinates you pass in so that you can use normal coordinates ranging from 0 to <width> on the x axis and 0 to <height> on the y axis, with the origin in the lower left corner of the screen.
If you are using custom shaders like I am then you can pass in the projection matrix as a uniform using:
glUniformMatrix4fv(shaderLocations.projectionMatrix,1,0,projection.m)
where projection is the matrix and shaderLocations.projectionMatrix is the identifier (the location) of the uniform. You then need to multiply your position by the projection matrix.
Once you've converted away from clipping space, either by passing in the matrix manually or by setting the correct property on GLKBaseEffect, the only difference between OpenGL space and UIKit space is that the y axis is flipped. I convert touches I receive through the touches methods and gesture recognizers like this.
CGPoint openGLTouch = CGPointMake(touch.x, self.view.bounds.size.height - touch.y);
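As a small usage sketch (assumed names: objectRect stands in for whatever bounds you track for a drawn 2D object), the converted point can then be hit-tested directly:

    // In touchesBegan: convert the UIKit point, then test it against the object's rect (assumed).
    CGPoint location = [touch locationInView:self.view];
    CGPoint openGLTouch = CGPointMake(location.x, self.view.bounds.size.height - location.y);
    if (CGRectContainsPoint(objectRect, openGLTouch)) {
        NSLog(@"2D object tapped");
    }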
I'll try my best to clarify if you have any questions but keep in mind I'm relatively new to OpenGL myself. :)
