New to OpenGL ES here.
I'm using the following code to detect where I tapped in a GLKView (OpenGL ES 2.0). I would like to know if I touched my OpenGL drawn objects. It's all 2D.
How do I convert the coordinates I am getting to OpenGL ES 2.0 coordinates, which are seemingly -1.0 to 1.0 based? Are there already built in functions to do so?
Thanks.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self.view bounds];
    UITouch *touch = [[event touchesForView:self.view] anyObject];
    CGPoint location = [touch locationInView:self.view];
    NSLog(@"x: %f y: %f", location.x, location.y);
}
-1 to 1 is clip space. If your coordinates are still in clip space when they reach the screen, you probably haven't converted between spaces with a projection matrix. If you're using GLKBaseEffect (which I don't recommend in the long run, since in my experience it tends to leak memory), you need to set <baseEffect>.transform.projectionMatrix to a matrix that performs that conversion. For example,
self.effect = [[GLKBaseEffect alloc] init];
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, <width>, 0, <height>, 0.0f, 1.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
width and height would be the width and height of the device's screen/your GLKView/etc. This is automatically applied to the coordinates you pass in so that you can use normal coordinates ranging from 0 to <width> on the x axis and 0 to <height> on the y axis, with the origin in the lower left corner of the screen.
If you are using custom shaders like I am then you can pass in the projection matrix as a uniform using:
glUniformMatrix4fv(shaderLocations.projectionMatrix, 1, 0, projection.m);
where projection is the matrix and shaderLocations.projectionMatrix is the location of the uniform. In the vertex shader you then multiply your position by the projection matrix.
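For instance, here is a minimal sketch of that wiring; the names program, u_projection, a_position, viewWidth, and viewHeight are placeholders rather than anything from the code above:
// look up the uniform once after linking the shader program
GLint projectionLoc = glGetUniformLocation(program, "u_projection");
// build an ortho matrix like the one above and upload it
GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, viewWidth, 0, viewHeight, 0.0f, 1.0f);
glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, projection.m);
// in the vertex shader: gl_Position = u_projection * a_position;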
Once you've converted away from clip space, either by passing in the matrix manually or setting the correct property on GLKBaseEffect, the only difference between OpenGL space and UIKit space is that the y axis is flipped. I convert touches received through the touches methods and gesture recognizers like this:
CGPoint openGLTouch = CGPointMake(touch.x, self.view.bounds.size.height - touch.y);
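To get back to the hit-testing part of the question: once the touch and your geometry are in the same pixel-space coordinates, a plain bounds check is enough for 2D. A rough sketch, where MySprite, self.sprites, and frame are made-up names for however you track your drawn objects:
CGPoint glPoint = CGPointMake(location.x, self.view.bounds.size.height - location.y); // flip y as above
for (MySprite *sprite in self.sprites) {
    if (CGRectContainsPoint(sprite.frame, glPoint)) {
        NSLog(@"tapped %@", sprite); // this object was under the touch
    }
}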
I'll try my best to clarify if you have any questions but keep in mind I'm relatively new to OpenGL myself. :)
I have a 3D object.
I am rotating it with touches like this:
-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    CGPoint lastLoc = [touch previousLocationInView:self.view];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);
    float rotX = -1 * GLKMathDegreesToRadians(diff.y / 2.0);
    float rotY = GLKMathDegreesToRadians(diff.x / 3.0);
    bool isInvertible;
    GLKVector3 yAxis = GLKMatrix4MultiplyAndProjectVector3(GLKMatrix4Invert(_rotMatrix, &isInvertible), GLKVector3Make(0, 0, 1));
    _rotMatrix = GLKMatrix4Rotate(_rotMatrix, rotY, yAxis.x, yAxis.y, yAxis.z);
    GLKVector3 xAxis = GLKVector3Make(1, 0, 0);
    _rotMatrix = GLKMatrix4Rotate(_rotMatrix, rotX, xAxis.x, xAxis.y, xAxis.z);
}
and setting matrices like this:
_modelViewMatrix = GLKMatrix4Identity;
_modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, 0.0f, -60.0f);
_modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, 5.5f, -4.0f);
// I know I could do this with a single translate
// rotations: the first is so the model faces the camera,
// the second is the transform matrix coming from the finger gestures
// turning off the 90 degree rotation
_modelViewMatrix = GLKMatrix4RotateX(_modelViewMatrix, GLKMathDegreesToRadians(90.0f));
_modelViewMatrix = GLKMatrix4Multiply(_modelViewMatrix, _rotMatrix);
_modelViewMatrix = GLKMatrix4Translate(_modelViewMatrix, 0.0f, -5.5f, +4.0f);
self.reflectionMapEffect.transform.modelviewMatrix = _modelViewMatrix;
I am translating the modelViewMatrix to the object's centre, rotating it, then translating back, then translating -65 on z. But every time I try it, it rotates around the same vector. I think the object has its own centre, and it is rotating around both its own centre and the scene's centre.
How can I change the object's centre in code, or how can I rotate this object properly?
The matrix multiplications work in terms of the object's own basis vectors. You can imagine it as looking from a first-person perspective (from the object/model, that is). If you first move the object (translate) and then rotate, the object will still be at the same position but facing a different direction, so it will not orbit. If you rotate first and then translate, it will orbit (but rotate as well). For instance, if you rotate the model to face your right and then translate forward, it will appear to translate to your right. So a true orbit consists of first rotating by some angle, then translating by the radius, and then rotating back by the same negative angle. Again, try looking at it from the model's perspective.
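For illustration, a rough GLKit sketch of that orbit order; the angle and radius values here are arbitrary placeholders:
float angle = GLKMathDegreesToRadians(30.0f);               // placeholder orbit angle
float radius = 60.0f;                                       // placeholder orbit radius
GLKMatrix4 orbit = GLKMatrix4MakeRotation(angle, 0, 1, 0);  // rotate by some angle
orbit = GLKMatrix4Translate(orbit, 0.0f, 0.0f, radius);     // translate out by the radius
orbit = GLKMatrix4Rotate(orbit, -angle, 0, 1, 0);           // rotate back by the same negative angle
// applying 'orbit' before the rest of the model-view transform orbits around the y axis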
I hope this helps, as you did not explain what exactly it is you want/need to accomplish.
I'm using CoreGraphics in my UIView to draw a graph and I want to be able to interact with the graph using touch input. Since touches are received in device coordinates, I need to transform it into user coordinates in order to relate it to the graph, but that has become an obstacle since CGContextConvertPointToUserSpace doesn't work outside of the graphics drawing context.
Here's what I've tried.
In drawRect:
CGContextScaleCTM(ctx,...);
CGContextTranslateCTM(ctx,...); // transform graph to fit the view nicely
self.ctm = CGContextGetCTM(ctx); // save for later
// draw points using user coordinates
In my touch event handler:
CGPoint touchDevice = [gesture locationInView:self]; // touch point in device coords
CGPoint touchUser = CGPointApplyAffineTransform(touchDevice, self.ctm); // doesn't give me what I want
// CGContextConvertPointToUserSpace(touchDevice) <- what I want, but doesn't work here
Using the inverse of ctm doesn't work either. I'll admit I'm having trouble getting my head around the meaning and relationships between device coordinates, user coordinates, and the transformation matrix. I think it's not as simple as I want it to be.
EDIT: Some background from Apple's documentation (iOS Coordinate Systems and Drawing Model).
"A window is positioned and sized in screen coordinates, which are defined by the coordinate system for the display."
"Drawing commands make reference to a fixed-scale drawing space, known as the user coordinate space. The operating system maps coordinate units in this drawing space onto the actual pixels of the corresponding target device."
"You can change a view’s default coordinate system by modifying the current transformation matrix (CTM). The CTM maps points in a view’s coordinate system to points on the device’s screen."
I discovered that the CTM already included a transformation to map view coordinates (with origin at the top left) to screen coordinates (with origin at the bottom left). So (0,0) got transformed to (0,800), where the height of my view was 800, and (0,2) mapped to (0,798) etc. So I gather there are 3 coordinate systems we're talking about: screen coordinates, view/device coordinates, user coordinates. (Please correct me if I am wrong.)
The CGContext transform (CTM) maps from user coordinates all the way to screen coordinates. My solution was to maintain my own transform separately which maps from user coordinates to view coordinates. Then I could use it to go back to user coordinates from view coordinates.
My Solution:
In drawRect:
CGAffineTransform scale = CGAffineTransformMakeScale(...);
CGAffineTransform translate = CGAffineTransformMakeTranslation(...);
self.myTransform = CGAffineTransformConcat(translate, scale);
// draw points using user coordinates
In my touch event handler:
CGPoint touchPoint = [gesture locationInView:self]; // touch point in view coords
CGPoint touchUser = CGPointApplyAffineTransform(touchPoint, CGAffineTransformInvert(self.myTransform)); // this does the trick
Alternate Solution:
Another approach is to manually set up an identical context, but I think this is more of a hack.
In my touch event handler:
#import <QuartzCore/QuartzCore.h>
CGPoint touch = [gesture locationInView:self]; // view coords
CGSize layerSize = [self.layer frame].size;
UIGraphicsBeginImageContext(layerSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// as in drawRect:
CGContextScaleCTM(...);
CGContextTranslateCTM(...);
CGPoint touchUser = CGContextConvertPointToUserSpace(context, touch); // now it gives me what I want
UIGraphicsEndImageContext();
I have a GLKView (OpenGL ES2.0) between a navigation bar on the top and a tool bar at the bottom of my iOS app window. I have implemented pinch zoom using UIPinchGestureRecognizer but when I zoom out a good extent, my view runs over the top navigation bar. Surprisingly the view does not go over the tool bar at the bottom. Wonder what I'm doing wrong.
Here are the viewport settings I'm using:
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
and here's the update and the pinch handler:
-(void) update {
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.01f, 10.0f);
    self.effect.transform.projectionMatrix = projectionMatrix;
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -6.0f);
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;
}
-(IBAction) handlePinch: (UIPinchGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1.0;
}
First, you don't need to call glViewport when drawing with GLKView to its builtin framebuffer -- it does that for you automatically before calling your drawing method (drawRect: in a GLKView subclass, or glkView:drawInRect: if you're doing your drawing from the view's delegate). That's not your problem, though -- it's just redundant state setting (which Instruments or the Xcode Frame Debugger will probably tell you about when you use them).
If you want to zoom in on the contents of the view rather than resizing the view, you'll need to change how you're drawing those contents. Luckily, you're already set up well for doing that because you're already adjusting the ModelView and Projection matrices in your update method. Those control how vertices are transformed from model to screen space -- and part of that transformation includes a notion of a "camera" you can adjust to affect how near/far the objects in your scene appear.
In 3D rendering (as in real life), there are two ways to "zoom":
1. Move the camera closer to / farther from the point it's looking at. The translation matrix you're using for your modelViewMatrix is what sets the camera distance (it's the z parameter you currently have fixed at -6.0). Keep track of / change a distance in your pinch recognizer handler and use it when creating the modelViewMatrix if you want to zoom this way (a sketch of this option follows below).
2. Change the camera's field of view angle; this is what happens when you adjust the zoom lens on a real camera. This is part of the projectionMatrix (the first parameter, currently fixed at 65 degrees). Keep track of / change the field of view angle in your pinch recognizer handler and use it when creating the projectionMatrix if you want to zoom this way.
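As a rough sketch of the first option, assuming an instance variable _cameraDistance (initialised to -6.0) that replaces the hard-coded translation in the update method above:
-(IBAction) handlePinch: (UIPinchGestureRecognizer *)recognizer {
    // adjust the camera distance instead of scaling the view; _cameraDistance is an assumed ivar
    _cameraDistance /= recognizer.scale;   // pinch out -> closer, pinch in -> farther
    recognizer.scale = 1.0;
}
-(void) update {
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    self.effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.01f, 10.0f);
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, _cameraDistance);  // previously fixed at -6.0f
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;
}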
I have an OpenGL ES 2.0 scene which contains only 2D objects. I am applying the following two matrices:
width = 600;
CC3GLMatrix * projection = [CC3GLMatrix matrix];
height = width * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-width/2 andRight:width/2 andBottom:-height/2 andTop:height/2 andNear:4 andFar:10];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
CC3GLMatrix * modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(xTranslation ,yTranslation, -7)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);
In the touchesBegan method I am then trying to map the touch point coordinates to the OpenGL ES 2.0 scene co-ordinates:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"Touches Began");
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchPoint = [touch locationInView:self];
    float differentialWidth = (768-width)/2; //Accounts for if OpenGL view is less than iPad width or height.
    float differentialHeight = (1024-height)/2;
    float openGlXPoint = ((touchPoint.x - differentialWidth) - (width/2));
    float openGlYPoint = ((touchPoint.y - differentialHeight) - (width/2));
    NSLog(@"X in Scene Touched is %f", openGlXPoint);
    CGPoint finalPoint = CGPointMake(openGlXPoint, openGlYPoint);
    for (SquareObject *square in squareArray) {
        if (CGRectContainsPoint(stand.bounds, finalPoint)) {
            NSString *messageSquare = [NSString stringWithFormat:@"Object name is %@", square.Name];
            UIAlertView *message = [[UIAlertView alloc] initWithTitle:@"Touched"
                                                              message:messageSquare
                                                             delegate:nil
                                                    cancelButtonTitle:@"OK"
                                                    otherButtonTitles:nil];
            [message show];
        }
    }
}
This code works in that it returns OpenGL co-ordinates - for example, clicking in the middle of the screen successfully returns 0,0. The problem, however, is (I think) that I somehow need to account for the zoom scale of the scene, as an object drawn with an origin of 150,0 does not match where I click on the iPad (which returns 112,0 using the above code). Can anyone suggest how I can correct this?
Thanks!
This might be overkill for a 2D app, but the way you would typically do this for a 3D app is to make two vectors, a "far point" and a "near point", unproject them both using GLKMathUnproject (or whatever other math library you want), then subtract the near point from the far point. That gives you a ray in object coordinates which you can use to test for intersection against the geometry itself, without having to worry about projection or modelview matrices. Here's an example:
bool testResult;
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
GLKVector3 nearPt = GLKMathUnproject(GLKVector3Make(tapLoc.x, tapLoc.y, 0.0), modelViewMatrix, projectionMatrix, &viewport[0], &testResult);
GLKVector3 farPt = GLKMathUnproject(GLKVector3Make(tapLoc.x, tapLoc.y, 1.0), modelViewMatrix, projectionMatrix, &viewport[0], &testResult);
farPt = GLKVector3Subtract(farPt, nearPt);
//now you can test if the farPt ray intersects the geometry of interest, perhaps
//using a method like the one described here http://www.cs.virginia.edu/~gfx/Courses/2003/ImageSynthesis/papers/Acceleration/Fast%20MinimumStorage%20RayTriangle%20Intersection.pdf
In your case projectionMatrix is probably the identity since you are working in two dimensions, and modelViewMatrix is whatever scales, translates, rotates, shears, etc. you've applied to your object.
Also, in case you were unaware, what you are asking about is often referred to as "picking", and if you search Google for "OpenGL picking" you may find better information on the subject than you would with just "converting coordinates".
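If your objects can be approximated with axis-aligned bounding boxes, the intersection test itself can stay simple. A minimal slab-test sketch, where the box min/max values are placeholders and nearPt/farPt are the values computed above (farPt already holds far minus near):
GLKVector3 rayOrigin = nearPt;
GLKVector3 rayDir = GLKVector3Normalize(farPt);           // direction of the picking ray
GLKVector3 boxMin = GLKVector3Make(-1.0f, -1.0f, -1.0f);  // placeholder object bounds
GLKVector3 boxMax = GLKVector3Make( 1.0f,  1.0f,  1.0f);
float tMin = -FLT_MAX, tMax = FLT_MAX;
for (int i = 0; i < 3; i++) {                             // slab test on each axis
    float t1 = (boxMin.v[i] - rayOrigin.v[i]) / rayDir.v[i];
    float t2 = (boxMax.v[i] - rayOrigin.v[i]) / rayDir.v[i];
    tMin = MAX(tMin, MIN(t1, t2));
    tMax = MIN(tMax, MAX(t1, t2));
}
BOOL hit = (tMax >= MAX(tMin, 0.0f));                     // YES if the ray hits the box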
I'm using GLKit along with PowerVR library for my opengl-es 2.0 3D app. The 3D scene loads with several meshes, which simulate a garage environment. I have a car in the center of the garage. I am trying to add touch handling to the app, where the user can rotate the room around (e.g., to see all 4 walls surrounding the car). I also want to allow a rotation on the x axis, though limited to a small range. Basically they can see from a little bit of the top of the car to just above the floor level.
I am able to rotate on the Y OR on the X, but not both. As soon as I rotate on both axes, the car is thrown off-axis. The car isn't level with the camera anymore. I wish I could explain this better, but hopefully you guys will understand.
Here is my touches implementation:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    CGPoint lastLoc = [touch previousLocationInView:self.view];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);
    float rotX = -1 * GLKMathDegreesToRadians(diff.x / 4.0);
    float rotY = GLKMathDegreesToRadians(diff.y / 5.0);
    PVRTVec3 xAxis = PVRTVec3(1, 0, 0);
    PVRTVec3 yAxis = PVRTVec3(0, 1, 0);
    PVRTMat4 yRotMatrix, xRotMatrix;
    // create rotation matrices with angle
    PVRTMatrixRotationXF(yRotMatrix, rotY);
    PVRTMatrixRotationYF(xRotMatrix, -rotX);
    _rotationY = _rotationY * yRotMatrix;
    _rotationX = _rotationX * xRotMatrix;
}
Here's my update method:
- (void)update {
    // Use the loaded effect
    m_pEffect->Activate();
    PVRTVec3 vFrom, vTo, vUp;
    VERTTYPE fFOV;
    vUp.x = 0.0f;
    vUp.y = 1.0f;
    vUp.z = 0.0f;
    // We can get the camera position, target and field of view (fov) with GetCameraPos()
    fFOV = m_Scene.GetCameraPos(vFrom, vTo, 0);
    /*
        We can build the world view matrix from the camera position, target and an up vector.
        For this we use PVRTMat4LookAtRH().
    */
    m_mView = PVRTMat4::LookAtRH(vFrom, vTo, vUp);
    // rotate the camera based on the user's swipe in the X direction (THIS WORKS)
    m_mView = m_mView * _rotationX;
    // Calculates the projection matrix
    bool bRotate = false;
    m_mProjection = PVRTMat4::PerspectiveFovRH(fFOV, (float)1024.0/768.0, CAM_NEAR, CAM_FAR, PVRTMat4::OGL, bRotate);
}
I've tried multiplying the new X rotation matrix by the current scene rotation first, and then multiplying by the new Y rotation matrix second. I've tried the reverse of that, thinking the order of multiplication was my problem. That didn't help. Then I tried adding the new X and Y rotation matrices together before multiplying into the current rotation, but that didn't work either. I feel that I'm close, but at this point I'm just out of ideas.
Can you guys help? Thanks. -Valerie
Update: In an effort to solve this, I'm trying to simplify it a little. I've updated the above code, removing any limit in the range of the Y rotation. Basically I calculate the X and Y rotation based on the user swipe on the screen.
If I understand this correctly, I think I want to rotate the View matrix (camera/eye) with the calculation for the _rotationX.
I think I need to use the World matrix (origin 0,0,0) for the _rotationY calculation. I'll try and get some images of exactly what I'm talking about.
Wahoo, got this working! I rotated the view matrix (created by the LookAt method) with the X rotation matrix, and rotated the model view matrix with the Y rotation matrix.
Here's the modified Update method:
- (void)update {
    PVRTVec3 vFrom, vTo, vUp;
    VERTTYPE fFOV;
    // We can get the camera position, target and field of view (fov) with GetCameraPos()
    fFOV = m_Scene.GetCameraPos(vFrom, vTo, 0);
    /*
        We can build the world view matrix from the camera position, target and an up vector.
        For this we use PVRTMat4LookAtRH().
    */
    m_mView = PVRTMat4::LookAtRH(vFrom, vTo, PVRTVec3(0.0f, 1.0f, 0.0f));
    // rotate on the X axis (finger swipe Y direction)
    m_mView = m_mView * _rotationY;
    // Calculates the projection matrix
    m_mProjection = PVRTMat4::PerspectiveFovRH(fFOV, (float)1024.0/768.0, CAM_NEAR, CAM_FAR, PVRTMat4::OGL, false);
}
Here's the modified touch moved method:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    CGPoint lastLoc = [touch previousLocationInView:self.view];
    CGPoint diff = CGPointMake(lastLoc.x - location.x, lastLoc.y - location.y);
    float rotX = -1 * GLKMathDegreesToRadians(diff.x / 2.5);
    float rotY = GLKMathDegreesToRadians(diff.y / 2.5);
    PVRTMat4 rotMatrixX, rotMatrixY;
    // create rotation matrices with angle
    PVRTMatrixRotationYF(rotMatrixX, -rotX);
    PVRTMatrixRotationXF(rotMatrixY, rotY);
    _rotationX = _rotationX * rotMatrixX;
    _rotationY = _rotationY * rotMatrixY;
}