Does GLKit or OpenGL have any functions for touching an object? - ios

I am looking for a way to touch my OpenGL object, or to get an event when I touch it.
Does GLKit or OpenGL ES provide any functions for this?
Or do I have to calculate the position of my object and compare it with the coordinates of my touch?

There is no built-in one, but here is a ported version of the gluProject function that will transform a point on your object into screen coordinates, so you can check whether your touch is near that point:
GLKVector3 gluProject(GLKVector3 position,
                      GLKMatrix4 modelMatrix,
                      GLKMatrix4 projMatrix,
                      CGRect viewport)
{
    GLKVector4 in;
    GLKVector4 out;
    in = GLKVector4Make(position.x, position.y, position.z, 1.0);
    out = GLKMatrix4MultiplyVector4(modelMatrix, in);
    in = GLKMatrix4MultiplyVector4(projMatrix, out);
    if (in.w == 0.0) NSLog(@"W = 0 in project function\n");
    /* Perspective divide */
    in.x /= in.w;
    in.y /= in.w;
    in.z /= in.w;
    /* Map x, y and z to range 0-1 */
    in.x = in.x * 0.5 + 0.5;
    in.y = in.y * 0.5 + 0.5;
    in.z = in.z * 0.5 + 0.5;
    /* Map x,y to viewport */
    in.x = in.x * (viewport.size.width) + viewport.origin.x;
    in.y = in.y * (viewport.size.height) + viewport.origin.y;
    return GLKVector3Make(in.x, in.y, in.z);
}
- (GLKVector2)getScreenCoordOfPoint {
    GLKVector3 out = gluProject(self.point, modelMatrix, projMatrix, view.frame);
    // flip y because UIKit's origin is top-left while OpenGL's is bottom-left
    GLKVector2 point = GLKVector2Make(out.x, view.frame.size.height - out.y);
    return point;
}
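From there, hit testing is just a 2D distance check. A minimal sketch, assuming a standard touchesEnded: handler and an arbitrary 20-point pick radius (both illustrative, not part of the original answer):
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint tap = [[touches anyObject] locationInView:view];
    GLKVector2 screenPoint = [self getScreenCoordOfPoint];
    float dx = screenPoint.x - tap.x;
    float dy = screenPoint.y - tap.y;
    // hypothetical 20 pt pick radius; tune to your object's on-screen size
    if (sqrtf(dx * dx + dy * dy) < 20.0f) {
        NSLog(@"object touched");
    }
}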

Related

HLSL: Which DDX DDY are expected for TextureCube.SampleGrad()

I am wondering which DDX and DDY values the SampleGrad() function expects for a TextureCube object.
I know that for 2D textures it's the change in UV coordinates, so I thought it would be the change in direction in this case. However, this does not seem to be the case.
I get different results if I try to use the Sample function vs. SampleGrad:
Sample:
// calculate reflected ray
float3 reflRay = reflect(-viewDir, normal);
// reflection map lookup
return reflectionMap.Sample(linearSampler, reflRay);
SampleGrad:
// calculate reflected ray
float3 reflRay = reflect(-viewDir, normal);
// reflection map lookup
float3 dxr = ddx(reflRay);
float3 dyr = ddy(reflRay);
return reflectionMap.SampleGrad(linearSampler, reflRay, dxr, dyr);
I still don't know which values for DDX and DDY are required, but I found an acceptable workaround that computes the level of detail for my gradients. Unfortunately, the quality of this solution is not as good as a real Sample call with anisotropic filtering.
In case anyone needs it:
The computation is described in: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#LODCalculation
My HLSL implementation:
// calculate reflected ray
float3 reflRay = reflect(-viewDir, normal);
// reflection map lookup
float3 dxr = ddx(reflRay);
float3 dyr = ddy(reflRay);
// cubemap size for lod computation
float reflWidth, reflHeight;
reflectionMap.GetDimensions(reflWidth, reflHeight);
// calculate lod based on raydiffs
float lod = calcLod(getCubeDiff(reflRay, dxr).xy * reflWidth, getCubeDiff(reflRay, dyr).xy * reflHeight);
return reflectionMap.SampleLevel(linearSampler, reflRay, lod).rgb;
Helper functions:
float pow2(float x) {
    return x * x;
}
// calculates texture coordinates [-1, 1] for the view direction (xy values must be divided by axisMajorValue for proper [-1, 1] range).
// z coordinate is the faceId
float3 getCubeCoord(float3 viewDir, out float axisMajorValue)
{
    // according to dx spec: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#PointSampling
    // Choose the largest magnitude component of the input vector. Call the magnitude of this value AxisMajor. In the case of a tie, the following precedence should occur: Z, Y, X.
    int axisMajor = 0;
    int axisFlip = 0;
    axisMajorValue = 0.0;
    [unroll] for (int i = 0; i < 3; ++i)
    {
        if (abs(viewDir[i]) >= axisMajorValue)
        {
            axisMajor = i;
            axisFlip = viewDir[i] < 0.0f ? 1 : 0;
            axisMajorValue = abs(viewDir[i]);
        }
    }
    int faceId = axisMajor * 2 + axisFlip;
    // Select and mirror the minor axes as defined by the TextureCube coordinate space. Call this new 2d coordinate Position.
    int axisMinor1 = axisMajor == 0 ? 2 : 0; // first coord is x or z
    int axisMinor2 = 3 - axisMajor - axisMinor1;
    // Project the coordinate onto the cube by dividing the components Position by AxisMajor.
    //float u = viewDir[axisMinor1] / axisMajorValue;
    //float v = -viewDir[axisMinor2] / axisMajorValue;
    // don't project for getCubeDiff function!
    float u = viewDir[axisMinor1];
    float v = -viewDir[axisMinor2];
    switch (faceId)
    {
    case 0:
    case 5:
        u *= -1.0f;
        break;
    case 2:
        v *= -1.0f;
        break;
    }
    return float3(u, v, float(faceId));
}
float3 getCubeDiff(float3 ray, float3 diff)
{
    // from: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#LODCalculation
    // Using TC, determine which component is of the largest magnitude, as when calculating the texel location. If any of the components are equivalent, precedence is as follows: Z, Y, X. The absolute value of this will be referred to as AxisMajor.
    // select and mirror the minor axes of TC as defined by the TextureCube coordinate space to generate TC'.uv
    float axisMajor;
    float3 tuv = getCubeCoord(ray, axisMajor);
    // select and mirror the minor axes of the partial derivative vectors as defined by the TextureCube coordinate space, generating 2 new partial derivative vectors dX'.uv & dY'.uv.
    float derivateMajor;
    float3 duv = getCubeCoord(diff, derivateMajor);
    // Calculate 2 new dX and dY vectors for future calculations as follows:
    // dX.uv = (AxisMajor*dX'.uv - TC'.uv*DerivativeMajorX)/(AxisMajor*AxisMajor)
    float3 res;
    res.z = 0.0;
    res.xy = (axisMajor * duv.xy - tuv.xy * derivateMajor) / (axisMajor * axisMajor);
    return res * 0.5;
}
// dx, dy in pixel coordinates
float calcLod(float2 dX, float2 dY)
{
    // from: https://microsoft.github.io/DirectX-Specs/d3d/archive/D3D11_3_FunctionalSpec.htm#LODCalculation
    float A = pow2(dX.y) + pow2(dY.y);
    float B = -2.0 * (dX.x * dX.y + dY.x * dY.y);
    float C = pow2(dX.x) + pow2(dY.x);
    float F = pow2(dX.x * dY.y - dY.x * dX.y);
    float p = A - C;
    float q = A + C;
    float t = sqrt(pow2(p) + pow2(B));
    float lengthX = sqrt(abs(F * (t + p) / (t * (q + t))) + abs(F * (t - p) / (t * (q + t))));
    float lengthY = sqrt(abs(F * (t - p) / (t * (q - t))) + abs(F * (t + p) / (t * (q - t))));
    return log2(max(lengthX, lengthY));
}

OpenCV + OpenGL Using solvePnP camera pose - object is offset from detected marker

I have a problem in my iOS application where I attempt to obtain a view matrix using solvePnP and render a 3D cube using modern OpenGL. While my code attempts to render the 3D cube directly on top of the detected marker, it renders with a certain offset from the marker (see video for example):
https://www.youtube.com/watch?v=HhP5Qr3YyGI&feature=youtu.be
(On the bottom right of the image you can see an OpenCV render of the homography around the tracked marker. The rest of the screen is an OpenGL render of the camera input frame and a 3D cube at location (0,0,0).)
The cube rotates and translates correctly whenever I move the marker, though it is very telling that there is some difference in the scale of the translations (i.e., if I move my marker 5cm in the real world, it hardly moves by 1cm on screen).
These are what I believe to be the relevant parts of the code where the error could come from.
Extracting the view matrix from the homography:
AVCaptureDevice *deviceInput = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceFormat *format = deviceInput.activeFormat;
CMFormatDescriptionRef fDesc = format.formatDescription;
CGSize dim = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true);
const float cx = float(dim.width) / 2.0;
const float cy = float(dim.height) / 2.0;
const float HFOV = format.videoFieldOfView;
const float VFOV = ((HFOV)/cx)*cy;
const float fx = abs(float(dim.width) / (2 * tan(HFOV / 180 * float(M_PI) / 2)));
const float fy = abs(float(dim.height) / (2 * tan(VFOV / 180 * float(M_PI) / 2)));
Mat camIntrinsic = Mat::zeros(3, 3, CV_64F);
camIntrinsic.at<double>(0, 0) = fx;
camIntrinsic.at<double>(0, 2) = cx;
camIntrinsic.at<double>(1, 1) = fy;
camIntrinsic.at<double>(1, 2) = cy;
camIntrinsic.at<double>(2, 2) = 1.0;
std::vector<cv::Point3f> object3dPoints;
object3dPoints.push_back(cv::Point3f(-0.5f,-0.5f,0));
object3dPoints.push_back(cv::Point3f(+0.5f,-0.5f,0));
object3dPoints.push_back(cv::Point3f(+0.5f,+0.5f,0));
object3dPoints.push_back(cv::Point3f(-0.5f,+0.5f,0));
cv::Mat raux,taux;
cv::Mat Rvec, Tvec;
cv::solvePnP(object3dPoints, mNewImageBounds, camIntrinsic, Mat(),raux,taux); //mNewImageBounds are the 4 corner of the homography detected by perspectiveTransform (the green outline seen in the image)
raux.convertTo(Rvec,CV_32F);
taux.convertTo(Tvec, CV_64F);
Mat Rot(3,3,CV_32FC1);
Rodrigues(Rvec, Rot);
// [R | t] matrix
Mat_<double> para = Mat_<double>::eye(4,4);
Rot.convertTo(para(cv::Rect(0,0,3,3)),CV_64F);
Tvec.copyTo(para(cv::Rect(3,0,1,3)));
Mat cvToGl = Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) = 1.0f;
cvToGl.at<double>(1, 1) = -1.0f; // Invert the y axis
cvToGl.at<double>(2, 2) = -1.0f; // invert the z axis
cvToGl.at<double>(3, 3) = 1.0f;
para = cvToGl * para;
Mat_<double> modelview_matrix;
Mat(para.t()).copyTo(modelview_matrix); // transpose to col-major for OpenGL
glm::mat4 openGLViewMatrix;
for (int col = 0; col < modelview_matrix.cols; col++)
{
    for (int row = 0; row < modelview_matrix.rows; row++)
    {
        openGLViewMatrix[col][row] = modelview_matrix.at<double>(col, row);
    }
}
I made sure the camera intrinsic matrix contains correct values, and I believe the portion which converts the OpenCV Mat to an OpenGL view matrix to be correct, as the cube translates and rotates in the right directions.
Once the view matrix is calculated, I use it to draw the cube as follows:
_projectionMatrix = glm::perspective<float>(radians(60.0f), fabs(view.bounds.size.width / view.bounds.size.height), 0.1f, 100.0f);
_cube_ModelMatrix = glm::translate(glm::vec3(0,0,0));
const mat4 MVP = _projectionMatrix * openGLViewMatrix * _cube_ModelMatrix;
glUniformMatrix4fv(glGetUniformLocation(_cube_program, "ModelMatrix"), 1, GL_FALSE, value_ptr(MVP));
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
Is anyone able to spot my error?
You should create the perspective matrix as explained here: http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl
Here is a quick version:
const float fx = intrinsicParams(0, 0); // Focal length in x axis
const float fy = intrinsicParams(1, 1); // Focal length in y axis
const float cx = intrinsicParams(0, 2); // Principal point x
const float cy = intrinsicParams(1, 2); // Principal point y
projectionMatrix(0, 0) = 2.0f * fx;
projectionMatrix(0, 1) = 0.0f;
projectionMatrix(0, 2) = 0.0f;
projectionMatrix(0, 3) = 0.0f;
projectionMatrix(1, 0) = 0.0f;
projectionMatrix(1, 1) = 2.0f * fy;
projectionMatrix(1, 2) = 0.0f;
projectionMatrix(1, 3) = 0.0f;
projectionMatrix(2, 0) = 2.0f * cx - 1.0f;
projectionMatrix(2, 1) = 2.0f * cy - 1.0f;
projectionMatrix(2, 2) = -(far + near) / (far - near);
projectionMatrix(2, 3) = -1.0f;
projectionMatrix(3, 0) = 0.0f;
projectionMatrix(3, 1) = 0.0f;
projectionMatrix(3, 2) = -2.0f * far * near / (far - near);
projectionMatrix(3, 3) = 0.0f;
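If you are using GLKit rather than glm, a sketch of the same layout might look like the following; note that GLKMatrix4Make takes its arguments in column-major order, and this assumes fx, fy, cx, cy have already been normalized by the image width and height (which the 2*cx - 1 terms above imply):
GLKMatrix4 projectionFromIntrinsics(float fx, float fy, float cx, float cy,
                                    float near, float far)
{
    // columns are listed left to right, matching the glm-style code above
    return GLKMatrix4Make(2.0f * fx,        0.0f,             0.0f,                               0.0f,
                          0.0f,             2.0f * fy,        0.0f,                               0.0f,
                          2.0f * cx - 1.0f, 2.0f * cy - 1.0f, -(far + near) / (far - near),      -1.0f,
                          0.0f,             0.0f,             -2.0f * far * near / (far - near),  0.0f);
}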
For more information about the intrinsic matrix: http://ksimek.github.io/2013/08/13/intrinsic

converting isometric tile map coordinates to screen coordinates

I'm trying to convert isometric tile coordinates to screen coordinates.
I seem to have a problem especially with the y coordinate; the x part looks like it works just fine. Here is what I got so far:
// calculate screen coordinates from tile coordinates
- (CGPoint)positionForTileCoord:(CGPoint)pos {
    float halfMapWidth = _tileMap.mapSize.width * 0.5;
    float mapHeight = _tileMap.mapSize.height;
    float tileWidth = _tileMap.tileSize.width;
    float tileHeight = _tileMap.tileSize.height;
    int x = halfMapWidth * tileWidth + tileWidth * pos.x * 0.5 - tileWidth * pos.y * 0.5;
    int y = ............
    return ccp(x, y);
}
My player is added as a child of the tile map itself, and the map is added to the layer at screenSize.x/2, screenSize.y/2 with an anchor point of 0.5.
I have done the same thing successfully with an orthogonal map but seem to struggle with the isometric one.
Thank you
Really, it looks like this:
// calculate screen coordinates from tile coordinates
- (CGPoint)positionForTileCoord:(CGPoint)pos {
    float halfMapWidth = _tileMap.mapSize.width * 0.5;
    float mapHeight = _tileMap.mapSize.height;
    float tileWidth = _tileMap.tileSize.width;
    float tileHeight = _tileMap.tileSize.height;
    int x = halfMapWidth * tileWidth + tileWidth * pos.x * 0.5 - tileWidth * pos.y * 0.5;
    int y = (pos.y + (mapHeight * tileWidth / 2) - (tileHeight / 2)) - ((pos.y + pos.x) * tileHeight / 2) + tileHeight;
    return ccp(x, y);
}
// calculating the tile coordinates from a screen location
- (CGPoint)tilePosFromLocation:(CGPoint)location
{
    CGPoint pos = location;
    float halfMapWidth = _tileMap.mapSize.width * 0.5;
    float mapHeight = _tileMap.mapSize.height;
    float tileWidth = _tileMap.tileSize.width;
    float tileHeight = _tileMap.tileSize.height;
    CGPoint tilePosDiv = CGPointMake(pos.x / tileWidth, pos.y / tileHeight);
    float inverseTileY = mapHeight - tilePosDiv.y;
    // cast to int to make sure that the result is in whole numbers
    float posX = (int)(inverseTileY + tilePosDiv.x - halfMapWidth);
    float posY = (int)(inverseTileY - tilePosDiv.x + halfMapWidth);
    return CGPointMake(posX, posY);
}
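A quick way to sanity-check the two conversions (not from the original answer) is a round trip with a hypothetical tile coordinate; whether it round-trips exactly depends on your map and tile sizes:
CGPoint tile = ccp(3, 7);
CGPoint screen = [self positionForTileCoord:tile];
CGPoint roundTrip = [self tilePosFromLocation:screen];
NSLog(@"tile (%.0f, %.0f) -> screen (%.1f, %.1f) -> tile (%.0f, %.0f)",
      tile.x, tile.y, screen.x, screen.y, roundTrip.x, roundTrip.y);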

Screen space raycasting

I am using OpenGL ES 2.0 and the latest iOS.
I have a 3D model, and I want the user to be able to select different parts of it by tapping on the screen. I have found this tutorial on converting a pixel-space screen coordinate to a world-space ray, and have implemented a ray-AABB intersection test to determine the intersected portion of the model.
I get some hits on the model, but at seemingly random sections of it. So I need to debug this feature, but I don't really know where to start.
I can't exactly draw a line representing the ray (since it is coming out of the camera it will appear as a point), so I can see a couple of ways of debugging this:
Check the bounding boxes of the model sections. Is there an easy way with OpenGL ES to draw a bounding box given a min and max point? (See the sketch at the end of this section.)
Draw some 3D object along the path of the ray. This seems more complicated.
Actually debug the raycast and intersection code. This seems like the hardest to accomplish since the algorithms are fairly well known (I took the intersection test straight out of my Real-Time Collision Detection book).
If anyone can help, or wants me to post some code, I could really use it.
Here is my code for converting to world space:
- (IBAction)tappedBody:(UITapGestureRecognizer *)sender
{
    if ( !editMode )
    {
        return;
    }
    CGPoint tapPoint = [sender locationOfTouch:0 inView:self.view];
    const float tanFOV = tanf(GLKMathDegreesToRadians(65.0f * 0.5f));
    const float width = self.view.frame.size.width,
                height = self.view.frame.size.height,
                aspect = width / height,
                w_2 = width * 0.5,
                h_2 = height * 0.5;
    CGPoint screenPoint;
    screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) / aspect;
    screenPoint.y = tanFOV * ( 1.0 - tapPoint.y / h_2 );
    GLKVector3 nearPoint = GLKVector3Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, NEAR_PLANE);
    GLKVector3 farPoint = GLKVector3Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, FAR_PLANE);
    GLKVector3 nearWorldPoint = GLKMatrix4MultiplyVector3(_invViewMatrix, nearPoint);
    GLKVector3 farWorldPoint = GLKMatrix4MultiplyVector3(_invViewMatrix, farPoint);
    GLKVector3 worldRay = GLKVector3Subtract(farWorldPoint, nearWorldPoint);
    NSLog(@"Model matrix: %@", NSStringFromGLKMatrix4(_modelMatrix));
    worldRay = GLKVector3Normalize(worldRay);
    [male intersectWithRay:worldRay fromStartPoint:nearWorldPoint];
    for ( int i = 0; i < 3; ++i )
    {
        touchPoint[i] = nearWorldPoint.v[i];
    }
}
And here's how I get the matrices:
- (void)update
{
    // _rotation = 0;
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, NEAR_PLANE, FAR_PLANE);
    self.effect.transform.projectionMatrix = projectionMatrix;
    GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -5.0f);
    // Compute the model view matrix for the object rendered with ES2
    _viewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
    _modelMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
    _modelViewMatrix = GLKMatrix4Rotate(_viewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
    _modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, _modelViewMatrix);
    _invViewMatrix = GLKMatrix4Invert(_viewMatrix, NULL);
    _invMVMatrix = GLKMatrix4Invert(_modelViewMatrix, NULL);
    _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(_modelViewMatrix), NULL);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, _modelViewMatrix);
    male.modelTransform = _modelMatrix;
    if ( !editMode )
    {
        _rotation += self.timeSinceLastUpdate * 0.5f;
    }
}
I can't exactly draw a line representing the ray (since it is coming
out of the camera it will appear as a point)
Don't discard this so soon. Isn't that exactly what you are trying to test? That would be the case if you are unprojecting things right; but if you have a bug, it won't be.
I'd go with this first: if you see a point just under your finger, the conversion is right, and you can start investigating the other options you pointed out, which are more complex.
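As a minimal sketch of that test (not from the original answer), you could render the unprojected near point as a single GL_POINTS vertex; the _debugProgram handle and its "MVP"/"position" names are hypothetical, and the vertex shader must set gl_PointSize:
- (void)drawDebugPoint:(GLKVector3)p withMVP:(GLKMatrix4)mvp
{
    glUseProgram(_debugProgram);
    glUniformMatrix4fv(glGetUniformLocation(_debugProgram, "MVP"), 1, GL_FALSE, mvp.m);
    GLint loc = glGetAttribLocation(_debugProgram, "position");
    glEnableVertexAttribArray(loc);
    // one client-side vertex; ES 2.0 allows unbuffered attribute arrays
    glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, p.v);
    glDrawArrays(GL_POINTS, 0, 1);
    glDisableVertexAttribArray(loc);
}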
I'm not sure why this fixed the problem I was having, but changing
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) / aspect;
to
screenPoint.x = tanFOV * ( tapPoint.x / w_2 - 1 ) * aspect;
and
GLKVector4 nearPoint = GLKVector4Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, NEAR_PLANE, 1 );
GLKVector4 farPoint = GLKVector4Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, FAR_PLANE, 1 );
to
GLKVector4 nearPoint = GLKVector4Make(screenPoint.x * NEAR_PLANE, screenPoint.y * NEAR_PLANE, -NEAR_PLANE, 1 );
GLKVector4 farPoint = GLKVector4Make(screenPoint.x * FAR_PLANE, screenPoint.y * FAR_PLANE, -FAR_PLANE, 1 );
seems to have fixed the raycasting issue. I'm still not sure why my view matrix seems to indicate that the camera is looking down the positive z-axis while my objects are translated along the negative z-axis.
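As for the bounding-box option raised in the question, a hedged ES 2.0 sketch (again assuming the hypothetical _debugProgram with "MVP" and "position") that draws an AABB from its min and max points as 12 GL_LINES edges:
- (void)drawBoxFromMin:(GLKVector3)mn toMax:(GLKVector3)mx withMVP:(GLKMatrix4)mvp
{
    // 12 edges * 2 vertices, listed as line segments
    const float v[] = {
        mn.x,mn.y,mn.z, mx.x,mn.y,mn.z,  mx.x,mn.y,mn.z, mx.x,mx.y,mn.z,
        mx.x,mx.y,mn.z, mn.x,mx.y,mn.z,  mn.x,mx.y,mn.z, mn.x,mn.y,mn.z,
        mn.x,mn.y,mx.z, mx.x,mn.y,mx.z,  mx.x,mn.y,mx.z, mx.x,mx.y,mx.z,
        mx.x,mx.y,mx.z, mn.x,mx.y,mx.z,  mn.x,mx.y,mx.z, mn.x,mn.y,mx.z,
        mn.x,mn.y,mn.z, mn.x,mn.y,mx.z,  mx.x,mn.y,mn.z, mx.x,mn.y,mx.z,
        mx.x,mx.y,mn.z, mx.x,mx.y,mx.z,  mn.x,mx.y,mn.z, mn.x,mx.y,mx.z
    };
    glUseProgram(_debugProgram);
    glUniformMatrix4fv(glGetUniformLocation(_debugProgram, "MVP"), 1, GL_FALSE, mvp.m);
    GLint loc = glGetAttribLocation(_debugProgram, "position");
    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, v);
    glDrawArrays(GL_LINES, 0, 24);
    glDisableVertexAttribArray(loc);
}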

Different x and y speed (acceleration) in cocos2d?

I want to create a visible object whose trajectory, built from standard actions (CCMoveBy, etc.), is similar to:
x = sin(y)
My code:
CCMoveBy *moveAction1 = [CCMoveBy actionWithDuration:1.5 position:ccp(300, 0)];
CCEaseInOut *easeInOutAction1 = [CCEaseInOut actionWithAction:moveAction1 rate:2];
CCMoveBy *moveAction2 = [CCMoveBy actionWithDuration:1.5 position:ccp(-300, 0)];
CCEaseInOut *easeInOutAction2 = [CCEaseInOut actionWithAction:moveAction2 rate:2];
CCMoveBy *moveAction3 = [CCMoveBy actionWithDuration:1.5 position:ccp(0, -32)];
CCSpawn *moveActionRight = [CCSpawn actionOne:easeInOutAction1 two:moveAction3];
CCSpawn *moveActionLeft = [CCSpawn actionOne:easeInOutAction2 two:moveAction3];
CCSequence *sequenceOfActions = [CCSequence actionOne:moveActionRight two:moveActionLeft];
CCRepeatForever *finalMoveAction = [CCRepeatForever actionWithAction:sequenceOfActions];
[enemy runAction:finalMoveAction];
This code only moves the object down. The problem is that the object has different x and y accelerations, and I don't know how to combine them.
UPDATED
- (void)tick:(ccTime)dt
{
    CGPoint pos = self.position;
    pos.y -= 50 * dt;
    if (pos.y < activationDistance) {
        pos.x = 240 + sin(angle) * 140;
        angle += dt * 360 * 0.007;
        if (angle >= 360) {
            angle = ((int)angle) % 360;
        }
    }
    self.position = pos;
}
This is my current solution. I can increase activationDistance to adjust the object's trajectory, but I want to set up an initial value for the angle variable.
I use plain numbers instead of variables because they are only used inside this function.
SOLVED
To change the initial angle:
angle = point.x < 240 ? -asin((240 - point.x) / 140) : asin((point.x - 240) / 140);
The main problem was that my tiled map has its own coordinates and only covers a 320x320 part of the screen.
I think it will be easier for you to just do it in your frame update method (the one I assume you schedule for updating your objects). So why not just do:
- (void)tick:(ccTime)dt {
    CGPoint pos = myObject.position;
    pos.x = <desired x> + sin(angle);
    pos.y = pos.y - y_acceleration * dt;
    angle += dt * 360 * x_acceleration;
    if (angle >= 360)
        angle = ((int)angle) % 360;
    myObject.position = pos;
}
And you can apply the same approach to the y axis of the object.
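For completeness, the tick: method has to be scheduled once so cocos2d calls it every frame, e.g. in the node's init:
[self schedule:@selector(tick:)];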
