I would like to use CMAttitude to find the vector normal to the glass of the iPad/iPhone's screen, relative to the ground.
Notice that this is different from orientation, in that I don't care how the device is rotated about the z-axis. So if I were holding the iPad above my head facing down, it would read (0, -1, 0), and even as I spun it around above my head (like a helicopter), it would continue to read (0, -1, 0).
I feel like this might be pretty easy, but as I am new to quaternions and don't fully understand the reference frame options for device motion, it's been evading me all day.
In your case, the rotation of the device is equivalent to the rotation of the device's normal (rotation around the normal itself is simply ignored, as you specified).
CMAttitude, which you can get via CMMotionManager.deviceMotion, provides the rotation relative to a reference frame. Its quaternion, rotation matrix, and Euler angle properties are just different representations of that same rotation.
The reference frame can be specified when you start device motion updates, using CMMotionManager's startDeviceMotionUpdatesUsingReferenceFrame: method. Before iOS 5 you had to use multiplyByInverseOfAttitude: instead to express an attitude relative to a chosen reference attitude.
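For example, a minimal Swift sketch (any frame with Z vertical is enough here, since the question only cares about the normal relative to the ground):
import CoreMotion

let manager = CMMotionManager()
// Any reference frame with Z vertical will do for a ground-relative normal.
manager.startDeviceMotionUpdates(using: .xArbitraryZVertical)
// ... later, read manager.deviceMotion?.attitude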
Putting this together, you just have to multiply the quaternion (in the right way) by the normal vector of the device as it lies face up on the table. According to Rotating vectors, a rotation of a vector by a quaternion is done by:
n = q * e * q', where q is the quaternion delivered by CMAttitude [w, (x, y, z)], q' is its conjugate [w, (-x, -y, -z)], and e is the quaternion representation of the face-up normal [0, (0, 0, 1)]. Unfortunately Apple's CMQuaternion is a plain struct, so you need a small helper class.
Quaternion *e = [[Quaternion alloc] initWithValues:0 x:0 y:0 z:1];
CMQuaternion cm = deviceMotion.attitude.quaternion;
Quaternion *quat = [[Quaternion alloc] initWithValues:cm.w x:cm.x y:cm.y z:cm.z];
Quaternion *quatConjugate = [[Quaternion alloc] initWithValues:cm.w x:-cm.x y:-cm.y z:-cm.z];
[quat multiplyWithRight:e];
[quat multiplyWithRight:quatConjugate];
// quat.x, .y, .z contain your normal
Quaternion.h:
@interface Quaternion : NSObject {
double w;
double x;
double y;
double z;
}
@property(readwrite, assign) double w;
@property(readwrite, assign) double x;
@property(readwrite, assign) double y;
@property(readwrite, assign) double z;
- (id) initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2;
- (Quaternion*) multiplyWithRight:(Quaternion*)q;
@end
Quaternion.m:
@implementation Quaternion
@synthesize w, x, y, z; // map the properties onto the ivars declared in the header
- (Quaternion*) multiplyWithRight:(Quaternion*)q {
double newW = w*q.w - x*q.x - y*q.y - z*q.z;
double newX = w*q.x + x*q.w + y*q.z - z*q.y;
double newY = w*q.y + y*q.w + z*q.x - x*q.z;
double newZ = w*q.z + z*q.w + x*q.y - y*q.x;
w = newW;
x = newX;
y = newY;
z = newZ;
// One multiplication won't denormalise the quaternion, but when
// multiplying again and again we should ensure the result stays normalised.
return self;
}
- (id) initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2 {
if ((self = [super init])) {
x = x2; y = y2; z = z2; w = w2;
}
return self;
}
@end
I know quaternions are a bit weird at the beginning, but once you get the idea they are really brilliant. It helped me to imagine a quaternion as a rotation around the vector (x, y, z), with w encoding the angle: w = cos(θ/2), and the vector part scaled by sin(θ/2).
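Apple's simd library makes that axis-angle picture concrete; here is a small Swift sketch (simd_quatd is standard API):
import simd

// A rotation of angle theta about a unit axis becomes the quaternion
// [w, (x, y, z)] = [cos(theta/2), sin(theta/2) * axis].
let axis = simd_double3(0, 0, 1)      // rotate about z
let theta = Double.pi / 2             // 90 degrees
let q = simd_quatd(angle: theta, axis: axis)
// q.real == cos(theta/2), q.imag == sin(theta/2) * axis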
If you need to do more with them, take a look at the cocoamath open source project. The class Quaternion and its extension QuaternionOperations are a good starting point.
For the sake of completeness: yes, you can do it with matrix multiplication as well:
n = M * e
But I would prefer the quaternion way; it saves you all the trigonometric hassle and performs better.
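For illustration, a minimal Swift sketch of the matrix route. It assumes, as the category further down this page does, that attitude.rotationMatrix maps device coordinates into the reference frame; with e = (0, 0, 1), n = M * e is then just the third column of the matrix:
import CoreMotion

// Hypothetical helper: the screen normal in reference-frame coordinates.
// M * (0, 0, 1) picks out the third column of M.
func screenNormal(from motion: CMDeviceMotion) -> (x: Double, y: Double, z: Double) {
    let m = motion.attitude.rotationMatrix
    return (m.m13, m.m23, m.m33)
}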
Thanks to Kay for the starting point on the solution. Here is my implementation for anyone who needs it. I made a couple of small tweaks to Kay's advice for my situation. As a heads-up, I'm using a landscape-only presentation. I have code that updates a variable _isLandscapeLeft to make the necessary adjustment to the direction of the vector.
Quaternion.h
@interface Quaternion : NSObject
@property(readwrite, assign) double w;
@property(readwrite, assign) double x;
@property(readwrite, assign) double y;
@property(readwrite, assign) double z;
- (id) initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2;
- (Quaternion*) multiplyWithRight:(Quaternion*)q;
@end
Quaternion.m
#import "Quaternion.h"
@implementation Quaternion
- (Quaternion*) multiplyWithRight:(Quaternion*)q {
double newW = _w*q.w - _x*q.x - _y*q.y - _z*q.z;
double newX = _w*q.x + _x*q.w + _y*q.z - _z*q.y;
double newY = _w*q.y + _y*q.w + _z*q.x - _x*q.z;
double newZ = _w*q.z + _z*q.w + _x*q.y - _y*q.x;
_w = newW;
_x = newX;
_y = newY;
_z = newZ;
// One multiplication won't denormalise the quaternion, but when
// multiplying again and again we should ensure the result stays normalised.
return self;
}
- (id) initWithValues:(double)w2 x:(double)x2 y:(double)y2 z:(double)z2 {
if ((self = [super init])) {
_x = x2; _y = y2; _z = z2; _w = w2;
}
return self;
}
@end
And my game class that uses the quaternion for shooting:
-(void)fireWeapon{
ProjectileBaseClass *bullet = [[ProjectileBaseClass alloc] init];
bullet.position = SCNVector3Make(0, 1, 0);
[self.rootNode addChildNode:bullet];
Quaternion *e = [[Quaternion alloc] initWithValues:0 x:0 y:0 z:1];
CMQuaternion cm = _currentAttitude.quaternion;
Quaternion *quat = [[Quaternion alloc] initWithValues:cm.w x:cm.x y:cm.y z:cm.z];
Quaternion *quatConjugate = [[Quaternion alloc] initWithValues:cm.w x:-cm.x y:-cm.y z:-cm.z];
quat = [quat multiplyWithRight:e];
quat = [quat multiplyWithRight:quatConjugate];
SCNVector3 directionToShoot;
if (_isLandscapeLeft) {
directionToShoot = SCNVector3Make(quat.y, -quat.x, -quat.z);
} else {
directionToShoot = SCNVector3Make(-quat.y, quat.x, -quat.z);
}
SCNAction *shootBullet = [SCNAction moveBy:directionToShoot duration:.1];
[bullet runAction:[SCNAction repeatActionForever:shootBullet]];
}
Related
Firstly, I am not using 3Js in my Orbits app because I encountered a number of limitations, including, but not limited to, issues with texture resolution and my requirement for complex lighting equations. But I would like to implement something like 3Js' raycaster so I can detect the object the user clicked.
I'm new to WebGL, but an "old hand" in software development, so I'm looking for some hints about where to start.
The approach is as follows:
You render your scene twice: once normally, which is displayed, and a second time, not displayed, with each object drawn in a unique flat colour. Then you use gl.readPixels on the second render at the pointer's position on the first, and decode the colour to identify the object.
Now I have to implement it myself.
Picking spheres
When picking spheres, or objects that are separated (not one inside another), you can use a simple distance-from-ray test to find the closest object very quickly.
Example
The function below returns a function that does the calculation. As you are only interested in the closest object, the distances can remain squared. The distance from the camera is held as a unit distance along the ray.
function distanceFromRay() {
var dSqr, ox, oy, oz, vx, vy, vz;
function distanceSqr(px, py, pz) {
const ax = px - ox, ay = py - oy, az = pz - oz;
const u = (ax * vx + ay * vy + az * vz) / dSqr;
distanceSqr.unit = u;
if (u > 0) { // is past origin
const bx = ox + vx * u - px, by = oy + vy * u - py, bz = oz + vz * u - pz;
return bx * bx + by * by + bz * bz; // dist sqr to closest point on ray
}
return Infinity;
}
distanceSqr.unit = 0;
distanceSqr.setRay = function (x, y, z, xx, yy, zz) {
// ray from origin (x, y, z), extending infinitely along direction (xx, yy, zz)
ox = x; oy = y; oz = z;
vx = xx; vy = yy; vz = zz;
dSqr = vx * vx + vy * vy + vz * vz;
};
return distanceSqr;
}
Usage
There is a one-time setup call:
// setup
const distToRay = distanceFromRay();
At the start of a frame that requires a pick, calculate the pick ray and set it. Also set the min distance from ray and eye.
// at start of frame set pick ray
distToRay.setRay(eye.x, eye.y, eye.z, pointer.ray.x, pointer.ray.y, pointer.ray.z);
var minDist = maxObjRadius * maxObjRadius;
var nearestObj = undefined;
var eyeDist = Infinity;
Then, for each pickable object, get the squared distance by passing the object's center, and compare it against the object's radius, the best distance found so far this frame, and the distance from the eye.
// per object
const dis = distToRay(obj.pos.x, obj.pos.y, obj.pos.z);
if (dis < obj.radius * obj.radius && dis < minDist && distToRay.unit > 0 && distToRay.unit < eyeDist) {
minDist = dis;
eyeDist = distToRay.unit;
nearestObj = obj;
}
At the end of the frame, if nearestObj is not undefined, it holds a reference to the picked object.
// end of frame
if (nearestObj) {
// you have the closest object
}
I'm trying to create a ray-casting camera in DirectX11 using XMVector3Unproject(). From my understanding, I will be passing in the (Vector3) position of the pixel on the near plane and, in a separate call, a corresponding position on the far plane. I would then subtract these vectors to get the direction of the ray. The origin would then be the unprojected coordinate on the near plane. My problem here is calculating the origin of the ray to be passed in.
Example
// assuming screenHeight and screenWidth are the number of pixels.
const uint32_t screenHeight = 768;
const uint32_t screenWidth = 1024;
struct Ray
{
XMFLOAT3 origin;
XMFLOAT3 direction;
};
Ray rays[screenWidth * screenHeight]; // caution: roughly 18 MB here; allocate on the heap in real code
for (uint32_t i = 0; i < screenHeight; ++i)
{
for (uint32_t j = 0; j < screenWidth; ++j)
{
// 1. ***calculate and store the current pixel position on the near plane***
// 2. ***calculate the corresponding point on the far plane***
// 3. ***pass both positions separately into XMVector3Unproject() (2 total calls to the function)***
// 4. ***store the returned vectors' difference into rays[i * screenWidth + j].direction***
// 5. ***store the near plane pixel position's returned vector into rays[i * screenWidth + j].origin***
}
}
Hopefully I'm understanding this correctly. Any help in determining the ray origins, or corrections would be greatly appreciated.
According to the documentation, the XMVector3Unproject function maps a point from screen (viewport) space back into object space, given your projection, view, and model matrices.
To generate your camera rays, you treat your camera as a pinhole (all the light passes through one point, which is your camera at (0, 0, 0)); then you choose your ray directions. Let's say you want to generate W*H camera rays; your loop might look like this:
Vector3 ray_origin = Vector3(0, 0, 0); // camera pinhole, in camera space
for (float x = -1.f; x <= 1.f; x += 2.f / W) {
for (float y = -1.f; y <= 1.f; y += 2.f / H) {
// direction through the current pixel on a virtual plane at z = -1
Vector3 ray_direction = Normalize(Vector3(x, y, -1.f)) - ray_origin;
Vector3 ray_in_model = Unproject(ray_direction, 0.f, 0.f,
width, height, znear, zfar,
proj, view, model);
}
}
You might also want to have a look at this link, which sounds interesting.
Not quite sure how to word this, but I've been using
if (CGRectIntersectsRect(ball.frame, bottom.frame)) {
    [self finish];
}
for my collision detection. It runs the code fine, but the ball sometimes "collides" with the bottom and triggers the code when you can clearly see a gap between the objects. I created the images myself and they have no background around them. I was wondering if there's another way of coding this, or a way to make the code not run until the objects intersect x amount of pixels into one another.
You can implement ball-line collision in many ways. Your solution is in fact rectangle-rectangle collision detection. Here is how I did it in one of my small gaming projects; it gave me the best results and it's simple.
Let's say that a ball has a ballRadius, and location (xBall, yBall). The line is defined with two points (xStart, yStart) and (xEnd, yEnd).
Implementation of a simple collision detection:
float ballRadius = ...;
float x1 = xStart - xBall;
float y1 = yStart - yBall;
float x2 = xEnd - xBall;
float y2 = yEnd - yBall;
float dx = x2 - x1;
float dy = y2 - y1;
float dr = sqrtf(powf(dx, 2) + powf(dy, 2));
float D = x1*y2 - x2*y1;
// the 0.9 factor presumably shrinks the effective radius slightly,
// so hits register just inside the ball's visual edge
float delta = powf(ballRadius*0.9, 2)*powf(dr, 2) - powf(D, 2);
if (delta >= 0)
{
// Collision detected
}
If delta is greater than zero, there are two intersections between the ball (circle) and the line. If delta is equal to zero, there is exactly one intersection: a perfect (tangent) collision. Note that this formula tests the infinite line through the two points; for a finite segment you would additionally check that the closest point lies between the endpoints, as in the sketch below.
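A small Swift sketch of that segment-aware variant (a closest-point test; the names are illustrative, not taken from the code above):
import CoreGraphics

// True if the ball overlaps the segment from start to end.
func ballHitsSegment(center: CGPoint, radius: CGFloat,
                     start: CGPoint, end: CGPoint) -> Bool {
    let dx = end.x - start.x, dy = end.y - start.y
    let lengthSq = dx * dx + dy * dy
    guard lengthSq > 0 else { return false }
    // Projection of the center onto the segment, clamped to [0, 1].
    var t = ((center.x - start.x) * dx + (center.y - start.y) * dy) / lengthSq
    t = max(0, min(1, t))
    // Vector from the center to the closest point on the segment.
    let cx = start.x + t * dx - center.x
    let cy = start.y + t * dy - center.y
    return cx * cx + cy * cy <= radius * radius
}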
I am given two GLKVector3's representing the start and end points of the cylinder. Using these points and the radius, I need to build and render a cylinder. I can build a cylinder with the correct distance between the points, but in a fixed direction (currently always the y (0, 1, 0) up direction). I am not sure what calculations I need to make to get the cylinder on the correct plane between the two points, so that a line would run through the two end points. I am thinking there is some calculation I can apply as I create my vertex data, using the direction vector (or angle), that will create the cylinder pointing in the correct direction. Does anyone have an algorithm, or know of one, that will help?
Are you drawing more than one of these cylinders, or ever drawing it in a different position? If so, using the algorithm from the awesome article linked in the next answer is a not-so-awesome idea. Every time you upload geometry data to the GPU, you incur a performance cost.
A better approach is to calculate the geometry for a single basic cylinder once (say, one with unit radius and height) and stuff that vertex data into a VBO. Then, when you draw, use a model-to-world transformation matrix to scale (independently in radius and length if needed) and rotate the cylinder into place. This way, the only new data sent to the GPU with each draw call is a 4x4 matrix, instead of all the vertex data for whatever polycount of cylinder you're drawing. A sketch of this follows.
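A hedged sketch of that approach using GLKit math, assuming the shared unit cylinder is modelled with its base at the origin and its axis along +Y:
import GLKit

// Map a unit cylinder (radius 1, height 1, axis +Y, base at origin)
// onto the segment from start to end with the given radius.
func cylinderTransform(from start: GLKVector3, to end: GLKVector3,
                       radius: Float) -> GLKMatrix4 {
    let diff = GLKVector3Subtract(end, start)
    let length = GLKVector3Length(diff)
    let dir = GLKVector3Normalize(diff)
    let yAxis = GLKVector3Make(0, 1, 0)
    // Axis and angle that rotate +Y onto the segment direction.
    var axis = GLKVector3CrossProduct(yAxis, dir)
    if GLKVector3Length(axis) < 1e-6 {
        axis = GLKVector3Make(1, 0, 0)  // dir is (anti-)parallel to +Y
    }
    let angle = acos(max(-1, min(1, GLKVector3DotProduct(yAxis, dir))))
    var m = GLKMatrix4MakeTranslation(start.x, start.y, start.z)
    m = GLKMatrix4RotateWithVector3(m, angle, axis)
    return GLKMatrix4Scale(m, radius, length, radius)
}
Draw the shared VBO with this matrix as the model transform; only the matrix changes per cylinder.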
Check out this awesome article; it's dated, but after adapting the algorithm it works like a charm. One tip: OpenGL ES 2.0 only supports triangles, so instead of using GL_QUAD_STRIP as the article does, use GL_TRIANGLE_STRIP; the result is identical. The site also contains a bunch of other useful information regarding OpenGL geometries.
See the code below for the solution. self represents the mesh and contains the vertices, indices, and so on.
- (instancetype)initWithOriginRadius:(CGFloat)originRadius
atOriginPoint:(GLKVector3)originPoint
andEndRadius:(CGFloat)endRadius
atEndPoint:(GLKVector3)endPoint
withPrecision:(NSInteger)precision
andColor:(GLKVector4)color
{
self = [super init];
if (self) {
// axis vector pointing from the end point toward the origin point
GLKVector3 normal = GLKVector3Make(originPoint.x - endPoint.x,
originPoint.y - endPoint.y,
originPoint.z - endPoint.z);
// create two perpendicular vectors - perp and q
GLKVector3 perp = normal;
if (normal.x == 0 && normal.z == 0) {
perp.x += 1;
} else {
perp.y += 1;
}
// cross product
GLKVector3 q = GLKVector3CrossProduct(perp, normal);
perp = GLKVector3CrossProduct(normal, q);
// normalize vectors
perp = GLKVector3Normalize(perp);
q = GLKVector3Normalize(q);
// calculate vertices
CGFloat twoPi = 2 * M_PI;
NSInteger index = 0;
for (NSInteger i = 0; i < precision + 1; i++) {
CGFloat theta = ((CGFloat) i) / precision * twoPi; // go around circle and get points
// normals
normal.x = cosf(theta) * perp.x + sinf(theta) * q.x;
normal.y = cosf(theta) * perp.y + sinf(theta) * q.y;
normal.z = cosf(theta) * perp.z + sinf(theta) * q.z;
AGLKMeshVertex meshVertex;
AGLKMeshVertexDynamic colorVertex;
// top vertex
meshVertex.position.x = endPoint.x + endRadius * normal.x;
meshVertex.position.y = endPoint.y + endRadius * normal.y;
meshVertex.position.z = endPoint.z + endRadius * normal.z;
meshVertex.normal = normal;
meshVertex.originalColor = color;
// append vertex
[self appendVertex:meshVertex];
// append color vertex
colorVertex.colors = color;
[self appendColorVertex:colorVertex];
// append index
[self appendIndex:index++];
// bottom vertex
meshVertex.position.x = originPoint.x + originRadius * normal.x;
meshVertex.position.y = originPoint.y + originRadius * normal.y;
meshVertex.position.z = originPoint.z + originRadius * normal.z;
meshVertex.normal = normal;
meshVertex.originalColor = color;
// append vertex
[self appendVertex:meshVertex];
// append color vertex
[self appendColorVertex:colorVertex];
// append index
[self appendIndex:index++];
}
// draw command
[self appendCommand:GL_TRIANGLE_STRIP firstIndex:0 numberOfIndices:self.numberOfIndices materialName:@""];
}
return self;
}
What is the correct way to use CMAttitude's multiplyByInverseOfAttitude: method?
Assuming an iOS5 device laying flat on a table, after starting CMMotionManager with:
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:
CMAttitudeReferenceFrameXTrueNorthZVertical];
Later, CMDeviceMotion objects are obtained:
CMDeviceMotion *deviceMotion = [motionManager deviceMotion];
I expect that [deviceMotion attitude] reflects the rotation of the device from True North.
By observation, [deviceMotion userAcceleration] reports acceleration in the device reference frame. That is, moving the device side to side (keeping it flat on the table) registers acceleration in the x-axis. Turning the device 90° (still flat) and moving the device side to side still reports x acceleration.
What is the correct way to transform [deviceMotion userAcceleration] to obtain North-South/East-West acceleration rather than left-right/forward-backward?
CMAttitude's multiplyByInverseOfAttitude: seems unnecessary, since a reference frame has already been specified, and it is unclear from the documentation how to apply the attitude to a CMAcceleration.
The question would not have arisen if CMDeviceMotion had an accessor for the userAcceleration in coordinates of the reference frame. So I used a category to add the required method:
In CMDeviceMotion+TransformToReferenceFrame.h:
#import <CoreMotion/CoreMotion.h>
@interface CMDeviceMotion (TransformToReferenceFrame)
-(CMAcceleration)userAccelerationInReferenceFrame;
@end
and in CMDeviceMotion+TransformToReferenceFrame.m:
#import "CMDeviceMotion+TransformToReferenceFrame.h"
@implementation CMDeviceMotion (TransformToReferenceFrame)
-(CMAcceleration)userAccelerationInReferenceFrame
{
CMAcceleration acc = [self userAcceleration];
CMRotationMatrix rot = [self attitude].rotationMatrix;
CMAcceleration accRef;
accRef.x = acc.x*rot.m11 + acc.y*rot.m12 + acc.z*rot.m13;
accRef.y = acc.x*rot.m21 + acc.y*rot.m22 + acc.z*rot.m23;
accRef.z = acc.x*rot.m31 + acc.y*rot.m32 + acc.z*rot.m33;
return accRef;
}
@end
and in Swift 3
extension CMDeviceMotion {
var userAccelerationInReferenceFrame: CMAcceleration {
let acc = self.userAcceleration
let rot = self.attitude.rotationMatrix
var accRef = CMAcceleration()
accRef.x = acc.x*rot.m11 + acc.y*rot.m12 + acc.z*rot.m13;
accRef.y = acc.x*rot.m21 + acc.y*rot.m22 + acc.z*rot.m23;
accRef.z = acc.x*rot.m31 + acc.y*rot.m32 + acc.z*rot.m33;
return accRef;
}
}
Now, code that previously used [deviceMotion userAcceleration] can use [deviceMotion userAccelerationInReferenceFrame] instead.
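A minimal usage sketch of the Swift version (the property comes from the extension above; the update API is standard Core Motion):
import CoreMotion

let manager = CMMotionManager()
manager.deviceMotionUpdateInterval = 1.0 / 60.0
manager.startDeviceMotionUpdates(using: .xTrueNorthZVertical, to: .main) { motion, _ in
    guard let motion = motion else { return }
    let a = motion.userAccelerationInReferenceFrame
    // With this frame the reference x-axis points toward true north,
    // so a.x no longer depends on how the device is turned on the table.
    print(a.x, a.y, a.z)
}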
According to Apple's documentation, CMAttitude refers to the orientation of a body relative to a given frame of reference, while userAcceleration and gravity are expressed in the device's frame. So, to get their values in the reference frame, we should do as @Batti said:
take the attitude rotation matrix at every update;
compute the inverse matrix;
multiply the inverse matrix by the userAcceleration vector.
Here's the Swift version
import CoreMotion
import GLKit
extension CMDeviceMotion {
func userAccelerationInReferenceFrame() -> CMAcceleration {
let origin = userAcceleration
let rotation = attitude.rotationMatrix
let matrix = rotation.inverse()
var result = CMAcceleration()
result.x = origin.x * matrix.m11 + origin.y * matrix.m12 + origin.z * matrix.m13;
result.y = origin.x * matrix.m21 + origin.y * matrix.m22 + origin.z * matrix.m23;
result.z = origin.x * matrix.m31 + origin.y * matrix.m32 + origin.z * matrix.m33;
return result
}
func gravityInReferenceFrame() -> CMAcceleration {
let origin = self.gravity
let rotation = attitude.rotationMatrix
let matrix = rotation.inverse()
var result = CMAcceleration()
result.x = origin.x * matrix.m11 + origin.y * matrix.m12 + origin.z * matrix.m13;
result.y = origin.x * matrix.m21 + origin.y * matrix.m22 + origin.z * matrix.m23;
result.z = origin.x * matrix.m31 + origin.y * matrix.m32 + origin.z * matrix.m33;
return result
}
}
extension CMRotationMatrix {
func inverse() -> CMRotationMatrix {
let matrix = GLKMatrix3Make(Float(m11), Float(m12), Float(m13), Float(m21), Float(m22), Float(m23), Float(m31), Float(m32), Float(m33))
let invert = GLKMatrix3Invert(matrix, nil)
return CMRotationMatrix(m11: Double(invert.m00), m12: Double(invert.m01), m13: Double(invert.m02),
m21: Double(invert.m10), m22: Double(invert.m11), m23: Double(invert.m12),
m31: Double(invert.m20), m32: Double(invert.m21), m33: Double(invert.m22))
}
}
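Since an attitude rotation matrix is orthonormal, its inverse is just its transpose; if you want to avoid the general-purpose GLKMatrix3Invert call, a sketch of that shortcut:
extension CMRotationMatrix {
    // For a pure rotation, inverse == transpose: swap rows and columns.
    func transposed() -> CMRotationMatrix {
        return CMRotationMatrix(m11: m11, m12: m21, m13: m31,
                                m21: m12, m22: m22, m23: m32,
                                m31: m13, m32: m23, m33: m33)
    }
}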
Hope it helps a little bit.
I tried to implement a solution after reading the paper linked above.
The steps are as follows:
take the attitude rotation matrix at every update;
compute the inverse matrix;
multiply the inverse matrix by the userAcceleration vector;
the resulting vector is the projection of the acceleration into the reference frame.
-x north, +x south
-y east, +y west
My code isn't perfect yet; I'm working on it.
The reference frame is tied to the attitude value; look at the yaw angle. If you don't use a reference frame, the yaw is always zero when you start your app; if instead you use the reference frame CMAttitudeReferenceFrameXTrueNorthZVertical, the yaw indicates the angle between the x-axis and true north.
With this information you can identify the attitude of the phone in the Earth's coordinates, and therefore the position of the accelerometer's axes with respect to the cardinal points.
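A small Swift sketch of reading that yaw:
import CoreMotion

let manager = CMMotionManager()
manager.startDeviceMotionUpdates(using: .xTrueNorthZVertical)
// With this frame, yaw is the angle between the device x-axis and true north.
if let yaw = manager.deviceMotion?.attitude.yaw {
    print("angle from true north (radians):", yaw)
}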