In the Android world, we can tap any location with:
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.UiDevice

val device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
device.click(x, y)
The x and y values above can be read from a screenshot in Pixelmator. For example, the coordinate of the key "a" would be x=100 and y=1800 (measured with Pixelmator's ruler).
According to tapCoordinate, we might do something similar in iOS:
func tapCoordinate(at xCoordinate: Double, and yCoordinate: Double) {
    let normalized = app.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
    let coordinate = normalized.withOffset(CGVector(dx: xCoordinate, dy: yCoordinate))
    coordinate.tap()
}
But it didn't work as expected. I was wondering: could we tap a point by the x and y of the global screen?
Your question doesn't make a lot of sense. The debugDescription property generates a string representation of various data types, suitable for logging to the console. iOS doesn't "calculate the coordinates from debugDescription".
You don't tell us what you are logging. You should edit your question to tell us what it is that you are logging, show the code that captures the value, and also show the print statement that logs it.
Based on the format of the output, it looks like you are logging a rectangle (CGRect data type.)
A CGRect is in the form (origin, size), where origin is a CGPoint and size is a CGSize.
It sounds like tapCoordinate is giving you the center of the rectangle, which would be:

x = rect.origin.x + rect.size.width/2
y = rect.origin.y + rect.size.height/2

That gives

x = 23 + 41/2 = 43.5
y = 573 + 49/2 = 597.5
Assuming your x value is a typo and should be 33, that is quite close to the values you give in your question.
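For what it's worth, a minimal sketch of tapping the center of such a rect from a UI test (assuming app is your XCUIApplication; CGRect exposes midX/midY for exactly this):

let rect = CGRect(x: 23, y: 573, width: 41, height: 49) // the logged rect
let center = CGPoint(x: rect.midX, y: rect.midY)
let start = app.coordinate(withNormalizedOffset: .zero)
start.withOffset(CGVector(dx: center.x, dy: center.y)).tap()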
I'm using ARCore + SceneKit (in Swift) to calculate the distance from the center point between the two eyes to the camera.
I determine the coordinates of the camera:
let cameraPos = sceneView.pointOfView?.position
The coordinates of the left eye and right eye:
let buffer = face.mesh.vertices
let left = buffer[LF]
let right = buffer[RT]
NOTE:
LF and RT are defined based on: https://github.com/ManuelTS/augmentedFaceMeshIndices
LF = 159 is the index that contains the Vector3 coordinate of the left eye
RT = 386 is the index that contains the Vector3 coordinate of the right eye
Compute the center point (in SCNVector3):

let center = SCNVector3(x: (left.x - right.x) * 0.5,
                        y: (left.y - right.y) * 0.5,
                        z: (left.z - right.z) * 0.5)
Finally, I calculate the distance:
let distance = distance(start: cameraPos!, end: center)
distance is defined as:
func distance(start: SCNVector3, end: SCNVector3) -> Float {
    let dx = start.x - end.x
    let dy = start.y - end.y
    let dz = start.z - end.z
    let distance = sqrt(dx * dx + dy * dy + dz * dz)
    return round(distance * 100 * 10) / 10.0 // metres -> centimetres, rounded to one decimal
}
The result at runtime is incorrect:
Actual distance: ~20 cm
In-app distance: ~3 cm
Can someone tell me where the problem lies, even another solution?
Thanks.
Assuming center is supposed to be the midpoint between the eyes, shouldn't the formula be:
Midpoint:
The midpoint of (x1, y1, z1) and (x2, y2, z2) is ((x1 + x2)/2, (y1 + y2)/2, (z1 + z2)/2).
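In code, that would make the center calculation (a sketch reusing the question's own left and right variables):

let center = SCNVector3(x: (left.x + right.x) * 0.5,
                        y: (left.y + right.y) * 0.5,
                        z: (left.z + right.z) * 0.5)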
Edit: Taking a guess here, but...

Example: for a projectile to actually launch from a turret with a long-barreled cannon exactly where the barrel is pointing at the moment of firing, you have to calculate the position at the end of the tube relative to the position of the node that the barrel is attached to; otherwise the shot will not look like it came from the right spot.

It requires a little imagination, but this is your face moving around (your face = the turret moving around). I think that's what's happening to your math: you aren't getting the right LF/RT positions because you didn't mention converting the point. The link you posted says: "The face mesh consists of hundreds of vertices that make up the face, and is defined relative to the center pose." Relative to the center pose: I'm pretty sure that means you have to convert LF in relation to the center pose to get the real position.
// Convert position something like this:
let REAL_LF = gNodes.gameNodes.convertPosition(LF.presentation.position, from: POSE_POSITION)
convertPosition(_:to:) converts a position from the node's local coordinate space to that of another node.
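A sketch of what that conversion might look like (the names are illustrative: faceNode would be the SCNNode carrying the face's center pose, and left is the raw buffer[LF] vertex):

let leftLocal = SCNVector3(left.x, left.y, left.z)
let leftWorld = faceNode.convertPosition(leftLocal, to: nil) // nil means world space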
I'm trying to calculate the bounce vector (reflection vector) for a given direction at a specific intersection point/surface in 3D SceneKit space within an AR session.

To do this, I send out a hit test from the exact center of the screen, straight ahead. There is, for example, a cube positioned, let's say, 2 meters in front of me. Now I'd like to continue this hit test in the logical rebound/reflection direction, just as a light ray on a mirror would. Of course the hit test ends at its intersection point, but from there I would like to draw a line, or a small and long SCNTube node, to visualise the direction in which this hit test would continue if it were reflected by one of the faces of the cube. And this from any particular direction.

Let's say I have the direction vector in which I send the hit test. I also have the intersection point given by the hit test result. And I have the normal of the surface at the intersection point.
According to some answers I found about this issue on linear algebra forums:
https://math.stackexchange.com/questions/2235997/reflecting-ray-on-triangle-in-3d-space
https://math.stackexchange.com/questions/13261/how-to-get-a-reflection-vector
the following formula should do the trick in 3D space (and it should give me the rebound/reflection vector):

r = d − 2(d⋅n)n

(where d⋅n is the dot product, n must be normalised, and r is the reflection vector)
I tried to make a Swift implementation of that, but it results in nonsense. Here is my code:
let location: CGPoint = screenCenter
let hits = self.sceneView.hitTest(location, options: [SCNHitTestOption.categoryBitMask: NodeCategory.catCube.rawValue, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
if !hits.isEmpty {
    print("we have a hittest")
    let d = currentDirection
    let p = hits.first?.worldCoordinates // hit location
    let n = hits.first?.worldNormal      // normal at the hit location
    print("D = \(d)")
    print("P = \(p)")
    print("N = \(n)")
    // r = d - 2*(d*n).normalized // the formula
    let r: SCNVector3 = d - (d.crossProduct(n!).crossProduct(d.crossProduct(n!))).normalized
    // let r : SCNVector3 = d - (2 * d.crossProduct(n!).normalized) // I also tried this, but it gives errors in Xcode
    print("R = \(r)")
    // This function should then set up the node aligned to that new vector
    setupRay(position: p!, euler: r)
}
All this results in nonsense. I get the following console output:
we are in gesture TAP recognizer
we have a hittest
D = SCNVector3(x: -0.29870644, y: 0.5494926, z: -0.7802771)
P = Optional(__C.SCNVector3(x: -0.111141175, y: 0.034069262, z: -0.62390435))
N = Optional(__C.SCNVector3(x: 2.672451e-08, y: 1.0, z: 5.3277716e-08))
R = SCNVector3(x: nan, y: nan, z: nan)
My Euler Angle: SCNVector3(x: nan, y: nan, z: nan)
(D is the direction of the hit test, P is the point of intersection, N is the normal at the point of intersection; R should be the reflection vector but is always just nan, not a number.)
I also tried the dotProduct extension instead of crossProduct, but dotProduct gives me a Float value, which I cannot combine arithmetically with an SCNVector3.
How can I calculate this rebound vector and align an SCNNode facing in that direction (with its pivot at the start point, the point of intersection from the hit test)?

What am I doing wrong? Can anyone show me a working Swift implementation of that calculation?

Any help would be very welcome. (Linear algebra is not among my strengths.)

PS: I use the standard SCNVector3 math extensions as available from GitHub.
Finally, this solution works as far as I can tell on iOS 12 (successfully tested). It gives you the reflection vector from any surface and any point of view:
let location: CGPoint = screenCenter
let hits = self.sceneView.hitTest(location, options: [SCNHitTestOption.categoryBitMask: NodeCategory.catCube.rawValue, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
if let hitResult = hits.first {
    let direction = normalize(double3(sceneView.pointOfView!.worldFront))
    // let reflectedDirection = reflect(direction, n: double3(hitResult.worldNormal))
    let reflectedDirection = simd_reflect(direction, double3(hitResult.worldNormal))
    print(reflectedDirection)
    // Use the result for whatever purpose
    setupRay(position: hitResult.worldCoordinates, reflectDirection: SCNVector3(reflectedDirection))
}
The SIMD library is really useful for things like that:
if let hitResult = hitResults.first {
    let direction = normalize(scnView.pointOfView!.simdPosition - hitResult.simdWorldCoordinates)
    let reflectedDirection = reflect(direction, n: hitResult.simdWorldNormal)
    print(reflectedDirection)
}
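If you'd rather spell the formula out than rely on the built-in reflect, a minimal sketch of r = d − 2(d⋅n)n using simd (note it is the dot product, not the cross product, that appears in the formula):

import simd

func reflected(_ d: simd_float3, acrossSurfaceNormal n: simd_float3) -> simd_float3 {
    let un = simd_normalize(n)          // the normal must be unit length
    return d - 2 * simd_dot(d, un) * un // r = d - 2(d.n)n
}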
I have an object displayed using OpenGL ES on an iPad. The model is defined by vertices, normals, and indexes into the vertices. The origin of the model is (0, 0, 0). Using UIGestureRecognizer I can detect various gestures: a two-fingered horizontal swipe for rotation about y, a vertical swipe for rotation about x, a two-fingered rotate gesture for rotation about z, a pan to move the model around, and a pinch/zoom gesture to scale. I want the viewer to be able to manipulate the model to see (for example) the reverse of the model or the whole thing at once.
The basic strategy comes from Ray Wenderlich's tutorial but I have rewritten this in Swift.
I understand quaternions to be a vector and an angle. The vectors up, right and front represent the three axes:
front = GLKVector3Make(0.0, 0.0, 1.0)
right = GLKVector3Make(1.0, 0.0, 0.0)
up = GLKVector3Make(0.0, 1.0, 0.0)
so the quaternion applies a rotation around each of the three axes (though only one of dx, dy, dz has a value at a time, decided by the gesture recognizer):
func rotate(rotation: GLKVector3, multiplier: Float) {
    let dx = rotation.x - rotationStart.x
    let dy = rotation.y - rotationStart.y
    let dz = rotation.z - rotationStart.z
    rotationStart = GLKVector3Make(rotation.x, rotation.y, rotation.z)
    rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx * multiplier, up), rotationEnd)
    rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dy * multiplier, right), rotationEnd)
    rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-dz, front), rotationEnd)
    state = .Rotation
}
Drawing uses the modelViewMatrix, calculated by the following function:
func modelViewMatrix() -> GLKMatrix4 {
    var modelViewMatrix = GLKMatrix4Identity
    // translation and zoom
    modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, translationEnd.x, translationEnd.y, -initialDepth)
    // rotation
    let quaternionMatrix = GLKMatrix4MakeWithQuaternion(rotationEnd)
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, quaternionMatrix)
    // scale
    modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, scaleEnd, scaleEnd, scaleEnd)
    return modelViewMatrix
}
And mostly this works. However, everything is relative to the origin.

If the model is rotated, the pivot is always an axis passing through the origin: if I zoom in on the end of the model far from the origin and then rotate, the model can rapidly swing out of view. If the model is scaled, the origin is always the fixed point, with the model growing larger or smaller around it; if the origin is off-screen and the scale is reduced, the model can disappear from view as it collapses toward the origin...

What should happen is that, whatever the current view, the model rotates or scales relative to that view. For a rotation around the y axis, that would mean defining the y axis of the rotation as passing vertically through the middle of the current view. For a scale operation, the fixed point of the model would be the centre of the screen, with the model shrinking toward or growing outward from that point.

I know that in 2D the solution is always to translate to the origin, apply the rotation, and then apply the inverse of the first translation. I don't see why this should be different in 3D, but I cannot find any example doing this with quaternions, only with matrices. I have tried to apply a translation and its inverse around the rotation, but nothing has any effect.
So I tried to do this in the rotate function:
let xTranslation: Float = 300.0
let yTranslation: Float = 300.0
let translation = GLKMatrix4Translate(GLKMatrix4Identity, xTranslation, yTranslation, -initialDepth)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithMatrix4(translation), rotationEnd)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx * multiplier, up), rotationEnd)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dy * multiplier, right), rotationEnd)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-dz, front), rotationEnd)
// inverse translation
let inverseTranslation = GLKMatrix4Translate(GLKMatrix4Identity, -xTranslation, -yTranslation, -initialDepth)
rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithMatrix4(inverseTranslation), rotationEnd)
The translation is (300, 300) but there is no effect at all; it still pivots around where I know the origin to be. I've searched a long time for sample code and not found any.
The modelViewMatrix is applied in update() with:
effect?.transform.modelviewMatrix = modelViewMatrix
I could also cheat by adjusting all of the values in the model so that 0,0,0 falls at a central point - but that would still be a fixed origin and would be only marginally better.
The problem is in the last operation you made; you should swap inverseTranslation and rotationEnd:
rotationEnd = GLKQuaternionMultiply(rotationEnd, GLKQuaternionMakeWithMatrix4(inverseTranslation))
And I think the partial rotations (dx, dy, dz) should follow the same rule.
In fact, if you want to change the pivot, this is how your matrix multiplication should be done:
modelMatrix = translationMatrix * rotationMatrix * inverse(translationMatrix)
and the result in homogeneous coordinates will be calculated as follows:
newPoint = translationMatrix * rotationMatrix * inverse(translationMatrix) * v4(x,y,z,1)
Example
This is a 2D test example that you can run in a playground:

let v4 = GLKVector4Make(1, 0, 0, 1) // point A
let T = GLKMatrix4Translate(GLKMatrix4Identity, 1, 2, 0)
let rot = GLKMatrix4MakeWithQuaternion(GLKQuaternionMakeWithAngleAndVector3Axis(Float(M_PI) * 0.5, GLKVector3Make(0, 0, 1))) // rotate by PI/2 around the z axis
let invT = GLKMatrix4Translate(GLKMatrix4Identity, -1, -2, 0)

let partModelMat = GLKMatrix4Multiply(T, rot)
let modelMat = GLKMatrix4Multiply(partModelMat, invT) // the parameters were swapped in your code
// (with the swapped order the result would just be the rot matrix, since T*invT is the identity)

var v4r = GLKMatrix4MultiplyVector4(modelMat, v4) // model matrix multiplied with point A
print(v4r.v) // (3, 2, 0, 1)

// Step-by-step multiplication using the relation described above
v4r = GLKMatrix4MultiplyVector4(invT, v4)
v4r = GLKMatrix4MultiplyVector4(rot, v4r)
v4r = GLKMatrix4MultiplyVector4(T, v4r)
print(v4r.v) // (3, 2, 0, 1)
As for the scale, if I understand correctly what you want, I would recommend doing it the way it's described here: https://gamedev.stackexchange.com/questions/61473/combining-rotation-scaling-around-a-pivot-with-translation-into-a-matrix (a small sketch of the same pattern follows).
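The pattern from that link is the same translate/transform/inverse-translate sandwich as above; a minimal GLKit sketch (the pivot and the scale factor are made-up values for illustration):

import GLKit

let pivot = GLKVector3Make(1, 2, 0)
let T = GLKMatrix4Translate(GLKMatrix4Identity, pivot.x, pivot.y, pivot.z)
let S = GLKMatrix4Scale(GLKMatrix4Identity, 2, 2, 2) // uniform scale by 2
let invT = GLKMatrix4Translate(GLKMatrix4Identity, -pivot.x, -pivot.y, -pivot.z)
// T * S * invT keeps the pivot fixed while everything else scales around it
let scaleAboutPivot = GLKMatrix4Multiply(GLKMatrix4Multiply(T, S), invT)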
I have been doing a ton of research but found nothing. With MapKit, I have a map that shows the current location, and elsewhere a function that calculates a heading/bearing value (not necessarily the actual heading). How can I draw a line on the map that starts at the current location and points in the direction of the given heading? (It does not matter how long the line is; it has no meaningful end point.) I am not asking you to write the code for me, but I would appreciate some detailed direction. Hope this helps others too.
Cheers
Your coordinates are polar, which means you have a direction and a length. You just need to convert them to Cartesian, which gives you a horizontal offset and a vertical offset. You do that with a little trigonometry.
let origin = CGPoint(x: 10, y: 10)
let heading: CGFloat = CGFloat.pi
let length: CGFloat = 20
let endpoint = CGPoint(x: origin.x + cos(heading)*length,
                       y: origin.y + sin(heading)*length)
let path = UIBezierPath()
path.move(to: origin)
path.addLine(to: endpoint)
Note that trigonometric functions generally work in radians (2*PI = one revolution). Bearings are often in degrees (360 degrees = one revolution). Converting is straightforward, however:
func radians(forDegrees angle: CGFloat) -> CGFloat {
    return CGFloat.pi * angle / 180.0
}
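To draw the line on the map itself rather than in a view, one rough sketch using an MKPolyline overlay (mapView and origin are assumed to exist; the line length and the longitude correction are back-of-the-envelope choices, and compass bearings run clockwise from north, so north maps to +latitude and east to +longitude):

import MapKit

func drawHeadingLine(on mapView: MKMapView,
                     from origin: CLLocationCoordinate2D,
                     bearingDegrees: Double) {
    let bearing = Double.pi * bearingDegrees / 180.0
    let lineLength = 0.05 // in degrees of latitude; just long enough to be visible
    let end = CLLocationCoordinate2D(
        latitude: origin.latitude + cos(bearing) * lineLength,
        longitude: origin.longitude + sin(bearing) * lineLength
            / cos(Double.pi * origin.latitude / 180.0)) // longitude degrees shrink with latitude
    let coords = [origin, end]
    mapView.addOverlay(MKPolyline(coordinates: coords, count: coords.count))
}

The map view's delegate still has to return an MKPolylineRenderer from mapView(_:rendererFor:) for the overlay to actually be drawn.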
I have an angle that I am calculating based on the position of a view relative to the centre of the screen. I need a way to move the view from its current position off the screen, in the direction of the angle.

I'm sure there is a fairly simple way of calculating a new x and y value, but I haven't been able to figure out the maths. I want to do it with an animation, but I can figure that out myself once I have the coordinates.
Anyone have any suggestions?
If you have the angle, you can calculate the new coordinates from its sine and cosine values. You can try the following code:
let pathLength = 50 as Double // total distance the view should move
let piFactor = M_PI / 180
let angle = 90 as Double // direction in which you need to move it

let xCoord = outView.frame.origin.x + CGFloat(pathLength * sin(piFactor * angle)) // outView is the view you want to animate
let yCoord = outView.frame.origin.y + CGFloat(pathLength * cos(piFactor * angle))

UIView.animateWithDuration(1, delay: 0, options: UIViewAnimationOptions.CurveEaseInOut, animations: { () -> Void in
    self.outView.frame = CGRectMake(xCoord, yCoord, self.outView.frame.size.width, self.outView.frame.size.height)
}, completion: { (Bool) -> Void in
})
To me it sounds like what you need to do is convert a vector from polar representation (angle and radius) to Cartesian representation (x and y coordinates), which should be fairly easy.

You already have the angle, so you only need the radius, which is the length of the vector. In your case (if I understand correctly) it is the distance from the current center of the view to its new position. While it may be hard to know that exactly (because that is part of what you are trying to calculate), you can stay on the safe side and take a value large enough that it will surely throw the view out of its superview's frame. The length of the superview's diagonal plus the length of the animated view's diagonal will do the job, or, even simpler, just take the sum of the heights and widths of both views.

Once you have the complete polar representation of the vector (angle and radius), you can use the simple formulas x = r * cos(a) and y = r * sin(a) to convert to Cartesian representation, and finally add that vector's coordinates to the center of the view you need to animate; a small sketch of that follows.
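A minimal sketch of that idea (assuming view is the view to animate, it has a superview, and angle is in radians measured the standard trigonometric way):

import UIKit

func animateOffscreen(_ view: UIView, angle: CGFloat) {
    guard let container = view.superview else { return }
    // A radius guaranteed to land the view outside the superview:
    // the sum of both diagonals.
    let radius = hypot(container.bounds.width, container.bounds.height)
               + hypot(view.bounds.width, view.bounds.height)
    let target = CGPoint(x: view.center.x + cos(angle) * radius,
                         y: view.center.y + sin(angle) * radius)
    UIView.animate(withDuration: 1) {
        view.center = target
    }
}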