I want to use Xcode UI tests with Fastlane Snapshot to take screenshots of my Cordova app. Basically, as my entire app is just a web view, all the Xcode UI test helper methods become irrelevant; I just want to tap on specific points, e.g. tap(x: 10, y: 10) should produce a tap at the point {10px; 10px}.
That's probably very simple, but I can't figure out how to do it.
Thanks.
You can tap a specific point with the XCUICoordinate API. Unfortunately you can't just say "tap 10,10" referencing a pixel coordinate. You will need to create the coordinate with a relative offset to an actual view.
We can use the web view mentioned in the question to interact with a relative coordinate.
let app = XCUIApplication()
let webView = app.webViews.element
// Note: this offset is normalized (multiplied by the element's size),
// so (10, 10) lands at 10x the web view's width and height;
// see the update below for tapping an absolute point.
let coordinate = webView.coordinateWithNormalizedOffset(CGVector(dx: 10, dy: 10))
coordinate.tap()
Side note, but have you tried interacting with the web view directly? I've had a lot of success using app.links["Link title"].tap() or app.staticTexts["A different link title"].tap(). Here's a demo app I put together demonstrating how to interact with a web view.
Update: As Michal W. pointed out in the comments, you can now tap a coordinate directly, without worrying about normalizing the offset.
let normalized = webView.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
let coordinate = normalized.withOffset(CGVector(dx: 10, dy: 10))
coordinate.tap()
Notice that I pass 0,0 to the normalized vector and then the actual point, 10,10, to the second call.
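Tying this back to the question's Fastlane Snapshot setup, a minimal sketch of a complete test might look like this (it assumes fastlane's SnapshotHelper.swift has been added to the UI test target; the class name and screenshot name are placeholders):
import XCTest

class CordovaScreenshotTests: XCTestCase {
    func testTapAndSnapshot() {
        let app = XCUIApplication()
        setupSnapshot(app) // from fastlane's SnapshotHelper.swift
        app.launch()

        // Tap the absolute point (10, 10) inside the web view.
        let webView = app.webViews.element
        webView.coordinate(withNormalizedOffset: .zero)
            .withOffset(CGVector(dx: 10, dy: 10))
            .tap()

        snapshot("01MainScreen") // fastlane collects this screenshot
    }
}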
@Joe To go a little further off of Joe Masilotti's approach, I put mine in an extension and gave prepositional phrases to the global and local params.
func tapCoordinate(at xCoordinate: Double, and yCoordinate: Double) {
    // `app` is the XCUIApplication instance available to the test case.
    let normalized = app.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
    let coordinate = normalized.withOffset(CGVector(dx: xCoordinate, dy: yCoordinate))
    coordinate.tap()
}
Giving the parameters identifiable labels makes the call site easy to read, for example:
tapCoordinate(at: 100, and: 200)
I found Laser's answer to work fine with Xcode 11, but made a few tweaks to easily integrate it into my testing.
extension XCUIApplication {
    func tapCoordinate(at point: CGPoint) {
        let normalized = coordinate(withNormalizedOffset: .zero)
        let offset = CGVector(dx: point.x, dy: point.y)
        let coordinate = normalized.withOffset(offset)
        coordinate.tap()
    }
}
Now, when I need to tap on a given location, I just provide a CGPoint and call this against my XCUIApplication like so:
let point = CGPoint(x: xCoord, y: yCoord)
app.tapCoordinate(at: point)
<something>.coordinate(withNormalizedOffset: .zero).withOffset(CGVector(dx: 10, dy: 60)).tap()
Pass .zero to the normalized vector and then the actual point (10, 60) to withOffset.
I am developing a molecular visualizer for macOS / iPadOS with SceneKit. Long story short, I want a new atom (in this example just an SCNSphere) to be placed when the user clicks (or touches) the screen at a certain position.
Previously, I had the allowsCameraControl property of the SCNView active, which allowed me to freely move the camera, and with the unprojectPoint() method I could successfully place a new node at the touch location. The limitation of the default camera controller is that it does not zoom: when you pinch the screen, it changes the FOV property of the camera instead of moving it along the Z axis.
Therefore, I made a custom camera node with an SCNCamera. I successfully recreated the default camera behaviour (movement, rotation) and furthermore I am able to correctly zoom into the scene. The downside is that the unprojectPoint() method no longer works as expected: the new nodes are placed very close to the camera node itself. No matter where I click on the scene, the unprojected point is always very close to 0, 0, 10.
internal func newNodeAt(point: CGPoint) {
    // Unproject the 2D touch point at a fixed scene depth.
    let pointVector = SCNVector3(point.x, point.y, 0.8)
    let position = self.unprojectPoint(pointVector)
    print("x: \(position.x), y: \(position.y), z: \(position.z)")
    let newSphere = SCNSphere(radius: 1)
    let newNode = SCNNode(geometry: newSphere)
    newNode.position = position // place the sphere at the unprojected point
    self.scene?.rootNode.addChildNode(newNode)
}
The camera node is set up as follows and is directly attached to the scene's root node.
internal func setupCameraNode() -> SCNNode {
    let cam = SCNCamera()
    cam.name = "camera"
    cam.zFar = 200
    cam.zNear = 0.1
    let camNode = SCNNode()
    camNode.camera = cam
    camNode.position = SCNVector3(0, 0, 5)
    camNode.name = "Camera node"
    return camNode
}
These are the printed positions after clicking at random positions in the scene.
x:-0.1988764852285385, y: -0.05589345842599869, z: 10.920427322387695
x:-0.18989555537700653, y: 0.14564114809036255, z: 10.920427322387695
x: 0.2168566882610321, y: 0.13085339963436127, z: 10.920427322387695
x: 0.24202580749988556, y: -0.15493911504745483, z: 10.920427322387695
x:-0.06516486406326294, y: -0.1781780868768692, z: 10.920427322387695
x:-0.08134553581476212, y: 0.12478446960449219, z: 10.920427322387695
x:-0.25866374373435974, y: 0.1456427276134491, z: 10.920427322387695
x: 0.217658132314682, y: 0.16270162165164948, z: 10.920427322387695
x: 0.2053154855966568, y: -0.12679903209209442, z: 10.920427322387695
I suppose that unprojectPoint() is somehow related to the point of view, but I do not know how to fix this. Thanks.
I think you are on the right track; you just have to provide some kind of depth reference for the user. This is my code for a similar case: when I call airStrike, I deal with the depth based on a plane facing the user, and that's how I know where Z needs to be.
Just a guess without a visual, but it seems there are a couple of options. Create a reference plane in the middle of the molecule and ++/-- that to show where the tap will land from a depth perspective.
Or just let them put it anywhere, then select it and depth++/depth-- to get it in the right position.
@objc func handleTap(recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: gameScene)
    if data.isAirStrikeModeOn == true {
        // Project a known world point to learn the screen-space depth (z),
        // then unproject the touch location at that same depth.
        let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
        let scenePoint = gameScene.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedPoint.z)))
        gameControl.airStrike(position: scenePoint)
    }
}
After days of testing I figured out a workaround and now I can place the nodes correctly where they should be.
My node tree is like this:
RootNode
  CameraNode
  atomNodes
    atom (individual spheres)
Therefore, all I had to do was convert the unprojected position from the RootNode (which I suppose is the one the camera takes its reference from) to atomNodes, thus:
let unprojected = unprojectPoint(SCNVector3(location.x, location.y, 0.99))
let position = atomNodes.convertPosition(unprojected, from: rootNode)
The 0.99 is just a Z position that works nicely in my view for the spheres to be placed.
My advice would be to always check the node tree because the positions are relative to each other.
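For context, a reconstruction of how the placement method might look with that fix applied (this is a sketch, not the author's exact code; atomNodes and rootNode are the nodes from the tree above, held as properties of the view):
internal func newNodeAt(point: CGPoint) {
    // Unproject the touch at the chosen depth; the result is in root-node space.
    let unprojected = unprojectPoint(SCNVector3(point.x, point.y, 0.99))
    // Convert into atomNodes' local space before adding the sphere there.
    let position = atomNodes.convertPosition(unprojected, from: scene!.rootNode)
    let newNode = SCNNode(geometry: SCNSphere(radius: 1))
    newNode.position = position
    atomNodes.addChildNode(newNode)
}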
let material = SimpleMaterial(color: .red, roughness: 1, isMetallic: false)
let doorBox = MeshResource.generateBox(width: 0.02, height: 1, depth: 0.5)
let doorEntity = ModelEntity(mesh: doorBox, materials: [material])
let anchor = AnchorEntity()
anchor.addChild(doorEntity)
In RealityKit, I have a box built from a MeshResource; with these dimensions the box looks like a line. I have added this box to an ARView and set a real-time camera position. In one scenario I want to know the box/line's starting and ending positions.
Let's say the box entity's middle/current position is (0.1, 0.23, -1.3); what will the box's left and right positions be? The anchor with the box keeps changing its position with the camera movement.
Thanks in advance.
You can use this extension.
extension Entity {
    func getDistancedPosition(x: Float, y: Float, z: Float) -> SIMD3<Float> {
        let referenceNodeTransform = transform.matrix
        // Build a translation matrix for the requested offset.
        var translationMatrix = matrix_identity_float4x4
        translationMatrix.columns.3.x = x
        translationMatrix.columns.3.y = y
        translationMatrix.columns.3.z = z
        // Apply the offset in the entity's local space.
        let updatedTransform = matrix_multiply(referenceNodeTransform, translationMatrix)
        return .init(updatedTransform.columns.3.x,
                     updatedTransform.columns.3.y,
                     updatedTransform.columns.3.z)
    }
}
To get the left and right of your box, use the code below:
let side1Position = door.getDistancedPosition(x: 0, y: 0, z: self.viewModel.doorDepth/2)
let side2Position = door.getDistancedPosition(x: 0, y: 0, z: -(self.viewModel.doorDepth/2))
To make the box look like a line you must have used the depth dimension; if not, you can change the parameter accordingly, e.g. door.getDistancedPosition(x: -0.1, y: 0, z: 0).
You can also refer to this question and its accepted answer:
Position a SceneKit object in front of SCNCamera's current orientation
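As a self-contained illustration with the question's numbers (doorEntity and the 0.5 depth come from the snippet above; note the returned positions are expressed in the entity's parent space, since transform.matrix is relative to the parent):
// The box is 0.5 deep, so each end sits half the depth from the centre,
// measured along the entity's local Z axis.
let doorDepth: Float = 0.5
let side1 = doorEntity.getDistancedPosition(x: 0, y: 0, z: doorDepth / 2)
let side2 = doorEntity.getDistancedPosition(x: 0, y: 0, z: -doorDepth / 2)
print(side1, side2) // the two ends of the line-like box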
Simply put, I wish to have an automated test as part of my UI test suite that can scroll the map. I am not concerned about the location, I just need to move it from its original position.
Why?
Two reasons:
The UI updates once the user interacts with the map. I wish to validate these changes
While I can easily verify this on a device, I also want to include automated screenshots via fastlane. Having a test perform this makes that possible
What have I tested so far?
I found the following from a related issue and tested without success:
let map = app.maps.element
let start = map.coordinate(withNormalizedOffset: CGVector(dx: 200, dy: 200))
let end = map.coordinate(withNormalizedOffset: CGVector(dx: 250, dy: 250))
start.press(forDuration: 0.01, thenDragTo: end)
I can confirm that the map element is correctly set and contains the expected information.
I can also confirm that the coordinates I am using fall within the bounds of the map on the screen. I have also tested with a wide range of other values just in case.
I'm not concerned about how it is moved, or where it is moved to. All I need is to replicate a user moving the map by 1 point.
coordinate(withNormalizedOffset:) works a bit differently: the vector is multiplied by the size of your object.
From Apple's docs:
The coordinate’s screen point is computed by adding normalizedOffset multiplied by the size of the element’s frame to the origin of the element’s frame.
That means that if you want to start dragging at the center of your map and then drag the map a bit, you have to use it like this:
let map = app.maps.element
let start = map.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5))
let end = map.coordinate(withNormalizedOffset: CGVector(dx: 0.6, dy: 0.6))
start.press(forDuration: 0.01, thenDragTo: end)
This puts the start coordinate at 0.5 * map.frame.width and 0.5 * map.frame.height and the end coordinate at 0.6 * map.frame.width and 0.6 * map.frame.height
When you run the UITest with this you'll see that it drags the map.
With your parameters it puts the start coordinate at 200 * map.frame.width and 200 * map.frame.height, which is waaaay outside the screen, so no dragging occurs.
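If you'd rather specify the drag in points instead of fractions of the frame, a small helper can do the conversion; this is a sketch (the helper name is mine, and it assumes the element's frame is final at call time):
func drag(map: XCUIElement, from: CGPoint, to: CGPoint) {
    // Convert absolute points inside the element into normalized offsets.
    let size = map.frame.size
    let start = map.coordinate(withNormalizedOffset: CGVector(dx: from.x / size.width, dy: from.y / size.height))
    let end = map.coordinate(withNormalizedOffset: CGVector(dx: to.x / size.width, dy: to.y / size.height))
    start.press(forDuration: 0.01, thenDragTo: end)
}
Called as drag(map: app.maps.element, from: CGPoint(x: 200, y: 200), to: CGPoint(x: 250, y: 250)), this reproduces the intent of the original snippet with on-screen coordinates.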
I'm building a UIPanGestureRecognizer so I can move nodes in 3D space.
Currently I have something that works, but only when the camera is exactly perpendicular to the plane. My UIPanGestureRecognizer handler looks like this:
@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let projectedOrigin = self.sceneView!.projectPoint(SCNVector3Zero)
    let viewCenter = CGPoint(
        x: self.view!.bounds.midX,
        y: self.view!.bounds.midY
    )
    let touchlocation = sender.translation(in: self.view!)
    let moveLoc = CGPoint(
        x: CGFloat(touchlocation.x + viewCenter.x),
        y: CGFloat(touchlocation.y + viewCenter.y)
    )
    let touchVector = SCNVector3(x: Float(moveLoc.x), y: Float(moveLoc.y), z: Float(projectedOrigin.z))
    let worldPoint = self.sceneView!.unprojectPoint(touchVector)
    let loc = SCNVector3(x: worldPoint.x, y: 0, z: worldPoint.z)
    worldHandle?.position = loc
}
The problem happens when the camera is rotated and the coordinates are affected by the perspective change, which makes the touch position drift.
Related SO post which I used to get to this position:
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly
It referenced these great slides: http://www.terathon.com/gdc07_lengyel.pdf
The tricky part of going from a 2D touch position to 3D space is obviously the z-coordinate. Instead of trying to convert the touch position to an imaginary 3D space, map the 2D touch to a 2D plane in that 3D space using a hit test. Especially when movement is required in only two directions, for example like chess pieces on a board, this approach works very well. Regardless of the orientation of the plane and the camera settings (as long as the camera doesn't look at the plane from the side, obviously) this will map the touch position to a 3D position directly under the finger of the touch and follow it consistently.
I modified the Game template from Xcode with an example.
https://github.com/Xartec/PrecisePan/
The main parts are:
the pan gesture code:
// retrieve the SCNView
let scnView = self.view as! SCNView
// check what nodes are tapped
let p = gestureRecognize.location(in: scnView)
let hitResults = scnView.hitTest(p, options: [SCNHitTestOption.searchMode: 1, SCNHitTestOption.ignoreHiddenNodes: false])
if hitResults.count > 0 {
// check if the XZPlane is in the hitresults
for result in hitResults {
if result.node.name == "XZPlane" {
//NSLog("Local Coordinates on XZPlane %f, %f, %f", result.localCoordinates.x, result.localCoordinates.y, result.localCoordinates.z)
//NSLog("World Coordinates on XZPlane %f, %f, %f", result.worldCoordinates.x, result.worldCoordinates.y, result.worldCoordinates.z)
ship.position = result.worldCoordinates
ship.position.y += 1.5
return;
}
}
}
The addition of an XZ plane node in viewDidLoad:
let XZPlaneGeo = SCNPlane(width: 100, height: 100)
let XZPlaneNode = SCNNode(geometry: XZPlaneGeo)
XZPlaneNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "grid")
XZPlaneNode.name = "XZPlane"
XZPlaneNode.rotation = SCNVector4(-1, 0, 0, Float.pi / 2)
//XZPlaneNode.isHidden = true
scene.rootNode.addChildNode(XZPlaneNode)
Uncomment the isHidden line to hide the helper plane and it will still work. The plane obviously needs to be large enough to fill the screen or at least the portion where the user is allowed to pan.
By setting a global var to hold the start world position of the pan (in state .began) and comparing it to the hit world position in state .changed, you can determine the delta/translation in world space and translate other objects accordingly.
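A rough sketch of that delta approach, reusing the hit test against the helper plane (panStartWorldPos and draggedNode are hypothetical names; the plane is the "XZPlane" from above):
var panStartWorldPos: SCNVector3?
let draggedNode = SCNNode() // the node being moved; added to the scene elsewhere

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let scnView = self.view as! SCNView
    let p = gesture.location(in: scnView)
    // Find the world position on the helper plane under the finger.
    guard let hit = scnView.hitTest(p, options: [SCNHitTestOption.ignoreHiddenNodes: false])
        .first(where: { $0.node.name == "XZPlane" }) else { return }

    switch gesture.state {
    case .began:
        panStartWorldPos = hit.worldCoordinates
    case .changed:
        guard let start = panStartWorldPos else { return }
        // Translate by the world-space delta accumulated since the last event.
        draggedNode.position.x += hit.worldCoordinates.x - start.x
        draggedNode.position.z += hit.worldCoordinates.z - start.z
        panStartWorldPos = hit.worldCoordinates
    default:
        panStartWorldPos = nil
    }
}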
I'm trying to build a small SpriteKit game. I have been trying to use SKAction.move(to:duration:) to move a couple of tank sprite nodes to a specific point.
Attached below is the code snippet.
let tankSpawn = CGPoint(x: self.size.width, y: 70)
tank.position = tankSpawn
tank.zPosition = 3.0
let targetPoint = CGPoint(x: -tank.size.width/2, y: tank.position.y)
let actionMove = SKAction.move(to: targetPoint, duration: TimeInterval(tankMoveDuration))
This is my result: the tanks spawn at the correct point (70 units high), but then drift downwards.
I want them to move in a straight line. I set the target point's y to a constant, and I have no idea why they are heading towards some point at the bottom.
I have similar code for planes that spawn higher up (which works perfectly).
let plane = SKSpriteNode(imageNamed: "SpaceShip")
let planeMoveDuration = 3.0
let planeSpawn = CGPoint(x: self.size.width, y: self.size.height/2)
plane.position = planeSpawn
plane.zPosition = 3.0
let actionMove = SKAction.move(to: CGPoint(x: -plane.size.width/2, y: plane.position.y), duration: TimeInterval(planeMoveDuration))
I have no idea what my mistake is here. I have tried changing the target's y-coordinate to tank.position.y, but it doesn't work.
Are they falling under gravity? A SpriteKit scene with physics bodies has gravity, and things will fall unless you take specific action to avoid this.
It's easily done - you're developing your game, you have basic elements on screen and movement etc is all working well, then you want to add some collision detection so you add physicsBodies and then whoosh - where'd my sprites go?
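If that's the cause, two common fixes, sketched with SpriteKit's standard properties (the tank node is the one from the question):
// Option 1: turn gravity off for the whole scene (e.g. in didMove(to:)).
physicsWorld.gravity = CGVector(dx: 0, dy: 0)

// Option 2: keep scene gravity but exempt individual sprites.
tank.physicsBody?.affectedByGravity = false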