Simply put, I wish to have an automated test as part of my UI test suite that can scroll the map. I am not concerned about the location, I just need to move it from its original position.
Why?
Two reasons:
1. The UI updates once the user interacts with the map, and I wish to validate these changes.
2. While I can easily verify this on a device, I also want to include automated screenshots via fastlane. Having a test perform the interaction makes that possible.
What have I tested so far?
I found the following from a related issue and tested without success:
let map = app.maps.element
let start = map.coordinate(withNormalizedOffset: CGVector(dx: 200, dy: 200))
let end = map.coordinate(withNormalizedOffset: CGVector(dx: 250, dy: 250))
start.press(forDuration: 0.01, thenDragTo: end)
I can confirm that the map element is correctly set and contains the expected information.
I can also confirm that the coordinates I am using fall within the bounds of the map on the screen. I have also tested with a wide range of other values just in case.
I'm not concerned about how it is moved, or where it is moved to. All I need is to replicate a user moving the map by 1 point.
coordinate(withNormalizedOffset:) works a bit differently: the vector is multiplied by the size of your object.
From Apple's docs:
The coordinate’s screen point is computed by adding normalizedOffset multiplied by the size of the element’s frame to the origin of the element’s frame.
That means that if you want to start dragging at the center of your map and then drag the map a bit, you have to use it like this:
let map = app.maps.element
let start = map.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5))
let end = map.coordinate(withNormalizedOffset: CGVector(dx: 0.6, dy: 0.6))
start.press(forDuration: 0.01, thenDragTo: end)
This puts the start coordinate at 0.5 * map.frame.width and 0.5 * map.frame.height from the map's origin, and the end coordinate at 0.6 * map.frame.width and 0.6 * map.frame.height.
When you run the UITest with this you'll see that it drags the map.
With your parameters it puts the start coordinate at 200 * map.frame.width and 200 * map.frame.height, which is way outside the screen, so no dragging occurs.
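Since you only need to move the map by a point or so, here is a small variation (just a sketch, using the same APIs as above plus XCUICoordinate's withOffset(_:)) that offsets the start coordinate by a fixed number of points instead of a fraction of the frame:
let map = app.maps.element
let start = map.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5))
// Drag from the center of the map by a few points; any small offset works.
let end = start.withOffset(CGVector(dx: 0, dy: 10))
start.press(forDuration: 0.01, thenDragTo: end)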
I have a SpriteKit platformer that uses a tile map for a background. The background in question is positioned 1 screen-height above the main content (it's positioned off-screen), acting as a forest canopy above the player. I accomplish that programmatically, like this:
let screenWidth = UIScreen.main.bounds.width
let screenHeight = UIScreen.main.bounds.height
let columns = 20
let rows = 1
let tileSize = CGSize(width: screenWidth, height: screenHeight)
let container = SKSpriteNode()
let tileDefinition = SKTileDefinition(texture: MainData.textureAtlas.textureNamed("someTexture"), size: CGSize(width: screenWidth, height: screenHeight))
let tileGroup = SKTileGroup(tileDefinition: tileDefinition)
let tileSet = SKTileSet(tileGroups: [tileGroup])
let layer = SKTileMapNode(tileSet: tileSet, columns: columns, rows: rows, tileSize: tileSize)
container.position = CGPoint(x: screenWidth*0.5, y: screenHeight*1.5)
container.size = CGSize(width: CGFloat(columns)*screenWidth, height: screenHeight)
container.zPosition = 3.0
layer.fill(with: tileGroup)
container.addChild(layer)
addChild(container)
A camera node follows the player.
The problem: If the player jumps up, the SKTileMapNode disappears when he comes back down. It never reappears. Its parent node, container, remains visible, so I think the problem is with the SKTileMapNode, not the container.
What I've tried:
I've tried the following, with numbers 2-5 being checked for the SKTileMapNode:
1. Setting view.shouldCullNonVisibleNodes = false.
2. Checking the alpha value. It's always 1.0.
3. Checking the position. It's always CGPoint(x: 0, y: 0).
4. Checking the anchorPoint. It's always CGPoint(x: 0.5, y: 0.5).
5. Checking the zPosition. It does not change, and there are no other nodes that could be obscuring the SKTileMapNode or its parent. Setting a higher value has no effect on the problem.
6. Checking that container remains visible. It does.
On culling:
It seems like the problem should be related to culling, but setting view.shouldCullNonVisibleNodes=false has no effect on the situation. I also checked to make sure the SKTileMapNode is always present as a child node of container. It is. I suppose this means that the node is not being culled. However, if I position container so that it's always on-screen, the problem does not occur at all; the SKTileMapNode remains visible. This leaves me very confused because it seems like these are conflicting facts.
On devices:
Using the simulator, at least, the problem does not occur on the older-style iPhones such as the SE and the iPhone 8. It only happens on the newer iPhones, such as the iPhone 11 and iPhone 12. Having access to an iPhone 11, I can confirm that the problem occurs on real devices, too.
Question: Why is my SKTileMapNode disappearing when off-camera (even with culling disabled)? How can I keep this node visible?
Thank you!
It seems that I've solved the problem.
I'm presenting my scene via SwiftUI SpriteView, which I had configured to allow background transparency, like this:
SpriteView(scene: theScene, options: [.allowsTransparency])
Removing the transparency option solved the problem:
SpriteView(scene: theScene)
Now, why should this be the case? I have no idea.
I am currently making a Sprite and I want it to animate before it disappears.
For example: I want it to animate in the sense that it disappears from the top of the block down to the bottom. To put it another way, I want the size to decrease slowly until there is nothing left, but I want to give it the appearance that it is disappearing rather than scaling to nothing.
let hand = SKSpriteNode(imageNamed: "hand")
hand.size = CGSize(width: size.width/10, height: size.height/30)
hand.position = CGPoint(x: CGFloat(posX-2)*size.width/10+offsetX, y:CGFloat(posY)*size.height/30+offsetY)
addChild(hand)
tl;dr: is it possible to make this sort of effect using SpriteKit in Swift?
Ideal Animation: https://ibb.co/sPsffmK
SKAction is an animation that is executed by a node in the scene. Actions are used to change a node in some way (like moving its position over time), but you can also use actions to change the scene, like doing a fadeout.
In your case, you can apply multiple animations:
let moveAction = SKAction.moveBy(x: -100, y: 0, duration: 1.0)
let scaleAction = SKAction.scale(to: 0.0, duration: 1.0)
let fadeAction = SKAction.fadeAlpha(to: 0.0, duration: 1.0)
hand.run(moveAction)
hand.run(scaleAction)
// hand.run(fadeAction)
In the previous example, the hand runs the move and scale animations at the same time. But you can also make the hand move to the position first, and scale it after it arrives:
let sequence = SKAction.sequence([moveAction, scaleAction])
hand.run(sequence)
SKAction offers lots of other animations; here you can find the complete list.
I have a moving background which is 1500 x 600 pixels and constantly moves vertically down the screen using this code:
let bgTexture = SKTexture(imageNamed: "bg.png")
let moveBGanimation = SKAction.move(by: CGVector(dx: 0, dy: -bgTexture.size().height), duration: 4)
let shiftBGAnimation = SKAction.move(by: CGVector(dx: 0, dy: bgTexture.size().height), duration: 0)
let moveBGForever = SKAction.repeatForever(SKAction.sequence([moveBGanimation, shiftBGAnimation]))
var i: CGFloat = 0
while i < 3 {
    bg = SKSpriteNode(texture: bgTexture)
    bg.position = CGPoint(x: self.frame.midX, y: bgTexture.size().height * i)
    bg.size.width = self.frame.width
    bg.zPosition = -2
    bg.run(moveBGForever)
    self.addChild(bg)
    i += 1
}
I now want a new background to come onto the screen after x amount of time to give the feel the player is moving into a different part of the game.
Could I put this code into a function and trigger it with NSTimer after say 20 seconds but change the start position of the new bg to be off screen?
The trouble with repeatForever actions is you don't know where they are at a certain moment. NSTimers are not as precise as you'd like, so using a timer may miss the right time or jump in too early depending on rendering speeds and frame rate.
I would suggest replacing your moveBGForever with a bgAnimation as a sequence of your move & shift actions. Then, when you run bgAnimation action, you run it with a completion block of { self.cycleComplete = true }. cycleComplete would be a boolean variable that indicates whether the action sequence is done or not. In your scene update method you can check if this variable is true and if it is, you can run the sequence action once again. Don't forget to reset the cycleComplete var to false.
Perhaps it sounds more complex but gives you control of whether you want to run one more cycle or not. If not, then you can change the texture and run the cycle again.
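A minimal sketch of that idea, assuming the bg sprite and a bgAnimation sequence built from your move & shift actions (the class and property names here are illustrative, not from your code):
import SpriteKit

class GameScene: SKScene {
    var bg: SKSpriteNode!
    var bgAnimation: SKAction!   // e.g. SKAction.sequence([moveBGanimation, shiftBGAnimation])
    var cycleComplete = false

    func runBackgroundCycle() {
        cycleComplete = false
        bg.run(bgAnimation) { [weak self] in
            self?.cycleComplete = true
        }
    }

    override func update(_ currentTime: TimeInterval) {
        if cycleComplete {
            // Decide here whether to run another cycle, swap the texture, etc., then restart.
            runBackgroundCycle()
        }
    }
}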
Alternatively you may leave it as it is and only change the texture(s) after making sure the sprite is outside the visible area, e.g. its Y position is > view size height.
In SpriteKit you can use wait actions with completion blocks. This is more straightforward than using an NSTimer.
So, to answer your question - when using actions for moving the sprites on-screen, you should not change the sprite's position at any time - this is what the actions do. You only need to make sure that you update the texture when the position is off-screen. When the time comes, obviously some of the sprites will be displayed, so you can't change the texture of all 3 at the same time. For that you may need a helper variable to check in your update cycle (as I suggested above) and replace the textures when the time is right (sprite Y pos is off-screen).
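For the timing side, a minimal sketch of the wait-action approach (switchToNextBackground() is a hypothetical helper, not something from your code):
// Inside the scene: wait 20 seconds, then trigger the background change logic.
let wait = SKAction.wait(forDuration: 20)
run(wait) { [weak self] in
    self?.switchToNextBackground()   // hypothetical helper that swaps the textures once the sprite is off-screen
}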
I am very new to SceneKit and your help will be really appreciated!
I have a 200x200 SCNView in my UIView, which is at the centre of the superview.
I want to put an SCNCylinder inside, such that the SCNCylinder covers the full SCNView. I read that all these SceneKit dimensions are defined in meters, so how do I form a relationship between the dimensions of my screen and the SCNCylinder?
I tried:
var coinNode = SCNNode()
let coinGeometry = SCNCylinder(radius: 100, height: 2)
coinNode = SCNNode(geometry: coinGeometry)
coinNode.position = SCNVector3Make(0, 0, 0)
coinScene.rootNode.addChildNode(coinNode)
let rotate90AboutZ = SCNAction.rotateByX(-CGFloat(M_PI_2), y: 0.0, z: CGFloat(M_PI_2), duration: 0.0)
coinNode.runAction(rotate90AboutZ)
ibOutletScene.scene = coinScene
But this leaves a margin between my coinScene and the ibOutletScene. How do I remove this space?
I also tried adding Camera:
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3Make(0, 0, 100)
coinScene.rootNode.addChildNode(cameraNode)
But I see random behaviour with this and the coinNode gets hidden! How should I position my camera? Or is there any other way to remove extra space from my ibOutletScene?
Edit:
This is how it looks if I don't add a camera. There is a margin between the red scene and the green coin. I tried multiple sizes for the coin, but I am unable to remove this margin unless I add a camera. But if I add a camera, I get another problem, mentioned below this screenshot.
If I don't add the camera, the rotation animation on the coin works perfectly, but if I add the camera, the rotation enlarges the coin and then shrinks it again as the animation plays. How can I rotate it on its axis without increasing the size?
I am using the following code to rotate the coin. The same code works fine without the camera, but enlarges the coin after adding the camera (check out the snapshot):
let rotate = SCNAction.rotateByX(0, y: -2 * CGFloat(M_PI_2), z: 0, duration: 2)
coinNode.runAction(rotate)
The random behavior might be caused by the last line in your first code snippet. You're starting an animation, and then adding the scene to the view.
Instead, build your scene, attach it to the view, and then start your animation. Setting a non-zero duration for the action will give you a more pleasing transition.
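For example, a sketch of that ordering, reusing your names (and the newer SCNAction.rotateBy(x:y:z:duration:) spelling):
coinScene.rootNode.addChildNode(coinNode)
ibOutletScene.scene = coinScene   // attach the scene to the view first

// ...then start the animation, with a non-zero duration for a smoother transition
let rotate = SCNAction.rotateBy(x: -.pi / 2, y: 0, z: .pi / 2, duration: 0.3)
coinNode.runAction(rotate)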
As for the extra space, it would help us understand if you post a screenshot. But you're going to have to do a bit of trigonometry.
It looks like you have a scene that you want to be blocked by a coin, that then rotates out of the way? Simulate that yourself with real objects. Put your eye down at the edge of your desk. Put a coin out a ways from your eye. How far does that coin have to be in order to block particular objects farther away on your desk?
In SceneKit, you can query the field of view of the SCNCamera. You know the size of your coin and the size of the view. Calculate the distance from the camera needed for the projected diameter of your coin to equal the width of your view. Put the coin there.
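A rough sketch of that calculation (assuming the 200x200 view from the question, a camera looking down -z at the coin, and SCNCamera's vertical field of view, which is given in degrees):
let camera = SCNCamera()
let fovRadians = Double(camera.fieldOfView) * .pi / 180
let coinRadius = 100.0

// At distance d the camera sees a half-height of d * tan(fov / 2),
// so the coin fills the view vertically when coinRadius == d * tan(fov / 2).
let distance = coinRadius / tan(fovRadians / 2)

let cameraNode = SCNNode()
cameraNode.camera = camera
cameraNode.position = SCNVector3Make(0, 0, Float(distance))
coinScene.rootNode.addChildNode(cameraNode)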
I want to use Xcode UI tests with the Fastlane Snapshot to make screenshots of the Cordova app. Basically, as my entire app is just a web view, all the Xcode UI test helper methods become irrelevant, and I just want to tap on specific points, e.g. tap(x: 10, y: 10) should produce a tap at the point {10px; 10px}.
That's probably very simple, but I can't figure out how to do it.
Thanks.
You can tap a specific point with the XCUICoordinate API. Unfortunately you can't just say "tap 10,10" referencing a pixel coordinate. You will need to create the coordinate with a relative offset to an actual view.
We can use the web view you mentioned as the reference element for the relative coordinate.
let app = XCUIApplication()
let webView = app.webViews.element
let coordinate = webView.coordinateWithNormalizedOffset(CGVector(dx: 10, dy: 10))
coordinate.tap()
Side note, but have you tried interacting with the web view directly? I've had a lot of success using app.links["Link title"].tap() or app.staticTexts["A different link title"].tap(). Here's a demo app I put together demonstrating interacting with a web view.
Update: As Michal W. pointed out in the comments, you can now tap a coordinate directly, without worrying about normalizing the offset.
let normalized = webView.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
let coordinate = normalized.withOffset(CGVector(dx: 10, dy: 10))
coordinate.tap()
Notice that I pass 0,0 to the normalized vector and then the actual point, 10,10, to the second call.
To go a little further with Joe Masilotti's approach, I put mine in an extension and gave prepositional phrases to the global and local params.
func tapCoordinate(at xCoordinate: Double, and yCoordinate: Double) {
    let normalized = app.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
    let coordinate = normalized.withOffset(CGVector(dx: xCoordinate, dy: yCoordinate))
    coordinate.tap()
}
By giving the parameters identifiable names, the call site is easy to read, for example:
tapCoordinate(at: 100, and: 200)
I found Laser's answer to work fine with Xcode 11, but made a few tweaks to easily integrate it into my testing.
extension XCUIApplication {
    func tapCoordinate(at point: CGPoint) {
        let normalized = coordinate(withNormalizedOffset: .zero)
        let offset = CGVector(dx: point.x, dy: point.y)
        let coordinate = normalized.withOffset(offset)
        coordinate.tap()
    }
}
Now, when I need to tap on a given location, I just provide a CGPoint and call this against my XCUIApplication like so:
let point = CGPoint(x: xCoord, y: yCoord)
app.tapCoordinate(at: point)
<something>.coordinate(withNormalizedOffset: CGVector.zero).withOffset(CGVector(dx:10,dy:60)).tap()
Pass .zero as the normalized vector and then the actual point (10, 60).