I have an SKSpriteNode created with the level generator.
I need to create exactly the same shape using CGPath.
self.firstSquare = childNodeWithName("square") as! SKSpriteNode
var transform = CGAffineTransformMakeRotation(self.firstSquare.zRotation)
let rect = CGRect(origin: self.firstSquare.position, size: self.firstSquare.size)
let firstSquareCGPath: CGPath = CGPathCreateWithRect(rect, &transform)
print(self.firstSquare.position)
=> firstSquare position (52.8359451293945, -52.9375076293945)
To check that my CGPath has been created as I want, I created an SKShapeNode from it:
let shape: SKShapeNode = SKShapeNode(path: firstSquareCGPath)
shape.fillColor = self.getRandomColor()
addChild(shape)
print(shape.position)
=> shape position (52.8359451293945, -52.9375076293945)
The result is not what I expected.
So I don't know whether my CGPath is wrong, or whether I lose the original sprite's properties when I convert it into an SKShapeNode.
To understand why I need to do this, please read this related question.
EDIT 1,2
I added:
shape.position = self.firstSquare.position
And I obtained:
EDIT 3:
I updated my explanations above; the anchor point of my firstSquare is now (0.5, 0.5).
Usually I don't use .sks files, I prefer to write only code, but here it seems you simply have two squares: one at the original .sks position, and the other added afterwards without any position set.
If you want to overlay your first square, simply do:
shape.position = self.firstSquare.position
EDIT:
I also noticed that your anchorPoint was set to (0.0, 0.0), but the default anchor point is (0.5, 0.5). Try to correct this parameter as well so it matches your .sks file.
Your .sks scene size might not be the same as the scene presented in the view, so:
scene.size = skView.bounds.size
Also take a look at this parameter, which should be set before you present your scene:
scene.scaleMode = .ResizeFill
In fact, the scale mode also affects positioning.
What is scaleMode?
The scaleMode of a scene determines how the scene will be updated to fill the view when the iOS device is rotated. There are four different scaleModes defined:
SKSceneScaleModeFill – the x and y axes are each scaled up or down so that the scene fills the view. The content of the scene will not maintain its current aspect ratio. This is the default if you do not specify a scaleMode for your scene.
SKSceneScaleModeAspectFill – both the x and y scale factors are calculated. The larger scale factor will be used. This ensures that the view will be filled, but will usually result in parts of the scene being cropped. Images in the scene will usually be scaled up but will maintain the correct aspect ratio.
SKSceneScaleModeAspectFit – both the x and y scale factors are calculated. The smaller scale factor will be used. This ensures that the entire scene will be visible, but will usually result in the scene being letterboxed. Images in the scene will usually be scaled down but will maintain the correct aspect ratio.
SKSceneScaleModeResizeFill – The scene is not scaled. It is simply resized so that it fits the view. Because the scene is not scaled, the images will all remain at their original size and aspect ratio. The content will all remain relative to the scene origin (lower left).
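For completeness, here is a minimal sketch (assumed names, force-unwrap kept short for brevity) of where those two settings go, in a view controller like the Xcode template that loads the scene from a .sks file, before presenting it:
if let skView = self.view as? SKView {
    let scene = GameScene(fileNamed: "GameScene")! // the .sks scene
    scene.size = skView.bounds.size
    scene.scaleMode = .ResizeFill
    skView.presentScene(scene)
}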
I found a solution: I need to use SKShapeNode(path:centered:), then apply the rotation and set the position.
self.firstSquare = childNode(withName: "square") as! SKSpriteNode
let firstSquareCGPath: CGPath = CGPath(rect: CGRect(origin: self.firstSquare.position, size: self.firstSquare.size), transform: nil)
let shape: SKShapeNode = SKShapeNode(path: firstSquareCGPath, centered: true) // important
shape.zRotation = self.firstSquare.zRotation
shape.position = self.firstSquare.position
shape.fillColor = SKColor.blue
addChild(shape)
It works perfectly!
Just a pending question: since I apply the rotation and the position last (through the SKShapeNode properties), I don't know whether the CGPath above is, on its own, set up correctly (i.e. matching my SKSpriteNode).
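For what it's worth, here is a hedged sketch (same Swift 3 style as above, assuming the anchor point is (0.5, 0.5)) of how the path itself could carry the rotation and position, by rotating a rect centred on the origin and then translating it to the sprite's position:
let spriteSize = self.firstSquare.size
let centredRect = CGRect(x: -spriteSize.width / 2, y: -spriteSize.height / 2,
                         width: spriteSize.width, height: spriteSize.height)
// The rotation is applied first, then the translation to the sprite's position.
var transform = CGAffineTransform(translationX: self.firstSquare.position.x, y: self.firstSquare.position.y)
    .rotated(by: self.firstSquare.zRotation)
let transformedPath = CGPath(rect: centredRect, transform: &transform)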
Related
Context:
there is a cursor (like your mouse pointer), an SKSpriteNode
cam is an SKCameraNode and is a child of the cursor (i.e. wherever the cursor goes, the camera follows)
cam is purposely not centered on the cursor; rather, it is offset so the cursor appears at the top of the view, with empty space remaining below
A simple schematic is given below
Goal:
The goal is to add two sprites to the lower left and lower right corners of the camera's view. The sprites will be children of the camera, so that they always stay in view.
Question
How can I position a sprite in the corner of the camera's view, especially given that an SKCameraNode does not have an anchorPoint attribute (as an SKSpriteNode does, which is what let me offset the camera as a child of the cursor)?
Note: One can position the SKSpriteNodes on the GameScene and then call .move(toParent: SKNode), which gets you closer, but it also messes with the position and scale of the SKSpriteNodes.
var cam: SKCameraNode!
let cursor = SKSpriteNode(imageNamed: "cursor")
override func didMove(to view: SKView) {
    // Set up the cursor
    cursor.setScale(spriteScale)
    cursor.position = CGPoint(x: self.frame.midX, y: raisedPositioning)
    cursor.anchorPoint = CGPoint(x: 0.5, y: 0.5)
    cursor.zPosition = CGFloat(10)
    addChild(cursor)

    // Set up the camera
    cam = SKCameraNode()
    self.camera = cam
    cam.setScale(15.0)

    // Camera is a child of the cursor so that the camera follows the cursor
    cam.position = CGPoint(x: cursor.size.width / 2, y: -(cursor.size.height * 4))
    cursor.addChild(cam)

    // Add another sprite here and make it a child of the cursor
    ...
The camera node has no size, but you can get the current screen size with the frame property:
frame.size
Then you can position your node accordingly. For example, if you want to position the center of yournode in the top-left corner, you set the position like this:
yournode.position.x = 0
yournode.position.y = frame.size.height
This is best solved with a "dummy node" that acts as the camera's screen space coordinates system.
Place this dummy node at the exact centre of the view of the camera, at a zPosition you're happy with, as a child of the camera.
...from SKCameraNode docs page:
The scene is rendered so that the camera node’s origin is placed in
the middle of the scene.
Attach all the HUD elements and other pieces of graphics and objects you want to stay in place, relative to the camera, to this dummy object, in a coordinate system that makes sense relative to the camera's "angle of view", which is its frame of view.
...from a little further down the SKCameraNode docs page:
The camera’s viewport is the same size as the scene’s viewport
(determined by the scene’s size property) and the scene is still
scaled by its scaleMode property when it is rendered into the view.
Whenever the camera moves, it moves the dummy object, and all the children of the dummy object move with the dummy object.
The biggest advantage of this approach is that you can shake or otherwise move the dummy object to create visual effects indicative of motion and explosions. It's also a neat system for removing things from view.
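A rough sketch of that setup, using the cam from the question and the scene's size property; the two corner sprites and their image names are purely illustrative:
// "Screen space" container: its origin sits at the centre of the camera's view.
let screenSpace = SKNode()
screenSpace.zPosition = 100 // keep the HUD above the game content
cam.addChild(screenSpace)

// Children of the dummy node are positioned relative to the centre of the view,
// so the corners are at ±size.width / 2 and ±size.height / 2.
let lowerLeft = SKSpriteNode(imageNamed: "cornerLeft") // hypothetical asset
lowerLeft.position = CGPoint(x: -size.width / 2 + lowerLeft.size.width / 2,
                             y: -size.height / 2 + lowerLeft.size.height / 2)
screenSpace.addChild(lowerLeft)

let lowerRight = SKSpriteNode(imageNamed: "cornerRight") // hypothetical asset
lowerRight.position = CGPoint(x: size.width / 2 - lowerRight.size.width / 2,
                              y: -size.height / 2 + lowerRight.size.height / 2)
screenSpace.addChild(lowerRight)

// For a camera shake or similar effect, animate screenSpace (or the camera itself);
// every HUD element follows automatically.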
I have 3 images:
topBg.png
midBg.png
botBg.png
I want topBg.png at the top of the scene with height = 200
midBg.png should stretch or repeat vertically to fill the remaining space
botBg.png should be at the bottom with height = 200
I have the following code:
override func didMove(to view: SKView) {
    self.bgTopSpriteNode = self.childNode(withName: "//bgTopNode") as? SKSpriteNode
    self.bgMiddleSpriteNode = self.childNode(withName: "//bgMiddleNode") as? SKSpriteNode
    self.bgBottomSpriteNode = self.childNode(withName: "//bgBottomNode") as? SKSpriteNode

    if let bgTopSpriteNode = self.bgTopSpriteNode,
       let bgMiddleSpriteNode = self.bgMiddleSpriteNode,
       let bgBottomSpriteNode = self.bgBottomSpriteNode {

        bgTopSpriteNode.size.width = self.frame.width
        bgTopSpriteNode.size.height = 200
        bgTopSpriteNode.position.x = 0

        bgMiddleSpriteNode.size.width = self.frame.width
        bgMiddleSpriteNode.size.height = self.frame.height - 400
        bgMiddleSpriteNode.position.x = 0

        bgBottomSpriteNode.size.width = self.frame.width
        bgBottomSpriteNode.size.height = 200
        bgBottomSpriteNode.position.x = 0
    }
}
But how do I set the Y position of the images? The coordinates begin at the center of the screen, not the top left, and I don't know how to convert them.
There are a couple of different ways to achieve what you're looking to do.
First, you can compute the y position of the top and the bottom of the screen by simply using size.height / 2, if you have the anchorPoint of your scene at (0.5, 0.5). (Don't use frame - use size. That way, you take into account the scaleMode of the scene.)
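As a sketch of this first approach, using the node names from your code and assuming the sprites keep their default (0.5, 0.5) anchorPoint:
// Scene anchorPoint (0.5, 0.5): y = 0 is the centre, ±size.height / 2 are the edges.
let halfHeight = size.height / 2
bgTopSpriteNode.position.y = halfHeight - bgTopSpriteNode.size.height / 2 // flush with the top
bgMiddleSpriteNode.position.y = 0 // fills the middle
bgBottomSpriteNode.position.y = -halfHeight + bgBottomSpriteNode.size.height / 2 // flush with the bottom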
It sounds like you are frustrated that the origin of the scene is in the center. If you'd like to move it to the corner, you can easily do so by setting the scene's anchorPoint property, say, to (0.0, 0.0) for the lower left corner. Then, your y-values are 0 and size.height. If you are using the .sks editor, this is exposed in the interface - you can just set it there. Otherwise, you can set it programmatically.
Finally, you can set the scaleMode of your scene to something like .aspectFill, set the size of the scene directly (say, to 1024x768 for an iPad), and just place the images wherever they need to go. This approach works particularly well with .sks files, if you are using them; when you load up a scene, you can set the size of the scene based on the aspect ratio of the view it's in to accommodate different aspect ratios. For instance, you could adopt a 320x480 "reference size" for your iPhone scenes. Whenever you load up the scene, you could set the size of the scene to be 320 points wide and however many points tall to match the aspect ratio of the device. Then, all your graphics would be produced at 320pt wide, and you could slide them up or down proportionally across the scene's size for layout. This is a little more complicated, but it's a lot easier than trying to deal with separate layout considerations for multiple devices.
I should also point out a couple of things.
You can use the anchorPoint property of a sprite to dictate where the sprite's coordinates are measured from. This is handy for cases where you want images to be flush up against something. For instance, if you want an image flush against the left side of the screen, set its position to be exactly the left side of the screen, and then set its anchorPoint.x to 0.0; this will put the left edge of the sprite against the left edge of the screen. This also works for scenes, as you encountered - moving the anchorPoint of the scene moves everything in the scene relative to its size.
You don't need three images for what you're describing. You can use a single sprite and just set its centerRect property to tell it to use the top and bottom of an image and stretch the center part vertically. You have to do a little math to set the right xScale and yScale (not width and height, IIRC), but then you can draw all of that with one sprite instead of three. This would be really handy in your case, because you could just leave the sprite at (0,0), set its scale to match the size of the entire scene, and set the centerRect property - you wouldn't have to do any positioning math at all.
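As a hedged sketch of the centerRect idea, assume a single hypothetical image "bgStretchable.png" that is 320 x 500 points, with 200-point caps at the top and bottom and a 100-point stretchable band in the middle; the code is meant to run inside the scene (so size is the scene's size):
let bg = SKSpriteNode(imageNamed: "bgStretchable.png") // hypothetical asset
let textureSize = bg.texture!.size() // 320 x 500 in this example

// centerRect is in unit coordinates: only the middle 100-point band stretches vertically,
// while the full width stretches horizontally.
bg.centerRect = CGRect(x: 0.0, y: 200.0 / textureSize.height,
                       width: 1.0, height: 100.0 / textureSize.height)

// Use scale (not size) so that centerRect is respected.
bg.xScale = size.width / textureSize.width
bg.yScale = size.height / textureSize.height
bg.position = .zero // scene anchorPoint (0.5, 0.5)
addChild(bg)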
I'm teaching myself how to do SpriteKit programming by coding up a simple game that requires that I lay out a square "game field" on the left side of a landscape-oriented scene. I'm just using the stock 1024x768 view you get when creating a new SpriteKit "Game" project in Xcode - nothing fancy. When I set up the game field in didMoveToView(), however, I'm finding the coordinate system to be a little weird. First of all, I expected I would have to place the board at (0, 0) for it to appear in the lower-left. Not so -- it turns out the game board has to be bumped up about 96 pixels in the y direction to work. So I end up with this weird code:
let gameFieldOrigin = CGPoint(x:0, y:96) // ???
let gameFieldSize = CGSize(width:560, height: 560)
let gameField = CGRect(origin: gameFieldOrigin, size: gameFieldSize)
gameBorder = SKShapeNode(rect: gameField)
gameBorder.strokeColor = UIColor.redColor()
gameBorder.lineWidth = 0.1
self.addChild(gameBorder) // "self" is the SKScene subclass GameScene
Furthermore, when I added a child to it (a ball that bounces inside the field), I assumed I would just use relative coordinates to place it in the center. However, I ended up having to use "absolute" coordinates, and I had to offset the y-coordinate by 96 again.
Another thing I noticed is when I called touch.locationInNode(gameBorder), the coordinates were again not relative to the border, and start at (0, 96) at the bottom of the border instead of (0, 0) as I would have guessed.
So what am I missing here? Am I misunderstanding something fundamental about how coordinates work?
[PS: I wanted to add the tag "SpriteKit" to this question, but I don't have enough rep. :/]
You want to use the whole screen as your coordinate system, but you're actually placing everything on a scene loaded from GameScene.sks. The right way to do this is to modify one line in your GameViewController.swift so that your scene size is the same as the screen size. Initialize the scene like this instead of unarchiving it from the .sks file:
let scene = GameScene(size: view.bounds.size)
Don't forget to remove the if-statement as well, because we don't need it any more. This way, (0, 0) is at the lower-left corner.
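A rough sketch of the resulting viewDidLoad() in GameViewController.swift (Swift 2 era syntax to match the question; the surrounding template code may differ):
override func viewDidLoad() {
    super.viewDidLoad()

    let skView = self.view as! SKView
    skView.showsFPS = true
    skView.showsNodeCount = true

    // Create the scene directly with the view's size instead of unarchiving GameScene.sks,
    // so (0, 0) ends up at the lower-left corner of the screen.
    let scene = GameScene(size: skView.bounds.size)
    scene.scaleMode = .AspectFill
    skView.presentScene(scene)
}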
To put something, e.g. aNode, in the center of the scene, you can set its position like:
aNode.position = CGPoint(x: CGRectGetMidX(self.frame), y: CGRectGetMidY(self.frame))
I'm trying to make a CAShapeLayer containing a bezier path expand along with a view (animated, of course). I am aware that I could change the size of the path with CATransform3DMakeScale, but this won't allow me to make the path an exact size (in points).
Does anybody know how to do this?
You would do this using good old-fashioned math.
Simple solution
To phrase your question differently: you have something of one size (the path/shape layer), and you want to scale it so that it becomes another size.
To know how much you want to scale along X and Y (separately), divide the size you want to fit to by the current size. You can get the bounding box of the path using:
let boundingBox = CGPathGetBoundingBox(path)
I'm assuming that you already have some size that you want to scale it to (here I'm calling mine containingSize).
Using those two, you can calculate the scale factors by dividing the dimension you are scaling to by the dimension you are scaling from:
let xScaleFactor = containingSize.width / boundingBox.width
let yScaleFactor = containingSize.height / boundingBox.height
Using those, you can create the required scale transform:
let scaleTransform = CATransform3DMakeScale(xScaleFactor, yScaleFactor, 1.0)
Scaling this shape layer using those two scale factors will scale it to fill the container size. If the container size has the same aspect ratio as the path, everything will look as expected. If not, the scaled layer will appear stretched.
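Applying it is then just a matter of setting the layer's transform (assuming shapeLayer is the CAShapeLayer in question):
shapeLayer.transform = scaleTransform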
Fitting instead of filling
This problem (unless it's what you want) can be solved by calculating a uniform scale factor, the smaller of the two, so that the scaled path fits the container instead of filling it.
We do this by finding out which scale factor is the more constrained one, and then applying it to both X and Y:
let boundingBoxAspectRatio = boundingBox.width/boundingBox.height
let viewAspectRatio = containingSize.width/containingSize.height
let scaleFactor: CGFloat
if (boundingBoxAspectRatio > viewAspectRatio) {
// Width is limiting factor
scaleFactor = containingSize.width/boundingBox.width
} else {
// Height is limiting factor
scaleFactor = containingSize.height/boundingBox.height
}
let scaleTransform = CATransform3DMakeScale(scaleFactor, scaleFactor, 1.0)
This will scale the path without changing its aspect ratio.
Scaling the layer or scaling the path?
You might also have noticed that as the shape layer was scaled, the line width scaled as well, as if it were an image. There is a difference between scaling the layer and scaling the path.
If you only want it to appear as if the path of the shape layer were scaled, then you should scale the path instead of the layer. You can do this by creating a new, transformed path and using that path with your shape layer. Note that the scale factor is still calculated using the bounding box of the unscaled path.
var affineTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor)
let transformedPath = CGPathCreateCopyByTransformingPath(path, &affineTransform)
yourShapeLayer.path = transformedPath
This will scale up the path, without affecting the line width, etc of the shape layer.
I'm still playing around with learning SpriteKit in iOS & have been doing lots of reading & lots of experimenting. I'm confused by something else I've found* regarding coordinates, frames & child nodes.
Consider this snippet of code, in which I'm trying to draw a green box around my spaceship sprite for debugging purposes:
func addSpaceship()
{
    let spaceship = SKSpriteNode.init(imageNamed: "rocketship.png")
    spaceship.name = "spaceship"

    // VERSION 1
    spaceship.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))

    let debugFrame = SKShapeNode(rect: spaceship.frame)
    debugFrame.strokeColor = SKColor.greenColor()
    spaceship.addChild(debugFrame)

    // VERSION 2
    // spaceship.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))

    spaceship.setScale(0.50)
    self.addChild(spaceship)
}
If I set the spaceship sprite's position with the line marked "VERSION 1" above, I get this:
which is clearly wrong. But if I comment out the line marked "VERSION 1" above, and instead use the line marked "VERSION 2", I get what I want:
Notice that the actual code for the lines marked Version 1 & Version 2 is identical:
spaceship.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
So why does it matter when I set the position of the spaceship sprite?
To my way of thinking, the position of the spaceship sprite is irrelevant to the placement of the debugFrame, because the debugFrame is a child of the spaceship sprite & thus its coordinates should be relative to the spaceship sprite's frame - right?
Thanks
WM
*This is somewhat related to a question I asked yesterday:
In SpriteKit on iOS, scaling a textured sprite produces an incorrect frame?
but a) I understand that one now, and b) this is a different enough that it deserves its own question.
UPDATE:
Hmmm - thanks, guys for the ideas below, but I still don't get it & maybe this will help.
I modified my code to print out the relevant positions & frames:
func addSpaceship()
{
    let spaceship = SKSpriteNode.init(imageNamed: "rocketship.png")
    spaceship.name = "spaceship"
    println("Spaceship0 Pos \(spaceship.position) Frame = \(spaceship.frame)")

    // VERSION 1 (WRONG)
    spaceship.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
    println("Spaceship1 Pos \(spaceship.position) Frame = \(spaceship.frame)")

    let debugFrame = SKShapeNode(rect: spaceship.frame)
    println("DEBUG Pos \(debugFrame.position) Frame = \(debugFrame.frame)")
    debugFrame.strokeColor = SKColor.greenColor()
    spaceship.addChild(debugFrame)

    // VERSION 2 (RIGHT)
    // spaceship.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame))
    // println("Spaceship2 Pos \(spaceship.position) Frame = \(spaceship.frame)")

    spaceship.setScale(0.50)
    self.addChild(spaceship)
}
Then I ran it both ways and got these results. Since I understand Version 2, let's start there.
Running with the "VERSION 2 (RIGHT)" code, I got:
Spaceship0 Pos (0.0,0.0) Frame = (-159.0,-300.0,318.0,600.0)
DEBUG Pos (0.0,0.0) Frame = (-159.5,-300.5,319.0,601.0)
Spaceship2 Pos (384.0,512.0) Frame = (225.0,212.0,318.0,600.0)
The spaceship node starts, by default, with its position at the bottom left of the screen (Spaceship0). Its frame is also expressed in terms of its anchor point (center) being set in the bottom left of the screen, hence the negative numbers for the origin of its frame rect.
The debug frame is then created with its position set to 0, 0 by default & its frame set to be the same as the spaceship's.
The code (Spaceship2) then moves the spaceship node to a position in the view's coordinates (384.0,512.0), and its frame's origin is moved by adding the new position to the old origin (i.e. 384 + -159 = 225).
All is well.
Unfortunately, I still don't get Version 1.
When I run with the "VERSION 1 (WRONG)" code, I get:
Spaceship0 Pos (0.0,0.0) Frame = (-159.0,-300.0,318.0,600.0)
Spaceship1 Pos (384.0,512.0) Frame = (225.0,212.0,318.0,600.0)
DEBUG Pos (0.0,0.0) Frame = (0.0,0.0,543.5,812.5)
As above, the spaceship node starts, by default, with its position at the bottom left of the screen (Spaceship0). Its frame is also expressed in terms of its anchor point (center) being set in the bottom left of the screen, hence the negative numbers for the origin of its frame rect.
The code (Spaceship1) then moves the spaceship node to a position in the view's coordinates (384.0,512.0), and its frame's origin is moved by adding the new position to the old origin (i.e. 384 + -159 = 225).
The debug frame is then created with its position set to (0, 0) by default & its frame set to have a strange width (543.5) & a strange height (812.5). Since I'm initializing debugFrame's frame with spaceship.frame (I think that's what the default initializer does), I would expect the new debugFrame.frame to be the same as the spaceship's frame - but it isn't! The debug frame width & height values apparently come from adding the actual width & height to the origin of the spaceship node's frame (543.5 = 225 + 318.5). But if that is the case, why is its frame rect origin still (0, 0) & not offset the same way (225.0 + 0 = 225.0)???
I don't get it.
You are creating the shape node using the sprite's frame, which is in scene coordinates. Because the shape will be a child of the sprite, you should create it in the sprite's coordinates. For example, if you create the SKShapeNode with the spaceship's size,
let debugFrame = SKShapeNode(rectOfSize: spaceship.size)
instead of using the spaceship's frame, debugFrame will be centered on the spaceship regardless of when you set the ship's position. Also, debugFrame will scale/move appropriately with the ship.
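Dropped into the addSpaceship() from the question, that would look something like this (a sketch of the suggestion, not a verbatim fix):
let debugFrame = SKShapeNode(rectOfSize: spaceship.size) // sprite-local coordinates, centred on the ship
debugFrame.strokeColor = SKColor.greenColor()
spaceship.addChild(debugFrame)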
In response to your "I don't get it":
Your code is OK but has a logical problem. The frame is measured relative to the parent's coordinates. Please realize that the parent of the spaceship and the parent of the debug window are different in your code.
Two ways to resolve your problem:
Add a zero offset for the debug window and use the spaceship as the parent.
The advantage of this is that the debug window will move and scale with the spaceship:
let rectDebug = CGRectMake( 0, 0, spaceship.frame.size.width, spaceship.frame.size.height)
let debugFrame = SKShapeNode(rect:rectDebug)
spaceship.addChild(debugFrame)
Add the debug window, using the spaceship's frame coordinates, to the parent of the spaceship (which is self). The disadvantage of this is that you have to move and scale the debug window yourself in code, since it will not be attached to the spaceship:
let debugFrame = SKShapeNode(rect:spaceship.frame)
self.addChild(debugFrame)
Both solutions are widely used. You should choose whichever is better in your case.
Three other problems might come up:
1. There might be errors in my code; I typed it directly into the web window without Xcode syntax checking.
2. The anchor points of the two objects could be different, so you might need to align them in your code.
3. The zPosition of the objects should also be taken into consideration, so that they are not hidden under other objects.
In response to the problem you are trying to solve, perhaps showsPhysics would help:
skView.showsFPS = YES;
skView.showsNodeCount = YES;
skView.showsPhysics = YES;
This is almost the same problem as in the other question you mentioned. frame is a property that contains a position and a size. Both of them are subject to the scaling of their ancestor nodes. Read the section "A Node Applies Many of Its Properties to Its Descendants" in https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Nodes/Nodes.html#//apple_ref/doc/uid/TP40013043-CH3-SW13
Again: never apply scaling to a node, and never move a node before its hierarchy is fully constructed, unless you want some special or weird effect.