How do I turn off dragging on sublayers while dragging parent? - framerjs

I have a large parent layer with several child rows inside. I want to prevent the rows from being dragged horizontally while the parent is being dragged vertically. My guess was to set draggable = false on TouchStart, but no luck.
cellHeight = 170
cellWidth = 640
rows = 3

container = new Layer
container.draggable = true
container.draggable.speedX = 0
container.width = cellWidth
container.height = cellHeight * rows

container.on Events.TouchMove, ->
    myLayer.draggable = false

container.on Events.TouchEnd, ->
    container.animate
        properties:
            y: 0
        curve: "spring(500, 40, 20)"

for row in [0..rows-1]
    myLayer = new Layer
        x: 0
        y: cellHeight * row
        width: cellWidth
        height: cellHeight
        backgroundColor: "#FFF"
    myLayer.superLayer = container
    myLayer.style =
        borderBottom: "1px solid black"
    myLayer.draggable = true
    myLayer.draggable.speedY = 0
    myLayer.on Events.TouchStart, ->
        container.draggable = false
    myLayer.on Events.TouchEnd, (event, touchedLayer) ->
        container.draggable = true
        touchedLayer.animate
            properties:
                x: 0
            curve: "spring(700, 40, 20)"

I have tested your script.
Are you trying to prevent moving the parent while dragging the children?
You could try this in your for loop:
myLayer.on Events.DragMove, ->
    container.draggable.enabled = false

myLayer.on Events.DragEnd, ->
    container.draggable.enabled = true

Related

Drawing multiple rectangle using DrawRect efficiently

I'm trying to draw a pattern of rectangles using DrawRect, like this:
Currently, I'm doing this like so:
class PatternView: UIView {
    override func draw(_ rect: CGRect) {
        let context = UIGraphicsGetCurrentContext()
        let numberOfBoxesPerRow = 7
        let boxSide: CGFloat = rect.width / CGFloat(numberOfBoxesPerRow)
        var yOrigin: CGFloat = 0
        var xOrigin: CGFloat = 0
        var isBlack = true
        for y in 0...numberOfBoxesPerRow - 1 {
            yOrigin = boxSide * CGFloat(y)
            for x in 0...numberOfBoxesPerRow - 1 {
                xOrigin = boxSide * CGFloat(x)
                let color = isBlack ? UIColor.red : UIColor.blue
                isBlack = !isBlack
                context?.setFillColor(color.cgColor)
                let rectnagle = CGRect(origin: .init(x: xOrigin, y: yOrigin), size: .init(width: boxSide, height: boxSide))
                context?.addRect(rectnagle)
                context?.fill([rectnagle])
            }
        }
    }
}
It's working but I'm trying to optimize it.
Any help will be highly appreciated!
It's difficult to answer "abstract" questions... which this one is, without knowing if you've run some tests / profiling to determine if this code is slow.
However, here are a couple of things you can do to speed it up:
fill the view with one color (red, in this case) and then draw only the other-color boxes
add rects to the context's path, and fill the path once
Take a look at this modification:
class PatternView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let numberOfBoxesPerRow = 7
        let boxSide: CGFloat = rect.width / CGFloat(numberOfBoxesPerRow)
        context.setFillColor(UIColor.red.cgColor)
        context.fill(bounds)
        var r: CGRect = CGRect(origin: .zero, size: CGSize(width: boxSide, height: boxSide))
        context.beginPath()
        for row in 0..<numberOfBoxesPerRow {
            r.origin.x = 0.0
            for col in 0..<numberOfBoxesPerRow {
                if (row % 2 == 0 && col % 2 == 1) || (row % 2 == 1 && col % 2 == 0) {
                    context.addRect(r)
                }
                r.origin.x += boxSide
            }
            r.origin.y += boxSide
        }
        context.setFillColor(UIColor.blue.cgColor)
        context.fillPath()
    }
}
There are other options... create a "pattern" background color... use CAShapeLayers and/or CAReplicatorLayers... for example.
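For instance, a CAShapeLayer version of the same checkerboard could look roughly like this. This is only a sketch of the alternative mentioned above, not part of the original answer; makeCheckerLayer is a made-up helper name, and the red/blue colors mirror the question's example:

import UIKit

// Sketch only: build one path containing all the "blue" squares and let a
// CAShapeLayer fill it over a red background. The caller sets the layer's
// frame and adds it as a sublayer of the view.
func makeCheckerLayer(boxSide: CGFloat, boxesPerRow n: Int) -> CAShapeLayer {
    let path = CGMutablePath()
    for row in 0..<n {
        for col in 0..<n where (row + col) % 2 == 1 {
            // only the "blue" squares go into the path
            path.addRect(CGRect(x: CGFloat(col) * boxSide,
                                y: CGFloat(row) * boxSide,
                                width: boxSide,
                                height: boxSide))
        }
    }
    let layer = CAShapeLayer()
    layer.backgroundColor = UIColor.red.cgColor   // the "fill everything red first" idea
    layer.fillColor = UIColor.blue.cgColor
    layer.path = path
    return layer
}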
Edit
The reason you are getting "blurry edges" is because, as you guessed, you're drawing on partial pixels.
If we modify the values to use whole numbers (using floor()), we can avoid that. Note that the wholeNumberBoxSide * numBoxes may then NOT be exactly equal to the view's rect, so we'll also want to inset the "grid":
class PatternView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let c1: UIColor = .white
        let c2: UIColor = .lightGray
        let numberOfBoxesPerRow = 7
        // use a whole number
        let boxSide: CGFloat = floor(rect.width / CGFloat(numberOfBoxesPerRow))
        // inset because numBoxes * boxSide may not be exactly equal to rect
        let inset: CGFloat = floor((rect.width - boxSide * CGFloat(numberOfBoxesPerRow)) * 0.5)
        context.setFillColor(c1.cgColor)
        context.fill(CGRect(x: inset, y: inset, width: boxSide * CGFloat(numberOfBoxesPerRow), height: boxSide * CGFloat(numberOfBoxesPerRow)))
        var r: CGRect = CGRect(x: inset, y: inset, width: boxSide, height: boxSide)
        context.beginPath()
        for row in 0..<numberOfBoxesPerRow {
            r.origin.x = inset
            for col in 0..<numberOfBoxesPerRow {
                if (row % 2 == 0 && col % 2 == 1) || (row % 2 == 1 && col % 2 == 0) {
                    context.addRect(r)
                }
                r.origin.x += boxSide
            }
            r.origin.y += boxSide
        }
        context.setFillColor(c2.cgColor)
        context.fillPath()
    }
}
We could also get the scale of the main screen (which will be 2x or 3x) and round the boxSide to half- or one-third points to align with the pixels... if really desired.
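A rough sketch of that idea, written as a small standalone helper (the function name and parameters are illustrative, not from the answer):

import UIKit

// Sketch only: snap a box side down to the device's pixel grid (2x / 3x screens).
func pixelAlignedBoxSide(availableWidth: CGFloat, boxesPerRow: Int) -> CGFloat {
    let scale = UIScreen.main.scale                    // 2.0 or 3.0 on Retina devices
    let rawSide = availableWidth / CGFloat(boxesPerRow)
    return floor(rawSide * scale) / scale              // round down to a whole-pixel boundary
}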
Edit 2
Additional modifications... settable colors and number of boxes.
Also, using this extension:
// extension to round CGFloat values to floor/nearest CGFloat
// so, for example
// if f == 10.6
// f.floor(nearest: 0.5) = 10.5
// f.floor(nearest: 0.3333) = 10.3333
// f.round(nearest: 0.5) = 10.5
// f.round(nearest: 0.3333) = 10.66666
extension CGFloat {
    func round(nearest: CGFloat) -> CGFloat {
        let n = 1 / nearest
        let numberToRound = self * n
        return numberToRound.rounded() / n
    }
    func floor(nearest: CGFloat) -> CGFloat {
        let intDiv = CGFloat(Int(self / nearest))
        return intDiv * nearest
    }
}
We can round the coordinates to match the screen scale.
PatternView class
class PatternView: UIView {
    var c1: UIColor = .white { didSet { setNeedsDisplay() } }
    var c2: UIColor = .lightGray { didSet { setNeedsDisplay() } }
    var numberOfBoxesPerRow = 21 { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let sc: CGFloat = 1.0 // / CGFloat(UIScreen.main.scale)
        // use a whole number
        let boxSide: CGFloat = (rect.width / CGFloat(numberOfBoxesPerRow)).floor(nearest: sc)
        // inset because numBoxes * boxSide may not be exactly equal to rect
        let inset: CGFloat = ((rect.width - boxSide * CGFloat(numberOfBoxesPerRow)) * 0.5).floor(nearest: sc)
        context.setFillColor(c1.cgColor)
        context.fill(CGRect(x: inset, y: inset, width: boxSide * CGFloat(numberOfBoxesPerRow), height: boxSide * CGFloat(numberOfBoxesPerRow)))
        var r: CGRect = CGRect(x: inset, y: inset, width: boxSide, height: boxSide)
        context.beginPath()
        for row in 0..<numberOfBoxesPerRow {
            r.origin.x = inset
            for col in 0..<numberOfBoxesPerRow {
                if (row % 2 == 0 && col % 2 == 1) || (row % 2 == 1 && col % 2 == 0) {
                    context.addRect(r)
                }
                r.origin.x += boxSide
            }
            r.origin.y += boxSide
        }
        context.setFillColor(c2.cgColor)
        context.fillPath()
    }
}
Example View Controller class
class PatternTestVC: UIViewController {
    let pvA = PatternView()
    let pvB = PatternView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBlue
        let stack = UIStackView()
        stack.axis = .vertical
        stack.spacing = 8
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            stack.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0),
            stack.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -40.0),
            stack.centerYAnchor.constraint(equalTo: g.centerYAnchor),
        ])
        [pvA, pvB].forEach { v in
            v.backgroundColor = .red
            v.numberOfBoxesPerRow = 7
            v.heightAnchor.constraint(equalTo: v.widthAnchor).isActive = true
            stack.addArrangedSubview(v)
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        pvB.numberOfBoxesPerRow += 1
    }
}
Sets up two pattern views... both start at 7 boxes... each tap anywhere increments the boxes per row in the bottom view.
Here's how it looks with 21 boxes per row (actual size - so really big image):
and zoomed-in 1600%:
Note the red borders... I set the background of the view to red, so we can see that the grid must be inset to account for the non-whole-number box size.
Edit 3
Options to avoid "blurry edges" ...
Suppose we have a view width of 209 and we want 10 boxes.
That gives us a box width of 20.9 ... which results in "blurry edges" -- so we know we need to get to a whole number.
If we round it, we'll get 21 -- 21 x 10 = 210 which will exceed the width of the view. So we need to round it down (floor()).
So...
Option 1:
Option 2:
Option 3:
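As a quick numeric sketch of the floor-and-inset math described above, applied to the 209-point / 10-box example (the values in the comments are just the worked arithmetic):

import UIKit

let viewWidth: CGFloat = 209
let numBoxes: CGFloat = 10
let rawSide = viewWidth / numBoxes             // 20.9 -> partial pixels, blurry edges
let side = floor(rawSide)                      // 20.0 -> whole points
let inset = (viewWidth - side * numBoxes) / 2  // (209 - 200) / 2 = 4.5 points per edge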
I think your first move should be to draw one big red square first, and then draw only the blue squares on top of it. That spares half the computations, even if it does not change the order of magnitude.
EDIT
Note: it is almost always the drawing itself that consumes time, rarely the other computations, so that is what we have to minimize.
So my second move would be to replace drawing individual squares with building just one more complicated bezier path that combines all the squares into a single shape, and then drawing it only once.
I do not know if it is possible to make the whole grid into one shape, but it is certainly possible to combine two columns of blue squares into one form.
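As a sketch of that single-path idea (assuming the blue squares' rects have already been computed; fillSquaresAsOnePath is a made-up name, and the call must happen inside draw(_:) where a current context exists):

import UIKit

// Sketch only: combine many squares into one bezier path and fill it once.
func fillSquaresAsOnePath(_ blueRects: [CGRect], color: UIColor = .blue) {
    let path = UIBezierPath()
    for rect in blueRects {
        path.append(UIBezierPath(rect: rect))
    }
    color.setFill()
    path.fill()   // a single fill of the combined path
}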
EDIT 2
Also, I do not understand why there are two instructions here:
context?.addRect(rectnagle)
context?.fill([rectnagle])
Shouldn't the second one alone be enough?
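A minimal sketch of that simplification, since CGContext's fill(_:) fills the given rectangle directly without relying on the current path (using the question's own variable names):

context?.setFillColor(color.cgColor)
context?.fill(rectnagle)   // fills the rect directly; no addRect call needed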

UIImageView added as subview in a UIView with clipsToBounds is not working

I have a UIView with a UIImageView added as a subview; the UIImageView is a texture that repeats. The UIView's width and height are correct, but the image extends beyond that size. I set clipsToBounds, but it's not clipping the image at all. Is there a specific order required, or what am I doing wrong that the image is not clipped inside its parent view?
let rectangleView = UIView(frame: CGRect(x: x, y: y, width: width, height: height))
rectangleView.isUserInteractionEnabled = false
if let texturesUrl = layout.Url, let url = texturesUrl.isValidURL() ? URL(string: texturesUrl) : URL(string: String(format: AppManager.shared.baseTexturesUrl, texturesUrl)) {
    let widthLimit = scale * CGFloat(layout.Width ?? 0)
    let heightLimit = scale * CGFloat(layout.Height ?? 0)
    let widthStep = scale * CGFloat(layout.TileWidth ?? layout.Width ?? 0)
    let heightStep = scale * CGFloat(layout.TileHeight ?? layout.Height ?? 0)
    var locY = CGFloat(0)
    let size = CGSize(width: widthStep, height: heightStep)
    if widthLimit > 0, heightLimit > 0 {
        while locY < heightLimit {
            var locX = CGFloat(0)
            while locX < widthLimit {
                let imageView = UIImageView()
                rectangleView.addSubview(imageView)
                imageView.contentMode = .scaleAspectFill
                imageView.translatesAutoresizingMaskIntoConstraints = false
                imageView.clipsToBounds = true
                imageView.isUserInteractionEnabled = false
                imageView.anchor(top: rectangleView.topAnchor, leading: rectangleView.leadingAnchor, bottom: nil, trailing: nil, padding: UIEdgeInsets(top: locY, left: locX, bottom: 0, right: 0), size: size)
                imageView.setImage(with: url, size: size)
                locX += widthStep
            }
            locY += heightStep
        }
    }
}
You don't need to add so many image views; just use the image as a repeating background:
rectangleView.backgroundColor = UIColor(patternImage: myImage)
See documentation for UIColor(patternImage:).
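A minimal sketch of that approach, assuming the tile image is already loaded; the resize step only matters if the tile should repeat at a size other than the image's natural point size (applyTiledBackground is an illustrative helper name):

import UIKit

// Sketch only: tile an image as a repeating background color.
func applyTiledBackground(to view: UIView, tile: UIImage, tileSize: CGSize) {
    // UIColor(patternImage:) repeats the image at its natural size,
    // so redraw it at the desired tile size first.
    let resized = UIGraphicsImageRenderer(size: tileSize).image { _ in
        tile.draw(in: CGRect(origin: .zero, size: tileSize))
    }
    view.backgroundColor = UIColor(patternImage: resized)
}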
You can do this much more efficiently with CAReplicatorLayer.
Here's a quick example:
class TileExampleViewController: UIViewController {
    let tiledView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        tiledView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(tiledView)
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            tiledView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
            tiledView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
            tiledView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
            tiledView.bottomAnchor.constraint(equalTo: g.bottomAnchor, constant: -20.0),
        ])
    }

    override func viewDidLayoutSubviews() {
        // we want to do this here, when we know the
        // size / frame of the tiledView

        // make sure we can load the image
        guard let tileImage = UIImage(named: "tileSquare") else { return }

        // let's just pick 80 x 80 for the tile size
        let tileSize: CGSize = CGSize(width: 80.0, height: 80.0)

        // create a "horizontal" replicator layer
        let hReplicatorLayer = CAReplicatorLayer()
        hReplicatorLayer.frame.size = tiledView.frame.size
        hReplicatorLayer.masksToBounds = true

        // create a "vertical" replicator layer
        let vReplicatorLayer = CAReplicatorLayer()
        vReplicatorLayer.frame.size = tiledView.frame.size
        vReplicatorLayer.masksToBounds = true

        // create a layer to hold the image
        let imageLayer = CALayer()
        imageLayer.contents = tileImage.cgImage
        imageLayer.frame.size = tileSize

        // add the imageLayer to the horizontal replicator layer
        hReplicatorLayer.addSublayer(imageLayer)

        // add the horizontal replicator layer to the vertical replicator layer
        vReplicatorLayer.addSublayer(hReplicatorLayer)

        // how many "tiles" do we need to fill the width
        let hCount = tiledView.frame.width / tileSize.width
        hReplicatorLayer.instanceCount = Int(ceil(hCount))

        // shift each image instance right by tileSize width
        hReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(
            tileSize.width, 0, 0
        )

        // how many "rows" do we need to fill the height
        let vCount = tiledView.frame.height / tileSize.height
        vReplicatorLayer.instanceCount = Int(ceil(vCount))

        // shift each "row" down by tileSize height
        vReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(
            0, tileSize.height, 0
        )

        // add the vertical replicator layer as a sublayer
        tiledView.layer.addSublayer(vReplicatorLayer)
    }
}
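One caveat worth noting (not part of the original answer): viewDidLayoutSubviews can run more than once, so in a real app you would probably call super and guard against building the layers repeatedly, roughly like this:

private var didTile = false

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    guard !didTile else { return }
    didTile = true
    // ... build the image layer and the two replicator layers exactly as above ...
}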
I used this tile image:
and we get this result with let tileSize: CGSize = CGSize(width: 80.0, height: 80.0):
with let tileSize: CGSize = CGSize(width: 120.0, height: 160.0):
with let tileSize: CGSize = CGSize(width: 40.0, height: 40.0):

Getting the size of a UIView to programmatically add subviews (with calculated X, Y, width and height)

In a segue I have a UIView with constraints that lay it out to the same size as the screen on the top, left and right, as well as to a button on the bottom:
Then from the ViewController I programmatically add UIButtons like this:
override func viewDidLoad() {
    super.viewDidLoad()
    drawCards()
}

private func drawCards() {
    let deck = Deck()
    var rowIndex = 0
    var columnIndex = 0
    let cardWidth = Int(Float((deckView.bounds.width) / 7))
    let cardHeight = Int(Float(cardWidth) * 1.5)
    for card in deck.cards {
        let x = (cardWidth / 2) * columnIndex
        let y = cardHeight * rowIndex
        let button = CardButton()
        button.delegate = self
        button.setCard(card: card, x: x, y: y, width: cardWidth, height: cardHeight)
        deckView.addSubview(button)
        columnIndex += 1
        if columnIndex == 13 {
            columnIndex = 0
            rowIndex += 1
        }
    }
}
The expected behavior is that this function takes the size of the UIView (called deckView), inserts the UIButtons stacked over each other, and produces the same layout consistently across devices. However, it looks as expected on iPhone but not on iPad, where the UIButtons go outside of the UIView:
This is, as far as I can tell, because the X and Y values aren't calculated correctly, since the width of the UIView (deckView) isn't retrieved correctly (or as expected).
Why isn't the width of the UIView being retrieved as expected? (i.e. why does it work as expected on iPhone but not on iPad?)
Move your drawCards() call from viewDidLoad to viewDidLayoutSubviews.
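A minimal sketch of that change; by the time viewDidLayoutSubviews runs, deckView.bounds has its final, device-specific size (the didDrawCards flag is just an assumed guard so the cards aren't laid out again on later layout passes):

private var didDrawCards = false

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // deckView.bounds is now correct for the current device
    if !didDrawCards {
        didDrawCards = true
        drawCards()
    }
}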

SpriteKit: suggestions for rounding corners of unconventional grid?

The goal is to round the corners of an unconventional grid similar to the following:
https://s-media-cache-ak0.pinimg.com/564x/50/bc/e0/50bce0cb908913ebc2cf630d635331ef.jpg
https://s-media-cache-ak0.pinimg.com/564x/7e/29/ee/7e29ee80e957ec22bbba630ccefbfaa2.jpg
Instead of a grid with four corners like a conventional grid, these grids have multiple corners in need of rounding.
The brute force approach would be to identify tiles with corners exposed then round those corners either with a different background image or by clipping the corners in code.
Is there a cleaner approach?
The grid is rendered for an iOS app in a SpriteKit SKScene.
This is a really interesting question. You can build your matrix with different approaches, but in every case you have to handle the 4 background corners of each tile whenever the grid changes.
Suppose you start with a GameViewController like this (no SKS file is loaded, and the scene's anchorPoint is zero):
import UIKit
import SpriteKit

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        guard let view = self.view as! SKView? else { return }
        view.ignoresSiblingOrder = true
        view.showsFPS = true
        view.showsNodeCount = true
        let scene = GameScene(size: view.bounds.size)
        scene.scaleMode = .resizeFill
        scene.anchorPoint = CGPoint.zero
        view.presentScene(scene)
    }
}
My idea is to build a matrix like this:
import SpriteKit

class GameScene: SKScene {
    private var sideTile: CGFloat = 40
    private var gridWidthTiles: Int = 5
    private var gridHeightTiles: Int = 6

    override func didMove(to view: SKView) {
        self.drawMatrix()
    }

    func drawMatrix() {
        var index = 1
        let matrixPos = CGPoint(x: 50, y: 150)
        for i in 0..<gridHeightTiles {
            for j in 0..<gridWidthTiles {
                let tile = getTile()
                tile.name = "tile\(index)"
                addChild(tile)
                tile.position = CGPoint(x: matrixPos.x + (sideTile * CGFloat(j)), y: matrixPos.y + (sideTile * CGFloat(i)))
                let label = SKLabelNode.init(text: "\(index)")
                label.fontSize = 12
                label.fontColor = .white
                tile.addChild(label)
                label.position = CGPoint(x: tile.frame.size.width / 2, y: tile.frame.size.height / 2)
                index += 1
            }
        }
    }

    func getTile() -> SKShapeNode {
        let tile = SKShapeNode(rect: CGRect(x: 0, y: 0, width: sideTile, height: sideTile), cornerRadius: 10)
        tile.fillColor = .gray
        tile.strokeColor = .gray
        return tile
    }
}
Output:
Now we can construct a background for each tile of our matrix.
We can make the same tile node but with a different color (perhaps lighter than the tile color) and without the corner radius. If we split this background into 4 parts we have:
left - bottom background tile
left - top background tile
right - bottom background tile
right - top background tile
Code for a typical background tile:
func getBgTileCorner() -> SKShapeNode {
    let bgTileCorner = SKShapeNode(rect: CGRect(x: 0, y: 0, width: sideTile / 2, height: sideTile / 2))
    bgTileCorner.fillColor = .lightGray
    bgTileCorner.strokeColor = .lightGray
    bgTileCorner.lineJoin = .round
    bgTileCorner.isAntialiased = false
    return bgTileCorner
}
Now, with an SKCropNode, we can obtain only the corner, using the background tile and the tile:
func getCorner(at angle: String) -> SKCropNode {
    let cropNode = SKCropNode()
    let tile = getTile()
    let bgTile = getBgTileCorner()
    cropNode.addChild(bgTile)
    tile.position = CGPoint.zero
    let tileFrame = CGRect(x: 0, y: 0, width: sideTile, height: sideTile)
    switch angle {
    case "leftBottom": bgTile.position = CGPoint(x: tile.position.x, y: tile.position.y)
    case "rightBottom": bgTile.position = CGPoint(x: tile.position.x + tileFrame.size.width / 2, y: tile.position.y)
    case "leftTop": bgTile.position = CGPoint(x: tile.position.x, y: tile.position.y + tileFrame.size.height / 2)
    case "rightTop": bgTile.position = CGPoint(x: tile.position.x + tileFrame.size.width / 2, y: tile.position.y + tileFrame.size.height / 2)
    default: break
    }
    tile.fillColor = self.backgroundColor
    tile.strokeColor = self.backgroundColor
    tile.lineWidth = 0.0
    bgTile.lineWidth = 0.0
    tile.blendMode = .replace
    cropNode.position = CGPoint.zero
    cropNode.addChild(tile)
    cropNode.maskNode = bgTile
    return cropNode
}
Output for a typical corner:
let corner = getCorner(at: "leftBottom")
addChild(corner)
corner.position = CGPoint(x:50,y:50)
Now we can rebuild the drawMatrix function with the corners for each tile:
func drawMatrix() {
    var index = 1
    let matrixPos = CGPoint(x: 50, y: 150)
    for i in 0..<gridHeightTiles {
        for j in 0..<gridWidthTiles {
            let tile = getTile()
            tile.name = "tile\(index)"
            let bgTileLB = getCorner(at: "leftBottom")
            let bgTileRB = getCorner(at: "rightBottom")
            let bgTileLT = getCorner(at: "leftTop")
            let bgTileRT = getCorner(at: "rightTop")
            bgTileLB.name = "bgTileLB\(index)"
            bgTileRB.name = "bgTileRB\(index)"
            bgTileLT.name = "bgTileLT\(index)"
            bgTileRT.name = "bgTileRT\(index)"
            addChild(bgTileLB)
            addChild(bgTileRB)
            addChild(bgTileLT)
            addChild(bgTileRT)
            addChild(tile)
            tile.position = CGPoint(x: matrixPos.x + (sideTile * CGFloat(j)), y: matrixPos.y + (sideTile * CGFloat(i)))
            let label = SKLabelNode.init(text: "\(index)")
            label.fontSize = 12
            label.fontColor = .white
            tile.addChild(label)
            label.position = CGPoint(x: tile.frame.size.width / 2, y: tile.frame.size.height / 2)
            bgTileLB.position = CGPoint(x: tile.position.x, y: tile.position.y)
            bgTileRB.position = CGPoint(x: tile.position.x, y: tile.position.y)
            bgTileLT.position = CGPoint(x: tile.position.x, y: tile.position.y)
            bgTileRT.position = CGPoint(x: tile.position.x, y: tile.position.y)
            index += 1
        }
    }
}
Output:
Very similar to your screenshots (here is a two-tile example):
Now, when you want to remove a tile, you can decide which corners to remove or keep, because each tile also has its own 4 corner nodes:
Output:
Okay, the grid creation process isn't really relevant to this. You just need some way of differentiating between a blank spot in the grid and a filled spot. In my example I have a Tile object with a type of .blank or .regular. You need all 15 images (you can change the style to whatever you like, although they have to be in the same order and numbered 1 through 15). The code uses bit calculation to figure out which image to use as a background and offsets the background image by half the tile size in x and y. Other than that it is pretty self-explanatory. Those background images were my test images created while developing this, so feel free to use them.
struct GridPosition {
    var col: Int = 0
    var row: Int = 0
}

class GameScene: SKScene {
    private var backgroundLayer = SKNode()
    private var tileLayer = SKNode()
    private var gridSize: CGSize = CGSize.zero
    private var gridRows: Int = 0
    private var gridCols: Int = 0
    private var gridBlanks = [Int]()
    private var tiles = [[Tile]]()
    var tileSize: CGFloat = 150

    override func didMove(to view: SKView) {
        backgroundLayer.zPosition = 1
        addChild(backgroundLayer)
        tileLayer.zPosition = 2
        addChild(tileLayer)
        gridRows = 8
        gridCols = 11
        gridBlanks = [0,1,3,4,5,6,7,9,10,11,12,13,15,16,17,19,20,21,22,23,31,32,33,36,40,43,56,64,67,69,70,71,72,73,75,77,78,79,82,85,86,87]
        createGrid()
        createBackgroundTiles()
    }

    func createGrid() {
        for row in 0 ..< gridRows {
            var rowContent = [Tile]()
            for col in 0 ..< gridCols {
                let currentTileLocation: Int = row * gridCols + col
                var tile: Tile
                if gridBlanks.contains(currentTileLocation) {
                    tile = Tile(row: row, col: col, type: .blank, tileSize: tileSize)
                } else {
                    tile = Tile(row: row, col: col, type: .regular, tileSize: tileSize)
                }
                tile.position = positionInGrid(column: col, row: row)
                tile.zPosition = CGFloat(100 + gridRows - row)
                tileLayer.addChild(tile)
                rowContent.append(tile)
            }
            tiles.append(rowContent)
        }
    }

    func tileByGridPosition(_ gridPos: GridPosition) -> Tile {
        return tiles[gridPos.row][gridPos.col]
    }

    func positionInGrid(column: Int, row: Int) -> CGPoint {
        let startX = 0 - CGFloat(gridCols / 2) * tileSize
        let startY = 0 - CGFloat(gridRows / 2) * tileSize + tileSize / 2
        return CGPoint(
            x: startX + CGFloat(column) * tileSize,
            y: startY + CGFloat(row) * tileSize)
    }

    func createBackgroundTiles() {
        for row in 0...gridRows {
            for col in 0...gridCols {
                let topLeft = (col > 0) && (row < gridRows) && tileByGridPosition(GridPosition(col: col - 1, row: row)).type == .regular
                let bottomLeft = (col > 0) && (row > 0) && tileByGridPosition(GridPosition(col: col - 1, row: row - 1)).type == .regular
                let topRight = (col < gridCols) && (row < gridRows) && tileByGridPosition(GridPosition(col: col, row: row)).type == .regular
                let bottomRight = (col < gridCols) && (row > 0) && tileByGridPosition(GridPosition(col: col, row: row - 1)).type == .regular
                // The tiles are named from 0 to 15, according to the bitmask that is made by combining these four values.
                let value = (topLeft ? 1 : 0) | ((topRight ? 1 : 0) << 1) | ((bottomLeft ? 1 : 0) << 2) | ((bottomRight ? 1 : 0) << 3)
                // Value 0 means no surrounding tiles, so no background image is needed
                if value != 0 {
                    var gridPosition = positionInGrid(column: col, row: row)
                    gridPosition.x -= tileSize / 2
                    gridPosition.y -= tileSize / 2
                    let backgroundNode = SKSpriteNode(imageNamed: "background_tile_\(value)")
                    backgroundNode.size = CGSize(width: tileSize, height: tileSize)
                    backgroundNode.alpha = 0.8
                    backgroundNode.position = gridPosition
                    backgroundNode.zPosition = 1
                    backgroundLayer.addChild(backgroundNode)
                }
            }
        }
    }
}
class Tile: SKSpriteNode {
    private var row = 0
    private var col = 0
    var type: TileType = .blank

    init(row: Int, col: Int, type: TileType, tileSize: CGFloat) {
        super.init(texture: nil, color: .clear, size: CGSize(width: tileSize, height: tileSize))
        // store the grid position and type
        self.row = row
        self.col = col
        self.type = type
        let square = SKSpriteNode(color: type.color, size: size)
        square.zPosition = 1
        addChild(square)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
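The answer references a TileType with a color property that isn't shown; a minimal assumed definition might look like this:

import UIKit

// Assumed definition (not from the original answer): the code above only needs
// a blank/regular distinction and a per-type color.
enum TileType {
    case blank
    case regular

    var color: UIColor {
        switch self {
        case .blank:   return .clear
        case .regular: return .darkGray   // placeholder color
        }
    }
}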
The only thing that comes to mind is to evaluate a node's display at the moment it touches another node, and to update the affected neighbors at the same time.
What we did was lay out the tiles, then call this function to round the corners of exposed tiles.
// Rounds corners of exposed tiles. UIKit inverts coordinates so top is bottom and vice-versa.
fileprivate func roundTileCorners() {
    // Get all tiles
    var tiles = [TileClass]()
    tileLayer.enumerateChildNodes(withName: ".//*") { node, stop in
        if node is TileClass {
            tiles.append(node as! TileClass)
        }
    }
    // Round corners for each exposed tile
    for t in tiles {
        // Convert tile's position to root coordinates
        let convertedPos = convert(t.position, from: t.parent!)
        // Set neighbor positions
        var leftNeighborPos = convertedPos
        leftNeighborPos.x -= tileWidth
        var rightNeighborPos = convertedPos
        rightNeighborPos.x += tileWidth
        var topNeighborPos = convertedPos
        topNeighborPos.y += tileHeight
        var bottomNeighborPos = convertedPos
        bottomNeighborPos.y -= tileHeight
        // Set default value for rounding
        var cornersToRound: UIRectCorner?
        // No neighbor below & to left? Round bottom left.
        if !isTileAtPoint(point: bottomNeighborPos) && !isTileAtPoint(point: leftNeighborPos) {
            cornersToRound = cornersToRound?.union(.topLeft) ?? .topLeft
        }
        // No neighbor below & to right? Round bottom right.
        if !isTileAtPoint(point: bottomNeighborPos) && !isTileAtPoint(point: rightNeighborPos) {
            cornersToRound = cornersToRound?.union(.topRight) ?? .topRight
        }
        // No neighbor above & to left? Round top left.
        if !isTileAtPoint(point: topNeighborPos) && !isTileAtPoint(point: leftNeighborPos) {
            cornersToRound = cornersToRound?.union(.bottomLeft) ?? .bottomLeft
        }
        // No neighbor above & to right? Round top right.
        if !isTileAtPoint(point: topNeighborPos) && !isTileAtPoint(point: rightNeighborPos) {
            cornersToRound = cornersToRound?.union(.bottomRight) ?? .bottomRight
        }
        // Any corners to round?
        if cornersToRound != nil {
            t.roundCorners(cornersToRound: cornersToRound!)
        }
    }
}

// Returns true if a tile exists at <point>. Assumes <point> is in root node's coordinates.
fileprivate func isTileAtPoint(point: CGPoint) -> Bool {
    return nodes(at: point).contains(where: { $0 is BoardTileNode })
}
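The roundCorners(cornersToRound:) call isn't shown in the answer; one plausible sketch, using a rounded-corner UIBezierPath as the mask of an SKCropNode (names, radius, and the re-parenting approach are all illustrative, not the poster's actual implementation):

import UIKit
import SpriteKit

// Hypothetical sketch only: wrap the sprite in an SKCropNode whose mask is a
// shape with the requested corners rounded.
extension SKSpriteNode {
    func roundCorners(cornersToRound: UIRectCorner, radius: CGFloat = 8.0) {
        // Rect centered on the node (default anchorPoint is 0.5, 0.5)
        let rect = CGRect(x: -size.width / 2, y: -size.height / 2, width: size.width, height: size.height)
        let path = UIBezierPath(roundedRect: rect,
                                byRoundingCorners: cornersToRound,
                                cornerRadii: CGSize(width: radius, height: radius))
        let mask = SKShapeNode(path: path.cgPath)
        mask.fillColor = .white
        mask.lineWidth = 0

        // Re-parent the sprite under a crop node that uses the rounded path as its mask.
        let crop = SKCropNode()
        crop.maskNode = mask
        crop.position = position
        crop.zPosition = zPosition
        parent?.addChild(crop)
        removeFromParent()
        position = .zero
        crop.addChild(self)
    }
}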

Trying to animate an SKNode inside a UIView

I have programmatically created an SKView inside my UIView. Now I want to animate new SKNodes in all directions, starting in the middle (this works), depending on the view that is passed in. Everything works fine except that the ending position of the SKNode is wrong: instead of shooting in all directions, it goes into a corner, way outside the view's boundaries. It should never go outside the view's boundaries. I am converting the CGPoint from the view to my scene.
This is my code:
func animateExplosion(sender: UIButton) {
    let star = SKSpriteNode(imageNamed: "ExplodingStar")
    let starHeight = sender.frame.height / 3
    star.size = CGSize(width: starHeight, height: starHeight)
    var position = sender.frame.origin
    position = self.sceneScene.convertPoint(fromView: position)
    star.position = position
    let minimumDuration = 1
    let maximumDuration = 2
    let randomDuration = TimeInterval(RandomInt(min: minimumDuration * 100, max: maximumDuration * 100) / 100)
    let fireAtWill = SKAction.move(to: getRandomPosition(view: sender), duration: randomDuration)
    let rotation = SKAction.rotate(byAngle: CGFloat(randomAngle()), duration: Double(randomDuration))
    let fadeOut = SKAction.fadeOut(withDuration: randomDuration)
    let scaleTo = SKAction.scale(to: starHeight * 2, duration: randomDuration)
    let group = SKAction.group([fireAtWill, rotation]) // for testing: no fade out or remove from parent
    let sequence = SKAction.sequence([group])
    if randomAmountOfExplodingStars > 0 {
        randomAmountOfExplodingStars -= 1
        sceneScene.addChild(star)
        star.run(sequence)
        animateExplosion(sender: sender)
    }
}
Here is getRandomPosition, where the bug probably is:
func getRandomPosition(view: UIView) -> CGPoint {
    let direction = RandomInt(min: 1, max: 4)
    var randomX = Int()
    var randomY = Int()
    if direction == 1 {
        randomX = Int(view.frame.width / 2)
        randomY = RandomInt(min: -Int(view.frame.height / 2), max: Int(view.frame.height / 2))
    }
    if direction == 2 {
        randomX = RandomInt(min: -Int(view.frame.width / 2), max: Int(view.frame.width / 2))
        randomY = Int(view.frame.height / 2)
    }
    if direction == 3 {
        randomX = -Int(view.frame.width / 2)
        randomY = RandomInt(min: -Int(view.frame.height / 2), max: Int(view.frame.height / 2))
    }
    if direction == 4 {
        randomX = RandomInt(min: -Int(view.frame.width / 2), max: Int(view.frame.width / 2))
        randomY = -Int(view.frame.height / 2)
    }
    var randomPosition = CGPoint(x: randomX, y: randomY)
    //randomPosition = self.sceneScene.convertPoint(fromView: randomPosition)
    return randomPosition
}
I know the code looks awful, but it should do the trick, right? The passed-in view is a UIButton inside of a UIView. The SKView shares exactly the same constraints as that UIView. The animation should start in the middle and end somewhere near the boundaries of the passed view.
Yay, got it to work finally. So dumb that I did not notice it before: the sender is always a child inside a UIView, which is smaller. The getRandomPosition return value was fine, but the SKNode should not move to the returned point; it should be moved by it.
Updated code:
let randomPositionOfSender = getRandomPosition(view: sender)
let fireAtWill = SKAction.moveBy(x: randomPositionOfSender.x, y: randomPositionOfSender.y, duration: randomDuration)
