I am applying a pan gesture to a UILabel, using a one-finger drag to scale the label up and down.
I tried using the scale to add to and subtract from the font size, but I am not getting the exact result.
@objc func handleRotateGesture(_ recognizer: UIPanGestureRecognizer) {
    let touchLocation = recognizer.location(in: self.superview)
    let center = self.center

    switch recognizer.state {
    case .began:
        self.deltaAngle = CGFloat(atan2f(Float(touchLocation.y - center.y), Float(touchLocation.x - center.x))) - CGAffineTransformGetAngle(self.transform)
        self.initialBounds = self.bounds
        self.initialDistance = CGPointGetDistance(point1: center, point2: touchLocation)

    case .changed:
        let angle = atan2f(Float(touchLocation.y - center.y), Float(touchLocation.x - center.x))
        let angleDiff = Float(self.deltaAngle) - angle
        self.transform = CGAffineTransform(rotationAngle: CGFloat(-angleDiff))

        if let label = self.contentView as? UILabel {
            var scale = CGPointGetDistance(point1: center, point2: touchLocation) / self.initialDistance
            let minimumScale = CGFloat(self.minimumSize) / min(self.initialBounds.size.width, self.initialBounds.size.height)
            scale = max(scale, minimumScale)
            let scaledBounds = CGRectScale(self.initialBounds, wScale: scale, hScale: scale)

            var pinchScale = scale
            pinchScale = round(pinchScale * 1000) / 1000.0
            let fontSize = label.font.pointSize

            if scale > minimumScale {
                if self.bounds.height > scaledBounds.height {
                    // fontSize = fontSize - pinchScale
                    label.font = UIFont(name: label.font.fontName, size: fontSize - pinchScale)
                } else {
                    // fontSize = fontSize + pinchScale
                    label.font = UIFont(name: label.font.fontName, size: fontSize + pinchScale)
                }
            } else {
                label.font = UIFont(name: label.font.fontName, size: fontSize)
            }
            print("PinchScale -- \(pinchScale), FontSize = \(fontSize)")
            self.bounds = scaledBounds
        } else {
            var scale = CGPointGetDistance(point1: center, point2: touchLocation) / self.initialDistance
            let minimumScale = CGFloat(self.minimumSize) / min(self.initialBounds.size.width, self.initialBounds.size.height)
            scale = max(scale, minimumScale)
            let scaledBounds = CGRectScale(self.initialBounds, wScale: scale, hScale: scale)
            self.bounds = scaledBounds
        }
        self.setNeedsDisplay()

    default:
        break
    }
}
I know this can be achieved with a UIPinchGestureRecognizer, but how can I get the same effect with a UIPanGestureRecognizer? Any help would be appreciated. Thanks.
Right now you’re adding points to and removing points from the label’s current font size. I’d suggest a simpler pattern, more consistent with what you’re doing elsewhere: capture the initial point size at the start of the gesture and then, as the user’s finger moves, apply a scale to that saved value. So, I’d suggest:
Define a property to save the initial font size:
var initialPointSize: CGFloat = 0
In the .began of the gesture, capture the current size:
initialPointSize = (contentView as? UILabel)?.font?.pointSize ?? 0
In the .changed, adjust the font size:
let pinchScale = (scale * 1000).rounded() / 1000
label.font = label.font.withSize(initialPointSize * pinchScale)
As an aside, I’m not sure that it’s necessary to round the scale to three decimal places, but you had that in your original code snippet, so I preserved that.
Personally, I’d follow the same basic pattern with the transform:
Define properties to capture the starting angle and the view’s current transform:
var initialAngle: CGFloat = 0
var initialTransform: CGAffineTransform = .identity
In the .began, capture the starting angle and the existing transform:
initialAngle = atan2(touchLocation.y - center.y, touchLocation.x - center.x)
initialTransform = transform
In the .changed, update the transform:
let angle = atan2(touchLocation.y - center.y, touchLocation.x - center.x)
transform = initialTransform.rotated(by: angle - initialAngle)
This saves you from reverse engineering the angle associated with the current transform with CGAffineTransformGetAngle; instead you just apply rotated(by:) to the saved transform.
So this, like the point size and the bounds, represents a consistent pattern: Capture the starting value in .began and just apply whatever change you need in .changed.
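Putting those pieces together, here is a rough sketch of what the whole handler might reduce to. This is only illustrative: it assumes the same properties and helper functions as your original snippet (CGPointGetDistance, CGRectScale, contentView, minimumSize), plus the initialAngle / initialTransform / initialPointSize properties suggested above.

```swift
// Sketch of the "capture in .began, apply in .changed" pattern.
// Assumes the helpers and properties from the original snippet.
@objc func handleRotateGesture(_ recognizer: UIPanGestureRecognizer) {
    let touchLocation = recognizer.location(in: superview)
    let center = self.center

    switch recognizer.state {
    case .began:
        // Capture all starting values once, up front.
        initialAngle = atan2(touchLocation.y - center.y, touchLocation.x - center.x)
        initialTransform = transform
        initialBounds = bounds
        initialDistance = CGPointGetDistance(point1: center, point2: touchLocation)
        initialPointSize = (contentView as? UILabel)?.font?.pointSize ?? 0

    case .changed:
        // Apply deltas relative to the captured values.
        let angle = atan2(touchLocation.y - center.y, touchLocation.x - center.x)
        transform = initialTransform.rotated(by: angle - initialAngle)

        var scale = CGPointGetDistance(point1: center, point2: touchLocation) / initialDistance
        let minimumScale = CGFloat(minimumSize) / min(initialBounds.width, initialBounds.height)
        scale = max(scale, minimumScale)
        bounds = CGRectScale(initialBounds, wScale: scale, hScale: scale)

        if let label = contentView as? UILabel {
            label.font = label.font.withSize(initialPointSize * scale)
        }
        setNeedsDisplay()

    default:
        break
    }
}
```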
A few unrelated observations:
All those self. references are not needed; they just add noise that makes the code harder to read.
All of those casts between CGFloat and Float are unnecessary. If you use the atan2 function instead of atan2f, everything just works with CGFloat without any casting. E.g., instead of:
let angle = atan2f(Float(touchLocation.y - center.y), Float(touchLocation.x - center.x))
let angleDiff = Float(self.deltaAngle) - angle
self.transform = CGAffineTransform(rotationAngle: CGFloat(-angleDiff))
You can just do:
let angle = atan2(touchLocation.y - center.y, touchLocation.x - center.x)
transform = CGAffineTransform(rotationAngle: angle - deltaAngle)
You actually don’t need any of those casts that are currently sprinkled throughout the code snippet.
All of those bounds.size.width and bounds.size.height can just be bounds.width and bounds.height respectively, again removing noise from the code.
When adjusting the font size, rather than:
label.font = UIFont(name: label.font.fontName, size: fontSize)
You should just use:
label.font = label.font.withSize(fontSize)
That way you preserve all of the essential characteristics of the font (the weight, etc.), and just adjust the size.
In your if let label = contentView as? UILabel test, the same five lines of code in your else clause appear in the if clause, too. You should just move those common lines before the if-else statement, and then you can lose the else clause entirely, simplifying your code.
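For example, here is a sketch of what the .changed logic might collapse to after that refactoring (using the same helpers from your snippet and the initialPointSize property suggested earlier; names are illustrative):

```swift
// Common scaling work, hoisted out of the if-else.
var scale = CGPointGetDistance(point1: center, point2: touchLocation) / initialDistance
let minimumScale = CGFloat(minimumSize) / min(initialBounds.width, initialBounds.height)
scale = max(scale, minimumScale)
bounds = CGRectScale(initialBounds, wScale: scale, hScale: scale)

// Only the label-specific work stays conditional; no else clause needed.
if let label = contentView as? UILabel {
    label.font = label.font.withSize(initialPointSize * scale)
}
setNeedsDisplay()
```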
Related
I have a camera node that is scaled at 1. When I run the game, I want it to scale down (i.e. zoom out) but keep the "floor" at the bottom. How would I go about pinning the camera node to the bottom of the scene and effectively zooming "up" (difficult to explain)? So the bottom of the scene stays at the bottom but the rest zooms out.
I have had a go with SKConstraints but am not having any luck (I'm quite new to SpriteKit):
func setConstraints(with scene: SKScene, and frame: CGRect, to node: SKNode?) {
    let scaledSize = CGSize(width: scene.size.width * xScale, height: scene.size.height * yScale)
    let boardContentRect = frame

    let xInset = min((scaledSize.width / 2), boardContentRect.width / 2)
    let yInset = min((scaledSize.height / 2), boardContentRect.height / 2)
    let insetContentRect = boardContentRect.insetBy(dx: xInset, dy: yInset)

    let xRange = SKRange(lowerLimit: insetContentRect.minX, upperLimit: insetContentRect.maxX)
    let yRange = SKRange(lowerLimit: insetContentRect.minY, upperLimit: insetContentRect.maxY)
    let levelEdgeConstraint = SKConstraint.positionX(xRange, y: yRange)

    if let node = node {
        let zeroRange = SKRange(constantValue: 0.0)
        let positionConstraint = SKConstraint.distance(zeroRange, to: node)
        constraints = [positionConstraint, levelEdgeConstraint]
    } else {
        constraints = [levelEdgeConstraint]
    }
}
then calling the function with:
gameCamera.setConstraints(with: self, and: scene!.frame, to: nil)
(This was code from a tutorial I was following) The "setConstraints" function is an extension of SKCameraNode
I'm not sure this will give me the correct output, but when I run the code to scale, it just zooms from the middle and shows the surrounding area of the scene .sks file.
gameCamera.run(SKAction.scale(to: 0.2, duration: 100))
This is the code to scale the gameCamera
EDIT: The answer below is nearly what I was looking for; this is my updated version:
let scaleTo: CGFloat = 0.2
let duration: CGFloat = 100

let scaleTop = SKAction.customAction(withDuration: TimeInterval(duration)) { node, elapsedTime in
    let newScale = 1 - ((elapsedTime / duration) * (1 - scaleTo))
    let currentScaleY = node.yScale
    let currentHeight = node.scene!.size.height * currentScaleY
    let newHeight = node.scene!.size.height * newScale
    let heightDiff = newHeight - currentHeight
    let yOffset = heightDiff / 2
    node.setScale(newScale)
    node.position.y += yOffset
}
You cannot use a constraint because your scale size is dynamic.
Instead you need to move your camera position to give the illusion it is only scaling in 3 directions.
To do this, I would recommend creating a custom action.
let scaleTo: CGFloat = 2.0
let duration: TimeInterval = 1.0
var currentNodeScale: CGFloat = 0.0

let scaleTop = SKAction.customAction(withDuration: duration) { node, elapsedTime in
    if elapsedTime == 0 { currentNodeScale = node.xScale }
    let newScale = currentNodeScale - ((elapsedTime / CGFloat(duration)) * (currentNodeScale - scaleTo))
    let currentYScale = node.yScale
    let currentHeight = node.scene!.size.height * currentYScale
    let newHeight = node.scene!.size.height * newScale
    let heightDiff = newHeight - currentHeight
    let yOffset = heightDiff / 2
    node.setScale(newScale)
    node.position.y += yOffset
}
What this is doing is comparing the new height of your camera with the old height, and moving it 1/2 the distance.
So if your current height is 1, your camera sees [-1/2, 1/2] on the y axis. If your new scaled height is 2, then your camera sees [-1, 1] on the y axis. We need to move the camera up so that it sees [-1/2, 3/2], meaning we need to add 1/2. So we do 2 - 1, which is 1, then go 1/2 that distance. This makes our yOffset 1/2, which you add to the camera's position.
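That offset arithmetic can be sketched in isolation (plain math, no SpriteKit required; the function name is mine, not from the original):

```swift
// Worked example of the camera y-offset math described above.
// The visible height is sceneHeight * scale; when the scale changes,
// moving the camera up by half the height difference keeps the
// bottom edge of the view fixed.
func yOffset(sceneHeight: Double, oldScale: Double, newScale: Double) -> Double {
    let currentHeight = sceneHeight * oldScale
    let newHeight = sceneHeight * newScale
    return (newHeight - currentHeight) / 2
}

// Scene height 1, scale 1 -> 2: camera must move up by 0.5.
print(yOffset(sceneHeight: 1, oldScale: 1, newScale: 2)) // prints 0.5
```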
I have a view with an image which responds to pinch, rotation and pan gestures. I want that pinching and rotation of the image would be done with respect to the anchor point placed in the middle of the screen, exactly as it is done using Xcode iPhone simulator by pressing the options key. How can I place the anchor point in the middle of the screen if the centre of the image might be scaled and panned to a different location?
Here's my scale and rotate gesture functions:
@IBAction func pinchGesture(_ gestureRecognizer: UIPinchGestureRecognizer) {
    // Move the anchor point of the view's layer to the touch point
    // so that scaling the view and the layer becomes simpler.
    self.adjustAnchorPoint(gestureRecognizer: gestureRecognizer)

    // Scale the view by the current scale factor.
    if gestureRecognizer.state == .began {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = gestureRecognizer.scale
    }

    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        let currentScale = gestureRecognizer.view!.layer.value(forKeyPath: "transform.scale")! as! CGFloat

        // Constants to adjust the max/min values of zoom
        let kMaxScale: CGFloat = 15.0
        let kMinScale: CGFloat = 1.0

        var newScale = 1 - (lastScale - gestureRecognizer.scale)
        newScale = min(newScale, kMaxScale / currentScale)
        newScale = max(newScale, kMinScale / currentScale)

        let transform = (gestureRecognizer.view?.transform)!.scaledBy(x: newScale, y: newScale)
        gestureRecognizer.view?.transform = transform

        lastScale = gestureRecognizer.scale // Store the previous scale factor for the next pinch gesture call
        scale = currentScale // Save current scale for later use
    }
}

@IBAction func rotationGesture(_ gestureRecognizer: UIRotationGestureRecognizer) {
    // Move the anchor point of the view's layer to the center of the
    // user's two fingers. This creates a more natural looking rotation.
    self.adjustAnchorPoint(gestureRecognizer: gestureRecognizer)

    // Apply the rotation to the view's transform.
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        gestureRecognizer.view?.transform = (gestureRecognizer.view?.transform.rotated(by: gestureRecognizer.rotation))!

        // Set the rotation to 0 to avoid compounding the
        // rotation in the view's transform.
        angle += gestureRecognizer.rotation // Save rotation angle for later use
        gestureRecognizer.rotation = 0.0
    }
}

func adjustAnchorPoint(gestureRecognizer: UIGestureRecognizer) {
    if gestureRecognizer.state == .began {
        let view = gestureRecognizer.view
        let locationInView = gestureRecognizer.location(in: view)
        let locationInSuperview = gestureRecognizer.location(in: view?.superview)

        // Move the anchor point to the touch point and change the position of the view
        view?.layer.anchorPoint = CGPoint(x: locationInView.x / (view?.bounds.width)!, y: locationInView.y / (view?.bounds.height)!)
        view?.center = locationInSuperview
    }
}
EDIT
I see that people aren't eager to get into this. Let me help by sharing some progress I've made in the past few days.
Firstly, I wrote a function centerAnchorPoint which correctly places the anchor point of an image to the centre of the screen regardless of where that anchor point was previously. However the image must not be scaled or rotated for it to work.
func setAnchorPoint(_ anchorPoint: CGPoint, forView view: UIView) {
    var newPoint = CGPoint(x: view.bounds.size.width * anchorPoint.x, y: view.bounds.size.height * anchorPoint.y)
    var oldPoint = CGPoint(x: view.bounds.size.width * view.layer.anchorPoint.x, y: view.bounds.size.height * view.layer.anchorPoint.y)

    newPoint = newPoint.applying(view.transform)
    oldPoint = oldPoint.applying(view.transform)

    var position = view.layer.position
    position.x -= oldPoint.x
    position.x += newPoint.x
    position.y -= oldPoint.y
    position.y += newPoint.y

    view.layer.position = position
    view.layer.anchorPoint = anchorPoint
}

func centerAnchorPoint(gestureRecognizer: UIGestureRecognizer) {
    if gestureRecognizer.state == .ended {
        view?.layer.anchorPoint = CGPoint(x: photo.bounds.midX / (view?.bounds.size.width)!, y: photo.bounds.midY / (view?.bounds.size.height)!)
    }
}

func centerAnchorPoint() {
    // Position of the current anchor point
    let currentPosition = photo.layer.anchorPoint
    self.setAnchorPoint(CGPoint(x: 0.5, y: 0.5), forView: photo)

    // Center of the image
    let imageCenter = CGPoint(x: photo.center.x, y: photo.center.y)
    self.setAnchorPoint(currentPosition, forView: photo)

    // Center of the screen
    let screenCenter = CGPoint(x: UIScreen.main.bounds.midX, y: UIScreen.main.bounds.midY)

    // Distance between the centers
    let distanceX = screenCenter.x - imageCenter.x
    let distanceY = screenCenter.y - imageCenter.y

    // Find new anchor point
    let newAnchorPoint = CGPoint(x: (imageCenter.x + 2 * distanceX) / UIScreen.main.bounds.size.width, y: (imageCenter.y + 2 * distanceY) / UIScreen.main.bounds.size.height)
    //photo.layer.anchorPoint = newAnchorPoint
    self.setAnchorPoint(newAnchorPoint, forView: photo)

    let dotPath = UIBezierPath(ovalIn: CGRect(x: photo.layer.position.x - 2.5, y: photo.layer.position.y - 2.5, width: 5, height: 5))
    layer.path = dotPath.cgPath
}
The setAnchorPoint function is used here to move the anchor point to a new position without moving the image.
Then I updated panGesture function by inserting this at the end of it:
if gestureRecognizer.state == .ended {
    self.centerAnchorPoint()
}
EDIT 2
Ok, so I'll try to simply explain the code above.
What I am doing is:
Finding the distance between the center of the photo and the center of the screen
Apply this formula to find the new position of anchor point:
newAnchorPointX = (imageCenter.x-distanceX)/screenWidth + distanceX/screenWidth
Then do the same for y position.
Set this point as a new anchor point without moving the photo using setAnchorPoint function
As I said this works great if the image is not scaled. If it is, then the anchor point does not stay at the center.
Strangely enough, distanceX and distanceY don't depend on the scale value in a simple way, so something like this doesn't quite work:
newAnchorPointX = (imageCenter.x-distanceX)/screenWidth + distanceX/(scaleValue*screenWidth)
EDIT 3
I figured out the scaling. It appears that the correct scale factor has to be:
scaleFactor = photo.frame.size.width/photo.layer.bounds.size.width
I used this instead of scaleValue and it worked splendidly.
So panning and scaling are done. The only thing left is rotation, but it appears that it's the hardest.
The first thing I tried was applying a rotation matrix to the increments in the X and Y directions, like this:
let incrementX = (distanceX)/(screenWidth)
let incrementY = (distanceY)/(screenHeight)
// Applying rotation matrix
let incX = incrementX*cos(angle)+incrementY*sin(angle)
let incY = -incrementX*sin(angle)+incrementY*cos(angle)
// Find new anchor point
let newAnchorPoint = CGPoint(x: 0.5+incX, y: 0.5+incY)
However this doesn't work.
Since the question is mostly answered in the edits, I don't want to repeat myself too much.
Broadly what I changed from the code posted in the original question:
Deleted calls to adjustAnchorPoint function in pinch and rotation gesture functions.
Placed this piece of code in pan gesture function, so that the anchor point would update its position after panning the photo:
if gestureRecognizer.state == .ended {
    self.centerAnchorPoint()
}
Updated centerAnchorPoint function to work for rotation.
A fully working centerAnchorPoint function (rotation included):
func centerAnchorPoint() {
    // Scale factor
    photo.transform = photo.transform.rotated(by: -angle)
    let curScale = photo.frame.size.width / photo.layer.bounds.size.width
    photo.transform = photo.transform.rotated(by: angle)

    // Position of the current anchor point
    let currentPosition = photo.layer.anchorPoint
    self.setAnchorPoint(CGPoint(x: 0.5, y: 0.5), forView: photo)

    // Center of the image
    let imageCenter = CGPoint(x: photo.center.x, y: photo.center.y)
    self.setAnchorPoint(currentPosition, forView: photo)

    // Center of the screen
    let screenCenter = CGPoint(x: UIScreen.main.bounds.midX, y: UIScreen.main.bounds.midY)

    // Distance between the centers
    let distanceX = screenCenter.x - imageCenter.x
    let distanceY = screenCenter.y - imageCenter.y

    // Apply the rotation matrix to the distances
    let distX = distanceX * cos(angle) + distanceY * sin(angle)
    let distY = -distanceX * sin(angle) + distanceY * cos(angle)

    let incrementX = distX / (curScale * UIScreen.main.bounds.size.width)
    let incrementY = distY / (curScale * UIScreen.main.bounds.size.height)

    // Find new anchor point
    let newAnchorPoint = CGPoint(x: 0.5 + incrementX, y: 0.5 + incrementY)
    self.setAnchorPoint(newAnchorPoint, forView: photo)
}
The key thing to notice here is that the rotation matrix has to be applied to distanceX and distanceY. The scale factor is also updated so that it remains the same throughout the rotation.
So I have a project where I take an image, display it, and, depending on where you tap, it gives back the RGB values.
However, the display on the iPhone is much smaller than the image resolution, so the image gets scaled down.
I tried to circumvent this by multiplying the coordinates of the tap location on the UIImageView by image.x/imageview.x and image.y/imageview.y respectively.
But the colors are still way off.
My code:
@IBAction func imageTap(_ sender: UITapGestureRecognizer) {
    if sender.state == .ended {
        let location = sender.location(in: imageDisplay)

        let widthFactor = image.size.width / imageDisplay.frame.width
        let heightFactor = image.size.height / imageDisplay.frame.height
        let scaledWidth = location.x * widthFactor
        let scaledHeight = location.y * heightFactor
        let scaledLocation = CGPoint(x: scaledWidth, y: scaledHeight)

        let colorAtLocation = image.getPixelColor(pos: scaledLocation)
        let rgbValues = colorAtLocation.rgb()
        let rValue = rgbValues!.red
        let gValue = rgbValues!.green
        let bValue = rgbValues!.blue

        redValue.text = "\(String(describing: rValue))"
        greenValue.text = "\(String(describing: gValue))"
        blueValue.text = "\(String(describing: bValue))"
        colorViewer.backgroundColor = colorAtLocation
    }
}
How should I calculate the coordinates correctly?
Possible places where this could go wrong:
The (0, 0) origin isn't where I think it is
The UIImageView's content mode shouldn't be Aspect Fit
The image scaling isn't as linear as I thought
This is all I could think of, but how would I go about checking these?
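For the Aspect Fit item in particular: with .scaleAspectFit the image is letterboxed inside the image view, so a tap must first be mapped into the rect the image is actually drawn in before applying the resolution factors. A sketch of that check inside the tap handler, using AVMakeRect from AVFoundation (imageDisplay, image, and location are the names from the snippet above):

```swift
import AVFoundation

// Rect the image actually occupies inside the image view under .scaleAspectFit.
let drawnRect = AVMakeRect(aspectRatio: image.size, insideRect: imageDisplay.bounds)

// Ignore taps that land in the letterbox bars.
guard drawnRect.contains(location) else { return }

// Map the tap into image-pixel coordinates.
let scaledLocation = CGPoint(
    x: (location.x - drawnRect.minX) * image.size.width / drawnRect.width,
    y: (location.y - drawnRect.minY) * image.size.height / drawnRect.height
)
```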
I am trying to have my SKCameraNode start in the bottom left corner, and have my background anchored there as well. When I set the anchor point to CGPointZero, here is what my camera shows:
EDIT:
Interestingly, if I set my anchor point to CGPoint(x: 0.5, y: 0.2), I get it mostly lined up. Does it have to do with the camera scale?
EDIT 2:
If I change my scene size, I can change where the background nodes show up. Usually they appear with their anchor point placed in the center of the screen, which implies the anchorPoint of the scene is in the center of the screen.
I am new to using the SKCameraNode, so I am probably setting its constraints incorrectly.
Here are my camera constraints. I don't have my player added yet, but I want to set my world up before I add my player. Again, I am trying to have everything anchored at CGPointZero.
//Camera Settings
func setCameraConstraints() {
    guard let camera = camera else { return }

    if let player = worldLayer.childNodeWithName("playerNode") as? EntityNode {
        let zeroRange = SKRange(constantValue: 0.0)
        let playerNode = player
        let playerLocationConstraint = SKConstraint.distance(zeroRange, toNode: playerNode)

        let scaledSize = CGSize(width: SKMViewSize!.width * camera.xScale, height: SKMViewSize!.height * camera.yScale)
        let boardContentRect = worldFrame

        let xInset = min((scaledSize.width / 2), boardContentRect.width / 2)
        let yInset = min((scaledSize.height / 2), boardContentRect.height / 2)
        let insetContentRect = boardContentRect.insetBy(dx: xInset, dy: yInset)

        let xRange = SKRange(lowerLimit: insetContentRect.minX, upperLimit: insetContentRect.maxX)
        let yRange = SKRange(lowerLimit: insetContentRect.minY, upperLimit: insetContentRect.maxY)

        let levelEdgeConstraint = SKConstraint.positionX(xRange, y: yRange)
        levelEdgeConstraint.referenceNode = worldLayer

        camera.constraints = [playerLocationConstraint, levelEdgeConstraint]
    }
}
I have been using a Udemy course to learn the SKCameraNode, and I have been trying to modify it.
Here is where I set the SKMViewSize:
convenience init(screenSize: CGSize, canvasSize: CGSize) {
    self.init()

    if screenSize.height < screenSize.width {
        SKMViewSize = screenSize
    } else {
        SKMViewSize = CGSize(width: screenSize.height, height: screenSize.width)
    }

    SKMSceneSize = canvasSize
    SKMScale = (SKMViewSize!.height / SKMSceneSize!.height)

    let scale: CGFloat = min(SKMSceneSize!.width / SKMViewSize!.width, SKMSceneSize!.height / SKMViewSize!.height)
    SKMUIRect = CGRect(x: ((((SKMViewSize!.width * scale) - SKMSceneSize!.width) * 0.5) * -1.0),
                       y: ((((SKMViewSize!.height * scale) - SKMSceneSize!.height) * 0.5) * -1.0),
                       width: SKMViewSize!.width * scale,
                       height: SKMViewSize!.height * scale)
}
How can I get both the camera to be constrained by my world, and have everything anchored to the CGPointZero?
So I have this SpriteKit game, coded in Swift 2. The game includes colored circles (green, red, purple, yellow, blue) that fall down the screen, all starting from the same height but at different x positions. When a circle hits the bottom of the screen, the respective method is called. The problem I am having is that the random x position can sometimes cut half of a circle off because it is at the very edge of the screen. How can I prevent the circles from clipping the sides of the screen? Here are the methods that are called when the circles hit the bottom of the screen.
func changeGreen() {
    Green.position.y = frame.size.height * 0.9
    let PositionX = arc4random_uniform(UInt32(self.frame.width))
    Green.position.x = CGFloat(PositionX)
}

func changeRed() {
    Red.position.y = frame.size.height * 0.9
    let PositionX = arc4random_uniform(UInt32(self.frame.width))
    Red.position.x = CGFloat(PositionX)
}

func changeBlue() {
    Blue.position.y = frame.size.height * 0.9
    let PositionX = arc4random_uniform(UInt32(self.frame.width))
    Blue.position.x = CGFloat(PositionX)
}

func changeYellow() {
    Yellow.position.y = frame.size.height * 0.9
    let PositionX = arc4random_uniform(UInt32(self.frame.width))
    Yellow.position.x = CGFloat(PositionX)
}

func changePurple() {
    Purple.position.y = frame.size.height * 0.9
    let PositionX = arc4random_uniform(UInt32(self.frame.width))
    Purple.position.x = CGFloat(PositionX)
}
Assuming position.x is the center of the circle, I think something like this might work:
let maxX = 350  // this is your frame width
let radius = 50 // radius of the circle

var positionX = Int(arc4random_uniform(UInt32(maxX - radius))) // max position of x is taken care of here

let pointsFromLeft = positionX - radius
if pointsFromLeft < 0 {
    positionX -= pointsFromLeft // move to the right if necessary
}
You have to offset the radius of the circle from both ends of the frame so that the circles never get clipped:
let radius: CGFloat = 20 // radius of your circle
let positionX = radius + CGFloat(arc4random_uniform(UInt32(frame.width - 2 * radius)))
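That formula can be wrapped in a small helper so each change function shares it (the helper name is mine, not from the original):

```swift
import Foundation

// Returns a random x position such that a circle of the given radius
// stays fully inside [0, width] — i.e. x is always in [radius, width - radius).
func randomCircleX(width: CGFloat, radius: CGFloat) -> CGFloat {
    return radius + CGFloat(arc4random_uniform(UInt32(width - 2 * radius)))
}

// e.g. in changeGreen():
// Green.position.x = randomCircleX(width: frame.width, radius: 20)
```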