Transforming a CAShapeLayer to a specific size in Swift - iOS

I'm trying to make a CAShapeLayer with a bezier path in it expand with a view (which is animated, of course). I am aware that I could change the size of the path with CATransform3DMakeScale(_:_:_:), but this won't allow me to make the path an exact size (in points).
Does anybody know how to do this?

You would do this using good old fashioned math.
Simple solution
To phrase your question differently: you have something of one size (the path/shape layer), and you want to scale it so that it becomes another size.
To know how much you want to scale along X and Y (separately), you divide the size you want to fit to by the current size. You can get the bounding box of the path using:
let boundingBox = CGPathGetBoundingBox(path)
I'm assuming that you already have some size that you want to scale it to (here I'm calling mine containingSize).
Using those two, you can calculate the two scale factors by dividing the dimension you are scaling to by the dimension you are scaling from:
let xScaleFactor = containingSize.width / boundingBox.width
let yScaleFactor = containingSize.height / boundingBox.height
Using those, you can create the required scale transform
let scaleTransform = CATransform3DMakeScale(xScaleFactor, yScaleFactor, 1.0)
Scaling this shape layer using those two scale factors will scale the shape layer to fill the container size. If the container size has the same aspect ratio as the path, everything will look as expected. If not, the scaled layer will appear stretched.
Fitting instead of filling
This problem (unless it's what you want) can be solved by calculating a uniform scale factor, the smaller of the two, so that the scaled path fits the container instead of filling it.
We do this by finding out which scale factor is the more constrained one, and then applying it to both X and Y:
let boundingBoxAspectRatio = boundingBox.width/boundingBox.height
let viewAspectRatio = containingSize.width/containingSize.height
let scaleFactor: CGFloat
if boundingBoxAspectRatio > viewAspectRatio {
    // Width is the limiting factor
    scaleFactor = containingSize.width / boundingBox.width
} else {
    // Height is the limiting factor
    scaleFactor = containingSize.height / boundingBox.height
}
let scaleTransform = CATransform3DMakeScale(scaleFactor, scaleFactor, 1.0)
This will scale the path without changing its aspect ratio.
Scaling the layer or scaling the path?
You might also have noticed that as the shape layer was scaled, the line width scaled as well, as if it were an image. There is a difference between scaling the layer and scaling the path.
If you only want it to appear as if the path of the shape layer is scaled, then you should scale the path instead of the layer. You can do this by creating a new, transformed path and using that path with your shape layer. Note that the scale factor is still calculated using the bounding box of the unscaled path.
var affineTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor)
let transformedPath = CGPathCreateCopyByTransformingPath(path, &affineTransform)
yourShapeLayer.path = transformedPath
This will scale up the path without affecting the line width, etc. of the shape layer.
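Putting the pieces together, here is a minimal sketch of a helper that scales a path to fit a given size, using the same pre-Swift-3 APIs as the snippets above (names like scaledPathToFit, originalPath and containerView are just illustrative):
import UIKit

func scaledPathToFit(path: CGPath, containingSize: CGSize) -> CGPath? {
    // Bounding box of the original, unscaled path
    let boundingBox = CGPathGetBoundingBox(path)
    // Pick the smaller (more constrained) scale factor so the path fits the container
    let boundingBoxAspectRatio = boundingBox.width / boundingBox.height
    let containerAspectRatio = containingSize.width / containingSize.height
    let scaleFactor: CGFloat
    if boundingBoxAspectRatio > containerAspectRatio {
        scaleFactor = containingSize.width / boundingBox.width   // width is limiting
    } else {
        scaleFactor = containingSize.height / boundingBox.height // height is limiting
    }
    // Transform the path itself so the shape layer's line width is unaffected
    var affineTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor)
    return CGPathCreateCopyByTransformingPath(path, &affineTransform)
}
You would then use it as yourShapeLayer.path = scaledPathToFit(originalPath, containingSize: containerView.bounds.size).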

Related

SCNBox – Map a texture onto five of six sides

I'm trying to create something like canvas in SceneKit using an SCNBox, with a UIImage "wrapped" around from one surface and onto the four others adjacent to it.
The only way I can currently think to do this would be to chop up the UIImage into five separate images and put those onto the sides as materials, but I'm sure there must be an easier way.
Can anyone steer me in the right direction here? The box will have a separate texture/material on the side opposite the "front".
The easiest way would probably be to create a custom geometry with matching texture coordinates using +geometryWithSources:elements:
You can use the contentsTransform property of SCNMaterialProperty to adjust the texture coordinates from your image onto the SCNBox.
Some explanations with a simplified example:
Let's suppose that you are using a cube and you have a texture like this:
By dividing it into rectangles, you will have:
You want to skip rectangles 1, 3, 7, 9 and cover your cube with this texture.
For this, just normalize the size of a side of your SCNBox to between 0 and 1, and use it to set the scale and translation in the contentsTransform matrix.
I have a cube with equal sides in my example, so each side maps to a third of the whole texture. To take rectangle 5 from the texture:
let normalizedWidth: Float = 1.0 / 3.0
let normalizedHeight: Float = 1.0 / 3.0
let xOffset: Float = 1 // skip the first column (rectangles 1, 4, 7)
let yOffset: Float = 1 // skip the first row (rectangles 1, 2, 3)
let sideMaterial = SCNMaterial()
sideMaterial.diffuse.contents = textureImage
let scaleMatrix = SCNMatrix4MakeScale(normalizedWidth, normalizedHeight, 1.0)
sideMaterial.diffuse.contentsTransform = SCNMatrix4Translate(scaleMatrix,
    normalizedWidth * xOffset, normalizedHeight * yOffset, 0.0)
You can fill five sides with configured materials and the last one (on the back) with just a color, then assign them to the materials property of your SCNBox.
As a result you will have:
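To make this concrete, here is a rough sketch of how the five textured sides plus a plain back could be built and assigned. Here textureImage (the 3×3 atlas above), boxNode (the node holding the SCNBox) and the gray back color are assumptions, and the row/column mapping may need flipping depending on how your texture is oriented:
import SceneKit
import UIKit

// Build a material showing one rectangle of the 3x3 atlas (column/row are 0-based)
func material(from textureImage: UIImage, column: Int, row: Int) -> SCNMaterial {
    let normalizedWidth: Float = 1.0 / 3.0
    let normalizedHeight: Float = 1.0 / 3.0
    let material = SCNMaterial()
    material.diffuse.contents = textureImage
    let scaleMatrix = SCNMatrix4MakeScale(normalizedWidth, normalizedHeight, 1.0)
    material.diffuse.contentsTransform = SCNMatrix4Translate(scaleMatrix,
        normalizedWidth * Float(column), normalizedHeight * Float(row), 0.0)
    return material
}

// SCNBox expects its materials in the order front, right, back, left, top, bottom
let backMaterial = SCNMaterial()
backMaterial.diffuse.contents = UIColor.gray
boxNode.geometry?.materials = [
    material(from: textureImage, column: 1, row: 1), // front  - rectangle 5
    material(from: textureImage, column: 2, row: 1), // right  - rectangle 6
    backMaterial,                                    // back   - plain color
    material(from: textureImage, column: 0, row: 1), // left   - rectangle 4
    material(from: textureImage, column: 1, row: 0), // top    - rectangle 2 or 8, depending on orientation
    material(from: textureImage, column: 1, row: 2)  // bottom - rectangle 8 or 2
]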

How to create a CGPath from an SKSpriteNode in Swift

I have an SKSpriteNode created with the level generator.
I need to create exactly the same shape using CGPath.
self.firstSquare = childNodeWithName("square") as! SKSpriteNode
var transform = CGAffineTransformMakeRotation(self.firstSquare.zRotation)
let rect = CGRect(origin: self.firstSquare.position, size: self.firstSquare.size)
let firstSquareCGPath: CGPath = CGPathCreateWithRect(rect, &transform)
print(self.firstSquare.position)
=> firstSquare position (52.8359451293945, -52.9375076293945)
To check if my CGPath has been created as I want, I created a SKShapeNode with my CGPath:
let shape: SKShapeNode = SKShapeNode(path: firstSquareCGPath)
shape.fillColor = self.getRandomColor()
addChild(shape)
print(shape.position)
=> shape position (52.8359451293945, -52.9375076293945)
The result is not what I expected.
So I don't know if my CGPath is wrong, or if it's when I convert it in SKShapeNode that I lose the initial sprite properties.
To understand why I need to do that, please read this stack
EDIT 1,2
I added:
shape.position = self.firstSquare.position
And I obtained:
EDIT 3:
I updated my explanations above; the anchor point of my firstSquare is now (0.5, 0.5).
Usually I don't use .sks files, I prefer to write only code, but here it seems you simply have two squares: one with the original .sks position, the other added afterwards without any position.
If you want to overlay your first square simply:
shape.position = self.firstSquare.position
EDIT:
I also saw that your anchorPoint was set to (0.0, 0.0), but the default anchor point is (0.5, 0.5). Try to correct this parameter as well so it matches your .sks file.
Your .sks scene size could differ from the view size, so:
scene.size = skView.bounds.size
Please also take a look at this parameter, to be changed before you present your scene:
scene.scaleMode = .ResizeFill
In fact, the scale mode also affects positioning.
What is scaleMode?
The scaleMode of a scene determines how the scene is scaled to fit the view (for example when the view size changes, such as on device rotation). There are four different scale modes defined:
SKSceneScaleModeFill – the x and y axes are each scaled up or down so that the scene fills the view. The content of the scene will not maintain its current aspect ratio. This is the default if you do not specify a scaleMode for your scene.
SKSceneScaleModeAspectFill – both the x and y scale factors are calculated. The larger scale factor will be used. This ensures that the view will be filled, but will usually result in parts of the scene being cropped. Images in the scene will usually be scaled up but will maintain the correct aspect ratio.
SKSceneScaleModeAspectFit – both the x and y scale factors are calculated. The smaller scale factor will be used. This ensures that the entire scene will be visible, but will usually result in the scene being letterboxed. Images in the scene will usually be scaled down but will maintain the correct aspect ratio.
SKSceneScaleModeResizeFill – The scene is not scaled. It is simply resized so that its fits the view. Because the scene is not scaled, the images will all remain at their original size and aspect ratio. The content will all remain relative to the scene origin (lower left).
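For reference, a minimal sketch of configuring the scene before presenting it (pre-Swift-3 style; the GameScene class and the "GameScene" file name are assumptions):
import SpriteKit

// In the view controller, before presenting the scene
let skView = self.view as! SKView
if let scene = GameScene(fileNamed: "GameScene") {
    scene.size = skView.bounds.size // match the view's size
    scene.scaleMode = .ResizeFill   // the scale mode affects positioning too
    skView.presentScene(scene)
}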
I found a solution: I need to use SKShapeNode(path:centered:), then apply the rotation and set the position.
self.firstSquare = childNode(withName: "square") as! SKSpriteNode
let firstSquareCGPath = CGPath(rect: CGRect(origin: self.firstSquare.position, size: self.firstSquare.size), transform: nil)
let shape = SKShapeNode(path: firstSquareCGPath, centered: true) // important
shape.zRotation = self.firstSquare.zRotation
shape.position = self.firstSquare.position
shape.fillColor = SKColor.blue
addChild(shape)
It works perfectly!
Just one pending question: since I set the rotation and the position last (through the SKShapeNode properties), I don't know whether my CGPath above is set up correctly (i.e. like my SKSpriteNode).

UIViews with subviews: calculating position when scaling

I have a view that I draw using Core Graphics, which in this example is a segmented circle. The user can touch the circle to create a point along its circumference; this creates a subview on the UIView that contains the circle graphic.
Then I've implemented a pinch-zoom gesture which causes the circle to redraw to its new size. I've seen most implementations of pinch zoom use transform properties, but I've chosen to redraw because it's all vectors and gives a clean result.
My problem is repositioning the point views. I calculate the required position of those points based on the scale of the parent view: as it changes, I update the x/y coordinates of the point views. However, it seems there are some precision issues: as the circle shape grows, the points drift so they aren't right on the line anymore. Here are a couple of examples:
This is where the circle is at 100% scale. Note the perfect positioning of that black point. But when you zoom in...
The point drifts off-line.
And here's some code. I derive the new size of the circle from the pinch gesture's scale (I modify it a bit to constrain and slow it down for UI purposes, hence deltaScale) and then draw it like so:
let currentSize = self.shape!.bounds.size
let newSize = CGSize(width: self.originalSize.width * deltaScale, height: self.originalSize.height * deltaScale)
self.shape?.frame.size = newSize
self.shape?.center = self.originalCentre!
self.shape?.shapeSize = newSize
self.shape?.setNeedsDisplay()
As the pinch-zoom gesture completes, I calculate the factor:
let xScale = Double(newSize.width) / Double(currentSize.width)
let yScale = Double(newSize.height) / Double(currentSize.height)
self.points = self.points.map { (thisPoint) -> UIView in
    thisPoint.center = CGPoint(x: Double(thisPoint.center.x) * xScale, y: Double(thisPoint.center.y) * yScale)
    return thisPoint
}
(I was using CGFloats, but switched to Doubles in the hope that it would give me the precision I needed. Alas.)
You're accumulating roundoff errors. This is getting executed repeatedly:
thisPoint.center = CGPoint(x: Double(thisPoint.center.x) * xScale, y: Double(thisPoint.center.y) * yScale)
Repeating any calculation of the form 'x=f(x)' with anything less than unlimited precision will result in drift.
The trick is to not have 'thisPoint.center' on both sides of the equals sign. The best way to do that is to make thisPoint.center a pure function of some other state. A commenter suggested storing the desired angle; that would work well. Then you could do:
thisPoint.center = f(thisPoint.someRadians), where 'f' converts from polar to rectangular coordinates, factoring in the scale of the circle.
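A minimal sketch of that idea, under the assumption that each point view remembers the angle it was created at (PointView and angleInRadians are hypothetical names):
import UIKit

// Hypothetical point view that stores its own angle on the circle
class PointView: UIView {
    var angleInRadians: CGFloat = 0
}

// Recompute each point's centre from the stored angle and the circle's current
// geometry, so no rounding error accumulates across repeated pinch gestures.
func layoutPoints(points: [PointView], circleCenter: CGPoint, circleRadius: CGFloat) {
    for point in points {
        point.center = CGPoint(
            x: circleCenter.x + circleRadius * cos(point.angleInRadians),
            y: circleCenter.y + circleRadius * sin(point.angleInRadians)
        )
    }
}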

Rotated Image gets distorted and blurry?

I use an image view:
@IBOutlet weak var imageView: UIImageView!
to paint an image and also another image which has been rotated. It turns out that the rotated image has very bad quality. In the following image the glasses in the yellow box are not rotated. The glasses in the red box are rotated by 4.39 degrees.
Here is the code I use to draw the glasses:
UIGraphicsBeginImageContext(imageView.image!.size)
imageView.image!.drawInRect(CGRectMake(0, 0, imageView.image!.size.width, imageView.image!.size.height))
var drawCtxt = UIGraphicsGetCurrentContext()
var glassImage = UIImage(named: "glasses.png")
let yellowRect = CGRect(...)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.yellowColor().CGColor)
CGContextStrokeRect(drawCtxt, yellowRect)
CGContextDrawImage(drawCtxt, yellowRect, glassImage!.CGImage)
// paint the rotated glasses in the red square
CGContextSaveGState(drawCtxt)
CGContextTranslateCTM(drawCtxt, centerX, centerY)
CGContextRotateCTM(drawCtxt, 4.398 * CGFloat(M_PI) / 180)
var newRect = yellowRect
newRect.origin.x = -newRect.size.width / 2
newRect.origin.y = -newRect.size.height / 2
CGContextAddRect(drawCtxt, newRect)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.redColor().CGColor)
CGContextSetLineWidth(drawCtxt, 1)
// draw the red rect
CGContextStrokeRect(drawCtxt, newRect)
// draw the image
CGContextDrawImage(drawCtxt, newRect, glassImage!.CGImage)
CGContextRestoreGState(drawCtxt)
How can I rotate and paint the glasses without losing quality or get a distorted image?
You should use UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) to create the initial context. Passing in 0.0 as the scale will default to the scale of the current screen (e.g., 2.0 on an iPhone 6 and 3.0 on an iPhone 6 Plus).
See this note on UIGraphicsBeginImageContext():
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
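As a sketch, the start and end of the drawing code from the question would change to something like this (rotatedResult is just an illustrative name):
// Create the context at the screen's scale (0.0) instead of 1.0 so the rotated
// drawing has enough pixels to stay sharp on retina displays.
UIGraphicsBeginImageContextWithOptions(imageView.image!.size, false, 0.0)
imageView.image!.drawInRect(CGRectMake(0, 0, imageView.image!.size.width, imageView.image!.size.height))
let drawCtxt = UIGraphicsGetCurrentContext()
// ... same rotation and drawing code as above ...
let rotatedResult = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()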
As others have pointed out, you need to set up your context to allow for retina displays.
Aside from that, you might want to use a source image that is larger than the target display size and scale it down. (2X the pixel dimensions of the target image would be a good place to start.)
Rotating to odd angles is destructive. The graphics engine has to map a grid of source pixels onto a different grid where they don't line up. Perfectly straight lines in the source image are no longer straight in the destination image, etc. The graphics engine has to do some interpolation, and a source pixel might be spread over several pixels, or less than a full pixel, in the destination image.
By providing a larger source image you give the graphics engine more information to work with. It can better slice and dice those source pixels into the destination grid of pixels.

Scaling sprites (not textures) for target viewport size/device in MonoGame

When you have to display a series of visual components (sprites) within the context of a game, each taking a literal height and width that needs to be relative to the height and width of the viewport (not necessarily the aspect ratio) of the target device:
Is there a scaling class to help come up with scaling ratio in a dynamic fashion based on current device viewport size?
Will I need to roll my own scaling ratio algorithm?
Are there any cross-platform issues I should be aware of?
This is not a question about loading assets based on the target device, nor about how to perform the scaling of the sprite (which is described here: http://msdn.microsoft.com/en-us/library/bb194913.aspx), but rather about how to determine the scale of sprites based on viewport size.
You can always create your own implementation of scaling.
For example, the default target viewport dimensions are:
const int defaultWidth = 1280, defaultHeight = 720;
And your current screen dimensions are 800×600, which gives you a (let's use a Vector2 instead of two floats):
int currentWidth = GraphicsDevice.Viewport.Width,
    currentHeight = GraphicsDevice.Viewport.Height;
Vector2 scale = new Vector2((float)currentWidth / defaultWidth,
                            (float)currentHeight / defaultHeight);
This gives you {0.625, 0.83333}. You can now use this in a handy SpriteBatch.Draw() overload that takes a Vector2 scale parameter:
public void Draw (
    Texture2D texture,
    Vector2 position,
    Nullable<Rectangle> sourceRectangle,
    Color color,
    float rotation,
    Vector2 origin,
    Vector2 scale,
    SpriteEffects effects,
    float layerDepth
)
Alternatively, you can draw all your stuff to a RenderTarget2D and copy the resulting image from there to a stretched texture on the main screen, but that will still require the above SpriteBatch.Draw() overload, though it might save you time if you have lots of draw calls.
Another option for generating the scale would be to leverage Matrix.CreateScale:
var scaleMatrix = Matrix.CreateScale(
    (float)GraphicsDevice.Viewport.Width / View.Width,
    (float)GraphicsDevice.Viewport.Height / View.Height, 1f);
http://msdn.microsoft.com/en-gb/library/bb195692.aspx.
But this did not meet my needs, as I would then have to roll my own transform to map touch input location to the 'transformed' sprites (which respond to user touch input by knowing their own position and size).
In the end I used a percentage-based approach.
I basically got the viewport height and width...
GraphicsDevice.Viewport.Width
GraphicsDevice.Viewport.Height
...then calculated the height and width of my sprites (note: as mentioned in the question, they take a literal height and width) based on their size relative to the screen, using percentages.
// I want the button's height and width to be 20% of the viewport
var x = GraphicsDevice.Viewport.Width * 0.2f;  // 20% of screen width
var y = GraphicsDevice.Viewport.Height * 0.2f; // 20% of screen height
var btnSize = new Vector2(x, y);
var button = new GameButton(btnSize);
Then, once I have the size of the button, I can calculate where to render it on screen based on the button size and the available viewport size, again working with relative positions expressed as percentages.
