I have a UISlider which goes from 0 to 275. I want to use the slider to scale a UIImageView.
When my slider value is 0 my UIImageView should have the original size (scaleX: 1, scaleY: 1).
When my slider value is 275 my UIImageView should scale to 0.85.
Can someone suggest a good formula to calculate the scale value in relation to the slider value?
Something like this:
let scale: CGFloat = slider.value >= 275 ? 0.85 : 1
imageView.transform = CGAffineTransform(scaleX: scale, scaleY: scale)
But I'm having trouble making the scale dynamic based on the slider value.
You could use a linear scale such as:
let scale = 1.0 - slider.value/275.0 * 0.15
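A minimal sketch of applying that formula live (assuming the slider's Value Changed event is wired to this action and imageView is an outlet; both names are placeholders):
@IBAction func sliderChanged(_ slider: UISlider) {
    // 0 → scale 1.0, 275 → scale 0.85, linearly in between
    let scale = CGFloat(1.0 - slider.value / 275.0 * 0.15)
    imageView.transform = CGAffineTransform(scaleX: scale, scaleY: scale)
}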
I am trying to apply a 3D transform after scaling an image with a pinch gesture recognizer. When the 3D transform is applied, the image gets rescaled to its default size (i.e. the size before the pinch). How can I stop the image view from going back to its previous state (i.e. before the pinch)?
self.transform = CATransform3DIdentity
self.transform.m34 = 1.0 / 500.0
self.transform = CATransform3DRotate(self.transform, CGFloat(145.0 * Double.pi / 180.0), 0, 1, 0)
viewToDelete.layer.transform = self.transform
func handlePinch(_ nizer: UIPinchGestureRecognizer) {
    // Accumulate the pinch scale into the view's transform, then reset
    nizer.view!.transform = nizer.view!.transform.scaledBy(x: nizer.scale, y: nizer.scale)
    nizer.scale = 1
}
In the image, the leftmost view is the newly created view; the one next to it has been scaled with the pinch; the third one, below, is after the 3D transform.
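For what it's worth, one likely cause is that the transform is rebuilt from CATransform3DIdentity, which discards the scale the pinch applied. A sketch of composing the rotation onto the layer's current transform instead (same viewToDelete and angle as above):
// Start from the existing transform, which already contains the pinch scale
var t = viewToDelete.layer.transform
t.m34 = 1.0 / 500.0
t = CATransform3DRotate(t, CGFloat(145.0 * Double.pi / 180.0), 0, 1, 0)
viewToDelete.layer.transform = t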
I'm trying to create a paper folding effect in Swift using CALayers and CATransform3DRotate. There are some libraries out there, but those are pretty outdated and don't fit my needs (they don't have symmetric folds, for example).
My content view controller will squeeze into the right half of the screen, revealing the menu on the left side.
Everything went well, until I applied perspective: then the dimensions I calculate are not correct anymore.
To explain the problem, I created a demo to show you what I'm doing.
This is the content view controller with three squares. I will use three folds, so each square will be on a separate fold.
The even folds will get anchor point (0, 0.5) and the odd folds will get anchor point (1, 0.5), plus they'll receive a shadow.
When fully folded, the content view will be half of the screen's width.
On an iPhone 7, each fold/plane will be 125 points wide unfolded, and will appear 62.5 points wide on screen when fully folded.
To calculate the rotation needed to achieve this 62.5-point width, we can use a trigonometric function. To illustrate, look at this top-down view:
We know the original plane size (125) and the 2D width (62.5), so we can calculate the angle α using arccos:
let angle = acos(width / originalWidth)
The result is 1.04719755 rad or 60 degrees.
When using this formula with CATransform3DRotate, I get the correct result:
Now for the problem: when I add perspective, my calculation isn't correct anymore. The planes appear bigger, probably because of the changed projection.
You can see the planes are now overlapping and being clipped.
I reconstructed the desired result on the right by playing with the angle, but the correction needed is not consistent, unfortunately.
Here's the code I use. It works perfectly without perspective.
// Loop over the fold layers
for i in 0..<self.layers.count {
    let layer = self.layers[i]

    // Folded (on-screen) and unfolded width of a single plane
    let width = self.frame.size.width / CGFloat(self.numberOfFolds)
    let originalWidth = self.sourceView.frame.size.width / CGFloat(self.numberOfFolds)

    // Angle whose cosine maps the unfolded width onto the folded width
    let angle = acos(width / originalWidth)

    // Perspective, then alternate the rotation direction per fold
    layer.transform = CATransform3DIdentity
    layer.transform.m34 = 1.0 / -500
    layer.transform = CATransform3DRotate(layer.transform, angle * (i % 2 == 0 ? -1 : 1), 0, 1, 0)

    // Place the fold edges next to each other
    if i % 2 == 0 {
        layer.position = CGPoint(x: width * CGFloat(i), y: layer.position.y)
    } else {
        layer.position = CGPoint(x: width * CGFloat(i + 1), y: layer.position.y)
    }
}
So my question is: how do I achieve the desired result? Do I need to correct the angle, or should I calculate the projected/2D width differently?
Thanks in advance! :)
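One hedged sketch of the second option (computing the projected width, then solving for the angle numerically): after rotating by an angle about the anchored edge, the fold's far edge sits at x = w·cos(angle), z = ±w·sin(angle), and the perspective divide maps x to x / (1 + z·m34). The towardViewer flag and the bisection bounds below are assumptions of this sketch, not part of the original code:
import CoreGraphics

// Projected on-screen width of a plane of width w, rotated by angle about
// its anchored edge, under perspective m34 (here -1/500).
func projectedWidth(w: CGFloat, angle: CGFloat, m34: CGFloat, towardViewer: Bool) -> CGFloat {
    let z = (towardViewer ? w : -w) * sin(angle)   // depth of the far edge
    return (w * cos(angle)) / (1 + z * m34)        // perspective divide
}

// Bisect for the angle whose projected width hits the target.
func angle(forProjectedWidth target: CGFloat, w: CGFloat, m34: CGFloat, towardViewer: Bool) -> CGFloat {
    var lo: CGFloat = 0
    var hi: CGFloat = .pi / 2
    for _ in 0..<50 {
        let mid = (lo + hi) / 2
        if projectedWidth(w: w, angle: mid, m34: m34, towardViewer: towardViewer) > target {
            lo = mid   // still too wide, fold further
        } else {
            hi = mid
        }
    }
    return (lo + hi) / 2
}

// angle(forProjectedWidth: 62.5, w: 125, m34: -1.0 / 500, towardViewer: true)
// comes out larger than acos(62.5 / 125) ≈ 1.047, compensating for the edge
// that bulges toward the viewer.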
I know the distance between the camera and the object
I know the type of camera used
I know the object's width in pixels on the picture
Can I figure out the real-life width of the object?
You have to know the camera's field of view. For example, the iPhone 5s has roughly a 61.4° vertical and 48.0° horizontal field of view; call it alpha.
Then you can calculate the width of the object this way:
viewWidth = distance * tan(alpha / 2) * 2;
objWidth = viewWidth * (imageWidth / screenWidth)
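As a sketch in Swift (the names are placeholders; whatever unit you use for the distance carries through to the result):
import Foundation

// Estimate the real-world width of an object seen in a photo.
// distance: camera-to-object distance
// fovDegrees: the camera's horizontal field of view
// objectPixels / imagePixels: the object's width and the full picture's width, in pixels
func realWidth(distance: Double, fovDegrees: Double, objectPixels: Double, imagePixels: Double) -> Double {
    let alpha = fovDegrees * .pi / 180             // degrees → radians
    let viewWidth = distance * tan(alpha / 2) * 2  // real-world width the whole frame covers
    return viewWidth * (objectPixels / imagePixels)
}

// e.g. 2 m away, 48° horizontal field of view, object spanning 600 of 3264 px:
// realWidth(distance: 2, fovDegrees: 48, objectPixels: 600, imagePixels: 3264) ≈ 0.33 m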
I am working on a few experiments to learn gestures and animations in iOS. Creating a Tinder-like interface is one of them. I am following this guide: http://guti.in/articles/creating-tinder-like-animations/
I understand the changing of the position of the image, but I don't understand the rotation. I think I've pinpointed my problem to not understanding CGAffineTransform. Particularly, the following code:
CGFloat rotationStrength = MIN(xDistance / 320, 1);
CGFloat rotationAngle = (CGFloat) (2 * M_PI * rotationStrength / 16);
CGFloat scaleStrength = 1 - fabsf(rotationStrength) / 4;
CGFloat scale = MAX(scaleStrength, 0.93);
CGAffineTransform transform = CGAffineTransformMakeRotation(rotationAngle);
CGAffineTransform scaleTransform = CGAffineTransformScale(transform, scale, scale);
self.draggableView.transform = scaleTransform;
Where do these values and calculations, such as 320, 1 - fabsf(rotationStrength) / 4, and 0.93, come from? How do they contribute to the eventual rotation?
On another note, Tinder seems to use a combination of swiping and panning. Do they add a swipe gesture to the image, or do they just take into account the velocity of the pan?
That code has a lot of magic constants, most of which are likely chosen because they resulted in something that "looked good". This can make it hard to follow. It's not so much about the actual transforms, but about the values used to create them.
Let's break it down, line by line, and see if that makes it clearer.
CGFloat rotationStrength = MIN(xDistance / 320, 1);
The value 320 is likely assumed to be the width of the device (it was the portrait width of all iPhones until the 6 and 6+ came out).
This means that xDistance / 320 is a factor of how far along the x axis (based on the name xDistance) the user has dragged. This will be 0.0 when the user hasn't dragged any distance and 1.0 when the user has dragged 320 points.
MIN(xDistance / 320, 1) takes the smaller of the dragged-distance factor and 1. This means that if the user drags further than 320 points (so that the distance factor would be larger than 1), the rotation strength never exceeds 1.0. It doesn't protect against negative values: if the user dragged to the left, xDistance would be negative, which is always smaller than 1. However, I'm not sure the guide accounted for that (especially since 320 is the full width, not the half width).
So, the first line is a factor between 0 and 1 (assuming no negative values) of how much rotation should be applied.
CGFloat rotationAngle = (CGFloat) (2 * M_PI * rotationStrength / 16);
The next line calculates the actual angle of rotation. The angle is specified in radians. Since 2π is a full circle (360°), the rotation angle ranges from 0 to 1/16 of a full circle (22.5°). The value 1/16 was likely chosen because it "looked good".
The two lines together means that as the user drags further, the view rotates more.
CGFloat scaleStrength = 1 - fabsf(rotationStrength) / 4;
From the variable name, it would look like it calculates how much the view should scale, but it's actually calculating the scale factor the view should have. A scale of 1 means the "normal" or unscaled size. When the rotation strength is 0 (when xDistance is 0), the scale strength will be 1 (unscaled). As the rotation strength increases, approaching 1, the scale factor approaches 0.75 (since that's 1 - 1/4).
fabsf is simply the floating point absolute value (fabsf(-0.3) is equal to 0.3)
CGFloat scale = MAX(scaleStrength, 0.93);
On the next line, the actual scale factor is calculated: simply the larger of scaleStrength and 0.93 (scaled down to 93%). The value 0.93 is completely arbitrary and is likely just what the author found appealing.
Since the scale strength ranges from 1 down to 0.75 and the scale factor is never smaller than 0.93, the scale factor only changes early in the drag: solving 1 - x/4 = 0.93 gives x = 0.28, so the clamp kicks in after roughly the first third of the drag (about 90 of the 320 points). All scale-strength values past that point are smaller than 0.93 and thus don't change the scale factor.
With the scale and rotationAngle calculated as above, the view is first rotated (by that angle) and then scaled down (by that scale factor).
Summary
So, in short: as the view is dragged to the right (as xDistance approaches 320 points), the view linearly rotates from 0° to 22.5° over the full drag, and scales from 100% down to 93% over roughly the first third of the drag (staying at 93% for the remainder of the gesture).
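For reference, here's the same math in Swift (a sketch; card and xDistance are placeholder names, and the view's actual width stands in for the hard-coded 320):
let width = card.superview?.bounds.width ?? 320
let rotationStrength = min(xDistance / width, 1)            // 0…1 over a full-width drag
let rotationAngle = 2 * CGFloat.pi * rotationStrength / 16  // tops out at 22.5°
let scale = max(1 - abs(rotationStrength) / 4, 0.93)        // never below 93%
card.transform = CGAffineTransform(rotationAngle: rotationAngle).scaledBy(x: scale, y: scale)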
I want to get the scale factor and rotation angle from a view. I've already applied a CGAffineTransform to that view.
The current transformation of a UIView is stored in its transform property. This is a CGAffineTransform structure; you can read more about it here: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CGAffineTransform/Reference/reference.html
You can get the angle in radians from the transform like this:
CGFloat angle = atan2f(yourView.transform.b, yourView.transform.a);
If you want the angle in degrees you need to convert it like this:
angle = angle * (180 / M_PI);
Get the scale like this:
CGFloat scaleX = view.transform.a;
CGFloat scaleY = view.transform.d;
I had the same problem and found this solution, but it only partially solved it.
In fact, the proposed solution for extracting the scale from the transform:
(all code in Swift)
scaleX = view.transform.a
scaleY = view.transform.d
only works when the rotation is 0.
When the rotation is not 0, transform.a and transform.d are influenced by the rotation. To get the proper values you can use:
scaleX = sqrt(pow(transform.a, 2) + pow(transform.c, 2))
scaleY = sqrt(pow(transform.b, 2) + pow(transform.d, 2))
Note that the result is always positive. If you are also interested in the sign of the scaling (whether the view is flipped), the sign of the x scale is the sign of transform.a, and the sign of the y scale is the sign of transform.d. One way to inherit the sign (note this breaks down when transform.a or transform.d is 0, e.g. at an exact ±90° rotation):
scaleX = (transform.a/abs(transform.a)) * sqrt(pow(transform.a, 2) + pow(transform.c, 2))
scaleY = (transform.d/abs(transform.d)) * sqrt(pow(transform.b, 2) + pow(transform.d, 2))
In Swift 3:
let rotation = atan2(view.transform.b, view.transform.a)
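Gathering the formulas above into one place (a sketch; it assumes the transform is built from rotation and scale only, with no skew):
import CoreGraphics

extension CGAffineTransform {
    var rotationAngle: CGFloat { return atan2(b, a) }   // radians, as above
    var scaleX: CGFloat { return sqrt(a * a + c * c) }  // rotation-safe magnitude
    var scaleY: CGFloat { return sqrt(b * b + d * d) }
}

// Usage: view.transform.rotationAngle, view.transform.scaleX, view.transform.scaleY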