I am trying to solve a problem without success and am hoping someone could help.
I have looked for similar posts but haven't been able to find anything which solves my problem.
My Scenario is as follows:
I have a UIView on which a number of other UIViews can be placed. These can be moved, scaled and rotated using gesture recognisers (There is no issue here).
The user is able to change the aspect ratio of the main view (the Canvas), and my problem is scaling the content of the Canvas to fit into the new destination size.
There are a number of posts with a similar theme e.g:
calculate new size and location on a CGRect
How to create an image of specific size from UIView
But these don't address changing the ratio multiple times.
My Approach:
When I change the aspect ratio of the canvas, I use AVFoundation to calculate an aspect-fitted rectangle into which the subviews of the canvas should fit:
let sourceRectangleSize = canvas.frame.size
canvas.setAspect(aspect, screenSize: editorLayoutGuide.layoutFrame.size)
view.layoutIfNeeded()
let destinationRectangleSize = canvas.frame.size
let aspectFittedFrame = AVMakeRect(aspectRatio:sourceRectangleSize, insideRect: CGRect(origin: .zero, size: destinationRectangleSize))
ratioVisualizer.frame = aspectFittedFrame
The red frame is simply to visualise the aspect-fitted rectangle. As you can see, whilst the aspect-fitted rectangle is correct, the scaling of the objects isn't working. This is especially true when I apply scale and rotation to the subviews (CanvasElement).
The logic where I am scaling the objects is clearly wrong:
@objc
private func setRatio(_ control: UISegmentedControl) {
guard let aspect = Aspect(rawValue: control.selectedSegmentIndex) else { return }
let sourceRectangleSize = canvas.frame.size
canvas.setAspect(aspect, screenSize: editorLayoutGuide.layoutFrame.size)
view.layoutIfNeeded()
let destinationRectangleSize = canvas.frame.size
let aspectFittedFrame = AVMakeRect(aspectRatio:sourceRectangleSize, insideRect: CGRect(origin: .zero, size: destinationRectangleSize))
ratioVisualizer.frame = aspectFittedFrame
let scale = min(aspectFittedFrame.size.width/canvas.frame.width, aspectFittedFrame.size.height/canvas.frame.height)
for case let canvasElement as CanvasElement in canvas.subviews {
canvasElement.frame.size = CGSize(
width: canvasElement.baseFrame.width * scale,
height: canvasElement.baseFrame.height * scale
)
canvasElement.frame.origin = CGPoint(
x: aspectFittedFrame.origin.x + canvasElement.baseFrame.origin.x * scale,
y: aspectFittedFrame.origin.y + canvasElement.baseFrame.origin.y * scale
)
}
}
I am enclosing the CanvasElement class as well, in case it helps:
final class CanvasElement: UIView {
var rotation: CGFloat = 0
var baseFrame: CGRect = .zero
var id: String = UUID().uuidString
// MARK: - Initialization
override init(frame: CGRect) {
super.init(frame: frame)
storeState()
setupGesture()
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
// MARK: - Gesture Setup
private func setupGesture() {
let panGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(panGesture(_:)))
let pinchGestureRecognizer = UIPinchGestureRecognizer(target: self, action: #selector(pinchGesture(_:)))
let rotateGestureRecognizer = UIRotationGestureRecognizer(target: self, action: #selector(rotateGesture(_:)))
addGestureRecognizer(panGestureRecognizer)
addGestureRecognizer(pinchGestureRecognizer)
addGestureRecognizer(rotateGestureRecognizer)
}
// MARK: - Touches
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
super.touchesBegan(touches, with: event)
moveToFront()
}
//MARK: - Gestures
@objc
private func panGesture(_ sender: UIPanGestureRecognizer) {
let move = sender.translation(in: self)
transform = transform.concatenating(.init(translationX: move.x, y: move.y))
sender.setTranslation(CGPoint.zero, in: self)
storeState()
}
@objc
private func pinchGesture(_ sender: UIPinchGestureRecognizer) {
transform = transform.scaledBy(x: sender.scale, y: sender.scale)
sender.scale = 1
storeState()
}
@objc
private func rotateGesture(_ sender: UIRotationGestureRecognizer) {
rotation += sender.rotation
transform = transform.rotated(by: sender.rotation)
sender.rotation = 0
storeState()
}
// MARK: - Miscellaneous
func moveToFront() {
superview?.bringSubviewToFront(self)
}
public func rotated(by degrees: CGFloat) {
transform = transform.rotated(by: degrees)
rotation += degrees
}
func storeState() {
print("""
Element Frame = \(frame)
Element Bounds = \(bounds)
Element Center = \(center)
""")
baseFrame = frame
}
}
Any help or advice, or approaches with some actual examples, would be great. I'm not expecting anyone to provide full source code, just something I could use as a basis.
Thank you for taking the time to read my question.
Here are a few thoughts and findings from playing around with this.
1. Is the right scale factor being used?
The scaling you use is a bit custom and cannot be compared directly to the examples, which have just one scale factor like 2 or 3. However, your scale factor has two dimensions, and I see you compensate for this by taking the minimum of the width and height scaling:
let scale = min(aspectFittedFrame.size.width / canvas.frame.width,
aspectFittedFrame.size.height / canvas.frame.height)
In my opinion, this is not the right scale factor. It compares the new aspectFittedFrame with the new canvas frame, when I believe the right scale factor comes from comparing the new aspectFittedFrame with the previous canvas frame:
let scale
= min(aspectFittedFrame.size.width / sourceRectangleSize.width,
aspectFittedFrame.size.height / sourceRectangleSize.height)
2. Is the scale being applied to the right values?
If you notice, the first change from 1:1 to 16:9 works quite well. However, after that it does not seem to work, and I believe the issue is here:
for case let canvasElement as CanvasElement in strongSelf.canvas.subviews
{
canvasElement.frame.size = CGSize(
width: canvasElement.baseFrame.width * scale,
height: canvasElement.baseFrame.height * scale
)
canvasElement.frame.origin = CGPoint(
x: aspectFittedFrame.origin.x
+ canvasElement.baseFrame.origin.x * scale,
y: aspectFittedFrame.origin.y
+ canvasElement.baseFrame.origin.y * scale
)
}
The first time, the scale works well because the canvas and the canvas elements are scaled in sync, i.e. mapped properly.
However, if you go beyond that, your aspect-ratio frame and your canvas elements get out of sync, because you are always scaling based on the base values.
So in the example of 1:1 -> 16:9 -> 3:2:
Your viewport has been scaled twice, from 1:1 -> 16:9 and then from 16:9 -> 3:2.
Whereas your elements are scaled once each time, 1:1 -> 16:9 and 1:1 -> 3:2, because you always scale from the base values.
So to keep the elements within the red viewport, I feel you need to apply the same continuous scaling based on the previous frame rather than the base frame.
Just for an immediate quick fix, I update the base values of the canvas element after each change in canvas size by calling canvasElement.storeState():
for case let canvasElement as CanvasElement in strongSelf.canvas.subviews
{
canvasElement.frame.size = CGSize(
width: canvasElement.baseFrame.width * scale,
height: canvasElement.baseFrame.height * scale
)
canvasElement.frame.origin = CGPoint(
x: aspectFittedFrame.origin.x
+ canvasElement.baseFrame.origin.x * scale,
y: aspectFittedFrame.origin.y
+ canvasElement.baseFrame.origin.y * scale
)
// I added this
canvasElement.storeState()
}
The result is perhaps closer to what you want?
Final thoughts
While this might fix your issue, you will notice that it is not possible to come back to the original state, since a transformation is applied at each step.
A solution could be to store the current values mapped against a specific viewport aspect ratio and calculate the right sizes from those for the other ratios, so that you could get back to the original if you needed to.
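To illustrate that idea, here is a minimal sketch, not taken from the question's code: it assumes an originalCanvasSize property (the canvas size in which the baseFrame values were captured). Because the frames are always derived from those stored base values, the scaling never compounds, and switching back to the original aspect ratio restores the original layout:
private func layoutElements(in aspectFittedFrame: CGRect, originalCanvasSize: CGSize) {
    // scale relative to the ORIGINAL canvas, not the previous one
    let scale = min(aspectFittedFrame.width / originalCanvasSize.width,
                    aspectFittedFrame.height / originalCanvasSize.height)
    for case let element as CanvasElement in canvas.subviews {
        // baseFrame was captured once, in original-canvas coordinates
        element.frame.size = CGSize(width: element.baseFrame.width * scale,
                                    height: element.baseFrame.height * scale)
        element.frame.origin = CGPoint(x: aspectFittedFrame.minX + element.baseFrame.minX * scale,
                                       y: aspectFittedFrame.minY + element.baseFrame.minY * scale)
    }
}
The key point in this sketch is that baseFrame would only be updated when the user actually edits an element, not when the canvas aspect ratio changes.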
A couple of suggestions...
First, when using your CanvasElement, panning doesn't work correctly if the view has been rotated.
So, instead of using a translate transform to move the view, change the .center itself. In addition, when panning, we want to use the translation in the superview, not in the view itself:
@objc
func panGesture(_ gest: UIPanGestureRecognizer) {
// change the view's .center instead of applying translate transform
// use translation in superview, not in self
guard let superV = superview else { return }
let translation = gest.translation(in: superV)
center = CGPoint(x: center.x + translation.x, y: center.y + translation.y)
gest.setTranslation(CGPoint.zero, in: superV)
}
Now, when we want to scale the subviews when the "Canvas" changes size, we can do this...
We'll track the "previous" bounds and use the "new bounds" to calculate the scale:
let newBounds: CGRect = bounds
let scW: CGFloat = newBounds.size.width / prevBounds.size.width
let scH: CGFloat = newBounds.size.height / prevBounds.size.height
for case let v as CanvasElement in subviews {
// reset transform before scaling / positioning
let tr = v.transform
v.transform = .identity
let w = v.frame.width * scW
let h = v.frame.height * scH
let cx = v.center.x * scW
let cy = v.center.y * scH
v.frame.size = CGSize(width: w, height: h)
v.center = CGPoint(x: cx, y: cy)
// re-apply transform
v.transform = tr
}
prevBounds = newBounds
Here's a complete sample implementation. Please note: this is Example Code Only!!! It is not intended to be "Production Ready."
import UIKit
// MARK: enum to provide strings and aspect ratio values
enum Aspect: Int, Codable, CaseIterable {
case a1to1
case a16to9
case a3to2
case a4to3
case a9to16
var stringValue: String {
switch self {
case .a1to1:
return "1:1"
case .a16to9:
return "16:9"
case .a3to2:
return "3:2"
case .a4to3:
return "4:3"
case .a9to16:
return "9:16"
}
}
var aspect: CGFloat {
switch self {
case .a1to1:
return 1
case .a16to9:
return 9.0 / 16.0
case .a3to2:
return 2.0 / 3.0
case .a4to3:
return 3.0 / 4.0
case .a9to16:
return 16.0 / 9.0
}
}
}
class EditorView: UIView {
// no code -
// just makes it easier to identify
// this view when debugging
}
// CanvasElement views will be added as subviews
// this handles the scaling / positioning when the bounds changes
// also (optionally) draws a grid (for use during development)
class CanvasView: UIView {
public var showGrid: Bool = true
private let gridLayer: CAShapeLayer = CAShapeLayer()
private var prevBounds: CGRect = .zero
// MARK: init
override init(frame: CGRect) {
super.init(frame: frame)
commonInit()
}
required init?(coder: NSCoder) {
super.init(coder: coder)
commonInit()
}
private func commonInit() {
gridLayer.fillColor = UIColor.clear.cgColor
gridLayer.strokeColor = UIColor.red.cgColor
gridLayer.lineWidth = 1
layer.addSublayer(gridLayer)
}
override func layoutSubviews() {
super.layoutSubviews()
// MARK: 10 x 10 grid
if showGrid {
// draw a grid on the inside of the bounds
// so the edges are not 1/2 point width
let gridBounds: CGRect = bounds.insetBy(dx: 0.5, dy: 0.5)
let path: UIBezierPath = UIBezierPath()
let w: CGFloat = gridBounds.width / 10.0
let h: CGFloat = gridBounds.height / 10.0
var p: CGPoint = .zero
p = CGPoint(x: gridBounds.minX, y: gridBounds.minY)
for _ in 0...10 {
path.move(to: p)
path.addLine(to: CGPoint(x: p.x, y: gridBounds.maxY))
p.x += w
}
p = CGPoint(x: gridBounds.minX, y: gridBounds.minY)
for _ in 0...10 {
path.move(to: p)
path.addLine(to: CGPoint(x: gridBounds.maxX, y: p.y))
p.y += h
}
gridLayer.path = path.cgPath
}
// MARK: update subviews
// we only want to move/scale the subviews if
// the bounds has > 0 width and height and
// prevBounds has > 0 width and height and
// the bounds has changed
guard bounds != prevBounds,
bounds.width > 0, prevBounds.width > 0,
bounds.height > 0, prevBounds.height > 0
else { return }
let newBounds: CGRect = bounds
let scW: CGFloat = newBounds.size.width / prevBounds.size.width
let scH: CGFloat = newBounds.size.height / prevBounds.size.height
for case let v as CanvasElement in subviews {
// reset transform before scaling / positioning
let tr = v.transform
v.transform = .identity
let w = v.frame.width * scW
let h = v.frame.height * scH
let cx = v.center.x * scW
let cy = v.center.y * scH
v.frame.size = CGSize(width: w, height: h)
v.center = CGPoint(x: cx, y: cy)
// re-apply transform
v.transform = tr
}
prevBounds = newBounds
}
override var bounds: CGRect {
willSet {
prevBounds = bounds
}
}
}
// self-contained Pan/Pinch/Rotate view
// set allowSimultaneous to TRUE to enable
// simultaneous gestures
class CanvasElement: UIView, UIGestureRecognizerDelegate {
public var allowSimultaneous: Bool = false
// MARK: init
override init(frame: CGRect) {
super.init(frame: frame)
commonInit()
}
required init?(coder: NSCoder) {
super.init(coder: coder)
commonInit()
}
private func commonInit() {
isUserInteractionEnabled = true
isMultipleTouchEnabled = true
let panG = UIPanGestureRecognizer(target: self, action: #selector(panGesture(_:)))
let pinchG = UIPinchGestureRecognizer(target: self, action: #selector(pinchGesture(_:)))
let rotateG = UIRotationGestureRecognizer(target: self, action: #selector(rotateGesture(_:)))
[panG, pinchG, rotateG].forEach { g in
g.delegate = self
addGestureRecognizer(g)
}
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
super.touchesBegan(touches, with: event)
// unwrap optional superview
guard let superV = superview else { return }
superV.bringSubviewToFront(self)
}
// MARK: UIGestureRecognizer Methods
@objc
func panGesture(_ gest: UIPanGestureRecognizer) {
// change the view's .center instead of applying translate transform
// use translation in superview, not in self
guard let superV = superview else { return }
let translation = gest.translation(in: superV)
center = CGPoint(x: center.x + translation.x, y: center.y + translation.y)
gest.setTranslation(CGPoint.zero, in: superV)
}
@objc
func pinchGesture(_ gest: UIPinchGestureRecognizer) {
// apply scale transform
transform = transform.scaledBy(x: gest.scale, y: gest.scale)
gest.scale = 1
}
@objc
func rotateGesture(_ gest : UIRotationGestureRecognizer) {
// apply rotate transform
transform = transform.rotated(by: gest.rotation)
gest.rotation = 0
}
// MARK: UIGestureRecognizerDelegate Methods
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
return allowSimultaneous
}
}
// example view controller
// Aspect Ratio segmented control
// changes the Aspect Ratio of the Editor View
// includes triple-tap gesture to cycle through
// 3 "starting subview" layouts
class ViewController: UIViewController, UIGestureRecognizerDelegate {
let editorView: EditorView = {
let v = EditorView()
v.backgroundColor = UIColor(white: 0.9, alpha: 1.0)
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
let canvasView: CanvasView = {
let v = CanvasView()
v.backgroundColor = .yellow
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
// segmented control for selecting Aspect Ratio
let aspectRatioSeg: UISegmentedControl = {
let v = UISegmentedControl()
v.setContentCompressionResistancePriority(.required, for: .vertical)
v.setContentHuggingPriority(.required, for: .vertical)
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
// this will be changed by the Aspect Ratio segmented control
var evAspectConstraint: NSLayoutConstraint!
// used to cycle through initial subview layouts
var layoutMode: Int = 0
override func viewDidLoad() {
super.viewDidLoad()
view.backgroundColor = UIColor(white: 0.99, alpha: 1.0)
// container view for laying out editor view
let containerView: UIView = {
let v = UIView()
v.backgroundColor = .cyan
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
// setup the aspect ratio segmented control
for (idx, m) in Aspect.allCases.enumerated() {
aspectRatioSeg.insertSegment(withTitle: m.stringValue, at: idx, animated: false)
}
// add canvas view to editor view
editorView.addSubview(canvasView)
// add editor view to container view
containerView.addSubview(editorView)
// add container view to self's view
view.addSubview(containerView)
// add UI Aspect Ratio segmented control to self's view
view.addSubview(aspectRatioSeg)
// always respect the safe area
let safeG = view.safeAreaLayoutGuide
// editor view inset from container view sides
let evInset: CGFloat = 0
// canvas view inset from editor view sides
let cvInset: CGFloat = 0
// these sets of constraints will make the Editor View and the Canvas View
// as large as their superviews (with "Inset Edge Padding" if set above)
// while maintaining aspect ratios and centering
let evMaxW = editorView.widthAnchor.constraint(lessThanOrEqualTo: containerView.widthAnchor, constant: -evInset)
let evMaxH = editorView.heightAnchor.constraint(lessThanOrEqualTo: containerView.heightAnchor, constant: -evInset)
let evW = editorView.widthAnchor.constraint(equalTo: containerView.widthAnchor)
let evH = editorView.heightAnchor.constraint(equalTo: containerView.heightAnchor)
evW.priority = .required - 1
evH.priority = .required - 1
let cvMaxW = canvasView.widthAnchor.constraint(lessThanOrEqualTo: editorView.widthAnchor, constant: -cvInset)
let cvMaxH = canvasView.heightAnchor.constraint(lessThanOrEqualTo: editorView.heightAnchor, constant: -cvInset)
let cvW = canvasView.widthAnchor.constraint(equalTo: editorView.widthAnchor)
let cvH = canvasView.heightAnchor.constraint(equalTo: editorView.heightAnchor)
cvW.priority = .required - 1
cvH.priority = .required - 1
// editor view starting aspect ratio
// this is changed by the segmented control
let editorAspect: Aspect = .a1to1
aspectRatioSeg.selectedSegmentIndex = editorAspect.rawValue
evAspectConstraint = editorView.heightAnchor.constraint(equalTo: editorView.widthAnchor, multiplier: editorAspect.aspect)
// we can set the Aspect Ratio of the CanvasView here
// it will maintain its Aspect Ratio independent of
// the Editor View's Aspect Ratio
let canvasAspect: Aspect = .a1to1
NSLayoutConstraint.activate([
containerView.topAnchor.constraint(equalTo: safeG.topAnchor),
containerView.leadingAnchor.constraint(equalTo: safeG.leadingAnchor),
containerView.trailingAnchor.constraint(equalTo: safeG.trailingAnchor),
editorView.centerXAnchor.constraint(equalTo: containerView.centerXAnchor),
editorView.centerYAnchor.constraint(equalTo: containerView.centerYAnchor),
evMaxW, evMaxH,
evW, evH,
evAspectConstraint,
canvasView.centerXAnchor.constraint(equalTo: editorView.centerXAnchor),
canvasView.centerYAnchor.constraint(equalTo: editorView.centerYAnchor),
cvMaxW, cvMaxH,
cvW, cvH,
canvasView.heightAnchor.constraint(equalTo: canvasView.widthAnchor, multiplier: canvasAspect.aspect),
aspectRatioSeg.topAnchor.constraint(equalTo: containerView.bottomAnchor, constant: 8.0),
aspectRatioSeg.bottomAnchor.constraint(equalTo: safeG.bottomAnchor, constant: -8.0),
aspectRatioSeg.centerXAnchor.constraint(equalTo: safeG.centerXAnchor),
aspectRatioSeg.widthAnchor.constraint(greaterThanOrEqualTo: safeG.widthAnchor, multiplier: 0.5),
aspectRatioSeg.widthAnchor.constraint(lessThanOrEqualTo: safeG.widthAnchor),
])
aspectRatioSeg.addTarget(self, action: #selector(aspectRatioSegmentChanged(_:)), for: .valueChanged)
// triple-tap anywhere to "reset" the 3 subviews
// cycling between starting sizes/positions
let tt = UITapGestureRecognizer(target: self, action: #selector(resetCanvas))
tt.numberOfTapsRequired = 3
tt.delaysTouchesEnded = false
view.addGestureRecognizer(tt)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
// we don't have the frames in viewDidLoad,
// so wait until now to add the CanvasElement views
resetCanvas()
}
@objc func resetCanvas() {
canvasView.subviews.forEach { v in
v.removeFromSuperview()
}
// add 3 views to the canvas
let v1 = CanvasElement()
v1.backgroundColor = .systemYellow
let v2 = CanvasElement()
v2.backgroundColor = .systemGreen
let v3 = CanvasElement()
v3.backgroundColor = .systemBlue
// default size of subviews is 2/10ths the width of the canvas
let w: CGFloat = canvasView.bounds.width * 0.2
[v1, v2, v3].forEach { v in
v.frame = CGRect(x: 0, y: 0, width: w, height: w)
canvasView.addSubview(v)
// if we want to allow simultaneous gestures
// i.e. pan/scale/rotate all at the same time
//v.allowSimultaneous = true
}
switch (layoutMode % 3) {
case 1:
// top-left corner
// center at 1.5 times the size
// bottom-right corner
v1.frame.origin = CGPoint(x: 0, y: 0)
v2.frame.size = CGSize(width: w * 1.5, height: w * 1.5)
v2.center = CGPoint(x: canvasView.bounds.midX, y: canvasView.bounds.midY)
v3.center = CGPoint(x: canvasView.bounds.maxX - w * 0.5, y: canvasView.bounds.maxY - w * 0.5)
()
case 2:
// different sized views
v1.frame = CGRect(x: 0, y: 0, width: w * 0.5, height: w)
v2.frame.size = CGSize(width: w, height: w)
v2.center = CGPoint(x: canvasView.bounds.midX, y: canvasView.bounds.midY)
v3.frame.size = CGSize(width: w, height: w * 0.5)
v3.center = CGPoint(x: canvasView.bounds.maxX - v3.frame.width * 0.5, y: canvasView.bounds.maxY - v3.frame.height * 0.5)
()
default:
// on a "diagonal"
// starting at top-left corner
v1.frame.origin = CGPoint(x: 0, y: 0)
v2.frame.origin = CGPoint(x: w, y: w)
v3.frame.origin = CGPoint(x: w * 2, y: w * 2)
()
}
layoutMode += 1
}
@objc func aspectRatioSegmentChanged(_ sender: Any?) {
if let seg = sender as? UISegmentedControl,
let r = Aspect.init(rawValue: seg.selectedSegmentIndex)
{
evAspectConstraint.isActive = false
evAspectConstraint = editorView.heightAnchor.constraint(equalTo: editorView.widthAnchor, multiplier: r.aspect)
evAspectConstraint.isActive = true
}
}
}
Some sample screenshots...
Yellow is the Canvas view... with optional red 10x10 grid
Gray is the Editor view... this is the view that changes Aspect Ratio
Cyan is the "Container" view.... Editor view fits/centers itself
Note that the Canvas view can be set to something other than a square (1:1 ratio). For example, here it's set to 9:16 ratio -- and maintains its Aspect Ratio independent of the Editor view Aspect Ratio:
With this example controller, triple-tap anywhere to cycle through 3 "starting layouts":
Maybe you can put the three rectangles in a view, and then keep the aspect ratio for that view.
If you are using Auto Layout and SnapKit, the constraints might look like this:
view.snp.makeConstraints { make in
make.width.height.lessThanOrEqualToSuperview()
make.centerX.centerY.equalToSuperview()
make.width.equalTo(view.snp.height)
make.width.height.equalToSuperview().priority(.high)
}
So this view will be aspect-fit in its superview.
Back to the children in this view: if you want to scale every child when the view's frame changes, you should add constraints for them too. Or you can use autoresizingMask, which may be simpler, as in the sketch below.
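As a rough sketch of the autoresizingMask idea (parentView here just stands for the aspect-fit view above, it is not a name from the question): with flexible width, height and margins, UIKit resizes and repositions each child proportionally whenever the parent's bounds change.
let child = UIView(frame: CGRect(x: 20, y: 20, width: 80, height: 80))
child.autoresizingMask = [.flexibleWidth, .flexibleHeight,
                          .flexibleLeftMargin, .flexibleRightMargin,
                          .flexibleTopMargin, .flexibleBottomMargin]
parentView.autoresizesSubviews = true   // true by default; shown here for clarity
parentView.addSubview(child)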
If you don't want to use Auto Layout, maybe you can try a transform. When you transform a view, its children are transformed too.
// The scale depends on the aspect ratio of the superview.
view.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
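If you go the transform route, the 0.5 above would normally be derived from the old and new canvas sizes rather than hard-coded. A rough sketch, where oldCanvasSize and newCanvasSize are assumed values and view is the same aspect-fit container as in the snippet above:
// derive the scale from the old and new canvas sizes
let scale = min(newCanvasSize.width / oldCanvasSize.width,
                newCanvasSize.height / oldCanvasSize.height)
// transforming the container scales all of its children with it
view.transform = view.transform.scaledBy(x: scale, y: scale)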
I've created a UIScrollView to show inside another UIViewController with a header. In my scrollViewDidScroll(), I have some code that decreases the header's height. But when it runs, all of the elements (labels) in the header view change as well. Is there any way to keep their dimensions fixed?
My scrollViewDidScroll() function is here:
func scrollViewDidScroll(_ scrollView: UIScrollView) {
var labelAlpha = 4 * scrollView.contentOffset.y / scrollView.frame.height
labelAlpha = max(0, labelAlpha)
labelAlpha = min(1, labelAlpha)
let parent = self.parent as? ScrollParentViewController
var viewScale = 1.0 - 4 * scrollView.contentOffset.y / scrollView.frame.height
viewScale = max(0.5, viewScale)
parent?.redView.transform = CGAffineTransform(scaleX: 1, y: viewScale)
parent?.redView.frame.origin.y = 0
parent?.whiteLabel.alpha = labelAlpha
}
Finally found my answer. I'll post it in case anyone needs it. For this case we shouldn't use a transform; we can work with view.frame.size.height instead. You should save the view's original height in a variable first, because its value will change.
So add this to your header view controller (in my case it's named ScrollParentViewController):
var height: CGFloat!
override func viewDidLoad() {
super.viewDidLoad()
height = redView.frame.height
}
Then you can use this variable in your scrollViewDidScroll():
func scrollViewDidScroll(_ scrollView: UIScrollView) {
var viewScale = 1.0 - 4 * scrollView.contentOffset.y / scrollView.frame.height
viewScale = max(0.5, viewScale)
let parent = self.parent as? ScrollParentViewController
parent?.redView.frame.size.height = (parent?.height)! * viewScale
}
Thanks for the help.
I'm trying to zoom my image view as I scroll the scrollView past the top of the screen. Here's my code:
func scrollViewDidScroll(_ scrollView: UIScrollView) {
let offset = scrollView.contentOffset.y
if (offset <= 0) {
let ratio: CGFloat = -offset*1.0 / UIScreen.main.bounds.height
self.coverImageView.transform = CGAffineTransform(scaleX: 1.0 + ratio, y: 1.0 + ratio)
}
}
This zooms the image as I scroll up, but because I am also scrolling up, my view goes down, and reveals the white background behind the image as it expands. How do I prevent that from happening?
It sounds like a scroll view is not really suited to what you are trying to do. How about using a gesture recognizer instead? Something along these lines:
override func viewDidLoad() {
super.viewDidLoad()
coverImageView.isUserInteractionEnabled = true
let panGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(didPan))
coverImageView.addGestureRecognizer(panGestureRecognizer)
}
@objc func didPan(panGestureRecognizer: UIPanGestureRecognizer) {
let translation = panGestureRecognizer.translation(in: coverImageView)
if translation.y > 0 {
let zoomRatio = (translation.y * 0.1) + 1.0
coverImageView.transform = CGAffineTransform(scaleX: zoomRatio, y: zoomRatio)
}
}
You'll have to play around to get it to behave exactly how you want, but it should be enough to get you started.
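As one example of that tweaking (this is an assumption about the behaviour you may want, not something from the question), you could snap the image back once the pan ends by adding something like this at the end of didPan:
if panGestureRecognizer.state == .ended || panGestureRecognizer.state == .cancelled {
    // animate the zoom back to normal once the finger lifts
    UIView.animate(withDuration: 0.3) {
        self.coverImageView.transform = .identity
    }
}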
I've subclassed a UICollectionViewFlowLayout to achieve a small scaling effect during horizontal scroll. Therefore I had to subclass a UICollectionViewCell as well, in order to change the cell's layer.anchorPoint (my scaling is from the bottom left of the cell rather than from the default center). Now all is fine and well, except that when I am scrolling horizontally my cell is reused too soon (I can still see half the cell when it suddenly disappears).
I have the feeling that the collection view still bases its calculations for reusing the cell on an anchor point positioned in the center of the cell...
However, this is my collection view. You can see how the item gets bigger as it reaches the left side of the collection view. This is the scaling I wanted.
Now here I am scrolling to the left. You can see how the right item gets bigger and the left one is moving off the screen.
And here you see that the left item hadn't yet moved off the screen but had already disappeared. Only the right item remained visible :/
So what I want is to make the left item disappear only when its right boundary reaches the very left of the screen. Simply put, it should disappear only when I can't see it anymore.
And here is my code:
class SongsCollectionViewCell : UICollectionViewCell {
@IBOutlet weak var imgAlbumCover: UIImageView!
override func applyLayoutAttributes(layoutAttributes: UICollectionViewLayoutAttributes) {
super.applyLayoutAttributes(layoutAttributes)
//we must change the anchor point for proper cell positioning and scaling
self.layer.anchorPoint.x = 0
self.layer.anchorPoint.y = 1
}
}
Here is the layout itself :
class SongsCollectionViewLayout : UICollectionViewFlowLayout {
override func prepareLayout() {
collectionView?.decelerationRate = UIScrollViewDecelerationRateFast
self.scrollDirection = UICollectionViewScrollDirection.Horizontal;
//size of the viewport
let size:CGSize = self.collectionView!.frame.size;
let itemWidth:CGFloat = size.width * 0.7//0.7//3.0;
self.itemSize = CGSizeMake(itemWidth, itemWidth);
self.sectionInset = UIEdgeInsetsMake(0,0,0,0);
self.minimumLineSpacing = 0
self.minimumInteritemSpacing = 0
}
override func shouldInvalidateLayoutForBoundsChange(newBounds: CGRect) -> Bool {
return true
}
override func layoutAttributesForElementsInRect(rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
let attributes:[UICollectionViewLayoutAttributes] = super.layoutAttributesForElementsInRect(rect)!
var visibleRect:CGRect = CGRect()
visibleRect.origin = self.collectionView!.contentOffset;
visibleRect.size = self.collectionView!.bounds.size;
for layoutAttributes in attributes {
if (CGRectIntersectsRect(layoutAttributes.frame, rect)) {
//we must align items to the bottom of the collection view on y axis
let frameHeight = self.collectionView!.bounds.size.height
layoutAttributes.center.y = frameHeight
layoutAttributes.center.x = layoutAttributes.center.x - self.itemSize.width/2
//find where the item's left x is
let itemLeftX:CGFloat = layoutAttributes.center.x
//distance of the item from the left of the viewport
let distanceFromTheLeft:CGFloat = itemLeftX - CGRectGetMinX(visibleRect)
let normalizedDistanceFromTheLeft = abs(distanceFromTheLeft) / self.collectionView!.frame.size.width
//item that is closer to the left is most visible one
layoutAttributes.alpha = 1 - normalizedDistanceFromTheLeft
layoutAttributes.zIndex = abs(Int(layoutAttributes.alpha)) * 10;
//scale items
let scale = min(layoutAttributes.alpha + 0.5, 1)
layoutAttributes.transform = CGAffineTransformMakeScale(scale, scale)
}
}
return attributes;
}
override func targetContentOffsetForProposedContentOffset(proposedContentOffset: CGPoint, withScrollingVelocity velocity: CGPoint) -> CGPoint {
// Snap cells to centre
var newOffset = CGPoint()
let layout = collectionView!.collectionViewLayout as! UICollectionViewFlowLayout
let width = layout.itemSize.width + layout.minimumLineSpacing
var offset = proposedContentOffset.x + collectionView!.contentInset.left
if velocity.x > 0 {
//ceil returns next biggest number
offset = width * ceil(offset / width)
} else if velocity.x == 0 { //6
//rounds the argument
offset = width * round(offset / width)
} else if velocity.x < 0 { //7
//removes decimal part of argument
offset = width * floor(offset / width)
}
newOffset.x = offset - collectionView!.contentInset.left
newOffset.y = proposedContentOffset.y //y will always be the same...
return newOffset
}
}
Answering my own question.
So as I suspected, the layout was taking the old center into account, which is why I had to correct the center of the cell right after changing the anchor point. So my custom cell now looks like this:
class SongsCollectionViewCell : UICollectionViewCell {
@IBOutlet weak var imgAlbumCover: UIImageView!
override func prepareForReuse() {
imgAlbumCover.hnk_cancelSetImage()
imgAlbumCover.image = nil
}
override func applyLayoutAttributes(layoutAttributes: UICollectionViewLayoutAttributes) {
super.applyLayoutAttributes(layoutAttributes)
//we must change the anchor point for proper cell positioning and scaling
self.layer.anchorPoint.x = 0
self.layer.anchorPoint.y = 1
//we need to adjust a center now
self.center.x = self.center.x - layoutAttributes.size.width/2
}
}
Hope it helps someone
I'm trying to mask a UIImageView in such a way that it would allow the user to drag the image around without moving its mask. The effect would be similar to how one can position an image within the Instagram app essentially allowing the user to define the crop region of the image.
Here's an animated gif to demonstrate what I'm after.
Here's how I'm currently masking the image and repositioning it on drag/pan events.
import UIKit
class ViewController: UIViewController {
var dragDelta = CGPoint()
#IBOutlet weak var imageView: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
attachMask()
// listen for pan/drag events //
let pan = UIPanGestureRecognizer(target:self, action:#selector(onPanGesture))
pan.maximumNumberOfTouches = 1
pan.minimumNumberOfTouches = 1
self.view.addGestureRecognizer(pan)
}
func onPanGesture(gesture:UIPanGestureRecognizer)
{
let point:CGPoint = gesture.locationInView(self.view)
if (gesture.state == .Began){
print("begin", point)
// capture our drag start position
dragDelta = CGPoint(x:point.x-imageView.frame.origin.x, y:point.y-imageView.frame.origin.y)
} else if (gesture.state == .Changed){
// update image position based on how far we've dragged from drag start
imageView.frame.origin.y = point.y - dragDelta.y
} else if (gesture.state == .Ended){
print("ended", point)
}
}
func attachMask()
{
let mask = CAShapeLayer()
mask.path = UIBezierPath(roundedRect: CGRect(x: 0, y: 100, width: imageView.frame.size.width, height: 400), cornerRadius: 5).CGPath
mask.anchorPoint = CGPoint(x: 0, y: 0)
mask.fillColor = UIColor.redColor().CGColor
view.layer.addSublayer(mask)
imageView.layer.mask = mask;
}
}
This results in both the image and mask moving together as you see below.
Any suggestions on how to "lock" the mask so the image can be moved independently underneath it would be very much appreciated.
Moving a mask and frame separately from each other to reach this effect isn't the best way to go about doing this. Most apps that do this sort of effect do the following:
Add a UIScrollView to the root view (with panning/zooming enabled)
Add a UIImageView to the UIScrollView
Size the UIImageView such that it has a 1:1 ratio with the image
Set the contentSize of the UIScrollView to match that of the UIImageView
The user can now pan around and zoom into the UIImageView as needed.
Next, if you're, say, cropping the image:
Get the visible rectangle (taken from Getting the visible rect of an UIScrollView's content)
CGRect visibleRect = [scrollView convertRect:scrollView.bounds toView:zoomedSubview];
Use whatever cropping method you'd like on the UIImage to get the necessary content.
This is the smoothest way to handle this kind of interaction and the code stays pretty simple!
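For illustration only, here is a minimal Swift sketch of that cropping step. It assumes imageView is the zoomed subview inside scrollView and is sized 1:1 in points with its image, as in the setup described above:
func croppedVisibleImage() -> UIImage? {
    guard let image = imageView.image else { return nil }
    // visible rect of the scroll view, in the (unscaled) image view's coordinates
    let visibleRect = scrollView.convert(scrollView.bounds, to: imageView)
    // CGImage cropping works in pixels, so convert the rect from points
    let pixelRect = CGRect(x: visibleRect.origin.x * image.scale,
                           y: visibleRect.origin.y * image.scale,
                           width: visibleRect.size.width * image.scale,
                           height: visibleRect.size.height * image.scale)
    guard let croppedCG = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: croppedCG, scale: image.scale, orientation: image.imageOrientation)
}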
Just figured it out. Setting the CAShapeLayer's position property to the inverse of the UIImageView's position as it's dragged will lock the CAShapeLayer in its original position; however, Core Animation by default will attempt to animate the layer whenever its position is reassigned.
This can be disabled by wrapping both position settings within a CATransaction as shown below.
func onPanGesture(gesture:UIPanGestureRecognizer)
{
let point:CGPoint = gesture.locationInView(self.view)
if (gesture.state == .Began){
print("begin", point)
// capture our drag start position
dragDelta = CGPoint(x:point.x-imageView.frame.origin.x, y:point.y-imageView.frame.origin.y)
} else if (gesture.state == .Changed){
// update image & mask positions based on the distance dragged
// and wrap both assignments in a CATransaction transaction to disable animations
CATransaction.begin()
CATransaction.setDisableActions(true)
mask.position.y = dragDelta.y - point.y
imageView.frame.origin.y = point.y - dragDelta.y
CATransaction.commit()
} else if (gesture.state == .Ended){
print("ended", point)
}
}
UPDATE
Here's an implementation of what I believe AlexKoren is suggesting. This approach nests a UIImageView within a UIScrollView and uses the UIScrollView to mask the image.
class ViewController: UIViewController, UIScrollViewDelegate {
#IBOutlet weak var scrollView: UIScrollView!
var imageView:UIImageView = UIImageView()
override func viewDidLoad() {
super.viewDidLoad()
let image = UIImage(named: "point-bonitas")
imageView.image = image
imageView.frame = CGRectMake(0, 0, image!.size.width, image!.size.height);
scrollView.delegate = self
scrollView.contentMode = UIViewContentMode.Center
scrollView.addSubview(imageView)
scrollView.contentSize = imageView.frame.size
let scale = scrollView.frame.size.width / scrollView.contentSize.width
scrollView.minimumZoomScale = scale
scrollView.maximumZoomScale = scale // set to 1 to allow zoom out to 100% of image size //
scrollView.zoomScale = scale
// center image vertically in scrollview //
let offsetY:CGFloat = (scrollView.contentSize.height - scrollView.frame.size.height) / 2;
scrollView.contentOffset = CGPointMake(0, offsetY);
}
func scrollViewDidZoom(scrollView: UIScrollView) {
print("zoomed")
}
func viewForZoomingInScrollView(scrollView: UIScrollView) -> UIView? {
return imageView
}
}
The other, perhaps simpler way would be to put the image view in a scroll view and let the scroll view manage it for you. It handles everything.