I have a UIScrollView with a UIImageView inside. The main goal is to manipulate the UIImageView inside the UIScrollView: drag, zoom, etc.
Let's imagine that scrollview's frame is 100x100.
The UIImageView's size is 250x100. So we can drag from left to right without zooming, and when we zoom in, we can also drag from top to bottom.
I know the center of the UIScrollView is (50, 50) as a CGPoint.
My question is: how can I get the CGPoint of the UIImageView that is equivalent to the center CGPoint of the scroll view?
Here is, I hope, a helpful description of the scene:
Thank you a lot for your help guys!
Use the convert(_:to:) method of UIView to convert a point from the scroll view's coordinate space to the image view's:
let imageViewPoint = scrollView.convert(CGPoint(x: 50, y: 50), to: imageView)
Thanks to rmaddy for your help.
Here is the solution:
// FYI: focusPoint.scale is the zoomScale of the UIScrollView.
let originVisible = scrollView.contentOffset
let convertedPoint = scrollView.convert(originVisible, to: imageView).scaledBy(scale: focusPoint.scale)
let imageViewPoint = CGPoint(x: convertedPoint.x + scrollView.center.x,
                             y: convertedPoint.y + scrollView.center.y)

extension CGPoint {
    func scaledBy(scale: CGFloat) -> CGPoint {
        return CGPoint(x: x * scale, y: y * scale)
    }
}
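For reference, the same "which point of the image is under the scroll view's center?" question can also be answered directly from the content offset and zoom scale. This is a hedged sketch, assuming the `scrollView`/`imageView` names from the question:

```swift
// Point of the (unscaled) image currently under the scroll view's
// visible center. Dividing by zoomScale maps from zoomed content
// coordinates back into the image's unzoomed coordinate space.
let visibleCenter = CGPoint(x: scrollView.bounds.midX,
                            y: scrollView.bounds.midY)
let pointInImage = CGPoint(
    x: (scrollView.contentOffset.x + visibleCenter.x) / scrollView.zoomScale,
    y: (scrollView.contentOffset.y + visibleCenter.y) / scrollView.zoomScale)
```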
Related
I want to fix a point (the top-left corner) and scale (zoom out) the UIView relative to that point, like in this image.
Right now I use self.transform = CGAffineTransformMakeScale(0.7, 0.7); to scale the UIView, but it only scales the UIView around its center point.
Edit
I tried setting anchorPoint to (0, 0), but then the view's position is wrong.
Use the anchorPoint property of the view's layer:
self.layer.anchorPoint = CGPointMake(0.0, 0.0);
Try setting the anchorPoint of the layer:
container!.layer.anchorPoint = CGPoint(x: 0, y: 0)
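Note that changing anchorPoint on its own shifts the view on screen, which is likely the "view position be wrong" problem from the question's edit. A common workaround, sketched here under the assumption of frame-based layout, is to compensate by moving the center back by the same amount:

```swift
import UIKit

// Move the anchor point without letting the view jump: measure how much
// the frame origin shifts and subtract that shift from the center.
func setAnchorPoint(_ anchor: CGPoint, for view: UIView) {
    let oldOrigin = view.frame.origin
    view.layer.anchorPoint = anchor
    let newOrigin = view.frame.origin
    view.center = CGPoint(x: view.center.x - (newOrigin.x - oldOrigin.x),
                          y: view.center.y - (newOrigin.y - oldOrigin.y))
}
```

Usage: pin the top-left corner first, then apply the scale transform, and the view shrinks toward its top-left instead of its center.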
I have 2 UIImageViews. Both UIImageViews have the same size proportions. The first UIImageView's size is always smaller than or equal to the second UIImageView's size.
On the first UIImageView I have a UIView. The user can place this UIView anywhere on the first UIImageView.
I want to find the x,y on the second UIImageView where a second UIView should go so that it has the same proportional position as the first UIView.
Please look at this image to be clear. Please note that Rectangle 1 is first UIImageView and Rectangle 2 is second UIImageView.
What I've tried:
CGFloat xMultiplier = imageView2.frame.size.width / imageView1.frame.size.width;
CGFloat yMultiplier = imageView2.frame.size.height / imageView1.frame.size.height;
CGFloat view2x = view1.frame.origin.x * xMultiplier;
CGFloat view2y = view1.frame.origin.y * yMultiplier;
My application lets users choose stickers (UIViews) and place them anywhere on their photo. The stickerView on the second image is not in the same place as the stickerView on the first image.
The resulting x,y from my code is close to where it should be, but not exactly right.
What am I missing?
Thanks!
First you should get the factor (which you are already doing correctly), but then you have to apply it to the point in view 1. For example:
CGFloat yFactor = view1.frame.size.height / view2.frame.size.height;
CGFloat xFactor = view1.frame.size.width / view2.frame.size.width;
CGPoint pointInView2 = CGPointMake(pointInView1.x / xFactor,pointInView1.y / yFactor);
pointInView1 is the origin of the sticker in view1.
If sticker2 is a subview of view2, then its origin will be pointInView2.
If sticker2 is not a subview of view2 and they share the same superview, the origin will be:
CGPointMake(pointInView2.x + view2.frame.origin.x, pointInView2.y + view2.frame.origin.y);
If there is any other view hierarchy, you can use UIView's convertPoint: method:
[view2 convertPoint:pointInView2 toView:sticker2.superview];
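A Swift sketch of the same idea (the names `imageView1`, `imageView2`, and `sticker1` are assumptions standing in for the question's views): scale the sticker's origin by the ratio of the two image views' sizes, then convert if the hierarchy requires it.

```swift
// Map the sticker's origin from imageView1-space to imageView2-space.
let scaleX = imageView2.bounds.width / imageView1.bounds.width
let scaleY = imageView2.bounds.height / imageView1.bounds.height
let p1 = sticker1.frame.origin                        // in imageView1's coordinates
var p2 = CGPoint(x: p1.x * scaleX, y: p1.y * scaleY)  // in imageView2's coordinates

// If sticker2 will not be a direct subview of imageView2, convert:
// p2 = imageView2.convert(p2, to: sticker2.superview)
```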
Assuming that rectangle 2 is merely a rescaled version of rectangle 1, the position of view 2 is:
xpos_view2 = xpos_view1 / width_Rect1 * width_Rect2
ypos_view2 = ypos_view1 / height_Rect1 * height_Rect2
view2.position = CGPoint(x: xpos_view2, y: ypos_view2)
Calculate the ratio from the smallest iPhone device (320x480 points):
func ratioW(_ ofSize: CGFloat) -> CGFloat {
    let windowWidth = UIScreen.main.bounds.size.width
    return windowWidth / (320 / ofSize)
}

func ratioH(_ ofSize: CGFloat) -> CGFloat {
    let windowHeight = UIScreen.main.bounds.size.height
    return windowHeight / (480 / ofSize)
}
I have a scroll view and I'm watching user input. I would like to know where their finger currently is on the x axis. Is this possible through the UIScrollView and/or its delegate, or do I have to override touchesBegan, etc.?
I'm assuming you set up your scroll view delegate. If you did, then you only need to implement the scrollViewDidScroll: method from the UIScrollViewDelegate protocol...
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    CGPoint touchPoint = [scrollView.panGestureRecognizer locationInView:scrollView];
    NSLog(@"Touch point: (%f, %f)", touchPoint.x, touchPoint.y);
}
This will update while the user is scrolling.
Additional info: note that if you have something like a UIPageControl that your scroll view is paging between, you may have to calculate the x position based on the page index (i.e., if each page is 100 points wide, page 0 starts at x = 0, page 1 at x = 100, and so on).
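That page arithmetic can be sketched as follows (assuming fixed-width pages and a `touchPoint` in the scroll view's content coordinate space, as returned by `locationInView:`):

```swift
// Split the touch's x coordinate into a page index and an offset
// within that page. Page width is assumed equal to the visible width.
let pageWidth = scrollView.bounds.width
let page = Int(touchPoint.x / pageWidth)                // page the finger is over
let xInPage = touchPoint.x - CGFloat(page) * pageWidth  // x within that page
```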
CGPoint touchPoint = [scrollView.panGestureRecognizer locationInView:scrollView];
CGFloat x = touchPoint.x;
CGPoint point = [_scrollView.panGestureRecognizer locationOfTouch:touchIndex inView:_scrollView];
touchIndex is 0 for the first touch, then 1, 2, 3, and so on (locationOfTouch:inView: is a UIGestureRecognizer method, not a UIScrollView one).
If inView: is nil, the point will be in the window's base coordinate system.
CGFloat x = point.x;
CGFloat y = point.y;
I use the panGestureRecognizer of the scroll view because it's more accurate if you want to know the x position when you only have vertical scrolling.
[self.scrollView.panGestureRecognizer addTarget:self action:@selector(handlePanForScrollView:)];
and then:
- (void)handlePanForScrollView:(UIPanGestureRecognizer *)gesture {
    CGPoint positionInView = [gesture locationInView:self.scrollView];
}
I'm working on an app that lets the user resize and rotate a photo using UIGestureRecognizers. I have this code which adjusts the anchorPoint based on where the user is applying touches (to make it look like they're scaling the image at the point where their fingers actually are):
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
{
    UIView *gestureRecognizerView = gestureRecognizer.view;
    CGPoint locationInView = [gestureRecognizer locationInView:gestureRecognizerView];
    CGPoint locationInSuperview = [gestureRecognizer locationInView:gestureRecognizerView.superview];

    gestureRecognizerView.layer.anchorPoint = CGPointMake(locationInView.x / gestureRecognizerView.bounds.size.width,
                                                          locationInView.y / gestureRecognizerView.bounds.size.height);
    gestureRecognizerView.center = locationInSuperview;
}
Later on, I simply want to calculate the origin based on the center and bounds with this code:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
CGPoint origin = CGPointMake(view.center.x - (transformedBounds.size.width * view.layer.anchorPoint.x), view.center.y - (transformedBounds.size.height * view.layer.anchorPoint.y));
And it comes out incorrectly (I'm comparing against the frame value, which ironically is supposed to be invalid once a transform is set, but actually holds the correct value).
So all in all I'm wondering, what am I not taking into account here? How is the anchorPoint influencing the center in a way I'm not able to determine?
I think the problem is that the origin you are calculating is not really an origin, but rather an offset from the origin of your transformedBounds rect.
I haven't fully tested it, but if you do something like this you should get the correct frame:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
CGSize originOffset = CGSizeMake(
    view.center.x - (transformedBounds.size.width * view.layer.anchorPoint.x),
    view.center.y - (transformedBounds.size.height * view.layer.anchorPoint.y));
transformedBounds.origin.x += originOffset.width;
transformedBounds.origin.y += originOffset.height;
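For completeness, here is the same correction as a hedged Swift sketch. Because `CGRect.applying` returns an axis-aligned bounding box, this is exact for scale and translation transforms and only approximate for rotations:

```swift
// Recover the on-screen frame from center, bounds, transform, and anchorPoint.
var transformed = view.bounds.applying(view.transform)
transformed.origin.x += view.center.x - transformed.width * view.layer.anchorPoint.x
transformed.origin.y += view.center.y - transformed.height * view.layer.anchorPoint.y
// `transformed` should now match view.frame for scale-only transforms.
```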
I'm trying to create a Parallax effect on a UIView inside a UIScrollView.
The effect seems to work, but not so well.
First I add two UIView subviews to a UIScrollView and set the scroll view's contentSize.
Together the views make up a contentSize of {320, 1000}.
Then I implemented the following in scrollViewDidScroll:
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    CGFloat offsetY = scrollView.contentOffset.y;
    CGFloat percentage = offsetY / scrollView.contentSize.height;
    NSLog(@"percent = %f", percentage);

    if (offsetY < 0) {
        firstView.center = CGPointMake(firstView.center.x, firstView.center.y - percentage * 10);
    } else if (offsetY > 0) {
        firstView.center = CGPointMake(firstView.center.x, firstView.center.y + percentage * 10);
    }
}
These lines of code do create a parallax effect, but the view does not return to its original position when I scroll back to the starting position.
I have tried manipulating the view's layer and frame instead, all with the same result.
Any Help will be much appreciated.
The problem you have is that you are basing your secondary scrolling on a ratio of offset to size, not just on the current offset. So when you increase from an offset of 99 to 100 (out of say 100) your secondary scroll increases by 10, but when you go back down to 99 your secondary scroll only decreases by 9.9, and is thereby no longer in the same spot as it was last time you were at 99. Non-linear scrolling is possible, but not the way you are doing it.
A possibly easier way to deal with this is to create a second scroll view and place it below your actual scroll view. Make it non-interactive (setUserInteractionEnabled:NO) and modify its contentOffset in the main scroll view's delegate method instead of trying to move a UIImageView manually.
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    [scrollView2 setContentOffset:CGPointMake(scrollView.contentOffset.x, scrollView.contentOffset.y * someScalingFactor) animated:NO];
}
But make sure not to set a delegate on scrollView2; otherwise you may get a circular chain of delegate calls that will not end well for you.
The scaling factor being the key element...
...let me offer a 1:1 calculation:
Assuming two UIScrollViews, one in the foreground and one in the rear, assuming the foreground controls the rear, and further assuming that a full scroll of the foreground corresponds to a full scroll of the background, you then need to apply the fore ratio, not the fore offset.
func scrollViewDidScroll(_ scrollView: UIScrollView) {
    // Scrollable distance (content width minus visible width) in each scroll view.
    let foreSpan = foreScrolView.contentSize.width - foreScrolView.bounds.width
    let foreRatio = scrollView.contentOffset.x / foreSpan
    let rearSpan = rearScrollView.contentSize.width - rearScrollView.bounds.width
    rearScrollView.setContentOffset(
        CGPoint(x: foreRatio * rearSpan, y: 0),
        animated: false)
}
Final effect
The two scroll views, fore and rear, each contain a UIImageView displayed at its full width:
let foreImg = UIImageView.init(image: UIImage(named: "fore"))
foreImg.frame = CGRect(x: 0, y: 0,
width: foreImg.frame.width,
height: foreScrolView.bounds.height)
foreScrolView.contentSize = foreImg.frame.size
foreScrolView.addSubview(foreImg)
let rearImg = UIImageView.init(image: UIImage(named: "rear"))
rearImg.frame = CGRect(x: 0, y: 0,
width: rearImg.frame.width,
height: rearScrollView.bounds.height)
rearScrollView.contentSize = rearImg.frame.size
rearScrollView.addSubview(rearImg)
This will scroll both images at a different speed, covering each image in full from edge to edge.
► Find this solution on GitHub and additional details on Swift Recipes.