I'm trying to convert a deep 'Subview' frame to an upper 'UIView'. I'm attaching an illustration of the view hierarchy here.
I've tried this, but the result is way off screen:
let rect = smallSubview.convert(smallSubview.frame, to: bigSuperview)
I'm trying to convert the small 'Subview' frame to the 'VideoCrop'/bigSuperview coordinate space. Any suggestions? Thank you!
Not sure, but shouldn't you be considering the bounds rather than the frame of your smallSubview?
I mean:
let rect = smallSubview.convert(smallSubview.bounds, to: bigSuperview)
EDIT
I could not fit an answer to your comment in a comment, hence updating my answer :)
The Quick Help for the convert API says:
func convert(_ rect: CGRect, to view: UIView?) -> CGRect
Description
Converts a rectangle from the receiver's coordinate system to that of another view.
Parameters
rect - A rectangle specified in the local coordinate system (bounds) of the receiver.
view - The view that is the target of the conversion operation. If view is nil, this method instead converts to window base coordinates. Otherwise, both view and the receiver must belong to the same UIWindow object.
As it suggests, you should be passing bounds rather than frame :)
What's the difference between frame and bounds then?
Bounds: specifies the view's location and size in its own coordinate system.
Frame: specifies the view's location and size in its superview's coordinate system :)
Hence the bounds of any view will have its origin at (0,0), whereas the frame has its x and y relative to its superview, with the width and height being the same :)
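To make this concrete, here is a minimal playground sketch (the view names and sizes are made up for illustration) showing why passing frame double-counts the subview's origin, while passing bounds gives the rect you expect:

import UIKit

let bigSuperview = UIView(frame: CGRect(x: 0, y: 0, width: 400, height: 400))
let middleView = UIView(frame: CGRect(x: 50, y: 50, width: 300, height: 300))
let smallSubview = UIView(frame: CGRect(x: 20, y: 30, width: 100, height: 100))
bigSuperview.addSubview(middleView)
middleView.addSubview(smallSubview)

// Wrong: frame is already expressed in middleView's coordinates,
// so the subview's origin gets applied twice.
let wrong = smallSubview.convert(smallSubview.frame, to: bigSuperview)   // (90, 110, 100, 100)

// Right: bounds is in smallSubview's own coordinate system.
let right = smallSubview.convert(smallSubview.bounds, to: bigSuperview)  // (70, 80, 100, 100)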
Apple's convert(_:to:) is really silly.
One way to understand it:
Say you want a view named "echo" to be exactly where you are.
echo.frame = convert(bounds, to: echo.superview!)
is exactly the same as:
echo.frame = superview!.convert(frame, to: echo.superview!)
It's like this:
convert(bounds ...
means essentially "your frame in your superview", and that's exactly the same as
superview!.convert(frame ...
which also means "your frame in your superview".
You can always do either of these two things, they're identical:
convert(bounds ...
superview!.convert(frame ...
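If it helps, here is a small playground sketch of that identity (echo is from the example above; root, parent, echoParent and me are names I made up):

import UIKit

let root = UIView(frame: CGRect(x: 0, y: 0, width: 400, height: 400))
let parent = UIView(frame: CGRect(x: 20, y: 20, width: 300, height: 300))
let echoParent = UIView(frame: CGRect(x: 10, y: 10, width: 300, height: 300))
root.addSubview(parent)
root.addSubview(echoParent)

let me = UIView(frame: CGRect(x: 40, y: 60, width: 100, height: 100))
parent.addSubview(me)
let echo = UIView(frame: .zero)
echoParent.addSubview(echo)

// Both lines put echo exactly on top of me, expressed in echo's superview:
echo.frame = me.convert(me.bounds, to: echo.superview!)
// ...is identical to...
echo.frame = me.superview!.convert(me.frame, to: echo.superview!)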
CALayer objects have a property accessibilityPath which, according to the docs, supposedly
Returns the path of the element in screen coordinates.
Of course, as expected, this does not return the path of the layer.
Is there a way to access the actual path of a given CALayer that has already been created? For instance, how would you grab the path of a UIButton's layer property once the button has been initialized?
EDIT
For reference, I am trying to detect whether a rotated button contains a point. The difficulty here is that the buttons are drawn in a curved view...
My initial approach was to create bezier slices and then pass them as a property to the button, to check whether the path contains the point. For whatever reason, there is an ugly offset between the path and the button.
They are both added to the same view and use the same coordinates/values to determine their frame, but the registered path seems to be offset to the left of the shape actually drawn from the path. Below is an image of the shapes I have drawn. The green outline is where the path is drawn (and displayed...), while the red is approximately the area which registers as inside the path...
I'm having a hard time understanding why the registered area is different.
If anyone has any ideas on why this offset is occurring, they would be most appreciated.
UPDATE
Here is a snippet of me adding the shapes. self in this case is simply a UIView added to a controller. Its frame is the full size of the controller, which is {0, height_of_device - controllerHeight, width_of_device, controllerHeight}.
UIBezierPath *slicePath = UIBezierPath.new;
[slicePath moveToPoint:self.archedCenterRef];
[slicePath addArcWithCenter:self.archedCenterRef radius:outerShapeDiameter/2 startAngle:shapeStartAngle endAngle:shapeEndAngle clockwise:clockwise];
[slicePath addArcWithCenter:self.archedCenterRef radius:(outerShapeDiameter/2 - self.rowHeight) startAngle:shapeEndAngle endAngle:shapeStartAngle clockwise:!clockwise];
[slicePath closePath];
CAShapeLayer *sliceShape = CAShapeLayer.new;
sliceShape.path = slicePath.CGPath;
sliceShape.fillColor = [UIColor colorWithWhite:0 alpha:.4].CGColor;
[self.layer addSublayer:sliceShape];
...
...
button.hitTestPath = slicePath;
In a separate method in my button subclass, to detect whether it contains the point or not (self here is the button, of course):
...
if ([self.hitTestPath containsPoint:touchPosition]) {
    if (key.alpha > 0 && !key.isHidden) return YES;
    else return NO;
}
else return NO;
You completely misunderstood the property; it is for assistive technology. From the docs:
Excerpt:
"The default value of this property is nil. If no path is set, the accessibility frame rectangle is used to highlight the element.
When you specify a value for this property, the assistive technology uses the path object you specify (in addition to the accessibility frame, and not in place of it) to highlight the element."
You can only get the path from a CAShapeLayer; all other CALayers don't need to be drawn with a path at all.
Update to your update:
I think the offset is due to a missing
UIView convert(_ point: CGPoint, to view: UIView?)
The point needs to be converted to the button's coordinate system.
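Something like this sketch, assuming the path was built in the curved container view's coordinate system (SliceButton and curvedView are hypothetical names, hitTestPath is from your snippet; written in Swift, but the idea is the same in Objective-C):

import UIKit

// Hypothetical button subclass; hitTestPath holds the slice path, which was
// built in the curved container view's coordinate system, not the button's.
class SliceButton: UIButton {
    var hitTestPath: UIBezierPath?
    weak var curvedView: UIView?   // the view the path was drawn in

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard let path = hitTestPath, let curvedView = curvedView else {
            return super.point(inside: point, with: event)
        }
        // Re-express the touch in the coordinate system the path lives in.
        let pointInPathSpace = convert(point, to: curvedView)
        return path.contains(pointInPathSpace) && alpha > 0 && !isHidden
    }
}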
I'm trying to dynamically create views (UIImageView and UITextView) at runtime by user request and then allow the user to move and resize them. I've got everything working great, except for the resizing. I tried using the pinch gesture recognizer, but find it too clumsy for what I want. Therefore, I would like to use sizing handles. I believe I could put a pan gesture recognizer on each handle, and adjust the view frame as one of them is moved.
The problem is, I'm not quite sure how to create the sizing handles. I would indicate all the things I've tried, but truthfully, I'm not quite sure where to start. I do have a few ideas...
1) Possibly use Core Graphics to draw boxes or circles on the corners and sides? Would I create a new layer and draw them on that? Not sure.
2) Stick a little image of a box or circle on each corner?
3) XIB file with the handles already placed on it?
Any suggestions appreciated. I just need to be pointed in the right direction.
Edit: Something like what Apple uses in Pages would be perfect!
First, I suggest creating a custom UIView subclass; it will handle all of this behavior. Let's call it ResizableView.
In the custom view, you need to draw a layer or view for each dot at the corners and add a UIPanGestureRecognizer to them. Then you can track the location of these dots using recognizer.locationInView() while the user drags them, and use that to calculate the scale of the view. Here is code you can refer to:
func rotateViewPanGesture(recognizer: UIPanGestureRecognizer) {
    touchLocation = recognizer.locationInView(self.superview)
    let center = CalculateFunctions.CGRectGetCenter(self.frame)

    switch recognizer.state {
    case .Began:
        // Remember the starting bounds and how far the dot was from the center.
        initialBounds = self.bounds
        initialDistance = CalculateFunctions.CGpointGetDistance(center, point2: touchLocation!)
    case .Changed:
        // Scale based on how far the dot is from the center now vs. when the drag began.
        let scale = sqrtf(Float(CalculateFunctions.CGpointGetDistance(center, point2: touchLocation!)) / Float(initialDistance!))
        let scaleRect = CalculateFunctions.CGRectScale(initialBounds!, wScale: CGFloat(scale), hScale: CGFloat(scale))
        self.bounds = scaleRect
        self.refresh()
    case .Ended:
        self.refresh()
    default:
        break
    }
}
Step by step:
touchLocation is the location of the pan gesture.
center is the center of the ResizableView.
initialBounds is the initial bounds of the ResizableView when the pan gesture begins.
initialDistance is the distance between the center of the ResizableView and the touch point of the dot the user is dragging.
Then you can calculate the scale given initialDistance, center, and the touch location.
Now you have scaled the view as you want. You also need to refresh the positions of the dots at the corners accordingly; that's what refresh() is for, and you need to implement it yourself.
CalculateFunctions
I tend to define some helper functions to do the math; a rough sketch of them follows this list.
CalculateFunctions.CGpointGetDistance is used to calculate the distance between the center of the view and the touch location.
CalculateFunctions.CGRectScale returns the scaled CGRect given the scale you just calculated.
CalculateFunctions.CGRectGetCenter returns the center point of a CGRect.
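For reference, here is what those helpers might look like (this is only my guess at an implementation that matches the call sites above):

import UIKit

struct CalculateFunctions {
    // Center point of a rect.
    static func CGRectGetCenter(rect: CGRect) -> CGPoint {
        return CGPoint(x: rect.midX, y: rect.midY)
    }

    // Straight-line distance between two points.
    static func CGpointGetDistance(point1: CGPoint, point2: CGPoint) -> CGFloat {
        let dx = point2.x - point1.x
        let dy = point2.y - point1.y
        return sqrt(dx * dx + dy * dy)
    }

    // Same origin, scaled width and height.
    static func CGRectScale(rect: CGRect, wScale: CGFloat, hScale: CGFloat) -> CGRect {
        return CGRect(x: rect.origin.x, y: rect.origin.y,
                      width: rect.width * wScale, height: rect.height * hScale)
    }
}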
That's just a rough idea. Actually, there are many libraries you can refer to.
Some suggestions:
SPUserResizableView
This is a resizable view exactly like what you want, but it is written in Objective-C and hasn't been updated for a long time. You can still refer to it.
JLStickerTextView This may not fit your requirement very well, as it is for text (edit, resize, rotate with one finger). However, this one is written in Swift, a good example for you.
If you have any questions, feel free to post them. Good luck :-)
I am trying to animate a UIView along a non-linear path (I'm not trying to draw the path itself), like this:
The initial position of the view is determined using a trailing and a bottom constraint (viewBottomConstraint.constant == 100 & viewTrailingConstraint.constant == 300).
I am using UIView.animateWithDuration like this:
viewTrailingConstraint.constant = 20
viewBottomConstraint.constant = 450
UIView.animateWithDuration(1.5, animations: {
    self.view.layoutIfNeeded()
}, completion: nil)
But it animates the view along a linear path.
You can use a keyframe animation with a path:
let keyFrameAnimation = CAKeyframeAnimation(keyPath: "position")
let mutablePath = CGPathCreateMutable()
CGPathMoveToPoint(mutablePath, nil, 50, 200)
CGPathAddQuadCurveToPoint(mutablePath, nil, 150, 100, 250, 200)
keyFrameAnimation.path = mutablePath
keyFrameAnimation.duration = 2.0
keyFrameAnimation.fillMode = kCAFillModeForwards
keyFrameAnimation.removedOnCompletion = false
self.label.layer.addAnimation(keyFrameAnimation, forKey: "animation")
About the curve function used above (the code uses the CGPath variant, which takes the same curve parameters):
void CGPathAddQuadCurveToPoint (
    CGMutablePathRef _Nullable path,
    const CGAffineTransform * _Nullable m,
    CGFloat cpx,
    CGFloat cpy,
    CGFloat x,
    CGFloat y
);
(cpx, cpy) is the control point, and (x, y) is the end point.
Leo's answer of using Core Animation and CAKeyframeAnimation is good, but it operates on the view's "presentation layer" and only creates the appearance of moving the view to a new location. You'll need to add extra code to actually move the view to its final location after the animation completes. Plus, Core Animation is complex and confusing.
I'd recommend using the UIView method
animateKeyframesWithDuration:delay:options:animations:completion:. You'd probably want to use the option value UIViewKeyframeAnimationOptionCalculationModeCubic, which causes the object to move along a curved path that passes through all of your points.
You call that on your view, and then in the animation block, you make multiple calls to addKeyframeWithRelativeStartTime:relativeDuration:animations: that move your view to points along your curve.
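Roughly like this (the point values and animatedView are made up; the sample project mentioned below has a complete version):

import UIKit

class DemoViewController: UIViewController {
    // Hypothetical view that gets animated; set up elsewhere in the real project.
    let animatedView = UIView()

    func runCurvedAnimation() {
        UIView.animateKeyframes(withDuration: 1.5, delay: 0, options: [.calculationModeCubic], animations: {
            // Intermediate keyframes; with the cubic calculation mode the view
            // follows a smooth curve through all of these points.
            UIView.addKeyframe(withRelativeStartTime: 0.0, relativeDuration: 0.33) {
                self.animatedView.center = CGPoint(x: 100, y: 200)
            }
            UIView.addKeyframe(withRelativeStartTime: 0.33, relativeDuration: 0.33) {
                self.animatedView.center = CGPoint(x: 200, y: 100)
            }
            UIView.addKeyframe(withRelativeStartTime: 0.66, relativeDuration: 0.34) {
                self.animatedView.center = CGPoint(x: 300, y: 200)
            }
        }, completion: nil)
    }
}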
I have a sample project on github that shows this and other techniques. It's called KeyframeViewAnimations (link)
Edit:
(Note that UIView animations like animateKeyframes(withDuration:delay:options:animations:completion:) don't actually animate your views along the path you specify. They use a presentation layer just like CALayer animations do, and while the presentation layer makes the view look like it's moving along the specified path, it actually snaps from the beginning position to the end position at the beginning of the animation. UIView animations do move the view to its destination position, whereas CALayer animations move the presentation layer while not moving the layer/view at all.)
Another subtle difference between Leo's path-based CAKeyframeAnimation and my answer using UIView animateKeyframes(withDuration:delay:options:animations:completion:) is that CGPath curves are cubic or quadratic Bezier curves, while my answer animates using a different kind of curve called a Catmull-Rom spline. Bezier paths start and end at their beginning and ending points, and are attracted to, but don't pass through, their middle control points. Catmull-Rom splines generate a curve that passes through every one of their control points.
I have a UIView called container that I want to move (offset) using an affine transform. This view contains a UIImageView and is a subview of a UICollectionViewCell.
So it should be simple:
container.transform = CGAffineTransformMakeTranslation(100, 200) //render container 100 points right and 200 points down
Instead it is very hard, because that code does not do anything. The view is rendered in exactly the same place as if I deleted that line. So I added a print to verify what affine translation was set:
container.transform = CGAffineTransformMakeTranslation(100, 200)
print(container.transform) //prints: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 100.0, ty: 200.0)
That seems all right. So I tried rotating the container view instead with CGAffineTransformMakeRotation, and it rotates the view, just not around its center as it should according to the documentation. I tried different combinations of translate, rotation and scale transforms, just to find that the affine transformation matrices set are OK, but the attributes tx and ty seem to be ignored, and a, b, c and d seem to be using a different anchor point than the center of the view (I cannot say what that point is).
Any ideas on what can be causing this and how to fix it?
There must be something like Auto Layout messing things up for you. In the absence of outside influence, setting a view's affine transform to CGAffineTransformMakeTranslation(100, 200) will shift it right 100 points and down 200. I verified this by making a new Single View Application project in Xcode and changing the viewDidLoad method in ViewController.swift to:
override func viewDidLoad()
{
super.viewDidLoad()
view.backgroundColor = UIColor.blueColor();
let container = UIView(frame: CGRectMake(0,0,100,100));
container.backgroundColor = UIColor.greenColor();
container.transform = CGAffineTransformMakeTranslation(100, 200);
view.addSubview(container);
}
As expected this makes the green container view appear 100 points to the right and 200 points down, even though its frame is (0,0,100,100).
So please check for auto layout and other such things that might influence the placement of this view, and if you can't find anything please post more code. Also, if your container view doesn't have a background color, please give it one so that you can see its position directly, instead of deducing its position by looking at the image view.
n.b. Setting a view's transform doesn't actually move the view itself, it just changes how/where it draws its content.
I want to get the current position of a UIView after a rotation transform has been applied to it. How can I do that?
In the UIView doc I found this about the transform property:
"If this property is not the identity transform, the value of the frame property is undefined and therefore should be ignored."
I tried to use center and bounds instead, but the result is translated (even when no transform was applied to the rotatedView):
newView.center = rotatedView.center;
newView.bounds = rotatedView.bounds;
newView.transform = rotatedView.transform;
Can anybody help, please?
Thanks in advance, Vincent.
If you know the frame before the transform, you can use the CGRectApplyAffineTransform function to calculate its value after the transformation.
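A minimal sketch of that idea (rotatedView and the frame values are made up; note that UIKit applies the transform around the view's center, so the matrix has to be re-anchored before handing it to CGRectApplyAffineTransform):

import UIKit

let rotatedView = UIView(frame: CGRect(x: 50, y: 50, width: 100, height: 40))
let originalFrame = rotatedView.frame               // frame captured before the transform
rotatedView.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_4))

// Build the equivalent matrix: translate to the center, rotate, translate back,
// then let CGRectApplyAffineTransform return the resulting bounding box.
let center = rotatedView.center
let toCenter = CGAffineTransformMakeTranslation(-center.x, -center.y)
var equivalent = CGAffineTransformConcat(toCenter, rotatedView.transform)
equivalent = CGAffineTransformConcat(equivalent, CGAffineTransformMakeTranslation(center.x, center.y))
let boundingBoxAfterRotation = CGRectApplyAffineTransform(originalFrame, equivalent)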