I want to get the current position of a UIView after a transform rotate has been applied to it. How can I do that?
In the UIView doc i found this about the transform property :
"If this property is not the identity transform, the value of the frame property is undefined and therefore should be ignored."
I tried to use center and bounds instead, but the result is translated (even when no transform was applied to rotatedView):
newView.center = rotatedView.center;
newView.bounds = rotatedView.bounds;
newView.transform = rotatedView.transform;
Can anybody help, please?
Thanks in advance, vincent.
If you know the frame before the transform, you can use the CGRectApplyAffineTransform function to calculate its value after the transformation.
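If it helps to see what that function computes: CGRectApplyAffineTransform maps all four corners of the rectangle through the transform and returns the smallest axis-aligned rectangle containing them. Here is a minimal sketch of that math for a pure rotation, in Python rather than Objective-C, with made-up numbers:

```python
import math

def apply_rotation_to_rect(x, y, w, h, angle, cx, cy):
    """Rotate a rect's four corners around (cx, cy) and return the
    bounding box (x, y, width, height), mirroring what
    CGRectApplyAffineTransform does for a rotation transform."""
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    rotated = []
    for px, py in corners:
        dx, dy = px - cx, py - cy
        rotated.append((cx + dx * math.cos(angle) - dy * math.sin(angle),
                        cy + dx * math.sin(angle) + dy * math.cos(angle)))
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

# A 100x50 rect rotated 90 degrees about its own center keeps its center
# but swaps width and height in the resulting bounding box.
print(apply_rotation_to_rect(0, 0, 100, 50, math.pi / 2, 50, 25))
```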
Related
CALayer objects have a property accessibilityPath which as stated is supposedly
Returns the path of the element in screen coordinates.
Of course as expected this does not return the path of the layer.
Is there a way to access the physical path of a given CALayer that has already been created? For instance, how would you grab the path of a UIButton's layer property once the button has been initialized?
EDIT
For reference, I am trying to detect whether a rotated button contains a point. The difficulty is that the buttons are drawn in a curved view...
My initial approach was to create bezier slices and pass each one as a property to its button, then check whether the path contains the point. For whatever reason, there is an ugly offset between the path and the button.
They are both added to the same view and use the same coordinates / values to determine their frame, but the registered path seems to be offset to the left of the shape actually drawn from the path. Below is an image of the shapes I have drawn. The green outline is where the path is drawn (and displayed...), while the red is approximately the area that registers as inside the path...
I'm having a hard time understanding how the registered area is different.
If anyone has any ideas on why this offset is occurring, it would be most appreciated.
UPDATE
Here is a snippet of me adding the shapes. self in this case is simply a UIView added to a controller; its frame is the full size of the controller, which is `{0, height_of_device - controllerHeight, width_of_device, controllerHeight}`.
UIBezierPath *slicePath = UIBezierPath.new;
[slicePath moveToPoint:self.archedCenterRef];
[slicePath addArcWithCenter:self.archedCenterRef radius:outerShapeDiameter/2 startAngle:shapeStartAngle endAngle:shapeEndAngle clockwise:clockwise];
[slicePath addArcWithCenter:self.archedCenterRef radius:(outerShapeDiameter/2 - self.rowHeight) startAngle:shapeEndAngle endAngle:shapeStartAngle clockwise:!clockwise];
[slicePath closePath];
CAShapeLayer *sliceShape = CAShapeLayer.new;
sliceShape.path = slicePath.CGPath;
sliceShape.fillColor = [UIColor colorWithWhite:0 alpha:.4].CGColor;
[self.layer addSublayer:sliceShape];
...
...
button.hitTestPath = slicePath;
In a separate method in my button subclass, to detect whether it contains the point (self here is the button, of course):
...
if ([self.hitTestPath containsPoint:touchPosition]) {
    if (key.alpha > 0 && !key.isHidden) return YES;
    else return NO;
}
else return NO;
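As an aside, containment in an annular slice like this can also be checked directly from the geometry, independent of UIBezierPath, which can be handy for debugging the offset. A sketch of that test, in Python rather than Objective-C, with hypothetical values:

```python
import math

def point_in_slice(point, center, inner_r, outer_r, start_angle, end_angle):
    """Check whether a point lies in an annular sector: its distance from
    the center must be within [inner_r, outer_r] and its angle within
    [start_angle, end_angle] (angles in radians)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    r = math.hypot(dx, dy)
    if not inner_r <= r <= outer_r:
        return False
    angle = math.atan2(dy, dx) % (2 * math.pi)
    start = start_angle % (2 * math.pi)
    end = end_angle % (2 * math.pi)
    if start <= end:
        return start <= angle <= end
    # The sector crosses the 0-radian axis.
    return angle >= start or angle <= end

# A point on the positive x axis, radius 10, inside a slice spanning
# -0.5 to 0.5 radians with radii 5 to 15.
print(point_in_slice((10, 0), (0, 0), 5, 15, -0.5, 0.5))
```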
You have completely misunderstood the property; it is for assistive technology. From the docs:
Excerpt:
"The default value of this property is nil. If no path is set, the accessibility frame rectangle is used to highlight the element.
When you specify a value for this property, the assistive technology uses the path object you specify (in addition to the accessibility frame, and not in place of it) to highlight the element."
You can only get the path from a CAShapeLayer; all other CALayers don't need to be drawn with a path at all.
Update to your update:
I think the offset is due to a missing
func convert(_ point: CGPoint, to view: UIView?) -> CGPoint
The point needs to be converted to the button's coordinate system.
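Concretely, a path built in the container view's coordinates and a touch delivered in the button's coordinates disagree by exactly the button's origin. A small sketch of the arithmetic that convert(_:to:) performs for untransformed views (Python, with hypothetical origins):

```python
def convert_point(point, from_origin_in_window, to_origin_in_window):
    """Convert a point from one view's coordinate system to another's,
    assuming neither view is transformed. Each view's origin is given
    in shared window coordinates."""
    window_x = from_origin_in_window[0] + point[0]
    window_y = from_origin_in_window[1] + point[1]
    return (window_x - to_origin_in_window[0],
            window_y - to_origin_in_window[1])

# A touch at (10, 10) in a button whose origin sits at (120, 300) in the
# window is (130, 310) in window space, which lands at (30, 110) in a
# container whose window origin is (100, 200).
print(convert_point((10, 10), (120, 300), (100, 200)))
```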
I'm trying to convert a deeply nested 'Subview' frame to an upper 'UIView'. I'm attaching the view hierarchy here.
Attaching illustration:
I've tried this, but the result is way off screen:
let rect = smallSubview.convert(smallSubview.frame, to: bigSuperview)
I'm trying to convert the small 'Subview' frame to the 'VideoCrop'/bigSuperview coordinate space. Any suggestions? Thank you!
Not sure, but shouldn't you be considering bounds rather than frame of your smallSubview?
I mean :
let rect = smallSubview.convert(smallSubview.bounds, to: bigSuperview)
EDIT
I could not answer your comment in a comment, hence updating my answer :)
The quick view of the convert API says:
func convert(_ rect: CGRect, to view: UIView?) -> CGRect
Description: Converts a rectangle from the receiver's coordinate system to that of another view.
Parameters:
rect: A rectangle specified in the local coordinate system (bounds) of the receiver.
view: The view that is the target of the conversion operation. If view is nil, this method instead converts to window base coordinates. Otherwise, both view and the receiver must belong to the same UIWindow object.
As it suggests, you should be considering bounds rather than frame :)
What's the difference between frame and bounds then?
Bounds: specifies the view's location and size in its own coordinate system.
Frame: specifies the view's location and size in its superview's coordinate system :)
Hence the bounds of any view has its origin at (0,0), whereas the frame has its x and y relative to the superview :) with the width and height being the same :)
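For an untransformed view this relationship is plain arithmetic; a small sketch (Python, hypothetical numbers):

```python
def frame_from_bounds(bounds, origin_in_superview):
    """frame = bounds translated by the view's origin in its superview.
    Only valid when no transform is applied to the view."""
    bx, by, w, h = bounds
    ox, oy = origin_in_superview
    return (bx + ox, by + oy, w, h)

# bounds always starts at (0, 0); the frame carries the superview offset
# while the width and height stay the same.
print(frame_from_bounds((0, 0, 200, 100), (15, 40)))
```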
Apple's convert(_:to:) is really silly.
One way to understand it:
Say you want a view named "echo" to be exactly where you are.
echo.frame = convert(bounds, to: echo.superview!)
is exactly the same as:
echo.frame = superview!.convert(frame, to: echo.superview!)
It's like ...
convert(bounds
means essentially "your frame in your superview", and that's exactly the same as
superview!.convert(frame
which also means "your frame in your superview"
You can always do either of these two things, they're identical:
convert(bounds ...
superview!.convert(frame ...
I know that external changes to center, bounds, and transform are ignored after a UIDynamicItem is initialized.
But I need to manually change the transform of a UIView that is in a UIDynamicAnimator system.
Every time I change the transform, it is overwritten at once.
So, any ideas? Thanks.
Any time you change one of the animated properties, you need to call [dynamicAnimator updateItemUsingCurrentState:item] to let the dynamic animator know you did it. It'll update its internal representation to match the current state.
EDIT: I see from your code below that you're trying to modify the scale. UIDynamicAnimator only supports rotation and position, not scale (or any other kind of affine transform). It unfortunately takes over transform in order to implement just rotation. I consider this a bug in UIDynamicAnimator (but then I find much of the implementation of UIKit Dynamics to qualify as "bugs").
What you can do is modify your bounds (before calling updateItem...) and redraw yourself. If you need the performance of an affine transform, you have a few options:
Move your actual drawing logic into a CALayer or subview and modify its scale (updating your bounds to match if you need collision behaviors to still work).
Instead of attaching your view to the behavior, attach a proxy object (just implement <UIDynamicItem> on an NSObject) that passes the transform changes to you. You can then combine the requested transform with your own transform.
You can also use the .action property of the UIDynamicBehavior to set your desired transform at every tick of the animation.
UIAttachmentBehavior *attachment = [[UIAttachmentBehavior alloc] initWithItem:item attachedToAnchor:item.center];
attachment.damping = 0.8f;
attachment.frequency = 0.8f;
attachment.action = ^{
    CGAffineTransform currentTransform = item.transform;
    item.transform = CGAffineTransformScale(currentTransform, 1.2, 1.2);
};
You would need to add logic within the action block to determine when the scale should be changed, and by how much, otherwise your view will always be at 120%.
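That guard can be sketched language-agnostically. Here is one way to ratchet toward a target scale and then stop, so the view doesn't keep growing on every tick (Python, with a hypothetical target and step):

```python
def next_scale(current_scale, target_scale=1.2, step=1.05):
    """Advance the scale by one animation tick, clamping at the target
    so repeated ticks of the action block stop changing the view."""
    if current_scale >= target_scale:
        return current_scale
    return min(current_scale * step, target_scale)

# Simulate the action block firing ten times: the scale converges to
# the 1.2 target and then holds steady.
scale = 1.0
for _ in range(10):
    scale = next_scale(scale)
print(scale)
```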
Another way to fix this (I think we should call it a bug) is to override the transform property of the UIView involved. Something like this:
override var transform: CGAffineTransform {
    set {
        // Swallow external writes (e.g. from UIKit Dynamics).
    }
    get {
        return super.transform
    }
}

var ownTransform: CGAffineTransform = .identity {
    didSet {
        super.transform = ownTransform
    }
}
It's a kind of a hack, but it works fine, if you don't use rotation in UIKitDynamics.
Is it possible to give different alpha values to the same view at the same time?
I want to display a color in a UIView, but it should fade out in one direction. That is, I want to change its alpha value as its x coordinate increases.
Yes, you can achieve this, but you have to remember that the alpha value ranges from 0 to 1. So you can define the change per point of width, i.e.
CGFloat changeInAlphaPerX = 1.0 / self.frame.size.width;
And wherever you get the changed x value, you can do:
view.alpha = changeInAlphaPerX * x;
Hope it helps..
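The mapping in that answer is linear; a quick sketch of the values it produces (Python, with a hypothetical 200-point width):

```python
def alpha_at(x, width):
    """Linear fade: alpha grows from 0 at x = 0 to 1 at x = width,
    mirroring changeInAlphaPerX * x from the answer above."""
    change_per_x = 1.0 / width
    return change_per_x * x

# Sample alphas across a 200-point-wide view.
print([round(alpha_at(x, 200), 2) for x in (0, 50, 100, 200)])
```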
I have the following code :
#define ANG_TO_RAD(angle) ((angle) / 180.0 * M_PI)
CGFloat degree = x;
CGAffineTransform transform = CGAffineTransformMakeRotation ( ANG_TO_RAD(degree) );
image.transform = transform;
This rotates the image. However, when I want to rotate it back to its original orientation and call the above code again with degree = -degree, the image is not rotated back to exactly the position it was in before. There is always some tilt.
I tried degree = 180 - degree when un-rotating it, but no luck.
thanks
You're directly setting the transform that is applied to the image; transformations are not chained automatically, as you seem to be expecting.
To get back to the original you have to apply CGAffineTransformIdentity.
For chaining, you use CGAffineTransformConcat, e.g.
image.transform = CGAffineTransformConcat(image.transform, someOtherTransform);
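To see why setting the transform replaces the rotation rather than undoing it, compare the two in plain matrix arithmetic. This Python sketch models only the rotation part of an affine transform:

```python
import math

def rotation(angle):
    """2x2 rotation matrix, the core of CGAffineTransformMakeRotation."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def concat(a, b):
    """Matrix product, what CGAffineTransformConcat does for the
    linear (non-translation) part of two transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = math.radians(30)
# Setting transform = rotation(-theta) leaves the view rotated by -theta,
# because the old value is simply discarded...
replaced = rotation(-theta)
# ...while concatenating rotation(-theta) onto rotation(theta) cancels
# out, giving the identity (up to floating-point error).
chained = concat(rotation(theta), rotation(-theta))
print(chained)
```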