I have a controller where lots of pictures fall randomly using CAKeyframeAnimation, and I need to crop these images along the user's touch track.
Each animation uses a CALayer to present the animated image, and I am trying to detect touch events inside that layer using [layer presentationLayer].
The problem: to crop the image I need to build paths from my touch-track segment and the layer. I haven't figured out how to create those paths yet, but the question here is how to detect the touch point in the falling CALayer's coordinate system; the attached picture is more informative.
Any ideas?
To detect the touch point in a layer, relative to the controller's coordinate system, I am using this code:
- (void)touchesMoved:(NSSet *)touches :(CGPoint)movingPoint :(UIEvent *)event
{
    NSArray *layers = [[contextView layer] sublayers];
    for (CALayer *layer in layers) {
        CGRect imageRect = [[layer presentationLayer] frame];
        if (CGRectContainsPoint(imageRect, movingPoint)) {
            NSLog(@"Image position - x %f y %f", movingPoint.x, movingPoint.y);
        }
    }
}
As you likely know, the point you receive is in the view's coordinate system, which should generally be identical to the view's main layer's coordinate system. (If not, there are still ways to convert it, but unless you've done something weird, it's easier just to rely on the fact that these are the same.)
It's also important to know that once you've started rotating something, its frame is undefined. If you think a little bit about how frames work, it should be obvious why this has to be the case (you can't define a diamond using an unrotated rectangle).
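A quick sketch to make that concrete (hypothetical view; the logged numbers are approximate):
UIView *box = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
NSLog(@"before: %@", NSStringFromCGRect(box.frame)); // {{0, 0}, {100, 100}}
box.transform = CGAffineTransformMakeRotation(M_PI_4); // rotate 45 degrees
// The frame now reports the axis-aligned bounding box of the rotated square:
NSLog(@"after: %@", NSStringFromCGRect(box.frame)); // roughly {{-20.7, -20.7}, {141.4, 141.4}}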
We can easily convert from one system to the other using convertPoint:fromLayer:. There is no touchesMoved:movingPoint: method in iOS, so I'm assuming this is some custom method where you've already worked out the point in your own coordinate system. So you'd want something like:
CGPoint pointInLayer = [[layer presentationLayer] convertPoint:movingPoint fromLayer:self.view.layer];
CGRect layerBounds = [[layer presentationLayer] bounds];
if (CGRectContainsPoint(layerBounds, pointInLayer)) {
    // Intersect!
}
The bounds are always defined, since they're always in a layer's own coordinate system. So we convert the point into the layer's coordinate system and ask if this point exists in its bounds.
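Putting that together with the loop from the question, a minimal sketch (assuming contextView's sublayers hold the falling images and that movingPoint is already in self.view's coordinate system, as above):
- (void)touchesMoved:(NSSet *)touches :(CGPoint)movingPoint :(UIEvent *)event
{
    for (CALayer *layer in [[contextView layer] sublayers]) {
        CALayer *presentation = [layer presentationLayer];
        // Convert from the view's coordinate system into the animated
        // layer's own coordinate system, then test against its bounds.
        CGPoint pointInLayer = [presentation convertPoint:movingPoint fromLayer:self.view.layer];
        if (CGRectContainsPoint(presentation.bounds, pointInLayer)) {
            NSLog(@"Hit falling layer at x %f y %f", pointInLayer.x, pointInLayer.y);
        }
    }
}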
I am creating a UI where we have a deck of cards that you can swipe off the screen.
What I had hoped to be able to do was create a subclass of UIView which would represent each card and then to modify the transform property to move them back (z-axis) and a little up (y-axis) to get the look of a deck of cards.
Reading up on it I found I needed to use a CATransformLayer instead of the normal CALayer in order for the z-axis to not get flattened. I prototyped this by creating a CATransformLayer which I added to the CardDeckView's layer, and then all my cards are added to that CATransformLayer. The code looks a little bit like this:
In init:
// Initialize the CATransformSublayer
_rootTransformLayer = [self constructRootTransformLayer];
[self.layer addSublayer:_rootTransformLayer];
constructRootTransformLayer (the angle method is redundant; I was going to angle the deck but later decided not to):
CATransformLayer* transformLayer = [CATransformLayer layer];
transformLayer.frame = self.bounds;
// Angle the transform layer so we can see all of the cards
CATransform3D rootRotateTransform = [self transformWithZRotation:0.0];
transformLayer.transform = rootRotateTransform;
return transformLayer;
Then the code to add the cards looks like:
// Set up a CardView as a wrapper for the contentView
RVCardView *cardView = [[RVCardView alloc] initWithContentView:contentView];
cardView.layer.cornerRadius = 6.0;
if (cardView != nil) {
    [_cardArray addObject:cardView];
    //[self addSubview:cardView];
    [_rootTransformLayer addSublayer:cardView.layer];
    [self setNeedsLayout];
}
Note that what I originally wanted was simply to add the RVCardView directly as a subview, since I want to preserve touch events, which adding just the layer doesn't do. Unfortunately, adding the cards as subviews gives me a flattened look, whereas adding them to the rootTransformLayer gives me the right, stacked look.
Note that I tried overriding layerClass on the root view (CardDeckView), which looks like this:
+ (Class)layerClass
{
    return [CATransformLayer class];
}
I've confirmed that the root layer type is now CATransformLayer but I still get the flattened look. What else do I need to do in order to prevent the flattening?
When you use views, you see a flat scene because there is no perspective set in place. To make a comparison with 3D graphics, like OpenGL, in order to render a scene you must set the camera matrix, the one that transforms the 3D world into a 2D image.
This is the same: sublayers' content is transformed using CATransform3D in 3D space, but then, when the parent CALayer displays them, by default it projects them onto x and y, ignoring the z coordinate.
See Adding Perspective to Your Animations in Apple's documentation. This is the code you are missing:
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / eyePosition; // eyePosition: the viewer's distance along the z axis
myParentDeckView.layer.sublayerTransform = perspective;
Note that for this you don't need CATransformLayer; a simple CALayer suffices.
Here is the transformation applied to the subviews in the picture (eyePosition = -0.1):
// (from ViewController -viewDidLoad)
for (UIView *v in self.view.subviews) {
    CGFloat dz = (float)(arc4random() % self.view.subviews.count);
    CATransform3D t = CATransform3DRotate(CATransform3DMakeTranslation(0.f, 0.f, dz),
                                          0.02,
                                          1.0, 0.0, 0.0);
    v.layer.transform = t;
}
The reason for using CATransformLayer is pointed out in this question. CALayer "rasterizes" its transformed sublayers and then applies its own transformation, while CATransformLayer preserves the full hierarchy and draws each sublayer independently; it is useful only if you have more than one level of 3D-transformed sublayers. In your case, the scene tree has only one level: the deck view (which itself has the identity matrix as its transformation) and the card views, its children (which are moved in 3D space). So CATransformLayer is superfluous here.
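To make the distinction concrete, here is a hypothetical two-level hierarchy (illustrative names, not code from the question) where CATransformLayer actually matters: a parent that is itself rotated in 3D and contains a sublayer pushed back along z:
// With a plain CALayer as the parent, the child would be flattened into the
// parent's plane before the parent's own rotation is applied;
// CATransformLayer preserves the child's z position through the rotation.
CATransformLayer *rotatedGroup = [CATransformLayer layer];
rotatedGroup.transform = CATransform3DMakeRotation(M_PI_4, 0, 1, 0);
CALayer *card = [CALayer layer];
card.frame = CGRectMake(0, 0, 100, 150);
card.transform = CATransform3DMakeTranslation(0, 0, -40); // pushed back in z
[rotatedGroup addSublayer:card];
// Perspective still comes from the container's sublayerTransform:
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;
containerView.layer.sublayerTransform = perspective;
[containerView.layer addSublayer:rotatedGroup];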
I'm using CoreGraphics in my UIView to draw a graph and I want to be able to interact with the graph using touch input. Since touches are received in device coordinates, I need to transform it into user coordinates in order to relate it to the graph, but that has become an obstacle since CGContextConvertPointToUserSpace doesn't work outside of the graphics drawing context.
Here's what I've tried.
In drawRect:
CGContextScaleCTM(ctx,...);
CGContextTranslateCTM(ctx,...); // transform graph to fit the view nicely
self.ctm = CGContextGetCTM(ctx); // save for later
// draw points using user coordinates
In my touch event handler:
CGPoint touchDevice = [gesture locationInView:self]; // touch point in device coords
CGPoint touchUser = CGPointApplyAffineTransform(touchDevice, self.ctm); // doesn't give me what I want
// CGContextConvertPointToUserSpace(ctx, touchDevice) <- what I want, but it doesn't work outside drawRect:
Using the inverse of ctm doesn't work either. I'll admit I'm having trouble getting my head around the meaning and relationships between device coordinates, user coordinates, and the transformation matrix. I think it's not as simple as I want it to be.
EDIT: Some background from Apple's documentation (iOS Coordinate Systems and Drawing Model).
"A window is positioned and sized in screen coordinates, which are defined by the coordinate system for the display."
"Drawing commands make reference to a fixed-scale drawing space, known as the user coordinate space. The operating system maps coordinate units in this drawing space onto the actual pixels of the corresponding target device."
"You can change a view’s default coordinate system by modifying the current transformation matrix (CTM). The CTM maps points in a view’s coordinate system to points on the device’s screen."
I discovered that the CTM already included a transformation to map view coordinates (with origin at the top left) to screen coordinates (with origin at the bottom left). So (0,0) got transformed to (0,800), where the height of my view was 800, and (0,2) mapped to (0,798) etc. So I gather there are 3 coordinate systems we're talking about: screen coordinates, view/device coordinates, user coordinates. (Please correct me if I am wrong.)
The CGContext transform (CTM) maps from user coordinates all the way to screen coordinates. My solution was to maintain my own transform separately which maps from user coordinates to view coordinates. Then I could use it to go back to user coordinates from view coordinates.
My Solution:
In drawRect:
CGAffineTransform scale = CGAffineTransformMakeScale(...);
CGAffineTransform translate = CGAffineTransformMakeTranslation(...);
self.myTransform = CGAffineTransformConcat(translate, scale);
// draw points using user coordinates
In my touch event handler:
CGPoint touch = [gesture locationInView:self]; // touch point in view coords
CGPoint touchUser = CGPointApplyAffineTransform(touch, CGAffineTransformInvert(self.myTransform)); // this does the trick
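As a side note, both directions stay in sync more easily if drawRect: consumes the same saved transform instead of rebuilding the CTM with separate calls; a sketch, assuming the scale and translation values are the ones baked into myTransform:
// In drawRect:, concatenating self.myTransform is equivalent to the original
// CGContextScaleCTM + CGContextTranslateCTM pair, so drawing and touch
// handling share a single source of truth.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextConcatCTM(ctx, self.myTransform);
// ... draw the graph using user coordinates ...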
Alternate Solution:
Another approach is to manually set up an identical context, but I think this is more of a hack.
In my touch event handler:
#import <QuartzCore/QuartzCore.h>
CGPoint touch = [gesture locationInView:self]; // view coords
CGSize layerSize = [self.layer frame].size;
UIGraphicsBeginImageContext(layerSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// as in drawRect:
CGContextScaleCTM(...);
CGContextTranslateCTM(...);
CGPoint touchUser = CGContextConvertPointToUserSpace(context, touch); // now it gives me what I want
UIGraphicsEndImageContext();
I am experimenting with Key Frame animation of the position of a UIImageView object moving along a bezier path. This pic shows the initial state before animation. The blue line is the path - initially moving straight up, the light green box is the initial bounding box or the image, and the dark green "ghost" is the image that I am moving:
When I kick off the animation with rotationMode set to nil, the image keeps the same orientation all the way through the path as expected.
But when I kick off the animation with rotationMode set to kCAAnimationRotateAuto, the image immediately rotates 90 degrees anti-clockwise and keeps this orientation all the way through the path. When it reaches the end of the path it redraws in the correct orientation (well it actually shows the UIImageView that I repositioned in the final location)
I was naively expecting that rotationMode would orient the image to the tangent of the path and not to the normal, especially since the Apple docs for CAKeyframeAnimation's rotationMode state:
Determines whether objects animating along the path rotate to match the path tangent.
So what is the solution here? Do I have to pre-rotate the image by 90 degrees clockwise? Or is there something that I am missing?
Thanks for your help.
Edit 2nd March
I added a rotation step before the path animation using an Affine rotation like:
theImage.transform = CGAffineTransformRotate(theImage.transform,90.0*M_PI/180);
and then after the path animation, resetting the rotation with:
theImage.transform = CGAffineTransformIdentity;
This makes the image follow the path in the expected manner. However I am now running into a different problem of the image flickering. I am already looking for a solution to the flickering issue in this SO question:
iOS CAKeyFrameAnimation scaling flickers at animation end
So now I don't know if I have made things better or worse!
Edit March 12
While Caleb pointed out that yes, I did have to pre-rotate my image, Rob provided an awesome package of code that almost completely solved my problems. The only thing Rob didn't do was compensate for my assets being drawn with a vertical rather than horizontal orientation, so I still need to pre-rotate them by 90 degrees before the animation. But hey, it's only fair that I have to do some of the work to get things running.
So my slight changes to Rob's solution to suite my requirements are:
When I add the UIView, I pre-rotate it to counter the inherent rotation added by setting the rotationMode:
theImage.transform = CGAffineTransformMakeRotation(90*M_PI/180.0);
I need to keep that rotation at the end of the animation, so instead of just blasting the view's transform with a new scale factor after the completion block is defined, I build the scale based on the current transform:
theImage.transform = CGAffineTransformScale(theImage.transform, scaleFactor, scaleFactor);
And that's all I had to do to get my image to follow the path as I expected!
Edit March 22
I have just uploaded to GitHub a demo project that shows off the moving of an object along a bezier path. The code can be found at PathMove
I also wrote about it in my blog at Moving objects along a bezier path in iOS
The issue here is that Core Animation's autorotation keeps the horizontal axis of the view parallel to the path's tangent. That's just how it works.
If you want your view's vertical axis to follow the path's tangent instead, rotating the contents of the view as you're currently doing is the reasonable thing to do.
Here's what you need to know to eliminate the flicker:
As Caleb sort of pointed out, Core Animation rotates your layer so that its positive X axis lies along the tangent of your path. You need to make your image's “natural” orientation work with that. So, supposing that's a green spaceship in your example images, you need the spaceship to point to the right when it doesn't have rotation applied to it.
Setting a transform that includes rotation interferes with the rotation applied by kCAAnimationRotateAuto. You need to remove the rotation from your transform before applying the animation.
Of course that means you need to reapply the transformation when the animation completes. And of course you want to do that without seeing any flicker in the appearance of the image. That's not hard, but there's some secret sauce involved, which I explain below.
You presumably want your spaceship to start out pointing along the tangent of the path, even when the spaceship is sitting still having not been animated yet. If your spaceship image is pointing to the right, but your path goes up, then you need to set the transform of the image to include a 90° rotation. But perhaps you don't want to hardcode that rotation - instead you want to look at the path and figure out its starting tangent.
I'll show some of the important code here. You can find my test project on github. You may find some use in downloading it and trying it out. Just tap on the green “spaceship” to see the animation.
So, in my test project, I have connected my UIImageView to an action named animate:. When you touch it, the image moves along half of a figure 8 and doubles in size. When you touch it again, the image moves along the other half of the figure 8 (back to the starting position), and returns to its original size. Both animations use kCAAnimationRotateAuto, so the image points along the tangent of the path.
Here's the start of animate:, where I figure out what path, scale, and destination point the image should end up at:
- (IBAction)animate:(id)sender {
    UIImageView *theImage = self.imageView;
    UIBezierPath *path = _isReset ? _path0 : _path1;
    CGFloat newScale = 3 - _currentScale;
    CGPoint destination = [path currentPoint];
So, the first thing I need to do is remove any rotation from the image's transform, since as I mentioned, it will interfere with kCAAnimationRotateAuto:
// Strip off the image's rotation, because it interferes with `kCAAnimationRotateAuto`.
theImage.transform = CGAffineTransformMakeScale(_currentScale, _currentScale);
Next, I go into a UIView animation block so that the system will apply animations to the image view:
[UIView animateWithDuration:3 animations:^{
I create the keyframe animation for the position and set a couple of its properties:
// Prepare my own keyframe animation for the layer position.
// The layer position is the same as the view center.
CAKeyframeAnimation *positionAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
positionAnimation.path = path.CGPath;
positionAnimation.rotationMode = kCAAnimationRotateAuto;
Next is the secret sauce for preventing flicker at the end of the animation. Recall that animations do not affect the properties of the “model layer” that you attach them to (theImage.layer in this case). Instead, they update the properties of the “presentation layer”, which reflects what's actually on the screen.
So first I set removedOnCompletion to NO for the keyframe animation. This means the animation will stay attached to the model layer when the animation is complete, which means I can access the presentation layer. I get the transform from the presentation layer, remove the animation, and apply the transform to the model layer. Since this is all happening on the main thread, these property changes all happen in one screen refresh cycle, so there's no flicker.
positionAnimation.removedOnCompletion = NO;
[CATransaction setCompletionBlock:^{
    CGAffineTransform finalTransform = [theImage.layer.presentationLayer affineTransform];
    [theImage.layer removeAnimationForKey:positionAnimation.keyPath];
    theImage.transform = finalTransform;
}];
Now that I've set up the completion block, I can actually change the view properties. The system will automatically attach animations to the layer when I do this.
// UIView will add animations for both of these changes.
theImage.transform = CGAffineTransformMakeScale(newScale, newScale);
theImage.center = destination;
I copy some key properties from the automatically-added position animation to my keyframe animation:
// Copy properties from UIView's animation.
CAAnimation *autoAnimation = [theImage.layer animationForKey:positionAnimation.keyPath];
positionAnimation.duration = autoAnimation.duration;
positionAnimation.fillMode = autoAnimation.fillMode;
and finally I replace the automatically-added position animation with the keyframe animation:
// Replace UIView's animation with my animation.
[theImage.layer addAnimation:positionAnimation forKey:positionAnimation.keyPath];
}];
Double-finally I update my instance variables to reflect the change to the image view:
_currentScale = newScale;
_isReset = !_isReset;
}
That's it for animating the image view with no flicker.
And now, as Steve Jobs would say, One Last Thing. When I load the view, I need to set the transform of the image view so that it's rotated to point along the tangent of the first path that I will use to animate it. I do that in a method named reset:
- (void)reset {
    self.imageView.center = _path1.currentPoint;
    self.imageView.transform = CGAffineTransformMakeRotation(startRadiansForPath(_path0));
    _currentScale = 1;
    _isReset = YES;
}
Of course, the tricky bit is hidden in that startRadiansForPath function. It's really not that hard. I use the CGPathApply function to process the elements of the path, picking out the first two points that actually form a subpath, and I compute the angle of the line formed by those two points. (A curved path section is either a quadratic or cubic bezier spline, and those splines have the property that the tangent at the first point of the spline is the line from the first point to the next control point.)
I'm just going to dump the code here without explanation, for posterity:
typedef struct {
    CGPoint p0;
    CGPoint p1;
    CGPoint firstPointOfCurrentSubpath;
    CGPoint currentPoint;
    BOOL p0p1AreSet : 1;
} PathState;

static inline void updateStateWithMoveElement(PathState *state, CGPathElement const *element) {
    state->currentPoint = element->points[0];
    state->firstPointOfCurrentSubpath = state->currentPoint;
}

static inline void updateStateWithPoints(PathState *state, CGPoint p1, CGPoint currentPoint) {
    if (!state->p0p1AreSet) {
        state->p0 = state->currentPoint;
        state->p1 = p1;
        state->p0p1AreSet = YES;
    }
    state->currentPoint = currentPoint;
}

static inline void updateStateWithPointsElement(PathState *state, CGPathElement const *element, int newCurrentPointIndex) {
    updateStateWithPoints(state, element->points[0], element->points[newCurrentPointIndex]);
}

static void updateStateWithCloseElement(PathState *state, CGPathElement const *element) {
    updateStateWithPoints(state, state->firstPointOfCurrentSubpath, state->firstPointOfCurrentSubpath);
}

static void updateState(void *info, CGPathElement const *element) {
    PathState *state = info;
    switch (element->type) {
        case kCGPathElementMoveToPoint: return updateStateWithMoveElement(state, element);
        case kCGPathElementAddLineToPoint: return updateStateWithPointsElement(state, element, 0);
        case kCGPathElementAddQuadCurveToPoint: return updateStateWithPointsElement(state, element, 1);
        case kCGPathElementAddCurveToPoint: return updateStateWithPointsElement(state, element, 2);
        case kCGPathElementCloseSubpath: return updateStateWithCloseElement(state, element);
    }
}

CGFloat startRadiansForPath(UIBezierPath *path) {
    PathState state;
    memset(&state, 0, sizeof state);
    CGPathApply(path.CGPath, &state, updateState);
    return atan2f(state.p1.y - state.p0.y, state.p1.x - state.p0.x);
}
You mention that you kick off the animation with "rotationMode set to YES", but the documentation states that rotationMode should be set using an NSString...
In particular:
These constants are used by the rotationMode property.
NSString * const kCAAnimationRotateAuto
NSString * const kCAAnimationRotateAutoReverse
Have you tried setting:
keyframe.rotationMode = kCAAnimationRotateAuto;
The documentation states:
kCAAnimationRotateAuto: The objects travel on a tangent to the path.
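For reference, a minimal setup (the path and view names here are illustrative) with the constant applied to the correct property, rotationMode:
CAKeyframeAnimation *keyframe = [CAKeyframeAnimation animationWithKeyPath:@"position"];
keyframe.path = bezierPath.CGPath;               // the path the layer follows
keyframe.rotationMode = kCAAnimationRotateAuto;  // rotate to match the path tangent
keyframe.duration = 2.0;
[imageView.layer addAnimation:keyframe forKey:@"position"];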
I want to simultaneously scale and translate a CALayer from one CGRect (a small one, from a button) to another (a bigger, centered one, for a view). Basically, the idea is that the user touches a button and, from the button, a CALayer reveals, translates, and scales up to end up centered on the screen. Then the CALayer (through another button) shrinks back to the position and size of the button.
I'm animating this through CATransform3D matrices. But the CALayer is actually the backing layer for a UIView (because I also need responder functionality). And while applying my scale or translation transforms separately works fine, the concatenation of both (translation followed by scaling) offsets the layer's position so that it doesn't align with the button when it shrinks.
My guess is that this is because the CALayer's anchor point is in its center by default. The transform applies translation first, moving the 'big' CALayer to align with the button at the upper-left corner of their frames. Then, when scaling takes place, since the anchor point is in the center, all directions scale down towards it. At this point, my layer is the button's size (what I want), but the position is offset (because all points shrank towards the layer's center).
Makes sense?
So I'm trying to figure out whether, instead of concatenating translation + scale, I need to:
1. translate,
2. change the anchor point to the upper-left,
3. scale.
Or whether I should be able to come up with some factor or constant to incorporate into the translation matrix, so that it translates to a position pre-offset by the amount the subsequent scaling will shift it, leaving the final position correct.
Any thoughts?
You should post your code. It is generally much easier for us to help you when we can look at your code.
Anyway, this works for me:
- (IBAction)showZoomView:(id)sender {
    [UIView animateWithDuration:.5 animations:^{
        self.zoomView.layer.transform = CATransform3DIdentity;
    }];
}

- (IBAction)hideZoomView:(id)sender {
    CGPoint buttonCenter = self.hideButton.center;
    CGPoint zoomViewCenter = self.zoomView.center;
    CATransform3D transform = CATransform3DIdentity;
    transform = CATransform3DTranslate(transform, buttonCenter.x - zoomViewCenter.x, buttonCenter.y - zoomViewCenter.y, 0);
    transform = CATransform3DScale(transform, .001, .001, 1);
    [UIView animateWithDuration:.5 animations:^{
        self.zoomView.layer.transform = transform;
    }];
}
In my test case, self.hideButton and self.zoomView have the same superview.
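If you'd rather pursue the anchor-point idea from the question, a hedged sketch of that variant (note that moving the anchor point changes what position refers to, so position must be compensated or the layer will jump):
// Hypothetical variant: shrink toward the layer's upper-left corner instead
// of its center. Do this while the transform is still identity, so the
// frame is well defined.
CALayer *layer = self.zoomView.layer;
CGPoint topLeft = self.zoomView.frame.origin;
layer.anchorPoint = CGPointMake(0, 0);
layer.position = topLeft; // keep the layer in place after moving the anchor
// A scale now shrinks toward the upper-left corner:
layer.transform = CATransform3DMakeScale(0.001, 0.001, 1);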
I'm working on a drawing app.
I want the user to be able to "drop" a shape on the screen and then move, resize or rotate it as desired.
The problem is with the rotation. I have the moving and resizing working fine.
I did this before with a rather complex and memory/processor-intensive process, which I am now trying to improve.
I've searched and searched but haven't found an answer similar to what I'm trying to do.
Basically, let's say the user drops a square on the "surface". Then, they tap it and get some handles. They can touch anywhere and pan to move the square around (working already), touch and drag on a resize handle to resize the square (working already), or grab the rotation handle to have the square rotate around its center.
I've looked into drawing the square using UIBezierPath or just having it be a subclass of UIView that I fill.
In either case, I'm trying to rotate the UIView itself, not some contents inside. Every time I try to rotate the view, either nothing happens, the view vacates the screen or it rotates just a little bit and stops.
Here's some of the code I've tried (this doesn't work, and I've tried a lot of different approaches to this):
- (void)rotateByAngle:(CGFloat)angle
{
    CGPoint cntr = [self center];
    CGAffineTransform move = CGAffineTransformMakeTranslation(-1 * cntr.x, -1 * cntr.y);
    [[self path] applyTransform:move];
    CGAffineTransform rotate = CGAffineTransformMakeRotation(angle * M_PI / 180.0);
    [[self path] applyTransform:rotate];
    [self setNeedsDisplay];
    CGAffineTransform moveback = CGAffineTransformMakeTranslation(cntr.x, cntr.y);
    [[self path] applyTransform:moveback];
}
In case it isn't obvious, the thinking here is to move the view to the origin (0,0), rotate around that point, and then move it back.
In case you're wondering, "angle" is calculated correctly. I've also wrapped the code above in a [UIView beginAnimations:nil context:NULL]/[UIView commitAnimations] block.
Is it possible to rotate a UIView a "custom" amount? I've seen/done it before where I animate a control to spin, but in those examples, the control always ended up "square" (i.e., it rotated 1 or more full circles and came back to its starting orientation).
Is it possible to perform this rotation "real-time" in response to UITouches? Do I need to draw the square as an item in the layer of the UIView and rotate the layer instead?
Just so you know, what I had working before was a shape drawn by a set of lines or UIBezierPaths. I would apply a CGAffineTransform to the data and then call the drawRect: method, which would re-draw the object inside of a custom UIView. This UIView would host quite a number of these items, all of which would need to be re-drawn anytime one of them needed it.
So, I'm trying to make the app more performant by creating a bunch of UIView subclasses, which will only get a command to re-draw when the user does something with them. Apple's Keynote for the iPad seems to accomplish this using UIGestureRecognizers, since you have to use two fingers to rotate an added shape. Is this the way to go?
Thoughts?
Thanks!
- (void)rotate {
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:1.0];
    // Continue from the view's current state rather than restarting the animation.
    [UIView setAnimationBeginsFromCurrentState:YES];
    myView.alpha = 1;
    CGAffineTransform transform = CGAffineTransformRotate(myView.transform, 0.5 * M_PI);
    [myView setUserInteractionEnabled:YES];
    myView.transform = transform;
    [UIView commitAnimations];
}
This might help.
For just a simple rotation you might leave out the animation:
- (void)rotate:(CGFloat)angle
{
    CGAffineTransform transform = CGAffineTransformRotate(myView.transform, angle * M_PI / 180.0);
    myView.transform = transform;
}
I use a simple NSTimer to control an animation.
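On the question's point about Keynote-style interaction: yes, a UIRotationGestureRecognizer is the usual way to rotate a view in real time in response to touches. A minimal sketch (handler and view names are illustrative):
// Attach once, e.g. in the owning view controller:
UIRotationGestureRecognizer *recognizer =
    [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotation:)];
[shapeView addGestureRecognizer:recognizer];

// Handler: apply the incremental rotation, then reset it so the next
// callback delivers a fresh delta rather than a cumulative angle.
- (void)handleRotation:(UIRotationGestureRecognizer *)gesture
{
    gesture.view.transform = CGAffineTransformRotate(gesture.view.transform, gesture.rotation);
    gesture.rotation = 0;
}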