I am creating a UI where we have a deck of cards that you can swipe off the screen.
What I had hoped to do was create a subclass of UIView to represent each card, and then modify the transform property to move the cards back (z-axis) and slightly up (y-axis) to get the look of a deck of cards.
Reading up on it I found I needed to use a CATransformLayer instead of the normal CALayer in order for the z-axis to not get flattened. I prototyped this by creating a CATransformLayer which I added to the CardDeckView's layer, and then all my cards are added to that CATransformLayer. The code looks a little bit like this:
In init:
// Initialize the CATransformSublayer
_rootTransformLayer = [self constructRootTransformLayer];
[self.layer addSublayer:_rootTransformLayer];
constructRootTransformLayer (the angle method is redundant - I was going to angle the deck but later decided not to):
CATransformLayer* transformLayer = [CATransformLayer layer];
transformLayer.frame = self.bounds;
// Angle the transform layer so we can see all of the cards
CATransform3D rootRotateTransform = [self transformWithZRotation:0.0];
transformLayer.transform = rootRotateTransform;
return transformLayer;
Then the code to add the cards looks like:
// Set up a CardView as a wrapper for the contentView
RVCardView* cardView = [[RVCardView alloc] initWithContentView:contentView];
cardView.layer.cornerRadius = 6.0;
if (cardView != nil) {
[_cardArray addObject:cardView];
//[self addSubview:cardView];
[_rootTransformLayer addSublayer:cardView.layer];
[self setNeedsLayout];
}
Note that what I originally wanted was to simply add the RVCardView directly as a subview - I want to preserve touch events, which adding just the layer doesn't do. Unfortunately, adding the card views as subviews gives me a flattened deck, whereas adding the cards to the rootTransformLayer gives me the right look.
Note that I tried using the layerClass on the root view (CardDeckView) which looks like this:
+ (Class) layerClass
{
return [CATransformLayer class];
}
I've confirmed that the root layer type is now CATransformLayer but I still get the flattened look. What else do I need to do in order to prevent the flattening?
When you use views, you see a flat scene because there is no perspective set in place. To make a comparison with 3D graphics, like OpenGL, in order to render a scene you must set the camera matrix, the one that transforms the 3D world into a 2D image.
This is the same: sublayers content are transformed using CATransform3D in 3D space but then, when the parent CALayer displays them, by default it projects them on x and y ignoring the z coordinate.
See Adding Perspective to Your Animations in the Apple documentation. This is the code you are missing:
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / eyePosition; // eyePosition is the viewer's distance from the layer plane, along the z axis
myParentDeckView.layer.sublayerTransform = perspective;
Note that for this you don't need a CATransformLayer; a simple CALayer suffices.
Here is the transformation applied to the subviews in the picture (eyePosition = -0.1):
// (from the ViewController's viewDidLoad)
for (UIView *v in self.view.subviews) {
CGFloat dz = (float)(arc4random() % self.view.subviews.count);
CATransform3D t = CATransform3DRotate(CATransform3DMakeTranslation(0.f, 0.f, dz),
0.02,
1.0, 0.0, 0.0);
v.layer.transform = t;
}
The reason for using CATransformLayer is pointed out in this question. CALayer "rasterizes" its transformed sublayers and then applies its own transformation, while CATransformLayer preserves the full hierarchy and draws each sublayer independently; it is useful only if you have more than one level of 3D-transformed sublayers. In your case, the scene tree has only one level: the deck view (which itself has the identity matrix as transformation) and the card views, the children (which are instead moved in the 3D space). So CATransformLayer is superfluous in this case.
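Applied to the original deck, here is a minimal sketch of that simpler setup. It assumes the RVCardView initializer and the _cardArray ivar from the question; the 500-point eye distance and the per-card offsets are illustrative values, not anything prescribed:
- (void)layoutCards
{
    // One perspective transform on the deck view's (plain) layer is enough.
    CATransform3D perspective = CATransform3DIdentity;
    perspective.m34 = -1.0 / 500.0;
    self.layer.sublayerTransform = perspective;
    [_cardArray enumerateObjectsUsingBlock:^(RVCardView *card, NSUInteger idx, BOOL *stop) {
        // Push each successive card back on z and nudge it up on y.
        card.layer.transform = CATransform3DMakeTranslation(0.0, -8.0 * idx, -20.0 * idx);
    }];
}
Because the cards are still added with addSubview:, the view hierarchy (and therefore touch handling) stays intact; only the transforms change.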
I have looked at libraries like gaugekit, but they do not solve my problem.
Are there any other libraries for making a gauge view like the one in the image?
If not, how can I go about building it myself?
As @DonMag pointed out, I have tried making changes in gaugekit by adding a view on top of the gauge view... but it does not turn out well.
So I am stuck on making the spaces in between the segments of the actual gauge.
https://imgur.com/Qk1EpcV
I suggest you create your own custom view; it's not that difficult. Here is how I would do it. I have left out some details for clarity, but the comments point out my suggested approach for those parts.
First, create a subclass of UIView. We need one property to keep track of the gauge position. This goes in your .h file.
@interface GaugeView : UIView
@property (nonatomic) CGFloat knobPosition;
@end
Next, add the implementation. The GaugeView is a view in itself, so it serves as the container for the other parts we want. I have done the initialization in awakeFromNib so that you can assign the class to a UIView in a Storyboard; if you prefer, you can do the initialization from an init method instead.
I have not provided code for the knob in the center, but I suggest you simply create one view with a white disc (or two, to get the gray circle) and the labels holding the text parts, and beneath that add an image view with the gray pointer. The pointer can then be moved by applying a rotational transform to it.
- (void)awakeFromNib {
[super awakeFromNib];
// Initialization part could also be placed in init
[self createSegmentLayers];
// Add knob views to self
// :
// Start somewhere
self.knobPosition = 0.7;
}
Next, create the segments. The actual shapes are not added here, since they will require the size of the view. It is better to defer that to layoutSubviews.
- (void)createSegmentLayers {
for (NSInteger segment = 0; segment < 10; ++segment) {
// Create the shape layer and set fixed properties
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
// Color can be set differently for each segment
shapeLayer.strokeColor = [UIColor blueColor].CGColor;
shapeLayer.lineWidth = 1.0;
[self.layer addSublayer:shapeLayer];
}
}
Next, we need to respond to size changes to the view. This is where we create the actual shapes too.
- (void)layoutSubviews {
[super layoutSubviews];
// Dynamically create the segment paths and scale them to the current view width
NSInteger segment = 0;
for (CAShapeLayer *layer in self.layer.sublayers) {
layer.frame = self.layer.bounds;
layer.path = [self createSegmentPath:segment radius:self.bounds.size.width / 2.0].CGPath;
// If we should fill or not depends on the knob position
// Since the knobPosition's range is 0.0..1.0 we can just multiply by 10
// and compare to the segment number
layer.fillColor = segment < (_knobPosition * 10) ? layer.strokeColor : nil;
// Assume we added the segment layers first
if (++segment >= 10)
break;
}
// Move and size knob images
// :
}
Then we need the shapes. The DEG2RAD macro used below simply converts degrees to radians:
#define DEG2RAD(angle) ((angle) * M_PI / 180.0)
- (UIBezierPath *)createSegmentPath:(NSInteger)segment radius:(CGFloat)radius {
UIBezierPath *path = [UIBezierPath bezierPath];
// We could also use a table with start and end angles for different segment sizes
CGFloat startAngle = segment * 21.0 + 180.0 - 12.0;
CGFloat endAngle = startAngle + 15.0;
// Draw the path, two arcs and two implicit lines
[path addArcWithCenter:CGPointMake(radius, radius) radius:0.9 * radius startAngle:DEG2RAD(startAngle) endAngle:DEG2RAD(endAngle) clockwise:YES];
[path addArcWithCenter:CGPointMake(radius, radius) radius:0.75 * radius startAngle:DEG2RAD(endAngle) endAngle:DEG2RAD(startAngle) clockwise:NO];
[path closePath];
return path;
}
Finally, we want to respond to changes to the knobPosition property. Calling setNeedsLayout will trigger a call to layoutSubviews.
// Position is 0.0 .. 1.0
- (void)setKnobPosition:(CGFloat)knobPosition {
// Rotate the knob image to point at the right segment
// self.knobPointerImageView.transform = CGAffineTransformMakeRotation(DEG2RAD(knobPosition * 207.0 + 180.0));
_knobPosition = knobPosition;
[self setNeedsLayout];
}
This is what it will look like now. Add the knob, some colors and possibly different sized segments and you are done!
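For the knob itself, a minimal sketch of the pointer part might look like the following. The knobPointerImageView property and the "gauge-pointer" asset name are hypothetical, chosen to match the commented-out line in setKnobPosition: above; the 207°/180° angle mapping is likewise just the example value used there:
// In awakeFromNib, after createSegmentLayers:
self.knobPointerImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"gauge-pointer"]];
[self addSubview:self.knobPointerImageView];
// In layoutSubviews, keep the pointer centered over the gauge:
self.knobPointerImageView.center = CGPointMake(self.bounds.size.width / 2.0, self.bounds.size.height / 2.0);
// And in setKnobPosition:, rotate it to the current value:
self.knobPointerImageView.transform = CGAffineTransformMakeRotation(DEG2RAD(knobPosition * 207.0 + 180.0));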
Based on the image, I think the easiest solution might be to create 12 images and then programmatically swap them as the value being represented grows or shrinks.
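A minimal sketch of that image-swapping approach, assuming a gaugeImageView outlet, a value in the 0.0–1.0 range, and assets named gauge_00.png through gauge_11.png (all hypothetical names for illustration):
- (void)updateGaugeWithValue:(CGFloat)value
{
    // Map 0.0–1.0 onto the 12 pre-rendered gauge states and clamp to be safe.
    NSInteger index = (NSInteger)lround(value * 11.0);
    index = MAX(0, MIN(11, index));
    self.gaugeImageView.image = [UIImage imageNamed:[NSString stringWithFormat:@"gauge_%02ld", (long)index]];
}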
Scenario
I have two views. One is the "parent" view, which contains a "child" view that does the drawing. I refer to the child as QuartzView in the code that follows. QuartzView knows how to draw a square into its own context.
Issue
When I tell the QuartzView to draw a square on itself, it does so as expected. When I use the parent view to tell QuartzView to draw a square on itself, it draws the square in the lower left corner of the screen at about 1/5 the expected size.
Question
I assume there's some parent/child or context issues here but I'm not sure what they are. How can I get both squares to draw in the exact same place at the exact same size?
Parent ViewController
- (void)drawASquare {
// this code draws the "goofy" square that is smaller and off in the bottom left corner
CGFloat x = qv.frame.size.width / 2;
CGFloat y = qv.frame.size.height / 2;
CGPoint center = CGPointMake(x, y);
[qv drawRectWithCenter:center andWidth:50 andHeight:50 andFillColor:[UIColor blueColor]];
}
Child QuartzView
- (void)drawRect:(CGRect)rect
{
self.context = UIGraphicsGetCurrentContext();
UIColor *color = [UIColor colorWithRed:0 green:1 blue:0 alpha:0.5];
// this code draws a square as expected
float w = self.frame.size.width / 2;
float h = self.frame.size.height / 2;
color = [UIColor blueColor];
CGPoint center = CGPointMake(w, h);
[self drawRectWithCenter:center andWidth:20 andHeight:20 andFillColor:color];
}
- (void)drawRectWithCenter:(CGPoint)center andWidth:(float)w andHeight:(float)h andFillColor:(UIColor *)color
{
CGContextSetFillColorWithColor(self.context, color.CGColor);
CGContextSetRGBStrokeColor(self.context, 0.0, 1.0, 0.0, 1);
CGRect rectangle = CGRectMake(center.x - w / 2, center.y - h / 2, w, h);
CGContextFillRect(self.context, rectangle);
CGContextStrokeRect(self.context, rectangle);
}
Note
The opacities are the same for both squares
I turned off "Autoresize subviews" with no noticeable difference
view.contentScaleFactor = [[UIScreen mainScreen] scale]; has not helped
Edit
I'm noticing that when the square is drawn via the parent, its x/y values are measured from the bottom left as 0,0, whereas normally 0,0 would be the top left.
The return value from UIGraphicsGetCurrentContext() is only valid inside the drawRect method. You cannot and must not use that context in any other method, so the self.context property should just be a local variable.
In the drawRectWithCenter method, you should store all of the parameters in properties, and then request a view update with [self setNeedsDisplay]. That way, the framework will call drawRect with the new information. The drawRectWithCenter method should look something like this
- (void)drawRectWithCenter:(CGPoint)center andWidth:(float)w andHeight:(float)h andFillColor:(UIColor *)color
{
self.showCenter = center;
self.showWidth = w;
self.showHeight = h;
self.showFillColor = color;
[self setNeedsDisplay];
}
And of course, the drawRect function needs to take that information, and do the appropriate drawing.
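A minimal sketch of what that drawRect: could look like, using the showCenter/showWidth/showHeight/showFillColor properties introduced above (those property names are just the ones suggested here, not anything predefined):
- (void)drawRect:(CGRect)rect
{
    // This context is only valid for the duration of this call.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, self.showFillColor.CGColor);
    CGContextSetRGBStrokeColor(context, 0.0, 1.0, 0.0, 1.0);
    CGRect rectangle = CGRectMake(self.showCenter.x - self.showWidth / 2.0,
                                  self.showCenter.y - self.showHeight / 2.0,
                                  self.showWidth, self.showHeight);
    CGContextFillRect(context, rectangle);
    CGContextStrokeRect(context, rectangle);
}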
I assume there's some parent/child or context issues here but I'm not sure what they are. How can I get both squares to draw in the exact same place at the exact same size?
You normally don't need to worry about the graphics context in your -drawRect: method because Cocoa Touch will set up the context for you before calling -drawRect:. But your -drawASquare method in the view controller calls -drawRectWithCenter:... to draw outside the normal drawing process, so the context isn't set up for your view. You should really have the view do its drawing in -drawRect:. If your view controller wants to make the view redraw, it should call -setNeedsDisplay, like:
[qv setNeedsDisplay];
That will add the view to the drawing list, and the graphics system will set up the graphics context and call the view's -drawRect: for you.
I'm noticing that when the square is drawn via the parent, its x/y values are measured from the bottom left as 0,0, whereas normally 0,0 would be the top left.
UIKit and Core Animation use an upper left origin, but Core Graphics (a.k.a. Quartz) normally uses a lower left origin. The docs say:
The default coordinate system used by Core Graphics framework is LLO-based.
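For completeness: the context you get in drawRect: (and the one from UIGraphicsBeginImageContextWithOptions) is already flipped to UIKit's upper-left origin. If you ever draw into a raw CGBitmapContext you create yourself, which is LLO-based, you can switch it to the upper-left convention with a translate-and-flip; a small sketch:
// ctx is a lower-left-origin CGContextRef; height is the context height in the units you draw in
CGContextTranslateCTM(ctx, 0.0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// From here on, (0,0) is the top left and y grows downward, matching UIKit.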
I am experimenting with Key Frame animation of the position of a UIImageView object moving along a bezier path. This pic shows the initial state before animation. The blue line is the path - initially moving straight up, the light green box is the initial bounding box or the image, and the dark green "ghost" is the image that I am moving:
When I kick off the animation with rotationMode set to nil, the image keeps the same orientation all the way through the path as expected.
But when I kick off the animation with rotationMode set to kCAAnimationRotateAuto, the image immediately rotates 90 degrees anti-clockwise and keeps this orientation all the way through the path. When it reaches the end of the path it redraws in the correct orientation (well it actually shows the UIImageView that I repositioned in the final location)
I was naively expecting that rotationMode would orient the image to the tangent of the path and not to the normal, especially since the Apple docs for CAKeyframeAnimation's rotationMode state:
Determines whether objects animating along the path rotate to match the path tangent.
So what is the solution here? Do I have to pre-rotate the image by 90 degrees clockwise? Or is there something that I am missing?
Thanks for your help.
Edit 2nd March
I added a rotation step before the path animation using an Affine rotation like:
theImage.transform = CGAffineTransformRotate(theImage.transform,90.0*M_PI/180);
and then after the path animation, resetting the rotation with:
theImage.transform = CGAffineTransformIdentity;
This makes the image follow the path in the expected manner. However, I am now running into a different problem: the image flickers. I am already looking for a solution to the flickering issue in this SO question:
iOS CAKeyFrameAnimation scaling flickers at animation end
So now I don't know if I have made things better or worse!
Edit March 12
While Caleb pointed out that yes, I did have to pre-rotate my image, Rob provided an awesome package of code that almost completely solved my problems. The only thing Rob didn't do was compensate for my assets being drawn with a vertical rather than horizontal orientation, so I still have to pre-rotate them by 90 degrees before doing the animation. But hey, it's only fair that I have to do some of the work to get things running.
So my slight changes to Rob's solution to suit my requirements are:
When I add the UIView, I pre-rotate it to counter the inherent rotation added by setting the rotationMode:
theImage.transform = CGAffineTransformMakeRotation(90*M_PI/180.0);
I need to keep that rotation at the end of the animation, so instead of just blasting the view's transform with a new scale factor after the completion block is defined, I build the scale based on the current transform:
theImage.transform = CGAffineTransformScale(theImage.transform, scaleFactor, scaleFactor);
And that's all I had to do to get my image to follow the path as I expected!
Edit March 22
I have just uploaded to GitHub a demo project that shows off the moving of an object along a bezier path. The code can be found at PathMove
I also wrote about it in my blog at Moving objects along a bezier path in iOS
The issue here is that Core Animation's autorotation keeps the horizontal axis of the view parallel to the path's tangent. That's just how it works.
If you want your view's vertical axis to follow the path's tangent instead, rotating the contents of the view as you're currently doing is the reasonable thing to do.
Here's what you need to know to eliminate the flicker:
As Caleb sort of pointed out, Core Animation rotates your layer so that its positive X axis lies along the tangent of your path. You need to make your image's “natural” orientation work with that. So, supposing that's a green spaceship in your example images, you need the spaceship to point to the right when it doesn't have rotation applied to it.
Setting a transform that includes rotation interferes with the rotation applied by kCAAnimationRotateAuto. You need to remove the rotation from your transform before applying the animation.
Of course that means you need to reapply the transformation when the animation completes. And of course you want to do that without seeing any flicker in the appearance of the image. That's not hard, but there is some secret sauce involved, which I explain below.
You presumably want your spaceship to start out pointing along the tangent of the path, even when the spaceship is sitting still having not been animated yet. If your spaceship image is pointing to the right, but your path goes up, then you need to set the transform of the image to include a 90° rotation. But perhaps you don't want to hardcode that rotation - instead you want to look at the path and figure out its starting tangent.
I'll show some of the important code here. You can find my test project on github. You may find some use in downloading it and trying it out. Just tap on the green “spaceship” to see the animation.
So, in my test project, I have connected my UIImageView to an action named animate:. When you touch it, the image moves along half of a figure 8 and doubles in size. When you touch it again, the image moves along the other half of the figure 8 (back to the starting position), and returns to its original size. Both animations use kCAAnimationRotateAuto, so the image points along the tangent of the path.
Here's the start of animate:, where I figure out what path, scale, and destination point the image should end up at:
- (IBAction)animate:(id)sender {
UIImageView* theImage = self.imageView;
UIBezierPath *path = _isReset ? _path0 : _path1;
CGFloat newScale = 3 - _currentScale;
CGPoint destination = [path currentPoint];
So, the first thing I need to do is remove any rotation from the image's transform, since as I mentioned, it will interfere with kCAAnimationRotateAuto:
// Strip off the image's rotation, because it interferes with `kCAAnimationRotateAuto`.
theImage.transform = CGAffineTransformMakeScale(_currentScale, _currentScale);
Next, I go into a UIView animation block so that the system will apply animations to the image view:
[UIView animateWithDuration:3 animations:^{
I create the keyframe animation for the position and set a couple of its properties:
// Prepare my own keypath animation for the layer position.
// The layer position is the same as the view center.
CAKeyframeAnimation *positionAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
positionAnimation.path = path.CGPath;
positionAnimation.rotationMode = kCAAnimationRotateAuto;
Next is the secret sauce for preventing flicker at the end of the animation. Recall that animations do not affect the properties of the "model layer" that you attach them to (theImage.layer in this case). Instead, they update the properties of the "presentation layer", which reflects what's actually on the screen.
So first I set removedOnCompletion to NO for the keyframe animation. This means the animation will stay attached to the model layer when the animation is complete, which means I can access the presentation layer. I get the transform from the presentation layer, remove the animation, and apply the transform to the model layer. Since this is all happening on the main thread, these property changes all happen in one screen refresh cycle, so there's no flicker.
positionAnimation.removedOnCompletion = NO;
[CATransaction setCompletionBlock:^{
CGAffineTransform finalTransform = [theImage.layer.presentationLayer affineTransform];
[theImage.layer removeAnimationForKey:positionAnimation.keyPath];
theImage.transform = finalTransform;
}];
Now that I've set up the completion block, I can actually change the view properties. The system will automatically attach animations to the layer when I do this.
// UIView will add animations for both of these changes.
theImage.transform = CGAffineTransformMakeScale(newScale, newScale);
theImage.center = destination;
I copy some key properties from the automatically-added position animation to my keyframe animation:
// Copy properties from UIView's animation.
CAAnimation *autoAnimation = [theImage.layer animationForKey:positionAnimation.keyPath];
positionAnimation.duration = autoAnimation.duration;
positionAnimation.fillMode = autoAnimation.fillMode;
and finally I replace the automatically-added position animation with the keyframe animation:
// Replace UIView's animation with my animation.
[theImage.layer addAnimation:positionAnimation forKey:positionAnimation.keyPath];
}];
Double-finally I update my instance variables to reflect the change to the image view:
_currentScale = newScale;
_isReset = !_isReset;
}
That's it for animating the image view with no flicker.
And now, as Steve Jobs would say, One Last Thing. When I load the view, I need to set the transform of the image view so that it's rotated to point along the tangent of the first path that I will use to animate it. I do that in a method named reset:
- (void)reset {
self.imageView.center = _path1.currentPoint;
self.imageView.transform = CGAffineTransformMakeRotation(startRadiansForPath(_path0));
_currentScale = 1;
_isReset = YES;
}
Of course, the tricky bit is hidden in that startRadiansForPath function. It's really not that hard. I use the CGPathApply function to process the elements of the path, picking out the first two points that actually form a subpath, and I compute the angle of the line formed by those two points. (A curved path section is either a quadratic or cubic bezier spline, and those splines have the property that the tangent at the first point of the spline is the line from the first point to the next control point.)
I'm just going to dump the code here without explanation, for posterity:
typedef struct {
CGPoint p0;
CGPoint p1;
CGPoint firstPointOfCurrentSubpath;
CGPoint currentPoint;
BOOL p0p1AreSet : 1;
} PathState;
static inline void updateStateWithMoveElement(PathState *state, CGPathElement const *element) {
state->currentPoint = element->points[0];
state->firstPointOfCurrentSubpath = state->currentPoint;
}
static inline void updateStateWithPoints(PathState *state, CGPoint p1, CGPoint currentPoint) {
if (!state->p0p1AreSet) {
state->p0 = state->currentPoint;
state->p1 = p1;
state->p0p1AreSet = YES;
}
state->currentPoint = currentPoint;
}
static inline void updateStateWithPointsElement(PathState *state, CGPathElement const *element, int newCurrentPointIndex) {
updateStateWithPoints(state, element->points[0], element->points[newCurrentPointIndex]);
}
static void updateStateWithCloseElement(PathState *state, CGPathElement const *element) {
updateStateWithPoints(state, state->firstPointOfCurrentSubpath, state->firstPointOfCurrentSubpath);
}
static void updateState(void *info, CGPathElement const *element) {
PathState *state = info;
switch (element->type) {
case kCGPathElementMoveToPoint: return updateStateWithMoveElement(state, element);
case kCGPathElementAddLineToPoint: return updateStateWithPointsElement(state, element, 0);
case kCGPathElementAddQuadCurveToPoint: return updateStateWithPointsElement(state, element, 1);
case kCGPathElementAddCurveToPoint: return updateStateWithPointsElement(state, element, 2);
case kCGPathElementCloseSubpath: return updateStateWithCloseElement(state, element);
}
}
CGFloat startRadiansForPath(UIBezierPath *path) {
PathState state;
memset(&state, 0, sizeof state);
CGPathApply(path.CGPath, &state, updateState);
return atan2f(state.p1.y - state.p0.y, state.p1.x - state.p0.x);
}
You mention that you kick off the animation with "rotationMode set to YES", but the documentation states that rotationMode should be set using an NSString...
In particular:
These constants are used by the rotationMode property.
NSString * const kCAAnimationRotateAuto
NSString * const kCAAnimationRotateAutoReverse
Have you tried setting:
keyframe.rotationMode = kCAAnimationRotateAuto;
The documentation states:
kCAAnimationRotateAuto: The objects travel on a tangent to the path.
I want to simultaneously scale and translate a CALayer from one CGRect (a small one, from a button) to another (a bigger, centered one, for a view). Basically, the idea is that the user touches a button and, from the button, a CALayer reveals, translates, and scales up until it ends up centered on the screen. Then the CALayer (through another button) shrinks back to the position and size of the button.
I'm animating this through CATransform3D matrices. But the CALayer is actually the backing layer for a UIView (because I also need responder functionality). While applying my scale or translation transforms separately works fine, the concatenation of both (translation followed by scaling) offsets the layer's position so that it doesn't align with the button when it shrinks.
My guess is that this is because the CALayer anchor point is at its center by default. The transform applies translation first, moving the 'big' CALayer to align with the button at the upper left corner of their frames. Then, when scaling takes place, since the CALayer anchor point is in the center, all directions scale down towards it. At this point my layer is the button's size (what I want), but the position is offset (because all points shrank towards the layer center).
Makes sense?
So I'm trying to figure out whether instead of concatenating translation + scale, I need to:
translate
change anchor point to upper-left.
scale.
Or whether I should be able to come up with some factor or constant to fold into the translation matrix, so that it translates to a position offset by exactly the amount the subsequent scaling will shift it, leaving the final position correct.
Any thoughts?
You should post your code. It is generally much easier for us to help you when we can look at your code.
Anyway, this works for me:
- (IBAction)showZoomView:(id)sender {
[UIView animateWithDuration:.5 animations:^{
self.zoomView.layer.transform = CATransform3DIdentity;
}];
}
- (IBAction)hideZoomView:(id)sender {
CGPoint buttonCenter = self.hideButton.center;
CGPoint zoomViewCenter = self.zoomView.center;
CATransform3D transform = CATransform3DIdentity;
transform = CATransform3DTranslate(transform, buttonCenter.x - zoomViewCenter.x, buttonCenter.y - zoomViewCenter.y, 0);
transform = CATransform3DScale(transform, .001, .001, 1);
[UIView animateWithDuration:.5 animations:^{
self.zoomView.layer.transform = transform;
}];
}
In my test case, self.hideButton and self.zoomView have the same superview.
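For the record, the anchor-point route from the question can work as well. A minimal sketch, assuming (as in the code above) that the button and the zoom view share a superview; re-setting frame after changing anchorPoint is what keeps the layer from jumping, since frame is derived from position, bounds, and anchorPoint:
// Re-anchor the layer at its top-left corner without moving it on screen.
CALayer *layer = self.zoomView.layer;
CGRect zoomFrame = layer.frame;
layer.anchorPoint = CGPointZero;   // default is (0.5, 0.5)
layer.frame = zoomFrame;
// With the anchor at the top-left, translate that corner onto the button's
// corner and scale down; the shrunken layer then lines up with the button.
CGRect buttonFrame = self.hideButton.frame;
CATransform3D t = CATransform3DMakeTranslation(buttonFrame.origin.x - zoomFrame.origin.x,
                                               buttonFrame.origin.y - zoomFrame.origin.y, 0);
t = CATransform3DScale(t,
                       buttonFrame.size.width / zoomFrame.size.width,
                       buttonFrame.size.height / zoomFrame.size.height, 1);
self.zoomView.layer.transform = t;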
Twitter for iPad implements a fancy "pinch to expand paper fold" effect. A short video clip is here:
http://www.youtube.com/watch?v=B0TuPsNJ-XY
Can this be done with CATransform3D without OpenGL? A working example would be appreciated.
Update: I was interested in the approach or implementation of this animation effect. That's why I offered a bounty on this question. - srikar
Here's a really simple example using a gesture recognizer and CATransform3D to get you started. Simply pinch to rotate the gray view.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
// ...
CGRect rect = self.window.bounds;
view = [[UIView alloc] initWithFrame:CGRectMake(rect.size.width/4, rect.size.height/4,
rect.size.width/2, rect.size.height/2)];
view.backgroundColor = [UIColor lightGrayColor];
[self.window addSubview:view];
CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1/500.0; // this allows perspective
self.window.layer.sublayerTransform = transform;
UIPinchGestureRecognizer *rec = [[UIPinchGestureRecognizer alloc] initWithTarget:self
action:@selector(pinch:)];
[self.window addGestureRecognizer:rec];
[rec release];
return YES;
}
- (void)pinch:(UIPinchGestureRecognizer *)rec
{
CATransform3D t = CATransform3DIdentity;
t = CATransform3DTranslate(t, 0, -self.view.bounds.size.height/2, 0);
t = CATransform3DRotate(t, rec.scale * M_PI, 1, 0, 0);
t = CATransform3DTranslate(t, 0, -self.view.bounds.size.height/2, 0);
self.view.layer.transform = t;
}
Essentially, this effect is composed of several different steps:
Gesture recognizer to detect when a pinch-out is occurring.
When the gesture starts, Twitter is likely creating a graphics context for the top and bottom portions, essentially creating images from their layers* (see the sketch after this list).
Attach the images as subviews on the top and bottom.
As the fingers flex in and out, use a CATransform3D to add perspective to the images.
Once the view has 'fully stretched out', make the real subviews visible and remove the graphics context-created images.
To collapse the views, do the inverse of the above.
*Because these views are relatively simple, they may not need to be rendered to a graphics context.
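For steps 2 and 3, a minimal sketch of the snapshot part (pre-iOS-7 style, using renderInContext: on the layer; contentView and the half-height split are illustrative assumptions):
// Render the top half of contentView into an image that can be folded.
CGSize halfSize = CGSizeMake(contentView.bounds.size.width, contentView.bounds.size.height / 2.0);
UIGraphicsBeginImageContextWithOptions(halfSize, YES, 0.0);
// (For the bottom half, translate the context up by halfSize.height before rendering.)
[contentView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *topImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *topFold = [[UIImageView alloc] initWithImage:topImage];
// Add topFold (and a matching bottomFold) over contentView, then drive their
// layers' transforms from the pinch, using an m34 perspective as in the example above.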
The effect is basically just a view rotating about the X axis: when you drag a tweet out of the list, there's a view that starts out parallel to the X-Z plane. As the user un-pinches, the view rotates around the X axis until it comes fully into the X-Y plane. The documentation says:
The CATransform3D data structure defines a homogenous three-dimensional transform (a 4 by 4 matrix of CGFloat values) that is used to rotate, scale, offset, skew, and apply perspective transformations to a layer.
Furthermore, we know that CALayer's transform property is a CATransform3D structure, and that it's also animatable. Ergo, I think it's safe to say that the folding effect in question is do-able with Core Animation.
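As a hedged sketch of that rotation (the anchor point, perspective distance, and duration are illustrative; topFold stands in for whatever view holds the unfolding content):
// Hinge the fold along the view's bottom edge. Note that changing the anchor
// point shifts the layer, so compensate by re-setting its frame afterwards.
CGRect foldFrame = topFold.frame;
topFold.layer.anchorPoint = CGPointMake(0.5, 1.0);
topFold.frame = foldFrame;
// Perspective on the parent so the rotation reads as 3D.
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;
topFold.superview.layer.sublayerTransform = perspective;
// Rotate from edge-on (parallel to the X-Z plane) to flat (in the X-Y plane).
CABasicAnimation *fold = [CABasicAnimation animationWithKeyPath:@"transform.rotation.x"];
fold.fromValue = @(-M_PI_2);
fold.toValue = @(0.0);
fold.duration = 0.4;
[topFold.layer addAnimation:fold forKey:@"fold"];
topFold.layer.transform = CATransform3DIdentity;   // final (model) value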