Rotate view and subviews - iOS

I am having a problem rotating an image with subviews in an iOS 9+ app. I have a container view containing two subviews, each the same size as the container. The first subview contains an image from a PDF page. The second subview contains UIImageViews as subviews that sit on top of the PDF image. I use the coordinate system of the PDF to place and size the image views correctly. (Maybe I should mention that the container view is itself a subview of a UIScrollView.)
The image views are properly placed and oriented whether the PDF is portrait or landscape. However, when the PDF is landscape, I would like to rotate and scale the final image so that it displays normally. One way to do this is to rotate and scale the PDF and each image view individually, putting drawing code in each view's drawRect: method. This works, but it's really slow.
I learned from an SO post that if I apply the rotation transform to the CALayer of the container view, iOS applies the change to the entire view hierarchy. This runs much more quickly when rotating a landscape image. But I have not been able to scale the final image using the container view's layer. On an iPad I end up with a correctly rotated final image, centered horizontally at the top of the screen, but clipped on the left and right sides: the long axis of the image is still equal to the height of the screen, which is wider than the width of the screen.
The code in the container view is really short:
- (void)setOrientation:(NSInteger)orientation
{
    _orientation = orientation;
    if (orientation == foPDFLandscape)
    {
        // [[self layer] setNeedsDisplayOnBoundsChange:YES]; // no effect
        // [[self layer] setBounds:CGRectMake(0.0, 0.0, 100.0, 100.0)]; // does not change image size or scale
        // [[self layer] setContentsScale:0.5]; // does not change image size or scale
        [[self layer] setAnchorPoint:CGPointMake(0.0, 0.9)];
        CATransform3D transform = CATransform3DMakeRotation(90.0 * (M_PI / 180.0), 0.0, 0.0, 1.0);
        [[self layer] setTransform:transform];
        // putting the scaling code here instead of before the transform makes no difference
    }
}
Setting the bounds, frame, or contentsScale, either before or after the transform and in various combinations, has no effect on the final image. Nor does changing the contentsGravity or the autoresizingMask.
Is there a way I can do this?
Thanks

I needed to concatenate the transforms like this:
- (void)setOrientation:(NSInteger)orientation
{
    _orientation = orientation;
    if (orientation == foPDFLandscape)
    {
        [[self layer] setAnchorPoint:CGPointMake(0.0, 1.0)];
        CATransform3D rotate = CATransform3DMakeRotation(90.0 * (M_PI / 180.0), 0.0, 0.0, 1.0);
        CATransform3D scale = CATransform3DMakeScale(0.77, 0.77, 1.0);
        CATransform3D concat = CATransform3DConcat(rotate, scale);
        [[self layer] setTransform:concat];
    }
}
But this is a partial solution. The final image is clipped to the size of the screen -- OK on an iPad but not on an iPhone. Also, the image no longer responds to pinch gestures for zooming.
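In case it helps others: the 0.77 factor could presumably be derived from the container's geometry rather than hard-coded. A minimal sketch, assuming the container fills the screen and the goal is to fit the rotated long axis into the screen width:
    // Hypothetical variant: compute the fit scale instead of hard-coding it.
    // After the 90-degree rotation, the view's long (vertical) axis spans the
    // screen width, so scaling by width/height makes it fit on screen.
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    CGFloat fitScale = screenBounds.size.width / screenBounds.size.height;
    CATransform3D rotate = CATransform3DMakeRotation(M_PI_2, 0.0, 0.0, 1.0);
    CATransform3D concat = CATransform3DConcat(rotate, CATransform3DMakeScale(fitScale, fitScale, 1.0));
    [[self layer] setTransform:concat];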

Related

Clear Mask on UIView (actually a video view)

I'm using OpenTok, which is a WebRTC framework. What I need to do is take the displayed video view and crop it to a circle. The problem is that since this video avatar view will be placed in a view with a clear background, I can't just use a mask as shown in this SO question:
Cut Out Shape with Animation
I've also tried to use layer.cornerRadius in a UIView category:
- (void)setRoundedViewToDiameter:(float)newSize
{
    CGPoint saveCenter = self.center;
    CGRect newFrame = CGRectMake(self.frame.origin.x, self.frame.origin.y, newSize, newSize);
    self.frame = newFrame;
    self.layer.cornerRadius = newSize / 2.0;
    self.center = saveCenter;
}
And then applied like so:
- (void)setUserVideoView:(UIView *)view {
    [view setRoundedViewToDiameter:[WSUserView dimForUserAvatar:_sizeIndex]];
    self.userVideo = view;
    [self.userVideo setRoundedViewToDiameter:[WSUserView dimForUserAvatar:_sizeIndex]];
    [self addSubview:self.userVideo];
    [self sendSubviewToBack:self.userVideo];
    [self layoutSubviews];
}
But it's still an uncropped rectangle. Here's the relevant portion of the video view: I'm showing user image avatars at first, but when a video stream connects I want to replace the image with the video view, as a circle. The left image is the stream view that I need to make a circle.
Also, here's the inspector view of the video view I'm trying to crop. As you can see, it's an OTGLKVideoView.
Migrated from my comment:
You should set self.layer.masksToBounds = YES, because this ensures that the layer's sublayers are clipped to the corner radius too. I'm assuming the problem arises because the ever-changing sublayer, which is updated whenever the video's frame changes, otherwise ignores the corner radius.
More details can be found through this answer which solves a similar problem: https://stackoverflow.com/a/11325605/556479
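Applied to the category method above, that would look something like this (a sketch, not the original poster's final code):
    - (void)setRoundedViewToDiameter:(float)newSize
    {
        CGPoint saveCenter = self.center;
        self.frame = CGRectMake(self.frame.origin.x, self.frame.origin.y, newSize, newSize);
        self.layer.cornerRadius = newSize / 2.0;
        // Clip sublayers (including the video content) to the rounded bounds.
        self.layer.masksToBounds = YES;
        self.center = saveCenter;
    }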

AVFoundation photo size and rotation

I'm having a nightmare of a time trying to correct a photo taken with AVFoundation's captureStillImageAsynchronouslyFromConnection: so that it is sized and oriented to exactly what is shown on the screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
if ([[previewLayer connection] isVideoOrientationSupported])
{
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}
[self.view.layer insertSublayer:previewLayer atIndex:0];
Now when I have a returned image it needs cropping as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
The preview uses AVLayerVideoGravityResizeAspectFill, and I have my UIImageView also set to Aspect Fill.
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera uses a resolution of 720x1280 but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera display was centred in the view, but the cropped image is taken from the top section of the photo, which was not visible in the preview.
I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
Let's say your image is 720x1280 and you want your image to be cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but it also has a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same display ratio as self.view.bounds
Has the same smaller dimension size as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.bounds
CGFloat widthToHeightRatio = self.bounds.size.width / self.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to your 768x1024 display rectangle, you will end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
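To actually perform the crop once you have cropRect, something like the following should work; a minimal sketch that assumes the UIImage's orientation is up, and ignores the extra rect mapping a rotated or mirrored capture would need:
    // Crop the underlying CGImage to cropRect and rewrap it as a UIImage,
    // preserving the original scale and orientation metadata.
    CGImageRef croppedCGImage = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                                scale:image.scale
                                          orientation:image.imageOrientation];
    CGImageRelease(croppedCGImage);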

UIPinchGestureRecognizer transform not resizing from center

Problem: I have a pinch gesture recognizer on a View Controller which I'm using to scale an image nested inside the View Controller. The transform below works fine, except that the image is scaled from the upper left instead of the center. I want it to scale from the center.
Setup:
a UIImageView set to Aspect Fill mode (nested within a few views, origin set to center).
a UIPinchGestureRecognizer on the container View Controller
I verified:
anchorPoint for image view is (0.5, 0.5)
the center is moving after every transform
no auto layout constraints on the view or its parent (at least at build time)
Also, I tried setting the center of the UIImageView after the transform, but the change doesn't take effect until after the user is done pinching.
I don't want to center the image on the touch because the image is smaller than the view controller.
CGFloat _lastScale = 1.0;

- (IBAction)pinch:(UIPinchGestureRecognizer *)sender {
    if ([sender state] == UIGestureRecognizerStateBegan) {
        _lastScale = 1.0;
    }
    CGFloat scale = 1.0 - (_lastScale - [sender scale]);
    _lastScale = [sender scale];
    CGAffineTransform currentTransform = self.imageView.transform;
    CGAffineTransform newTransform = CGAffineTransformScale(currentTransform, scale, scale);
    [self.imageView setTransform:newTransform];
    NSLog(@"center: %@", NSStringFromCGPoint(self.imageView.center));
}
Here's a complete project demonstrating the issue.
https://github.com/adamloving/PinchDemo
no auto layout constraints on the view or its parent (at least at build time)
If your nib / storyboard uses auto layout, then there are certainly auto layout constraints, even if you did not deliberately construct them. And let's face it, auto layout does not play nicely with view transforms. A scale transform should scale from the center, but auto layout is fighting against you, forcing the frame to be reset by its top and its left (because that is where the constraints are).
See my essay on this topic here:
How do I adjust the anchor point of a CALayer, when Auto Layout is being used?
See also the discussion of this problem in my book.
In your case, the simplest solution is probably to apply the transform to the view's layer rather than to the view itself. In other words, instead of saying this sort of thing:
self.v.transform = CGAffineTransformMakeScale(1.3, 1.3);
Say this sort of thing:
self.v.layer.transform = CATransform3DMakeScale(1.3, 1.3, 1);
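Adapted to the pinch handler from the question, that would be something like this sketch (reusing the questioner's _lastScale bookkeeping):
    - (IBAction)pinch:(UIPinchGestureRecognizer *)sender {
        if ([sender state] == UIGestureRecognizerStateBegan) {
            _lastScale = 1.0;
        }
        CGFloat scale = 1.0 - (_lastScale - [sender scale]);
        _lastScale = [sender scale];
        // Scaling the layer's transform sidesteps auto layout resetting the frame
        // from the top-left, so the view scales about its (centered) anchor point.
        self.imageView.layer.transform = CATransform3DScale(self.imageView.layer.transform, scale, scale, 1.0);
    }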

iOS - squish an image vertically

I have a round image that I want to "squish" vertically so that it looks more like a horizontal line, then expand it back to the original shape. I thought this would work by setting the layer's anchor point to the center and then animating the frame via a UIView animation with the frame's height set to 1.
[self.imageToSquish.layer setAnchorPoint:CGPointMake(0.5, 0.5)];
CGRect newFrame = CGRectMake(self.imageToSquish.frame.origin.x, self.imageToSquish.frame.origin.y, self.imageToSquish.frame.size.width, 1);
[UIView animateWithDuration:3
                 animations:^{ self.imageToSquish.frame = newFrame; }
                 completion:nil];
But the image shrinks toward the top instead of around the center.
You’re giving it a frame that has its origin—at the top left—in the same position as it started. You probably want to do something more like this, adding half the image’s height:
CGRect newFrame = CGRectMake(self.imageToSquish.frame.origin.x, self.imageToSquish.frame.origin.y + self.imageToSquish.frame.size.height / 2, self.imageToSquish.frame.size.width, 1);
Alternatively—and more efficiently—you could set the image view’s transform property to, say, CGAffineTransformMakeScale(1, 0.01) instead of messing with its frame. That’ll be centered on the middle of the image, and you can easily undo it by setting the transform to CGAffineTransformIdentity.
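For example, a minimal sketch of the transform approach, squishing and then restoring the image:
    // Squish the image toward its center; the default anchor point (0.5, 0.5)
    // means the scale is applied symmetrically around the middle.
    [UIView animateWithDuration:3
                     animations:^{
                         self.imageToSquish.transform = CGAffineTransformMakeScale(1.0, 0.01);
                     }
                     completion:^(BOOL finished) {
                         // Expand back to the original shape.
                         [UIView animateWithDuration:3
                                          animations:^{
                                              self.imageToSquish.transform = CGAffineTransformIdentity;
                                          }];
                     }];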

How to compose Core Animation CATransform3D matrices to animate simultaneous translation and scaling

I want to simultaneously scale and translate a CALayer from one CGRect (a small one, from a button) to another (a bigger, centered one, for a view). Basically, the idea is that the user touches a button and, from the button, a CALayer reveals itself, translating and scaling up to end up centered on the screen. Then the CALayer (through another button) shrinks back to the position and size of the button.
I'm animating this through CATransform3D matrices. But the CALayer is actually the backing layer for a UIView (because I also need responder functionality). And while applying my scale or translation transforms separately works fine, the concatenation of both (translation followed by scaling) offsets the layer's position so that it doesn't align with the button when it shrinks.
My guess is that this is because the CALayer anchor point is in its center by default. The transform applies translation first, moving the 'big' CALayer to align with the button at the upper left corner of their frames. Then, when scaling takes place, since the CALayer anchor point is in the center, all directions scale down towards it. At this point, my layer is the button's size (what I want), but the position is offset (because all points shrank towards the layer center).
Makes sense?
So I'm trying to figure out whether instead of concatenating translation + scale, I need to:
translate
change anchor point to upper-left.
scale.
Or whether I should be able to come up with some factor or constant to incorporate into the translation matrix's values, so that it translates to a position offset by exactly what the subsequent scaling will in turn offset, and then the final position would be right.
Any thoughts?
You should post your code. It is generally much easier for us to help you when we can look at your code.
Anyway, this works for me:
- (IBAction)showZoomView:(id)sender {
    [UIView animateWithDuration:.5 animations:^{
        self.zoomView.layer.transform = CATransform3DIdentity;
    }];
}

- (IBAction)hideZoomView:(id)sender {
    CGPoint buttonCenter = self.hideButton.center;
    CGPoint zoomViewCenter = self.zoomView.center;
    CATransform3D transform = CATransform3DIdentity;
    transform = CATransform3DTranslate(transform, buttonCenter.x - zoomViewCenter.x, buttonCenter.y - zoomViewCenter.y, 0);
    transform = CATransform3DScale(transform, .001, .001, 1);
    [UIView animateWithDuration:.5 animations:^{
        self.zoomView.layer.transform = transform;
    }];
}
In my test case, self.hideButton and self.zoomView have the same superview.
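Why this works: the translate component moves the zoom view's center onto the button's center first, and the scale then shrinks everything toward the layer's default centered anchor point, so the collapsed layer stays aligned with the button, with no anchor-point juggling needed. For completeness, a hypothetical initial setup (my assumption, not part of the original answer) so the zoom view starts collapsed on the button:
    // Hypothetical setup: collapse the zoom view onto the button up front,
    // so showZoomView: can animate it back out to CATransform3DIdentity.
    // (Frames are assumed final here; real code might defer this to
    // viewDidLayoutSubviews.)
    - (void)viewDidLoad {
        [super viewDidLoad];
        CGPoint buttonCenter = self.hideButton.center;
        CGPoint zoomViewCenter = self.zoomView.center;
        CATransform3D transform = CATransform3DIdentity;
        transform = CATransform3DTranslate(transform, buttonCenter.x - zoomViewCenter.x, buttonCenter.y - zoomViewCenter.y, 0);
        transform = CATransform3DScale(transform, .001, .001, 1);
        self.zoomView.layer.transform = transform;
    }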
