Determining the center of a zoomed and panned image in a UIScrollView - ios

I am building a photo app for iPhone which allows the user to take a photo with the camera or grab one from the Camera Roll, then pan and zoom this image as needed in a UIScrollView. The user then taps a button to save the image. I am having trouble with the key method that returns the exact center of the visible area of an imageview embedded in the scrollview. I need this method to allow for variations in the dimensions of the device screen (i.e., iPhone 4 vs 5), as well as for variations in the size, aspect ratio and zoom scale of the source image.
As an example, I need to get this to work for the following:
iPhone 5 with screen dimensions that are 320 X 568
Scrollview frame size of 320 X 568
An image with dimensions 380 width and 284 height
Zoom scale of 3.0
Alternatively, I also need for it to work for this:
iPhone 4 with screen dimensions of 320 X 480
Scrollview frame size of 320 X 480
An image with dimensions 640 width and 1048 height
Zoom scale of 1.3
The following is my current code, which tries to account for variations in the screen's dimensions and the image's dimensions, but it does not work for both the iPhone 4 and 5, or for all types of images, such as those in portrait or landscape format. Is there a simpler way to get the center point of the visible portion of a scroll view? I need a clearer understanding of how to interpret and manipulate a view's properties, such as bounds.origin and bounds.size, and how to use the scroll view's contentSize and zoomScale.
I have looked at various questions that are similar but none of these seem to account adequately for all variations in device or image aspect ratio or size. Any help would be greatly appreciated!
Similar questions
How can I determine the area currently visible in a scrollview and determine the center?
Getting the right coordinates of the visible part of a UIImage inside of UIScrollView
- (CGPoint)centerOfVisibleFrame:(UIImage *)image inScrollView:(UIScrollView *)scrollView
{
CGPoint frameCenter;
CGFloat zoomScale = scrollView.zoomScale;
// First determine the dimensions of the device
CGSize deviceFrameSize = [UIScreen mainScreen].bounds.size;
// Need to determine the scale factor for adjusting the image to full width/full height
CGFloat imageDeviceAspectFitScale;
// Compare the aspect ratio (height / width) of the device to the aspect ratio of the image
CGFloat deviceAspectRatio = deviceFrameSize.height / deviceFrameSize.width;
CGFloat imageAspectRatio = image.size.height / image.size.width;
// If the device's aspect ratio is greater than the image aspect ratio
if (deviceAspectRatio > imageAspectRatio)
{
// Set the imageDeviceAspectFitScale for full width
imageDeviceAspectFitScale = image.size.width / deviceFrameSize.width;
}
// Otherwise the image's aspect ratio is greater than the device aspect ratio
else
{
// Set the imageDeviceAspectFitScale for full height
imageDeviceAspectFitScale = image.size.height / deviceFrameSize.height;
}
// Create the frame for the image at full width or full height
CGSize imageAspectFitSize;
imageAspectFitSize.width = image.size.width / imageDeviceAspectFitScale;
imageAspectFitSize.height = image.size.height / imageDeviceAspectFitScale;
// Calculate the vertical and horizontal offset to adjust the coordinates of the
// image center to account for greater device height or device width
CGFloat verticalOffset = deviceFrameSize.height - imageAspectFitSize.height;
CGFloat horizontalOffset = deviceFrameSize.width - imageAspectFitSize.width;
if (self.debug) NSLog(#"verticalOffset = %f horizontalOffset = %f", verticalOffset, horizontalOffset);
if (self.debug) NSLog(#"image.size.width: %f image.size.height: %f", image.size.width, image.size.height);
if (self.debug) NSLog(#"scrollView.frame.size w: %f h: %f", scrollView.frame.size.width, scrollView.frame.size.height);
if (self.debug) NSLog(#"scrollView.bounds.size.width: %f scrollView.bounds.size.height: %f",
scrollView.bounds.size.width, scrollView.bounds.size.height);
if (self.debug) NSLog(#"scrollView.contentSize w=%f h=%f", scrollView.contentSize.width, scrollView.contentSize.height);
// imageRect represents the coordinate space for the image, adjusted for zoom scale
CGRect imageRect;
// First use the visible frame's origin to determine the top left corner of the visible rectangle
imageRect.origin.x = scrollView.contentOffset.x;
imageRect.origin.y = scrollView.contentOffset.y;
if (self.debug) NSLog(#"imageRect.origin x = %f y = %f", imageRect.origin.x, imageRect.origin.y);
// Adjust the image rect for zoom - Multiply by zoom scale
imageRect.size.width = image.size.width * zoomScale;
imageRect.size.height = image.size.height * zoomScale;
if (self.debug) NSLog(#"Zoomed imageRect.size width = %f height = %f", imageRect.size.width, imageRect.size.height);
// Then scale the image down to fit into the device frame
// Divide by the image device aspect fit scale
imageRect.size.width = imageRect.size.width / imageDeviceAspectFitScale;
imageRect.size.height = imageRect.size.height / imageDeviceAspectFitScale;
if (self.debug) NSLog(#"CVF Aspect fit imageRect.size width = %f height = %f", imageRect.size.width, imageRect.size.height);
// Then calculate the frame center by using the x and y dimensions of the DEVICE frame
frameCenter.x = imageRect.origin.x + (deviceFrameSize.width / 2);
frameCenter.y = imageRect.origin.y + (deviceFrameSize.height / 2);
// Scale back to original image dimensions from zoom
frameCenter.x = frameCenter.x / zoomScale;
frameCenter.y = frameCenter.y / zoomScale;
if (self.debug) NSLog(#"frameCenter.x = %f frameCenter.y = %f", frameCenter.x, frameCenter.y);
// Scale back up for the aspect fit scale
frameCenter.x = frameCenter.x * imageDeviceAspectFitScale;
frameCenter.y = frameCenter.y * imageDeviceAspectFitScale;
// Correct the coordinates for horizontal and vertical offset
frameCenter.x = frameCenter.x - (horizontalOffset);
frameCenter.y = frameCenter.y - (verticalOffset);
if (self.debug) NSLog(#"CVF frameCenter.x = %f frameCenter.y = %f", frameCenter.x, frameCenter.y);
return frameCenter;
}

Basically, this gives you the visible rect of your scroll view's content, expressed in your image view's coordinate system:
CGRect visibleRect = [yourScrollView convertRect:yourScrollView.bounds toView:yourImageview];
or
CGRect visibleRect;
visibleRect.origin = scrollView.contentOffset;
visibleRect.size = scrollView.frame.size;
Please have a look at this question and its answers for more details: Getting the visible rect of a UIScrollView's content.
As long as you know that rect, you can easily calculate the center point.
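For example, a minimal sketch of that calculation (reusing the yourScrollView and yourImageview names from above, and assuming the image view is the scroll view's zooming view):
CGRect visibleRect = [yourScrollView convertRect:yourScrollView.bounds toView:yourImageview];
// The rect is expressed in the image view's own coordinate space, so the zoom scale
// is already accounted for by the conversion.
CGPoint visibleCenter = CGPointMake(CGRectGetMidX(visibleRect), CGRectGetMidY(visibleRect));
// If the image view's bounds match the image's size, visibleCenter is also the
// center point in image coordinates.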

Related

Load an image in UIImageView and resize the container - swift

I have an S3 link from which I need to load an image into a UIImageView. I don't have the dimensions of the image. I have defined mainImageView (with leading and trailing constraints) in the storyboard.
In the code, I am loading the image using:
mainImageView.setImageWith(URL(string: ("https:" + (content?.imagePath)!)), placeholderImage: nil)
Do we have any way of resizing the container once the image loads? I want the container to take on the dimensions (height and width) of the image. I heard that Android has something called wrap_content to achieve this; however, I am unable to find an equivalent in iOS.
If you are using Auto Layout, you don't actually have to do much, as UIImageView has an intrinsic content size that makes it take the width and height of its image. In your .xib or .storyboard you need to position the image view so that it can resolve its position (horizontal and vertical). For its size you can provide a default image (otherwise Auto Layout will show an error).
When you change the image at runtime, the image view will take on the size of the new image.
You can achieve it using:
imgView.frame = [self frameForImage:self.image inImageViewAspectFit:imgView];
function implementation:
-(CGRect)frameForImage:(UIImage*)image inImageViewAspectFit:(UIImageView*)imageView
{
float imageRatio = image.size.width / image.size.height;
float viewRatio = imageView.frame.size.width / imageView.frame.size.height;
if(imageRatio < viewRatio) {
float scale = imageView.frame.size.height / image.size.height;
float width = scale * image.size.width;
float topLeftX = (imageView.frame.size.width - width) * 0.5;
return CGRectMake(topLeftX, 0, width, imageView.frame.size.height);
} else {
float scale = imageView.frame.size.width / image.size.width;
float height = scale * image.size.height;
float topLeftY = (imageView.frame.size.height - height) * 0.5;
return CGRectMake(0, topLeftY, imageView.frame.size.width, height);
}
}
You can resize the image by modifying its frame to the image's size:
if let imageSize = mainImageView.image?.size {
    mainImageView.frame = CGRect(origin: mainImageView.frame.origin, size: imageSize)
}
If you are using Auto Layout, you need to update the height and width constraints accordingly, or let the layout do its job.

Zoom a rotated image inside scroll view to fit (fill) frame of overlay rect

Through this question and answer I've now got a working means of detecting when an arbitrarily rotated image isn't completely outside a cropping rect.
The next step is to figure out how to correctly adjust its containing scroll view's zoom to ensure that there are no empty spaces inside the cropping rect. To clarify, I want to enlarge (zoom in on) the image; the crop rect should remain un-transformed.
The layout hierarchy looks like this:
containing UIScrollView
UIImageView (this gets arbitrarily rotated)
crop rect overlay view
... where the UIImageView can also be zoomed and panned inside the scrollView.
There are 4 gesture events that need to be accounted for:
1. Pan gesture (done): accomplished by detecting when it's been panned incorrectly and resetting the contentOffset.
2. Rotation (CGAffineTransform)
3. Scroll view zoom
4. Adjustment of the cropping rect overlay frame
As far as I can tell, I should be able to use the same logic for 2, 3, and 4 to adjust the zoomScale of the scroll view to make the image fit properly.
How do I properly calculate the zoom ratio necessary to make the rotated image fit perfectly inside the crop rect?
To better illustrate what I'm trying to accomplish, here's an example of the incorrect size:
I need to calculate the zoom ratio necessary to make it look like this:
Here's the code I've got so far using Oluseyi's solution below. It works when the rotation angle is minor (e.g. less than 1 radian), but anything over that and it goes really wonky.
CGRect visibleRect = [_scrollView convertRect:_scrollView.bounds toView:_imageView];
CGRect cropRect = _cropRectView.frame;
CGFloat rotationAngle = fabs(self.rotationAngle);
CGFloat a = visibleRect.size.height * sinf(rotationAngle);
CGFloat b = visibleRect.size.width * cosf(rotationAngle);
CGFloat c = visibleRect.size.height * cosf(rotationAngle);
CGFloat d = visibleRect.size.width * sinf(rotationAngle);
CGFloat zoomDiff = MAX(cropRect.size.width / (a + b), cropRect.size.height / (c + d));
CGFloat newZoomScale = (zoomDiff > 1) ? zoomDiff : 1.0 / zoomDiff;
[UIView animateWithDuration:0.2
delay:0.05
options:0
animations:^{
[self centerToCropRect:[self convertRect:cropRect toView:self.zoomingView]];
_scrollView.zoomScale = _scrollView.zoomScale * newZoomScale;
} completion:^(BOOL finished) {
if (![self rotatedView:_imageView containsViewCompletely:_cropRectView])
{
// Damn, it's still broken - this happens a lot
}
else
{
// Woo! Fixed
}
_didDetectBadRotation = NO;
}];
Note that I'm using Auto Layout, which makes frames and bounds goofy.
Assume your image rectangle (blue in the diagram) and crop rectangle (red) have the same aspect ratio and center. When rotated, the image rectangle now has a bounding rectangle (green) which is what you want your crop scaled to (effectively, by scaling down the image).
To scale effectively, you need to know the dimensions of the new bounding rectangle and use a scale factor that fits the crop rect into it. The dimensions of the bounding rectangle are rather obviously
(a + b) x (c + d)
Notice that each segment a, b, c, d is either the adjacent or opposite side of a right triangle formed by the bounding rect and the rotated image rect.
a = image_rect_height * sin(rotation_angle)
b = image_rect_width * cos(rotation_angle)
c = image_rect_width * sin(rotation_angle)
d = image_rect_height * cos(rotation_angle)
Your scale factor is simply
MAX(crop_rect_width / (a + b), crop_rect_height / (c + d))
Here's a reference diagram:
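Expressed in code, a minimal sketch of that scale factor (imageRect, cropRect and rotationAngle here are placeholder names, with the angle in radians):
CGFloat angle = fabs(rotationAngle);
// Bounding rectangle of the rotated image rect: (a + b) wide and (c + d) tall
CGFloat boundingWidth = imageRect.size.height * sin(angle) + imageRect.size.width * cos(angle);
CGFloat boundingHeight = imageRect.size.width * sin(angle) + imageRect.size.height * cos(angle);
// Scale factor that fits the crop rect into the rotated image's bounding rectangle
CGFloat scaleFactor = MAX(cropRect.size.width / boundingWidth, cropRect.size.height / boundingHeight);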
Fill frame of overlay rect:
For a square crop you need to know the new bounds of the rotated image that will fill the crop view.
Let's take a look at the reference diagram:
You need to find the altitude of a right triangle (image number 2). Both altitudes are equal.
CGFloat sinAlpha = sin(alpha);
CGFloat cosAlpha = cos(alpha);
CGFloat hypotenuse = /* calculate */;
CGFloat altitude = hypotenuse * sinAlpha * cosAlpha;
Then you need to calculate the new width for the rotated image and the desired scale factor as follows:
CGFloat newWidth = previousWidth + altitude * 2;
CGFloat scale = newWidth / previousWidth;
I have implemented this method here.
I will answer using sample code, but basically this problem becomes really easy if you think in the rotated view's coordinate system.
UIView* container = [[UIView alloc] initWithFrame:CGRectMake(80, 200, 100, 100)];
container.backgroundColor = [UIColor blueColor];
UIView* content2 = [[UIView alloc] initWithFrame:CGRectMake(-50, -50, 150, 150)];
content2.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.5];
[container addSubview:content2];
[self.view setBackgroundColor:[UIColor blackColor]];
[self.view addSubview:container];
[container.layer setSublayerTransform:CATransform3DMakeRotation(M_PI / 8.0, 0, 0, 1)];
//And now the calculations
CGRect containerFrameInContentCoordinates = [content2 convertRect:container.bounds fromView:container];
CGRect unionBounds = CGRectUnion(content2.bounds, containerFrameInContentCoordinates);
CGFloat midX = CGRectGetMidX(content2.bounds);
CGFloat midY = CGRectGetMidY(content2.bounds);
CGFloat scaleX1 = (-1 * CGRectGetMinX(unionBounds) + midX) / midX;
CGFloat scaleX2 = (CGRectGetMaxX(unionBounds) - midX) / midX;
CGFloat scaleY1 = (-1 * CGRectGetMinY(unionBounds) + midY) / midY;
CGFloat scaleY2 = (CGRectGetMaxY(unionBounds) - midY) / midY;
CGFloat scaleX = MAX(scaleX1, scaleX2);
CGFloat scaleY = MAX(scaleY1, scaleY2);
CGFloat scale = MAX(scaleX, scaleY);
content2.transform = CGAffineTransformScale(content2.transform, scale, scale);

Horizontal-only pinch zoom in a UIScrollView

I'm using a UIPinchGestureRecognizer to adjust the width (not the height) of a view in a UIScrollView. It works with the pinch gesture's scale property, but the contentOffset of the scrollview doesn't change, so the view always increases on the right. This looks a bit better if I scale the contentOffset along with the width, since then the view increases from the left-most side of the screen.
The problem is that the location of the pinch is ignored - so it always appears that a pinch is on the left side of the screen.
I need to somehow factor the location of the pinch into the contentOffset adjustment, so that the content under the pinch point stays in the same place.
Note: I cannot use built-in UIScrollView pinch-zoom gesture as I only want the zoom to be one dimension, horizontal. Also, I cannot use transforms on the UIView as I need to use the UIScrollView.
I was pinch zooming a graph, so the pinch adjusts the width constraint on the graph view.
Here is the pinch handler:
- (void) doPinch:(UIPinchGestureRecognizer*)pinch;
{
CGFloat width = self.graphWidthConstraint.constant;
CGFloat idealWidth = 1500;
CGFloat currentScale = width / idealWidth;
CGFloat scale = currentScale - (1.0 - pinch.scale);
CGFloat minScale = 0.5;
CGFloat maxScale = 3.0;
scale = MIN(scale, maxScale);
scale = MAX(scale, minScale);
CGPoint locationScrollview = [pinch locationInView:self.graphScrollView];
CGFloat pinchXNormalized = locationScrollview.x / width;
CGPoint locationView = [pinch locationInView:self.view];
// resize
CGFloat newWidth = idealWidth * scale;
self.graphWidthConstraint.constant = newWidth;
// set offset to position point under touch
CGFloat pinchXScaled = newWidth * pinchXNormalized;
CGFloat x = pinchXScaled - locationView.x;
self.graphScrollView.contentOffset = CGPointMake(x, 0);
pinch.scale = 1;
}
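For reference, a minimal sketch of how such a handler might be wired up; attaching the recognizer to the scroll view itself is an assumption, and the target view may differ in your setup:
// Hook the pinch handler above to a recognizer on the scroll view
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(doPinch:)];
[self.graphScrollView addGestureRecognizer:pinch];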

UIImageView pinch zoom and reposition to fit overlay

I am using a UIImagePickerController to grab photos from the camera or Camera Roll. When the user picks an image, I'm inserting the image into a UIImageView, which is nested in a UIScrollView to allow pinch/pan. I have an overlay above the image view which represents the area to which the image will be cropped (just like when UIImagePickerController's allowsEditing property is YES).
The Apple-provided allowsEditing capability also has the same problem I'm seeing with my code (which is why I tried to write it myself in the first place, and I need custom shapes in the overlay). The problem is that I can't seem to find a good way to allow the user to pan over ALL of the image. There are always portions of the image which can't be placed in the crop window. It's always content around the edges (maybe the outer 10%) of the image which cannot be panned into the crop window.
In the above photo, the brown area at the top and bottom are the scroll view's background color. The image view is sized to the image.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
// Calculate what height/width we'll need for our UIImageView.
CGSize screenSize = [UIScreen mainScreen].bounds.size;
float width = 0.0f, height = 0.0f;
if (image.size.width > image.size.height) {
width = screenSize.width;
height = image.size.height / (image.size.width/screenSize.width);
} else {
height = screenSize.height;
width = image.size.width / (image.size.height/screenSize.height);
}
if (width > screenSize.width) {
height /= (width/screenSize.width);
width = screenSize.width;
}
if (height > screenSize.height) {
width /= (height/screenSize.height);
height = screenSize.height;
}
// Update the image view to the size of the image and center it.
// Image view is a subview of the scroll view.
imageView.frame = CGRectMake((screenSize.width - width) / 2, (screenSize.height - height) / 2, width, height);
imageView.image = image;
// Setup our scrollview so we can scroll and pinch zoom the image!
imageScrollView.contentSize = CGSizeMake(screenSize.width, screenSize.height);
// Close the picker.
[[picker presentingViewController] dismissViewControllerAnimated:YES completion:NULL];
}
I've considered monitoring the scroll position and zoom level of the scroll view and disallowing any side of the image from passing into the crop "sweet spot". This seems like over-engineering, however.
Does anyone know of a way to accomplish this?
I'm a moron. What a difference a good night's sleep can make ;-) Hopefully this will help someone in the future.
Setting the correct scroll view contentSize and contentInset did the trick. The working code is below.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
// Change the frame of the image view so that it fits the image!
CGSize screenSize = [UIScreen mainScreen].bounds.size;
float width = 0.0f, height = 0.0f;
if (image.size.width > image.size.height) {
width = screenSize.width;
height = image.size.height / (image.size.width/screenSize.width);
} else {
height = screenSize.height;
width = image.size.width / (image.size.height/screenSize.height);
}
// Make sure the new height and width aren't bigger than the screen
if (width > screenSize.width) {
height /= (width/screenSize.width);
width = screenSize.width;
}
if (height > screenSize.height) {
width /= (height/screenSize.height);
height = screenSize.height;
}
CGRect overlayRect = cropOverlay.windowRect;
imageView.frame = CGRectMake((screenSize.width - width) / 2, (screenSize.height - height) / 2, width, height);
imageView.image = image;
// Setup our scrollview so we can scroll and pinch zoom the image!
imageScrollView.contentSize = imageView.frame.size;
imageScrollView.contentInset = UIEdgeInsetsMake(overlayRect.origin.y - imageView.frame.origin.y,
overlayRect.origin.x,
overlayRect.origin.y + imageView.frame.origin.y,
screenSize.width - (overlayRect.origin.x + overlayRect.size.width));
// Dismiss the camera's VC
[[picker presentingViewController] dismissViewControllerAnimated:YES completion:NULL];
}
The scrollview and image view are set up like this:
imageScrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
imageScrollView.showsHorizontalScrollIndicator = NO;
imageScrollView.showsVerticalScrollIndicator = NO;
imageScrollView.backgroundColor = [UIColor blackColor];
imageScrollView.userInteractionEnabled = YES;
imageScrollView.delegate = self;
imageScrollView.minimumZoomScale = MINIMUM_SCALE;
imageScrollView.maximumZoomScale = MAXIMUM_SCALE;
[self.view addSubview:imageScrollView];
imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.backgroundColor = [UIColor grayColor]; // I had this set to gray so I could see if/when it didn't align properly in the scroll view. You'll likely want to change it to black
[imageScrollView addSubview:imageView];
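Since the scroll view has a delegate and zoom scales set, the delegate also has to return the view to be zoomed for pinch zooming to work; a minimal sketch, assuming the same imageView ivar as above:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    // Tell the scroll view which subview to scale when the user pinch-zooms
    return imageView;
}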
Edit 3-21-14: A newer, fancier, better implementation of the method that calculates where to place the image in the screen and scroll view. So what's better? This new implementation checks for any image set into the scroll view that is SMALLER in width or height than the overlay, and adjusts the frame of the image view so that it expands to be at least as wide or tall as the overlay rect, so you don't ever have to worry about your user selecting an image that isn't optimal for your overlay. Yay!
- (void)imagePickerController:(UIImagePickerController *)pickerUsed didFinishPickingMediaWithInfo:(NSDictionary *)info {
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
// Change the frame of the image view so that it fits the image!
CGSize screenSize = [UIScreen mainScreen].bounds.size;
float width = 0.0f, height = 0.0f;
if (image.size.width > image.size.height) {
width = screenSize.width;
height = image.size.height / (image.size.width/screenSize.width);
} else {
height = screenSize.height;
width = image.size.width / (image.size.height/screenSize.height);
}
CGRect overlayRect = cropOverlay.windowRect;
// We should check the the width and height are at least as big as our overlay window
if (width < overlayRect.size.width) {
float ratio = overlayRect.size.width / width;
width *= ratio;
height *= ratio;
}
if (height < overlayRect.size.height) {
float ratio = overlayRect.size.height / height;
height *= ratio;
width *= ratio;
}
CGRect imageViewFrame = CGRectMake((screenSize.width - width) / 2, (screenSize.height - height) / 2, width, height);
imageView.frame = imageViewFrame;
imageView.image = image;
// Setup our scrollview so we can scroll and pinch zoom the image!
imageScrollView.contentSize = imageView.frame.size;
imageScrollView.contentInset = UIEdgeInsetsMake(overlayRect.origin.y - imageView.frame.origin.y,
(imageViewFrame.origin.x * -1) + overlayRect.origin.x,
overlayRect.origin.y + imageView.frame.origin.y,
imageViewFrame.origin.x + (screenSize.width - (overlayRect.origin.x + overlayRect.size.width)));
// Calculate the REAL minimum zoom scale!
float minZoomScale = 1 - MIN(fabsf(fabsf(imageView.frame.size.width) - fabsf(overlayRect.size.width)) / imageView.frame.size.width,
fabsf(fabsf(imageView.frame.size.height) - fabsf(overlayRect.size.height)) / imageView.frame.size.height);
imageScrollView.minimumZoomScale = minZoomScale;
// Dismiss the camera's VC
[[picker presentingViewController] dismissViewControllerAnimated:YES completion:NULL];
}

Rotation changing UIImageView frame. How to avoid this?

I have two sliders - one for changing the image size and one for rotating the image. My image view is 60x60. The problem is that I rotate the image using CGAffineTransformMakeRotation, but when I try to resize it after that (say, from 60x60 to 65x65 using the slider), it acts weirdly - the frame of the image view changes to something like 80x2. How can I avoid this? Here is my code for the slider that resizes the image:
-(IBAction)imageSliderAction:(UISlider *)sender
{
NSUInteger value = sender.value;
float oldCenterX = logoImageView.center.x;
float oldCenterY = logoImageView.center.y;
newWidth = value;
newHeight = value;
CGRect frame = [logoImageView frame];
frame.size.width = newWidth;
frame.size.height = newHeight;
[logoImageView setFrame:frame];
logoImageView.center = CGPointMake(oldCenterX, oldCenterY);
}
And here is the code for my rotating slider:
-(IBAction)rotationSliderAction:(UISlider *)sender
{
NSUInteger angle = sender.value;
if (sender.value >= 1)
{
CGAffineTransform rotate = CGAffineTransformMakeRotation(angle / 180.0 * 3.14);
[logoImageView setTransform:rotate];
}
if (sender.value <= 0 )
{
CGAffineTransform rotate = CGAffineTransformMakeRotation( (360 + sender.value) / 180.0 * 3.14);
[logoImageView setTransform:rotate];
}
}
How can I keep the frame's width and height from changing automatically when rotating? Because after that I can't resize the image correctly.
From the UIView class reference (the documentation for the frame property):
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
If you want to change the size of a view that has a nontrivial transform, you should do so by changing its bounds property (the view's center will remain the same, so you won't need any extra logic to maintain its position):
[logoImageView setBounds:CGRectMake(0,0,sender.value, sender.value)];
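Applied to the resize slider above, a minimal sketch of the handler rewritten to use bounds (assuming the same logoImageView outlet):
-(IBAction)imageSliderAction:(UISlider *)sender
{
    // Changing bounds resizes the view without disturbing its center or its rotation transform
    [logoImageView setBounds:CGRectMake(0, 0, sender.value, sender.value)];
}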
