I'm working on an app that lets the user resize and rotate a photo using UIGestureRecognizers. I have this code which adjusts the anchorPoint based on where the user is applying touches (to make it look like they're scaling the image at the point where their fingers actually are):
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
{
    UIView *gestureRecognizerView = gestureRecognizer.view;
    CGPoint locationInView = [gestureRecognizer locationInView:gestureRecognizerView];
    CGPoint locationInSuperview = [gestureRecognizer locationInView:gestureRecognizerView.superview];

    gestureRecognizerView.layer.anchorPoint = CGPointMake(locationInView.x / gestureRecognizerView.bounds.size.width,
                                                          locationInView.y / gestureRecognizerView.bounds.size.height);
    gestureRecognizerView.center = locationInSuperview;
}
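For context, a helper like this is typically called when the gesture begins; below is a minimal sketch of a pinch handler that might use it (the handler name and the scaling code are illustrative, not taken from my project):

- (void)handlePinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    // Move the anchor point under the fingers once, when the gesture starts.
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    }
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
        gestureRecognizer.state == UIGestureRecognizerStateChanged) {
        // Apply the incremental pinch scale and reset it so the next callback is relative.
        CGAffineTransform transform = gestureRecognizer.view.transform;
        gestureRecognizer.view.transform = CGAffineTransformScale(transform,
                                                                  gestureRecognizer.scale,
                                                                  gestureRecognizer.scale);
        gestureRecognizer.scale = 1.0;
    }
}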
Later on, I simply want to calculate the origin from the center and bounds with this code:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
CGPoint origin = CGPointMake(view.center.x - (transformedBounds.size.width * view.layer.anchorPoint.x),
                             view.center.y - (transformedBounds.size.height * view.layer.anchorPoint.y));
And it's coming out incorrectly (I'm comparing it against the frame value, which ironically is supposed to be invalid once a transform is set, yet actually holds the correct value).
So, all in all, I'm wondering: what am I not taking into account here? How is the anchorPoint influencing the center in a way I'm not able to determine?
I think the problem is that the origin you are calculating is not really an origin, but rather an offset of the origin of your transformedBounds rect.
I haven't fully tested it, but if you do something like this you should get the correct frame:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
CGSize originOffset = CGSizeMake(
    view.center.x - (transformedBounds.size.width * view.layer.anchorPoint.x),
    view.center.y - (transformedBounds.size.height * view.layer.anchorPoint.y));
transformedBounds.origin.x += originOffset.width;
transformedBounds.origin.y += originOffset.height;
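If you need it in more than one place, the same steps could be wrapped in a small helper; this is only a sketch (the method name transformedFrameForView: is mine, and as noted, I haven't fully tested it):

// Sketch of a helper collecting the steps above; not fully tested.
- (CGRect)transformedFrameForView:(UIView *)view
{
    CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);

    // Offset the transformed rect by where the center/anchor point sits in the superview.
    CGSize originOffset = CGSizeMake(
        view.center.x - (transformedBounds.size.width * view.layer.anchorPoint.x),
        view.center.y - (transformedBounds.size.height * view.layer.anchorPoint.y));

    transformedBounds.origin.x += originOffset.width;
    transformedBounds.origin.y += originOffset.height;
    return transformedBounds;
}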
I have been working on an iOS app that displays indoor blueprints. You should be able to switch between floors, and each floor image is controlled by gesture recognizers to handle pan, rotate and scale.
I have been using this example for the gesture recognisers: https://github.com/GreenvilleCocoa/UIGestures/blob/master/UIGestures/RPSimultaneousViewController.m
So now to the problem. Whenever the user switches floors, I want to keep the transformation of the image as well as the corresponding center lat/lng. However, the new image can have a different rotation offset and aspect ratio.
I have been able to update the image's frame with the new size, update the transform with the new rotation offset, and verify both. It is when I try to calculate the new center point that I can't get it to work. The following code is how I currently do it, and it works as long as the view is not rotated:
-(void)changeFromFloor:(int)oldFloorNr toFloor:(int)newFloorNr
{
    CGPoint centerPoint = CGPointMake(self.frame.size.width / 2, self.frame.size.height / 2);

    // This is the old, non-transformed center point.
    CGPoint oldCenterOnImage = [self.layer convertPoint:centerPoint toLayer:self.mapOverlayView.layer]; // Actual non-transformed point

    // This point is verified to be the corresponding non-transformed center point
    CGPoint newCenterOnImage = [self calculateNewCenterFor:oldCenterOnImage fromFloor:oldFloorNr toFloor:newFloorNr];

    // Change image: sets a new image and changes the frame of mapOverlayView
    [self changeImageFromFloor:oldFloorNr toFloor:newFloorNr];

    // Adjust transformed rotation on the map if the new map has a different rotation
    [self adjustRotationFromFloorNr:oldFloorNr toFloorNr:newFloorNr];

    CGPoint centerOfMapOverlay = CGPointMake((self.mapOverlayView.frame.size.width / 2),
                                             (self.mapOverlayView.frame.size.height / 2));
    CGPoint newCenterOnImageTransformed = CGPointApplyAffineTransform(newCenterOnImage, self.mapOverlayView.transform);

    CGFloat newCenterX = centerPoint.x + centerOfMapOverlay.x - newCenterOnImageTransformed.x;
    CGFloat newCenterY = centerPoint.y + centerOfMapOverlay.y - newCenterOnImageTransformed.y;

    // This only works without any rotation
    self.mapOverlayView.center = CGPointMake(newCenterX, newCenterY);
}
Any idea where I go wrong? I have been working on this problem for some days now and I can't seem to figure it out.
Please let me know if I need to add something or if something is unclear.
Thanks!
Code added after help was given:
CGPoint centerOfMapOverlay = CGPointMake(
    self.mapOverlayView.bounds.size.width / 2,
    self.mapOverlayView.bounds.size.height / 2
);
centerOfMapOverlay = CGPointApplyAffineTransform(
    centerOfMapOverlay,
    self.mapOverlayView.transform
);
If you change the transform of a view, then its frame property becomes undefined. You should instead use the center property to change the view's position and bounds.size to change the view's size.
I'm making a photo hunt style app. I've got a number of X-rays, and I need to set specific areas of the UIImage to process touch events as correct and others as incorrect.
I understand that I can use the code below to get the tap location in the image view, but how do I declare an area on the image view as correct and compare it to the tap location value?
CGPoint tapLocation = [gesture locationInView:self.imagePlateA];
Any help much appreciated!
So you have to programmatically create "regions" and, once you have that point, test whether or not it falls within one of them. For example:
// Get the tap location
CGPoint tapLocation = [gesture locationInView:self.imagePlateA];

if ([self checkIfTap:tapLocation inRegionWithCenter:CGPointMake(someX, someY) radius:radius]) {
    // YAY, WE'RE WITHIN THE BOUNDS OF A CIRCLE AT POINT (someX, someY)
    // THAT HAS A RADIUS OF radius
}
and the checkIfTap:inRegionWithCenter:radius: method can be defined like this:
- (BOOL)checkIfTap:(CGPoint)tapLocation inRegionWithCenter:(CGPoint)center radius:(CGFloat)radius {
    CGFloat dx = tapLocation.x - center.x;
    CGFloat dy = tapLocation.y - center.y;

    // Pythagorean theorem
    if (sqrt(dx * dx + dy * dy) < radius) {
        return YES;
    } else {
        return NO;
    }
}
If the correct locations on the image are CGRects rather than points, you could use CGRectContainsPoint(), as sketched below.
CGGeometry Reference
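A minimal sketch of the rect-based variant (the region values here are made up purely for illustration):

// Hypothetical "correct" area expressed as a CGRect in the image view's coordinates.
CGRect correctRegion = CGRectMake(120.0, 80.0, 60.0, 40.0);

CGPoint tapLocation = [gesture locationInView:self.imagePlateA];
if (CGRectContainsPoint(correctRegion, tapLocation)) {
    // The tap landed inside the rectangular "correct" area.
}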
I'm trying to zoom inside a UIScrollView by specifying a rect in the scroll view's coordinates. However, it's not working as expected, and I think it's because of the zoom scale, or maybe I'm missing a transformation. The scroll view I'm trying to zoom is the one from Apple's PhotoScroller example -- ImageScrollView. I also copied the code that generates a frame to zoom to from Apple's example:
- (CGRect)zoomRectForScrollView:(UIScrollView *)scrollView withScale:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;

    // The zoom rect is in the content view's coordinates.
    // At a zoom scale of 1.0, it would be the size of the
    // imageScrollView's bounds.
    // As the zoom scale decreases, so more content is visible,
    // the size of the rect grows.
    zoomRect.size.height = scrollView.frame.size.height / scale;
    zoomRect.size.width = scrollView.frame.size.width / scale;

    // Choose an origin so as to get the right center.
    zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
    zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);

    return zoomRect;
}
Now the code to actually zoom is the following:
CGPoint scrollRectCenter = CGPointMake(rect.origin.x + (rect.size.width / 2),
                                       rect.origin.y + (rect.size.height / 2));

CGFloat newZoomScale = self.imageScrollView.zoomScale * 1.3f;
newZoomScale = MIN(newZoomScale, self.imageScrollView.maximumZoomScale);

CGRect zoomToRect = [self zoomRectForScrollView:self.imageScrollView withScale:newZoomScale withCenter:scrollRectCenter];
[self.imageScrollView zoomToRect:zoomToRect animated:YES];
How can I zoom to a rect, taking into consideration the zoomed image view's zoom scale, so that it fits correctly?
What I'm trying to achieve is the effect of the Photos app, in which the crop grid is moved and the scroll view zooms to that rect.
Does anybody know a link or code example to achieve a similar effect to the Photos app? Thanks a lot.
Let me guess... you have implemented the - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale delegate method.
Your problem occurs because when this method is called, the scroll view's zoom scale is reset to 1. I don't know why, don't ask me.
You can fix it in two ways:
1) You save the scale into a variable: in - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale you set your variable to the reported scale and do the calculations with your variable (see the sketch below).
2) You don't implement the method at all; unless you have multiple views in your scroll view that each zoom individually, it isn't worth it (after all, you can access the zoom scale with scrollView.zoomScale).
If you don't implement the method, disregard this answer :)
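A minimal sketch of the first option, assuming a stored property on the controller that owns the scroll view (the property name lastZoomScale is made up):

// Hypothetical stored scale, declared on the controller that owns the scroll view.
@property (nonatomic, assign) CGFloat lastZoomScale;

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(float)scale
{
    // Remember the scale reported by the delegate instead of reading
    // scrollView.zoomScale afterwards.
    self.lastZoomScale = scale;
}

// Later, when computing the rect to zoom to, use the stored value:
CGFloat newZoomScale = self.lastZoomScale * 1.3f;
newZoomScale = MIN(newZoomScale, self.imageScrollView.maximumZoomScale);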
I have an image view with position (x: 138, y: 107), which isn't in the center of the screen. Now I want to calculate the angle between these points and the horizontal line, but I don't know how to do this.
Can anyone tell me more about this?
You can do something like this, where the start and end points are the image positions.
Example:
CGPoint endPoint = CGPointMake(50, 100);
CGPoint startPoint = CGPointMake(100, 100);

// atan2 takes (dy, dx); converting from radians to degrees gives the angle of
// the line from startPoint to endPoint measured against the horizontal.
float angleVal = atan2(endPoint.y - startPoint.y, endPoint.x - startPoint.x) * 180.0 / M_PI;
I've been working on a maps replacement for quite a while now. The whole thing works with a UIScrollView backed by a CATiledLayer.
To rotate my map, I rotate the layer itself (using CATransform3DMakeRotation). Works pretty well so far =)
But if I ever call the setZoomScale method, the CATransform3D that gets submitted to my layer resets the rotation to 0.
My question is: is there any way to set the zoom scale of my scroll view without losing the applied rotation?
The same problem exists for the pinch gestures.
// Additional info
To rotate around the current position, I have to edit the anchor point. Maybe this is a problem for the scaling, too.
- (void)correctLayerPosition {
    CGPoint position = rootView.layer.position;
    CGPoint anchorPoint = rootView.layer.anchorPoint;
    CGRect bounds = rootView.bounds;

    // 0.5, 0.5 is the default anchorPoint; calculate the difference
    // and multiply by the bounds of the view
    position.x = (0.5 * bounds.size.width) + (anchorPoint.x - 0.5) * bounds.size.width;
    position.y = (0.5 * bounds.size.height) + (anchorPoint.y - 0.5) * bounds.size.height;

    rootView.layer.position = position;
}

- (void)onFinishedUpdateLocation:(CLLocation *)newLocation {
    if (stayOnCurrentLocation) {
        [self scrollToCurrentPosition];
    }

    if (rotationEnabled) {
        CGPoint anchorPoint = [currentConfig layerPointForLocation:newLocation];
        anchorPoint.x = anchorPoint.x / rootView.bounds.size.width;
        anchorPoint.y = anchorPoint.y / rootView.bounds.size.height;
        rootView.layer.anchorPoint = anchorPoint;
        [self correctLayerPosition];
    }
}
You can implement the scrollViewDidZoom: delegate method and concatenate the two transforms to achieve the desired effect:
- (void)scrollViewDidZoom:(UIScrollView *)scrollView
{
    // Take the scale transform the scroll view just applied to the content
    // view's layer and re-apply the rotation on top of it.
    CATransform3D scale = contentView.layer.transform;
    CATransform3D rotation = CATransform3DMakeRotation(M_PI_4, 0, 0, 1);
    contentView.layer.transform = CATransform3DConcat(rotation, scale);
}
EDIT
I've got a simpler idea! How about adding another view to the hierarchy with the rotation transform attached? Here's the proposed hierarchy:
ScrollView
    ContentView - the one returned by viewForZoomingInScrollView:
        RotationView - the one with the rotation transform
            MapView - the one with all the tiles
I don't think performance should be a concern here, and it's worth trying; a rough sketch of the setup follows below.
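A rough sketch of how that hierarchy might be assembled (all names, frames and the rotation angle here are illustrative, not taken from the original project):

// Illustrative setup of the proposed hierarchy; names, frames and the angle are made up.
UIView *contentView = [[UIView alloc] initWithFrame:scrollView.bounds];       // returned by viewForZoomingInScrollView:
UIView *rotationView = [[UIView alloc] initWithFrame:contentView.bounds];     // only the rotation lives here
rotationView.transform = CGAffineTransformMakeRotation(M_PI_4);

UIView *mapView = [[MyTiledMapView alloc] initWithFrame:rotationView.bounds]; // hypothetical CATiledLayer-backed tile view

[rotationView addSubview:mapView];
[contentView addSubview:rotationView];
[scrollView addSubview:contentView];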