Essentially, I am trying to find a way to zoom and rotate an image at the same time in an app.
I found this code, which is meant to go in the touchesMoved: method:
// allTouches here is [event allTouches]; this assumes exactly two fingers are down.
UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];

// Angle of the line between the two fingers on the previous event...
CGPoint previousPoint1 = [touch1 previousLocationInView:nil];
CGPoint previousPoint2 = [touch2 previousLocationInView:nil];
CGFloat previousAngle = atan2(previousPoint2.y - previousPoint1.y, previousPoint2.x - previousPoint1.x);

// ...and on the current event.
CGPoint currentPoint1 = [touch1 locationInView:nil];
CGPoint currentPoint2 = [touch2 locationInView:nil];
CGFloat currentAngle = atan2(currentPoint2.y - currentPoint1.y, currentPoint2.x - currentPoint1.x);

// Rotate by the change in angle between the two events.
transform = CGAffineTransformRotate(transform, currentAngle - previousAngle);
self.view.transform = transform;
Now, that is only for rotating with two fingers, but I also need to be able to zoom at the same time with the same two fingers. I have tried everything, but I am just not sure what is wrong or how to go on from here.
The Maps application does something similar: you can zoom in on the map and rotate it at the same time, and that is what I am trying to accomplish on an image in my app.
So, what do I do from this point?
Thanks!
Add a UIPinchGestureRecognizer and a UIRotationGestureRecognizer to your view, set their delegates, and implement the delegate method below:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}
That should do it.
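If it helps, here is a minimal sketch of the full setup, assuming a view controller with an imageView property (the property name and handler names are illustrative, not from your code):

- (void)viewDidLoad {
    [super viewDidLoad];
    UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    UIRotationGestureRecognizer *rotate = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotate:)];
    pinch.delegate = self;
    rotate.delegate = self;
    [self.imageView addGestureRecognizer:pinch];
    [self.imageView addGestureRecognizer:rotate];
}

- (void)handlePinch:(UIPinchGestureRecognizer *)sender {
    // Apply the incremental scale, then reset it so the next callback is a delta.
    sender.view.transform = CGAffineTransformScale(sender.view.transform, sender.scale, sender.scale);
    sender.scale = 1.0;
}

- (void)handleRotate:(UIRotationGestureRecognizer *)sender {
    // Apply the incremental rotation, then reset it for the same reason.
    sender.view.transform = CGAffineTransformRotate(sender.view.transform, sender.rotation);
    sender.rotation = 0.0;
}

Because both recognizers run simultaneously and each applies only its delta, the two transforms compose, and the image zooms and rotates under the fingers at the same time.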
I'm using ACEDrawingView to draw within a view.
How would I detect the width and height of the drawing, so that I can crop around it, something like this:
Update: After @Duncan pointed me in the right direction, I was able to look through the source code and found the following:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // save all the touches in the path
    UITouch *touch = [touches anyObject];
    previousPoint2 = previousPoint1;
    previousPoint1 = [touch previousLocationInView:self];
    currentPoint = [touch locationInView:self];

    if ([self.currentTool isKindOfClass:[ACEDrawingPenTool class]]) {
        CGRect bounds = [(ACEDrawingPenTool *)self.currentTool addPathPreviousPreviousPoint:previousPoint2 withPreviousPoint:previousPoint1 withCurrentPoint:currentPoint];

        // Pad the dirty rect by the line width so the stroke edges are redrawn.
        CGRect drawBox = bounds;
        drawBox.origin.x -= self.lineWidth * 2.0;
        drawBox.origin.y -= self.lineWidth * 2.0;
        drawBox.size.width += self.lineWidth * 4.0;
        drawBox.size.height += self.lineWidth * 4.0;

        self.drawingBounds = bounds; // I added this property so I can extract the bounds and use them in my view controller
        [self setNeedsDisplayInRect:drawBox];
    }
    else if ([self.currentTool isKindOfClass:[ACEDrawingTextTool class]]) {
        [self resizeTextViewFrame:currentPoint];
    }
    else {
        [self.currentTool moveFromPoint:previousPoint1 toPoint:currentPoint];
        [self setNeedsDisplay];
    }
}
However, I get this when I test the bounds:
I'm going to keep trying to figure it out, but if anyone could help, that would be great!
Update 3: Using CGContextGetPathBoundingBox, I was finally able to achieve it.
Every time you get a touchesMoved:, record the location of the point you are now drawing. When you are all done, you have all the points. Now take the largest and smallest x values and the largest and smallest y values across all of those points: that's the bounding box of the drawing.
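A minimal sketch of that min/max pass, assuming you collected the points into an NSArray of NSValue-wrapped CGPoints (the method name is illustrative):

- (CGRect)boundingBoxOfPoints:(NSArray *)points {
    if (points.count == 0) return CGRectNull;
    CGPoint first = [points[0] CGPointValue];
    CGFloat minX = first.x, maxX = first.x, minY = first.y, maxY = first.y;
    for (NSValue *value in points) {
        CGPoint p = [value CGPointValue];
        minX = MIN(minX, p.x); maxX = MAX(maxX, p.x);
        minY = MIN(minY, p.y); maxY = MAX(maxY, p.y);
    }
    // The rect spanning the smallest and largest x and y values.
    return CGRectMake(minX, minY, maxX - minX, maxY - minY);
}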
Another approach (which you've already discovered) is to save the CGPath and then call CGContextGetPathBoundingBox. Basically that does exactly the same thing.
Note that a path has no thickness, whereas your stroke does. You will need to inset the bounding box negatively to allow for this (my screencast doesn't do that).
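For example, since a stroke of width w extends w/2 to either side of the path, a negative inset of half the line width is a reasonable starting point (pathBounds and lineWidth here are assumed names, not from your code):

// Grow the path's bounding box by half the stroke width on every side.
CGRect strokeBounds = CGRectInset(pathBounds, -lineWidth / 2.0, -lineWidth / 2.0);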
I'm not familiar with the ACEDrawingView class, but I can tell you how to do it with the iOS frameworks:
Create your path as a UIBezierPath.
Interrogate the bounds property of the path.
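For instance (an illustrative path, not ACEDrawingView's internals):

UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(20.0, 20.0)];
[path addLineToPoint:CGPointMake(120.0, 80.0)];
[path addQuadCurveToPoint:CGPointMake(40.0, 160.0) controlPoint:CGPointMake(200.0, 120.0)];

// The bounding box of the path itself; remember it ignores the stroke width.
CGRect box = path.bounds;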
I've been working on this code for quite a while now but it just feels like one step forward and two steps back. I'm hoping someone can help me.
I'm working with Sprite Kit, so I have a scene file that manages the rendering, the UI, and the touch controls. I have an SKNode that's functioning as the camera, like so:
_world = [[SKNode alloc] init];
[_world setName:@"world"];
[self addChild:_world];
I am using UIGestureRecognizers, so I add the ones I need like so:
_panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanFrom:)];
[[self view] addGestureRecognizer:_panRecognizer];
_pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
[[self view] addGestureRecognizer:_pinchRecognizer];
The panning is working okay, but not great. The pinching is the real problem. The idea for the pinching is to grab a point at the center of the screen, convert that point to the world node, and then move to it while zooming in. Here is the method for pinching:
- (void)handlePinch:(UIPinchGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateBegan) {
        _tempScale = [sender scale];
    }
    if (sender.state == UIGestureRecognizerStateChanged) {
        if ([sender scale] > _tempScale) {
            if (_world.xScale < 6) {
                //_world.xScale += 0.05;
                //_world.yScale += 0.05;
                //[_world setScale:[sender scale]];
                [_world setScale:_world.xScale += 0.05];
                CGPoint screenCenter = CGPointMake(_initialScreenSize.width / 2, _initialScreenSize.height / 2);
                CGPoint newWorldPoint = [self convertTouchPointToWorld:screenCenter];
                // crazy formula: why does this work?
                CGPoint alteredWorldCenter = CGPointMake((newWorldPoint.x * _world.xScale) * -1, (newWorldPoint.y * _world.yScale) * -1);
                // why does the duration have to be exactly 0.3 to work?
                SKAction *moveToCenter = [SKAction moveTo:alteredWorldCenter duration:0.3];
                [_world runAction:moveToCenter];
            }
        } else if ([sender scale] < _tempScale) {
            if (_world.xScale > 0.5 && _world.xScale > 0.3) {
                //_world.xScale -= 0.05;
                //_world.yScale -= 0.05;
                //[_world setScale:[sender scale]];
                [_world setScale:_world.xScale -= 0.05];
                CGPoint screenCenter = CGPointMake(_initialScreenSize.width / 2, _initialScreenSize.height / 2);
                CGPoint newWorldPoint = [self convertTouchPointToWorld:screenCenter];
                // crazy formula: why does this work?
                CGPoint alteredWorldCenter = CGPointMake((newWorldPoint.x * _world.xScale - _initialScreenSize.width) * -1, (newWorldPoint.y * _world.yScale - _initialScreenSize.height) * -1);
                SKAction *moveToCenter = [SKAction moveTo:alteredWorldCenter duration:0.3];
                [_world runAction:moveToCenter];
            }
        }
    }
    if (sender.state == UIGestureRecognizerStateEnded) {
        [_world removeAllActions];
    }
}
I've tried many iterations of this, but this exact code is what gets me closest to pinching on a point in the world. There are some problems, though.
The further the point is from the center, the worse it works; it still essentially zooms in on the very center of the world. After converting the center point to the world node, I still have to manipulate it again to get it centered properly (the formula I describe as crazy), and the formula has to differ between zooming in and zooming out. The duration of the move action has to be exactly 0.3 or it barely works at all; higher or lower and it doesn't zoom in on the center point. If I increment the zoom by more than a small amount, it moves crazy fast, and if I don't end the actions when the pinch ends, the screen jerks.
I don't understand why this works at all (it zooms smoothly in to the center point before the delay ends and the screen jerks), and I'm not sure what I'm doing wrong. Any help is much appreciated!
Take a look at my answer to a very similar question.
https://stackoverflow.com/a/21947549/3148272
The code I posted "anchors" the zoom at the location of the pinch gesture instead of the center of the screen, but that is easy to change as I tried it both ways.
As requested in the comments below, I am also adding my panning code to this answer.
Panning Code...
// instance variables of MyScene
SKNode *_mySkNode;
UIPanGestureRecognizer *_panGestureRecognizer;

- (void)didMoveToView:(SKView *)view
{
    _panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanFrom:)];
    [[self view] addGestureRecognizer:_panGestureRecognizer];
}

- (void)handlePanFrom:(UIPanGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        [recognizer setTranslation:CGPointZero inView:recognizer.view];
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [recognizer translationInView:recognizer.view];
        // Convert the UIKit translation into SpriteKit's coordinate direction.
        translation = CGPointMake(-translation.x, translation.y);
        _mySkNode.position = CGPointSubtract(_mySkNode.position, translation);
        // Reset so the next callback reports a delta from here.
        [recognizer setTranslation:CGPointZero inView:recognizer.view];
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // No code needed for panning.
    }
}
The following are the two helper functions that were used above. They are from the Ray Wenderlich book on Sprite Kit.
SKT_INLINE CGPoint CGPointAdd(CGPoint point1, CGPoint point2) {
    return CGPointMake(point1.x + point2.x, point1.y + point2.y);
}

SKT_INLINE CGPoint CGPointSubtract(CGPoint point1, CGPoint point2) {
    return CGPointMake(point1.x - point2.x, point1.y - point2.y);
}
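The linked answer has my full pinch code; as a rough companion to the panning code above, here is a hedged sketch of an anchored zoom using the same _mySkNode and the helper functions above (convertPointFromView: and convertPoint:fromNode: are standard SpriteKit API, but the handler name and the wiring are assumptions):

- (void)handlePinchFrom:(UIPinchGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        // Find the pinch location in scene space, then in the world node's space.
        CGPoint anchorInScene = [self convertPointFromView:[recognizer locationInView:recognizer.view]];
        CGPoint anchorInNode = [_mySkNode convertPoint:anchorInScene fromNode:self];

        // Scale the node, then shift it so the pinched point stays under the fingers.
        [_mySkNode setScale:_mySkNode.xScale * recognizer.scale];
        CGPoint anchorAfter = [self convertPoint:anchorInNode fromNode:_mySkNode];
        _mySkNode.position = CGPointAdd(_mySkNode.position, CGPointSubtract(anchorInScene, anchorAfter));

        // Reset so the next callback reports a delta.
        recognizer.scale = 1.0;
    }
}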
I currently have a map (tile map) within a layer that I would like to pan/zoom using the following code:
- (void)pinchGestureUpdated:(UIPinchGestureRecognizer *)recognizer {
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        _lastScale = [recognizer scale];
        CGPoint touchLocation1 = [recognizer locationOfTouch:0 inView:recognizer.view];
        CGPoint touchLocation2 = [recognizer locationOfTouch:1 inView:recognizer.view];
        CGPoint centerGL = [[CCDirector sharedDirector] convertToGL:ccpMidpoint(touchLocation1, touchLocation2)];
        _pinchCenter = [self convertToNodeSpace:centerGL];
    }
    else if ([recognizer state] == UIGestureRecognizerStateChanged) {
        // NSLog(@"%f", recognizer.scale);
        // Clamp the incremental scale so the cumulative scale stays within [kMinScale, kMaxScale].
        CGFloat newDeltaScale = 1 - (_lastScale - [recognizer scale]);
        newDeltaScale = MIN(newDeltaScale, kMaxScale / self.scale);
        newDeltaScale = MAX(newDeltaScale, kMinScale / self.scale);
        CGFloat newScale = self.scale * newDeltaScale;
        //self.scale = newScale;
        [self scale:newScale atCenter:_pinchCenter];
        _lastScale = [recognizer scale];
    }
}
- (void)scale:(CGFloat)newScale atCenter:(CGPoint)center {
    CGPoint oldCenterPoint = ccp(center.x * self.scale, center.y * self.scale);
    // Set the scale.
    self.scale = newScale;
    // Get the new center point.
    CGPoint newCenterPoint = ccp(center.x * self.scale, center.y * self.scale);
    // Then calculate the delta.
    CGPoint centerPointDelta = ccpSub(oldCenterPoint, newCenterPoint);
    // Now adjust your layer by the delta.
    self.position = ccpAdd(self.position, centerPointDelta);
}
My issue is that the zoom is not taking effect at the center of the pinch, even though I am trying to re-center at the same time I am zooming via the scale:atCenter: method. Is there any reason this might not be happening properly? Also, how do I convert the center location of the pinch into the coordinate system of my scene/layer?
Everything was actually fine in my approach. The problem I was having, though, was that the layer's anchor point was different from the one I had defined on the map, which introduced an offset during scaling. I had to set both anchor points to ccp(0,0).
The conversion from the screen coordinates of the pinch gesture's center to the layer is correct, and can be achieved with the following instructions when using UIKit gesture recognizers:
CGPoint centerGL = [[CCDirector sharedDirector] convertToGL: ccpMidpoint(touchLocation1, touchLocation2)];
_pinchCenter = [self convertToNodeSpace:centerGL];
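For example (a hedged sketch; _tileMap stands in for whatever CCTMXTiledMap instance the layer holds):

// Give the layer and the tile map the same anchor point so scaling
// does not introduce an offset between them.
self.anchorPoint = ccp(0, 0);
_tileMap.anchorPoint = ccp(0, 0);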
One note on state: UIGestureRecognizerState is a plain enum, not a bitfield, so comparing it with == (as you are doing) is correct.
The center location of your pinch is going to be in screen coordinates. To convert that into your own coordinate system, you need to figure out what portion of your scene/layer is shown on the device's screen at the time the gesture starts: that will be something like a rect from 10,10 to 200,200 in the screen's pixel grid. Then work out what 10,10 and 200,200 map to in the coordinate system of your own scene/layer. From there, you can derive a factor to apply to the screen coordinates of the pinch gesture's center, translating them into scene/layer coordinates.
What you're trying to do is tricky, because as you scale the scene/layer you are centering that scaling around a point that is not in the center of the view. If you look through Apple's sample code for one of the map-related apps, you can probably find examples of methods that do this kind of pinch zooming.
I hope this helps.
I have a UIImageView containing an image of an arrow. When the user taps the UIView, the arrow should point toward the tap while keeping its position; only its transform should change. I have implemented the following code, but it is not working as expected. I have added a screenshot: when I touch the point at the upper left, the arrow should point as shown, but that is not what happens.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    touchedPoint = [touch locationInView:touch.view];
    // rangle11 is presumably the bearing computed by pointPairToBearingDegrees: below
    imageViews.transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(rangle11));
    previousTouchedPoint = touchedPoint;
}
- (CGFloat)pointPairToBearingDegrees:(CGPoint)startingPoint secondPoint:(CGPoint)endingPoint
{
    // Translate so the starting point sits at the origin.
    CGPoint originPoint = CGPointMake(endingPoint.x - startingPoint.x, endingPoint.y - startingPoint.y);
    float bearingRadians = atan2f(originPoint.y, originPoint.x);  // get bearing in radians
    float bearingDegrees = bearingRadians * (180.0 / M_PI);       // convert to degrees
    bearingDegrees = (bearingDegrees > 0.0 ? bearingDegrees : (360.0 + bearingDegrees)); // correct discontinuity
    return bearingDegrees;
}
I assume you want the arrow image to point wherever you touch. I tried it, and this is what I could come up with. I put in an image view with an arrow pointing upwards (I haven't tried starting from any other position, but the log gives correct angles), and on touching different locations it rotates and points to the touched location. Hope it helps (tried some old math :-)).
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    touchedPoint = [touch locationInView:touch.view];
    CGFloat angle = [self getAngle:touchedPoint];
    imageView.transform = CGAffineTransformMakeRotation(angle);
}

- (CGFloat)getAngle:(CGPoint)touchedPoints
{
    CGFloat x1 = imageView.center.x;
    CGFloat y1 = imageView.center.y;
    CGFloat x2 = touchedPoints.x;
    CGFloat y2 = touchedPoints.y;
    CGFloat x3 = x1;
    CGFloat y3 = y2;
    CGFloat oppSide = sqrtf(((x2 - x3) * (x2 - x3)) + ((y2 - y3) * (y2 - y3)));
    CGFloat adjSide = sqrtf(((x1 - x3) * (x1 - x3)) + ((y1 - y3) * (y1 - y3)));
    CGFloat angle = atanf(oppSide / adjSide);

    // Quadrant identification
    if (x2 < imageView.center.x) {
        angle = 0 - angle;
    }
    if (y2 > imageView.center.y) {
        angle = M_PI / 2 + (M_PI / 2 - angle);
    }

    NSLog(@"Angle is %2f", angle * 180 / M_PI);
    return angle;
}
-anoop4real
Given what you told me, I think the problem is that you are not resetting your transform in touchesBegan. Try changing it to something like this and see if it works better:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    touchedPoint = [touch locationInView:touch.view];
    // Reset to identity before applying the new rotation.
    imageViews.transform = CGAffineTransformIdentity;
    imageViews.transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(rangle11));
    previousTouchedPoint = touchedPoint;
}
Do you need the line to "remove the discontinuity"? It seems atan2f() returns values between -π and +π; won't those work directly with CATransform3DMakeRotation()?
What you need is for the arrow to point at the last tapped point. To simplify and test, I have used a tap gesture (but it is similar with touchesBegan:withEvent:).
In the viewDidLoad method, I register the gesture:
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapped:)];
[self.view addGestureRecognizer:tapGesture];
[tapGesture release];
The method called on each tap:
- (void)tapped:(UITapGestureRecognizer *)gesture
{
    CGPoint imageCenter = mFlecheImageView.center;
    CGPoint tapPoint = [gesture locationInView:self.view];
    double deltaY = tapPoint.y - imageCenter.y;
    double deltaX = tapPoint.x - imageCenter.x;
    // See the note below about the + M_PI_2 offset.
    double angleInRadians = atan2(deltaY, deltaX) + M_PI_2;
    mFlecheImageView.transform = CGAffineTransformMakeRotation(angleInRadians);
}
One key is the + M_PI_2, because UIKit coordinates have their origin at the top-left corner (while in trigonometry we use a bottom-left origin).
I successfully implemented pinch-to-zoom of a view. However, the view doesn't position itself where I want it to. For the stackoverflowers with an iPad: I would like my view to be centered the way the iPad Photos.app does it. When you pinch and zoom on an album, the photos present themselves in an expanding view. That view is approximately centered, with the top right corner on one finger and the bottom left corner on the other. I mixed it with a pan recognizer, but that way the user always has to pinch, then pan to adjust.
Here is a graphic explanation; I could post a video of my app if that's unclear (no secret, I'm trying to reproduce the Photos.app of the iPad...).
So, for an initial position of the fingers, beginning the zoom:
This is the actual "zoomed" frame for now. The square is bigger, but its position is below the fingers.
Here is what I would like to have: same size, but a different origin.x and origin.y:
(sorry about my poor Photoshop skills ^^)
You can get the CGPoint of the midpoint between the two fingers with the following code in the handlePinchGesture: method:
CGPoint point = [sender locationInView:self];
My whole handlePinchGesture method is below.
/*
 instance variables:
 CGFloat lastScale;
 CGPoint lastPoint;
*/

- (void)handlePinchGesture:(UIPinchGestureRecognizer *)sender {
    if ([sender numberOfTouches] < 2)
        return;

    if (sender.state == UIGestureRecognizerStateBegan) {
        lastScale = 1.0;
        lastPoint = [sender locationInView:self];
    }

    // Scale
    CGFloat scale = 1.0 - (lastScale - sender.scale);
    [self.layer setAffineTransform:
        CGAffineTransformScale([self.layer affineTransform], scale, scale)];
    lastScale = sender.scale;

    // Translate: follow the midpoint of the two fingers.
    CGPoint point = [sender locationInView:self];
    [self.layer setAffineTransform:
        CGAffineTransformTranslate([self.layer affineTransform],
                                   point.x - lastPoint.x,
                                   point.y - lastPoint.y)];
    lastPoint = [sender locationInView:self];
}
Have a look at the Touches sample project. Specifically, these methods could help you:
// Scale and rotation transforms are applied relative to the layer's anchor point.
// This method moves a gesture recognizer's view's anchor point between the user's fingers.
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width,
                                              locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}
// Scale the piece by the current scale.
// Reset the gesture recognizer's scale to 1 after applying, so the next callback is a delta from the current scale.
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform],
                                                                    [gestureRecognizer scale],
                                                                    [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
    }
}
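For completeness, a hedged usage sketch (assuming piece is the view you want to scale and this runs in its view controller): attach the recognizer so scalePiece: fires as the pinch progresses.

UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
[piece addGestureRecognizer:pinch];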