Unpinch custom gesture recognizer with three fingers in iOS

I want to make a custom gesture recognizer that uses three fingers and is similar to an unpinch gesture recognizer.
All I need is an idea of how to recognize it.
My gesture needs to recognize three fingers moving in three separate directions, for example three fingers spreading outward from a common center. (The images that illustrated this are not available.) I need it to be flexible for any three opposing directions. Thanks in advance; any help would be appreciated.
I am aware of the subclass methods, and I've already created custom single-finger gestures such as a semicircle and a full circle. What I need is a coding idea for how to handle this one.

You need to create a UIGestureRecognizer subclass of your own (let's call it DRThreeFingerPinchGestureRecognizer) and implement in it:
– touchesBegan:withEvent:
– touchesMoved:withEvent:
– touchesEnded:withEvent:
– touchesCancelled:withEvent:
These methods are called when touches are accepted by the system, possibly before they are sent to the view itself (depending on how you set up the gesture recognizer). Each of these methods gives you a set of touches, for which you can check the current location in your view and the previous location. Since a pinch gesture is relatively simple, this information is enough for you to test whether the user is performing a pinch and, if not, to fail the test (UIGestureRecognizerStateFailed). If the state has not been set to failed by – touchesEnded:withEvent:, you can recognize the gesture.
I say pinch gestures are simple because you can easily track each touch and see how it moves relative to the other touches and to itself. If an angle threshold is broken, you fail the test; otherwise you allow it to continue. If the touches do not move at separate angles to each other, you fail the test. You will have to play with what angles between the vectors are acceptable, because 120 degrees apart is not realistic for the three most common fingers (thumb + index + middle); you may just want to check that the vectors are not colliding.
Make sure to read the UIGestureRecognizer documentation for an in-depth look at the various methods, as well as subclassing notes.
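For illustration, here is a minimal sketch of that vector test, assuming a UIGestureRecognizer subclass (it needs #import <UIKit/UIGestureRecognizerSubclass.h> to set state directly); the 0.35-radian threshold is an arbitrary placeholder you would tune:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSArray *all = [event.allTouches allObjects];
    if (all.count != 3) { self.state = UIGestureRecognizerStateFailed; return; }
    for (NSUInteger i = 0; i < all.count; i++) {
        for (NSUInteger j = i + 1; j < all.count; j++) {
            UITouch *a = all[i], *b = all[j];
            // Movement vector of each touch since the last event
            CGPoint pa = [a locationInView:self.view], qa = [a previousLocationInView:self.view];
            CGPoint pb = [b locationInView:self.view], qb = [b previousLocationInView:self.view];
            CGVector va = CGVectorMake(pa.x - qa.x, pa.y - qa.y);
            CGVector vb = CGVectorMake(pb.x - qb.x, pb.y - qb.y);
            // Angle between the two movement vectors
            CGFloat angle = acos((va.dx * vb.dx + va.dy * vb.dy) /
                                 (hypot(va.dx, va.dy) * hypot(vb.dx, vb.dy) + 1e-6));
            if (angle < 0.35) { // two fingers moving the same way: not a spread
                self.state = UIGestureRecognizerStateFailed;
                return;
            }
        }
    }
    self.state = (self.state == UIGestureRecognizerStatePossible)
               ? UIGestureRecognizerStateBegan : UIGestureRecognizerStateChanged;
}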

Quick note for future readers: the way you do an unpinch/pinch with three fingers is to add the distances ab, bc, and ac.
However, if your graphics package just happens to have "area of a triangle" on hand, simply use that. ("It saves one whole line of code!")
Hope it helps.
All you need to do is track:
the distances between the three fingers!
Simply add up "every" permutation
(Well, there are three: ab, bc and ac. Just add those; that's all there is to it!)
When that value, say, triples from the start value, that's an "outwards triple unpinch".
... amazingly it's that simple.
Angles are irrelevant.
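For concreteness, a hedged sketch of that sum (the helper name is mine; p1, p2, p3 are the three current touch points and startSum is the same sum captured when the gesture began):
static CGFloat pairwiseSum(CGPoint a, CGPoint b, CGPoint c) {
    // ab + bc + ac
    return hypot(a.x - b.x, a.y - b.y)
         + hypot(b.x - c.x, b.y - c.y)
         + hypot(a.x - c.x, a.y - c.y);
}
// "when that value triples from the start value"
BOOL outwardUnpinch = pairwiseSum(p1, p2, p3) > 3.0 * startSum;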
Footnote if you want to be a smartass: this applies to any pinch/unpinch gesture, 2, 3 fingers, whatever:
track the derivative of the sum-distance (I mean to say, the velocity) rather than the distance. (Bizarrely, this is often EASIER TO DO because it is stateless: you need only look at the previous frame!)
So in other words, the gesture is triggered when the expansion/contraction VELOCITY of the fingers reaches a certain value, rather than when the distance reaches a multiple of the start value.
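A sketch of that variant, reusing pairwiseSum from the snippet above (prevSum and prevTime would be ivars, and kUnpinchVelocityThreshold is a placeholder constant; see the next footnote for what units it should really be in):
CGFloat sum = pairwiseSum(p1, p2, p3);
NSTimeInterval now = event.timestamp;
if (self.prevTime > 0) {
    // Expansion velocity since the previous frame (points per second)
    CGFloat velocity = (sum - self.prevSum) / (now - self.prevTime);
    if (velocity > kUnpinchVelocityThreshold) {
        self.state = UIGestureRecognizerStateRecognized;
    }
}
self.prevSum = sum;
self.prevTime = now;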
More interesting footnote!
However, there is a subtle problem here: whenever you do anything like this (on any platform) you have to be careful to measure "on the glass".
If you are just doing distance (i.e., my first solution above), everything cancels out and you can just say "if it doubles" (in pixels, points, whatever). BUT if you are doing velocity as part of the calculation in any gesture, then somewhat surprisingly you literally have to find the velocity in meters per second in the real world, which sounds weird at first! Of course you can't do this exactly (particularly on Android), because glass sizes vary somewhat, but you have to get close to it. Here is a long post discussing this problem: http://answers.unity3d.com/questions/292333/how-to-calculate-swipe-speed-on-ios.html In practice you usually have to make do with "screen-widths-per-second", which is pretty good. (But this may be vastly different on phones, large tablets, and these days "Surface"-type things. On a whole iMac screen, 0.1 screen-widths-per-second may be fast, but on an iPhone that is nothing, not a gesture.)
Final footnote! I simply don't know whether Apple uses "distance multiple" or "glass velocity" in their gesture recognition; quite likely it is some subtle mix. I've never read an article from them commenting on it.
Another footnote! If, for whatever reason, you do want to find the "center" of the triangle (I mean the center of the three fingers): this is a well-travelled problem for game programmers because, after all, all 3D meshes are triangles.
Fortunately, it's trivial to find the center of three points: just add the three vectors and divide by three! (Conveniently, this even works in higher dimensions!)
You can see endless posts on this issue...
http://answers.unity3d.com/questions/445442/calculate-uv-at-center-of-triangle.html
http://answers.unity3d.com/questions/424950/mid-point-of-a-triangle.html
Conceivably, if you were incredibly anal, you would want the "barycenter" which is more like the center of mass, just google if you want that.

I think tracking angles is leading you down the wrong path. The gesture is likely to be more flexible and intuitive if you don't constrain it based on the angles between the fingers, and it'll be less error-prone if you just treat it as a three-fingered pinch regardless of how the fingers move relative to each other. This is what I'd do:
if(presses != 3) {
state = UIGestureRecognizerStateCancelled;
return;
}
// After three fingers are detected, begin tracking the gesture.
state = UIGestureRecognizerStateBegan;
central_point_x = (point1.x + point2.x + point3.x) / 3;
central_point_y = (point1.y + point2.y + point3.y) / 3;
// Record the central point and the average finger distance from it.
central_point = make_point(central_point_x, central_point_y);
initial_pinch_amount = (distance_between(point1, central_point) + distance_between(point2, central_point) + distance_between(point3, central_point)) / 3;
Then on each update for touches moved:
if(presses != 3) {
state = UIGestureRecognizerStateEnded;
return;
}
// Get the new central point
central_point_x = (point1.x + point2.x + point3.x) / 3;
central_point_y = (point1.y + point2.y + point3.y) / 3;
central_point = make_point(central_point_x, central_point_y);
// Find the new average distance
pinch_amount = (distance_between(point1, central_point) + distance_between(point2, central_point) + distance_between(point3, central_point)) / 3;
// Determine the multiplicative factor between them.
difference_factor = pinch_amount / initial_pinch_amount;
Then you can do whatever you want with the difference_factor. If it's greater than 1, the pinch has moved away from the center; if it's less than 1, it has moved towards the center. This also gives the user the ability to hold two fingers stationary and move only the third to perform your gesture, which addresses certain accessibility issues your users may encounter.
Also, you could always track the incremental change between touch-move events, but those won't be equally spaced in time, and I suspect you'll have more trouble dealing with that.
I also apologize for the pseudo-code. If something isn't clear I can look at doing up a real example.
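For future readers, a hedged translation of the touchesMoved: step of the pseudo-code above into Objective-C (assuming a UIGestureRecognizer subclass with hypothetical initialPinchAmount and differenceFactor properties, and UIGestureRecognizerSubclass.h imported):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSArray *all = [event.allTouches allObjects];
    if (all.count != 3) { self.state = UIGestureRecognizerStateEnded; return; }
    CGPoint p1 = [all[0] locationInView:self.view];
    CGPoint p2 = [all[1] locationInView:self.view];
    CGPoint p3 = [all[2] locationInView:self.view];
    // Get the new central point
    CGPoint center = CGPointMake((p1.x + p2.x + p3.x) / 3, (p1.y + p2.y + p3.y) / 3);
    // Find the new average distance from it
    CGFloat pinchAmount = (hypot(p1.x - center.x, p1.y - center.y) +
                           hypot(p2.x - center.x, p2.y - center.y) +
                           hypot(p3.x - center.x, p3.y - center.y)) / 3;
    // Multiplicative factor: > 1 means the fingers have spread (unpinch)
    self.differenceFactor = pinchAmount / self.initialPinchAmount;
    self.state = UIGestureRecognizerStateChanged;
}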

A simple subclass of UIGestureRecognizer. It calculates the relative triangular center of the 3 points, and then the average distance of the fingers from that center; the angle is not important. You then check the average distance in your gesture handler.
.h
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>
@interface UnPinchGestureRecognizer : UIGestureRecognizer
@property CGFloat averageDistanceFromCenter;
@end
.m
#import "UnPinchGestureRecognizer.h"
@implementation UnPinchGestureRecognizer
-(CGPoint)centerOf:(CGPoint)pnt1 pnt2:(CGPoint)pnt2 pnt3:(CGPoint)pnt3
{
    CGPoint center;
    center.x = (pnt1.x + pnt2.x + pnt3.x) / 3;
    center.y = (pnt1.y + pnt2.y + pnt3.y) / 3;
    return center;
}
-(CGFloat)averageDistanceFromCenter:(CGPoint)center pnt1:(CGPoint)pnt1 pnt2:(CGPoint)pnt2 pnt3:(CGPoint)pnt3
{
    // Average Euclidean distance of the three points from the center
    CGFloat distance = (sqrt((pnt1.x-center.x)*(pnt1.x-center.x) + (pnt1.y-center.y)*(pnt1.y-center.y)) +
                        sqrt((pnt2.x-center.x)*(pnt2.x-center.x) + (pnt2.y-center.y)*(pnt2.y-center.y)) +
                        sqrt((pnt3.x-center.x)*(pnt3.x-center.x) + (pnt3.y-center.y)*(pnt3.y-center.y))) / 3;
    return distance;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Note: [touches count] only counts the touches delivered in this event;
    // event.allTouches is more robust if the fingers land at different times.
    if ([touches count] == 3) {
        [super touchesBegan:touches withEvent:event];
        NSArray *touchObjects = [touches allObjects];
        CGPoint pnt1 = [[touchObjects objectAtIndex:0] locationInView:self.view];
        CGPoint pnt2 = [[touchObjects objectAtIndex:1] locationInView:self.view];
        CGPoint pnt3 = [[touchObjects objectAtIndex:2] locationInView:self.view];
        CGPoint center = [self centerOf:pnt1 pnt2:pnt2 pnt3:pnt3];
        self.averageDistanceFromCenter = [self averageDistanceFromCenter:center pnt1:pnt1 pnt2:pnt2 pnt3:pnt3];
        self.state = UIGestureRecognizerStateBegan;
    }
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([touches count] == 3)
    {
        NSArray *touchObjects = [touches allObjects];
        CGPoint pnt1 = [[touchObjects objectAtIndex:0] locationInView:self.view];
        CGPoint pnt2 = [[touchObjects objectAtIndex:1] locationInView:self.view];
        CGPoint pnt3 = [[touchObjects objectAtIndex:2] locationInView:self.view];
        CGPoint center = [self centerOf:pnt1 pnt2:pnt2 pnt3:pnt3];
        self.averageDistanceFromCenter = [self averageDistanceFromCenter:center pnt1:pnt1 pnt2:pnt2 pnt3:pnt3];
        self.state = UIGestureRecognizerStateChanged;
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    self.state = UIGestureRecognizerStateEnded;
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesCancelled:touches withEvent:event];
    self.state = UIGestureRecognizerStateFailed;
}
@end
Implementation of the gesture handler. I have a maximum average distance required at the start and then a minimum required at the end; you can also check during the Changed state as well:
-(IBAction)handleUnPinch:(UnPinchGestureRecognizer *)sender
{
    switch (sender.state) {
        case UIGestureRecognizerStateBegan:
            // If you want a maximum starting distance
            self.validPinch = (sender.averageDistanceFromCenter < 75);
            break;
        case UIGestureRecognizerStateEnded:
            // Minimum distance from the relative center
            if (self.validPinch && sender.averageDistanceFromCenter >= 150) {
                NSLog(@"successful unpinch");
            }
            break;
        default:
            break;
    }
}
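Wiring it up is standard UIGestureRecognizer usage, along these lines:
UnPinchGestureRecognizer *unpinch =
    [[UnPinchGestureRecognizer alloc] initWithTarget:self
                                              action:@selector(handleUnPinch:)];
[self.view addGestureRecognizer:unpinch];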

Related

SKSpriteNode - when touch ends outside of the sprite

I followed this wonderful guide about making a Mario-style game:
http://www.raywenderlich.com/62053/sprite-kit-tutorial-make-platform-game-like-super-mario-brothers-part-2
However, I wanted to convert the movement controls to arrow keys, implemented as SKSpriteNodes with names that are detected by:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInNode:self];
    SKNode *node = [self nodeAtPoint:location];
    // other left, up, down arrows with same code here
    if ([node.name isEqualToString:@"rightArrow"]) {
        self.player.moveRight = YES;
        self.rightOriginalTouchLocation = location;
        ...
    }
}
self.player.moveRight is a boolean value (much like moveForward in the guide) that tells the character to move on update.
It is terminated in:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInNode:self];
    SKNode *node = [self nodeAtPoint:location];
    // other left, up, down arrows with same code here
    if ([node.name isEqualToString:@"rightArrow"]) {
        self.player.moveRight = NO;
    }
    ...
}
However, I encounter the following problem: when I start the touch on the arrow, drag my finger outside the arrow, and then release, it is not recognized as 'touch ended' for the arrow node (and the character doesn't stop moving because of that).
I tried to solve it in many ways (even calculating the touch-move distance from the original location and cancelling movement if it's too far), but I always manage to reproduce the constant-motion problem.
The issue lies in the fact that I can tap two arrows at the same time, so it is not enough to remember the last node tapped.
Since I want to allow movement in different directions at the same time, I can't stop all movements when one button is released. I need to know specifically which button was released so I can stop only that direction's movement.
Do you have any ideas for me? Should I implement it another way given that I want arrow keys, or is there a method to detect which node was released even though the touch is no longer at its original location?
Thank you very much!
In case anyone is interested in the issue: I had a problem where touching an SKSpriteNode and moving away from it didn't call touchesEnded for that SKSpriteNode (since the 'release' of the touch was not inside the node).
I solved it by keeping the CGPoint of the touch at touchesBegan, and when touchesEnded was called, I calculated the distance to the nearest key using a simple distance function:
-(int)calculateDistanceWithPoints:(CGPoint)point1 andPoint:(CGPoint)point2 {
    float d = sqrtf(pow((point1.x - point2.x), 2) + pow((point1.y - point2.y), 2));
    return (int)d;
}
Then, in touchesEnded, I checked which key the distance was minimal to (I have 3 keys, so the closest key is most likely the one that was 'released') and performed the action required for that key's release.
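A hedged sketch of that bookkeeping (touchStartPoints, keyNodes, and stopMovementForKeyNamed: are illustrative names, not from the answer; calculateDistanceWithPoints:andPoint: is the helper above):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint location = [touch locationInNode:self];
        // Remember where this touch started, keyed by the touch object itself
        self.touchStartPoints[[NSValue valueWithNonretainedObject:touch]] =
            [NSValue valueWithCGPoint:location];
        // ... existing arrow-detection code ...
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        NSValue *stored = self.touchStartPoints[key];
        if (!stored) continue;
        CGPoint start = [stored CGPointValue];
        // Find the key node nearest to where this touch began
        SKNode *nearest = nil;
        int best = INT_MAX;
        for (SKNode *keyNode in self.keyNodes) {
            int d = [self calculateDistanceWithPoints:start andPoint:keyNode.position];
            if (d < best) { best = d; nearest = keyNode; }
        }
        [self stopMovementForKeyNamed:nearest.name];
        [self.touchStartPoints removeObjectForKey:key];
    }
}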

Knob rotation gesture recognizer

I'm trying to create a gesture recognizer able to detect the rotation of 4 fingers (similar to when you rotate a volume knob).
The main idea was to create a subclass of UIRotationGestureRecognizer and override its methods. In -touchesBegan I detect the number of touches; if the number is lower than 4, the state of the gesture is set to failed. After that I pass the location points to an algorithm that finds the diameter of the convex hull. If you think about it, your fingers are the vertices, and I just need to find the two vertices with the max distance between them. Having obtained these two points, I keep them as ivars and pass them to the superclass, as if it were a simple rotation with just two fingers.
It doesn't work:
the detection of the touches seems pretty hard;
-touchesMoved: is called only very rarely;
and when it is called, it hangs most of the time.
Can someone help me?
Here is the code:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (touches.count < 4) {
        // FAIL
        self.state = UIGestureRecognizerStateFailed;
        return;
    }
    // Find the diameter of the convex hull
    NSArray *touchesArray = [touches allObjects];
    NSMutableArray *pointsArray = @[].mutableCopy;
    for (UITouch *touch in touchesArray) {
        [pointsArray addObject:[NSValue valueWithCGPoint:[touch locationInView:touch.view]]];
    }
    DiameterType convexHullDiameter = getDiameterFromPoints(pointsArray);
    CGPoint firstPoint = convexHullDiameter.firstPoint;
    CGPoint secondPoint = convexHullDiameter.secondPoint;
    for (UITouch *touch in touchesArray) {
        if (CGPointEqualToPoint([touch locationInView:touch.view], firstPoint)) {
            self.fistTouch = touch;
        }
        else if (CGPointEqualToPoint([touch locationInView:touch.view], secondPoint)) {
            self.secondTouch = touch;
        }
    }
    // Calculating the rotation center as a mid point between the diameter vertices
    CGPoint rotationCenter = (CGPoint) {
        .x = (convexHullDiameter.firstPoint.x + convexHullDiameter.secondPoint.x) / 2,
        .y = (convexHullDiameter.firstPoint.y + convexHullDiameter.secondPoint.y) / 2
    };
    self.rotationCenter = rotationCenter;
    // Passing touches to super as a fake rotation gesture
    NSSet *touchesSet = [[NSSet alloc] initWithObjects:self.fistTouch, self.secondTouch, nil];
    [super touchesBegan:touchesSet withEvent:event];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (touches.count < 4) {
        self.state = UIGestureRecognizerStateFailed;
        return;
    }
    [super touchesMoved:[[NSSet alloc] initWithObjects:self.fistTouch, self.secondTouch, nil] withEvent:event];
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesCancelled:[[NSSet alloc] initWithObjects:self.fistTouch, self.secondTouch, nil] withEvent:event];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:[[NSSet alloc] initWithObjects:self.fistTouch, self.secondTouch, nil] withEvent:event];
}
The reason initial detection is hard is that all the touches may not start at the same time; touchesBegan will likely be called multiple times as separate touches land on the screen. You can use the event parameter to query all of the current touches with event.allTouches. So your current approach to triggering failure will not work: don't set the state to failed when touches.count < 4; instead, just return if event.allTouches.count < 4. You could use a timer to set the state to failed if the fourth touch does not arrive within a certain time of the first.
touchesMoved likely has problems because the touches in the event object do not match up with those in the set that you pass to super.
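For instance, a hedged sketch of that early return (the failure timer is left out):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (event.allTouches.count < 4) {
        return; // wait for the remaining fingers instead of failing outright
    }
    // ... proceed with the convex-hull work, using event.allTouches ...
}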
If you think about it, your fingers are the vertices and I just need to find the two vertices with the max distance.
I don't think this will work in practice, even if you are able to trick the UIGestureRecognizer.
This is how I would implement the algorithm the 'correct' way:
1. Remember the 'old' touches.
2. When you're given 'new' touches, try to match each finger to a previous touch. If you can't, fail.
3. Compute the center of the 'new' + 'old' touches.
4. For each of the 4 fingers identified in step 2, compute the angle traveled in radians, approximated as |new(i) - old(i)| divided by the distance to the center.
5. If any angle is too big (> 0.5), fail. This guarantees that the approximation is valid.
6. Now compute the average of the 4 angles.
Congratulations, you now have the rotation angle (measured in radians).
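A hedged sketch of steps 3 to 6 (prev and curr are the matched old and new points from steps 1 and 2; the per-finger angle is signed via the cross product, which also ignores purely radial movement):
static CGFloat averageRotation(const CGPoint prev[4], const CGPoint curr[4]) {
    // Step 3: center of all eight samples
    CGFloat cx = 0, cy = 0;
    for (int i = 0; i < 4; i++) { cx += prev[i].x + curr[i].x; cy += prev[i].y + curr[i].y; }
    cx /= 8; cy /= 8;
    CGFloat sum = 0;
    for (int i = 0; i < 4; i++) {
        CGFloat rx = prev[i].x - cx, ry = prev[i].y - cy;                // radius vector
        CGFloat dx = curr[i].x - prev[i].x, dy = curr[i].y - prev[i].y;  // finger movement
        CGFloat r2 = rx * rx + ry * ry;
        if (r2 == 0) return NAN;
        // Step 4: tangential movement / radius, i.e. the small-angle approximation
        CGFloat angle = (rx * dy - ry * dx) / r2;
        // Step 5: the approximation is only valid for small per-frame angles
        if (fabs(angle) > 0.5) return NAN;
        sum += angle;
    }
    return sum / 4; // Step 6: average rotation, in radians
}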
I would put this in a comment if I had enough Rep.
[super touchesMoved:[[NSSet alloc] initWithObjects:self.fistTouch, self.secondTouch, nil] withEvent:event];
You're using something called fistTouch, which doesn't sound like what you want. My guess is you want firstTouch.
Additionally, there are possible collisions with system gestures that may override yours. Did you know there is a four-finger zoom-out in iOS 7 that is a system-wide gesture? Also, a four-finger zoom-in during an app will close it.

How to program draggable tiles for a word game?

I am an iOS programming newbie (reading several books on the subject simultaneously) and I would like to develop a (yet another) word game.
Coming from a Flash/Flex programming background, I first expected the tiles to be bundled as GIF or PNG assets.
But then I took a look (using iFunbox) at the popular word games (Lexulous, Wordament, Words with Friends, Ruzzle, ...) and none of them does that:
none of the many apps I've looked at includes any letter pieces as images.
So my question is: what would be the recommended approach (in Xcode 5 and with no additional SDKs like Cocos2d or Sparrow) to create a letter tile for a word game?
On a tile I'd like to have
the center-aligned letter (obviously!),
then an index in a corner displaying the letter value
and then another index for a total word value
When touched I'd like to make the tile a bit larger and add a shadow underneath it.
Should my tile class be a UIView? (Can they be dragged around, grow, and have shadows?)
Should I use a .nib file for the tile?
For dragging I have found a good suggestions already: Dragging an UIView inside UIScrollView
But what I really would like to know here is whether a UIView makes a good tile (performance- and feature-wise) or whether I should go for another base class (maybe some shapes)?
Yes, a UIView would be a good container.
Create a subclass of UIView, say TileView, put some labels and an image view in it with a button over them, then override the button's UIControlEventTouchDown, UIControlEventTouchUpInside, and UIControlEventTouchDragInside events to help you move the view within its parent. Put the TileView in some container (your view controller's view, or wherever you'd like it to be) and that's basically it.
// dx, dy and oldPoint are instance variables of the TileView subclass
-(void)btnPressed:(id)sender withEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    dx = 0;
    dy = 0;
    oldPoint = point;
}
-(void)btnReleased:(id)sender
{
    // stop moving code
}
-(void)btnDragged:(id)sender withEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    dx = point.x - oldPoint.x;
    dy = point.y - oldPoint.y;
    oldPoint = point;
    // set the tile view's center position using
    CGPoint ptCenter;
    ptCenter = self.view.center;
    ptCenter.x = ptCenter.x + dx;
    ptCenter.y = ptCenter.y + dy;
    self.view.center = ptCenter;
}
Here self.view is your TileView, and it's self.view because you have overridden the UIView class ;)
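The question also asks about making the tile grow and casting a shadow on touch; a hedged sketch using standard CALayer and UIView animation APIs, to run inside the touch-down handler:
// Lift the tile: scale it up slightly and drop a shadow under it
self.view.layer.shadowColor = [UIColor blackColor].CGColor;
self.view.layer.shadowOffset = CGSizeMake(0, 3);
self.view.layer.shadowRadius = 4;
self.view.layer.shadowOpacity = 0.4;
[UIView animateWithDuration:0.15 animations:^{
    self.view.transform = CGAffineTransformMakeScale(1.2, 1.2);
}];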

Check when user slides over UIView - touchesMoved too laggy

I know there are better ways to build drawing tools using Core Graphics and whatnot; I'm using this UIView method because I want to achieve a pixelated effect, and I have some other custom stuff in mind that would be difficult to do with just the built-in CG code.
I have a grid of 400 UIViews to create my pixelated canvas...
The user selects a paint color and begins to drag their finger over the canvas of 400 UIViews; as their finger hits a UIView, the color of that UIView (aka pixel) changes to the color the user has selected...
Right now I have coded the whole thing and everything works, but it is laggy (because I am using touchesMoved) and many points are skipped...
I am currently using touchesMoved and am moving a UIView to the finger's location (I am calling this the fingerLocationRect).
When I update a pixel's (1 of 400 miniature UIViews) color, I do so by checking whether fingerLocationRect intersects any one of my UIView rects. If it does, I update the color.
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPosition = [touch locationInView:self.view];
    fingerLocationRect.center = CGPointMake(currentPosition.x, currentPosition.y);
    int tagID = 1;
    while (tagID <= 400) {
        if (CGRectIntersectsRect(fingerLocationRect.frame, [drawingView viewWithTag:tagID].frame)) {
            [drawingView viewWithTag:tagID].backgroundColor = [UIColor colorWithRed:redVal/255.f green:greenVal/255.f blue:blueVal/255.f alpha:1.f];
        }
        tagID++;
    }
}
The problem is that touchesMoved is not called very often... so if I drag my finger across the canvas, only about 5 of the pixels change instead of the 40 I actually touched.
I'm wondering whether using CGRectIntersectsRect with touchesMoved is the best method to detect the user sliding over a UIView... Right now it is not updating nearly fast enough, so I get a scattered trail of dots instead of the continuous line I want. (The comparison screenshots are not available; I only achieve the continuous result if I move my finger VERY slowly...)
All that said, is there a method for detecting when a user slides over a UIView other than using touchesMoved? If not, any ideas on how to grab all of the UIViews in between the touch locations returned by touchesMoved? They all have tags of 1-400 (the first row is 1-10, the second row is 11-20, etc.).
P.S. Would a panGestureRecognizer update more frequently than touchesMoved?
To get an idea of what the 400 UIViews look like, here they all are with arc4random-based colors: (screenshot not available)
You are looping through 400 views every time touchesMoved: is called. You just need a little math to determine which view in the grid is being touched. Assuming that drawingView is the "canvas" view that contains all 400 "pixels" and the tags are ordered from the top-left to the bottom-right, then:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:drawingView];
    NSInteger column = floorf(20 * point.x / drawingView.bounds.size.width);  // 20 is the number of columns in the canvas
    NSInteger row = floorf(20 * point.y / drawingView.bounds.size.height);    // 20 is the number of rows in the canvas
    // Tags run from 1 to 400, so offset by 1
    [drawingView viewWithTag:column + 20*row + 1].backgroundColor =
        [UIColor colorWithRed:redVal/255.f green:greenVal/255.f blue:blueVal/255.f alpha:1.f];
}
This performance improvement should allow Cocoa to make more frequent calls to touchesMoved:.
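The question also asks how to color the cells in between successive touch samples; that isn't covered above, but a common approach (a sketch, assuming a lastPoint ivar updated on each event) is to step along the segment from the previous point to the current one and color every cell it crosses:
// Walk from lastPoint to point in roughly cell-sized steps
CGFloat dist = hypot(point.x - lastPoint.x, point.y - lastPoint.y);
CGFloat step = drawingView.bounds.size.width / 20; // one cell width
int n = MAX(1, (int)ceil(dist / step));
for (int i = 0; i <= n; i++) {
    CGFloat t = (CGFloat)i / n;
    CGPoint p = CGPointMake(lastPoint.x + t * (point.x - lastPoint.x),
                            lastPoint.y + t * (point.y - lastPoint.y));
    NSInteger col = floorf(20 * p.x / drawingView.bounds.size.width);
    NSInteger row = floorf(20 * p.y / drawingView.bounds.size.height);
    [drawingView viewWithTag:col + 20*row + 1].backgroundColor =
        [UIColor colorWithRed:redVal/255.f green:greenVal/255.f blue:blueVal/255.f alpha:1.f];
}
lastPoint = point;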

How to smoothly move a UIView with the users finger on iOS

In my app I have a UIImageView representing a chalk. The user can pick it up and drag it over the screen to draw with it.
I implemented that using touchesBegan, touchesMoved and touchesEnded. In touchesMoved I move the center of my UIImageView to the current touch location and draw a line with Core Graphics from the last location to the current one.
This works well, but the image view's movement isn't very smooth and it also lags behind the touch. I already tried to use a UIPanGestureRecognizer (it didn't recognize the gesture) and a UIScrollView in which I placed the chalk (I didn't really figure out how to configure it so that the chalk could be moved far enough in all directions).
Can you give me some hints on how to improve the quality of my chalk movement?
Thanks!
The following pan gesture handler works smoothly for me:
- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    View *view = (View *)gesture.view;
    if (gesture.state == UIGestureRecognizerStateEnded) {
        view.dragStartingPoint = CGPointZero;
        return;
    }
    CGPoint point = [gesture locationInView:gesture.view];
    if (gesture.state == UIGestureRecognizerStateBegan) {
        view.dragStartingPoint = point;
        view.dragStartingFrame = view.frame;
        return;
    }
    if (CGPointEqualToPoint(view.dragStartingPoint, CGPointZero))
        return;
    CGFloat x = view.dragVerticallyOnly ? view.frame.origin.x : view.dragStartingFrame.origin.x + point.x - view.dragStartingPoint.x;
    CGFloat y = view.dragHorizontallyOnly ? view.frame.origin.y : view.dragStartingFrame.origin.y + point.y - view.dragStartingPoint.y;
    if (view.dragVerticallyOnly == NO) {
        if (x < view.dragMinPoint.x) {
            x = view.dragMinPoint.x;
        }
        else if (x > view.dragMaxPoint.x) {
            x = view.dragMaxPoint.x;
        }
    }
    if (view.dragHorizontallyOnly == NO) {
        if (y < view.dragMinPoint.y) {
            y = view.dragMinPoint.y;
        }
        else if (y > view.dragMaxPoint.y) { // clamp to the max, not the min
            y = view.dragMaxPoint.y;
        }
    }
    CGFloat deltaX = x - view.frame.origin.x;
    CGFloat deltaY = y - view.frame.origin.y;
    if (fabs(deltaX) <= 1.0 && fabs(deltaY) <= 1.0)
        return; // ignore very small movements
    [UIView animateWithDuration:0.1 animations:^{
        view.frame = CGRectMake(x, y, view.frame.size.width, view.frame.size.height);
    }];
}
The most important points are:
Make the movement animated; one tenth of a second seems to do the job well.
Avoid making movements when not necessary. Doing some extra calculations will not make it slower, but making unnecessary movements might. However, you cannot skip too many movements, otherwise the view will not follow the user's pan gesture.
Depending on how you want your view to behave, there may be more optimizations you'll want to make.
Remark: My method also takes into account boundaries and movement-direction limits that you may define in case you need them. For instance, in my case I added a speed limit: if the user goes beyond a certain pan speed, I just ignore further pan gestures and move the view (animated) to where the user was pointing, at that speed. It may or may not make sense in some cases. In mine, it did!
I hope I have been of some help!
Look at this component: SPUserResizableView
http://cocoacontrols.com/platforms/ios/controls/spuserresizableview
The source code is very simple and can help you understand the view handling.
Try reducing the size of the chalk image you are using...
Edit: by size I meant the image file size.
It helped me when I faced a similar issue.
I rewrote how the movement is calculated and it is a little smoother now. Not perfect, but it's enough.
