Swift - Tinder effect - iOS

How can I achieve the Tinder effect in Swift?
I mean, I have an image and want to accept it if I swipe right and reject it if I swipe left.
I can do that with the code below:
@IBAction func SwipeRight(sender: UISwipeGestureRecognizer) {
    UIView.animateWithDuration(1) {
        self.Imagem.center = CGPointMake(self.Imagem.center.x - 150, self.Imagem.center.y)
    }
    // other things after acceptance
}
and
@IBAction func SwipeLeft(sender: UISwipeGestureRecognizer) {
    UIView.animateWithDuration(1) {
        self.Imagem.center = CGPointMake(self.Imagem.center.x + 150, self.Imagem.center.y)
    }
    // other things after rejection
}
But this way the user can't cancel the action. What I want is: if the user drags past a certain delta distance toward an edge (left or right), an image should appear to let the user know that, if he ends the gesture there, the action will take place. Otherwise, without ending the gesture, the user can drag back outside that delta zone and the action will be cancelled.

I would like to thank the people who suggested solutions. Here is the solution I developed, with a huge amount of help from people on Stack Overflow:
@IBAction func Arrastei(sender: UIPanGestureRecognizer) {
    let translation: CGPoint = sender.translationInView(Imagem)

    // Translate the image and add a slight lift/rotation while dragging.
    let txy = CGAffineTransformMakeTranslation(translation.x, -abs(translation.x) / 15)
    let rot = CGAffineTransformMakeRotation(-translation.x / 1500)
    Imagem.transform = CGAffineTransformConcat(rot, txy)

    // Show the acceptance/rejection label once the drag passes 100 points.
    if translation.x > 100 {
        LbResultado.textColor = btVerdadeiro.textColor
        LbResultado.text = btVerdadeiro.text
        LbResultado.hidden = false
    } else if translation.x < -100 {
        LbResultado.textColor = btFalso.textColor
        LbResultado.text = btFalso.text
        LbResultado.hidden = false
    } else {
        LbResultado.hidden = true
    }

    if sender.state == UIGestureRecognizerState.Ended {
        if translation.x > 100 {
            objJogo.Rodada_Vigente!.Responder(true)
        } else if translation.x < -100 {
            objJogo.Rodada_Vigente!.Responder(false)
        } else {
            // Cancelled: reset the image to its original position and rotation.
            sender.view!.transform = CGAffineTransformIdentity
        }
    }
}
This solution uses:
Imagem --> UIImageView - the image to be accepted or rejected
LbResultado --> UITextView - shows the user whether he is in the acceptance or rejection area
There are no math calculations behind the rotation and translation; I used values that give a visually good effect.
The action (acceptance or rejection) area is reached when the user drags the image more than 100 points to the left (reject) or to the right (accept). If the user ends the movement outside the action area, the image goes back to its original position.
I will be glad if someone suggests improvements to this code.
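The code above uses pre-Swift 3 syntax. For reference, here is a minimal sketch of the same approach in current Swift; cardView, resultLabel, the "Accept"/"Reject" strings, and the animation duration are placeholders for Imagem, LbResultado, and the button texts, and the 100-point threshold matches the one above:
import UIKit

class CardViewController: UIViewController {
    @IBOutlet weak var cardView: UIImageView!   // the image to accept or reject
    @IBOutlet weak var resultLabel: UILabel!    // shows accept/reject feedback

    private let actionThreshold: CGFloat = 100  // same 100-point threshold as above

    @IBAction func handlePan(_ sender: UIPanGestureRecognizer) {
        let translation = sender.translation(in: view)

        // Translate the card and add a slight lift/rotation for the Tinder feel.
        let rotation = CGAffineTransform(rotationAngle: -translation.x / 1500)
        let move = CGAffineTransform(translationX: translation.x,
                                     y: -abs(translation.x) / 15)
        cardView.transform = rotation.concatenating(move)

        // Show feedback once the drag passes the threshold, hide it otherwise.
        if translation.x > actionThreshold {
            resultLabel.text = "Accept"
            resultLabel.isHidden = false
        } else if translation.x < -actionThreshold {
            resultLabel.text = "Reject"
            resultLabel.isHidden = false
        } else {
            resultLabel.isHidden = true
        }

        if sender.state == .ended {
            if abs(translation.x) > actionThreshold {
                // Commit the accept/reject action here.
            } else {
                // Cancelled: animate the card back to its original position.
                UIView.animate(withDuration: 0.3) {
                    self.cardView.transform = .identity
                }
            }
            resultLabel.isHidden = true
        }
    }
}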

It's better to use a UIPanGestureRecognizer here and manage its states, as you've already figured out. For managing the cards, a good solution is to create a manager class that handles the interaction between cards (moving the background cards while the front one is swiped). You can look at an implementation of the card and the manager here; it includes dragging, moving the background cards, and the revert animation:
https://github.com/Yalantis/Koloda
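For illustration only, here is a rough Swift sketch of what such a manager could look like; these names are hypothetical and are not Koloda's actual API:
import UIKit

// Hypothetical sketch of a card-stack manager; not Koloda's actual API.
protocol CardStackManagerDelegate: AnyObject {
    func cardManager(_ manager: CardStackManager, didSwipe card: UIView, accepted: Bool)
}

final class CardStackManager {
    weak var delegate: CardStackManagerDelegate?
    private(set) var cards: [UIView] = []

    func push(_ card: UIView) {
        cards.append(card)
    }

    // Called by the front card's pan handler when a swipe passes the threshold.
    func frontCardSwiped(accepted: Bool) {
        guard let front = cards.first else { return }
        cards.removeFirst()
        delegate?.cardManager(self, didSwipe: front, accepted: accepted)
        // Bring the next card forward here, e.g. by animating its transform/scale.
    }
}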

Try this:
https://github.com/cwRichardKim/TinderSimpleSwipeCards
You can find a better solution there, with rotation. See DraggableView.m:
- (void)beingDragged:(UIPanGestureRecognizer *)gestureRecognizer
{
    //%%% this extracts the coordinate data from your swipe movement. (i.e. How much did you move?)
    xFromCenter = [gestureRecognizer translationInView:self].x; //%%% positive for right swipe, negative for left
    yFromCenter = [gestureRecognizer translationInView:self].y; //%%% positive for up, negative for down

    //%%% checks what state the gesture is in. (are you just starting, letting go, or in the middle of a swipe?)
    switch (gestureRecognizer.state) {
        //%%% just started swiping
        case UIGestureRecognizerStateBegan: {
            self.originalPoint = self.center;
            break;
        };
        //%%% in the middle of a swipe
        case UIGestureRecognizerStateChanged: {
            //%%% dictates rotation (see ROTATION_MAX and ROTATION_STRENGTH for details)
            CGFloat rotationStrength = MIN(xFromCenter / ROTATION_STRENGTH, ROTATION_MAX);

            //%%% degree change in radians
            CGFloat rotationAngel = (CGFloat) (ROTATION_ANGLE * rotationStrength);

            //%%% amount the height changes when you move the card up to a certain point
            CGFloat scale = MAX(1 - fabsf(rotationStrength) / SCALE_STRENGTH, SCALE_MAX);

            //%%% move the object's center by center + gesture coordinate
            self.center = CGPointMake(self.originalPoint.x + xFromCenter, self.originalPoint.y + yFromCenter);

            //%%% rotate by certain amount
            CGAffineTransform transform = CGAffineTransformMakeRotation(rotationAngel);

            //%%% scale by certain amount
            CGAffineTransform scaleTransform = CGAffineTransformScale(transform, scale, scale);

            //%%% apply transformations
            self.transform = scaleTransform;
            [self updateOverlay:xFromCenter];
            break;
        };
        //%%% let go of the card
        case UIGestureRecognizerStateEnded: {
            [self afterSwipeAction];
            break;
        };
        case UIGestureRecognizerStatePossible: break;
        case UIGestureRecognizerStateCancelled: break;
        case UIGestureRecognizerStateFailed: break;
    }
}
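afterSwipeAction isn't shown in the excerpt above; broadly, it compares xFromCenter against an action margin and either animates the card off screen or snaps it back. A rough Swift sketch of that end-of-drag step, with illustrative constants and names rather than the library's exact code:
import UIKit

// Swift sketch of the end-of-drag decision the excerpt delegates to afterSwipeAction.
// The action margin and animation values are illustrative, not the library's.
func finishDrag(of card: UIView, xFromCenter: CGFloat, originalPoint: CGPoint) {
    let actionMargin: CGFloat = 120

    if xFromCenter > actionMargin {
        // Accepted: fling the card off the right edge, then remove it.
        UIView.animate(withDuration: 0.3, animations: {
            card.center = CGPoint(x: 500, y: card.center.y + 75)
        }, completion: { _ in card.removeFromSuperview() })
    } else if xFromCenter < -actionMargin {
        // Rejected: fling the card off the left edge, then remove it.
        UIView.animate(withDuration: 0.3, animations: {
            card.center = CGPoint(x: -500, y: card.center.y + 75)
        }, completion: { _ in card.removeFromSuperview() })
    } else {
        // Not far enough: snap back to the original position and undo the transform.
        UIView.animate(withDuration: 0.3) {
            card.center = originalPoint
            card.transform = .identity
        }
    }
}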

Related

In Cocos2d setting anchor point on a layer for pinch to zoom not working as expected

Right now I'm trying to implement a pinch-to-zoom feature in my Cocos2D game for iOS and I'm encountering really strange behavior. My goal is to use the handler for UIPinchGestureRecognizer to scale one of the CCNodes that represents the game level when a player pinches the screen. This has the effect of zooming.
The issue: if I set the zoom anchor to some fixed value such as (0.5, 0.5) (the center of the level CCNode), it scales perfectly around the center of the level, but I want to scale around the center of the player's view. Here is what that looks like:
- (void)handlePinchFrom:(UIPinchGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        _isScaling = false;
        _prevScale = 1.0;
    }
    else
    {
        _isScaling = true;
        float deltaScale = 1.0 - _prevScale + recognizer.scale;

        // Obtain the center of the camera.
        CGPoint center = CGPointMake(self.contentSize.width/2, self.contentSize.height/2);
        CGPoint worldPoint = [self convertToWorldSpace:center];
        CGPoint areaPoint = [_area convertToNodeSpace:worldPoint];

        // Now set anchor point to where areaPoint is relative to the whole _area contentSize
        float areaLocationX = areaPoint.x / _area.contentSize.width;
        float areaLocationY = areaPoint.y / _area.contentSize.height;
        [_area moveDebugParticle:areaPoint];
        [_area setAnchorPoint:CGPointMake(areaLocationX, areaLocationY)];

        if (_area.scale*deltaScale <= ZOOM_RADAR_THRESHOLD)
        {
            _area.scale = ZOOM_RADAR_THRESHOLD;
        }
        else if (_area.scale*deltaScale >= ZOOM_MAX)
        {
            _area.scale = ZOOM_MAX;
        }
        else
        {
            // First set the anchor point.
            _area.scale *= deltaScale;
        }
        _prevScale = recognizer.scale;
    }
}
If I set the anchor point to .5, .5 and print the calculated anchor point (areaLocationX, areaLocationY) using a CCLog it looks right, but when I set the anchor point to these values the layer scales out of control and entirely leaves the view of the player. The anchor point takes on crazy values like (-80, 10), although generally it should be relatively close to something in the range of 0 to 1 for either coordinate.
What might be causing this kind of behavior?
OK, it looks like I solved it. I was continually moving the anchor point during the scaling rather than setting it once at the very beginning. The result was erratic scaling rather than something smooth and expected. The resolved code looks like this:
- (void)handlePinchFrom:(UIPinchGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        _isScaling = false;
        _prevScale = 1.0;
    }
    else
    {
        if (!_isScaling)
        {
            // Obtain the center of the camera.
            CGPoint center = CGPointMake(self.contentSize.width/2, self.contentSize.height/2);
            CGPoint areaPoint = [_area convertToNodeSpace:center];

            // Now set anchor point to where areaPoint is relative to the whole _area contentSize
            float anchorLocationX = areaPoint.x / _area.contentSize.width;
            float anchorLocationY = areaPoint.y / _area.contentSize.height;
            [_area moveDebugParticle:areaPoint];
            [_area setAnchorPoint:CGPointMake(anchorLocationX, anchorLocationY)];
            CCLOG(@"Anchor Point: (%f, %f)", anchorLocationX, anchorLocationY);
        }
        _isScaling = true;

        float deltaScale = 1.0 - _prevScale + recognizer.scale;
        if (_area.scale*deltaScale <= ZOOM_RADAR_THRESHOLD)
        {
            _area.scale = ZOOM_RADAR_THRESHOLD;
        }
        else if (_area.scale*deltaScale >= ZOOM_MAX)
        {
            _area.scale = ZOOM_MAX;
        }
        else
        {
            _area.scale *= deltaScale;
        }
        _prevScale = recognizer.scale;
    }
}

iOS Drag & Drop with Device Rotation

I've used the following code, shown below, to implement drag & drop in several different places, and it has always worked well for me in the past.
Now I have a problem with it and have no idea why. It works perfectly as long as the device (or simulator) is oriented portrait with the home button at the bottom. But in any of the other three orientations, as the finger dragging the view moves in one direction, the "dragged" view moves in a different direction.
As shown below, I'm logging the value of the translation with each move. In the original orientation, as I drag from the middle of the screen toward the lower-left corner, the values for the translation are:
-, +
If I rotate left, and do it again:
-, -
Rotate left again:
+, -
Rotate left again:
+, +
I am totally not getting what happens here, particularly since this code seems to work well in other view controllers.
Any suggestions will be appreciated.
- (void)didMakePanGesture:(UIPanGestureRecognizer *)panGesture
{
    if (panGesture.state == UIGestureRecognizerStateBegan)
    {
        [self setDropTargetsCoordinates]; // saves correct drop target & its coordinates
        dragViewStartLocation = receptiveClassificationImageView.center; // save center in case we have to snap back
        receptiveClassificationImageView.transform = CGAffineTransformMakeScale(0.40f, 0.40f); // make the image smaller for dragging
        receptiveClassificationImageView.layer.cornerRadius = 12.0f; // we lose the rounded corners in the scaling; this fixes that
    }
    else if (panGesture.state == UIGestureRecognizerStateChanged)
    {
        //
        // Adjust the location of the dragged view whenever state changes
        //
        CGPoint translation = [panGesture translationInView:nil];
        CGAffineTransform transform = receptiveClassificationImageView.transform;
        transform.tx = translation.x;
        transform.ty = translation.y;
        receptiveClassificationImageView.transform = transform;
        NSLog(@"Translation=%f,%f", translation.x, translation.y);
    }
    else if (panGesture.state == UIGestureRecognizerStateEnded)
    {
        // do stuff when dropped
    }
}
Just in case someone sees this later: the problem was solved with the changes marked below. The key fix is asking for the translation in self.view's coordinate space rather than passing nil (the window), so the translation stays in the same space as the view being transformed; the view is also re-centered under the finger before scaling:
if (panGesture.state == UIGestureRecognizerStateBegan)
{
    [self setDropTargetsCoordinates]; // saves correct drop target & its coordinates
    dragViewStartLocation = receptiveClassificationImageView.center; // save center in case we have to snap back
    receptiveClassificationImageView.center = [panGesture locationInView:receptiveClassificationImageView.superview]; // CHANGED: re-center the view before scaling
    receptiveClassificationImageView.transform = CGAffineTransformMakeScale(0.40f, 0.40f); // make the image smaller for dragging
    receptiveClassificationImageView.layer.cornerRadius = 12.0f; // we lose the rounded corners in the scaling; this fixes that
}
else if (panGesture.state == UIGestureRecognizerStateChanged)
{
    //
    // Adjust the location of the dragged view whenever state changes
    //
    CGPoint translation = [panGesture translationInView:self.view]; // CHANGED: was translationInView:nil
    CGAffineTransform transform = receptiveClassificationImageView.transform;
    transform.tx = translation.x;
    transform.ty = translation.y;
    receptiveClassificationImageView.transform = transform;
}

UIView animation with UIPanGestureRecognizer velocity way too fast (not decelerating)

Update: Though I'd still like to solve this, I ended up switching to animateWithDuration:delay:options:animations:completion: and it works much nicer. It's missing that nice "bounce" at the end that the spring affords, but at least it's controllable.
I am trying to create a nice gesture-driven UI for iOS but am running into some difficulties getting the values to result in a nice natural feeling app.
I am using animateWithDuration:delay:usingSpringWithDamping:initialSpringVelocity:options:animations:completion: because I like the bouncy spring animation. I am initializing the velocity argument with the velocity as given by the gesture recognizer in the completed state. The problem is if I pan quickly enough and let go, the velocity is in the thousands, and my view ends up flying right off the screen and then bouncing back and forth with such dizzying vengeance.
I'm even adjusting the duration of the animation relative to the amount of distance the view needs to move, so that if there are only a few pixels needed, the animation will take less time. That, however, didn't solve the issue. It still ends up going nuts.
What I want to happen is the view should start out at whatever velocity the user is dragging it at, but it should quickly decelerate when reaching the target point and only bounce a little bit at the end (as it does if the velocity is something reasonable).
I wonder if I am using this method or the values correctly. Here is some code to show what I'm doing. Any help would be appreciated!
- (void)handlePanGesture:(UIPanGestureRecognizer *)gesture {
    CGPoint offset = [gesture translationInView:self.view];
    CGPoint velocity = [gesture velocityInView:self.view];
    NSLog(@"pan gesture state: %d, offset: %f velocity: %f", gesture.state, offset.x, velocity.x);

    static CGFloat initialX = 0;

    switch ( gesture.state ) {
        case UIGestureRecognizerStateBegan: {
            initialX = self.blurView.x;
            break; }
        case UIGestureRecognizerStateChanged: {
            self.blurView.x = initialX + offset.x;
            break; }
        default:
        case UIGestureRecognizerStateCancelled:
        case UIGestureRecognizerStateEnded: {
            if ( velocity.x > 0 )
                [self openMenuWithVelocity:velocity.x];
            else
                [self closeMenuWithVelocity:velocity.x];
            break; }
    }
}

- (void)openMenuWithVelocity:(CGFloat)velocity {
    if ( velocity < 0 )
        velocity = 1.5f;

    CGFloat distance = -40 - self.blurView.x;
    CGFloat distanceRatio = distance / 260;
    NSLog(@"distance: %f ratio: %f", distance, distanceRatio);

    [UIView animateWithDuration:(0.9f * distanceRatio) delay:0 usingSpringWithDamping:0.7 initialSpringVelocity:velocity options:UIViewAnimationOptionBeginFromCurrentState animations:^{
        self.blurView.x = -40;
    } completion:^(BOOL finished) {
        self.isMenuOpen = YES;
    }];
}
Came across this post while looking for a solution to a related issue. The problem is, you're passing in the velocity from UIPanGestureRecognizer, which is in points/second, when - animateWithDuration:delay:usingSpringWithDamping:initialSpringVelocity:options:animations:completion wants… a slightly odder value:
The initial spring velocity. For smooth start to the animation, match this value to the view’s velocity as it was prior to attachment.
A value of 1 corresponds to the total animation distance traversed in one second. For example, if the total animation distance is 200 points and you want the start of the animation to match a view velocity of 100 pt/s, use a value of 0.5.
The animation method wants velocity in "distances" per second, not points per second. So, the value you should be passing in is (velocity from the gesture recognizer) / (total distance traveled during the animation).
Now, that's exactly what I'm doing, and there's still a slight yet noticeable "hiccup" between when the gesture recognizer is moving the view and when the animation picks up. That said, it should still work a lot better than what you had before. 😃
First you need to calculate the remaining distance that the animation will have to cover. Once you have that delta distance, you can calculate the velocity like this:
CGFloat springVelocity = fabs(gestureRecognizerVelocity / distanceToAnimate);
For clean velocity transfer you must use UIViewAnimationOptionCurveLinear.
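Putting the two points above together, here is a small Swift sketch of the conversion; the menu view, target x, duration, and damping are placeholders:
import UIKit

// Convert the pan gesture's pt/s velocity into the unit UIKit's spring API
// expects: fractions of the remaining animation distance per second.
func animateMenu(_ menuView: UIView, to targetX: CGFloat, gestureVelocityX: CGFloat) {
    let remainingDistance = abs(targetX - menuView.frame.origin.x)
    // Guard against division by zero when the view is already at the target.
    let springVelocity = remainingDistance > 0 ? abs(gestureVelocityX) / remainingDistance : 0

    UIView.animate(withDuration: 0.5,
                   delay: 0,
                   usingSpringWithDamping: 0.7,
                   initialSpringVelocity: springVelocity,
                   options: [.curveLinear, .beginFromCurrentState],
                   animations: {
                       menuView.frame.origin.x = targetX
                   },
                   completion: nil)
}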
I had a slightly different need, but my code may help, namely the velocity calculation, based on (pan velocity / pan translation).
A bit of context: I needed to use a panGestureRecognizer on the side of a UIView to resize it.
If the iPad is in portrait mode, the view is attached on the left, bottom and right sides, and I drag on the top border of the view to resize it.
If the iPad is in landscape mode, the view is attached to the left, top and bottom sides, and I drag on the right border to resize it.
This is what I used in the IBAction for the UIPanGestureRecognizer:
var velocity: CGFloat = 1

switch gesture.state {
case .changed:
    // Adjust the resizableView size according to the gesture translation
    let translation = gesture.translation(in: resizableView)
    let panVelocity = gesture.velocity(in: resizableView)

    if isPortrait { // defined previously in the class based on UIDevice orientation
        let newHeight = resizableViewHeightConstraint.constant + (-translation.y)
        resizableViewHeightConstraint.constant = newHeight
        // UIView animation initialSpringVelocity requires a velocity based on the total distance traveled during the animation
        velocity = -panVelocity.y / -translation.y
    } else { // Landscape
        let newWidth = resizableViewWidthConstraint.constant + translation.x
        // Limit the resizing to half the width on the left and the full width on the right
        resizableViewWidthConstraint.constant = min(max(resizableViewInitialSize, newWidth), self.view.bounds.width)
        // UIView animation initialSpringVelocity requires a velocity based on the total distance traveled during the animation
        velocity = panVelocity.x / translation.x
    }

    UIView.animate(withDuration: 0.5,
                   delay: 0,
                   usingSpringWithDamping: 1,
                   initialSpringVelocity: velocity,
                   options: [.curveEaseInOut],
                   animations: {
                       self.view.layoutIfNeeded()
                   },
                   completion: nil)

    // Reset translation
    gesture.setTranslation(CGPoint.zero, in: resizableView)

default:
    break
}
Hope that helps.

Keeping SKSpriteNode in bounds of screen

I am trying to check whether my SKSpriteNode will remain in bounds of the screen during a drag gesture. I've gotten to the point where I am pretty sure my logic for approaching the problem is right, but my implementation is wrong. Basically, before the player moves by the translation, the program checks to see whether it's in bounds. Here is my code:
- (CGPoint)checkBounds:(CGPoint)newLocation {
    CGSize screenSize = self.size;
    CGPoint returnValue = newLocation;
    if (newLocation.x <= self.player.position.x) {
        returnValue.x = MIN(returnValue.x, 0);
    } else {
        returnValue.x = MAX(returnValue.x, screenSize.width);
    }
    if (newLocation.y <= self.player.position.x) {
        returnValue.y = MIN(-returnValue.y, 0);
    } else {
        returnValue.y = MAX(returnValue.y, screenSize.height);
    }
    NSLog(@"%@", NSStringFromCGPoint(returnValue));
    return returnValue;
}

- (void)dragPlayer:(UIPanGestureRecognizer *)gesture {
    CGPoint translation = [gesture translationInView:self.view];
    CGPoint newLocation = CGPointMake(self.player.position.x + translation.x, self.player.position.y - translation.y);
    self.player.position = [self checkBounds:newLocation];
}
For some reason, my player is going off screen. I think my use of the MIN & MAX macros may be wrong, but I am not sure.
Exactly, you mixed up MIN/MAX. The line MIN(x, 0) will return the lower value of x or 0, meaning the result will be 0 or less.
On one line you're using -returnValue.y, which makes no sense.
You can (and should, for readability) omit the if/else entirely, because MIN/MAX, used correctly, make the if/else unnecessary here (see the sketch below).
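For illustration, here is a Swift sketch of the clamp the answer describes, assuming the node's position and the screen size share the scene's coordinate space:
import UIKit

// Clamp the proposed location so the node stays on screen.
// min/max replace the if/else, as suggested above.
func clampedPosition(_ newLocation: CGPoint, in screenSize: CGSize) -> CGPoint {
    return CGPoint(x: min(max(newLocation.x, 0), screenSize.width),
                   y: min(max(newLocation.y, 0), screenSize.height))
}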

How to perform a flip animation between two views by dragging the finger?

I want to perform a flip animation when the user drags a finger from the right side of the screen. The state of the animation should be driven by the length of the drag; it shouldn't run automatically.
I used something like this:
if (transitionBegan) {
flipTransition = CATransform3DIdentity;
flipTransition.m34 = 1.0 / -500;
flipTransition = CATransform3DRotate(flipTransition, degree * M_PI / 180.0f,0.0f, 1.0f, 0.0f);
self.view.layer.transform = flipTransition;
}
But now I don't know how to implement the transition between my views so that view A disappears and view B appears.
Can you help me?
You can write a pan gesture handler that does the transform for you. For example, assuming you're able to flip both left and right, you could do something like the following. The idea is that you transform the current view while you're less than halfway through the flip, and the next view once you're more than halfway through it:
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    static UIView *currentView;
    static UIView *previousView;
    static UIView *nextView;

    if (gesture.state == UIGestureRecognizerStateBegan)
    {
        // Set the three view variables here, based upon the logic of your app.
        // If there is no `previousView` or `nextView`, then set them to `nil`
        // as appropriate.

        // I happen to be choosing views for child view controllers for my
        // custom container, but I'll spare you that in case you're not using
        // custom container controller.
    }

    // lets set the "percent" rotated as the percent across the screen the user's
    // finger has travelled
    CGPoint translation = [gesture translationInView:gesture.view.superview];
    CGFloat percent = translation.x / gesture.view.frame.size.width;
    CGFloat rotationPercent = percent;

    // let's use the var to keep track of which view will be rotated
    UIView *viewToTransform = nil;

    if (percent < -0.5 && nextView)
    {
        // if user has moved finger more than half way across the screen to
        // the left, and there is a `nextView`, then we're showing the second
        // half of flip to the next screen
        currentView.hidden = YES;
        nextView.hidden = NO;
        previousView.hidden = YES;
        rotationPercent += 1.0;
        viewToTransform = nextView;
    }
    else if (percent > 0.5 && previousView)
    {
        // if user has moved finger more than half way across the screen to
        // the right, and there is a `previousView`, then we're showing the second
        // half of flip to the previous screen
        currentView.hidden = YES;
        nextView.hidden = YES;
        previousView.hidden = NO;
        rotationPercent -= 1.0;
        viewToTransform = previousView;
    }
    else if ((percent < 0 && nextView) || (percent > 0 && previousView))
    {
        // otherwise we're in the first half of the flip animation, so we're
        // showing the `currentView`
        currentView.hidden = NO;
        nextView.hidden = YES;
        previousView.hidden = YES;
        viewToTransform = currentView;
    }

    // do the flip `transform`
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = 1.0 / -800;
    viewToTransform.layer.transform = CATransform3DRotate(transform, M_PI * rotationPercent, 0.0, 1.0, 0.0);

    // if we're all done, let's animate the completion (or if we didn't move far enough,
    // the reversal) of the pan gesture
    if (gesture.state == UIGestureRecognizerStateEnded ||
        gesture.state == UIGestureRecognizerStateCancelled ||
        gesture.state == UIGestureRecognizerStateFailed)
    {
        // I'm personally using an index of my custom container child views, so I'm
        // just updating my index appropriately; your logic may obviously differ
        // here.
        if (percent < -0.5 && nextView)
            self.currentChildIndex++;
        else if (percent > 0.5 && previousView)
            self.currentChildIndex--;

        // and animate the completion of the flip animation
        [UIView animateWithDuration:0.25
                              delay:0.0
                            options:UIViewAnimationOptionCurveEaseInOut
                         animations:^{
                             previousView.transform = CGAffineTransformIdentity;
                             currentView.transform = CGAffineTransformIdentity;
                             nextView.transform = CGAffineTransformIdentity;
                         }
                         completion:NULL];
    }
}
You should be able to do it with the help of a UIPanGestureRecognizer (a gesture recognizer that listens for finger dragging); you'll be able to get the length of the pan and, from there, calculate your CATransform3D-based translations and scalings following the progress of the pan.
(The built-in animations are not useful here; you have to make some use of Core Animation. It's fun, I can tell you ;-))
Use the UIView animateWithDuration:animations:completion: method to animate the current view up to the transition point; then, in the completion block, remove the current view, add the new view, and start another, similar animation on the new view.
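Here is a rough Swift sketch of that two-step approach; viewA, viewB, the container, and the durations are placeholders, and the perspective value mirrors the 1.0 / -500 used in the question:
import UIKit

// First half: rotate view A to 90°, then swap in view B at -90° and finish the flip.
func flip(from viewA: UIView, to viewB: UIView, in container: UIView) {
    var perspective = CATransform3DIdentity
    perspective.m34 = -1.0 / 500.0

    UIView.animate(withDuration: 0.25, animations: {
        viewA.layer.transform = CATransform3DRotate(perspective, .pi / 2, 0, 1, 0)
    }, completion: { _ in
        viewA.removeFromSuperview()
        container.addSubview(viewB)
        viewB.layer.transform = CATransform3DRotate(perspective, -.pi / 2, 0, 1, 0)
        UIView.animate(withDuration: 0.25) {
            viewB.layer.transform = CATransform3DIdentity
        }
    })
}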
