Animation completion block running too soon - iOS

I am trying to move a plane along a path as the user defines it, similar to the controls in Flight Control. I can draw the path, albeit inefficiently, but I cannot get the plane to animate along it smoothly.
I originally tried to move the plane by changing its position and relying on the implicit animation, but since that does not let me alter the speed of the animation, the plane behaves badly.
I am now trying to use an animation block recursively, but the completion block is called far too soon, so the plane just skips ahead along the path being drawn.
This is the setup code:
- (CALayer *)plane {
    if (!_plane) {
        _plane = [CALayer layer];
        UIImage *planeImage = [UIImage imageNamed:@"airplane.png"];
        _plane.bounds = CGRectMake(20.0, 20.0, planeImage.size.width, planeImage.size.height);
        _plane.contents = (id)(planeImage.CGImage);
        [self.layer addSublayer:_plane];
    }
    return _plane;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (isDrawingPath) {
        [self.trackPath addLineToPoint:[[touches anyObject] locationInView:self]];
        NSLog(@"%@", self.trackPath);
        UITouch *touch = [touches anyObject];
        CGPoint toPoint = [touch locationInView:self];
        // now, save each point in order to make the path
        [self.points addObject:[NSValue valueWithCGPoint:toPoint]];
        [self setNeedsDisplay];
        [self goToPointWithIndex:0];
    }
    isDrawingPath = NO;
}
This implementation works, but badly. The plane follows the path, but the motion is choppy:
- (void)goToPointWithIndex:(NSNumber *)indexer {
    int toIndex = [indexer intValue];
    if (toIndex < self.points.count) {
        // extract the value from the array
        CGPoint toPoint = [(NSValue *)[self.points objectAtIndex:toIndex] CGPointValue];
        CGPoint pos = self.plane.position;
        float delay = PLANE_SPEED * (sqrt( pow(toPoint.x - pos.x, 2) + pow(toPoint.y - pos.y, 2)));
        self.plane.position = toPoint;
        // Allows animation to continue running
        if (toIndex < self.points.count - 1) {
            toIndex++;
        }
        //float delay = 0.2;
        NSLog(@"%f", delay);
        // repeat the method with a new index;
        // it stops repeating as soon as the "if" above is FALSE
        NSLog(@"index: %d, x: %f, y: %f", toIndex, toPoint.x, toPoint.y);
        [self performSelector:@selector(goToPointWithIndex:) withObject:[NSNumber numberWithInt:toIndex] afterDelay:delay];
    }
}
This is what I am trying to do with blocks. It just skips to the end of the drawn path instead of following the entire thing.
- (void)goToPointWithIndex:(int)toIndex {
    if (self.resetPath) return;
    // extract the value from the array
    if (toIndex < self.points.count) {
        CGPoint toPoint = [(NSValue *)[self.points objectAtIndex:toIndex] CGPointValue];
        NSLog(@"index: %d, x: %f, y: %f", toIndex, toPoint.x, toPoint.y);
        CGPoint pos = self.plane.position;
        //float delay = PLANE_SPEED * (sqrt( pow(toPoint.x - pos.x, 2) + pow(toPoint.y - pos.y, 2)));
        // Allows animation to continue running
        if (toIndex < self.points.count - 1) {
            toIndex++;
        }
        [UIView animateWithDuration:0.2
                         animations:^{
                             self.plane.position = toPoint;
                         }
                         completion:^(BOOL finished) {
                             if (finished) {
                                 NSLog(@"Complete");
                                 [self goToPointWithIndex:toIndex];
                             }
                         }];
    }
}
I have no idea where I'm going wrong. I'm new to Objective-C and blocks. Is the completion block supposed to run right after the animation begins? It doesn't make sense to me, but it's the only explanation I can find.
EDIT: I ended up making the plane a UIView so I could still use block animations. The CALayer transactions did not like my recursion very much at all.

Since here self.plane is a custom CALayer (as opposed to the layer backing a UIView), it will not respect the parameters of UIView animateWithDuration:
From the docs:
Custom layer objects ignore view-based animation block parameters and use the default Core Animation parameters instead.
Instead use a CABasicAnimation and CALayer's addAnimation:forKey: method (and probably in this case a CAAnimationGroup). How to animate CALayers like this is described in the docs under "Animating Layer Content."
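For illustration, a minimal sketch (my example, not the poster's code; it assumes the drawn points were collected into the question's self.trackPath): a CAKeyframeAnimation, the keyframe relative of CABasicAnimation, can follow a CGPath directly.
CAKeyframeAnimation *follow = [CAKeyframeAnimation animationWithKeyPath:@"position"];
follow.path = self.trackPath.CGPath;        // the path the user drew
follow.duration = 4.0;                      // arbitrary; derive from PLANE_SPEED instead
follow.calculationMode = kCAAnimationPaced; // constant speed along the path
[self.plane addAnimation:follow forKey:@"followPath"];
kCAAnimationPaced keeps the velocity constant along the whole path, which also addresses the original speed complaint.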

Is there some reason you're only overriding touchesEnded? That way you will only pick up one point for any finger path across the screen. It seems to me you should override touchesMoved, and maybe touchesBegan too, and do more or less the same thing as in touchesEnded; a sketch follows.
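A hedged sketch of that, reusing the question's identifiers (trackPath, points, isDrawingPath):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (isDrawingPath) {
        CGPoint p = [[touches anyObject] locationInView:self];
        [self.trackPath addLineToPoint:p];
        [self.points addObject:[NSValue valueWithCGPoint:p]];
        [self setNeedsDisplay];
    }
}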

Related

Hit Test to determine which CGPoint was closest to touch - 16 CGPoint dots in 1 UIView

I've been modifying this custom UIView by LMinh called LMGaugeView in order to make it look like a 16-vial circular "vial carousel".
Imagine 16 dots (CGPoints) evenly dispersed around the edge of the circle (the UIView). I want to handle the following scenario:
The picture shows 10 vials, but you get the idea. As soon as I touch the circle view, I want to be able to determine which "vial" I tapped based on their CGPoint value alone.
I created an app (called Twinstones, just to throw that out there) that required the hitTest:withEvent: method, but I was dealing with 2 subviews that could be touched (within the frame of their superview.)
For this, the circle is the only view (which means the hitTest:withEvent: will only return the circle view every time I come in contact with it.)
Here's that hitTest:... implementation:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if (!self.isUserInteractionEnabled || self.isHidden || self.alpha <= 0.01) {
        return nil;
    }
    CGRect touchRect = CGRectInset(self.bounds, -14, -14);
    if (CGRectContainsPoint(touchRect, point)) {
        for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
            CGPoint convertedPoint = [subview convertPoint:point fromView:self];
            UIView *hitTestView = [subview hitTest:convertedPoint withEvent:event];
            if (hitTestView) {
                return hitTestView;
            }
        }
        return self;
    }
    return nil;
}
Is there another hitTest-related method I need to use to get this to work? If you need to see more code, let me know.
The Pythagorean theorem is useful here. You can take the point where your user touched the screen, compute the squared distance to each vial with map(), and find the smallest value:
let p1 = ... // where your user touched the view
let vialDistances = vials.map { p2 -> CGFloat in // `vials` is your array of vial positions
    let diffX = p1.x - p2.x
    let diffY = p1.y - p2.y
    return diffX * diffX + diffY * diffY // squared distance is enough for comparison
}
if let minDistance = vialDistances.min(),
   let index = vialDistances.firstIndex(of: minDistance) {
    let closestVial = vials[index]
}

Drawing a self-erasing path with CGContextRef

I would like to draw a "disappearing stroke" on a UIImageView, which follows a touch event and self-erases after a fixed time delay. Here's what I have in my ViewController.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    CGPoint lp = lastPoint;
    UIColor *color = [UIColor blackColor];
    [self drawLine:5 from:lastPoint to:currentPoint color:color blend:kCGBlendModeNormal];
    double delayInSeconds = 1.0;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self drawLine:brush from:lp to:currentPoint color:[UIColor clearColor] blend:kCGBlendModeClear];
    });
    lastPoint = currentPoint;
}
- (void)drawLine:(CGFloat)width from:(CGPoint)from to:(CGPoint)to color:(UIColor *)color blend:(CGBlendMode)mode {
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.tempDrawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context, to.x, to.y);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, width);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);
    CGContextSetBlendMode(context, mode);
    CGContextStrokePath(context);
    self.tempDrawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    [self.tempDrawImage setAlpha:1];
    UIGraphicsEndImageContext();
}
The draw phase works nicely, but there are a couple of problems with the subsequent erase phase.
First, while the line "fill" is correctly cleared, a thin stroke around the path remains.
Second, the erase phase is choppy, nowhere near as smooth as the drawing phase. My best guess is that this is due to the cost of UIGraphicsBeginImageContext run inside dispatch_after.
Is there a better approach to drawing a self-erasing line?
BONUS: What I'd really like is for the path to "shrink and vanish." In other words, after the delay, rather than just clearing the stroked path, I'd like to have it shrink from 5pt to 0pt while simultaneously fading out the opacity.
I would just let the view redraw continuously at 60 Hz, and each time draw the entire line using points stored in an array. That way, if you remove the oldest points from the array, they will no longer be drawn.
To hook your view up to the display refresh rate (60 Hz), try this:
displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
Store an age property along with each point, then just loop over the array and remove points which are older than your threshold.
e.g.
@interface AgingPoint : NSObject
@property CGPoint point;
@property NSTimeInterval birthdate;
@end

// ..... later, in the draw call
NSTimeInterval now = CACurrentMediaTime();
AgingPoint *p = [AgingPoint new];
p.point = touchlocation; // get yr touch
p.birthdate = now;
// remove old points
while (myPoints.count && now - [myPoints[0] birthdate] > 1)
{
    [myPoints removeObjectAtIndex:0];
}
[myPoints addObject:p];
if (myPoints.count < 2)
    return;
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:[myPoints[0] point]];
for (int i = 1; i < myPoints.count; i++)
{
    [path addLineToPoint:[myPoints[i] point]];
}
[path stroke];
So on each draw call, make a new Bezier path, move to the first point, then add lines to all the other points. Finally, stroke the path.
To implement the "shrinking" line, you could draw just short lines between consecutive pairs of points in your array, and use the age property to calculate stroke width. This is not perfect, as the individual segments will have the same width at start and end point, but it's a start.
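A sketch of that idea (mine, not the answer's code; the 5pt starting width and 1-second lifetime are assumptions matching the question's bonus):
// width shrinks linearly from 5pt at birth to 0pt at age 1 second
NSTimeInterval age = now - [myPoints[i] birthdate];
CGFloat width = MAX(0.0, 5.0 * (1.0 - age));
UIBezierPath *segment = [UIBezierPath bezierPath];
[segment moveToPoint:[myPoints[i - 1] point]];
[segment addLineToPoint:[myPoints[i] point]];
segment.lineWidth = width;
[[[UIColor blackColor] colorWithAlphaComponent:width / 5.0] setStroke]; // fade out too
[segment stroke];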
Important: if you are going to draw a lot of points, performance will become an issue. This kind of path rendering with Quartz is not tuned to be fast; in fact, it is very slow.
Cocoa arrays and objects are not particularly fast either.
If you run into performance issues and you want to continue this project, look into OpenGL rendering. You will be able to have this run a lot faster with plain C structs pushed into your GPU.
There were a lot of great answers here. I think the ideal solution is to use OpenGL, as it'll inevitably be the most performant and provide the most flexibility in terms of sprites, trails, and other interesting visual effects.
My application is a remote controller of sorts, designed to simply provide a small visual aid to track motion, rather than leave persistent or high fidelity strokes. As such, I ended up creating a simple subclass of UIView which uses CoreGraphics to draw a UIBezierPath. I'll eventually replace this quick-fix solution with an OpenGL solution.
The implementation I used is far from perfect, as it leaves behind white paths which interfere with future strokes, until the user lifts their touch, which resets the canvas. I've posted the solution I used here, in case anyone might find it helpful.

Converting Points to Node in Sprite-Kit

I've been working on this problem for a few days now and I just can't seem to figure it out. I've done a ton of searching for an answer, and I've seen hints that maybe the problem is with Sprite-Kit itself, so I am debating moving to Cocos2D and starting over. But, I hope that someone here can help me.
I have a basic camera node called _world that I am using to pan around the world and to zoom. The panning and zooming works fine, but I've been trying to get the world node to move to the center of where the pinch occurs. It sort of works, but converting the point to a position in the world node seems to be the problem. Here is my code:
I use this code to convert a scene point to a world point elsewhere in the code and it works fine:
CGPoint positionInScene = [touch locationInNode:self];
CGPoint locationInWorld = [self.scene convertPoint:positionInScene toNode:_world];
But later I try to do this using the middle point of the two points found in the pinch, and it doesn't seem to convert properly:
originalLocationOne = [sender locationOfTouch:0 inView:self.view];
originalLocationTwo = [sender locationOfTouch:1 inView:self.view];
//I do this because SpriteKit doesn't have a ccpAdd method
originalAddedMidPoint = CGPointMake(originalLocationOne.x + originalLocationTwo.x, originalLocationOne.y + originalLocationTwo.y);
//same thing here but for ccpMidPoint
originalMidPoint = CGPointMake(originalAddedMidPoint.x * 0.5f, originalAddedMidPoint.y * 0.5f);
_newWorldPos = [self convertPoint:originalMidPoint toNode:_world];
I would really appreciate it if someone can point me in the right direction! Thank you so much!
EDIT: I've been working on this problem some more, and certainly something weird is happening but I still can't tell what.
Here is my complete touchesBegan method:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    _myTouches = [[NSMutableArray alloc] init];
    for (UITouch *touch in touches) {
        [_myTouches addObject:touch];
    }
    if (_myTouches.count > 1) {
        UITouch *touch1 = [_myTouches objectAtIndex:0];
        UITouch *touch2 = [_myTouches objectAtIndex:1];
        CGPoint touch1Location = [touch1 locationInView:self.view];
        CGPoint touch2Location = [touch2 locationInView:self.view];
        NSLog(@"tpoint1 = %@, tpoint2 = %@", NSStringFromCGPoint(touch1Location), NSStringFromCGPoint(touch2Location));
        CGPoint touchAddedPoint = CGPointMake(touch1Location.x + touch2Location.x, touch1Location.y + touch2Location.y);
        CGPoint touchMidPoint = CGPointMake(touchAddedPoint.x * 0.5f, touchAddedPoint.y * 0.5f);
        NSLog(@"touch mid point = %@", NSStringFromCGPoint(touchMidPoint));
        CGPoint newWorldPoint = [self convertTouchPointToWorld:touchMidPoint];
        // camera needs to be offset to work properly
        CGPoint alteredWorldPoint = CGPointMake(-newWorldPoint.x * 0.75f, -newWorldPoint.y * 0.75f);
        _firstTouch = alteredWorldPoint;
        _tempWorldLocation = _firstTouch;
        _worldMovedForUpdate = YES;
    }
}
Here is the method that I extracted to convert the position to the world node:
-(CGPoint)convertTouchPointToWorld:(CGPoint)touchLocation {
    CGPoint firstLocationInWorld = [self.scene convertPoint:touchLocation toNode:_world];
    NSLog(@"inside converting method %@", NSStringFromCGPoint(firstLocationInWorld));
    return firstLocationInWorld;
}
Here is the entire method that places a building in the game map based on its position in the world map:
-(void)selectNodeForTouch:(CGPoint)touchLocation {
    SKSpriteNode *touchedNode = (SKSpriteNode *)[self nodeAtPoint:touchLocation];
    //NSLog(@"node name is = %@", touchedNode.name);
    _selectedNode = touchedNode;
    if ([[touchedNode name] isEqualToString:@"hudswitch1"]) {
        if ([self.theGame getMovesLeft] == 0) {
            _hudNeedsUpdate = YES;
            _turnNeedsUpdating = YES;
        }
    }
    if ([self.theGame getMovesLeft] > 0) {
        [self handleButtonsForTouch:touchedNode];
    }
    // this inserts the tile at the location in the world node
    NSLog(@"touchLocation.x = %f, touchLocation.y = %f", touchLocation.x, touchLocation.y);
    if ([[touchedNode name] isEqualToString:@"tile"] && _selectedBuildingType != 0) {
        //CGPoint locationInWorld = [self.scene convertPoint:touchLocation toNode:_world];
        CGPoint locationInWorld = [self convertTouchPointToWorld:touchLocation];
        CGPoint gameLocation = [self convertWorldPointToGamePoint:locationInWorld];
        NSLog(@"locationWorld.x = %f, locationWorld.y = %f", locationInWorld.x, locationInWorld.y);
        if (![self.theGame isBuildingThere:gameLocation] && [self.theGame isValidLocation:gameLocation]) {
            [self updateActiveTilePos:locationInWorld];
        }
    }
}
Inserting a tile into the map works fine. It gets the world location and then divides it by the tile size to figure out the position to place the tile in the world map. It is always inserting a tile in the proper place.
However when I use the middle point of the two touches in a pinch and convert it to a world point, it returns a value but the value is off by a large amount, and different amounts depending on the distances from the center...
Maybe I just need to figure out how to offset the camera properly? Thanks again for any help with this problem, it is driving me crazy!
I had the same problem.
It is counterintuitive, but point conversion is not smart: it does not convert directly from one node to another. It only converts between a node's coordinate space and the scene's.
So convert the point from the source node (into scene coordinates), then convert it to the destination node. After those two calls the coordinates are in the node you need.
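A minimal sketch of the two-step conversion (my example; scene, nodeA, nodeB, and pointInNodeA are placeholders):
// convert from nodeA's space into scene space, then into nodeB's space
CGPoint scenePoint = [scene convertPoint:pointInNodeA fromNode:nodeA];
CGPoint pointInNodeB = [scene convertPoint:scenePoint toNode:nodeB];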

How would I get an image to appear at specific coordinates in iOS based on user input?

In Xcode, how would I make an image appear in a certain place on the iPhone screen by touching the place where I want it to appear? The approach I'm currently trying is to get the coordinates of the touch point and then make a PNG file appear at that point. Here is what I have so far:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *theTouch = [touches anyObject];
    startPoint = [theTouch locationInView:self.view];
    CGFloat x = startPoint.x;
    CGFloat y = startPoint.y;
}
But now that I have the points, I'm not sure how to get an image to appear there or if there might be a more efficient method. Any ideas?
You can use -[UIImage drawAtPoint:].
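For instance (a sketch, assuming a custom view with hypothetical pinImage and touchPoint properties saved in touchesBegan):
- (void)drawRect:(CGRect)rect {
    // draws the image with its top-left corner at the touch point
    [self.pinImage drawAtPoint:self.touchPoint];
}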
After getting the x,y coordinates of the touch event, set the center point of your UIImageView to it:
UIImageView *myImage = ...
CGPoint myPoint = CGPointMake(x_touch, y_touch);
[myImage setCenter:myPoint];
You can also animate this change by using an animation block on the image view, like so:
NSTimeInterval myAnimationDuration = 1.0;
[UIView animateWithDuration:myAnimationDuration animations:^{
    [myImage setCenter:myPoint];
}
completion:^(BOOL finished){
    // handle completion tasks
}];

iPhone - Move object along touch and drag path

I am developing a simple animation in which a UIImageView moves along a UIBezierPath. Now I want to provide user interaction to the moving UIImageView, so that the user can guide it by touching and dragging it around the screen.
Change the frame of the image view in touchesMoved:withEvent:.
Edit: Some code
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch.view isEqual:self.view] || touch.view == nil) {
        return;
    }
    lastLocation = [touch locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch.view isEqual:self.view]) {
        return;
    }
    CGPoint location = [touch locationInView:self.view];
    CGFloat xDisplacement = location.x - lastLocation.x;
    CGFloat yDisplacement = location.y - lastLocation.y;
    CGRect frame = touch.view.frame;
    frame.origin.x += xDisplacement;
    frame.origin.y += yDisplacement;
    touch.view.frame = frame;
    lastLocation = location;
}
You should also implement touchesEnded:withEvent: and touchesCancelled:withEvent:; a sketch follows.
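A hedged sketch of those two (they can often mirror each other; here the only state to reset is the tracked location):
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    lastLocation = CGPointZero; // or whatever "not tracking" sentinel fits your code
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [self touchesEnded:touches withEvent:event];
}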
So you want the user to be able to touch an image in the middle of a keyframe animation along a curved path, and drag it to a different location? What do you want to happen to the animation at that point?
You have multiple challenges.
First is detecting the touch on the object while a keyframe animation is "in flight".
To do that, you want to use the parent view's layer's presentation layer's hitTest method.
A layer's presentation layer represents the state of the layer at any given instant, including animations.
Once you detect touches on your view, you will need to get the image's current location from the presentation layer, stop the animation, and take over with a touchesMoved/touchesDragged based animation.
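A sketch of the hit-test step (my example; planeLayer is a placeholder for the animated layer, and hitTest: expects a point in the receiver's superlayer coordinate space):
CGPoint touchPoint = [touch locationInView:self.superview];
CALayer *hit = [self.layer.presentationLayer hitTest:touchPoint];
// hitTest on a presentation layer returns presentation copies,
// so compare against the model layer to identify the animated layer
if ([hit.modelLayer isEqual:planeLayer]) {
    // stop the animation and take over with touch-driven updates
}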
I wrote a demo application that shows how to detect touches on an object that's being animated along a path. That would be a good starting point.
Take a look here:
Core Animation demo including detecting touches on a view while an animation is "in flight".
The easiest way would be to subclass UIImageView.
For simple dragging take a look at the code here (code borrowed from user MHC):
UIView drag (image and text)
Since you want to drag along a Bezier path, you'll have to modify touchesMoved:.
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    // here you have the location of the user's finger
    CGPoint location = [aTouch locationInView:self.superview];
    [UIView beginAnimations:@"Dragging A DraggableView" context:nil];
    // the commented code would simply move the view to that point
    //self.frame = CGRectMake(location.x - offset.x, location.y - offset.y, self.frame.size.width, self.frame.size.height);
    // you need some kind of a function
    CGPoint calculatedPosition = [self calculatePositionForPoint:location];
    self.frame = CGRectMake(calculatedPosition.x, calculatedPosition.y, self.frame.size.width, self.frame.size.height);
    [UIView commitAnimations];
}
What exactly you do in -(CGPoint)calculatePositionForPoint:(CGPoint)location is up to you. You could, for example, calculate the point on the Bezier path that is closest to location. For a simple test you can do:
-(CGPoint)calculatePositionForPoint:(CGPoint)location {
    return location;
}
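One possible shape for that function (my sketch; sampledPathPoints is a hypothetical array of NSValue-wrapped points pre-sampled along the Bezier path): approximate the curve by sampled points and snap to the nearest one.
-(CGPoint)calculatePositionForPoint:(CGPoint)location {
    CGPoint best = location;
    CGFloat bestDistance = CGFLOAT_MAX;
    for (NSValue *value in self.sampledPathPoints) { // hypothetical property
        CGPoint p = [value CGPointValue];
        CGFloat d = hypot(p.x - location.x, p.y - location.y);
        if (d < bestDistance) {
            bestDistance = d;
            best = p;
        }
    }
    return best;
}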
Along the way you're going to have to decide what happens if the user wanders off too far from your precalculated Bezier path.
