I have a UIPanGestureRecognizer attached to a parent view, with various CCSprites I want to move around in the parent when panned. Using [gesture locationOfTouch:i inView:recognizer.view] I can get the location of each touch, but if I assign that to my subview's center, the subview often jumps unexpectedly, since the original touch is probably not at the exact center of the sprite. What I really want is [gesture translationInView:recognizer.view] for each of the touch locations. That works perfectly when there is only one panning touch, but with more than one there appears to be no way to get per-touch translations, and each touch can be panning in a different direction and at a different speed. The user can use two fingers to move two different sprites completely independently of each other. -[UIPanGestureRecognizer translationInView:] doesn't let me get the separate translations.
How should I do this?
I found a category, CCNode-SFGestureRecognizers, that adds the ability to attach UIGestureRecognizers to any CCNode. This lets me get around needing multiple translation values.
Now, if you're only interested in the pan gesture, why not use the Cocos touch methods?
In touchBegan:, calculate (and save) the location of the touch in the subview; in touchMoved:, calculate the translation, which is just the current location less the previous location, and move your subview by that amount.
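This is not the asker's cocos2d code, but a minimal UIKit sketch of the same per-touch idea, assuming the draggable subviews live in a hypothetical sprites array; UITouch's previousLocationInView: gives the same delta without manual bookkeeping, and cocos2d's touch callbacks follow the same pattern:

import UIKit

class SpriteCanvasView: UIView {

    var sprites = [UIView]()                    // hypothetical: the draggable subviews
    private var grabbed = [UITouch: UIView]()   // which sprite each finger grabbed

    // Remember to set multipleTouchEnabled = true on this view.
    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
        for touch in touches {
            let point = touch.locationInView(self)
            // Grab the topmost sprite under this finger, if any.
            if let sprite = sprites.filter({ CGRectContainsPoint($0.frame, point) }).last {
                grabbed[touch] = sprite
            }
        }
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
        for touch in touches {
            guard let sprite = grabbed[touch] else { continue }
            // Per-touch translation: current location less the previous one.
            let location = touch.locationInView(self)
            let previous = touch.previousLocationInView(self)
            sprite.center.x += location.x - previous.x
            sprite.center.y += location.y - previous.y
        }
    }

    override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
        for touch in touches { grabbed[touch] = nil }
    }

    override func touchesCancelled(touches: Set<UITouch>?, withEvent event: UIEvent?) {
        touches?.forEach { grabbed[$0] = nil }
    }
}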
I have an array of UIImageViews laid out on the main view (as subviews) in a matrix.
The UIImageViews react when I tap on them; each works like a pixel, so when I touch one it turns on (changes from black to green).
But I want to do it with a swipe gesture, so that with one swipe I can trigger more than one "pixel" (UIImageView).
I found this for Android: triggering-multiple-buttonsonclick-event-with-one-swipe-gesture
I wonder if there is something like that in iOS with Swift that recognises a general touch (not a tap or swipe) so I can look for it.
The main purpose of all of this is to draw "shapes" on a matrix of pixels with one swipe gesture.
If there is another way that you think will help, I will be happy to hear about it.
Many thanks
You are looking for UIGestureRecognizer.
With this you can add many types of gestures, such as swipe, tap, etc.
You can also get the position, duration, and basically all the information about the gesture.
You can check a step-by-step tutorial at this link:
http://www.raywenderlich.com/76020/using-uigesturerecognizer-with-swift-tutorial
And also the Apple documentation:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIGestureRecognizer_Class/
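For the "paint several pixels with one swipe" goal, a pan recognizer is a good fit, since it reports a continuous stream of locations as the finger moves, unlike a discrete swipe. A minimal sketch, assuming a hypothetical PixelGridViewController:

import UIKit

class PixelGridViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: "handlePan:")
        view.addGestureRecognizer(pan)
    }

    func handlePan(gesture: UIPanGestureRecognizer) {
        // Fires repeatedly while the finger moves, with the current position.
        let location = gesture.locationInView(view)
        // Hit-test `location` against the pixel UIImageViews here.
        print(location)
    }
}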
I managed to implement the swipe action using touchesMoved and touchesEnded.
The main idea is to activate the UIImageViews using the coordinates of the touch, comparing them to the UIImageViews' coordinates in the touchesMoved function,
using flags to disable already-edited UIImageViews while the same touch session is in progress (the finger is still on the screen), and resetting the UIImageViews to be editable again in touchesEnded.
func swipeTouches(touches: NSSet!) {
    // Get the first touch and its location in this view controller's view coordinate system
    let touch = touches.allObjects[0] as! UITouch
    let touchLocation = touch.locationInView(self.view)
    for pixel in pixelArrays {
        // Convert the pixel view's frame to this view controller's view coordinate system
        let pixelViewFrame = self.view.convertRect(pixel.pixelImage.frame, fromView: pixel.pixelImage.superview)
        // Check if the touch is inside the pixel view
        if CGRectContainsPoint(pixelViewFrame, touchLocation) {
            // Update the pixel unless it was already edited in this touch session
            if !pixel.isEditable {
                let index = pixel.index
                pixelArrays.insert(updatePixel(index), atIndex: index)
            }
        }
    }
}
The only problem I have now is that if the swipe begins on one of the UIImageViews, the touchesMoved function treats that view as the one to use for coordinates, and the other UIImageViews are not affected.
My idea for solving it is to add a layer on top of all the UIImageViews, disable the tap recognition they already have, and implement the tap with the coordinates approach as well.
I will be happy to hear if there is another way to do it.
Update:
I managed to solve the problem above roughly as I described, but instead of adding another layer I disabled touch handling on all of the UIImageViews and activate them using the coordinates of the touch.
Many thanks
I'm adding SKNodes to the scene and logging touch events on each individual SKNode. I can add as many as I want and the touches work as expected: I see the log messages only if I touch the node, and only for the visible node. Now, if I add another SKShapeNode as a child of any of the previous SKNodes, the touch area expands to more of a rectangle, and I see the log message even if I touch outside the original SKNode.
The first picture shows the original touch area of the SKNode, and the second the new touch area after adding a child SKShapeNode to that SKNode. The SKShapeNode being added is 20x20, so it fits within the 20x100 bar.
The problem is that I now get multiple touch events when touching the other bars, since the touch areas overlap. Is there any way around this?
You appear to be using a separate graphic for each angle of your line. Instead, try using the same graphic with your line at 0 degrees and then using zRotation to angle it. I have not tried this myself, but I think it will fix your issue.
Alternatively, try using containsPoint for your touch recognition in the touchesBegan method. You can check whether the touch is within any of the nodes and process it accordingly.
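A minimal sketch of the containsPoint approach, assuming the bar nodes are kept in a hypothetical bars array and touch handling happens in the scene; this way you hit-test each bar yourself instead of relying on which node the touch event was delivered to:

import SpriteKit

class BarsScene: SKScene {

    var bars = [SKSpriteNode]()   // hypothetical: the touchable bar nodes

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
        guard let touch = touches.first else { return }
        // containsPoint: expects a point in the node's parent's coordinate
        // system; the bars are children of the scene, so scene coordinates fit.
        let location = touch.locationInNode(self)
        for bar in bars where bar.containsPoint(location) {
            let name = bar.name ?? "unnamed"
            print("touched bar \(name)")
        }
    }
}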
I am using a UIPanGestureRecognizer to implement drag and drop. When the drag starts I need to identify the object being dragged. However, the objects are relatively small, and if the user doesn't hit an object right in its centre, it isn't dragged.
The problem is that when the gesture handler is first called with the state UIGestureRecognizerStateBegan, the finger has already moved several pixels, and so [UIPanGestureRecognizer locationInView:] returns that point, which is not where the gesture truly started. That makes sense as it can only recognize a pan after a few pixels of movement. However, I need the absolute start of the gesture, not the position after the gesture has first been recognized.
I'm thinking that maybe I need to implement a tap gesture recognizer as well, purely to capture the first touch. But that seems like a hack for what is not an unusual requirement. Is there no other way of getting that first touch from within the pan gesture recognizer?
The UIGestureRecognizerDelegate protocol provides the methods gestureRecognizerShouldBegin: and gestureRecognizer:shouldReceiveTouch:, which let you evaluate the touches before the pan has transitioned to UIGestureRecognizerStateBegan.
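A sketch of that approach, assuming a hypothetical DragHandler object. gestureRecognizer:shouldReceiveTouch: fires on the initial touch, before the pan has moved the few pixels it needs to begin, so the location recorded there is the true start of the gesture:

import UIKit

class DragHandler: NSObject, UIGestureRecognizerDelegate {

    var gestureStartPoint = CGPointZero   // the true first touch location

    func attachToView(view: UIView) {
        let pan = UIPanGestureRecognizer(target: self, action: "handlePan:")
        pan.delegate = self
        view.addGestureRecognizer(pan)
    }

    // Called before the recognizer begins, while the finger is still
    // at its original position.
    func gestureRecognizer(gestureRecognizer: UIGestureRecognizer,
                           shouldReceiveTouch touch: UITouch) -> Bool {
        gestureStartPoint = touch.locationInView(gestureRecognizer.view)
        return true
    }

    func handlePan(pan: UIPanGestureRecognizer) {
        if pan.state == .Began {
            // Use gestureStartPoint, not locationInView:, to pick the
            // object under the finger's original position.
            print("drag really started at \(gestureStartPoint)")
        }
    }
}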
I'm new to developing iOS apps.
I've successfully implemented a swipe gesture recognizer.
What I was wondering is whether there is an easy-to-use recognizer, like the swipe gesture, that would let you implement the home-screen page-turning effect, but just on a small view in the view controller.
If you're unclear on what effect I mean: when you look at the iPhone's home screen, you can drag your finger and it responds instantly (unlike a swipe), and it also has some spring feel to it. Is this an effect I can use, or do I have to program it manually? If so, is there a tutorial that explains this?
Thanks,
I hope my question makes sense.
Have a look at UIPanGestureRecognizer:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIPanGestureRecognizer_Class/Reference/Reference.html
UIPanGestureRecognizer is a concrete subclass of UIGestureRecognizer that looks for panning (dragging) gestures. The user must be pressing one or more fingers on a view while they pan it. Clients implementing the action method for this gesture recognizer can ask it for the current translation and velocity of the gesture.

A panning gesture is continuous. It begins (UIGestureRecognizerStateBegan) when the minimum number of fingers allowed (minimumNumberOfTouches) has moved enough to be considered a pan. It changes (UIGestureRecognizerStateChanged) when a finger moves while at least the minimum number of fingers are pressed down. It ends (UIGestureRecognizerStateEnded) when all fingers are lifted.

Clients of this class can, in their action methods, query the UIPanGestureRecognizer object for the current translation of the gesture (translationInView:) and the velocity of the translation (velocityInView:). They can specify the view whose coordinate system should be used for the translation and velocity values. Clients may also reset the translation to a desired value.
Edit: The spring feel you would need to implement yourself. Since iOS 7 there is UIKit Dynamics, which contains different behaviors; for what you describe you may need UIGravityBehavior and maybe UICollisionBehavior. Look at the WWDC 2013 videos on this topic; I think you will find some examples there.
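A rough sketch of the instant-tracking part plus a spring on release, assuming a hypothetical PagingCardView; this uses the spring-damping variant of UIView animation rather than full UIKit Dynamics, which is a lighter way to get a similar feel:

import UIKit

class PagingCardView: UIView {

    func enableDragging() {   // hypothetical setup; call once after init
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: "handlePan:"))
    }

    func handlePan(pan: UIPanGestureRecognizer) {
        switch pan.state {
        case .Changed:
            // Track the finger directly, so the view responds instantly.
            let translation = pan.translationInView(superview)
            center.x += translation.x
            pan.setTranslation(CGPointZero, inView: superview)
        case .Ended, .Cancelled:
            // Spring back toward the resting position on release.
            UIView.animateWithDuration(0.5, delay: 0,
                usingSpringWithDamping: 0.6, initialSpringVelocity: 0,
                options: [], animations: {
                    self.center.x = CGRectGetMidX(self.superview!.bounds)
                }, completion: nil)
        default:
            break
        }
    }
}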
I'm facing a delicate problem handling touch events. This is probably not a usual thing to do, but I think it is possible. I just don't know how...
I have a main view containing subviews A and B, each with a lot of sub-subviews 1, 2, 3, 4, 5, ...
MainView
    SubView(A)
        1
        2
        3
    SubView(B)
        1
        2
        3
Some of these sub-subviews (1, 2, 4) are scroll views.
It happens that I want to switch between A and B with a two-finger pan.
I have tried attaching a UIPanGestureRecognizer to the MainView, but the scroll views cancel the touches and it only works sometimes.
I need a consistent method to first capture the touches, detect whether it is a two-finger pan, and only then decide whether to pass the touches down (or up... I'm not sure) the responder chain.
I tried to create a top-level view to handle that, but I can't get the touches to be passed through that view.
I have found a lot of people with similar problems, but couldn't derive a solution to this problem from their solutions.
If anyone could shed some light on this, that would be great, as I'm already getting desperate.
You can create a top-level view to capture the touches and their coordinates, and then check whether each touch's coordinates fall inside one of the subviews. You can do that using the

BOOL CGRectContainsPoint(CGRect rect, CGPoint point)

function, where rect is the frame of the view and point is the touch point.
Please note that frames and touch locations are relative to their superviews, so you need to convert them to the coordinate system of the app's window.
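A minimal sketch of that conversion, assuming a hypothetical TouchRouterView overlay that keeps the views to test in a panTargets array:

import UIKit

class TouchRouterView: UIView {

    var panTargets = [UIView]()   // hypothetical: the views to hit-test

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
        guard let touch = touches.first, window = self.window else { return }
        // Convert both the touch and each frame into the window's
        // coordinate system before comparing them.
        let pointInWindow = touch.locationInView(window)
        for target in panTargets {
            let frameInWindow = window.convertRect(target.frame, fromView: target.superview)
            if CGRectContainsPoint(frameInWindow, pointInWindow) {
                print("touch landed on \(target)")
            }
        }
    }
}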
Or maybe this can be more helpful:
Receiving touch events on more then one UIView simultaneously