I'm having trouble getting a UIView to respond the way I want with multiple touches. Basically, certain UITouches are in UITouchPhaseBegan but never make it to UITouchPhaseEnded or UITouchPhaseCancelled. Here's the code I'm using to handle touches, which is called from touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent: and touchesCancelled:withEvent:. If I put one finger down, then another, move them, and release them simultaneously, the NSLog output is sometimes "Began! Began! Ended!" rather than "Began! Began! Ended! Ended!". Are these touches getting lost somewhere? How can I keep track of them?
- (void)handleTouchEvent:(UIEvent *)event {
    for (UITouch *touch in [event allTouches]) {
        if (touch.phase == UITouchPhaseBegan) {
            NSLog(@"Began!");
            if (![m_pCurrentTouches containsObject:touch])
                [m_pCurrentTouches addObject:touch];
            uint iVoice = [m_pCurrentTouches indexOfObject:touch];
            CGPoint location = [touch locationInView:self];
            m_pTouchPad->SetTouchPoint(location.x, location.y, iVoice);
            m_pTouchPad->SetIsTouching(true, iVoice);
        }
        else if (touch.phase == UITouchPhaseMoved) {
            uint index = [m_pCurrentTouches indexOfObject:touch];
            CGPoint location = [touch locationInView:self];
            m_pTouchPad->SetTouchPoint(location.x, location.y, index);
        }
        else if (touch.phase == UITouchPhaseEnded || touch.phase == UITouchPhaseCancelled) {
            uint index = [m_pCurrentTouches indexOfObject:touch];
            [m_pCurrentTouches removeObject:touch];
            NSLog(@"Ended!");
            m_pTouchPad->SetIsTouching(false, index);
        }
    }
}
EDIT:
I'm offering a bounty because I really want a good solution to this. To summarize: I need a system where every touch that begins also ends, so if a user puts down one finger and then another elsewhere, I can see both touches begin, and by the time there are no fingers in contact with the device anymore, I have seen both touches end.
Am I pursuing the wrong strategy to achieve this?
One event can report many touches. So you are sometimes getting "Ended!" once because only one event arrived and only one call was made to your touch handler, but that one call reported both touches ending. If you are manually handling multiple simultaneous touches (fingers), it is up to you to track each touch individually, check every touch in every event to see which of your touches are being reported, and decide what to do.
Apple has sample code showing how to do this by maintaining a CFDictionaryRef:
http://developer.apple.com/library/IOs/documentation/EventHandling/Conceptual/EventHandlingiPhoneOS/MultitouchEvents/MultitouchEvents.html#//apple_ref/doc/uid/TP40009541-CH3-SW7
(Scroll down to the section called "Handling Multitouch Events".)
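In the meantime, here is a minimal sketch of that approach in plain Foundation. The m_touchToVoice NSMutableDictionary ivar and the nextFreeVoice helper are assumptions of this sketch, not part of Apple's sample; the key idea is that a UITouch must not be retained, so its address is wrapped in a non-retaining NSValue and used as the dictionary key for the touch's whole lifetime. Remember to set multipleTouchEnabled to YES on the view, otherwise only the first touch is delivered.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        // nextFreeVoice is a hypothetical helper that hands out the lowest free slot.
        NSNumber *voice = [NSNumber numberWithUnsignedInt:[self nextFreeVoice]];
        [m_touchToVoice setObject:voice forKey:key];
        NSLog(@"Began! (voice %@)", voice);
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        NSNumber *voice = [m_touchToVoice objectForKey:key];
        CGPoint location = [touch locationInView:self];
        // Update whatever is bound to this voice with the new location.
        NSLog(@"Moved voice %@ to %@", voice, NSStringFromCGPoint(location));
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        NSLog(@"Ended! (voice %@)", [m_touchToVoice objectForKey:key]);
        [m_touchToVoice removeObjectForKey:key];
    }
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Treat cancellation like an end so no touch is ever leaked.
    [self touchesEnded:touches withEvent:event];
}
Because each phase is handled in its own override and only the touches argument is iterated, each touch is seen exactly once per phase, and every Began is paired with an Ended (or Cancelled).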
I just tried your code, and it has some problems.
I sometimes get "Began Began Began End End" for two fingers because touchesBegan:withEvent: gets called twice: the first call reports one touch in the Began phase, and the second call reports two, since [event allTouches] includes the first touch while it is still in the Began phase.
I don't know why you didn't split the logic across the touchesBegan, touchesMoved, and touchesEnded methods, but in any case you should iterate over the touches passed in the argument instead of [event allTouches].
- (void)handleTouches:(NSSet *)touches {
    for (UITouch *touch in touches) {
        if (touch.phase == UITouchPhaseBegan) {
            NSLog(@"Began!");
        }
        else if (touch.phase == UITouchPhaseMoved) {
        }
        else if (touch.phase == UITouchPhaseEnded || touch.phase == UITouchPhaseCancelled) {
            NSLog(@"Ended!");
        }
    }
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [self handleTouches:touches];
}
I use the following code to add a gesture recognizer to the top right side of a UINavigationBar, but the handler fires if I tap anywhere on the nav bar. How am I supposed to restrict the gesture to the top-right corner?
- (void)handleGestureForTopRightBarButtonItem:(UIGestureRecognizer *)gestureRecognizer {
    CGPoint p = [gestureRecognizer locationInView:self.navigationController.navigationBar];
    // p is in the navigation bar's coordinate space, so express the hot
    // region in the bar's bounds as well.
    CGRect barBounds = self.navigationController.navigationBar.bounds;
    if (CGRectContainsPoint(CGRectMake(barBounds.size.width - 20, 0, 100, barBounds.size.height), p)) {
        NSLog(@"got a tap on the right side in the region I care about");
    } else {
        NSLog(@"got a tap on the right side, but not where I need it");
    }
}
As a UIGestureRecognizer reports to a single target object, there are a couple of ways to solve this. UIGestureRecognizer was not meant to be stacked multiple times on the same view; if you do that, you will very likely drain more energy than you need, on top of the lost CPU power and the pile of comparison code needed to distinguish all the running recognizers. But it would work.
a) Write code that compares the touch coordinates against the expected region, and perform your actions only when they fall in the range you want.
b) Create another view that lives only in the region you want and has its own UIGestureRecognizer. Not ideal, as written above.
c) Use the power of UIControl, whose instances are also UIViews and therefore UIResponders, via its tracking methods:
- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(nullable UIEvent *)event;
- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(nullable UIEvent *)event;
- (void)endTrackingWithTouch:(nullable UITouch *)touch withEvent:(nullable UIEvent *)event; // touch is sometimes nil if cancelTracking calls through to this.
- (void)cancelTrackingWithEvent:(nullable UIEvent *)event;
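As a rough illustration of option c), here is a hypothetical UIControl subclass that only tracks touches beginning in a hot region along its right edge (the class name and the 44-point region are made up for this sketch):
@interface HotCornerControl : UIControl
@end

@implementation HotCornerControl

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    CGPoint p = [touch locationInView:self];
    CGRect hotRect = CGRectMake(self.bounds.size.width - 44.0, 0.0,
                                44.0, self.bounds.size.height);
    // Returning NO refuses the touch, so a tap that starts outside the
    // hot region never produces a Touch Up Inside action.
    return CGRectContainsPoint(hotRect, p);
}

@end
Wired up with plain target-action, the control then behaves like a button that only exists in its corner:
HotCornerControl *corner = [[HotCornerControl alloc] initWithFrame:CGRectMake(0, 0, 320, 44)];
[corner addTarget:self
           action:@selector(cornerTapped:)
 forControlEvents:UIControlEventTouchUpInside];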
d) Use the power of UIView without a UIGestureRecognizer. This basically works on CALayers as well, by the way, although layers are not UIControls and do not have the sendAction methods.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    BOOL inside = [super pointInside:point withEvent:event];
    if (inside && event.type == UIEventTypeTouches) {
        //[super sendActionsForControlEvents:UIControlEventTouchDown];
    }
    return inside;
}
e) Code some class that inherits from UIResponder, which is basically what UIControls do, and use its API instead, so you can make use of the touch coordinates as well.
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        [self someCoRoutineWithTouch:touch];
    }
    //[super touchesBegan:touches withEvent:event];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        //do some stuff per touch
    }
    //[super touchesMoved:touches withEvent:event];
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        //do some stuff per touch
    }
    //[super touchesEnded:touches withEvent:event];
}

- (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [self touchesEnded:touches withEvent:event];
}
Just don't forget that UIGestureRecognizer is meant for gestures composed of touches, even multiple touches, and not primarily for catching touches in general, despite the fact that a lot of examples use it instead of coding a proper UIControl. Also don't forget that near a device's edges, gesture recognition is limited by the nature of finger size.
I'm using SpriteKit for my game. I detect single taps and double taps using the following code:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 1) {
        [self.tapQueue addObject:@1];
        NSLog(@"touch.tapCount == 1 :)");
    }
    if (touch.tapCount == 2) {
        [self.tapQueue addObject:@2];
        NSLog(@"touch.tapCount == 2 :)");
    }
}
-(void)processUserTapsForUpdate:(NSTimeInterval)currentTime {
    for (NSNumber *tapCount in [self.tapQueue copy]) {
        if ([tapCount unsignedIntegerValue] == 1)
            [self singleTap];
        if ([tapCount unsignedIntegerValue] == 2)
            [self doubleTap];
        [self.tapQueue removeObject:tapCount];
    }
}
This code detects a single tap, but when it detects a double tap it also reports a single tap along with it. How can I tell the difference between a single tap and a double tap?
Thanks
Are you saying that a single tap is reported twice during a double tap, once for the single tap and once for the double tap, before the double tap is reported?
To try to answer your question: keep a global tap count instead of relying on the tapCount from the touch events, or keep a boolean variable such as tappedOnce that you check to make sure the touch was really double-tapped.
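One common way to make that concrete is to delay the single-tap action briefly and cancel it when the second tap arrives. This sketch replaces the tapQueue approach with performSelector:afterDelay:; the 0.3-second window is a guess, not an official constant, and singleTap/doubleTap are the methods from your own code:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 1) {
        // Defer the single-tap action; it only fires if no second tap follows.
        [self performSelector:@selector(singleTap)
                   withObject:nil
                   afterDelay:0.3];
    }
    else if (touch.tapCount == 2) {
        // Second tap of a double tap: cancel the pending single tap.
        [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                 selector:@selector(singleTap)
                                                   object:nil];
        [self doubleTap];
    }
}
The trade-off is that the single-tap action is always delayed by the length of the double-tap window, which may or may not be acceptable for your game.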
I have an NSMutableArray filled with different sprites. These sprites are all on the screen. How can I detect if a touch is landing on one of these sprites, and then do something if a touch on the sprite has occurred?
This is what I have now,
CGPoint touchLocation = [touch locationInNode:_physicsNode];
if (CGRectContainsPoint(starInArray.boundingBox, touchLocation)) {
Instead of (starInArray.boundingBox), I want to be able to say something like (anyObjectInMyArray.boundingBox).
Any way to go about this?
Thanks!
Something along the lines of this should work (note that ccTouchBegan:withEvent: must return a BOOL to claim the touch):
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
    for (CCSprite *star in starInArray)
    {
        // Build the sprite's bounding box from its position and content size.
        CGRect starRect = CGRectMake(star.position.x - star.contentSize.width / 2,
                                     star.position.y - star.contentSize.height / 2,
                                     star.contentSize.width,
                                     star.contentSize.height);
        if (CGRectContainsPoint(starRect, touchLocation))
        {
            //Do Something
            return YES; // claim the touch
        }
    }
    return NO;
}
I have a problem as follows. I've developed an iOS app; to be concise, it has a UIViewController as parent, which has a button, and the UIViewController pops up a transparent UIView as a mask. When I tap on the UIView (exactly within the underlying button's boundary), the button does not receive any event (such as "touch up inside"). How can the button get the "touch up inside" event through the transparent UIView that sits above it?
This is not possible directly, as the triggered event will not be for the button: the button is not visible to the touch system (i.e. another view is completely covering the button and blocking its interaction with the user).
But I can give you a workaround.
1. Declare a BOOL variable in your UIViewController.
2. Implement the touch methods as shown below.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint p = [touch locationInView:self.view];
    if (CGRectContainsPoint(button.frame, p) && !boolVariable) {
        boolVariable = YES;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint p = [touch locationInView:self.view];
    // If the condition below is true, the user lifted the finger at the same
    // location as the button (touchesMoved and touchesCancelled were not called
    // in between), so the event is effectively a touch-up-inside.
    if (CGRectContainsPoint(button.frame, p) && boolVariable) {
        boolVariable = NO;
        // Here you can call the method you wanted to fire on touch-up-inside of the button.
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    boolVariable = NO;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    boolVariable = NO;
}
I was not able to test the above code in Xcode, but I think it will work.
Note: The frame of the button should be expressed in the coordinate space of the UIViewController's view.
Hope this helps you out :)
UIView instances do not listen for control events; callbacks for events such as "touch up inside" are only sent by subclasses of UIControl.
The most basic concrete subclass is UIButton.
Without code or more details about your app's setup, it's difficult to give better advice.
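For illustration, the usual target-action wiring with a plain UIButton looks like this (the frame, title, and buttonTapped: selector are just placeholders for this sketch):
- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *button = [UIButton buttonWithType:UIButtonTypeSystem];
    button.frame = CGRectMake(20.0, 80.0, 120.0, 44.0);
    [button setTitle:@"Tap me" forState:UIControlStateNormal];
    // addTarget:action:forControlEvents: comes from UIControl, not UIView.
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

- (void)buttonTapped:(UIButton *)sender {
    NSLog(@"touch up inside");
}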
I have subclassed a UIView that already handles single touches and drags. I want to enhance the interaction of this view so that, while dragging, if the user touches with a second finger (anywhere else in the view), then the system prints a message. I've made a stab at the code:
In my header file I've declared:
NSString *key; // This unique key identifies the first touch
In my .m file I have:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *t in touches) {
        if (key == nil)
        {
            key = [[[NSValue valueWithPointer:t] description] copy];
        }
        if ([key isEqualToString:[[NSValue valueWithPointer:t] description]])
        {
            NSLog(@"calling parent to handle single touch");
            [super touchesBegan:[NSSet setWithObject:t] withEvent:event];
        }
        else
        {
            [self twoTouchDetected];
        }
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *t in touches) {
        if ([key isEqualToString:[[NSValue valueWithPointer:t] description]])
        {
            [super touchesMoved:[NSSet setWithObject:t] withEvent:event];
        }
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *t in touches) {
        if ([key isEqualToString:[[NSValue valueWithPointer:t] description]])
        {
            [super touchesEnded:[NSSet setWithObject:t] withEvent:event];
            key = nil;
        }
    }
}
Unfortunately there are issues with this implementation. The first time (while dragging with one finger) I touch with a second finger, the system registers it immediately. However, the second time I touch with a second finger (while still dragging with the first), the second touch does not register until the first finger is lifted; the events from the second finger are backed up...
What is also strange is that sometimes the parent gets called with touch data from the second finger and not the first.
It turns out my code worked, but the problem was that I had subclassed an object belonging to the Core Plot framework. That framework does unusual things to its objects, and as a result the touches were coming back in the wrong order.
I created an empty project to receive touches and everything came out great.
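For anyone who wants to reproduce that test, a bare-bones view along these lines is enough; note that multipleTouchEnabled must be set, since a UIView delivers only one touch by default (the class name here is made up):
@interface TouchTestView : UIView
@end

@implementation TouchTestView

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        self.multipleTouchEnabled = YES; // off by default on UIView
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *t in touches)
        NSLog(@"Began %p", t);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *t in touches)
        NSLog(@"Ended %p", t);
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *t in touches)
        NSLog(@"Cancelled %p", t);
}

@end
With this view full-screen in an otherwise empty project, every Began should be paired with an Ended or Cancelled, matching the behavior described above.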