When I tap a line in my multi-line plot graph, the delegate method that should display a symbol with the values for the touched point is rarely called:
-(void)scatterPlot:(CPTScatterPlot *)plot plotSymbolWasSelectedAtRecordIndex:(NSUInteger)index;
The same problem occurs with bar plots. I have also set the plotSymbolMarginForHitDetection property to a high value, but it has no effect. How can I make my graph respond to touches more reliably?
There is no scatter plot delegate method for detecting hits on the line between plot points. If that's what you're after, you'll need to use a plot space delegate. Handle the touch event and look through the plot data to find which line segment (if any) is near the touched point.
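If it helps, here's a rough, untested sketch of that approach. The -viewPointForRecordIndex: and -lineSegmentTouchedAtIndex: helpers are hypothetical (implement the first with your plot space's plot-point-to-view-point conversion, whose exact name varies between Core Plot versions), and the plot space needs its delegate set and allowsUserInteraction = YES for the delegate method to fire:
// point-to-segment distance in view coordinates
static CGFloat distanceToSegment(CGPoint p, CGPoint a, CGPoint b) {
    CGFloat dx = b.x - a.x, dy = b.y - a.y;
    CGFloat lengthSquared = dx * dx + dy * dy;
    if (lengthSquared == 0.0) {
        return hypot(p.x - a.x, p.y - a.y);
    }
    CGFloat t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lengthSquared;
    t = MAX(0.0, MIN(1.0, t));   // clamp to the segment
    return hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
}

- (BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDownEvent:(id)event atPoint:(CGPoint)point
{
    // convert the touch to the plot area's coordinate system (depending on your
    // Core Plot version the point may already be in that space; adjust as needed)
    CGPoint plotAreaPoint = [self.graph convertPoint:point toLayer:self.graph.plotAreaFrame.plotArea];

    NSUInteger count = [self numberOfRecordsForPlot:self.myScatterPlot];
    for (NSUInteger i = 0; i + 1 < count; i++) {
        CGPoint a = [self viewPointForRecordIndex:i];       // hypothetical helper
        CGPoint b = [self viewPointForRecordIndex:i + 1];   // hypothetical helper
        if (distanceToSegment(plotAreaPoint, a, b) < 20.0) {
            [self lineSegmentTouchedAtIndex:i];             // your own callback
            return NO;   // we handled the touch
        }
    }
    return YES;   // let the plot space handle it (e.g. for scrolling)
}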
Bar plots aren't as complicated. Any touch inside a bar should trigger the delegate method. You might have issues if the bars are very narrow. The only solution in that case is to make them wider.
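For example, a one-liner (assuming Core Plot 1.x, where barWidth is an NSDecimal; newer versions use an NSNumber instead):
// widen the bars so each one presents a larger touch target; the value is in
// plot (data) coordinates unless barWidthsAreInViewCoordinates is YES
self.myBarPlot.barWidth = CPTDecimalFromDouble(0.8);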
Another way to increase your "hit area" is to monitor all touches in your graph and translate that to the closest index.
To do this, you'll have to make sure the delegate is nil (since you are manually monitoring).
self.myBarPlot.delegate = nil;
Then, on your CPTGraphHostingView, set up your UIGestureRecognizers. I've found that using both tap and pan recognizers works best. Set them up like so:
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(graphTapped:)];
[self.hostView addGestureRecognizer:tapRecognizer];

UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(graphPanned:)];
[self.hostView addGestureRecognizer:panRecognizer];
The recognizers will report when your hostView has been tapped or panned. From there, you can easily translate the location of the touch to an index, as follows:
- (void)graphTapped:(UITapGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateEnded) {
        [self gestureUpdated:sender];
    }
}

- (void)graphPanned:(UIPanGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateChanged) {
        [self gestureUpdated:sender];
    }
}

- (void)gestureUpdated:(UIGestureRecognizer *)sender {
    CGFloat width = self.hostView.frame.size.width;
    CGPoint loc = [sender locationInView:self.hostView];
    NSInteger index = (loc.x / width) * [self numberOfRecordsForPlot:self.myBarPlot];
    NSLog(@"Touch index: %li", (long)index);
}
Now that we have an index, just go ahead and do what you did in your original delegate callback.
For bar plots:
[self barPlot:self.myBarPlot barWasSelectedAtRecordIndex:index];
For scatter plots (untested):
[self scatterPlot:self.myScatterPlot plotSymbolWasSelectedAtRecordIndex:index];
Voilà!
I'm using UIScrollView to make a gallery-like UI with paging functionality.
Since I need paging, I set the width of the scroll view equal to the width of a single page (in my example, the width of the pink rectangle).
But I want two extra things:
Tapping the yellow or blue area will bring the corresponding rectangle to the center.
One can scroll/swipe on the yellow or blue area (outside the scroll view), which means the entire width of the screen is scrollable.
I followed this thread and added - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event. But by doing so I can only achieve my second goal. When I set a selector or delegate to handle taps on the yellow and blue areas, it doesn't work. Any ideas?
That answer you referenced is one of my old favorites. It doesn't contemplate your first requirement, but I think it can handle it very neatly with just the addition of a tap gesture recognizer.
Create it on your "ClipView":
UITapGestureRecognizer *tapGR = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tap:)];
[self.myClipView addGestureRecognizer:tapGR];
// myClipView is the view that contains the paging scroll view
- (void)tap: (UITapGestureRecognizer *)gr {
    // there are a few challenges here:
    //   1) get the tap location in the correct coordinate system
    //   2) convert that to which "page" was tapped
    //   3) scroll to that page
}
Challenge 1) is easy thanks to the gesture recognizer's locationInView: method:
CGPoint location = [gr locationInView:self.scrollView];
For challenge 2) we need to work out what page within your scroll view was tapped. That can be done with pretty simple arithmetic given the page width.
// assuming you have something like this
#define kPAGE_WIDTH // some float

// page is just how many page-widths are represented by location.x
NSInteger page = floor(location.x / kPAGE_WIDTH);
Now challenge 3) is easy, because we can convert a page to its scroll position straightforwardly...
CGFloat x = page * kPAGE_WIDTH;
[self.scrollView setContentOffset:CGPointMake(x, 0.0f) animated:YES];
Or, all in one chunk of code...
- (void)tap: (UITapGestureRecognizer *)gr {
    CGPoint location = [gr locationInView:self.scrollView];
    NSInteger page = floor(location.x / kPAGE_WIDTH);
    CGFloat x = page * kPAGE_WIDTH;

    [self.scrollView setContentOffset:CGPointMake(x, 0.0f) animated:YES];
}
EDIT
You may also want to exclude the "current page" area from the gesture recognizer. That's simply done by qualifying the test in the tap method.
The only trick is to get the tap position in the same coordinate system as the scroll view's frame, that is, the clip view...
CGPoint locationInClipper = [gr locationInView:gr.view];
And the SDK provides a nice method to test...
BOOL inScrollView = [self.scrollView pointInside:locationInClipper withEvent:nil];
So...
- (void)tap: (UITapGestureRecognizer *)gr {
    CGPoint locationInClipper = [gr locationInView:gr.view];
    BOOL inScrollView = [self.scrollView pointInside:locationInClipper withEvent:nil];

    if (!inScrollView) {
        CGPoint location = [gr locationInView:self.scrollView];
        NSInteger page = floor(location.x / kPAGE_WIDTH);
        CGFloat x = page * kPAGE_WIDTH;

        [self.scrollView setContentOffset:CGPointMake(x, 0.0f) animated:YES];
    }
}
OK, so I am helping convert an Android game to iOS. The game is based on 2048, but with letters instead of numbers. I have a good bit of it working but am still learning Objective-C/iOS quirks. So far I have the tiles/grid and movement working, but I need a bit of help. The goal is to allow the user to long-press on a tile to select it, then slide their finger to an adjacent tile to begin spelling a word. I have the long-press portion implemented, but I'm at a bit of a loss on how to get it to long-press and then swipe. On top of this, I already have a swipe that allows the user to move the tiles. In searching on here I've seen suggestions about subclassing, so I figure I need to subclass UISwipeGestureRecognizer. I have already added the simultaneous gesture recognition delegate method, but am unsure where to go from here.
So, there are several questions to this.
What would be the best way to do this? Should I subclass UISwipeGestureRecognizer for each direction?
Will my current swipe detection interfere? (right now a swipe by itself moves tiles in direction of swipe)
I would guess I need something like: if a long press is detected, then activate the subclassed swipe methods?
Any examples to answer the above questions would be of great help. I'm not asking you to do it for me but at least point me in a general direction. Thanks!
Code below.
// Grid.m
#import "Grid.h"
#import "Tile.h"
- (void)didLoadFromCCB {
    // listen for swipes to the left
    UISwipeGestureRecognizer *swipeLeft = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipeLeft)];
    swipeLeft.direction = UISwipeGestureRecognizerDirectionLeft;
    [[[CCDirector sharedDirector] view] addGestureRecognizer:swipeLeft];

    // listen for swipes to the right
    UISwipeGestureRecognizer *swipeRight = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipeRight)];
    swipeRight.direction = UISwipeGestureRecognizerDirectionRight;
    [[[CCDirector sharedDirector] view] addGestureRecognizer:swipeRight];

    // listen for swipes up
    UISwipeGestureRecognizer *swipeUp = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipeUp)];
    swipeUp.direction = UISwipeGestureRecognizerDirectionUp;
    [[[CCDirector sharedDirector] view] addGestureRecognizer:swipeUp];

    // listen for swipes down
    UISwipeGestureRecognizer *swipeDown = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipeDown)];
    swipeDown.direction = UISwipeGestureRecognizerDirectionDown;
    [[[CCDirector sharedDirector] view] addGestureRecognizer:swipeDown];

    // listen for long press
    UILongPressGestureRecognizer *longpress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(onLongPress:)];
    [longpress setMinimumPressDuration:0.5];
    [[[CCDirector sharedDirector] view] addGestureRecognizer:longpress];
}

- (void)swipeLeft {
    [self move:ccp(-1, 0)];
}

- (void)swipeRight {
    [self move:ccp(1, 0)];
}

- (void)swipeDown {
    [self move:ccp(0, -1)];
}

- (void)swipeUp {
    [self move:ccp(0, 1)];
}

// detect long press, convert to node space, and check if the touch location is
// within a tile's bounding box. If yes, set the background white and the text black.
- (void)onLongPress:(UILongPressGestureRecognizer *)recognizer {
    CGPoint touchPoint = [[CCDirector sharedDirector] convertToGL:[recognizer locationInView:[recognizer view]]];
    touchPoint = [self convertToNodeSpace:touchPoint];

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        for (Tile *tile in self.children) {
            if ([tile isKindOfClass:[Tile class]]) {
                CGRect tileBoundingBox = tile.boundingBox;
                if (CGRectContainsPoint(tileBoundingBox, touchPoint)) {
                    tile.backgroundNode.color = [CCColor whiteColor];
                    tile.valueLabel.color = [CCColor blackColor];
                    [self spellWord:tile.value];
                    [_word setString:[_word lowercaseString]];
                    CCLOG(@"%@", _word);
                }
            }
        }
    }

    if (recognizer.state == UIGestureRecognizerStateChanged) {
    }

    if (recognizer.state == UIGestureRecognizerStateEnded) {
        for (Tile *tile in self.children) {
            if ([tile isKindOfClass:[Tile class]]) {
                CGRect tileBoundingBox = tile.boundingBox;
                if (CGRectContainsPoint(tileBoundingBox, touchPoint)) {
                    tile.backgroundNode.color = [tile getColor:tile.value];
                    tile.valueLabel.color = [self getContrastColor:r green:g blue:b];
                }
            }
        }
    }
}

// allow for simultaneous gestures
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return YES;
}
In answer to your questions:
This doesn't strike me as a coding situation that requires one to subclass UILongPressGestureRecognizer. Having said that, subclassing is often a nice way to clean up one's view controller code so you don't have gory gesture recognizer code in the view controller class. But there's nothing here (as I understand it) that demands that. You generally dive into subclassing of gesture recognizers where you need some special custom behavior (e.g. have the gesture fail if some complicated criterion fails). I'd first see if you could achieve the desired UX with standard gestures before I went down that road, though.
The only reason I could see the swipe gestures interfering with each other is that you've specified that shouldRecognizeSimultaneouslyWithGestureRecognizer should return YES. That's used in cases where you need multiple recognizers running at the same time, which doesn't seem necessary here (and is only a source of problems).
It's unclear to me as to whether you really wanted a separate swipe gesture or whether you just wanted a single gesture ("long press and drag"). If you needed that separate swipe gesture, though, you would generally specify the relative priority of gesture recognizers by specifying requireGestureRecognizerToFail (e.g. have the swipe require long press to fail in order for the swipe to be recognized). But if you really only have one gesture ("long press and drag"), then only one gesture recognizer is needed.
It seems unnecessary. If you want to detect movement after the long press has been recognized, you can put that "move after long press" code in the if statement for UIGestureRecognizerStateChanged in your onLongPress, which occurs after the long press has been recognized, but before the user lifts their finger. The UILongPressGestureRecognizer is a continuous gesture recognizer that will continue to get updates as the user's finger moves after the gesture was initially recognized.
I know you didn't ask for code, but if you wanted a swipe gesture, as well as a long press gesture that was, essentially, the idea of picking it up and dragging it, you could do something like the following. Note, I make the swipe gesture require the long press to fail, so if the user is long pressing, that takes precedence, otherwise it does swipe. But you may not need the swipe gesture at all, so if you don't need it, just remove it altogether:
#import <UIKit/UIGestureRecognizerSubclass.h>
- (void)viewDidLoad {
    [super viewDidLoad];

    UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPress:)];
    [self.view addGestureRecognizer:longPress];

    // if you needed a second gesture, a swipe, completely distinct from the long press and drag
    // gesture, you could add it like so:
    //
    // UISwipeGestureRecognizer *swipe = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipe:)];
    // [swipe requireGestureRecognizerToFail:longPress];
    // // do additional swipe configuration
    // [self.view addGestureRecognizer:swipe];
}

- (void)handleSwipe:(UISwipeGestureRecognizer *)gesture
{
    // do your separate swipe stuff here
}

- (void)handleLongPress:(UILongPressGestureRecognizer *)gesture
{
    static UIView *tileToMove;
    static CGPoint startCenter;
    static CGPoint startLocation;

    CGPoint location = [gesture locationInView:self.view];

    switch (gesture.state) {
        case UIGestureRecognizerStateBegan:
        {
            // find the tile
            tileToMove = [self findTileToMove:location];

            if (tileToMove) {
                // if found, capture state ...
                startCenter = tileToMove.center;
                startLocation = location;

                // ... and animate "pick up tile", so the user gets positive feedback
                // that the drag/swipe portion of the gesture is starting.
                [UIView animateWithDuration:0.25 animations:^{
                    tileToMove.transform = CGAffineTransformMakeScale(1.2, 1.2);
                }];
            } else {
                gesture.state = UIGestureRecognizerStateFailed;
            }
            break;
        }

        case UIGestureRecognizerStateChanged:
        {
            // move the tile as the user's finger moves
            CGPoint translate = CGPointMake(location.x - startLocation.x, location.y - startLocation.y);

            // note, if you want to constrain the translation to be, for example, on the
            // x-axis alone, you could do something like:
            //
            // CGPoint translate = CGPointMake(location.x - startLocation.x, 0);

            tileToMove.center = CGPointMake(startCenter.x + translate.x, startCenter.y + translate.y);
            break;
        }

        case UIGestureRecognizerStateEnded:
        {
            // animate "drop the tile"
            [UIView animateWithDuration:0.25 animations:^{
                tileToMove.transform = CGAffineTransformIdentity;

                // if you want the tile to "snap" to some location having let it go,
                // set the `center` or `frame` here.
            }];

            // clear our variables, just in case
            tileToMove = nil;
            startCenter = CGPointZero;
            startLocation = CGPointZero;
            break;
        }

        default:
            break;
    }
}

- (UIView *)findTileToMove:(CGPoint)location
{
    for (UIView *tile in self.tiles) {
        if (CGRectContainsPoint(tile.frame, location)) {
            return tile;
        }
    }

    return nil;
}
This might not be quite the exact UI you're looking for, but it illustrates:
How to have two gestures, where one requires the other to fail in order to establish a precedence between the gestures (and clearly only an issue if you want two distinct gestures, which you probably don't);
Why not to implement the shouldRecognizeSimultaneouslyWithGestureRecognizer: method: because I don't want both gestures to be recognized simultaneously. Note, that's only needed if you really need two gestures, which you may or may not; and
How to have a long press that not only recognizes initial long press, but subsequent swipe/drag movement, too.
On the iPad, when you put your finger outside the top or bottom edge of the screen and then drag it onto the screen, a menu is revealed. How can I implement that?
There is specifically a Gesture Recogniser class for this, introduced in iOS 7. It's the UIScreenEdgePanGestureRecognizer. The documentation for it is here. Check it out.
To test this in the simulator, just start the drag from near the edge (~15 points).
Also, you will have to create a gestureRecognizer for each edge. You can't OR edges together, so UIRectEdgeAll won't work.
There is a simple example here. Hope this helps!
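To illustrate, here's a minimal sketch (handleEdgeSwipe: is just a placeholder selector name; note that the top and bottom edges compete with the system's own Notification Center and Control Center gestures):
- (void)viewDidLoad {
    [super viewDidLoad];

    // one recognizer per edge -- the edges cannot be OR'd together
    UIScreenEdgePanGestureRecognizer *topEdgePan =
        [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self action:@selector(handleEdgeSwipe:)];
    topEdgePan.edges = UIRectEdgeTop;
    [self.view addGestureRecognizer:topEdgePan];

    UIScreenEdgePanGestureRecognizer *bottomEdgePan =
        [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self action:@selector(handleEdgeSwipe:)];
    bottomEdgePan.edges = UIRectEdgeBottom;
    [self.view addGestureRecognizer:bottomEdgePan];
}

- (void)handleEdgeSwipe:(UIScreenEdgePanGestureRecognizer *)gesture {
    if (gesture.state == UIGestureRecognizerStateBegan) {
        // start revealing your menu here; use translationInView: during
        // UIGestureRecognizerStateChanged if you want it to track the finger
    }
}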
Well, you can do something like this. This example covers the case where you want your pan gesture to work only when the user swipes within 20 points of the right-hand edge of the screen.
First of all, add the gesture to your view:
- (void)addGestures {
    if (!_panGesture) {
        _panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
        [_panGesture setDelegate:self];
        [self.view addGestureRecognizer:_panGesture];
    }
}
After adding it, check whether the touch you received belongs to the pan gesture, and then perform your action accordingly:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    CGPoint point = [touch locationInView:self.view];
    if (gestureRecognizer == _panGesture) {
        return [self slideMenuForGestureRecognizer:gestureRecognizer withTouchPoint:point];
    }
    return YES;
}
Here is how you can check whether your touch is contained in the region where you want it to be
- (BOOL)isPointContainedWithinBezelRect:(CGPoint)point {
    CGRect rightBezelRect;
    CGRect tempRect;

    // the bezel is the strip along CGRectMaxXEdge, i.e. the rightmost
    // 20 points of the view
    CGFloat bezelWidth = 20.0;

    CGRectDivide(self.view.bounds, &rightBezelRect, &tempRect, bezelWidth, CGRectMaxXEdge);

    return CGRectContainsPoint(rightBezelRect, point);
}
Right now my UIPanGestureRecognizer recognizes every single pan, which is great and necessary, but since I'm using it as a sliding gesture to increase and decrease a variable's value, I only want to act every so often within the handler. If I increment by even 1 every time it fires, the value goes up far too fast.
Is there a way to do something like, every 10 pixels of panning do this, or something similar?
You're looking for translationInView:, which tells you how far the pan has progressed and can be tested against your minimum distance. This solution doesn't cover the case where you go back and forth in one direction in an amount equal to the minimum distance, but if that's important for your scenario it's not too hard to add.
#define kMinimumPanDistance 100.0f

UIPanGestureRecognizer *recognizer;
CGPoint lastRecognizedInterval;

- (void)viewDidLoad {
    [super viewDidLoad];

    recognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(didRecognizePan:)];
    [self.view addGestureRecognizer:recognizer];
}

- (void)didRecognizePan:(UIPanGestureRecognizer *)sender {
    CGPoint thisInterval = [recognizer translationInView:self.view];

    // use fabs(), not abs(): these are CGFloat values, not integers
    if (fabs(lastRecognizedInterval.x - thisInterval.x) > kMinimumPanDistance ||
        fabs(lastRecognizedInterval.y - thisInterval.y) > kMinimumPanDistance) {
        lastRecognizedInterval = thisInterval;

        // you would add your method call here
    }
}
UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan:)];
[self addGestureRecognizer:panRecognizer];

- (void)pan:(UIPanGestureRecognizer *)gesture {
    NSLog(@"%f", [gesture translationInView:self].x);
}
The above code logs the relative position of my current pan, but how can I get the absolute position within the view I'm in?
I simply want to slide a UIImageView to wherever the user's finger is.
translationInView: gives you the pan's translation (how much x has changed), not the position of the pan in the view (the value of x). If you need the position of the pan, you have to use the locationInView: method.
You can find the coordinates relative to the view as follows:
- (void)pan:(UIPanGestureRecognizer *)gesture {
    NSLog(@"%f", [gesture locationInView:self].x);
}
Or relative to the superview:
- (void)pan:(UIPanGestureRecognizer *)gesture {
    NSLog(@"%f", [gesture locationInView:self.superview].x);
}
Or relative to the window:
- (void)pan:(UIPanGestureRecognizer *)gesture {
    NSLog(@"%f", [gesture locationInView:self.window].x);
}
Swift 5
Use the location(in:) method, which returns a CGPoint value (see the UIGestureRecognizer documentation).
For example, the location of your gesture relative to self.view:
let relativeLocation = gesture.location(in: self.view)
print(relativeLocation.x)
print(relativeLocation.y)
I think a simple way to do something like this is to get the x and y of the touch and track it; once you have a point (say x: 230, y: 122), you set the scroll view's content offset to that x and y.
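A rough sketch of that idea (assuming a self.scrollView outlet and a pan recognizer attached to its superview; you'd still want to tune how the finger position maps to the offset for your layout):
- (void)trackPan:(UIPanGestureRecognizer *)gesture {
    // where the finger currently is, in the superview's coordinates
    CGPoint location = [gesture locationInView:self.scrollView.superview];

    // scroll so the content follows the finger; clamp so we never scroll
    // past the content's bounds
    CGFloat maxX = MAX(0, self.scrollView.contentSize.width  - self.scrollView.bounds.size.width);
    CGFloat maxY = MAX(0, self.scrollView.contentSize.height - self.scrollView.bounds.size.height);
    CGPoint offset = CGPointMake(MIN(MAX(0, location.x), maxX),
                                 MIN(MAX(0, location.y), maxY));
    [self.scrollView setContentOffset:offset animated:NO];
}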