Adding multiple tappable shapes to UIView - ios

I have a floor plan with many exhibitor stands. When a UIView loads, a UIImage is displayed with the floor plan and a database is checked for a list of exhibitors and their locations. The locations are loaded into an array and a UIButton is created for each exhibitor and placed over the floor plan where their stand is. When tapped, this button will show information about that exhibitor.
Here is a screenshot of the floor plan with boxes where the buttons are rendered.
This works fine as it is BUT I need the buttons to be irregular shapes (triangles, pentagons, circles etc). So I need a way of drawing these shapes and having them clickable in the same way the buttons were.
I have created a test class which generates a UIView which contains the shape and added it to my original UIView. I get the feeling this may not be the correct way to do this as I will need to have many buttons on the screen and this would mean many views stacked on each other. I don't know how I could check which shape was tapped as the UIViews would overlap each other.
Can all the shapes be drawn on one view and then the view added? What is the best approach to this?

In terms of UI the cleanest thing to do is to use actual buttons. Set their type to custom, and set their image property to the image you want to display. That way the buttons handle highlighting correctly, and manage IBActions just like regular buttons. You can set all the buttons to point to the same action, and use tag values to figure out which button is which.
You can create these buttons either from code or in IB - whichever fits your design better.
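For illustration, a minimal sketch of that approach; the exhibitors array, its keys, floorPlanView, and standTapped: are hypothetical placeholders:
// Assumes each exhibitor record carries a frame on the floor plan, an image name and an integer stand number.
- (void)addStandButtons
{
    for (NSDictionary *stand in self.exhibitors) {
        UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
        button.frame = [stand[@"frame"] CGRectValue];    // stand location on the plan
        button.tag = [stand[@"standNo"] integerValue];   // identifies the exhibitor later
        [button setImage:[UIImage imageNamed:stand[@"shapeImage"]] forState:UIControlStateNormal];
        [button addTarget:self
                   action:@selector(standTapped:)
         forControlEvents:UIControlEventTouchUpInside];
        [self.floorPlanView addSubview:button];
    }
}

- (void)standTapped:(UIButton *)sender
{
    NSLog(@"Tapped stand %ld", (long)sender.tag);        // look up the exhibitor by tag
}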
You could also do this with custom views, or with a single view that draws everything. If you use a view for each booth, you would need to attach a tap gesture recognizer to each view and set its userInteractionEnabled flag to YES.
If you want to use a drawing for the entire floor plan, you would need to add a tap gesture recognizer to the drawing view, and then interpret the coordinates of the tap to figure out which image it lands on.
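For example, a rough sketch of that single-view approach; floorPlanView, standFrames (an array of NSValue-wrapped CGRects), and planTapped: are assumed names:
// In viewDidLoad (or wherever the floor plan view is set up):
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(planTapped:)];
[self.floorPlanView addGestureRecognizer:tap];

// Then interpret the tap coordinates against the stored stand frames:
- (void)planTapped:(UITapGestureRecognizer *)recognizer
{
    CGPoint location = [recognizer locationInView:self.floorPlanView];
    [self.standFrames enumerateObjectsUsingBlock:^(NSValue *value, NSUInteger idx, BOOL *stop) {
        if (CGRectContainsPoint([value CGRectValue], location)) {
            NSLog(@"Tapped the stand at index %lu", (unsigned long)idx);
            *stop = YES;
        }
    }];
}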

Override the
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
method of the view that renders your map. That way you can compare the current touch location with all of your shapes and work out which shape the touch point lies in. Different shapes call for different tests (e.g. is the point inside a rectangle? inside a path?).
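A minimal sketch of that idea inside the map view, assuming the shapes are kept as UIBezierPath objects in a hypothetical standPaths array:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = [[touches anyObject] locationInView:self];
    for (UIBezierPath *path in self.standPaths) {
        // containsPoint: works for rectangles, triangles, circles alike
        if ([path containsPoint:location]) {
            NSLog(@"Touched shape %@", path);
            break;
        }
    }
    [super touchesBegan:touches withEvent:event];
}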

Ole Begemann has a great project: OBShapedButton. He achieves it by checking the alpha value of the touched pixel and overriding -pointInside:withEvent: like this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // Return NO if even super returns NO (i.e., if point lies outside our bounds)
    BOOL superResult = [super pointInside:point withEvent:event];
    if (!superResult) {
        return superResult;
    }

    // Don't check again if we just queried the same point
    // (because pointInside:withEvent: gets often called multiple times)
    if (CGPointEqualToPoint(point, self.previousTouchPoint)) {
        return self.previousTouchHitTestResponse;
    } else {
        self.previousTouchPoint = point;
    }

    BOOL response = NO;
    if (self.buttonImage == nil && self.buttonBackground == nil) {
        response = YES;
    }
    else if (self.buttonImage != nil && self.buttonBackground == nil) {
        response = [self isAlphaVisibleAtPoint:point forImage:self.buttonImage];
    }
    else if (self.buttonImage == nil && self.buttonBackground != nil) {
        response = [self isAlphaVisibleAtPoint:point forImage:self.buttonBackground];
    }
    else {
        if ([self isAlphaVisibleAtPoint:point forImage:self.buttonImage]) {
            response = YES;
        } else {
            response = [self isAlphaVisibleAtPoint:point forImage:self.buttonBackground];
        }
    }

    self.previousTouchHitTestResponse = response;
    return response;
}
Here is another code sample that tests whether the point is within a layer mask; maybe you can adapt it more easily:
@implementation MyView

//
// ...
//

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    CGPoint p = [self convertPoint:point toView:[self superview]];
    if (self.layer.mask) {
        if (CGPathContainsPoint([(CAShapeLayer *)self.layer.mask path], NULL, p, YES))
            return YES;
    } else {
        if (CGRectContainsPoint(self.layer.frame, p))
            return YES;
    }
    return NO;
}

@end
The full article: http://blog.vikingosegundo.de/2013/10/01/hittesting-done-right/

In the end this is what I did:
I looped through all of the stand shapes I needed and created a dictionary for each one, containing a UIBezierPath of its coordinates on the floor plan image and the standNo for that shape.
I then added these dictionaries to an array.
When the screen was tapped I noted the X and Y of the tap position, looped through the array of dictionaries, and checked whether each UIBezierPath contained a point made up of the tapped X and Y coordinates.
If a shape was found whose bounds contained the tapped coordinates, I drew a CAShapeLayer from that UIBezierPath with a fillColor so it showed up on the map. An alertView was then displayed with more information about the exhibitor.
This method seemed WAY more efficient than actually drawing hundreds of UIButtons or even CAShapeLayers, and even with 300+ areas on the floor plan to check, the process appears instant.
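A condensed sketch of this approach; the standShapes array, its keys, and the alert wording are assumptions:
- (void)floorPlanTapped:(UITapGestureRecognizer *)recognizer
{
    CGPoint tapPoint = [recognizer locationInView:self.floorPlanImageView];
    for (NSDictionary *stand in self.standShapes) {           // array built from the database
        UIBezierPath *path = stand[@"path"];
        if ([path containsPoint:tapPoint]) {
            // Highlight the stand that was hit
            CAShapeLayer *highlight = [CAShapeLayer layer];
            highlight.path = path.CGPath;
            highlight.fillColor = [UIColor colorWithRed:1 green:0 blue:0 alpha:0.4].CGColor;
            [self.floorPlanImageView.layer addSublayer:highlight];

            // Then show the exhibitor details
            NSString *standNo = stand[@"standNo"];
            [[[UIAlertView alloc] initWithTitle:[NSString stringWithFormat:@"Stand %@", standNo]
                                        message:@"Exhibitor details go here"
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
            break;
        }
    }
}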

Related

UIPageViewController with UISlider inside controller - increase hit area of slider

I have multiple controllers in my PageViewController, and in one controller I have a few sliders. The problem is that the user must touch exactly on the slider circle (I am not sure of the right expression; thumb? - the part that moves), and I would like to increase the area that the slider reacts to, without affecting the whole PageViewController. I tried these solutions but they don't help:
thumbRectForBounds:
- (CGRect)thumbRectForBounds:(CGRect)bounds trackRect:(CGRect)rect value:(float)value
{
    return CGRectInset([super thumbRectForBounds:bounds trackRect:rect value:value], 15, 15);
}
Increase hitTest area:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    if (CGRectContainsPoint(CGRectInset(self.frame, 200, 200), point) ||
        CGRectContainsPoint(CGRectInset(self.frame, 200, 200), point)) {
        return self;
    }
    return [super hitTest:point withEvent:event];
}
I have these methods in my custom slider class because I would like to reuse this. The last thing I found but haven't tried yet is to create some object layer over the slider which "takes" the gesture and disables the PageViewController, but I am not sure how to do it, or whether it's a good/best solution.
I am not a big fan of the UISlider component because as you noticed, it is not trivial to increase the hit area of the actual slider. I would urge you to replicate the UISlider instead using a pan gesture for a much better user experience:
i. create a slider background with a separate UIImageView with a slider image.
ii. create the PanGesture:
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
imageView.userInteractionEnabled = YES; // image views ignore touches by default
[imageView addGestureRecognizer:pan];
iii. implement handlePan Method:
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer
{
    // Pan (slide) in progress: follow the finger horizontally only
    CGPoint translation = [recognizer locationInView:self.view];
    translation.y = self.slideImage.center.y;
    self.slideImage.center = translation;

    if (recognizer.state == UIGestureRecognizerStateEnded) {
        LTDebugLog(@"\n\n PAN, with spot: %f\n\n", self.slideImage.center.x);
        // Do something after the user is done sliding
    }
}
The big benefit of this method is that you will have a much better user experience as you can make the responsive UIImageView as big as you want.
Alternatively, you could subclass a UISlider and increase the hit space there, although in my experience this gives mixed results.
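If you do go the subclassing route, one common way to grow the hit area is to override pointInside:withEvent: with a negative inset; a sketch (BigHitSlider is a hypothetical name):
@interface BigHitSlider : UISlider
@end

@implementation BigHitSlider

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // Expand the touchable area by 20 points on every side
    CGRect expandedBounds = CGRectInset(self.bounds, -20.0, -20.0);
    return CGRectContainsPoint(expandedBounds, point);
}

@end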
Hope this helps
In your CustomSlider class, override the thumbRectForBounds: method and simply return the rect you require:
- (CGRect)thumbRectForBounds:(CGRect)bounds trackRect:(CGRect)rect value:(float)value
{
    return CGRectMake(bounds.origin.x, bounds.origin.y, yourWidthValue, yourHeightValue);
}
Change yourWidthValue and yourHeightValue to suit your requirements. Then create the slider like below:
CustomSlider *slider = [[CustomSlider alloc] initWithFrame:CGRectMake(0, 0, 300, 20)];
[slider thumbRectForBounds:slider.bounds trackRect:slider.frame value:15.f]; // change values as per your requirements
Hope this helps.
Create a custom thumb image which has a large empty margin and set that on your slider, like this:
[theSlider setThumbImage:[UIImage imageNamed:@"slider_thumb_with_margins"] forState:UIControlStateNormal];
To make the image, get a copy of the system thumb image using any one of a number of UIKit artwork extractors (just search the web for one). Open the thumb image in Photoshop and increase the canvas size by twice the margin you want to add. Make sure you change the canvas size and not the image size, as changing the image size would stretch the image to fill the new dimensions. This leaves empty space around the thumb which is part of the hit-test area, but since it is all transparent it won't change the look of the slider.
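If you would rather add the transparent margin in code than in Photoshop, a sketch along these lines should give the same result (the image name and margin are placeholders):
UIImage *thumb = [UIImage imageNamed:@"slider_thumb"];
CGFloat margin = 15.0;
CGSize paddedSize = CGSizeMake(thumb.size.width + 2 * margin,
                               thumb.size.height + 2 * margin);
UIGraphicsBeginImageContextWithOptions(paddedSize, NO, 0.0);
[thumb drawAtPoint:CGPointMake(margin, margin)];             // centre the original thumb
UIImage *paddedThumb = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[theSlider setThumbImage:paddedThumb forState:UIControlStateNormal];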

iOS - How do I know the view that is under the one I'm dragging?

I have a UIScrollView filled with draggable UIView cards as subviews, and I want the cards to re-organize themselves (make space for a new one) when the user drags one of them into the UIScrollView.
The problem is: how do I know which of the UIViews is under the one I'm dragging, so I can get its index and move it away from the card being dragged?
I tried using hitTest:withEvent: but I think I'm far from doing it right, since it's returning nil.
UIView *viewUnderCard = [card hitTest:card.center withEvent:nil];
Just started developing for iOS. Any help please?
You're on the right track; hitTest:withEvent: can be used. But @Mysiaq is correct that pointInside:withEvent: is probably even better.
You need to make sure that the coordinates are relative to the correct view. If you're using card.center, the coordinate system is that of the card's parent view.
The code could look something like this:
UIView *container = viewThatHasAllTheCards;
UIView *targetCard = nil;
CGPoint cardInWindow = [draggedCard.superview convertPoint:draggedCard.center toView:nil];
CGPoint cardInContainer = [container convertPoint:cardInWindow fromView:nil];
for (UIView *subview in container.subviews) {
    if (subview == draggedCard) {
        // Skip the dragged card.
        continue;
    }
    // pointInside: expects the point in the subview's own coordinate system
    CGPoint pointInSubview = [subview convertPoint:cardInContainer fromView:container];
    if ([subview pointInside:pointInSubview withEvent:nil]) {
        targetCard = subview;
        // If you want the lower-most card, break here.
        // If you want the top-most card, do not break here.
    }
}
Get the point of your touch and then call
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
on each of the UIScrollView's subviews; the method returns YES if the provided CGPoint (expressed in that subview's coordinate system) is inside its bounds.
You can compare bounding rects of two views:
CGRect boundsView1 = [view1 convertRect:view1.bounds toView:nil];
CGRect boundsView2 = [view2 convertRect:view2.bounds toView:nil];
BOOL viewsOverlap = CGRectIntersectsRect(boundsView1, boundsView2);
From here, you should be able to figure out how to iterate efficiently through your list of views to determine if any overlap.
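A sketch of that iteration, assuming scrollView holds the cards and draggedCard is the one being moved:
CGRect draggedBounds = [draggedCard convertRect:draggedCard.bounds toView:nil];
for (UIView *card in scrollView.subviews) {
    if (card == draggedCard) {
        continue;                        // don't compare the card with itself
    }
    CGRect cardBounds = [card convertRect:card.bounds toView:nil];
    if (CGRectIntersectsRect(draggedBounds, cardBounds)) {
        NSUInteger index = [scrollView.subviews indexOfObject:card];
        NSLog(@"Dragged card overlaps the card at index %lu", (unsigned long)index);
        break;                           // stop at the first overlapping card
    }
}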

Why is the top portion of my UISegmentedControl not tappable?

While I was playing on my phone, I noticed that my UISegmentedControl was not very responsive. It would take 2 or more tries to make my taps register. So I decided to run my app in Simulator to more precisely probe what was wrong. By clicking dozens of times with my mouse, I determined that the top 25% of the UISegmentedControl does not respond (the portion is highlighted in red with Photoshop in the screenshot below). I am not aware of any invisible UIView that could be blocking it. Do you know how to make the entire control tappable?
self.segmentedControl = [[UISegmentedControl alloc] initWithItems:[NSArray arrayWithObjects:@"Uno", @"Dos", nil]];
self.segmentedControl.selectedSegmentIndex = 0;
[self.segmentedControl addTarget:self action:@selector(segmentedControlChanged:) forControlEvents:UIControlEventValueChanged];
self.segmentedControl.height = 32.0;
self.segmentedControl.width = 310.0;
self.segmentedControl.segmentedControlStyle = UISegmentedControlStyleBar;
self.segmentedControl.tintColor = [UIColor colorWithWhite:0.9 alpha:1.0];
self.segmentedControl.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleRightMargin;

UIView *toolbar = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.width, HEADER_HEIGHT)];
toolbar.autoresizingMask = UIViewAutoresizingFlexibleWidth;

CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = CGRectMake(toolbar.bounds.origin.x,
                            toolbar.bounds.origin.y,
                            // * 2 for enough slack when iPad rotates
                            toolbar.bounds.size.width * 2,
                            toolbar.bounds.size.height);
gradient.colors = [NSArray arrayWithObjects:
                   (id)[[UIColor whiteColor] CGColor],
                   (id)[[UIColor colorWithWhite:0.8 alpha:1.0] CGColor],
                   nil];
[toolbar.layer insertSublayer:gradient atIndex:0];
toolbar.backgroundColor = [UIColor navigationBarShadowColor];
[toolbar addSubview:self.segmentedControl];

UIView *border = [[UIView alloc] initWithFrame:CGRectMake(0, HEADER_HEIGHT - 1, toolbar.width, 1)];
border.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleTopMargin;
border.backgroundColor = [UIColor colorWithWhite:0.7 alpha:1.0];
border.autoresizingMask = UIViewAutoresizingFlexibleWidth;
[toolbar addSubview:border];

[self.segmentedControl centerInParent];
self.tableView.tableHeaderView = toolbar;
Screenshot: http://scs.veetle.com/soget/session-thumbnails/5363e222d2e10/86a8dd984fcaddee339dd881544ecac7/5363e222d2e10_86a8dd984fcaddee339dd881544ecac7_20140509171623_536d6fd78f503_68_896x672.jpg
As already written in other answers, UINavigationBar grabs touches made near the nav bar itself, but not because it has subviews extending over its edges: that is not the reason.
If you log the whole view hierarchy, you will see that the UINavigationBar doesn't extend over its defined edges.
The reason it receives the touches is different:
UIKit has many "special cases", and this is one of them.
When you tap the screen, a process called "hit testing" starts. Starting from the first UIWindow, all views are asked to answer two questions: is the tapped point inside your bounds? Which subview should receive the touch event?
These questions are answered by these two methods:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event;
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event;
Ok, now we can continue.
After the tap, UIApplicationMain starts the hit-testing process. The hit test starts from the main UIWindow (and is executed even on the status bar window and the alert view window, for example) and goes through all the subviews.
This process is executed 3 times:
twice starting from UIWindow
once starting from _UIApplicationHandleEvent
If you tap on the Navigation Bar, you will see that hitTest on UIWindow returns the UINavigationBar (all three times).
If you tap on the area below the Navigation Bar, however, you will see something strange:
the first two hitTest calls return your UISegmentedControl
the last hitTest returns UINavigationBar
Why is this?
If you swizzle or subclass UIView and override hitTest, you will see that the first two times the tapped point is correct. The third time, something changes the point, doing something like point - 15 (or a similar number).
After a lot of searching, I found where this happens:
UIWindow has a (private) method called
-(CGPoint)warpPoint:(CGPoint)point;
Debugging it, I saw that this method changes the tapped point if it is immediately below the status bar.
Debugging further, I saw that the call stack that makes this possible consists of only 3 calls:
[UINavigationBar, _isChargeEnabled]
[UINavigationBar, isEnabled]
[UINavigationBar, _isAlphaHittableAndHasAlphaHittableAncestors]
So, in the end, this warpPoint method checks whether the UINavigationBar is enabled and hittable; if so, it "warps" the point. The point is warped by a number of pixels between 0 and 15, and this warp increases as you get closer to the Navigation Bar.
Now that you know what happens behind the scenes, you need to know how to avoid it (if you want to).
You can't simply override warpPoint: if the application must go on the App Store: it's a private method and your app will be rejected.
You have to find another approach (for example, as suggested, overriding sendEvent:, but I'm not sure whether that will work).
Because this question is interesting, I will think about a legal solution tomorrow and update this answer (one good starting point could be subclassing UINavigationBar, overriding hitTest and pointInside, and returning nil/NO if, given the same event over multiple calls, the point changes. But I must test whether that works tomorrow.)
EDIT
Ok, I've tried many solutions and it's not easy to find a legal and stable one.
I've described the actual behaviour of the system, which could vary between versions (hitTest called more or fewer than 3 times, warpPoint shifting the point by roughly 15px, and so on).
The most stable is obviously the illegal override of warpPoint: in a UIWindow subclass:
- (CGPoint)warpPoint:(CGPoint)point
{
    return point;
}
However, I've found that a method like this (in a UIWindow subclass) is stable enough and does the trick:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // This method is not safe if you tap the screen twice at the same x position
    // with y positions that differ by less than 16px, because it moves the point.
    if (self.lastPoint.x == point.x)
    {
        // The points are on the same vertical line
        if ((0 < (self.lastPoint.y - point.y)) && ((self.lastPoint.y - point.y) < 16))
        {
            // Is there a difference of ~15px in the y position?
            // If so, the point has been changed
            point.y = self.lastPoint.y;
        }
    }

    self.lastPoint = point;
    return [super hitTest:point withEvent:event];
}
This method records the last point tapped, and if a subsequent tap is at the same x with a y that differs by at most 16px, it reuses the previous point's y.
I've tested it a lot and it seems stable.
If you want, you can add more checks to enable this behaviour only in particular controllers, or only in a defined portion of the window, etc.
If I find another solution, I'll update the post.
I believe the problem is because the buttons in the UINavigationBar have a larger than normal touch area. See this SO post. You can also find plenty of discussion on this with a 'UINavigationBar touch area' Google search.
As a possible solution, you could put the segmented control IN the navigation bar, but you would know better than I if that fits your use cases or not.
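If placing it in the navigation bar fits your design, that is essentially a one-liner (assuming the controller is embedded in a navigation controller):
self.navigationItem.titleView = self.segmentedControl;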
I've come up with an alternate solution that to me seems safer than LombaX's. It uses the fact that both events come in with the same timestamp to reject the subsequent event.
@interface RFNavigationBar ()

@property (nonatomic, assign) NSTimeInterval lastOutOfBoundsEventTimestamp;

@end

@implementation RFNavigationBar

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // [rfillion 2014-03-28]
    // UIApplication/UIWindow/UINavigationBar conspire against us. There's a band under the UINavigationBar for which the bar will return
    // subviews instead of nil (to make those tap targets larger, one would assume). We don't want that. To do this, it seems to end up
    // calling -hitTest twice. Once with a value out of bounds which is easy to check for. But then it calls it again with an altered point
    // value that is actually within bounds. The UIEvent it passes to both seem to be the same. However, we can't just compare UIEvent pointers
    // because it looks like these get reused and you end up rejecting valid touches if you just keep around the last bad touch UIEvent. So
    // instead we keep around the timestamp of the last bad event, and try to avoid processing any events whose timestamp isn't larger.
    if (point.y > self.bounds.size.height)
    {
        self.lastOutOfBoundsEventTimestamp = event.timestamp;
        return nil;
    }
    if (event.timestamp <= self.lastOutOfBoundsEventTimestamp + 0.001)
    {
        return nil;
    }

    return [super hitTest:point withEvent:event];
}

@end
You might want to check which view is receiving the touches. Try this method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.view];
    NSLog(@"Touch at %@ on %@", NSStringFromCGPoint(location), touch.view);

    if ([touch.view isKindOfClass:[UISegmentedControl class]])
    {
        NSLog(@"This is the UISegmentedControl");
    }
    else if ([touch.view isKindOfClass:[UITabBar class]])
    {
        NSLog(@"This is the UITabBar");
    }
    // ...add further isKindOfClass: checks for other views here...
}
Once you figure that out, you may be able to narrow down your problem.
It looks as if you're using a category extension to set width/height on views, as well as center them in their parent. Perhaps there is a hidden issue here - can you refactor to do your layout w/out this category?
I copied your code into a clean project and ran it in a UITableViewController's viewDidLoad method - it works fine and I have no dead spots like you report. I had to change your code slightly since I don't have the same category extension that you're using.
Also, if you're running this code in viewDidLoad, you should verify that your view has a defined size (you access your view.width). If you're creating your UITableViewController programmatically (vs from a nib/storyboard) then the frame may be CGRectZero. Mine was loaded from a nib so the frame was preset.
I'd also try temporarily removing your border view to see if it's the culprit.
I recommend that you avoid having touch-sensitive UI in such close proximity to the nav bar or toolbar. These areas are typically known as "slop factors" making it easier for users to perform touch events on buttons without the difficulty of performing precision touches. This is also the case for UIButtons for example.
But if you want to capture the touch event before the navigation bar or toolbar receives it, you can subclass UIWindow and override: -(void)sendEvent:(UIEvent *)event;
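A sketch of that idea: a UIWindow subclass that sees every event before the navigation bar does (EventSpyWindow is a hypothetical name; here it only logs the touches):
@interface EventSpyWindow : UIWindow
@end

@implementation EventSpyWindow

- (void)sendEvent:(UIEvent *)event
{
    for (UITouch *touch in [event allTouches]) {
        CGPoint location = [touch locationInView:self];
        NSLog(@"Touch at %@ on %@", NSStringFromCGPoint(location), touch.view);
    }
    [super sendEvent:event];   // always forward, or nothing receives touches
}

@end
You would also need to make the app actually use this subclass, for example by creating your key window as an instance of it in the app delegate.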
An easy way to debug this is to try using DCIntrospect in your project. It's a very easy to use/implement library that makes finding out what views are where when in the simulator a breeze.
Install the library and configure it.
Run the application in the simulator and navigate to the screen with the issue.
Press spacebar on the keyboard (the computer keyboard, not the simulator's keyboard).
Click on the 25% area and see what gets highlighted.
If what's highlighted isn't the segmented view controller, that view could be what's covering up the touch event.
Create a category on UINavigationBar (add a new file and paste in the code below):
/******** file: UINavigationBar+BelowSpace.h *******/
#import <Foundation/Foundation.h>
@interface UINavigationBar (BelowSpace)
@end
/******** file: UINavigationBar+BelowSpace.m *******/
#import "UINavigationBar+BelowSpace.h"

@implementation UINavigationBar (BelowSpace)

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    int errorMargin = 5; // space left out to decrease the tappable area
    CGRect smallerFrame = CGRectMake(0, 0 - errorMargin, self.frame.size.width, self.frame.size.height);
    BOOL isTouchAllowed = CGRectContainsPoint(smallerFrame, point);
    if (isTouchAllowed) {
        self.userInteractionEnabled = YES;
    } else {
        self.userInteractionEnabled = NO;
    }
    return [super hitTest:point withEvent:event];
}

@end
Hope this helps ^ ^
Try this
self.navigationController!.navigationBar.userInteractionEnabled = false;

Cocos2D - Better to keep array of CGRects or array of CCSprites to track hit areas?

I am putting together a simple board game in cocos2d to get my feet wet.
To track user clicks I intend to listen for clicks on the game squares, not the game pieces, to simplify tracking the pieces. It would be an 8x8 board.
Is it more efficient to:
A. Make an array of CGRects to test against; I'd need to wrap each struct in an NSObject before adding it to the array. Seems simple, but it looks like a lot of work goes into accessing the CGRects every time they are needed.
or
B. Make actual CCSprites and test against their bounding rectangles. Simple to code, but then there are an extra 64 unneeded visual objects on the screen, bloating memory use.
or even
C. Some other method, and I am fundamentally misunderstanding this tool.
I agree it seems unnecessary to create a sprite for each game square on your board if the board is totally static.
However, CGRect is not an object type, so it cannot be added to an NSMutableArray directly, and I will also assume that at some point you will want to do other things with your game squares, such as highlighting them. What I suggest is that you create a class called GameSquare that inherits from CCNode and put the squares into an array:
// GameSquare.h
@interface GameSquare : CCNode {
    // Add nice stuff here about game squares and implement it in GameSquare.m
}
@end
After that, you can create gamesquares as nodes:
// SomeLayer.h
@interface SomeLayer : CCLayer {
    NSMutableArray *myGameSquares;
    GameSquare *magicGameSquare;
}
@property (nonatomic, strong) GameSquare *magicGameSquare;
@end

// SomeLayer.m
/* ... (somewhere in the layer setup, after myGameSquares has been initialised) ... */
GameSquare *g = [[GameSquare alloc] init];
g.position = CGPointMake(x, y); // replace x,y with your coordinates
g.size = CGSizeMake(w, h);      // replace w,h with your sizes
[myGameSquares addObject:g];

self.magicGameSquare = [[GameSquare alloc] init];
magicGameSquare.position = CGPointMake(mX, mY); // replace with your coordinates
magicGameSquare.size = CGSizeMake(mW, mH);      // replace with your sizes
After that, you can hit test against the game squares like this (in your CCLayer subclass):
// SomeLayer.m
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    CGPoint location = [self convertTouchToNodeSpace:touch];

    // Example of another object the user might have pressed
    if (CGRectContainsPoint([magicGameSquare getBounds], location)) {
        // User pressed the magic square!
        [magicGameSquare doHitStuff];
    } else {
        for (int i = 0; i < myGameSquares.count; i++) {
            GameSquare *square = [myGameSquares objectAtIndex:i];
            if (CGRectContainsPoint(square.boundingBox, location)) {
                // This is the square the user pressed!
                [square doHitStuff];
                break;
            }
        }
    }
    return YES;
}
Yes, you will have to loop through the list, but unless the player can press many squares in one touch, you can stop the search as soon as the right one is found, as demonstrated in the example.
(This assumes ARC is in use.)
PS. If at some point you need to add a sprite to a GameSquare, simply add a CCSprite member to your GameSquare class and refer to that.
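A minimal sketch of that addition to the GameSquare class above; the property and texture names are assumptions:
// GameSquare.h
@property (nonatomic, strong) CCSprite *sprite; // optional visual for this square

// GameSquare.m (e.g. in the square's setup code)
self.sprite = [CCSprite spriteWithFile:@"square.png"]; // hypothetical texture name
[self addChild:self.sprite];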

How to make possible interaction with child UIView's drawn outside the parent UIView bounds?

I have something like a mixer, which reveals the track volume sliders when I touch a button.
To avoid making the view bigger than it needs to be, I'm drawing the volume sliders outside its bounds. The problem is that the touches are now handled by whatever is below those sliders, not by the sliders themselves.
How can I make a child UIView receive touches when it is outside its parent's bounds, but above anything else drawn around it?
Is this possible?
I tried the hit test method suggested in the link below without success:
interaction beyond bounds of uiview
I have found the solution to this problem. Basically I need to override the method
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
which "Returns a Boolean value indicating whether the receiver contains the specified point."
First I test the point against the super implementation. If that returns NO, I test it against the objects that are drawn outside the parent's bounds:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    if ([super pointInside:point withEvent:event])
    {
        return YES;
    }
    else
    {
        id elem;
        NSEnumerator *enumerator = [tracks objectEnumerator];
        while ((elem = [enumerator nextObject]))
        {
            LKTrack *track = (LKTrack *)elem;
            if ([track pointInside:[self convertPoint:point toView:track] withEvent:event])
            {
                return YES;
            }
        }
    }
    return NO;
}
In the answer above, what is the 'tracks' and how do you get it?
