UITapGestureRecognizer Programmatically trigger a tap in my view - ios

Edit: Updated to make question more obvious
Edit 2: Made question more accurate to my real-world problem. I'm actually looking to take action if they tap anywhere EXCEPT in an on-screen text-field. Thus, I can't simply listen for events within the textfield, I need to know if they tapped anywhere in the View.
I'm writing unit tests to assert that a certain action is taken when a gesture recognizer recognizes a tap within certain coordinates of my view. I want to know if I can programmatically create a touch (at specific coordinates) that will be handled by the UITapGestureRecognizer. I'm attempting to simulate the user interaction during a unit test.
The UITapGestureRecognizer is configured in Interface Builder
//MYUIViewControllerSubclass.m
-(IBAction)viewTapped:(UITapGestureRecognizer*)gesture {
CGPoint tapPoint = [gesture locationInView:self.view];
if (!CGRectContainsPoint(self.textField.frame, tapPoint)) {
// Do stuff if they tapped anywhere outside the text field
}
}
//MYUIViewControllerSubclassTests.m
//What I'm trying to accomplish in my unit test:
-(void)testThatTappingInNoteworthyAreaTriggersStuff {
// Create fake gesture recognizer and ViewController
MYUIViewControllerSubclass *vc = [[MYUIViewControllerSubclass alloc] init];
UITapGestureRecognizer *tgr = [[UITapGestureRecognizer alloc] initWithTarget:vc action:@selector(viewTapped:)];
// What I want to do:
[[ Simulate A Tap anywhere outside vc.textField ]]
[[ Assert that "Stuff" occurred ]]
}

There is a much simpler way to trigger a touch for a UITapGestureRecognizer in a unit test, using a single line. Assuming you have a variable that holds a reference to the tap gesture recognizer, all you need is the following:
singleTapGestureRecognizer?.state = .ended
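As a minimal sketch of how that one-liner might sit inside an XCTest case (the MyApp module, MyViewController, viewTapGestureRecognizer, and didHandleTapOutsideTextField names are illustrative assumptions, not part of the question):

import XCTest
@testable import MyApp

final class MyViewControllerTests: XCTestCase {
    func testTapOutsideTextFieldTriggersStuff() {
        let vc = MyViewController()
        vc.loadViewIfNeeded()
        // Driving the recognizer into .ended makes UIKit fire its target/action;
        // depending on the iOS version this can happen on the next run-loop pass.
        vc.viewTapGestureRecognizer.state = .ended
        RunLoop.current.run(until: Date().addingTimeInterval(0.1))
        XCTAssertTrue(vc.didHandleTapOutsideTextField)
    }
}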

I think you have multiple options here:
Maybe the simplest would be to send a control-event action to your view, but I don't think that's what you really want, since you want to be able to choose where the tap occurs.
[yourView sendActionsForControlEvents:UIControlEventTouchUpInside];
You could use the UI Automation tool provided with Xcode Instruments. This blog explains how to automate your UI tests with a script.
There is also this solution that explains how to synthesize touch events on the iPhone, but make sure you only use it for unit tests. It sounds more like a hack to me, and I would consider it a last resort if the two previous points don't fulfill your needs.

What you attempt to do is very hard (but not entirely impossible) while staying on the (iTunes-)legal path.
Let me first draft the right way:
The proper way to do this is UIAutomation. UIAutomation does exactly what you ask for; it simulates user behaviour for all kinds of tests.
Now the hard way:
The issue your problem boils down to is instantiating a new UIEvent. (Un)fortunately UIKit does not offer any constructors for such events, for obvious security reasons. There are, however, workarounds that did work in the past; I'm not sure whether they still do.
Have a look at Matt Gallagher's awesome blog post drafting a solution for synthesizing touch events.
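For completeness, the "right way" today would be XCUITest, which replaced the Instruments UI Automation tool and can synthesize a tap at arbitrary coordinates. A minimal sketch, with the accessibility identifier as an assumption:

import XCTest

final class TapOutsideTextFieldUITests: XCTestCase {
    func testTapOutsideTextFieldTriggersStuff() {
        let app = XCUIApplication()
        app.launch()
        // Tap at a normalized offset well away from the text field.
        let point = app.windows.firstMatch.coordinate(
            withNormalizedOffset: CGVector(dx: 0.9, dy: 0.9))
        point.tap()
        XCTAssertTrue(app.staticTexts["stuffHappenedLabel"].waitForExistence(timeout: 2))
    }
}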

If used in tests, you can either use a test library called SpecTools, which helps with all this and more, or use its code directly:
// Return type alias
public typealias TargetActionInfo = [(target: AnyObject, action: Selector)]
// UIGestureRecognizer extension
extension UIGestureRecognizer {
// MARK: Retrieving targets from gesture recognizers
/// Returns all actions and selectors for a gesture recognizer
/// This method uses private APIs and will most likely cause your app to be rejected if used outside of your test target
/// - Returns: [(target: AnyObject, action: Selector)] Array of action/selector tuples
public func getTargetInfo() -> TargetActionInfo {
var targetsInfo: TargetActionInfo = []
if let targets = self.value(forKeyPath: "_targets") as? [NSObject] {
for target in targets {
// Getting selector by parsing the description string of a UIGestureRecognizerTarget
let selectorString = String.init(describing: target).components(separatedBy: ", ").first!.replacingOccurrences(of: "(action=", with: "")
let selector = NSSelectorFromString(selectorString)
// Getting target from iVars
let targetActionPairClass: AnyClass = NSClassFromString("UIGestureRecognizerTarget")!
let targetIvar: Ivar = class_getInstanceVariable(targetActionPairClass, "_target")
let targetObject: AnyObject = object_getIvar(target, targetIvar) as! AnyObject
targetsInfo.append((target: targetObject, action: selector))
}
}
return targetsInfo
}
/// Executes all targets on a gesture recognizer
public func execute() {
let targetsInfo = self.getTargetInfo()
for info in targetsInfo {
info.target.performSelector(onMainThread: info.action, with: nil, waitUntilDone: true)
}
}
}
Both the library and the snippet use private APIs and will probably cause a rejection if used outside of your test suite ...
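A minimal usage sketch for the execute() helper above, assuming the recognizer is already attached to the view under test:

import UIKit

func simulateTap(on view: UIView) {
    view.gestureRecognizers?
        .compactMap { $0 as? UITapGestureRecognizer }
        .forEach { $0.execute() }   // invokes every registered target/action
}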

Answer by @Ondrej updated to Swift 4:
// Return type alias
typealias TargetActionInfo = [(target: AnyObject, action: Selector)]
// UIGestureRecognizer extension
extension UIGestureRecognizer {
// MARK: Retrieving targets from gesture recognizers
/// Returns all actions and selectors for a gesture recognizer
/// This method uses private APIs and will most likely cause your app to be rejected if used outside of your test target
/// - Returns: [(target: AnyObject, action: Selector)] Array of action/selector tuples
func getTargetInfo() -> TargetActionInfo {
guard let targets = value(forKeyPath: "_targets") as? [NSObject] else {
return []
}
var targetsInfo: TargetActionInfo = []
for target in targets {
// Getting selector by parsing the description string of a UIGestureRecognizerTarget
let description = String(describing: target).trimmingCharacters(in: CharacterSet(charactersIn: "()"))
var selectorString = description.components(separatedBy: ", ").first ?? ""
selectorString = selectorString.components(separatedBy: "=").last ?? ""
let selector = NSSelectorFromString(selectorString)
// Getting target from iVars
if let targetActionPairClass = NSClassFromString("UIGestureRecognizerTarget"),
let targetIvar = class_getInstanceVariable(targetActionPairClass, "_target"),
let targetObject = object_getIvar(target, targetIvar) {
targetsInfo.append((target: targetObject as AnyObject, action: selector))
}
}
return targetsInfo
}
/// Executes all targets on a gesture recognizer
func sendActions() {
let targetsInfo = getTargetInfo()
for info in targetsInfo {
info.target.performSelector(onMainThread: info.action, with: self, waitUntilDone: true)
}
}
}
Usage:
struct Automator {
static func tap(view: UIView) {
let grs = view.gestureRecognizers?.compactMap { $0 as? UITapGestureRecognizer } ?? []
grs.forEach { $0.sendActions() }
}
}
let myView = ... // View under UI Logic Test
Automator.tap(view: myView)

I was facing the same issue, trying to simulate a tap on a table cell to automate a test for a view controller which handles tapping on a table.
The controller has a private UITapGestureRecognizer created as below:
gestureRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self
action:@selector(didRecognizeTapOnTableView)];
The unit test should simulate a touch so that the gestureRecognizer would trigger the action as if it originated from user interaction.
None of the proposed solutions worked in this scenario, so I solved it by decorating UITapGestureRecognizer, faking the exact methods called by the controller. I added a "performTap" method that calls the action in such a way that the controller itself is unaware of where the action originated. This way, I could write a unit test for the controller that is independent of the gesture recognizer and depends only on the action triggered.
This is my category, hope it helps someone.
CGPoint mockTappedPoint;
UIView *mockTappedView = nil;
id mockTarget = nil;
SEL mockAction;
@implementation UITapGestureRecognizer (MockedGesture)
-(id)initWithTarget:(id)target action:(SEL)action {
mockTarget = target;
mockAction = action;
return [super initWithTarget:target action:action];
// the code above calls UIGestureRecognizer's init..., but it doesn't matter
}
-(UIView *)view {
return mockTappedView;
}
-(CGPoint)locationInView:(UIView *)view {
return [view convertPoint:mockTappedPoint fromView:mockTappedView];
}
-(UIGestureRecognizerState)state {
return UIGestureRecognizerStateEnded;
}
-(void)performTapWithView:(UIView *)view andPoint:(CGPoint)point {
mockTappedView = view;
mockTappedPoint = point;
[mockTarget performSelector:mockAction];
}
@end

Okay, I've turned the above into a category that works.
Interesting bits:
Categories can't add member variables. Anything you add becomes static to the class and thus is clobbered by Apple's many UITapGestureRecognizers.
So, use associated objects (objc_setAssociatedObject) to make the magic happen,
with NSValue for storing non-objects.
Apple's init method contains important configuration logic; we could guess at what it sets (number of taps, number of touches, what else?).
But this is doomed. So, we swizzle in our own init method that preserves the mocks.
The header file is trivial; here's the implementation.
#import "UITapGestureRecognizer+Spec.h"
#import "objc/runtime.h"
/*
* With great contributions from Matt Gallagher (http://www.cocoawithlove.com/2008/10/synthesizing-touch-event-on-iphone.html)
* And Glauco Aquino (http://stackoverflow.com/users/2276639/glauco-aquino)
* And Codeshaker (http://codeshaker.blogspot.com/2012/01/calling-original-overridden-method-from.html)
*/
@interface UITapGestureRecognizer (SpecPrivate)
@property (strong, nonatomic, readwrite) UIView *mockTappedView_;
@property (assign, nonatomic, readwrite) CGPoint mockTappedPoint_;
@property (strong, nonatomic, readwrite) id mockTarget_;
@property (assign, nonatomic, readwrite) SEL mockAction_;
@end
NSString const *MockTappedViewKey = @"MockTappedViewKey";
NSString const *MockTappedPointKey = @"MockTappedPointKey";
NSString const *MockTargetKey = @"MockTargetKey";
NSString const *MockActionKey = @"MockActionKey";
@implementation UITapGestureRecognizer (Spec)
// It is necessary to call the original init method; super does not set appropriate variables.
// (eg, number of taps, number of touches, gods know what else)
// Swizzle our own method into its place. Note that Apple misspells 'swizzle' as 'exchangeImplementation'.
+(void)load {
method_exchangeImplementations(class_getInstanceMethod(self, @selector(initWithTarget:action:)),
class_getInstanceMethod(self, @selector(initWithMockTarget:mockAction:)));
}
-(id)initWithMockTarget:(id)target mockAction:(SEL)action {
self = [self initWithMockTarget:target mockAction:action];
self.mockTarget_ = target;
self.mockAction_ = action;
self.mockTappedView_ = nil;
return self;
}
-(UIView *)view {
return self.mockTappedView_;
}
-(CGPoint)locationInView:(UIView *)view {
return [view convertPoint:self.mockTappedPoint_ fromView:self.mockTappedView_];
}
//-(UIGestureRecognizerState)state {
// return UIGestureRecognizerStateEnded;
//}
-(void)performTapWithView:(UIView *)view andPoint:(CGPoint)point {
self.mockTappedView_ = view;
self.mockTappedPoint_ = point;
// A leak warning appears because the compiler can't tell whether this method
// adheres to standard naming conventions, so it can't make the right behavioral decision. Suppress it.
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Warc-performSelector-leaks"
[self.mockTarget_ performSelector:self.mockAction_];
#pragma clang diagnostic pop
}
#pragma mark - Who says we can't add members in a category?
- (void)setMockTappedView_:(UIView *)mockTappedView {
objc_setAssociatedObject(self, &MockTappedViewKey, mockTappedView, OBJC_ASSOCIATION_ASSIGN);
}
-(UIView *)mockTappedView_ {
return objc_getAssociatedObject(self, &MockTappedViewKey);
}
- (void)setMockTappedPoint_:(CGPoint)mockTappedPoint {
objc_setAssociatedObject(self, &MockTappedPointKey, [NSValue value:&mockTappedPoint withObjCType:@encode(CGPoint)], OBJC_ASSOCIATION_COPY);
}
- (CGPoint)mockTappedPoint_ {
NSValue *value = objc_getAssociatedObject(self, &MockTappedPointKey);
CGPoint aPoint;
[value getValue:&aPoint];
return aPoint;
}
- (void)setMockTarget_:(id)mockTarget {
objc_setAssociatedObject(self, &MockTargetKey, mockTarget, OBJC_ASSOCIATION_ASSIGN);
}
- (id)mockTarget_ {
return objc_getAssociatedObject(self, &MockTargetKey);
}
- (void)setMockAction_:(SEL)mockAction {
objc_setAssociatedObject(self, &MockActionKey, NSStringFromSelector(mockAction), OBJC_ASSOCIATION_COPY);
}
- (SEL)mockAction_ {
NSString *selectorString = objc_getAssociatedObject(self, &MockActionKey);
return NSSelectorFromString(selectorString);
}
@end

CGPoint tapPoint = [gesture locationInView:self.view];
should be
CGPoint tapPoint = [gesture locationInView:gesture.view];
because the CGPoint should be retrieved relative to the view the gesture is actually attached to, rather than guessing which view it is in

Related

iOS 7 UIWebView keyboard issue

I have to remove this bar as described in this link, but for iOS 7 that code does not work.
We remove this bar with some Objective-C runtime trickery.
We have a class which has one method:
@interface _SwizzleHelper : NSObject @end
@implementation _SwizzleHelper
-(id)inputAccessoryView
{
return nil;
}
@end
Once we have a web view from which we want to remove the bar, we iterate its scroll view's subviews and take the UIWebDocumentView class. We then dynamically make the superclass of the class we created above be the subview's class (UIWebDocumentView - but we cannot say that upfront because this is private API), and replace the subview's class with our class.
#import "objc/runtime.h"
-(void)__removeInputAccessoryView
{
UIView* subview;
for (UIView* view in self.scrollView.subviews) {
if([[view.class description] hasPrefix:@"UIWeb"])
subview = view;
}
if(subview == nil) return;
NSString* name = [NSString stringWithFormat:@"%@_SwizzleHelper", subview.class.superclass];
Class newClass = NSClassFromString(name);
if(newClass == nil)
{
newClass = objc_allocateClassPair(subview.class, [name cStringUsingEncoding:NSASCIIStringEncoding], 0);
if(!newClass) return;
Method method = class_getInstanceMethod([_SwizzleHelper class], @selector(inputAccessoryView));
class_addMethod(newClass, @selector(inputAccessoryView), method_getImplementation(method), method_getTypeEncoding(method));
objc_registerClassPair(newClass);
}
object_setClass(subview, newClass);
}
The equivalent of the above in Swift 3.0:
import UIKit
import ObjectiveC
var swizzledClassMapping = [AnyClass]()
extension UIWebView {
func noInputAccessoryView() -> UIView? {
return nil
}
public func removeInputAccessoryView() {
var subview: AnyObject?
for (_, view) in scrollView.subviews.enumerated() {
if NSStringFromClass(type(of: view)).hasPrefix("UIWeb") {
subview = view
}
}
guard subview != nil else {
return
}
//Guard in case this method is called twice on the same webview.
guard !(swizzledClassMapping as NSArray).contains(type(of: subview!)) else {
return;
}
let className = "\(type(of: subview!))_SwizzleHelper"
var newClass : AnyClass? = NSClassFromString(className)
if newClass == nil {
newClass = objc_allocateClassPair(type(of: subview!), className, 0)
guard newClass != nil else {
return;
}
let method = class_getInstanceMethod(type(of: self), #selector(UIWebView.noInputAccessoryView))
class_addMethod(newClass!, #selector(getter: UIResponder.inputAccessoryView), method_getImplementation(method), method_getTypeEncoding(method))
objc_registerClassPair(newClass!)
swizzledClassMapping += [newClass!]
}
object_setClass(subview!, newClass!)
}
}
I've made a cocoapod based on this blog post from @bjhomer.
You can replace the inputAccessoryView and not just hide it. I hope this will help people with the same issue.
https://github.com/lauracpierre/FA_InputAccessoryViewWebView
You can find the cocoapod page right here.
I came across this awesome solution, but I needed to get the inputAccessoryView back as well. I added this method:
- (void)__addInputAccessoryView {
UIView* subview;
for (UIView* view in self.scrollView.subviews) {
if([[view.class description] hasSuffix:@"SwizzleHelper"])
subview = view;
}
if(subview == nil) return;
Class newClass = subview.superclass;
object_setClass(subview, newClass);
}
It does seem to work as intended with no side effects, but I can't get rid of the feeling that my pants are on fire.
If you want Leo Natan's solution to work with WKWebView instead of UIWebView, just change the prefix from "UIWeb" to "WKContent".
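A small Swift sketch of that prefix change, locating WKWebView's content subview the same way the code above locates the "UIWeb"-prefixed one (the class prefix is private API and may change between iOS releases):

import WebKit

func contentSubview(of webView: WKWebView) -> UIView? {
    // WKWebView hosts its editable content in a private "WKContent..." subview.
    return webView.scrollView.subviews.first {
        NSStringFromClass(type(of: $0)).hasPrefix("WKContent")
    }
}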
I created a gist to accomplish this:
https://gist.github.com/kgaidis/5f9a8c7063b687cc3946fad6379c1a66
It's a UIWebView category where all you do is change the customInputAccessoryView property:
@interface UIWebView (CustomInputAccessoryView)
@property (strong, nonatomic) UIView *customInputAccessoryView;
@end
You can either set it to nil to remove it or you can set a new view on it to change it.
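A usage sketch, assuming a UIWebView instance and the category above are in scope (the toolbar is a placeholder):

let toolbar = UIToolbar()                    // placeholder replacement view
webView.customInputAccessoryView = nil       // removes the accessory bar entirely
webView.customInputAccessoryView = toolbar   // or swaps in your own view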
Keep in mind, this also uses private APIs, so use it at your own risk, but it seems like a lot of apps do similar things nonetheless.

Program IBAction Button To Turn Map Layer ON/OFF

I'm using an IBAction button to turn a map layer on. This code turns it on when the button is tapped.
- (IBAction)lightingLayer:(id)sender {
[_mapView addTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
}
Now I'd like to adjust it so that when the user taps it once, the layer turns on, and when it's tapped again, it turns off, and so forth. I took a stab at it by borrowing code from a similar example, but it doesn't work.
- (IBAction)lightingLayer:(id)sender {
_Bool *isON = NULL;
isON = !isON;
if(isON) {
[_mapView addTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
} else {
[_mapView removeTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
}
This flags an "incompatible integer to pointer conversion assigning bool from int" warning. Can someone provide some simple code to help me achieve my goal? Thanks in advance for your time.
This error is because you are assigning a bool value to a pointer. A pointer is nothing but an integer value which holds a memory position as a hexadecimal number.
But actually, to accomplish what you want, you don't need a pointer; just use a property to store this bool and create toggle functionality.
declare this private property:
@property (nonatomic, assign) BOOL isChecked;
And in your action:
- (IBAction)lightingLayer:(id)sender {
self.isChecked = !self.isChecked;
if(self.isChecked) {
[_mapView addTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
} else {
[_mapView removeTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
}
}
PS: I only focused here on explaining the error you are getting now. This add/remove tile logic is probably wrong too; I think you would still have to keep a reference to the same tile source that was added so it can later be removed.
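A rough Swift sketch of that last point: hold a single tile-source reference so the object you add is the exact object you later remove. The RMMapView/RMMapBoxSource method names are inferred from the Objective-C calls in the question and are not verified against the SDK:

final class LayerToggler {
    private var lightingSource: RMMapBoxSource?   // the one reference we add and remove

    func toggleLighting(on mapView: RMMapView) {
        if let source = lightingSource {
            mapView.removeTileSource(source)      // remove the same object we added
            lightingSource = nil
        } else {
            let source = RMMapBoxSource(mapID: "MapID")
            mapView.addTileSource(source)
            lightingSource = source
        }
    }
}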
Do it this way:
BOOL isON;
- (IBAction)lightingLayer:(id)sender {
if(isON) {
[_mapView addTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
isON=NO;
} else {
[_mapView removeTileSource:[[RMMapBoxSource alloc] initWithMapID:@"MapID"]];
isON=YES;
}
}
This is what I went with. It's a slight adjustment of Lucas's answer. This will alternate turning the map layer on and off. Thanks for the responses.
//.h
@property (nonatomic, assign) BOOL isChecked;
//.m
self.isChecked = !self.isChecked;
if((self.isChecked)) {
[_mapView addTileSource:onlineSource atIndex:1];
} else {
[_mapView setHidden:YES forTileSourceAtIndex:1];
}

iOS Multi-Touch Not Working

I have the regular OpenGL / EAGL setup going on:
@interface EAGLView : UIView {
@public
EAGLContext* context;
}
@property (nonatomic, retain) EAGLContext* context;
@end
@implementation EAGLView
@synthesize context;
+ (Class)layerClass {
return [CAEAGLLayer class];
}
@end
@interface EAGLViewController : UIViewController {
@public
EAGLView* glView;
}
@property (nonatomic, retain) EAGLView* glView;
@end
@implementation EAGLViewController
@synthesize glView;
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
for (UITouch* touch in touches) {
CGPoint location = [touch locationInView:glView];
int index;
for (index = 0; index < gCONST_CURSOR_COUNT; ++index) {
if (sCursor[index] == NULL) {
sCursor[index] = touch;
break;
}
}
}
[super touchesBegan:touches withEvent:event];
}
That implementation includes corresponding touchesEnded/Canceled/Moved as well. The code fully works and tracks well.
I also make sure that I'm giving proper values for everything:
sViewController = [EAGLViewController alloc];
CGRect rect = [[UIScreen mainScreen] applicationFrame];
sViewController.glView = [[EAGLView alloc] initWithFrame:CGRectMake(rect.origin.x, rect.origin.y, rect.size.width, rect.size.height)];
Assert(sViewController.glView);
sViewController.glView.userInteractionEnabled = YES;
sViewController.glView.multipleTouchEnabled = YES;
sViewController.glView.exclusiveTouch = YES;
It all compiles just fine, but I'm never receiving more than one UITouch. I don't mean in a single touchesBegan call; the index never goes past 0. I also set a breakpoint for the second time it enters that function, and putting two fingers down doesn't trigger it.
If you want to detect multiple touches (and/or distinguish between a one finger, two finger etc. touch), try using a UIPanGestureRecognizer. When you set it up, you can specify the minimum and maximum number of touches. Then attach it to the view where you want to detect the touches. When you receive events from it, you can ask it how many touches it received and branch accordingly.
Here's the apple documentation:
http://developer.apple.com/library/ios/#documentation/uikit/reference/UIPanGestureRecognizer_Class/Reference/Reference.html
If you do this, you might not need to use the touchesBegan/Moved/Ended methods at all and, depending on how you set up the gesture recognizer, touchesBegan/Moved/Ended may never get called.
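A minimal Swift sketch of that setup (a stand-in view controller rather than the question's EAGLViewController; touch limits are illustrative):

import UIKit

class MultiTouchViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.isMultipleTouchEnabled = true
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        pan.minimumNumberOfTouches = 1   // fires for one or more fingers
        pan.maximumNumberOfTouches = 5
        view.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ recognizer: UIPanGestureRecognizer) {
        // Branch on how many fingers are currently down.
        print("\(recognizer.numberOfTouches) finger(s) panning")
    }
}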
Use [event allTouches] in place of touches. touches represents only the touches that have 'changed'. From the Apple docs:
If you are interested in touches that have not changed since the last phase or that are in a different phase than the touches in the passed-in set, you can find those in the event object. Figure 3-2 depicts an event object that contains touch objects. To get all of these touch objects, call the allTouches method on the event object.
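A small Swift sketch of the difference inside a UIView subclass (the Objective-C equivalent reads [event allTouches]):

import UIKit

class TouchCountingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // `touches` holds only the touches that changed in this phase;
        // `event?.allTouches` holds every touch currently on the screen.
        let allActive = event?.allTouches ?? touches
        print("changed: \(touches.count), total active: \(allActive.count)")
        super.touchesBegan(touches, with: event)
    }
}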
It seems all I was missing was this:
sViewController.view = sViewController.glView;

IOS UIMenuController UIMenuItem, how to determine item selected with generic selector method

With the following setup
....
MyUIMenuItem *someAction = [[MyUIMenuItem alloc] initWithTitle:@"Something" action:@selector(menuItemSelected:)];
MyUIMenuItem *someAction2 = [[MyUIMenuItem alloc] initWithTitle:@"Something2" action:@selector(menuItemSelected:)];
....
- (IBAction) menuItemSelected : (id) sender
{
UIMenuController *mmi = (UIMenuController*) sender;
}
How do I figure out which menu item was selected?
And don't say that you need to have two methods... Thanks in advance.
Okay, I've solved this one. The solution isn't pretty, and the better option is "Apple fixes the problem", but this at least works.
First of all, prefix your UIMenuItem action selectors with "magic_". And don't make corresponding methods. (If you can do that, then you don't need this solution anyway).
I'm building my UIMenuItems thus:
NSArray *buttons = [NSArray arrayWithObjects:@"some", @"random", @"stuff", nil];
NSMutableArray *menuItems = [NSMutableArray array];
for (NSString *buttonText in buttons) {
NSString *sel = [NSString stringWithFormat:@"magic_%@", buttonText];
[menuItems addObject:[[UIMenuItem alloc]
initWithTitle:buttonText
action:NSSelectorFromString(sel)]];
}
[UIMenuController sharedMenuController].menuItems = menuItems;
Now your class that catches the button tap messages needs a few additions. (In my case the class is a subclass of UITextField. Yours might be something else.)
First up, the method that we've all been wanting to have but that didn't exist:
- (void)tappedMenuItem:(NSString *)buttonText {
NSLog(#"They tapped '%#'", buttonText);
}
Then the methods that make it possible:
- (BOOL)canPerformAction:(SEL)action withSender:(id)sender {
NSString *sel = NSStringFromSelector(action);
NSRange match = [sel rangeOfString:@"magic_"];
if (match.location == 0) {
return YES;
}
return NO;
}
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
if ([super methodSignatureForSelector:sel]) {
return [super methodSignatureForSelector:sel];
}
return [super methodSignatureForSelector:@selector(tappedMenuItem:)];
}
- (void)forwardInvocation:(NSInvocation *)invocation {
NSString *sel = NSStringFromSelector([invocation selector]);
NSRange match = [sel rangeOfString:@"magic_"];
if (match.location == 0) {
[self tappedMenuItem:[sel substringFromIndex:6]];
} else {
[super forwardInvocation:invocation];
}
}
One would expect that the action associated with a given menu item would include a sender parameter pointing to the chosen menu item. Then you could simply examine the title of the item, or do as kforkarim suggests and subclass UIMenuItem to include a property that you can use to identify the item. Unfortunately, according to this SO question, the sender parameter is always nil. That question is over a year old, so things may have changed -- take a look at what you get in that parameter.
Alternately, it looks like you'll need a different action for each menu item. Of course, you could set it up so that all your actions call a common method, and if they all do something very similar, that might make sense.
Turns out it's possible to obtain the UIButton object (which is actually a UICalloutBarButton) that represents the UIMenuItem if you subclass UIApplication and reimplement -sendAction:to:from:forEvent:. Although only the -flash selector goes through UIApplication, it's enough.
@interface MyApplication : UIApplication
@end
@implementation MyApplication
- (BOOL)sendAction:(SEL)action to:(id)target from:(id)sender forEvent:(UIEvent *)event
{
// target == sender condition is just an additional one
if (action == @selector(flash) && target == sender && [target isKindOfClass:NSClassFromString(@"UICalloutBarButton")]) {
NSLog(@"pressed menu item title: %@", [(UIButton *)target titleLabel].text);
}
return [super sendAction:action to:target from:sender forEvent:event];
}
@end
You can save target (or any data you need from it) in e.g. property and access it later from your UIMenuItem's action.
And to make your UIApplication subclass work, you must pass its name as a third parameter to UIApplicationMain():
int main(int argc, char *argv[])
{
@autoreleasepool {
return UIApplicationMain(argc, argv, NSStringFromClass([MyApplication class]), NSStringFromClass([YOUR_APP_DELEGATE class]));
}
}
This solution works on iOS 5.x-7.0 as of post date (didn't test on older versions).
ort11, you might want to create a property on MyUIMenuItem and set some sort of tag. That way the sender object can be recognized by its tag. In the IBAction you can then set up a switch statement that corresponds to each sender.tag and work through that logic. I guess that's the simplest way to go.

How can I click a button behind a transparent UIView?

Let's say we have a view controller with one subview. The subview takes up the center of the screen with 100 px margins on all sides. We then add a bunch of little stuff to click on inside that subview. We are only using the subview to take advantage of the new frame (x=0, y=0 inside the subview is actually 100,100 in the parent view).
Then, imagine that we have something behind the subview, like a menu. I want the user to be able to select any of the "little stuff" in the subview, but if there is nothing there, I want touches to pass through it (since the background is clear anyway) to the buttons behind it.
How can I do this? It looks like touchesBegan goes through, but buttons don't work.
Create a custom view for your container and override the pointInside: message to return false when the point isn't within an eligible child view, like this:
Swift:
class PassThroughView: UIView {
override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
for subview in subviews {
if !subview.isHidden && subview.isUserInteractionEnabled && subview.point(inside: convert(point, to: subview), with: event) {
return true
}
}
return false
}
}
Objective-C:
@interface PassthroughView : UIView
@end
@implementation PassthroughView
-(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
for (UIView *view in self.subviews) {
if (!view.hidden && view.userInteractionEnabled && [view pointInside:[self convertPoint:point toView:view] withEvent:event])
return YES;
}
return NO;
}
@end
Using this view as a container will allow any of its children to receive touches but the view itself will be transparent to events.
I also use
myView.userInteractionEnabled = NO;
No need to subclass. Works fine.
From Apple:
Event forwarding is a technique used by some applications. You forward touch events by invoking the event-handling methods of another responder object. Although this can be an effective technique, you should use it with caution. The classes of the UIKit framework are not designed to receive touches that are not bound to them .... If you want to conditionally forward touches to other responders in your application, all of these responders should be instances of your own subclasses of UIView.
Apples Best Practise:
Do not explicitly send events up the responder chain (via nextResponder); instead, invoke the superclass implementation and let the UIKit handle responder-chain traversal.
instead you can override:
-(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
in your UIView subclass and return NO if you want that touch to be sent up the responder chain (I.E. to views behind your view with nothing in it).
A far simpler way is to un-check "User Interaction Enabled" in Interface Builder, if you are using a storyboard.
Lately I wrote a class that will help me with just that. Using it as a custom class for a UIButton or UIView will pass touch events that were executed on a transparent pixel.
This solution is somewhat better than the accepted answer because you can still click a UIButton that is under a semi-transparent UIView, while the non-transparent part of the UIView will still respond to touch events.
As you can see in the GIF, the Giraffe button is a simple rectangle but touch events on transparent areas are passed on to the yellow UIButton underneath.
Link to class
The top-voted solution did not fully work for me; I guess it was because I had a TabBarController in the hierarchy (as one of the comments points out). It was in fact passing touches along to some parts of the UI, but it was messing with my tableView's ability to intercept touch events. What finally did it was overriding hitTest in the view whose touches I want to ignore, and letting that view's subviews handle them.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event{
UIView *view = [super hitTest:point withEvent:event];
if (view == self) {
return nil; //avoid delivering touch events to the container view (self)
}
else{
return view; //the subviews will still receive touch events
}
}
Building on what John posted, here is an example that will allow touch events to pass through all subviews of a view except for buttons:
-(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
// Allow buttons to receive press events. All other views will get ignored
for( id foundView in self.subviews )
{
if( [foundView isKindOfClass:[UIButton class]] )
{
UIButton *foundButton = foundView;
if( foundButton.isEnabled && !foundButton.hidden && [foundButton pointInside:[self convertPoint:point toView:foundButton] withEvent:event] )
return YES;
}
}
return NO;
}
Swift 3
override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
for subview in subviews {
if subview.frame.contains(point) {
return true
}
}
return false
}
According to the 'iPhone Application Programming Guide':
Turning off delivery of touch events. By default, a view receives touch events, but you can set its userInteractionEnabled property to NO to turn off delivery of events. A view also does not receive events if it's hidden or if it's transparent.
http://developer.apple.com/iphone/library/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/EventHandling/EventHandling.html
Updated: Removed example - reread the question...
Do you have any gesture processing on the views that may be processing the taps before the button gets it? Does the button work when you don't have the transparent view over it?
Any code samples of non-working code?
As far as I know, you are supposed to be able to do this by overriding the hitTest: method. I did try it but could not get it to work properly.
In the end I created a series of transparent views around the touchable object so that they did not cover it. Bit of a hack, but for my issue this worked fine.
Taking tips from the other answers and reading up on Apple's documentation, I created this simple library for solving your problem:
https://github.com/natrosoft/NATouchThroughView
It makes it easy to draw views in Interface Builder that should pass touches through to an underlying view.
I think method swizzling is overkill and very dangerous to do in production code because you are directly messing with Apple's base implementation and making an application-wide change that could cause unintended consequences.
There is a demo project, and hopefully the README does a good job explaining what to do. To address the OP, you would change the clear UIView that contains the buttons to class NATouchThroughView in Interface Builder. Then find the clear UIView that overlays the menu that you want to be tappable. Change that UIView to class NARootTouchThroughView in Interface Builder. It can even be the root UIView of your view controller if you intend those touches to pass through to the underlying view controller. Check out the demo project to see how it works. It's really quite simple, safe, and non-invasive.
I created a category to do this.
a little method swizzling and the view is golden.
The header
//UIView+PassthroughParent.h
@interface UIView (PassthroughParent)
- (BOOL) passthroughParent;
- (void) setPassthroughParent:(BOOL) passthroughParent;
@end
The implementation file
#import "UIView+PassthroughParent.h"
@implementation UIView (PassthroughParent)
+ (void)load{
Swizz([UIView class], @selector(pointInside:withEvent:), @selector(passthroughPointInside:withEvent:));
}
- (BOOL)passthroughParent{
NSNumber *passthrough = [self propertyValueForKey:@"passthroughParent"];
if (passthrough) return passthrough.boolValue;
return NO;
}
- (void)setPassthroughParent:(BOOL)passthroughParent{
[self setPropertyValue:[NSNumber numberWithBool:passthroughParent] forKey:@"passthroughParent"];
}
- (BOOL)passthroughPointInside:(CGPoint)point withEvent:(UIEvent *)event{
// Allow buttons to receive press events. All other views will get ignored
if (self.passthroughParent){
if (self.alpha != 0 && !self.isHidden){
for( id foundView in self.subviews )
{
if ([foundView alpha] != 0 && ![foundView isHidden] && [foundView pointInside:[self convertPoint:point toView:foundView] withEvent:event])
return YES;
}
}
return NO;
}
else {
return [self passthroughPointInside:point withEvent:event];// Swizzled
}
}
@end
You will need to add my Swizz.h and Swizz.m
located Here
After that, you just Import the UIView+PassthroughParent.h in your {Project}-Prefix.pch file, and every view will have this ability.
every view will take points, but none of the blank space will.
I also recommend using a clear background.
myView.passthroughParent = YES;
myView.backgroundColor = [UIColor clearColor];
EDIT
I created my own property bag, and that was not included previously.
Header file
// NSObject+PropertyBag.h
#import <Foundation/Foundation.h>
@interface NSObject (PropertyBag)
- (id) propertyValueForKey:(NSString*) key;
- (void) setPropertyValue:(id) value forKey:(NSString*) key;
@end
Implementation File
// NSObject+PropertyBag.m
#import "NSObject+PropertyBag.h"
@implementation NSObject (PropertyBag)
+ (void) load{
[self loadPropertyBag];
}
+ (void) loadPropertyBag{
@autoreleasepool {
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
Swizz([NSObject class], NSSelectorFromString(@"dealloc"), @selector(propertyBagDealloc));
});
}
}
__strong NSMutableDictionary *_propertyBagHolder; // Properties for every class will go in this property bag
- (id) propertyValueForKey:(NSString*) key{
return [[self propertyBag] valueForKey:key];
}
- (void) setPropertyValue:(id) value forKey:(NSString*) key{
[[self propertyBag] setValue:value forKey:key];
}
- (NSMutableDictionary*) propertyBag{
if (_propertyBagHolder == nil) _propertyBagHolder = [[NSMutableDictionary alloc] initWithCapacity:100];
NSMutableDictionary *propBag = [_propertyBagHolder valueForKey:[[NSString alloc] initWithFormat:@"%p",self]];
if (propBag == nil){
propBag = [NSMutableDictionary dictionary];
[self setPropertyBag:propBag];
}
return propBag;
}
- (void) setPropertyBag:(NSDictionary*) propertyBag{
if (_propertyBagHolder == nil) _propertyBagHolder = [[NSMutableDictionary alloc] initWithCapacity:100];
[_propertyBagHolder setValue:propertyBag forKey:[[NSString alloc] initWithFormat:@"%p",self]];
}
- (void)propertyBagDealloc{
[self setPropertyBag:nil];
[self propertyBagDealloc];//Swizzled
}
@end
Try setting the backgroundColor of your transparent view to UIColor(white: 0.0, alpha: 0.02). Then you can get touch events in the touchesBegan/touchesMoved methods. Place the code below somewhere your view is initialized:
self.alpha = 1
self.backgroundColor = UIColor(white: 0.0, alpha: 0.02)
self.isMultipleTouchEnabled = true
self.isUserInteractionEnabled = true
Try this
class PassthroughToWindowView: UIView {
override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
var view = super.hitTest(point, with: event)
if view != self {
return view
}
while !(view is PassthroughWindow) {
view = view?.superview
}
return view
}
}
I use that instead of overriding the method point(inside: CGPoint, with: UIEvent):
override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
guard self.point(inside: point, with: event) else { return nil }
return self
}
If you can't be bothered to use a category or subclass UIView, you could also just bring the button forward so that it is in front of the transparent view. This won't always be possible depending on your application, but it worked for me. You can always send the button back again or hide it.
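A two-line sketch of that idea, with placeholder names for the container and button:

import UIKit

func exposeButton(_ button: UIButton, in container: UIView) {
    container.bringSubviewToFront(button)   // the button now sits above the transparent overlay
    // Later, container.sendSubviewToBack(button) or button.isHidden = true restores the layout.
}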
