Let's say I have view1, which is intercepting touch events, and view2, which is not.
Can view1 pass those events on to view2 by calling [view2 touchesBegan:], [view2 touchesMoved:], etc.?
Yes, sometimes, maybe. The technique you're asking about is known as event forwarding. I'll refer you to the Forwarding Touch Events section of the Event Handling Guide for iOS, which says the following:
The classes of the UIKit framework are not designed to receive touches
that are not bound to them; in programmatic terms, this means that the
view property of the UITouch object must hold a reference to the
framework object in order for the touch to be handled. If you want to
conditionally forward touches to other responders in your application,
all of these responders should be instances of your own subclasses of
UIView.
So, if you're looking to forward events from one view to another and both views are instances of your own UIView subclass, event forwarding may work for you. If your view2 is an instance of a UIKit class, though -- say, UIScrollView or UITextView -- you shouldn't be surprised to encounter problems. And even if it works now, it could easily break in the future. A little further on, that section states this more simply:
Do not forward events to UIKit framework objects.
I'd urge you to read the entire section -- there's some good advice and an example that may help if you do decide to use event forwarding.
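For concreteness, here is a minimal sketch of what such forwarding could look like when both views are your own UIView subclasses. The class name ForwardingView and the forwardingTarget property are illustrative, not anything UIKit provides.

#import <UIKit/UIKit.h>

@interface ForwardingView : UIView
// Your own view2, also a custom UIView subclass.
@property (nonatomic, weak) UIView *forwardingTarget;
@end

@implementation ForwardingView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Handle the touch locally if needed, then hand it to the other custom view.
    [self.forwardingTarget touchesBegan:touches withEvent:event];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.forwardingTarget touchesMoved:touches withEvent:event];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.forwardingTarget touchesEnded:touches withEvent:event];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.forwardingTarget touchesCancelled:touches withEvent:event];
}

@end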
Is there any way to detect every change to the user interface at runtime?
I'm trying to find all objects in the current app interface.
I'm trying to get all nodes by recursively inspecting the main window, but how can I know, for example, when the top view controller changes, when a UIView is added dynamically, or when a modal view is presented?
The main objective is to have a library that does this.
Any ideas?
Thanks!
You could write your own library to do this, using advanced Objective-C techniques. I don't recommend it, since it mostly breaks the MVC pattern on iOS. It depends on what you want to use it for -- analytics, maybe?
These are the options I see if you want to actively inspect the UIView hierarchy. All of them are fairly involved, though.
Swizzle methods such as addSubview: and removeFromSuperview on UIView, so you are notified whenever the hierarchy changes; include the getters of frame and bounds if you also want to track position (a sketch of this appears after these options).
Use KVO to observe properties such as subviews, frame, bounds, and superview to notice any changes. At some point, though, a single object (a singleton, perhaps) would have to be registered as the observer on every view.
Pick an interval, fire an NSTimer, and walk the hierarchy recursively starting at UIApplication's keyWindow. This would have a big performance impact, though.
There may be other options, but these are the ones I believe to be the best choices.
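To make the first option concrete, here is a rough sketch of swizzling -[UIView addSubview:] so every insertion into the hierarchy gets logged. The xyz_ prefix is illustrative; this is the general swizzling pattern, not a library API.

#import <objc/runtime.h>
#import <UIKit/UIKit.h>

@implementation UIView (XYZHierarchyLogging)

+ (void)load {
    // Exchange the original addSubview: with our logging variant.
    Method original = class_getInstanceMethod(self, @selector(addSubview:));
    Method replacement = class_getInstanceMethod(self, @selector(xyz_addSubview:));
    method_exchangeImplementations(original, replacement);
}

- (void)xyz_addSubview:(UIView *)subview {
    NSLog(@"%@ is adding subview %@", self, subview);
    // The implementations were exchanged, so this calls the original addSubview:.
    [self xyz_addSubview:subview];
}

@end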
I have a transparent UIView (we'll call it receiver) layered over a UIScrollView (scrollView), which has a UIScrollViewDelegate (scrollViewDelegate).
In some situations, receiver needs to consume the touch events which land on it. In others, I need to glean some positional information and then either pass the touch events intact through to scrollView or alternatively spoof the events which would comprise a drag movement and result in the appropriate deceleration etc. How can I achieve this?
EDIT - Research thus far:
I've put a breakpoint inside scrollViewDelegate's scrollViewWillBeginDragging, which reveals the call sequence to be:
(...various calls filtering down from UIApplicationMain...)
[UIWindow sendEvent:]
[UIWindow _sendGesturesForEvent:]
...
[UIGestureRecognizer _delayedUpdateGesture:]
[UIGestureRecognizer _updateGestureWithEvent]
_UIGestureRecognizerSendActions
[UIScrollView handlePan:]
[UIScrollView _updatePanGesture]
....
[UIScrollViewDelegate scrollViewWillBeginDragging:]
Ideally, I'd like to be able to call into UIScrollView's handlePan: with the appropriate information, but I can't find anything that details how to call handlePan: correctly (I assume it must be possible, since _UIGestureRecognizerSendActions does it?).
Alternatively, I suppose I could subclass UIGestureRecognizer, but this seems a pretty heavy-handed way of doing it (and, again, I have no idea how to communicate the sequence of touch events).
The easiest way to do this is to place your UIScrollView inside of your receiver.
You can then do one of two things:
Set userInteractionEnabled on your scroll view to NO if you know ahead of time that the events shouldn't be delivered to it. The events will be delivered to your receiver view instead.
Override hitTest:withEvent: to note the incoming events (see the sketch below). To squelch them, just return your receiver (indicating that it should be the target of those events); to pass them along, return the value from super's implementation (which will correctly identify the scroll view as the target when appropriate).
If you really need to synthesize events, you can construct them and pass them to UIApplication's - (void)sendEvent: method. (Calling handlePan: directly probably won't work, since it takes a UIGestureRecognizer and so will be hard to fake.)
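To illustrate the second option, here is a minimal sketch of a hitTest:withEvent: override on the receiver, assuming the scroll view has been made a subview of it. The shouldSwallowTouchAtPoint: method is hypothetical; it stands in for whatever logic decides between consuming and passing through.

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    UIView *hitView = [super hitTest:point withEvent:event];
    // Record the position here if you need it for later analysis.
    if ([self shouldSwallowTouchAtPoint:point]) {
        return self;      // the receiver consumes the touch
    }
    return hitView;       // otherwise the scroll view handles it normally
}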
I have derived my own view class from UIView that handles gestures and draws itself.
I use Interface Builder to place several instances of it on a View.
On certain events, I want to call several delegates in the UIViewController, just like a UIButton touch-up-inside event. I don't want to set up an interface protocol and connect an IBOutlet id instance, as in (1).
I was looking all around documentation and also stack overflow, but I haven't found any clue about the syntax.
So, what is the syntax for that with Xcode 4.4 (just updated)?
The deployment target will be iOS >= 5.0 because of the custom properties I already use.
[EDIT]
Subclassing UIControl does indeed give access to the standard UI events like TouchUpInside, but is it possible to add custom named events like "onSomethingElse"?
(1) Events for custom UIView
I'm not entirely sure exactly what you want from this view, but if you want to handle things like UIButton's touch-up-inside event, then maybe you should look into subclassing UIControl instead of UIView. It gives you access to events, just like UIButton.
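UIControl also reserves a block of event bits (UIControlEventApplicationReserved) for application-defined events, so a rough sketch of a custom event might look like the following. The names MyControl and MyControlEventSomethingElse are illustrative.

enum {
    MyControlEventSomethingElse = 1 << 24   // within the UIControlEventApplicationReserved range
};

@interface MyControl : UIControl
@end

@implementation MyControl

- (void)somethingElseHappened {
    // Notifies every target registered for the custom event,
    // just as UIButton does for UIControlEventTouchUpInside.
    [self sendActionsForControlEvents:MyControlEventSomethingElse];
}

@end

// In the view controller:
// [myControl addTarget:self action:@selector(onSomethingElse:)
//      forControlEvents:MyControlEventSomethingElse];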
If you post a NSNotification of whatever custom event you define, you can have multiple listeners registered with the NSNotificationCenter to respond to the event.
You can see details by looking for addObserver:selector:name:object: and postNotificationName:object:.
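Roughly, and assuming a notification name of your own choosing, that would look like this:

// In the custom view, when the event occurs:
[[NSNotificationCenter defaultCenter]
    postNotificationName:@"MyViewSomethingElseNotification" object:self];

// In each interested view controller (e.g. in viewDidLoad):
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(onSomethingElse:)
                                             name:@"MyViewSomethingElseNotification"
                                           object:nil];

// The handler in the view controller:
- (void)onSomethingElse:(NSNotification *)notification {
    // notification.object is the view that posted the event
}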
What's the purpose for making UIViewController a subclass of UIResponder? Was it done solely to pass the rotation events?
I could not find any definitive info on that in the docs.
Update
I understand that if something is made a UIResponder, it is supposed to be included in the responder chain and process events. But I have two gnawing doubts.
As far as I know, UIViewController is put into the responder chain right after its view. Why do we need a view controller in the responder chain at all? Its view is already there, so why don't we let the view process the events that were not handled by its subviews?
OK, I'm ready to agree that we might need this. But I would like to see some real life examples, when processing events in a view controller is really needed and is the best/easiest/most appropriate way to do something.
I think your problem may simply be a failure of object oriented thinking.
Per the docs:
The responder chain is a linked series of responder objects to which
an event or action message is applied.
In UIKit the view controller sits in the responder chain between its view and that view's superview. So it is offered any event or action that its view doesn't handle.
The topmost view controller's next responder is the window, the window's next responder is the application, the application's next responder is the application delegate and the application delegate is where the buck stops.
Your question "Was it done solely to pass the rotation events?" applies the incorrect test; it implies that at some point the responder chain had otherwise been fully engineered and somebody thought 'oh, wait, what about rotation? Better chuck the view controllers into the chain'.
The original question will have been: is it helpful if events or actions can be handled by the view controller if none of the views handle them? The answer should obviously be 'yes' as — even on a touch-screen device — there will be events or actions that aren't inherently related to a view.
The most obvious examples are those related to physical inputs other than the screen. So device rotation is one. Key presses on a bluetooth keyboard are another. Remote controls are a third. The accelerometer is a fourth.
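As a concrete (hedged) sketch of that last kind of case: a shake gesture reaches a view controller through the responder chain, so handling it there needs nothing more than becoming first responder and overriding the motion callbacks.

- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [self becomeFirstResponder];   // so motion events reach this controller
}

- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (motion == UIEventSubtypeMotionShake) {
        // e.g. trigger an undo, shuffle content, etc.
    }
}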
The next most obvious example is any system-generated events or actions that should go to the single most local actor rather than to everyone. In iOS that's generally requests for a more specific actor, like the most local undo manager or the identity of the input view to show if focus comes to you.
A slightly less obvious example is that exemplified by UIMenuController — a pop-up view that posts a user-input event that may need to traverse several view controllers to get to the one that should act on it. iOS 5's child view controllers increase the number of possibilities here enormously; quite often you're going to have one parent view controller with the logic to do a bunch of things and children that want to pass messages up to whomever knows how to handle them, without hard coding the hierarchy.
So, no, view controllers weren't added to the responder chain just to handle rotation events. They were added because logically they belong there, based on the initial definition of the responder chain.
This is going to sound like a glib answer, but it really isn't: UIViewController is a subclass of UIResponder so that it can respond to user actions (e.g. touches, motion, etc.).
If a view does not respond to an event, it is passed up the responder chain, giving higher-level objects a chance to handle it. Hence, view controllers and the application class are all subclasses of UIResponder.
You can find more detailed information about the responder chain in Cocoa Application Competencies for iOS: Responder Object on Apple's developer site.
UIViewController is in the responder chain so that it can process any event. More than just the events you first think of (touches) pass through this chain: motion events travel it, as do touch events that a specific view doesn't handle, and you can also send actions up the chain yourself using -[UIApplication sendAction:to:from:forEvent:] with a nil target.
The other thing you may notice is UIApplication is also a subclass of UIResponder. All events that aren't handled will end up there.
In the MVC conception, events that occur in a view should be handled by the controller.
There is a feature where you can pass nil as the target parameter of -[UIControl addTarget:action:forControlEvents:], in which case the responder chain is searched for an object willing to respond to the action.
That search is part of why UIViewController is a subclass of UIResponder.
So you can send your own custom control events in the same manner and get the benefit of that search.
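A small sketch of that pattern, with an illustrative selector name:

// Register the control with a nil target, so UIKit searches the responder chain:
[saveButton addTarget:nil
               action:@selector(saveDocument:)
     forControlEvents:UIControlEventTouchUpInside];

// Any responder above the button -- typically a view controller -- can then implement:
- (void)saveDocument:(id)sender {
    // handle the action here
}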
note: This is an expansion (and clarification) of a question I asked yesterday.
I am conducting a research project in which I want to record all of the user's touches in the iPhone app. After the experiment, I will download the data and process it in either Excel or (more likely) MATLAB to determine how many times they tapped certain buttons, when they tapped them, and so on. To do this, I need to know:
a) When they touched
b) Where they touched
c) Which view they touched
The first two are easy, but I am having trouble with the third. I know I can do this to get a reference to the UIView that was touched:
CGPoint locationPoint = [[touches anyObject] locationInView:self];
UIView* viewYouWishToObtain = [self hitTest:locationPoint withEvent:event];
However, that will just give me a pointer to the view, not the name of the view that was touched. I could assign each view a tag, but then every time I create a new view I would need to remember to tag it (or, alternatively, log the address of each view when initialized and log it when the view is touched). Subclassing UIView and adding an automatic tag isn't really an option since I'm creating other UIButtons and UISliders and would need to subclass those also, which doesn't seem like a very good solution.
Does anyone know of a clean, easy way to do this?
For "Which view they touched", what information do you need?
Perhaps you could use a category to add a method to UIView. This method would generate a string containing information about the view. Such as:
its type, e.g. UIButton
its size and position
the title of the view, if it has one (e.g. the button title)
the parent view type and title
other details, e.g. whether the view is enabled and what state it is in -- anything you like
For example: "Type:UIButton Title:"Back" Rect:{3,5,40,25}" or some such string.
This is very clean and gives you quite a lot of information to go on.
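A hedged sketch of such a category follows; the method name xyz_touchDescription and the exact string format are illustrative.

#import <UIKit/UIKit.h>

@interface UIView (XYZTouchLogging)
- (NSString *)xyz_touchDescription;
@end

@implementation UIView (XYZTouchLogging)

- (NSString *)xyz_touchDescription {
    NSString *title = @"";
    if ([self respondsToSelector:@selector(currentTitle)]) {
        // e.g. UIButton exposes its title via currentTitle
        title = [(UIButton *)self currentTitle] ?: @"";
    }
    return [NSString stringWithFormat:@"Type:%@ Title:\"%@\" Rect:%@ Parent:%@",
            NSStringFromClass([self class]),
            title,
            NSStringFromCGRect(self.frame),
            NSStringFromClass([self.superview class])];
}

@end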
You could add a category to UIView, which would then be inherited by all UIView-descended objects, although I'm not sure it's any more efficient than tagging. Since a category can override methods, you could override the init methods for automatic tagging, I suppose.
http://macdevelopertips.com/objective-c/objective-c-categories.html
I'm not sure what you mean by the "name" of the view. If you mean the view's name in Interface Builder, I don't believe that is included in the instantiated objects. You could use the tag attribute, which is included, but that's just a number, not a name.