I am learning Swift and, as part of the process, trying to figure out what exactly is going on here. I have a custom segue where I want to place my modal view controller's dismissing transition. What I used to write in Objective-C as:
UIViewController *sourceViewController = self.sourceViewController;
[sourceViewController.presentingViewController dismissViewControllerAnimated:YES completion:nil];
self is an instance of UIStoryboardSegue.
I translated this snippet into Swift as:
self.sourceViewController.presentingViewController?.dismissViewControllerAnimated(true, completion: nil)
getting this error from the compiler:
'UIViewController?' does not have a member named
'dismissViewControllerAnimated'
Now, according to the documentation, the presentingViewController property looks like this:
var presentingViewController: UIViewController? { get }
From what I understood from the Swift language documentation, ? should unwrap the value, if any (in this case the view controller). The unexplained fact is: if I put a double question mark, it compiles and works:
self.sourceViewController.presentingViewController??.dismissViewControllerAnimated(true, completion: nil)
Can someone tell me what I am missing? What should that do?
The extra ? is required because sourceViewController returns an AnyObject instead of a UIViewController. This is a flaw in the API conversion from Objective-C (in which that property returns a rather meaningless id). Auditing these APIs is an ongoing process that started with iOS 8 beta 5, and apparently this one has not been fixed yet.
If you provide an appropriate cast, it works as expected:
(self.sourceViewController as UIViewController).presentingViewController?.dismissViewControllerAnimated(true, completion: nil)
Now, why do we need an extra ? when dealing with AnyObject?
AnyObject can represent any object type, pretty much as id does in Objective-C. So at compile time you can invoke any existing method or property on it, for example presentingViewController.
When you do so, it triggers an implicit downcast from AnyObject to UIViewController and according to the official guide:
As with all downcasts in Swift, casting from AnyObject to a more specific object type is not guaranteed to succeed and therefore returns an optional value
So when you do
self.sourceViewController.presentingViewController??
it implicitly translates to something like
let source: UIViewController? = self.sourceViewController as? UIViewController
let presenting: UIViewController? = source?.presentingViewController
and that's why you need two ?: one for resolving the downcast and one for the presentingViewController.
Finally, again according to the documentation:
Of course, if you are certain of the type of the object (and know that it is not nil), you can force the invocation with the as operator.
which is exactly my proposed solution above.
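To put that in context, here is a minimal sketch of the whole segue (the DismissSegue class name is made up for illustration; the syntax matches the Swift version used in the question):
class DismissSegue: UIStoryboardSegue {
    override func perform() {
        // sourceViewController comes back as AnyObject here, so cast it explicitly
        let source = self.sourceViewController as UIViewController
        source.presentingViewController?.dismissViewControllerAnimated(true, completion: nil)
    }
}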
Related
In the NotificationCenter class, why has Apple made the observer parameter of type Any?
func addObserver(Any, selector: Selector, name: NSNotification.Name?, object: Any?)
My reasoning:
If the observer is a struct, it will be copied when passed as a function parameter, so how can my observer receive the notification?
I can't write a function with the @objc attribute in a struct.
A selector always refers to an @objc method.
So what is the use of Any in addObserver?
It should always be of type AnyObject.
Secondly, we already know that NotificationCenter keeps a weak reference to the observer, and we can't use the weak modifier with type Any. So how is Apple managing all this?
Any help in understanding this concept is highly appreciated.
No one chose to make this parameter Any. It's just what they got by default. It's automatically bridged from ObjC:
- (void)addObserver:(id)observer
selector:(SEL)aSelector
name:(nullable NSNotificationName)aName
object:(nullable id)anObject;
The default way that id is bridged is Any. It hasn't been specially refined for Swift. In practice, you can't really use structs meaningfully here. The fact that the compiler won't stop you from calling it in an unhelpful way doesn't imply that it's intended to be used that way.
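As an illustration of how it's meant to be used in practice (the Listener class and the notification name below are made up for the example), the observer is a class instance visible to the Objective-C runtime, because the selector is looked up and invoked on it dynamically:
import Foundation

final class Listener: NSObject {
    @objc func handleNote(_ note: Notification) {
        print("received \(note.name)")
    }
}

let listener = Listener()
let name = Notification.Name("SomethingHappened")   // hypothetical notification name
NotificationCenter.default.addObserver(listener,
                                       selector: #selector(Listener.handleNote(_:)),
                                       name: name,
                                       object: nil)
NotificationCenter.default.post(name: name, object: nil)   // Listener.handleNote(_:) runs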
Why type Any? Because in Objective-C it is type id.
Why can't you mark your function as @objc? The @objc attribute tells the Swift compiler to expose that method to Objective-C, roughly by adding it to the generated header for the class, and headers can only be generated for classes.
A selector is also an Objective-C concept; it just names which method to invoke, and it is ultimately dispatched via objc_msgSend.
In the NotificationCenter class, why has Apple made the observer of type Any?
Because all Objective-C id declarations are translated into Swift as Any.
You might object that this really should be AnyObject, because only a class will work here. And indeed, that's the way id used to be translated into Swift. But nowadays you can pass anything where an id is expected, because if it's something Objective-C can't understand, it will be boxed up as a class instance (e.g. as a _SwiftValue) so that it can make the round-trip into Objective-C and back again to Swift. Therefore id is translated as Any.
However, just because you can pass a struct here doesn't mean you should. It won't work, as you've discovered. Objective-C cannot introspect a Swift struct.
There are lots of situations like this, where Cocoa gives you enough room to hang yourself by passing the wrong thing. The contents of a CALayer is typed as Any, but if you pass anything other than a CGImage, nothing will happen. The layerClass of a UIView is typed as AnyClass, but you'd better pass a CALayer subclass. I could go on and on.
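For illustration, a small sketch of those two examples (the view class and the image name are assumptions); both properties are typed loosely, but only specific things work at runtime:
import UIKit

class TiledView: UIView {
    // layerClass is typed AnyClass, but it must name a CALayer subclass
    override class var layerClass: AnyClass { return CATiledLayer.self }
}

let layer = CALayer()
// contents is typed Any?, but only a CGImage actually renders
layer.contents = UIImage(named: "photo")?.cgImage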
I have a class that looks like this (simplified):
class GuideViewController: UIViewController, StoreSubscriber {
    var tileRenderer: MKTileOverlayRenderer! // <------ this needs to be set by whoever instantiates this class

    override func viewDidLoad() {
        super.viewDidLoad()
        ...
    }
}
My app uses this GuideViewController class to display many different styles of maps, so the tileRenderer instance variable can have many different values.
I want a compile-time guarantee that tileRenderer will never be nil, instead of using an implicitly-unwrapped optional.
How can I achieve this?
Things I've considered so far but am unsure about
Setting tileRenderer in the init() method of the GuideViewController. This was my first instinct, but this answer implies that it is not possible, or an antipattern.
Setting tileRenderer in viewDidLoad(). This seems to require using an implicitly unwrapped optional, which bypasses compile-time checks. Also, I'm under the impression that viewDidLoad() is only called once for the view controller in the lifecycle of the app.
Manually setting tileRenderer after instantiating the VC. E.g.,
let vc = self.storyboard?.instantiateViewController(withIdentifier: "GuideViewController")
vc.tileRenderer = MKTileOverlayRenderer(...) // <----- can I make the compiler force me to write this line?
navigationController?.pushViewController(vc!, animated: true)
Forgive me for asking such a naive question—I'm fairly new to iOS development.
It isn't possible for there to be a compile-time check, since that would require the compiler to completely analyse the flow of your program.
You can't make the property a non-optional (well, you can - see point 4), since that requires a custom initialiser, which doesn't really work with UIViewController subclasses.
You have a few choices:
1. Use an implicitly unwrapped optional and crash at runtime if it is nil - hopefully the developer will quickly identify their mistake
2. Check for nil at a suitable point (such as viewWillAppear) and issue a warning to the console, followed by a crash when you try to access the value, or call fatalError - this will give the developer more hints as to what they have done wrong
3. Use an optional and unwrap it before you use it
4. Use a non-optional and provide a default value
Option 4 may be the best option: provide a default renderer that does nothing (or perhaps logs a warning to the console that it is the default renderer and that the developer needs to provide a real one).
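A minimal sketch of option 4 (the class is simplified from the question, and whether an overlay with a nil URL template is an acceptable "do nothing" default is an assumption):
import MapKit

class GuideViewController: UIViewController {
    // Non-optional, so the compiler guarantees it always has a value;
    // callers can still replace it with a real renderer before presenting.
    var tileRenderer = MKTileOverlayRenderer(tileOverlay: MKTileOverlay(urlTemplate: nil))
}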
Why do we need to downcast using the keyword as after instantiating a view controller with UIStoryboard's instantiateViewController(withIdentifier:) method in order for the view controller's properties to be accessible? The UIStoryboard method instantiateViewController(withIdentifier:) already returns a UIViewController and knows which class to use based on the Storyboard ID, or at least that is what I assumed happens, but it is apparently not entirely true.
The following code works and compiles but I want to understand why. If I were building this based on the documentation, I wouldn't have assumed downcasting would be necessary so I'm trying to figure out what part I haven't learned or understood in regards to types and/or objects being returned from functions.
func test_TableViewIsNotNilOnViewDidLoad() {
    let storyboard = UIStoryboard(name: "Main", bundle: nil)
    let viewController = storyboard.instantiateViewController(
        withIdentifier: "ItemListViewController")
    let sut = viewController as! ItemListViewController
    _ = sut.view
    XCTAssertNotNil(sut.tableView)
}
Because storyboard.instantiateViewController... always returns a UIViewController (the base class for your specific subclass) and thus cannot know implementation details specific to your subclass.
The method mentioned above doesn't infer your sub-class type based on the storyboard id, this is something you do in your code when downcasting (see here).
func instantiateViewController(withIdentifier identifier: String) -> UIViewController
So it works because you get a UIViewController from the method above and then you force downcast it to your ItemListViewController (the cast succeeds as long as the storyboard really does instantiate an ItemListViewController for that identifier).
PS. I'm not sure I've understood your question though, this seems pretty straightforward.
knows which class based on the Storyboard ID
This is completely incorrect. The Storyboard ID is a string. In your case it happens to be a static string, but this method doesn't require that. It could be computed at run time (and I've personally written code that does that). Your string happens to match the classname, but there's no requirement that this be true. The identifier can be any string at all. And the storyboard isn't part of the compilation process. The storyboard could easily be changed between the time the code is compiled and the time it's run such that the object in question has a different type.
Since there isn't enough information to compute the class at compile time, the compiler requires that you explicitly promise that it's going to work out and to decide what to do if it fails. Since you use as!, you're saying "please crash if this turns out to be wrong."
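A sketch of the alternative, if you would rather get a readable failure than a bare crash when the storyboard and the code drift apart (same identifier as in the question):
let storyboard = UIStoryboard(name: "Main", bundle: nil)
guard let sut = storyboard.instantiateViewController(
        withIdentifier: "ItemListViewController") as? ItemListViewController else {
    fatalError("Storyboard no longer yields an ItemListViewController for this identifier")
}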
This is partially due to polymorphism. Take for example the following code:
class Person {
    let name = "John"
}

class Student: Person {
    let schoolName = "Swift High"
}

func createRandomPerson(tag: Int) -> Person {
    if tag == 1 { return Person() }
    else { return Student() }
}

let p = createRandomPerson(tag: 2)
Depending on the value of the tag parameter you'll get either a Person or a Student.
Now if you pass 2 to the function, then you can be sure that createRandomPerson will return a Student instance, but you will still need to downcast as there are scenarios where createRandomPerson will return an instance of the base class.
Similar with the storyboard, you know that if you pass the correct identifier you'll get the view controller you expect, however since the storyboard can create virtually any instance of UIViewController subclasses (or even UIViewController) the function in discussion has the return type set to UIViewController.
And similarly, if you get it wrong, i.e. you pass 1 to createRandomPerson(tag:) or the wrong identifier to the storyboard, then you won't receive what you expect.
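Continuing the sketch above, the downcast itself looks like this; reaching a Student-only member requires the cast, just as reaching ItemListViewController's tableView does:
let someone = createRandomPerson(tag: 2)      // static type is Person
if let student = someone as? Student {
    print(student.schoolName)                 // only visible after the cast
}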
You may need to do some background reading on object-oriented programming in general to help you understand. Key concepts are class, instance/object, inheritance, and polymorphism.
storyboard.instantiateViewController() will create an instance of ItemListViewController, but it is being returned as a UIViewController. If this part is difficult to understand, then this is where you need the object-oriented background knowledge.
In OO languages, like C++ for example, an instance of a class (also known as an object) can be referenced by a pointer to its parent (or grandparent or great-grandparent etc.) class. In the majority of literature and tutorials on object orientation, the concepts of inheritance, casting, and polymorphism are always illustrated using pointers to base classes and derived objects and so on.
However, with Swift, pointers aren't exposed in the way they are in many other OO languages.
In your code, explained in a simplified way as if it were C++ or a similar OO language (since that is how most OO tutorials explain things):
let viewController = storyboard.instantiateViewController(
    withIdentifier: "ItemListViewController")
viewController is a "pointer" of class type UIViewController which is pointing to an object of type ItemListViewController.
The compiler sees the pointer viewController as being of type UIViewController and therefore it does not know about any of the specific methods or properties of ItemListViewController unless you explicitly do the cast so that it knows about them.
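In other words, roughly what the compiler sees (continuing the snippet above):
// viewController.tableView   // error: value of type 'UIViewController' has no member 'tableView'
let table = (viewController as? ItemListViewController)?.tableView   // accessible once the cast is explicit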
When accessing UIApplication's main window, it is returned as a UIWindow??
let view = UIApplication.sharedApplication().delegate?.window // view:UIWindow??
Why is it returned as a double optional, what does it mean, and if I put it into an if let, should I add a ! after it?
if let view = UIApplication.sharedApplication().delegate?.window!
My first thought was to replace the ? with a ! after delegate, but that was not the solution.
@matt has the details, but there is a (somewhat horrible, somewhat awesome) workaround. (See the edit below, though.)
let window = app.delegate?.window??.`self`()
I will leave the understanding of this line of code as an exercise for the reader.
OK, I lie, let's break it down.
app.delegate?.window
OK, so far so good. At this point we have the UIWindow?? that is giving us a headache (and which I believe reflects a disconnect between Swift and Cocoa(*)). We want to collapse it twice. We can do that with optional chaining (?.), but that unwraps and rewraps, so we're back where we started. You can double-optional-chain, though, with ??., which is bizarre, but works.
That's great, but ?? isn't a legal suffix operator. You have to actually chain to something. Well, we want to chain back to itself (i.e. "identity"). The NSObject protocol gives us an identity method: self.
self is a method on NSObject, but it's also a reserved word in Swift, so the syntax for it is `self`()
And so we get our madness above. Do with it as you will.
Note that since ??. works, you don't technically need this. You can just accept that view is UIWindow?? and use ??. on it like view??.frame. It's a little noisy, but probably doesn't create any real problems for the few places it should be needed.
(*) I used to think of this as a bug in Swift, but it's not fixable directly by optional chaining. The problem is that there is no optional chaining past window. So I'm not sure where the right place to fix it is. Swift could allow a postfix-? to mean "flatten" without requiring chaining, but that feels odd. I guess the right operator would be interrobang delegate?.window‽ :D I'm sure that wouldn't cause any confusion.
EDIT:
Joseph Lord pointed out the better solution (which is very similar to techniques I've been using to avoid trivial if-let, but hadn't thought of this way before):
let window = app.delegate?.window ?? nil // UIWindow?
I agree with him that this is the right answer.
It's because the window property is itself in doubt (it's optional). Thus, you need one question mark because there might or might not be a window property, and another question mark because the return value of that window property is itself an Optional. Thus we get a double-wrapped Optional (as I explain in my tutorial: scroll down to the Tip box where I talk about what happens when an optional property has an Optional value).
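A minimal sketch of where the two layers come from (the intermediate names are just for illustration):
let del = UIApplication.sharedApplication().delegate   // UIApplicationDelegate?
let w = del?.window   // UIWindow??: one layer because window is an optional
                      // protocol requirement, one because its own type is UIWindow?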
Thus, one way to express this would be in two stages — one to cast (and unwrap that Optional), and one to fetch the window (and unwrap that Optional):
if let del = UIApplication.sharedApplication().delegate as? AppDelegate {
    if let view = del.window {
        // ...
    }
}
Now view is a UIWindow.
Of course, if you're sure of your ground (which you probably are), you can force the cast in the first line and the unwrapping in the second line. So, in Swift 1.2:
let del = UIApplication.sharedApplication().delegate as! AppDelegate
let view = del.window!
Oh, the double optional! Sometimes you can use a double bang (two exclamation marks), but you cannot unwrap it that way with optional binding. So... my remix of all the other code gets you a non-optional UIWindow object called window:
guard let w = UIApplication.shared.delegate?.window, let window = w else { return }
But let's not waste time and just use
let window = UIApplication.shared.delegate!.window!!
and be done.
With the advent of Swift 2, my usual workaround in these kinds of cases is:
if let _window = UIApplication.sharedApplication().delegate?.window, window = _window {
    // Some code... e.g.
    let frame = window.frame
}
I'm trying to switch from Objective-C to Swift.
I don't understand the point of declaring a function to return an AnyObject! instead of an AnyObject?.
For example:
func instantiateViewControllerWithIdentifier(identifier: String) -> AnyObject!
Why does this method return an implicitly unwrapped optional and not a simple optional?
I get the AnyObject part, but what's the point of allowing us to avoid using the ! to unwrap the optional if it may be nil? (thus crashing the app, even if it's very unlikely)
What am I missing?
Is it just a convenient way to use the return value of this method without having to use ! or is there something else I can't see? In which case, is the program doomed to crash if it returns nil?
It sounds to me that:
AnyObject means 100% chance to return something
AnyObject? means 50% chance to return nil, you should always check for nil
AnyObject! means 99% chance to return something not nil; checking for nil is not required
Also, I assume there is some kind of link with Objective-C/NSObject classes, but can't figure out what...
Thank you
The frequency of implicitly unwrapped optionals (IUOs) in Cocoa APIs is an artifact of importing those APIs to Swift from ObjC.
In ObjC, all references to objects are just memory pointers. It's always technically possible for any pointer to be nil, as far as the language/compiler knows. In practice, a method that takes an object parameter may be implemented to never allow passing nil, or a method that returns an object to never return nil. But ObjC doesn't provide a way to assert this in an API's header declaration, so the compiler has to assume that nil pointers are valid for any object reference.
In Swift, object references aren't just pointers to memory; instead they're a language construct. This gets additional safety by allowing us to assert that a given reference always points to an object, or to use optionals to both allow for that possibility and require that it be accounted for.
But dealing with optional unwrapping can be a pain in cases where it's very likely that a reference will never be nil. So we also have the IUO type, which lets you use optionals without checking them for nil (at your own risk).
Because the compiler doesn't know which references in ObjC APIs are or aren't safely nullable, it must import all object references in ObjC APIs (Apple's or your own) with some kind of optional type. For whatever reason, Apple chose to use IUOs for all imported APIs. Possibly because many (but importantly not all!) imported APIs are effectively non-nullable, or possibly because it lets you write chaining code like you would in ObjC (self.view.scene.rootNode etc).
In some APIs (including much of Foundation and UIKit), Apple has manually audited the imported declarations to use either full optionals (e.g. UIView?) or non-optional references (UIView) instead of IUOs when semantically appropriate. But not all APIs have yet been audited, and some that have still use IUOs because that's still the most appropriate thing for those APIs to do. (They're pretty consistent about always importing id as AnyObject!, for example.)
So, getting back to the table in your question: it's better not to think in terms of probabilities. When dealing with an API that returns...
AnyObject: always has a value, cannot be nil.
AnyObject?: may be nil, must check for nil
AnyObject!: no guarantees. Read the header, documentation, or source code (if available) for that API to find out if it can really be nil in your situation and treat it accordingly. Or just assume it can be nil and code defensively.
In the case of instantiateViewControllerWithIdentifier, there are two options you might call equally valid:
Say you're guaranteed to never get nil because you know you put a view controller in your storyboard with that identifier and you know what class it is.
let vc = storyboard.instantiateViewControllerWithIdentifier("vc") as MyViewController
// do stuff with vc
Assume your storyboard can change in the future and set yourself up with a meaningful debug error in case you ever break something.
if let vc = storyboard.instantiateViewControllerWithIdentifier("vc") as? MyViewController {
    // do something with vc
} else {
    fatalError("missing expected storyboard content")
}
AnyObject means it cannot be nil
AnyObject? and AnyObject! both mean they can be nil. There is no "probability" difference between them. The only difference is that with AnyObject! you don't have to explicitly force-unwrap it, for convenience. With both of them you can use optional binding and optional chaining if you want to use them safely.
Object pointer types in Objective-C are by default imported into Swift as implicitly unwrapped optionals (!). That is because all Objective-C pointers could potentially be nil (there is no way to tell automatically from an Objective-C declaration whether a particular object pointer can be nil or not), but maybe in a particular API it can't be nil, so they don't want to burden you with explicit unwrapping if that is the case.
Apple has been "auditing" its APIs to manually add information for whether each object-pointer type can be nil or not, so over time you will see AnyObject! in the API change to AnyObject or AnyObject?. This process takes lots of human effort and is slow and ongoing.
Regarding the following function declarations:
#1) func instantiateViewControllerWithIdentifier(identifier: String) -> AnyObject
#2) func instantiateViewControllerWithIdentifier(identifier: String) -> AnyObject!
#3) func instantiateViewControllerWithIdentifier(identifier: String) -> AnyObject?
#2 and #3 are logically the same; however, as per p. 40 of The Swift Programming Language iBook, the answer is, much like rickster's answer, a matter of semantics.
AnyObject! and AnyObject? are treated the same behind the scenes (they are initially set to nil and can be nil at any point), however it is the intention that once AnyObject! has been set it should no longer point to nil.
This allows one to avoid using the forced optional unwrapping syntax (!) and simply use implicitly unwrapped optionals (variables declared with !) to access the value contained within the assigned variable.
An example from the book:
let assumedString: String! = "An implicitly unwrapped optional string."
let implicitString: String = assumedString // no need for an exclamation mark
As applied to the above:
func instantiateViewControllerWithIdentifier(identifier: String) -> AnyObject! {
    return nil
}
let optionalVC: AnyObject! = instantiateViewControllerWithIdentifier("myVC")
self.viewController.presentViewController(optionalVC as UIViewController, animated: true, completion: nil)
// The above is perfectly valid at compile time, but will fail at runtime:
// fatal error: unexpectedly found nil while unwrapping an Optional value
Thus, a function that declares its return value as AnyObject! can return nil; however, it should avoid doing so, to keep from breaking the implied contract of implicitly unwrapped optionals.