When it launches, the first thing our app does is load a user object from our backend while showing a splash screen.
Once that user has been fetched, our home screen is shown and we can consider the user valid. However, this property can only be optional in our codebase since it doesn't have a default value, so Swift forces us to either mark it as optional or implicitly unwrapped.
We don't want to make it implicitly unwrapped, because then if a future developer were to add code that uses the user object but actually runs before the object is loaded from the server, the app would crash.
My question, then, is: is there a design pattern on iOS that would let Swift understand something like "first run the small part of the codebase that loads the user, then consider that the rest of the project can run knowing the object is valid"?
Once that user has been fetched, our home screen is shown and we can consider the user valid. However, this property can only be optional in our codebase since it doesn't have a default value, so Swift forces us to either mark it as optional or implicitly unwrapped.
It can be non-optional if you programmatically create your home screen and add User to its init. (This fairly straightforward idea often goes by the IMO confusing name of "dependency injection" rather than the obvious name of "passing parameters.") This tends to spread through the whole system, requiring that most view controllers be programmatically created. That does not mean you can't use Storyboards (or simpler XIBs), but it does typically mean you can't use segues and will need to handle navigation more programmatically to pass your properties through the system.
This approach lets you make strong requirements for the existence of various properties at each point that you need them. It is not all-or-nothing. You can have some screens that use optionals, some that use implicitly unwrapped optionals, and some that are programmatically created. It's even possible to evolve an existing system over to this a few pieces at a time.
To your last paragraph, there is a common technique of a shared instance that is implicitly unwrapped and gets assigned early in the program's life. You then have to promise not to call any of the "main" code before you initialize it. As you note, that means if it isn't called correctly, it will crash. Ultimately there is either a compile-time proof that a value must exist, or there is only a runtime assertion.
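That shared-instance technique, sketched minimally (Session and User are hypothetical names for illustration, not anything from the question's codebase):

```swift
struct User {
    let name: String
}

final class Session {
    // Assigned exactly once, early in the app's life (e.g. right after
    // the splash-screen fetch completes). Any "main" code that runs
    // before that assignment will crash on access -- that is the trade-off.
    static var current: User!
}

// Splash/loading phase:
Session.current = User(name: "Ada")

// Everything after this point reads Session.current without unwrapping.
let greeting = "Hello, \(Session.current.name)"
```

The type system is satisfied, but correctness now rests on a promise about initialization order rather than a compile-time proof.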
You prove that a value must exist by passing it. You implicitly assert that the object should exist by using !. If it's possible for a developer to call the code incorrectly, then it's not a proof.
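The parameter-passing approach can be sketched with plain types (User, HomeScreen, and didFetchUser are illustrative stand-ins, not the asker's real API):

```swift
struct User {
    let name: String
}

// HomeScreen can only be constructed once a non-optional User exists,
// so the compiler itself enforces the "load first, then show" ordering.
final class HomeScreen {
    let user: User

    init(user: User) {
        self.user = user
    }

    func greeting() -> String {
        // No unwrapping needed anywhere in this class.
        return "Hello, \(user.name)"
    }
}

// The splash/loading phase is the only place that ever deals with
// the user being absent; past this boundary it is guaranteed.
func didFetchUser(_ user: User) -> HomeScreen {
    return HomeScreen(user: user)
}
```

A future developer cannot accidentally use the user before it exists, because there is no HomeScreen to misuse until a User has been passed in.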
The correct design pattern is exactly what you are doing. When you have data that won't be present at initialization time, make the property that holds it an Optional. nil means the data is not yet present; non-nil means that it is. This is completely standard stuff. Just keep doing it the way you are doing it.
If what bothers you is the fact that you have to fetch this thing as an Optional in every method that deals with it, well, tough. But at least the Swift language has nice techniques for safe unwrapping. Typically the first thing you will do in your method is to unwrap safely:
func f() {
    guard let myData = self.myData else { return }
    // and now myData is not Optional
}
The only thing I would advise against here is the splash screen. What if the data never arrives? You need to show the user something real, perhaps with actual words saying "please wait while we fetch the data". You need to plan for a possible timeout. So, what I'm saying is, your proposed interface might need some work. But your underlying technique of using an Optional is exactly right.
As someone who doesn't have too much experience coding apps, I've always wondered a lot about this. Everywhere you look, people say you should avoid using ! so as to make your code safer. This is because crashes, even if they happen 0.001% of the time, are always a big no in programming. However, in some situations I can never really judge whether it's okay to use the ! operator. For example, let's say I have a function which updates a document in Cloud Firestore. There are two places where you might possibly use !. One is when you force-unwrap the document's id property to reference the document, and the second is when you disable error propagation for setData(from:completion:).
This is my reasoning for using ! in both situations:
Force unwrapping the book's id property is fine because if I've fetched any books from Firestore, or created one using the initialiser, I will always fill in the id field with a value. The only way this may fail is if Firestore is unable to parse it into the field, which to my understanding only happens if you've annotated, say, a field called name with @DocumentID and the Firestore document has a field called name as well. This should never happen because I would never put a field named id in a book document.
.setData(from:completion:) should never throw either, because it only throws if there's an error encoding a Book instance. Again, this should never happen because, firstly, Book conforms to Codable, and also every book I add to the database will be added by encoding that struct, guaranteeing it to work every time if it works once.
struct Book: Identifiable, Codable {
    @DocumentID var id: String?
    var title: String
}

func update(_ book: Book, completion: @escaping (Error?) -> Void) {
    let docRef = db.collection("books").document(book.id!) // <<< Force unwrapping
    try! docRef.setData(from: book) { error in // <<< Disabling error propagation
        completion(error) // Any network-related error thrown by Firestore is, however, handled
    }
}
Can someone please give me reasons as to why this should be avoided. If you think this has a chance to make the program crash make sure to explain the reason why it will crash as well so that I have a better understanding of these edge-cases.
If you have any personal experiences with anything even remotely related to this please share them as I would love to learn about them.
Thanks in advance!
P.S. This isn't restricted to just Firestore related situations, if you can share any situation in iOS programming where ! can be used without problem (except for the obvious ones such as URL(string: "https://www.apple.com")! or when using .randomElement()!) please do mention them.
First of all, that is a very fair question, and this can be confusing when you start dealing with optional types. When you force-unwrap a value, it means you are sure the value should be there. But should doesn't mean it will be there.
In some cases, like you pointed out, a value can be missing for reasons you can't control (as in your Firestore example), and force-unwrapping it will then crash at a point where you are not prepared to handle the failure.
After years of working on mobile apps, I learned that you have to use common sense for such cases. There is no silver bullet; it depends on the risk you can afford to take when doing a force-unwrap.
Things you should consider before doing that:
If you force-unwrap using the ! operator and the value turns out to be nil, the app will crash. Is that an outcome you can accept?
If yes, you can do it.
If not, use a guard statement to protect against failure and perform an appropriate fallback action when the expected value is nil.
If you force-unwrap a value that your app or view controller needs to function in all cases and there is no way to recover from that, it's better to let it crash than to let the user perform actions in an invalid state. E.g. when a required ID is necessary to perform a transaction.
But like I mentioned before, it depends on the risk you can afford when you do this kind of operation.
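As a sketch of what the non-crashing alternative looks like, with JSONEncoder standing in for Firestore's encoder (the Book type is simplified, and UpdateError is a made-up name):

```swift
import Foundation

struct Book: Identifiable, Codable {
    var id: String?   // nil until the backend assigns one
    var title: String
}

enum UpdateError: Error {
    case missingID
}

// Instead of book.id! and try!, both failure modes are surfaced
// to the caller through the same completion handler.
func update(_ book: Book, completion: (Error?) -> Void) {
    guard let id = book.id else {
        completion(UpdateError.missingID)
        return
    }
    do {
        let payload = try JSONEncoder().encode(book)
        // ...send `payload` to the document referenced by `id`...
        _ = (id, payload)
        completion(nil)
    } catch {
        completion(error)
    }
}
```

The caller's code barely changes, but a book that somehow lost its id now produces an error you can log or show, rather than a crash.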
I have a class that looks like this (simplified):
class GuideViewController: UIViewController, StoreSubscriber {
    var tileRenderer: MKTileOverlayRenderer! // <------ this needs to be set by whoever instantiates this class

    override func viewDidLoad() {
        super.viewDidLoad()
        ...
    }
}
My app uses this GuideViewController class to display many different styles of maps, so the tileRenderer instance variable can have many different values.
I want a compile-time guarantee that tileRenderer will never be nil, instead of using an implicitly-unwrapped optional.
How can I achieve this?
Things I've considered so far but am unsure about
Setting tileRenderer in the init() method of the GuideViewController. This was my first instinct, but this answer implies that it is not possible, or an antipattern.
Setting tileRenderer in viewDidLoad(). This seems to require using an implicitly unwrapped optional, which bypasses compile-time checks. Also, I'm under the impression that viewDidLoad() is only called once for the view controller in the lifecycle of the app.
Manually setting tileRenderer after instantiating the VC. E.g.,
let vc = self.storyboard?.instantiateViewController(withIdentifier: "GuideViewController") as? GuideViewController
vc?.tileRenderer = MKTileOverlayRenderer(...) // <----- can I make the compiler force me to write this line?
navigationController?.pushViewController(vc!, animated: true)
Forgive me for asking such a naive question—I'm fairly new to iOS development.
It isn't possible for there to be a compile-time check, since that would require the compiler to completely analyse the flow of your program.
You can't make the property a non-optional (well, you can - see point 4), since that requires a custom initialiser, which doesn't really work with UIViewController subclasses.
You have a few choices:
Use an implicitly unwrapped optional and crash at runtime if it is nil - hopefully the developer will quickly identify their mistake
Check for nil at a suitable point (such as viewWillAppear) and issue a warning to the console, followed by a crash when you try and access the value or call fatalError - This will give the developer more hints as to what they have done wrong
Use an optional and unwrap it before you use it
Use a non-optional and provide a default value
Option 4 may be the best option; provide a default renderer that does nothing (or perhaps issues warnings to the console log that it is a default renderer and that the developer needs to provide one).
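A rough sketch of option 4, with a hypothetical TileRendering protocol in place of MKTileOverlayRenderer so the idea stands alone:

```swift
protocol TileRendering {
    func render() -> String
}

// A placeholder that satisfies the type system and loudly signals misuse.
struct DefaultRenderer: TileRendering {
    func render() -> String {
        return "warning: using DefaultRenderer; inject a real renderer"
    }
}

final class GuideScreen {
    // Non-optional: no unwrapping and no crash. The cost is that a
    // forgotten assignment is only detected by the warning the
    // default emits at runtime, not by the compiler.
    var tileRenderer: TileRendering = DefaultRenderer()
}
```

This trades the hard crash of an implicitly unwrapped optional for degraded-but-safe behaviour plus a diagnostic.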
What is the difference between a lazy and an Optional property in Swift?
For example, if someone is building a navigation bar that comes in from the side, I think that should all be within one UIViewController. The user might never open the menu but sometimes they will.
var menu: NavigationBar?
lazy var menu: NavigationBar = NavigationBar.initialize()
Both of these, I think, are good code, because they don't create the view unless it's needed. I understand Optional means there might be a value or it might be nil. I also understand lazy means don't worry about it until I need it.
Specific Question
My question is: are there performance patterns (safety and speed) that say optionals are faster and safer, or vice versa?
OK, this is an interesting question, and I don't want to imply that the existing answers aren't good, but I thought I'd offer my take on things.
lazy variables are great for things that need to be setup once, then never re-set. It's a variable, so you could change it to be something else, but that kind of defeats the purpose of a lazy variable (which is to set itself up upon demand).
Optionals are more for things that might go away (and might come back again). They need to be set up each time.
So let's look at two scenarios for your side menu: one where it stays around while it's not visible, and another for when it is deallocated.
lazy var sideMenu = SideMenu()
So the first time the sideMenu property is accessed, SideMenu() is called and it is assigned to the property. The instance stays around forever, even when you're not using it.
Now let's see another approach.
var _sideMenu: SideMenu?
var sideMenu: SideMenu! {
    get {
        if let sm = _sideMenu {
            return sm
        } else {
            let sm = SideMenu()
            _sideMenu = sm
            return sm
        }
    }
    set(newValue) {
        _sideMenu = newValue
    }
}
(Note this only works for classes, not structs.)
OK, so what does this do? Well, it behaves very similarly to the lazy var, but it lets you reset it to nil. So if you try to access sideMenu, you are guaranteed to get an instance (either the one that was stored in _sideMenu or a new one). It is a similar pattern in that it lazily loads SideMenu(), but this one can create many SideMenu() instances over time, whereas the previous example creates exactly one.
Now, most view controllers are small enough that you should probably just use lazy from earlier.
So two different approaches to the same problem. Both have benefits and drawbacks, and work better or worse in different situations.
They're actually pretty different.
Optional means that the value could possibly be nil, and the user isn't guaranteeing that it won't be. In your example, var menu: NavigationBar? could be nil for the entire lifetime of the class, unless something explicitly assigns it.
Lazy on the other hand means that the assignment will not be called until it is first accessed, meaning that somewhere in code someone tries to use your object. Note however that it is STILL promised to not be nil if you declare it like you have here lazy var menu: NavigationBar = NavigationBar.initialize(), so no need to do optional chaining.
And actually, a variable can be BOTH lazy AND Optional, which means that its value will be loaded when it is first accessed, and that value might be nil at the point it's initialized or at any future point. For example:
lazy var menu: NavigationBar? = NavigationBar.initialize()
That NavigationBar.initialize() is now allowed to return nil, or someone in the future could set the menu to be nil without the compiler/runtime throwing errors!
Does that make the difference clear?
Edit:
As to which is BETTER, that's really a case-by-case thing. Lazy variables take a performance hit on first initialization, so the first access will be slow if the initialization process is long. Otherwise, they're nearly identical in terms of safety and performance. Optional variables you have to unwrap before using, so there is a very minor performance cost with that (one machine instruction, not worth the time to think about).
Optional and lazy properties are not the same
An optional property is used when there is a chance that the value might not be available (i.e. it can be nil). But in your scenario, the navigation bar will always be available; it's just that the user might not open it.
So using a lazy property serves your purpose. The NavigationBar will only be initialised if the user taps on it.
I do not see any performance issues except that if you use an optional, there is an additional overhead of checking if the value is nil each time before accessing it.
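The one-time, on-first-access behaviour of lazy can be demonstrated with a small instrumented sketch (Menu and Screen are made-up stand-ins; the counter is just instrumentation):

```swift
var menuInitCount = 0

final class Menu {
    init() { menuInitCount += 1 }
}

final class Screen {
    // Nothing is built when Screen itself is initialised;
    // Menu() runs once, on the first access below.
    lazy var menu = Menu()
}

let screen = Screen()
// menuInitCount is still 0 here: no menu has been created yet.
_ = screen.menu
_ = screen.menu
// menuInitCount is now 1: the initializer ran exactly once,
// on first access, and the same instance is returned afterwards.
```

An Optional property would instead stay nil until some code explicitly assigned it, and could be nil again later; lazy guarantees a value after first access.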
~ Will ARC always release an object the line after the last strong pointer is removed? Or is it undetermined and at some unspecified point in the future it will be released? Similarly, assuming that you don't change anything with your program, will ARC always be the same each time you run and compile your program?
~ How do you deal with handing an object off to other classes? For example, suppose we are creating a Cake object in a Bakery class. This process would probably take a long time and involve many different methods, so it may be reasonable for us to put the cake in a strong property. Now suppose we want to hand this cake object off to a customer. The customer would also probably want to have a strong pointer to it. Is this ok? Having two classes with strong pointers to the same object? Or should we nil out the Bakery's pointer as soon as we hand off?
Your code should be structured so the answer to this doesn't matter - if you want to use an object, keep a pointer to it, don't rely on ARC side effects to keep it around :) And these side effects might change with different compilers.
Two strong pointers is absolutely fine. ARC will only release the object when both pointers are pointing to something else (or nothing!)
ARC inserts the proper retains and releases at compile time. It will not behave any differently than if you had put them in there yourself, so to answer your question, it should always behave the same. That said, it does not mean that your object will always be released immediately after the pointer is removed. Because you never call dealloc directly in any form of Objective-C, you are only indicating that there is no remaining reference and that it is safe to release. In practice, this usually means it is released right away.
If you pass an object from one class to another and the receiving class has a strong property associated with it and the class that passes it off eventually nils its pointer it will still have a reference count of at least 1 and will be fine.
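The two-strong-pointers point can be sketched in Swift with a deinit flag; Cake and Bakery are just the question's hypothetical types:

```swift
var cakeDeallocated = false

final class Cake {
    deinit { cakeDeallocated = true }
}

final class Bakery {
    var cake: Cake? = Cake()     // strong reference #1
}

let bakery = Bakery()
var customerCake: Cake? = bakery.cake  // strong reference #2

bakery.cake = nil    // the bakery lets go; the customer still holds the cake
// cakeDeallocated is still false here: one strong reference remains

customerCake = nil   // the last strong reference goes away
// cakeDeallocated is now true: ARC released the cake
```

So handing off with two strong pointers is perfectly fine; the object lives exactly as long as someone still holds it.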
Ok, first, this answer might also help you a little bit: ARC equivalent of autorelease?
Generally, after the last strong variable is nilled, the object is released immediately. If you store it in a property, you can nil the property, or assign it to something like __strong Foo *temp = self.bar; before you nil the property, and return that local __strong variable (although ARC normally detects the return and infers the __strong by itself).
Some more details on that: Handling Pointer-to-Pointer Ownership Issues in ARC
DeanWombourne's answer is correct; but to add to (1).
In particular, the compiler may significantly re-order statements as a part of optimization. While method calls will always occur in the order written in code (because any method call may have side effects), any atomic expression may be re-ordered by the compiler as long as that re-order doesn't impact behavior. Same thing goes for local variable re-use, etc...
Thus, the ARC compiler will guarantee that a pointer is valid for as long as it is needed, no more. But there is no guarantee when the pointed to object might be released other than that it isn't going to happen beyond the scope of declaration. There is also no guarantee that object A is released before B simply because A is declared and last used before B.
In other words, as long as you write your code without relying on side effects and race conditions, it should all just work.
Please keep your code proper, as it can behave differently on different compilers.
I'd like a critique of the following method I use to create objects:
In the interface file:
MyClass * _anObject;
...
@property (retain, nonatomic) MyClass * anObject;
In the implementation file:
@synthesize anObject = _anObject;
so far, so simple. Now let's override the default getter:
- (MyClass *) anObject {
    if (_anObject == nil) {
        self.anObject = [[MyClass alloc] init];
        [_anObject doWhateverInitAction];
    }
    return _anObject;
}
EDIT:
My original question was about creating the object only (instead of the whole life-cycle), but I'm adding the following so that it doesn't throw anyone off:
- (void) dealloc {
    self.anObject = nil;
}
/EDIT
The main point of the exercise is that the setter is used inside the getter. I've used it for all kinds of objects (ViewController, myriad other types, etc.). The advantages I get are:
An object is created only when needed. It makes the app pretty fast (for example, there are 6-7 views in an app, only one gets created in the beginning).
I don't have to worry about creating an object before it's used... it happens automatically.
I don't have to worry about where the object will be needed the first time... I can just access the object as if it were already there and if it were not, it just gets created fresh.
Questions:
Does it happen to be an established pattern?
Do you see any drawbacks of doing this?
This pattern is quite commonly used as a lazy-loading technique, whereby the object is only created when first requested.
There could be a drawback to this approach if the object being created lazily takes a fair amount of computation to create, and is requested in a time-critical situation (in which case, it doesn't make sense to use this technique). However I would say that this is a reasonable enough thing to do should the object be quick to create.
The only thing wrong with your implementation (assuming you’re not using ARC yet) is that you’ve got a memory leak—using the setter means that your MyClass instance is getting over-retained. You should either release or autorelease _anObject after that initialization, or assign its value directly instead of calling the setter.
Aside from that, this is totally fine, and it’s a good pattern to follow when the MyClass is an object that isn’t necessarily needed right away and can be recreated easily: your response to memory warnings can include a self.anObject = nil to free up the instance’s memory.
It looks like a decent lazy initialization. Philosophically, one can argue that the drawback is that a getter has a side effect. But the side effect is not visible outside and it is kind of an established pattern.
Lazy instantiation is an established pattern, and it is used by Apple in their (terrible) Core Data templates.
The main drawback is that it is overly complex and often unnecessary. I've lost count of the number of times I've seen this where it would make more sense to simply instantiate the objects when the parent object is initialised.
If a simple solution is just as good, go with the simpler solution. Is there a particular reason why you can't instantiate these objects when the parent object is initialised? Perhaps the child objects take up a lot of memory and are only rarely accessed? Does it take a significant amount of time to create the object, and are you initialising your parent object in a time-sensitive section of your application? Then feel free to use lazy instantiation. But for the most part, you should prefer the simpler approach.
It's also not thread-safe.
Regarding your advantages:
An object is created only when needed. It makes the app pretty fast (for example, there are 6-7 views in an app, only one gets created in the beginning).
Are you referring to views or view controllers? Your statement doesn't really make sense with views. I don't normally find myself needing to store view controllers in instance variables/properties at all, I instantiate them when I need to switch to them and push them onto the navigation stack, then pop them off when I'm done.
Have you tried your app without using this pattern? Conjecture about performance is often wrong.
I don't have to worry about creating an object before it's used... it happens automatically.
No, now you have to worry about writing a special getter instead. This is more complex and prone to mistakes than simple instantiation. It also makes your application logic and performance more difficult to understand and reason about.
I don't have to worry about where the object will be needed the first time... I can just access the object as if it were already there and if it were not, it just gets created fresh.
You don't have to worry about that when you instantiate it during your parent object's initialisation.
Yes this is an established pattern. I often use lazy instantiation like this as an alternative to cluttering up -init or -viewDidLoad with a bunch of setup code. I would assign the value to the instance variable instead of using the synthesized setter in the event that this object ends up being created as a result of something happening in -init.
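For comparison, in Swift the same backing-ivar-plus-getter dance collapses to a single lazy declaration (the class names here are made up; MyClass mirrors the one in the question):

```swift
final class MyClass {
    var configured = false
    func doWhateverInitAction() { configured = true }
}

final class OwnerObject {
    // Swift's equivalent of the overridden Objective-C getter:
    // built once, on first access, with the post-init setup
    // folded into the closure.
    lazy var anObject: MyClass = {
        let object = MyClass()
        object.doWhateverInitAction()
        return object
    }()
}
```

This removes both hazards discussed above for the common case: there is no separate setter to over-retain through, and no way to forget the nil check.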