Faulting or "lazy initialization" pattern in Swift - iOS

I commonly use a "faulting" or lazy initialization pattern in iOS when dealing with big objects.
Whenever a class has a property pointing to a "fat" object, I create a custom getter that checks if the iVar is nil. If it is, it creates the fat object and returns it. If it isn't, it just returns the "fat" object.
The container of this property also subscribes to memory warnings, and when one is received, it sets the iVar to nil, reducing the memory footprint. As you see, it's pretty similar to faulting in Core Data.
I'm trying to reproduce this in Swift, but haven't found a decent and elegant solution so far.
a) First attempt: lazy stored properties
This doesn't work, because if I set the property to nil, it will remain nil forever. The "magic" only happens the first time you access the property:
struct FatThing {
    // I represent something big, which might have to be
    // "faulted" (set to nil) when a memory warning
    // is received
    var bigThing = "I'm fat"
}

class Container {
    lazy var fat: FatThing? = FatThing()
}

var c = Container()
c.fat
c.fat = nil
c.fat // returns nil
b) Second attempt: A stored property with observers
This also fails, because of the lack of get observers. I need a willGet and didGet, not just willSet and didSet.
Why on Earth are there no get observers? What's the use of this half-baked thing called observers?
c) Third attempt: A computed property with a stored helper property
This is the only working option I've found so far, but it's as ugly as a baboon's rear end!
struct FatThing {
    // I represent something big, which might have to be
    // "faulted" (set to nil) when a memory warning
    // is received
    var bigThing = "I'm fat"
}

class Container {
    private var _fat: FatThing? // having this extra and exposed var kills my inner child

    var fat: FatThing? {
        get {
            if _fat == nil {
                _fat = FatThing()
            }
            return _fat
        }
        set {
            _fat = newValue
        }
    }
}

var c = Container()
c.fat
c.fat = nil
c.fat // returns FatThing
So much for making my code look simpler and shorter...
Is there a simple and elegant way to implement this? This is hardly exotic stuff in a memory-deprived environment like iOS!

The ability to override the getter or setter individually is a peculiarity of Objective-C that has no counterpart in Swift.
The right option at your disposal is no. 3, using a backing property to store the fat data and a computed property to make it accessible. I agree that there's some boilerplate code, but that's the tradeoff for having what you need.
However, if you use that pattern frequently, then you can create a protocol:
protocol Instantiable {
    init()
}
and implement it in your FatThing struct/class. Next, create a generic function containing the boilerplate code:
func lazyInitializer<T: Instantiable>(_ property: inout T?) -> T {
    if property == nil {
        property = T()
    }
    return property!
}
Note that T must conform to Instantiable - that allows you to create an instance with a parameterless initializer.
Last, use it as follows:
private var _fat: FatThing? // having this extra and exposed var kills my inner child

var fat: FatThing? {
    get { return lazyInitializer(&self._fat) }
    set { _fat = newValue }
}
Note that in your code you don't have to declare the fat computed property as optional - you are ensuring in the get implementation that it is never nil, so a better implementation is:
var fat: FatThing {
    get {
        if _fat == nil {
            _fat = FatThing()
        }
        return _fat!
    }
    set {
        _fat = newValue
    }
}
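For completeness, here is a minimal sketch of how the whole pattern could be wired up to memory warnings, assuming a UIViewController container and the Instantiable protocol and lazyInitializer(_:) helper shown above (ContainerViewController is just a placeholder name):

import UIKit

struct FatThing: Instantiable {
    // Something big that can be thrown away and rebuilt on demand
    var bigThing = "I'm fat"
    init() {}
}

class ContainerViewController: UIViewController {
    private var _fat: FatThing?

    var fat: FatThing {
        get { return lazyInitializer(&_fat) }
        set { _fat = newValue }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        _fat = nil // "fault" the fat object; it is recreated lazily on the next access
    }
}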

Related

How to use lazy initialization with getter/setter method?

How can I use lazy initialization with get and set() closures?
Here is lazy initialization code:
lazy var pi: Double = {
    // Calculations...
    return resultOfCalculation
}()
and here is the getter/setter code:
var pi: Double {
    get {
        // code to execute
        return someValue
    }
    set(newValue) {
        // code to execute
    }
}
I assume what you're trying to do is lazily generate the default for a writable property. I often find that people jump to laziness when it isn't needed. Make sure this is really worth the trouble. This would only be worth it if the default value is rarely used, but fairly expensive to create. But if that's your situation, this is one way to do it.
lazy implements one very specific and fairly limited pattern that often is not what you want. (It's not clear at all that lazy was a valuable addition to the language given how it works, and there is active work in replacing it with a much more powerful and useful system of attributes.) When lazy isn't the tool you want, you just build your own. In your example, it would look like this:
private var _pi: Double?

var pi: Double {
    get {
        if let pi = _pi { return pi }
        let result = // calculations....
        _pi = result
        return result
    }
    set { _pi = newValue }
}
This said, in most of the cases I've seen this come up, it's better to use a default value in init:
func computePi() -> Double {
    // compute and return value
}

// This is global. Globals are lazy (in a thread-safe way) automatically.
let computedPi = computePi()

struct X {
    let pi: Double // I'm assuming it was var only because it might be overridden
    init(pi: Double = computedPi) {
        self.pi = pi
    }
}
Doing it this way only computes pi once in the whole program (rather than once per instance). And it lets us make pi "write-exactly-once" rather than mutable state. (That may or may not match your needs; if it really needs to be writable, then var.)
A similar default value approach can be used for objects that are expensive to construct (rather than static things that are expensive to compute) without needing a global.
struct X {
    let expensive: ExpensiveObject
    init(expensive: ExpensiveObject = ExpensiveObject()) {
        self.expensive = expensive
    }
}
But sometimes getters and setters are a better fit.
The point of a lazy variable is that it is not initialized until it is fetched, thus preventing its (possibly expensive) initializer from running until and unless the value of the variable is accessed.
Well, that's exactly what a getter for a calculated variable does too! It doesn't run until and unless it is called. Therefore, a getter for a calculated variable is lazy.
The question, on the whole, is thus meaningless. (The phrase "use lazy initialization" reveals the flaw, since a calculated variable is never initialized; it is calculated!)
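As a small illustration of that point (the names below are made up), the body of the getter does not run until the property is actually read:

struct Circle {
    var radius: Double

    // This getter runs only when (and every time) the property is read.
    var circumference: Double {
        print("computing circumference")
        return 2 * Double.pi * radius
    }
}

let circle = Circle(radius: 1)
// Nothing has been printed yet; the "expensive" work has not happened.
print(circle.circumference) // prints "computing circumference", then 6.28...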

Is accessing objects frequently using Computed Properties affecting performance?

I am writing a game using the GameplayKit & SpriteKit frameworks. In Apple's examples for GameplayKit you often see the following:
class PlayerEntity: GKEntity {

    // MARK: Components

    var renderComponent: RPRenderComponent {
        guard let renderComponent = componentForClass(RPRenderComponent.self) else {
            fatalError()
        }
        return renderComponent
    }

    var stateMachineComponent: RPStateMachineComponent {
        guard let stateMachineComponent = componentForClass(RPStateMachineComponent.self) else {
            fatalError()
        }
        return stateMachineComponent
    }

    // MARK: Initialisation

    override init() {
        super.init()

        let renderComponent = RPRenderComponent()
        renderComponent.node.entity = self

        let stateMachineComponent = RPStateMachineComponent(states: [
            RPPlayerStandingState(entity: self),
            RPPlayerFallingState(entity: self),
            RPPlayerBouncingDownState(entity: self),
            RPPlayerBouncingUpState(entity: self),
            RPPlayerJumpingState(entity: self),
            RPPlayerBoostState(entity: self)
        ])

        addComponent(renderComponent)
        addComponent(stateMachineComponent)
    }
}
Components are created and initialized during the initialization of the class they belong to and added to the components array via addComponent(component: GKComponent).
To make these components accessible from outside the class, Apple's examples always use computed properties which call componentForClass() to return the corresponding component instance.
The render component, however, is accessed per frame, meaning that during every update cycle I need to access the render component, which in turn calls the computed property and, in my eyes, adds avoidable processing load.
Calling would look like:
func update(withDeltaTime time: NSTimeInterval) {
    playerEntity.renderComponent.doSomethingPerFrame()
}
Instead of this I am doing it like following:
class PlayerEntity: GKEntity {

    // MARK: Components

    let renderComponent: RPRenderComponent
    var stateMachineComponent: RPStateMachineComponent!

    // MARK: Initialisation

    override init() {
        renderComponent = RPRenderComponent()

        super.init()

        renderComponent.node.entity = self

        stateMachineComponent = RPStateMachineComponent(states: [
            RPPlayerStandingState(entity: self),
            RPPlayerFallingState(entity: self),
            RPPlayerBouncingDownState(entity: self),
            RPPlayerBouncingUpState(entity: self),
            RPPlayerJumpingState(entity: self),
            RPPlayerBoostState(entity: self)
        ])

        addComponent(renderComponent)
        addComponent(stateMachineComponent)
    }
}
Instead of accessing the components through computed properties, I am holding strong references to them in my class. In my eyes I am avoiding additional overhead when calling a computed property, since I avoid that "computation".
But I am not quite sure why Apple did it with computed properties. Maybe I am totally wrong about computed properties, or maybe it's just the coding style of whoever wrote that example?
Are Computed Properties affecting performance? Do they automatically mean more overhead?
Or:
Is it OK and 'safe' (in terms of resources) to use computed properties like this? (In my eyes computed properties are, despite my concerns, quite an elegant solution, btw.)
It depends on how much each calculation of the property costs, and how often you do that calculation.
In general, you should use computed properties if the calculation cost is so low that it makes little difference, and functions if it takes longer. Alternatively, a computed property can cache its values if the computed property often returns the same result.
Used properly, there is very little cost involved. You also would need to suggest an alternative to a computed property that is as easy to use and faster.
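As a rough sketch of the caching idea mentioned above (all names here are hypothetical, not from GameplayKit), the expensive work runs once and later reads return the stored result until the cache is invalidated:

import CoreGraphics

class TerrainEntity {
    private var cachedBounds: CGRect?

    // Computed property that caches its result after the first access.
    var bounds: CGRect {
        if let cached = cachedBounds { return cached }
        let computed = expensiveBoundsCalculation()
        cachedBounds = computed
        return computed
    }

    // Invalidate the cache whenever the underlying data changes.
    func terrainDidChange() {
        cachedBounds = nil
    }

    private func expensiveBoundsCalculation() -> CGRect {
        // Stand-in for real geometry work
        return CGRect(x: 0, y: 0, width: 100, height: 100)
    }
}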

Protocol Oriented Programming and the Delegate Pattern

A WWDC 2015 session video describes the idea of Protocol-Oriented Programming, and I want to adopt this technique in my future apps. I've been playing around with Swift 2.0 for the last couple of days in order to understand this new approach, and am stuck trying to make it work with the delegate pattern.
I have two protocols that define the basic structure of the interesting part of my project (the example code is nonsense but describes the problem):
1) A delegation protocol that makes accessible some information, similar to UITableViewController's dataSource protocol:
protocol ValueProvider {
    var value: Int { get }
}
2) An interface protocol of the entity that does something with the information from above (here's where the idea of a "Protocol-First" approach comes into play):
protocol DataProcessor {
    var provider: ValueProvider { get }
    func process() -> Int
}
Regarding the actual implementation of the data processor, I can now choose between enums, structs, and classes. There are several different abstraction levels at which I want to process the information, so classes appear to fit best (however, I don't want to make this an ultimate decision, as it might change in future use cases). I can define a base processor class, on top of which I can build several case-specific processors (not possible with structs and enums):
class BaseDataProcessor: DataProcessor {
    let provider: ValueProvider

    init(provider: ValueProvider) {
        self.provider = provider
    }

    func process() -> Int {
        return provider.value + 100
    }
}

class SpecificDataProcessor: BaseDataProcessor {
    override func process() -> Int {
        return super.process() + 200
    }
}
Up to here everything works like a charm. However, in reality the specific data processors are tightly bound to the values that are processed (as opposed to the base processor, for which this is not true), such that I want to integrate the ValueProvider directly into the subclass (for comparison: often, UITableViewControllers are their own dataSource and delegate).
First I thought of adding a protocol extension with a default implementation:
extension DataProcessor where Self: ValueProvider {
    var provider: ValueProvider { return self }
}
This would probably work if I did not have the BaseDataProcessor class that I don't want to make value-bound. However, subclasses that inherit from BaseDataProcessor and adopt ValueProvider seem to override that implementation internally, so this is not an option.
I continued experimenting and ended up with this:
class BaseDataProcessor: DataProcessor {
    // Yes, that's ugly, but I need this 'var' construct so I can override it later
    private var _provider: ValueProvider!
    var provider: ValueProvider { return _provider }

    func process() -> Int {
        return provider.value + 10
    }
}

class SpecificDataProcessor: BaseDataProcessor, ValueProvider {
    let value = 1234

    override var provider: ValueProvider { return self }

    override func process() -> Int {
        return super.process() + 100
    }
}
Which compiles and at first glance appears to do what I want. However, this is not a solution as it produces a reference cycle, which can be seen in a Swift playground:
weak var p: SpecificDataProcessor!
autoreleasepool {
    p = SpecificDataProcessor()
    p.process()
}
p // <-- not nil, hence reference cycle!
Another option might be to add class constraints to the protocol definitions. However, this would break the POP approach as I understand it.
Concluding, I think my question boils down to the following: How do you make Protocol Oriented Programming and the Delegate Pattern work together without restricting yourself to class constraints during protocol design?
It turns out that using autoreleasepool in playgrounds is not a suitable way to prove reference cycles. In fact, there is no reference cycle in the code, as can be seen when the code is run as a command-line app. The question still stands whether this is the best approach. It works but looks slightly hacky.
Also, I'm not too happy with the initialization of BaseDataProcessors and SpecificDataProcessors. BaseDataProcessors should not know any implementation detail of the subclasses w.r.t. valueProvider, and subclasses should be discreet about themselves being the valueProvider.
For now, I have solved the initialization problem as follows:
class BaseDataProcessor: DataProcessor {
    private var provider_: ValueProvider! // Not great but necessary for the 'var' construct
    var provider: ValueProvider { return provider_ }

    init(provider: ValueProvider!) {
        provider_ = provider
    }

    func process() -> Int {
        return provider.value + 10
    }
}

class SpecificDataProcessor: BaseDataProcessor, ValueProvider {
    override var provider: ValueProvider { return self } // provider_ is not needed any longer

    // Hide the init method that takes a ValueProvider
    private init(_: ValueProvider!) {
        super.init(provider: nil)
    }

    // Provide a clean init method
    init() {
        super.init(provider: nil)
        // I cannot set provider_ = self, because provider_ is strong. Can't make it weak either
        // because in BaseDataProcessor it's not clear whether it is of reference or value type
    }

    let value = 1234
}
If you have a better idea, please let me know :)

What if I want to assign a property to itself?

If I attempt to run the following code:
photographer = photographer
I get the error:
Assigning a property to itself.
I want to assign the property to itself to force the photographer didSet block to run.
Here's a real-life example: In the "16. Segues and Text Fields" lecture of the Winter 2013 Stanford iOS course (13:20), the professor recommends writing code similar to the following:
@IBOutlet weak var photographerLabel: UILabel!

var photographer: Photographer? {
    didSet {
        self.title = photographer?.name
        if isViewLoaded() { reload() }
    }
}

override func viewDidLoad() {
    super.viewDidLoad()
    reload()
}

func reload() {
    photographerLabel.text = photographer?.name
}
Note: I made the following changes:
1. The code was switched from Objective-C to Swift.
2. Because it's in Swift, I use the didSet block of the property instead of the setPhotographer: method.
3. Instead of self.view.window I am using isViewLoaded, because the former erroneously forces the view to load upon access of the view property.
4. The reload() method (only) updates a label, for simplicity and because it more closely resembles my code.
5. The photographerLabel IBOutlet was added to support this simpler code.
6. Since I'm using Swift, the isViewLoaded() check no longer exists merely for performance reasons; it is now required to prevent a crash, because the IBOutlet is defined as UILabel! and not UILabel?, so accessing it before the view is loaded would crash the application. This wasn't mandatory in Objective-C, which uses the null object pattern.
The reason we call reload twice is that we don't know whether the property will be set before or after the view is created. For example, the user might first set the property and then present the view controller, or they might present the view controller and then update the property.
I like how this property is agnostic as to when the view is loaded (it's best not to make any assumptions about view loading time), so I want to use this same pattern (only slightly modified) in my own code:
@IBOutlet weak var photographerLabel: UILabel?

var photographer: Photographer? {
    didSet {
        photographerLabel?.text = photographer?.name
    }
}
override func viewDidLoad() {
    super.viewDidLoad()
    photographer = photographer
}
Here instead of creating a new method to be called from two places, I just want the code in the didSet block. I want viewDidLoad to force the didSet to be called, so I assign the property to itself. Swift doesn't allow me to do that, though. How can I force the didSet to be called?
Prior to Swift 3.1 you could assign the property name to itself with:
name = (name)
but this now gives the same error: "assigning a property to itself".
There are many other ways to work around this including introducing a temporary variable:
let temp = name
name = temp
This is just too much fun not to share. I'm sure the community can come up with many more ways to do this; the crazier the better!
class Test: NSObject {
    var name: String? {
        didSet {
            print("It was set")
        }
    }

    func testit() {
        // name = (name) // No longer works with Swift 3.1 (bug SR-4464)
        // (name) = name // No longer works with Swift 3.1
        // (name) = (name) // No longer works with Swift 3.1
        (name = name)
        name = [name][0]
        name = [name].last!
        name = [name].first!
        name = [1:name][1]!
        name = name ?? nil
        name = nil ?? name
        name = name ?? name
        name = {name}()
        name = Optional(name)!
        name = ImplicitlyUnwrappedOptional(name)
        name = true ? name : name
        name = false ? name : name
        let temp = name; name = temp
        name = name as Any as? String
        name = (name,0).0
        name = (0,name).1
        setValue(name, forKey: "name") // requires class derive from NSObject
        name = Unmanaged.passUnretained(self).takeUnretainedValue().name
        name = unsafeBitCast(name, to: type(of: name))
        name = unsafeDowncast(self, to: type(of: self)).name
        perform(#selector(setter:name), with: name) // requires class derive from NSObject
        name = (self as Test).name
        unsafeBitCast(dlsym(dlopen("/usr/lib/libobjc.A.dylib",RTLD_NOW),"objc_msgSend"),to:(#convention(c)(Any?,Selector!,Any?)->Void).self)(self,#selector(setter:name),name) // requires class derive from NSObject
        unsafeBitCast(class_getMethodImplementation(type(of: self), #selector(setter:name)), to:(#convention(c)(Any?,Selector!,Any?)->Void).self)(self,#selector(setter:name),name) // requires class derive from NSObject
        unsafeBitCast(method(for: #selector(setter:name)),to:(#convention(c)(Any?,Selector,Any?)->Void).self)(self,#selector(setter:name),name) // requires class derive from NSObject
        _ = UnsafeMutablePointer(&name)
        _ = UnsafeMutableRawPointer(&name)
        _ = UnsafeMutableBufferPointer(start: &name, count: 1)
        withUnsafePointer(to: &name) { name = $0.pointee }
        // Using NSInvocation, requires class derive from NSObject
        let invocation : NSObject = unsafeBitCast(method_getImplementation(class_getClassMethod(NSClassFromString("NSInvocation"), NSSelectorFromString("invocationWithMethodSignature:"))),to:(#convention(c)(AnyClass?,Selector,Any?)->Any).self)(NSClassFromString("NSInvocation"),NSSelectorFromString("invocationWithMethodSignature:"),unsafeBitCast(method(for: NSSelectorFromString("methodSignatureForSelector:"))!,to:(#convention(c)(Any?,Selector,Selector)->Any).self)(self,NSSelectorFromString("methodSignatureForSelector:"),#selector(setter:name))) as! NSObject
        unsafeBitCast(class_getMethodImplementation(NSClassFromString("NSInvocation"), NSSelectorFromString("setSelector:")),to:(#convention(c)(Any,Selector,Selector)->Void).self)(invocation,NSSelectorFromString("setSelector:"),#selector(setter:name))
        var localVarName = name
        withUnsafePointer(to: &localVarName) { unsafeBitCast(class_getMethodImplementation(NSClassFromString("NSInvocation"), NSSelectorFromString("setArgument:atIndex:")),to:(#convention(c)(Any,Selector,OpaquePointer,NSInteger)->Void).self)(invocation,NSSelectorFromString("setArgument:atIndex:"), OpaquePointer($0),2) }
        invocation.perform(NSSelectorFromString("invokeWithTarget:"), with: self)
    }
}
let test = Test()
test.testit()
There are some good workarounds but there is little point in doing that.
If a programmer (future maintainer of the code) sees code like this:
a = a
They will remove it.
Such a statement (or a workaround) should never appear in your code.
If your property looks like this:
var a: Int {
    didSet {
        // code
    }
}
then it's not a good idea to invoke the didSet handler via the assignment a = a.
What if a future maintainer adds a performance improvement to the didSet like this?
var a: Int {
    didSet {
        guard a != oldValue else {
            return
        }
        // code
    }
}
The real solution is to refactor:
var a: Int {
    didSet {
        self.updateA()
    }
}

fileprivate func updateA() {
    // the original code
}
And instead of a = a directly call updateA().
If we are speaking about outlets, a suitable solution is to force the loading of views before assigning for the first time:
@IBOutlet weak var photographerLabel: UILabel?

var photographer: Photographer? {
    didSet {
        _ = self.view // or self.loadViewIfNeeded() on iOS >= 9
        photographerLabel?.text = photographer?.name // we can use ! here, it makes no difference
    }
}
That will make the code in viewDidLoad unnecessary.
Now you might be asking "why should I load the view if I don't need it yet? I want only to store my variables here for future use". If that's what you are asking, it means you are using a view controller as your model class, just to store data. That's an architecture problem by itself. If you don't want to use a controller, don't even instantiate it. Use a model class to store your data.
I hope one day the Swift developers will fix this quirk :)
A simple crutch:
func itself<T>(_ value: T) -> T {
    return value
}
Use:
// refresh
style = itself(style)
image = itself(image)
text = itself(text)
(including optionals)
Make a function that the didSet calls, then call that function when you want to update something. Seems like this would guard against developers going "WTF?" in the future.
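For example, a minimal sketch of that suggestion (the type and method names below are placeholders, not from the question), which mirrors the refactoring shown in the earlier answer:

import UIKit

struct Photographer { let name: String }

class PhotoViewController: UIViewController {
    @IBOutlet weak var photographerLabel: UILabel?

    var photographer: Photographer? {
        didSet { refreshPhotographerUI() }
    }

    private func refreshPhotographerUI() {
        photographerLabel?.text = photographer?.name
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        refreshPhotographerUI() // call the helper directly instead of writing photographer = photographer
    }
}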
@vacawama did a great job with all those options. However, in iOS 10.3 Apple banned some of these ways, and most likely will do so again in the future.
Note: To avoid the risk and future errors, I will use a temporary variable.
We can create a simple function for that:
func callSet<T>(_ object: inout T) {
    let temporaryObject = object
    object = temporaryObject
}
Would be used like: callSet(&foo)
Or even a unary operator, if there is a fitting one ...
prefix operator +=

prefix func +=<T>(_ object: inout T) {
    let temporaryObject = object
    object = temporaryObject
}
Would be used like: +=foo

initializing class properties before use in Swift/iOS

I'm having trouble grasping the proper way of instantiating variables that always need to be set before an object is fully functional but may need to be instantiated after the constructor. Based on Swift's other conventions and restrictions it seems like there is a design pattern I'm unaware of.
Here is my use case:
- I have a class that inherits from UIViewController and will programmatically create views based on user actions
- I need to attach these views to this class, but to do so I need to retrieve their content based on configuration data supplied by another controller
- I don't care if this configuration data is passed to the constructor (in which case it would always be required) or supplied by a secondary call to this object before it is used
My problem is that both of the approaches in bullet 3 seem flawed.
In the first case, there is only one legitimate constructor this class can be called with, yet I'm forced to override other constructors and initialize member variables with fake values even if the other constructors are never intended to be used (I'm also trying to keep these variables as let types based on Swift's best practices).
In the second case, I'm effectively splitting my constructor into two parts and introducing an additional point of failure in case the second part is not called before the object is used. I also can't move this second part to a method that's guaranteed to be called prior to usage (such as viewDidLoad), because I still need to pass in additional arguments from the config. While I can make sure to call the initPartTwo manually, I'd prefer a mechanism that better groups it with the actual constructor. I can't be the first one to run into this, and it seems like there is a pattern I'm not seeing that would make this cleaner.
UPDATE:
I ended up going with a modified version of the pattern matt suggested:
struct Thing {
    let item1: String
    let item2: String

    struct Config {
        let item3: String
        let item4: String
    }

    var config: Config! {
        willSet {
            if self.config != nil {
                fatalError("tried to initialize config twice")
            }
        }
    }

    init() {
        self.item1 = ...
        self.item2 = ...
        ...
    }

    public mutating func phaseTwoInit(item3: String, item4: String) {
        self.config = Config(item3: item3, item4: item4)
        ...
    }
}

var t = Thing()
...
t.phaseTwoInit(...)
...
// start using t
If an initial instance property value can't be supplied at object initialization time, the usual thing is to declare it as an Optional. That way it doesn't need to be initialized by the class's initializers (it has a value - it is nil automatically), plus your code can subsequently distinguish uninitialized (nil) from initialized (not nil).
If the Optional is an implicitly unwrapped Optional, this arrangement need have no particular effect on your code (i.e. it won't have to be peppered with unwrappings).
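For instance, a small sketch of the implicitly unwrapped Optional arrangement (Config and DetailViewController are hypothetical names, not from the question):

import UIKit

struct Config { let title: String }

class DetailViewController: UIViewController {
    // Set by the presenting code before the view is shown.
    var config: Config!

    override func viewDidLoad() {
        super.viewDidLoad()
        title = config.title // no explicit unwrapping needed; crashes loudly if it was never set
    }
}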
If your objection is that you are forced to open the door to multiple settings of this instance variable because now it must be declared with var, then close the door with a setter observer:
struct Thing {
    var name: String! {
        willSet {
            if self.name != nil {
                fatalError("tried to set name twice")
            }
        }
    }
}
var t = Thing()
t.name = "Matt" // no problem
t.name = "Rumplestiltskin" // crash
