Let's say there is a variable that I want to make thread safe. One of the most common ways to do this is:
var value: A {
get { return queue.sync { self._value } }
set { queue.sync { self._value = newValue } }
}
However, this property is not completely thread safe if we change the value as in the example below:
Class.value += 1
So my question is: is using NSLock on the same principle also not completely thread safe?
var value: A {
get {
lock.lock()
defer { lock.unlock() }
return self._value
}
set {
lock.lock()
defer { lock.unlock() }
self._value = newValue
}
}
In answer to your question, the lock approach suffers from the exact same problems that the GCD approach does. Atomic accessor methods are simply insufficient to ensure broader thread safety.
The issue is, as discussed elsewhere, that the innocuous += operator retrieves the value via the getter, increments that value, and stores the new value via the setter. To achieve thread safety, the whole process needs to be wrapped in a single synchronization mechanism. If you want an atomic increment operation, you would write a method to do that.
So, taking your NSLock example, I might move the synchronization logic into its own method, e.g.:
class Foo<T> {
private let lock = NSLock()
private var _value: T
init(value: T) {
_value = value
}
var value: T {
get { lock.synchronized { _value } }
set { lock.synchronized { _value = newValue } }
}
}
extension NSLocking {
func synchronized<T>(block: () throws -> T) rethrows -> T {
lock()
defer { unlock() }
return try block()
}
}
But if you wanted to have an operation to increment the value in a thread-safe manner, you would write a method to do that, e.g.:
extension Foo where T: Numeric {
func increment(by increment: T) {
lock.synchronized {
_value += increment
}
}
}
Then, rather than this non-thread-safe attempt:
foo.value += 1
You would instead employ the following thread-safe rendition:
foo.increment(by: 1)
This pattern, of wrapping the increment process in its own method that synchronizes the whole operation, would be applicable regardless of what synchronization mechanism you use (e.g., locks, GCD serial queue, reader-writer pattern, os_unfair_lock, etc.).
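For illustration, here is a minimal sketch of the same idea using a GCD serial queue in place of the lock (the class name and queue label are mine, purely illustrative):
import Foundation

// Sketch: the same "synchronize the whole read-modify-write" pattern,
// but with a serial dispatch queue instead of NSLock.
class QueueFoo<T: Numeric> {
    private let queue = DispatchQueue(label: "com.example.queuefoo") // serial by default
    private var _value: T

    init(value: T) {
        _value = value
    }

    var value: T {
        get { queue.sync { _value } }
        set { queue.sync { _value = newValue } }
    }

    // The increment is one synchronized unit, not a separate get and set.
    func increment(by increment: T) {
        queue.sync { _value += increment }
    }
}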
For what it is worth, the Swift 5.5 actor pattern (outlined in SE-0306) formalizes this pattern. Consider:
actor Bar<T> {
var value: T
init(value: T) {
self.value = value
}
}
extension Bar where T: Numeric {
func increment(by increment: T) {
value += increment
}
}
Here, the increment method is automatically an “actor-isolated” method (i.e., it will be synchronized), but the actor will also control interaction with its property's setter; namely, if you try to set value from outside the actor, you will receive an error:
Actor-isolated property 'value' can only be mutated from inside the actor
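A hypothetical usage sketch (reusing the Bar actor above): from outside the actor, its members are reached asynchronously, so both the increment and the read are awaited, and the actor serializes them for you.
let bar = Bar(value: 0)

Task {
    await bar.increment(by: 1)      // actor-isolated; serialized by the actor
    let current = await bar.value   // reads from outside the actor are awaited, too
    print(current)                  // 1
}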
That's interesting, I'm learning about this for the first time.
The issue in the first bit of code is that:
object.value += 1
has the same semantics as
object.value = object.value + 1
which we can further expand to:
let originalValue = queue.sync { object._value }
let newValue = originalValue + 1
queue.sync { object._value = newValue }
Expanding it so makes it clear that the synchronization of the getter and setter work fine, but they're not synchronized as a whole. A context switch in the middle of the code above could cause _value to be mutated by another thread, without newValue reflecting the change.
Using a lock would have the exact same problem. It would expand to:
lock.lock()
let originalValue = object._value
lock.unlock()
let newValue = originalValue + 1
lock.lock()
object._value = newValue
lock.unlock()
You can see this for yourself by instrumenting your code with some logging statements, which show that the mutation isn't fully covered by the lock:
class C {
var lock = NSLock()
var _value: Int
var value: Int {
get {
print("value.get start")
print("lock.lock()")
lock.lock()
defer {
print("lock.unlock()")
lock.unlock()
print("value.get end")
}
print("getting self._value")
return self._value
}
set {
print("\n\n\nvalue.set start")
lock.lock()
print("lock.lock()")
defer {
print("lock.unlock()")
lock.unlock()
print("value.set end")
}
print("setting self._value")
self._value = newValue
}
}
init(_ value: Int) { self._value = value }
}
let object = C(0)
object.value += 1
I am writing a custom image fetcher to fetch the images needed for my collection view. Below is my image fetcher logic
class ImageFetcher {
/// Thread safe cache that stores `UIImage`s against corresponding URL's
private var cache = Synchronised([URL: UIImage]())
/// Inflight Requests holder which we can use to cancel the requests if needed
/// Thread safe
private var inFlightRequests = Synchronised([UUID: URLSessionDataTask]())
func fetchImage(using url: URL, completion: @escaping (Result<UIImage, Error>) -> Void) -> UUID? {
/// If the image is present in cache return it
if let image = cache.value[url] {
completion(.success(image))
}
let uuid = UUID()
let dataTask = URLSession.shared.dataTask(with: url) { [weak self] data, response, error in
guard let self = self else { return }
defer {
self.inFlightRequests.value.removeValue(forKey: uuid)
}
if let data = data, let image = UIImage(data: data) {
self.cache.value[url] = image
DispatchQueue.main.async {
completion(.success(image))
}
return
}
guard let error = error else {
// no error , no data
// trigger some special error
return
}
// Task cancelled do not send error code
guard (error as NSError).code == NSURLErrorCancelled else {
completion(.failure(error))
return
}
}
dataTask.resume()
self.inFlightRequests.value[uuid] = dataTask
return uuid
}
func cancelLoad(_ uuid: UUID) {
self.inFlightRequests.value[uuid]?.cancel()
self.inFlightRequests.value.removeValue(forKey: uuid)
}
}
This is a block of code that provides the thread safety needed to access the cache
/// Use to make a struct thread safe
public class Synchronised<T> {
private var _value: T
private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)
public init(_ value: T) {
_value = value
}
public var value: T {
get {
return queue.sync { _value }
}
set { queue.async(flags: .barrier) { self._value = newValue }}
}
}
I am not seeing the desired scroll performance, and I anticipate that is because my main thread is getting blocked when I try to access the cache (queue.sync { _value }). I am calling the fetchImage method from the collection view's cellForRowAt method, and I can't seem to find a way to dispatch it off the main thread, because I need the request's UUID so that I can cancel the request if needed. Any suggestions on how to get this off the main thread, or on how to architect this in a better way?
I do not believe that your scroll performance problem is related to fetchImage. While there are modest performance issues in Synchronised, they are likely not enough to explain what you are seeing. That having been said, there are several issues here, but blocking the main queue does not appear to be one of them.
The more likely culprit might be retrieving assets that are larger than the image view (e.g., a large asset in a small image view requires resizing, which can block the main thread) or some mistake in the fetching logic. When you say “not seeing desired scroll performance”, is it stuttering or just slow? The nature of the “scroll performance” problem will dictate the solution.
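If oversized assets turn out to be the culprit, one common mitigation is to downscale images to roughly the image view's size off the main thread before handing them to the cell. A minimal sketch (the function name and target size are illustrative):
import UIKit

// Sketch: render a large image into a smaller bitmap, off the main thread,
// then deliver only the downscaled result back to the main queue.
func downscaledImage(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// Usage, e.g. from a background queue:
// DispatchQueue.global(qos: .userInitiated).async {
//     let thumbnail = downscaledImage(bigImage, to: CGSize(width: 200, height: 200))
//     DispatchQueue.main.async { imageView.image = thumbnail }
// }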
A few unrelated observations:
Synchronised, used with a dictionary, is not thread-safe. Yes, the getter and setter for value are synchronized, but not the subsequent manipulation of that dictionary. It is also very inefficient (though not likely sufficiently inefficient to explain the problems you are having).
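To see why, consider what a single subscript assignment through the wrapper actually does: it reads the whole dictionary through the synchronized getter, mutates a local copy, and writes the whole dictionary back through the synchronized setter, with nothing tying those three steps together. A sketch of the effective expansion (using the Synchronised type from the question; the URL is a placeholder):
import UIKit

let cache = Synchronised([URL: UIImage]())
let url = URL(string: "https://example.com/image.png")!   // placeholder URL
let image = UIImage()

// `cache.value[url] = image` is effectively:
var copy = cache.value     // synchronized read of the whole dictionary
copy[url] = image          // unsynchronized mutation of a local copy
cache.value = copy         // synchronized write of the whole dictionary
// Any entry that another thread inserted between the read and the write is lost.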
I would suggest not synchronizing the retrieval and setting of the whole dictionary, but rather making a synchronized dictionary type:
public class SynchronisedDictionary<Key: Hashable, Value> {
private var _value: [Key: Value]
private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)
public init(_ value: [Key: Value] = [:]) {
_value = value
}
// you don't need/want this
//
// public var value: [Key: Value] {
// get { queue.sync { _value } }
// set { queue.async(flags: .barrier) { self._value = newValue } }
// }
subscript(key: Key) -> Value? {
get { queue.sync { _value[key] } }
set { queue.async(flags: .barrier) { self._value[key] = newValue } }
}
var count: Int { queue.sync { _value.count } }
}
In my tests, in a release build this was about 20 times faster. Plus, it is thread-safe.
But, the idea is that you should not expose the underlying dictionary, but rather just expose whatever interface you need for the synchronization type to manage the dictionary. You will likely want to add additional methods to the above (e.g. removeAll or whatever), but the above should be sufficient for your immediate purposes. And you should be able to do things like:
var dictionary = SynchronisedDictionary<String, UIImage>()
dictionary["foo"] = image
imageView.image = dictionary["foo"]
print(dictionary.count)
Alternatively, you could just dispatch all updates to the dictionary to the main queue (see point 4 below), then you don't need this synchronized dictionary type at all.
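That alternative is worth spelling out: if the cache is only ever touched on the main queue, it needs no synchronization of its own. A rough sketch (names are illustrative):
import UIKit

// Sketch: a main-queue-confined cache; every read and write happens on main.
var imageCache = [URL: UIImage]()

func store(_ image: UIImage, for url: URL) {
    DispatchQueue.main.async {
        imageCache[url] = image                         // all mutations funneled through main
    }
}

func cachedImage(for url: URL) -> UIImage? {
    dispatchPrecondition(condition: .onQueue(.main))    // document the confinement
    return imageCache[url]
}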
You might consider using NSCache, instead of your own dictionary, to hold the images. You want to make sure that the cache responds to memory pressure (emptying itself) or honors some fixed total cost limit. Plus, NSCache is already thread-safe.
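A minimal sketch of what an NSCache-backed cache might look like (the limits are arbitrary illustrative values; NSCache keys must be objects, hence the NSURL bridging):
import UIKit

// Sketch: NSCache is thread-safe and evicts automatically under memory pressure.
final class ImageCache {
    private let cache: NSCache<NSURL, UIImage> = {
        let cache = NSCache<NSURL, UIImage>()
        cache.countLimit = 100                     // arbitrary cap on the number of entries
        cache.totalCostLimit = 50 * 1024 * 1024    // arbitrary ~50 MB cost limit
        return cache
    }()

    subscript(url: URL) -> UIImage? {
        get { cache.object(forKey: url as NSURL) }
        set {
            if let image = newValue {
                cache.setObject(image, forKey: url as NSURL)
            } else {
                cache.removeObject(forKey: url as NSURL)
            }
        }
    }
}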
In fetchImage, you have several paths of execution where you do not call the completion handler. As a matter of convention, you will want to ensure that the completion handler is always called. E.g., what if the caller started a spinner before fetching the image and planned to stop it in the completion handler? If you might not call the completion handler, the spinner might never stop.
Similarly, where you do call the completion handler, you do not always dispatch it back to the main queue. I would either always dispatch back to the main queue (relieving the caller from having to do so) or just call the completion handler from the current queue, but only dispatching some of them to the main queue is an invitation for confusion.
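One way to satisfy both conventions is to funnel every exit path through a single local helper that hops to the main queue, so the caller can rely on exactly one completion call, always on main. A generic sketch (the finish helper and the error chosen for the “no data, no error” case are mine):
import Foundation

// Sketch: every branch reports exactly once, and always on the main queue.
func fetchData(from url: URL, completion: @escaping (Result<Data, Error>) -> Void) {
    func finish(_ result: Result<Data, Error>) {
        DispatchQueue.main.async { completion(result) }
    }

    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        if let error = error {
            finish(.failure(error))                          // covers cancellation, too; filter if desired
        } else if let data = data {
            finish(.success(data))
        } else {
            finish(.failure(URLError(.badServerResponse)))   // the "no data, no error" case
        }
    }
    task.resume()
}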
FWIW, you can create a unit test target and demonstrate the difference between the original Synchronised and the SynchronisedDictionary by testing a massively concurrent modification of the dictionary with concurrentPerform:
// this is not thread-safe if T is mutable
public class Synchronised<T> {
private var _value: T
private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)
public init(_ value: T) {
_value = value
}
public var value: T {
get { queue.sync { _value } }
set { queue.async(flags: .barrier) { self._value = newValue }}
}
}
// this is a thread-safe dictionary ... assuming `Value` is not a mutable reference type
public class SynchronisedDictionary<Key: Hashable, Value> {
private var _value: [Key: Value]
private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)
public init(_ value: [Key: Value] = [:]) {
_value = value
}
subscript(key: Key) -> Value? {
get { queue.sync { _value[key] } }
set { queue.async(flags: .barrier) { self._value[key] = newValue } }
}
var count: Int { queue.sync { _value.count } }
}
class SynchronisedTests: XCTestCase {
let iterations = 10_000
func testSynchronised() throws {
let dictionary = Synchronised([String: Int]())
DispatchQueue.concurrentPerform(iterations: iterations) { i in
let key = "\(i)"
dictionary.value[key] = i
}
XCTAssertEqual(iterations, dictionary.value.count) // XCTAssertEqual failed: ("10000") is not equal to ("834")
}
func testSynchronisedDictionary() throws {
let dictionary = SynchronisedDictionary<String, Int>()
DispatchQueue.concurrentPerform(iterations: iterations) { i in
let key = "\(i)"
dictionary[key] = i
}
XCTAssertEqual(iterations, dictionary.count) // success
}
}
I am using Swift on iOS, and I am trying to understand how to execute a method once the values of two variables have been set (to non-nil values) after their requests have finished.
After reading some documentation, I have found some interesting concepts. The first one is didSet, which works as an observer.
I could call the method simply by using didSet if I only needed one variable:
didSet
var myVar: String = "0" {
didSet {
print("Hello World.")
}
}
Nevertheless, I also need to wait for the second one, myVar2, so that would not work.
I have also found DispatchQueue, which I could use to wait a couple of seconds before calling the method (the requests that I am using are pretty fast):
DispatchQueue
DispatchQueue.main.asyncAfter(deadline: .now() + 2, execute: {
print("Hello world")
})
but I consider this solution inefficient.
Is there any way to combine these two variables or requests in order to call a method once both of them have finished setting their values?
Update
I have tried to replicate David's answer, which I believe is correct, but I get the following error on each \.
Type of expression is ambiguous without more context
I copy here my current code
var propertiesSet: [KeyPath<SearchViewController, Car>:Bool] = [\SearchViewController.firstCar:false, \SearchViewController.secondCar:false] {
didSet {
if propertiesSet.allSatisfy({ $0.value }) {
// Conditions passed, execute your custom logic
print("All Set")
} else {
print("Not yet")
}
}
}
var firstCar: Car? {
didSet {
propertiesSet[\SearchViewController.firstCar] = true
}
}
var secondCar: Car? {
didSet {
propertiesSet[\SearchViewController.secondCar] = true
}
}
The variables are set individually, each one on its own request.
You could make your properties optional and check they both have values set before calling your function.
var varA: String? = nil {
didSet {
if varA != nil && varB != nil {
myFunc()
}
}
}
var varB: String? = nil {
didSet {
if varA != nil && varB != nil {
myFunc()
}
}
}
Or you can call your function on each didSet and use a guard condition at the start of your function to check that both of your properties have values, or bail out:
var varA: String? = nil {
didSet {
myFunc()
}
}
var varB: String? = nil {
didSet {
myFunc()
}
}
func myFunc() {
guard varA != nil && varB != nil else { return }
// your code
}
First, you should think very carefully about what your semantics are here. When you say "set," do you mean "assigned a value" or do you mean "assigned a non-nil value"? (I assume you mean the latter in this case.) You should ask yourself: what should happen if your method has already fired, and then another value is set? What if one of the properties has a value set, then is set to nil, then is set to another value? Should that fire the method 1, 2, or 3 times?
Whenever possible you should work to make these kinds of issues impossible by requiring that the values be set together, in an init rather than mutable properties, for example.
But obviously there are cases where this is necessary (UI is the most common).
If you're targeting iOS 13+, you should explore Combine for these kinds of problems. As one approach:
class Model: ObservableObject {
@Published var first: String?
@Published var second: String?
@Published var ready = false
private var observers: Set<AnyCancellable> = []
init() {
$first.combineLatest($second)
.map { $0 != nil && $1 != nil }
.assign(to: \.ready, on: self)
.store(in: &observers)
}
}
let model = Model()
var observers: Set<AnyCancellable> = []
model.$ready
.sink { if $0 { print("GO!") } }
.store(in: &observers)
model.first = "ready"
model.second = "set"
// prints "GO!"
Another approach is to separate the incidental state that includes optionals from the actual object you're constructing, which does not.
// Possible parameters for Thing
struct Parameters {
var first: String?
var second: String?
}
// The thing you're actually constructing that requires all the parameters
struct Thing {
let first: String
let second: String
init?(parameters: Parameters) {
guard let first = parameters.first,
let second = parameters.second
else { return nil }
self.first = first
self.second = second
}
}
class TheUIElement {
// Any time the parameters change, try to make a Thing
var parameters: Parameters = Parameters() {
didSet {
thing = Thing(parameters: parameters)
}
}
// If you can make a Thing, then Go!
var thing: Thing? {
didSet {
if thing != nil { print("GO!") }
}
}
}
let element = TheUIElement()
element.parameters.first = "x"
element.parameters.second = "y"
// Prints "GO!"
You need to add a didSet to all variables that need to be set for your condition to pass. Also create a Dictionary containing KeyPaths to your variables that need to be set and a Bool representing whether they have been set already.
Then you can create a didSet on your Dictionary containing the "set-state" of your required variables, and when all of their values are true, meaning that all of them have been set, execute your code.
This solution scales well to any number of properties due to the use of a Dictionary rather than manually writing conditions like if aSet && bSet && cSet, which can get out of hand very easily.
class AllSet {
var propertiesSet: [KeyPath<AllSet, String>:Bool] = [\.myVar:false, \.myVar2:false] {
didSet {
if propertiesSet.allSatisfy({ $0.value }) {
// Conditions passed, execute your custom logic
print("All Set")
} else {
print("Not yet")
}
}
}
var myVar: String {
didSet {
propertiesSet[\.myVar] = true
}
}
var myVar2: String {
didSet {
propertiesSet[\.myVar2] = true
}
}
init(myVar: String, myVar2: String) {
self.myVar = myVar
self.myVar2 = myVar2
}
}
let all = AllSet(myVar: "1", myVar2: "2")
all.myVar = "2" // prints "Not yet"
all.myVar2 = "1" // prints "All Set"
class MyClass {
static var name: String = "Hello"
}
Static variables in Swift are not thread-safe by default. If I want to make them thread-safe, how can I achieve that?
Initialization of a static variable is thread-safe. But if the object itself is not thread-safe, you must synchronize your interaction with it from multiple threads (as you must with any non-thread-safe object, whether static or not).
At the bare minimum, you can make your exposed property a computed property that synchronizes access to some private property. For example:
class MyClass {
private static let lock = NSLock()
private static var _name: String = "Hello"
static var name: String {
get { lock.withCriticalSection { _name } }
set { lock.withCriticalSection { _name = newValue } }
}
}
Where
extension NSLocking {
func withCriticalSection<T>(block: () throws -> T) rethrows -> T {
lock()
defer { unlock() }
return try block()
}
}
Or you can use GCD serial queue, reader-writer, or a variety of other mechanisms to synchronize, too. The basic idea would be the same, though.
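For example, a reader-writer rendition of the same accessors might look like this (a sketch; the queue label is arbitrary):
import Foundation

// Sketch: concurrent reads, barrier-isolated writes, same computed-property shape.
class MyClass {
    private static let queue = DispatchQueue(label: "com.example.name", attributes: .concurrent)
    private static var _name: String = "Hello"

    static var name: String {
        get { queue.sync { _name } }                                // concurrent reads
        set { queue.async(flags: .barrier) { _name = newValue } }   // exclusive writes
    }
}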
That having been said, it’s worth noting that this sort of property accessor synchronization is insufficient for mutable types. A higher level of synchronization is needed.
Consider:
let group = DispatchGroup()
DispatchQueue.global().async(group: group) {
for _ in 0 ..< 100_000 {
MyClass.name += "x"
}
}
DispatchQueue.global().async(group: group) {
for _ in 0 ..< 100_000 {
MyClass.name += "y"
}
}
group.notify(queue: .main) {
print(MyClass.name.count)
}
You’d think that because we have thread-safe accessors, everything would be OK. But it’s not. This will not add 200,000 characters to the name. You’d have to do something like:
class MyClass {
private static let lock = NSLock()
private static var _name: String = ""
static var name: String {
get { lock.withCriticalSection { _name } }
}
static func appendString(_ string: String) {
lock.withCriticalSection {
_name += string
}
}
}
And then the following works:
let group = DispatchGroup()
DispatchQueue.global().async(group: group) {
for _ in 0 ..< 100_000 {
MyClass.appendString("x")
}
}
DispatchQueue.global().async(group: group) {
for _ in 0 ..< 100_000 {
MyClass.appendString("y")
}
}
group.notify(queue: .main) {
print(MyClass.name.count)
}
The other classic example is where you have two properties that are related to each other, for example firstName and lastName. You cannot just make each of the two properties thread-safe; rather, you need to make the single task of updating both properties thread-safe.
These are silly examples, but they illustrate that sometimes a higher level of abstraction is needed. For simple applications, though, synchronizing the computed properties’ accessors may be sufficient.
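To illustrate the firstName/lastName case above, here is a minimal sketch (reusing the withCriticalSection extension from earlier) in which the pair is read and updated as one unit inside a single critical section, rather than each property being synchronized on its own:
import Foundation

// Sketch: related properties are updated together, under one critical section.
class Person {
    private let lock = NSLock()
    private var _firstName = ""
    private var _lastName = ""

    var fullName: String {
        lock.withCriticalSection { "\(_firstName) \(_lastName)" }
    }

    func update(firstName: String, lastName: String) {
        lock.withCriticalSection {
            _firstName = firstName
            _lastName = lastName
        }
    }
}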
As a point of clarification, while statics, like globals, are instantiated lazily, standard stored properties bearing the lazy qualifier are not thread-safe. As The Swift Programming Language: Properties warns us:
If a property marked with the lazy modifier is accessed by multiple threads simultaneously and the property hasn’t yet been initialized, there’s no guarantee that the property will be initialized only once.
I am making three API calls and want API1 to execute first; once it has completed, API2 should execute, followed by API3.
I used an operation queue for this, adding dependencies between the operations. I tried setting priorities as well, but the API calls are still not executing in order. Help me figure out how to do this properly.
Code is like this :
let op1 = Operation()
op1.completionBlock = {
self.APICall(urlString: self.url1)
}
op1.queuePriority = .veryHigh
let op2 = Operation()
op2.completionBlock = {
self.APICall(urlString: self.url2)
}
op2.queuePriority = .high
let op3 = Operation()
op3.completionBlock = {
self.APICall(urlString: self.url3)
}
op3.queuePriority = .normal
op2.addDependency(op1)
op3.addDependency(op2)
queue.addOperations([op1, op2, op3], waitUntilFinished: false)
I put the API Call Method in DispatchQueue.main.sync like this:
func APICall(urlString: String) {
let headers: HTTPHeaders = [
"Accept": "text/html"
]
print(urlString)
DispatchQueue.main.sync {
Alamofire.request(urlString.addingPercentEncoding(withAllowedCharacters: CharacterSet.urlQueryAllowed)!, method: .get, parameters: nil, encoding: JSONEncoding.default, headers: headers).responseJSON {
response in
// self.stopActivityIndicator()
print(response.result.value)
switch response.result {
case .success:
break
case .failure(let error):
break
}
}
}
}
There are several issues:
If you’re trying to manage dependencies between operations, you cannot use the operation’s completionBlock for the code that the dependencies rely upon. The completion block isn’t called until after the operation is complete (thus defeating the purpose of any dependencies).
So the following will not work as intended:
let queue = OperationQueue()
let op1 = Operation()
op1.completionBlock = {
print("starting op1")
Thread.sleep(forTimeInterval: 1)
print("finishing op1")
}
let op2 = Operation()
op2.completionBlock = {
print("starting op2")
Thread.sleep(forTimeInterval: 1)
print("finishing op2")
}
op2.addDependency(op1)
queue.addOperations([op1, op2], waitUntilFinished: false)
But if you define the operations like so, it will work:
let op1 = BlockOperation() {
print("starting op1")
Thread.sleep(forTimeInterval: 1)
print("finishing op1")
}
let op2 = BlockOperation {
print("starting op2")
Thread.sleep(forTimeInterval: 1)
print("finishing op2")
}
(But this only works because I redefined operations that were synchronous. See point 3 below.)
It’s worth noting that you generally never use Operation directly. As the docs say:
An abstract class that represents the code and data associated with a single task. ...
Because the Operation class is an abstract class, you do not use it directly but instead subclass or use one of the system-defined subclasses (NSInvocationOperation or BlockOperation) to perform the actual task.
Hence the use of BlockOperation, above, or subclassing it as shown below in point 3.
One should not use priorities to manage the order that operations execute if the order must be strictly honored. As the queuePriority docs say (emphasis added):
This value is used to influence the order in which operations are dequeued and executed...
You should use priority values only as needed to classify the relative priority of non-dependent operations. Priority values should not be used to implement dependency management among different operation objects. If you need to establish dependencies between operations, use the addDependency(_:) method instead.
So, if you queue 100 high priority operations and 100 default priority operations, you are not guaranteed that all of the high priority ones will start before the lower priority ones start running. It will tend to prioritize them, but not strictly so.
The first point is moot, as you are calling asynchronous methods, so you can’t use a simple Operation or BlockOperation. If you don’t want a subsequent network request to start until the prior one finishes, you’ll want to wrap these network requests in a custom asynchronous Operation subclass, with all of the special KVO that entails:
class NetworkOperation: AsynchronousOperation {
var request: DataRequest
static var sessionManager: SessionManager = {
let manager = Alamofire.SessionManager(configuration: .default)
manager.startRequestsImmediately = false
return manager
}()
init(urlString: String, parameters: [String: String]? = nil, completion: @escaping (Result<Any>) -> Void) {
let headers: HTTPHeaders = [
"Accept": "text/html"
]
let string = urlString.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)!
let url = URL(string: string)!
request = NetworkOperation.sessionManager.request(url, parameters: parameters, headers: headers)
super.init()
request.responseJSON { [weak self] response in
completion(response.result)
self?.finish()
}
}
override func main() {
request.resume()
}
override func cancel() {
request.cancel()
}
}
Then you can do:
let queue = OperationQueue()
let op1 = NetworkOperation(urlString: ...) { result in
...
}
let op2 = NetworkOperation(urlString: ...) { result in
...
}
let op3 = NetworkOperation(urlString: ...) { result in
...
}
op2.addDependency(op1)
op3.addDependency(op2)
queue.addOperations([op1, op2, op3], waitUntilFinished: false)
And because that’s using AsynchronousOperation subclass (shown below), the operations won’t complete until the asynchronous request is done.
/// Asynchronous operation base class
///
/// This abstract class performs all of the necessary KVN of `isFinished` and
/// `isExecuting` for a concurrent `Operation` subclass. You can subclass this and
/// implement asynchronous operations. All you must do is:
///
/// - override `main()` with the tasks that initiate the asynchronous task;
///
/// - call the `finish()` function when the asynchronous task is done;
///
/// - optionally, periodically check `isCancelled` status, performing any clean-up
/// necessary and then ensuring that `finish()` is called; or
/// override `cancel` method, calling `super.cancel()` and then cleaning-up
/// and ensuring `finish()` is called.
public class AsynchronousOperation: Operation {
/// State for this operation.
@objc private enum OperationState: Int {
case ready
case executing
case finished
}
/// Concurrent queue for synchronizing access to `state`.
private let stateQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".rw.state", attributes: .concurrent)
/// Private backing stored property for `state`.
private var _state: OperationState = .ready
/// The state of the operation
@objc private dynamic var state: OperationState {
get { stateQueue.sync { _state } }
set { stateQueue.sync(flags: .barrier) { _state = newValue } }
}
// MARK: - Various `Operation` properties
open override var isReady: Bool { return state == .ready && super.isReady }
public final override var isAsynchronous: Bool { return true }
public final override var isExecuting: Bool { return state == .executing }
public final override var isFinished: Bool { return state == .finished }
// KVN for dependent properties
open override class func keyPathsForValuesAffectingValue(forKey key: String) -> Set<String> {
if ["isReady", "isFinished", "isExecuting"].contains(key) {
return [#keyPath(state)]
}
return super.keyPathsForValuesAffectingValue(forKey: key)
}
// Start
public final override func start() {
if isCancelled {
state = .finished
return
}
state = .executing
main()
}
/// Subclasses must implement this to perform their work and they must not call `super`. The default implementation of this function throws an exception.
open override func main() {
fatalError("Subclasses must implement `main`.")
}
/// Call this function to finish an operation that is currently executing
public final func finish() {
if !isFinished { state = .finished }
}
}
As a very minor observation, your code specified a GET request with JSON parameter encoding. That doesn’t make sense: GET requests have no body in which JSON could be included; they only use URL encoding. Besides, you’re not passing any parameters anyway.
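If you ever do need to send parameters with a GET, they belong in the URL query string. A sketch in the same Alamofire 4 style as the code above (the URL and query parameter are placeholders):
import Alamofire

let urlString = "https://example.com/search"          // placeholder URL
let headers: HTTPHeaders = ["Accept": "text/html"]
let parameters: Parameters = ["q": "swift"]           // placeholder query parameter

// URLEncoding appends ?q=swift to the URL; there is no request body.
Alamofire.request(urlString,
                  method: .get,
                  parameters: parameters,
                  encoding: URLEncoding.default,
                  headers: headers)
    .responseJSON { response in
        print(response.result)
    }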
I have an array of 'updateBlocks' (closures) that I use in a singleton class to notify any observers (UIViewControllers, etc) when data updates.
I am wondering what the best way to remove the observer would be so that it is not executed when the observer is deallocated (or no longer wants updates).
Here is my current setup:
MySingleton Class
var updateBlock: (() -> ())? {
didSet {
self.updateBlocks.append(updateBlock!)
self.updateBlock!() // Call immediately to give initial data
}
}
var updateBlocks = [() -> ()]()
func executeUpdateBlocks() {
for block in updateBlocks {
block()
}
}
MyObserver Class
MySingleton.shared.updateBlock = {
...handle updated data...
}
MySingleton.shared.updateBlock = nil // How to properly remove???
Your singleton design has some problems.
Having updateBlock be a variable whose didSet method appends a block to your updateBlocks array is bad design.
I would suggest getting rid of the updateBlock var, and instead defining an addUpdateBlock method and a removeAllUpdateBlocks method:
func addUpdateBlock(_ block: @escaping () -> ()) {
updateBlocks.append(block)
}
func removeAllUpdateBlocks() {
updateBlocks.removeAll()
}
func executeUpdateBlocks() {
for block in updateBlocks {
block()
}
}
If you want to remove single blocks, then you'll need some way to keep track of them. As rmaddy says, you would need some sort of ID for each block. You could refactor your container for your blocks to be a dictionary and use sequential integer keys. When you add a new block, your addUpdateBlock function could return the key:
var updateBlocks = [Int: () -> ()]()
var nextBlockID: Int = 0
func addUpdateBlock(_ block: @escaping () -> ()) -> Int {
updateBlocks[nextBlockID] = block
let result = nextBlockID
nextBlockID += 1
//Return the block ID of the newly added block
return result
}
func removeAllUpdateBlocks() {
updateBlocks.removeAll()
}
func removeBlock(id: Int) -> Bool {
    if updateBlocks[id] == nil {
        return false
    } else {
        updateBlocks[id] = nil
        return true
    }
}
func executeUpdateBlocks() {
    for (_, block) in updateBlocks {
        block()
    }
}
If you save your blocks in a dictionary then they won't be executed in any defined order.
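If the order does matter, the sequential integer IDs make insertion order easy to recover; a small sketch of an order-preserving variant of executeUpdateBlocks (same updateBlocks dictionary as above):
// Sketch: iterate the integer IDs in ascending order to run the blocks
// in the order they were added.
func executeUpdateBlocksInOrder() {
    for id in updateBlocks.keys.sorted() {
        updateBlocks[id]?()
    }
}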
That's a very confusing API. From the client's point of view you are setting the value of a single block. But the implementation actually adds that block to an array and then immediately calls that block. And why would you force-unwrap the optional block?
Since you want to support several observers and provide the ability to remove observers, you really should have addBlock and removeBlock methods in your singleton. Then the API and its functionality are clear.
The trick is how to provide an API that lets an observer tell the singleton to remove a specific block. I would model the API after how it is done in the NotificationCenter class where the addBlock method returns some generated token. That token is then passed to the removeBlock method.
The implementation would likely be a dictionary keyed on the token and the value is the block. The token can be a UUID or some other generated, unique opaque value. That makes the addBlock and removeBlock methods simple. Then the executeBlocks method would iterate the values of the dictionary and call those blocks.
Here's one possible implementation:
class UpdateBlocks {
static let shared = UpdateBlocks()
var blocks = [UUID: () -> ()]()
private init() {
}
func addBlock(_ block: @escaping () -> ()) -> Any {
let token = UUID()
blocks[token] = block
return token
}
func removeBlock(_ token: Any) {
if let token = token as? UUID {
blocks[token] = nil
}
}
func executeBlocks() {
for (_, value) in blocks {
value()
}
}
}
let token = UpdateBlocks.shared.addBlock {
print("hello")
}
UpdateBlocks.shared.executeBlocks() // Outputs "hello"
UpdateBlocks.shared.removeBlock(token)
UpdateBlocks.shared.executeBlocks() // No output