DispatchGroup notify called before all leave() calls - iOS

I have an iOS application that runs multiple classes (each class in a separate thread), and I want to get a notification when all the classes finish their run.
So I have a BaseClass:
class BaseClass {
    var completeState: CompleteState

    init(completeState: CompleteState) {
        self.completeState = completeState
        self.completeState.dispatch.enter()
    }

    func start() { }
}
And this is the CompleteState class:
class CompleteState {
    let userId: String
    let dispatch = DispatchGroup()

    init(userId: String) {
        self.userId = userId
        dispatch.notify(queue: DispatchQueue.main) {
            print("Finish load all data")
        }
    }
}
And I have multiple classes that inherit from BaseClass, and all of them are in this form:
class ExampleClass1: BaseClass {
    override func start() {
        DispatchQueue.global().async {
            print("ExampleClass1 - Start running")
            // DOING SOME CALCULATIONS
            print("ExampleClass1 - Finish running")
            self.completeState.dispatch.leave()
        }
    }
}
This is the code that runs all of this:
public class RunAll {
    let completeState: CompleteState
    let classArray: [BaseClass]

    public init(userId: String) {
        self.completeState = CompleteState(userId: userId)
        classArray = [ExampleClass1(completeState: completeState),
                      ExampleClass2(completeState: completeState),
                      ExampleClass3(completeState: completeState),
                      ExampleClass4(completeState: completeState),
                      ExampleClass5(completeState: completeState),
                      ExampleClass6(completeState: completeState),
                      ExampleClass7(completeState: completeState),
                      ExampleClass8(completeState: completeState),
                      ExampleClass9(completeState: completeState)]
        startClasses()
    }

    private func startClasses() {
        for foo in classArray {
            foo.start()
        }
    }
}
And the problem is that the dispatch.notify block is called before all the classes finish their work. Any idea what the problem is?

The documentation for notify states:
This function schedules a notification block to be submitted to the specified queue when all blocks associated with the dispatch group have completed. If the group is empty (no block objects are associated with the dispatch group), the notification block object is submitted immediately. When the notification block is submitted, the group is empty.
You are calling dispatch.notify() in your CompleteState init. At the time you call notify, the dispatch group is empty, because you haven't yet created a BaseClass subclass, let alone called start, which is where the dispatch group is entered.
Because you call notify when the dispatch group is empty, the block is submitted immediately.
It may be worth looking into OperationQueue, but if you want to use what you have, you can split the notify out from the init:
class CompleteState {
    let userId: String
    let dispatch = DispatchGroup()

    init(userId: String) {
        self.userId = userId
    }

    func registerCompletionBlock(handler: @escaping () -> Void) {
        self.dispatch.notify(queue: DispatchQueue.main, execute: handler)
    }
}
You would then provide the completion block after you call startClasses:
public init(userId: String) {
    self.completeState = CompleteState(userId: userId)
    classArray = [ExampleClass1(completeState: completeState),
                  ExampleClass2(completeState: completeState),
                  ExampleClass3(completeState: completeState),
                  ExampleClass4(completeState: completeState),
                  ExampleClass5(completeState: completeState),
                  ExampleClass6(completeState: completeState),
                  ExampleClass7(completeState: completeState),
                  ExampleClass8(completeState: completeState),
                  ExampleClass9(completeState: completeState)]
    startClasses()
    self.completeState.registerCompletionBlock {
        print("Finish load all data")
    }
}
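The ordering requirement can be demonstrated with a minimal, self-contained sketch (names here are illustrative, not from the question): because the group is entered before notify() is registered, the group is non-empty at registration time and the notify block cannot fire until leave() is called.

```swift
import Foundation

// Minimal sketch: enter() precedes notify(), so the group is non-empty
// when the notify block is registered, and the block only runs after leave().
let group = DispatchGroup()
let done = DispatchSemaphore(value: 0)

var workFinished = false
var workFinishedBeforeNotify = false

group.enter()  // group is now non-empty

group.notify(queue: .global()) {
    workFinishedBeforeNotify = workFinished
    done.signal()
}

DispatchQueue.global().async {
    workFinished = true
    group.leave()
}

done.wait()
print(workFinishedBeforeNotify)  // true
```

Had notify() been registered while the group was empty (as in the question's init), the block would have been submitted immediately instead.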

Related

Structured Concurrency difference between addTask and addTaskUnlessCancelled Swift

I am confused about the difference between calling addTask() and addTaskUnlessCancelled(). By definition, addTask() on your group will unconditionally add a new task to the group.
func testCancellation() async {
    do {
        try await withThrowingTaskGroup(of: Void.self) { group -> Void in
            group.addTaskUnlessCancelled {
                print("added")
                try await Task.sleep(nanoseconds: 1_000_000_000)
                throw ExampleError.badURL
            }
            group.addTaskUnlessCancelled {
                print("added")
                try await Task.sleep(nanoseconds: 2_000_000_000)
                print("Task is cancelled: \(Task.isCancelled)")
            }
            group.addTaskUnlessCancelled {
                print("added")
                try await Task.sleep(nanoseconds: 5_000_000_000)
                print("Task is cancelled: \(Task.isCancelled)")
            }
            group.cancelAll()
            try await group.next()
        }
    } catch {
        print("Error thrown: \(error.localizedDescription)")
    }
}
If you want to avoid adding tasks to a cancelled group, we have to use the addTaskUnlessCancelled() method. But even with group.cancelAll(), all of the tasks are added to the group. So what is the difference here, and what is the meaning of the returned value?
Short answer
The virtue of addTaskUnlessCancelled is that:
it will not even start the task if the group has been canceled; and
it allows you to add an early exit if the group has been canceled.
If you are interested, you can compare the source code for addTask with that of addTaskUnlessCancelled, right below it.
Long answer
I think the problem is best exemplified if we change the example so that the task group is processing requests over time (e.g., consuming an AsyncSequence).
Consider the following:
func a() async {
    await withTaskGroup(of: Void.self) { group in
        for await i in tickSequence() {
            group.addTask(operation: { await self.b() })
            if i == 3 {
                group.cancelAll()
            }
        }
    }
}

func b() async {
    let start = CACurrentMediaTime()
    while CACurrentMediaTime() - start < 3 { }
}

func tickSequence() -> AsyncStream<Int> {
    AsyncStream { continuation in
        Task {
            for i in 0 ..< 12 {
                try await Task.sleep(nanoseconds: NSEC_PER_SEC / 2)
                continuation.yield(i)
            }
            continuation.finish()
        }
    }
}
In this example, a is awaiting tick sequence events, launching b for each one. But we cancel the group after the fourth task is added. That results in the following:
So we can see the signpost ⓢ where the group was canceled after the fourth task was added, but it proceeded unaffected.
But that is because b is not responding to cancelation. So let's say you fix that:
func b() async {
    let start = CACurrentMediaTime()
    while CACurrentMediaTime() - start < 3 {
        if Task.isCancelled { return }
    }
}
You now see behavior like the following:
That is better, as b is now canceling, but a is still attempting to add tasks to the group even though it has been canceled. And you can see that b is repeatedly running (though at least it is now immediately terminating) and the task initiated by a is not finishing in a timely manner.
However, if you use group.addTaskUnlessCancelled, not only will it refrain from adding new tasks to the canceled group at all (i.e., it does not rely on the cancellation handling inside the task), but it also lets you exit when the group is canceled:
func a() async {
    await withTaskGroup(of: Void.self) { group in
        for await i in tickSequence() {
            guard group.addTaskUnlessCancelled(operation: { await self.b() })
            else { break }
            if i == 3 {
                group.cancelAll()
            }
        }
    }
}
That results in the desired behavior:
Obviously, this is a rather contrived example, explicitly constructed to illustrate the difference, but in many cases, the difference between addTask and addTaskUnlessCancelled is less stark. But hopefully the above illustrates the difference.
Bottom line, if you might be canceling groups in the middle of adding additional tasks to that group, addTaskUnlessCancelled is well advised. That having been said, if you do not cancel groups (e.g., canceling tasks is more common than canceling groups, IMHO), it is not entirely clear how often addTaskUnlessCancelled will really be needed.
Note, in the above, I removed all of the OSLog and signpost code used to generate the intervals and signposts shown in Xcode Instruments (as I did not want to distract from the question at hand). But, if you are interested, this was the actual code:
import os.log

private let log = OSLog(subsystem: "Test", category: .pointsOfInterest)

func a() async {
    await withTaskGroup(of: Void.self) { group in
        let id = log.begin(name: #function, "begin")
        defer { log.end(name: #function, "end", id: id) }
        for await i in tickSequence() {
            guard group.addTaskUnlessCancelled(operation: { await self.b() }) else { break }
            if i == 3 {
                log.event(name: #function, "Cancel")
                group.cancelAll()
            }
        }
    }
}

func b() async {
    let id = log.begin(name: #function, "begin")
    defer { log.end(name: #function, "end", id: id) }
    let start = CACurrentMediaTime()
    while CACurrentMediaTime() - start < 3 {
        if Task.isCancelled { return }
    }
}

func tickSequence() -> AsyncStream<Int> {
    AsyncStream { continuation in
        Task {
            for i in 0 ..< 12 {
                try await Task.sleep(nanoseconds: NSEC_PER_SEC / 2)
                continuation.yield(i)
            }
            continuation.finish()
        }
    }
}
With
extension OSLog {
    func event(name: StaticString = "Points", _ string: String) {
        os_signpost(.event, log: self, name: name, "%{public}@", string)
    }

    /// Manually begin an interval
    func begin(name: StaticString = "Intervals", _ string: String) -> OSSignpostID {
        let id = OSSignpostID(log: self)
        os_signpost(.begin, log: self, name: name, signpostID: id, "%{public}@", string)
        return id
    }

    /// Manually end an interval
    func end(name: StaticString = "Intervals", _ string: String, id: OSSignpostID) {
        os_signpost(.end, log: self, name: name, signpostID: id, "%{public}@", string)
    }

    func interval<T>(name: StaticString = "Intervals", _ string: String, block: () throws -> T) rethrows -> T {
        let id = OSSignpostID(log: self)
        os_signpost(.begin, log: self, name: name, signpostID: id, "%{public}@", string)
        defer { os_signpost(.end, log: self, name: name, signpostID: id, "%{public}@", string) }
        return try block()
    }
}

Image loading/caching off the main thread

I am writing a custom image fetcher to fetch the images needed for my collection view. Below is my image fetcher logic:
class ImageFetcher {
    /// Thread safe cache that stores `UIImage`s against corresponding URLs
    private var cache = Synchronised([URL: UIImage]())
    /// Holds in-flight requests so we can cancel them if needed
    /// Thread safe
    private var inFlightRequests = Synchronised([UUID: URLSessionDataTask]())

    func fetchImage(using url: URL, completion: @escaping (Result<UIImage, Error>) -> Void) -> UUID? {
        /// If the image is present in cache return it
        if let image = cache.value[url] {
            completion(.success(image))
        }
        let uuid = UUID()
        let dataTask = URLSession.shared.dataTask(with: url) { [weak self] data, response, error in
            guard let self = self else { return }
            defer {
                self.inFlightRequests.value.removeValue(forKey: uuid)
            }
            if let data = data, let image = UIImage(data: data) {
                self.cache.value[url] = image
                DispatchQueue.main.async {
                    completion(.success(image))
                }
                return
            }
            guard let error = error else {
                // no error, no data
                // trigger some special error
                return
            }
            // Task cancelled, do not send error code
            guard (error as NSError).code == NSURLErrorCancelled else {
                completion(.failure(error))
                return
            }
        }
        dataTask.resume()
        self.inFlightRequests.value[uuid] = dataTask
        return uuid
    }

    func cancelLoad(_ uuid: UUID) {
        self.inFlightRequests.value[uuid]?.cancel()
        self.inFlightRequests.value.removeValue(forKey: uuid)
    }
}
This is a block of code that provides the thread safety needed to access the cache:
/// Use to make a struct thread safe
public class Synchronised<T> {
    private var _value: T
    private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)

    public init(_ value: T) {
        _value = value
    }

    public var value: T {
        get {
            return queue.sync { _value }
        }
        set { queue.async(flags: .barrier) { self._value = newValue } }
    }
}
I am not seeing the desired scroll performance, and I suspect that is because my main thread is getting blocked when I try to access the cache (queue.sync { _value }). I am calling the fetchImage method from the collection view's cellForRowAt method, and I can't seem to find a way to dispatch it off the main thread, because I need the request's UUID so I can cancel the request if needed. Any suggestions on how to get this off the main thread, or on a better way to architect this?
I do not believe that your scroll performance is related to fetchImage. While there are modest performance issues in Synchronised, they are likely not enough to explain your issues. That having been said, there are several issues here, but blocking the main queue does not appear to be one of them.
The more likely culprit might be retrieving assets that are larger than the image view (e.g. large asset in small image view requires resizing which can block the main thread) or some mistake in the fetching logic. When you say “not seeing desired scroll performance”, is it stuttering or just slow? The nature of the “scroll performance” problem will dictate the solution.
A few unrelated observations:
Synchronised, used with a dictionary, is not thread-safe. Yes, the getter and setter for value are synchronized, but not the subsequent manipulation of that dictionary. It is also very inefficient (though likely not sufficiently inefficient to explain the problems you are having).
I would suggest not synchronizing the retrieval and setting of the whole dictionary, but rather make a synchronized dictionary type:
public class SynchronisedDictionary<Key: Hashable, Value> {
    private var _value: [Key: Value]
    private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)

    public init(_ value: [Key: Value] = [:]) {
        _value = value
    }

    // you don't need/want this
    //
    // public var value: [Key: Value] {
    //     get { queue.sync { _value } }
    //     set { queue.async(flags: .barrier) { self._value = newValue } }
    // }

    subscript(key: Key) -> Value? {
        get { queue.sync { _value[key] } }
        set { queue.async(flags: .barrier) { self._value[key] = newValue } }
    }

    var count: Int { queue.sync { _value.count } }
}
In my tests, in a release build, this was about 20 times faster. Plus it is thread-safe.
But, the idea is that you should not expose the underlying dictionary, but rather just expose whatever interface you need for the synchronization type to manage the dictionary. You will likely want to add additional methods to the above (e.g. removeAll or whatever), but the above should be sufficient for your immediate purposes. And you should be able to do things like:
let dictionary = SynchronisedDictionary<String, UIImage>()
dictionary["foo"] = image
imageView.image = dictionary["foo"]
print(dictionary.count)
Alternatively, you could just dispatch all updates to the dictionary to the main queue (see point 4 below), then you don't need this synchronized dictionary type at all.
You might consider using NSCache, instead of your own dictionary, to hold the images. You want to make sure that you respond to memory pressure (emptying the cache) or some fixed total cost limit. Plus, NSCache is already thread-safe.
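A minimal Foundation-only sketch of an NSCache-backed store follows (in the app the value type would be UIImage; the class name, the byte-count cost, and the ~50 MB limit are all assumptions for illustration, not the poster's code):

```swift
import Foundation

// Sketch: NSCache is thread-safe, evicts under memory pressure, and
// honors a total cost limit. Keys and values must be reference types,
// hence the NSURL/NSData bridging.
final class ImageDataCache {
    private let cache = NSCache<NSURL, NSData>()

    init(totalCostLimit: Int = 50 * 1024 * 1024) {  // assumed ~50 MB cap
        cache.totalCostLimit = totalCostLimit
    }

    subscript(url: URL) -> Data? {
        get { cache.object(forKey: url as NSURL).map { Data(referencing: $0) } }
        set {
            if let data = newValue {
                // Cost here is just the byte count; for UIImage you might
                // estimate the decoded bitmap size instead.
                cache.setObject(data as NSData, forKey: url as NSURL, cost: data.count)
            } else {
                cache.removeObject(forKey: url as NSURL)
            }
        }
    }
}

let cache = ImageDataCache()
let url = URL(string: "https://example.com/a.png")!
cache[url] = Data([1, 2, 3])
print(cache[url]?.count ?? 0)  // 3
```

Unlike the dictionary-based cache, there is nothing to clean up on a memory warning; NSCache handles eviction itself.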
In fetchImage, you have several paths of execution where you do not call the completion handler. As a matter of convention, you will want to ensure that the completion handler is always called. E.g., what if the caller started a spinner before fetching the image, intending to stop it in the completion handler? If you might not call the completion handler, then the spinner might never stop, either.
Similarly, where you do call the completion handler, you do not always dispatch it back to the main queue. I would either always dispatch back to the main queue (relieving the caller from having to do so) or just call the completion handler from the current queue, but only dispatching some of them to the main queue is an invitation for confusion.
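One way to honor both conventions — always call the completion handler, and always on a single known queue — is to funnel every exit path through one helper. A Foundation-only sketch (the function, the error case, and the injectable queue are illustrative assumptions, not the poster's API; in the app the queue would simply be .main):

```swift
import Foundation

enum FetchError: Error {
    case noDataNoError  // neither data nor error came back
}

// Every exit path goes through `deliver`, which guarantees the handler
// runs exactly once, on one known queue. The queue is injectable so the
// sketch can be exercised off the main run loop.
func handleResponse(data: Data?,
                    error: Error?,
                    on queue: DispatchQueue = .main,
                    completion: @escaping (Result<Data, Error>) -> Void) {
    func deliver(_ result: Result<Data, Error>) {
        queue.async { completion(result) }
    }
    if let error = error {
        deliver(.failure(error))  // includes cancellation; the caller can filter it
    } else if let data = data {
        deliver(.success(data))
    } else {
        deliver(.failure(FetchError.noDataNoError))
    }
}
```

The URLSession callback would then reduce to a single call to this helper, with no path that silently drops the completion handler.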
FWIW, you can create a unit test target and demonstrate the difference between the original Synchronised and the SynchronisedDictionary by testing a massively concurrent modification of the dictionary with concurrentPerform:
// this is not thread-safe if T is mutable
public class Synchronised<T> {
    private var _value: T
    private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)

    public init(_ value: T) {
        _value = value
    }

    public var value: T {
        get { queue.sync { _value } }
        set { queue.async(flags: .barrier) { self._value = newValue } }
    }
}

// this is a thread-safe dictionary ... assuming `Value` is not a mutable reference type
public class SynchronisedDictionary<Key: Hashable, Value> {
    private var _value: [Key: Value]
    private let queue = DispatchQueue(label: "com.sync", qos: .userInitiated, attributes: .concurrent)

    public init(_ value: [Key: Value] = [:]) {
        _value = value
    }

    subscript(key: Key) -> Value? {
        get { queue.sync { _value[key] } }
        set { queue.async(flags: .barrier) { self._value[key] = newValue } }
    }

    var count: Int { queue.sync { _value.count } }
}
class SynchronisedTests: XCTestCase {
    let iterations = 10_000

    func testSynchronised() throws {
        let dictionary = Synchronised([String: Int]())
        DispatchQueue.concurrentPerform(iterations: iterations) { i in
            let key = "\(i)"
            dictionary.value[key] = i
        }
        XCTAssertEqual(iterations, dictionary.value.count) // XCTAssertEqual failed: ("10000") is not equal to ("834")
    }

    func testSynchronisedDictionary() throws {
        let dictionary = SynchronisedDictionary<String, Int>()
        DispatchQueue.concurrentPerform(iterations: iterations) { i in
            let key = "\(i)"
            dictionary[key] = i
        }
        XCTAssertEqual(iterations, dictionary.count) // success
    }
}

Wait for DispatchQueue [duplicate]

How can I make my code wait until the task in DispatchQueue finishes? Does it need a completion handler or something?
func myFunction() {
    var a: Int?

    DispatchQueue.main.async {
        var b: Int = 3
        a = b
    }

    // wait until the task finishes, then print
    print(a) // this will contain nil, of course, because it
             // will execute before the code above
}
I'm using Xcode 8.2 and writing in Swift 3.
If you need to hide the asynchronous nature of myFunction from the caller, use a DispatchGroup to achieve this. Otherwise, use a completion block. Find samples for both below.
DispatchGroup Sample
You can either get notified when the group's enter() and leave() calls are balanced:
func myFunction() {
    var a = 0
    let group = DispatchGroup()
    group.enter()

    DispatchQueue.main.async {
        a = 1
        group.leave()
    }

    // does not wait. But the code in notify() is executed
    // after enter() and leave() calls are balanced
    group.notify(queue: .main) {
        print(a)
    }
}
or you can wait:
func myFunction() {
    var a = 0
    let group = DispatchGroup()
    group.enter()

    // avoid deadlocks by not using the .main queue here
    DispatchQueue.global(qos: .default).async {
        a = 1
        group.leave()
    }

    // wait ...
    group.wait()

    print(a) // you could also `return a` here
}
Note: group.wait() blocks the current queue (probably the main queue in your case), so you have to dispatch.async on another queue (like in the above sample code) to avoid a deadlock.
Completion Block Sample
func myFunction(completion: @escaping (Int) -> ()) {
    var a = 0

    DispatchQueue.main.async {
        let b: Int = 1
        a = b
        completion(a) // call completion after you have the result
    }
}

// on the caller's side:
myFunction { result in
    print("result: \(result)")
}
In Swift 3, there is no need for a completion handler when DispatchQueue finishes one task. Furthermore, you can achieve your goal in different ways.
One way is this:
var a: Int?

let queue = DispatchQueue(label: "com.app.queue")
queue.sync {
    for i in 0..<10 {
        print("Ⓜ️", i)
        a = i
    }
}
print("After Queue \(a)")
It will wait until the loop finishes, but in this case your main thread will block.
You can also do the same thing like this:
let myGroup = DispatchGroup()
myGroup.enter()
// Do your task
myGroup.leave() // when your task completes
myGroup.notify(queue: DispatchQueue.main) {
    // do your remaining work
}
One last thing: if you want to use a completion handler when your task completes with DispatchQueue, you can use DispatchWorkItem.
Here is an example how to use DispatchWorkItem:
let workItem = DispatchWorkItem {
    // Do something
}

let queue = DispatchQueue.global()
queue.async {
    workItem.perform()
}

workItem.notify(queue: DispatchQueue.main) {
    // Here you can notify your main thread
}
Swift 5 version of the solution
func myCriticalFunction() {
    var value1: String?
    var value2: String?

    let group = DispatchGroup()

    group.enter()
    // async operation 1
    DispatchQueue.global(qos: .default).async {
        // network call or some other async task
        value1 = ... // out of async task
        group.leave()
    }

    group.enter()
    // async operation 2
    DispatchQueue.global(qos: .default).async {
        // network call or some other async task
        value2 = ... // out of async task
        group.leave()
    }

    group.wait()

    print("Value1 \(value1), Value2 \(value2)")
}
Use a dispatch group:
dispatchGroup.enter()
FirstOperation(completion: { _ in
    dispatchGroup.leave()
})

dispatchGroup.enter()
SecondOperation(completion: { _ in
    dispatchGroup.leave()
})

dispatchGroup.wait() // waits here on this thread until the two operations complete executing
In Swift 5.5+ you can take advantage of Swift Concurrency, which allows you to return a value from a closure dispatched to the main thread:
func myFunction() async {
    var a: Int?
    a = await MainActor.run {
        let b = 3
        return b
    }
    print(a)
}

Task {
    await myFunction()
}
Swift 4
You can use an async function for these situations. When you use DispatchGroup(), sometimes a deadlock may occur.
var a: Int?

@objc func myFunction(completion: @escaping (Bool) -> ()) {
    DispatchQueue.main.async {
        let b: Int = 3
        a = b
        completion(true)
    }
}

override func viewDidLoad() {
    super.viewDidLoad()

    myFunction { (status) in
        if status {
            print(self.a!)
        }
    }
}
Somehow the dispatchGroup enter() and leave() commands above didn't work for my case.
Using sleep(5) in a while loop on the background thread worked for me, though. Leaving it here in case it helps someone else; it didn't interfere with my other threads.

Operation Queue not executing in order even after setting priority and dependency on Operations

I am making three API calls and want API1 to execute first; once it completes, API2 should execute, followed by API3.
I used an operation queue for this, adding dependencies between the operations. I tried setting priorities as well, but the API calls are not executing in order. Help me out with how to do this properly.
The code is like this:
let op1 = Operation()
op1.completionBlock = {
    self.APICall(urlString: self.url1)
}
op1.queuePriority = .veryHigh

let op2 = Operation()
op2.completionBlock = {
    self.APICall(urlString: self.url2)
}
op2.queuePriority = .high

let op3 = Operation()
op3.completionBlock = {
    self.APICall(urlString: self.url3)
}
op3.queuePriority = .normal

op2.addDependency(op1)
op3.addDependency(op2)

queue.addOperations([op1, op2, op3], waitUntilFinished: false)
I put the API call method in DispatchQueue.main.sync, like this:
func APICall(urlString: String) {
    let headers: HTTPHeaders = [
        "Accept": "text/html"
    ]
    print(urlString)
    DispatchQueue.main.sync {
        Alamofire.request(urlString.addingPercentEncoding(withAllowedCharacters: CharacterSet.urlQueryAllowed)!, method: .get, parameters: nil, encoding: JSONEncoding.default, headers: headers).responseJSON { response in
            // self.stopActivityIndicator()
            print(response.result.value)
            switch response.result {
            case .success:
                break
            case .failure(let error):
                break
            }
        }
    }
}
There are several issues:
If you’re trying to manage dependencies between operations, you cannot use the operation’s completionBlock for the code that the dependencies rely upon. The completion block isn't called until after the operation is complete (thus defeating the purpose of any dependencies).
So the following will not work as intended:
let queue = OperationQueue()

let op1 = Operation()
op1.completionBlock = {
    print("starting op1")
    Thread.sleep(forTimeInterval: 1)
    print("finishing op1")
}

let op2 = Operation()
op2.completionBlock = {
    print("starting op2")
    Thread.sleep(forTimeInterval: 1)
    print("finishing op2")
}

op2.addDependency(op1)
queue.addOperations([op1, op2], waitUntilFinished: false)
But if you define the operations like so, it will work:
let op1 = BlockOperation {
    print("starting op1")
    Thread.sleep(forTimeInterval: 1)
    print("finishing op1")
}

let op2 = BlockOperation {
    print("starting op2")
    Thread.sleep(forTimeInterval: 1)
    print("finishing op2")
}
(But this only works because I redefined operations that were synchronous. See point 3 below.)
It’s worth noting that, generally, you never use Operation directly. As the docs say:
An abstract class that represents the code and data associated with a single task. ...
Because the Operation class is an abstract class, you do not use it directly but instead subclass or use one of the system-defined subclasses (NSInvocationOperation or BlockOperation) to perform the actual task.
Hence the use of BlockOperation, above, or subclassing it as shown below in point 3.
One should not use priorities to manage the order in which operations execute if the order must be strictly honored. As the queuePriority docs say (emphasis added):
This value is used to influence the order in which operations are dequeued and executed...
You should use priority values only as needed to classify the relative priority of non-dependent operations. Priority values should not be used to implement dependency management among different operation objects. If you need to establish dependencies between operations, use the addDependency(_:) method instead.
So, if you queue 100 high priority operations and 100 default priority operations, you are not guaranteed that all of the high priority ones will start before the lower priority ones start running. It will tend to prioritize them, but not strictly so.
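To see that it is the dependency, not the priority, that guarantees ordering, here is a minimal sketch with synchronous block operations (names are illustrative):

```swift
import Foundation

// Sketch: op2 depends on op1, so it cannot start until op1 finishes,
// even on a concurrent OperationQueue and regardless of queuePriority.
let queue = OperationQueue()
var order: [String] = []

let op1 = BlockOperation { order.append("op1") }
let op2 = BlockOperation { order.append("op2") }
op2.addDependency(op1)  // this, not priority, enforces the order

queue.addOperations([op1, op2], waitUntilFinished: true)
print(order)  // ["op1", "op2"]
```

Swapping the dependency for `op1.queuePriority = .veryHigh` would make the same order likely, but not guaranteed.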
The first point is moot, as you are calling asynchronous methods. So you can't use a simple Operation or BlockOperation. If you don't want a subsequent network request to start until the prior one finishes, you'll want to wrap these network requests in a custom asynchronous Operation subclass, with all of the special KVO that entails:
class NetworkOperation: AsynchronousOperation {
    var request: DataRequest

    static var sessionManager: SessionManager = {
        let manager = Alamofire.SessionManager(configuration: .default)
        manager.startRequestsImmediately = false
        return manager
    }()

    init(urlString: String, parameters: [String: String]? = nil, completion: @escaping (Result<Any>) -> Void) {
        let headers: HTTPHeaders = [
            "Accept": "text/html"
        ]
        let string = urlString.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)!
        let url = URL(string: string)!
        request = NetworkOperation.sessionManager.request(url, parameters: parameters, headers: headers)

        super.init()

        request.responseJSON { [weak self] response in
            completion(response.result)
            self?.finish()
        }
    }

    override func main() {
        request.resume()
    }

    override func cancel() {
        request.cancel()
    }
}
Then you can do:
let queue = OperationQueue()

let op1 = NetworkOperation(urlString: ...) { result in
    ...
}
let op2 = NetworkOperation(urlString: ...) { result in
    ...
}
let op3 = NetworkOperation(urlString: ...) { result in
    ...
}

op2.addDependency(op1)
op3.addDependency(op2)

queue.addOperations([op1, op2, op3], waitUntilFinished: false)
And because that’s using AsynchronousOperation subclass (shown below), the operations won’t complete until the asynchronous request is done.
/// Asynchronous operation base class
///
/// This abstract class performs all of the necessary KVO of `isFinished` and
/// `isExecuting` for a concurrent `Operation` subclass. You can subclass this and
/// implement asynchronous operations. All you must do is:
///
/// - override `main()` with the task that initiates the asynchronous work;
///
/// - call `finish()` when the asynchronous task is done;
///
/// - optionally, periodically check `self.isCancelled` status, performing any clean-up
///   necessary and then ensuring that `finish()` is called; or
///   override `cancel()`, calling `super.cancel()` and then cleaning up
///   and ensuring `finish()` is called.

public class AsynchronousOperation: Operation {

    /// State for this operation.
    @objc private enum OperationState: Int {
        case ready
        case executing
        case finished
    }

    /// Concurrent queue for synchronizing access to `state`.
    private let stateQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".rw.state", attributes: .concurrent)

    /// Private backing stored property for `state`.
    private var _state: OperationState = .ready

    /// The state of the operation
    @objc private dynamic var state: OperationState {
        get { stateQueue.sync { _state } }
        set { stateQueue.sync(flags: .barrier) { _state = newValue } }
    }

    // MARK: - Various `Operation` properties

    open override var isReady: Bool { return state == .ready && super.isReady }
    public final override var isAsynchronous: Bool { return true }
    public final override var isExecuting: Bool { return state == .executing }
    public final override var isFinished: Bool { return state == .finished }

    // KVO for dependent properties

    open override class func keyPathsForValuesAffectingValue(forKey key: String) -> Set<String> {
        if ["isReady", "isFinished", "isExecuting"].contains(key) {
            return [#keyPath(state)]
        }
        return super.keyPathsForValuesAffectingValue(forKey: key)
    }

    // Start

    public final override func start() {
        if isCancelled {
            state = .finished
            return
        }
        state = .executing
        main()
    }

    /// Subclasses must implement this to perform their work and they must not call `super`. The default implementation of this function throws an exception.
    open override func main() {
        fatalError("Subclasses must implement `main`.")
    }

    /// Call this function to finish an operation that is currently executing
    public final func finish() {
        if !isFinished { state = .finished }
    }
}
As a very minor observation, your code specified a GET request with JSON parameter encoding. That doesn't make sense: GET requests have no body in which JSON could be included; they only use URL encoding. Besides, you're not passing any parameters anyway.
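For reference, query parameters on a GET belong in the URL itself. A Foundation-only sketch of URL encoding (the host and parameter names are made up for illustration):

```swift
import Foundation

// Sketch: building a GET URL with URL-encoded query parameters —
// the moral equivalent of Alamofire's URLEncoding.default for .get.
var components = URLComponents(string: "https://example.com/api")!
components.queryItems = [
    URLQueryItem(name: "user", value: "abc def"),  // space gets percent-encoded
    URLQueryItem(name: "page", value: "1")
]
let url = components.url!
print(url.absoluteString)  // https://example.com/api?user=abc%20def&page=1
```

With Alamofire, the same intent is expressed by passing `encoding: URLEncoding.default` (or omitting the encoding for `.get`) instead of `JSONEncoding.default`.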

Create serial queue for network calls generated by push notifications

I am receiving up to four push notifications for each event I am subscribed to. I have gone through everything related to my CloudKit subscriptions and notification registry and I am convinced this is an Apple problem. I have instead turned my attention toward correctly processing the notifications no matter how many I receive. Here is a simplified version of what I am doing:
func recievePrivatePush(_ pushInfo: [String: NSObject], completion: @escaping () -> Void) {
    let notification = CKNotification(fromRemoteNotificationDictionary: pushInfo)
    let alertBody = notification.alertBody

    if let queryNotification = notification as? CKQueryNotification {
        let recordID = queryNotification.recordID
        guard let body = queryNotification.alertBody else {
            return
        }
        if recordID != nil {
            switch body {
            case "Notification Type":
                let id = queryNotification.recordID
                switch queryNotification.queryNotificationReason {
                case .recordCreated:
                    DataCoordinatorInterface.sharedInstance.fetchDataItem(id!.recordName, completion: {
                        //
                    })
                    break
                default:
                    break
                }
            }
        }
    }
}
The fetching code looks something like this:
func fetchDataItem(_ id: String, completion: @escaping () -> Void) {
    if entityExistsInCoreData(id) { return }
    let db = CKContainer.default().privateCloudDatabase
    let recordID = CKRecordID(recordName: id)
    db.fetch(withRecordID: recordID) { (record, error) in
        if let topic = record {
            // Here I create and save the object to Core Data.
        }
        completion()
    }
}
All of my code works; the problem I am having is that when I receive multiple notifications, multiple fetch requests are started before the first Core Data entity is created, resulting in redundant Core Data objects.
What I would like to do is find a way to add the fetch requests to a serial queue so they are processed one at a time. I can put my request calls in a serial queue, but the callbacks always run asynchronously, so multiple fetch requests are still made before the first data object is persisted.
I have tried using semaphores and dispatch groups with a pattern that looks like this:
let semaphore = DispatchSemaphore(value: 1)

func recievePrivatePush(_ pushInfo: [String: NSObject], completion: @escaping () -> Void) {
    _ = semaphore.wait(timeout: .distantFuture)
    let notification = CKNotification(fromRemoteNotificationDictionary: pushInfo)
    let alertBody = notification.alertBody
    if let queryNotification = notification as? CKQueryNotification {
        let recordID = queryNotification.recordID
        guard let body = queryNotification.alertBody else {
            return
        }
        if recordID != nil {
            switch body {
            case "Notification Type":
                let id = queryNotification.recordID
                switch queryNotification.queryNotificationReason {
                case .recordCreated:
                    DataCoordinatorInterface.sharedInstance.fetchDataItem(id!.recordName, completion: {
                        semaphore.signal()
                    })
                    break
                default:
                    break
                }
            }
        }
    }
}
Once the above function is called for the second time and semaphore.wait is called, the execution of the first network request pauses, resulting in a frozen app.
Again, what I would like to accomplish is adding the asynchronous network requests to a queue so that they are made only one at a time, i.e. the first network call is completed before the second request is started.
Carl,
Perhaps you'll find your solution with dispatch groups; a few key expressions to look into:
let group = DispatchGroup()
group.enter()
... code ...
group.leave()
group.wait()
I use them to limit the number of HTTP requests I send out in a batch, waiting for the responses. Perhaps you could use them together with the suggestion in my comment. Watch this video too; dispatch groups are covered in here, and more.
https://developer.apple.com/videos/play/wwdc2016/720/
These simple classes helped me solve the problem:
class PushQueue {
    internal var pushArray: Array<String> = [String]()
    internal let pushQueue = DispatchQueue(label: "com.example.pushNotifications")

    public func addPush(_ push: CKPush) {
        pushQueue.sync {
            if pushArray.contains(push.id) {
                return
            } else {
                pushArray.append(push.id)
                processNotification(push: push)
            }
        }
    }

    internal func processNotification(push: CKPush) {
        PushInterface.sharedInstance.recievePrivatePush(push.userInfo as! [String: NSObject])
    }
}

class CKPush: Equatable {
    init(userInfo: [AnyHashable: Any]) {
        let ck = userInfo["ck"] as? NSDictionary
        let id = ck?["nid"] as? String
        self.id = id!
        self.userInfo = userInfo
    }

    var id: String
    var userInfo: [AnyHashable: Any]

    public static func == (lhs: CKPush, rhs: CKPush) -> Bool {
        return lhs.id == rhs.id
    }
}
Please ignore the sloppy force unwraps. They need to be cleaned up.
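For completeness, the original goal (one asynchronous fetch at a time) can also be reached by blocking a private serial queue — never the main queue — until each task's callback fires. A sketch under assumed names (enqueueFetch and the queue label are made up, not the poster's code):

```swift
import Foundation

// Each unit of async work is queued on a private serial queue; the queue
// is blocked with a DispatchGroup until the work's completion callback
// fires, so tasks run strictly one at a time.
let fetchQueue = DispatchQueue(label: "com.example.fetch.serial")

func enqueueFetch(_ work: @escaping (_ done: @escaping () -> Void) -> Void) {
    fetchQueue.async {
        let group = DispatchGroup()
        group.enter()
        work { group.leave() }  // the async task calls done() when finished
        group.wait()            // blocks only the private serial queue
    }
}
```

Each push handler would then call something like enqueueFetch { done in fetchDataItem(id) { done() } }, and the second fetch cannot start until the first one's completion runs.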
