In the accepted answer for Timer.scheduledTimer not firing, it is emphasised that the timer should be started on the main thread to ensure that it fires. However, if I do that then I often find the timer is slow to initialise, and it therefore fails in its purpose as a debouncer. Just wondering if there is something I am doing wrong, or whether there is a better way of doing this.
My problem (pseudocode at the bottom):
I use a JWT to authenticate my server calls, and I check it locally to see if it's expired before submitting the call. However, I don't want several network calls to notice the expired JWT all at once and submit several refresh requests, so I use a semaphore to ensure only one call at a time is checking/renewing the JWT. I also use a DispatchGroup to delay the original network call until after the checking/renewing is done.
However, if the refresh fails I want to avoid all the queued calls then trying it again. I don't want to block all refresh calls forever more with a boolean, so I thought I would create a scheduledTimer to block them temporarily. However, if I create it on the main thread, there's a delay before it's created, and the released network calls submit a few more refresh attempts before they're blocked.
Questions
Should I just create the timer on the local thread to ensure there's no delay? (I presume the main thread is occupied with some UI tasks, which is why the timer doesn't get created instantly.)
More generally, is there a better way of doing this? I suspect there is - I tried playing with adding work items to a queue and then cancelling them, but I began to worry about creating work items with out-of-date values, capturing things in closures, etc. (it was a while ago, I can't remember the details), so I went with my current bodge.
This might all be easier if I was using async/await, but our app supports all the way back to iOS 12, so I'm stuck with nests of completion handlers.
Hopefully this pseudocode is accurate enough to be helpful!
private static let requestQueue: DispatchQueue = DispatchQueue(label: "requestQueue", qos: .userInteractive, attributes: .concurrent)
public static let jwtValidityCheckSemaphore: DispatchSemaphore = DispatchSemaphore(value: 1)
private static var uglyHackTimer: Timer?

@objc private class func clearUglyHackTimer() {
    uglyHackTimer?.invalidate()
    uglyHackTimer = nil
}

class func myNetworkCall(for purpose: myPurposes) {
    let group = DispatchGroup()
    jwtValidityCheckSemaphore.wait()
    if uglyHackTimer?.isValid ?? false {
        jwtValidityCheckSemaphore.signal()
        return
    }
    group.enter()
    if jwtIsInvalid() {
        refreshJWT { success in
            if !success {
                DispatchQueue.main.async {
                    self.uglyHackTimer = Timer.scheduledTimer(timeInterval: TimeInterval(2), target: self, selector: #selector(clearUglyHackTimer), userInfo: nil, repeats: false)
                }
            }
            group.leave()
            jwtValidityCheckSemaphore.signal()
        }
    } else {
        group.leave()
        jwtValidityCheckSemaphore.signal()
    }

    // Make the original network call once the JWT check/refresh has finished
    let newNetworkRequest = DispatchWorkItem {
        // Blah, blah
    }
    group.notify(queue: requestQueue, work: newNetworkRequest)
}
Related
The popular Concurrent-Ruby library has a Concurrent::Event class that I find wonderful. It very neatly encapsulates the idea of, “Some threads need to wait for another thread to finish something before proceeding.”
It only takes three lines of code to use:
One to create the object
One to call .wait to start waiting, and
One to call .set when the thing is ready.
All the locks and booleans you’d need to use to create this out of other concurrency primitives are taken care of for you.
To quote some of the documentation, along with a sample usage:
Old school kernel-style event reminiscent of Win32 programming in C++.
When an Event is created it is in the unset state. Threads can choose to
#wait on the event, blocking until released by another thread. When one
thread wants to alert all blocking threads it calls the #set method which
will then wake up all listeners. Once an Event has been set it remains set.
New threads calling #wait will return immediately.
require 'concurrent-ruby'

event = Concurrent::Event.new

t1 = Thread.new do
  puts "t1 is waiting"
  event.wait
  puts "event occurred"
end

t2 = Thread.new do
  puts "t2 calling set"
  event.set
end

[t1, t2].each(&:join)
which prints output like the following
t1 is waiting
t2 calling set
event occurred
(Several different orders are possible because it is multithreaded, but ‘t2 calling set’ always comes out before ‘event occurred’.)
Is there something like this in Swift on iOS?
I think the closest thing to that is the new async/await syntax in Swift 5.5. There's no equivalent of event.set, but await waits for something asynchronous to finish. A particularly nice expression of concurrency is async let, which proceeds concurrently but then lets you pause to gather up all the results of the async let calls:
async let result1 = // do something asynchronous
async let result2 = // do something else asynchronous at the same time
// ... and keep going...
// now let's gather up the results
return await (result1, result2)
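For instance, here's a minimal sketch of that gather-up pattern; fetchProfile() and fetchAvatar() are hypothetical async functions standing in for whatever asynchronous work you have:
func fetchProfile() async -> String { "profile" }   // hypothetical async work
func fetchAvatar() async -> String { "avatar" }     // hypothetical async work

func loadUser() async -> (String, String) {
    async let profile = fetchProfile()   // starts running immediately
    async let avatar = fetchAvatar()     // runs concurrently with the first call
    // nothing blocks until we gather the results here
    return await (profile, avatar)
}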
You can achieve the result in your example using a Grand Central Dispatch DispatchSemaphore - this is a traditional counting semaphore. Each call to signal increments the semaphore. Each call to wait decrements the semaphore, and if the result is less than zero, it blocks until another call to signal brings the count back up.
let semaphore = DispatchSemaphore(value: 0)
let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

q1.async {
    print("q1 is waiting")
    semaphore.wait()
    print("event occurred")
}

q2.async {
    print("q2 calling signal")
    semaphore.signal()
}
Output:
q1 is waiting
q2 calling signal
event occurred
But this approach won't work if you have multiple threads that want to wait: since each call to wait decrements the semaphore, the other tasks would remain blocked.
For that you could use a DispatchGroup. You call enter before you start a task in the group and leave when it is done. You can use wait to block until the group is empty, and like your Ruby object, wait will not block if the group is already empty and multiple threads can wait on the same group.
let group = DispatchGroup()
let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

group.enter() // enter before anything can wait, so the group isn't already empty

q1.async {
    print("q1 is waiting")
    group.wait()
    print("event occurred")
}

q2.async {
    print("q2 calling leave")
    group.leave()
}
Output:
q1 is waiting
q2 calling leave
event occurred
You generally want to avoid blocking threads on iOS if possible as there is a risk of deadlocks and if you block the main thread your whole app will become non responsive. It is more common to use notify to schedule code to execute when the group becomes empty.
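For example, here is a minimal sketch of the notify-based version of the same idea (reusing the illustrative q1/q2 queues from above); nothing blocks, the closure is simply scheduled to run once the group becomes empty:
let group = DispatchGroup()
let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

group.enter()
q2.async {
    print("q2 calling leave")
    group.leave()
}

// scheduled, not blocking: runs on q1 once the group is empty
group.notify(queue: q1) {
    print("event occurred")
}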
I understand that your code is simply a contrived example, but depending on what you actually want to do and your minimum supported iOS requirements, there may be better alternatives.
DispatchGroup to execute code when several asynchronous tasks are complete using notify rather than wait
Combine to process asynchronous events in a pipeline (iOS 13+)
Async/Await (iOS 15+)
I have a requirement to download a large number of files - previously only one file could be downloaded at a time. The current design is such that when the user downloads a single file, a URLSession task is created and the progress/completion/failure is recorded using the URLSession delegate methods. My question is, how can I leave a dispatch group in this delegate method? I need to download 10 files at a time, and start the next 10 when the previous ten finish. Right now, if I leave the dispatch group in the delegate method, the dispatch group's wait waits forever. Here's what I've implemented so far:
self.downloadAllDispatchQueue.async(execute: {
    self.downloadAllDispatchGroup = DispatchGroup()
    let maximumConcurrentDownloads: Int = 10
    var concurrentDownloads = 0

    for i in 0..<files.count {
        if self.cancelDownloadAll {
            return
        }
        if concurrentDownloads >= maximumConcurrentDownloads {
            self.downloadAllDispatchGroup.wait()
            concurrentDownloads = 0
        }
        if let workVariantPart = libraryWorkVariantParts[i].workVariantPart {
            concurrentDownloads += 1
            self.downloadAllDispatchGroup.enter()
            // call method for download
        }
    }

    self.downloadAllDispatchGroup!.notify(queue: self.downloadAllDispatchQueue, execute: {
        DispatchQueue.main.async {
        }
    })
})
In the delegates:
func downloadDidFinish(_ notification: Notification) {
    if let dispatchGroup = self.downloadAllDispatchGroup {
        self.downloadAllDispatchQueue.async(execute: {
            dispatchGroup.leave()
        })
    }
}
Is this even possible? If not, how can I achieve this?
If downloadAllDispatchQueue is a serial queue, the code in your question will deadlock. When you call wait, it blocks the current thread until it receives the leave call(s) from another thread. If you try to dispatch the leave to a serial queue that is already blocked by that wait call, it will deadlock.
The solution is to not dispatch the leave to the queue at all. There is no need for that. Just call it directly from the current thread:
func downloadDidFinish(_ notification: Notification) {
    downloadAllDispatchGroup?.leave()
}
When downloading a large number of files, we often use a background session. See Downloading Files in the Background. We do this so downloads continue even after the user leaves the app.
When you start using background session, there is no need to introduce this “batches of ten” logic. The background session manages all of these requests for you. Layering on a “batches of ten” logic only introduces unnecessary complexities and inefficiencies.
Instead, we just instantiate a single background session and submit all of the requests, and let the background session manage the requests from there. It is simple, efficient, and offers the ability to continue downloads even after the user leaves the app. If you are downloading so many files that you feel like you need to manage them like this, it is just as likely that the end user will get tired of this process and may want to leave the app to do other things while the requests finish.
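As a rough sketch of what that might look like (the session identifier is arbitrary, and the delegate method shown is just the minimum; a real delegate would also record progress and failures):
import Foundation

final class DownloadManager: NSObject, URLSessionDownloadDelegate {
    // one background session for the whole app; the identifier just needs to be stable
    private lazy var session: URLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "com.example.downloadAll")
        return URLSession(configuration: config, delegate: self, delegateQueue: nil)
    }()

    func downloadAll(_ urls: [URL]) {
        // hand every request to the session and let it manage the scheduling
        for url in urls {
            session.downloadTask(with: url).resume()
        }
    }

    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // move the downloaded file out of its temporary location before returning
    }
}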
In my iOS app, I make a lot of web requests. When these requests succeed / fail, a delegate method in the view controller is triggered. The delegate method contains code that is responsible for updating the UI. In the following examples didUpdate(foo:) is the delegate method, and presentAlert(text:) is my UI update.
Without DispatchQueue, the code would like this:
func didUpdate(foo: Foo) {
    self.presentAlert(text: foo.text)
}

func presentAlert(text: String) {
    let alertController = ...
    self.present(alertController, animated: true)
}
When it comes to using DispatchQueue to make sure my UI will update quickly, I start to lose my ability to tell what's actually happening in the code. Is there any difference between the following two implementations?
First Way:
func didUpdate(foo: Foo) {
    self.presentAlert(text: foo.text)
}

func presentAlert(text: String) {
    let alertController = ...
    DispatchQueue.main.async {
        self.present(alertController, animated: true)
    }
}
Second way:
func didUpdate(foo: Foo) {
    DispatchQueue.main.async {
        self.presentAlert(text: foo.text)
    }
}

func presentAlert(text: String) {
    let alertController = ...
    self.present(alertController, animated: true)
}
Does it matter which approach I go with? It seems like having the DispatchQueue block inside of the presentAlert function is better, so I don't have to include DispatchQueue.main.async any time I want to call presentAlert?
Is it only necessary to explicitly send a block to the main queue when you (or a framework you are using) has "moved" yourself into a background queue?
If there are any external resources that may help my understanding of GCD, please let me know!
Does it matter which approach I go with? It seems like having the DispatchQueue block inside of the presentAlert function is better, so I don't have to include DispatchQueue.main.async any time I want to call presentAlert?
There is no difference between the two approaches. But the disadvantage of the second approach, like you said, is that you have to wrap every call to presentAlert in a DispatchQueue.main.async closure.
Is it only necessary to explicitly send a block to the main queue when you (or a framework you are using) has "moved" yourself into a background queue?
If your question here is whether there is going to be a problem if you dispatch to the main queue from the main queue, then the answer is no. If you dispatch asynchronously onto the main queue from within the main queue, all it does is schedule your block to run later in the run loop.
If there are any external resources that may help my understanding of GCD, please let me know!
There are many sources on the Internet to understand GCD better. Check out this Raywenderlich tutorial. It's a good place to start.
My recommendation would be: if you have a central class that handles all the web service calls, it might be better to invoke the completion callback closure on the main queue once you have parsed the data from the web service response. This way, you won't have to keep dispatching to the main queue in your view or view controller classes.
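For example, here's a minimal sketch of that idea; APIClient and fetchFoo are hypothetical names, and Foo is assumed to be a simple model type from your question:
struct Foo { let text: String }   // assumed model type

final class APIClient {
    func fetchFoo(completion: @escaping (Result<Foo, Error>) -> Void) {
        let url = URL(string: "https://example.com/foo")!   // hypothetical endpoint
        URLSession.shared.dataTask(with: url) { data, _, error in
            // parse on whatever background queue URLSession calls us on...
            let result: Result<Foo, Error>
            if let error = error {
                result = .failure(error)
            } else {
                result = .success(Foo(text: "parsed from \(data?.count ?? 0) bytes"))
            }
            // ...but always deliver the result on the main queue, so view
            // controllers can update the UI directly in their completion handlers
            DispatchQueue.main.async {
                completion(result)
            }
        }.resume()
    }
}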
I use GCD's DispatchWorkItem to keep track of my data that's being sent to firebase.
The first thing I do is declare 2 class properties of type DispatchWorkItem and then when I'm ready to send the data to firebase I initialize them with values.
The first property is named errorTask. When initialized, it contains code that cancels the firebaseTask, sets it to nil, and then prints "errorTask fired". A DispatchQueue.main.asyncAfter call will run it in 0.0000000001 seconds if the errorTask isn't cancelled before then.
The second property is named firebaseTask. When initialized, it contains a function that sends the data to Firebase. If the Firebase callback is successful, then errorTask is cancelled and set to nil, and "firebase callback was reached" is printed. I also check to see if the firebaseTask was cancelled.
The problem is that the code inside the errorTask always runs before the firebaseTask callback is reached. The errorTask code cancels the firebaseTask and sets it to nil, but for some reason the firebaseTask still runs. I can't figure out why.
The print statements support the fact that the errorTask runs first because
"errorTask fired" always gets printed before "firebase callback was reached".
How come the firebaseTask isn't getting cancelled and set to nil even though the errorTask makes those things happen?
Inside my actual app, what happens is that if a user is sending some data to Firebase, an activity indicator appears. Once the Firebase callback is reached, the activity indicator is dismissed and an alert is shown to the user saying it was successful. However, if there were no timer on the activity indicator and the callback were never reached, it would spin forever. The asyncAfter delay is set to 15 secs, and if the callback isn't reached by then, an error label is shown. 9 times out of 10 it works.
send data to FB
show activity indicator
callback reached so cancel errorTask, set it to nil, and dismiss activity indicator
show success alert.
But every once in a while
it would take longer then 15 secs
firebaseTask is cancelled and set to nil, and the activity indicator would get dismissed
the error label would show
the success alert would still appear
The errorTask code block dismisses the actiInd, shows the errorLabel, and cancels the firebaseTask and sets it to nil. Once the firebaseTask is cancelled and set to nil I assumed everything inside of it would stop also because the callback was never reached. This may be the cause of my confusion. It seems as if even though the firebaseTask is cancelled and set to nil, someRef?.updateChildValues(... is somehow still running and I need to cancel that also.
My code:
var errorTask: DispatchWorkItem?
var firebaseTask: DispatchWorkItem?

@IBAction func buttonPush(_ sender: UIButton) {
    // 1. initialize the errorTask to cancel the firebaseTask and set it to nil
    errorTask = DispatchWorkItem { [weak self] in
        self?.firebaseTask?.cancel()
        self?.firebaseTask = nil
        print("errorTask fired")
        // present alert that there is a problem
    }

    // 2. if the errorTask isn't cancelled in 0.0000000001 seconds then run the code inside of it
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.0000000001, execute: self.errorTask!)

    // 3. initialize the firebaseTask with the function to send the data to firebase
    firebaseTask = DispatchWorkItem { [weak self] in
        // 4. check whether the firebaseTask was cancelled and only run the code if it wasn't
        if self?.firebaseTask?.isCancelled != true {
            self?.sendDataToFirebase()
        }
        // I also tried it WITHOUT the isCancelled check, but the same thing happens
    }

    // 5. immediately perform the firebaseTask
    firebaseTask?.perform()
}

func sendDataToFirebase() {
    let someRef = Database.database().reference().child("someRef")
    someRef.updateChildValues(myDict(), withCompletionBlock: { (error, ref) in
        // 6. if the callback to firebase is successful then cancel the errorTask and set it to nil
        self.errorTask?.cancel()
        self.errorTask = nil
        print("firebase callback was reached")
    })
}
This cancel routine is not doing what I suspect you think it is. When you cancel a DispatchWorkItem, it performs no preemptive cancellation. It certainly has no bearing on the updateChildValues call. All it does is perform a thread-safe setting of the isCancelled property; if you were manually iterating through a loop, you could periodically check that property and exit prematurely if you saw that the task had been canceled.
As a result, checking isCancelled at the start of the task isn't a terribly useful pattern. If the task has not yet been created, there is nothing to cancel. If the task has been created and added to a queue, but canceled before the queue had a chance to start it, it will simply be canceled and never started, so you'll never get to your isCancelled test. And if the task has started, it has most likely already gotten past the isCancelled test before cancel was called.
Bottom line, trying to time the cancel request so that it is received precisely after the task has started but before it has gotten to the isCancelled test is an exercise in futility. You have a race that will be almost impossible to time perfectly. Besides, even if you did happen to time this perfectly, it merely demonstrates how ineffective the whole process is (only 1 in a million cancel requests will do what you intended).
Generally, if you had an asynchronous task that you wanted to cancel, you'd wrap it in an asynchronous custom Operation subclass and implement a cancel method that stops the underlying task. Operation queues simply offer more graceful patterns for canceling asynchronous tasks than dispatch queues do. But all of this presumes that the underlying asynchronous task offers a mechanism for canceling it, and I don't know if Firebase even offers a meaningful mechanism to do that. I certainly haven't seen it contemplated in any of their examples. So all of this may be moot.
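Just to illustrate the general shape of that pattern (not something I'm suggesting Firebase supports), here is a minimal sketch of an asynchronous Operation subclass whose cancel actually stops the underlying work; the NetworkOperation name and the use of a URLSession data task are assumptions for the example:
import Foundation

final class NetworkOperation: Operation {
    private let url: URL
    private var task: URLSessionDataTask?

    // asynchronous operations must manage their own KVO-compliant state
    private enum State: String {
        case ready = "isReady", executing = "isExecuting", finished = "isFinished"
    }
    private var state: State = .ready {
        willSet {
            willChangeValue(forKey: newValue.rawValue)
            willChangeValue(forKey: state.rawValue)
        }
        didSet {
            didChangeValue(forKey: oldValue.rawValue)
            didChangeValue(forKey: state.rawValue)
        }
    }

    override var isAsynchronous: Bool { true }
    override var isExecuting: Bool { state == .executing }
    override var isFinished: Bool { state == .finished }

    init(url: URL) {
        self.url = url
        super.init()
    }

    override func start() {
        guard !isCancelled else {
            state = .finished
            return
        }
        state = .executing
        task = URLSession.shared.dataTask(with: url) { [weak self] _, _, _ in
            // handle the response here, then mark the operation finished
            self?.state = .finished
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()   // actually stop the underlying asynchronous work
        if isExecuting {
            state = .finished
        }
    }
}
You would add instances of this to an OperationQueue and call cancel() on the operation (or cancelAllOperations() on the queue) to stop them.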
I'd suggest you step away from the specific code pattern in your question and describe what you are trying to accomplish. Let's not dwell on your particular attempted solution to your broader problem, but rather let's understand what the broader goal is, and then we can talk about how to tackle that.
As an aside, there are other technical issues in your example.
Specifically, I'm assuming you're running this on the main queue. So firebaseTask?.perform() runs the work item on the current queue immediately. But your DispatchQueue.main.asyncAfter(...) block can only run when whatever is currently running on the main queue is done. So, even though you specified a delay of 0.0000000001 seconds, it actually won't run until the main queue is free (namely, after your perform is done running on the main queue and you're well past the isCancelled test).
If you want to test this race between running the task and canceling the task, you need to perform the cancel on a different thread. For example, you could try:
weak var task: DispatchWorkItem?

let item = DispatchWorkItem {
    if task?.isCancelled ?? true {
        print("canceled")
    } else {
        print("not canceled in time")
    }
}

DispatchQueue.global().asyncAfter(deadline: .now() + 0.00001) {
    task?.cancel()
}

task = item

DispatchQueue.main.async {
    item.perform()
}
Now you can play with various delays and see the different behavior between a delay of 0.1 seconds and one of 0.0000000001 seconds. And you'll want to make sure the app has reached quiescence before you try this test (e.g. do it on a button press event, not in viewDidLoad).
But again, this will merely illustrate the futility of the whole exercise. You're going to have a really hard time catching the task between the time it started and before it checked the isCancelled property. If you really want to manifest the cancel logic in some repeatable manner, we're going to have to artificially make this happen:
weak var task: DispatchWorkItem?

let queue = DispatchQueue(label: "com.domain.app.queue") // create a queue for our test, as we never want to block the main thread
let semaphore = DispatchSemaphore(value: 0)

let item = DispatchWorkItem {
    // You'd never do this in a real app, but let's introduce a delay
    // long enough to catch the `cancel` after the task has started but
    // before it reaches the isCancelled test.
    //
    // You could sleep for some interval, or we can introduce a semaphore
    // and not proceed until it is signaled.

    print("starting")
    semaphore.wait() // wait for a signal before proceeding

    // now let's test whether it was cancelled or not
    if task?.isCancelled ?? true {
        print("canceled")
    } else {
        print("not canceled in time")
    }
}

DispatchQueue.global().asyncAfter(deadline: .now() + 0.5) {
    task?.cancel()
    semaphore.signal()
}

task = item

queue.async {
    item.perform()
}
Now, you'd never do this, but it just illustrates that isCancelled does work.
Frankly, you'd never use isCancelled like this. You would generally use isCancelled in some long-running process where you can periodically check its status and exit if it is true. But that's not the case in your situation.
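For what it's worth, that long-running pattern looks something like this minimal sketch (the chunked work is hypothetical):
var work: DispatchWorkItem!
work = DispatchWorkItem {
    for chunk in 0 ..< 1_000 {
        // bail out as soon as someone cancels us
        if work.isCancelled {
            print("stopped early at chunk \(chunk)")
            return
        }
        // ... process one chunk of the long-running job here ...
    }
    print("finished all chunks")
}

DispatchQueue.global(qos: .utility).async(execute: work)

// sometime later, e.g. from a cancel button:
work.cancel()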
The conclusion of all of this is that checking isCancelled at the start of a task is unlikely to ever achieve what you had hoped for.
I have 5 timers running with different time intervals. All these timers call the same function.
At certain times, one or more of the timers end up accessing the same method at once, and this crashes my app.
How can I implement an NSOperation queue (or similar) for this particular scenario?
Appreciate your help.
You can create a dispatch queue with a label; by default such a queue is serial.
let queue = DispatchQueue(label: "update")

// later, on your "other threads"
queue.sync {
    // perform any task you want
}
Use the above code in each of your timers. Because the queue is serial, only one timer's work can execute on the "update" queue at a time; the others will wait until it is free.
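A minimal sketch of how this might look with the timers (the shared counter and the timer intervals are made up for the example):
import Foundation

final class Updater {
    // serial queue: only one block runs on it at a time,
    // so access to sharedCounter is serialized
    private let queue = DispatchQueue(label: "update")
    private var sharedCounter = 0
    private var timers: [Timer] = []

    func start() {
        // five timers with different intervals, all calling the same method
        for interval in [0.1, 0.25, 0.5, 1.0, 2.0] {
            let timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
                self?.update()
            }
            timers.append(timer)
        }
    }

    private func update() {
        queue.sync {
            // only one timer's work can be in here at a time
            sharedCounter += 1
            print("counter is now \(sharedCounter)")
        }
    }
}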