One of the reasons async was praised for ASP.NET was that it followed Node.js's async model, which improved scalability by freeing up threads to handle subsequent requests, and so on.
However, I've since read that wrapping CPU-bound code in Task.Run will have the opposite effect, i.e. it adds even more overhead to a server and uses more threads.
Apparently, only truly async operations benefit, e.g. making a web request or a call to a database.
So, my question is as follows. Is there any clear guidance out there as to when action methods should be async?
Mr Cleary is the one who opined about the fruitlessness of wrapping CPU-bound operations in async code.
Not exactly. There is a difference between wrapping CPU-bound code in async code in an ASP.NET app and doing the same in, for example, a WPF desktop app. Let me use this statement of yours to build my answer upon.
You should categorize async operations in your mind (in their simplest form) as follows:
ASP.NET async methods, and among those:
CPU-bound operations,
blocking operations, such as IO operations;
async methods in a directly user-facing application, and among those, again:
CPU-bound operations,
and blocking operations, such as IO operations.
I assume that from reading Stephen Cleary's posts you already understand how async operations work: when you run a CPU-bound operation asynchronously, it is passed to a thread pool thread, and upon completion, control returns to the context it was started from (unless you call .ConfigureAwait(false)). Of course, this only happens if there actually is an asynchronous operation to perform, something I wondered about myself in this question.
When it comes to blocking operations, on the other hand, things are a bit different. When the thread executing the asynchronous code would block, the runtime notices and "suspends" the work being done by that thread: it saves all the state so the work can continue later, and the thread is then used to perform other operations. When the blocking operation is ready (for example, the answer to a network call has arrived), the runtime knows that the operation you initiated can continue, so the state is restored and your code resumes running. (I don't know exactly how the runtime notices this, but that detail isn't necessary for a high-level explanation.)
With that said, there is an important difference between the two:
In the CPU-bound case, even though you start the operation asynchronously, there is always work to do; your code never has to wait for anything.
In the IO-bound or blocking case, however, there may be stretches of time during which your code can do nothing but wait, so it is useful to release the thread that has done the processing up to that point and let it do other work (perhaps process another request) in the meantime.
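To make the distinction concrete, here is a minimal Swift sketch of the two kinds of operation (the original discussion is about .NET; the endpoint URL and function names here are purely illustrative):

import Foundation

// CPU-bound: a thread stays busy for the entire duration of the work.
func checksum(of data: Data) -> Int {
    data.reduce(into: 0) { $0 = $0 &* 31 &+ Int($1) } // pure computation, no waiting
}

// IO-bound: the thread is released while the request is in flight,
// so nothing sits blocked waiting on the network.
func fetchUserData() async throws -> Data {
    let url = URL(string: "https://example.com/user")! // hypothetical endpoint
    let (body, _) = try await URLSession.shared.data(from: url)
    return body
}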
When it comes to a directly user-facing application, for example a WPF app, if you perform a long-running CPU operation on the main (GUI) thread, then the GUI thread is obviously busy, and the app appears unresponsive to the user: any interaction normally handled by the GUI thread just queues up in the message queue and isn't processed until the CPU-bound operation finishes.
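The usual remedy in a UI app is to push the CPU-bound work off the main thread and hop back only for the UI update. A minimal Swift sketch of that pattern (the label type and computation are hypothetical stand-ins, since the original example is WPF):

import Foundation

// Hypothetical stand-ins for real UI pieces.
final class FakeLabel { var text = "" }
let resultLabel = FakeLabel()

func expensiveComputation() -> Int {
    (0..<10_000_000).reduce(0, +) // simulated CPU-bound work
}

@MainActor
func buttonTapped() {
    Task.detached(priority: .userInitiated) {
        let result = expensiveComputation()   // runs on a background thread; the UI thread stays free
        await MainActor.run {
            resultLabel.text = "\(result)"    // hop back to the main thread for the UI update
        }
    }
}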
In the case of an ASP.NET app, however, this is not an issue, because the application does not directly face the user, so they never see it being unresponsive. The reason you don't gain anything by delegating the work to another thread is that doing so would still occupy a thread that could otherwise be doing other work; whatever needs to be done must be done, and it cannot magically be done for free.
Think of it the following way: you are in a kitchen with a friend (you and your friend are each a thread). The two of you are preparing the food being ordered. You can tell your friend to dice up the onions, and even though that frees you to deal with seasoning the meat, your friend is now busy dicing onions and cannot season the meat in the meantime. If you hadn't delegated the dicing (which you had already started) but instead let him do the seasoning, the same total work would have been done, except you would have saved a bit of time by not swapping working contexts (the cutting boards and knives in this example). So, simply put, you are just adding a bit of overhead by swapping contexts, while the unresponsiveness is invisible to the user. (The customer neither sees nor cares which of you does which work, as long as they receive the result.)
With that said, the categorization I outlined at the top could be refined by replacing "ASP.NET applications" with "any application that has no directly visible interface to the user and therefore cannot appear unresponsive to them".
ASP.NET requests are handled in thread pool threads. So are CPU-bound async operations (Task.Run).
Dispatching async calls to a thread pool thread in ASP.NET will result in returning the current thread pool thread to the pool, getting another one to run the code, and, in the end, returning that thread to the pool and getting yet another to resume the ASP.NET context. There is a lot of thread switching, thread pool management, and ASP.NET context management going on, all of which makes that request (and the whole application) slower.
Most of the time, when someone comes up with a reason to do this in a web application, it turns out that the web application is doing something it shouldn't.
Related
First off, I'd like to clarify that I'm not talking about concurrency here. I fully understand that having multiple threads modify the UI at the same time is bad and can cause race conditions, deadlocks, bugs, etc., but that's separate from my question.
I'd like to know why macOS/iOS forces the main thread (ID 0, the first thread, whatever) to be the thread on which the GUI must be used/updated/created.
see here, related:
on OSX/iOS the GUI must always be updated from the main thread, end of story.
I understand that you only ever want a single thread doing the actual updating of the GUI, but why does that thread have to be ID 0?
(this is background info, TLDR below)
In my case, I'm making a rust app that uses a couple of threads to do things:
engine - does processing and calculations
ui - self-explanatory
program/main - monitors other threads and generally synchronizes things
I'm currently doing something semi-unsafe and creating the UI on its own thread, which works because I'm on Windows, but the API is explicitly marked as BAD to use, and it isn't compatible with macOS/iOS for obvious reasons (and I want the app to be as compatible as possible).
The UI/engine threads (there may be more in the future) are semi-unstable and could crash/exit early, outside of my control (external code). This has happened before, so I want a graceful shutdown if anything goes wrong, hence the 'main' thread monitoring (among the other things it does).
I am aware that I could just make thread 0 the UI thread and move the program logic to another thread, but the app quits immediately when the main thread quits, which means that if the UI crashes the whole thing just aborts (and I don't want that). Essentially, I need my main function on the main thread, since I know it won't suddenly exit and abort the whole app abruptly.
TL;DR
Overall, I'd like to know three things:
Why does macOS/iOS enforce the GUI being on thread 0 (ignoring the thread-safety concerns outlined above)?
Are there any ways to bypass this (use a different thread for the GUI), or will I simply need to sacrifice those platforms (and possibly others I'm unaware of)?
Would it be possible to do something like have the UI run as a separate process, and have it share some memory/communicate with the main process, using safe, simple rust?
p.s. I'm aware of this question, it's relevant but doesn't really answer my questions.
Why does macOS/iOS enforce the GUI being on Thread 0?
Because it's been that way for over 30 years now (since NeXTSTEP), and changing it would break just about every program out there, since almost every Cocoa app assumes this, and relies on it regularly, not just for the main thread, but also the main runloop, the main dispatch group, and now the main actor. External UI events (which come from other processes like the window manager) are delivered on thread 0. NSDistributedNotifications are delivered on thread 0. Signal handling, the list goes on. Yes, it is certainly possible for Darwin (which underlies Cocoa) to be rewritten to allow this. That's not going to happen. I'm not sure what other answer you want.
Would it be possible to do something like have the UI run as a separate process, and have it share some memory/communicate with the main process, using safe, simple rust?
Absolutely. See XPC, which is explicitly for this purpose (communicating, not sharing memory; don't share memory, that's a mess). See sys-xpc for the Rust interface.
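For a sense of what that looks like on the Cocoa side, here is a minimal hedged sketch of an XPC client using NSXPCConnection; the service name and protocol are hypothetical, and a real setup also needs the service target itself:

import Foundation

// Hypothetical protocol that both processes agree on.
@objc protocol EngineServiceProtocol {
    func compute(_ input: Int, withReply reply: @escaping (Int) -> Void)
}

// Client side: connect to a bundled XPC service by name and call it.
let connection = NSXPCConnection(serviceName: "com.example.EngineService") // illustrative name
connection.remoteObjectInterface = NSXPCInterface(with: EngineServiceProtocol.self)
connection.resume()

let proxy = connection.remoteObjectProxyWithErrorHandler { error in
    print("XPC error: \(error)") // e.g. the service crashed; decide whether to reconnect or shut down
} as? EngineServiceProtocol

proxy?.compute(21) { result in
    print("engine replied: \(result)")
}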
I tried multiple ways of wrapping a file read in a synchronous method call (including using multiple queues, specifying target queues, setting up an NSThread and signalling with NSCondition objects, even moving the allocation of the UIDocument to the background thread in the end, and also trying dispatch_sync on the background queue).
What consistently happened is that the completion handler for UIDocument.openWithCompletionHandler never executed, although the documentation indicates it should run on the same queue that initiated the openWithCompletionHandler call.
I figured this ultimately has something to do with control not being returned to the run loop by the outer/top-level method call. It would seem that regardless of what other queues or threads are set up, the dispatch system expects me to return from the outermost method call, or things get blocked. That, however, would defeat the whole synchronous design approach.
My use case requires synchronous file reads (with very small data sizes), and I'd prefer the convenience of UIDocument over moving to lower-level methods or introducing async patterns. I reckon UIDocument was designed for more conventional cases. I understand well enough the ubiquity - and in most cases the user-friendliness and efficiency - of async patterns, but in this case they would make both development and the user experience cumbersome.
I wonder if there is something else that could still be explored with dispatch queues (like manually consuming events from a queue, or creating a custom run loop) that could avoid this seemingly global synchronization effect.
EDIT: this is for an Audio Unit app extension. Instantiation is controlled by the platform, and a "half-initialized" state could become a problematic situation. It is pretty much standard in the industry to fully load the plugin before even allowing the host app to start playing any audio for example, not to mention starting to stream MIDI/automation events. (That's not to say there aren't extensions with crazy load times that could take another look at their design, but in most cases these are well justified in this domain).
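For what it's worth, the "custom run loop" idea from the question could be sketched roughly as follows: keep re-entering the main run loop until the completion handler fires. This is a fragile illustration of the idea, not a recommendation:

import UIKit

// Fragile sketch: spin the main run loop while waiting for UIDocument's
// completion handler, which is delivered on the queue that called open.
// The timeout guards against spinning forever if the handler never fires.
func openSynchronously(_ document: UIDocument, timeout: TimeInterval = 5.0) -> Bool {
    var finished = false
    var succeeded = false
    document.open { ok in
        succeeded = ok
        finished = true
    }
    let deadline = Date(timeIntervalSinceNow: timeout)
    while !finished && Date() < deadline {
        // Hand control back to the run loop briefly so the handler can fire.
        RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.05))
    }
    return finished && succeeded
}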
Why is it the responsibility of the programmer to call UI-related methods on the main thread with:
DispatchQueue.main.async {}
Theoretically, couldn’t this be left up to the compiler or some other agent to determine?
The actual answer is developer inertia and grandfathering.
The Cocoa UI API is huge—nay, gigantic. It has also been in continuous development since the 1990s.
Back when I was a youth and there were no multi-core, 64-bit, anything, 99.999% of all applications ran on the main thread. Period. (The original Mac OS, pre-OS X, didn't even have threads.)
Later, a few specialized tasks could be run on background threads, but largely apps still ran on the main thread.
Fast forward to today: it's trivial to dispatch thousands of tasks for background execution, CPUs can run 30 or more concurrent threads, and it's easy to say "hey, why doesn't the compiler/API/OS handle this main-thread thing for me?" But what's nigh on impossible is re-engineering four decades of Cocoa code and apps to make that work.
There are—I'm going to say—hundreds of millions of lines of code that all assume UI calls execute on the main thread. As others have pointed out, there is no clever switch or pre-processor that's going to undo all of those assumptions, fix all of those potential deadlocks, etc.
(Heck, if the compiler could figure this kind of stuff out we wouldn't even have to write multi-threaded code; you'd just let the compiler slice up your code so it runs concurrently.)
Finally, such a change just isn't worth the effort. I do Cocoa development full time, and I have to deal with the "update a control from a background thread" problem at most once a week or so. No development cost-benefit analysis is going to dedicate a million man-hours to solving a problem that already has a straightforward solution.
Now, if you were developing a new, modern UI API from scratch, you'd simply make the entire UI framework thread-safe and the whole question would go away. And maybe Apple has a brand new, redesigned-from-the-ground-up UI framework in a lab somewhere that does exactly that. But that's the only way I see something like this happening.
You would be substituting one kind of frustration for another.
Suppose that all UI-related methods that require invocation on the main thread did so by:
using DispatchQueue.main.async: You would be hiding asynchronous behaviour, with no obvious way to "follow up" on the result. Code like this would now fail:
label.text = "new value"
assert(label.text == "new value")
You would have thought that the property text just harmlessly assigned some value. In fact, it enqueued a work item to asynchronously execute on the main thread. In doing so, you've broken the expectation that your system has reached its desired state by the time you've completed that line.
using DispatchQueue.main.sync: You would be hiding a potential for deadlock. Synchronous code on the main queue can be very dangerous, because it's easy to unintentionally block the main thread waiting for work that itself needs the main thread, causing deadlock.
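The classic failure mode, as a short sketch:

import Dispatch

// If this function is called on the main thread, it deadlocks: the closure
// can't start until the main queue is free, but the main queue is blocked
// right here, waiting for the closure to finish.
func dangerousUpdate() {
    DispatchQueue.main.sync {
        // label.text = "new value"
    }
}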
I think one way this could have been achieved is by having a hidden thread dedicated to the UI. All UI-related APIs would switch to that thread to do their work. I don't know how expensive that would be (each switch to that thread is probably no faster than waiting on a lock), and I can imagine there are lots of "fun" ways that would get you to write deadlocking code.
Only in rare instances would the UI call anything on the main thread, except for user login timeouts for security. Most UI-related methods for any particular window are called on the thread that was started when the window was initialized.
I would rather manage my UI calls myself than leave it to the compiler, because as a developer I want control and do not want to rely on third-party 'black boxes'.
Check the Main Thread Checker (https://developer.apple.com/documentation/code_diagnostics/main_thread_checker), and update the UI from the main thread only!
As we know, Dart is a single-threaded language. So, according to the documentation, we can use Future/Stream to implement an async operation. It sends the time-consuming operation to the event queue.
What confuses me is where the event queue runs. Does it run on the Dart thread? If so, won't it block the app?
Another question: is the event queue a FIFO queue? Suppose I have two operations, one a networking request that needs 1 minute, and the other a click event. Both operations are sent to the event queue.
Will the click event be blocked by the networking request, because the queue is FIFO?
So where does the event queue run?
Thank you very much!
One thing to note is that asynchrony and multithreading are two different things. Dart uses Futures and async/await to achieve asynchronicity, but Dart is still an inherently single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
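As an analogy (in Swift rather than Dart, with illustrative timings): on a single cooperative executor, a suspended slow task does not hold up a later quick task.

// Swift analogy to Dart's event loop: both tasks are serialized on the main
// actor, but the slow "network" task suspends at its await, so the quick
// "click" task runs instead of waiting behind it.
@main
struct EventLoopDemo {
    static func main() async {
        Task { @MainActor in
            print("network request: started")
            try? await Task.sleep(nanoseconds: 2_000_000_000) // suspends; the actor is free
            print("network request: finished")
        }
        Task { @MainActor in
            print("click event: handled") // runs while the network task is suspended
        }
        try? await Task.sleep(nanoseconds: 3_000_000_000) // keep the demo alive long enough
    }
}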
As a side note, it's worth noting that Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.
Right now I have some older code I wrote years ago that allows an iOS app to queue up jobs (sending messages or submitting data to a back-end server, etc...) when the user is offline. When the user comes back online the tasks are run. If the app goes into the background or is terminated the queue is serialized and then loaded back when the app is launched again. I've subclassed NSOperationQueue and my jobs are subclasses of NSOperation. This gives me the flexibility of having a data structure provided for me that I can subclass directly (the operation queue) and by subclassing NSOperation I can easily requeue if my task fails (server is down, etc...).
I will very likely leave this as it is, because if it's not broke don't fix it, right? Also these are very lightweight operations and I don't expect in the current app I'm working on for there to be very many tasks queued at any given time. However I know there is some extra overhead with using NSOperation rather than using GCD directly.
I don't believe I could subclass a dispatch queue the way I can an NSOperationQueue, so there would be extra code overhead for me to maintain my own data structure and load it into and out of a dispatch queue each time the app is sent to the background, right? I'm also not sure how I'd handle requeueing a job if it fails. Right now, if I get an HTTP 500 response from the server, for example, my operation code sends a notification with a deep copy of the failed NSOperation object. My custom operation queue picks this notification up and adds the task back to itself. I'm not sure how, or if, I'd be able to do something similar with GCD. I would also need an easy way to cancel all operations or suspend the queue when network connectivity is lost, then reactivate it when network access is regained.
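For concreteness, the requeue-on-failure pattern described here might be sketched like this in Swift; the names and the notification-based mechanism are illustrative, not the asker's actual code (a real version would also cap retries):

import Foundation

extension Notification.Name {
    static let taskFailed = Notification.Name("TaskFailed") // illustrative name
}

// An operation that posts a fresh copy of itself on failure.
final class UploadOperation: Operation {
    let payload: Data
    init(payload: Data) { self.payload = payload }

    override func main() {
        guard !isCancelled else { return }
        let succeeded = performUpload() // placeholder for a synchronous HTTP call
        if !succeeded {
            NotificationCenter.default.post(
                name: .taskFailed,
                object: UploadOperation(payload: payload)) // deep copy, ready to requeue
        }
    }

    private func performUpload() -> Bool {
        false // pretend the server returned HTTP 500
    }
}

// A queue that listens for failed tasks and adds them back to itself.
final class RetryingQueue: OperationQueue {
    private var token: NSObjectProtocol?

    override init() {
        super.init()
        token = NotificationCenter.default.addObserver(
            forName: .taskFailed, object: nil, queue: nil) { [weak self] note in
                if let failed = note.object as? Operation {
                    self?.addOperation(failed)
                }
        }
    }
}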
Just hoping to get some thoughts, opinions and ideas from others who might have done something similar or are more familiar with GCD than I am.
Also worth noting I know there's some new background task support coming in iOS 7 but it will likely be a while before that will be my deployment target. I am also not sure yet if it would exactly do what I need, so at the moment just looking at the possibility of GCD.
Thanks.
If NSOperation vs submitting blocks to GCD ever shows up as measurable overhead, the problem isn't that you're using NSOperation, it's that your operations are far too granular. I would expect this overhead to be effectively unmeasurable in any real-world situation. (Sure, you could contrive a test harness to measure the overhead, but only by making operations that did effectively nothing.)
Use the highest level of abstraction that gets the job done. Move down only when hard data tells you that you should.