Delphi - Terminate blocked thread [duplicate]

This question already has answers here:
Possible Duplicate: How to stop long executing threads gracefully?
Closed 13 years ago.
I have a background thread which needs to perform an operation. It works fine all of the time except in one case: when the resource is corrupted. When that happens, the thread gets blocked in the Load (of that resource) call in the Execute method.
When that happens, the thread won't respond to the Terminate method (called from the main thread) and stays blocked.
So my question is: how do I properly terminate the blocked thread (from the main thread)? And no, I cannot modify the class which loads the resource, nor can I know beforehand whether the resource is corrupted or not.

Look for the TerminateThread() WinAPI function.
Some useful explanation can be found here, or look at the MSDN documentation.
Of course, after terminating the thread you must check whether any resources allocated in it were left unfreed, and free them appropriately.
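For reference, here is a minimal Win32 C sketch of that last-resort pattern (in Delphi you would pass TThread.Handle rather than a handle from CreateThread; the 5-second timeout and the worker body are made up for illustration):

/* Build with any Windows C compiler, e.g. cl terminate_demo.c */
#include <windows.h>

/* Stands in for the thread that blocks forever inside the corrupted
   resource load. */
static DWORD WINAPI Worker(LPVOID param)
{
    (void)param;
    Sleep(INFINITE);            /* the stuck Load() call */
    return 0;
}

int main(void)
{
    HANDLE hThread = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    if (hThread == NULL)
        return 1;

    /* Give the worker a bounded amount of time to finish on its own. */
    if (WaitForSingleObject(hThread, 5000) == WAIT_TIMEOUT) {
        /* Last resort: the thread dies at an arbitrary instruction.
           Locks it holds stay locked; memory it owns leaks. */
        TerminateThread(hThread, 1);
        WaitForSingleObject(hThread, INFINITE);
    }
    CloseHandle(hThread);
    return 0;
}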
Update
Yes, using TerminateThread is bad practice (as noted in the comments). I agree with that opinion. But the recommendation "never use it, even if you really need to" is too strong from my point of view, and very theoretical. The real world is full of design flaws and buggy 3rd-party libraries.
The information given in the question is not enough to make the right decision about this concrete situation. E.g. it may be a temporary workaround with no alternatives, etc.
Therefore, from a theoretical point of view the right answer is: "There is no way to terminate the thread properly if you cannot control how the 'freezing' step in the background thread is processed."
From a practical point of view the right answer is: "There is no way to terminate the thread properly if you cannot control how the 'freezing' step in the background thread is processed. But if you realize that you can't, and still need such functionality, use the TerminateThread() API call."
About TerminateThread vs. TerminateProcess:
- Creating/terminating a process requires more resources than creating/terminating a thread.
- Creating/terminating a process is more complicated => more room for bugs.
- TerminateProcess doesn't terminate immediately; it waits for pending I/O operations to complete (MSDN) => not a choice for scenarios where a remote shared folder becomes unavailable while reading, and other similar I/O scenarios.
- Creating and terminating a process requires more user privileges than creating a thread; compare MSDN here and here.
About resource freeing:
The thread stack is freed automatically when the thread is terminated (as mentioned in MSDN). The resources in question are primarily those allocated by the main thread for communication with the background thread, e.g. memory structures, mutexes, etc.

Related

Why does MacOS/iOS *force* the main thread to be the UI thread, and are there any workarounds?

First off, I'd like to clarify that I'm not talking about concurrency here. I fully understand that having multiple threads modify the UI at the same time is bad and can give race conditions, deadlocks, bugs etc., but that's separate from my question.
I'd like to know why MacOS/iOS forces the main thread (ID 0, first thread, whatever) to be the thread on which the GUI must be used/updated/created on.
see here, related:
on OSX/iOS the GUI must always be updated from the main thread, end of story.
I understand that you only ever want a single thread doing the actual updating of the GUI, but why does that thread have to be ID 0?
(this is background info, TLDR below)
In my case, I'm making a rust app that uses a couple of threads to do things:
engine - does processing and calculations
ui - self explanatory
program/main - monitors other threads and generally synchronizes things
I'm currently doing something semi-unsafe and creating the UI on its own thread, which works since I'm on Windows, but the API is explicitly marked as BAD to use, and it's not cross-compatible with MacOS/iOS for obvious reasons (and I want it to be as compatible as possible).
The UI/engine threads (there may be more in the future) are semi-unstable and could crash/exit early, outside of my control (external code). This has happened before, and so I want a graceful shutdown if anything goes wrong, hence the 'main' thread monitoring (among other things it does).
I am aware that I could just make Thread 0 the UI thread and move the program to another thread, but the app will immediately quit when the main thread quits, which means that if the UI crashes the whole thing just aborts (and I don't want this). Essentially, I need my main function on the main thread, since I know it won't suddenly exit and abort the whole app abruptly.
TL;DR
Overall, I'd like to know three things
Why does MacOS/iOS enforce the GUI being on Thread 0 (ignoring the thread-safety concerns outlined above)?
Are there any ways to bypass this (use a different thread for GUI), or will I simply need to sacrifice those platforms (and possible others I'm unaware of)?
Would it be possible to do something like have the UI run as a separate process, and have it share some memory/communicate with the main process, using safe, simple rust?
p.s. I'm aware of this question, it's relevant but doesn't really answer my questions.
Why does MacOS/iOS enforce the GUI being on Thread 0?
Because it's been that way for over 30 years now (since NeXTSTEP), and changing it would break just about every program out there, since almost every Cocoa app assumes this, and relies on it regularly, not just for the main thread, but also the main runloop, the main dispatch group, and now the main actor. External UI events (which come from other processes like the window manager) are delivered on thread 0. NSDistributedNotifications are delivered on thread 0. Signal handling, the list goes on. Yes, it is certainly possible for Darwin (which underlies Cocoa) to be rewritten to allow this. That's not going to happen. I'm not sure what other answer you want.
Would it be possible to do something like have the UI run as a separate process, and have it share some memory/communicate with the main process, using safe, simple rust?
Absolutely. See XPC, which is explicitly for this purpose (communicating, not sharing memory; don't share memory, that's a mess). See sys-xpc for the Rust interface.
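For illustration, here is a minimal sketch of the C XPC client API that sys-xpc wraps (the service name "com.example.engine" is hypothetical; a real service would be a launchd-registered helper, and error handling is omitted):

/* clang -fblocks xpc_client.c -o xpc_client   (macOS only) */
#include <xpc/xpc.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    xpc_connection_t conn = xpc_connection_create_mach_service(
        "com.example.engine", NULL, 0);

    /* Replies and errors from the other process arrive here. */
    xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
        if (xpc_get_type(event) == XPC_TYPE_DICTIONARY) {
            const char *status = xpc_dictionary_get_string(event, "status");
            if (status)
                printf("engine said: %s\n", status);
        }
    });
    xpc_connection_resume(conn);

    /* Communicate by message passing instead of shared memory. */
    xpc_object_t msg = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(msg, "command", "render-frame");
    xpc_connection_send_message(conn, msg);
    xpc_release(msg);

    dispatch_main();            /* park the main thread, service events */
}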

Why is it the programmer’s responsibility to call things on the main thread?

Why is it the responsibility of the programmer to call UI related methods on the main thread with:
DispatchQueue.main.async {}
Theoretically, couldn’t this be left up to the compiler or some other agent to determine?
The actual answer is developer inertia and grandfathering.
The Cocoa UI API is huge—nay, gigantic. It has also been in continuous development since the 1990's.
Back when I was a youth and there were no multi-core, 64-bit, anything, 99.999% of all applications ran on the main thread. Period. (The original Mac OS, pre-OS X, didn't even have threads.)
Later, a few specialized tasks could be run on background threads, but largely apps still ran on the main thread.
Fast forward to today, where it's trivial to dispatch thousands of tasks for background execution and CPUs can run 30 or more concurrent threads, and it's easy to say "hey, why doesn't the compiler/API/OS handle this main-thread thing for me?" But what's nigh on impossible is re-engineering four decades of Cocoa code and apps to make that work.
There are, I'm going to say, hundreds of millions of lines of code that all assume UI calls are executing on the main thread. As others have pointed out, there is no clever switch or pre-processor that's going to undo all of those assumptions, fix all of those potential deadlocks, etc.
(Heck, if the compiler could figure this kind of stuff out, we wouldn't even have to write multi-threaded code; you'd just let the compiler slice up your code so it runs concurrently.)
Finally, such a change just isn't worth the effort. I do Cocoa development full time, and the number of times I have to deal with the "update control from a background thread" problem is, at most, once a week or so. There's no development cost-benefit analysis that's going to dedicate a million man-hours to solving a problem that already has a straightforward solution.
Now if you were developing a new, modern UI API from scratch, you'd simply make the entire UI framework thread-safe and the whole question goes away. And maybe Apple has a brand new, redesigned-from-the-ground-up UI framework in a lab somewhere that does that. But that's the only way I see something like this happening.
You would be substituting one kind of frustration for another.
Suppose that all UI-related methods that require invocation on the main thread did so by:
using DispatchQueue.main.async: You would be hiding asynchronous behaviour, with no obvious way to "follow up" on the result. Code like this would now fail:
label.text = "new value"
assert(label.text == "new value")
You would have thought that the property text just harmlessly assigned some value. In fact, it enqueued a work item to asynchronously execute on the main thread. In doing so, you've broken the expectation that your system has reached its desired state by the time you've completed that line.
using DispatchQueue.main.sync: You would be hiding a potential for deadlock. Synchronous code on the main queue can be very dangerous, because it's easy to unintentionally block yourself (on the main thread) waiting for such work, causing deadlock.
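Here is a minimal C sketch of that second hazard, using the GCD C API that DispatchQueue wraps:

/* clang -fblocks main_sync_deadlock.c -o demo   (macOS) */
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    printf("about to dispatch_sync onto the main queue...\n");

    /* We are already on the main thread. dispatch_sync blocks the
       caller until the block has run, but the block can only run on
       the main queue, which is stuck waiting right here. This either
       deadlocks or traps with a libdispatch assertion, depending on
       the libdispatch version. */
    dispatch_sync(dispatch_get_main_queue(), ^{
        puts("never reached");
    });

    return 0;                   /* never reached */
}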
I think one way this could have been achieved is by having a hidden thread dedicated to the UI. All UI-related APIs would switch to that thread to do their work. Though I don't know how expensive that would be (each switch to that thread is probably no faster than waiting on a lock), and I can imagine there are lots of "fun" ways that'll get you to write deadlocking code.
Only in rare instances would the UI call anything on the main thread, except for user login timeouts for security. Most UI-related methods for any particular window are called within the thread that was started when the window was initialized.
I would rather manage my UI calls myself instead of the compiler, because as a developer I want control and do not want to rely on third-party "black boxes".
check https://developer.apple.com/documentation/code_diagnostics/main_thread_checker
and UPDATE UI FROM MAIN THREAD ONLY!!!

Async and CPU-Bound Operations with ASP.NET

One of the reasons async was praised for ASP.NET was that it followed Node.js's async platform, which led to more scalability by freeing up threads to handle subsequent requests, etc.
However, I've since read that wrapping CPU-bound code in Task.Run will have the opposite effect i.e. add even more overhead to a server and use more threads.
Apparently, only true async operations will benefit, e.g. making a web request or a call to a database.
So, my question is as follows. Is there any clear guidance out there as to when action methods should be async?
Mr Cleary is the one who opined about the fruitlessness of wrapping CPU-bound operations in async code.
Not exactly; there is a difference between wrapping CPU-bound code in async code in an ASP.NET app and doing that in, for example, a WPF desktop app. Let me use this statement of yours to build my answer upon.
You should categorize the async operations in your mind (in its simplest form) as such:
ASP.NET async methods, and among those:
CPU-bound operations,
Blocking operations, such as IO-operations
Async methods in a directly user-facing application, among those, again:
CPU-bound operations,
and blocking operations, such as IO-operations.
I assume that by reading Stephen Cleary's posts you've already come to understand that the way async operations work is that when you are doing a CPU-bound operation, that operation is passed on to a thread pool thread, and upon completion the program control returns to the thread it was started from (unless you use a .ConfigureAwait(false) call). Of course this only happens if there actually is an asynchronous operation to do, as I wondered myself in this question.
When it comes to blocking operations, on the other hand, things are a bit different. In this case, when the thread on which the code is executing asynchronously would block, the runtime notices it and "suspends" the work being done by that thread: it saves all state so the work can continue later on, and then that thread is used to perform other operations. When the blocking operation is ready (for example, the answer to a network call has arrived), the runtime knows that the operation you initiated is ready to continue, the state is restored, and your code can continue to run. (I don't know exactly how this is handled or noticed by the runtime, but I am trying to provide you with a high-level explanation, so it is not absolutely necessary.)
With these said, there is an important difference between the two:
In the CPU-bound case, even if you start an operation asynchronously, there is work to do; your code does not have to wait for anything.
In the IO-bound or blocking case, however, there might be some time during which your code simply cannot do anything but wait, and therefore it is useful that you can release the thread that has done the processing up to that point so it can do other work (perhaps process another request) in the meantime.
When it comes to a directly-user-facing application, for example, a WPF app, if you are performing a long-running CPU-operation on the main thread (GUI thread), then the GUI thread is obviously busy and therefore appears unresponsive towards the user because any interaction that is normally handled by the GUI thread just gets queued up in the message queue and doesn't get processed until the CPU-bound operation finishes.
In the case of an ASP.NET app, however, this is not an issue, because the application does not directly face the user, so the user does not see that it is unresponsive. The reason you don't gain anything by delegating the work to another thread is that doing so would still occupy a thread that could otherwise be doing other work; whatever needs to be done must be done, it cannot magically be done for you.
Think of it the following way: you are in a kitchen with a friend (you and your friend are one thread each). The two of you are preparing food that has been ordered. You can tell your friend to dice the onions, and even though you free yourself from dicing onions and can deal with seasoning the meat, your friend is now busy dicing the onions and cannot do the seasoning meanwhile. If you hadn't delegated the dicing of the onions (which you had already started) to him, but had let him do the seasoning, the same amount of work would have been done, except you would have saved a bit of time, because you wouldn't have needed to swap the working context (the cutting boards and knives in this example). So, simply put, you are just causing a bit of overhead by swapping contexts, whereas the issue of unresponsiveness is invisible to the user. (The customer neither sees nor cares which of you does which work, as long as he or she receives the result.)
With that said, the categorization I've outlined at the top could be refined by replacing ASP.NET applications with "whatever application has no directly visible interface towards the user and therefore cannot appear unresponsive towards them".
ASP.NET requests are handled in thread pool threads. So are CPU-bound async operations (Task.Run).
Dispatching async calls to a thread pool thread in ASP.NET will result in returning a thread pool thread to the thread pool and getting another one to run the code, and, in the end, returning that thread to the thread pool and getting yet another one to get back to the ASP.NET context. There will be a lot of thread switching, thread pool management and ASP.NET context management going on that will make that request (and the whole application) slower.
Most of the time, when one comes up with a reason to do this in a web application, it ends up being because the web application is doing something it shouldn't.

Need guarantee that signals will be delivered by offending thread

I'm working on a project and cannot find any documentation that verifies signal/thread behavior for the following case.
I need to ensure that signals generated in a thread will be delivered by the offending thread (namely, SIGSEGV). I've been told that POSIX doesn't ensure this behavior and that pthreads, for example, can generate signals in pthread 1 yet have the signal delivered in pthread 2. Therefore, I'm planning on using clone(2) to have finer control of signal/thread behavior, but still cannot find documentation in the man pages that ensures signals will be delivered by the offending thread.
Hardcore systems programmers: any documentation or insights would be very much appreciated!
The POSIX chapter on Signal Generation and Delivery states that:
At the time of generation, a determination shall be made whether the signal has been generated for the process or for a specific thread within the process. Signals which are generated by some action attributable to a particular thread, such as a hardware fault, shall be generated for the thread that caused the signal to be generated. Signals that are generated in association with a process ID or process group ID or an asynchronous event, such as terminal activity, shall be generated for the process.
A synchronous SIGSEGV caused by an incorrect memory access in a thread is clearly such a signal "...generated by some action attributable to a particular thread...", so they are guaranteed to be generated for the offending thread (which means handled or ignored by that thread, as appropriate).
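To see that guarantee in action on a given platform, here is a minimal C sketch (note the hedge in the handler: pthread_self() is not formally on the async-signal-safe list, though in practice it just reads thread-local state):

/* cc -pthread segv_thread.c -o segv_thread */
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_t faulting_thread;

static void on_segv(int sig)
{
    (void)sig;
    /* Only async-signal-safe work here: write() and _exit(). */
    if (pthread_equal(pthread_self(), faulting_thread)) {
        static const char msg[] = "SIGSEGV delivered on the faulting thread\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
    } else {
        static const char msg[] = "SIGSEGV delivered on another thread\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
    }
    _exit(1);   /* returning would just re-execute the faulting store */
}

static void *worker(void *arg)
{
    (void)arg;
    faulting_thread = pthread_self();
    *(volatile int *)0 = 42;    /* synchronous hardware fault, this thread */
    return NULL;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);      /* never returns; the handler calls _exit */
    return 0;
}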
I'm pretty sure this works, even if it isn't guaranteed. I base this observation on the fact that I used to work at a company where we routinely handled SIGSEGV in multithreaded programs and printed a stack trace to a log file (based on the current location). We did this on a variety of platforms: Windows, Linux, Tru64 UNIX, AIX, HP-UX, SunOS... and maybe one or two others that I can't recall. This (usually!) works because the location of the SIGSEGV is still on the current stack (the signal-handling mechanism just adds a few call frames on top of it).
It's only fair to mention that there's very little you should do in a signal handler, because not many functions are async-signal-safe. We ignored this and mostly got away with it, except, if I recall, on SunOS (or AIX), where we would get burned: the process would occasionally (and seemingly randomly) hard-exit inside the signal handler.
I believe the recommended approach is NOT to handle SIGSEGV, and to let the process exit with a core dump so you can analyze it later in a debugger.

When to use pthread_cancel and not pthread_kill?

When does one use pthread_cancel and not pthread_kill?
I would use neither of those two but that's just personal preference.
Of the two, pthread_cancel is the safer way to terminate a thread, since the thread is only supposed to be affected if it has cancellation enabled via pthread_setcancelstate() (and, with the default deferred cancelability type, only when it reaches a cancellation point).
In other words, it shouldn't disappear while holding resources in a way that might cause deadlock. The pthread_kill() call sends a signal to the specified thread, and this is a way to affect a thread asynchronously for reasons other than cancelling it.
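Here is a minimal C sketch of that deferred-cancellation behaviour (cancellation is enabled and deferred by default, so the thread only dies at a cancellation point, not at an arbitrary instruction):

/* cc -pthread cancel_demo.c -o cancel_demo */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    for (;;)
        sleep(1);               /* sleep() is a cancellation point */
}

int main(void)
{
    pthread_t t;
    void *result;

    pthread_create(&t, NULL, worker, NULL);
    pthread_cancel(t);          /* a request, not an immediate kill */
    pthread_join(t, &result);
    printf("worker %s\n",
           result == PTHREAD_CANCELED ? "was cancelled" : "exited normally");
    return 0;
}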
Most of my threads tend to be in loops doing work anyway, periodically checking flags to see if they should exit. That's mostly because I was raised in a world where pthread_kill() was dangerous and pthread_cancel() didn't exist.
I subscribe to the theory that each thread should totally control its own resources, including its execution lifetime. I've always found that to be the best way to avoid deadlock. To that end, I simply use mutexes for communication between threads (I've rarely found a need for true asynchronous communication) and a flag variable for termination.
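For what it's worth, here is a minimal C sketch of that loop-and-check style, using a C11 atomic flag for brevity (a mutex-protected bool would serve equally well):

/* cc -pthread flag_stop.c -o flag_stop */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool stop_requested = false;

static void *worker(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop_requested)) {
        /* ...one bounded unit of work... */
        usleep(100 * 1000);
    }
    puts("worker: stop flag seen, exiting cleanly");
    return NULL;                /* the thread releases its own resources */
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    atomic_store(&stop_requested, true);    /* ask; don't kill */
    pthread_join(t, NULL);
    return 0;
}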
You cannot "kill" a thread with pthread_kill(). If you try to send SIGTERM or SIGKILL to a thread with pthread_kill(), it will terminate the entire process.
I subscribe to the theory that the PROGRAMMER, and not the THREAD (nor the API designers), should totally control the software in all aspects, including which threads cancel which.
I once worked at a firm where we developed a server that used a pool of worker threads and one special master thread that had the responsibility to create, suspend, resume and terminate the worker threads at any time it wanted. Of course the threads used some sort of synchronization, but it was of our own design and not some API-enforced dogma. The system worked very well and efficiently!
This was under Windows. Then I tried to port it to Linux, and I stumbled over pthreads' stupid "theories" about how wrong it is to suspend another thread, etc. So I had to abandon pthreads and use the native Linux system calls (clone()) directly to implement the threads for our server.
