Does pthread_cancel free up the thread stack? - pthreads

Can anyone help me? I am trying to kill some threads, but that would require sending them a signal, so I thought of using pthread_cancel instead. Does pthread_cancel free up the canceled thread's stack?

pthread_cancel itself will definitely not free the thread's stack: the canceled thread may continue to execute for some time, for example while running its cancellation cleanup handlers.
The thread's resources are cleaned up only after both of these have happened: pthread_detach (or pthread_join) has been called on the thread, and the thread has terminated. They can occur in either order.
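To make that lifecycle concrete, here is a minimal sketch (not from the answer above; the worker/release_buffer names are purely illustrative): the cleanup handler runs on the canceled thread's own stack, and the stack only becomes reclaimable once the thread has terminated and been joined (or detached). Compile with -pthread; exactly when the implementation actually reclaims the stack is up to it.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Cleanup handler: runs on the canceled thread's own stack before it
     * terminates, which is one reason the stack cannot be freed at the
     * moment pthread_cancel() is called. */
    static void release_buffer(void *arg)
    {
        free(arg);
        puts("cleanup handler ran, buffer freed");
    }

    static void *worker(void *arg)
    {
        char *buffer = malloc(4096);
        pthread_cleanup_push(release_buffer, buffer);

        for (;;) {
            /* sleep() is a cancellation point, so a pending cancel
             * request is acted upon here. */
            sleep(1);
        }

        pthread_cleanup_pop(1);   /* not reached, but required to pair the push */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        sleep(1);
        pthread_cancel(tid);

        /* Only after the thread has terminated AND been joined (or detached)
         * may the implementation reclaim its stack and other resources. */
        pthread_join(tid, NULL);
        puts("thread joined, resources may now be reclaimed");
        return 0;
    }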

Related

Understanding dispatch_queues and synchronous/asynchronous dispatch

I'm an Android engineer trying to port some iOS code that uses 5 SERIAL dispatch queues. I want to make sure I'm thinking about things the right way.
dispatch_sync to a SERIAL queue is basically using the queue as a synchronized queue: only one thread may access it at a time, and the block that gets executed can be thought of as a critical region. It happens immediately on the current thread; it's the equivalent of
get_semaphore()
queue.pop()
do_block()
release_semaphore()
dispatch_async to a serial queue performs the block on another thread and lets the current thread return immediately. However, since it's a serial queue, it promises that only one of these asynchronous blocks will execute at a time (a block submitted by a later dispatch_async will wait until all previously queued blocks have finished). That block can also be thought of as a critical region, but it will occur on another thread. So it's the same code as above, but passed to a worker thread first.
Am I off in any of that, or did I figure it out correctly?
This feels like an overly complicated way of thinking about it, and there are lots of little details in that description that aren't quite right. Specifically, "it happens immediately on the current thread" is not correct.
First, let's step back: the distinction between dispatch_async and dispatch_sync is merely whether the current thread waits for the block or not. But when you dispatch something to a serial queue, you should always imagine that it's running on a separate worker thread of GCD's own choosing. Yes, as an optimization, dispatch_sync will sometimes use the current thread, but you are in no way guaranteed that it will.
Second, when you discuss dispatch_sync, you say something about it running "immediately". But it's by no means assured to be immediate. If a thread calls dispatch_sync on some serial queue, that thread will block until (a) any block currently running on that serial queue finishes; (b) any other blocks already queued on that serial queue run and complete; and (c) obviously, the block that the thread itself dispatched runs and completes.
Now, when you use a serial queue to synchronize, say, thread-safe access to some object in memory, that synchronization work is often very quick, so the waiting thread will generally be blocked only for a negligible amount of time while its dispatched block (and any previously queued blocks) finish. But in general it's misleading to say that it will run immediately. (If it could always run immediately, you wouldn't need a queue to synchronize access.)
Now, your question talks about a "critical region", by which I assume you mean some bit of code that, to ensure thread safety or for some similar reason, must be synchronized. When running such synchronized code, the only question regarding dispatch_sync vs dispatch_async is whether the current thread must wait. A common pattern, for example, is to dispatch_async writes to some model (because there's no need to wait for the model to update before proceeding), but dispatch_sync reads from that model (because you obviously don't want to proceed until the read value is returned).
A further optimization of that sync/async pattern is the reader-writer pattern, where concurrent reads are permissible but concurrent writes are not. Thus, you'll use a concurrent queue, dispatch_barrier_async the writes (achieving serial-like behavior for the writes), but dispatch_sync the reads (enjoying concurrent performance with respect to other read operations).
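As a sketch of that reader-writer pattern, here is a small example written against the plain C libdispatch API (the discussion above is about GCD generally; the queue label and the store_get/store_set names are just illustrative). The barrier gives each write exclusive access to the queue, while plain dispatch_sync reads can overlap with one another.

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static dispatch_queue_t store_queue;
    static int store_value;          /* the shared state being protected */

    static int store_get(void)
    {
        __block int result;
        /* Reads may run concurrently with other reads. */
        dispatch_sync(store_queue, ^{ result = store_value; });
        return result;
    }

    static void store_set(int new_value)
    {
        /* The barrier waits for in-flight reads and holds back later
         * reads until it finishes -- serial-like behavior for writes. */
        dispatch_barrier_async(store_queue, ^{ store_value = new_value; });
    }

    int main(void)
    {
        store_queue = dispatch_queue_create("com.example.store",
                                            DISPATCH_QUEUE_CONCURRENT);
        store_set(42);
        printf("value = %d\n", store_get());
        return 0;
    }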
To pick nits, dispatch_sync doesn't necessarily run the code on the current thread, but if it doesn't, it still blocks the current thread until the task completes. The distinction is only potentially important if you're relying on thread IDs or thread-local storage.
But otherwise, yes, unless I missed something subtle.

Can you call dispatch_sync from a concurrent thread to itself without deadlocking?

I know you would deadlock by doing this on a serial queue, but I haven't found anything that mentions deadlocking by doing it on a concurrent queue. I just wanted to verify it won't deadlock (it doesn't seem like it would, as it would only block one of the threads on the queue while the task runs on another thread of the same queue).
Also, is it true that you can control order of execution by calling dispatch_sync on a concurrent queue? (It's mentioned here.) I don't understand why that would be the case, since async vs. sync only affects the calling thread.
This will not deadlock, since the dispatched block can start running immediately: it's not a serial queue, so it doesn't have to wait for the current block to finish.
But it's still not a good idea. It blocks one thread, which causes the OS to spin up a new one (because there is still free CPU while the blocked thread sleeps), wasting memory.
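For what it's worth, here is a tiny sketch of the scenario being asked about (the queue label is illustrative): a block already running on a concurrent queue calls dispatch_sync back onto the same queue. Because the inner block can be picked up by another worker thread (or run directly on the blocked thread as an optimization), it completes and no deadlock occurs; the same code with a serial queue would deadlock.

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void)
    {
        dispatch_queue_t q = dispatch_queue_create("com.example.concurrent",
                                                   DISPATCH_QUEUE_CONCURRENT);

        dispatch_sync(q, ^{
            puts("outer block running on the concurrent queue");
            dispatch_sync(q, ^{
                /* Runs while the outer block is blocked waiting for it. */
                puts("inner block also running on the same queue");
            });
            puts("outer block resumes after the inner block completes");
        });

        return 0;
    }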

How to schedule after the current thread has terminated?

I am creating a user-defined thread library. I use a round-robin scheduling algorithm with context switching. But I do not know what to do when a thread finishes its execution before its allotted time slot ends; the program just terminates. What I actually want is to reschedule the remaining threads by calling the schedule function when the current thread terminates.
I found two ways to overcome this problem.
By explicitly calling a thread_exit function at the end of the function being executed by the current thread.
By changing the stack contents so that the thread_exit function gets executed when the current function returns.
But I cannot figure out how to apply either of these solutions. Can anybody help?
It sounds like you have a bit of a design flaw. If I'm understanding you correctly, you're trying to implement a solution where you have threads that can be allocated to perform some task and after the task is complete, the thread goes idle waiting for the next task.
If that's true, I think I would design something like a daemon process or service that manages a queue of incoming tasks and a pool of threads responsible for executing them, with a controller that listens for new tasks.
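For the question's second approach specifically, here is a hedged sketch using POSIX ucontext: the uc_link field set up before makecontext() names the context to resume when the thread's function returns, so pointing it at the scheduler context gives you "run the scheduler after the current thread terminates" without editing the stack by hand. The scheduler_ctx/worker names are illustrative, and a real round-robin scheduler would of course keep many such contexts.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t scheduler_ctx;   /* runs the round-robin scheduler      */
    static ucontext_t worker_ctx;      /* one user-level thread, for brevity  */

    static void worker(void)
    {
        puts("worker: doing its work, then simply returning");
        /* Returning from this function activates scheduler_ctx because it
         * was installed as uc_link below -- no explicit thread_exit needed. */
    }

    int main(void)
    {
        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp   = malloc(STACK_SIZE);
        worker_ctx.uc_stack.ss_size = STACK_SIZE;
        worker_ctx.uc_link          = &scheduler_ctx;  /* where to go on return */
        makecontext(&worker_ctx, worker, 0);

        puts("scheduler: switching to worker");
        swapcontext(&scheduler_ctx, &worker_ctx);      /* save scheduler, run worker */
        puts("scheduler: worker terminated, reschedule here");

        free(worker_ctx.uc_stack.ss_sp);
        return 0;
    }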

When to use pthread_cancel and not pthread_kill?

When does one use pthread_cancel and not pthread_kill?
I would use neither of those two but that's just personal preference.
Of the two, pthread_cancel is the safer way to terminate a thread, since the thread is only supposed to be affected while its cancelability state is enabled (see pthread_setcancelstate()).
In other words, it shouldn't disappear while holding resources in a way that might cause deadlock. The pthread_kill() call sends a signal to a specific thread; it is a way to affect a thread asynchronously for reasons other than cancelling it.
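A small illustrative sketch of that point (not from the answer; names are hypothetical): a thread can protect a critical region from being abandoned mid-way by disabling cancellation around it with pthread_setcancelstate(), and then accept cancellation only at a well-defined point.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (;;) {
            int old_state;

            /* Refuse cancellation while holding the mutex... */
            pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old_state);
            pthread_mutex_lock(&lock);
            /* ... touch shared state safely ... */
            pthread_mutex_unlock(&lock);
            pthread_setcancelstate(old_state, NULL);

            /* ...and accept it only at a well-defined point. */
            pthread_testcancel();
            usleep(100 * 1000);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);
        pthread_cancel(tid);
        pthread_join(tid, NULL);
        puts("worker canceled at a safe point");
        return 0;
    }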
Most of my threads tend to be in loops doing work anyway, periodically checking flags to see if they should exit. That's mostly because I was raised in a world where pthread_kill() was dangerous and pthread_cancel() didn't exist.
I subscribe to the theory that each thread should totally control its own resources, including its execution lifetime. I've always found that to be the best way to avoid deadlock. To that end, I simply use mutexes for communication between threads (I've rarely found a need for true asynchronous communication) and a flag variable for termination.
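A minimal sketch of that flag-checking style, assuming a mutex-protected stop flag (illustrative names; an atomic flag or a condition variable would work just as well):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t stop_lock = PTHREAD_MUTEX_INITIALIZER;
    static int stop_requested = 0;

    static int should_stop(void)
    {
        pthread_mutex_lock(&stop_lock);
        int stop = stop_requested;
        pthread_mutex_unlock(&stop_lock);
        return stop;
    }

    static void *worker(void *arg)
    {
        while (!should_stop()) {
            /* ... do one unit of work ... */
            usleep(100 * 1000);
        }
        /* The thread cleans up its own resources and exits on its own terms. */
        puts("worker: stop flag seen, exiting cleanly");
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        sleep(1);
        pthread_mutex_lock(&stop_lock);
        stop_requested = 1;              /* ask the thread to finish */
        pthread_mutex_unlock(&stop_lock);

        pthread_join(tid, NULL);
        return 0;
    }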
You can not "kill" a thread with pthread_kill(). If you try to send SIGTERM or SIGKILL to a thread with pthread_kill(), it will terminate the entire process.
I subscribe to the theory that the PROGRAMMER, and not the THREAD (nor the API designers), should totally control the software in all aspects, including which threads cancel which.
I once worked in a firm where we developed a server that used a pool of worker threads and one special master thread that had the responsibility to create, suspend, resume and terminate the worker threads at any time it wanted. Of course the threads used some sort of synchronization, but it was of our design and not some API-enforced dogmas. The system worked very well and efficiently!
This was under Windows. Then I tried to port it to Linux and stumbled over pthreads' stupid "theories" about how wrong it is to suspend another thread, etc. So I had to abandon pthreads and use the native Linux system calls (clone()) directly to implement the threads for our server.

How to avoid a thread freezing when Main Application is Busy

I'm having a bit of a problem. I want to display a progress form that just shows an animation while the main application performs heavy operations.
I've done this in a thread and it works fine when the user isn't performing any operations, but it just stops when my main application is busy.
I'm not able to put Application.ProcessMessages in between the different lines of code because I'm using third-party components with heavy processing time.
My idea was to create a new process and, in that process, create a thread that executes the animation. That way the main application's heavy operations wouldn't stop the thread from executing.
But as I see it, you can only create a new process if you execute a new program.
Does anyone have a solution for how to make a thread continue executing even when the main application is busy?
/Brian
If your worker thread does not have a lower priority than the main thread, and you don't use the Synchronize() method, don't call SendMessage(), and don't try to acquire any synchronization object that the main GUI thread has already acquired, then your secondary thread should continue to work.
As the VCL isn't thread-safe, people often advise using Synchronize() to execute code that updates VCL controls synchronously in the context of the VCL thread. This, however, does not work if the VCL thread is itself busy: your worker thread will block until the main thread continues to process messages.
Your application design is unfortunate, anyway. You should perform all lengthy operations in worker threads and keep the main thread responsive for user interaction. Even with the fancy animation your app will appear hung to the user, since it won't redraw while the VCL thread is busy doing other things and processing no messages. Try to put your lengthy code in worker threads and perform your animation in timer events in the main thread.
Your logic is backward. Your thread should be doing the "heavy work", and passing messages to your main application to update the progress or animation.
If you leave all the "heavy work" in your main application, the other thread won't get enough chances to execute, which means it won't get a chance to update anything. Besides, all access to the GUI (VCL controls) must happen in the application's main thread; the VCL isn't thread-safe. (Neither is Windows itself, when it comes to visual controls.)
If by "Does any one have a solution on how to make a thread continue executing even when the main application is busy?" you mean that main thread is busy you should move the code that is consumming main thread to another other thread. In other words main thread should be responsible for starting and stopping actions and not executing them.
Disclaimer: I don't actually know Delphi, but I think/hope the concepts are quite similar to C++ or C#.
