A couple of quotes from the manual.
Quoting man 3 pthread_mutex_unlock:
None of the mutex functions is a cancellation point, not even pthread_mutex_lock, in spite of the fact that it can suspend a thread for arbitrary durations. This way, the status of mutexes at cancellation points is predictable, allowing cancellation handlers to unlock precisely those mutexes that need to be unlocked before the thread stops executing.
But one paragraph later it is written that:
The mutex functions are not async-signal safe. What this means is that they should not be called from a signal handler. In particular, calling pthread_mutex_lock or pthread_mutex_unlock from a signal handler may deadlock the calling thread.
OK, so the manual orders me to unlock mutexes in a cleanup handler, but prohibits me from unlocking mutexes in a signal handler. Well, quoting man 3 pthread_cancel:
On Linux, cancellation is implemented using signals.
Ah. So a thread is cancelled by receiving a signal.
Doesn't this make a cancellation cleanup handler effectively a signal handler? Or perhaps the cleanup handler is called from a signal handler whose action is to invoke the functions installed by pthread_cleanup_push? One cannot deny that these cleanup handlers are called when a signal is received.
But this would make the manual contradict its own statements…
How to understand things properly?
The fact that cancellation is handled using signals on Linux is an implementation detail. It doesn't mean you're only allowed to use async-signal-safe functions in cancellation cleanup handlers.
At least for deferred cancellation at a cancellation point, POSIX doesn't limit the functions that can be called, so the implementation has to make that work.
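For illustration, here is a minimal sketch of a cleanup handler that releases a mutex if the thread is cancelled while blocked in pthread_cond_wait(); the names queue_mutex, queue_nonempty and queue_len are made up for this example:

    #include <pthread.h>

    static pthread_mutex_t queue_mutex    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  queue_nonempty = PTHREAD_COND_INITIALIZER;
    static int queue_len = 0;

    static void unlock_queue(void *arg)
    {
        /* Runs if the thread is cancelled while the mutex is held. */
        pthread_mutex_unlock((pthread_mutex_t *)arg);
    }

    void *consumer(void *unused)
    {
        (void)unused;
        pthread_mutex_lock(&queue_mutex);
        pthread_cleanup_push(unlock_queue, &queue_mutex);

        /* pthread_cond_wait() is a cancellation point; if cancellation is
         * acted on here, the handler above releases queue_mutex before the
         * thread stops executing. */
        while (queue_len == 0)
            pthread_cond_wait(&queue_nonempty, &queue_mutex);

        queue_len--;   /* consume one item */

        pthread_cleanup_pop(1);   /* non-zero: also unlock on the normal path */
        return NULL;
    }

This is the scenario the first manual quote describes: the cleanup handler runs as part of ordinary deferred cancellation, so it may call pthread_mutex_unlock() even though that function is not async-signal-safe.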
Related
I use a process-shared pthread_mutex_t in shared memory. I wonder what happens if a process locks the mutex and then somehow exits while holding it. As my experiment shows, a deadlock happens, and that is bad news. So is there a way to prevent this? Shouldn't the mutex be unlocked automatically when the process exits?
No, the mutex shouldn't be automatically unlocked, because the shared data protected by the mutex may be in an inconsistent state.
If you want to handle this situation, you need to use "robust mutexes". To create a robust mutex, set the mutex robustness property to PTHREAD_MUTEX_ROBUST by using pthread_mutexattr_setrobust() on a pthread_mutexattr_t object that is used to initialise the mutex.
If a thread or process exits while holding a robust mutex, the next call to pthread_mutex_lock() on that mutex will return the EOWNERDEAD error. If this error is returned, your code must carefully check all the shared state protected by the mutex and fix any inconsistencies. It can then mark the state as consistent by calling pthread_mutex_consistent() on the mutex, and then continue its execution as normal.
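A minimal sketch of that setup, assuming the mutex lives in shared memory that the processes already map (the shared-memory setup and the actual repair step are only hinted at here):

    #include <pthread.h>
    #include <errno.h>

    int init_robust_mutex(pthread_mutex_t *m)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        int rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }

    int lock_robust(pthread_mutex_t *m)
    {
        int rc = pthread_mutex_lock(m);
        if (rc == EOWNERDEAD) {
            /* The previous owner died while holding the lock: repair the
             * shared state here, then mark the mutex consistent. */
            /* repair_shared_state(); */
            pthread_mutex_consistent(m);
            rc = 0;
        }
        return rc;
    }

If nobody marks the mutex consistent before unlocking it, it becomes permanently unusable and later lock attempts return ENOTRECOVERABLE.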
I have a property @property NSLock *myLock
And I want to write two methods:
- (void) lock
and
- (void) unlock
These methods lock and unlock myLock respectively and they need to do this regardless of what thread or queue called them. For instance, thread A might have called lock but queue B might be the one calling unlock. Both of these methods should work appropriately without reporting that I am trying to unlock a lock from a different thread/queue that locked it. Additionally, they need to do this synchronously.
It is rare these days that NSLock is the right tool for the job. There are much better tools now, particularly with GCD; more on that later.
As you probably already know from the docs, but I'll repeat for those reading along:
Warning: The NSLock class uses POSIX threads to implement its locking behavior. When sending an unlock message to an NSLock object, you must be sure that message is sent from the same thread that sent the initial lock message. Unlocking a lock from a different thread can result in undefined behavior.
That's very hard to implement without deadlocking if you're trying to lock and unlock on different threads. The fundamental problem is that if lock blocks the thread, then there is no way for the subsequent unlock to ever run on that thread, and you can't unlock on a different thread. NSLock is not for this problem.
Rather than NSLock, you can implement the same patterns with dispatch_semaphore_create(). These can be safely updated on any thread you like. You can lock using dispatch_semaphore_wait() and you can unlock using dispatch_semaphore_signal(). That said, this still usually isn't the right answer.
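For illustration, a minimal sketch in plain C against libdispatch (the same calls are usable from Objective-C); make_lock, my_lock and my_unlock are made-up names, not part of any API:

    #include <dispatch/dispatch.h>

    dispatch_semaphore_t make_lock(void)
    {
        /* A count of 1 behaves like an unlocked binary lock. */
        return dispatch_semaphore_create(1);
    }

    void my_lock(dispatch_semaphore_t sem)
    {
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    }

    void my_unlock(dispatch_semaphore_t sem)
    {
        dispatch_semaphore_signal(sem);
    }

Because a semaphore has no notion of an owning thread, the wait and the signal can legitimately come from different threads or queues.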
Most resource contention is best managed with an operation queue or dispatch queue. These provide excellent ways to handle work in parallel, manage resources, wait on events, implement producer/consumer patterns, and otherwise do almost everything that you would have done with an NSLock or NSThread in the past. I highly recommend the Concurrency Programming Guide as an introduction to how to design with queues rather than locks.
Can anyone help me? I am trying to kill the threads, but that will require a signal, so I thought of using pthread_cancel.
pthread_cancel() definitely will not free the thread's stack, given that the cancelled thread may continue to execute for some time, for example while running its cancellation cleanup handlers.
A thread's resources are cleaned up only after both pthread_detach() has been called on it and the thread has terminated (which can happen in either order).
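A minimal sketch of that lifecycle, with a made-up stop_worker() helper:

    #include <pthread.h>

    void stop_worker(pthread_t tid)
    {
        /* Only a request: the thread keeps running until it reaches a
         * cancellation point and its cleanup handlers have finished. */
        pthread_cancel(tid);

        /* The stack and other per-thread resources are reclaimed only once
         * both this call and the thread's actual termination have happened,
         * in whichever order they occur. */
        pthread_detach(tid);
    }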
Here's the situation: I have a thread running that is partially controlled by code that I don't own. I started the thread, so I have its thread ID, but then I passed it off to some other code. I need to be able to tell, from another thread that I am in control of, whether that other code has currently caused the thread to block. Is there a way to do this in pthreads? I think I'm looking for something equivalent to the getState() method in Java's Thread class (http://download.oracle.com/javase/6/docs/api/java/lang/Thread.html#getState() ).
--------------Edit-----------------
It's ok if the solution is platform dependent. I've already found a solution for linux using the /proc file system.
You could write wrappers for some of the pthreads functions, which would simply update some state information before/after calling the original functions. That would allow you to keep track of which threads are running, when they're acquiring or holding mutexes (and which ones), when they're waiting on which condition variables, and so on.
Of course, this only tells you when they're blocked on pthreads synchronization objects -- it won't tell you when they're blocking on something else.
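A minimal sketch of such a wrapper for a single tracked thread; the bookkeeping names (tracked_state, state_lock, my_mutex_lock) are invented for this example and are not part of pthreads:

    #include <pthread.h>

    typedef enum { TRK_RUNNING, TRK_BLOCKED_ON_MUTEX } tracked_state_t;

    static pthread_mutex_t  state_lock    = PTHREAD_MUTEX_INITIALIZER;
    static tracked_state_t  tracked_state = TRK_RUNNING;

    static void set_state(tracked_state_t s)
    {
        pthread_mutex_lock(&state_lock);
        tracked_state = s;
        pthread_mutex_unlock(&state_lock);
    }

    /* The tracked thread calls this instead of pthread_mutex_lock(). */
    int my_mutex_lock(pthread_mutex_t *m)
    {
        set_state(TRK_BLOCKED_ON_MUTEX);
        int rc = pthread_mutex_lock(m);
        set_state(TRK_RUNNING);
        return rc;
    }

    /* Any other thread can query the last recorded state. */
    tracked_state_t get_tracked_state(void)
    {
        pthread_mutex_lock(&state_lock);
        tracked_state_t s = tracked_state;
        pthread_mutex_unlock(&state_lock);
        return s;
    }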
Before you hand the thread off to some other code, set a flag protected by a mutex. When the thread returns from the code you don't control, clear the flag protected by the mutex. You can then check, from wherever you need to, whether the thread is in the code you don't control.
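A minimal sketch of that flag, with made-up names (in_library, flag_lock, library_entry_point):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t flag_lock  = PTHREAD_MUTEX_INITIALIZER;
    static bool            in_library = false;

    extern void library_entry_point(void);   /* the code you don't control */

    void *thread_main(void *arg)
    {
        (void)arg;

        pthread_mutex_lock(&flag_lock);
        in_library = true;                 /* set before handing off */
        pthread_mutex_unlock(&flag_lock);

        library_entry_point();

        pthread_mutex_lock(&flag_lock);
        in_library = false;                /* clear when control returns */
        pthread_mutex_unlock(&flag_lock);
        return NULL;
    }

    /* Callable from any other thread. */
    bool thread_is_in_library(void)
    {
        pthread_mutex_lock(&flag_lock);
        bool b = in_library;
        pthread_mutex_unlock(&flag_lock);
        return b;
    }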
From outside the code, there is no distinction between blocked and not-blocked. If you literally checked the state of the thread, you would get nonsensical results.
For example, consider two library implementations.
A: We do all the work in the calling thread.
B: We dispatch a worker thread to do the work. The calling thread blocks until the worker is done.
In both cases A and B, the code you don't control is making forward progress equally well, yet your 'getState' idea would report different results. So it's not what you want.
When does one use pthread_cancel and not pthread_kill?
I would use neither of those two but that's just personal preference.
Of the two, pthread_cancel() is the safer way to terminate a thread, since the thread is only affected while its cancelability state, set with pthread_setcancelstate(), is PTHREAD_CANCEL_ENABLE.
In other words, it shouldn't disappear while holding resources in a way that might cause deadlock. The pthread_kill() call sends a signal to the specific thread, and this is a way to affect a thread asynchronously for reasons other than cancelling it.
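For illustration, a minimal sketch of a thread that disables cancellation around a critical section so it cannot disappear while holding resource_mutex (a made-up name), then polls for cancellation at a safe point:

    #include <pthread.h>

    static pthread_mutex_t resource_mutex = PTHREAD_MUTEX_INITIALIZER;

    void *cancellable_worker(void *arg)
    {
        (void)arg;
        for (;;) {
            int old;

            pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old);
            pthread_mutex_lock(&resource_mutex);
            /* ... touch shared resources; no cancellation can land here ... */
            pthread_mutex_unlock(&resource_mutex);
            pthread_setcancelstate(old, NULL);

            /* A deliberate cancellation point while holding nothing. */
            pthread_testcancel();
        }
        return NULL;
    }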
Most of my threads tend to be in loops doing work anyway, periodically checking flags to see if they should exit. That's mostly because I was raised in a world where pthread_kill() was dangerous and pthread_cancel() didn't exist.
I subscribe to the theory that each thread should totally control its own resources, including its execution lifetime. I've always found that to be the best way to avoid deadlock. To that end, I simply use mutexes for communication between threads (I've rarely found a need for true asynchronous communication) and a flag variable for termination.
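A minimal sketch of that loop-plus-flag pattern; stop_requested, stop_lock and the work itself are placeholders:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t stop_lock      = PTHREAD_MUTEX_INITIALIZER;
    static bool            stop_requested = false;

    static bool should_stop(void)
    {
        pthread_mutex_lock(&stop_lock);
        bool b = stop_requested;
        pthread_mutex_unlock(&stop_lock);
        return b;
    }

    void *worker_main(void *arg)
    {
        (void)arg;
        while (!should_stop()) {
            /* do_one_unit_of_work(); */
        }
        return NULL;   /* the thread ends its own life cleanly */
    }

    /* Any thread that wants the worker gone sets the flag and joins. */
    void request_stop_and_wait(pthread_t tid)
    {
        pthread_mutex_lock(&stop_lock);
        stop_requested = true;
        pthread_mutex_unlock(&stop_lock);
        pthread_join(tid, NULL);
    }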
You cannot "kill" a thread with pthread_kill(). If you try to send SIGTERM or SIGKILL to a thread with pthread_kill(), it will terminate the entire process.
I subscribe to the theory that the PROGRAMMER, and not the THREAD (nor the API designers), should have total control over their own software in all respects, including which threads cancel which.
I once worked at a firm where we developed a server that used a pool of worker threads and one special master thread that had the responsibility to create, suspend, resume and terminate the worker threads at any time it wanted. Of course the threads used some sort of synchronization, but it was of our own design and not some API-enforced dogma. The system worked very well and efficiently!
This was under Windows. Then I tried to port it to Linux and stumbled over pthreads' stupid "theories" about how wrong it is to suspend another thread, etc. So I had to abandon pthreads and use the native Linux system call (clone()) directly to implement the threads for our server.