Hello, I have a question about cancelling a thread that uses mutexes and condition variables. The thread has cancellation type deferred. When I use only the functions pthread_mutex_lock/unlock and pthread_cond_wait, and a cancel request arrives, the thread's only cancellation point is pthread_cond_wait. Will the mutex be locked or not? I am not sure whether the thread always leaves the mutex unlocked. Or are the pthread_mutex_lock/unlock functions also cancellation points? Thank you.
I doubt that I can phrase this better than the documentation:
A condition wait (whether timed or not) is a cancellation point. When
the cancelability type of a thread is set to PTHREAD_CANCEL_DEFERRED,
a side effect of acting upon a cancellation request while in a
condition wait is that the mutex is (in effect) re-acquired before
calling the first cancellation cleanup handler. The effect is as if
the thread were unblocked, allowed to execute up to the point of
returning from the call to pthread_cond_timedwait() or
pthread_cond_wait(), but at that point notices the cancellation
request and instead of returning to the caller of
pthread_cond_timedwait() or pthread_cond_wait(), starts the thread
cancellation activities, which includes calling cancellation cleanup
handlers.
Also just be sure you are aware that other functions are cancellation points as well.
If you have multiple threads waiting in pthread_cond_wait and you call pthread_cancel before sending out some kind of wakeup using pthread_cond_broadcast or pthread_cond_signal, it may lead to a deadlock, where pthread_cond_wait is waiting for the thread to be cancelled and the cancellation is waiting for pthread_cond_wait to come out.
Per "The Linux Programming Interface" (pages 667-678), the mutex will be locked, so you can use:
pthread_mutex_lock(&mutex);
pthread_cleanup_push((void (*)(void *))pthread_mutex_unlock, &mutex);
while (var == 0) {
    pthread_cond_wait(&cond, &mutex);
}
pthread_cleanup_pop(1);
I read somewhere that we should lock the mutex before calling pthread_cond_signal and unlock the mutex after calling it:
The pthread_cond_signal() routine is used to signal (or wake up) another thread which is waiting on the condition variable. It should be called after mutex is locked, and must unlock mutex in order for pthread_cond_wait() routine to complete.
My question is: isn't it OK to call pthread_cond_signal or pthread_cond_broadcast methods without locking the mutex?
If you do not lock the mutex in the codepath that changes the condition and signals, you can lose wakeups. Consider this pair of processes:
Process A:
pthread_mutex_lock(&mutex);
while (condition == FALSE)
    pthread_cond_wait(&cond, &mutex);
pthread_mutex_unlock(&mutex);
Process B (incorrect):
condition = TRUE;
pthread_cond_signal(&cond);
Then consider this possible interleaving of instructions, where condition starts out as FALSE:
Process A                               Process B

pthread_mutex_lock(&mutex);
while (condition == FALSE)
                                        condition = TRUE;
                                        pthread_cond_signal(&cond);
pthread_cond_wait(&cond, &mutex);
The condition is now TRUE, but Process A is stuck waiting on the condition variable - it missed the wakeup signal. If we alter Process B to lock the mutex:
Process B (correct):
pthread_mutex_lock(&mutex);
condition = TRUE;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
...then the above cannot occur; the wakeup will never be missed.
(Note that you can actually move the pthread_cond_signal() itself after the pthread_mutex_unlock(), but this can result in less optimal scheduling of threads, and you've necessarily locked the mutex already in this code path due to changing the condition itself).
According to this manual:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
The meaning of the predictable scheduling behavior statement was explained by Dave Butenhof (author of Programming with POSIX Threads) on comp.programming.threads and is available here.
caf, in your sample code, Process B modifies condition without locking the mutex first. If Process B simply locked the mutex during that modification, and then still unlocked the mutex before calling pthread_cond_signal, there would be no problem --- am I right about that?
I believe intuitively that caf's position is correct: calling pthread_cond_signal without owning the mutex lock is a Bad Idea. But caf's example is not actually evidence in support of this position; it's simply evidence in support of the much weaker (practically self-evident) position that it is a Bad Idea to modify shared state protected by a mutex unless you have locked that mutex first.
Can anyone provide some sample code in which calling pthread_cond_signal followed by pthread_mutex_unlock yields correct behavior, but calling pthread_mutex_unlock followed by pthread_cond_signal yields incorrect behavior?
When pthread_exit(PTHREAD_CANCELED) is called I get the expected behavior (stack unwinding, destructor calls), but a call to pthread_cancel(pthread_self()) just terminated the thread.
Why do pthread_exit(PTHREAD_CANCELED) and pthread_cancel(pthread_self()) differ so significantly, and why is the thread's memory not released in the latter case?
The background is as follows:
The calls are made from a signal handler, and the reasoning behind this strange approach is to cancel a thread waiting for the external library's semop() call to complete (looping on EINTR, I suppose).
I have noticed that calling pthread_cancel from another thread does not work (as if semop were not a cancellation point), but signalling the thread and then calling pthread_exit works, though it runs the destructors within the signal handler.
pthread_cancel could postpone the action to the next cancellation point.
In terms of thread specific clean-up behaviour there should be no difference between cancelling a thread via pthread_cancel() and exiting a thread via pthread_exit().
POSIX says:
[...] When the cancellation is acted on, the cancellation clean-up handlers for thread shall be called. When the last cancellation clean-up handler returns, the thread-specific data destructor functions shall be called for thread. When the last destructor function returns, thread shall be terminated.
From Linux's man pthread_cancel:
When a cancellation request is acted on, the following steps occur for thread (in this order):
1. Cancellation clean-up handlers are popped (in the reverse of the order in which they were pushed) and called. (See pthread_cleanup_push(3).)
2. Thread-specific data destructors are called, in an unspecified order. (See pthread_key_create(3).)
3. The thread is terminated. (See pthread_exit(3).)
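To see those steps in action, here is a minimal sketch of my own (not from the man page) that installs a clean-up handler and a thread-specific data destructor and then cancels the thread; the messages should appear in the order listed above:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_key_t key;

static void key_dtor(void *p) { printf("2) thread-specific data destructor\n"); }
static void cleanup(void *arg) { printf("1) cancellation clean-up handler\n"); }

static void *worker(void *arg)
{
    pthread_setspecific(key, (void *)1);   /* non-NULL, so the destructor will run */
    pthread_cleanup_push(cleanup, NULL);
    for (;;)
        pause();                           /* pause() is a cancellation point */
    pthread_cleanup_pop(0);                /* never reached; keeps push/pop balanced */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_key_create(&key, key_dtor);
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);
    pthread_join(t, NULL);                 /* 3) the thread has been terminated */
    return 0;
}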
Regarding the strategy of introducing a cancellation point by signalling a thread, I have my doubts that this is the cleanest way.
As many system calls return on receiving a signal while setting errno to EINTR, it would be easy to catch this case and simply let the thread end itself cleanly under this condition via pthread_exit().
Some pseudo code:
while (some condition)
{
    if (-1 == semop(...))
    {   /* getting here on error or signal reception */
        if (EINTR == errno)
        {   /* getting here on signal reception */
            pthread_exit(...);
        }
    }
}
Turned out that there is no difference.
However some interesting side effects took place.
Operations on std::iostream, especially cerr/cout, include cancellation points. When the underlying operation is canceled, the stream is marked as not good. So you will get no output from any other thread if only one has discovered cancellation on an attempt to print.
So play with pthread_setcancelstate() and pthread_testcancel(), or just call cerr.clear() when needed.
This applies to C++ streams only; stderr and stdin seem not to be affected.
First of all, there are two things associated with a thread that determine what happens when you call pthread_cancel():
1. pthread_setcancelstate
2. pthread_setcanceltype
The first function tells whether that particular thread can be cancelled or not, and the second tells when and how that thread should be cancelled; for example, whether the thread should be terminated as soon as you send the cancellation request, or whether it needs to wait until it reaches some milestone before being terminated.
When you call pthread_cancel(), the thread won't be terminated directly; the above two settings are consulted, i.e., whether that thread can be cancelled or not, and if so, when to cancel it.
If you disable the cancel state, then pthread_cancel() can't terminate that thread, but the cancellation request stays queued, waiting for the thread to become cancellable; i.e., if at some later point you enable the cancel state, the pending cancel request will then go to work terminating that thread.
Whereas if you use pthread_exit(), the thread will be terminated irrespective of the cancel state and cancel type of that particular thread.
This is one of the differences between pthread_exit() and pthread_cancel(); there can be a few more.
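As a small illustration of those two settings (a sketch of my own, not from the answer above), a thread might temporarily disable cancellation around a critical region and then offer an explicit cancellation point:

#include <pthread.h>

void *worker(void *arg)
{
    int oldstate;

    /* Refuse cancellation while touching state that must stay consistent. */
    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
    /* ... critical work that must not be interrupted ... */
    pthread_setcancelstate(oldstate, NULL);

    /* Explicit cancellation point: a request queued while cancellation
       was disabled is acted on here. */
    pthread_testcancel();

    return NULL;
}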
Before the pthread_cond_wait we lock a mutex so that some other code cannot try to change the condition. The wait then unlocks the mutex and waits for the signal.
Say in some other thread I had locked the same mutex, called 'signal' after that, and then unlocked the mutex.
When the signal is delivered, the waiting thread wakes up and acquires the mutex again.
Thread1                      Thread2
{                            {
    lock(mutex);                 lock(mutex);
    wait(mutex);                 signal(mutex);
    unlock(mutex);               unlock(mutex);
}                            }
Say the three Thread1 statements are enclosed in a while(1) loop. Then assume that Thread2 locks the mutex, signals it, and unlocks the mutex, and then doesn't end but goes to sleep.
So will the value of the condition variable be changed permanently? If the three statements of Thread1 are running in an infinite loop, will it never wait and just find that the signal has been given? When the wait call returns, does it set the value of the condition variable back to its initial value?
If so, can I use the create, destroy, or initialize functions on the variables to set the value back? If so, how? What exactly do these functions do?
Thanks,
pthread_cond_signal() will always wake at least one thread that is currently waiting on that condition variable in pthread_cond_wait(). If the same thread or a different thread then calls pthread_cond_wait() again, it will block and wait for another signal.
This means that pthread condition variables must always be paired with some kind of shared data, protected by the mutex that is held when calling pthread_cond_wait(). Before calling pthread_cond_wait(), the thread must check the shared data to see if the condition it wants to wait for has occurred - if not, it shouldn't wait.
The simplest example of such shared data might be a global flag. In your example:
int flag = 0;

Thread 1 {
    pthread_mutex_lock(&mutex);
    while (!flag)
        pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);
}

Thread 2 {
    pthread_mutex_lock(&mutex);
    flag = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
}
You can see here that when the condition is "reset" is entirely under your control - for example you could have Thread 1 set flag = 0; before it calls pthread_mutex_unlock().
The shared state is often more complex than a simple flag - for example, you might have a producer thread call pthread_cond_wait() while there is no room in a shared buffer.
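As a rough sketch of that kind of shared state (the buffer layout, names, and size here are my own illustration, not part of the original answer), a bounded buffer might look something like this:

#include <pthread.h>

#define BUF_SIZE 16

static int buffer[BUF_SIZE];
static int count = 0;                 /* number of items currently stored */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void put(int item)
{
    pthread_mutex_lock(&mutex);
    while (count == BUF_SIZE)         /* producer waits while there is no room */
        pthread_cond_wait(&not_full, &mutex);
    buffer[count++] = item;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&mutex);
}

int get(void)
{
    int item;
    pthread_mutex_lock(&mutex);
    while (count == 0)                /* consumer waits while the buffer is empty */
        pthread_cond_wait(&not_empty, &mutex);
    item = buffer[--count];
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&mutex);
    return item;
}

Each side waits on its own predicate in a while loop and signals the opposite condition after changing the shared count, all under the same mutex.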
I have the following piece of code in thread A, which blocks using pthread_cond_wait()
pthread_mutex_lock(&my_lock);
if ( false == testCondition )
    pthread_cond_wait(&my_wait,&my_lock);
pthread_mutex_unlock(&my_lock);
I have the following piece of code in thread B, which signals thread A
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_cond_signal(&my_wait);
pthread_mutex_unlock(&my_lock);
Provided there are no other threads, would it make any difference if pthread_cond_signal(&my_wait) is moved out of the critical section block as shown below ?
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_mutex_unlock(&my_lock);
pthread_cond_signal(&my_wait);
My recommendation is typically to keep the pthread_cond_signal() call inside the locked region, but probably not for the reasons you think.
In most cases, it doesn't really matter whether you call pthread_cond_signal() with the lock held or not. Ben is right that some schedulers may force a context switch when the lock is released if there is another thread waiting, so your thread may get switched away before it can call pthread_cond_signal(). On the other hand, some schedulers will run the waiting thread as soon as you call pthread_cond_signal(), so if you call it with the lock held, the waiting thread will wake up and then go right back to sleep (because it's now blocked on the mutex) until the signaling thread unlocks it. The exact behavior is highly implementation-specific and may change between operating system versions, so it isn't anything you can rely on.
But, all of this looks past what should be your primary concern, which is the readability and correctness of your code. You're not likely to see any real-world performance benefit from this kind of micro-optimization (remember the first rule of optimization: profile first, optimize second). However, it's easier to think about the control flow if you know that the set of waiting threads can't change between the point where you set the condition and send the signal. Otherwise, you have to think about things like "what if thread A sets testCondition=TRUE and releases the lock, and then thread B runs and sees that testCondition is true, so it skips the pthread_cond_wait() and goes on to reset testCondition to FALSE, and then finally thread A runs and calls pthread_cond_signal(), which wakes up thread C because thread B wasn't actually waiting, but testCondition isn't true anymore". This is confusing and can lead to hard-to-diagnose race conditions in your code. For that reason, I think it's better to signal with the lock held; that way, you know that setting the condition and sending the signal are atomic with respect to each other.
On a related note, the way you are calling pthread_cond_wait() is incorrect. It's possible (although rare) for pthread_cond_wait() to return without the condition variable actually being signaled, and there are other cases (for example, the race I described above) where a signal could end up awakening a thread even though the condition isn't true. In order to be safe, you need to put the pthread_cond_wait() call inside a while() loop that tests the condition, so that you call back into pthread_cond_wait() if the condition isn't satisfied after you reacquire the lock. In your example it would look like this:
pthread_mutex_lock(&my_lock);
while ( false == testCondition ) {
    pthread_cond_wait(&my_wait,&my_lock);
}
pthread_mutex_unlock(&my_lock);
(I also corrected what was probably a typo in your original example, which is the use of my_mutex for the pthread_cond_wait() call instead of my_lock.)
The thread waiting on the condition variable should keep the mutex locked, and the other thread should always signal with the mutex locked. This way, you know the other thread is waiting on the condition when you send the signal. Otherwise, it's possible the waiting thread won't see the condition being signaled and will block indefinitely waiting on it.
Condition variables are typically used like this:
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int go = 0;

void *threadproc(void *data) {
    printf("Sending go signal\n");
    pthread_mutex_lock(&lock);
    go = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(int argc, char *argv[]) {
    pthread_t thread;
    pthread_mutex_lock(&lock);
    printf("Waiting for signal to go\n");
    pthread_create(&thread, NULL, &threadproc, NULL);
    while(!go) {
        pthread_cond_wait(&cond, &lock);
    }
    printf("We're allowed to go now!\n");
    pthread_mutex_unlock(&lock);
    pthread_join(thread, NULL);
    return 0;
}
This is valid:
void *threadproc(void *data) {
    printf("Sending go signal\n");
    go = 1;
    pthread_cond_signal(&cond);
    return NULL;
}
However, consider what's happening in main
while(!go) {
    /* Suppose a long delay happens here, during which the signal is sent */
    pthread_cond_wait(&cond, &lock);
}
If the delay described by that comment happens, pthread_cond_wait will be left waiting—possibly forever. This is why you want to signal with the mutex locked.
Both are correct; however, for reactivity reasons, most schedulers hand control to another thread when a lock is released. If you don't signal before unlocking, your waiting thread A is not in the ready list and thus will not be scheduled until B is scheduled again and calls pthread_cond_signal().
The Open Group Base Specifications Issue 7 IEEE Std 1003.1, 2013 Edition (which as far as I can tell is the official pthread specification) says this on the matter:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
To add my personal experience, I was working on an application that had code where the condition variable was destroyed (and the memory containing it freed) by the thread that was woken up. We found that on a multi-core device (an iPad Air 2) the pthread_cond_signal() could actually crash sometimes if it was outside the mutex lock, as the waiter woke up and destroyed the condition variable before the pthread_cond_signal had completed. This was quite unexpected.
So I would definitely veer towards the 'signal inside the lock' version, it appears to be safer.
Here is a nice write-up about condition variables: Techniques for Improving the Scalability of Applications Using POSIX Thread Condition Variables (look under the 'Avoiding the Mutex Contention' section, point 7).
It says that the second version may have some performance benefits, because it makes it possible for the thread in pthread_cond_wait to wait less frequently.
I'm implementing a thread with a queue of tasks. As soon as the first task is added to the queue, the thread starts running it.
Should I use a pthread condition variable to wake up the thread, or is there a more appropriate mechanism?
If I call pthread_cond_signal() when the other thread is not blocked by pthread_cond_wait() but rather doing something, what happens? Will the signal be lost?
Semaphores are good if and only if your queue is already thread-safe. Also, some semaphore implementations may be limited by a maximum counter value, even though it is unlikely you would ever overrun it.
The simplest correct way to do this is the following:
pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
queue_t queue;

push()
{
    pthread_mutex_lock(&queue_lock);
    queue.insert(new_job);
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&queue_lock);
}

pop()
{
    pthread_mutex_lock(&queue_lock);
    if (queue.empty())
        pthread_cond_wait(&not_empty, &queue_lock);
    job = queue.pop();
    pthread_mutex_unlock(&queue_lock);
}
From the pthread_cond_signal Manual:
The pthread_cond_broadcast() and pthread_cond_signal() functions shall have no effect if there are no threads currently blocked on cond.
I suggest you use Semaphores. Basically, each time a task is inserted in the queue, you "up" the semaphore. The worker thread blocks on the semaphore by "down"'ing it. Since it will be "up"'ed one time for each task, the worker thread will go on as long as there are tasks in the queue. When the queue is empty the semaphore is at 0, and the worker thread blocks until a new task arrives. Semaphores also easily handle the case when more than 1 task arrived while the worker was busy. Notice that you still have to lock access to the queue to keep inserts/removes atomic.
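A minimal sketch of that idea, assuming a hypothetical task type and queue helpers enqueue()/dequeue() (these names are placeholders, not a real API):

#include <pthread.h>
#include <semaphore.h>

/* Hypothetical task type and queue helpers; the queue itself is protected
   by queue_lock, the semaphore only counts available tasks. */
typedef struct task task_t;
void enqueue(task_t *t);
task_t *dequeue(void);

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t tasks;                   /* initialize once with sem_init(&tasks, 0, 0) */

void submit(task_t *t)
{
    pthread_mutex_lock(&queue_lock);
    enqueue(t);
    pthread_mutex_unlock(&queue_lock);
    sem_post(&tasks);                 /* "up": one more task is available */
}

void *worker(void *arg)
{
    for (;;) {
        sem_wait(&tasks);             /* "down": blocks while no task is available */
        pthread_mutex_lock(&queue_lock);
        task_t *t = dequeue();
        pthread_mutex_unlock(&queue_lock);
        /* process t ... */
    }
    return NULL;
}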
The signal will be lost, but you want the signal to be lost in that case. If there is no thread to wakeup, the signal serves no purpose. (If nobody is waiting for something, nobody needs to be notified when it happens, right?)
With condition variables, lost signals cannot cause a thread to "sleep through a fire". Unless you actually code a thread to go to sleep when there's already a fire, there is no need to "save a signal". When the fire starts, your broadcast will wake up any sleeping threads. And you would have to be pretty daft to code a thread to go to sleep when there's already a fire.
As already suggested, semaphores should be the best choice. If you need a fixed-size queue just use 2 semaphores (as in classical producer-consumer).
In artyom's code, it would be better to replace "if" with "while" in the pop() function, to handle spurious wakeups.
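For reference, a corrected pop() along those lines, in the same pseudo-code style as above, might look like:

pop()
{
    pthread_mutex_lock(&queue_lock);
    while (queue.empty())                        /* loop guards against spurious wakeups */
        pthread_cond_wait(&not_empty, &queue_lock);
    job = queue.pop();
    pthread_mutex_unlock(&queue_lock);
}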
No effect.
If you check how pthread_cond_signal is implemented, the condition variable uses several counters to check whether there are any waiting threads to wake up, e.g., in glibc-nptl:
/* Are there any waiters to be woken? */
if (cond->__data.__total_seq > cond->__data.__wakeup_seq){
...
}