I am writing a program that creates a thread which prints 10 numbers. After it prints 5 of them, it notifies the main thread and waits; then it continues with the next 5 numbers.
This is test.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <pthread.h>
#include <unistd.h>
int rem = 10;
int count = 5;
pthread_mutex_t mtx;
pthread_cond_t cond1;
pthread_cond_t cond2;
void *f(void *arg)
{
    int a;
    srand(time(NULL));
    while (rem > 0) {
        a = rand() % 100;
        printf("%d\n", a);
        rem--;
        count--;
        if (count == 0) {
            printf("time to wake main thread up\n");
            pthread_cond_signal(&cond1);
            printf("second thread waits\n");
            pthread_cond_wait(&cond2, &mtx);
            printf("second thread woke up\n");
        }
    }
    pthread_exit(NULL);
}
int main()
{
    pthread_mutex_init(&mtx, 0);
    pthread_cond_init(&cond1, 0);
    pthread_cond_init(&cond2, 0);
    pthread_t tids;
    pthread_create(&tids, NULL, f, NULL);
    while (1) {
        if (count != 0) {
            printf("main: waiting\n");
            pthread_cond_wait(&cond1, &mtx);
            printf("5 numbers are printed\n");
            printf("main: waking up\n");
            pthread_cond_signal(&cond2);
            break;
        }
        pthread_cond_signal(&cond2);
        if (rem == 0) break;
    }
    pthread_join(tids, NULL);
}
The output of the program is:
main: waiting
//5 random numbers
time to wake main thread up
second thread waits
5 numbers are printed
main: waking up
Since I call pthread_cond_signal(&cond2); I thought that the thread would wake up and print the remaining numbers, but this is not the case. Any ideas why? Thanks in advance.
Summary
The issues have been summarized in comments, or at least most of them. So as to put an actual answer on record, however:
Pretty much nothing about the program's use of shared variables and synchronization objects is correct. Its behavior is undefined, and the specific manifestation observed is just one of the more likely outcomes in a universe of possible behaviors.
Accessing shared variables
If two different threads access (read or write) the same non-atomic object during their runs, and at least one of the accesses is a write, then all accesses must be properly protected by synchronization actions.
There is a variety of these, too large to cover comprehensively in a StackOverflow answer, but among the most common is to use a mutex to guard access. In this approach, a mutex is created in the program and designated for protecting access to one or more shared variables. Each thread that wants to access one of those variables locks the mutex before doing so. At some later point, the thread unlocks the mutex, lest other threads be permanently blocked from locking the mutex themselves.
Example:
pthread_mutex_t mutex; // must be initialized before use
int shared_variable;

// ...

void *thread_one_function(void *data) {
    int rval;

    // some work ...

    rval = pthread_mutex_lock(&mutex);
    // check for and handle lock failure ...

    shared_variable++;

    // ... maybe other work ...

    rval = pthread_mutex_unlock(&mutex);
    // check for and handle unlock failure ...

    // more work ...

    return NULL;
}
In your program, the rem and count variables are both shared between threads, and access to them needs to be synchronized. You already have a mutex, and using it to protect accesses to these variables looks like it would be appropriate.
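For illustration, here is a minimal sketch (not a complete fix, since the condition-variable logic discussed below also needs work) of guarding those updates in f() with your existing mtx:

pthread_mutex_lock(&mtx);   // check the return value in real code
rem--;
count--;
pthread_mutex_unlock(&mtx);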
Using condition variables
Condition variables have that name because they are designed to support a specific thread interaction pattern: that one thread wants to proceed past a certain point only if a certain condition, which depends on actions performed by other threads, is satisfied. Such requirements arise fairly frequently. It is possible to implement this via a busy loop, in which the thread repeatedly tests the condition (with proper synchronization) until it is true, but this is wasteful. Condition variables allow such a thread to instead suspend operation until a time when it makes sense to check the condition again.
The correct usage pattern for a condition variable should be viewed as a modification and specialization of the busy loop:
1. the thread locks a mutex guarding the data on which the condition is to be computed;
2. the thread tests the condition;
3. if the condition is satisfied then this procedure ends;
4. otherwise, the thread waits on a designated condition variable, specifying the (same) mutex;
5. when the thread resumes after its wait, it loops back to (2).
Example:
pthread_cond_t cv; // must be initialized before use

void *thread_two_function(void *data) {
    int rval;

    // some work ...

    rval = pthread_mutex_lock(&mutex);
    // check for and handle lock failure ...

    while (shared_variable < 5) {
        rval = pthread_cond_wait(&cv, &mutex);
        // check for and handle wait failure ...
    }

    // ... maybe other work ...

    rval = pthread_mutex_unlock(&mutex);
    // check for and handle unlock failure ...

    // more work ...

    return NULL;
}
Note that
the procedure terminates (at (3)) with the thread still holding the mutex locked. The thread has an obligation to unlock it, but sometimes it will want to perform other work under protection of that mutex first.
the mutex is automatically released while the thread is waiting on the CV, and reacquired before the thread returns from the wait. This allows other threads the opportunity to access shared variables protected by the mutex.
it is required that the thread calling pthread_cond_wait() have the specified mutex locked. Otherwise, the call provokes undefined behavior.
this pattern relies on threads to signal or broadcast to the CV at appropriate times to notify any then-waiting other threads that they might want to re-evaluate the condition for which they are waiting. That is not modeled in the examples above.
multiple CVs can use the same mutex.
the same CV can be used in multiple places and with different associated conditions. It makes sense to do this when all the conditions involved are affected by the same or related actions by other threads.
condition variables do not store signals. Only threads that are already blocked waiting for the specified CV are affected by a pthread_cond_signal() or pthread_cond_broadcast() call.
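For completeness, here is a minimal sketch of the signaling side mentioned in the notes above (not modeled in the earlier examples); thread_three_function is a hypothetical name, and the same mutex, cv, and shared_variable are assumed:

void *thread_three_function(void *data) {
    pthread_mutex_lock(&mutex);
    shared_variable = 5;       // change the data the waited-on condition tests
    pthread_cond_signal(&cv);  // wake a waiter so it can re-test the condition
    pthread_mutex_unlock(&mutex);
    return NULL;
}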
Your program
Your program has multiple problems in this area, among them:
Both threads access shared variables rem and count without synchronization, and some of the accesses are writes. The behavior of the whole program is therefore undefined. Among the common manifestations would be that the threads do not observe each other's updates to those variables, though it's also possible that things seem to work as expected. Or anything else.
Both threads call pthread_cond_wait() without holding the mutex locked. The behavior of the whole program is therefore undefined. "Undefined" means "undefined", but it is plausible that the UB would manifest as, for example, one or both threads failing to return from their wait after the CV is signaled.
Neither thread employs the standard pattern for CV usage. There is no clear associated condition for either one, and the threads definitely don't test one. That leaves an implied condition of "this CV has been signaled", but that is unsafe because it cannot be tested before waiting. In particular, it leaves open this possible chain of events:
1. The main thread blocks waiting on cond1.
2. The second thread signals cond1.
3. The main thread runs all the way at least through signaling cond2 before the second thread proceeds to waiting on cond2.
Once (3) occurs, the program cannot avoid deadlock. The main thread breaks from the loop and tries to join the second thread, meanwhile the second thread reaches its pthread_cond_wait() call and blocks awaiting a signal that will never arrive.
That chain of events can happen even if the issues called out in the previous points are corrected, and it could manifest as exactly the observable behavior you report.
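For reference, here is one possible corrected sketch; it is not the only valid structure, and error checking is omitted for brevity. Each CV gets an explicit predicate over the shared counters, and every access to those counters happens under mtx:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int rem = 10;
int count = 5;
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER; // predicate: count == 0
pthread_cond_t cond2 = PTHREAD_COND_INITIALIZER; // predicate: count != 0

void *f(void *arg) {
    (void)arg;
    srand(time(NULL));
    pthread_mutex_lock(&mtx);
    while (rem > 0) {
        printf("%d\n", rand() % 100);
        rem--;
        count--;
        if (count == 0 && rem > 0) {
            pthread_cond_signal(&cond1);
            while (count == 0)                 // wait for main to reset count
                pthread_cond_wait(&cond2, &mtx);
        }
    }
    pthread_cond_signal(&cond1);               // harmless if main is not waiting
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, f, NULL);
    pthread_mutex_lock(&mtx);
    while (count != 0)                         // wait for the first 5 numbers
        pthread_cond_wait(&cond1, &mtx);
    printf("5 numbers are printed\n");
    count = 5;                                 // reset the condition
    pthread_cond_signal(&cond2);               // let the worker continue
    pthread_mutex_unlock(&mtx);
    pthread_join(tid, NULL);
    return 0;
}

With the predicates in place, the lost-signal and deadlock scenarios above cannot occur: a signal that arrives before the other thread waits is captured by the tested condition.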
Related
Could anyone please suggest how to optimize the application code below using pthread_mutex_lock?
Let me describe the situation:
I have 2 threads sharing a global shared-memory variable. The variable shmPtr->status is protected with a mutex lock in both functions. Although there is a sleep(1/2) inside the "for loop" in the task1 function, I can't access shmPtr->status in task2 when required, and have to wait until the "for loop" is finished in the task1 function. It takes around 50 seconds for shmPtr->status to become available to the task2 function.
I am wondering why the task1 function is not releasing the mutex lock despite the sleep(1/2) line. I don't want to wait to process shmPtr->status in the task2 function. Please advise.
thr_id1 = pthread_create ( &p_thread1, NULL, (void *)execution_task1, NULL );
thr_id2 = pthread_create ( &p_thread2, NULL, (void *)execution_task2, NULL );

void execution_task1()
{
    for (int i = 0; i < 100; i++)
    {
        //100 lines of application code running here

        pthread_mutex_lock(&lock);
        shmPtr->status = 1; //shared memory variable
        pthread_mutex_unlock(&lock);
        sleep(1/2);
    }
}

void execution_task2()
{
    //100 lines of application code running here

    pthread_mutex_lock(&lock);
    shmPtr->status = 0; //shared memory variable
    pthread_mutex_unlock(&lock);
    sleep(1/2);
}
Regards,
NK
I am wondering why the task1 function is not releasing the mutex lock even with a sleep(1/2).
There is no reason to think that the thread running execution_task1() in your example fails to release the mutex, though you would know for sure if you appropriately tested the return value of its pthread_mutex_unlock() call. Rather, the potential problem is that it reacquires the mutex before any other thread contending for it has an opportunity to acquire it.
It seems plausible that a call to sleep(1/2) is ineffective at preventing that. 1/2 is an integer division, evaluating to 0, so you are performing a sleep(0). That probably does not cause the calling thread to suspend at all, and it may not even cause the thread to yield the CPU.
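If a half-second pause was actually intended, a sub-second sleep function would be needed; for example, a sketch using POSIX nanosleep():

#include <time.h>

struct timespec half_second = { .tv_sec = 0, .tv_nsec = 500000000L };
nanosleep(&half_second, NULL);   // sleeps ~0.5 s; sleep(1/2) is sleep(0)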
More generally, sleep() is never a good solution for any thread-synchronization problem.
If you are running on a multi-core system, however, and maybe even if you aren't, then freezing out other threads by such a mechanism seems unlikely if the function really does execute a hundred lines of code between releasing the mutex and trying to lock it again. If that's what you think you see, then look harder.
If you really do need to force a thread to allow others a chance to acquire the mutex, then you could perhaps set up a fair queueing system as described in Implementing a FIFO mutex in pthreads. For a case such as you describe, however, with one long-running thread needing occasionally to yield to other, quicker tasks, you could consider introducing a condition variable on which that long-running thread can suspend, and an atomic counter by which it can determine whether it should do so:
#include <stdatomic.h>

// ...

pthread_cond_t yield_cv = PTHREAD_COND_INITIALIZER;
_Atomic unsigned int waiting_thread_count = ATOMIC_VAR_INIT(0);

void execution_task1() {
    for (int i = 0; i < 100; i++) {
        // ...
        pthread_mutex_lock(&lock);
        if (waiting_thread_count > 0) {
            pthread_cond_wait(&yield_cv, &lock);
            // spurious wakeup ok
        }
        // ... critical section ...
        pthread_mutex_unlock(&lock);
    }
}

void execution_task2() {
    // ...
    waiting_thread_count += 1; // Atomic get & increment; safe w/o mutex
    pthread_mutex_lock(&lock);
    waiting_thread_count -= 1;
    pthread_cond_signal(&yield_cv); // no problem if no-one is presently waiting
    // ... critical section ...
    pthread_mutex_unlock(&lock);
}
Using an atomic counter relieves the program from having to protect that counter with its own mutex, which could just shift the problem instead of solving it. That allows threads to use the counter to signal upcoming attempts to acquire the mutex. This intent is then visible to the other thread, so that it can suspend on the CV to allow the other to acquire the mutex.
The short-running threads then acknowledge acquiring the mutex by decrementing the counter. They must do so before releasing the mutex, else the long-running thread might cycle around, acquire the mutex, and read the counter before it is decremented, thus erroneously blocking on the CV when no additional signal can be expected.
Although CVs can be subject to spurious wakeup, that does not present a serious problem for this approach. If the long-running thread wakes spuriously from its wait on the CV, then the worst that happens is that it performs one more iteration of its main loop, then waits again.
I read somewhere that we should lock the mutex before calling pthread_cond_signal and unlock the mutex after calling it:
The pthread_cond_signal() routine is used to signal (or wake up) another thread which is waiting on the condition variable. It should be called after mutex is locked, and must unlock mutex in order for pthread_cond_wait() routine to complete.
My question is: isn't it OK to call pthread_cond_signal or pthread_cond_broadcast methods without locking the mutex?
If you do not lock the mutex in the codepath that changes the condition and signals, you can lose wakeups. Consider this pair of processes:
Process A:
pthread_mutex_lock(&mutex);
while (condition == FALSE)
    pthread_cond_wait(&cond, &mutex);
pthread_mutex_unlock(&mutex);
Process B (incorrect):
condition = TRUE;
pthread_cond_signal(&cond);
Then consider this possible interleaving of instructions, where condition starts out as FALSE:
Process A                              Process B

pthread_mutex_lock(&mutex);
while (condition == FALSE)
                                       condition = TRUE;
                                       pthread_cond_signal(&cond);
pthread_cond_wait(&cond, &mutex);
The condition is now TRUE, but Process A is stuck waiting on the condition variable - it missed the wakeup signal. If we alter Process B to lock the mutex:
Process B (correct):
pthread_mutex_lock(&mutex);
condition = TRUE;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
...then the above cannot occur; the wakeup will never be missed.
(Note that you can actually move the pthread_cond_signal() itself after the pthread_mutex_unlock(), but this can result in less optimal scheduling of threads, and you've necessarily locked the mutex already in this code path due to changing the condition itself).
According to this manual:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
The meaning of the predictable scheduling behavior statement was explained by Dave Butenhof (author of Programming with POSIX Threads) on comp.programming.threads and is available here.
caf, in your sample code, Process B modifies condition without locking the mutex first. If Process B simply locked the mutex during that modification, and then still unlocked the mutex before calling pthread_cond_signal, there would be no problem --- am I right about that?
I believe intuitively that caf's position is correct: calling pthread_cond_signal without owning the mutex lock is a Bad Idea. But caf's example is not actually evidence in support of this position; it's simply evidence in support of the much weaker (practically self-evident) position that it is a Bad Idea to modify shared state protected by a mutex unless you have locked that mutex first.
Can anyone provide some sample code in which calling pthread_cond_signal followed by pthread_mutex_unlock yields correct behavior, but calling pthread_mutex_unlock followed by pthread_cond_signal yields incorrect behavior?
Before the pthread wait we lock a mutex, so that other code cannot change the condition in the meantime. The wait then unlocks the mutex and waits for the signal.
Say in some other thread I had locked the same mutex, called 'signal' after that, and then unlocked the mutex.
When the signal is delivered, the waiting thread wakes up and acquires the mutex again.
Thread1                    Thread2
{                          {
    lock(mutex);               lock(mutex);
    wait(mutex);               signal(mutex);
    unlock(mutex);             unlock(mutex);
}                          }
Say the three statements in thread 1 are enclosed in a while(1) loop. Then assume that thread 2 locks the mutex, signals, and unlocks the mutex, and then doesn't end but goes to sleep.
So will the value of the condition variable be changed permanently? If the three statements of thread 1 are running in an infinite loop, will it never wait and just find that the signal has been given? When the wait call returns, does it set the value of the condition variable back to its initial value?
If yes, can I use the create, destroy, or initialize methods on the variables to set the value back? If so, how? What exactly do these functions do?
Thanks,
pthread_cond_signal() will always wake at least one thread that is currently waiting on that condition variable in pthread_cond_wait(). If the same thread or a different thread then calls pthread_cond_wait() again, it will block and wait for another signal.
This means that pthread condition variables must always be paired with some kind of shared data, protected by the mutex that is held when calling pthread_cond_wait(). Before calling pthread_cond_wait(), the thread must check the shared data to see if the condition it wants to wait for has occurred - if not, it shouldn't wait.
The simplest example of such shared data might be a global flag. In your example:
int flag = 0;

Thread 1 {
    pthread_mutex_lock(&mutex);
    while (!flag)
        pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);
}

Thread 2 {
    pthread_mutex_lock(&mutex);
    flag = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
}
You can see here that exactly when the condition is "reset" is entirely under your control; for example, you could have Thread 1 set flag = 0; before it calls pthread_mutex_unlock().
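For instance, a sketch of Thread 1 consuming the event and re-arming the flag before unlocking:

pthread_mutex_lock(&mutex);
while (!flag)
    pthread_cond_wait(&cond, &mutex);
flag = 0;   // reset, so the next wait blocks until flag is set again
pthread_mutex_unlock(&mutex);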
The shared state is often more complex than a simple flag; for example, you might have a producer thread call pthread_cond_wait() while there is no room in a shared buffer.
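As a sketch of that producer case, with hypothetical names (BUF_SIZE, used, not_full) standing in for a real buffer implementation:

#define BUF_SIZE 16                    // hypothetical capacity

int used = 0;                          // occupied slots, guarded by mutex
pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;

void producer_put(int item) {
    pthread_mutex_lock(&mutex);
    while (used == BUF_SIZE)           // no room: wait until a consumer
        pthread_cond_wait(&not_full, &mutex);
    /* ... store item in the buffer ... */
    used++;
    pthread_mutex_unlock(&mutex);
}

A consumer would decrement used and signal not_full (under the same mutex) after removing an item.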
I have the following piece of code in thread A, which blocks using pthread_cond_wait()
pthread_mutex_lock(&my_lock);
if ( false == testCondition )
    pthread_cond_wait(&my_wait, &my_lock);
pthread_mutex_unlock(&my_lock);
I have the following piece of code in thread B, which signals thread A
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_cond_signal(&my_wait);
pthread_mutex_unlock(&my_lock);
Provided there are no other threads, would it make any difference if pthread_cond_signal(&my_wait) is moved out of the critical section block as shown below ?
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_mutex_unlock(&my_lock);
pthread_cond_signal(&my_wait);
My recommendation is typically to keep the pthread_cond_signal() call inside the locked region, but probably not for the reasons you think.
In most cases, it doesn't really matter whether you call pthread_cond_signal() with the lock held or not. Ben is right that some schedulers may force a context switch when the lock is released if there is another thread waiting, so your thread may get switched away before it can call pthread_cond_signal(). On the other hand, some schedulers will run the waiting thread as soon as you call pthread_cond_signal(), so if you call it with the lock held, the waiting thread will wake up and then go right back to sleep (because it's now blocked on the mutex) until the signaling thread unlocks it. The exact behavior is highly implementation-specific and may change between operating system versions, so it isn't anything you can rely on.
But, all of this looks past what should be your primary concern, which is the readability and correctness of your code. You're not likely to see any real-world performance benefit from this kind of micro-optimization (remember the first rule of optimization: profile first, optimize second). However, it's easier to think about the control flow if you know that the set of waiting threads can't change between the point where you set the condition and send the signal. Otherwise, you have to think about things like "what if thread A sets testCondition=TRUE and releases the lock, and then thread B runs and sees that testCondition is true, so it skips the pthread_cond_wait() and goes on to reset testCondition to FALSE, and then finally thread A runs and calls pthread_cond_signal(), which wakes up thread C because thread B wasn't actually waiting, but testCondition isn't true anymore". This is confusing and can lead to hard-to-diagnose race conditions in your code. For that reason, I think it's better to signal with the lock held; that way, you know that setting the condition and sending the signal are atomic with respect to each other.
On a related note, the way you are calling pthread_cond_wait() is incorrect. It's possible (although rare) for pthread_cond_wait() to return without the condition variable actually being signaled, and there are other cases (for example, the race I described above) where a signal could end up awakening a thread even though the condition isn't true. In order to be safe, you need to put the pthread_cond_wait() call inside a while() loop that tests the condition, so that you call back into pthread_cond_wait() if the condition isn't satisfied after you reacquire the lock. In your example it would look like this:
pthread_mutex_lock(&my_lock);
while ( false == testCondition ) {
    pthread_cond_wait(&my_wait, &my_lock);
}
pthread_mutex_unlock(&my_lock);
(I also corrected what was probably a typo in your original example, which is the use of my_mutex for the pthread_cond_wait() call instead of my_lock.)
The thread waiting on the condition variable should keep the mutex locked, and the other thread should always signal with the mutex locked. This way, you know the other thread is waiting on the condition when you send the signal. Otherwise, it's possible the waiting thread won't see the condition being signaled and will block indefinitely waiting on it.
Condition variables are typically used like this:
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int go = 0;

void *threadproc(void *data) {
    printf("Sending go signal\n");
    pthread_mutex_lock(&lock);
    go = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(int argc, char *argv[]) {
    pthread_t thread;
    pthread_mutex_lock(&lock);
    printf("Waiting for signal to go\n");
    pthread_create(&thread, NULL, &threadproc, NULL);
    while (!go) {
        pthread_cond_wait(&cond, &lock);
    }
    printf("We're allowed to go now!\n");
    pthread_mutex_unlock(&lock);
    pthread_join(thread, NULL);
    return 0;
}
This is valid:
void *threadproc(void *data) {
    printf("Sending go signal\n");
    go = 1;
    pthread_cond_signal(&cond);
    return NULL;
}
However, consider what's happening in main:
while (!go) {
    /* Suppose a long delay happens here, during which the signal is sent */
    pthread_cond_wait(&cond, &lock);
}
If the delay described by that comment happens, pthread_cond_wait will be left waiting—possibly forever. This is why you want to signal with the mutex locked.
Both are correct; however, for reactivity reasons, most schedulers hand control to another thread when a lock is released. If you don't signal before unlocking, your waiting thread A is not on the ready list and thus will not be scheduled until B is scheduled again and calls pthread_cond_signal().
The Open Group Base Specifications Issue 7 IEEE Std 1003.1, 2013 Edition (which as far as I can tell is the official pthread specification) says this on the matter:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
To add my personal experience, I was working on an application that had code where the condition variable was destroyed (and the memory containing it freed) by the thread that was woken up. We found that on a multi-core device (an iPad Air 2), pthread_cond_signal() could actually crash sometimes if it was called outside the mutex lock, as the waiter woke up and destroyed the condition variable before the pthread_cond_signal() call had completed. This was quite unexpected.
So I would definitely veer towards the 'signal inside the lock' version; it appears to be safer.
Here is a nice write-up about condition variables: Techniques for Improving the Scalability of Applications Using POSIX Thread Condition Variables (look under the 'Avoiding the Mutex Contention' section, point 7).
It says that the second version may have some performance benefits, because it makes it possible for the thread calling pthread_cond_wait() to wait less frequently.
I've been working with pthreads a fair bit recently and there's one little thing I still don't quite get. I know that condition variables are designed to wait for a specific condition to come true (or be 'signalled'). My question is, how does this differ at all from normal mutexes?
From what I understand, aren't condition variables just a mutex with additional logic to unlock another mutex (and lock it again) when the condition becomes true?
Pseudocode example:
mutex mymutex;
condvar mycond;
int somevalue = 0;

onethread()
{
    lock(mymutex);
    while (somevalue == 0)
        cond_wait(mycond, mymutex);
    if (somevalue == 0xdeadbeef)
        some_func();
    unlock(mymutex);
}

otherthread()
{
    lock(mymutex);
    somevalue = 0xdeadbeef;
    cond_signal(mycond);
    unlock(mymutex);
}
So cond_wait in this example unlocks mymutex, and then waits for mycond to be signalled.
If this is so, aren't condition variables just mutexes with extra magic? Or do I have a misunderstanding of the fundamental basics of mutexes and condition variables?
The two structures are quite different. A mutex is meant to provide serialised access to a resource of some kind. A condition variable is meant to allow one thread to notify some other thread that some event has occurred.
They aren't exactly mutexes with extra magic, although in some abstractions (the monitor, as used in Java and C#) the condition variable and the mutex are combined into a single unit. The purpose of condition variables is to avoid busy waiting/polling and to hint to the runtime which thread(s) should be scheduled "next". Consider how you would write this example without condition variables.
while (1) {
    lock(mymutex);
    if (somevalue != 0)
        break;                  // note: mymutex is still held here
    unlock(mymutex);
}
if (somevalue == 0xdeadbeef)
    myfunc();
unlock(mymutex);                // release the lock taken before the break
You'll be sitting in a tight loop in this thread, burning up a lot of CPU and creating a lot of lock contention. If locking/unlocking a mutex is cheap enough, you may be in a situation where otherthread never even has a chance to obtain the lock (although real-world mutexes generally distinguish between contending for a lock and owning it, and have notions of fairness, so this is unlikely to happen in reality).
You could reduce the busy waiting by sticking a sleep in,
while (1) {
    lock(mymutex);
    if (somevalue != 0)
        break;
    unlock(mymutex);
    sleep(1); // let some other thread do work
}
but how long is a good time to sleep for? You'll basically just be guessing. The runtime also can't tell why you are sleeping, or what you are waiting on. A condition variable lets the runtime be at least somewhat aware of which threads are interested in the same event.
The simple answer is that you might want to wake more than one thread with a condition variable (pthread_cond_broadcast() wakes all current waiters), whereas a mutex allows only one thread at a time to execute the guarded block.
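To illustrate, here is a sketch (with hypothetical names) in which a single pthread_cond_broadcast() releases every waiting thread at once, something a plain mutex cannot express:

pthread_mutex_t gate_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t gate_cond = PTHREAD_COND_INITIALIZER;
int gate_open = 0;

void *waiter(void *arg) {               // many threads may block here
    pthread_mutex_lock(&gate_mutex);
    while (!gate_open)
        pthread_cond_wait(&gate_cond, &gate_mutex);
    pthread_mutex_unlock(&gate_mutex);
    /* ... all waiters proceed once the gate opens ... */
    return NULL;
}

void open_gate(void) {
    pthread_mutex_lock(&gate_mutex);
    gate_open = 1;
    pthread_cond_broadcast(&gate_cond); // wakes *all* waiting threads
    pthread_mutex_unlock(&gate_mutex);
}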