I'm writing a library that needs a fast IPC lock around a small critical section. I want to ensure that if unrelated code crashes during the critical section, the IPC lock is still released.
In this example, main creates a shared memory object, memory-maps it, and initializes the memory as a pthread_spinlock_t with PTHREAD_PROCESS_SHARED, allowing multiple processes to use the same lock.
void proc1_main() {
auto shm_obj = ...
pthread_spin_lock((pthread_spinlock_t*)shm_obj.ptr);
this_thread::sleep_for(chrono::seconds(2));
pthread_spin_unlock((pthread_spinlock_t*)shm_obj.ptr);
}
void proc2_main() {
auto shm_obj = ...
this_thread::sleep_for(chrono::seconds(1));
cout << "waiting for spin lock...\n";
pthread_spin_lock((pthread_spinlock_t*)shm_obj.ptr);
cout << "got spin lock\n";
pthread_spin_unlock((pthread_spinlock_t*)shm_obj.ptr);
}
int main() {
auto shm_obj = ...
pthread_spin_init((pthread_spinlock_t*)shm_obj.ptr, PTHREAD_PROCESS_SHARED);
shm_obj.close();
if (::fork()) {
proc1_main();
} else {
proc2_main();
}
}
The sleeps ensure that process ordering is (more-or-less) guaranteed.
proc1 grabs the spin lock.
proc2 prints "waiting for spin lock...".
proc2 starts waiting on the spin lock.
proc1 releases the spin lock.
proc2 acquires the spin lock.
proc2 prints "got spin lock".
proc2 releases the spin lock.
This is all good and expected behavior for a lock.
I'm concerned about what happens if a process crashes and doesn't release the lock.
void proc1_main() {
auto shm_obj = ...
pthread_spin_lock((pthread_spinlock_t*)shm_obj.ptr);
this_thread::sleep_for(chrono::seconds(2));
// pthread_spin_unlock((pthread_spinlock_t*)shm_obj.ptr);
}
With this modification, proc1 completes without releasing the spin lock. proc2 hangs forever.
If I were using a pthread_mutex_t, I could add the PTHREAD_MUTEX_ROBUST attribute and continue on. I don't need to worry about consistency.
Is there any robustness for spin locks?
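For reference, a minimal sketch of the robust-mutex alternative mentioned above, assuming the same shared-memory mapping (shm_ptr is a hypothetical pointer into that mapping; most error handling omitted):

#include <pthread.h>
#include <errno.h>

// Initialize a process-shared, robust mutex inside the shared mapping.
void init_robust_lock(void *shm_ptr) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init((pthread_mutex_t *)shm_ptr, &attr);
    pthread_mutexattr_destroy(&attr);
}

void lock_robust(pthread_mutex_t *m) {
    int rc = pthread_mutex_lock(m);
    if (rc == EOWNERDEAD) {
        // The previous owner died while holding the lock; we now own it.
        // Restore invariants if needed, then mark the mutex usable again.
        pthread_mutex_consistent(m);
    }
}

As far as POSIX specifies, spin locks have no equivalent of this EOWNERDEAD recovery path.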
I have some rookie questions on how to port an existing application to FreeRTOS on an ESP32. I have a first setup running but am hitting some roadblocks.
We have built a test device that exercises a unit under test by stepping through several steps. The proof-of-concept is running freeRTOS on an ESP32 with an LCD display, user buttons, and all the needed I/O circuits. In general, the procedure for each measurement step is:
Set up HW (open/close relays)
Measure values (voltages, currents)
Wait for readings to stabilize
Validate if readings are correct for given stage
Determine pass/fail, eventually upload results to server
We have 12 different steps as outlined above plus some extra ones to select the test protocol and configure the WiFi interface.
We reasoned (wrongly?) that we want to create individual tasks on the fly when needed, rather than create them all at the start and keep them suspended. This is a safety measure: we want to avoid (inadvertently) having two different tasks running simultaneously when they shouldn't, since that might create the danger of messing up the relays that control AC power.
So, we want to create two tasks for each test step:
Task1: with an infinite loop that reads the corresponding values
CheckTask1: waits x seconds, then deletes Task1, determines pass/fail, and then deletes itself
The problem I have is that the for(;;) in the function that creates Task1 never ends, so we can't fire off CheckTask1.
The abbreviated code is below:
/************************************************************************
*
* MAIN LOOP
*
************************************************************************/
void loop()
{
// We keep the main loop always running and firing off the different tasks.
if (some_condition_to_create_task_1){
startTask1(); // Creates a task that shows an analog reading on the display
startCheckTask1(); // Creates a task that waits x secs, then deletes Task1 and deletes itself
// --> PROBLEM: startCheckTask1() is never executed, the for(;;) in startTask1() never exits
}
// Similar code to above for additional tasks to create when their turn comes
}
/************************************************************************
*
* TASKS
*
************************************************************************/
void Task1(void *pvParameters) // Reads analog value and shows on display
{
(void) pvParameters;
for(;;){
readAnalogValue();
showOnDisplay();
vTaskDelay(500); // Update reading every 0.5 sec to avoid flicker
}
}
void CheckTask1(void *pvParameters) // Waits 5 seconds, then deletes Task1, interprets results and shows passed/failed on screen
{
(void) pvParameters;
unsigned long oldTicks, NewTicks;
vTaskDelay(5000); // Allow 5 secs for readings to stabilize
showResultOnDisplay(); // Show passed/failed on the display
// Now delete both tasks
vTaskDelete(hTask1);
vTaskDelete(hCheckTask1);
}
/************************************************************************
*
* FUNCTIONS
*
************************************************************************/
void startTask1(){
xTaskCreatePinnedToCore(Task1,"Task 1", 1024, NULL, 1, &hTask1, 1);
for(;;){ // PROBLEM: This is the loop that prevents creating CheckTask1
}
}
void startCheckTask1(){
xTaskCreatePinnedToCore(CheckTask1,"Check Task 1", 1024, NULL, 1, &hCheckTask1, 1);
for(;;){
}
}
Any ideas how to solve this? Thx!
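One reading of the PROBLEM comments above, as a minimal sketch keeping the original names and parameters: xTaskCreatePinnedToCore() returns as soon as the task is created, and the scheduler then runs the task concurrently, so the busy loops in the start functions can simply be dropped.

void startTask1() {
    // Returns immediately; Task1 now runs under the scheduler.
    xTaskCreatePinnedToCore(Task1, "Task 1", 1024, NULL, 1, &hTask1, 1);
}

void startCheckTask1() {
    // Likewise returns immediately; CheckTask1 deletes Task1 and itself later.
    xTaskCreatePinnedToCore(CheckTask1, "Check Task 1", 1024, NULL, 1, &hCheckTask1, 1);
}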
I am writing a program which creates a thread that prints 10 numbers. After it has printed 5 of them, it notifies the main thread, waits, and then continues with the next 5 numbers.
This is test.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <pthread.h>
#include <unistd.h>
int rem = 10;
int count = 5;
pthread_mutex_t mtx;
pthread_cond_t cond1;
pthread_cond_t cond2;
void *f(void *arg)
{
int a;
srand(time(NULL));
while (rem > 0) {
a = rand() % 100;
printf("%d\n",a);
rem--;
count--;
if (count==0) {
printf("time to wake main thread up\n");
pthread_cond_signal(&cond1);
printf("second thread waits\n");
pthread_cond_wait(&cond2, &mtx);
printf("second thread woke up\n");
}
}
pthread_exit(NULL);
}
int main()
{
pthread_mutex_init(&mtx, 0);
pthread_cond_init(&cond1, 0);
pthread_cond_init(&cond2, 0);
pthread_t tids;
pthread_create(&tids, NULL, f, NULL);
while(1) {
if (count != 0) {
printf("main: waiting\n");
pthread_cond_wait(&cond1, &mtx);
printf("5 numbers are printed\n");
printf("main: waking up\n");
pthread_cond_signal(&cond2);
break;
}
pthread_cond_signal(&cond2);
if (rem == 0) break;
}
pthread_join(tids, NULL);
}
The output of the program is:
main: waiting
//5 random numbers
time to wake main thread up
second thread waits
5 numbers are printed
main: waking up
Since I do pthread_cond_signal(&cond2); I thought that the thread would wake up and print the rest of the numbers, but this is not the case. Any ideas why? Thanks in advance.
Summary
The issues have been summarized in comments, or at least most of them. So as to put an actual answer on record, however:
Pretty much nothing about the program's use of shared variables and synchronization objects is correct. Its behavior is undefined, and the specific manifestation observed is just one of the more likely in a universe of possible behaviors.
Accessing shared variables
If two different threads access (read or write) the same non-atomic object during their runs, and at least one of the accesses is a write, then all accesses must be properly protected by synchronization actions.
There is a variety of these, too large to cover comprehensively in a StackOverflow answer, but among the most common is to use a mutex to guard access. In this approach, a mutex is created in the program and designated for protecting access to one or more shared variables. Each thread that wants to access one of those variables locks the mutex before doing so. At some later point, the thread unlocks the mutex, lest other threads be permanently blocked from locking the mutex themselves.
Example:
pthread_mutex_t mutex; // must be initialized before use
int shared_variable;
// ...
void *thread_one_function(void *data) {
int rval;
// some work ...
rval = pthread_mutex_lock(&mutex);
// check for and handle lock failure ...
shared_variable++;
// ... maybe other work ...
rval = pthread_mutex_unlock(&mutex);
// check for and handle unlock failure ...
// more work ...
}
In your program, the rem and count variables are both shared between threads, and access to them needs to be synchronized. You already have a mutex, and using it to protect accesses to these variables looks like it would be appropriate.
Using condition variables
Condition variables have that name because they are designed to support a specific thread interaction pattern: that one thread wants to proceed past a certain point only if a certain condition, which depends on actions performed by other threads, is satisfied. Such requirements arise fairly frequently. It is possible to implement this via a busy loop, in which the thread repeatedly tests the condition (with proper synchronization) until it is true, but this is wasteful. Condition variables allow such a thread to instead suspend operation until a time when it makes sense to check the condition again.
The correct usage pattern for a condition variable should be viewed as a modification and specialization of the busy loop:
1. The thread locks a mutex guarding the data on which the condition is to be computed.
2. The thread tests the condition.
3. If the condition is satisfied, then this procedure ends.
4. Otherwise, the thread waits on a designated condition variable, specifying the (same) mutex.
5. When the thread resumes after its wait, it loops back to (2).
Example:
pthread_cond_t cv; // must be initialized before use
void *thread_two_function(void *data) {
int rval;
// some work ...
rval = pthread_mutex_lock(&mutex);
// check for and handle lock failure ...
while (shared_variable < 5) {
rval = pthread_cond_wait(&cv, &mutex);
// check for and handle wait failure ...
}
// ... maybe other work ...
rval = pthread_mutex_unlock(&mutex);
// check for and handle unlock failure ...
// more work ...
}
Note that
the procedure terminates (at (3)) with the thread still holding the mutex locked. The thread has an obligation to unlock it, but sometimes it will want to perform other work under protection of that mutex first.
the mutex is automatically released while the thread is waiting on the CV, and reacquired before the thread returns from the wait. This allows other threads the opportunity to access shared variables protected by the mutex.
it is required that the thread calling pthread_cond_wait() have the specified mutex locked. Otherwise, the call provokes undefined behavior.
this pattern relies on threads to signal or broadcast to the CV at appropriate times to notify any then-waiting other threads that they might want to re-evaluate the condition for which they are waiting. That is not modeled in the examples above.
multiple CVs can use the same mutex.
the same CV can be used in multiple places and with different associated conditions. It makes sense to do this when all the conditions involved are affected by the same or related actions by other threads.
condition variables do not store signals. Only threads that are already blocked waiting for the specified CV are affected by a pthread_cond_signal() or pthread_cond_broadcast() call.
Your program
Your program has multiple problems in this area, among them:
Both threads access shared variables rem and count without synchronization, and some of the accesses are writes. The behavior of the whole program is therefore undefined. Among the common manifestations would be that the threads do not observe each other's updates to those variables, though it's also possible that things seem to work as expected. Or anything else.
Both threads call pthread_cond_wait() without holding the mutex locked. The behavior of the whole program is therefore undefined. "Undefined" means "undefined", but it is plausible that the UB would manifest as, for example, one or both threads failing to return from their wait after the CV is signaled.
Neither thread employs the standard pattern for CV usage. There is no clear associated condition for either one, and the threads definitely don't test one. That leaves an implied condition of "this CV has been signaled", but that is unsafe because it cannot be tested before waiting. In particular, it leaves open this possible chain of events:
1. The main thread blocks waiting on cond1.
2. The second thread signals cond1.
3. The main thread runs all the way at least through signaling cond2 before the second thread proceeds to waiting on cond2.
Once (3) occurs, the program cannot avoid deadlock. The main thread breaks from the loop and tries to join the second thread; meanwhile, the second thread reaches its pthread_cond_wait() call and blocks awaiting a signal that will never arrive.
That chain of events can happen even if the issues called out in the previous points are corrected, and it could manifest exactly the observable behavior you report.
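For illustration only, here is one possible corrected sketch along the lines above. It is not the only valid structure; the main_acknowledged flag is introduced for this example so that the second thread's wait has an explicit, testable condition.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <pthread.h>

int rem = 10;
int count = 5;
int main_acknowledged = 0;                        // condition for the second wait
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;  // "5 numbers printed"
pthread_cond_t cond2 = PTHREAD_COND_INITIALIZER;  // "main has acknowledged"

void *f(void *arg)
{
    (void)arg;
    srand(time(NULL));
    pthread_mutex_lock(&mtx);            // all shared accesses under mtx
    while (rem > 0) {
        printf("%d\n", rand() % 100);
        rem--;
        count--;
        if (count == 0) {
            pthread_cond_signal(&cond1);
            while (!main_acknowledged)   // test a real condition, in a loop
                pthread_cond_wait(&cond2, &mtx);
        }
    }
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, f, NULL);

    pthread_mutex_lock(&mtx);
    while (count != 0)                   // safe even if the signal came early
        pthread_cond_wait(&cond1, &mtx);
    printf("5 numbers are printed\n");
    main_acknowledged = 1;
    pthread_cond_signal(&cond2);
    pthread_mutex_unlock(&mtx);

    pthread_join(tid, NULL);
    return 0;
}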
Could anyone please suggest how to optimize the application code below using pthread_mutex_lock?
Let me describe the situation:
I have 2 threads sharing a global shared-memory variable. The variable shmPtr->status is protected with a mutex lock in both functions. Although there is a sleep(1/2) inside the for loop in the task1 function, I can't access shmPtr->status in task2 when required and have to wait until the for loop is finished in the task1 function. It takes around 50 seconds for shmPtr->status to become available to the task2 function.
I am wondering why the task1 function is not releasing the mutex lock despite the sleep(1/2) line. I don't want to wait to process shmPtr->status in the task2 function. Please advise.
thr_id1 = pthread_create ( &p_thread1, NULL, (void *)execution_task1, NULL );
thr_id2 = pthread_create ( &p_thread2, NULL, (void *)execution_task2, NULL );
void execution_task1()
{
for(int i = 0;i < 100;i++)
{
//100 lines of application code running here
pthread_mutex_lock(&lock);
shmPtr->status = 1; //shared memory variable
pthread_mutex_unlock(&lock);
sleep(1/2);
}
}
void execution_task2()
{
//100 lines of application code running here
pthread_mutex_lock(&lock);
shmPtr->status = 0; //shared memory variable
pthread_mutex_unlock(&lock);
sleep(1/2);
}
Regards,
NK
I am wondering why the task1 function is not releasing the mutex lock even with a sleep(1/2).
There is no reason to think that the thread running execution_task1() in your example fails to release the mutex, though you would know for sure if you appropriately tested the return value of its pthread_mutex_unlock() call. Rather, the potential problem is that it reacquires the mutex before any other thread contending for it has an opportunity to acquire it.
It seems plausible that a call to sleep(1/2) is ineffective at preventing that. 1/2 is an integer division, evaluating to 0, so you are performing a sleep(0). That probably does not cause the calling thread to suspend at all, and it may not even cause the thread to yield the CPU.
More generally, sleep() is never a good solution for any thread-synchronization problem.
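As an aside, if a half-second pause really were intended, sleep() cannot express it at all; a minimal sketch using POSIX nanosleep() instead (sleep_half_second is a hypothetical helper name):

#include <time.h>

// sleep(1/2) is sleep(0); a real half-second pause needs sub-second
// resolution, which nanosleep() provides.
static void sleep_half_second(void)
{
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 500000000L };
    nanosleep(&ts, NULL);
}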
If you are running on a multi-core system, however, and maybe even if you aren't, then freezing out other threads by such a mechanism seems unlikely if the function really does execute a hundred lines of code between releasing the mutex and trying to lock it again. If that's what you think you see then look harder.
If you really do need to force a thread to allow others a chance to acquire the mutex, then you could perhaps set up a fair queueing system as described in Implementing a FIFO mutex in pthreads. For a case such as you describe, however, with one long-running thread needing occasionally to yield to other, quicker tasks, you could consider introducing a condition variable on which that long-running thread can suspend, and an atomic counter by which it can determine whether it should do so:
#include <stdatomic.h>
// ...
pthread_cond_t yield_cv = PTHREAD_COND_INITIALIZER;
_Atomic unsigned int waiting_thread_count = ATOMIC_VAR_INIT(0);
void execution_task1() {
for (int i = 0; i < 100; i++) {
// ...
pthread_mutex_lock(&lock);
if (waiting_thread_count > 0) {
pthread_cond_wait(&yield_cv, &lock);
// spurious wakeup ok
}
// ... critical section ...
pthread_mutex_unlock(&lock);
}
}
void execution_task2() {
// ...
waiting_thread_count += 1; // atomic increment; safe w/o mutex
pthread_mutex_lock(&lock);
waiting_thread_count -= 1;
pthread_cond_signal(&yield_cv); // no problem if no-one is presently waiting
// ... critical section ...
pthread_mutex_unlock(&lock);
}
Using an atomic counter relieves the program from having to protect that counter with its own mutex, which could just shift the problem instead of solving it. That allows threads to use the counter to signal upcoming attempts to acquire the mutex. This intent is then visible to the other thread, so that it can suspend on the CV to allow the other to acquire the mutex.
The short-running threads then acknowledge acquiring the mutex by decrementing the counter. They must do so before releasing the mutex, else the long-running thread might cycle around, acquire the mutex, and read the counter before it is decremented, thus erroneously blocking on the CV when no additional signal can be expected.
Although CVs can be subject to spurious wakeup, that does not present a serious problem for this approach. If the long-running thread wakes spuriously from its wait on the CV, then the worst that happens is that it performs one more iteration of its main loop, then waits again.
For the following code:
f1()
{
pthread_mutex_lock(&mutex); //LINE1 (thread3 and thread4)
pthread_cond_wait(&cond, &mutex); //LINE2 (thread1 and thread2)
pthread_mutex_unlock(&mutex);
}
f2()
{
pthread_mutex_lock(&mutex);
pthread_cond_signal(&cond); //LINE3 (thread5)
pthread_mutex_unlock(&mutex);
}
Assume thread1 and thread2 are waiting at LINE2, thread3 and thread4 are blocked at LINE1. When thread5 executes LINE3, which threads will run first? thread1 or thread2? thread3 or thread4?
When thread5 signals the condition, either thread1 or thread2, or both, will be released from waiting, and will wait until the mutex can be locked... which won't be until after thread5 unlocks it.
When thread5 then unlocks the mutex, one of the threads waiting to lock the mutex will be able to do so. My reading of POSIX reveals only that the order in which threads waiting to lock will proceed is undefined, though higher-priority threads may be expected to run first. How things are scheduled is largely system dependent.
If you need threads to run in a particular order, then you need to arrange that for yourself.
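For example, one common arrangement is an explicit turn counter protected by the same mutex, with every thread waiting until it is its turn. A minimal sketch (the turn numbering and the run_in_turn() helper are hypothetical):

#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int turn = 0;                   // which thread may proceed next

void run_in_turn(int my_turn, void (*work)(void))
{
    pthread_mutex_lock(&mutex);
    while (turn != my_turn)            // explicit, testable condition
        pthread_cond_wait(&cond, &mutex);
    work();                            // this thread's critical section
    turn++;                            // hand the turn to the next thread
    pthread_cond_broadcast(&cond);     // wake all waiters to re-test
    pthread_mutex_unlock(&mutex);
}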
From Pro Asynchronous Programming with .NET:
for (int nTry = 0; nTry < 3; nTry++)
{
try
{
AttemptOperation();
break;
}
catch (OperationFailedException) { }
Thread.Sleep(2000);
}
While sleeping, the thread doesn’t consume any CPU-based resources,
but the fact that the thread is alive means that it is still consuming
memory resources. On a desktop application this is probably no big
deal, but on a server application, having lots of threads sleeping is
not ideal because if more work arrives on the server, it may have to
spin up more threads, increasing memory pressure and adding additional
resources for the OS to manage.
Ideally, instead of putting the thread to sleep, you would like to
simply give it up, allowing the thread to be free to serve other
requests. When you are ready to continue using CPU resources again,
you can obtain a thread ( not necessarily the same one ) and continue
processing. You can solve this problem by not putting the thread to
sleep, but rather using await on a Task that is deemed to be completed
in a given period.
for (int nTry = 0; nTry < 3; nTry++)
{
try
{
AttemptOperation();
break;
}
catch (OperationFailedException) { }
await Task.Delay(2000);
}
I don't follow the author's reasoning. While it's true that calling await Task.Delay will release this thread (which is processing a request), it's also true that the task created by Task.Delay will occupy some other thread to run on. So does this code really enable the server to process more simultaneous requests, or is the text wrong?!
Task.Delay does not occupy some other thread. It gives you a task without blocking. It starts a timer that completes that task in its callback. The timer does not use any thread while waiting.
It is a common myth that async actions like delays or IO just push work to a different thread. They do not. They use OS facilities to truly use zero threads while the operation is in progress. (They obviously need to use some thread to initiate and complete the operation.)
If async were just pushing work to a different thread, it would be mostly useless. Its value would be just to keep the UI responsive in client apps. On the server it would only cause harm. It is not so.
The value of async IO is to reduce memory usage (fewer thread stacks), context switching, and thread-pool utilization.
The async version of the code you posted would scale to literally tens of thousands of concurrent requests (if you increase the ASP.NET limits appropriately, which is a simple web.config change) with small memory usage.