pthread mutex: get state - pthreads

I was looking through some code that provides a C/C++ wrapper for a pthread mutex. The code keeps a shadow variable for the signaled/not-signaled condition. It also ignores the return values from functions like pthread_mutex_lock and pthread_mutex_trylock, so the shadow variable may not accurately reflect the state of the mutex (ignoring the minor race condition).
Does pthread provide a way to query a mutex for its state? A quick read of the pthread API does not appear to offer one. I also don't see anything interesting that operates on pthread_mutexattr_t.
Or should one use trylock, rely upon EBUSY, and give up ownership if acquired?
Thanks in advance.

There is no such function because there would be no point. If you queried the state of a mutex without trying to acquire it, as pthread_mutex_trylock() does, then the result you get could be invalidated immediately by another thread changing that mutex's state.
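If what you actually want is a best-effort snapshot, the trylock approach from the question is about as close as pthreads gets. A minimal sketch (the function name is made up here); note that the answer can already be stale by the time it is returned:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Best-effort probe: returns true if the mutex appeared to be held at the
   instant of the call. Another thread may change that immediately after. */
bool mutex_appears_locked(pthread_mutex_t *m)
{
    int rc = pthread_mutex_trylock(m);
    if (rc == 0) {
        /* It was free and we now own it, so give ownership straight back. */
        pthread_mutex_unlock(m);
        return false;
    }
    return rc == EBUSY; /* EBUSY means some thread currently holds it */
}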

Related

How to avoid an entire call stack being declared MainActor because a low-level function needs it?

I have an interesting query with regard to @MainActor and strict concurrency checking (-Xfrontend -warn-concurrency -Xfrontend -enable-actor-data-race-checks).
I have functions (e.g., Analytics) that at the lowest level require access to the device screen scale, UIScreen.main.scale, which is isolated to MainActor. However, I would prefer not to have to declare the entire stack of functions above the one that accesses scale as requiring MainActor.
Is there a way to do this, or do I have no other options?
What would be the best way to ensure my code only ever calls UIScreen once and keeps the result available for next time, without manually defining a var and checking if it's nil? I.e., is there a kind of computed property that will do this?
Edit: Is there an equivalent of this using MainActor (MainActor.run doesn't do the same thing; it seems to block synchronously):
DispatchQueue.main.async { /* run on the main thread */ }
Thanks,
Chris
Non-UI code should not rely directly on UIScreen. The scale (for example) should be passed as a parameter, or to actors in their init. If the scale changes (which it can, when screens are added or removed), then the new value should be sent to the actor. Or the actor can observe something that publishes the scale when it changes.
The key point is that accessing UIScreen from a random thread is not valid for a reason: the scale can in fact change at any time. Reading it from an actor is, and should be, an async call.
It sounds like you have some kind of Analytics actor. The simplest implementation of this would be to just pass the scale when you create it.
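As a rough sketch of that suggestion (the actor, its properties, and the setup function are all invented names for illustration), you can read UIScreen.main.scale once in a context that is already on the main actor and hand the value to the actor's initializer:

import UIKit

actor Analytics {
    private let screenScale: CGFloat

    init(screenScale: CGFloat) {
        self.screenScale = screenScale
    }

    func record(_ event: String) {
        // Uses the stored value; no hop to the main actor is needed here.
        print("\(event) at scale \(screenScale)")
    }
}

// Called from somewhere already isolated to the main actor (app/scene setup).
@MainActor
func makeAnalytics() -> Analytics {
    Analytics(screenScale: UIScreen.main.scale)
}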

FreeRTOS: calling vTaskDelete from IRQ

I've spent some time on this but can't find any info on whether it's allowed to call vTaskDelete from an IRQ handler. I know some functions have specialized versions for use in IRQ routines, but I can't find anything related to vTaskDelete. Currently it works, but I don't want to introduce a hard-to-discover bug just because I didn't find the information.
If you are calling a callback from the IRQ, then it is still in the IRQ context. Calling vTaskDelete() with a NULL parameter would delete the task that was running before the interrupt was entered, so the interrupt would then try to return to a task that was no longer running. Even if that were not the case, the rule of thumb is not to use API functions that do not end in "FromISR" from an interrupt (the separate API means fewer decision points in the function, faster and standard interrupt entry because it doesn't need to keep an interrupt nesting variable, no need to pass parameters that don't make sense in an interrupt context [like a block time] into an interrupt function, etc.).
I assume you are not calling vTaskDelete with a NULL argument, because there is no current task when you are in interrupt context. In any case, vTaskDelete() should not be called from interrupt context; for example, its implementation will call vPortFree() to free the TCB of the task.
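If the decision to kill a task really does originate in an interrupt, the usual pattern is to defer the work from the ISR to a task using one of the "FromISR" calls and do the actual vTaskDelete() there. A rough sketch, with made-up task and handler names (portYIELD_FROM_ISR is port-specific but available on the common ports):

#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t xWorkerHandle;  /* task to be deleted; set at creation */
static TaskHandle_t xReaperHandle;  /* helper task that performs the delete */

void vMyIrqHandler(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Only signal from the ISR; the deletion happens in task context. */
    vTaskNotifyGiveFromISR(xReaperHandle, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

static void vReaperTask(void *pvParameters)
{
    (void) pvParameters;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY); /* wait for the ISR signal */
        vTaskDelete(xWorkerHandle);              /* safe: we are in a task now */
    }
}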

How does g_main_loop_unref(GMainLoop* loop) work?

The Question
Excerpt from the documentation:
Decreases the reference count on a GMainLoop object by one.
If the result is zero, free the loop and free all associated memory.
I could not find information regarding this reference counter. What is it initially set to and how is it used?
Details
In particular, I'm confused about this piece of example code (in the main method; note that set_cancel is a static method):
void (*old_sigint_handler)(int);
/* Create a new glib main loop */
data.main_loop = g_main_loop_new (NULL, FALSE);
old_sigint_handler = signal (SIGINT, set_cancel);
/* Run the main loop */
g_main_loop_run (data.main_loop);
signal (SIGINT, old_sigint_handler);
g_main_loop_unref (data.main_loop);
If g_main_loop is blocking, how is it ever going to stop? I could not find information on this signal method either, but that might be native to the library (although I do not know).
Note: I reduced the code above to what I thought was the essential part. It is from a camera interface library called aravis under docs/reference/aravis/html/ArvCamera.html
I could not find information regarding this reference counter. What is it initially set to and how is it used?
It is initially set to 1. Whenever you store a reference to the object you increment the reference counter, and whenever you remove a reference you decrement the reference counter. It's a form of manual garbage collection. Just google "reference counting" and you'll get lots of information.
If g_main_loop is blocking, how it ever going to stop?
Somewhere someone will call g_main_loop_quit. Judging by the question I'm guessing you're not very familiar with the concept of an event loop—GLib's manual isn't a very gentle introduction to the basic concept, you may want to try the Wikipedia article or just search for "event loop".
I could not find information on this signal method either. But that might be native to the library (although I do not know).
signal() is a standard function (both C and POSIX). Again, there is lots of information out there, including good old man pages (man 2 signal).
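Putting those pieces together, here is a minimal self-contained sketch of the lifecycle: g_main_loop_new() returns the loop with a reference count of 1, g_main_loop_run() blocks until something calls g_main_loop_quit(), and the final g_main_loop_unref() drops the count to zero and frees the loop. The timeout source below is just a stand-in for the SIGINT handler in the aravis example:

#include <glib.h>

static gboolean quit_cb(gpointer user_data)
{
    GMainLoop *loop = user_data;
    g_main_loop_quit(loop);   /* makes g_main_loop_run() return */
    return G_SOURCE_REMOVE;   /* one-shot source */
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE); /* reference count is 1 */

    g_timeout_add_seconds(2, quit_cb, loop);        /* quit after ~2 seconds */
    g_main_loop_run(loop);                          /* blocks until quit */

    g_main_loop_unref(loop);                        /* count drops to 0, loop freed */
    return 0;
}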

Do I need to wrap accesses to Int64's with a critical section?

I have code that logs execution times of routines by accessing QueryPerformanceCounter. Roughly:
var
FStart, FStop : Int64 ;
...
QueryPerformanceCounter (FStart) ;
... <code to be measured>
QueryPerformanceCounter (FStop) ;
<calculate FStop - FStart, update minimum and maximum execution times, etc>
Some of this logging code runs inside threads, but on the other hand, there is a display UI that accesses the derived results. I figure the possibility exists of the VCL thread accessing the same variables that the logging code is also accessing. The VCL thread will only ever read the data (and a mangled read would not be too serious), but the logging code will read and write the data, sometimes from another thread.
I assume QueryPerformanceCounter itself is thread-safe.
The code has run happily without any sign of a problem, but I'm wondering if I need to wrap my accesses to the Int64 counters in a critical section?
I'm also wondering what the speed penalty of the critical section access is?
Any time you access multi-byte non-atomic data across threads when both reads and writes are involved, you need to serialize the access. Whether you use a critical section, mutex, semaphore, SRW lock, etc. is up to you.
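As an illustration of that answer (shown here in C against the Win32 API, which is, as far as I know, what Delphi's TCriticalSection wraps on Windows; the variable and function names are hypothetical), every read and write of the shared 64-bit results goes through the same lock:

#include <windows.h>

static CRITICAL_SECTION g_statsLock;
static LONGLONG g_minTicks = MAXLONGLONG;
static LONGLONG g_maxTicks = 0;

void InitStats(void)
{
    InitializeCriticalSection(&g_statsLock);
}

/* Called by the logging threads after each measurement. */
void RecordSample(LONGLONG start, LONGLONG stop)
{
    LONGLONG elapsed = stop - start;
    EnterCriticalSection(&g_statsLock);
    if (elapsed < g_minTicks) g_minTicks = elapsed;
    if (elapsed > g_maxTicks) g_maxTicks = elapsed;
    LeaveCriticalSection(&g_statsLock);
}

/* Called by the UI thread; reads go through the same lock. */
void ReadStats(LONGLONG *minTicks, LONGLONG *maxTicks)
{
    EnterCriticalSection(&g_statsLock);
    *minTicks = g_minTicks;
    *maxTicks = g_maxTicks;
    LeaveCriticalSection(&g_statsLock);
}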

Can I use pthread mutexes in the destructor function for thread-specific data?

I'm allocating my pthread thread-specific data from a fixed-size global pool that's controlled by a mutex. (The code in question is not permitted to allocate memory dynamically; all the memory it's allowed to use is provided by the caller as a single buffer. pthreads might allocate memory, I couldn't say, but this doesn't mean that my code is allowed to.)
This is easy to handle when creating the data, because the function can check the result of pthread_getspecific: if it returns NULL, the global pool's mutex can be taken there and then, the pool entry acquired, and the value set using pthread_setspecific.
When the thread is destroyed, the destructor function (as per pthread_key_create) is called, but the pthreads manual is a bit vague about any restrictions that might be in place.
(I can't impose any requirements on the thread code, such as needing it to call a destructor manually before it exits. So, I could leave the data allocated, and maybe treat the pool as some kind of cache, reusing entries on an LRU basis once it becomes full -- and this is probably the approach I'd take on Windows when using the native API -- but it would be neatest to have the per-thread data correctly freed when each thread is destroyed.)
Can I just take the mutex in the destructor? There's no problem with thread destruction being delayed a bit, should some other thread have the mutex taken at that point. But is this guaranteed to work? My worry is that the thread may "no longer exist" at that point. I use quotes, because of course it certainly exists if it's still running code! -- but will it exist enough to permit a mutex to be acquired? Is this documented anywhere?
The pthread_key_create() rationale seems to justify doing whatever you want from a destructor, provided you keep signal handlers from calling pthread_exit():
There is no notion of a destructor-safe function. If an application does not call pthread_exit() from a signal handler, or if it blocks any signal whose handler may call pthread_exit() while calling async-unsafe functions, all functions may be safely called from destructors.
Do note, however, that this section is informative, not normative.
The thread's existence or non-existence will most likely not affect the mutex in the least, unless the mutex is error-checking. Even then, the kernel is still scheduling whatever thread your destructor is being run on, so there should definitely be enough thread to go around.
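For concreteness, a minimal sketch of the arrangement described in the question; pool_acquire() and pool_release() stand in for the caller-provided fixed-size pool and are assumed, not real APIs. The destructor simply takes the same mutex the allocation path uses:

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_key_t tls_key;

extern void *pool_acquire(void);      /* assumed: returns a free slot or NULL */
extern void pool_release(void *slot); /* assumed: marks the slot free again */

/* Runs in the exiting thread; blocking briefly on the mutex here is fine. */
static void tls_destructor(void *value)
{
    pthread_mutex_lock(&pool_lock);
    pool_release(value);
    pthread_mutex_unlock(&pool_lock);
}

void tls_init(void)
{
    pthread_key_create(&tls_key, tls_destructor);
}

void *tls_get(void)
{
    void *slot = pthread_getspecific(tls_key);
    if (slot == NULL) {
        pthread_mutex_lock(&pool_lock);
        slot = pool_acquire();
        pthread_mutex_unlock(&pool_lock);
        if (slot != NULL)
            pthread_setspecific(tls_key, slot);
    }
    return slot;
}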

Resources