According to POSIX, I can statically initialise a mutex this way:
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
However, what if I want the mutex to be recursive? Mutexes are non-recursive by default, and there is no way to supply mutex attributes to the static initialisation.
It seems there is no portable way to do this. A workaround is to initialise the mutex dynamically when it is first used. To prevent race conditions during that initialisation, another non-recursive, statically initialised mutex can be used as a guard.
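A minimal sketch of that workaround (the names are illustrative; only standard POSIX calls are used): a statically initialised guard mutex protects the one-time dynamic initialisation of the recursive mutex.

#include <pthread.h>

/* Statically initialised guard protects the lazy, dynamic
   initialisation of the recursive mutex. */
static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t recursive_mutex;
static int initialised = 0;

static void ensure_recursive_mutex(void)
{
    pthread_mutex_lock(&guard);
    if (!initialised) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&recursive_mutex, &attr);
        pthread_mutexattr_destroy(&attr);
        initialised = 1;
    }
    pthread_mutex_unlock(&guard);
}

/* Call ensure_recursive_mutex() before the first
   pthread_mutex_lock(&recursive_mutex). */

pthread_once() with an init routine would serve the same purpose as the guard mutex and flag.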
Try (the _NP suffix marks this initialiser as a non-portable extension; it is available in glibc but not guaranteed elsewhere):
pthread_mutex_t mutex = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
When I read The Swift Programming Language: Memory Safety, I was confused by the section Conflicting Access to Properties:
The code below shows that the same error appears for overlapping write
accesses to the properties of a structure that’s stored in a global
variable.
var holly = Player(name: "Holly", health: 10, energy: 10)
balance(&holly.health, &holly.energy) // Error
In practice,
most access to the properties of a structure can overlap safely. For
example, if the variable holly in the example above is changed to a
local variable instead of a global variable, the compiler can prove
that overlapping access to stored properties of the structure is
safe:
func someFunction() {
    var oscar = Player(name: "Oscar", health: 10, energy: 10)
    balance(&oscar.health, &oscar.energy)  // OK
}
In the example above, Oscar’s health and energy are passed as the two in-out parameters to balance(_:_:). The compiler can prove that memory
safety is preserved because the two stored properties don’t interact
in any way.
How can the compiler prove that memory safety is preserved?
Being inside a function scope gives the compiler certainty about which operations will be executed on the struct, and when. The compiler knows how structs work, and how and when (relative to the time the function is called) the code inside a function is executed.
In a global or larger scope, the compiler loses visibility of what could be modifying the memory, and when, so it cannot guarantee safety.
It's because of multiple threads. When "holly" is a global variable, multiple threads could access the global variable at the same time, and you are in trouble. In the case of a local variable, that variable exists once per execution of the function. If multiple threads run someFunction() simultaneously, each thread has its own "oscar" variable, so there is no chance of thread 1's "oscar" variable interfering with thread 2's "oscar" variable.
An answer from Andrew_Trick on Swift Forums:
"don't interact" is a strong statement. The compiler simply checks that each access to 'oscar' in the call to 'balance' can only modify independent pieces of memory ('heath' vs 'energy'). This is a special case because any access to a struct is generally modeled as a read/write to the entire struct value. The compiler does a little extra work to detect and allow this special case because copying the 'oscar' struct in situations like this would be inconvenient.
I'm wondering: for Grails's global variables, do we need to add a mutex lock when accessing them?
Examples:
- A static variable in an XXXService class
- The Grails Application Context
The JVM doesn't really have global variables. The closest thing to them are public static variables, which isn't really the same thing.
Whether or not you have to add a mutex depends on what you want to do with the variables. In general, the answer is "no", but that is in part because in general you wouldn't want to have mutable public static variables.
You only need to synchronize these objects if they need to be thread safe. Most things in the Grails Application Context do not need to be (such as just getting a singleton service).
So the answer to your question is not very clear-cut. Do it when you need to be sure that a previous operation has finished with the variable you care about.
I am using pthreads in order to parallelize some code. First, I parallelized it with OpenMP, which was fairly easy and straightforward because I only had to make a variable private in order to avoid a race condition. I want to do the same in my pthread code. What can I do?
Depending on your code/purpose, you can either use a pthread mutex to serialize access to some shared resource/value so that only one thread is modifying it at any time:
- pthread_mutex_init/destroy
- pthread_mutex_lock/unlock
+ make the resource/value itself volatile to prevent compile-time optimization (though with correct mutex use this should not be strictly necessary)
or you can use thread-local values:
- pthread_key_create/delete
- pthread_setspecific
- pthread_getspecific
though a local variable in your pthread start_routine might do the same thing for you.
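A minimal sketch of that last option, the closest analogue of an OpenMP private variable (all names here are illustrative):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

/* Each thread gets its own copy of private_counter on its own stack,
   which is the pthread equivalent of an OpenMP private variable. */
static void *worker(void *arg)
{
    long id = (long)arg;
    int private_counter = 0;   /* private to this thread */

    for (int i = 0; i < 1000; i++)
        private_counter++;

    printf("thread %ld counted %d\n", id, private_counter);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

pthread_key_create/pthread_setspecific are only needed when the per-thread value has to outlive a single function call or be reachable from code that is not passed a pointer to it.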
I was looking through some code that provides a C/C++ wrapper for a pthread mutex. The code keeps a shadow variable for signaled/not signaled condition. The code also ignores return values from functions like pthread_mutex_lock and pthread_mutex_trylock, so the shadow variable may not accurately reflect the state of the mutex (ignoring the minor race condition).
Does pthread provide a way to query a mutex for its state? A quick read of the pthread API does not appear to offer one. I also don't see anything interesting that operates on pthread_mutexattr_t.
Or should one use trylock, rely upon EBUSY, and give up ownership if acquired?
Thanks in advance.
There is no such function because there would be no point. Unlike pthread_mutex_trylock(), which atomically acquires the mutex if it is free, a pure state query could be invalidated immediately by another thread changing that mutex's state.
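If you do want the probe-and-release behaviour suggested in the question, a sketch using pthread_mutex_trylock and EBUSY might look like this (the function name is illustrative, and the answer is stale the instant it is returned):

#include <pthread.h>
#include <errno.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Returns 1 if the mutex appeared locked at the instant of the call,
   0 if it appeared unlocked (we briefly acquired and released it),
   and -1 on error. */
static int mutex_seems_busy(pthread_mutex_t *mutex)
{
    int rc = pthread_mutex_trylock(mutex);
    if (rc == 0) {
        pthread_mutex_unlock(mutex);  /* give up ownership if acquired */
        return 0;
    }
    return (rc == EBUSY) ? 1 : -1;
}

int main(void)
{
    printf("busy? %d\n", mutex_seems_busy(&m));
    pthread_mutex_lock(&m);
    printf("busy? %d\n", mutex_seems_busy(&m));
    pthread_mutex_unlock(&m);
    return 0;
}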
I'm allocating my pthread thread-specific data from a fixed-size global pool that's controlled by a mutex. (The code in question is not permitted to allocate memory dynamically; all the memory it's allowed to use is provided by the caller as a single buffer. pthreads might allocate memory, I couldn't say, but this doesn't mean that my code is allowed to.)
This is easy to handle when creating the data, because the function can check the result of pthread_getspecific: if it returns NULL, the global pool's mutex can be taken there and then, the pool entry acquired, and the value set using pthread_setspecific.
When the thread is destroyed, the destructor function (as per pthread_key_create) is called, but the pthreads manual is a bit vague about any restrictions that might be in place.
(I can't impose any requirements on the thread code, such as needing it to call a destructor manually before it exits. So, I could leave the data allocated, and maybe treat the pool as some kind of cache, reusing entries on an LRU basis once it becomes full -- and this is probably the approach I'd take on Windows when using the native API -- but it would be neatest to have the per-thread data correctly freed when each thread is destroyed.)
Can I just take the mutex in the destructor? There's no problem with thread destruction being delayed a bit, should some other thread have the mutex taken at that point. But is this guaranteed to work? My worry is that the thread may "no longer exist" at that point. I use quotes, because of course it certainly exists if it's still running code! -- but will it exist enough to permit a mutex to be acquired? Is this documented anywhere?
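For concreteness, here is a sketch of the scheme described above (fixed pool, guard mutex, key destructor returning the slot); all names and sizes are made up:

#include <pthread.h>
#include <stddef.h>

#define POOL_SIZE 16

struct slot { int in_use; char data[256]; };

static struct slot pool[POOL_SIZE];
static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_key_t slot_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

/* Destructor registered with pthread_key_create: runs in the exiting
   thread and takes the pool mutex -- which is exactly what the
   question asks about. */
static void release_slot(void *p)
{
    pthread_mutex_lock(&pool_mutex);
    ((struct slot *)p)->in_use = 0;
    pthread_mutex_unlock(&pool_mutex);
}

static void make_key(void)
{
    pthread_key_create(&slot_key, release_slot);
}

/* Returns this thread's slot, acquiring one from the pool on first use;
   NULL if the pool is exhausted. */
static struct slot *get_thread_slot(void)
{
    struct slot *s;

    pthread_once(&key_once, make_key);

    s = pthread_getspecific(slot_key);
    if (s != NULL)
        return s;

    pthread_mutex_lock(&pool_mutex);
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            s = &pool[i];
            break;
        }
    }
    pthread_mutex_unlock(&pool_mutex);

    if (s != NULL)
        pthread_setspecific(slot_key, s);
    return s;
}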
The pthread_key_create() rationale seems to justify doing whatever you want from a destructor, provided you keep signal handlers from calling pthread_exit():
There is no notion of a destructor-safe function. If an application does not call pthread_exit() from a signal handler, or if it blocks any signal whose handler may call pthread_exit() while calling async-unsafe functions, all functions may be safely called from destructors.
Do note, however, that this section is informative, not normative.
The thread's existence or non-existence will most likely not affect the mutex in the least, unless the mutex is error-checking. Even then, the kernel is still scheduling whatever thread your destructor is being run on, so there should definitely be enough thread to go around.