Does the POSIX Thread API offer a way to test if the calling thread already holds a lock?

Short question: Does the POSIX thread API offer me a way to determine if the calling thread already holds a particular lock?
Long question:
Suppose I want to protect a data structure with a lock. Acquiring and releasing the lock need to happen in different functions. Calls between the functions involved are rather complex (I am adding multithreading to a 16-year-old code base). For example:
do_dangerous_stuff() does things for which the current thread needs to hold the write lock. Therefore, I acquire the lock at the beginning of the function and release it at the end, as the caller does not necessarily hold the lock.
Another function, do_extra_dangerous_stuff(), calls do_dangerous_stuff(). However, before the call it already does things which also require a write lock, and the data is not consistent until the call to do_dangerous_stuff() returns (so releasing and immediately re-acquiring the lock might break things).
In reality it is more complicated than that. There may be a bunch of functions calling do_dangerous_stuff(), and requiring each of them to obtain the lock before calling do_dangerous_stuff() may be impractical. Sometimes the lock is acquired in one function and released in another.
With a read lock, I could just acquire it multiple times from the same thread, as long as I make sure I release the same number of lock instances that I have acquired. For a write lock, this is not an option (attempting to do so will result in a deadlock). An easy solution would be: test if the current thread already holds the lock and acquire it if not, and conversely, test if the current thread still holds the lock and release it if it does. However, that requires me to test if the current thread already holds the lock—is there a way to do that?

Looking at the man page for pthread_rwlock_wrlock(), I see it says:
If successful, the pthread_rwlock_wrlock() function shall return zero; otherwise, an error number shall be returned to indicate the error.
[…]
The pthread_rwlock_wrlock() function may fail if:
EDEADLK The current thread already owns the read-write lock for writing or reading.
As I read it, EDEADLK is never used to indicate chains involving multiple threads waiting for each other’s resources (and from my observations, such deadlocks indeed seem to result in a freeze rather than EDEADLK). It seems to indicate exclusively that the thread is requesting a resource already held by the current thread, which is the condition I want to test for.
If I have misunderstood the documentation, please let me know. Otherwise the solution would be to simply call pthread_rwlock_wrlock(). One of the following should happen:
It blocks because another thread holds the resource. When we get to run again, we will hold the lock. Business as usual.
It returns zero (success) because we have just acquired the lock (which we didn’t hold before). Business as usual.
It returns EDEADLK because we are already holding the lock. No need to reacquire, but we might want to consider this when we release the lock (that depends on the code in question; see below).
It returns some other error, indicating something has truly gone wrong. Same as with every other lock operation.
It may make sense to keep track of the number of times we have acquired the lock and got EDEADLK. Borrowing from Gil Hamilton's answer, a lock depth would work for us:
Reset the lock depth to 0 when we have acquired a lock.
Increase the lock depth by 1 each time we get EDEADLK.
Match each attempt to acquire the lock with a release attempt: if the lock depth is 0, release the lock; otherwise, decrease the lock depth.
This should be thread-safe without further synchronization, as the lock depth is effectively protected by the lock it refers to (we touch it only while holding the lock).
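Putting the pieces together, here is a minimal sketch of this scheme. It assumes the implementation really does return EDEADLK in this situation (which, per the quote above, POSIX only says it may do), and the wrapper names acquire_write/release_write are mine:

#include <errno.h>
#include <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
/* Only touched while holding the write lock, so it needs no
 * synchronization of its own. */
static int lock_depth;

/* Acquire the write lock unless this thread already holds it. */
int acquire_write(void)
{
    int err = pthread_rwlock_wrlock(&rwlock);
    if (err == 0) {
        lock_depth = 0;   /* freshly acquired */
        return 0;
    }
    if (err == EDEADLK) { /* we already hold it: one level deeper */
        ++lock_depth;
        return 0;
    }
    return err;           /* genuine error */
}

/* Release one level; only the outermost call really unlocks. */
int release_write(void)
{
    if (lock_depth == 0)
        return pthread_rwlock_unlock(&rwlock);
    --lock_depth;
    return 0;
}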
Caveat: if the current thread already holds the lock for reading (and no others do), this will also report it as being “already locked”. Further tests will be needed to determine if the currently held lock is indeed a write lock. If multiple threads, among them the current one, hold the read lock, I do not know if attempting to obtain a write lock will return EDEADLK or freeze the thread. This part needs some more work…

AFAIK, there's no easy way to accomplish what you're trying to do.
On Linux, you can use the "recursive" mutex attribute to achieve your purpose (as shown here, for example: https://stackoverflow.com/a/7963765/1076479), but this is not "posix"-portable.
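For illustration, creating such a recursive mutex might look like this minimal sketch (standard pthread calls; the function name is mine):

#include <pthread.h>

/* Initialize *mutex with the recursive type; returns 0 on success. */
int init_recursive_mutex(pthread_mutex_t *mutex)
{
    pthread_mutexattr_t attr;
    int err = pthread_mutexattr_init(&attr);
    if (err != 0)
        return err;
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    err = pthread_mutex_init(mutex, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}

The same thread can then lock the mutex repeatedly, as long as it balances every lock with an unlock.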
The only really portable solution is to roll your own equivalent. You can do that by creating a data structure containing a lock along with your own thread index (or equivalent) and an ownership/recursion count.
CAVEAT: Pseudo-code off the top of my head
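For concreteness, the data structure the code below assumes might look like this (the field names are taken from the pseudo-code; the -1 sentinel for "no owner" is my choice):

#include <assert.h>
#include <errno.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t mutex; /* the underlying non-recursive lock */
    int owner;             /* thread index of the current owner, -1 if none */
    int lock_depth;        /* recursion count of the owner */
} my_lock_t;

my_lock_t mystruct = { PTHREAD_MUTEX_INITIALIZER, -1, 0 };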
Recursive lock:
int err;

// First try to acquire the lock without blocking...
if ((err = pthread_mutex_trylock(&mystruct.mutex)) == 0) {
    // Acquire succeeded: the lock was free, so I now own it.
    assert(mystruct.lock_depth == 0);
    mystruct.owner = my_thread_index;
    ++mystruct.lock_depth;
} else if (mystruct.owner == my_thread_index) {
    // Trylock failed, but I am the recorded owner:
    // I already held the lock. Now one level deeper.
    assert(err == EBUSY);
    ++mystruct.lock_depth;
} else {
    // I don't own the lock: block waiting for it.
    pthread_mutex_lock(&mystruct.mutex);
    assert(mystruct.lock_depth == 0);
    // Record ownership so the owner test above works for nested calls.
    mystruct.owner = my_thread_index;
    mystruct.lock_depth = 1;
}
On the way out it's simpler: since you know you own the lock, you only need to determine whether it's time to actually release it (i.e. whether this is the last unlock). Recursive unlock:
if (--mystruct.lock_depth == 0) {
    assert(mystruct.owner == my_thread_index);
    // Last level of recursion unwound
    mystruct.owner = -1; // Mark it un-owned
    pthread_mutex_unlock(&mystruct.mutex);
}
I would want to add some additional checks and assertions, and do significant testing, before trusting this.

Related

Are Tasks automatically cancelled upon return/completion in Swift?

I am a little uncertain about task cancellation in Swift. My question is:
If a task reaches its return line (in this example, Line 4), does this mean it will be automatically canceled? (and thus free up memory + any used thread(s) previously occupied by the Task?)
someBlock {
    Task<Bool, Never> {
        await doSomeWork()
        return true // Line 4
    }
}
As a follow-up, what if we then call .task on a SwiftUI View? Does anything change?
SomeView
    .task {
        await doSomeWork()
    }
Thank you for your time!
If a task reaches its return line (in this example, Line 4), does this mean it will be automatically canceled
No. It means it will automatically end in good order. Cancellation of a task is a very different and specialized thing. They are both ways of bringing a task to an end, but within that similarity they are effectively opposites of one another.
It does mean that the thread(s) used by the Task are now free for use, but this is not as dramatically important as it might seem, because await also frees the thread(s) used by the Task. That's the whole point of async/await; it is not thread-bound. A Task is not aligned with a single thread, the way a DispatchQueue is; to put it another way, we tend to use queue and thread as loose equivalents of one another, but with a Task, that is not at all the case.
As for what memory is released, it depends on what memory was being retained. The code given is too "pseudo" to draw any conclusions about that. But basically yes, your mental picture of this is right: a Task has a life of its own, and while it lives its code continues to retain whatever it has a strong reference to, and then when it ends (whether through finishing in good order or through cancellation) the Task's life ends (if you have not retained it elsewhere) and therefore so does whatever the Task's code is retaining.
Does anything change
The .task modifier, according to Apple, creates "an asynchronous task with a lifetime that matches that of the ... view". So I would say, yes: the Task no longer just has an independent life of its own, but rather it is being retained behind the scenes (in order that the SwiftUI framework can cancel it if the View goes out of existence), and so whatever its code has strong references to will remain.
On the other hand this might not be terribly important; it depends on whether you're using reference types. References to value types, such as a View, don't have that sort of memory management implications.

iOS application does not render due to infinite while loop

Hello, I am implementing a primitive echo server for iOS, written purely in C. The issue arises when I enter an infinite while loop in order to accept incoming connections. Where is the best place to put the accept loop, and is an infinite while loop (in my case, all I want) the best implementation?
Here is the flow:
Set the root controller inside the application function in the app delegate.
Call the start server function inside the controller. (This function never returns.)
while (true)
{
    @autoreleasepool
    {
        int exchangeSocket = accept(socket, NULL, NULL);
        if (recv(exchangeSocket, buffer, sizeof(buffer), 0) == -1)
        {
            NSLog(@"%@", @"Error");
        }
        else
        {
            // do something with the data received
        }
    }
}
There are at least two concerns with your approach. First, you should not run an infinite loop on the main thread, since that thread already runs the main run loop. Blocking the run loop with your own infinite loop (no matter where) will disrupt event processing; this is why your application does not render. The reference specifies:
Lengthy tasks (or potentially lengthy tasks) should always be performed on a background thread. Any tasks involving network access, file access, or large amounts of data processing should all be performed asynchronously using GCD or operation objects.
However, iOS does allow you to create threads using POSIX interfaces, e.g. it offers pthread_create and friends. You should be able to run your code in such a thread without blocking your app.
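A minimal sketch of that approach follows; the server_loop and start_server_thread names are mine, with the loop body standing in for the code in the question:

#include <pthread.h>

/* Hypothetical wrapper: runs the accept loop from the question. */
void *server_loop(void *arg)
{
    /* while (true) { @autoreleasepool { accept(...); recv(...); ... } } */
    return NULL;
}

/* Call this from the controller instead of entering the loop directly,
 * so the main run loop keeps processing UI events. */
int start_server_thread(void)
{
    pthread_t tid;
    int err = pthread_create(&tid, NULL, server_loop, NULL);
    if (err == 0)
        pthread_detach(tid); /* we never join; the thread cleans up itself */
    return err;
}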
Second (but perhaps of less interest to you), use of the POSIX networking APIs is somewhat discouraged (because those APIs fail to turn on the radio) in favor of other interfaces. What's closest to your liking may be CFSocket (also a C interface).

What If the Updates Handler for CoreMotion Does Not Finish Fast Enough?

I am registering to receive updates from a CMMotionManager like so:
motionManager.startDeviceMotionUpdatesToQueue(deviceMotionQueue) {
    [unowned self] (deviceMotion, error) -> Void in
    // ... handle data ...
}
where deviceMotionQueue is an NSOperationQueue with the highest quality of service, i.e. the highest possible update rate:
self.deviceMotionQueue.qualityOfService = NSQualityOfService.UserInteractive
This means that I am getting updates often. Like, really often. So I was wondering: what happens if I don't handle one update fast enough? If the update interval is shorter than the execution time of 'handle data'? Will the motion manager drop some information? Or will updates queue up and eventually run out of memory? Or is this not feasible at all?
It's hard to know what the internal CoreMotion implementation will do, and given that what it does is an "implementation detail", even if you could discern its current behavior, you wouldn't want to rely on that behavior moving forward.
I think the common solution to this is to do the minimum amount of work in the motion update handler, and then manage the work/rate-limiting/etc yourself. So, for instance, if you wanted to drop interstitial updates that arrived while you were processing the last update, you could have the update handler that you pass into CoreMotion do nothing but (safely) add a copy of deviceMotion to a mutable array, and then enqueue the "real" handler on a different queue. The real handler might then have a decision tree like:
if the array is empty, return immediately
otherwise (safely) take the last element, clear all elements from the array, and do the work based on the last element
This would have the effect of letting you take only the most recent reading, while also knowing how many updates were missed and, if it's useful, what those missed updates were. Depending on your app, it might be useful to batch-process the missed events as a group.
But the takeaway is this: if you want to be sure about how a system like this behaves, you have to manage it yourself.

A simple method for a "master" thread to monitor "slave" threads

As my first real foray into using pthreads, I'm looking to adapt an already written app of mine to use threads.
The paradigm I have in mind is basically to have one "master" thread which iterates through a list of data items to be processed, launching a new thread for each, with MAX_THREADS threads running at any given time (until the number of remaining tasks is less than this), each of which perform the same task on a single data element within a list.
The master thread needs to be aware of whenever any thread has completed its task and returned (or pthread_exit()'ed), immediately launching a new thread to perform the next task in the list.
What I'm wondering about is what are people's preferred methods for working with such a design? Data considerations aside, what would be the simplest set of pthreads functions to use to accomplish this? Obviously, pthread_join() is out as a means for "checking up" on threads.
Early experiments have been using a struct, passed as the final argument to pthread_create(), which contains an element called "running" which the thread sets to true on startup and resets just before returning. The master thread simply checks the current value of this struct element for each thread in a loop.
Here are the data the program uses for thread management:
typedef struct thread_args_struct
{
    char *data;    /* the data item the thread will be working on */
    int index;     /* thread's index in the array of threads */
    int thread_id; /* thread's actual integer id */
    int running;   /* boolean status */
    int retval;    /* value to pass back from thread on return */
} thread_args_t;
/*
* array of threads (only used for thread creation here, not referenced
* otherwise)
*/
pthread_t thread[MAX_THREADS];
/*
* array of argument structs
*
* a pointer to the thread's argument struct will be passed to it on creation,
* and the thread will place its return value in the appropriate struct element
* before returning/exiting
*/
thread_args_t thread_args[MAX_THREADS];
Does this seem like a sound design? Is there a better, more standardized method for monitoring threads' running/exited status, a more "pthreads-y" way? I'm looking to use the simplest, clearest, cleanest mechanism possible which won't lead to any unexpected complications.
Thanks for any feedback.
There isn't so much a "pthreads-y" way as a (generic) multi-threading way. There is nothing wrong with what you have, but it is more complicated and inefficient than it needs to be.
A more standard design is to use a thread pool. The master thread spawns a bunch of worker threads that read a queue. The master puts work in the queue and all the workers take a shot at processing the work in the queue. This eliminates the need to constantly start and terminate threads (though more sophisticated pools can have some mechanism to increase/decrease the pool size based on the work load). If the threads have to return data or status information they can use an output queue (maybe just a pointer to the actual data) that the master can read.
This still leaves the issue of how to get rid of the threads when you are done processing. Again, it is a master-worker relationship, so it is advised that the master tell the workers to shut themselves down. This amounts to using some program switch (such as you currently have), employing a condition variable somewhere, sending a signal, or cancelling the thread. There are a lot of questions (and good answers) on this topic here.
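For illustration, the queue at the heart of such a pool might look like the sketch below. Every name here is mine, process() is a hypothetical per-item work function, and draining the queue before exiting is just one possible shutdown policy:

#include <pthread.h>
#include <stdbool.h>

#define QUEUE_SIZE 64

void process(char *item); /* hypothetical per-item work function */

/* Fixed-size circular work queue shared by the master and the workers. */
typedef struct {
    char *items[QUEUE_SIZE];
    int head, tail, count;
    bool shutdown;            /* master's "please exit" switch */
    pthread_mutex_t mutex;
    pthread_cond_t not_empty;
    pthread_cond_t not_full;
} work_queue_t;

work_queue_t queue = {
    .mutex = PTHREAD_MUTEX_INITIALIZER,
    .not_empty = PTHREAD_COND_INITIALIZER,
    .not_full = PTHREAD_COND_INITIALIZER,
};

/* Master side: push one work item, blocking while the queue is full. */
void queue_push(work_queue_t *q, char *item)
{
    pthread_mutex_lock(&q->mutex);
    while (q->count == QUEUE_SIZE)
        pthread_cond_wait(&q->not_full, &q->mutex);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    ++q->count;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->mutex);
}

/* Worker side: pop one item; NULL means the queue drained after shutdown. */
char *queue_pop(work_queue_t *q)
{
    char *item = NULL;
    pthread_mutex_lock(&q->mutex);
    while (q->count == 0 && !q->shutdown)
        pthread_cond_wait(&q->not_empty, &q->mutex);
    if (q->count > 0) {
        item = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_SIZE;
        --q->count;
        pthread_cond_signal(&q->not_full);
    }
    pthread_mutex_unlock(&q->mutex);
    return item;
}

/* Worker thread body: process items until shutdown drains the queue. */
void *worker(void *arg)
{
    work_queue_t *q = arg;
    char *item;
    while ((item = queue_pop(q)) != NULL)
        process(item);
    return NULL;
}

/* Master side: ask all workers to finish once the queue is empty. */
void queue_shutdown(work_queue_t *q)
{
    pthread_mutex_lock(&q->mutex);
    q->shutdown = true;
    pthread_cond_broadcast(&q->not_empty);
    pthread_mutex_unlock(&q->mutex);
}

With this shape, the master spawns MAX_THREADS workers once with pthread_create, calls queue_push for each task, and calls queue_shutdown when the list is exhausted; pthread_join then serves to wait for the workers to exit, rather than to poll them.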

Slow memory release (refcounted structure) - is my workaround a good way?

In my program I can load a Catalog: ICatalog.
A Catalog here contains a lot of refcounted structures (ICollections of IItems, IElements, IRules, etc.).
When I want to change to another catalog,
I load a new Catalog,
but the automatic release of the previous ICatalog instance takes time, freezing my application for 2 seconds or more.
My question is:
I want to defer the release of the old (and no longer used) ICatalog instance to another thread.
I have not tested it yet, but I intend to create a new thread with:
ErazerThread.OldCatalog := Catalog; // old catalog refcount jumps to 2
Catalog := LoadNewCatalog(...);     // old catalog refcount = 1
ErazerThread.Execute;               // just sets OldCatalog to nil
This way, I expect the release to occur in the thread, and my application to no longer freeze.
Is it safe (and good practice)?
Do you have examples of existing code already performing a release with a similar method?
I would let such a thread block on some threadsafe queue (*), and push the interfaces to release into that queue as IUnknowns.
Note, however, that if the releasing touches a lock that your memory manager uses (like a global heap-manager lock), then this is futile, since your main thread will block on the first heap-manager access.
With a heap manager that uses per-thread pools, allocating many items in one thread and releasing them in a different thread might also frustrate the algorithms for coalescing and reusing (small) blocks.
I still think the way you describe is generally sound when implemented properly. But the points above are theoretical caveats, meant to show that there might be a link from the second thread back to the main thread via the heap manager.
(*) The simplest way is to add it to a TThreadList and use a TEvent to signal that an element was added.
That looks OK, but don't call the thread's Execute method directly; that will run the thread object's code in the current thread instead of the one that the thread object creates. Call Start or Resume instead.
