POSIX thread pthread_setschedparam

Can anyone explain how to change a thread's priority or scheduling policy? I create a sub-thread from the main thread, and when I try to change the thread's priority or scheduling policy using pthread_setschedprio(pthread_self(), 2); and pthread_setschedparam(pthread_self(), SCHED_OTHER, &param);, both fail with the error EINVAL (invalid argument). Please explain how this works for the SCHED_OTHER policy.
Here param is declared as struct sched_param param;

It is not clear what your param argument contains (it is of type struct sched_param* and has a sched_priority field, so you can set policy and priority at once). Most probably it contains an unsupported/out-of-range value, or garbage if you forgot to initialize it with something like this:
struct sched_param param;
param.sched_priority = 2;
By the way, valid priorities for a given scheduling policy are within the range returned by sched_get_priority_min(int policy) and sched_get_priority_max(int policy) - that might be worth checking.
Update
From this:
Processes scheduled with SCHED_OTHER must be assigned the static priority 0; processes scheduled under SCHED_FIFO or SCHED_RR can have a static priority in the range 1 to 99.
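In other words, EINVAL is expected if you pass priority 2 with SCHED_OTHER. Here is a minimal sketch of setting the policy and priority consistently, assuming Linux (note that SCHED_FIFO/SCHED_RR typically require root or CAP_SYS_NICE, and that the range check uses sched_get_priority_min()/max() as mentioned above):
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct sched_param param;
    int rc;

    /* SCHED_OTHER only accepts a static priority of 0. */
    memset(&param, 0, sizeof(param));
    param.sched_priority = 0;
    rc = pthread_setschedparam(pthread_self(), SCHED_OTHER, &param);
    if (rc != 0)  /* pthread functions return the error number directly */
        fprintf(stderr, "SCHED_OTHER: %s\n", strerror(rc));

    /* For SCHED_FIFO, pick a priority within the policy's valid range. */
    int min = sched_get_priority_min(SCHED_FIFO);
    int max = sched_get_priority_max(SCHED_FIFO);
    printf("SCHED_FIFO priority range: %d..%d\n", min, max);

    param.sched_priority = min;  /* e.g. 1 on Linux */
    rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (rc != 0)  /* typically EPERM without the right privileges */
        fprintf(stderr, "SCHED_FIFO: %s\n", strerror(rc));

    return 0;
}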

Related

pthread_cond_signal does not wake up the thread that waits

I am writing a program that creates a thread that prints 10 numbers. After it prints 5 of them, it waits, notifying the main thread, and then it continues with the next 5 numbers.
This is test.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <pthread.h>
#include <unistd.h>

int rem = 10;
int count = 5;
pthread_mutex_t mtx;
pthread_cond_t cond1;
pthread_cond_t cond2;

void *f(void *arg)
{
    int a;
    srand(time(NULL));
    while (rem > 0) {
        a = rand() % 100;
        printf("%d\n", a);
        rem--;
        count--;
        if (count == 0) {
            printf("time to wake main thread up\n");
            pthread_cond_signal(&cond1);
            printf("second thread waits\n");
            pthread_cond_wait(&cond2, &mtx);
            printf("second thread woke up\n");
        }
    }
    pthread_exit(NULL);
}
int main()
{
    pthread_mutex_init(&mtx, 0);
    pthread_cond_init(&cond1, 0);
    pthread_cond_init(&cond2, 0);
    pthread_t tids;
    pthread_create(&tids, NULL, f, NULL);
    while (1) {
        if (count != 0) {
            printf("main: waiting\n");
            pthread_cond_wait(&cond1, &mtx);
            printf("5 numbers are printed\n");
            printf("main: waking up\n");
            pthread_cond_signal(&cond2);
            break;
        }
        pthread_cond_signal(&cond2);
        if (rem == 0) break;
    }
    pthread_join(tids, NULL);
}
The output of the program is:
main: waiting
//5 random numbers
time to wake main thread up
second thread waits
5 numbers are printed
main: waking up
Since I call pthread_cond_signal(&cond2); I thought that the thread would wake up and print the rest of the numbers, but this is not the case. Any ideas why? Thanks in advance.
Summary
The issues have been summarized in comments, or at least most of them. So as to put an actual answer on record, however:
Pretty much nothing about the program's use of shared variables and synchronization objects is correct. Its behavior is undefined, and the specific manifestation observed is just one of the more likely in a universe of possible behaviors.
Accessing shared variables
If two different threads access (read or write) the same non-atomic object during their runs, and at least one of the accesses is a write, then all accesses must be properly protected by synchronization actions.
There is a variety of these, too large to cover comprehensively in a StackOverflow answer, but among the most common is to use a mutex to guard access. In this approach, a mutex is created in the program and designated for protecting access to one or more shared variables. Each thread that wants to access one of those variables locks the mutex before doing so. At some later point, the thread unlocks the mutex, lest other threads be permanently blocked from locking the mutex themselves.
Example:
pthread_mutex_t mutex; // must be initialized before use
int shared_variable;
// ...
void *thread_one_function(void *data) {
int rval;
// some work ...
rval = pthread_mutex_lock(&mutex);
// check for and handle lock failure ...
shared_variable++;
// ... maybe other work ...
rval = pthread_mutex_unlock(&mutex);
// check for and handle unlock failure ...
// more work ...
}
In your program, the rem and count variables are both shared between threads, and access to them needs to be synchronized. You already have a mutex, and using it to protect accesses to these variables looks like it would be appropriate.
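For instance, a minimal sketch of one way to guard an individual update in f() with the existing mutex (just the locking pattern; the condition-variable usage is a separate problem, covered next):
pthread_mutex_lock(&mtx);
// check for and handle lock failure in real code ...
rem--;
count--;
int c = count;   // copy shared state while the lock is held
pthread_mutex_unlock(&mtx);
if (c == 0) {
    // ... signaling logic ...
}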
Using condition variables
Condition variables have that name because they are designed to support a specific thread interaction pattern: that one thread wants to proceed past a certain point only if a certain condition, which depends on actions performed by other threads, is satisfied. Such requirements arise fairly frequently. It is possible to implement this via a busy loop, in which the thread repeatedly tests the condition (with proper synchronization) until it is true, but this is wasteful. Condition variables allow such a thread to instead suspend operation until a time when it makes sense to check the condition again.
The correct usage pattern for a condition variable should be viewed as a modification and specialization of the busy loop:
1. the thread locks a mutex guarding the data on which the condition is to be computed;
2. the thread tests the condition;
3. if the condition is satisfied, then this procedure ends;
4. otherwise, the thread waits on a designated condition variable, specifying the (same) mutex;
5. when the thread resumes after its wait, it loops back to (2).
Example:
pthread_cond_t cv; // must be initialized before use

void *thread_two_function(void *data) {
    int rval;

    // some work ...

    rval = pthread_mutex_lock(&mutex);
    // check for and handle lock failure ...
    while (shared_variable < 5) {
        rval = pthread_cond_wait(&cv, &mutex);
        // check for and handle wait failure ...
    }
    // ... maybe other work ...
    rval = pthread_mutex_unlock(&mutex);
    // check for and handle unlock failure ...

    // more work ...
}
Note that:
- the procedure terminates (at (3)) with the thread still holding the mutex locked. The thread has an obligation to unlock it, but sometimes it will want to perform other work under protection of that mutex first.
- the mutex is automatically released while the thread is waiting on the CV, and reacquired before the thread returns from the wait. This allows other threads the opportunity to access shared variables protected by the mutex.
- it is required that the thread calling pthread_cond_wait() have the specified mutex locked; otherwise, the call provokes undefined behavior.
- this pattern relies on threads signaling or broadcasting to the CV at appropriate times to notify any then-waiting threads that they might want to re-evaluate the condition for which they are waiting. That is not modeled in the example above; a sketch of the signaling side follows this list.
- multiple CVs can use the same mutex.
- the same CV can be used in multiple places and with different associated conditions. It makes sense to do this when all the conditions involved are affected by the same or related actions by other threads.
- condition variables do not store signals. Only threads that are already blocked waiting on the specified CV are affected by a pthread_cond_signal() or pthread_cond_broadcast() call.
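For completeness, here is a minimal sketch of the signaling side, reusing the mutex, cv, and shared_variable from the examples above: the writer updates the shared state under the mutex, then signals the CV so that waiters re-check their condition.
void *thread_three_function(void *data) {
    int rval;

    rval = pthread_mutex_lock(&mutex);
    // check for and handle lock failure ...

    shared_variable = 5;               // make the awaited condition true
    rval = pthread_cond_signal(&cv);   // wake (at least) one waiter to re-test
    // check for and handle signal failure ...

    rval = pthread_mutex_unlock(&mutex);
    // check for and handle unlock failure ...
    return NULL;
}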
Your program
Your program has multiple problems in this area, among them:
- Both threads access the shared variables rem and count without synchronization, and some of the accesses are writes. The behavior of the whole program is therefore undefined. Among the common manifestations would be that the threads do not observe each other's updates to those variables, though it's also possible that things seem to work as expected. Or anything else.
- Both threads call pthread_cond_wait() without holding the mutex locked. The behavior of the whole program is therefore undefined. "Undefined" means "undefined", but it is plausible that the UB would manifest as, for example, one or both threads failing to return from their wait after the CV is signaled.
- Neither thread employs the standard pattern for CV usage. There is no clear associated condition for either one, and the threads definitely don't test one. That leaves an implied condition of "this CV has been signaled", but that is unsafe because it cannot be tested before waiting. In particular, it leaves open this possible chain of events:
1. The main thread blocks waiting on cond1.
2. The second thread signals cond1.
3. The main thread runs all the way at least through signaling cond2 before the second thread proceeds to waiting on cond2.
Once (3) occurs, the program cannot avoid deadlock. The main thread breaks from the loop and tries to join the second thread; meanwhile, the second thread reaches its pthread_cond_wait() call and blocks awaiting a signal that will never arrive.
That chain of events can happen even if the issues called out in the previous points are corrected, and it could manifest exactly the observable behavior you report.
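To make that concrete, here is a minimal sketch of one way the interaction could be restructured (a sketch, not the only possible fix): every access to rem and count happens with mtx held, and each wait loops on an explicit condition. The turn variable is a hypothetical addition that records whose move it is.
int turn = 0; /* 0 = worker's turn to print, 1 = main's turn to acknowledge */

void *f(void *arg)
{
    srand(time(NULL));
    pthread_mutex_lock(&mtx);
    while (rem > 0) {
        printf("%d\n", rand() % 100);
        rem--;
        count--;
        if (count == 0) {
            turn = 1;
            pthread_cond_signal(&cond1);   /* wake main */
            while (turn != 0)              /* wait for main to hand control back */
                pthread_cond_wait(&cond2, &mtx);
            count = 5;
        }
    }
    pthread_mutex_unlock(&mtx);
    return NULL;
}

/* in main, instead of the busy loop: */
pthread_mutex_lock(&mtx);
while (turn != 1)
    pthread_cond_wait(&cond1, &mtx);       /* wait for the first 5 numbers */
printf("5 numbers are printed\n");
turn = 0;
pthread_cond_signal(&cond2);               /* let the worker continue */
pthread_mutex_unlock(&mtx);
pthread_join(tids, NULL);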

What does it mean for something to be thread safe in iOS?

I often come across the term "thread safe" and wonder what it means. For example, in Firebase or Realm, some objects are considered "thread safe". What exactly does it mean for something to be thread safe?
Thread Unsafe
- An object is thread-unsafe if it can be modified by more than one thread at the same time.
Thread Safe
- An object is thread-safe if it cannot be modified by more than one thread at the same time.
Generally, immutable objects are thread-safe.
An object is said to be thread safe if more than one thread can call methods or access the object's member data without any issues; an "issue" broadly being defined as a departure from the behaviour seen when the object is accessed from only one thread.
For example, an object that contains the code i = i + 1 for a regular integer i would not be thread safe: two threads might execute that statement concurrently, and one thread might read the original value of i, increment it, and write back the single incremented value, all at the same time as the other thread does the same. In that way, i would be incremented only once, where it ought to have been incremented twice.
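Here is a minimal C sketch of that lost-update race (the same idea applies to a Swift var; strictly speaking the unsynchronized access is undefined behavior, and whether increments are actually lost depends on timing and optimization):
#include <pthread.h>
#include <stdio.h>

static int i = 0;  // shared and unprotected: this is the bug

static void *incrementer(void *arg)
{
    for (int n = 0; n < 1000000; n++)
        i = i + 1;  // read-modify-write, not atomic
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, incrementer, NULL);
    pthread_create(&t2, NULL, incrementer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // usually prints less than 2000000 because increments are lost
    printf("i = %d\n", i);
    return 0;
}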
After searching for the answer, I got the following from this website:
Thread safe code can be safely called from multiple threads or concurrent tasks without causing any problems (data corruption, crashing, etc). Code that is not thread safe must only be run in one context at a time. An example of thread safe code is let a = ["thread-safe"]. This array is read-only and you can use it from multiple threads at the same time without issue. On the other hand, an array declared with var a = ["thread-unsafe"] is mutable and can be modified. That means it’s not thread-safe since several threads can access and modify the array at the same time with unpredictable results. Variables and data structures that are mutable and not inherently thread-safe should only be accessed from one thread at a time.
iOS Thread safe
[Atomicity, Visibility, Ordering]
[General lock, mutex, semaphore]
Thread safety means that your program works as expected. It is about a multithreading environment, where shared resources are subject to data races and race conditions[About].
Apple provides us with synchronization tools:
Atomicity
Atomic Operations - a lock-free mechanism based on hardware instructions - for example Compare-And-Swap (CAS)[More]...
Objective-C OSAtomic, atomic property attribute[About]
[Swift Atomic Operations]
Visibility
Volatile Variable - reads the value from memory (no caching)
Objective-C volatile
Ordering
Memory Barriers - guarantee up-to-date data[About]
Objective-C OSMemoryBarrier
Find problem in your code
Thread Sanitizer - uses self.recordAndCheckWrite(var) internally to figure out when (timestamp) and who (thread) accessed a variable
Synchronisation
Locks - a thread can take a lock, and then nobody else can access the resource. NSLock.
Semaphore - consists of a thread queue and a counter value, and has wait() and signal() APIs. A semaphore allows a number of threads (the counter value) to work with the resource at a given moment. DispatchSemaphore, POSIX semaphore - semaphore_t. An App Group allows sharing POSIX semaphores.
Mutex - mutual exclusion, mutually exclusive - a kind of semaphore where a thread can acquire it and work with the protected block as the single owner; all other threads are blocked until release. The main difference from a lock is that a mutex also works between processes (not only threads). It also includes a memory barrier.
var lock = os_unfair_lock_s()
os_unfair_lock_lock(&lock)
//critical section
os_unfair_lock_unlock(&lock)
NSLock - POSIX mutex lock - pthread_mutex_t; Objective-C @synchronized.
let lock = NSLock()
lock.lock()
//critical section
lock.unlock()
Recursive lock - lock reentrance - a thread can acquire the same lock several times. NSRecursiveLock.
Spin lock - the waiting thread repeatedly checks (polls) whether it can take the lock. It is useful for small operations: the thread is not blocked, so expensive operations like a context switch are not needed.
[GCD]
A common approach is to use a custom serial queue with async calls, so that all access to memory happens one operation at a time:
serial read and write access
private let queue = DispatchQueue(label: "com.company")
self.queue.async {
    //read and write access to shared resource
}
concurrent read and serial write access: when a write occurs, all previously submitted reads finish first -> then the write -> then all subsequent reads
private let queue = DispatchQueue(label: "com.company", attributes: .concurrent)
//read
self.queue.sync {
    //read
}
//write
self.queue.sync(flags: .barrier) {
    //write
}
Operations
[Actors]
actor MyData {
    var sharedVariable = "Hello"
}
//using
Task {
    await self.myData.sharedVariable = "World"
}
Multi threading:
[Concurrency vs Parallelism]
[Sync vs Async]
[Mutable vs Immutable] [let vs var]
[Swift thread safe Singleton]
[Swift Mutable/Immutable collection]
pthread - POSIX[About] thread
NSThread
To give a simple example: if something can be shared across multiple threads without any issue (like a crash), it is thread-safe. For example, if you have a constant (let value = ["Facebook"]) and it is shared across multiple threads, it is thread safe because it is read-only and cannot be modified. Whereas if you have a variable (var value = ["Facebook"]), it may cause a potential crash or data loss when shared across multiple threads, because its data can be modified.

pthreads SIGEV_THREAD and async-safe function calls

Having trouble tracking down an answer about the usage of SIGEV_THREAD...
When one sets SIGEV_THREAD as the notify method in sigevent struct, is it correct to assume that async-signal-safe functions must still be used within the notify_function to be invoked as the handler?
Also - is it correct to assume the thread is run as "detached"?
For example
notify thread
void my_thread(union sigval my_data)
{
    // is this ok or not (two non async-signal-safe functions)?
    printf("in the notify function\n");
    mq_send();
}
main function
(...)
se.sigev_notify = SIGEV_THREAD;
se.sigev_value.sival_ptr = &my_data;
se.sigev_notify_function = my_thread;
se.sigev_notify_attributes = NULL;
(...)
Please provide a reference if possible.
No, you don't need to use only async-signal-safe functions, because POSIX does not place any such limitation on the SIGEV_THREAD function. (The whole point of SIGEV_THREAD is that it lets you handle asynchronous notifications in a less constrained environment than a signal handler.)
As far as the thread being detached, POSIX says:
The function shall be executed in an environment as if it were the start_routine for a newly created thread with thread attributes specified by sigev_notify_attributes. If sigev_notify_attributes is NULL, the behavior shall be as if the thread were created with the detachstate attribute set to PTHREAD_CREATE_DETACHED. Supplying an attributes structure with a detachstate attribute of PTHREAD_CREATE_JOINABLE results in undefined behavior. The signal mask of this thread is implementation-defined.
This means: you must either leave sigev_notify_attributes as NULL, or set it to an attributes structure with the detachstate set to PTHREAD_CREATE_DETACHED - in both cases the thread will be created detached.
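For reference, here is a minimal sketch of SIGEV_THREAD usage with a POSIX timer (assuming Linux; older glibc may need linking with -lrt). The notification function runs on an ordinary detached thread, not in signal context, so ordinary calls like printf are fine:
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void notify_func(union sigval sv)
{
    // runs on a detached thread, so no async-signal-safety restrictions apply
    printf("timer fired, data = %d\n", *(int *)sv.sival_ptr);
}

int main(void)
{
    static int my_data = 42;
    struct sigevent se = {0};
    timer_t timerid;

    se.sigev_notify = SIGEV_THREAD;
    se.sigev_value.sival_ptr = &my_data;
    se.sigev_notify_function = notify_func;
    se.sigev_notify_attributes = NULL;  // thread is created detached

    if (timer_create(CLOCK_MONOTONIC, &se, &timerid) != 0) {
        perror("timer_create");
        return 1;
    }

    struct itimerspec its = { .it_value = { .tv_sec = 1 } };
    timer_settime(timerid, 0, &its, NULL);  // fire once, one second from now

    sleep(2);  // give the notification a chance to run
    return 0;
}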

dispatch_queue_set_specific vs. getting the current queue

I am trying to get my head around the difference and usage between these two:
static void *myFirstQueue = "firstThread";
dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);
Question #1
What is the difference between this:
dispatch_sync(firstQueue, ^{
    if (dispatch_get_specific(myFirstQueue))
    {
        //do something here
    }
});
and the following:
dispatch_sync(firstQueue, ^{
    if (firstQueue == dispatch_get_current_queue())
    {
        //do something here
    }
});
?
Question #2:
Instead of using the above (void*) myFirstQueue in
dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);
Can we use a static int * myFirstQueue = 0; instead?
My reasoning is based on the fact that:
dispatch_once_t is also 0 (is there any correlation here? By the way, I still don’t quite get why dispatch_once_t must be initialized to 0, although I have already read questions here on SO).
Question #3
Can you cite me an example of GCD Deadlock here?
Question #4
This might be a little too much to ask; I will ask anyway, in case someone happens to know this off on top of the head. If not, it would be OK to leave this part unanswered.
I haven’t tried this, because I really don’t know how. But my concept is this:
Is there any way we can "place a handle" in some queue that lets us keep a handle on it, and thus be able to detect when a deadlock occurs after the queue is spun off; and when there is one, since we kept a handle on the queue we previously set, somehow do something to unlock the deadlock?
Again, if this is too much to answer, either that or if my reasoning is completely undoable / off here (in Question #4), feel free to leave this part unanswered.
Happy New Year.
@san.t
With static void *myFirstQueue = 0;
We do this:
dispatch_queue_set_specific(firstQueue, &myFirstQueue, &myFirstQueue, NULL);
Totally understandable.
But if we do:
static void *myFirstQueue = 1;
//or any other number other than 0, it would be OK to revert back to the following?
dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);
Regarding dispatch_once_t:
Could you elaborate more on this:
Why must dispatch_once_t first be 0, and how and why does it act as a boolean at a later stage? Does this have to do with memory safety, or with the fact that the memory address was previously occupied by other objects that were not 0 (nil)?
As for Question #3:
Sorry, I might not have been completely clear: I didn't mean that I am experiencing a deadlock. I meant to ask whether someone can show me a scenario, in code, where GCD leads to a deadlock.
Lastly:
Hopefully you could answer Question #4. If not, as previously mentioned, it’s OK.
First, I really don't think you meant to make that queue concurrent. dispatch_sync()ing to a concurrent queue is not really going to accomplish much of anything (concurrent queues do not guarantee ordering between blocks running on them). So, the rest of this answer assumes you meant to have a serial queue there. Also, I'm going to answer this in general terms, rather than your specific questions; hopefully that's ok :)
There are two fundamental issues with using dispatch_get_current_queue() this way. One very broad one that can be summarized as "recursive locking is a bad idea", and one dispatch-specific one that can be summarized as "you can and often will have more than one current queue".
Problem #1: Recursive locking is a bad idea
The usual purpose of a private serial queue is to protect an invariant of your code ("invariant" being "something that must be true"). For example, if you're using a queue to guard access to a property so that it's thread-safe, then the invariant is "this property does not have an invalid value" (for example: if the property is a struct, then half the struct could have a new value and half could have the old value, if it was being set from two threads at once. A serial queue forces one thread or the other to finish setting the whole struct before the other can start).
We can infer that for this to make sense, the invariant has to hold when beginning execution of a block on the serial queue (otherwise, it clearly wasn't protected). Once the block has begun executing, it can break the invariant (say, set the property) without fear of messing up any other threads as long as the invariant holds again by the time it returns (in this example, the property has to be fully set).
Summarizing just to make sure you're still following: at the beginning and end of each block on a serial queue, the invariant that queue is protecting must hold. In the middle of each block, it may be broken.
If, inside the block, you call something which tries to use the thing protected by the queue, then you've changed this simple rule to a much much more complicated one: instead of "at the beginning and end of each block" it's "at the beginning, end, and at any point where that block calls something outside of itself". In other words, instead of thinking about your thread-safety at the block level, you now have to examine every individual line of each block.
What does this have to do with dispatch_get_current_queue()? The only reason to use dispatch_get_current_queue() here is to check "are we already on this queue?", and if you're already on the current queue then you're already in the scary situation above! So don't do that. Use private queues to protect things, and don't call out to other code from inside them. You should already know the answer to "am I on this queue?" and it should be "no".
This is the biggest reason dispatch_get_current_queue() was deprecated: to stop people from trying to simulate recursive locking (what I've described above) with it.
Problem #2: You can have more than one current queue!
Consider this code:
dispatch_async(queueA, ^{
    dispatch_sync(queueB, ^{
        //what is the current queue here?
    });
});
Clearly queueB is current, but we're also still on queueA! dispatch_sync causes the work on queueA to wait for the completion of work on queueB, so they're both effectively "current".
This means that this code will deadlock:
dispatch_async(queueA, ^{
    dispatch_sync(queueB, ^{
        dispatch_sync(queueA, ^{});
    });
});
You can also have multiple current queues by using target queues:
dispatch_set_target_queue(queueB, queueA);
dispatch_sync(queueB, ^{
    dispatch_sync(queueA, ^{ /* deadlock! */ });
});
What would really be needed here is something like a hypothetical "dispatch_queue_is_synchronous_with_queue(queueA, queueB)", but since that would only be useful for implementing recursive locking, and I've already described how that's a bad idea... it's unlikely to be added.
Note that if you only use dispatch_async(), then you're immune to deadlocks. Sadly, you're not at all immune to race conditions.
Question 1:
The two code snippets do the same thing: they "do some work" when the block is indeed running on firstQueue. However, they use different ways to detect that. The first one sets a non-NULL context ((void*)myFirstQueue) for a specific key (myFirstQueue) and later checks that the context is indeed non-NULL; the second checks by using the now-deprecated function dispatch_get_current_queue. The first method is preferred. But then it seems unnecessary to me: dispatch_sync already guarantees that the block runs on firstQueue.
Question 2:
Just using static int * myFirstQueue = 0; is not OK: that way, myFirstQueue is a NULL pointer, and dispatch_queue_set_specific(firstQueue, key, context, NULL); requires a non-NULL key & context to work. However, it will work with minor changes like this:
static void *myFirstQueue = 0;
dispatch_queue_t firstQueue = dispatch_queue_create("com.year.new.happy", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_set_specific(firstQueue, &myFirstQueue, &myFirstQueue, NULL);
This would use the address of the myFirstQueue variable as both key and context.
If we do:
static void *myFirstQueue = 1;
//or any other number other than 0, it would be OK to revert back to the following?
dispatch_queue_set_specific(firstQueue, myFirstQueue, (void*) myFirstQueue, NULL);
I guess it will be fine, as the myFirstQueue pointer won't be dereferenced, provided the last (destructor) parameter is NULL.
That dispatch_once_t is also 0 has nothing to do with this. It is 0 at first, and after the block has been dispatched once, its value changes to non-zero, essentially acting as a boolean.
Here are extracts from once.h. You can see that dispatch_once_t is actually a long, and that Apple's implementation detail requires it to be initially 0, probably because static & global variables default to zero. And you can see that there is the line:
if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
which essentially checks that the once predicate is still zero before calling the dispatch_once function. It is not related to memory safety.
/*!
 * @typedef dispatch_once_t
 *
 * @abstract
 * A predicate for use with dispatch_once(). It must be initialized to zero.
 * Note: static and global variables default to zero.
 */
typedef long dispatch_once_t;

/*!
 * @function dispatch_once
 *
 * @abstract
 * Execute a block once and only once.
 *
 * @param predicate
 * A pointer to a dispatch_once_t that is used to test whether the block has
 * completed or not.
 *
 * @param block
 * The block to execute once.
 *
 * @discussion
 * Always call dispatch_once() before using or testing any variables that are
 * initialized by the block.
 */
#ifdef __BLOCKS__
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_once(dispatch_once_t *predicate, dispatch_block_t block);

DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
_dispatch_once(dispatch_once_t *predicate, dispatch_block_t block)
{
    if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
        dispatch_once(predicate, block);
    }
}
#undef dispatch_once
#define dispatch_once _dispatch_once
#endif
Question 3:
Assuming myQueue is serial (concurrent queues are OK):
dispatch_async(myQueue, ^{
    dispatch_sync(myQueue, ^{
        NSLog(@"This would be a deadlock");
    });
});
Question 4: Not sure about it.

One thread showing interest in another thread (consumer / producer)

I would like to have the possibility for one thread (a consumer) to express interest in when another thread (a producer) makes something - but not all the time.
Basically, I want to make a one-shot consumer. Ideally the producer would go merrily about its business until one (or many) consumers signal that they want something, in which case the producer would push some data into a variable and signal that it has done so. The consumer waits until the variable has been filled.
It must also be possible for the one-shot consumer to decide that it has waited too long and abandon the wait (a la pthread_cond_timedwait).
I've been reading many articles and SO questions about different ways to synchronize threads. Currently I'm leaning towards a condition variable approach.
I would like to know if this is a good way to go about it (being a novice at thread programming, I probably have quite a few bugs in there), or if it would perhaps be better to (ab)use semaphores for this situation. Or something else entirely - just an atomic assignment to a pointer variable, if available? I currently don't see how those would work safely, probably because I'm trying to stay on the safe side; this application is supposed to run for months without locking up. Can I do without the mutexes in the producer, i.e. just signal the condition variable?
My current code looks like this:
consumer {
    pthread_mutex_lock(m);
    pred = true; /* signal interest */
    while (pred) {
        /* wait a bit and hopefully get an answer before timing out */
        pthread_cond_timedwait(c, m, t);
        /* it is possible that the producer never produces anything, in which
           case pred will stay true; we must "designal" interest here.
           Unfortunately this also means that a spurious wake could make us
           miss a good answer, no? How to combat this? */
        pred = false;
    }
    /* if we got here that means either an answer is available or we timed out */
    //... (do things with answer if not timed out, otherwise assign default answer)
    pthread_mutex_unlock(m);
}

/* this thread is always producing, but it doesn't always have listeners */
producer {
    pthread_mutex_lock(m);
    /* if we have a listener */
    if (pred) {
        buffer = "work!";
        pred = false;
        pthread_cond_signal(c);
    }
    pthread_mutex_unlock(m);
}
NOTE: I'm on a modern Linux and can make use of platform-specific functionality if necessary.
NOTE2: I used the seemingly global variables m, c, and t. But these would be different for every consumer.
High-level recap
I want a thread to be able to register for an event, wait for it for a specified time and then carry on. Ideally it should be possible for more than one thread to register at the same time and all threads should get the same events (all events that came in the timespan).
What you want is something similar to a std::future in C++ (doc). A consumer requests a task to be performed by a producer using a specific function. That function creates a struct called a future (or promise), holding a mutex, a condition variable associated with the task, and a void pointer for the result, and returns it to the caller. It also puts that struct, the task id, and the parameters (if any) in a work queue handled by the producer.
struct future_s {
    pthread_mutex_t m;
    pthread_cond_t c;
    int flag;
    void *result;
};

// basic task outline
struct task_s {
    struct future_s result;
    int taskid;
};

// specific "mytask" task
struct mytask_s {
    struct future_s result;
    int taskid;
    int p1;
    float p2;
};

struct future_s *do_mytask(int p1, float p2) {
    // allocate task data
    struct mytask_s *t = alloc_task(sizeof(struct mytask_s));
    t->p1 = p1;
    t->p2 = p2;
    t->taskid = MYTASK_ID;
    task_queue_add(t);
    return (struct future_s *)t;
}
Then the producer pulls the task out of the queue, processes it, and once finished, puts the result in the future and triggers the condition variable.
The consumer may wait for the future or do something else.
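A sketch of the producer's side of that handoff, treating flag as a "result ready" indicator (task_queue_take and compute_result are hypothetical placeholders):
void *producer_loop(void *arg)
{
    for (;;) {
        struct task_s *task = task_queue_take();  // blocks until a task arrives
        void *r = compute_result(task);           // process the task

        pthread_mutex_lock(&task->result.m);
        task->result.result = r;
        task->result.flag = 1;                    // mark the future as fulfilled
        pthread_cond_signal(&task->result.c);     // wake a waiting consumer
        pthread_mutex_unlock(&task->result.m);
    }
    return NULL;
}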
For cancellable futures, include a flag in the struct to indicate that the task has been cancelled. The future is then either:
- delivered: the consumer is the owner and must deallocate it, or
- cancelled: the producer remains the owner and disposes of it.
The producer must therefore check that the future has not been cancelled before triggering the condition variable.
For a "shared" future, the flag turns into a number of subscribers. If the number is above zero, the result must be delivered. Which consumer owns the result is left to be decided among the consumers (first come, first served? Is the result passed along to all consumers?).
Any access to the future struct must be mutexed (which works well with the condition variable).
Regarding the queues, they may be implemented using a linked list or an array (for versions with limited capacity). Since the functions creating the futures may be called concurrently, the queue has to be protected with a lock, which is usually implemented with a mutex.
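To tie this back to the original question, here is a minimal sketch of the consumer side, assuming the future_s struct above (with flag used as a "result ready" indicator, initialized when the future is created); pthread_cond_timedwait provides the "abandon the wait" behavior asked about:
#include <errno.h>
#include <time.h>

// Hypothetical helper; error handling trimmed for brevity.
// Returns the result, or NULL if the wait timed out.
void *future_wait(struct future_s *f, int timeout_sec)
{
    struct timespec t;
    void *result = NULL;

    clock_gettime(CLOCK_REALTIME, &t);  // pthread_cond_timedwait takes an absolute time
    t.tv_sec += timeout_sec;

    pthread_mutex_lock(&f->m);
    while (!f->flag) {                  // the loop guards against spurious wakeups
        if (pthread_cond_timedwait(&f->c, &f->m, &t) == ETIMEDOUT)
            break;
    }
    if (f->flag)
        result = f->result;
    pthread_mutex_unlock(&f->m);
    return result;
}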
