There are three producers, each with its own queue. Two consumers each consume messages both from a specific queue and from a common queue. How do I synchronize them with pthread_cond?
The problem may need a more exact specification. In general, just have the producers signal (for a queue with a single consumer) or broadcast (in the case of producer 3, whose queue has multiple consumers) when a queue becomes non-empty.
Consumers just work as fast as they can, going to sleep when the queues they read from are found empty. Protect all queue access inside critical sections.
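Here is a minimal sketch of that scheme, with names, item types, and buffer sizes invented for illustration. It uses one mutex for all queues (the critical sections) and one condition variable per consumer, so producer 3 wakes both consumers; a single shared condition variable with pthread_cond_broadcast would work just as well. Handling of a full queue is omitted for brevity.

#include <pthread.h>
#include <stdbool.h>

#define QCAP 64

typedef struct {
    int buf[QCAP];
    int head, tail, count;
} queue_t;

static queue_t private_q[2];            /* one queue per consumer        */
static queue_t common_q;                /* filled by the third producer  */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready[2] = { PTHREAD_COND_INITIALIZER,
                                    PTHREAD_COND_INITIALIZER };

static bool q_empty(const queue_t *q) { return q->count == 0; }

static void q_push(queue_t *q, int v)
{
    q->buf[q->tail] = v;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;                          /* full-queue handling omitted  */
}

static int q_pop(queue_t *q)
{
    int v = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    return v;
}

/* Producers 1 and 2: push to one consumer's queue, wake that consumer. */
void produce_private(int consumer, int value)
{
    pthread_mutex_lock(&lock);
    q_push(&private_q[consumer], value);
    pthread_cond_signal(&ready[consumer]);
    pthread_mutex_unlock(&lock);
}

/* Producer 3: push to the common queue, wake both consumers. */
void produce_common(int value)
{
    pthread_mutex_lock(&lock);
    q_push(&common_q, value);
    pthread_cond_signal(&ready[0]);
    pthread_cond_signal(&ready[1]);
    pthread_mutex_unlock(&lock);
}

/* Consumer: sleep while both queues it reads from are empty. */
int consume(int consumer)
{
    pthread_mutex_lock(&lock);
    while (q_empty(&private_q[consumer]) && q_empty(&common_q))
        pthread_cond_wait(&ready[consumer], &lock);

    int value = !q_empty(&private_q[consumer])
                    ? q_pop(&private_q[consumer])
                    : q_pop(&common_q);
    pthread_mutex_unlock(&lock);
    return value;
}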
If you need further clarification, add a comment and I will elaborate as needed.
The question is whether an app should have one queue instance for all async operations, or whether several queues can be created.
With one queue it's pretty simple because all tasks are executed based on assigned priority. So for me it's more favourable because there is no need to write extra code.
In the case of multiple queues, at least one of them should be the main one.
So some sort of queue manager would have to be implemented that can suspend the "sub" queues and allow operations from the main queue to execute when needed.
The analogy with having only a single connection to a database makes me think that one centralised queue should be used for all async operations.
So what would you recommend? What are the best practices?
After doing some searching and brainstorming, I came up with a solution.
In my application I'm going to use 2 async queues:
one queue (an NSOperationQueue) for all background operations (like parsing downloaded .json files) and another queue (the one already managed internally by the NSURLSession class) for requests (API requests, downloading images, etc.).
I read here in the Apple documentation that for concurrent queues both DISPATCH_QUEUE_CONCURRENT and the global concurrent dispatch queues can be used; however, I am uncertain what the difference between the two is.
For example:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{ });
and
dispatch_queue_t queue = dispatch_queue_create("custom", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{ });
I read something about barriers, but I'm not sure how they come into the picture in relation to these two. Can anybody please explain the use cases for both of these?
There are four global concurrent queues, one for each of the four priorities; they always exist and are always available when you need them. So you don't have to create a concurrent queue just to execute some block in the background. dispatch_get_global_queue() returns one of these four queues.
You can, if you want, create your own queues and delete them when you don't need them anymore.
What you are reading about barriers: the global queues are used by everyone. Using a barrier in a queue that is used by everyone is, let's say, highly impolite. That's why you shouldn't use barriers on global queues, only on queues that you created yourself.
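As a rough illustration of why barriers belong on a queue you created, here is a hypothetical reader/writer sketch (names are made up): reads run concurrently on a private concurrent queue, while a barrier write waits for in-flight reads to finish and then runs alone. Apple's documentation also notes that a barrier block submitted to a global queue simply behaves like a normal async block, so it would not even give you the exclusion you were after.

#include <dispatch/dispatch.h>

static dispatch_queue_t rw_queue;   /* created once, owned by this module */
static int shared_value;

void rw_init(void)
{
    rw_queue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
}

int read_value(void)
{
    __block int v;
    /* Reads may run concurrently with each other. */
    dispatch_sync(rw_queue, ^{ v = shared_value; });
    return v;
}

void write_value(int v)
{
    /* The barrier waits for running reads to finish, runs alone,
     * then lets later reads proceed. */
    dispatch_barrier_async(rw_queue, ^{ shared_value = v; });
}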
In the first case, you are requesting one of the "global queues" which are provided for your application. "Anyone, anywhere, in your app" can get an easy reference to that queue just by asking dispatch_get_global_queue() for it. The queues are, in effect, singletons. And, nearly all of the time, that's all you need. The OS helpfully builds them for you and makes them really easy for everybody (in your app) to get to.
dispatch_queue_create() is used in those rare times when you actually need "another queue." It's bound to your application just like all the other queues are, but you manage it yourself. You have to provide some means for other parts of your app to get that queue-handle. You might do this if, say, in your design it's really necessary for different kinds of requests to be placed onto different queues.
And, as I said, most of the time the global queues are all you really need; the OS makes them readily available, and that's exactly what makes them so handy. Typically, you "put lots of messages on just a few queues," with a one-to-many, many-to-many, or many-to-one relationship between readers and writers.
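A small sketch of the two options described in these answers, with the accessor name and queue label invented for the example: grabbing a ready-made global queue versus creating your own queue and giving the rest of the app a way to reach it.

#include <dispatch/dispatch.h>

/* Option 1: anyone, anywhere in your app, can just ask for a global queue. */
void do_in_background(void (^work)(void))
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), work);
}

/* Option 2: a queue you created yourself, exposed through an accessor so
 * other parts of the app can get the same handle. */
dispatch_queue_t app_parsing_queue(void)
{
    static dispatch_queue_t queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        queue = dispatch_queue_create("com.example.parsing",
                                      DISPATCH_QUEUE_CONCURRENT);
    });
    return queue;
}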
Are queues in FreeRTOS mutually exclusive out of the box? By that I mean: shall I create some kind of mutual exclusion for writing to or reading from a queue, or is it already implemented by the functions xQueueReceive and xQueueSend?
If you look at the source in "queue.c" you will notice that the xQueueGenericSend() and xQueueGenericReceive() functions use the taskENTER_CRITICAL()/taskEXIT_CRITICAL() macro pair to ensure atomic operation, which, in a sense, is the kind of mutual exclusion you are asking for.
FreeRTOS queues are thread-safe, you don't need to implement your own locking. See the FreeRTOS documentation about queues:
Queues are the primary form of intertask communications. They can be used to send messages between tasks, and between interrupts and tasks. In most cases they are used as thread safe FIFO (First In First Out) buffers.
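For example, a producer task and a consumer task can share a queue directly, with no extra mutex around xQueueSend()/xQueueReceive(); the task names, item type, queue length, and timing below are illustrative only.

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t xNumberQueue;

static void vProducerTask(void *pvParameters)
{
    (void)pvParameters;
    int value = 0;
    for (;;) {
        /* Safe to call from several tasks at once; no extra lock needed. */
        xQueueSend(xNumberQueue, &value, portMAX_DELAY);
        value++;
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

static void vConsumerTask(void *pvParameters)
{
    (void)pvParameters;
    int received;
    for (;;) {
        /* Blocks until an item is available. */
        if (xQueueReceive(xNumberQueue, &received, portMAX_DELAY) == pdPASS) {
            /* ... process 'received' ... */
        }
    }
}

int main(void)
{
    xNumberQueue = xQueueCreate(16, sizeof(int));
    xTaskCreate(vProducerTask, "prod", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(vConsumerTask, "cons", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();   /* does not return if the scheduler starts */
    return 0;
}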
I'm working on some code whose original developer I can't contact.
He passes one class a reference to another class's serial dispatch queue, and I don't see any purpose in not just creating another dispatch queue (each class is a singleton).
It's not creating issues but I'd like to understand it further, so any insight into the positive implications is appreciated, thank you.
Edit: I suggest reading all the answers here, they explain a lot.
It's actually not a good idea to share queues in this fashion (and no, not because they are expensive; they're not, quite the converse). The rationale is that no one but a queue's creator knows exactly what that queue's semantics are. Is it serial? Concurrent? High priority? Low priority? All are possible. Once you start passing around internal queues that were really created for the benefit of a specific class, an external caller can schedule work on them that causes a mutual deadlock or otherwise behaves unexpectedly alongside the other items on that queue: caller A knew to expect concurrent behavior, while caller B assumed it was a serial queue without any of the "gotchas" that concurrent execution implies.
Queues should therefore, wherever possible, be hidden implementation details of a class. The class can export methods for scheduling work against its internal queue as necessary, but the methods should be the only accessors since they are the only ones who know for sure how to best access and schedule work on that specific type of queue!
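Here is a sketch of that advice, with every name invented for the example: the queue is a private detail of one source file, and callers only ever see a function that schedules work on it, never the queue itself.

#include <dispatch/dispatch.h>

/* The public header would declare only this: */
void image_cache_store_async(int key, int value);

/* The queue stays static, so no caller can dispatch to it directly or make
 * wrong assumptions about whether it is serial or concurrent. */
static dispatch_queue_t cache_queue(void)
{
    static dispatch_queue_t q;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        q = dispatch_queue_create("com.example.image-cache",
                                  DISPATCH_QUEUE_SERIAL);
    });
    return q;
}

static int cache[256];   /* stand-in for the real cached data */

void image_cache_store_async(int key, int value)
{
    dispatch_async(cache_queue(), ^{
        cache[key & 0xFF] = value;   /* touched only on the private queue */
    });
}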
If it's a serial queue, then they may be intending to serialize access to some resource shared between all objects that share it.
Dispatch queues are somewhat expensive to create, and tie up system resources. If you can share one, the system runs more efficiently. For that matter, if your app does work in the background, using a shared queue allows you to manage a single pool of tasks to be completed. So yes, there are good reasons for using a shared dispatch queue.
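To make the "serialize access to a shared resource" rationale concrete, here is a hypothetical sketch in which two otherwise independent modules funnel their log writes through one shared serial queue, so the writes never interleave. The names are illustrative, and callers are assumed to pass string literals, since the blocks capture only the pointer.

#include <dispatch/dispatch.h>
#include <stdio.h>

static dispatch_queue_t log_queue;   /* one serial queue, shared by both modules */

void logging_init(void)
{
    log_queue = dispatch_queue_create("com.example.shared-log", DISPATCH_QUEUE_SERIAL);
}

void module_a_log(const char *msg)
{
    /* Writes from A and B are serialized by the shared queue. */
    dispatch_async(log_queue, ^{ fprintf(stderr, "[A] %s\n", msg); });
}

void module_b_log(const char *msg)
{
    dispatch_async(log_queue, ^{ fprintf(stderr, "[B] %s\n", msg); });
}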
I'm writing a moderately complex iOS program that needs to have multiple threads for some of its longer operations (parsing, network connections, etc.). However, I'm confused as to what the difference is between dispatch_get_global_queue and dispatch_queue_create.
Which one should I use and could you give me a simple explanation of what the difference is in general? Thanks.
As the documentation describes, a global queue is good for concurrent tasks (i.e. you're going to dispatch various tasks asynchronously and you're perfectly happy if they run concurrently) and if you don't want to encounter the theoretical overhead of creating and destroying your own queue.
Creating your own queue is very useful if you need a serial queue (i.e. you need the dispatched blocks to be executed one at a time). This can be useful in many scenarios, such as when each task is dependent upon the preceding one or when coordinating interaction with some shared resource from multiple threads.
Less common, but you will also want to create your own queue if you need to use barriers in conjunction with a concurrent queue. In that scenario, create a concurrent queue (i.e. dispatch_queue_create with the DISPATCH_QUEUE_CONCURRENT option) and use the barriers in conjunction with that queue. You should never use barriers on global queues.
My general counsel is: if you need a serial queue (or need to use barriers), create a queue. If you don't, use the global queue and bypass the overhead of creating your own.
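A tiny demonstration of that distinction, assuming a program linked against libdispatch and compiled with blocks support (e.g. clang -fblocks on Apple platforms): the blocks on the created serial queue finish strictly in order, while the same blocks on a global queue may run concurrently and finish in any order.

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    dispatch_queue_t serial = dispatch_queue_create("com.example.serial",
                                                    DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t global = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group  = dispatch_group_create();

    for (int i = 0; i < 5; i++) {
        /* Serial queue: strictly one at a time, completes in FIFO order. */
        dispatch_group_async(group, serial, ^{ printf("serial %d\n", i); });
        /* Global queue: started in FIFO order, but blocks may run
         * concurrently, so completion order is not guaranteed. */
        dispatch_group_async(group, global, ^{ printf("global %d\n", i); });
    }

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}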
If you want a concurrent queue, but want to control how many operations can run concurrently, you can also consider using NSOperationQueue which has a maxConcurrentOperationCount property. This can be useful when doing network operations and you don't want too many concurrent requests being submitted to your server.
Just posted in a different answer, but here is something I wrote quite a while back:
The best way to conceptualize queues is to first realize that at the very low-level, there are only two types of queues: serial and concurrent.
Serial queues are monogamous, but uncommitted. If you give a bunch of tasks to each serial queue, it will run them one at a time, using only one thread at a time. The uncommitted aspect is that serial queues may switch to a different thread between tasks. Serial queues always wait for a task to finish before going to the next one. Thus tasks are completed in FIFO order. You can make as many serial queues as you need with dispatch_queue_create.
The main queue is a special serial queue. Unlike other serial queues, which are uncommitted, in that they are "dating" many threads but only one at time, the main queue is "married" to the main thread and all tasks are performed on it. Jobs on the main queue need to behave nicely with the runloop so that small operations don't block the UI and other important bits. Like all serial queues, tasks are completed in FIFO order.
If serial queues are monogamous, then concurrent queues are promiscuous. They will submit tasks to any available thread or even make new threads depending on system load. They may perform multiple tasks simultaneously on different threads. It's important that tasks submitted to a concurrent queue are thread-safe and minimize side effects. Tasks are submitted for execution in FIFO order, but order of completion is not guaranteed. There are four global concurrent queues, one per priority, which you don't create but fetch with dispatch_get_global_queue; you can also create your own concurrent queues with DISPATCH_QUEUE_CONCURRENT, as noted above.
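A common pattern that falls out of this: do the heavy work on a global concurrent queue, then hop back to the main queue (the serial queue married to the main thread) to publish the result. update_ui() below is a hypothetical placeholder for whatever must run on the main thread.

#include <dispatch/dispatch.h>

void update_ui(int result);   /* hypothetical; must run on the main thread */

void start_background_work(void)
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        int result = 0;
        for (int i = 1; i <= 1000; i++)    /* stand-in for parsing, networking, ... */
            result += i;

        dispatch_async(dispatch_get_main_queue(), ^{
            update_ui(result);             /* runs serially on the main thread */
        });
    });
}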
edit: blog post expanding on this answer: http://amattn.com/p/grand_central_dispatch_gcd_summary_syntax_best_practices.html
One returns an existing global queue; the other creates a new one. Instead of using GCD, I would consider using NSOperation and an operation queue. You can find more information about it in this guide. Typically, if you want the operations to execute concurrently, you want to create your own queue and put your operations in it.