Are queues in FreeRTOS thread-safe out of the box? By that I mean: do I need to create some kind of mutual exclusion for writing to or reading from a queue, or is it already implemented by the xQueueSend and xQueueReceive functions?
If you look at the source in "queue.c" you will notice that the xQueueGenericSend() and xQueueGenericReceive() functions use the taskENTER_CRITICAL()/taskEXIT_CRITICAL() macro pair to ensure the function operates atomically, which, in a sense, is the kind of mutual exclusion you are asking for.
FreeRTOS queues are thread-safe; you don't need to implement your own locking. See the FreeRTOS documentation about queues:
Queues are the primary form of intertask communications. They can be
used to send messages between tasks, and between interrupts and tasks.
In most cases they are used as thread safe FIFO (First In First Out) buffers.
When should we use a semaphore vs. a dispatch group vs. an operation queue?
What I understood is:
Use a semaphore: when multiple threads want to access a shared resource.
Use a dispatch group: when you want to be notified after all the tasks added to the dispatch group finish their execution.
Use an operation queue: when operation C should start only after A and B finish their execution; that is, C has a dependency on A and B.
Is my understanding correct?
I’m gathering you’re focusing on these three techniques’ ability to manage dependencies between units of work. Bottom line, semaphores are a low-level tool, dispatch groups represent a higher level of abstraction, and operation queues are even more high-level.
A few observations:
As a general rule, semaphores are a low-level tool that should be used sparingly as they are easily misused (e.g., easy to accidentally cause deadlocks, easy to block the main thread, even when used properly they unnecessarily block a thread which is inefficient, etc.). There are almost always better, higher-level tools.
For example, when doing synchronization, locks and GCD queues generally not only offer higher-level interfaces, but are also more efficient.
Dispatch groups are a slightly higher level tool, and a great way of notifying you when a series of GCD dispatched blocks of code are done. So, if you’re already using GCD, dispatch groups are a logical solution.
Note, I’d advise avoiding the wait function (whether the semaphore or the dispatch group rendition); use the dispatch group’s notify method instead. Using notify, you mitigate deadlock risks, avoid unnecessarily tying up threads, avoid risking blocking the main thread, etc. The dispatch group’s wait function only re-introduces some of the same potential semaphore problems. But it’s hard(er) to go wrong when using notify.
Operation queues are an even higher-level tool. Yes, you can manage dependencies as you outlined, but you can also do more general “run series of asynchronous operations sequentially” or “run series of asynchronous operations, but not more than x operations at a time”. It’s a great way of managing series of asynchronous tasks.
But operations are more than just a way of managing a series of asynchronous units of work. The other benefit is that they provide an established framework for wrapping a unit of work in a discrete object. This can help us achieve a better separation of responsibilities within our code: you can have a queue for network operations, a queue for image-processing operations, etc., and avoid scenarios where, for example, we bury all of this code in our view controllers (lol).
So, as a gross over-simplification, I’d suggest:
avoiding semaphores altogether;
using dispatch groups with notify pattern if you want to be notified when a bunch of dispatched blocks of code are done; and
considering operation queues if you want to abstract complicated asynchronous code into distinct objects or have more complicated dependencies/concurrency scenarios with asynchronous tasks.
All of that having been said, nowadays, the Swift concurrency system (a.k.a., async-await) obviates most of the above patterns, allowing one to write elegant, readable code that captures asynchronous processes.
Three producers, each with its own queue. Two consumers each consume messages both from a specific queue and from a common queue. How do I synchronize them with pthread_cond?
The problem may need a more exact specification. In general, just have the producers signal (for a single consumer) or broadcast (in the case of producer 3, which feeds multiple consumers) when a queue becomes non-empty.
Consumers just work as fast as they can, going to sleep when the queues they read from are found empty. Protect all queue access inside critical sections.
If you need further clarification, add a comment and I will elaborate as needed.
I am new to iOS development, and I am quite confused about the two concepts "thread" and "queue". All I know is that they are both about multithreaded programming. Can anyone explain these two concepts and the difference between them?
Thanks in advance!
How NSOperationQueue and NSThread Work:
NSThread:
iOS developers have to write the code for the work they want to perform, as well as for the creation and management of the threads themselves.
iOS developers have to think carefully about their plan of action for using threads.
iOS developers have to manage possible problems such as thread reusability, locking, etc. themselves.
Threads also consume more memory.
NSOperationQueue:
The NSOperation class is an abstract class which encapsulates the code and data associated with a single task.
Developers need to use a subclass or one of the system-defined subclasses of NSOperation to perform a task.
Add operations into NSOperationQueue to execute them.
An NSOperationQueue runs its operations on worker threads; by default several operations may run concurrently, in an order determined by their readiness and priority rather than strictly by the order they were added.
Operation queues handle all of the thread management, ensuring that operations are executed as quickly and efficiently as possible.
An operation queue executes operations either directly by running them on secondary threads or indirectly using GCD (Grand Central Dispatch).
It takes care of all of the memory management and greatly simplifies the process.
If you don’t want to use an operation queue, you can also execute an operation by calling its start method directly, but doing so may make your code more complex.
How To Use NSThread And NSOperationQueue:
NSThread:
Though operation queues are the preferred way to perform tasks concurrently, depending on the application there may still be times when you need to create custom threads.
Threads are still a good way to implement code that must run in real time.
Use threads for specific tasks that cannot be implemented in any other way.
If you need more predictable behavior from code running in the background, threads may still offer a better alternative.
NSOperationQueue:
Use NSOperationQueue when you have more complex operations you want to run concurrently.
NSOperation allows for subclassing, dependencies, priorities, and cancellation, and supports a number of other higher-level features.
NSOperation actually uses GCD under the hood so it is as multi-core, multi-thread capable as GCD.
Now you should be aware of the advantages and disadvantages of NSThread and NSOperationQueue. You can use either of them as the needs of your application dictate.
Before you read my answer you might want to consider reading this - Migrating away from Threads
I am keeping the discussion theoretical as your question does not have any code samples. Both of these constructs are needed to increase app responsiveness and usability.
A message queue is a data structure for holding messages from the time they're sent until the time the receiver retrieves and acts on them. Generally queues are used as a way to 'connect' producers (of data) & consumers (of data).
A thread pool is a pool of threads that do some sort of processing. A thread pool will normally have some sort of thread-safe queue (see the message queue above) attached to allow you to queue up jobs to be done; here the queue would usually be termed a 'task queue'.
So a thread pool could exist at your producer end (generating data) or at your consumer end (processing the data), and the way to 'pass' that data would be through queues. Why the need for this "middleman"?
It decouples the systems. Producers do not know about consumers & vice versa.
Consumers are not bombarded with data if there is a spike in producer output. The queue length increases, but the consumers are safe.
Example:
In iOS the main thread, also called the UI thread, is very important because it is in charge of dispatching events to the appropriate widget, and this includes drawing events: basically, the UI that the user sees & interacts with.
If you touch a button on screen, the UI thread dispatches the touch event to the app, which in turn sets its pressed state and posts a request to the event queue. The UI thread dequeues the request and notifies the widget to redraw itself.
I was wondering what is the difference in performance between these two.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
// perform complex operation
// dispatch back to main thread to update UI
});
dispatch_async(_myCustomConcurrentQueue, ^{
// perform complex operation
// dispatch back to main thread to update UI
});
My assumption is that the global queues are shared across the OS and other applications, so they need to handle very quick background tasks that finish fast, whereas custom queues you create are separate from the global ones, can run a different kind of task, and are returned to the pool once released. So my assumption is that my custom queue performs better than the global queue for a complex operation.
What are your thoughts? Which performs better? Are they the same?
While the high-priority global queue might theoretically be faster (since you don't have to create the queue, slightly different thread priority), the difference between that and your own custom concurrent queue is unlikely to be observable. There are two reasons, though, that you might want to use your own custom queues:
Certain features, notably dispatch barriers, are unavailable on global queues, so if you need those features, you'll want to use a custom queue.
When debugging your app, it can also be useful to use your own queues with meaningful names, so that you can more easily identify the individual threads in the debugger.
But there are no material performance reasons to choose high priority global concurrent queue vs a custom concurrent queue.
Well, you don't say how _myCustomConcurrentQueue was created (it could be a serial queue or a concurrent queue), but assuming it's a concurrent queue, it will potentially have a different priority than the global concurrent queues, both in terms of how GCD dispatches blocks or functions from its internal "queue of queues" list, and in terms of the actual thread priority of the thread(s) created to do the work.
Please read the dispatch_queue_create(3) man page and pay specific attention to the "GLOBAL CONCURRENT QUEUES" section. It contains some very informative verbiage on this exact topic (too much to cut-and-paste here).
I'm pretty sure that if you create your own queue, it ultimately targets one of the global queues behind the scenes. I guess it's more of a preference thing.
I'm working on some code that I can't contact the original developer.
He passes a class a reference to another class's serial dispatch queue and I'm not seeing any purpose to him not just creating another dispatch queue (each class is a singleton).
It's not creating issues but I'd like to understand it further, so any insight into the positive implications is appreciated, thank you.
Edit: I suggest reading all the answers here, they explain a lot.
It's actually not a good idea to share queues in this fashion (and no, not because they are expensive; they're not, quite the converse). The rationale is that it's not clear to anyone but a queue's creator just what the semantics of the queue are. Is it serial? Concurrent? High priority? Low priority? All are possible. Once you start passing around internal queues that were actually created for the benefit of a specific class, an external caller can schedule work on one that causes a mutual deadlock, or that otherwise behaves in an unexpected fashion with the other items on that queue, because caller A knew to expect concurrent behavior while caller B was thinking it was a serial queue, without any of the "gotchas" that concurrent execution implies.
Queues should therefore, wherever possible, be hidden implementation details of a class. The class can export methods for scheduling work on its internal queue as necessary, but those methods should be the only accessors, since they are the only ones that know for sure how best to access and schedule work on that specific type of queue!
If it's a serial queue, then they may be intending to serialize access to some resource shared between all objects that share it.
Dispatch queues are somewhat expensive to create, and tie up system resources. If you can share one, the system runs more efficiently. For that matter, if your app does work in the background, using a shared queue allows you to manage a single pool of tasks to be completed. So yes, there are good reasons for using a shared dispatch queue.