Is it possible to use TLS in GCD? I'd like to dispatch tasks that need to access hardware driver queues. I need to associate a queue with each thread so that they can be both load balanced and accessed concurrently.
The equivalent of TLS in GCD is dispatch_queue_set_specific/dispatch_get_specific. Associating a queue with a particular thread isn't something you can do though.
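A minimal sketch of the queue-specific API using GCD's C interface; the driver_ctx_t struct and the queue label are made up for illustration:

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-queue context for the example. */
typedef struct {
    int driver_channel;
} driver_ctx_t;

static char ctx_key;   /* its address serves as the unique key */

int main(void)
{
    dispatch_queue_t q = dispatch_queue_create("com.example.driver", DISPATCH_QUEUE_SERIAL);

    driver_ctx_t *ctx = malloc(sizeof *ctx);
    ctx->driver_channel = 3;

    /* Attach the context to the queue; free() is called when the value is
       replaced or the queue is released. */
    dispatch_queue_set_specific(q, &ctx_key, ctx, free);

    dispatch_async(q, ^{
        /* From a block running on q (or a queue targeting q), look the value up again. */
        driver_ctx_t *c = dispatch_get_specific(&ctx_key);
        printf("driver channel = %d\n", c->driver_channel);
    });

    dispatch_main();   /* never returns; keeps the example process alive */
}

The association lives on the queue, not on whichever thread happens to run the block, which is the key difference from thread-local storage.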
According to Apple's documentation, tasks are still started in the order in which they were added to a concurrent queue, so it seems to me that there is no difference between a concurrent queue and a serial queue whose tasks are all submitted with async.
Correct me if I'm missing something.
I've read a bunch of documents and did not find the answer.
The difference is how many tasks may run at the same time:
A serial queue only processes (runs) one task at a time, one after another. A concurrent queue may process multiple tasks (on multiple threads) at the same time.
You typically use a serial queue to ensure that only one task accesses a resource at a time. These are the scenarios where you would traditionally use a mutex.
If you have tasks that would benefit from (and are able to) running concurrently, or tasks that are completely independent and thus don't care, you usually use a concurrent queue.
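To make the difference concrete, here is a small sketch using GCD's C API; the queue labels are arbitrary and sleep() just simulates work:

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    // Both queues start tasks in FIFO order; they differ in how many may run at once.
    dispatch_queue_t serial     = dispatch_queue_create("com.example.serial",
                                                        DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent",
                                                        DISPATCH_QUEUE_CONCURRENT);

    for (int i = 0; i < 4; i++) {
        dispatch_async(serial, ^{
            printf("serial %d\n", i);      // never overlaps another serial task
            sleep(1);
        });
        dispatch_async(concurrent, ^{
            printf("concurrent %d\n", i);  // may overlap the other concurrent tasks
            sleep(1);
        });
    }

    dispatch_main();  // keep the process alive so the async work can run
}

The serial tasks take roughly four seconds in total; the concurrent ones finish in about one second because several run at the same time, even though they all started in order.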
I have 5 tasks (functions) in 5 different files which are running simultaneously. I want to ensure that once any one function starts running, no other function runs until it completes its work.
I want to implement this using FreeRTOS.
Example:
foo1.c -> Task1
foo2.c -> Task2
foo3.c -> Task3
foo4.c -> Task4
foo5.c -> Task5
It sounds like you need a mutex. Each task acquires the mutex when it starts running and releases it when it is done. While any one task is running, the others are blocked on the mutex.
You need to use a mutex (mutual exclusion) semaphore.
Here is an example protecting the Serial Port of an ATmega Arduino Clone. This ensures that messages presented to the serial port by multiple tasks are not interleaved (corrupted).
You can see that the semaphore is taken before attempting to write to the serial port, and then given (freed) when the task has completed its activities.
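The referenced example code isn't reproduced above, so here is a minimal sketch of the same pattern in plain C; uart_send_string() is a placeholder for whatever serial-write routine your platform provides, and the task names and messages are illustrative:

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xSerialMutex;        /* guards the shared serial port */

extern void uart_send_string(const char *s);  /* placeholder serial-write routine */

static void vReportTask(void *pvParameters)
{
    const char *msg = pvParameters;
    for (;;) {
        /* Take the mutex before touching the serial port... */
        if (xSemaphoreTake(xSerialMutex, portMAX_DELAY) == pdTRUE) {
            uart_send_string(msg);            /* message goes out without interleaving */
            xSemaphoreGive(xSerialMutex);     /* ...and give it back when finished */
        }
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}

int main(void)
{
    xSerialMutex = xSemaphoreCreateMutex();   /* create the mutex before any task uses it */
    xTaskCreate(vReportTask, "Task1", configMINIMAL_STACK_SIZE + 64, "hello from task 1\r\n", 1, NULL);
    xTaskCreate(vReportTask, "Task2", configMINIMAL_STACK_SIZE + 64, "hello from task 2\r\n", 1, NULL);
    vTaskStartScheduler();                    /* never returns once the scheduler starts */
    for (;;) { }
}

Each of the five tasks would wrap its critical section in the same take/give pair around the shared mutex, which is exactly the "only one runs at a time" behaviour the question asks for.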
Should I use them with TIdTCPServer, and where can they improve my application?
I ask because maybe they can improve speed/responsiveness on TIdTCPServer, since I use a queue.
TIdTCPServer runs a thread for every client that is connected. Those threads are managed by the TIdScheduler that is assigned to the TIdTCPServer.Scheduler property. If you do not assign a scheduler of your own, a default TIdSchedulerOfThreadDefault is created internally for you.
The difference between TIdSchedulerOfThreadDefault and TIdSchedulerOfThreadPool is:
TIdSchedulerOfThreadDefault creates a new thread when a client connects, and then terminates that thread when the client disconnects.
TIdSchedulerOfThreadPool maintains a pool of idle threads. When a client connects, a thread is pulled out of the pool if one is available, otherwise a new thread is created. When the client disconnects, the thread is put back in the pool for reuse if the scheduler's PoolSize will not be exceeded, otherwise the thread is terminated.
From the OS's perspective, creating a new thread is an expensive operation. So using a thread pool is generally preferred for better performance, at the cost of the memory and resources consumed by idle threads waiting around to be used.
Whichever component you decide to use will not have much effect on how the server performs while processing active clients, only how it performs while handling socket connects/disconnects.
I've been using a DispatchGroup to facilitate a number of concurrent calls in my app.
My backend team noticed that when I tried to make eight concurrent calls, they were separated into two batches of four calls.
Is four concurrent calls the limit for GCD?
Is this a limitation of the GCD framework, or is this dependent on the hardware?
Is there a way to allow for more concurrent calls?
From the reference for GCD:
Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.
The system automatically (and dynamically) determines how many tasks to execute simultaneously. Among other things it's based on battery state, # of cores, system load, etc.
See Dispatch Queues for more information.
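To illustrate, a small sketch using GCD's C API with a dispatch group on a global concurrent queue; sleep() stands in for the real network call:

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
    dispatch_group_t group = dispatch_group_create();

    // Eight pieces of work are submitted at once; how many actually run
    // simultaneously is decided by the system (cores, load, QoS), not by
    // the group or the queue.
    for (int i = 0; i < 8; i++) {
        dispatch_group_async(group, q, ^{
            printf("request %d started\n", i);
            sleep(1);                         // stand-in for the network call
            printf("request %d finished\n", i);
        });
    }

    // Runs once every block submitted to the group has completed.
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        printf("all requests finished\n");
    });

    dispatch_main();
}

The group only tracks completion; it does not set the concurrency width, so you may well see the eight requests start in smaller batches depending on the machine.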
I have several concurrent asynchronous network operations, and want to be notified after all of them finished receiving the data.
The current thread shall not be blocked, synchronous connections aren't an option, and all operations shall be executed concurrently.
How can I achieve that? Or is it more performant to execute all network operations successively, particularly on mobile devices?