What is the concurrency limit for GCD Dispatch groups?

I've been using a DispatchGroup to facilitate a number of concurrent calls in my app.
My backend team noticed that when I tried to make eight concurrent calls, they were separated into two batches of four calls.
Is four concurrent calls the limit for GCD?
Is this a limitation of the GCD framework, or is this dependent on the hardware?
Is there a way to allow for more concurrent calls?

From the reference for GCD:
Concurrent queues (also known as a type of global dispatch queue)
execute one or more tasks concurrently, but tasks are still started in
the order in which they were added to the queue. The currently
executing tasks run on distinct threads that are managed by the
dispatch queue. The exact number of tasks executing at any given point
is variable and depends on system conditions.
The system automatically (and dynamically) determines how many tasks to execute simultaneously. Among other things, this is based on battery state, number of cores, system load, etc.
See Dispatch Queues for more information.
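
For reference, here is a minimal sketch of eight calls tracked with a DispatchGroup; the URLs are illustrative. One caveat worth checking: if the requests go through URLSession, the session's own per-host connection limit (httpMaximumConnectionsPerHost) can batch requests independently of anything GCD decides, so the sketch raises it to show that knob:

```swift
import Foundation

// URLSession's per-host connection limit can batch requests on its own;
// the default is small on iOS. Raising it here is illustrative, not a fix
// guaranteed to apply to your backend's situation.
let config = URLSessionConfiguration.default
config.httpMaximumConnectionsPerHost = 8
let session = URLSession(configuration: config)

let group = DispatchGroup()
// Hypothetical endpoints, one per call.
let urls = (1...8).compactMap { URL(string: "https://example.com/item/\($0)") }

for url in urls {
    group.enter()
    session.dataTask(with: url) { data, response, error in
        // Handle the response here.
        group.leave()
    }.resume()
}

group.notify(queue: .main) {
    print("all eight calls completed")
}
```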

Related

In iOS, if all the tasks on a serial queue can only be executed asynchronously, doesn't the queue essentially become a concurrent queue?

According to Apple's docs, tasks are still started in the order in which they were added to a concurrent queue, so it seems to me that there is no difference between a concurrent queue and a serial queue where all of its tasks are submitted with async.
Correct me if I'm missing something.
I've read a bunch of documents and did not find the answer.
The difference is how many tasks may run at the same time:
A serial queue only processes (runs) one task at a time, one after another. A concurrent queue may process multiple tasks (on multiple threads) at the same time.
You typically use a serial queue to ensure that only one task at a time is accessing a resource. These are scenarios where you would traditionally use mutexes.
If you have tasks that would benefit from (and are able to handle) running concurrently, or tasks that are completely independent and thus don't care, you usually use a concurrent queue.
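
A minimal sketch of the difference (the queue labels are arbitrary). Both loops submit work with async, so neither blocks the caller, but only the concurrent queue may overlap its tasks:

```swift
import Foundation

let serial = DispatchQueue(label: "demo.serial")
let concurrent = DispatchQueue(label: "demo.concurrent", attributes: .concurrent)

// The serial queue returns control immediately (async), but still runs
// its blocks one at a time, in submission order.
for i in 1...4 {
    serial.async { print("serial \(i)") }      // always prints 1, 2, 3, 4
}

// The concurrent queue *starts* blocks in submission order, but they may
// run simultaneously on different threads, so output can interleave.
for i in 1...4 {
    concurrent.async { print("concurrent \(i)") }  // order may vary
}
```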

Is it useful to create two concurrent queues in iOS?

I am assuming that I am reading well-written code, since the developer has some twenty years of experience. I have encountered a situation where he is using two concurrent queues for sending two different types of payload to the network; the network is a persistent TCP connection.
I know that queues handle dynamic thread management for us. So is there any case where it is an advantage to create two concurrent queues, even when a single one could handle the situation? Does it increase performance, and if so, how? Thread management can be done by a single queue itself, can't it?
Yes, there are sometimes good reasons for having multiple concurrent queues. It's more common with serial queues, but there are situations where multiple concurrent queues can be useful.
You might have sets of tasks that you want to run at different priorities, for example.
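
For instance, a sketch of two concurrent queues at different quality-of-service levels, roughly matching the two-payload scenario from the question (the labels and QoS choices are illustrative assumptions):

```swift
import Foundation

// One queue per payload type, prioritized differently by the system.
let controlQueue = DispatchQueue(label: "net.control",
                                 qos: .userInitiated,
                                 attributes: .concurrent)
let bulkQueue = DispatchQueue(label: "net.bulk",
                              qos: .utility,
                              attributes: .concurrent)

controlQueue.async {
    // Encode and send a small, latency-sensitive payload here.
}
bulkQueue.async {
    // Encode and send a large, deferrable payload here.
}
```

Under load, the system can favor the userInitiated queue's work over the utility queue's, which a single shared queue could not express.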

Execute task after several async connections

I have several concurrent asynchronous network operations, and want to be notified after all of them finished receiving the data.
The current thread shall not be blocked, synchronous connections aren't an option, and all operations shall be executed concurrently.
How can I achieve that? Or is it more performant to execute all network operations successively, particularly on mobile devices?
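
One common approach is a DispatchGroup: enter() before starting each operation, leave() in its completion handler, and notify() fires once everything has finished, without blocking the current thread. A sketch, assuming a hypothetical startOperation(_:completion:) helper in place of the real network calls:

```swift
import Foundation

// Hypothetical stand-in for an asynchronous network operation.
func startOperation(_ id: Int, completion: @escaping () -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + .milliseconds(100)) {
        print("operation \(id) finished")
        completion()
    }
}

let group = DispatchGroup()
for id in 1...5 {
    group.enter()
    startOperation(id) { group.leave() }
}

// notify does not block; the closure runs once every enter()
// has been balanced by a leave().
group.notify(queue: .main) {
    print("all operations finished")
}
```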

Sidekiq processes queues only AFTER others are done?

Is it possible to process jobs from a Sidekiq queue ONLY if all other queues are empty?
For example, say we have a photos queue and an updates queue. I only want to process photos if updates is free of pending jobs.
Is that possible?
Well, all your queues execute in parallel, so there is no built-in notion of processing them sequentially.
But you have several options to play with:
you can run more concurrent workers
you can give the updates queue a higher weight, so workers will check updates more frequently than photos.
Take a look at these options in the docs.
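
For illustration, the weighting can be set on the sidekiq command line; the queue names come from the question, and the weights here are invented:

```
sidekiq -q updates,3 -q photos,1
```

With weights like these, workers check updates roughly three times as often as photos, but photos is never fully starved.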

pthread scheduling methods?

With no explicit scheduling, pthreads are scheduled to run by the kernel in a random manner.
Are there any scheduling methods, such as priorities, defined in the pthread library for this?
The priority of a thread is specified as a delta which is added to the priority of the process. Changing the priority of the process affects the priority of all of the threads within that process. The default priority for a thread is DEFAULT_PRIO_NP, which is no change from the process priority.
These Pthread APIs support only a scheduling policy of SCHED_OTHER:
pthread_setschedparam (only SCHED_OTHER supported)
pthread_getschedparam
pthread_attr_setschedparam
pthread_attr_getschedparam
An AS/400 thread competes for scheduling resources against other threads in the system, not solely against other threads in the process. The scheduler is a delay cost scheduler based on several delay cost curves (priority ranges). The POSIX standard and the Single UNIX Specification refer to this as scheduling scope and scheduling policy, which on this implementation cannot be changed from the default of SCHED_OTHER.
It can be controlled somewhat. For threads at the same priority, the pthreads standard specifies the choices of FIFO (thread runs until it blocks or exits), Round Robin (thread runs for a fixed amount of time), or the default "Other". The only one required by the standard is "Other", whose behavior is implementation-dependent but usually a combination of FIFO and Round Robin (e.g., a thread runs until it blocks, exits, or its timeslice is used up, whichever happens first).
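
These are C APIs, but as a rough illustration in keeping with the rest of this page, the same calls can be made from Swift via Darwin on Apple platforms (the priority value below is arbitrary; valid ranges are implementation-defined):

```swift
import Darwin

// Query the calling thread's current scheduling policy and priority.
var policy: Int32 = 0
var param = sched_param()
pthread_getschedparam(pthread_self(), &policy, &param)
print("policy \(policy), priority \(param.sched_priority)")

// Adjust the priority while keeping the portable SCHED_OTHER policy.
// 31 is an illustrative value, not a recommended setting.
param.sched_priority = 31
pthread_setschedparam(pthread_self(), SCHED_OTHER, &param)
```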
