Is it useful to create two concurrent queues in iOS? - ios

I am assuming that I am reading well-written code, since the developer is a guy with some twenty years of experience. I have encountered a situation where he uses two concurrent queues for sending two different types of payload to the network; the network is a persistent TCP connection.
I know that queues help with dynamic thread management for us. So is there any case where it is advantageous to create two concurrent queues when even a single one could handle the situation? Does it increase performance, and if so, how? Thread management can be done by a single queue itself, can't it?

Yes, there are sometimes good reasons for having multiple concurrent queues. It's more common with serial queues, but there are situations where multiple concurrent queues can be useful.
You might have sets of tasks that you want to run at different priorities, for example.

Related

Do you have to use worker pools in Erlang?

I have a server I am creating (a messaging service), and I am doing some preliminary tests to benchmark it. So far, the fastest way to process the data is to do it directly on the user's process and to use worker pools. I have tested spawning, and that is unbelievably slow.
The test just connects 10k users and has each one send 15 KB of data a couple of times at the same time (or trying to, at least), with the server processing the data (total length, headers, and payload).
The issue I have with worker pools is that they are only fast when you have enough workers to offset the number of connections. For example, if you have 500k or 1 million users, you would need more workers to process all the concurrent data coming in. And, in my testing, having 1000 workers would make it unusable.
So my question is the following: when does it make sense to use pools of workers? Will there be a tipping point where I would have to use workers to process the data to free up the user process? How many workers is too many? Is 500,000 too many?
And, if workers are the way to go (for those massive concurrent distributed servers), I am guessing you can dynamically create/delete them as needed?
Any literature is also appreciated!
Thanks for your answer!
Maybe worker pools are not the best tool for your problem. If I were you I would try using Jay Nelson's epocxy, which gives you a very basic backpressure mechanism while still letting you parallelize your tasks. From that library I would check either concurrency fount or concurrency control tools.

Completion ports (limits)

I have some questions about Completion ports:
How many Completion ports could be opened at the same time per process?
What are the pros and cons of separating ports by data type?
1) Why not write a test and see? Chances are that the answer will be a) more than you could ever need, b) platform- and memory-specific, and c) not a useful number to know.
2) Define "by data type". In general, your aim should be to have as few threads running as possible. Having more than one IOCP means either that you have more threads running (or able to run) than you would have with a single IOCP, or that you have the same number of threads but use them less efficiently, since some completions wake one set of threads and other completions wake a different set.
In general I'd need to know more about what you ACTUALLY want to do to be able to provide a more useful answer.

Should spawn be used in Erlang whenever I have a non-dependent asynchronous function?

If I have a function that can be executed asynchronously, with no dependencies and no other functions requiring its results directly, should I use spawn? In my scenario I want to proceed to consume a message queue, so spawning would relieve my blocking loop. But in other situations, if I distribute function calls as much as possible, will that affect my application negatively?
Overall, what would be the pros and cons of using spawn?
Unlike operating system processes or threads, Erlang processes are very lightweight. There is minimal overhead in starting, stopping, and scheduling new processes. You should be able to spawn as many of them as you need (the maximum per VM is in the hundreds of thousands). The Actor model Erlang implements allows you to think about what is actually happening in parallel and write your programs to express that directly. Avoid complicating your logic with work queues if you can.
Spawn a process whenever it makes logical sense, and optimize only when you have to.
The first thing that comes to mind is the size of the parameters. They will be copied from your current process to the new one, and if the parameters are huge this may be inefficient.
Another problem that may arise is bloating the VM with so many processes that your system becomes unresponsive. You can overcome this by using a pool of worker processes, or a special monitor process that allows only a limited number of such processes to run at a time.
so spawning would relieve my blocking loop
If you are in a situation where a loop will receive many messages requiring independent actions, don't hesitate: spawn a new process for each message, and this way you will take advantage of the multicore capabilities (if any) of your computer. As kjw0188 says, Erlang processes are very lightweight, and if the system hits the limit of processes alive in parallel (with the assumption that you are writing reasonable code), it is more likely that the application is overloading the capacity of the node.

Does rails HireFire support Queues?

Background:
I have 50 clients (for example); their data is partitioned into 50 different schemas in PostgreSQL.
I feel it's a good idea to keep their processing as separate as possible, so I think putting their DJs into different queues is a good idea, or at least grouping them into different queues based on their load (because I have a limit on the number of workers).
If Client_A has 10 large actions in the queue, Client_B shouldn't have to wait for them to be done in order to send an email.
DJ supports queue-based workers. I could be wrong, but I don't see a way to set queues in the HireFire paradigm.
Does anyone know how to set up HireFire to run on a given queue?
I see more issues coming, but I'll ignore them for now :)

Creating threads within the cluster

I wish to know whether there is any way to create threads on other nodes without starting a process on those nodes.
For example: let's say I have a cluster of 5 nodes, and I am running an application on node1 which creates 5 threads. I want the threads not to be created on the same system but spread across the cluster, say 1 thread per node.
Is there any way this can be done, or does it depend more on the load scheduler? And does OpenMP do something like that?
If there is any ambiguity in the question, please let me know and I will clarify it.
Short answer: not simply. Threads share a process's address space, and it is therefore extremely difficult to relocate them across cluster nodes. And where it is possible (systems which support this do exist), getting them to maintain a consistent state introduces a lot of synchronization and communication overhead, which impacts performance.
In short, if you're distributing an application across a cluster, stick with multiple processes and choose a suitable communication mechanism.
Generally, leave threading to the VM or engine to avoid heavily contended locks, and focus on the app or transport layer. If you use one thread, drive it with a timer (a 200 Hz = 5 ms heuristic); if two, dedicate one to repainting. A good pattern is event-driven design.
