I'm using dask.distributed to launch jobs on a SGE cluster (https://jobqueue.dask.org/en/latest/generated/dask_jobqueue.SGECluster.html#dask_jobqueue.SGECluster) via dask.bags and/or dask.delayed.
Everything works nicely. However, I may have some dask.delayed tasks that should run in a specific queue on my SGE cluster (due to GPU availability, for instance). How can I work this out in Dask?
In other words, how can I define a dask_jobqueue.SGECluster with multiple queues and/or different resource specs?
dask_jobqueue.SGECluster only lets me configure a single cluster setup (one queue, one resource spec, etc.).
Thanks
In Temporal, why is a single Worker per service sufficient? Doesn't it become a bottleneck as the system scales? Is a Worker a single-threaded or multi-threaded process?
I have gone through the Temporal documentation but couldn't understand why a single Worker per client service is sufficient.
I also tried creating a different task queue for each workflow and created a new worker (using the workerfactory.newWorker(..) method, so two workers in the same process) to listen on the new task queue. When I observed the workers in the Temporal UI, I saw the same worker id for both task queues.
In many production scenarios, a single Worker is not sufficient, and people run a pool of multiple Workers, each with the same Workflows and/or Activities registered, and polling the same Task Queue.
To tell when a single Worker isn't sufficient, you can look at metrics:
https://docs.temporal.io/application-development/worker-performance
Is a Worker a single-threaded or multi-threaded process?
It depends on which SDK. The Java SDK has multi-threaded Workers: see for example
https://www.javadoc.io/static/io.temporal/temporal-sdk/1.7.0/io/temporal/worker/WorkerFactoryOptions.Builder.html#setMaxWorkflowThreadCount-int-
You can give different Worker instances different identities with:
https://www.javadoc.io/static/io.temporal/temporal-sdk/1.7.0/io/temporal/client/WorkflowClientOptions.Builder.html#setIdentity-java.lang.String-
I have the following situation:
Twice a day, for about an hour, we receive a huge inflow of messages which currently run through RabbitMQ. The current Rabbit cluster with 3 nodes can't handle the spikes but otherwise runs smoothly. It's currently set up on plain EC2 instances. The instance type is currently t3.medium, which is very low, except for the other 22h per day, where we receive ~5 msg/s. The setup also currently has ha-mode=all.
After a rather lengthy and revealing read of the RabbitMQ docs, I decided to just try to set up an ECS EC2 cluster and scale out when CPU load rises. So: create a service on it and add that service to service discovery, for example discovery.rabbitmq. If there are three instances, all of them would run under the same name, which would resolve to all three IPs. Joining the cluster would work based on this:
That would be the rabbitmq.conf part:
cluster_formation.peer_discovery_backend = dns
# the backend can also be specified using its module name
# cluster_formation.peer_discovery_backend = rabbit_peer_discovery_dns
cluster_formation.dns.hostname = discovery.rabbitmq
I use a policy ha-mode=exact with 2 replicas.
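Applied with something like this (the policy name and queue pattern are examples; the definition keys are the standard mirroring settings):

```shell
# Mirror every queue to exactly 2 replicas (1 master + 1 mirror),
# syncing new mirrors automatically. Policy name and pattern are examples.
rabbitmqctl set_policy ha-exact-2 ".*" \
  '{"ha-mode":"exact","ha-params":2,"ha-sync-mode":"automatic"}' \
  --apply-to queues
```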
Our exchanges and queues are created manually upfront for reasons I cannot discuss any further, but that's a given. They can't be removed and they won't be re-created on the fly. We have 3 exchanges with 4 queues each.
So, the idea: during times of high load, add more instances; during times of no load, run with three instances (or even fewer).
The setup with scale-out/in works fine, until I started using the benchmarking tool and discovered that queues are always created on one single node, which becomes the queue master. That is fine, considering the benchmarking tool is connected to a single node. The problem is that after scale-in/out, our manually created queues are also not moved to other nodes. This is in line with what I read on the RabbitMQ 3.8 release page:
One of the pain points of performing a rolling upgrade to the servers of a RabbitMQ cluster was that queue masters would end up concentrated on one or two servers. The new rebalance command will automatically rebalance masters across the cluster.
Here are the problems I ran into; I'm seeking some advice:
If I interpret the docs correctly, scaling out wouldn't help at all, because those nodes would sit there idle until someone manually ran rabbitmq-queues rebalance all.
What would be the preferred way of scaling out?
As per the title: if I am creating workers via Helm or Kubernetes, is it possible to assign "worker resources" (https://distributed.readthedocs.io/en/latest/resources.html#worker-resources) after the workers have been created?
The use case is tasks that hit a database; I would like to limit the number of processes able to hit the database in a given run, without limiting the total size of the cluster.
As of 2019-04-09 there is no standard way to do this. You've found the Worker.set_resources method, which is reasonable to use. Eventually I would also expect Worker plugins to handle this, but they aren't implemented.
For your application of controlling access to a database, it sounds like what you're really after is a semaphore. You might help build one (it's actually decently straightforward given the current Lock implementation), or you could use a Dask Queue to simulate one.
Local dask allows using the process scheduler. Workers in dask.distributed use a ThreadPoolExecutor to compute tasks. Is it possible to replace the ThreadPoolExecutor with a ProcessPoolExecutor in dask.distributed? Thanks.
The distributed scheduler allows you to work with any number of processes, via any of the deployment options. Each of these can have one or more threads. Thus, you have the flexibility to choose your favourite mix of threads and processes as you see fit.
The simplest expression of this is with the LocalCluster (same as Client() by default):
cluster = LocalCluster(n_workers=W, threads_per_worker=T, processes=True)
which makes W workers with T threads each (T can be 1).
As things stand, the implementation of workers uses a thread pool internally, and you cannot swap in a process pool in its place.
I have a scenario where I have long-running jobs that I need to move to a background process. Delayed job with a single worker would be very simple to implement, but would run very, very slowly as jobs mount up. Much of the work is slow because the thread has to sleep while waiting on various remote API calls, so running multiple workers concurrently is a very obvious choice.
Unfortunately, some of these jobs are dependent on each other. I can't run two jobs belonging to the same identifier simultaneously. Order doesn't matter; what matters is that at most one worker is working on a given ID's jobs at any time.
My first thought was named queues, with a queue named for each identifier, but the identifiers are dynamic data. We could be running ID 1 today, 5 tomorrow, 365849 and 645609 the next, and so forth. That's too many named queues. Not only would giving each one a single worker probably exceed available system resources (as well as being incredibly wasteful, since most of them won't be active at any given time), but since workers are configured not from inside the code but as environment variables, I'd wind up with some insane config files. And creating a sane pool of N generic workers could wind up with all N workers running on the same queue if that's the only queue with work to do.
So what I need is a way to prevent two jobs sharing a given ID from running at the same time, while allowing any number of jobs not sharing IDs to run concurrently.