Several places in the documentation (e.g., https://docs.dask.org/en/latest/phases-of-computation.html) imply that .delayed() is or should be used on the local machine. I'd actually like to have a worker both build up the graph and call .compute(). It seems to work when I test it, but I want to make sure I'm not asking for pain down the road -- e.g. could I deadlock the cluster if all the workers are waiting on a graph and unavailable to do real work?
You can use dask.delayed either on a local machine or on a distributed cluster. The API doesn't really care where it's run.
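For what it's worth, here is a minimal sketch of building a graph and computing it from inside a task, assuming a distributed cluster (the scheduler address is a placeholder). Using worker_client() makes the task temporarily secede from the worker's thread pool while it waits, which is the usual way to avoid the deadlock you're worried about:

from dask import delayed
from dask.distributed import Client, worker_client

def build_and_run(n):
    # Build a small graph on the worker and compute it from there.
    # worker_client() secedes from the worker's thread pool while waiting,
    # so this task doesn't hold a thread hostage and block the cluster.
    with worker_client() as client:
        parts = [delayed(lambda x: x + 1)(i) for i in range(n)]
        total = delayed(sum)(parts)
        return client.compute(total).result()

client = Client("tcp://scheduler-address:8786")  # placeholder address
print(client.submit(build_and_run, 10).result())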
Related
I would like to do some cloud processing on a very small cluster of machines (<5).
This processing should be based on 'jobs', where jobs are parameterized scripts that run in a certain docker environment.
As an example for what a job could be:
Run in docker image "my_machine_learning_docker"
Download some machine learning dataset from an internal server
Train some neural network on the dataset
Produce a result and upload it to a server again.
My use cases are not limited to machine learning however.
A job could also be:
Run in docker image "my_image_processing_docker"
Download a certain amount of images from some folder on a machine.
Run some image optimization algorithm on each of the images.
Upload the processed images to another server.
Now what I am looking for is some framework/tool that keeps track of the compute servers, receives my jobs, and dispatches them to an available server. Advanced prioritization or load management is not really required.
It should be possible to query the status of jobs and of the servers via an API (I want to do this from NodeJS).
Potentially, I could imagine this framework/tool dynamically spinning up these compute servers in AWS, Azure, or something similar. That would not be a hard requirement though.
I would also like to host this solution myself. So I am not looking for a commercial solution for this.
Now I have done some research, and what I am trying to do has similarities with many, many existing projects, but I have not "quite" found what I am looking for.
Similar things I have found were (selection):
CI/CD solutions such as Jenkins/Gitlab CI. Very similar, but it seems to be tailored very much towards the CI/CD case, and I am not sure whether it is such a good idea to abuse a CI/CD solution for what I am trying to do.
Kubernetes: Appears to be able to do this somehow, but is said to be very complex. It also looks like overkill for what I am trying to do.
Nomad: Appears to be the best fit so far, but it has some proprietary vibes that I am not very much a fan of. Also it still feels a bit complex...
In general, there are many many different projects and frameworks, and it is difficult to find out what the simplest solution is for what I am trying to do.
Can anyone suggest anything or point me in a direction?
Thank you
I would use Jenkins for this use case even if it appears to you as a “simple” one. You can start with the simplest pipeline, which can also deal with the increasing complexity of your jobs. Jenkins has an API and lots of plugins, and it can be run as a container for spinning up in a cloud environment.
It's possible you're looking for something like AWS Batch: https://aws.amazon.com/batch/ or Google Dataflow: https://cloud.google.com/dataflow. Out of the box they handle scaling, distribution, monitoring, etc.
But if you want to roll your own ....
Option 1: Queues
For your job distribution you are really just looking for a simple message queue that all of the workers listen on. In most messaging platforms, a queue supports deliver-once semantics. For example:
ActiveMQ: https://activemq.apache.org/how-does-a-queue-compare-to-a-topic
NATS: https://docs.nats.io/using-nats/developer/receiving/queues
Using queues for load distribution is a common pattern.
A queue-based solution works with either manual or automated load balancing: the more workers you spin up, the more consumers you have pulling off the queue. The same messaging solution can be used to gather the results if you need to, using message reply semantics or a dedicated reply channel. You could use the result channel to post progress reports back, so your main application knows the status of each worker. Alternatively, workers could drop their status in a database. It probably depends on your preference for collecting results and how large the result sets are. If they are large enough, you might even just drop results in an S3 bucket or some kind of filesystem.
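As a rough illustration, here is a minimal sketch of such a worker using Redis as the message broker (the broker choice and the "jobs"/"results" key names are just placeholders; ActiveMQ or NATS from the links above work the same way conceptually):

import json
import redis

r = redis.Redis(host="localhost", port=6379)

while True:
    # BLPOP blocks until a job is available; each job is delivered to exactly one worker.
    _, raw = r.blpop("jobs")
    job = json.loads(raw)
    try:
        # ... pull the docker image, download inputs, run the processing step ...
        result = {"job_id": job.get("id"), "status": "success"}
    except Exception as exc:
        result = {"job_id": job.get("id"), "status": "failed", "error": str(exc)}
    # Post status back on a reply channel so the main application can track progress.
    r.rpush("results", json.dumps(result))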
You could use something quite simple to manage the workers. Jenkins, as already suggested, is definitely a solution I have seen used for running multiple instances across many servers, since you just need to install the Jenkins agent on each of the workers. This works quite easily if you own or manage the physical servers it runs on. You could use TeamCity as well.
If you want something cloud hosted, it may depend on the technology you use. Kubernetes is probably overkill here, but it certainly could be used to spin up N nodes and increase or decrease that number of workers. To auto-scale, you could publish a single metric - the queue depth - and trigger an increase in the number of workers based on how deep the queue is, using a threshold you work out from the cost of spinning up new nodes versus the rate at which jobs are processed.
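A toy sketch of that scaling rule, reusing the Redis queue from the sketch above (the thresholds are arbitrary placeholders you would tune against your own node start-up cost):

import redis

r = redis.Redis(host="localhost", port=6379)

def desired_worker_count(jobs_per_worker=5, max_workers=5):
    depth = r.llen("jobs")                   # current queue depth
    wanted = -(-depth // jobs_per_worker)    # ceiling division
    return min(max(wanted, 1), max_workers)  # keep at least one worker, cap the cluster size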
You could also look at some of the lightweight managed container solutions like fly.io or Heroku which are both much easier to setup than K8s and would let you scale up easily.
Option 2: Web workers
Can you design your solution so that it can be run as a cloud function/web worker?
If so, you could set them up so that scaling is fully automated. You would hit the cloud function endpoint to request each job. The hosting engine would take care of the distribution and scaling of the workers. The results would be passed back in the body of the HTTP response, e.g. as a JSON blob.
Your workload may be too large for these solutions, but if it is actually fairly lightweight and quick, this could be a simple option.
I don't think these solutions would let you query the status of tasks easily.
If this option seems appealing there are quite a few choices:
https://workers.cloudflare.com/
https://cloud.google.com/functions
https://aws.amazon.com/lambda/
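For instance, a job handler on Google Cloud Functions might look roughly like this minimal sketch, written against the Python functions-framework (the request/response shape and field names are assumptions for illustration):

import functions_framework

@functions_framework.http
def run_job(request):
    params = request.get_json(silent=True) or {}
    # ... download inputs, run the image optimization, upload the results ...
    return {"job_id": params.get("job_id"), "status": "success"}, 200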
Option 3: Google Cloud Tasks
This is a bit of a hybrid option. Essentially GCP has a queue distribution workflow where the endpoint is a cloud function or some other supported worker, including Cloud Run, which uses docker images. I've not actually used it myself, but maybe it fits the bill.
https://cloud.google.com/tasks
When I look at a problem like this, I think through the entirety of the data path: the mapping between source image and target image, and any metadata or status information that needs to be collected. Additionally, failure conditions need to be handled, especially if a production service is going to be built.
I prefer Python: PySpark with Pandas UDFs to perform the orchestration and image processing.
S3FS lets me access S3. If using Azure or Google, Databricks' DBFS lets me seamlessly read and write to cloud storage without two extra file-copy steps.
Pyspark's binaryFile data source lets me list all of the input files to be processed. Spark lets me run this in batch or in an incremental/streaming configuration. This design optimizes for end-to-end data flow and data reliability.
For a cluster manager I use Databricks, which lets me easily provision an auto-scaling cluster. The Databricks cluster manager lets users deploy docker containers or use cluster libraries or notebook-scoped libraries.
The example below assumes each image is > 32 MB and processes it out of band. If the images are in the KB range, then dropping the content column is not necessary and in-line processing can be faster (and simpler).
Pseudo code:
from typing import Iterator
import pandas as pd
from pyspark.sql.functions import col, pandas_udf

# List the input files; drop the binary content so only paths get shuffled around.
df = (spark.read
    .format("binaryFile")
    .option("pathGlobFilter", "*.png")
    .load("/path/to/data")
    .drop("content")
)

def do_image_xform(path: str) -> str:
    # Do the image transformation: read from the dbfs path, write the output to another dbfs path
    ...
    # return xform status
    return "success"

@pandas_udf("string")
def do_image_xform_udf(iterator: Iterator[pd.Series]) -> Iterator[pd.Series]:
    for paths in iterator:                  # each element is a pandas Series of file paths
        yield paths.apply(do_image_xform)

df_status = df.withColumn("status", do_image_xform_udf(col("path")))
df_status.write.saveAsTable("status_table")  # triggers execution, saves status
This is a more general question about which types of payloads to host in a container. In our case we will use Service Fabric guest executables. For this post I will use the word "container" to refer to both; I do this because they have similar properties, and I think more people understand a container than an SF guest exe.
Web APIs/services that need to scale are a good fit for containers, but this question relates to what we call a "batch" job. The nomenclature comes from the old .bat files, but in our case we are using .NET Framework or Core .exe console apps.
Currently, Windows Task Scheduler kicks off the batch, which runs under a service account on a VM. We want the processing to happen at a certain time of day or day of the week, and not before or after. There is no real scaling here: there is one instance, which may or may not be multithreaded, and on average these jobs run between 2 and 15 minutes and then stop. Some run longer, some shorter. I understand there are limitations to this approach, but this is the type of payload I'm discussing here.
As we modernize the technology stack we are looking to use the orchestrator as much as possible. As a technologist I've always tried to understand the different tools in our tool belt and not use a tool just because it's the one I used last, but instead use the correct tool for the task.
We started out by not writing any more .NET console apps. Instead we put the business logic of these "batches" into Web APIs and had the task scheduler call the API when it needed to perform its action. If I put this into Service Fabric and host it there, my concern is that system resources are consumed for the 23 hours and 45 minutes a day when they are not being used. That seems to be the opposite of what you would expect when using a container.
Now, if I could spin up a Service Fabric guest exe/container on demand and then destroy the instance of the app after it finishes, that could fit the need. Then I could have the benefits of the orchestrator without the detriment of having it consume resources all the time. I would hope to retire the batch server (VM), since its hardware usage is not optimized, and instead add resources to the cluster.
UPDATE
Looking at Vaclav's scalability documentation, I think there might be a use case in here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-concepts-scalability He uses a "Workload Manager Service" combined with CreateServiceAsync to spin up an instance of a service on demand. I guess I would deploy the app to the image store but not create an instance of it until needed. Then I need to figure out how to end it: is it as simple as changing the infinite loop in Program.cs? The thing is, it doesn't look like there is a Program.cs in a guest executable.
This looks like a way to run a package until completion, which was released as part of 7.1. But how do we start a second execution of the service? I want to execute based on a request coming in.
https://learn.microsoft.com/en-us/azure/service-fabric/run-to-completion
Thoughts?
As per title, if I am creating workers via helm or kubernetes, is it possible to assign "worker resources" (https://distributed.readthedocs.io/en/latest/resources.html#worker-resources) after workers have been created?
The use case is tasks that hit a database; I would like to limit the number of processes able to hit the database in a given run, without limiting the total size of the cluster.
As of 2019-04-09 there is no standard way to do this. You've found the Worker.set_resources method, which is reasonable to use. Eventually I would also expect Worker plugins to handle this, but they aren't implemented.
For your application of controlling access to a database, it sounds like what you're really after is a semaphore. You might help build one (it's actually decently straightforward given the current Lock implementation), or you could use a Dask Queue to simulate one.
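To make the Queue idea concrete, here is a minimal sketch of simulating a semaphore with a Dask Queue (the scheduler address, queue name, and token count are illustrative placeholders):

from dask.distributed import Client, Queue

client = Client("tcp://scheduler-address:8786")  # placeholder address

tokens = Queue("db_tokens")
for i in range(3):            # allow at most 3 concurrent database hits
    tokens.put(i)

def query_database(x):
    token = tokens.get()      # blocks until one of the 3 tokens is free
    try:
        # ... the real database call would go here ...
        return x
    finally:
        tokens.put(token)     # release the token for the next task

futures = client.map(query_database, range(100))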
When using Dask normally things work fine. However, when I use Dask with an adaptive cluster I find that sometimes all the tasks get assigned to a single worker. Why is this?
This should be considered a usability bug, and it would be reasonable to file an issue about it.
However, to explain what is going on (at least as of today, 2018-08-09), probably what happens is this:
Your scheduler first has no tasks and so has no workers assigned to it
You submit a lot of work from a client, the scheduler responds and asks for many workers
The first worker arrives and the scheduler hands it all of the work
Milliseconds later, several other workers arrive. The scheduler then proceeds to load balance between the available workers
Ideally, the load balancing heuristics should handle this situation. There were older versions of Dask where this performed less well, but usually it is fine. I recommend first updating your dask and distributed packages to the newest releases and, if that doesn't help, reporting an issue with a minimal example if possible.
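One possible workaround, if the race described above is the cause (a sketch, not part of the advice above): keep a nonzero adaptive minimum and wait for several workers before submitting, so the first worker never receives the whole graph by itself. The cluster setup and numbers below are placeholders.

from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=0)   # placeholder; any adaptive deployment works similarly
cluster.adapt(minimum=4, maximum=20)  # keep at least 4 workers around
client = Client(cluster)
client.wait_for_workers(4)            # don't submit until several workers have arrived
futures = client.map(lambda x: x + 1, range(1000))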
I have an ASP.NET MVC 4 app hosted as an Azure web role. I want to do something that seems like it should be pretty standard: I want to create a function that I can call that initiates a VIP swap and raises an event (or calls a callback) when the VIP swap operation is done.
Just to add some context to the situation: My website implements a workflow that takes about an hour (or less) to complete. If I want to release a new version of the website code, it's convenient (i.e. much less "backward compatibility" code to write) to first let all of the current users complete the workflow so that the new code doesn't need to deal with data created by the previous version of the code. So a management function in my website would first poke a value into the database that disables new workflows; it would then wait until all current workflows are done; it would then call the "VIP Swap" routine; finally, when the VIP Swap routine signals its completion, it would poke the database value to re-enable new workflows.
I found the Microsoft documentation for how to programmatically initiate a VIP swap here:
http://msdn.microsoft.com/en-us/library/ee460814.aspx
The procedure involves POSTing to a magic URL and including some headers in the POST, then periodically performing a GET to a magic URL and checking the response code.
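Schematically, the flow is just "POST, remember the operation id, poll a GET until it leaves the in-progress state". The sketch below shows only that shape; the real URLs, headers, certificate-based authentication, and response parsing of the Service Management API are all glossed over as placeholders.

import time
import requests

def initiate_vip_swap_and_wait(swap_url, status_url_template, headers, poll_seconds=15):
    # Kick off the swap; the response carries an operation id to poll (placeholder header name).
    resp = requests.post(swap_url, headers=headers)
    resp.raise_for_status()
    operation_id = resp.headers.get("x-ms-request-id")

    # Poll until the operation is no longer reported as in progress.
    while True:
        status = requests.get(status_url_template.format(operation_id=operation_id),
                              headers=headers)
        if b"InProgress" not in status.content:
            return status  # finished (succeeded or failed); caller re-enables workflows
        time.sleep(poll_seconds)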
The more I think about this, the more non-trivial it seems. In addition to the basic complexities of wiring up a background timer and completion notification, I don't know what complexities, if any, I might run into trying to do this stuff in the IIS environment. Can I even perform HTTP operations on a background thread? For that matter, will I run into complications just trying to use any of the half dozen or so different "do things in the background" mechanisms baked into .NET?
Any help or guidance will be greatly appreciated. In particular, I'd be ecstatic if someone could point me at a ready-to-go implementation of this function!
I don't think you will find an easy solution to this, as the fabric controller is set up to do some very fancy things without your involvement. Running hour-long workflows in a cloud computing environment, where an instance can be pulled out from underneath you (with a maximum of 5 minutes from the OnStopping event being called to clean up), requires that you do other work anyway to make sure that all of your tasks complete.
The simple question is: what do you do if an instance goes down while workflows are still running? Do you restart them, or are they lost? If they get lost, then you don't care anyway, so killing workflows for an upgrade is equally unimportant. If you restart them, then use that same mechanism to decide whether or not a node is due to be shut down, and distribute the jobs accordingly. This pattern is eerily similar to the Hadoop JobTracker: don't just run the workflows on any ol' instance; submit them to a (job tracker) service that decides what to do. The (job tracker) service can then use the Service Management API to scale up as many instances as you need running the version that you want, run workflows on the appropriate nodes, and shut them down when they are no longer needed or are outdated.
Unfortunately this may not be the simple solution that you are looking for, but something in your architecture needs to change, rather than trying to force PaaS to fit your current approach. Decompose your workloads, create loosely coupled services, design for failure: these and a few other cloud/distributed computing practices need to be considered. There is a reason why Hadoop is built the way it is, and why it has a reputation for being able to get work done on a bunch of somewhat unreliable commodity hardware.