Assume I want to run a for loop with a million iterations, each of which takes x milliseconds. How can I use the CFS scheduler or a runtime scheduler to add an artificial delay of x + delay milliseconds to each iteration? Is it possible? It's totally fine if the delay is not constant.
I am trying to make sense out of this documentation - Configure the default CFS scheduler - Docker
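For what it's worth, a per-iteration delay usually doesn't need scheduler involvement at all: CFS quota throttling caps aggregate CPU time for a cgroup, not the latency of individual iterations. The common approach is a plain in-process sleep. A minimal sketch, where `work` is a hypothetical stand-in for the real loop body:

```python
import random
import time

def work(i):
    """Hypothetical stand-in for the real body of one iteration (takes ~x ms)."""
    pass

def run(iterations, delay_ms=1.0, jitter=True):
    for i in range(iterations):
        work(i)
        # The delay need not be constant; jitter it if desired.
        d = delay_ms * (random.uniform(0.5, 1.5) if jitter else 1.0)
        time.sleep(d / 1000.0)

run(100, delay_ms=1.0)
```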
I made a simple program which waits for 60 seconds. I have 300 input elements to process.
Number of threads per worker: 1 in batch mode and 300 in streaming mode, per this document:
https://cloud.google.com/dataflow/docs/resources/faq#beam-java-sdk
In streaming mode, with 1 worker and 300 threads, the job should complete in 2 to 3 minutes, allowing for the overhead of spawning workers and so on. My understanding is that there will be one thread for each of the 300 input elements, all sleeping for 60 seconds, so the job should get done. However, the job takes much longer to complete.
Similarly, in batch mode with 1 worker (1 thread) and 300 input elements, it should take 300 minutes to complete.
Can someone clarify how this happens at the worker level?
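The timing the question expects can be reproduced at small scale with a plain thread pool. This is a sketch, not actual Dataflow worker code, and the numbers are scaled down from 300 elements / 60 seconds:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process(element, sleep_s):
    time.sleep(sleep_s)  # stands in for the 60-second wait per element
    return element

def run(n_elements, n_threads, sleep_s):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(lambda e: process(e, sleep_s), range(n_elements)))
    return time.monotonic() - start

# Expected wall time is roughly ceil(n_elements / n_threads) * sleep_s.
streaming = run(20, n_threads=20, sleep_s=0.1)  # all in parallel: ~0.1 s
batch = run(20, n_threads=1, sleep_s=0.1)       # sequential: ~2 s
```

This reproduces the question's arithmetic, but as the answer below notes, real jobs also pay VM startup and teardown costs that dominate an experiment this short.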
There is considerable overhead in starting up and tearing down worker VMs, so it's hard to generalize from a short experiment such as this. In addition, there's no promise that there will be a given number of workers for streaming or batch, as this is an implementation-dependent parameter that may change at any time for any runner (and indeed may even be chosen dynamically).
I have a dataflow job that reads from pubsub subscription and writes to a Redis instance on a fixed time windows. The job seems to be running well with 4 workers and almost 0s systems latency until I try to drain it which causes upscaling to 10 workers and takes hours to finish.
I suspect this is caused by the windowing/grouping since the output collection metric suggest that it keeps producing elements long after the drain is started.
That's the windowing that I'm using.
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterAny, AfterCount, AfterProcessingTime, Repeatedly)
from apache_beam.utils.timestamp import Duration

beam.WindowInto(
    window.FixedWindows(size=120),
    trigger=Repeatedly(
        AfterAny(AfterCount(100), AfterProcessingTime(120))),
    accumulation_mode=AccumulationMode.DISCARDING,
    allowed_lateness=Duration(seconds=2 * 24 * 60 * 60))
Given your large allowed lateness, when you drain your pipeline it causes every window for every key seen in the last 48 hours to close all at once, which is probably why there's so much work and it's upscaling. This could be especially bad if your keys are not often re-used.
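The scale of that burst can be estimated with back-of-the-envelope arithmetic; the key cardinality below is a made-up assumption:

```python
# Estimate how many windows can fire all at once when the pipeline drains.
window_size_s = 120
allowed_lateness_s = 2 * 24 * 60 * 60                   # 48 hours, as in the snippet
windows_per_key = allowed_lateness_s // window_size_s    # 1440 windows per key
n_keys = 10_000                                          # hypothetical key cardinality
windows_at_drain = windows_per_key * n_keys              # up to 14,400,000 windows
```

With rarely reused keys, each of those windows carries per-key state, which explains both the upscaling and the hours-long drain.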
I'm making an application that needs to run a job at extremely precise intervals of time (say 30 seconds, maximum acceptable delay is +-1 second).
I'm currently doing so using an external Go application that polls an API endpoint built within my application.
Is there a way that I could run the task on a worker machine (e.g. a Heroku dyno) with delays of less than one second?
I've investigated Sidekiq and delayed_job, but both have significant lag and therefore are unsuitable for my application.
Schedule the job for 60 seconds prior to when you need it run. Pass the exact time you need the job executed as a parameter. Then sleep in a loop until Time.now reaches that exact time, down to the second.
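The same idea, sketched here in Python rather than the Ruby the answer implies (`run_at` is a hypothetical helper, not part of Sidekiq or delayed_job):

```python
import time

def run_at(target_monotonic, task):
    """Block until target_monotonic (on the time.monotonic() clock), then run task."""
    while True:
        remaining = target_monotonic - time.monotonic()
        if remaining <= 0:
            break
        # Coarse sleeps far from the target, short sleeps near it, for accuracy.
        time.sleep(min(remaining, 0.01))
    return task()

# Example: run a task 0.2 s from now.
result = run_at(time.monotonic() + 0.2, lambda: "done")
```

The worker wakes up early (absorbing the queue's scheduling jitter) and burns the remaining time itself, so the final accuracy depends only on the sleep granularity, not on the job queue.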
I've read a section in the book Elixir in Action about processes and the scheduler, and have some questions:
Each process gets a small execution window. What does that mean?
Is an execution window approximately 2,000 function calls?
What does it mean for a process to implicitly yield execution?
Let's say you have 10,000 Erlang/Elixir processes running. For simplicity, let's also say your computer has only a single processor with a single core. The processor is only capable of doing one thing at a time, so only a single process can be executing at any given moment.
Let's say one of these processes has a long running task. If the Erlang VM wasn't capable of interrupting the process, every single other process would have to wait until that process is done with its task. This doesn't scale well when you're trying to handle tens of thousands of requests.
Thankfully, the Erlang VM is not so naive. When a process spins up, it's given 2,000 reductions (function calls). Every time the process calls a function, its reduction count goes down by 1. Once its reduction count hits zero, the process is interrupted (it implicitly yields execution), and it has to wait its turn.
Because Erlang/Elixir don't have loops, iterating over a large data structure must be done recursively. This means that, unlike in most languages where a tight loop can monopolize a thread, each iteration uses up one of the process's reductions, so the process cannot hog execution.
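The reduction-budget behavior can be imitated with a toy round-robin scheduler built on Python generators. This is a sketch only: the budget here is 5 instead of the ~2,000 described above, and `proc`/`schedule` are made-up names:

```python
from collections import deque

def proc(name, steps):
    """A 'process': each yielded step stands in for one function call (reduction)."""
    for i in range(steps):
        yield f"{name}:{i}"

def schedule(processes, budget=5):
    ready = deque(processes)
    trace = []
    while ready:
        p = ready.popleft()
        for _ in range(budget):          # give the process its reduction budget
            try:
                trace.append(next(p))
            except StopIteration:
                break                     # process finished; drop it
        else:
            ready.append(p)               # budget spent: implicit yield, requeue
    return trace

trace = schedule([proc("a", 7), proc("b", 3)])
# "a" is preempted after 5 reductions, so "b" gets to run before "a" finishes.
```

Even though process "a" has more work to do, it cannot starve "b": once its budget runs out it goes to the back of the queue, which is the essence of the VM's preemption.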
The rest of this answer is beyond the scope of the question, but included for completeness.
Let's say now that you have a processor with 4 cores. Instead of only having 1 scheduler, the VM will start up with 4 schedulers (1 for each core). If you have enough processes running that the first scheduler can't handle the load in a reasonable amount of time, the second scheduler will take control of the excess processes, executing them in parallel to the first scheduler.
If those two schedulers can't handle the load in a reasonable amount of time, the third scheduler will take on some of the load. This continues until all of the processors are fully utilized.
Additionally, the VM is smart enough not to waste time on processes that are idle - i.e. just waiting for messages.
There is an excellent blog post by JLouis on How Erlang Does Scheduling. I recommend reading it.
I have a question regarding the reserved CPU time field in Google Dataflow. I don't understand why it varies so widely depending on the configuration of my run. I suspect that I am not interpreting the reserved CPU time for what it really is. To my understanding, it is the CPU time that was needed to complete the job I submitted, but based on the following evidence, it seems I may be mistaken. Is it the time that is allocated to your job, regardless of whether it is actually using the resources? If that's the case, how do I get the actual CPU time of my job?
First I ran my job with a variable sized pool of workers (max 24 workers).
The corresponding stats are as follows:
Then, I ran my script using a fixed number of workers (10):
And the stats changes to:
They went from 15 days to 7 hours? How is that possible?!
Thanks!
If you hover over the "?" next to "Reserved CPU time" a pop-up message will show and it will read: "The total time Dataflow was active on GCE instances, on a per-CPU basis." This indicates it is not the CPU-time used by the VMs. At this time Dataflow does not aggregate per-machine CPU usage stats; you may, however, be able to use the cloud monitoring API to extract those metrics yourself.