Erlang: Will Process Priority Affect Long Running Tasks? - erlang

When a process is scheduled with low priority in Erlang, it is placed in the shared low/normal run queue with a count of 8: it has to be picked from the queue eight times before it actually gets to run.
Once it is running, it gets a budget of 2,000 reductions. After those 2,000 reductions the process is suspended and rescheduled. When the process is rescheduled, is it scheduled with the same priority it had originally?
That would make sense to me, but I can't find this detail documented, and it would have a big impact on long-running computational tasks (not that it's a good idea to use BEAM for heavy computation!).

When the process is rescheduled, is it scheduled with the same priority that the original process was?
What do you mean by "the original process"? It is the same process, so it retains its priority.
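For completeness, a minimal Erlang sketch (the module and message names here are mine, not from the question) showing that priority is a per-process flag, set at spawn time or changed at run time, that survives every suspend/reschedule cycle:

```erlang
-module(prio_demo).
-export([start/0]).

%% The priority is a flag on the process itself; it survives every
%% 2000-reduction time slice until the process changes it again.
start() ->
    spawn_opt(fun loop/0, [{priority, low}]).

loop() ->
    receive
        raise ->
            %% A process can change its own priority at run time;
            %% process_flag/2 returns the previous value.
            process_flag(priority, normal),
            loop();
        stop ->
            ok
    end.
```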

Related

Execution window time

I've read a chapter in the book Elixir in Action about processes and the scheduler, and I have some questions:
Each process gets a small execution window. What does that mean?
Is the execution window approximately 2,000 function calls?
What does it mean for a process to implicitly yield execution?
Let's say you have 10,000 Erlang/Elixir processes running. For simplicity, let's also say your computer has a single processor with a single core. The processor can only do one thing at a time, so only a single process can be executing at any given moment.
Let's say one of these processes has a long running task. If the Erlang VM wasn't capable of interrupting the process, every single other process would have to wait until that process is done with its task. This doesn't scale well when you're trying to handle tens of thousands of requests.
Thankfully, the Erlang VM is not so naive. When a process is scheduled in, it is given 2,000 reductions (roughly, function calls). Every time the process calls a function, its reduction count goes down by 1. Once its reduction count hits zero, the process is interrupted (it implicitly yields execution), and it has to wait its turn.
Because Erlang/Elixir don't have loops, iterating over a large data structure must be done recursively. This means that, unlike in most languages where a long loop can hog the CPU, each iteration uses up one of the process's reductions, so the process cannot monopolise execution.
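A sketch of the recursive iteration described above (module name is mine): each call consumes at least one reduction, so the VM can preempt the process mid-traversal even for a very large list.

```erlang
-module(reductions_demo).
-export([sum/1]).

%% Each recursive call costs a reduction, so even a million-element
%% list cannot monopolise the scheduler: after roughly 2000 reductions
%% the VM suspends this process and runs another.
sum(List) -> sum(List, 0).

sum([], Acc) -> Acc;
sum([H | T], Acc) -> sum(T, Acc + H).
```

You can observe a process's running total with `erlang:process_info(Pid, reductions)`.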
The rest of this answer is beyond the scope of the question, but included for completeness.
Let's say now that you have a processor with 4 cores. Instead of only having 1 scheduler, the VM will start up with 4 schedulers (1 for each core). If you have enough processes running that the first scheduler can't handle the load in a reasonable amount of time, the second scheduler will take control of the excess processes, executing them in parallel to the first scheduler.
If those two schedulers can't handle the load in a reasonable amount of time, the third scheduler will take on some of the load. This continues until all of the cores are fully utilized.
Additionally, the VM is smart enough not to waste time on processes that are idle - i.e. just waiting for messages.
There is an excellent blog post by JLouis on How Erlang Does Scheduling. I recommend reading it.

How to specify concurrency for a Sidekiq queue?

For example, I have concurrency set to 4 in my Sidekiq config, with three queues: high, default, low.
I start 10 ffmpeg Active Job workers in the low queue; 4 of them run, then the next 4, and so on, so everything is fine. But at the same time I want to run other lightweight jobs, like taking screenshots, in another queue (high). The high queue does not run; it waits until one of the 4 running jobs finishes. I don't want to wait, I want the lightweight queue to run immediately. How can I do that (in the background, in a queue, of course)?
Take a look at the sidekiq-limit_fetch gem.
It gives you more control over queue priority and lets you limit the concurrency of each queue, so that concurrency stays available to all tasks while the ffmpeg jobs are capped at 4.
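For illustration, a sidekiq.yml sketch using sidekiq-limit_fetch; the queue names come from the question, while the concurrency numbers are assumptions:

```yaml
# config/sidekiq.yml -- sketch using the sidekiq-limit_fetch gem.
# Total concurrency is raised so the high queue always has free workers.
:concurrency: 6
:queues:
  - high
  - default
  - low
:limits:
  low: 4    # at most 4 ffmpeg jobs run at once; 2 workers stay free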

Kill/cancel low-priority build in jenkins

How can I set up Jenkins to automatically cancel/kill low-priority jobs once higher-priority jobs are available to run?
Some background -- there's a feature request for this capability:
https://issues.jenkins-ci.org/browse/JENKINS-8405
In the absence of such a feature being implemented, how might I accomplish this? One idea I have is to read a (possibly existing) Jenkins file that lists the jobs in the build queue. I could then launch the low-priority jobs with a wrapper that spawns a separate process to monitor this file and kill the low-priority processes whenever a high-priority job needs to run.
But the above is fairly involved, so I'd like to avoid it. I could use Linux nice, except that the memory requirements are high, so it's really better to kill the processes.
One partial solution is to use the Accelerated Build Now plugin, which lets you cancel low-priority jobs when you click the button. That said, an automatic version would be better.
https://wiki.jenkins-ci.org/display/JENKINS/Accelerated+Build+Now+Plugin
You could create a supervisor job that runs periodically and checks if high priority jobs are in the queue and aborts low priority jobs if needed.
Naturally you need to make sure that the supervisor job itself never gets stuck in the queue. One way to accomplish this would be to create a dedicated dummy slave for the supervisor job.
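As a sketch of that supervisor idea, a system Groovy script that could run periodically (e.g. via the Groovy plugin); all job names here are hypothetical:

```groovy
// System Groovy sketch for the supervisor job (job names are examples).
// If any high-priority job is waiting in the queue, abort the running
// builds of the low-priority jobs.
import jenkins.model.Jenkins

def highPrio = ['deploy-prod']     // assumed high-priority job names
def lowPrio  = ['nightly-soak']    // assumed low-priority job names

def queuedHigh = Jenkins.instance.queue.items.any {
    highPrio.contains(it.task.name)
}
if (queuedHigh) {
    lowPrio.each { name ->
        def job = Jenkins.instance.getItemByFullName(name)
        job?.builds?.findAll { it.isBuilding() }?.each { it.doStop() }
    }
}
```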

Need to improve the Linux performance for embedded system

I have an ARM OMAP-based embedded system with a 1 GHz processor running Linux 2.6.33, cross-compiled with CONFIG_PREEMPT. One process (process 1) is time-critical and needs to run every 4 or 8 ms (configurable). Another process (process 2) has a thread that transfers images to FTP or to any other configured application. To trigger the time-critical process 1, I use a high-resolution timer in a separate thread (SCHED_FIFO, priority 60) with the highest real-time priority in the system. Process 2 has a lower RT priority (SCHED_RR 20) than process 1 (SCHED_RR 50).
If no image transfer is enabled or configured, I don't see any timeouts for the critical process 1. But if I enable an image transfer, either process 1 times out or the image transfer fails with an error; one of the two processes dies, and the other then runs fine.
The higher the image resolution, the sooner process 1 times out.
With a high-resolution image (say SXGA), the NET_RX softirq holds the CPU for a long time, and by the time it gives up the CPU, process 1 has already timed out. It looks like the NET_RX work runs at a higher priority than the timer interrupt used for process 1 and does not yield the CPU.
I want both processes to keep running, and process 1 must not miss its deadline.
How can I debug where exactly the system is waiting, so that I can remove, or at least avoid, those waits?
How can I achieve this? Please help.
Linux is not a real-time operating system. It offers no guarantees other than "best efforts" scheduling.
If you have a task which has to run at a particular rate all the time, you need to run that task under a proper RTOS which can make those sorts of guarantees.
Otherwise you have to relax your constraints to "runs every 4ms, mostly".
You may want to read "Challenges in Using Linux for CPU-intensive real-time networking products" (http://www.techonline.com/electrical-engineers/education-training/tech-papers/4402454/Challenges-in-Using-Linux-for-CPU-intensive-real-time-networking-products). It describes network performance under PREEMPT_RT.
I found the solution to this performance issue by changing the priority of the thread sending image data to SCHED_NORMAL and rearranging the source code to avoid unnecessary loops. Now the image transfer does not affect the performance of the whole system.

pthread scheduling methods?

With no explicit scheduling, pthreads are scheduled by the kernel in an unspecified order.
Does the pthread library define any scheduling controls for this, such as priorities?
The priority of a thread is specified as a delta which is added to the priority of the process. Changing the priority of the process affects the priority of all of the threads within that process. The default priority for a thread is DEFAULT_PRIO_NP, which is no change from the process priority.
These Pthread APIs support only a scheduling policy of SCHED_OTHER.
pthread_setschedparam (only SCHED_OTHER is supported)
pthread_getschedparam
pthread_attr_setschedparam
pthread_attr_getschedparam
An AS/400 thread competes for scheduling resources against other threads in the system, not solely against other threads in the process. The scheduler is a delay cost scheduler based on several delay cost curves (priority ranges). The POSIX standard and the Single UNIX Specification refer to this as scheduling scope and scheduling policy; on this implementation the policy cannot be changed from the default of SCHED_OTHER.
It can be controlled somewhat. For threads at the same priority, the pthreads standard specifies the choices of FIFO (the thread runs until it blocks or exits), Round Robin (the thread runs for a fixed amount of time), or the default "Other". The only one required by the standard is "Other", whose behavior is implementation-dependent but is usually a combination of FIFO and Round Robin (e.g., the thread runs until it blocks, exits, or its timeslice is used up, whichever happens first).
