I am trying to change a Linux thread's priority to the real-time class SCHED_FIFO with pthread_setschedparam, but I am getting a "no permission" error.
I get this error even when the process is run as root (via sudo).
What is the correct way to change a Linux thread's priority to real-time SCHED_FIFO?
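For reference, this is roughly what I am doing (a minimal sketch; the helper name and the priority value 65 are just illustrative):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch: switch the calling thread to SCHED_FIFO.
   The priority must lie between sched_get_priority_min(SCHED_FIFO)
   and sched_get_priority_max(SCHED_FIFO); 65 is just an example. */
static int make_realtime(void)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 65;

    int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    if (rc != 0) {
        /* pthread_setschedparam returns the error code directly;
           EPERM is the "no permission" case described above. */
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(rc));
        return -1;
    }
    return 0;
}

This is compiled with -pthread, and the call fails with EPERM as described.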
There were two reasons for the problem.
First, user limits weren't configured in /etc/security/limits.conf.
Both the hard and soft rtprio limits should be set.
Here is an example:
myusername hard rtprio 65
myusername soft rtprio 65
Details about configuring these limits can be found in the limits.conf file itself.
Second, the kernel (/usr/src/linux-headers-$(uname -r)/.config) was built with CONFIG_RT_GROUP_SCHED=y.
When CONFIG_RT_GROUP_SCHED is enabled, a corresponding control group (cgroup) must be created and given a real-time runtime budget before real-time priorities can be used.
See:
https://www.kernel.org/doc/Documentation/scheduler/sched-rt-group.txt
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/ch01
https://manpages.ubuntu.com/manpages/bionic/man7/cgroups.7.html
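For example, a rough sketch of setting up such a group, assuming the cgroup v1 cpu controller is mounted at /sys/fs/cgroup/cpu (the group name, the budget value, and $PID are illustrative; see sched-rt-group.txt above for the exact semantics):

# create a child group under the cpu controller
sudo mkdir /sys/fs/cgroup/cpu/rt_tasks
# grant it a real-time runtime budget (microseconds per period; it defaults to 0,
# which is exactly what makes switching to SCHED_FIFO fail with EPERM)
echo 900000 | sudo tee /sys/fs/cgroup/cpu/rt_tasks/cpu.rt_runtime_us
# move the thread/process into the group before it switches to SCHED_FIFO
echo "$PID" | sudo tee /sys/fs/cgroup/cpu/rt_tasks/tasks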
I'm using Splunk to monitor my applications.
I also store resource statistics in Splunk.
Goal: I want to find the optimum CPU limit for each container.
How do I write a query that finds an optimum CPU limit? Or, the other question: should I?
Concern 1: Say I customize my query and use the MAX(CPU) value. A container does not run at that peak level most of the time, so I might set an unnecessarily high limit for my containers.
Let me explain: if MAX(CPU) gives me a limit value of 10, that peak might have come from a single bulk operation, while the container's expected usage is around 1.2 the rest of the time. So using the MAX value won't work.
Concern 2: Say I use the AVG(CPU) value instead, and it is 2. How many of my operations will have to wait, and for how long, after this change? How many of them will time out? It could create a lot of side effects. How do I decide the real average value? What parameters should be used?
Is it possible to include such conditions in the query? Or do I need an AI to decide it? :)
Here are my given parameters:
path=statistics.cpus_system_time_secs
path=statistics.cpus_user_time_secs
path=statistics.cpus_nr_periods
path=statistics.cpus_nr_throttled
path=statistics.cpus_throttled_time_secs
path=statistics.cpus_limit
I bet you can ask better questions than me. Let's discuss.
"Optimum" is going to depend greatly on your own environment (resources available, application priority, etc)
You probably want to look at a combination of the following factors:
avg(CPU)
max(CPU) (and time spent there)
min(CPU) (and time spent there)
I suspect your "optimum" limit is going to be a percentage below your max... but only if you're spending a lot of time maxed out.
And, of course, being maxed out may not matter if other containers are running acceptably.
Keep in mind, once you set that limit, your max will drop (as, likely, will your avg)
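A minimal SPL sketch along those lines (the index, sourcetype, and field names here are assumptions; substitute whatever your statistics.* fields map to, and treat the suggested limit as a starting point, not an answer):

index=container_stats sourcetype=docker_stats
| stats avg(cpu_usage) AS avg_cpu max(cpu_usage) AS max_cpu perc95(cpu_usage) AS p95_cpu min(cpu_usage) AS min_cpu BY container_name
| eval suggested_limit = round(p95_cpu * 1.2, 2)

Comparing p95_cpu against max_cpu also tells you whether the max was a one-off spike (the bulk-operation case you mention) or a level the container actually spends time at.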
Folks,
Regarding docker compose v3's 'cpus' parameter (under 'deploy' > 'resources' > 'limits') for limiting the CPUs available to a service: is it an absolute number that specifies a count of CPUs, or is it a more useful percentage-of-available-CPUs setting?
From what I read, it appears to be an absolute number. Say a host has 4 CPUs and two services in the compose file are each set to 0.5; then the two services combined can only use a maximum of 1 CPU (0.5 each), leaving the remaining 3 CPUs idle.
But thinking out loud, it seems it would be nicer if this were a percentage-of-available-cores setting. For the same example, each service could then use up to 2 CPUs, and the two combined could use all 4 when needed. That way, when I increase or decrease the available cores, the relative setting would save me from changing this value again.
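For concreteness, this is the kind of setting I mean (a minimal sketch; the service and image names are made up):

version: "3.8"
services:
  service_a:
    image: example/service-a
    deploy:
      resources:
        limits:
          cpus: "0.5"
  service_b:
    image: example/service-b
    deploy:
      resources:
        limits:
          cpus: "0.5"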
EDIT(09/10/21):
On reading this, it appears that the above can be achieved with the 'cpu-shares' setting instead of 'cpus'. Is my understanding correct?
The doc for 'cpu-shares', however, mentions the following cautionary note:
"It does not guarantee or reserve any specific CPU access."
If the above is achieved with this setting, then what does that mean in practice? What is there to lose by not having a guarantee or reservation?
EDIT(09/13/21):
Just to summarize,
The 'cpus' parameter setting is an absolute number that refers to the number of CPUs a service has reserved for it to use at all times. Correct?
The 'cpu-shares' parameter setting is a relative weight whose value is used to compute the share of the total available CPU that a service can use, and it only matters when there is contention. Correct?
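If that summary is right, the relative-weight variant would look roughly like this (a sketch; the weights and service names are arbitrary, and cpu_shares only influences scheduling while the CPUs are contended):

services:
  service_a:
    image: example/service-a
    cpu_shares: 512
  service_b:
    image: example/service-b
    cpu_shares: 1024   # gets roughly twice service_a's share when both are busy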
I have a service that uses a Docker image. About half a dozen people use it. However, the containers occasionally produce big core.xxxx dump files. How do I disable this for the Docker images? My base image is Debian 9.
To disable core dumps, set a ulimit value in the /etc/security/limits.conf file, which defines shell-specific restrictions.
A hard limit cannot be raised by an unprivileged user, while a soft limit is the effective value and can be adjusted by the user up to the hard limit. If you would like to ensure that no process can create a core dump, set both to zero. Although it may look like a boolean (0 = false, 1 = true), it actually indicates the maximum allowed size.
* soft core 0
* hard core 0
The asterisk means the entry applies to all users. The second column states whether it is a hard or soft limit, followed by the columns giving the item being limited (core) and the value.
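Since the question is about containers specifically, note that limits.conf is applied through PAM at login time; for a container the same restriction can also be passed directly to Docker (a sketch; the image and service names are placeholders):

docker run --ulimit core=0 example/myimage

or, in a compose file:

services:
  myservice:
    image: example/myimage
    ulimits:
      core:
        soft: 0
        hard: 0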
I have just started with Camus. I am planning to run a Camus job every hour. We get ~80,000,000 messages (~4 KB average size) every hour.
How do I set the following properties:
# max historical time that will be pulled from each partition based on event timestamp
kafka.max.pull.hrs=1
# events with a timestamp older than this will be discarded.
kafka.max.historical.days=3
I am not able to work out these configurations clearly. Should I set the days property to 1 and the hours property to 2?
How does Camus pull the data? I also often see the following error:
ERROR kafka.CamusJob: Offset range from kafka metadata is outside the previously persisted offset
Please check whether kafka cluster configuration is correct. You can also specify config parameter: kafka.move.to.earliest.offset to start processing from earliest kafka metadata offset.
How do I set the configurations correctly to run every hour and avoid that error?
"Offset range from kafka metadata is outside the previously persisted offset ."
Indicates that your fetching is not as fast as the kafka's pruning.
kafka's pruning is defined by log.retention.hours.
Option 1: Increase the retention time by raising log.retention.hours.
Option 2: Run the Camus job more frequently.
Option 3: Set kafka.move.to.earliest.offset=true in your Camus job.
This property forces Camus to start consuming from the earliest offset currently present in Kafka. But it may lead to data loss, since it does not account for the pruned data that was never fetched.
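Putting those pieces together for an hourly job, a sketch of the relevant properties (the values are only a starting point, not a recommendation tuned to your volume):

# pull at most ~1 hour of history per partition per run
kafka.max.pull.hrs=1
# discard events older than this; leaves headroom for missed or delayed runs
kafka.max.historical.days=3
# if the persisted offset has already been pruned by Kafka, restart from the
# earliest available offset instead of failing (the pruned range is lost)
kafka.move.to.earliest.offset=true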
I am trying to submit a job to a cluster [running Sun Grid Engine (SGE)]. The job kept being terminated with the following report:
Job 780603 (temp_new) Aborted
Exit Status = 137
Signal = KILL
User = heaswara
Queue = std.q#comp-0-8.local
Host = comp-0-8.local
Start Time = 08/24/2013 13:49:05
End Time = 08/24/2013 16:26:38
CPU = 02:46:38
Max vmem = 12.055G
failed assumedly after job because:
job 780603.1 died through signal KILL (9)
The resource requirements I had set were:
#$ -l mem_free=10G
#$ -l h_vmem=12G
mem_free is the amount of memory my job requires, and h_vmem is the upper bound on the amount of memory the job is allowed to use. I suspect my job is being terminated because it requires more than that threshold (12G).
Is there a way to estimate how much memory will be required for my operation? I am trying to figure out what should be the upper bound.
Thanks in advance.
It depends on the nature of the job itself. If you know anything about the program that is being run (i.e., you wrote it), you should be able to make an estimate on how much memory it is going to want. If not, your only recourse is to run it without the limit and see how much it actually uses.
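On SGE, one way to read that back after an unlimited trial run is from the accounting records; for example (using the job ID from the question):

qacct -j 780603 | grep -i maxvmem

The maxvmem value reported there is a reasonable basis for setting h_vmem on later runs, with some padding.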
I have a bunch of FPGA build and simulation jobs that I run. After each job, I track how much memory was actually used. I can use this historical information to make an estimate on how much it might use in the future (I pad by 10% in case there are some weird changes in the source). I still have to redo the calculations whenever the vendor delivers a new version of the tools, though, as quite often the memory footprint changes dramatically.