I have a slurm job like this:
#!/bin/bash
#SBATCH -o %A.%N.out
#SBATCH -e %A.%N.err
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH -n 16
#SBATCH --export=ALL
#SBATCH -t 1:00:00
cmd1 input1 > o1
cmd2 o1 > o2
cmd3 o2 > o3
With sacct, one can get the time and CPU usage for the whole job. I am also interested in getting that information for cmd1 and cmd3 specifically. How can I do that? Will job steps and srun help?
You can get a separate sacct entry per step.
If you run your commands with srun, each one becomes a job step that is monitored and gets its own entry.
After that, the sacct output will show one line for the whole job, one for the batch step, and one for each of the steps in the script (the srun/mpirun commands).
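For example, here is a minimal sketch of the same script with each command wrapped in srun (the exact srun task/CPU options depend on what each command needs, and the sacct fields below are just one possible selection):
#!/bin/bash
#SBATCH -o %A.%N.out
#SBATCH -e %A.%N.err
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH -n 16
#SBATCH --export=ALL
#SBATCH -t 1:00:00
srun -n 1 cmd1 input1 > o1   # becomes step <jobid>.0
srun -n 1 cmd2 o1 > o2       # becomes step <jobid>.1
srun -n 1 cmd3 o2 > o3       # becomes step <jobid>.2
Afterwards, something like sacct -j <jobid> --format=JobID,JobName,Elapsed,TotalCPU,MaxRSS prints one row per step, so the rows for steps 0 and 2 give the elapsed time and CPU usage of cmd1 and cmd3.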
You can use /usr/bin/time -v to get detailed information about timing and the resources used. Note that this refers to the binary /usr/bin/time, not the shell built-in time:
$ /usr/bin/time -v ls /
bin dev home lib64 media opt root sbin sys usr
boot etc lib lost+found mnt proc run srv tmp var
Command being timed: "ls /"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 94%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2136
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 126
Voluntary context switches: 1
Involuntary context switches: 1
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
You can prepend this to any command in your batch script.
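For example, applied to the script above (a sketch; /usr/bin/time writes its report to stderr, so it ends up in the job's .err file):
/usr/bin/time -v cmd1 input1 > o1
/usr/bin/time -v cmd2 o1 > o2
/usr/bin/time -v cmd3 o2 > o3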
I am trying to set an upper write-throughput limit per cgroup via the blkio cgroup controller.
I have tried it like this:
echo "major:minor 10485760" > /sys/fs/cgroup/blkio/docker/XXXXX/blkio.throttle.write_bps_device
This should limit throughput to 10 MB/s. However, the tool that monitors the server's disks shows that the write rate is not held at around 10 MB/s as I expected. Can somebody explain this behaviour to me and maybe propose a better way to limit throughput?
Are you sure the major/minor numbers you specified in the command line are correct? Moreover, since you are running in Docker, the limitation applies to the processes running inside the container, not to processes running outside it. So you need to check where the monitoring tool gets its numbers from: does it measure all processes, inside and outside the container, or only the processes inside the container?
To check the setting, the Linux documentation provides an example with the dd command and a device limited to 1 MB/s on reads. You can try the same with a limit on writes to see whether the monitoring tool is consistent with dd's output. Run dd inside the container.
For example, my home directory is located on /dev/sdb2:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
[...]
/dev/sdb2 2760183720 494494352 2125409664 19% /home
[...]
$ ls -l /dev/sdb*
brw-rw---- 1 root disk 8, 16 mars 14 08:14 /dev/sdb
brw-rw---- 1 root disk 8, 17 mars 14 08:14 /dev/sdb1
brw-rw---- 1 root disk 8, 18 mars 14 08:14 /dev/sdb2
I check the write speed to a file:
$ dd oflag=direct if=/dev/zero of=$HOME/file bs=4K count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4,2 MB, 4,0 MiB) copied, 0,131559 s, 31,9 MB/s
I set the 1 MB/s write limit on the whole disk (8:16), since it does not work on the individual partition (8:18) where my home directory resides:
# echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
# cat /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
8:16 1048576
dd's output confirms the limitation of the I/O throughput to 1 MB/s:
$ dd oflag=direct if=/dev/zero of=$HOME/file bs=4K count=1024
1024+0 records in
1024+0 records out
4194304 bytes (4,2 MB, 4,0 MiB) copied, 4,10811 s, 1,0 MB/s
So it should be possible to do the same within a container's cgroup.
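A minimal sketch of the same check against a container's cgroup (the XXXXX container id and the 8:16 device numbers are placeholders, as in the question, and the paths assume cgroup v1):
# on the host: throttle writes for the container's cgroup
echo "8:16 1048576" > /sys/fs/cgroup/blkio/docker/XXXXX/blkio.throttle.write_bps_device
# inside the container: measure the effective write rate
docker exec XXXXX dd oflag=direct if=/dev/zero of=/tmp/file bs=4K count=1024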
1) I start a container with:
docker run --name test -idt python:3 python -m http.server
2) Then I check the memory usage in several ways:
a)
root@shubuntu1:~# ps aux | grep "python -m http.server"
root 17416 3.0 0.2 27368 19876 pts/0 Ss+ 17:11 0:00 python -m http.server
b)
root@shubuntu1:~# docker exec -it test ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.9 0.2 27368 19876 pts/0 Ss+ 09:11 0:00 python -m http.
c)
root@shubuntu1:~# docker stats --no-stream test
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d72f2ece6816 test 0.01% 12.45MiB / 7.591GiB 0.16% 3.04kB / 0B 0B / 0B 1
From both the Docker host and inside the container, python -m http.server appears to consume 19876/1024 ≈ 19.4 MB of memory (RSS), but docker stats reports only 12.45 MiB. Why does the container's reported memory usage come out lower than that of the PID 1 process inside it?
From the ps man page: rss RSS — resident set size, the non-swapped physical memory that a task has used (in kilobytes) (alias rssize, rsz).
From the docker stats documentation: MEM USAGE / LIMIT — the total memory the container is using, and the total amount of memory it is allowed to use.
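The two numbers are therefore computed differently: ps reports the per-process RSS, while docker stats is derived from the container's memory cgroup accounting. A way to look at the raw cgroup figures (a sketch; paths assume cgroup v1 and the default Docker cgroup layout):
# on the host: inspect the container's memory cgroup directly
CID=$(docker inspect --format '{{.Id}}' test)
cat /sys/fs/cgroup/memory/docker/$CID/memory.usage_in_bytes
grep -E 'rss|cache' /sys/fs/cgroup/memory/docker/$CID/memory.stat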
I am trying to increase the maximum number of open file connections of the Mosquitto broker, but I have read that the number of concurrent connections is not controlled by Mosquitto alone.
Based on our study, we are targeting 1.6 GB of RAM for 100k concurrent connections. For testing, however, I first have to increase the limit from the default 1024 connections to 20000.
Testing environment configuration:
t2.micro AWS server running 64-bit Ubuntu 14.04. Changing the connection limit in the Mosquitto configuration has no effect. What could be the reason?
Do we need to change any configuration on the AWS server itself?
My configurations:
The system-wide limit on open files is configured in /etc/sysctl.conf:
fs.file-max =99905
Running sysctl -p or cat /proc/sys/fs/file-max shows that the change has taken effect.
In /etc/security/limits.conf:
ubuntu hard nofile 45000
ubuntu soft nofile 35000
Mosquitto is installed under the user 'ubuntu'.
We also added the line below to /etc/pam.d/common-session:
session required pam_limits.so
Running ulimit -a gives the following result:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7859
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 35000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7859
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My init configuration file for Mosquitto, /etc/init/mosquitto.conf:
description "Mosquitto MQTTv3.1 broker"
author "Roger Light <roger#atchoo.org"
start on net-device-up
respawn
exec /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
#limit nofile 4096 4096
limit nofile 24000 24000
Below is the configuration in /etc/mosquitto/mosquitto.conf, together with the ulimit settings added in the init script:
# change user to root
user ubuntu
set_ulimit () {
ulimit -f unlimited
ulimit -t unlimited
ulimit -v unlimited
ulimit -n 24000
ulimit -m unlimited
ulimit -u 24000
}
start)
...
# Update ulimit config in start command
set_ulimit
...
;;
stop)
But cat /proc/4957/limits still shows the default value of 1024 open files:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 7859 7859 processes
Max open files 1024 4096 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 7859 7859 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
(4957 is the process ID of Mosquitto.)
The number of open files is limited by the user limits; see the ulimit man page.
I set ulimit -n to 20000, ran the mosquitto broker, and it shows:
% ps ax | grep mosquitto
9497 pts/44 S+ 0:00 ./mosquitto -c mosquitto.conf
9505 pts/10 S+ 0:00 grep --color=auto mosquitto
% cat /proc/9497/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 63084 63084 processes
Max open files 20000 20000 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 63084 63084 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Regardless, since mosquitto is single-threaded, we have not found it usable for anything more than about 1000 publisher clients at a reasonable payload rate of one message every 10 seconds.
Changing limits in /etc/sysctl.conf or /etc/security/limits.conf seems to have no effect for a process launched as a service: the limit has to be set in the file that starts the daemon.
At the beginning of /etc/init.d/mosquitto:
ulimit -n 20000   # or more if you need more
in /etc/mosquitto/mosquitto.conf:
max_connections -1   # or the maximum number of connections you want
So far I have achieved 74K concurrent connections on a single broker. I configured the limits on the broker server by editing the sysctl.conf and limits.conf files.
# vi /etc/sysctl.conf
fs.file-max = 10000000
fs.nr_open = 10000000
net.ipv4.tcp_mem = 786432 1697152 1945728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.ip_local_port_range = 1000 65535
# vi /etc/security/limits.conf
* soft nofile 10000000
* hard nofile 10000000
root soft nofile 10000000
root hard nofile 10000000
After this, reboot your system.
If you are using Ubuntu 16.04, you also need to make a change in systemd's system.conf:
# vim /etc/systemd/system.conf
DefaultLimitNOFILE=65536
Reboot; this will increase the connection limit.
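After the reboot, it is worth verifying that the new limits actually reached the running broker, for example (a sketch; the pgrep lookup assumes a single process named mosquitto):
ulimit -n                                               # limit for the current shell/user
grep "open files" /proc/$(pgrep -x mosquitto)/limits    # limit seen by the broker itself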
For me, none of the provided solutions worked on Ubuntu 18.04. I had to add a LimitNOFILE entry to /lib/systemd/system/mosquitto.service under the [Service] section:
[Unit]
Description=Mosquitto MQTT Broker
Documentation=man:mosquitto.conf(5) man:mosquitto(8)
After=network.target
Wants=network.target
[Service]
LimitNOFILE=5000
Type=notify
NotifyAccess=main
ExecStart=/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
ExecStartPre=/bin/mkdir -m 740 -p /var/log/mosquitto
ExecStartPre=/bin/chown mosquitto:mosquitto /var/log/mosquitto
ExecStartPre=/bin/mkdir -m 740 -p /run/mosquitto
ExecStartPre=/bin/chown mosquitto:mosquitto /run/mosquitto
[Install]
WantedBy=multi-user.target
Then run systemctl daemon-reload to reload the changes and restart mosquitto with systemctl restart mosquitto.
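An alternative, since files under /lib/systemd/system can be overwritten by package upgrades, is a drop-in override (a sketch using the standard systemd mechanism; the value is only an example):
# opens an editor and creates /etc/systemd/system/mosquitto.service.d/override.conf
sudo systemctl edit mosquitto
# add these two lines in the editor, save, then restart:
[Service]
LimitNOFILE=10000
sudo systemctl restart mosquitto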
Working off this example: http://www.gnu.org/software/parallel/man.html#EXAMPLE:-Speeding-up-fast-jobs
When I run:
seq -w 0 9999 | parallel touch pict{}.jpg
seq -w 0 9999 | parallel -X touch pict{}.jpg
Success! However, add another 9 and BOOM:
$ seq -w 0 99999 | parallel --eta -X touch pict{}.jpg
parallel: Warning: No more processes: Decreasing number of running jobs to 3. Raising ulimit -u or /etc/security/limits.conf may help.
Computers / CPU cores / Max jobs to run
1:local / 4 / 3
parallel: Warning: No more processes: Decreasing number of running jobs to 2. Raising ulimit -u or /etc/security/limits.conf may help.
parallel: Warning: No more processes: Decreasing number of running jobs to 1. Raising ulimit -u or /etc/security/limits.conf may help.
parallel: Error: No more processes: cannot run a single job. Something is wrong.
I would expect parallel -X to run no more jobs than I have CPU cores, and to cram as many arguments onto each job as the maximum command-line length permits. How am I running out of processes?
My environment:
OSX Yosemite
ulimit -u == 709
GNU parallel 20141122
GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin14)
Your expectation is 100% correct. What you are seeing is clearly a bug - probably due to GNU Parallel not being well tested on OSX. Please follow http://www.gnu.org/software/parallel/man.html#REPORTING-BUGS and file a bug report.
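In the meantime, the warning's own hint is the obvious stop-gap; a sketch (the numbers are only examples, and on OS X the hard per-user process limit may prevent raising ulimit -u very far):
ulimit -u 2048                                        # raise the per-user process limit for this shell, if permitted
seq -w 0 99999 | parallel -j4 -X touch pict{}.jpg     # or cap the number of job slots explicitly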
We store a lot of small fixed-length values in memcached, and I constantly observe that memcached's used memory stops growing before reaching its defined memory limit. Usually it stops growing upon reaching 100820 items in the smallest slab. I have tried playing with the -f factor to no avail: growth stops upon reaching 100820 items in one slab.
Is there any way to raise the 100820-item limit? I can't find any information about it anywhere.
Detailed statistics are below.
Run string:
/usr/bin/memcached -m 328 -p 11211 -u nobody -l 127.0.0.1 -n 52 -C
3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
memcached 1.4.13
STAT 1:chunk_size 104
STAT 1:chunks_per_page 10082
STAT 1:total_pages 10
STAT 1:total_chunks 100820
STAT 1:used_chunks 100820
STAT 1:free_chunks 0
STAT 1:free_chunks_end 0
STAT 1:mem_requested 10407999
STAT 1:get_hits 262079
STAT 1:cmd_set 321590
STAT 1:delete_hits 0
STAT 1:incr_hits 0
STAT 1:decr_hits 0
STAT 1:cas_hits 0
STAT 1:cas_badval 0
STAT 1:touch_hits 0
stats sizes
STAT 96 8
STAT 128 100812
STAT 160 5
STAT 192 195
STAT 224 4533
STAT 256 7859
STAT 288 10608
STAT 320 21084
STAT 352 26051
[...]
On the man page (http://linux.die.net/man/1/memcached) you'll see the -I option, which lets you adjust the slab page size; used in conjunction with the -f growth-factor switch, it lets you tune the slab classes and make better use of memcached's storage.
If that is not enough, there is a much better explanation here: http://artur.ejsmont.org/blog/content/a-few-memcache-gotchas
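As a sketch only (the growth factor and page size below are illustrative, not tuned values), the original run string could be extended with those two switches like this:
/usr/bin/memcached -m 328 -p 11211 -u nobody -l 127.0.0.1 -n 52 -C -f 1.05 -I 2m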