I am trying to increase the maximum number of open file descriptors (and hence connections) for the Mosquitto broker, but I have read that the number of concurrent connections is not controlled by Mosquitto alone.
Based on our study, we decided that for 100k concurrent connections we should target 1.6 GB of RAM. For testing, I have to increase the limit from the default 1024 connections to 20000.
Testing environment configuration:
A t2.micro AWS instance running 64-bit Ubuntu 14.04. Changing the connection limit in the Mosquitto configuration is not taking effect. What could be the reason?
Do we need to change any configuration on the AWS server side?
My configuration:
The system-wide open file limit is configured in /etc/sysctl.conf:
fs.file-max = 99905
Running sysctl -p or cat /proc/sys/fs/file-max reflects the change.
In /etc/security/limits.conf:
ubuntu hard nofile 45000
ubuntu soft nofile 35000
Mosquitto is installed under the user 'ubuntu'.
We also added the following line to /etc/pam.d/common-session:
session required pam_limits.so
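A quick sanity check that pam_limits is applied to new sessions (the su invocation here is just one illustrative way to do it):
# open a fresh login session as the ubuntu user and print its soft open-file limit;
# with the limits.conf entries above, this should print 35000
su - ubuntu -c 'ulimit -n'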
Running ulimit -a gives the following result:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7859
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 35000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7859
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My init configuration file for Mosquitto, /etc/init/mosquitto.conf:
description "Mosquitto MQTTv3.1 broker"
author "Roger Light <roger#atchoo.org"
start on net-device-up
respawn
exec /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
#limit nofile 4096 4096
limit nofile 24000 24000
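An Upstart job only picks up a changed limit stanza when the job is restarted through Upstart itself; one way to verify that (the pidof lookup is illustrative):
sudo initctl stop mosquitto
sudo initctl start mosquitto
# check that the freshly started process actually got the new limit
grep 'Max open files' /proc/$(pidof mosquitto)/limits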
Below is the configuration in /etc/mosquitto/mosquitto.conf:
# change user to root
user ubuntu
We also added a set_ulimit helper to the init script /etc/init.d/mosquitto and call it from the start case:
set_ulimit () {
ulimit -f unlimited
ulimit -t unlimited
ulimit -v unlimited
ulimit -n 24000
ulimit -m unlimited
ulimit -u 24000
}
start)
...
# Update ulimit config in start command
set_ulimit
...
;;
stop)
But cat /proc/4957/limits still shows the default value of 1024 open files:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 7859 7859 processes
Max open files 1024 4096 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 7859 7859 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
(4957 is the process ID of Mosquitto.)
The number of open files is limited by the user limits; see the ulimit man page.
I set ulimit -n to 20000, ran the Mosquitto broker, and it shows:
% ps ax | grep mosquitto
9497 pts/44 S+ 0:00 ./mosquitto -c mosquitto.conf
9505 pts/10 S+ 0:00 grep --color=auto mosquitto
% cat /proc/9497/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 63084 63084 processes
Max open files 20000 20000 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 63084 63084 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Regardless, since Mosquitto is single-threaded, we have not found it usable for anything more than about 1000 publisher clients with a reasonable payload rate of one message per 10 seconds.
Changing limits in /etc/sysctl.conf or /etc/security/limits.conf seems to have no effect for a process launched as a service: the limit has to be set in the script that starts the daemon.
At the beginning of /etc/init.d/mosquitto:
ulimit -n 20000 # or more if you need more
in /etc/mosquitto/mosquitto.conf:
max_connections -1 # or the maximum number of connections you want
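Note that max_connections applies to the listener it follows, so with several listeners you can set it per listener; a minimal sketch (the ports and values here are examples, not from the answer):
# hypothetical /etc/mosquitto/mosquitto.conf with two listeners
listener 1883
max_connections -1 # unlimited on the plain MQTT listener
listener 8883
max_connections 20000 # cap the second listener instead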
So far I have achieved 74K concurrent connections on a single broker. I configured the limits on the broker server by editing the sysctl.conf and limits.conf files.
# vi /etc/sysctl.conf
fs.file-max = 10000000
fs.nr_open = 10000000
net.ipv4.tcp_mem = 786432 1697152 1945728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.ip_local_port_range = 1000 65535
# vi /etc/security/limits.conf
* soft nofile 10000000
* hard nofile 10000000
root soft nofile 10000000
root hard nofile 10000000
After this, reboot your system.
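If you want to apply the sysctl part without a full reboot, reloading the file works (the limits.conf changes still need a fresh login session):
sudo sysctl -p # reload /etc/sysctl.conf
cat /proc/sys/fs/file-max # should now print 10000000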
If you are using Ubuntu 16.04, you also need to make a change in /etc/systemd/system.conf:
# vim /etc/systemd/system.conf
DefaultLimitNOFILE=65536
Reboot, and this will increase the connection limit.
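After the reboot you can confirm that the new manager default was picked up, for example with:
systemctl show --property=DefaultLimitNOFILE
# expected output: DefaultLimitNOFILE=65536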
For me, none of the provided solutions worked on Ubuntu 18.04. I had to add LimitNOFILE=10000 to /lib/systemd/system/mosquitto.service under [Service]:
[Unit]
Description=Mosquitto MQTT Broker
Documentation=man:mosquitto.conf(5) man:mosquitto(8)
After=network.target
Wants=network.target
[Service]
LimitNOFILE=10000
Type=notify
NotifyAccess=main
ExecStart=/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
ExecStartPre=/bin/mkdir -m 740 -p /var/log/mosquitto
ExecStartPre=/bin/chown mosquitto:mosquitto /var/log/mosquitto
ExecStartPre=/bin/mkdir -m 740 -p /run/mosquitto
ExecStartPre=/bin/chown mosquitto:mosquitto /run/mosquitto
[Install]
WantedBy=multi-user.target
Then run systemctl daemon-reload to reload the changes and restart mosquitto with systemctl restart mosquitto.
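Note that edits to /lib/systemd/system/mosquitto.service can be overwritten by package upgrades; a drop-in override keeps the change across upgrades. A sketch using the standard systemd mechanism:
sudo systemctl edit mosquitto
# in the editor that opens, add:
# [Service]
# LimitNOFILE=10000
sudo systemctl restart mosquitto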
Related
I tried to run a gitlab-ce Docker container on an Ubuntu 22.04 server.
The output of docker logs --follow gitlab shows
execute[/opt/gitlab/bin/gitlab-ctl start alertmanager] action run
[execute] /opt/gitlab/bin/gitlab-ctl: fork: retry: Resource temporarily unavailable
even though monitoring with htop shows enough memory available. Docker exits with error code 137. My docker-compose.yml file looks like:
version: "3.7"
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    restart: "no"
    ports:
      - "8929:8929"
      - "2289:22"
    hostname: "gitlab.example.com"
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url "https://gitlab.example.com"
        nginx['listen_port'] = 8929
        nginx['listen_https'] = false
        gitlab_rails['gitlab_shell_ssh_port'] = 2289
    volumes:
      - ./volumes/gitlab/config:/etc/gitlab
      - ./volumes/gitlab/logs:/var/log/gitlab
      - ./volumes/gitlab/data:/var/opt/gitlab
    shm_size: "256m"
I am using Docker version 20.10.16. Other images work fine with Docker. The output of ulimit -a is:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1029348
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 62987
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I had the same problem with a vServer that looks pretty much like your machine.
I suspect the problem is a limit on the number of processes that can run at the same time. You are probably limited to 400, but you need more to run your Compose network.
cat /proc/user_beancounters | grep numproc
The response is formatted as: held, maxheld, barrier, limit.
If you run this command, you should see that you are very close to exceeding the limit (if my assumption is right).
Check out this link; they talk about Java, but the general problem is the same:
https://debianforum.de/forum/viewtopic.php?t=180774
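For illustration, the numproc line looks roughly like this (the numbers below are made up; held close to limit, and a non-zero failcnt, mean new forks will fail):
$ grep numproc /proc/user_beancounters
# resource  held  maxheld  barrier  limit  failcnt
numproc      395      400      400    400       12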
I am running a Kubernetes v1.11.1 cluster, and sometimes my kube-apiserver starts throwing 'too many open files' messages. I noticed too many open TCP connections to the node's kubelet port 10250.
My server is configured with 65536 file descriptors. Do I need to increase the number of open files for the container host? What are the recommended ulimit settings for the container host?
API server log messages:
I1102 13:57:08.135049 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:09.135191 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:10.135437 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:11.135589 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:12.135755 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
my host ulimit values:
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 64
-p: processes unlimited
-n: file descriptors 65536
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
65536 seems a bit low, although there are many apps that recommend that number. This is what I have on one K8s cluster for the kube-apiserver:
# kubeapi-server-container
# |
# \|/
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 16384
-p: processes unlimited
-n: file descriptors 1048576 <====
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
This differs from the system limits of a regular bash process:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15447
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024 <===
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15447
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Yet the total maximum for the whole system is:
$ cat /proc/sys/fs/file-max
394306
Nothing can exceed /proc/sys/fs/file-max on the system, so I would also check that value. I would also check the number of file descriptors being used (first column); this will give you an idea of how many open files the system has:
$ cat /proc/sys/fs/file-nr
2176 0 394306
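To see how many descriptors one particular process is holding (for example the kube-apiserver), you can count the entries in its /proc fd directory; the PID here is hypothetical:
$ ls /proc/12345/fd | wc -l # 12345 = PID of the process you suspect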
I have an issue with the amount of page-locked memory available on CentOS 7.
After allocating nodes with Slurm, when I launch a job with MPI (MVAPICH), I encounter the following error:
Fatal error in MPI_Init:
Other MPI error, error stack:
MPIR_Init_thread(514).......:
MPID_Init(359)..............: channel initialization failed
MPIDI_CH3_Init(401).........:
MPIDI_CH3I_RDMA_init(221)...:
rdma_setup_startup_ring(410): cannot create cq
It seems to be due to a lack of locked memory. However, locked memory seems to be set to unlimited, since ulimit -a returns:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 254957
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
When launching mpirun with sudo, it works.
The problem came from Slurm, which did not retrieve the proper value for max locked memory: salloc -N ulimit -l returned 64 instead of unlimited.
The solution is to add the following line to /etc/init.d/slurm:
ulimit -l unlimited
Then stop and start Slurm again:
sudo /etc/init.d/slurm stop
sudo /etc/init.d/slurm start
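On systems where slurmd is managed by systemd rather than an init.d script, the equivalent fix (my assumption, not part of the original answer) would be a limit directive in a unit drop-in:
# /etc/systemd/system/slurmd.service.d/override.conf (hypothetical drop-in)
[Service]
LimitMEMLOCK=infinity
followed by systemctl daemon-reload and systemctl restart slurmd.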
I have a problem where ulimit -a reverts to its old values every half hour. Every time, I change it to:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) 1048576
open files (-n) 2000000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) unlimited
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
but it returns to the defaults again.
How do I save the change permanently?
First, are you sure you added these settings to /etc/security/limits.conf:
* hard nofile 2000000
* soft nofile 2000000
Second, check every user's crontab using
crontab -e
Also don't forget to check
/etc/crontab
Check every running *.sh script using
ps aux | grep .sh
Maybe one of these cron jobs or shell scripts is changing the ulimit automatically.
Hope this helps :)
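A quick way to inspect every user's crontab in one pass (a sketch, run as root):
for u in $(cut -d: -f1 /etc/passwd); do
  echo "== $u =="
  crontab -l -u "$u" 2>/dev/null # prints nothing for users without a crontab
done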
I'm curious how OpenMP deals (or doesn't, as appears to be the case) with an unlimited stack size:
[alm475@compute-0-139 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
max nice (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 278528
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 278528
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[alm475@compute-0-139 ~]$ big_stack_openmp
Segmentation Fault
[alm475@compute-0-139 ~]$ ulimit -s 30960
[alm475@compute-0-139 ~]$ big_stack_openmp
The final command runs cleanly and produces the correct result. It requires a ~12 MB stack to run.
What is the behavior of stack allocations in a parallel environment when there is no declared stack size?
There are two stack sizes you have to consider when working with OpenMP. There is the stack of the initial (or master) thread, which is controlled by ulimit. Then there is the stack of each of the "slave" threads, which is controlled by the OpenMP environment variable OMP_STACKSIZE. This second stack has an implementation-defined default, and most implementations use a different default depending on whether you are running in 32-bit or 64-bit mode.
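For example, under the setup from the question, giving each worker thread a 32 MB stack before running would look like this (the size is an example; OMP_STACKSIZE is the standard OpenMP environment variable):
export OMP_STACKSIZE=32M # stack for each additional OpenMP thread
ulimit -s 30960 # stack for the initial (master) thread, as in the question
./big_stack_openmp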