I have Ubuntu 14.04, and this is the ulimit for ${myuser}:
ulimit -Hn
65536
I count the number of open files:
sudo lsof -u ${myuser} |wc -l
677245
How is this possible?
Edit: there is also this kernel parameter:
cat /proc/sys/fs/file-max
524288
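Note that ulimit -Hn is a per-process limit, not a per-user total, and lsof prints one line per entry per task (including memory-mapped files, current directories and duplicated entries for threads), so its line count can be much larger than the number of real file descriptors. As a rough comparison, a sketch that only counts entries under /proc/<pid>/fd for that user (assuming ${myuser} is still set) could look like this:
# count real file descriptors held by ${myuser}'s processes
for pid in $(pgrep -u "${myuser}"); do
  ls /proc/"$pid"/fd 2>/dev/null
done | wc -l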
Related
I'm new to Kubernetes and I'm trying to deploy Elasticsearch on it.
Currently, I have a problem with the number of file descriptors required by Elasticsearch and allowed by Docker:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
So to fix that, I have tried three different ways:
way 1
According to the Docker documentation, dockerd should use the system value as the default.
Set /etc/security/limits.conf with * - nofile 65536
Reboot
Run ulimit -Hn && ulimit -Sn: both return 65536
Run docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn' (this should return 65536 twice, but it returns 4096 and 1024)
way 2
Add --default-ulimit nofile=65536:65536 to /var/snap/microk8s/current/args/dockerd
Reboot
Run docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn' (this should return 65536 twice, but it returns 4096 and 1024)
way 3
Add
"default-ulimit": {
  "nofile": {
    "Name": "nofile",
    "Hard": "65536",
    "Soft": "65536"
  }
}
to /var/snap/microk8s/354/args/docker-daemon.json
Run systemctl restart snap.microk8s.daemon-docker.service
Run journalctl -u snap.microk8s.daemon-docker.service -f; it reports: unable to configure the Docker daemon with file /var/snap/microk8s/354/args/docker-daemon.json: the following directives don't match any configuration option: nofile
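For what it's worth, the daemon.json syntax documented for dockerd uses the plural key default-ulimits; a sketch of that form, as an assumption on my part and not verified against this MicroK8s build:
# write the daemon config using the plural "default-ulimits" key
sudo tee /var/snap/microk8s/354/args/docker-daemon.json >/dev/null <<'EOF'
{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Hard": 65536, "Soft": 65536 }
  }
}
EOF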
The only way I have found to set the ulimit is to pass --ulimit nofile=65536:65536 to the docker run command (see the example at the end of this question), but I cannot do that inside my Kubernetes StatefulSet config.
So do you know how I can solve this problem?
Did I do something wrong here?
Thanks in advance for your help.
PS: I'm on Ubuntu 18.04.1 with Docker 18.06.1-ce and microk8s installed with snap.
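For reference, the per-container override mentioned above only applies to containers started directly with docker run, roughly like this; it does not carry over to the containers the kubelet starts for a StatefulSet:
docker run --rm --ulimit nofile=65536:65536 centos:7 \
  /bin/bash -c 'ulimit -Hn && ulimit -Sn'
# prints 65536 twice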
A bit late, but if someone else runs into this problem, you can add this line to /var/snap/microk8s/current/args/containerd-env:
ulimit -n 65536
Then stop/start MicroK8s to apply the fix. If you then run docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn', you will see 65536 twice.
More information in MicroK8s GitHub issue #253. MicroK8s has merged a fix for this, so it should be available in a release soon.
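Putting it together, the whole workaround looks roughly like this (assuming the snap's microk8s.stop and microk8s.start wrappers are on your PATH; newer releases also accept microk8s stop/start):
# append the override, then restart MicroK8s so the container runtime picks it up
echo 'ulimit -n 65536' | sudo tee -a /var/snap/microk8s/current/args/containerd-env
sudo microk8s.stop
sudo microk8s.start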
This command
echo 1 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
when run inside a CentOS Docker container (running on a Mac), gives:
echo 1 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
tee: /proc/sys/net/ipv6/conf/all/disable_ipv6: Read-only file system
1
When run inside a CentOS virtual machine, it succeeds and gives no error.
The permissions inside the Docker container and in the VM are exactly the same:
VM:
$ ls -ld /proc/sys/net/ipv6/conf/all/disable_ipv6
-rw-r--r-- 1 root root 0 Jan 4 21:09 /proc/sys/net/ipv6/conf/all/disable_ipv6
docker:
$ ls -ld /proc/sys/net/ipv6/conf/all/disable_ipv6
-rw-r--r-- 1 root root 0 Jan 5 05:05 /proc/sys/net/ipv6/conf/all/disable_ipv6
This is a fresh, brand new container.
Docker version:
$ docker --version
Docker version 18.09.0, build 4d60db4
What am I missing?
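A hint about where the difference comes from: the permissions are identical, but in an unprivileged container Docker mounts /proc/sys (among other /proc paths) read-only, which you can check with something like:
docker run --rm centos:7 sh -c 'mount | grep /proc/sys'
# the /proc/sys line should carry the ro (read-only) mount flag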
A hackish solution is to add extended privileges to the container with --privileged:
$ docker run --rm -ti centos \
bash -c "echo 1 | tee /proc/sys/net/ipv6/conf/all/disable_ipv6"
tee: /proc/sys/net/ipv6/conf/all/disable_ipv6: Read-only file system
1
vs
$ docker run --privileged --rm -ti centos \
bash -c "echo 1 | tee /proc/sys/net/ipv6/conf/all/disable_ipv6"
1
You can use --cap-add to grant a precise privilege instead of --privileged.
However, --sysctl looks like the best solution, rather than hacking networking in the container with --privileged:
$ docker run --sysctl net.ipv6.conf.all.disable_ipv6=1 \
--rm -ti centos bash -c "cat /proc/sys/net/ipv6/conf/all/disable_ipv6"
1
I'm running two Docker containers, vm1 and vm2, from the same Docker image. Each runs successfully on its own, but they cannot run at the same time. I've checked the CPU and memory. What other resource could be limiting Docker from running multiple containers?
systemctl status docker result when running 'vm1' only:
Tasks: 148
Memory: 357.9M
systemctl status docker result when running 'vm2' only:
Tasks: 140
Memory: 360.0M
My system still has about 4 GB of free RAM, and the CPUs are idle too.
When I run vm1 and then vm2, vm2 fails with log lines like:
[17:55:50.504452] connect(172.17.0.2, 2889) failed: Operation now in progress
And other log lines like:
/etc/bashrc: fork: retry: Resource temporarily unavailable
systemctl status docker result when running 'vm1' then 'vm2':
Tasks: 244
Memory: 372.2M
vm1 docker run command:
exec docker run --rm -it --name vm1 \
-e OUT_IP="$MYIP" \
-h vm1 \
-v /MyLib/opt:/opt:ro \
-v /home/myid:/home/guest \
-v /sybase:/sybase \
-v /sybaseDB:/sybaseDB \
run-image $*
vm2 docker run command:
exec docker run --rm -it --name vm2 \
-e OUT_IP="$MYIP" \
-h vm2 \
-v /MyLib/opt:/opt:ro \
-v /home/myid:/home/guest \
-v /sybase2:/sybase \
-v /sybaseDB2:/sybaseDB2 \
run-image $*
Some command results, as suggested by fork: retry: Resource temporarily unavailable:
# in host os
$ sysctl fs.file-nr
fs.file-nr = 4064 0 814022
# in docker container (vm2)
$ sudo su - guest
$ ulimit -Hn
1048576
$ sudo lsof -u guest 2>/dev/null | wc -l
230
The docker run user is 'guest', but I run the program as the 'ap' user through sudo. I found that the 'ulimit -u' result differs between users inside the container; the run-image is based on centos:6.
$ sudo su - guest
$ ulimit -u
unlimited
$ sudo su - ap
$ ulimit -u
1024
In my case, the problem was caused by the 'ap' user's default ulimit -u being only 1024. When only vm1 or vm2 is running, the 'ap' user's process/thread count stays under 1024, but when I run both vm1 and vm2, the total count exceeds 1024.
The solution is to raise the default per-user nproc limit for CentOS 6:
sudo sed -i 's/1024/4096/' /etc/security/limits.d/90-nproc.conf
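To see how close the 'ap' account is to that limit, you can count its tasks (RLIMIT_NPROC counts threads as well as processes); a sketch, assuming 'ap' maps to the same UID on the host:
# number of tasks (processes + threads) currently owned by 'ap'
ps -L -u ap --no-headers | wc -l
Alternatively, the limit can be raised per container with docker run --ulimit nproc=4096:4096 instead of editing the file inside the image.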
I found a video about setting up the Docker remote API, published by Packt Publishing.
In the video we are told to change the /etc/init/docker.conf file by adding "-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock" to DOCKER_OPTS=, and then to restart Docker for the changes to take effect.
However, after doing all that, I still can't curl localhost on that port. Doing so returns:
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost:4243/_ping
curl: (7) Failed to connect to localhost port 4243: Connection refused
I'm relatively new to Docker; if somebody could help me out here I'd be very grateful.
Edit:
docker.conf
description "Docker daemon"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
respawn
kill timeout 20
pre-start script
# see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
if grep -v '^#' /etc/fstab | grep -q cgroup \
|| [ ! -e /proc/cgroups ] \
|| [ ! -d /sys/fs/cgroup ]; then
exit 0
fi
if ! mountpoint -q /sys/fs/cgroup; then
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
fi
(
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
)
end script
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
DOCKER=/usr/bin/$UPSTART_JOB
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
exec "$DOCKER" daemon $DOCKER_OPTS
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
"/etc/init/docker.conf" 60L, 1582C
Edit 2: Output of ps aux | grep docker
vagrant@vagrant-ubuntu-trusty-64:~$ ps aux | grep docker
root       858  0.2  4.2 401836 21504 ?      Ssl  06:12   0:00 /usr/bin/docker daemon --insecure-registry 11.22.33.44:5000
vagrant   1694  0.0  0.1  10460   936 pts/0  S+   06:15   0:00 grep --color=auto docker
The problem
According to the output of ps aux | grep docker, the options the daemon was started with do not match the ones in the docker.conf file, so another file is being used to start the Docker daemon service.
Solution
To solve this, track down the file that contains the option --insecure-registry 11.22.33.44:5000 (it may be /etc/default/docker, /etc/init/docker.conf, /etc/systemd/system/docker.service, or somewhere else) and modify it accordingly with the needed options.
Then restart the daemon and you're good to go!
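A quick way to track it down is to grep the usual locations for that option and then restart Docker through whichever init system owns it; a sketch (paths vary by distro, adjust as needed):
# find the file(s) that define the daemon options actually in use
sudo grep -rl -- '--insecure-registry' /etc/default /etc/init /etc/systemd 2>/dev/null
# on this Ubuntu 14.04 / upstart setup:
sudo service docker restart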
I have a centos:7 minimal image in my Docker setup, and I want to stop iptables/firewalld, but the official centos:7 image I downloaded from the Docker repository does not support systemctl/service.
Please guide me on how to stop iptables/firewalld in this minimal centos:7 image.
I tried
setenforce 0
and disabled SELinux.
The official centos:7 minimal image has no firewalld installed, and iptables is not running by default.
$ docker run -it centos:7 bash
[root@f4d4d29f4ca4 /]# find / -name 'fire*'
[root@f4d4d29f4ca4 /]# find / -name 'iptables*'
/etc/sysconfig/iptables-config
/etc/sysconfig/iptables
/usr/lib/systemd/system/iptables.service
/usr/sbin/iptables
/usr/sbin/iptables-save
/usr/sbin/iptables-restore
/usr/libexec/initscripts/legacy-actions/iptables
/usr/libexec/iptables
/usr/libexec/iptables/iptables.init
/usr/bin/iptables-xml
[root@f4d4d29f4ca4 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.1 11748 2916 ? Ss 12:13 0:00 bash
root 40 0.0 0.1 19752 2244 ? R+ 12:16 0:00 ps aux
SELinux is not installed either:
[root@f4d4d29f4ca4 /]# cat /etc/sysconfig/selinux
cat: /etc/sysconfig/selinux: No such file or directory
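If the real goal is to make sure no firewall interferes with the container's traffic, that is configured on the Docker host, not inside the image; for example, on the host you can inspect the rules that actually apply (including Docker's own chains):
# run these on the Docker host, not in the container
sudo iptables -L -n
sudo iptables -t nat -L -n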