I'm new to Kubernetes and I'm trying to deploy Elasticsearch on it.
Currently I have a problem with the number of file descriptors required by Elasticsearch versus the number allowed by Docker:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
To fix that, I have tried three different approaches:
Way 1
From the Docker documentation, dockerd should use the system value as its default.
Set * - nofile 65536 in /etc/security/limits.conf.
Reboot.
Executing ulimit -Hn && ulimit -Sn returns 65536 twice.
Executing docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn' should also return 65536 twice, but it returns 4096 and 1024.
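A diagnostic worth running at this point (a sketch; under the microk8s snap the daemon's process name may differ, and pidof must return a single PID):

grep "open files" /proc/$(pidof dockerd)/limits

Containers inherit the limits of the Docker daemon process, and a daemon started by systemd or snap never goes through pam_limits, so it does not pick up /etc/security/limits.conf; that would explain why your login shell shows 65536 while the container still sees 4096/1024.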
Way 2
add --default-ulimit nofile=65536:65536 to /var/snap/microk8s/current/args/dockerd
reboot
execute docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn' (should return 65536 twice, but again returns 4096 and 1024)
Way 3
add the following to /var/snap/microk8s/354/args/docker-daemon.json:
"default-ulimit": {
    "nofile": {
        "Name": "nofile",
        "Hard": "65536",
        "Soft": "65536"
    }
}
execute systemctl restart snap.microk8s.daemon-docker.service
execute journalctl -u snap.microk8s.daemon-docker.service -f, which returns: unable to configure the Docker daemon with file /var/snap/microk8s/354/args/docker-daemon.json: the following directives don't match any configuration option: nofile
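As an aside, the key documented for Docker's daemon.json is default-ulimits (plural), with numeric values rather than strings; a sketch based on the upstream Docker docs (whether the microk8s snap's wrapper accepts this file is a separate question):

{
    "default-ulimits": {
        "nofile": {
            "Name": "nofile",
            "Hard": 65536,
            "Soft": 65536
        }
    }
}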
The only way I have found to set the ulimit is to pass --ulimit nofile=65536:65536 to the docker run command, but I cannot do that inside my Kubernetes StatefulSet config.
So do you know how I can solve this problem?
Did I do something wrong here?
Thanks in advance for your help.
PS: I'm on Ubuntu 18.04.1 with Docker 18.06.1-ce and microk8s installed with snap.
A bit late, but if someone else has this problem, you can add this line to /var/snap/microk8s/current/args/containerd-env:
ulimit -n 65536
Then stop/start microk8s to enable the fix. If you then execute docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn', you will see 65536 twice.
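For reference, the stop/start can be done with the snap's own commands (assuming microk8s was installed with snap, as in the question):

microk8s.stop
microk8s.start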
More information is in MicroK8s GitHub issue #253. MicroK8s has merged a fix for this, so it may soon be available in a release.
Related
My HAProxy container exits every time I try to run it.
I have tried to run it without -d to see why it exits:
$ sudo docker run --name=hapr -p 80:80 -v /haproxy/:/usr/local/etc/haproxy/ haproxy
I get this output:
HA-Proxy version 2.1.4 2020/04/02 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.4.html
Usage : haproxy [-f <cfgfile|cfgdir>]* [ -vdVD ] [ -n <maxconn> ] [ -N <maxpconn> ]
        [ -p <pidfile> ] [ -m <max megs> ] [ -C <dir> ] [-- <cfgfile>*]
        -v displays version ; -vv shows known build options.
        -d enters debug mode ; -db only disables background mode.
        -dM[<byte>] poisons memory with <byte> (defaults to 0x50)
        -V enters verbose mode (disables quiet mode)
        -D goes daemon ; -C changes to <dir> before loading files.
        -W master-worker mode.
        -q quiet mode : don't display messages
        -c check mode : only check config files and exit
        -n sets the maximum total # of connections (uses ulimit -n)
        -m limits the usable amount of memory (in MB)
        -N sets the default, per-proxy maximum # of connections (0)
        -L set local peer name (default to hostname)
        -p writes pids of all children to this file
        -de disables epoll() usage even when available
        -dp disables poll() usage even when available
        -dS disables splice usage (broken on old kernels)
        -dG disables getaddrinfo() usage
        -dR disables SO_REUSEPORT usage
        -dr ignores server address resolution failures
        -dV disables SSL verify on servers side
        -sf/-st [pid ]* finishes/terminates old pids.
        -x <unix_socket> get listening sockets from a unix socket
        -S <bind>[,<bind options>...] new master CLI
If I list the containers, I get the following:
$ docker container ls -a
Exited (1) 3 minutes ago
I have fixed my problem; in case someone else gets the same one:
You just need to use the full path in your command.
Instead of
$ sudo docker run --name=hapr -p 80:80 -v /haproxy/:/usr/local/etc/haproxy/ haproxy
use
$ sudo docker run --name=hapr -p 80:80 -v /home/ubuntu/haproxy/:/usr/local/etc/haproxy/ haproxy
Also, you should already have haproxy.cfg on your host.
If you check the official HAProxy page on Docker Hub, you can see that you need to have haproxy.cfg at the mounted path (here /home/ubuntu/haproxy/). If not, HAProxy cannot start.
Note that your host's /path/to/etc/haproxy folder should be populated with a file named haproxy.cfg. If this configuration file refers to any other files within that folder then you should ensure that they also exist (e.g. template files such as 400.http, 404.http, and so forth).
See the official HAProxy documentation for details about haproxy.cfg.
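For illustration, a minimal haproxy.cfg could look like the following (a sketch; the backend address 127.0.0.1:8080 is a placeholder, not taken from the question):

$ cat > /home/ubuntu/haproxy/haproxy.cfg <<'EOF'
# Minimal config: one HTTP listener on :80 forwarding to a single backend.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server app1 127.0.0.1:8080
EOF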
To continue, you need to stop and delete the current container:
$ docker stop CONTAINER
$ docker rm CONTAINER
And create it again.
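If the container still exits, docker logs shows the output of the failed start without re-running it in the foreground:

$ sudo docker logs hapr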
I have tried to install Docker on Google Colab in the following ways:
(1) https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
(2) https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
(3) https://colab.research.google.com/drive/10OinT5ZNGtdLLQ9K399jlKgNgidxUbGP
I started the Docker service and checked its status, but it showed 'Docker is not running'. Maybe Docker cannot work on Colab.
I feel confused and want to know the reason.
Thanks
It's possible to run Docker in Colab, but with limited functionality.
There are two methods of running the Docker service: the regular one (more restrictive), and rootless mode (dockerd inside RootlessKit).
dockerd
Install it with:
!apt-get -qq install docker.io
Use the following shell script:
%%shell
set -x
# start the daemon in the background: no network bridge, no iptables rules
dockerd -b none --iptables=0 -l warn &
# wait up to ~10 s for the Docker socket to appear
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker info
docker network ls
docker pull hello-world
docker pull ubuntu
# docker build -t myimage .
docker images
kill $(jobs -p)
As shown above, before each docker command you have to run the Docker service (dockerd) in the background, then kill it. Unfortunately, you have to start dockerd in each cell where you want to run docker commands.
Notes on dockerd arguments:
-b none / --bridge none - disables the network bridge to avoid errors.
--iptables=0 - disables the addition of iptables rules to avoid errors.
-D - add this to enable debug mode.
However, in this mode, running most containers will generate errors related to the read-only file system.
Additional notes:
To disable cpuset support, run: !umount -vl /sys/fs/cgroup/cpuset.
Related issue: https://github.com/docker/for-linux/issues/1124.
Here are some example notebooks:
https://colab.research.google.com/drive/1Lmbkc7v7XjSWK64E3NY1cw7iJ0sF1brl
https://colab.research.google.com/drive/1RVS5EngPybRZ45PQRmz56PPdz9nWStIb (without cpuset support)
Rootless dockerd
Rootless mode allows running the Docker daemon and containers as a non-root user.
To install, use the following code:
%%shell
useradd -md /opt/docker docker
apt-get -qq install iproute2 uidmap
sudo -Hu docker SKIP_IPTABLES=1 bash < <(curl -fsSL https://get.docker.com/rootless)
To run dockerd service, there are two methods: using a script (dockerd-rootless.sh) or running rootlesskit directly.
Here is the script which uses dockerd-rootless.sh to run a hello-world container:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
/opt/docker/bin/dockerd-rootless.sh --experimental --iptables=false --storage-driver vfs &
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
# pass this script's arguments through to docker run
docker run "$@"
jobs -p
kill $(jobs -p)
To run the above script:
!sudo -Hu docker bash -x docker-run.sh hello-world
The above may generate the following warnings:
WARN[0000] failed to mount sysfs, falling back to read-only mount: operation not permitted
To remount some folders with write access, you can try:
!mount -vt sysfs sysfs /sys -o rw,remount
!mount -vt tmpfs tmpfs /sys/fs/cgroup -o rw,remount
[rootlesskit:child ] error: executing [[ip tuntap add name tap0 mode tap] [ip link set tap0 address 02:50:00:00:00:01]]: exit status 1
The above error is related to the dockerd-rootless.sh script, which adds extra network parameters to rootlesskit, such as:
--net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin
This has been reported at https://github.com/rootless-containers/rootlesskit/issues/181 (but was ignored).
To work around the above problem, we can pass our own arguments to rootlesskit by using the following file instead:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
rootlesskit --debug --disable-host-loopback --copy-up=/etc --copy-up=/run /opt/docker/bin/dockerd -b none --experimental --iptables=false --storage-driver vfs &
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
# pass this script's arguments through to docker
docker "$@"
jobs -p
kill $(jobs -p)
Then run it as:
!sudo -Hu docker bash docker-run.sh run --cap-add SYS_ADMIN hello-world
Depending on your image, this may generate the following error:
process_linux.go:449: container init caused "join session keyring: create session key: operation not permitted": unknown.
This could be solved by !sysctl -w kernel.keys.maxkeys=500; however, Colab doesn't allow it. Related: Error response from daemon: join session keyring: create session key: disk quota exceeded.
A notebook showing the above:
https://colab.research.google.com/drive/1oRja4v-PtY6lFMJIIF79No4s3s-vbqd4
Suggested further reading:
Finding the minimal set of privileges for a docker container.
I had the same issue as you, and apparently Docker is not supported in Google Colab, according to the answers on this issue from its GitHub repository: https://github.com/googlecolab/colabtools/issues/299#issuecomment-615308778.
I know it is an old question, but here is an old answer (2020) by a member of the Google Colaboratory team:
this isn't possible, and we currently have no plans to support this.
The virtualization/isolation that Docker provides is already available in Colab, since each Colab session is an isolated environment by itself: you install the required libraries, and the hardware is abstracted away (Colab offers a free GPU, selectable at runtime). I have used conda, and when I switched to Docker there was a distinct difference in performance: Docker never had GPU memory fragmentation, while conda (bare metal) did. I have been using single Colab sessions for training in TF2 and will soon add testing and monitoring sessions (using TensorBoard), so I can fully judge whether having Docker in Colab is worthwhile. I will come back and post my feedback soon.
I'm on OSX and I've got Docker for Mac installed.
On OSX, Docker runs its containers inside a little hypervisor; we can see this from a process listing:
❯ ps awux | grep docker
bryanhunt 512 1.8 0.2 10800436 34172 ?? S Fri11am 386:09.03 com.docker.hyperkit -A -u -F vms/0/hyperkit.pid -c 8 -m 6144M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=s50,uuid=c0fac0ff-fb9a-473f-bf44-43d7abdc701d -U 05c2af3a-d417-43fd-b0d4-9d443577f207 -s 2:0,ahci-hd,/Users/bryanhunt/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw -s 3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525 -s 4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-for-mac.iso -s 5,ahci-cd,vms/0/config.iso -s 6,virtio-rnd -s 7,virtio-9p,path=s51,tag=port -l com1,autopty=vms/0/tty,asl -f bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,
bryanhunt 509 0.0 0.1 558589408 9608 ?? S Fri11am 0:30.26 com.docker.driver.amd64-linux -addr fd:3 -debug
Note how it runs the VM from an ISO image, /Applications/Docker.app/Contents/Resources/linuxkit/docker-for-mac.iso. This is probably a good idea, because things would get tricky if users tampered with the VM image; however, in this case that's exactly what I want to do.
I can get inside the Docker VM by running a privileged container which executes the nsenter utility in order to enter the host process space.
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
So everything is good, and I can move on to the next stage: installing and running plotnetcfg.
plotnetcfg creates very nice Graphviz diagrams of networking configuration, and that is what I'd like to do: analyze the networking configuration inside the Docker VM (it's Alpine Linux, BTW).
Here's an example of the sort of diagram plotnetcfg can generate :
That's my actual goal - to visualize Docker networking configuration for a hackathon.
Now, finally, the description of the problem.
The root filesystem is an iso9660 mount.
/ # mount |grep iso
/dev/sr0 on / type iso9660 (ro,relatime)
Is there a way to remount root, using an aufs stacked filesystem or any other means, so that I can update the system packages, download, compile and execute the plotnetcfg utility, and finally export the generated Graphviz dot file to render it elsewhere?
For the question (root is mounted as a read-only iso9660 filesystem; how can I remount it as a rw overlay?):
The answer is: there is no way to remount it as rw, but tmpfs /tmp and shm /dev/shm are writable if you really need to put something there temporarily.
For the things you want to do:
With docker run you can already access the Docker VM's network.
You don't need to modify the host to change the network; you can just add --privileged -v /dev:/dev to docker run, then install packages inside the container and create the interfaces you want:
docker run --rm -it --privileged -v /dev:/dev wener/base ifconfig
For example, you can create a tap or tun device in a container; I use tinc in a container to create a host VPN.
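Applied to the original goal, something along these lines should work (a sketch: it assumes plotnetcfg and graphviz are installable from the image's package repositories, as they are in Debian, and that host networking plus the host PID space give plotnetcfg enough visibility):

docker run --rm --privileged --net=host --pid=host debian bash -c \
  'apt-get update -qq && apt-get install -qqy plotnetcfg graphviz >/dev/null && plotnetcfg | dot -Tsvg' > net.svg

Because the diagram is written to stdout, the redirect lands on your Mac, which also covers the "export the generated file and render it elsewhere" part.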
Processes in Docker containers are still running under the host's UIDs, although I have enabled user-namespace remapping.
The OS is Ubuntu 16.04 on kernel 4.4.0-21, with:
> sudo docker --version
Docker version 1.12.0, build 8eab29e
The dockerd configuration is:
> grep "DOCKER_OPTS" /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ipv6 --userns-remap=default"
The subordinate UID and GID mappings were created when I ran dockerd manually, i.e., with the above options string:
> grep "dock" /etc/sub*
/etc/subgid:dockremap:362144:65536
/etc/subuid:dockremap:362144:65536
However, the subordinate UIDs/GIDs did not get created when I (re)started dockerd as a service; that only happened when I ran it manually.
Also, after restarting dockerd, processes in containers are not in the remapped range but map 1:1 to the host, i.e., a container root process still has UID=0.
E.g., a test container running just top
> sudo docker run -t -i ubuntu /usr/bin/top
...
has top running as UID=0 when checked outside the container, on the host:
> ps -xaf --forest -o pid,ruid,ruser,cmd | grep top
PID RUID RUSER CMD
23015 0 root | \_ sudo docker run -t -i ubuntu /usr/bin/top
23016 0 root | \_ docker run -t -i ubuntu /usr/bin/top
Apparently, the remapping to subordinate UIDs is not working for me when running docker as a daemon?
/etc/default/docker is not used when running dockerd via systemd.
Thus, any changes I made to the Docker config (after the dist-upgrade I had applied before) were not picked up.
For configuring the Docker daemon under systemd, see the documentation at
https://docs.docker.com/engine/admin/systemd/
with the configuration drop-in file(s) going into
/etc/systemd/system/docker.service.d
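A sketch of such a drop-in (the file name userns.conf is arbitrary, the flags are the ones from DOCKER_OPTS above, and the dockerd path may differ on your system):

$ cat > /etc/systemd/system/docker.service.d/userns.conf <<'EOF'
[Service]
# The empty ExecStart= clears the packaged command line before replacing it.
ExecStart=
ExecStart=/usr/bin/dockerd --dns 8.8.8.8 --dns 8.8.4.4 --ipv6 --userns-remap=default
EOF
$ systemctl daemon-reload
$ systemctl restart docker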
I'm trying to minimize damage made by fork bombs inside of a docker container.
I'm using pam_limits, and my /etc/security/limits.conf file is:
1000:1128 hard nproc 40
1000:1128 soft nproc 40
This means that any user with a UID in the range [1000..1128] can have at most 40 processes. This works fine if I run the fork bomb in a shell as a user with such a UID.
But when I run the fork bomb inside a Docker container, these limits are not applied, so when I run:
# docker run -u 1000 ubuntu bash -c ":() { : | : & }; :; while [[ true ]]; do sleep 1; done"
I get as many processes as the system can spawn, and all of them belong to the user with UID=1000.
What's wrong? How can I fix it?
When running a container, there is an option to limit the number of pids:
--pids-limit: Tune container pids limit (set -1 for unlimited)
The command would be:
docker container run --pids-limit 100 your-image
Reference: https://docs.docker.com/engine/reference/commandline/run/#options
This is not related to PAM, but you can limit the Docker container with the docker create command. For example, the Enduro/X project uses some IPC queue limits, and in the same way you may set other ulimit settings; for the number of processes it would be --ulimit nproc=256:512, i.e., soft limit and hard limit.
So for example:
$ sudo docker create --name bankapp-inst -it \
--sysctl fs.mqueue.msg_max=10000 \
--sysctl fs.mqueue.msgsize_max=1049600 \
--sysctl fs.mqueue.queues_max=10000 \
--ulimit msgqueue=-1 \
--ulimit nproc=256:512 \
bankapp
So after this nproc setting, no more than 256 processes can be spawned, and if the soft ulimit is raised, the upper limit is 512 processes. Hope this helps!
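A quick way to verify the limit from inside a container (a sketch, assuming the stock ubuntu image):

$ sudo docker run --rm --ulimit nproc=256:512 ubuntu bash -c 'ulimit -Su; ulimit -Hu'
256
512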