I need to set ulimits on the container. For example, docker run --ulimit memlock="-1:-1" <image>. However, I'm not sure how to do this when deploying a container-optimised VM on Compute Engine as it handles the startup of the container.
I'm able to deploy a VM with options like --privileged, -e for environment variables, and even an overridden CMD. How can I deploy a VM with ulimits set for the container?
I received an official reply:
Unfortunately the Containers on Compute Engine feature does not currently support setting the ulimit options for containers.
A workaround would be to set ulimit inside the container. For example:
gcloud beta compute instances create-with-container INSTANCE \
--zone=ZONE \
--container-image=gcr.io/google-containers/busybox \
--container-privileged \
--container-command=sh \
--container-arg=-c \
--container-arg='ulimit -n 100000'
Unfortunately this method requires running the container as privileged.
Best regards,...
This reply gave me the inspiration to do the following: create a wrapper script that your Docker image's ENTRYPOINT refers to. Within this wrapper script, set the ulimit(s) before starting the process(es) they apply to.
As a quick example:
$HOME/example/wrapper.sh
#!/bin/bash
# set memlock to unlimited
ulimit -l unlimited
# start the elasticsearch node
# (entrypoint found in the base image's Dockerfile on GitHub)
/usr/local/bin/docker-entrypoint.sh eswrapper
$HOME/example/Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:6.3.2
COPY wrapper.sh .
RUN chmod +x wrapper.sh
ENTRYPOINT ./wrapper.sh
local image build
docker image build -t gcr.io/{GCLOUD_PROJECT_ID}/example:0.0.0 $HOME/example
deploy to gcr.io
docker push gcr.io/{GCLOUD_PROJECT_ID}/example:0.0.0
create an instance via gcloud
gcloud beta compute instances create-with-container example-instance-1 \
--zone us-central1-a \
--container-image=gcr.io/{GCLOUD_PROJECT_ID}/example:0.0.0 \
--container-privileged \
--service-account={DEFAULT_COMPUTE_ENGINE_SERVICE_ACC_ID}-compute@developer.gserviceaccount.com \
--metadata=startup-script="echo 'vm.max_map_count=262144' > /etc/sysctl.conf; sysctl -p;"
Note the following:
- The startup script above is only necessary for running a container of this particular image (Elasticsearch requires a raised vm.max_map_count).
- The service account is necessary for pulling from your private Google Container Registry.
- The --container-privileged argument is essential, as running the container as privileged is required to set ulimits within it.
verifying ulimits are set for your process(es)
On the VM host, run ps -e and find the PID(s) of the process(es) started by your wrapper script. In this case, look for the PID whose command is java. For each PID, run cat /proc/{PID}/limits. Here only memlock was set to unlimited, and you can see that it is indeed unlimited.
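As a sketch, the verification on the host looks like this (the PID value is illustrative):
$ ps -e | grep java
1234 ?        00:00:30 java
$ cat /proc/1234/limits | grep "Max locked memory"
Max locked memory         unlimited            unlimited            bytes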
Setting ulimits isn't mentioned in the documentation for creating a Container-Optimized OS instance, nor in the doc on Configuring Options to Run a Container. Automatically setting ulimits for containers when deploying a container-optimised VM currently doesn't appear to be supported (see the docs here and here). You can submit a feature request for that here under 'Compute'.
However, you can run containers yourself on a Container-Optimized OS (COS) instance. That way you can pass ulimit settings to docker run, as described here.
I have successfully used the following.
From within the VM, or from a startup script for the Container-Optimized OS:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
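With that in place, the --ulimit flag of docker run works as usual on the COS instance; for example (reusing the Elasticsearch image from the answer above):
docker run -d --ulimit memlock=-1:-1 \
docker.elastic.co/elasticsearch/elasticsearch:6.3.2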
Related
I'm working with a poor internet connection and trying to pull and run an image.
I wanted to download one layer at a time, and per the documentation tried adding a flag, --max-concurrent-downloads, like so:
docker run --rm -p 8787:8787 -e PASSWORD=blah --max-concurrent-downloads=1 rocker/verse
But this gives an error:
unknown flag: --max-concurrent-downloads
See 'docker run --help'.
I tried typing docker run --help and interestingly did not see the option --max-concurrent-downloads.
I'm using Docker Toolbox since I'm on an old Mac.
Over here there's an option for --max-concurrent-downloads, however this doesn't appear in my terminal when I type docker run --help.
How can I change the default of downloading 3 layers at a time to just one?
From the official documentation: (https://docs.docker.com/engine/reference/commandline/pull/#concurrent-downloads)
You can pass --max-concurrent-downloads during a pull operation.
You can set --max-concurrent-downloads with the dockerd command.
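In other words, the option belongs to the daemon rather than the client. On a Linux host where you start the daemon yourself, it would look like this (a sketch):
dockerd --max-concurrent-downloads=1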
If you're using the Docker Desktop GUI for Mac or Windows, you can edit the daemon .json file directly in the Docker Engine settings.
This setting needs to be passed to dockerd when starting the daemon, not to the docker client CLI. The dockerd process is running inside of a VM with docker-machine (and other docker desktop environments).
With docker-machine that is used in toolbox, you typically pass the engine flags on the docker-machine create command line, e.g.
docker-machine create --engine-opt max-concurrent-downloads=1 <machine-name>
Once you have a created machine, you can follow the steps from these answers to modify the config of an already running machine. Mainly:
SSH into your local docker VM (note: if 'default' is not the name of your docker machine, substitute 'default' with your docker machine name):
$ docker-machine ssh default
Open the Docker profile:
$ sudo vi /var/lib/boot2docker/profile
Then in that profile, add your max-concurrent-downloads=1 engine option.
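For illustration, docker-machine-generated profiles pass extra daemon flags through an EXTRA_ARGS variable, so the edit would look roughly like this (the rest of the file will differ on your machine):
EXTRA_ARGS='
--label provider=virtualbox
--max-concurrent-downloads=1
'
Afterwards, restart the daemon inside the VM with sudo /etc/init.d/docker restart.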
Newer versions of Docker Desktop (along with any Linux install) make this much easier with a configuration menu, Daemon -> Advanced, where you can specify your daemon.json entries like:
{
"max-concurrent-downloads": 1
}
I happened across some helpful information that clued me in to the fact that there is a built-in environment variable, $HOSTNAME, that can be used in a Dockerfile. In a fair amount of searching, I was unable to find a comprehensive list of such built-in variables. The Dockerfile reference explains how to use the ENV command to modify environment variables, but I have no need for that right now. I just want to know what's available by default. Is there any official documentation of this? I would think there should be, and that some searches on HOSTNAME would point me to it, but no dice.
I just want to know what's available by default.
It depends on each image. You can see which variables are defined in each one by doing this:
docker run <image> env
Or:
docker inspect <image> -f '{{.Config.Env}}'
For instance:
$ docker run ubuntu env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=71fc7d5db1f2
no_proxy=*.local, 169.254/16
HOME=/root
$ docker inspect ubuntu -f '{{.Config.Env}}'
[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]
Or:
$ docker run node env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=42bbb311714a
no_proxy=*.local, 169.254/16
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=7.10.0
YARN_VERSION=0.24.4
HOME=/root
$ docker inspect node -f '{{.Config.Env}}'
[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NPM_CONFIG_LOGLEVEL=info NODE_VERSION=7.10.0 YARN_VERSION=0.24.4]
PS: You can do the same with running containers:
docker inspect <container-id> -f '{{.Config.Env}}'
docker exec <container-id> env
I'm guessing most of that happens at https://github.com/moby/moby/blob/34536c498d56a0c74fab08bd434407ac4707c971/container/container_unix.go#L57-L72. I wouldn't say that $HOSTNAME is a Docker-specific thing; it is common in most Linux distributions, and a lot of scripts and shells use it. Since Docker isn't running a full init system that would set the hostname variable at startup (such as /etc/init.d/hostname.sh on Ubuntu), it makes sure the variable is set for you.
It looks like Docker also sets a default $PATH, and $TERM if you specify a tty (-t). In addition to the environment variables you can specify yourself, you also get a bunch of environment variables when you use --link to link another container (a now deprecated feature). See https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#environment-variables.
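A quick way to see those defaults in action (the values will differ on your machine):
$ docker run --rm --hostname custom-host ubuntu env | grep HOSTNAME
HOSTNAME=custom-host
$ docker run --rm -t ubuntu env | grep TERM
TERM=xterm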
Running out of entropy in virtualized Linux systems seems to be a common problem (e.g. /dev/random Extremely Slow?, Getting linux to buffer /dev/random). Even when a hardware random number generator (HRNG) is used, running an entropy gathering daemon like HAVEGED is often suggested. However, an entropy gathering daemon (EGD) cannot be run inside a Docker container; it must be provided by the host.
Using an EGD works fine for Docker hosts based on Linux distributions like Ubuntu, RHEL, etc. Getting such a daemon to work inside boot2docker - which is based on Tiny Core Linux (TCL) - seems to be another story. Although TCL has an extension mechanism, an extension for an entropy gathering daemon doesn't seem to be available.
So an EGD seems like a proper solution for running Docker containers in a (production) hosting environment, but how can it be solved for development/testing in boot2docker?
Since running an EGD in boot2docker seemed too difficult, I thought about simply using /dev/urandom instead of /dev/random. Using /dev/urandom is a little less secure, but still fine for most applications that are not generating long-term cryptographic keys. At least it should be fine for development/testing inside boot2docker.
I just realized that it is as simple as mounting /dev/urandom from the host as /dev/random into the container:
$ docker run -v /dev/urandom:/dev/random ...
The result is as expected:
$ docker run --rm -it -v /dev/urandom:/dev/random ubuntu dd if=/dev/random of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00223239 s, 459 kB/s
At least I know how to build my own boot2docker images now ;-)
The most elegant solution I've found is running Haveged in a separate container:
docker pull harbur/haveged
docker run --privileged -d harbur/haveged
Check whether enough entropy is available:
$ cat /proc/sys/kernel/random/entropy_avail
2066
Another option is to install the rng-tools package and point it at /dev/urandom:
yum install rng-tools
rngd -r /dev/urandom
With this, I didn't need to map any volume into the docker container.
Since I didn't want to modify my Docker containers for development/testing, I tried modifying the boot2docker image instead. Luckily, the boot2docker image is built with Docker and can easily be extended. So I've set up my own Docker build, boot2docker-urandom. It extends the standard boot2docker image with a udev rule found here.
Building your own boot2docker.iso image is as simple as:
$ docker run --rm mbonato/boot2docker-urandom > boot2docker.iso
To replace the standard boot2docker.iso that comes with boot2docker you need to:
$ boot2docker stop
$ boot2docker delete
$ mv boot2docker.iso ~/.boot2docker/
$ boot2docker init
$ boot2docker up
Limitations: from inside a Docker container, /dev/random still blocks, most likely because Docker containers do not use the host's /dev/random directly but the corresponding kernel device, which still blocks.
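A quick check of that limitation (a sketch): the run below exits with status 124 if dd is still blocked on /dev/random when the 5-second timeout fires:
$ docker run --rm ubuntu timeout 5 dd if=/dev/random of=/dev/null bs=1 count=1024
$ echo $?
124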
Alpine Linux may be a better choice for a lightweight Docker host. Alpine LXC & Docker images are only 5 MB (versus 27 MB for boot2docker).
I use haveged on Alpine for LXC guests & on Debian for docker guests. It gives enough entropy to generate gpg / ssh keys & openssl certificates in containers. Alpine now has an official docker repo.
Alternatively build a haveged package for Tiny Core - there is a package build system available.
If you have this problem in a Docker container created from a self-built image that runs a Java app (e.g. created FROM tomcat:alpine) and you don't have access to the host (e.g. on a managed k8s cluster), you can add the following command to your Dockerfile to use non-blocking seeding of SecureRandom:
RUN sed -i.bak \
-e "s/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g" \
-e "s/securerandom.strongAlgorithms=NativePRNGBlocking/securerandom.strongAlgorithms=NativePRNG/g" \
$JAVA_HOME/lib/security/java.security
The two regular expressions replace file:/dev/random with file:/dev/urandom and NativePRNGBlocking with NativePRNG in the file $JAVA_HOME/lib/security/java.security, which lets Tomcat start up reasonably fast on a VM. I haven't checked whether this also works on non-Alpine-based OpenJDK images, but if the sed command fails, just check the location of the java.security file inside the container and adapt the path accordingly.
Note: in JDK 11 the path has changed to $JAVA_HOME/conf/security/java.security.
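To verify the edit took effect in the built image, something like this should do (my-app-image is a placeholder for your image name; adjust the path for your JDK):
docker run --rm my-app-image sh -c \
'grep "^securerandom" $JAVA_HOME/lib/security/java.security'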
I added something to /etc/security/limits.conf in a Docker container to limit the max number of user processes for user1, but when I run bash in the container as user1, ulimit -a doesn't reflect the limits defined in the PAM limits file (/etc/security/limits.conf).
How can I get this to work?
I've also added the line session required pam_limits.so to /etc/pam.d/common-session, so that's not the problem.
I start the docker container with something like sudo docker run --user=user1 --rm=true <container-name> bash
Also, sudo docker run ... --user=user1 ... cmd doesn't apply the PAM limits, but sudo docker run ... --user=root ... su user1 -c 'cmd' does.
/etc/security/limits.conf is just a file that the PAM infrastructure reads when a login session is opened. A Docker container does not go through PAM session setup when it starts, so none of those session-time initializations apply to the container. You have to use the ulimit shell builtin directly to set the process limits.
A better way to do that would be to use container limits; unfortunately, the current version of docker doesn't support limits on the number of processes. It looks like support will be coming in version 1.6 when it comes out, as @thaJeztah has mentioned.
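For reference, that support did land: on current Docker releases the limit can be set per container right from the run command, e.g.:
# soft limit 64, hard limit 128 (note: the kernel counts nproc per UID, not per container)
sudo docker run --rm --ulimit nproc=64:128 --user=user1 <container-name> sh -c 'ulimit -u'
64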