Setting absolute limits on CPU for Docker containers

I'm trying to set absolute limits on Docker container CPU usage. The CPU shares concept (docker run -c <shares>) is relative, but I would like to say something like "let this container use at most 20ms of CPU time every 100ms". The closest answer I can find is a hint from the mailing list on using cpu.cfs_quota_us and cpu.cfs_period_us. How does one use these settings with docker run?
I don't have a strict requirement for either LXC-backed Docker (e.g. pre-0.9) or later versions; I just need to see an example of these settings being used. Any links to relevant documentation or helpful blogs are very welcome too. I am currently using Ubuntu 12.04, and under /sys/fs/cgroup/cpu/docker I see these options:
$ ls /sys/fs/cgroup/cpu/docker
cgroup.clone_children cpu.cfs_quota_us cpu.stat
cgroup.event_control cpu.rt_period_us notify_on_release
cgroup.procs cpu.rt_runtime_us tasks
cpu.cfs_period_us cpu.shares

I believe I've gotten this working. I had to restart my Docker daemon with --exec-driver=lxc, as I could not find a way to pass cgroup arguments to libcontainer. This approach worked for me:
# Run with absolute limit
sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=50000" -it ubuntu bash
The kernel's CFS bandwidth-control documentation (Documentation/scheduler/sched-bwc.txt) covers these settings.
I briefly confirmed with sysbench that this does introduce an absolute limit: quota/period = 10000/50000 grants 20% of one CPU, and doubling the quota to 20000 (40%) roughly halves the runtime, as shown below:
$ sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=10000" --lxc-conf="lxc.cgroup.cpu.cfs_period_us=50000" -it ubuntu bash
root@302e651c0686:/# sysbench --test=cpu --num-threads=1 run
<snip>
total time: 90.5450s
$ sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=20000" --lxc-conf="lxc.cgroup.cpu.cfs_period_us=50000" -it ubuntu bash
root@302e651c0686:/# sysbench --test=cpu --num-threads=1 run
<snip>
total time: 45.0423s
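For reference, newer Docker versions expose these CFS settings directly on docker run, so the LXC driver detour is no longer needed (verify the flags against your Docker version). A sketch of the question's original goal of "at most 20ms of CPU every 100ms":
# 20000us quota per 100000us period = at most 20% of one CPU
$ docker run --cpu-period=100000 --cpu-quota=20000 -it ubuntu bash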

Related

Can I use docker for installing ubuntu on a Mac?

I'm using a Mac, but I want to learn and use Ubuntu for development, and I don't care about the GUI. I used to use Vagrant and ssh into the machine, but it consumes a lot of my machine's resources. Can I use Docker for the same purpose while also having the isolation (for when I mess things up) of a VM?
First install Docker Desktop for Mac.
Then in a terminal window run: docker run -it --name ubuntu ubuntu:xenial bash
You now have a terminal inside Ubuntu and can do whatever you like.
Note: if you are using bionic (18.04) or newer (ubuntu:bionic or ubuntu:latest), you must run the unminimize command inside the container so that the tools for human interaction get installed.
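A minimal sketch of that flow, assuming a recent image (unminimize asks for confirmation, hence the yes pipe; the prompt shown is illustrative):
$ docker run -it --name ubuntu ubuntu:latest bash
root@container:/# yes | unminimize   # reinstalls man pages and other tools stripped from the minimized image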
To start again after a reboot:
docker start ubuntu
docker exec -it ubuntu bash
If you want to save your changes:
docker commit ubuntu
docker images
Find the unnamed image in the list and tag it:
docker tag <imageid> myubuntu
Then you can run another container using your new image.
docker run -it --name myubuntu myubuntu bash
Or replace the former container:
docker stop ubuntu
docker rm ubuntu
docker run -it --name ubuntu myubuntu bash
Hope it helps
This is one of the few scenarios I wouldn't use Docker for :)
Base images like Ubuntu are heavily stripped-down versions of the full OS. The latest Ubuntu image doesn't have basic tools like ping and curl - that's a deliberate strategy from Canonical to minimise the size of the image, and therefore the attack surface. Typically you'd build an image to run a single app process in a container; you wouldn't SSH in and use ordinary dev tools, so they're not needed. That will make it hard for you to learn Ubuntu, because a lot of the core stuff isn't there.
On the Mac, the best VM tool I've used is Parallels - it manages to share CPU without hammering the battery. VirtualBox is good too, and for either of them you can install full Ubuntu Server from the ISO - 5GB disk and 1GB RAM allocation will be plenty if you're just looking around.
With any hypervisor you can pause VMs so they stop using resources, and checkpoint them to save the image so you can restore back to it later.
Yes, you can.
Try searching Docker Hub for the Ubuntu image of your choice (by version and by who maintains the image).
Most of them are well documented, covering what was used to build them and also how to run them and access/expose resources if needed.
Check the official one here: https://hub.docker.com/_/ubuntu/
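The search can also be done from the command line; a quick sketch:
$ docker search ubuntu   # official images are marked [OK] in the OFFICIAL column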

How can I set negative niceness of a docker process?

I have a test environment for code in a Docker image, which I use by running bash in the container:
me@host$ docker run -ti myimage bash
Inside the container, I launch a program normally by saying
root@docker# ./myprogram
I want the process of myprogram to have a negative niceness (there are valid reasons for this). However:
root@docker# nice -n -7 ./myprogram
nice: cannot set niceness: Permission denied
Given that docker is run by the docker daemon, which runs as root, and I am root inside the container, why doesn't this work, and how can I force a negative niceness?
Note: The docker image is running debian/sid and the host is ubuntu/12.04.
Try adding
--privileged=true
to your run command.
[edit] --privileged=true is the old method. Looks like
--cap-add=SYS_NICE
should work as well.
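A quick way to verify, reusing the image and program from the question (prompt and names are illustrative):
$ docker run --cap-add=SYS_NICE -ti myimage bash
root@container:/# nice -n -7 ./myprogram   # no longer fails with 'Permission denied'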
You could also set the CPU priority of the whole container with -c for CPU shares.
Docker docs: http://docs.docker.com/reference/run/#runtime-constraints-on-cpu-and-memory
CGroups/cpu.shares docs: https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt
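Keep in mind that shares are relative weights, not absolute caps. As a sketch, under contention these two containers would get CPU time in roughly a 2:1 ratio (the busybox loops are just illustrative load):
$ docker run -c 1024 -d busybox sh -c 'while :; do :; done'
$ docker run -c 512 -d busybox sh -c 'while :; do :; done'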

Not enough entropy to support /dev/random in docker containers running in boot2docker

Running out of entropy in virtualized Linux systems seems to be a common problem (e.g. /dev/random Extremely Slow?, Getting linux to buffer /dev/random). In the absence of a hardware random number generator (HRNG), the use of an entropy gathering daemon (EGD) like HAVEGED is often suggested. However, an EGD cannot be run inside a Docker container; it must be provided by the host.
Using an EGD works fine for Docker hosts based on Linux distributions like Ubuntu, RHEL, etc. Getting such a daemon to work inside boot2docker - which is based on Tiny Core Linux (TCL) - seems to be another story. Although TCL has an extension mechanism, an extension for an entropy gathering daemon doesn't seem to be available.
So an EGD seems like a proper solution for running docker containers in a (production) hosting environment, but how to solve it for development/testing in boot2docker?
Since running an EGD in boot2docker seemed too difficult, I thought about simply using /dev/urandom instead of /dev/random. Using /dev/urandom is a little less secure, but still fine for most applications that are not generating long-term cryptographic keys. At least it should be fine for development/testing inside boot2docker.
I just realized that it is as simple as mounting /dev/urandom from the host as /dev/random into the container:
$ docker run -v /dev/urandom:/dev/random ...
The result is as expected:
$ docker run --rm -it -v /dev/urandom:/dev/random ubuntu dd if=/dev/random of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00223239 s, 459 kB/s
At least I know how to build my own boot2docker images now ;-)
The most elegant solution I've found is running Haveged in a separate container:
docker pull harbur/haveged
docker run --privileged -d harbur/haveged
Check whether enough entropy is available:
$ cat /proc/sys/kernel/random/entropy_avail
2066
Another option is to install the rng-tools package and point it at /dev/urandom:
yum install rng-tools
rngd -r /dev/urandom
With this I didn't need to map any volume into the Docker container.
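On a Debian/Ubuntu host the equivalent would presumably be (package name assumed to match; rngd invocation as above):
$ apt-get install -y rng-tools
$ rngd -r /dev/urandom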
Since I didn't want to modify my Docker containers for development/testing, I tried to modify the boot2docker image instead. Luckily, the boot2docker image is built with Docker and can be easily extended. So I've set up my own Docker build, boot2docker-urandom. It extends the standard boot2docker image with a udev rule found here.
Building your own boot2docker.iso image is as simple as:
$ docker run --rm mbonato/boot2docker-urandom > boot2docker.iso
To replace the standard boot2docker.iso that comes with boot2docker you need to:
$ boot2docker stop
$ boot2docker delete
$ mv boot2docker.iso ~/.boot2docker/
$ boot2docker init
$ boot2docker up
Limitation: from inside a Docker container /dev/random still blocks, most likely because the containers do not use the host's /dev/random directly but the corresponding kernel device - which still blocks.
Alpine Linux may be a better choice for a lightweight Docker host. Alpine LXC & Docker images are only 5 MB (versus 27 MB for boot2docker).
I use haveged on Alpine for LXC guests & on Debian for docker guests. It gives enough entropy to generate gpg / ssh keys & openssl certificates in containers. Alpine now has an official docker repo.
Alternatively build a haveged package for Tiny Core - there is a package build system available.
If you have this problem in a Docker container created from a self-built image that runs a Java app (e.g. created FROM tomcat:alpine) and don't have access to the host (e.g. on a managed k8s cluster), you can add the following command to your Dockerfile to use non-blocking seeding of SecureRandom:
RUN sed -i.bak \
-e "s/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g" \
-e "s/securerandom.strongAlgorithms=NativePRNGBlocking/securerandom.strongAlgorithms=NativePRNG/g" \
$JAVA_HOME/lib/security/java.security
The two regular expressions replace file:/dev/random with file:/dev/urandom and NativePRNGBlocking with NativePRNG in $JAVA_HOME/lib/security/java.security, which lets Tomcat start up reasonably fast in a VM. I haven't checked whether this also works on non-Alpine-based openjdk images, but if the sed command fails, just check the location of java.security inside the container and adapt the path accordingly.
Note: in JDK 11 the path has changed to $JAVA_HOME/conf/security/java.security.
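An alternative that avoids patching the file is passing the equivalent system property at JVM startup. For the Tomcat images this could be done via CATALINA_OPTS, which Tomcat's startup scripts honor (adjust for your entrypoint):
# Dockerfile snippet: seed SecureRandom from the non-blocking pool
ENV CATALINA_OPTS="-Djava.security.egd=file:/dev/./urandom"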

pam limits in docker containers aren't working

I added something to /etc/security/limits.conf in a Docker container to limit the max number of user processes for user1, but when I run bash in the container as user1, ulimit -a doesn't reflect the limits defined in the PAM limits file (/etc/security/limits.conf).
How can I get this to work?
I've also added the line session required pam_limits.so to /etc/pam.d/common-session, so that's not the problem.
I start the docker container with something like sudo docker run --user=user1 --rm=true <container-name> bash
Also, sudo docker run ... --user=user1 ... cmd doesn't apply the pam limits, but sudo docker run ... --user=root ... su user1 -c 'cmd' does
/etc/security/limits.conf is just a file that is read by the PAM infrastructure when a login session is created. Processes in a Docker container never go through a PAM login, so none of those settings are applied inside the container. You have to use the ulimit shell builtin directly to set process limits.
A better way would be to use container limits; unfortunately the current version of Docker doesn't support limits on the number of processes. It looks like that support will be coming in version 1.6 when it comes out, as @thaJeztah has mentioned.
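For reference, once that support landed, per-container limits could be set straight from the CLI, e.g. to cap the number of processes (verify the flag against your Docker version):
$ sudo docker run --ulimit nproc=64 --user=user1 --rm -it <container-name> bash
$ ulimit -u   # inside the container: now reports 64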

Start full container in Docker?

According to this GitHub issue it should be possible to start a full container with Upstart, cron, etc. with Docker 0.6 or later, but how do I do that?
I was expecting that
docker run -t -i ubuntu /sbin/init
would work just like
lxc-start -n ubuntu /sbin/init
and I would get a login screen, but instead it displays nothing. I also tried to access it using ssh, but no luck. I'm using the default ubuntu image from Docker index.
docker run ubuntu /sbin/init appears to work flawlessly for me with 0.6.6. You won't get a login screen because Docker only manages the process. Instead, you can use docker ps -notrunc to get the full LXC container ID and then use lxc-attach -n <container_id> to run bash in that container as root. sshd isn't installed in the container, so you can't ssh to it.
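Concretely, the attach step might look like this (a sketch assuming the LXC exec driver of that era; the docker ps flag is spelled --no-trunc on modern clients):
$ CID=$(docker ps -q --no-trunc | head -1)   # full ID of the most recent container
$ sudo lxc-attach -n "$CID" -- /bin/bash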
You can use the ubuntu-upstart image:
docker run -t -i ubuntu-upstart:14.04 /sbin/init
Although this solution is unfortunately deprecated, it is good enough if you need a full OS container that 'drives' like a normal Ubuntu 12.04, 14.04 or 14.10 system today (change the :14.04 bit accordingly). If no version is specified, it defaults to 14.04. I have not used it heavily, and had some issues installing more complicated packages (e.g. dbus!), but it might work for you.
Alas, Ubuntu has switched to systemd in more recent releases. Googling reveals that there seems to be ongoing work to make systemd work in a Docker container without requiring elevated privileges, but it does not seem to be quite ready for prime time. Hopefully it will be ready when 16.04 becomes LTS.
Another option is of course to use phusion/baseimage, but it has its own approach for starting services. It seems better suited to minimal multi-process containers.
