Limit disk size and bandwidth of a Docker container - docker

I have a physical host machine with Ubuntu 14.04 running on it. It has 100G disk and 100M network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G disk and 10M network bandwidth.
After going through the official documentation and searching the Internet, I still can't find a way to allocate a specific amount of disk space and network bandwidth to a container.
I think this may not be possible with Docker directly; maybe we need to bypass it. Does this mean we should use something "underlying", such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate a different size (such as 20G, 30G, etc.) to each container? You said it is hardcoded in Docker, so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, it gives the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-longid 9.8G 276M 9.0G 3% /
/dev/mapper/Chris--vg-root 27G 5.5G 20G 22% /etc/hosts
From the above, I think the maximum disk a container can use is still larger than 10G. What do you think?

I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: you could tell Docker to use the devicemapper storage backend instead of AuFS. This way each container would run on a block device (devicemapper dm-thin target) limited to 10GB (this is the Docker default, which luckily matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size with --storage-opt dm.basesize=20G (which would apply to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
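For example, a rough sketch (assuming a Docker version that accepts dm.basesize; the image is just the one from the question):
docker -d --storage-driver=devicemapper --storage-opt dm.basesize=20G
docker run -i -t training/webapp df -h /
Each newly created container should then report a rootfs of roughly 20G; existing images and containers may need to be recreated for the new base size to take effect.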
About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class:
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom Open vSwitch bridge and VLANs for networking, so bandwidth limitation is handled differently and is somewhat easier), but I think you'll have to create and configure a different class for each container.
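If it helps, here is a rough host-side sketch of what that could look like, assuming eth0 is the interface the container traffic leaves through and reusing the classid 0x00100001 from above (which maps to class 10:1); it is untested and only meant as a starting point:
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup
The cgroup filter classifies packets by the net_cls classid set on the container's cgroup, so everything started with that --lxc-conf line gets capped at 10mbit.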
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ....

Newer Docker releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb
More info here:
https://blog.docker.com/2016/02/docker-1-10/
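As a small sketch, both directions can be throttled together (the image name here is only an example, and the limit applies to I/O that actually hits /dev/sda):
docker run -it --device-read-bps=/dev/sda:10mb --device-write-bps=/dev/sda:10mb ubuntu /bin/bash
Inside the container, something like dd if=/dev/zero of=/testfile bs=1M count=100 oflag=direct should then be throttled to about 10 MB/s.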

If you have access to the containers, you can use tc for bandwidth control within them.
E.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to get a rate of 240 kbit/s with a 300 kbit burst and 50 ms latency.
You also need to pass --cap-add=NET_ADMIN to the docker run command if you are not running the container in privileged mode.
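A minimal sketch of how that fits together (the image name and script path are placeholders):
docker run --cap-add=NET_ADMIN your/image /bin/entrypoint.sh
where entrypoint.sh runs the tc command above before exec'ing the real process.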

1) Is it possible to allocate a different size (such as 20G, 30G, etc.) to each container? You said it is hardcoded in Docker, so it seems impossible.
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.

Related

Increase the RAM of a docker container itself (NOT DOCKER)

Hey guys, I am running WSL 2 with a Docker container on it, but the Docker container itself (it's a KVM running a QEMU VM) is limited to 4 GB. I need a lot more than 4 GB: at least 8 GB to run the things I want to run in the QEMU VM. (There's a reason why I am running QEMU, and no, I cannot go without it.)
I am running Docker Desktop, and if I inspect the Docker container it says the following.
I have edited the .wslconfig file and set the limit to 20GB, as well as setting the swap file to 1, and I have tried a command with docker run insert_docker_name_here it --memory 8000 -m,
but that then says it can't find the Docker container for some reason, even though it's there under Docker Desktop.
I have tried looking on the Internet for an answer, but everything seems to point to the .wslconfig file, --memory, or some vague answer that doesn't help at all. Is there a way I can edit my Docker container file and set it to use 8GB or more?
Please help - I am new to Docker and would appreciate the assistance.
There is no memory limit on docker containers by default. Using --memory will only specify the upper limit of memory the container can use. You need to examine how the docker container is started and remove any limit there.
Also the WSL2 has no memory limit by default and will just grab as much memory as it needs. The value in .wslconfig is also the upper limit. If you just remove all the limits, it should just use all available memory.
This leaves QEMU itself. Have you checked what the guest RAM size is? (-m parameter on the commandline)
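As a sketch of both halves (the image name and sizes are only illustrative, not taken from your setup):
docker run -it --memory=12g my-qemu-image
qemu-system-x86_64 -enable-kvm -m 8192 -hda guest.img
The first line raises the container cap (or simply omit --memory to remove it); the second gives the guest 8 GB of RAM via QEMU's -m flag.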

Cannot get more than 100% CPU usage for a Docker container

I am trying to get my container to run on multiple CPU's.
To achieve this on Windows, I changed .wslconfig to have:
[wsl2]
memory= 8GB
processors=4
Using docker stats, I can see the available RAM reduced from 12.xx to 7.764, so I can see this file has changed the behaviour.
However, if I run my container with the following command:
docker run -d --cpus="4" "CONTAINERNAME"
and then I check the stats using docker stats, I still see the container using 100% CPU at maximum. Since the container has more CPUs available, I was expecting this to now be able to go above 100%.
What am I doing wrong?

Increasing the memory allocation to docker daemon (dockerd) on Linux [duplicate]

I've seen that on Windows and Mac it's very easy to change the RAM containers are given - you just go into the GUI. But how do you do this on Linux, where it's a CLI instead of a GUI?
The Docker docs mention an -m flag, but this flag doesn't give any response (just prints the entirety of the help output again) so I don't know whether it worked. It also seems specific to containers, whereas I'd like to change the global default.
Lastly, is there a way to check the current default RAM, so I can make sure whatever I do in the end actually worked?
On native Linux, Docker can use all available host memory. It uses a lightweight kernel-based isolation mechanism that generally shares resources like CPU cores, memory, and (on modern installations) disk space with the host, using standard kernel mechanisms. There isn't a control or setting to limit or increase this.
On other platforms Docker runs a hidden Linux VM to be able to run a Linux kernel to use these isolation mechanisms, and the Docker Desktop memory control affects the memory allocation for that VM.
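What you can do on native Linux is cap individual containers; as a small example (the image name is just a placeholder):
docker run -it -m 512m --memory-swap 512m ubuntu /bin/bash
With no per-container limit, the effective upper bound is the host's total memory, which docker info reports as "Total Memory".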
This is how I "check" the Docker container memory:
Open the linux command shell and -
Step 1: Check what containers are running.
docker ps
Step 2: Note down the 'CONTAINER ID' of the container you want to check and issue the following command:
docker container stats <containerID>
eg:
docker container stats c981
This will give an output like:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c981c9482284 registry 0.00% 4.219MiB / 1.944GiB 0.21% 9.66kB / 0B 0B / 0B 14
'MEM USAGE / LIMIT' column will give you the actual memory usage and default memory allocated.
Note : press ctrl+c to come out of the view and back to command prompt.

Does it make sense to run a command with ionice in docker container with a bind mount

Let's say I'm archiving files from a bind mount in a docker container and want to set a priority for disk time, so I run it like:
ionice -c2 -n7 tar....
Would that work? Would the host OS consider the class/priority that comes from a container (cgroups)?
Tests showed that ionice did not have any significant effect on tar performance; instead, I use Docker's RAM/CPU container limits.
That should make sense, provided that:
this tar command is not bottlenecked on CPU (for gzip or similar), and
your system is running the CFQ IO scheduler for the disk(s) you are using, according to the notes section of man ionice.
To check your systems configuration try:
for SCHEDULER in /sys/block/*/queue/scheduler; do echo "For disk: ${SCHEDULER}"; cat $SCHEDULER; done
To get an overview of disks and file-systems use:
lsblk
For other possible solutions using Docker 1.10 features, see this answer.

Usage of loopback devices is strongly discouraged for production use

I want to test Docker on my CentOS 7.1 box, and I got this warning:
[root@docker1 ~]# docker run busybox /bin/echo Hello Docker
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Hello Docker
I want to know the reason and how to suppress this warning.
The CentOS instance is running in virtualbox created by vagrant.
The warning message occurs because your Docker storage configuration is using a "loopback device" -- a virtual block device such as /dev/loop0 that is actually backed by a file on your filesystem. This was never meant as anything more than a quick hack to get Docker up and running quickly as a proof of concept.
You don't want to suppress the warning; you want to fix your storage configuration such that the warning is no longer issued. The easiest way to do this is to assign some local disk space for use by Docker's devicemapper storage driver and use that.
If you're using LVM and have some free space available on your volume group, this is relatively easy. For example, to give docker 100G of space, first create a data and metadata volume:
# lvcreate -n docker-data -L 100G /dev/my-vg
# lvcreate -n docker-metadata -L1G /dev/my-vg
And then configure Docker to use this space by editing /etc/sysconfig/docker-storage to look like:
DOCKER_STORAGE_OPTIONS=-s devicemapper --storage-opt dm.datadev=/dev/my-vg/docker-data --storage-opt dm.metadatadev=/dev/my-vg/docker-metadata
If you're not using LVM or don't have free space available on your VG, you could expose some other block device (e.g., a spare disk or partition) to Docker in a similar fashion.
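For example, if /dev/sdb1 and /dev/sdb2 happened to be spare data and metadata partitions, the same file could look like this (the device names are purely hypothetical):
DOCKER_STORAGE_OPTIONS=-s devicemapper --storage-opt dm.datadev=/dev/sdb1 --storage-opt dm.metadatadev=/dev/sdb2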
There are some interesting notes on this topic here.
Thanks. This was driving me crazy. I thought bash was outputting this message; I was about to submit a bug against bash. Unfortunately, none of the options presented are viable on a laptop or similar machine where the disk is fully utilized. Here is my answer for that scenario.
Here is what I used in the /etc/sysconfig/docker-storage on my laptop:
DOCKER_STORAGE_OPTIONS="--storage-opt dm.no_warn_on_loop_devices=true"
Note: I had to restart the docker service for this to have an effect. On Fedora the command for that is:
systemctl stop docker
systemctl start docker
There is also just a restart command (systemctl restart docker), but it is a good idea to check to make sure stop really worked before starting again.
If you don't mind disabling SELinux in your containers, another option is to use overlay. Here is a link that describes that fully:
http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
In summary for /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --log-driver=journald'
and for /etc/sysconfig/docker-storage:
DOCKER_STORAGE_OPTIONS=-s overlay
When you change the storage type, restarting Docker will destroy your complete image and container store. You may as well clean everything up in the /var/lib/docker folder when doing this:
systemctl stop docker
rm -rf /var/lib/docker
dnf reinstall docker
systemctl start docker
In RHEL 6.6 any user with docker access can access my private keys, and run applications as root with the most trivial of hacks via volumes. SELinux is the one thing that prevents that in Fedora and RHEL 7. That said, it is not clear how much of the additional RHEL 7 security comes from SELinux outside the container and how much inside the container...
Generally, loopback devices are fine for instances where the 100GB maximum and slightly reduced performance are not a problem. The only issue I can find is that the Docker store can become corrupted if you hit a disk-full error while running... That can probably be avoided with quotas or other simple solutions.
However, for a production instance it is definitely worth the time and effort to set this up correctly.
100G may be excessive for your production instance. Containers and images are fairly small. Many organizations are running Docker containers within VMs as an additional measure of security and isolation. If so, you might have a fairly small number of containers running per VM, in which case even 10G might be sufficient.
One final note: even if you are using direct LVM, you probably want an additional filesystem for /var/lib/docker. The reason is that the "docker load" command will create an uncompressed version of the images being loaded in this folder before adding them to the data store. So if you are trying to keep it small and light, then explore options other than direct LVM.
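If you do want a separate filesystem there, a rough sketch (the volume group name and size are placeholders; stop Docker and move any existing data first):
# lvcreate -n docker-lib -L 20G /dev/my-vg
# mkfs.xfs /dev/my-vg/docker-lib
# mount /dev/my-vg/docker-lib /var/lib/docker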
@Igor Ganapolsky and @Mincă Daniel Andrei
Check this:
systemctl edit docker --full
If the EnvironmentFile directive is not listed in the [Service] block, then no luck (I also have this problem on CentOS 7), but you can extend the standard systemd unit like this:
systemctl edit docker
[Service]
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/dockerd $OPTIONS
And create a file /etc/sysconfig/docker with content:
OPTIONS="-s overlay --storage-opt dm.no_warn_on_loop_devices=true"
