Choose available memory for containers in Rancher

In Rancher, how do I choose the available memory for a Docker container?
On OSX, I can do like so:
VBoxManage modifyvm default --memory 5000
To define the memory available to my docker-machine. How would I achieve this using rancher to set up a host?

If you're adding hosts in a cloud provider (EC2, DigitalOcean, etc) through the Add Host UI then they all have some sort of size option for offering, flavor, RAM, etc depending on the specific provider and their terminology.
Containers themselves have no memory limit by default in Docker. They can use any memory available on the host, and they do not "reserve" any of it, so memory is not all held by a particular container the way it is when you deploy a VM.
There is an option to limit how much memory (+ swap) a container is allowed to use, which is in the Host/Security tab of the service/container definition.
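In plain Docker terms that corresponds to the memory and swap flags on docker run; a minimal sketch (nginx is just an example image, and the sizes are arbitrary):

# Hard-limit the container to 512 MB of RAM and 1 GB of RAM + swap combined.
docker run -d -m 512m --memory-swap 1g nginx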

Related

How to increase disk space and memory limits for docker using console?

I easily managed to do this on the desktop version of Docker via preferences, but how can I do this from the console on a remote Linux server?
The limits you are configuring in the Docker Desktop UI are on the embedded Linux VM. All containers run within that VM, giving you an upper limit on the sum of all containers. To replicate this on a remote Linux server, you would set the physical hardware or VM constraints to match your limit.
For individual containers, you can specify the following:
--cpus to set how much CPU the container's cgroup may use. This can be something like 2.5 to allow the container up to 2.5 CPU threads. Containers attempting to use more CPU will be throttled.
--memory or -m to set the memory limit in bytes. This is applied to the cgroup the container runs within. Containers attempting to exceed this limit will be killed.
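For example, a quick sketch combining both flags (the image name and values here are placeholders):

# Allow this container at most 2.5 CPU threads and 4 GB of RAM.
docker run -d --cpus 2.5 --memory 4g my-app:latest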
Disk space for containers and images is controlled by the disk space available to /var/lib/docker for the default overlay2 graph driver. You can limit this by placing the directory under a different drive/partition with limited space. For volume mounts, disk space is limited by where the volume mount is sourced, and the default named volumes go back to /var/lib/docker.

Default per-container memory limit

I would like to have a default memory limit for each Docker container.
I know I can use --memory when running a container. The problem is that the host is shared by many developers, and I can't expect everybody to remember to use it.
I want containers run without an explicit --memory parameter to be limited to e.g. 4 GB instead of being able to see the whole host's memory.
I tried to setup CGroup limits as described in https://stackoverflow.com/a/46557336/1237617. The problem is that it's a limit on total memory used by all containers.
Can I setup a per-container memory limit?
I was able to achieve this by adding a proxy in front of the Docker service.
I use the proxy to inspect the JSON payload and modify the parameters to set the memory limit if it's absent.
The final step is to modify the DOCKER_HOST environment variable to point to the proxy.
socat might be useful if your proxy can't talk to Unix sockets.
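As a rough sketch of the plumbing (only the socat forwarding is shown; the JSON-rewriting proxy itself is something you would have to write, and the port number is arbitrary):

# Expose the local Docker Unix socket on a TCP port the proxy can sit behind.
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock &
# Point Docker clients at the proxy endpoint instead of the socket.
export DOCKER_HOST=tcp://127.0.0.1:2375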

Bandwidth and Disk space for Docker container

Does a Docker container get the same bandwidth as the host? Or do we need to configure a min and/or max? I've noticed that we need to override the default RAM (which is 2 GB) and swap space configuration if we need to run CPU-intensive jobs.
Also, do we need to configure the disk space? Or does it by default get as much space as the actual hard disk?
Memory and CPU are controlled by Docker using cgroups. If you do not configure these, containers are unrestricted and can use all of the memory and CPU on the Docker host. If you run in a VM, which includes all Docker Desktop installs, then you will be limited to that VM's resources.
Disk space is usually limited to the disk space available in /var/lib/docker. For that reason, many people make this a separate mount. If you use devicemapper for Docker's graph driver (this has been largely deprecated), containers are created from preallocated blocks of disk space, and you can control that block size. You can also restrict containers by running them with read-only root filesystems and mounting volumes into the container that have limited disk space. I've seen this done with loopback device mounts, but it requires some configuration outside of Docker to set up the loopback device. With a VM, you will again be limited by the disk space allocated to that VM.
Network bandwidth is by default unlimited. I have seen an interesting project called docker-tc which monitors containers for their labels and updates bandwidth settings for a container using tc (traffic control).
Does a Docker container get the same bandwidth as the host?
Yes. There is no limit imposed on network utilization. You could maybe impose limits using a bridge network.
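If you just wanted a blunt cap without docker-tc, a hedged sketch using tc directly (this assumes containers sit on the default docker0 bridge; the rate is arbitrary):

# Attach a token-bucket filter to the docker0 bridge to shape traffic through it to ~10 Mbit/s.
tc qdisc add dev docker0 root tbf rate 10mbit burst 32kbit latency 400ms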
Also, do we need to configure the disk space? Or does it by default get as much space as the actual hard disk?
It depends on which storage driver you're using, because each has its own options. For example, devicemapper uses 10G per container by default but can be configured to use more. The recommended driver now is overlay2. To configure a per-container size limit with it, start the Docker daemon with the overlay2.size storage option.
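A rough sketch of what that daemon invocation could look like (overlay2.size only works when /var/lib/docker sits on an xfs filesystem mounted with pquota, and 20G is an arbitrary value):

# Limit each container's writable layer to 20 GB under the overlay2 driver.
dockerd --storage-driver overlay2 --storage-opt overlay2.size=20G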
This depends somewhat on what your host system is and how old it is.
In all cases network bandwidth isn't explicitly limited or allocated between the host and containers; a container can do as much network I/O as it wants up to the host's limitations.
On current native Linux there isn't a desktop application and docker info will say something like Storage driver: overlay2 (overlay and aufs are good here too). There are no special limitations on memory, CPU, or disk usage; in all cases a container can use up to the full physical host resources, unless limited with a docker run option.
On older native Linux there isn't a desktop application and docker info says Storage driver: devicemapper. (Consider upgrading your host!) All containers and images are stored in a separate filesystem and the size of that is limited (it is included in the docker info output); named volumes and host bind mounts live outside this space. Again, memory and CPU are not intrinsically limited.
Docker Toolbox and Docker for Mac both use virtual machines to provide a Linux kernel to non-Linux hosts. If you see a "memory" slider you are probably using a solution like this. Disk use for containers, images, and named volumes is limited to the VM capacity, along with memory and CPU. Host bind mounts generally get passed through to the host system.
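A quick way to check which case you're in (assuming a reasonably recent Docker CLI):

# Prints just the storage driver, e.g. "overlay2" or "devicemapper".
docker info --format '{{.Driver}}'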

Why does Docker on Linux not allow limiting the number of CPUs?

Docker on Windows has extra settings for CPU limits:
By default, each container’s access to the host machine’s CPU cycles is unlimited
From here https://docs.docker.com/config/containers/resource_constraints/#--kernel-memory-details
But I am not sure about this.
Why then do we have those settings in Docker for Windows?
Don't we have something similar in Linux?

How does docker use CPU cores from its host operating system?

My understanding, based on the fact that Docker is based on LXC, is that Docker containers share various resources with their host operating system. My concern is with CPU cores. Here is a scenario:
a host linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the docker containers on that host, will they consume CPU/cores as needed, as if they were being run as normal installed applications on that host OS?
b) Will the docker container consume its own process and all of the processing that is contained in it will be stuck to that parent process's CPU core ?
c) How can I specify a docker container to use a number of cores ( 4 for example ). I saw there is a -C flag that can point to a core id, but it appears there is no option to specify the container to pick N cores at random.
Currently, I don't think Docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly 1, I'm not 100% sure on that).
However, you could tweak the conf file generated for a given container and set something like
cpuset {
    cpuset.cpus="0-3";
}
Things might have changed in the last few versions. Nowadays you can constrain your Docker container with parameters to docker run.
The equivalent of the answer above in newer Docker versions is
docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this will limit the Docker process to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
For more information see https://docs.docker.com/engine/reference/run/
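A small sketch of the CPU-shares approach (the image names are placeholders; shares are relative weights that only matter when CPU is contended):

# Under contention, the first container gets roughly twice the CPU time of the second.
docker run -d --cpu-shares 1024 my-heavy-job
docker run -d --cpu-shares 512 my-light-job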
From ORACLE documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.
The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.
As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
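A sketch of that 25% example as a command line (ubuntu and sleep are just stand-ins for a real workload):

# Grant 25000 µs of CPU time per 100000 µs period, i.e. roughly 25% of one CPU.
docker run -d --cpu-period=100000 --cpu-quota=25000 ubuntu sleep infinity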
So if I run all of the docker containers on that host, will they consume CPU/cores as needed, as if they were being run as normal installed applications on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will the docker container consume its own process and all of the processing that is contained in it will be stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler (CFS) for sharing CPU resources among containers, so containers have configurable access to CPU.
How can I specify a docker container to use a number of cores ( 4 for example ). I saw there is a -C flag that can point to a core id, but it appears there is no option to specify the container to pick N cores at random.
It is very configurable. There are several more CPU options in Docker which you can combine:
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
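So, for the "pick 4 cores" case in the question, a hedged sketch combining the two flags (core IDs and image name are illustrative):

# Pin the container to cores 0-3 and cap it at the equivalent of 2 full cores within that set.
docker run -d --cpuset-cpus="0-3" --cpus="2" my-service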
