I would like to have a default memory limit for each Docker container.
I know I can use --memory when running a container. The problem is that the host is shared by many developers, and I can't expect everybody to remember to do it.
I want containers run without an explicit --memory parameter to be limited to e.g. 4 GB instead of being able to use the whole host's memory.
I tried to set up cgroup limits as described in https://stackoverflow.com/a/46557336/1237617. The problem is that it's a limit on the total memory used by all containers.
Can I set up a per-container memory limit?
I was able to achieve this by adding a proxy in front of the Docker service.
I use the proxy to inspect the JSON payload and modify the parameters to set the memory limit if it's absent.
The final step is to modify the DOCKER_HOST environment variable to point to the proxy.
socat might be useful if your proxy can't talk to Unix sockets directly.
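For example, a minimal sketch, assuming the JSON-rewriting proxy listens on tcp://localhost:2375; the port numbers are placeholders, not defaults you must use:

# Bridge the proxy's TCP side to the daemon's Unix socket if the proxy
# itself can't speak Unix sockets:
socat TCP-LISTEN:2376,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock &

# Every developer's client then talks to the proxy, not the daemon:
export DOCKER_HOST=tcp://localhost:2375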
I'm setting up two docker containers - one as a server to hold data in memory, and the other as a client to access that data. In order to do so, I believe I need to use the --ipc flag to share memory between the containers. The Docker documentation explains the --ipc flag pretty well. What makes sense to me according to the documentation is running:
docker run -d --ipc=shareable data-server
docker run -d --ipc=container:data-server data-client
But all of the Stack Overflow questions I've read (1, 2, 3, 4) link both containers directly to the host:
docker run -d --ipc=host data-server
docker run -d --ipc=host data-client
Which is more appropriate for this use case? If ipc=host is better, when would you use ipc=shareable?
From the docs:
--ipc="MODE" : Set the IPC mode for the container
"shareable": Own private IPC namespace, with a possibility to share it with other containers.
"host": Use the host system’s IPC namespace.
The difference between shareable and host is whether the host can access the shared memory.
An IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores, and message queues. Because of this, there should be no difference in performance between the two modes.
Shared memory is commonly used by databases and by custom-built high-performance applications (typically C with OpenMPI, or C++ with Boost libraries) in the scientific computing and financial services industries.
Considering the security of the service: using host exposes the host's IPC namespace to an attacker who gains control of a container. With shareable, the IPC namespace is accessible only from inside the containers, which helps contain any attack. The host mode exists to allow cooperation between a container and its host.
It's often difficult to know all the details of the asker's environment and requirements, so host tends to be recommended most often because it is the easiest to understand and configure.
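A quick way to observe the difference, reusing the container names from the question (this assumes ipcs is available in the images and that an application in data-server has created a SysV shared memory segment):

docker exec data-server ipcs -m   # lists the segment
docker exec data-client ipcs -m   # with --ipc=container:data-server, lists the same segment
ipcs -m                           # on the host: not listed with shareable, listed with --ipc=host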
How can I prevent a docker host from becoming unresponsive when a docker container is under high load?
My docker host server becomes unresponsive at certain times, and only a restart helps. We assume this happens when a docker container performs CPU-intensive tasks. Whenever this happens, I cannot log in to the docker host.
If I am already logged in, I usually cannot use the shell; sometimes I can, but with a delay of about 10 minutes before typed characters appear.
There is indeed no limit on a container by default, but there is a large number of flags that let you control a container's behaviour at run time.
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows. Docker provides ways to control how much memory, or CPU a container can use, setting runtime configuration flags of the docker run command. This section provides details on when you should set such limits and the possible implications of setting them.
Here is a (far from exhaustive) example using some of those flags:
docker run -it --cpus="1.5" --memory="1g" ubuntu /bin/bash
Just make sure your limits are set to something sensible allowing your host machine to still do what it is supposed to do (run the daemon or other tasks).
A comprehensive list of all the flags that let you control resources is available at https://docs.docker.com/config/containers/resource_constraints/
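If a container is already running without limits, you don't have to recreate it; a sketch using docker update (the container name here is a placeholder):

# Apply CPU and memory caps to a running container:
docker update --cpus="1.5" --memory="1g" --memory-swap="1g" some-container

# Confirm the limits took effect (check the MEM LIMIT column):
docker stats --no-stream some-container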
In rancher, how do I choose the available memory for a docker container?
On OS X, I can do it like so:
VBoxManage modifyvm default --memory 5000
To define the memory available to my docker-machine. How would I achieve this using rancher to set up a host?
If you're adding hosts in a cloud provider (EC2, DigitalOcean, etc) through the Add Host UI then they all have some sort of size option for offering, flavor, RAM, etc depending on the specific provider and their terminology.
Containers themselves have no memory limit by default in Docker. They can use any memory available on the host, and they do not "reserve" any of it, so memory is not held by a particular container the way it is when you deploy a VM.
There is an option to limit how much memory (+ swap) a container is allowed to use, which is in the Host/Security tab of the service/container definition.
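That UI option maps roughly to the plain docker flags (the image name is a placeholder):

# --memory caps RAM; --memory-swap is the total of RAM plus swap,
# so equal values mean no swap on top of the limit:
docker run -d --memory=4g --memory-swap=4g my-image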
My Docker containers are currently running with unlimited ulimits...
However, the host system has limits on everything.
How can I test whether they comply with the host system's limits? Is there any way I can simulate hitting the maximum number of open files or connections, etc.?
I'm curious how docker actually allocates these limits, considering a server is limited by a lot of things, including open ports.
From what I can tell, different IPs are assigned to different containers...
so each IP can have 65535 ports?
Does that mean effectively unlimited ports for docker? And what about file descriptors?
Does anyone have any ideas?
Docker does not tamper with ulimits; if the host is limited, then the container will be as well.
Containers are nothing more than special processes, so the fd limit is the same as for any other process on the host.
Concerning ports, it's the same thing: if your host has any limitation, Docker will not bypass it. It just creates a veth pair, so you will most likely be limited to 65535 × max-veth ports.
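You can see the per-container IP like this (the container name is arbitrary):

# Each container gets its own IP on the default bridge, so the 65535 TCP
# ports exist per container address rather than being shared with the host:
docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web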
You can test the limit by writing a small program that opens N files (or forks N times) and seeing whether it works.
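A shell-level check works too; --ulimit is the real per-container override flag, and the values here are just examples:

# Show the file-descriptor limit a container actually inherits:
docker run --rm ubuntu bash -c 'ulimit -n'

# Force a low limit and confirm the container sees it (prints 64):
docker run --rm --ulimit nofile=64:64 ubuntu bash -c 'ulimit -n'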
Docker's default ulimits are configured in /etc/init/docker.conf, the Docker daemon config, which is used for all containers.
Also check this answer for more information.
My understanding, based on the fact that Docker is built on LXC, is that Docker containers share various resources of their host operating system. My concern is with CPU cores. Here is a scenario:
a host Linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the docker containers on that host, will they consume CPU/cores as needed, as if they were normal applications installed on that host OS?
b) Will each docker container run as its own process, with all of the processing inside it stuck to that parent process's CPU core?
c) How can I specify that a docker container should use a number of cores (4, for example)? I saw there is a -C flag that can point to a core ID, but it appears there is no option to tell the container to pick N cores at random.
Currently, I don't think docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each docker container (or possibly 1, I'm not 100% sure on that).
However, you could tweak the conf file generated for a given container and set something like:

cpuset {
    cpuset.cpus = "0-3";
}
It might be that things have changed in the last few versions; nowadays you can constrain your docker container with parameters to docker run. The equivalent of the approach above in newer docker versions is:

docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this will limit the container to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
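A minimal sketch of shares as relative weights (busybox is used here just as a throwaway busy workload):

# --cpu-shares is a soft, relative weight, not a hard cap: under contention
# the first container gets roughly twice the CPU time of the second.
docker run -d --cpu-shares=1024 busybox sh -c 'while :; do :; done'
docker run -d --cpu-shares=512 busybox sh -c 'while :; do :; done'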
For more information see https://docs.docker.com/engine/reference/run/
From the Oracle documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.

The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.

As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
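So, to reproduce the 25% example from the quote:

# 25000 µs of CPU time per 100000 µs period = 25% of a single CPU:
docker run -d --cpu-period=100000 --cpu-quota=25000 ubuntu sleep infinity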
So if I run all of the docker containers on that host, will they consume CPU/cores as needed, as if they were normal applications installed on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will each docker container run as its own process, with all of the processing inside it stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler (CFS) to share CPU resources among containers, so containers have configurable access to the CPU.
How can I specify that a docker container should use a number of cores (4, for example)? I saw there is a -C flag that can point to a core ID, but it appears there is no option to tell the container to pick N cores at random.
It is highly configurable. There are more CPU options in Docker which you can combine.
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
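For the concrete "4 cores" case from the question, you can either pin the container to specific cores or cap its total CPU time (the image name is a placeholder):

# Pin to the first four cores:
docker run -d --cpuset-cpus="0-3" my-image

# Or allow up to four cores' worth of CPU time without pinning:
docker run -d --cpus="4" my-image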