Is there a maximum number of containers running on a Docker host? - docker

Basically, the title says it all: Is there any limit in the number of containers running at the same time on a single Docker host?

There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on:
How you are configuring your Docker containers.
What you are running in your containers.
What kernel, distribution and Docker version you are on.
The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7 with kernel 4.4.8.
Docker
Docker creates or uses a number of resources to run a container, on top of what you run inside the container. It:
Attaches a virtual ethernet adapter to the docker0 bridge (1023 max per bridge)
Mounts an AUFS and shm file system (1048576 mounts max per fs type)
Creates an AUFS layer on top of the image (127 layers max)
Forks 1 extra docker-containerd-shim management process (~3MB per container on avg; limited by sysctl kernel.pid_max)
Keeps Docker API/daemon internal data to manage the container (~400k per container)
Creates kernel cgroups and namespaces
Opens file descriptors (~15 + 1 per running container at startup; see ulimit -n and sysctl fs.file-max)
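To get a rough idea of where you stand against some of those limits on your own host, you can query them directly. This is just a sketch using the sysctl names mentioned above; exact values and paths vary by distribution.
# kernel-wide ceilings on process count and open file handles
sysctl kernel.pid_max fs.file-max
# per-process open-file limit in the current shell
ulimit -n
# file handles currently allocated / free / max
cat /proc/sys/fs/file-nr
# number of mounts currently present
wc -l /proc/mounts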
Docker options
Port mapping with -p runs an extra process per published port number on the host (~4.5MB per port on average before 1.12, ~300k per port on 1.12 and later; also limited by sysctl kernel.pid_max)
--net=none and --net=host would remove the networking overheads.
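For illustration, the difference looks roughly like this (the image name here is just a placeholder):
# -p publishes a port, which forks a docker-proxy process on the host
docker run -d -p 8080:80 some-web-image
# host networking: no veth pair on docker0 and no proxy process
docker run -d --net=host some-web-image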
Container services
The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)
If you are running apps in a language VM (Node, Ruby, Python, Java), memory usage is likely to become your main issue.
IO across 1000 processes would cause a lot of IO contention.
1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above regarding garbage collection).
If you create network connections from 1000 containers, the host's network layer will get a workout.
It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overhead to account for.
Example
1023 Docker busybox containers running nc -l -p 80 -e echo host use up about 1GB of kernel memory and 3.5GB of system memory.
1023 plain nc -l -p 80 -e echo host processes running on the host use about 75MB of kernel memory and 125MB of system memory.
Starting 1023 containers serially took ~8 minutes.
Killing 1023 containers serially took ~6 minutes.
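The containers in that test would have been started with something along these lines (a sketch, not the exact script that was used):
# start 1023 busybox containers, each listening on port 80 inside its own network namespace
for i in $(seq 1 1023); do
  docker run -d busybox nc -l -p 80 -e echo host
done
# tear them down again, one at a time
docker ps -q | xargs -n 1 docker kill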

From a post on the mailing list, at about 1000 containers you start running into Linux networking issues.
The reason is:
This is in the kernel, specifically net/bridge/br_private.h: BR_PORT_BITS cannot be extended because of spanning tree requirements.
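A common workaround (sketched below; the network and image names are made up) is to spread containers across additional user-defined bridge networks, since the limit applies per bridge:
# create an extra bridge network and attach containers to it instead of docker0
docker network create --driver bridge extra-bridge-1
docker run -d --network extra-bridge-1 some-image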

With Docker Compose, I am able to run over 6k containers on a single host (with 190GB memory); the container image is under 10MB. But due to the bridge limitation I have divided the containers into batches across multiple services, each service having 1k containers and a separate subnet/network.
docker-compose -f docker-compose.yml up --scale servicename=1000 -d
But after reaching 6k containers, even though around 60GB of memory is still available, it stops scaling and memory usage suddenly spikes. There should be benchmarking figures published by the Docker team to help with this, but unfortunately they aren't available. Kubernetes, on the other hand, clearly publishes benchmarking stats about the recommended number of pods per node.
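A sketch of that batching approach (the service and network names here are hypothetical; in the real docker-compose.yml each service is attached to its own network):
# one user-defined bridge network per batch of ~1000 containers
docker network create --subnet 172.20.0.0/22 batch1_net
docker network create --subnet 172.21.0.0/22 batch2_net
# scale each service (each bound to its own network in the compose file) separately
docker-compose -f docker-compose.yml up --scale service_batch1=1000 -d
docker-compose -f docker-compose.yml up --scale service_batch2=1000 -d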

Related

Why is my docker container using high cpu, but my docker host is barely utilized?

I'm trying to understand the relationship between docker containers and their host machines. My setup is as follows:
Hypervisor: Proxmox (4x 10 core Xeon, 80 threads total)
Docker Host: LXC on Proxmox, 40 cores allocated
Docker Host OS: Ubuntu 22.10
What I'm seeing:
I have ~16 containers running within docker. Most are utilizing a fraction of a percentage of a cpu as reported by docker stats. One in particular is hovering around 100% utilization, sometimes spiking well above 100%.
When I look at the cpu utilization on the host lxc container, it's around 96% idle. I'm confused as to why the docker container is running so 'hot' and not using more of the available hardware. I've found a lot of documentation around setting limits, but not the opposite - which should be the default behavior.
Seeing as the CPU is allowed to burst past 100%, I'm not seeing any performance issues - but seeing that 100% hovering on my monitoring charts is bothering me :)
Any ideas of an action I can do to remediate this, or do I just leave it as-is?
You can limit the CPU use of the docker container. Use the following flag with the docker command:
--cpus="1.0"
Example
docker run --cpus="1.0" --name my_container <docker image name>
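If the container is already running, you can also change the limit in place with docker update, which takes the same --cpus value:
docker update --cpus="1.0" my_container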

Using KinD to create a local cluster and the CPU maintains high usage

I'm using KinD to create a local cluster and noted that the CPU usage stays relatively high, between 40-60% for docker.hyperkit on Mac OS Catalina 10.15.6. Within Docker for Mac I limited the resources to CPUs: 4 and Memory: 6.00 GB.
My KinD cluster consists of a control plane node and three worker nodes. Is this CPU usage normal for docker for mac? Can I check to see the utilization per container?
Each kind "node" is a Docker container, so you can inspect those in "normal" ways.
Try running kind create cluster to create a single-node cluster. If you run docker stats you will get CPU, memory, and network utilization information; you can also get the same data through the Docker Desktop application, selecting (whale) > Dashboard. This brings up some high-level statistics on the container. Sitting idle on a freshly created cluster, this seems to be consistently using about 30% CPU for me. (So 40-60% CPU for a control-plane node and three workers sounds believable.)
Similarly, since each "node" is a container, you can docker exec -it kind-control-plane bash to get an interactive debugging shell in a node container. Once you're there, you can run top and similar diagnostic commands. On my single node I see the top processes as kube-apiserver (10%), kube-controller (5%), etcd (5%), and kubelet (5%). Again, that seems reasonably normal, though it might be nice if it used less CPU sitting idle.
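Putting those steps together (the node names below are the defaults kind uses for a cluster named "kind"; yours may differ):
# one-off snapshot of per-container CPU, memory and network usage
docker stats --no-stream kind-control-plane kind-worker kind-worker2 kind-worker3
# open a shell inside the control-plane "node" and inspect its processes with top
docker exec -it kind-control-plane bash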

How docker allocate memory to the process in container?

Docker first initializes a container and then executes the program you want. I wonder how Docker manages the memory addresses of the container and the program in it.
Docker does not allocate memory; it's the OS (the kernel) that manages the resources used by programs. Docker internally uses cgroups, a kernel feature, to constrain those resources. The reason the ps command inside a container won't show processes running on the host or in other containers is that each container runs in its own PID namespace, isolated from the others.
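For example (a sketch: the container name is made up, and the path shown is the cgroup v1 layout; cgroup v2 hosts use a different hierarchy), you can see the memory limit Docker asks the kernel to enforce:
# run a container with a 256MB memory limit
docker run -d --memory=256m --name memtest busybox sleep 3600
# the limit as Docker records it (in bytes)
docker inspect -f '{{.HostConfig.Memory}}' memtest
# the same limit as enforced by the kernel's memory cgroup
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' memtest)/memory.limit_in_bytes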
Rather than worrying about Docker's memory handling, you need to look at the underlying host (VM/instance) where you are running the docker container. The number of containers you can run is determined by a number of factors, including what app you are running in the containers.
See here for the limits that you can run into: Is there a maximum number of containers running on a Docker host?

How many CPUs does a docker container use?

Let's say I am running a multiprocessing service inside a docker container spawning multiple processes; would docker use all/multiple cores/CPUs of the host or just one?
As Charles mentions, by default all can be used, or you can limit it per container using the --cpuset-cpus parameter.
docker run --cpuset-cpus="0-2" myapp:latest
That would restrict the container to 3 CPU's (0, 1, and 2). See the docker run docs for more details.
The preferred way to limit CPU usage of containers is with a fractional limit on CPUs:
docker run --cpus 2.5 myapp:latest
That would limit your container to 2.5 cores on the host.
Lastly, if you run Docker inside of a VM, including Docker for Mac, Docker for Windows, and docker-machine, those VMs will have a CPU allocation separate from your laptop itself. Docker runs inside that VM and will use all the resources given to the VM itself. E.g. Docker for Mac exposes this in its preferences, where you set how many CPUs the VM gets.
Maybe your host VM has only one core by default. Therefore you should increase your VM's CPU count first and then use the --cpuset-cpus option to increase your Docker cores. You can remove the default docker-machine VM using the following commands and then create another VM with your chosen CPU count and memory size:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-cpu-count=8 --virtualbox-memory=4096 --virtualbox-disk-size=50000 default
After this step you can specify the number of cores before running your image. This command will use 4 of the total 8 cores:
docker run -it --cpuset-cpus="0-3" your_image_name
Then you can check the number of available cores inside your image using this command:
nproc
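Note that nproc reflects the CPU affinity mask, so it is --cpuset-cpus (not --cpus) that changes what it reports. For example, assuming an image that ships nproc, such as a stock ubuntu image:
docker run --rm --cpuset-cpus="0-3" ubuntu nproc   # prints 4
docker run --rm --cpus="2" ubuntu nproc            # still prints the host VM's full core count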

How does docker use CPU cores from its host operating system?

My understanding, based on the fact that Docker is based on LXC, is that Docker containers share various resources of their host operating system. My concern is with CPU cores. Here is a scenario:
a host linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the docker containers on that host, will they consume CPU/cores as needed, like if they were being run as normal installed applications on that host OS?
b) Will the docker container run as its own process, with all of the processing contained in it stuck to that parent process's CPU core?
c) How can I specify that a docker container should use a number of cores (4 for example)? I saw there is a -C flag that can point to a core id, but it appears there is no option to tell the container to pick N cores at random.
Currently, I don't think Docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly 1, I'm not 100% sure on that).
However, you could tweak the conf file generated for a given container and set something like
cpuset {
    cpuset.cpus="0-3";
}
It might be that things have changed in the latest few versions. Nowadays you can constrain your docker container with parameters to docker run.
The equivalent of the earlier answer in the newer docker versions is
docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this will limit the container to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
For more information see https://docs.docker.com/engine/reference/run/
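A sketch of the CPU-shares approach: the default weight is 1024, so a container started like this gets roughly twice the CPU time of an unconfigured one when the host is under contention (and is unrestricted otherwise):
docker run --cpu-shares=2048 ubuntu /bin/echo 'Hello world'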
From the Oracle documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.
The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.
As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
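For example, the 25% case from that quote looks like this on the command line (the image and command are just placeholders):
# 25000us of CPU time allowed per 100000us period = 25% of one CPU
docker run --cpu-period=100000 --cpu-quota=25000 ubuntu /bin/echo 'Hello world'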
So if I run all of the docker containers on that host, will they consume CPU/cores as needed like if they were being run as normal installed applications on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will the docker container consume its own process and all of the processing that is contained in it will be stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler (CFS) for sharing CPU resources among containers, so containers have configurable access to CPU.
How can I specify a docker container to use a number of cores ( 4 for example ). I saw there is a -C flag that can point to a core id, but it appears there is no option to specify the container to pick N cores at random.
It is very configurable. There are several CPU options in Docker which you can combine.
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
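So for question c), pinning a container to 4 specific cores versus giving it the equivalent of 4 cores' worth of CPU time looks like this (myapp:latest is just a placeholder):
# pin to the first four cores
docker run -d --cpuset-cpus="0-3" myapp:latest
# or: allow up to four cores' worth of CPU time, scheduled on any cores
docker run -d --cpus="4" myapp:latest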
