My Docker containers are currently running with unlimited ulimits.
However, the host system has limits on everything.
How can I test whether a container actually complies with the host system's limits? Is there any way I can simulate hitting the maximum number of open files or connections, etc.?
I'm curious how Docker actually allocates these limits, considering a server is constrained by many things, including open ports.
As far as I understand, different IPs are assigned to different containers, so each IP can have 65535 ports? Does that mean effectively unlimited ports for Docker? And what about file descriptors?
Does anyone have any ideas?
Docker does not tamper with ulimits: if the host is limited, the container will be as well.
Containers are nothing more than special processes, so the fd limit is the same as for any other process on the host.
Concerning ports, it is the same thing: if your host has any limitation, Docker will not bypass it. It just creates a veth pair, so you will most likely be limited to 65535 ports times the maximum number of veth pairs.
You can test the limit by writing a small program that opens N files (or opens the same file N times) and seeing whether it succeeds.
Default ulimits for containers are configured in /etc/init/docker.conf, the docker daemon's config, which applies to all containers.
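For example, a minimal sketch of both checks (image names and limit values here are only illustrative):

# show the fd limit a container actually inherits
docker run --rm busybox sh -c 'ulimit -n'

# override it for a single container (soft:hard)
docker run --rm --ulimit nofile=1024:2048 busybox sh -c 'ulimit -n'

# simulate hitting the limit: try to open more files than the soft limit allows
docker run --rm --ulimit nofile=64:64 python:3-alpine \
  python -c "fs = [open('/tmp/f%d' % i, 'w') for i in range(100)]"
# expect a "Too many open files" error once the limit is hit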
I have a setup where users can create Node-RED instances using Docker containers. I use one Docker container per instance, with nginx as a reverse proxy.
I need to know how many containers can be created on one network, and if the number is limited, how I can increase it.
There are a few possible answers to your question:
1. I don't think nginx will limit the number of Node-RED instances you can have.
2. If you are working on one machine with one IP address, you can change the port number for every Node-RED instance, so the limit would be around 65,535 instances (a little lower, as some ports are already in use). A rough sketch of this setup is shown after this list.
3. If you are using multiple machines (say, virtual machines) with only one port for Node-RED instances, you are limited by the number of IP addresses in your subnet. In a normal /24 subnet (255.255.255.0) there are 254 usable IP addresses.
3.1. You can change the subnet of your local network; see https://www.freecodecamp.org/news/subnet-cheat-sheet-24-subnet-mask-30-26-27-29-and-other-ip-address-cidr-network-references/
4. If you are using multiple machines and a wide range of available ports, you have nearly no limit on how many instances you can deploy; the limit would then be your hardware, I think.
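A rough sketch of option 2 (one machine, one host port per instance; the image name and ports are only examples):

# one Node-RED container per user, each mapped to its own host port
docker run -d --name nodered-user1 -p 1880:1880 nodered/node-red
docker run -d --name nodered-user2 -p 1881:1880 nodered/node-red
# nginx then reverse-proxies e.g. /user1/ to 127.0.0.1:1880 and /user2/ to 127.0.0.1:1881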
I would like to have a default memory limit for each Docker container.
I know I can use --memory when running a container. Yet the problem is the host is shared by many developers and I can't expect everybody to remember to do it.
I want containers run without an explicit --memory parameter to be limited to e.g. 4GB, instead of being able to use all of the host's memory.
I tried to setup CGroup limits as described in https://stackoverflow.com/a/46557336/1237617. The problem is that it's a limit on total memory used by all containers.
Can I setup a per-container memory limit?
I was able to achieve this by adding a proxy in front of the Docker daemon socket.
I use the proxy to inspect the JSON payload of container-create requests and modify the parameters, setting the memory limit if it's absent.
The final step is to point the DOCKER_HOST environment variable at the proxy.
socat might be useful if your proxy can't talk to Unix sockets directly.
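A rough sketch of the wiring, assuming the payload-rewriting proxy is something you run yourself and it listens on 127.0.0.1:2376 (ports and addresses are only examples):

# the hypothetical rewriting proxy listens on 127.0.0.1:2376 and forwards to 127.0.0.1:2375
# socat bridges 127.0.0.1:2375 to the real Docker Unix socket
socat TCP-LISTEN:2375,fork,reuseaddr,bind=127.0.0.1 UNIX-CONNECT:/var/run/docker.sock &

# developers point their CLI at the proxy; runs without --memory get a default limit injected
export DOCKER_HOST=tcp://127.0.0.1:2376
docker run -d nginx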
I have a couple of Docker swarm questions (Sorry for not splitting them up but they are all closely related):
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers as regular?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about and build a micro service mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too much on specifically Docker since it may not be the preferred tech in a couple of years. What can/should I do with Docker and what can/should I do inside the containers. That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service on it; in other words, a service's scale can be higher than the number of actual machines. I have one testing swarm with a single machine and one with three, and they work the same way.
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode (docker swarm init) in order to create a service; see the Docker swarm mode documentation.
What is the key difference between swarm mode and just running a number of containers as regular?
The key difference, afaik, is that when a task goes down, Docker automatically brings another one up. And you can easily scale your services, which means you can have multiple tasks just by scaling a service up or down. With a plain container, when it goes down you have to start another one manually.
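For instance, a minimal sketch of that self-healing and scaling behaviour (the image name is only an example):

docker swarm init                                   # one-time, on the manager node
docker service create --name web --replicas 3 -p 8080:80 nginx
docker service scale web=5                          # swarm starts two more tasks
# if a task's container dies, swarm schedules a replacement automatically;
# a plain "docker run -d nginx" container stays down until you restart it yourself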
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of WildFly servers in a swarm, which are on the same network. I'm not sure about other options, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link at the moment.
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; there are several other such tools as well.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible, for example by giving the container access to the Docker socket.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
@namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to only run on a single machine and then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed over TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that produce an HTTP response. I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :)
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes a container running something like Portainer/SwarmPit or controlling the Docker socket through a bind-mount or TCP can control the whole Swarm. This is how all docker management works :)
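A minimal sketch of that pattern (image tag and container names are only examples):

# give a container control over the host's Docker by bind-mounting the socket
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli tail -f /dev/null
docker exec manager docker ps                       # list the other containers
docker exec manager docker restart some-container   # manage them (use with care)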
For reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
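A hedged sketch of that approach with Traefik on swarm (image tag, hostnames and labels are only illustrative, not a complete production setup):

docker network create --driver overlay proxy

docker service create --name traefik --network proxy \
  --publish 80:80 \
  --constraint node.role==manager \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  traefik:v2.10 --providers.docker.swarmMode=true --entrypoints.web.address=:80

docker service create --name whoami --network proxy \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  --label traefik.http.services.whoami.loadbalancer.server.port=80 \
  traefik/whoami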
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/
Basically, the title says it all: Is there any limit in the number of containers running at the same time on a single Docker host?
There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on:
How you are configuring your docker containers.
What you are running in your containers.
What kernel, distribution and docker version you are on.
The figures below are from the boot2docker 1.11.1 vm image which is based on Tiny Core Linux 7. The kernel is 4.4.8
Docker
Docker creates or uses a number of resources to run a container, on top of what you run inside the container.
Attaches a virtual ethernet adaptor to the docker0 bridge (1023 max per bridge)
Mounts an AUFS and shm file system (1048576 mounts max per fs type)
Creates an AUFS layer on top of the image (127 layers max)
Forks 1 extra docker-containerd-shim management process (~3MB per container on average, and sysctl kernel.pid_max)
Docker API/daemon internal data to manage the container (~400k per container)
Creates kernel cgroups and namespaces
Opens file descriptors (~15 + 1 per running container at startup; see ulimit -n and sysctl fs.file-max)
Docker options
Port mapping -p will run an extra process per port number on the host (~4.5MB per port on average before 1.12, ~300k per port in 1.12 and later, and also sysctl kernel.pid_max)
--net=none and --net=host would remove the networking overheads.
Container services
The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)
If you are running apps in a virtual machine (Node, Ruby, Python, Java), memory usage is likely to become your main issue.
IO across 1000 processes would cause a lot of IO contention.
1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above regarding garbage collection).
If you create network connections from 1000 containers, the host's network layer will get a workout.
It's not much different from tuning a Linux host to run 1000 processes, there are just some additional Docker overheads to include.
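A few quick checks for the host-side ceilings mentioned above (paths assume the default docker0 bridge):

sysctl kernel.pid_max fs.file-max        # process and system-wide fd ceilings
ulimit -n                                # per-process fd limit in the current shell
ls /sys/class/net/docker0/brif | wc -l   # veth ports attached to the default bridge (1023 max)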
Example
1023 Docker busybox containers running nc -l -p 80 -e echo host use about 1GB of kernel memory and 3.5GB of system memory.
1023 plain nc -l -p 80 -e echo host processes running on the host use about 75MB of kernel memory and 125MB of system memory.
Starting 1023 containers serially took ~8 minutes.
Killing 1023 containers serially took ~6 minutes.
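For reference, a rough reconstruction of the loop behind those figures (just the shape of it, not a polished benchmark script):

for i in $(seq 1 1023); do
  docker run -d busybox nc -l -p 80 -e echo host
done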
From a post on the mailing list, at about 1000 containers you start running into Linux networking issues.
The reason is a kernel limit: in net/bridge/br_private.h, BR_PORT_BITS cannot be extended because of spanning tree requirements.
With docker-compose, I am able to run over 6k containers on a single host (with 190GB memory). The container image is under 10MB. But due to the bridge limitation I have divided the containers into batches using multiple services, each service having 1k containers and a separate subnet network.
docker-compose -f docker-compose.yml up --scale servicename=1000 -d
But after reaching 6k, even though around 60GB of memory is still available, it stops scaling and memory usage suddenly spikes. There should be benchmarking figures published by the Docker team to help, but unfortunately they are not available. Kubernetes, on the other hand, clearly publishes benchmarking stats about the recommended number of pods per node.
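A hedged sketch of that batching idea as a compose file (the service name matches the command above; the image and subnet are placeholders):

services:
  servicename:
    image: myapp:latest          # placeholder image, under 10MB in the setup described
    networks:
      - batch1
networks:
  batch1:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.0.0/22   # room for roughly 1000 containers per batch

Each batch then gets its own service and network block, and is scaled with the --scale command shown above.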
My understanding, based on the fact that Docker is based on LXC, is that Docker containers share various resources with the host operating system. My concern is with CPU cores. Here is a scenario:
a host linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the Docker containers on that host, will they consume CPU/cores as needed, as if they were normal installed applications on that host OS?
b) Will each Docker container run as its own process, with all of the processing contained in it stuck to that parent process's CPU core?
c) How can I tell a Docker container to use a number of cores (4, for example)? I saw there is a -C flag that can point to a core id, but there appears to be no option to tell the container to pick N cores at random.
Currently, I don't think Docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly just one; I'm not 100% sure about that).
However, you could tweak the conf file generated for a given container and set something like
cpuset {
    cpuset.cpus="0-3";
}
It might be that things changed in the latest (few) versions. Nowadays you can constrain your docker container with parameters for docker run:
The equivalent of the answer above in the newer Docker versions is:
docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this will limit the container's processes to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
For more information see https://docs.docker.com/engine/reference/run/
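For example (the image name is a placeholder; shares are relative weights, with a default of 1024):

docker run -d --cpu-shares 512 myimage    # half the default weight under CPU contention
docker run -d --cpu-shares 2048 myimage   # twice the default weight under CPU contention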
From ORACLE documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.
The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.
As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
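For example, following the quoted numbers:

# cap the container at 25% of a CPU: 25000 microseconds of CPU time per 100000-microsecond period
docker run -d --cpu-period=100000 --cpu-quota=25000 nginx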
So if I run all of the Docker containers on that host, will they consume CPU/cores as needed, as if they were normal installed applications on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will each Docker container run as its own process, with all of the processing contained in it stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler for sharing CPU resources among containers, so containers have configurable access to CPU.
How can I tell a Docker container to use a number of cores (4, for example)? I saw there is a -C flag that can point to a core id, but there appears to be no option to tell the container to pick N cores at random.
It is very configurable. There are more CPU options in Docker which you can combine.
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
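For example (the image name is a placeholder):

docker run -d --cpuset-cpus="0-3" myimage   # pin to four specific cores
docker run -d --cpus="4" myimage            # up to 4 CPUs' worth of time, on any cores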