Let's say I am running a multiprocessing service inside a Docker container that spawns multiple processes. Would Docker use all/multiple cores/CPUs of the host, or just one?
As Charles mentions, by default all of the host's CPUs can be used, or you can limit CPUs per container using the --cpuset-cpus parameter.
docker run --cpuset-cpus="0-2" myapp:latest
That would restrict the container to 3 CPUs (0, 1, and 2). See the docker run docs for more details.
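A quick way to verify the restriction (a minimal sketch, assuming a busybox image is available or can be pulled): nproc respects the CPU affinity set by --cpuset-cpus, so it reports only the pinned CPUs.
docker run --rm --cpuset-cpus="0-2" busybox nproc
# prints 3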
The preferred way to limit CPU usage of containers is with a fractional limit on CPUs:
docker run --cpus 2.5 myapp:latest
That would limit your container to 2.5 cores on the host.
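Under the hood, --cpus is shorthand for a CFS scheduler quota, so the same limit can be written out explicitly if you prefer (a sketch of the equivalent flags; unlike --cpuset-cpus, the container still sees all host CPUs, it is just throttled to 2.5 cores' worth of time):
# 2.5 CPUs worth of runtime per 100ms scheduling period
docker run --cpu-period=100000 --cpu-quota=250000 myapp:latest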
Lastly, if you run Docker inside a VM, including Docker for Mac, Docker for Windows, and docker-machine, those VMs have a CPU limit separate from your laptop itself. Docker runs inside that VM and will use all the resources given to the VM. E.g. with Docker for Mac you can change the number of CPUs allocated to the VM in the preferences menu.
Your host VM may have only one core by default, so you should increase the VM's CPU count first and then use the --cpuset-cpus option to give your container more cores. You can remove the default docker-machine VM with the following command and then create a new one with the desired CPU count and memory size:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-cpu-count=8 --virtualbox-memory=4096 --virtualbox-disk-size=50000 default
After this step you can specify the number of cores when running your image. The following command uses 4 of the 8 available cores:
docker run -it --cpuset-cpus="0-3" your_image_name
Then you can check the number of cores available inside your container using this command:
nproc
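To confirm that the VM itself received all 8 cores (independent of any --cpuset-cpus restriction on a container), you can run nproc inside the docker-machine VM created above, assuming it is named default:
docker-machine ssh default nproc
# prints 8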
I run a docker build on my Unix build machine and therefore need more than 2 GB of memory (the Docker engine default). On my Mac I got the build working through the resource settings in the Docker Desktop UI.
How can I do the same on Unix?
If you want to increase the allocated memory and CPU while running a container you can try this:
docker run -it --cpus="2" --memory="4096m" ubuntu /bin/bash
You can also do the same when deploying services with docker-compose:
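Here is a minimal compose sketch (service name and limits are placeholders; whether deploy.resources.limits is honored outside swarm mode depends on your Compose version, hence the --compatibility flag for older docker-compose releases; see the links below):
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  app:                      # placeholder service name
    image: ubuntu
    command: sleep infinity
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4096M
EOF
docker-compose --compatibility up -d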
More info:
https://docs.docker.com/config/containers/resource_constraints/
How to specify Memory & CPU limit in docker compose version 3
I have several database processes running in Docker containers. Under certain conditions, I'd like to throttle the memory and storage used by a container. I understand that docker-compose lets you restrict resource use, but I want to keep the containers running if possible.
Yes, have a look at the docker update command: you can update memory and CPU, but storage is not among the options docker update supports; you can look further here for the storage options.
Update a container with cpu-shares and memory
For example, to update the CPU shares and memory limit of a container:
$ docker update --cpu-shares 512 -m 300M abebf7571666
Extended description
The docker update command dynamically updates container configuration. You can use this command to prevent containers from consuming too many resources from their Docker host. With a single command, you can place limits on a single container or on many. To specify more than one container, provide a space-separated list of container names or IDs.
Warning:
The docker update and docker container update commands are not supported for Windows containers.
See the docker update command reference for details.
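For example, to apply the same limits to several running database containers at once and then verify them (the container names db1, db2, and db3 are hypothetical):
# A space-separated list of names/IDs updates several containers in one command
docker update --cpus 2 --memory 300m --memory-swap 600m db1 db2 db3
# Verify: memory is reported in bytes, CPU as NanoCpus
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' db1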
Docker first initializes a container and then executes the program you want. I wonder how Docker manages the memory addresses of the container and of the program running in it.
Docker does not allocate memory itself; the OS manages the resources used by programs. Internally, Docker uses kernel features: cgroups to limit resources and namespaces to isolate containers from each other. The reason that running ps inside a container doesn't show the host's (or other containers') processes is that each container gets its own PID namespace.
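A small illustration of that isolation (the container name demo is just an example): the same process appears as PID 1 inside the container's PID namespace and under an ordinary host PID in docker top.
docker run -d --name demo busybox sleep 600
docker exec demo ps     # inside the container: sleep is PID 1
docker top demo         # host view: the same process under its host PID
docker rm -f demo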
Rather than worrying about Docker's memory, you need to look at the underlying host (VM/instance) where you are running the containers. The number of containers you can run is determined by a number of factors, including what the app running in each container does.
See "Is there a maximum number of containers running on a Docker host?" for the limits you can run into.
I'm using TensorFlow on Windows 10 with Docker (yes, I know Windows 10 isn't supported yet). It performs OK, but it looks like I am only using one of my CPU cores (I have 8). TensorFlow can assign ops to different devices, so I'd like to be able to get access to all 8. In VirtualBox, the settings say only 1 of the 8 CPUs is configured for the machine. I tried editing the machine to set it to more, but that led to all sorts of weirdness.
Does anyone know the right way to either create or restart a docker machine to have 8 CPUs? I'm using the docker quickstart container app.
Cheers!!
First you need to ensure you have enabled Virtualization for your machine. You have to do that in the BIOS of your computer.
The link below has a nice video on how to do that, but there are others as well if you google it:
https://www.youtube.com/watch?v=mFJYpT7L5ag
Then you have to stop the docker machine (i.e. the VirtualBox vm) and change the CPU configuration in VirtualBox.
To list the name of your docker machine (it is usually default) run:
docker-machine ls
Then stop the docker machine:
docker-machine stop <machine name>
Next open VirtualBox UI and change the number of CPUs:
Select the docker virtual machine (should be marked as Powered off)
Click Settings -> System -> Processor
Change the number of CPUs
Click OK to save your changes
Restart the docker machine:
docker-machine start <machine name>
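If you prefer the command line over the VirtualBox UI, the same change can be made with VBoxManage while the machine is stopped (assuming it is named default):
docker-machine stop default
VBoxManage modifyvm default --cpus 8
docker-machine start default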
Finally, you can use the CPU constraint options available for the docker run command to restrict CPU usage for your containers if desired.
For example, the following command restricts the container to 3 CPUs:
docker run -ti --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash
More details are available in the docker run reference document here.
I just create the machine with all CPUs:
docker-machine create -d virtualbox --virtualbox-cpu-count=-1 dev
-1 means use all available CPUs.
Basically, the title says it all: Is there any limit in the number of containers running at the same time on a single Docker host?
There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on:
How you are configuring your docker containers.
What you are running in your containers.
What kernel, distribution and docker version you are on.
The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7. The kernel is 4.4.8.
Docker
Docker creates or uses a number of resources to run a container, on top of what you run inside the container.
Attaches a virtual ethernet adaptor to the docker0 bridge (1023 max per bridge)
Mounts an AUFS and shm file system (1048576 mounts max per fs type)
Creates an AUFS layer on top of the image (127 layers max)
Forks 1 extra docker-containerd-shim management process (~3MB per container on avg and sysctl kernel.pid_max)
Docker API/daemon internal data to manage container. (~400k per container)
Creates kernel cgroups and namespaces
Opens file descriptors (~15 + 1 per running container at startup. ulimit -n and sysctl fs.file-max )
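Most of these host-level ceilings can be checked directly on the host (a quick sketch; exact values vary by distribution and kernel):
sysctl kernel.pid_max fs.file-max   # process-ID and open-file ceilings mentioned above
ulimit -n                           # per-process file-descriptor limit in the current shell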
Docker options
Port mapping -p will run an extra process per port number on the host (~4.5MB per port on avg pre 1.12, ~300k per port > 1.12, and also sysctl kernel.pid_max)
--net=none and --net=host would remove the networking overheads.
Container services
The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)
If you are running apps in a virtual machine (node, ruby, python, java), memory usage is likely to become your main issue.
IO across 1000 processes would cause a lot of IO contention.
1000 processes trying to run at the same time would cause a lot of context switching (see vm apps above for garbage collection).
If you create network connections from 1000 containers, the host's network layer will get a workout.
It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.
Example
1023 Docker busybox images running nc -l -p 80 -e echo host uses up about 1GB of kernel memory and 3.5GB of system memory.
1023 plain nc -l -p 80 -e echo host processes running on a host uses about 75MB of kernel memory and 125MB of system memory
Starting 1023 containers serially took ~8 minutes.
Killing 1023 containers serially took ~6 minutes.
From a post on the mailing list, at about 1000 containers you start running into Linux networking issues.
The reason is:
This is in the kernel; specifically, net/bridge/br_private.h BR_PORT_BITS cannot be extended because of spanning tree requirements.
With Docker Compose, I am able to run over 6k containers on a single host (with 190 GB of memory); the container image is under 10 MB. But due to the bridge limitation, I have divided the containers into batches across multiple services, each service having 1k containers and a separate subnet network.
docker-compose -f docker-compose.yml up --scale servicename=1000 -d
But after reaching 6k containers, even though around 60 GB of memory is still available, it stops scaling and memory usage suddenly spikes. It would help if the Docker team published benchmarking figures for this, but unfortunately none are available. Kubernetes, on the other hand, clearly publishes benchmarking stats about the recommended number of pods per node.
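A rough sketch of that batching (service names, image, and subnets are placeholders): each service gets its own network, so each bridge stays under the ~1023-ports-per-bridge limit mentioned above.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  batch1:
    image: myapp:latest
    networks: [net1]
  batch2:
    image: myapp:latest
    networks: [net2]
networks:
  net1:
    ipam:
      config:
        - subnet: 10.10.0.0/22
  net2:
    ipam:
      config:
        - subnet: 10.10.4.0/22
EOF
docker-compose -f docker-compose.yml up --scale batch1=1000 --scale batch2=1000 -d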