How to set CPU limit in Dockerfile for Windows Containers

I'm trying to host some Windows-based Docker containers in Kubernetes 1.17.9, and unfortunately this version won't respect the CPU limit set in the pod specification.
Is there any way I can set the CPU limit in the Dockerfile of the Windows containers?

Windows containers use a fair-share system for resource management, and those controls are more guidelines than hard limits. Please read about the limitations here:
https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-container
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/resource-controls
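Note also that there is no Dockerfile instruction for CPU or memory limits; they are runtime settings. A minimal sketch (the image tag and values are illustrative only), keeping the fair-share caveat above in mind:

    # Limits are passed when the container starts, not baked into the image;
    # on Windows they behave as scheduling guidance rather than hard caps.
    docker run --cpus 2 --memory 2g mcr.microsoft.com/windows/servercore:ltsc2019 cmd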

Related

Is there a way to update Docker "Resources" settings from the command line on an EC2 instance?

I'm attempting to increase the memory allocation of a specific container I'm running on an EC2 instance. I was able to do this locally by adding mem_limit: 4GB to my docker-compose file (using version 2, not 3), and it did not work until I changed my settings in Docker Desktop to be greater than the memory limit I was specifying.
My question is: is it possible to change this memory slider setting from the command line, and would it therefore be possible to do it on an EC2 instance without Docker Desktop? I've been through the docs but was unable to find anything specific to this.
That's a Docker Desktop setting, which is only necessary because of the way Docker containers run in a VM on Windows and Mac computers. On an EC2 Linux server there is no limit like that; Docker processes can use as many resources as the server has available.
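If you want to confirm that the per-container limit is what actually applies on the EC2 host, a quick sketch (the container name and size are arbitrary):

    # Start a container with a 4 GB memory limit and read the value back;
    # there is no Desktop-style VM ceiling on a native Linux host.
    docker run -d --name memtest -m 4g alpine sleep 300
    docker inspect --format '{{.HostConfig.Memory}}' memtest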

How to increase disk space and memory limits for docker using console?

I easily managed to do this on the desktop version of Docker via preferences, but how can I do this using the console on a remote Linux server?
The limits you are configuring in the Docker Desktop UI are on the embedded Linux VM. All containers run within that VM, giving you an upper limit on the sum of all containers. To replicate this on a remote Linux server, you would set the physical hardware or VM constraints to match your limit.
For individual containers, you can specify the following:
--cpus to set how much CPU time is allocated to the container's cgroup. This can be a fractional value like 2.5 to allow the container up to 2.5 CPU threads. Containers attempting to use more CPU will be throttled.
--memory or -m to set the memory limit in bytes. This is applied to the cgroup the container runs within. Containers attempting to exceed this limit will be killed.
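For example (the values are illustrative):

    # Cap the container at 2.5 CPU threads and 512 MB of RAM; CPU overuse is
    # throttled, while exceeding the memory limit gets the container killed.
    docker run --rm --cpus 2.5 --memory 512m alpine sh -c 'echo limits applied'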
Disk space for containers and images is controlled by the disk space available to /var/lib/docker for the default overlay2 graph driver. You can limit this by placing that directory on a different drive/partition with limited space. For volume mounts, disk space is limited by where the volume mount is sourced, and default named volumes also live under /var/lib/docker.
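One common way to apply such a partition-based limit is to point Docker's data directory at a dedicated mount via daemon.json; a sketch (the mount point here is an assumption):

    # /mnt/docker-data would be a dedicated, size-limited partition.
    printf '{\n  "data-root": "/mnt/docker-data"\n}\n' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker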

Bandwidth and Disk space for Docker container

Does a Docker container get the same bandwidth as the host machine? Or do we need to configure a minimum and/or maximum? I've noticed that we need to override the default RAM (which is 2 GB) and swap space configuration if we need to run CPU-intensive jobs.
Also, do we need to configure the disk space, or does it by default get as much space as the actual hard disk?
Memory and CPU are controlled by Docker using cgroups. If you do not configure these, containers are unrestricted and can use all of the memory and CPU on the Docker host. If you run in a VM, which includes all Docker Desktop installs, then you will be limited to that VM's resources.
Disk space is usually limited to the disk space available in /var/lib/docker. For that reason, many people make this a separate mount. If you use devicemapper for Docker's graph driver (this has been largely deprecated), containers are created from preallocated blocks of disk space, and you can control that block size. You can also restrict containers by running them with read-only root filesystems and mounting volumes into the container that have limited disk space. I've seen this done with loopback device mounts, but it requires some configuration outside of Docker to set up the loopback device. With a VM, you will again be limited by the disk space allocated to that VM.
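A small illustration of the read-only approach (the sizes and paths are placeholders):

    # A read-only root filesystem plus a size-limited tmpfs for scratch space
    # bounds how much the container can write.
    docker run --rm --read-only --tmpfs /tmp:rw,size=64m alpine df -h /tmp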
Network bandwidth is by default unlimited. I have seen an interesting project called docker-tc which monitors containers for their labels and updates bandwidth settings for a container using tc (traffic control).
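The underlying mechanism is the kernel's traffic control; a very rough sketch of shaping the default bridge by hand (the interface name and rates are placeholders, and docker-tc automates the per-container equivalent):

    # Limit egress on the docker0 bridge to roughly 1 Mbit/s.
    sudo tc qdisc add dev docker0 root tbf rate 1mbit burst 32kbit latency 400ms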
Does a Docker container get the same bandwidth as the host machine?
Yes. There is no limit imposed on network utilization. You could maybe impose limits using a bridge network.
Also, do we need to configure the disk space, or does it by default get as much space as the actual hard disk?
It depends on which storage driver you're using, because each has its own options. For example, devicemapper uses 10G per container by default but can be configured to use more. The recommended driver now is overlay2; to configure a size limit with it, start Docker with the overlay2.size storage option.
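A sketch of the daemon-side configuration (this assumes /var/lib/docker is on xfs mounted with pquota, which the overlay2 size option requires):

    # Cap each container's writable layer at 20 GB.
    printf '{\n  "storage-driver": "overlay2",\n  "storage-opts": ["overlay2.size=20G"]\n}\n' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker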
This depends somewhat on what your host system is and how old it is.
In all cases network bandwidth isn't explicitly limited or allocated between the host and containers; a container can do as much network I/O as it wants up to the host's limitations.
On current native Linux there isn't a desktop application and docker info will say something like Storage driver: overlay2 (overlay and aufs are good here too). There are no special limitations on memory, CPU, or disk usage; in all cases a container can use up to the full physical host resources, unless limited with a docker run option.
On older native Linux there isn't a desktop application and docker info says Storage driver: devicemapper. (Consider upgrading your host!) All containers and images are stored in a separate filesystem and the size of that is limited (it is included in the docker info output); named volumes and host bind mounts live outside this space. Again, memory and CPU are not intrinsically limited.
Docker Toolbox and Docker for Mac both use virtual machines to provide a Linux kernel to non-Linux hosts. If you see a "memory" slider you are probably using a solution like this. Disk use for containers, images, and named volumes is limited to the VM capacity, along with memory and CPU. Host bind mounts generally get passed through to the host system.
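A quick way to check which of these cases applies to a given host (both fields are part of docker info's standard output):

    # Prints the graph driver and the total memory visible to the Docker engine.
    docker info --format 'storage driver: {{.Driver}}, total memory: {{.MemTotal}}'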

Why does Docker on Linux not allow limiting the number of CPUs?

Docker on Windows has extra settings for CPU limits:
By default, each container’s access to the host machine’s CPU cycles is unlimited
From here: https://docs.docker.com/config/containers/resource_constraints/#--kernel-memory-details
But I am not sure about this.
Why then do we have those settings in Docker for Windows?
Don't we have something similar in Linux?
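For reference, the Linux Docker CLI does expose an equivalent per-container setting, so the Desktop slider is not the only way to do this; a minimal sketch (the value is arbitrary):

    # By default a container may use every CPU cycle on the host; the --cpus
    # flag on Linux is the CLI counterpart of the Windows Desktop setting.
    docker run --rm --cpus 1.5 alpine sh -c 'echo "capped at 1.5 CPUs"'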

Docker container disk size in mesos and marathon

Does Marathon impose a disk space resource limit on Docker container applications? By default, I know that Docker containers can grow as needed in their host VMs, but when I tried to have Marathon and Mesos create and manage my Docker containers, I found that the container would run out of space during installation of packages. As it stands, I cannot just cache the installation of these packages in a prebuilt image.
So if Marathon does impose a disk space resource limit, is there a way to turn that off?
Marathon should not impose a size limit on your containers, and as far as I am aware there are no limitations on the size of a container that Marathon can run, so long as the box you are running Marathon and the containers on has sufficient resources allocated (remaining).
That being said, there is a great response by user mbarthelemy at this link where he goes into detail regarding devicemapper settings in Ubuntu that allow you to allocate disk size and network resources to each container on a docker level.
No. Marathon does not enforce any resource limits itself, although your app definition can declare cpu/memory/disk limits. It is up to Mesos to actually enforce these limits. Mesos 0.22 added support for disk quota isolation, but it is not enabled by default (check the slave's --isolators flag), so I doubt that was your problem.
What is the slave's --work_dir? If it's mapping to /tmp/mesos (default), and that happens to be a tiny ramdisk/SSD, you might actually be running out of space on the host machine/VM.
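For completeness, a sketch of declaring those limits in the app definition posted to Marathon's REST API (the host, app id, and values are illustrative; mem and disk are in megabytes):

    # Marathon forwards these to Mesos, which enforces them (disk only if the
    # slave's disk isolator is enabled).
    curl -X POST http://marathon.example.com:8080/v2/apps \
      -H 'Content-Type: application/json' \
      -d '{"id": "/my-app", "cpus": 1.0, "mem": 512.0, "disk": 1024.0,
           "container": {"type": "DOCKER", "docker": {"image": "alpine"}}}'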
