Bandwidth and Disk space for Docker container - docker

Does a docker container get the same bandwidth as the host? Or do we need to configure minimum and/or maximum values? I've noticed that we need to override the default RAM (which is 2 GB) and swap space configuration if we need to run CPU-intensive jobs.
Also, do we need to configure the disk space? Or does it by default get as much space as the actual hard disk?

Memory and CPU are controlled by Docker using cgroups. If you do not configure these, they are unrestricted and the container can use all of the memory and CPU on the docker host. If you run in a VM, which includes all Docker Desktop installs, then you will be limited to that VM's resources.
Disk space is usually limited to the disk space available in /var/lib/docker. For that reason, many people put this directory on a separate mount. If you use devicemapper for docker's graph driver (this has been largely deprecated), containers are created from preallocated blocks of disk space, and you can control that block size. You can also restrict containers by running them with a read-only root filesystem and mounting volumes into the container that have limited disk space. I've seen this done with loopback device mounts, but it requires some configuration outside of docker to set up the loopback device. With a VM, you will again be limited by the disk space allocated to that VM.
Network bandwidth is unlimited by default. I have seen an interesting project called docker-tc which watches containers for specific labels and updates the bandwidth settings for a container using tc (traffic control).
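A rough sketch of that loopback approach (the paths, size, and image name here are placeholders, not anything docker provides):

# create a 1 GiB file, format it, and mount it via a loop device
dd if=/dev/zero of=/srv/limited.img bs=1M count=1024
mkfs.ext4 -F /srv/limited.img
mkdir -p /mnt/limited
mount -o loop /srv/limited.img /mnt/limited
# run the container read-only, with the size-limited mount as its only writable area
docker run --read-only -v /mnt/limited:/data my-image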
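For a sense of what that looks like under the hood, here is a hand-rolled tc rule (the veth interface name and rate are made up for illustration; docker-tc manages equivalent rules for you based on container labels):

# cap egress on the container's virtual interface to 10 Mbit/s with a token bucket filter
tc qdisc add dev veth1234abc root tbf rate 10mbit burst 32kbit latency 400ms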

Does a docker container get the same bandwidth as the host?
Yes. There is no limit imposed on network utilization by default. You could possibly impose limits using a bridge network.
Also, do we need to configure the disk space? Or does it by default get as much space as the actual hard disk?
It depends on which storage driver you're using, because each has its own options. For example, devicemapper uses 10G per container by default, but it can be configured to use more. The recommended driver now is overlay2. To set a per-container limit with overlay2, start the daemon with the overlay2.size storage option (this requires /var/lib/docker to be on an xfs filesystem mounted with project quotas enabled).
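For example, assuming /var/lib/docker sits on xfs mounted with pquota, the daemon could be started like this (the 20G value is just an example):

dockerd --storage-driver overlay2 --storage-opt overlay2.size=20G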

This depends somewhat on what your host system is and how old it is.
In all cases network bandwidth isn't explicitly limited or allocated between the host and containers; a container can do as much network I/O as it wants up to the host's limitations.
On current native Linux there isn't a desktop application, and docker info will say something like Storage Driver: overlay2 (overlay and aufs are good here too). There are no special limitations on memory, CPU, or disk usage; a container can use up to the full physical host resources unless limited with a docker run option.
On older native Linux there isn't a desktop application, and docker info says Storage Driver: devicemapper. (Consider upgrading your host!) All containers and images are stored in a separate filesystem whose size is limited (it is included in the docker info output); named volumes and host bind mounts live outside this space. Again, memory and CPU are not intrinsically limited.
Docker Toolbox and Docker for Mac both use virtual machines to provide a Linux kernel to non-Linux hosts. If you see a "memory" slider you are probably using a solution like this. Disk use for containers, images, and named volumes is limited to the VM capacity, along with memory and CPU. Host bind mounts generally get passed through to the host system.
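In any of these cases you can confirm which storage driver is in play straight from the CLI (the output line shown is just an example):

docker info | grep -i 'storage driver'
# Storage Driver: overlay2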

Related

container has its own disk but shared memory?

I am new to docker, just a question on container basics. Below is a picture from a book:
It says that containers share the host computer's CPU, OS and memory, but each container has its own computer name, IP address and disk.
I am a little bit confused about the disk. Isn't the disk just like memory as a resource? If a container has 1 GB of data inside, it must get allocated 1 GB of disk space by the host computer from its own disk, just like memory? So is the container's disk also shared?
You can make that diagram more precise by saying that each container has its own filesystem. /usr in a container is separate from /usr on other containers or on the host, even if they share the same underlying storage.
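A quick way to see that separation (the image and file names are arbitrary):

docker run -d --name demo alpine sleep 300
docker exec demo sh -c 'echo hello > /usr/demo.txt'
docker run --rm alpine ls /usr/demo.txt   # fails: the second container has its own /usr
ls /usr/demo.txt                          # fails on the host too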
By way of analogy to ordinary processes, each process has its own address space and processes can't write to each other's memory, even though they share the same underlying memory hardware. The kernel assigns specific blocks (pages) of physical memory to specific process address spaces. If you go out of your way, there are actually a couple of ways to cause blocks of memory to be shared between processes. The same basic properties apply to container filesystems.
On older Docker installations (docker info will say devicemapper) Docker uses a reserved fixed-size disk area. On newer Docker installations (docker info will say overlay2) Docker can use the entire host disk. The Linux kernel is heavily involved in mapping parts of the host disk (or possibly host filesystem) into the per-container filesystem spaces.

How to increase disk space and memory limits for docker using console?

I easily managed to do this on the desktop version of Docker via preferences, but how can I do this using console on a remote linux server?
The limits you are configuring in the Docker Desktop UI are on the embedded Linux VM. All containers run within that VM, giving you an upper limit on the sum of all containers. To replicate this on a remote Linux server, you would set the physical hardware or VM constraints to match your limit.
For individual containers, you can specify the following (a combined example follows this list):
--cpus to set the CPU limit for the container's cgroup. This can be something like 2.5 to allow the container to use up to 2.5 CPU threads. Containers attempting to use more CPU time will be throttled.
--memory or -m to set the memory limit in bytes. This is applied to the cgroup the container runs within. Containers attempting to exceed this limit will be killed.
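A minimal combined sketch (the image name and values are placeholders):

docker run -d --name worker --cpus 2.5 --memory 512m my-image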
Disk space for containers and images is controlled by the disk space available to /var/lib/docker with the default overlay2 graph driver. You can limit this by placing that directory on a separate drive or partition with limited space. For volume mounts, disk space is limited by wherever the volume mount is sourced, and the default named volumes go back to /var/lib/docker.
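One common way to do that is to point the daemon's data root at the partition you want; the path below is just an example, and you should stop the daemon and move or clear the existing data first. The same value can also be set as "data-root" in /etc/docker/daemon.json.

dockerd --data-root /mnt/bigdisk/docker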

Using different raid partitions with docker

I have a Linux (Ubuntu) machine with a partition on an SSD raid and a partition on an HDD raid. I want to put my docker containers with high traffic (like a database) on the SSD part and the other containers on the cheaper HDD part. I can't find an answer here or on other sites. Is this possible?
Docker itself doesn't provide that level of control over its container storage on a per-container basis.
You can use the devicemapper storage driver with a specific raid logical volume for the container file systems, but there's no way to choose between multiple storage devices at container run time or via some policy.
Docker does have volumes that can be added to a container, and volume plugins that use different storage backends for volumes. These can be controlled on a per-container basis.
There is an LVM volume plugin. You could assign the SSDs to an LVM volume group and mount data volumes from that in any container where you want the extra write performance.
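Even without a plugin, plain bind mounts get you most of the way there. Assuming the SSD raid is mounted at /mnt/ssd and the HDD raid at /mnt/hdd (paths and images below are examples):

# database data directory lives on the SSD raid
docker run -d --name db -v /mnt/ssd/pgdata:/var/lib/postgresql/data postgres
# low-traffic container keeps its data on the HDD raid
docker run -d --name files -v /mnt/hdd/appdata:/data my-app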
Another option would be to run multiple Docker daemons, one for each storage configuration, but that would be difficult to maintain.

When does a running Docker container run out of disk space?

I've read through so much documentation, and I'm still not sure how this really works. It's a bit of a Docker vs. VM question.
If I start a VM with a 2GB hard drive and fill its disk with files, I know it runs out after 2GB of files.
Does Docker work the same way? I would assume so. But from what I've read about "UnionFS" it seems like it does not run out of space.
So then why do Docker "volumes" exist? Is that automagically expanding Docker disk space transient in some way? Will the files I've saved inside of my Docker container disappear after a reboot? How about after restarting the container?
Docker's disk usage (1.12+) depends on the Docker storage driver and possibly the physical file system in use.
TL;DR: Storage will be shared between all containers and local volumes unless you are using the devicemapper storage driver or have set a limit via docker run --storage-opt size=X when running on the zfs or btrfs drivers. Docker 1.13+ also supports a quota size with overlay2 on an xfs-backed file system.
Containers
For all storage drivers except devicemapper, the container and local volume storage is limited by the underlying file system hosting /var/lib/docker and its subdirectories. A container can fill the shared file system, and then other containers can't write any more.
When using the devicemapper driver, a default volume size of 10G is "thin allocated" for each container. The default size can be overridden with the daemon option --storage-opt dm.basesize or set on a per-container basis with docker run --storage-opt size=2G.
The same per container quota support is available for the zfs and btrfs drivers as both file systems provide simple built in support for creating volumes with a size or quota.
The overlay2 storage driver on xfs supports per-container quotas as of Docker 1.13. This will probably be extended to ext4 when newer 4.5+ kernels become standard/common, since ext4 and xfs quotas now share a common API.
Volumes
Docker volumes are separate from a container and can be viewed as a persistent storage area for an ephemeral container.
Volumes are stored separately from Docker's container storage and have their own plugins for different backends. local is the default backend, which writes data to /var/lib/docker/volumes, so it is held outside of the container's storage and any quota system.
Other volume plugins could be used if you wanted to set per volume limits on a local file system that supports it.
Containers will keep their own file state over a container restart and a host reboot, until you docker rm the container. Files in a volume will survive removal of the container and can be mounted when a new container is created.
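A small sketch of that difference (all names arbitrary): data written to a named volume outlives docker rm, while data written only to the container filesystem does not.

docker volume create appdata
docker run --name one -v appdata:/data alpine sh -c 'echo kept > /data/f'
docker rm one
docker run --rm -v appdata:/data alpine cat /data/f   # still prints "kept"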

Choose available memory for containers in Rancher

In rancher, how do I choose the available memory for a docker container?
On OSX, I can do it like so:
VBoxManage modifyvm default --memory 5000
To define the memory available to my docker-machine. How would I achieve this using rancher to set up a host?
If you're adding hosts in a cloud provider (EC2, DigitalOcean, etc.) through the Add Host UI, then they all have some sort of size option for offering, flavor, RAM, etc., depending on the specific provider and their terminology.
Containers themselves have no memory limit by default in Docker. They can use any memory available on the host, and they do not "reserve" any of it, so memory is not held by a particular container the way it is when you deploy a VM.
There is an option to limit how much memory (+ swap) a container is allowed to use, which is in the Host/Security tab of the service/container definition.
