How can my Docker hard drive be bigger than the host's?

I run some Docker images on an EC2 host and recently noticed that the Docker FS is always 100 GB, while the host FS is only 8 GB.
What would happen if I use more than 8 GB in the Docker image? Magic?

That comes from PR 14709 and the docker daemon --storage-opt dm.basesize= option:
Current default basesize is 10G. Change it to 100G. Reason being that for
some people 10G is turning out to be too small and we don't have capabilities
to grow it dynamically.
This is just overcommitting and no real space is allocated till the container
actually writes data. And this is no different than fs-based graphdrivers
where the virtual size of a container root is unlimited.
So when you go over 8 GB, you should get a "No space left on device" error message. No magic.
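For reference, a minimal sketch of setting that base size yourself when the daemon uses the devicemapper storage driver (20G is just an illustrative value; on current versions the daemon binary is dockerd):
dockerd --storage-driver=devicemapper --storage-opt dm.basesize=20G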

Related

Increase the RAM of a docker container itself (NOT DOCKER)

Hey guys, I am running WSL 2 with a Docker container on it, but the container itself (it's a KVM running a QEMU VM) is limited to 4 GB. I need a lot more than 4 GB, at least 8 GB, to run the things I want to run in the QEMU VM (there's a reason why I am running QEMU, and no, I cannot go without it).
I am running Docker Desktop, and if I inspect the Docker container it says the following.
I have edited the .wslconfig file and set the limit to 20 GB as well as setting the swap file to 1, and I have tried a command like docker run insert_docker_name_here it --memory 8000 -m,
but that then says it can't find the Docker container for some reason, even though it's under Docker Desktop.
I have tried looking on the internet for an answer, but everything seems to point to the .wslconfig file, --memory, or some vague answer that doesn't help at all. Is there a way I can edit my Docker container file and set it to use 8 GB or more?
Please help - I am new to Docker and would appreciate the assistance.
There is no memory limit on docker containers by default. Using --memory will only specify the upper limit of memory the container can use. You need to examine how the docker container is started and remove any limit there.
Also, WSL2 has no memory limit by default and will just grab as much memory as it needs. The value in .wslconfig is also an upper limit. If you just remove all the limits, it should use all available memory.
This leaves QEMU itself. Have you checked what the guest RAM size is? (-m parameter on the commandline)
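For example, a minimal sketch under those assumptions (the image name and disk file are placeholders for whatever you actually run): start the container without any --memory cap, then size the guest inside QEMU with -m:
docker run -it --device /dev/kvm your/qemu-image          # no --memory flag, so no container-level limit
qemu-system-x86_64 -enable-kvm -m 8192 guest-disk.qcow2   # inside the container: 8 GB for the guest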

How to change docker default storage size

I've been having loads of issues with kubectl not having enough space. How can I increase the default storage size allocated to docker?
None of minikube's recommendations worked.
1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
2. Increase the storage allocated to Docker for Desktop by clicking on:
Docker icon > Preferences > Resources > Disk Image Size
3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
And the second is not possible from the command line...
Taking your comment into consideration:
"I get ImagePullBackOff when I try to deploy nginx on the cluster" – Caterina
You can specify minikube's disk allocations separately:
minikube start --memory=8192 --cpus=4 --disk-size=50g
Which can help you work around the disk space issues, as the default is significantly smaller:
--disk-size string Disk size allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g). (default "20000mb")
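Note that --disk-size only takes effect when the minikube VM is first created, so an existing cluster has to be recreated; a sketch:
minikube delete
minikube start --memory=8192 --cpus=4 --disk-size=50g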

inconsistency between "docker system df" and Docker Desktop on Mac

# docker system df
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          222     0        42.87GB   42.87GB (100%)
Containers      0       0        0B        0B
Local Volumes   10      0        77.68MB   77.68MB (100%)
Build Cache     946     0        7.982GB   7.982GB
From the docker system df output, it seems my Docker disk is running out of space. But Docker Desktop shows:
So I am confused: which one is the right indicator of Docker's disk usage?
If you're on a mac/windows, then it means that behind the scenes you're running a VM running linux. That disk size then corresponds to the VM disk size containing the Linux distro, rather than just the docker stuff.
Actually, the RECLAIMABLE column is the size and percentage of resources that are not in use, i.e. no container is using those images/volumes. In your case, 100% of the images and volumes are idle, so you can remove them if you want; good ways to do that are docker image prune and docker system prune.
The image that you sent has what you are looking for.
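For reference, a minimal sketch of the prune commands mentioned above (-a and --volumes widen the scope, so use them deliberately):
docker image prune -a          # remove all images not used by any container
docker system prune --volumes  # also remove stopped containers, unused networks, dangling images, build cache and unused volumes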

Increasing the memory allocation to docker daemon (dockerd) on Linux [duplicate]

I've seen that on Windows and Mac it's very easy to change the RAM containers are given - you just go into the GUI. But how do you do this on Linux, where it's a CLI instead of a GUI?
The Docker docs mention an -m flag, but this flag doesn't give any response (just prints the entirety of the help output again) so I don't know whether it worked. It also seems specific to containers, whereas I'd like to change the global default.
Lastly, is there a way to check the current default RAM, so I can make sure whatever I do in the end actually worked?
On native Linux, Docker can use all available host memory. It uses a lightweight kernel-based isolation mechanism that generally shares resources like CPU cores and memory (and on modern installations, disk space) using the standard kernel mechanism. There isn't a control or setting to limit or increase this.
On other platforms Docker runs a hidden Linux VM to be able to run a Linux kernel to use these isolation mechanisms, and the Docker Desktop memory control affects the memory allocation for that VM.
This is how I "check" a Docker container's memory:
Open the Linux command shell and:
Step 1: Check what containers are running.
docker ps
Step 2: Note down the 'CONTAINER ID' of the container you want to check and issue the following command:
docker container stats <containerID>
eg:
docker container stats c981
This will give an output like:
CONTAINER ID   NAME       CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O   PIDS
c981c9482284   registry   0.00%   4.219MiB / 1.944GiB   0.21%   9.66kB / 0B   0B / 0B     14
The 'MEM USAGE / LIMIT' column gives you the actual memory usage and the memory limit in effect.
Note: press Ctrl+C to exit this view and return to the command prompt.
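If what you actually want is a per-container cap rather than a daemon-wide default (which doesn't exist on native Linux), a sketch using the standard flags (the image name is a placeholder):
docker run -m 2g --memory-swap 2g nginx                     # limit a new container to 2 GB, no extra swap
docker update --memory 2g --memory-swap 2g <containerID>    # change the limit on an existing container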

Limit disk size and bandwidth of a Docker container

I have a physical host machine with Ubuntu 14.04 running on it. It has 100G disk and 100M network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G disk and 10M network bandwidth.
After going though the official documents and searching on the Internet, I still can't find a way to allocate specified size disk and network bandwidth to a container.
I think this may not be possible in Docker directly, maybe we need to bypass Docker. Does this means we should use something "underlying", such as LXC or Cgroup? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, it gives the following output:
Filesystem                   Size  Used  Avail  Use%  Mounted on
/dev/mapper/docker-longid    9.8G  276M  9.0G     3%  /
/dev/mapper/Chris--vg-root    27G  5.5G   20G    22%  /etc/hosts
From the above I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: You could tell Docker to use the DeviceMapper storage backend instead of AuFS. This way each container would run on a block device (Devicemapper dm-thin target) limited to 10GB (this is a Docker default, luckily enough it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size using --storage-opt dm.basesize=20G (that would be applied to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class :
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom OpenVswitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
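For completeness, a very rough, untested sketch of what the host-side tc setup for that class might look like (eth0 and the htb discipline are assumptions here; 10:1 corresponds to the 0x00100001 classid above):
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup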
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ....
Newer releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb
More info here:
https://blog.docker.com/2016/02/docker-1-10/
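For illustration, a sketch combining both throttles (ubuntu is just a placeholder image):
docker run --device-read-bps /dev/sda:10mb --device-write-bps /dev/sda:10mb ubuntu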
If you have access to the containers you can use tc for bandwidth control within them.
e.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to have a bandwidth of 240kbps, burst 300kbps and 50 ms latency.
You also need to pass the --cap-add=NET_ADMIN to the docker run command if you are not running the containers as root.
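Putting it together, a minimal sketch (assuming your/image runs the tc line above in its entrypoint):
docker run --cap-add=NET_ADMIN your/image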
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
To answer this question, please refer to "Resizing Docker containers with the Device Mapper plugin".
