How to change docker default storage size

I've been having loads of issues with kubectl not having enough space. How can I increase the default storage size allocated to docker?
None of the minikube recommendations worked:
1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
2. Increase the storage allocated to Docker for Desktop by clicking on:
Docker icon > Preferences > Resources > Disk Image Size
3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
And the second is not possible from the command line...

Taking your comment into consideration:
"I get ImagePullBackOff when I try to deploy nginx on the cluster" – Caterina
You can specify minikube's disk allocations separately:
minikube start --memory=8192 --cpus=4 --disk-size=50g
This can help you work around the disk-space issues, since the default is significantly smaller:
--disk-size string    Disk size allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g). (default "20000mb")
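Note that, as far as I know, --disk-size only takes effect when the minikube VM is first created, so with an existing cluster you would have to recreate it, for example:
minikube delete
minikube start --memory=8192 --cpus=4 --disk-size=50g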

Related

Increase the RAM of a docker container itself (NOT DOCKER)

Hey guys, I am running WSL 2 with a Docker container on top of it, but the container itself (it's a KVM running a QEMU VM) is limited to 4 GB. I need a lot more than 4 GB; I need at least 8 GB to run the things I want to run in the QEMU VM. (There's a reason why I am running QEMU, and no, I cannot go without it.)
I am running Docker Desktop, and if I inspect the Docker container it says the following.
I have edited the .wslconfig file, set the limit to 20 GB, and set the swap file to 1. I have also tried a command like docker run insert_docker_name_here it --memory 8000 -m,
which then says it can't find the Docker container for some reason, even though it shows up in Docker Desktop.
I have tried looking on the internet for an answer, but everything seems to point to the .wslconfig file, --memory, or some vague answer that doesn't help at all. Is there a way I can edit my Docker container configuration and set it to use 8 GB or more?
Please help - I am new to Docker and would appreciate the assistance.
There is no memory limit on Docker containers by default. Using --memory only specifies the upper limit of memory the container can use. You need to examine how the container is started and remove any limit set there.
WSL2 also has no memory limit by default and will simply grab as much memory as it needs. The value in .wslconfig is likewise an upper limit. If you just remove all the limits, it should use all available memory.
That leaves QEMU itself. Have you checked what the guest RAM size is (the -m parameter on the command line)?
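As a sketch of how to check and adjust each layer (the container name and disk image path are placeholders, not taken from your setup):
docker inspect -f '{{.HostConfig.Memory}}' <container_name>    # 0 means no limit is set
docker update --memory 8g --memory-swap 8g <container_name>    # raise an existing limit in place
qemu-system-x86_64 -m 8192 -hda /path/to/disk.img              # give the QEMU guest 8 GB of RAM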

Docker images disappearing over time

I loaded some docker images running
docker load --input <file>
I can then see these images when executing
docker image ls
After a while, images start disappearing. Every few minutes there are fewer and fewer images listed. I have not run any of the images yet. What could be the cause of this issue?
EDIT: This issue arises with Docker inside the minikube VM.
Since you've mentioned that the Docker daemon runs inside the minikube VM, I assume you might be hitting the Kubernetes garbage collection mechanism, which keeps system utilization at an appropriate level and reduces the number of unused containers (built from images) according to specific thresholds.
These eviction thresholds are fully managed by kubelet, the Kubernetes node agent, which cleans up unused images and containers according to the parameters (flags) propagated in the kubelet configuration file.
Therefore, you can investigate the eviction behavior by looking at the thresholds set in the kubelet config file, which is generated by the minikube bootstrapper at /var/lib/kubelet/config.yaml.
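For example, a quick way to view that file from the host (assuming the path above):
minikube ssh -- sudo cat /var/lib/kubelet/config.yaml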
As mentioned in mk_sta's answer, to fix the issue you need to:
Create or edit /var/lib/kubelet/config.yaml with:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: "5%"
(the default value is 15%)
minikube stop
minikube start --extra-config=kubelet.config=/var/lib/kubelet/config.yaml
Or free up more space on the Docker partition.
https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#create-the-config-file
https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds

Not enough space during build

I am currently trying to create a Docker image with Python files and a lot of extra packages in requirement.txt.
While I am running the command sudo docker build -t XXX ., the packages are downloaded and then installed one by one until I receive an error:
"Could not install packages due to an EnvironmentError: [Errno 28] No space left on device"
I have already tried the atomic option of sudo docker system prune, and all the past Docker images are deleted.
Moreover, sudo docker info shows that I have 15 GB allocated to Docker, and while my unsuccessful Docker image is 1 GB in size, that is still well below the total allocation.
None of the options mentioned here: https://unix.stackexchange.com/questions/203168/docker-says-no-space-left-on-device-but-system-has-plenty-of-space
or here: Docker error : no space left on device
worked. I can create several "failed" images of ~1 GB with a total size of more than 20 GB, so it is not an issue of a lack of space on my VM's HDD.
So I would be grateful for some more ideas.
The disk partition used by Docker is becoming full during the build. You can see the available and used space on your partitions using df -h. You either need to add more space to that partition or clean up more files.
The docker system prune only removes unused data (dangling images, unreferenced volumes, ...). You can free more space by deleting images that you don't need: take a look at the images you have using docker image ls and explicitly delete unneeded ones using docker image rm <image>.
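For example, a quick triage sketch (docker system df has been available since Docker 1.13):
df -h                  # per-partition usage on the host
docker system df       # space used by images, containers and volumes
docker image ls        # list images, then remove unneeded ones with docker image rm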
If you are using Docker Desktop on Mac, go to Preferences and increase the disk image size. In my case it was displaying that the disk image was full: Disk image size: 59.6 GB (59.6 GB used).

How can my Docker hard drive be bigger than the host's?

I run some Docker images on an EC2 host and recently noticed that the Docker FS is always 100 GB. The host FS is only 8 GB, though.
What would happen if I use more than 8 GB on the Docker image? Magic?
That comes from PR 14709 and the docker daemon --storage-opt dm.basesize= option:
Current default basesize is 10G. Change it to 100G. The reason is that for some people 10G is turning out to be too small, and we don't have the capability to grow it dynamically.
This is just overcommitting, and no real space is allocated until the container actually writes data. And this is no different than fs-based graphdrivers, where the virtual size of a container root is unlimited.
So when you go over 8 GB, you should get a "No more space left on device" error message. No magic.
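For illustration, here is how that default could be overridden at daemon startup (devicemapper driver only; a sketch, and the new size generally applies only to containers created afterwards):
dockerd --storage-driver devicemapper --storage-opt dm.basesize=50G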

Limit disk size and bandwidth of a Docker container

I have a physical host machine with Ubuntu 14.04 running on it. It has a 100 GB disk and 100 Mbit/s of network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10 GB of disk and 10 Mbit/s of network bandwidth.
After going through the official documents and searching the Internet, I still can't find a way to allocate a specified disk size and network bandwidth to a container.
I think this may not be possible in Docker directly; maybe we need to bypass Docker. Does this mean we should use something "underlying", such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate a different size (such as 20G, 30G, etc.) to each container? You said it is hardcoded in Docker, so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, it gives the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-longid 9.8G 276M 9.0G 3% /
/dev/mapper/Chris--vg-root 27G 5.5G 20G 22% /etc/hosts
From the above I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: you could tell Docker to use the devicemapper storage backend instead of AuFS. This way each container would run on a block device (devicemapper dm-thin target) limited to 10 GB (this is a Docker default; luckily enough, it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size with --storage-opt dm.basesize=20G (which would be applied to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
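On more recent Docker daemons, the same settings can live in /etc/docker/daemon.json instead of daemon flags (a sketch; restart the daemon after editing):
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.basesize=20G"]
}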
About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class:
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom Open vSwitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
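A minimal sketch of what that host-side setup might look like (the tc classid 10:1 is the hex form of the 0x00100001 cgroup classid above; eth0 is an assumption for the interface carrying the container traffic):
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
tc filter add dev eth0 parent 10: protocol ip handle 1: cgroup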
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...
Newer releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb <image>
More info here:
https://blog.docker.com/2016/02/docker-1-10/
If you have access to the containers, you can use tc for bandwidth control within them.
E.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to get a bandwidth of 240 kbit/s, a burst of 300 kbit/s and 50 ms of latency.
You also need to pass --cap-add=NET_ADMIN to the docker run command if you are not running the containers as root, as in the example below.
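For instance (image name and entrypoint path are placeholders):
docker run --cap-add=NET_ADMIN your/image /entrypoint.sh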
"1) Is it possible to allocate a different size (such as 20G, 30G, etc.) to each container? You said it is hardcoded in Docker, so it seems impossible."
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.
