Is the maximum Docker container size 100GB?

I am working with Hyperledger Fabric v1.4, and my server freezes when many transactions are being processed. My understanding is that old transactions or versions that have already been committed are not removed but remain inside Docker, consuming disk space (I might be wrong about this assumption). So far the only solution I have found is to increase my virtual server's disk space to 120GB: 100GB for Docker and 20GB to run my front-end development.
I have looked into Alpine images but right now I do not want to take that route.
My current configuration is a 75GB SSD and 4GB of RAM.
Is there a way to decrease the maximum disk size being used on Ubuntu 16.04 LTS 64-bit Minimal?
If not, is the maximum container size in Docker 100GB?
In the GUI interface I can increase and decrease the size, as shown in the attached image.
GUI Interface Docker
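As an aside (not part of the original question), before resizing the server it is worth checking what is actually using the space inside Docker on the Ubuntu host and reclaiming anything that is no longer needed. Whether pruning is safe for a running Fabric network depends on which images and volumes are still in use, so review the output first:
docker system df        # summary of space used by images, containers, volumes and build cache
docker system df -v     # per-object breakdown (largest images, volumes, etc.)
docker system prune     # remove stopped containers, dangling images, unused networks and build cache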

Related

Set max disk usage for Docker using WSL2

On Windows 11, Docker Desktop uses more and more disk every time I run docker-compose up. I tracked it down to the size of the following file, which is currently at about 100GB.
%USERPROFILE%\AppData\Local\Docker\wsl\data\ext4.vhdx
It seems that on Windows you need to configure Docker's resource usage through .wslconfig; however, I don't see a key in the docs for adjusting maximum disk usage.
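A commonly suggested workaround (not from the original question) is to free space inside the distro first, then compact the ext4.vhdx, since the file only grows and is never shrunk automatically. From an elevated Command Prompt, roughly:
wsl --shutdown
diskpart
    REM inside diskpart; substitute your actual path to the ext4.vhdx quoted above
    select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\data\ext4.vhdx"
    compact vdisk
    exit
This compacts the existing file but does not cap future growth; as the question notes, there does not appear to be a documented .wslconfig key for a hard disk limit.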

Mac docker no space left on device when building images?

I've seen this issue a number of times and usually use docker system prune to solve it temporarily, but I'm not understanding why it says there is no space on the device.
The main drive on my Mac currently has 170GB of free space, and I also have a second drive with 900GB free. The images I'm building take up a total of 900MB when built, so what is Docker talking about? I have plenty of storage space!
Since you specified that the platform is Mac, your Docker runtime is running inside a VM, which has its own resources allocated.
Assuming you are using Docker for Mac, you should increase the disk space allocated to the Docker VM.
In case you don't want to increase the amount of docker engine storage as answered here, you can free some space by running:
docker image prune
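If that is not enough (an addition, not part of the original answer), the build cache is often what fills the VM disk when images are rebuilt repeatedly, and unreferenced images can be removed more aggressively:
docker builder prune      # remove dangling build cache
docker image prune -a     # remove all images not used by at least one container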

Docker Desktop cannot set large disk size

I'm running Docker Desktop 2.2.0 on Windows 10. It appears that the disk size cannot be set beyond 64GB. I tried setting the diskSizeMiB value to 100GB in %APPDATA%\Docker\settings.json, but Docker appears to ignore it and sets the size to 64GB in the resulting Hyper-V VM.
"cpus": 6,
"diskSizeMiB": 102400,
The issue I'm having is that older images are evicted when pulling new ones. Even after manually expanding the Hyper-V disk to 100GB, docker pull deletes older images to make space for new ones.
The Docker for Windows docs don't seem to explicitly mention a limit, but 64GB ominously equals 2^16 MiB, which hints at it being a technical limit.
Does anyone know of a workaround for this limitation?
Looks like I was on the right track with increasing the virtual disk size directly in Hyper-V (See this guide). The only missing piece was restarting Docker (or Windows). Once restarted, I was able to use the full disk.
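For anyone looking for the concrete steps (a sketch based on the answer above; the VHDX path is the usual Docker Desktop location and may differ on your machine): quit Docker Desktop so the VM is stopped, grow the virtual disk from an elevated PowerShell with the Hyper-V module, then restart Docker Desktop or Windows.
Resize-VHD -Path "C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx" -SizeBytes 100GB
After the restart the engine uses the enlarged disk, matching the behaviour described above.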

Docker stats, memory usage, big difference between OSX and Ubuntu, why?

I have a C program running in an Alpine Docker container. The image size is 10M on both OSX and Ubuntu.
On OSX, when I run this image, 'docker stats' shows it using 1M of RAM, so in the docker-compose file I allocate a maximum of 5M within my swarm.
However, on Ubuntu 16.04.4 LTS the image is also 10M, but when running it uses about 9M of RAM, and I have to increase the maximum allocated memory in my compose file.
Why is there such a difference in RAM usage between OSX and Ubuntu?
Even though the operating systems differ, I would have thought that once you are running inside a container framework the behaviour would be similar on different machines, and therefore the memory usage comparable.
Update:
Thanks for the comments. So 'stats' may be inaccurate, and there are differences, so it is best to baseline on Linux. As an aside (but I think interesting), the reason for asking this question is to understand what happens under the hood so I can tune my setup for a large number of deployed programs. Originally, when I tested, I tried to allocate the smallest possible maximum RAM on Ubuntu; this resulted in a lot of disk thrashing, something I didn't see or hear on my MacBook (no hard disks!).
Some numbers, which are specific to my setup but which I think are interesting:
1000 Docker containers, 1 C program each, 20M max RAM per container: server load of 98, server runs 4K processes in total [1000 C programs total].
20 Docker containers, 100 C programs each, 200M max RAM per container: server load of 5 to 50, server runs 2.3K processes in total [2000 C programs total].
This all suggests giving your Docker containers a generous maximum RAM, and that it is kinder to your server to run fewer Docker containers.
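For reference (an addition, not from the original post; the service and image names are made up), a per-container memory cap for a swarm deployment is expressed in the compose file like this, with 20M being the value from the first test above:
version: "3.3"
services:
  worker:
    image: my-c-program:latest
    deploy:
      resources:
        limits:
          memory: 20M    # hard cap per replica; a process that exceeds it is killed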

How docker manages machine configuration

This might be a stupid question, but a question asked in a recent interview left me pondering how Docker manages machine configuration. When I said Docker makes it possible to have the same environment for your application in production, staging, and development, they asked me this:
If the production configuration for your application is something like 64GB of RAM and a 1TB SSD, and your development configuration is a much more modest 8GB of RAM and a 512GB spinning disk, how does Docker make the environments similar?
I was dumbstruck!
Docker allows you to limit the resources given to each container (at least it is possible now).
But in any case, resource allocations depend on circumstances and can change; there is no reason to tie your apps to a static hardware configuration.
The point of Docker is to make the software environment consistent, not the hardware one. Docker does not stop you from scaling vertically; without Docker you can already scale vertically, and with Docker you also gain the ability to scale horizontally at the same time.
The question as asked is flawed. If you had a host with 10GB of RAM and a container hard-wired to 8GB of RAM, and your traffic dropped so that you wanted to scale the host down to 5GB of RAM to lower costs, you could not: the container would crash because the real RAM is lower than what it expects. (In practice, in newer Docker versions you set a maximum, and it is not static, i.e. the memory is not reserved the moment the container starts.)
Remember, Docker is about having horizontal and vertical scaling at the same time!
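As a concrete illustration (a sketch, not from the original answer; the image name is arbitrary), those per-container limits are just flags on docker run, and the same image runs unchanged on the 64GB production host and the 8GB development machine as long as it fits under the caps:
docker run -d --cpus=2 --memory=4g --memory-reservation=1g nginx:latest
The --memory value is an upper bound, not a pre-allocation, so the container only consumes what it actually uses.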
