Docker - Limit mounted volume size

Is there any way to limit the size that a mounted docker volume can grow to? I'm thinking of doing it like it's done here: How to set limit on directory size in Linux? but I feel it's a bit too convoluted for what I need.

By default, when you mount a host directory or volume into a Docker container, the container gets full access to that directory and can use as much space as is available on the underlying filesystem.
The approach you link to works, but yes, it's tedious.
A simpler option is to mount a separate partition with a fixed size on your server (for example, an EBS volume attached to your EC2 instance) and mount that inside your container; the partition's size then acts as the limit.
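As a concrete sketch of that approach without attaching new hardware, you can create a fixed-size filesystem image on the host, mount it via a loop device, and bind-mount it into the container. The paths, sizes, and image name below are hypothetical, and the mount and docker steps require root and a running daemon:

```shell
# Create a 1 GiB sparse file to back the volume (hypothetical path)
dd if=/dev/zero of=/var/lib/limited-vol.img bs=1M count=0 seek=1024

# Put a filesystem on it and mount it via a loop device (requires root)
mkfs.ext4 /var/lib/limited-vol.img
mkdir -p /mnt/limited-vol
mount -o loop /var/lib/limited-vol.img /mnt/limited-vol

# Bind-mount the size-limited filesystem into the container;
# writes beyond 1 GiB fail with "No space left on device"
docker run -d -v /mnt/limited-vol:/data my_image
```

The container sees an ordinary directory at /data, but the loop-mounted filesystem caps it at 1 GiB regardless of what the container tries to write.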

Related

docker overlay2 increase size

I am very new to Docker, so please pardon me if this is a stupid question :P
I have Docker running on my cloud server and was running out of space because of Docker's overlay files. So I mounted 100 GB of storage on the server at
/home/<user>/data
and in daemon.json configured the Docker root directory to point at this newly mounted storage, then copied over all the old files. But even after that, when I check
df -h
the overlay filesystem shows a size of 36G. Am I doing something wrong?
How can I grow this overlay filesystem to fully utilize the storage?
PS: Also, when it starts filling up, the space doesn't grow; it just fills up and all the apps stop working.
Docker stores images, containers, and volumes under /var/lib/docker by default. If you haven't mounted another filesystem there, you are likely looking at the free space on your root filesystem.
When mounting another filesystem in this location, you likely want to move the current directory aside so you can copy it into the new filesystem. If you do restore the content, be sure to use a command that preserves ownership, permissions, and symlinks (I believe cp -a and tar both do this).
Also, make sure the docker engine is not running when you replace this directory, and be sure the filesystem type matches your current root filesystem type, or is compatible with your graph driver.
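For reference, pointing the engine at the new location is typically done with the data-root option in /etc/docker/daemon.json (older engines used the graph key instead); the path here is the one from the question:

```
{
  "data-root": "/home/<user>/data"
}
```

After editing, stop the daemon, copy the old /var/lib/docker contents with a command that preserves ownership and symlinks (such as cp -a), then restart Docker and confirm with docker info that the "Docker Root Dir" line shows the new path.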

Docker: in memory file system

I have a docker container which does a lot of reads/writes to disk. I would like to test what happens when my entire docker filesystem is in memory. I have seen some answers here saying it will not be a real performance improvement, but this is for testing.
The ideal solution I would like to test is sharing the common parts of each image and copying them into memory only when needed.
Files each container creates at runtime should be in memory as well, and kept separate per container. The filesystem shouldn't exceed 5 GB when idle and 7 GB during processing.
Simple solutions would duplicate all shared files (even parts of the OS you never use) for each container.
There's no difference between the storage of the image and the base filesystem of the container: the layered FS accesses the image layers directly as a read-only layer, with the container using a read-write layer above it to catch any changes. So your goal of having the container run in memory while the Docker installation remains on disk doesn't have an easy implementation.
If you know where your RW activity is occurring (it's fairly easy to check the docker diff of a running container), the best option to me would be a tmpfs mounted at that location in your container, which is natively supported by docker (from the docker run reference):
$ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image
Docker stores image, container, and volume data in its directory by default. A container's filesystem is made up of the original image layers plus a writable 'container layer'.
You might be able to set this up using a RAM disk. You would hard-allocate some RAM, mount it, and format it with your filesystem of choice. Then move your Docker installation to the mounted RAM disk and symlink it back to the original location.
Setting up a Ram Disk
Best way to move the Docker directory
Obviously this is only useful for testing, as Docker and its images, volumes, containers, etc. would be lost on reboot.
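A minimal sketch of that RAM-disk setup, assuming a systemd host and an 8 GiB tmpfs (both assumptions; requires root, and everything under it is lost on reboot):

```shell
# Stop the engine before touching its data directory
systemctl stop docker

# Mount a tmpfs and point /var/lib/docker at it via a symlink
mkdir -p /mnt/docker-ram
mount -t tmpfs -o size=8g tmpfs /mnt/docker-ram
mv /var/lib/docker /var/lib/docker.bak
ln -s /mnt/docker-ram /var/lib/docker

systemctl start docker
```

Reverse the symlink and restore the backup directory to go back to the on-disk installation.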

How to create docker container with custom root volume size?

I have hit the limit of 10 GB for the default root volume size. For this particular container I need a larger size.
So far I've only seen quite dirty hacks to override the default size.
Could somebody provide me and the community with a clear example of specifying a bigger volume size upon container creation? Thanks!
I'm going to offer an alternative suggestion - don't. Stop and ask yourself why you need a larger root volume. I would suggest it's likely to be because you're doing something that can be done a better way.
I would suggest that instead - you use a storage container (if another 10G would be sufficient) or use a passthrough mount to a local disk.
The problem with big containers is they're somewhat at odds with what containerisation is trying to accomplish - compact, lightweight and self contained program instances.
So I would suggest instead:
docker create -v /path/to/storage:/container_mount --name storage_for_my_app busybox /bin/true
(Or you can use just -v /container_mount to keep the data within the container.)
Then when you fire up your container:
docker run -d --volumes-from storage_for_my_app your_image
However, it may be useful to note that as of Docker 1.9 the default size limit is 100 GB instead: https://docs.docker.com/engine/reference/commandline/daemon/

Is it possible to speed up writes inside a docker container?

I have a very large file in my docker container (it's a VirtualBox image) which, unfortunately, must be modified as part of running it. Docker's copy-on-write policy works against me here: any mutation/copy of the file takes about 10 minutes, compared to about 10 seconds to copy the same file on the host.
Can anything be done to speed up the creation/copying of very large files within a docker container? Note that this is an entirely transient file that I do not need to persist after the container is closed.
Declare the folder the file is in as a volume. If you do this, the copy-on-write policy is not applied. Note that you don't have to mount this volume on the host system; it is sufficient to declare it as a volume.
For more information: https://docs.docker.com/userguide/dockervolumes/
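If the image is yours to build, one way to declare such a volume without mounting anything from the host is a VOLUME instruction in the Dockerfile; the path and base image here are hypothetical:

```
FROM my_base_image
# Writes under /scratch go to a Docker-managed volume,
# bypassing the container's copy-on-write layer
VOLUME /scratch
```

Equivalently, for an image you don't control, docker run -v /scratch my_image declares the same anonymous volume at run time.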

Docker - Editing Mount Options

I am adding a disk quota to my Ubuntu docker container. To add quota support, I need to edit the mount options and add usrquota as explained here: how-to-enable-user-and-group-quotas
Usually you would edit /etc/fstab and add the mount option.
My question is: how would I add a mount option to a docker container?
You don't really mount a container's disks anywhere. There is a feature request asking for setting quotas in Docker containers (https://github.com/docker/docker/issues/3804), so at the moment there is no easy way.
However, there are a couple of workarounds.
Use Device Mapper as a limit
Docker containers have a maximum of 10 GB of disk space per container (that is the default with the Device Mapper storage driver).
So your best option is to change the default value for new containers, but then, as I understand it, you would need to rebuild the containers.
So, if you want to enforce a limit of 5 gigabytes, you would start the daemon with
docker -d --storage-opt dm.basesize=5G
Source
https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/#_limiting_disk_space
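On newer engines the same option can be set persistently in /etc/docker/daemon.json instead of on the daemon command line (this assumes you are actually running the devicemapper storage driver; the limit only applies to containers created after the daemon restarts):

```
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.basesize=5G"]
}
```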
User inside/quota outside
The trick is to create a specific user account in each container and assign a fixed user ID to that account (and, obviously, run your workload as that account).
On the host, you would then use setquota to limit that user ID.
Source https://github.com/docker/docker/issues/471#issuecomment-22373948
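A sketch of that inside/outside trick, assuming the container's user was created with UID 1500 and the host filesystem holding the container data is mounted with usrquota (all of these are assumptions; requires root):

```shell
# Inside the image (Dockerfile RUN step): create the user with a fixed UID
# RUN useradd -u 1500 appuser

# On the host: cap UID 1500 at roughly 5 GiB of blocks on the filesystem
# that backs the container data (soft 4.5 GiB, hard 5 GiB, no inode limit;
# block limits are in 1 KiB units)
setquota -u 1500 4718592 5242880 0 0 /var/lib/docker
```

Quotas are enforced by the kernel per filesystem, so any writes the container makes as UID 1500 count against the host-side limit.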
