Docker volume size limit on Bluemix

Docker containers let us conveniently mount volumes for persistent data. I've researched this, and if I understand correctly, the volume's space allocation is bound by the container host's drive space.
My question is: how does this translate to a cloud system like Bluemix? With a container (on Bluemix), you can set the drive limit to, say, 32GB, and know you can run the image with 32GB available to the container. Are any created volumes also capped and rolled into that 32GB limit?
I'm not able to find any documentation on this. The closest I found was creating "Data Containers", where the volume limit is the size of the data container. But if I just create a volume and mount it to a container, what rules govern the size limit of that particular volume?
Running inspect on the volume gives:
{
    "hostPath": "/vol/af2f348b-cad6-4b86-ac0c-b1bc072ca241/PGDATA",
    "spaceGuid": "af2f348b-cad6-4b86-ac0c-b1bc072ca241",
    "volName": "PGDATA"
}
This question seems specific to Bluemix, but not necessarily, since it might shed light on practices other "container as a service" providers might use.

On Bluemix you can use external docker volumes for data persistence: if storage lives inside the container itself, the data only persists for the lifetime of that container.
You could create a volume using the cf CLI
cf ic volume create volume_name
and check for it through
cf ic volume list
Then you can mount it on the container on Bluemix through the cf ic run command with the -v option, or through the Dockerfile when building on Bluemix.
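For example, mounting the volume at container start could look like this (the container name, mount path, and image name here are placeholders):
cf ic run --name my-container -v volume_name:/var/data my_image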
For reference:
https://www.ng.bluemix.net/docs/containers/container_single_ov.html
Edit Jan 18th
There is a fixed relationship between memory and storage size for containers on Bluemix; it is shown in the dashboard options (and varies with the account/org type):
Pico: 64 MB memory, 4 GB storage
Nano: 128 MB memory, 8 GB storage
Micro: 256 MB memory, 16 GB storage (default)
Tiny: 512 MB memory, 32 GB storage
Small: 1 GB memory, 64 GB storage
Medium: 2 GB memory, 128 GB storage
Large: 4 GB memory, 256 GB storage
X-Large: 8 GB memory, 512 GB storage
XX-Large: 16 GB memory, 1 TB storage

Related

How to set the RAM of a Docker container from the terminal or Dockerfile

I need a Docker container with 6 GB of RAM.
I tried this command:
docker run -p 5311:5311 --memory=6g my-linux
But it doesn't seem to work: I logged in to the container and checked the amount of memory available. This is the output, which shows only about 2 GB available:
>> cat /proc/meminfo
MemTotal: 2046768 kB
MemFree: 1747120 kB
MemAvailable: 1694424 kB
I tried setting Preferences -> Advanced in the Docker application.
If I set 6 GB there, it works... I mean, I have a container with 6 GB MemTotal.
But in this way all my containers will have 6 GB...
I was wondering how to allocate 6 GB of memory to just one container, using some command or a setting in the Dockerfile. Any help?
Don't rely on /proc/meminfo for tracking memory usage from inside a docker container. /proc/meminfo is not containerized, which means that the file is displaying the meminfo of your host system.
Your /proc/meminfo indicates that your Host system has 2G of memory available. The only way you'll be able to make 6G available in your container without getting more physical memory is to create a swap partition.
Once you have a swap partition larger or equal to ~4G, your container will be able to use that memory (by default, docker imposes no limitation to running containers).
If you want to limit the amount of memory available to your container explicitly to 6G, you could do docker run -p 5311:5311 --memory=2g --memory-swap=6g my-linux, which means that out of a total memory limit of 6G (--memory-swap), up to 2G may be physical memory (--memory). More information about this here.
There is no way to set memory limits in the Dockerfile that I know of (and I think there shouldn't be: Dockerfiles are there for building containers, not running them), but docker-compose supports the above options through the mem_limit and memswap_limit keys.
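For instance, a minimal docker-compose.yml mirroring the docker run line above might look like this (assuming compose file format v2; the service name is arbitrary):
version: '2'
services:
  my-linux:
    image: my-linux
    ports:
      - "5311:5311"
    mem_limit: 2g
    memswap_limit: 6g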

Clean docker environment: devicemapper

I have a docker environment with 2 containers (Jenkins and Nexus, both with their own named volume).
I have a daily cron-job which deletes unused containers and images. This is working fine. But the problem is inside my devicemapper:
du -sh /var/lib/docker/
30G docker/
I checked each folder in my docker folder:
Volumes (big, but that's normal in my case):
/var/lib/docker# du -sh volumes/
14G volumes/
Containers:
/var/lib/docker# du -sh containers/
3.2M containers/
Images:
/var/lib/docker# du -sh image/
5.8M image/
Devicemapper:
/var/lib/docker# du -sh devicemapper/
16G devicemapper/
/var/lib/docker/devicemapper/mnt is 7.3G
/var/lib/docker/devicemapper/devicemapper is 8.1G
Docker info:
Storage Driver: devicemapper
Pool Name: docker-202:1-xxx-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: ext4
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 5.377 GB
Data Space Total: 107.4 GB
Data Space Available: 28.8 GB
Metadata Space Used: 6.148 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.141 GB
Udev Sync Supported: true
What is this space and am I able to clean this without breaking stuff?
Don't use a devicemapper loop file for anything serious! Docker has big warnings about this.
The /var/lib/docker/devicemapper/devicemapper directory contains the sparse loop files that hold all the data that docker mounts, so you would need to use lvm tools to trawl around inside them. Have a read through the removal issues with devicemapper; they are kinda sorta resolved, but maybe not.
I would move away from devicemapper where possible or use LVM thin pools on anything RHEL based. If you can't change storage drivers, the same procedure will at least clear up any allocated sparse space you can't reclaim.
Changing the docker storage driver
Changing the storage driver requires dumping your /var/lib/docker directory, which contains all your docker data. There are ways to save portions of it, but that involves messing around with Docker internals. Better to commit and export any containers or volumes you want to keep, and import them after the change. Otherwise you will have a fresh, blank Docker install!
Export data
Stop Docker
Remove /var/lib/docker
Modify your docker startup to use the new storage driver.
Set --storage-driver=<name> in /lib/systemd/system/docker.service or /etc/systemd/system/docker.service or /etc/default/docker or /etc/sysconfig/docker
Start Docker
Import Data
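A rough sketch of the export/import steps, assuming a container called app and a named volume app_data (both names hypothetical):
# export: snapshot the container as an image and save it, then tar up the volume contents
docker commit app app-backup
docker save -o app-backup.tar app-backup
docker run --rm -v app_data:/data -v $(pwd):/backup busybox tar czf /backup/app_data.tar.gz -C /data .
# ... stop docker, wipe /var/lib/docker, switch the storage driver, start docker ...
# import: reload the image and restore the volume contents
docker load -i app-backup.tar
docker run --rm -v app_data:/data -v $(pwd):/backup busybox tar xzf /backup/app_data.tar.gz -C /data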
AUFS
AUFS is not in the mainline kernel (and never will be), which means distros have to actively include it somehow. For Ubuntu it's in the linux-image-extra packages.
apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
Then change the storage driver option to --storage-driver=aufs
OverlayFS
OverlayFS is already available in Ubuntu, just change the storage driver to --storage-driver=overlay2 or --storage-driver=overlay if you are still using a 3.x kernel
I'm not sure how good an idea this is right now; it can't be much worse than the loop file. The overlay2 driver is pretty solid for dev use but isn't considered production ready yet (e.g. Docker Enterprise doesn't provide support for it), although it is being pushed to become the standard driver due to the AUFS/kernel issues.
Direct LVM Thin Pool
Instead of the devicemapper loop file you can use an LVM thin pool directly. RHEL makes this easy with a docker-storage-setup utility that is distributed with their EPEL docker package. Docker has detailed steps for setting up the volumes manually.
--storage-driver=devicemapper \
--storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
--storage-opt dm.use_deferred_removal=true
Docker 17.06+ supports managing simple direct-lvm block device setups for you.
Just don't run out of space in the LVM volume, ever. You end up with an unresponsive Docker daemon that needs to be killed and then LVM resources that are still in use that are hard to clean up.
A periodic docker system prune -a works for me on systems where I use devicemapper and not the LVM thinpool. The pattern I use is:
I label any containers, images, etc with label "protected" if I want them to be exempt from cleanup
I then periodically run docker system prune -a --filter=label!=protected (either manually or on cron with -f)
Labeling examples:
docker run --label protected ...
docker create --label=protected=true ...
For images, Dockerfile's LABEL, eg LABEL protected=true
To add a label to an existing image that I cannot easily rebuild, I make a 2 line Dockerfile with the above, build a new image, then switch the new image for the old one (tag).
General Docker label documentation
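For example, retro-labelling an existing image could look like this (the image name is a placeholder; this is the two-line Dockerfile mentioned above):
# Dockerfile.label
FROM myimage:latest
LABEL protected=true
# build it, re-pointing the original tag at the newly labelled image
docker build -f Dockerfile.label -t myimage:latest .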
First, what is devicemapper (official documentation)
Device Mapper has been included in the mainline Linux kernel since version 2.6.9 [in 2005]. It is a core part of the RHEL family of Linux distributions.
The devicemapper driver stores every image and container on its own virtual device. These devices are thin-provisioned copy-on-write snapshot devices.
Device Mapper technology works at the block level rather than the file level. This means that the devicemapper storage driver's thin provisioning and copy-on-write operations work with blocks rather than entire files.
The devicemapper is the default Docker storage driver on some Linux distributions.
Docker hosts running the devicemapper storage driver default to a configuration mode known as loop-lvm. This mode uses sparse files to build the thin pool used by image and container snapshots.
Docker 1.10 [from 2016] and later no longer matches image layer IDs with directory names in /var/lib/docker.
However, there are two key directories.
The /var/lib/docker/devicemapper/mnt directory contains the mount points for image and container layers.
The /var/lib/docker/devicemapper/metadata directory contains one file for every image layer and container snapshot.
If your docker info does show your Storage Driver is devicemapper (and not aufs), proceed with caution with those folders.
See for instance issue 18867.
I faced the same issue: my /var/lib/docker/devicemapper/devicemapper/data file had reached ~91% of the root volume (~45G of 50G). I tried removing all the unwanted images and deleting volumes; nothing helped reduce this file.
Did some googling and understood that the "data" file is a loopback-mounted sparse file which docker uses to store the mount locations and the files we would have stored inside the containers.
Finally I removed all the containers which had been run before and were now stopped:
Warning: Deletes all docker containers
docker rm $(docker ps -aq)
That reduced the devicemapper file significantly. Hope this helps.

How can my docker hard drive be bigger than the host's?

I run some docker images on an EC2 host and recently noticed that the docker FS is always 100GB. The host FS is only 8GB though.
What would happen if I use more than 8GB on the docker image? Magic?
That comes from PR 14709 and the docker daemon --storage-opt dm.basesize= option:
Current default basesize is 10G. Change it to 100G. Reason being that for
some people 10G is turning out to be too small and we don't have capabilities
to grow it dynamically.
This is just overcommitting and no real space is allocated till container
actually writes data. And this is no different than fs-based graphdrivers
where virtual size of a container root is unlimited.
So when you go over 8 GB, you should get a "no space left on device" error message. No magic.
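You can check the limit your daemon is currently using from docker info, and override it at daemon startup with the same option (the 20G here is just an example value; as the next answer describes, changing it typically means recreating /var/lib/docker):
docker info | grep "Base Device Size"
docker daemon --storage-opt dm.basesize=20G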

Set disk quota with DeviceMapper

I have changed the storage driver to devicemapper. docker info gives the following output.
Server Version: 1.9.0
Storage Driver: devicemapper
Pool Name: docker-253:1-16-pool
Pool Blocksize: 65.54 kB
Base Device Size: 107.4 GB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.821 GB
Data Space Total: 268.4 GB
Data Space Available: 11.66 GB
Metadata Space Used: 2.101 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.90 (2014-09-01)
Execution Driver: native-0.2
First of all, I don't know how to set a quota per container. Should I maybe use flags in docker run commands?
With devicemapper as the storage driver, you cannot set a disk size per container; the size is fixed and the same for every container. As your docker info output suggests, that fixed size is currently around 100GB (the Base Device Size). However, you have one of the following 2 options depending on your requirement.
a.) You can change this fixed size from 100GB to some other value like 20GB, but in that case all containers will have a fixed disk size of 20GB. If you want to go ahead with this option, follow these steps:
Stop docker service, sudo service docker stop
Remove the existing docker directory (which in your case is the default one, i.e. /var/lib/docker) -- NOTE: this will delete all your existing docker images and containers.
Start docker daemon with option docker daemon -s devicemapper --storage-opt dm.basesize=20G
Or, in place of step 3, add the option DOCKER_OPTS='-g /var/lib/docker -s devicemapper --storage-opt dm.basesize=20G' in the file /etc/default/docker and restart the docker service
sudo service docker start
Now, whatever containers you spawn will have a disk size of 20GB.
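A quick way to verify the new limit took effect (the image name is a placeholder) is to check the base size reported by the daemon and the root filesystem size inside a fresh container:
docker info | grep "Base Device Size"
docker run --rm my_image df -h /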
b.) As a second option, you can increase the disk size of your existing containers beyond whatever base disk size you have set (which by default is 100GB, or 20GB if you follow the first option). To do this, there is a very useful article you can follow. This may help in letting you set a different disk size for different containers.
Hope this answer is useful for your requirement, thanks.

Docker - Disk Quotas

I understand that docker containers have a maximum of 10GB of disk space with the Device Mapper storage driver by default.
In my case, I have a worker container and a data volume container. I then link them together using --volumes-from. I also use the Device Mapper storage driver.
For example, I did a test with a script that downloads a 20GB file onto my data volume container, and that worked successfully.
My question, is there a 10GB quota for the data volume container? How did the 20GB file download successfully if it is limited by 10GB?
Volumes are outside of the Union File System by definition, so any data in them will not count towards the devicemapper 10GB limit. By default volumes are stored under /var/lib/docker/vfs if you don't specify a mount point.
You can find the exactly where your volumes are on the host by using the docker inspect command e.g:
docker inspect -f {{.Volumes}} CONTAINER
You will get a result like:
map[/CONTAINER/VOLUME:/var/lib/docker/vfs/dir/5a6f7b306b96af38723fc4d31def1cc515a0d75c785f3462482f60b730533b1a]
Where the path after the colon is the location of the volume on the host.
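Note that on newer Docker releases the Volumes field has been replaced by Mounts, so the equivalent inspect command would be along the lines of:
docker inspect -f '{{ json .Mounts }}' CONTAINER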
