Docker: How to store images and metadata on another filesystem? - docker

This issue has been really giving me grief and I would appreciate some help.
Running Docker 1.10.3 on a vanilla CentOS 7.1 box, I have two filesystems: a 15 GB /dev/vda1 where my root and /var/lib live, and a 35 GB /dev/vdc1 mounted on /mnt, where I would like to put my Docker volumes/image data and metadata. This is for administration and management purposes, as I am expecting the number of containers to grow.
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 15G 1.5G 13G 11% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.3M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vdc1 35G 49M 33G 1% /mnt/vdc1
tmpfs 385M 0 385M 0% /run/user/0
Despite all my attempts, Docker keeps installing and defaulting to placing the data space and metadata space on the 15 GB root volume. I have tried many solutions, including http://collabnix.com/archives/5881 , "How to change the docker image installation directory?", and more, all with no luck... basically either the Docker instance does not start at all, or it starts with its default settings.
I would like some help with either the settings required for data and metadata to be stored on /mnt/vdc1, or with installing Docker as a whole on that drive.
Thanks in advance, bf!

--graph is only one flag. There are also --exec-root and $DOCKER_TMPDIR, which are used to store files as well.
DIR=/mnt/vdc1
export DOCKER_TMPDIR=$DIR/tmp
dockerd -D -g $DIR --exec-root=$DIR
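To make those flags survive a reboot on a systemd-based host like this CentOS 7 box, one option is a drop-in that overrides the unit's ExecStart (a sketch; the drop-in filename and the dockerd path are illustrative, so check where your package installed the binary):

```shell
# Sketch: persist the data-location flags via a systemd drop-in
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/data-location.conf <<'EOF'
[Service]
Environment=DOCKER_TMPDIR=/mnt/vdc1/tmp
ExecStart=
ExecStart=/usr/bin/dockerd -g /mnt/vdc1 --exec-root=/mnt/vdc1
EOF
systemctl daemon-reload && systemctl restart docker
```

The empty `ExecStart=` line is required: it clears the packaged start command before the override replaces it.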

Related

how to convert centos server to docker base image

We have a server which hosts a number of apps. I am exploring a possibility where we can create and upload a base image every time we deploy a new app on the server. Is this a valid approach, and is it possible?
root#sl2o2app301:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg01-root 58G 4.6G 54G 8% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 28K 7.8G 1% /dev/shm
tmpfs 7.8G 835M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 247M 152M 96M 62% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/1518
root#sl2o2app301:/etc$ cat redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
This is an invalid approach if you want to use docker.
Docker is more like an isolated running app (a container), not an operating system.
But Docker is not a silver bullet, and in some cases you need to use golden images for server management.
For creating golden server images you may use Packer for production and/or Vagrant for local development images.
However, you may still want to use a dockerized approach. In this case, you need to split your server into a set of Docker containers: one application plus its dependencies per container. This would be a valid usage of the tool.
If you want to run them all together with one command, it is worth evaluating docker-compose.
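For example, a minimal docker-compose.yml splitting a web app from its database might look like this (a sketch; the image names, port, and volume name are placeholders, not taken from the question):

```yaml
version: "2"
services:
  app:
    image: myorg/myapp:latest   # placeholder application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

Running `docker-compose up -d` in the directory containing this file would then start both containers together.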

Give docker more diskspace for containers

I have a question. Our Docker server was out of space for its containers, so I gave it a bigger disk, from 500 GB to 1 TB (it's a VM). Ubuntu sees this correctly. If I run the command vgs I get this output:
VG #PV #LV #SN Attr VSize VFree
Docker-vg 1 2 0 wz--n- 999.52g 500.00g
But Docker still thinks it's out of space. I have rebooted the Docker VM, but it still thinks it's out of space. If I use the df -h command, this is the output:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 792M 8.6M 783M 2% /run
/dev/mapper/Docker--vg-root 490G 465G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvda1 472M 468M 0 100% /boot
As you can see, Docker-vg still thinks it's 490 GB.
I don't know where to look. Can someone help me?
You still need to extend your logical volume and resize the filesystem to use the larger logical volume.
First, extend the logical volume with lvextend (I'm not sure whether it accepts /dev/mapper paths; if not, you can run lvdisplay to list your logical volumes):
lvextend -l +100%FREE /dev/mapper/Docker--vg-root
With ext*fs you can then run a resize:
resize2fs /dev/mapper/Docker--vg-root
The command is similar for xfs:
xfs_growfs /dev/mapper/Docker--vg-root
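A small sketch tying the steps together: after lvextend, pick the grow command based on the filesystem type (the grow_cmd helper is made up for illustration; ext* filesystems resize via the device node, while xfs grows via the mount point):

```shell
#!/bin/sh
# Print the command that grows the filesystem on a device, by fs type.
grow_cmd() {
  fstype=$1; dev=$2; mnt=$3
  case "$fstype" in
    ext2|ext3|ext4) echo "resize2fs $dev" ;;
    xfs)            echo "xfs_growfs $mnt" ;;
    *)              echo "unsupported filesystem: $fstype" >&2; return 1 ;;
  esac
}

# Typical use after lvextend (fs type reported by blkid):
#   fstype=$(blkid -o value -s TYPE /dev/mapper/Docker--vg-root)
#   $(grow_cmd "$fstype" /dev/mapper/Docker--vg-root /)
```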
With "docker system prune" you can free some space by removing old images and other unused data.
If you want your container to be aware of the disk size change, you have to:
docker rmi <image>
docker pull <image>

Dokku/Docker out of disk space - How to enter app

So my question is: I have an errant Rails app deployed using Dokku with the default DigitalOcean setup. This Rails app has eaten all of the disk space, as I did not set up anything to clean out the /tmp directory.
So the output of df is:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 1506176 0 1506176 0% /dev
tmpfs 307356 27488 279868 9% /run
/dev/vda1 60795672 60779288 0 100% /
tmpfs 1536772 0 1536772 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 1536772 0 1536772 0% /sys/fs/cgroup
/dev/vda15 106858 3419 103439 4% /boot/efi
tmpfs 307352 0 307352 0% /run/user/0
So I am out of disk space, but I don't know how to enter the container to clean it. Any dokku **** command returns /home/dokku/.basher/bash: main: command not found
Access denied, which I have found out is because I am completely out of HD space.
So 2 questions.
1: How do I get into the container to clear the /tmp directory?
2: Is there a way to set a max disk size limit so Dokku doesn't eat the entire HD again?
Thanks
Dokku uses Docker to deploy your application; you are probably accumulating a bunch of stale Docker images, which over time can take up all of your disk space.
Try running this:
docker image ls
Then try removing unused images:
docker system prune -a
For more details, see: https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes
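Before pruning, it can help to see what is actually consuming the space. A small sketch (the biggest helper is made up for illustration; the path shown is Docker's default data directory):

```shell
#!/bin/sh
# List the largest entries directly under a directory, biggest first.
biggest() {
  dir=$1; count=${2:-10}
  du -sk "$dir"/* 2>/dev/null | sort -rn | head -n "$count"
}

# e.g. see which of Docker's subdirectories (images, containers, volumes) dominate:
biggest /var/lib/docker 5
```

Once the daemon has enough free space to respond again, `docker system df` gives a similar per-category summary.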

How can I use another directory to save built Docker containers?

I am using Vagrant with Docker provision.
The issue is that when I run my Docker Compose setup, I fill up my VM disk space.
Here is what my file system looks like:
Filesystem Size Used Avail Use% Mounted on
udev 476M 0 476M 0% /dev
tmpfs 97M 3.1M 94M 4% /run
/dev/sda1 9.7G 2.2G 7.5G 23% /
tmpfs 483M 0 483M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 483M 0 483M 0% /sys/fs/cgroup
tmpfs 97M 0 97M 0% /run/user/1000
vagrant_ 384G 39G 345G 11% /vagrant
vagrant_www_ 384G 39G 345G 11% /vagrant/www
How can I configure Docker or Vagrant to use the /vagrant directory?
(By the way, I have not loaded Docker yet... this is why it's not at 100% disk usage.)
You can try to reconfigure the Docker daemon as documented here: https://docs.docker.com/engine/reference/commandline/dockerd/#options. Use the -g parameter to change the root runtime path of the Docker daemon.
--graph, -g /var/lib/docker Root of the Docker runtime
As long as you are working on a local disk or SAN, this is a proper way to change the location of the Docker data, including the images. But be aware: do not use NFS or another type of network share, because this won't work due to the heavy use of file locks. Somewhere on GitHub there is an issue about this.
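On newer Docker releases the -g/--graph flag was renamed to --data-root, and the same setting is usually made declaratively in /etc/docker/daemon.json (a sketch; the path below is a placeholder, and per the caveat above it should point at a local disk, not a synced/shared folder):

```json
{
  "data-root": "/mnt/docker-data"
}
```

After editing the file, restart the daemon (e.g. `systemctl restart docker`) for the new location to take effect.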

Docker container drive does not match available hard drive space on host

I have loaded a new custom image into a remote RedHat 7 Docker host instance. When running a new container, the container does not attempt to use the entire disk. The following is the output of df -h in the container:
rootfs 9.8G 9.3G 0 100% /
/dev/mapper/docker-253:0-67515990-5700c262a29a5bb39d9747532360bf6a346853b0ab1ca6e5e988d7c8191c2573
9.8G 9.3G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/resolv.conf
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/hostname
/dev/mapper/vg_root-lv_root
49G 25G 25G 51% /etc/hosts
tmpfs 1.9G 0 1.9G 0% /proc/kcore
tmpfs 1.9G 0 1.9G 0% /proc/timer_stats
But the host system has much more space:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root 49G 25G 25G 51% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/vg_root-lv_home 9.8G 73M 9.7G 1% /home
/dev/sda1 497M 96M 402M 20% /boot
It seems as if Docker is assigning the 9.8 GB of the /home mapping to the entire drive of the container. So I am wondering: is there a reason I am seeing this?
The Problem
I was able to resolve this problem. The issue was not related to the volume that was being mounted to the container (i.e., it was not mounting the home volume as the root volume of the container). The problem occurred because Docker uses device-mapper on RedHat to manage the filesystems of its containers. By default, containers start with 10 GB of space. On most Debian-based versions of Linux Docker uses AUFS to manage container filesystems, but RedHat uses device-mapper instead.
The Solution
Luckily, the device-mapper size is configurable in Docker. First, I had to stop my service and remove all of my images/containers. (NOTE: there is no coming back from this, so back up all images as needed.)
sudo service docker stop && sudo rm -rf /var/lib/docker
Then, start up the docker instance manually with the desired size parameters:
sudo docker -d --storage-opt dm.basesize=[DESIRED_SIZE]
In my case, I increased my container size to 13G:
sudo docker -d --storage-opt dm.basesize=13G
Then, with Docker still running, pull/reload the desired image and start a container; its size should now match the desired size.
Next, I set my docker systemd service file to startup with the desired container size. This is required so that the docker service will start the containers up with the desired size. I edited the OPTIONS variable in the /etc/sysconfig/docker file. It now looks like this:
OPTIONS='--selinux-enabled --storage-opt dm.basesize=13G'
Finally, restart the docker service:
sudo service docker restart
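Once the daemon is back up, the effective size can be confirmed from docker info, which on devicemapper-backed daemons reports a "Base Device Size" line. A tiny helper to pull out the value (the base_size function name is made up for illustration):

```shell
#!/bin/sh
# Extract the devicemapper base size from `docker info` output on stdin.
base_size() {
  awk -F': *' '/Base Device Size/ {print $2}'
}

# On the docker host:
#   docker info | base_size    # expect a value matching dm.basesize
```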
References
[1] https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/ - This is how I discovered RedHat uses device-mapper, and that device-mapper has a 10G limit.
[2] https://docs.docker.com/reference/commandline/cli/ - Found the storage options in dockers documentation.
