Is there a way to install Docker on a specific volume?
When I install Docker on Amazon Linux with the following command:
sudo yum install docker
and then start the docker service using:
sudo service docker start
it creates two data spaces:
Data file: /dev/loop0
Metadata file: /dev/loop1
How can I place those spaces on a given volume, such as /mnt/docker?
Those are device files, and they will always be in /dev (actually not always, but let's assume so for the sake of simplicity here). loop0 and loop1 are loop devices that are backed by the actual Docker data files. You can easily see this using losetup -l:
> losetup -l
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0 0 0 1 0 /var/lib/docker/devicemapper/devicemapper/data
/dev/loop1 0 0 1 0 /var/lib/docker/devicemapper/devicemapper/metadata
What you might want to do (depending on your file system layout) is to move the Docker runtime directory somewhere else (the default is /var/lib/docker; all Docker volumes and images are stored there). For this, you can supply the -g flag to the Docker daemon.
On CentOS/Fedora/RHEL (and probably also on Amazon Linux, since it's based on RHEL), you can modify the /etc/sysconfig/docker file for this (look for an OPTIONS variable). On Ubuntu/Debian, /etc/default/docker is the place to look.
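For example, a minimal sketch of such an edit, assuming /mnt/docker is the directory you want (note that on newer daemons the -g flag has been renamed --data-root):
OPTIONS="-g /mnt/docker"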
I was able to get docker to store all of its data (containers and their data volumes) at a different place in the file system (my EBS volume) by editing /etc/sysconfig/docker
which has the line:
OPTIONS="--default-ulimit nofile=1024:4096"
I added the -g option, as documented here:
OPTIONS="--default-ulimit nofile=1024:4096 --graph=/home/ec2-user/myvolume"
where myvolume is the directory where I mounted my EBS volume. Of course you need to stop and restart the docker daemon for this to take effect.
This is on Amazon Linux. Apparently the docker config file is /etc/default/docker on some Linuxes.
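For completeness, on Amazon Linux the stop/start cycle looks like this:
sudo service docker stop
sudo service docker start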
Related
I have a Docker container running on my PC. The main functionality of the container is to scrape data, and this accumulates 0.3 GB/day. I'll only need this data for the last 30 days; after that I plan to archive it on hard disk drives for historical purposes. However, after a few hours of trial and error, I've failed to create a Docker volume on another partition: the _data folder always appears in the /var/lib/docker/volumes/<volume_name> folder, while the partition drive stays empty.
I also tried creating the volume with docker run -v, but it still creates the volume in the main volumes folder.
The operating system is Pop!_OS 20.04 LTS
I'll provide data about the partition:
In the case of Docker volumes, you don't have control over where Docker saves its volumes; all you can do is change the Docker root directory. So it's better to mount your new partition under a directory and then change the Docker root directory to that mount point. This way you can achieve what you want. Also consider that, by doing this, all of your Docker data will be stored on the new partition.
To change your Docker root directory, first create a file named daemon.json at the path below:
/etc/docker/daemon.json
and then add the config below to it:
{
"data-root": "/path/to/new/directory"
}
Then restart the Docker daemon:
systemctl restart docker
Then you can run the command below to check the current Docker root directory:
docker info
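For example, to pull out just the relevant line (recent Docker versions print a "Docker Root Dir" entry in the info output):
docker info | grep "Docker Root Dir"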
If you are using Docker volumes, all your volume data is stored in the default location (/var/lib/docker), but you can change this in the /etc/docker/daemon.json config file:
{
...
"data-root": "/your/path/here",
...
}
Restart the Docker service to apply the changes.
Docker daemon configuration file documentation here.
Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use Docker with Linux containers, I highly recommend not using Docker for Windows at all and instead starting the entire Docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine, which starts a boot2docker instance inside a VirtualBox VM with a shared folder on /c/Users, from which you can mount volumes into your containers. The permissions of said volumes are what this question is about. The VMs are stored under /c/Users/tom/.docker/
I chose the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the hypervisors.
Original question
I am currently trying to set up phpMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found: Cannot call chown inside Docker container (Docker for Windows) and thought this might be somewhat related but it appears to apply only to MongoDB.
This is my docker-compose.yml
version: "3"
services:
  pma:
    image: (secret company registry)/phpmyadmin
    ports:
      - 9090:80
    volumes:
      - /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and cd into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently.
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245: chmod 655 config.inc.php
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my user home, but then VBox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
I had the same problem of not being able to change ownership even after using chown. As I researched, it was because of NTFS volumes being mounted inside an ext filesystem. So I used another approach.
Volumes internal to Docker are free from these problems, so you can mount your file on an internal Docker volume and then create a hard link to that file inside your local folder, wherever you want:
sudo ln "$(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>)/<my_file>" <absolute_path_of_destination>
This way you can have your files in the desired place, inside Docker and without any permission issues, and you will be able to modify the contents of the file just as with a normal volume mount, thanks to the hard link.
Here is a working implementation of this process which mounts and links a directory. In case you want to know the details, see the possible fix section in the issue.
EDIT
Steps to implement this approach:
Mount the concerned file in an internal docker volume (also known as a named volume).
Before making the hard link, make sure the volume and the concerned file are present. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run which creates the required file and exits:
docker run --rm \
-v "<Project_name>_<volume_name>:/absolute/path" \
<image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required file. Here, <project_name> is the name of the project; by default it is the name of the folder in which the project is present, and <volume_name> is the same one we want to use in our original container. <image> can be the same image already being used in your original containers.
Create a hard link in your OS to the actual file location on your system. You can find the file location by running docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name> and appending /<my_file> to the output. Linux users can use ln in the terminal and Windows users can use mklink in the command prompt.
In step 3 we have not used /absolute/path, since the volume's mountpoint already refers to that location; we just need to refer to the file.
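To make this concrete, here is a hypothetical end-to-end sketch; the project name myproject, volume name pma_config, file name config.inc.php, and the debian image are illustrative stand-ins, not taken from the question:
docker run --rm -v "myproject_pma_config:/config" debian bash -c "touch /config/config.inc.php"
sudo ln "$(docker volume inspect --format '{{ .Mountpoint }}' myproject_pma_config)/config.inc.php" ./config.inc.php
Note that a hard link requires the Docker root directory and the destination to be on the same filesystem.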
Try one of the following:
If you can rebuild the image (secret company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
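A sketch of that rebuild and push, with the placeholder standing in for your real registry path:
docker build -t (secret company registry)/docker-stretchimal-apache2-php7-pma .
docker push (secret company registry)/docker-stretchimal-apache2-php7-pma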
Alternatively, try to exec into the container as the root user explicitly:
docker exec -it -u root [container] bash
I'd like to control whether Docker operates on persistent storage, or on persistent storage overlayed with a volatile one.
This is because my filesystem is on an SD card (Raspberry Pi) and it needs to last long. I mostly want to operate on a read-only filesystem (ext4) overlayed with tmpfs (and run containers on it), but when I detect that an update is available, I want to unmount the overlayfs, switch the filesystem to read-write, update the image, then switch everything back to the tmpfs-overlayed read-only filesystem.
# mv /var/lib/docker /var/lib/docker~
# mkdir -p /var/lib/docker /tmp/docker /tmp/work
# mount -t overlay -o lowerdir=/var/lib/docker~,upperdir=/tmp/docker,workdir=/tmp/work overlay /var/lib/docker
# docker daemon --storage-driver devicemapper
I tried two storage drivers: overlay2 and devicemapper (loop). The former refuses to work on an overlayfs underlying filesystem (the documentation also mentions that this is not supported); the latter consumes all my memory, and then Docker gets killed by the OS. The behaviour is the same on the Raspberry Pi and on my PC.
The only storage driver that should work is vfs, but from what I have read it is very inefficient with storage (no copy-on-write), so it is of no use to me.
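For reference, forcing vfs would look like this in the old daemon syntax used above (worth trying only if storage efficiency doesn't matter):
# docker daemon --storage-driver vfs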
Now I'm trying the aufs storage driver with an overlayfs backing filesystem (the Docker documentation doesn't state that this is disabled). I hope it will work, but it has a disadvantage: aufs is not supported by the mainline Linux kernel.
Is there some other way to switch between the two filesystems? Or could the SD card be spared in some completely different way (e.g. by running in-memory containers)?
EDIT: Sorry, this finally DOES NOT WORK! The Docker daemon starts, but it is unable to create containers. This is the error:
Handler for POST /v1.24/containers/create returned error: error creating aufs mount to /var/lib/docker/aufs/mnt c549130a63857658f8675fd84296afae46293a9f7ae54e9ee04e83c231db600f-init: invalid argument
The aufs storage driver with an overlayfs backing filesystem works. For now it seems to be the only option; however, I'm not satisfied with the solution, because it looks like a hack to me, and because aufs is not in the mainline kernel, I needed to compile the kernel myself.
This is how I did it (it's quite a hack, please advise me on how to do it better):
on my PC:
$ git clone https://github.com/p4l1ly/rpi-kernel
$ cd rpi-kernel
$ vagrant up
...wait quite a long time...
$ vagrant ssh
$ cp /var/kernel_build/results/kernel-20161003-100112/rpi2_3/kernel7.img /vagrant/
$ exit
$ sudo cp kernel7.img /mnt
then on the SD card:
# mv /var/lib/docker /var/lib/docker~
# mkdir -p /var/lib/docker /tmp/docker /tmp/work
# mount -t overlay -o lowerdir=/var/lib/docker~,upperdir=/tmp/docker,workdir=/tmp/work overlay /var/lib/docker
# docker daemon --storage-driver aufs
I want to use a Docker image in my work process. For example, I want to use larryprice/sass to compile my SASS files to CSS. This image is pretty simple:
FROM ruby:2.2
RUN gem install sass
WORKDIR /tmp
ENTRYPOINT ["sass", "--watch", "/src"]
I'm using Windows 10, Docker 1.11 and VirtualBox 5.0.16.
My project files are placed on a work SSD, which is mapped to logical drive D:
D:\Projects\Foo\Bar\web\sass
So, my problem is the following: when I attach a volume to the container from drive D: (via $PWD or by full path in MINGW style, /D/Projects/Foo/Bar/web/sass), e.g.
cd /D/Projects/Foo/Bar/web
docker run --name sass -v $PWD/sass:/src --rm larryprice/sass
the container can't see any SASS files:
$ docker exec -i -t sass /bin/bash
root#541aabac9ceb:/tmp# ls -al /src/
total 4
drwxr-xr-x 2 root root 40 May 3 13:05 .
drwxr-xr-x 50 root root 4096 May 3 13:05 ..
But when I mount a volume from the system disk (C:), everything works fine:
$ docker run --name sass -v ~/sass:/src --rm larryprice/sass
[Listen warning]:
Listen will be polling for changes. Learn more at https://github.com/guard/listen#polling-fallback.
>>> Sass is watching for changes. Press Ctrl-C to stop.
>>> New template detected: ../src/test.sass
write /src/test.css
write /src/test.css.map
How can I mount volumes from any place I need in Windows? Or what am I doing wrong in my case?
P.S. Adding a leading slash to the path also does not work:
docker run --name sass -v //d/Projects/Foo/Bar/web/sass:/src --rm larryprice/sass
Okay. Finally I found an explanation and a solution for my own question. This solution will work for both Windows and Mac OS X (because both of them use VirtualBox to run Docker).
The source of the problem consists of two points:
By default, the VirtualBox VM has limited access to the host filesystem (proof). In my case it has access to the users folder on drive C: via a VBox shared folder (screen). Thanks to this, I can use volume mappings like ~/sass:/src (or the full path /c/users/dbykadorov/sass). Unfortunately, this configuration does not allow me to use any path outside /c/users/.
Solution for this point: add another shared folder to the VM, pointed at the directory you need. I created a new share, d:/Projects (screen). Reboot your VM.
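If you prefer the command line over the VirtualBox GUI, the equivalent would be roughly the following, assuming your docker-machine VM is named default (run it while the VM is powered off):
VBoxManage sharedfolder add default --name "d/Projects" --hostpath "D:\Projects" --automount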
I hope that this alone completes your case. In my case, however, VirtualBox did not mount the new shared folder at system startup, so I got a second problem:
VirtualBox does not mount the additional shared folder that I just added.
Additional solution:
Let's try to mount the shared folder manually. Log into the VM by any available means. In the console:
# Create mount point directory
$ mkdir -p /d/Projects
# Mount shared folder
$ mount -t vboxsf d/Projects /d/Projects
Okay, this does the trick! Now I can mount any project directory (within D:\Projects)!
But... when I reboot my VM, the mount point will disappear =( Now we need to make our mount point more persistent, as described here:
# Make a file bootlocal.sh
$ touch /var/lib/boot2docker/bootlocal.sh
# Edit it
$ vi /var/lib/boot2docker/bootlocal.sh
# Add the following lines here:
#!/bin/sh
mkdir -p /d/Projects
mount -t vboxsf d/Projects /d/Projects
# Save the file and reboot VM
Important note: to make volume creation clearer, it is a good idea to mount the shared folder at the same path as on the host. E.g. if we need to create volumes from E:\Foo\Bar\Baz (/e/Foo/Bar/Baz in MINGW style), then we should add a new shared folder for E:\Foo\Bar\Baz and mount it exactly at /e/Foo/Bar/Baz in the Docker VM.
That is all.
We have noticed that our containers are taking up a lot of space; one of the reasons for this is the images.
We would like to move the images.
I know right now they are stored in
/var/lib/docker/graph/<id>/layer
Is there a way to move these to another location/persistent disk?
To move images to another drive or another server:
docker save image_name > image_name.tar
mv image_name.tar /somewhere/else/
Load it back into docker
docker load < image_name.tar
Reference.
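As a variation, you can skip the intermediate file and stream the image straight to another server (assuming SSH access and Docker installed on the far end):
docker save image_name | ssh user@otherhost docker load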
Here's an easy way to move Docker's data:
sudo service docker stop
sudo mv /var/lib/docker /a/new/location
sudo ln -s /a/new/location /var/lib/docker # Create a symbolic link
sudo service docker start
No need to change DOCKER_OPTS or use -g /path.
You can always mount /var/lib/docker to a different disk. Otherwise, you can start the daemon with -g /path in order to tell docker to use a different directory for storage.
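A minimal sketch of the mount approach; the device name /dev/xvdf is an assumption, so substitute your own disk, and copy any existing contents of /var/lib/docker across first:
sudo service docker stop
sudo mount /dev/xvdf /var/lib/docker
sudo service docker start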
Using the answer by @creack, I did the following on my Ubuntu install to move my entire Docker images/containers folder to a new location/disk. The great thing about doing this is that any new images I install will then use the new disk location.
First stop the docker service:
sudo service docker stop
Then move the docker folder from the default location to your target location:
sudo mv /var/lib/docker /thenewlocation
Then edit the /etc/default/docker file, inserting/amending the following line which provides the new location as an argument for the docker service:
DOCKER_OPTS="-g /thenewlocation/docker"
Restart the docker service:
sudo service docker start
This worked 100% for me: all my images remained intact.