I want to test docker in Swarm mode.
I've created 3 Ubuntu-server Virtual Machines.
On each VM I've installed docker.
The next step that I would like to accomplish is to share the /var/lib/docker/volumes folder among the 3 docker nodes.
The first solution I tried was to mount /var/lib/docker/volumes as a remote sshfs volume. That failed because when the docker service starts it runs chown on /var/lib/docker/volumes, and the chown fails on the sshfs mount.
Then I tried glusterfs. I succeeded in configuring gluster to share the same folder across the 3 nodes (now if I create a file in /var/lib/docker/volumes on the first node, I can see the new file on the other 2 nodes as well).
Then I started docker on the first node without any problem. But if I try to start docker on the second node, I get this error:
Error starting daemon: error while opening volume store metadata database: timeout
I assume the error occurs because the first node has acquired a lock on the file /var/lib/docker/volumes/metadata.db.
How can I solve this problem?
Is there an alternative to use glusterfs to share the docker volumes folder?
Thank you
You shouldn't share this directory at all; it will probably lead to data corruption. Since you already have glusterfs configured, you can mount the gluster directory into a container by adding the flag -v /path/to/gluster/mount:/path/in/container to docker run. Files written to that path in the container will then be shared among the gluster nodes. Another option is to use a dedicated volume driver for this; try searching for 'docker volume drivers' in your favorite search engine.
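For example (a minimal sketch, assuming the gluster volume is already mounted at /mnt/gluster on every node; the paths and image name are placeholders):
# plain docker run with the gluster mount bound into the container
docker run -d -v /mnt/gluster/appdata:/data my-image
# or the same idea as a swarm service
docker service create --name myservice \
  --mount type=bind,source=/mnt/gluster/appdata,target=/data \
  my-image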
I have a Docker container running on my PC. Its main job is to scrape data, which accumulates at about 0.3 GB/day. I only need the last 30 days of data live; after that I plan to archive it to hard disk drives for historical purposes. However, after a few hours of trial and error, I have failed to create a Docker volume on another partition: the _data folder always appears under /var/lib/docker/volumes/<volume_name>, while the partition remains empty.
I also tried creating the volume with docker run -v, but it still creates the volume in the main volumes folder.
The operating system is Pop!_OS 20.04 LTS
I'll provide data about the partition:
With docker volumes you don't have control over where docker saves its volumes; all you can do is change the docker root directory. So it's better to mount your new partition under a directory and then change the docker root directory to that mount point. This way you can achieve what you want. Also consider that, by doing this, all of your docker data will be stored on the new partition.
To change your docker root directory, first create a file named daemon.json at the path below:
/etc/docker/daemon.json
and then add the config below to it:
{
"data-root": "/path/to/new/directory"
}
Then restart the docker daemon:
systemctl restart docker
Then you can run the command below to check the current docker root directory:
docker info
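For example, the output includes a "Docker Root Dir" line, so you can print just that line to confirm the change took effect:
docker info | grep "Docker Root Dir"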
If you are using docker volumes, all your volume data is stored in the default location (/var/lib/docker), but you can change that in the /etc/docker/daemon.json config file:
{
...
"data-root": "/your/path/here",
...
}
Restart the docker service to apply the changes.
See the Docker daemon configuration file documentation for details.
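If you already have images or volumes under /var/lib/docker, you will probably want to copy the existing data to the new location before restarting; a rough sketch (the target path is a placeholder):
# stop the daemon, copy the old data root, then start again
sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /your/path/here/
sudo systemctl start docker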
So I have this remote folder /mnt/shared mounted with fuse. It is mostly available, except for occasional disconnections.
The mounted folder /mnt/shared becomes available again once the connection is re-established.
The issue is that I expose this folder to my app through a docker volume, as /shared. When I start the container, the volume is available.
But if a disconnection happens in between, then even though /mnt/shared is available again on the host machine, the /shared folder is no longer accessible from the container, and I get:
user@machine:~$ docker exec -it e313ec554814 bash
root@e313ec554814:/app# ls /shared
ls: cannot access '/shared': Transport endpoint is not connected
To get it to work again, the only solution I have found is docker restart e313ec554814, which causes downtime for my app and hence is not acceptable.
So my questions are:
Is this somehow a docker "bug", in that it does not reconnect to the mounted folder when it becomes available again?
Can I trigger the reconnection manually, without having to restart the whole container?
Thanks
I would try the following solution.
If you currently mount the volume into the container like so:
docker run -v /mnt/shared:/shared my-image
I would create an intermediate directory /mnt/base/shared and mount it to docker like so:
docker run -v /mnt/base/shared:/base/shared my-image
and I would also adjust my code to refer to the new path, or create a link from /base/shared to /shared inside the container.
Explanation:
The problem is that the mounted directory /mnt/shared is probably deleted on the host machine when the disconnection happens, and a new directory is created once the connection is back. But the container was started with a mapping to the old directory, which has since been deleted. By creating an intermediate directory and mapping to that instead, you avoid this mapping issue.
Another solution that might work is to mount the directory using bind-propagation=shared
e.g:
--mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared
See the docker docs explaining bind-propagation.
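A fuller sketch of that approach (the image name is a placeholder; the host path must sit under a mount with shared propagation, which is usually already the case on systemd-based distros):
# check the propagation of the root mount (should show shared or rshared)
findmnt -o TARGET,PROPAGATION /
docker run -d \
  --mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=rshared \
  my-image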
I am trying to run a build job in Jenkins on docker nodes, but the build requires some data that is present on another NFS share, so I need this NFS share to be mounted on the build containers at job run time. How do I supply the NFS share information to the docker container?
I followed these steps:
In the docker image, created the directory where the NFS share should be mounted
copied the fstab entries so they are present in the container
added RUN mount -a, which fails when I do that
Any suggestions please.
For now, what I have done is use the Add Volume section under the container settings in Jenkins to map the volume. But I am still looking for a way to mount my NFS shares on the dynamically created containers.
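One option worth exploring here (a sketch, not from the original post) is docker's built-in local volume driver, which can mount an NFS export when the container starts; the server address, export path, and image name below are placeholders:
# create a named volume backed by the NFS export
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.0.10,rw \
  --opt device=:/exported/build-data \
  nfs-build-data
# use it like any other named volume
docker run -v nfs-build-data:/data my-build-image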
First-time docker user here, running on a Raspberry Pi 3 (Hypriot OS). I have an external hdd attached to my raspberry pi to store all the files. The OS is on the sdcard.
I am setting up many images on docker: sonarr, radarr, emby server and bittorrent client.
I have created all the containers following the instructions on their docker hub pages, so I attached all of the folders using bind mounts (-v /some/path:/some/path).
Now the documentation says a volume is better because it doesn't depend on the host's directory structure. Also, I am having problems because I want to hardlink between files on my external hdd, but with separate bind mounts, hardlinking from one mount to another on the same hdd doesn't seem to work. I think using only one bind mount should solve this, but I just want to get the config right now.
Is a volume an option to store all the movies, or should I keep using bind mounts?
In case of a volume, can I specify that the movies should be stored on the external hdd? I have docker installed on the sdcard but I need the movies on my external hdd.
I have used docker volume create --name something -o device=/myhddmount/ but I am not sure this is right, because docker volume inspect shows a mountpoint on the sdcard. Also, when I create the volume, should I set -o type=ext4? According to the manual, ext4 doesn't have a device= option.
Thanks!
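One pattern that may be relevant here (a sketch, not from the original thread): the local driver can bind a named volume to an existing directory on the external hdd, so the data lands on the hdd even though the volume metadata stays under /var/lib/docker; the paths and image name are placeholders:
# named volume backed by a directory on the external hdd
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/externalhdd/movies \
  movies
docker run -v movies:/movies some-image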
I'm new to docker and docker-compose.
I'm trying to run a service using docker-compose on my Raspberry PI. The data this service uses is stored on my NAS and is accessible via samba.
I'm currently using this bash script to launch the container:
sudo mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
docker-compose up --force-recreate -d
Where the docker-compose.yml file simply creates a container from an image and binds its own local /home/test folder to the /mnt/test folder on the host.
This works perfectly fine when launched from the script. However, I'd like the container to restart automatically when the host reboots, so I specified 'always' as the restart policy. After a reboot, though, the container starts automatically without anyone having mounted the remote folder, and as a result the service does not work correctly.
What would be the best approach to solve this issue? Should I use a volume driver to mount the remote share (I'm on an ARM architecture, so my choices are limited)? Is there a way to run a shell script on the host when starting the docker-compose process? Should I mount the remote folder from inside the container?
Thanks
What would be the best approach to solve this issue?
As @Frap suggested, use systemd units to manage the mount and the service, and the dependencies between them.
This document discusses how you could set up a Samba mount as a systemd unit. Under Raspbian, it should look something like:
[Unit]
Description=Mount Share at boot
After=network-online.target
Wants=network-online.target
Before=docker.service
[Mount]
What=//192.168.0.60/test
Where=/mnt/test
Options=credentials=/etc/samba/creds/myshare,rw
Type=cifs
TimeoutSec=30
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Place this in /etc/systemd/system/mnt-test.mount, and then:
systemctl daemon-reload
systemctl enable mnt-test.mount
systemctl start mnt-test.mount
The After=network-online.target and Wants=network-online.target lines should cause systemd to wait until the network is available before trying to access this share. The Before=docker.service line will cause systemd to launch docker only after this share has been mounted. The RequiredBy=docker.service line means that if you start docker.service, this share will be mounted first (if it wasn't already), and that if the mount fails, docker will not start.
This is using a credentials file rather than specifying the username/password in the unit itself; a credentials file would look like:
username=test
password=test
You could just replace the credentials option with username= and password=.
Should I mount the remote folder from inside the container?
A standard Docker container can't mount filesystems. You can create a privileged container (by adding --privileged to the docker run command line), but that's generally a bad idea (because that container now has unrestricted root access to your host).
I finally "solved" my own issue by defining a script to run in the /etc/rc.local file. It will launch the mount and docker-compose up commands on every reboot.
Being just 2 lines of code and not dependent on any particular Unix flavor, it felt to me like the most portable solution, barring a docker-only solution that I was unable to find.
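For reference, a rough sketch of those two lines as they would appear in /etc/rc.local (rc.local runs as root; the compose file path is a placeholder):
# mount the samba share, then bring the service up
mount -t cifs -o user=test,password=test //192.168.0.60/test /mnt/test
docker-compose -f /path/to/docker-compose.yml up --force-recreate -d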
Thanks all for the answers