Rclone has a bucket mounted at the host directory /home/user/rclone. I want to access the contents of this directory inside the Nextcloud Docker instance, so I bind mount it to /var/www/html/data. With the shared propagation option, any changes made in the container should be reflected on the host, and vice versa.
I have set the permissions of /home/user/rclone to 777, and the contents are visible with an ls command from the host. But once the Docker container is restarted, an ls command from within the container shows no files, even though rclone is still running properly.
I suspect that because the volume nextcloud is mounted at /var/www/html, the bind mount at /var/www/html/data is covered up.
So I picked another directory inside the container, namely /mnt, and tried that. Still no files show up with an ls command.
My Nextcloud docker-compose file (mysql has nothing to do with this; showing only the /var/www/html/data mount version):
version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxx
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    network_mode: npm_default
    container_name: db

  app:
    image: nextcloud:latest
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - /home/user/rclone:/var/www/html/data:shared
    environment:
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=xxx
    network_mode: npm_default
    container_name: nextcloud
Another way of putting it:
rclone cloud storage --> host --> docker --> nextcloud external storage
It turns out the reason the Nextcloud service cannot view the contents is a permission problem.
If you run docker exec -it nextcloud bash to check the contents, they are there, because you are root.
So the proper solution, if you want to use a shared bind mount with a host directory, is to set the permissions so that "others" inside the container can read the files, e.g. 666 on the files (directories additionally need the execute bit to be listable).
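A quick way to confirm the diagnosis, as a minimal sketch (the stock nextcloud image runs PHP as www-data, UID 33):

# Visible as root (the default exec user):
docker exec nextcloud ls -la /var/www/html/data
# A permission problem shows up when listing as the web server user:
docker exec -u www-data nextcloud ls -la /var/www/html/data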
But in the end I figured out that volume plugins are a much better solution, so this approach is somewhat deprecated.
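For reference, a hedged sketch of that volume-plugin route; the plugin name and options below assume rclone's own Docker volume plugin and may need adjusting for your remote:

# Install the plugin and create a volume backed by the rclone remote
docker plugin install rclone/docker-volume-rclone --alias rclone --grant-all-permissions
docker volume create rclone-bucket -d rclone -o remote=myremote:mybucket
# The volume can then be mounted in compose like any other named volume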
I have a docker-compose file to build a web server with Django and a Postgres database. It basically looks like this:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    image: backend
    volumes:
      - ./api:/app
    ports:
      - 8000:8000
    depends_on:
      - postgres
      - redis
    environment:
      - PYTHONUNBUFFERED=1
  postgres:
    image: kartoza/postgis:11.0-2.5
    volumes:
      - pg_data:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: "gis,backend"
      POSTGRES_PORT: "5432"
      POSTGRES_USER: "user"
      POSTGRES_PASS: "pass"
      POSTGRES_MULTIPLE_EXTENSIONS: "postgis,postgis_topology"
    ports:
      - 5432:5432
  redis:
    image: "redis:alpine"
volumes:
  pg_data:
I'm using a volume to make my data persistent.
I managed to run my containers and add data to the database. A volume has successfully been created, as docker volume ls shows:
DRIVER    VOLUME NAME
local     server_pg_data
But this volume is empty, as the output of docker system df -v shows:
Local Volumes space usage:
VOLUME NAME        LINKS    SIZE
server_pg_data     1        0B
Also, if I need to rebuild the containers using docker-compose down and docker-compose up, the data is purged from my database. Yet I thought that volumes were used to make data persist on disk…
I must be missing something in the way I'm using Docker and volumes, but I don't get what:
why does my volume appear empty while there is some data in my postgres container?
why does my volume not persist after doing docker-compose down?
This thread (How to persist data in a dockerized postgres database using volumes) looked similar, but the solution does not seem to apply.
The kartoza/postgis image isn't configured the same way as the standard postgres image. Its documentation notes (under "Cluster Initializations"):
By default, DATADIR will point to /var/lib/postgresql/{major-version}. You can instead mount the parent location like this: -v data-volume:/var/lib/postgresql
If you look at the Dockerfile in GitHub, you will also see that parent directory named as a VOLUME, which has some interesting semantics here.
With the setting you show, the actual data will be stored in /var/lib/postgresql/11.0; you're mounting the named volume on a different directory, /var/lib/postgresql/data, which is why it stays empty. Changing the volume mount to just /var/lib/postgresql should address this:
volumes:
  - pg_data:/var/lib/postgresql:rw # not .../data
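To verify where the data actually lives, you can list the parent directory inside the running container (service name as in the compose file above):

docker-compose exec postgres ls /var/lib/postgresql
# expect a versioned directory such as 11.0 -- that is the real DATADIR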
I have 2 containers in a compose file, and I want to serve the app's static files through nginx.
I have read this: https://stackoverflow.com/a/43560093/7522096 and want to use a host directory shared between the app container and the nginx container; for some reason I don't want to use a named volume.
===
Using a host directory

Alternately you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).

It's the same, except you don't define a volume in Docker, instead mounting the external directory.

version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
===
My docker-compose file:
version: "3.7"
services:
app:
container_name: app
restart: always
ports:
- 8888:8888
env_file:
- ./env/app.env
image: registry.gitlab.com/app/development
volumes:
- ./public/app/:/usr/app/static/
- app-log:/root/.pm2
nginx:
container_name: nginx
image: 'nginx:1.16-alpine'
restart: always
ports:
- 80:80
- 443:443
volumes:
- /home/devops/config/:/etc/nginx/conf.d/:ro
- /home/devops/ssl:/etc/nginx/ssl:ro
- ./public/app/:/etc/nginx/public/app/
depends_on:
- app
volumes:
# app-public:
app-log:
Yet when I do this in my compose file, the directory always comes up empty in nginx, and the static files in my app container disappear too.
Please help; I have tried a lot of ways but cannot figure it out.
Thanks.
During the initialization of the container, Docker binds the ./public/app directory on the host to the /usr/app/static/ directory in the container.
If ./public/app does not exist, it is created. The bind goes from the host to the container, meaning that the content of the ./public/app folder is what appears inside the container (it is mounted over whatever was there), not vice versa. That's why after the initialization the static app directory is empty.
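You can reproduce this masking with a minimal experiment (image and paths chosen only for illustration):

mkdir -p ./empty
docker run --rm -v "$PWD/empty:/usr/share/nginx/html" nginx:1.16-alpine ls /usr/share/nginx/html
# prints nothing: the empty host directory hides the index.html shipped in the image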
If my understanding is correct, your goal is to share the application files between the app container and nginx.
Taking the above into consideration, the solution is to create the files in the bind-mounted directory after it has been mounted. Here is an example of the relevant parts:
version: "3"
services:
app:
image: ubuntu
volumes:
- ./public/app/:/usr/app/static_copy/
entrypoint: /bin/bash
command: >
-c "mkdir /usr/app/static;
touch /usr/app/static/shared_file;
mv /usr/app/static/* /usr/app/static_copy;
rm -r /usr/app/static;
ln -sfT /usr/app/static_copy/ /usr/app/static;
exec sleep infinity"
nginx:
image: 'nginx:1.16-alpine'
volumes:
- ./public/app/:/etc/nginx/public/app/
depends_on:
- app
This will move the static files to the static_copy directory and link them back to /usr/app/static. Those files will be shared with the host (the public/app directory) and the nginx container (/etc/nginx/public/app/). Adapt it to fit your needs.
Alternatively, you can of course use named volumes, as sketched below.
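A minimal sketch of that alternative, reusing the app-public name that is commented out in your compose file; unlike a bind mount, a freshly created (empty) named volume is seeded with the files already present at the mount point in the image, which is the behaviour you want here:

version: "3.7"
services:
  app:
    image: registry.gitlab.com/app/development
    volumes:
      - app-public:/usr/app/static/
  nginx:
    image: 'nginx:1.16-alpine'
    volumes:
      - app-public:/etc/nginx/public/app/
    depends_on:
      - app
volumes:
  app-public: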
Hope it helps
I persist a container's data to a volume (not a bind mount), and I wonder how I can inspect this data later. For example, let's say that I use something like this to run a WordPress site:
docker-compose.yml:
services:
  wordpress:
    volumes:
      - wordpress-files:/var/www/html
volumes:
  wordpress-files:
Is it possible to start another container (based on Alpine or something) that would mount the same volume and also expose it to my host OS (macOS – I'm using Docker for Mac)? Something like this (pseudocode):
services:
  wordpress:
    image: wordpress
    volumes:
      - wordpress-files:/var/www/html
  wordpress-files-inspector:
    volumes:
      - wordpress-files:/tmp:host
volumes:
  wordpress-files:
It's possible to exec into a temporary container but I'd like to make the files available to my local filesystem so that I can use my local tools to browse them. Note that primarily, the files need to live in a named volume (for performance and other reasons) so it cannot be a bind mount like ./my-local-path:/var/www/html.
Why don't you just use Samba? Like this:
services:
  wordpress:
    image: wordpress
    volumes:
      - wordpress-files:/var/www/html
  wordpress-files-inspector:
    image: dperson/samba
    command: sh -c "samba.sh -s \"mount;/mount\""
    volumes:
      - wordpress-files:/mount
volumes:
  wordpress-files:
You can then inspect the IP address of the wordpress-files-inspector container (or give the container a static IP) and mount the share into your host OS.
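For example (how you then mount the share depends on your host's SMB client):

# Print the inspector container's IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wordpress-files-inspector
# On macOS: Finder > Go > Connect to Server > smb://<that-ip>/mount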
I found that you can use named volumes so that two containers can exchange data between them. However, I need this named volume to be stored on my host computer (the computer which is running the Docker images).
So how do I create a volume that is stored in /media/my_volume and is also shared between containers? I tried simply bind mounting /media/my_volume into both containers, but it ended up with everything being erased when I started the compose again.
UPDATE:
version: '3'
services:
  transmission:
    build: ./rpi-transmission
    image: rpi-transmission
    ports:
      - "9091:9091"
      - "51413:51413"
      - "51413:51413/udp"
    volumes:
      - "/home/pi/transmission:/etc/transmission"
      - "/media/external:/home/downloads"
      - "/home/transmission-watch:/home/transmission-watch"
  samba:
    build: ./rpi-samba
    image: rpi-samba
    stdin_open: true
    volumes:
      - "/media/external:/data/share:ro"
  kodi:
    build: ./kodi-rpi
    image: kodi-rpi
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9777:9777/udp"
    devices:
      - "/dev/tty0:/dev/tty0"
      - "/dev/tty2:/dev/tty2"
      - "/dev/fb0:/dev/fb0"
      - "/dev/input:/dev/input"
      - "/dev/snd:/dev/snd"
      - "/dev/vchiq:/dev/vchiq"
    volumes:
      - "/var/run/dbus:/var/run/dbus"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "/home/pi/kodi-rpi/config:/config/kodi"
      - "/home/pi/kodi-rpi/data:/data"
I need to use /media/external in both containers. If I give it a name, I can't mount it at /media/external. If I simply leave it as it is now, I think samba erases the content of transmission.
The content isn't actually erased from the container, though; it is "masked", because the mounted directory is mounted on top of the existing files. The files are still in the container, just not reachable.
Un-mounting the volume reveals the content again: it is inaccessible only while the volume is mounted over it.
The volume already has a path on the host, inside /var/lib/docker (or whatever directory you have configured as the graph path):
$ docker volume create test
test
$ docker volume inspect -f '{{.Mountpoint}}' test
/var/lib/docker/volumes/test/_data
If you want it to appear at /media/my_volume, you can do a bind mount:
mount --bind /var/lib/docker/volumes/test/_data /media/my_volume
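Alternatively, a sketch of a middle ground (assuming /media/my_volume already exists on the host): the local volume driver can create a named volume that is itself a bind to a host path, giving you both a compose-friendly name and the host location:

docker volume create --driver local \
  --opt type=none --opt o=bind \
  --opt device=/media/my_volume \
  my_volume
# then reference it from compose as an external volume:
# volumes:
#   my_volume:
#     external: true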
I have a docker-compose file in a local folder on my Mac. I also have another folder, /src, which should act as the root element. The docker-compose file looks like this:
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - fpm
      - db
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
I understand what we are doing here, but I am missing the part where /src is taken as the root, and I think I need to set up an lsync service which syncs between my local folder and the Docker container. So I found the following, but it is not working properly: the root /src is not taken into account. I just want to type localhost in my browser and it should serve the /src folder.
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
    links:
      - sync
    volumes_from:
      - sync
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
    links:
      - sync
    volumes_from:
      - sync
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - sync
    volumes_from:
      - sync
  sync:
    image: zeroboh/lsyncd
    volumes:
      - /var/www/html
      - ./src:/src:Z
      - ./docker-config/nginx:/etc/nginx/conf.d
      - /var/lib/php/session
      - ./docker-config/lrsync/lrsync.lua:/etc/lrsync/lrsync.lua
      - ./sync:/sync
What I do understand is that every service links the sync service into it. What I do not understand is why every service needs a volumes_from entry, and what the syntax in sync explicitly means. Can somebody help me set this up correctly?
Thanks
volumes_from imports volumes from another container
By default, each container has no volumes. You can define local volumes using the volumes attribute, but the volumes are only used in that container. In order for other containers to make use of them, those containers must import the volumes using volumes_from, pointing to the name of one or more containers. All volumes in those named containers are then made available in the current container.
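A stripped-down illustration of the mechanics (service names are illustrative; note that volumes_from only exists in compose file format versions 1 and 2):

version: '2'
services:
  sync:
    image: zeroboh/lsyncd
    volumes:
      - /var/www/html        # anonymous volume defined on this container
  nginx:
    image: nginx:stable
    volumes_from:
      - sync                 # imports every volume above, at the same paths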
The Z volume label indicates a private volume
You are mounting the /src volume using this:
volumes:
  - ./src:/src:Z
That's fine, except you are also using volumes_from, and your question indicates that you specifically wanted to share /src. But by using the Z label, you have told Docker to make this a private volume.
From the documentation:
Volume labels
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
In this case, "current container" is sync, so only that container may use the volume. The others may not use it.
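If the intent is to share /src between sync and the other containers, the minimal change suggested by the documentation quoted above is the lowercase z suffix:

sync:
  image: zeroboh/lsyncd
  volumes:
    - ./src:/src:z   # lowercase z: shared SELinux label, usable by all containers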