Docker - how to set up compose for local webserver

I have a docker-compose file in a local folder on my Mac. I also have another folder, /src, which should act as the root element. The docker-compose file looks like this:
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - fpm
      - db
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
I understand what we are doing here, but I am missing the part where /src is taken as the root. I think I need to set up an lsyncd service which syncs between my local folder and the docker container. So I found the following, but it is not working properly - the root /src is not taken into account. I just want to type localhost in my browser and have it open the /src folder.
version: '2'
services:
  fpm:
    image: sbusso/php-fpm-ion
    links:
      - sync
    volumes_from:
      - sync
  db:
    image: orchardup/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myproject
    links:
      - sync
    volumes_from:
      - sync
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    links:
      - sync
    volumes_from:
      - sync
  sync:
    image: zeroboh/lsyncd
    volumes:
      - /var/www/html
      - ./src:/src:Z
      - ./docker-config/nginx:/etc/nginx/conf.d
      - /var/lib/php/session
      - ./docker-config/lrsync/lrsync.lua:/etc/lrsync/lrsync.lua
      - ./sync:/sync
What I do understand is that every service links the sync service into it. What I do not understand is why every service needs a volumes_from, and what the suffixes in the sync volume list (such as :Z) mean. Can somebody help me set this up correctly?
Thanks

volumes_from imports volumes from another container
By default, each container has no volumes. You can define local volumes using the volumes attribute, but the volumes are only used in that container. In order for other containers to make use of them, those containers must import the volumes using volumes_from, pointing to the name of one or more containers. All volumes in those named containers are then made available in the current container.
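For example, a minimal sketch reusing the service names from the question - volumes_from imports every volume the named container defines, whether it comes from a volumes entry or from the image itself:
version: '2'
services:
  sync:
    image: zeroboh/lsyncd
    volumes:
      - /var/www/html   # anonymous volume declared in sync
      - ./src:/src      # bind mount declared in sync
  nginx:
    image: nginx:stable
    volumes_from:
      - sync            # nginx now sees /var/www/html and /src as well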
The Z volume label indicates a private volume
You are mounting the /src volume using this:
volumes:
  - ./src:/src:Z
That's fine, except you are also using volumes_from, and your question indicates that you specifically wanted to share /src. But by using the Z label, you have told Docker to make this a private volume.
From the documentation:
Volume labels
Labeling systems like SELinux require that proper labels are placed on volume content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, Docker does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The z option tells Docker that two containers share the volume content. As a result, Docker labels the content with a shared content label. Shared volume labels allow all containers to read/write content. The Z option tells Docker to label the content with a private unshared label. Only the current container can use a private volume.
In this case, "current container" is sync, so only that container may use the volume. The others may not use it.
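If the goal is to share /src with the other containers, the fix is the lowercase z label. A sketch of the corrected sync service from the question (only the label changes):
sync:
  image: zeroboh/lsyncd
  volumes:
    - /var/www/html
    - ./src:/src:z   # lowercase z: shared label, other containers may use the content
    - ./docker-config/nginx:/etc/nginx/conf.d
    - /var/lib/php/session
    - ./docker-config/lrsync/lrsync.lua:/etc/lrsync/lrsync.lua
    - ./sync:/sync
(If you are not on an SELinux host, you can drop the label entirely.)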


Content of docker bind mount is not showing inside the container

EDITED:
Rclone has a bucket mounted to the host directory /home/user/rclone. I want to access the contents of this directory inside the nextcloud docker instance, so I bind mount it to /var/www/html/data. With the shared option, any changes made in the container are reflected on the host, and vice versa.
I have set the permissions of /home/user/rclone to 777, and the content is visible with an ls command from the host. But once the docker container is restarted, an ls command from within the container does not show any files. Rclone is still running properly.
I suspect that because the named volume nextcloud is mounted at /var/www/html, the bind mount at /var/www/html/data is covered up.
So I picked another directory inside the container, namely /mnt, and tried that instead. Still no files show up with an ls command.
My nextcloud docker-compose (the mariadb service does not have anything to do with this; showing the /var/www/html/data mount version only):
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxx
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    network_mode: npm_default
    container_name: db
  app:
    image: nextcloud:latest
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - /home/user/rclone:/var/www/html/data:shared
    environment:
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=xxx
    network_mode: npm_default
    container_name: nextcloud
Another way of putting it:
rclone cloud storage --> host --> docker --> nextcloud external storage
So the reason the nextcloud service cannot view the content is a permission problem.
If you run docker exec -it nextcloud bash to check the contents, they are there, because you are root.
So the proper solution, if you want to use a shared bind mount with the host directory, is to set the permissions to 666 so that others in the container can view the files.
But in the end I figured out that volume plugins are a much better solution, so this approach is somewhat deprecated.
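For illustration, the checks described above could look roughly like this (www-data as the web server user is an assumption about the stock nextcloud image; adjust to your setup):
$ ls -ld /home/user/rclone                                   # check host-side permissions
$ docker exec -it nextcloud ls /var/www/html/data            # runs as root: files are visible
$ docker exec -u www-data nextcloud ls /var/www/html/data    # runs as the web user: may show nothing
$ chmod -R o+rX /home/user/rclone                            # let others read files and traverse directories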

How can I store data with Docker Compose containers?

I have this docker-compose.yml with a Postgres database and Grafana running on top of it to make queries on the data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to persist the created volumes and reuse them with the up/down commands, even when the containers are recreated? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres, and some other type of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting the volume to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container to not store the actual data folder.
My question is: does every image have to follow a particular pattern like the one above? Is the path to the data that should be persisted documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume, with the added convenience of having the files easily accessible from outside the container as well. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). You do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
PS. I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the docker compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names isn't usually necessary.
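On the backup question: because named volumes live on the host, they can also be archived generically, without image-specific export tools. A minimal sketch (compose prefixes volume names with the project name, so check docker volume ls for the exact name; the alpine image and archive path are arbitrary choices):
$ docker volume ls                                     # find the real volume name, e.g. myproject_postgres
$ docker run --rm -v myproject_postgres:/data -v "$PWD":/backup alpine \
    tar czf /backup/postgres-backup.tar.gz -C /data .  # archive the volume contents to the current directory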

Docker docker-compose volumes delete sibling folders

This is what my docker-compose file looks like:
version: '3.3'
services:
  portal:
    ports:
      - '8080:8080'
      - '8000:8000'
    environment:
      - 'revcycle.portal.logger.root=C:/tomcat/logs/'
    volumes:
      - /src/main/webapp/sampleFiles:/usr/local/tomcat/webapps/portal/sampleFiles:rw
    container_name: portal
    image: 'portal:latest'
docker-compose up creates the container successfully, but when I check the content of the tomcat webapp, all the other sibling folders of sampleFiles are deleted.
Am I missing something with the volumes command?
The same happens when I use the IntelliJ IDEA docker plugin's bind mounts in the run configuration.
It should be like this:
volumes:
  - /src/main/webapp/sampleFiles:/usr/local/tomcat/webapps/portal/sampleFiles
As far as I know, rw is for cases when you use volume drivers and similar.
Also make sure that /src/main/webapp/sampleFiles is the host folder that contains what you need in the docker container, because it will essentially be mapped into the container and will replace the target folder.
This way, the siblings of /usr/local/tomcat/webapps/portal/sampleFiles should stay intact. If not, try starting without the volumes part and verify that you see the siblings.
Don't forget to run docker-compose down and docker-compose up -d after you change anything in the docker-compose.yaml file.
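To verify, something along these lines should work (the container name portal comes from the compose file above):
$ docker-compose down && docker-compose up -d              # recreate the container with the fixed volume entry
$ docker exec portal ls /usr/local/tomcat/webapps/portal   # the sibling folders should be listed again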

Docker-compose shared volume between containers but that has a path in host

I found that you can use named volumes so two containers can exchange data between them. However, I need this named volume to be stored on my host computer (the computer which is running the docker images).
So how do I create a volume that is stored in /media/my_volume and is also shared between containers? I tried simply bind mounting /media/my_volume into both containers, but everything was erased when I started the compose again.
UPDATE:
version: '3'
services:
  transmission:
    build: ./rpi-transmission
    image: rpi-transmission
    ports:
      - "9091:9091"
      - "51413:51413"
      - "51413:51413/udp"
    volumes:
      - "/home/pi/transmission:/etc/transmission"
      - "/media/external:/home/downloads"
      - "/home/transmission-watch:/home/transmission-watch"
  samba:
    build: ./rpi-samba
    image: rpi-samba
    stdin_open: true
    volumes:
      - "/media/external:/data/share:ro"
  kodi:
    build: ./kodi-rpi
    image: kodi-rpi
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9777:9777/udp"
    devices:
      - "/dev/tty0:/dev/tty0"
      - "/dev/tty2:/dev/tty2"
      - "/dev/fb0:/dev/fb0"
      - "/dev/input:/dev/input"
      - "/dev/snd:/dev/snd"
      - "/dev/vchiq:/dev/vchiq"
    volumes:
      - "/var/run/dbus:/var/run/dbus"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "/home/pi/kodi-rpi/config:/config/kodi"
      - "/home/pi/kodi-rpi/data:/data"
I need to use /media/external in both containers. If I give the volume a name, I can't mount it at /media/external. If I simply leave it as it is now, I think samba erases the content of transmission.
The content isn't actually erased from the container, though; it is "masked", because the mounted directory is mounted on top of the existing files. The files are still in the container, only not reachable.
Un-mounting the volume reveals the content that is still in the container (it was just not accessible because the volume was mounted over it).
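A quick way to observe this masking, assuming an empty host directory (the stock nginx image ships an index.html in that folder):
$ mkdir -p /tmp/empty
$ docker run --rm -v /tmp/empty:/usr/share/nginx/html nginx:stable ls /usr/share/nginx/html
# no output: the bind mount hides the image's files; the same ls without -v lists index.html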
It already has a path on the host, inside /var/lib/docker (or whatever directory you have configured as the graph path).
$ docker volume create test
test
$ docker volume inspect -f '{{.Mountpoint}}' test
/var/lib/docker/volumes/test/_data
If you want it to appear on /media/my_volume you can do a bind mount:
mount --bind /var/lib/docker/volumes/test/_data /media/my_volume
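Alternatively (a common pattern, though not part of the answer above), the local volume driver can bind a named volume to a host path, so compose manages the mount itself; this assumes /media/my_volume already exists on the host:
volumes:
  my_volume:
    driver: local
    driver_opts:
      type: none    # no filesystem type: we are binding an existing directory
      o: bind
      device: /media/my_volume

services:
  transmission:
    volumes:
      - my_volume:/home/downloads
  samba:
    volumes:
      - my_volume:/data/share:ro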

Share data between 2 containers

I am doing a Symfony project with Docker. In development, I mount my source folder into the Nginx and PHP-FPM containers. But for production, I want to put the code inside the PHP-FPM container to make an app container, and share the code with the Nginx container.
In my Dockerfile, I use VOLUME /var/www/html, but how can I let the nginx container access this volume (in the docker-compose file)?
Before v3, I know there was volumes_from, but it is not available anymore.
I want to place the code inside the container, as suggested here (https://docs.docker.com/compose/production/):
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside
Thanks a lot for your help
Finally, it appears we can use a named volume to do it: remove the VOLUME from the Dockerfile, then just define a named volume, and it is populated with the content of the first container that mounts it.
version: '3'
services:
  nginx:
    build: ./docker/nginx
    volumes:
      - app_data:/var/www/html:ro
    depends_on:
      - app
  app:
    build: ./
    volumes:
      - app_data:/var/www/html:rw
    networks:
      - default

volumes:
  app_data:
    driver: local
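One caveat worth noting (Docker's documented behavior, not part of the original answer): a named volume is seeded from the image's content only while the volume is empty, so after rebuilding the app image you may need to remove the volume for the new code to appear:
$ docker-compose build app
$ docker-compose down -v   # also removes app_data, so it gets re-seeded from the rebuilt image
$ docker-compose up -d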
