docker-compose: previous volume setting overrides changes - docker

I've been using a host directory as my data volume for a postgresql container. My docker-compose.yml reads something like this.
postgresql:
  image: postgres
  ports:
    - "5432:5432"
  container_name: postgresql
  networks:
    - mynet
  volumes:
    - pg-data:/var/lib/postgresql/data
volumes:
  pg-data:
    driver_opts:
      type: none
      device: /volumes/pgdata
      o: bind
As we wanted to consolidate all data into a single encrypted volume, I decided to remap the volume for this container.
volumes:
  pg-data:
    driver_opts:
      type: none
      device: /volumes/data/pgdata
      o: bind
I know that I need to remove the containers of this image and recreate the image, so I removed all containers and images. It appears that docker-compose is still remembering my old setting somewhere: when I try to rebuild the image (docker-compose up --build postgresql), I get the following error.
ERROR: for postgresql Cannot create container for service postgresql: error while mounting volume with options: type='none' device='/volumes/pgdata' o='bind': no such file or directory
It's still trying to use the old volume definition, whereas my new docker-compose.yml has no reference to that directory.
I'd appreciate some help resolving this. Am I missing a step here?

This turned out to be an issue in docker-compose, as per this post, and it is scheduled to be fixed in the next release. Meanwhile, the suggested workaround works:
In the meantime, you should be able to work around the issue by removing the existing volume (either docker volume rm -f <project>_pg-data or docker-compose down -v).
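For reference, a minimal cleanup sequence might look like this (assuming the compose project is named myproject, which is a placeholder, and that the new bind target needs to exist on the host):
# remove the stale named volume so docker-compose recreates it with the new driver_opts
docker-compose down -v              # or: docker volume rm -f myproject_pg-data
# the bind target must exist before the volume can be mounted
mkdir -p /volumes/data/pgdata
docker-compose up --build postgresql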

Related

I am trying to connect to an SMB share via Docker Compose and it will not give write permissions

I am new to Docker. I am attempting to run a Docker server on a Raspberry Pi and have the volumes point to a network share so I can avoid keeping persistent files directly on the Pi. I am using Docker Compose and am attempting to mount an SMB share from my Unraid server into the Docker containers using volumes.
This is how I have tried mounting the SMB shares:
volumes:
  downloads:
    driver_opts:
      type: cifs
      o: username=COOLUSERNAMEHERE,password=SUPERSECRETPASSWORD,vers=3
      device: //192.168.0.110/downloads
I then mount this volume inside the container as follows:
services:
  myservicename:
    image: theplaceigetmyimagefrom
    container_name: myservice
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Toronto
    volumes:
      - downloads:/downloads
    ports:
      - 1234:1234
    restart: unless-stopped
This volume mounts just fine; however, it mounts as read-only. No matter what I do, I cannot get it to have write permissions. Again, I am new to Docker, so I am probably trying the wrong things. I have tried adding privileged: true to the container, but that didn't change anything. The SMB share I am connecting to is set to public and shared, so there shouldn't be any user issues; however, I have also logged in with a user that has "read/write" access to that folder as set up in Unraid itself.
Any ideas on next steps would be appreciated.
Figured this out. I changed the version to 3.0 (not sure if that did anything at all), and I added the uid, the gid, and the rw option:
volumes:
  downloads:
    driver_opts:
      type: cifs
      o: username=COOLUSERNAMEHERE,password=SUPERSECRETPASSWORD,uid=1000,gid=1000,vers=3.0,rw
      device: //192.168.0.110/downloads
It now connects as expected.
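To double-check write access from inside the container, a quick test along these lines should work (myservice is the container name from the snippet above, the test file name is arbitrary, and the image is assumed to ship basic coreutils):
# create and then remove a scratch file on the mounted share
docker exec myservice touch /downloads/.write-test
docker exec myservice rm /downloads/.write-test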

Permission denied to Docker container accessing NFS share - Docker Compose

I have a problem mounting a WD MyCloud EX2 NAS as an NFS share for a Nextcloud and MariaDB container combination, using Docker Compose. When I run docker-compose up -d, here's the error I get:
Creating nextcloud_app_1 ... error
ERROR: for nextcloud_app_1 Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: for app Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: Encountered errors while bringing up the project.
Here's docker-compose.yml (all sensitive info replaced with <brackets>):
version: '2'
volumes:
  nextcloud:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.73,rw
      device: ":/mnt/HD/HD_a/nextcloud"
  db:
services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<****>
      - MYSQL_PASSWORD=<****>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - NEXTCLOUD_ADMIN_USER=<****>
      - NEXTCLOUD_ADMIN_PASSWORD=<****>
  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
I SSH'd into the NAS box to check /etc/exports and, sure enough, it was using all_squash, so I changed that.
Here's the /etc/exports file on the NAS box:
"/nfs/nextcloud" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
"/nfs/Public" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
Then I refreshed the exports with exportfs -a.
Nothing changed - docker-compose throws the same error. And I'm deleting all containers and images and redownloading the image every time I attempt the build.
I've read similar questions and done everything I can think of. I also know this is a container issue, because I can access the NFS share quite happily from the command line thanks to my settings in /etc/fstab.
What else should I be doing here?
In our case, we mount the NFS volume locally on the Docker host, then mount that folder inside the containers.
We are running Oracle Linux 7 with SELinux enabled.
We fixed it by adding the following parameter to the fs_mntops field in /etc/fstab (see https://man7.org/linux/man-pages/man5/fstab.5.html):
defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"
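As an illustration only, a complete fstab entry with that option might look like the line below (the server address, export path, and mount point are hypothetical placeholders, not values from this thread):
# NFS mount with an SELinux context that containers are allowed to use
<nas-ip>:/nfs/nextcloud  /mnt/nextcloud  nfs  defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"  0  0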
Try checking from the command line in the Nextcloud folder with "ls -l /var/www/html" to see which users and groups can access it.
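For example, assuming the container is named nextcloud_app_1 as in the error output above, the ownership can be inspected from the host with:
docker exec nextcloud_app_1 ls -l /var/www/html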
I fixed it by removing the anonuid=501,anongid=1000 entries in the NAS box's /etc/exports file. I had also managed to enter the wrong IP - the NAS box wasn't granting access to the Ubuntu computer that was trying to connect to it.
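For illustration, the corrected export line would then look something like this (the client address is a hypothetical placeholder for the Ubuntu machine's actual IP, which isn't given in the thread):
"/nfs/nextcloud" <ubuntu-client-ip>(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check)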

How fast do the files from a docker image get copied to a named volume after container initialization

I have a stack of containers that are sharing a named volume. The image that contains the files is built to contain code (multiple libraries, thousands of classes).
The issue I am facing is that when I deploy the stack to a docker swarm mode cluster, the containers initialize before the files are fully copied to the volume.
Is there a way to tell that the volume is ready and all files mounted have been copied? I would have assumed that the containers would only get created after the volume is ready, but this does not seem to be the case.
I have an install command that runs in one of the containers sharing that named volume and this fails because the files are not there yet.
version: '3.3'
services:
  php:
    image: code
    volumes:
      - namedvolume:/var/www/html
  web:
    image: nginx
    volumes:
      - namedvolume:/var/www/html
  install:
    image: code
    volumes:
      - namedvolume:/var/www/html
    command: "/bin/bash -c \"somecommand\""
volumes:
  namedvolume:
Or is there something I am doing wrong?
Thanks

Docker-compose recreating containers, lost data

In my attempt to extract some logs from a container, I edited my docker-compose.yml, adding an extra mount pointing to those logs.
After running docker-compose up and recreating the respective image, I found out that all of the log files were gone, as the container was completely replaced (something which is quite obvious to me now).
Is there a way to recover the old container?
Also: the docker volumes live under /var/lib/docker/volumes/; where are the root file systems of containers?
Here is a snippet of the docker-compose:
version: '3.3'
services:
  some_app:
    image: some_image:latest
    restart: always
    volumes:
      - some_image_logs:/var/log
volumes:
  some_image_logs: {}

Docker stack deploy rolling updates volume issue

I'm running Docker for a production PHP-FPM/Nginx application, and I want to use a docker-stack.yml and deploy to a swarm cluster. Here's my file:
version: "3"
services:
app:
image: <MYREGISTRY>/app
volumes:
- app-data:/var/www/app
deploy:
mode: global
php:
image: <MYREGISTRY>/php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
nginx:
image: <MYREGISTRY>/nginx
depends_on:
- php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
ports:
- "80:80"
volumes:
app-data:
My code is in the app container, built from the image in my registry.
I want to update my code with docker service update --image <MYREGISTRY>/app:latest, but it's not working: the code is not changed.
I guess it uses the local volume app-data instead.
Is it normal that the new container's data doesn't override the volume data?
Yes, this is the expected behavior. Named volumes are only initialized to the image contents when they are empty (the default state when first created). Updating the volume any time after that point would risk data loss from overwriting or deleting volume data that you explicitly asked to be preserved.
If you need the files to be updated with every new image, then perhaps they shouldn't be in a volume? If you do need these inside a volume, then you may need to create a procedure to update the volumes from the image, e.g. if this were a docker run, you could do:
docker run -v app-data:/target --rm <your_registry>/app cp -a /var/www/app/. /target/.
Otherwise, you can delete the volume, or simply remove all files from the volume, and restart your stack to populate it again.
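As a sketch of that last option, on a single-node swarm with a stack named mystack (both assumptions), deleting and repopulating the volume could look like:
# remove the stack, then delete the stale named volume
docker stack rm mystack
docker volume rm mystack_app-data
# redeploy; the empty volume is repopulated from the new image on first use
docker stack deploy -c docker-stack.yml mystack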
I was having the same issue, with app and nginx containers sharing the same volume. My current solution is a deploy script that runs
docker service update --mount-add <mount> <service>
for app and nginx after docker stack deploy. It forces the volume to be updated for the app and nginx containers.
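For concreteness, one way to express that for the stack above might be the following (mystack is an assumed stack name; the mount is removed and then re-added here, since a service can't carry two mounts at the same target):
docker service update --mount-rm /var/www/app mystack_app
docker service update --mount-add type=volume,source=mystack_app-data,target=/var/www/app mystack_app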
