docker-compose volume bug: file shows up in ls, but accessing it says the file doesn't exist - docker-volume

I'm trying to mount some certificate files from letsencrypt. They are sudo protected (need sudo access); however, since docker has sudo access, that shouldn't be the problem. When I bash into the container and go into the mounted folder, the files show up in the ls command, yet cat-ing the files tells me that said files don't exist. When I run the container normally, geoserver says that it can't find the certificate/private key files and generates its own self-signed certificates.
version: '3'
services:
  geoserver:
    container_name: geoserver
    image: "kartoza/geoserver:2.22.0"
    volumes:
      - ./geoserver-data:/opt/geoserver/data_dir
      - /etc/letsencrypt/live/geo.geplant.com.br:/etc/certs
    ports:
      - 0.0.0.0:8080:8080
      - 0.0.0.0:443:8443
    restart: always
    environment:
      - GEOSERVER_ADMIN_PASSWORD=
      - GEOSERVER_ADMIN_USER=
      - GEOSERVER_DATA_DIR=/opt/geoserver/data_dir
      - GEOWEBCACHE_CACHE_DIR=/opt/geoserver/data_dir/gwc
      - SSL=true
    healthcheck:
      test: curl --fail -s http://localhost:8080/ || exit 1
      interval: 1m30s
      timeout: 10s
      retries: 3
Inside the container's mounted volume:
[screenshot: ls lists the mounted certificate files]
Cat'ing the file:
[screenshot: cat reports the file does not exist]
I think this is some sort of protection going on, because the README file works just fine.

I found the answer. The problem was that the certificate files were only symlinks.
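A minimal sketch of one common fix, assuming the standard letsencrypt layout where live/ holds symlinks into archive/: mount the whole /etc/letsencrypt tree (read-only) so the links can still be resolved inside the container, then point the SSL config at the live/ path:

volumes:
  - ./geoserver-data:/opt/geoserver/data_dir
  # bind the parent tree so live/'s symlinks into archive/ stay valid in-container
  - /etc/letsencrypt:/etc/letsencrypt:ro

Alternatively, resolve the links on the host (e.g. readlink -f /etc/letsencrypt/live/geo.geplant.com.br/fullchain.pem) and mount the real files directly.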

Related

How can I mount my current host directory into a docker container with docker-compose?

Following the official docs and several blog posts about this topic, I try to mount my current host directory into a docker container as /var/www. However, it is always empty, and files placed in the host directory aren't visible or accessible inside the container.
This is my docker-compose.yml
version: '3.4'
services:
  php:
    build:
      context: .
    volumes:
      - .:/var/www
      - php_socket:/var/run/php
    working_dir: /var/www
    healthcheck:
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s
In the Dockerfile:
WORKDIR /var/www
RUN bin/console
will fail because the file does not exist, although it exists in the host directory.
Even when I run the intermediate container and run ls -l on the working dir (ls -l /var/www), it is empty:
docker run --rm 1be5bb736ee6 ls -l /var/www
I run Docker on an Ubuntu 20 machine.
How can I make shared volumes work?
Since the files you've shown don't allow us to reproduce your error, I've tried doing what you want to do using ubuntu:latest. This works and prints out the contents of the current directory on the host.
version: '3.3'
services:
  test:
    image: ubuntu:latest
    volumes:
      - .:/var/www
    command: ["ls", "/var/www"]

Containerizing Cordapp with Docker Image and Docker Compose

When running Corda in docker with external Postgres DB configurations, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6, PostgreSQL: 9.6
Docker engine: 20.10.6
docker-compose: version 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'
services:
  partyadb:
    hostname: partyadb
    container_name: partyadb
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: partyadb
    ports:
      - 5432
  partya:
    hostname: partya
    # image: corda/corda-zulu-java1.8-4.7:RELEASE
    image: corda/corda-zulu-java1.8-4.6:latest
    container_name: partya
    ports:
      - 10006
      - 2223
    command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
    volumes:
      - ./partya/node.conf:/etc/corda/node.conf:ro
      - ./partya/certificates:/opt/corda/certificates:ro
      - ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
      - ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
      # - ./partya/logs:/opt/corda/logs:rw
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
      - ./shared/cordapps:/opt/corda/cordapps:rw
      - ./shared/drivers:/opt/corda/drivers:ro
      - ./shared/network-parameters:/opt/corda/network-parameters:rw
    environment:
      - ACCEPT_LICENSE=${ACCEPT_LICENSE}
    depends_on:
      - partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D
The corda/corda-zulu-java1.8-4.6:latest image runs under the user corda, not root. This user has user id 1000 and is in a group called corda, also with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the docker container (./shared/additional-node-infos/nodeInfo-5B) does not have permissions set up in such a way as to allow this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give others read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
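As a quick, hedged sanity check, compare the numeric owner of the files on the host with the uid/gid 1000 used inside the image:

ls -ln ./shared/additional-node-infos   # -n shows numeric uid/gid instead of names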
So the error itself describes that it's a permission problem.
I don't know if you crafted this compose file yourself; you may want to take a look at generating the node setup with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically)
This permission problem could be that you're setting only read / write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works, then restrict your way back down to permissions you're comfortable with.
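A hedged example of that temporary loosening (restrict it again once things work):

chmod -R 777 ./shared   # wide open; for debugging only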
I just configure the image to be run as root. This works but may not be safe. Simply add
services:
  cordaNode:
    user: root
to the service configuration.
Ref: How to configure docker-compose.yml to up a container as root

Docker persisted volume has no permissions (Apache Solr)

My docker-compose.yml:
solr:
  image: solr:8.6.2
  container_name: myproject-solr
  ports:
    - "8983:8983"
  volumes:
    - ./data/solr:/var/solr/data
  networks:
    static-network:
      ipv4_address: 172.20.1.42
After bringing it up with docker-compose up -d --build, the solr container is down and the log (docker logs myproject-solr) shows this:
Copying solr.xml
cp: cannot create regular file '/var/solr/data/solr.xml': Permission denied
I've noticed that if I give full permissions to the data directory on my machine (sudo chmod 777 ./data/solr/ -R) and run Docker again, everything is fine.
I guess the issue comes from the fact that the solr user in the container does not exist on my machine, and Docker creates the data/solr folder as root:root. Since my ./data folder is gitignored, I cannot manage these folder permissions.
I'd like to know a workaround to manage permissions properly, with the purpose of persisting data.
It's a known "issue" with docker-compose: all files created by the Docker engine are owned by root:root. Usually it's solved in one of two ways:
Create the volume in advance. In your case, you can create the ./data/solr directory in advance, with appropriate permissions. You might make it accessible to anyone, or, better, change its owner to the solr user. The solr user and group ids are hardcoded inside the solr image: 8983 (Dockerfile.template)
mkdir -p ./data/solr
sudo chown 8983:8983 ./data/solr
If you want to avoid running additional commands before docker-compose, you can create an additional container which will fix the permissions:
version: "3"
services:
initializer:
image: alpine
container_name: solr-initializer
restart: "no"
entrypoint: |
/bin/sh -c "chown 8983:8983 /solr"
volumes:
- ./data/solr:/solr
solr:
depends_on:
- initializer
image: solr:8.6.2
container_name: myproject-solr
ports:
- "8983:8983"
volumes:
- ./data/solr:/var/solr/data
networks:
static-network:
ipv4_address: 172.20.1.42
There is a docker-compose-only solution :)
Problem
Docker mounts local folders with root permissions.
In Solr's docker image, the default user is solr - for a good reason: Solr commands should be run with this user (you can force them to run as root, but that is not recommended).
Most Solr commands require write permissions to /var/solr/, for data and logs storage.
In this context, when you run a solr command as the solr user, you are rejected because you don't have write permission to /var/solr/.
Solution
What you can do is first start the container as root to change the permissions of /var/solr/, then switch to the solr user to run all necessary Solr commands, and then start your Solr server.
In the example below, we use solr-precreate to create a default core and start solr.
version: '3.7'
services:
  solr:
    image: solr:8.5.2
    volumes:
      - ./mnt/solr:/var/solr
    ports:
      - 8983:8983
    user: root # run as root to change the permissions of the solr folder
    # Change permissions of the solr folder, create a default core and start solr as solr user
    command: bash -c "
      chown -R 8983:8983 /var/solr
      && runuser -u solr -- solr-precreate default-core"
Set with a Dockerfile
It's possibly not exactly what you wanted as the files aren't persisted when rebuilding the container, but it solves the 'rights' problem. Copy the files over and chown them with a Dockerfile:
FROM solr:8.7.0
COPY --chown=solr ./data /var/solr/data
This is more useful if you're trying to initialise a single core:
FROM solr:8.7.0
COPY --chown=solr ./core /var/solr/data/someCollection
It also has the advantage that you can create an image for reuse.
With a named volume
For persistence, you can also create a volume (in this case core) and copy the contents of a directory (also called core here), assigning the rights to the files on the way:
docker container create --name temp -v core:/data tianon/true || exit $?
tar -cf - --directory core --owner 8983 --group 8983 . | docker cp - temp:/data
docker rm temp
This was adapted from these answers:
https://github.com/moby/moby/issues/25245#issuecomment-365980572
https://stackoverflow.com/a/52446394
Then you can mount the named volume in your Docker Compose file:
version: '3'
services:
  solr:
    image: solr:8.7.0
    networks:
      - internal
    ports:
      - 8983:8983
    volumes:
      - core:/var/solr/data/someCollection
volumes:
  core:
    external: true
This solution persists the data without overriding the data on the host, and it doesn't need the extra build step. It can obviously be adapted for mounting the entire /var/solr/data folder, as sketched below.
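For reference, a hedged sketch of that adaptation: the same named volume, just mounted at the data directory itself:

volumes:
  - core:/var/solr/data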
It doesn't seem to matter that the mounted volume/directory doesn't have the correct rights (/var/solr/data/someCollection has owner root:root).

Docker compose error while creating mount source path

I have created a docker-compose.yml using Cloudestuary. After downloading it, putting it in my Laravel project folder, and running docker-compose up -d, the download takes place and then I get this message:
ERROR: for worker-1 Cannot start service worker-1: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for nginx Cannot start service nginx: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for app Cannot start service app: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for workspace Cannot start service workspace: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: Encountered errors while bringing up the project.
I'm on Ubuntu 17 and have even tried setting 777 on all folders and running it with sudo, but the result is the same. I have also tried moving the file and editing the volumes in the yml.
Here is my docker compose file:
version: '2'
services:
  nginx:
    image: 'cloudestuary/nginx:mainline-fpm'
    restart: always
    environment:
      CLIENT_MAX_BODY_SIZE: 100m
      DOCUMENT_ROOT: /var/www/html/public
      INDEX_FILE: index.php
      PHP_FPM: app
    networks:
      - app
    volumes:
      - './html:/var/www/html'
    ports:
      - '80:80'
  app:
    image: 'cloudestuary/php-fpm:7.1'
    restart: always
    environment:
      MAX_UPLOAD_FILE_SIZE: 100m
      APP_URL: 'http://lensin.localhost'
      APP_KEY: 'base64:2X9U1HiBdmfbwvZ4UkwUP/25svg7439HXKWL1F8Xn1c='
      DB_CONNECTION: mysql
      DB_HOST: mysql
      DB_PORT: '3306'
      DB_DATABASE: cloudestuary
      DB_USER: cloudestuary
      DB_PASSWORD: secret
    networks:
      - app
    volumes:
      - './html:/var/www/html'
  workspace:
    image: 'cloudestuary/php-workspace:7.1'
    restart: always
    ports:
      - '2222:22'
    environment:
      MAX_UPLOAD_FILE_SIZE: 100m
      APP_URL: 'http://lensin.localhost'
      APP_KEY: 'base64:2X9U1HiBdmfbwvZ4UkwUP/25svg7439HXKWL1F8Xn1c='
      DB_CONNECTION: mysql
      DB_HOST: mysql
      DB_PORT: '3306'
      DB_DATABASE: cloudestuary
      DB_USER: cloudestuary
      DB_PASSWORD: secret
      SSH_PASSWORD: xsKEVWXPrdAeg
    networks:
      - app
    volumes:
      - './html:/var/www/html'
  worker-1:
    image: 'cloudestuary/php-cli:7.1'
    restart: always
    networks:
      - app
    environment: { }
    volumes:
      - './html:/var/www/html'
    command: 'php artisan queue:work'
  mysql:
    image: 'mysql:5.7'
    restart: always
    networks:
      - app
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_PASSWORD: secret
      MYSQL_USER: cloudestuary
      MYSQL_DATABASE: cloudestuary
    volumes:
      - 'mysql-data:/var/lib/mysql'
volumes:
  mysql-data: { }
networks:
  app: { }
It's likely a pathing issue with Docker when installed with snap; you're better off installing it following the official documentation from Docker.
Remove docker from snap
snap remove docker
Remove the docker directory and the old version (it's okay if these don't already exist):
rm -R /var/lib/docker
sudo apt-get remove docker docker-engine docker.io
Install the official docker package: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Update: Since posting this answer, I've learnt that tools installed using snap are installed in a sandbox with limited permissions outside of that sandbox. This is likely the cause as docker won't have access to the external filesystem from its isolated sandbox environment.
Restart your docker service. Then the problem will be solved.
sudo systemctl restart docker
What led me here was the Kubernetes V1VolumeMount. When deploying my application I was getting the same error format:
ERROR: for <pod_name> Cannot start <service_name>: error while creating mount source path '<source_path>': mkdir <dir_path>: read-only file system
At the start I was thinking it was a permissions error as well, since the message is a bit misleading. It turned out that I was trying to mount something that didn't exist in the source image. Hence, my suggestion would be: verify that what you are trying to mount does exist; if it doesn't, you probably don't need that mount path.
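A hedged way to check from the host before starting compose (substitute your actual path from the error message; the placeholder is left as-is):

ls -ld '<source_path>'   # if this fails, there is nothing for the bind mount to attach to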
P.S. I saw that there wasn't an accepted answer, so I am hoping that my contribution is not causing unnecessary clutter.
Got this error, but the issue was that the source path was a symlink. For some reason docker does not seem to like it, even after restarting the service.
I had to use a real path, and then it worked just fine with Docker version 20.10.8 installed with snap.
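A hedged one-liner to resolve the link on the host (the path shown is hypothetical):

readlink -f ./my-symlinked-dir   # prints the real path; use that in the compose file instead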
For Docker on Windows 10, sometimes you just have to wait a while (1-5 min) before executing docker-compose up again.
Hope this will help someone else.

Mounted volume is empty inside container

I've got a docker-compose.yml like this:
db:
  image: mongo:latest
  ports:
    - "27017:27017"
server:
  image: artificial/docker-sails:stable-pm2
  command: sails lift
  volumes:
    - server/:/server
  ports:
    - "1337:1337"
  links:
    - db
server/ is relative to the folder of the docker-compose.yml file. However, when I docker exec -it CONTAINERID /bin/bash and check /server, it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in Docker Settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click "Apply") and re-enable it (and click "Apply"). In the process, you will be prompted for your PC's new password. After this, run your docker command (run or compose) again.
Try using:
volumes:
  - ./server:/server
instead of server/ -- there are some cases where Docker doesn't like the trailing slash.
As per docker volumes documentation,
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
I had similar issue when I wanted to mount a directory from command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container started successfully, but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So I created a symlink under c:\Users\<username> pointing to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
If you are using Docker for Mac then you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount. [screenshot: the File Sharing preferences pane]
I don't know if other people made the same mistake, but the host directory path has to start from /home.
So my mistake was that in my docker-compose I was wrongly specifying the following:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /Desktop/subfolder/subfolder2:/app/subfolder
when the host path should have been the full path from /home, something like:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
  ...
  volumeConsumingService:
    container_name: production-001-volumeConsumingService
    hostname: production-001-volumeConsumingService
    image: group/production-001-volumeConsumingService
    build:
      context: .
      dockerfile: volumeConsumingService.Dockerfile
    depends_on:
      - anotherServiceDefinedEarlier
    restart: always
    volumes:
      - ../data/certbot/conf:/etc/letsencrypt # mounting
      - ../data/certbot/www:/var/www/certbot # not mounting
      - ../data/www/public:/var/www/public # not mounting
      - ../data/www/root:/var/www/root # not mounting
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - default
      - external
  ...
networks:
  external:
    name: routing
A workaround that seems to be working is to enforce a restart on the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
In the case where the volumes are not mounted after a host reboot, adding a cron task that restarts the service once should do, as sketched below.
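A hedged sketch of such a cron entry (the one-minute sleep is an arbitrary grace period; the container name matches the service above):

@reboot sleep 60 && docker restart production-001-volumeConsumingService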
In my case, the volume was empty because I did not use the right path format, which is without quotes.
If you have a relative or absolute path with spaces in it, you do not need double quotes around the path; any path with spaces will be understood, since docker-compose uses ":" as the delimiter and does not split on spaces.
Ways that do not work (double quotes are the problem!):
volumes:
  - "MY_PATH.../my server":/server
  - "MY_PATH.../my server:/server" (I might have missed testing this, not sure!)
  - "./my server":/server
  - ."/my server":/server
  - "./my server:/server"
  - ."/my server:/server"
Two ways that do work (no double quotes!):
volumes:
  - MY_PATH.../my server:/server
  - ./my server:/server
