I stumbled across a problem with Docker volumes while starting containers (MariaDB, RabbitMQ, Maven) from a Docker Compose file. I start them simply with docker-compose up -d (WITHOUT sudo).
My volumes are defined like this:
...
volumes:
  - ./production/mysql:/var/lib/mysql:z
...
Everything works fine, and the ./production directory (where the volumes are mapped) is created.
But when I try to restart the containers with down/up, I get the following error:
error checking context: 'no permission to read from '…/production/mysql/aria_log.00000001'
When I checked the mentioned file, I saw that it is owned by root:root. This is because the file is generated by the root user inside the container. So I tried to use user namespaces as described in the docs.
The error still occurs anyway. Any ideas or references?
Thanks.
Docker Compose File:
version: '3.8'
services:
  mysql:
    image: mariadb:latest
    restart: always
    env_file:
      - config.env
    volumes:
      - ./production/mysql:/var/lib/mysql:z
    environment:
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
    networks:
      - testnetwork
networks:
  testnetwork:
The issue comes from the mapping between the host user/group IDs and the ones inside the container. One of the solutions is to use a named volume and avoid all this hassle, but you can also do the following:
Add user: ${UID}:${GID} to your service inside the docker-compose file.
Run UID=$(id -u) GID=$(id -g) docker-compose up. This way you make sure that the user in the container has the same UID/GID as the user on the host, so files created in the container get the proper permissions (see the sketch below).
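Applied to the compose file above, a minimal sketch of the relevant keys (only the user: line is new):

services:
  mysql:
    image: mariadb:latest
    user: ${UID}:${GID}
    volumes:
      - ./production/mysql:/var/lib/mysql:z

Then start it with (bash treats UID as a readonly shell variable, so prefixing with env is the safest way to pass it through):

$ env UID=$(id -u) GID=$(id -g) docker-compose up -d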
NOTE: Docker for Mac (using the osxfs driver) does this behind the scenes and you don't need to worry about users and groups.
Running the Docker daemon as a non-root user (rootless mode) can also be helpful for your purpose.
All the documentation is here.
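If you want to try rootless mode, a rough sketch of the usual setup (this assumes the dockerd-rootless-setuptool.sh helper shipped with recent Docker packages; check the rootless docs for prerequisites such as uidmap):

$ dockerd-rootless-setuptool.sh install
$ systemctl --user start docker
$ export DOCKER_HOST=unix:///run/user/$UID/docker.sock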
When running Corda in Docker with an external Postgres DB configuration, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6
PostgreSQL: 9.6
Docker Engine: 20.10.6
docker-compose: version 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'
services:
  partyadb:
    hostname: partyadb
    container_name: partyadb
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: partyadb
    ports:
      - 5432
  partya:
    hostname: partya
    # image: corda/corda-zulu-java1.8-4.7:RELEASE
    image: corda/corda-zulu-java1.8-4.6:latest
    container_name: partya
    ports:
      - 10006
      - 2223
    command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
    volumes:
      - ./partya/node.conf:/etc/corda/node.conf:ro
      - ./partya/certificates:/opt/corda/certificates:ro
      - ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
      - ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
      # - ./partya/logs:/opt/corda/logs:rw
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
      - ./shared/cordapps:/opt/corda/cordapps:rw
      - ./shared/drivers:/opt/corda/drivers:ro
      - ./shared/network-parameters:/opt/corda/network-parameters:rw
    environment:
      - ACCEPT_LICENSE=${ACCEPT_LICENSE}
    depends_on:
      - partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D
The corda/corda-zulu-java1.8-4.6:latest image runs under the user corda, not root. This user has user id 1000, and is also in a group called corda, also with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the Docker container (./shared/additional-node-infos/nodeInfo-5B) does not have its permissions set up in a way that allows this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give others read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
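Alternatively, since the container user's uid/gid are known to be 1000/1000, you could match the ownership on the host instead of widening the permission bits; a minimal sketch (using the same truncated file name as above):

$ sudo chown 1000:1000 ./shared/additional-node-infos/nodeInfo-5B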
So the error itself tells you it's a permission problem.
I don't know if you crafted this Docker Compose file yourself; you may want to take a look at generating these nodes with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically).
This permission problem could be because you're granting only read/write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works, then restrict your way back down to permissions you're comfortable with (see the sketch below).
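For example, as a deliberately permissive test (tighten it again once you've confirmed that permissions were the problem):

$ chmod -R 777 ./shared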
I just configured the image to run as root. This works but may not be safe. Simply add

services:
  cordaNode:
    user: root

to the service configuration.
Ref: How to configure docker-compose.yml to up a container as root
I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and docker-compose.yml, and I am thinking I will need to add a volume pointing at the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I did docker-compose up -d:
airflow:
  build: ./airflow
  image: digitalImage/airflow
  container_name: di-airflow
  environment:
    AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
    POSTGRES_USER: 'airflowStuff'
    POSTGRES_PASSWORD: 'postgresCreds'
    POSTGRES_HOST: 'host-postgres'
    POSTGRES_PORT: '5432'
    POSTGRES_DB: 'postgres-db'
    DATE_VALUE: '1 DEC 2020 00:00:00'
  volumes:
    - ./airflow/released_dags:/usr/local/airflow/dags
    - \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
  networks:
    - di-airflowStuff
  ports:
    - 8081:8080
  depends_on:
    - postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
P.S. I can access this shared folder location from my file explorer and from Python without a problem.
You don't need docker-compose to mount an external volume to your container; you can configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
Both directories must exist.
Microsoft recommends mapping shares to network drives (if you're running Docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
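A rough sketch of that approach (assuming the drive letter Z: is free; the share path is the one from the question, and path syntax can vary between compose versions):

net use Z: \\pa02ptsdfs002.corp.lgd.afk\files\Public

Then reference the mapped drive in docker-compose.yml instead of the UNC path:

volumes:
  - ./airflow/released_dags:/usr/local/airflow/dags
  - Z:/chris/temp:/usr/local/airflow/dags/inboundFiles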
I have several containers, each described in its own docker-compose-<service>.yaml file, which I start with
docker-compose -f docker-compose-<service>.yaml up -d
I then see the container running via docker ps.
I expected that I could stop that container via
docker-compose -f docker-compose-<service>.yaml down
The container is however not stopped. Nor is it when I use the command above with stop instead of down.
Doing a docker kill <service> stops the container.
My question: since all my services started with docker-compose are effectively one container for each docker-compose-<service>.yaml file, can I use the bare docker command to stop it?
Or more generally speaking: is docker-compose simply a helper for underlying docker commands which means that using docker is always safe (from a "consistency in using different commands" perspective)?
My question: since all my services started with docker-compose are effectively one container for each docker-compose-<service>.yaml file, can I use the bare docker command to stop it?
Actually, docker-compose uses the Docker engine under the hood; you can try it locally.
Example docker-compose.yaml:
version: "3"
services:
# Database
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: wordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
networks:
- wpsite
# phpmyadmin
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- '9090:80'
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: wordpress
networks:
- wpsite
networks:
wpsite:
You can now interact with them through the Docker engine directly if needed:
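For example (a sketch; docker-compose v1 names containers <project>_<service>_<index>, where the project name defaults to the directory name, assumed here to be myproject):

$ docker ps                            # list the running containers
$ docker logs myproject_phpmyadmin_1   # read one service's logs
$ docker stop myproject_db_1           # stop a single container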
More globally, docker-compose is a kind of orchestrator (I prefer the term composer): if you need to define a stack of containers that depend on each other (like the phpMyAdmin/MySQL example above), it is perfect for testing in a dev environment. In my opinion, for better resilience, HA, and service management of a container stack in a production environment, you should strongly consider a real orchestrator such as Docker Swarm, Kubernetes, or OpenShift.
Here is some documentation explaining the difference: https://linuxhint.com/docker_compose_vs_docker_swarm/
You can also see: What is the difference between `docker-compose build` and `docker build`?
I have a problem mounting a WD MyCloud EX2 NAS as an NFS share for a Nextcloud and MariaDB container combination, using Docker Compose. When I run docker-compose up -d, here's the error I get:
Creating nextcloud_app_1 ... error
ERROR: for nextcloud_app_1 Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: for app Cannot create container for service app: b"error while mounting volume with options: type='nfs' device=':/mnt/HD/HD_a/nextcloud' o='addr=192.168.1.73,rw': permission denied"
ERROR: Encountered errors while bringing up the project.
Here's docker-compose.yml (all sensitive info replaced with <brackets>):
version: '2'
volumes:
  nextcloud:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.73,rw
      device: ":/mnt/HD/HD_a/nextcloud"
  db:
services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<****>
      - MYSQL_PASSWORD=<****>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - NEXTCLOUD_ADMIN_USER=<****>
      - NEXTCLOUD_ADMIN_PASSWORD=<****>
  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
I SSHd into the NAS box to check /etc/exports and sure enough, it was using all_squash, so I changed that.
Here's the /etc/exports file on the NAS box:
"/nfs/nextcloud" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
"/nfs/Public" 192.168.1.73(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check,anonuid=501,anongid=1000)
Then, I refreshed the service with exportfs -a
Nothing changed: docker-compose throws the same error. And I'm deleting all containers and images and re-downloading the image every time I attempt the build.
I've read similar questions and done everything I can think of. I also know this is a container issue, because I can access the NFS share quite happily from the command line thanks to my settings in /etc/fstab.
What else should I be doing here?
In our case, we mount the NFS volume locally on the Docker host, then mount that folder into the containers.
We are running Oracle Linux 7 with SELinux enabled.
We fixed it by adding the following parameter inside /etc/fstab in the fs_mntops field (see https://man7.org/linux/man-pages/man5/fstab.5.html):
defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"
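A complete fstab line would then look roughly like this (the server address and export path are taken from the question above; the local mount point /mnt/nextcloud is an assumption):

192.168.1.73:/mnt/HD/HD_a/nextcloud  /mnt/nextcloud  nfs  defaults,context="system_u:object_r:svirt_sandbox_file_t:s0"  0  0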
Try checking from the command line in the Nextcloud folder with ls -l /var/www/html to see which users and groups can access it.
I fixed it by removing the anonuid=501,anongid=1000 entries in the NAS box's /etc/exports file. I had also managed to enter the wrong IP: the NAS box wasn't granting access to the Ubuntu computer that was trying to connect to it.
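For reference, a sketch of the corrected export line (the client address must be the Docker host's real IP, shown here as a placeholder), followed by exportfs -a to re-export:

"/nfs/nextcloud" <docker-host-ip>(rw,no_root_squash,sync,no_wdelay,insecure_locks,insecure,no_subtree_check)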
I have created a docker-compose.yml using cloudestuary. After downloading it, putting it in my Laravel project folder, and running docker-compose up -d, the download takes place and then I get this message:
ERROR: for worker-1 Cannot start service worker-1: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for nginx Cannot start service nginx: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for app Cannot start service app: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for workspace Cannot start service workspace: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: Encountered errors while bringing up the project.
I'm on Ubuntu 17, and I've even tried setting 777 on all the folders and running it with sudo, but the result is the same. I have also tried moving the file and editing the volumes in the yml.
Here is my docker-compose file:
version: '2'
services:
  nginx:
    image: 'cloudestuary/nginx:mainline-fpm'
    restart: always
    environment:
      CLIENT_MAX_BODY_SIZE: 100m
      DOCUMENT_ROOT: /var/www/html/public
      INDEX_FILE: index.php
      PHP_FPM: app
    networks:
      - app
    volumes:
      - './html:/var/www/html'
    ports:
      - '80:80'
  app:
    image: 'cloudestuary/php-fpm:7.1'
    restart: always
    environment:
      MAX_UPLOAD_FILE_SIZE: 100m
      APP_URL: 'http://lensin.localhost'
      APP_KEY: 'base64:2X9U1HiBdmfbwvZ4UkwUP/25svg7439HXKWL1F8Xn1c='
      DB_CONNECTION: mysql
      DB_HOST: mysql
      DB_PORT: '3306'
      DB_DATABASE: cloudestuary
      DB_USER: cloudestuary
      DB_PASSWORD: secret
    networks:
      - app
    volumes:
      - './html:/var/www/html'
  workspace:
    image: 'cloudestuary/php-workspace:7.1'
    restart: always
    ports:
      - '2222:22'
    environment:
      MAX_UPLOAD_FILE_SIZE: 100m
      APP_URL: 'http://lensin.localhost'
      APP_KEY: 'base64:2X9U1HiBdmfbwvZ4UkwUP/25svg7439HXKWL1F8Xn1c='
      DB_CONNECTION: mysql
      DB_HOST: mysql
      DB_PORT: '3306'
      DB_DATABASE: cloudestuary
      DB_USER: cloudestuary
      DB_PASSWORD: secret
      SSH_PASSWORD: xsKEVWXPrdAeg
    networks:
      - app
    volumes:
      - './html:/var/www/html'
  worker-1:
    image: 'cloudestuary/php-cli:7.1'
    restart: always
    networks:
      - app
    environment: { }
    volumes:
      - './html:/var/www/html'
    command: 'php artisan queue:work'
  mysql:
    image: 'mysql:5.7'
    restart: always
    networks:
      - app
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_PASSWORD: secret
      MYSQL_USER: cloudestuary
      MYSQL_DATABASE: cloudestuary
    volumes:
      - 'mysql-data:/var/lib/mysql'
volumes:
  mysql-data: { }
networks:
  app: { }
It's likely a pathing issue with Docker when it's installed with snap; you're better off installing it per the official documentation from Docker.
Remove docker from snap:
snap remove docker
Remove the Docker directory and any old versions (it's okay if these don't already exist):
rm -R /var/lib/docker
sudo apt-get remove docker docker-engine docker.io
Install the official Docker package: https://docs.docker.com/install/linux/docker-ce/ubuntu/
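As a condensed sketch of that install (the linked guide has the full, current steps), Docker's convenience script is the quickest route:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh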
Update: Since posting this answer, I've learnt that tools installed using snap are installed in a sandbox with limited permissions outside of that sandbox. This is likely the cause as docker won't have access to the external filesystem from its isolated sandbox environment.
Restart your Docker service; then the problem should be solved.
sudo systemctl restart docker
What led me here was the Kubernetes V1VolumeMount. When deploying my application I was getting the same error format:
ERROR: for <pod_name> Cannot start <service_name>: error while creating mount source path '<source_path>': mkdir <dir_path>: read-only file system
At first I was thinking it was a permissions error as well; the message is a bit misleading. It turned out that I was trying to mount something that didn't exist in the source image. Hence, my uneducated suggestion would be: verify that what you are trying to mount does exist; if it doesn't, you probably don't need that mount path.
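One quick way to verify is to list the path inside the image itself; a sketch with hypothetical image and path names:

$ docker run --rm --entrypoint ls myimage:latest /path/expected/in/image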
P.S. I saw that there wasn't an accepted answer, so I am hoping that my contribution is not causing unnecessary cluttering.
I got this error, but the issue was that the source path was a symlink. For some reason Docker does not seem to like that, even after restarting the service.
I had to use a real path, and then it worked just fine with Docker version 20.10.8 installed with snap. See the sketch below for checking symlinks.
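To check whether a source path is a symlink and find its real target (a sketch with a hypothetical ./html path; use the resolved path in your compose file):

$ ls -ld ./html        # prints "-> target" if it's a symlink
$ readlink -f ./html   # prints the resolved absolute path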
For Docker on Windows 10, sometimes you just have to wait a while (1-5 min) before executing docker-compose up again.
Hope this will help someone else.