Docker compose error while creating mount source path - docker

I have created a docker-compose.yml using cloudestuary. After downloading it, putting it in my Laravel project folder, and running docker-compose up -d, the download takes place and then I get this message:
ERROR: for worker-1 Cannot start service worker-1: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for nginx Cannot start service nginx: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for app Cannot start service app: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: for workspace Cannot start service workspace: error while creating mount source path '/var/www/html/lensin/html': mkdir /var/www: read-only file system
ERROR: Encountered errors while bringing up the project.
I'm on Ubuntu 17, and have even tried setting 777 on all the folders and running it with sudo, but the result is the same. I have also tried moving the file and editing the volumes in the yml.
Here is my docker compose file:
version: '2'
services:
  nginx:
    image: 'cloudestuary/nginx:mainline-fpm'
    restart: always
    environment:
      CLIENT_MAX_BODY_SIZE: 100m
      DOCUMENT_ROOT: /var/www/html/public
      INDEX_FILE: index.php
      PHP_FPM: app
    networks:
      - app
    volumes:
      - './html:/var/www/html'
    ports:
      - '80:80'
  app:
    image: 'cloudestuary/php-fpm:7.1'
    restart: always
    environment:
      MAX_UPLOAD_FILE_SIZE: 100m
      APP_URL: 'http://lensin.localhost'
      APP_KEY: 'base64:2X9U1HiBdmfbwvZ4UkwUP/25svg7439HXKWL1F8Xn1c='
      DB_CONNECTION: mysql
      DB_HOST: mysql
      DB_PORT: '3306'
      DB_DATABASE: cloudestuary
      DB_USER: cloudestuary
      DB_PASSWORD: secret
    networks:
      - app
    volumes:
      - './html:/var/www/html'
  workspace:
    image: 'cloudestuary/php-workspace:7.1'
    restart: always
    ports:
      - '2222:22'
    environment:
      MAX_UPLOAD_FILE_SIZE: 100m
      APP_URL: 'http://lensin.localhost'
      APP_KEY: 'base64:2X9U1HiBdmfbwvZ4UkwUP/25svg7439HXKWL1F8Xn1c='
      DB_CONNECTION: mysql
      DB_HOST: mysql
      DB_PORT: '3306'
      DB_DATABASE: cloudestuary
      DB_USER: cloudestuary
      DB_PASSWORD: secret
      SSH_PASSWORD: xsKEVWXPrdAeg
    networks:
      - app
    volumes:
      - './html:/var/www/html'
  worker-1:
    image: 'cloudestuary/php-cli:7.1'
    restart: always
    networks:
      - app
    environment: {}
    volumes:
      - './html:/var/www/html'
    command: 'php artisan queue:work'
  mysql:
    image: 'mysql:5.7'
    restart: always
    networks:
      - app
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_PASSWORD: secret
      MYSQL_USER: cloudestuary
      MYSQL_DATABASE: cloudestuary
    volumes:
      - 'mysql-data:/var/lib/mysql'
volumes:
  mysql-data: {}
networks:
  app: {}

It's likely a pathing issue with Docker when installed with snap; you're better off installing it by following the official documentation from Docker.
Remove Docker from snap:
snap remove docker
Remove the Docker directory and any old versions (it's okay if these don't already exist):
rm -R /var/lib/docker
sudo apt-get remove docker docker-engine docker.io
Install the official Docker package: https://docs.docker.com/install/linux/docker-ce/ubuntu/ (a consolidated sketch of these steps follows below)
Update: Since posting this answer, I've learnt that tools installed using snap run in a sandbox with limited permissions outside of that sandbox. This is likely the cause, as Docker won't have access to the external filesystem from its isolated sandbox environment.
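A rough sketch of the full cleanup and reinstall, assuming the convenience-script route that the linked page also documents (adjust to your setup; run with sudo as needed):

# Remove the snap-installed Docker and any apt-installed leftovers
sudo snap remove docker
sudo rm -rf /var/lib/docker
sudo apt-get remove docker docker-engine docker.io

# Install Docker CE via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh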

Restart your Docker service; that may solve the problem.
sudo systemctl restart docker

What led me here was the Kubernetes V1VolumeMount. When deploying my application I was getting the same error format:
ERROR: for <pod_name> Cannot start <service_name>: error while creating mount source path '<source_path>': mkdir <dir_path>: read-only file system
At first I suspected a permissions error as well, so the message is a bit misleading. It turned out that I was trying to mount something that didn't exist in the source image. My suggestion would be: verify that what you are trying to mount actually exists; if it doesn't, you probably don't need that mount path.
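As a quick sanity check before bringing the stack back up, you can confirm the bind-mount source exists on the host (the ./html path is taken from the question above; substitute your own mount sources):

# Create the bind-mount source if it is missing, then inspect its ownership and permissions
[ -d ./html ] || mkdir -p ./html
ls -ld ./html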
P.S. I saw that there wasn't an accepted answer, so I am hoping my contribution does not add unnecessary clutter.

Got this error, but the issue was that the source path was a symlink. For some reason Docker does not seem to like it, even after restarting the service.
I had to use a real path, and then it worked just fine with Docker version 20.10.8 installed with snap.
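If you suspect the same symlink issue, one way to check and work around it (a sketch, using the ./html source from the question above) is to resolve the path and put the resolved value into the compose file:

# Shows whether ./html is a symlink and where it points
ls -ld ./html
# Prints the canonical path; use this value in the volume definition instead
realpath ./html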

For Docker on Windows 10, sometimes you just have to wait a while (1-5 min) before executing docker-compose up again.
Hope this will help someone else.

Related

Docker volume mariadb has root permission

I stumbled across a problem with Docker volumes while starting Docker containers from a docker-compose file (MariaDB, RabbitMQ, Maven). I start them simply with docker-compose up -d (WITHOUT SUDO).
My volumes are definied like this:
...
volumes:
  - ./production/mysql:/var/lib/mysql:z
...
Everything works fine and the ./production directory (where the volumes are mapped) is created.
But when I try to restart the docker containers with down/up, I get the following error:
error checking context: 'no permission to read from '…/production/mysql/aria_log.00000001'
When I checked the mentioned file, I saw that it is owned by root:root. This is because the file is generated by the root user inside the container. So I tried to use user namespace remapping as mentioned in the docs.
Anyway, the error still occurs. Any ideas or references?
Thanks.
Docker Compose File:
version: '3.8'
services:
  mysql:
    image: mariadb:latest
    restart: always
    env_file:
      - config.env
    volumes:
      - ./production/mysql:/var/lib/mysql:z
    environment:
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
    networks:
      - testnetwork
networks:
  testnetwork:
The issue comes from the mapping between the host user/group IDs and the ones inside the container. One of the solutions is to use a named volume and avoid all this hassle, but you can also do the following (see the sketch after these steps):
Add user: ${UID}:${GID} to your service inside the docker-compose file.
Run UID=$(id -u) GID=$(id -g) docker-compose up. This way you make sure that the user in the container has the same UID/GID as the user on the host, and files created in the container will have the proper permissions.
NOTE: Docker for Mac (using the osxfs driver) does this behind the scenes, so you don't need to worry about users and groups.
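Putting the two steps together, a minimal sketch based on the compose file from the question (other keys omitted for brevity; only the user: line is new):

services:
  mysql:
    image: mariadb:latest
    user: "${UID}:${GID}"
    volumes:
      - ./production/mysql:/var/lib/mysql:z

and the invocation:

UID="$(id -u)" GID="$(id -g)" docker-compose up -d

Note that in some shells (bash, for example) UID is a read-only shell variable, so you may need to put the values in a .env file or pick different variable names if the inline assignment is rejected.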
Running the Docker daemon as a non-root user can also be helpful for your purpose.
The full documentation is here.

Containerizing Cordapp with Docker Image and Docker Compose

When running Corda in Docker with an external Postgres DB configuration, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6
PostgreSQL: 9.6
Docker engine: 20.10.6
docker-compose: version 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'
services:
  partyadb:
    hostname: partyadb
    container_name: partyadb
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: partyadb
    ports:
      - 5432
  partya:
    hostname: partya
    # image: corda/corda-zulu-java1.8-4.7:RELEASE
    image: corda/corda-zulu-java1.8-4.6:latest
    container_name: partya
    ports:
      - 10006
      - 2223
    command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
    volumes:
      - ./partya/node.conf:/etc/corda/node.conf:ro
      - ./partya/certificates:/opt/corda/certificates:ro
      - ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
      - ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
      # - ./partya/logs:/opt/corda/logs:rw
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
      - ./shared/cordapps:/opt/corda/cordapps:rw
      - ./shared/drivers:/opt/corda/drivers:ro
      - ./shared/network-parameters:/opt/corda/network-parameters:rw
    environment:
      - ACCEPT_LICENSE=${ACCEPT_LICENSE}
    depends_on:
      - partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D
The corda/corda-zulu-java1.8-4.6:latest image runs under the user corda, not root. This user has user id 1000 and is in a group called corda, also with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the Docker container (./shared/additional-node-infos/nodeInfo-5B) does not have permissions set up in such a way as to allow this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give others read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
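For example, since the container's corda user is uid/gid 1000, a hedged alternative to opening the files up to everyone is to hand ownership of the bind-mounted directory to that id on the host (path taken from the compose file above):

# Make the shared node-info files owned by uid/gid 1000, matching the corda user in the image
sudo chown -R 1000:1000 ./shared/additional-node-infos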
So the error itself says that it's a permission problem.
I don't know if you crafted this docker-compose file yourself; you may want to take a look at generating it with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically).
This permission problem could be because you're only granting read/write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works, then restrict your way back down to permissions you're comfortable with.
I just configured the image to run as root. This works but may not be safe. Simply add
services:
  cordaNode:
    user: root
to the service configuration.
Ref: How to configure docker-compose.yml to up a container as root

How to Add a shared folder location to my application (Docker)

I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is, to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and a docker-compose.yml, and I am thinking I will need to add a volume with the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I did docker-compose up -d:
airflow:
  build: ./airflow
  image: digitalImage/airflow
  container_name: di-airflow
  environment:
    AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
    POSTGRES_USER: 'airflowStuff'
    POSTGRES_PASSWORD: 'postgresCreds'
    POSTGRES_HOST: 'host-postgres'
    POSTGRES_PORT: '5432'
    POSTGRES_DB: 'postgres-db'
    DATE_VALUE: '1 DEC 2020 00:00:00'
  volumes:
    - ./airflow/released_dags:/usr/local/airflow/dags
    - \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
  networks:
    - di-airflowStuff
  ports:
    - 8081:8080
  depends_on:
    - postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
P.S. I can access this shared folder location from my file explorer and from Python without a problem.
You don't need docker-compose to mount an external volume to your container; just configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
Both directories must exist.
Microsoft recommends mapping shares to network drives (if you're running Docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
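As a sketch of that approach (the drive letter Z: is just an example, the exact volume syntax for Windows paths varies with Docker/Compose versions, and with Docker Desktop the mapped drive or share typically also has to be enabled under its file-sharing settings):

REM Map the share to a drive letter from a Windows command prompt
net use Z: \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp /persistent:yes

Then reference the mapped drive in the compose volume instead of the UNC path:

volumes:
  - ./airflow/released_dags:/usr/local/airflow/dags
  - Z:/:/usr/local/airflow/dags/inboundFiles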

Docker for Mac and mkdir permissions

Using the Docker for Mac app. Just installed everything yesterday. Finally got the app going.
But I can't run migrations until I install PostGIS. So I swapped the official postgres Docker Hub image for the postgis:11-alpine image. But I keep getting a permission denied issue when Docker tries to mkdir for the pg_data volume.
docker-compose.yml:
version: '3'

# Containers we are going to run
services:
  # Our Phoenix container
  phoenix:
    # The build parameters for this container.
    build:
      # Here we define that it should build from the current directory
      context: .
    environment:
      # Variables to connect to our Postgres server
      PGUSER: postgres
      PGPASSWORD: postgres
      PGDATABASE: gametime_dev
      PGPORT: 5432
      # Hostname of our Postgres container
      PGHOST: db
    ports:
      # Mapping the port to make the Phoenix app accessible outside of the container
      - "4000:4000"
    depends_on:
      # The db container needs to be started before we start this container
      - db
      - redis
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    sysctls:
      net.core.somaxconn: 1024
  db:
    # We use the predefined Postgres image
    image: mdillon/postgis:11-alpine
    environment:
      # Set user/password for Postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      # Set a path where Postgres should store the data
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - pgdata:/usr/local/var/postgres_data

# Define the volumes
volumes:
  pgdata:
Error I'm getting:
db_1 | mkdir: can't create directory '/var/lib/postgresql/data/pgdata': Permission denied
This does not happen, though, when using the official postgres image. I have googled high and low. I did read something about Docker for Mac running commands on the containers it creates in a VM as the current user's localhost user and not root. But that doesn't make sense to me - how do I get around this, if that's the case?
[Extra note:] I did try :z and :Z - still got the exact same error as above.
Appreciate the time, in advance.
Your environment variables for the db service state that PGDATA is in /var/lib/postgresql/data/pgdata, but you are mounting the pgdata volume in the container at /usr/local/var/postgres_data.
My guess is that when Postgres starts, it looks at the env vars and expects a directory at /var/lib/postgresql/data/pgdata. Since that directory probably does not exist, Postgres tries to create it as the postgres user, which does not have the rights to do so.
Use the same path in both places and I'm quite sure it will fix the error.
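A minimal sketch of the aligned configuration, keeping the PGDATA value from the question and only changing the volume target so the two paths match:

  db:
    image: mdillon/postgis:11-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      # Mount the named volume where PGDATA points, so Postgres can initialise it there
      - pgdata:/var/lib/postgresql/data/pgdata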

Docker-compose error when trying to start

I am having a strange error when I try to run docker-compose. I have reinstalled the VM several times, and everything is updated and installed, but I cannot run a compose.
$ sudo docker-compose up -d
Creating network "apache2_default" with the default driver
Building mysql
ERROR: Error processing tar file(exit status 1): permission denied
My docker-compose.yml file:
version: '2'
services:
  mysql:
    build: ./mysql
    environment:
      MYSQL_ROOT_PASSWORD: pass
    volumes:
      - db:/var/lib/mysql
  php:
    build: ./php
    ports:
      - '80:80'
    volumes:
      - ./html:/var/www/html
    depends_on:
      - mysql
volumes:
  db:
I have run this on a Mac and it works.
Edit:
Dockerfile for mysql:
FROM mysql:5.7
COPY ./my.cnf /etc/mysql/conf.d/
The docker build command can fail with a permission error when there are files (or folders) in the build context directory which aren't owned by the current user.
This situation happens when you mount a volume to a host directory; the files in that directory might be owned by root.
The fix is quite easy: just create a .dockerignore file with the names of the directories/files you don't own and don't need in the Docker image build.
For instance:
docker run -d -v $(pwd)/data-volume:/var/lib/mysql mysql
would create a data-volume directory.
If you were then to build an image from a Dockerfile in that directory, you would put the following in your .dockerignore:
data-volume
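To see which entries in the build context would trigger this, a quick check (a sketch; run it from the directory you build from) is to list anything not owned by your user:

# Files or directories in the build context not owned by the current user
# are candidates for .dockerignore entries.
find . -not -user "$(whoami)"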
