docker-compose error "read-only file system" - docker

I designed a docker-compose.yml file that is also supposed to work with individual volumes.
I created a RAID drive which is mounted at /dataraid on my system. I can read and write to it, but when I use that path in my compose file, I get "read-only file system" error messages.
If I adjust the volumes to another path such as /home/myname/test, the compose file works.
I have no idea what makes /dataraid "read-only".
What permission settings does a compose file need?
Error message:
ERROR: for db Cannot start service db: error while creating mount source path '/dataraid/nextcloud/mariadb': mkdir /dataraid: read-only file system
Compose file:
version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - /dataraid/nextcloud/mariadb:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=PASSWORD
    env_file:
      - db.env
  redis:
    image: redis
    restart: always
  app:
    image: nextcloud:fpm
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html
    environment:
      - MYSQL_HOST=db
    env_file:
      - db.env
    depends_on:
      - db
      - redis
  web:
    build: ./web
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html:ro
    environment:
      - VIRTUAL_HOST=name.de
      - LETSENCRYPT_HOST=name.de
      - LETSENCRYPT_EMAIL=x@y.de
    depends_on:
      - app
    ports:
      - 4080:80
    networks:
      - proxy-tier
      - default
  collabora:
    image: collabora/code
    expose:
      - 9980
    cap_add:
      - MKNOD
    environment:
      - domain=name.de
      - VIRTUAL_HOST=name.de
      - VIRTUAL_PORT=9980
      - VIRTUAL_PROTO=https
      - LETSENCRYPT_HOST=name.de
      - LETSENCRYPT_EMAIL=x@y.de
      - username= #optional
      - password= #optional
    networks:
      - proxy-tier
    restart: always
  cron:
    build: ./app
    restart: always
    volumes:
      - /dataraid/nextcloud/html:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
  proxy:
    build: ./proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    environment:
      - VIRTUAL_PROTO=https
      - VIRTUAL_PORT=443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - /dataraid/nextcloud/nginx-certs:/etc/nginx/certs:ro
      - /dataraid/nextcloud/nginx-vhost.d:/etc/nginx/vhost.d
      - /dataraid/nextcloud/nginx-html:/usr/share/nginx/html
      - /dataraid/nextcloud/nginx-conf.d:/etc/nginx/conf.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier
  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - /dataraid/nextcloud/nginx-certs:/etc/nginx/certs
      - /dataraid/nextcloud/nginx-vhost.d:/etc/nginx/vhost.d
      - /dataraid/nextcloud/nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy

networks:
  proxy-tier:
Full error output:
bernd@sys-dock:/dataraid/Docker-Configs/nextcloud$ docker-compose up -d
Creating network "nextcloud_default" with the default driver
Creating network "nextcloud_proxy-tier" with the default driver
Creating nextcloud_db_1 ...
Creating nextcloud_proxy_1 ... error
Creating nextcloud_db_1 ... error
Creating nextcloud_collabora_1 ...
ERROR: for nextcloud_proxy_1 Cannot start service proxy: error while creating mount source path '/dataraid/nextcloud/nginx-certs': mkdir /dataraid: read-only file system
Creating nextcloud_redis_1 ... done
Creating nextcloud_collabora_1 ... done
ERROR: for proxy Cannot start service proxy: error while creating mount source path '/dataraid/nextcloud/nginx-certs': mkdir /dataraid: read-only file system
ERROR: for db Cannot start service db: error while creating mount source path '/dataraid/nextcloud/mariadb': mkdir /dataraid: read-only file system
ERROR: Encountered errors while bringing up the project.

If Docker starts before the filesystem gets mounted, you could be seeing the Docker engine trying to write to the parent filesystem. You can restart the Docker daemon to rule this out (systemctl restart docker in systemd-based environments).
If restarting the daemon helps, then you can add a dependency between the Docker engine and the external filesystem mounts. In systemd, that involves an After= clause in the unit file. E.g. you could create a /etc/systemd/system/docker.service.d/override.conf file containing:
[Unit]
After=nfs-client.target
(Note that I'm not sure nfs-client.target is the correct unit for your filesystem; you'll want to check how and where it gets mounted.)
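For example, on a systemd-based host the override could be created and applied like this (a sketch; nfs-client.target is only a placeholder for whatever unit actually provides your mount):
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Unit]\nAfter=nfs-client.target\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart docker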
Another issue I've seen people encounter recently is Snap-based Docker installs, which run Docker inside another container technology; that prevents access to paths not explicitly configured in the Snap.
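If you're not sure how Docker was installed, a quick check (assuming snapd is present) is snap list docker, which lists the docker snap if one is installed, and command -v docker, which shows the binary actually on your PATH (a Snap install typically resolves to /snap/bin/docker).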

Related

understanding Docker compose mongodb and rails application

There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine; however, when I try to run it in remote containers, it throws the error message
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services:
redis, mongodb, db, rails
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'

services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env

volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name and not to a port on localhost. For example, if you were connecting to redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
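As a sketch (the variable names, credentials and database names here are only placeholders; your app's actual configuration may differ), the entries in .env/development.env could look like:
MONGODB_URI=mongodb://mongodb:27017/proj_development
REDIS_URL=redis://redis:6379
DATABASE_URL=postgres://postgres:password@db:5432/proj_development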
If this still does not help, you can try adding links between the containers so that the names given to them as services can be resolved. The file would then look something like this:
version: '3.4'

services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"

volumes:
  db-data:
  mongo-data:

networks:
  front-end:
The links allow hostnames to be defined inside the containers.
The links flag is legacy, and in newer versions of the Docker engine it is not required for user-defined networks. The links are also ignored in a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose around, this is one thing to try when troubleshooting.
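On a current engine, the same name resolution works with just the user-defined network and no links at all. A minimal sketch:
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
  rails:
    build: .
    networks:
      - front-end
networks:
  front-end: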

How to access wacore container which is exiting due to file error present within - "/opt/whatsapp/bin/wait_on_postgres.sh"

I am launching containers via docker-compose, but 2 out of 3 containers are failing, stating: "exec user process caused "exec format error"".
The above error is raised while executing a file placed at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is, the container exits almost immediately, so how can I access this file to make the necessary changes?
Below is the docker-compose.yml I am using:
version: '3'

volumes:
  whatsappMedia:
    driver: local
  postgresData:
    driver: local

services:
  db:
    image: postgres:10.6
    command: "-p 3306 -N 500"
    restart: always
    environment:
      POSTGRES_PASSWORD: testpass
      POSTGRES_USER: root
    expose:
      - "33060"
    ports:
      - "33060:3306"
    volumes:
      - postgresData:/var/lib/postgresql/data
    network_mode: bridge
  wacore:
    image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
    network_mode: bridge
    links:
      - db
  waweb:
    image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    ports:
      - "9090:443"
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      WACORE_HOSTNAME: wacore
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
      - "wacore"
    links:
      - db
      - wacore
    network_mode: bridge
The problem got resolved by using a 64-bit guest OS image.
I was running this container on 32-bit CentOS, which was causing the error.
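As a side note on the original "how do I access the file" question: you can copy a file out of an image without ever starting it, which also works when a container exits immediately. A sketch using standard docker CLI commands (the image tag must match the one you pulled):
docker create --name tmp docker.whatsapp.biz/coreapp:v2.31.4
docker cp tmp:/opt/whatsapp/bin/wait_on_postgres.sh .
docker rm tmp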

docker-compose up gives errors: Are you trying to mount a directory onto a file?

I'm trying to install Nextcloud with Docker on Windows (Docker version: 19.03.13). I start Windows PowerShell with admin rights and run docker-compose up -d.
My compose YAML looks like this:
version: '3'

services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=1984cstr
      - MYSQL_PASSWORD=cstrike
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.example.de
      - LETSENCRYPT_HOST=nextcloud.example.de
      - LETSENCRYPT_EMAIL=realmail@gmail.com
    restart: unless-stopped

volumes:
  nextcloud:
  db:

networks:
  nextcloud_network:
But I'm getting the following errors:
ERROR: for nextcloud-mariadb Cannot start service db: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/etc/localtime\" to rootfs \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged\" at \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged/usr/share/zoneinfo/Etc/UTC\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Creating nextcloud-proxy ... done
Creating nextcloud-letsencrypt ... done
ERROR: for db Cannot start service db: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/etc/localtime\" to rootfs \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged\" at \"/var/lib/docker/overlay2/5111ae9606906d7a02c039fc8ea7987272d4b2738dabd763fde72bdf56c8bb59/merged/usr/share/zoneinfo/Etc/UTC\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
What is wrong? Or what additional information do I need to provide so that the problem can be found?
Since you are on a Windows host, mount paths like /etc/localtime won't work because they don't exist on your system. The configuration you are using is meant for a Linux-based host.
Although they are recommended, you can remove those mounts from your services.
But keep in mind that you need to keep the Docker socket mount, and you will need to adjust it for a Windows host (since the one you have is also for a Linux host); there are existing workarounds for that you can try.
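For example, the db service from the compose file above with the Linux-only /etc/localtime mount removed (a sketch; everything else stays unchanged):
db:
  image: mariadb
  container_name: nextcloud-mariadb
  networks:
    - nextcloud_network
  volumes:
    - db:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=1984cstr
    - MYSQL_PASSWORD=cstrike
    - MYSQL_DATABASE=nextcloud
    - MYSQL_USER=nextcloud
  restart: unless-stopped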

Docker unable to mount nginx

I'm having issues setting up Docker for the first time on Windows using the Docker Toolbox. Everything works except nginx at the moment.
Error message:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/c/wamp64/www/cathaypacific_career/ops/nginx/default.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/aufs/mnt/ff9b27a89b26b0e9091264d04d3a475f18469db3cf3be473c005e2d4c7d4b5ef\\\" at \\\"/mnt/sda1/var/lib/docker/aufs/mnt/ff9b27a89b26b0e9091264d04d3a475f18469db3cf3be473c005e2d4c7d4b5ef/etc/nginx/conf.d/default.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Docker-compose config:
version: '3'

services:
  web:
    container_name: web
    image: nginx:1.13.3-alpine
    networks:
      - web_tier
    ports:
      - 80:80
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ../:/code
      - /code/ops/
    depends_on:
      - app
  app:
    container_name: app
    build: ./php/
    networks:
      - web_tier
      - app_tier
    expose:
      - '9000'
    volumes:
      - ./php/settings.conf:/usr/local/etc/php-fpm.d/settings.conf
      - ../:/code
      - /code/ops/
    working_dir: /code
    entrypoint: "/bin/sh -c"
    command:
      - "php-fpm"
    env_file: ../.env
    depends_on:
      - db
  db:
    container_name: db
    image: mysql:5.6.39
    networks:
      - app_tier
      - db_tier
    expose:
      - '3306'
    ports:
      - 3306:3306
    volumes:
      - db_data:/var/lib/mysql
      - ./db:/etc/mysql/conf.d
    restart: always
    env_file: ../.env

networks:
  web_tier:
    driver: bridge
  app_tier:
    driver: bridge
  db_tier:
    driver: bridge

volumes:
  db_data:
The issue seems to be related to nginx: default.conf is either not accessible, or Docker thinks it's a folder and not a file.
I checked the issue online and people suggest mounting the C: folder, so I tried to mount it in Oracle VirtualBox and re-ran the docker-compose up command, but it didn't solve the issue.
Any idea?
I solved the same problem by sharing the project folder with the Oracle VirtualBox VM default.
Share your project folder and restart your VM.
You can even do it from the command line, e.g.:
docker-machine stop default && docker-machine start default
Now you need to use the share name (e.g. project) instead of . in your compose file (docker-compose.yml).
For your case,
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
should be changed to
- /sharename/nginx/default.conf:/etc/nginx/conf.d/default.conf
Now try with docker-compose up.
It worked for me.
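For reference, the shared folder itself can also be added from the command line while the VM is stopped (a sketch; the share name and host path are placeholders you would adjust):
docker-machine stop default
VBoxManage sharedfolder add default --name project --hostpath "C:\wamp64\www" --automount
docker-machine start default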

How to run Docker container in its own network

Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my User directory (which didn't work with "Docker Toolbox") - yay!
But this change also means that all containers now run under my localhost and no longer under Docker's IP as before (e.g. 192.168.99.100).
Since my localhost listens on various ports by default (80, 443, ...) and I don't want to keep appending newly created, non-conflicting ports to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? This is my docker-compose.yml so far.
version: '2'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'

services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network

networks:
  my-network:
    driver: bridge
Then, from your app's configuration, you can use "mysql" as the hostname of the database server.
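For instance (a hypothetical example; user, password and database name are placeholders):
DATABASE_URL=mysql://user:password@mysql:3306/app_database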
You can define a network in your compose file, then add any services to that network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different host ports now that you are running natively, i.e. 8080:80.
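For example, a sketch of remapping the conflicting host ports on the app service (the host-side port numbers are arbitrary choices):
app:
  ports:
    - 8080:80
    - 8443:443
    - 2222:22
    - 3000:3000
    - 3001:3001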
