Can't set volume value right when running owncloud docker container

I followed this guide to install an ownCloud server on my VPS, and it works. However, I ran into an issue when trying to mount the file volume onto the host server. Below is the docker-compose.yml file I use. If I run the Docker image with this yml file directly, I can't find any ownCloud data under the /mnt/data directory of my VPS, even though I created /mnt/data manually.
version: '2.1'
volumes:
  files:
    driver: local
  mysql:
    driver: local
  backup:
    driver: local
  redis:
    driver: local
services:
  owncloud:
    image: owncloud/server:${OWNCLOUD_VERSION}
    restart: always
    ports:
      - ${HTTPS_PORT}:443
      - ${HTTP_PORT}:80
    depends_on:
      - db
      - redis
    environment:
      - OWNCLOUD_DOMAIN=${OWNCLOUD_DOMAIN}
      - OWNCLOUD_DB_TYPE=mysql
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USERNAME=owncloud
      - OWNCLOUD_DB_PASSWORD=owncloud
      - OWNCLOUD_DB_HOST=db
      - OWNCLOUD_ADMIN_USERNAME=${ADMIN_USERNAME}
      - OWNCLOUD_ADMIN_PASSWORD=${ADMIN_PASSWORD}
      - OWNCLOUD_UTF8MB4_ENABLED=true
      - OWNCLOUD_REDIS_ENABLED=true
      - OWNCLOUD_REDIS_HOST=redis
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - files:/mnt/data
  db:
    image: webhippie/mariadb:latest
    restart: always
    environment:
      - MARIADB_ROOT_PASSWORD=owncloud
      - MARIADB_USERNAME=owncloud
      - MARIADB_PASSWORD=owncloud
      - MARIADB_DATABASE=owncloud
      - MARIADB_MAX_ALLOWED_PACKET=128M
      - MARIADB_INNODB_LOG_FILE_SIZE=64M
      - MARIADB_INNODB_LARGE_PREFIX=ON
      - MARIADB_INNODB_FILE_FORMAT=Barracuda
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - mysql:/var/lib/mysql
      - backup:/var/lib/backup
  redis:
    image: webhippie/redis:latest
    restart: always
    environment:
      - REDIS_DATABASES=1
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - redis:/var/lib/redis
So I tried changing the line files:/mnt/data to files:/mnt/data:/mnt/data, which prompted the following error:
ERROR: for ownclouddockerserver_owncloud_1 Cannot create container for service owncloud: invalid bind mount spec "ownclouddockerserver_files:/mnt/data:/mnt/data": invalid mode: /mnt/data
ERROR: for owncloud Cannot create container for service owncloud: invalid bind mount spec "ownclouddockerserver_files:/mnt/data:/mnt/data": invalid mode: /mnt/data
Can anyone help me figure out the right way to mount an external directory into the container? Much appreciated.

The /mnt/data directory exists only inside your Docker container.
It is mapped to a named volume called files, which Docker manages itself on the host.
To map the container's /mnt/data directory to a local host directory instead, you use a bind mount:
/my/local/directory:/mnt/data
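For example, a minimal sketch of just the changed part of the owncloud service, reusing /mnt/data on the host as the local directory (any existing host directory works, and the files: entry under the top-level volumes: key then becomes unnecessary):

services:
  owncloud:
    volumes:
      # host path on the left, container path on the right
      - /mnt/data:/mnt/data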

Related

Minio: found backend type fs, expected xl or xl-single

I'm trying to upgrade the MinIO version in my docker-compose file (previously I used image: minio/minio:RELEASE.2020-06-22T03-12-50Z and it was working).
For now I have the following docker-compose service:
version: '3.6'
services:
  minio:
    container_name: minio
    image: minio/minio:RELEASE.2022-11-17T23-20-09Z.fips
    volumes:
      - minio-data:/data
    ports:
      - 9000:9000
    environment:
      - MINIO_ROOT_USER=minio
      - MINIO_ROOT_PASSWORD=minio123
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
When I try to start it (docker-compose up -d), I see the following error in the minio container log:
2022-11-25 11:40:56 ERROR Unable to use the drive /data: Drive /data: found backend type fs, expected xl or xl-single - to migrate to a supported backend visit https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html: Invalid arguments specified
I've found the article https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html, but I still don't understand what I should change in my compose file to make it work.
It looks like you need to migrate the data/fs in your volume so that it can be used by the new version of MinIO, so you need to run the steps from
https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html
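A rough sketch of what that migration looks like, assuming the mc client is installed and the old and new deployments are running side by side on ports 9000 and 9100 (the alias names and the bucket name mybucket are placeholders for illustration; the authoritative steps are on the linked page):

mc alias set old http://localhost:9000 minio minio123
mc alias set new http://localhost:9100 minio minio123
mc mirror --preserve old/mybucket new/mybucket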
In your compose file you also need to declare the volume:
volumes:
  minio-data:
    driver: local
It is not a solution but a workaround for using a fresh version:
minio:
  container_name: minio
  image: bitnami/minio:2022.11.17-debian-11-r0
  volumes:
    - minio-data:/data
  ports:
    - 9000:9000
    - 9001:9001
  environment:
    - MINIO_ROOT_USER=minio
    - MINIO_ROOT_PASSWORD=minio123
    - MINIO_DEFAULT_BUCKETS=mybucket1,mybucket2
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
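To confirm the workaround container actually comes up healthy, standard Docker commands are enough:

docker-compose up -d
docker inspect --format '{{.State.Health.Status}}' minio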

How to use two healthchecks in a docker-compose file where the Python app depends on both healthchecks?

I have two Postgres-related containers, one mdillon/postgis and another postgrest/postgrest, and the Python app depends on the healthchecks of both. Please help.
In the terminal, after docker-compose up:
Creating compose_postgis_1 ... done
Creating compose_postgrest_1 ... done
Error for app: Container <postgrest_container_id> is unhealthy. Then the terminal exits.
Here is my docker-compose.yml file:
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:#postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
If you are using the official Postgres Docker image, there is an option to run Postgres on a specific port: add the PGPORT environment variable to the container so it runs on a different port. Try the version below...
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:#postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
      PGPORT: 3000
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
By default, a Postgres container runs on port 5432 inside the Docker network. Since you are not changing the Postgres container's port, both containers try to run on the same port inside the Docker network, and because of this one container will run and the other will not. You can check the containers' logs for a better understanding.
Hence, adding the PGPORT env var to the container so Postgres runs on a different port will resolve your issue...
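If pg_isready turns out not to be available inside the postgrest/postgrest image (it is a Postgres client tool, and PostgREST does not ship the Postgres binaries), an alternative is to health-check PostgREST's own HTTP port instead. A sketch, assuming the image provides a shell and wget, which is worth verifying first:

healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://localhost:3000/ || exit 1"]
  interval: 10s
  timeout: 5s
  retries: 5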

Docker compose always makes a new volume instead of using named ones

My problem is as the title states.
Here are the steps to reproduce. Note that I am using docker-compose file version 3.3, as I am running the apt Python version of docker-compose, since there is no binary for ARM64.
Create a docker-compose file with these contents:
version: "3.3"
volumes:
owncloud_data:
external: true
owncloud_mysql:
external: true
owncloud_backup:
external: true
owncloud_redis:
external: true
services:
traefik:
image: "traefik"
container_name: "traefik"
restart: "always"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.myresolver.acme.tlschallenge=true"
#- "--certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
# Be sure to have LeGo installed
- "--certificatesresolvers.myresolver.acme.email=<my email>"
- "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
whoami:
image: "containous/whoami"
container_name: "simple-service"
restart: "always"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`<my domain>`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
- "traefik.http.routers.whoami.tls.certresolver=myresolver"
owncloud:
image: "owncloud/server"
container_name: "owncloud"
restart: "always"
depends_on:
- db
- redis
environment:
- OWNCLOUD_DOMAIN=owncloud.<my domain>
- OWNCLOUD_DB_TYPE=mysql
- OWNCLOUD_DB_NAME=owncloud
- OWNCLOUD_DB_USERNAME=owncloud
- OWNCLOUD_DB_PASSWORD=owncloud
- OWNCLOUD_DB_HOST=db
- OWNCLOUD_ADMIN_USERNAME=<my username>
- OWNCLOUD_ADMIN_PASSWORD=<my password>
- OWNCLOUD_MYSQL_UTF8MB4=true
- OWNCLOUD_REDIS_ENABLED=true
- OWNCLOUD_REDIS_HOST=redis
healthcheck:
test: ["CMD", "/usr/bin/healthcheck"]
interval: 30s
timeout: 10s
retries: 5
labels:
- "traefik.enable=true"
- "traefik.http.routers.owncloud.rule=Host(`<my domain>`)"
- "traefik.http.routers.owncloud.entrypoints=websecure"
- "traefik.http.routers.owncloud.tls.certresolver=myresolver"
volumes:
- type: volume
source: owncloud_data
target: /owncloud/data
db:
image: webhippie/mariadb:latest
restart: always
environment:
- MARIADB_ROOT_PASSWORD=owncloud
- MARIADB_USERNAME=owncloud
- MARIADB_PASSWORD=owncloud
- MARIADB_DATABASE=owncloud
- MARIADB_MAX_ALLOWED_PACKET=128M
- MARIADB_INNODB_LOG_FILE_SIZE=64M
healthcheck:
test: ["CMD", "/usr/bin/healthcheck"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- type: volume
source: owncloud_mysql
target: /owncloud/mysql
- type: volume
source: owncloud_backup
target: /owncloud/backup
redis:
image: webhippie/redis:latest
restart: "always"
environment:
- REDIS_DATABASES=1
healthcheck:
test: ["CMD", "/usr/bin/healthcheck"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- type: volume
source: owncloud_redis
target: /owncloud/redis
I run this command before I start the containers:
for v in owncloud_data owncloud_mysql owncloud_backup owncloud_redis; do
  sudo docker volume create $v
done
I then run sudo docker-compose up.
When I run sudo docker volume ls I get the four (dangling) volumes I created earlier, plus four more that were created by docker-compose. What's the deal here? I specified their names explicitly. I'm also not sure why the bind for the traefik container works.
I have tried seemingly everything. I put things in quotes. I tried prepending the project name. I tried the short syntax owncloud_data:/owncloud/data. I tried binds, although I don't want to use binds since it's easier to back up a volume.
Thank you, Logan
docker-compose is using the volumes you named for the mounts you specified. However, it is also creating anonymous volumes for each VOLUME declaration in the images you are using that you don't explicitly mount over.
I've made this little script to extract the declared volumes from the images you're using:
images="traefik containous/whoami owncloud/server webhippie/mariadb:latest webhippie/redis:latest"
echo $images | xargs -n1 docker pull
docker inspect $images -f '{{.RepoTags}}, {{.Config.Volumes}}'
The result:
[traefik:latest], map[]
[containous/whoami:latest], map[]
[owncloud/server:latest], map[/mnt/data:{}]
[webhippie/mariadb:latest], map[/var/lib/backup:{} /var/lib/mysql:{}]
[webhippie/redis:latest], map[/var/lib/redis:{}]
None of your binds are binding to the declared volumes, so you're effectively creating new ones and leaving the declared ones empty.
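In other words, the fix implied by that output is to target the paths the images actually declare rather than the /owncloud/... paths; a sketch using the short volume syntax:

services:
  owncloud:
    volumes:
      - owncloud_data:/mnt/data
  db:
    volumes:
      - owncloud_mysql:/var/lib/mysql
      - owncloud_backup:/var/lib/backup
  redis:
    volumes:
      - owncloud_redis:/var/lib/redis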

Losing all modifications when a container is restarted with Docker Compose

I'm using docker-compose to run a web application. I want to make changes inside my container, modify some config files, and restart the container without losing the modifications.
I'm creating a container using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
After changing the config files, everything looks good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. Can someone explain the best way to do this without losing my modifications after a restart?
A useful technique here is to store a copy of the configuration files on the host and then inject them using a Docker-Compose volumes: directive.
version: '3'
services:
  myapp:
    image: me/myapp
    ports: ['8080:8080']
    volumes:
      - './myapp.ini:/app/myapp.ini'
It is fairly routine to destroy and recreate containers, and you want things to be set up so that everything is ready to go immediately once you docker run or docker-compose up.
Other good uses of bind-mounted directories like this are giving a container a place to publish log files back out, and, if your container happens to need persistent data on a filesystem, giving it a place to store that across container runs.
docker exec is a useful debugging tool, but it is not intended to be part of your core Docker workflow.
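One way to get an initial copy of the file to edit is to read it out of the image first; a sketch using the hypothetical me/myapp image and /app/myapp.ini path from the example above:

docker run --rm me/myapp cat /app/myapp.ini > myapp.ini
# edit myapp.ini as needed, then start the stack with the bind mount in place
docker-compose up -d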
Thanks @David Maze for your reply. In my case I have a script that changes many parameters in my app and generates an SSL certificate; after executing the script in my container I have to restart the container.
My docker-compose.yml:
version: '2.3'
services:
  wso2iot-mysql:
    image: mysql:5.7.20
    container_name: wso2iot-mysql
    hostname: wso2iot-mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql/scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 10s
      timeout: 60s
      retries: 5
  wso2iot-broker:
    image: docker.wso2.com/wso2iot-broker:3.3.0
    container_name: wso2iot-broker
    hostname: wso2iot-broker
    ports:
      - "9446:9446"
      - "5675:5675"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9446"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./broker:/home/wso2carbon/volumes/wso2/broker
  wso2iot-analytics:
    image: docker.wso2.com/wso2iot-analytics:3.3.0
    container_name: wso2iot-analytics
    hostname: wso2iot-analytics
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./analytics:/home/wso2carbon/volumes/wso2/analytics
    ports:
      - "9445:9445"
  wso2iot-server:
    image: docker.wso2.com/wso2iot-server:3.3.0
    container_name: wso2iot-server
    hostname: wso2iot-server
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./iot-server:/home/wso2carbon/volumes
    ports:
      - "9443:9443"
    links:
      - wso2iot-mysql

Using “bind” volume mount inside of Docker Swarm with a docker-compose file

My environment is using a 3 node Docker Swarm (all being managers) and I have a docker-compose.yaml that was created to deploy out to the Swarm.
Inside my docker-compose.yaml I have two services being setup, one is a MySQL instance and the other is my custom Django App.
What I am trying to do is twofold:
1. Mount a local directory (example: /test) into the container. This directory exists on the host/node/server, and I am trying to mount it onto a directory that exists in the container (example: /tmp).
2. Create a persistent database folder so that our MySQL data doesn't get destroyed when the container exits.
My issue is that I am not able to get a local host directory (in this case /test) to show up inside the container. I have tried using both the long syntax and the short syntax to create a "bind mount".
Here is my docker-compose.yaml file:
version: '3.2'
services:
  project_mysql:
    environment:
      MYSQL_USER: 'project'
      MYSQL_PASSWORD: 'password1234'
    ports:
      - 3306:3306
    image: 'mysql/mysql-server'
    tty: true
    stdin_open: true
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == node1
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: "5s"
      timeout: "1s"
  project_web:
    image: 'localhost:5123/project_web:0.1.5'
    tty: true
    stdin_open: true
    volumes:
      - type: bind
        source: /test
        target: /tmp
    ports:
      - 8000:8000
    depends_on:
      - project_mysql
    healthcheck:
      test: ["CMD-SHELL", "nc -z 127.0.0.1 8000 || exit 1"]
      interval: "5s"
      timeout: "1s"
networks:
  projectnet:
    driver: overlay
    ipam:
      config:
        - subnet: 10.2.0.0/24
Thanks for any assistance!
You need to add named volumes to the docker-compose.yaml file.
Before starting the instance, run docker volume create mysql-data.
Then, in docker-compose.yaml, add:
services:
  project_mysql:
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
    external: true
If you ever kill the service, the data will still persist.
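For the bind-mount half of the question: in Swarm, a bind mount refers to a path on whichever node the task is scheduled on, so /test must exist on every node that can run project_web, or the service must be pinned to a node that has it, using the same placement pattern the file already uses for project_mysql:

services:
  project_web:
    deploy:
      placement:
        constraints:
          - node.hostname == node1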
