I'm using docker-compose to spawn two containers, and I would like to share the /tmp directory between them (but not with the host's /tmp, if possible). This is because I'm uploading some files through Flask to /tmp and want to process those files from Celery.
flask:
  build: .
  command: "gulp"
  ports:
    - '3000:3000'
    - '5000:5000'
  links:
    - celery
    - redis
  volumes:
    - .:/usr/src/app:rw
celery:
  build: .
  command: "celery -A web.tasks worker --autoreload --loglevel=info"
  environment:
    - C_FORCE_ROOT="true"
  links:
    - redis
    - neo4j
  volumes:
    - .:/usr/src/app:ro
You can use a named volume:
flask:
  build: .
  command: "gulp"
  ports:
    - '3000:3000'
    - '5000:5000'
  links:
    - celery
    - redis
  volumes:
    - .:/usr/src/app:rw
    - tmp:/tmp
celery:
  build: .
  command: "celery -A web.tasks worker --autoreload --loglevel=info"
  environment:
    - C_FORCE_ROOT="true"
  links:
    - redis
    - neo4j
  volumes:
    - .:/usr/src/app:ro
    - tmp:/tmp
When Compose creates the volume for the first container, it is initialized with the contents of /tmp from the image. After that, it persists until deleted with docker-compose down -v.
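Note that the files above use the version 1 format; if you are on the version 2 or later file format, the named volume must also be declared in a top-level volumes: section, e.g.:

volumes:
  tmp: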
Related
I have an application where I need to reset the database (wipe it completely).
I ran all the commands I could find
docker system prune
docker system prune -a -f
docker volume prune
Using docker volume ls, I copied the volume ID and then ran
docker volume rm "the volume id"
When I do docker system df nothing is shown anymore. However, once I run my app again
docker-compose up --build
the database still contains old values.
What am I doing wrong?
EDIT: here is my compose file
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
networks:
- postgres
extra_hosts:
- "host.docker.internal:host-gateway"
restart: always
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/var/lib/postgresql/data
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
It seems the database in your config is mapped to a directory on your host system:

volumes:
  - /data/postgres:/var/lib/postgresql/data

so the data in the container's /var/lib/postgresql/data is read from and written to your local /data/postgres directory. This is also why none of the volume commands help: a bind-mounted host directory is not a Docker-managed volume, so docker volume prune and docker volume rm never touch it.
If you want to delete the data, empty out that directory. (Or move it until you are sure that you can delete it.)
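A minimal reset sequence, assuming /data/postgres is the host path from the compose file above:

docker-compose down
sudo rm -rf /data/postgres
docker-compose up --build

Docker recreates the bind-mount directory on the next start, and the postgres image initializes a fresh, empty cluster in it.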
I am dockerizing my existing application, but there's a strange issue. When I start my application with
docker-compose up
each service in the docker-compose file runs successfully with no issues. But there are some services I sometimes don't want to run (celery, celery-beat, etc.). For those cases I run
docker-compose run nginx
The above command should run the nginx, web, and db services as configured in docker-compose.yml, but it only runs web and db, not nginx.
Here's my yml file
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    env_file: .env
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - "5431:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  nginx:
    image: nginx:1.14
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./config/nginx/:/etc/nginx/conf.d
      - ./MyAPP/static:/var/www/MyAPP.me/static/
    depends_on:
      - web
  web:
    restart: always
    build: ./MyAPP
command: bash -c "
python manage.py collectstatic --noinput
&& python manage.py makemigrations
&& python manage.py migrate
&& gunicorn --certfile=/etc/certs/localhost.crt --keyfile=/etc/certs /localhost.key MyAPP.wsgi:application --bind 0.0.0.0:443 --reload"
    expose:
      - "443"
    depends_on:
      - db
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
      - ./config/nginx/certs/:/etc/certs
      - ./MyAPP/static:/var/www/MyAPP.me/static/
  broker:
    image: redis:alpine
    expose:
      - "6379"
  celery:
    build: ./MyAPP
    command: celery -A MyAPP worker -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  celery-beat:
    build: ./MyAPP
    command: celery -A MyAPP beat -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  comment-classifier:
    image: codait/max-toxic-comment-classifier
volumes:
  dbdata:
TL;DR: docker-compose up nginx
There's a distinct difference between docker-compose up and docker-compose run. The first builds, (re)creates, starts, and attaches to containers for a service. The second runs a one-time command against a service. When you do docker-compose run, it starts db and web because nginx depends on them, then it runs a single command on nginx and exits. So you have to use docker-compose up nginx in order to get what you want.
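For example, with the compose file above:

docker-compose up -d nginx
docker-compose run --rm nginx

The first command starts nginx along with its depends_on chain (web and db) as long-running services; the second starts web and db and then runs a single one-off nginx container. Note that docker-compose run also ignores the service's ports: mappings unless you pass --service-ports.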
I have several PostgreSQL services and some other services that are useful in my case (for creating an HA PostgreSQL cluster). The cluster is described in the docker-compose file below:
version: '3.3'
services:
  haproxy:
    image: haproxy:alpine
    ports:
      - "5000:5000"
      - "5001:5001"
      - "8008:8008"
    configs:
      - haproxy_cfg
    networks:
      - dbs
    command: haproxy -f /haproxy_cfg
  etcd:
    image: quay.io/coreos/etcd:v3.1.2
    configs:
      - etcd_cfg
    networks:
      - dbs
    command: /bin/sh /etcd_cfg
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
  dbnode2:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode2
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode2
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode2:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode2:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
  dbnode3:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode3
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode3
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode3:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode3:8008
    env_file:
      - test.env
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
networks:
  dbs:
    external: true
configs:
  haproxy_cfg:
    file: config/haproxy.cfg
  etcd_cfg:
    file: config/etcd.sh
secrets:
  patroni.yml:
    file: patroni.test.yml
I took this YAML from https://github.com/seocahill/ha-postgres-docker-stack.git. I deploy these services to Docker Swarm with:
docker network create -d overlay --attachable dbs && docker stack deploy -c docker-stack.test.yml test_pg_cluster
But if I create some databases, insert some data, and then restart the services, my data is lost.
I know that I need to use a volume to save the data on the host.
I created a volume with docker volume create pgdata (using the default Docker volume directory) and mounted it like this:
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    volumes:
      - pgdata:/data/dbnode1
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
volumes:
  pgdata:
When the container starts, it has its own configs in the data directory data/dbnode1 inside the container. If I mount the pgdata volume to store the data on the host, I can't connect to the database, and the container directory data/dbnode1 is empty. How can I create a persistent data volume that saves the changed PostgreSQL data?
It is much easier to create volumes by adding the host path directly. Check this example:
  dbnode1:
    image: seocahill/patroni:1.2.5
    secrets:
      - patroni.yml
    environment:
      - PATRONI_NAME=dbnode1
      - PATRONI_POSTGRESQL_DATA_DIR=data/dbnode1
      - PATRONI_POSTGRESQL_CONNECT_ADDRESS=dbnode1:5432
      - PATRONI_RESTAPI_CONNECT_ADDRESS=dbnode1:8008
    env_file:
      - test.env
    volumes:
      - /opt/dbnode1/:/data/dbnode1
    networks:
      - dbs
    entrypoint: patroni
    command: /run/secrets/patroni.yml
Note the lines

    volumes:
      - /opt/dbnode1/:/data/dbnode1

where I am using the host path /opt/dbnode1/ to store the container's filesystem mounted at /data/dbnode1.
Also, it is important to note that Docker Swarm does not create folders for you, so you have to create the folder before starting the service. Run mkdir -p /opt/dbnode1 to do so.
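Since Swarm can schedule a service on any node, the folder must exist on every node where the service might run. A quick sketch matching the dbnode1..dbnode3 services above:

# run on each swarm node that may host a db service
mkdir -p /opt/dbnode1 /opt/dbnode2 /opt/dbnode3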
I am trying to run the following docker-compose file:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
command: bash /opt/sql/create-db.sql
# command: ps -aux
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
I am encountering an error with the line:
command: bash /opt/sql/create-db.sql
This is because the pgsql service is not started: overriding command replaces the image's default command, so the Postgres server itself never runs. (This can be confirmed by swapping in command: ps -aux.)
How can I run my script once the pgsql service is started?
You can use a volume to provide an initialization SQL script:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
This works because the original PostgreSQL image contains an entrypoint script (run after Postgres has been started) that executes any *.sql files found in the /docker-entrypoint-initdb.d/ folder. By mounting your file there, your SQL will be run at the right time. Note that these init scripts only run when the data directory is empty, i.e. on the very first initialization.
It's actually mentioned in the documentation for that image (https://hub.docker.com/_/postgres) under the "How to extend this image" section.
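To verify that the script was executed, one option is to list the resulting tables from inside the running db service (assuming the default postgres superuser):

docker-compose exec db psql -U postgres -c '\dt'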
I'm running Docker with apache2. When doing docker-compose up -d, it needs 777 permissions on the /var/lib directory. If I give it 777 permissions, Docker starts, but at the same moment other applications like Skype and Sublime are no longer able to start and give errors like
cannot open cookie file /var/lib/snapd/cookie/snap.sublime-text
/var/lib/snapd has 'other' write 40777
So the problem is that Sublime needs 755 permissions while Docker needs 777 permissions.
Also, one of Docker's snap files is located inside /var/lib/snapd/snaps.
Because of this problem, I'm not able to use Docker and the other applications at the same time.
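To inspect the mode snapd is complaining about (the 40777 in the error decodes to a directory with permissions 777, where snapd expects 755), you can run:

stat -c '%a %n' /var/lib/snapd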
My docker-compose.yml
version: "3"
services:
app:
image: markoshust/magento-nginx:1.13
ports:
- 80:8000
links:
- db
- phpfpm
- redis
- elasticsearch
volumes:
- ./.docker/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/var/www/html:delegated
- ~/.composer:/var/www/.composer:delegated
- sockdata:/sock
phpfpm:
image: markoshust/magento-php:7.1-fpm
links:
- db
volumes:
- ./.docker/php.ini:/usr/local/etc/php/php.ini
- .:/var/www/html:delegated
- ~/.composer:/var/www/.composer:delegated
- sockdata:/sock
db:
image: percona:5.7
ports:
- 3306:3306
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
- MYSQL_USER=test
- MYSQL_PASSWORD=test
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:3.0
elasticsearch:
image: elasticsearch:5.2
volumes:
- esdata:/usr/share/elasticsearch/data
volumes:
dbdata:
sockdata:
esdata:
# Mark Shust's Docker Configuration for Magento
# (https://github.com/markoshust/docker-magento)
# Version 12.0.0