docker-compose: run a command on a pgsql container

I am trying to run the following docker-compose file:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
command: bash /opt/sql/create-db.sql
# command: ps -aux
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
I am encountering an error with the line:
command: bash /opt/sql/create-db.sql
This is because the pgsql service has not started yet, which can be verified by swapping in the commented-out command: ps -aux.
How can I run my script once the pgsql service has started?

You can use a volume to provide an initialization sql script:
version: "3"
services:
db:
image: postgres
container_name: pgsql
environment:
- foo=foo
- bar=bar
volumes:
- ./sql/:/opt/sql
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
web:
image: benit/debian-web
container_name: web
depends_on:
- db
ports:
- 80:80
volumes:
- ./html:/var/www/html
This works because the official PostgreSQL image's entrypoint contains a script (run after Postgres has started) that executes any *.sql files found in the /docker-entrypoint-initdb.d/ directory.
By mounting your SQL file into that directory, it will be run at the right time.
This is actually mentioned in the documentation for that image, https://hub.docker.com/_/postgres, under the How to extend this image section.
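One caveat worth knowing: the entrypoint only runs these scripts when the data directory is empty, i.e. on the very first initialization. A quick sketch for forcing a re-run and confirming the script executed:

docker-compose down -v   # -v also removes the data volume, so initdb runs again
docker-compose up -d db
docker-compose logs db   # should show /docker-entrypoint-initdb.d/init.sql being run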

Related

Docker Compose not reading multiple files

Using the below docker compose files, I am unable to bring up my app correctly. Docker says my LAPIS_ENV environment variable is not set, but I am setting it in my second compose file, which I expect to be merged into the first one. I have tried including them in reverse order, to no avail.
version: '2.4'
services:
  backend:
    mem_limit: 50mb
    memswap_limit: 50mb
    build:
      context: ./backend
      dockerfile: Dockerfile
    depends_on:
      - postgres
    volumes:
      - ./backend:/var/www
      - ./data:/var/data
    restart: unless-stopped
    command: bash -c "/usr/local/bin/docker-entrypoint.sh ${LAPIS_ENV}"
  postgres:
    build:
      context: ./postgres
      dockerfile: Dockerfile
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - postgres:/var/lib/postgresql/data
      - ./postgres/pg_hba.conf:/var/lib/postgres/data/pg_hba.conf
      - ./data/backup:/pgbackup
    restart: unless-stopped
volumes:
  postgres:
version: '2.4'
services:
  backend:
    environment:
      LAPIS_ENV: development
    ports:
      - 8080:80
#!/usr/bin/env bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
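A likely explanation (sketch, not verified against this exact setup): compose substitutes ${LAPIS_ENV} in command: at parse time, from the host shell or a top-level .env file, not from the merged service's environment: section. Defining the variable where interpolation can see it avoids the warning:

#!/usr/bin/env bash
# Make LAPIS_ENV visible to compose's variable interpolation
export LAPIS_ENV=development
docker compose -f docker-compose.yml -f docker-compose.dev.yml up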

Nginx do not start with docker-compose run command

I am dockerizing my existing application, but there's a strange issue. When I start my application with
docker-compose up
each service in the docker-compose file runs successfully with no issues. But there are some services which I don't want to run sometimes (celery, celery-beat, etc.). For that I run
docker-compose run nginx
The above command should run the nginx, web, and db services as configured in docker-compose.yml, but it only runs web and db, not nginx.
Here's my yml file
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    env_file: .env
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - "5431:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  nginx:
    image: nginx:1.14
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - ./config/nginx/:/etc/nginx/conf.d
      - ./MyAPP/static:/var/www/MyAPP.me/static/
    depends_on:
      - web
  web:
    restart: always
    build: ./MyAPP
    command: bash -c "
      python manage.py collectstatic --noinput
      && python manage.py makemigrations
      && python manage.py migrate
      && gunicorn --certfile=/etc/certs/localhost.crt --keyfile=/etc/certs/localhost.key MyAPP.wsgi:application --bind 0.0.0.0:443 --reload"
    expose:
      - "443"
    depends_on:
      - db
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
      - ./config/nginx/certs/:/etc/certs
      - ./MyAPP/static:/var/www/MyAPP.me/static/
  broker:
    image: redis:alpine
    expose:
      - "6379"
  celery:
    build: ./MyAPP
    command: celery -A MyAPP worker -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  celery-beat:
    build: ./MyAPP
    command: celery -A MyAPP beat -l info
    env_file:
      - .env
    volumes:
      - ./MyAPP:/opt/MyAPP
    depends_on:
      - broker
      - db
  comment-classifier:
    image: codait/max-toxic-comment-classifier
volumes:
  dbdata:
TL;DR: docker-compose up nginx
There's a distinct difference between docker-compose up and docker-compose run. The first builds, (re)creates, starts, and attaches to containers for a service. The second runs a one-off command against a service. When you do docker-compose run nginx, it starts db and web because nginx depends on them, then runs the service's command in a fresh one-off nginx container; note that run does not publish the service's ports unless you pass --service-ports. So you have to use docker-compose up nginx to get what you want.
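A short sketch of both commands:

# Start nginx plus its dependency chain (web, db):
docker-compose up nginx

# If a one-off container is really needed, publish its ports explicitly:
docker-compose run --service-ports nginx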

Why Dockerfile doesn't run multiple commands

I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 is dead:
localhost:3000
localhost:8080
I would suggest creating a container for the server and keeping it separate from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications that I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked
    # the two containers, the chat app will be able to connect to mongodb using
    # hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
By the way, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
    depends_on:
      - mongo
This docker-compose file works for me. Note that I am saving the data from the database to a local directory. You should add this directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that, on the Node side, you have to wait for the MongoDB container to be ready before you can connect to the database.
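One way to handle that at the compose level is a healthcheck plus a depends_on condition. A sketch, assuming compose file format 2.1-2.4 (the service_healthy condition was dropped in the v3 format) and that the image still ships the legacy mongo shell:

  mongo:
    image: mongo:latest
    healthcheck:
      test: ["CMD-SHELL", "mongo --eval 'db.adminCommand(\"ping\")' --quiet"]
      interval: 10s
      timeout: 5s
      retries: 5
  express:
    build: .
    depends_on:
      mongo:
        condition: service_healthy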

Cannot launch django with celery in Docker Compose v3

Here is my docker-compose.yml:
version: '3.4'
services:
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
  web:
    restart: always
    image: celery-with-docker-compose:latest
    build: .
    command: bash -c "python /code/manage.py collectstatic --noinput && python /code/manage.py migrate && /code/run_gunicorn.sh"
    volumes:
      - /static:/data/web/static
      - /media:/data/web/media
      - .:/code
    env_file:
      - ./.env
    depends_on:
      - db
    volumes:
      - ./app:/deploy/app
  worker:
    image: celery-with-docker-compose:latest
    restart: always
    build:
      context: .
    command: bash -c "pip install -r /code/requirements.txt && /code/run_celery.sh"
    volumes:
      - .:/code
    env_file:
      - ./.env
    depends_on:
      - redis
      - web
  db:
    restart: always
    image: postgres
    env_file:
      - ./.env
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    restart: always
    image: redis:latest
    privileged: true
    command: bash -c "sysctl vm.overcommit_memory=1 && redis-server"
    ports:
      - "6379:6379"
volumes:
  pgdata:
When I run docker stack deploy -c docker-compose.yml cryptex I get
Non-string key at top level: true
and docker-compose -f docker-compose.yml config gives me
ERROR: In file './docker-compose.yml', the service name True must be a quoted string, i.e. 'True'.
I'm using the latest versions of docker and compose. I'm also new to compose v3 and started using it to get access to the docker stack command. If you see any mistakes or redundancies in the config file, please let me know. Thanks.
You have to validate your docker-compose file; most likely there is an invalid value inside.
Validating your file is as simple as docker-compose -f docker-compose.yml config. As always, you can omit the -f docker-compose.yml part when running this in the same folder as the file itself.
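As a contrived sketch of what can trigger that message (not necessarily this file's exact problem): YAML 1.1 parses unquoted true/True as a boolean, so a boolean mapping key becomes a non-string service name:

version: '3.4'
services:
  true:        # parsed as the boolean true -> "service name True must be a quoted string"
    image: redis

Quoting the key ('true':) turns it back into a string.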

how to run docker exec on a docker-compose.yml

I am trying to create a MySQL database schema while the docker-compose.yml file is being executed:
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
links:
- web
onrun:
command: "docker exec -i test_mysql_1 mysql -uroot -proot test <dummy1.sql"
I tried onrun but this is not working.
I am building the first image, but pulling the second image from Docker Hub.
Kindly help with how to execute the above command after docker-compose up.
There is nothing like onrun in docker-compose; it will only bring up the containers and execute their commands. You now have a few possible options.
Use the mysql image's initialization mechanism
mysql:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=test
  volumes:
    - ./dummy1.sql:/docker-entrypoint-initdb.d/dummy1.sql
  ports:
    - "3306:3306"
You may put your sql files inside /docker-entrypoint-initdb.d in the container; they are executed automatically.
Use a bash script
docker-compose up -d
# Give mysql some time to come up
sleep 20
# -T disables pseudo-TTY allocation so the stdin redirect works
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql
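A fixed sleep is fragile; a polling sketch with mysqladmin ping (shipped in the mysql image) waits only as long as needed:

docker-compose up -d
# Poll the server until it accepts connections, then load the dump
until docker-compose exec -T mysql mysqladmin ping -uroot -proot --silent; do
  sleep 2
done
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql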
Use another docker service to initialize the DB
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
mysqlinit:
image: mysql:latest
volumes:
- ./dummy1.sql:/dump/dummy1.sql
command: bash -c "sleep 20 && mysql -h mysql -uroot -proot test < /dump/dummy1.sql"
Here you run another service that initializes the DB for you, like mysqlinit in the example above.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
From https://hub.docker.com/_/mysql/
That is the convenient way many databases (PostgreSQL, MySQL, ...) initialize themselves on container creation. You should create a *.sql / *.sh file and bind-mount it into the new container:
db:
  image: mysql:latest
  volumes:
    - ./db/entrypoint:/docker-entrypoint-initdb.d
  environment:
    - MYSQL_ROOT_PASSWORD=iamgroot
    - MYSQL_DATABASE=gotg
This mounts all your sql / sh files into the container, where they are then executed automatically.
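Since the files run in alphabetical order (per the docs quoted above), numeric prefixes (hypothetical names) keep the schema ahead of the seed data:

db/entrypoint/
  01-schema.sql
  02-seed-data.sql
  03-create-users.sh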
