I'm trying to containerize a demo Node.js + React + PostgreSQL application, and the directory structure looks something like this:
demo:
- client
  - .env
  - Dockerfile.client
  - package.json
- server
  - .env
  - Dockerfile.server
  - package.json
- .env
- docker-compose.yml
docker-compose.yml
version: '3.8'
services:
  client:
    container_name: ${APP_NAME}_fe
    build:
      dockerfile: ./client/Dockerfile.client
    environment:
      CHOKIDAR_USEPOLLING: "true"
    image: ${APP_NAME}_img_fe
    volumes:
      - ./client:/app
      - /app/node_modules
    ports:
      - 3000:3000
  server:
    container_name: ${APP_NAME}_be
    build:
      # context: .
      dockerfile: ./server/Dockerfile.server
    image: ${APP_NAME}_img_be
    volumes:
      - ./server:/app
      - /app/node_modules
    ports:
      - 5000:5000
  db:
    container_name: ${APP_NAME}_db
    image: postgres
    build:
      # context: .
      dockerfile: ./server/Dockerfile.db
    env_file: ./server/.env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_DB: ${APP_NAME}
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  db-data:
    name: ${APP_NAME}_db
I'm trying to keep all the sensitive data (like passwords and keys), API URLs, and ports inside .env files so I have all the info in one place, something like this:
server/.env
PORT=5000
DB_PASSWORD=postgres
DB_USER=postgres
DB_NAME=demo
.env at the root level
APP_NAME=demo
The problem is that for the db service (the postgres container), the credentials are not visible at creation time, so I need to hardcode them:
environment:
  POSTGRES_PASSWORD: postgres
  POSTGRES_USER: postgres
  POSTGRES_DB: demo
Is there a way to fetch these values from the .env files?
Later edit: I think one way is to keep only one .env at the root of the application with:
- db password
- db user
- port (both server and client)
- api url
In this case I need to access the .env from both the client and server folders.
I'm not sure whether this is common practice or not.
Thank you.
You don't need to access the .env file from your applications. You can have Compose substitute variables when it parses docker-compose.yml by simply running docker-compose up in the right folder.
The issue you're running into is probably that you have multiple .env files distributed across your project, with different values.
From the documentation:
If you have multiple environment variables, you can substitute them by providing a path to your environment variables file. By default, the docker-compose command will look for a file named .env in the directory you run the command. By passing the file as an argument, you can store it anywhere and name it appropriately, for example, .env.ci, .env.dev, .env.prod. Passing the file path is done using the --env-file option:
That means that not only is it good practice to keep .env files, but also that you should probably keep a single .env file and run docker-compose up from the folder that holds it (usually the project root).
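For example, assuming a single .env at the project root (the .env.dev file name below is only an illustration):

# Run from the project root; Compose picks up ./.env automatically
cd demo && docker-compose up -d

# Or point Compose at an explicitly named file
docker-compose --env-file ./.env.dev up -d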
It seems to me that in your environment block the '-' list syntax is what's missing. Note that in list form each entry is a KEY=VALUE string, so it would be:

environment:
  - POSTGRES_PASSWORD=${DB_PASSWORD}
  - POSTGRES_USER=${DB_USER}
  - POSTGRES_DB=${APP_NAME}
Keep docker-compose.yml and the .env file in the same directory and try:
docker-compose up -d
I always use -d to start the containers in the background (detached mode).
You can also see the list of Postgres environment variables at this link:
https://hub.docker.com/_/postgres
Related
I have a simple docker-compose file to create a MySQL database for my app, but I cannot interpolate the environment variable MYSQL_PORT to set a custom port: running docker compose up with the configuration below results in a random port being assigned to MySQL.
The path to the env file does work, since I have env variables configuring the database.
docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    volumes:
      - mysql_data:/var/lib/mysql
    env_file:
      - ../../.env
    ports:
      - ${MYSQL_PORT}:3306
volumes:
  mysql_data:
.env
MYSQL_PORT=3306
MYSQL_ROOT_PASSWORD=root
MYSQL_DATABASE=final_project_database
MYSQL_USER=db_user
MYSQL_PASSWORD=some_db_user_password
Use the --env-file option with the docker-compose up command. The env_file declared in your MySQL service applies only to the container environment, not to variable interpolation in the compose file itself.
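For example, reusing the relative path from your env_file entry:

docker-compose --env-file ../../.env up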
Move your .env file to the same directory as the docker-compose file and change the env_file to point to it. That way both docker-compose and the container will use the same environment file.
Right now it's only the container that's using it.
version: '3'
services:
  mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    volumes:
      - mysql_data:/var/lib/mysql
    env_file:
      - ./.env
    ports:
      - ${MYSQL_PORT}:3306
volumes:
  mysql_data:
I get the error below when I run docker-compose up; any pointers on why I am getting it?
service "mysqldb-docker" refers to undefined volume mysqldb: invalid compose project
Also, is there a way to pass the $ENV value on the CLI to docker-compose up? Currently I have an ENV variable that specifies dev, uat, or prod, which I use in the database name. Are there better alternatives than creating a .env file explicitly for this?
version: '3.8'
services:
  mysqldb-docker:
    image: '8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3309:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-$ENV
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3309/reco-tracker-$ENV"
    depends_on: [mysqldb-docker]
You must define volumes at the top level like this:
version: '3.8'
services:
  mysqldb-docker:
    # ...
    volumes:
      - mysqldb:/var/lib/mysql
volumes:
  mysqldb:
You can pass environment variables from your shell straight through to a service's containers with the environment key by not giving them a value:
https://docs.docker.com/compose/environment-variables/#pass-environment-variables-to-containers
web:
  environment:
    - ENV
But from my tests, you can't just write $ENV in the compose file and expect it to read your shell environment. For that you need to call docker-compose this way:
docker-compose run -e ENV web python console.py
see this : https://docs.docker.com/compose/environment-variables/#set-environment-variables-with-docker-compose-run
I'm using docker-compose and I'm trying to run an Express app and a Postgres db in Docker containers.
My problem is that it only starts the postgres image; the Express app does not run.
What am I doing wrong?
I've published it on my github: https://github.com/ayakymyshyn/docker-playground
Looking at your docker-compose file and Dockerfile, I assume that your intention is for the web service in the compose file to run the image produced by the Dockerfile.
If that is the case, you need to modify the compose file and tell it to build the image from the Dockerfile.
It should look something like this:
version: "3.7"
services:
web:
image: node
build: . # <--- this is the missing line
depends_on:
- db
ports:
- '3001:3001'
db:
image: postgres
environment:
POSTGRES_PASSWORD: 123123123
POSTGRES_USER: yakym
POSTGRES_DB: jwt
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- '5433:5433'
I have two Rails 6 applications and I am trying to deploy them to an AWS EC2 instance on different ports, 8080 and 8081. When I run docker-compose up -d it starts the first application successfully, but if I then run docker-compose up -d for the second application, it takes the first one down and brings the second one up on its port.
Below is my Docker configuration for the two applications.
Application 1
version: "3.4"
services:
app:
image: "dockerhub_repo/a_api:${TAG}"
# build:
# context: .
# dockerfile: Dockerfile
container_name: a_api_container
depends_on:
- database
- redis
- sidekiq
ports:
- "8080:8080"
volumes:
- .:/app
env_file: .env
environment:
RAILS_ENV: staging
database:
image: postgres:12.1
container_name: a_database_container
restart: always
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
sidekiq:
image: "dockerhub_repo/a_api:${STAG}"
container_name: a_sidekiq_container
environment:
RAILS_ENV: staging
env_file: .env
depends_on:
- redis
volumes:
- ".:/app"
redis:
image: redis:4.0-alpine
container_name: a_redis_container
volumes:
- "redis:/data"
volumes:
redis:
db_data:
Application 2
version: "3.4"
services:
app:
image: "dockerhub_repo/b_api:${PPTAG}"
build:
context: .
dockerfile: Dockerfile
container_name: b_api
depends_on:
- database
- redis
ports:
- "8081:8081"
volumes:
- .:/app
env_file: .env
environment:
RAILS_ENV: development
database:
image: postgres:12.1
container_name: pp_database
restart: always
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
redis:
image: redis:4.0-alpine
container_name: pp_redis
volumes:
db_data:
This configuration works very well on my local machine: it starts both applications locally on different ports. But it has issues on AWS EC2. Is there anything wrong in the configuration?
Compose has the notion of a project name. If you add or delete containers from a docker-compose.yml file, it looks for existing containers that are labeled with the project name to figure out what needs to change. The project name is also included in the Docker names of containers, networks, and volumes.
You can configure the project name with the COMPOSE_PROJECT_NAME environment variable or the docker-compose -p option. If you don't configure it, it defaults to the base name of the current directory.
You clarify in a comment that the two docker-compose.yml files are in directories app1/backend and app2/backend. Since the base name of those directories are both backend, they have the same project name; so if you run docker-compose up in the app2/backend directory, it finds the existing containers for the backend project, sees they don't match what's in the docker-compose.yml file, and deletes them (even though you as the operator think they belong to the other project).
There are a couple of ways to get around this:
- Rename one or the other directory; maybe move the docker-compose.yml files up to the top-level app1 and app2 directories.
- In one or both directories, create a .env file that sets COMPOSE_PROJECT_NAME=app1. (Note that this file is looked up in the current directory, not necessarily the directory that contains the docker-compose.yml file.)
- Set an environment variable: export COMPOSE_PROJECT_NAME=app1.
- Consistently pass an option, docker-compose -p app1 ..., with all Compose commands (see the sketch below).
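For instance, a minimal sketch of the last two options, using the directory names from the comment above:

# Per-shell project name
export COMPOSE_PROJECT_NAME=app1
docker-compose up -d

# Or pass the project name explicitly on every command
cd app1/backend && docker-compose -p app1 up -d
cd ../../app2/backend && docker-compose -p app2 up -d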
I am new to Docker and am developing a project using docker-compose. From the documentation I have learned that I should be using data-only containers to keep data persistent, but I am unable to do so using docker-compose.
Whenever I do docker-compose down it removes the data from the db, but with docker-compose stop the data is not removed. Maybe this is because I am not creating a named data volume, and docker-compose down removes all the containers. So I tried naming the volume, but it threw errors.
Please have a look at my yml file:
version: '2'
services:
  data_container:
    build: ./data
    #volumes:
    #  - dataVolume:/data
  db:
    build: ./db
    ports:
      - "5445:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
      # - PGDATA=/var/lib/postgresql/data/pgdata
    volumes_from:
      # - container:db_bus
      - data_container
  geoserver:
    build: ./geoserver
    depends_on:
      - db
    ports:
      - "8004:8080"
    volumes:
      - ./geoserver/data:/opt/geoserverdata_dir
  web:
    build: ./web
    volumes:
      - ./web:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    command: python manage.py runserver 0.0.0.0:8000
  nginx:
    build: ./nginx
    ports:
      - "83:80"
    depends_on:
      - web
The Dockerfile for the data_container is:
FROM stackbrew/busybox:latest
MAINTAINER Tom Offermann <tom@offermann.us>

# Create data directory
RUN mkdir /data

# Create /data volume
VOLUME /data
I tried this, but with docker-compose down the data is lost. I also tried naming the volume (see the commented lines above), and it threw this error:
ERROR: Named volume "dataVolume:/data:rw" is used in service "data_container" but no declaration was found in the volumes section.
So right now I have created a standalone, data-only named container and put it in the volumes_from value of the db service. It works fine and doesn't remove any data, even after docker-compose down.
My queries:
What is the best approach to create containers that persist a database's data with docker-compose, and how do I use them properly?
My gut isn't happy with the approach I've opted for, creating a standalone data container. Any thoughts?
docker-compose down does the following:

  Stops containers and removes containers, networks, volumes, and images created by up

So the behaviour you are experiencing is expected.
Use docker-compose stop to shut down the containers created from the docker-compose file without removing their volumes.
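For example, a stop/start cycle keeps the data:

docker-compose stop   # containers stop, but they and their volumes remain
docker-compose start  # the same containers come back with their data intact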
Secondly, you don't need the data-container pattern in version 2 of Docker Compose. Remove it and just use:
db:
  ...
  volumes:
    - /var/lib/postgresql/data
docker-compose down stops containers but also removes them (with everything: networks, ...).
Use docker-compose stop instead.
I think the best approach to create containers that persist a database's data with docker-compose is to use named volumes:
version: '2'
services:
  db: # https://hub.docker.com/_/mysql/
    image: mysql
    volumes:
      - "wp-db:/var/lib/mysql:rw"
    env_file:
      - "./conf/db/mysql.env"
volumes:
  wp-db: {}
Here, it will create a named volume called "wp-db" (if it doesn't exist) and mount it in /var/lib/mysql (in read-write mode, the default). This is where the database stores its data (for the mysql image).
If the named volume already exists, it will be used without creating it.
When starting, the mysql image looks for existing databases in /var/lib/mysql (your volume) and uses them if present.
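If you want to check the volume yourself, docker volume ls and docker volume inspect work; note that Compose prefixes the volume name with the project name ("myproject" below is just a placeholder):

docker volume ls | grep wp-db
docker volume inspect myproject_wp-db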
You can find more information in the docker-compose file reference here:
https://docs.docker.com/compose/compose-file/#/volumes-volume-driver
To store database data, make sure your docker-compose.yml looks like the following. If you want to build from a Dockerfile:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
And your docker-compose.yml will look like this if you want to use a prebuilt image instead of a Dockerfile:
version: '3.1'
services:
  php:
    image: php:7.4-apache
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to store and preserve the MySQL data, remember to add these two sections to your docker-compose.yml:
volumes:
  - mysql-data:/var/lib/mysql
and
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will persist and will not be deleted, even after running:
docker-compose down
Extra: if you want to delete all the data, including the volumes, use
docker-compose down -v
To verify, list the Docker volumes with this command:
docker volume ls
DRIVER VOLUME NAME
local 35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local 133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local 483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local 725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local phphelloworld_mysql-data