I have created a docker-compose file with two services, web and postgres. When I run the docker-compose file below, it does not create the tables that exist in my local Postgres database. I have provided the environment variables to use; I assumed it would use these variables, grab the schema, and create the schema in Docker. When I call the REST API, I get relation "x" does not exist.
version: "3"
services:
  web:
    build: .
    depends_on:
      - postgres
    ports:
      - "8000:8000"
    volumes:
      - postgres-db:/var/lib/postgresql/data
    environment:
      - DATABASE_HOSTNAME=postgres
      - DATABASE_PORT=5432
      - DATABASE_PASSWORD=password!
      - DATABASE_NAME=dump
      - DATABASE_USERNAME=postgres
      - SECRET_KEY=021158d8d8d8d8d8d8d8d87
      - ALGORITHM=HS256
      - ACCESS_TOKEN_EXPIRE_MINUTES=50
  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=password!
      - POSTGRES_DB=dump
volumes:
  postgres-db: {}
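The postgres container starts empty; it does not reach into a database on the host, so the tables have to be created inside the container, either by the app's migrations or by a schema file loaded at first startup. The official postgres image executes any *.sql file mounted under /docker-entrypoint-initdb.d/ the first time it initializes an empty data directory. A minimal sketch of the postgres service, assuming a schema.sql produced on the host with `pg_dump --schema-only dump > schema.sql` (note the data volume belongs on the postgres service, not on web):

```yaml
postgres:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=password!
    - POSTGRES_DB=dump
  volumes:
    - postgres-db:/var/lib/postgresql/data                  # persist the data here, not in web
    - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql   # runs once, on first init only
```

If the named volume already contains data from an earlier run, the init script is skipped; `docker-compose down -v` resets the volume so the script runs again.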
I am trying to create two databases in a docker-compose YAML file: one for the app and the other for tests. In the Java Spring framework I use a URL like "jdbc:postgresql://localhost:5401/webTest", but it does not work.
From the command line I can connect to database-user with no problem and the tables are there, but I cannot connect to database-test. Is there a specific issue I am blind to?
#service
services:
  database-user:
    #container_name: postgres-user
    image: postgres
    ports:
      - 5401:5432
    volumes:
      - postgres-user:/var/lib/postgresql/data
      - ./scripts/create-table-db.sql:/docker-entrypoint-initdb.d/create-table-db.sql
    environment:
      - POSTGRES_USER=webAppUser
      - POSTGRES_PASSWORD=user
      - POSTGRES_DB=webApp
  database-test:
    #container_name: postgres-test
    image: postgres
    ports:
      - 5402:5432
    volumes:
      - postgres-test:/var/lib/postgresql/data
      - ./scripts/create-table-db.sql:/docker-entrypoint-initdb.d/create-table-db.sql
    environment:
      - POSTGRES_USER=webAppTest
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=webTest
volumes:
  postgres-user:
  postgres-test:
I did try to follow some examples (like here), but it's not clear.
(database-user does also work in the Java part.)
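One thing worth noting about the file above: webTest is created on the database-test service, which is published on host port 5402, while host port 5401 maps to database-user (which only has the webApp database). So, assuming the compose file above, connecting to webTest from the host would need:

```
jdbc:postgresql://localhost:5402/webTest
```

together with the credentials defined on that instance (user webAppTest, password test), since each container is a separate Postgres server with its own roles and databases.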
When running docker-compose up -d, I expect 2 databases to be created.
docker-compose.yml:
version: '3.4'
volumes:
  db_data:
services:
  postgres:
    image: postgres:alpine
    environment:
      - POSTGRES_PASSWORD=Password123
      - POSTGRES_DB=database1
    ports:
      - "5432:5432"
  platform:
    image: image1/platform:${TAG:-latest}
    build:
      context: .
      dockerfile: PlatformApi/Dockerfile
    restart: on-failure
    environment:
      - ASPNETCORE_ENVIRONMENT=Local
      - ConnectionStrings__DefaultConnection=Server=postgres;Port=5432;Uid=postgres;Pwd=Password123;Database=database1
    ports:
      - "5001:80"
    depends_on:
      - postgres
    volumes:
      - .docker/setup.sql:/docker-entrypoint-initdb.d/setup.sql
      - db_data:/var/lib/mysql
  identity:
    image: image2/identity:${TAG:-latest}
    build:
      context: .
      dockerfile: Identity/Dockerfile
    restart: on-failure
    environment:
      - ASPNETCORE_ENVIRONMENT=Local
      - ConnectionStrings__DefaultConnection=Server=postgres;Port=5432;Uid=postgres;Pwd=Password123;Database=database2
    ports:
      - "5002:80"
    depends_on:
      - postgres
    volumes:
      - .docker/setup.sql:/docker-entrypoint-initdb.d/setup.sql
      - db_data:/var/lib/mysql
This is my setup.sql file which is located inside a .docker folder
CREATE DATABASE IF NOT EXISTS database1;
CREATE USER postgres IDENTIFIED BY Password123;
GRANT CREATE, ALTER, INDEX, LOCK TABLES, REFERENCES, UPDATE, DELETE, DROP, SELECT, INSERT ON database1.* TO postgres;
CREATE DATABASE IF NOT EXISTS database2;
CREATE USER postgres IDENTIFIED BY Password123;
GRANT CREATE, ALTER, INDEX, LOCK TABLES, REFERENCES, UPDATE, DELETE, DROP, SELECT, INSERT ON database2.* TO postgres;
FLUSH PRIVILEGES;
When I run docker-compose up -d, three containers are created, but one of them exits with the error database "database2" does not exist.
What did I do wrong? Did the setup.sql file not execute, or is its content incorrect?
In the postgres service you initialize the database with the following config:
postgres:
  image: postgres:alpine
  environment:
    - POSTGRES_PASSWORD=Password123
    - POSTGRES_DB=database1
  ports:
    - "5432:5432"
As a result, a database named database1 is created.
The other services then try to connect to it.
In the platform service:
- ConnectionStrings__DefaultConnection=...;Database=database1
Here there is no issue, since database1 exists.
But in the identity service:
- ConnectionStrings__DefaultConnection=...;Database=database2
you try to connect to database2, which does not exist.
The reason it does not exist is that your setup.sql never runs: /docker-entrypoint-initdb.d is only processed by the postgres image, and the script is mounted into the platform and identity application containers instead of the postgres one. Even if it were mounted correctly, it is written in MySQL syntax (CREATE DATABASE IF NOT EXISTS, CREATE USER ... IDENTIFIED BY, FLUSH PRIVILEGES), which PostgreSQL rejects; the db_data:/var/lib/mysql mounts likewise point at MySQL's data path, not Postgres's.
To tackle this, you could add a postgres2 service which creates database2:
postgres2:
  image: postgres:alpine
  environment:
    - POSTGRES_PASSWORD=Password123
    - POSTGRES_DB=database2
  ports:
    - "5433:5432"
identity:
  image: image2/identity:${TAG:-latest}
  build:
    context: .
    dockerfile: Identity/Dockerfile
  restart: on-failure
  environment:
    - ASPNETCORE_ENVIRONMENT=Local
    - ConnectionStrings__DefaultConnection=Server=postgres2;Port=5432;Uid=postgres;Pwd=Password123;Database=database2
  ports:
    - "5002:80"
  depends_on:
    - postgres2
  volumes:
    - .docker/setup.sql:/docker-entrypoint-initdb.d/setup.sql
    - db_data:/var/lib/mysql
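Alternatively, you could keep a single postgres service and create the second database from an init script, as long as the script is mounted into the postgres container (mounting it into the application containers does nothing, since only the postgres image honours /docker-entrypoint-initdb.d) and is written in Postgres syntax. A sketch; PostgreSQL has no CREATE DATABASE IF NOT EXISTS, CREATE USER ... IDENTIFIED BY, or FLUSH PRIVILEGES:

```sql
-- .docker/setup.sql, mounted to /docker-entrypoint-initdb.d/setup.sql on the
-- postgres service; runs once, when the data directory is first initialized.
-- database1 already exists via POSTGRES_DB=database1 and the postgres
-- superuser already exists, so only the second database is needed:
CREATE DATABASE database2;
GRANT ALL PRIVILEGES ON DATABASE database2 TO postgres;
```

With this approach the identity service keeps Server=postgres in its connection string, and no second Postgres container is needed.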
There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine; however, when I try to open it in a remote container, it throws this error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services: redis, mongodb, db, rails.
I start the remote containers with the following commands:
docker-compose build - the build is successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name and not to a port on localhost. For example, if you were connecting to Redis as localhost:6379 or 127.0.0.1:6379, you now need to use redis:6379.
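For example, if the Rails app uses Mongoid, the hosts entry in its config would point at the compose service name on MongoDB's default port (a sketch; the database name and the project's actual config layout may differ):

```yaml
# config/mongoid.yml (sketch)
development:
  clients:
    default:
      database: proj_development   # hypothetical database name
      hosts:
        - mongodb:27017            # the compose service name, not localhost
```

The same applies to the Postgres settings: the host is db, not localhost.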
If this still does not help, you can try adding links between the containers so that the names given to them as services can be resolved. The file would then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
Note that only rails links to the services it connects to; mutual links between all services would create circular dependencies, which docker-compose rejects.
The links allow hostnames to be defined in the containers.
The link flag is legacy, and in new versions of docker-engine it is not required for user-defined networks; links are also ignored in a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose, this is one thing to try when troubleshooting.
I have a simple Rails/React app that works with Docker, with three services:
'database' for Postgres
'web' for Rails
'webpack_dev_server' for React
In AWS I have:
* built a custom image for nginx
* set up S3 to hold ECS configs
* created a production cluster
* created private repositories for 'web' and nginx, tagged both images, and pushed them to the repositories
* created 4 EC2 instances, 2 for the web and 2 for React
Now I'm ready to create task definitions, but I'm not sure how to handle webpack_dev_server (React).
Can we build the image with the same Dockerfile as the web?
For the task definition, should it look like the web's as well?
Here's the docker-compose.yml file that works:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    env_file:
      - .env/development/database
      - .env/development/web
    environment:
      - WEBPACKER_DEV_SERVER_HOST=webpack_dev_server
      - DOCKERIZED=true
  webpack_dev_server:
    build: .
    command: ./bin/webpack-dev-server
    ports:
      - 3035:3035
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACK_DEV_SERVER=0.0.0.0
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
  gem_cache:
Is there a way in docker-compose.yml to include a db service only in specific environments ("test" in my case)?
For a Ruby project, development and production both use a remote Postgres database, but test needs its own local Postgres database.
What I have now is shown below. It "works" in the sense that when we run in development the db container is simply ignored by our code (our development ENV supplies a remote Postgres URL instead of using the db host). But it would be nicer not to spin up an unused Docker container for db when running in development.
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
    links:
      - db.local
    depends_on:
      - db
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
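One way to do this, assuming a docker-compose version that supports the Compose spec's profiles (v1.28+), is to put db behind a test profile, so it is only started when that profile is requested:

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
  db:
    image: postgres:10.5
    profiles:
      - test          # started only by: docker compose --profile test up
    ports:
      - "5432:5432"
```

Note that the links/depends_on from web to db have to go, since they would pull db in (or fail) when the profile is inactive. On older versions, the usual alternative is override files: keep db out of docker-compose.yml and define it in a docker-compose.test.yml passed with -f only for test runs.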