How to create a docker compose file to support multiple databases

What I want to achieve:
I want a docker-compose file to spin up
one application from a .jar file
one DB server running 2 databases under two users
I have the .jar set up and it works fine, but I can't get it to work with 2 databases.
With docker-compose:
version: "3.2"
services:
  db:
    container_name: postgresserver
    image: mdillon/postgis
    ports:
      - "54322:5432"
    environment:
      POSTGRES_DB: "postgres"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
  db2:
    extends: db
    container_name: postgresserver2
    environment:
      POSTGRES_DB: "postgres2"
      POSTGRES_USER: "postgres2"
      POSTGRES_PASSWORD: "postgres2"
Currently I get:
ERROR: The Compose file '.\docker-compose.yml' is invalid because:
Unsupported config option for services.db2: 'extends'
Any working samples with Postgres and PostGIS? (I did not find any on SO or Google.)
A regular docker build/run setup would also solve my problem, but I could not get that working either.

extends: is only supported up to v2.1 of the Compose file format (see here), and your file is tagged as v3.2, so that's why you get that error.
If you're not going to use Docker Swarm in your project (which is basically a multi-node orchestration service), you could just change the version of the file to 2.1, or re-define the second service so it doesn't use extends:
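For example, staying on version 3.2, db2 can simply repeat the shared options instead of extending db. A minimal sketch (the host port 54323 is an arbitrary free port chosen here, since two containers cannot both publish on 54322):
version: "3.2"
services:
  db:
    container_name: postgresserver
    image: mdillon/postgis
    ports:
      - "54322:5432"
    environment:
      POSTGRES_DB: "postgres"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
  db2:
    container_name: postgresserver2
    image: mdillon/postgis
    ports:
      - "54323:5432"
    environment:
      POSTGRES_DB: "postgres2"
      POSTGRES_USER: "postgres2"
      POSTGRES_PASSWORD: "postgres2"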

Related

How to force docker-compose to use specific port

Hey, I don't know if the title is right, but I'm working on my project with Symfony and Docker. When I start the pgsql database on my other machines, they use port 5432, but I just installed fresh Linux on my new computer and there it uses port 49153. It took me quite some time to figure out that the port was the problem. The same thing happens if I start a new project without changing anything: the pg database still runs on port 49153, which is a bit annoying while working on a few machines. So is it possible to force Docker to set up the database on port 5432 for all of my future projects, or do I have to change the port in the .env file every time?
My docker-compose.yml
services:
  database:
    image: postgres:${POSTGRES_VERSION:-14}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-app}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-!ChangeMe!}
      POSTGRES_USER: ${POSTGRES_USER:-app}
    volumes:
      - db-data:/var/lib/postgresql/data:rw
.env file
DATABASE_URL="postgresql://app:!ChangeMe!@127.0.0.1:5432/app?serverVersion=14&charset=utf8"
For example, if you want to map your pgsql database from the container port 5432 to port 5432 on your local machine, add:
ports:
  - "5432:5432"
The value on the left is the host (local) port and the value on the right is the container port, so "5432:5432" publishes the container's 5432 on your machine's 5432.
services:
  database:
    image: postgres:${POSTGRES_VERSION:-14}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-app}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-!ChangeMe!}
      POSTGRES_USER: ${POSTGRES_USER:-app}
    volumes:
      - db-data:/var/lib/postgresql/data:rw
    ports:
      - "5432:5432"

Network communication between containers in different Docker Compose applications running simultaneously

Is there any way that a container X in docker-compose application A can communicate with a container Y in docker-compose application B, when both are running simultaneously?
I wish to deliver my application as a docker-compose.yml file. This is Docker Compose application A. The application requires that certain databases exist. In production, clients must provide these databases and inform the application of the required access information.
Here is a runnable simulation of my production-deliverable docker-compose.yml file. It provides a service, but needs access to an external Postgres database, configured via three environment variables.
# A
version: '3'
services:
  keycloak:
    image: jboss/keycloak:11.0.3
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: ${MYAPP_KEYCLOAK_POSTGRES_ADDR}:${MYAPP_KEYCLOAK_POSTGRES_PORT}
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: "${MYAPP_KEYCLOAK_POSTGRES_PASSWORD}"
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: changeme
      PROXY_ADDRESS_FORWARDING: "true"
  # Other services ...
Clients run the application with docker-compose up, with the three environment variables set to those of a client-provided Postgres database.
For development, I compose the required Postgres databases inside the Docker Compose application, using the following docker-compose.yml file. This composition runs out of the box.
# DEV
version: '3'
volumes:
  dev-keycloak_postgres_data:
    driver: local
services:
  dev-keycloak-postgres:
    image: postgres:11.5
    volumes:
      - dev-keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak-postgres-changeme
  # DELIVERABLES
  keycloak:
    image: jboss/keycloak:11.0.3
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: dev-keycloak-postgres:5432
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: keycloak-postgres-changeme
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: keycloak-admin-changeme
      PROXY_ADDRESS_FORWARDING: "true"
    depends_on:
      - dev-keycloak-postgres
  # Other services ...
While using containerized Postgres is not suitable for production, I would like to provide my clients with a demonstration version of the required environment, in the form of a separate docker-compose.yml file providing the required external infrastructure, in this example a single containerized Postgres. This is Docker Compose application B.
# B
version: '3'
# For demonstration purposes only! Not production ready!
volumes:
  demo-keycloak_postgres_data:
    driver: local
services:
  demo-keycloak-postgres:
    image: postgres:11.5
    volumes:
      - demo-keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak-postgres-changeme
The demonstration infrastructure, application B, is delivered and managed completely independently of the real deliverable, application A. It needs to be up and running before application A starts.
Suppose the docker-compose files are in subfolders A and B respectively.
To start application B, I change into folder B and run docker-compose up.
To start application A, in another terminal I change into folder A, and run docker-compose up with the three environment variables set.
I hoped the following values would work, given the behaviour of the DEV docker-compose.yml above:
export MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres"
export MYAPP_KEYCLOAK_POSTGRES_PORT="5432"
export MYAPP_KEYCLOAK_POSTGRES_PASSWORD="keycloak-postgres-changeme"
docker-compose up
But no! Clearly MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres" is wrong.
Is there any way a container X in docker-compose application A can communicate with a container Y in docker-compose application B, and if so, can I determine the correct value for MYAPP_KEYCLOAK_POSTGRES_ADDR?
I am trying to avoid a -f solution for this particular use case.
You have to create an external network and assign it to the containers in both compose files A and B:
docker network create external-net
and in each docker-compose.yml add at the end:
networks:
  external-net:
    external:
      name: external-net
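A minimal sketch of how both compose files might attach their services to that shared network (the networks: key is needed both on each service and at the top level; only the relevant parts are shown):
# B (demo infrastructure)
services:
  demo-keycloak-postgres:
    image: postgres:11.5
    networks:
      - external-net
networks:
  external-net:
    external:
      name: external-net
# A (deliverable)
services:
  keycloak:
    image: jboss/keycloak:11.0.3
    networks:
      - external-net
networks:
  external-net:
    external:
      name: external-net
With both services attached to external-net, Docker's embedded DNS resolves the service name across the two projects, so MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres" should then work as originally hoped.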

Cannot connect to my postgres database through docker-compose

I have a very simple docker-compose.yml:
version: '3.1'
services:
db:
image: postgres
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: my_password
POSTGRES_DB: demo
And start it with: docker-compose up -d
It starts well:
Creating network "bootcamp_default" with the default driver
Creating bootcamp_db_1 ... done
Name: bootcamp_db_1
Command: docker-entrypoint.sh postgres
State: up
Ports: 0.0.0.0:5432->5432/tcp,:::5432->5432/tcp
When I go into the container I see that the demo db exists, but when I try to connect to it, every client utility says that the demo db does not exist.
How can I connect to it?
Solved the problem by moving the project to an Ubuntu system. It was caused by OS settings.
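For anyone hitting the same symptom, a quick sanity check is to try the connection both from inside the container and from the host, assuming the published port and credentials from the file above:
docker-compose exec db psql -U postgres -d demo   # inside the container
psql -h 127.0.0.1 -p 5432 -U postgres -d demo     # from the host
If the first works and the second fails, the problem lies in the port mapping or the host environment rather than the database itself.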

Docker-compose starts a single container

I'm using docker-compose and I'm trying to run an express app and postgres db in docker containers.
My problem is that it only starts the postgres image; the express app is not running.
What am I doing wrong?
I've published it on my github: https://github.com/ayakymyshyn/docker-playground
Looking at your docker-compose file and Dockerfile, I assume that your intention is for the web service in the compose file to run the image produced by the Dockerfile.
If that is the case, you need to modify the compose file and tell it to build an image based on the Dockerfile.
It should look something like:
version: "3.7"
services:
  web:
    image: node
    build: . # <--- this is the missing line
    depends_on:
      - db
    ports:
      - '3001:3001'
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 123123123
      POSTGRES_USER: yakym
      POSTGRES_DB: jwt
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5433:5433'
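With the build: line in place, the image can be rebuilt and both services started in one step:
docker-compose up --build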

How can docker-compose.yml include/omit environment-specific services?

Is there a way in docker-compose.yml to include a db service only in specific environments ("test" in my case)?
For a Ruby project, development and production both use a remote Postgres database, but the test environment needs its own local Postgres database.
What I have now is shown below. It "works" in the sense that when we run in development the db container is simply ignored by our code (our development ENV supplies a remote Postgres URL instead of using the db host). But it would be nicer not to spin up an unused Docker container for db when running in development.
version: '3'
services:
  web:
    build: .
    ports:
      - "3010:3010"
    volumes:
      - .:/my_app
    links:
      - db.local
    depends_on:
      - db
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
