Is there any way that a container X in docker-compose application A can communicate with a container Y in docker-compose application B, both running simultaneously?
I wish to deliver my application as a docker-compose.yml file. This is docker-compose application A. The application requires that certain databases exist. In production, clients must provide these databases and inform the application of the required access information.
Here is a runnable simulation of my production deliverable docker-compose.yml file. It provides a service, but needs access to an external Postgres database, configured via three environment variables.
# A
version: '3'
services:
  keycloak:
    image: jboss/keycloak:11.0.3
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: ${MYAPP_KEYCLOAK_POSTGRES_ADDR}:${MYAPP_KEYCLOAK_POSTGRES_PORT}
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: "${MYAPP_KEYCLOAK_POSTGRES_PASSWORD}"
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: changeme
      PROXY_ADDRESS_FORWARDING: "true"
  # Other services ...
Clients run the application with docker-compose up, with the three environment variables set to those of a client-provided Postgres database.
For development, I compose the required Postgres databases inside the Docker Compose application, using the following docker-compose.yml file. This composition runs out of the box.
# DEV
version: '3'
volumes:
  dev-keycloak_postgres_data:
    driver: local
services:
  dev-keycloak-postgres:
    image: postgres:11.5
    volumes:
      - dev-keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak-postgres-changeme
  # DELIVERABLES
  keycloak:
    image: jboss/keycloak:11.0.3
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: dev-keycloak-postgres:5432
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: keycloak-postgres-changeme
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: keycloak-admin-changeme
      PROXY_ADDRESS_FORWARDING: "true"
    depends_on:
      - dev-keycloak-postgres
  # Other services ...
While using containerized Postgres is not suitable for production, I would like to provide my clients with a demonstration of the required environment, in the form of a separate docker-compose.yml file providing the required external infrastructure (in this example, a single containerized Postgres). This is Docker Compose application B.
# B
version: '3'
# For demonstration purposes only! Not production ready!
volumes:
  demo-keycloak_postgres_data:
    driver: local
services:
  demo-keycloak-postgres:
    image: postgres:11.5
    volumes:
      - demo-keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak-postgres-changeme
The demonstration infrastructure application B is delivered and managed completely independently of the real deliverable application A. It needs to be up and running before application A starts.
Suppose the docker-compose files are in subfolders A and B respectively.
To start application B, I change into folder B and run docker-compose up.
To start application A, in another terminal I change into folder A, and run docker-compose up with the three environment variables set.
I hoped the following values would work, given the behaviour of the DEV docker-compose.yml above:
export MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres"
export MYAPP_KEYCLOAK_POSTGRES_PORT="5432"
export MYAPP_KEYCLOAK_POSTGRES_PASSWORD="keycloak-postgres-changeme"
docker-compose up
But no! Clearly MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres" is wrong.
Is there any way a container X in docker-compose application A can communicate with a container Y in docker-compose application B, and if so, can I determine the correct value for MYAPP_KEYCLOAK_POSTGRES_ADDR?
I am trying to avoid a multi-file (docker-compose -f) solution for this particular use case.
You have to create an external network and attach the relevant containers to it in the compose files of both A and B.
docker network create external-net
and in each docker-compose.yml add at the end:
networks:
  external-net:
    external:
      name: external-net
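The services then need to join that network in their respective files. A minimal sketch, reusing the service names from the question (only the relevant keys shown):
# In B's docker-compose.yml
services:
  demo-keycloak-postgres:
    # ... image, volumes, environment as above ...
    networks:
      - external-net

# In A's docker-compose.yml
services:
  keycloak:
    # ... image, environment as above ...
    networks:
      - external-net
With both files also carrying the networks: block above, the Postgres container is reachable from A by its service name on the shared network, so the exported value should simply be that name, e.g. MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres" (assuming you keep that service name in B).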
Related
I'm hosting two WordPress websites on my VPS, and I'm using Nginx Proxy Manager to proxy them.
I used docker network connect to join the NPM and the two WordPress containers together to make them work, but after a reload or restart of Docker the network between them is broken. (Is that because I use systemctl restart docker? Or compose down and up?)
So now I have decided to create a new network in Docker called bridge_default and put this network in the docker-compose file, so I don't have to reconnect those containers every time to make them work.
But I don't know what is wrong in my docker-compose file. Can anyone tell me how to declare networks in a docker-compose file correctly?
version: "3"
# Defines which compose version to use
services:
# Services line define which Docker images to run. In this case, it will be MySQL server and WordPr> db:
image: mariadb:10.6.4-focal
# image: mysql:5.7 indicates the MySQL database container image from Docker Hub used in this inst> restart: always
networks:
- default
environment:
MYSQL_ROOT_PASSWORD: PassWord#123
MYSQL_DATABASE: wordpress
MYSQL_USER: admin
MYSQL_PASSWORD: PassWord#123
# Previous four lines define the main variables needed for the MySQL container to work: databas>
wordpress:
depends_on:
- db
image: wordpress:latest
restart: always
# Restart line controls the restart mode, meaning if the container stops running for any reason, > networks:
- default
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: admin
WORDPRESS_DB_PASSWORD: PassWord#123
WORDPRESS_DB_NAME: wordpress
# Similar to MySQL image variables, the last four lines define the main variables needed for the Word> volumes:
["./wordpress:/var/www/html"]
volumes:
mysql: {}
networks:
default: bridge_default
I have two virtual machines (VMs), each in a Docker Swarm environment. One VM has a MySQL container running under docker-compose (for now, let's say I can't move it to swarm); on the other machine I'm trying to connect a containerized Rails app that is inside the swarm. I'm using the mysql2 gem to connect to the database, but I get the following error:
Mysql2::Error::ConnectionError: Access denied for user 'bduser'@'10.0.13.248' (using password: YES)
I have double-checked the credentials. I also ran an Alpine container on the VM where Rails is running, installed MySQL and successfully connected to the db on the other VM (it was not in the swarm, though). I checked the IP address in the error and I'm not sure where it came from; it is not the IP of the db's container.
Compose file for the database:
version: '3.4'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: db-container
    ports:
      - "3306:3306"
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: mysecurepassword
    command: --sql-mode STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION --max-connections 350
    volumes:
      - ./mysql:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping --silent
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 30s
How can I successfully connect the Rails app to the db's container, considering that the db is running under docker-compose and the Rails app is in a swarm on another VM?
If Docker swarm mode is reduced to its core functionality, it adds overlay networks to Docker. Also called VXLANs, these are software-defined networks that containers can be attached to, and they are the mechanism that allows containers on different hosts to communicate with each other.
With that in mind, even if you otherwise treat your Docker swarm as a set of discrete Docker hosts on which you run compose stacks, you can nonetheless get services to communicate completely privately.
First, on a manager node, create an overlay network with a well-known name:
docker network create application --driver overlay
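One caveat, assuming the database side really stays on plain docker-compose (standalone containers rather than swarm services): standalone containers can only join an overlay network that was created as attachable, so the command would become:
docker network create --driver overlay --attachable application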
Now in your compose files, deployed as compose stacks on different nodes, you should be able to reference that network:
# deployed on node1
networks:
  application:
    external: true
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: mysql-password
    networks:
      - application
    volumes:
      - ./mysql/:/var/lib/mysql
# deployed on node2
networks:
  application:
    external: true
services:
  my-rails-app:
    image: my-rails:dev
    build:
      context: src
    networks:
      - application
    volumes:
      - ./data:/data
etc.
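With both stacks attached to the application network, the Rails side can reach the database by the db service name. A minimal sketch of the Rails connection settings under that assumption (the database name is a placeholder; user and password are whatever you configured in MySQL):
# config/database.yml (sketch)
production:
  adapter: mysql2
  host: db                     # service name on the shared overlay network
  port: 3306
  database: myapp_production   # placeholder database name
  username: bduser
  password: <%= ENV["DB_PASSWORD"] %>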
I am trying to deploy a simple Django/Postgres app using a docker-compose file on a Docker swarm (the two containers are running on different VMs). However, when I try to establish the connection between the two using container names (e.g. to set up Django's connection to the database), I cannot seem to use the container names. For example, if I try to run migrations from the Django container, the host name of the database container is not recognised.
This is my docker-compose.yml
version: '3.7'
services:
  db:
    image: postgres:10
    environment:
      POSTGRES_DB: "random"
      POSTGRES_USER: "random"
      POSTGRES_PASSWORD: "random"
  app:
    image: "app_server"
    build:
      context: ./app_server
    links:
      - db
    environment:
      DATABASE_URL: 'psql://random:random@db:5432/random'
and this is the database connection config in Django's settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'random',
        'USER': 'random',
        'PASSWORD': 'random',
        'HOST': 'db',
        'PORT': '5432',
    }
}
Is there a way in which I can make the Django app automatically aware of the IP of the database service?
I tried using the "name:" parameter in the service configuration, but that didn't work.
Any help is highly appreciated!
What I want to achieve:
I want a docker-compose file to spin up:
one application from a .jar file
one DB server running two databases under two users
I have the .jar set up and it works fine, but I can't get it to work with two databases.
With docker-compose:
version: "3.2"
services:
db:
container_name: postgresserver
image: mdillon/postgis
ports:
- "54322:5432"
environment:
POSTGRES_DB: "postgres"
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "postgres"
db2:
extends: db
container_name: postgresserver2
environment:
POSTGRES_DB: "postgres2"
POSTGRES_USER: "postgres2"
POSTGRES_PASSWORD: "postgres2"
Currently I get
ERROR: The Compose file '.\docker-compose.yml' is invalid because:
Unsupported config option for services.db2: 'extends'
Are there any working samples with Postgres and PostGIS? (I did not find any on SO or Google.)
A regular docker build/run setup would also solve my problem, but I could not get that working either.
extends: is only supported up to v2.1 of the compose file format (see here), and your file is tagged as v3.2, which is why you get that error.
If you're not going to use Docker Swarm in your project (which is basically a multi-node orchestration service), you could just change the version of the file to 2.1, or re-define the file so it doesn't use extends:, as sketched below.
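A sketch of the question's file without extends:, simply declaring the second server as its own service (the second host port is an assumption, chosen to avoid a clash with the first):
version: "3.2"
services:
  db:
    container_name: postgresserver
    image: mdillon/postgis
    ports:
      - "54322:5432"
    environment:
      POSTGRES_DB: "postgres"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
  db2:
    container_name: postgresserver2
    image: mdillon/postgis
    ports:
      - "54323:5432"   # assumed second host port
    environment:
      POSTGRES_DB: "postgres2"
      POSTGRES_USER: "postgres2"
      POSTGRES_PASSWORD: "postgres2"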
I want to try Docker for my website. I use PHP, nginx and MySQL. I've configured Docker and I've run my website locally. Now I want to publish my website to production.
I have a few differences between the development and production versions:
I need to be able to connect to MySQL inside the container in development mode (for debugging), but in production MySQL must be isolated from the outside for security.
I want to open my website at the address app.dev and use the nginx-proxy image on my development machine, but in production I will not use nginx-proxy, to improve performance.
Can I run Docker with a single docker-compose.yml file?
Or should I create two versions of the docker-compose file, one for development and one for production? But in that case I lose an advantage of Docker: the same environment everywhere. If I change docker-compose-dev.yml, I need to remember to change docker-compose-prod.yml.
My docker-compose.yml:
version: '2'
services:
  app:
    build: .
    volumes:
      - ./app:/app
    container_name: app
  app_nginx:
    image: nginx
    ports:
      - "8080:80"
    container_name: app_nginx
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./app:/app
    environment:
      - VIRTUAL_HOST=app.dev
  app_db:
    image: mysql:5.7
    volumes:
      - "./data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: "app_db"
    container_name: app_db
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
You can achieve this with environment-variable-based configuration.
Usually different environments, i.e. staging and production, differ only in configuration: the database to connect to, the external services to call, their endpoints and credentials.
Instead of hard-coding all such configuration, read it from environment variables. You can then use the same docker-compose file with different environment variables for your staging and production environments.
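A rough sketch of that idea, with made-up variable names (docker-compose substitutes ${...} from the shell environment or from a .env file in the project directory):
# docker-compose.yml (excerpt)
services:
  app_nginx:
    image: nginx
    ports:
      - "${NGINX_HTTP_PORT}:80"
    environment:
      - VIRTUAL_HOST=${APP_HOST}
  app_db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"

# .env for development (production gets its own values)
NGINX_HTTP_PORT=8080
APP_HOST=app.dev
DB_ROOT_PASSWORD=
Structural differences, such as dropping the nginx-proxy service entirely, still need something beyond variables (for example an override file), but credentials, hosts and ports are easily parameterised this way.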
You can also explore Rancher by Rancher Labs at http://rancher.com/ to manage your environments.