How do I refer to a container by name in docker stack? - docker

I am trying to deploy a simple Django/Postgres app using a docker-compose file on a Docker swarm (the two containers run on different VMs). However, when I try to establish the connection between the two using container names (e.g. to configure Django's connection to the database), the names are not resolved. For example, if I run migrations from the Django container, the host name of the database container is not recognised.
This is my docker-compose.yml
version: '3.7'
services:
  db:
    image: postgres:10
    environment:
      POSTGRES_DB: "random"
      POSTGRES_USER: "random"
      POSTGRES_PASSWORD: "random"
  app:
    image: "app_server"
    build:
      context: ./app_server
    links:
      - db
    environment:
      DATABASE_URL: 'psql://random:random@db:5432/random'
and this is the database connection config in Django's settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'random',
        'USER': 'random',
        'PASSWORD': 'random',
        'HOST': 'db',
        'PORT': '5432',
    }
}
Is there a way in which I can make the Django app automatically aware of the IP of the database service?
I tried using the "name:" parameter in the service configuration, but that didn't work.
Any help is highly appreciated!
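In swarm mode, name resolution works per service rather than per container: services in the same stack share the stack's default overlay network and can reach each other by service name (db above) through Docker's built-in DNS, while links: and build: are ignored by docker stack deploy. A minimal sketch of that setup, assuming the app_server image has been pushed to a registry reachable from every node:
version: '3.7'
services:
  db:
    image: postgres:10
    environment:
      POSTGRES_DB: "random"
      POSTGRES_USER: "random"
      POSTGRES_PASSWORD: "random"
    networks:
      - backend
  app:
    image: registry.example.com/app_server   # placeholder registry; stack deploy cannot build images
    environment:
      DATABASE_URL: 'psql://random:random@db:5432/random'
    networks:
      - backend
networks:
  backend:
    driver: overlay
Deployed with docker stack deploy -c docker-compose.yml mystack, the app container should resolve the host name db on the shared overlay network.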

Related

How to force docker-compose to use specific port

Hey, I don't know if the title is right, but I'm working on my project. The tools I use are Symfony and Docker. When I start a pgsql database on my other machines, it uses port 5432, but I just installed a fresh Linux on my new computer and there it uses port 49153; it took me quite some time to figure out that the port was the problem. The same thing happens if I start a new project without changing anything: the pg database still runs on port 49153, which is a bit annoying when working on several machines. So is it possible to force Docker to set up the database on port 5432 for all of my future projects, or do I have to change the port in the .env file every time?
My docker-compose.yml
services:
  database:
    image: postgres:${POSTGRES_VERSION:-14}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-app}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-!ChangeMe!}
      POSTGRES_USER: ${POSTGRES_USER:-app}
    volumes:
      - db-data:/var/lib/postgresql/data:rw
.env file
DATABASE_URL="postgresql://app:!ChangeMe!@127.0.0.1:5432/app?serverVersion=14&charset=utf8"
For example, if you want to map your pgsql database from the container's port 5432 to port 5432 on your local machine, add:
ports:
  - "5432:5432"
The format is "HOST:CONTAINER", so the first number is the port on your machine and the second is the port inside the container.
services:
  database:
    image: postgres:${POSTGRES_VERSION:-14}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-app}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-!ChangeMe!}
      POSTGRES_USER: ${POSTGRES_USER:-app}
    volumes:
      - db-data:/var/lib/postgresql/data:rw
    ports:
      - "5432:5432"

Network communication between containers in different Docker Compose applications running simultaneously

Is there any way that a container X in docker-compose application A can communicate with a container Y in docker-compose application B, with both running simultaneously?
I wish to deliver my application as a docker-compose.yml file. This is docker-compose application A. The application requires that certain databases exist. In production, clients must provide these databases and inform the application of the required access information.
Here is a runnable simulation of my production-deliverable docker-compose.yml file. It provides a service but needs access to an external Postgres database, configured via three environment variables.
# A
version: '3'
services:
  keycloak:
    image: jboss/keycloak:11.0.3
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: ${MYAPP_KEYCLOAK_POSTGRES_ADDR}:${MYAPP_KEYCLOAK_POSTGRES_PORT}
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: "${MYAPP_KEYCLOAK_POSTGRES_PASSWORD}"
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: changeme
      PROXY_ADDRESS_FORWARDING: "true"
  # Other services ...
Clients run the application with docker-compose up with the three environment variables set to those of a client provided Postgres database.
For development, I compose the required Postgres databases inside the Docker Compose application, using the following docker-compose.yml file. This composition runs out of the box.
# DEV
version: '3'
volumes:
  dev-keycloak_postgres_data:
    driver: local
services:
  dev-keycloak-postgres:
    image: postgres:11.5
    volumes:
      - dev-keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak-postgres-changeme
  # DELIVERABLES
  keycloak:
    image: jboss/keycloak:11.0.3
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: dev-keycloak-postgres:5432
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: keycloak-postgres-changeme
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: keycloak-admin-changeme
      PROXY_ADDRESS_FORWARDING: "true"
    depends_on:
      - dev-keycloak-postgres
  # Other services ...
While containerized Postgres is not suitable for production, I would like to provide my clients with a demonstration environment, in the form of a separate docker-compose.yml file, that provides the required external infrastructure, in this example a single containerized Postgres. This is Docker Compose application B.
# B
version: '3'
# For demonstration purposes only! Not production ready!
volumes:
  demo-keycloak_postgres_data:
    driver: local
services:
  demo-keycloak-postgres:
    image: postgres:11.5
    volumes:
      - demo-keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak-postgres-changeme
The demonstration infrastructure application B is delivered and managed completely independently of the real deliverable application A. It needs to be up and running before application A starts.
Suppose the respective docker-compose files are in subfolders A and B respectively.
To start application B, I change into folder B and run docker-compose up.
To start application A, in another terminal I change into folder A, and run docker-compose up with the three environment variables set.
I hoped the following values would work, given the behaviour of the DEV docker-compose.yml above:
export MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres"
export MYAPP_KEYCLOAK_POSTGRES_PORT="5432"
export MYAPP_KEYCLOAK_POSTGRES_PASSWORD="keycloak-postgres-changeme"
docker-compose up
But no! Clearly MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres" is wrong.
Is there any way a container X in docker-compose application A can communicate with a container Y in docker-compose application B, and if so, how can I determine the correct value for MYAPP_KEYCLOAK_POSTGRES_ADDR?
I am trying to avoid a -f solution for this particular use case.
You have to create an external network and attach both containers to it, in compose files A and B:
docker network create external-net
and in both docker-compose.yml files add at the end:
networks:
  external-net:
    external:
      name: external-net
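Putting that together with the compose files above, a rough sketch of how the two sides might look (service names taken from the question, everything else trimmed):
# B
services:
  demo-keycloak-postgres:
    image: postgres:11.5
    networks:
      - external-net
networks:
  external-net:
    external:
      name: external-net
# A
services:
  keycloak:
    image: jboss/keycloak:11.0.3
    networks:
      - external-net
networks:
  external-net:
    external:
      name: external-net
With both services attached to external-net, Docker's embedded DNS resolves the service name on that network, so MYAPP_KEYCLOAK_POSTGRES_ADDR="demo-keycloak-postgres" should then work.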

Containerized Rails application in Swarm access containerized database in compose

I have two virtual machines (VMs), and each machine is part of a Docker Swarm environment. One VM has a MySQL container running under docker-compose (for now let's say I can't move it to swarm). On the other machine I'm trying to connect a containerized Rails app that is inside the swarm. I'm using the mysql2 gem to connect to the database, however I'm getting the following error:
Mysql2::Error::ConnectionError: Access denied for user 'bduser'@'10.0.13.248' (using password: YES)
I have double-checked the credentials. I also ran an Alpine container on the VM where the Rails app is running, installed the MySQL client, and successfully connected to the db on the other VM (it was not going through the swarm, though). I checked the IP address in the error and I'm not sure where it came from; it is not the IP of the db's container.
Compose file for the database:
version: '3.4'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: db-container
    ports:
      - "3306:3306"
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: mysecurepassword
    command: --sql-mode STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION --max-connections 350
    volumes:
      - ./mysql:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping --silent
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 30s
How can I successfully connect the rails app to the db's container, considering that the db is running using docker-compose and the rails is in a swarm in another VM?
If Docker swarm mode is reduced to its core functionality, it adds overlay networks to Docker. Also called VXLANs, these are software-defined networks that containers can be attached to. Overlay networks are the mechanism that allows containers on different hosts to communicate with each other.
With that in mind, even if you otherwise treat your Docker swarm as a set of discrete Docker hosts on which you run compose stacks, you can nonetheless get services to communicate completely privately.
First, on a manager node, create an overlay network with a well-known name (the --attachable flag lets standalone containers started by docker-compose join it, not only swarm services):
docker network create --driver overlay --attachable application
Now in your compose files, deployed as compose stacks on different nodes, you should be able to reference that network:
# deployed on node1
networks:
  application:
    external: true
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: mysql-password
    networks:
      - application
    volumes:
      - ./mysql/:/var/lib/mysql
# deployed on node2
networks:
  application:
    external: true
services:
  my-rails-app:
    image: my-rails:dev
    build:
      context: src
    networks:
      - application
    volumes:
      - ./data:/data
etc.
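On the Rails side, the database host is then simply the service name on the shared overlay network. A sketch of config/database.yml under that assumption (credentials and database name are placeholders, not from the question):
production:
  adapter: mysql2
  host: db                      # service name from the node1 stack above
  port: 3306
  username: bduser              # placeholder taken from the error message
  password: <%= ENV["DB_PASSWORD"] %>
  database: myapp_production    # placeholder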

how to connect my docker container (frontend) to a containerized database running on a different VM

Unable to connect to containers running on separate Docker hosts
I've got 2 Docker Tomcat containers running on 2 different Ubuntu VMs. System-A has a webservice running and System-B has a db. I haven't been able to figure out how to connect the application running on System-A to the db running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to the database. I'm using docker-compose to set up the network (which works fine when both containers are running on the same VM). I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker weren't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You would need to make sure the service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)
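If the VMs don't have DNS names that the containers can resolve, one workaround (a sketch; 192.0.2.20 is a placeholder for server-b's address) is to pin the name with extra_hosts in server-a's compose file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    extra_hosts:
      - "server-b.example.com:192.0.2.20"   # placeholder IP for System-B
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"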

Wrong credentials when connecting to Docker MySQL container

New to Docker, and I'm using Laradock to set up my environment with a Craft CMS project. I'm able to get it installed and set up, however I'm having an issue when trying to connect to the MySQL container that Laradock creates from the docker-compose.yml file.
Here's the db-related portion of my docker-compose.yml
mysql:
  build:
    context: ./mysql
    args:
      - MYSQL_DATABASE=homestead
      - MYSQL_USER=homestead
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=root
  volumes:
    - mysql:/var/lib/mysql
  ports:
    - "3306:3306"
Within my craft/config/db.php file, I have the following settings:
return array(
    '.dev' => array(
        'tablePrefix' => 'craft',
        'server' => 'mysql',
        'database' => 'homestead',
        'user' => 'homestead',
        'password' => 'secret',
    ),
);
However, I'm getting a Craft can’t connect to the database with the credentials in craft/config/db.php error.
My question is - when docker creates the MySQL container, I'm assuming it uses the credentials in the docker-compose.yml file to create and allow access to that database. If so, as long as my container is running and my credentials from my db.php file match with the credentials in the docker-compose.yml file, shouldn't it connect?
If I wanted to update the credentials in the MySQL container, can I simply update the values in both files and run docker-compose up -d mysql?
Thanks!
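One thing worth knowing for the second question: the MySQL image only applies the MYSQL_USER/MYSQL_PASSWORD/... values when it initializes an empty data directory, so editing them in docker-compose.yml does not change an already-initialized mysql volume. A rough sketch of forcing a re-initialization (the volume name is a guess, check docker volume ls; this destroys the existing data):
docker-compose stop mysql
docker-compose rm -f mysql
docker volume ls                    # find the real volume name, e.g. laradock_mysql
docker volume rm laradock_mysql     # placeholder name; this deletes the database contents
docker-compose up -d --build mysql  # rebuild so the new build args are picked up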
I ran into a kind of similar issue recently because the containers were started in a "random" order. Maybe this is your issue too; I don't know for sure.
In brief, this was my case:
- two containers, php-fpm and mysql.
- Running docker-compose up -d --build --no-cache built everything, but php-fpm finished first, so at that point mysql was still doing some work to get the service ready.
- The php-fpm application couldn't connect to the MySQL server because it wasn't ready yet.
The solution: use the newer Docker Compose file format and set version 2.1 in the docker-compose.yml. Here is my working example:
version: '2.1'
services:
  php-fpm:
    build:
      context: docker/php-fpm
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    healthcheck:
      test: "exit 0"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
The trick: depends_on combined with condition: service_healthy, as in the example above.
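As an aside, test: "exit 0" reports the container healthy as soon as it starts; a slightly stricter sketch, along the lines of the mysqladmin ping healthcheck shown further up this page:
db:
  image: mariadb
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "--silent"]
    interval: 10s
    timeout: 5s
    retries: 5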
