docker-compose networking multiple apps with same service name - docker

Problem:
When two docker-compose files / projects declare the same service names on the same network, spinning up the second compose project overwrites the DNS name for those services.
e.g.:
App 1
version: "3.1"
services:
db:
image: mysql:8.0
container_name: monolith-db
networks:
- my-network-name
webserver:
image: nginx:alpine
container_name: monolith-webserver
networks:
- my-network-name
phpfpm:
container_name: monolith-phpfpm
networks:
- my-network-name
networks:
my-network-name:
external: true
App 2
version: "3.1"
services:
db:
image: mysql:8.0
container_name: ms-auth-db
networks:
- my-network-name
webserver:
image: nginx:alpine
container_name: ms-auth-webserver
networks:
- my-network-name
phpfpm:
container_name: ms-auth-phpfpm
networks:
- my-network-name
networks:
my-network-name:
external: true
If you start App 1, its services can connect to each other using the service name as hostname; for example, my config has database-host: db.
However, when I run docker-compose -p ms-auth --env-file .env -f infra/local/docker-compose.yml up -d, the db hostname then points to App 2's db service.
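One way to see what is happening (a sketch; getent may not be present in minimal images) is to check what db resolves to from inside one of App 1's containers before and after bringing up App 2:
# with only App 1 running: resolves to monolith-db's IP
docker exec -it monolith-phpfpm getent hosts db
# after bringing up App 2 on the same external network, the same lookup
# returns ms-auth-db's IP (or round-robins between the two containers)
docker exec -it monolith-phpfpm getent hosts db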

The solution is to use the container_name as the hostname.
For example, instead of connecting to db, configure App 1's config files to use the hostname monolith-db; and when pointing from App 1 to App 2, also use the container name as hostname, e.g. ms-auth-host: ms-auth-webserver.
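For instance, a minimal sketch of what App 1's configuration might look like after the change (the variable names below are hypothetical; use whatever keys your framework expects):
# App 1 .env (hypothetical variable names)
DB_HOST=monolith-db            # was "db"; container_name is unique on the shared network
DB_PORT=3306
# calling App 2 from App 1, again via its container_name
MS_AUTH_HOST=ms-auth-webserver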

Related

Docker containers unable to communicate

I have 2 containers that belong to the same network:
version: '3'
services:
  #PHP Service
  app:
    build:
      context: ./website
      dockerfile: Dockerfile
    image: travellist
    container_name: app
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    ...
    networks:
      - app-network
  administration:
    build:
      dockerfile: Dockerfile
    image: travellist
    container_name: administration
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    environment:
      ....
    networks:
      - app-network
  #Nginx Service
  webserver:
    container_name: webserver
    image: nginx:1.17-alpine
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 8000:80
      - 7999:81
    ...
    networks:
      - app-network
#Docker Networks
networks:
  app-network:
    driver: bridge
As you can see, the two applications run behind NGINX on 2 different ports... however, I'm unable to send a request from one application to the other. None of the following works (from administration, which is the one published as 7999:81):
localhost:80
localhost:8000
app:80
app:8000
From the administration container, you should send your request to the webserver on port 80.
From the administration container, you can first check that you can ping the webserver; if that succeeds, the two containers can reach each other on the network, and your request should go through.
Please note that port 8000 is only published on the host machine.
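A quick sketch of how to verify this from inside the administration container (assuming ping and curl are available in that image):
# ping the webserver by its service/container name on the shared network
docker exec -it administration ping -c 3 webserver
# then hit nginx on its container port (80), not the published host port (8000)
docker exec -it administration curl -I http://webserver:80/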

Connect to database from another container

Please help me if it's possible.
I need to start 2 applications with a single database.
I have 2 applications: the first is domain.com, the second is api.domain.com. Each application has its own docker-compose.yaml file.
domain.com - CMS
version: "3.8"
services:
web:
container_name: domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
#command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
domain:
ipv4_address: 10.9.0.5
networks:
domain:
driver: bridge
ipam:
config:
- subnet: 10.9.0.0/16
gateway: 10.9.
volumes:
bel_baza:
api.domain.com - Laravel 5.6
version: "3.8"
services:
web:
container_name: api_domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
# command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
api_domain:
ipv4_address: 10.15.0.5
db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
restart: always
container_name: api_domain_db
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: domain
MYSQL_USER: user
MYSQL_PASSWORD: user
volumes:
- api_domain_baza:/var/lib/mysql
- ./docker/db:/docker-entrypoint-initdb.d
networks:
api_domain:
ipv4_address: 10.15.0.6
phpmyadmin:
image: phpmyadmin
restart: always
container_name: api_domain_pma
networks:
api_domain:
ipv4_address: 10.15.0.7
redis:
image: redis:3.0
container_name: api_domain_redis
networks:
api_domain:
ipv4_address: 10.15.0.10
networks:
api_domain:
driver: bridge
ipam:
config:
- subnet: 10.15.0.0/16
gateway: 10.15.0.1
volumes:
api_domain_baza:
api_domain started successfully.
I need to connect domain.com to the database api_domain_db. As the connection host, I used the IP address 10.15.0.6, but the first application cannot connect to the database of the 2nd application.
What is my problem?
How can I connect domain.com to the database of the 2nd application?
Your problem is that you are using a separate docker compose project for each application, and by default those projects cannot access each other's internals:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
doc is here - https://docs.docker.com/compose/networking/
So Compose effectively creates a separate network for each compose project.
If you want them both to see each other's internals, you can create an external docker network like this:
docker network create --subnet 10.1.0.0/24 network_name
and then use that network in both docker compose files like this:
networks:
  default:
    external:
      name: network_name
services:
  .....
If you need fixed IPs, you can define them like this:
app:
  image: ...
  networks:
    default:
      ipv4_address: 10.1.0.10
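Applied to the question above, a minimal sketch of how the domain.com compose file could then reach the API project's database (reusing the network_name network created above; the DB_HOST and DB_PORT variables are hypothetical, use whatever your CMS expects):
# docker-compose.yaml for domain.com (sketch)
version: "3.8"
services:
  web:
    container_name: domain_web
    build:
      context: ./docker/php
      dockerfile: Dockerfile
    environment:
      DB_HOST: api_domain_db   # resolvable by container name once both projects share the network
      DB_PORT: "3306"
networks:
  default:
    external:
      name: network_name
The api.domain.com compose file would attach to the same external network in the same way, instead of (or in addition to) its own api_domain network.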

Docker mis-forwarding ports

I have several domains sharing one public IP (EC2 instance). My setup is like this:
/home/ubuntu contains docker-compose.yml:
version: '3'
services:
  nginx-proxy:
    image: "jwilder/nginx-proxy"
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
    restart: "always"
This creates a network named ubuntu_default which will allow other compose instances to join. The nginx-proxy image creates reverse proxies for these other compose instances so that you can visit example.com and be routed to the appropriate UI within the appropriate compose instance.
/home/ubuntu/example.com/project-1 contains a docker-compose.yml like:
version: '3'
services:
  db:
    build: "./db" # mongo
    volumes:
      - "./data:/data/db"
    restart: "always"
  api:
    build: "./api" # a node backend
    ports:
      - "9005:9005"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8005:8005"
    restart: "always"
    environment:
      - VIRTUAL_HOST=project-1.example.com # this tells nginx-proxy which domain to proxy
      - VIRTUAL_PORT=8005 # this tells nginx-proxy which port to proxy
networks:
  default:
    external:
      name: ubuntu_default
/home/ubuntu/testing.com/project-2 contains a docker-compose.yml like:
version: '3'
services:
  db:
    build: "./db" # postgres
    volumes:
      - "./data:/var/lib/postgresql/data"
    restart: "always"
  api:
    build: "./api" # a python backend
    ports:
      - "9000:9000"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8000:8000"
    restart: "always"
    environment:
      - VIRTUAL_HOST=testing.com,www.testing.com # tells nginx-proxy which domains to proxy
      - VIRTUAL_PORT=8000 # tells nginx-proxy which port to proxy
networks:
  default:
    external:
      name: ubuntu_default
So basically:
project-1.example.com:80 forwards to the UI running on :8005
project-1.example.com:80/api forwards to the API running on :9005
testing.com forwards to the UI running on :8000
testing.com/api forwards to the API running on :9000
...and that all works perfectly as long as I only run one at a time. The moment I start both Compose instances, the /api URLs start clashing. I can sit on one of them and refresh repeatedly, and sometimes I'll see the one for example.com/api and sometimes the one for testing.com/api.
I have no idea what's going on at this point. Maybe the premise I'm working from is fundamentally flawed, but it seems like an intended use of Docker/Compose. I'm open to suggestions to accomplish the same thing another way.
Docker containers communicate using DNS lookups on their network. If multiple containers have the same alias on the same network, Docker will round-robin load balance between those containers with each new network connection. If you don't want containers to talk to each other, then you don't want them on the same docker network. The good news is that you can solve this by using more than one network, and by not putting the api and db servers on the frontend proxy network:
version: '3'
services:
  db:
    build: "./db" # postgres
    volumes:
      - "./data:/var/lib/postgresql/data"
    restart: "always"
  api:
    build: "./api" # a python backend
    ports:
      - "9000:9000"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8000:8000"
    restart: "always"
    networks:
      - default
      - proxy
    environment:
      - VIRTUAL_HOST=testing.com,www.testing.com # tells nginx-proxy which domains to proxy
      - VIRTUAL_PORT=8000 # tells nginx-proxy which port to proxy
networks:
  proxy:
    external:
      name: ubuntu_default
If you do not override the default network, docker will create one for your compose project and use it for any containers not assigned to another network.
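The same change would presumably be applied to the project-1 compose file as well, for example (a sketch mirroring the answer above, with unrelated keys omitted):
# /home/ubuntu/example.com/project-1/docker-compose.yml (sketch)
version: '3'
services:
  db:
    build: "./db"
  api:
    build: "./api"
    depends_on:
      - db
  ui:
    build: "./ui"
    networks:
      - default   # project-private network shared by ui, api and db
      - proxy     # only the ui needs to be reachable by nginx-proxy
    environment:
      - VIRTUAL_HOST=project-1.example.com
      - VIRTUAL_PORT=8005
networks:
  proxy:
    external:
      name: ubuntu_default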

domain configuration in docker-compose

How to configure hostnames with domains in docker-compose.yml?
Let's say the service worker expects the service web at the address http://web.local/. But web.local doesn't resolve to an IP address no matter what I configure with the hostname directive. Adding an extra_hosts directive doesn't work either, since I would need to know the IP of the web service for that, which I don't, as it is assigned by docker.
docker-compose.yml:
version: '3'
services:
  worker:
    build: ./worker
    networks:
      - mynet
  web:
    build: ./web
    ports:
      - 80:80
    hostname: web.local
    networks:
      - mynet
networks:
  mynet:
But ping web.local doesn't resolve inside the worker service.
For this to work, you need to add an alias for the web service on the mynet network.
From the official documentation:
Aliases (alternative hostnames) for this service on the network. Other
containers on the same network can use either the service name or this
alias to connect to one of the service’s containers.
So, your docker-compose.yml file should look like this:
version: '3'
services:
  worker:
    build: ./worker
    networks:
      - mynet
  web:
    build: ./web
    ports:
      - 80:80
    hostname: web.local
    networks:
      mynet:
        aliases:
          - web.local
networks:
  mynet:
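To confirm the alias works, you could then run something like this from the worker service (a sketch; ping or getent must be available in the worker image):
docker-compose exec worker ping -c 3 web.local
# or, if ping isn't installed, resolve the name directly
docker-compose exec worker getent hosts web.local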

How does docker manage volumes when scaling up a compose project?

What if I have 10 instances of a container that needs persistent storage? How will docker manage the volume for those 10 instances? I've defined the volume in docker-compose.yml.
I didn't find anything about what happens when multiple instances run:
1. will docker create new folders for each instance, or
2. share the same folder with all of them (which would lead to data corruption)?
Here is my sample docker-compose.yml:
version: '2'
services:
  consul:
    #image: myappteam/consul:3.4.0
    build: ./consul
    container_name: consul
    hostname: consul
    domainname: consul
    restart: always
    volumes:
      - myapp-data:/data/consul
  consului:
    #image: myappteam/consul-ui:3.4.0
    build: ./consul-ui
    container_name: consul-ui
    hostname: consul-ui
    domainname: consul-ui
    ports:
      - 8500:8500
    restart: always
    volumes:
      - myapp-data:/data/consului
  nginx:
    #image: myappteam/nginx:3.4.0
    build: ./nginx
    container_name: nginx
    hostname: nginx
    domainname: nginx
    ports:
      - "80:80"
    volumes:
      - myapp-logs:/logs/nginx_access_logs
      - myapp-logs:/logs/nginx_error_logs
    restart: always
volumes:
  myapp-data:
  myapp-logs:
  myapp-bundle:
  myapp-source:
So in the above example, myapp-data is where I plan to keep all the data. I was wondering: when I increase the number of nginx and consul instances, will they use the same myapp-data volume or create new volumes? If they use the same volume, the data will get corrupted, because two instances would write the same files.
So what should I do in that case?

Resources