Docker and Traefik in different directories

I have two different directories, each containing docker containers for different purposes, both spun up with docker compose.
Dir A has the Traefik config and container (and other containers) as well as environment variables, whereas Dir B is a bunch of containers.
I now want to include Traefik labels on the Dir B containers, but when I run compose in Dir B, I'm facing:
WARN[0000] The "DOMAIN_NAME" variable is not set. Defaulting to a blank string.
service "[service name]" refers to undefined network traefik_proxy: invalid compose project
I'm guessing this is because services in Dir B can't see traefik_proxy since it's part of a different stack, and the same goes for the DOMAIN_NAME variable.
How can I have Dir B 'reach across' to Dir A? Is it even possible with my current config?

If you want multiple compose projects to share a single Traefik frontend, that's certainly possible, but you need to place Traefik on a shared network. (The DOMAIN_NAME warning is a separate issue: each compose project reads the .env file in its own directory, so the variable must also be defined in Dir B's .env or in the shell environment.) For this model, I would suggest starting with a docker-compose.yaml that only deploys Traefik; e.g.:
version: "3"
services:
  traefik:
    image: docker.io/traefik:latest
    command:
      - --api.insecure=true
      - --providers.docker
      - --accesslog=true
      - --accesslog.filepath=/dev/stderr
      - --providers.docker.exposedByDefault=false
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.2:8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - services # attach Traefik to the shared network so it can reach the backends
networks:
  services:
    external: true
Start by creating the shared network:
docker network create services
Then start the Traefik project:
pushd traefik; docker-compose up -d; popd
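At this point you can sanity-check that Traefik is up; with --api.insecure=true the dashboard/API listens on the 8080 entrypoint we bound to 127.0.0.2 above (a quick check, assuming a Traefik v2 image):
curl http://127.0.0.2:8080/api/rawdata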
Now for every project you want to make available via Traefik, put your services on the services network. For example, let's say we have this in app1/docker-compose.yaml:
version: "3"
services:
  app1:
    image: docker.io/containous/whoami
    networks:
      - services
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
networks:
  services:
    external: true
Then I can run:
pushd app1; docker-compose up -d; popd
And now my app1 service is available at http://localhost/app1/.
We can add as many services as we want like this; the only requirement is that the containers are attached to the services network.
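For a quick check from the host (a sketch, assuming the stack above is running and Traefik is listening on port 80):
curl http://localhost/app1/
The whoami container replies with the request details it received.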

Related

Docker compose networking between pods [duplicate]

I have two separate docker-compose.yml files in two different folders:
~/front/docker-compose.yml
~/api/docker-compose.yml
How can I make sure that a container in front can send requests to a container in api?
I know that --default-gateway option can be set using docker run for an individual container, so that a specific IP address can be assigned to this container, but it seems that this option is not available when using docker-compose.
Currently I end up doing a docker inspect my_api_container_id and look at the gateway in the output. It works but the problem is that this IP is randomly attributed, so I can't rely on it.
Another form of this question might thus be:
Can I attribute a fixed IP address to a particular container using docker-compose?
But in the end, what I'm after is:
How can two different docker-compose projects communicate with each other?
You just need to make sure that the containers you want to talk to each other are on the same network. Networks are a first-class docker construct, and not specific to compose.
# front/docker-compose.yml
version: '2'
services:
  front:
    ...
    networks:
      - some-net
networks:
  some-net:
    driver: bridge

# api/docker-compose.yml
version: '2'
services:
  api:
    ...
    networks:
      - front_some-net
networks:
  front_some-net:
    external: true
Note: your app’s network is given a name based on the “project name”, which is derived from the name of the directory it lives in; in this case the prefix front_ was added.
They can then talk to each other using the service name. From front you can do ping api and vice versa.
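A quick sanity check, assuming both projects are up and that ping is available in the front image:
cd front
docker-compose exec front ping -c 1 api
If the shared network is wired up correctly, the api name resolves and the ping succeeds.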
UPDATE: As of compose file version 3.5:
This now works:
version: "3.5"
services:
  proxy:
    image: hello-world
    ports:
      - "80:80"
    networks:
      - proxynet
networks:
  proxynet:
    name: custom_network
docker-compose up -d will join a network called 'custom_network'. If it doesn't exist, it will be created!
root@ubuntu-s-1vcpu-1gb-tor1-01:~# docker-compose up -d
Creating network "custom_network" with the default driver
Creating root_proxy_1 ... done
Now, you can do this:
version: "2"
services:
  web:
    image: hello-world
    networks:
      - my-proxy-net
networks:
  my-proxy-net:
    external:
      name: custom_network
This will create a container that will be on the external network.
I can't find any reference in the docs yet but it works!
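One way to confirm it: docker network inspect lists every container attached to the network, so after bringing up both projects you should see proxy and web side by side:
docker network inspect custom_network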
Just a small addition to @johnharris85's great answer: when you run a docker-compose file, a "default" network is created, so you can just add it to the other compose file as an external network:
# front/docker-compose.yml
version: '2'
services:
  front_service:
    ...

# api/docker-compose.yml
version: '2'
services:
  api_service:
    ...
    networks:
      - front_default
networks:
  front_default:
    external: true
For me this approach was better suited because I did not own the first docker-compose file and wanted to communicate with it.
All containers from api can join the front default network with the following config:
# api/docker-compose.yml
...
networks:
  default:
    external:
      name: front_default
See the docker compose guide: using a pre-existing network (at the bottom of the page).
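If you are unsure what the other project's default network is called, docker network ls shows it; the name follows the <project>_default pattern (front_default here):
docker network ls --filter name=front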
The information in the previous posts is correct, but it lacks details on how to link the containers, which should be connected as "external_links".
I hope this example makes it clearer:
Suppose you have app1/docker-compose.yml, with two services (svc11 and svc12), and app2/docker-compose.yml with two more services (svc21 and svc22) and suppose you need to connect in a crossed fashion:
svc11 needs to connect to svc22's container
svc21 needs to connect to svc11's container.
So the configuration should be like this:
this is app1/docker-compose.yml:
version: '2'
services:
  svc11:
    container_name: container11
    [..]
    networks:
      - default # this network
      - app2_default # external network
    external_links:
      - container22:container22
    [..]
  svc12:
    container_name: container12
    [..]
networks:
  default: # this network (app1)
    driver: bridge
  app2_default: # external network (app2)
    external: true
this is app2/docker-compose.yml:
version: '2'
services:
  svc21:
    container_name: container21
    [..]
    networks:
      - default # this network (app2)
      - app1_default # external network (app1)
    external_links:
      - container11:container11
    [..]
  svc22:
    container_name: container22
    [..]
networks:
  default: # this network (app2)
    driver: bridge
  app1_default: # external network (app1)
    external: true
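To verify the crossed wiring, you can resolve each container from the other side by name (a sketch, assuming ping exists in the images):
docker exec container11 ping -c 1 container22
docker exec container21 ping -c 1 container11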
Everybody has explained really well, so I'll add the necessary code with just one simple explanation.
Use a network created outside of docker-compose (an "external" network) with docker-compose version 3.5+.
Further explanation can be found here.
The first docker-compose.yml file should define a network named giveItANamePlease, as follows:
networks:
  my-network:
    name: giveItANamePlease
    driver: bridge
The services of the first docker-compose.yml file can use the network as follows:
networks:
  - my-network
In the second docker-compose file, we need to reference the network by the name used in the first docker-compose file, which in this case is giveItANamePlease:
networks:
  my-proxy-net:
    external:
      name: giveItANamePlease
And now you can use my-proxy-net in the services of the second docker-compose.yml file, as follows:
networks:
  - my-proxy-net
Since Compose 1.18 (spec 3.5), you can just override the default network using your own custom name for all Compose YAML files you need. It is as simple as appending the following to them:
networks:
  default:
    name: my-app
The above assumes you have version set to 3.5 (or above if they don't deprecate it in 4+).
Other answers have pointed the same; this is a simplified summary.
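As a minimal sketch (the service and image names here are placeholders), appending that block to two otherwise unrelated compose files puts both projects on the one network my-app:
# project-a/docker-compose.yml (and likewise project-b/docker-compose.yml)
version: "3.5"
services:
  a:
    image: alpine:3.14
    command: sleep infinity
networks:
  default:
    name: my-app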
I came across a similar problem and I solved it by adding a small change in one of my docker-compose.yml projects.
For instance, we have two APIs, scoring and ner. The scoring API needs to send requests to the ner API for processing the input request. In order to do that, they both need to share the same network.
Note: Every compose project has its own network, which is automatically created at the time of running the app inside docker. For example, the ner API's network will be created as ner_default and the scoring API's network will be named scoring_default. This solution works for version: '3'.
As in the above scenario, my scoring API wants to communicate with the ner API, so I add the following lines. This means that whenever I create the container for the ner API, it is automatically added to the scoring_default network.
networks:
  default:
    external:
      name: scoring_default
ner/docker-compose.yml
version: '3'
services:
  ner:
    container_name: "ner_api"
    build: .
    ...
networks:
  default:
    external:
      name: scoring_default
scoring/docker-compose.yml
version: '3'
services:
  api:
    build: .
    ...
We can see how the above containers are now part of the same network, called scoring_default, using the command:
docker inspect scoring_default
{
    "Name": "scoring_default",
    ....
    "Containers": {
        "14a6...28bf": {
            "Name": "ner_api",
            "EndpointID": "83b7...d6291",
            "MacAddress": "0....",
            "IPv4Address": "0.0....",
            "IPv6Address": ""
        },
        "7b32...90d1": {
            "Name": "scoring_api",
            "EndpointID": "311...280d",
            "MacAddress": "0.....3",
            "IPv4Address": "1...0",
            "IPv6Address": ""
        },
        ...
    }
}
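If you only want the attached container names, a Go template keeps the output short (sketch):
docker network inspect scoring_default --format '{{range .Containers}}{{.Name}} {{end}}'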
You can add a .env file in all your projects containing COMPOSE_PROJECT_NAME=somename.
COMPOSE_PROJECT_NAME overrides the prefix used to name resources; as such, all your projects will use somename_default as their network, making it possible for services to communicate with each other as if they were in the same project.
NB: You'll get warnings for "orphaned" containers created from other projects.
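For example (somename is a placeholder), the same line dropped into each project's directory:
# front/.env and api/.env
COMPOSE_PROJECT_NAME=somename
Both projects then resolve to the same somename_default network.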
So many answers!
First of all, avoid hyphens in entity names such as services and networks; they can cause issues with name resolution.
Example: my-api won't work; myapi or api will.
What worked for me is:
# api/docker-compose.yml
version: '3'
services:
  api:
    container_name: api
    ...
    ports:
      - 8081:8080
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
and
# front/docker-compose.yml
version: '3'
services:
  front:
    container_name: front
    ...
    ports:
      - 81:80
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
NOTE: I added ports to show how services can access each other, and how they are accessible from the host.
IMPORTANT: If you don't specify a network name, docker-compose will craft one for you, using the name of the folder the docker-compose.yml file lives in. In this case: api_mynetwork and front_mynetwork. That will prevent communication between containers, since they will be on different networks with very similar names.
Note that the network is defined exactly the same way in both files, so you can start either service first and it will work. No need to specify which one is external; docker-compose will take care of managing that for you.
From the host
You can access either container using the published ports defined in docker-compose.yml.
You can access the Front container: curl http://localhost:81
You can access the API container: curl http://localhost:8081
From the API container
You can access the Front container using the original port, not the one you published in docker-compose.yml.
Example: curl http://front:80
From the Front container
You can access the API container using the original port, not the one you published in docker-compose.yml.
Example: curl http://api:8080
To use another docker-compose project's network (i.e. to share a network between docker-compose files):
Run the first docker-compose project with up -d.
Find the network name of the first docker-compose project with docker network ls (it contains the name of the project's root directory).
Then use that name with the structure below in the second docker-compose file.
second docker-compose.yml
version: '3'
services:
  service-on-second-compose: # Define any name that you want.
    .
    .
    .
    networks:
      - <the network name from "docker network ls">
networks:
  <the network name from "docker network ls">:
    external: true
I would ensure all containers are docker-compose'd to the same network by composing them together at the same time, using:
docker compose --file ~/front/docker-compose.yml --file ~/api/docker-compose.yml up -d
If you are
trying to communicate between two containers from different docker-compose projects and don't want to use the same network (because, say, they each have a PostgreSQL or Redis container on the same port and you would prefer not to change these ports and not to share a network)
developing locally and want to imitate communication between two docker-compose projects
running two docker-compose projects on localhost
developing Django apps or Django Rest Framework (DRF) APIs and running the app inside a container on some exposed port
getting Connection refused while trying to communicate between two containers
And you want:
container api_a to communicate with api_b (or vice versa) without a shared "docker network"
(example below)
then you can use the "host" of the second container: the IP of your computer plus the port that is mapped from inside the Docker container. You can obtain the IP of your computer with this script (from: Finding local IP addresses using Python's stdlib):
import socket

def get_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # doesn't even have to be reachable
        s.connect(('10.255.255.255', 1))
        IP = s.getsockname()[0]
    except:
        IP = '127.0.0.1'
    finally:
        s.close()
    return IP
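A quick usage check of the helper:
print(get_ip())  # prints something like 192.168.1.23 on a typical LAN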
Example:
project_api_a/docker-compose.yml:
networks:
app-tier:
driver: bridge
services:
api:
container_name: api_a
image: api_a:latest
depends_on:
- postgresql
networks:
- app-tier
inside the api_a container you are running a Django app:
manage.py runserver 0.0.0.0:8000
and the second docker-compose.yml from the other project:
project_api_b/docker-compose.yml:
networks:
  app-tier:
    driver: bridge
services:
  api:
    container_name: api_b
    image: api_b:latest
    depends_on:
      - postgresql
    networks:
      - app-tier
inside the api_b container you are running a Django app:
manage.py runserver 0.0.0.0:8001
When trying to connect from container api_a to api_b, the URL of the api_b container will be:
http://<get_ip_from_script_above>:8001/
This can be especially valuable if you are using more than two (three or more) docker-compose projects and it's hard to provide a common network for all of them - it's a good workaround and solution.
To connect two docker-compose projects you need a network, and both docker-compose projects must be put on that network.
You can create the network with docker network create name-of-network,
or you can simply put a network declaration in the networks option of the docker-compose file, and when you run docker-compose (docker-compose up) the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside that docker-compose file and can differ between files.
test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
db:
image: postgres:13.4-alpine
container_name: psql
networks:
- net-for-db
networks:
net-for-db:
name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then with the following commands you can check the network:
# if it returns 0 or you see nothing as a result, the network is established
nc -z psql (the container name)
or
ping psql
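You can also confirm from the host that both stacks joined the shared network; both container names should appear in the Containers section:
docker network inspect test-db-net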
I'm running multiple identical docker-compose.yml files in different directories, using .env files to make slight differences, and I use Nginx Proxy Manager to communicate with the other services. Here are my files:
Make sure you have created the public network first:
docker network create nginx-proxy-man
/domain1.com/docker-compose.yml, /domain2.com/docker-compose.yml, ...
version: "3.9"
services:
  webserver:
    build:
      context: ./bin/${PHPVERSION}
    container_name: "${COMPOSE_PROJECT_NAME}-${PHPVERSION}"
    ...
    networks:
      - default # network outside
      - internal # network internal
  database:
    build:
      context: "./bin/${DATABASE}"
    container_name: "${COMPOSE_PROJECT_NAME}-${DATABASE}"
    ...
    networks:
      - internal # network internal
networks:
  default:
    external: true
    name: nginx-proxy-man
  internal:
    internal: true
The .env file just changes COMPOSE_PROJECT_NAME:
COMPOSE_PROJECT_NAME=domain1_com
.
.
.
PHPVERSION=php56
DATABASE=mysql57
webserver.container_name: domain1_com-php56 - will join the default network (name: nginx-proxy-man), previously created, so Nginx Proxy Manager can reach it from the outside.
Note: container_name must be unique within the same network.
database.container_name: domain1_com-mysql57 - easier to distinguish.
In the same docker-compose.yml, the services connect to each other via the service name because they share the network domain1_com_internal. To be more secure, set this network with the option internal: true.
Note: if you don't explicitly specify networks for each service, but just use a common external network for both docker-compose.yml files, then it's likely that domain1_com will use domain2_com's database.
Another option is to just bring up the first module with docker-compose, check the IP associated with the module, and connect the second module to the previous network as external, pointing at that internal IP.
Example:
app1 - new-network created in the service lines, marked as external: true at the bottom
app2 - reference the "new-network" created when app1 goes up, mark it as external: true at the bottom, and set the IP that app1 has on this network in the config used to connect.
With this, you should be able to talk with each other.
*This way is just for local testing, in order not to end up with an overly complex configuration.
**I know it is very much a 'patch' approach, but it works for me and I think it is simple enough for others to take advantage of.
Answer for Docker Compose '3' and up
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file. Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
I had a similar case where I was working with separate docker-compose files on a docker swarm with an overlay network. To do that, all I had to do was change the networks parameters as follows:
first docker-compose.yaml
version: '3.9'
.
.
.
networks:
  net:
    driver: overlay
    attachable: true
docker-compose -p app up
Since I have specified the project name as app using -p, the initial network will be app_net.
Now, in order to run another docker-compose project with multiple services that will use the same network, you need to set it up as follows:
second docker-compose.yaml
version: '3.9'
.
.
.
networks:
net-ref:
external: true
name: app_net
docker stack deploy -c docker-compose.yml mystack
No matter what name you give to the stack, the network will not be affected and will always refer to the existing external network called app_net.
PS: It's important to make sure to check your docker-compose version.
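Before deploying the second stack you can confirm the overlay network exists (sketch):
docker network ls --filter name=app_net --filter driver=overlay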
version: '2'
services:
  bot:
    build: .
    volumes:
      - '.:/home/node'
      - /home/node/node_modules
    networks:
      - my-rede
    mem_limit: 100m
    memswap_limit: 100m
    cpu_quota: 25000
    container_name: 236948199393329152_585042339404185600_bot
    command: node index.js
    environment:
      NODE_ENV: production
networks:
  my-rede:
    external:
      name: name_rede_externa
Follow-up to @johnharris85's answer, adding some more details which may be useful to someone. Let's take two docker-compose files and connect them through networks:
1st foldername/docker-compose.yml:
version: '2'
services:
  some-contr:
    container_name: []
    build: .
    ...
    networks:
      - somenet
    ports:
      - "8080:8080"
    expose:
      # Opens port 8080 on the container
      - "8080"
    environment:
      PORT: 8080
    tty: true
networks:
  somenet: # renamed from boomnet so it matches the service reference and the foldername_somenet lookup below
    driver: bridge
2nd docker-compose.yml:
version: '2'
services:
  pushapiserver:
    container_name: [container_name]
    build: .
    command: "tail -f /dev/null"
    volumes:
      - ./:/[work_dir]
    working_dir: /[work dir]
    image: [name of image]
    ports:
      - "8060:8066"
    environment:
      PORT: 8066
    tty: true
    networks:
      - foldername_somenet
networks:
  foldername_somenet:
    external: true
Now you can make API calls from one service to the other (between different containers), like:
http://pushapiserver:8066/send_push, called from code in the 1st docker-compose.yml's project.
Two common mistakes (at least I made them a few times):
take note of the [foldername] in which your docker-compose.yml file is present. Please see above in the 2nd docker-compose.yml: I have added the folder name to the network reference, because docker creates the network as [foldername]_[networkname]
Port: this one is very common. Please note I have used 8066 when trying to make the connection, i.e. http://pushapiserver:8066/... 8066 is the port of the docker container (2nd docker-compose.yml), so when talking across compose projects docker will use the docker container port [8066] and not the host machine mapped port [8060].

Multiple apps (microservices) and one proxy (nginx) docker-compose configuration/architecture

Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose Overriding. Have a docker-compose for each microservice and another docker-compose for the proxy. When the production deployment is done, all the docker-compose would be merged to create only one (with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up. In this way, the proxy container, for example nginx, would have access to microservices to be able to redirect to one or the other depending on the request.
Shared external network. Have a docker-compose for each microservice and another docker-compose for the proxy. First, an external network would have to be created to link the proxy container with microservices.docker network create nginx_network. Then, in each docker-compose file, this network should be referenced in the necessary containers so that the proxy has visibility of the microservices and thus be able to use them in the configuration. An example is in the following link https://stackoverflow.com/a/48081535/6112286.
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker. On the other hand, it doesn't require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.
There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'
services:
  proxy:
    build: ./
    networks:
      - microservice1
      - microservice2
    ports:
      - 80:80
      - 443:443
networks:
  microservice1:
    external:
      name: microservice1_default
  microservice2:
    external:
      name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
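A quick way to confirm the proxy can actually reach the backends by those names (a sketch, assuming the service is called proxy and ping is available in its image):
docker-compose exec proxy ping -c 1 microservice1_app_1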
docker-compose is designed to orchestrate multiple containers in one single file. I do not know the content of your docker-compose files, but the right way is to write one single docker-compose.yml that could contain:
version: '3.7'
services:
  microservice1_app:
    image: ...
    volumes: ...
    networks:
      - service1_app
      - service1_db
  microservice1_db:
    image: ...
    volumes: ...
    networks:
      - service1_db
  microservice2_app:
    image: ...
    volumes: ...
    networks:
      - service2_app
      - service2_db
  microservice2_db:
    image: ...
    volumes: ...
    networks:
      - service2_db
  nginx:
    image: ...
    volumes: ...
    networks:
      - default
      - service1_app
      - service2_app
volumes:
  ...
networks:
  service1_app:
  service1_db:
  service2_app:
  service2_db:
  default:
    name: proxy_frontend
    driver: bridge
In this way, the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with the aliases subsection within the services' networks section.
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), and nginx is only able to see microservice1_app and microservice2_app; nginx is reachable from outside of Docker (bridge mode).

rationale behind docker compose "links" order

I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location, will forward them to Logstash, and then the customary Elasticsearch, Kibana.
In the docker-compose.yml, I am confused about the order of "links"
Elasticsearch links to no one while logstash links to both redis and elasticsearch
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say, elasticsearch is linked to logstash?
Instead of using the Legacy container linking method, you could instead use Docker user defined networks. Basically you can define a network for your services and then indicate in the docker-compose file that you want the container to run on that network. If your containers all run on the same network they can access each other via their container name (DNS records are added automatically).
1) Create a user-defined network:
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this :
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start services:
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your_machine$ docker exec -it kibana bash
kibana@123456:/# ping elasticsearch
First of all, links in docker are unidirectional.
More info on links:
there are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities to the default bridge network.
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above 4 functionalities with non-default user-defined networks, without any additional config docker network provides:
automatic name resolution using DNS
an automatic secured isolated environment for the containers in a network
the ability to dynamically attach and detach to multiple networks
support for the --link option to provide a name alias for the linked container
In your case: automatic DNS will help you on a user-defined network. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network. You just have to put your ELK stack + redis containers in the ELK network and remove the link directives from the compose file.
Your order looks fine to me. If you have any problem regarding the order, or waiting for services to get up in dependent containers, you can use something like the following:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    entrypoint: ./wait-for-it.sh db:5432
  db:
    image: postgres
This will make the web container wait until it can connect to the db.
You can get wait-for-it script from here.
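If you don't want to vendor the full script, a minimal sketch of the same idea looks like this (assuming nc is available in the image; the real wait-for-it.sh is considerably more robust):
#!/bin/sh
# block until the db container accepts TCP connections, then exec the main command
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec "$@"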

Communication between multiple docker-compose projects

I have two separate docker-compose.yml files in two different folders:
~/front/docker-compose.yml
~/api/docker-compose.yml
How can I make sure that a container in front can send requests to a container in api?
I know that --default-gateway option can be set using docker run for an individual container, so that a specific IP address can be assigned to this container, but it seems that this option is not available when using docker-compose.
Currently I end up doing a docker inspect my_api_container_id and look at the gateway in the output. It works but the problem is that this IP is randomly attributed, so I can't rely on it.
Another form of this question might thus be:
Can I attribute a fixed IP address to a particular container using docker-compose?
But in the end what I'm looking after is:
How can two different docker-compose projects communicate with each other?
You just need to make sure that the containers you want to talk to each other are on the same network. Networks are a first-class docker construct, and not specific to compose.
# front/docker-compose.yml
version: '2'
services:
front:
...
networks:
- some-net
networks:
some-net:
driver: bridge
...
# api/docker-compose.yml
version: '2'
services:
api:
...
networks:
- front_some-net
networks:
front_some-net:
external: true
Note: Your app’s network is given a name based on the “project name”, which is based on the name of the directory it lives in, in this case a prefix front_ was added
They can then talk to each other using the service name. From front you can do ping api and vice versa.
UPDATE: As of compose file version 3.5:
This now works:
version: "3.5"
services:
proxy:
image: hello-world
ports:
- "80:80"
networks:
- proxynet
networks:
proxynet:
name: custom_network
docker-compose up -d will join a network called 'custom_network'. If it doesn't exist, it will be created!
root#ubuntu-s-1vcpu-1gb-tor1-01:~# docker-compose up -d
Creating network "custom_network" with the default driver
Creating root_proxy_1 ... done
Now, you can do this:
version: "2"
services:
web:
image: hello-world
networks:
- my-proxy-net
networks:
my-proxy-net:
external:
name: custom_network
This will create a container that will be on the external network.
I can't find any reference in the docs yet but it works!
Just a small adittion to #johnharris85's great answer,
when you are running a docker compose file, a "default" network is created
so you can just add it to the other compose file as an external network:
# front/docker-compose.yml
version: '2'
services:
front_service:
...
...
# api/docker-compose.yml
version: '2'
services:
api_service:
...
networks:
- front_default
networks:
front_default:
external: true
For me this approach was more suited because I did not own the first docker-compose file and wanted to communicate with it.
All containers from api can join the front default network with following config:
# api/docker-compose.yml
...
networks:
default:
external:
name: front_default
See docker compose guide: using a pre existing network (see at the bottom)
The previous posts information is correct, but it does not have details on how to link containers, which should be connected as "external_links".
Hope this example make more clear to you:
Suppose you have app1/docker-compose.yml, with two services (svc11 and svc12), and app2/docker-compose.yml with two more services (svc21 and svc22) and suppose you need to connect in a crossed fashion:
svc11 needs to connect to svc22's container
svc21 needs to connect to svc11's container.
So the configuration should be like this:
this is app1/docker-compose.yml:
version: '2'
services:
svc11:
container_name: container11
[..]
networks:
- default # this network
- app2_default # external network
external_links:
- container22:container22
[..]
svc12:
container_name: container12
[..]
networks:
default: # this network (app1)
driver: bridge
app2_default: # external network (app2)
external: true
this is app2/docker-compose.yml:
version: '2'
services:
svc21:
container_name: container21
[..]
networks:
- default # this network (app2)
- app1_default # external network (app1)
external_links:
- container11:container11
[..]
svc22:
container_name: container22
[..]
networks:
default: # this network (app2)
driver: bridge
app1_default: # external network (app1)
external: true
Everybody has explained really well, so I'll add the necessary code with just one simple explanation.
Use a network created outside of docker-compose (an "external" network) with docker-compose version 3.5+.
Further explanation can be found here.
First docker-compose.yml file should define network with name giveItANamePlease as follows.
networks:
my-network:
name: giveItANamePlease
driver: bridge
The services of first docker-compose.yml file can use network as follows:
networks:
- my-network
In second docker-compose file, we need to proxy the network by using the network name which we have used in first docker-compose file, which in this case is giveItANamePlease:
networks:
my-proxy-net:
external:
name: giveItANamePlease
And now you can use my-proxy-net in services of a second docker-compose.yml file as follows.
networks:
- my-proxy-net
Since Compose 1.18 (spec 3.5), you can just override the default network using your own custom name for all Compose YAML files you need. It is as simple as appending the following to them:
networks:
default:
name: my-app
The above assumes you have version set to 3.5 (or above if they don't deprecate it in 4+).
Other answers have pointed the same; this is a simplified summary.
UPDATE: As of docker-compose file version 3.5:
I came across a similar problem and I solved it by adding a small change in one of my docker-compose.yml project.
For instance, we have two API's scoring and ner. Scoring API needs to send a request to the ner API for processing the input request. In order to do that they both are supposed to share the same network.
Note: Every container has its own network which is automatically created at the time of running the app inside docker. For example ner API network will be created like ner_default and scoring API network will be named as scoring default. This solution will work for version: '3'.
As in the above scenario, my scoring API wants to communicate with ner API then I will add the following lines. This means Whenever I create the container for ner API then it automatically added to the scoring_default network.
networks:
default:
external:
name: scoring_default
ner/docker-compose.yml
version: '3'
services:
ner:
container_name: "ner_api"
build: .
...
networks:
default:
external:
name: scoring_default
scoring/docker-compose.yml
version: '3'
services:
api:
build: .
...
We can see this how the above containers are now a part of the same network called scoring_default using the command:
docker inspect scoring_default
{
"Name": "scoring_default",
....
"Containers": {
"14a6...28bf": {
"Name": "ner_api",
"EndpointID": "83b7...d6291",
"MacAddress": "0....",
"IPv4Address": "0.0....",
"IPv6Address": ""
},
"7b32...90d1": {
"Name": "scoring_api",
"EndpointID": "311...280d",
"MacAddress": "0.....3",
"IPv4Address": "1...0",
"IPv6Address": ""
},
...
}
You can add a .env file in all your projects containing COMPOSE_PROJECT_NAME=somename.
COMPOSE_PROJECT_NAME overrides the prefix used to name resources, as such all your projects will use somename_default as their network, making it possible for services to communicate with each other as they were in the same project.
NB: You'll get warnings for "orphaned" containers created from other projects.
So many answers!
First of all, avoid hyphens in entity names such as services and networks. They cause issues with name resolution.
Example: my-api won't work. myapi or api will work.
What worked for me is:
# api/docker-compose.yml
version: '3'
services:
api:
container_name: api
...
ports:
- 8081:8080
networks:
- mynetwork
networks:
mynetwork:
name: mynetwork
and
# front/docker-compose.yml
version: '3'
services:
front:
container_name: front
...
ports:
- 81:80
networks:
- mynetwork
networks:
mynetwork:
name: mynetwork
NOTE: I added ports to show how services can access each other, and how they are accessible from the host.
IMPORTANT: If you don't specify a network name, docker-compose will craft one for you, using the name of the folder the docker-compose.yml file is in. In this case: api_mynetwork and front_mynetwork. That would prevent communication between containers, since they would be on different networks with very similar names.
Note that the network is defined exactly the same way in both files, so you can start either service first and it will work. There is no need to specify which one is external; docker-compose takes care of managing that for you.
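To confirm, once both projects are up you can list the containers attached to the shared network (a sketch; output depends on your setup):

docker network inspect mynetwork --format '{{range .Containers}}{{.Name}} {{end}}'
# expected to print something like: api front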
From the host
You can access either container using the published ports defined in docker-compose.yml.
You can access the Front container: curl http://localhost:81
You can access the API container: curl http://localhost:8081
From the API container
You can access the Front container using the original port, not the one you published in docker-compose.yml.
Example: curl http://front:80
From the Front container
You can access the API container using the original port, not the one you published in docker-compose.yml.
Example: curl http://api:8080
To share a network between docker-compose projects, do the following:
Run the first docker-compose project with up -d.
Find the network name of the first project with docker network ls (it contains the name of the project's root directory).
Then use that name, with the structure below, in the second docker-compose file.
second docker-compose.yml
version: '3'
services:
service-on-second-compose: # Define any names that you want.
.
.
.
networks:
- <put it here(the network name that comes from "docker network ls")>
networks:
  <put it here(the network name that comes from "docker network ls")>:
    external: true
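For example, if the first project lives in a folder named front and uses the default network, docker network ls will show front_default, and the second file would fill in the template like this (the folder name is an assumption for illustration):

version: '3'
services:
  service-on-second-compose:
    image: alpine:3.14    # placeholder image
    networks:
      - front_default
networks:
  front_default:
    external: true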
I would ensure all containers are docker-compose'd to the same network by composing them together at the same time, using:
docker compose --file ~/front/docker-compose.yml --file ~/api/docker-compose.yml up -d
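When merging files like this, Compose names the default network after the project, which in turn defaults to the current directory's name; pinning it with -p makes the result predictable (a sketch using the question's paths):

docker compose -p myapp -f ~/front/docker-compose.yml -f ~/api/docker-compose.yml up -d
# all services now share the myapp_default network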
If you are
trying to communicate between two containers from different docker-compose projects and don't want to use the same network (because, say, each has a PostgreSQL or Redis container on the same port, and you would prefer not to change those ports or put them on the same network)
developing locally and want to imitate communication between two docker compose projects
running two docker-compose projects on localhost
developing especially Django apps or Django Rest Framework (drf) API and running app inside container on some exposed port
getting Connection refused when trying to communicate between the two containers
And you want to
container api_a to communicate with api_b (or vice versa) without sharing a "docker network"
(example below)
you can use "host" of the second container as IP of your computer and port that is mapped from inside Docker container. You can obtain IP of your computer with this script (from: Finding local IP addresses using Python's stdlib):
import socket

def get_ip():
    # Open a UDP socket; no traffic is actually sent.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # "Connecting" a UDP socket only selects the outgoing
        # interface, so the address doesn't even have to be reachable.
        s.connect(('10.255.255.255', 1))
        ip = s.getsockname()[0]
    except Exception:
        ip = '127.0.0.1'
    finally:
        s.close()
    return ip
Example:
project_api_a/docker-compose.yml:
networks:
app-tier:
driver: bridge
services:
api:
container_name: api_a
image: api_a:latest
depends_on:
- postgresql
networks:
- app-tier
Inside the api_a container you are running a Django app:
manage.py runserver 0.0.0.0:8000
and the second docker-compose.yml from the other project:
project_api_b/docker-compose.yml:
networks:
app-tier:
driver: bridge
services:
api:
container_name: api_b
image: api_b:latest
depends_on:
- postgresql
networks:
- app-tier
Inside the api_b container you are running a Django app:
manage.py runserver 0.0.0.0:8001
When trying to connect from container api_a to api_b, the URL of the api_b container will be:
http://<get_ip_from_script_above>:8001/
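Note that this only works if api_b publishes its port to the host; the compose fragments above elide it, but a minimal addition to project_api_b/docker-compose.yml (an assumption, matching the runserver port above) would be:

services:
  api:
    ports:
      - "8001:8001"   # host:container; makes the app reachable via the host IP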
This can be especially valuable if you are using more than two (three or more) docker-compose projects and it's hard to provide a common network for all of them; it's a good workaround and solution.
To connect two docker-compose projects, you need a network and you need to put both projects on that network.
You can create the network with docker network create name-of-network,
or you can simply put a network declaration in the networks option of the docker-compose file; then, when you run docker-compose up, the network will be created automatically.
put the below lines in both docker-compose files
networks:
net-for-alpine:
name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose files and can differ between them.
test-db-net is the external name of the network and must be the same in both docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml
docker-compose.alpine.yml would be:
version: '3.8'
services:
alpine:
image: alpine:3.14
container_name: alpine
networks:
- net-for-alpine
# these two options keep the alpine container running
stdin_open: true # docker run -i
tty: true # docker run -t
networks:
net-for-alpine:
name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
db:
image: postgres:13.4-alpine
container_name: psql
networks:
- net-for-db
networks:
net-for-db:
name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then, with the following commands, you can check the network:
# if it exits with 0 or you see nothing as a result, the network is established
nc -z psql 5432   # psql is the container name, 5432 the Postgres port
or
ping psql
I'm running multiple identical docker-compose.yml files in different directories, using .env files to vary them slightly, and I use Nginx Proxy Manager to communicate with the other services. Here are my files:
Make sure you have created a public network:
docker network create nginx-proxy-man
/domain1.com/docker-compose.yml, /domain2.com/docker-compose.yml, ...
version: "3.9"
services:
webserver:
build:
context: ./bin/${PHPVERSION}
container_name: "${COMPOSE_PROJECT_NAME}-${PHPVERSION}"
...
networks:
- default # network outside
- internal # network internal
database:
build:
context: "./bin/${DATABASE}"
container_name: "${COMPOSE_PROJECT_NAME}-${DATABASE}"
...
networks:
- internal # network internal
networks:
default:
external: true
name: nginx-proxy-man
internal:
internal: true
In the .env file, just change COMPOSE_PROJECT_NAME:
COMPOSE_PROJECT_NAME=domain1_com
.
.
.
PHPVERSION=php56
DATABASE=mysql57
webserver.container_name: domain1_com-php56 - joins the default network (name: nginx-proxy-man), created earlier, so that Nginx Proxy Manager can reach it from the outside.
Note: container_name must be unique across the Docker host.
database.container_name: domain1_com-mysql57 - easier to distinguish.
Within the same docker-compose.yml, the services connect to each other via their service names, because they share the internal network domain1_com_internal. To be more secure, set this network with the option internal: true.
Note: if you don't explicitly specify networks for each service and just use a common external network for both docker-compose.yml files, it's likely that domain1_com will end up using domain2_com's database.
Another option is to simply bring up the first module with docker-compose, check the IP associated with the module, and connect the second module to the previous network as external, pointing at that internal IP.
Example:
app1 - create new-network in the service lines, and mark it as external: true at the bottom.
app2 - reference the "new-network" created when app1 goes up, mark it as external: true at the bottom, and put the IP that app1 has on this network in the config used to connect.
With this, you should be able to have them talk to each other.
*This approach is aimed at local testing only, in order to avoid an overly complex configuration.
**I know it's very much a 'patch' approach, but it works for me, and it's simple enough that others may be able to take advantage of it.
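A hedged way to look up that internal IP, assuming the first project's folder is named app1 so the network is called app1_new-network:

docker network inspect app1_new-network --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'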
Answer for Docker Compose '3' and up
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that by default Docker Compose creates a hostname equal to the service name in the docker-compose.yml file. Consider the following docker-compose.yml:
version: '3.9'
services:
server:
image: node:16.9.0
container_name: server
tty: true
stdin_open: true
depends_on:
- mongo
command: bash
mongo:
image: mongo
environment:
MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
I had a similar case where I was working with separate docker-compose files on a docker swarm with an overlay network. To make that work, all I had to do was change the networks parameters as follows:
first docker-compose.yaml
version: '3.9'
.
.
.
networks:
net:
driver: overlay
attachable: true
docker-compose -p app up
Since I specified the project name app using -p, the initial network will be app_net.
Now, in order to run another docker-compose project with multiple services that will use the same network, you need to set it up as follows:
second docker-compose.yaml
version: '3.9'
.
.
.
networks:
net-ref:
external: true
name: app_net
docker stack deploy -c docker-compose.yml mystack
No matter what name you give the stack, the network will not be affected; it will always refer to the existing external network called app_net.
PS: It's important to make sure to check your docker-compose version.
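To double-check that the second stack really attached to the existing network rather than creating its own (the names follow the example above):

docker network ls --filter name=app_net
docker network inspect app_net --format 'driver={{.Driver}} attachable={{.Attachable}}'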
version: '2'
services:
bot:
build: .
volumes:
- '.:/home/node'
- /home/node/node_modules
networks:
- my-rede
mem_limit: 100m
memswap_limit: 100m
cpu_quota: 25000
container_name: 236948199393329152_585042339404185600_bot
command: node index.js
environment:
NODE_ENV: production
networks:
my-rede:
external:
name: name_rede_externa
Follow-up to JohnHarris' answer, just adding some more details which may be useful to someone. Let's take two docker-compose files and connect them through networks:
1st foldername/docker-compose.yml:
version: '2'
services:
some-contr:
container_name: []
build: .
...
networks:
- somenet
ports:
- "8080:8080"
expose:
# Opens port 8080 on the container
- "8080"
environment:
PORT: 8080
tty: true
networks:
somenet:
driver: bridge
2nd docker-compose.yml:
version: '2'
services:
pushapiserver:
container_name: [container_name]
build: .
command: "tail -f /dev/null"
volumes:
- ./:/[work_dir]
working_dir: /[work dir]
image: [name of image]
ports:
- "8060:8066"
environment:
PORT: 8066
tty: true
networks:
- foldername_somenet
networks:
foldername_somenet:
external: true
Now you can make API calls from one service to another (between different containers), like:
http://pushapiserver:8066/send_push, called from code in the first docker-compose.yml's project.
Two common mistakes (at least I made them a few times):
Take note of the [foldername] your docker-compose.yml file is in. See above: in the 2nd docker-compose.yml I added the folder name to the network, because docker creates the network as [foldername]_[networkname].
Port: this one is very common. Note that I used 8066 when trying to make the connection, i.e. http://pushapiserver:8066/...; 8066 is the port of the docker container (2nd docker-compose.yml). So when talking across docker-compose projects, docker will use the docker container port [8066] and not the host machine's mapped port [8060].
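To make the distinction concrete (the endpoint name follows the example above):

# from the host, use the published (host) port:
curl http://localhost:8060/send_push
# from another container on the same network, use the container port:
curl http://pushapiserver:8066/send_push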
