Docker-compose network link - docker

Docker's 'link' feature is being deprecated now that the new 'networking' feature has been released (link). I'm building a docker-compose setup with several containers, and connecting them with 'link' worked fine (without any other commands).
Since I need to migrate my link configuration to networks, I have to create the docker network before running 'docker-compose up'. Is there a docker-compose feature that creates the docker network automatically? Or any other way to connect the containers through configuration?

By default, docker-compose with a v2 yml will spin up a network for your project. Any networks you define will also be created unless you explicitly tell it otherwise. Here's an example docker-compose.yml:
version: '2'
networks:
  dbnet:
  appnet:
services:
  db:
    image: busybox
    command: tail -f /dev/null
    networks:
      - dbnet
  app:
    image: busybox
    command: tail -f /dev/null
    networks:
      - dbnet
      - appnet
  proxy:
    image: busybox
    command: tail -f /dev/null
    ports:
      - 80
    networks:
      - appnet
And then when you spin it up, you'll see that it creates the networks defined:
$ docker-compose up -d
Creating network "test_dbnet" with the default driver
Creating network "test_appnet" with the default driver
Creating test_app_1
Creating test_db_1
Creating test_proxy_1
Note that linking containers also created an implicit dependency, so after removing your links you may want to use depends_on in your yml to make any dependencies explicit.
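For example, a minimal sketch of an explicit dependency, reusing the services above (note that depends_on only controls start order, not readiness):
services:
  app:
    image: busybox
    command: tail -f /dev/null
    depends_on:
      - db
    networks:
      - dbnet
      - appnet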

docker-compose creates a default network for your compose project by itself. You only have to migrate your compose projects to version: '2' or version: '3' of the compose yaml format. Please read how to upgrade for more information.
With version 2 and 3, you don't have to specify links anymore, as all services will be in the default network if you don't explicitly specify other networks.
UPDATE: To make two containers talk to each other, you can simply use the service names, which resolve to container IPs. Links are now only required if a container expects a specific name, e.g. because it is hardcoded.
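To illustrate, assuming a project with services named web and redis (hypothetical names), the web container can reach its peer by service name alone:
# 'redis' resolves via Compose's built-in DNS to the redis container's IP
docker-compose exec web ping -c 1 redis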

Related

How to unset network mode in an overriding docker compose file?

Consider a part of some base docker-compose.yml file:
services:
  foo:
    image: bar
    network_mode: host
    ...
Then, consider a docker-compose.prod.yml file which would override the base file's network mode and also set ports:
services:
  foo:
    ports:
      - 'xxxx:yyyy'
    network_mode: ?
I am looking for a value of ? such that the network_mode is considered unset. In other words, setting it to none or bridge doesn't seem to work, so I want it to just disappear, or use a value with such an effect (I don't think there is a default).
An alternative solution to this problem is to define three docker-compose files: docker-compose.yml, docker-compose.prod.yml, and docker-compose.dev.yml (or something equivalent, doesn't matter). It works fine (see below), but I would rather have 2 files only, and override the dev file with the prod file, rather than the other way around (it feels more natural this way).
A working version using three files:
docker-compose.yml
services:
  foo:
    image: bar
    ...
docker-compose.dev.yml
services:
  foo:
    network_mode: host
docker-compose.prod.yml
services:
  foo:
    ports:
      - 'xxxx:yyyy'
Notes:
All files are using docker-compose version 3.
The specific setup which doesn't work with bridge network mode in my case is a collection of three services - one for running web stuff (exposed to public), one for celery workers (internal), and one for Redis (internal). Using bridge in web and/or celery results in being unable to connect to the Redis service.
I don't know how to remove an option from an earlier docker-compose.yaml, but bridge really does work, so you may want to double-check.
If we look at Docker's default networks, you can see that bridge is really there; if we do not set --net, Docker uses bridge by default:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
32a9a31082ae        bridge              bridge              local
dcb12f4cc711        host                host                local
16c73acab8c9        none                null                local
Here is a small example to prove it:
docker-compose.yaml:
version: "3"
services:
web:
image: python:3.7
network_mode: host
command: python -m http.server 9000
docker-compose.prod.yaml:
version: "3"
services:
web:
ports:
- 10000:9000
network_mode: bridge
docker-compose up
With this command we in fact use only docker-compose.yaml. If we open a browser, we can successfully visit e.g. http://$ip:9000, because the network mode is host.
docker-compose -f docker-compose.yaml -f docker-compose.prod.yaml up
With this command we use both compose files; the latter overrides any duplicate options in the former.
If you open a browser, you will find you can no longer visit http://$ip:9000; you can only visit http://$ip:10000. This proves that host lost its effect and bridge successfully overrode host.
(NOTE: you have to make sure docker-compose.prod.yaml comes after docker-compose.yaml; the order is important.)
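For a quick check without a browser (a sketch run on the Docker host itself):
# after the override, host port 10000 maps to container port 9000
curl http://localhost:10000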
UPDATE 20211017 based on your new comments:
If you additionally want to visit another container in your compose project, you have to set network_mode in that target service so it shares the network of the source service, with network_mode: service:web; see network mode:
docker-compose.yaml:
version: "3"
services:
  web:
    image: python:3.7-alpine3.13
    command: python -m http.server 9000
    network_mode: host
  db:
    image: python:3.7-alpine3.13
    command: python -m http.server 8000
    network_mode: service:web
docker-compose.prod.yaml:
version: "3"
services:
  web:
    ports:
      - 10000:9000
    network_mode: bridge
Then, after up, you could use something like next to directly visit the service in db:
docker-compose exec web wget http://localhost:8000 -O -
NOTE:
With network_mode: bridge in web, the web container no longer uses the default network that compose sets up for you. As a result, you won't benefit from the automatic DNS set up by compose, which means that inside web you won't be able to access the db container by its service name db.
To overcome this, you can now let the db container use web's network explicitly with network_mode: service:web. The two containers then share the same network namespace, so you don't reach db by service name but via localhost. Inside web, you can now access db's port 8000 simply at http://localhost:8000.
TL;DR
docker-compose.prod.yml
services:
  foo:
    network_mode: unset # Can be any string other than 'host'.
    networks: [ default ]
    ports: [ 80:80 ]
Docker Compose Default Network
This is the default behavior: if there's no networks: and no network_mode: defined for the service then it's attached to the default network.
Unfortunately an empty string is ignored by the YAML parser, so we can't override network_mode: back to ''. What we can do is set networks: [ default ], but that won't work alongside network_mode: host, so what can we set network_mode: to?
network_mode: is equivalent to Docker's --network option; if we set it to a network name, networks: will override it, so we can use any value other than host, which apparently triggers special behavior. It's probably a good idea to set it to something that will fail without networks:, and to avoid default, bridge, and none.
To restore the default behavior we need to add default to networks: and set network_mode: to something other than host.
Note that network_mode: default is equivalent to --network default or --network bridge which uses the default Docker network: bridge, and not the default project-wide Compose network: projectName_default.
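To verify which network the service actually joined, a quick check (the project and container names are placeholders):
docker inspect -f '{{json .NetworkSettings.Networks}}' projectname_foo_1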

Dynamically add docker container ip in Dockerfile (redis)

How do I dynamically add a container IP in another Dockerfile? I am running two containers: a) Redis, b) a Java application.
I need to pass the Redis URL at runtime to my Java arguments.
Currently I manually check the Redis IP and copy it into the Dockerfile, and later create a new image for the Java application using the Redis IP.
docker run --name my-redis -d redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-redis
In the Dockerfile (java application):
CMD ["-Dspring.redis.host=172.17.0.2", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]
Can I use a script to update the Dockerfile, or can I use an environment variable?
You can assign a static IP address to your docker container when you run it, following these steps:
1 - create a custom network (note: pick a subnet that does not overlap Docker's default bridge, which typically uses 172.17.0.0/16, otherwise network creation fails):
docker network create --subnet=172.18.0.0/16 redis-net
2 - run the redis container on the specified network and assign the IP address:
docker run --net redis-net --ip 172.18.0.2 --name my-redis -d redis
By then you have the static IP address 172.18.0.2 for the my-redis container; you don't need to inspect it anymore.
3 - now it is possible to run the java application container, but it must use the same network:
docker run --net redis-net my-java-app
Of course you can optimize the solution by using env variables or whatever you find convenient for your setup, for example:
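A hedged sketch (the REDIS_HOST variable name is an assumption; the java app's entrypoint would need to read it and pass it on as -Dspring.redis.host):
docker run --net redis-net -e REDIS_HOST=172.18.0.2 my-java-app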
More info can be found in the official docs (search for --ip):
docker run
docker network
Edit (add docker-compose):
I just found out that it is also possible to assign static IPs using docker-compose, and this answer gives an example of how.
This is a similar example just in case:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    networks:
      vpcbr:
        ipv4_address: 172.18.0.2
  java-app:
    container_name: java-app
    build: <path to Dockerfile>
    networks:
      vpcbr:
        ipv4_address: 172.18.0.3
    depends_on:
      - redis
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
official docs: https://docs.docker.com/compose/networking/
hope this helps you find your way.
You should add your containers to the same network. Then at runtime you can refer to a container by its name: the container's name is its host name on the network, so at runtime it resolves to the container's IP address.
Follow these steps:
First, create a network for the containers:
docker network create my-network
Start redis: docker run -d --network=my-network --name=redis redis
Edit the java application's Dockerfile: replace -Dspring.redis.host=172.17.0.2 with -Dspring.redis.host=redis and build again (the resulting CMD is shown after these steps).
Finally start java application container: docker run -it --network=my-network your_image. Optionally you can define a name for the container, but it is not required as you do not access java application's container from redis container.
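The updated CMD from the question's Dockerfile would then read:
CMD ["-Dspring.redis.host=redis", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]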
Alternatively you can use a docker-compose file. By default docker-compose creates a network for running services. I am not aware of your full setup, so I will provide a sample docker-compose.yml that illustrates the main concept.
version: "3.7"
services:
redis:
image: redis
java_app_image:
image: your_image_name
In both ways, you are able to access the redis container from the java application dynamically, using the container's hostname instead of a static IP.
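As a hedged variation, assuming the jar is a Spring Boot application (whose relaxed binding maps the SPRING_REDIS_HOST environment variable to the spring.redis.host property), you could avoid rebuilding the image entirely:
version: "3.7"
services:
  redis:
    image: redis
  java_app_image:
    image: your_image_name
    environment:
      - SPRING_REDIS_HOST=redis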

Service access another service on 127.0.0.1?

I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
web:
build: .
ports:
- 8080:8080
command: ["test"]
links:
- redis:127.0.0.1
redis:
image: redis:alpine
ports:
- 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There are technically possible ways to do it. One is to run all containers directly on the host network, with:
network_mode: "host"
However, that removes the docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure there's a syntax to do this in docker-compose, and it's not available in swarm mode since containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot, as in the sketch below.
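For example (the container ID is a placeholder):
# attach a throwaway debugging container to another container's network namespace
docker run -it --rm --net container:$container_id nicolaka/netshoot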
What you should do instead is make the location of the redis database a configuration parameter of your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app (a sketch follows the compose file below). This would change your compose yml file to look like:
version: "3"
services:
web:
# you should include an image name
image: your_webapp_image_name
build: .
ports:
- 8080:8080
command: ["test"]
environment:
- REDIS_URL=redis:6379
# no need to link, it's deprecated, use dns and the network docker creates
#links:
# - redis:127.0.0.1
redis:
image: redis:alpine
# no need to publish the port if you don't need external access
#ports:
# - 6379
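For the entrypoint-script approach, a minimal sketch, assuming the app reads its Redis location from a config file with a placeholder token (the file path and token are hypothetical):
#!/bin/sh
# entrypoint.sh: inject REDIS_URL into the app config, then start the app
set -e
sed -i "s|REDIS_URL_PLACEHOLDER|${REDIS_URL}|" /app/config.properties
exec "$@"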

Connect two instances of docker-compose [duplicate]

This question already has answers here:
Communication between multiple docker-compose projects
(20 answers)
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to /etc/hosts in the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
I've created a bridge network above, which only works within one node (host). Good for dev. If you need two nodes to talk to each other, you need to create an overlay network. Same principle though: you pass the network name to the docker-compose up command.
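For the multi-node case, the creation step would look like this (requires swarm mode; --attachable lets standalone containers join the overlay):
docker network create --driver overlay --attachable my-net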
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
Both instances will be able to see each other without opening ports on the host; you just need to expose the ports, and the containers will see each other through the network my-shared-network.
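To confirm that both stacks joined the shared network, list its attached containers:
docker network inspect my-shared-network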
If you set a predictable project name for the first composition you can use external_links to reference external containers by name from a different compose file.
In the next docker-compose release (1.6) you will be able to use user defined networks, and have both compositions join the same network.
Take a look at multi-host docker networking
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don't have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects you need a network, and you must put both docker-compose projects on that network.
You can create the network with docker network create name-of-network,
or you can simply put a network declaration in the networks option of the docker-compose file, and when you run docker-compose (docker-compose up) the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can differ between files.
test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then you can check the connection with the following commands:
# if it exits with 0 and prints nothing, the network is established
nc -z psql 5432
or
ping psql

How do I set hostname in docker-compose?

In my docker-compose.yml file, I have the following. However the container does not pick up the hostname value. Any ideas?
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
When I check the hostname in the container it does not pick up affy.
As of docker-compose version 3.0 and later, you can just use the hostname key:
version: "3.0"
services:
yourservicename:
hostname: your-name
I found that the hostname was not visible to other containers when using docker run. This turns out to be a known issue (perhaps more a known feature), with part of the discussion being:
We should probably add a warning to the docs about using hostname. I think it is rarely useful.
The correct way of assigning a hostname - in terms of container networking - is to define an alias like so:
services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias2
Unfortunately this still doesn't work with docker run. The workaround is to assign the container a name:
docker-compose run --name alias1 some-service
And alias1 can then be pinged from the other containers.
UPDATE: As #grilix points out, you should use docker-compose run --use-aliases to make the defined aliases available.
This seems to work correctly. If I put your config into a file:
$ cat > compose.yml <<EOF
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
EOF
And then bring things up:
$ docker-compose -f compose.yml up
Creating tmp_dns_1...
Attaching to tmp_dns_1
dns_1 | 2015-04-28T17:47:45.423387 [dockerdns] table.add tmp_dns_1.docker -> 172.17.0.5
And then check the hostname inside the container, everything seems to be fine:
$ docker exec -it tmp_dns_1 hostname
affy.affy.com
Based on docker documentation:
https://docs.docker.com/compose/compose-file/#/command
I simply put
hostname: <string>
in my docker-compose file.
E.g.:
[...]
lb01:
  hostname: at-lb01
  image: at-client-base:v1
[...]
and container lb01 picks up at-lb01 as hostname.
The simplest way I have found is to just set the container name in the docker-compose.yml; see the container_name documentation. It is applicable to docker-compose v1+. It works for container-to-container access, not from the host machine to a container.
services:
  dns:
    image: phensley/docker-dns
    container_name: affy
Now you should be able to access affy from other containers using the container name. I had to do this for multiple redis servers in a development environment.
NOTE: The solution works so long as you don't need to scale, e.g. for consistent individual developer environments.
I needed to spin up a freeipa container to have a working kdc, and had to give it a hostname, otherwise it wouldn't run.
What eventually worked for me was setting the HOSTNAME env variable in compose:
version: '2'
services:
  freeipa:
    environment:
      - HOSTNAME=ipa.example.test
Now it's working:
docker exec -it freeipa_freeipa_1 hostname
ipa.example.test
