Hi everyone, I have a requirement to write a docker-compose.yml that needs to make one of the services use two networks: the default one for communication with the other services, and the external bridge network for automatic self-discovery via nginx-proxy.
My docker-compose.yml looks like the below.
version: '2'
services:
  dns-management-frontend:
    image: ......
    depends_on:
      - dns-management-backend
    ports:
      - 80
    restart: always
    networks:
      - default
      - bridge
  dns-management-backend:
    image: ......
    depends_on:
      - db
      - redis
    restart: always
    networks:
      - default
  db:
    image: ......
    volumes:
      - ./mysql-data:/var/lib/mysql
    restart: always
    networks:
      - default
  redis:
    image: redis
    ports:
      - 6379
    restart: always
    networks:
      - default
networks:
  default:
  bridge:
    external:
      name: bridge
When I start it, it gives me the error network-scoped alias is supported only for containers in user defined networks. I had to remove the networks section from the service and, after it started, manually run docker network connect bridge <id_of_frontend_container> to make it work.
Any advice on how to configure multiple networks in docker-compose? I have also read https://docs.docker.com/compose/networking/, but it is too brief.
The Docker network named bridge is special; most notably, it doesn't provide DNS-based service discovery.
For your proxy service, you should docker network create some other network, named anything other than bridge, and either docker network connect the existing proxy container to it or restart the proxy with --net the_new_network_name. In the docker-compose.yml file, change the external: {name: ...} to the new network name.
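A minimal sketch of that setup (the network name proxy-net and the container name nginx-proxy here are placeholders for your own):

docker network create proxy-net
docker network connect proxy-net nginx-proxy

and in docker-compose.yml:

networks:
  default:
  proxy-net:
    external:
      name: proxy-net

with the frontend service listing both default and proxy-net under its networks: key.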
Any advice on how to configure multiple network in docker-compose?
As you note, Docker Compose (and for that matter Docker proper) doesn't support especially involved network topologies. At the half-dozen-container scale where Compose works well, you don't really need one. Use the default network that Docker Compose provides for you, and don't bother manually configuring networks: unless it's actually necessary (as the external proxy network is in your question).
You cannot currently mix the default bridge with other networks in Compose.
The issue is still open ...
How can we run docker commands inside a container with docker-compose?
Simply put, I want to get the IP of another container on the network.
I am running three containers: va-server, db, and api-server. All the containers are on the same Docker network.
I am providing the docker-compose file below:
version: "2.3"
services:
va-server:
container_name: va_server
image: nitinroxx/facesense:amd64_2022.11.28 #facesense:alpha
runtime: nvidia
restart: always
mem_limit: 4G
networks:
- perimeter-network
db:
container_name: mongodb
image: mongo:latest
ports:
- "27017:27017"
restart: always
volumes:
- ./facesense_db:/data/db
command: [--auth]
networks:
- perimeter-network
api-server:
container_name: api_server
image: nitinroxx/facesense:api_amd64_2022.11.28
ports:
- "80:80"
- "465:465"
restart: always
networks:
- perimeter-network
networks:
perimeter-network:
driver: bridge
ipam:
config:
- gateway: 10.16.239.1
subnet: 10.16.239.0/24
I have installed docker inside the container, which is giving me the permission error below:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
...inside [a] container [...] I want to get IP of some other network container....
Docker provides an internal DNS service that can resolve container names to their Docker-internal IP addresses. From one of the containers you show, you could look up a host name like db to get the container's IP address; but in practice, this is a totally normal DNS name and all but the lowest-level networking interfaces can use those directly.
This does require that all of the containers involved be on the same Docker network. Normally Compose sets this up automatically for you; in the file you show I might delete the networks: blocks and container_name: overrides in the name of simplicity. Also see Networking in Compose in the Docker documentation.
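For illustration, the same file trimmed down that way (a sketch; nothing else changes):

version: "2.3"
services:
  va-server:
    image: nitinroxx/facesense:amd64_2022.11.28
    runtime: nvidia
    restart: always
    mem_limit: 4G
  db:
    image: mongo:latest
    ports:
      - "27017:27017"
    restart: always
    volumes:
      - ./facesense_db:/data/db
    command: [--auth]
  api-server:
    image: nitinroxx/facesense:api_amd64_2022.11.28
    ports:
      - "80:80"
      - "465:465"
    restart: always

All three services land on the Compose-provided default network and can reach each other as va-server, db, and api-server.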
In short:
You can probably use the Compose service names va-server, db, and api-server as host names without specifically knowing their IP addresses.
This probably means you never need to know the container IP addresses at all (they're usually unusable from outside Docker).
If you do need an IP address from inside a container, a DNS lookup can find it.
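For example, from inside one of the containers (a sketch; getent ships with most glibc- and musl-based images, and the address shown is illustrative):

getent hosts db
# 172.18.0.2      db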
You can't usually run docker commands from inside containers. You can't do this safely without making it possible for the container to take over the whole host. There are usually better patterns that don't tie you to the Docker stack specifically.
I have two containers running on the same host using Docker; however, one container uses the host network while the other uses a custom bridge network, as follows:
version: '3.8'
services:
  app1:
    container_name: app1
    hostname: app1
    image: app1/app1
    restart: always
    networks:
      local:
        ipv4_address: 10.0.0.8
    ports:
      - "9000:9000/tcp"
    volumes:
      - /host:/container
  app2:
    container_name: app2
    hostname: app2
    image: app2/app2
    restart: always
    network_mode: host
    volumes:
      - /host:/container
networks:
  local:
    ipam:
      driver: bridge
      config:
        - subnet: "10.0.0.0/24"
I have normal IP communication between the two containers; however, when I want to use the containers' hostnames to communicate, it fails. Is there a way to make this feature work with host networking?
No, you can't do this. You probably could turn off host networking though.
Host networking pretty much completely disables Docker's networking layer. In the same way that a process outside a container can't directly communicate with a container except via its published ports:, a container that uses host networking would have to talk to localhost and the other container's published port. If the host has multiple interfaces it's up to the process to figure out which one(s) to listen on, and you can't do things like remap ports.
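Concretely, given the file in the question, app2 on the host network can only reach app1 through its published port (a sketch, assuming a curl binary exists in app2's image):

# from inside app2, which shares the host's network namespace
curl http://localhost:9000/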
You almost never need host networking in practice. It's appropriate in three cases: if a service listens on a truly large number of ports (thousands); if the service's port is unpredictable; or for a management tool that's consciously trying to escape the container. You do not need host networking to make outbound network calls, and it's not a good solution to work around an incorrect hard-coded host name.
For a typical application, I would remove network_mode: host. If app2 needs to be reached from outside the container, add ports: to it. You also do not need any of the manual networking configuration you show, since Compose creates a default network for you and Docker automatically assigns IP addresses on its own.
A functioning docker-compose.yml file that omits the unnecessary options and also does not use host networking could look like:
version: '3.8'
services:
  app1:
    image: app1/app1
    restart: always
    ports: # optional if it does not need to be directly reached
      - "9000:9000/tcp"
    # no container_name:, hostname:, networks:, manual IP configuration
    # volumes: may not be necessary in routine use
  app2:
    image: app2/app2
    restart: always
    # add to make the container accessible
    ports:
      - "3000:3000"
    # configure communication with the first service
    environment:
      APP1_URL: 'http://app1:9000'
I'm fairly new to docker and docker compose, so forgive me if this is a stupid question...
I have a compose file with 2 containers: a homeassistant container with port 8123 exposed and a database with 5432. The homeassistant container can access the database using the URL postgresql://user:password@db:5432/homeassistant_db. I think this is because docker has created a db binding on the host, and that's why I can connect to db.
However, I need to bind the homeassistant container to the host, which I can do with network_mode: "host" (you can see it commented out in my config). When I do this I can indeed bind to the host, and homeassistant can do its discovery of network devices etc...
Unfortunately, this breaks the connection with the database, so that I can't use the postgresql://user:password@db:5432/homeassistant_db URL any longer.
How do I attach homeassistant to the host AND keep the database connection working? I guess I could change the database host from db to the pi's IP or network name (e.g. postgresql://user:password@192.168.0.100:5432/homeassistant_db or postgresql://user:password@homeassistant.local:5432/homeassistant_db), but this doesn't feel as clean or as robust as it could be.
I don't really understand the network bindings, so I want to try and learn so I can fix this myself going forward.
compose file below:
version: '3'
services:
  db:
    restart: always
    container_name: "homeassistant_db_container"
    # image: postgres:latest
    image: tobi312/rpi-postgresql
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgres/data:/var/lib/postgresql/data/pgdata
    env_file:
      - ./envs/database.env
  home_assistant:
    container_name: "homeassistant_container"
    restart: always
    image: homeassistant/raspberrypi3-homeassistant
    ports:
      - "8123:8123"
    # network_mode: "host"
    env_file:
      - ./envs/homeassistant.env
    volumes:
      - ./configs/homeassistant:/config
    depends_on:
      - db
volumes:
  data:
    driver_opts:
      type: none
      o: bind
      device: "${PWD}/data/postgres"
You can put both containers on the same network, as shown below, and then connect the way you want to. Just add the code below to your compose file; since the network is marked external, create it once with docker network create tools, and Compose will attach both of these containers to it. This also gives you a layer of security, so that no other containers can talk to your db container.
Second, remove container_name. You are confusing yourself. Services get their host names equal to service names by default.
networks:
  default:
    external:
      name: "tools"
We currently have docker containers with complex builds using supervisord so that we can group services together. For example, nginx and ssh.
I'm attempting to rebuild these with more service-driven isolation linked by shared volumes. However, without mapping the IP to the host, I can't seem to find a way to allow IP addresses to be shared even though the ports may be discrete.
What I'm trying to do is something like this:
version: '2'
services:
  web:
    image: nginx
    volumes:
      - /data/web:/var/www
    networks:
      public:
        ipv4_address: 10.0.0.1
    ports:
      - "10.0.0.1:80:80"
  ssh:
    image: alpine-sshd
    volumes:
      - /data/web:/var/www
    networks:
      public:
        ipv4_address: 10.0.0.1
    ports:
      - "10.0.0.1:22:22"
networks:
  public:
    external: true
...where public is a predefined docker macvlan network.
When I try this, I get the error:
ERROR: for ssh Cannot start service ssh: Address already in use
I'm aware that another solution to this is to introduce a third service to work as a proxy. However, I thought this would be a simple enough case not to need it.
Is it possible to configure docker-compose/docker-networking to route by the port to allow the same IP address to be used for different containers?
Is it possible to configure docker-compose/docker-networking to route by the port to allow the same IP address to be used for different containers?
Yes we can (familiar? -_-!). Docker offers a network mode called service:service-name.
In a Compose file this is written as network_mode: "service:<service-name>"; the equivalent flag for docker run is --network container:<container-name>. It means the current container uses the same network namespace as that service's container. More information is referenced here.
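For example, with plain docker run (a sketch; the names are illustrative):

docker run -d --name web nginx
# join web's network namespace; localhost is now shared between the two containers
docker run --rm --network container:web curlimages/curl curl -s http://localhost/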
Try the compose file below. I've tested it, and it works well.
version: '2'
services:
  web:
    image: nginx
    networks:
      public:
        ipv4_address: 10.0.0.2
    ports:
      - "8880:80"
      - "2220:22"
  ssh:
    image: panubo/sshd
    network_mode: "service:web"
    depends_on:
      - web
networks:
  public:
    external: true
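You can confirm the shared namespace from the host (a sketch; the actual container name depends on your Compose project):

docker inspect -f '{{ .HostConfig.NetworkMode }}' <id_of_ssh_container>
# container:<id_of_web_container>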
I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location and forward them to Logstash, followed by the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links".
Elasticsearch links to no one, while logstash links to both redis and elasticsearch:
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say elasticsearch is linked to logstash?
Instead of using the legacy container linking method, you can use Docker user-defined networks. Basically, you define a network for your services and then indicate in the docker-compose file that you want the containers to run on that network. If your containers all run on the same network, they can access each other via their container names (DNS records are added automatically).
1) Create a user-defined network
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start the services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your__Machine:/ docker exec -it kibana bash
kibana#123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
There are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities to the default bridge network.
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above 4 functionalities with non-default user-defined networks, without any additional config docker network provides:
automatic name resolution using DNS
an automatically secured, isolated environment for the containers in a network
the ability to dynamically attach to and detach from multiple networks
support for the --link option to provide a name alias for the linked container
In your case, automatic DNS resolution on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers that are on the same user-defined network; just put your ELK stack + Redis containers on the ELK network and remove the links directives from the compose file, as sketched below.
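A minimal sketch of the result (one service shown; the images and any ports are whatever you already use):

version: '2'
services:
  logstash:
    image: logstash
    networks:
      - ELK
  # ...elasticsearch, redis, and kibana get the same networks: block, with no links:
networks:
  ELK:
    external: true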
Your order looks fine to me. If you have any problem regarding the order, or with waiting for dependent services to come up, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get wait-for-it script from here.
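If you'd rather not vendor the full script, a minimal stand-in looks like this (a sketch: the real wait-for-it.sh parses its host:port argument and supports timeouts, and this assumes nc is available in the image):

#!/bin/sh
# block until db:5432 accepts TCP connections, then exec the container's main command
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec "$@"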