I am deploying a compose file onto the UCP via:
docker stack deploy -c docker-compose.yml custom-stack-name
In the end I want to deploy multiple compose files (each compose file describes the setup for a separate microservice) onto one shared docker network, e.g. appsnetwork:
version: "3"
services:
  service1:
    image: docker/service1
    networks:
      - appsnetwork
  customservice2:
    image: myprivaterepo/imageforcustomservice2
    networks:
      - appsnetwork
networks:
  appsnetwork:
The docker stack deploy command automatically creates a new network with a generated name like this: custom-stack-name_appsnetwork
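For illustration, with the stack name above, docker network ls would list the prefixed network (illustrative output, not copied from a real run):
$ docker network ls
NETWORK ID     NAME                            DRIVER    SCOPE
pqr123abc456   custom-stack-name_appsnetwork   overlay   swarm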
What are my options?
Try to create the network yourself first:
docker network create --driver=overlay --scope=swarm appsnetwork
After that, mark the network as external in your compose file:
version: "3"
services:
  service1:
    image: nginx
    networks:
      - appsnetwork
networks:
  appsnetwork:
    external: true
After that, run two copies of the stack:
docker stack deploy --compose-file docker-compose.yml stack1
docker stack deploy --compose-file docker-compose.yml stack2
docker inspect for both containers shows IPs in the same network:
$ docker inspect 369b610110a9
...
"Networks": {
    "appsnetwork": {
        "IPAMConfig": {
            "IPv4Address": "10.0.1.5"
        },
        "Links": null,
        "Aliases": [
            "369b610110a9"
        ],
...
$ docker inspect e8b8cc1a81ed
...
"Networks": {
    "appsnetwork": {
        "IPAMConfig": {
            "IPv4Address": "10.0.1.3"
        },
        "Links": null,
        "Aliases": [
            "e8b8cc1a81ed"
        ],
...
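To double-check from the network side, you can also list every container attached to the shared network across both stacks; a minimal sketch:
# Print the name of each container attached to the network
docker network inspect appsnetwork --format '{{range .Containers}}{{.Name}} {{end}}'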
I have two docker compose yml files. Both should use the same network. The first is the backend project with a database, the second an Angular frontend. I tried the following:
BACKEND
version: "3.7"
services:
  # ... MySQL and so on
  backend:
    container_name: backend
    build: .
    ports:
      - 3000:3000
    depends_on:
      - db
    networks:
      - explorer-docs-net
networks:
  explorer-docs-net:
    name: explorer-docs-net
    external: true
FRONTEND
version: "3.7"
services:
  frontend:
    build: .
    ports:
      - 4200:4200
    networks:
      - explorer-docs-net
networks:
  explorer-docs-net:
    name: explorer-docs-net
    external: true
Normally, when everything is in the same yml file, I can call an API in the frontend like this: http://backend/api/test (with Axios, for example), and backend will be resolved to the current container IP by docker. If I use 2 yml files, docker does not resolve the container name and the request fails with an error.
If I run docker network inspect explorer-docs-net, the result looks good:
....
"Containers": {
    "215cb01256d8e4d669064ed0b6026ce486fee027e999d2746655b090b75d2015": {
        "Name": "backend",
        "EndpointID": "0b4f7e022e38507300c049f43c880e5baf18ae993e19bb5c13892e9618688353",
        "MacAddress": "02:42:ac:1a:00:04",
        "IPv4Address": "172.26.0.4/16",
        "IPv6Address": ""
    },
    "240cfbe158f3024b90fd05ebc06b36e271bc8fc6af7d1991015ea63c0cb0fbec": {
        "Name": "frontend-frontend-1",
        "EndpointID": "c347862269921715fac67b4b7e10133c18ec89e8ea230f177930bf0335b53446",
        "MacAddress": "02:42:ac:1a:00:05",
        "IPv4Address": "172.26.0.5/16",
        "IPv6Address": ""
    },
....
So why does docker not resolve my container name when using multiple yml files with one shared network?
Your browser runs on the host system, not in a container, so the frontend container doesn't need a connection to the backend container at all. The browser loads the frontend and then opens the connection to the backend itself.
You therefore have to use a hostname of the host system in the frontend. Either use localhost or configure the hostname backend in /etc/hosts.
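A minimal sketch of the /etc/hosts option on a linux host, assuming the backend publishes port 3000 to the host as in the compose file above:
# Map the name "backend" to the host so http://backend:3000/api/test resolves
# in the browser; alternatively just call http://localhost:3000/api/test.
echo "127.0.0.1 backend" | sudo tee -a /etc/hosts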
I want to use a named volume inside my docker compose file which binds to a user-defined path on the host. It seems like it should be possible, since I have seen multiple examples online, one of them being How can I mount an absolute host path as a named volume in docker-compose?.
So I wanted to do the same. Please bear in mind that this is just an example; I have a use case where I want to use named volumes for DRYness.
Note: I am using Docker for Windows with WSL2
version: '3'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: D:\Some\path_in\my\host
      type: none
# volumes:
#   caddy_data:
#     external: true
#     name: caddyvol
This does not work, and every time I do docker compose up -d I get the error:
[+] Running 1/2
 - Volume "caddy_data"           Created   0.0s
 - Container project-example-1   Creating  0.9s
Error response from daemon: failed to mount local volume: mount D:\Some\path_in\my\host:/var/lib/docker/volumes/caddy_data/_data, flags: 0x1000: no such file or directory
But if I create the volume first using
docker volume create --opt o=bind --opt device=D:\Some\path_in\my\host --opt type=none caddyvol
and then use that volume in my docker compose file (see the commented-out section in the file above), it works perfectly.
I have even compared the two volumes and found no difference:
docker volume inspect caddy_data
[
    {
        "CreatedAt": "2021-12-12T18:19:20Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "ngrok-compose",
            "com.docker.compose.version": "2.2.1",
            "com.docker.compose.volume": "caddy_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/caddy_data/_data",
        "Name": "caddy_data",
        "Options": {
            "device": "D:\\Some\\path_in\\my\\host",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
docker volume inspect caddyvol
[
    {
        "CreatedAt": "2021-12-12T18:13:17Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/caddyvol/_data",
        "Name": "caddyvol",
        "Options": {
            "device": "D:\\Some\\path_in\\my\\host",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Any idea what's going wrong here?
I finally managed to figure it out, thanks to someone pointing out half of my mistake. When defining the volume in the compose file, the device should be in linux path format, without the : after the drive letter. Also, the version number should be fully specified. So, in the example case, it should be:
version: '3.8'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: d/Some/path_in/my/host
      type: none
But this still did not work, and it seemed to fail only in Docker Desktop for Windows. So I went into \\wsl.localhost\docker-desktop-data\version-pack-data\community\docker\volumes and checked the difference between the manually created volume and the volume generated from the compose file.
The only difference was in the MountDevice key in each volume's opts.json file. The manually created one had /run/desktop/mnt/host/ prefixed to the path provided. So I updated my compose file to:
version: '3.8'
services:
  example:
    image: caddy
    restart: unless-stopped
    volumes:
      - caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  caddy_data:
    name: caddy_data
    driver_opts:
      o: bind
      device: /run/desktop/mnt/host/d/Some/path_in/my/host
      type: none
And this worked!
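A quick way to verify the bind is working (a sketch, assuming the paths above; the file written inside the container should appear under D:\Some\path_in\my\host on the Windows side):
docker compose up -d
# Write through the container into the bound directory
docker compose exec example sh -c 'echo hello > /data/bindcheck.txt'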
Is it possible, in the docker compose file, to put the container's NetworkSettings information into an environment variable?
I have the following docker-compose.yml file:
version: '3.7'
services:
  sdt-proxy:
    image: myimage
    ports:
      - 32770-32780:8181
    environment:
      - SERVER_PORT=8181
It maps the internal port 8181 to a random port from the 32770-32780 range. When I run the container with docker-compose up, I can see the mapped port with docker inspect:
.....
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "83e6933aaf7b09b8ae1238d3dbb71bdd495c14927a5a509b332afc17cda6d854",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {
        "8181/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "32771"
            }
        ]
    },
...
So I know that the internal port 8181 (my application running inside the container) is mapped to the host port 32771.
I need to pass this information, the mapped host port 32771, to my application. Is it possible to do something like this in the docker-compose file?
version: '3.7'
services:
  sdt-proxy:
    image: myimage
    ports:
      - 32770-32780:8181
    environment:
      - SERVER_PORT=8181
      - MY_CONTAINER_PORT=<the mapped host port, here 32771>
I have a docker-compose networking issue. I created a shared stack with containers for Ubuntu, TensorFlow, and RStudio, which do an excellent job of sharing the volume between themselves and the host, but when it comes down to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it. My docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  # ubuntu (16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true
  # tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true
  # rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true
volumes:
  ubuntu:
  tensorflow:
  rstudio:
networks:
  default:
    driver: bridge
I am quite a docker novice, so I'm not sure about my network settings. That being said, docker inspect composetest_default (the default network created for the compose project) shows the containers are connected to the network. It is my understanding that in this kind of situation I should be able to freely call each service from any of the other containers, and vice versa:
"Containers": {
"83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
"Name": "composetest_ubuntu_1",
"EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
"MacAddress": "02:42:c0:a8:40:04",
"IPv4Address": "192.168.64.4/20",
"IPv6Address": ""
},
"8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
"Name": "composetest_rstudio_1",
"EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
"MacAddress": "02:42:c0:a8:40:03",
"IPv4Address": "192.168.64.3/20",
"IPv6Address": ""
},
"ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
"Name": "composetest_tensorflow_1",
"EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
"MacAddress": "02:42:c0:a8:40:02",
"IPv4Address": "192.168.64.2/20",
"IPv6Address": ""
}
Some pre-history: I had tried links: inside the docker-compose file but decided to change to networks: on account of some deprecation warnings. Was this the right way to go about it?
Docker version 18.09.1
Docker-compose version 1.17.1
but when it comes down to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it.
You cannot use linux programs that are in the bin path of one container from another container, but you can use any service that is designed to communicate over a network from any container in your docker compose file.
Bin path:
$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
So the programs in these paths that are not designed to communicate over a network are not usable from other containers; they need to be installed in each container where you need them, like python.
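A minimal sketch of the distinction, assuming the service names from the compose file above and that curl is installed in the ubuntu image:
# Works: Jupyter is a network service, reachable by service name on the shared network
docker-compose exec ubuntu curl http://tensorflow:8888
# Fails unless python is installed in the ubuntu image: binaries are not shared
docker-compose exec ubuntu python --version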
I ran two services individually:
docker-compose run --service-ports django /bin/bash
docker-compose run --service-ports other /bin/bash
Although I can see the ports in docker ps, one service can't talk to the exposed ports of the other.
$ docker ps
CONTAINER ID   IMAGE              COMMAND       CREATED          STATUS          PORTS                      NAMES
586e859afcab   littlehome_other   "/bin/bash"   12 minutes ago   Up 12 minutes   6379-6380/tcp, 9200/tcp    zibann-reservation_other_run_6
994dadb0ad7f   littlehome         "/bin/bash"   25 minutes ago   Up 25 minutes   0.0.0.0:10011->10011/tcp   zibann-reservation_django_run_3
docker-compose.yml has
services:
  django:
    restart: always
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: littlehome
    depends_on:
      - other
      - nginx
    env_file:
      - ./compose/.envs/production/postgres
    # command: /app/compose/production/django/uwsgi.sh
    ports:
      - "0.0.0.0:10011:10011"
  other:
    build:
      context: .
      dockerfile: ./compose/production/other/Dockerfile
    image: littlehome_other
    # depends_on:
    #   - postgres
    expose:
      - "9200"
      - "6379"
      - "6380"
    volumes:
      - ~/.bash_history:/root/.bash_history
I'm trying to let django talk to other:9200.
docker network inspect zibann-reservation_default shows
"Containers": {
"994dadb0ad7f59e6a9ecaddfffe46aba98209ff2ae9eb0542f89dee969a85a17": {
"Name": "zibann-reservation_django_run_3",
"EndpointID": "02bf3e21aba290b999d26f0e52f2cb6b3aa792a10c86e08065d0b299995480dd",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"ac5b1845f31f23bce0668ee7a427dc21aafbda0494cf67cc764df7b0898f5d23": {
"Name": "zibann-reservation_other_run_7",
"EndpointID": "b6cfcbfbf637d6521575c300d74fb483b47d6fa9e173aeb17f9c5bfc12341a37",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"fe83a3addb7365b2439870e887a4eae50477f1c3531c6af60a91a07bb1226922": {
"Name": "zibann-reservation_postgres_1",
"EndpointID": "bee7d0fcc80f94303306d849fbb29a3362d1d71ceb7d34773cd82ab08bc80172",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
The service is elasticsearch, and the client is trying to connect to it via http://other:9200/reviewmeta_index/_count. Would this work?
Make sure they are connected to the same network.
Check your networks with docker network ls (use a value from here to connect later).
Check which bridge your containers are using: docker network inspect bridge
Then connect to the right network: docker network connect default-bridge zibann-reservation_django_run_3 (default-bridge being the network you want to connect to; it can be anything, of course).
More detailed information can be found here: https://docs.docker.com/network/network-tutorial-standalone/#use-the-default-bridge-network and https://docs.docker.com/engine/reference/commandline/network_connect/#related-commands
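A quick way to check both points at once (a sketch, assuming curl is available in the django image):
# List the containers attached to the compose default network
docker network inspect zibann-reservation_default --format '{{range .Containers}}{{.Name}} {{end}}'
# From the django container, test name resolution and the port in one go
docker exec -it zibann-reservation_django_run_3 curl http://other:9200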