I have a docker-compose networking issue. I created a shared setup with containers for Ubuntu, TensorFlow, and RStudio, which share a volume between themselves and the host just fine, but when it comes to using the resources of one container from the terminal of another, I hit a wall. I can't do something as simple as calling python in the terminal of a container that doesn't have it. My docker-compose.yml:
# docker-compose.yml
version: '3'
services:
  # ubuntu (16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true

  # tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true

  # rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true

volumes:
  ubuntu:
  tensorflow:
  rstudio:

networks:
  default:
    driver: bridge
I am quite a Docker novice, so I'm not sure about my network settings. That being said, docker inspect composetest_default (the default network created for the compose project) shows that the containers are connected to the network. My understanding is that in this situation I should be able to freely call one service from each of the other containers and vice versa:
"Containers": {
"83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
"Name": "composetest_ubuntu_1",
"EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
"MacAddress": "02:42:c0:a8:40:04",
"IPv4Address": "192.168.64.4/20",
"IPv6Address": ""
},
"8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
"Name": "composetest_rstudio_1",
"EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
"MacAddress": "02:42:c0:a8:40:03",
"IPv4Address": "192.168.64.3/20",
"IPv6Address": ""
},
"ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
"Name": "composetest_tensorflow_1",
"EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
"MacAddress": "02:42:c0:a8:40:02",
"IPv4Address": "192.168.64.2/20",
"IPv6Address": ""
}
Some pre-history: I had tried links: inside the docker-compose file, but decided to change to networks: because of deprecation warnings. Was this the right way to go about it?
Docker version 18.09.1
Docker-compose version 1.17.1
but when it comes to using the resources of one container from the terminal of another, I hit a wall. I can't do something as simple as calling python in the terminal of a container that doesn't have it.
You cannot use Linux programs that are in the bin path of one container from another container, but you can reach any service that is designed to communicate over a network from any container in your docker-compose file.
Bin path:
$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
So programs in these paths that are not designed to communicate over a network are not usable from other containers; they need to be installed in each container where you need them, like python.
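For example, a rough sketch (assuming curl is available in the ubuntu_base image and that the tensorflow service really serves Jupyter on port 8888, as its port mapping suggests):
# from the host, open a shell in the ubuntu container
docker-compose exec ubuntu bash

# inside the ubuntu container:
curl http://tensorflow:8888   # works: Jupyter is a network service reachable by service name
python --version              # fails unless python is installed in this container too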
I have two docker-compose yml files. Both should use the same network. The first is the backend project with a database, the second an Angular frontend.
I tried the following:
BACKEND
version: "3.7"
services:
....... MySQL and so on
backend:
container_name: backend
build: .
ports:
- 3000:3000
depends_on:
- db
networks:
- explorer-docs-net
networks:
explorer-docs-net:
name: explorer-docs-net
external: true
FRONTEND
version: "3.7"
services:
frontend:
build: .
ports:
- 4200:4200
networks:
- explorer-docs-net
networks:
explorer-docs-net:
name: explorer-docs-net
external: true
Normally, when everything is in the same yml file, I can call an API in the frontend like this: http://backend/api/test (with Axios, for example) and backend will be resolved to its current container IP by Docker. If I use 2 yml files, Docker does not resolve the container name and an error occurs like this:
If I call docker network inspect explorer-docs-net, the result looks good:
....
"Containers": {
"215cb01256d8e4d669064ed0b6026ce486fee027e999d2746655b090b75d2015": {
"Name": "backend",
"EndpointID": "0b4f7e022e38507300c049f43c880e5baf18ae993e19bb5c13892e9618688353",
"MacAddress": "02:42:ac:1a:00:04",
"IPv4Address": "172.26.0.4/16",
"IPv6Address": ""
},
"240cfbe158f3024b90fd05ebc06b36e271bc8fc6af7d1991015ea63c0cb0fbec": {
"Name": "frontend-frontend-1",
"EndpointID": "c347862269921715fac67b4b7e10133c18ec89e8ea230f177930bf0335b53446",
"MacAddress": "02:42:ac:1a:00:05",
"IPv4Address": "172.26.0.5/16",
"IPv6Address": ""
},
....
So why does Docker not resolve my container name when using multiple yml files with one shared network?
Your browser runs on the host system, not in a container, so it is not the frontend container that talks to the backend container: the browser loads the frontend and then opens the connection to the backend itself.
You therefore have to use a hostname that is reachable from the host system in the frontend code. Either use localhost (with the published port) or configure the hostname backend in the host's /etc/hosts.
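A minimal sketch of the two options, using the published 3000:3000 port mapping and the /api/test path from the question (adjust to your real API):
# Option 1: call the backend through the port published on the host
curl http://localhost:3000/api/test

# Option 2: map the name "backend" to the host in /etc/hosts
# so the frontend can keep using the backend hostname (with the published port)
echo "127.0.0.1 backend" | sudo tee -a /etc/hosts
curl http://backend:3000/api/test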
I am trying to connect these two services (among the other services in the compose file) over TCP:
autoserver:
  image: 19mikel95/pymodmikel:autoserversynchub
  container_name: autoserver
  expose:
    - 5020
  restart: unless-stopped
  networks:
    - monitor-net

clientperf:
  image: 19mikel95/pymodmikel:reloadcomp
  container_name: clientperf
  restart: unless-stopped
  networks:
    - monitor-net
  depends_on:
    - autoserver
Where monitor-net is a bridge network:
version: '2.1'
networks:
  monitor-net:
    driver: bridge
So, in a Python file executed by the client, I use the pymodbus library to run this:
host = 'localhost'
client = ModbusTcpClient(host, port=5020)
Obviously I have problems with that 'localhost'. When I ran each container manually I used docker run --network host, but now that I am forced to use a bridge network I don't know what to put instead of localhost. I have tried "autoserver" and "172.18.0.5", which is the IP given to autoserver by the Docker network:
"57c6e2c366e81f59636a21b61e7935f68e6c700787b57eba572543e76f35f1ce": {
"Name": "autoserver",
"EndpointID": "56e586b875e6d2c17779e236b2448825910d330cc502dec96e2c3ec3771e5bf3",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
And other combinations, but I don't know how to actually make that connection work.
If I try 'autoserver' as suggested, it just can't connect:
File "/usr/lib/python3/dist-packages/pymodbus/client/sync.py", line
107, in execute
raise ConnectionException("Failed to connect[%s]" % (self.str())) pymodbus.exceptions.ConnectionException: Modbus
Error: [Connection] Failed to
connect[ModbusTcpClient(autoserver:5020)] [ERROR/MainProcess] failed
to run test successfully
I need to curl my API from another container.
Container 1 is called nginx
Container 2 is called fpm
I need to be able to bash into my fpm container and curl the nginx container.
Config:
# docker-compose.yaml
services:
  nginx:
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    volumes:
      - ./docker/nginx/conf/dev/api.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    links:
      - fpm
  fpm:
    build:
      context: .
      dockerfile: ./docker/fpm/Dockerfile
    volumes:
      - .:/var/www/html
      - ./docker/fpm/conf/dev/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
      - ./docker/fpm/conf/dev/api.ini:/usr/local/etc/php/conf.d/api.ini
    env_file:
      - ./docker/mysql/mysql.env
      - ./docker/fpm/conf/dev/fpm.env
    links:
      - mysql
    shm_size: 256M
    extra_hosts:
      - myapi.docker:nginx
My initial thought was to slap it in the extra_hosts option like:
extra_hosts:
  - myapi.docker:nginx
But docker-compose up fails with:
ERROR: for apiwip_fpm_1 Cannot create container for service fpm: invalid IP address in add-host: "nginx"
I have seen some examples of people using Docker's network configuration, but it seems over the top just to resolve an address.
How can I resolve/eval the IP address of the container rather than just passing it literally?
Add network aliases in the default network.
version: "3.7"
services:
nginx:
# ...
networks:
default:
aliases:
- example.local
browser-sync:
# ...
depends_on:
- nginx
command: "browser-sync start --proxy http://example.local"
services:
  nginx:
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    volumes:
      - ./docker/nginx/conf/dev/api.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    networks:
      my_network:
        aliases:
          - myapi.docker
          - docker_my_network
  fpm:
    build:
      context: .
      dockerfile: ./docker/fpm/Dockerfile
    volumes:
      - .:/var/www/html
      - ./docker/fpm/conf/dev/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini
      - ./docker/fpm/conf/dev/api.ini:/usr/local/etc/php/conf.d/api.ini
    env_file:
      - ./docker/mysql/mysql.env
      - ./docker/fpm/conf/dev/fpm.env
    links:
      - mysql
    shm_size: 256M
    networks:
      - my_network

networks:
  my_network:
    driver: bridge
Add a custom network and attach the containers to that network.
By default, if you curl the nginx container from fpm you will effectively curl localhost, so we need to add an alias that matches the server name in your nginx configuration.
With this solution you can curl myapi.docker from the fpm container.
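For example, a quick check (a sketch; it assumes curl is installed in the fpm image):
# run curl inside the fpm container against the nginx alias
docker-compose exec fpm curl -i http://myapi.docker/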
Edit:
Jack_Hu is right, I removed extra_hosts. The network alias is enough.
I solved this kind of issue by using links instead of extra_hosts.
In that case, just setting a link alias does the trick.
In the fpm service, set:
links:
  - nginx:myapi.docker
See the docker-compose links documentation; the alias can be the domain name that appears in your code.
From the Docker documentation:
https://docs.docker.com/compose/networking/
You should be able to use your service name (in this case nginx) as a hostname from within your Docker network. So you can bash into your fpm container and call curl nginx, and Docker will resolve it for you. Hope this helps.
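A short sketch of that flow (assuming bash and curl exist in the fpm image):
docker-compose exec fpm bash
# inside the fpm container, Docker's embedded DNS resolves the service name
curl -i http://nginx/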
A quick fix is to point it at the dynamic IP generated by Docker. Be aware that this IP may change, though.
Find your networks:
docker network ls

NETWORK ID          NAME                 DRIVER              SCOPE
72fef1ce7a50        apiwip_default       bridge              local    <-- here
cdf9d5b885f6        bridge               bridge              local
2f4f1e7038fa        host                 host                local
a96919eea0f7        mgadmin_default      bridge              local
30386c421b70        none                 null                local
5457b953fadc        website2_default     bridge              local
1450ebeb9856        anotherapi_default   bridge              local
Copy the NETWORK ID
docker network inspect 72fef1ce7a50
"Containers": {
"345026453e1390528b2bb7eac4c66160750081d78a77ac152a912f3de5fd912c": {
"Name": "apiwip_nginx_1",
"EndpointID": "6504a3e4714a6ba599ec882b21f956bfd1b1b7d19b8e04772abaa89c02b1a686",
"MacAddress": "02:42:ac:14:00:05",
"IPv4Address": "172.20.0.5/16", <-- CIDR block
"IPv6Address": ""
},
"ea89d3089193825209d0e23c8105312e3df7ad1bea6b915ec9f9325dfd11736c": {
"Name": "apiwip_fpm_1",
"EndpointID": "dc4ecc7f0706c0586cc39dbf8a05abc9cc70784f2d44c90de2e8dbdc9148a294",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},
Add the IP address from that CIDR block to the extra_hosts option:
extra_hosts:
  - myapi.docker:172.20.0.5
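If you want to grab that IP without reading the JSON by hand, docker inspect accepts a Go-template format string; a sketch using the container name from the output above (yours may differ):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' apiwip_nginx_1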
I am deploying a compose file onto the UCP via:
docker stack deploy -c docker-compose.yml custom-stack-name
In the end I want to deploy multiple compose files (each compose file describes the setup for a separate microservice) onto one Docker network, e.g. appsnetwork.
version: "3"
services:
service1:
image: docker/service1
networks:
- appsnetwork
customservice2:
image: myprivaterepo/imageforcustomservice2
networks:
- appsnetwork
networks:
appsnetwork:
The docker stack deploy command automatically creates a new network with a generated name like this: custom-stack-name_appsnetwork
What are my options?
Try creating the network yourself first:
docker network create --driver=overlay --scope=swarm appsnetwork
After that, make the network external in your compose file:
version: "3"
services:
service1:
image: nginx
networks:
- appsnetwork
networks:
appsnetwork:
external: true
After that, running two copies of the stack:
docker stack deploy --compose-file docker-compose.yml stack1
docker stack deploy --compose-file docker-compose.yml stack2
docker inspect for both containers shows IPs in the same network:
$ docker inspect 369b610110a9
...
"Networks": {
"appsnetwork": {
"IPAMConfig": {
"IPv4Address": "10.0.1.5"
},
"Links": null,
"Aliases": [
"369b610110a9"
],
$ docker inspect e8b8cc1a81ed
"Networks": {
"appsnetwork": {
"IPAMConfig": {
"IPv4Address": "10.0.1.3"
},
"Links": null,
"Aliases": [
"e8b8cc1a81ed"
],
I have a docker-compose setup with php, mysql and so on. After a few days I cannot bring the containers down: everything stops except for mysql, and it always gives me the following error:
ERROR: network docker_default has active endpoints
This is my docker-compose.yml:
version: '2'
services:
  php:
    build: php-docker/.
    container_name: php
    ports:
      - "9000:9000"
    volumes:
      - /var/www/:/var/www/
    links:
      - mysql:mysql
    restart: always

  nginx:
    build: nginx-docker/.
    container_name: nginx
    links:
      - php
      - mysql:mysql
    environment:
      WORDPRESS_DB_HOST: mysql:3306
    ports:
      - "80:80"
    volumes:
      - /var/log/nginx:/var/log/nginx
      - /var/www/:/var/www/
      - /var/logs/nginx:/var/logs/nginx
      - /var/config/nginx/certs:/etc/nginx/certs
      - /var/config/nginx/sites-enabled:/etc/nginx/sites-available
    restart: always

  mysql:
    build: mysql-docker/.
    container_name: mysql
    volumes:
      - /var/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: pw
      MYSQL_USER: florian
      MYSQL_PASSWORD: pw
      MYSQL_DATABASE: database
    restart: always

  phpmyadmin:
    build: phpmyadmin/.
    links:
      - mysql:db
    ports:
      - 1234:80
    container_name: phpmyadmin
    environment:
      PMA_ARBITRARY: 1
      PMA_USERNAME: florian
      PMA_PASSWORD: pw
      MYSQL_ROOT_PASSWORD: pw
    restart: always
docker network inspect docker_default gives me:
[
    {
        "Name": "docker_default",
        "Id": "1ed93da1a82efdab065e3a833067615e2d8b76336968a2591584af5874f07622",
        "Created": "2017-03-08T07:21:34.969179141Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "85985605f1c0c20e5ee9fedc95800327f782beafc0049f51e645146d2e954b7d": {
                "Name": "mysql",
                "EndpointID": "84fb19cd428f8b0ba764b396362727d9809cd1cfea536e648bfc4752c5cb6b27",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
UPDATE
It seems that docker rm mysql -f stopped the mysql container, but the network was still there.
I disconnected the container from the network with docker network disconnect -f docker_default mysql, but I'm quite interested in how I got into this situation. Any ideas?
I resolved a similar problem after renaming services in my docker-compose.yml file by running this to bring the containers down:
docker-compose down --remove-orphans
I'm guessing you edited the docker-compose file while the stack was running...?
Sometimes, if you edit the docker-compose file before doing a docker-compose down, there is a mismatch in what docker-compose will attempt to stop. First run docker rm -f 8598560 to remove the currently running container. From there, make sure you do a docker-compose down before editing the file. Once the container is gone, docker-compose up should work.
You need to disconnect stale endpoint(s) from the network. First, get the endpoint names with
docker network inspect <network>
You can find your endpoints in the output JSON: Containers -> Name. Now, simply force disconnect them with:
docker network disconnect -f <network> <endpoint>
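A sketch of the whole sequence, using the docker_default network and the mysql endpoint from the question above (substitute your own names):
# list the containers still attached to the network
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' docker_default

# force-disconnect the stale endpoint, then the network can be removed
docker network disconnect -f docker_default mysql
docker network rm docker_default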
Unfortunately, none of the above worked for me. Restarting the Docker service solved the problem.
I happened to run into this error message because I had two networks with the same name (copy/paste mistake).
What I did to fix it:
docker-compose down - ignore errors
docker network ls - note if any container is using the network and stop it if necessary
docker network prune to remove all dangling networks, although you may want to just docker network rm <network name>
rename the second network to a unique name
docker-compose up
This worked for me: systemctl restart docker
service docker restart
then remove the network normally:
docker network rm [network name]
This issue happens rarely, in my case while running dozens of services in parallel on the same instance, but docker-compose could not complete the deployment/recreation of those containers because of an "active network interface" that was bound and stuck.
I also tried flags such as:
--force-recreate (recreate containers even if their configuration and image haven't changed)
or
--remove-orphans (remove containers for services not defined in the Compose file)
but it did not help.
Eventually I came to the conclusion that restarting the Docker engine is the last resort: it closes any network connections associated with the Docker daemon (both between containers and to the API), and this solved the problem (sudo service docker restart).