Docker compose, can't talk to exposed port when run individually? - docker

I ran two services individually
docker-compose run --service-ports django /bin/bash
docker-compose run --service-ports other /bin/bash
Although I can see the ports in docker ps, one service can't talk to the exposed ports of the other.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
586e859afcab littlehome_other "/bin/bash" 12 minutes ago Up 12 minutes 6379-6380/tcp, 9200/tcp zibann-reservation_other_run_6
994dadb0ad7f littlehome "/bin/bash" 25 minutes ago Up 25 minutes 0.0.0.0:10011->10011/tcp zibann-reservation_django_run_3
docker-compose.yml has
services:
  django:
    restart: always
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: littlehome
    depends_on:
      - other
      - nginx
    env_file:
      - ./compose/.envs/production/postgres
    # command: /app/compose/production/django/uwsgi.sh
    ports:
      - "0.0.0.0:10011:10011"
  other:
    build:
      context: .
      dockerfile: ./compose/production/other/Dockerfile
    image: littlehome_other
    # depends_on:
    #   - postgres
    expose:
      - "9200"
      - "6379"
      - "6380"
    volumes:
      - ~/.bash_history:/root/.bash_history
I'm trying to let django talk to other:9200
docker network inspect zibann-reservation_default shows
"Containers": {
"994dadb0ad7f59e6a9ecaddfffe46aba98209ff2ae9eb0542f89dee969a85a17": {
"Name": "zibann-reservation_django_run_3",
"EndpointID": "02bf3e21aba290b999d26f0e52f2cb6b3aa792a10c86e08065d0b299995480dd",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"ac5b1845f31f23bce0668ee7a427dc21aafbda0494cf67cc764df7b0898f5d23": {
"Name": "zibann-reservation_other_run_7",
"EndpointID": "b6cfcbfbf637d6521575c300d74fb483b47d6fa9e173aeb17f9c5bfc12341a37",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"fe83a3addb7365b2439870e887a4eae50477f1c3531c6af60a91a07bb1226922": {
"Name": "zibann-reservation_postgres_1",
"EndpointID": "bee7d0fcc80f94303306d849fbb29a3362d1d71ceb7d34773cd82ab08bc80172",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
The service is Elasticsearch, and the client is trying to connect to it via 'http://other:9200/reviewmeta_index/_count'. Would this work?

Make sure they are connected to the same network.
Check your networks with docker network ls (use a value from here to connect later).
Check which bridge your containers are using: docker network inspect bridge
Then connect to the right network: docker network connect default-bridge zibann-reservation_django_run_3 (default-bridge being the network you want to connect to; the name can be anything, of course).
More detailed information can be found here: https://docs.docker.com/network/network-tutorial-standalone/#use-the-default-bridge-network and https://docs.docker.com/engine/reference/commandline/network_connect/#related-commands
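As a hedged alternative (a sketch, not taken from the question's actual files): declaring an explicit named network in docker-compose.yml makes the shared network predictable, so containers started with docker-compose run attach to it by name. The network name appnet here is illustrative:

```yaml
# Sketch: attach both services to an explicitly named network so that
# containers started with `docker-compose run` share it.
services:
  django:
    image: littlehome
    networks:
      - appnet
  other:
    image: littlehome_other
    networks:
      - appnet
networks:
  appnet:
```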

Related

Docker Compose with multiple .yml files and the same shared network not resolving container names to IP addresses

I have two docker compose yml files. Both should use the same network. The first is the backend project with a database, the second an Angular frontend.
I tried the following:
BACKEND
version: "3.7"
services:
  ....... MySQL and so on
  backend:
    container_name: backend
    build: .
    ports:
      - 3000:3000
    depends_on:
      - db
    networks:
      - explorer-docs-net
networks:
  explorer-docs-net:
    name: explorer-docs-net
    external: true
FRONTEND
version: "3.7"
services:
  frontend:
    build: .
    ports:
      - 4200:4200
    networks:
      - explorer-docs-net
networks:
  explorer-docs-net:
    name: explorer-docs-net
    external: true
Normally, when everything is in the same yml file, I can call an API in the frontend like this: http://backend/api/test (with Axios as an example) and backend will be resolved to its current container IP by Docker. If I use 2 yml files, Docker does not resolve the container name and an error occurs.
If I call docker network inspect explorer-docs-net the result looks good:
....
"Containers": {
"215cb01256d8e4d669064ed0b6026ce486fee027e999d2746655b090b75d2015": {
"Name": "backend",
"EndpointID": "0b4f7e022e38507300c049f43c880e5baf18ae993e19bb5c13892e9618688353",
"MacAddress": "02:42:ac:1a:00:04",
"IPv4Address": "172.26.0.4/16",
"IPv6Address": ""
},
"240cfbe158f3024b90fd05ebc06b36e271bc8fc6af7d1991015ea63c0cb0fbec": {
"Name": "frontend-frontend-1",
"EndpointID": "c347862269921715fac67b4b7e10133c18ec89e8ea230f177930bf0335b53446",
"MacAddress": "02:42:ac:1a:00:05",
"IPv4Address": "172.26.0.5/16",
"IPv6Address": ""
},
....
So why does Docker not resolve my container name when using multiple yml files with one shared network?
Your browser runs on the host system, not in a container. The frontend container doesn't open a connection to the backend container: the browser loads the frontend and then opens the connection to the backend itself.
You have to use a hostname of the host system in the frontend. Either use localhost or configure the hostname backend in /etc/hosts.
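For the /etc/hosts option, a sketch of the entry (assuming the backend publishes its port on the host via ports: - 3000:3000):

```text
# /etc/hosts on the HOST machine (not inside a container):
127.0.0.1   backend
# The browser can then resolve http://backend:3000, which reaches the
# port published by the backend compose file.
```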

(Google Compute VM) How can I get containers in docker-compose to connect

I am trying to get angular and nginx containers in docker-compose to speak to each other on a google-compute vm instance (Debian OS), without success. Here is my docker-compose.yml:
version: '3'
services:
  angular:
    container_name: angular
    hostname: angular
    build: project-frontend
    ports:
      - "80:80"
    #network_mode: host
  nodejs:
    container_name: nodejs
    hostname: nodejs
    build: project-backend
    ports:
      - "8080:8080"
    # network_mode: host
I have read the docs and numerous SO posts such as this, and understand that angular should be trying to find node at http://nodejs:8080/, but I'm getting:
POST http://nodejs:8080/login/ net::ERR_NAME_NOT_RESOLVED
When I do docker network inspect I see this:
[
{
"Name": "project_default",
"Id": "2d1665ce09f712457e706b83f4ae1139a846f9ce26163e07ee7e5357d4b28cd3",
"Created": "2020-05-22T11:25:22.441164515Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.28.0.0/16",
"Gateway": "172.28.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"b0fceb913ef14b0b867ae01ce4852ad4a0827c06194102082c0d4b18d7b80464": {
"Name": "angular",
"EndpointID": "83fba04c3cf6f7af743cae87116730805d030040f286706029da1c7a687b199c",
"MacAddress": "02:42:ac:1c:00:03",
"IPv4Address": "172.28.0.3/16",
"IPv6Address": ""
},
"c181cd4b0e9ccdd793c4e1fc49067ef4880cda91228a10b900899470cdd1a138": {
"Name": "nodejs",
"EndpointID": "6da8ad2a83e2809f68c310d8f34e3feb2f4c19b40f701b3b00b8fb9e6f231906",
"MacAddress": "02:42:ac:1c:00:02",
"IPv4Address": "172.28.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
I'm not sure what other steps can help me to debug this.
Thanks.
EDIT:
Thanks to this post, I successfully pinged the nodejs container from the angular container:
$ sudo docker exec -it angular ping nodejs
PING nodejs (172.28.0.2): 56 data bytes
64 bytes from 172.28.0.2: seq=0 ttl=64 time=0.079 ms
64 bytes from 172.28.0.2: seq=1 ttl=64 time=0.105 ms
I also tested the port on the nodejs container, and it seems to be there:
$ sudo docker port nodejs
8080/tcp -> 0.0.0.0:8080
EDIT:
I'm starting to think this is a Google Compute VM question, as I have it running on my local Linux box without any problem...have updated the title accordingly.
You need to make sure they are on the same network. You can do that by adding the following lines at the end of your compose file:
networks:
  default:
    external:
      name: project.local
Note, you have to create the project.local network yourself. When you run docker-compose up it will tell you how to do it.
As @ShipluMokaddim says, containers must be on the same network or they can't reach each other. What I recommend is creating a new network:
version: '3'
services:
  angular:
    container_name: angular
    build: project-frontend
    ports:
      - "80:80"
    networks:
      - mynetwork
  nodejs:
    container_name: nodejs
    build: project-backend
    ports:
      - "8080:8080"
    networks:
      - mynetwork
networks:
  mynetwork:
With this you will be fine.

containers not communicating in network in docker-compose

I have a docker-compose networking issue. I create my shared space with containers for Ubuntu, TensorFlow, and RStudio, which do an excellent job of sharing the volume between them and the host. But when it comes to using the resources of one container inside the terminal of another, I hit a wall: I can't do as little as calling python in the terminal of a container that doesn't have it. My docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  #ubuntu(16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true
  #tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true
  #rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true
volumes:
  ubuntu:
  tensorflow:
  rstudio:
networks:
  default:
    driver: bridge
I am quite a Docker novice, so I'm not sure about my network settings. That being said, docker inspect composetest_default (the default network created for the compose) shows the containers are connected to the network. It is my understanding that in this kind of situation I should be able to freely call one service from each of the other containers and vice versa:
"Containers": {
"83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
"Name": "composetest_ubuntu_1",
"EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
"MacAddress": "02:42:c0:a8:40:04",
"IPv4Address": "192.168.64.4/20",
"IPv6Address": ""
},
"8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
"Name": "composetest_rstudio_1",
"EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
"MacAddress": "02:42:c0:a8:40:03",
"IPv4Address": "192.168.64.3/20",
"IPv6Address": ""
},
"ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
"Name": "composetest_tensorflow_1",
"EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
"MacAddress": "02:42:c0:a8:40:02",
"IPv4Address": "192.168.64.2/20",
"IPv6Address": ""
}
A pre-history: I had tried with links: inside the docker-compose file but decided to change to networks: on account of some deprecation warnings. Was this the right way to go about it?
Docker version 18.09.1
Docker-compose version 1.17.1
but when it comes down to using the resources of the one container inside the terminal of each other one, I hit a wall. I can't do as little as calling python in the terminal of the container that doesn't have it.
You cannot use Linux programs that are in the bin path of one container from another container, but you can use any service that is designed to communicate over a network from any container in your docker-compose file.
Bin path:
$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
So programs in these paths that are not designed to communicate over a network are not usable from other containers and need to be installed in each container where you need them, like python.
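To illustrate the distinction, here is a minimal sketch (plain Python standard library, not taken from the question's setup): functionality exposed as a small HTTP service is reachable from any container on the network, whereas the python binary itself is not. The service runs on loopback here; in Compose, the client container would use a URL like http://tensorflow:8888/ instead.

```python
# Sketch: a binary in one container's PATH is invisible to other containers,
# but anything served over the network is reachable by service name.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):  # keep the example output quiet
        pass

# Port 0 asks the OS for a free port; in Compose the port would be fixed
# and reachable via the service name on the shared network.
server = HTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
reply = urllib.request.urlopen(url).read().decode()
print(reply)  # pong
server.shutdown()
```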

How to disable network generation on docker stack deploy

I am deploying a compose file onto the UCP via:
docker stack deploy -c docker-compose.yml custom-stack-name
In the end I want to deploy multiple compose files (each compose file describes the setup for a separate microservice) onto one docker network e.g. appsnetwork
version: "3"
services:
  service1:
    image: docker/service1
    networks:
      - appsnetwork
  customservice2:
    image: myprivaterepo/imageforcustomservice2
    networks:
      - appsnetwork
networks:
  appsnetwork:
The docker stack deploy command automatically creates a new network with a generated name like this: custom-stack-name_appsnetwork
What are my options?
Try to create the network yourself first
docker network create --driver=overlay --scope=swarm appsnetwork
After that, make the network external in your compose file:
version: "3"
services:
  service1:
    image: nginx
    networks:
      - appsnetwork
networks:
  appsnetwork:
    external: true
After that, run two copies of the stack:
docker stack deploy --compose-file docker-compose.yml stack1
docker stack deploy --compose-file docker-compose.yml stack2
docker inspect for both shows IPs in the same network:
$ docker inspect 369b610110a9
...
"Networks": {
"appsnetwork": {
"IPAMConfig": {
"IPv4Address": "10.0.1.5"
},
"Links": null,
"Aliases": [
"369b610110a9"
],
$ docker inspect e8b8cc1a81ed
"Networks": {
"appsnetwork": {
"IPAMConfig": {
"IPv4Address": "10.0.1.3"
},
"Links": null,
"Aliases": [
"e8b8cc1a81ed"
],

docker-compose down default_network error

I have a docker-compose setup with php, mysql and so on. After a few days, I cannot bring them down, as everything stops except mysql. It always gives me the following error:
ERROR: network docker_default has active endpoints
this is my docker-compose.yml
version: '2'
services:
  php:
    build: php-docker/.
    container_name: php
    ports:
      - "9000:9000"
    volumes:
      - /var/www/:/var/www/
    links:
      - mysql:mysql
    restart: always
  nginx:
    build: nginx-docker/.
    container_name: nginx
    links:
      - php
      - mysql:mysql
    environment:
      WORDPRESS_DB_HOST: mysql:3306
    ports:
      - "80:80"
    volumes:
      - /var/log/nginx:/var/log/nginx
      - /var/www/:/var/www/
      - /var/logs/nginx:/var/logs/nginx
      - /var/config/nginx/certs:/etc/nginx/certs
      - /var/config/nginx/sites-enabled:/etc/nginx/sites-available
    restart: always
  mysql:
    build: mysql-docker/.
    container_name: mysql
    volumes:
      - /var/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: pw
      MYSQL_USER: florian
      MYSQL_PASSWORD: pw
      MYSQL_DATABASE: database
    restart: always
  phpmyadmin:
    build: phpmyadmin/.
    links:
      - mysql:db
    ports:
      - 1234:80
    container_name: phpmyadmin
    environment:
      PMA_ARBITRARY: 1
      PMA_USERNAME: florian
      PMA_PASSWORD: pw
      MYSQL_ROOT_PASSWORD: pw
    restart: always
docker network inspect docker_default gives me:
[
{
"Name": "docker_default",
"Id": "1ed93da1a82efdab065e3a833067615e2d8b76336968a2591584af5874f07622",
"Created": "2017-03-08T07:21:34.969179141Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"85985605f1c0c20e5ee9fedc95800327f782beafc0049f51e645146d2e954b7d": {
"Name": "mysql",
"EndpointID": "84fb19cd428f8b0ba764b396362727d9809cd1cfea536e648bfc4752c5cb6b27",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
UPDATE
Seems that docker rm mysql -f stopped the mysql container, but the network is still running.
I removed the network with docker network disconnect -f docker_default mysql. But I'm pretty interested in how I got into this situation. Any ideas?
I resolved a similar problem by running the following after renaming services in my docker-compose.yml file, before stopping the containers:
docker-compose down --remove-orphans
I'm guessing you edited the docker-compose file while it was running...?
Sometimes if you edit the docker-compose file before doing a docker-compose down, there will be a mismatch in what docker-compose attempts to stop. First run docker rm 8598560 to remove the currently running container. From there, make sure you do a docker-compose down before editing the file. Once you stop the container, docker-compose up should work.
You need to disconnect stale endpoint(s) from the network. First, get the endpoint names with
docker network inspect <network>
You can find your endpoints in the output JSON: Containers -> Name. Now, simply force disconnect them with:
docker network disconnect -f <network> <endpoint>
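Both steps can be combined in a small loop. A sketch, assuming the network is named docker_default as in this question (the -f Go template extracts the attached container names from the inspect output):

```shell
# Force-disconnect every endpoint still attached, then remove the network.
for name in $(docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' docker_default); do
  docker network disconnect -f docker_default "$name"
done
docker network rm docker_default
```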
Unfortunately none of the above worked for me. Restarting the Docker service solved the problem.
I happened to run into this error message because I had two networks with the same name (copy/paste mistake).
What I did to fix it:
docker-compose down - ignore errors
docker network list - note if any container is using the network and stop it if necessary
docker network prune to remove all dangling networks, although you may want to just docker network rm <network name>
rename the second network to a unique name
docker-compose up
This worked for me: systemctl restart docker
service docker restart
then remove the network normally
docker network rm [network name]
This issue happens rarely, when running exceptionally many (dozens of) services in parallel on the same instance; docker-compose could not complete the deployment / recreation of these containers because of an "active network" endpoint that was bound and stuck.
I also tried flags such as:
--force-recreate (recreate containers even if their configuration and image haven't changed)
or
--remove-orphans (remove containers for services not defined in the Compose file)
but it did not help.
Eventually I came to the conclusion that restarting the Docker engine is the last resort: Docker closes any network connections associated with the Docker daemon (connections between containers and to the API). This action solved the problem (sudo service docker restart).
