Having networking issues with docker-compose - docker

I'm trying to use the following config:
docker-compose.yml
version: "3"
services:
web:
build: .
ports:
- "3000:3000"
depends_on:
- db
db:
image: onjin/alpine-postgres
environment:
POSTGRES_PASSWORD: password
The other file is the Dockerfile:
FROM alpine
RUN apk update && apk add --no-cache postgresql-client
COPY Bot/ /Bot
ENV PGHOST=db PGPASSWORD=password
RUN psql -h "$PGHOST" -f /Bot/test/database_schema.sql
I have no idea why I always get this error while running "docker-compose up":
psql: could not translate host name "db" to address: Name does not
resolve
Can anyone help me with debugging this? It seems like the "db" hostname is not being propagated inside the Docker environment, but I don't know the reason for that.

The issue you are seeing is related to the fact that docker-compose runs services in the same order as they are defined in the YAML file. So basically, at the moment you run your web service, the db service does not exist yet, so its hostname is not resolvable.
If you change the order in the docker-compose.yaml:
version: "2"
services:
db:
image: onjin/alpine-postgres
environment:
POSTGRES_PASSWORD: password
web:
build: .
ports:
- "3000:3000"
depends_on:
- "db"
tty: true
and run docker-compose up -d, you won't see the error anymore and the service will be up:
sudo docker-compose ps
Name     Command                          State   Ports
-------------------------------------------------------------------------------
db_1     /docker-entrypoint.sh postgres   Up      5432/tcp
web_1    /bin/sh                          Up      0.0.0.0:3000->3000/tcp
and hostname is correctly resolvable:
sudo docker-compose run web "ping" "db"
PING db (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.101 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.097 ms
64 bytes from 172.18.0.2: seq=3 ttl=64 time=0.106 ms
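(An additional safeguard of my own, not part of the original answer: with Compose file format 2.1+ you can also make web wait until db actually accepts connections, using a healthcheck together with a depends_on condition. The pg_isready probe below assumes the onjin/alpine-postgres image ships the standard Postgres client tools.)
version: "2.1"
services:
  db:
    image: onjin/alpine-postgres
    environment:
      POSTGRES_PASSWORD: password
    healthcheck:
      # probe Postgres until it accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy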

Related

Isolating certain Docker containers from one another

I have a couple of containers sitting behind an nginx reverse proxy and while nginx should and can talk to all of the containers, they shouldn't be able to talk to one another. The primary reason for wanting this is that the containers are independent of one another and in the event that one of them gets compromised, at least it won't allow access to the other ones.
Using my limited understanding of networking (even more so with Docker), I naively assumed that creating a separate network for each container and assigning all networks to the nginx container would do the job, though that doesn't seem to work. Here's what my docker-compose.yml looks like based on that attempt:
services:
  nginx:
    networks:
      - net1
      - net2
      - net3
  container1:
    networks:
      - net1
  container2:
    networks:
      - net2
  container3:
    networks:
      - net3
networks:
  net1:
    name: net1
  net2:
    name: net2
  net3:
    name: net3
Is what I'm trying to do possible? Is it even worth going through the effort of doing it for the sake of security?
Your idea really works; see the following example:
docker-compose.yaml:
version: "3.7"
services:
nginx:
image: debian:10
tty: true
stdin_open: true
networks:
- net1
- net2
container1:
image: debian:10
tty: true
stdin_open: true
networks:
- net1
container2:
image: debian:10
tty: true
stdin_open: true
networks:
- net2
networks:
net1:
name: net1
net2:
name: net2
Verify:
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container1
PING container1 (172.19.0.2) 56(84) bytes of data.
64 bytes from 20221014_container1_1.net1 (172.19.0.2): icmp_seq=1 ttl=64 time=0.083 ms
--- container1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container2
PING container2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 20221014_container2_1.net2 (172.20.0.2): icmp_seq=1 ttl=64 time=0.062 ms
--- container2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
root@pie:~/20221014# docker-compose exec container1 ping -c 1 container2
ping: container2: Temporary failure in name resolution
root@pie:~/20221014# docker-compose exec container2 ping -c 1 container1
ping: container1: Temporary failure in name resolution
Also, the networking in Compose documentation demos an example with the same scenario as yours:
The proxy service is isolated from the db service, because they do not share a network in common - only app can talk to both.
services:
  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend
networks:
  frontend:
    # Use a custom driver
    driver: custom-driver-1
  backend:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: "1"
      bar: "2"

Connect to the internet from within a docker container with docker compose on Mac

I have a fairly simple docker-compose file, as follows:
version: "3.8"
services:
synchronisation:
build: .
command: 'python3 manage.py runserver 0.0.0.0:8000'
networks:
- syncnet
ports:
- "8000:8000"
- "8008:8008"
volumes:
- .:/var/lib/django
- packages:/usr/local/lib/python3.6/site-packages
neo4j:
image: neo4j:3.5.3
networks:
- syncnet
ports:
- "7687:7687"
- "7474:7474"
- "7473:7473"
environment:
NEO4J_AUTH: none
volumes:
packages:
driver: local
driver_opts:
type: none
device: <somefolder>
o: bind
networks:
syncnet:
name: sync_network
Now from the synchronisation app I need to perform some requests to the neo4j app and to the 'outside world'. The connection to neo4j is successful; however, the connections to the outside world seem to fail. In particular, if I get into the synchronisation container and run
ping google.com
I get the result
PING google.com (172.217.17.46) 56(84) bytes of data.
^C
--- google.com ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6149ms
I have already set
sudo sysctl -w net.inet.ip.forwarding=1
How can I get access to the 'outside world' to work? For context, I am working on a Mac running macOS 10.15.7.
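(A debugging note of my own, since the output shows that google.com does resolve to an IP: it can help to check whether TCP traffic leaves the container even though ICMP echo does not, for example with curl inside the synchronisation container. This assumes curl, or an equivalent such as wget, is installed in the image.)
# inside the synchronisation container
curl -sS -o /dev/null -w '%{http_code}\n' https://www.google.com
# getting an HTTP status code here while ping still fails suggests only ICMP is affected, not general egress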

Docker-Compose - Container Network - Name or Service Not Known

I am having some issues getting my containers to connect using the container name.
I have 4 different containers... app_backend, app_web, app_redis, app_db.
In my docker-compose file I have defined a network, appnet, and put all the containers on the same network. I'll show this below.
The app_backend container connects to the app_redis and app_db containers just fine with the container name as the host name... here is an example URL that I am using: http://app_redis:6379 or http://app_db:3306.
The app_web container, is refusing to connect to my app_backend unless I specify the host name as localhost. For example... http://localhost:4000 works, but http://app_backend:4000 does not.
The app_backend container is running an Express server, and I have confirmed by logging the servername/hostname that it is running at http://app_backend:4000.
If I ssh (docker exec -it web bash) into the app_web container and ping the app_backend container, ping app_backend, it returns a ping! Though if I ping the app_backend container with the port, ping http://app_backend:4000 or even just app_backend:4000, it returns Name or Service Not Known.
Either way... My frontend is trying to request data from my express api and I am not sure what to do in order to get this to work.
Previously, I'd request from the API at http://localhost:4000/api/thing-to-call. I am having some network issues with uploading files and I think it has something to do with this being a localhost... I'd like it to be like the rest of the connections, such as http://app_backend:4000/api/thing-to-call.
Thanks for taking the time to look at this and pointing me in the right direction...
version: '3'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_DATABASE: 'appdb'
      MYSQL_USER: 'app_user'
      MYSQL_PASSWORD: 'removed_for_this'
      MYSQL_ROOT_PASSWORD: 'removed_for_this'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - appdb:/var/lib/mysql:Z
    networks:
      - appnet
  redis:
    build:
      context: ./Assets/Docker/redis
    image: registry.location.secret/app:redis
    command: redis-server --requirepass removed_for_this
    ports:
      - '6379:6379'
    container_name: app_redis
    volumes:
      - ./redis-data:/var/lib/redis:Z
      - ./Assets/Docker/redis/redis-dev.conf:/usr/local/etc/redis/redis.conf:Z
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - appnet
  app_backend:
    build:
      context: ./app-backend
    image: registry.location.secret/app:backend
    ports:
      - '4000:4000'
    expose:
      - '4000'
    container_name: app_backend
    volumes:
      - ./app-backend:/app:Z
      - /app/node_modules
      - ./Assets/_dev/backend/.env:/app/.env:Z
    networks:
      - appnet
  app_web:
    build:
      context: ./app-web
    image: registry.location.secret/app:web
    ports:
      - '3000:3000'
    container_name: app_web
    stdin_open: true
    volumes:
      - ./app-web:/app/:Z
      - /app/node_modules
      - ./Assets/_dev/web/.env:/app/.env:Z
    networks:
      - appnet
volumes:
  appdb:
networks:
  appnet:
Here is an example of the ping:
root@f82cc599058d:/app# ping app_backend
PING app_backend (172.19.0.2) 56(84) bytes of data.
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=1 ttl=64 time=0.109 ms
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=2 ttl=64 time=0.080 ms
^C
--- app_backend ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1056ms
rtt min/avg/max/mdev = 0.080/0.094/0.109/0.017 ms
root@f82cc599058d:/app# ping app_backend:4000
ping: app_backend:4000: Name or service not known
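(An aside of my own, not from the original post: ping only does hostname resolution plus ICMP; it does not understand ports or URLs, which is why app_backend:4000 is treated as an unknown name. A TCP-level probe is the usual way to check whether the Express port is reachable from app_web; a sketch, assuming curl is available in the app_web image and that /api/thing-to-call is just the placeholder path from the question:)
# run from inside the app_web container, e.g. via docker exec -it app_web bash
curl -v http://app_backend:4000/api/thing-to-call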

How to access a docker container through its hostname? (from my host PC)

(I'm doing this just for practice, to get to know Docker.)
My host PC is Ubuntu 18.04 LTS.
And this is my docker-compose file (the default network mode is bridge).
version: "2"
services:
zookeeper1:
image: wurstmeister/zookeeper
zookeeper2:
image: wurstmeister/zookeeper
zookeeper3:
image: wurstmeister/zookeeper
Inside the Docker containers, they can find each other:
> docker container exec -it 8b1c2b412989 ping zookeeper2
PING zookeeper2 (172.19.0.3) 56(84) bytes of data.
64 bytes from setup-zookeeper-kafka_zookeeper2_1.setup-zookeeper-kafka_default (172.19.0.3): icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from setup-zookeeper-kafka_zookeeper2_1.setup-zookeeper-kafka_default (172.19.0.3): icmp_seq=2 ttl=64 time=0.129 ms
But when I try from my host PC, it doesn't work:
> ping zookeeper2
ping: zookeeper2: Name or service not known
> ping 8b1c2b412989 # container id also doesn't work
ping: 8b1c2b412989: Name or service not known
Pinging with the IP works well:
> ping 172.19.0.3
PING 172.19.0.3 (172.19.0.3) 56(84) bytes of data.
64 bytes from 172.19.0.3: icmp_seq=1 ttl=64 time=0.142 ms
64 bytes from 172.19.0.3: icmp_seq=2 ttl=64 time=0.046 ms
I added the hostname property; it still doesn't work.
version: "2"
services:
zookeeper1:
hostname: zookeeper1
image: wurstmeister/zookeeper
zookeeper2:
hostname: zookeeper2
image: wurstmeister/zookeeper
zookeeper3:
hostname: zookeeper3
image: wurstmeister/zookeeper
How can I access container with hostname from my host PC?
Right now, I can only do it with the ports option (or I have to write a static IP address).
What am I getting wrong?
This happens as Giga commented: if you don't have the hostname in your /etc/hosts, there is no magic here.
If you need to ping for a healthcheck, you can use docker-compose's healthcheck.
With it, you can always check whether your zookeepers are alive or not.
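A minimal sketch of what that could look like (my own illustration; the srvr four-letter command is a standard ZooKeeper status probe, but whether nc is available depends on the wurstmeister/zookeeper image):
version: "2.1"
services:
  zookeeper1:
    image: wurstmeister/zookeeper
    healthcheck:
      # ask ZooKeeper for its status on the client port and look for the "Mode" line
      test: ["CMD-SHELL", "echo srvr | nc localhost 2181 | grep -q Mode"]
      interval: 10s
      timeout: 5s
      retries: 3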
If you insist that you need to ping by hostname, I recommend rolling up your sleeves and making a bash script using docker's format option, like:
docker ps --format "table {{.ID}}\t{{.Ports}}"
And then, for every container ID with its container name, do:
docker inspect <containerid> and extract the IP.
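A rough sketch of such a script (my own illustration, not the answer author's code; it prints IP/name pairs to stdout so you can decide yourself whether to append them to /etc/hosts):
#!/usr/bin/env bash
# Print "<ip(s)> <container name>" for every running container,
# using docker ps to enumerate IDs and docker inspect to extract names and IPs.
for id in $(docker ps --format '{{.ID}}'); do
  name=$(docker inspect --format '{{.Name}}' "$id" | sed 's|^/||')
  ip=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' "$id")
  echo "$ip $name"
done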
Docker does not update your host's /etc/hosts file automatically, so you can't access containers from the host machine via hostname.
You can manually write a wrapper for docker-compose which will update the host machine's /etc/hosts file.
But another question is why you would need it.

How to link one docker service with other docker service?

I have 12 docker containers. When I run them individually I can use --link to connect some of them, like linking a web app with a MySQL db. But when I run them as services in docker swarm, I cannot link them because --link is not available with the docker service create command.
If I use a docker-compose.yml file to run all the containers, I can link them up. But here is another issue.
Suppose I have 12 different containers (components) in a docker-compose file or docker stack; how can I update a single container or component? Do I have to redeploy the whole docker stack?
You only need to put your containers in the same network in each docker-compose.yml file.
First you will need to create a network with docker:
docker network create -d bridge custom
Then you will need to change the network in your docker-compose files to the new network, and if you want you can use external_links, as in the example:
example file 1:
version: '3'
services:
  php-server:
    container_name: myphp
    image: devilxatoms/taproject:latest
    ports:
      - "9000:9000"
    external_links:
      - mysql:mysql
    networks:
      - custom
networks:
  custom:
    external: true
example file 2:
version: '3'
services:
  mysql:
    container_name: mydb
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      - custom
networks:
  custom:
    external: true
To test it, I just opened a bash shell in my mysql container and sent a ping to the other container:
MySQL Container:
# ping php-server
PING php-server (172.26.0.3) 56(84) bytes of data.
64 bytes from myphp.custom (172.26.0.3): icmp_seq=1 ttl=64 time=0.124 ms
64 bytes from myphp.custom (172.26.0.3): icmp_seq=2 ttl=64 time=0.368 ms
64 bytes from myphp.custom (172.26.0.3): icmp_seq=3 ttl=64 time=0.071 ms
64 bytes from myphp.custom (172.26.0.3): icmp_seq=4 ttl=64 time=0.136 ms
^C
--- php-server ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3094ms
rtt min/avg/max/mdev = 0.071/0.174/0.368/0.115 ms
PHP Container:
# ping mysql
PING mysql (172.26.0.2) 56(84) bytes of data.
64 bytes from mydb.custom (172.26.0.2): icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from mydb.custom (172.26.0.2): icmp_seq=2 ttl=64 time=0.107 ms
64 bytes from mydb.custom (172.26.0.2): icmp_seq=3 ttl=64 time=0.109 ms
^C
--- mysql ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2094ms
rtt min/avg/max/mdev = 0.075/0.097/0.109/0.015 ms
To update a specific service, you can update your docker-compose file with your changes and tell docker-compose which of your services needs to be updated with this line:
docker-compose up -d --no-deps <service_name>
The -d flag is detached mode: run containers in the background and print new container names.
The --no-deps flag will not start linked services.
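For example, with the files above, picking up a new version of the php-server service (which uses a prebuilt image, so it is pulled rather than built) might look like this:
# fetch the latest devilxatoms/taproject image, then recreate only the php-server service
docker-compose pull php-server
docker-compose up -d --no-deps php-server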
References:
https://docs.docker.com/compose/compose-file/#external_links
