I have a couple of containers sitting behind an nginx reverse proxy and while nginx should and can talk to all of the containers, they shouldn't be able to talk to one another. The primary reason for wanting this is that the containers are independent of one another and in the event that one of them gets compromised, at least it won't allow access to the other ones.
Using my limited understanding of networking (even more so with Docker), I naively assumed that creating a separate network for each container and assigning all networks to the nginx container would do the job, though that doesn't seem to work. Here's what my docker-compose.yml looks like based on that attempt:
services:
  nginx:
    networks:
      - net1
      - net2
      - net3
  container1:
    networks:
      - net1
  container2:
    networks:
      - net2
  container3:
    networks:
      - net3
networks:
  net1:
    name: net1
  net2:
    name: net2
  net3:
    name: net3
Is what I'm trying to do possible? Is it even worth going through the effort of doing it for the sake of security?
Your idea does work; see the following example:
docker-compose.yaml:
version: "3.7"
services:
  nginx:
    image: debian:10
    tty: true
    stdin_open: true
    networks:
      - net1
      - net2
  container1:
    image: debian:10
    tty: true
    stdin_open: true
    networks:
      - net1
  container2:
    image: debian:10
    tty: true
    stdin_open: true
    networks:
      - net2
networks:
  net1:
    name: net1
  net2:
    name: net2
Verify:
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container1
PING container1 (172.19.0.2) 56(84) bytes of data.
64 bytes from 20221014_container1_1.net1 (172.19.0.2): icmp_seq=1 ttl=64 time=0.083 ms
--- container1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container2
PING container2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 20221014_container2_1.net2 (172.20.0.2): icmp_seq=1 ttl=64 time=0.062 ms
--- container2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
root@pie:~/20221014# docker-compose exec container1 ping -c 1 container2
ping: container2: Temporary failure in name resolution
root@pie:~/20221014# docker-compose exec container2 ping -c 1 container1
ping: container1: Temporary failure in name resolution
The networking in Compose documentation demonstrates the same scenario as yours:
The proxy service is isolated from the db service, because they do not share a network in common - only app can talk to both.
services:
  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend
networks:
  frontend:
    # Use a custom driver
    driver: custom-driver-1
  backend:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: "1"
      bar: "2"
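As an optional hardening step beyond the docs example (my suggestion, not part of the original answer), a per-app network can also be marked `internal` so a compromised container cannot open outbound connections either; nginx then needs one ordinary network for its published ports. A sketch, assuming the same net1 layout:

```yaml
services:
  nginx:
    ports:
      - "80:80"        # published via the default, non-internal network
    networks:
      - default
      - net1
  container1:
    networks:
      - net1
networks:
  net1:
    internal: true     # no routing to the outside world from this network
```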
Related
I have two docker-compose files, each containing different services. By creating an external network and declaring it in both compose files, I was able to have a shared network between the containers, and all containers are accessible via their hostnames. I can ping the containers created by these separate compose files via their hostnames. The problem I'm having is that I cannot curl a certain service via its exposed port. My actual goal is to access the REST API myapp-backend-api:81 from myapp_frontend_php. May I know what is lacking from my setup?
Here are the results from ping and curl:
root@myapp-encryption-flask:/var/www# ping -c 3 myapp-backend-api
PING myapp-backend-api (172.18.0.7) 56(84) bytes of data.
64 bytes from abkd-proxy-1.myapp-shared-network (172.18.0.7): icmp_seq=1 ttl=64 time=0.052 ms
64 bytes from abkd-proxy-1.myapp-shared-network (172.18.0.7): icmp_seq=2 ttl=64 time=0.206 ms
64 bytes from abkd-proxy-1.myapp-shared-network (172.18.0.7): icmp_seq=3 ttl=64 time=0.060 ms
--- myapp-backend-api ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2076ms
rtt min/avg/max/mdev = 0.052/0.106/0.206/0.070 ms
root@myapp-encryption-flask:/var/www# curl myapp-backend-api:81
curl: (7) Failed to connect to myapp-backend-api port 81: Connection refused
root@myapp-encryption-flask:/var/www#
Here is compose file 1:
version: '3.7'
networks:
  default:
    external:
      name: myapp-shared-network
services:
  db:
    image: mysql:5.7
    ports:
      - "3301:3306"
    hostname: myapp-db
    restart: always
    volumes:
      - production_db_volume:/var/lib/mysql
    env_file:
      - .env.dev
  # or
  # db:
  #   image: postgres:12.5
  #   ports:
  #     - "5432:5432"
  #   restart: always
  #   volumes:
  #     - production_db_volume:/var/lib/postgresql/data/
  #   env_file:
  #     - .env.dev
  app:
    build:
      context: .
    ports:
      - "8001:8000"
    volumes:
      - production_static_data:/vol/web
    hostname: myapp-backend
    restart: always
    env_file:
      - .env.dev
    depends_on:
      - db
  proxy:
    build:
      context: ./proxy
    hostname: myapp-backend-api
    volumes:
      - production_static_data:/vol/static
    restart: always
    ports:
      - "81:80"
    depends_on:
      - app
volumes:
  production_static_data:
  production_db_volume:
And here is compose file 2:
version: '3.8'
networks:
  default:
    external:
      name: myapp-shared-network
services:
  app:
    build:
      context: ./myapp_frontend_php
      dockerfile: Dockerfile
    hostname: myapp-app
    container_name: app
    restart: always
    working_dir: /var/www/
    volumes:
      - ../src:/var/www
  nginx:
    image: nginx:1.19-alpine
    hostname: myapp-nginx
    container_name: nginx
    restart: always
    ports:
      - 80:80
    volumes:
      - ../src:/var/www
      - ./nginx:/etc/nginx/conf.d
  myapp-processor:
    build:
      context: ./myapp_encryption_flask
      dockerfile: Dockerfile
    hostname: myapp-processor
    container_name: myapp-processor
    restart: always
    ports:
      - "5000:5000"
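One detail worth flagging as an editorial aside (my annotation, not part of the original question): in the Compose `ports:` short syntax the first number is the host port and the second is the container port, so `"81:80"` publishes container port 80 on the host as port 81. Traffic between containers on the shared network goes directly to the container port:

```yaml
services:
  proxy:
    ports:
      # host:container — the host reaches this service on port 81,
      # but other containers on myapp-shared-network reach it on port 80
      # (the port the process inside the container actually listens on).
      - "81:80"
```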
I have a fairly simple docker-compose file, as follows:
version: "3.8"
services:
  synchronisation:
    build: .
    command: 'python3 manage.py runserver 0.0.0.0:8000'
    networks:
      - syncnet
    ports:
      - "8000:8000"
      - "8008:8008"
    volumes:
      - .:/var/lib/django
      - packages:/usr/local/lib/python3.6/site-packages
  neo4j:
    image: neo4j:3.5.3
    networks:
      - syncnet
    ports:
      - "7687:7687"
      - "7474:7474"
      - "7473:7473"
    environment:
      NEO4J_AUTH: none
volumes:
  packages:
    driver: local
    driver_opts:
      type: none
      device: <somefolder>
      o: bind
networks:
  syncnet:
    name: sync_network
Now, from the synchronisation app I need to make requests both to the neo4j app and to the 'outside world'. The connection to neo4j is successful; however, connections to the outside world fail. In particular, if I open a shell in the synchronisation container and run
ping google.com
I get the result
PING google.com (172.217.17.46) 56(84) bytes of data.
^C
--- google.com ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6149ms
I have already set
sudo sysctl -w net.inet.ip.forwarding=1
How can I get access to the 'outside world' working? For context, I am working on a Mac running macOS 10.15.7.
I am having some issues getting my containers to connect using the container name.
I have 4 different containers... app_backend, app_web, app_redis, app_db.
In my docker-compose file I have defined a network, appnet, and put all the containers on the same network. I'll show this below.
The app_backend container connects to the app_redis and app_db containers just fine, using the container name as the hostname... here are example URLs I am using: http://app_redis:6379 or http://app_db:3306.
The app_web container, however, refuses to connect to app_backend unless I specify the hostname as localhost. For example... http://localhost:4000 works, but http://app_backend:4000 does not.
The app_backend container is running an Express server, and I have confirmed by logging the servername/hostname that it is running at http://app_backend:4000.
If I shell into the app_web container (docker exec -it web bash) and ping the app_backend container with ping app_backend, I get a reply! But if I ping the app_backend container with the port, ping http://app_backend:4000 or even just app_backend:4000, it returns Name or service not known.
Either way... My frontend is trying to request data from my express api and I am not sure what to do in order to get this to work.
Previously, I'd request the API at http://localhost:4000/api/thing-to-call. I am having some network issues when uploading files and I think it has something to do with this being localhost... I'd like it to work like the rest of the connections, such as http://app_backend:4000/api/thing-to-call.
Thanks for taking the time to look at this and pointing me in the right direction...
version: '3'
services:
  db:
    image: mysql
    restart: always
    environment:
      MYSQL_DATABASE: 'appdb'
      MYSQL_USER: 'app_user'
      MYSQL_PASSWORD: 'removed_for_this'
      MYSQL_ROOT_PASSWORD: 'removed_for_this'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - appdb:/var/lib/mysql:Z
    networks:
      - appnet
  redis:
    build:
      context: ./Assets/Docker/redis
    image: registry.location.secret/app:redis
    command: redis-server --requirepass removed_for_this
    ports:
      - '6379:6379'
    container_name: app_redis
    volumes:
      - ./redis-data:/var/lib/redis:Z
      - ./Assets/Docker/redis/redis-dev.conf:/usr/local/etc/redis/redis.conf:Z
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - appnet
  app_backend:
    build:
      context: ./app-backend
    image: registry.location.secret/app:backend
    ports:
      - '4000:4000'
    expose:
      - '4000'
    container_name: app_backend
    volumes:
      - ./app-backend:/app:Z
      - /app/node_modules
      - ./Assets/_dev/backend/.env:/app/.env:Z
    networks:
      - appnet
  app_web:
    build:
      context: ./app-web
    image: registry.location.secret/app:web
    ports:
      - '3000:3000'
    container_name: app_web
    stdin_open: true
    volumes:
      - ./app-web:/app/:Z
      - /app/node_modules
      - ./Assets/_dev/web/.env:/app/.env:Z
    networks:
      - appnet
volumes:
  appdb:
networks:
  appnet:
Here is an example of the ping:
root@f82cc599058d:/app# ping app_backend
PING app_backend (172.19.0.2) 56(84) bytes of data.
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=1 ttl=64 time=0.109 ms
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=2 ttl=64 time=0.080 ms
^C
--- app_backend ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1056ms
rtt min/avg/max/mdev = 0.080/0.094/0.109/0.017 ms
root@f82cc599058d:/app# ping app_backend:4000
ping: app_backend:4000: Name or service not known
Here is my docker-compose file:
version: '3.1'
services:
  service1:
    image: alpine/socat
    command: TCP-LISTEN:9999,bind=127.16.1.10 SYSTEM:"echo lol"
    networks:
      network1:
        ipv4_address: 172.16.1.10
  service2:
    image: alpine/socat
    entrypoint: ''
    command: sh -c 'sleep 2 && ping -c 1 172.16.1.10 && socat - TCP4:172.16.1.10:9999'
    depends_on:
      - service1
    networks:
      - network1
networks:
  network1:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.1.0/24
Both services share the network1 network; service1 gets the IP 172.16.1.10 and socat binds to this IP.
If I run:
$ docker-compose run service2
Creating network "docker_network1" with driver "bridge"
Creating docker_service1_1 ... done
PING 172.16.1.10 (172.16.1.10): 56 data bytes
64 bytes from 172.16.1.10: seq=0 ttl=64 time=0.197 ms
--- 172.16.1.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.197/0.197/0.197 ms
2020/06/07 16:06:15 socat[1] E connect(5, AF=2 172.16.1.10:9999, 16): Connection refused
socat does not accept the connection.
If I change the binding to 0.0.0.0, it works, but I don't want to bind to all interfaces. I'm using networks precisely for this purpose, and they are pointless if I can't bind my services to specific interfaces.
Is this not possible in Docker, or am I doing something wrong?
I have a Docker swarm cluster.
version: "3.2"
services:
  manager:
    image: busybox
    networks:
      - frontend
    deploy:
      placement:
        constraints: [node.role == manager]
  worker:
    image: busybox
    depends_on:
      - manager
    networks:
      - frontend
    deploy:
      placement:
        constraints: [node.labels.name == hp-laptop]
networks:
  frontend:
On the host with label hp-laptop I have an IPv6 address.
ping6 google.com
PING google.com(lhr.net) 56 data bytes
64 bytes from lhr.net: icmp_seq=1 ttl=57 time=36.6 ms
64 bytes from lhr.net: icmp_seq=2 ttl=57 time=30.0 ms
64 bytes from lhr.net: icmp_seq=3 ttl=57 time=30.6 ms
How can I provide IPv6 support from the host (hp-laptop) to a Docker swarm node using a docker stack configuration?
IPv6 doesn't work in Overlay networks. You'd have to use bridge networks.
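For a bridge network, Compose can enable IPv6 explicitly. A minimal sketch (the prefix `2001:db8:1::/64` is a documentation placeholder; the Docker daemon itself must also have IPv6 enabled):

```yaml
networks:
  frontend:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: 2001:db8:1::/64
```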