I have a Docker cluster:
version: "3.2"
services:
manager:
image: busybox
networks:
- frontend
deploy:
placement:
constraints: [node.role == manager]
worker:
image: busybox
depends_on:
- manager
networks:
- frontend
deploy:
placement:
constraints: [node.labels.name == hp-laptop]
networks:
frontend:
On the host with the label hp-laptop I have a working IPv6 address:
ping6 google.com
PING google.com(lhr.net) 56 data bytes
64 bytes from lhr.net: icmp_seq=1 ttl=57 time=36.6 ms
64 bytes from lhr.net: icmp_seq=2 ttl=57 time=30.0 ms
64 bytes from lhr.net: icmp_seq=3 ttl=57 time=30.6 ms
How can I provide IPv6 support from the host (hp-laptop) to a Docker swarm node using a docker stack configuration?
IPv6 doesn't work on overlay networks. You'd have to use bridge networks.
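For a single host such as hp-laptop, a bridge network with IPv6 enabled can be declared directly in the compose file. This is only a sketch: the subnet below uses the IPv6 documentation prefix and must be replaced with a real prefix, and depending on your Compose file version (2.x, or the newer Compose Spec) you may also need IPv6 enabled in the Docker daemon configuration.

```yaml
# Hypothetical sketch: user-defined bridge network with IPv6 enabled.
# 2001:db8:1::/64 is the documentation prefix; substitute your own.
networks:
  frontend:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: "2001:db8:1::/64"
```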
I have a couple of containers sitting behind an nginx reverse proxy and while nginx should and can talk to all of the containers, they shouldn't be able to talk to one another. The primary reason for wanting this is that the containers are independent of one another and in the event that one of them gets compromised, at least it won't allow access to the other ones.
Using my limited understanding of networking (even more so with Docker), I naively assumed that creating a separate network for each container and assigning all networks to the nginx container would do the job, though that doesn't seem to work. Here's what my docker-compose.yml looks like based on that attempt:
services:
nginx:
networks:
- net1
- net2
- net3
container1:
networks:
- net1
container2:
networks:
- net2
container3:
networks:
- net3
networks:
net1:
name: net1
net2:
name: net2
net3:
name: net3
Is what I'm trying to do possible? Is it even worth going through the effort of doing it for the sake of security?
Your idea really works; see the following example:
docker-compose.yaml:
version: "3.7"
services:
nginx:
image: debian:10
tty: true
stdin_open: true
networks:
- net1
- net2
container1:
image: debian:10
tty: true
stdin_open: true
networks:
- net1
container2:
image: debian:10
tty: true
stdin_open: true
networks:
- net2
networks:
net1:
name: net1
net2:
name: net2
verify:
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container1
PING container1 (172.19.0.2) 56(84) bytes of data.
64 bytes from 20221014_container1_1.net1 (172.19.0.2): icmp_seq=1 ttl=64 time=0.083 ms
--- container1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
root@pie:~/20221014# docker-compose exec nginx ping -c 1 container2
PING container2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 20221014_container2_1.net2 (172.20.0.2): icmp_seq=1 ttl=64 time=0.062 ms
--- container2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
root@pie:~/20221014# docker-compose exec container1 ping -c 1 container2
ping: container2: Temporary failure in name resolution
root@pie:~/20221014# docker-compose exec container2 ping -c 1 container1
ping: container1: Temporary failure in name resolution
Also, Networking in Compose demonstrates an example with the same scenario as yours:
The proxy service is isolated from the db service, because they do not share a network in common - only app can talk to both.
services:
proxy:
build: ./proxy
networks:
- frontend
app:
build: ./app
networks:
- frontend
- backend
db:
image: postgres
networks:
- backend
networks:
frontend:
# Use a custom driver
driver: custom-driver-1
backend:
# Use a custom driver which takes special options
driver: custom-driver-2
driver_opts:
foo: "1"
bar: "2"
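Beyond pinging, you can confirm which containers are attached to each network. Assuming the net1/net2 names from the example above, `docker network inspect` with a Go-template format string lists the attached containers:

```shell
# Show the names of containers attached to each user-defined network
docker network inspect net1 --format '{{range .Containers}}{{.Name}} {{end}}'
docker network inspect net2 --format '{{range .Containers}}{{.Name}} {{end}}'
```

With the setup above, nginx should appear in both lists, while container1 and container2 each appear in only one.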
I am having some issues getting my containers to connect using the container name.
I have 4 different containers... app_backend, app_web, app_redis, app_db.
In my docker-compose file I have defined a network, appnet, and put all the containers on the same network. I'll show this below.
The app_backend container connects to the app_redis and app_db containers just fine with the container name as the host name; here are example URLs that I am using: http://app_redis:6379 or http://app_db:3306.
The app_web container refuses to connect to my app_backend unless I specify the host name as localhost. For example, http://localhost:4000 works, but http://app_backend:4000 does not.
The app_backend container is running an Express server, and I have confirmed by logging the server name/hostname that it is running at http://app_backend:4000.
If I ssh (docker exec -it web bash) into the app_web container and ping the app_backend container, ping app_backend, it returns a ping! Though if I ping the app_backend container with the port, ping http://app_backend:4000 or even just app_backend:4000, it returns Name or Service Not Known.
Either way... My frontend is trying to request data from my express api and I am not sure what to do in order to get this to work.
Previously, I'd request from the API at http://localhost:4000/api/thing-to-call. I am having some network issues with uploading files and I think it has something to do with this being localhost... I'd like it to work like the rest of the connections, such as http://app_backend:4000/api/thing-to-call.
Thanks for taking the time to look at this and pointing me in the right direction...
version: '3'
services:
db:
image: mysql
restart: always
environment:
MYSQL_DATABASE: 'appdb'
MYSQL_USER: 'app_user'
MYSQL_PASSWORD: 'removed_for_this'
MYSQL_ROOT_PASSWORD: 'removed_for_this'
ports:
- '3306:3306'
expose:
- '3306'
volumes:
- appdb:/var/lib/mysql:Z
networks:
- appnet
redis:
build:
context: ./Assets/Docker/redis
image: registry.location.secret/app:redis
command: redis-server --requirepass removed_for_this
ports:
- '6379:6379'
container_name: app_redis
volumes:
- ./redis-data:/var/lib/redis:Z
- ./Assets/Docker/redis/redis-dev.conf:/usr/local/etc/redis/redis.conf:Z
environment:
- REDIS_REPLICATION_MODE=master
networks:
- appnet
app_backend:
build:
context: ./app-backend
image: registry.location.secret/app:backend
ports:
- '4000:4000'
expose:
- '4000'
container_name: app_backend
volumes:
- ./app-backend:/app:Z
- /app/node_modules
- ./Assets/_dev/backend/.env:/app/.env:Z
networks:
- appnet
app_web:
build:
context: ./app-web
image:
registry.location.secret/app:web
ports:
- '3000:3000'
container_name: app_web
stdin_open: true
volumes:
- ./app-web:/app/:Z
- /app/node_modules
- ./Assets/_dev/web/.env:/app/.env:Z
networks:
- appnet
volumes:
appdb:
networks:
appnet:
Here is an example of the ping:
root@f82cc599058d:/app# ping app_backend
PING app_backend (172.19.0.2) 56(84) bytes of data.
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=1 ttl=64 time=0.109 ms
64 bytes from app_backend.app_appnet (172.19.0.2): icmp_seq=2 ttl=64 time=0.080 ms
^C
--- app_backend ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1056ms
rtt min/avg/max/mdev = 0.080/0.094/0.109/0.017 ms
root@f82cc599058d:/app# ping app_backend:4000
ping: app_backend:4000: Name or service not known
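Note that ping speaks ICMP and does not take a port: the whole string app_backend:4000 is handed to the resolver as one hostname, which is why it reports Name or service not known. To check that the TCP port itself is reachable from inside app_web, a TCP-level client is needed; curl and nc below are assumptions and must actually be installed in the image:

```shell
# ICMP ping cannot test a TCP port; use a TCP client instead.
curl -s http://app_backend:4000/   # full HTTP request to the Express server
nc -zv app_backend 4000            # bare TCP connection test, no HTTP
```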
Here is my docker-compose:
version: '3.1'
services:
service1:
image: alpine/socat
command: TCP-LISTEN:9999,bind=172.16.1.10 SYSTEM:"echo lol"
networks:
network1:
ipv4_address: 172.16.1.10
service2:
image: alpine/socat
entrypoint: ''
command: sh -c 'sleep 2 && ping -c 1 172.16.1.10 && socat - TCP4:172.16.1.10:9999'
depends_on:
- service1
networks:
- network1
networks:
network1:
driver: bridge
ipam:
driver: default
config:
- subnet: 172.16.1.0/24
Both services share the network1 network; service1 gets the IP 172.16.1.10 and socat binds to this IP.
If I run:
$ docker-compose run service2
Creating network "docker_network1" with driver "bridge"
Creating docker_service1_1 ... done
PING 172.16.1.10 (172.16.1.10): 56 data bytes
64 bytes from 172.16.1.10: seq=0 ttl=64 time=0.197 ms
--- 172.16.1.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.197/0.197/0.197 ms
2020/06/07 16:06:15 socat[1] E connect(5, AF=2 172.16.1.10:9999, 16): Connection refused
Socat does not accept the connection.
If I change the binding to 0.0.0.0 it works, but I don't want to bind to all interfaces. I'm using networks for this purpose, and they are pointless if I can't bind my services to specific interfaces.
Is this not possible in Docker, or am I doing it wrong?
(I'm doing this just for practice, to get to know Docker.)
My host PC is ubuntu 18.04 LTS.
And this is my docker-compose file (the default network mode is bridge):
version: "2"
services:
zookeeper1:
image: wurstmeister/zookeeper
zookeeper2:
image: wurstmeister/zookeeper
zookeeper3:
image: wurstmeister/zookeeper
Inside the Docker containers, they can find each other:
> docker container exec -it 8b1c2b412989 ping zookeeper2
PING zookeeper2 (172.19.0.3) 56(84) bytes of data.
64 bytes from setup-zookeeper-kafka_zookeeper2_1.setup-zookeeper-kafka_default (172.19.0.3): icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from setup-zookeeper-kafka_zookeeper2_1.setup-zookeeper-kafka_default (172.19.0.3): icmp_seq=2 ttl=64 time=0.129 ms
But when I try from my host PC, it doesn't work:
> ping zookeeper2
ping: zookeeper2: Name or service not known
> ping 8b1c2b412989 # container id also doesn't work
ping: 8b1c2b412989: Name or service not known
Pinging by IP works fine:
> ping 172.19.0.3
PING 172.19.0.3 (172.19.0.3) 56(84) bytes of data.
64 bytes from 172.19.0.3: icmp_seq=1 ttl=64 time=0.142 ms
64 bytes from 172.19.0.3: icmp_seq=2 ttl=64 time=0.046 ms
I added the hostname property, but it still doesn't work:
version: "2"
services:
zookeeper1:
hostname: zookeeper1
image: wurstmeister/zookeeper
zookeeper2:
hostname: zookeeper2
image: wurstmeister/zookeeper
zookeeper3:
hostname: zookeeper3
image: wurstmeister/zookeeper
How can I access container with hostname from my host PC?
For now, I can only do it with the ports option (or I have to write a static IP address).
What am I getting wrong?
This happens for the reason Giga commented: if you don't have the hostname in your /etc/hosts, there is no magic here.
If you need to ping for a healthcheck, you can use docker-compose's healthcheck.
With it, you can always see whether your zookeepers are alive or not.
If you insist on pinging by hostname, I recommend rolling up your sleeves and writing a bash script using Docker's format option, such as:
docker ps --format "table {{.ID}}\t{{.Ports}}"
Then, for every container ID and container name, run:
docker inspect <containerid> and extract the IP address from the output.
Docker does not update your host's /etc/hosts file automatically, so you can't access containers from the host machine via hostname.
You can manually write a wrapper for docker-compose which updates the host machine's /etc/hosts file.
But another question is why you would need it.
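A minimal sketch of such a wrapper, assuming the containers sit on a bridge-style network and that you run it as root after docker-compose up -d. The template fields are standard docker inspect Go templates; everything else here is illustrative, not a finished tool:

```shell
#!/bin/sh
# Hypothetical sketch: append each running container's IP and name to /etc/hosts.
for id in $(docker ps -q); do
  # docker inspect reports names with a leading slash, e.g. "/zookeeper1"
  name=$(docker inspect --format '{{.Name}}' "$id" | sed 's|^/||')
  # Grab the container's IP from whichever network it is attached to
  ip=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$id")
  [ -n "$ip" ] && printf '%s %s\n' "$ip" "$name" >> /etc/hosts
done
```

Keep in mind the entries go stale whenever a container restarts with a new IP, which is part of why this approach is rarely worth it.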
I have 12 Docker containers. When I run them individually I can use --link to connect some of them, like linking the web app with the mysql db. But when I run them as services in Docker swarm (docker service create), I cannot link them because --link is not available with the docker service create command.
If I use a docker-compose.yml file to run all the containers, I can link them up. But there is another issue.
Suppose I have 12 different containers (components) in the docker-compose file or docker stack: how can I update a single container or component? Do I have to redeploy the whole docker stack?
You only need to put your containers on the same network in each docker-compose.yml file.
First, create a network with docker:
docker network create -d bridge custom
Then change the network in your docker-compose files to the new network; if you want, you can also use external_links, as in the example:
example file 1:
version: '3'
services:
php-server:
container_name: myphp
image: devilxatoms/taproject:latest
ports:
- "9000:9000"
external_links:
- mysql:mysql
networks:
- custom
networks:
custom:
external: true
example file 2:
version: '3'
services:
mysql:
container_name: mydb
image: mysql:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
ports:
- "3306:3306"
networks:
- custom
networks:
custom:
external: true
To test it, I just opened a shell in my mysql container and sent a ping to the other container:
MySQL Container:
# ping php-server
PING php-server (172.26.0.3) 56(84) bytes of data.
64 bytes from myphp.custom (172.26.0.3): icmp_seq=1 ttl=64 time=0.124 ms
64 bytes from myphp.custom (172.26.0.3): icmp_seq=2 ttl=64 time=0.368 ms
64 bytes from myphp.custom (172.26.0.3): icmp_seq=3 ttl=64 time=0.071 ms
64 bytes from myphp.custom (172.26.0.3): icmp_seq=4 ttl=64 time=0.136 ms
^C
--- php-server ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3094ms
rtt min/avg/max/mdev = 0.071/0.174/0.368/0.115 ms
PHP Container:
# ping mysql
PING mysql (172.26.0.2) 56(84) bytes of data.
64 bytes from mydb.custom (172.26.0.2): icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from mydb.custom (172.26.0.2): icmp_seq=2 ttl=64 time=0.107 ms
64 bytes from mydb.custom (172.26.0.2): icmp_seq=3 ttl=64 time=0.109 ms
^C
--- mysql ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2094ms
rtt min/avg/max/mdev = 0.075/0.097/0.109/0.015 ms
To update a specific service, update your docker-compose file with your changes and tell docker-compose which of your services to update with this line:
docker-compose up -d --no-deps <service_name>
The -d flag is detached mode: run containers in the background and print the new container names.
The --no-deps flag prevents starting linked services.
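For the docker stack half of the question: under swarm you don't redeploy everything either. Re-running docker stack deploy with the updated compose file only changes services whose definition changed, and a single service can also be rolled forward directly. The stack name mystack and image tag below are assumptions for illustration:

```shell
# Re-deploy: swarm reconciles the stack and only restarts changed services
docker stack deploy -c docker-compose.yml mystack

# Or update just one service's image in place
docker service update --image registry.location.secret/app:backend-v2 mystack_app_backend
```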
references:
https://docs.docker.com/compose/compose-file/#external_links