I have a kylemanna/openvpn server running in Docker to access a private network, with users configured. The VPN works, and internet access through NAT works. What I want is for sites on the internal network to see each VPN client's own IP instead of the VPN server host address, 192.168.140.38. I have spent about three weeks on this and read a lot of documentation, but my experience is not enough. I tried macvlan and tap (server-bridge and docker:host), but nothing worked. I'm terribly tired; any help is appreciated.
My system: Ubuntu 20.04
My server IP (OpenVPN): 192.168.140.38 (external IP 88.56..)
My gateway: 192.168.140.1
DHCP server range: 192.168.140.0/24
Configuration generation command:
ovpn_genconfig -N -2 -e 'duplicate-cn' -n '192.168.140.1' -n '8.8.8.8' -n '8.8.4.4' -d \
    -C 'AES-256-GCM' -e 'tls-crypt-v2 /etc/openvpn/pki/private/vpn_server.pem' \
    -s '10.10.140.0/24' -u udp://77.66.19.237:587 -e 'topology subnet' \
    -p '192.168.140.0 255.255.255.0' -p '10.0.0.0 255.255.255.0' -p '192.168.2.0 255.255.255.0'
openvpn.conf
server 10.10.140.0 255.255.255.0
verb 3
key /etc/openvpn/pki/private/77.66.19.237.key
ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/77.66.19.237.crt
dh /etc/openvpn/pki/dh.pem
tls-auth /etc/openvpn/pki/ta.key
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto udp
# Rely on Docker to do port mapping, internally always 1194
port 1194
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
cipher AES-256-GCM
comp-lzo no
### Push Configurations Below
push "dhcp-option DNS 192.168.140.1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "comp-lzo no"
push "route 192.168.140.0 255.255.255.0"
push "route 10.0.0.0 255.255.255.0"
push "route 192.168.2.0 255.255.255.0"
reneg-sec 0
### Extra Configurations Below
duplicate-cn
topology subnet
docker-compose.yml
version: '3.8'
services:
  openvpn:
    container_name: openvpn
    build: # build from source to get the latest image (pinned at 2.5)
      context: ./docker-openvpn
      dockerfile: Dockerfile
    restart: always
    ports:
      - "587:1194/udp"
    command: bash -c "ovpn_run"
    cap_add:
      - NET_ADMIN
    volumes:
      - ./openvpn-data/conf:/etc/openvpn
    networks:
      stack:
        ipv4_address: 192.168.140.153
networks:
  stack:
    external: true
This configuration works well; however, I would like my internal resources to be able to identify each VPN client by a unique IP address.
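The routed (non-NAT) alternative, as I understand it, looks roughly like the sketch below: stop masquerading VPN-to-LAN traffic on the VPN host, and teach the LAN the way back. This is an assumption on my part; the MASQUERADE rule referred to is the blanket one that the kylemanna image's ovpn_run adds by default.

```shell
# Sketch of a routed setup so internal hosts see the clients' 10.10.140.x
# addresses. Assumes kylemanna/openvpn's default blanket MASQUERADE rule.

# On the VPN host: exempt VPN->LAN traffic from NAT (insert before MASQUERADE)
iptables -t nat -I POSTROUTING 1 -s 10.10.140.0/24 -d 192.168.140.0/24 -j ACCEPT

# On the LAN gateway 192.168.140.1 (or on each internal host):
# add a return route for the VPN subnet via the VPN server
ip route add 10.10.140.0/24 via 192.168.140.38
```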
I tried a macvlan network:
# take the whole LAN and carve out a small range for the macvlan "stack" network
root@test-openvpn ~ # docker network create -d macvlan --subnet=192.168.140.0/24 --gateway=192.168.140.1 --ip-range=192.168.140.153/31 -o parent=ens160 stack
# create macvlan-br0 on the host interface so host traffic can reach the containers
ip link add macvlan-br0 link ens160 type macvlan mode bridge
# give the bridge an address from the LAN
ip addr add 192.168.140.152/32 dev macvlan-br0
ip link set macvlan-br0 up
# route the Docker macvlan range through the bridge
ip route add 192.168.140.153/31 dev macvlan-br0
# test the network
ssh root@192.168.140.152 # success, the bridge works
root@test-openvpn ~ # docker run --net=stack --rm busybox sh -c "ip ad sh && ping 192.168.140.152 -c 2 && ping google.com -c 2"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
125: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:8c:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/24 brd 192.168.140.255 scope global eth0
valid_lft forever preferred_lft forever
PING 192.168.140.152 (192.168.140.152): 56 data bytes
64 bytes from 192.168.140.152: seq=0 ttl=64 time=1.645 ms
64 bytes from 192.168.140.152: seq=1 ttl=64 time=0.097 ms
--- 192.168.140.152 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.871/1.645 ms
PING google.com (172.217.168.206): 56 data bytes
--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:b2:39:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.38/24 brd 192.168.140.255 scope global dynamic ens160
valid_lft 4143sec preferred_lft 4143sec
inet6 fe80::250:56ff:feb2:39c4/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:90:fa:7a:a6 brd ff:ff:ff:ff:ff:ff
inet 172.30.0.1/24 brd 172.30.0.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:90ff:fefa:7aa6/64 scope link
valid_lft forever preferred_lft forever
99: veth3443190@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 7e:0a:b2:53:84:7c brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::7c0a:b2ff:fe53:847c/64 scope link
valid_lft forever preferred_lft forever
103: br-fe3e6d8d0ad4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b2:6e:7a:9f brd ff:ff:ff:ff:ff:ff
inet 172.30.3.1/24 brd 172.30.3.255 scope global br-fe3e6d8d0ad4
valid_lft forever preferred_lft forever
inet6 fe80::42:b2ff:fe6e:7a9f/64 scope link
valid_lft forever preferred_lft forever
106: macvlan-br0@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:c9:47:98:c7:6b brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/32 scope global macvlan-br0
valid_lft forever preferred_lft forever
inet6 fe80::14c9:47ff:fe98:c76b/64 scope link
valid_lft forever preferred_lft forever
The container has no internet access through the Docker macvlan network.
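For what it's worth, these are the checks that usually explain "LAN works, internet doesn't" with macvlan (a sketch; interface and network names are the ones from this setup):

```shell
# 1) Does the container actually have a default route via the LAN gateway?
docker run --net=stack --rm busybox ip route
#    it should show: default via 192.168.140.1 dev eth0

# 2) The parent NIC must accept frames for the containers' MAC addresses;
#    on VMware and similar virtual switches this needs promiscuous mode allowed
ip link set ens160 promisc on

# 3) Separate DNS from routing: ping an IP, then resolve via an explicit server
docker run --net=stack --rm busybox sh -c "ping -c1 8.8.8.8; nslookup google.com 8.8.8.8"
```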
Related
I'm using microk8s installed on my Ubuntu server, and I'm trying to ping outside from my pod.
I have Docker installed on the same machine; when I run a container with Docker, I can ping outside:
~$ sudo ip addr show docker0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:a7:9f:15:48 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:a7ff:fe8f:1548/64 scope link
valid_lft forever preferred_lft forever
Inside the container:
~$ sudo docker run --rm -it ubuntu:trusty bash
root@dd0af86b1209:/# ip addr show eth0
158: eth0@if159: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
root@dd0af86b1209:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
158: eth0@if159: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
root@dd0af86b1209:/# ping google.com
PING google.com (142.250.179.110) 56(84) bytes of data.
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=1 ttl=108 time=3.71 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=2 ttl=108 time=3.70 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=3 ttl=108 time=3.74 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=4 ttl=108 time=3.75 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=5 ttl=108 time=3.76 ms
But from my pod on microk8s, I can't ping outside:
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if146: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1440 qdisc noqueue state UP
link/ether ba:03:bd:4b:66:97 brd ff:ff:ff:ff:ff:ff
inet 172.17.159.19/32 brd 172.17.159.19 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::b803:bdff:fe44:6697/64 scope link
valid_lft forever preferred_lft forever
/ # ping google.com
ping: bad address 'google.com'
ufw status :
Anywhere (v6) on cali+ ALLOW Anywhere (v6)
Anywhere (v6) on cni0 ALLOW Anywhere (v6)
Anywhere (v6) on cbr0 ALLOW Anywhere (v6)
Anywhere (v6) on eth0 ALLOW Anywhere (v6)
EDIT:
I tried pinging IP addresses directly and that worked, so the problem is with hostname resolution.
This is my coredns ConfigMap:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        log . {
            class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-06-19T17:07:02Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "7503127"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 0735a387-6970-43ab-8490-cdf49a23f936
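One way to narrow this down is to test resolution from a throwaway pod, first through cluster DNS and then against the upstream directly (the busybox image/tag below is just an example; any image with nslookup works):

```shell
# via cluster DNS (CoreDNS)
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup google.com

# bypassing CoreDNS, straight to the upstream resolver
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup google.com 8.8.8.8

# if only the first query fails, check that ufw allows *forwarded* traffic,
# since pod-to-CoreDNS traffic is routed, not local
sudo ufw default allow routed
```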
Thanks in advance for your answers
So here is my problem:
I have a server running Debian 10 with Docker.
In a Docker container I run Pi-hole.
When I run the Pi-hole container, Docker sets its IP to 172.17.0.2.
Docker itself creates a network interface called docker0, whose IP is 172.17.0.1.
The problem arises outside the server: when I ping the Docker interface 172.17.0.1 it's fine, but when I ping the Docker container at 172.17.0.2 it's not reachable.
Here is the ip a command output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ac:16:2d:12:30:71 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.247/24 brd 10.42.0.255 scope global dynamic eno1
valid_lft 3152sec preferred_lft 3152sec
inet6 fe80::ae16:2dff:fe12:3071/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d0:37:45:80:81:0f brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:55:80:15:34 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:55ff:fe80:1534/64 scope link
valid_lft forever preferred_lft forever
25: vethedcefcc@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e2:02:56:8f:9b:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e002:56ff:fe8f:9b22/64 scope link
valid_lft forever preferred_lft forever
What do I need to do? What do I have to configure?
Thanks,
~James Phoenix
You can't access a container's IP directly from outside the host.
If you want to reach a service from outside, you need to forward (publish) its ports.
Example:
docker host IP → 192.168.0.111
container IP → 172.17.0.111
Run nginx container and publish 8080 port to connect from outside:
docker run --name some-nginx -d -p 8080:80 some-content-nginx
Here 8080 is the external port (accessible from outside),
and 80 is the internal port (accessible to containers on the same network).
Access to nginx:
curl http://localhost:8080
# or
curl http://192.168.0.111:8080
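The published mapping can also be inspected from the host afterwards:

```shell
# shows the active port mapping for the container
docker port some-nginx
# typically prints something like: 80/tcp -> 0.0.0.0:8080
```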
I have a bunch of docker containers running on the default bridge network, that need to communicate with each other.
I want to move some of the containers to a separate user defined network so I can specify their IP addresses.
Is there any way to do this without having to take down/replicate all the containers and move them to the other network, or is this the only way?
It's possible to create networks and connect containers while they are live. You may still need to stop/start processes if a process is listening on a specific IP address rather than on all interfaces (* or ::).
Create a network
docker network create \
--driver=bridge \
--subnet=192.168.38.0/24 \
  --gateway=192.168.38.1 \
<NETWORK>
Connect a container
docker network connect \
--ip 192.168.38.14 \
<NETWORK> \
<CONTAINER>
Disconnect from original network
docker network disconnect <OLDNETWORK> <CONTAINER>
Example
Before the containers eth0 is on the default bridge network
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Afterwards, eth1 has been added and eth0 is gone:
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:26:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.38.14/24 brd 192.168.38.255 scope global eth1
valid_lft forever preferred_lft forever
You should also think about using Docker Compose. It creates a network automatically, with its own DNS, so the containers can reach each other by name.
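The same embedded DNS exists on any user-defined network, Compose or not; a minimal demonstration (the network and container names below are placeholders):

```shell
# create a user-defined bridge network; Docker's embedded DNS is active on it
docker network create demo-net
docker run -d --name web --network demo-net nginx

# another container on the same network resolves "web" by name
docker run --rm --network demo-net busybox ping -c 1 web

# cleanup
docker rm -f web && docker network rm demo-net
```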
https://docs.docker.com/network/network-tutorial-macvlan/#prerequisites
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 \
my-macvlan-net
"Create a macvlan network called my-macvlan-net. Modify the subnet, gateway, and parent values to values that make sense in your environment."
I am a noob when it comes to networking. I have no idea what values make sense in my environment.
This is what I see in my host's network interfaces (ip addr):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:31 brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:32 brd ff:ff:ff:ff:ff:ff
4: enp12s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:33 brd ff:ff:ff:ff:ff:ff
inet 10.60.114.101/23 brd 10.60.115.255 scope global dynamic enp12s0
valid_lft 442187sec preferred_lft 442187sec
inet6 fd20:8b1e:b255:8136:225:b5ff:fe66:1133/64 scope global noprefixroute dynamic
valid_lft 2591830sec preferred_lft 604630sec
inet6 fe80::225:b5ff:fe66:1133/64 scope link
valid_lft forever preferred_lft forever
5: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
link/ether 00:25:b5:66:11:34 brd ff:ff:ff:ff:ff:ff
inet 10.60.115.252/23 brd 10.60.115.255 scope global dynamic enp13s0
valid_lft 414540sec preferred_lft 414540sec
inet6 fd20:8b1e:b255:8136:607f:edd6:613a:41da/64 scope global noprefixroute dynamic
valid_lft 2591830sec preferred_lft 604630sec
inet6 fd20:8b1e:b255:8136:225:b5ff:fe66:1134/64 scope global deprecated mngtmpaddr dynamic
valid_lft 1720109sec preferred_lft 0sec
inet6 fe80::225:b5ff:fe66:1134/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:02:16:fb:be brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:2ff:fe16:fbbe/64 scope link
valid_lft forever preferred_lft forever
11: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:bb:c4:b4:18 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:bbff:fec4:b418/64 scope link
valid_lft forever preferred_lft forever
106: veth65ae6f8@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 52:be:7f:de:e2:11 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::50be:7fff:fede:e211/64 scope link
valid_lft forever preferred_lft forever
How do I know which values make sense in my environment?
ip route output:
ip route
default via 10.60.114.1 dev enp12s0 proto static metric 100
default via 10.60.114.1 dev enp13s0 proto static metric 101
10.60.114.0/23 dev enp12s0 proto kernel scope link src 10.60.114.101
10.60.114.0/23 dev enp13s0 proto kernel scope link src 10.60.115.252
10.60.114.0/23 dev enp12s0 proto kernel scope link src 10.60.114.101 metric 100
10.60.114.0/23 dev enp13s0 proto kernel scope link src 10.60.115.252 metric 101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
"I am a noob when it comes to networking. I have no idea what values make sense in my environment."
When you create a macvlan network, you are effectively making a "clone" of an existing network interface. For your containers to communicate on the associated network, they will generally need to use the same IP address range and gateway as the other devices on that network.
For example, if you were to create a macvlan network associated with enp12s0 on your system, then you would need to use the 10.60.114.0/23 network range and whatever default gateway your system is using (you don't include this information in your question so I can't suggest a specific value).
That is (replacing the argument to --gateway with the correct value):
docker network create -d macvlan \
  --subnet=10.60.114.0/23 \
--gateway=10.60.114.1 \
-o parent=enp12s0 \
my-macvlan-net
This by itself might not work, because Docker would likely assign IP addresses to containers that are already in use elsewhere on the network. You can avoid this by assigning Docker a dedicated subset of addresses using the --ip-range option:
docker network create -d macvlan \
  --subnet=10.60.114.0/23 \
--gateway=10.60.114.1 \
--ip-range=10.60.115.0/28 \
-o parent=enp12s0 \
my-macvlan-net
This would restrict docker to addresses between 10.60.115.0 and 10.60.115.15. Whether or not this actually makes sense in your environment is something only you would know (possibly by asking your network administrator if you are not responsible for the network configuration).
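One pitfall worth checking before running the command: --ip-range must fall entirely inside --subnet, or docker network create will refuse it. A quick pure-shell sanity check (no Docker needed) that 10.60.115.0/28 lies inside 10.60.114.0/23:

```shell
# convert dotted-quad to a 32-bit integer
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

subnet=$(ip_to_int 10.60.114.0)
mask=$(( (0xFFFFFFFF << (32-23)) & 0xFFFFFFFF ))   # /23 netmask
range=$(ip_to_int 10.60.115.0)

if [ $(( range & mask )) -eq $(( subnet & mask )) ]; then
  echo "ip-range is inside subnet"
else
  echo "ip-range is OUTSIDE subnet"
fi
# → ip-range is inside subnet
```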
I am using Docker version 17.03 on CentOS 7.
Kernel version - 3.10.0-514.10.2.el7.x86_64
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 08:10:07 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.0-ce
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 08:10:07 2017
OS/Arch: linux/amd64
Experimental: false
I have node-0 and node-1 for Docker multi-host networking, and I am using Consul. On node-0 I created a Consul container using the command below:
docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
Then I created a drop-in file inside /etc/systemd/system/docker.service.d and added the lines below:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://<NODE-0-PRIVATE-IP>:8500/network --cluster-advertise=<NODE0-IP>:2375
Once this was done, I restarted the Docker daemon and created an overlay network using the command:
docker network create -d overlay --subnet=10.10.10.0/24 my-net
Then I created a container called container1 on node-0 and attached it to my-net.
On the node-1 machine, I created a drop-in file inside /etc/systemd/system/docker.service.d and added the lines below:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://<NODE-0-PRIVATE-IP>:8500/network --cluster-advertise=<NODE1-IP>:2375
and started a container called container2 attached to my-net.
My setup looks like:
node0 - consul, container1
node1 - container2
Inside container2, I try to ping container1 but get the response below:
PING container1 (10.10.10.3) 56(84) bytes of data.
From container2 (10.10.10.4) icmp_seq=1 Destination Host Unreachable
From container2 (10.10.10.4) icmp_seq=2 Destination Host Unreachable
From container2 (10.10.10.4) icmp_seq=3 Destination Host Unreachable
From container2 (10.10.10.4) icmp_seq=4 Destination Host Unreachable
From node0, ip a shows:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:9d:9c:9f brd ff:ff:ff:ff:ff:ff
inet <NODE0-PRIVATE-IP>/24 brd 192.168.5.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe9d:9c9f/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:57:6d:e8:a9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:57ff:fe6d:e8a9/64 scope link
valid_lft forever preferred_lft forever
4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:10:5b:7d:b5 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:fe5b:7db5/64 scope link
Inside container1, ip a shows:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:0a:0a:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.10.3/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe0a:a03/64 scope link
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.3/16 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe13:3/64 scope link
valid_lft forever preferred_lft forever
Do I need to change anything to get this to work? Thanks in advance.
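One thing worth ruling out first: overlay networking needs a few ports open between the nodes, and a blocked VXLAN port is a classic cause of "Destination Host Unreachable" while name resolution still works. A sketch for CentOS 7 with firewalld (an assumption; adapt to whatever firewall the nodes actually run):

```shell
# serf gossip used by the overlay driver for peer discovery
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
# VXLAN data plane carrying the container traffic
firewall-cmd --permanent --add-port=4789/udp
# consul KV store (node-0) and the docker API port configured above
firewall-cmd --permanent --add-port=8500/tcp --add-port=2375/tcp
firewall-cmd --reload
```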
I have Docker 18.09.1 on CentOS 7. I was not able to ping a remote host from the Docker host machine; when pinging, I was getting the IP of docker_gwbridge, a network configured by Docker.
I searched and found this issue: https://github.com/docker/for-mac/issues/2345. I ran the command below and was then able to ping the remote host:
docker network rm docker_gwbridge
Maybe also try leaving the swarm:
docker swarm leave -f