I'm using MicroK8s installed on my Ubuntu server, and I'm trying to ping outside from my pod.
I have Docker installed on the same machine; when I run a container with Docker, I can ping outside:
~$ sudo ip addr show docker0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:a7:9f:15:48 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:a7ff:fe8f:1548/64 scope link
valid_lft forever preferred_lft forever
In the container:
~$ sudo docker run --rm -it ubuntu:trusty bash
root@dd0af86b1209:/# ip addr show eth0
158: eth0@if159: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
root@dd0af86b1209:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
158: eth0@if159: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
root@dd0af86b1209:/# ping google.com
PING google.com (142.250.179.110) 56(84) bytes of data.
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=1 ttl=108 time=3.71 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=2 ttl=108 time=3.70 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=3 ttl=108 time=3.74 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=4 ttl=108 time=3.75 ms
64 bytes from par21s20-in-f14.1e100.net (142.250.179.110): icmp_seq=5 ttl=108 time=3.76 ms
But in my pod on MicroK8s, I can't ping outside:
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if146: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1440 qdisc noqueue state UP
link/ether ba:03:bd:4b:66:97 brd ff:ff:ff:ff:ff:ff
inet 172.17.159.19/32 brd 172.17.159.19 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::b803:bdff:fe44:6697/64 scope link
valid_lft forever preferred_lft forever
/ # ping google.com
ping: bad address 'google.com'
ufw status:
Anywhere (v6) on cali+ ALLOW Anywhere (v6)
Anywhere (v6) on cni0 ALLOW Anywhere (v6)
Anywhere (v6) on cbr0 ALLOW Anywhere (v6)
Anywhere (v6) on eth0 ALLOW Anywhere (v6)
EDIT:
I tried pinging IP addresses directly and that worked, so the problem is with hostname resolution.
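To see where resolution breaks, a quick check (a sketch; it assumes a shell in the pod with the usual busybox tools) is to query the cluster DNS and an upstream server side by side:

/ # nslookup google.com            # uses /etc/resolv.conf, i.e. the cluster DNS
/ # nslookup google.com 8.8.8.8    # queries the upstream server directly
/ # cat /etc/resolv.conf           # nameserver should be the kube-dns service IP

If the direct query to 8.8.8.8 works but the first one fails, the problem sits at CoreDNS or on the path to it; if both fail, port-53 egress from the pod is being blocked.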
This is my CoreDNS ConfigMap:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        log . {
            class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n log . {\n class error\n }\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 8.8.8.8 8.8.4.4 \n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kube-dns"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2022-06-19T17:07:02Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "7503127"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 0735a387-6970-43ab-8490-cdf49a23f936
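Since the Corefile forwards everything to 8.8.8.8, pod DNS queries leave the node as forwarded traffic, and ufw can drop forwarded packets even when per-interface allow rules like the ones above exist. If ufw turns out to be the culprit, the MicroK8s troubleshooting docs suggest commands along these lines (verify the interface names against your setup):

sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed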
Thanks in advance for your answers
Related
I'm running a k3s cluster and a Docker Traefik container on the same host. The Traefik container does the reverse-proxy and TLS work, which already works on ports 80 and 443 for my different subdomains. I'm trying to get SSH working too (for only one subdomain), but without success so far.
Port 22 is open through ufw allow (on Ubuntu 22.04).
Traefik rules are set as follows:
tcp:
  routers:
    giti-ssh:
      entrypoints:
        - "https"
      rule: "HostSNI(`*`)"
      tls: {}
      service: giti-ssh
  services:
    giti-ssh:
      loadBalancer:
        servers:
          - address: "10.42.0.232:22"
k3s is running flannel and MetalLB, with the external IP range at 10.42.0.0.
ip a shows (the interesting parts):
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:19:ea:c3 brd ff:ff:ff:ff:ff:ff
altname enp11s0
inet "private"/32 metric 100 scope global dynamic ens192
valid_lft 36147sec preferred_lft 36147sec
inet 10.42.0.200/32 scope global ens192
valid_lft forever preferred_lft forever
inet6 "private"/64 scope link
valid_lft forever preferred_lft forever
3: br-5014eb2ffdf2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:7e:ab:72:98 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-5014eb2ffdf2
valid_lft forever preferred_lft forever
inet6 fe80::42:7eff:feab:7298/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:a5:03:77:2c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 42:1b:d3:49:d3:6b brd ff:ff:ff:ff:ff:ff
inet 10.42.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::401b:d3ff:fe49:d36b/64 scope link
valid_lft forever preferred_lft forever
8: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether e2:27:27:96:96:7e brd ff:ff:ff:ff:ff:ff
inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::e027:27ff:fe96:967e/64 scope link
valid_lft forever preferred_lft forever
The containers are set up, and the SSH container's service is listening on port 22 as type: LoadBalancer.
I can connect to that container through another service and IP on port 443 via the Traefik reverse proxy, but I'm missing something for port 22. I think it has something to do with the Traefik HostSNI rule, or maybe iptables.
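One thing to note: HostSNI matching only works for TLS traffic, since the SNI name only exists in a TLS handshake, and plain SSH never sends one. A sketch of what a non-TLS TCP route might look like instead (the dedicated ssh entrypoint is an assumption; the host's own sshd must not already occupy whatever port Traefik binds):

# static configuration: a dedicated TCP entrypoint for SSH
entryPoints:
  ssh:
    address: ":22"

# dynamic configuration: catch-all TCP router on that entrypoint, no tls block
tcp:
  routers:
    giti-ssh:
      entrypoints:
        - "ssh"
      rule: "HostSNI(`*`)"
      service: giti-ssh
  services:
    giti-ssh:
      loadBalancer:
        servers:
          - address: "10.42.0.232:22"

Without TLS, HostSNI(`*`) is the only rule Traefik accepts for a TCP router, so per-subdomain routing of raw SSH isn't possible; the router simply forwards all TCP arriving on that entrypoint.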
Can someone give me a hint on how to achieve this?
Thanks in advance!
jim
I have a Docker kylemanna/openvpn server to access a private network, with configured users. The VPN is working, and internet access through NAT works. I want internal sites to see the VPN client's IP instead of the VPN server's host address, 192.168.140.38. I've spent about three weeks on this and read a lot of documentation, but my experience isn't enough. I tried macvlan and tap (server-bridge and docker:host), but nothing worked. I'm terribly tired; any help is appreciated.
My system: ubuntu 20
My server IP (openvpn): 192.168.140.38 (external ip 88.56..)
My gateway: 192.168.140.1
DHCP server range: 192.168.140.1/24
The configuration-generation command:
ovpn_genconfig -N -2 -e 'duplicate-cn' -n '192.168.140.1' -n '8.8.8.8' -n '8.8.4.4' -d \
  -C 'AES-256-GCM' -e 'tls-crypt-v2 /etc/openvpn/pki/private/vpn_server.pem' \
  -s '10.10.140.0/24' -u udp://77.66.19.237:587 -e 'topology subnet' \
  -p '192.168.140.0 255.255.255.0' -p '10.0.0.0 255.255.255.0' -p '192.168.2.0 255.255.255.0'
openvpn.conf
server 10.10.140.0 255.255.255.0
verb 3
key /etc/openvpn/pki/private/77.66.19.237.key
ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/77.66.19.237.crt
dh /etc/openvpn/pki/dh.pem
tls-auth /etc/openvpn/pki/ta.key
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto udp
# Rely on Docker to do port mapping, internally always 1194
port 1194
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
cipher AES-256-GCM
comp-lzo no
### Push Configurations Below
push "dhcp-option DNS 192.168.140.1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "comp-lzo no"
push "route 192.168.140.0 255.255.255.0"
push "route 10.0.0.0 255.255.255.0"
push "route 192.168.2.0 255.255.255.0"
reneg-sec 0
### Extra Configurations Below
duplicate-cn
topology subnet
docker-compose.yml
version: '3.8'
services:
  openvpn:
    container_name: openvpn
    build: # pin the last version (2.5); I build the latest Docker image
      context: ./docker-openvpn
      dockerfile: Dockerfile
    restart: always
    ports:
      - "587:1194/udp"
    command: bash -c "ovpn_run"
    cap_add:
      - NET_ADMIN
    volumes:
      - ./openvpn-data/conf:/etc/openvpn
    networks:
      stack:
        ipv4_address: 192.168.140.153
networks:
  stack:
    external: true
This configuration works well; however, I would like my internal resources to be able to identify me by a unique IP address.
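If a unique address per user is the goal, one OpenVPN-native approach (a sketch; the ccd path and client name are illustrative) is a client-config-dir that pins each client to a fixed tunnel IP. Note it matches clients by certificate CN, so it conflicts with the duplicate-cn option used above:

# in openvpn.conf: enable per-client configuration files
client-config-dir /etc/openvpn/ccd

# /etc/openvpn/ccd/<client-CN>: pin this client to a fixed tunnel IP
ifconfig-push 10.10.140.50 255.255.255.0

For internal resources to actually see that 10.10.140.x address, traffic from the tunnel subnet must not be masqueraded to 192.168.140.38, and the LAN gateway (192.168.140.1) needs a route for 10.10.140.0/24 pointing back at the VPN host.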
I tried a macvlan network:
# we take the whole LAN and set aside an IP range for Docker (part of the network is carved out for the macvlan "stack")
root@test-openvpn ~ # docker network create -d macvlan --subnet=192.168.140.0/24 --gateway=192.168.140.1 --ip-range=192.168.140.153/31 -o parent=ens160 stack
# create macvlan-br0 on the host interface to relay traffic between the host NIC and Docker
ip link add macvlan-br0 link ens160 type macvlan mode bridge
# assign an IP from the DHCP range to the bridge
ip addr add 192.168.140.152/32 dev macvlan-br0
ip link set macvlan-br0 up
# route the Docker macvlan range via the bridge
ip route add 192.168.140.153/31 dev macvlan-br0
# test the network
ssh root@192.168.140.152  # success, the bridge works
root@test-openvpn ~ # docker run --net=stack --rm busybox sh -c "ip ad sh && ping 192.168.140.152 -c 2 && ping google.com -c 2"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
125: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:8c:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/24 brd 192.168.140.255 scope global eth0
valid_lft forever preferred_lft forever
PING 192.168.140.152 (192.168.140.152): 56 data bytes
64 bytes from 192.168.140.152: seq=0 ttl=64 time=1.645 ms
64 bytes from 192.168.140.152: seq=1 ttl=64 time=0.097 ms
--- 192.168.140.152 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.871/1.645 ms
PING google.com (172.217.168.206): 56 data bytes
--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
On the host, ip a shows:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:b2:39:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.38/24 brd 192.168.140.255 scope global dynamic ens160
valid_lft 4143sec preferred_lft 4143sec
inet6 fe80::250:56ff:feb2:39c4/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:90:fa:7a:a6 brd ff:ff:ff:ff:ff:ff
inet 172.30.0.1/24 brd 172.30.0.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:90ff:fefa:7aa6/64 scope link
valid_lft forever preferred_lft forever
99: veth3443190@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 7e:0a:b2:53:84:7c brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::7c0a:b2ff:fe53:847c/64 scope link
valid_lft forever preferred_lft forever
103: br-fe3e6d8d0ad4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b2:6e:7a:9f brd ff:ff:ff:ff:ff:ff
inet 172.30.3.1/24 brd 172.30.3.255 scope global br-fe3e6d8d0ad4
valid_lft forever preferred_lft forever
inet6 fe80::42:b2ff:fe6e:7a9f/64 scope link
valid_lft forever preferred_lft forever
106: macvlan-br0@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:c9:47:98:c7:6b brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/32 scope global macvlan-br0
valid_lft forever preferred_lft forever
inet6 fe80::14c9:47ff:fe98:c76b/64 scope link
valid_lft forever preferred_lft forever
The internet doesn't work from the container through the Docker macvlan network.
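For what it's worth, in the output above google.com did resolve (to 172.217.168.206) but all packets were lost, which points at routing rather than DNS. A sketch of narrowing it down, using the stack network created earlier:

docker run --net=stack --rm busybox sh -c "ip route && ping -c 2 192.168.140.1"
# the default route should read "default via 192.168.140.1 dev eth0";
# if the gateway itself doesn't answer, the problem is between the macvlan
# segment and the router, not inside Docker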
I was following a Kubernetes tutorial for beginners (TechWorld with Nana) on a Win10 machine running Docker. As I ran into trouble, I migrated to this config:
wsl -l -v
NAME STATE VERSION
* Ubuntu Running 2
I installed Docker and started it with $ sudo service docker start
Then I started minikube: $ minikube start --driver=docker --kubernetes-version=v1.18.0
(not the latest version, because of some problems between systemd and systemctl)
Everything was OK; I created a mongodb pod and a mongo-express pod with the ad hoc services:
plaurent$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-express-864c95f479-8gfxf 1/1 Running 2 23h
mongodb-deployment-58977cc4f5-k4r4h 1/1 Running 1 23h
plaurent$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
mongo-express-service LoadBalancer 10.98.7.33 <pending> 8081:30000/TCP 23h
mongodb-service ClusterIP 10.101.132.245 <none> 27017/TCP 23h
Following the tutorial, I run:
/plaurent$ minikube service mongo-express-service
🏃 Starting tunnel for service mongo-express-service.
🎉 Opening service default/mongo-express-service in default browser...
👉 http://192.168.49.2:30000
❗ Because you are using a Docker driver on linux, the terminal needs to be open to run it.
On a second WSL terminal, I can reach this service with the following, and it works:
plaurent$ curl http://192.168.49.2:30000
But I cannot do the same thing from Win10; even a ping fails.
I ran ip addr and got the following:
/plaurent$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 86:5b:79:bf:27:05 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f2:bd:6f:41:f3:2d brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:b5:ae:43 brd ff:ff:ff:ff:ff:ff
inet 172.20.254.215/20 brd 172.20.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:feb5:ae43/64 scope link
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:eb:30:05:9a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
8: br-ecf9b5a8d792: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0f:31:2f:71 brd ff:ff:ff:ff:ff:ff
**inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ecf9b5a8d792**
valid_lft forever preferred_lft forever
inet6 fe80::42:fff:fe31:2f71/64 scope link
valid_lft forever preferred_lft forever
10: vethe8c97a5@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ecf9b5a8d792 state UP group default
link/ether ee:d2:2d:f8:5b:4d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecd2:2dff:fef8:5b4d/64 scope link
valid_lft forever preferred_lft forever
I can see inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ecf9b5a8d792, which is close to the service's IP, but I don't know what it means or whether it can help solve the problem.
I'm not comfortable with networks; any help is welcome.
You can port-forward the services in the cluster: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
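For example (a sketch based on the service listed above), forwarding the mongo-express service and binding to all addresses so Windows can reach it through the WSL 2 IP:

kubectl port-forward --address 0.0.0.0 service/mongo-express-service 8081:8081
# then from Windows: http://<WSL-IP>:8081 (the eth0 address above, 172.20.254.215)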
So here is my problem:
I have a server with Debian 10 that runs Docker.
In a Docker container I run Pi-hole.
When I run the Pi-hole container, Docker sets its IP to 172.17.0.2.
Docker itself creates a network interface called docker0, whose IP is 172.17.0.1.
The problem: from outside the server, when I ping the Docker interface 172.17.0.1 it's fine, but when I ping the Docker container 172.17.0.2 it's not reachable.
Here is the ip a command output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ac:16:2d:12:30:71 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.247/24 brd 10.42.0.255 scope global dynamic eno1
valid_lft 3152sec preferred_lft 3152sec
inet6 fe80::ae16:2dff:fe12:3071/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d0:37:45:80:81:0f brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:55:80:15:34 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:55ff:fe80:1534/64 scope link
valid_lft forever preferred_lft forever
25: vethedcefcc@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e2:02:56:8f:9b:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e002:56ff:fe8f:9b22/64 scope link
valid_lft forever preferred_lft forever
What do I need to do? What do I have to configure?
Thanks,
~James Phoenix
You can't access the container IP directly from outside the host.
If you want to access a service from outside, you need to forward (publish) its ports.
Example:
docker host IP → 192.168.0.111
container IP → 172.17.0.111
Run an nginx container and publish port 8080 to connect from outside:
docker run --name some-nginx -d -p 8080:80 some-content-nginx
Here 8080 is the external port (accessible from outside),
and 80 is the internal port (accessible from containers on the same network).
Access to nginx:
curl http://localhost:8080
# or
curl http://192.168.0.111:8080
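Applied to the Pi-hole question above, a minimal sketch (ports as documented for the pihole/pihole image; adjust to your setup) publishes the DNS and web-UI ports so clients use the host's address instead of the container IP:

docker run -d --name pihole \
  -p 53:53/tcp \
  -p 53:53/udp \
  -p 80:80 \
  pihole/pihole
# clients then point their DNS at 10.42.0.247 (the host's eno1 address),
# not at 172.17.0.2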
I just don't have enough networking knowledge to understand this.
On my laptop, I'm running both Docker and multiple vagrant VMs.
I want to connect to one of the vagrant VMs from within a Docker container, but ping keeps hanging or spitting out "Destination Host Unreachable". I can ping the vagrant VMs just fine from the host (i.e., outside the container).
Could you point me in the right direction to fix this? I basically want to install nginx on the vagrant VMs but have some load balancers in Docker.
This means that docker containers need to be able to "see" the vagrant VMs.
Do I need a route table entry? Do I need a special network adapter? Do I need to create a bridge? I just don't know enough and would appreciate a nudge in the right direction.
Here are details from the container:
root@d755dbb8bbc9:/# ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 proto kernel scope link src 10.0.1.6
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.4
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.0.5
root@d755dbb8bbc9:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.255.0.30/32 brd 10.255.0.30 scope global lo
valid_lft forever preferred_lft forever
inet 10.0.1.41/32 brd 10.0.1.41 scope global lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.4/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
35: eth1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.5/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
39: eth2@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:01:06 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.0.1.6/24 brd 10.0.1.255 scope global eth2
valid_lft forever preferred_lft forever
And here is some output from one of the vagrant VMs:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:cf:1a:c3 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 67730sec preferred_lft 67730sec
inet6 fe80::a00:27ff:fecf:1ac3/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:ca:c7:a1 brd ff:ff:ff:ff:ff:ff
inet 172.17.8.101/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feca:c7a1/64 scope link
valid_lft forever preferred_lft forever
core#core-01 ~ $ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 1024
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 1024
172.17.0.0/16 dev eth1 proto kernel scope link src 172.17.8.101
When I ping 172.17.8.101 (the IP of the vagrant VM I want to reach) from the Docker container, it just hangs. How can I get access to one of the VMs from one of the Docker containers?
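One detail stands out when comparing the two dumps: the VM's eth1 (172.17.8.101) sits in 172.17.0.0/16, which is also the default subnet of Docker's docker0 bridge, so on the host, packets destined for the VM can be claimed by the docker0 route instead of the Vagrant host-only adapter. A sketch of one possible fix, moving Docker's default bridge to a non-overlapping range (172.26.0.1/16 is an arbitrary pick):

# /etc/docker/daemon.json
{
  "bip": "172.26.0.1/16"
}

# then restart the daemon so docker0 is re-created on the new subnet
sudo systemctl restart docker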