When I inspect the ingress network I get info about its containers:
$ docker inspect ingress
"Containers": {
"1234567890": {
"Name": "gateway.1.qwertyuiop",
"EndpointID": "12345678990",
"MacAddress": "00:11:22:33:44:55",
"IPv4Address": "10.255.0.11/16",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "1234567890",
"MacAddress": "00:11:22:33:44:55",
"IPv4Address": "10.255.0.3/16",
"IPv6Address": ""
}
},
I can then inspect the first container, but inspecting ingress-endpoint returns nothing:
$ docker inspect ingress-endpoint
[]
What I'm trying to find is the local IP for ingress-endpoint on the swarm node.
Not sure why you need that, but if you are looking for where incoming packets are being routed to, you may look at iptables.
iptables -n -t nat -L DOCKER-INGRESS
On my setup (where one of the services publishes port 80) I get the following output:
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.18.0.2:80
RETURN all -- 0.0.0.0/0 0.0.0.0/0
So 172.18.0.2 is the IP address of the ingress container.
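If you would rather read that address out of Docker than out of iptables, a rough sketch (relying on the docker_gwbridge network mentioned just below) is to inspect that bridge network; its container list includes the ingress-sbox endpoint together with its bridge-side address:
docker network inspect docker_gwbridge --format '{{json .Containers}}'
# the "ingress-sbox" entry carries the 172.18.x.x address that the DNAT rule above points at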
Notice that this ingress-sbox container is part of both networks that Swarm mode creates for you:
ingress
docker_gwbridge
This is all done with Linux network namespaces, as explained in this very good post, which I recommend reading: https://neuvector.com/network-security/docker-swarm-container-networking/
# cd /var/run/
# ln -s docker/netns .
# ip netns exec ingress_sbox ip addr
This gives (on my system):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.4/32 brd 10.255.0.4 scope global eth0
valid_lft forever preferred_lft forever
10: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
You can also run iptables commands from within those network namespaces to try and follow how packets get magically routed...
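For instance, a small sketch (the ingress_sbox name comes from the /var/run/docker/netns directory symlinked above and may differ across Docker versions):
ip netns exec ingress_sbox iptables -n -t nat -L
ip netns exec ingress_sbox iptables -n -t mangle -L   # the mangle table carries the marks used for ingress load balancing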
Since running docker inspect ingress-endpoint gives an output of [], it very likely means that endpoint is a stale container which is still hooked into the ingress network. You might want to disconnect it from the network, remove the ingress network, and then re-join the Swarm cluster, by doing:
docker network disconnect --force ingress ingress-endpoint
docker network rm ingress
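A rough sketch of the re-join step that follows (the token and manager address are placeholders); once the node is part of the swarm again, the ingress network is recreated automatically:
docker swarm leave --force
docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377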
I have a docker kylemanna/openvpn server to access a private network, with configured users. The VPN is working and internet access through NAT works. I want sites on the internal network to see the VPN client's IP instead of the VPN server's host address, 192.168.140.38. I've lost about 3 weeks on this and read a lot of documentation, but my experience is not enough. I tried using macvlan and tap (server-bridge and docker:host), but nothing worked. I'm terribly tired; any help is appreciated.
My system: Ubuntu 20
My server IP (openvpn): 192.168.140.38 (external ip 88.56..)
My gateway: 192.168.140.1
DHCP server range: 192.168.140.1/24
Command used to generate the configuration:
ovpn_genconfig -N -2 -e 'duplicate-cn' -n '192.168.140.1' -n '8.8.8.8' -n '8.8.4.4' -d \
  -C 'AES-256-GCM' -e 'tls-crypt-v2 /etc/openvpn/pki/private/vpn_server.pem' \
  -s '10.10.140.0/24' -u udp://77.66.19.237:587 -e 'topology subnet' \
  -p '192.168.140.0 255.255.255.0' -p '10.0.0.0 255.255.255.0' -p '192.168.2.0 255.255.255.0'
openvpn.conf
server 10.10.140.0 255.255.255.0
verb 3
key /etc/openvpn/pki/private/77.66.19.237.key
ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/77.66.19.237.crt
dh /etc/openvpn/pki/dh.pem
tls-auth /etc/openvpn/pki/ta.key
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto udp
# Rely on Docker to do port mapping, internally always 1194
port 1194
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
cipher AES-256-GCM
comp-lzo no
### Push Configurations Below
push "dhcp-option DNS 192.168.140.1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "comp-lzo no"
push "route 192.168.140.0 255.255.255.0"
push "route 10.0.0.0 255.255.255.0"
push "route 192.168.2.0 255.255.255.0"
reneg-sec 0
### Extra Configurations Below
duplicate-cn
topology subnet
docker-compose.yml
version: '3.8'
services:
  openvpn:
    container_name: openvpn
    build: # pinned to the latest version (2.5); I build the latest Docker image
      context: ./docker-openvpn
      dockerfile: Dockerfile
    restart: always
    ports:
      - "587:1194/udp"
    command: bash -c "ovpn_run"
    cap_add:
      - NET_ADMIN
    volumes:
      - ./openvpn-data/conf:/etc/openvpn
    networks:
      stack:
        ipv4_address: 192.168.140.153
networks:
  stack:
    external: true
This configuration works well; however, I would like my internal resources to be able to identify each VPN client by a unique IP address.
I tried to create a macvlan network:
# We take the entire network and carve out a DHCP range for Docker (part of the network is given to the macvlan "stack")
root@test-openvpn ~ # docker network create -d macvlan --subnet=192.168.140.0/24 --gateway=192.168.140.1 --ip-range=192.168.140.153/31 -o parent=ens160 stack
# Create macvlan-br0 on the host interface to redirect traffic between the host's ens160 and Docker
ip link add macvlan-br0 link ens160 type macvlan mode bridge
# Allocate an IP from the DHCP range for the bridge
ip addr add 192.168.140.152/32 dev macvlan-br0
ip link set macvlan-br0 up
# Associate the host network with the Docker macvlan range
ip route add 192.168.140.153/31 dev macvlan-br0
# Test the network
ssh root@192.168.140.152 # success, the bridge works
root@test-openvpn ~ # docker run --net=stack --rm busybox sh -c "ip ad sh && ping 192.168.140.152 -c 2 && ping google.com -c 2"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
125: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:8c:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/24 brd 192.168.140.255 scope global eth0
valid_lft forever preferred_lft forever
PING 192.168.140.152 (192.168.140.152): 56 data bytes
64 bytes from 192.168.140.152: seq=0 ttl=64 time=1.645 ms
64 bytes from 192.168.140.152: seq=1 ttl=64 time=0.097 ms
--- 192.168.140.152 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.871/1.645 ms
PING google.com (172.217.168.206): 56 data bytes
--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
On the host, ip addr shows:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:b2:39:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.38/24 brd 192.168.140.255 scope global dynamic ens160
valid_lft 4143sec preferred_lft 4143sec
inet6 fe80::250:56ff:feb2:39c4/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:90:fa:7a:a6 brd ff:ff:ff:ff:ff:ff
inet 172.30.0.1/24 brd 172.30.0.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:90ff:fefa:7aa6/64 scope link
valid_lft forever preferred_lft forever
99: veth3443190@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 7e:0a:b2:53:84:7c brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::7c0a:b2ff:fe53:847c/64 scope link
valid_lft forever preferred_lft forever
103: br-fe3e6d8d0ad4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b2:6e:7a:9f brd ff:ff:ff:ff:ff:ff
inet 172.30.3.1/24 brd 172.30.3.255 scope global br-fe3e6d8d0ad4
valid_lft forever preferred_lft forever
inet6 fe80::42:b2ff:fe6e:7a9f/64 scope link
valid_lft forever preferred_lft forever
106: macvlan-br0@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:c9:47:98:c7:6b brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/32 scope global macvlan-br0
valid_lft forever preferred_lft forever
inet6 fe80::14c9:47ff:fe98:c76b/64 scope link
valid_lft forever preferred_lft forever
The internet doesn't work from the container through the Docker macvlan network.
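A couple of hedged diagnostics that may narrow this down (interface and addresses as in the setup above): check whether the macvlan container can reach the LAN gateway at all, and whether its packets actually leave the host's uplink.
docker run --net=stack --rm busybox sh -c "ip route && ping -c 2 192.168.140.1"
tcpdump -ni ens160 icmp and host 8.8.8.8   # run on the host while the container pings 8.8.8.8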
I manually installed WebLogic and configured a domain inside a Docker container [overlay network - Docker Swarm].
[root@host ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e3f32e71cc16 bridge bridge local
26628a46774c docker_gwbridge bridge local
20c80427519f host host local
9ejgeett1y4y ingress overlay swarm
52d14f492cda none null local
f628wowngc6z myoverlay overlay swarm
After starting the Admin Server, I see that its port 7001 is bound only to the overlay interface and not to the ANY interface (0.0.0.0). So, even though I exposed port 7001 when creating the container, I am not able to access it from the external public network.
[root@host /]# netstat -anp | grep 7001
tcp 0 0 10.0.0.5:7001 0.0.0.0:* LISTEN 8168/java
[root@host /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
265: eth0@if266: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.5/24 scope global eth0
valid_lft forever preferred_lft forever
267: eth1@if268: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.6/16 scope global eth1
valid_lft forever preferred_lft forever
I also started sshd in the same container, but this was bound to the 0.0.0.0 ANY interface.
[root@host /]# netstat -anp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/sshd
I need the WebLogic admin server console to be accessible from the public network, not just from containers on the overlay network. So, how do I bind the WebLogic admin server port to all interfaces (0.0.0.0)?
The problem lies in the WebLogic configuration. If the Listen Address is left empty, WebLogic itself will bind the port to all interfaces. This solved my problem.
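A quick way to confirm it (a sketch, reusing the check from the question) after clearing the Listen Address and restarting the Admin Server:
netstat -anp | grep 7001
# expected now: tcp 0 0 0.0.0.0:7001 0.0.0.0:* LISTEN <pid>/java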
By default we have a bridge named docker0 on the host machine as one component of Docker networking.
When we run a Docker container, a veth pair (vethxxx) is created, with one end attached to docker0 and the other end inside the container, named eth0.
I'm trying to find the trace of those eth0 interfaces on the host machine.
I expected to find the corresponding network namespaces via:
ip netns show
But the list is empty. So how can I see the representation of a container's eth0 interface on the host machine?
Generally, each container has an isolated network namespace on the host, and the eth0 interface in a container is encapsulated in that network namespace (aka sandbox, in Docker terminology). So if you want to see eth0 from the host's side, you must enter its network namespace first.
But Docker containers' network namespaces live in a different directory from those created manually: they live in /var/run/docker/netns. So we need to symlink that directory to /var/run/netns.
ln -s /var/run/docker/netns /var/run/netns
ip netns list
ip netns exec xxxx ip addr show
Thus you can see the container-side end of each veth pair (its eth0) by entering the corresponding isolated network namespace on the host machine:
root@Light-G:/var/lib# ip netns exec 459c238c2a4f ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0a:0a:c7:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.199.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe0a:c702/64 scope link
valid_lft forever preferred_lft forever
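To work out which of those namespace files belongs to a particular container, a small sketch (SandboxKey is the path Docker records for the container's netns file, and iflink holds the peer index of its veth pair):
docker inspect --format '{{.NetworkSettings.SandboxKey}}' <CONTAINER>   # e.g. /var/run/docker/netns/459c238c2a4f
docker exec <CONTAINER> cat /sys/class/net/eth0/iflink                  # peer ifindex; matches a vethXXX@ifN entry in the host's "ip link"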
I have a bunch of docker containers running on the default bridge network, that need to communicate with each other.
I want to move some of the containers to a separate user defined network so I can specify their IP addresses.
Is there any way to do this without having to take down/replicate all the containers and move them to the other network, or is this the only way?
It's possible to create networks and connect containers while they are live. You may still need to stop/start processes if the process is listening on a specific IP address rather than all interfaces (* or ::).
Create a network
docker network create \
--driver=bridge \
--subnet=192.168.38.0/24 \
--gateway=192.168.38.1 \
<NETWORK>
Connect a container
docker network connect \
--ip 192.168.38.14 \
<NETWORK> \
<CONTAINER>
Disconnect from original network
docker network disconnect <OLDNETWORK> <CONTAINER>
Example
Before, the container's eth0 is on the default bridge network:
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Afterwards, eth1 has been added and eth0 is gone:
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:26:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.38.14/24 brd 192.168.38.255 scope global eth1
valid_lft forever preferred_lft forever
You should also think about using Docker Compose. It creates a network automatically, with its own DNS, allowing the containers to reach each other by name.
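A minimal compose sketch of that idea (the service name, subnet and address are illustrative, not taken from the question):
version: "3.8"
services:
  app:
    image: nginx:alpine
    networks:
      appnet:
        ipv4_address: 192.168.38.14
networks:
  appnet:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.38.0/24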
Problem: the Internet isn't accessible within a docker container.
On my bare-metal Ubuntu 17.10 box...
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=10.8 ms
but...
$ docker run --rm debian:latest ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from 7911d89db6a4 (192.168.220.2): Destination Host Unreachable
I think the root cause is that I had to set up a non-default network for docker0 because the default one 172.17.0.1 was already in use within my organization.
My /etc/docker/daemon.json file needs to look like this in order for docker to start successfully.
$ cat /etc/docker/daemon.json
{
"bip": "192.168.220.1/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.10",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
Note that the default-gateway setting looks wrong. However, if I correct it to read 192.168.220.1 the docker service fails to start. Running dockerd at the command line directly produces the most helpful logging, thus:
With "default-gateway": 192.168.220.1 in daemon.json...
$ sudo dockerd
-----8<-----
many lines removed
----->8-----
Error starting daemon: Error initializing network controller: Error creating default "bridge" network: failed to allocate secondary ip address (DefaultGatewayIPv4:192.168.220.1): Address already in use
Here's the info for docker0...
$ ip addr show docker0
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:10:bc:66:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.220.1/24 brd 192.168.220.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:febc:66fd/64 scope link
valid_lft forever preferred_lft forever
And routing table...
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.62.131.1 0.0.0.0 UG 100 0 0 enp14s0
10.62.131.0 0.0.0.0 255.255.255.0 U 100 0 0 enp14s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp14s0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Is this the root cause? How do I achieve the seemingly mutually exclusive states of:
docker0 interface address is x.x.x.1
gateway address is same, x.x.x.1
dockerd runs ok
?
Thanks!
Longer answer to Wedge Martin's question. I made the changes to daemon.json as you suggested:
{
"bip": "192.168.220.2/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.1",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
so at least the daemon starts, but I still don't have internet access within a container...
$ docker run -it --rm debian:latest bash
root@bd9082bf70a0:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:dc:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.220.3/24 brd 192.168.220.255 scope global eth0
valid_lft forever preferred_lft forever
root@bd9082bf70a0:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from bd9082bf70a0 (192.168.220.3): Destination Host Unreachable
It turned out that less is more. Simplifying daemon.json to the following resolved my issues.
{
"bip": "192.168.220.2/24"
}
If you don't set the gateway, Docker will set it to the first non-network address in the subnet, i.e. the .1, but if you do set it, Docker will conflict when allocating the bridge because the .1 address is already in use. You should only set default-gateway if it is outside of the network range.
The bip setting tells Docker to use a different address than the .1 for the bridge itself, so setting bip can avoid the conflict, but I am not sure it will end up doing what you want. It will probably cause routing issues, as the non-network route will go to an address where no host responds.
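A quick sanity check after editing daemon.json (a sketch; addresses as above):
sudo systemctl restart docker
ip addr show docker0                                 # the bridge should now carry the bip address, 192.168.220.2/24
docker run --rm debian:latest ping -c 2 8.8.8.8      # connectivity check from a fresh container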