Cannot access Internet inside docker container when docker0 has non-default address

Problem: the Internet isn't accessible within a docker container.
On my bare metal Ubuntu 17.10 box...
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=10.8 ms
but...
$ docker run --rm debian:latest ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from 7911d89db6a4 (192.168.220.2): Destination Host Unreachable
I think the root cause is that I had to set up a non-default network for docker0 because the default one 172.17.0.1 was already in use within my organization.
My /etc/docker/daemon.json file needs to look like this in order for docker to start successfully.
$ cat /etc/docker/daemon.json
{
"bip": "192.168.220.1/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.10",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
Note that the default-gateway setting looks wrong. However, if I correct it to read 192.168.220.1 the docker service fails to start. Running dockerd at the command line directly produces the most helpful logging, thus:
With "default-gateway": 192.168.220.1 in daemon.json...
$ sudo dockerd
-----8<-----
many lines removed
----->8-----
Error starting daemon: Error initializing network controller: Error creating default "bridge" network: failed to allocate secondary ip address (DefaultGatewayIPv4:192.168.220.1): Address already in use
Here's the info for docker0...
$ ip addr show docker0
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:10:bc:66:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.220.1/24 brd 192.168.220.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:febc:66fd/64 scope link
valid_lft forever preferred_lft forever
And routing table...
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.62.131.1 0.0.0.0 UG 100 0 0 enp14s0
10.62.131.0 0.0.0.0 255.255.255.0 U 100 0 0 enp14s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp14s0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Is this the root cause? How do I achieve the following, seemingly mutually exclusive, states?
docker0 interface address is x.x.x.1
gateway address is the same x.x.x.1
dockerd runs OK
Thanks!
Longer answer to Wedge Martin's question. I made the changes to daemon.json as you suggested:
{
"bip": "192.168.220.2/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.1",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
so at least the daemon starts, but I still don't have internet access within a container...
$ docker run -it --rm debian:latest bash
root@bd9082bf70a0:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:dc:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.220.3/24 brd 192.168.220.255 scope global eth0
valid_lft forever preferred_lft forever
root@bd9082bf70a0:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from bd9082bf70a0 (192.168.220.3): Destination Host Unreachable

It turned out that less is more. Simplifying daemon.json to the following resolved my issues.
{
"bip": "192.168.220.2/24"
}
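To confirm the fix, the same ping test from the top of the question can be rerun after restarting the docker service so the new daemon.json is picked up:
$ sudo systemctl restart docker
$ docker run --rm debian:latest ping -c 3 8.8.8.8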

If you don't set the gateway, docker will set it to the first non-network address in the network (the .1), but if you set it explicitly, docker will conflict when allocating the bridge because the .1 address is already in use. You should only set default-gateway if it's outside of the network range.
Now, bip can tell docker to use an address other than the .1, so setting bip can avoid the conflict, but I am not sure it will end up doing what you want. It will probably cause routing issues, since non-local traffic will be sent to a gateway address that no host is answering.
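For reference, a quick way to check which subnet and gateway dockerd actually ended up using for the default bridge, whichever way bip is set (standard docker and iproute2 commands only):
$ docker network inspect bridge --format '{{json .IPAM.Config}}'
$ ip addr show docker0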

Related

identifying openvpn clients on sites on the internal network

I have a docker kylemanna/openvpn server to access a private network, with configured users. The VPN is working, and Internet access through NAT works. I want internal network sites to see the VPN client's IP instead of the VPN server host address 192.168.140.38. I have lost about three weeks on this and read a lot of documentation, but my experience isn't enough. I tried macvlan and tap (server-bridge and docker:host), but nothing worked. I'm terribly tired; any help is appreciated.
My system: ubuntu 20
My server IP (openvpn): 192.168.140.38 (external ip 88.56..)
My gateway: 192.168.140.1
DHCP server range: 192.168.140.1/24
Configuration generation command:
ovpn_genconfig -N -2 -e 'duplicate-cn' -n '192.168.140.1' -n '8.8.8.8' -n '8.8.4.4' -d \
  -C 'AES-256-GCM' -e 'tls-crypt-v2 /etc/openvpn/pki/private/vpn_server.pem' \
  -s '10.10.140.0/24' -u udp://77.66.19.237:587 -e 'topology subnet' \
  -p '192.168.140.0 255.255.255.0' -p '10.0.0.0 255.255.255.0' -p '192.168.2.0 255.255.255.0'
openvpn.conf
server 10.10.140.0 255.255.255.0
verb 3
key /etc/openvpn/pki/private/77.66.19.237.key
ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/77.66.19.237.crt
dh /etc/openvpn/pki/dh.pem
tls-auth /etc/openvpn/pki/ta.key
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto udp
# Rely on Docker to do port mapping, internally always 1194
port 1194
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
cipher AES-256-GCM
comp-lzo no
### Push Configurations Below
push "dhcp-option DNS 192.168.140.1"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "comp-lzo no"
push "route 192.168.140.0 255.255.255.0"
push "route 10.0.0.0 255.255.255.0"
push "route 192.168.2.0 255.255.255.0"
reneg-sec 0
### Extra Configurations Below
duplicate-cn
topology subnet
docker-compose.yml
version: '3.8'
services:
  openvpn:
    container_name: openvpn
    build: # fixed to the latest version (2.5); I get the latest build of the docker image
      context: ./docker-openvpn
      dockerfile: Dockerfile
    restart: always
    ports:
      - "587:1194/udp"
    command: bash -c "ovpn_run"
    cap_add:
      - NET_ADMIN
    volumes:
      - ./openvpn-data/conf:/etc/openvpn
    networks:
      stack:
        ipv4_address: 192.168.140.153
networks:
  stack:
    external: true
This configuration works well; however, I would like my internal resources to be able to identify me by a unique IP address.
I tried to create a macvlan network:
# take the entire network and set the range handed out to containers (part of the network is carved out for the macvlan stack)
root@test-openvpn ~ # docker network create -d macvlan --subnet=192.168.140.0/24 --gateway=192.168.140.1 --ip-range=192.168.140.153/31 -o parent=ens160 stack
# create macvlan-br0 on the host interface to redirect traffic from the host NIC to docker
ip link add macvlan-br0 link ens160 type macvlan mode bridge
# allocate an ip from the dhcp range for the bridge
ip addr add 192.168.140.152/32 dev macvlan-br0
ip link set macvlan-br0 up
# associate the host network with the docker ip range
ip route add 192.168.140.153/31 dev macvlan-br0
# test network
ssh root@192.168.140.152 # success: the bridge is reachable
root@test-openvpn ~ # docker run --net=stack --rm busybox sh -c "ip ad sh && ping 192.168.140.152 -c 2 && ping google.com -c 2"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
125: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:8c:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/24 brd 192.168.140.255 scope global eth0
valid_lft forever preferred_lft forever
PING 192.168.140.152 (192.168.140.152): 56 data bytes
64 bytes from 192.168.140.152: seq=0 ttl=64 time=1.645 ms
64 bytes from 192.168.140.152: seq=1 ttl=64 time=0.097 ms
--- 192.168.140.152 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.871/1.645 ms
PING google.com (172.217.168.206): 56 data bytes
--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:b2:39:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.140.38/24 brd 192.168.140.255 scope global dynamic ens160
valid_lft 4143sec preferred_lft 4143sec
inet6 fe80::250:56ff:feb2:39c4/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:90:fa:7a:a6 brd ff:ff:ff:ff:ff:ff
inet 172.30.0.1/24 brd 172.30.0.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:90ff:fefa:7aa6/64 scope link
valid_lft forever preferred_lft forever
99: veth3443190@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 7e:0a:b2:53:84:7c brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::7c0a:b2ff:fe53:847c/64 scope link
valid_lft forever preferred_lft forever
103: br-fe3e6d8d0ad4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b2:6e:7a:9f brd ff:ff:ff:ff:ff:ff
inet 172.30.3.1/24 brd 172.30.3.255 scope global br-fe3e6d8d0ad4
valid_lft forever preferred_lft forever
inet6 fe80::42:b2ff:fe6e:7a9f/64 scope link
valid_lft forever preferred_lft forever
106: macvlan-br0@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:c9:47:98:c7:6b brd ff:ff:ff:ff:ff:ff
inet 192.168.140.152/32 scope global macvlan-br0
valid_lft forever preferred_lft forever
inet6 fe80::14c9:47ff:fe98:c76b/64 scope link
valid_lft forever preferred_lft forever
The Internet doesn't work in the container through the docker macvlan network.

Docker multiple networks, unable to connect to the outside world

When deploying docker-compose with multiple networks, only the first interface has access to the outside world:
version: "3.9"
services:
speedtest:
build:
context: .
dockerfile: speedtest.Dockerfile
tty: true
networks:
- eth0
- eth1
networks:
eth0:
eth1:
Running ping inside the container, for example ping -I eth0 google.com, works fine.
However, running ping -I eth1 google.com gives this result:
PING google.com (142.250.200.238) from 172.21.0.2 eth1: 56(84) bytes of data.
From c4d3b238f9a1 (172.21.0.2) icmp_seq=1 Destination Host Unreachable
From c4d3b238f9a1 (172.21.0.2) icmp_seq=2 Destination Host Unreachable
Any idea how to have egress to the internet on both networks?
I tried multiple combinations for creating the networks: external, bridge with custom config, etc.
Update
After larsks' answer, using ip route add for eth1 and running tcpdump -i any, the packets are coming in correctly:
11:26:12.098918 eth1 Out IP 8077ec32b69d > dns.google: ICMP echo request, id 3, seq 1, length 64
11:26:12.184195 eth1 In IP dns.google > 8077ec32b69d: ICMP echo reply, id 3, seq 1, length 64
But still 100% packet loss...
The problem here is that while there are two interfaces inside the container, there is only a single default route. Given a container with two interfaces, like this:
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
70: eth0@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:10:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.16.2/20 brd 192.168.31.255 scope global eth0
valid_lft forever preferred_lft forever
72: eth1@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:30:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.48.2/20 brd 192.168.63.255 scope global eth1
valid_lft forever preferred_lft forever
The routing table looks like this:
/ # ip route
default via 192.168.16.1 dev eth0
192.168.16.0/20 dev eth0 proto kernel scope link src 192.168.16.2
192.168.48.0/20 dev eth1 proto kernel scope link src 192.168.48.2
When you run ping google.com or ping -I eth0 google.com, in both
cases your ICMP request egresses through eth0, goes to the
appropriate default gateway, and eventually works it way to
google.com.
But when you run ping -I eth1 google.com, there's no way to reach
the default gateway from that address; the gateway is only reachable
via eth0. Since the kernel can't find a useful route, it attempts to
connect directly. If we run tcpdump on the host interface that is
the other end of eth1, we see:
23:47:58.035853 ARP, Request who-has 142.251.35.174 tell 192.168.48.2, length 28
23:47:59.083553 ARP, Request who-has 142.251.35.174 tell 192.168.48.2, length 28
[...]
That's the kernel saying, "I've been told to connect to this address
using this specific interface, but there's no route, so I'm going to
assume the address is on the same network and just ARP for it".
Of course that fails.
We can make this work by adding an appropriate route. You need to run
a privileged container to do this (or at least have
CAP_NET_ADMIN):
ip route add default via 192.168.48.1 metric 101
(The gateway address is the .1 address of the network associated with eth1.)
We need the metric setting to differentiate this from the existing
default route; without that the command would fail with RTNETLINK answers: File exists.
After running that command, we have:
/ # ip route
default via 192.168.16.1 dev eth0
default via 192.168.48.1 dev eth1 metric 101
192.168.16.0/20 dev eth0 proto kernel scope link src 192.168.16.2
192.168.48.0/20 dev eth1 proto kernel scope link src 192.168.48.2
And we can successfully ping google.com via eth1:
/ # ping -c2 -I eth1 google.com
PING google.com (142.251.35.174) from 192.168.48.2 eth1: 56(84) bytes of data.
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=1 ttl=116 time=8.87 ms
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=2 ttl=116 time=8.13 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 8.127/8.497/8.868/0.370 ms
Having gone through all that, I'll add that I don't see many
situations in which it would be necessary: typically you use
additional networks in order to isolate things like database servers,
etc, while using the "primary" interface (the one with which the
default route is associated) for outbound requests.
I tested all this using the following docker-compose.yaml:
version: "3"
services:
sleeper:
image: alpine
cap_add:
- NET_ADMIN
command:
- sleep
- inf
networks:
- eth0
- eth1
networks:
eth0:
eth1:
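A possible variation (just a sketch) that adds the second default route at container start instead of by hand. The 192.168.48.1 gateway is the example value from above and must match whatever subnet Docker actually assigns to eth1 on your system; if alpine's busybox ip applet lacks the needed options, iproute2 can be installed in the image first (apk add iproute2).
version: "3"
services:
  sleeper:
    image: alpine
    cap_add:
      - NET_ADMIN
    command:
      - sh
      - -c
      # 192.168.48.1 is the example gateway from above (an assumption here)
      - "ip route add default via 192.168.48.1 metric 101 || true; sleep inf"
    networks:
      - eth0
      - eth1
networks:
  eth0:
  eth1: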

Firewalld And Container Published Ports

On a KVM guest of my RHEL8 host, with the guest running CentOS7, I was expecting firewalld to block outside access, by default, to an ephemeral port published by a Docker container running nginx. To my surprise, the access ISN'T blocked.
Again, the host (myhost) is running RHEL8, and it has a KVM guest (myguest) running CentOS7.
The firewalld configuration on myguest is standard, nothin' fancy:
[root@myguest ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: http https ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Here are the eth0 and eth1 interfaces that fall under the firewalld public zone:
[root@myguest ~]# ip a s dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:96:9c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.100.111/24 brd 192.168.100.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe96:9cfc/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@myguest ~]# ip a s dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:66:6c:a1 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe66:6ca1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
On myguest I'm running Docker, and the nginx container is publishing its Port 80 to an ephemeral port:
[me@myguest ~]$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
06471204f091 nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:49154->80/tcp focused_robinson
Notice that in the prior firewall-cmd output I was not permitting access via this ephemeral TCP Port 49154 (or to any other ephemeral ports for that matter). So, I was expecting that unless I did so, outside access to nginx would be blocked. But to my surprise, from another host in the home network running Windows, I was able to access it:
C:\Users\me>curl http://myguest:49154
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
.
.etc etc
If a container publishes its container port to an ephemeral one on the host (myguest in this case), shouldn't the host firewall utility protect access to that port in the same manner as it would a standard port? Am I missing something?
But I also noticed that in fact the nginx container is listening on a TCP6 socket:
[root@myguest ~]# netstat -tlpan | grep 49154
tcp6 0 0 :::49154 :::* LISTEN 23231/docker-proxy
It seems, then, that firewalld may not be blocking tcp6 sockets? I'm confused.
This is obviously not a production issue, nor something to lose sleep over. I'd just like to make sense of it. Thanks.
The integration between docker and firewalld has changed over the years, but based on your OS versions and CLI output I think you can get the behavior you expect by setting AllowZoneDrifting=no in /etc/firewalld/firewalld.conf on the RHEL 8 host.
Due to zone drifting, it is possible for packets received in a zone with --set-target=default (e.g. the public zone) to drift to a zone with --set-target=accept (e.g. the trusted zone). This means FORWARDed packets received in zone public will be forwarded to zone trusted. If your docker containers are using a real bridge interface, then this issue may apply to your setup. Docker defaults to SNAT, so usually this problem is hidden.
Newer firewalld releases have completely removed this behavior because, as you have found, it's both unexpected and a security issue.
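If it helps, a minimal sketch of applying that setting on the RHEL 8 host (assuming a stock firewalld install; a full daemon restart ensures firewalld.conf is re-read):
# on the RHEL 8 host
sudo sed -i 's/^AllowZoneDrifting=.*/AllowZoneDrifting=no/' /etc/firewalld/firewalld.conf
sudo systemctl restart firewalld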

Weblogic port binding in Docker container

I manually installed WebLogic and configured a domain inside a Docker container [overlay network, Docker swarm].
[root@host ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e3f32e71cc16 bridge bridge local
26628a46774c docker_gwbridge bridge local
20c80427519f host host local
9ejgeett1y4y ingress overlay swarm
52d14f492cda none null local
f628wowngc6z myoverlay overlay swarm
After starting the Admin Server, I see that the admin server port 7001 is bound only to the overlay interface and not to the ANY interface 0.0.0.0. So, even though I exposed port 7001 when creating the container, I am not able to access it from the external public network.
[root@host /]# netstat -anp | grep 7001
tcp 0 0 10.0.0.5:7001 0.0.0.0:* LISTEN 8168/java
[root@host /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
265: eth0@if266: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.5/24 scope global eth0
valid_lft forever preferred_lft forever
267: eth1@if268: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.6/16 scope global eth1
valid_lft forever preferred_lft forever
I also started sshd in the same container, but this was bound to the 0.0.0.0 ANY interface.
[root@host /]# netstat -anp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/sshd
I need the WebLogic admin server console to be accessible from the public network, not just from containers on the overlay network. So, how do I bind the WebLogic admin server port to all interfaces (0.0.0.0)?
The problem lies in the WebLogic configuration. If the Listen Address is left empty, WebLogic itself will bind the port on all interfaces. This solved my problem.
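For reference, a rough sketch of what that looks like in the domain's config/config.xml (an illustrative fragment, not copied from the actual domain); an empty listen-address makes the server bind on all interfaces:
<server>
  <name>AdminServer</name>
  <listen-port>7001</listen-port>
  <!-- an empty listen-address means: bind the port on all interfaces (0.0.0.0) -->
  <listen-address></listen-address>
</server>
The same effect can be achieved in the Admin Console by clearing the server's Listen Address field and restarting the server.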

What is the host-side counterpart (the end attached to docker0) of a docker veth interface?

By default we have a bridge named docker0 on the host machine as one component of docker networking.
When we run a docker container, docker creates a vethxxx pair that plugs into docker0 at one end and into the container at the other end, where it appears as eth0.
I'm trying to find the trace of those eth0 interfaces on the host machine.
I expected to find some network namespaces via:
ip netns show
But the output is empty. So how can I see the representation of a container's eth0 interface on the host machine?
Generally, each container has an isolated network namespace on the host, and the interface eth0 in a container is encapsulated in that network namespace (aka a sandbox, in Docker terminology). So if you want to see eth0 from the host side, you must enter the container's network namespace first.
But docker containers' network namespaces live in a different directory from those created manually: they live in /var/run/docker/netns. So we need to create a soft link to /var/run/netns.
ln -s /var/run/docker/netns /var/run/netns
ip netns list
ip netns exec xxxx ip addr show
That way you can see the container end of each veth pair from the host machine, inside its isolated network namespace.
root@Light-G:/var/lib# ip netns exec 459c238c2a4f ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0a:0a:c7:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.199.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe0a:c702/64 scope link
valid_lft forever preferred_lft forever
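If you'd rather not symlink into /var/run/netns, an alternative sketch using nsenter (it assumes util-linux's nsenter is installed; <container> is a placeholder for the container id or name):
# find the container's main PID, then run ip only inside its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' <container>)
sudo nsenter -t "$PID" -n ip addr show
# the "@if16" suffix on the container's eth0 is the ifindex of its veth peer
# on the host, which you can match against plain "ip link" output there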
