So I'm trying to create a network (docker network create) whose traffic will pass through a specific physical network interface (NIC); I have two: <iface1> (internal) and <iface2> (external).
I need the traffic of the two NICs to be physically separated.
METHOD 1:
I think macvlan is the driver I should use to create such a network.
Most of what I found on the internet refers to Pipework (now deprecated) or temporary Docker plugins (also deprecated).
What has helped me most so far is this:
docker network create -d macvlan \
--subnet 192.168.0.0/16 \
--ip-range 192.168.2.0/24 \
-o parent=wlp8s0.1 \
-o macvlan_mode=bridge \
macvlan0
Then, in order for the container to be visible from the host, I need to do this on the host:
sudo ip link add macvlan0 link wlp8s0.1 type macvlan mode bridge
sudo ip addr add 192.168.2.10/16 dev macvlan0
sudo ifconfig macvlan0 up
Now the container and the host see each other :) BUT the container can't access the local network.
The idea is for the container to be able to access the internet.
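For reference, the pattern that usually gives macvlan containers both host connectivity and LAN/internet access is to point the Docker network's --gateway at the real LAN router and to add a host-side macvlan "shim" with a route to the container range. A rough sketch, assuming the LAN router is 192.168.1.1 and <iface1> is a wired parent interface (both are assumptions, adjust to your network; note that macvlan over a Wi-Fi parent like wlp8s0 often fails because most access points drop frames from unknown MAC addresses):
# recreate the network with the real LAN router as the gateway (192.168.1.1 is an assumption)
docker network create -d macvlan \
  --subnet 192.168.0.0/16 \
  --ip-range 192.168.2.0/24 \
  --gateway 192.168.1.1 \
  -o parent=<iface1> \
  macvlan0
# host-side shim so the host and the containers can reach each other
sudo ip link add macvlan-shim link <iface1> type macvlan mode bridge
sudo ip addr add 192.168.2.10/32 dev macvlan-shim
sudo ip link set macvlan-shim up
sudo ip route add 192.168.2.0/24 dev macvlan-shim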
METHOD 2:
As I will use <iface2> manually, I'm OK with the traffic going through <iface1> by default.
But no matter in which order I bring the NICs up (I also tried temporarily removing the kernel module for <iface2>), all traffic is always taken over by the external NIC <iface2>.
I found that this happens because the routing table updates itself automatically at some seemingly random time.
In order to force the traffic to go through <iface1>, I have to run (on the host):
sudo route del -net <net> gw 0.0.0.0 netmask 255.0.0.0 dev <iface2>
sudo route del default <iface2>
Now, I can verify (in several ways) that the traffic just goes through <iface1>.
But the moment the routing table updates itself (automatically), all traffic moves back to <iface2>. Damn!
I'm sure there's a way to make the routing table "static" or "persistent".
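If the interfaces are managed by NetworkManager (which would explain the routing table "updating itself"), one way to pin this down is to tell it never to install a default route via <iface2>, or to keep both defaults but give <iface1> a lower metric. A sketch; the connection name is hypothetical:
nmcli connection show                     # find the connection profile bound to <iface2>
nmcli connection modify "<iface2-conn>" ipv4.never-default yes ipv6.never-default yes
nmcli connection up "<iface2-conn>"
# or keep both default routes but prefer <iface1> with a lower metric
sudo ip route replace default via <iface1-gateway> dev <iface1> metric 100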
EDIT (18/Jul/2018):
The main idea is to be able to access the internet through a Docker container using only one of the two available physical network interfaces.
My environment:
On the host I created a virbr0 bridge for VMs with IP address 192.168.122.1 and brought up a VM instance with interface ens3 and IP address 192.168.122.152.
192.168.122.1 is the gateway for the 192.168.122.0/24 network.
Inside the VM:
Create network:
# docker network create --subnet 192.168.122.0/24 --gateway 192.168.122.1 --driver macvlan -o parent=ens3 vmnet
Create docker container:
# docker run -ti --network vmnet alpine ash
Check:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:c0:a8:7a:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 192.168.122.152
PING 192.168.122.152 (192.168.122.152): 56 data bytes
^C
--- 192.168.122.152 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1): 56 data bytes
64 bytes from 192.168.122.1: seq=0 ttl=64 time=0.471 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.471/0.471/0.471 ms
OK, I bring up another VM with IP address 192.168.122.73 and check from the Docker container:
/ # ping 192.168.122.73 -c2
PING 192.168.122.73 (192.168.122.73): 56 data bytes
64 bytes from 192.168.122.73: seq=0 ttl=64 time=1.630 ms
64 bytes from 192.168.122.73: seq=1 ttl=64 time=0.984 ms
--- 192.168.122.73 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.984/1.307/1.630 ms
From the Docker instance I can't ping the interface on the VM, but I can access the local network.
/ # ip n|grep 192.168.122.152
192.168.122.152 dev eth0 used 0/0/0 probes 6 FAILED
On the VM I add a macvlan0 NIC:
# ip link add macvlan0 link ens3 type macvlan mode bridge
# ip addr add 192.168.122.100/24 dev macvlan0
# ip l set macvlan0 up
From the Docker container I can ping 192.168.122.100:
/ # ping 192.168.122.100 -c2
PING 192.168.122.100 (192.168.122.100): 56 data bytes
64 bytes from 192.168.122.100: seq=0 ttl=64 time=0.087 ms
64 bytes from 192.168.122.100: seq=1 ttl=64 time=0.132 ms
--- 192.168.122.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.087/0.109/0.132 ms
Related
When deploying docker-compose with multiple networks, only the first interface has access to the outside world
version: "3.9"
services:
speedtest:
build:
context: .
dockerfile: speedtest.Dockerfile
tty: true
networks:
- eth0
- eth1
networks:
eth0:
eth1:
Running ping inside the container, for example ping -I eth0 google.com, works fine.
However, running ping -I eth1 google.com gives this result:
PING google.com (142.250.200.238) from 172.21.0.2 eth1: 56(84) bytes of data.
From c4d3b238f9a1 (172.21.0.2) icmp_seq=1 Destination Host Unreachable
From c4d3b238f9a1 (172.21.0.2) icmp_seq=2 Destination Host Unreachable
Any idea how to have egress to the internet on both networks?
I tried multiple combinations for creating the network: with external, bridge with custom config, etc.
Update
After larsks' answer, using ip route add for eth1 and running tcpdump -i any, packets are coming in correctly:
11:26:12.098918 eth1 Out IP 8077ec32b69d > dns.google: ICMP echo request, id 3, seq 1, length 64
11:26:12.184195 eth1 In IP dns.google > 8077ec32b69d: ICMP echo reply, id 3, seq 1, length 64
But still 100% packet loss...
The problem here is that while there are two interfaces inside the container, there is only a single default route. Given a container with two interfaces, like this:
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
70: eth0@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:10:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.16.2/20 brd 192.168.31.255 scope global eth0
valid_lft forever preferred_lft forever
72: eth1@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:30:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.48.2/20 brd 192.168.63.255 scope global eth1
valid_lft forever preferred_lft forever
The routing table looks like this:
/ # ip route
default via 192.168.16.1 dev eth0
192.168.16.0/20 dev eth0 proto kernel scope link src 192.168.16.2
192.168.48.0/20 dev eth1 proto kernel scope link src 192.168.48.2
When you run ping google.com or ping -I eth0 google.com, in both
cases your ICMP request egresses through eth0, goes to the
appropriate default gateway, and eventually works its way to
google.com.
But when you run ping -I eth1 google.com, there's no way to reach
the default gateway from that address; the gateway is only reachable
via eth0. Since the kernel can't find a useful route, it attempts to
connect directly. If we run tcpdump on the host interface that is the other end of eth1, we see:
23:47:58.035853 ARP, Request who-has 142.251.35.174 tell 192.168.48.2, length 28
23:47:59.083553 ARP, Request who-has 142.251.35.174 tell 192.168.48.2, length 28
[...]
That's the kernel saying, "I've been told to connect to this address
using this specific interface, but there's no route, so I'm going to
assume the address is on the same network and just ARP for it".
Of course that fails.
We can make this work by adding an appropriate route. You need to run
a privileged container to do this (or at least have
CAP_NET_ADMIN):
ip route add default via 192.168.48.1 metric 101
(The gateway address is the .1 address of the network associated with eth1.)
We need the metric setting to differentiate this from the existing
default route; without that the command would fail with RTNETLINK answers: File exists.
After running that command, we have:
/ # ip route
default via 192.168.16.1 dev eth0
default via 192.168.48.1 dev eth1 metric 101
192.168.16.0/20 dev eth0 proto kernel scope link src 192.168.16.2
192.168.48.0/20 dev eth1 proto kernel scope link src 192.168.48.2
And we can successfully ping google.com via eth1:
/ # ping -c2 -I eth1 google.com
PING google.com (142.251.35.174) from 192.168.48.2 eth1: 56(84) bytes of data.
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=1 ttl=116 time=8.87 ms
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=2 ttl=116 time=8.13 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 8.127/8.497/8.868/0.370 ms
Having gone through all that, I'll add that I don't see many
situations in which it would be necessary: typically you use
additional networks in order to isolate things like database servers,
etc, while using the "primary" interface (the one with which the
default route is associated) for outbound requests.
I tested all this using the following docker-compose.yaml:
version: "3"
services:
sleeper:
image: alpine
cap_add:
- NET_ADMIN
command:
- sleep
- inf
networks:
- eth0
- eth1
networks:
eth0:
eth1:
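If you don't want to add the route by hand every time, one option (not part of the original answer, just a sketch reusing the sleeper service and the 192.168.48.1 gateway from above) is to add it right after the stack comes up:
docker compose up -d                      # or docker-compose, depending on your installation
docker compose exec sleeper ip route add default via 192.168.48.1 metric 101
docker compose exec sleeper ping -c2 -I eth1 google.com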
I have tried to connect to a Digilent ZedBoard from my host PC. I can do this over UART, but I am not able to SSH into the board, nor can I share my host PC's internet connection so that the ZedBoard can access the internet.
Zedboard is running: Xillinux distribution for Zynq-7000 EPP
Host PC is running: Ubuntu 16.04
How should I set this up?
We will go through the steps of communicating with a Digilent ZedBoard using the UART and the Ethernet port.
Using UART port
Connect the host (USB) to the zedboard's UART port (micro USB) and execute on the host:
# Install minicom
apt update && apt install minicom
minicom -D /dev/ttyACM0 -b 115200 -8 -o
Congratulations, you are connected to the zedboard
* For minicom help: CTRL+a z
* To exit minicom CTRL+a x
Connect using the board's ethernet port
Connect the zedboard to the host using the ethernet port on the host system, or an ethernet to usb adapter.
By default the ZedBoard's OS has eth0 configured with the static IP 192.168.1.10.
Configure on the host:
Network Connections > (Select the connection interface to the zedboard) > Edit > IPv4 Settings:
Change Method to Manual
Edit Address to: 192.168.1.1
Edit Netmask to: 255.255.255.0
Use the menu on the host to disconnect and connect to the interface that you have just configured.
Connect to the board with: ssh root@192.168.1.10
Share your PC's internet with the zedboard
Network Connections > (Select the connection interface) > Edit > IPv4 Settings:
* Change Method to Share to other computers
Use the menu on the host to disconnect and connect to the interface that you have just configured
Execute ip addr and confirm the IP of the connection interface that is being shared:
10.42.0.1 in my machine (this may be different in your machine)
Use minicom to connect to the board (see above).
On the ZedBoard:
Edit the file /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.42.0.10
netmask 255.255.255.0
gateway 10.42.0.1
And fix your DNS resolver by editing the file /etc/resolv.conf to
nameserver 10.42.0.1
Execute this command to apply the new configuration on your ZedBoard:
ifdown eth0; ifup eth0
And voilà! At this point you should be able to ping your host at:
root@localhost:~# ping 10.42.0.1
PING 10.42.0.1 (10.42.0.1) 56(84) bytes of data.
64 bytes from 10.42.0.1: icmp_req=1 ttl=64 time=0.424 ms
64 bytes from 10.42.0.1: icmp_req=2 ttl=64 time=0.498 ms
Ping an internet host, e.g. 8.8.8.8, through your host's connection:
root@localhost:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=53 time=6.93 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=53 time=6.89 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=53 time=7.22 ms
And if you have set up /etc/resolv.conf correctly you can also access the internet using full domain names:
root@localhost:~# ping www.google.com
PING www.google.com (172.217.10.132) 56(84) bytes of data.
64 bytes from lga34s16-in-f4.1e100.net (172.217.10.132): icmp_req=1 ttl=53 time=7.02 ms
64 bytes from lga34s16-in-f4.1e100.net (172.217.10.132): icmp_req=2 ttl=53 time=7.20 ms
Additional notes
Files to keep in mind
/etc/network/interfaces describes the network interfaces
/etc/hostname sets the system's hostname
/etc/hosts resolves IP addresses to hostnames
/etc/resolv.conf configures your DNS resolver
I have an external server abc.internalcorp.com that I'm planning to connect from docker.
I tried to ping that server from host machine and it works.
ping abc.internalcorp.com
PING abc.internalcorp.com (172.xx.xx.xx) 56(84) bytes of data.
64 bytes from abc.internalcorp.com (172.xx.xx.xx): icmp_seq=1 ttl=47 time=32.6 ms
^C
--- abc.internalcorp.com ping statistics ---
2 packets transmitted, 1 received, 50% packet loss, time 999ms
rtt min/avg/max/mdev = 32.673/32.673/32.673/0.000 ms
But when I execute the same command from my docker container, I see no response. How could this be?
docker exec -ti docker-container bash
root@b7bdf44feb7f:/# ping abc.internalcorp.com
PING abc.internalcorp.com (172.xx.xx.xx) 56(84) bytes of data.
<No response>
This ping is just a test. abc.internalcorp.com is actually a database server and I'm unable to connect to it. I can connect to other database servers though.
Update:
I changed bip in ~/.docker/daemon.json
{
"bip": "193.168.1.5/24",
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false
}
But I still have the same ping issue
docker exec -ti docker-container bash
root@b7bdf44feb7f:/# ip addr show eth0
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c1:a8:01:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 193.168.1.1/24 brd 193.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
Edit
Figured out the issue. There were other networks in my Docker setup that used the same subnets. I deleted them and it works fine now.
You need to do two things:
Change the network by editing daemon.json:
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false,
"bip" : "12.12.0.1/24"
}
Delete other networks in Docker that might conflict with that IP range. You can check whether any other network is in the same range using
docker network inspect <networkname>
The 172.x.x.x range is the default range for Docker's internal networks; if you are using that same range on your local network, you need to specify a different one for the Docker network.
https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/
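To spot which existing Docker networks overlap with your local 172.x.x.x range, something like this should work (the network name to remove is a placeholder):
# list every Docker network together with its subnet
docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
# remove an unused network whose subnet collides with your LAN
docker network rm <conflicting-network>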
Find the DNS server that resolves abc.internalcorp.com. Add it as a DNS server for your Docker containers by updating daemon.json as below, where x.x.x.x is that DNS server:
{
"dns": ["x.x.x.x"]
}
Restart the Docker daemon, then try the ping from the container again.
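Concretely, that could look like the sketch below; the DNS address 10.0.0.53 is only an example, and the daemon.json path depends on your installation (/etc/docker/daemon.json on Linux, ~/.docker/daemon.json for Docker Desktop):
grep nameserver /etc/resolv.conf          # find the DNS server the host uses, e.g. 10.0.0.53
# add "dns": ["10.0.0.53"] to daemon.json, then restart the daemon
sudo systemctl restart docker             # on Linux; restart Docker Desktop from its menu otherwise
docker exec -ti docker-container ping abc.internalcorp.com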
On the host, there is a service
#server# netstat -ln | grep 3308
tcp6 0 0 :::3308 :::* LISTEN
It can be reached from remote machines.
The container is in a user-defined bridge network.
The server IP address is 192.168.1.30
#localhost ~]# ifconfig
br-a54fd3b63acd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1eff:fecc:92e8 prefixlen 64 scopeid 0x20<link>
ether 02:42:1e:cc:92:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:37ff:fe9f:e4f1 prefixlen 64 scopeid 0x20<link>
ether 02:42:37:9f:e4:f1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 4018 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.30 netmask 255.255.255.0 broadcast 192.168.1.255
And ping from the container also works.
#33208c18aa61:~# ping -c 2 192.168.1.30
PING 192.168.1.30 (192.168.1.30) 56(84) bytes of data.
64 bytes from 192.168.1.30: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.1.30: icmp_seq=2 ttl=64 time=0.105 ms
And the service is available.
#server# telnet 192.168.1.30 3308
Trying 192.168.1.30...
Connected to 192.168.1.30.
Escape character is '^]'.
N
But the service can't be reached from the container.
#33208c18aa61:~# telnet 192.168.1.30 3308
Trying 192.168.1.30...
telnet: Unable to connect to remote host: No route to host
I checked
Make docker use IPv4 for port binding
to make sure I didn't have IPv6 set to bind only on IPv6:
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
From inside of a Docker container, how do I connect to the localhost of the machine?
and found that my route is a little different:
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default router.asus.com 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-a54fd3b63acd
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
Does it matter? Or could there be another reason?
Your Docker container is in a different network namespace and connected to a different interface than your host machine; that's why you can't reach it using the IP 192.168.x.x.
What you need to do is use the Docker network gateway instead, in your case 172.17.0.1. Be aware that this IP might not be the same from host to host, so to reproduce this everywhere and be completely sure which IP to use, you can create a user-defined network, specifying the subnet and gateway, and run your container there, for example:
docker network create -d bridge --subnet 172.16.0.0/24 --gateway 172.16.0.1 dockernet
docker run --net=dockernet ubuntu
Also, whatever service you are trying to connect to must be listening on Docker's bridge interface as well.
Another option is to run the container in the same network namespace as the host with the --net=host flag; in that case you can access services outside the container using localhost.
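For completeness, a quick way to test the --net=host approach (the alpine image is just an example; its busybox includes a telnet applet):
docker run --rm -it --net=host alpine sh
/ # telnet 127.0.0.1 3308      # the host's ports are reachable directly, no published ports needed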
Inspired by the official documentation:
The Docker bridge driver automatically installs rules in the host
machine so that containers on different bridge networks cannot
communicate directly with each other.
I checked the iptables rules on the server; as an experiment I stopped iptables temporarily, and then the container could reach that service successfully. Later I was told the server had been rebooted recently, so I'm guessing some configuration was lost after that reboot. I'm not very familiar with iptables, and when I try
systemctl status iptables.service
it says the service is not installed. After I install and run the service,
iptables -L -n
is almost empty. Now I have no clue what kind of iptables rules could cause that mess.
But if anyone faces this "ping succeeds, telnet fails" situation, iptables could be where the root cause lies.
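If you do end up in that state, restarting the Docker daemon normally makes it reinstall its own iptables chains, which you can then verify roughly like this:
sudo systemctl restart docker
sudo iptables -L FORWARD -n -v            # should now contain the DOCKER and DOCKER-USER chains
sudo iptables -t nat -L -n | grep -i docker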
I am running a Docker image on a Mac machine, and when I log into the container I see the IP address 172.17.0.2 (cat /etc/hosts).
How does docker choose the IP?
Is there any IP range that Docker choose?
What if I run multiple containers on the same host? Will it be different?
/etc/resolv.conf gives some IP. What is that IP and where does it come from?
How do I connect to the Docker service using the internal IP, say 172.17.0.2?
ping CONTAINER_ID -> returns the IP 172.17.0.2
How does it resolve the hostname?
I tried reading through the networking docs, but they don't help.
Also, I am running my service on port 8443. Still, I am unable to connect.
I tried running,
docker run --net host -p 8443:8443 IMAGE
Still no luck.
Tried the below approach also.
docker run -p MY_MACHINE_IP:8080:8080 IMAGE
Tried with,
http://MY_MACHINE_IP:8080
http://localhost:8080
None of the above works.
ifconfig output,
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 60:f8:1d:b2:cb:0c
inet6 fe80::49d:a511:dc4e:7960%en0 prefixlen 64 secured scopeid 0x5
inet 10.231.168.63 netmask 0xffe00000 broadcast 10.255.255.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 02:f8:1d:b2:cb:0c
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
ether 0a:71:96:61:e4:eb
inet6 fe80::871:96ff:fe61:e4eb%awdl0 prefixlen 64 scopeid 0x7
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 72:00:07:57:48:30
media: autoselect <full-duplex>
status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 72:00:07:57:48:31
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 72:00:07:57:48:30
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 8 priority 0 path cost 0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 9 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: <unknown type>
status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
inet6 fe80::3f17:8946:c18d:5d25%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::20aa:76fd:d68:7fb2%utun2 prefixlen 64 scopeid 0xd
nd6 options=201<PERFORMNUD,DAD>
utun3: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
inet6 fe80::e42a:c616:4960:2c43%utun3 prefixlen 64 scopeid 0x10
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1342
inet 17...... --> 17.... netmask 0xff000000
inet6 fe80::93df:7780:862c:8a06%utun1 prefixlen 64 scopeid 0x12
nd6 options=201<PERFORMNUD,DAD>
For the first four questions you can find some information here; in general, Docker's networking layer is responsible for managing the network.
Usually I specify the ports like this:
docker run -p 8443:8443 IMAGE
and it works.
A reference to an existing topic is here.
1. How does docker choose the IP?
When Docker is installed on your machine it creates the docker0 interface. Docker assigns an IP address to your container whenever the container launches.
You can verify the IP range of docker0 with the ifconfig command.
2. Is there any IP range that docker choose?
Yes, please refer to my answer to question 1.
3. What if I run multiple containers on the same host? Will it be different?
Yes, each container gets a different IP, taken from the docker0 interface's range, unless you create your own network using docker network create. For more, refer to Docker Networking.
4. /etc/resolv.conf gives some IP. What is that IP and where does it come from?
It's the internal DNS of the Docker network. You can set your own DNS IP in /etc/systemd/system/docker.service.d/docker.conf by adding your DNS server to the ExecStart line as below:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock -g "/opt/docker_storage" --dns <replace-dns-ip>
5. How to connect to the Docker service using the internal IP, say 172.17.0.2
You have to publish the port to connect, e.g. docker run -p 8443:8443 <image-name>;
after that you can connect with telnet localhost 8443 or curl http://172.17.0.2:8443.
Most important:
Add the following to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
and apply the setting with:
sysctl -p /etc/sysctl.conf
Hope this will help.
Thank you!
Docker manages all of this internal networking machinery itself. This includes allocating IP(v4) addresses from a private range, a NAT setup for outbound connections, and a DNS service to allow containers to communicate with each other.
A stable, reasonable setup is:
Run docker network create mynet, once, to create a non-default network. (Docker Compose will do this for you automatically.)
Run your containers with --net mynet.
When containers need to communicate with each other, they can use other containers' --name as DNS names (you can connect to http://other-container-name).
If you need to reach a container from elsewhere, publish its service port using docker run -p or the Docker Compose ports: section. It can be reached using the host's DNS name or IP address and the published port.
Never ever use the container-private IP addresses (directly).
Never use localhost unless you're absolutely sure about what it means. (It's a correct way to reach a published port from a browser running on the host that's running the containers; it's almost definitely not what you mean from within a container.)
The problems I've seen with the container-private IP addresses tend to be around the second time you use them: because you relaunched the container and the IP address changed; because it worked from your local host and now you want to reach it from somewhere else.
To answer your initial questions briefly: (1-2) Docker assigns them itself from a network that can be configured but often defaults to 172.17.0.0/16; (3) different containers have different private IP addresses; (4-5) Docker provides its own DNS service and /etc/resolv.conf points there; (6) ICMP connectivity usually doesn't prove much and you don't need to ping containers (use dig or nslookup for DNS debugging, curl for actual HTTP requests).
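Putting the recommended setup together, a minimal command sketch (the network name mynet, the images, container names, and ports are all placeholders):
docker network create mynet
docker run -d --net mynet --name db -e POSTGRES_PASSWORD=example postgres
docker run -d --net mynet --name web -p 8080:80 nginx
# "web" can reach the database by the DNS name "db" (e.g. db:5432);
# from outside, use the host's name or IP plus the published port, e.g. http://<host-ip>:8080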