How to let a Docker container access a host network port

I want to connect to a service on my local host from a Docker container. I am using Docker for Mac. I checked this link: How to access host port from docker container, but when I run ip addr show docker0 in the Docker container, the command just returns an error. Below are all the network devices in my Docker container:
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1428 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
238: eth0@if239: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:2/64 scope link
valid_lft forever preferred_lft forever
Which one is my local host address?

You can use the special Mac-only DNS name docker.for.mac.localhost, which resolves to the internal IP address used by the host.
Source: https://docs.docker.com/docker-for-mac/networking/#there-is-no-docker0-bridge-on-macos
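For example, assuming a service is listening on port 8000 on the Mac (the port is a placeholder for illustration), you could reach it from inside a throwaway container like this; on newer Docker Desktop releases the equivalent name is host.docker.internal:
# hypothetical example: reach a service listening on port 8000 on the Mac host
docker run --rm busybox wget -qO- http://docker.for.mac.localhost:8000/
# on newer Docker Desktop versions the same lookup works via host.docker.internal
docker run --rm busybox wget -qO- http://host.docker.internal:8000/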

You can also run the container with docker run --network host ... to let it reach host ports directly, or set the network_mode option in docker-compose.yaml. Note that host networking can be a security issue if you run an untrusted Docker image, and on Docker Desktop for Mac it has traditionally applied to the Linux VM rather than to macOS itself.
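A minimal sketch of both forms, assuming a placeholder image name my-image:
# host networking via the CLI
docker run --rm --network host my-image
# the docker-compose.yaml equivalent
services:
  myservice:
    image: my-image
    network_mode: host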

Related

Kubernetes minikube running in WSL Ubuntu exposes a service I cannot reach from Win 10

I was following a Kubernetes tutorial for beginners (TechWorld with Nana) on a Win 10 machine running Docker. As I ran into trouble, I migrated to this configuration:
wsl -l -v
NAME STATE VERSION
* Ubuntu Running 2
I installed Docker and started it with $ sudo service docker start
Then I started minikube: $ minikube start --driver=docker --kubernetes-version=v1.18.0
(not the latest version, because of some problems between systemd and systemctl)
Everything was OK; I created a MongoDB pod and a mongo-express pod with the corresponding services:
plaurent$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-express-864c95f479-8gfxf 1/1 Running 2 23h
mongodb-deployment-58977cc4f5-k4r4h 1/1 Running 1 23h
plaurent$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
mongo-express-service LoadBalancer 10.98.7.33 <pending> 8081:30000/TCP 23h
mongodb-service ClusterIP 10.101.132.245 <none> 27017/TCP 23h
Following the tutorial, I ran:
/plaurent$ minikube service mongo-express-service
🏃 Starting tunnel for service mongo-express-service.
🎉 Opening service default/mongo-express-service in default browser...
👉 http://192.168.49.2:30000
❗ Because you are using a Docker driver on linux, the terminal needs to be open to run it.
From a second WSL terminal I can reach this service with the following, and it works:
plaurent$ curl http://192.168.49.2:30000
But I cannot do the same thing from Win 10, and even a ping fails.
I ran ip addr and got the following:
/plaurent$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 86:5b:79:bf:27:05 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f2:bd:6f:41:f3:2d brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:b5:ae:43 brd ff:ff:ff:ff:ff:ff
inet 172.20.254.215/20 brd 172.20.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:feb5:ae43/64 scope link
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:eb:30:05:9a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
8: br-ecf9b5a8d792: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0f:31:2f:71 brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ecf9b5a8d792
valid_lft forever preferred_lft forever
inet6 fe80::42:fff:fe31:2f71/64 scope link
valid_lft forever preferred_lft forever
10: vethe8c97a5@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ecf9b5a8d792 state UP group default
link/ether ee:d2:2d:f8:5b:4d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecd2:2dff:fef8:5b4d/64 scope link
valid_lft forever preferred_lft forever
I can see inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ecf9b5a8d792, which is close to the IP of the service, but I don't know what it means or whether it can help solve the problem.
I'm not comfortable with networking; any help is welcome.
You can port-forward the services in the cluster: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
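A minimal sketch using the service name from the question; the local port 8081 is an assumption, and --address 0.0.0.0 makes the forward reachable from outside the WSL VM:
# forward local port 8081 to port 8081 of the mongo-express service in the cluster
kubectl port-forward --address 0.0.0.0 service/mongo-express-service 8081:8081
# then open http://localhost:8081 from Windows (WSL 2 forwards localhost)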

Windows Docker Desktop gives different network interfaces between host and container when using --net=host

Update
I tried the same setup using Ubuntu as the host. It works! I also noticed that the interface info (ip a) in the host and the container is the same under Ubuntu Docker, but different when using Docker Desktop for Windows.
So the question becomes: why does Docker Desktop for Windows give different network interfaces between host and container when using --net=host?
Original question
I start a container with --net=host. I want to connect to a device, which is on the same subnet as my host, from inside the container. The container also has a server running on port 3000.
Host (192.168.64.101/18)
Device (192.168.64.102/18)
Container (--net=host, server on port 3000)
The container can connect to the device at 192.168.64.102.
The container can ping the host at 192.168.64.101.
But I cannot access the container's server on port 3000 from the host. When I try curl localhost:3000, the connection is refused.
I thought --net=host would put the container on the same network as the host. Why can't I connect to the container's server using localhost?
ip a from container
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: services1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether a2:41:c9:a1:cd:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.65.4 peer 192.168.65.5/32 scope global services1
valid_lft forever preferred_lft forever
inet6 fe80::a041:c9ff:fea1:cd4e/64 scope link
valid_lft forever preferred_lft forever
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN qlen 1000
link/ether 02:50:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.3/24 brd 192.168.65.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::50:ff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:fb:e9:2d:76 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:fbff:fee9:2d76/64 scope link
valid_lft forever preferred_lft forever
11: vethfd2c43f@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
link/ether d6:ee:fe:80:24:04 brd ff:ff:ff:ff:ff:ff
inet6 fe80::d4ee:feff:fe80:2404/64 scope link
valid_lft forever preferred_lft forever

How to pass a dockerized program the IP addresses of the overlay networks attached to it

I have three overlay networks connected to a container in a swarm, and I need to give the addresses of the different networks to the program running inside when the container comes up. Each network is for a different purpose, and I can't seem to identify them from within the container.
I've tried the usual ip a and hostname -i, but they only display the adapter information, with nothing identifying which overlay is on which adapter.
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
eth3@if118: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:0e:03 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet 10.0.14.3/24 brd 10.0.14.255 scope global eth3
valid_lft forever preferred_lft forever
eth4@if120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth4
valid_lft forever preferred_lft forever
eth0@if122: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.13/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
eth1@if124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:0d:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 10.0.13.3/24 brd 10.0.13.255 scope global eth1
valid_lft forever preferred_lft forever
eth2@if126: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:0f:03 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet 10.0.15.3/24 brd 10.0.15.255 scope global eth2
valid_lft forever preferred_lft forever
hostname -i
10.0.14.3 10.0.13.3 10.0.15.3
This shows the addresses, but nothing identifying which address belongs to which overlay. Any pointers would be appreciated.
You could write a script that runs on the host when the swarm service is started. The script would run docker inspect on your container and write the network information to a file. The file would be stored on a mount point that the container has mounted. Then you can read the information from that file from within the container.
A more elaborate solution would be to build a custom API that would run the docker inspect for you and return the network information in a response.
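A minimal sketch of what that host-side script could do, assuming a hypothetical container name my_container and a bind mount at /shared that the container also mounts:
# on the host: dump the name -> settings map of every network the container is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' my_container > /shared/networks.json
# inside the container (assuming jq is installed): list each network name with its IP
jq -r 'to_entries[] | "\(.key) \(.value.IPAddress)"' /shared/networks.json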
Overlay networks are likely assigned to interfaces in order. A little bit of testing shows this to be true, but I don't like relying on assumptions.

Simulating network failures in Docker

I am trying to simulate partial/total network/container failure in Docker in order to see how my application behaves under failure conditions. I started by using pumba, but it isn't working right. More specifically, this command fails both via pumba and when run directly in the container with docker exec:
tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
with the following output:
RTNETLINK answers: Operation not permitted
Now here is where it gets stranger. Inside my service containers (rabbitmq:3.6.10, redis:4.0.1, mongo:3.5.11), after installing the iproute2 package, it works, but only when run via pumba, not when run directly. It does not work at all inside my application containers, all of which use node:8.2.1 as the base image, which already has iproute2 installed. None of the containers have any cap_add entries applied.
Output of ip addr on one of the application containers:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
113: eth0@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:06 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
OK, I found part of the answer. It turns out that the tc command was not working when run directly on the service containers either; sorry for the bit of incorrect information in the original question. Pumba works on the service containers and not on the application containers, and the tc command does not work in any of the containers.
It turns out it was a problem with running as an unprivileged user. I opened an issue with pumba to address the problem.
The tc command still isn't working when run as root, and I still don't know why. However, I was only using that command for debugging, so while I am curious why it doesn't work, my main issue has been resolved.
You should exec into the container as the root user with -u 0, like:
sudo docker exec -u 0 myContainer tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
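If running as root is still not enough, tc also needs the NET_ADMIN capability, which containers do not get by default; a hedged sketch of starting a container with it (the image name is a placeholder):
# grant the capability tc needs to modify queueing disciplines, then add the delay
docker run --rm --cap-add NET_ADMIN my-app-image \
    tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00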
I had a similar issue on Windows and was finally able to resolve it by turning off the WSL 2 based engine in the Docker settings. Now all my tc qdisc ... commands are working.

How to start a Docker container with multiple interfaces?

I want to start a Docker container with three interfaces, all of which will be attached to a bridge on the host.
The only solution seems to be providing my own network plugin. The interface below is invoked by the Docker daemon once the container is created, to configure its network:
func (d *Driver) Join(r *dknet.JoinRequest) (*dknet.JoinResponse, error)
But there is only one Endpoint object in the JoinRequest struct, so this invocation can only configure one container interface.
How do I create and configure three container interfaces?
You need to do it multiple times:
$ docker network create net1
bdc53c143e89d562761eedfd232620daf585968bc9ae022ba142d17601af6146
$ docker network create net2
d9a72a7a6ee6b61da3c6bb17e312e48888807a5a8c159fd42b6c99d219977559
$ docker network create net3
d2be9628f4fd60587d44967a5813e9ba7c730d24e953368b18d7872731a9478c
$ docker run -it --network net3 ubuntu:16.04 bash
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Now your container is running with the net3 network only. You can attach net1 and net2 as well:
$ docker network connect net1 cd70c7cbe367
$ docker network connect net2 cd70c7cbe367
After that, check inside the container:
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
92: eth1@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.22.0.2/16 scope global eth1
valid_lft forever preferred_lft forever
94: eth2@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:17:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.23.0.2/16 scope global eth2
valid_lft forever preferred_lft forever
PS: the ip command is missing by default in the container, so I installed the iproute2 package to check.
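If you prefer the container to come up with all three interfaces attached from the start, a hedged docker-compose sketch (service and network names are placeholders) does the same thing declaratively:
services:
  myservice:
    image: ubuntu:16.04
    command: sleep infinity
    networks:
      - net1
      - net2
      - net3
networks:
  net1:
  net2:
  net3: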
