Rootless mode: My docker container has internet but cannot ping

I just noticed that my docker container has an internet connection, since I can update and download packages. But I cannot ping google.com. I do not think it is a DNS issue, because I cannot ping 8.8.8.8 either.
% docker run -it --name alpine5 alpine ash
/ # apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
v3.15.0-172-g86e4642bdd [https://dl-cdn.alpinelinux.org/alpine/v3.15/main]
v3.15.0-173-g0bd3b989ee [https://dl-cdn.alpinelinux.org/alpine/v3.15/community]
OK: 15837 distinct packages available
/ # apk add openssh
(1/10) Installing openssh-keygen (8.8_p1-r1)
(2/10) Installing ncurses-terminfo-base (6.3_p20211120-r0)
(3/10) Installing ncurses-libs (6.3_p20211120-r0)
(4/10) Installing libedit (20210910.3.1-r0)
(5/10) Installing openssh-client-common (8.8_p1-r1)
(6/10) Installing openssh-client-default (8.8_p1-r1)
(7/10) Installing openssh-sftp-server (8.8_p1-r1)
(8/10) Installing openssh-server-common (8.8_p1-r1)
(9/10) Installing openssh-server (8.8_p1-r1)
(10/10) Installing openssh (8.8_p1-r1)
Executing busybox-1.34.1-r3.trigger
OK: 12 MiB in 24 packages
/ # ping -c 2 google.com
PING google.com (172.217.160.142): 56 data bytes
--- google.com ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
What might be the issue here?
Update 1
I wanted to add that on my host machine (Linux Mint 20.2) the interfaces are:
% ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 18:31:bf:b5:e6:5b brd ff:ff:ff:ff:ff:ff
In the container, the interfaces are:
$ docker container attach alpine5
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
61: eth0@if62: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:07 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.7/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
61: eth0@if62: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:07 brd ff:ff:ff:ff:ff:ff
There is no docker0 interface. One more thing I should mention: I am using Docker in rootless mode.
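In rootless mode, ping commonly fails even though TCP/UDP traffic works: the user-mode network stack (slirp4netns) can only send ICMP echo requests when unprivileged ICMP sockets are allowed. A likely fix, sketched from the rootless-networking docs (not verified on this exact setup), is to widen net.ipv4.ping_group_range on the host:

```shell
# On the host: allow every group to open unprivileged ICMP (ping) sockets,
# which rootless Docker's user-mode networking relies on for ping.
sudo sh -c 'echo "net.ipv4.ping_group_range = 0 2147483647" > /etc/sysctl.d/99-ping.conf'
sudo sysctl --system

# Then retry from a container:
docker run --rm alpine ping -c 2 8.8.8.8
```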

Related

Kubernetes minikube running in wsl Ubuntu exposes a service i cannot reach from win 10

I was following a Kubernetes tutorial for beginners (TechWorld with Nana) on a Win10 machine running Docker. After running into trouble, I migrated to this config:
wsl -l -v
NAME STATE VERSION
* Ubuntu Running 2
I installed Docker and started it with $ sudo service docker start
Then I started minikube: $ minikube start --driver=docker --kubernetes-version=v1.18.0
(not the latest version, because of some problems between systemd and systemctl)
Everything was OK; I created a mongodb pod and a mongo-express pod with ad hoc services:
plaurent$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-express-864c95f479-8gfxf 1/1 Running 2 23h
mongodb-deployment-58977cc4f5-k4r4h 1/1 Running 1 23h
plaurent$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
mongo-express-service LoadBalancer 10.98.7.33 <pending> 8081:30000/TCP 23h
mongodb-service ClusterIP 10.101.132.245 <none> 27017/TCP 23h
Following the tutorial, I ran:
plaurent$ minikube service mongo-express-service
🏃 Starting tunnel for service mongo-express-service.
🎉 Opening service default/mongo-express-service in default browser...
👉 http://192.168.49.2:30000
❗ Because you are using a Docker driver on linux, the terminal needs to be open to run it.
On a second WSL terminal, I can reach this service with the following, and it works:
plaurent$ curl http://192.168.49.2:30000
But I cannot do the same thing from Win10; even a ping fails.
I ran ip addr and got the following:
plaurent$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 86:5b:79:bf:27:05 brd ff:ff:ff:ff:ff:ff
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f2:bd:6f:41:f3:2d brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:b5:ae:43 brd ff:ff:ff:ff:ff:ff
inet 172.20.254.215/20 brd 172.20.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:feb5:ae43/64 scope link
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:eb:30:05:9a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
8: br-ecf9b5a8d792: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0f:31:2f:71 brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ecf9b5a8d792
valid_lft forever preferred_lft forever
inet6 fe80::42:fff:fe31:2f71/64 scope link
valid_lft forever preferred_lft forever
10: vethe8c97a5@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ecf9b5a8d792 state UP group default
link/ether ee:d2:2d:f8:5b:4d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecd2:2dff:fef8:5b4d/64 scope link
valid_lft forever preferred_lft forever
I can see inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ecf9b5a8d792, which is close to the IP of the service, but I don't know what it means or whether it can help solve the problem.
I'm not comfortable with networks; any help is welcome.
You can port-forward to the services in the cluster: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
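As a sketch (the service name is taken from the question; --address 0.0.0.0 is needed so the forward listens on the WSL interface that Windows can reach, not only on 127.0.0.1):

```shell
# Forward local port 8081 to the mongo-express service inside the cluster,
# listening on all interfaces so it is reachable from outside WSL.
kubectl port-forward --address 0.0.0.0 service/mongo-express-service 8081:8081
```

From Win10 you could then try http://172.20.254.215:8081, using the eth0 address from the ip addr output above.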

Not able to connect from inside container, but able to connect from host

I am facing a peculiar situation and could not find much help on the web.
I have a container (based on an Alpine image) running on a CentOS 7 host in host network mode, which essentially means it shares the network stack, /etc/hosts, and /etc/resolv.conf with the host.
It is trying to connect to a remote machine (UB1804-MN1-131) within our organization's network (so no proxy is needed). The connect call is a grpc.Dial(hostname:port, ...) call.
I keep getting the below error:
code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: i/o timeout"
This behavior is not consistent: for example, it sometimes connects successfully after a few retries, while other times it simply refuses to connect.
The same remote machine connects without any issues when tried from host itself.
Any help in finding the root cause is much appreciated. For reference, I'm sharing the interface, hosts, and resolv details (I've edited some values for security reasons):
[user@HOST-21343-135 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:9e:7c:15 brd ff:ff:ff:ff:ff:ff
inet 172.17.65.135/16 brd 172.17.255.255 scope global ens192
valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:50:56:9e:12:a3 brd ff:ff:ff:ff:ff:ff
4: ens256: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:50:56:9e:23:0c brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b0:fa:05:37 brd ff:ff:ff:ff:ff:ff
inet 10.190.64.1/25 scope global docker0
valid_lft forever preferred_lft forever
[user@HOST-21343-135 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 HOST-21343-135
172.17.65.131 UB1804-MN1-131
[root@ATLAS-21343-135 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
nameserver 14.110.135.81
nameserver 14.110.135.82
nameserver 14.110.135.83
I have verified that all of the above data is shared by the container as well.
After much effort, I found the root cause: it was the DNS IPs present in /etc/resolv.conf. When I removed all the DNS entries, it worked like a charm. So it looks like the lookup was going through the DNS servers and failing there.
What I don't understand, though, is that the same thing works fine on the host with the DNS entries present, which suggests the host reads from /etc/hosts first and never needs the DNS servers, while the same lookup from within the container did not ignore the DNS entries. I am not sure why the behaviour differs.
For now, the workaround mentioned above works for me, so I am good to go.
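One plausible explanation (an assumption, not confirmed above): Alpine uses musl libc, whose resolver does not read /etc/nsswitch.conf and queries all configured nameservers in parallel, so its lookup behavior can differ from the host's glibc even when /etc/hosts and /etc/resolv.conf are shared. A quick way to see the difference (the container name mycontainer is a placeholder):

```shell
# On the CentOS host (glibc): the "hosts" line controls lookup order,
# normally "files dns", i.e. /etc/hosts wins before DNS is consulted.
grep hosts /etc/nsswitch.conf

# In the Alpine container (musl): there is no nsswitch.conf to consult;
# musl's resolver follows its own fixed behavior instead.
docker exec mycontainer cat /etc/nsswitch.conf   # expected to be missing
```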

Simulating network failures in Docker

I am trying to simulate partial or total network/container failure in Docker in order to see how my application behaves under failure conditions. I started by using Pumba, but it isn't working right. More specifically, this command fails, both via Pumba and when run directly on the container with docker exec:
tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
with the following output:
RTNETLINK answers: Operation not permitted
Now here is where it gets stranger. It works when run inside my service containers (rabbitmq:3.6.10, redis:4.0.1, mongo:3.5.11), after installing the iproute2 package. (Actually, it only works there when run via Pumba, not when run directly.) It does not work inside my application containers, all of which use node:8.2.1 as the base image, which already has iproute2 installed. None of the containers have any cap_add applied.
Output of ip addr on one of the application containers:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
113: eth0@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:06 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
OK, I found part of the answer. It turns out that the tc command was not working when run directly on the service containers either; sorry for the bit of incorrect information in the original question. Pumba works on the service containers but not the application containers, and the tc command does not work directly in any of the containers.
It turns out that it was a problem with running as an unprivileged user. I opened an issue with Pumba to address the problem.
The tc command still isn't working when run as root, and I still don't know why. However, I was only using that command for debugging, so while I am curious as to why it doesn't work, my main issue has been resolved.
You should exec into the container as the root user with -u=0, like:
sudo docker exec -u=0 myContainer tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
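Running as root inside the container may still not be enough: modifying qdiscs with tc requires the NET_ADMIN capability, which Docker does not grant by default. That would also explain the "Operation not permitted" error above. A sketch, with the container/image names as placeholders:

```shell
# Grant NET_ADMIN at container start so tc can change qdiscs; the
# capability must be set at run time, it cannot be added via exec.
docker run -d --cap-add NET_ADMIN --name myapp node:8.2.1 sleep infinity
docker exec myapp tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
```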
I had a similar issue on Windows and was finally able to resolve it by turning off the WSL 2 based engine in Docker settings. Now all my tc qdisc ... commands are working.

how to start a docker container with multi interface?

I want to start a docker container with three interfaces, all of which will be attached to a bridge on the host.
The only solution I see is providing my own network plugin. The interface below is invoked by the docker daemon once the container is created, to configure its network:
func (d *Driver) Join(r *dknet.JoinRequest) (*dknet.JoinResponse, error)
but there is only one Endpoint object in the JoinRequest struct, so the above invocation can only configure one container interface.
How can I create and configure three container interfaces?
You need to do it multiple times:
$ docker network create net1
bdc53c143e89d562761eedfd232620daf585968bc9ae022ba142d17601af6146
$ docker network create net2
d9a72a7a6ee6b61da3c6bb17e312e48888807a5a8c159fd42b6c99d219977559
$ docker network create net3
d2be9628f4fd60587d44967a5813e9ba7c730d24e953368b18d7872731a9478c
$ docker run -it --network net3 ubuntu:16.04 bash
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Now your container is running with the net3 network only. You can attach net1 and net2 as well.
$ docker network connect net1 cd70c7cbe367
$ docker network connect net2 cd70c7cbe367
After that, check inside the container:
root@cd70c7cbe367:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
90: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:18:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.24.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
92: eth1@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.22.0.2/16 scope global eth1
valid_lft forever preferred_lft forever
94: eth2@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:17:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.23.0.2/16 scope global eth2
valid_lft forever preferred_lft forever
PS: the ip command is missing by default in the container, so I installed the iproute2 package to check.

How to let docker container access host network port

I want to connect to a service on my local host from a docker container. I am using Docker for Mac. I checked this link: How to access host port from docker container, but when I run ip addr show docker0 in the docker container, I get an error response. Below are all the network devices in my docker container.
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1428 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
238: eth0@if239: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:2/64 scope link
valid_lft forever preferred_lft forever
Which one is my local host address?
You can use the special Mac-only DNS name docker.for.mac.localhost, which resolves to the host IP.
Source: https://docs.docker.com/docker-for-mac/networking/#there-is-no-docker0-bridge-on-macos
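On more recent Docker Desktop versions the documented name is host.docker.internal; a quick check from inside a container (port 8000 is a placeholder for whatever your host service listens on):

```shell
# From inside the container, reach a service listening on the Mac host.
curl http://host.docker.internal:8000
```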
You can run docker run --network host ... to let the container access host ports, or use the network_mode option in docker-compose.yaml. But this mode can be a security issue if you use an untrusted Docker image.
