Docker-in-Docker routing within Kubernetes

I have a network-related issue on a Kubernetes host that uses the Calico network layer. For continuous integration I need to run Docker-in-Docker, but running a simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
RUN traceroute google.com
produces output:
Step 1/4 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/4 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/4 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
---> 3619cb81e582
Step 4/4 : RUN traceroute google.com
---> Running in 3aa7908347ba
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 172.17.0.1 (172.17.0.1) 0.009 ms 0.005 ms 0.003 ms
It seems the Docker container gets invalid routing when it is created outside of Kubernetes. Pods orchestrated by Kubernetes can access the internet normally:
bash-5.0# ping -c 3 google.com
PING google.com (216.58.201.110) 56(84) bytes of data.
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=1 ttl=55 time=0.726 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=2 ttl=55 time=0.586 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=3 ttl=55 time=0.451 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 0.451/0.587/0.726/0.115 ms
bash-5.0# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.1.1 * 255.255.255.255 UH 0 0 0 eth0
bash-5.0# traceroute google.com
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 10-68-149-194.kubelet.kube-system.svc.kube.example.com (10.68.149.194) 0.006 ms 0.005 ms 0.004 ms
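One thing worth checking here (a diagnostic sketch, not a confirmed diagnosis): if the build runs against the node's Docker daemon (for example via a mounted /var/run/docker.sock), the intermediate build containers sit on that daemon's default docker0 bridge (the 172.17.0.1 gateway above), so their traffic depends on the node's iptables forwarding and NAT rules rather than on Calico. On the node:
# Is forwarded bridge traffic allowed? A DROP policy with no ACCEPT rule for
# docker0 would blackhole the build containers while leaving Calico pods working.
sudo iptables -L FORWARD -n -v | head -n 20
# Is the daemon's masquerade rule for the default bridge present?
sudo iptables -t nat -L POSTROUTING -n -v | grep 172.17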

Related

Docker container can't connect to the internet, but can ping any external IP

Can't ping or connect to any internet domain from docker container
Manjaro Linux.
DNS is set in /etc/docker/daemon.json on the host.
/etc/resolv.conf in docker container:
root@785625d57ad5:/# cat /etc/resolv.conf
nameserver 8.8.4.4
nameserver 8.8.8.8
Ping from the Docker container (the IP is google.com's):
root@785625d57ad5:/# ping -c 3 172.217.23.142
PING 172.217.23.142 (172.217.23.142) 56(84) bytes of data.
64 bytes from 172.217.23.142: icmp_seq=2 ttl=53 time=51.9 ms
64 bytes from 172.217.23.142: icmp_seq=3 ttl=53 time=51.9 ms
--- 172.217.23.142 ping statistics ---
3 packets transmitted, 2 received, 33% packet loss, time 2018ms
rtt min/avg/max/mdev = 51.973/51.980/51.987/0.007 ms
root@785625d57ad5:/# ping -c 3 google.com
ping: unknown host google.com
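For context, the daemon.json configuration the question refers to lives on the host and typically looks like the sketch below (the servers match the container's resolv.conf shown above; the file's exact contents are not given in the question):
# On the host: pin DNS servers for all containers, then restart the daemon.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["8.8.4.4", "8.8.8.8"]
}
EOF
sudo systemctl restart docker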

Port not accessible even after being exposed. Connection refused

We created a Docker container like this:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--expose=7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
but we are unable to connect to port 7050 on the container.
root@dcee7e74266f:/home# nc -vz 10.0.0.194 7050
nc: connect to 10.0.0.194 port 7050 (tcp) failed: Connection refused
we are able to ping the container:
root@dcee7e74266f:/home# ping 10.0.0.194
PING 10.0.0.194 (10.0.0.194) 56(84) bytes of data.
64 bytes from 10.0.0.194: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.0.194: icmp_seq=2 ttl=64 time=1.30 ms
64 bytes from 10.0.0.194: icmp_seq=3 ttl=64 time=0.668 ms
64 bytes from 10.0.0.194: icmp_seq=4 ttl=64 time=1.10 ms
64 bytes from 10.0.0.194: icmp_seq=5 ttl=64 time=0.631 ms
^C
--- 10.0.0.194 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.631/0.902/1.301/0.261 ms
and also see a process listening on port 7050 on the container:
root@9756199efefa:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:7050 0.0.0.0:* LISTEN 0 10097930 7/orderer
tcp 0 0 127.0.0.11:34865 0.0.0.0:* LISTEN 0 10097705 -
udp 0 0 127.0.0.11:51385 0.0.0.0:* 0 10097704 -
What is going on here? How can we fix this?
EDIT: we are on an overlay network. The publish flag suggested in the answer does not apply, as we are doing container-to-container communication. We tried it anyway and it doesn't work.
There is one thing we have noticed: if we run
docker network inspect <our-network-name>
it prints, among other things, a Containers section, but only the containers on the host from which docker network inspect is executed are listed there. The containers hosted on other nodes are not listed (also mentioned here).
We verified that if we run:
docker node ls
all the nodes are part of the swarm.
It seems other people have also run into this issue (e.g., here), but what is the solution?
Note: we are able to connect to another container running a different service exposed on port 7054. This container was created without even using the expose flag.
root@dcee7e74266f:/home# nc -zv 10.0.0.164 7054
Connection to 10.0.0.164 7054 port [tcp/*] succeeded!
We did further debugging with tcpdump; its output is identical to what you see when someone tries to connect to a port on which no process is listening. But as shown earlier, netstat shows a listening process, and we can connect to it from localhost.
Output of tcpdump:
root@dcee7e74266f:/test# tcpdump -s0 host 10.0.0.195
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:44:45.978583 IP dcee7e74266f.52148 > orderer.dscsa_net.7050: Flags [S], seq 3845506108, win 28200, options [mss 1410,sackOK,TS val 4203049443 ecr 0,nop,wscale 7], length 0
23:44:45.979324 IP orderer.dscsa_net.7050 > dcee7e74266f.52148: Flags [R.], seq 0, ack 3845506109, win 0, length 0
The R flag tells the client to reset the connection.
Output of traceroute:
root@dcee7e74266f:/test# traceroute 10.0.0.195
traceroute to 10.0.0.195 (10.0.0.195), 30 hops max, 60 byte packets
1 orderer.dscsa_net (10.0.0.195) 1.008 ms 0.900 ms 0.872 ms
Expose only sets metadata on the image or container; it does not make the port externally accessible. The option you are looking for is publish:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--publish=7050:7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
We solved this issue thanks to [1]. The server listening on 127.0.0.1 was the problem. Once we changed the listening address to 0.0.0.0 (shown as ::: in the netstat output below), we were able to connect to the server:
root@e9766a94d102:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.11:37641 0.0.0.0:* LISTEN 0 12821468 -
tcp6 0 0 :::7050 :::* LISTEN 0 12821696 7/orderer
udp 0 0 127.0.0.11:51855 0.0.0.0:* 0 12821467 -
There is no need for either the expose or publish flags. Note to self: wasted 1.5 days on this.
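A minimal sketch of what that change can look like when creating the container, assuming the orderer honours the usual ORDERER_GENERAL_LISTENADDRESS override (equivalently General.ListenAddress in orderer.yaml); the start script from the question may set this elsewhere:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--env ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh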

Unable to ping docker host but the world is reachable

From the container I am able to ping the rest of the world, but not the host on which the Docker container is running. I am sure someone has encountered this issue before.
See the details below:
Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-30-generic x86_64)
Container is using the "bridge" network
Docker version 18.06.1-ce, build e68fc7a
IP address for eth0: 135.25.87.162
IP address for eth1: 192.168.122.55
IP address for eth2: 135.21.171.209
IP address for docker0: 172.17.42.1
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 135.21.248.1 0.0.0.0 UG 0 0 0 eth1
135.21.171.192 * 255.255.255.192 U 0 0 0 eth2
135.21.248.0 * 255.255.255.0 U 0 0 0 eth1
135.25.87.128 * 255.255.255.192 U 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.122.0 * 255.255.255.192 U 0 0 0 eth1
#ping commands from container
# ping google.com
PING google.com (64.233.177.113): 56 data bytes
64 bytes from 64.233.177.113: icmp_seq=0 ttl=29 time=51.827 ms
64 bytes from 64.233.177.113: icmp_seq=1 ttl=29 time=50.184 ms
64 bytes from 64.233.177.113: icmp_seq=2 ttl=29 time=50.991 ms
^C--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 50.184/51.001/51.827/0.671 ms
# ping 135.25.87.162
PING 135.25.87.162 (135.25.87.162): 56 data bytes
^C--- 135.25.87.162 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
root@9ed17e4c2ee3:/opt/app/tomcat#
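One common thing to check in this situation (a diagnostic sketch, not a confirmed cause) is whether the host firewall accepts traffic arriving from the docker0 subnet, and whether strict reverse-path filtering on this multi-homed host is discarding the replies. On the host:
# Look for DROP/REJECT rules that would match packets from 172.17.0.0/16:
sudo iptables -L INPUT -n -v
# rp_filter: 1 = strict (can drop replies on multi-homed hosts), 0/2 = permissive:
sysctl net.ipv4.conf.all.rp_filter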

Docker: Connection from inside the container to localhost:port Refused

I'm trying to ensure the connection between the different containers and the localhost address (127.0.0.1) with port 8040. (My web application container runs on this port.)
root@a70b20fbda00:~# curl -v http://127.0.0.1
* Rebuilt URL to: http://127.0.0.1/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
This is what I get when I try to connect to localhost from inside the container:
root@a70b20fbda00:~# curl -v http://127.0.0.1:8040
* Rebuilt URL to: http://127.0.0.1:8040/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 8040 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8040: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 8040: Connection refused
About iptables in each container:
root@a70b20fbda00:~# iptables
bash: iptables: command not found
Connectivity between the containers is good:
root@635114ca18b7:~# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.253 ms
--- 172.17.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
root@635114ca18b7:~# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.100 ms
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
root@635114ca18b7:~# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.149 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.180 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.149/0.164/0.180/0.020 ms
Pinging 127.0.0.1:8040:
root@635114ca18b7:~# ping 127.0.01:8040
ping: unknown host 127.0.0.1:8040
What do I need to do in this case?
So the global picture is that there are two containers.
The first container runs a Tomcat server that deploys my web application, and it runs perfectly.
The second is a container that needs to connect to the web application at the URL http://127.0.0.1:8040/my_app.
You will have to use docker run --network host IMAGE:TAG to achieve the desired connection.
Further reading here.
Example:
docker run --network host --name CONTAINER1 IMAGE:tag
docker run --network host --name CONTAINER2 IMAGE:tag
Inside container CONTAINER2 you will be able to access the other container under the host name CONTAINER1.
And for accessing the service you will have to use CONTAINER:
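A short sketch of what access looks like under host networking on Linux: both containers share the host's network stack, so the Tomcat application listening on 8040 is reachable via the loopback address (CONTAINER2 and IMAGE:tag are the placeholders used above; this assumes curl is installed in that image):
docker run --network host --name CONTAINER2 IMAGE:tag
# then, inside CONTAINER2:
curl -v http://127.0.0.1:8040/my_app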
Based on the information provided, it looks like there are two containers. If these two containers are started by Docker without --net=host, then each of them gets a different IP address. Say your first container got 172.17.0.2 and the second one 172.17.0.3.
In this scenario each container gets its own networking stack, so 127.0.0.1 refers to that container's own networking stack, not the host's.
As pointed out by @kakabali, it's possible to run the containers with the host network, sharing the networking stack of the host.
One of the other options is to use the actual IP address of the first container in the second one.
second-container# curl http://172.17.0.2
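The address itself does not have to be hard-coded; it can be looked up from the host by container name (a sketch, with first-container standing in for the real name):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' first-container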
Or another option is to run the second container as the sidekick/sidecar container sharing the networking stack of the first one.
docker run --net=container:${ID_OF_FIRST_CONTAINER} ${IMAGE_SECOND}:${IMAGE_TAG_SECOND}
Or if you use links correctly:
docker run --name web -itd ${IMAGE_FIRST}:${TAG_FIRST}
docker run --link web -itd ${IMAGE_SECOND}:${TAG_SECOND}
Note: the docker --link feature is deprecated.
Another option is to use container management platforms which take care of service discovery for you automatically.
PS: You cannot ping an IP address on a specific port; ping uses ICMP, which has no notion of ports.
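To check whether a TCP port actually answers, use a TCP tool instead of ping; a sketch reusing the sample address from this answer and the application port from the question:
# connection test only:
nc -zv 172.17.0.2 8040
# or a full HTTP request:
curl -v http://172.17.0.2:8040/my_app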

How can I fix the network for Docker in Kubernetes?

I have a Kubernetes cluster and am using Jenkins.
Jenkins pipeline:
podTemplate(label: 'pod-golang', containers: [
    containerTemplate(name: 'golang', image: 'golang:latest', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'docker:17.11-dind', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]
) {
  node('pod-golang') {
    def app
    String applicationName = "auth"
    String buildNumber = "0.1.${env.BUILD_NUMBER}"

    stage 'Checkout'
    checkout scm

    container('docker') {
      stage 'Create docker image'
      app = docker.build("test/${applicationName}")
    }
  }
}
When I run "docker build" command in new (creating) container not working network:
Step 1/6 : FROM alpine:latest
---> e21c333399e0
Step 2/6 : RUN apk --no-cache add ca-certificates
---> Running in 8483bb918ee8
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
[91mWARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz: operation timed out
[0mEXITCODE 0[91mWARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz: operation timed out
if I use "docker run" command on host machine I see, It does not work properly network in "manual" started docker image:
root@node2:~/tmp# docker run --rm -it alpine ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
14 packets transmitted, 0 packets received, 100% packet loss
root@node2:~/tmp# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=12.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=12.9 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 12.927/12.943/12.960/0.114 ms
But when I use a pod created via kubectl, everything works.
How can I fix that?
Open another window, run the tcpdump -vvv host 8.8.8.8 command, and see whether traffic is going out.
Here is my host's output:
# tcpdump -vvv host 8.8.8.8
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
18:36:35.142633 IP (tos 0x0, ttl 63, id 2091, offset 0, flags [DF], proto ICMP (1), length 84)
webserver > google-public-dns-a.google.com: ICMP echo request, id 256, seq 0, length 64
18:36:35.170475 IP (tos 0x0, ttl 55, id 18270, offset 0, flags [none], proto ICMP (1), length 84)
google-public-dns-a.google.com > webserver: ICMP echo reply, id 256, seq 0, length 64
18:36:36.146145 IP (tos 0x0, ttl 63, id 2180, offset 0, flags [DF], proto ICMP (1), length 84)
webserver > google-public-dns-a.google.com: ICMP echo request, id 256, seq 1, length 64
# docker run --rm -it alpine ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=30.720 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=25.576 ms
64 bytes from 8.8.8.8: seq=2 ttl=54 time=28.464 ms
64 bytes from 8.8.8.8: seq=3 ttl=54 time=33.860 ms
64 bytes from 8.8.8.8: seq=4 ttl=54 time=25.525 ms
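To apply the same technique to the failing case, run the capture on the affected node while reproducing the failure and compare it with the working trace above (a sketch; docker0 is the default bridge, and enp0s3 should be replaced with the node's actual uplink interface):
# terminal 1, on the node: watch the Docker bridge
sudo tcpdump -nni docker0 host 8.8.8.8
# terminal 2, on the same node: reproduce the failure
docker run --rm -it alpine ping -c 4 8.8.8.8
# If echo requests show up on docker0 but never leave via the uplink
# (sudo tcpdump -nni enp0s3 host 8.8.8.8), forwarding or NAT on the node is at fault.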
