TCP retransmission occurs on any Docker network.
Simple test: create a VM in Azure running CentOS 7.7, then:
yum update
yum install docker
systemctl start docker
docker run --name mynginx1 -P -d nginx
tshark -tad -i any -Y "tcp.analysis.retransmission"
curl localhost:32768
This results in TCP retransmissions:
[root@vm1 ~]# tshark -tad -i any -Y "tcp.analysis.retransmission"
Running as user "root" and group "root". This could be dangerous.
Capturing on 'any'
121 2020-04-24 07:26:55.504210673 109.81.211.189 -> 10.0.0.4 SSH 92 [TCP Retransmission] Encrypted request packet len=36
418 2020-04-24 07:27:04.982215355 109.81.211.189 -> 10.0.0.4 SSH 92 [TCP Retransmission] Encrypted request packet len=36
572 2020-04-24 07:27:10.746826933 172.17.0.1 -> 172.17.0.2 HTTP 147 [TCP Retransmission] GET / HTTP/1.1
576 2020-04-24 07:27:10.747858244 172.17.0.2 -> 172.17.0.1 TCP 307 [TCP Retransmission] http > 40514 [PSH, ACK] Seq=1 Ack=80 Win=29056 Len=239 TSval=1217913 TSecr=1217912
580 2020-04-24 07:27:10.747930345 172.17.0.2 -> 172.17.0.1 TCP 680 [TCP Retransmission] http > 40514 [PSH, ACK] Seq=240 Ack=80 Win=29056 Len=612 TSval=1217914 TSecr=1217913[Reassembly error, protocol TCP: New fragment overlaps old data (retransmission?)]
The same problem occurs on Kubernetes (tested with the flannel plugin).
This issue significantly reduces container performance; in our case, a high-performance message parser running inside Docker produced 4x fewer results. Using the Docker host network resolves the issue, but using the bridge network triggers it.
Any advice or help?
For reference: Wireshark shows the reason for the retransmissions as out-of-order/duplicate packets.
(screenshot: Wireshark trace detail)
Finally we identified the problem: the image we used was based on Alpine. After we moved the Docker base image to Debian/Ubuntu, the retransmission problem was gone.
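For illustration, the fix amounted to swapping the base image in the Dockerfile; a minimal sketch (the tags here are examples, not our exact versions):

# before: FROM alpine:3.11      <- exhibited the retransmissions in our tests
# after:
FROM debian:buster-slim
# ...rest of the image build unchanged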
I set up an SSTP client container (172.17.0.3) that communicates with an SSTP server container (172.17.0.2) via the ppp0 interface. All traffic from the SSTP client container is routed through its ppp0 interface, as seen using netstat on the SSTP client container (192.168.20.1 is the SSTP server container's ppp0 IP address):
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.20.1 0.0.0.0 UG 0 0 0 ppp0
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
Now, I have an HTTP server container (172.17.0.4) running, and I want to use yet another client container (for example, one that runs Apache Benchmark, ab) to talk to the HTTP server container via the SSTP server container. To do so, I use --net=container:sstp-client on the ab client container so it uses the SSTP client container's network. However, the ab client container cannot seem to reach the HTTP server container, even though it is able to benchmark servers on the Internet (e.g., 8.8.8.8). For another example, if I traceroute from a container through the SSTP client container to google.com:
docker run -it --name alpine --net=container:sstp-client alpine ash
/ # traceroute -s 192.168.20.2 google.com
traceroute to google.com (142.250.65.206) from 192.168.20.2, 30 hops max, 46 byte packets
1 192.168.20.1 (192.168.20.1) 1.088 ms 1.006 ms 1.077 ms
2 * * *
3 10.0.2.2 (10.0.2.2) 1.710 ms 1.695 ms 0.977 ms
4 * * *
...
I am finally able to reach Google.
But if I traceroute to my HTTP server container:
/ # traceroute -s 192.168.20.2 172.17.0.4
traceroute to 172.17.0.4 (172.17.0.4) from 192.168.20.2, 30 hops max, 46 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
...
It fails.
My suspicion is that the routing configuration on the SSTP server container is incorrect, but I am not sure how I can fix that to make it work. My goal is to be able to reach both the outside world and the internal containers. I've messed around with both iptables and route quite a bit, but still can't make it work. This is my current configuration of the SSTP server container:
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.255 UGH 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.0 UG 0 0 0 eth0
192.168.20.2 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
/ # iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i ppp+ -j ACCEPT
-A FORWARD -j ACCEPT
-A OUTPUT -o ppp+ -j ACCEPT
I've seen many online solutions for routing via a VPN container to the Internet, but nothing about routing to other containers. I'm very much a newbie in this area. Any suggestions welcome! Thank you.
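Not part of the original question, but a minimal sketch of the kind of fix the suspicion above points at: the HTTP server container (172.17.0.4) has no route back to 192.168.20.0/24, so the SSTP server container could masquerade VPN traffic heading for the docker bridge (assumes iptables is usable in that container, as its iptables -S output suggests):

# on the SSTP server container (172.17.0.2):
sysctl -w net.ipv4.ip_forward=1                 # forward between ppp0 and eth0
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o eth0 -j MASQUERADE
# replies from 172.17.0.4 now return to 172.17.0.2, which relays them over ppp0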
I had to take a similar approach to this problem; in my case, however, it runs inside gitlab-ci...
Start the VPN container and find its IP:
docker run -it -d --rm \
--name vpn-client-${SOME_VARIABLE} \
--privileged \
--net gitlab-runners \
-e VPNADDR=... \
-e VPNUSER=... \
-e VPNPASS=... \
auchandirect/forticlient || true && sleep 2
export VPN_CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vpn-client-${SOME_VARIABLE})
Next you have to pass a new variable into your application container and add a new route, e.g., with docker-compose:
version: "3"
services:
test-server:
image: alpine
variables:
ROUTE_IP: "${VPN_CONTAINER_IP}"
command: "ip route add <CIDR_TARGET_SUBNET> via ${ROUTE_IP}"
networks:
default:
name: gitlab-runners
external: true
But be advised: the target subnet above has to be in CIDR format (e.g., 192.168.1.0/24), and the VPN container has to be in the SAME Docker network as the application container.
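For a concrete (hypothetical) instance of the route command, with made-up values, the container would run:

ip route add 10.20.0.0/16 via 172.18.0.5   # 10.20.0.0/16 = subnet behind the VPN, 172.18.0.5 = VPN container IP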
$ docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
90e01955edcd: Pull complete
Digest: sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
Status: Downloaded newer image for busybox:latest
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
285 packets transmitted, 0 packets received, 100% packet loss
/ #
Why does this happen? What can I do to resolve it?
Hi, can you check your network interface (eth0 or whatever it is named), or restart it:
ifdown eth0
ifup eth0
If the instance is in a VPC, check that a NAT or internet gateway is in place to provide a connection to the Internet.
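One more check worth adding (my suggestion, not part of the original answer): verify on the host that Docker's outbound NAT rule is in place, since without it bridge containers cannot reach the Internet:

iptables -t nat -L POSTROUTING -n | grep -i masquerade
# expect a MASQUERADE rule covering the bridge subnet, e.g. 172.17.0.0/16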
I was behind a proxy, and the solution was to set the IP address of the proxy in ~/.docker/config.json instead of its hostname.
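For reference, a minimal sketch of the proxies section that file supports (the address 192.0.2.10:3128 is a placeholder, not from the original answer):

{
  "proxies": {
    "default": {
      "httpProxy": "http://192.0.2.10:3128",
      "httpsProxy": "http://192.0.2.10:3128"
    }
  }
}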
I am running a Debian server (stable) with the docker.io Debian package. This is the one distributed by Debian, not the one from the Docker developers. Since docker.io is only available in sid, I have installed it from there (apt install -t unstable docker.io).
My firewall does allow connections to/from docker containers:
$ sudo ufw status
(...)
Anywhere ALLOW 172.17.0.0/16
172.17.0.0/16 ALLOW Anywhere
I also have this in /etc/ufw/before.rules:
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 172.17.0.0/16 -o eth0 -j MASQUERADE
So I have created an image with:
$ sudo debootstrap stable ./stable-chroot http://deb.debian.org/debian > /dev/null
$ sudo tar -C stable-chroot -c . | docker import - debian-stable
Then started a container and installed apache2 and netcat. Port 1111 on the host machine will be redirected to port 80 on the container:
$ docker run -ti -p 1111:80 debian-stable bash
root@dc4996de9fe6:/# apt update
(... usual output from apt update ...)
root@dc4996de9fe6:/# apt install apache2 netcat
(... expected output, installation successful ...)
root@dc4996de9fe6:/# service apache2 start
root@dc4996de9fe6:/# service apache2 status
[ ok ] apache2 is running.
And from the host machine I can connect to the apache server:
$ curl 127.0.0.1:1111
(... HTML from the Debian apache placeholder page ...)
$ telnet 127.0.0.1 1111
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
And it waits for me to type (if I type GET / I get the Debian apache placeholder page). Ok. And if I stop apache inside the container,
root@06da401a5724:/# service apache2 stop
[ ok ] Stopping Apache httpd web server: apache2.
root@06da401a5724:/# service apache2 status
[FAIL] apache2 is not running ... failed!
Then connections to port 1111 on the host will be rejected (as expected):
$ telnet 127.0.0.1 1111
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Now, if I start netcat on the container, listening on port 80:
root@06da401a5724:/# nc -l 172.17.0.2 80
Then I cannot connect from the host!
$ telnet localhost 1111
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
The same happens if I try nc -l 127.0.0.1 80 in the container.
What could be happening? Both apache and netcat were listening on port 80. What have I missed?
I'd appreciate any hints...
Update: if I try this:
root@12b8fd142e00:/# nc -vv -l -p 80
listening on [any] 80 ...
172.17.0.1: inverse host lookup failed: Unknown host
invalid connection to [172.17.0.2] from (UNKNOWN) [172.17.0.1] 54876
Then it works!
Now it's weird... ifconfig inside the container tells me it has IP 172.17.0.2, but I can only use netcat binding to 172.17.0.1:
root@12b8fd142e00:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::42:acff:fe11:2 prefixlen 64 scopeid 0x20<link>
And Apache seems to want to 172.17.0.2 instead:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
but it actually uses 172.17.0.1:
root@12b8fd142e00:/# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 12b8fd142e00:http 172.17.0.1:54942 TIME_WAIT
tcp 0 0 12b8fd142e00:39528 151.101.48.204:http TIME_WAIT
Apache is not listening on 172.17.0.1; that's the address of the host (on the docker bridge).
In the netstat output, the local address has been resolved to 12b8fd142e00. Use the -n option with netstat to see unresolved (numeric) addresses (for example, netstat -plnet to see listening sockets). 172.17.0.1 is the foreign address that connected to Apache (and it's indeed the host).
The last line in the netstat output shows that some process made a connection to 151.101.48.204:80, probably to make an HTTP request. You can see the PID/name of the process with netstat -p.
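For example, run inside the container (a usage sketch, not output from the original question):

netstat -plnt   # -p owning process, -l listening sockets, -n numeric addresses, -t TCP only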
I'm trying to ensure connectivity between the different containers and the localhost address (127.0.0.1) with port 8040 (my web application container runs on this port).
root@a70b20fbda00:~# curl -v http://127.0.0.1
* Rebuilt URL to: http://127.0.0.1/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
This is what I get when I try to connect to localhost from inside the container:
root@a70b20fbda00:~# curl -v http://127.0.0.1:8040
* Rebuilt URL to: http://127.0.0.1:8040/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 8040 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8040: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 8040: Connection refused
About iptables in each container:
root@a70b20fbda00:~# iptables
bash: iptables: command not found
Connectivity between the containers is good:
root@635114ca18b7:~# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.253 ms
--- 172.17.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
root@635114ca18b7:~# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.100 ms
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
root@635114ca18b7:~# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.149 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.180 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.149/0.164/0.180/0.020 ms
Pinging 127.0.0.1:8040:
root@635114ca18b7:~# ping 127.0.0.1:8040
ping: unknown host 127.0.0.1:8040
What do I need to do in this case?
So, the overall picture: there are two containers.
The first container runs a Tomcat server that deploys my web application, and it runs perfectly.
The second is a container that needs to connect to the web application.
URL: http://127.0.0.1:8040/my_app
You will have to use docker run --network host IMAGE:TAG to achieve the desired connection.
Further reading here.
Example:
docker run --network host --name CONTAINER1 IMAGE:tag
docker run --network host --name CONTAINER2 IMAGE:tag
Inside CONTAINER2 you will be able to access the other container under the host name CONTAINER1.
And to access the service you will have to use CONTAINER1:<port>.
Based on the information provided, it looks like there are two containers. If these two containers are started by Docker without --net=host, then each of them gets a different IP address. Say your first container got 172.17.0.2 and the second one 172.17.0.3.
In this scenario each container gets its own networking stack, so 127.0.0.1 refers to that container's own networking stack, not the other container's.
As pointed out by @kakabali, it's possible to run the containers with the host network, sharing the networking stack of the host.
One of the other options is to use the actual IP address of the first container in the second one.
second-container# curl http://172.17.0.2
Or another option is to run the second container as the sidekick/sidecar container sharing the networking stack of the first one.
docker run --net=container:${ID_OF_FIRST_CONTAINER} ${IMAGE_SECOND}:${IMAGE_TAG_SECOND}
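As a concrete (hypothetical) instance, keeping the image placeholders from above and the question's port 8040 (the alpine image and wget call are my example choices):

ID_OF_FIRST_CONTAINER=$(docker run -d ${IMAGE_FIRST}:${TAG_FIRST})
docker run --net=container:${ID_OF_FIRST_CONTAINER} alpine \
    wget -qO- http://127.0.0.1:8040/my_app   # 127.0.0.1 now refers to the first container's stack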
Or if you use links correctly:
docker run --name web -itd ${IMAGE_FIRST}:${TAG_FIRST}
docker run --link web -itd ${IMAGE_SECOND}:${TAG_SECOND}
Note: Docker's --link feature is deprecated.
Another option is to use container management platforms which take care of service discovery for you automatically.
PS: You cannot ping an IP address on a specific port; ping works at the IP layer and has no notion of ports. For more info, click here.
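To check a TCP port instead of pinging it, netcat is one option (my addition, not from the original answer):

nc -zv 172.17.0.2 8040   # -z: just scan, don't send data; -v: verbose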
My setup is the following:
Host: Win10
Guest: Ubuntu 15.10 (clean install, only docker and nodejs are added)
Base image: https://hub.docker.com/r/microsoft/aspnet/ 1.0.0-beta8-coreclr
Inside the guest I have installed Docker and created an image (the sample web app, added with yeoman, on top of the base image above). When I run the image in a container, I can successfully ping the container IP (e.g., 172.17.0.2) from the Linux guest.
$sudo docker run -d -p 80:5000 --name web myapp
$sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "web"
172.17.0.2
$ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.060 ms
1 packets transmitted, 1 received, 0% packet loss, time 999ms
$curl 172.17.0.2:80
curl: (7) Failed to connect to 172.17.0.2 port 80: Connection refused
I can also connect to the container and execute commands like ping; however, from the Linux machine (guest in VirtualBox, host for Docker) I cannot access the web app hosted inside the container, as seen above. I tried several approaches like mapping to the host IP address, but none of them worked. Does anyone have ideas where to start? Could the issue be that Docker is installed inside a VirtualBox machine?
Thank you in advance.
Edit: Here are the logs from the container:
Could not open /etc/lsb_release. OS version will default to the empty string.
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Your command tells Docker to essentially proxy requests from port 80 of the Linux guest to port 5000 of the container. So the curl command you tried doesn't work because you're trying on port 80 on the container, while the container itself has a service listening on port 5000.
To connect to the container directly, you would use (on the Linux guest):
curl 172.17.0.2:5000
To access via the published port on the Linux guest (from your host):
curl (Linux guest IP)
Or (from the Linux guest):
curl localhost
Edit: This will also prove to be problematic:
Now listening on: http://localhost:5000
You'll want your app inside the container to bind to all interfaces (0.0.0.0) so that it also listens on the container's assigned IP. Bound only to localhost, it won't be reachable from outside the container.
You might find this example useful:
https://github.com/aspnet/Home/blob/dev/samples/1.0.0-beta8/HelloWeb/project.json
This line specifies that the app bind to all interfaces (using "*") on port 5004:
21 "kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
You'll need similar configuration.
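Adapted to this question's app, a sketch of the same line binding all interfaces on port 5000, to match the -p 80:5000 mapping (an assumption based on the sample above, not the poster's actual project.json):

"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5000"

With that in place, curl localhost on the Linux guest should reach the app through the published port.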