Wrong TCP source port in Docker container

I'm running a Docker container with Asterisk inside. Asterisk is listening on TCP port 5061, but for connections I see a different port in tcpdump.
docker run --log-driver none --name=asterisk-docker -dt --net=host --restart=always asterisk-docker
netstat output:
tcp 0 0 0.0.0.0:5061 0.0.0.0:* LISTEN 58731/asterisk
tcp 0 0 srv-tk:46315 217.0.26.101:sip ESTABLISHED
tcpdump output:
12:24:18.230394 IP 217.0.26.101.sip > srv-tk.46315: Flags [.], ack 3051, win 1217, options [nop,nop,TS val 391595046 ecr 3262544919], length 0
12:24:18.292636 IP 217.0.26.101.sip > srv-tk.46315: Flags [P.], seq 3444:4092, ack 3051, win 1217, options [nop,nop,TS val 391595061 ecr 3262544919], length 648
Why is this not 5061 as source/destination?

This is because Docker networking involves a lot of behind-the-scenes work with iptables: packet forwarding, different network layers, and so on.
To see all of this, run:
sudo iptables -S
To see a high-level layout of your containers, run:
docker ps
If you want to understand high-level Docker networking, I recommend reading this documentation: https://docs.docker.com/network/
You can find more about the system-level networking behind Docker networks here:
https://argus-sec.com/docker-networking-behind-the-scenes/
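As a concrete starting point, these two standard commands (nothing Docker-specific is assumed here) show where Docker's NAT rules, if any, are set up, and which process owns the ephemeral port from the capture above:
# list the NAT table, where Docker's port rewriting (if any) lives
sudo iptables -t nat -S
# map the ephemeral port from the capture (46315 here) back to its owning process
sudo ss -tnp 'sport = :46315'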

Related

Docker: Route Traffic from One Container to Another Container Through a VPN Container

I set up an SSTP client container (172.17.0.3) that communicates with an SSTP server container (172.17.0.2) via the ppp0 interface. All traffic from the SSTP client container is routed through its ppp0 interface, as seen using netstat on the SSTP client container (192.168.20.1 is the SSTP server container's ppp0 IP address):
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.20.1 0.0.0.0 UG 0 0 0 ppp0
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
Now, I have an HTTP server container (172.17.0.4) running, and I want to use yet another client container (for example, a container that runs Apache Benchmark ab) to talk to the HTTP server container via the SSTP server container. To do so, I use --net=container:sstp-client on the ab client container so it uses the SSTP client container's network. However, the ab client container cannot seem to reach the HTTP server container, even though it is able to benchmark servers on the Internet (e.g., 8.8.8.8). For another example, if I do traceroute from a container through the SSTP client container to 8.8.8.8:
docker run -it --name alpine --net=container:sstp-client alpine ash
/ # traceroute -s 192.168.20.2 google.com
traceroute to google.com (142.250.65.206) from 192.168.20.2, 30 hops max, 46 byte packets
1 192.168.20.1 (192.168.20.1) 1.088 ms 1.006 ms 1.077 ms
2 * * *
3 10.0.2.2 (10.0.2.2) 1.710 ms 1.695 ms 0.977 ms
4 * * *
...
I am able to finally reach Google.
But if I traceroute to my HTTP server container:
/ # traceroute -s 192.168.20.2 172.17.0.4
traceroute to 172.17.0.4 (172.17.0.4) from 192.168.20.2, 30 hops max, 46 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
...
It fails.
My suspicion is that the routing configuration on the SSTP server container is incorrect, but I am not sure how I can fix that to make it work. My goal is to be able to reach both the outside world and the internal containers. I've messed around with both iptables and route quite a bit, but still can't make it work. This is my current configuration of the SSTP server container:
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.255 UGH 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.0 UG 0 0 0 eth0
192.168.20.2 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
/ # iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i ppp+ -j ACCEPT
-A FORWARD -j ACCEPT
-A OUTPUT -o ppp+ -j ACCEPT
I've seen many online solutions for routing via a VPN container to the Internet, but nothing about routing to other containers. I'm very much a newbie in this area. Any suggestions welcome! Thank you.
I had to take a similar approach to this problem, although in my case it runs inside GitLab CI...
Start the VPN container and find its IP:
docker run -it -d --rm \
--name vpn-client-${SOME_VARIABLE} \
--privileged \
--net gitlab-runners \
-e VPNADDR=... \
-e VPNUSER=... \
-e VPNPASS=... \
auchandirect/forticlient || true && sleep 2
export VPN_CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vpn-client-${SOME_VARIABLE})
Next, use that variable in your application container to add a new route, e.g. with docker-compose:
version: "3.5"   # 3.5+ is needed for an external network with a "name:" key
services:
  test-server:
    image: alpine
    cap_add:
      - NET_ADMIN   # required for "ip route add" inside the container
    # ${VPN_CONTAINER_IP} is substituted by docker-compose from the variable
    # exported in the previous step
    command: "ip route add <CIDR_TARGET_SUBNET> via ${VPN_CONTAINER_IP}"
networks:
  default:
    name: gitlab-runners
    external: true
But be advised: the target subnet above has to be in CIDR format (e.g. 192.168.1.0/24), and the VPN container has to be in the SAME Docker network.
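A quick, hedged way to smoke-test the route with a one-off container on the same network (placeholders as above; <TARGET_HOST> is a hypothetical host inside the target subnet):
docker run --rm --cap-add NET_ADMIN --net gitlab-runners alpine \
  sh -c "ip route add <CIDR_TARGET_SUBNET> via ${VPN_CONTAINER_IP} && ping -c 3 <TARGET_HOST>"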

Docker network macvlan driver: gateway unreachable

I have a macvlan network created with the following command:
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.2 -o parent=wlp2s0 pub_ne
Where wlp2s0 is the name of the wireless interface of my laptop.
gateway is 192.168.1.1 and subnet 192.168.1.0/24
Then I have created and attached a container to this network:
docker run --rm -itd --network pub_ne --name myAlpine alpine:latest sh
In addition, I have created a virtual machine (VirtualBox provider) with a bridged network interface.
Ping works:
- docker container -> ubuntu vm (VM IP: 192.168.1.200)
Ping does not work:
- docker container -> gateway 192.168.1.1
- docker container -> external world (google.com)
Suggestions?
edit 1:
If I run tcpdump on the Docker host (sudo tcpdump -i wlp2s0 icmp), I see:
14:53:30.015822 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 29, length 64
14:53:31.016143 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 30, length 64
14:53:32.016426 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 31, length 64
14:53:33.016722 IP 192.168.1.56 > 216.58.205.142: ICMP echo request, id 5376, seq 32, length 64
Here 192.168.1.56 is my Docker container and 216.58.205.142 should be Google's IP address. No echo reply is received.
Macvlan is unlikely to work with IEEE 802.11.
Your wifi access point, and/or your host network stack, are not going to be thrilled.
You might want to try ipvlan instead: add -o ipvlan_mode=l2 to your network creation call and see if that helps.
That might very well still not work... (e.g., if you rely on DHCP and your DHCP server uses MAC addresses and not client IDs)
And your only (reasonable) solution might be to drop the wifi entirely and wire the device up instead... (or move away from macvlan and use host / bridge networking, whichever is most convenient)
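For reference, a minimal sketch of that ipvlan variant, reusing the question's parameters (I assume the router at 192.168.1.1 is the real gateway; note the question's --gateway value differs from the stated one):
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o ipvlan_mode=l2 -o parent=wlp2s0 pub_ne_ipvlan
docker run --rm -itd --network pub_ne_ipvlan --name myAlpineIpvlan alpine:latest sh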

docker overlay network problems connecting containers

We are running an environment of 6 engines each with 30 containers.
Two engines are running containers with nginx proxy. These two containers are the only way into the network.
It is now the second time that we are facing a major problem with a set of containers in this environment:
Both nginx containers cannot reach some of the containers on other machines. Only one physical engine has this problem; all the others are fine. It started with timeouts on some machines, and now, after 24 hours, all containers on that machine have the problem.
Some more details:
Nginx is running on machine prod-3.
Second Nginx is running on machine prod-6.
Containers with problems are running on prod-7.
Neither nginx can reach the containers, but the containers can reach nginx via ping.
At the beginning, and this morning, we could still reach some of the containers, others not. It started with timeouts; now we cannot ping the containers in the overlay network at all. This time we are able to look at the traffic using tcpdump:
On the nginx container (10.10.0.37 on prod-3) we start a ping, and as you can see, 100% packet loss:
root@e89c16296e76:/# ping ew-engine-evwx-intro
PING ew-engine-evwx-intro (10.10.0.177) 56(84) bytes of data.
--- ew-engine-evwx-intro ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7056ms
root@e89c16296e76:/#
On the target machine prod-7 (not inside the container) we see that all ping packets are received (so the overlay network is routing correctly to prod-7):
wurzel@rv_____:~/eventworx-admin$ sudo tcpdump -i ens3 dst port 4789 |grep 10.10.0.177
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
IP 10.10.0.37.35270 > 10.10.0.177.http: Flags [S], seq 2637350294, win 28200, options [mss 1410,sackOK,TS val 1897214191 ecr 0,nop,wscale 7], length 0
IP 10.10.0.37.35270 > 10.10.0.177.http: Flags [S], seq 2637350294, win 28200, options [mss 1410,sackOK,TS val 1897214441 ecr 0,nop,wscale 7], length 0
IP 10.10.0.37.35326 > 10.10.0.177.http: Flags [S], seq 2595436822, win 28200, options [mss 1410,sackOK,TS val 1897214453 ecr 0,nop,wscale 7], length 0
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 1, length 64
IP 10.10.0.37.35326 > 10.10.0.177.http: Flags [S], seq 2595436822, win 28200, options [mss 1410,sackOK,TS val 1897214703 ecr 0,nop,wscale 7], length 0
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 2, length 64
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 3, length 64
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 4, length 64
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 5, length 64
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 6, length 64
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 7, length 64
IP 10.10.0.37 > 10.10.0.177: ICMP echo request, id 83, seq 8, length 64
^C304 packets captured
309 packets received by filter
0 packets dropped by kernel
wurzel@_______:~/eventworx-admin$
First, you can see that there is no ICMP answer (so the firewall is not responsible, nor is AppArmor).
Inside the responsible container (evwx-intro = 10.10.0.177) nothing is received; the interface eth0 (10.10.0.0) is just silent:
root@ew-engine-evwx-intro:/home/XXXXX# tcpdump -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
root@ew-engine-evwx-intro:/home/XXXXX#
It's really strange.
Is there any other Docker tool that can help us see what's going on?
We did not change anything on the firewall, and there were no automatic updates of the system (except maybe security updates).
The only activity was that some old containers were reactivated after a long period (maybe 1-2 months of inactivity).
We are really lost; if you have experienced something comparable, it would be very helpful to understand the steps you took.
Many thanks for any help with this.
=============================================================
6 hours later
After trying nearly everything for a full day, we made a final attempt:
(1) stop all the containers
(2) stop docker service
(3) stop docker socket service
(4) restart machine
(5) start the containers
... now it looks good at the moment.
To conclude:
(1) We have no clue what was causing the problem. This is bad.
(2) We have learned that the overlay network is not the problem, because the traffic is reaching the target machine where the container lives.
(3) We are able to trace the network traffic until it reaches the target machine. Somehow it is not "entering" the container, because inside the container the network interface shows no activity at all.
We have no knowledge of the VXLAN virtual network used by Docker, so if anybody has a hint, could you help us with a link or a tool for it?
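If you want to dig into it yourself: the overlay driver is built on VXLAN, and its plumbing lives in hidden network namespaces. A hedged way to inspect it (the /var/run/docker/netns path and the 1-<network-id> naming are assumptions based on a stock Docker setup):
# list the network namespaces docker created (one per overlay network, named 1-<network-id>)
sudo ls /var/run/docker/netns/
# enter the overlay namespace and dump the VXLAN forwarding database,
# which maps container MACs to the remote hosts they live on
sudo nsenter --net=/var/run/docker/netns/1-<network-id> bridge fdb show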
Many many thanks in advance.
Andre
======================================================
4 days later...
Just had the same situation again after updating docker-ce 18.06 to 18.09.
We have two machines using docker-ce 18 in combination with Ubuntu 18.04, and I had just updated docker-ce to 18.09 because of this problem (Docker container should not resolve DNS in Ubuntu 18.04 ... new resolved service).
I stopped all machines, updated docker, restart machine, started all machines.
Problem: Same problem as described in this post. The ping was received by the target host operating system but not forwarded to the container.
Solution:
1. stop all containers and docker
2. consul leave
3. clean up all entries in the consul key store on the other machines (they were not deleted by the leave)
4. start consul
5. restart all engines
6. restart the nginx container ... gotcha, the network is working now.
A sketch of this sequence as shell commands follows below.
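The consul key prefix and systemd unit names below are assumptions; adjust them to your own setup:
# on the affected host: stop the containers and the docker engine
docker stop $(docker ps -q)
sudo systemctl stop docker docker.socket
# remove this node from the consul cluster
consul leave
# on the other machines: delete the stale entries from the consul KV store
# (the "docker" prefix is an assumption; check which keys your setup stores)
curl -X DELETE "http://localhost:8500/v1/kv/docker?recurse"
# bring consul and docker back up (unit names assumed)
sudo systemctl start consul docker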
Once again the same problem was hitting us.
We have 7 servers (each running docker as described above), two nginx entry points.
It looks like errors in the consul key store are the real problem, causing the Docker network to show the strange behaviour described above.
In our configuration, all 7 servers have their own local consul instance, which synchronises with the others. For network setup, each Docker daemon does a lookup in its local consul key store.
Last week we noticed that, at the same time as the network reachability problem, the consul clients also reported synchronisation problems (leader election problems, retries, etc.).
The final solution was to stop the Docker engines and the consul clients, delete the consul database on some servers, join them to the others again, and start the Docker engines.
It looks like the consul service is a critical part of the network configuration...
In progress...
I faced the exact same issue with an overlay network in a Docker Swarm setup.
I've found that it's not an OS or Docker problem. The affected servers use Intel X-series NICs; other servers with I-series NICs work fine.
Do you use on-premises servers, or a cloud provider?
We use OVH, and it might be caused by some datacenter network misconfiguration.

Process owner of a docker program

I have started an nginx container bound on the host network as follows:
docker run --rm -d --network host --name mynginx nginx
However, when querying process information with the ss command, this seems to be a plain nginx process and not a docker process:
$ ss -tuap 'sport = :80'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 0.0.0.0:http 0.0.0.0:* users:(("nginx",pid=16563,fd=6),("nginx",pid=16524,fd=6))
why is that?
You configured the nginx process to run in the host networking namespace with --net host. In that mode you do not set up port forwarding from the host to the container network (e.g. -p 80:80). Had you done the port forwarding, you would see a docker process on the host forwarding to the same port in the container namespace for the nginx process.
Keep in mind that containers are a method for running an application with kernel options for things like namespacing; a container is not a VM running under a separate OS. So you will see processes running and ports opened directly on the host.
Here's an example of what it would look like if you forwarded the port instead of using the host network namespace, and how you can also look at the network namespace inside the container:
$ docker run --rm -d -p 8000:80 --name mynginx nginx
d177bc43166ad59f5cdf578eca819737635c43b2204b2f75f2ba54dd5a9cffbb
$ sudo ss -tuap 'sport = :8000'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 :::8000 :::* users:(("docker-proxy",pid=25229,fd=4))
$ docker run -it --rm --net container:mynginx --pid container:mynginx nicolaka/netshoot ss -tuap 'sport = :80'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 *:http *:* users:(("nginx",pid=1,fd=6))
The docker-proxy process there is the default way that docker forwards a port to the container.
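If you want to see that proxy process and its arguments on the host, a plain ps call is enough (nothing Docker-specific assumed):
ps -C docker-proxy -o pid,args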
I am afraid there is some misunderstanding here about the so-called docker process.
First of all, the ss command doesn't show what kind of process it is. It may show the application name (nginx here), but from that alone we cannot call it a "pure" nginx process.
You could try pwdx nginx_pid. Also, each running container is a process that we can inspect with ps -ef on its host machine.
Above all, you could use ps -ef | grep nginx and pwdx nginx_pid to find out what kind of process it is.
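For instance, with the PIDs from the ss output in the question (16524 and 16563), this shows the nginx processes and their working directories:
ps -ef | grep nginx
sudo pwdx 16524 16563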

Can't ping docker IPv6 container

I ran the Docker daemon with global IPv6 for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After it I ran docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container). But I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping the docker0 bridge by its IPv6 address.
The output of route -n -6 contains these lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
docker0 interface has global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it is the global IPv6 prefix of my eth0 interface.
Does Docker require some additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the Neighbor Discovery Protocol is not configured. Other hosts do not know that your container is behind your host. I do this after starting a container with IPv6 (on the host OS, in the ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
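To verify the proxy entry took effect (plain iproute2; interface_name as above):
ip -6 neigh show proxy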
Beware of IPv6: docker developers say in replies to bug reports they do not have enough time to make IPv6 production-ready in version 1.10 and say nothing about 1.11.
Maybe you are using the wrong ping command. For IPv6 it is ping6:
$ ping6 2607:f0d0:1002:51::4
