It's possible in Wireshark (View -> Time Display Format -> Microseconds); see the attached screenshot of the Wireshark settings. Can somebody please share how to do this with tshark? I can see there are -t and -u options, and they take care of some of the date-time formatting, but not the sub-second precision part.
I need this to automate a workflow: I receive capture files with both microsecond and nanosecond precision, but the filtered text output must be normalized.
If your capture file contains timestamps of that precision, -t a will print them; see my example:
% tshark -r packetcapture.cap -t a
1 10:23:40.232514 192.168.3.140 → 192.168.1.1 TLSv1.2 0 Application Data
2 10:23:40.232524 192.168.1.1 → 192.168.3.140 TCP 0 443 → 60152 [ACK] Seq=1 Ack=77 Win=514 Len=0 TSval=1643984880 TSecr=1054195834
3 10:23:40.232785 192.168.1.1 → 192.168.3.140 TLSv1.2 0 Application Data
4 10:23:40.235273 192.168.3.140 → 192.168.1.1 TCP 0 60152 → 443 [ACK] Seq=77 Ack=7095 Win=1937 Len=0 TSval=1054195837 TSecr=1643984880
5 10:23:40.235594 192.168.3.140 → 192.168.1.1 TCP 0 [TCP Window Update] 60152 → 443 [ACK] Seq=77 Ack=7095 Win=2048 Len=0 TSval=1054195837 TSecr=1643984880
The other formats are described in the tshark man page.
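Regarding the normalization part of the question: one option is to post-process the text output. A minimal sketch (the sed expression is my own, not from the original answer) that truncates nanosecond timestamps down to microseconds while leaving microsecond captures unchanged:
% tshark -r packetcapture.cap -t a | sed -E 's/(\.[0-9]{6})[0-9]{3}/\1/'   # keep 6 fractional digits
Since sed without the g flag replaces only the first match per line, only the leading timestamp is affected; fields such as TSval/TSecr contain no decimal point and are left alone.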
TCP retransmission occurs on any Docker network.
Simple test: create a VM in Azure (CentOS 7.7), then:
yum update
yum install docker
systemctl start docker
docker run --name mynginx1 -P -d nginx
tshark -tad -i any -Y "tcp.analysis.retransmission"
curl localhost:32768
This results in TCP retransmissions:
[root@vm1 ~]# tshark -tad -i any -Y "tcp.analysis.retransmission"
Running as user "root" and group "root". This could be dangerous.
Capturing on 'any'
121 2020-04-24 07:26:55.504210673 109.81.211.189 -> 10.0.0.4 SSH 92 [TCP Retransmission] Encrypted request packet len=36
418 2020-04-24 07:27:04.982215355 109.81.211.189 -> 10.0.0.4 SSH 92 [TCP Retransmission] Encrypted request packet len=36
572 2020-04-24 07:27:10.746826933 172.17.0.1 -> 172.17.0.2 HTTP 147 [TCP Retransmission] GET / HTTP/1.1
576 2020-04-24 07:27:10.747858244 172.17.0.2 -> 172.17.0.1 TCP 307 [TCP Retransmission] http > 40514 [PSH, ACK] Seq=1 Ack=80 Win=29056 Len=239 TSval=1217913 TSecr=1217912
580 2020-04-24 07:27:10.747930345 172.17.0.2 -> 172.17.0.1 TCP 680 [TCP Retransmission] http > 40514 [PSH, ACK] Seq=240 Ack=80 Win=29056 Len=612 TSval=1217914 TSecr=1217913[Reassembly error, protocol TCP: New fragment overlaps old data (retransmission?)]
The same problem occurs on Kubernetes (tested with the flannel plugin).
This issue significantly reduces container performance; in our case a high-performance message parser inside Docker produces 4x fewer results. Using the Docker host network resolves the issue, but using the bridge network triggers it.
Any advice/help, please?
For reference: Wireshark identifies the cause of the retransmissions as out-of-order/duplicate packets.
(screenshot: Wireshark trace detail)
We finally identified the problem as the Alpine-based image we were using. After we moved the Docker base image to Debian/Ubuntu, the retransmission problem was gone.
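A minimal sketch of the change (the image tags are illustrative; the original post does not name exact versions):
# before: Alpine base image, TCP retransmissions on the bridge network
FROM alpine:3.11
# after: Debian base image, retransmissions gone
FROM debian:buster-slim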
I have a network-related issue on a Kubernetes host using the Calico network layer. For continuous integration I need to run Docker in Docker, but running a simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
RUN traceroute google.com
produces output:
Step 1/4 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/4 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/4 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
---> 3619cb81e582
Step 4/4 : RUN traceroute google.com
---> Running in 3aa7908347ba
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 172.17.0.1 (172.17.0.1) 0.009 ms 0.005 ms 0.003 ms
It seems the Docker container gets invalid routing when created outside of Kubernetes. Pods orchestrated by Kubernetes can access the internet normally:
bash-5.0# ping -c 3 google.com
PING google.com (216.58.201.110) 56(84) bytes of data.
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=1 ttl=55 time=0.726 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=2 ttl=55 time=0.586 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=3 ttl=55 time=0.451 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 0.451/0.587/0.726/0.115 ms
bash-5.0# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.1.1 * 255.255.255.255 UH 0 0 0 eth0
bash-5.0# traceroute google.com
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 10-68-149-194.kubelet.kube-system.svc.kube.example.com (10.68.149.194) 0.006 ms 0.005 ms 0.004 ms
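One workaround worth trying (my own suggestion, not from the original post) is to run the build steps on the host network, bypassing the default Docker bridge whose routing looks broken here:
docker build --network=host -t citest .   # RUN steps share the host's network stack
This is as much a diagnostic as a fix: if ping and traceroute succeed with it, the problem is confined to the bridge network's routing/NAT on this host.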
We created a Docker container like this:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--expose=7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
but we are unable to connect to port 7050 on the container:
root@dcee7e74266f:/home# nc -vz 10.0.0.194 7050
nc: connect to 10.0.0.194 port 7050 (tcp) failed: Connection refused
We are able to ping the container:
root@dcee7e74266f:/home# ping 10.0.0.194
PING 10.0.0.194 (10.0.0.194) 56(84) bytes of data.
64 bytes from 10.0.0.194: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.0.194: icmp_seq=2 ttl=64 time=1.30 ms
64 bytes from 10.0.0.194: icmp_seq=3 ttl=64 time=0.668 ms
64 bytes from 10.0.0.194: icmp_seq=4 ttl=64 time=1.10 ms
64 bytes from 10.0.0.194: icmp_seq=5 ttl=64 time=0.631 ms
^C
--- 10.0.0.194 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.631/0.902/1.301/0.261 ms
and we can also see a process listening on port 7050 in the container:
root@9756199efefa:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:7050 0.0.0.0:* LISTEN 0 10097930 7/orderer
tcp 0 0 127.0.0.11:34865 0.0.0.0:* LISTEN 0 10097705 -
udp 0 0 127.0.0.11:51385 0.0.0.0:* 0 10097704 -
What is going on here? How can we fix this?
EDIT: We are on an overlay network. The publish flag suggested in the answer is not applicable, as we are doing container-to-container communication. We tried it anyway, and it doesn't work.
One thing we have noticed is that if we run:
docker network inspect <our-network-name>
it prints, among other things, a Containers section, but that section lists only the containers on the host where docker network inspect is executed. Containers hosted on other nodes are not listed (also mentioned here).
We verified that if we run:
docker node ls
all the nodes are part of the swarm.
It seems other people have also run into this issue (e.g., here), but what is the solution?
Note: we are able to connect to another container running a different service, exposed on port 7054. That container was created without even using the expose flag.
root@dcee7e74266f:/home# nc -zv 10.0.0.164 7054
Connection to 10.0.0.164 7054 port [tcp/*] succeeded!
We did further debugging with tcpdump; its output is identical to what you get when someone tries to connect to a port on which no process is listening. But as shown earlier, netstat shows a listening process, and we can connect to it from localhost.
Output of tcpdump:
root@dcee7e74266f:/test# tcpdump -s0 host 10.0.0.195
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:44:45.978583 IP dcee7e74266f.52148 > orderer.dscsa_net.7050: Flags [S], seq 3845506108, win 28200, options [mss 1410,sackOK,TS val 4203049443 ecr 0,nop,wscale 7], length 0
23:44:45.979324 IP orderer.dscsa_net.7050 > dcee7e74266f.52148: Flags [R.], seq 0, ack 3845506109, win 0, length 0
The R (RST) flag tells the client to reset the connection.
Output of traceroute:
root@dcee7e74266f:/test# traceroute 10.0.0.195
traceroute to 10.0.0.195 (10.0.0.195), 30 hops max, 60 byte packets
1 orderer.dscsa_net (10.0.0.195) 1.008 ms 0.900 ms 0.872 ms
Expose only sets metadata on the image or container; it does not make the port externally accessible. The option you are looking for is publish:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--publish=7050:7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
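With the port published, it should then be reachable through the host's IP rather than only the container IP; a quick check could be (the host address is a placeholder):
docker start orderer
nc -vz <host-ip> 7050
Note the asker's EDIT above, though: for container-to-container traffic on the same Docker network, publishing is not required, which is why this alone did not solve the problem.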
We solved this issue thanks to 1. The server listening on 127.0.0.1 was the problem. Once we changed the listening address to 0.0.0.0 (shown as ::: in the netstat output below), we were able to connect to the server:
root@e9766a94d102:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.11:37641 0.0.0.0:* LISTEN 0 12821468 -
tcp6 0 0 :::7050 :::* LISTEN 0 12821696 7/orderer
udp 0 0 127.0.0.11:51855 0.0.0.0:* 0 12821467 -
There is no need for either the expose or the publish flag. Note to self: wasted 1.5 days on this.
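For reference, a sketch of the fix on the Docker side (this assumes the orderer honors Fabric's standard ORDERER_GENERAL_LISTENADDRESS environment override; adapt it to however your server configures its bind address):
docker container create \
    --name orderer \
    --network dscsa_net \
    --workdir $WORK_DIR \
    --env ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
    hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
Binding to 0.0.0.0 makes the process accept connections arriving on the container's overlay-network IP, not just on the loopback interface.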
I want to know whether I have hit the maximum number of connections allowed on my Linux-based server.
# netstat -an | grep -i time | wc -l
1116
# netstat -an | grep -i estab | wc -l
2137
TCP parameters at Kernel level are as follows:
# cat /proc/sys/net/ipv4/tcp_fin_timeout
60
# cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
The TIME_WAIT connections are from the load balancer IP (199.X.X.02):
tcp 0 0 199.X.X.05:8280 199.X.X.02:51884 TIME_WAIT
How can I know whether I have hit the maximum limit? Are there any kernel parameters that will tell me the current number of open connections? Also, how do I calculate the maximum number of concurrent connections supported?
"Are there any kernel parameters that will tell me the current number of open connections?"
Partial answer:
You should be able to see the list of your open sockets under /proc/PID/fd (where PID is your process's PID, of course).
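A few quick checks along those lines (the PID is illustrative; ss ships with modern distributions as the successor to netstat):
ls /proc/1234/fd | wc -l           # open file descriptors (including sockets) of one process
ss -s                              # system-wide socket summary
ss -tn state established | wc -l   # established TCP connections
As for the theoretical maximum: with the ip_local_port_range shown above (32768-61000), a client has 61000 - 32768 + 1 = 28233 ephemeral ports per (source IP, destination IP, destination port) tuple; on the server side the practical ceilings are usually file-descriptor limits (ulimit -n, fs.file-max) and memory rather than a single fixed connection count.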
I have a virtual interface (oip1) configured with a valid IP config. When I try to ping an address on the internet from oip1, I can see the ICMP echo requests/replies in tcpdump, but ping still reports 100% packet loss.
root@openair-PC:~/openair4G# ifconfig oip1
oip1 Link encap:AMPR NET/ROM HWaddr
inet addr:192.188.0.2 Bcast:192.188.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:840 (840.0 B) TX bytes:840 (840.0 B)
I've created a new table "lte":
root@openair-PC:~/openair4G# ip rule show
0: from all lookup local
32743: from all fwmark 0x1 lookup lte
32766: from all lookup main
32767: from all lookup default
I'm marking packets to/from the oip1 address:
root@openair-PC:~/openair4G# iptables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
MARK all -- anywhere 192.188.0.0/24 MARK set 0x1
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
MARK all -- 192.188.0.0/24 anywhere MARK set 0x1
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Then, I added a default gateway for table "lte":
root@openair-PC:~/openair4G# ip route show table lte
default dev oip1 scope link
The ping result:
openair@openair-PC:~/openair4G/cmake_targets$ ping -c 5 -I oip1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 192.188.0.2 oip1: 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms
The tcpdump output:
openair@openair-PC:~/openair4G$ sudo tcpdump -p -i oip1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on oip1, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
17:19:10.576680 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 1, length 64
17:19:11.021851 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 1, length 64
17:19:11.576617 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 2, length 64
17:19:11.752243 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 2, length 64
17:19:12.576583 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 3, length 64
17:19:12.676980 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 3, length 64
17:19:13.576559 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 4, length 64
17:19:13.767922 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 4, length 64
17:19:14.576561 IP 192.188.0.2 > google-public-dns-a.google.com: ICMP echo request, id 20987, seq 5, length 64
17:19:15.097164 IP google-public-dns-a.google.com > 192.188.0.2: ICMP echo reply, id 20987, seq 5, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel
As you can see, all the packets are visible to tcpdump.
Does anyone have an idea about this?
Best regards,
I finally solved this. It was caused by rp_filter being enabled on the interface concerned, and on the whole system as well. I disabled it with sysctl -w net.ipv4.conf.all.rp_filter=0 (and likewise with all replaced by the interface name).
This could be caused by the reverse-path filtering done by the kernel. It is likely to occur if you are doing things like proxy ARP on the next hop. With reverse-path filtering enabled, the system concludes that the received packet is spoofed and drops it.
The obvious (but not recommended) way is to disable reverse-path filtering. Another way is to add a route on the host for the pinged IP through the intended interface.
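A short sketch of the accepted fix plus a softer variant (the interface name is taken from the question; the loose-mode value 2 is my addition):
sysctl -w net.ipv4.conf.all.rp_filter=0    # disable reverse-path filtering system-wide
sysctl -w net.ipv4.conf.oip1.rp_filter=0   # ...and on the virtual interface itself
# alternatively, loose mode (2) accepts a packet if the source is reachable via any interface:
sysctl -w net.ipv4.conf.oip1.rp_filter=2
The kernel applies the stricter of the all and per-interface values, which is why both need to be changed. To persist the setting, the same keys can go into /etc/sysctl.conf or a file under /etc/sysctl.d/.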