I have a query regarding SCTP multihoming heartbeat behavior. Consider the example below:
Host_A (IP a (primary), IP b (secondary)): local multihomed endpoint
Host_B (IP c (primary), IP d (secondary)): remote multihomed endpoint
Will there be heartbeat communication between a primary and the peer's secondary, i.e. a<->d and c<->b? If not, is there a setting we can change to enable it?
In my case, I only see HEARTBEAT/HEARTBEAT-ACK messages between the two primaries and between the two secondaries, but not between a primary and a secondary.
Edit:
I did a small test: I ran sctp_darn on two systems connected to each other.
Host A : Primary IP 172.29.11.43; Secondary IP 172.29.11.75
Host B : Primary IP 172.29.11.40; Secondary IP 172.29.11.72
On Host A, I ran: sctp_darn -s -p 4445 -h 172.29.11.40 -P 4444 -H 172.29.11.43 -B 172.29.11.75
On Host B, I ran: sctp_darn -l -P 4445 -H 172.29.11.40 -B 172.29.11.72
I didn't send any data from A->B, so that I could monitor the HB behavior. This is what I got from the tcpdump output:
root@base0-0-0-4-0-11-1:/root> tcpdump -ni bond1 sctp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond1, link-type EN10MB (Ethernet), capture size 65535 bytes
17:09:23.856688 IP 172.29.11.43.4444 > 172.29.11.40.4445: sctp (1) [INIT] [init tag: 368944998] [rwnd: 63488] [OS: 10] [MIS: 65535] [init TSN: 2410047720]
17:09:23.856893 IP 172.29.11.40.4445 > 172.29.11.43.4444: sctp (1) [INIT ACK] [init tag: 797255774] [rwnd: 63488] [OS: 10] [MIS: 10] [init TSN: 659191795]
17:09:23.856988 IP 172.29.11.43.4444 > 172.29.11.40.4445: sctp (1) [COOKIE ECHO] , (2) [DATA] (B)(E) [TSN: 2410047720] [SID: 0] [SSEQ 0] [PPID 0x0]
17:09:23.857410 IP 172.29.11.40.4445 > 172.29.11.43.4444: sctp (1) [COOKIE ACK] , (2) [SACK] [cum ack 2410047720] [a_rwnd 63486] [#gap acks 0] [#dup tsns 0]
17:09:25.880280 IP 172.29.11.75.4444 > 172.29.11.72.4445: sctp (1) [HB REQ]
17:09:25.880519 IP 172.29.11.72.4445 > 172.29.11.75.4444: sctp (1) [HB ACK]
17:09:27.951827 IP 172.29.11.72.4445 > 172.29.11.75.4444: sctp (1) [HB REQ]
17:09:27.951868 IP 172.29.11.75.4444 > 172.29.11.72.4445: sctp (1) [HB ACK]
17:09:56.520282 IP 172.29.11.75.4444 > 172.29.11.72.4445: sctp (1) [HB REQ]
17:09:56.520526 IP 172.29.11.72.4445 > 172.29.11.75.4444: sctp (1) [HB ACK]
17:09:56.534773 IP 172.29.11.40.4445 > 172.29.11.43.4444: sctp (1) [HB REQ]
17:09:56.534797 IP 172.29.11.43.4444 > 172.29.11.40.4445: sctp (1) [HB ACK]
17:09:57.748715 IP 172.29.11.43.4444 > 172.29.11.40.4445: sctp (1) [HB REQ]
17:09:57.749006 IP 172.29.11.40.4445 > 172.29.11.43.4444: sctp (1) [HB ACK]
17:09:59.026986 IP 172.29.11.72.4445 > 172.29.11.75.4444: sctp (1) [HB REQ]
17:09:59.027013 IP 172.29.11.75.4444 > 172.29.11.72.4445: sctp (1) [HB ACK]
17:10:27.129950 IP 172.29.11.40.4445 > 172.29.11.43.4444: sctp (1) [HB REQ]
17:10:27.129982 IP 172.29.11.43.4444 > 172.29.11.40.4445: sctp (1) [HB ACK]
17:10:27.220294 IP 172.29.11.75.4444 > 172.29.11.72.4445: sctp (1) [HB REQ]
17:10:27.220576 IP 172.29.11.72.4445 > 172.29.11.75.4444: sctp (1) [HB ACK]
17:10:29.076286 IP 172.29.11.43.4444 > 172.29.11.40.4445: sctp (1) [HB REQ]
17:10:29.076582 IP 172.29.11.40.4445 > 172.29.11.43.4444: sctp (1) [HB ACK]
17:10:30.402389 IP 172.29.11.72.4445 > 172.29.11.75.4444: sctp (1) [HB REQ]
17:10:30.402430 IP 172.29.11.75.4444 > 172.29.11.72.4445: sctp (1) [HB ACK]
^C
24 packets captured
24 packets received by filter
0 packets dropped by kernel
root@base0-0-0-4-0-11-1:/root>
As you can see, HBs flow between the two primaries and between the two secondaries, but never from a primary to a secondary or vice versa.
Thanks.
Quoting RFC 4960:
A destination transport address is considered "idle" if no new chunk that can be used for updating path RTT (usually including first transmission DATA, INIT, COOKIE ECHO, HEARTBEAT, etc.) and no HEARTBEAT has been sent to it within the current heartbeat period of that address.
In other words, heartbeats are not necessary as long as there are other chunks that can be used to determine the RTT.
So in your case, could it be that the active paths were a->d and c->b, and that they carry enough traffic to make heartbeats on them redundant?
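One way to check would be to capture all SCTP traffic that touches the secondary addresses and look for DATA chunks there; a quick sketch, reusing the interface name and addresses from your trace:
# Show everything SCTP involving either secondary address
tcpdump -ni bond1 'sctp and (host 172.29.11.75 or host 172.29.11.72)'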
Edit in response to more details from the author:
According to Figure 1 in Experimental studies of SCTP multi-homing, with 2 NICs each you'd get 4 possibly independent paths in each direction, provided the routing is configured such that these IP addresses are reachable through different paths. And I assume they'd have to be heartbeating in order to determine the performance of each path.
Looking at the pairing of your primary paths and the heartbeats, which is always
172.29.11.4x <-> 172.29.11.4x or 172.29.11.7x <-> 172.29.11.7x
i.e. a .7x address never pairs with a .4x address: is this perhaps an issue with the route configuration and/or subnets (with the SCTP implementation considering that information in its pairing decision)? Just a guess.
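As for "can we make such settings": on a Linux lksctp stack (which your use of sctp_darn suggests), the heartbeat interval can be tuned globally via sysctl, or per-path through the SCTP_PEER_ADDR_PARAMS socket option; note, though, that this only changes how often the existing paths are probed, not which address pairs the stack selects. A minimal sketch of the sysctl route; the roughly 30 s gaps between HB REQs in your trace match the 30000 ms default:
# Show the current heartbeat interval in milliseconds (default 30000)
sysctl net.sctp.hb_interval
# Shorten it, e.g. to 5 seconds, to see heartbeats more often
sysctl -w net.sctp.hb_interval=5000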
It's possible in Wireshark (View -> Time Display Format -> Microseconds); see the attached screenshot of the Wireshark settings. Can somebody please share how to do this via tshark? I can see there are "-t" and "-u" options, and they take care of some of the date-time formatting, but not the sub-second precision part.
I need this to automate a workflow: I get capture files containing both microsecond and nanosecond timestamps, but the filtered text output must be normalized.
If your capture file contains timestamps of that precision, -t a can print them; see my example:
% tshark -r packetcapture.cap -t a
1 10:23:40.232514 192.168.3.140 → 192.168.1.1 TLSv1.2 0 Application Data
2 10:23:40.232524 192.168.1.1 → 192.168.3.140 TCP 0 443 → 60152 [ACK] Seq=1 Ack=77 Win=514 Len=0 TSval=1643984880 TSecr=1054195834
3 10:23:40.232785 192.168.1.1 → 192.168.3.140 TLSv1.2 0 Application Data
4 10:23:40.235273 192.168.3.140 → 192.168.1.1 TCP 0 60152 → 443 [ACK] Seq=77 Ack=7095 Win=1937 Len=0 TSval=1054195837 TSecr=1643984880
5 10:23:40.235594 192.168.3.140 → 192.168.1.1 TCP 0 [TCP Window Update] 60152 → 443 [ACK] Seq=77 Ack=7095 Win=2048 Len=0 TSval=1054195837 TSecr=1643984880
Other formats are described in the tshark man page.
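If you need the output normalized regardless of whether a capture carries microsecond or nanosecond timestamps, one option is to print epoch timestamps as fields and round them yourself; a sketch, where the field list is just an example (extend -e as needed):
# Print epoch timestamps as fields, then force 6 decimal places (microseconds)
tshark -r packetcapture.cap -T fields -e frame.number -e frame.time_epoch -e ip.src -e ip.dst |
awk '{ $2 = sprintf("%.6f", $2); print }'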
I have tried to connect to a Digilent ZedBoard from my host PC, which I can do using UART, but I am not able to SSH into the board, nor can I use my host PC's internet connection to give the ZedBoard internet access.
The ZedBoard is running: Xillinux distribution for Zynq-7000 EPP
Host PC is running: Ubuntu 16.04
How should I set this up?
We will go through the steps of communicating with a Digilent ZedBoard using the UART and the Ethernet port.
Using the UART port
Connect the host (USB) to the ZedBoard's UART port (micro USB) and execute on the host:
# Install minicom
apt update && apt install minicom
minicom -D /dev/ttyACM0 -b 115200 -8 -o
Congratulations, you are connected to the ZedBoard.
* For minicom help: CTRL+a z
* To exit minicom CTRL+a x
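If minicom fails with a permission error on /dev/ttyACM0, your user is probably not in the dialout group, which typically owns serial devices on Ubuntu:
# Add yourself to the dialout group, then log out and back in
sudo usermod -aG dialout $USER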
Connect using the board's Ethernet port
Connect the ZedBoard to the host using the Ethernet port on the host system, or an Ethernet-to-USB adapter.
By default, the ZedBoard's OS has eth0 configured with the static IP 192.168.1.10.
Configure on the host:
Network Connections > (Select the connection interface to the zedboard) > Edit > IPv4 Settings:
Change Method to Manual
Edit Address to: 192.168.1.1
Edit Netmask to: 255.255.255.0
Use the menu on the host to disconnect and connect to the interface that you have just configured.
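If you prefer the command line over the GUI, the same static configuration can be applied with nmcli (the connection name "Wired connection 1" is an example; substitute your own):
nmcli connection modify "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.1.1/24
nmcli connection up "Wired connection 1"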
Connect to the board with: ssh root@192.168.1.10
Share your PC's internet with the ZedBoard
Network Connections > (Select the connection interface) > Edit > IPv4 Settings:
* Change Method to Shared to other computers
Use the menu on the host to disconnect and reconnect the interface that you have just configured.
Execute ip addr and confirm the IP of the connection interface that is being shared:
10.42.0.1 on my machine (this may be different on yours)
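Optionally, verify on the host that sharing is active (generic checks; NetworkManager's shared mode should have enabled forwarding and NAT for you):
# Should print net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward
# Should list a MASQUERADE rule for the shared subnet
sudo iptables -t nat -L POSTROUTING -n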
Use minicom to connect to the board (see above).
On the ZedBoard:
Edit the file /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.42.0.10
netmask 255.255.255.0
gateway 10.42.0.1
Then fix your DNS resolver by editing the file /etc/resolv.conf to contain:
nameserver 10.42.0.1
Execute the following command to apply the new network configuration on the ZedBoard:
ifdown eth0; ifup eth0
Et voilà! At this point you should be able to ping your host:
root@localhost:~# ping 10.42.0.1
PING 10.42.0.1 (10.42.0.1) 56(84) bytes of data.
64 bytes from 10.42.0.1: icmp_req=1 ttl=64 time=0.424 ms
64 bytes from 10.42.0.1: icmp_req=2 ttl=64 time=0.498 ms
Ping an internet host (8.8.8.8) through your host's connection:
root@localhost:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=53 time=6.93 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=53 time=6.89 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=53 time=7.22 ms
And if you have set up /etc/resolv.conf correctly, you can also access the internet using full domain names:
root@localhost:~# ping www.google.com
PING www.google.com (172.217.10.132) 56(84) bytes of data.
64 bytes from lga34s16-in-f4.1e100.net (172.217.10.132): icmp_req=1 ttl=53 time=7.02 ms
64 bytes from lga34s16-in-f4.1e100.net (172.217.10.132): icmp_req=2 ttl=53 time=7.20 ms
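Assuming the board's SSH server is running (the ssh step above), you should now also be able to SSH into the board at its new address:
ssh root@10.42.0.10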
Additional notes
Files to keep in mind
/etc/network/interfaces describes the network interfaces
/etc/hostname sets the system's hostname
/etc/hosts maps hostnames to IP addresses
/etc/resolv.conf configures your DNS resolver
We created a Docker container like this:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--expose=7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
but we are unable to connect to port 7050 on the container:
root@dcee7e74266f:/home# nc -vz 10.0.0.194 7050
nc: connect to 10.0.0.194 port 7050 (tcp) failed: Connection refused
We are able to ping the container:
root@dcee7e74266f:/home# ping 10.0.0.194
PING 10.0.0.194 (10.0.0.194) 56(84) bytes of data.
64 bytes from 10.0.0.194: icmp_seq=1 ttl=64 time=0.810 ms
64 bytes from 10.0.0.194: icmp_seq=2 ttl=64 time=1.30 ms
64 bytes from 10.0.0.194: icmp_seq=3 ttl=64 time=0.668 ms
64 bytes from 10.0.0.194: icmp_seq=4 ttl=64 time=1.10 ms
64 bytes from 10.0.0.194: icmp_seq=5 ttl=64 time=0.631 ms
^C
--- 10.0.0.194 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.631/0.902/1.301/0.261 ms
and we also see a process listening on port 7050 in the container:
root@9756199efefa:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:7050 0.0.0.0:* LISTEN 0 10097930 7/orderer
tcp 0 0 127.0.0.11:34865 0.0.0.0:* LISTEN 0 10097705 -
udp 0 0 127.0.0.11:51385 0.0.0.0:* 0 10097704 -
What is going on here? How can we fix this?
EDIT: we are on an overlay network. The publish flag suggested in the answer is not applicable, since we are doing container-to-container communication. We tried it anyway, and it doesn't work.
There is one thing we have noticed: if we run
docker network inspect <our-network-name>
it prints out, among other things, a containers section; but that section lists only the containers on the host from which docker network inspect is executed. The containers hosted on other nodes are not listed (also mentioned here).
We verified that if we run:
docker node ls
all the nodes are part of the swarm.
It seems other people have run into this issue as well, e.g., here, but what is the solution?
Note: we are able to connect to another container running a different service, exposed on port 7054. That container was created without even using the expose flag.
root@dcee7e74266f:/home# nc -zv 10.0.0.164 7054
Connection to 10.0.0.164 7054 port [tcp/*] succeeded!
We did further debugging with tcpdump; its output is identical to what you see when connecting to a port on which no process is listening. But as shown earlier, netstat shows a listening process, and we can connect to it from localhost.
Output of tcpdump:
root@dcee7e74266f:/test# tcpdump -s0 host 10.0.0.195
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:44:45.978583 IP dcee7e74266f.52148 > orderer.dscsa_net.7050: Flags [S], seq 3845506108, win 28200, options [mss 1410,sackOK,TS val 4203049443 ecr 0,nop,wscale 7], length 0
23:44:45.979324 IP orderer.dscsa_net.7050 > dcee7e74266f.52148: Flags [R.], seq 0, ack 3845506109, win 0, length 0
The R (RST) flag tells the client to reset the connection.
Output of traceroute:
root@dcee7e74266f:/test# traceroute 10.0.0.195
traceroute to 10.0.0.195 (10.0.0.195), 30 hops max, 60 byte packets
1 orderer.dscsa_net (10.0.0.195) 1.008 ms 0.900 ms 0.872 ms
Expose only sets metadata on the image or container; it does not make the port externally accessible. The option you are looking for is publish:
docker container create \
--name orderer \
--network dscsa_net \
--workdir $WORK_DIR \
--publish=7050:7050 \
hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh
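After recreating the container with --publish and starting it, you can confirm the mapping; docker port lists the published bindings:
docker container start orderer
docker port orderer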
Solved this issue thanks to [1]. The server listening on 127.0.0.1 was the problem. Once we changed the listening address to 0.0.0.0 (shown as ::: in the netstat output below), we were able to connect to the server:
root@e9766a94d102:/home# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.11:37641 0.0.0.0:* LISTEN 0 12821468 -
tcp6 0 0 :::7050 :::* LISTEN 0 12821696 7/orderer
udp 0 0 127.0.0.11:51855 0.0.0.0:* 0 12821467 -
There is no need for either the expose or publish flags. Note to self: wasted 1.5 days on this.
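For reference, a sketch of how the bind address can be set when creating the container, using Fabric's standard environment override for General.ListenAddress in orderer.yaml (assuming start-orderer.sh does not itself override it):
docker container create \
  --name orderer \
  --network dscsa_net \
  --env ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
  hyperledger/fabric-orderer:1.3.0 ./start-orderer.sh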
I am able to ping the rest of the world from my container, but not the host on which the Docker container is running. I am sure someone has encountered this issue before.
See the details below:
Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-30-generic x86_64)
Container is using the "bridge" network
Docker version 18.06.1-ce, build e68fc7a
IP address for eth0: 135.25.87.162
IP address for eth1: 192.168.122.55
IP address for eth2: 135.21.171.209
IP address for docker0: 172.17.42.1
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 135.21.248.1 0.0.0.0 UG 0 0 0 eth1
135.21.171.192 * 255.255.255.192 U 0 0 0 eth2
135.21.248.0 * 255.255.255.0 U 0 0 0 eth1
135.25.87.128 * 255.255.255.192 U 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.122.0 * 255.255.255.192 U 0 0 0 eth1
# ping commands from the container
# ping google.com
PING google.com (64.233.177.113): 56 data bytes
64 bytes from 64.233.177.113: icmp_seq=0 ttl=29 time=51.827 ms
64 bytes from 64.233.177.113: icmp_seq=1 ttl=29 time=50.184 ms
64 bytes from 64.233.177.113: icmp_seq=2 ttl=29 time=50.991 ms
^C--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 50.184/51.001/51.827/0.671 ms
# ping 135.25.87.162
PING 135.25.87.162 (135.25.87.162): 56 data bytes
^C--- 135.25.87.162 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
root@9ed17e4c2ee3:/opt/app/tomcat#
If I have a hostname that has several IPv4 addresses assigned, which IPv4 address will a ping request use when pinging that hostname (for example, when running "ping Some-Pc")?
Run the command route in Linux and you will see the routing tables. Based on the destination address and the routing table, you should be able to determine the interface used to send the ICMP messages, and thus the source IP address.
For example, given this routing table in Linux:
[mynode]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.56.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
192.168.124.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
If you send a ping to address 10.0.2.45, it will go out via enp0s3 with that interface's IP as the source address.
If you send a ping to an address in 172.17.0.0/16, it will go out via docker0 with the corresponding source IP address.
With ifconfig in Linux (ipconfig in Windows) you can see the IP address assigned to each interface.
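You can also ask the kernel directly which interface and source address it would pick for a given destination; the output below is illustrative:
[mynode]$ ip route get 10.0.2.45
10.0.2.45 dev enp0s3 src 10.0.2.15
    cache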