In the Wireshark output, I captured two TCP packets: one with sequence number 149483 and acknowledgement number 149453, and one with sequence number 149491 and acknowledgement number 146685.
I think the packet with sequence number 149483 was sent first (because it has the lower sequence number), acknowledging 149453.
Then the packet with sequence number 149491 was sent, but why does it acknowledge 146685, which was already covered by the first packet's acknowledgement?
Both sequence and acknowledgement numbers are sequence numbers, from the perspective of TCP, but you must remember that each side selects a random initial sequence number from the 32-bit space. In other words, there is no relationship between the sequence and acknowledgement numbers.
The way it works is that hosts A and B establish a connection. Host A will select a random ISN and send that to B in the initial SYN. Host B will acknowledge that number (plus 1, per the RFC) and select its own random ISN. Host A will acknowledge that (plus 1, again). From here on, each side's sequence number advances by the amount of data it sends, so the sequence numbers effectively track the number of bytes sent in each direction and let the receiver reassemble the segments in order.
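As an illustration (with made-up numbers, not taken from your capture), an exchange might look like this:

A -> B:  SYN             seq = 1000              (A's randomly chosen ISN)
B -> A:  SYN, ACK        seq = 5000, ack = 1001  (B's randomly chosen ISN; acks A's ISN + 1)
A -> B:  ACK             seq = 1001, ack = 5001
A -> B:  100 data bytes  seq = 1001, ack = 5001
B -> A:  ACK             seq = 5001, ack = 1101  (acks all 100 bytes from A)

A's sequence numbers and B's acknowledgement numbers move through A's number space, while B's sequence numbers and A's acknowledgement numbers move through B's number space; the two spaces are unrelated.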
Since this is how it works, it is perfectly fine to have a "low" acknowledgement number with a "high" sequence number. It's also fine to have pretty much any other combination that you can imagine.
Related
I have an app in Delphi that monitors UDP traffic. What is the proper way to detect when the QUIC protocol is used? I have the data in a TBytes buffer.
The QUIC RFC: https://datatracker.ietf.org/doc/html/rfc9000
Depending on how much of a positive match you're looking for, the effort varies between "walk in the park" and "a bit of a nightmare".
QUIC has a complex handshake, during which the encryption keys are derived, and then it moves into the fully-encrypted, application data phase. On top of this, the protocol is also designed to allow migration of endpoints during the exchange (such as a mobile device jumping between wifi and mobile data), so simply tracking IP addresses and ports isn't going to catch everything.
If all you want is basic detection of QUIC connections being initiated, then all you need to do is to look for the initial packets, which have a clear format, and are only obfuscated (not encrypted).
From RFC9000:
17.2.2. Initial Packet
An Initial packet uses long headers with a type value of 0x00. It
carries the first CRYPTO frames sent by the client and server to
perform key exchange, and it carries ACK frames in either direction.
Initial Packet {
Header Form (1) = 1,
Fixed Bit (1) = 1,
Long Packet Type (2) = 0,
Reserved Bits (2),
Packet Number Length (2),
Version (32),
Destination Connection ID Length (8),
Destination Connection ID (0..160),
Source Connection ID Length (8),
Source Connection ID (0..160),
Token Length (i),
Token (..),
Length (i),
Packet Number (8..32),
Packet Payload (8..),
}
So a quick and dirty way of detecting a QUIC version 1 Initial packet is to check for the following (pseudocode):
( packet[ 0 ] & 0xf0 ) == 0xc0
packet[ 1 ] == 0x00
packet[ 2 ] == 0x00
packet[ 3 ] == 0x00
packet[ 4 ] == 0x01
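For illustration, here is a minimal C sketch of that check (the function name looks_like_quic_v1_initial is just made up; the Delphi version over your TBytes buffer would be the same comparisons):

/* Rough check for a QUIC version 1 Initial packet: long header with
 * packet type 0, and version field 0x00000001.  Later packets in the
 * connection are encrypted and cannot be identified this way. */
#include <stddef.h>
#include <stdint.h>

static int looks_like_quic_v1_initial(const uint8_t *packet, size_t len)
{
    if (len < 5)
        return 0;
    /* top two bits: long header + fixed bit; next two bits: type 0 (Initial) */
    if ((packet[0] & 0xf0) != 0xc0)
        return 0;
    /* bytes 1..4 hold the version; 0x00000001 is QUIC version 1 */
    return packet[1] == 0x00 && packet[2] == 0x00 &&
           packet[3] == 0x00 && packet[4] == 0x01;
}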
If you want to go beyond this, it quickly gets exponentially more complicated.
I'd strongly recommend downloading and running Wireshark and seeing for yourself what it looks like on the wire.
I've known about SCTP for a decade or so, and although I've never gotten to use it, I've always wanted to, because of some of its promising (purported) features:
multi-homing
multiplexing w/o head-of-line blocking
mixed order/unordered delivery on the same connection (aka association)
no TIME_WAIT
no SYN flooding
A Comparison between QUIC and SCTP, however, claims:
SCTP intended to get rid of HOL-Blocking by substreams, but its
Transmission Sequence Number (TSN) couples together the transmission
of all data chunks. [...] As a result, in SCTP if a packet is lost,
all the packets with TSN after this lost packet cannot be received
until it is retransmitted.
That statement surprised me because:
removing head-of-line blocking is a stated goal of SCTP
SCTP does have a per-stream sequence number (see the quote from RFC 4960 below), which should allow processing per stream, regardless of the association-global TSN
SCTP has been in use in the telecommunications sector for perhaps close to 2 decades, so how could this have been missed?
Internally, SCTP assigns a Stream Sequence Number to each message
passed to it by the SCTP user. On the receiving side, SCTP ensures
that messages are delivered to the SCTP user in sequence within a
given stream. However, while one stream may be blocked waiting for
the next in-sequence user message, delivery from other streams may
proceed.
Also, there is a paper, Head-of-line Blocking in TCP and SCTP: Analysis and Measurements, that actually measures the round-trip time of a multiplexed echo service in the face of packet loss and concludes:
Our results reveal that [..] a small number of SCTP streams or SCTP unordered mode can avoid this head-of-line blocking. The alternative solution of multiple TCP connections performs worse in most cases.
This answer is not very scholarly, but at least according to the specification in RFC 4960, SCTP seems capable of circumventing head-of-line blocking. The relevant claim seems to be in Section 7.1.
Note: TCP guarantees in-sequence delivery of data to its upper-layer protocol within a single TCP session. This means that when TCP notices a gap in the received sequence number, it waits until the gap is filled before delivering the data that was received with sequence numbers higher than that of the missing data. On the other hand, SCTP can deliver data to its upper-layer protocol even if there is a gap in TSN if the Stream Sequence Numbers are in sequence for a particular stream (i.e., the missing DATA chunks are for a different stream) or if unordered delivery is indicated. Although this does not affect cwnd, it might affect rwnd calculation.
One dilemma is what "are in sequence for a particular stream" entails. There is some stipulation about delaying delivery to the upper layer until chunks are reordered (see Section 6.6, below), but reordering does not seem to be conditioned on filling the gaps at the level of the association. Also note the mention in Section 6.2 of the complex distinction between ACK and delivery to the ULP (Upper Layer Protocol).
Whether other stipulations of the RFC indirectly result in the occurrence of HOL blocking, and whether the avoidance is effective in real-life implementations and situations, are questions that warrant further investigation.
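To make that reading concrete, here is a small C sketch (my own model, not code from any SCTP stack): a chunk is handed to the upper layer as soon as its Stream Sequence Number is the next one expected on its stream, regardless of gaps in the association-wide TSN.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_STREAMS 2

/* A received DATA chunk: association-wide TSN plus per-stream SID/SSN. */
struct chunk { uint32_t tsn; uint16_t sid; uint16_t ssn; };

static uint16_t next_ssn[NUM_STREAMS];   /* next SSN expected per stream */

/* Deliver a chunk if it is in sequence for its stream; a real stack
 * would otherwise buffer it until the missing SSN arrives. */
static void on_chunk(const struct chunk *c)
{
    if (c->ssn == next_ssn[c->sid]) {
        printf("deliver TSN %u (stream %u, SSN %u)\n", c->tsn, c->sid, c->ssn);
        next_ssn[c->sid]++;
    } else {
        printf("hold    TSN %u (stream %u, SSN %u): waiting for SSN %u\n",
               c->tsn, c->sid, c->ssn, next_ssn[c->sid]);
    }
}

int main(void)
{
    /* TSN 2 (stream 0, SSN 1) is lost in transit. */
    struct chunk received[] = {
        { 1, 0, 0 },   /* delivered */
        { 3, 1, 0 },   /* delivered despite the TSN gap: stream 1 is in sequence */
        { 4, 1, 1 },   /* delivered: only stream 0 is blocked by the loss */
        { 5, 0, 2 },   /* held: stream 0 is waiting for SSN 1 (the lost TSN 2) */
    };
    for (size_t i = 0; i < sizeof received / sizeof received[0]; i++)
        on_chunk(&received[i]);
    return 0;
}

Whether real implementations behave this way, and how receive-window accounting interacts with chunks held for reordering, is exactly the part that warrants measurement, as the paper cited in the question does.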
Below are some of the excerpts which I've come across in the RFC and which may be relevant.
RFC 4960, Section 6.2 Acknowledgement on Reception of DATA Chunks
When the receiver's advertised window is 0, the receiver MUST drop any new incoming DATA chunk with a TSN larger than the largest TSN received so far. If the new incoming DATA chunk holds a TSN value less than the largest TSN received so far, then the receiver SHOULD drop the largest TSN held for reordering and accept the new incoming DATA chunk. In either case, if such a DATA chunk is dropped, the receiver MUST immediately send back a SACK with the current receive window showing only DATA chunks received and accepted so far. The dropped DATA chunk(s) MUST NOT be included in the SACK, as they were not accepted.
Under certain circumstances, the data receiver may need to drop DATA chunks that it has received but hasn't released from its receive buffers (i.e., delivered to the ULP). These DATA chunks may have been acked in Gap Ack Blocks. For example, the data receiver may be holding data in its receive buffers while reassembling a fragmented user message from its peer when it runs out of receive buffer space. It may drop these DATA chunks even though it has acknowledged them in Gap Ack Blocks. If a data receiver drops DATA chunks, it MUST NOT include them in Gap Ack Blocks in subsequent SACKs until they are received again via retransmission. In addition, the endpoint should take into account the dropped data when calculating its a_rwnd.
These circumstances highlight how senders may receive acknowledgements for chunks that are ultimately not delivered to the ULP (Upper Layer Protocol). Note that this applies to chunks with a TSN higher than the Cumulative TSN (i.e., chunks from Gap Ack Blocks). This, together with the unreliability of SACK ordering, is a good reason for the stipulation in Section 7.1 (see below).
RFC 4960, Section 6.6 Ordered and Unordered Delivery
Within a stream, an endpoint MUST deliver DATA chunks received with the U flag set to 0 to the upper layer according to the order of their Stream Sequence Number. If DATA chunks arrive out of order of their Stream Sequence Number, the endpoint MUST hold the received DATA chunks from delivery to the ULP until they are reordered.
This is the only stipulation on ordered delivery within a stream in this section; seemingly, reordering does not depend on filling the gaps in ACK-ed chunks.
RFC 4960, Section 7.1 SCTP Differences from TCP Congestion Control
Gap Ack Blocks in the SCTP SACK carry the same semantic meaning as the TCP SACK. TCP considers the information carried in the SACK as advisory information only. SCTP considers the information carried in the Gap Ack Blocks in the SACK chunk as advisory. In SCTP, any DATA chunk that has been acknowledged by SACK, including DATA that arrived at the receiving end out of order, is not considered fully delivered until the Cumulative TSN Ack Point passes the TSN of the DATA chunk (i.e., the DATA chunk has been acknowledged by the Cumulative TSN Ack field in the SACK).
This is stated from the perspective of the sending endpoint, and is accurate for the reason emphasized in Section 6.2 above.
Note: TCP guarantees in-sequence delivery of data to its upper-layer protocol within a single TCP session. This means that when TCP notices a gap in the received sequence number, it waits until the gap is filled before delivering the data that was received with sequence numbers higher than that of the missing data. On the other hand, SCTP can deliver data to its upper-layer protocol even if there is a gap in TSN if the Stream Sequence Numbers are in sequence for a particular stream (i.e., the missing DATA chunks are for a different stream) or if unordered delivery is indicated. Although this does not affect cwnd, it might affect rwnd calculation.
This seems to be the core answer to what interests you.
In support of this argument, see also the format of the SCTP SACK chunk, as exposed here and here.
I have a dissector that reads two UDP packets and combines data from them.
Every time I select a packet (N) whose dissection is based on combining data from the previous packet (N-1) and the current packet (N), I get an error in the packet details under the custom protocol section.
Only when I first select the previous packet (N-1) and then the current packet (N) do I see the dissection I expect, without any error.
Any idea how to solve the issue?
I have a very large tcpdump file that I split into 1-minute intervals. I am able to use tshark to extract TCP statistics for each of the 1-minute files using a loop and save the results as a CSV file so I can perform further analysis in Excel. Now I want to count the number of TCP flows in each 1-minute file, for all the 1-minute files, and save the data in a CSV file. A TCP flow here represents a group of packets going from a specific source to a specific destination. Each flow has statistics such as source IP, dest IP, #packets from A->B, #bytes from A->B, #packets from B->A, #bytes from B->A, total packets, total bytes, etc. I just want to count the number of TCP flows in each of the 1-minute files. From what I've read so far, it seems I need to create a dissector to do that. Can anyone give me pointers or code on how to get started? Thanks.
Tshark has a command to dump all of the necessary information: tshark -qz conv,tcp -r FILE. This writes one line per flow (plus a header and footer) so to count the flows just count the lines and subtract the header/footer.
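If I remember the table format correctly, each conversation appears on a line containing a "<->" separator, so something along the lines of tshark -qz conv,tcp -r FILE | grep -c '<->' should give you the count directly; do check the output against one of your files first.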
Not a dissector, but a tap. See the Wireshark README.tapping document, and see the TShark iousers tap for a, sadly, not at all simple example in C.
It's also possible to write taps in Lua; see, for example, the Lua/Taps page in the Wireshark Wiki and the Lua Support in Wireshark section of the Wireshark User's Manual.
The C structure passed to TCP taps for each packet is:
/* the tcp header structure, passed to tap listeners */
typedef struct tcpheader {
    guint32  th_seq;
    guint32  th_ack;
    gboolean th_have_seglen; /* TRUE if th_seglen is valid */
    guint32  th_seglen;
    guint32  th_win;         /* make it 32 bits so we can handle some scaling */
    guint16  th_sport;
    guint16  th_dport;
    guint8   th_hlen;
    guint16  th_flags;
    guint32  th_stream;      /* this stream index field is included to help differentiate when address/port pairs are reused */
    address  ip_src;
    address  ip_dst;

    /* This is the absolute maximum we could find in TCP options (RFC2018, section 3) */
#define MAX_TCP_SACK_RANGES 4
    guint8   num_sack_ranges;
    guint32  sack_left_edge[MAX_TCP_SACK_RANGES];
    guint32  sack_right_edge[MAX_TCP_SACK_RANGES];
} tcp_info_t;
So, for C-language taps, the "data" argument to the tap listener's "packet" routine points to a structure of that sort.
For Lua taps, the "tapinfo" table passed as the third argument to the tap listener's "packet" routine is described as "a table of info based on the Listener's type, or nil". For a TCP tap, the entries in the table include all the fields in that structure except for sack_left_edge and sack_right_edge; the keys in the table are the structure member names.
The th_stream field identifies the connection; each time the TCP dissector finds a new connection, it assigns a new value. As the comment indicates, "this stream index field is included to help differentiate when address/port pairs are reused", so that if a given connection is closed, and a later connection uses the same endpoints, the two connections have different th_stream values even though they have the same endpoints.
So you'd have a table using the th_stream value as a key. The table would store the endpoints (addresses and ports) and counts of packets and bytes in each direction. For each packet passed to the listener's "packet" routine, you'd look up the th_stream value in the table and, if you don't find it, you'd create a new entry, starting the counts off at zero, and use that new entry; otherwise, you'd use the entry you found. You'd then figure out whether the packet was going from A to B or B to A, and increase the appropriate packet count and byte count.
You'd also keep track of the time stamp. For the first packet, you'd store the time stamp for that packet. For each packet, you'd look at the time stamp and, if it's one minute or more later than the stored time stamp, you'd:
dump out the statistics from the table of connections;
empty out the table of connections;
store the new packet's time stamp, replacing the previous stored time stamp.
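A rough C sketch of that bookkeeping (my own simplified model with hypothetical names, not the actual Wireshark tap API; the first struct mirrors only the tcp_info_t fields used here, which in a real C tap would come from the "data" pointer handed to the listener's packet routine):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint32_t th_stream;          /* connection index assigned by the dissector */
    uint16_t th_sport, th_dport;
    uint32_t th_seglen;          /* payload bytes (valid when th_have_seglen)  */
} tcp_sketch_info_t;

typedef struct {
    int      in_use;
    uint32_t stream;
    uint16_t sport_a, dport_a;   /* endpoints as first seen, i.e. A -> B       */
    uint64_t pkts_ab, bytes_ab;
    uint64_t pkts_ba, bytes_ba;
} flow_entry_t;

#define MAX_FLOWS 65536          /* toy fixed-size table; use a real hash map  */
static flow_entry_t flows[MAX_FLOWS];
static double interval_start = -1.0;

static void dump_and_reset(void)
{
    unsigned nflows = 0;
    for (int i = 0; i < MAX_FLOWS; i++) {
        if (!flows[i].in_use)
            continue;
        nflows++;
        printf("stream %u: %llu pkts / %llu bytes A->B, %llu pkts / %llu bytes B->A\n",
               flows[i].stream,
               (unsigned long long)flows[i].pkts_ab, (unsigned long long)flows[i].bytes_ab,
               (unsigned long long)flows[i].pkts_ba, (unsigned long long)flows[i].bytes_ba);
    }
    printf("%u flows in this interval\n", nflows);
    memset(flows, 0, sizeof flows);
}

/* Called once per TCP packet, with the packet's absolute time stamp in seconds. */
static void account_packet(const tcp_sketch_info_t *tcph, double ts)
{
    if (interval_start < 0.0) {
        interval_start = ts;                    /* first packet overall        */
    } else if (ts - interval_start >= 60.0) {   /* one-minute boundary reached */
        dump_and_reset();
        interval_start = ts;
    }

    flow_entry_t *f = &flows[tcph->th_stream % MAX_FLOWS];
    if (!f->in_use) {                           /* first packet of this stream */
        f->in_use  = 1;
        f->stream  = tcph->th_stream;
        f->sport_a = tcph->th_sport;
        f->dport_a = tcph->th_dport;
    }
    if (tcph->th_sport == f->sport_a && tcph->th_dport == f->dport_a) {
        f->pkts_ab++;  f->bytes_ab += tcph->th_seglen;   /* A -> B */
    } else {
        f->pkts_ba++;  f->bytes_ba += tcph->th_seglen;   /* B -> A */
    }
}

int main(void)                                   /* tiny self-test             */
{
    tcp_sketch_info_t syn  = { 0, 40000, 80, 0   };
    tcp_sketch_info_t resp = { 0, 80, 40000, 512 };
    account_packet(&syn,  0.0);
    account_packet(&resp, 0.1);
    account_packet(&syn, 61.0);                  /* forces the interval dump   */
    return 0;
}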
I want to write a receiver program using a raw socket.
It will use recvfrom() to receive packets, and I want to check the IP header and TCP header of each packet.
When a program sends a packet, it has to pay attention to the network byte order vs. host byte order problem.
But for my receiver program, when I use recvfrom(sockfd,mesg,1000,0,(struct sockaddr *)&cliaddr,&len);,
what is the byte order of the data in the packets? Is it network byte order or host byte order?
And how do I deal with it?
For this example:
http://www.binarytides.com/packet-sniffer-code-in-c-using-linux-sockets-bsd/
the author doesn't take the byte order problem into account when dealing with the received packets. Why?
Thanks!
what is the byte order of the data in the packets?
The convention is that network order is big endian order. However, the data you receive is the data you sent: nobody magically modifies "integers" to change their endianness.
and how do I deal with it?
Use ntohl and ntohs when interpreting integer data (see the sketch below)
Be aware that bitfield endianness isn't standard
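For example, a minimal sketch of the receive side on Linux (names like print_headers are made up; mesg is the buffer filled by recvfrom() on a raw IPPROTO_TCP socket, and the struct member names are the Linux/glibc ones):

#include <stdio.h>
#include <sys/types.h>      /* ssize_t */
#include <arpa/inet.h>      /* ntohs, ntohl, inet_ntop */
#include <netinet/ip.h>     /* struct iphdr  (Linux-style member names) */
#include <netinet/tcp.h>    /* struct tcphdr (Linux-style member names) */

static void print_headers(const unsigned char *mesg, ssize_t n)
{
    if (n < (ssize_t)sizeof(struct iphdr))
        return;
    const struct iphdr *iph = (const struct iphdr *)mesg;

    /* ihl is a 4-bit field counting 32-bit words; single bytes and
     * bit-fields need no swapping, but bit-field layout is what the
     * "bitfield endianness isn't standard" warning is about - the
     * system header handles it here. */
    size_t iphdrlen = (size_t)iph->ihl * 4;

    char src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
    struct in_addr a;
    a.s_addr = iph->saddr;  inet_ntop(AF_INET, &a, src, sizeof src);
    a.s_addr = iph->daddr;  inet_ntop(AF_INET, &a, dst, sizeof dst);
    printf("IP  %s -> %s, total length %u\n", src, dst, ntohs(iph->tot_len));

    if (iph->protocol != IPPROTO_TCP ||
        (size_t)n < iphdrlen + sizeof(struct tcphdr))
        return;

    const struct tcphdr *tcph = (const struct tcphdr *)(mesg + iphdrlen);
    printf("TCP %u -> %u, seq %u, ack %u\n",
           ntohs(tcph->source), ntohs(tcph->dest),    /* 16-bit fields -> ntohs */
           ntohl(tcph->seq),    ntohl(tcph->ack_seq));/* 32-bit fields -> ntohl */
}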
the author doesn't take the byte order problem into account when dealing with the received packets,
The link you posted shows ntohs and ntohl calls. The author does handle endianness at least to some extent.