I use a UDP socket on iOS. I experience very strange behaviour when setting it to non-blocking mode and/or simulating a bad connection.
I set the socket to non-blocking mode with the usual fcntl:
fcntl(socketfd, F_SETFL, O_NONBLOCK);
To simulate a bad connection I use Network Link Conditioner with 5/5 in/out packet loss and a 50 KBps outbound limit (which, in my case, guarantees that the system's outgoing buffer will fill up at some point).
I send data with sendto() and time each call with clock() and a printf().
Here is the data from my tests (all times in ms):
blocking, good connection: min 0/max 929/avg 226/std 111
blocking, bad connection: min 0/max 611/avg 38/std 84
non-blocking, good connection: min 0/max 6244/avg 601/std 1071
non-blocking, bad connection: min 0/max 5774/avg 400/std 747
I also notice that in case 2 there are many entries with 0 ms, meaning that sendto() returned immediately; that explains the low average and standard deviation for that case.
On every call, sendto() returned a positive value equal to the number of bytes that were requested to be sent.
Now, there are several things I just can't understand:
in blocking mode, I expect the call to block until system buffers are available to store the data; instead, it seems the data is discarded (since the call returns immediately)
in non-blocking mode, I expect sendto() to return an error when it would block; instead, the data suggests that the call blocks until there is actually space to send
The behaviour seems inverted, with the exception that sendto never reports a failure.
What am I doing wrong?
Socket creation:
int socketfd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
// error checks
int set = 1;
setsockopt(socketfd, SOL_SOCKET, SO_NOSIGPIPE, (void *)&set, sizeof(int));
int tos = 0xB8; // VOICE
setsockopt(socketfd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
int rc = fcntl(socketfd, F_SETFL, O_NONBLOCK);
if (rc != 0) {
// error of fcntl is notified
}
Sending to:
sendto(socketfd, buffer, buffer_length, 0, (struct sockaddr *) &m_sRemoteHost, sizeof(m_sRemoteHost))
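For reference, here is a minimal sketch of the same setup that preserves any descriptor flags already set and inspects errno when sendto() fails; the helper names are illustrative only:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

// Sketch: set O_NONBLOCK without clobbering flags that are already set.
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

// Sketch: send one datagram and distinguish "would block" from other errors.
static ssize_t send_datagram(int fd, const void *buf, size_t len,
                             const struct sockaddr *to, socklen_t tolen)
{
    ssize_t sent = sendto(fd, buf, len, 0, to, tolen);
    if (sent < 0) {
        if (errno == EWOULDBLOCK || errno == EAGAIN || errno == ENOBUFS) {
            // Local buffers are full right now; the caller can retry later.
        } else {
            perror("sendto");
        }
    }
    return sent;
}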
How do you know your system's send buffer is getting full? You're just assuming that because you've rate-limited your connection, data backs up in the send buffer. It's more likely that it's getting dropped somewhere else. Since UDP makes no guarantees that any packet will be delivered, any piece of software or hardware anywhere on the packet's path is free to drop it at any time for any reason.
Per the UDP page on Wikipedia:
UDP is a minimal message-oriented transport layer protocol that is
documented in RFC 768. UDP provides no guarantees to the upper layer
protocol for message delivery and the UDP layer retains no state of
UDP messages once sent. For this reason, UDP sometimes is referred to
as Unreliable Datagram Protocol.
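One way to at least see how much send-buffer space the socket was given is getsockopt() with SO_SNDBUF. A minimal sketch (note that this reports the buffer's configured capacity, not how full it currently is):

#include <stdio.h>
#include <sys/socket.h>

// Sketch: inspect the configured size of the socket's send buffer.
static void print_sndbuf_size(int fd)
{
    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
        printf("SO_SNDBUF = %d bytes\n", sndbuf);
    else
        perror("getsockopt(SO_SNDBUF)");
}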
Related
I've known about SCTP for a decade or so, and although I've never gotten to use it, I've always wanted to because of some of its promising (purported) features:
multi-homing
multiplexing w/o head-of-line blocking
mixed order/unordered delivery on the same connection (aka association)
no TIME_WAIT
no SYN flooding
A Comparison between QUIC and SCTP, however, claims:
SCTP intended to get rid of HOL-Blocking by substreams, but its
Transmission Sequence Number (TSN) couples together the transmission
of all data chunks. [...] As a result, in SCTP if a packet is lost,
all the packets with TSN after this lost packet cannot be received
until it is retransmitted.
That statement surprised me because:
removing head-of-line blocking is a stated goal of SCTP
SCTP does have a per-stream sequence number (see the quote from RFC 4960 below), which should allow per-stream processing regardless of the association-global TSN
SCTP has been in use in the telecommunications sector for perhaps close to 2 decades, so how could this have been missed?
Internally, SCTP assigns a Stream Sequence Number to each message
passed to it by the SCTP user. On the receiving side, SCTP ensures
that messages are delivered to the SCTP user in sequence within a
given stream. However, while one stream may be blocked waiting for
the next in-sequence user message, delivery from other streams may
proceed.
Also, there is a paper, Head-of-line Blocking in TCP and SCTP: Analysis and Measurements, that actually measures the round-trip time of a multiplexed echo service in the face of packet loss and concludes:
Our results reveal that [..] a small number of SCTP streams or SCTP unordered mode can avoid this head-of-line blocking. The alternative solution of multiple TCP connections performs worse in most cases.
The answer is not very scholarly, but at least according to the specification in RFC 4960, SCTP seems capable of circumventing head-of-line blocking. The relevant claim seems to be in Section 7.1.
Note: TCP guarantees in-sequence delivery of data to its upper-layer protocol within a single TCP session. This means that when TCP notices a gap in the received sequence number, it waits until the gap is filled before delivering the data that was received with sequence numbers higher than that of the missing data. On the other hand, SCTP can deliver data to its upper-layer protocol even if there is a gap in TSN if the Stream Sequence Numbers are in sequence for a particular stream (i.e., the missing DATA chunks are for a different stream) or if unordered delivery is indicated. Although this does not affect cwnd, it might affect rwnd calculation.
One open question is what "are in sequence for a particular stream" entails. There is a stipulation about delaying delivery to the upper layer until chunks are reordered (see Section 6.6, below), but reordering does not seem to be conditioned on filling the gaps at the level of the association. Also note the mention in Section 6.2 of the subtle distinction between acknowledgement and delivery to the ULP (Upper Layer Protocol).
Whether other stipulations of the RFC indirectly result in the occurrence of HOL blocking, and whether the avoidance is effective in real-life implementations and situations, are questions that warrant further investigation.
Below are some of the excerpts which I've come across in the RFC and which may be relevant.
RFC 4960, Section 6.2 Acknowledgement on Reception of DATA Chunks
When the receiver's advertised window is 0, the receiver MUST drop any new incoming DATA chunk with a TSN larger than the largest TSN received so far. If the new incoming DATA chunk holds a TSN value less than the largest TSN received so far, then the receiver SHOULD drop the largest TSN held for reordering and accept the new incoming DATA chunk. In either case, if such a DATA chunk is dropped, the receiver MUST immediately send back a SACK with the current receive window showing only DATA chunks received and accepted so far. The dropped DATA chunk(s) MUST NOT be included in the SACK, as they were not accepted.
Under certain circumstances, the data receiver may need to drop DATA chunks that it has received but hasn't released from its receive buffers (i.e., delivered to the ULP). These DATA chunks may have been acked in Gap Ack Blocks. For example, the data receiver may be holding data in its receive buffers while reassembling a fragmented user message from its peer when it runs out of receive buffer space. It may drop these DATA chunks even though it has acknowledged them in Gap Ack Blocks. If a data receiver drops DATA chunks, it MUST NOT include them in Gap Ack Blocks in subsequent SACKs until they are received again via retransmission. In addition, the endpoint should take into account the dropped data when calculating its a_rwnd.
These are circumstances that highlight how senders may receive acknowledgement for chunks that are ultimately not delivered to the ULP (Upper Layer Protocol). Note that this applies to chunks with a TSN higher than the Cumulative TSN (i.e., those in Gap Ack Blocks). This, together with the unreliability of SACK ordering, is a good reason for the stipulation in Section 7.1 (see below).
RFC 4960, Section 6.6 Ordered and Unordered Delivery
Within a stream, an endpoint MUST deliver DATA chunks received with the U flag set to 0 to the upper layer according to the order of their Stream Sequence Number. If DATA chunks arrive out of order of their Stream Sequence Number, the endpoint MUST hold the received DATA chunks from delivery to the ULP until they are reordered.
This is the only stipulation on ordered delivery within a stream in this section; seemingly, reordering does not depend on filling the gaps in ACK-ed chunks.
RFC 4960, Section 7.1 SCTP Differences from TCP Congestion Control
Gap Ack Blocks in the SCTP SACK carry the same semantic meaning as the TCP SACK. TCP considers the information carried in the SACK as advisory information only. SCTP considers the information carried in the Gap Ack Blocks in the SACK chunk as advisory. In SCTP, any DATA chunk that has been acknowledged by SACK, including DATA that arrived at the receiving end out of order, is not considered fully delivered until the Cumulative TSN Ack Point passes the TSN of the DATA chunk (i.e., the DATA chunk has been acknowledged by the Cumulative TSN Ack field in the SACK).
This is stated from the perspective of the sending endpoint, and is accurate for the reason emphasized in section 6.6 above.
Note: TCP guarantees in-sequence delivery of data to its upper-layer protocol within a single TCP session. This means that when TCP notices a gap in the received sequence number, it waits until the gap is filled before delivering the data that was received with sequence numbers higher than that of the missing data. On the other hand, SCTP can deliver data to its upper-layer protocol even if there is a gap in TSN if the Stream Sequence Numbers are in sequence for a particular stream (i.e., the missing DATA chunks are for a different stream) or if unordered delivery is indicated. Although this does not affect cwnd, it might affect rwnd calculation.
This seems to be the core answer to what interests you.
In support of this argument, see also the format of the SCTP SACK chunk.
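As an illustration of per-stream ordering, here is a minimal sketch that sends messages on two different streams; it assumes a Linux host with lksctp-tools (the sctp_sendmsg() wrapper) and an already-connected one-to-one SCTP socket:

#include <netinet/in.h>
#include <netinet/sctp.h>
#include <string.h>
#include <sys/socket.h>

// Sketch: send two messages on different SCTP streams of a one-to-one socket
// created with socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP) and connected.
// Per RFC 4960 Section 6.6, ordering is enforced per stream, so a loss on
// stream 0 need not delay delivery of messages sent on stream 1.
static int send_on_two_streams(int sd)
{
    const char *a = "message on stream 0";
    const char *b = "message on stream 1";

    if (sctp_sendmsg(sd, a, strlen(a), NULL, 0,
                     0 /* ppid */, 0 /* flags */, 0 /* stream */, 0, 0) < 0)
        return -1;
    if (sctp_sendmsg(sd, b, strlen(b), NULL, 0,
                     0 /* ppid */, 0 /* flags */, 1 /* stream */, 0, 0) < 0)
        return -1;
    return 0;
}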
How can I handle buffer overflows in an NDIS driver? Can anybody describe some buffer overflow scenarios or some cases in which buffer overflow conditions arise?
For NDIS miniport drivers
If you receive a packet that is larger than the MTU, discard it. Do not indicate the packet up to NDIS (i.e., do not pass the packet to NdisMIndicateReceiveNetBufferLists). If possible, increment the ifInErrors statistical counter.
The above rule is not affected by the NDIS_PACKET_TYPE_PROMISCUOUS flag; do not indicate excessively-large packets even when in promiscuous mode. However, you should indicate excessively-small (aka "runt") packets when in promiscuous mode, if your hardware permits it.
If you are asked to transmit a packet that is larger than the MTU, do not attempt to transmit it. Assign NET_BUFFER_LIST::Status = NDIS_STATUS_INVALID_LENGTH and return the NBL back to NDIS with NdisMSendNetBufferListsComplete. (I wouldn't expect you to ever see such a packet; it would be a bug for NDIS to attempt to send you such a packet.)
For NDIS protocol drivers
If you receive a packet that is larger than the MTU, you are free to discard it.
Never attempt to send a packet that is larger than the MTU.
For NDIS filter drivers
If a filter receives a packet that is larger than the MTU (FilterReceiveNetBufferLists), the filter may immediately discard the packet (NdisFReturnNetBufferLists if the receive indication is not made with NDIS_RECEIVE_FLAGS_RESOURCES, or just returning immediately if the resources flag is set).
If a filter is asked to send a packet that is larger than the MTU (FilterSendNetBufferLists), the filter may assign NET_BUFFER_LIST::Status = NDIS_STATUS_INVALID_LENGTH and return the packet immediately (NdisFSendNetBufferListsComplete).
Filters are not obligated to validate the size of every packet that passes through them. However, your filter should validate the size of any packets where a malformed packet would otherwise cause your filter to trigger a buffer overflow. For example, if your filter copies all ARP replies into a pre-allocated buffer, first validate that the ARP reply isn't too large to fit into the buffer. (This is not strictly necessary, since the miniport "shouldn't" give you an excessively-large packet. However, you are on the network datapath, which means you're handling untrusted data being processed by a potentially-buggy miniport. A little extra defense-in-depth is a good idea.)
Filters must not originate packets that are larger than the MTU (on either the send or receive paths).
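As a concrete illustration of the ARP-reply example above, here is a minimal sketch of such a length check before copying. It assumes an NDIS 6.x filter driver; names such as MAX_ARP_COPY and CopyArpReplySafely are illustrative only:

#include <ndis.h>

#define MAX_ARP_COPY 128  // illustrative size of the pre-allocated buffer

// Sketch: validate a packet's length before copying it into a fixed buffer,
// so a malformed or oversized frame cannot overflow it.
static BOOLEAN CopyArpReplySafely(PNET_BUFFER Nb, UCHAR *CopyBuf)
{
    ULONG len = NET_BUFFER_DATA_LENGTH(Nb);
    PUCHAR data;

    if (len > MAX_ARP_COPY)
        return FALSE;   // too large to fit: skip it rather than overflow

    // NdisGetDataBuffer returns a contiguous view of the first 'len' bytes,
    // using CopyBuf as scratch storage if the data is not already contiguous.
    data = NdisGetDataBuffer(Nb, len, CopyBuf, 1, 0);
    if (data == NULL)
        return FALSE;

    if (data != CopyBuf)
        NdisMoveMemory(CopyBuf, data, len);
    return TRUE;
}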
I use libpcap to capture a lot of packets, then process/modify these packets and send them to another host.
First, I create a libpcap handle handle and set it to non-blocking mode, then use pcap_get_selectable_fd(handle) to get a corresponding file descriptor pcap_fd.
Then I add an event for this pcap_fd to a libevent loop (which works like select() or epoll()).
To avoid polling this file descriptor too frequently, each time there is a packet-arrival event I use pcap_dispatch to collect a bufferful of packets and put them into a queue packet_queue, and then call process_packet to process/modify/send each packet in the queue:
pcap_dispatch(handle, -1, collect_pkt, (u_char *)packet_queue);
process_packet(packet_queue);
I use tcpdump to capture the packets that are sent by process_packet(packet_queue), and notice:
at the very beginning, the interval between sent packets is small
after several packets are sent, the interval becomes around 0.055 seconds
after 20 packets are sent, the interval becomes 0.031 seconds and stays there
I carefully checked my source code and found no suspicious blocking or logic that would lead to such large intervals, so I wonder whether it is a problem with pcap_dispatch.
Are there any efficiency problems with pcap_dispatch or pcap_next, or even with the libpcap file descriptor?
thanks!
On many platforms libpcap uses platform-specific implementations for faster packet capture, so YMMV. Generally they involve a shared buffer between the kernel and the application.
At the very beginning you have a time window between the moment packets start piling up on the RX buffer and the moment you start processing. The accumulation of these packets may cause the higher frequency here. This part is true regardless of implementation.
I haven't found a satisfying explanation for this. Maybe you fell behind and missed a few packets, so the time between the packets you resend becomes higher.
This is what you'd expect in normal operation, I think.
pcap_dispatch is pretty much as good as it gets, at least in libpcap. pcap_next, on the other hand, incurs two penalties (at least on Linux, but I think it does on other mainstream platforms too): a syscall per packet (libpcap calls poll for error checking, even in non-blocking mode) and a copy (libpcap releases the "slot" in the shared buffer ASAP, so it can't just return that pointer). An implementation detail is that, on Linux, pcap_next just calls pcap_dispatch for one packet and with a copying callback.
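For reference, here is a minimal sketch of the non-blocking setup described above, using pcap_setnonblock, pcap_get_selectable_fd, and pcap_dispatch; the callback is illustrative only:

#include <pcap/pcap.h>
#include <stdio.h>

// Illustrative per-packet callback.
static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes\n", h->caplen);
}

// Sketch: put the capture handle into non-blocking mode, obtain a descriptor
// that select()/epoll()/libevent can watch, and drain a batch of packets per
// readiness event with pcap_dispatch (cnt = -1 processes what is buffered).
static int setup_and_drain(pcap_t *handle)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    if (pcap_setnonblock(handle, 1, errbuf) == -1) {
        fprintf(stderr, "pcap_setnonblock: %s\n", errbuf);
        return -1;
    }

    int fd = pcap_get_selectable_fd(handle);  // add this fd to the event loop
    if (fd == -1)
        return -1;

    // Call this from the event loop whenever 'fd' becomes readable.
    return pcap_dispatch(handle, -1, on_packet, NULL);
}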
I have read that in Erlang, using gen_tcp, the data sent over the socket can be aggregated into a single stream. How can I force the socket to send exactly a specific number of bytes?
TCP is a stream protocol (unlike UDP, which is packet oriented), which means, for example, that the receiving application can't tell whether the available data comes from one or several send() calls on the client.
You don't really have any control over the number of bytes sent in a TCP packet: multiple send() calls may result in one TCP packet being received, and one send() call may result in several TCP packets being sent. This is controlled by the OS TCP stack.
In Erlang you can use the socket option {packet, 1|2|4} with gen_tcp:connect and gen_tcp:listen to get packet-oriented handling of the TCP data. This inserts a 1-, 2-, or 4-byte length prefix into each send(), and the receiving side (assuming it is also Erlang and uses the same {packet, N} option) will read data until the indicated number of bytes has been received, regardless of how the message was fragmented into TCP packets.
A call to gen_tcp:recv will block until the expected number of bytes has been read. The same goes for active-mode sockets: the message is delivered only once the expected number of bytes has arrived.
In a partially distributed network app I'm working on in C++ on Linux, I have a message-passing abstraction which will send a buffer over the network. The buffer is sent in two steps: first a 4-byte integer containing the size is sent, and then the buffer is sent afterwards. The receiving end then receives in 2 steps as well - one call to read() to get the size, and then a second call to read in the payload. So, this involves 2 system calls to read() and 2 system calls to write().
On localhost, I set up two test processes. Both processes send and receive messages to each other continuously in a loop. The size of each message was only about 10 bytes. For some reason, the test performed incredibly slowly: about 10 messages sent/received per second. And this was on localhost, not even over a network.
If I change the code so that there is only 1 system call to write, i.e. the sending process packs the size at the head of the buffer and then only makes 1 call to write, the whole thing speeds up dramatically: about 10000 messages sent/received per second. That is an incredible difference in speed for only one fewer system call to write.
Is there some explanation for this?
You might be seeing the effects of the Nagle algorithm, though I'm not sure it is turned on for loopback interfaces.
If you can combine your two writes into a single one, you should always do that. No sense taking the overhead of multiple system calls if you can avoid it.
Okay, well I'm using TCP/IP (SOCK_STREAM) sockets. The example code is pretty straightforward. Here is a basic snippet that reproduces the problem. It doesn't include all the boilerplate setup code, error checking, or ntohs code:
On the sending end:
// Send size
uint32_t size = strlen(buffer);
int res = write(sock, &size, sizeof(size));
// Send payload
res = write(sock, buffer, size);
And on the receiving end:
// Receive size
uint32_t size;
int res = read(sock, &size, sizeof(size));
// Receive payload
char* buffer = (char*) malloc(size);
read(sock, buffer, size);
Essentially, if I change the sending code by packing the size into the send buffer, and only making one call to write(), the performance increase is almost 1000x faster.
This is essentially the same question: C# socket abnormal latency.
In short, you'll want to use the TCP_NODELAY socket option. You can set it with setsockopt.
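A minimal sketch of setting it:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Sketch: disable Nagle's algorithm so small writes are sent immediately
// instead of being held back and coalesced with later writes.
static int disable_nagle(int sock)
{
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}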
You don't give enough information to say for sure. You don't even say which protocol you're using.
Assuming TCP/IP, the socket could be configured to send a packet on every write, instead of buffering output in the kernel until the buffer is full or the socket is explicitly flushed. This means that TCP sends the two pieces of data in different fragments and has to defragment them at the other end.
You might also be seeing the effect of the TCP slow-start algorithm. The first data sent is transmitted as part of the connection handshake. Then the TCP window size is slowly ramped up as more data is transmitted until it matches the rate at which the receiver can consume data. This is useful in long-lived connections but a big performance hit in short-lived ones. You can turn off slow-start by setting a socket option.
Have a look at the TCP_NODELAY and TCP_NOPUSH socket options.
An optimization you can use to avoid multiple system calls and fragmentation is scatter/gather I/O. Using the writev (or sendmsg) system call you can send the 4-byte size and the variable-sized buffer in a single syscall, and both pieces of data will be sent in the same fragment by TCP.
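A minimal sketch of that approach, assuming a connected SOCK_STREAM socket; the send_framed name is illustrative:

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

// Sketch: send the 4-byte length prefix and the payload with one writev()
// call, so both pieces of data leave in the same TCP segment.
static ssize_t send_framed(int sock, const char *buffer, uint32_t len)
{
    uint32_t be_len = htonl(len);   // length prefix in network byte order
    struct iovec iov[2] = {
        { .iov_base = &be_len,        .iov_len = sizeof(be_len) },
        { .iov_base = (void *)buffer, .iov_len = len            },
    };
    return writev(sock, iov, 2);
}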
The problem is that with the first call to send, the system has no idea the second call is coming, so it sends the data immediately. With the second call to send, the system has no idea a third call isn't coming, so it delays the data in hopes that it can combine the data with a subsequent call.
The correct fix is to use a 'gather' operation such as writev if your operating system supports it. Otherwise, allocate a buffer, copy the two chunks in, and make a single call to write. (Some operating systems have other solutions, for example Linux has a 'TCP cork' operation.)
It's not as important, but you should optimize your receiving code too. Call 'read' asking for as many bytes as possible and then parse them yourself. You're trying to teach the operating system your protocol, and that's not a good idea.
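A minimal sketch of that receive-side approach, assuming the same 4-byte length-prefix framing; the names drain_socket and handle_message are illustrative:

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

// Sketch: read as much as is currently available, then hand every complete
// length-prefixed message to a callback. 'buf' and '*used' persist across
// calls so partial messages survive until the rest of their bytes arrive.
static void drain_socket(int sock, char *buf, size_t cap, size_t *used,
                         void (*handle_message)(const char *msg, uint32_t len))
{
    ssize_t n = read(sock, buf + *used, cap - *used);
    if (n <= 0)
        return;                         // error or EOF: let the caller decide
    *used += (size_t)n;

    size_t off = 0;
    while (*used - off >= sizeof(uint32_t)) {
        uint32_t len;
        memcpy(&len, buf + off, sizeof(len));
        len = ntohl(len);
        if (*used - off - sizeof(len) < len)
            break;                      // payload not complete yet
        handle_message(buf + off + sizeof(len), len);
        off += sizeof(len) + len;
    }
    memmove(buf, buf + off, *used - off);   // keep any trailing partial message
    *used -= off;
}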