How to control the maximum number of established connections

When programming with Netty,
I have run into a question: how do I control the maximum number of established connections?
Is there a method that implements this feature,
just like serverBootstrap.option(ChannelOption.SO_BACKLOG, 100);,
which is used to set the maximum length of the queue of not-yet-accepted connections?

You have to write such a feature yourself. For example, you could count the number of active connections in your handler and close any additional incoming connections once the number of active connections exceeds the limit, or simply stop accepting anything.
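A minimal sketch of that counting approach, assuming a standard ServerBootstrap with a ChannelInitializer; the class name ConnectionLimitHandler and the limit value are made up for illustration, not part of the Netty API:

import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.atomic.AtomicInteger;

// One shared instance is added as the first handler of every accepted child channel,
// so it must be marked @Sharable and use a thread-safe counter.
@ChannelHandler.Sharable
public class ConnectionLimitHandler extends ChannelInboundHandlerAdapter {

    private final int maxConnections;
    private final AtomicInteger active = new AtomicInteger();

    public ConnectionLimitHandler(int maxConnections) {
        this.maxConnections = maxConnections;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        if (active.incrementAndGet() > maxConnections) {
            // Over the limit: refuse the connection by closing it right away.
            ctx.close();
            return;
        }
        super.channelActive(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // Runs for refused connections too, so the counter stays balanced.
        active.decrementAndGet();
        super.channelInactive(ctx);
    }
}

Register one shared instance in the child pipeline, e.g. ch.pipeline().addFirst(sharedLimitHandler) inside your ChannelInitializer, so the check runs before any protocol handlers see the connection.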

Related

Does SCTP really prevent head-of-line blocking?

I've known about SCTP for a decade or so, and although I've never gotten to use it yet, I've always wanted to, because of some of its promising (purported) features:
multi-homing
multiplexing w/o head-of-line blocking
mixed order/unordered delivery on the same connection (aka association)
no TIME_WAIT
no SYN flooding
A Comparison between QUIC and SCTP, however, claims:
SCTP intended to get rid of HOL-Blocking by substreams, but its
Transmission Sequence Number (TSN) couples together the transmission
of all data chunks. [...] As a result, in SCTP if a packet is lost,
all the packets with TSN after this lost packet cannot be received
until it is retransmitted.
That statement surprised me because:
removing head-of-line blocking is a stated goal of SCTP
SCTP does have a per-stream sequence number (see the quote from RFC 4960 below), which should allow per-stream processing regardless of the association-global TSN
SCTP has been in use in the telecommunications sector for perhaps close to 2 decades, so how could this have been missed?
Internally, SCTP assigns a Stream Sequence Number to each message
passed to it by the SCTP user. On the receiving side, SCTP ensures
that messages are delivered to the SCTP user in sequence within a
given stream. However, while one stream may be blocked waiting for
the next in-sequence user message, delivery from other streams may
proceed.
Also, there is a paper, Head-of-line Blocking in TCP and SCTP: Analysis and Measurements, that actually measures the round-trip time of a multiplexed echo service in the face of packet loss and concludes:
Our results reveal that [..] a small number of SCTP streams or SCTP unordered mode can avoid this head-of-line blocking. The alternative solution of multiple TCP connections performs worse in most cases.
The answer is not very scholarly, but at least according to the specification in RFC 4960, SCTP seems capable of circumventing head-of-line blocking. The relevant claim seems to be in Section 7.1.
Note: TCP guarantees in-sequence delivery of data to its upper-layer protocol within a single TCP session. This means that when TCP notices a gap in the received sequence number, it waits until the gap is filled before delivering the data that was received with sequence numbers higher than that of the missing data. On the other hand, SCTP can deliver data to its upper-layer protocol even if there is a gap in TSN if the Stream Sequence Numbers are in sequence for a particular stream (i.e., the missing DATA chunks are for a different stream) or if unordered delivery is indicated. Although this does not affect cwnd, it might affect rwnd calculation.
A dilemma is what "are in sequence for a particular stream" entails. There is some stipulation about delaying delivery to the upper layer until packets are reordered (see Section 6.6, below), but reordering does not seem to be conditioned on filling the gaps at the level of the association. Also note the mention in Section 6.2 of the complex distinction between ACK and delivery to the ULP (Upper Layer Protocol).
Whether other stipulations of the RFC indirectly result in the occurrence of HOL blocking, and whether the avoidance is effective in real-life implementations and situations, are questions that warrant further investigation.
Below are some of the excerpts which I've come across in the RFC and which may be relevant.
RFC 4960, Section 6.2 Acknowledgement on Reception of DATA Chunks
When the receiver's advertised window is 0, the receiver MUST drop any new incoming DATA chunk with a TSN larger than the largest TSN received so far. If the new incoming DATA chunk holds a TSN value less than the largest TSN received so far, then the receiver SHOULD drop the largest TSN held for reordering and accept the new incoming DATA chunk. In either case, if such a DATA chunk is dropped, the receiver MUST immediately send back a SACK with the current receive window showing only DATA chunks received and accepted so far. The dropped DATA chunk(s) MUST NOT be included in the SACK, as they were not accepted.
Under certain circumstances, the data receiver may need to drop DATA chunks that it has received but hasn't released from its receive buffers (i.e., delivered to the ULP). These DATA chunks may have been acked in Gap Ack Blocks. For example, the data receiver may be holding data in its receive buffers while reassembling a fragmented user message from its peer when it runs out of receive buffer space. It may drop these DATA chunks even though it has acknowledged them in Gap Ack Blocks. If a data receiver drops DATA chunks, it MUST NOT include them in Gap Ack Blocks in subsequent SACKs until they are received again via retransmission. In addition, the endpoint should take into account the dropped data when calculating its a_rwnd.
These are circumstances which highlight how senders may receive acknowledgement for chunks that are ultimately not delivered to the ULP (Upper Layer Protocol). Note that this applies to chunks with a TSN higher than the Cumulative TSN (i.e., those covered by Gap Ack Blocks). This, together with the unreliability of SACK ordering, is a good reason for the stipulation in Section 7.1 (see below).
RFC 4960, Section 6.6 Ordered and Unordered Delivery
Within a stream, an endpoint MUST deliver DATA chunks received with the U flag set to 0 to the upper layer according to the order of their Stream Sequence Number. If DATA chunks arrive out of order of their Stream Sequence Number, the endpoint MUST hold the received DATA chunks from delivery to the ULP until they are reordered.
This is the only stipulation on ordered delivery within a stream in this section; seemingly, reordering does not depend on filling the gaps in ACK-ed chunks.
RFC 4960, Section 7.1 SCTP Differences from TCP Congestion Control
Gap Ack Blocks in the SCTP SACK carry the same semantic meaning as the TCP SACK. TCP considers the information carried in the SACK as advisory information only. SCTP considers the information carried in the Gap Ack Blocks in the SACK chunk as advisory. In SCTP, any DATA chunk that has been acknowledged by SACK, including DATA that arrived at the receiving end out of order, is not considered fully delivered until the Cumulative TSN Ack Point passes the TSN of the DATA chunk (i.e., the DATA chunk has been acknowledged by the Cumulative TSN Ack field in the SACK).
This is stated from the perspective of the sending endpoint, and is accurate for the reason emphasized in section 6.6 above.
Note: TCP guarantees in-sequence delivery of data to its upper-layer protocol within a single TCP session. This means that when TCP notices a gap in the received sequence number, it waits until the gap is filled before delivering the data that was received with sequence numbers higher than that of the missing data. On the other hand, SCTP can deliver data to its upper-layer protocol even if there is a gap in TSN if the Stream Sequence Numbers are in sequence for a particular stream (i.e., the missing DATA chunks are for a different stream) or if unordered delivery is indicated. Although this does not affect cwnd, it might affect rwnd calculation.
This seems to be the core answer to what interests you.
In support of this argument, see the format of the SCTP SACK chunk as exposed here and here.
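To see the stream mechanism in code: the JDK ships an SCTP API (com.sun.nio.sctp) on platforms with a native SCTP stack, such as Linux with lksctp. The hedged sketch below only shows how messages are assigned to different streams of one association, which is the feature the RFC relies on to keep one stream delivering while another waits for a retransmission; the localhost endpoint and port are made up for illustration.

import com.sun.nio.sctp.MessageInfo;
import com.sun.nio.sctp.SctpChannel;

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SctpStreamsSketch {
    public static void main(String[] args) throws Exception {
        // Connect one association and ask for at least 2 outbound/inbound streams
        // (the peer may negotiate the count down). Assumes an SCTP server on localhost:9999.
        try (SctpChannel channel =
                     SctpChannel.open(new InetSocketAddress("localhost", 9999), 2, 2)) {

            ByteBuffer a = ByteBuffer.wrap("ordered on stream 0".getBytes(StandardCharsets.UTF_8));
            ByteBuffer b = ByteBuffer.wrap("ordered on stream 1".getBytes(StandardCharsets.UTF_8));
            ByteBuffer c = ByteBuffer.wrap("unordered".getBytes(StandardCharsets.UTF_8));

            // Two independent ordered streams: a lost chunk on stream 0 only delays stream 0.
            channel.send(a, MessageInfo.createOutgoing(null, 0));
            channel.send(b, MessageInfo.createOutgoing(null, 1));

            // Unordered delivery (U flag set) sidesteps ordering within the stream entirely.
            channel.send(c, MessageInfo.createOutgoing(null, 0).unordered(true));
        }
    }
}

Whether the receiving stack actually delivers stream 1 ahead of a stalled stream 0 is exactly the implementation question raised above; the API merely exposes the per-stream sequencing that the RFC describes.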

How to calculate the number of packets sent to or forwarded by a node in RPL protocol of ContikiOS?

In RPL, I am selecting the best parent using a trust model. In order to select a trusted parent, direct and indirect trust must be calculated. For direct trust computation, the number of packets sent to node A by node B and the number of packets forwarded by node A on behalf of node B must be counted, and I am having trouble determining the number of forwarded packets. Any help will be useful for solving this problem.
You can look at the values of uip_stat.ip.sent and uip_stat.ip.forwarded. Make sure to enable uIP statistics (#define UIP_CONF_STATISTICS 1).

Control rate of individual topic consumption in Kafka Streams 0.9.1.0-cp1?

I am trying to backprocess data in Kafka topics using a Kafka Streams application that involves a join. One of the streams to be joined has a much larger volume of data per unit of time in its corresponding topic. I would like to control the consumption from the individual topics so that I get roughly the same event timestamps from each topic in a single consumer.poll(). However, there doesn't appear to be any way to control the behavior of the KafkaConsumer backing the source stream. Is there any way around this? Any insight would be appreciated.
Currently, Kafka cannot control the rate limit on either producers or consumers.
Refer:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-13+-+Quotas
But if you are using Apache Spark as the stream processing platform, you can limit the input rate for the Kafka receivers.
On the consumer side, you can use the consume([num_messages=1][, timeout=-1])
function instead of poll.
consume([num_messages=1][, timeout=-1]):
Consumes a list of messages (possibly empty on timeout). Callbacks may be executed as a side effect of calling this method.
The application must check the returned Message object’s Message.error() method to distinguish between proper messages (error() returns None) and errors for each Message in the list (see error().code() for specifics). If the enable.partition.eof configuration property is set to True, partition EOF events will also be exposed as Messages with error().code() set to _PARTITION_EOF.
num_messages (int) – The maximum number of messages to return (default: 1).
timeout (float) – The maximum time to block waiting for message, event or callback (default: infinite (-1)). (Seconds)
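Note that the consume() call quoted above comes from the confluent-kafka Python client; the Java consumer backing Kafka Streams has no direct equivalent, and record timestamps plus poll(Duration) only arrived after 0.9. If you can run a plain KafkaConsumer outside Streams on a newer client, one hedged workaround is to pause() the partitions of the high-volume topic whenever its event time runs ahead of the other topic, and resume() them once the slower topic catches up. The topic names, skew threshold, and serializers below are made up for illustration.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class ThrottledJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "backprocessing");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "big-topic" and "small-topic" stand in for the two joined inputs.
            consumer.subscribe(Arrays.asList("big-topic", "small-topic"));

            long maxBigTs = Long.MIN_VALUE;
            long maxSmallTs = Long.MIN_VALUE;
            long allowedSkewMs = 60_000;

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    if (r.topic().equals("big-topic")) {
                        maxBigTs = Math.max(maxBigTs, r.timestamp());
                    } else {
                        maxSmallTs = Math.max(maxSmallTs, r.timestamp());
                    }
                    // ... feed the record into the join logic ...
                }

                // Collect the partitions of the high-volume topic currently assigned to us.
                List<TopicPartition> bigPartitions = new ArrayList<>();
                for (TopicPartition tp : consumer.assignment()) {
                    if (tp.topic().equals("big-topic")) {
                        bigPartitions.add(tp);
                    }
                }

                // Stop fetching from the fast topic while it is too far ahead in event time.
                if (maxBigTs > maxSmallTs + allowedSkewMs) {
                    consumer.pause(bigPartitions);
                } else {
                    consumer.resume(bigPartitions);
                }
            }
        }
    }
}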

Does a transmitted-bytes event exist in the Linux kernel?

I need to write a rate limiter that will perform some work each time X bytes have been transmitted.
The straightforward approach is to check the length of each transmitted packet, but I think that will be too slow for me.
Is there a way to use some kind of network event that will be triggered by transmitted packets/bytes?
I think you may look at netfilter.
Using its (kernel-level) API, you can have your custom code triggered by network events, modify received packets before they are passed to the application, and so on.
http://www.netfilter.org/
It's protocol-dependent, actually. But for TCP, you can use setsockopt with the SO_RCVLOWAT option to define the minimum number of bytes (the watermark) that must be available before a read operation is permitted.
If you need to enforce a maximum size too, adjust the receive buffer size using SO_RCVBUF.

How to determine total data upload+download in TCP/IP

I need to calculate the total data transferred while sending a fixed amount of data from client to server over TCP/IP. This includes connecting to the server, sending the request and headers, receiving the response, receiving the data, etc.
More precisely, how do I get the total data transferred when using the POST and GET methods?
Is there any formula for that? Even a theoretical one will do fine (not considering packet loss, connection retries, etc.).
FYI, I tried RFC 2616 and RFC 1180, but those are going over my head.
Any suggestions?
Thanks in advance.
You can't know the total transfer size in advance, even ignoring retransmits. There are several things that will stop you:
TCP options are negotiated between the hosts when the connection is established. Some options (e.g., timestamp) add additional data to the TCP header
"total data transfer size" is not clear. Ethernet, for example, adds quite a few more bits on top of whatever IP used. 802.11 (wireless) will add even more. So do HDLC or PPP going over a T1. Don't even think about frame relay. Some links may use compression (which will reduce the total size). The total size depends on where you measure it, even for a single packet.
Assuming you're just interested in the total octet size at layer 2, and you know the TCP options that will be negotiated in advance, you still can't know the path MTU, which may change even while the connection is in progress. And if you're not doing path MTU discovery (which would be weird), then the packet may get fragmented somewhere, and the remote end will see a different amount of data transferred than you do.
I'm not sure why you need to know this, but I suggest that:
If you just want an estimate, watch a typical connection in Wireshark. Calculate the percent overhead (vs. the size of data you gave to TCP, and received from TCP). Use that number to estimate: it will be close enough, except in pathological situations.
If you need to know for sure how much data your end saw transmitted and received, use libpcap to capture the packet stream and check.
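To make the "percent overhead" estimate from the first suggestion concrete, here is a rough back-of-the-envelope sketch. It assumes IPv4 over Ethernet with no IP/TCP options and a 1460-byte MSS, and it ignores the handshake, pure ACKs in the other direction, Ethernet preamble/inter-frame gap, and retransmissions; the payload size is just an example value.

public class TcpOverheadEstimate {
    public static void main(String[] args) {
        long payloadBytes = 1_000_000;   // application data handed to TCP (example value)

        int mss = 1460;                  // typical Ethernet MSS: 1500 MTU - 20 (IP) - 20 (TCP)
        int ipHeader = 20;               // IPv4 header without options
        int tcpHeader = 20;              // TCP header without options (timestamps would add 12)
        int ethernetOverhead = 14 + 4;   // Ethernet header + frame check sequence

        long segments = (payloadBytes + mss - 1) / mss;      // number of segments, rounded up
        long perSegment = ipHeader + tcpHeader + ethernetOverhead;
        long wireBytes = payloadBytes + segments * perSegment;

        System.out.printf("segments: %d%n", segments);
        System.out.printf("bytes on the wire (data direction only): %d%n", wireBytes);
        System.out.printf("overhead: %.2f%%%n",
                100.0 * (wireBytes - payloadBytes) / payloadBytes);
    }
}

For a full-size segment this works out to 58 bytes of headers per 1460 bytes of payload, i.e. roughly 4% overhead in the data direction, before counting the ACKs coming back.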
I'd say that, on average, the request and response have about 8 lines of headers each, at about 30 characters per line. Then allow for the size increase of converting any uploaded binary to Base64.
You didn't say whether you also want to count TCP packet headers, in which case you could assume an MTU of about 1500 and add 20 bytes (TCP header, without options) per roughly 1460 bytes of data.
Finally, you could always set up a packet sniffer and count actual bytes for a sample of data.
Oh, and you may need to allow for deflate/gzip encoding as well.
