What is the effect of UIP_CONF_BUFFER_SIZE in the contiki-conf.h file?

I have been working with Contiki for some time, and recently I ran into a strange problem: the Cooja mote fails to receive any data packet larger than 57 bytes, for the z1 mote the limit is around 96-97 bytes (in the Cooja simulator), and on real hardware (the mbxxx target) I've observed that the limit is 92 bytes. Has anyone else faced a similar situation? Does this have something to do with platform-specific configuration, and how do I change it? I've looked into the contiki-conf.h file and found the UIP_CONF_BUFFER_SIZE parameter. What is the effect of changing this parameter?

I figured it out: it is the size of the uIP packet buffer, i.e. the maximum IP packet the stack can handle. That total is the 40-byte IP header + 8-byte UDP header + UDP payload; the same holds for TCP connections. So, for example, if UIP_CONF_BUFFER_SIZE is set to 140 and we ping the mote with an effective IP packet size of more than 140 bytes, the mote will fail to respond!
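A minimal sketch of that arithmetic, assuming the 40-byte IP header and 8-byte UDP header described above (the exact header sizes and the default buffer size depend on the platform configuration):

```python
# Rough payload-limit arithmetic for the uIP packet buffer (header sizes
# are the ones assumed in the answer above: 40-byte IP, 8-byte UDP).
IP_HEADER = 40
UDP_HEADER = 8

def max_udp_payload(uip_conf_buffer_size):
    """Largest UDP payload that still fits in the uIP packet buffer."""
    return uip_conf_buffer_size - IP_HEADER - UDP_HEADER

for size in (140, 240, 600):
    print(f"UIP_CONF_BUFFER_SIZE={size}: max UDP payload ~ {max_udp_payload(size)} bytes")
```

To raise the limit, the usual approach is to override UIP_CONF_BUFFER_SIZE in your project's project-conf.h, at the cost of extra RAM on constrained motes.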

Related

Inspecting a pcap file with Wireshark: [Packet size limited during capture]

I'm inspecting a pcap file with Wireshark and some of the entries have this written in their information field:
10001 → 27017 Len=121 [Packet size limited during capture]
I read that this happens when you capture packets with tcpdump and tcpdump cuts off the packet at a specific length.
What does 10001 → 27017 mean?
In the information field it says Len=121, but in the Length field it says it is 163 Bytes long. What is the correct length?
What does 10001 → 27017 mean?
As @SteffenUllrich pointed out, these are the source and destination ports.
In the information field it says Len=121, but in the Length field, it says it is 163 Bytes long. What is the correct length?
I am not entirely sure, but I think the difference in the length could be because of the [Packet size limited during capture].
By default, Director has a packet size limit to capture data on the wire. Larger packets than the packet size limit will show "Packet size limited during capture" when reading the packet capture. Taking a capture on larger packet sizes increases the processing time of packets.
(https://knowledge.broadcom.com/external/article/165718/error-packet-size-limited-during-capture.html)
So, my speculation is that the packet is 163 bytes, but only 121 were captured.
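One way to check is to read the capture file's per-record headers directly: each record stores both the captured length and the original on-the-wire length, and Wireshark flags "[Packet size limited during capture]" whenever the first is smaller than the second. A minimal sketch for the classic pcap format (pcapng files are laid out differently):

```python
# Compare captured length (caplen) vs. original on-the-wire length for each
# record in a classic .pcap file. Records with caplen < length were truncated
# by the capture snaplen, which produces "[Packet size limited during capture]".
import struct
import sys

def dump_truncated(path):
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic == b"\xd4\xc3\xb2\xa1":
            endian = "<"              # little-endian pcap file
        elif magic == b"\xa1\xb2\xc3\xd4":
            endian = ">"              # big-endian pcap file
        else:
            raise ValueError("not a classic pcap file")
        f.read(20)                    # rest of the 24-byte global header
        while True:
            rec = f.read(16)          # per-record header
            if len(rec) < 16:
                break
            _sec, _usec, caplen, origlen = struct.unpack(endian + "IIII", rec)
            if caplen < origlen:
                print(f"truncated: captured {caplen} of {origlen} bytes")
            f.seek(caplen, 1)         # skip the packet data itself

if __name__ == "__main__":
    dump_truncated(sys.argv[1])
```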

LoRa: application layer receiving fragmented packets for each received transmission?

I am using an Ebyte ttl-1w-433 RF module attached to a Raspberry Pi. When I send a packet, the receiver receives it, but my program (application layer) prints the data in two fragments. I am using pySerial for my program. Below is the scenario I am having a problem with:
1. The sender sends 2 packets of 58 bytes each.
2. The receiver receives two transmissions, and two only (the receiver LED blinks only twice).
3. The receiver pushes the data to the application layer as 48, 10, 48, 10 bytes instead of 58, 58 bytes.
4. The application layer (Python script) prints four print statements instead of two.
I am not losing any data; I am just curious why the data arrives at the application layer fragmented. I have tried different serial baud rate and air data rate combinations, but I always see the same pattern.
I am not familiar with the Ebyte ttl-1w-433 module, but it uses the Semtech SX1276 chip. The SX1276 has a register, RegPayloadLength (see the SX1272 datasheet, page 114), which defines the payload length. Maybe your Raspberry Pi library (or whatever controls access to the module) defines a fixed maximum length of 48 bytes at initialization. Since you did not provide any code, this is just a wild guess.
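Whatever causes the split, bear in mind that a serial read returns whatever happens to be sitting in the buffer at that moment, so the application usually has to reassemble fixed-length messages itself. A minimal pySerial sketch, assuming 58-byte messages as in the question (the port name and baud rate are placeholders):

```python
# Reassemble fixed-size messages from a serial port: reads return whatever is
# buffered, so accumulate bytes until a full 58-byte message is available.
import serial

MSG_LEN = 58  # message size from the question

def read_messages(port="/dev/ttyS0", baudrate=9600):
    ser = serial.Serial(port, baudrate, timeout=1)
    buf = bytearray()
    try:
        while True:
            chunk = ser.read(ser.in_waiting or 1)   # read whatever has arrived
            buf.extend(chunk)
            while len(buf) >= MSG_LEN:               # print complete messages only
                msg, buf = bytes(buf[:MSG_LEN]), buf[MSG_LEN:]
                print(f"got {len(msg)} bytes: {msg!r}")
    finally:
        ser.close()

if __name__ == "__main__":
    read_messages()
```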

Cannot read/write SRAM using upper MSB address bits

I'm using an external SRAM (256K x 16 bits), with 16-bit data and an 18-bit address, and I cannot read/write the external SRAM when accessing anything that uses the MSBs (address bits 16 and 17).
Accessing anything that does not require these bits (anything reachable with address bits 0-15) works just fine.
I found that when I disconnect address bits 16 and 17 and tie them either high or low, it works fine, but when these bits are connected to the PSoC 5LP and driven by the EMIF component (External Memory Interface), I get a random static value where I am expecting a specific set of changing values.
I have also verified the signals coming out of the PSoC for address bits 16 and 17, and they seem to behave just like the other address bits. If I disconnect both the 16 and 17 wires and then plug them in individually, whatever port on the PSoC was driving the 16th or 17th bit freezes the data on my LCD.
The writes to the external SRAM are done via DMA, while the reads are done directly through a pointer to the memory; this worked fine for previous SRAMs, although they were 1M x 8 bits instead.
This is consistent across GPIO pins.
I'm using a 256K x 16 SRAM: CY7C1041D.
The SRAM is asynchronous.
Again, reads/writes with the PSoC work for all SRAM addresses that don't involve the upper two MSBs (address bits 16 and 17).
Does anyone know what is going on?

How does sending tinygrams cause network congestion?

I've read advice in many places to the effect that sending a lot of small packets will lead to network congestion. I've even experienced this with a recent multi-threaded TCP app I wrote. However, I don't know if I understand the exact mechanism by which this occurs.
My initial guess is that if the MTU of the physical transmission medium is fixed, and you send a bunch of small packets, then each packet may potentially take up an entire transmission frame on the physical medium.
For example, my understanding is that even though Ethernet supports variable-length frames, most equipment uses a fixed Ethernet frame of 1500 bytes. At 100 Mbit/s, a 1500-byte frame "goes by" on the wire every 0.12 milliseconds. If I transmit a 1-byte message (plus TCP and IP headers) every 0.12 milliseconds, I will effectively saturate the 100 Mbit Ethernet connection while moving only 8333 bytes of user data per second.
Is this a correct understanding of how tinygrams cause network congestion?
Do I have all my terminology correct?
In wired Ethernet, at least, there is no "synchronous clock" that times the beginning of every frame. There is a minimum frame size, but it's more like 64 bytes, not 1500. There are also minimum gaps between frames, but that mostly matters on shared-access networks (ATM and modern Ethernet are switched, not shared-access). It is the maximum size that is limited to 1500 bytes on virtually all Ethernet equipment.
But the smaller your packets get, the higher the ratio of framing headers to data. Eventually you are spending 40-50 bytes of overhead for a single byte of data, and more again for its acknowledgement.
If you could just hold on for a moment and collect another byte to send in that packet, you would double your network efficiency. (This is the reason for Nagle's algorithm.)
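For reference, a minimal sketch of the knob involved (the host, port, and request bytes are placeholders): with Nagle left on, the default, the kernel is free to coalesce a burst of tiny writes into fewer segments, while setting TCP_NODELAY turns that batching off so each write can leave as its own tinygram.

```python
# Many 1-byte send() calls: with Nagle enabled (default) the kernel may merge
# them into a few segments; with TCP_NODELAY each write can go out on its own.
import socket

def tiny_writes(host="example.com", port=80, nodelay=False):
    s = socket.create_connection((host, port))
    if nodelay:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for ch in b"GET / HTTP/1.0\r\n\r\n":
        s.sendall(bytes([ch]))       # one byte per send() call
    s.recv(1)                        # wait for the first response byte
    s.close()

if __name__ == "__main__":
    tiny_writes(nodelay=False)       # compare the two settings in Wireshark
```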
There is a tradeoff on a channel with errors, because the longer the frame you send, the more likely it is to experience an error and have to be retransmitted. Newer wireless standards load the frame up with forward error correction bits to avoid retransmissions.
The classic example of "tinygrams" is 10,000 users all sitting on a campus network, typing into their terminal sessions. Every keystroke produces a single packet (and an acknowledgement). At a typing rate of 4 keystrokes per second, that's 80,000 packets per second just to move 40 kbytes per second. On a "classic" 10 Mbit shared-medium Ethernet this is impossible to achieve, because you can only send about 14.9k minimum-sized frames in one second - excluding the effect of collisions:
96 bits inter-frame gap
+ 64 bits preamble
+ 112 bits Ethernet header
+ 368 bits minimum payload (46 bytes: our 1 byte of data padded out, and that doesn't even include the IP or TCP headers!)
+ 32 bits FCS trailer
-----------------------------
= 672 bits on the wire per tinygram
10,000,000 bits/s ÷ 672 bits/frame ≈ 14,880 frames/second.
Perhaps a better way to state this is that an Ethernet maxed out moving tinygrams can only move about 120 kbit/s of user data across a 10 Mbit/s medium, for an efficiency of roughly 1.2%.
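A quick script re-deriving those figures (the framing constants match the breakdown above; collisions and the IP/TCP headers are ignored, as noted):

```python
# Back-to-back minimum-size frames on classic 10 Mbit/s Ethernet, each
# carrying a single byte of user data.
LINK_BPS = 10_000_000

IFG_BITS      = 96    # 12-byte inter-frame gap
PREAMBLE_BITS = 64    # 8-byte preamble + SFD
HEADER_BITS   = 112   # 14-byte Ethernet header
PAYLOAD_BITS  = 368   # 46-byte minimum payload (1 data byte plus padding)
FCS_BITS      = 32    # 4-byte frame check sequence

bits_per_frame = IFG_BITS + PREAMBLE_BITS + HEADER_BITS + PAYLOAD_BITS + FCS_BITS
frames_per_sec = LINK_BPS / bits_per_frame
goodput_bps = frames_per_sec * 8           # 1 byte of user data per frame

print(f"{bits_per_frame} bits on the wire per tinygram")
print(f"{frames_per_sec:.0f} frames/s, {goodput_bps / 1000:.0f} kbit/s of user data")
print(f"efficiency = {goodput_bps / LINK_BPS:.2%}")
```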
A TCP packet transmitted over a link will have something like 40 bytes of header information. Therefore, if you break a transmission into 100 one-byte packets, each packet sent carries 40 bytes of headers, so about 98% of the resources used for transmission are overhead. If instead you send it as one 100-byte packet, the total transmitted data is only 140 bytes, so only about 28% is overhead. In both cases you've transmitted 100 bytes of payload over the network, but in one case you used 140 bytes of network resources to accomplish it, and in the other you used 4100 bytes. In addition, it takes more resources on the intermediate routers to correctly route 100 41-byte packets than one 140-byte packet. Routing 1-byte packets is pretty much the worst-case scenario for router performance, so they will generally exhibit their worst-case performance in this situation.
In addition, especially with TCP, as performance degrades due to small packets, the machines can try to do things to compensate (like retransmitting) that actually make things worse, hence the use of Nagle's algorithm to try to avoid this.
BDK has about half the answer (+1 for him). A large part of the problem is that every message comes with 40 bytes of overhead. It's actually a little worse than that, though.
Another issue is that there is actually a minimum frame size on the wire. (This is not the MTU. The MTU is a maximum before fragmentation starts. Different issue entirely.) The minimum is pretty small (Ethernet pads the payload up to 46 bytes, which already has to fit your roughly 40 bytes of IP and TCP headers), but if you don't use that much, the link still sends that much.
Another issue is protocol overhead. Each packet sent by TCP causes an ACK packet to be sent back by the recipient as part of the protocol.
The result is that if you do something silly, like send one TCP packet every time the user hits a key, you can easily end up with a tremendous amount of wasted overhead data floating around.

How to determine total data upload+download in TCP/IP

I need to calculate the total data transferred while moving a fixed amount of data from client to server over TCP/IP. It includes connecting to the server, sending the request and headers, receiving the response, receiving the data, etc.
More precisely, how do I get the total data transferred when using the POST and GET methods?
Is there any formula for that? Even a theoretical one will do fine (not considering packet loss, connection retries, etc.).
FYI, I tried RFC 2616 and RFC 1180, but those are going over my head.
Any suggestions?
Thanks in advance.
You can't know the total transfer size in advance, even ignoring retransmits. There are several things that will stop you:
TCP options are negotiated between the hosts when the connection is established. Some options (e.g., timestamps) add additional data to the TCP header.
"Total data transfer size" is not well defined. Ethernet, for example, adds quite a few more bits on top of whatever IP uses. 802.11 (wireless) adds even more, as do HDLC or PPP going over a T1. Don't even think about Frame Relay. Some links may use compression (which will reduce the total size). The total size depends on where you measure it, even for a single packet.
Assuming you're just interested in the total octet size at layer 2, and you know in advance which TCP options will be negotiated, you still can't know the path MTU, which may change even while the connection is in progress. And if you're not doing path MTU discovery (which would be weird), the packets may get fragmented somewhere, and the remote end will see a different amount of data transferred than you do.
I'm not sure why you need to know this, but I suggest the following:
If you just want an estimate, watch a typical connection in Wireshark. Calculate the percentage of overhead (versus the size of the data you handed to TCP and received from TCP). Use that number to estimate: it will be close enough, except in pathological situations.
If you need to know for sure how much data your end saw transmitted and received, use libpcap to capture the packet stream and check.
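For the second option, a minimal sketch using scapy (an assumption here; any libpcap-based tool would do) that totals the bytes seen on the wire for one connection. The server address, port, and file name are placeholders:

```python
# Sum the on-the-wire bytes of every frame belonging to one TCP connection
# in a capture file.
from scapy.all import rdpcap, IP, TCP

SERVER = "192.0.2.10"   # placeholder server address
PORT = 80               # placeholder server port

total = 0
for pkt in rdpcap("transfer.pcap"):
    if IP in pkt and TCP in pkt:
        if SERVER in (pkt[IP].src, pkt[IP].dst) and PORT in (pkt[TCP].sport, pkt[TCP].dport):
            total += len(pkt)   # frame length as captured
print(f"{total} bytes on the wire for this connection")
```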
I'd say on average that the request and response have about 8 lines of headers each, at about 30 characters per line. Then allow for the size increase of converting any uploaded binary data to Base64.
You didn't say whether you also want to count the TCP/IP packet headers, in which case you could assume an MTU of about 1500 bytes and add roughly 40 bytes (20-byte TCP header plus 20-byte IP header) per ~1460 bytes of data.
Finally, you could always set up a packet sniffer and count the actual bytes for a sample of data.
Oh yeah, and you may need to allow for deflate/gzip content encoding as well.
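Putting the rough numbers above together, a back-of-the-envelope estimator; every constant here is an assumption to be tuned against a real capture, and retransmits, TCP options, and delayed ACKs are ignored:

```python
# Crude estimate of total bytes on the wire for one HTTP POST over TCP.
import math

def estimate_total_bytes(payload, header_lines=8, chars_per_line=30, mss=1460,
                         per_packet_overhead=40 + 14):   # TCP/IP headers + Ethernet header
    http_headers = header_lines * chars_per_line          # request and response headers each
    request_bytes = http_headers + payload
    response_bytes = http_headers                          # assume a tiny response body
    data_packets = math.ceil(request_bytes / mss) + math.ceil(response_bytes / mss)
    ack_packets = data_packets                             # assume one ACK per data packet
    handshake_packets = 3 + 4                              # SYN/SYN-ACK/ACK plus FIN teardown
    overhead = (data_packets + ack_packets + handshake_packets) * per_packet_overhead
    return request_bytes + response_bytes + overhead

print(estimate_total_bytes(100_000))   # e.g. a 100 kB POST body
```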
