What is the throughput of the ESP8266 NodeMCU?

I am comparing wireless technologies such as Zigbee and WiFi.
I bought a Zigbee module with a nominal data rate of 250 kbps, but the actual throughput is less than 30 kbps.
Similarly, the ESP8266 promises 1 Mbps, but I don't know what throughput I am actually going to get.
Has anyone measured the throughput of the ESP8266 NodeMCU?

One of Espressif's staff posted a response to a similar question regarding throughput on the ESP8266 in AT bridge (passthrough) mode:
TCP throughput: AT TCP passthrough Tx 303 kbps, Rx 302 kbps (UART baud rate 420000)
UDP throughput: AT UDP passthrough Tx 250 kbps, Rx 250 kbps (UART baud rate 420000)
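Those figures are bounded by the UART, not by WiFi: 420000 baud with 8N1 framing carries at most 420000 × 8/10 = 336 kbps of data, so the measured ~303 kbps is close to the serial ceiling. If you run your own code on the module instead of the AT firmware, you can measure the WiFi-side throughput directly. A minimal sketch along these lines (Arduino core for the ESP8266; the SSID, password, host, and port are placeholders, and the PC end just needs a TCP sink such as a netcat listener):

```cpp
#include <ESP8266WiFi.h>

const char* kSsid = "your-ssid";      // placeholder
const char* kPass = "your-password";  // placeholder
const char* kHost = "192.168.1.10";   // placeholder: PC running a TCP sink
const uint16_t kPort = 5001;

WiFiClient client;
uint8_t buf[1024];                    // payload pattern; contents irrelevant

void setup() {
  Serial.begin(115200);
  WiFi.begin(kSsid, kPass);
  while (WiFi.status() != WL_CONNECTED) delay(100);
  while (!client.connect(kHost, kPort)) delay(500);
}

void loop() {
  // Send as fast as possible for one second, then report the achieved rate.
  uint32_t start = millis();
  uint32_t sent = 0;
  while (millis() - start < 1000) {
    sent += client.write(buf, sizeof buf);
  }
  Serial.printf("TCP throughput: %u kbit/s\n", sent * 8 / 1000);
}
```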

Related

How to measure packet loss on an Ethernet PoE passthrough board?

I made an Ethernet PoE passthrough circuit board with an input connector, the correct magnetics to isolate the signal on both ends, and an output connector. I'd like to test the board by sending data into it and measuring the packets on the output to make sure nothing is being lost.
I have Wireshark on my computer, but I'm not sure how to inject a certain number of packets into the input or how to measure them on the output to look for lost packets. I can connect laptops on both ends, but my board has no IP address; it's just hardware.
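Since the board is a transparent pass-through, one approach is to put a laptop on each side, push a known number of sequence-numbered UDP datagrams through it, and count what arrives; Wireshark on the receiving laptop can then also show any gaps. A rough sketch, assuming both laptops can run a POSIX program (the program name, port, and packet count are arbitrary):

```cpp
// Hypothetical loss tester: run "./losstest recv 5000" on the laptop behind
// the board, then "./losstest send <recv-ip> 5000 10000" on the other side.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <set>

int main(int argc, char** argv) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (argc >= 4 && strcmp(argv[1], "send") == 0) {
        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(atoi(argv[3]));
        inet_pton(AF_INET, argv[2], &dst.sin_addr);
        long n = argc > 4 ? atol(argv[4]) : 10000;
        for (long i = 0; i < n; i++) {
            uint32_t seq = htonl((uint32_t)i);   // sequence number as payload
            sendto(sock, &seq, sizeof seq, 0, (sockaddr*)&dst, sizeof dst);
            usleep(100);                         // pace so the receiver keeps up
        }
        printf("sent %ld datagrams\n", n);
    } else if (argc >= 3 && strcmp(argv[1], "recv") == 0) {
        sockaddr_in me{};
        me.sin_family = AF_INET;
        me.sin_addr.s_addr = INADDR_ANY;
        me.sin_port = htons(atoi(argv[2]));
        bind(sock, (sockaddr*)&me, sizeof me);
        std::set<uint32_t> seen;                 // dedup + gap detection
        uint32_t maxseq = 0;
        for (;;) {
            uint32_t seq;
            if (recv(sock, &seq, sizeof seq, 0) != (ssize_t)sizeof seq) break;
            seq = ntohl(seq);
            seen.insert(seq);
            if (seq > maxseq) maxseq = seq;
            if (seen.size() % 1000 == 0)
                printf("received %zu of %u\n", seen.size(), maxseq + 1);
        }
    }
    close(sock);
    return 0;
}
```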

ESP32: Sending data over WiFi

I want to connect a lidar sensor to my ESP32 and send the results to my laptop over WiFi. The sensor takes readings at roughly 1,500/sec, and each reading consists of a distance in cm and an angle (e.g. 40123, 180.1).
I would like to send the data as close to real time as possible. I have successfully tried sending data using both WebSockets and server-sent events, but I cannot get anywhere near the speed needed to send individual readings; the AsyncServer gives an error saying that too many messages are queued.
So my question: is there another protocol I can use on the ESP32 that will achieve this? If so, could anyone give me a simple example to test?
Thanks.
If you encode your distance and angle values as an integer (4 B) and a float (4 B), your payload would be in the neighbourhood of 8 × 1500 = 12 kB/s, which is not too demanding. The bottleneck, I suspect, is the number of packets per second that the ESP and your WiFi equipment and environment can handle; I wouldn't be surprised if 1500 packets per second is too much to ask of it. Batch your readings into as few packets as your latency requirements allow, then send. E.g. batching 50 samples into a single packet means sending 30 packets per second, each carrying 400 B of payload. The latency for the oldest sample in a packet would then be roughly 33 ms plus network latency (which can add anywhere from 1 to 1000 ms depending on how busy your ether is).
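A minimal sketch of this batching idea on the ESP32, using UDP (more on protocol choice below; the SSID, password, laptop IP, port, and the readLidar() stub are all placeholders for your own setup):

```cpp
#include <WiFi.h>
#include <WiFiUdp.h>

const char* kSsid = "your-ssid";       // placeholder
const char* kPass = "your-password";   // placeholder
IPAddress kLaptop(192, 168, 1, 10);    // placeholder: your laptop's address
const uint16_t kPort = 5005;
const size_t kBatch = 50;              // 50 samples -> ~30 packets/s at 1500/s

struct Reading {                       // 8 bytes per sample
  uint32_t distance_cm;
  float angle_deg;
};

WiFiUDP udp;
Reading batch[kBatch];
size_t count = 0;

Reading readLidar() {                  // stub: replace with your sensor driver
  return {40123, 180.1f};
}

void setup() {
  WiFi.begin(kSsid, kPass);
  while (WiFi.status() != WL_CONNECTED) delay(100);
}

void loop() {
  batch[count++] = readLidar();
  if (count == kBatch) {               // one ~400-byte datagram per 50 samples
    udp.beginPacket(kLaptop, kPort);
    udp.write(reinterpret_cast<uint8_t*>(batch), sizeof batch);
    udp.endPacket();
    count = 0;
  }
}
```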
As for the protocol, UDP is usually used for low-latency, low-jitter stuff. Mind you, it doesn't handle packet loss in any way: once a packet is lost, it's lost. Also, packets might arrive out of order. If you don't like that, use TCP, which recovers from packet loss and orders packets. The price of TCP is significant added latency and jitter as the protocol slowly discovers and re-transmits lost packets, delaying the reception of newer data on the receiver side.
As for an example, google for "posix udp socket tutorial".
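If it helps, here is roughly what the laptop side could look like as a POSIX receiver for the batched readings sketched above (the port and the 8-byte record layout mirror that sketch, so they are assumptions rather than any fixed protocol):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdint>
#include <cstdio>

struct Reading {            // must match the sender's layout exactly
    uint32_t distance_cm;
    float angle_deg;
};

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5005);       // same port as the ESP32 sketch
    bind(sock, (sockaddr*)&addr, sizeof addr);

    Reading batch[50];
    for (;;) {
        ssize_t n = recv(sock, batch, sizeof batch, 0);
        for (ssize_t i = 0; i * (ssize_t)sizeof(Reading) < n; i++)
            printf("%u cm @ %.1f deg\n",
                   batch[i].distance_cm, batch[i].angle_deg);
    }
}
```

This relies on both ends being little-endian (true for the ESP32 and a typical x86 laptop); for anything less controlled, serialize the fields explicitly.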

Packets are greater than configured MTU

I ran tcpdump and captured packets; the configured MTU is 2140. I am analysing the pcap files using Wireshark.
According to the configured MTU, the expected maximum frame size should be 2154 bytes (2140 bytes + 14 Ethernet header bytes). But I see packets larger than 2154 bytes (e.g. 9010 bytes). On analysing, I found that these packets are generated on the machine where I ran tcpdump (call it A) and are destined for another machine (call it B). I expected a packet to be fragmented before it is sent to another host. I found explanations online saying that tcpdump captures packets before the NIC breaks them down. This seems valid, but it appears to contradict my case, because on machine A I also received packets larger than 2154 bytes from B. Any thoughts on why machine A is sending and receiving packets larger than the configured MTU?
What you are seeing is most likely the result of TCP segmentation and reassembly offloading (TSO on transmit, LRO/GRO on receive). This is a feature available on some network cards with matching drivers.
The idea is that splitting and reassembling TCP segments is handled in the NIC itself. This turns out to be quite effective in reducing overhead on the CPU/OS side, since the network driver only handles perhaps 1 "packet" out of 10, seeing one large packet rather than receiving and reassembling all 10.
You can read more about it under the terms "TCP segmentation offload" and "large receive offload".
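(On Linux you can typically confirm this with ethtool: "ethtool -k eth0" lists the offload settings, and "ethtool -K eth0 tso off gro off" turns segmentation/receive offload off so that the capture shows wire-size packets; "eth0" here stands for your actual interface name.)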
Updated answer
If your packet is UDP:
This behaviour is normal, but there is not much you can do to see the individual fragments on the end machines. The oversized UDP datagram is broken down into MTU-compliant fragments and reassembled at the receiving end; when this work is offloaded to the NIC, it happens below the point where Wireshark/pcap captures, so you only ever see the full-size datagram.
If you want to capture the individual fragments, you have to do it on an intermediate machine or network card, for example a gateway between the two hosts, because the original UDP datagram is not reassembled until it reaches its final destination. Note: this gateway can be virtual.
See notes.shichao.io/tcpv1/ch10
Leaving this here in case someone with the same problem comes along...
If your packet is TCP:
It sounds like Wireshark is reassembling packets for you; this is often the default for TCP streams. You can change this by right-clicking a stream -> Protocol Preferences -> "Allow subdissector to reassemble TCP streams".

UDP small-packet (inside MTU) slack bytes during transmission?

Background: coding multiplayer for a simulator (Windows, .NET), using peer-to-peer UDP transmission. This question is not about the advantages of UDP vs. TCP, nor about packet headers.
Consider: I send a UDP packet with payload size X, where X can be anything between 1 and 500 bytes.
Q: Will there, or can there, at any point during the transmission, temporarily be slack bytes added to the packet, i.e. bytes in addition to the needed headers and payload? For example, could any participant in the transmission path (Windows OS - NAT - internet - NAT - Windows OS) add bytes to fill a certain block size, so that these added bytes become part of the transmission (even though they are cut off later on) and are actually transmitted, consuming processor (switch, server CPU) cycles?
(The reason for asking is, of course, to decide how much effort to spend on composing/decomposing the packet :-). Squeeze it to the last bit (smaller packets, more local CPU cycles) vs. letting the packet be partially self-describing (bigger packets, less local CPU). Note that the packet size is always less than the nearest MTU that I know of, which is normally close to 1500 bytes.)
Thx!
The short answer is: Yes.
Take Ethernet as an example. For collision-detection purposes, the minimum payload of an Ethernet frame is 46 bytes (the 64-byte minimum frame, minus the 14-byte header and the 4-byte FCS). If the payload (which here includes the application data plus the UDP and IP headers) is less than that, padding is added to the Ethernet frame.
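For example, a 1-byte UDP payload gives 20 (IP header) + 8 (UDP header) + 1 = 29 bytes of Ethernet payload, so 17 padding bytes are added to reach the 46-byte minimum; the frame on the wire is then 14 + 46 + 4 = 64 bytes.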
Also, as far as I know, the network card does this job, not its driver or the OS.
If you want to decide whether it is better to send small packets right away or to wait and send bigger ones, take a look at Nagle's algorithm (a TCP mechanism, but the trade-off it addresses is the same one you are weighing).
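If you do end up on TCP and want small writes sent immediately, the knob that disables Nagle's coalescing is TCP_NODELAY. A POSIX sketch (in .NET the equivalent is the Socket.NoDelay property):

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm on an already-connected TCP socket, so that
// small writes go out immediately instead of being coalesced.
void disableNagle(int sock) {
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```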
Here ou can see the Ethernet padding in practice:
What are the 0 bytes at the end of an Ethernet frame in Wireshark?

Buffer Overflow - Discarding packets unicast vs broadcast

I have a small confusion and curiosity.
When a buffer is full, how does the respective OSI layer decide which packets to start removing? Does it discriminate between broadcasts and unicasts?
(For specifics if required, 802.11g and 802.15.4)
I remember reading in a paper that it starts by discarding unicast packets first, but I can't find a credible source on the subject.
Thank you for your time.
Best regards,
Rehan
Context
I am trying to highlight the inherent differences between how broadcast packets are handled vs. unicast packets. Namely:
1) Unlike unicast packets, broadcast packets don't require RTS/CTS
2) Unlike unicast packets, broadcast packets are not addressed to a specific destination (they carry the broadcast address)
3) In an event of full buffer,... ?
