ESP32: Sending data over WiFi

I want to connect a Lidar sensor to my ESP32 and send the results to my laptop over WiFi. The sensor takes readings at a rate of roughly 1,500/sec, so I am looking to send a reading consisting of a distance in cm and an angle (e.g. 40123,180.1).
I would like to send the data as near to real time as possible. I have successfully tried sending data using both WebSockets and server-sent events, but I cannot get anywhere near the speed needed to send individual readings; the asyncServer gives an error message saying that too many messages are queued.
So my question is: is there another protocol that I can use on the ESP32 which will achieve this? If so, could anyone give me a simple example to test?
Thanks.

If you encode your distance and angle values as an integer (4 B) and a float (4 B), your payload data would be in the neighbourhood of 8 * 1500 = 12 kB/s, which is not too demanding. The bottleneck, I suspect, would be the number of packets per second that the ESP and your WiFi equipment and environment can handle; I wouldn't be surprised if 1,500 packets per second is too much to ask of it. Batch your readings into as few packets as your latency requirements allow and then send. E.g. batching 50 samples into a single packet would mean sending 30 packets per second, each carrying 400 B of payload. The latency for the oldest sample in a packet would then be roughly 33 ms plus network latency (which adds another 1-1000 ms depending on how busy your ether is).
As for the protocol, UDP is usually used for low-latency, low-jitter stuff. Mind you, it doesn't handle packet loss in any way: once a packet is lost, it's lost. Also, packets might arrive out of order. If you don't like that, use TCP, which recovers from packet loss and preserves ordering. The price of TCP is significant added latency and jitter as the protocol slowly discovers and re-transmits lost packets, delaying the reception of newer data on the receiver side.
As for an example, google for "posix udp socket tutorial".
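To make the batching idea concrete, here is a minimal sketch, assuming the Arduino-ESP32 core and its WiFiUDP class. The SSID, password, destination IP/port and the readLidar*() helpers are placeholders, not something from the question; it simply packs 50 readings (8 bytes each) into one 400-byte UDP packet, giving roughly 30 packets per second at 1,500 readings/sec.

#include <WiFi.h>
#include <WiFiUdp.h>

const char*     ssid       = "your-ssid";        // placeholder
const char*     password   = "your-password";    // placeholder
const IPAddress laptopIp(192, 168, 1, 50);       // placeholder: your laptop's IP
const uint16_t  laptopPort = 4210;               // placeholder port

WiFiUDP udp;

// One reading: distance in cm (int32) + angle in degrees (float) = 8 bytes.
struct Reading {
  int32_t distanceCm;
  float   angleDeg;
};

const int kBatch = 50;        // 50 readings per packet -> ~30 packets/s at 1500 Hz
Reading batch[kBatch];
int     count = 0;

// Placeholder stubs: replace with your actual sensor-reading code.
int32_t readLidarDistanceCm() { return 0; }
float   readLidarAngleDeg()   { return 0.0f; }

void setup() {
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) delay(100);
}

void loop() {
  batch[count].distanceCm = readLidarDistanceCm();
  batch[count].angleDeg   = readLidarAngleDeg();
  if (++count == kBatch) {
    udp.beginPacket(laptopIp, laptopPort);
    udp.write(reinterpret_cast<const uint8_t*>(batch), sizeof(batch));  // 400-byte payload
    udp.endPacket();
    count = 0;
  }
}

On the laptop side, any plain UDP listener bound to the same port can unpack the packets back into 8-byte records.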

Related

UDP small-packet (inside MTU) slack bytes during transmission?

Background: Coding multiplayer for a simulator (Windows, .NET), using peer-to-peer UDP transmission. This Q is not about the advantages of UDP vs. TCP, nor about packet headers. A related discussion to this Q is here.
Consider: I send a UDP packet with payload size X, where X can be anything between 1 and 500 bytes.
Q: Will/can slack bytes be added to the packet at any point during the transmission, i.e. bytes in addition to the needed headers/payload? For example, could any participant in the transmission (Windows OS - NAT - internet - NAT - Windows OS) add bytes to fill a certain block size, so that these added bytes become part of the transmission (even though they are cut off later) and are actually transmitted, thus consuming processor (switch, server CPU) cycles?
(The reason for asking is, of course, how much effort to spend on composing/decomposing the packet :-) : squeeze it to the last bit (smaller packet, more local CPU cycles) vs. allow the packet to be partially self-describing (bigger packet, less local CPU). Note that the packet size is always less than the nearest MTU that I know of, normally close to 1500 bytes.)
Thx!
The short answer is: Yes.
Take Ethernet as an example. For collision-detection purposes, the minimum payload size of an Ethernet frame is 46 bytes (42 bytes when an 802.1Q VLAN tag is present). If the payload (which here includes the application data plus the UDP and IP headers) is less than that, padding is added to the Ethernet frame.
Also, as far as I know, the network card does this job, not its driver or the OS.
If you want to decide whether it is better to send small packets or wait and send bigger ones, take a look at Nagle's algorithm.
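If you go the TCP route and decide that you do want small packets sent immediately rather than coalesced, the usual knob is TCP_NODELAY. Below is a minimal sketch, assuming POSIX sockets on Linux/macOS; it is not from the question, just an illustration of turning Nagle's algorithm off.

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   // TCP_NODELAY
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int flag = 1;  // 1 = disable Nagle's coalescing, 0 = leave it on
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return 1;
    }
    // ... connect() and send() small messages as usual ...
    return 0;
}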
Here you can see Ethernet padding in practice:
What are the 0 bytes at the end of an Ethernet frame in Wireshark?

How much memory is occupied by a valid SIP request, and how can I monitor it before forwarding to the server?

I am going to do a project related to intrusion detection in which I need to monitor the memory load of a SIP request before forwarding it to the server, in order to determine that the specific request is valid (not a flooding attack on the server) and will not cause DoS. Any information related to memory monitoring of incoming requests, algorithms, and related articles will be appreciated.
What you are looking for is normally described as a Session border controller.
As for the normal packet size, SIP normally uses UDP, so its packets are normally under 1400 bytes. If a packet exceeds this (and it can with a large SDP payload), SIP normally switches over to TCP automatically.
Most SIP packets, though, are pretty small on average.

How to set/ get the time spent on packets fragmentation in ns-3 models?

If I transfer packets through multiple subnets which have different MTUs on the routers, they may be fragmented. How can I get or set the time spent on each fragmentation operation in ns-3 models? I need this to calculate the speed.
What you are asking is unclear to me but let me try to answer.
If you want to measure the CPU time it takes ns-3 to create fragments and reassemble them, you can run a simple 2-node experiment and change the MTU of the outbound network interface of the sending node to see how much wall-clock time is spent fragmenting vs. not fragmenting.
If, on the other hand, you want to measure the effect in terms of simulation time of splitting a packet into multiple packets and performing the MAC-level access function for each fragment, it's just a function of:
- the access function used at the MAC level. If you want to model switched Ethernet, it's easy: make it zero.
- the transmission delay through your medium. If Ethernet, it's easy again: it's the length of the cable divided by the speed of the electromagnetic waves in your cable, which depends on the quality of the cable.
- the size of the fragment and the throughput of your medium.
Basically, if you know how many times the packet will be fragmented (multiple routers can consecutively fragment the packet into smaller fragments) and which MTU is used each time, you can trivially build an analytical model of that process and predict the actual transport-level transmission delay of your packets through the simulation.
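As a rough illustration of such an analytical model (this is plain C++, not ns-3 code), the sketch below estimates the delay of one packet fragmented at a single narrow link; the payload size, MTU, link rate and cable length are made-up example numbers, and queuing is ignored.

#include <cmath>
#include <cstdio>

int main() {
    const double payloadBytes   = 4000.0;     // original IP payload (example)
    const double mtuBytes       = 1500.0;     // MTU of the narrow link (example)
    const double ipHeaderBytes  = 20.0;       // fixed IPv4 header per fragment
    const double linkBitsPerSec = 100e6;      // 100 Mbit/s link (example)
    const double propDelaySec   = 100.0 / 2e8; // ~100 m of cable at ~2e8 m/s

    // Each fragment carries at most (MTU - IP header) payload bytes,
    // rounded down to a multiple of 8 as IPv4 fragmentation requires.
    double maxFrag = std::floor((mtuBytes - ipHeaderBytes) / 8.0) * 8.0;
    int nFrags = static_cast<int>(std::ceil(payloadBytes / maxFrag));

    // Serialization time of all fragments (extra header per fragment)
    // plus one propagation delay, since fragments are pipelined on the link.
    double txSec = (payloadBytes + nFrags * ipHeaderBytes) * 8.0 / linkBitsPerSec;
    double total = txSec + propDelaySec;

    std::printf("%d fragments, %.6f s total delay\n", nFrags, total);
    return 0;
}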

Sending UDP datagram without fragmentation

I want to send a broadcast message to all peers on my local network. The message is a 32-bit integer. I can be sure that the message will not be fragmented, right? There would be two options:
- peer will receive whole message at once
- peer will not receive message at all
Going further, what is the maximum size of data that can be sent in one UDP datagram without fragmentation? I use an Ethernet-based network in 99% of cases.
IPv4 specifies a minimum supported MTU of 576 bytes, including the IP header. Your 4 byte UDP payload will result in an IP packet far smaller than this, so you need not fear fragmentation.
Furthermore, your desired outcome - "peer will receive the whole message at once or peer will not receive the message at all" - is always how UDP works, even in the presence of fragmentation. If a fragment doesn't arrive, your app won't receive the packet at all.
The rules for UDP are "The packet may arrive out-of-order, duplicated, or not at all. If the packet does arrive, it will be the whole packet and error-free.". ("Error-free" is obviously only true within the modest limits of the IP checksum).
Ethernet frames can carry up to around 1500 bytes of payload (not counting jumbo frames). If you send broadcast messages with a payload of only 4 bytes, they shouldn't get fragmented at all. Fragmentation only occurs when the packet is larger than the Maximum Transmission Unit (about 1500 bytes over Ethernet).
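For completeness, here is a minimal sketch of the scenario in the question, assuming POSIX sockets; the limited-broadcast address and port 9999 are placeholders. A 4-byte payload yields a 32-byte IP packet (20 IP + 8 UDP + 4 data), far below any Ethernet MTU, so no fragmentation occurs.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int on = 1;  // broadcasting must be enabled explicitly
    if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)) < 0) {
        perror("setsockopt(SO_BROADCAST)");
        return 1;
    }

    sockaddr_in dst{};
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(9999);                        // placeholder port
    dst.sin_addr.s_addr = inet_addr("255.255.255.255");       // limited broadcast

    uint32_t msg = htonl(42);  // the 32-bit message, in network byte order
    if (sendto(fd, &msg, sizeof(msg), 0,
               reinterpret_cast<sockaddr*>(&dst), sizeof(dst)) < 0) {
        perror("sendto");
        return 1;
    }
    return 0;
}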

Using recvfrom() with raw sockets : general doubt

I have created a raw socket which receives all IPv4 packets from the data link layer (with the data link layer header removed). For reading the packets I use recvfrom().
My doubt is:
Suppose that, due to some scheduling done by the OS, my process was asleep for 1 second. When it woke up, it called recvfrom() on this raw socket with a buffer size of, say, 1000 bytes, intending to receive only one IPv4 packet (say this packet is 380 bytes). Suppose many network applications were also running during this time, so several IPv4 packets have been queued in the socket's receive buffer. Won't recvfrom() now return all 1000 bytes (with other IPv4 packets from the 381st byte onwards), because it has enough data in its buffer, even though my program was meant to understand only one IPv4 packet?
So how do I prevent this? Should I read byte by byte and parse each one? That would be very inefficient.
IIRC, recvfrom() will only return one packet at a time, even if there are more in the queue.
Raw sockets operate at the packet layer, there is no concept of data streams.
You might be interested in recvmmsg() if you want to read multiple packets in one system call. It is available on recent Linux kernels only; the send-side counterpart is sendmmsg().
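Here is a minimal sketch of recvmmsg() on a raw IPv4 socket, assuming Linux with glibc (the protocol choice and batch size are arbitrary, and the program needs CAP_NET_RAW/root). Note how each mmsghdr receives exactly one packet and msg_len reports its true size, so packet boundaries are preserved even when several packets are queued.

#define _GNU_SOURCE
#include <sys/socket.h>
#include <netinet/in.h>
#include <cstdio>
#include <cstring>

int main() {
    // Raw socket delivering IPv4 packets that carry TCP, IP header included.
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
    if (fd < 0) { perror("socket (needs root)"); return 1; }

    constexpr int kBatch = 8;
    char bufs[kBatch][2048];
    struct iovec   iov[kBatch];
    struct mmsghdr msgs[kBatch];
    std::memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < kBatch; ++i) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = sizeof(bufs[i]);
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    // One system call, up to kBatch packets; each slot holds one whole packet.
    int n = recvmmsg(fd, msgs, kBatch, 0, nullptr);
    if (n < 0) { perror("recvmmsg"); return 1; }
    for (int i = 0; i < n; ++i)
        std::printf("packet %d: %u bytes\n", i, msgs[i].msg_len);
    return 0;
}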
