I use UDP (over a Wi-Fi network) to transmit packets (the car's velocity information) from a PC to the car's controller. However, during transfer there is always a delay, sometimes as much as 1 second. Could a bug in my UDP connection implementation be causing these delays?
Besides, is there any way to reserve bandwidth for that connection so that no delay occurs?
If delays are unavoidable, how can I skip old packets in the receive buffer and just get the latest one?
My system is as follows:
1. Server: Camera + PC that detects the car's position and calculates its velocity.
2. Client: Car that receives its velocity from the PC and then adjusts its velocity via its PID controller.
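For the third question (skipping stale data), one possible approach, sketched here under the assumption that the car's controller exposes a POSIX/BSD socket API, is to drain the receive queue without blocking and keep only the newest datagram; the function name and the idea of reusing the previous command when nothing is pending are illustrative assumptions.

/* Sketch: drain the UDP receive queue and keep only the newest datagram.
 * Assumes a POSIX/BSD socket API on the receiver. */
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t read_latest(int sock, void *buf, size_t len)
{
    ssize_t n, last = -1;

    /* Keep reading without blocking until the queue is empty;
       every older datagram read here is overwritten by a newer one. */
    while ((n = recv(sock, buf, len, MSG_DONTWAIT)) >= 0)
        last = n;

    if (last < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
        return -1;   /* real error, not just "queue empty" */

    return last;     /* -1 if nothing was pending, else size of newest datagram */
}

In the control loop, a return value of -1 would simply mean "no new velocity command this cycle", so the controller can keep using the previous setpoint.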
I made an Ethernet PCB passthrough circuit board that has an input connector, the correct magnetics to isolate the signal on both ends, and an output connector. I'd like to test my board by sending data into it and measuring the packets on the output to make sure nothing is being lost.
I have Wireshark on my computer, but I'm not sure how to inject a certain number of packets into the input or how to measure them on the output to look for lost packets. I can connect laptops on both ends, but there is no IP address on my board; it's just hardware.
I want to connect a Lidar sensor to my ESP32 and send the results to my laptop over WiFi. The sensor can take readings at approximately 1,500/sec, so I am looking to send a reading consisting of a distance in cm and an angle (e.g. 40123, 180.1).
I would like to send the data as near to real time as possible. I have successfully tried sending data using both websockets and server-sent events, but I cannot get anywhere near the speed needed to send single readings; the asyncServer gives an error message saying that too many messages are queued.
So my question is: is there another protocol that I can use on the ESP32 which will achieve this? If so, could anyone give me a simple example to test?
Thanks.
If you encode your distance and angle values as an integer (4 B) and a float (4 B), then your payload data would be in the neighbourhood of 8 * 1500 = 12 kB/s, which is not too demanding. The bottleneck, I suspect, would be the number of packets per second that the ESP and your WiFi equipment+environment can handle. I wouldn't be surprised if 1500 packets per second is too much to ask of it. Batch your readings into as few packets as your latency requirements allow and then send. E.g. batching 50 samples into a single packet would mean sending 30 packets per second, carrying 400 B of payload each. The latency for the oldest sample in your packet would then be roughly 33 ms + network latency (which adds another 1-1000 ms depending on how busy your ether is).
As for the protocol, UDP is usually used for low-latency, low-jitter stuff. Mind you, it doesn't handle packet loss in any way: once a packet is lost, it's lost. Also, packets might arrive out of order. If you don't like that, use TCP, which recovers from packet loss and delivers data in order. The price of TCP is significant added latency and jitter as the protocol slowly discovers and retransmits lost packets, delaying the reception of newer data on the receiver side.
As for an example, google for "posix udp socket tutorial".
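To make the batching idea concrete, here is a minimal sketch of a UDP sender in plain POSIX C (the ESP32's lwIP stack exposes a similar BSD-style socket API). The sample struct, the batch size of 50, the destination IP/port and the read_lidar() stub are illustrative assumptions, not part of the answer above.

/* Minimal UDP sender that batches 50 readings per datagram.
 * Struct layout, port, IP and the lidar stub are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

#define BATCH 50                       /* samples per datagram */

struct sample {                        /* 8 bytes per reading; byte order
                                          conversion omitted for brevity */
    uint32_t distance_mm;
    float    angle_deg;
};

static struct sample read_lidar(void)  /* stub: replace with real sensor I/O */
{
    struct sample s = { .distance_mm = 40123, .angle_deg = 180.1f };
    return s;
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5005);                        /* assumed port */
    inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr); /* assumed laptop IP */

    struct sample batch[BATCH];
    size_t count = 0;

    for (;;) {
        batch[count++] = read_lidar();

        if (count == BATCH) {                 /* ~30 sends/s at 1500 readings/s */
            sendto(sock, batch, sizeof batch, 0,
                   (struct sockaddr *)&dst, sizeof dst);
            count = 0;
        }
    }
    close(sock);
    return 0;
}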
I'm trying to do the following:
a) Set Contiki in promiscuous mode.
b) Then retrieve all UDP and RPL packets sent, not only to the current node but also between two other nodes within communication range.
I have the following code:
/* Clear all RX-mode flags on the radio (intended as promiscuous mode) */
NETSTACK_RADIO.set_value(RADIO_PARAM_RX_MODE, 0);

/* Listen on local UDP port 3001 for packets from remote port 3000 */
simple_udp_register(&unicast_connection, 3001,
                    NULL, 3000, receiver);
where receiver is an appropriate callback function. I am able to collect UDP packets sent to the current node, but I am still unable to receive packets sent between other nodes in my communication range.
Setting RADIO_PARAM_RX_MODE only controls which packets the radio driver filters out. There are multiple layers in an OS network stack, of which the radio driver is only the first one. The next ones are the RDC and MAC layers, which still filter out packets addressed to other nodes, and there is no API to disable that.
The solution is either to modify the MAC to disable the dropping of packets not addressed to the local node, or to write your own simple MAC. The latter is what Sensniff (the Contiki packet sniffer) does; see its README and source code. By the way, if you just want to log all received packets, just use Sensniff!
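For the radio-driver part specifically, a slightly safer variant than overwriting the whole RX mode with 0 is to clear only the address-filter (and, if desired, auto-ACK) bits, as sketched below. This is only the radio layer; per the answer above, the RDC/MAC layers will still drop foreign packets unless you modify them or use a Sensniff-style MAC.

/* Sketch: clear only the address-filter and auto-ACK bits of the RX mode.
 * Assumes a radio driver implementing the get_value/set_value API and
 * #include "net/netstack.h" and "dev/radio.h". */
radio_value_t rx_mode;

if (NETSTACK_RADIO.get_value(RADIO_PARAM_RX_MODE, &rx_mode) == RADIO_RESULT_OK) {
  rx_mode &= ~(RADIO_RX_MODE_ADDRESS_FILTER | RADIO_RX_MODE_AUTOACK);
  NETSTACK_RADIO.set_value(RADIO_PARAM_RX_MODE, rx_mode);
}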
I'm developing a realtime game using Sprite Kit and Game Kit. The game features a multiplayer mode where 4 players can play with each other. I've been reading the Game Kit programming guide and came across the following passage:
Although the GKMatch object creates a full peer-to-peer connection between all the participants, you can reduce the network traffic by layering a ring or client-server networking architecture on top of it. Figure 8-1 shows three possible network topologies for a four-player game. On the left, a peer-to-peer game has 12 connections between the various devices. However, you could layer a client-server architecture on top of this by nominating one of the devices to act as the host. If your game transmits to or from the host only, you can halve the number of connections. A ring architecture allows devices to forward network packets to the next device only, but further reduces the number of connections. Each topology provides different performance characteristics, so you will want to test different models to find one that provides the performance your game requires.
So here is where I am confused. Currently in my game I have implemented the peer-to-peer topology, where each player sends their position to every other player in the game. This ends up totaling 12 messages being sent, because each player sends 3 messages.
However, according to the documentation, if I layer a client-server topology over my game, I can reduce the network traffic by reducing the number of connections. If I do this, though, then each client will send their position to the host, and the host would then need to relay those positions to the remaining clients. So now one player (the host) has to do extra work because the clients no longer communicate with each other. And we still end up with 12 messages: the host sends 9 messages (3 messages for its own position, one to each player, plus 6 messages for relaying the other clients' positions), and each client sends 1 position message to the host. 9 + 1 + 1 + 1 = 12 messages. Which makes sense; all we did was unevenly distribute the message sending, so now one player needs to work harder to make up for the reduced work the other players are doing.
Furthermore, relaying the client messages takes additional time, because each client's position now needs to pass through the host.
So while there are now fewer connections, one player is sending more messages (9 messages) rather than each player evenly distributing the workload (i.e. each player sending 3 messages). This seems like it would lead to a greater chance of disconnects, because it will be easier for the host to drop out of the match.
So can someone explain to me how network traffic gets reduced by layering a client-server topology on top? Does merely having fewer connections in the match reduce network traffic even though the overall number of messages is the same? Keep in mind, there is no dedicated server here; I (and the documentation) am talking about layering a client-server topology on top of the peer-to-peer match. Also, isn't the host at greater risk of disconnecting, since he is sending 3x as many messages as the other players? After all, GKMatch will disconnect a player after a brief period of packet loss. Or does simply having 12 connections create a greater chance of disconnects because of the supposedly increased traffic?
I am sorry for the very short answer to a very descriptive and well-written question, but the answer is simple: the server (you used the term "host", but that is confusing) does not have to send 3 separate messages to each client. The server collects all the information and sends just one message containing all of it to each client.
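As a hedged illustration (plain C structs, not the GameKit API; all field names are made up), the aggregated layout might look like the sketch below. Per update cycle each of the 3 clients sends one small update to the server, and the server answers each client with a single snapshot carrying all four positions, so the total is 3 + 3 = 6 messages instead of 12.

/* Illustration only: an aggregated state message, not GameKit API. */
#include <stdint.h>

struct player_state {
    uint8_t player_id;
    float   x, y;               /* position */
};

/* One client -> server update: a single player's own state. */
struct client_update {
    struct player_state self;
};

/* One server -> client message: the whole match in one packet. */
struct server_snapshot {
    uint32_t            tick;        /* server frame counter */
    struct player_state players[4];  /* all four players at once */
};

/* Per cycle: 3 client_update messages in, 3 server_snapshot messages out,
 * i.e. 6 messages total instead of 12 in the full peer-to-peer topology. */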
While using the default (blocking) behavior on a UDP socket, in which cases will a call to sendto() block? I'm essentially interested in the Linux behavior.
For TCP I understand that congestion control makes the send() call block if the sending window is full, but what about UDP? Does it ever block, or does it just let packets get discarded at lower layers?
This can happen if you have filled up your socket send buffer, but it is highly operating-system dependent. Since UDP does not provide any guarantees, your operating system can decide to do whatever it wants when your socket buffer is full: block or drop. You can try increasing SO_SNDBUF for temporary relief.
This can even depend on the fine-tuning of your system; for instance, it can also depend on the size of the TX ring in the driver of your network interface. There are a few discussions about this on the iperf mailing list, but you really want to discuss this with the developers of your operating system. Pay special attention to O_NONBLOCK and EAGAIN / EWOULDBLOCK.
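As a concrete sketch under Linux/POSIX assumptions (the 1 MiB buffer size is arbitrary): enlarge SO_SNDBUF and switch the socket to non-blocking mode, so that a full buffer yields EAGAIN/EWOULDBLOCK from sendto() instead of blocking.

/* Sketch: enlarge the send buffer and make the socket non-blocking. */
#include <fcntl.h>
#include <sys/socket.h>

int tune_udp_socket(int sock)
{
    int sndbuf = 1 << 20;       /* 1 MiB request; the kernel may clamp/adjust it */
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf) < 0)
        return -1;

    int flags = fcntl(sock, F_GETFL, 0);
    if (flags < 0 || fcntl(sock, F_SETFL, flags | O_NONBLOCK) < 0)
        return -1;

    return 0;   /* caller must now handle EAGAIN/EWOULDBLOCK from sendto() */
}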
This may be because your operating system is attempting to perform an ARP request in order to get the hardware address of the remote host.
Basically, whenever a packet goes out, the frame requires both the IP address of the remote host and the MAC address of the remote host (or of the first gateway used to reach it), e.g. 192.168.1.34 and AB:32:24:64:F3:21.
Your "blocking" behavior could simply be ARP resolution at work.
I've heard that in older versions of Windows (2000, I think) the first packet would sometimes get discarded if the ARP request took too long and you were sending out too much data. A service pack has probably fixed that since then.
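If you suspect ARP resolution is the culprit, one hedged workaround is to send a small throw-away datagram to the destination before the latency-critical traffic starts, so the kernel resolves the MAC address ahead of time; the payload and the helper name below are arbitrary.

/* Sketch: prime the ARP cache with one dummy datagram before real traffic. */
#include <sys/socket.h>

static void warm_up_arp(int sock, const struct sockaddr *dst, socklen_t dstlen)
{
    const char probe[] = "warmup";
    /* The first packet may be queued or dropped while ARP resolves, which
       is exactly why we do not want it to be a real data packet. */
    (void)sendto(sock, probe, sizeof probe, 0, dst, dstlen);
}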