I'm sending messages upon detection of both discrete and continuous gestures. For continuous gestures, UDP should be fine: even if a couple of packets are lost, there are so many change events that it shouldn't matter.
I'm wondering about discrete events though, for example tap or swipe. Since there would only be a single packet sent, what is the risk that it doesn't arrive, and the application at the other end isn't notified of the gesture?
I understand that TCP guarantees delivery, but I'm thinking it might be too much overhead for the high frequency of messages generated from continuous gestures.
If your only concern with TCP is the extra overhead, then I wouldn't worry too much. Certainly, TCP has more overhead than UDP. However, the overhead isn't that much, especially for the modest amount of data you are likely to be sending. Some quick back-of-the-envelope calculations:
Assume you want to send status information every millisecond. (Likely more often than you really need to.)
Assume your individual messages can easily fit within 50 bytes / each. (Likely larger than you really need.)
Total bandwidth: 50 bytes/ms = 400 bits/ms = 400 kbps
Even with these larger-than-necessary messages, and faster-than-needed updates, your total bandwidth is only around 5% of a slowish 802.11b wireless network. The extra overhead of TCP isn't likely to make a big difference here.
Personally, I tend to stick with TCP unless I have a strong reason not to. Sure, you could save some extra bits by using UDP, but to me, having reliable delivery (including correctly ordered, non-duplicated data) is worth the extra overhead. One less thing to worry about.
EDIT: TCP does have some other drawbacks. In particular, it can take a bit more programming effort to create the initial connection, and to parse individual messages from the byte stream. UDP can certainly make these tasks easier. However, you didn't list programming complexity as one of your criteria, so I focused on your overhead questions instead.
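For example, one common way to parse individual messages out of the TCP byte stream is a small length prefix. A minimal sketch in Python (the 2-byte length prefix and helper names are just illustration, not part of any particular library):

import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # Prefix each message with a 2-byte big-endian length so the receiver
    # can find message boundaries in the byte stream.
    sock.sendall(struct.pack("!H", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP is a stream: a single recv() may return fewer bytes than asked
    # for, so keep reading until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!H", recv_exact(sock, 2))
    return recv_exact(sock, length)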
LATENCY: As noted in comments below, latency is a critical factor. In this case, UDP has some significant advantages: if a packet is dropped, TCP will wait for that packet to be retransmitted before delivering any later data to the application. The benefit of this approach is that data are guaranteed to arrive in the original order. (A big plus when messages must be processed sequentially.) The drawback, of course, is that there can be a significant delay before new data arrives.
UDP, on the other hand, will continue sending subsequent packets even if one is dropped. The good news is that UDP reduces the latency of the remaining packets. The bad news is that you must add some sort of "retry" in case discrete events are lost; if you do so, you now have cases of gestures arriving out of order, perhaps significantly so. Can your app handle a case where a "move then click" gets changed to a "click then move"? If you choose to go the UDP route, you'll need to carefully think through all these cases to make sure they won't cause (perhaps subtle) problems in your app.
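To make that trade-off concrete, here's a rough sketch of what an application-level "retry" for discrete events over UDP might look like (the sequence-number header, timeout, and retry count are illustrative assumptions, not a complete protocol):

import socket
import struct

def send_discrete_event(payload: bytes, addr, seq: int,
                        retries: int = 5, timeout: float = 0.05) -> bool:
    # Resend a discrete gesture (tap, swipe, ...) until the receiver ACKs
    # its sequence number, or until we give up.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    packet = struct.pack("!I", seq) + payload
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True          # receiver confirmed this event
        except socket.timeout:
            continue                 # no ACK yet; resend
    return False                     # give up; the event may be lost

The receiver would also need to send back the ACKs, drop duplicate sequence numbers, and decide what to do with events that arrive out of order, which is exactly the "move then click" problem described above.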
I've seen a couple examples like this:
service Service {
  rpc upload(stream Data) returns (google.protobuf.Empty) {};
  rpc download(google.protobuf.Empty) returns (stream Data) {};
}

message Data { bytes bytes = 1; }
What is the purpose of using stream, does it make the transfer more efficient?
In theory - yes - I obviously want to stream my file transfers, but that's what happens over a connection anyway... So, what is the actual benefit of this keyword? Does it enforce some form of special buffering to reduce some overhead? Either way, the data is being transmitted in full!
It's more efficient because, within a single call, multiple messages may be sent.
This avoids not only re-establishing another connection with the server (hopefully TLS, i.e. even more work), but also spinning up client and server "stubs" again; both the client and server stay ready for more messages.
It's somewhat similar to being connected on a telephone call with your friend who, before hanging up, says "Oh, another thing...". Instead of hanging up the call and then, 5 minutes later, calling you back, interrupting dinner and causing you to pause a movie.
The answer is very similar to the gRPC + Image Upload question, although from a different perspective.
Doing a large download (10+ MB) as a single response message puts strong limits on the size of that download, as the entire response message is sent and processed at once. For most use cases, it is much better to chunk a 100 MB file into 1-10 MB chunks than require all 100 MB to be in memory at once. That also allows the downloader to begin processing the file before the entire file is acquired which reduces processing latency.
Without streaming, chunking would require multiple RPCs, which are annoying to coordinate and have performance complications. Because there is latency to complete RPCs, for reasonable performance you either have to do many RPCs in parallel (but how many?) or have a large batch size (but how big?). Multiple RPCs can also hit colder application caches, as each RPC goes to a different backend.
Using streaming provides the same throughput as the non-chunking approach without as many headaches of normal chunking approaches. Since streaming is pipelined (server can start sending next chunk as soon as previous chunk is sent) there's no added per-chunk latency between the client and server. This makes it much easier to choose a chunk size, as there is a wide range of "reasonable" sizes that will behave similarly and the system will naturally react as network performance varies.
While sending a message on an existing stream has less overhead than creating a new RPC, for many users the difference is negligible, and it is generally better to structure your RPCs in a way that is architecturally beneficial to your application rather than just to eke out small optimizations in gRPC. The reason to use the stream in this case is to make your application perform better at lower complexity.
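As a rough illustration of the chunking, a client can feed the upload stream from a generator so the whole file is never in memory at once. A sketch in Python, assuming stubs generated from the proto above as service_pb2 / service_pb2_grpc and an arbitrary 1 MB chunk size:

import grpc
import service_pb2
import service_pb2_grpc

CHUNK_SIZE = 1024 * 1024  # 1 MB per message, well within default gRPC limits

def read_chunks(path):
    # Yield one Data message per chunk; the server can start processing
    # the first chunk before the last one has even been read from disk.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            yield service_pb2.Data(bytes=chunk)

def upload(path, target="localhost:50051"):
    with grpc.insecure_channel(target) as channel:
        stub = service_pb2_grpc.ServiceStub(channel)
        stub.upload(read_chunks(path))  # one RPC, many streamed messages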
So I'm curious: is there any info on a router's config page or manual, or is there any way to measure its buffer? (I'm talking about the "memory" it has to keep packets until the router can transmit them.)
You should be able to measure your outgoing buffer by sending lots of (UDP) data out, as quickly as possible, and see how much goes out before data starts getting dropped. You will need to send it really fast, and have something at the other end to capture it as it comes in; your send speed has to be a lot faster than your receive speed.
Your buffer will be a little smaller than that, since in the time it takes you to send the data, at least the first packet will have left the router.
Note that you will be measuring the smallest buffer between you and the remote end; if you have, for example, a firewall and a router in two separate devices, you don't really know which buffer you are testing.
Your incoming buffer is a lot more difficult to test, since you can't fill it fast enough from the internet side: data arrives over the slower internet link and is delivered onward over the faster local link, so the buffer never builds up.
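A rough sketch of the outgoing-buffer measurement, assuming you control a host on each side and an arbitrary free UDP port (run blast() on the inside host and count_received() on the outside host):

import socket
import struct

PACKETS = 10000
PAYLOAD = b"x" * 1000          # ~1 KB per packet (arbitrary size)
PORT = 9999                    # arbitrary free UDP port

def blast(dest_ip: str) -> None:
    # Send numbered packets as fast as possible, much faster than the
    # uplink can drain them, so the router's buffer fills and overflows.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(PACKETS):
        sock.sendto(struct.pack("!I", seq) + PAYLOAD, (dest_ip, PORT))

def count_received() -> None:
    # Record which sequence numbers arrive; the first gap gives a rough
    # idea of how much data the smallest buffer on the path absorbed.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(5.0)
    seen = set()
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            seen.add(struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        pass
    first_gap = next((i for i in range(PACKETS) if i not in seen), PACKETS)
    print("packets delivered before the first drop:", first_gap)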
We have a critical need to lower the latency of our UDP listener on iOS.
We're implementing an alternative to RTP-MIDI that runs on iOS but relies on a simple UDP server to receive MIDI data. The problem we're having is that RTP-MIDI is able to receive and process messages around 20ms faster than our simple UDP server on iOS.
We wrote 3 different code bases in order to try and eliminate the possibility that something else in the code was causing the unacceptable delays. In the end we concluded that there is a lag between the time when the iPad actually receives a packet and when that packet is actually presented to our application for reading.
We measured this with a scope. We put a pulse on one of the probes from the sending device every time it sent a Note-On command. We attached another probe to the audio output of the iPad. We triggered on the pulse and measured the amount of time it took to hear the audio. The resulting timing was a reliable average of 45ms, with a minimum of 38ms and a maximum of around 53ms in rare situations.
We did the exact same test with RTP-MIDI (a far more verbose protocol) and it was 20ms faster. The best hunch I have is that, being part of CoreMIDI, RTP-MIDI could possibly be getting higher priority than our app, but simply acknowledging this doesn't help us. We really need to figure out how to fix this. We want our app to be just as fast, if not faster, than RTP-MIDI, and I think this should be theoretically possible since our protocol will not be as messy. We've declared RTP-MIDI to be unacceptable for our application due to the poor design of its journal system.
The 3 code bases that were tested were:
Objective-C implementation derived from the PGMidi example which would forward data received on UDP verbatim via virtual midi ports to GarageBand etc.
Objective-C source base written by an experienced audio engine developer with a built-in low-latency sine wave generator for output.
Unity3D application with a Mono-based UDP listener and built-in sound-font synthesizer plugins.
All 3 implementations showed identical measurements on the scope test.
Any insights on how we can get our messages faster would be greatly appreciated.
NEWER INFORMATION in the search for answers:
I was digging around for answers, and I found this question which seems to suggest that iOS might respond more quickly if the communication were TCP instead of UDP. This would take some effort to test on our part because our embedded system lacks TCP capabilities and supports only UDP. I am curious as to whether I could hold open a TCP connection for the sole purpose of keeping the Wifi responsive. Crazy idea? I dunno. Has anyone tried this? I need this to be as real-time as possible.
Answering my own question here:
In order to keep the UDP latency down, it turns out, all I had to do was make sure the Wifi doesn't go silent for more than 150ms (or so). The exact timing requirements are unknown to me at this time; however, the initial tests I was running were with packets 500ms apart, and that was too long. When I increased the packet rate to one every 150ms, the total lag time dropped to around 18ms on average (vs. 45ms), measured using the same techniques I described in the original question, which puts it on par with our RTP-MIDI measurements.
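For anyone who wants to reproduce the workaround, it really is just a keep-alive. A minimal sketch of the sending side (the address, port and 100ms interval are assumptions for illustration; anything that keeps the gaps comfortably under ~150ms should do):

import socket
import time

IPAD_ADDR = ("192.168.1.50", 5004)    # assumed address/port of the iOS listener

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    sock.sendto(b"\x00", IPAD_ADDR)   # 1-byte dummy packet keeps the radio awake
    time.sleep(0.1)                   # keep gaps comfortably under ~150ms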
Is it possible to bounce data back and forth between, let's say, a computer in the USA and a computer in Australia through the internet, and use this bounced data as data storage?
As I understand it, it would take some time for the data to go from A to B, let's say 100 milliseconds; the data in transit could therefore be considered data in storage. If both nodes had good, free bandwidth, could data be stored in this transmission space by bouncing it back and forth in a loop?
Would there be any reason why this would not work?
The idea comes from a different idea I had some time ago, where I thought you could store data in empty space by shooting laser pulses between two satellites a few light-minutes apart. In the light-minutes of space between them, the data in transit would itself be the storage.
Lost packets. Although some protocols (like TCP) have means to prevent packet loss, this involves the sender re-sending lost packets as needed. That means each node must still keep a copy of the data available to send again (or the protocol would fail), so you'd still be using local storage until the transfer completes.
If you took any networking classes, you would know the End-to-End principle, which states
The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes
Hence, you cannot expect routers between your two hosts to keep the data for you. They are free to discard it at any time (or they may themselves crash at any time with your data in their buffers).
For more, you can read this wiki link:
End-to-End principle
I think this should actually work, as in reality you would be storing that information in the various IO buffers of the numerous routers, switches and network cards along the path. However, the amount of storable information would probably be too small to have practical use, and network administrators at all levels are unlikely to enjoy and support such a creative approach.
Storing information in a delay line is a known approach and has been used to build memory devices in the past. However, those methods rely on the delay during signal propagation over a physical medium. As the Internet mostly uses wires and electromagnetic waves that travel at the speed of light, not much information can be stored this way. Past delay-line memories mostly used sound waves, which travel far more slowly.
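A quick bandwidth-delay calculation shows how little actually fits "in flight" (the numbers are purely illustrative):

# Data "stored" in transit = bandwidth x round-trip time
bandwidth_bps = 100_000_000              # assume a 100 Mbit/s path
rtt_seconds = 0.2                        # assume ~200 ms USA <-> Australia round trip
in_flight_bytes = bandwidth_bps * rtt_seconds / 8
print(in_flight_bytes / 1e6, "MB in flight")   # about 2.5 MB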
On one end, you have TCP, which guarantees that packets arrive and that they arrive in order. It's also designed for the commodity Internet, with congestion control algorithms that "play nice" in traffic. On the other end of the spectrum, you have UDP, which guarantees neither arrival nor ordering of packets, but lets you send data to a receiver with minimal overhead. Somewhere in the middle, you have reliable UDP-based protocols, such as UDT, that offer customized congestion control algorithms and reliability, but with greater speed and flexibility.
However, what I'm looking for is the capability to send large chunks of data over UDP (greater than the 64k datagram size of UDP), but without a concern for reliability of each individual datagram. The idea is that the large data is broken down into datagrams of a specified size (<= 64,000 bytes), probably with some header data stuck on the front and sent over the network. On the receiving side, these datagrams are read in and stored. If a datagram doesn't arrive, all of the datagrams associated with that transfer are simply thrown out by the client.
Most of the "reliable UDP" implementations try to maintain reliability of each datagram, but I'm only interested in the whole, and if I don't get the whole, it doesn't matter - throw it all away and wait for the next. I'd have to dig deeper, but it might be possible with custom congestion control algorithms in UDT. However, are there any protocols with this approach?
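To make the scheme concrete, here is a rough sketch of what I have in mind: each datagram carries a transfer ID, a chunk index and the total chunk count, and the receiver discards the whole transfer if any chunk is missing (the header layout and sizes are illustrative, not an existing protocol):

import socket
import struct

CHUNK = 60000                      # stay under the ~64 KB UDP datagram limit
HDR = struct.Struct("!IHH")        # transfer id, chunk index, total chunks

def send_blob(blob: bytes, addr, transfer_id: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
    for idx, chunk in enumerate(chunks):
        sock.sendto(HDR.pack(transfer_id, idx, len(chunks)) + chunk, addr)

def reassemble(datagrams):
    # Group datagrams by transfer id; keep a transfer only if every chunk arrived.
    transfers = {}
    for dgram in datagrams:
        tid, idx, total = HDR.unpack_from(dgram)
        transfers.setdefault(tid, (total, {}))[1][idx] = dgram[HDR.size:]
    complete = {}
    for tid, (total, parts) in transfers.items():
        if len(parts) == total:                      # all chunks present
            complete[tid] = b"".join(parts[i] for i in range(total))
        # otherwise: throw the whole transfer away
    return complete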
You could try ENet; whilst not specifically aimed at what you're trying to do, it does have the concept of 'fragmented data blocks', whereby you send data larger than its MTU and it sends it as a sequence of datagrams of its MTU, with header details that relate one part of the sequence to the rest. The version I'm using only supports 'reliable' fragments (that is, the ENet reliability layer will kick in to resend missing fragments), but I seem to remember seeing discussion on the mailing list about unreliable fragments, which would likely do exactly what you want; i.e. deliver the whole payload if it all arrives and throw away the bits if it doesn't.
See http://enet.bespin.org/
Alternatively take a look at the answers to this question: What do you use when you need reliable UDP?