I'm familiar with using Reachability to determine the type of internet connection (if any) being used on an iOS device. Unfortunately that's not a decent indicator of connection quality. Wifi with low signal strength is pretty sketchy and 3G with anything less than 3 bars is a disaster (not to mention networks that only allow EDGE connections).
How can I determine the quality of my connection so I can help my users decide if they should be downloading larger files on their current connection?
A pragmatic approach would be to download a moderately sized file hosted on a reliable, worldwide CDN at the start of your application. You know the file size beforehand, so you just have to measure how long the download takes, do a simple computation, and you have your estimate of the quality of the connection.
For example, jQuery UI source code, unminified, gzipped weighs roughly 90kB. Downloading it from http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.js takes 327ms here on my Mac. So one can assume I have at least a decent connection that can handle approximately 300kB/s (and in fact, it can handle much more).
The trick is to find the right balance between the size of the file and the latency of the network, as the full download speed is never reached on a small file like this. On the other hand, downloading 1 MB right after launching your application will surely penalize most of your users, even if it lets you measure the connection speed more precisely.
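For illustration, here is a minimal sketch of that measurement using URLSession; the jQuery UI URL is just the example from above, and in practice you would point at a small file of known size that you control:

```swift
import Foundation

/// Rough throughput estimate from a single small test download.
/// The URL is only an example - use a file of known size hosted on a reliable CDN.
func estimateThroughput(completion: @escaping (Double?) -> Void) {
    let testURL = URL(string: "https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.js")!
    let start = Date()

    URLSession.shared.dataTask(with: testURL) { data, _, error in
        guard let data = data, error == nil else {
            completion(nil)                               // offline or request failed
            return
        }
        let elapsed = Date().timeIntervalSince(start)     // seconds
        completion(Double(data.count) / 1024.0 / elapsed) // kB/s
    }.resume()
}

// Example: only offer the large download if the estimate looks decent.
estimateThroughput { kBps in
    if let kBps = kBps {
        print(String(format: "Estimated ~%.0f kB/s", kBps))
    } else {
        print("No usable connection")
    }
}
```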
Cyrille's answer is a good pragmatic one, but in the end it is not a great solution in the mobile context, for these reasons:
It involves doing a test "at the start of your application", by which I assume he means when your app launches. But your app may execute for a long while, may go into the background and then back into the foreground, and all the while the user is changing network contexts with changes in underlying network performance - so that initial test result may bear no relationship to the "current" performance of the network connection.
For the reason he rightly points out: it "penalizes" your user by making them download a test file over what may already be a constrained network connection.
You also suggest in your original post that you want your user to decide whether they should download based on information you present to them. But I would suggest that this is not a good way to approach interacting with mobile users - you should not be asking them to make complicated decisions. If absolutely necessary, only ask whether they want to download the file if you think it may present a problem, but keep it that simple - "Do you want to download XYZ file (100 MB)?" I personally would avoid even that.
Instead of downloading a test file, the better solution is to monitor and adapt. Measure the performance of the connection as you go along, keep track of the "freshness" of that information about how well the connection is performing, and only present your user with a decision to make if the ongoing performance of the connection makes it seem necessary.
EDIT: For example, if you determine a patience threshold that in your opinion represents tolerable download performance, keep track of each download the user does in order to determine whether that threshold is being reached. That way, instead of clogging up the user's connection with test downloads, you're using real-world activity as the determining factor for "quality of the connection", which is ultimately about the end-user experience of that quality. If you decide to let the user cancel downloads, then you have an excellent "input" about the user's actual patience threshold, and you can adapt your functionality to that situation by subsequently giving them the choice before they start a download. If you've flipped into this type of "confirmation" mode but then find that files are starting to download faster, you can dynamically exit the confirmation mode.
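As a rough sketch of that monitor-and-adapt idea (the threshold and staleness window below are made-up numbers you would tune for your own users):

```swift
import Foundation

/// Tracks the throughput of downloads the app performs anyway, and decides
/// whether to ask for confirmation before the next large one.
final class ConnectionMonitor {
    private var lastKBps: Double?
    private var lastSample: Date?

    // Hypothetical tuning knobs - pick values matching your users' patience threshold.
    private let slowThresholdKBps = 50.0
    private let staleAfter: TimeInterval = 120

    /// Call this after every real download completes.
    func record(bytes: Int, duration: TimeInterval) {
        guard duration > 0 else { return }
        lastKBps = Double(bytes) / 1024.0 / duration
        lastSample = Date()
    }

    /// True if the connection has recently looked too slow for a large file.
    var shouldConfirmLargeDownload: Bool {
        guard let kBps = lastKBps, let at = lastSample,
              Date().timeIntervalSince(at) < staleAfter else {
            return false    // no fresh information, so don't bother the user
        }
        return kBps < slowThresholdKBps
    }
}
```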
Rob's answer is very good, but for a more specific implementation, start with Apple's SimplePing example source code: https://developer.apple.com/library/archive/samplecode/SimplePing/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000716
Target the domain of the server you want to monitor connection quality to. Use the ping library to "ping" it on a regular basis (say every 1 to 10 seconds, depending on your UI needs). Measure how long it takes to get a response to each ping (or whether it never returns) to develop an estimate of the connection quality to communicate to your user.
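If you'd rather not pull in the ICMP machinery, here is a hedged sketch of the same idea using a tiny HTTPS HEAD request against your own server instead of SimplePing; it measures "time to reach my server" rather than raw network latency, and the host and interval are placeholders:

```swift
import Foundation

/// Very rough round-trip probe against the server you actually talk to.
/// Uses a small HTTPS HEAD request instead of SimplePing's ICMP echo.
func measureRoundTrip(to host: String, completion: @escaping (TimeInterval?) -> Void) {
    var request = URLRequest(url: URL(string: "https://\(host)/")!)
    request.httpMethod = "HEAD"
    request.timeoutInterval = 5

    let start = Date()
    URLSession.shared.dataTask(with: request) { _, response, error in
        guard error == nil, response != nil else {
            completion(nil)                              // treat as "ping never returned"
            return
        }
        completion(Date().timeIntervalSince(start))      // seconds
    }.resume()
}

// Probe every few seconds and feed the results into whatever quality indicator you show.
Timer.scheduledTimer(withTimeInterval: 5, repeats: true) { _ in
    measureRoundTrip(to: "example.com") { rtt in
        print(rtt.map { String(format: "RTT %.0f ms", $0 * 1000) } ?? "no response")
    }
}
```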
Related
I'm a higher-layer guy; I don't know much about CAN bus, J1939, or even particular ECUs, and I don't want to. I just don't like the proposed software solution, so I'd like to ask whether the customer's requirements are legitimate.
If a particular ECU doesn't receive a CAN frame within a 300 ms timeout after power-up, it stops responding to any further frames and must be power cycled. This is information from the customer's technicians; I just have to believe it.
It would be possible to power up the ECU only after our CAN driver thread is ready, but that would require some extra wiring by end customers.
The software solutions are all bad or worse: running FreeRTOS before important checks, putting the CAN driver code into code shared with other products, or starting the CAN peripheral in the bootloader and leaving it running without software control until the driver starts.
The sensitive part is that there is no explicit requirement in our specification to start the CAN driver within such a short time. The customer says it's part of the J1939 specification.
Can someone confirm or disprove that J1939 allows devices to unrecoverably stop receiving after 300 ms of silence, or requires devices to start transmitting within 300 ms after power-up? Or at least point me to the parts of the J1939 standard that might cover this?
Thank you
If a particular ECU doesn't receive a CAN frame within a 300 ms timeout after power-up, it stops responding to any further frames and must be power cycled. This is information from the customer's technicians; I just have to believe it.
This does of course entirely depend on what task it is performing.
Generally, an ECU, as in an automotive computer in a car/truck etc., is never allowed to hang up/latch up. The normal course of action would be for the ECU to either reboot/reset itself or revert to a fail-safe mode.
But in the case of tractors and heavy machinery, the normal safe mode is "stop everything".
It would be possible to power up the ECU only after our CAN driver thread is ready, but that would require some extra wiring by end customers.
I don't know what this is supposed to mean. What is "extra wiring"? Something to keep other nodes in common mode while one is rebooting? Terminating resistors? Some dirty power-up delay circuit?
The software solutions are all bad or worse: running FreeRTOS before important checks, putting the CAN driver code into code shared with other products, or starting the CAN peripheral in the bootloader and leaving it running without software control until the driver starts.
Generally speaking, it's customary to initialize critical hardware like clocks, watchdogs, prescalers, pull resistors etc. very early on. Initializing hardware peripherals may or may not be critical. It's customary to do this after the CRT (the C runtime start-up code) has executed, at the beginning of main(), and the order of initialization usually matters a lot.
If you have a delay longer than 300ms from power-on reset to the start of main(), something is terribly wrong with the program.
The sensitive part is that there is no explicit requirement in our specification to start the CAN driver within such a short time. The customer says it's part of the J1939 specification.
I haven't worked much with J1939 and I don't remember what it says specifically, but 300ms is an eternity in a real-time system! It's not a "short time".
In general, correctly designed mission-/safety-critical CAN control systems in automotive/industrial settings work like this:
All data is sent repeatedly at fixed intervals, regardless of whether it has changed. Commonly once per 10 ms or once per 100 ms.
A node which has not received new data will use the previously received data for now.
There is a timeout, measured from the point when the last valid data was received, after which the receiving node must stop using the old data and revert to a fail-safe mode (a sketch of this receiver-side check follows the list). This time is often relative to how fast the controlled object can move. It's common to have timeouts after some multiple of 100 ms.
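The receiver side of that pattern boils down to a timestamp check. A minimal sketch follows - written in Swift only for readability in this thread; a real ECU would implement it in C against its RTOS tick, and the 300 ms figure is simply the value under discussion:

```swift
import Foundation

/// Receiver-side handling of cyclic CAN data: keep using the last value until
/// it goes stale, then revert to a fail-safe state instead of latching up.
struct CyclicSignal {
    var lastValue: Double? = nil
    var lastReceived: Date? = nil
    let timeout: TimeInterval = 0.300   // e.g. 3x a 100 ms transmission period

    mutating func frameReceived(value: Double) {
        lastValue = value
        lastReceived = Date()
    }

    /// Value to act on, or nil meaning "enter fail-safe mode now".
    func currentValue(now: Date = Date()) -> Double? {
        guard let value = lastValue, let at = lastReceived,
              now.timeIntervalSince(at) <= timeout else {
            return nil                  // stale or never received: fail safe
        }
        return value
    }
}
```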
I would say that your customer's requirements are very reasonable, it's nothing out of the ordinary.
My colleague answered that there's no such demand, only a vague 300 ms timeout.
I am attempting to stream audio files from a server to iOS devices and play them completely synchronized. For example, on my phone I might be 20 seconds into a song, and my friend next to me should also be 20 seconds into the song. I know this is not an easy problem to solve, but I am attempting to do so.
I can currently get them within one second of each other by calculating the difference in time between the devices and then having them sync up. However, that is not good enough, because the human ear can easily detect an offset approaching a second, and this is over Wi-Fi.
My next approach is going to be to unicast the one file from the server, have all the devices pick it up directly from the server, and then implement some type of buffer system similar to Netflix so that network connectivity isn't the limiting factor. http://www.wowza.com/ is what I would use to help with that.
I know this can be done, because http://lysn.in/ does it with their app, and I want to be able to do something similar.
Any other recommendations after I try my unicast option?
Would implementing Firebase help solve a lot of the heavy-lifting problems?
(1) In answer to ONE of your questions (the final one):
Firebase is not "realtime" in "that sense" -- PubNub is probably (almost certainly) the fastest "realtime" messaging for and between apps/browser/etc.
But they don't mean real-time in the sense that, say, racing-game engineers mean it, or indeed in the sense of your use case.
So Firebase is not relevant to you here and won't help.
(2) Regarding your second general question: "how to synchronise time on two or more devices, given that we have communications delays."
Now, this is a really well-travelled problem in computer science.
It would be pointless to outline it here, because it is fully explained at http://www.ntp.org/ntpfaq/NTP-s-algo.htm under "How is time synchronised?"
So in fact, to get a good time base on both machines, you should use that! Have both machines set their time really accurately via the existing (refined over decades) NTP synchronisation.
(See for example https://stackoverflow.com/a/6744978/294884.)
In fact, are you already doing this?
It's possible that doing that will solve all your problems; then just agree to start at a certain exact time.
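If you end up computing the offset against your own server rather than relying on the device's NTP-synced clock, the classic NTP sample calculation is simple. A sketch (the struct and function names here are made up):

```swift
import Foundation

/// One NTP-style sample. The client records when it sent the request (t0) and
/// when the reply arrived (t3); the server stamps when it received the request
/// (t1) and when it sent the reply (t2).
struct ClockSample {
    let t0, t1, t2, t3: TimeInterval

    /// Estimated offset of the server clock relative to this device.
    var offset: TimeInterval { ((t1 - t0) + (t2 - t3)) / 2 }

    /// Round-trip delay, useful for discarding noisy samples.
    var delay: TimeInterval { (t3 - t0) - (t2 - t1) }
}

/// Take several samples and trust the one with the smallest delay (least noise).
func bestOffset(from samples: [ClockSample]) -> TimeInterval? {
    samples.min { $0.delay < $1.delay }?.offset
}

// If the server says "everyone start playback at server time S",
// each device starts at local time S - offset.
```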
Hope it helps!
I would recommend against using the data transfer itself to synchronize playback. This should be straightforward to do with a buffer and a periodic "sync" signal that is sent at a period of less than half the buffer size. Worst case, this should generate a small blip on devices that get ahead of or behind the sync signal.
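A hedged sketch of that sync-signal handling on the client, assuming the server periodically broadcasts the playback position everyone should be at; AVPlayer is just one way to play the buffered audio, and the tolerance is a value to tune by ear:

```swift
import AVFoundation

/// Nudges local playback toward the position the server says everyone should be at.
/// `player` is any AVPlayer already playing the shared audio.
func applySyncSignal(expectedSeconds: Double, player: AVPlayer, tolerance: Double = 0.05) {
    let localSeconds = player.currentTime().seconds
    let drift = localSeconds - expectedSeconds
    guard abs(drift) > tolerance else { return }   // close enough; leave it alone

    // Large drift: seek to the expected position. (A gentler option is to nudge
    // `player.rate` slightly for a moment so the correction is inaudible.)
    let target = CMTime(seconds: expectedSeconds, preferredTimescale: 600)
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
}
```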
I need to implement a draft application for a fantasy sports website. Each user will have 1 minute 30 seconds to choose a player for their team, and if that time elapses, one will be selected automatically. Our planned implementation will use Juggernaut to push the turn changes to each user participating in the draft, but I'm still not sure how to handle latency.
The main issue here is that if a user has higher latency than the others, they will receive the turn changes a little bit later and their timer won't be synchronized. Say someone receives a turn change right after choosing a player himself, because on his side he thought he still had 2 seconds left - how can we handle that case? Is it better to try to measure each user's latency and adjust the client-side timer to minimize the issue? If so, how could we implement that?
This is a tricky issue, but there are some good solutions out there. Look into what time.gov does, and how it does it; essentially, as I understand it, they use Java to perform multiple repeated requests to the server, to attempt to get an idea of the latency involved in the communication, then they generate a measure of latency that they use to skew the returned time data. You could use the same process for your application, with even more accuracy; keeping track of what the latency is and how it varies over time lets you make some statistical inferences about how reliable your latency numbers are, etc. It can be a bit complex, but it can definitely allow you to smooth out your performance. My understanding is that this is what most MMOs do as well, to manage lag.
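A small sketch of that skewing idea applied to the draft timer: sample the round trip to your server a few times, keep a smoothed one-way estimate, and shorten the local countdown by that amount when a turn change arrives. The names and smoothing factor are made up:

```swift
import Foundation

/// Keeps a smoothed estimate of one-way latency to the draft server.
final class LatencyEstimator {
    private(set) var oneWaySeconds: Double = 0

    /// Feed in each measured round-trip time (request sent -> response received).
    func addSample(roundTrip: Double) {
        let sample = roundTrip / 2
        // Exponential moving average so a single slow request doesn't dominate.
        oneWaySeconds = oneWaySeconds == 0 ? sample : 0.8 * oneWaySeconds + 0.2 * sample
    }

    /// When the server pushes "your turn, 90 seconds", that message is already
    /// roughly `oneWaySeconds` old, so start the local countdown a bit shorter.
    func localCountdown(serverSeconds: Double) -> Double {
        max(0, serverSeconds - oneWaySeconds)
    }
}
```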
For security reasons I have a feeling that testing should be done server side. Nonetheless, that would be rather taxing on the server, right? Given the gear and buffs a player is wearing, they will have a higher movement speed, so each time they move I would need to calculate that new constant and see whether their movement is legitimate (we're using TCP, so I don't need to worry about lost or unordered packets). I realize I could instead just save the last movement speed and only recalculate it when they change something affecting their speed, but even then that's another check.
Another idea I had is that the server randomly picks data that the client is sending it and verifies it and gives each client a trust rating. A low enough trust rating would mean every message from the client would be inspected and all of their actions would be logged in a more verbose manner. I would then know they're hacking by inspecting the logs and could ban/suspend them as well as undo any benefits they may have spread around through hacking.
Any advice is appreciated, thank you.
Edit: I just realized there's also the case where a hacker could send tiny movements (each within their regular speed capability) in very quick succession. Each individual movement alone would be legitimate, but the cumulative effect would be speed hacking. What are some ways around this?
The standard way to deal with this problem is to have the server calculate all movement. The only things the clients should send to the server are commands, e.g. "move left"; the server then calculates how fast the player moves and so on, and finally sends the updated position back to the client.
If you leave any calculation at all on the client, the chances are that sooner or later someone will find a way to cheat.
[...] testing should be done server side. Nonetheless, that would be rather taxing on the server, right?
Nope. This is the way to do it. It's the only way to do it. All talk of checking trust or whatever is inherently flawed, one way or another.
If you're letting the player send positions:
Check where someone claims they are.
Compare that to where they were a short while ago. Allow a tiny bit of deviation to account for network lag.
If they're moving too quickly, reposition them somewhere more reasonable. Small errors may be due to long periods of lag, so clients should use interpolation to smooth out these corrections.
If they're moving far too quickly, disconnect them. And check for bugs in your code.
Don't forget to handle legitimate traversals over long distances, e.g. teleports. (A sketch of this kind of speed check follows.)
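Here is a sketch of that check on the server side, which also covers the "many tiny movements" case from the question by measuring distance over a sliding time window rather than per step. All names and limits are illustrative, and teleports would need to be whitelisted separately:

```swift
import Foundation

struct Position { var x, y: Double }

/// Per-player movement validator. Checks speed over a sliding window so that
/// many individually legal tiny steps can't add up to an impossible distance.
final class MovementValidator {
    private var history: [(time: Date, pos: Position)] = []
    private let window: TimeInterval = 2.0      // seconds of history to keep
    private let slack = 1.2                     // 20% allowance for lag/jitter

    /// Returns the position to accept: the claimed one if plausible,
    /// otherwise a corrected position clamped to the allowed distance.
    func validate(claimed: Position, at now: Date, maxSpeed: Double) -> Position {
        history.append((now, claimed))
        history.removeAll { now.timeIntervalSince($0.time) > window }

        guard let oldest = history.first, history.count > 1 else { return claimed }

        let dx = claimed.x - oldest.pos.x
        let dy = claimed.y - oldest.pos.y
        let travelled = (dx * dx + dy * dy).squareRoot()
        let elapsed = now.timeIntervalSince(oldest.time)
        let allowed = maxSpeed * elapsed * slack

        if travelled <= allowed { return claimed }

        // Too fast: clamp back toward the last known-good position
        // (and log/flag the player if this keeps happening).
        let scale = allowed / travelled
        let corrected = Position(x: oldest.pos.x + dx * scale,
                                 y: oldest.pos.y + dy * scale)
        history[history.count - 1].pos = corrected
        return corrected
    }
}
```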
The way around this is that all action is done on the server. Never trust any information that comes from the client. If anybody actually plays your game, somebody will reverse-engineer the communication to the server and figure out how to take advantage of it.
You can't assign a random trust rating, because cautious cheaters will cheat only when they really need to. That gives them a considerable advantage with a low chance of being spotted cheating.
And, yes, this means you can't get by with a low-grade server, but there's really no other method of preventing client-side cheating.
If you are developing in a language that has access to Windows API function calls, I have found from my own studies in speed hacking that you can easily identify a speed hacker by calling two functions and comparing the results.
timeGetTime
and...
GetTickCount
Both functions return the number of milliseconds since the system started. However, timeGetTime is much more accurate than GetTickCount: timeGetTime is accurate to about 1 ms, whereas GetTickCount is only accurate to around 50 ms.
Even though there is a small lag between these two functions, if you turn on a speed hacking application (pick your poison), you should see a very large difference between the two result sets, sometimes even up to a couple of seconds. The difference is very noticeable.
Write a simple application that returns the GetTickCount and timeGetTime results, without the speed hacking application running, and leave it running. Compare the results and display the difference -- you should see a very small difference between the two. Then, with your application still running, turn on the speed hacking application and you will see a very large difference between the two result sets.
The trick is figuring out what threshold constitutes suspicious activity.
We have a requirement from a customer to provide a "lite" version for dial-up and all the bells-and-whistles for a broadband user.
The solution will use Flex / Flash / Java EJB and some jsp.
Is there a way for the web server to distinguish between the two?
You don't care about the user's connection type, you care about the download speed.
Have a tiny Flash app that downloads the rest of the Flash content and times how long it takes. Or an HTML page that times how long an Ajax download takes.
If the download of the rich-featured app takes too long, have the initially downloaded stub page/flash redirect to the slow download page (or download the bare-bones flash app, or whatever).
The simplest and most reliable mechanism is probably to get the user to select their connection type from a drop down. Simple, I know, but it may save you a world of grief!
There's no way to distinguish between broadband and dial-up as connection types, but you can make an educated guess from the connection speed.
Gmail does this and provides a link to a basic HTML version of their service if they detect it.
(Screenshot of Gmail's link to its basic HTML version; source: nirmaltv.com)
My guess is that there is some client side javascript polling done on AJAX requests. If the turnaround time surpasses a threshold, the option to switch to "lite" appears.
The best part about this option is that you allow the user to choose if they want to use the lite version instead of forcing them.
Here's a short code snippet from someone who attempted something similar. It's in C#, but it's pretty short, and it's just the concept that's of interest.
Determine the Connection Speed of your client
Of course, there could be a temporary speed problem that has nothing to do with the user's connection at the time you test, etc.
I had a similar problem a couple of years ago and just let the user choose between the hi and lo bandwidth sites. The very first thing I loaded on the page was this option, so they could move on quickly.
I think the typical approach to this is just to ask the user. If you don't feel confident that your users will provide an accurate answer, I suspect you'll have to write an application that runs a speed test on the client. Typically these record how long it takes the client to receive a known number of bytes, and use that to determine bandwidth.
Actionscript 3 has a library to help you with this task, but I believe it requires you to deploy your flex/flash app on Flash Media Server. See ActionScript 3.0 native bandwidth detection for details.
@Apphacker (I'd comment instead of answering if I had enough reputation...):
Can't guarantee the reverse, either--I have Earthlink dial-up, soon to upgrade to Earthlink DSL (it's what's available here...).
You could check their IP and see if it resolves to/is assigned to a dial up provider, such as AOL, Earthlink, NetZero. Wouldn't guarantee that those that don't resolve to such a provider are broadband users.
you could ...
ask the user
perform a speed test and ask the user if the result you found is correct
perform a speed test and hope that the result found is correct
I think a speed test should be enough.
If you only have a small, well-known user group, it is sometimes possible to determine the connection speed from the IP address. (Some providers assign different subnets to dial-up and broadband connections.)