I have a timer requirement in my web server project. Some effects caused by client operations on the server need to be reset some time after they occur. To do this, I intend to use the erlang:start_timer/3 function to send a reset message to a process that does the resetting for each effect. This works fine with few client operations coming in. The question is: do Erlang timers scale well as the number of concurrent effects waiting to be reset increases?
Don't guess, don't ask: try it and measure. Nobody knows your use case and requirements better than you. Is it for profit? Then you are paid to do it. Is it a hobby? Then get used to it. It is an integral part of your job.
I am attempting to stream audio files from a server to iOS devices and play them completely synchronized. For example, on my phone I might be 20 secs into a song and then my friend next to me should be 20 secs into the song as well. I know this is not an easy problem to solve, but I am attempting to do so.
I can currently get them within one second of each other by calculating the difference in time between the devices and then having them sync up, but that is not good enough because the human ear can easily detect a one-second offset, and this is over WiFi.
My next approach is going to be to unicast the one file from the server, have all devices pick it up directly from the server, and then implement some type of buffer system similar to Netflix so that network connectivity wouldn't be a limiting factor. http://www.wowza.com/ is what I would use to help with that.
I know this can be done, because http://lysn.in/ does it with their app, and I want to be able to do something similar.
Any other recommendations after I try my unicast option?
Would implementing Firebase help solve a lot of the heavy-lifting problems?
(1) In answer to ONE of your questions (the final one):
Firebase is not "realtime" in "that sense" -- PubNub is probably (almost certainly) the fastest "realtime" messaging for and between apps/browsers/etc.
But they don't mean real-time in the sense that, say, racing-game engineers mean it, or indeed in the sense of your use case.
So Firebase is not relevant to you here and won't help.
(2) Regarding your second general question: "how to synchronise time on two or more devices, given that we have communications delays."
Now, this is a really well-travelled problem in computer science.
It would be pointless outlining it here, because it is fully explained at http://www.ntp.org/ntpfaq/NTP-s-algo.htm if you click on "How is time synchronised?"
So in fact, to get a good time base on both machines, you should use that! Have both machines set their time really accurately via NTP, using the existing (perfected over decades) NTP synchronisation.
(So for example https://stackoverflow.com/a/6744978/294884 )
In fact, are you already doing this?
It's possible that doing that will solve all your problems; then just agree to start at a certain exact time.
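If you'd rather query an NTP server directly from the app instead of trusting the device clock, a minimal SNTP query is quite small. The sketch below (Swift, Network framework) sends the standard 48-byte client packet to pool.ntp.org and reads the server's transmit timestamp; error handling and the full four-timestamp offset math are left out, so treat it as an illustration, not a drop-in client.

```swift
import Foundation
import Network

// Minimal SNTP query: send a 48-byte NTP client packet and read the
// server's transmit timestamp (seconds since 1900-01-01, bytes 40...43).
func queryNTP(host: String = "pool.ntp.org",
              completion: @escaping (Date?) -> Void) {
    let connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: 123, using: .udp)
    connection.stateUpdateHandler = { state in
        guard case .ready = state else { return }
        var packet = [UInt8](repeating: 0, count: 48)
        packet[0] = 0x1B // LI = 0, Version = 3, Mode = 3 (client)
        connection.send(content: Data(packet),
                        completion: .contentProcessed({ _ in
            connection.receiveMessage { data, _, _, _ in
                defer { connection.cancel() }
                guard let data = data, data.count >= 48 else {
                    completion(nil); return
                }
                // Big-endian 32-bit seconds field of the transmit timestamp.
                let secs = data.subdata(in: 40..<44)
                               .reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
                let ntpToUnix = 2_208_988_800.0 // seconds from 1900 to 1970
                completion(Date(timeIntervalSince1970:
                                Double(secs) - ntpToUnix))
            }
        }))
    }
    connection.start(queue: .global())
}
```

In practice a maintained NTP library (such as the one in the linked answer) will do the statistics and retries for you; this just shows how little is on the wire.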
Hope it helps!
I would recommend against using the data movement itself to synchronize the playback. This should be straightforward to do with a buffer and a periodic "sync" signal sent at a period of less than half the buffer size. Worst case, this should generate a small blip on devices that get ahead of or behind the sync signal.
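As a hedged sketch of the client side of that idea (AVAudioPlayer stands in for whatever playback API you actually use, and the 50 ms tolerance is illustrative):

```swift
import AVFoundation

// Called whenever a periodic "sync" message carrying the server's playback
// position arrives. Hard-seeking is what causes the small blip mentioned
// above; a gentler approach would adjust the playback rate slightly.
func handleSync(serverPosition: TimeInterval, player: AVAudioPlayer) {
    let drift = player.currentTime - serverPosition
    if abs(drift) > 0.05 {              // tolerate ~50 ms of drift
        player.currentTime = serverPosition
    }
}
```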
I want to deploy managed iOS devices to employees of the company, and the app they will use will timestamp data that will be recorded locally, then forwarded. I need those timestamps to be correct, so I must prevent the user from adjusting the time on the device, recording a value, then resetting the date and time. Date and time will be configured to come from the network automatically, but the device may not have network connectivity at all times (otherwise I would just read network time every time a data value is recorded). I haven't seen an option in Apple Configurator to prevent changing the date and time, so is there some other way to do this?
You won't be able to prevent a user either changing their clock or just hitting your API directly, as other commenters have posted. These are two separate issues: they can be addressed by keeping a local time on the device that you control, and by generating a hashed key from what you send to the server.
Local Time on Device:
To start, make an API call when you launch the app which sends back a timestamp from the server; this is your 'actual time'. Store this on the device and run a timer which uses a phone-uptime function (not mach_absolute_time() or CACurrentMediaTime() - these get weird when your phone is in standby mode) plus a bit of math to advance that actual time every second. I've written an article on how I did this for one of my apps (be sure to read the follow-up, as the original article used CACurrentMediaTime(), which has some bugs). You can periodically repeat that initial API call (i.e. if the phone goes into the background and comes back again) to make sure everything is staying accurate, but the time should always be correct so long as you don't restart the phone (which should prompt an API call when you next open the app to update the time).
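A minimal sketch of that idea, assuming a hypothetical server call that returns a Date. One candidate for the uptime source is mach_continuous_time() (iOS 10+), which, unlike mach_absolute_time() and CACurrentMediaTime(), keeps counting while the device is asleep:

```swift
import Foundation

// Sketch of a "trusted clock": anchor the server's time to device uptime.
final class TrustedClock {
    private var serverTime: Date?            // 'actual time' from the server
    private var uptimeAtSync: TimeInterval?  // uptime when it arrived

    // Uptime in seconds, including time spent asleep.
    private var uptime: TimeInterval {
        var info = mach_timebase_info_data_t()
        mach_timebase_info(&info)
        return TimeInterval(mach_continuous_time())
            * TimeInterval(info.numer) / TimeInterval(info.denom) / 1e9
    }

    // Call this with the Date your API returned.
    func sync(with serverDate: Date) {
        serverTime = serverDate
        uptimeAtSync = uptime
    }

    // nil until the first successful sync; survives clock tampering but
    // not a reboot (which should trigger a fresh sync, as described above).
    var now: Date? {
        guard let base = serverTime, let then = uptimeAtSync else { return nil }
        return base.addingTimeInterval(uptime - then)
    }
}
```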
Securing the API:
You now have a guaranteed* accurate time on your device but you still have an issue in that somebody could send the wrong time to your API directly (i.e. not from your device). To counteract this, I would use some form of salt/hash with the data you are sending similar to OAuth. For example, take all of the parameters you are sending, join them together and hash them with a salt only you know and send that generated key as an extra parameter. On your server, you know the hash you are using and the salt so you can rebuild that key and check it with the one that was sent; if they don't match, somebody is trying to play with your timestamp.
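For example, a sketch using CryptoKit. An HMAC is the standard form of the salt/hash scheme described above; the parameter names and the "signature" key are illustrative, not from the original post:

```swift
import CryptoKit
import Foundation

// Sign the request parameters with a shared secret so the server can
// detect tampering: same canonical string + same secret = same MAC.
func signedParameters(_ params: [String: String],
                      secret: String) -> [String: String] {
    // Canonicalize: sort by key so client and server hash identical input.
    let canonical = params.sorted { $0.key < $1.key }
                          .map { "\($0.key)=\($0.value)" }
                          .joined(separator: "&")
    let key = SymmetricKey(data: Data(secret.utf8))
    let mac = HMAC<SHA256>.authenticationCode(for: Data(canonical.utf8),
                                              using: key)
    var signed = params
    signed["signature"] = mac.map { String(format: "%02x", $0) }.joined()
    return signed
}
```

The server rebuilds the same canonical string with its copy of the secret and compares. Note that a secret embedded in the app binary can be extracted by a determined attacker, which is part of the caveat below.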
*Caveat: A skilled attacker could hijack the connection so that any calls to example.com/api/timestamp come from a different machine they have set up, which returns the time they want, so that the phone is given the wrong time as the starting base. There are ways to prevent this (obfuscation, pairing it with other data, encryption), but that becomes a very open-ended question very quickly, so it's best asked elsewhere. A combination of the above plus a monitor to notice weird times might be the best approach.
There doesn't appear to be any way to accomplish what you're asking for: there's no setting that stops the user from changing the time. But beyond that, even if you could prevent them from changing the time, they could let their device battery die, then plug it in and turn it on somewhere without a network connection, and the clock would be wrong until it had a chance to set itself over a network. So even preventing them from changing the time won't guarantee accuracy.
What you could do is require a network connection to record values, so that you can verify the time on a server. If you must allow it to work without a net connection, you could at least always log the current time when the app is brought up and note if the time ever seems to go backwards. You'll know something is up if the timestamp suddenly is earlier than the previous timestamp. You could also do this check perhaps only when they try to record a value. If they record a value that has a timestamp earlier than any previous recorded value, you could reject it, or log the event so that the person can be questioned about it at a later time.
This is also one of those cases where maybe you just have to trust the user not to do this, because there doesn't seem to be a perfect solution to this.
The first thing to note is that the user will always be able to forge messages to your server in order to create incorrect records.
But there are some useful things you can use to at least notice problems. Most of the time the best way to secure this kind of system is to focus on detection, and then publicly discipline anyone who has gone out of their way to circumvent policy. Strong locks are meaningless unless there's a cop who's eventually going to show up and stop you.
Of course you should first assume that any time mistakes are accidental. But just publicly "noticing" that someone's device seems to be "misbehaving" is often enough to make bad behaviors go away.
So what can you do? The first thing is to note the timestamps of things when they show up at the server. Timestamps should always move forward in time. So if you've already seen records from a device for Monday, you should not later receive records for the previous Sunday. The same should be true for your app. You can keep track of when you are terminated in NSUserDefaults (as well as posting this information to the server). You should not generally wake up in the past. If you do, complain to your server.
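A sketch of that check on the device side (the key name is illustrative; the server should run the equivalent check per device):

```swift
import Foundation

// Refuse (or flag) records whose timestamp is earlier than the last one
// we accepted: timestamps should only move forward.
func validateAndStore(timestamp: Date) -> Bool {
    let defaults = UserDefaults.standard
    let key = "lastAcceptedTimestamp"
    let last = defaults.object(forKey: key) as? Date ?? .distantPast
    guard timestamp >= last else {
        // Time went backwards: log it and complain to your server.
        return false
    }
    defaults.set(timestamp, forKey: key)
    return true
}
```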
Watch for UIApplicationSignificantTimeChangeNotification. I believe you'll receive it if the time is manually changed (you'll receive it in several other cases as well, most of them benign). Watch for time moving significantly backwards. Complain to your server.
Pay attention to mach_absolute_time(). This is the time since the device was booted and is not otherwise modifiable by the user without jailbreaking. It's useful for distinguishing between reboots and other events. It's in a weird time unit, but it can be converted to human time as described in QA1398. If the mach time difference is more than an hour greater than the wall clock difference, something is weird (DST changes can cause a 1-hour difference). Complain to your server.
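A sketch of the QA1398 conversion plus the comparison just described (the thresholds are illustrative):

```swift
import Foundation

// Convert mach_absolute_time() ticks to seconds (the QA1398 recipe).
func machTimeSeconds() -> TimeInterval {
    var info = mach_timebase_info_data_t()
    mach_timebase_info(&info)
    return TimeInterval(mach_absolute_time())
        * TimeInterval(info.numer) / TimeInterval(info.denom) / 1e9
}

// Record both clocks at one point...
let machStart = machTimeSeconds()
let wallStart = Date()

// ...and later compare how much each says has elapsed.
let machDelta = machTimeSeconds() - machStart
let wallDelta = Date().timeIntervalSince(wallStart)
if abs(wallDelta - machDelta) > 3600 + 60 {  // allow a DST hour plus slack
    // Something moved the wall clock: complain to your server.
}
```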
All of these things could be benign. A human will need to investigate and make a decision.
None of these things will ensure that your records are correct if there is a dedicated and skilled attacker involved. As I said, a dedicated and skilled attacker could just send you fake messages. But these things, coupled with monitoring and disciplinary action, make it dangerous for insiders to even experiment with how to beat the system.
You cannot prevent the user from changing time.
Even the timestamp on a Location fix is adjusted by Apple and is not the raw GPS time.
You could look at mach kernel time, which is a relative time.
Compare that to the time recorded when you last had a network connection.
But none of this sounds reliable.
I'm familiar with using Reachability to determine the type of internet connection (if any) being used on an iOS device. Unfortunately that's not a decent indicator of connection quality. Wifi with low signal strength is pretty sketchy and 3G with anything less than 3 bars is a disaster (not to mention networks that only allow EDGE connections).
How can I determine the quality of my connection so I can help my users decide if they should be downloading larger files on their current connection?
A pragmatic approach would be to download one moderately large file hosted on a reliable, worldwide CDN at the start of your application. You know the file size beforehand, so you just have to measure the time it takes, do a simple computation, and then you've got your estimate of the quality of the connection.
For example, jQuery UI source code, unminified, gzipped weighs roughly 90kB. Downloading it from http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.js takes 327ms here on my Mac. So one can assume I have at least a decent connection that can handle approximately 300kB/s (and in fact, it can handle much more).
The trick is to find the good balance between the original file size and the latency of the network, as the full download speed is never reached on a small file like this. On the other hand, downloading 1MB right after launching your application will surely penalize most of your users, even if it will allow you to measure more precisely the speed of the connection.
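A sketch of that measurement with URLSession, using the same file mentioned above; keep in mind it captures throughput at one moment only:

```swift
import Foundation

// Time the download of a small file of known size and derive a rough
// throughput estimate. URL and ~90 kB size are the ones from the answer.
let testURL = URL(string:
    "http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.js")!
let start = Date()
let task = URLSession.shared.dataTask(with: testURL) { data, _, error in
    guard let data = data, error == nil else { return }
    let seconds = Date().timeIntervalSince(start)
    let kBps = Double(data.count) / 1024.0 / seconds  // KB/s, very rough
    print(String(format: "~%.0f KB/s over %.2f s", kBps, seconds))
}
task.resume()
```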
Cyrille's answer is a good pragmatic one, but it is not, in the end, a great solution in the mobile context, for these reasons:
It involves doing a test "at the start of your application", by which I assume he means when your app launches. But your app may execute for a long while, may go into the background and then come back to the foreground, and all the while the user is changing network contexts with changes in underlying network performance - so that initial test result may bear no relationship to the "current" performance of the network connection.
For the reason he rightly points out: it "penalizes" your user by making them download a test file over what may already be constrained network conditions.
You also suggest in your original post that you want your user to decide whether to download based on information you present to them. But I would suggest that this is not a good way to approach interacting with mobile users - you should not be asking them to make complicated decisions. If absolutely necessary, only ask whether they want to download the file if you think it may present a problem, and keep it simple: "Do you want to download XYZ file (100 MB)?" I personally would avoid even that.
Instead of downloading a test file, the better solution is to monitor and adapt. Measure the performance of the connection as you go along, keep track of the "freshness" of that information you have about how well the connection is performing, and only present your user with a decision to make if based on the on-going performance of the connection it seems necessary.
EDIT: For example, if you determine a patience threshold that in your opinion represents tolerable download performance, keep track of each download that the user does in order to determine if that threshold is being reached. That way, instead of clogging up the users connection with test downloads, you're using the real world activity as the determining factor for "quality of the connection", which is ultimately about the end-user experience of the quality of the connection. If you decide to provide the user with the ability to cancel downloads, then you have an excellent "input" about the user's actual patience threshold, and can adapt your functionality to that situation, by subsequently giving them the choice before they start the download. If you've flipped into this type of "confirmation" mode, but then find that files are starting to download faster, you could dynamically exit the confirmation mode.
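A sketch of that monitor-and-adapt idea; the window size and threshold are illustrative:

```swift
import Foundation

// Record throughput for every *real* download the user does, and only
// flip into "confirm before downloading" mode when recent downloads fall
// below a patience threshold. Exits the mode again if things speed up.
struct ConnectionMonitor {
    private(set) var recentKBps: [Double] = []
    let patienceThresholdKBps = 50.0   // your tolerable-performance line

    mutating func record(bytes: Int, seconds: TimeInterval) {
        recentKBps.append(Double(bytes) / 1024 / seconds)
        if recentKBps.count > 5 { recentKBps.removeFirst() }  // keep fresh
    }

    var shouldConfirmLargeDownloads: Bool {
        guard !recentKBps.isEmpty else { return false }
        let average = recentKBps.reduce(0, +) / Double(recentKBps.count)
        return average < patienceThresholdKBps
    }
}
```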
Rob's answer is very good, but for a more specific implementation start with Apple's SimplePing example source code: https://developer.apple.com/library/archive/samplecode/SimplePing/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000716
Target the domain for the server that you want to monitor connection quality to. Use the ping library to "ping" it on a regular basis (say 1 or 10 seconds depending upon your UI needs). Measure how long it takes to get a response to your ping (or if it never returns) to develop an estimate of the connection quality to communicate to your user.
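SimplePing gives you a true ICMP ping. As a rough stand-in, the sketch below times a tiny HTTPS round trip to the same host instead; that includes TLS/HTTP overhead, so treat the numbers as relative quality, not absolute latency:

```swift
import Foundation

// Time one lightweight round trip to the host you care about.
func measureRTT(to url: URL,
                completion: @escaping (TimeInterval?) -> Void) {
    var request = URLRequest(url: url)
    request.httpMethod = "HEAD"          // headers only, no body transfer
    let start = Date()
    URLSession.shared.dataTask(with: request) { _, response, error in
        completion(error == nil && response != nil
                   ? Date().timeIntervalSince(start) : nil)
    }.resume()
}

// Usage: call on a timer every 1-10 s and smooth the samples before
// showing anything to the user.
```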
I need to implement a draft application for a fantasy sports website. Each user will have 1m30s to choose a player for their team, and if that time elapses a player will be selected automatically. Our planned implementation will use Juggernaut to push the turn changes to each user participating in the draft. But I'm still not sure how to handle latency.
The main issue here is that if a user has higher latency than the others, he will receive the turn changes a little later, so his timer won't be synchronized. Say someone receives a turn change after choosing a player himself, while on his side he thinks he still has 2 seconds left: how can we handle that case? Is it better to try to measure each user's latency and adjust the client-side timer to minimize the issue? If so, how could we implement that?
This is a tricky issue, but there are some good solutions out there. Look into what time.gov does, and how it does it; essentially, as I understand it, they use Java to perform multiple repeated requests to the server, to attempt to get an idea of the latency involved in the communication, then they generate a measure of latency that they use to skew the returned time data. You could use the same process for your application, with even more accuracy; keeping track of what the latency is and how it varies over time lets you make some statistical inferences about how reliable your latency numbers are, etc. It can be a bit complex, but it can definitely allow you to smooth out your performance. My understanding is that this is what most MMOs do as well, to manage lag.
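The usual formulation of that skew estimate: if t0/t3 are the client's send/receive times and t1/t2 are the server's receive/send times (assuming your server echoes its timestamps back), then:

```swift
import Foundation

// Classic NTP-style clock-offset estimate from one request/response pair:
// offset ≈ ((t1 - t0) + (t2 - t3)) / 2
func clockOffset(t0: Date, t1: Date, t2: Date, t3: Date) -> TimeInterval {
    (t1.timeIntervalSince(t0) + t2.timeIntervalSince(t3)) / 2
}

// Repeat several times and keep the sample with the smallest round trip,
// (t3 - t0) - (t2 - t1): it suffered the least queuing delay, so its
// offset estimate is the most trustworthy. Apply the offset to the timer.
```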
For security reasons I have a feeling that the testing should be done server-side. Nonetheless, that would be rather taxing on the server, right? Given the gear and buffs a player is wearing, they will have a higher movement speed, so each time they move I would need to recalculate that constant and check whether the movement is legitimate (we're using TCP, so I don't need to worry about lost or unordered packets). I realize I could instead just save the last movement speed and only recalculate it if they've changed something affecting their speed, but even then that's another check.
Another idea I had is that the server randomly picks data that the client is sending it and verifies it and gives each client a trust rating. A low enough trust rating would mean every message from the client would be inspected and all of their actions would be logged in a more verbose manner. I would then know they're hacking by inspecting the logs and could ban/suspend them as well as undo any benefits they may have spread around through hacking.
Any advice is appreciated, thank you.
Edit: I just realized there's also the case where a hacker could send tiny movements (within the capability of their regular speed) in very rapid succession. Each individual movement alone would be legitimate, but the cumulative effect would be speed hacking. What are some ways around this?
The standard way to deal with this problem is to have the server calculate all movement. The only things the clients should send to the server are commands, e.g. "move left"; the server then calculates how fast the player moves etc. and finally sends the updated position back to the client.
If you leave any calculation at all on the client, the chances are that sooner or later someone will find a way to cheat.
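A minimal sketch of that command-based flow on the server (types and the per-tick movement model are illustrative):

```swift
// Clients send only intent; the server owns position and speed.
enum Command { case left, right, up, down }

struct Player {
    var x = 0.0, y = 0.0
    var speed = 5.0   // units per tick, derived server-side from gear/buffs
}

func apply(_ command: Command, to player: inout Player) {
    switch command {
    case .left:  player.x -= player.speed
    case .right: player.x += player.speed
    case .up:    player.y -= player.speed
    case .down:  player.y += player.speed
    }
    // The server then broadcasts the authoritative position to clients;
    // there is nothing speed-related left for the client to forge.
}
```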
[...] testing should be done server side. Nonetheless, that would be rather taxing on the server, right?
Nope. This is the way to do it. It's the only way to do it. All talk of checking trust or whatever is inherently flawed, one way or another.
If you're letting the player send positions (a sketch follows this list):
Check where someone claims they are.
Compare that to where they were a short while ago. Allow a tiny bit of deviation to account for network lag.
If they're moving too quickly, reposition them somewhere more reasonable. Small errors may be due to long periods of lag, so clients should use interpolation to smooth out these corrections.
If they're moving far too quickly, disconnect them. And check for bugs in your code.
Don't forget to handle legitimate traversals over long distances, e.g. teleports.
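A sketch of such a validator. Checking total distance over a sliding window, rather than each hop, also catches the rapid-tiny-moves trick from the question's edit; the numbers are illustrative:

```swift
import Foundation

// Server-side position validation with a sliding distance window.
struct MovementValidator {
    var lastPosition: (x: Double, y: Double)
    var windowStart = Date()
    var windowDistance = 0.0
    let maxSpeed = 5.0              // units/second, from gear/buffs
    let window: TimeInterval = 2.0  // judge totals, not individual hops
    let lagSlack = 1.25             // tolerate bursts caused by network lag

    // Returns false when the client should be repositioned (or, for
    // extreme violations, disconnected).
    mutating func accept(x: Double, y: Double, at time: Date) -> Bool {
        let dx = x - lastPosition.x, dy = y - lastPosition.y
        if time.timeIntervalSince(windowStart) > window {
            windowStart = time
            windowDistance = 0
        }
        windowDistance += (dx * dx + dy * dy).squareRoot()
        lastPosition = (x, y)
        return windowDistance <= maxSpeed * window * lagSlack
    }
}
```

Remember to bypass the check (or reset the window) for legitimate teleports.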
The way around this is that all action is done on the server. Never trust any information that comes from the client. If anybody actually plays your game, somebody will reverse-engineer the communication to the server and figure out how to take advantage of it.
You can't assign a random trust rating, because cautious cheaters will cheat only when they really need to. That gives them a considerable advantage with a low chance of being spotted cheating.
And, yes, this means you can't get by with a low-grade server, but there's really no other method of preventing client-side cheating.
If you are developing in a language that has access to Windows API function calls, I have found from my own studies in speed hacking that you can easily identify a speed hacker by calling two functions and comparing the results:
timeGetTime
and...
GetTickCount
Both functions return the number of milliseconds since the system started. However, timeGetTime is much more accurate than GetTickCount: timeGetTime is accurate to about ~1ms, whereas GetTickCount is accurate to around ~50ms.
Even though there is a small lag between these two functions, if you turn on a speed-hacking application (pick your poison), you should see a very large difference between the two result sets, sometimes even up to a couple of seconds. The difference is very noticeable.
Write a simple application that reports the GetTickCount and timeGetTime results and leave it running, first without the speed-hacking application active. Compare the results and display the difference - you should see a very small difference between the two. Then, with your application still running, turn on the speed-hacking application and you will see a very large difference in the two result sets.
The trick is figuring out what threshold constitutes suspicious activity.
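The technique ports outside Windows too. As a hedged analogue of the same two-timer comparison on Apple platforms (monotonic boot clock vs. wall clock; thresholds are up to you):

```swift
import Foundation

// Sample a monotonic clock against the wall clock and watch for the gap
// between their elapsed values to jump: a hack that accelerates one
// source but not the other shows up as a growing difference.
func monotonicSeconds() -> TimeInterval {
    var info = mach_timebase_info_data_t()
    mach_timebase_info(&info)
    return TimeInterval(mach_absolute_time())
        * TimeInterval(info.numer) / TimeInterval(info.denom) / 1e9
}

let monoBase = monotonicSeconds()
let wallBase = Date()

// Call periodically; a sudden gap of whole seconds is the "very large
// difference" described in the answer above.
func clockGap() -> TimeInterval {
    abs(Date().timeIntervalSince(wallBase)
        - (monotonicSeconds() - monoBase))
}
```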