Let's say we've got a server with a ticking clock (just an integer variable that holds the number of milliseconds since the epoch and is updated whenever the system time changes).
Each client should be able to synchronize its own clock (not the system clock! just an abstract timer, an integer value that increases with time) with the server's.
For example the client can send a "sync" request and the server will respond with its current clock value.
The problem is that we can't know exactly how much time the request and the response spend in transit. We do know how much time passed between sending the request and receiving the reply, so we can divide that difference by two and add it to the value received from the server, but that is not very accurate!
Is there a common technique for synchronizing a clock between a server and a client so that the difference between the two values is as small as possible afterwards?
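In code, the naive approach I'm describing looks roughly like this (just a sketch; `request_server_clock()` is a hypothetical helper that sends the "sync" request and returns the server's millisecond value):

```python
import time

def sync_clock(request_server_clock):
    """Estimate the server's current clock using a single round trip."""
    t_send = time.monotonic()
    server_ms = request_server_clock()   # hypothetical: performs the "sync" request
    t_recv = time.monotonic()

    round_trip_ms = (t_recv - t_send) * 1000.0
    # Assume the reply took half of the round trip to arrive; this assumption
    # is exactly the source of inaccuracy described above.
    return server_ms + round_trip_ms / 2.0
```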
I would recommend using NTP (Network Time Protocol) for this; it is the standard way to synchronize clocks across an entire network.
http://de.wikipedia.org/wiki/Network_Time_Protocol
Almost every operating system out there (I manage Linux, Solaris, BSD and Windows) has clients and servers for NTP. The protocol has built-in capabilities to take care of network latency.
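To illustrate how it deals with latency (this is only the core idea, not the full protocol): each NTP-style exchange records four timestamps and combines them, rather than just halving one round trip. A minimal sketch of that calculation:

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP-style calculation from one request/response exchange.

    t0: client time when the request was sent
    t1: server time when the request was received
    t2: server time when the reply was sent
    t3: client time when the reply was received
    """
    # Estimated difference between the server clock and the client clock.
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    # Round-trip network delay, excluding the server's processing time.
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

Real NTP repeats this over time, filters out outliers, and slews the clock gradually, which is why using an existing implementation beats rolling your own.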
What is the best way to measure the elapsed time between any two points, including across a device reboot? I thought clock_gettime_nsec_np(CLOCK_MONOTONIC) would work, but I recently realized that the clock restarts when the device reboots.
In my understanding, solutions based on the system clock would shift if the user changes the device time, which is an issue.
If you have a reasonable guarantee of network connectivity for your application, you could use NTP with a client such as instacart/TrueTime.swift to get the current time from an NTP server you control or trust at the start and end of the time span you wish to measure.
Keep in mind that NTP (at its core) is rather vulnerable to MITM attacks by motivated attackers (or to outright firewall blocking of its UDP port 123 by heavy-handed system administrators), so the acceptability of this solution will depend heavily on your threat model.
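To make the idea concrete, here is a rough platform-neutral sketch (Python with the ntplib package, purely for illustration of the approach; on iOS you would use the NTP client library mentioned above):

```python
import ntplib  # pip install ntplib

def network_time(server="pool.ntp.org"):
    """Return the current time (seconds since epoch) reported by an NTP server."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    return response.tx_time

start = network_time()
# ... the device may reboot or the user may change the system clock here ...
end = network_time()
elapsed_seconds = end - start  # unaffected by local clock changes or reboots
```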
Let's say you are writing a program that many clients use. One server will only be able to handle a certain number of connections. The more connections you need to handle, the more capacity you need, until you end up with a server farm made of many machines.
If you, for example, run an application where different clients can store data on your servers, how is it possible to synchronize the data across each device? Which hardware/software solutions exist? Or how is all the data stored?
I can suggest an idea for a hand-rolled approach using the file system only: you exchange files between the clients and the server, and the server program periodically (for example, every 5 minutes) broadcasts the list of all its files to all connected clients, so exchanges have to wait for the next broadcast. That interval suits a large volume of files; for small files, 30 seconds or 1 minute can be enough.
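A very rough sketch of that periodic broadcast (hypothetical port, directory name and interval; plain TCP sockets, no error handling):

```python
import os
import socket
import threading
import time

BROADCAST_INTERVAL = 300   # seconds; e.g. 5 minutes for large files, 30-60 s for small ones
SHARED_DIR = "shared"      # assumption: directory whose file list is announced
clients = []               # connected client sockets

def accept_clients(server_sock):
    while True:
        conn, _ = server_sock.accept()
        clients.append(conn)

def broadcast_file_list():
    while True:
        listing = "\n".join(sorted(os.listdir(SHARED_DIR))) + "\n"
        for conn in list(clients):
            try:
                conn.sendall(listing.encode())
            except OSError:
                clients.remove(conn)  # drop disconnected clients
        time.sleep(BROADCAST_INTERVAL)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen()
threading.Thread(target=accept_clients, args=(server,), daemon=True).start()
broadcast_file_list()
```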
There are many ways to do that...
I think you will need a single source of truth, let's say the server in this case.
Then you can have an incrementing version number (id) that identifies the latest changes.
So each device can poll just that version number to know whether it is up to date.
If not, the device can ask the server for the changes since the version that device has.
Each time a device makes a change, that change is stored on the server, and the version is incremented.
That can be one basic implementation. If you need real-time updates, you can add to that implementation some publish-subscribe channel using sockets, or a service like Channels from pusher.com.
So, every time a device makes a change on the server, you can send a notification on the channel with some information (the change) or the new version ID, and all the devices can either apply the change (if there is only one) or ask the server for all the changes if there are many (for example, if a device was disconnected from the internet for a while).
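A bare-bones sketch of that versioning scheme (in-memory and single-process here, just to show the flow; all names are made up):

```python
# Server side: single source of truth with an incrementing version number.
class SyncServer:
    def __init__(self):
        self.version = 0
        self.changes = []  # changes[i] was introduced by version i + 1

    def apply_change(self, change):
        self.changes.append(change)
        self.version += 1
        return self.version

    def changes_since(self, client_version):
        """Return everything the client is missing, plus the latest version."""
        return self.changes[client_version:], self.version


# Client side: poll the version number and fetch only what is missing.
class SyncClient:
    def __init__(self, server):
        self.server = server
        self.version = 0
        self.data = []

    def poll(self):
        if self.server.version > self.version:
            missing, latest = self.server.changes_since(self.version)
            self.data.extend(missing)
            self.version = latest


server = SyncServer()
client = SyncClient(server)
server.apply_change({"item": "a"})
server.apply_change({"item": "b"})
client.poll()
print(client.version, client.data)  # 2 [{'item': 'a'}, {'item': 'b'}]
```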
I am working on a project and I want to know the time a packet takes to travel from the client to the server. I am using a Raspberry Pi as the client and my laptop as the server, connected over a socket. I want to send an image from the client to the server and take timestamps at both ends to measure the time.
Sincerely
First, don't send an image to measure a single packet's time. The image might require multiple packets to be sent, and you'll end up measuring more than you wanted.
Second, using timestamps on both ends is not very reliable, as it depends on the time synchronization between the systems, which is hard to achieve and maintain.
Third, don't reinvent the wheel. If you want to measure communication lag, use ping. It's tried and tested, it's efficient, and it's already implemented for you, so it's faster and cheaper to use and you don't risk adding bugs of your own.
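If you still want to measure it yourself over your existing socket connection, the trick is to time a small round trip using only one clock, exactly as ping does. A rough sketch (hypothetical address and port, no error handling):

```python
import socket
import time

HOST, PORT = "192.168.1.10", 5000  # assumption: the laptop's address

# Server side (run on the laptop): echo whatever arrives.
def echo_server():
    with socket.socket() as srv:
        srv.bind(("0.0.0.0", PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

# Client side (run on the Raspberry Pi): time one small round trip on one clock.
def measure_rtt():
    with socket.socket() as sock:
        sock.connect((HOST, PORT))
        start = time.monotonic()
        sock.sendall(b"ping")
        sock.recv(64)
        rtt = time.monotonic() - start
    print(f"RTT {rtt * 1000:.2f} ms, one-way approx {rtt * 500:.2f} ms")
```

Because only the client's monotonic clock is used, no synchronization between the two machines is needed.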
I am using Axibase Time Series Database Community Edition, version 10379. I am trying to store data that comes from a force sensor, saving a value every 2 milliseconds. How can I configure the portal to accept this time resolution?
I attempted to send data at that rate using an Arduino board with a WiFi shield, but the TCP connection dropped after sending a small amount of data.
Time resolution in Axibase Time Series Database is 1 millisecond by default, so the problem is probably occurring for other reasons, such as:
Invalid timestamp
Missing end-of-line character at the end of the series command
Same timestamp for multiple commands with the same entity/metric/tags. For example, these commands are duplicates and one of them will be discarded (see the sketch after this list):
series ms:1445762625574 e:e-1 m:m-1=100
series ms:1445762625574 e:e-1 m:m-1=125
Overflow of the receiving queue in ATSD. This can occur if the ingestion rate exceeds the disk write speed for a long period of time. Open the ATSD portal in the GUI and check the top-right chart to see whether the rejected_count metric is greater than zero. This can be addressed by changing the default configuration settings.
Other reasons are described at https://axibase.com/docs/atsd/api/data/#errors
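Regarding the end-of-line and duplicate-timestamp points above, here is a minimal sketch of sending well-formed series commands over TCP (the host is a placeholder; the entity/metric names follow the example above, and port 8081 matches the netcat step below):

```python
import socket

ATSD_HOST, ATSD_PORT = "atsd.example.org", 8081  # placeholder host

def send_samples(samples):
    """samples: list of (epoch_ms, value) pairs from the force sensor."""
    with socket.socket() as sock:
        sock.connect((ATSD_HOST, ATSD_PORT))
        for epoch_ms, value in samples:
            # Each command ends with \n and carries its own millisecond timestamp,
            # so two readings 2 ms apart are never treated as duplicates.
            cmd = f"series ms:{epoch_ms} e:e-1 m:m-1={value}\n"
            sock.sendall(cmd.encode())

send_samples([(1445762625574, 100), (1445762625576, 125)])
```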
I would recommend starting netcat in server mode and recording the data from the Arduino board to a file, to see exactly what commands are being sent into ATSD:
Stop ATSD with ./atsd-tsd.sh stop
Launch netcat in server mode and record received data to command.log file:
netcat -lk 8081 > command.log
Restart the Arduino and send some data into ATSD (now netcat). Review the command.log file
Start ATSD with ./atsd-tsd.sh start
Disclosure: I work for Axibase.
I need to sync audio on networked devices with millisecond accuracy. I've hacked together something that works quite well but isn't perfectly reliable:
1) The server device sends an RPC with a timeSinceClick param.
2) The client device launches the same click, offset according to the time the RPC spent in transit.
3) System.Diagnostics.Stopwatch checks periodically on all connected devices to make sure playback hasn't deviated too much from absolute time, and corrects it if necessary (a rough sketch of this check follows below).
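For reference, step 3 boils down to something like this platform-neutral sketch (a monotonic clock stands in for System.Diagnostics.Stopwatch; the tolerance value is an arbitrary assumption):

```python
import time

DRIFT_TOLERANCE_MS = 5.0  # assumption: correct playback once it drifts past 5 ms

class DriftChecker:
    def __init__(self):
        # Monotonic clock: unaffected by the user changing the system time.
        self.started_at = time.monotonic()

    def correction_ms(self, reported_position_ms):
        """How far to nudge playback so it matches the elapsed reference time."""
        expected_ms = (time.monotonic() - self.started_at) * 1000.0
        drift = reported_position_ms - expected_ms
        return -drift if abs(drift) > DRIFT_TOLERANCE_MS else 0.0
```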
Are there any more elegant ways to do this? Also, my way of doing it requires manual syncing if non-iOS devices are added to the mix: latency divergences make it very hard to automate...
I'm all eyes!
Cheers,
Gregzo
It is difficult to synchronize multiple devices on the same machine with millisecond accuracy, so if you are able to do this across multiple machines I would say you are doing well. I'm not familiar enough with iOS to comment on the steps you describe, but I can tell you how I would approach this in a cross-platform way. Maybe your approach amounts to the same thing.
one machine (the "master") would send a UDP packet with to all other machines.
all other machines would reply as quickly as possible.
the time it takes to receive the reply, divided by two, would be (approximately) the time it takes to get a packet from one machine to another. (this would have to be validated. maybe it takes much longer to process and send a packet? probably not)
after repeating steps 1-3, ignoring any extreme values and averaging the remaining results, you know about how long it takes to get a message from one machine to another.
now "sync" UDP packets can be sent from the main machine to the "slave" machines. The sync packets will include delay information so that when the save machine receive the packets, they know they were sent x milliseconds ago. Several sync packets may need to be sent in case the network delays or drops some of them.