How to synchronize an event on more than one iOS device? - ios

I would like to synchronize (within .05 seconds) an event in the future on more than one iOS device, whether or not those devices have network access at that future time.
To do this, I believe I'll need to sync these devices to a common clock, and the best way would be to get a precise time from an NTP server (or servers). I've looked at iOS-NTP, but the description says that it determines time with 1-second accuracy, and that doesn't quite do it for me.
One alternative is to get the (accurate?) timestamp from a GPS clock, but I'm not sure that the devices will always have the ability to get a GPS signal.
Any other suggestions?

iOS doesn't give you low-level access to the GPS hardware, so you can't use that method.
If the devices are close to each other, you can establish a peer-to-peer wireless connection using GameKit. That should give you around 35 ms latency over Bluetooth, and the latency should be consistent, so you could measure it a few times to get the clocks in sync.
If they're not close together... then I would set up a server and use that as the "real" time. With several calls you should be able to measure the latency between the device and the server, then calculate the clock offset with reasonable accuracy.
If possible use UDP instead of TCP, that way network congestion will drop packets instead of delaying them.
You'll need to keep on re-calculating the offset though, since iOS devices change their clock frequently (I think it's part of the 3G/LTE network protocol? Jumping from one cell tower to another might update the clock).
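For the offset calculation, here is a minimal sketch in Swift of the NTP-style arithmetic, assuming you already have some server endpoint that returns its current clock (the types and function names are just illustrative):

    import Foundation

    // One round-trip sample: local send time, the server's reported time,
    // and local receive time, all in seconds.
    struct ClockSample {
        let sentAt: TimeInterval      // local clock when the request left
        let serverTime: TimeInterval  // timestamp reported by the server
        let receivedAt: TimeInterval  // local clock when the reply arrived
    }

    // NTP-style estimate: assume the one-way latency is half the round trip,
    // so the server's timestamp corresponds to the midpoint of send/receive.
    func offset(for sample: ClockSample) -> TimeInterval {
        let midpoint = (sample.sentAt + sample.receivedAt) / 2
        return sample.serverTime - midpoint
    }

    // Average several samples to smooth out jitter; keeping only the samples
    // with the shortest round trips tends to improve the estimate.
    func averageOffset(_ samples: [ClockSample]) -> TimeInterval {
        guard !samples.isEmpty else { return 0 }
        let sorted = samples.sorted { ($0.receivedAt - $0.sentAt) < ($1.receivedAt - $1.sentAt) }
        let best = sorted.prefix(max(1, sorted.count / 2))
        return best.map(offset(for:)).reduce(0, +) / Double(best.count)
    }

    // The shared notion of "now" is then the local clock plus the offset.
    func sharedNow(offset: TimeInterval) -> TimeInterval {
        return Date().timeIntervalSince1970 + offset
    }

The half-round-trip assumption only holds if the path is roughly symmetric, which is why sampling repeatedly and discarding the slowest round trips helps.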

Related

Location updates and sending the location to a server causing the iPhone to heat up and restart automatically

I am fetching the user's location using CLLocationManager and calling a web service whenever the location is updated in the background, but it causes the iPhone to heat up and the battery to drain. Does anyone have a solution for this?
Getting your position drains power; you can do a few things to reduce it:
Use significant location changes (good if you don't need a precise location at every moment).
Limit the accuracy (lowering it can avoid using the GPS, which is the real battery drainer).
I don't understand the heat; yes, GPS makes the device hotter, but I've never experienced a restart due to heat.
Are you sure you aren't also running some expensive computational task? You can check this by using the profiler, or the tools in later versions of Xcode.
You can also set the distance filter. This will keep acquiring the position (so it won't reduce the battery drain), but it will call the delegate callback only when the distance threshold is reached.
iOS 6 also introduced the concept of deferring location updates in the background, which is probably the best solution for managing the network traffic leaving your device as well.
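Putting those suggestions together, here is a rough Swift sketch of a less power-hungry CLLocationManager setup; the accuracy and distance values are only illustrative and should be tuned for your app:

    import CoreLocation

    final class LocationTracker: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self

            // Lower accuracy avoids keeping the GPS chip powered when
            // coarse positions are good enough.
            manager.desiredAccuracy = kCLLocationAccuracyHundredMeters

            // Only deliver updates after the device has moved this far,
            // which cuts down how often the delegate (and your web service) runs.
            manager.distanceFilter = 100 // metres, illustrative
        }

        func startCoarseTracking() {
            // Significant-change monitoring wakes the app on cell-tower-level
            // moves and is far cheaper than continuous updates.
            manager.startMonitoringSignificantLocationChanges()
        }

        func startPreciseTracking() {
            manager.startUpdatingLocation()
        }

        func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
            // Batch or throttle the upload here instead of hitting the
            // server on every single callback.
        }
    }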
In fact you only really have the choice between low location accuracy (around 1000 m) and high (3-6 m).
In the first case the GPS chip is disabled, in the second it is enabled.
If it is enabled and you need locations that precise, there is nothing you can do.
GPS needs power, and a full charge lasts for a bit more than 8 hours of full-precision locations (measured on my iPhone 4).
Warming up is no problem; however, I cannot remember my phone ever warming up because of GPS (I will check that soon). It certainly never warms up so much that it restarts.
So your case is a bit strange; it could also be a defect in your device.
Warming up can also be caused by communicating with the server very often.
You can check that yourself: just download a decent GPS application and let it record a track.
If it gets hot too, your device might have a problem. (Or you are living in an extremely hot environment and the sun is shining strongly on your phone.)
Test also by disabling your network code.

How to know the speed of the internet connection in iOS programmatically

I am developing an application in which everything depends on the internet. My requirement is that when the internet speed is low, the app should display an alert to the user: "Your internet speed is low, that's why this feature is not available to you."
I want to know whether there is any feature available in iOS from which we can tell that the internet connection is slow, or measure the internet speed. I have found 2-3 answers about this but have not found any feasible solution.
There is no magic call to know whether the internet connection is fast or slow. There's only one solution: transfer data and time how long it takes to transfer a certain amount of it.
The problem is that the next chunk of data can be much slower, much faster, or about the same.
So you really need some sort of threshold where if the app is unable to transfer at least X number of bytes in Y seconds, then you stop the transfer and alert the user.
In other words, there's no simple way to ask "Is the connection fast or slow?".
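As a sketch of that threshold approach in Swift (the test URL and the bytes-per-second threshold are placeholders you would tune for your own backend and feature):

    import Foundation

    // Rough throughput check: download a small known resource and time it.
    // `minimumBytesPerSecond` defines what "too slow" means for your feature.
    func checkConnectionSpeed(testURL: URL,
                              minimumBytesPerSecond: Double = 20_000,
                              completion: @escaping (Bool) -> Void) {
        let start = Date()
        let task = URLSession.shared.dataTask(with: testURL) { data, _, error in
            guard let data = data, error == nil else {
                completion(false) // treat failures as "too slow / unavailable"
                return
            }
            let elapsed = Date().timeIntervalSince(start)
            let bytesPerSecond = Double(data.count) / max(elapsed, 0.001)
            completion(bytesPerSecond >= minimumBytesPerSecond)
        }
        task.resume()
    }

You would call it with your own test URL and show the alert when the completion handler reports false; remember the next transfer may well be faster or slower again.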

Antenna usage when sending a batch of http requests

I'm trying to optimize battery usage when networking. If I hold all my HTTP requests in an array, for example, and then send them all at once (emptying the array in a for loop), will the antenna turn on once to perform the 10 requests, or will it turn on and off n times? (I'm using NSURLRequest.)
Is there a way to batch-send requests at once, or is this basically "batch" sending requests already?
The documentation says nothing about how an iDevice's hardware handles multiple NSURLRequests. The handling on one model or OS version may differ from another (e.g. iPhone 4 vs iPhone 5).
You will have to use Instruments and research it on your own using Energy Diagnostics. However, this is rather simple. Here is a short plan for how to do it:
Connect the device to your development system.
Launch Xcode or Instruments.
On the device, choose Settings > Developer and turn on power logging.
Disconnect the device and perform the desired tests.
Reconnect the device.
In Instruments, open the Energy Diagnostics template.
Choose File > Import Energy Diagnostics from Device.
Moreover, have a look at Analyzing CPU Usage in Your App.
The energy optimisation performed by the OS is not publicly known.
The exact handling for a particular interface depends on the interface: some interfaces have very low setup/teardown costs (e.g. Bluetooth LE), and others are quite cheap to run but take time to set up and tear down (e.g. 2G).
You generally have to take a course that gives the OS the best options possible, and then let it do what it can.
We can say a few things. It is unlikely that the connection is being powered up/down for individual packets, so the connection will be powered up when there is data to send, and kept up as long as you're trying to receive. The OS may be able to run at a lower power when it is just waiting, as it doesn't need to ACK packets, but it won't be able to power off completely.
Bottom line, if you send your requests sequentially, I believe that the OS is unlikely to cycle power in between requests, if you send them in parallel, it almost certainly won't.
Is this a worthwhile optimisation? Depends how much of it you're doing.
Possibly of interest: background downloads whereby the OS times your fetch when it knows it is going to do some other network activity anyway.
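For that background-download idea, one way to let the OS pick the moment is a discretionary background NSURLSession. A minimal Swift sketch (the session identifier and the request list are placeholders, and a real app would attach a delegate to observe completion):

    import Foundation

    // A background session with `isDiscretionary` set lets the system schedule
    // the transfer for a moment when the radio is already up (or the device is
    // on power / Wi-Fi).
    let config = URLSessionConfiguration.background(withIdentifier: "com.example.batchUploads")
    config.isDiscretionary = true
    let session = URLSession(configuration: config)

    // Background sessions only support upload/download tasks, not data tasks.
    // Kicking the queued requests off back to back keeps the radio up for one
    // contiguous burst instead of powering it up repeatedly.
    func send(_ requests: [URLRequest], using session: URLSession) {
        for request in requests {
            session.downloadTask(with: request).resume()
        }
    }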

iOS UDP Server Message Processing Latency Too High ~35-40ms

We have a critical need to lower the latency of our UDP listener on iOS.
We're implementing an alternative to RTP-MIDI that runs on iOS but relies on a simple UDP server to receive MIDI data. The problem we're having is that RTP-MIDI is able to receive and process messages around 20 ms faster than our simple UDP server on iOS.
We wrote 3 different code bases in order to eliminate the possibility that something else in the code was causing the unacceptable delays. In the end we concluded that there is a lag between the time when the iPad actually receives a packet and when that packet is actually presented to our application for reading.
We measured this with a scope. We put a pulse on one of the probes from the sending device every time it sent a Note-On command. We attached another probe to the audio output of the iPad. We triggered on the pulse and measured the amount of time it took to hear the audio. The resulting timing was a reliable average of 45 ms, with a minimum of 38 ms and a maximum of around 53 ms in rare situations.
We did the exact same test with RTP-MIDI (a far more verbose protocol) and it was 20 ms faster. The best hunch I have is that, being part of CoreMIDI, RTP-MIDI could possibly be getting higher priority than our app, but simply acknowledging this doesn't help us. We really need to figure out how to fix this. We want our app to be just as fast, if not faster, than RTP-MIDI, and I think this should be theoretically possible since our protocol will not be as messy. We've declared RTP-MIDI to be unacceptable for our application due to the poor design of its journal system.
The 3 code bases that were tested were:
Objective-C implementation derived from the PGMidi example which would forward data received on UDP verbatim via virtual midi ports to GarageBand etc.
Objective-C source base written by an experienced audio engine developer with a built-in low-latency sine wave generator for output.
Unity3D application with a Mono-based UDP listener and built-in sound-font synthesizer plugins.
All 3 implementations showed identical measurements on the scope test.
Any insights on how we can get our messages faster would be greatly appreciated.
NEWER INFORMATION in the search for answers:
I was digging around for answers, and I found this question which seems to suggest that iOS might respond more quickly if the communication were TCP instead of UDP. This would take some effort to test on our part because our embedded system lacks TCP capabilities, only UDP. I am curious as to whether maybe I could hold open a TCP connection for the sole purpose of keeping the Wifi responsive. Crazy Idea? I dunno. Has anyone tried this? I need this to be as real-time as possible.
Answering my own question here:
In order to keep the UDP latency down, it turns out, all I had to do was make sure the Wi-Fi doesn't go silent for more than 150 ms (or so). The exact timing requirements are unknown to me at this time; however, the initial tests I was running used packets 500 ms apart, and that was too long. When I increased the packet rate to one every 150 ms, the total lag time dropped to around 18 ms on average (vs. 45 ms), measured with the same technique I described in the original question, which is on par with our RTP-MIDI measurements.
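For anyone trying to reproduce this, here is a rough keep-alive sketch in Swift using the Network framework (a more modern API than the original tests would have used; the host and port are placeholders, and 0.15 s is simply the empirical interval from above):

    import Foundation
    import Network

    // Keep-alive sender: pushes a tiny datagram at a fixed interval so the
    // Wi-Fi radio never drops into its higher-latency idle state.
    final class UDPKeepAlive {
        private let connection: NWConnection
        private var timer: Timer?

        init(host: String, port: UInt16) {
            connection = NWConnection(host: NWEndpoint.Host(host),
                                      port: NWEndpoint.Port(rawValue: port)!,
                                      using: .udp)
            connection.start(queue: .global())
        }

        func start(interval: TimeInterval = 0.15) {
            timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
                // The payload content doesn't matter; it only has to keep traffic flowing.
                self?.connection.send(content: Data([0x00]),
                                      completion: .contentProcessed { _ in })
            }
        }

        func stop() {
            timer?.invalidate()
            connection.cancel()
        }
    }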

Could you use the internet to store data in the transmission space between countries?

Is it possible to bounce data back and forth between, let's say, a computer in the USA and a computer in Australia over the internet, and just keep sending these packets back and forth, using this bounced data as data storage?
As I understand it, it would take some time for the data to go from A to B, let's say 100 milliseconds, and therefore the data in transit could be considered data in storage. If both nodes had good, free bandwidth, could data be stored in this transmission space by bouncing the data back and forth in a loop?
Would there be any reasons why this would not work?
The idea comes from a different idea I had some time ago, where I thought you could store data in empty space by shooting laser pulses between two satellites a few light-minutes apart. In the light-minutes of space between them you could store data as the transmission itself.
Would there be any reasons why this would not work?
Lost packets. Although some protocols (like TCP) have means to recover from packet loss, they involve the sender re-sending lost packets as needed. That means each node must still keep a copy of the data available to send it again (or the protocol might fail), so you'd still be using local storage until the communication completes.
If you took any networking classes, you would know the End-to-End principle, which states
The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes
Hence, you cannot expect routers between your two hosts to keep the data for you. They have the freedom to discard it at any time (or they themselves may crash at any time with your data in their buffers).
For more, you can read this wiki link:
End-to-End principle
I think this should actually work, since in reality you would be storing that information in the various I/O buffers of the numerous routers, switches, and network cards. However, the amount of storable information would probably be too small to have practical use, and network administrators at all levels are unlikely to enjoy or support such a creative approach.
Storing information in a delay line is a known approach and has been used to build memory devices in the past. However, those methods relied on the delay of signal propagation over a physical medium. Since the internet mostly uses wires and electromagnetic waves that travel at the speed of light, not much information can be stored this way. Past memory devices mostly used sound waves.
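To put a rough number on how little fits "in flight", here is a back-of-envelope calculation using purely illustrative figures (100 Mbit/s of bandwidth and a 200 ms US-Australia round trip):

    // The data "stored" on a link at any instant is roughly
    // bandwidth x round-trip time (the bandwidth-delay product).
    let bandwidthBitsPerSecond = 100_000_000.0   // 100 Mbit/s, illustrative
    let roundTripSeconds = 0.2                   // ~200 ms, illustrative
    let bitsInFlight = bandwidthBitsPerSecond * roundTripSeconds
    let megabytesInFlight = bitsInFlight / 8 / 1_000_000
    // About 2.5 MB in flight, and both ends still need to buffer it
    // locally in case packets have to be re-sent.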
