iOS - syncing audio between devices with millisecond accuracy

I need to sync audio on networked devices with millisecond accuracy. I've hacked together something that works quite well, but isn't perfectly reliable:
1) The server device sends an RPC with a timeSinceClick param.
2) The client device launches the same click, offset according to the time the RPC spent in transit.
3) System.Diagnostics.StopWatch checks periodically on all connected devices to make sure playback hasn't deviated too much from absolute time, and corrects if necessary.
Are there any more elegant ways to do this? Also, my way of doing it requires manual syncing if non-iOS devices are added to the mix: latency divergences make it very hard to automate...
I'm all eyes!
Cheers,
Gregzo

It is difficult to synchronize multiple devices on the same machine with millisecond accuracy, so if you are able to do this across multiple machines I would say you are doing well. I'm not familiar enough with iOS to comment on the steps you describe, but I can tell you how I would approach this in a cross-platform way. Maybe your approach amounts to the same thing.
1. One machine (the "master") would send a UDP packet to all other machines.
2. All other machines would reply as quickly as possible.
3. The time it takes to receive the reply, divided by two, would be (approximately) the time it takes to get a packet from one machine to another. (This would have to be validated. Maybe it takes much longer to process and send a packet? Probably not.)
4. After repeating steps 1-3, ignoring any extreme values and averaging the remaining results, you know about how long it takes to get a message from one machine to another.
5. Now "sync" UDP packets can be sent from the master machine to the "slave" machines. The sync packets include delay information, so that when the slave machines receive a packet they know it was sent x milliseconds ago. Several sync packets may need to be sent in case the network delays or drops some of them (see the sketch below).

Related

I'm aware this is for programming only

I have come across a situation that isn't about programming, but I need it answered on a Stack Overflow site. My question is: when there is a wifi network in a household, do the other wifi users need to be disconnected from the network in order for one desired machine to work properly without being slowed down, or does it not matter how many machines are connected? Can anyone answer this please, I'm looking for reasonable advice.
Depending on the bandwidth you pay for, it's possible that multiple machines in use will cause each machine to have a slower internet connection.
If you think of the internet like a series of pipes, then the bandwidth you're paying for is the throughput of the pipe connected to your wifi. Let's say your pipe can push through 10 oz of water per second. If a YouTube video requests 2 oz of water per second and you have 3 devices watching YouTube videos, then you're using 6 oz of water per second and shutting off any one device won't affect the others. If you have 10 devices all trying to watch YouTube videos at the same time, then you're requesting 20 oz of water per second when the pipe can only provide 10, so at least some devices will be slower. (How that plays out depends on your actual router: perhaps each device gets 1 oz per second instead of 2, or perhaps 5 devices get 2 oz per second and 5 devices get nothing.)
I used water as an example because it's easier to visualize, but your internet connection will actually be measured in some multiple of bytes per second (kilobytes per second, megabytes per second, gigabytes per second, etc.). Also, most devices don't require a steady stream of data to work. It all depends on your specific setup: how much data each device is requesting and how much data throughput your internet service provider is giving you.

How to know the time a packet takes to travel from client to server?

I am working on a project and I want to know the time a packet takes to reach its destination between a client and a server. I am using a Raspberry Pi as the client and my laptop as the server. The connection between the server and client will be a socket connection. So, I want to send an image from the client to the server and take timestamps at both ends to measure the time.
Sincerely
First, don't send an image to measure a single packet time. The image might require multiple packets to be sent and you'll end up measuring more than you wanted.
Second, using timestamps on both ends is not very reliable, as it depends on the time synchronization between the systems, which is hard to achieve and maintain.
Third, don't reinvent the wheel. If you want to measure communication lag, use ping. It's tried and tested, it's efficient, and it's already implemented for you, so it's faster and cheaper to use and you don't risk adding bugs of your own.
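If you do end up wanting this in a script rather than reading ping's output by hand, here is a minimal sketch that shells out to the system ping command (the flags and summary-line format assume a Linux ping; the host address is a placeholder):

    import re
    import subprocess

    def avg_rtt_ms(host, count=5):
        """Return the average round-trip time to host in milliseconds, or None on failure."""
        # -c sets the probe count, -q limits output to the summary lines
        result = subprocess.run(["ping", "-c", str(count), "-q", host],
                                capture_output=True, text=True)
        if result.returncode != 0:
            return None
        # The summary line looks like: rtt min/avg/max/mdev = 0.321/0.480/0.765/0.112 ms
        match = re.search(r"= [\d.]+/([\d.]+)/", result.stdout)
        return float(match.group(1)) if match else None

    print(avg_rtt_ms("192.168.1.10"))   # e.g. the Raspberry Pi's address (placeholder)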

Real-time audio conversation on iOS

I am designing an iOS app for a customer who wants to allow real-time (minimum lag, 50 ms max) conversations between users (a sort of Teamspeak). The lag must be low because the audio can also be live music, played with instruments, so all the users need to stay synchronized. I need a server that will request audio from every client and send it to the others (and make them all hear the same sound at the same time).
HTTP is easy to manage/implement and easy to scale, but it performs poorly because an average HTTP request takes > 50 ms (on mid-level hardware), so I was thinking of TCP/UDP connections kept open between the clients and the server.
But I have some questions:
If I develop the server in Python (using TwistedMatrix, for example), how is its performance?
I can't develop the server in C++ because it is hard to manage (to keep scalable) and to develop.
Has anyone used Node.js (which is easy to scale) to manage TCP/UDP connections?
If I use HTTP, will it be fast enough with Keep-Alive? Usually the time required for an HTTP request is > 50 ms (because opening and closing the connection is expensive), and I want the whole procedure to take less than that.
The server will be running on a Linux machine.
And finally: which type of compression can you suggest? I thought Ogg Vorbis would be nice, but if there's anything better (and usable on iOS), I am open to changes.
Thank you,
Umar.
First off, you are not going to get sub-50 ms latency. Others have tried this. See, for example, http://ejamming.com/, a service that attempts to do what you are doing but has a musically noticeable delay over the line and is therefore, in the ears of many, completely unusable. They use special routing techniques to get the latency as low as possible, and last I heard their service doesn't work with some router configurations.
Secondly, the language you use on the server probably doesn't make much difference, as the delay from client to server will be worse than any delay caused by your service. But if I understand your service correctly, you are going to need a lot of servers (or server threads) just relaying audio data between clients or doing some sort of minimal mixing. This is a small amount of work per connection, but a lot of connections, so you need something that can handle that. I would lean towards something like Java, Scala, or maybe Go. I could be wrong, but I don't think this is a good use case for Node, which, as I understand it, does not do multithreading well at this time. Also, don't poo-poo C++: scalable services have been built in C++. You could also build the relay part of the service in C++ and the rest in whatever you like.
Third, when choosing a compression format, you'll have to choose one that can survive packet loss if you plan to use UDP, and I think UDP is the only way to go for this. I don't think Vorbis is up to this task, but I could be wrong. Off the top of my head, I'm not sure of anything that works on the iPhone and is UDP-friendly, but I'm sure there are lots of options. Speex is an example and is open source; not sure whether its latency and quality meet your needs.
Finally, to be blunt, I think there are some other things you should research a bit more. E.g., DNS is usually cached locally and not looked up on every HTTP call (though it may depend on the system/library; at least most systems cache DNS locally). Also, there is no such protocol as TCP/UDP. There is TCP/IP (sometimes just called TCP) and UDP/IP (sometimes just called UDP), and you seem to refer to the two as if they were one. The difference is very important for what you are doing: for example, HTTP runs on top of TCP, not UDP, and UDP is considered "unreliable" but has less overhead, so it's good for streaming.
Edit: speex
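To illustrate the "relaying audio data between clients" part mentioned above: whatever language you end up choosing, the relay loop itself is small. A rough Python sketch of the shape of it (the port, packet size, and bookkeeping are made up; a real service would also need to time out stale clients):

    import socket

    PORT = 6000
    clients = set()                            # addresses of everyone who has sent audio

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))

    while True:
        frame, addr = sock.recvfrom(2048)      # one compressed audio frame per datagram
        clients.add(addr)
        for peer in clients:
            if peer != addr:                   # forward the frame to every other client
                sock.sendto(frame, peer)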
As far as the server is concerned, the request itself is not the bottleneck. I guess you have sufficient time to set up the connection, as that happens only at the beginning of the session. Therefore the protocol is not of much relevance.
But consider that HTTP is a stateless protocol and not suitable for audio streaming. There are a couple of real-time streaming protocols you can choose from; all of them will work over TCP or UDP (e.g. using raw sockets), and there are plenty of implementations.
In your case, the bottleneck for latency is not the server but the network itself. The connection between an iOS device and a wireless access point (AP) eats up about 40 ms if the AP is not misconfigured and the connection is good. (Ping your iPhone.) In total, you'd have a minimum of 80 ms for the path iOS -> AP -> server -> AP -> iOS, and it is difficult to keep that latency stable. (The typical latency of AirPlay on my local network is about 300 ms.)
I think live music over iOS devices is not practicable today. Try Skype between two iOS devices and see how close you can get to 50 ms. I'd bet no one can do significantly better as far as latency is concerned.
Update: New research result!
I have to revise my claims regarding the latency of the iDevice's wifi connection. Apparently, when you first ping your device, the latency will be bad. But if I ping again no later than 200 ms after that, I see an average latency of 2-3 ms between the AP and the iDevice.
My interpretation is that if there is no communication between the AP and the iDevice for more than 200 ms, the network adapter of the iDevice goes into a less responsive sleep mode, probably to save battery power.
So it seems, live music is within reach again... :-)
Update 2
The ping interval required to keep the latency low apparently differs from device to device. The reported 200 ms is for a 3rd-generation iPad; for my iPhone 4 it's more like 50 ms.
While streaming audio you probably don't need to bother with this, as data is exchanged on a frequent basis anyway. In my own context, I have sparse communication between an iDevice and a server, but low latency is crucial, so a keep-alive is the way to go.
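The keep-alive itself is trivial; a sketch of the idea in Python (the server address and the 50 ms interval are placeholders, and on the iDevice you would do the equivalent with whatever socket API the app already uses):

    import socket
    import time

    SERVER = ("192.168.1.20", 7000)   # placeholder server address
    INTERVAL = 0.05                   # 50 ms, conservative enough for the iPhone 4 case above

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"keepalive", SERVER)   # any tiny packet keeps the wifi radio awake
        time.sleep(INTERVAL)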
Best, Peter

Setting the priority of packets that are being transmitted over the network

I am working on a project wherein I am implementing a MAC protocol. My first task is to implement priority-based scheduling of packets. To be more specific, I want to schedule the transmission of packets such that certain types of packets, which are more important than the others, are given higher priority than the rest.
I have been trying to establish this for quite some time now and have used various approaches to achieve it.
There are certain bits in the IP header that are allotted for setting the priority of the packets being transmitted. I have used socket programming to try to set them. I also tried raw sockets, but that caused some problems and did not work the way I wanted.
So I turned back to normal SOCK_DGRAM and SOCK_STREAM, but I am still facing some problems.
Can anyone help me with this?
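For reference, the priority bits mentioned here are the ToS/DSCP field of the IP header, and on Linux they can be set on an ordinary SOCK_DGRAM or SOCK_STREAM socket with setsockopt, without raw sockets. A minimal sketch in Python (the DSCP value and destination are just examples; whether routers and the MAC layer honour the marking depends on the network):

    import socket

    DSCP_EF = 46 << 2   # "Expedited Forwarding" occupies the top six bits of the ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

    # Packets sent from this socket now carry the EF marking.
    sock.sendto(b"high-priority payload", ("192.168.1.50", 5000))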

easy to write a script to test whether the network is ever down for the next 24 hours?

Is it easy to write a script to test whether the network is ever down during the next 24 or 48 hours? I can use ssh to connect to a shell and come back 48 hours later to see if it is still connected, which tells me whether the network was ever down, but can I do this easily programmatically?
The Internet (and your ethernet) is a packet-switched network, which makes the definition of 'down' difficult.
Some things are obvious; for example, if your ethernet card doesn't report a link, then it's down (unless you have redundant connections). But having a link doesn't mean it's up.
The basic, 100-mile view of how the Internet works is that your computer splits the data it wants to send into ~1500-byte segments called packets. It then, pretty much blindly, sends them on their way, however your routing table says to. The next machine repeats the process, and eventually, through many repetitions, each packet reaches the remote host.
So you may be tempted to define up as "the packet reached its destination". But what happens if the packet gets corrupted, e.g., due to faulty hardware or interference? The next router will just discard it. OK, that's fine; you may well want to consider that down. What if a router on the path is too busy, or the link the packet needs to be sent on is full? The packet will be dropped, and you probably don't want to count that as down.
Packets routinely get lost; higher-level protocols (e.g., TCP) deal with it and retransmit them. In fact, TCP uses the drop-when-the-link-is-full behavior to discover the link bandwidth.
You can monitor packet loss with a tool like ping, as the other answer states.
If your need is more administrative in nature, and using existing software is an option, you could try something like monit:
http://mmonit.com/monit/
Wikipedia has a list of similar software:
http://en.wikipedia.org/wiki/Comparison_of_network_monitoring_systems
You should also consider whether very short outages need to be detected. Obviously, a periodic reachability test cannot guarantee detecting outages shorter than the testing interval.
If you only care about whether there was an outage, not how many there were or how long they lasted, you should be able to automate your existing ssh technique using expect pretty easily.
http://expect.nist.gov/
Most platforms support a ping command that can be used to find out if a network path exists to an IP address somewhere "else". Where, exactly, to check depends on what you are really trying to answer.
If the goal is to discover the reliability of your first hop to your ISP's network, then regularly pinging a router or their DNS server might be sufficient.
If your concern is really the connection to a single node (your mention of leaving an ssh session open implies this) then just pinging that node is probably the best idea. The ping command will usually fail in a way that a script can test if the connection times out.
Regardless, it is probably a good idea to check at a rate no faster than once a minute, and slower than that is probably sufficient.
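A minimal sketch of such a script in Python (the target and interval are placeholders and the ping flags assume Linux), logging a timestamp whenever a probe fails:

    import subprocess
    import time
    from datetime import datetime

    TARGET = "192.168.1.1"      # e.g. your router, your ISP's DNS server, or the node you ssh to
    INTERVAL = 60               # check once a minute, as suggested above

    while True:
        # -c 1: a single probe; -W 5: give up after 5 seconds
        ok = subprocess.run(["ping", "-c", "1", "-W", "5", TARGET],
                            stdout=subprocess.DEVNULL).returncode == 0
        if not ok:
            print(f"{datetime.now().isoformat()} network appears down")
        time.sleep(INTERVAL)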

Resources