I need to connect around 25 client devices to one server device (all will be iOS, though Android support would be nice). I know there are several solutions to this problem, and I'd lean towards MultipeerConnectivity myself, but there is a limit of 8 simultaneous users. As discussed elsewhere, there are workarounds, and I am not opposed to opening multiple sessions, but it seems rather roundabout. I thought about using CocoaHTTPServer to make an API and advertise it over Bonjour, but I would like it to work on a restrictive network, preferably bypassing a public network altogether. GameKit seems out for me because I don't want it to be open to the public (this is not a game, and it's specific to a confined area).
An HTTP server on some obscure (random) port seems like a good option, being cross-platform and easily testable with multiple devices, but school networks can be very restrictive. Multipeer brings the device-count limit and other difficult-to-test variables, and GameKit is too public. Is there another route here, or should I narrow it down to CocoaHTTPServer, MultipeerConnectivity, or a combination of the two?
I decided to go with MultipeerConnectivity, using only one session and letting some client devices wait for an opening on the server. I didn't really need every device to be connected simultaneously, and figured the odds of Bluetooth being reliable with that many devices were slim anyway.
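For anyone curious, the server side of that ended up looking roughly like the sketch below (Swift; the service type and display name are placeholders): the server advertises, accepts an invitation only while the session has a free slot, and clients that get declined simply keep browsing and re-inviting until a spot opens.

```swift
import MultipeerConnectivity

// Sketch of the "one session, clients wait their turn" server side.
// "x-myapp" and the display name "server" are placeholders.
final class ServerConnectivity: NSObject, MCNearbyServiceAdvertiserDelegate, MCSessionDelegate {
    private let peerID = MCPeerID(displayName: "server")
    private lazy var session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil, serviceType: "x-myapp")

    func start() {
        session.delegate = self
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()
    }

    // Accept an invitation only while the session has a free slot
    // (kMCSessionMaximumNumberOfPeers is 8, including the local peer).
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        let hasRoom = session.connectedPeers.count + 1 < kMCSessionMaximumNumberOfPeers
        invitationHandler(hasRoom, hasRoom ? session : nil)
    }

    // A peer disconnecting frees a slot; waiting clients just keep inviting until accepted.
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```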
Related
I was thinking about a project the other day and an idea crossed my mind:
How would you set up a server, for instance at a restaurant, and have the restaurant chain's app find and connect directly to the locally running server without being on the same network? This should work at longer ranges, so plain Bluetooth isn't a fit here.
I've done some research, but I'm not very... good with protocols, and nothing I've found seems to really fit that goal. So I'm wondering what the best way to code this connection would be. Any ideas would be helpful :)
Here are the requirements:
A standard phone should be able to perform the operation.
It should have a fairly high range, think a big store or restaurant.
Server and device do not have to be in the same network.
The connection gets established by the phone on app start, to a server that can do whatever it needs to make this work.
I am writing an app that is supposed to work without a connection to a mobile carrier and without local WiFi. Each device will act as a transmitter, receiver and router.
My main challenge so far is that I cannot figure out exactly how MultipeerConnectivity works, as documentation on MC is really limited. Apple declined to reveal the technical specification of MC, claiming it's a proprietary network stack, so I have to rely on network sniffers and reverse engineering, which is not the quickest way to figure out how MC works.
Suppose I have 100 devices forming a mesh network in such a way that each device is within range of at least one other device and at most three other devices.
Is there any way to send a message from node A to node B when B is not within range of A, without broadcasting the message to all other nodes? I mean the message should be properly routed through the intermediate nodes.
Does MC include a routing layer too, or do I have to write it myself?
From what I can see, ad hoc delay-tolerant wireless networking is still a hot research subject.
These slides on ad hoc delay-tolerant wireless networks shed more light on the subject as it stood a few years ago, and so does this paper. Has Apple pushed it much further with MC?
I cannot really see any way to send a message between nodes not directly connected to each other without flooding.
Correct?
The MCSession Reference states that
Sessions currently support up to 8 peers, including the local peer.
Also, the overview you cited says
In the discovery phase, your app uses a browser object […] to browse for nearby peers[.]
Moreover, the documentation on managing peers manually suggests that all peers in a session must be connected to one another.
This suggests that the framework only covers communication between nearby devices, as in 'reachable via Bluetooth or WiFi'. Naturally, those devices do not need complex routing, since they communicate with each other directly; from a programmer's point of view, the benefit of the framework is simple multicasting between nearby devices.
As far as your question goes, that is about it - trivially, since all peers in an MCSession have links to each other, no routing is needed.
This does, however, allow you to construct a routing layer fairly easily.
Given your scenario, there will be multiple MCSessions, with every device part of at least one. All devices that are part of more than one MCSession become routers and interconnect those MCSessions with each other.
The rest of the task should be straightforward: define a namespace for addressing devices and implement a routing protocol of your choice.
The old days of the internet, with unstable dial-up connections, work in your favor here: the routing protocols devised back then are rather robust against link loss.
Here are two good starting points for picking the better fit:
Link state routing
Distance vector routing
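To make that concrete, here is a minimal sketch of the forwarding plumbing such a routing layer needs on top of MCSession. It uses peer display names as the address namespace and assumes a routing table that a protocol of your choice (link-state or distance-vector) keeps populated; all type and property names below are my own.

```swift
import MultipeerConnectivity

// Forwarding plumbing for a routing layer on top of MCSession. How the routing
// table gets populated (link-state or distance-vector exchange) is left out;
// this only shows the envelope and the next-hop forwarding step.
struct Envelope: Codable {
    let source: String        // display name of the originating peer
    let destination: String   // display name of the final recipient
    var hopsLeft: Int         // TTL so packets cannot circulate forever
    let payload: Data
}

final class Router {
    let localName: String
    // Sessions this device participates in; a device in more than one acts as a router.
    var sessions: [MCSession] = []
    // destination display name -> (next-hop peer, session that next hop is reachable in)
    var routingTable: [String: (nextHop: MCPeerID, session: MCSession)] = [:]

    init(localName: String) { self.localName = localName }

    // Called both for locally originated messages and for envelopes received
    // from a neighbor (decode, then hand to forward again).
    func forward(_ envelope: Envelope) throws {
        guard envelope.destination != localName else {
            deliverLocally(envelope.payload); return
        }
        guard envelope.hopsLeft > 0,
              let route = routingTable[envelope.destination] else { return }   // drop
        var next = envelope
        next.hopsLeft -= 1
        let data = try JSONEncoder().encode(next)
        try route.session.send(data, toPeers: [route.nextHop], with: .reliable)
    }

    func deliverLocally(_ payload: Data) {
        // hand the payload up to the application layer
    }
}
```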
Is it possible to access my Cisco router's details, like name, model, IP address, connection status, etc., from my iOS device?
I'm even ready to write a small iOS app to get all the router details.
Since I have just started learning iOS, I don't know if any library already exists for this task.
If my router stops working or hangs, I also want to be able to restart it from my phone.
If example code exists, it would be very useful.
Cisco already has Android and iOS apps for the functions above, but I don't want to use those apps; I want to write my own app with a limited feature set.
(http://www.addictivetips.com/mobile/cisco-connect-express-manage-router-settings-remotely-android-ios/)
Thanks,
Accessing network gear is best done by using SNMP. Cisco has extremely rich management/monitoring capabilities via SNMP and all of their MIBs are publicly available here.
Almost all Cisco gear supports the SNMPv2-SMI MIB (the 1.3.6.1.2.1 OID), so querying things like sysName, sysLocation, sysContact, sysDescr, sysUpTime should be very easy. This MIB even supports tables for listing all the interfaces and IP addresses, and has a whole lot of other things that might be of interest to you.
If you have SNMP write access on the device then you can even make config changes and perform management functions like rebooting or bringing an interface up/down.
There are a few SNMP libraries usable from Objective-C, and I think Net-SNMP is the most popular (it's not .NET, even though the name might suggest that).
If you are new to SNMP, I suggest starting simple by querying easy objects like 1.3.6.1.2.1.1.5 (sysName) and 1.3.6.1.2.1.1.6 (sysLocation) before trying to jump into tables like 1.3.6.1.2.1.2.2 (ifTable).
Remember, you don't have to stick with the standard MIBs; you can download all of the custom ones that are particular to your device, which will give you an incredible amount of flexibility.
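To give you a feel for how small a sysName query really is, here is a bare-bones SNMPv2c GET sketch in Swift that hand-encodes the BER packet and sends it over UDP via the Network framework. The target address 192.0.2.1 and the 'public' community string are placeholders, only short-form lengths are handled, and the response is just dumped as hex; a real app would let Net-SNMP build and parse the PDUs.

```swift
import Foundation
import Network

// Bare-bones SNMPv2c GET for sysName.0 (1.3.6.1.2.1.1.5.0).
// Placeholder target and community; short-form BER lengths only; raw response printed.
func ber(_ tag: UInt8, _ content: [UInt8]) -> [UInt8] {
    precondition(content.count < 128, "sketch handles short-form lengths only")
    return [tag, UInt8(content.count)] + content
}
func berInt(_ value: UInt8) -> [UInt8] { ber(0x02, [value]) }

let sysNameOID: [UInt8] = [0x2B, 6, 1, 2, 1, 1, 5, 0]              // 1.3 -> 0x2B, then .6.1.2.1.1.5.0
let varbind     = ber(0x30, ber(0x06, sysNameOID) + [0x05, 0x00])   // OID + NULL value
let varbindList = ber(0x30, varbind)
let getRequest  = ber(0xA0, berInt(1) + berInt(0) + berInt(0) + varbindList) // request-id, error-status, error-index
let message     = ber(0x30, berInt(1)                               // version: 1 = SNMPv2c
                          + ber(0x04, Array("public".utf8))         // community string
                          + getRequest)

let connection = NWConnection(host: "192.0.2.1", port: 161, using: .udp)
connection.stateUpdateHandler = { state in
    guard case .ready = state else { return }
    connection.send(content: Data(message), completion: .contentProcessed { _ in })
    connection.receiveMessage { data, _, _, _ in
        // The sysName value sits in the last varbind of the GetResponse as an OCTET STRING.
        if let data = data {
            print(data.map { String(format: "%02x", $0) }.joined(separator: " "))
        }
    }
}
connection.start(queue: .main)
RunLoop.main.run()   // keep the process alive when run as a command-line sketch
```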
You could use a screen-scraping technique to telnet or ssh to the Cisco device and parse the "show version" output. This will give you some of the information you need. For other details, like IP addresses, you can use "show ip interface brief", "show cdp neighbors", etc., as needed.
Keep security in mind: make sure that telnet/ssh credentials are adequately protected in your app's settings, and try to restrict your commands to those that do not need privileged access on the Cisco device.
Be aware that Cisco devices have a small pool of available VTYs, and every telnet/ssh access from your app will use up one VTY. So if you have, for example, 30 people wanting to access the device simultaneously from their apps, some of those sessions are not going to get access to the device.
If this is a concern, SNMP is a better and more scalable option, as suggested in the previous answer. Make sure that you (a) have a read-only community string configured on the device, and (b) use only that read-only community string from the app.
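If you do go the screen-scraping route, the plumbing on the iOS side can be as small as a raw TCP connection via the Network framework. The sketch below pushes placeholder credentials and "show version" down the line and collects whatever text comes back; it ignores telnet option negotiation entirely and waits fixed delays instead of watching for prompts, so treat it as a quick experiment, not production code (use an SSH library and the keychain for anything real).

```swift
import Foundation
import Network

// Quick-and-dirty telnet screen scrape: connect, push credentials and a command,
// and collect the text output. Host and credentials are placeholders; telnet
// option negotiation (IAC bytes) is ignored, which many IOS devices tolerate.
let connection = NWConnection(host: "192.0.2.1", port: 23, using: .tcp)
var capturedOutput = ""

func send(_ line: String) {
    connection.send(content: Data((line + "\r\n").utf8), completion: .contentProcessed { _ in })
}

func readLoop() {
    connection.receive(minimumIncompleteLength: 1, maximumLength: 4096) { data, _, isComplete, error in
        if let data = data {
            let text = String(decoding: data, as: UTF8.self)
            capturedOutput += text
            print(text, terminator: "")   // parse "uptime is", "Version", etc. from this
        }
        if isComplete || error != nil { return }
        readLoop()
    }
}

connection.stateUpdateHandler = { state in
    guard case .ready = state else { return }
    readLoop()
    // Crude: send one line per second instead of waiting for the actual prompts.
    let lines = ["admin", "secret", "terminal length 0", "show version"]
    for (index, line) in lines.enumerated() {
        DispatchQueue.main.asyncAfter(deadline: .now() + Double(index + 1)) { send(line) }
    }
}
connection.start(queue: .main)
RunLoop.main.run()
```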
I am designing an iOS app for a customer who wants to allow real-time conversations between users with minimal lag (50 ms at most), a sort of TeamSpeak. The lag must be low because the audio can also be live music played on instruments, so all the users need to stay in sync. I need a server that will request audio from every client and send it to the others (so that they all hear the same sound at the same time).
HTTP is easy to manage/implement and easy to scale, but it performs poorly here because an average HTTP request takes > 50 ms (on mid-level hardware), so I was thinking of TCP or UDP connections kept open between the clients and the server.
But I have some questions:
If I develop the server in Python (using Twisted, for example), how will it perform?
I can't develop the server in C++ because it is hard to develop and to keep scalable.
Has anyone used Node.js (which is easy to scale) to manage TCP/UDP connections?
If I use HTTP, will it be fast enough with Keep-Alive? Usually the time required for an HTTP request is > 50 ms (opening and closing the connection is costly), and I want the whole round trip to take less than that.
The server will be running on a Linux machine.
And finally: which type of compression would you suggest? I thought Ogg Vorbis would be nice, but if there's anything better (that can be used on iOS), I'm open to alternatives.
Thank you,
Umar.
First off, you are not going to get sub-50 ms latency. Others have tried this. See, for example, http://ejamming.com/, a service that attempts to do what you are doing, but has a musically noticeable delay over the line and is therefore, in the ears of many, completely unusable. They use special routing techniques to get the latency as low as possible, and last I heard their service doesn't work with some router configurations.
Secondly, what language you use on the server probably doesn't make much difference, as the delay from client to server will be worse than any delay caused by your service. But if I understand your service correctly, you are going to need a lot of servers (or server threads) just relaying audio data between clients or doing some sort of minimal mixing. This is a small amount of work per connection, but a lot of connections, so you need something that can handle that. I would lean towards something like Java, Scala, or maybe Go. I could be wrong, but I don't think this is a good use case for Node, which, as I understand it, does not do multithreading well at this time. Also, don't poo-poo C++: scalable services have been built in C++. You could also build the relay part of the service in C++ and the rest in whatever you like.
Third, when choosing a compression format, you'll have to pick one that can survive packet loss if you plan to use UDP, and I think UDP is the only way to go for this. I don't think Vorbis is up to the task, but I could be wrong. Off the top of my head, I'm not sure of anything that works on the iPhone and is UDP-friendly, but I'm sure there are plenty of options. Speex is one example and is open source; I'm not sure whether its latency and quality meet your needs.
Finally, to be blunt, I think there are some other things you should research a bit more. E.g., DNS is usually cached locally and not looked up on every HTTP call (though it may depend on the system/library; at least most systems cache DNS locally). Also, there is no such protocol as TCP/UDP. There is TCP/IP (sometimes just called TCP) and UDP/IP (sometimes just called UDP), and you seem to refer to the two as if they were one. The difference is very important for what you are doing. For example, HTTP runs on top of TCP, not UDP, and UDP is considered "unreliable" but has less overhead, so it's good for streaming.
Edit: speex
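To make the packet-loss point concrete, here is a rough sketch of the sending side: whatever codec you end up with, you stamp each encoded frame with a sequence number and a timestamp so the receiver can detect loss and reorder. The endpoint and the notion of a pre-encoded "frame" are placeholders; a real app would use RTP or at least something RTP-shaped.

```swift
import Foundation
import Network

// Sketch of sending encoded audio frames over UDP with a tiny header so the
// receiver can detect loss and reorder. Host/port are placeholders; encoding
// (Speex or otherwise) happens elsewhere and hands this class one frame at a time.
struct AudioPacket {
    var sequence: UInt32
    var timestampMs: UInt32   // wraps around, RTP-style
    var frame: Data           // one encoded audio frame

    func serialized() -> Data {
        var data = Data()
        withUnsafeBytes(of: sequence.bigEndian)    { data.append(contentsOf: $0) }
        withUnsafeBytes(of: timestampMs.bigEndian) { data.append(contentsOf: $0) }
        data.append(frame)
        return data
    }
}

final class AudioSender {
    private let connection = NWConnection(host: "192.0.2.10", port: 40_000, using: .udp)
    private var sequence: UInt32 = 0

    func start() { connection.start(queue: .global(qos: .userInteractive)) }

    func send(frame: Data) {
        let packet = AudioPacket(sequence: sequence,
                                 timestampMs: UInt32(truncatingIfNeeded: Int(Date().timeIntervalSince1970 * 1000)),
                                 frame: frame)
        sequence &+= 1
        // Fire and forget: with UDP a lost frame is simply skipped, never retransmitted,
        // which is what keeps latency low.
        connection.send(content: packet.serialized(), completion: .contentProcessed { _ in })
    }
}
```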
As far as the server is concerned, the request itself is not the bottleneck. I guess you have sufficient time to set up the connection, as that happens only at the beginning of the session, so the protocol isn't of much relevance there.
But consider that HTTP is a stateless protocol and not suitable for audio streaming. There are a couple of real-time streaming protocols you can choose from; all of them will work over TCP or UDP (e.g. using raw sockets), and there are plenty of implementations.
In your case, the latency bottleneck is not the server but the network itself. The connection between an iOS device and a wireless access point (AP) eats up about 40 ms if the AP is not misconfigured and the connection is good (ping your iPhone). In total, you'd have a minimum of 80 ms for the path iOS -> AP -> server -> AP -> iOS, and it is difficult to keep that latency stable. (Typical latency of AirPlay on my local network is about 300 ms.)
I think live music over iOS devices is not practicable today. Try Skype between two iOS devices and see how close you can get to 50 ms. I'd bet no one can do significantly better as far as latency is concerned.
Update: New research result!
I have to revise my claims regarding the latency of the iDevice's WiFi connection. Apparently, when you first ping your device, latency will be bad. But if I ping again no later than 200 ms after that, I see an average latency of 2-3 ms between the AP and the iDevice.
My interpretation is that if there is no communication between the AP and the iDevice for more than 200 ms, the iDevice's network adapter goes into a less responsive sleep mode, probably to save battery power.
So it seems, live music is within reach again... :-)
Update 2
The ping interval required to keep latency low apparently differs from device to device. The 200 ms reported above is for a 3rd-generation iPad; for my iPhone 4 it's more like 50 ms.
While streaming audio you probably don't need to bother with this, as data is exchanged more frequently anyway. In my own case, I have sparse communication between an iDevice and a server, but low latency is crucial, so a keep-alive is the way to go.
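In case it's useful, my keep-alive is essentially a tiny UDP datagram fired from a timer, roughly like the sketch below; the 50 ms interval and the endpoint are of course specific to my setup.

```swift
import Foundation
import Network

// Keep-alive sketch: send a 1-byte UDP datagram at a short interval so the
// device's WiFi radio never drops into its high-latency sleep mode.
// The interval (50 ms here) and the server endpoint are setup-specific.
final class KeepAlive {
    private let connection: NWConnection
    private var timer: Timer?

    init(host: NWEndpoint.Host, port: NWEndpoint.Port) {
        connection = NWConnection(host: host, port: port, using: .udp)
    }

    func start(interval: TimeInterval = 0.05) {
        connection.start(queue: .main)
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            // Payload content is irrelevant; the traffic itself keeps the radio awake.
            self?.connection.send(content: Data([0x00]), completion: .contentProcessed { _ in })
        }
    }

    func stop() {
        timer?.invalidate()
        connection.cancel()
    }
}

// Usage: let keepAlive = KeepAlive(host: "192.0.2.20", port: 9999); keepAlive.start()
```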
Best, Peter
I'm considering using two NSStreams for the up/down channels. However, it looks somewhat complex. If you know a simpler way (or have recommendations) to do this, please let me know!
-- edit --
This is a quick prototype of an internal/in-house remote controller. Low latency would be nice, but is not required.
The data is binary-formatted, but not heavy. Most messages are short control messages, with the occasional big chunk.
Cocoa / Cocoa Touch only; the platforms are limited to those.
The two peers are on a LAN, or at least on the same WiFi network, so I can assume the connection is basically fast.
Compatibility with unknown hosts, high efficiency/performance/reliability, and the like don't need to be considered. Simplicity is the most important thing right now.
Without knowing acceptable latency, amount of data, type of data, and/or network topology (same LAN? routing over WAN?), it is impossible to say.
For most purposes, HTTP provides an awfully big and versatile hammer. And HTTP is supported by just about everything.
You want simple? Nothing is as simple as HTTP, simply because it is a ubiquitous high-level protocol that everyone and their brother has implemented, from high-level APIs (like NSHTTP*/NSURL*) down to less-than-$1 embedded chips.
If the devices you want to control have an option for an HTTP server, go for that. It'll be dead simple, and debugging is much, much easier when working with a high-level protocol like HTTP.
At this point, it is hard to buy a device with a LAN/WLAN port that doesn't also have an HTTP server in it (off the top of my head, my home theater receiver, solar controller, bbq, printer, security camera, PS3, VOIP box, and U-verse router all have HTTP servers).
However, the requirements on your non-Cocoa Touch side may dictate otherwise.
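For a sense of how little code the Cocoa Touch side needs once the device exposes an HTTP server, here is a sketch that POSTs a short binary control message and polls for status with URLSession; the host, paths, and payload layout are made up for illustration.

```swift
import Foundation

// Sketch of a simple HTTP-based control channel from the Cocoa Touch side.
// Base URL, endpoints, and payload layout are illustrative placeholders.
// Note: plain http:// to a LAN device needs an App Transport Security exception in Info.plist.
let baseURL = URL(string: "http://192.168.1.50:8080")!

// Send a short binary control message.
func sendControlMessage(_ message: Data) {
    var request = URLRequest(url: baseURL.appendingPathComponent("control"))
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = message

    URLSession.shared.dataTask(with: request) { _, response, error in
        if let error = error {
            print("control message failed: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("device answered with status \(http.statusCode)")
        }
    }.resume()
}

// Poll the device for its current state; the occasional "big chunk" can come
// back the same way from another endpoint.
func fetchStatus(completion: @escaping (Data?) -> Void) {
    URLSession.shared.dataTask(with: baseURL.appendingPathComponent("status")) { data, _, _ in
        completion(data)
    }.resume()
}

// Usage:
// sendControlMessage(Data([0x01, 0x02]))          // e.g. "set output 1 to on"
// fetchStatus { data in print(data ?? Data()) }
```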