I'm developing a real-time game using Sprite Kit and Game Kit. The game features a multiplayer mode where 4 players can play with each other. I've been reading the Game Kit programming guide and came across the following passage:
Although the GKMatch object creates a full peer-to-peer connection
between all the participants, you can reduce the network traffic by
layering a ring or client-server networking architecture on top of it.
Figure 8-1 shows three possible network topologies for a four-player
game. On the left, a peer-to-peer game has 12 connections between the
various devices. However, you could layer a client-server architecture
on top of this by nominating one of the devices to act as the host. If
your game transmits to or from the host only, you can halve the number
of connections. A ring architecture allows devices to forward network
packets to the next device only, but further reduces the number of
connections. Each topology provides different performance
characteristics, so you will want to test different models to find one
that provides the performance your game requires.
So here is where I am confused. Currently in my game I have implemented the peer-to-peer topology, where each user sends their position to every other player in the game. This ends up totaling 12 messages being sent, because each player sends 3 messages.
However, according to the documentation, if I layer a client-server topology over my game, I can reduce the network traffic by reducing the number of connections. If I do this, though, each client sends its position to the host, and the host has to relay those positions to the remaining clients. So now one player (the host) has to do extra work because the clients no longer communicate with each other, and we still end up with 12 messages. The host sends 9 messages (3 messages carrying its own position, one per client, plus 6 messages relaying the other clients' positions), and each client sends 1 position message to the host. 9 + 1 + 1 + 1 = 12 messages. Which makes sense; all we did was distribute the message sending unevenly, so now one player has to work harder to make up for the work the other players are no longer doing.
Furthermore, relaying the client messages takes additional time because each client's position now needs to pass through the host.
So while there are now fewer connections, one player is sending far more messages (9) instead of the workload being spread evenly (each player sending 3). This seems like it would increase the chance of disconnects, because it will be easier for the host to drop from the match.
So can someone explain to me how network traffic gets reduced by layering a client-server topology? Does merely having fewer connections in the match reduce network traffic even though the overall number of messages is the same? Keep in mind, there is no dedicated server here; I (and the documentation) am talking about layering a client-server topology on top of the peer-to-peer match. Also, isn't the host at greater risk of disconnecting, given that it is sending three times as many messages as the other players? After all, GKMatch will disconnect a player after a brief period of packet loss. Or does simply having 12 connections carry a greater chance of disconnects because of the supposedly increased traffic?
I am sorry for the very short answer to a very descriptive and well-written question, but the answer is simple. The server (you used the term "host", which is a bit confusing) does not have to send 3 separate messages to each client. The server collects all the information and sends just one message containing all of it to each client.
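For illustration, in Swift the host might coalesce everyone's latest position into a single snapshot per network tick and send that one packet to every client. This is only a minimal sketch of the batching idea: the `PlayerState` type and `HostRelay` class are made up for the example (they are not part of GameKit), and a real game would use a compact binary encoding instead of JSON.

```swift
import GameKit

// Illustrative types, not part of GameKit.
struct PlayerState: Codable {
    let playerID: String
    let x: Float
    let y: Float
}

final class HostRelay {
    private let match: GKMatch
    // Latest known state for every player, keyed by player ID.
    private var latestStates: [String: PlayerState] = [:]

    init(match: GKMatch) {
        self.match = match
    }

    // Called when the host's own position changes or a client's
    // position packet arrives.
    func update(_ state: PlayerState) {
        latestStates[state.playerID] = state
    }

    // Instead of relaying each position as its own packet, the host sends
    // one snapshot containing every position, once per tick.
    func broadcastSnapshot() {
        let snapshot = Array(latestStates.values)
        guard let data = try? JSONEncoder().encode(snapshot) else { return }
        // Unreliable delivery is fine for frequently refreshed positions.
        try? match.sendData(toAllPlayers: data, with: .unreliable)
    }
}
```

With that batching in place, the per-tick packet count drops from 12 to 6: each of the three clients sends one packet to the host, and the host sends one combined packet back to each client, so the host's own load falls from 9 packets to 3.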
Related
I have come across a situation that isn't really about programming, but I need it answered on a Stack Overflow site. My question is: when there is a wifi network in a household, do the other wifi users need to disconnect from the network for one particular machine to work properly without being slowed down, or does it not matter how many machines are connected? Can anyone answer this, please? I'm looking for reasonable advice.
Depending on the bandwidth you pay for, it's possible that multiple machines in use will cause each machine to have a slower internet connection.
If you think of the internet like a series of pipes, then the bandwidth you're paying for is the throughput of the pipe connected to your wifi. Let's say your pipe can push through 10 oz of water per second. If a youtube video requests 2 oz of water per second, and you have 3 devices watching youtube videos, then you're using 6 oz of water per second, and shutting off any one device won't affect the others. If you have 10 devices all trying to watch youtube videos at the same time, then you're requesting 20 oz of water per second when the pipe can only provide 10, so at least some devices will be slower. (How that shortfall is divided up depends on your actual router: perhaps each device gets 1 oz per second instead of 2, or perhaps 5 devices get 2 oz per second and 5 devices get nothing.)
I used water as an example because it's easier to visualize, but your internet connection will actually be measured in some multiple of bytes per second (kilobytes per second, megabytes per second, gigabytes per second, etc.). Also, most devices don't require a steady stream of data to work. It all depends on your specific setup: how much data each device is requesting and how much data throughput your internet service provider is giving you.
I am writing an app that is supposed to work without a connection to a mobile carrier and without local WiFi. Each device will act as a transmitter, a receiver, and a router.
My main challenge so far is that I cannot figure out how exactly MultipeerConnectivity works, as the documentation on MC is really limited. Apple has declined to reveal the technical specification of MC, claiming it's a proprietary network stack, so I have to rely on network sniffers and reverse engineering, which is not the quickest way to figure out how MC works.
Suppose I have 100 devices forming a mesh network in such a way that each device is within range of at least one and at most three other devices.
Is there any way to send a message from node A to a node B that is not within range of node A, without broadcasting the message to all other nodes? I mean that the message should be properly routed through intermediate nodes.
Does MC include a routing layer too, or do I have to write it myself?
From what I can see, ad hoc delay-tolerant wireless networks are still a hot research subject.
These slides on ad hoc delay-tolerant wireless networks shed more light on the subject as it stood a few years ago, as does this paper. Has Apple progressed it much with MC?
I cannot really see any way to send a message between nodes not directly connected to each other without flooding.
Correct?
The MCSession Reference states that
Sessions currently support up to 8 peers, including the local peer.
Also, the overview you cited says
In the discovery phase, your app uses a browser object […] to browse for nearby peers[.]
Moreover, the documentation on managing peers manually suggests that all peers in a session must be connected to each other.
This suggests that the framework only covers communication between nearby devices, as in 'reachable via Bluetooth or WiFi'. Naturally, those devices do not need complex routing, as they communicate with each other directly; the benefit of the framework, from a programmer's point of view, is simple multicasting between nearby devices.
As far as your question goes, this is about it: trivially, since all peers in an MCSession have links to each other, there is no routing needed.
This does, however, allow you to construct a routing layer fairly easily.
Given your scenario, there will be multiple MCSessions, with each device being part of at least one. Every device that is part of more than one MCSession becomes a router and interconnects those sessions with each other.
The rest of the task should be straightforward: define a namespace for addressing devices and implement a routing protocol of your choice (see the sketch after the list below).
The old days of the internet, with unstable dial-up connections, work in your favor here: the routing protocols designed back then are quite robust against link loss.
Here are two good starting points for choosing the better fit:
Link state routing
Distance vector routing
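To make that concrete, here is a rough sketch of the forwarding half of the task on top of MultipeerConnectivity. Everything here (`RoutedMessage`, `MeshRouter`, the hop limit) is my own illustration and not part of the framework, and it simply floods between sessions; a real implementation would replace the flooding step with the routing table maintained by whichever of the above protocols you pick.

```swift
import MultipeerConnectivity

// Illustrative envelope for a payload addressed to a device that may not be
// directly connected. The names and fields are hypothetical.
struct RoutedMessage: Codable {
    let destination: String   // identifier from your own addressing namespace
    let payload: Data
    var hopCount: Int
}

final class MeshRouter: NSObject, MCSessionDelegate {
    private let localID: String
    // A device that belongs to more than one session acts as a router.
    var sessions: [MCSession] = []

    init(localID: String) {
        self.localID = localID
        super.init()
    }

    // Deliver locally, or forward into every other session. This floods
    // between sessions; a routing protocol would consult a table here instead.
    func handle(_ message: RoutedMessage, arrivedVia source: MCSession?) {
        if message.destination == localID {
            // Hand message.payload to the application layer.
            return
        }
        var forwarded = message
        forwarded.hopCount += 1
        guard forwarded.hopCount < 8,   // crude loop guard
              let data = try? JSONEncoder().encode(forwarded) else { return }
        for session in sessions where session !== source {
            try? session.send(data, toPeers: session.connectedPeers, with: .reliable)
        }
    }

    // MCSessionDelegate: decode incoming data and route it.
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        if let message = try? JSONDecoder().decode(RoutedMessage.self, from: data) {
            handle(message, arrivedVia: session)
        }
    }

    // The remaining delegate requirements are not needed for this sketch.
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```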
I am writing a third-person tank shooter game and have two questions about networking:
1/ I think P2P is fairer for players because the ping is roughly the same within each pair of players (same distance), while client/server will have lower ping for players near the server and higher ping for the others. Is that true?
2/ The game is just for fun, so I can trust the clients. If I can trust the clients, is there any technique for better lag compensation than the traditional model where you cannot trust them?
I am developing a program that uses the network just as much as any game, and we use a mix of the two.
Each of our client programs can also act as a server in a background process, and that arrangement is managed by our dedicated server.
So instead of P2P, the client with the strongest connection acts as the server for the other clients. Our dedicated server makes that decision and gives the connection info to all the other clients.
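As a rough sketch of what that selection step could look like (the names and the RTT-based criterion are my own assumptions, not a description of the poster's actual system):

```swift
import Foundation

// Hypothetical information the dedicated server gathers about each client.
struct ClientInfo {
    let id: String
    let address: String           // where the other clients would connect
    let measuredRTT: TimeInterval // round-trip time measured by the server
}

struct HostAssignment {
    let hostID: String
    let hostAddress: String
}

// "Strongest connection" is approximated here as lowest round-trip time.
// The dedicated server would send the result to every client so they know
// where to connect.
func nominateHost(from clients: [ClientInfo]) -> HostAssignment? {
    guard let best = clients.min(by: { $0.measuredRTT < $1.measuredRTT }) else {
        return nil
    }
    return HostAssignment(hostID: best.id, hostAddress: best.address)
}
```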
I'm working on an engineering project where I want a go-kart to maintain a direct connection with a base station. The base and go-kart can be separated by about half a mile (with lots of obstacles in between), which is too far for WiFi.
I'm thinking about using 3G/4G to directly connect the two. Does anyone have any resources or ideas that might help?
Or, alternatively, a better way to connect them? I'm just trying to send some sensor data (pretty low bandwidth) in real-time.
The biggest problem you face is finding radio spectrum that you are allowed to use. All 3G/4G spectrum is licensed to some firm, and they get really unhappy (e.g. they will have you hunted down and fined) when you transmit in their space.
I did find DASH7, which
is an open source wireless sensor networking standard … which operates in the 433 MHz unlicensed ISM band. DASH7 provides multi-year battery life, range of up to 2 km, indoor location with 1 meter accuracy, low latency for connecting with moving things, a very small open source protocol stack …
with a parts cost around US$ 10. This sounds like it satisfies your requirements and keeps the local constabulary from bothering you.
You could maybe use SMS between a modem on the kart and a mobile phone or modem at the base.
A mobile data connection isn't like a telephone call; it can't go directly between the two devices. You have to make a data connection from the kart to your operator's core network, identified by the APN. Then you can access IP addresses as with a normal internet connection, so the base computer would have to be reachable as a server on a public address (e.g. a web server).
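Assuming the kart-side controller can run Swift (purely for illustration), the data path described above might be sketched like this: the kart brings up a normal mobile data connection and periodically POSTs its sensor readings to a server that the base station exposes on a public address. The URL and the `SensorReading` type are placeholders.

```swift
import Foundation

// Hypothetical sensor sample sent from the kart to the base station.
struct SensorReading: Codable {
    let timestamp: Date
    let speed: Double          // m/s
    let batteryVoltage: Double
}

// POST one reading to a publicly reachable endpoint run by the base station.
// Over cellular data this goes out through the operator's APN and across the
// internet like any other HTTP request.
func send(_ reading: SensorReading, completion: @escaping (Error?) -> Void) {
    guard let url = URL(string: "https://base-station.example.com/readings"),
          let body = try? JSONEncoder().encode(reading) else {
        completion(URLError(.badURL))
        return
    }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { _, _, error in
        completion(error)
    }.resume()
}
```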
I am designing an iOS app for a customer who wants to allow real-time (minimum lag, 50 ms max) conversations between users (a sort of TeamSpeak). The lag must be low because the audio can also be live music played on instruments, so all the users need to stay in sync. I need a server that will collect audio from every client and send it to the others (so they all hear the same sound at the same time).
HTTP is easy to manage/implement and easy to scale, but it performs poorly here because an average HTTP request takes > 50 ms (on mid-level hardware), so I was thinking of TCP or UDP connections kept open between the clients and the server.
But I have some questions:
If I develop the server in Python (using TwistedMatrix, for example), how will it perform?
I can't develop the server in C++ because it is hard to develop and hard to scale.
Has anyone used Node.js (which is easy to scale) to manage TCP/UDP connections?
If I use HTTP, will it be fast enough with Keep-Alive? Because an HTTP request usually takes > 50 ms (opening and closing the connection is expensive), and I want the whole round trip to take less than that.
The server will be running on a Linux machine.
And finally: which type of compression would you suggest? I thought Ogg Vorbis would be nice, but if there's anything better (that can be used on iOS), I am open to changes.
Thank you,
Umar.
First off, you are not going to get sub-50 ms latency. Others have tried this. See, for example, http://ejamming.com/, a service that attempts to do what you are doing but has a musically noticeable delay over the line and is therefore, in the ears of many, completely unusable. They use special routing techniques to get the latency as low as possible, and last I heard their service doesn't work with some router configurations.
Secondly, what language you use on the server probably doesn't make much difference, as the delay from client to server will be worse than any delay caused by your service. But if I understand your service correctly, you are going to need a lot of servers (or server threads) just relaying audio data between clients or doing some sort of minimal mixing. That is a small amount of work per connection, but a lot of connections, so you need something that can handle that. I would lean towards something like Java, Scala, or maybe Go. I could be wrong, but I don't think this is a good use case for Node, which, as I understand it, does not do multithreading well at this time. Also, don't poo-poo C++; scalable services have been built in C++. You could also build the relay part of the service in C++ and the rest in whatever you like.
Third, when choosing a compression format, you'll have to choose one that can survive packet loss if you plan to use UDP, and I think UDP is the only way to go for this. I don't think Vorbis is up to this task, but I could be wrong. Off the top of my head, I'm not sure of anything that works on the iPhone and is UDP-friendly, but I'm sure there are lots of options. Speex is one example and is open source. I'm not sure if its latency and quality meet your needs.
Finally, to be blunt, I think there are some other things you should research a bit more. E.g. DNS is usually cached locally and not looked up on every HTTP call (though that may depend on the system/library; at least most systems cache DNS locally). Also, there is no such protocol as TCP/UDP. There is TCP/IP (sometimes just called TCP) and UDP/IP (sometimes just called UDP), and you seem to refer to the two as if they were one. The difference is very important for what you are doing. For example, HTTP runs on top of TCP, not UDP, and UDP is considered "unreliable" but has less overhead, so it's good for streaming.
Edit: speex
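On the transport side, a minimal sketch of pushing small encoded audio frames over UDP from iOS could look like the following, using Apple's Network framework. The host, port, and the assumption of short Speex/Opus-style frames are mine, not something stated in the answer above.

```swift
import Foundation
import Network

// Sketch only: each encoded audio frame goes out as its own datagram, so a
// lost packet costs one frame instead of stalling the stream as TCP would.
final class AudioUplink {
    private let connection: NWConnection

    init(host: String, port: UInt16) {
        connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: NWEndpoint.Port(rawValue: port)!,
                                  using: .udp)
        connection.start(queue: .global(qos: .userInteractive))
    }

    func send(frame: Data) {
        connection.send(content: frame, completion: .contentProcessed { error in
            if let error = error {
                print("frame dropped: \(error)")
            }
        })
    }
}

// Usage (placeholder address): feed it one encoded frame at a time.
// let uplink = AudioUplink(host: "203.0.113.10", port: 9000)
// uplink.send(frame: encodedFrame)
```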
As far as the server is concerned, the request itself is not a bottleneck. I guess you have sufficient time to set up the connection, as that happens only at the beginning of the session. Therefore the choice of protocol doesn't matter much there.
But consider that HTTP is a stateless protocol and not suitable for audio streaming. There are a couple of real-time streaming protocols you can choose from. All of them work over TCP or UDP (e.g. via raw sockets), and there are plenty of implementations.
In your case, the latency bottleneck is not the server but the network itself. The connection between an iOS device and a wireless access point (AP) eats up about 40 ms if the AP is not misconfigured and the connection is good. (Try pinging your iPhone.) In total, you'd have a minimum of 80 ms for the path iOS -> AP -> server -> AP -> iOS, and it is difficult to keep that latency stable. (The typical latency of AirPlay on my local network is about 300 ms.)
I think live music between iOS devices is not practicable today. Try Skype between two iOS devices and see how close you can get to 50 ms. I'd bet no one can do significantly better as far as latency is concerned.
Update: New research result!
I have to revise my claims regarding the latency of the iDevice's wifi connection. Apparently, when you first ping your device, latency will be bad. But if I ping again no later than 200 ms after that, I see an average latency of 2-3 ms between the AP and the iDevice.
My interpretation is that if there is no communication between the AP and the iDevice for more than 200 ms, the iDevice's network adapter goes into a less responsive sleep mode, probably to save battery power.
So it seems live music is within reach again... :-)
Update 2
The ping interval required to keep latency low apparently differs from device to device. The reported 200 ms is for a 3rd-generation iPad; for my iPhone 4 it's more like 50 ms.
While streaming audio you probably don't need to bother with this, as data is exchanged far more frequently. In my own context, I have sparse communication between an iDevice and a server, but low latency is crucial, so a keep-alive is the way to go.
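If you do need to keep a sparse connection responsive, a sketch of that keep-alive might look like this; the interval and the single-byte payload are assumptions, and as noted above the interval should be tuned per device.

```swift
import Foundation
import Network

// Sends a tiny datagram at a fixed interval so the device's Wi-Fi adapter
// never drops into its slower power-saving state between real messages.
final class KeepAlive {
    private let connection: NWConnection
    private var timer: Timer?

    init(host: String, port: UInt16, interval: TimeInterval = 0.05) {
        connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: NWEndpoint.Port(rawValue: port)!,
                                  using: .udp)
        connection.start(queue: .main)
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            // A single byte is enough; the content is irrelevant.
            self?.connection.send(content: Data([0]), completion: .idempotent)
        }
    }

    deinit {
        timer?.invalidate()
        connection.cancel()
    }
}
```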
Best, Peter