Building a Turn-based Multiplayer Game Server [closed] - iOS

I've started making a multiplayer game, but as I have no experience I have tried a few different approaches, and something doesn't feel right to me.
So I really need advice about which platforms/tools/languages/techniques would suit this best.
I must say that I don't believe in services such as Photon, AppWrap, Skiller, Gamooga and others. I don't believe they will scale well without getting pricey, or they are too big for my needs (not in size, but in how many features they have that I don't need).
First, I'll describe a simplified game session process.
Three players start a game session.
Each player receives a question and should answer within 10 seconds.
Once a player has answered, he should be able to see any answers already given by the other players, and he should see each new answer as soon as it is given. Basically, answers should be delivered to the other clients in realtime, but only after they have answered themselves (to avoid cheating). If time runs out, anyone who hasn't answered receives no score and the next question begins.
The winner of the round is decided and the game moves to the next question. The session finishes after N rounds.
Second, I'll explain a few requirements I've taken into consideration.
The game should run on iOS/Android/Web. This leaves me no choice but to base it on HTTP.
I looked at Google Cloud Endpoints, which I really enjoyed. It has iOS/Android/JS SDKs, Google Cloud Platform has BigQuery, and many other great things. But because I need realtime answer delivery, I don't know if it's suitable (there is the Channel API, but it has no client SDK for iOS, and people say it isn't that good).
Then I looked at Node.js with long polling (AFNetworking on the client side), but it is so hard to manage. I need to serve game state updates to clients (and I need to send deltas), which means tracking changes individually for each player. When a player connects I have to check whether there are already pending changes: if there are, send them right away; if not, listen for a 'change' event and send then. The resulting code looks awkward and hard to understand, and I don't know how to structure it properly. There is socket.io, which should make things better on the server side, but again there is no iOS SDK for the client.
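To illustrate the pattern, here is roughly what the long-polling handler ends up looking like (a simplified sketch; Express is assumed, and the route, event names and state shape are just placeholders):

```typescript
import express from "express";
import { EventEmitter } from "events";

// Pending per-player deltas, plus an emitter that fires when a delta is queued.
const pendingDeltas = new Map<string, object[]>();
const changes = new EventEmitter();

const app = express();

// Long-poll endpoint: respond immediately if deltas are queued,
// otherwise wait for the next 'change' event for this player (or time out).
app.get("/poll/:playerId", (req, res) => {
  const playerId = req.params.playerId;
  const queued = pendingDeltas.get(playerId) ?? [];

  if (queued.length > 0) {
    pendingDeltas.set(playerId, []);
    res.json(queued);
    return;
  }

  const onChange = () => {
    clearTimeout(timer);
    const deltas = pendingDeltas.get(playerId) ?? [];
    pendingDeltas.set(playerId, []);
    res.json(deltas);
  };
  const timer = setTimeout(() => {
    changes.removeListener(playerId, onChange);
    res.json([]); // nothing happened; the client simply re-polls
  }, 25_000);

  changes.once(playerId, onChange);
});

// Called by the game logic whenever a player's visible state changes.
function queueDelta(playerId: string, delta: object) {
  const queued = pendingDeltas.get(playerId) ?? [];
  queued.push(delta);
  pendingDeltas.set(playerId, queued);
  changes.emit(playerId);
}

app.listen(3000);
```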
I don't know where to go from here. Any help would be very appreciated.

Turn-based architectures are actually not too complicated, as lag is really not a huge concern and data is not being sent constantly.
I would create two web services, one for matchmaking and another to handle the actual game.
The matchmaking service would simply queue up players; when there were enough for a match, it would pick a group of players, assign them a sessionID, and pass them to the game service.
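A minimal sketch of that matchmaking step (the names and the createGameSession call are illustrative assumptions, not a specific API):

```typescript
import { randomUUID } from "crypto";

const PLAYERS_PER_MATCH = 3;
const waitingPlayers: string[] = [];

// Hand-off to the game service once a match is formed (illustrative stub).
declare function createGameSession(sessionId: string, playerIds: string[]): void;

function enqueuePlayer(playerId: string) {
  waitingPlayers.push(playerId);

  // As soon as enough players are waiting, form a match and hand it off.
  if (waitingPlayers.length >= PLAYERS_PER_MATCH) {
    const matched = waitingPlayers.splice(0, PLAYERS_PER_MATCH);
    const sessionId = randomUUID();
    createGameSession(sessionId, matched);
  }
}
```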
For the game service, it is important to differentiate what can be handled on the client and the service.
The game service would store all game information for each sessionID, including the clients. This would allow a single service to manage hundreds of games at once with ease. When a player answered a question, the client would send the answer in a request to the server along with the sessionID. The server would iterate over the clients in the session and send the information to them.
The client could handle hiding information (such as the other players' answers) until the user has answered. (You could even encrypt that information if you were concerned about hacking.)
The server would also track the timer for the session; when the timer expired, it would send a response to all the clients and ignore any later answers. A round integer could be stored in the session and included alongside the sessionID in every message, so that answers to past questions can be told apart. You could have a timer on the client for prediction, but the server needs to be the authority over the timer to avoid cheating.
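A rough sketch of the kind of per-session state and answer handling described above (the shapes and names are illustrative assumptions, not a particular framework):

```typescript
interface GameSession {
  id: string;
  round: number;
  roundOpen: boolean;                           // false once the 10s timer fires
  clients: Map<string, (msg: object) => void>;  // playerId -> send callback
  answers: Map<string, string>;                 // playerId -> answer this round
}

const sessions = new Map<string, GameSession>();

function startRound(session: GameSession) {
  session.round += 1;
  session.roundOpen = true;
  session.answers.clear();

  // The server is the authority over the timer: after 10s, close the round
  // and tell every client, regardless of what the clients' own timers say.
  setTimeout(() => {
    session.roundOpen = false;
    broadcast(session, { type: "roundOver", round: session.round });
  }, 10_000);
}

function submitAnswer(sessionId: string, playerId: string, round: number, answer: string) {
  const session = sessions.get(sessionId);
  if (!session) return;

  // Ignore answers that arrive late or refer to a previous round.
  if (!session.roundOpen || round !== session.round) return;

  session.answers.set(playerId, answer);
  broadcast(session, { type: "answer", round, playerId, answer });
}

function broadcast(session: GameSession, msg: object) {
  for (const send of session.clients.values()) send(msg);
}
```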

Use secure SSL/HTTPS, along with your own auth tokens, to keep the cheaters out.
The client would need to keep track of the time span for each player, not the actual time. The individual time spans are sent to the server after the round ends on each client.
Think of it like this: there are 3 clients, all polling a server for the start of the round. Because the three could have varying network speeds, you don't know who will actually start first. So when each client finally receives the green light, the timer starts on that device, at the moment the green light is received on that device. You wait until all 3 report back to the server with their time spans to determine who won the round.
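On the client, that amounts to timing the interval between receiving the green light and submitting the answer, and reporting only that elapsed span. A minimal sketch (the endpoint and payload are assumptions):

```typescript
let roundStartedAt: number | null = null;

// Called when this client receives the "round started" green light.
function onRoundStart() {
  roundStartedAt = Date.now();
}

// Called when the user submits an answer; report the elapsed span, not wall-clock time.
async function submitAnswer(sessionId: string, playerId: string, answer: string) {
  if (roundStartedAt === null) return;
  const elapsedMs = Date.now() - roundStartedAt;

  await fetch("https://example.com/api/answer", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, playerId, answer, elapsedMs }),
  });
}
```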
There are some topics of concern outside the game logic itself. Here are some examples.
User Identity and Authorization. (Game Center)
Game Data Persistence and Storage. (A cloud database like AWS DynamoDB)
Game Match Queuing. (AWS SQS) Don't attempt this with a database using pessimistic concurrency; a minimal SQS sketch follows this list.
Notifications to sleeping clients that match players are ready. (AWS SNS to APNS to the endpoint, i.e. this mobile device)
Polling or Notification for Next Move. (AWS SQS or SNS) I wouldn't poll the database.
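For the match-queuing piece, here is a minimal sketch of pushing and pulling waiting players through SQS with the AWS SDK for JavaScript v3 (the queue URL is a placeholder, error handling is omitted, and a real matchmaker would accumulate players across receives):

```typescript
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/match-queue"; // placeholder

// A client that wants a match enqueues itself.
async function enqueueForMatch(playerId: string) {
  await sqs.send(new SendMessageCommand({ QueueUrl: QUEUE_URL, MessageBody: playerId }));
}

// The matchmaker long-polls the queue; SQS may return fewer than 3 messages,
// so a real matchmaker would keep pulling until it has a full group.
async function pullWaitingPlayers(): Promise<string[]> {
  const res = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl: QUEUE_URL, MaxNumberOfMessages: 3, WaitTimeSeconds: 20 })
  );
  const messages = res.Messages ?? [];
  for (const m of messages) {
    await sqs.send(new DeleteMessageCommand({ QueueUrl: QUEUE_URL, ReceiptHandle: m.ReceiptHandle! }));
  }
  return messages.map((m) => m.Body!);
}
```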
Those services are just example recommendations. I don't work for Amazon; they are simply the easiest to get up and running, but there may be better services out there.
Basically, what I am getting at is that you are going to want more than a traditional MySQL database on some basic hosting site. Some of these cloud services have become very affordable compared to building all the infrastructure yourself on dedicated servers.
They are also exponentially more scalable.
You could do everything listed above for under $15 a month using cloud services to start out. The best thing is that if your idea takes off, you simply bump up the thresholds with the flick of a switch from an admin portal.
That would be a good problem to have.

Related

How to set a permanent link between 2 iPads [closed]

I need to build a simple interface between 2 instances of the same app running on 2 different iPads, which can communicate between themselves.
The idea is to create a permanent link between them (by exchanging some kind of ID) that could be kept (presumably by storing the ID) even after one or both iPads reboot, without the need for user intervention.
For the sake of context, that interface could be used, for instance, in a shared grocery list app, or in a 1-to-1 turn-based game.
The apps would not need to be nearby, nor would both need to be active when data is sent (the receiver could be turned off when data is sent and receive it later).
I imagine that, if this is possible, it would need to be done using GameKit. Can this be done? If so, how?
Thank you
There are so many ways to do this. But in general:
Database: you'll want a server-side database to store the common data. The most common options are i) host your own database server and create REST API endpoints to access the datastore, or ii) use one of the many Platform as a Service (PaaS) providers out there (Parse, StackMob, Azure, etc.). Generally, PaaS gives you a cheaper, faster way to get up and running; you'd probably only want your own server if the app were fairly complex. You could always start with PaaS and transition to your own backend later if needed.
Synchronization: to communicate between the devices your options are i) client-side polling (i.e. check the database for updates every n secs/mins), or ii) push notifications from the server when a record is inserted/updated. For push, you'll want to avoid relying on APNS (Apple Push Notification Service), as message delivery is not guaranteed (and users can decline to receive push notifications); either create your own socket connection or use a push service like Pusher or PubNub, which provide reliable message delivery from server to client. Implement APNS only for when the app is closed (to notify the user of new activity). When the app is open, use one of the more reliable methods listed above.
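The polling option (i) is the simplest to sketch; something along these lines, where the endpoint and the since parameter are illustrative assumptions:

```typescript
// Ask the server only for records changed since the last successful poll.
let lastSync = 0; // unix timestamp (ms) of the newest change we've seen

async function pollForUpdates() {
  const res = await fetch(`https://example.com/api/items?since=${lastSync}`);
  const updates: { id: string; updatedAt: number }[] = await res.json();

  for (const item of updates) {
    lastSync = Math.max(lastSync, item.updatedAt);
    // merge the item into the local store here
  }
}

// Check for updates every 30 seconds while the app is open.
setInterval(pollForUpdates, 30_000);
```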
That's the general methodology.
EDIT: To be clear, there is no reliable way to do this without using a) a server to store the messages/state, or b) a third-party service like Pusher or PubNub to reliably deliver the messages between devices whether or not the other device is active (and then you are really just using their server instead of your own). You could skip having your own server/database and simply send messages back and forth through a reliable delivery service, with each device maintaining state locally and synchronizing. But note that APNS is not a reliable message delivery mechanism for maintaining synchronization like this.

How an orchestration engine works [closed]

I have several questions on this topic. For example, I have found a lot of papers like
"Towards Dynamic Orchestration of Semantic Web Services"
"Decentralized Orchestration of Composite Web Services"
and so on... but in practice, I have only found orchestration tied to BizTalk or ESBs (I mean products from big software vendors).
Is it possible to develop an orchestration language yourself?
What is the best way to develop an orchestration engine?
Perhaps the best sources of information on the purpose and application of orchestration are the very papers you quoted.
Background
We surf the web in Firefox and type documents in Microsoft Office. These are centralized applications. This type of software sits and works in one place: it works on your computer, and it works on my computer.
You go to a supermarket, pick up an item and check out at one of the many cash counters. Each clerk at each desk has his own barcode scanner and his own swipe-card reader. Each of the cash registers on these counters is linked to one server in a back room somewhere. In this setup, the billing software is distributed across the clerks' counters, but the whole application is still centralized: the server manages the stock and records the sales. It is the center of the centralization.
Then you write an email and send it. Say you send an email from your Gmail account to my Hotmail account. There is Gmail's server, and there is Hotmail's server. There are two centers, instead of just one. Now things are no longer centralized - we have a distributed system. Here, failure of one center does not cripple the whole system. If Hotmail goes down, Gmail still survives.
Rather than purchasing from your local supermarket, purchase something from an online store, say eBay or Amazon. In this example, there is eBay's server, and there are eBay's suppliers. The suppliers manage their own inventory on their own servers, not on eBay's server. There is also the courier company which brings the package to the buyer, and the buyers have their own servers as well. The online payment the buyer made went through MasterCard, yet another separate server. Now we are talking about a really big distributed system.
Purpose
Now that you are making an online purchase, a hell of a lot of things are about to happen, involving more than just a bunch of servers. There has to be a master puppeteer who synchronizes activities among these servers. An account has to be debited. An email has to be sent. A warehouse has to be notified. A courier needs to be arranged. Who controls this intricate dance? That is your orchestrator.
Application
Most of the time there are many different and independent servers, each owned by different entities. Yet when all these entities need to work together to create a business flow, a "user interaction session", we need orchestration.
Orchestration of the activities among a set of servers is achieved through a master-puppeteer server. In reality, the orchestrator is itself a set of servers, so one set of servers directs another set of servers. The second set of servers is where the actual work is done: emails are sent, images are compressed, addresses are sorted, and so on. The first set of servers (the orchestrator) makes sure things happen in the order they need to happen.
Implementation
One answer: queues. The one activity that started this whole story was an attempt to make an online purchase. From there, the clicks you made and the commands you sent were all queued up in these orchestration servers. Commands like purchase-this-item, make-a-payment and then payment-received are all queued and processed one after another.
The orchestration system accepts these commands on one thread, and on a different thread it asynchronously dispatches them to the respective worker servers. So the purchase-this-item command is dispatched to eBay's server, while the make-a-payment command is dispatched to MasterCard's server.
The worker servers might produce further commands. The MasterCard server, after validating your card number, might decide to send you an SMS, so it adds a send-sms command to the queue. That command is dispatched to, say, a Vodafone server.
This queuing and dispatching logic is called the "orchestration engine". From there, things get complicated. The Vodafone server might be offline. MasterCard might reject the card. The warehouse server might return an out-of-stock response, which then needs to be routed back to eBay's server, which could re-queue the purchase command to a different warehouse. There are server crashes, disk failures, power outages, and so on.
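A toy sketch of that queuing-and-dispatching core, just to make the shape concrete (the command names mirror the story above; the handlers are stand-ins for calls to the worker servers):

```typescript
type Command = { type: string; payload: Record<string, unknown> };

const queue: Command[] = [];

function enqueue(cmd: Command) {
  queue.push(cmd);
}

// Each command type is routed to a handler that talks to the relevant worker
// server; handlers may enqueue follow-up commands of their own.
const handlers: Record<string, (payload: Record<string, unknown>) => Promise<void>> = {
  "purchase-this-item": async (p) => {
    // call the store's server, then ask for payment
    enqueue({ type: "make-a-payment", payload: p });
  },
  "make-a-payment": async (p) => {
    // call the payment provider's server; on success, confirm and notify
    enqueue({ type: "payment-received", payload: p });
    enqueue({ type: "send-sms", payload: p });
  },
  "payment-received": async () => { /* notify the warehouse, arrange the courier */ },
  "send-sms": async () => { /* call the SMS gateway */ },
};

// The dispatch loop: take commands off the queue one after another and
// dispatch them. Failed commands are put back to be retried.
async function dispatchLoop() {
  while (true) {
    const cmd = queue.shift();
    if (!cmd) { await new Promise((r) => setTimeout(r, 100)); continue; }
    try {
      await handlers[cmd.type]?.(cmd.payload);
    } catch {
      enqueue(cmd); // retry later; a real engine would track attempts
    }
  }
}

dispatchLoop();
enqueue({ type: "purchase-this-item", payload: { item: "headphones" } });
```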
Finally
Orchestration is about making sure that many diverse components, distributed geographically and acting at different points in time, some in parallel, some faulty, some slow, some malicious, some even illegal, all work together towards getting you the damn headphones that you purchased over the internet.

Large number of WebSocket connections

I am writing an application that keeps track of content pushed around between users working on a certain task. I am thinking of using WebSockets to send down new content as it becomes available to all users who are currently using the app for that given task.
I am writing this on Rails and the client-side app is on iOS (probably going to be on Android too). I'm afraid that this WebSocket solution might not scale well. I am after some advice and things to consider while deciding between WebSockets and some kind of polling solution.
Would Ruby on Rails servers (hosted on something like Heroku) support a large number of WebSockets open at the same time? Let's say a million connections, for argument's sake. Is there any material anyone can point me to on this?
Would it cost a lot more on server hosting if I architect it this way?
Is it even possible to maintain millions of WebSockets simultaneously? I feel like this may not be the best design decision.
This is my first try at a proper Rails API. Any advice is greatly appreciated. Thx.
A million connections over WebSockets using Ruby: I can't see that being realistic unless you use clustering to spread the connections between different instances that handle all the data processing.
The problem here is serializing and deserializing data.
You also have to research how often you will need to push data from the server to the client, and whether it's worth holding a connection open the whole time rather than doing periodic checks using AJAX, because holding a connection you then don't use is a waste of resources. WebSockets are built on top of the TCP layer, and connections are not "cheap"; repeatedly going through the OS to ask whether data is available is not a simple process, and with millions of connections it is close to impossible without the most advanced technology out there.
I've heard that Erlang is able to handle millions of connections, but I don't have the details. Also, the connection is one thing; processing data and interaction between connections is another. You'll want to check that too, because if you have heavy processing algorithms you will definitely need to look into horizontal scaling through clustering solutions.
If you are implementing chat, use websockets.
If you are implementing 1-way messages in realtime, use server-sent events.
If you are implementing 1-way messages sent every few hours or so, use APNS.
The saying goes: phone in hand, use websockets / server-sent events.
Phone in pocket, use APNS.
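For the 1-way realtime case, server-sent events need very little server code. A minimal sketch using Node's built-in http module (the endpoint and message source are illustrative):

```typescript
import http from "http";

const clients = new Set<http.ServerResponse>();

http.createServer((req, res) => {
  if (req.url === "/events") {
    // Keep the response open and formatted as an SSE stream.
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });
    clients.add(res);
    res.on("close", () => clients.delete(res));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);

// Push a one-way message to every connected client.
function broadcast(data: object) {
  const frame = `data: ${JSON.stringify(data)}\n\n`;
  for (const res of clients) res.write(frame);
}
```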
APNS will alleviate wifi dips, TCP/IP socket hangs and many other issues. Really useful. There is a chance that it may take a little time to get through, but then again, there is a chance that websockets will take a while too.
Recent versions of iOS let you send an APNS notification to the client without a popup message, so the app can ask the server for more information. That, along with some backgrounding implementations, really improves things.
If possible, do not implement totally anonymous clients. It is very tricky to detect when a client reinstalls the app, so you'll end up sending duplicates to the client. You need to take that into account.
APNS looks trivial to implement in Ruby, but I'd suggest resisting the urge and using an existing gem/service that supports both Google and Apple. It is much trickier to implement than it may seem at first.
If you decide to stick with websockets, it may make sense to just leverage websockets in nginx like https://github.com/wandenberg/nginx-push-stream-module
ASIDE:
Using SMS where speed is critical is very expensive. At $1/month per phone number, each number sends at a maximum rate of 1 message per second, so sending 100 messages per second requires 100 numbers = $100/month plus per-message fees. Note that sending 100 messages at a rate of 50 messages/second only costs $50/month, but if you then want to send 1k messages, that takes 20 seconds.
Good luck

Websocket scalability, broadcasting concerns

If you have a complex requirement set with many users (and servers), how will your websocket infrastructure (server[s]) scale, especially with broadcasting?
Of course, broadcasting is not part of any websocket spec, but it's there even in basic chat examples (a.k.a. the hello world of websockets).
A client-side solution (asking for new data) still seems more scalable to me than a server-side solution (broadcasting), given websockets' low latency and relatively cheap (no HTTP headers) nature.
Edit:
OK, suppose you want to replace all your ajax code with websocket implementations, which may mean many connections across many different contexts. This adds enormous complexity to your system if you want to keep track of every possible broadcasting scenario.
Low-level (network/thread etc.) implementation suggestions are also part of the problem, not the solution, because they mean you have to write a special server, unlike general HTTP servers.
Moreover, broadcasting brings a stateful nature to the table, which can't easily scale. Think about adding more servers and load balancing.
Scaling realtime web solutions can be a complex problem, but it is one that services like Pusher (who I work for) have solved, and there are most definitely well-defined solutions for self-hosted realtime web setups too. The PubSub paradigm is well understood and has been solved many times; in order to solve the problem there needs to be some state (who is subscribing to what). This paradigm is used for broadcasting in the types of scenarios that you are talking about.
Realtime web technologies have been built with large numbers of simultaneous connections in mind - many from the ground up. If you wanted to create a scalable solution you would most likely use an existing realtime web server that supports WebSockets; in the same way that it's highly unlikely you would implement your own HTTP server, you are unlikely to want to implement your own WebSocket-capable server from scratch.
Dedicated realtime web servers also let you separate your application logic from your realtime communication mechanism (separation of concerns). Your application might need to maintain some state, but the realtime technology deals with managing subscriptions and connections. How communication between the application and the realtime web technology is achieved is up to you, but frequently message queues are used, and Redis in particular is very popular in this space.
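As a rough illustration of that separation, the application process can publish to a Redis channel while a separate WebSocket process subscribes and fans the message out to its connections. A minimal sketch assuming the ws and redis npm packages:

```typescript
import WebSocket, { WebSocketServer } from "ws";
import { createClient } from "redis";

// Realtime process: owns the WebSocket connections, knows nothing about app logic.
const wss = new WebSocketServer({ port: 8080 });

const subscriber = createClient();
await subscriber.connect();

// Fan out anything published on the "broadcast" channel to every open socket.
await subscriber.subscribe("broadcast", (message) => {
  for (const socket of wss.clients) {
    if (socket.readyState === WebSocket.OPEN) socket.send(message);
  }
});

// Elsewhere, in the application process:
//   const publisher = createClient();
//   await publisher.connect();
//   await publisher.publish("broadcast", JSON.stringify({ event: "new-item", id: 42 }));
```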
HTTP polling may conceptually be easier to understand - you can maintain statelessness, and with each HTTP poll request you specify exactly what you are looking for. But it most definitely means that you will need to start scaling much sooner (adding more resources to handle the load).
WebSocket polling is something I've not considered before, and I don't think I've seen it suggested anywhere either; the idea that the client should say "I'm ready for my next set of data and here's what I want" is an interesting one. WebSockets have generally taken a leap away from the request/response paradigm, but there may be scenarios where the increased efficiency of WebSockets combined with request/response has benefits. The SocketStream application framework might be worth a look as it might be relevant; after the initial application load, all communication is performed over WebSockets, which means that even basic request/response functionality uses WebSockets.
However, since we are talking about broadcasting data, we need to go back to the PubSub paradigm, where it makes much more sense to have active subscriptions and, when new data is available, push that data to those active subscriptions. All your application needs to know is whether there are any active subscriptions, in order to decide whether or not to publish the data. That problem has been solved.
The idea of websockets is that you keep a persistent connection with each client. When there is new data that you want to send to every client, you already know who all the clients are so you should just send it.
It sounds like you want each client to constantly be sending requests to the server for new data. Why? It seems like that would waste everyone's bandwidth, and I don't know why you think it would be more scalable. Maybe you could add more detail to your question, like what kind of information you are broadcasting, how often, how many bytes, how many clients, etc.
Why not just consider an open websocket connection to be like a standing request from the client for more data?

Is it possible to build a web-based chat client without a socket-based framework?

I have heard that web-based chat clients tend to use networking frameworks such as the Twisted framework.
But would it be possible to build a web-based chat client without a networking framework - using only ajax connections?
I would like to build a session-based one-to-one web chat client that uses sessions to indicate when a chat has ended. Would this be possible in Rails using only ajax and without a networking framework?
What effect does it have to use a networking framework and what impact would it have on my app to not use one? Also any general recommendations for approaching this project would be appreciated.
If I understand you correctly, you want to have two clients connect to your server and send messages to each other through ajax, via the server.
This is possible; there are two approaches to doing it.
The easy approach is to have both clients poll every few seconds to check for new messages posted by the other. The drawback is that messages are not delivered instantly. I think there is an example of this in the Rails book.
The more complex approach is to keep an open connection and send the messages to the client as soon as they are received by the server. To do this you can use something like Juggernaut.
I would like to add that though the latter works, it is not something HTTP was meant for and it's a bit of a hack, but hey, whatever gets the job done. A working example of this is the Rails chat project, which uses a Juggernaut derivative.
Technically speaking, every network-based application has a networking framework under it and is therefore socket-based...
The only real question here is whether you want all that chatter to go through your server or to allow point-to-point communication. If the former, you can use an ajax framework to talk to your web server. This means that all of your clients will be constantly polling the web server for updates.
If the latter, then you have to allow direct TCP connections between the two clients and need to get a little closer to the metal, so to speak.
So, ask yourself this: Do you want to pay for the traffic costs AND have potential liability over divulging whatever it is that people might be typing into their client; or, would you rather just build a chat program that people can use to talk to each other?
Of course, before even going that far, do you really want to build yet another chat client? That space is already pretty crowded.
