In a non-web-based chat system the server distinguishes its clients by their PIDs, right? What should be used to distinguish the clients in a web-based chat system?
Thanks in advance.
The fact that you're using a web server shouldn't change much about your model. You're still building chat. You also don't want to tie your chats too deeply to the process that is managing their HTTP connection. HTTP connections are ephemeral: even if everything is going well and you're using long polling, there's no guarantee that the connection will be reused with Keep-Alive for the next long poll. The user might also want to open the same chat in multiple browser windows, on multiple computers, whatever.
I haven't looked closely at any of these, but you're not the first person to build web chat with Erlang:
http://chrismoos.com/2009/09/28/building-an-erlang-chat-server-with-comet-part-1/
http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf
http://yoan.dosimple.ch/blog/2008/05/15/
https://github.com/yrashk/socket.io-erlang (more of a general tool for this sort of thing, not chat specifically)
https://github.com/rvirding/chat_demo (as seen above)
I think the confusion comes from the notion that an Erlang server process must stay alive for every individual client. It can, but if I'm not mistaken Mochiweb doesn't do that by default; it just spawns a new process for every request. If you would like a long-lived, bidirectional client <-> server connection you can get one, for example, by:
sending a client identifier with every request and mapping it to a long-lived process on the server. That process maintains the server-side state and you can call methods on it. It's still pull, not push, though.
using one of the WebSocket implementations. I'm not sure whether Mochiweb has one, but other Erlang HTTP servers like Misultin and Yaws do. For a web-based chat system I believe WebSockets would be a great fit.
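For what it's worth, the client-identifier idea is not Erlang-specific. Here is a minimal sketch in plain Python (not Mochiweb code; all names are made up for illustration) of mapping an identifier sent with every request to long-lived server-side state:

```python
# Illustrative sketch only (not Erlang/Mochiweb code): map a client identifier,
# sent with every request, to long-lived server-side session state.
import queue
import threading

class ChatSession:
    """Holds per-client state that outlives any single HTTP request."""
    def __init__(self, client_id):
        self.client_id = client_id
        self.pending = queue.Queue()   # messages waiting for the next poll

    def deliver(self, message):
        self.pending.put(message)

    def drain(self):
        out = []
        while not self.pending.empty():
            out.append(self.pending.get())
        return out

class SessionRegistry:
    """client_id -> ChatSession; request handlers are free to come and go."""
    def __init__(self):
        self._lock = threading.Lock()
        self._sessions = {}

    def get_or_create(self, client_id):
        with self._lock:
            return self._sessions.setdefault(client_id, ChatSession(client_id))

# In any request handler: look the session up by the identifier the client
# sent, deliver to it or hand back whatever is pending (pull, not push).
registry = SessionRegistry()
session = registry.get_or_create("client-42")
session.deliver("hello")
print(session.drain())
```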
For a very trivial example of a web-based chat system using websockets and Misultin you can check out this chat demo. It was written to demonstrate an idea and is not very elegant, but it does work.
Related
If I develop an online real-time game with WebSockets, with players spread across different containers, how do I sync data when containers are added or removed while they are playing?
Does Kubernetes have any features that help with this case?
ThatBrianDude already gave an awesome answer, and mine won't be as good. But I think your last comment gave us more hints about the architecture you have in mind, and I hope my humble answer sheds light on more ideas for your game. Here are some suggestions:
First, avoid keeping any state in the websocket apps.
As ThatBrianDude said, the basic idea with containers is that they should be stateless.
So why not use caches and a messaging layer to help you with that? Imagine the following situations:
Situation 1: if the client sends an action to the websocket server, the server should put it in a queue/topic (some other service will process it later on).
Situation 2: The server might also listen on one or more topics for certain types of messages, and send them back to the clients that need that information.
Situation 3: when the client asks for information or if the websocket server needs some information to send to the client, the server must read it from a cache, as reading from DB might be slow for a multiplayer game.
Situation 4: eventually a container is killed. The clients connected to that server will receive a connection error and should reconnect. That means another handshake, and the player might feel it depending on what the game was doing, so killing a container should not happen too often. But that would be it: no information is lost.
This way the websocket server containers are totally stateless, and the messaging topics and caches help you both to provide the containers with all the information they need, and to keep websockets, persistence and processing isolated and scalable.
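To make Situations 1-3 concrete, here is a rough sketch, assuming Redis is used both as the cache and as the messaging layer (the channel and key names are invented for illustration):

```python
# Rough sketch of Situations 1-3, assuming Redis as both the cache and the
# messaging layer; all channel and key names are invented for illustration.
import json
import redis

r = redis.Redis()

def on_client_action(player_id, action):
    # Situation 1: don't process the action here; just publish it for the
    # processing containers and stay stateless.
    r.publish("game:actions", json.dumps({"player": player_id, "action": action}))

def on_client_query(game_id):
    # Situation 3: answer reads come from the cache, not from the database.
    state = r.get(f"game:{game_id}:state")
    return json.loads(state) if state else {}

def forward_updates(send_to_client):
    # Situation 2: listen on a topic and push updates back to connected clients.
    pubsub = r.pubsub()
    pubsub.subscribe("game:updates")
    for message in pubsub.listen():
        if message["type"] == "message":
            send_to_client(message["data"])
```

Nothing in the websocket container keeps state of its own, so losing a container (Situation 4) loses nothing but the open connections.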
Summing up, the information would flow like this:
clients are showering the websocket server containers with actions
websocket servers just send them to the messaging layer
processing containers (which can be scaled too!) receive those messages, process them, save the results to the database and/or to a cache, and eventually send more messages to other topics
(optional) websocket servers receive those messages and send them to the clients.
Or like this:
clients ask for information or websocket servers periodically need to send the world state to clients
websocket servers look up the information in the cache
and send it to the clients.
Or even like this:
Some processing servers are independent of messages; they just read the game/world state (from the cache?) periodically,
they process the physics and mechanics of the game,
and save the result back to the cache, from which the websocket servers periodically send it to the clients, or publish it to a topic the websocket servers listen to and forward to the clients.
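A minimal sketch of that periodic processing loop, under the same assumed Redis setup as above (the physics step is a placeholder):

```python
# Minimal sketch of the periodic processing loop, under the same assumed Redis
# setup as above; the physics step is a placeholder.
import json
import time
import redis

r = redis.Redis()
TICK_SECONDS = 0.2

def step_physics(state):
    # Placeholder for the game's real physics/mechanics update.
    state["tick"] = state.get("tick", 0) + 1
    return state

while True:
    raw = r.get("game:world:state")
    state = json.loads(raw) if raw else {}
    state = step_physics(state)
    r.set("game:world:state", json.dumps(state))
    # Publish so the websocket containers can fan the new state out to clients.
    r.publish("game:updates", json.dumps(state))
    time.sleep(TICK_SECONDS)
```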
Lastly, don't forget the suggestion to have one machine responsible for one game/world. It would be nice if each processing server (or each thread of a server) works with one game/world. That would make it easier to persist things without the need to sync stuff.
The basic idea with containers is that they should be stateless.
This means that any persistent data your game might have (high scores etc.) must be saved to a persistent DB, whereas other temporary data like the current in-game score or nickname can stay in the container's memory and be gone once the container dies.
how do I sync data when containers are added or removed while they are playing?
This sounds like you want multiple containers computing one game world?
That's a whole other beast on its own, but you might want to take a look at SpatialOS, which pretty much allows for massive multiplayer worlds and is designed for games that require more than one machine per world.
If that's not what you are looking for, I would recommend keeping one machine responsible for one game/world, as you will avoid a lot of complexity when you try to sync stuff later on.
Please read before tagging as duplicate.
I'm creating a set of applications which rely on smart cards for authentication. Up to now, each application has controlled the smart card reader individually. In a few weeks, some of my customers will be using more than one application at the same time, so I thought it might be more practical to create a service application which controls the authentication process. I'd like my desktop applications to tell the service application that they are interested in the authentication process, and the service application would then provide them with information about the current user. This part is easy, using named pipes. The hard part is: how can the service tell the desktop applications that an event has occurred (UserLogIn, UserLogOut, PermissionsChanged, ... to name a few)? So far I have two methods in mind: callback functions and messages. Does anyone have a better idea? I'm sure someone has.
You want to do IPC (Inter-Process Communication) with Delphi.
There are many links that can help you; Cromis IPC is just one, to give you an idea of what you are after.
A similar SO question to yours is here.
If you want to go pure Windows API, then take a look at how OutputDebugString communication is implemented.
Several tools can listen to the mechanism and many apps can send information to it.
Search for DBWIN_DATA_READY and DbWin32 for more information on how the protocol for OutputDebugString works.
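For illustration, here is a rough listener sketch (Python with ctypes, Windows only) of that protocol as I understand it: a 4 KB shared section named DBWIN_BUFFER holds the sender's PID followed by an ANSI string, and the two events coordinate sender and listener. Treat the details as an approximation, not a hardened implementation:

```python
# Rough sketch (Windows only) of listening to OutputDebugString traffic via the
# DBWIN_BUFFER / DBWIN_BUFFER_READY / DBWIN_DATA_READY objects; an approximation
# of the protocol, not a hardened implementation.
import ctypes
import struct
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateEventW.restype = wintypes.HANDLE
kernel32.CreateFileMappingW.restype = wintypes.HANDLE
kernel32.MapViewOfFile.restype = ctypes.c_void_p
kernel32.WaitForSingleObject.argtypes = (wintypes.HANDLE, wintypes.DWORD)

PAGE_READWRITE = 0x04
FILE_MAP_READ = 0x0004
INFINITE = 0xFFFFFFFF
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1)

# Auto-reset events that coordinate senders (OutputDebugString callers) and
# the single listener.
buffer_ready = kernel32.CreateEventW(None, False, False, "DBWIN_BUFFER_READY")
data_ready = kernel32.CreateEventW(None, False, False, "DBWIN_DATA_READY")

# 4 KB shared section: the first 4 bytes are the sender's PID, the rest is an
# ANSI, NUL-terminated string.
section = kernel32.CreateFileMappingW(INVALID_HANDLE_VALUE, None,
                                      PAGE_READWRITE, 0, 4096, "DBWIN_BUFFER")
view = kernel32.MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0)

while True:
    kernel32.SetEvent(buffer_ready)                      # ready for the next message
    kernel32.WaitForSingleObject(data_ready, INFINITE)   # a sender wrote something
    raw = ctypes.string_at(view, 4096)
    pid = struct.unpack_from("<I", raw)[0]
    text = raw[4:].split(b"\x00", 1)[0].decode("mbcs", errors="replace")
    print(f"[{pid}] {text}")
```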
This and this are good reading.
When it comes to IPC, some tips:
Do not be tied to one protocol: for instance, if you implement named pipe communication, you may later need to run it over a network, or even over HTTP;
Do not reinvent the wheel, nor use proprietary messages, but standard formats (like XML / JSON / BSON);
Callback events are somewhat difficult to implement, since the common pattern would be to implement a server in each desktop client to receive notifications from the server.
My recommendation is not to use callbacks, but polling from the desktop applications on a stateless architecture. You open a communication channel with the server, then every second or half second (use a TTimer in your UI) you make a small request asking what changed (you can pass a revision number or a timestamp of your last retrieval). That way you synchronize your desktop data with the pending events. Asking for updates on an existing connection is very fast and, if nothing changed, just sends one IP packet back and forth over the network. It is a very small task and won't slow down either the client or the server (if you use some in-memory cache).
In practice, with a real application, such a stateless architecture is very responsive from the end-user point of view, and is much easier to deploy. You do not need to create a server in each desktop application, so you don't have to open firewall ports or the like. Since HTTP is stateless, it is even Internet-friendly.
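A minimal, language-neutral sketch of that revision-number polling idea (class and method names are made up; in Delphi the client side would be driven by the TTimer mentioned above):

```python
# Language-neutral sketch of revision-number polling; class and method names
# are made up. The Delphi client would do the "client side" part from a TTimer.
import itertools
import threading

class EventStore:
    """Server side: append events, answer 'what changed since revision N?'."""
    def __init__(self):
        self._lock = threading.Lock()
        self._rev = itertools.count(1)
        self._events = []                      # list of (revision, event) pairs

    def append(self, event):
        with self._lock:
            self._events.append((next(self._rev), event))

    def since(self, last_rev):
        with self._lock:
            pending = [(rev, ev) for rev, ev in self._events if rev > last_rev]
        new_rev = pending[-1][0] if pending else last_rev
        return new_rev, [ev for _, ev in pending]

store = EventStore()
store.append({"type": "UserLogIn", "user": "alice"})

# Client side: remember the last revision seen and poll every ~500 ms.
last_seen = 0
last_seen, events = store.since(last_seen)
for ev in events:
    print("apply", ev)
```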
If you want to develop services, you can use DataSnap, something like RemObjects, or you can try our open source mORMot framework, which is able to create interface-based services with light JSON messages over REST, either in-process or using GDI messages, named pipes or TCP/HTTP - for free, with unbeatable performance, built-in security, and support from Delphi 6 up to XE2. For your event-based task, just using the client-server ORM available in mORMot could be enough: create a table/class storing the events (you can even define round-robin in-memory storage - no need for the SQLite3 engine nor a DB here), then ask for all pending events since the last refresh. And the server can safely be a background service or a normal application - some mORMot users even have the same executable able to run as a stand-alone application, a server service, an application server, or a UI client, just by changing the configuration.
Edit / announcement:
On the mORMot roadmap, we have added a new upcoming feature: easily implementing one-way callbacks from the server.
That is, adding a transparent "push" mode to our Service-Oriented Architecture framework.
The aim is to implement notification events triggered from the server side, very easily from Delphi code, via some interface definitions, even over a single HTTP connection. WCF, for instance, does not allow this: it needs dual binding, so it has to open a firewall port and such.
It will be used for easy Event Collaboration, via a publish/subscribe pattern, and will allow Event Sourcing. I will try to make it implement two modes: polling and lock-and-wait. A direct answer to your question.
You can use a simple bidirectional TCP socket connection to allow asynchronous server-to-client messages on the same socket.
An example is the Indy TIdTelnetClient class; it uses a thread for incoming messages from the server.
You can build a similar text-based protocol; you only need an Indy TCP server instance in the service and one Indy client instance in each application.
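A language-neutral sketch of that idea (plain sockets rather than Indy; the host, port and protocol lines are made up): one TCP connection, a background thread for server-pushed lines, and the same socket still available for client requests.

```python
# Language-neutral sketch (plain sockets rather than Indy; host, port and
# protocol lines are made up): one TCP connection, a background thread for
# server-pushed lines, and the same socket still usable for client requests.
import socket
import threading

def reader(sock):
    # Receive asynchronous, line-delimited messages pushed by the service.
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            print("event from service:", line.decode("utf-8", "replace"))

sock = socket.create_connection(("127.0.0.1", 9010))   # hypothetical service port
threading.Thread(target=reader, args=(sock,), daemon=True).start()

# The main thread can still send client -> service requests on the same socket.
sock.sendall(b"SUBSCRIBE UserLogIn UserLogOut\n")
```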
I'm currently designing an application for iOS (using MonoTouch) that will have a server component running on Windows Azure. The application will essentially be a chat type application where users will generate messages within their clients and send them to the server, which will then need to forward those messages out (as quickly as practicable) to other clients that the user might be sending the messages to.
My question is - is there a recommended practice for architecting an application like this, where clients need to receive 'push' messages from the server?
I've considered a few options but would appreciate feedback.
The first option is to use Apple's Push Notifications service (APNs). I have two concerns about this - first, the clients only need to receive the messages when they're online (APNs sends messages through when the app is closed too, which I don't need or want); and second, there is a possibility that there will be a high volume of messages, which I know Apple would probably get unhappy about (perfectly fairly).
A second option I considered is using a web service (WCF-based) and having the client call this service every (say) 2-3 seconds, which is the maximum delay we could tolerate. This would seem to involve a great deal of potentially unnecessary network traffic, though ("have you got anything for me?", "no", repeated ad nauseam).
A third option is to maintain a persistent web service connection between the client and the server. When the client app starts it would call a web service method on a background thread. The server would hold the connection open (by not returning anything), and if any messages came through it would immediately return them. This connection might time out after, say, 2 minutes at which point it would be re-established. This seems to do what I want, but again, I'm concerned that there'd be a lot of connections open to the server at any moment, which could require server resources unnecessarily.
A fourth option is to use a persistent connection over TCP (or UDP, although from what I've found, Windows Azure doesn't support this). This seems to be a good option, but again, might be overkill in terms of server usage - there could potentially be hundreds or even thousands of clients connected at any moment.
A fifth option is to somehow have the server push messages directly to the client, perhaps by having the client run a mini web server or similar. However, as the app will be running on 3G and WiFi networks (beyond my control) I don't expect incoming ports will be open for this sort of thing.
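To make the third (long polling) option concrete, the client-side loop I have in mind would look roughly like this (sketched in Python rather than MonoTouch/C# for brevity; the URL, parameter names and timeout are placeholders, not a real API):

```python
# Placeholder sketch of the long-poll loop; the URL, parameter names and the
# ~2-minute server-side hold are assumptions, not a real API.
import requests

ENDPOINT = "https://example-chat.azurewebsites.net/api/messages/poll"   # hypothetical

def poll_forever(user_id):
    while True:
        try:
            # The server holds the request open until messages arrive or the
            # window (say, 2 minutes) expires, then we immediately re-poll.
            resp = requests.get(ENDPOINT, params={"user": user_id}, timeout=130)
            for message in resp.json():
                print("received:", message)
        except requests.Timeout:
            continue   # nothing arrived in the window; re-establish the poll
```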
If anyone has any other suggestions, or thinks one of the above options would be a good idea (or is a standard way of approaching this sort of problem) I'd be very interested to hear about it.
Thanks in advance,
John
Have you had a look at PubNub (http://www.pubnub.com/)?
OK, I know this is pretty broad, but let me narrow it down a bit. I've done a little client-server programming, but nothing that needed to handle more than a couple of clients at a time. So I was wondering, design-wise, what the most mainstream approach to these servers is, and whether people could point me to tutorials, books, or ebooks.
Haha, OK, that didn't really narrow it down. I guess what I'm looking for is a simple but literal example of how the server-side program is set up.
The way I see it: the client sends a command, the server receives it and puts it into a queue, and the server has either a single dedicated thread or a thread pool that constantly polls this queue and then sends the appropriate response back to the client. Is non-blocking I/O often used?
I suppose just tutorials, time and practice are really what I need.
EDIT: Thanks for your responses! Here is a little more of what I'm trying to do, I suppose.
This is mainly for learning purposes, so I'd rather steer away from frameworks or libraries as much as I can. Take, for example, this somewhat made-up idea:
There is a client program that performs some function and constantly streams its output to a server (there can be many of these clients); the server then creates statistics and stores most of the data. And let's say there is an admin client that can log into the server, and if any clients are streaming data to the server it would in turn stream that data to each of the connected admin clients.
This is how I envision the server program logic:
The server would have three threads for managing incoming connections (one for each port it listens on), each spawning a thread to manage each connection:
1) ClientConnection, which would basically just receive output (which we'll just say is text)
2) AdminConnection, which would be for sending commands between the server and the admin client
3) AdminDataConnection, which would basically be for streaming client output to the admin client
When data comes in from a client, the server parses out what is relevant and puts that data in a queue, let's say adminDataQueue. In turn, there is a thread that watches this queue every 200 ms (or whatever), checks whether there is data and, if there is, cycles through the AdminDataConnections and sends it to each.
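To make that concrete, the watcher thread I have in mind would look roughly like this (a language-neutral Python sketch, reusing the made-up names from above):

```python
# Language-neutral sketch of the adminDataQueue watcher, reusing the made-up
# names from above; AdminDataConnection objects are assumed to have a send().
import queue
import threading
import time

admin_data_queue = queue.Queue()
admin_connections = []                 # connected AdminDataConnection objects
admin_connections_lock = threading.Lock()

def admin_data_pump(poll_interval=0.2):
    while True:
        batch = []
        while not admin_data_queue.empty():
            batch.append(admin_data_queue.get())
        if batch:
            with admin_connections_lock:
                targets = list(admin_connections)
            for conn in targets:
                for item in batch:
                    conn.send(item)    # stream the client output to each admin
        time.sleep(poll_interval)

threading.Thread(target=admin_data_pump, daemon=True).start()
```

With a blocking queue the thread could also just block on get() instead of sleeping, which would remove the 200 ms latency, but the polling version matches what I described.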
Now for the AdminConnection: this would be for any commands or direct requests for data. So you could request statistics; the server side would receive the statistics command, send a command saying "incoming statistics", and then immediately send a statistics object or the data.
As for the AdminDataConnection, it is just the output from the clients with maybe a few simple commands intertwined.
Aside from the bandwidth concerns of all the client data being funneled to each of the admin clients, what sort of problems would arise from this design as it scales (again neglecting bandwidth between the clients and the server, and between the admin clients and the server)?
There are a couple of basic approaches to doing this.
Worker threads or processes. Apache does this in most of its multiprocessing modes. In some versions of this, a thread or process is spawned for each request when the request arrives; in other versions, there's a pool of waiting threads which are assigned work as it arrives (avoiding the fork/thread create overhead when the request arrives).
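A minimal sketch of the worker-pool variant (Python, purely for illustration): a fixed-size pool is handed connections as they are accepted.

```python
# Minimal sketch of the worker-pool variant: a fixed pool of threads is handed
# connections as they arrive (for illustration only).
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn):
    # Each worker owns one connection; blocking I/O here only ties up this
    # worker, not the accept loop.
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8000))
server.listen()

with ThreadPoolExecutor(max_workers=32) as pool:   # fixed-size worker pool
    while True:
        conn, _ = server.accept()
        pool.submit(handle, conn)
```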
Asynchronous (non-blocking) I/O and an event loop. This is basically using the UNIX select call (although FreeBSD and Linux provide more optimized alternatives such as kqueue and epoll). lighttpd uses this approach and is able to achieve very high scalability, but any in-server computation blocks all other requests. Concurrent dynamic request handling is passed on to separate processes (via CGI) or waiting processes (via FastCGI or its equivalent).
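And a minimal sketch of the event-loop variant (again Python, an echo-style server just to show the shape): one thread, non-blocking sockets, and select deciding who is ready.

```python
# Minimal sketch of a single-threaded, non-blocking event loop built on select
# (an echo-style server, for illustration only).
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8001))
server.listen()
server.setblocking(False)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:
            conn, _ = server.accept()       # new client connection
            conn.setblocking(False)
            sockets.append(conn)
        else:
            data = sock.recv(4096)
            if data:
                # Any slow computation here would block every other client;
                # a real server would also buffer unsent bytes and watch for
                # writability before sending.
                sock.send(data)
            else:
                sockets.remove(sock)        # client closed the connection
                sock.close()
```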
I don't have any particular references handy to point you to, but looking at the websites of open source projects that use the different approaches, for information on their design, wouldn't be a bad start.
In my experience, building a worker thread/process setup is easier when working from the ground up. If you have a good asynchronous framework that integrates fully with your other communications tasks (such as database queries), however, it can be very powerful and frees you from some (but not all) thread locking concerns. If you're working in Python, Twisted is one such framework. I've also been using Lwt for OCaml lately with good success.
I have heard that web-based chat clients tend to use networking frameworks such as the Twisted framework.
But would it be possible to build a web-based chat client without a networking framework - using only ajax connections?
I would like to build a session-based one-to-one web chat client that uses sessions to indicate when a chat has ended. Would this be possible in Rails using only ajax and without a networking framework?
What effect does it have to use a networking framework and what impact would it have on my app to not use one? Also any general recommendations for approaching this project would be appreciated.
If I understand you correctly, you want to have two clients connect to your server and send messages to each other through AJAX, via the server.
This is possible; there are two approaches to doing this.
The easy approach is to have both clients poll every few seconds to check for new messages posted by the other. The drawback is that messages are not delivered instantly. I think there is an example of this in the Rails book.
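A minimal sketch of the polling approach (shown with Python/Flask only to keep it short; in Rails this would be a controller action doing the same thing, and the route and field names are made up):

```python
# Sketch of the polling approach using Python/Flask purely to keep it short;
# in Rails this would be a controller action doing the same thing. The route
# and field names are made up.
from flask import Flask, jsonify, request

app = Flask(__name__)
messages = []          # in-memory store: {"id": n, "from": ..., "text": ...}

@app.route("/messages")
def poll():
    # The client remembers the highest id it has seen and asks only for newer ones.
    since = int(request.args.get("since", 0))
    return jsonify([m for m in messages if m["id"] > since])

@app.route("/messages", methods=["POST"])
def post():
    msg = {"id": len(messages) + 1, **request.get_json()}
    messages.append(msg)
    return jsonify(msg), 201
```

The client then polls GET /messages?since=<last id it has seen> every few seconds and appends whatever comes back.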
The more complex approach is to keep an open connection and send the messages to the client as soon as they are received by the server. To do this you can use something like Juggernaut.
I would like to add that though the latter works, it is not something HTTP was meant for and it's a bit of a hack, but hey, whatever gets the job done. A working example of this is the rails chat project, which uses a Juggernaut derivative.
Technically speaking, every network-based application has a networking framework under it and is therefore socket-based...
The only real question here is whether you want to have all that chatter go through your server or allow point to point communication. If the former, you can use the ajax framework to talk to your web server. This means that all of your clients will be constantly polling the web server for updates.
If the later, then you have to allow direct tcp connections between the two clients and need to get a little closer to the metal so to speak.
So, ask yourself this: Do you want to pay for the traffic costs AND have potential liability over divulging whatever it is that people might be typing into their client; or, would you rather just build a chat program that people can use to talk to each other?
Of course, before even going that far, do you really want to build yet another chat client? That space is already pretty crowded.