Notify backend that container has been deployed - docker

I want to create containers which start small web services. Developers on our team should then upload small images which contain different services, and a main backend system uses these services.
My problem is: when a developer uploads a new service, how does the backend know there is a new service it can use? Previously, when service X was requested and no service provided that functionality, the backend just returned a simple message. Once a service that does X has been uploaded, the main backend should use it. But how does the backend know the service is there and should be used?

You can add some notification call to your small web service. But what do you do when the service goes down unexpectedly, or the network drops out for a short time? You need to add logic to the clients to re-establish the connection.
The Docker documentation recommends this approach:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.

Related

How to use Kubernetes to do multiplayer online game with websocket?

If I develop an online real-time game with websockets, with players running on different containers, how do I sync data when containers are added or removed while players are playing?
Does Kubernetes have any features that help with this case?
ThatBrianDude already gave an awesome answer, and mine will not be that good. But I think your last comment gave us more hints about the architecture you have in mind. I hope my humble answer sheds light on more ideas for your game. Here are some suggestions:
First, avoid keeping any state in the websocket apps.
The basic idea with containers is that they should be stateless.
ThatBrianDude
So why not use caches and a messaging layer to help you with that? Imagine the following situations:
Situation 1: If the client sends an action to the websocket server, the server should put it in a queue/topic (some other service will process it later on).
Situation 2: The server might also listen to one or more topics for some types of messages, and send them back to the clients that need that information.
Situation 3: When the client asks for information, or the websocket server needs some information to send to the client, the server must read it from a cache, as reading from the DB might be too slow for a multiplayer game.
Situation 4: eventually a container is killed. The clients connected to that server will receive a connection error, and should reconnect. That means another handshake, and the player might feel it, depending on what the game was doing, so killing a container should not happen that often. But that would be just it, no information is lost.
This way, the websocket server containers are totally stateless, and the messaging topics and caches will help you to provide all the information the containers need, and to keep websockets, persistence, and processing isolated and scalable.
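Not real websockets, but a runnable sketch of that shape in Python: the connection handler keeps no game state of its own; it only forwards actions to a queue (Situation 1) and reads world state from a cache (Situation 3). The queue, the cache, and the line-delimited JSON protocol are all stand-ins for the real messaging layer, cache, and websocket framing:

    import asyncio
    import json

    action_queue = asyncio.Queue()   # stand-in for a real topic (Kafka, Redis, ...)
    world_cache = {}                 # stand-in for a shared cache (Redis, ...)

    async def handle_client(reader, writer):
        """One client connection; owns no state beyond the socket itself."""
        while data := await reader.readline():
            action = json.loads(data)             # e.g. {"world": 1, "move": "up"}
            await action_queue.put(action)        # enqueue; processed elsewhere
            state = world_cache.get(action["world"], {})  # cache read, never a DB hit
            writer.write((json.dumps(state) + "\n").encode())
            await writer.drain()

    async def main():
        server = await asyncio.start_server(handle_client, "127.0.0.1", 8765)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())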
Summing up, the information would flow like this:
clients are showering the websocket server containers with actions
websocket servers just send them to the messaging layer
processing containers (which can be scaled too!) receive those messages, process them, save the result to the database and/or to a cache, and eventually send more messages to other topics
(optional) websocket servers receive those messages and send them to the clients.
Or like this:
clients ask for information or websocket servers periodically need to send the world state to clients
websocket servers look up the information in the cache
and send it to the clients.
Or even like this:
Some processing servers are independent of messages; they just read the game/world state (from the cache?) periodically,
they process the physics and mechanics of the game,
and save the result back in the cache, which the websocket servers periodically send to the clients; or they publish the result to a topic that the websocket servers listen to and forward to the clients (sketched below).
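That last flow is essentially a fixed-rate tick loop. A rough sketch, with the cache again faked as a dict and the physics step left as a placeholder:

    import time

    TICK = 1 / 20                     # 20 simulation steps per second (arbitrary)

    def step_physics(state):
        """Placeholder for the real game mechanics."""
        state["tick"] = state.get("tick", 0) + 1
        return state

    def game_loop(cache, world_id):
        """One processing server (or thread) owns one world, as suggested below."""
        while True:
            started = time.monotonic()
            state = cache.get(world_id, {})        # would be a cache GET
            cache[world_id] = step_physics(state)  # would be a cache SET / publish
            time.sleep(max(0.0, TICK - (time.monotonic() - started)))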
Lastly, don't forget the suggestion to have one machine responsible for one game/world. It would be nice if each processing server (or each thread of a server) works with one game/world. That would make it easier to persist things without the need to sync stuff.
The basic idea with containers is that they should be stateless.
This means that any persistent data your game might have (highscores etc.) must be saved to a persistent DB, whereas other temporary data like the current in-game score or nickname can stay inside the memory of the container and be gone once the container dies.
how do I sync data when containers are added or removed while players are playing?
This sounds like you want to use multiple containers computing one game world?
That's a whole other beast on its own, but you might want to take a look at SpatialOS, which pretty much allows for massive multiplayer worlds and is designed for games that require more than one machine per world.
If that's not what you are looking for, I would recommend you keep one machine responsible for one game/world, as you will avoid high complexity when you try to sync stuff later on.

How to ensure the receiving application is up and running in client-server communication?

I am presently working on a client-server solution to transfer files to another machine via a socket network connection. Since I intend to do some evaluation on the receiving end as well, I am assuming that I will need to have some kind of client or server programme running there, too.
I am fairly new to the whole client-server thing and therefore have the following elementary question:
My present understanding is that client and server will be two independent programmes running on two different machines. How would one typically ensure that the communication partner (i.e., the server when sending from a client and the client when sending from a server) is actually up and running on the remote machine that I want to transfer a file to?
So far, I have been looking into the following options:
In the sending programme, include SSH access to the remote machine and start an instance of the receiving programme there.
Have the receiving programme run as a daemon process on the remote machine. This would mean that the receiving programme should always be running there. However, how would I know whether the process has crashed or has been shut down for some reason, and how would one recover from that without option 1) above?
So, my main question is: Are there any additional options that might be worth considering?
Thanks for your view on this!
Depending on how your client-server messages are set up, a ping message (I don't mean ICMP ping, but the same basic idea) to which the server responds with "I am alive" would help. That way you at least know the server end is running.
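A minimal sketch of that application-level ping, assuming a line-based TCP protocol where the PING/ALIVE messages and the port are made up:

    import socket

    def is_alive(host, port, timeout=3.0):
        """Send an application-level PING and expect an "I am alive" reply."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(b"PING\n")
                return sock.recv(64).startswith(b"ALIVE")
        except OSError:
            return False       # refused, timed out, unreachable: not running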
It is not uncommon in production environments to put monitoring systems in place for this. Other options worth considering: xinetd services, i.e. programmes that get started on incoming connections.
There are probably newer ways to achieve the automatic start/restart (or start on connection) with systemd/systemctl, but I am not familiar enough with them to give you the specifics.
A somewhat crude, but effective means may be a cron job that periodically runs a script to enforce keeping the service up.
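A sketch of such a watchdog, meant to be run from cron every minute or so; the service name and the restart command are assumptions about your setup:

    #!/usr/bin/env python3
    import subprocess

    SERVICE = "filereceiver"      # hypothetical name of the receiving daemon

    def main():
        # pgrep exits non-zero when no process with that exact name exists
        check = subprocess.run(["pgrep", "-x", SERVICE],
                               stdout=subprocess.DEVNULL)
        if check.returncode != 0:
            subprocess.run(["systemctl", "restart", SERVICE])

    if __name__ == "__main__":
        main()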

Sending large amounts of data from windows app to service app

I'm building a system with some remote desktop capabilities. The client is every computer which is sharing its desktop; the server is a central server with a database which receives the images of all those desktops. On the client side, I would like to build two projects: a Windows service application and a VCL forms application. Each client app would presumably be running under a different user account on the computer, so there might be multiple client apps running at once, all sending their image to this client service, which relays them to the central server.
The service will be responsible for connecting to the server, sending the image, and receiving mouse/keyboard events. The application, which is running in the background, will connect to this service somehow and transmit the screenshots to it. The goal is that one service is running while multiple "clients" are able to connect to it and send their desktop images. This service is connected to the "central server", which receives all these different screenshots from different "clients". The images will then be either saved and logged or redirected to any "dashboard" which might be viewing that "client".
The question is: what method should I use to connect the client applications to the client service to send images? They will be running on the same computer. I will need both the ability to send simple command packets and the ability to stream a chunk of an image. I was about to use the Indy components (TIdTCPServer etc.), but I'm sure there must be an easier and cleaner way to do it. I'm using the Indy components elsewhere in the projects too.
Here's a diagram of the overall system I'm aiming for - I'm just worried about the parts on the far right and far left - where the apps connect to the service within the same computer. As you can see, since there are many layers, I need to make sure whatever method(s) I use are powerful enough to accommodate streaming massive amounts of image data.
For communication among processes you can use pipes, mailslots, or sockets. For sending a stream of file data, I also think shared memory may be the most efficient way.
I've done this a few times now, in a number of different configurations. The key to making it easy for me was using the RemObjects SDK, which took care of the communications part. With a thread that controls its state, I can have a connection to a server or service that is reliable, and can transfer anything from a status byte through to many megabytes of data (it is recommended that you use small chunks for large data so that you have more fine-grained control over errors and flow).
I now have a set of high-reliability templates that I can deploy to make a new variation quite easily, and they can be updated with new function calls without much hassle (the first thing I do is negotiate versions between the client and server so they know what they can support). Because it all works at a high level, my code is just making "function calls" and never worrying about what the format on the wire is. Likewise, I can switch from their binary format to standard SOAP or other formats without changing the core logic.
Finally, the connections can be local to the same machine (I use this for end-user apps talking to a background service), to a machine on the LAN, or over the internet. All with the same code.
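Whatever transport you end up with, the "small chunks" advice comes down to simple length-prefixed framing. A sketch of the idea (in Python only for brevity; the chunk size and the zero-length end-of-image marker are arbitrary choices):

    import struct

    CHUNK = 64 * 1024                          # arbitrary chunk size

    def recv_exact(sock, n):
        """Read exactly n bytes or raise if the peer disconnects."""
        buf = b""
        while len(buf) < n:
            piece = sock.recv(n - len(buf))
            if not piece:
                raise ConnectionError("peer closed mid-frame")
            buf += piece
        return buf

    def send_image(sock, data):
        """Send one image as a sequence of length-prefixed chunks."""
        for off in range(0, len(data), CHUNK):
            chunk = data[off:off + CHUNK]
            sock.sendall(struct.pack("!I", len(chunk)) + chunk)
        sock.sendall(struct.pack("!I", 0))     # zero-length frame ends the image

    def recv_image(sock):
        parts = []
        while True:
            (size,) = struct.unpack("!I", recv_exact(sock, 4))
            if size == 0:
                return b"".join(parts)
            parts.append(recv_exact(sock, size))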

Erlang web-distribution

In a non-web-based chat system the server distinguishes its clients by their PIDs, right? What should be used to distinguish the clients in a web-based chat system?
Thanks in advance
The fact that you're using a web server shouldn't change much about your model. You're still building chat. You also don't want to tie your chats too deeply to the process that is managing their HTTP connection. HTTP connections are ephemeral: even if everything is going well and you're using long polling, there's no guarantee that the connection will be re-used with Keep-Alive for the next long poll. The user might also want to open the same chat in multiple browser windows, on multiple computers, whatever.
I haven't looked closely at any of these, but you're not the first person who has built web chat with Erlang:
http://chrismoos.com/2009/09/28/building-an-erlang-chat-server-with-comet-part-1/
http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf
http://yoan.dosimple.ch/blog/2008/05/15/
https://github.com/yrashk/socket.io-erlang (more of a general tool for this sort of thing, not chat specifically)
https://github.com/rvirding/chat_demo (as seen above)
I think the confusion comes from the notion that an Erlang server process must stay alive for every individual client. It can, but Mochiweb doesn't do that by default, if I'm not mistaken. It just spawns a new process for every request. If you would like a long-lived bidirectional client <-> server connection, you can do that, for example, by:
sending a client identifier with every request and mapping it to a long-lived process on the server. The process maintains the server-side state, and you can call methods on it. It's still pull and not push, though (see the sketch after this list).
using one of the websocket implementations. I'm not sure if Mochiweb has one, but other Erlang HTTP servers like Misultin and Yaws provide one. For a web-based chat system I believe websockets would be a great fit.
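A minimal sketch of the first suggestion, in Python rather than Erlang just to keep it short: every request carries a client identifier, and the server maps it to a long-lived session object (the stand-in here for a long-lived Erlang process) that holds the per-client chat state. The function and field names are made up:

    import uuid

    class Session:
        """Long-lived per-client state; in Erlang this would be a process."""
        def __init__(self):
            self.inbox = []              # messages waiting for this client

    sessions = {}                        # client identifier -> Session

    def handle_request(client_id=None):
        """Called once per HTTP request; the session outlives the request."""
        if client_id is None or client_id not in sessions:
            client_id = str(uuid.uuid4())        # hand out a fresh identifier
            sessions[client_id] = Session()
        session = sessions[client_id]
        pending, session.inbox = session.inbox, []   # drain this client's inbox
        return client_id, pending    # the client echoes the id on its next poll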
For a very trivial example of a web-based chat system using websockets and Misultin you can check out this chat demo. It was written to demonstrate an idea and is not very elegant, but it does work.

Push Messages from Azure application to MonoTouch (iPhone) application without Apple Push Notifications

I'm currently designing an application for iOS (using MonoTouch) that will have a server component running on Windows Azure. The application will essentially be a chat type application where users will generate messages within their clients and send them to the server, which will then need to forward those messages out (as quickly as practicable) to other clients that the user might be sending the messages to.
My question is - is there a recommended practice for architecting an application like this, where clients need to receive 'push' messages from the server?
I've considered a few options but would appreciate feedback.
The first option is to use Apple's Push Notifications service (APNs). I have two concerns about this - first, the clients only need to receive the messages when they're online (APNs sends messages through when the app is closed too, which I don't need or want); and second, there is a possibility that there will be a high volume of messages, which I know Apple would probably get unhappy about (perfectly fairly).
A second option I considered is using a web service (WCF-based) and having the client call this service every (say) 2-3 seconds, which is the maximum delay we could tolerate. This would seem to involve a great deal of potentially unnecessary network traffic, though ("have you got anything for me?", "no", repeated ad nauseam).
A third option is to maintain a persistent web service connection between the client and the server. When the client app starts, it would call a web service method on a background thread. The server would hold the connection open (by not returning anything), and if any messages came through it would immediately return them. This connection might time out after, say, 2 minutes, at which point it would be re-established (see the sketch after this list of options). This seems to do what I want, but again, I'm concerned that there'd be a lot of connections open to the server at any moment, which could require server resources unnecessarily.
A fourth option is to use a persistent connection over TCP (or UDP, although from what I've found, Windows Azure doesn't support this). This seems to be a good option, but again, might be overkill in terms of server usage - there could potentially be hundreds or even thousands of clients connected at any moment.
A fifth option is to somehow have the server push messages directly to the client, perhaps by having the client run a mini web server or similar. However, as the app will be running on 3G and WiFi networks (beyond my control) I don't expect incoming ports will be open for this sort of thing.
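A client-side sketch of that third option, in Python for brevity, assuming a hypothetical /poll endpoint that blocks until a message arrives or the server-side timeout elapses (the URL and timeout are placeholders):

    import urllib.request

    POLL_URL = "https://example.com/poll"    # hypothetical long-poll endpoint

    def poll_forever(handle):
        """Re-issue the blocking request as soon as it returns or times out."""
        while True:
            try:
                # the server holds the request open for up to ~2 minutes
                with urllib.request.urlopen(POLL_URL, timeout=130) as resp:
                    body = resp.read()
                    if body:             # an empty body means "nothing new"
                        handle(body)
            except OSError:
                pass                     # network hiccup: just reconnect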
If anyone has any other suggestions, or thinks one of the above options would be a good idea (or is a standard way of approaching this sort of problem) I'd be very interested to hear about it.
Thanks in advance,
John
Have you had a look at PubNub (http://www.pubnub.com/)?
