I have created a connection pooling process in Erlang that has sub-processes (each being a connection). The connection pooling process (supervisor) needs to hold the state of all child sub-processes, such as a flag that indicates whether the sub-process is available to be leased to a requester. This state is stored in an ETS table.
POOL-MASTER :
connection process 1
connection process 2
connection process 3
When a client requests a connection from POOL-MASTER, it must find out which connection process is available by looking at the ETS table and fetching the state. This phase is called "get-lease". Then the state is updated. Similarly, when a client returns the connection to the pool, it uses a "return-lease" function that flags the item as available to the next client.
I want the functions above, "get-lease" and "return-lease", to be thread-safe. In other words, I want to make sure that no two clients use these functions concurrently, otherwise the state of the connections can get mixed up (two clients get the same connection). In Java a synchronized method would be used for this purpose.
Is there anything in Erlang that can be used to achieve this? For instance, some sort of locking mechanism on the ETS table that is acquired and then released? Or should this be done by creating a single process that handles the specific functions to be locked/unlocked and sending messages to this process (assuming the message handling is single-threaded)?
Thread-safe? What is that? Erlang doesn't know it :) since we work with message passing between processes. This makes sure that access to any structure (maintained by an Erlang server process) will always happen in a serialized manner [the same thing Don Branson has mentioned].
What I would have done is:
1. Create a gen server process monitored by a supervisor process.
2. This server process would be the manager of your ETS table and exposes API/methods to be called by clients for requesting and releasing connections.
3. The requests will be handled by handle_call (for synchronous calls) or by handle_cast (for asynchronous calls)
4. You might even want to implement some Timeout functionality to release connections by iterating over your ETS table and deleting from it based on some criteria
The above will work just fine and give you decent performance as well (if performance is a concern), AND there will be no race conditions because all accesses are serialized.
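A minimal sketch of such a manager (the module, table and flag names here are made up, not taken from your code) could look like the following; get_lease/0 and return_lease/1 are "thread-safe" simply because every call goes through the same gen_server mailbox:

-module(pool_manager).
-behaviour(gen_server).

-export([start_link/0, add_connection/1, get_lease/0, return_lease/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Client API. All accesses to the ETS table are serialized through the
%% gen_server, so two clients can never grab the same connection.
add_connection(ConnPid) ->
    gen_server:cast(?MODULE, {add_connection, ConnPid}).

get_lease() ->
    gen_server:call(?MODULE, get_lease).

return_lease(ConnPid) ->
    gen_server:cast(?MODULE, {return_lease, ConnPid}).

%% gen_server callbacks. The table is private: only this process touches it.
init([]) ->
    Tab = ets:new(pool, [set, private]),
    {ok, Tab}.

handle_call(get_lease, _From, Tab) ->
    case ets:match_object(Tab, {'_', free}) of
        [{ConnPid, free} | _] ->
            ets:insert(Tab, {ConnPid, busy}),
            {reply, {ok, ConnPid}, Tab};
        [] ->
            {reply, {error, no_free_connection}, Tab}
    end.

handle_cast({add_connection, ConnPid}, Tab) ->
    ets:insert(Tab, {ConnPid, free}),
    {noreply, Tab};
handle_cast({return_lease, ConnPid}, Tab) ->
    ets:insert(Tab, {ConnPid, free}),
    {noreply, Tab}.

handle_info(_Msg, Tab) -> {noreply, Tab}.
terminate(_Reason, _Tab) -> ok.
code_change(_OldVsn, Tab, _Extra) -> {ok, Tab}.

Put this under your supervisor as a worker child and it covers points 1-3 above; point 4 could be another handle_info clause driven by erlang:send_after/3.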
One approach would be to have a process dedicated to managing leases via messaging. Send a get_lease message to that process. It would receive the lease message, thus serializing access, and send a reply message to the requesting process when a lease becomes available. The lessee would send a return_lease message to the manager, which would add the lease back to the free-list.
The manager would also have to do something about processes that acquire a lease and fail to return it. It's a lease, so presumably there's an expiration that could be used for this, but the manager should also probably monitor the lessee and free the lease if a lessee fails.
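Building on the sketch above, the monitoring part could look roughly like this (same caveat: names are illustrative, and these clauses would replace the corresponding callbacks in that module). The state now also maps monitor references to leased connections; return_lease would demonitor and drop the matching entry.

%% Variant of the manager sketched earlier: monitor each lessee so that a
%% crashed client automatically gives its connection back to the pool.
%% State is {Tab, Monitors}, where Monitors maps monitor refs to conn pids.
init([]) ->
    Tab = ets:new(pool, [set, private]),
    {ok, {Tab, #{}}}.

handle_call(get_lease, {ClientPid, _Tag}, {Tab, Mons}) ->
    case ets:match_object(Tab, {'_', free}) of
        [{ConnPid, free} | _] ->
            ets:insert(Tab, {ConnPid, busy}),
            MRef = erlang:monitor(process, ClientPid),
            {reply, {ok, ConnPid}, {Tab, Mons#{MRef => ConnPid}}};
        [] ->
            {reply, {error, no_free_connection}, {Tab, Mons}}
    end.

%% A lessee died without returning its lease: reclaim the connection.
handle_info({'DOWN', MRef, process, _ClientPid, _Reason}, {Tab, Mons}) ->
    case maps:take(MRef, Mons) of
        {ConnPid, Rest} ->
            ets:insert(Tab, {ConnPid, free}),
            {noreply, {Tab, Rest}};
        error ->
            {noreply, {Tab, Mons}}
    end;
handle_info(_Msg, State) ->
    {noreply, State}.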
Cowboy is a web server written in Erlang. It spawns a new process for each request and then uses that process for subsequent requests if HTTP pipelining (sending multiple requests on the same socket one after another, without waiting for the responses, and assuming the responses will be sent back in the same order as the requests) is used by the client.
This is fine, but if you want to use that web server for building a realtime web app, it has one problem: when the socket is closed, for instance because of client network problems, the process representing that socket on the server is terminated. That means you can't use that process for storing session data, because in a realtime web app you probably want to live beyond the end of the HTTP request (if long polling is used, for instance) and have some state associated with the connected client, thinking of him as "he is online" even after the HTTP request has ended.
In SockJS, this is solved by spawning one more process for each client (one per session id).
So if you have 2000 clients using WebSockets, you will have around 4000 processes: one Cowboy process that represents the socket, and one more that keeps the session state alive in case the Cowboy process is terminated (for instance because of network problems).
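Roughly, the two-process arrangement looks like this (my own hand-rolled sketch, not the actual sockjs-erlang code; all names are made up): a session process per session id outlives the Cowboy socket process and re-attaches to whichever socket process is current.

-module(session).
-export([start/1, attach/2, loop/2]).

%% One of these is spawned per session id; it survives socket churn.
start(SessionId) ->
    spawn(?MODULE, loop, [SessionId, undefined]).

%% Called by the (new) Cowboy handler process whenever the client
%% connects or reconnects.
attach(SessionPid, SocketPid) ->
    SessionPid ! {attach, SocketPid}.

loop(SessionId, SocketPid) ->
    receive
        {attach, NewSocket} ->
            erlang:monitor(process, NewSocket),
            loop(SessionId, NewSocket);
        {'DOWN', _Ref, process, Pid, _Reason} when Pid =:= SocketPid ->
            %% The socket died (network problem, etc.); keep the session
            %% state alive and wait for the client to attach again.
            loop(SessionId, undefined);
        {'DOWN', _Ref, process, _OldSocket, _Reason} ->
            loop(SessionId, SocketPid);
        {send, Msg} when SocketPid =/= undefined ->
            SocketPid ! {push_to_client, Msg},
            loop(SessionId, SocketPid)
    end.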
THE QUESTION IS: I am relatively new to Erlang, so I don't know whether it makes much sense as a performance improvement, but I am thinking about rewriting that Cowboy web server a bit so that the process representing the realtime connection does not end until I want it to (the process would stay alive even when the underlying WebSocket is terminated).
This would eliminate the need for one more session process per client. So instead of 4000 processes you would have just 2000. Could that be a big performance boost in Erlang?
Erlang is pretty good with processes, but too much of anything ain't good. Using processes as direct mappings to sessions is not a good idea. Why not do it logically? I assume you can have some in-memory storage, say ETS, or even Mnesia.
If you are using WebSockets to communicate, each user is connected via one such process; you simply map a certain random unique session key to each individual process, and hence to each individual user.
-record(client, {web_sock_pid, session_key, username}).
If the process exits, and the client end has a way of reconnecting, then once it re-identifies itself as the same user, the session key still holds, but the pid of the attached process has changed. It does not matter.
If it is NOT WebSockets, and it is just HTTP REST/JSON/JSONP/XML services, then it is even easier. Use ETS tables in RAM. A new session is stored and the parameters defining that session are stored in RAM; then for each request, the session key can come along plus other parameters. Message delivery is by Comet or by frequent checks from the client end.
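A rough sketch of that ETS bookkeeping, reusing the record above (module, table and function names are just for illustration):

-module(session_store).
-export([init_sessions/0, new_session/3, reattach/2]).

%% Sessions live in a RAM ETS table, keyed on the session key.
-record(client, {web_sock_pid, session_key, username}).

init_sessions() ->
    ets:new(sessions, [named_table, public, set,
                       {keypos, #client.session_key}]).

%% Store a new session once the user has identified himself.
new_session(SessionKey, SocketPid, Username) ->
    ets:insert(sessions, #client{session_key  = SessionKey,
                                 web_sock_pid = SocketPid,
                                 username     = Username}).

%% The same user reconnects with a new process: the session key still
%% holds, only the attached pid changes.
reattach(SessionKey, NewSocketPid) ->
    case ets:lookup(sessions, SessionKey) of
        [Client] ->
            ets:insert(sessions, Client#client{web_sock_pid = NewSocketPid}),
            ok;
        [] ->
            {error, unknown_session}
    end.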
Sounds like you are doing some premature optimizations if you ask me.
Erlang processes are very inexpensive. You shouldn't really have to worry about spawning too many processes.
Write it with two processes per websocket, then do some measurements to see where it is using the most memory and wasting the most cpu cycles.
The use case is a server that connects to thousands of users' email accounts and sniffs incoming mail in Java, preferably with JavaMail and Spring Integration/AMQP/RabbitMQ-type scalable infrastructure, using IMAP IDLE connections and adding server processing nodes as needed.
A single inbound channel is easy with the IMAP IDLE inbound adapter; you could configure a few in XML. But if you need a persistent queue of thousands of these listener/IMAP IDLE channel adapters, and need to add new user connections dynamically for server processing, it becomes a challenge. You also need fault tolerance: if the Java listener dies or the server reboots, all these listeners and their configuration should come back up rather than rebuilding thousands of connections from scratch, and there should be recovery when some connections lose their IDLE receive capability, without rebuilding all user connections.
Any ideas are welcome, as I have searched a lot but could not find anything. This seems to be a significant scalability issue around keeping email receive connections open.
If you want to use the IMAP IDLE command to listen for new messages using JavaMail, you'll need one thread per mailbox, which is likely to impact your scalability. Even keeping thousands of connections open might be an issue.
You don't say how quickly you need to react to new messages. Unless you have near real time requirements, it might be better to poll a subset of mailboxes every so often, eventually cycling through all the mailboxes.
You'll need to deal with the fault tolerance issues yourself, using checkpointing or transactions or whatever seems appropriate for your application.
The other option is perhaps to take a look at something like Akka, with actors performing the async IO. You'll need to ditch the JavaMail package and parse the IMAP commands yourself, but there are lots of packages out there to do that. Would love to hear if you find a better solution.
I am using TIdCmdTCPClient and TIdCmdTCPServer. Suddenly I find that I might like to have bi-directional communication.
What would be best? Should I possibly use some other components? If so, which? Or should I kludge it and have the 'client' poll the 'server' to ask if it wishes to communicate anything?
This is a very small system. Two clients and ten servers, with a burst of one transaction every 30 to 60 seconds for a few minutes once a day, so the overhead of polling is inconsequential.
I'm just wondering if there is a 'correct' way.
Update: this really is an incredibly simple system. Very little traffic, and all of it simple. All transmissions are an indication of event type and an optional single parameter.
<event type> [ <parameter>] e.g. "HERE_IS_SOME_DATA 42"
This can be sent in both directions, however there is no "reply" as such. Just fire off a message (and hope that it got there)? Receive an Ack with no data? Not catching an exception indicates that the message was successfully sent?
Would it be possible (or would it be overkill) to use two TIdCmdTCPServer components?
Both TIdCmdTCPClient and TIdCmdTCPServer continuously poll their socket endpoints for inbound data during the lifetime of the connection. You do not have to do anything extra for that. So, as soon as a TIdCmdTCPClient connects to the TIdCmdTCPServer, both components will initially be in a reading state until one of them sends a command to the other.
Now, there is a problem with doing that - as soon as either component sends that first command, the receiving component will interpret it as a command and send back a reply, which the other component will interpret as a command and send back a reply, which will be interpreted as a command and send back a reply, and so on, causing an endless cycle of replies back and forth. For that reason, it is not wise to use TIdCmdTCPClient and TIdCmdTCPServer together. You should either use TIdTCPClient with TIdCmdTCPServer, or use TIdCmdTCPClient with TIdTCPServer. Depending on what exactly your protocol looks like, you may have to forgo using TIdCmdTCPClient and TIdCmdTCPServer altogether and just use TIdTCPClient with TIdTCPServer so you have more control over reading and writing on both ends. It is hard to answer with actual code without first knowing what the communication protocol should look like.
A single TCP socket connection can be used in two directions. The server can send data asynchronously to the client at any time. It is up to the client however to read the socket, for asynchronous processing this is done in a listener thread which reads from the socket and synchronizes incoming data operations with the main worker thread.
An example use case in the Indy components is the Telnet client component (TIdTelnet) which has a receive thread listening for server messages.
But you also asked about the 'correct' way - and then the answer depends on other factors such as network stability, guaranteed delivery and how to handle temporary server outages. In enterprise environments, one central messaging hub is preferred in many use cases, so that all parties connect only to this central server which is only responsible for reliable message delivery, and keeps messages until the recipient is available.
You can download the INDY 10 TCP server demo sample code here.
I have over 50 clients connected to one server (a low-end server running Windows 2003 Server). Every time there is a power failure or switch failure, the clients disconnect from the server; the server might remain on during these incidents (if power backup is installed). When the clients come back, they automatically detect the server and initiate a connection procedure, at which point the server starts dishing out the relevant data to the clients. It is at this point that some clients start freezing, because the server is not quick enough to dish out data and so it blocks the rest of the clients.
I have implemented a crude method to control this client storm, but I was asking whether anyone out there has better algorithms to perform this kind of task.
NB: I am using Asta sockets components in a Delphi application, but I don't mind examples from different fields.
Similar to network collision-detection protocols, perhaps clients could wait a random period of time before initiating their connection at startup?
In addition to the random startup delay suggested by Bremen, implement some sort of "too busy; try again later" message in your protocol. Rejecting a client with a short message should not be a problem for 50, 100, or even 1000 clients. Have the clients respond by doing a random delay and retrying with exponential backoff.
The solution depends on your preferences as well. Is it OK for you to drop the connection request or to send a busy message?
Another option is to start sending data to the clients in a sort of round-robin manner. To this end you can have different threads responsible for sending data to different clients. The advantage in this case is that none of the clients will be starved.
I'm looking to detect local connection loss. Is there a means to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using DataSnap 2009. So I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect and handle connection loss (internet connectivity) between the client and the server app appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another app server.
In 2-tier I used to manage that with ODAC components; the TOraSession has some events to handle these issues.
Normally there is no event fired when a connection is broken, unless a statement is executed against the database. This is because there is no way of knowing about a connection loss unless there is some sort of is-alive pinging going on.
Many frameworks check whether a connection is still valid by running a very small query against the server, for example getting the time from the server. This is especially common in connection pooling environments.
You can implement a connection-checking function in your application in some of the database events (BeforeExecute?), or set up a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends some RPC 'Ping' or 'Heartbeat' commands to the server.
If this fails, the client knows that something happened to the connection.
If the server does not hear from the client for some time period (for example, two times the heartbeat interval), it can conclude that the client disconnected. However, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).