How do I ignore timeouts in erlang? - erlang

I have a server with a number of clients, and each client is able to ask the server for information about the other clients. If they do so, the server has to get the information from each client and then return it to the asking client.
If two clients make this request at the same time, a deadlock can appear. The thing is that this request is made so often that the client does not have to care if it sometimes fails. How do I just ignore the timeout message that terminates everything when this problem appears?

Strict answer to your question
If you're using gen_server, then call/3 allows you to specify a timeout (and call/2 defaults to 5 seconds).
This code will either give you the gen_server's reply, or the atom timeout if the call timed out (on timeout, gen_server:call exits with {timeout, {gen_server, call, ...}}, which the catch clause below matches).
Result = try gen_server:call(Target, Message, Timeout) of
             Reply ->
                 Reply
         catch
             exit:{timeout, _} ->
                 timeout
         end.
Better answer
evnu and rvirding recommended using asynchronous calls, which is a superior technique. Here are three possible ways to do this:
1. Server stores the data
Have clients periodically use gen_server:cast/2 to tell the server their information. The server stores the latest information about each client. When a client wants to learn about its siblings, it uses gen_server:call/2 to ask the server.
The server call is synchronous because it doesn't need to contact any client; it's just returning the cached values.
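A minimal sketch of this approach, assuming a plain gen_server; the module name, message shapes, and map-based cache are my own inventions, not from the question:

-module(info_server).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) ->
    {ok, #{}}.    %% cache: ClientId => latest Info

%% Clients push their state periodically; cast/2 never blocks the sender.
handle_cast({update, ClientId, Info}, Cache) ->
    {noreply, Cache#{ClientId => Info}}.

%% Reads are synchronous but touch only the local cache, so no deadlock.
handle_call(get_all, _From, Cache) ->
    {reply, Cache, Cache}.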
2. Async return
The clients call gen_server:cast/2 to request data from the server. The server calls gen_server:call/2 to fetch data from each client on demand. Once the server has collected all data, it calls gen_server:cast/2 to pass the collected data back to the client that requested it.
Here, the clients are always waiting to handle requests from the server. The server calls the client synchronously, but can't deadlock because there is only one server.
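Sketched below with invented message names; these are fragments, not a full module: the first clause would live in the server, the second in each client.

%% Server: a client asked for its siblings' data via
%% gen_server:cast(Server, {get_siblings, self()}).
handle_cast({get_siblings, From}, State = #{clients := Clients}) ->
    Data = [gen_server:call(C, get_info) || C <- Clients, C =/= From],
    gen_server:cast(From, {siblings, Data}),
    {noreply, State}.

%% Client: the collected data arrives asynchronously later.
handle_cast({siblings, Data}, State) ->
    {noreply, State#{siblings => Data}}.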
3. More gen_servers
This one's hard to describe without knowing more about your code, but you could break the clients into more pieces: one piece to handle data requests and another to generate the requests.
Based on your description that the clients make this data request "so often", I think you should try the first method. If your clients are requesting data frequently enough, having the server collect and cache the client information will actually result in fresher data for the clients.

Related

Rails API, microservices, async/deferred responses

I have a Rails API which can handle requests from the clients. Clients use that API to perform analysis of their data. A client POSTs the data to the API, and the API checks whether that data has been analyzed before. If so, the API just responds with the analysis result. If the data hasn't been analyzed before, the API:
Tells the client that the analysis has started.
Establishes a connection with the analyzing microservice.
Performs an asynchronous (or deferred, I'm not sure of the term) request to the analyzing microservice and waits for the response. The analysis takes a long time, so neither the API nor the microservice should be blocked while doing it.
When the response from the analyzing microservice is returned, the API hands it to the client.
The main issue for me is to set things up in such a way that the client receives the message "Your data has been sent for analysis" right after making the request, and then receives the result once the analysis is done.
The question is which approach I should use in this case. Async responses, deferred responses, something else? And what known solutions could help me with that? Any gems?
I'm new to that stuff so I'm really sorry if I ask dumb questions.
If using HTTP you can only have one response to every request. To send multiple responses, i.e. "work in progress" and later the "results", you would need to use a different protocol, e.g. web sockets.
Since HTTP is so very common I'd stick with that in combination with background jobs. There are a couple of options which spring to mind.
Polling: The API kicks off a background job (to call the microservice) and responds to the client with a URL which the client can ping periodically for the result. The URL responds with some kind of "work in progress" status until the result is actually ready. The URL needs to include some kind of id so the API can look up the background job.
The API would potentially have two URLs: /api/jobs/new and /api/jobs/<ID>. In Rails, they would map to a controller's new and show actions.
Webhooks: Have the client include a URL of its own in the request. Once the result is available, have the background job hit the given URL with the result.
Either way, if using HTTP, you will not be able to handle the whole thing within one request/response; you will have to use some kind of background processing (so the request to the microservice happens in a different process). You could look at Sidekiq, for example.
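For the webhook option, the background job's final step is just an HTTP POST back to the URL the client supplied. A sketch in Erlang (matching the language of the main question above) using OTP's bundled httpc client; the function name and payload are assumptions:

%% Called by the background job once the result is ready.
%% CallbackUrl and ResultJson come from the original client request.
notify_client(CallbackUrl, ResultJson) ->
    {ok, _} = application:ensure_all_started(inets),
    httpc:request(post,
                  {CallbackUrl, [], "application/json", ResultJson},
                  [], []).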
Here is an example for polling:
URL: example.com/api/jobs/new
- the web app receives the client request
- generates a unique id for the request, e.g. SecureRandom.uuid
- starts a background job (Sidekiq), passing in the uuid and any other parameters needed
- responds with a URL such as example.com/api/jobs/<UUID>

Background job:
- sends a request to the microservice API and waits for the response
- saves the result to the database under the uuid

URL: example.com/api/jobs/<UUID>
- look up the UUID in the database; if not found, respond that the job is "in progress", otherwise return the result found in the database
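The flow itself is language-agnostic. Here is the same pattern condensed into an Erlang sketch, to stay consistent with the main question above; every module, function, and message name below is invented:

-module(job_registry).
-behaviour(gen_server).
-export([start_link/0, submit/1, status/1,
         init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% "POST /api/jobs/new": kick off the work, return an id immediately.
submit(Params) ->
    Id = erlang:unique_integer([positive]),
    gen_server:cast(?MODULE, {run, Id, Params}),
    Id.

%% "GET /api/jobs/<ID>": in_progress until the worker has finished.
status(Id) ->
    gen_server:call(?MODULE, {status, Id}).

init([]) ->
    {ok, #{}}.    %% Id => in_progress | {done, Result}

handle_cast({run, Id, Params}, Jobs) ->
    Registry = self(),
    spawn_link(fun() ->
                   Result = expensive_work(Params),   %% e.g. call the microservice
                   gen_server:cast(Registry, {done, Id, Result})
               end),
    {noreply, Jobs#{Id => in_progress}};
handle_cast({done, Id, Result}, Jobs) ->
    {noreply, Jobs#{Id => {done, Result}}}.

handle_call({status, Id}, _From, Jobs) ->
    {reply, maps:get(Id, Jobs, unknown), Jobs}.

expensive_work(Params) ->
    timer:sleep(1000),    %% stand-in for the real work
    {analyzed, Params}.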
It depends on what kind of API you use; I assume your clients interact via HTTP.
If you want to build an asynchronous API over HTTP, the first thing you should do is accept the request, create a job, hand it off to the background, and return immediately.
For the client to get the response, you have two options:
Implement a status endpoint where clients can periodically poll the status of the job
Implement a callback via webhooks: the client provides a URL which you then call after you're done.
A good starting point for background processing is the sidekiq gem or, more generally, ActiveJob, which ships with Rails.

HTTP disconnect/timeout between request and response handling

Assume following scenario:
The client sends an HTTP POST to the server
The request is valid and has been processed by the server. Data has been inserted into the database.
The web application responds to the client
The client hits a timeout and does not see the HTTP response
In this case we end up in a situation where:
- the client does not know whether its data was valid and was inserted properly
- the web server (a Rails 3.2 application) does not raise any exception, whether or not it is behind an Apache proxy
I can't find how to handle such a scenario in the HTTP documentation. My questions are:
a) should the client expect that its data MAY already have been processed? (and then try, for example, a GET request to check whether the data has been submitted)
b) if not (a), should the server detect it? Is there a way to do this in Rails? In that case the changes could be reversed. I would expect some kind of exception from the Rails application here, but there is none...
HTTP is a stateless protocol, which means that by definition the client cannot know whether the POST verb succeeded or not.
There are some techniques that web applications use to overcome this HTTP 'feature'. They include:
server side sessions
cookies
hidden variables within the form
However, none of these are really going to help with your issue. When I have run into these types of issues in the past they are almost always the result of the server taking too long to process the web request.
There is a really great quote that I whisper to myself on sleepless nights:
“The web request is a scary place, you want to get in and out as quick
as you can” - Rick Branson
You want to be getting into and out of your web request in 100-500 ms. Meet those numbers and you will have a web application that behaves well and plays well with web servers.
To that end, I would suggest that you investigate how long your POSTs are taking and figure out how to shorten those requests. If you are doing some serious processing on the server side before doing DBMS inserts, you should consider handing it off to some sort of tasking/queuing system.
An example of 'serious processing' could be some sort of image upload, possibly with some image processing after the upload.
An example of a tasking and queuing solution would be: RabbitMQ and Celery
An example solution to your problem could be:
insert a portion of your data into the DBMS (or, even faster, some NoSQL store)
hand off the expensive processing to a background task
return to the user/web-client (even though the task is still running in the background)
listen for the final response via polling, streaming, or websockets; this step is not a trivial undertaking, but the end result is well worth the effort
Tighten up those web requests and it will be a rare day that your client does not receive a response.
On that rare day that the client does not receive the data: how do you prevent multiple POSTs? I don't know anything about your data, but there are some schema-related things you can do to uniquely identify each POST, i.e. figure out on the server side whether the data is an update or a create.
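One common shape for this is an idempotency key: the client generates a unique id per logical request and resends the same id on every retry, and the server treats a repeated id as a no-op. A sketch in Erlang using an ETS table; the table name and id scheme are assumptions, not from the question:

%% ets:insert_new/2 succeeds only if the key is not present yet, so a
%% retried POST with the same id cannot create a duplicate row.
init_store() ->
    ets:new(posts, [named_table, set, public]).

create(RequestId, Data) ->
    case ets:insert_new(posts, {RequestId, Data}) of
        true  -> created;      %% first time this id was seen
        false -> duplicate     %% a retry of an earlier request
    end.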
This answer covers some of the polling / streaming / websockets techniques you can use.
You can handle this with Ajax and jQuery, as the documentation of the complete callback (quoted below) explains:
Complete
Type: Function( jqXHR jqXHR, String textStatus )
A function to be called when the request finishes (after success and error callbacks are executed). The function gets passed two arguments: The jqXHR (in jQuery 1.4.x, XMLHTTPRequest) object and a string categorizing the status of the request ("success", "notmodified", "error", "timeout", "abort", or "parsererror").
jQuery ajax API
As for your second question, whether there is a way to handle this through Rails: the answer is no, as the timeout comes from the client side and not the server side. However, to revert the changes, I suggest using one of the following to detect whether the user is still online or not:
http://socket.io/
websocket-rails

Process job using workers while client waits and return response when complete

I'm building an API using Rails where requests come in and they need to be executed by a cluster of workers running on a different server (these workers call remote APIs and parse the data, etc...). I'm going to be using Sidekiq or Resque to handle the queueing/processing of that.
My issue is that the client needs to wait while this is happening, and the controller needs to return the response to the client once it's complete. How would I handle this in the controller? We're using a Redis backend, so I was thinking of something along the lines of subscribing to a pub/sub channel and waiting for the worker to publish a status message. The controller would wait for a set time period and then return a 'check back later' response to the client if it doesn't receive a message in time. What would be the best way to implement that, or is there a better solution?
Do not make your clients wait! There are a lot of issues if you make the controller block for a long-running job:
Other programs may assume the request timed out (proxies, browsers, scripts, etc.)
It makes your API endpoints a source of denial of service
It requires you to put more engineering work into your web servers (since a Rails process can't handle another web request while it's handling the blocking call)
Part of the reason for using Sidekiq or Resque is to avoid controllers that do heavy lifting during the HTTP request.
Instead, background jobs should report their status to the database, and the web server should query the database and return the latest status to the client.
If clients need more immediate feedback, you can:
make clients constantly poll
POST a request to the client (if the API consumer is another web server)
use another protocol mechanism (e.g. websockets).

Implementing Acknowledge-Extension for CometD in Jetty/ASP.NET

We're using CometD 2 to achieve the connection between a central data provider and several backends consuming the data. Up to now, when one of the backends fails briefly, all messages posted in the meantime are lost. Now we have heard about the "Acknowledge Extension" for CometD. It is supposed to keep a server-side list of messages and deliver them when one of the clients reports to be back online. Here are some questions:
1) Does this also work with several clients?
2) The documentation (http://cometd.org/documentation/2.x/cometd-ext/ack) says: "Note that if the disconnected browser is disconnected for in excess of maxInterval (default 10s), then the client will be timed out and the unacknowledged queue discarded." Does this mean that if my client doesn't recover within the maxInterval, the messages are lost anyway?
Hence,
2.1) What's the maximal maxInterval? What consequences does setting it to a high value have?
2.2) We'd need a mechanism that safely covers outages of at least a few minutes. Is this possible? Are there any alternatives?
3) Is it really only necessary to add the two extensions on both the client and the CometD server? We're using Jetty for the server and .NET Oyatel for the client. Does anyone have experience with this?
I'm sorry for this bunch of questions, but unfortunately, the CometD project isn't really well documented. I really appreciate any answers.
Cheers,
Chris
1) Does this also work with several Clients
Yes, it does. There is one message queue allocated for each client (see AcknowledgedMessagesClientExtension).
2) Does this mean that if my client doesn't recover within the maxInterval, the messages are lost anyway?
Yes, it does. When the client can't reach the server for maxInterval milliseconds, the server will throw away all state associated with that client.
2.1) What's the maximal maxInterval? What consequences does setting it to a high value have?
maxInterval is a servlet parameter of the CometD servlet. It is internally treated as a long value, so its maximal value is Long.MAX_VALUE.
Example configuration:
<init-param>
    <!-- The max period of time, in milliseconds, that the server will wait for
         a new long poll from a client before that client is considered invalid
         and is removed -->
    <param-name>maxInterval</param-name>
    <param-value>10000</param-value>
</init-param>
Setting it to a high value means that the server will wait longer before throwing away the state associated with a client (from the time the client stops contacting the server).
I see two problems with this. First, the memory requirements of the server will potentially be higher (which may also make denial of service easier). Second, the RemoveListener isn't called on the Server before the maxInterval expires, which may require you to implement additional logic that differentiates between "momentarily unreachable" and "disconnected".
2.2) We'd need a mechanism that safely covers outages of at least a few minutes. Is this possible? Are there any alternatives?
Yes, it is possible to configure the maxInterval to last for a few minutes.
An alternative would be to restore any server side state on every handshake. This can be achieved by adding a listener to "/meta/handshake" and publishing a message to a "/service/" channel (to make sure only the server receives the message), or by adding an additional property to the "ext" property of the handshake message. Be careful to let the client restore only valid state (sign it on the server if you must).
3) Is it really only necessary to add the two extensions in both the client and cometD server?
On the server it is sufficient to do something like:
bayeux.addExtension(new AcknowledgedMessagesExtension());
I don't know how you'd do it in Oyatel. In JavaScript it suffices to simply include the extension (dojo.require, or a script include for jQuery).
When a client with the AckExtension connects to the server, a message similar to the following will be logged (from my Jetty console log):
[qtp959713667-32] INFO org.cometd.server.ext.AcknowledgedMessagesExtension - Enabled message acknowledgement for client 51vkuhps5qgsuaxhehzfg6yw92
Another note, because it may not be obvious: the ack extension only provides a server-to-client delivery guarantee, not client-to-server. That is, when you publish a message from the client to the server, it may not reach the server and will be lost.
Once the message has made it to the server, the ack extension will ensure that all recipients connected at that time will receive the message (as long as they aren't unreachable for maxInterval milliseconds).
It is relatively straightforward to implement client-side retrying if you listen to notifications on "/meta/unsuccessful" and resend the message (the original message that failed is passed as message.request to the handler).

Progress feedback in stateless HTTP session

I need to program a stateless server to execute remote methods. The client uses REST with a JSON parameter to pass the method name and its parameters. After the result has been delivered, the session is closed. I have to use Indy10 with TCP/IP as the protocol, and am therefore looking at IdHTTPServer.
Large result sets are chunked by Indy10 and sent to the client in parts.
My problem now is:
The methods on the server provide progress information when they take a long time to produce their results. These are short messages. How can I write them back to the client?
So far I have used writeflush on the server, but the client waited for the request to end before handing back the full resultset, including the progress information. What can I do to display/process such progress information on the client and yet keep the connection open to receive further data on the same request?
On the client side, instead of the regular HTTP client component TIdHTTP, you can use the Indy class TIdTCPClientCustom in unit IdTCPClient to send the request and process the response.
This class gives total control over the processing of the server responses. I used the TIdTelnet class as a starting point to implement a client for a message broker messaging protocol, and found it stable and reliable for both text and binary data.
In the receiving thread, the incoming data can be read up to delimiters, parsed into chunks (for the progress information), and processed immediately.
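Independent of Indy, the receiving side boils down to: read the stream delimiter by delimiter, surface progress messages immediately, and keep accumulating until the final payload arrives. A sketch in Erlang with gen_tcp; the line-based framing and the PROGRESS/DONE tags are invented for illustration:

%% Assumes a socket opened with [binary, {packet, line}, {active, false}].
read_response(Socket) ->
    read_response(Socket, []).

read_response(Socket, Acc) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, <<"PROGRESS ", Msg/binary>>} ->
            io:format("progress: ~s", [Msg]),   %% handle it right away
            read_response(Socket, Acc);
        {ok, <<"DONE\n">>} ->
            {ok, lists:reverse(Acc)};           %% full result collected
        {ok, Chunk} ->
            read_response(Socket, [Chunk | Acc]);
        {error, Reason} ->
            {error, Reason}
    end.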
