How to set Jetty Socket timeout - timeout

I am trying to configure the Jetty server for a socket (read) timeout. I have one REST API exposed on the server, and it might take well over 1 minute to send the data over the wire. Where can I set this so that the client doesn't get a read timeout and receives all the data? Or does it go on the HTTP connector I added to my server?

A read timeout on the client wouldn't be controlled by a server setting.
For server-level controls, you should be looking at the idle timeout on the ServerConnector.
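For reference, here is a minimal sketch of raising the idle timeout on an embedded Jetty ServerConnector (this assumes the Jetty 9+ embedded API; the port and timeout values are placeholders):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class SlowResponseServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080); // placeholder port
        // Raise the idle timeout so a response that takes over a minute to
        // stream isn't cut off by the connector's idle detection.
        connector.setIdleTimeout(120_000L); // milliseconds

        server.addConnector(connector);
        server.start();
        server.join();
    }
}

If you configure Jetty through jetty.xml rather than embedding it, the corresponding idleTimeout property on the connector is the setting to look for.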

Related

Spring Web Socket not broadcasting message to all application servers

I am trying to set up a load-balanced environment with 2 application server instances. I am unable to make Spring WebSocket relay messages to all instances. Let's take an example to describe my problem better:
Server 1 : Responsible for job executions and 35% user load.
Server 2 : 100% user load.
Both are connected to the same database schema, so a job request can come in on either server instance but will get executed on Server 1.
Now, I have used the Spring WebSocket plugin for my Grails application, and I push messages to the browser using:
brokerMessagingTemplate.convertAndSend(user.notificationChannel, ((notification.toMap(user) as JSON)).toString())
It was working fine on a single-server setup. But on the multi-server setup,
notifications are only received on Server 1, as that is the instance calling the code; if I reverse the scenario, the opposite result is observed.
How can I push the same notification to all server instances, so that the user always gets the notification no matter which server instance he is on?
I initially thought of utilising a common queue like RabbitMQ, but that would add to the system requirements and would get rejected by the client.
NOTE: Third-party service solutions won't work in my case, as the applications are on an intranet and don't have internet access.
WebSockets by default point to a hostname/IP address. Whilst you could set up a DNS record/hostname that points to multiple different IPs/servers, that in itself would break the communication flow of the WebSockets if the handshake went to one server and the message to another.
The simplest approach would be a DB table shared across both instances: as each instance comes up, it records its local IP/socket port in that table. Each instance can then read the table and work out, at any point, which hosts it should transmit a socket message to. The table would need to be managed somehow: on a brand-new boot it would be empty and would populate as instances came up, and entries would again need managing when a host is shut down.
Each instance would then also run an internal WebSocket client. When a message is sent, the client would be triggered, look up all live WebSocket servers from the DB, and attempt to connect to each one and pass the message on. Each server would then get the message and either broadcast it to all connected users or, if it is from user X and meant for user X, relay it (as the chat plugin does) only to user X if that user is found on server Y, and so on.
This keeps everything in line with one technology controlling the entire process: a WebSocket server that has its own client, which relays to each of the multiple WebSocket server instances.
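A rough sketch of that relay idea, using the plain Java 11+ java.net.http WebSocket client rather than anything Grails-specific (the peer list, the /internal/relay endpoint, and the class name are all hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.List;

public class NotificationRelay {

    // Assumed to be read from the shared DB table that live instances register themselves in.
    private final List<String> peerEndpoints; // e.g. "10.0.0.5:8080"

    public NotificationRelay(List<String> peerEndpoints) {
        this.peerEndpoints = peerEndpoints;
    }

    // Forward one notification payload to every registered instance.
    public void relay(String payload) {
        HttpClient client = HttpClient.newHttpClient();
        for (String endpoint : peerEndpoints) {
            URI uri = URI.create("ws://" + endpoint + "/internal/relay"); // hypothetical internal endpoint
            client.newWebSocketBuilder()
                  .buildAsync(uri, new WebSocket.Listener() {})
                  .thenCompose(ws -> ws.sendText(payload, true))
                  .thenAccept(WebSocket::abort); // fire-and-forget for this sketch
        }
    }
}

Each instance would expose the /internal/relay endpoint and, on receiving a relayed payload, hand it to its local broker (e.g. via brokerMessagingTemplate.convertAndSend) so that locally connected users get it.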

WSO2 ESB connections issue

I am working with WSO2 ESB 4.9.0 and have around 160 HTTP services that are processed frequently. Initially, when the server is started, everything is fine and all service requests/responses are up to the mark.
After 10-12 days the ESB server hangs and does not process any requests, and no exceptions are seen even in the log file. Some requests seem to be piled up in the server, not allowing new connections to be processed.
When I restart the server, all the connections get released and it works for another 10-12 days.
But restarting the server may not be a good idea. Where can I find these connections and close them if possible, and am I missing any configuration changes in WSO2 ESB?
I am trying to find the various connection counts using JMX, and I'd also like to know if anyone has faced this issue and found a possible solution.
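Since the question mentions probing connection counts over JMX, here is a hedged sketch of the kind of probe you could script with the standard javax.management client API. The JMX URL/port and the org.apache.synapse MBean domain pattern are assumptions and depend on how JMX is enabled in your Carbon configuration.

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class EsbConnectionInspector {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; adjust to whatever JMX endpoint your server exposes.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:11111/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // List transport-related MBeans so their connection attributes can be watched over time.
            Set<ObjectName> names = mbs.queryNames(new ObjectName("org.apache.synapse:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        }
    }
}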

IIS re-processing the same request during 504 (Gateway timeout) on AWS (Amazon)

The setup is as follows =>
There is an Amazon ELB (Elastic Load Balancer) that forwards requests to IIS. The ELB has a timeout setting of 15 seconds.
If the web server takes more than 15 seconds to process a request, I observe two behaviors.
1) Sometimes, at the 15-second mark, a 504 (Gateway Timeout) is issued to the client (browser). This behavior I understand, and it is expected.
2) However, sometimes, at the 15-second mark, the web server (IIS) begins to re-process the same request from the beginning. No 504 (Gateway Timeout) is issued to the client. This behavior I don't understand. I use the ASP.NET MVC stack.
I know it's the same request from the client because the client-generated ID stays the same for the request, but there is a new server-generated ID. So the intermediary (ELB) seems to be re-forwarding the request at the timeout (15-second) mark in some cases.
Does anyone have insights on what could be causing (2) ?
It sounds like you're dealing with this issue:
http://absenceofblue.blogspot.com/2013/08/retry-hassles-with-elb.html
Supposedly it was fixed, but apparently if your backend timeout is the same as the ELB timeout there can be a race condition. Set your backend timeout to at least 1 second greater than the ELB timeout, so that the ELB drops the connection rather than your code.
The interesting thing about this is that the retry DOES in fact originate from the client, but it's because the client's TCP stack is fooled into thinking the packet was dropped so it retries the packet at the TCP level.

Grails handling network connection stall

I am using Grails Ws-Client Plugin
but my application waits for the SOAP response from the server whose web service I am consuming, and it hangs at this code:
def proxy = webService.getClient(wsdlUrl)
This mostly occurs when the server is down or the network connection is slow.
The wait also continues when the web service has been temporarily removed from the server and the URL containing the WSDL redirects to the website's home page when accessed in a web browser.
How can I detect whether the WSDL is present or not, and how can I set a timeout-like property so that the wait for a response lasts 10 seconds, after which it stops waiting and the code continues executing normally in case of a stall?
I also don't get any exception or error.
It sounds like there are no read and/or connect timeouts set on the client by default. This should help if the web service is down: proxy.setConnectionTimeout(value_in_milliseconds)
I'm not sure about setting the read timeout though, which is what you'd see if the host was up and accepting connections but the web service wasn't available or not responding. The best solution we found for this was to use the Apache Commons HTTP client instead of the default client, which gave us much more granular configuration over the client's connection settings. It's possible they're in the WS-Client plugin also, but the relevant documentation (actually the GroovyWS documentation) doesn't appear to mention anything about read timeouts.
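If you want to check the WSDL yourself before building the client, here is a hedged sketch using only the JDK's HttpURLConnection (this is not the WS-Client plugin API; the 10-second values mirror the timeout asked about above):

import java.net.HttpURLConnection;
import java.net.URL;

public class WsdlProbe {
    // Returns true only if the WSDL URL answers with 200 OK within the timeouts.
    public static boolean isWsdlReachable(String wsdlUrl) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(wsdlUrl).openConnection();
            conn.setConnectTimeout(10_000);         // give up after 10s if the host doesn't answer
            conn.setReadTimeout(10_000);            // give up after 10s if the host answers but stalls
            conn.setInstanceFollowRedirects(false); // a redirect to the site's home page won't count as OK
            conn.setRequestMethod("GET");
            return conn.getResponseCode() == HttpURLConnection.HTTP_OK;
        } catch (Exception e) {
            return false; // down, unreachable, or timed out
        }
    }
}

Calling something like this before webService.getClient(wsdlUrl) lets you skip client creation (or show an error) instead of blocking when the host is down or the WSDL has been removed.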

Implementing Acknowledge-Extension for CometD in Jetty/ASP.NET

We're using CometD 2 to achieve the connection between a central data provider and several backends consuming the data. Up to now, when one of the backends fails briefly, all messages posted in the meantime are lost. Now we have heard about the "Acknowledge Extension" for CometD. It is supposed to keep a server-side list of messages and deliver them when one of the clients reports to be back online. Here are some questions:
1) Does this also work with several clients?
2) The documentation (http://cometd.org/documentation/2.x/cometd-ext/ack) says: "Note that if the disconnected browser is disconnected for in excess of maxInterval (default 10s), then the client will be timed out and the unacknowledged queue discarded." -> does this mean that in case my client doesn't restore within the maxInterval, the messages are lost anyway?
Hence,
2.1) What's the maximal maxInterval? Which consequences does it have to set it to a high value?
2.2) We'd need a secure mechanism for fail outs of at least a few minutes. Is this possible? Are there any alternatives?
3) Is it really only necessary to add the two extensions in both the client and cometD server? We're using Jetty for the server and .NET Oyatel for the client. Does anyone have some experiences with this?
I'm sorry for this bunch of questions, but unfortunately, the CometD project isn't really well documented. I really appreciate any answers.
Cheers,
Chris
1) Does this also work with several Clients
Yes, it does. There is one message queue allocated for each client (see AcknowledgedMessagesClientExtension).
2) does this mean that in case my client doesn't restore within the maxInterval, the messages are lost anyway?
Yes, it does. When the client can't reach the server for maxInterval milliseconds, the server will throw away all state associated with that client.
2.1) What's the maximal maxInterval? Which consequences does it have to set it to a high value?
maxInterval is a servlet parameter of the Cometd servlet. It is internally treated as a long value, so the maximal value for it is Long.MAX_VALUE.
Example configuration:
<init-param>
<!-- The max period of time, in milliseconds, that the server will wait for
a new long poll from a client before that client is considered invalid
and is removed -->
<param-name>maxInterval</param-name>
<param-value>10000</param-value>
</init-param>
Setting it to a high value means that the server will wait longer before throwing away the state associated with a client (from the time the client stops contacting the server).
I see two problems with this. First, the memory requirements of the server will potentially be higher (which may also make denial of service easier). Second, the RemoveListener isn't called on the Server before the maxInterval expires, which may require you to implement additional logic that differentiates between "momentarily unreachable" and "disconnected".
2.2) We'd need a secure mechanism for fail outs of at least a few minutes. Is this possible? Are there any alternatives?
Yes, it is possible to configure the maxInterval to last for a few minutes.
An alternative would be to restore any server side state on every handshake. This can be achieved by adding a listener to "/meta/handshake" and publishing a message to a "/service/" channel (to make sure only the server receives the message), or by adding an additional property to the "ext" property of the handshake message. Be careful to let the client restore only valid state (sign it on the server if you must).
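For illustration, here is a sketch of that handshake-driven restore using the CometD Java client API (the JS client has an equivalent addListener call; the /service/restore channel name and the shape of the state are assumptions for this sketch):

import org.cometd.bayeux.Channel;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;

public class StateRestoringListener {
    public static void install(BayeuxClient client, String stateJson) {
        client.getChannel(Channel.META_HANDSHAKE).addListener(
                (ClientSessionChannel.MessageListener) (channel, message) -> {
                    if (message.isSuccessful()) {
                        // Re-publish the client's state on every (re-)handshake; using a
                        // /service/ channel means only the server sees it.
                        client.getChannel("/service/restore").publish(stateJson);
                    }
                });
    }
}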
3) Is it really only necessary to add the two extensions in both the client and cometD server?
On the server it is sufficient to do something like:
bayeux.addExtension(new AcknowledgedMessagesExtension());
I don't know how you'd do it on Oyatel. In JavaScript it suffices to simply include the extension (dojo.require, or a script include for jQuery).
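For comparison, on the CometD Java client the registration is symmetric to the server side; this sketch assumes the AckExtension class shipped with the Java client (org.cometd.client.ext.AckExtension) and a placeholder URL:

import java.util.HashMap;
import org.cometd.client.BayeuxClient;
import org.cometd.client.ext.AckExtension;
import org.cometd.client.transport.LongPollingTransport;

public class AckClientExample {
    public static void main(String[] args) {
        // Plain HTTP long-polling transport pointed at a placeholder CometD URL.
        BayeuxClient client = new BayeuxClient(
                "http://localhost:8080/cometd",
                LongPollingTransport.create(new HashMap<String, Object>()));

        // Mirror of bayeux.addExtension(new AcknowledgedMessagesExtension()) on the server.
        client.addExtension(new AckExtension());

        client.handshake();
    }
}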
When a client with the AckExtension connects to the server, a message similar to the following will be logged (from my Jetty console log):
[qtp959713667-32] INFO org.cometd.server.ext.AcknowledgedMessagesExtension - Enabled message acknowledgement for client 51vkuhps5qgsuaxhehzfg6yw92
Another note, because it may not be obvious: the ack extension only provides a server-to-client delivery guarantee, not client-to-server. That is, when you publish a message from the client to the server, it may not reach the server, and in that case it will be lost.
Once the message has made it to the server, the ack extension will ensure that all recipients connected at that time will receive the message (as long as they aren't unreachable for maxInterval milliseconds).
It is relatively straightforward to implement client-side retrying if you listen to notifications on "/meta/unsuccessful" and resend the message (the original message that failed is passed as message.request to the handler).