I'm using java.net.http.HttpClient.newHttpClient() under Java 19 (Temurin) and performing sendAsync(...) requests from different threads on the same instance. I assume this is OK, as the javadoc states:
Once built, an HttpClient is immutable...
However, some requests fail with:
java.io.IOException: HTTP/1.1 header parser received no bytes
The weird thing is that it depends on the frequency of my requests:
Requests every 5 seconds: 30% failure
Requests every 3 seconds: 0% failure
I've written a test for it:
private final HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://..."))
        .setHeader("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofByteArray("[]".getBytes()))
        .build();
@ParameterizedTest
@ValueSource(ints = {3, 5})
void httpClientTest(int intervalSeconds) throws Exception {
    HttpClient httpClient = HttpClient.newHttpClient();
    // five requests, sleeping for the given interval between them
    for (int i = 0; i < 5; i++) {
        if (i > 0) {
            Thread.sleep(Duration.ofSeconds(intervalSeconds));
        }
        httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray()).get();
    }
}
I've already tried the following:
Doing the same with curl on the command line. No requests fail, whatever interval I try. So it's probably not a problem with the server.
Running the tests multiple times in parallel. The 5-second intervals still fail (then multiple times in parallel). So it's probably not a problem with the server.
Creating a new HttpClient.newHttpClient() for every request. No requests fail, whatever the interval. So it's probably not a problem with the server but with some internal state of the HttpClient (although it claims to be immutable?).
Do you have an idea what I could do, without needing to create a new HttpClient for every request?
Here is the answer for the record: java.net.http.HttpClient has a long default HTTP/1.1 keepAlive time, longer than what usual servers are configured with. This often results in the server closing idle HTTP/1.1 connections before the client does. If the server closes the connection at about the same time as the client tries to reuse it, an IOException may get raised.
If such exceptions are observed too frequently, applications should consider adapting the default keepAlive time in the client to some value shorter than what the servers it connects to are using.
A default value for the HttpClient HTTP/1.1 keepAlive time can be specified on the command line with: -Djdk.httpclient.keepalive.timeout=duration-in-seconds
So for instance, if a server is configured with a keepAlive time of 5 s, you could consider supplying -Djdk.httpclient.keepalive.timeout=3 or -Djdk.httpclient.keepalive.timeout=4 on the client's java command line.
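The property can also be set programmatically, as long as that happens before the first HttpClient is created (the JDK reads the value once, when the implementation classes initialize). A minimal sketch, assuming a server-side keepAlive of 5 s:

// Must run before the first HttpClient is built: the JDK reads
// jdk.httpclient.keepalive.timeout once, at class-initialization time.
System.setProperty("jdk.httpclient.keepalive.timeout", "4"); // seconds

HttpClient httpClient = HttpClient.newHttpClient();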
Related
We implemented connection pooling in our client code to invoke a server which closes (sends Connection: close in the response headers) a connection after 2.5 minutes. Due to this server behaviour we sometimes/intermittently get a NoHttpResponseException, and it may occur at high TPS as well as at low TPS.
We are using Apache HttpClient version 4.5.11. There is a validateAfterInactivity setting in PoolingHttpClientConnectionManager which is by default set to 2000 ms, but I think we may get the same exception if we try to get a connection within that 2000 ms window.
We could choose a more aggressive value for validateAfterInactivity, but I heard that it can degrade performance by roughly 20 to 30 ms per request.
Is retrying this exception a good solution?
In the same context, can we also retry in case of java.net.SocketException: Connection reset?
@ok2c any suggestion here?
Thanks in advance.
NoHttpResponseException is considered safe to retry for idempotent methods.
In your particular case, however, I would consider limiting the TTL (total time to live) of client connections to 2.5 minutes to match that of the server endpoints.
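With Apache HttpClient 4.5 that could look like the following sketch (the 150-second value mirrors the server's 2.5-minute limit; adjust as needed):

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Pooled connections are closed 150 s after creation, so the client
// gives a connection up no later than the server would close it.
CloseableHttpClient client = HttpClients.custom()
        .setConnectionTimeToLive(150, TimeUnit.SECONDS)
        .build();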
We have a Rails app with an integration with box.com. It happens fairly frequently that a request for a box action to our app results in a Passenger process being tied up for right around 15 minutes, and then we get the following exception:
Errno::ETIMEDOUT: Connection timed out - SSL_connect
Often it's on something that should be fairly quick, such as listing the contents of a small folder, or deleting a single document.
I'm under the impression that these requests never actually got to an open channel; either at the TCP or SSL level we got no initial response, or the full handshake/session setup never completed.
I'd like either such condition to time out quickly, say within 15 seconds, but allow a large file that is successfully transferring to continue.
Is there any way to get TCP or SSL to raise a timeout much sooner when the connection at either of those levels fails to complete setup, but not raise an exception if the session is successfully established and it's just taking a long time to actually transfer the data?
Here is what our current code looks like - we are not tied to doing it this way (and I didn't write this code):
def box_delete(uri)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  request = Net::HTTP::Delete.new(uri.request_uri)
  http.request(request)
end
I have a JMS queue message processor sequence where a request is sent to a SOAP endpoint. However, a request to this endpoint can take a long time, up to 30 minutes or so. How can I configure the ESB to allow long timeout values? Currently I'm getting the following error after 60 seconds:
[2014-01-20 14:18:31,772] WARN - TargetHandler http-outgoing-4: Connection time out while in state: REQUEST_DONE
[2014-01-20 14:18:31,775] WARN - SynapseCallbackReceiver Synapse received a response for the request with message Id : urn:uuid:c6a023c2-7fb4-4321-b1c2-d78e9bb13add But a callback is not registered (anymore) to process this response
Thanks for any help
Edit: I added the http.socket.timeout=1800000 property in repository/conf/passthru-http.properties, which seems to solve the timeout issue.
Assuming this is a "Scheduled Message Forwarding Processor", to increase the send timeout up to 30 minutes:
1. In your endpoint, verify that "connection timeout" is "never timeout" (edit the endpoint in the console and "Show Advanced Options").
2. Edit repository/conf/synapse.properties and modify synapse.global_timeout_interval (in ms): this is the maximum time a callback instance will exist in WSO2 to receive the response.
3. Copy the sample axis2 conf file from samples/axis2Client/client_repo/conf/axis2.xml to, for example, repository/conf/axis2/axis2_mp.xml.
4. Edit this axis2_mp.xml config, find transportSender name="http" and add a parameter "SO_TIMEOUT" (in ms): <parameter name="SO_TIMEOUT" locked="false">108000000</parameter>
5. Edit your Message Processor and in "Show Additional Parameters", set the entry "Axis2 Configuration" to repository/conf/axis2/axis2_mp.xml.
SO_TIMEOUT is the time to wait for the response.
You can specify CONNECTION_TIMEOUT for the max time to establish the connection.
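For instance (a sketch only; the parameter goes in the same transportSender block as SO_TIMEOUT above, and the 60000 ms value is just an example to adapt):

<parameter name="CONNECTION_TIMEOUT" locked="false">60000</parameter>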
Pay attention: all callbacks will persist for up to 30 minutes in the ESB!
I ran into some problems with my Jersey REST API and a client.
This is how I'm using the methods on the server side:
@POST
@Path("/seed")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response addSeed(Seed seed) throws InterruptedException {
    if (!Validator.isValidSeed(seed)) {
        return Response.status(400).entity("{\"message\":\"Please verify your JSON!\", \"stat\":\"failed\"}")
                .build();
    }
    save(seed);
    return Response.status(200).build();
}
If I run a Jersey client in a while(true) loop, connections are opened and never closed. So I run into the problem that a lot of connections stay open, my network crashes, and I can't use my server any more. Only after the connections are closed can I connect to the server again.
This is a client:
ClientConfig config = new DefaultClientConfig();
Client client = Client.create(config);
WebResource service = client.resource(getBaseURI()).path("api/seed");
while (true) {
    ClientResponse cr = service.header("Content-Type", "application/json")
            .post(ClientResponse.class, seed);
    System.out.println(cr);
    cr.close();
}
My questions are:
What can I do on the server side to prevent clients from opening new connections?
How can I specify a maximum number of connections?
And how should I implement the Jersey client to reuse open connections?
I don't know of a way to limit Jersey resources at the web-app level. If you upgrade to GlassFish EE, you can make your resources EJBs: @Stateless @StatelessDeployment(maxInstances=16)
The pile-up of connections could be because of keep-alive settings. In Tomcat 6 there are two settings you can tune your connector with:
maxKeepAliveRequests, which defaults to 100. It's the maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting it to -1 will allow an unlimited number of pipelined or keep-alive HTTP requests.
keepAliveTimeout, which defaults to connectionTimeout, which in turn defaults to 60000 ms. It is the number of milliseconds this connector will wait for another HTTP request before closing the connection.
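For example, in conf/server.xml (a sketch; port and timeout values are placeholders to adapt):

<!-- keep keep-alive enabled, but close idle connections after 10 s -->
<Connector port="8080" protocol="HTTP/1.1"
           maxKeepAliveRequests="100"
           keepAliveTimeout="10000"
           connectionTimeout="60000" />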
I'm using a RabbitMQ server with amq.
I am having a difficult problem: after leaving the server alone for about 10 minutes, the connection is lost.
What could be causing this?
If you look at the Erlang client documentation (http://www.rabbitmq.com/erlang-client-user-guide.html) you will see a section titled "Connecting To A Broker".
This gives you a few different options that you can specify when setting up your connection to the RabbitMQ server. One of the options is the heartbeat; as you can see, the default is 0, so no heartbeat is specified.
I don't know the exact Erlang notation, but you will need to do something like:
{ok, Connection} = amqp_connection:start(#amqp_params_network{heartbeat = 5})
The heartbeat timeout is specified in seconds, so this would cause your consumer to send a heartbeat back to the server every 5 seconds.
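If you are on the Java client instead, the equivalent would be along these lines (a sketch; the 5-second interval is just an example value):

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

ConnectionFactory factory = new ConnectionFactory();
// negotiate a 5 second heartbeat with the broker so the connection
// is kept alive while idle (and dead peers are detected quickly)
factory.setRequestedHeartbeat(5); // seconds
Connection connection = factory.newConnection();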
Also take a look at this discussion: https://groups.google.com/forum/?fromgroups=#!topic/rabbitmq-discuss/u227xzvqOr8
The default connection timeout for the RabbitMQ connection factory is 600 seconds (at least in the Java client API), hence your 10 minutes. You can change this by specifying your timeout of choice to the connection factory.
It is good practice to ensure your connection is released and recreated after a specific amount of time, to prevent eventual leaks and excessive resource use. Your code should ensure that it obtains a valid connection that is not close to being timed out, and re-establishes connections that did time out. Overall, adopt a connection-pooling approach.
Java example:

ConnectionFactory factory = new ConnectionFactory();
factory.setHost(this.serverName);
factory.setPort(this.serverPort);
factory.setUsername(this.userName);
factory.setPassword(this.userPassword);
factory.setConnectionTimeout(YOUR_TIMEOUT_IN_MILLISECONDS); // the Java client expects milliseconds here
Connection connection = factory.newConnection();