I've written a Grails application that connects to S3 and streams a file back to the client. This has worked well so far, up until I tried to use it to download a large (2GB) file. I see the following behaviour:
When starting the download normally by calling the controller, after around 1GB, the download 'completes'.
Opening multiple tabs to trigger several simultaneous downloads causes the downloads to 'complete' after a few MB, although the actual amount downloaded seems to be random each time. This can also be observed when performing simultaneous downloads on multiple machines.
In both cases, the error messages are the same:
errors.GrailsExceptionResolver SocketException occurred when processing request: [GET] /download
Connection reset.:
java.net.SocketException: Connection reset
errors.GrailsExceptionResolver IllegalStateException occurred when processing request: [GET] /download
getOutputStream() has already been called for this response.
Here's the part of the controller concerned with the download:
DownloadController.groovy
def index() {
    def fileStream = s3Service.getStream()
    response.setHeader("Content-disposition", "attachment;filename=foo.csv")
    response.contentType = "text/csv"
    response.outputStream << fileStream
    response.outputStream.flush()
}
..and a snippet of the service that connects to S3 and gets the file:
S3Service.groovy
def getStream() {
    def stream = null
    try {
        AmazonS3 s3 = new AmazonS3Client()
        S3Object object = s3.getObject(new GetObjectRequest('my-bucket-name', 'path/to/file.csv'))
        stream = object.getObjectContent()
    }
    catch (AmazonServiceException ase) {
        /* Log the error. Omitted for brevity. */
    }
    catch (AmazonClientException ace) {
        /* Log the error. Omitted for brevity. */
    }
    return stream
}
I'm really stumped as to what's causing this.
It turns out that this error was occurring because the app runs on a server behind a load balancer that attempts to cache files passing through it. It is unable to cache the larger files because it has limited disk space, and the download fails.
This was verified by connecting an instance of the app running locally to AWS, and observing that the strange behaviour no longer occurs.
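For reference, here is a minimal plain-Java sketch of the streaming approach (the bucket, key, and servlet wiring are placeholders, not the app's actual code). Copying in small chunks means nothing buffers the whole 2GB, and advertising the length lets clients detect a truncated transfer:

import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;

public class S3StreamSketch {
    public static void stream(HttpServletResponse response) throws Exception {
        AmazonS3 s3 = new AmazonS3Client();
        S3Object object = s3.getObject("my-bucket-name", "path/to/file.csv"); // placeholders
        response.setContentType("text/csv");
        response.setHeader("Content-disposition", "attachment;filename=foo.csv");
        // Advertise the length so clients can detect truncated downloads.
        response.setHeader("Content-Length",
                String.valueOf(object.getObjectMetadata().getContentLength()));
        try (InputStream in = object.getObjectContent()) {
            OutputStream out = response.getOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n); // chunked copy, no full buffering
            }
            out.flush();
        }
    }
}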
Hi, I'm trying to create a web app where you can access files on a system through a web browser. The web app structure looks like this:
commander
    cmdr
        packages
        lib
            cmdr.dart
    gui
        packages
        web
            assets
            css
            console.dart
            editor.dart
            client.dart
            explorer.dart
            index.html
cmdr.dart is the server, and I start the http_server there to host index.html. The index pulls in the client.dart file as a script. The editor.dart, console.dart and explorer.dart files are all part of the client.

The problem is that when I run the http_server to host index.html, it doesn't access the client files: most of the dependencies aren't pulled in, except for the CSS files. I thought that compiling to JavaScript might solve this, since it would pull all the code together into one file, but the same issue exists there: the HTML gets put together but none of the client components are created.
My server code is as follows:
import 'dart:io';

import 'package:args/args.dart';
import 'package:http_server/http_server.dart';
import 'package:logging/logging.dart';
import 'package:logging_handlers/server_logging_handlers.dart';
import 'package:watcher/watcher.dart';

Logger log;
DirectoryWatcher watcher;

// Setting up Virtual Directory
VirtualDirectory virDir;

void directoryHandler(dir, request) {
  var indexUri = new Uri.file(dir.path).resolve('index.html');
  virDir.serveFile(new File(indexUri.toFilePath()), request);
}

void main(List<String> args) {
  // Set default dir as current working directory.
  Directory dir = Directory.current;

  // Creating Virtual Directory
  virDir = new VirtualDirectory(Platform.script.resolve('/Users/donghuynh/git/commander/gui/web').toFilePath())
    ..allowDirectoryListing = true
    ..directoryHandler = directoryHandler
    ..followLinks = true;

  // Set up logging.
  log = new Logger('server');
  Logger.root.onRecord.listen(new SyncFileLoggingHandler("server.log"));

  // Create an args parser to override the workspace directory if one is supplied.
  var parser = new ArgParser();
  parser.addOption('directory', abbr: 'd', defaultsTo: Directory.current.path,
      callback: (directory) {
    dir = new Directory(directory);
  });
  parser.parse(args);

  // Initialize the DirectoryWatcher.
  watcher = new DirectoryWatcher(dir.path);

  // Set up an HTTP server and listen for standard page requests or upgraded
  // [WebSocket] requests.
  HttpServer.bind(InternetAddress.ANY_IP_V4, 8080).then((HttpServer server) {
    log.info("HttpServer listening on port:${server.port}...");
    server.listen((HttpRequest request) {
      // WebSocket requests are considered "upgraded" HTTP requests.
      if (WebSocketTransformer.isUpgradeRequest(request)) {
        log.info("Upgraded ${request.method} request for: ${request.uri.path}");
        WebSocketTransformer.upgrade(request).then((WebSocket ws) {
          handleWebSocket(ws, dir);
        });
      } else {
        log.info("Regular ${request.method} request for: ${request.uri.path}");
        // TODO: serve regular HTTP requests such as GET pages, etc.
        virDir.serveRequest(request);
      }
    });
  });
}
Accessing index.html through a web browser, I get these errors:
http://localhost:8080/packages/bootjack/css/bootstrap.min.css Failed to load resource: the server responded with a status of 404 (Not Found)
http://localhost:8080/packages/ace/src/js/ext-language_tools.js Failed to load resource: the server responded with a status of 404 (Not Found)
http://localhost:8080/packages/ace/src/js/ace.js Failed to load resource: the server responded with a status of 404 (Not Found)
http://localhost:8080/packages/browser/dart.js Failed to load resource: the server responded with a status of 404 (Not Found)
http://localhost:8080/packages/bootjack/css/bootstrap.min.css Failed to load resource: the server responded with a status of 404 (Not Found)
The files actually exist at those locations (though symlinked), but they are not being served for some reason.
-Don
As far as I remember, you need to tell VirtualDirectory to follow symlinks that point outside its root: besides followLinks, it also "jails" requests to the root directory by default, so the pub-generated packages symlinks are not resolved (the jailRoot setting in the http_server package).
You shouldn't serve Dart source directly anyway; instead, serve the output of pub build (at least for production). For development, forwarding requests to a running pub serve instance is probably better, and is also the officially suggested way.
I keep getting the following error on a seemingly random basis from a WildFly 8.1.0.Final install running under NetBeans:
08:51:09,742 ERROR [io.undertow.request] (default task-40) Blocking request failed HttpServerExchange{ GET /web/faces/javax.faces.resource/dynamiccontent.properties}: java.lang.RuntimeException: java.io.IOException: Broken pipe
at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:527)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:287)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:73)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:146)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:177)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_20]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_20]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_20]
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) [rt.jar:1.8.0_20]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) [rt.jar:1.8.0_20]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) [rt.jar:1.8.0_20]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) [rt.jar:1.8.0_20]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) [rt.jar:1.8.0_20]
at org.xnio.nio.NioSocketConduit.write(NioSocketConduit.java:150) [xnio-nio-3.2.2.Final.jar:3.2.2.Final]
at io.undertow.server.protocol.http.HttpResponseConduit.write(HttpResponseConduit.java:531)
at io.undertow.conduits.ChunkedStreamSinkConduit.flush(ChunkedStreamSinkConduit.java:256)
at org.xnio.conduits.ConduitStreamSinkChannel.flush(ConduitStreamSinkChannel.java:162) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
at io.undertow.channels.DetachableStreamSinkChannel.flush(DetachableStreamSinkChannel.java:100)
at org.xnio.channels.Channels.flushBlocking(Channels.java:63) [xnio-api-3.2.2.Final.jar:3.2.2.Final]
at io.undertow.servlet.spec.ServletOutputStreamImpl.close(ServletOutputStreamImpl.java:625)
at io.undertow.servlet.spec.HttpServletResponseImpl.closeStreamAndWriter(HttpServletResponseImpl.java:451)
at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:525)
... 9 more
The requested pages appear to load without a problem, so other than the exceptions in the log, I haven't noticed any breaks. Any ideas?
I've faced a similar problem, and thanks to the idea in this response I got a little further. Let me describe my case.
I was creating a REST API using Java 7 (javax.ws.rs) and deploying it on a JBoss server (8.x).

My API responds to these paths:

/myapi/a
/myapi/a?filter=myfilter

So I coded it this way:
private static final String FILTER = "filter";

@GET
@Path("/a")
@Produces(MediaType.APPLICATION_JSON)
public Object foo(@Context UriInfo requestInfo) {
    LOG.info("Http request: GET /myapi/a");
    if (requestInfo.getQueryParameters().containsKey(FILTER)) {
        return foo(requestInfo.getQueryParameters().get(FILTER));
    }
    // no params
    return ...
}

public Object foo(List<String> filter) {
    LOG.info(" > Requested filter");
    return ...;
}
But I was sometimes getting this exception from the server (not from my code):
UT005023: Exception handling request to ... sessionState: org.jboss.resteasy.spi.UnhandledException: Response is committed, can't handle exception caused by java.io.IOException: Broken pipe
Investigating it, I came across something really interesting: I was only able to reproduce it from Safari, not from Chrome. So what? The point is that Safari has a behaviour Chrome doesn't: when Safari auto-completes the request, it sends the request immediately; Chrome doesn't send the request until the enter key is pressed. And this is important, because the bug appears only if:

a request is made via Safari's autocomplete to /a?filter=f
a request (hitting enter) is made to /a

At this point I don't know the exact reason (it's something related to the HTTP headers), but as stephen-c said, the problem is that you are trying to do something that would require a change to the HTTP response headers after the headers have been sent.
[EDITED]
I'm almost sure (99%) that we cannot handle that exception. Basically it's saying that you have lost one request and, as a warning, the server is telling you that you're not going to be able to handle it.

There is another way to recreate the exception: hold your finger down on F5 or Cmd-R. You will create hundreds of requests, but you'll lose some of them (related to the thread pool, workers, etc.) and you'll see the exception for those lost requests.
I've decided not to worry about this anymore.
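If the log noise bothers you, one mitigation consistent with the explanation above is to check whether the response has already been committed before attempting any recovery. A minimal servlet-flavoured sketch (the class and payload are made up, not taken from WildFly or the app above):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GuardedServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            resp.setContentType("text/plain");
            resp.getOutputStream().print("hello"); // placeholder payload
        } catch (IOException e) {
            // A "broken pipe" means the client went away mid-write. By then the
            // response is usually committed: the headers are already on the wire
            // and cannot be replaced, so sendError() would itself fail.
            if (!resp.isCommitted()) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            }
        }
    }
}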
I had the same warnings, but only with Firefox. Daniel.lichtenberger's post explains the issue well and how to solve it.
Summarized: Firefox's RCWN (Race Cache With Network) makes two simultaneous requests and cancels the slowest, resulting in the broken pipe warning. To disable RCWN, type about:config into Firefox's address bar and disable network.http.rcwn.enabled.
If you are sending a multipart/form-data request in IE, you must append a hidden input to the form, like this:
<form>
...
<!-- for IE -->
<input type='hidden' name='_4ie' value='for IE'>
</form>
How can I find out exactly which URL the 3rd-party application server tries to access when sending a message to a client device via the GCM server?
In "SendAllMessagesServlet.java"
(which can be found # android-sdks\extras\google\gcm\samples\gcm-demo-server\src\com\google\android\gcm\demo\server\SendAllMessagesServlet.java)
.....
// Error 500 (Connection timed out) at the following line
Result result = sender.send(message, registrationId, 5);
.....
3rd-party app server: Tomcat v7.0
The URL is in the Constants.java file, as GCM_SEND_ENDPOINT, in the android\gcm\gcm-server library:
public static final String GCM_SEND_ENDPOINT =
"https://android.googleapis.com/gcm/send";
The error 500 is due to proxy settings.
Thank You
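If proxy settings are indeed the cause, one hedged option is to point the JVM at the proxy before the demo server calls sender.send(...). The host and port below are placeholders:

// Placeholders: substitute your proxy's host and port.
// These must be set before the first HTTPS connection is opened.
System.setProperty("https.proxyHost", "proxy.example.com");
System.setProperty("https.proxyPort", "8080");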
We just redesigned a web application, and as a result we've noticed a bug in Chrome (which supposedly affects all WebKit browsers) that causes a full image/js/css reload after a Post/Redirect/Get. Our app is built using ASP.NET and uses a lot of Response.Redirects, which means users will run into this issue a lot. There's a bug report for the issue, with a test case: https://bugs.webkit.org/show_bug.cgi?id=38690
We've tried the following to resolve the issue:
Change all Response.Redirects to be JavaScript redirects. This wasn't ideal because instead of images reloading, there would be a "white flash" during page transitions.
We wrote our own HTTP handler for images, CSS and JS files, and set it up so that the handler sends a max-age of 1 hour. When the client requests the file again, the handler checks the If-Modified-Since header sent by the browser to see if the file has been updated since it was last downloaded. If the dates match, the handler returns an HTTP 304 (Not Modified) with 0 for the Content-Length. As a test, we added a 10-second delay when an image was downloaded for the first time (HTTP 200), so the first page load was very slow; when the handler returned 304 (Not Modified), there was no delay. What we noticed was that Chrome would still "reload" images even when the server returned a 304 (Not Modified). It's not pulling the file from the server (if it were, we'd see the 10-second delay), yet it's flashing/reloading the images. So Chrome seems to be ignoring the 304 and still reloading images from its cache, causing the "reload".
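For illustration, here is the handler's conditional-GET logic sketched as a Java servlet (the class name and timestamp are placeholders; our actual handler is ASP.NET, but the logic is analogous):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CachedAssetServlet extends HttpServlet {
    // Placeholder: in reality this comes from the file's modification time.
    private static final long LAST_MODIFIED = 1400000000000L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setHeader("Cache-Control", "max-age=3600");
        long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent
        if (ifModifiedSince != -1 && ifModifiedSince >= LAST_MODIFIED) {
            // Unchanged since the browser's copy: 304 with an empty body.
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }
        resp.setDateHeader("Last-Modified", LAST_MODIFIED);
        // ... write the asset bytes with a 200 response ...
    }
}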
We've checked big sites to see if they've fixed it somehow, but sites like NewEgg and Amazon are also affected.
Has anyone found a solution to this? Or a way to minimize the effect?
Thanks.
This is a bug. The only "workaround" I've seen until now is to use a Refresh header instead of a Location header to do the redirect. This is far from ideal.
Bug 38690 - Submitting a POST that leads to a server redirect causes all cached items to redownload
Also, this question is a duplicate of "Full page reload on Post/Redirect/Get ignoring cache control".
I ran into this problem myself with an ASP.NET web forms site that uses Response.Redirect(url, false) following a post on many of its pages.
From reading the HTTP/1.1 specification, it sounds like a 303 response code would be the correct way to implement the Request: POST, Response: Redirect behaviour. Unfortunately, changing the status code does not make browser caching work in Chrome.
I implemented the workaround described in the post above by creating a custom module for non-static content. I'm also deleting the response content from 302s to avoid the appearance of a blink of "object moved to here"; this is probably only relevant for the Refresh headers. Comments are welcome!
public class WebKitHTTPHeaderFixModule : IHttpModule
{
    public void Init(HttpApplication httpApp)
    {
        // Attach application event handlers.
        httpApp.PreSendRequestHeaders += new EventHandler(httpApp_PreSendRequestHeaders);
    }

    void httpApp_PreSendRequestHeaders(object sender, EventArgs e)
    {
        HttpContext context = HttpContext.Current;

        if (context.Response.StatusCode == 302)
        {
            context.Response.ClearContent();

            // If the request is a POST, the response is a 302 and the browser is WebKit, use a Refresh header instead.
            if (context.Request.HttpMethod.Equals("POST", StringComparison.OrdinalIgnoreCase) && context.Request.Headers["User-Agent"].ToLower().Contains("webkit"))
            {
                string location = context.Response.Headers["Location"];
                context.Response.StatusCode = 200;
                context.Response.AppendHeader("Refresh", "0; url=" + location);
            }
        }
    }

    public void Dispose()
    {
    }
}
Note: I don't think this will work with the non-overloaded version of Response.Redirect, since it calls Response.End().
I have read that HttpURLConnection supports persistent connections, so that a connection can be reused for multiple requests. I tried it, and the only way to send a second POST was by calling openConnection a second time; otherwise I got an IllegalStateException("Already connected").
I used the following:
URL url = null;
try {
    url = new URL("http://someconection.com");
}
catch (Exception e) {
    // handle MalformedURLException
}
HttpURLConnection con = (HttpURLConnection) url.openConnection();
// set output, input etc.
// send POST
// receive response
// read whole response
// close input stream
con.disconnect(); // have also tested commenting this out
con = (HttpURLConnection) url.openConnection();
// send new POST
The second request is sent over the same TCP connection (I verified it with Wireshark), but I cannot understand why (although this is what I want), since I have called disconnect.

I checked the source code of HttpURLConnection, and the implementation does keep a keepalive cache of connections to the same destinations. My problem is that I cannot see how the connection is placed back in the cache after I have sent the first request. The disconnect closes the connection, and without the disconnect I still cannot see how the connection is placed back in the cache. I saw that the cache has a run method to go over all idle connections (I am not sure how it is called), but I cannot find where the connection is put back in the cache. The only place where that seems to happen is the finished method of HttpClient, but this is not called for a POST with a response.
Can anyone help me on this?
EDIT
My interest is: what is the proper handling of an HttpURLConnection object for TCP connection reuse? Should the input/output streams be closed, followed by a url.openConnection() each time a new request is sent (avoiding disconnect())? If yes, I cannot see how the connection is being reused when I call url.openConnection() for the second time, since the connection was removed from the cache for the first request, and I cannot find where it is returned.

Is it possible that the connection is not returned to the keepalive cache (a bug?), but the OS has not yet released the TCP connection, and on a new connection the OS returns the buffered, not-yet-released connection, or something similar?
EDIT2
The only related information I found was from JDK_KeepAlive:

...when the application calls close() on the InputStream returned by URLConnection.getInputStream(), the JDK's HTTP protocol handler will try to clean up the connection and if successful, put the connection into a connection cache for reuse by future HTTP requests.
But I am not sure which handler this is; sun.net.www.protocol.http.Handler does not do any caching, as far as I could see.
Thanks!
Should input/output stream be closed followed by a url.openConnection(); each time to send the new request (avoiding disconnect())?
Yes.
If yes, I can not see how the connection is being reused when I call url.openConnection() for the second time, since the connection has been removed from the cache for the first request and can not find how it is returned back.
You are confusing the HttpURLConnection with the underlying Socket and its underlying TCP connection. They aren't the same. The HttpURLConnection instances are GC'd, the underlying Socket is pooled, unless you call disconnect().
From the javadoc for HttpURLConnection (my emphasis):

Each HttpURLConnection instance is used to make a single request but the underlying network connection to the HTTP server may be transparently shared by other instances. Calling the close() methods on the InputStream or OutputStream of an HttpURLConnection after a request may free network resources associated with this instance but has no effect on any shared persistent connection. Calling the disconnect() method may close the underlying socket if a persistent connection is otherwise idle at that time.
I found that the connection is indeed cached when the InputStream is closed. Once the InputStream has been closed, the underlying connection is put back in the pool. The HttpURLConnection object is unusable for further requests, though, since it is still considered "connected", i.e. its connected boolean is set to true and is not cleared when the connection is placed back in the pool. So a new HttpURLConnection should be instantiated for each new POST, but the underlying TCP connection will be reused if it has not timed out.

So EJP's answer was the correct description. Maybe the behaviour I saw (reuse of the TCP connection despite explicitly calling disconnect()) was due to caching done by the OS? I do not know. I hope someone who knows can explain.
Thanks.
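To make the working pattern concrete, here is a minimal sketch (the URL is a placeholder): a fresh HttpURLConnection per request, the response read to end-of-stream and closed, and no disconnect(), so the socket can go back into the keepalive cache.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://example.com/"); // placeholder
        for (int i = 0; i < 2; i++) {
            // A fresh HttpURLConnection per request; the underlying socket is pooled.
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), "UTF-8"));
            // Read the response to completion so the JDK can return the
            // socket to its keepalive cache.
            while (in.readLine() != null) {
            }
            in.close();
            // No con.disconnect(): that would close the pooled socket.
        }
    }
}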
How do you "force use of HTTP1.0" using the HttpUrlConnection of JDK?
According to the section "Persistent Connections" of the Java 1.5 networking guide, support for HTTP/1.1 persistent connections can be turned on or off using the Java property http.keepAlive (the default is true). Furthermore, the Java property http.maxConnections indicates the maximum number of (concurrent) connections per destination to be kept alive at any given time.
Therefore, a "force use of HTTP1.0" could be applied for the whole application at once by setting the java property http.keepAlive to false.
Hmmh. I may be missing something here (since this is an old question), but as far as I know there are two well-known ways to force closing of the underlying TCP connection:

Force use of HTTP 1.0 (1.1 introduced persistent connections), as indicated by the HTTP request line
Send a 'Connection' header with the value 'close'; this will force closing as well (see the sketch below)
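The second option maps directly onto the HttpURLConnection API; a minimal fragment (the URL is a placeholder):

URL url = new URL("http://example.com/"); // placeholder
HttpURLConnection con = (HttpURLConnection) url.openConnection();
// Request that the TCP connection be closed after this exchange; the JDK
// will then not return the socket to its keepalive cache.
con.setRequestProperty("Connection", "close");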
Abandoning streams will cause idle TCP connections: the response stream should always be read completely. Another thing I initially overlooked, and that is overlooked in most answers on this topic, is forgetting to deal with the error stream in case of exceptions. Code similar to this fixed one of my apps that wasn't releasing resources properly:
HttpURLConnection connection = (HttpURLConnection) new URL(uri).openConnection();
InputStream stream = null;
BufferedReader reader = null;
try {
    stream = connection.getInputStream();
    reader = new BufferedReader(new InputStreamReader(stream, Charset.forName("UTF-8")));
    // do work on part of the input stream
} catch (IOException e) {
    // read the error stream
    InputStream es = connection.getErrorStream();
    if (es != null) {
        BufferedReader esReader = new BufferedReader(new InputStreamReader(es, Charset.forName("UTF-8")));
        while (esReader.ready() && esReader.readLine() != null) {
        }
        esReader.close();
    }
    // do something with the IOException
} finally {
    // finish reading the input stream if it was not read completely in the try block, then close
    if (reader != null) {
        while (reader.readLine() != null) {
        }
        reader.close();
    }
    // Not sure if this is necessary; closing the buffered reader may close the input stream?
    if (stream != null) {
        stream.close();
    }
    // disconnect
    if (connection != null) {
        connection.disconnect();
    }
}
The buffered reader isn't strictly necessary; I chose it because my use case required reading one line at a time.
See also: http://docs.oracle.com/javase/1.5.0/docs/guide/net/http-keepalive.html