I have a program that downloads files, and I noticed that when downloading with the program the speed hangs at around 300 kbps, while downloading the same file with a browser reaches double that speed. I have seen posts about using ranges to split the file into parts and download them with threads, but I could not implement it.
My code:
IdHTTP1.Head(URL);
Range := IdHTTP1.Request.Ranges.Add;
Range.StartPos := 0;
Range.EndPos := Trunc(IdHTTP1.Response.ContentLength / 2);
IdHTTP1.Post(URL, Parameters, FS);
When I run the code, it returns the whole file. I'm not sure this code is correct; I tried to do something simple as a test, but it did not work.
When I use IdHttp.Head(), the raw response headers contain this information:
Date: Mon, 29 May 2017 18:02:11 GMT
Pragma: No-cache
Cache-Control: no-cache, no-store, max-age=0
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Disposition: attachment; filename="pecas.pdf"
Content-Type: application/pdf
Content-Length: 18507774
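A side note, not from the original post: a Range header only takes effect on a GET (TIdHTTP.Post re-submits the form, so the whole file comes back), and a server that honours the range replies 206 Partial Content rather than 200. Note also that the HEAD response above does not advertise Accept-Ranges: bytes, so it is worth probing for range support first. An illustrative sketch of such a probe, in C# since the HTTP mechanics are language-independent (the URL is a placeholder):

// Illustrative only (C#, not Delphi): a ranged request must be a GET, and the
// server signals success with 206 Partial Content; a 200 means it ignored the
// Range header and returned the whole file.
using System;
using System.Net;

class RangeProbe
{
    static void Main()
    {
        string url = "http://example.com/pecas.pdf"; // placeholder URL
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.AddRange(0, 1023); // ask for the first 1 KB only

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            bool partial = response.StatusCode == HttpStatusCode.PartialContent;
            Console.WriteLine(partial
                ? "Server honours ranges (206)."
                : "Server ignored the Range header (200, full file).");
        }
    }
}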
I'm working on a compression and caching mechanism in my ASP.NET MVC 5 app.
I'm sending files with the following cache headers:
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1).ToUniversalTime());
Response.Cache.SetLastModified(System.IO.File.GetLastWriteTime(serverPath).ToUniversalTime());
Response.AppendHeader("Vary", "Accept-Encoding");
IE11, Edge, and Firefox all send the If-Modified-Since header on an F5 refresh, but Chrome does not. Why is that, and how can I work around it? In Chrome I get a 200 status code and the file is loaded from cache.
The second problem I have is with enabling gzip compression.
I have a standard action filter for this:
public class CompressContentMvcAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        GZipEncodePage();
    }

    private bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        {
            return true;
        }
        return false;
    }

    private void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("gzip"))
            {
                Response.Filter = //new GZipCompressionService().CreateCompressionStream(Response.Filter);
                    new System.IO.Compression.GZipStream(Response.Filter,
                        System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "gzip");
            }
            else
            {
                Response.Filter = //new DeflateCompressionService().CreateCompressionStream(Response.Filter);
                    new System.IO.Compression.DeflateStream(Response.Filter,
                        System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "deflate");
            }
        }
        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");
    }
}
I apply this filter to the action method returning application assets, but each file comes back with Transfer-Encoding: chunked and is not gzipped.
This filter is copied from a previous project of mine, where it still works as expected. Could it be a problem with the IIS server? Locally I have IIS 10 and .NET 4.7, while the older app, where it works, is hosted on IIS 8.5 and .NET 4.5. I can't think of anything else; I've been googling for two days and can't find any clue.
I'm not interested in compressing in IIS.
[edit]
Headers I got from the response:
HTTP/1.1 200 OK
Cache-Control: public
Content-Type: text/javascript
Expires: Sat, 18 May 2019 08:58:48 GMT
Last-Modified: Thu, 10 May 2018 13:26:02 GMT
Vary: Content-Encoding
Server: Microsoft-IIS/10.0
X-AspNetMvc-Version: 5.2
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Fri, 18 May 2018 08:58:48 GMT
Transfer-Encoding: chunked
I always use Fiddler to inspect these kinds of challenges.
The F5/If-Modified-Since issue.
Chrome just doesn't make a new request if the Expires header has been set and its datetime value is still in the future, so Chrome is respecting your intended caching behaviour. When you navigate around your website in another browser, you'll see that it doesn't send requests for these assets either. F5 is 'special': it's a forced refresh.
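The question also asked for a workaround. One standard option (a sketch of my own, not part of this answer): if you want browsers, Chrome included, to revalidate these assets on every navigation instead of serving them silently from cache, trade the one-year Expires for max-age=0 with must-revalidate and keep Last-Modified, so revalidation stays cheap (304s):

// My addition, not the poster's code: force revalidation on every request.
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetMaxAge(TimeSpan.Zero);                          // Cache-Control: max-age=0
Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);  // adds must-revalidate
Response.Cache.SetLastModified(System.IO.File.GetLastWriteTime(serverPath).ToUniversalTime());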
The chunked/gzip issue
Clear your browser cache and inspect the first response. Fiddler will show 'Response body is encoded', which means compressed (gzip or deflate).
Whether you see Transfer-Encoding: chunked depends on whether the Content-Length header is present. See the difference in the two responses below. If you don't want the chunked Transfer-Encoding, set the Content-Length header (see the sketch at the end of this answer).
With Content-Length (no chunking):

Content-Type: text/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Sat, 25 May 2019 13:14:11 GMT
Last-Modified: Fri, 25 May 2018 13:14:11 GMT
Vary: Accept-Encoding
Server: Microsoft-IIS/10.0
Content-Length: 5292
Without Content-Length (chunked):

Content-Type: text/javascript; charset=utf-8
Transfer-Encoding: chunked
Content-Encoding: gzip
Expires: Sat, 25 May 2019 13:14:11 GMT
Last-Modified: Fri, 25 May 2018 13:14:11 GMT
Vary: Accept-Encoding
Server: Microsoft-IIS/10.0
Because you are handling the serving of assets via your own code, instead of via the IIS static files module, you have to deal with all response headers yourself.
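If you want to avoid the chunking yourself, one option is to buffer the compressed bytes before writing anything, so the length is known up front. A minimal sketch, assuming you serve the asset directly rather than through a response filter (the helper is mine, not the poster's code):

using System.IO;
using System.IO.Compression;
using System.Web;

public static class AssetWriter
{
    // Hypothetical helper: compress the file into memory first so the length
    // is known before the response starts, then send it with an explicit
    // Content-Length, which suppresses Transfer-Encoding: chunked.
    public static void WriteGZipped(HttpResponse response, string physicalPath)
    {
        byte[] compressed;
        using (var buffer = new MemoryStream())
        {
            using (var gzip = new GZipStream(buffer, CompressionMode.Compress, leaveOpen: true))
            using (var file = File.OpenRead(physicalPath))
            {
                file.CopyTo(gzip);
            } // the GZipStream must be closed before the buffer is read
            compressed = buffer.ToArray();
        }

        response.ContentType = "text/javascript";
        response.AppendHeader("Content-Encoding", "gzip");
        response.AppendHeader("Vary", "Accept-Encoding");
        // Known length up front -> no chunked transfer-encoding.
        response.AppendHeader("Content-Length", compressed.Length.ToString());
        response.BinaryWrite(compressed);
    }
}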
Our Rails web app generates PDFs using wkhtmltopdf and sends them to the client. This works in every web browser we've tested it with except Edge.
We've tried rendering the response in a couple of different ways; this is how it was originally:
kit = PDFKit.new(@html_content)
render text: kit.to_pdf, content_type: 'application/pdf'
This opens the PDF viewer with the PDF displaying correctly in every browser we tested except Edge, where the browser displays: "Something's keeping this PDF from opening."
In our application logs there is the POST request (the form submission), and I can see our app send the PDF file response; then there are subsequent GET requests to the form-submission URL, which error because the app doesn't expect any GET request to that URL. I've no idea what's going on here.
The response headers for the request are:
Cache-Control: max-age=0, private, must-revalidate
Connection: Keep-Alive
Content-Length: 34865
Content-Type: application/pdf; charset=utf-8
Date: Thu, 18 Jun 2015 14:35:30 GMT
Etag: "4baf297d1866339e60e8e893300909a0"
Server: WEBrick/1.3.1 (Ruby/2.0.0/2013-06-27)
Set-Cookie: _APP_session=<long cookie>; path=/; HttpOnly
X-Request-Id: 617580a8-4d7d-43c4-8e49-aeaeafba7b79
X-Runtime: 21.868098
X-XSS-Protection: 1; mode=block
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-ua-compatible: chrome=1
I have also tried using send_data like this:
send_data kit.to_pdf, type: 'application/pdf', disposition: 'inline'
Which results in the following response headers but ultimately the same problem:
Cache-Control: private
Connection: Keep-Alive
Content-Disposition: inline
Content-Length: 34866
Content-Transfer-Encoding: binary
Content-Type: application/pdf
Date: Thu, 18 Jun 2015 14:39:42 GMT
Etag: "11db49f1a26444a38fa2b51f3c3336ed"
Server: WEBrick/1.3.1 (Ruby/2.0.0/2013-06-27)
Set-Cookie: _APP_session=<long cookie>; path=/; HttpOnly
X-Request-Id: 501d9832-b07e-4764-8ecc-f1c1e9a6421e
X-Runtime: 7.054236
X-XSS-Protection: 1; mode=block
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-ua-compatible: chrome=1
If I remove the Content-Disposition: inline header from the above it brings up the save file prompt and downloading the file works fine. We need it to load in the browser window though.
I don't believe it to be a duplicate of this question because it works in IE 9, 10 and 11 and is only a problem with Edge.
We've been having what sounds like the same problem with PDF reports that we generate on the server and dispatch inline - the new tab that opens for the viewer appears to re-issue a request for the content instead of displaying the content from the response. Since we use a synthetic one-time-use path (for largely historical reasons to ensure a new version of the report is fetched), the report isn't actually there for the new tab's GET request.
Since we're using 20.10240, I'm not convinced it was actually fixed in 10158.
As with the OP, this seems to apply only to "Content-Disposition: inline"; if we use "attachment" instead, a temporary file is saved locally and the temporary file is opened in the viewer.
It was a bug but Microsoft have fixed it in build 10158! :)
I just found a partial response being cached as complete on one of our customers' machines, which rendered the whole website unusable. And I have absolutely no idea what could possibly have gone wrong there.
So what could have possibly gone wrong in the following setup?
On the server side, we have an ASP.NET application running. One IHttpHandler handles requests to JavaScript files. It basically minifies the files as they are requested and writes the result to the response stream. It also logs the length of the string being written:
String javascript = /* Javascript is retrieved here */;
HttpResponse response = context.Response;
response.ContentEncoding = Encoding.UTF8;
response.ContentType = "application/javascript";
HttpCachePolicy cache = response.Cache;
cache.SetCacheability(HttpCacheability.Public);
cache.SetMaxAge(TimeSpan.FromDays(300));
cache.SetETag(ETag);
cache.SetExpires(DateTime.Now.AddDays(300));
cache.SetLastModified(LastModified);
cache.SetRevalidation(HttpCacheRevalidation.None);
response.Headers.Add("Vary", "Accept-Encoding");
Log.Info("{0} characters sent", javascript.Length);
response.Write(javascript);
response.Flush();
response.End();
The content is then normally sent gzip-encoded with chunked transfer encoding. Seems simple enough to me.
Unfortunately, I just had a remote session with a user where only about a third of the file was in the cache, which of course broke the file (15 kB instead of 44 kB). In the cache entry, the content encoding was also set to gzip, and all communication took place via HTTPS.
After opening the source file on the user's machine, I just hit Ctrl+F5 and the full content was displayed immediately.
What could have possibly gone wrong?
In case it matters, please find the cache entry from Firefox below:
Cache entry information
key: <resource-url>
fetch count: 49
last fetched: 2015-04-28 15:31:35
last modified: 2015-04-27 15:29:13
expires: 2016-02-09 14:27:05
Data size: 15998 B
Security: This is a secure document.
security-info: (...)
request-method: GET
request-Accept-Encoding: gzip, deflate
response-head: HTTP/1.1 200 OK
Cache-Control: public, max-age=25920000
Content-Type: application/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Tue, 09 Feb 2016 14:27:12 GMT
Last-Modified: Tue, 02 Jan 2001 11:00:00 GMT
Etag: W/"0"
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
Date: Wed, 15 Apr 2015 13:27:12 GMT
necko:classified: 1
Your client's browser is most likely caching the JavaScript files, which would mean the src of your scripts isn't changing.
For instance, if you were to request myScripts.js:
<script src="/myScripts.js">
Then the first time, the client would request that file; on any further requests, the browser would read from its cache.
You need to append some sort of unique value, such as a timestamp, to the end of your script URLs so that even if the browser has cached the file, the new timestamp will act like a new file name (see the sketch below).
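A minimal sketch of that idea (the helper name is mine, hypothetical): append the file's last-write time as a query string, so any change to the file yields a new URL and bypasses the stale cache entry.

using System.IO;
using System.Web;

public static class ScriptHelper
{
    // Render a script tag whose URL changes whenever the file changes,
    // so a stale (or truncated) cached copy is never reused.
    public static string VersionedScript(string virtualPath)
    {
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        long stamp = File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return string.Format("<script src=\"{0}?v={1}\"></script>", virtualPath, stamp);
    }
}

// Usage in a view: ScriptHelper.VersionedScript("/myScripts.js") renders
// e.g. <script src="/myScripts.js?v=636331337770000000"></script>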
The client receives the new scripts after pressing Ctrl+F5 because this is a shortcut that bypasses the browser's cache.
MVC has a really nice way of doing this, which involves appending a unique code that changes every time the application or its app pool is restarted. Check out MVC bundling and minification.
Hope this helps!
I'm building an API on Rails 4.1.7/Nginx that responds to requests from an iOS app. We're seeing some weird caching on the client, and we think it has something to do with a small difference in the responses that Rails sends back. My questions:
1) I want to understand why, for the exact same request (with only the Authorization header value changed), Rails sometimes sends back Transfer-Encoding: chunked and sometimes Content-Length: <number>. I thought it might have something to do with the response size, but in the example responses whose headers I've pasted below, the data returned in the body is EXACTLY the same.
2) Is there a way to force it to use Content-Length? We think that will fix our caching issues in our iOS app.
Response #1
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:31 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: dd36f139-1986-4da6-9645-4438d41e74b0
X-Runtime: 0.123865
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked
Connection: keep-alive
Response #2
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:36 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 0cfd7705-157b-41b5-aa36-739bc6f8302e
X-Runtime: 0.092672
X-XSS-Protection: 1; mode=block
Content-Length: 2234
Connection: keep-alive
Both responses are valid according to HTTP 1.1, so you need to fix your client code so that it can handle both. It is a bad idea to try to fix the server so that it behaves in a way that does not trigger a bug in the client.
The next version of nginx may behave differently; your users may even be behind proxies that change the transfer encoding, perhaps only when they roam onto a different provider.
If you want to do some fingerprinting on the headers, the ETag header may help you, as the ETag should stay constant as long as the content of the response does not change, regardless of the transfer encoding.
A server typically sends in chunks when it renders a dynamic page, because then it does not need to allocate a buffer for the whole page and wait until all of it has been generated.
A server often sends the response in one go if it already has the whole body in a buffer, for example because it is in a cache, or because the content comes from a file and is not too big. Sending in one go is more efficient; on the other hand, keeping an extra copy of the data to buffer the output needs more memory. So the server may even decide between the two according to the available memory.
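On the client side, the rule of thumb is simply to read the body to the end of the stream and, if you need a fingerprint, compare ETags. The OP's client is iOS, so this C# sketch (with a placeholder URL) is only illustrative of the idea:

using System;
using System.Net.Http;

class EtagClient
{
    static void Main()
    {
        // HttpClient hides the framing entirely: whether the server used
        // Transfer-Encoding: chunked or Content-Length, reading the content
        // to the end yields the same bytes.
        using (var client = new HttpClient())
        {
            var response = client.GetAsync("http://example.com/api/items").Result;
            string body = response.Content.ReadAsStringAsync().Result;

            // The ETag stays constant while the content does, regardless of
            // framing, so it is the right key for client-side caching.
            var etag = response.Headers.ETag;
            Console.WriteLine("{0} bytes, ETag {1}",
                body.Length, etag != null ? etag.Tag : "(none)");
        }
    }
}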
I am trying to call an ASP.NET Web API that I have hosted in IIS from MonoDroid. The service is fine and I can call it from different endpoints. The problem is that in MonoDroid I get an invalid cast exception when I try to do this:
var s = response.GetResponseStream();
var j = (JsonObject)JsonObject.Load(s);
A System.InvalidCastException comes back on the Load part.
I have done some reading, and people suggest switching the Web API to the JsonNetFormatter class. I tried that and still no luck.
Anybody have any ideas on what I can try?
UPDATE
Here is the payload:
<ArrayOfAlbum xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Album>
<AlbumPK>f09d14cf-3bab-44c8-b614-2b7cf728efd4</AlbumPK>
<Name>Colorado</Name>
<UserName>firstUser</UserName>
<ParentAlbumFK xsi:nil="true" />
<DateCreated>2012-03-12T19:47:54.493</DateCreated>
</Album>
</ArrayOfAlbum>
And the same payload as JSON:
[{"AlbumPK":"f09d14cf-3bab-44c8-b614-2b7cf728efd4","Name":"Colorado",
"UserName":"emorin","ParentAlbumFK":null,
"DateCreated":"2012-03-12T19:47:54.493"}]
Response from Fiddler after changing the Accept header to JSON:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/7.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Tue, 17 Apr 2012 19:45:47 GMT
97
[{ "AlbumPK":"f09d14cf-3bab-44c8-614-2b7cf728efd4","Name":"Colorado",
"UserName":"emorin","ParentAlbumFK":null,
"DateCreated":"2012-03-12T19:47:54.493"}]
0
It seems that your server is sending data with chunked encoding, and perhaps MonoDroid has a problem understanding chunked encoding. Try turning it off in IIS.
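Independent of the chunking, one more thing worth checking (my note, not part of the original answer): the JSON payload shown is an array ("[{...}]"), so the root node that System.Json parses is a JsonArray, not a JsonObject, and casting it straight to JsonObject raises exactly this InvalidCastException. A sketch of loading it defensively:

using System.IO;
using System.Json; // the same namespace the question's cast suggests

static class AlbumParser
{
    // Load into the base JsonValue and check the concrete type instead of
    // casting blindly; an array payload parses to JsonArray.
    public static string FirstAlbumName(Stream s)
    {
        JsonValue root = JsonValue.Load(s);
        JsonArray albums = root as JsonArray;
        if (albums != null && albums.Count > 0)
        {
            return (string)albums[0]["Name"]; // "Colorado" for the sample payload
        }
        return null;
    }
}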