I put the code from the end of this article into my MVC controller method:
http://msdn.microsoft.com/en-us/library/windowsazure/gg680299.aspx
I configured a CNAME for the CDN and everything works fine, except I feel that the CDN is not caching :)
This is the CDN URL:
http://cdn.services.idemkvrachu.ru/services/BranchLogo/82f204fe-bb1d-4204-b817-d424e1284b17/E0F4F2AE-B6C2-4516-BE7C-59B649E2C5AC?lastUpdated=635169430040919922&width=499
And this is the original URL:
http://prm.idemkvrachu.ru/cdn/services/BranchLogo/82f204fe-bb1d-4204-b817-d424e1284b17/E0F4F2AE-B6C2-4516-BE7C-59B649E2C5AC?lastUpdated=635169430040919922&width=499
This is my code:
Response.Cache.SetExpires(DateTime.Now.AddDays(14));
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetLastModified(blob.ChangDateOfs.DateTime);
return File(bytes, format);
When I compared the timings for receiving the picture from the original link and from the CDN, I found that the timings were higher on the CDN.
I also tried changing blob.ChangDateOfs and comparing it against the Last-Modified header in the CDN response: it changes immediately.
What's wrong with my code? Maybe this header breaks the CDN cache: Cache-Control: public, no-cache="Set-Cookie"?
To troubleshoot caching issues, the first thing you want to do is validate whether your content is actually getting cached or not.
To do this, you can add the X-LDebug header with a value of 2. Here is an example of doing this against your endpoint, with the relevant portions of the output included:
C:\Azure\Tools\wget\bin>wget -S --header "X-LDebug: 2" "http://cdn.services.idemkvrachu.ru/services/BranchLogo/82f204fe-bb1d-4204-b817-d424e1284b17/E0F4F2AE-B6C2-4516-BE7C-59B649E2C5AC?lastUpdated=635169430040919922&width=499"
Cache-Control: public, no-cache="Set-Cookie"
Set-Cookie: ASP.NET_SessionId=nnxb3xqdqetj0uhlffdmtf03; path=/; HttpOnly
Set-Cookie: idCity=31ed5892-d3cb-45eb-bd4f-526cd65f5302; domain=idemkvrachu.cloudapp.net;
X-Cache: MISS from cds173.sat9.msecn.net
As you can see, you are setting the Cache-Control header to no-cache="Set-Cookie", and then you are setting a cookie. This tells the CDN not to cache the content. Since your code only sets the cacheability to Public, I assume you have a setting in your web.config or .aspx page that modifies the Cache-Control header to add the no-cache="Set-Cookie".
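If that's the case, one way out (a rough sketch; the controller name, action signature, and LoadLogoBytes helper are made up for illustration) is to serve the image from a controller with session state disabled, so ASP.NET never issues the ASP.NET_SessionId cookie and never appends no-cache="Set-Cookie" to your Cache-Control header. Any application cookies (like idCity) must also stay off this code path:

using System;
using System.Web;
using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.Disabled)] // no ASP.NET_SessionId cookie is issued for this controller
public class BranchLogoController : Controller
{
    public ActionResult BranchLogo(Guid branchId, Guid imageId)
    {
        byte[] bytes = LoadLogoBytes(branchId, imageId); // hypothetical blob lookup

        Response.Cache.SetExpires(DateTime.Now.AddDays(14));
        Response.Cache.SetCacheability(HttpCacheability.Public);
        // With no Set-Cookie in the response, ASP.NET has no reason to append
        // no-cache="Set-Cookie" to Cache-Control, so the CDN can cache.
        return File(bytes, "image/png");
    }

    private byte[] LoadLogoBytes(Guid branchId, Guid imageId)
    {
        // placeholder for the blob storage call from the linked article
        throw new NotImplementedException();
    }
}

If the idCity cookie has to be set somewhere, set it on the HTML pages rather than on the image responses the CDN is supposed to cache.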
I have a Grails 2.2.4 application and I wanted to enable CORS, so I installed the CORS plugin by adding the following line to BuildConfig.groovy:
plugins {
runtime ':cors:1.1.8'
}
Then, in Config.groovy:
cors.headers = ['Access-Control-Allow-Origin': '*']
But after this, when I run the application, CORS is not enabled. So I debugged the CORS plugin. The issue seems to be in the CorsFilter class, in the following method:
private boolean checkOrigin(HttpServletRequest req, HttpServletResponse resp) {
    String origin = req.getHeader("Origin");
    if (origin == null) {
        // no origin; per W3C spec, terminate further processing for both preflight and actual requests
        return false;
    }
    // ...
}
The origin variable above is always null, because the request does not carry an 'Origin' header. Is there something I'm doing wrong? I'm not looking for an answer that says to add a manual header named "Origin", since that is not exactly a proper fix.
I'm quite new to CORS, so I appreciate the help.
In addition to Access-Control-Allow-Origin, and in addition to setting the Origin header on the request, you probably need to specify these response headers as well:
Access-Control-Allow-Headers: accept
Access-Control-Allow-Headers: origin
Access-Control-Allow-Headers: content-type
Access-Control-Allow-Methods: GET
Access-Control-Allow-Methods: POST
Also make sure you respond to HTTP OPTIONS requests with these headers and a blank 200 OK response.
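For illustration only, here is what that OPTIONS handling looks like in miniature. This is sketched with .NET's HttpListener rather than Grails (the CORS plugin is supposed to do the equivalent for you once it sees an Origin header); the port and header values are placeholders:

using System;
using System.Net;

class PreflightSketch
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            HttpListenerResponse resp = ctx.Response;

            // CORS headers go on both the preflight and the actual response.
            resp.AddHeader("Access-Control-Allow-Origin", "*");
            resp.AddHeader("Access-Control-Allow-Headers", "Origin, Accept, Content-Type");
            resp.AddHeader("Access-Control-Allow-Methods", "GET, POST");

            if (ctx.Request.HttpMethod == "OPTIONS")
            {
                resp.StatusCode = 200; // blank 200 OK for the preflight
                resp.Close();
                continue;
            }

            // ... handle the actual GET/POST here ...
            resp.Close();
        }
    }
}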
For now, let's assume that RestClient is sending the Origin header properly. It may still be getting stripped by your application. You can prevent this using the Access-Control-Allow-Headers: Origin header.
Most of the problems I have had with my web services have been that the right headers were being sent, but they were stripped from the message by my web server. So I tend to adopt a shotgun approach of "allow everything" and then remove, one by one, what I don't need. My allow-headers header usually ends up pretty long, and I have to include stuff like Content-Type, X-Requested-With and other junk before my requests will finally go through.
I further recommend that you test using something besides RestClient, if only as a sanity check. I use Postman, a free Chrome app, for all my messaging tests. It looks to me like the problem is with RestClient not sending the proper Origin header.
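If you want one more sanity check, a small client that forces the Origin header makes the comparison easy. A sketch (in C# purely for convenience; the URL and origin are placeholders) that sends an explicit Origin and prints whatever Access-Control-* headers come back:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CorsCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:8080/your-app/api/resource");
            // Force the Origin header, so the server has no excuse not to answer with CORS headers.
            request.Headers.Add("Origin", "http://example.com");

            HttpResponseMessage response = await client.SendAsync(request);
            Console.WriteLine((int)response.StatusCode);
            foreach (var header in response.Headers)
            {
                // print only the CORS-related response headers
                if (header.Key.StartsWith("Access-Control-", StringComparison.OrdinalIgnoreCase))
                    Console.WriteLine(header.Key + ": " + string.Join(", ", header.Value));
            }
        }
    }
}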
We have a controller returning dynamically generated JS code in a JavaScriptResult (ActionResult).
public ActionResult Merchant(int id)
{
var js = "<script>alert('bleh')</script>"; // really retrieved from custom storage
return JavaScript(js); // same as new JavaScriptResult() { Script = js };
}
The default gzip filter is compressing it but not setting Content-Length. Instead, it just sets Transfer-Encoding: chunked and leaves it at that.
Arr-Disable-Session-Affinity:True
Cache-Control:public, max-age=600
Content-Encoding:gzip
Content-Type:application/x-javascript; charset=utf-8
Date:Fri, 06 Nov 2015 22:25:17 GMT
Server:Microsoft-IIS/8.0
Timing-Allow-Origin:*
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-AspNet-Version:4.0.30319
X-AspNetMvc-Version:5.2
X-Powered-By:ASP.NET
How can I get it to add the header? I absolutely need this header. Without Content-Length, there's no way to tell if the file completed downloading, e.g. if the connection dropped. Without it, some CDNs like Amazon CloudFront aren't able to properly cache.
After turning off compression, I get a normal Content-Length. It seems trivial for the gzip filter to add this length - there must be an option somewhere?
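For what it's worth, one workaround (a sketch, under the assumption that IIS dynamic compression leaves responses that already carry a Content-Encoding header alone; this is not a documented option of the gzip filter) is to disable automatic compression for this action, gzip the payload yourself, and return a fully buffered result, which the framework can label with Content-Length:

using System.IO;
using System.IO.Compression;
using System.Text;
using System.Web.Mvc;

public class MerchantController : Controller
{
    public ActionResult Merchant(int id)
    {
        var js = "<script>alert('bleh')</script>"; // really retrieved from custom storage
        byte[] body = Encoding.UTF8.GetBytes(js);

        // Compress only when the client accepts gzip.
        string acceptEncoding = Request.Headers["Accept-Encoding"] ?? "";
        if (acceptEncoding.Contains("gzip"))
        {
            using (var buffer = new MemoryStream())
            {
                using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                {
                    gzip.Write(body, 0, body.Length);
                }
                body = buffer.ToArray(); // ToArray works even after the stream is closed
            }
            // Pre-set Content-Encoding so IIS dynamic compression skips this response.
            Response.AppendHeader("Content-Encoding", "gzip");
            Response.AppendHeader("Vary", "Accept-Encoding");
        }

        // A fully buffered FileContentResult is not chunked, so a
        // Content-Length header can be emitted for it.
        return File(body, "application/x-javascript");
    }
}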
Could you tell me how I can disable caching for the default home page in Rikulo Stream? By home page I mean the main domain (xxx.xxx.com) with no sub-path (/xxx), not even including '/'. The uriMapping setting doesn't allow me to set a filter for a path that doesn't start with '/', '.', '[' or '(', and (.*) is not working for me (Cache-Control is still set to max-age=2592000 for the default home page).
Is it a static page (e.g., index.html) or an RSP page?
If it is RSP, you can specify the header(s) you like. For example,
[:header
Cache-Control="no-cache, must-revalidate, no-store, private, max-stale=0, max-age=0, post-check=0, pre-check=0"
Expires="0" Pragma="no-cache"]
If it is static, there is no direct way to override max-age, ETag, and related headers. But there are a few alternatives. First, you can implement your own resource loader.
Second, you can implement a handler that sets the headers and then includes the real page. Assume you mapped HTML files under /s:
uriMapping: {
  r"/s/.*\.html": (HttpConnect connect) {
    connect.response.headers..contentType = "text/html"
      ..add("Cache-Control", "no-cache"); // also other headers
    return connect.include(connect.request.uri.path.substring(2));
  }
}
If a page is included, it won't update the headers.
Of course, you can implement your HTML file in RSP. Then you have total control. Plus, you can use the script tag to generate a proper link easily (which includes simple version control).
The code: I have the following piece of code in a HAML file.
= image_tag "https://s3.amazonaws.com/my_bucket/my_image.jpg"
It sends a request to S3 and loads the image in the browser. I have the following CORS configuration on the bucket:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.my_site.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
The problem:
In order to be able to manipulate the images on client side, the images are supposed to be served with the following headers:
Access-Control-Allow-Origin: https://www.my_site.com
Access-Control-Allow-Methods: GET
but this does not happen.
The cause: My browser does not send the 'Origin' request header, and therefore S3 does not respond with the desired headers.
Why I think the missing "Origin" header is the cause:
Because the response of:
wget --server-response --header "Origin:https://www.my_site.com" "https://s3.amazonaws.com/my_bucket/my_image.jpg"
is the following:
HTTP/1.1 200 OK
x-amz-id-2: kQV8HEChV1...QHmHC1Gt/
x-amz-request-id: A626...4A2
Date: Wed, 03 Jul 2013 10:10:38 GMT
Access-Control-Allow-Origin: https://www.my_site.com
Access-Control-Allow-Methods: GET
Access-Control-Allow-Credentials: true
Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
...
i.e., the 'Access-Control-Allow-Origin' and 'Access-Control-Allow-Methods' headers are present.
The proposed solution:
Is there a way to add manually the desired headers to the image_tag in the HAML file? Something like:
= image_tag "https://s3.amazonaws.com/my_bucket/my_image.jpg", :headers=>["Origin"]
Not using image_tag, no. Remember all image_tag does is generate the HTML tag. The end user's browser is responsible for loading the image source and displaying it. Unless I've missed something major, there isn't any way for you to tell the browser to pass additional headers via HTML.
You might be able to change your approach and load those images using JavaScript. Maybe you can pass headers that way.
Check How to debug CORS error for a solution.
I spent a lot of time thinking the problem was server side but finally it was due to a browser issue.
You can force Chrome to use the proper CORS protocol by loading your image through JavaScript.
Try changing your html to an empty img tag and loading your image into it. Like:
<span id="source-url" class="hidden"><%= @product.images.first.attachment.url(:large) %></span>
<img id="art-image">
and in a .js file
var artImage = document.getElementById('art-image');
$(artImage).attr('crossOrigin', '');
// bind the load handler before setting src, so a cached image can't fire it early
$(artImage).one('load', function() {
  // image processing
});
$(artImage).attr("src", $("#source-url").text());
Hope it helps!
Recently I added a Varnish instance to a Rails application stack. Varnish in its default configuration can be convinced to cache a certain resource using the Cache-Control header, like so:
Cache-Control: max-age=86400, public
I achieved that using the expires_in statement in my controllers:
def index
expires_in 24.hours, public: true
respond_with 'some content'
end
That worked well. What I did not expect is that the Cache-Control header ALSO affects the browser. That leads to the problem that both Varnish and my users' browsers cache a given resource. The resource is purged from Varnish correctly, but the browser does not attempt to request it again until max-age is reached.
So I wonder whether I should use 'expires_in' in combination with Varnish at all. I could filter the Cache-Control header in an Nginx or Apache instance in front of Varnish, but that seems odd.
Can anyone enlighten me?
Regards
Felix
That is actually a very good and valid question, and a very common one with reverse proxies.
The problem is that there's only one Cache-Control property, and it is intended for the client browser (private cache) and/or a proxy server (shared cache). If you don't want 3rd-party proxies to cache your content at all, and want every request to be served by your Varnish (or by your Rails backend), you must send an appropriate Cache-Control header from Varnish.
Modifying Cache-Control header sent by the backend is discussed in detail at https://www.varnish-cache.org/trac/wiki/VCLExampleLongerCaching
You can approach the solution from two different angles. If you wish to define max-age at your Rails backend, for instance to specify different TTLs for different objects, you can use the method described in the link above.
Another solution is to not send Cache-Control headers at all from the backend, and instead define the desirable TTLs for objects in Varnish's vcl_fetch(). This is the approach we have taken.
We have a default TTL of 600 seconds in Varnish, and define longer TTLs for pages that are definitely explicitly purged when changes are made. Here's our current vcl_fetch() definition:
sub vcl_fetch {
    if (req.http.Host ~ "(forum|discus)") {
        # Forum pages are purged explicitly, so cache them for 48h
        set beresp.ttl = 48h;
    }
    if (req.url ~ "^/software/") {
        # Software pages are purged explicitly, so cache them for 48h
        set beresp.ttl = 48h;
    }
    if (req.url ~ "^/search/forum_search_results") {
        # We don't want forum search results to be cached for longer than 5 minutes
        set beresp.ttl = 300s;
    }
    if (req.url == "/robots.txt") {
        # robots.txt is updated rarely and should be cached for 4 days
        # Purge manually as required
        set beresp.ttl = 96h;
    }
    if (beresp.status == 404) {
        # Cache 404 responses for 15 seconds
        set beresp.http.Cache-Control = "max-age=15";
        set beresp.ttl = 15s;
        set beresp.grace = 15s;
    }
}
In our case we don't send Cache-Control headers at all from the web backend servers.