Add Expires header to image from database - asp.net-mvc

Does anybody know if it is possible to cache an image that comes from the database?
I know there is an OutputCache attribute you can place above the action. You could then set VaryByParam to the id of the image in the database.
But this would just save the image on the server and not on the client, right?
I was hoping there was something like an expiration header for an image. Can you add that to an image? That way, the client is responsible for deciding whether to make the request at all, which saves a round trip to the server...
If I'm wrong, please correct me, because I'm new to this kind of caching (OutputCache and expiration headers).
Thanks

Output caching affects the client's caching as well, so this will actually work OK for you.
See my note on caching here:
Disable browser cache for entire ASP.NET website
Someone on that thread thought that output caching applied only on the server side as well, but a quick test can tell you otherwise. This doesn't mean there aren't scenarios where it's limited to the server (such as varying by a custom key). I would have one action method responsible for serving up only these files. That method doesn't need to vary by key; just set your Duration to, say, a minute and watch your headers coming down in Fiddler to verify.
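For illustration, here is a minimal sketch of such an action, assuming the image is stored as bytes in the database (the controller, action, and lookup names are mine, not from the question):

using System.Web.Mvc;
using System.Web.UI;

public class ImagesController : Controller
{
    // Location = Client emits a Cache-Control: private header with a
    // max-age, so the browser reuses its local copy instead of asking
    // the server again until Duration (in seconds) has elapsed.
    [OutputCache(Duration = 600, VaryByParam = "id",
                 Location = OutputCacheLocation.Client)]
    public ActionResult Show(int id)
    {
        byte[] bytes = LoadImageFromDatabase(id); // hypothetical data-access call
        return File(bytes, "image/png");
    }

    private byte[] LoadImageFromDatabase(int id)
    {
        // Placeholder for your repository/ORM lookup.
        throw new System.NotImplementedException();
    }
}

Requesting the image twice and watching Fiddler should show the second response being served from the browser cache rather than hitting the server.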

Related

MVC 5 how to achieve POST that behaves like a redirect to GET with content

My client redirects to an https://domain.com/Controller/GetInfo?Querystring method. Now my query string is getting dangerously close to the 2K limit, so I need to reproduce this behavior but pack my query string into the content of the message. Since it would be heresy (etc.) to try a GET with content, I'll use a POST. However, I can't redirect to a POST, since a redirect has no content.
So, what I am looking for is the best MVC 5 pattern to resolve this: I need to provide lots of content, but I want the resulting page hosted on my remote server (i.e. as if I had redirected).
Also, since I use load-balanced servers in Azure, I'd prefer to maintain my clean stateless servers if at all possible (else I'll have to introduce session caching).
@AntP is absolutely right in the comments above. If your query string is approaching 2K, then you're abusing it.
If there's a particular object you're referencing, then you can simply include the id or some other identifying piece of it and use that to look it up again from your data store.
If there's no persistent record of the object, then you can use something like Session or TempData to store it between one request and the next.
Regardless, it's not possible to redirect with a request body, which also means it's not possible to redirect using POST. The reason for this is that a redirect is not something the server does, but rather the client. The server merely suggests that the client go to a different URL. It's then up to the client (the web browser) to issue a new request for that URL. Since the client is the one issuing the request, it makes the decision about what data is or isn't included in that request, not the server.
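As an illustration of the TempData route mentioned above, here is a minimal sketch (the controller, action, and model names are invented for the example). Note that TempData is session-backed by default, so on load-balanced Azure servers you would need a shared session/TempData provider, which is exactly the state the asker was hoping to avoid:

using System.Web.Mvc;

public class LargePayload { public string Data { get; set; } } // hypothetical model

public class InfoController : Controller
{
    [HttpPost]
    public ActionResult Submit(LargePayload payload)
    {
        // Stash the oversized data server-side for exactly one follow-up request...
        TempData["payload"] = payload;
        // ...then redirect with a short, clean URL instead of a 2K query string.
        return RedirectToAction("GetInfo");
    }

    [HttpGet]
    public ActionResult GetInfo()
    {
        var payload = TempData["payload"] as LargePayload;
        if (payload == null)
            return new HttpStatusCodeResult(400); // direct hit with no pending data
        return View(payload);
    }
}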

URL Schema for Long Running Operation in Web Application

In our web application we have some pages that may take a long time to generate. The reason is that they need information that takes between a few seconds and a few minutes to calculate. Once the data is calculated it is cached, and access is very fast.
While the system calculates the information, we want to show the user some message and not just leave the browser spinning.
The question is how to architect the URL schema:
1) Use the same URL and return different content that shows the "loading" sign and reloads every few seconds.
2) Redirect the client (302 temporary) to another URL, which redirects the client back to the real URL once the information is ready.
Please take into account we have several URLs that use that same data:
/index/{id}
/export/{id}
So using option 1 will keep the URL schema simpler, but will not be so friendly to output caching and caching in general.
I've decided to use option #1 and use the same URL.
The main reason is that it's much easier to preserve any URL parameters the user entered when the long operation was triggered. If I had redirected to another URL, I would have had to carry those parameters along.
I do make sure to set the cache headers so the client will not cache the "loading" screen.
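For illustration, here is a minimal sketch of option #1 in ASP.NET MVC (ReportCache and its methods are hypothetical stand-ins for whatever calculation/cache layer you already have):

using System.Web;
using System.Web.Mvc;

public class ReportsController : Controller
{
    public ActionResult Index(string id)
    {
        var report = ReportCache.TryGet(id); // hypothetical cache lookup
        if (report == null)
        {
            ReportCache.EnsureCalculationStarted(id); // hypothetical kick-off

            // Same URL, different content: make sure the placeholder is
            // never cached, and ask the browser to retry in 5 seconds.
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.AppendHeader("Refresh", "5");
            return View("Loading");
        }
        return View("Report", report);
    }
}

public static class ReportCache
{
    // Placeholders for your real calculation/cache layer.
    public static object TryGet(string id) { return null; }
    public static void EnsureCalculationStarted(string id) { }
}

Once the data is ready, the very same URL starts returning the real page, so /index/{id} and /export/{id} stay stable and cache-friendly.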

Can IIS or ASP.NET MVC somehow achieve this?

There was a coding error recently, and the site was down for a couple of hours during working hours.
Our site is basically a publishing site: users can upload Excel files, and we grab the information and generate PDFs.
The final pdf location is something like
https://SomeUrl.url.com/Documents/ClientName/DocumentName.pdf
Documents is the controller; we map it to an action, and ClientName and DocumentName are the parameters.
What the client wants is that even if the site is down (meaning they can't upload or modify anything), the above URL should still be up.
Other than rewriting the whole logic, is there something we can do at the IIS level?
I thought about URL rewriting or URL redirects, but I don't really think it is possible.
Anyone got any ideas?
Many Thanks
The URL Rewrite IIS extension won't be helpful, as it's based on URL patterns. It doesn't care whether the site is up or down.
You should consider setting up a load balancer instead. It's the load balancer's job to decide which server to hit, depending on each server's current load and whether it's available.
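Whichever balancer you pick will usually decide availability by polling a health endpoint. A minimal sketch of such an endpoint (the controller and route names are assumptions; any URL that returns 200 only when the app is healthy will do):

using System.Web.Mvc;

public class HealthController : Controller
{
    // A load balancer polls this and pulls the node out of rotation
    // when it stops answering 200, so the document URLs keep working
    // as long as at least one healthy node can serve them.
    public ActionResult Ping()
    {
        return new HttpStatusCodeResult(200, "OK");
    }
}

Note this only keeps the URLs up if the generated PDFs live on storage reachable from every node, not just on the one that produced them.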

Rails implementation for securing S3 documents

I would like to protect my S3 documents behind my Rails app, such that if I go to:
www.myapp.com/attachment/5, the app should authenticate the user prior to displaying/downloading the document.
I have read similar questions on Stack Overflow, but I'm not sure I've seen any good conclusions.
From what I have read there are several things you can do to "protect" your S3 documents.
1) Obfuscate the URL. I have done this. I think this is a good thing to do so no one can guess the URL. For example, it would be easy to "walk" the URLs if your S3 URLs are obvious: https://s3.amazonaws.com/myapp.com/attachments/1/document.doc. A URL such as:
https://s3.amazonaws.com/myapp.com/7ca/6ab/c9d/db2/727/f14/document.doc seems much better.
This is great to do, but it doesn't resolve the issue of URLs being passed around via email or websites.
2) Use an expiring URL as shown here: Rails 3, paperclip + S3 - Howto Store for an Instance and Protect Access
For me, however, this is not a great solution because the URL is exposed (even if just for a short period of time) and another user could conceivably reuse it in that window. You have to tune the expiry to allow for the download without providing so much time that the URL can be copied around. It just seems like the wrong solution.
3) Proxy the document download via the app. At first I tried to just use send_file: http://www.therailsway.com/2009/2/22/file-downloads-done-right but the problem is that it only works with static/local files on your server, not files served via another site (S3/AWS). I can, however, use send_data to load the document into my app and immediately serve it to the user. The problem with this solution is obvious - twice the bandwidth and twice the time (once to load the document into my app and again to send it on to the user).
I'm looking for a solution that provides the full security of #3 but does not require the additional bandwidth and time. It looks like Basecamp is "protecting" documents behind their app (via authentication), and I assume other sites are doing something similar, but I don't think they are using my #3 solution.
Suggestions would be greatly appreciated.
UPDATE:
I went with a 4th solution:
4) Use Amazon bucket policies to control access to the files based on referrer:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?UsingBucketPolicies.html
UPDATE AGAIN:
Well, #4 can easily be worked around via a browser's developer tools. So I'm still in search of a solid solution.
You'd want to do two things:
Make the bucket and all objects inside it private. The naming convention doesn't actually matter; the simpler the better.
Generate signed URLs, and redirect to them from your application. This way, your app can check whether the user is authenticated and authorized, then generate a new signed URL and redirect them to it using a 301 HTTP status code. This means the file never goes through your servers, so there's no load or bandwidth cost on you; a sketch of the flow follows below. Here are the docs for presigning a GET_OBJECT request:
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html
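A minimal sketch of that flow, here with the AWS SDK for .NET (the Ruby presigner linked above does exactly the same job); the bucket, key, and lookup names are invented:

using System;
using System.Web.Mvc;
using Amazon.S3;
using Amazon.S3.Model;

public class AttachmentsController : Controller
{
    private static readonly IAmazonS3 S3 = new AmazonS3Client(); // credentials from config/environment

    public ActionResult Show(int id)
    {
        // 1. Authenticate and authorize the current user here (omitted).
        // 2. Map the attachment id to its S3 key (hypothetical lookup).
        string key = LookupKeyFor(id);

        var request = new GetPreSignedUrlRequest
        {
            BucketName = "myapp.com",               // illustrative bucket name
            Key = key,
            Verb = HttpVerb.GET,
            Expires = DateTime.UtcNow.AddMinutes(1) // short-lived, as suggested below
        };

        // Redirect the browser straight to S3; the bytes never pass through the app.
        return Redirect(S3.GetPreSignedURL(request));
    }

    private string LookupKeyFor(int id)
    {
        // Placeholder for your real attachment lookup.
        return "attachments/" + id + "/document.doc";
    }
}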
I would vote for number 3; it is the only truly secure approach, because once you pass the user the S3 URL, it is valid until its expiration time. A crafty user could exploit that hole; the only question is, will that affect your application?
Perhaps you could set the expiry time lower, which would minimise the risk?
Take a look at an excerpt from this post:
Accessing private objects from a browser
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated URL for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default, authenticated URLs expire 5 minutes after they were generated. Expiration options can be specified either as an absolute time since the epoch with the :expires option, or as a number of seconds relative to now with the :expires_in option.
I have been trying to do something similar for quite some time now. If you don't want to use the bandwidth twice, then the only way to do this is to let S3 handle it. Now, I am totally with you about the exposed URL. Were you able to come up with any alternative?
I found something that might be useful in this regard - http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempFederationTokenRuby.html
Once a user logs in, an AWS session with the user's IP as part of the policy should be created, and that session can then be used to generate the signed URLs. So in case somebody else grabs the URL, the signature will not match, since the source of the request will be a different IP. Let me know if this makes sense and is secure enough.
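A rough sketch of that idea, in the same C# convention as above (the policy, bucket, user name, and duration are all assumptions; the IP condition is enforced by S3 at request time):

using Amazon.Runtime;
using Amazon.S3;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;

// 1. Get temporary credentials whose policy only allows GETs from the user's IP.
var sts = new AmazonSecurityTokenServiceClient();
var token = sts.GetFederationToken(new GetFederationTokenRequest
{
    Name = "user42",            // illustrative federated user name
    DurationSeconds = 900,
    Policy = @"{
      ""Version"": ""2012-10-17"",
      ""Statement"": [{
        ""Effect"": ""Allow"",
        ""Action"": ""s3:GetObject"",
        ""Resource"": ""arn:aws:s3:::myapp.com/*"",
        ""Condition"": { ""IpAddress"": { ""aws:SourceIp"": ""203.0.113.7/32"" } }
      }]
    }"
});

// 2. Sign URLs with those session credentials; a request coming from any
//    other IP fails the policy check even if the URL leaks.
var creds = new SessionAWSCredentials(
    token.Credentials.AccessKeyId,
    token.Credentials.SecretAccessKey,
    token.Credentials.SessionToken);
var s3 = new AmazonS3Client(creds);

One caveat: users behind shared or rotating IPs (corporate NAT, mobile networks) can be locked out or end up sharing an "identity", so the IP check is a mitigation rather than real authentication.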

CDN and URL's with query-strings

We have an images folder on our web servers that we may publish via a CDN. Sometimes we append query-string-like syntax to URLs to help us freshen content that has changed, even though it rarely does. Example:
/images/file.png?20090821
Will URLs like this work with your average content delivery network?
Yes. We use Akamai, which keeps a cached copy of each distinct URL requested, including the query string. So the first request for /images/file.png?20090821 will go to the origin server. Requests thereafter for /images/file.png?20090821 will get the image from the Akamai servers. The next day, assuming the img src changes to /images/file.png?20090822, the first request will go to the origin server again.
Amazon CloudFront turned on this feature in May 2012
You wouldn't have a problem with the CDN. However, you may have a problem with browsers: some browsers won't cache any content with a query string. Even though it may be faster to fetch the image from the CDN, it will not be as fast as a locally cached image. So you want to do something like this:
/images/file.png/20090821
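A minimal sketch of generating such path-stamped URLs (a hypothetical helper; stamping with a short content hash, as the next answer also suggests, means the path only changes when the file does):

using System;
using System.IO;
using System.Security.Cryptography;

public static class AssetUrl
{
    // Turn /images/file.png into /images/file.png/a1b2c3d4 by hashing the
    // file's contents; the stamp changes only when the content changes.
    public static string Versioned(string webPath, string physicalPath)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(physicalPath))
        {
            byte[] hash = sha.ComputeHash(stream);
            string stamp = BitConverter.ToString(hash, 0, 4)
                .Replace("-", "").ToLowerInvariant();
            return webPath + "/" + stamp;
        }
    }
}

You would still need a rewrite rule on the origin so the trailing stamp segment is stripped before the file is served.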
Our CDN provider also recommends a hash mechanism. When we publish our content, it adds a hash to the URL so you don't have to add the version yourself. Unfortunately, I don't know the details of how that magic is done.
Amazon CloudFront won't propagate the query string (though see the note above: this changed in May 2012).
