Using akamai caching in rails app - don't cache logged in users - ruby-on-rails

I am attempting to use Akamai in my production app to cache basically every page for logged-out visitors, since only a small percentage of our users have accounts. However, I want to serve logged-in users a non-cached version of the page.
It seems that I may be able to do this in the controller with something like:
headers['Edge-control'] = "no-cache, no-store"
Will this work? Is there a better way to handle this, perhaps from a lower level, like Rack? I am having a lot of trouble finding standard practices.
Thanks!

I just dealt with this situation with Akamai and WordPress. Even if Akamai honors headers, it's probably more robust to base the rule on a cookie, the same cookie you use to track the login. That way, caching is tied to something visible -- if the cookie is not present, the user is not logged in. The header-based solution is more prone to silent failures and would require more effort to validate for correct behavior.

This doesn't work because Akamai doesn't look at response headers. You can use cookies to do it though.

Yes, you can in fact do this with headers.
Just send Edge-Control: no-store.
Akamai does in fact examine response headers; how else could it honor Cache-Control headers from origin, which is a very common configuration setting?
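For what it's worth, a minimal sketch of the header approach in a Rails controller (logged_in? is an assumed authentication helper, and which Edge-Control directives actually take effect depends on how your Akamai property is configured):
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :bypass_edge_cache_for_users   # before_filter on Rails 3

  private

  # Mark responses for logged-in users as not cacheable at the edge;
  # anonymous responses are left alone so Akamai can cache them per
  # your property configuration. logged_in? is an assumed helper.
  def bypass_edge_cache_for_users
    headers['Edge-Control'] = 'no-store, bypass-cache' if logged_in?
  end
end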

As user3995360 states, you're better off using cookies to tell Akamai not to cache the results for a number of reasons:
If Akamai has a cached version of your page, the logged-in user will be served that - your server won't have a chance to send a different header.
There's nothing to tell Akamai why the header is different for some requests - if your logged-in user managed to get the no-store header, and an anonymous user then causes the page to be cached, you're back to point 1.
That being said, when I've done this in the past we've had to involve the Akamai consultants to enable this feature on our setup.
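If you go the cookie route, the bypass rule itself lives in your Akamai property configuration; on the Rails side all you need is a cookie that reliably marks logged-in users. A rough sketch, with hypothetical controller and model names:
# app/controllers/sessions_controller.rb (hypothetical names throughout)
class SessionsController < ApplicationController
  def create
    user = User.authenticate(params[:email], params[:password])   # assumed auth method
    if user
      session[:user_id] = user.id
      # Plain, non-sensitive marker cookie the Akamai rule can match on
      # to bypass the edge cache for logged-in users.
      cookies[:logged_in] = { value: '1', path: '/' }
      redirect_to root_path
    else
      render :new
    end
  end

  def destroy
    session.delete(:user_id)
    cookies.delete(:logged_in)
    redirect_to root_path
  end
end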

Related

API protection from spoof referrer

I have a project with a rails-api backend and an Angular front end running on a separate nginx server. The front end makes normal JSON requests to the API, but there are some internal endpoints that I want only our front end to be able to call. So far I've been using referrer protection as a whitelist for our front-end servers, but I know the referrer can be spoofed.
How can I prevent an attacker from creating accounts through these internal methods and flooding the server with requests?
The other solution I considered was to send a CSRF token to the front end with every response and require the front end to send it back with every request. I don't like that idea either, as an attacker can also hit that endpoint to get a fresh CSRF token every time he makes a request.
Am I missing anything obvious here? How are people tackling this issue?
I don't hear anything in your description that makes your use case different from a regular, non-angularized app.
If I have a regular rails app serving a "signup" page, there's nothing preventing a malicious user from scripting an infinite loop of signups on that page. This seems to be the problem you're describing, but the problem seems different because of the distinction you're making in your head between APIs that are intentionally public and those that are for internal use.
The typical solution for this is to use a captcha or something, to make sure you've got a human on the other end of the API request.
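For example, a rough sketch of gating the signup endpoint behind a captcha using the recaptcha gem (the gem and its verify_recaptcha helper are assumptions about your stack, not something from your question):
# app/controllers/registrations_controller.rb (hypothetical)
class RegistrationsController < ApplicationController
  def create
    @user = User.new(user_params)
    # verify_recaptcha calls out to the captcha service and returns
    # false when the challenge wasn't solved by a human.
    if verify_recaptcha(model: @user) && @user.save
      render json: @user, status: :created
    else
      render json: @user.errors, status: :unprocessable_entity
    end
  end

  private

  def user_params
    params.require(:user).permit(:email, :password)
  end
end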
Front-end JS sources are available to any user. Even obfuscated, they can be reverse engineered.
This looks like an application architecture issue: your front end lets the user trigger actions that are supposed to be restricted.
You should probably provide more information about your app here, or review and change the app's architecture.

Rails cross-domain requests security concerns

I am developing a Rails app which relies on a lot of jQuery AJAX requests to the server that return JSON. The app has no authentication (it is open to the public). The data in these requests is not sensitive in small chunks, but I want to prevent external agents from accessing the data or automating requests (because of the server load and because of the data itself).
I would ideally like to include some kind of authentication whereby requests can only be made from JavaScript on the same domain (i.e. clients on my website), but I don't know how, or whether, this can be done. I am also thinking about encrypting the query strings and/or the responses.
Thank you.
What do you mean only your app should request these JSONs? A client will eventually have to trigger an event, otherwise no request will be sent to the server.
Look at the source code of any of your app's pages. You will notice an authenticity token, generated by the protect_from_forgery method in your application controller - from the API docs:
Turn on request forgery protection. Bear in mind that only non-GET, HTML/JavaScript requests are checked.
By default, this is enabled and included in your application controller.
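For reference, this is roughly what that default looks like (the with: :exception option is the Rails 4+ spelling; older apps just call protect_from_forgery):
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Rejects non-GET requests whose authenticity token doesn't match the
  # one derived from the session.
  protect_from_forgery with: :exception
end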
If you really need to check whether a request comes from your own IP, have a look at this great question.
I want to avoid external agents from having access to the data... because of the server load and because of the data itself.
If you're really concerned about security, this other question details how to implement an API key: What's the point of a javascript API key when it can be seen to anyone viewing the js code
You shouldn't solve problems you don't have yet; server load shouldn't be a concern until it actually becomes one. Why don't you monitor server traffic and implement this feature if you notice too much load from other agents?
I ended up passing token=$('meta[name=csrf-token]').attr("content") in the request URL and comparing it with session[:_csrf_token] in the controller.
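Roughly something like this before_filter on the protected actions - a sketch of the approach described above, not an endorsement, since the token ends up visible in URLs and server logs:
# Compare the token passed in the query string with the CSRF token Rails
# keeps in the session. Note: newer Rails versions mask the token exposed
# to the client, so this direct comparison only works on older versions.
def check_token
  head :forbidden unless params[:token].present? &&
    params[:token] == session[:_csrf_token]
end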
def check_api
  redirect_to root_url, :alert => 'effoff' unless request.host =~ /yourdomain\.com/
end
That should work to check your domain. Not sure you need the JS part, but it's something.

The best way for an App to recognize Bots (Googelbot/Yahoo Slurp)

I have a (Rails) site and I want the search engines to crawl and index it. However, I also have some actions that I want to log as having happened - and these actions can be triggered by logged-in users as well as users who are not logged in. Now, to ensure that the count for non-logged-in (i.e. anonymous) users doesn't include bot traffic, I am considering a few options and am looking for guidance on which way to go:
Set a cookie for all users; since bots usually don't accept or send back cookies, I can distinguish bots from anonymous humans by whether the cookie comes back.
Check the User-Agent header and see if the agent is a bot (against some whitelist): How to recognize bots with php?
Set that action to be a POST rather than a GET. Bots issue GETs so they don't get counted.
Any other approaches?
I am sure folks have had to do this before so what's the 'canonical' way to solve this?
If you don't want the spiders to follow the links, then you can use rel="nofollow" on them. However, since there might be other links pointing into the pages, you will probably also want to look at the User-Agent header. In my experience, the most common User-Agent headers are:
Google: Googlebot/2.1 ( http://www.googlebot.com/bot.html)
Google Image: Googlebot-Image/1.0 ( http://www.googlebot.com/bot.html)
MSN Live: msnbot-Products/1.0 (+http://search.msn.com/msnbot.htm)
Yahoo: Mozilla/5.0 (compatible; Yahoo! Slurp;)
Just checking the User-Agent header might be enough for your purposes. Note that any client can pose as Googlebot, so if you want to be certain, more checking is needed. But I don't think you'd need to bother going further than this.
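A rough sketch of that check in a Rails controller; the pattern is only an illustrative list of common crawler strings, and the counter method is hypothetical:
# Treat requests whose User-Agent matches a known crawler as bot traffic
# and leave them out of the counters. The list is illustrative, not complete.
BOT_USER_AGENTS = /googlebot|msnbot|bingbot|yahoo!\s+slurp/i

def bot_request?
  request.user_agent.to_s =~ BOT_USER_AGENTS
end

def record_action
  increment_action_counter unless bot_request?   # increment_action_counter is hypothetical
end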

Is it possible for Rails sessions to be created 'just in time'?

My understanding of the session lifecycle in Ruby on Rails (specifically v3 and upwards) is that a session is created at the start of each and every request; if the request doesn't carry an existing session cookie, a new session is created, otherwise the cookie is deserialized and stored in the session hash.
The purpose of this, of course, is to support a number of security features such as CSRF protection.
However, this poses a bit of an issue when it comes to caching pages with HTTP cache services and proxies such as Varnish, as most configurations tend to strip out these cookies (generally all cookies) on both the request and the response (since the cache is usually intended for a generalized audience).
I know that it is possible to set up Varnish etc. to include the cookie details in the object hash, which would scope the cached data to that session (and therefore that user), but I am wondering if this is completely necessary.
I have an application which is fairly 'static' in nature - content is pulled from a database and rendered into a page which can then be cached - and there are a few elements (such as comment count, 'recent' items etc.) which can be added in with an ESI. But for every request Rails still tends to want to set up a new session, and when a user already has a session this stuff is stripped out by the cache server.
I am wondering if it might be possible (via pre-existing functionality, or by building it myself) to allow the developer to control when a session is required, so that the back-and-forth with cookies, session initialization/deserialization etc. only happens when it is explicitly requested.
That, or I am thinking about this problem the wrong way and need to address the issue from another angle...
From what I know, Rails sessions can be controlled in some depth via ActionController::SessionManagement:
http://ap.rubyonrails.org/classes/ActionController/SessionManagement/ClassMethods.html#M000070
There are examples in the API docs of disabling it per action, per controller, etc.
If your site is mostly static then you may want to use full page caching. This takes Rails out of the request entirely and lets the web server deal with it once the content has been generated. It might cause some serious headaches depending on your exact needs as far as the comment counts and user-specific parts go, though.
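A minimal sketch of full page caching, assuming caches_page is available (built into Rails 3; in Rails 4+ the same API lives in the actionpack-page_caching gem). The cached HTML lands under public/ and is served by the web server without hitting Rails:
# app/controllers/articles_controller.rb (hypothetical controller)
class ArticlesController < ApplicationController
  caches_page :index, :show   # writes the rendered HTML under public/ on first hit

  def show
    @article = Article.find(params[:id])
  end

  # The cached file has to be expired explicitly when content changes, e.g.
  #   expire_page action: :show, id: @article.id
end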

Tomcat session management - url rewrite and switching from http to https

I'm an old hand at C but a raw newbie at Java/Tomcat.
I'm fine with Tomcat session management over HTTP alone. It's when I've come to look at switching to HTTPS that I've had problems.
I gather that with Tomcat you have to start with an HTTP session if you want to maintain a session as you switch from HTTP to HTTPS and back to HTTP. This works fine for me when the browser has cookies enabled.
But when cookies are disabled in the browser (and URL rewriting is being used), switching from HTTP to HTTPS or back again causes a fresh session to be started each time. I'm assuming this is a security thing.
Q1 - Is it possible/desirable to maintain a session between HTTP and HTTPS using URL rewriting?
Q2 - If it isn't possible, then what do e-commerce developers do about non-cookie users?
I don't want to prevent non-cookie people from using my site. I do want some flexibility switching between HTTP and HTTPS.
thanks for any help,
Steven.
It doesn't seem desirable to maintain a session across HTTP and HTTPS using the same cookie or URL token.
Imagine the case where your user is logged on, with a given cookie (or URL token) passed back and forth for every request/response on an e-commerce website. If someone in the middle is able to read that cookie, he can then log on to the HTTP or HTTPS variant of the site with it. Even if whatever the legitimate user is doing is over HTTPS, the attacker will still be able to access that session (because he too will have the legitimate cookie). He could see pages like the cart or the payment method, and perhaps change the delivery address.
It makes sense to pass some form of token between the HTTP session and the HTTPS session (if you're using sessions), but treating them as one and the same would create a vulnerability. Creating a one-off token in the query parameter just for the transition could be a solution. You should, however, treat them as two separate authenticated sessions.
This vulnerability can happen with websites that use mixed HTTP and HTTPS content (certain browsers such as Firefox will give you a warning when that happens, although most people tend to disable it the first time it pops up). You could have your HTTPS session cookie for the main page, but that page contains images for the company logo served over plain HTTP. Unfortunately, the browser would send the cookie with both, so an attacker would be able to capture the cookie then. I've seen it happen even when the image in question wasn't there any more (the browser would still send the request with the cookie to the server, even though it returned a 404 Not Found).
