Rails + SSL: Per controller or application-wide? - ruby-on-rails

I could use some wisdom from any developers who have worked with Rails and SSL. I have a fairly simple app and I'm in the process of implementing payment processing. Obviously payment processing calls for SSL, so I'm setting that up now.
My intention when I started working on this today was to find the simplest / cleanest way to enforce SSL on specific controller actions - namely anything having to do with payment. I figured there was no reason to run the rest of my site on SSL.
I found the ssl_requirement gem which seems to take care of setting SSL per-controller-action without much difficulty, so that's good. I also found this question which seems to indicate that handling SSL with a gem is now out-of-style.
I also found several answers / comments etc. suggesting that a site should just use Rack middleware like Rack-SSL to force the entire site to SSL mode.
So now I'm kind of confused, and not sure what I should do. Could anyone with experience working with Rails 3 and SSL help me understand:
Whether I should force the whole site to SSL, or only per certain actions.
What gotchas to look out for using SSL in Rails (I've never done it before).
If per-controller is the way to go, whether it makes sense to use the ssl_requirement gem or whether I should just use the new routing and link helper options...
I'd very much appreciate your insight, this has become a paralyzing decision for me. Thanks!

I've found myself "paralyzed" by this decision in the past, and here's what I think about each time.
First, keep in mind that some browsers will throw pop-up warnings if you keep switching out of and into SSL, or if you serve some content (the page) with SSL and other content (images, css) without. Obviously that's not a good experience for users.
The only possible downside to requiring SSL everywhere is performance. But unless you're expecting 1000+ users/day who will be doing lots of things that *don't* require SSL, this is negligible.
SSL is handled at the Apache/Nginx/whatever level. So if you decide to put your entire app behind SSL, it makes the most sense to deal with it at the web server level (redirect http://yoursite.com to https://yoursite.com).
And if, for performance reasons, you decide not to put everything behind SSL, it can still make sense to handle SSL redirects at the web server level. Letting a user through your web server and half the Rails stack, just to boot them back out to start over again, is very wasteful.
Of course there's something to be said for simplicity and domains of knowledge, which would suggest handling redirects in your Rails app or middleware, since it "knows" what's safe and unsafe.
But those are things you'll have to weigh yourself. It depends on whether raw performance or simplicity of development/maintenance is more important.
I usually end up with a virtual host for http://mysite.com which redirects everything (or sometimes only certain uris) to https://mysite.com/$1. Hope that's helpful.
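For reference, the application-level route looks roughly like this, assuming Rails 3.1 or later where force_ssl is built into ActionController (on earlier 3.x you'd still reach for ssl_requirement); the controller and action names below are placeholders:

# config/application.rb -- put the whole app behind SSL via middleware
config.force_ssl = true

# or, per controller/action -- only the payment actions require SSL
class PaymentsController < ApplicationController
  force_ssl :only => [:new, :create]
end

Either way the certificate itself still lives at the Apache/Nginx level; this only controls which requests get redirected to https.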

Related

What are the differences between implementing HTTPS everywhere via IIS or MVC?

I'm working on a project to require HTTPS everywhere among a suite of MVC and WebAPI applications. I'm trying to understand the trade-offs between clicking the "Require SSL" checkbox in IIS & using a URL Rewrite module vs. using a RequireHttpsAttribute in my global filters and modifying my web.config.
I've found the following guides detailing each approach:
https://webmasters.stackexchange.com/questions/28057/iis-7-require-ssl-automatically-redirect-to-https
http://tech.trailmax.info/2014/02/implemnting-https-everywhere-in-asp-net-mvc-application/
Explaining the mechanisms would be lengthy, so I will just list the most significant differences in behaviour:
"Require SSL" in IIS:
The name basically explains what it does: it's "Require", not "Enforce". If someone tries to access your website's content over http, the server simply responds with a 403 error. That's usually not the desired behaviour, but it can help in certain situations.
Using the URL Rewrite module:
The module itself can do quite a few different things, but I assume you just want the regular https redirect. That means if a user tries to hit ANY content on the site over http, the server responds with a 301 or 302 redirect to the https version of the same URL. This is usually a good option since it doesn't affect the usability of the website.
Global RequireHttpsAttribute action filter: This does something similar to option 2: it issues a 302 redirect for any http request that hits an ACTION. The main difference is that it only applies to actions in your controllers, so if someone requests just an image or a CSS file over http, this option lets it through without any enforcement. That leaves you the ability to serve static content over http, which can be useful in some specific circumstances.
One extra thing worth mentioning: 301 and 302 redirects don't play well with http POST, so if a user tries to POST over http, the request body will be lost (thanks to the info from @ChrisPratt).
Typically the folks managing the infrastructure are responsible for making sure things are on https. Typically they aren't very good at this, so that is where RequireHttpsAttribute kicks in: it can enforce https requests at the code level.
In practice it isn't so great, as many production setups -- including stackoverflow.com's -- terminate https at an edge device and hand the request to the back-end apps as plain http, and the RequireHttps attribute isn't quite nuanced enough to understand this distinction.
The best bet in general is to configure the edge device providing the public http interface to take HTTPS and only HTTPS. Then set up secondary virtual sites [or whatever is vendor appropriate] to redirect all traffic to the canonical HTTPS URL. I'd be a bit nervous about relying on RequireHttpsAttribute unless it's a small app handling its own requests. That still leaves holes for static files and anything else that isn't served by a controller.

How to prevent unauthorized HTTP requests?

I have some code in my iOS app like this:
NSURL *url = [NSURL URLWithString:@"http://urltomyapp.com/createaccount"];
ASIFormDataRequest *createAccountRequest = [ASIFormDataRequest requestWithURL:url];
[createAccountRequest setPostValue:email forKey:@"email"];
[createAccountRequest setPostValue:password forKey:@"password"];
[createAccountRequest startAsynchronous];
In my server implementation, I simply take this information via self.request.get('email') and create an account, without doing any checks. However, it seems that anyone could run the above piece of code easily (all you'd need to do is copy it into your own app, right?). All they'd need to know is the server address; they could attach any data they want to the request, and the server would go ahead and create an account for them.
How would I authorize requests to know that they are coming from my app and my app only? Is this a common concern? How do other products protect against this?
First, a disclaimer. I am certainly not a web expert, nor am I a security expert. In fact, the only reason I'm answering at all is because of the discussion in stackmonster's reply.
However, I do know that intercepting an SSL connection is exceptionally easy, especially if the user is complicit.
In general, though, I think the following is of some benefit.
You have to determine who/what you are trying to protect. If you just want to protect the data in the communication between the app and the server, https will be just fine. External snooping will be as effective (or ineffective) as snooping other SSL traffic.
However, if you are trying to protect your API (which your question seems to suggest), it is trivial for a user to see what commands you are sending (as you yourself found out by using Charles).
So, do you want to prevent anyone from knowing the details of your API? Do you want to just prevent DoS attacks, or only let valid users issue commands, or what?
You can then worry about authentication and authorizations (two different topics). Maybe validating that the request comes from a known entity is enough.
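For illustration, if validating that requests come from a known entity turns out to be enough, one common pattern is to give the app and the server a shared secret and have the app sign each request with it. A rough Rails-side sketch (the header name, controller, and secret handling are hypothetical, and remember that a determined user can still pull the secret out of the app binary):

require 'openssl'

class ApiController < ApplicationController
  SHARED_SECRET = ENV['APP_SHARED_SECRET']

  before_filter :verify_signature

  private

  # Recompute the HMAC of the request body and compare it with the
  # signature the client sent; reject the request if they differ.
  def verify_signature
    sent     = request.headers['X-App-Signature'].to_s
    expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new,
                                       SHARED_SECRET, request.raw_post)
    head :unauthorized unless sent == expected # use a constant-time compare in production
  end
end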
Anyway, it is extremely difficult to give guidance because you first have to decide what your networking privacy goals are.
Then, if they are lofty, you are in for a lot of reading.
At some point, though, you have to decide what is crucial to your app/business, and what is not. Just like any good software design, then create a set of requirements. Then, prioritize them in some order (e.g., mandatory, essential, nice to have, can live without).
That will tell you if you need additional security, and what kind.
Most, however, find that it's not worth the time and investment to even lock all the doors and bar the windows (not to mention protecting the chimney, adding concrete to the walls, floors, and ceilings, building a safe-room, and hiring armed guards).
Use HTTPS and put a cert inside your app to verify the client is allowed to talk to your server.
But trust me, it's really not worth all that. Using HTTPS is generally OK on its own.

How to block an ip address from linode?

I have a Rails application running on a Linode server. Some guy is continuously spamming (writing bullsh*t on my site). Can anybody tell me how to block that person's IP? Any other help would be appreciated.
I would suggest you don't only look into blocking that one person but rather into making sure this can't happen again.
Spam usually originates from bots that randomly fill their marketing message into input fields on pages they encounter.
If you block one, another will find your page and continue.
The only ways to prevent this kind of automated spam I know of are either using some sort of CAPTCHA or by securing your site through authentication.
There are some very nice captcha gems like reCaptcha; look around in the captcha category on the Ruby Toolbox and you should be up and running soon.
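For a rough idea of how quickly the gem route can be wired up: with the recaptcha gem you drop recaptcha_tags into the form and guard the create action with verify_recaptcha (API keys are configured per the gem's README; the Comment model here is just a placeholder):

def create
  @comment = Comment.new(params[:comment])
  if verify_recaptcha(:model => @comment) && @comment.save
    redirect_to @comment
  else
    # captcha failed or validation errors: re-render the form
    render :new
  end
end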
If it's really a person who is annoying you by writing bad stuff on your site, then, while not ideal, an IP block is easily set up through Apache. Just put the following into your VirtualHost file inside the <Directory> node and then enable the mod_authz_host module through a2enmod authz_host:
Deny from 192.168.205
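Alternatively, if you'd rather keep the block inside the Rails app while you sort out the server config, a minimal sketch could look like this (the address and filter name are placeholders):

class ApplicationController < ActionController::Base
  BLOCKED_IPS = %w[192.168.205.12]

  before_filter :reject_blocked_ips

  private

  # Refuse to serve anything to addresses on the block list.
  def reject_blocked_ips
    if BLOCKED_IPS.include?(request.remote_ip)
      render :text => 'Forbidden', :status => :forbidden
    end
  end
end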
You can do this in the web server config file; here is an example for Nginx:
http://www.cyberciti.biz/faq/linux-unix-nginx-access-control-howto/

A workaround for SSL on Heroku

Got an app running great on Heroku; the only issue is that their custom-domain SSL solution is way expensive (http://docs.heroku.com/ssl), leaving piggybacking on their *.heroku.com as the only viable option. The good thing is that my app only requires SSL for a couple of pages (for ordering). Right now, I use "ssl_required" in my controller for those couple of actions. Any idea how to create a before_filter that would bump the user to https://myapp.heroku.com just for those two actions and redirect to http://www.myapp.com for anything else? Ugly ugly, but it seems like the best way to go for now.
You could hack/monkey-patch the SSL Requirement plugin (github.com/rails/ssl_requirement) so that it redirects to different hosts.
BTW, if you plan to host multiple applications, they can share one multi-domain certificate (and one pricey SSL add-on). Here's a more detailed description: http://wojciech.oxos.pl/post/277669886/save-on-herokus-custom-ssl-addons
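If you'd rather not patch the plugin, a hand-rolled before_filter along the lines the question describes could look roughly like this (host names are taken from the question, the action list is a placeholder, and it assumes Rails 3, where request.fullpath is available):

class ApplicationController < ActionController::Base
  SSL_HOST     = 'myapp.heroku.com'
  NON_SSL_HOST = 'www.myapp.com'

  before_filter :enforce_ssl_host

  private

  # The couple of ordering actions that actually need SSL (placeholder list).
  def needs_ssl?
    controller_name == 'orders' && %w[new payment].include?(action_name)
  end

  def enforce_ssl_host
    if needs_ssl? && !request.ssl?
      redirect_to "https://#{SSL_HOST}#{request.fullpath}"
    elsif !needs_ssl? && request.ssl?
      redirect_to "http://#{NON_SSL_HOST}#{request.fullpath}"
    end
  end
end

The usual caveat applies: redirects like this only really work for GET requests, so keep any forms that POST sensitive data pointed at the SSL host.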

Ruby on rails (based on Mephisto) - Unable to contact server

I am completely new to Ruby and I inherited a Ruby system for a product catalogue. Most of my users are able to view everything as they should, but overseas users (specifically in Mexico) cannot contact the server once logged in. They are active users. I'm sorry I cannot be more specific, and the system is private so I cannot grant access.
Has anyone had any issues similar to this before? Is it a user-end issue or a system error?
Speaking as somebody who regularly ends up on your user's side of the fence, the number one culprit for this symptom is "Clueless administrator". There are many, many sites which generically block either large blocks of IP space or which geolocate and carve out big portions of the world.
For example, a surprising number of American blogs block Asian countries (including Japan) out of a misplaced effort to avoid DDoS attacks (which probably actually originated in Russia or China but, hey, this species of administrator isn't very good at fine-tuning solutions). I have to hop over to my American proxy server to access those sites.
So the first thing I'd do to diagnose your problems is to see whether your Mexican users are making it to the server at all, or whether they're being blocked somewhere earlier (router? firewall? etc). Then, to determine whether the problem is on your end or their end, I'd try to replicate the issue with you proxying your connection through a Mexican proxy and repeating the actions they took to cause the issue.
The fact that they get blocked after logging in could indicate that you have https issues, for example with an HTTPS accelerator installed [1], or it could be that your frontend server is serving the static content properly but doing the checking on dynamic requests only.
[1] We've seen some really weird bugs at work caused by a malfunctioning HTTPS accelerator.
If it's working for everyone else, then it would appear that the problem is not with Ruby or Rails themselves, since they are working...
My first thought would be to check for a network issue: are the Mexican users all behind the same proxy server and/or firewall?
Is login handled within the Rails application or via some other resource? Can you see any evidence that requests from Mexican users are reaching your web server at all?
Login is handled by the Rails app. I'm currently trying to hunt down the logs; it's taking some time as, again, I am new to this system.
Cheers guys
Maybe INS is cracking down on cyber-immigration.
