Security: Rails + Nginx HTTPS, should I proxy_pass with https?

I successfully moved my Rails app to https with the following Nginx config:
upstream example_staging {
    server localhost:3000;
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://example_staging;
        proxy_read_timeout 90;
    }

    ssl_certificate /etc/letsencrypt/live/staging.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/staging.example.com/privkey.pem;

    # other configs ...
}
The good thing is that I did not have to change anything in Rails, since it is still receiving plain http requests.
But I'm wondering if that opens a security hole... for example, could the cookie/session encryption be compromised?
Should I do something like:
location / {
    proxy_pass https://example_staging; # with HTTPS
    proxy_read_timeout 90;
}
and let Rails know about the certificates, or am I fine as is? (That would be great, because it is simple, and it should be faster since there is no need for a second decryption.)

If the server is completely under your control and your upstream traffic never leaves the machine, there's no need to encrypt the hop between Nginx and Rails (I'd also advise switching to unix sockets for additional security and some performance).
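One thing worth adding either way: when Nginx terminates TLS and talks plain http (or a unix socket) to Rails, Rails only knows the request was secure if the proxy says so. Below is a minimal sketch of the unix-socket variant, with X-Forwarded-Proto set so that secure cookies, redirects and config.force_ssl behave correctly; the socket path is an assumption, not something from your setup (point it at whatever your Puma/Unicorn actually binds to):

upstream example_staging {
    # hypothetical socket path -- use your app server's actual socket
    server unix:/var/run/example_staging/puma.sock;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/staging.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/staging.example.com/privkey.pem;

    location / {
        proxy_pass http://example_staging;
        proxy_read_timeout 90;
        # tell Rails the original request was HTTPS
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}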

Related

Why is Nginx responding extremely slowly while accessing my Rails app (both running on the same Windows machine)?

I have both a working Rails 4 application (http://localhost:3000) and an Nginx server (http://localhost:80) accessible through the browser.
Nginx has been configured as a reverse proxy for my Rails 4 app, so that http://localhost actually reaches my Rails application at http://localhost:3000. This is working, but the web pages are displayed extremely slowly whenever I access the application through Nginx. I have configured Tomcat with the Apache Web Server in the past and never had this slowness problem, and Nginx is generally said to be much lighter and faster than Apache.
This makes me wonder whether I have configured my Rails app with Nginx correctly.
Modified nginx.conf
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:3000;
    }
    ...
    ...
}
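I can't see anything obviously wrong in that config, but one commonly suggested tweak on Windows (an assumption on my part, not verified against this exact setup) is that localhost may resolve to the IPv6 address ::1 first, which can add a noticeable per-request delay if the Rails server only listens on IPv4. Proxying to 127.0.0.1 explicitly and passing the usual proxy headers looks like this:

location / {
    # use the IPv4 loopback explicitly to avoid a slow localhost -> ::1 fallback
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}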

Faye-rails, nginx, passenger slow response from faye

I want to create a simple chat.
I am not a guru of server administration.
So I have a question about nginx and faye.
I use nginx + passenger for my production server. I have a droplet on DigitalOcean and want to deploy my application there.
For deployment I follow the official Passenger tutorial: https://www.phusionpassenger.com/library/install/nginx/install/oss/trusty/
As faye-rails says, if I use Passenger I need to use this configuration:
config.middleware.use FayeRails::Middleware, mount: '/faye', :timeout => 25, server: 'passenger', engine: {type: Faye::Redis, host: 'localhost'} do
  map '/announce/**' => SomeController
end
In development (localhost:3000) the chat works perfectly fast. But when I deploy it, it works very slowly (responses take anywhere from 5 to 60 seconds). I don't know how to fix it.
In my /etc/nginx/sites-enabled/myapp.conf I use this config:
server {
    listen 80;
    server_name server_ip;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /project_path_to_public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /ruby_wrapper_path;
}
Do I need to change my /etc/nginx/sites-enabled/myapp.conf, and if so, how? Or what else do I need to do?
I'm currently using Faye and Redis on an application I'm developing. This is not a direct solution to the question's current setup, but an alternative method that I have implemented. Below is my nginx configuration and then I have Faye running via rackup in a screen on the server.
/etc/nginx/sites-enabled/application.conf:
server {
    listen 80;
    listen [::]:80;
    server_name beta.application.org;

    # Tell Nginx and Passenger where your app's 'public' directory is
    root /var/www/application/current/public;

    # Turn on Passenger
    passenger_enabled on;
    passenger_ruby /usr/local/rvm/gems/ruby-2.2.1/wrappers/ruby;
    rails_env production;

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        expires 1y;
        add_header Cache-Control public;
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
}
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server 127.0.0.1:9292;
}

server {
    listen 8020;

    location / {
        proxy_pass http://127.0.0.1:9292/push;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
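A small, hedged note on top of the config above (not something the original setup relies on): the faye-rails middleware in the question long-polls with :timeout => 25, so if you tune proxy timeouts, keep proxy_read_timeout for the Faye location above that window. Nginx's default of 60s is already enough; the sketch below just makes it explicit:

location / {
    proxy_pass http://127.0.0.1:9292/push;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # keep this above Faye's 25s long-poll timeout
    proxy_read_timeout 60s;
}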
This link should provide a little insight into how it works.
https://chrislea.com/2013/02/23/proxying-websockets-with-nginx/
You can also reference the Faye github for some guidance on setting it up with Passenger.
Also, if you followed the DigitalOcean tutorials for the initial server setup and ended up enabling your firewall, please ensure you allow the ports you have Faye/websockets running on. (See the section on configuring a basic firewall in: Additional Recommended Steps for New Ubuntu 14.04 Servers.)
My alternative method involves running Faye in a separate screen on the server. A few commands you will need to manage screens on an ubuntu server are:
screen -S <pick screen name> (new screen)
screen -ls (lists screens)
screen -r <screen number> (attach screen)
to quit from a screen, ctrl + a THEN "d" (detach screen)
Once you have a new screen running, run the Faye server in that screen using rackup: rackup faye.ru -s thin -E production
As a note, with this option, every time you restart your DigitalOcean server (e.g. if you create a snapshot as a backup), you will need to create a new screen and run the Faye server again; running it as a daemon would be a better implementation to circumvent this (I merely haven't implemented it yet...). Head over to GitHub and look for FooBarWidget/daemon_controller.
Let me know if you have any other questions and I'll try to help out!

Error code: ssl_error_rx_record_too_long for https in nginx on ruby on rails application

I am using Rails 3.2 and Ruby 1.9 for my app, and I have to run the application over https with a domain name like https://welcome.com on my system. So I configured nginx by creating an SSL certificate for the domain name and https.
Snapshot of the ssl config:
# HTTPS server
#
server {
    listen 443 ssl;
    server_name welcome.com;

    root html;
    index index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_session_timeout 5m;
}
I am able to see the nginx home page by calling welcome.com and https://welcome.com without running the Rails application.
My application is also running on port 443 successfully, but after querying in the browser like https://welcome.com,
the Rails terminal shows this error:
ERROR bad Request-Line `\x16\x03\x01\x00�\x01\x00\x00�\
ERROR bad URI `._i\b8\x10�yA�^6�v�M|
The browser throws this error:
SSL received a record that exceeded the maximum permissible length.
(Error code: ssl_error_rx_record_too_long)
I even tried clearing the browser history repeatedly, but the result is the same.
I am not sure what I did wrong, can anyone help me?
Have I made a mistake in the certificate creation?
You can't have both listen 443 ssl; and ssl on; in that server block. Remove the ssl on; line and restart nginx.
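For reference, a sketch of the question's server block with that line removed (everything else unchanged):

server {
    listen 443 ssl;          # 'ssl' on the listen directive is sufficient
    server_name welcome.com;

    root html;
    index index.html index.htm;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_session_timeout 5m;
}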

Cheap SSL certification for an app hosted on Heroku

I have a Rails app on Heroku and I need to add an SSL certificate to it. In the Heroku add-ons section I see that it is possible to buy an SSL add-on, but the price is $20/month, which is $240 a year, and I cannot afford it at the moment.
Is there any cheaper way to get an SSL for a Heroku app?
We've installed our SSL certificate on a DigitalOcean.com instance running Nginx as a reverse proxy.
Trade-offs include a bump in latency and paying for bandwidth overages but those haven't been issues for us.
Here is a basic Nginx config similar to ours:
server {
    listen 80;
    rewrite ^ https://www.example.com$request_uri? permanent;
}

# HTTPS server
server {
    listen 443;

    ssl on;
    ssl_certificate /root/example.crt;
    ssl_certificate_key /root/example.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://example.herokuapp.com/;
    }
}
This is a basic example and could be made a little more secure (possibly forcing SSL in your app) but this gets you started.
This also gives you the opportunity to speed up your app by creating a cache or serving the app's static assets. You could upload your precompiled assets and have Nginx serve them like this:
location /assets/ {
    root /path-to/assets/;
    expires 1y;
    add_header Cache-Control public;
}
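On the "could be made a little more secure" remark above: if you reuse this config today, the TLS settings are the part that has aged the most. A hedged sketch of a more current baseline, assuming a reasonably recent nginx and OpenSSL (TLSv1.3 needs OpenSSL 1.1.1+):

    # replaces the SSLv3/RC4-era ssl_protocols and ssl_ciphers lines above
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers off;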
EDIT: July 2017
My, how things have changed. There are a lot of low/no cost solutions for this now. Cloudflare is a great option.

Memory Caching for ASP MVC apps running on Mono using Nginx

I have some https ASP MVC 2 web services that run on linux machines using Mono and Nginx.
How can I configure them to work with the Output Cache feature of ASP MVC?
using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    [HandleError]
    public class HomeController : Controller
    {
        [OutputCache(Duration=10)]
        public ActionResult Index()
        {
            return View();
        }
    }
}
I want to store the cache on the same machine that runs the web service.
I've tried adjusting my nginx configuration to include proxy_cache and redirect to another port on the same machine to run the original request, using much of the code found in this example. However, I've had no luck getting it to work.
Here's what I have:
proxy_cache_path /usr/local/nginx/proxy_temp/ levels=1:2 keys_zone=cache:10m inactive=10m max_size=250M;
proxy_temp_path /usr/local/nginx/proxy_temp/tmp;

server
{
    listen 443 ssl;
    server_name myserver.com;

    ssl_certificate /home/ubuntu/ssl/nginx_https.pem;
    ssl_certificate_key /home/ubuntu/ssl/nginx_https.key;

    location /
    {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass https://127.0.0.1:4430;
        proxy_cache cache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}

server
{
    listen 4430 ssl;
    root /var/www/mywebpage/;

    ssl_certificate /home/user/ssl/https.pem;
    ssl_certificate_key /home/user/ssl/https.key;

    location /
    {
        index index.html index.htm default.aspx Default.aspx;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}
If I add the line:
proxy_ignore_headers Cache-Control;
then the caching works; however, it starts caching everything. I only want it to cache the methods marked with the OutputCache attribute in my MVC app, and I'm not sure how to configure the Nginx cache to do that.
What is the proper way to couple the Nginx caching system with the Output Cache attributes of an ASP MVC app running on Mono?
Nginx caching and ASP.NET caching are two completely separate things. If you use [OutputCache] in your ASP.NET project, Nginx will not be aware of that. And vice versa, the proxy_cache_* directives in your nginx config will in no way affect the ASP.NET caching.
You should decide where to cache: either use nginx or use ASP.NET. If you want to use Nginx for caching, remove the [OutputCache] attributes or disable output caching via web.config altogether. Instead, create different locations for different cache zones in nginx, like this:
# for your Home controller, assuming you use the /home/ route
location /home/ {
    proxy_set_header Host $http_host;
    proxy_pass https://127.0.0.1:4430;
    proxy_cache cache;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
}

# for all other routes
location / {
    proxy_set_header Host $http_host;
    proxy_pass https://127.0.0.1:4430;
    # no proxy_cache here means no caching on the nginx side
}
You can create complex location sections and even use regular expressions to match your ASP.NET paths/routes. See the nginx docs for details.
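For example, a hedged sketch of a regex location that caches only a couple of specific controllers (the route names here are made up, not taken from your app):

# cache only /home/... and /products/... responses; everything else
# falls through to the plain "location /" block
location ~* ^/(home|products)(/|$) {
    proxy_set_header Host $http_host;
    proxy_pass https://127.0.0.1:4430;
    proxy_cache cache;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
}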
If you want to control caching from within ASP.NET, remove any proxy_cache* directives from the nginx configuration (like in the last location section in the example above) and use the regular ASP.NET caching attributes such as [OutputCache].
I'd recommend the nginx approach, as nginx is very fast and powerful, though it requires a little reading at first. Once you get to know it, you can use its reverse proxy feature to create powerful web caches, not only for ASP.NET but also for other web applications like Ruby, Node.js and so on.
