How to proxy files from a firewalled server through a Rails application

I have a Rails application running behind Nginx which needs to serve files for download from another internal server. The internal server uses a dynamic URL to generate the file for download, so it isn't a static file sitting in a folder. Both the Rails server and the server with the files are on the same LAN, but only the Rails server is open to the public on port 80.
Additionally, the files I want to serve are anywhere from 5 GB to 200 GB, so I don't want to tie up a Rails process for the whole download if that is possible. Is there a way to do this with Net::HTTP + send_data? Or perhaps some kind of Nginx proxy rule?
From inside the LAN you can download a file with a URL like this:
http://username:password@192.168.0.5/export?uuid=1234567890
The problems are: 1) there is no access control on that URL; with the user/pass you can download any file you want by passing its uuid parameter, and 2) the server is only accessible from the LAN.
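For context, the naive in-app approach the question alludes to (Net::HTTP streaming into the response) would look something like the sketch below, assuming a Rails version with ActionController::Live. It works, but it occupies a Rails worker for the entire multi-gigabyte transfer, which is exactly what I want to avoid:

require 'net/http'

# Hypothetical controller that streams the internal file through Rails itself
class DownloadsController < ApplicationController
  include ActionController::Live

  def show
    uri = URI("http://192.168.0.5/export?uuid=#{params[:uuid]}")
    req = Net::HTTP::Get.new(uri)
    req.basic_auth 'username', 'password'
    Net::HTTP.start(uri.host, uri.port) do |http|
      http.request(req) do |upstream|
        response.headers['Content-Type'] = upstream['Content-Type']
        # copy the body through in chunks instead of loading it into memory
        upstream.read_body { |chunk| response.stream.write(chunk) }
      end
    end
  ensure
    response.stream.close
  end
end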

I figured out the answer to this question by following the tutorial here: http://kovyrin.net/2010/07/24/nginx-fu-x-accel-redirect-remote/
The idea is to have the Rails app authorize the request and then hand the actual transfer off to Nginx with an X-Accel-Redirect header pointing at an internal proxy location. To handle the HTTP Basic authentication, you need to add this line to your Nginx config:
proxy_set_header Authorization "Basic BASE64_USER_PASS";
where BASE64_USER_PASS is the base64 encoding of your username and password in the format "user:pass".
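For reference, here is a minimal sketch of what that setup can look like, based on the approach in the linked tutorial. The location name, internal host, and controller details are illustrative assumptions, not the tutorial's exact config:

# nginx: internal-only location, reachable via X-Accel-Redirect, never directly
location /proxy_download/ {
    internal;
    # /proxy_download/export?uuid=... is forwarded as /export?uuid=...
    proxy_pass http://192.168.0.5/;
    proxy_set_header Authorization "Basic BASE64_USER_PASS";
    # stream multi-gigabyte bodies instead of spooling them to temp files
    proxy_buffering off;
    proxy_max_temp_file_size 0;
}

# Rails controller (sketch): authorize the request, then let nginx do the transfer
class DownloadsController < ApplicationController
  def show
    uuid = params.require(:uuid)    # check the user's permissions here
    response.headers['X-Accel-Redirect'] = "/proxy_download/export?uuid=#{uuid}"
    response.headers['Content-Disposition'] = %(attachment; filename="#{uuid}.bin")
    head :ok                        # nginx takes over from here
  end
end

This way the Rails worker is busy only for the authorization step; nginx streams the body from the internal server to the client.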

Related

HTTP website behind HTTPS Let's Encrypt NGINX Route

I can't figure out whether this is common practice or not. I want to create a website (running in a container) and have traffic forwarded to it from a wildcard on my domain, and I want to secure it using Nginx Proxy Manager and Let's Encrypt to manage the certificate.
Do I keep the website running on my internal server as plain HTTP on port 80 and redirect traffic to it via Nginx? My current site is just a server-side Blazor web app.
I've seen other people do this, but it makes me wonder whether it is actually secure, since at some point between Nginx and the internal server the traffic is not encrypted.
I imagine it looks something like this:
Client connects securely to Nginx Proxy Manager (HTTPS)
Nginx Proxy Manager then decrypts and forwards to the Internal Website (HTTP)
Is my understanding correct?
Is this common practice, or is there a better way to achieve what I want?
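For what it's worth, the flow described above is plain TLS termination at the proxy, which a hand-written nginx config would express roughly like this (hostname, certificate paths, and the backend address are placeholder assumptions; Nginx Proxy Manager generates the equivalent for you):

server {
    listen 443 ssl;
    server_name app.example.com;

    # certificate managed by Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # TLS ends here; the hop to the backend is plain HTTP
        proxy_pass http://10.0.0.5:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Whether the unencrypted hop matters depends on how much you trust the network between the proxy and the backend; on a single host or a private LAN this is very common practice.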

Using mruby, ngx_mruby and redis - Applying on current production server

I am very afraid of making modifications on the server, because the server is working fine with the current settings.
Let me explain: the server is an Amazon EC2 instance. On this instance I have:
ruby -v: ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]
rails -v: Rails 4.2.3
nginx -v: nginx/1.8.0
passenger -v: Phusion Passenger version 5.0.10
I have one load balancer, which has a listener:
Load Balancer Protocol: HTTPS
Load Balancer Port: 443
Instance Protocol: HTTP
Instance Port: 80
SSL Certificate: Using a certificate issued on Amazon Certificate Manager. I have the domain and all sub-domains (wildcard).
These settings allow me to:
Have the main domain for the app:
www.testname.com and testname.com to use as institutional pages (About, Price, Terms, etc.);
app.testname.com for users to use the system;
Have as many subdomains as I want, because EACH USER has a specific page:
user1.testname.com
user2.testname.com
user3.testname.com
etc.
Everything is dynamic. The user registers on the app and gets a subdomain. On this subdomain, the user can access the site via https://. It works fine.
Users WANT to use their own domains, of course. This part is easily resolved. I create a CNAME record on the custom domain, pointing to our subdomain, like this:
usercustomname.com CNAME TO user1.testname.com
It works fine. BUT the big problem is: https:// does not work on the custom domain, obviously. Our certificate covers only the domain testname.com and its subdomains.
With Amazon Certificate Manager I can import custom certificates. And then, using the awesome rails-letsencrypt gem, I can generate Let's Encrypt certificates for the custom domain names.
But the Amazon load balancer's HTTPS listener allows only one certificate! This is very bad, because I can have a lot of certificates but use only one on the whole server.
Recently, Amazon released support for multiple certificates on Application Load Balancers using SNI. I could migrate my Classic Load Balancer to an Application Load Balancer, but this does not solve the problem, because the limit is 25 certificates per load balancer. That is very low.
The solution I found is to create an Amazon ElastiCache instance to run a Redis server, and then use ngx_mruby to fetch the certificate. I planned it like this:
Change the HTTPS listener like this:
Instance Protocol: HTTPS
Instance Port: 443
Remove the certificate issued in Amazon Certificate Manager
Install mruby
Install ngx_mruby
Using the rails-letsencrypt gem, create one certificate for each institutional subdomain (app, www, empty subdomain) AND one certificate for each user subdomain.
When a certificate is created, the rails-letsencrypt gem can save it in Redis.
With ngx_mruby listening on port 443, the certificate for the requested domain is picked up from Redis.
Apparently, this will work. The logic seems right, but I do not know in practice; a sketch of the idea is below.
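For illustration, ngx_mruby's SSL handshake hook can pick a certificate per SNI hostname roughly like this sketch. It assumes the certificates are stored in Redis as PEM strings under hostname-derived keys; the key layout and addresses are my assumptions:

server {
    listen 443 ssl;
    server_name _;

    # a dummy cert/key pair is required at startup; it is replaced per handshake
    ssl_certificate     /etc/nginx/ssl/dummy.crt;
    ssl_certificate_key /etc/nginx/ssl/dummy.key;

    mruby_ssl_handshake_handler_code '
      ssl = Nginx::SSL.new
      host = ssl.servername
      redis = Redis.new "127.0.0.1", 6379
      # look up the PEM data saved for this hostname
      ssl.certificate_data     = redis.get "#{host}.crt"
      ssl.certificate_key_data = redis.get "#{host}.key"
    ';
}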
My questions are:
1) To install mruby, I will follow these steps. After installing, will it impact the current Ruby installation? Will I need to change the application code already developed as a result of installing mruby?
2) Will using Redis affect anything on the current server? Despite the $12/month increase in the Amazon bill, I believe that using Redis will not influence the current server at all.
3) Do you think what I planned will work around the Amazon certificate limit?
Sorry for the long text. I'm not a server specialist. This is the only server I have, AND it has no backup. I'm afraid of breaking the server with no way to fix it.
Thanks, and I appreciate any help :)
EDIT 1
Using ngx_mruby and Redis with the Amazon Classic Load Balancer will not work, because the HTTPS listener requires one certificate. So even if I generate the certificates and connect ngx_mruby with Redis, the load balancer will respond with the default domain's certificate before any of that runs.
But I found a way (it works):
All customers URL have this structure:
customer1.myapp.com
customer2.myapp.com
customer3.myapp.com
All requests go through the load balancer's HTTPS listener, and there is no way to use multiple SSL certificates on a Classic Load Balancer. So I did this:
Register another domain, like myapp.net
Using Amazon Route 53, create another hosted zone and point the domain's DNS records to it
In Amazon Route 53, create these records:
Type A pointing to the instance IP
Type CNAME with name * and value myapp.net
I set up my Rails app to recognize the domain myapp.net. With this, accessing customer1.myapp.com AND customer1.myapp.net reaches the same resource, BUT customer1.myapp.com goes through the HTTPS listener on the load balancer and customer1.myapp.net does not.
I just save the SSL CERTIFICATE generated by the gem in the folder /etc/nginx/ssl/ and then create a virtual host in NGINX. After that, it FINALLY WORKS!
Now I have to discover HOW TO SAVE the certificate into that folder and HOW TO CREATE a virtual host in NGINX, using RAILS. The manual process is described in another question of mine.
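The manually created virtual host referred to above might look roughly like this (paths and hostname are placeholder assumptions; this setup uses Passenger, as on my instance):

# /etc/nginx/sites-enabled/customer1.myapp.net (sketch)
server {
    listen 443 ssl;
    server_name customer1.myapp.net;

    # certificate files written out from the rails-letsencrypt gem
    ssl_certificate     /etc/nginx/ssl/customer1.myapp.net.crt;
    ssl_certificate_key /etc/nginx/ssl/customer1.myapp.net.key;

    # hand the request to the Rails app
    root /var/www/myapp/public;
    passenger_enabled on;
}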
You need to recompile your Nginx to support ngx_mruby; we don't recommend the dynamic module feature for now, because there is no established use case for it among people using ngx_mruby. Your system's Ruby version will not be changed.
If you enable Redis as a cache for your Rails app, it may influence your website. But if you only create a new ElastiCache instance, there is no other side effect for you. And I think you will get better results with ElastiCache than hosting Redis yourself.
I haven't tried it, but it may work. Maybe others can answer that part of your question.

Rails. Getting PORT and HOST in Config file

By default, Rails uses localhost:3000 in development mode. This is not written in any of the project's config files. I am currently trying to edit the ./config/environments/development.rb file to set up CORS.
There is a host_and_port method which may be used in controllers to get the HTTP request's HOST value as defined in its header (correct me if I am wrong).
I could write my host:port into the config files manually and change it whenever my development host and port change... but I want to reconfigure my development environment as rarely as possible, so I need to access the host and port settings from within the config files.
So... how do I access my HOST and PORT in config files?
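One common workaround, sketched below, is to drive both the server and the config from the same environment variables, since the running server's binding isn't available when the config files load. The variable names are illustrative assumptions:

# config/environments/development.rb (sketch)
host = ENV.fetch("DEV_HOST", "localhost")
port = ENV.fetch("DEV_PORT", "3000")

Rails.application.configure do
  # reuse the same values anywhere a host/port is needed
  config.action_controller.default_url_options = { host: host, port: port }
  config.action_mailer.default_url_options     = { host: host, port: port }
end

# then start the server with matching values, e.g.:
#   DEV_PORT=4000 rails server -p 4000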

Hosting a rails app inside another website's directory

I have a PHP site on my server; suppose it is example.com.
And I want to run a Rails app inside the domain, like example.com/rails.
I am using nginx and tried editing example.com's host config, proxying the /rails location to a unicorn upstream, but that did not work.
Is it possible to do something like example.com/rails?
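In case it is useful, serving a Rails app under a sub-path usually takes two pieces: an nginx location that forwards /rails to the unicorn socket, and telling Rails it is mounted under that prefix. A sketch, with the socket path and app path as assumptions:

# nginx: the upstream lives at http level, the location inside example.com's server block
upstream rails_app {
    server unix:/var/www/railsapp/tmp/unicorn.sock fail_timeout=0;
}

server {
    server_name example.com;
    # ... existing PHP configuration ...

    location /rails {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rails_app;
    }
}

The Rails side also needs to generate URLs under /rails, e.g. by starting unicorn with RAILS_RELATIVE_URL_ROOT=/rails set; without that, the app responds but its links and asset paths point at the site root.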

Serving Juggernaut 2 over pure HTTPS connection

I have a Ruby on Rails website on which I force all connections to use SSL. I need all connections from that site to use HTTPS as well. Also, Google Chrome will automatically switch to HTTPS even if I connect on another port.
This means that I cannot connect to
http://www.mysite.com:8080
I have to serve the Juggernaut JS file over HTTPS. But that doesn't work, since Juggernaut won't serve HTTPS instead of HTTP from its internal web server. So I copied the application.js file from the Juggernaut folder /usr/local/lib/node_modules/juggernaut/public/application.js into my Rails folder public/juggernaut and pointed the script tag in my HTML at that local copy instead.
Now I seem to be able to at least initiate a Juggernaut object. The problem arises when I actually start listening. I get this error:
Not found: https://www.mysite.com:8080/socket.io/1/?t=1340749304426&jsonp=0
So either I need to
a) be able to change it so Juggernaut's web server actually uses HTTPS instead of HTTP. This is preferable.
or
b1) fix Juggernaut so it doesn't try to access socket.io over port 8080, and
b2) add socket.io to my server, preferably under the www.mysite.com/juggernaut folder instead of the root.
Any ideas?
Thanks!
Might be a little late, but I was able to get it to work using this (my Juggernaut is hosted on Heroku):
var jug = new Juggernaut({
  secure: true,                                // connect over SSL
  host: 'yourHostHere',
  port: 443,                                   // standard HTTPS port, so Chrome doesn't complain
  transports: ['xhr-polling','jsonp-polling']  // polling transports that work over plain HTTPS
});
