Is anyone providing their own routing mesh with a reverse proxy back to Heroku as a backend? - ruby-on-rails

I'm interested in having a Heroku backend, i.e. application server, database server, workers, add-ons, etc.
I would like to provide my own routing mesh, though, and reverse proxy with Nginx and a Unicorn server for a Rails app.
Does this seem possible, before I give it a shot? I would assume that I could just provide my app's Heroku domain in the upstream directive?
Thanks!

Heroku does not provide a way to configure this in their reverse proxy, but you can run your own, e.g. with nginx-buildpack (see also this answer).
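To address the upstream question directly, a minimal sketch of a self-hosted nginx front end proxying back to a Heroku app could look like this (myapp.herokuapp.com and www.example.com are placeholder names, and note that nginx resolves the upstream hostname at startup):

server {
    listen 80;
    # your own front-end domain (placeholder)
    server_name www.example.com;

    location / {
        # Heroku's router selects the app by the Host header, so it must
        # carry the app's herokuapp.com name, not your own domain.
        proxy_set_header Host myapp.herokuapp.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://myapp.herokuapp.com;
        # send SNI so Heroku's TLS endpoint serves the right certificate
        proxy_ssl_server_name on;
    }
}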

Related

Ruby on Rails Deploy - Is nginx necessary?

I've successfully deployed my Rails application to Digital Ocean by configuring a git post-receive hook and running my puma server through screen (screen rails server).
It seems to be working and accessible at http://178.128.12.158:3000/
Do I still need to implement nginx? My only purpose is to serve my API and a CMS website from the same domain.
And what about deployment tools like Capistrano/Mina? Why should I care about them if the git hook is serving me well?
Thank you in advance
If you're going to handle a large amount of traffic, nginx will help with load balancing. You can also add constraints, such as blocking certain sets of IP addresses, etc.
For more, refer to the following link: https://www.nginx.com/resources/glossary/application-server-vs-web-server/
If you want static resources to be served by a web server, which is often faster, you'll want to front your Rails app with something like nginx. Nginx will also offer a lot more flexibility for tuning how you serve your app.
Capistrano is for deployments, and again, is more flexible than the basic hook approach. For instance, if you intend to have different hosts (for db, web, assets, etc.), or multiples of them, then Cap is your friend.
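To make the static-file point concrete, here is a minimal nginx sketch in front of the Puma process from the question (the server_name and root path are assumptions):

server {
    listen 80;
    # hypothetical domain and path to the Rails public/ directory
    server_name example.com;
    root /home/deploy/myapp/public;

    # Serve precompiled assets and other static files directly from nginx;
    # anything else falls through to the Rails app.
    try_files $uri/index.html $uri @puma;

    location @puma {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # the Puma server from the question
        proxy_pass http://127.0.0.1:3000;
    }
}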

What is the best practice for Nginx/ELB/Unicorn architecture on AWS?

We have a RoR application in AWS Beijing. AWS Beijing does not have Route 53 (we can't use an Alias record to point the apex domain at the ELB), so we must use a front-end server running Nginx in front of the ELB.
Our architecture currently looks like this:
Front-end (Nginx) -- ELB --- App-(1~n) (Nginx--Unicorn)
We noticed the following in the Unicorn documentation:
"Unicorn must never be exposed to slow clients, as it will never ever use new-fangled things like non-blocking socket I/O, threads, epoll or kqueue. Unicorn must be used with a fully-buffering reverse proxy such as nginx for slow clients."
So my questions are:
1. Do we need nginx on the app servers in front of Unicorn?
2. If we remove nginx from the app servers, can the nginx on the front-end server provide the slow-client buffering that Unicorn needs?
I would recommend replacing the ELB with HAProxy in this scenario, where you don't have the Alias feature from Route 53 to point to your apex domain. Putting an Nginx instance in front of the ELB doesn't seem like a good idea, because you are adding a new layer just because you can't reference the ELB in Route 53. You also lose the benefit of high availability by putting a single Nginx instance in front of the ELB.
My suggestion is that you keep one instance of Nginx on each of your app servers in front of Unicorn and use HAProxy as the load balancer: HAProxy > [Nginx > Unicorn]. With a simple HAProxy setup you don't get the same availability as the ELB, but you can set up a highly available configuration if needed.
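For reference, a bare-bones HAProxy sketch of that layout might look like the following (the IP addresses and health-check path are assumptions):

frontend www
    bind *:80
    mode http
    default_backend app_servers

backend app_servers
    mode http
    balance roundrobin
    # hypothetical health-check endpoint served by the Rails app
    option httpchk GET /health
    # each app server runs Nginx > Unicorn
    server app1 10.0.0.11:80 check
    server app2 10.0.0.12:80 check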
1) Nginx must always be in front of Unicorn, because Unicorn can't deal with slow clients efficiently; its workers just get blocked by those clients.
2) Never talk to Unicorn over the network; that means each app server needs to have its own Nginx. Nginx as a load balancer is also way better than the ELB black box.
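A minimal sketch of the per-app-server nginx piece from point 2, assuming Unicorn listens on a local unix socket (the socket path is a placeholder):

upstream unicorn_app {
    # Matches a hypothetical `listen "/var/run/unicorn/app.sock"` in unicorn.rb;
    # fail_timeout=0 makes nginx retry immediately if a worker is restarting.
    server unix:/var/run/unicorn/app.sock fail_timeout=0;
}

server {
    listen 80;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # nginx fully buffers the client request, so Unicorn workers
        # never wait on slow clients.
        proxy_pass http://unicorn_app;
    }
}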

How to view neo4j database on the hosted linode server

I am running a standalone neo4j database server at localhost:7474 on a Linode instance.
Is there any way to view this in the browser?
If you have SSH access to the Linode instance then you can run ssh -L 7474:localhost:7474 youruser@123.123.123.123, which will tunnel the remote port 7474 to local port 7474. In your browser you can now use http://localhost:7474 to see the remote server without opening anything to the world.
You want what's called a "reverse proxy". Outside of your box, you can't talk about localhost:7474 as a hostname. So you want an external facing web server that "proxies" requests and sends them to localhost:7474.
One such option is Apache mod_proxy used as a reverse proxy. Examples of how to use it are behind the link. In general it's going to boil down to a pair of configuration directives that look something like:
ProxyPass /neo4j http://localhost:7474
ProxyPassReverse /neo4j http://localhost:7474
You also really want to read the documentation on securing the neo4j server.
WARNING - neo4j's web interface will let you do just about anything without authentication, including delete all of your data, change it, put new data in, and so on. It is a very bad idea to expose that functionality to the entire internet. So if you use a reverse proxy as suggested above, make sure you add some authentication layer (again, you can do this with Apache and mod_proxy) to prevent just any random person from connecting to your instance and deciding to trash it.
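A hedged sketch of what that could look like in Apache (requires mod_proxy, mod_proxy_http, and mod_auth_basic; the htpasswd path is an assumption):

<Location /neo4j>
    # Basic auth so the neo4j browser is not open to the world.
    AuthType Basic
    AuthName "neo4j"
    # Created with: htpasswd -c /etc/apache2/neo4j.htpasswd youruser
    AuthUserFile /etc/apache2/neo4j.htpasswd
    Require valid-user

    # Forward /neo4j to the local neo4j server and rewrite redirects back.
    ProxyPass http://localhost:7474
    ProxyPassReverse http://localhost:7474
</Location>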

Add a reverse proxy to heroku

I have a Rails app running on Heroku at, e.g., myapp.herokuapp.com.
Now I want to reverse proxy from myapp.herokuapp.com/proxy/ to somewhereelse.com/ (i.e. myapp.herokuapp.com/proxy/stuff is reverse proxied to somewhereelse.com/stuff).
Is that possible on Heroku? How to achieve this?
For anyone coming to this question through a search, this can be done.
Check out https://github.com/ryandotsmith/nginx-buildpack to vendor nginx into your Heroku instance. This will place nginx in front of your Rails app and allow you to reverse proxy requests on this domain, so your Heroku app keeps serving the rest of the domain while requests like /proxy/stuff go to somewhereelse.com/stuff.
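With that buildpack in place, the relevant part of the vendored nginx config could look roughly like this sketch (somewhereelse.com comes from the question; everything else is an assumption):

location /proxy/ {
    # The trailing slashes make nginx strip the /proxy/ prefix, so
    # /proxy/stuff is forwarded to somewhereelse.com/stuff.
    proxy_set_header Host somewhereelse.com;
    proxy_pass http://somewhereelse.com/;
}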
You don't have access to the front-end routing infrastructure, so it's not possible to add something like nginx location-based reverse proxying or Apache's mod_proxy there. From my understanding you can also only bind to one port (the $PORT) within the dyno, so it's not possible to shadow your Rails app with your own vendored version of nginx (unless it is possible to communicate over a non-TCP/IP socket between nginx and your Rack/Rails app; if that's the case then perhaps you can get Rack to listen on /tmp/mysocket.git and have nginx reverse proxy to that. This could be a non-starter though, I'm just throwing out ideas).
Which means the only probable option is to handle this yourself in your Rails app. I have only a tiny bit of Rails/Ruby experience, but if no proxy functionality exists in Rails, then perhaps you can explicitly accept the route and then use an HTTP client to invoke the other parts.

Deploying Rails and Nodejs

I wrote a real-time web app that consists of the following:
Rails to serve the web pages (listens on port 80)
Nodejs to handle real-time logic (listens on port 8888)
So on a particular page served by my rails app, the JS will use socket.io to establish a connection to my nodejs instance to allow real time http push.
Currently Nodejs communicates with Rails simply by updating the rails database. (I know this is ghetto but it works).
What are my options for deployment?
I have deployed simple web apps on heroku before and I really like the simplicity.
I have also deployed a web app with similar functionality (except it's made up of django + nodejs). I used HAProxy to do reverse proxying to handle direction of traffic to the correct process on my machine. However, I deployed this on a VPS server instead.
Note: the ugliness will probably revolve around:
I am relying on a common db
These processes are listening on different ports
We had this exact issue. We deployed them to separate Heroku applications, but kept them within the same code base. http://techtime.getharvest.com/blog/deploying-multiple-heroku-apps-from-a-single-repo outlines how to do it.
1. Manually set the buildpack.
2. Set a config variable that you can reuse in step #3.
3. Create a custom web script that your Procfile uses.
The custom script in bin/web:
#!/bin/bash
# Boot either the Rails app or the Node app, depending on the
# RAILS_DEPLOYMENT config variable set in step 2.
if [ "$RAILS_DEPLOYMENT" == "true" ]; then
  bundle exec rails server -p $PORT
else
  node node/index.js
fi
And the Procfile:
web: bin/web
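For step 2 and the script above, the setup would be roughly the following (the app names are placeholders, and the script must be executable):

chmod +x bin/web
heroku config:set RAILS_DEPLOYMENT=true --app my-rails-app
heroku config:set RAILS_DEPLOYMENT=false --app my-node-app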
I would consider setting these two applications up as separate Heroku applications on different subdomains and just having them both on port 80. The communication between them goes through a shared database so they don't need to reside on the same machine or even datacenter. Socket.io supports cross domain requests on all browsers, so that shouldn't cause any problems.
