Distinguishing between nginx and thin - ruby-on-rails

This is a newbie question about nginx and Thin in the Rails environment. While reading and learning about Rails, I frequently hear about nginx and Thin being a great combination for a Rails site. Reading the descriptions of each, they both describe themselves as web servers, so I'm a little confused about what the combination brings to the table. If anyone could briefly describe what they are and how they complement each other, I would be greatly appreciative.
Thanks!

A typical small application deployment will have Nginx (or Apache) and a handful of Thin (or Mongrel, Unicorn, etc.) servers all running on one machine.
Nginx receives every request. It serves any static files (CSS, JS, images, cached pages) directly. If the request requires application processing, it hands the request off to a Rails process (Thin).
This way your (relatively) slow application servers are freed up from serving static files, and your web server is providing a sort of load balancing.
The benefit of Nginx/Thin over something like Apache/Mongrel is that Nginx and Thin can communicate directly over a unix socket, removing the overhead of going through the TCP/IP stack.
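To make that concrete, here is a minimal sketch of such a setup (socket paths and directories are hypothetical; the Thin instances would be started with matching --socket options):

    upstream thin_cluster {
        # three Thin instances listening on unix sockets
        server unix:/tmp/thin.0.sock;
        server unix:/tmp/thin.1.sock;
        server unix:/tmp/thin.2.sock;
    }

    server {
        listen 80;
        root /var/www/myapp/public;   # static files served straight from disk

        location / {
            # serve the file from disk if it exists, otherwise hand off to Rails
            try_files $uri @rails;
        }

        location @rails {
            proxy_set_header Host $host;
            proxy_pass http://thin_cluster;
        }
    }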

Thin is an application server while Nginx is a web server.
From http://www.javaworld.com/javaqa/2002-08/01-qa-0823-appvswebserver.html
The application server exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world).

Speaking out of ignorance (I've never used Thin), it is quite normal to mix nginx and an application server together, using nginx to serve up static content and act as a reverse proxy for the application server.
This makes it easy to blend ludicrously fast static content serving with the application server of choice (which varies between programming languages), all coming from the same address:port.

Related

Does Puma have something like Apache's "Location" tag?

I'm using Puma (version 3.11.0) as the web server for a Rails application (Rails version 5.1.4). I need the whole application to be SSL encrypted, but I need one particular route to also have the SSL "verify_mode" set to peer. In Apache, I would normally use a "Location" or "LocationMatch" block to configure the SSL options differently from the rest of the site.
How can I do the same thing with Puma?
I totally agree with #user3309314.
Exposing Puma directly to the internet (or any application server, for that matter) isn't a great idea.
Web servers (unlike application servers) are designed to be in the front, protecting application servers from the cruel world...
...and along the way, they should be the ones to handle SSL/TLS (along with DoS attacks and other annoying concerns).
So use nginx or apache to forward requests to your Ruby application(s) and if you need a special TLS/SSL rule for a specific path, do that with nginx or apache.
Puma doesn't (and IMHO shouldn't) support the feature you're asking about.
EDIT (some of the information given in the comments + explanations)
It's best to think of application servers as a "bridge" between the host machine's routing layer (nginx/apache) and the applications.
It's the host routing layer (nginx/apache) that filters and routes certain host names and paths to certain applications (or the same application with different headers / variables / requirements).
The application server's job is to simply "bridge" between the host routing layer and the actual application, translating between the different data formats (HTTP data to Ruby objects and back).
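To make the "bridge" idea concrete, here is roughly the whole contract a Ruby application server works against - the Rack interface (a minimal, self-contained sketch):

    # config.ru - runnable with any Rack server (Puma, Thin, Unicorn...)
    # A Rack app is just an object whose #call maps an HTTP request
    # (the env hash) to a [status, headers, body] triple.
    app = lambda do |env|
      [200, { 'Content-Type' => 'text/plain' }, ["You asked for #{env['PATH_INFO']}\n"]]
    end

    run app

Everything above that interface (TLS, host routing, path rules) is the web server's territory; everything below it is the application's.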
In order to support the feature you're asking about, the application server would have to perform the same functions as the host routing layer (routing the correct host name / path to the correct application with the correct changes).
This would violate the separation of concerns and add redundancy to the system, inflicting a performance penalty (not to mention a larger code base that duplicates the same task in different modules).
This is the reason why, IMHO, these features should not get coded into Ruby application servers.
It's unlikely that Puma supports this.
But you can configure Nginx or Apache as a reverse proxy so that requests are forwarded to the Puma application server, and configure the SSL options there as you need.
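As a hypothetical illustration of that split: nginx can terminate TLS and enforce peer verification for a single path. ssl_verify_client is only valid at the server level, so the per-path rule is expressed by checking $ssl_client_verify (all names and paths below are made up):

    upstream puma_app {
        server unix:/var/run/puma.sock;   # hypothetical Puma socket
    }

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/ssl/server.crt;
        ssl_certificate_key    /etc/nginx/ssl/server.key;

        # ask every client for a certificate, but don't reject anyone yet
        ssl_client_certificate /etc/nginx/ssl/ca.crt;
        ssl_verify_client optional;

        # only this route requires a verified peer certificate
        location /secure/callback {
            if ($ssl_client_verify != "SUCCESS") {
                return 403;
            }
            proxy_pass http://puma_app;
        }

        location / {
            proxy_pass http://puma_app;
        }
    }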

How to make requests to a local Rails app from a local AngularJS app running on a different port?

I'm developing a web application with an AngularJS frontend and a Rails backend.
My goal is to keep the two entirely separate, since the Rails app will ultimately be relegated to a simple REST server, and the AngularJS frontend will be just one of a few different clients that use the backend. For that reason, I don't want to integrate the frontend into the Rails asset pipeline, or similar.
The Rails app is running locally on port 3000. The frontend uses Gulp to compile static assets, and BrowserSync, which contains a development server that I'm running on port 3001.
For development purposes, how can I get the AngularJS app to talk to the Rails server, avoiding this browser error (which I'd expected) when making HTTP requests via Angular's $http service?
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:3001' is therefore not allowed access.
Do I just need to set up CORS in the Rails app? Is there another way - maybe some sort of local hosts file trick (I'm on a Mac, FWIW) or similar? I ask because my coworker is responsible for the Rails app (I have limited experience with Rails) but is out of town for a while; my knowledge is mostly limited to the AngularJS stack that I've set up.
I've searched and read up for about an hour, but haven't yet been able to find anything that I can grok that is applicable to my situation. I might not be aware of the best search terms.
Thanks for any help!
Ah, wonderful. Here are a few solutions:
1. Set up CORS, which shouldn't be too hard. All you have to do is have the Rails app add the Access-Control-Allow-Origin header to the responses your frontend requests cross-origin. The problem with this is that you shouldn't really be doing it unless you have a good reason (so see 3). There should be plugins/middleware that set the headers for you.
2. Serve the AngularJS pages from the Rails backend for now (using some kind of static route).
3. If you're thinking about production and want to sort this out for good, use nginx to proxy the two services into one single domain: http://nginx.org/en/docs/beginners_guide.html
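For option 1, a minimal sketch using the rack-cors gem (assuming a Rails 4-era app; the origin matches the BrowserSync port from the question):

    # Gemfile
    gem 'rack-cors'

    # config/application.rb
    config.middleware.insert_before 0, 'Rack::Cors' do
      allow do
        origins 'http://localhost:3001'   # the BrowserSync dev server
        resource '*',
                 headers: :any,
                 methods: [:get, :post, :put, :patch, :delete, :options]
      end
    end

With that in place the Rails responses carry the Access-Control-Allow-Origin header, and the browser stops blocking the $http calls.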

How to deploy a webservice on Rails 2.3.5/Ruby 1.8.7 to support concurrent requests

I have a single controller implementing my SOAP webservice API using datanoise-actionwebservice (2.3.2). I am not using Rails as a web app or website to serve any static content at all - it is purely running as a webservice, with direct Mysql.real_connect based queries from the controller code. This stems partly from my lack of deeper understanding of how to use the model part of Rails, but more practically from the fact that the queries are pretty complex and easier to code by hand.
This webservice further makes SOAP webservice calls to other legacy services, and replies back to the Windows-based webservice client apps.
The challenge is that I have so far used the default Mongrel server to handle the multiple calls to my API across several terminals, but I obviously need to redeploy Mongrel as a cluster or use another setup such as Passenger to improve response times for the clients. I'm unable to find a simple HOWTO for achieving this or comparable approaches - most point to usage as a website/webapp, where Apache etc. is needed more for serving static content and CSS, alongside the Ruby part.
Can anyone point me to a resource on setting up a cluster that handles multiple webservice calls concurrently, processes them in the controller and responds? Does Rails automatically instantiate a controller for each webservice request so that they are handled in parallel? Or do we need to rewrite using available libraries?
Any help/advice is much appreciated. I cannot rewrite anything - it needs to stay on Rails 2.3.5/Ruby 1.8.7 - upgrades are not an option as this is already working in production, and datanoise itself doesn't seem to like 1.8.7+ or Rails 3 from what I can see.
Cheers!
Fundamentally, you need to run more than one Mongrel.
The mongrel processes will become your backend servers, and you will need to run a front-end load-balancing proxy.
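To picture the front-end proxy role, here is a minimal hypothetical nginx fragment balancing three Mongrels started on consecutive ports:

    upstream mongrel_cluster {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://mongrel_cluster;   # round-robin across the Mongrels
        }
    }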
The thing is, the main advantage of the pure-Ruby, single-threaded Mongrel is its easy setup. Once you move beyond the easy-setup realm, you may as well deal with the complexity of running Passenger, which does the load balancing internally.
If it were me, I would compile nginx or Tengine from source, since neither includes Passenger out-of-the-box. This is as simple as adding an argument to the configure script:
    # gem install rake rack passenger --no-rdoc --no-ri
    # exit
    $ cd nginx-1.x.x   # hypothetical: run from the unpacked nginx source tree
    $ ./configure --add-module=`passenger-config --root`/ext/nginx
    $ make
    $ sudo make install
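Once nginx is built with the module, wiring the app in is a couple of directives in nginx.conf - a hypothetical sketch (server name and paths are made up):

    server {
        listen 80;
        server_name ws.example.com;        # hypothetical
        root /var/www/webservice/public;   # the Rails app's public directory
        passenger_enabled on;              # Passenger spawns and balances the Rails workers
    }

Passenger then manages a pool of application processes for you, replacing the manual Mongrel cluster.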

Rails: What is the use of web servers (Apache / nginx / passenger)?

Hi, I've been learning Rails for the past half year and have a few apps up on Heroku. So for me, deploying apps onto the world wide web seemed as simple as heroku push. However, I just got my first internship doing Rails, and one of my seniors is talking about Apache and Nginx. I'm not sure how they fit in the picture, since I thought apps consisted of only Rails + a cloud app platform. I have looked it up but I still don't get how and where they affect my app's life cycle. Can someone explain the what/where/when of using web servers?
So you've got your Rails app, and as you know it's got controllers, actions, views and whatnot.
When a user in their browser goes to your app on Heroku, they type in the URL which points to the Heroku servers.
The Heroku servers are web servers that listen to your users that type in the URL and connect them to your Rails application. The rails application does its thing (gets a list of blog posts or whatever) and the server sends this information back to your user's browser.
You've been using a web server the whole time, just it was abstracted away from you and made super simple thanks to Heroku.
So the life cycle is somewhat like this:
Whilst you've been building your applications on your development machine, you've probably come across the command rails server. This starts a program called WEBrick, which is a web server, listening on port 3000. You go to your app via http://localhost:3000.
WEBrick listens on port 3000 and responds to requests from users, such as "hey, give me a list of posts".
When you push your code into production (in your experience, via heroku push), you're sending your code to a provider who takes care of the production equivalent of rails server for you.
A production setup (which your senior developers are talking about) is a bit more complex than your local rails server setup on your development machine.
In production you have your Rails server (often things like Unicorn, Passenger) which takes the place of WEBrick.
In a lot of production setups another server, such as Apache or nginx, is also used, and it is the server that the user connects to when they go to your application.
This server often acts as a bit of a router, working out how different types of requests should be handled. For instance, requests for static files (CSS, images, JavaScript etc.) that are stored on the server might be processed directly by Apache or nginx, since they do a fantastic (and fast) job of sending static assets back to the client.
Other requests, such as "get me a list of all blog posts", get passed on to the Rails server (Unicorn, Passenger etc.), which in turn does the required work and sends the response to Apache/nginx, which sends it back to the client.
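In nginx terms, that routing decision is only a few lines of config - a hypothetical sketch:

    upstream app_server {
        server unix:/var/run/unicorn.sock;   # hypothetical Unicorn socket
    }

    server {
        listen 80;
        root /var/www/myapp/public;

        # compiled assets are served straight from disk, bypassing Rails
        location ^~ /assets/ {
            expires max;
        }

        # everything else goes to the application server
        location / {
            proxy_set_header Host $host;
            proxy_pass http://app_server;
        }
    }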
Heroku does all this for you in a nice, easy-to-use package, but it sounds like the place you're working at manages this themselves rather than using Heroku. They've set up their own bunch of web servers, and will have their own way of doing the equivalent of heroku push, which sends the code to the servers and makes sure they're up and running, ready to respond to user requests.
Hope that helps!
Web pages need a web server to make them available on the Internet.
So a site that is all static content (all just .html pages) just needs a web server, and that's where Apache, nginx, etc. come in. They are web servers.
When you use a framework like Rails, an additional component is added: an application server. This pre-processes the pages using the Rails framework and then (still) uses the above-mentioned web server to make the final pages (which are .html, of course) available to the end users through their browser.
Phusion Passenger is an application server that, with Rails, will help manage and automate the deployment of code.
Heroku is a cloud service, meaning they take care of hardware and software, allowing you to seamlessly publish your application without worrying about what is going on behind the scenes. So the only thing you have to do is push your code to their Git and voila.
On the other hand, Rails can also be deployed on a system built by you completely from scratch, where you are responsible not only for the app development but also for server maintenance and the choice of hardware and/or software. You could then choose between several application servers capable of running Rails, such as Passenger (typically run behind nginx or Apache).
Hope that helps.

Where does a connection manager fit in rails?

I'm writing a Rails application that acts as a proxy, hereafter referred to as the proxy. The idea is that the user should be able to manage his servers through a web UI that's always up and running, even if his servers are down.
To accomplish this, the proxy needs to keep open connections to the servers at all times. For this I've created a background process using daemonz that accepts incoming connections from servers and spawns threads that constantly listen on the sockets.
Now I have two problems: I need to be able to send messages on these sockets from my Rails controllers, and I need to know which socket to use to reach the right server. I was planning to use a ConnectionManager class to take care of this for me, but I don't know where such a class fits into the Rails structure, and I don't know how to make the object and the sockets available to both processes.
That makes two questions:
Where does the connection manager belong?
How do I share the connection manager and the sockets between the processes?
If you only know the answer to the first question, please go ahead and answer. It's possible that I should create a separate post for my second question.
This does not seem like a useful thing to build in Rails/Ruby.
What might be more useful would be a Rails admin application that configured an existing load balancer/proxy like haproxy under the covers.
You could have a mapping of servers/ports/configuration in your Rails app and then project that into an haproxy config and restart the load balancer. A great place to start would be the haproxy-tools gem, which allows you to parse/generate an haproxy config file.
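As a rough sketch of that idea (the server list, file path and reload command are all hypothetical):

    # Turn a list of server records (e.g. rows from your Rails app's
    # database) into an haproxy backend section.
    servers = [
      { name: 'web1', host: '10.0.0.1', port: 8080 },
      { name: 'web2', host: '10.0.0.2', port: 8080 },
    ]

    config = "backend app_servers\n    balance roundrobin\n"
    servers.each do |s|
      config << "    server #{s[:name]} #{s[:host]}:#{s[:port]} check\n"
    end

    File.write('/etc/haproxy/haproxy.cfg', config)   # hypothetical target path
    system('service haproxy reload')                 # pick up the new config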
It doesn't make sense to re-write your own load balancer and Ruby/Rails is a poor technology stack even if you were going to do that.
