How to serve multiple sites with nginx/passenger?

I have different websites/applications built with Rails, each with its own domain name. I want to serve them all from a single server with Nginx/Passenger. I have tried some techniques, but I cannot make them work; basically, I have very little information about this.
So, I can serve different websites/applications on different ports. But how can I make people see application "AAA" if they are coming from aaa.com and application "BBB" if they are coming from bbb.com?

Phusion Passenger's documentation has a passage on this (section 3.2): http://www.modrails.com/documentation/Users%20guide%20Nginx.html
Basically, you can set up virtual hosts that point to different applications on the same web server/app server pair.
You can also do rewrites or forwarding purely through nginx configuration, if the above doesn't work.
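For illustration, the virtual-host approach from that guide roughly boils down to one server block per domain, each pointing Passenger at a different application (the domain names and paths below are placeholders):

    server {
        listen 80;
        server_name aaa.com www.aaa.com;
        # Passenger serves the Rails app whose public/ directory this root points to
        root /var/www/aaa/public;
        passenger_enabled on;
    }

    server {
        listen 80;
        server_name bbb.com www.bbb.com;
        root /var/www/bbb/public;
        passenger_enabled on;
    }

nginx picks the server block whose server_name matches the incoming Host header, so both applications can share port 80.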

Related

One VPS, multiple services, different projects/domains

This is my first VPS, so I am pretty new to administering my own box. I already have experience with a managed web server, registrars, DNS settings, etc. The basics. Now I'd like to take it a step further and manage my own VPS to run multiple services for different business and private projects.
So far I have got a VPS from Contabo, updated the system, set up a new user with sudo rights, secured the root user, configured UFW, installed Nginx with server blocks for two domains, and created SSL certificates for one domain using Certbot.
Before I go on with setting up my VPS, I'd like to verify that my approach for hosting multiple services for multiple domains makes sense and is a good way to go.
My goal is to host the following services on my VPS. Some of them will be used by all projects, some only by a single one:
static website hosting
dynamic website hosting with a lightweight CMS
send and receive emails
Nextcloud/Owncloud
Ghost blog
My current approach is to run all services except for Nginx and the mail server with Docker, using Nginx as a reverse proxy to the services encapsulated in Docker (see the sketch below).
Is this overkill, or a valid way to go forward in order to keep the system nice and clean? Since I am new to all of this, I am unsure whether I could also run all of the services without Docker and still be able to serve the different projects on different domains without messing up the system.
Furthermore, I'd like to make sure that access to the services and the stored data is properly separated between the different tenants (projects). And ideally, administration of the services should remain manageable.
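To make the idea concrete, here is roughly what I have in mind for one of the services; the domain and the published container port (Ghost's default, 2368) are just examples:

    server {
        listen 80;
        server_name blog.example.com;

        location / {
            # Forward to the Ghost container, published on the host's loopback interface
            proxy_pass http://127.0.0.1:2368;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }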

Does Puma have something like Apache's "Location" tag?

I'm using Puma (version 3.11.0) as the web server for a Rails application (Rails version 5.1.4). I need the whole application to be SSL encrypted, but I need one particular route to also have the SSL "verify_mode" set to peer. In Apache, I would normally use a "Location" or "LocationMatch" block to configure the SSL options differently from the rest of the site.
How can I do the same thing with Puma?
I totally agree with #user3309314.
Exposing Puma to the internet directly (or exposing any application server, for that matter) isn't a great idea.
Web servers (unlike application servers) are designed to be in the front, protecting application servers from the cruel world...
...and along the way, they should be the ones to handle SSL/TLS (along with DoS attacks and other annoying concerns).
So use nginx or apache to forward requests to your Ruby application(s) and if you need a special TLS/SSL rule for a specific path, do that with nginx or apache.
Puma doesn't (and IMHO shouldn't) support the feature you're asking about.
EDIT (some of the information given in the comments + explanations)
It's best to think of application servers as a "bridge" between the host machine's routing layer (nginx/apache) and the applications.
It's the host routing layer (nginx/apache) that filters and routes certain host names and paths to certain applications (or the same application with different headers / variables / requirements).
The application server's job is to simply "bridge" between the host routing layer and the actual application, translating between the different data formats (HTTP data to Ruby objects and back).
In order to support the feature you're asking about, the application server would have to perform the same functions as the host routing layer (routing the correct host name / path to the correct application with the correct changes).
This would violate the separation of concerns and add redundancy to the system, inflicting a performance penalty (not to mention a larger code base that duplicates the same task in different modules).
This is the reason why, IMHO, these features should not get coded into Ruby application servers.
It's unlikely that Puma supports this.
But you can configure Nginx or Apache as a reverse proxy, so requests get forwarded to the Puma application server, and you can configure SSL options as you need.
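For what it's worth, a rough nginx sketch of that idea: nginx applies ssl_verify_client per server rather than per location, so a common workaround is to set it to optional and enforce it only on the route that needs it (the paths, certificate locations, and backend port below are assumptions):

    server {
        listen 443 ssl;
        server_name app.example.com;

        ssl_certificate        /etc/nginx/ssl/app.crt;  # server certificate (assumed path)
        ssl_certificate_key    /etc/nginx/ssl/app.key;
        ssl_client_certificate /etc/nginx/ssl/ca.crt;   # CA used to verify client certificates

        # Request a client certificate everywhere, but don't reject anyone yet
        ssl_verify_client optional;

        location /secure {
            # Enforce peer verification only for this route
            if ($ssl_client_verify != SUCCESS) {
                return 403;
            }
            proxy_pass http://127.0.0.1:3000;  # Puma listening locally
        }

        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }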

Is it possible to have multiple Rails projects on the same port?

I want to add a new project alongside my current Rails app without starting a new server for it.
I think it is impossible to have two Rails apps on one port, but my boss wants it.
Is it possible at all?
Yes, it's possible if you configure a web server (nginx, etc.) as a reverse proxy to listen on the port you want and have it forward traffic to the correct app based on subdomain.
Yes and no. You can't run two web servers, e.g. Puma, on the same port. That won't work. But you can run one web server to serve two Rails apps. Incoming requests are routed based on either their subdomain (app1.example.com) or their path (example.com/app1).
A common setup is to use Apache/nginx as the web server in combination with Passenger as the application server. This related question asks something similar and points to Passenger's documentation on how to serve apps from subdomains: How to deploy multiple rails app on a single IP (Apache + Passenger)?
The configuration depends heavily on your setup, so I can't give you a more detailed answer. But searching for "multiple apps" and the combination of your web and application server should yield enough results and tutorials for you to solve your problem.
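For a rough idea of what the nginx side of such a setup can look like, here is a sketch with two Puma processes on local ports (all names and ports are made up):

    upstream app1 {
        server 127.0.0.1:3001;  # first Rails app's Puma
    }

    upstream app2 {
        server 127.0.0.1:3002;  # second Rails app's Puma
    }

    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://app1;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://app2;
            proxy_set_header Host $host;
        }
    }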

I am confused about some URLs that I see on the Internet

Please tell me the difference between "someSite.com/something" and "something.someSite.com". Are they equivalent? As an amateur programmer, I know how to do the former. I think I may need to learn network administration to be able to do the latter.
The second form is usually referred to as a subdomain. This is the oversimplified version:
You have a DNS server that converts the domain to an IP. That DNS server also handles subdomains. Usually www is synonymous with the base domain itself. You can also have deeper levels of subdomains, like sub.domain.something.someSite.com.
You can make them resolve to the same or different IPs, depending on their purpose.
Even if they resolve to the same IP, the web server at that IP receives a request that includes the original domain name. So on that same IP, the server can give different responses for each domain. This is usually the case with small hosting packages, as they can have thousands of domains on a single IP and they all serve up different websites from different clients.
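In nginx terms, for instance, this name-based virtual hosting is just two server blocks on the same IP and port with different server_name values (domains and paths are made up):

    server {
        listen 80;
        server_name aaa.com;
        root /var/www/aaa;  # content returned when the Host header says aaa.com
    }

    server {
        listen 80;
        server_name bbb.com;
        root /var/www/bbb;  # same IP and port, different content for bbb.com
    }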
From a technical point of view, someSite.com/something is a path handled by the same server, while something.someSite.com is a subdomain, which could point to a completely different web server.
In most cases, the two variants do give you identical content, because both of them are linked server-side to the same page.

Find all the web pages in a domain and its subdomains

I am looking for a way to find all the web pages and sub domains in a domain. For example, in the uoregon.edu domain, I would like to find all the web pages in this domain and in all the sub domains (e.g., cs.uoregon.edu).
I have been looking at Nutch, and I think it can do the job. But it seems that Nutch downloads entire web pages and indexes them for later search, whereas I want a crawler that only scans a web page for URLs that belong to the same domain. Furthermore, it seems that Nutch saves the linkdb in a serialized format. How can I read it? I tried Solr, and it can read Nutch's collected data. But I don't think I need Solr, since I am not performing any searches. All I need are the URLs that belong to a given domain.
Thanks
If you're familiar with Ruby, consider using Anemone, a wonderful crawling framework. Here is sample code that works out of the box.
require 'anemone'

urls = []
site_url = 'http://www.uoregon.edu'  # starting URL; Anemone stays within this domain by default

Anemone.crawl(site_url) do |anemone|
  anemone.on_every_page do |page|
    urls << page.url
  end
end
https://github.com/chriskite/anemone
Disclaimer: You need to use a patch from the issues to crawl subdomains and you might want to consider adding a maximum page count.
The easiest way to find all subdomains of a given domain is to ask the DNS administrators of the site in question to provide you with a DNS Zone Transfer or their zone files; if there are any wildcard DNS entries in the zone, you'll have to also get the configurations (and potentially code) of the servers that respond to requests on the wildcard DNS entries. Don't forget that portions of the domain name space might be handled by other DNS servers -- you'll have to get data from them all.
This is especially complicated because HTTP servers might have different handling for requests to different names baked into their server configuration files, or the application code running the servers, or perhaps the application code running the servers will perform database lookups to determine what to do with the given name. FTP does not provide for name-based virtual hosting, and whatever other services you're interested in may or may not provide name-based virtual hosting protocols.
