I'm planning on hosting three websites on a single host, let's call it raspi-dev:
home.redacted.ca, brew.redacted.ca, www.redacted.ca (or redacted.ca)
To enable this, I'm using a container that accepts incoming connections on port 80. Here's a snippet of the reverse proxy config:
server {
    server_name brew.redacted.ca;
    listen 80;

    location / {
        proxy_pass http://brewweb;
    }
}
Here, brewweb is a container named brewweb. The communication works so far: when navigating to http://brew.redacted.ca, I get exactly what I expect:
"GET / HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:60.0) Gecko/20100101 Firefox/60.0" "-"
The point is to redirect traffic to https. Here's the code from the other webserver:
server {
    listen 80;
    server_name brew.redacted.ca;
    return 301 https://brew.redacted.ca$request_uri;
}

server {
    server_name brew.redacted.ca;
    root /var/www/brew;
    listen 443 ssl;

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
}
So it works as intended: you go to http://brew.redacted.ca and it tells you to come back on port 443. However, in the browser I get a dead link: "Unable to Connect". My assumption right now is that it's because my reverse proxy, which is the only container listening on the host network, can't receive and then forward the subsequent 443 requests. I'm at a bit of a loss on how to handle this, though. Do I get a cert for the reverse proxy? How would I even do that, since it's not serving up any content itself?
I'm open to all suggestions.
It is your proxy that needs to speak securely to clients, not the hosted applications. I do this on my internal network: I have a Docker host running a couple of different developer tools behind a single Nginx reverse proxy.
All SSL/TLS is handled by the proxy server. The web servers that actually serve up the applications are in Docker containers visible only to the Nginx proxy container. They communicate with the proxy via HTTP, not HTTPS.
I'm using a wildcard certificate issued for my main server, *.app.co, with SAN entries for www.app.co, utility.app.co and another.app.co.
The Nginx proxy server configuration actually ends up looking something like this:
ssl_certificate redacted.pem;
ssl_certificate_key redacted.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name www.app.co;

    location / {
        proxy_pass http://www-container;
    }
    ...
}

server {
    listen 443 ssl;
    server_name another.app.co;

    location / {
        proxy_pass http://another-container;
    }
    ...
}
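For reference, a fleshed-out version of one of those proxied server blocks would look roughly like this. The forwarding headers are not in the snippet above; they are the commonly used ones, and which of them you actually need depends on the application behind the proxy:

server {
    listen 443 ssl;
    server_name www.app.co;

    location / {
        # Pass the original host, client address and scheme through to the
        # container so the app can build correct absolute URLs.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://www-container;
    }
}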
I have installed Shopware 5 in a Docker container and put it behind an nginx reverse proxy.
After the installation, the main page of the website works, but when I click on any of its tabs, it forwards to the container directly and changes the address in the URL to the address and port of the container. The page then shows that the website can't be reached.
I am wondering whether this is related to nginx or to Shopware itself.
Any advice will be greatly appreciated.
This is the configuration of the proxy:
server {
    listen 443 ssl http2;
    # listen 80 http2;
    server_name domainname.com;

    ssl_certificate /etc/nginx/certificates/domainname.crt;
    ssl_certificate_key /etc/nginx/certificates/domainname.key;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    ssl_ecdh_curve secp384r1;

    # root /var/www/html;
    error_log /var/log/nginx/domain-error.log;
    access_log /var/log/nginx/domain-access.log;
    add_header Access-Control-Allow-Origin *;

    location / {
        proxy_pass http://localhost:8081/;
    }
}
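A common cause of exactly this symptom is that the proxy does not forward the original Host header, so the application builds absolute URLs from the upstream address it sees (localhost:8081 here). A minimal sketch of the headers usually added inside the location block, assuming Shopware honours the standard Host and X-Forwarded-* headers, would be:

location / {
    # Forward the public hostname, client address and scheme so the app
    # does not generate links pointing at the container's own address.
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:8081/;
}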
I have a site that I built using Ruby on Rails on an nginx server with Passenger. My client decided to install an SSL certificate. I am a newbie to this kind of issue and have never done it before, and I need to confirm that my sites-enabled/default file is configured properly.
My current configuration is:
server {
    listen 80;
    listen [::]:80 ipv6only=on;
    server_name www.mysite.com;

    passenger_enabled on;
    rails_env production;
    root /home/directory;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
and for adding the SSL certificate, I will add another server block like the one below:
server {
    listen 443;
    server_name www.mysite.com;

    passenger_enabled on;
    rails_env production;
    root /home/directory;

    ssl on;
    ssl_certificate /etc/ssl/my_certificate;
    ssl_certificate_key /etc/ssl/my_private_key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 10m;
    ssl_session_cache shared:SSL:10m;
    ssl_stapling on;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Is that the right way and the right parameters to configure nginx, or do I need to combine them into one server block?
Is there anything missing that I should add to the previous config?
In server_name www.mysite.com; can I use my IP address instead of the domain name?
Thanks for your time in advance.
You can have HTTP and HTTPS servers in the same server section
server {
    listen 80;
    listen [::]:80 ipv6only=on;
    listen 443 ssl;
    ...
}
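Concretely, merging the two blocks from the question into one, reusing the certificate paths and Passenger settings already shown there, might look roughly like this:

server {
    listen 80;
    listen [::]:80 ipv6only=on;
    listen 443 ssl;
    server_name www.mysite.com;

    passenger_enabled on;
    rails_env production;
    root /home/directory;

    # The ssl_* directives only take effect for the 443 listener.
    ssl_certificate /etc/ssl/my_certificate;
    ssl_certificate_key /etc/ssl/my_private_key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 10m;
    ssl_session_cache shared:SSL:10m;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

Note that with listen 443 ssl you do not need ssl on; (keeping it would force SSL on the port-80 listener as well).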
For a complete SSL-related configuration, I would recommend using the Mozilla SSL configuration generator.
Yes, but you shouldn't. Nginx will match your first server section even if you haven't set server_name properly, but such a configuration is hard to support and troubleshoot.
PROBLEM:
Images are not getting read/written to my DB server's file structure for Dragonfly. I am able to interact with my database through Active Record for all of my Ruby models. All my static assets are working. User-generated images should be saved as www.test.example.com/media/AgGdsgDGsdgsDGSGdsgsdg... on my remote server. However, they are getting saved on whichever app server they are uploaded from.
BACKGROUND:
Ruby/Rails, Nginx, Passenger. We are moving from a single-server solution to a 3-server solution. I have 2 app servers that sit behind a DB server. I am using the Dragonfly gem for user-generated images and other content. On our current single-server setup, everything just points to localhost and works great.
10.102.66.4 is the LAN IP of my DB server.
APP SERVERS NGINX.CONF:
user pete;
...
http {
    passenger_pre_start http://example.com;
    passenger_pre_start http://example.com:3000;
    ...
    proxy_cache_path /home/pete/example/shared/tmp/dragonfly levels=2:2
                     keys_zone=dragonfly:100m inactive=30d max_size=1g;
    ...

    server {
        listen 80;
        server_name example.com;
        rewrite ^ https://example.com$request_uri? permanent;
    }

    server {
        listen 443 ssl default deferred;
        ssl on;
        ssl_certificate /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/server.key;
        ssl_session_cache shared:SSL:1m;

        server_name example.com *.example.com;
        root /home/pete/example/current/public;
        passenger_enabled on;

        location /media {
            proxy_pass http://10.102.66.4:443;
            proxy_cache dragonfly;
            proxy_cache_valid 200 30d;
            break;
        }
    }
}
DB SERVER NGINX.CONF:
user pete;
...
http {
    sendfile on;
    ...
    keepalive_timeout 65;
    types_hash_max_size 2048;
    ...
    large_client_header_buffers 4 16k;

    server {
        listen 443 ssl;
        ssl_certificate /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/server.key;

        location / {
            try_files $uri @app;
        }

        location @app {
            proxy_set_header host $host;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            root /home/pete/example/shared/;
        }
    }
}
DRAGONFLY.RB:
require 'dragonfly'
app = Dragonfly[:images]
app.configure_with(:imagemagick)
app.configure_with(:rails)
if defined?(ActiveRecord::Base)
app.define_macro(ActiveRecord::Base, :image_accessor)
app.define_macro(ActiveRecord::Base, :file_accessor)
end
WHAT I'VE TRIED:
Ran 'chown -R pete:pete /home/pete/example/current/public', and the permissions look correct.
Restarted server/nginx/ruby/etc...
Added 'large_client_header_buffers 4 16k;' to nginx.conf
ERRORS/LOGS:
CHROME CONSOLE:
Failed to load resource: the server responded with a status of 400 (Bad Request)
NGINX ERROR.LOG (Yes.. I know it says 'warn')
2015/06/25 11:49:11 [warn] 25591#0: *345 a client request body is buffered to a temporary file /var/lib/nginx/body/0000000002, client: 173.204.167.103, server: example.com, request: "POST /offices/1-big-o/users/1-peterb HTTP/1.1", host: "test.example.com", referrer: "https://test.example.com/offices/1-big-o/users/1-peterb/edit"
NGINX ACCESS.LOG:
[25/Jun/2015:11:49:14 -0700] "GET /media/W1siZiIsIjIwMTUvMDYvMjUvMTFfNDlfMTFfNDcxXzhfYml0X21scF9vY19fX2xvY2tlX3R1bWJsZXJfYnlfbmlnaHRzaGFkZTQyNF9kNXppdmpmLmpwZyJdXQ HTTP/1.0" 400 681 "https://test.example.com/offices/1-big-o/users/1-peterb/edit" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.125 Safari/537.36
curl -k https://10.102.66.4:443
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
UPDATE 1:
It seems that INSTEAD of saving my files on the DB server, it is saving them locally on my app server. The file structure is correct... just the wrong server.
Since Dragonfly runs on the app servers, it saves the files there. If you want them to be accessible by the reverse proxy, you could set up NFS or some other file-sharing system. Then the files could be saved locally but accessed by the proxy server.
I have a staging Rails app running with Passenger on nginx. I want to secure the connections with SSL. I have read a lot of resources online, but I have yet to make it run over SSL.
So far, my server block in nginx.conf is:
server {
    listen 80;
    listen 443 default deferred;
    server_name example.com;
    root /home/deploy/app/public;

    passenger_enabled on;
    passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;

    ssl on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:RSA+3DES:!ADH:!AECDH:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
}
The site is running but not on HTTPS.
I've just made the decision to go with SSL myself and found an article on the DigitalOcean site on how to do this. It might be the listen 443 default deferred; line, which according to that article should say ssl, not deferred.
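Applied to the block in the question, that listen line would presumably become something like:

    listen 443 ssl;

(keeping or dropping the default flag as appropriate for your setup).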
Here's the nginx block they use;
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name your_domain.com;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        try_files $uri $uri/ =404;
    }
}
UPDATE:
I now have my own site running on SSL. Along with the above I just told Rails to force SSL. In your production environment config;
# ./config/environments/production.rb
config.force_ssl = true
Optionally, you can add these setting in the nginx.conf;
http {
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    keepalive_timeout 70;
}
UPDATE: 2015-09
Since I wrote this answer I've added a few extra things to my nginx config, which I believe everyone should also include. Add the following to your server block;
server {
    ssl_prefer_server_ciphers On;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    add_header X-Frame-Options DENY;
}
The first three lines (ssl_prefer_server_ciphers, ssl_protocols, ssl_ciphers) are the most important, as they make sure you have good, strong SSL settings.
The X-Frame-Options header prevents your site from being embedded via <iframe> tags. I expect most people will benefit from including this setting.
I have a VPS with a Rails 4 application running on Ubuntu, NginX and Unicorn.
As I want all pages to be SSL encrypted, all requests to http:// are forwarded to https:// which is working fine.
This is an excerpt of my NginX configuration:
http {
    ....

    server {
        listen 80;
        server_name example.com;
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    server {
        listen 443;
        server_name example.com;
        root /home/rails/public;
        index index.htm index.html;

        ssl on;
        ssl_certificate /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;

        location / {
            try_files $uri/index.html $uri.html $uri @app;
        }

        location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_pass http://app_server;
        }
    }
}
How can I make it so that all requests to http://example.com and https://example.com are forwarded to https://www.example.com?
Thanks for any help.
We use this in apache2:
<VirtualHost *:80>
    ServerName frontlineutilities.co.uk
    ServerAlias www.frontlineutilities.co.uk
</VirtualHost>
Docs
Having researched how you'd achieve this in Nginx, I found this:
server {
    listen 80;
    server_name example.org www.example.org;
    ...
}
--
Capturing Requests
The reason I wrote this as an answer is that your choice is whether to use middleware or the web server itself.
Although I don't know the specifics, I do know that adding to the Rails middleware will eventually lead to bloat. I am a firm proponent of modular programming, and will gladly separate functionality into different parts of the stack.
The problem you have is not really a Rails one; it's a server issue (how to route all requests to www.). I would therefore highly recommend you focus on the server to get it sorted. In the end, the server is there to capture requests to your server IP and route them accordingly.
I would start with the resources above and work on the redirect in the server. It doesn't matter to Rails whether you send a request to www. or the standard domain.
If you are looking to set up a redirect from http://example.com or https://example.com to https://www.example.com, the following should do the redirect:
server {
    listen 80;
    server_name example.com;
    rewrite ^/(.*) https://www.example.com$request_uri permanent;
}

server {
    listen 443;
    server_name example.com;
    rewrite ^/(.*) https://www.example.com$request_uri permanent;
}
Also, you will have to change your original server_name to www.example.com.
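In other words, the existing 443 block from the question keeps all of its settings and simply becomes the www host, roughly:

server {
    listen 443;
    server_name www.example.com;
    root /home/rails/public;
    index index.htm index.html;

    ssl on;
    ssl_certificate /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    # ... the existing location / and location @app blocks stay as they are ...
}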
Hope this helps.