On my VPS I have only one open port, which is why I use proxer, a dockerized nginx reverse proxy where you provide domain names and the local ports they should be redirected to.
I have successfully set up a socket.io server with the polling transport (since it uses plain HTTP requests), but I would like to use the websocket transport, and this is where it fails: it tells me it can't establish a wss:// connection to the URL.
This is the nginx reverse proxy configuration script I am using:
for cfg in $(cat /config); do
    domain=$(echo $cfg | cut -f1 -d=)
    destination=$(echo $cfg | cut -f2 -d=)
    echo ">> Building config for $domain";
    config=$(cat <<EOF
server {
    listen 80;
    listen [::]:80;
    server_name $domain;
    location / {
        proxy_pass $destination;
        proxy_set_header Host \$host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        #proxy_set_header Connection \$connection_upgrade;
        proxy_ssl_name \$host;
        proxy_ssl_server_name on;
        proxy_ssl_verify off;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_session_reuse off;
        proxy_set_header X-Forwarded-For \$remote_addr;
        proxy_set_header X-Forwarded-Proto \$scheme;
        proxy_read_timeout 120;
        proxy_send_timeout 120;
        proxy_connect_timeout 120;
    }
}
EOF
)
    # append the new config to the end of the nginx config file
    echo "$config" >>$out
done
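For reference, the /config file this script iterates over holds one domain=destination pair per line; the entries below are hypothetical, but the format follows from the cut -d= parsing above:
example.com=http://127.0.0.1:3000
chat.example.com=http://127.0.0.1:8080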
I noticed that this line is commented out, and I have read that it is needed for wss://:
#proxy_set_header Connection \$connection_upgrade;
Why is it commented out?
Will changing this line affect the plain HTTP proxying?
What changes should I make to allow wss:// on one of the domains?
This tool has pretty much everything that is needed to set up an nginx reverse proxy.
Note that it won't support wss:// out of the box, though, due to this commented-out line in its config:
#proxy_set_header Connection \$connection_upgrade;
Uncomment it, rebuild, and you have a happy reverse proxy that supports wss:// :)
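One caveat: $connection_upgrade is not a built-in nginx variable, so the uncommented line only works if a map block defines it at the http level (the same pattern appears in the ActionCable answer further down). A minimal sketch of what to add, assuming the generated config is included in the http context; inside the script's heredoc each $ would need to be escaped as \$:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
With this map in place, the Connection header becomes "upgrade" for websocket handshakes and "close" for ordinary requests, so plain HTTP proxying is unaffected.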
I am currently working on an Nginx config to group some of my docker containers into subdomains. Some of these containers are not running permanently, and they prevent nginx from starting (with the error host not found in upstream "somecontainer:5000" in /etc/nginx/conf.d/default.conf:48) because the host defined in the upstream is not reachable. Is there a way to set a fallback upstream server in case the first one is not running?
The config currently looks like this:
upstream somecontainer {
    server somecontainer:5000;
    # here i need something like: if host is unreachable
    # server fallbackserver:5000
}
server {
    listen 443 ssl http2;
    server_name some.subdomain.com;
    root /public_html/;
    client_max_body_size 16384m;
    ssl on;
    server_tokens off;
    ssl_certificate sslstuff;
    ssl_certificate_key sslstuff;
    ssl_buffer_size 8k;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    location / {
        proxy_pass http://somecontainer;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
Unfortunately, this is a consequence of Nginx's design.
You can use a variable in proxy_pass, which will be resolved at runtime, so there is no such error when nginx loads:
set $destination_host somecontainer;
proxy_pass http://$destination_host:5000;
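Note that once proxy_pass contains a variable, nginx resolves the hostname at request time and needs an explicit resolver directive; inside Docker networks the embedded DNS server listens at 127.0.0.11. A minimal sketch of the location, assuming a Docker bridge network:
location / {
    # Docker's embedded DNS; needed because variable-based
    # proxy_pass targets are resolved at request time
    resolver 127.0.0.11 valid=30s;
    set $destination_host somecontainer;
    proxy_pass http://$destination_host:5000;
}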
But the disadvantage of the solution above is that you cannot leverage nginx upstream features, such as specifying a load-balancing algorithm or weighted balancing.
Additionally, you have to patch nginx if you need both upstream blocks and dynamic service initialization. I have a patch that changes this part of Nginx's design; it was discussed in the issue below, and I have been using it in a production environment for a while. You can check it out if patching is not a problem for you: https://github.com/ZigzagAK/ngx_dynamic_upstream/issues/8#issuecomment-814702336
I deployed an Nginx reverse proxy in docker, and it belongs to a bridge network using 172.16.10.0/24. I also have another web app in docker on a different bridge network, 172.16.20.0/24. In order to let the Nginx reverse proxy connect to the web app, I have set the Nginx reverse proxy to join the 172.16.20.0/24 network as well.
My web app is hosted at http://localhost:8899, and I have bound host:8899 --> container:80. What I want is this: when someone visits https://mydomain, the reverse proxy should pass the request to http://localhost:8899.
My nginx config is as follows:
server {
    listen 80;
    listen [::]:80;
    server_name mydomain;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mydomain;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ssl_certificate /ssl/my_domain_cert.pem;
    ssl_certificate_key /ssl/my_domain.key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_pass http://localhost:8899;
        proxy_read_timeout 90;
    }
}
However, when I connect to https://mydomain, I get the error SSL handshake failed (error code 525). How should I fix this problem?
The HTTP 525 error means that no valid SSL certificate is installed.
The nginx conf is looking for the SSL certificate files in these locations:
ssl_certificate /ssl/my_domain_cert.pem;
ssl_certificate_key /ssl/my_domain.key;
Unless you created an SSL certificate in your Dockerfile, or created one earlier and put it in these locations, you have to MANUALLY create an SSL certificate.
How to create a key and pem file:
https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-on-centos-7
How to get .pem file from .key and .crt files?
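As a minimal sketch (assuming a self-signed certificate is acceptable for your setup), a single openssl command can generate the key and pem at the paths the config expects:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /ssl/my_domain.key \
    -out /ssl/my_domain_cert.pem \
    -subj "/CN=mydomain"
Note that since a 525 is reported by Cloudflare, the Cloudflare SSL mode must be one that accepts the origin's certificate ("Full" accepts self-signed certificates; "Full (strict)" does not).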
I'm a bit out of my element trying to deploy a Laravel (PHP) application via docker. Everything works great until I try to use SSL certs via Let's Encrypt, which triggers a redirect loop I'm unable to resolve.
upstream app {
    server app1520925178:80;
}
server {
    listen 80 default_server;
    server_name app.example.com;
    # handle future LE refreshes
    location /.well-known {
        root /var/www/html;
        try_files $uri $uri/ =404;
    }
    location / {
        return 301 https://$server_name$request_uri;
    }
}
server {
    listen 443 ssl default_server;
    server_name app.example.com;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 24h;
    keepalive_timeout 300s;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    charset utf-8;
    location / {
        #include proxy_params;
        proxy_pass http://app;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        # Handle Web Socket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Any guidance is greatly appreciated.
EDIT: It "randomly" started working a few minutes after this post was created. I'm still not 100% sure why it would take time to "propagate"; if anyone has insight into that, I'd appreciate it.
If you use Cloudflare as your DNS service (not registering domains through it, but managing your DNS records with it) and have Cloudflare's protection enabled (the orange cloud symbol), this can occur.
Note the following paragraph in this support article by Cloudflare:
If the origin server happens to be configured to redirect HTTP requests to HTTPS, server responses back to Cloudflare are encrypted and since Cloudflare is expecting HTTP traffic, it keeps resending the same request, resulting in a redirect loop. This causes browsers to display "The page isn’t redirecting properly" or "ERR_TOO_MANY_REDIRECTS" errors.
So, the Flexible SSL setting is most likely your issue. You can turn it off by going to the crypto page in your Cloudflare control panel and setting the SSL mode to "Full (strict)".
This resolved my issues on an Apache system, but I'm very confident it's the same source of problems with nginx.
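If you need to keep Flexible SSL enabled, a workaround sketch (relying on Cloudflare sending X-Forwarded-Proto with the visitor's original scheme) is to skip the HTTP-to-HTTPS redirect when the visitor already used HTTPS, so Cloudflare's plain-HTTP fetches are not bounced back:
server {
    listen 80 default_server;
    server_name app.example.com;
    location / {
        # only redirect visitors who really arrived over plain HTTP
        if ($http_x_forwarded_proto != "https") {
            return 301 https://$server_name$request_uri;
        }
        proxy_pass http://app;
    }
}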
So I tried to set up a Rails app using Ryan Bates's private_pub gem (a wrapper for the faye gem) to create chat channels. It works great on my local machine in dev mode.
I'm also booting up the private_pub server on port 8080 at the same time my Rails app starts, by including an initializer file with these lines:
Thread.new do
  system("rackup private_pub.ru -s thin -E production -p 8080")
end
However, after deploying to an AWS EC2 Ubuntu instance with the nginx web server and Puma app server, the Chrome console keeps showing this every 2 seconds, and the real-time chat features don't work:
GET http://localhost:8080/faye.js net::ERR_CONNECTION_REFUSED
If I open port 8080 in my AWS security group, I can see the big chunk of JavaScript code in faye.js by curling localhost:8080/faye.js when I ssh into the instance. I can also access it from my browser if I go to http://my.apps.public.ip:8080/faye.js. I can't access it if I remove 8080 from the security group, so I don't think this is a firewall problem.
Also, if I change the address from localhost to 0.0.0.0, or to the public IP of my EC2 instance, the Chrome console error is gone, but the real-time chat is still not working.
I suspect I might have to do more configuration in nginx, because all I have done so far to configure the nginx server is in /etc/nginx/sites-available/default, where I have:
upstream app {
    server unix:/home/deploy/myappname/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
    listen 80;
    server_name localhost;
    root /home/deploy/myappname/public;
    try_files $uri/index.html $uri @app;
    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_pass http://app;
    }
    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }
    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
But maybe this has nothing to do with nginx either; I'm pretty lost. Has anyone experienced this, or can you suggest an alternative solution? I can post any additional config files here if needed.
Solved
First, I took Ian's advice and set server_name to my public IP.
Then, based on the guide at http://www.johng.co.uk/2014/02/18/running-faye-on-port-80/, I added this location block:
location ^~ /faye {
    proxy_pass http://127.0.0.1:9292/faye;
    proxy_redirect off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_cache_bypass $http_pragma $http_authorization;
    proxy_no_cache $http_pragma $http_authorization;
}
Finally, in my private_pub.yml I set the Faye entry point for production:
production:
  server: "http://my.public.ip:80/faye/faye"
  secret_token: "mysecrettoken"
  signature_expiration: 3600 # one hour
And now the chat in my app responds much faster than when I was using the remote standalone chat server I had put on Heroku, because the chat server and my main app are running on the same instance.
I'm experimenting with ActionCable (mostly replicating the DHH example) and trying to get it to run on an Ubuntu server with thin (on port 8443) and nginx. It all works fine locally, but when I try to proxy it on a live server I get this error response: failed: Error during WebSocket handshake: Unexpected response code: 301.
Here are the relevant bits of my nginx configuration:
server {
    listen 80;
    server_name _not_;
    root /home/ubuntu/www/;
}
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
upstream websocket {
    server 127.0.0.1:8443;
}
server {
    listen 80;
    ...
    location /websocket/ {
        proxy_pass http://127.0.0.1:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
    }
    ...
}
I'm sort of out of my league with nginx here; what am I missing or getting wrong?
I came back to this after a month and discovered that the issues were not with the nginx configuration but with thin. I did three things:
(1) Configured thin to use the Faye adapter:
# config.ru
require 'faye'
Faye::WebSocket.load_adapter('thin')
require ::File.expand_path('../config/environment', __FILE__)
use Faye::RackAdapter, mount: '/faye', timeout: 25
run Rails.application
(2) Switched to mounting ActionCable in routes.rb, rather than trying to run it as a standalone.
# routes.rb
MyAwesomeApp::Application.routes.draw do
  ...
  match "/websocket", to: ActionCable.server, via: [:get, :post]
end
(3) Returned to my normal nginx configuration, with the websocket location proxying upstream to thin (as the web server block does):
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
upstream thin {
    server 127.0.0.1:3000;
}
server {
    ...
    location /websocket {
        proxy_pass http://thin;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
    ...
}
So, my apologies, nginx, I'm exonerating you; it seems the issues were primarily related to thin.
Edit: I've added the old nginx configuration I had returned to after mounting ActionCable in the routes. Also worth noting, for those using SSL: config.force_ssl will break the secure wss websocket connection. Instead, you should apply force_ssl at the controller level, as recommended here, and configure nginx to rewrite any HTTP routes to HTTPS.
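A minimal sketch of that controller-level approach (assuming ActionCable is mounted in routes.rb as above, so the cable endpoint never passes through controller filters and its wss handshake is not redirected):
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # redirects regular controller actions to HTTPS; the /websocket
  # endpoint mounted directly in routes.rb is not affected
  force_ssl
end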
This thread was very helpful to me, but I elected to split the AC process into a separate puma instance so I could configure workers, etc. separately. I later added SSL proxying from nginx to ensure the latest ciphers, etc. are used. This avoids rails/puma/AC having to worry about SSL vs non-SSL; everything is non-SSL inside the server instance.
Here's my server section for AC:
server {
    listen 881 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    server_name localhost;
    location / {
        proxy_pass http://cable;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
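The proxy_pass above assumes an upstream named cable defined elsewhere in the config (it is not shown in the answer); a hypothetical definition, with a made-up port for the standalone ActionCable puma process, would look like:
upstream cable {
    # hypothetical port for the separate ActionCable puma instance
    server 127.0.0.1:28080;
}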
Note from this GitHub issue: you need to ensure your ActionCable config allows your domain as a request origin:
# config/initializers/cable.rb
ActionCable.server.config.allowed_request_origins = %w( http://my-domain.com )