I'm experimenting with ActionCable (mostly replicating the DHH example) and trying to get it to run on an Ubuntu server with thin (on port 8443) and nginx. It all works fine locally, but when I try to proxy it on a live server I get this error response: failed: Error during WebSocket handshake: Unexpected response code: 301.
Here are the relevant bits of my nginx configuration:
server {
    listen 80;
    server_name _not_;
    root /home/ubuntu/www/;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server 127.0.0.1:8443;
}
server {
    listen 80;
    ...
    location /websocket/ {
        proxy_pass http://127.0.0.1:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
    }
    ...
}
I'm sort of out of my league with nginx here -- what am I missing or getting wrong?
I came back to this after a month and discovered that the issues were not with the nginx configuration but related to thin. I did three things:
(1) Configured thin to use the Faye adapter:
# config.ru
require 'faye'
Faye::WebSocket.load_adapter('thin')
require ::File.expand_path('../config/environment', __FILE__)
use Faye::RackAdapter, mount: '/faye', timeout: 25
run Rails.application
(2) Switched to mounting ActionCable in routes.rb, rather than trying to run it as a standalone.
# routes.rb
MyAwesomeApp::Application.routes.draw do
  ...
  match "/websocket", to: ActionCable.server, via: [:get, :post]
end
(3) Returned to my normal nginx configuration, with the websocket location proxying upstream to thin (which also serves the rest of the app):
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream thin {
    server 127.0.0.1:3000;
}

server {
    ...
    location /websocket {
        proxy_pass http://thin;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
    ...
}
So, my apologies nginx, I'm exonerating you -- seems the issues were primarily related to thin.
Edit: I've added the old nginx configuration I returned to after mounting ActionCable in the routes. Also worth noting for those using SSL: config.force_ssl will break the secure wss websocket connection. Instead, you should apply force_ssl at the controller level, as recommended here, and configure nginx to redirect any HTTP requests to HTTPS.
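As a rough sketch of that setup (assuming a standard ApplicationController, with my-domain.com as a placeholder, so adjust to your app):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Enforcing SSL here, rather than via config.force_ssl = true, keeps the
  # redirect out of the middleware stack so the /websocket upgrade request
  # isn't bounced and wss:// keeps working.
  force_ssl if: -> { Rails.env.production? }
end

and, on the nginx side, something like:

# Redirect any plain-HTTP request to HTTPS
server {
    listen 80;
    server_name my-domain.com;
    return 301 https://$host$request_uri;
}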
This thread was very helpful to me, but I elected to run the ActionCable (AC) process in a separate puma instance so I could configure its workers, etc. separately. I later added SSL termination in nginx to ensure the latest ciphers, etc. are used. This saves rails/puma/AC from having to worry about SSL vs. non-SSL; everything is non-SSL inside the server instance.
Here's my server section for AC:
server {
    listen 881 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    server_name localhost;

    location / {
        proxy_pass http://cable;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
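The cable upstream referenced by proxy_pass isn't shown in this snippet; it just points at the standalone AC puma instance, along these lines (the port here is an assumption, use whatever your AC puma listens on):

upstream cable {
    server 127.0.0.1:28080;
}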
Note, from this GitHub issue: you need to ensure your AC config allows your domain as a request origin:
# config/initializers/cable.rb
ActionCable.server.config.allowed_request_origins = %w( http://my-domain.com )
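If you serve more than one origin (say both http and https, or www and the apex), you can list several, or use a regexp, since allowed_request_origins accepts strings as well as regular expressions; the domains below are placeholders:

# config/initializers/cable.rb
ActionCable.server.config.allowed_request_origins = [
  'http://my-domain.com',
  'https://my-domain.com',
  %r{\Ahttps?://(www\.)?my-domain\.com\z}
]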
Related
I'm running a dockerized Rails app with puma and nginx; however, I'm getting ERR_TOO_MANY_REDIRECTS when trying to access the application from a browser.
I have config.force_ssl = true in my application.rb, and this is my nginx conf file:
upstream kisoul {
    server rails:3000;
}

server {
    listen 80;
    listen 443 ssl;

    root /usr/share/nginx/kisoul;
    try_files $uri @kisoul;

    location @kisoul {
        proxy_pass_request_headers on;
        proxy_ignore_headers Expires Cache-Control;
        proxy_set_header Host $http_host;
        proxy_pass_header Set-Cookie;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://kisoul;
    }

    ssl_certificate /etc/nginx/fullchain.pem;
    ssl_certificate_key /etc/nginx/privkey.pem;
}
I have already tried disabling force_ssl in Rails and forcing the redirect through nginx, but then I get a problem with the Origin header, saying that the origin (https://localhost) didn't match request.base_url (http://localhost).
I've tried many of the solutions already described here, but none of them worked.
On my VPS I only have one open port, which is why I use proxer, a dockerized nginx reverse proxy where you provide domain names and the local ports traffic should be proxied to.
I have successfully set up a socket.io server with the polling transport (as it uses plain HTTP methods), but I would like to use the websocket transport, and this is where it fails: it tells me it can't establish a wss:// connection to the URL.
This is the nginx reverse proxy config-generation code I am using:
for cfg in $(cat /config); do
    domain=$(echo $cfg | cut -f1 -d=)
    destination=$(echo $cfg | cut -f2 -d=)

    echo ">> Building config for $domain";

    config=$(cat <<EOF
server {
    listen 80;
    listen [::]:80;
    server_name $domain;

    location / {
        proxy_pass $destination;
        proxy_set_header Host \$host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        #proxy_set_header Connection \$connection_upgrade;
        proxy_ssl_name \$host;
        proxy_ssl_server_name on;
        proxy_ssl_verify off;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_session_reuse off;
        proxy_set_header X-Forwarded-For \$remote_addr;
        proxy_set_header X-Forwarded-Proto \$scheme;
        proxy_read_timeout 120;
        proxy_send_timeout 120;
        proxy_connect_timeout 120;
    }
}
EOF
)

    # append the new config to the end of the nginx config file
    echo "$config" >>$out
done
I noticed that this line is commented out and I read that it is needed for wss://:
#proxy_set_header Connection \$connection_upgrade;
Why is it commented out?
Will uncommenting this line affect the plain HTTP proxying?
What changes should I make to allow wss:// on one of the domains?
This tool has pretty much everything that is needed to set up an nginx reverse proxy.
Note that it won't support wss:// out of the box, though, due to this commented line in its config:
#proxy_set_header Connection \$connection_upgrade;
Uncomment it, rebuild, and you have a happy reverse proxy that supports wss:// :)
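One thing to keep in mind (my assumption, since it isn't shown in the generated config above): $connection_upgrade is not a built-in nginx variable, so the uncommented line needs a matching map block in the http context, like the one used earlier in this thread. Inside the heredoc the dollar signs would be escaped (\$) just like the other nginx variables.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}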
What config.action_cable.url should be configured for websockets / Rails / Kubernetes / Minikube with Nginx?
When running docker-compose locally, with an Nginx process in front of a Rails process (not API-only, but with SSR) and a standalone Cable process (cf. the guides), the websockets work fine when I set the following server-side (in, say, config/application.rb, with action_cable_meta_tag set in the layouts):
config.action_cable.url = 'ws://localhost:28080'
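(For reference, that meta tag is just the Action Cable helper in the head of the layout; something like:)
<%= action_cable_meta_tag %>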
I am targeting Kubernetes with Minikube locally: I deployed Nginx in front of a Rails deployment (RAILS_ENV=production) along with a Cable deployment, but I can't make it work. The Cable service is internal, of type "ClusterIP", with "port" and "targetPort". I tried several variations.
Any advice?
Note that I use Nginx -> Rails + Cable on Minikube, and the entry point is the Nginx service (external, of kind LoadBalancer), where I used:
upstream rails {
    server rails-svc:3000;
}

server {
    listen 9000 default_server;

    root /usr/share/nginx/html;
    try_files $uri @rails;

    add_header Cache-Control public;
    add_header Last-Modified "";
    add_header Etag "";

    location @rails {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Host $http_host;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_pass http://rails;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
To allow any origin, I also set:
config.action_cable.disable_request_forgery_protection = true
I have an answer: in a Minikube cluster, don't set anything and disable forgery protection; Rails will default to the correct value. When Nginx is in front of a Rails pod and a standalone ActionCable/websocket pod (the Rails image is launched with bundle exec puma -p 28080 cable/config.ru), and if I name "cable-svc" the service that exposes the ActionCable container and "rails-svc" the one for the Rails container, you need to:
in K8s, don't set the CABLE_URI config
in the Rails backend, since you don't know the URL (an unknown 127.0.0.1:some_port), do:
# config.action_cable.url <-- comment this out
config.action_cable.disable_request_forgery_protection = true
in the Nginx config, add a specific location for the "/cable" path:
upstream rails {
    server rails-svc:3000;
}

server {
    [... root, location @rails { ... }]

    location /cable {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass "http://cable-svc:28080";
    }
}
Check the logs: no more WebSocket requests hitting the Rails pod, and Cable responds.
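For reference, the standalone rackup file launched with bundle exec puma -p 28080 cable/config.ru is typically along the lines of the one in the Rails guides (a sketch):

# cable/config.ru
require_relative '../config/environment'
Rails.application.eager_load!

run ActionCable.server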
So I tried to set up a Rails app using Ryan Bates's private_pub gem (a wrapper for the faye gem) to create chat channels. It works great on my local machine in dev mode.
I'm also booting up the private_pub server on port 8080 at the same time my Rails app starts, by including an initializer file with the lines:
Thread.new do
  system("rackup private_pub.ru -s thin -E production -p 8080")
end
However, after deploying to an AWS EC2 Ubuntu instance with the nginx webserver and puma app server, the Chrome console keeps showing this every 2 seconds, and the real-time chat features don't work:
GET http://localhost:8080/faye.js net::ERR_CONNECTION_REFUSED
If I open port 8080 in my AWS security group, I can see the big chunk of JavaScript code in faye.js using curl against localhost:8080/faye.js when I ssh into the instance. I can also access it from my browser if I go to http://my.apps.public.ip:8080/faye.js. I can't access it if I remove 8080 from the security group, so I don't think this is a firewall problem.
Also, if I change the address from localhost to 0.0.0.0 or the public IP of my EC2 instance, the Chrome console error goes away, but the real-time chat still doesn't work.
I suspect I might have to configure nginx further, because all I have done so far to configure the nginx server is in /etc/nginx/sites-available/default, where I have:
upstream app {
    server unix:/home/deploy/myappname/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;

    root /home/deploy/myappname/public;
    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_pass http://app;
    }

    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
But maybe this has nothing to do with nginx either; I'm pretty lost. Has anyone experienced this, or could you suggest an alternative solution? I can post any additional config files here if needed.
Solved
First, I took Ian's advice and set server_name to my public IP.
Then, based on the guide at http://www.johng.co.uk/2014/02/18/running-faye-on-port-80/, I added this location block:
location ^~ /faye {
    proxy_pass http://127.0.0.1:9292/faye;
    proxy_redirect off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_cache_bypass $http_pragma $http_authorization;
    proxy_no_cache $http_pragma $http_authorization;
}
Finally, in my private_pub.yml I set the Faye entry point for production:
production:
  server: "http://my.public.ip:80/faye/faye"
  secret_token: "mysecrettoken"
  signature_expiration: 3600 # one hour
And now the chat in my app responds much faster than when I was using the remote standalone chat server I had put on Heroku, because both the chat server and my main app are running on the same instance.
My SSL requests come through a load balancer that performs SSL termination and sends the decrypted traffic to my server on port 81.
I'm running Nginx with a listener on port 81, and I would like to tell my Rails app that this is an SSL request.
Here is what my Nginx config looks like:
server {
    listen 81;
    server_name SERVER;

    root /var/www/app/current/public;

    error_log /var/log/nginx/app-error.log;
    access_log /var/log/nginx/app-access.log;

    passenger_enabled on;
    client_max_body_size 100M;

    location / {
        passenger_enabled on;
        proxy_set_header X-FORWARDED-PROTO https;
    }
}
The request goes through, but Ruby on Rails doesn't see the header in request.headers['X-Forwarded-Proto'], so it treats the request as non-HTTPS.
What should I add to get Rails to treat this as an SSL request?
proxy_set_header is not used by Passenger; you need to use passenger_set_cgi_param instead:
passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
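In the config above, that means dropping the proxy_set_header line and putting the Passenger directive alongside passenger_enabled; roughly (a sketch based on the server block from the question):

server {
    listen 81;
    server_name SERVER;
    root /var/www/app/current/public;
    passenger_enabled on;

    # Passenger ignores proxy_set_header; this CGI variable is what Rails
    # actually reads back as the X-Forwarded-Proto header.
    passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
}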
The nginx directive passenger_set_cgi_param has since been replaced by passenger_env_var:
https://www.phusionpassenger.com/documentation/Users%20guide%20Nginx.html#PassengerEnvVar
For newer Passenger versions the directive should be:
passenger_env_var HTTP_X_FORWARDED_PROTO https;