What should config.action_cable.url be set to for WebSockets with Rails on Kubernetes / Minikube behind Nginx?
When running docker-compose locally, with an Nginx process in front of a Rails process (not API-only, but with SSR) and a standalone Cable process (cf. the guides), the WebSockets work fine by setting the following server-side (in, say, config/application.rb, with action_cable_meta_tag in the layouts):
config.action_cable.url = 'ws://localhost:28080'
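For reference, the local docker-compose layout behind this looks roughly like the sketch below (service and image names are assumptions; the detail that matters is that the Cable container publishes port 28080 to the host, which is what makes ws://localhost:28080 reachable from the browser):
# docker-compose.yml -- minimal sketch; names, images and the nginx port are assumptions
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"               # assumed public entry point
  rails:
    build: .
    command: bundle exec puma -p 3000
  cable:
    build: .
    command: bundle exec puma -p 28080 cable/config.ru
    ports:
      - "28080:28080"         # published so the browser can reach ws://localhost:28080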
I am now targeting Kubernetes with Minikube locally: I deployed Nginx in front of a Rails deployment (RAILS_ENV=production) along with a Cable deployment, but I can't make it work. The Cable service is internal, of type "ClusterIP", with "port" and "targetPort" set. I have tried several variations.
Any advice?
Note that on Minikube I use Nginx -> Rails + Cable, and the entry point is the Nginx service, which is external, of kind LoadBalancer, and for which I used:
upstream rails {
server rails-svc:3000;
}
server {
listen 9000 default_server;
root /usr/share/nginx/html;
try_files $uri @rails;
add_header Cache-Control public;
add_header Last-Modified "";
add_header Etag "";
location @rails {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header Host $http_host;
proxy_pass_header Set-Cookie;
proxy_redirect off;
proxy_pass http://rails;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
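For context, the external entry point in front of these Nginx pods is a Service of kind LoadBalancer, along these lines (a sketch; the name, labels and external port are assumptions, and targetPort 9000 matches the listen directive above):
# nginx-svc (sketch) -- the external LoadBalancer Service in front of Nginx
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc             # assumed name
spec:
  type: LoadBalancer
  selector:
    app: nginx                # assumed pod label
  ports:
    - port: 80                # assumed external port
      targetPort: 9000        # matches "listen 9000" above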
To allow any origin, I also set:
config.action_cable.disable_request_forgery_protection = true
I have an answer: in a Minikube cluster, don't set anything and disable forgery protection; Rails will default to the correct value. With Nginx in front of a Rails pod and a standalone ActionCable/WebSocket pod (the Rails image is launched with bundle exec puma -p 28080 cable/config.ru), and with "cable-svc" naming the service that exposes the ActionCable container and "rails-svc" the one for the Rails container, you need to:
in Kubernetes, don't set the config for CABLE_URI;
in the Rails backend, where you don't know the URL (it would be some unknown 127.0.0.1:some_port), do:
# config.action_cable.url <-- comment this
config.action_cable.disable_request_forgery_protection = true
in the Nginx config, add a specific location for the "/cable" path:
upstream rails {
server rails-svc:3000;
}
server {
[... root, location @rails { ... }]
location /cable {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass "http://cable-svc:28080";
}
}
Check the logs: no more WebSocket requests hit the Rails pod, and Cable responds.
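For completeness, the cable-svc that the proxy_pass points at is just a plain ClusterIP Service in front of the ActionCable pods, roughly like this sketch (the selector and labels are assumptions; 28080 matches the puma -p 28080 cable/config.ru command):
# cable-svc (sketch) -- internal Service for the ActionCable pods
apiVersion: v1
kind: Service
metadata:
  name: cable-svc
spec:
  type: ClusterIP
  selector:
    app: cable                # assumed pod label
  ports:
    - port: 28080             # what Nginx proxies to (cable-svc:28080)
      targetPort: 28080       # puma serves cable/config.ru on this port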
Related
I'm deploying some services using Docker, with docker-compose and an nginx container to proxy to my domain.
I have already deployed the frontend app, and it is accessible from the web. In theory, I only need to expose the frontend/nginx port to the web, without exposing the rest of the services, but I haven't been able to get that working so far.
For example: Client -> login request -> frontend <-> (local) backend GET request.
Right now I'm getting connection refused, and the GET is pointing to http://localhost rather than to the service name defined in docker-compose. (All containers are deployed on the same network, one that I created.)
What do I need to do to configure this?
Here is my nginx config so far:
server {
listen 80 default_server;
listen 443 ssl;
server_name mydomain;
ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;
location / {
proxy_pass http://frontend:3000/;
}
location /auth {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://auth:3003;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
# Same for the other services
(...)
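For context, the compose layout being described is roughly the sketch below (the frontend and auth service names come from the proxy_pass directives above; images, ports and the network name are assumptions). Only nginx publishes ports to the host; the other containers are reachable from nginx by service name because they share the user-defined network:
# docker-compose.yml (sketch) -- everything except the service names is assumed
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"             # only the proxy is exposed to the web
    networks: [appnet]
  frontend:
    build: ./frontend         # reachable from nginx as http://frontend:3000
    networks: [appnet]
  auth:
    build: ./auth             # reachable from nginx as http://auth:3003
    networks: [appnet]
networks:
  appnet:                     # the user-defined network mentioned above
    driver: bridge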
EDIT:
Should I create a location block for every GET and POST endpoint that my services expose?
Like this:
location /getUser {
proxy_pass http://auth:3003;
}
I'm setting up a web app with a React frontend that I want to expose and some local backend modules, all deployed with docker-compose. Everything works fine on localhost.
Now I need to use nginx to proxy the requests to my purchased domain, which is managed through Cloudflare, and later to switch to https. In Cloudflare, the SSL/TLS encryption mode is Off, for testing with http. Everything has been set up in Cloudflare according to some docs that I read. Unfortunately, my nginx configuration only works for localhost, which prevents me from moving on to an https configuration (I get a 522 error when loading the page through my domain).
This is my nginx config file:
server {
listen 80;
server_name mydomain;
root /usr/share/nginx/html;
index index.html index.htm;
location / {
try_files $uri $uri/ /index.html;
}
location /statsmodule {
proxy_pass http://statsmodule:3020;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
}
location /auth {
proxy_pass http://auth:3003;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
}
(...)
# the other location blocks for the other services are similar to this
}
What am I missing?
I'm having trouble getting ActionCable hooked up in my production environment, and related questions haven't yielded a working solution. I'm using an nginx + Puma setup with Rails 6.1.3.2 on Ubuntu 20.04. I have confirmed that redis-server is running on port 6379 and that Rails is running in production mode.
Here's what I'm getting in my logs:
I, [2021-05-25T22:47:25.335711 #72559] INFO -- : [5d1a85f7-0102-4d25-bd4e-d81355b846ee] Started GET "/cable" for 74.111.15.223 at 2021-05-25 22:47:25 +0000
I, [2021-05-25T22:47:25.336283 #72559] INFO -- : [5d1a85f7-0102-4d25-bd4e-d81355b846ee] Started GET "/cable/"[non-WebSocket] for 74.111.15.223 at 2021-05-25 22:47:25 +0000
E, [2021-05-25T22:47:25.336344 #72559] ERROR -- : [5d1a85f7-0102-4d25-bd4e-d81355b846ee] Failed to upgrade to WebSocket (REQUEST_METHOD: GET, HTTP_CONNECTION: close, HTTP_UPGRADE: )
I, [2021-05-25T22:47:25.336377 #72559] INFO -- : [5d1a85f7-0102-4d25-bd4e-d81355b846ee] Finished "/cable/"[non-WebSocket] for 74.111.15.223 at 2021-05-25 22:47:25 +0000
This happens every few seconds, and matching output shows up in the browser console.
For one, I'm pretty sure that I need to add some sections to my nginx site config, such as a /cable section, but I haven't figured out the correct settings. Here's my current config:
server {
root /home/rails/myapp/current/public;
server_name myapp.com;
index index.htm index.html;
location ~ /.well-known {
allow all;
}
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# needed to allow serving of assets and other public files
location ~ ^/(assets|packs|graphs)/ {
gzip_static on;
expires 1y;
add_header Cache-Control public;
add_header Last-Modified "";
add_header ETag "";
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = myapp.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name myapp.com;
return 404; # managed by Certbot
}
Here's my config/cable.yml:
development:
  adapter: async
test:
  adapter: test
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: myapp_production
In config/environments/production.rb, I've left these lines commented out:
# Mount Action Cable outside main process or domain.
# config.action_cable.mount_path = nil
# config.action_cable.url = 'wss://example.com/cable'
# config.action_cable.allowed_request_origins = [ 'http://example.com', /http:\/\/example.*/ ]
I have not mounted ActionCable manually in my routes or anything; this is about as standard a setup as you can get. I'm thinking the answer lies mainly in a correct nginx configuration, but I don't know what it should be. Perhaps some Rails config settings are needed too, though. I don't remember having to change any when deploying with Passenger, but maybe Puma is a different story.
Update
I also noticed that a lot of the proposed solutions in other questions, like this one, seem to reference a .sock file in tmp/sockets/. My sockets/ directory is empty, though. Apart from ActionCable, the web server is running fine.
Update #2
I also noticed that changing config.action_cable.url to something like ws://myapp.com instead of wss://myapp.com has no effect even after restarting Rails; the browser console errors still say it's trying to connect to wss://myapp.com. That's possibly due to how I force-redirect HTTP to HTTPS. I wonder if that has anything to do with it?
I got it working. Here are the settings I needed:
nginx config
The server block from the config in my question must be modified to include the following two location sections, for / and /cable:
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /cable {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade "websocket";
proxy_set_header Connection "Upgrade";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
config/environments/production.rb
# Change myapp.com to your app's location
config.action_cable.allowed_request_origins = [ 'https://myapp.com' ]
config.hosts << "myapp.com"
config.hosts << "localhost"
Huge thanks to Lam Phan in the comments for helping me out.
So I tried to set up a Rails app using Ryan Bates's private_pub gem (a wrapper for the faye gem) to create chat channels. It works great on my local machine in dev mode.
I'm also booting up the private_pub server on port 8080 at the same time my Rails app starts, by including an initializer file with these lines:
# Boot the private_pub (Faye) rack app in a background thread alongside Rails
Thread.new do
  system("rackup private_pub.ru -s thin -E production -p 8080")
end
However, after deploying to an AWS EC2 Ubuntu instance with the nginx web server and Puma app server, the Chrome console keeps showing this every 2 seconds, and the real-time chat features don't work.
GET http://localhost:8080/faye.js net::ERR_CONNECTION_REFUSED
If I open port 8080 in my AWS security group, I can see the big chunk of JavaScript in faye.js with curl against localhost:8080/faye.js when I SSH into the instance. I can also access it from my browser at http://my.apps.public.ip:8080/faye.js. I can't access it if I remove 8080 from the security group, so I don't think this is a firewall problem.
Also, if I change the address from localhost to 0.0.0.0 or to my EC2 instance's public IP, the Chrome console error goes away, but the real-time chat still doesn't work.
I suspect I might have to configure nginx further, because all I have done so far to configure it is in /etc/nginx/sites-available/default, where I have:
upstream app {
server unix:/home/deploy/myappname/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/deploy/myappname/public;
try_files $uri/index.html $uri @app;
location @app {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_pass http://app;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
But maybe this has nothing to do with nginx either; I'm pretty lost. Has anyone experienced this, or could you suggest an alternative solution? I can post any additional config files here if needed.
Solved
First, I took Ian's advice and set server_name to my public IP.
Then, based on the guide at http://www.johng.co.uk/2014/02/18/running-faye-on-port-80/, I added this location block:
location ^~ /faye {
proxy_pass http://127.0.0.1:9292/faye;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache_bypass $http_pragma $http_authorization;
proxy_no_cache $http_pragma $http_authorization;
}
Finally, in my private_pub.yml I set the Faye entry point for production:
production:
  server: "http://my.public.ip:80/faye/faye"
  secret_token: "mysecrettoken"
  signature_expiration: 3600 # one hour
And now the chat in my app responds much faster than when I was using the remote standalone chat server I had put up on Heroku, because the chat server and my main app are running on the same instance.
I'm trying to set up WebSockets on my Rails application. The application works with an iOS client that uses the SocketRocket library.
As the WebSocket backend I use the faye-rails gem.
It is integrated into the Rails app as Rack middleware:
config.middleware.delete Rack::Lock
config.middleware.use FayeRails::Middleware, mount: '/ws', server: 'passenger', engine: {type: Faye::Redis, uri: redis_uri}, :timeout => 25 do
map default: :block
end
It works perfectly until I upload it to the production server with Nginx. I have tried a lot of solutions for passing WebSocket requests to the backend, but with no luck. The main catch is that those setups assume two servers running, whereas I have just one. My idea was that I just needed to proxy requests from the /faye endpoint to /ws (to update the headers). What should the correct proxy_pass parameters be in my case?
location /faye {
proxy_pass http://$server_name/ws;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
I had a similar problem, and after struggling for a while I finally got it to work.
I'm using nginx 1.8 with the Thin server and the faye-rails gem, and my mount point is /faye.
My nginx config looked like this:
upstream thin_server {
server 127.0.0.1:3000;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
...
proxy_redirect off;
proxy_cache off;
location = /faye {
proxy_pass http://thin_server;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
}
location / {
try_files $uri/index.html $uri.html $uri @proxy;
}
location @proxy {
proxy_pass http://thin_server;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
...
}
The final turning point for me was setting "location = /faye". Before that I tried "location /faye" and "location ~ /faye", and both failed.
It looks like the equals sign "=" makes it an exact-match location, which keeps nginx from mixing it up with the other location blocks.