I'm trying to connect to an ActionCable WebSocket, and everything works fine running locally with just Puma and no nginx.
However, when I try the exact same thing in my staging environment, the connection closes immediately after connecting. I do receive the downstream welcome message and maybe a ping.
The connection then closes abruptly without any of the onClose callbacks firing, so my guess is that nginx is not letting the connection persist.
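For reference, all nginx fundamentally needs to let a WebSocket through is an HTTP/1.1 proxy with the Upgrade handshake forwarded; a generic sketch (the upstream name app is a placeholder):

location /cable {
    proxy_pass http://app;                    # placeholder upstream
    proxy_http_version 1.1;                   # the Upgrade mechanism requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;   # forward the client's Upgrade header
    proxy_set_header Connection "Upgrade";    # mark this hop as upgradeable
}

My config below already has these directives on /cable, which is what makes the immediate disconnects confusing.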
Here is my site's actual nginx configuration:
upstream app {
    # Path to Puma SOCK file, as defined previously
    server unix:/home/deploy/my-app/shared/tmp/sockets/puma.sock fail_timeout=60;
    keepalive 60;
}

server {
    listen 80;
    server_name localhost;

    # websocket_pass websocket;
    root /home/deploy/my-app/current/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    #location / {
    #    proxy_set_header X-Forwarded-Proto $scheme;
    #    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #    proxy_set_header X-Real-IP $remote_addr;
    #    proxy_set_header Host $host;
    #    proxy_redirect off;
    #    proxy_http_version 1.1;
    #    proxy_set_header Connection '';
    #    proxy_pass http://app;
    #}

    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location /cable {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
I also found this error in the nginx error log:
2019/02/11 21:08:35 [error] 10233#10233: *2 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: x.x.x.x, server: localhost, request: "GET /cable/ HTTP/1.1", upstream: "http://unix:/home/deploy/wr-api/shared/tmp/sockets/puma.sock:/cable/", host: "x.x.x.x"
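One nginx knob that often comes up with dropped cables is the read timeout: by default nginx closes a proxied connection after 60 seconds without traffic (proxy_read_timeout), so /cable blocks are frequently given a longer one, along these lines (value illustrative):

location /cable {
    proxy_pass http://app;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 180;   # default is 60s of idle time
}

It can't be the whole story here, though: the connection dies immediately, not after 60 seconds.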
That "recv() failed (104: Connection reset by peer) while proxying upgraded connection" error suggests the upstream (Puma) side dropped the already-upgraded socket, not nginx. So after some time, we noticed that the staging environment cable.yml had this value for url:
url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
Removing that url and every other key, leaving only the adapter, fixed it for us. Presumably the REDIS_URL environment variable on staging pointed at a Redis the app couldn't actually use, so each WebSocket dropped as soon as ActionCable tried to subscribe; with the url key removed, the adapter falls back to its local defaults.
New cable.yml staging config:
staging:
  adapter: redis
Working!
Related
I'm not able to upload files larger than 4.7 GB; anything bigger fails with a 404 Not Found.
It works just fine in development without nginx, so I assumed nginx was the source of the problem.
Nginx config:
upstream bench {
    server 127.0.0.1:3001;
}

server {
    server_name servername;
    proxy_http_version 1.1;
    root /path/to/server;

    proxy_max_temp_file_size 10024m;
    client_body_in_file_only on;
    client_body_buffer_size 1M;
    client_max_body_size 100G;

    location / {
        try_files $uri @bench;
    }

    location /cable {
        proxy_http_version 1.1;
        proxy_pass http://bench/cable;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location @bench {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://bench;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_buffering off;
        proxy_request_buffering off;
    }

    error_page 500 502 503 504 /500.html;
    keepalive_timeout 10;
}
routes.rb
Rails.application.routes.draw do
  resources :benches, :path => "benchmarks"
  root 'benches#index'
end
The problem wasn't nginx. The service URL that Active Storage creates when you start the upload is only valid for 5 minutes by default, so if your file takes more than 5 minutes to upload, you're out of luck. Raising the expiry fixed it:
config.active_storage.service_urls_expire_in = 1.hour
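While nginx was innocent here, its own limits are worth double-checking when tuning for very large uploads; a sketch of the directives that usually matter (values purely illustrative):

client_max_body_size   100G;   # cap on the request body; exceeding it returns 413
client_body_timeout    300s;   # allowed gap between two successive body reads (default 60s)
proxy_request_buffering off;   # stream the body to the app instead of spooling it to disk first

These go in the http, server, or location context as appropriate.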
I'm creating a realtime chat app with Rails (API mode) and React, using the following technologies:
1. Rails 5.2.1
2. Puma 3.12.0
3. Ruby 2.4.6
4. nginx 1.12.2
5. Redis 4.0.10
6. AWS ElastiCache
frontend/index.js
import ActionCable from 'actioncable'
ActionCable.createConsumer(`wss://myapp.com:3000/cable?authorization=${tokens['Authorization']}&firebaseUID=${tokens['FirebaseUID']}`)
config/environments/production.rb
config.action_cable.url = 'wss://myapp.com/cable:3000'
config.action_cable.allowed_request_origins = [ 'http://myapp.com', /http:\/\/myapp.*/, 'https://myapp.com', /https:\/\/myapp.*/ ]
nginx.conf
upstream puma {
    server unix:///srv/myapp/tmp/sockets/puma.sock;
}

location / {
    try_files $uri $uri/index.html $uri.html @webapp;
}

location @webapp {
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://puma;
}

location /cable {
    proxy_pass http://puma/cable;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
config/cable.yml
production:
  adapter: redis
  url: ENV['ELASTIC_CACHE_URL'] # <= endpoint of ElastiCache (Redis 4.0)
But I'm getting the following error.
main.js:89513 WebSocket connection to 'wss://myapp.com:3000/cable' failed: WebSocket is closed before the connection is established
Can somebody give me advice?
I'm trying to deploy my Rails app, which contains ActionCable, with Puma and nginx.
This is what I got in the browser:
WebSocket connection to 'ws://sub.domain.com/cable' failed: Error during WebSocket handshake: net::ERR_CONNECTION_RESET
Then when I checked the nginx error log I got this:
This is my nginx config file
upstream puma {
    server unix:///home/deploy/apps/app_repo/shared/tmp/sockets/app_repo-puma.sock;
}

server {
    listen 80 default deferred;
    server_name sub.domain.com;

    root /home/deploy/apps/app_repo/current/public;
    access_log /home/deploy/apps/app_repo/current/log/nginx.access.log;
    error_log /home/deploy/apps/app_repo/current/log/nginx.error.log info;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    # enables WS support
    location /cable {
        proxy_pass http://puma;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 180;
    }

    try_files $uri/index.html $uri @puma;

    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 180;
        proxy_pass http://puma;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 20M;
    keepalive_timeout 180;
}
My puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
port ENV.fetch("PORT") { 3000 }
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
plugin :tmp_restart
daemonize true
pidfile 'home/deploy/apps/app_repo/shared/tmp/pids/puma.pid'
So how can I fix this and make ActionCable work properly on the production server?
Thank you so much.
I'm trying to follow this article:
https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
with a fresh Amazon Linux EC2 instance. I'm using the out-of-the-box /etc/nginx/nginx.conf file, and added my config file at /etc/nginx/sites-available/default.
Puma seems to be running fine:
/home/ec2-user/flviewer/shared/log/puma_error.log: [8006] * Listening on unix:///home/ec2user/flviewer/shared/sockets/tmp/puma.sock
But this shows up in /var/log/nginx/error.log:
2016/12/12 05:33:00 [error] 11018#0: *1 open() "/usr/share/nginx/html/flviewer" failed (2: No such file or directory), client: 173.73.119.219, server: localhost, request: "GET /flviewer HTTP/1.1", host: "54.86.222.53"
Why the heck is it looking in '/usr/share/nginx/html/flviewer' when it should be proxying to the socket I opened?
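(For context: stock Amazon Linux ships a catch-all server in nginx.conf roughly like the sketch below, paraphrased; if your own server block never gets loaded, this one answers every request and serves files out of /usr/share/nginx/html.)

server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;   # the path from the error above
}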
Here is my config as dumped by nginx -T:
# configuration file /etc/nginx/sites-available/default:
upstream app {
    # Path to Puma SOCK file, as defined previously
    server unix:/home/ec2-user/flviewer/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;

    root /home/ec2-user/flviewer/current/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        #proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_pass http://app;
        #autoindex on;
    }

    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
Nothing worked. I stripped /etc/nginx/nginx.conf down to just this, and am up and running. I had to throw away all of the boilerplate that was in nginx.conf. This works:
config file:
# Run nginx as a normal console program, not as a daemon
daemon off;
user ec2-user;

# Log errors to stdout
error_log /dev/stdout info;

events {} # Boilerplate

http {
    # Print the access log to stdout
    access_log /dev/stdout;

    # Tell nginx there's an upstream called app living at our socket
    upstream app {
        server unix:/home/ec2-user/flv/shared/tmp/sockets/puma.sock fail_timeout=0;
    }

    server {
        # Accept connections on port 80
        listen 80;
        server_name localhost;

        # Application root
        root /home/ec2-user/flv/shared/public;

        # If a path doesn't exist on disk, forward the request to @app
        try_files $uri/index.html $uri @app;

        # Set some configuration options on requests forwarded to @app
        location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app;
        }

        location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
        }

        error_page 500 502 503 504 /500.html;
        client_max_body_size 4G;
        keepalive_timeout 10;
    }
}
I think it has to do with using the default nginx config file. Try moving /etc/nginx/sites-available/default to /etc/nginx/sites-enabled/flviewer.
$ mv /etc/nginx/sites-available/default /etc/nginx/sites-enabled/flviewer
Then reload or restart nginx.
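Note that sites-enabled only works if the main nginx.conf actually includes that directory; stock Amazon Linux configs typically include conf.d/*.conf instead, so you may need to add the include yourself inside the http block (a sketch, path assumed):

http {
    # ... existing boilerplate ...
    include /etc/nginx/sites-enabled/*;
}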
Trying to deploy a super simple Rails 5 application to AWS using Elastic Beanstalk.
Everything works fine except ActionCable: the browser can't connect to the cable server.
WebSocket connection to 'ws://prod.3x52xijcqx.us-west-1.elasticbeanstalk.com/cable' failed: Error during WebSocket handshake: Unexpected response code: 404
I found similar questions on Stack Overflow, but all of them pointed to configuring nginx using this template; that didn't help me, however.
I customized the nginx config using this script, which I've put inside .ebextensions:
files:
  "/etc/nginx/conf.d/websockets.conf":
    content: |
      upstream backend {
        server unix:///var/run/puma/my_app.sock;
      }

      server_names_hash_bucket_size 128;

      server {
        listen 80;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        server_name prod.3x52xijcqx.us-west-1.elasticbeanstalk.com;

        # prevents 502 bad gateway error
        large_client_header_buffers 8 32k;

        location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_set_header X-NginX-Proxy true;

          # prevents 502 bad gateway error
          proxy_buffers 8 32k;
          proxy_buffer_size 64k;

          proxy_pass http://backend;
          proxy_redirect off;

          location /assets {
            root /var/app/current/public;
          }

          # enables WS support
          location /cable {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
          }
        }
      }

container_commands:
  01_restart_nginx:
    command: "service nginx restart"
Full app source is here