WebSocket connection to 'ws://my-ec2/cable' failed:
Error during WebSocket handshake: Unexpected response code: 404
I know this (old) issue comes up a lot here, so this may look like a duplicate, but in my case I have tried very hard to fix the error and still can't.
I also followed this suggested fix: https://stackoverflow.com/a/55715218/8478892, but with no success.
My nginx.conf:
upstream puma {
  server unix:///home/ubuntu/apps/my_app/shared/tmp/sockets/my_app-puma.sock;
}

server {
  listen 80 default_server deferred;
  # If you're planning on using SSL (which you should), you can also go ahead and fill out the following server_name variable:
  # server_name example.com;

  # Don't forget to update these, too
  root /home/ubuntu/apps/my_app/current/public;
  access_log /var/log/nginx/nginx.access.log;
  error_log /var/log/nginx/nginx.error.log info;

  location ^~ /assets/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
  }

  try_files $uri/index.html $uri @puma;

  location @puma {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://puma;
  }

  location /cable {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_request_headers on;
    proxy_buffering off;
    proxy_redirect off;
    break;
  }

  error_page 500 502 503 504 /500.html;
  client_max_body_size 10M;
  keepalive_timeout 10;
}
And in my cable.yml:
production:
  url: redis://http://my-ec2.com:6379

local: &local
  url: redis://localhost:6379

development: *local
test: *local
And my environments/production.rb:
Rails.application.configure do
  config.cache_classes = true
  config.eager_load = true
  config.consider_all_requests_local = false
  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
  config.active_storage.service = :local

  config.action_cable.mount_path = '/cable'
  config.action_cable.url = 'ws://my-ec2/cable'
  config.action_cable.allow_same_origin_as_host = true
  config.action_cable.allowed_request_origins = ["*"]

  config.log_level = :debug
  config.log_tags = [ :request_id ]
  config.action_mailer.perform_caching = false
  config.i18n.fallbacks = true
  config.active_support.deprecation = :notify
  config.log_formatter = ::Logger::Formatter.new

  if ENV["RAILS_LOG_TO_STDOUT"].present?
    logger = ActiveSupport::Logger.new(STDOUT)
    logger.formatter = config.log_formatter
    config.logger = ActiveSupport::TaggedLogging.new(logger)
  end

  config.active_record.dump_schema_after_migration = false
end
Thought of the day for those who are having this problem:
Setting up ActionCable on localhost was already a good fight, but setting it up in production is an entire war.
Do you have Redis installed on your machine, or is it running in a Docker container? I think you are using it with Sidekiq; do you have a Sidekiq/Redis initializer file under /config/initializers?
In cable.yml it should be:
production:
  url: redis://redis:6379/0
After a few days, I managed to solve this problem myself.
The main cause of my error was that in my environments/production.rb file I was pointing the ActionCable endpoint at the public IP of my EC2 instance, when it should actually be localhost. I ended up using the same configuration I have in development.rb:
production.rb before:
...
config.action_cable.mount_path = '/cable'
config.action_cable.url = 'ws://my_ec2/cable'
config.action_cable.allow_same_origin_as_host = true
config.action_cable.allowed_request_origins = ["*"]
...
production.rb after:
...
config.action_cable.disable_request_forgery_protection = true
config.action_cable.url = "ws://localhost:3000/cable"
config.action_cable.allowed_request_origins = [/http:\/\/*/, /https:\/\/*/]
# note: this second assignment overwrites the line above; keep only the origins setting you actually need
config.action_cable.allowed_request_origins = /(\.dev$)|^localhost$/
...
Related
I have a situation where I need your expertise, please. I have created an upstream in the http block of the Kubernetes ingress controller's nginx.conf, similar to the following:
upstream newbackend {
  server foo.com:34111;
  server boo.com:34111;
  keepalive 320;
  keepalive_timeout 60s;
  keepalive_requests 10000;
}
Now I need to change proxy_pass http://upstream_balancer; to point to my new upstream, i.e. proxy_pass http://newbackend;, and I need to do this in the following location block. By the way, if I am not mistaken, this location comes by default in the nginx.conf file. Why do I need to do this? Because I want any URI whose path is not found to be sent to my newbackend upstream.
location ~* "^/" {
}
I tried putting an nginx.ingress.kubernetes.io/configuration-snippet annotation into my Ingress YAML file, similar to the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass http://newbackend;
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-upstream
  namespace: mytest-ns
spec:
  ingressClassName: myingressclassname
  rules:
  - http:
      paths:
      - backend:
          service:
            name: dummy
            port:
              number: 8080
        path: /
        pathType: ImplementationSpecific
but after applying the above with kubectl apply -f myingress.yaml I get the following error:
410#410: "proxy_pass" directive is duplicate in /tmp/nginx-cfg2480164747:777
nginx: [emerg] "proxy_pass" directive is duplicate in /tmp/nginx-cfg2480164747:777
nginx: configuration file /tmp/nginx-cfg2480164747 test failed
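If I understand correctly, the error happens because nginx allows only one proxy_pass per location, and the configuration-snippet annotation injects its contents into a location block that the controller has already filled with its own proxy_pass. Roughly, and purely as a hypothetical illustration (directive order is not taken from the controller's template), the generated block ends up like:
location ~* "^/" {
  # ... directives generated by the ingress controller ...
  proxy_pass http://upstream_balancer;  # emitted by the controller's own template
  proxy_pass http://newbackend;         # injected from the configuration-snippet annotation
  # two proxy_pass directives in one location -> "proxy_pass" directive is duplicate
}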
Now my question is: how can I change the proxy_pass value of the following location block, which already points to proxy_pass http://upstream_balancer;? (Please bear in mind that this must be done through Kubernetes/ingress-nginx, so I cannot modify the nginx.conf manually.)
location ~* "^/" {
  set $namespace "";
  set $ingress_name "";
  set $service_name "";
  set $service_port "";
  set $location_path "";
  set $global_rate_limit_exceeding n;

  rewrite_by_lua_block {
    lua_ingress.rewrite({
      force_ssl_redirect = false,
      ssl_redirect = false,
      force_no_ssl_redirect = false,
      preserve_trailing_slash = false,
      use_port_in_redirects = false,
      global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
    })
    balancer.rewrite()
    plugins.run()
  }

  # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
  # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
  # other authentication method such as basic auth or external auth useless - all requests will be allowed.
  #access_by_lua_block {
  #}

  header_filter_by_lua_block {
    lua_ingress.header()
    plugins.run()
  }

  body_filter_by_lua_block {
    plugins.run()
  }

  log_by_lua_block {
    balancer.log()
    monitor.call()
    plugins.run()
  }

  access_log off;
  port_in_redirect off;

  set $balancer_ewma_score -1;
  set $proxy_upstream_name "upstream-default-backend";
  set $proxy_host $proxy_upstream_name;
  set $pass_access_scheme $scheme;
  set $pass_server_port $server_port;
  set $best_http_host $http_host;
  set $pass_port $pass_server_port;
  set $proxy_alternative_upstream_name "";

  client_max_body_size 1m;

  proxy_set_header Host $best_http_host;

  # Pass the extracted client certificate to the backend

  # Allow websocket connections
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;

  proxy_set_header X-Request-ID $req_id;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $remote_addr;
  proxy_set_header X-Forwarded-Host $best_http_host;
  proxy_set_header X-Forwarded-Port $pass_port;
  proxy_set_header X-Forwarded-Proto $pass_access_scheme;
  proxy_set_header X-Forwarded-Scheme $pass_access_scheme;
  proxy_set_header X-Scheme $pass_access_scheme;

  # Pass the original X-Forwarded-For
  proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

  # mitigate HTTPoxy Vulnerability
  # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
  proxy_set_header Proxy "";

  # Custom headers to proxied server
  proxy_connect_timeout 5s;
  proxy_send_timeout 60s;
  proxy_read_timeout 60s;

  proxy_buffering off;
  proxy_buffer_size 4k;
  proxy_buffers 4 4k;
  proxy_max_temp_file_size 1024m;
  proxy_request_buffering on;
  proxy_http_version 1.1;

  proxy_cookie_domain off;
  proxy_cookie_path off;

  # In case of errors try the next upstream server before returning an error
  proxy_next_upstream error timeout;
  proxy_next_upstream_timeout 0;
  proxy_next_upstream_tries 3;

  proxy_pass http://upstream_balancer;
  proxy_redirect off;
}
Thank you very much in advance.
I am trying to configure a server with Rails 5, nginx, and Puma. The application is running fine, but ActionCable is giving:
WebSocket connection to 'ws://server_name.com/cable' failed:
Error during WebSocket handshake: Unexpected response code: 200
Below are my nginx settings:
upstream app {
  server unix:/tmp/app.sock fail_timeout=0;
}

server {
  listen 80;
  server_name server_name.com;

  try_files $uri/index.html $uri @app;
  client_max_body_size 100M;

  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app;
    client_max_body_size 10M;
  }

  location /cable {
    proxy_pass http://app/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
  }

  location ~ ^/(assets|uploads)/ {
    root assets_path;
    gzip_static on;
    expires max;
    add_header Cache-Control public;
    add_header ETag "";
    break;
  }

  error_page 500 502 503 504 /500.html;
}
In Rails, in production.rb, I have the following setting:
config.action_cable.url = 'ws://server_name.com/cable'
Any help will be appreciated.
Try adding:
config.action_cable.allowed_request_origins = ['*']
config.action_cable.disable_request_forgery_protection = true
to your config/environments/production.rb file.
It works for me; you can check this link as well, and this question.
Try using:
location /cable {
  proxy_pass http://app; # not http://app/; a trailing slash makes nginx replace the matched /cable prefix with /, so the app never sees /cable
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "Upgrade";
}
Also make sure that you have config.action_cable.allowed_request_origins = [/http:\/\/*/, /https:\/\/*/] in your production.rb if you don't have SSL.
I have an application written in Rails with an Ember frontend. It is served by nginx. Here is the configuration for the Rails part:
upstream app_project_app {
  server unix:///tmp/project.sock fail_timeout=0;
}
And here is the configuration for the Ember part:
server {
  listen 80;
  server_name project.demo.domain.pl;
  root /home/lunar/apps/project-ember/current;

  try_files /system/maintenance.html $uri/index.html $uri.html $uri @app;

  access_log /var/log/nginx/project_app_access.log;
  error_log /var/log/nginx/project_app_error.log;

  keepalive_timeout 5;
  proxy_read_timeout 60;
  proxy_send_timeout 60;
  proxy_connect_timeout 60;

  if ($request_method !~ ^(GET|HEAD|PUT|POST|DELETE|OPTIONS)$ ){
    return 405;
  }

  location ~ ^/assets/ {
    expires max;
    add_header Cache-Control public;
    add_header ETag "";
    break;
  }

  location = /favicon.ico {
    expires max;
    add_header Cache-Control public;
  }

  location / {
    try_files $uri/index.html $uri.html $uri @app;
    error_page 404 /404.html;
    error_page 422 /422.html;
    error_page 500 502 503 504 /500.html;
    error_page 403 /403.html;
  }

  location @app {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://app_project_app;
  }
}
Now the application has grown and includes a WebSocket server (using Faye), and the client can't connect to it:
WebSocket connection to 'ws://project.demo.domain.pl/faye' failed: Error during WebSocket handshake: Unexpected response code: 400
I've read that I need to enable SSL for this handshake. How can I do this in nginx? I also read that I don't need to use HTTPS and can use SSL only for the WebSockets; is that true? And if so, what should the nginx configuration look like in this case?
For WebSocket support you need to add the following directives in your @app location block:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
Read more here
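For illustration, here is a minimal sketch of what the @app location from the configuration above could look like with those directives added (it simply combines the question's block with the answer's directives):
location @app {
  proxy_set_header Host $http_host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Real-IP $remote_addr;

  # WebSocket upgrade support
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";

  proxy_pass http://app_project_app;
}
If Faye is mounted at /faye, a dedicated location /faye block with the same upgrade headers, proxying to the same upstream, is another common arrangement.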
Using the websocket-rails gem, I'm able to successfully get a WebSocket connection straight through Puma in development. However, when deployed to production and accessing the WebSocket through nginx (which passes off to Puma), I get a couple of errors. One is in the nginx error log:
[info] 14340#0: *7 upstream timed out (110: Connection
timed out) while proxying upgraded connection, client: 123.45.67.89, server:
foo.com, request: "GET /websocket HTTP/1.1", upstream:
"http://unix:///opt/oneconnect/shared/tmp/sockets/puma.sock:/websocket", host:
"foo.com"
... and one in the JavaScript console:
WebSocket connection to 'ws://foo.com/websocket' failed: Error during WebSocket handshake: Unexpected response code: 301
I found that nginx (I'm using version 1.4.6) supports WebSockets but requires special configuration, which I've already attempted (getting the errors above). Here's my nginx.conf:
upstream oneconnect {
  server unix:///opt/oneconnect/shared/tmp/sockets/puma.sock;
}

server {
  listen 80;
  listen 443 ssl;

  #ssl on;
  ssl_certificate /etc/ssl/foo.com.crt;
  ssl_certificate_key /etc/ssl/foo.com.key;

  root /opt/oneconnect/current/public;
  try_files $uri @oneconnect;

  access_log /opt/oneconnect/current/log/nginx.access.log;
  error_log /opt/oneconnect/current/log/nginx.error.log info;

  server_name foo.com;

  location ~ ^/(assets)/ {
    root /opt/oneconnect/current/public;
    gzip_static on;
    expires max;
    add_header Cache-Control public;
  }

  location /websocket/ {
    proxy_pass http://oneconnect;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  location @oneconnect {
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://oneconnect;
  }
}
I'm assuming that I'm missing something simple, but I'm stumped at this point and have Googled until my eyes started bleeding. If anyone could help it would be much appreciated, or maybe just point me to how to debug these connections (it seems hard to get debug info from a ws connection). Thanks for your time.
Assuming you already have an initializer for EventMachine:
config/initializers/eventmachine.rb
Thread.new { EventMachine.run } unless EventMachine.reactor_running? && EventMachine.reactor_thread.alive?
nginx site conf:
upstream puma_project_production {
  server unix:/var/www/project/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
  listen 80;

  client_max_body_size 4G;
  keepalive_timeout 10;

  error_page 500 502 504 /500.html;
  error_page 503 @503;

  server_name localhost project.local;
  root /var/www/project/current/public;
  try_files $uri/index.html $uri @puma_project_production;

  location @puma_project_production {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://puma_project_production;
    # limit_req zone=one;
    access_log /var/www/project/shared/log/nginx.access.log;
    error_log /var/www/project/shared/log/nginx.error.log;
  }

  location ^~ /assets/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
  }

  location = /50x.html {
    root html;
  }

  location = /404.html {
    root html;
  }

  location @503 {
    error_page 405 = /system/maintenance.html;
    if (-f $document_root/system/maintenance.html) {
      rewrite ^(.*)$ /system/maintenance.html break;
    }
    rewrite ^(.*)$ /503.html break;
  }

  if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
    return 405;
  }

  if (-f $document_root/system/maintenance.html) {
    return 503;
  }

  location /websocket {
    proxy_pass http://puma_project_production;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  location ~ \.(php|rb)$ {
    return 405;
  }
}
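Two documented nginx behaviors are also relevant to the errors in the question: a prefix location that ends with a slash and is handled by proxy_pass answers a request for the same URI without the trailing slash with a 301 (here, GET /websocket against the original location /websocket/ block), and nginx closes an upgraded connection once proxy_read_timeout (60 seconds by default) passes with no traffic, which is the classic cause of the "upstream timed out while proxying upgraded connection" log line. If your WebSocket sessions can sit idle, a variation of the /websocket block above with longer timeouts is one option; the timeout values here are assumptions to tune, not part of the original answer:
location /websocket {
  proxy_pass http://puma_project_production;
  proxy_redirect off;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";

  # keep idle upgraded connections open longer than the 60s default
  proxy_read_timeout 3600s;  # assumed value; tune to how long clients may stay silent
  proxy_send_timeout 3600s;
}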
I am using the following nginx configuration:
user www-data;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
  worker_connections 1024;
  # multi_accept on;
}

http {
  include /etc/nginx/mime.types;
  access_log /var/log/nginx/access.log;

  sendfile on;
  #tcp_nopush on;

  #keepalive_timeout 0;
  keepalive_timeout 65;
  tcp_nodelay on;

  gzip on;
  gzip_disable "MSIE [1-6]\.(?!.*SV1)";

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;

  upstream myapp.co {
    server 127.0.0.1:8080;
  }

  server {
    listen 80;
    server_name myapp.co;
    rewrite ^ https://myapp.co$request_uri? permanent;
  }

  server {
    listen 443 ssl;
    server_name myapp.co;
    root /home/deployer/myapp/public;

    ssl on;
    ssl_certificate /etc/nginx/certs/myapp.co.crt;
    ssl_certificate_key /etc/nginx/certs/myapp.co.private.key;

    #server_name myapp.co _;
    #root /home/deployer/myapp/public;

    location / {
      proxy_set_header X_FORWARDED_PROTO $scheme;
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header CLIENT_IP $remote_addr;
      proxy_redirect http:// https://;

      if (!-f $request_filename) {
        proxy_pass http://myapp.co;
        break;
      }

      if (-f $document_root/system/maintenance.html) {
        return 503;
      }
    }
  }
}
The issue: when I load http://www.myapp.co, I get the error message
Welcome to nginx
But if I enter any of the following into the browser:
https://www.myapp.co
https://myapp.co
http://myapp.co
everything works well.
How can I make the Rails app display properly for http://www.myapp.co as well?
I am quite new to setting up nginx, so I'd be grateful for any advice.
Thank you
I think you should set your server_name (in both server sections) like this:
server_name myapp.co www.myapp.co;
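To make that concrete, here is a minimal sketch of how the two server blocks from the configuration above might look with the www host added (assuming the certificate also covers www.myapp.co, and that www requests should be sent to the canonical https://myapp.co as well):
server {
  listen 80;
  server_name myapp.co www.myapp.co;
  # send plain-HTTP requests for either host to the canonical HTTPS URL
  rewrite ^ https://myapp.co$request_uri? permanent;
}

server {
  listen 443 ssl;
  server_name myapp.co www.myapp.co;
  # ... the rest of the existing SSL server block stays unchanged ...
}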