Elastic Beanstalk Redis Fail, Webapp Unresponsive - ruby-on-rails
Can't get past sidekiq errors.
Trying to migrate from Heroku to AWS EB. I have a rails app running rails 4.2.0, ruby 2.3 on a linux machine, but keep running into issues. The webapp won't load - it simply times out over and over.
INFO: Running in ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
INFO: See LICENSE and the LGPL-3.0 for licensing details.
INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
INFO: Booting Sidekiq 3.5.4 with redis options {:url=>nil}
ERROR: heartbeat: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
Redis keeps telling me its URL is nil despite what seems to be a solid setup. (It works on another app I managed to get running with the same configuration.) The ERROR MISCONF notice is troubling too, but the Redis URL isn't even being set. Further, both instances are in the same security group.
This is my config/sidekiq.rb:
rails_root = Rails.root || File.dirname(__FILE__) + '/../..'
rails_env = Rails.env || 'development'
redis_config = YAML.load_file(rails_root.to_s + '/config/redis.yml')
redis_config.merge! redis_config.fetch(rails_env, {}) # use the rails_env fallback computed above
redis_config.symbolize_keys!
Sidekiq.configure_server do |config|
config.redis = { url: "redis://#{ENV['REDIS_HOST']}:#{redis_config[:port]}/12" }
end
Sidekiq.configure_client do |config|
config.redis = { url: "redis://#{ENV['REDIS_HOST']}:#{redis_config[:port]}/12" }
end
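Since Sidekiq boots with `{:url=>nil}`, it's worth verifying what the interpolated URL actually contains before handing it to Sidekiq. A minimal sketch in plain Ruby, assuming the same `REDIS_HOST` variable and port as above (the `localhost` fallback is my addition, not the poster's code):

```ruby
# Build the URL the same way the initializer does, with an explicit
# fallback so a missing REDIS_HOST is visible instead of silently nil.
redis_host = ENV.fetch('REDIS_HOST', 'localhost')
redis_port = 6379
redis_url  = "redis://#{redis_host}:#{redis_port}/12"
```

Logging `redis_url` (or raising when `ENV['REDIS_HOST']` is empty) at boot makes it immediately obvious whether the environment variable actually reached the process.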
And my config/redis.yml:
development:
host: localhost
port: 6379
test:
host: localhost
port: 6379
production:
host: ENV['REDIS_HOST']
port: 6379
My application.yml:
REDIS_HOST: project-name-001.random-token.0001.use1.cache.amazonaws.com
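Assuming the instance can actually reach ElastiCache (the security group must allow inbound 6379 from the EC2 instances, not just contain both), a quick connectivity check from the EB host is (hostname copied from above; requires redis-cli on the instance):

```
redis-cli -h project-name-001.random-token.0001.use1.cache.amazonaws.com -p 6379 ping
```

A reachable node answers PONG; a hang or timeout points at the security group or subnet rather than the Rails configuration.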
Here's the setup_swap.config, sidekiq.config, and nginx.config.
I've also seen the following issue, but I assume it's unrelated. Perhaps I'm mistaken? If it's irrelevant, I'll address it in a separate post.
Starting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
What could it be?
Is there anything important I'm missing?
Edit: added nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 ;
listen [::]:80 ;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 ;
# listen [::]:443 ssl http2 ;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers <redacted>;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
Update: I changed nginx.conf to read include /etc/nginx/conf.d/webapp_healthd.conf; but still got the following:
[root] service nginx restart
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
Stopping nginx: [ OK ]
Starting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[ OK ]
And also, the following persists:
ERROR: heartbeat: MISCONF Redis is configured to save RDB snapshots,
but is currently not able to persist on disk. Commands that may modify
the data set are disabled. Please check Redis logs for details about
the error.
Update 2: I removed the duplicate references to localhost on port 80 and nginx stopped complaining, but I still get the heartbeat MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. error.
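For the MISCONF error itself: Redis disables writes whenever its last background save failed (the INFO output below shows rdb_last_bgsave_status err). A few redis-cli checks, run against whichever Redis instance Sidekiq actually connects to, plus a temporary workaround; note the last command only masks the failed save, it doesn't fix it:

```
redis-cli CONFIG GET dir          # directory the RDB snapshot is written to
redis-cli CONFIG GET dbfilename
df -h                             # is that filesystem full?
redis-cli CONFIG SET stop-writes-on-bgsave-error no   # temporary workaround only
```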
Output from Sidekiq.redis(&:info):
{
"redis_version"=>"3.2.8",
"redis_git_sha1"=>"00000000",
"redis_git_dirty"=>"0",
"redis_build_id"=>"12e5c8be08dc4d3",
"redis_mode"=>"standalone",
"os"=>"Linux 4.4.51-40.60.amzn1.x86_64 x86_64",
"arch_bits"=>"64",
"multiplexing_api"=>"epoll",
"gcc_version"=>"4.8.3",
"process_id"=>"24835",
"run_id"=>"83a8de8b50f482a4e271228435b2f0c8e3fa5b5c",
"tcp_port"=>"6379",
"uptime_in_seconds"=>"341217",
"uptime_in_days"=>"3",
"hz"=>"10",
"lru_clock"=>"1108155",
"executable"=>"/usr/local/bin/redis-server",
"config_file"=>"/etc/redis/redis.conf",
"connected_clients"=>"2",
"client_longest_output_list"=>"0",
"client_biggest_input_buf"=>"0",
"blocked_clients"=>"0",
"used_memory"=>"842664",
"used_memory_human"=>"822.91K",
"used_memory_rss"=>"3801088",
"used_memory_rss_human"=>"3.62M",
"used_memory_peak"=>"924360",
"used_memory_peak_human"=>"902.70K",
"total_system_memory"=>"1043574784",
"total_system_memory_human"=>"995.23M",
"used_memory_lua"=>"37888",
"used_memory_lua_human"=>"37.00K",
"maxmemory"=>"0",
"maxmemory_human"=>"0B",
"maxmemory_policy"=>"noeviction",
"mem_fragmentation_ratio"=>"4.51",
"mem_allocator"=>"jemalloc-4.0.3",
"loading"=>"0",
"rdb_changes_since_last_save"=>"177",
"rdb_bgsave_in_progress"=>"0",
"rdb_last_save_time"=>"1493941570",
"rdb_last_bgsave_status"=>"err",
"rdb_last_bgsave_time_sec"=>"0",
"rdb_current_bgsave_time_sec"=>"-1",
"aof_enabled"=>"0",
"aof_rewrite_in_progress"=>"0",
"aof_rewrite_scheduled"=>"0",
"aof_last_rewrite_time_sec"=>"-1",
"aof_current_rewrite_time_sec"=>"-1",
"aof_last_bgrewrite_status"=>"ok",
"aof_last_write_status"=>"ok",
"total_connections_received"=>"17",
"total_commands_processed"=>"141824",
"instantaneous_ops_per_sec"=>"0",
"total_net_input_bytes"=>"39981126",
"total_net_output_bytes"=>"72119284",
"instantaneous_input_kbps"=>"0.00",
"instantaneous_output_kbps"=>"0.00",
"rejected_connections"=>"0",
"sync_full"=>"0",
"sync_partial_ok"=>"0",
"sync_partial_err"=>"0",
"expired_keys"=>"3",
"evicted_keys"=>"0",
"keyspace_hits"=>"14",
"keyspace_misses"=>"533",
"pubsub_channels"=>"0",
"pubsub_patterns"=>"0",
"latest_fork_usec"=>"160",
"migrate_cached_sockets"=>"0",
"role"=>"master",
"connected_slaves"=>"0",
"master_repl_offset"=>"0",
"repl_backlog_active"=>"0",
"repl_backlog_size"=>"1048576",
"repl_backlog_first_byte_offset"=>"0",
"repl_backlog_histlen"=>"0",
"used_cpu_sys"=>"167.52",
"used_cpu_user"=>"46.03",
"used_cpu_sys_children"=>"0.00",
"used_cpu_user_children"=>"0.00",
"cluster_enabled"=>"0",
"db0"=>"keys=1,expires=0,avg_ttl=0"
}
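The relevant lines in that dump are rdb_last_bgsave_status => "err" together with 177 unsaved changes, which is exactly the state that triggers MISCONF under Redis's default stop-writes-on-bgsave-error yes. As a sketch, the same check in plain Ruby against the hash above:

```ruby
# Subset of the INFO hash shown above.
info = {
  'rdb_last_bgsave_status'      => 'err',
  'rdb_changes_since_last_save' => '177',
}

# MISCONF fires when the last background save failed and there are
# pending changes that can no longer be persisted to disk.
misconf = info['rdb_last_bgsave_status'] == 'err' &&
          info['rdb_changes_since_last_save'].to_i > 0
```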
Interestingly, I can't find my redis logs to investigate further. In my redis.conf, all I see is this.
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
I've even run find / -path /sys -prune -o -path /proc -prune -o -name '*redis*' -print and don't see ANY log files. (╯°□°)╯︵ ┻━┻
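The missing log files follow directly from that setting: with logfile "" and a daemonized Redis, log output is sent to /dev/null, so find has nothing to find. A hedged fix (the path and ownership below are examples, not taken from the poster's box):

```
# /etc/redis/redis.conf
logfile /var/log/redis/redis-server.log
```

followed by creating the directory (sudo mkdir -p /var/log/redis && sudo chown redis:redis /var/log/redis) and restarting Redis.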
What's also strange is that production.log is simply not being written to; here are the permissions: -rw-r--r-- 1 webapp webapp 0 May 8 20:01 production.log
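The permissions shown (webapp-owned, world-readable) look fine, so a silent log file is more likely a configuration issue than a filesystem one. One low-risk sanity check is routing the production logger to STDOUT so entries land in the EB-collected logs; a sketch of the usual Rails 4.2 approach, assuming nothing else reassigns config.logger:

```ruby
require 'logger'

# In config/environments/production.rb this would be:
#   config.logger = Logger.new(STDOUT)
logger = Logger.new(STDOUT)
logger.info 'logger wired to stdout'
```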
Please share your /etc/nginx/nginx.conf. I suspect your nginx.conf includes the other server conf files in the conf.d folder; check for the line include /etc/nginx/conf.d/*.conf; in your nginx.conf. If it's there, it may be loading the file twice, or loading another default file with the same server name. You can change it to include /etc/nginx/conf.d/webapp_healthd.conf (or whatever the file is actually called), but first check which files are on the machine.
Also check the /etc/nginx/sites-enabled/ directory for leftover temp files such as ~default or .save (list them with ls -lah). Delete them, restart nginx, and check for errors, or do it via .ebextensions and deploy again.
UPDATE
Try removing the whole server { ... } section from nginx.conf, and make sure the http block includes your /etc/nginx/conf.d/webapp_healthd.conf; that file already defines a server listening on 80 with server_name localhost.
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/webapp_healthd.conf;
index index.html index.htm;
}
003_nginx.config
files:
"/etc/nginx/conf.d/webapp_healthd.conf" :
mode: "000755"
owner: root
group: root
content: |
upstream my_app {
server unix:///var/run/puma/my_app.sock;
}
log_format healthd '$msec"$uri"'
'$status"$request_time"$upstream_response_time"'
'$http_x_forwarded_for';
server {
listen 80;
server_name _ localhost; # need to listen to localhost for worker tier
root /var/app/current/public;
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/access.log main;
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
try_files $uri/index.html $uri @my_app;
location @my_app {
proxy_pass http://my_app; # match the name of the upstream directive defined above
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /assets {
alias /var/app/current/public/assets;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
"/opt/elasticbeanstalk/hooks/appdeploy/post/03_restart_nginx.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
# -f so a missing file doesn't abort the deploy hook
rm -f /etc/nginx/conf.d/webapp_healthd.conf.bak
rm -f /etc/nginx/conf.d/custom.conf
service nginx restart
Related
containerized reverse proxy showing default site
No matter what I do, I keep running into the problem where my website publishes the default nginx website. I'm trying to dockerize my webserver such that it can point to home assistant running in another container. I've been able to get it to work when both were hosted on the same raspi, not running in containers, but not when both are running in containers. I've attached my nginx.conf, Dockerfile and default.conf that I was using to start the environment up. I've spent the last 2 days looking for someone who was trying to do something similar, but I assume I'm making such a stupid mistake that most have been able to figure it out on their own.. nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } default.conf (/etc/nginx/conf.d/hass.conf) map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { # Update this line to be your domain server_name nekohouse.ca; # These shouldn't need to be changed listen [::]:80 default_server ipv6only=off; return 301 https://$host$request_uri; } server { # Update this line to be your domain server_name nekohouse.ca; # Ensure these lines point to your SSL certificate and key ssl_certificate fullchain.pem; ssl_certificate_key privkey.pem; # Use these lines instead if you created a self-signed certificate # ssl_certificate /etc/nginx/ssl/cert.pem; # ssl_certificate_key /etc/nginx/ssl/key.pem; # Ensure this line points to your dhparams file ssl_dhparam /etc/nginx/dhparams.pem; # These shouldn't need to be changed listen [::]:443 default_server 
ipv6only=off; # if your nginx version is >= 1.9.5 you can also add the "http2" flag here add_header Strict-Transport-Security "max-age=31536000; includeSubdomains"; ssl on; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; proxy_buffering off; location / { proxy_pass http://localhost:8123; proxy_set_header Host $host; proxy_redirect http:// https://; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } } default.conf (/etc/nginx/conf.d/default.conf) server { listen 8100; server_name localhost; #charset koi8-r; #access_log /var/log/nginx/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }
The problem is because of this line proxy_pass http://localhost:8123; When running in containers, you should understand that localhost refers to the nginx container and not the docker host. So, you should either change localhost to the hostname of your docker host or use docker-compose so that you can change it to the name of the container defined. If you are just running the containers separately, you could also just use the container IP for now but note that it will change everytime the container is restarted.
Rails static images not showing up
I apologize for asking what seems to be such a simple question that is asked again and again. I built a small app using Rails 4.2.3. Everything works locally so I am trying to deploy to AWS with Elastic Beanstalk and the following setup: 64bit Amazon Linux 2016.03 v2.1.6 running Ruby 2.3 (Puma) Before I deploy I run: rake assets:precompile RAILS_ENV=production I then commit those files to git and use eb deploy to push the files up the the EC2 instance. Some things work: When I ssh into that instance, I see all of the precompiled assets in /var/app/current/public/assets CSS all looks correct Coffeescripts are running properly But, neither static images or ones that I upload via Paperclip show up as I would expect. In production.rb I have this line: config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present? I can confirm that key is not in my ENV variable by going into the console: irb(main):001:0> ENV['RAILS_SERVE_STATIC_FILES'] => nil which leads me to believe that the serving of these files should be handled by nginx. I can confirm that nginx is running, but quite frankly I don't know how it is configured. [ec2-user#ip-172-31-13-16 assets]$ ps waux | grep nginx root 2800 0.0 0.4 109364 4192 ? Ss Oct08 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 2801 0.0 0.6 109820 6672 ? S Oct08 0:09 nginx: worker process ec2-user 21321 0.0 0.2 110456 2092 pts/0 S+ 23:02 0:00 grep --color=auto nginx I "think" I am supposed to edit my .ebextensions file to do a few things automatically when I deploy, but that's about where I got stuck. Any suggestions? 
/etc/nginx/nginx.conf # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; index index.html index.htm; server { listen 80 ; listen [::]:80 ; server_name localhost; root /usr/share/nginx/html; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; location / { } # redirect server error pages to the static page /40x.html # error_page 404 /404.html; location = /40x.html { } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # Settings for a TLS enabled server. 
# # server { # listen 443 ssl; # listen [::]:443 ssl; # server_name localhost; # root /usr/share/nginx/html; # # ssl_certificate "/etc/pki/nginx/server.crt"; # ssl_certificate_key "/etc/pki/nginx/private/server.key"; # # It is *strongly* recommended to generate unique DH parameters # # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048 # #ssl_dhparam "/etc/pki/nginx/dhparams.pem"; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 10m; # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP; # ssl_prefer_server_ciphers on; # # # Load configuration files for the default server block. # include /etc/nginx/default.d/*.conf; # # location / { # } # # error_page 404 /404.html; # location = /40x.html { # } # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # } # } } /etc/nginx/conf.d/virtual.conf # # A virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} /etc/nginx/conf.d/webapp_healthd.conf upstream my_app { server unix:///var/run/puma/my_app.sock; } log_format healthd '$msec"$uri"' '$status"$request_time"$upstream_response_time"' '$http_x_forwarded_for'; server { listen 80; server_name _ localhost; # need to listen to localhost for worker tier if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") { set $year $1; set $month $2; set $day $3; set $hour $4; } access_log /var/log/nginx/access.log main; access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd; location / { proxy_pass http://my_app; # match the name of upstream directive which is defined above proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location /assets { alias /var/app/current/public/assets; gzip_static on; 
gzip on; expires max; add_header Cache-Control public; } location /public { alias /var/app/current/public; gzip_static on; gzip on; expires max; add_header Cache-Control public; } }
Fix webapp_healthd.conf to make nginx to serve files in public folder and if cannot or they do not exist then proxy_pass to Your app: upstream my_app { server unix:///var/run/puma/my_app.sock; } server { listen 80; server_name _; # need to listen to localhost for worker tier if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") { set $year $1; set $month $2; set $day $3; set $hour $4; } access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd; index index.html index.htm; location #app { log_not_found off; access_log off; proxy_pass http://my_app; # proxy passing to upstream proxy_http_version 1.1; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_cache_bypass $http_upgrade; } root /var/app/current/public; location / { try_files $uri $uri/ #app; # tries to serve static files if not will ask #app } }
Deploy ruby on rail app to AWS(Amazon Linux)
I am following the step in https://www.sitepoint.com/deploy-your-rails-app-to-aws/ to deploy my ruby on rails app to AWS(Amazon Linux). I did everything sucessfully except for the setting of nginx. In the artcile, it asks me to comment out the existing content and paste the following into /etc/nginx/sites-available/default. upstream app { # Path to Puma SOCK file, as defined previously server unix:/home/deploy/contactbook/shared/tmp/sockets/puma.sock fail_timeout=0; } server { listen 80; server_name localhost; root /home/deploy/contactbook/public; try_files $uri/index.html $uri #app; location / { proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Connection ''; proxy_pass http://app; } location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt { gzip_static on; expires max; add_header Cache-Control public; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; } But after I installed nginx on Amazon Linux, there is no folder /etc/nginx/sites-available So I created this folder and file default. But I got 404 error when I try to access my home page. Then, I found I have /etc/nginx/nginx.conf, so I updated this file. But when I did sudo service nginx restart, I got error msg as: nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:1 nginx: configuration file /etc/nginx/nginx.conf test failed Does anyone know how should I do this correctly? 
FYI, the content of /etc/nginx/nginx.conf before I broke it: # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; index index.html index.htm; server { listen 80 default_server; listen [::]:80 default_server; server_name localhost; root /usr/share/nginx/html; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; location / { } # redirect server error pages to the static page /40x.html # error_page 404 /404.html; location = /40x.html { } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # Settings for a TLS enabled server. 
# # server { listen 443 ssl; # listen [::]:443 ssl; # server_name localhost; # root /usr/share/nginx/html; # # ssl_certificate "/etc/pki/nginx/server.crt"; # ssl_certificate_key "/etc/pki/nginx/private/server.key"; # # It is *strongly* recommended to generate unique DH parameters # # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048 # #ssl_dhparam "/etc/pki/nginx/dhparams.pem"; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 10m; # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:$ # ssl_prefer_server_ciphers on; # # # Load configuration files for the default server block. # include /etc/nginx/default.d/*.conf; # # location / { # } error_page 404 /404.html; # location = /40x.html { # } # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # } # } }
Ruby on Rails in nginx server, HTTPS redirects to HTTP
I have a client that wanted SSL on its site so I got the certificate and set up the nginx conf (below is the config) with it. If I dont point the root of the HTTPS part to the real server root it works, but if I set the root to the site files HTTPS gets redirected to HTTP. No error messages. Any ideas? user www-data; worker_processes 4; error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { passenger_root /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.14; passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.3-p448/ruby; include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name www.nope.se; passenger_enabled on; root /var/www/current/public/; #charset koi8-r; #access_log logs/host.access.log main; #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root html; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index 
index.html index.htm; # } #} # HTTPS server # server { listen 443; server_name www.nope.se; ssl on; ssl_certificate /opt/nginx/cert/www.nope.se.crt; ssl_certificate_key /opt/nginx/cert/www.nope.se.key; ssl_session_timeout 10m; #ssl_protocols SSLv2 SSLv3 TLSv1; #ssl_ciphers HIGH:!aNULL:!MD5; #ssl_prefer_server_ciphers on; passenger_enabled on; root /var/www/current/public/; # location / { # root html; # index index.html index.htm; # } } }
I honestly do not understand your question. But here is some gyan on how a typical nginx-https configuration is done. hope you find it useful. SSL is a protocol that works one layer below HTTP. Think of it as a tunnel inside which HTTP protocol travels. Hence your SSL certificates are loaded, no matter where you specify them, before any HTTP related configuration. This is also the reason why there should be only one SSL setting per nginx instance. I recommend that you move your ssl certificate related logic to a separate server block like this. server { listen 443 ssl default_server; ssl_certificate ssl/website.pem; ssl_certificate_key ssl/website.key; ssl_trusted_certificate ssl/ca.all.pem; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # default on newer versions ssl_prefer_server_ciphers on; # The following is all one long line. We use an explicit list of ciphers to enable # forward secrecy without exposing ciphers vulnerable to the BEAST attack ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:RC4-SHA:RC4-MD5:ECDHE-RSA-AES256-SHA:AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:DES-CBC3-SHA:AES128-SHA; # The following is for reference. It needs to be specified again # in each virtualhost, in both HTTP and non-HTTP versions. # All this directive does it to tell the browser to use HTTPS version of the site and remember this for a month add_header Strict-Transport-Security max-age=2592000; } I also recommend that you set a 301 redirect in your non-https server block as shown below. Change this: server { listen 80; server_name www.nope.se; ... } to something like this: server { listen 80; server_name www.nope.se; add_header Strict-Transport-Security max-age=7200; return 301 https://$host$request_uri; } With this in place, when a user visits http://www.nope.se they will be automatically redirected to https://www.nope.se
400 Bad Request - request header or cookie too large
I am getting a 400 Bad Request "request header or cookie too large" from nginx with my Rails app. Restarting the browser fixes the issue. I am only storing a string id in my cookie, so it should be tiny.

Where can I find the nginx error logs? I looked at /opt/nginx/logs/error.log, but it doesn't have anything related. I tried setting the following, with no luck:

```
location / {
    large_client_header_buffers 4 32k;
    proxy_buffer_size 32k;
}
```

nginx.conf:

```
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    passenger_root /home/app/.rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19;
    passenger_ruby /home/app/.rvm/wrappers/ruby-1.9.3-p392/ruby;

    include       mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    keepalive_timeout  65;
    client_max_body_size 20M;

    server {
        listen       80;
        server_name  localhost;
        root /home/app/myapp/current/public;
        passenger_enabled on;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        # location / {
        #     large_client_header_buffers 4 32k;
        #     proxy_buffer_size 32k;
        # }

        # location / {
        #     root   html;
        #     index  index.html index.htm;
        #     client_max_body_size 4M;
        #     client_body_buffer_size 128k;
        # }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443;
    #    server_name  localhost;
    #    ssl                  on;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_timeout  5m;
    #    ssl_protocols  SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
```

Here's my code storing the cookies, and a screenshot of the cookies in Firebug. I used Firebug to check the stored session and found that New Relic and jQuery are storing cookies too; could this be why the cookie size is exceeded?

```ruby
def current_company
  return if current_user.nil?
  session[:current_company_id] = current_user.companies.first.id if session[:current_company_id].blank?
  @current_company ||= Company.find(session[:current_company_id])
end
```
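Before tuning nginx, it can help to confirm that the combined Cookie header really exceeds nginx's 8k default buffer: all cookies are joined into a single `Cookie:` header, so several individually modest cookies can add up past the limit. A minimal Ruby sketch (the cookie names and sizes here are hypothetical, not taken from the app):

```ruby
# Estimate the size of the Cookie request header the browser would send,
# given a hash of cookie names and values (values below are made up).
cookies = {
  "_myapp_session"   => "a" * 400,  # a small Rails session cookie
  "mp_1234_mixpanel" => "b" * 9000  # e.g. an oversized analytics cookie
}

# Browsers send all cookies as one "Cookie: name=value; name=value" header.
header = "Cookie: " + cookies.map { |k, v| "#{k}=#{v}" }.join("; ")

puts header.bytesize          # => 9442
puts header.bytesize > 8192   # => true: over nginx's default 8k buffer
```

If the total is over 8192 bytes, the 400 is expected with default settings, regardless of how small your own session cookie is.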
It's just what the error says: Request Header Or Cookie Too Large. One of your headers is really big, and nginx is rejecting the request. You're on the right track with large_client_header_buffers; if you check the docs, you'll find it's only valid in the http or server context. Move it up into a server block and it will work:

```
server {
    # ...
    large_client_header_buffers 4 32k;
    # ...
}
```

By the way, the defaults are 4 buffers of 8k each, so your bad header must be the one that's over 8192 bytes. In your case, all those cookies (which are combined into a single Cookie header) are well over the limit. Mixpanel cookies in particular get quite large.
Fixed by adding:

```
server {
    # ...
    large_client_header_buffers 4 16k;
    # ...
}
```
With respect to the answers above, client_header_buffer_size also needs to be mentioned: nginx reads headers into that smaller buffer first and only falls back to large_client_header_buffers for headers that don't fit.

```
http {
    # ...
    client_body_buffer_size 32k;
    client_header_buffer_size 8k;
    large_client_header_buffers 8 64k;
    # ...
}
```
I was getting this error roughly once every 600 requests while web scraping. At first I assumed a proxy server or the remote nginx was imposing a limit. I tried deleting all cookies and the other browser-side fixes generally suggested in related posts, but had no luck, and the remote server was not under my control. In my case, the mistake was that I kept adding a new header to the httpClient object over and over. After defining a global httpClient object and adding the header only once, the problem never appeared again. It was a small mistake, but instead of trying to understand the problem I jumped straight to Stack Overflow :) Sometimes we should try to understand the problem ourselves first.
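The same bug is easy to reproduce in Ruby: if a shared headers hash is mutated on every request by appending to a value rather than setting it, the header grows without bound until the server rejects it. A hypothetical sketch (the header name and token are invented for illustration):

```ruby
# Hypothetical reproduction of the bug described above: a headers hash
# shared across requests, with a value appended on every call.
headers = { "X-Trace" => "" }

def add_trace!(headers)
  # BUG: concatenates onto the existing header each request,
  # so the header grows by 10 bytes per request, without bound.
  headers["X-Trace"] += "token-abc;"
end

3.times { add_trace!(headers) }
puts headers["X-Trace"].bytesize  # => 30 (three requests' worth)

# FIX: set the header once, up front, then reuse the client object.
headers["X-Trace"] = "token-abc;"
puts headers["X-Trace"].bytesize  # => 10
```

After a few hundred requests the appended version easily crosses nginx's 8k header buffer, which matches the "fails roughly every 600 requests" pattern described above.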
In my case (Cloud Foundry / nginx buildpack) the cause was the directive proxy_set_header Host ...; after removing this line, nginx became stable:

```
http {
    server {
        location /your-context/ {
            # remove it:
            # proxy_set_header Host myapp.mycfdomain.cloud;
        }
    }
}
```