Puma socket connection timed out or unavailable under high load - ruby-on-rails

I have a Rails 4.2 app currently running on an Ubuntu server with Nginx and Passenger, and it gets a lot of traffic, which Passenger doesn't handle very well (processes hang very often).
I decided to replace Passenger with Puma, as I did with other apps on another server where things improved drastically. But with this one, as soon as I deployed the new version running on Puma, problems started: I was getting a lot of 502 Bad Gateway errors, and looking in the logs I saw many occurrences of these two errors:
puma.sock failed (11: Resource temporarily unavailable)
[error] 6658#6658: *5788 upstream timed out (110: Connection timed out) while reading response header from upstream
After googling around, I ended up trying several things, including the following sysctl tweaks:
/etc/sysctl.conf
# Increase number of incoming connections
net.core.somaxconn = 65535
# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536
Then I reloaded the settings with sudo sysctl -p.
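To confirm the kernel actually picked up the new values, they can be read back with the same tool (standard sysctl usage; the expected output assumes the tweaks above):

sysctl net.core.somaxconn            # expect: net.core.somaxconn = 65535
sysctl net.core.netdev_max_backlog   # expect: net.core.netdev_max_backlog = 65536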
I've also tweaked the following Nginx configs:
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 400000;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    keepalive_requests 100000;
    server_tokens off;
    server_names_hash_bucket_size 256;
    [...]
}
Here's my Puma config:
workers 3
preload_app!
threads 1, 6

app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
if rails_env == "production"
  stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
end

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"

on_worker_boot do
  # Reconnect to Mongo
  Mongoid::Clients.clients.each do |name, client|
    client.close
    client.reconnect
  end

  # Reconnect to Redis
  $redis.redis.client.reconnect
end

before_fork do
  Mongoid.disconnect_clients
end
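For context on capacity: this config allows at most 3 workers × 6 threads = 18 requests in flight, and anything beyond that waits in the socket backlog, which is one plausible source of the "Resource temporarily unavailable" errors under heavy traffic. A minimal sketch of making both knobs tunable without redeploying (the WEB_CONCURRENCY and RAILS_MAX_THREADS names are conventions borrowed from Heroku's Puma guide, not part of the original config):

# Hypothetical sizing via environment variables; the defaults mirror the
# original values and are not measured recommendations for this app.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 3))
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 6))
threads 1, max_threads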
I've also tried specifying the backlog value when binding to the socket, like so:
bind "unix://#{shared_dir}/sockets/puma.sock?backlog=1024"
Here's the nginx config for the app:
upstream pumamyapp {
    server unix:///var/www/myapp/shared/sockets/puma.sock;
}

server {
    listen 80;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    server_name www.myapp.com;
    rewrite ^(.*) https://myapp.com$1 permanent;
}

server {
    listen 80;
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    root /var/www/myapp/public;
    server_name myapp.com;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    client_max_body_size 100M;

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        expires 1y;
        add_header Cache-Control public;
        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    location /cgi-bin {
        return 404;
    }

    location /setup.cgi {
        return 404;
    }

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_pass http://pumamyapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_redirect off;
    }
}
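For reference, the second logged error ("upstream timed out ... while reading response header") fires when a request exceeds nginx's proxy read timeout, which defaults to 60s. If some requests legitimately take that long, the limits can be raised in the @app location (a sketch with illustrative values, not tuned recommendations):

location @app {
    proxy_connect_timeout 5s;    # fail fast if Puma cannot accept the connection
    proxy_read_timeout 120s;     # allow slower responses before the 110 timeout
    # ...existing proxy_pass and proxy_set_header lines...
}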
I had to roll back to the previous version that runs on Passenger because the site was unusable. Any idea what is wrong and how I can make it right?

Related

How to get React frontend and Django backend to run on HTTPS? ERR_SSL_PROTOCOL_ERROR

My web application is running on a Linux Ubuntu 20.04.4 VPS server and its IP is assigned to a domain name.
The React frontend has an SSL certificate and is running on port 443.
The API (written in Django) is running inside a Docker container on port 1414 with no SSL.
The frontend's attempts to send requests to the API result in ERR_SSL_PROTOCOL_ERROR being thrown in the console. How can I get my backend to run on HTTPS? Do I need to create another certificate within my Docker container, or do I have to configure something in the domain?
For now, my colleague wrote a script in PHP that somehow manages to bypass the error and make the requests. However, this seems like a very insecure way of doing things and is only a temporary solution.
Below you'll find my Nginx configuration files
# /etc/nginx/sites-available/default
server {
listen 80;
listen [::]:80 default_server;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
try_files $uri /index.html =404;
}
}
server {
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name www.domainname.com domainname.com; # managed by Certbot
location / {
try_files $uri /index.html =404;
}
rewrite ^/api/get/(.*)$ /api.php?method=get&url=$1 last;
rewrite ^/api/post/(.*)$ /api.php?method=post&url=$1 last;
rewrite ^/api/put/(.*)$ /api.php?method=put&url=$1 last;
rewrite ^/api/delete/(.*)$ /api.php?method=delete&url=$1 last;
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
server_name img.domainname.com;
access_log /var/log/nginx/reverse-access.log;
error_log /var/log/nginx/reverse-error.log;
location / {
proxy_pass http://127.0.0.1:1414/static/media/uploads/meal/;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
}
listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/img.domainname.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/img.domainname.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = www.domainname.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
if ($host = domainname.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 ;
listen [::]:80 ;
server_name www.domainname.com domainname.com;
return 404; # managed by Certbot
}
server {
if ($host = img.domainname.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name img.domainname.com;
listen [::]:80;
return 404; # managed by Certbot
}
# /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
# Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And this is my Nginx config file inside my Docker container (for the backend)
server {
listen ${LISTEN_PORT};
location /static {
alias /vol/static;
}
location / {
uwsgi_pass ${APP_HOST}:${APP_PORT};
include /etc/nginx/uwsgi_params;
client_max_body_size 10M;
}
}
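One common way out (a hedged sketch, not something from the thread): since the host nginx already terminates TLS for domainname.com, it can reverse-proxy API paths to the container's published port, so the browser only ever speaks HTTPS and the Django container needs no certificate of its own. The /api/ prefix and the 127.0.0.1:1414 address mirror the question's setup but are assumptions:

# inside the existing 443 server block for domainname.com
location /api/ {
    proxy_pass http://127.0.0.1:1414;   # Docker-published backend port
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}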

How to get Puma and Nginx to run in Rails production?

I've been fighting with this for months; another new career path blooming every week, it seems.
So, that said, here's the closest I've come. I had it working several times, but it's brittle, as I'm more a developer than a devops (?) person.
I am running Ubuntu 20.04.
be puma -C config/puma.rb config.ru -e production \
--pidfile /run/puma.pid \
--control-url 'unix:///root/mysite/tmp/sockets/mysite-puma.sock' \
--control-token 'app' \
--state tmp/puma.state \
-b 'tcp://mysite.com'
I can run pumactl like so: bundle exec pumactl -T 'app' -C 'unix:///root/mysite/tmp/sockets/mysite-puma.sock' -S tmp/puma.state [pumactl switch]
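For instance, substituting a concrete switch (status and phased-restart are standard pumactl subcommands):

bundle exec pumactl -T 'app' -C 'unix:///root/mysite/tmp/sockets/mysite-puma.sock' -S tmp/puma.state status
bundle exec pumactl -T 'app' -C 'unix:///root/mysite/tmp/sockets/mysite-puma.sock' -S tmp/puma.state phased-restart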
My nginx config.
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip off;
gzip_vary off;
#gzip_proxied any;
#gzip_comp_level 6;
#gzip_buffers 16 8k;
#gzip_http_version 1.1;
#gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/mysite.com;
}
/etc/nginx/sites-enabled/mysite.com
# This configuration uses Puma. If using another rack server, substitute appropriate values throughout.
upstream puma {
server unix:///root/mysite/tmp/sockets/mysite.sock;
}
# We need to be listening on port 80 (HTTP traffic).
# The force_ssl option will redirect to port 443 (HTTPS).
server {
# Update this
server_name mysite.com www.mysite.com;
# Don't forget to update these, too.
# For help with setting this part up, see:
# http://localhost:4000/2018/09/18/deploying-ruby-on-rails-for-ubuntu-1804.html
root /root/mysite/public;
access_log /root/mysite/log/nginx.access.log;
error_log /root/mysite/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
}
# This is the configuration for port 443 (HTTPS)
server {
listen [::]:443 ssl ipv6only=on; # managed by Certbot
server_name mysite.com www.mysite.com;
ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
# Don't forget to update these, too.
# I like to update my log files to include 'ssl' in the name.
# If there's ever any need to consult the logs, it's handy to have HTTP and HTTPS traffic separated.
root /root/mysite/public;
access_log /root/mysite/log/nginx.ssl.access.log; # Updated file name
error_log /root/mysite/log/nginx.ssl.error.log info; # Updated file name
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
location ^~ /assets/ {
gzip_static off;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# This is an important line to help fix some redirect issues.
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
}
# If you chose Certbot to redirect all traffic to HTTPS, this will be in your current config.
# Remove it or you'll run into redirection errors:
server {
if ($host = example.com) {
return 301 https://www.example.com$request_uri;
} # managed by Certbot
listen [::]:80 default_server deferred;
server_name example.com;
return 404; # managed by Certbot
}
First things first, in your /etc/nginx/sites-enabled/mysite.com:
First step
change
upstream puma {
server unix:///root/mysite/tmp/sockets/mysite.sock;
}
to
upstream puma {
server 0.0.0.0:9838; # port number in which your puma server starts
}
After the change, your /etc/nginx/sites-enabled/mysite.com should look like the following.
upstream puma {
    server 0.0.0.0:9838;
}

server {
    server_name mysite.com www.mysite.com;
    client_max_body_size 200m;

    gzip on;
    gzip_comp_level 4;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/javascript application/json application/x-javascript text/xml text/css application/xml text/javascript;

    root /root/mysite/public;

    location / {
        try_files $uri/index.html $uri @app;
    }

    location ~* ^/assets {
        root /root/mysite/public;
        expires 1y;
        add_header Cache-Control public;
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    error_page 500 502 503 504 /500.html;

    location @app {
        proxy_pass http://puma;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location ~ /.well-known {
        allow all;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = mysite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.mysite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name mysite.com www.mysite.com;
    listen 80;
    return 404; # managed by Certbot
}
Second step
Then run gem install foreman to install the foreman gem.
Third step
Create a Procfile in your project root directory and paste in the content below:
web: RAILS_ENV=production bundle exec puma -e production -p 9838 -d -S ~/puma -C config/puma.rb
Final step
Run foreman start to start the Puma server, and there you go: you should see your application running.
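For completeness, foreman can validate the Procfile before launching it (both are standard foreman commands):

foreman check   # validates the Procfile syntax
foreman start   # starts every process type it defines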

Rails 4.2 app doesn't respond after 2-3 hours IDLE time on virtual private server

I am trying to set up my virtual private server on UpCloud. I am using Puma 4.3 with Ruby 2.4.2 and Rails 4.2.8. My Puma configuration is:
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
# Worker specific setup for Rails 4.1+
# See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
ActiveRecord::Base.establish_connection
end
My Nginx configuration is:
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 36000s;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
Lastly, my example.com.conf file is:
upstream my_app {
server unix:///var/run/my_app.sock;
}
server {
listen 80;
server_name 94.237.52.251:443; # change to match your URL
root /soop/soup; # I assume your app is located at that location
location / {
proxy_pass http://94.237.52.251:443; # match the name of upstream directive which is defined above
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~* ^/assets/ {
# Per RFC2616 - 1 year maximum expiry
expires 1y;
add_header Cache-Control public;
# Some browsers still send conditional-GET requests if there's a
# Last-Modified header or an ETag header even if they haven't
# reached the expiry date sent in the Expires header.
add_header Last-Modified "";
add_header ETag "";
break;
}
}
I have to restart the Rails server after every 15-20 minutes of idle time, or every time I get disconnected from my VPS; otherwise the browser shows me either
94.237.52.251 took too long to respond.
OR
Currently unable to handle this request. HTTP ERROR 500
I have been facing this issue for a week now. Kindly help me out.
Change your keepalive_timeout 36000s; to 75s.
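In config terms (75s is nginx's documented default; the hedged reasoning is that a ten-hour keepalive pins idle client connections, and their file descriptors, open far longer than intended):

# /etc/nginx/nginx.conf, inside the http block
keepalive_timeout 75s;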

containerized reverse proxy showing default site

No matter what I do, I keep running into the problem where my web server serves the default nginx site. I'm trying to dockerize my web server so that it can point to Home Assistant running in another container. I was able to get this working when both were hosted on the same Raspberry Pi without containers, but not when both are running in containers.
I've attached the nginx.conf, Dockerfile, and default.conf that I was using to start the environment up. I've spent the last two days looking for someone trying to do something similar, but I assume I'm making such a simple mistake that most have been able to figure it out on their own.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
default.conf (/etc/nginx/conf.d/hass.conf)
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# Update this line to be your domain
server_name nekohouse.ca;
# These shouldn't need to be changed
listen [::]:80 default_server ipv6only=off;
return 301 https://$host$request_uri;
}
server {
# Update this line to be your domain
server_name nekohouse.ca;
# Ensure these lines point to your SSL certificate and key
ssl_certificate fullchain.pem;
ssl_certificate_key privkey.pem;
# Use these lines instead if you created a self-signed certificate
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# Ensure this line points to your dhparams file
ssl_dhparam /etc/nginx/dhparams.pem;
# These shouldn't need to be changed
listen [::]:443 default_server ipv6only=off; # if your nginx version is >= 1.9.5 you can also add the "http2" flag here
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
proxy_buffering off;
location / {
proxy_pass http://localhost:8123;
proxy_set_header Host $host;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
default.conf (/etc/nginx/conf.d/default.conf)
server {
listen 8100;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
The problem is this line:
proxy_pass http://localhost:8123;
When running in containers, you should understand that localhost refers to the nginx container itself, not the Docker host.
So you should either change localhost to the hostname of your Docker host, or use docker-compose so that you can refer to the other container by its service name.
If you are just running the containers separately, you could also use the container's IP for now, but note that it will change every time the container is restarted.
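A minimal docker-compose sketch of the service-name approach (service names, ports, and the image tag are illustrative assumptions, not taken from the question):

# docker-compose.yml
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./hass.conf:/etc/nginx/conf.d/hass.conf:ro
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable

With this layout, the proxy line in hass.conf becomes proxy_pass http://homeassistant:8123; because compose makes each service resolvable by its name on the shared network.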

Rails static images not showing up

I apologize for asking what seems to be such a simple question that is asked again and again.
I built a small app using Rails 4.2.3. Everything works locally so I am trying to deploy to AWS with Elastic Beanstalk and the following setup: 64bit Amazon Linux 2016.03 v2.1.6 running Ruby 2.3 (Puma)
Before I deploy I run:
rake assets:precompile RAILS_ENV=production
I then commit those files to git and use eb deploy to push the files up to the EC2 instance.
Some things work:
When I ssh into that instance, I see all of the precompiled assets in /var/app/current/public/assets
CSS all looks correct
Coffeescripts are running properly
But neither static images nor the ones I upload via Paperclip show up as I would expect.
In production.rb I have this line:
config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?
I can confirm that key is not in my ENV variable by going into the console:
irb(main):001:0> ENV['RAILS_SERVE_STATIC_FILES']
=> nil
which leads me to believe that the serving of these files should be handled by nginx. I can confirm that nginx is running, but quite frankly I don't know how it is configured.
[ec2-user@ip-172-31-13-16 assets]$ ps waux | grep nginx
root 2800 0.0 0.4 109364 4192 ? Ss Oct08 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 2801 0.0 0.6 109820 6672 ? S Oct08 0:09 nginx: worker process
ec2-user 21321 0.0 0.2 110456 2092 pts/0 S+ 23:02 0:00 grep --color=auto nginx
I "think" I am supposed to edit my .ebextensions file to do a few things automatically when I deploy, but that's about where I got stuck. Any suggestions?
/etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 ;
listen [::]:80 ;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl;
# listen [::]:443 ssl;
# server_name localhost;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# # It is *strongly* recommended to generate unique DH parameters
# # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
# #ssl_dhparam "/etc/pki/nginx/dhparams.pem";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
/etc/nginx/conf.d/virtual.conf
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
/etc/nginx/conf.d/webapp_healthd.conf
upstream my_app {
server unix:///var/run/puma/my_app.sock;
}
log_format healthd '$msec"$uri"'
'$status"$request_time"$upstream_response_time"'
'$http_x_forwarded_for';
server {
listen 80;
server_name _ localhost; # need to listen to localhost for worker tier
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/access.log main;
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
location / {
proxy_pass http://my_app; # match the name of upstream directive which is defined above
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /assets {
alias /var/app/current/public/assets;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
location /public {
alias /var/app/current/public;
gzip_static on;
gzip on;
expires max;
add_header Cache-Control public;
}
}
Fix webapp_healthd.conf to make nginx serve files from the public folder; if it cannot, or they do not exist, proxy_pass to your app:
upstream my_app {
    server unix:///var/run/puma/my_app.sock;
}

server {
    listen 80;
    server_name _; # need to listen to localhost for worker tier

    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
        set $hour $4;
    }

    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;

    index index.html index.htm;

    location @app {
        log_not_found off;
        access_log off;
        proxy_pass http://my_app; # proxy to the upstream defined above
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }

    root /var/app/current/public;

    location / {
        try_files $uri $uri/ @app; # serve static files if present, otherwise fall back to @app
    }
}
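After editing, the usual check-and-reload cycle applies (on that Amazon Linux generation, service rather than systemctl):

sudo nginx -t               # validate the edited config
sudo service nginx reload   # apply it without dropping connections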
