From where we start
We have a web application myapp made of Docker containers. The 3 main containers are database, backend and reverse_proxy. The reverse_proxy runs NGINX with a conf like this:
server {
    listen 443 ssl;
    server_name myapp;
    ssl_certificate /run/secrets/myapp.crt;
    ssl_certificate_key /run/secrets/myapp.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    set $backend http://backend:8000;

    # The backend HTTP api
    location /api/ {
        resolver 127.0.0.11;
        proxy_pass $backend;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_buffering off;
        proxy_request_buffering off;
    }

    # The frontend Single Page Application
    location / {
        root /app/dist;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    location /static {
        alias /app/static;
    }
}
We run the containers with Docker Swarm (with volumes, secrets and default network), and everything is ok.
What we want to achieve
We have a Debian VM somewhere on the web that we want to use as a demo platform for multiple versions of myapp, for example 1.0.0, 1.1.0, and 1.2.0. We plan to run the 3 stacks with Docker Swarm, exposing a different port for each, for example 127.0.0.1:6000, 127.0.0.1:6001 and 127.0.0.1:6002.
Then we plan to run a main reverse proxy in front of these stacks, which would just forward the HTTPS traffic to the stack matching the URL, and serve an index.html file listing the available myapp versions:
                            main NGINX (Docker container)
                 +----------------------------------------------------+
    443          |  https://myapp.com/1.0.0/...  -->  127.0.0.1:6000  |
client <-------->|  https://myapp.com/1.1.0/...  -->  127.0.0.1:6001  |
    HTTPS        |  https://myapp.com/1.2.0/...  -->  127.0.0.1:6002  |
                 +----------------------------------------------------+
                                           |
                                           |  HTTPS, forwarded to the stack matching the URL
                                           v
                 +-------------------------------+      +--------------+
                 | stack for myapp X.Y.Z         |      | Docker Swarm |
                 | 443                           |<---->| secrets      |
                 | reverse_proxy <-> back <-> db |      +--------------+
                 +-------------------------------+
                                 |
                              volumes

(one Docker Swarm stack per version: 1.0.0 behind 127.0.0.1:6000, 1.1.0 behind
127.0.0.1:6001, 1.2.0 behind 127.0.0.1:6002, each stack with its own volumes and secrets)
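For reference, the routing part of the main NGINX might look roughly like this if it matched on the URL path as drawn above. This is only a rough, untested sketch: the certificate paths are placeholders, and they are only needed because matching on the path forces this proxy to terminate TLS itself, which is exactly what we would like to avoid (hence the questions below):
# Rough sketch, not tested: main NGINX routing by URL path.
# The ssl_* lines are placeholders; they are only required because inspecting
# the path means this proxy has to terminate TLS itself.
server {
    listen 443 ssl;
    server_name myapp.com;
    ssl_certificate     /path/to/myapp.com.crt;   # placeholder
    ssl_certificate_key /path/to/myapp.com.key;   # placeholder

    # landing page listing the available versions
    location / {
        root  /var/www/demo;                      # placeholder for the index.html
        index index.html;
    }

    # one location per demo stack, re-encrypting towards the stack's reverse_proxy
    location /1.0.0/ {
        proxy_pass https://127.0.0.1:6000/;
    }
    location /1.1.0/ {
        proxy_pass https://127.0.0.1:6001/;
    }
    location /1.2.0/ {
        proxy_pass https://127.0.0.1:6002/;
    }
}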
Questions
Is this a good approach/design/architecture?
Is it OK to have the 3 secondary reverse proxies in the Docker stacks manage the SSL (encrypting/decrypting, holding the certificate and private key), while the main reverse proxy just forwards the HTTPS traffic? If so, what would its config look like? Is this documented anywhere?
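For illustration, here is the kind of thing we imagine for the "just forward the HTTPS traffic" part. It is an untested sketch using NGINX's stream module; since a pure TCP/TLS forwarder never sees the URL path, this sketch keys on the SNI hostname instead, and the hostnames 1-0-0.myapp.com etc. are made up for the example:
# Untested sketch: TLS passthrough with the ngx_stream_* modules.
# The client's TLS session is terminated inside each stack's reverse_proxy,
# which keeps the certificate and key; the main proxy never decrypts anything.
# (This block lives at the top level of nginx.conf, next to the http block.)
stream {
    # Pick the upstream from the SNI name sent by the client (hypothetical names).
    map $ssl_preread_server_name $demo_upstream {
        1-0-0.myapp.com 127.0.0.1:6000;
        1-1-0.myapp.com 127.0.0.1:6001;
        1-2-0.myapp.com 127.0.0.1:6002;
        default         127.0.0.1:6000;
    }

    server {
        listen 443;
        ssl_preread on;              # read the SNI name without decrypting
        proxy_pass $demo_upstream;   # raw TCP/TLS forwarded to the chosen stack
    }
}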
Related
I have an nginx web app that runs on a Google VM instance and works perfectly, but when I try to add SSL support I can no longer access the site ("Unable to Connect"). I believe I have configured my nginx config correctly to cater for SSL. I can also see in the logs that nginx and certbot are started.
http {
    upstream react {
        server client:3000;
    }

    upstream phoenix {
        server web:4000;
    }

    server {
        # Listen to port 443 on both IPv4 and IPv6.
        listen 80 default_server
        listen 443 ssl default_server reuseport;
        listen [::]:443 ssl default_server reuseport;

        # Domain names this server should respond to.
        server_name tabi.blanknodes.com www.tabi.blanknodes.com;

        # Load the certificate files.
        ssl_certificate /etc/letsencrypt/live/tabi/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/tabi/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/tabi/chain.pem;

        # Load the Diffie-Hellman parameter.
        ssl_dhparam /etc/letsencrypt/dhparams/dhparam.pem;

        location / {
            proxy_pass https://react;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /api {
            proxy_pass https://phoenix/api;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }

        location /socket {
            proxy_pass https://phoenix/socket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Logs:
nginx_1 | Starting the Nginx service
nginx_1 | Starting the certbot autorenewal service
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: using the "epoll" event method
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: nginx/1.21.0
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: built by gcc 8.3.0 (Debian 8.3.0-6)
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: OS: Linux 5.4.104+
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker processes
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker process 122
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker process 123
nginx_1 | Couldn't find the dhparam file '/etc/letsencrypt/dhparams/dhparam.pem'; creating it...
nginx_1 | mkdir: created directory '/etc/letsencrypt/dhparams'
nginx_1 |
nginx_1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nginx_1 | % ATTENTION! %
nginx_1 | % %
nginx_1 | % This script will now create a 2048 bit Diffie-Hellman %
nginx_1 | % parameter to use during the SSL handshake. %
nginx_1 | % %
nginx_1 | % >>>>> This MIGHT take a VERY long time! <<<<< %
nginx_1 | % (Took 65 minutes for 4096 bit on an old 3GHz CPU) %
nginx_1 | % %
nginx_1 | % However, there is some randomness involved so it might %
nginx_1 | % be both faster or slower for you. 2048 is secure enough %
nginx_1 | % for today and quite fast to generate. These files will %
nginx_1 | % only have to be created once so please be patient. %
nginx_1 | % A message will be displayed when this process finishes. %
nginx_1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nginx_1 |
nginx_1 | Will now output to the following file: '/etc/letsencrypt/dhparams/dhparam.pem'
nginx_1 | Generating DH parameters, 2048 bit long safe prime, generator 2
nginx_1 | This is going to take a long time
I'm using docker-nginx-certbot for my nginx + certbot.
I'm trying to use certbot certonly --webroot to create certs for multiple domains, but I got only one certificate.
Well, I went through this tutorial: link, which works great for one domain.
So I tried to do the same for two domains (sub.domain.com and domain.com).
nginx conf:
server {
    listen 80;
    listen [::]:80;
    server_name domain.com www.domain.com;

    location / {
        proxy_pass http://api:80;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name sub.domain.com www.sub.domain.com;

    location / {
        proxy_pass http://api:80;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
Then I used this command inside the certbot container:
command: certonly --force-renewal --webroot --webroot-path=/var/www/html -d domain.com -d sub.domain.com --email some.email@gmail.com --agree-tos --no-eff-email --staging
It works, but I got only one certificate (sub.domain.com).
certbot | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot | Plugins selected: Authenticator webroot, Installer None
certbot | Renewing an existing certificate
certbot | IMPORTANT NOTES:
certbot | - Congratulations! Your certificate and chain have been saved at:
certbot | /etc/letsencrypt/live/sub.domain.com/fullchain.pem
certbot | Your key file has been saved at:
certbot | /etc/letsencrypt/live/sub.domain.com/privkey.pem
certbot | Your cert will expire on 2020-06-09. To obtain a new or tweaked
certbot | version of this certificate in the future, simply run certbot
certbot | again. To non-interactively renew *all* of your certificates, run
certbot | "certbot renew"
certbot exited with code 0
I am dockerising my rails application with nginx, passenger, ruby 2.4.0 and rails 5.1.6.
I was following this to set up phusion/passenger on Docker.
When I run docker-compose up, it throws the following error:
nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
Here is the full trace.
postgres_1 | 2018-08-31 09:26:31.495 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2018-08-31 09:26:31.495 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2018-08-31 09:26:31.502 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2018-08-31 09:26:31.526 UTC [47] LOG: database system was shut down at 2018-08-31 09:26:31 UTC
postgres_1 | 2018-08-31 09:26:31.531 UTC [1] LOG: database system is ready to accept connections
company_data_1 | *** Running /etc/my_init.d/30_presetup_nginx.sh...
company_data_1 | Aug 31 09:26:31 caeb10c704df syslog-ng[11]: EOF on control channel, closing connection;
company_data_1 | *** Running /etc/rc.local...
company_data_1 | *** Booting runit daemon...
company_data_1 | *** Runit started as PID 18
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 25) 0s
company_data_1 | Aug 31 09:26:31 caeb10c704df cron[23]: (CRON) INFO (pidfile fd = 3)
company_data_1 | Aug 31 09:26:31 caeb10c704df cron[23]: (CRON) INFO (Running #reboot jobs)
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 30) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 30) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 36) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 36) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 34#34: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 37#37: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 34#34: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 37#37: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
Dockerfile
FROM ubuntu:16.04
FROM phusion/passenger-ruby24
LABEL Name=company_data Version=0.0.1
EXPOSE 80
RUN rm -f /etc/service/nginx/down
# ADD default /etc/nginx/sites-enabled/default
RUN rm /etc/nginx/sites-enabled/default
ADD company_financials.conf /etc/nginx/sites-enabled/company_financials.conf
ADD 00_app_env.conf /etc/nginx/conf.d/00_app_env.conf
ADD passenger.conf /etc/nginx/passenger.conf
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
RUN apt-get install -y nginx openssh-server git-core openssh-client curl
RUN apt-get install -y build-essential
WORKDIR /home/app/company_financials
COPY . /home/app/company_financials
RUN chmod -R 777 /home/app/company_financials
# install RVM, Ruby, and Bundler
RUN \curl -sSL https://get.rvm.io | bash
RUN /bin/bash -l -c "rvm install ruby 2.4.0"
RUN /bin/bash -lc 'rvm --default use ruby-2.4.0'
RUN /bin/bash -l -c "which ruby"
RUN /bin/bash -l -c "ls /usr/local/rvm/rubies/"
RUN /bin/bash -l -c "ls /home/app/company_financials"
RUN /bin/bash -l -c "gem install bundler --no-ri --no-rdoc"
RUN /bin/bash -l -c "bundle install"
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
docker-compose.yml
version: '2.1'
services:
  company_data:
    image: company_data
    build: .
    ports:
      - "80:80"
    env_file:
      - .env
    links:
      - postgres
      - redis
      - sidekiq
  redis:
    image: redis
    command: ["redis-server", "--appendonly", "yes"]
    hostname: redis
    restart: always
  postgres:
    image: 'postgres:10.3-alpine'
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - postgres
      - redis
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Phusion Passenger config
    ##
    # Uncomment it if you installed passenger or passenger-enterprise
    ##
    include /etc/nginx/passenger.conf;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
passenger.conf
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/local/rvm/rubies/ruby-2.4.0/bin/ruby;
nginx_server.conf
server {
    listen 80;
    server_name "~^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$" localhost;
    passenger_enabled on;
    passenger_user app;
    root /home/app/company_financials/public;
}
00_app_env.conf
passenger_app_env development;
As per: https://www.phusionpassenger.com/library/config/nginx/reference/#passenger_app_env
The context for passenger_app_env should be http, server, location, or if. What I see from your question is that 00_app_env.conf is not nested in any context.
You need to put this configuration in a context.
For example, your passenger.conf file is in the http context because you included it (include /etc/nginx/passenger.conf;) inside the http block. You need to do the same with 00_app_env.conf.
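For example, a minimal sketch of what that could look like, assuming you simply move the directive into the server block you already have in nginx_server.conf (and drop the bare 00_app_env.conf from conf.d):
# Sketch of the suggested change: keep passenger_app_env inside a context.
server {
    listen 80;
    server_name "~^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$" localhost;
    passenger_enabled on;
    passenger_user app;
    passenger_app_env development;   # moved here, now inside a server context
    root /home/app/company_financials/public;
}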
I have 6 containers that make up my application with microservices. In this project I have an example app. The problem happens when I try to access the URL (http://localhost:80/): the browser returns error 500 with the message:
We're sorry, but something went wrong. If you are the
application owner check the logs for more information.
List Containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69ee0ef25faa falcon_nginx "/usr/sbin/nginx" 59 seconds ago Up 10 seconds 0.0.0.0:80->80/tcp falcon_nginx_1
411e41943a35 falcon_web "foreman start" About a minute ago Up 11 seconds 0.0.0.0:3000->3000/tcp falcon_web_1
95de66fe7aae redis "/entrypoint.sh redis" About a minute ago Up 12 seconds 6379/tcp falcon_redis_1
7693b7e2d2eb memcached:latest "/entrypoint.sh memca" About a minute ago Up 12 seconds 0.0.0.0:11211->11211/tcp falcon_memcached_1
020566c4a77a mysql:latest "/entrypoint.sh mysql" About a minute ago Up 12 seconds 0.0.0.0:3306->3306/tcp falcon_db_1
7bf8176503f4 busybox "true" About a minute ago Exited (0) About a minute ago mysql_data
Log:
web_1 | 14:34:04 web.1 | started with pid 32
web_1 | 14:34:04 log.1 | started with pid 33
web_1 | 14:34:04 log.1 | ==> log/puma.stderr.log <==
web_1 | 14:34:04 log.1 | === puma startup: 2015-11-05 10:39:01 -0300 ===
web_1 | 14:34:04 log.1 | === puma startup: 2015-11-09 14:33:16 +0000 ===
web_1 | 14:34:04 log.1 |
web_1 | 14:34:04 log.1 | ==> log/development.log <==
web_1 | 14:34:04 log.1 |
web_1 | 14:34:04 log.1 | ==> log/puma.stdout.log <==
web_1 | 14:34:04 log.1 | [17288] - Gracefully shutting down workers...
web_1 | 14:34:04 log.1 | [17288] === puma shutdown: 2015-11-05 10:39:07 -0300 ===
web_1 | 14:34:04 log.1 | [17288] - Goodbye!
web_1 | 14:34:04 log.1 | === puma startup: 2015-11-09 14:33:16 +0000 ===
web_1 | 14:34:04 log.1 | [31] * Starting control server on unix:///tmp/puma-status-1447079596225-31
web_1 | 14:34:04 log.1 | [31] - Gracefully shutting down workers...
web_1 | 14:34:04 log.1 | [39] Early termination of worker
web_1 | 14:34:04 log.1 | [43] Early termination of worker
web_1 | 14:34:04 log.1 | [31] === puma shutdown: 2015-11-09 14:33:24 +0000 ===
web_1 | 14:34:04 log.1 | [31] - Goodbye!
web_1 | 14:34:05 web.1 | [32] Puma starting in cluster mode...
web_1 | 14:34:05 web.1 | [32] * Version 2.11.2 (ruby 2.1.5-p273), codename: Intrepid Squirrel
web_1 | 14:34:05 web.1 | [32] * Min threads: 1, max threads: 16
web_1 | 14:34:05 web.1 | [32] * Environment: development
web_1 | 14:34:05 web.1 | [32] * Process workers: 2
web_1 | 14:34:05 web.1 | [32] * Phased restart available
web_1 | 14:34:05 web.1 | [32] * Listening on unix:///home/app/tmp/sockets/puma.sock
web_1 | 14:34:05 web.1 | [32] Use Ctrl-C to stop
web_1 | 14:34:05 log.1 | === puma startup: 2015-11-09 14:34:05 +0000 ===
web_1 | 14:34:05 log.1 |
web_1 | 14:34:05 log.1 | ==> log/puma.stderr.log <==
web_1 | 14:34:05 log.1 | === puma startup: 2015-11-09 14:34:05 +0000 ===
web_1 | 14:34:05 log.1 |
web_1 | 14:34:05 log.1 | ==> log/puma.stdout.log <==
web_1 | 14:34:05 log.1 | [32] * Starting control server on unix:///tmp/puma-status-1447079645129-32
web_1 | 14:34:09 log.1 | [32] - Worker 1 (pid: 44) booted, phase: 0
web_1 | 14:34:09 log.1 | [32] - Worker 0 (pid: 40) booted, phase: 0
Nginx site:
upstream rails {
    server web fail_timeout=0;
}

server {
    listen 80;
    client_max_body_size 2M;
    server_name localhost;
    keepalive_timeout 5;
    root /home/app/public;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location ~ ^/(assets)/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location / {
        try_files $uri/index.html $uri.html $uri @app;
        error_page 404 /404.html;
        error_page 422 /422.html;
        error_page 500 502 503 504 /500.html;
        error_page 403 /403.html;
    }

    location @rails {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://rails;
    }

    location = /favicon.ico {
        expires max;
        add_header Cache-Control public;
    }
}
For more details, check the project.
Docker version 1.8.1
docker-compose version: 1.4.0
Thanks for the help :)
There is an error in your nginx conf:
try_files $uri/index.html $uri.html $uri @app;
Try this:
try_files $uri/index.html $uri.html $uri @rails;
OK guys, I found the problem with help from @ehoffmann.
As @ehoffmann commented, I changed app to rails.
Communication between puma and nginx must be via TCP. In docker-compose.yml I changed the line command: foreman start to command: bundle exec puma -e development -p 3000.
Log: Listening on tcp://0.0.0.0:3000
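For completeness, the nginx upstream then has to point at that TCP port instead of a socket. A sketch, assuming the compose service is still called web as in the container list above:
# Sketch: with puma now on tcp://0.0.0.0:3000, the upstream targets that port.
upstream rails {
    server web:3000 fail_timeout=0;
}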
I've just set up nginx and unicorn. I start unicorn like this:
unicorn_rails -c /var/www/Web/config/unicorn.rb -D
I've tried the various commands for stopping unicorn but none of them work. I usually just restart the server and start unicorn again, but this is very annoying.
EDIT
unicorn.rb file (/var/www/Web/config/):
# Set the working application directory
# working_directory "/path/to/your/app"
working_directory "/var/www/Web"
# Unicorn PID file location
# pid "/path/to/pids/unicorn.pid"
pid "/var/www/Web/pids/unicorn.pid"
# Path to logs
# stderr_path "/path/to/log/unicorn.log"
# stdout_path "/path/to/log/unicorn.log"
stderr_path "/var/www/Web/log/unicorn.log"
stdout_path "/var/www/Web/log/unicorn.log"
# Unicorn socket
listen "/tmp/unicorn.Web.sock"
listen "/tmp/unicorn.Web.sock"
# Number of processes
# worker_processes 4
worker_processes 2
# Time-out
timeout 30
default.conf (/etc/nginx/conf.d/):
upstream app {
    # Path to Unicorn SOCK file, as defined previously
    server unix:/tmp/unicorn.Web.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;

    # Application root, as defined previously
    root /root/Web/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
This is what I do:
$ for i in `ps awx | grep unico | grep -v grep | awk '{print $1;}'`; do kill -9 $i; done && unicorn_rails -c /var/www/Web/config/unicorn.rb -D
If you don't want to type this whole line every time, script it, like this:
/var/www/Web/unicorn_restart.sh:
#!/bin/bash
for i in `ps awx | grep unicorn | grep -v grep | awk '{print $1;}'`; do
kill $i
done
unicorn_rails -c /var/www/Web/config/unicorn.rb -D
and then:
$ chmod +x /var/www/Web/unicorn_restart.sh
and invoke it each time by calling:
$ /var/www/Web/unicorn_restart.sh