Error 500 access in rails app with puma and nginx - ruby-on-rails

I have six containers that make up my application (a microservices setup); the project includes an example app. The problem occurs when I try to access the URL (http://localhost:80/): the browser returns a 500 error with the message:
We're sorry, but something went wrong. If you are the application owner check the logs for more information.
Container list:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69ee0ef25faa falcon_nginx "/usr/sbin/nginx" 59 seconds ago Up 10 seconds 0.0.0.0:80->80/tcp falcon_nginx_1
411e41943a35 falcon_web "foreman start" About a minute ago Up 11 seconds 0.0.0.0:3000->3000/tcp falcon_web_1
95de66fe7aae redis "/entrypoint.sh redis" About a minute ago Up 12 seconds 6379/tcp falcon_redis_1
7693b7e2d2eb memcached:latest "/entrypoint.sh memca" About a minute ago Up 12 seconds 0.0.0.0:11211->11211/tcp falcon_memcached_1
020566c4a77a mysql:latest "/entrypoint.sh mysql" About a minute ago Up 12 seconds 0.0.0.0:3306->3306/tcp falcon_db_1
7bf8176503f4 busybox "true" About a minute ago Exited (0) About a minute ago mysql_data
Log:
web_1 | 14:34:04 web.1 | started with pid 32
web_1 | 14:34:04 log.1 | started with pid 33
web_1 | 14:34:04 log.1 | ==> log/puma.stderr.log <==
web_1 | 14:34:04 log.1 | === puma startup: 2015-11-05 10:39:01 -0300 ===
web_1 | 14:34:04 log.1 | === puma startup: 2015-11-09 14:33:16 +0000 ===
web_1 | 14:34:04 log.1 |
web_1 | 14:34:04 log.1 | ==> log/development.log <==
web_1 | 14:34:04 log.1 |
web_1 | 14:34:04 log.1 | ==> log/puma.stdout.log <==
web_1 | 14:34:04 log.1 | [17288] - Gracefully shutting down workers...
web_1 | 14:34:04 log.1 | [17288] === puma shutdown: 2015-11-05 10:39:07 -0300 ===
web_1 | 14:34:04 log.1 | [17288] - Goodbye!
web_1 | 14:34:04 log.1 | === puma startup: 2015-11-09 14:33:16 +0000 ===
web_1 | 14:34:04 log.1 | [31] * Starting control server on unix:///tmp/puma-status-1447079596225-31
web_1 | 14:34:04 log.1 | [31] - Gracefully shutting down workers...
web_1 | 14:34:04 log.1 | [39] Early termination of worker
web_1 | 14:34:04 log.1 | [43] Early termination of worker
web_1 | 14:34:04 log.1 | [31] === puma shutdown: 2015-11-09 14:33:24 +0000 ===
web_1 | 14:34:04 log.1 | [31] - Goodbye!
web_1 | 14:34:05 web.1 | [32] Puma starting in cluster mode...
web_1 | 14:34:05 web.1 | [32] * Version 2.11.2 (ruby 2.1.5-p273), codename: Intrepid Squirrel
web_1 | 14:34:05 web.1 | [32] * Min threads: 1, max threads: 16
web_1 | 14:34:05 web.1 | [32] * Environment: development
web_1 | 14:34:05 web.1 | [32] * Process workers: 2
web_1 | 14:34:05 web.1 | [32] * Phased restart available
web_1 | 14:34:05 web.1 | [32] * Listening on unix:///home/app/tmp/sockets/puma.sock
web_1 | 14:34:05 web.1 | [32] Use Ctrl-C to stop
web_1 | 14:34:05 log.1 | === puma startup: 2015-11-09 14:34:05 +0000 ===
web_1 | 14:34:05 log.1 |
web_1 | 14:34:05 log.1 | ==> log/puma.stderr.log <==
web_1 | 14:34:05 log.1 | === puma startup: 2015-11-09 14:34:05 +0000 ===
web_1 | 14:34:05 log.1 |
web_1 | 14:34:05 log.1 | ==> log/puma.stdout.log <==
web_1 | 14:34:05 log.1 | [32] * Starting control server on unix:///tmp/puma-status-1447079645129-32
web_1 | 14:34:09 log.1 | [32] - Worker 1 (pid: 44) booted, phase: 0
web_1 | 14:34:09 log.1 | [32] - Worker 0 (pid: 40) booted, phase: 0
Nginx site:
upstream rails {
  server web fail_timeout=0;
}

server {
  listen 80;
  client_max_body_size 2M;
  server_name localhost;
  keepalive_timeout 5;
  root /home/app/public;
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  location ~ ^/(assets)/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
  }

  location / {
    try_files $uri/index.html $uri.html $uri @app;
    error_page 404 /404.html;
    error_page 422 /422.html;
    error_page 500 502 503 504 /500.html;
    error_page 403 /403.html;
  }

  location @rails {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    proxy_pass http://rails;
  }

  location = /favicon.ico {
    expires max;
    add_header Cache-Control public;
  }
}
For more details, check the project.
Docker version 1.8.1
docker-compose version: 1.4.0
Thanks for the help :)

There is an error in your nginx conf: the named location is defined as @rails, but try_files references @app:
try_files $uri/index.html $uri.html $uri @app;
Try this:
try_files $uri/index.html $uri.html $uri @rails;

OK guys, I found the problem with the help of @ehoffmann.
As @ehoffmann commented, I changed @app to @rails.
Communication between Puma and nginx must also be over TCP: the Unix socket Puma listened on lives inside the web container, where nginx can't reach it. So in docker-compose.yml I changed the line command: foreman start to command: bundle exec puma -e development -p 3000.
Log: Listening on tcp://0.0.0.0:3000
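For reference, a minimal sketch of the two pieces that have to agree, assuming the Rails service is named web (as in the container list above); pointing the upstream at web:3000 is an assumption, since server web alone would default to port 80:

# docker-compose.yml (excerpt)
web:
  build: .
  command: bundle exec puma -e development -p 3000

# nginx site (excerpt): the upstream must target the TCP port Puma listens on
upstream rails {
  server web:3000 fail_timeout=0;
}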

Related

nginx reverse proxy that just forwards HTTPS traffic

From where we start
We have a web application myapp made of Docker containers. The 3 main containers are database, backend and reverse_proxy. The reverse_proxy runs NGINX with a conf like this:
server {
  listen 443 ssl;
  server_name myapp;
  ssl_certificate /run/secrets/myapp.crt;
  ssl_certificate_key /run/secrets/myapp.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers HIGH:!aNULL:!MD5;
  set $backend http://backend:8000;

  # The backend HTTP api
  location /api/ {
    resolver 127.0.0.11;
    proxy_pass $backend;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    proxy_buffering off;
    proxy_request_buffering off;
  }

  # The frontend Single Page Application
  location / {
    root /app/dist;
    index index.html;
    try_files $uri $uri/ /index.html;
  }

  location /static {
    alias /app/static;
  }
}
We run the containers with Docker Swarm (with volumes, secrets and default network), and everything is ok.
What we want to achieve
We have a Debian VM somewhere on the web that we want to use as a demo platform for multiple versions of myapp, for example 1.0.0, 1.1.0, and 1.2.0. We plan to run the 3 stacks with Docker Swarm, exposing different ports, for example 127.0.0.1:6000, 127.0.0.1:6001 and 127.0.0.1:6002.
Then we plan to run a main reverse proxy in front of these stacks that would just forward the HTTPS traffic to the stack matching the URL, and serve an index.html file listing the available myapp versions:
Docker container Docker Swarm secrets
main NGINX
+-------------------+ +-------------------------------+ +-----+
| | | stack for myapp 1.0.0 | | |
| URL=https://myapp.com/1.0.0/... 6000 | 443 HTTP |<-->| |
| | <----------------> | reverse_proxy <-> back <-> db | | |
| | HTTPS +-------------------------------+ | |
| | | | |
| | volumes | |
| | | |
| | | |
| | +-------------------------------+ | |
443 | | | stack for myapp 1.1.0 | | |
client <----->| URL=https://myapp.com/1.1.0/... 6001 | 443 HTTP |<-->| |
HTTPS | | <----------------> | reverse_proxy <-> back <-> db | | |
| | HTTPS +-------------------------------+ | |
| | | | |
| | volumes | |
| | | |
| | | |
| | +-------------------------------+ | |
| | | stack for myapp 1.2.0 | | |
| URL=https://myapp.com/1.2.0/... 6002 | 443 HTTP |<-->| |
| | <----------------> | reverse_proxy <-> back <-> db | | |
+-------------------+ HTTP +-------------------------------+ +-----+
|
volumes
Questions
Is this a good approach/design/architecture?
Is it OK to have the 3 secondary reverse proxies in the Docker stacks manage the SSL (encrypt/decrypt, hold the certificate and private key), and the main reverse proxy just forward the HTTPS traffic? If so, what would its config look like, and is it documented?
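For what it's worth, a minimal sketch of what the main proxy could look like. Because the version is carried in the URL path, a pure TCP/SNI passthrough cannot do this routing; the main proxy has to terminate TLS itself to read the path, and can then re-encrypt toward each stack. The ports and version paths come from the diagram above; the certificate paths and the index.html root are assumptions:

server {
  listen 443 ssl;
  server_name myapp.com;
  ssl_certificate /run/secrets/myapp.crt;        # assumed path
  ssl_certificate_key /run/secrets/myapp.key;    # assumed path

  # index.html listing the available versions
  location = / {
    root /var/www/demo;                          # assumed location
  }

  location /1.0.0/ {
    proxy_pass https://127.0.0.1:6000/;          # stack for myapp 1.0.0
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

  location /1.1.0/ {
    proxy_pass https://127.0.0.1:6001/;          # stack for myapp 1.1.0
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

  location /1.2.0/ {
    proxy_pass https://127.0.0.1:6002/;          # stack for myapp 1.2.0
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}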

flask_restx app with Nginx + Gunicorn in Docker doesn't respond to HTTP requests

My flask_restx application is running in a Docker container, with nginx as a reverse proxy and Gunicorn workers.
When I send an HTTP request to my application, the expected result is a response once the application finishes its job, which takes about 3-5 minutes.
However, I never get a response; the request fails about a minute after it is sent.
I added a timeout to my gunicorn config. Before I did that, I received a 502 Bad Gateway error from the nginx server after 30 seconds, since gunicorn's default timeout is 30 seconds.
In any case, my application keeps working in the background and successfully finishes its job. It is only the response to my request that never arrives.
Furthermore, before I specified --timeout 300 for gunicorn, I could see my application's print messages, but the workers timed out and rebooted randomly while my application was handling a request.
Here is the gunicorn config as defined in the Dockerfile:
EXPOSE 5000
CMD gunicorn --workers 3 --bind 0.0.0.0:5000 wsgi:app --log-level=debug --timeout 300
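For context, the startup log below shows a gunicorn.conf.py being loaded (config: ./gunicorn.conf.py); the same settings can live there instead of the CMD flags. A minimal sketch mirroring the flags above (the file contents are an assumption):

# gunicorn.conf.py -- sketch of the equivalent settings
bind = "0.0.0.0:5000"
workers = 3
loglevel = "debug"
timeout = 300           # seconds a silent worker is allowed before being killed (default 30)
graceful_timeout = 300  # seconds a worker gets to finish up on restart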
The nginx config:
server {
  listen 8080 ssl;
  server_name localhost;
  access_log off;
  error_log off;

  location / {
    proxy_pass http://application-url:5000/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_redirect off;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90001;
    proxy_send_timeout 90001;
    proxy_read_timeout 90001;
    send_timeout 90001;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
  }
}
Here is the log of the container:
Run pod=$( kubectl get pods -n application | grep application | grep -v nginx | awk '{print $1}')
[2022-08-03 16:23:16 +0000] [7] [DEBUG] Current configuration:
config: ./gunicorn.conf.py
wsgi_app: None
bind: ['0.0.0.0:5000']
backlog: 2048
workers: 3
worker_class: sync
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 300
graceful_timeout: 300
keepalive: 2
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
print_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /application/connector/
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 101
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: -
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: wsgi:app
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7f2c8dfc8a60>
on_reload: <function OnReload.on_reload at 0x7f2c8dfc8b80>
when_ready: <function WhenReady.when_ready at 0x7f2c8dfc8ca0>
pre_fork: <function Prefork.pre_fork at 0x7f2c8dfc8dc0>
post_fork: <function Postfork.post_fork at 0x7f2c8dfc8ee0>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f2c8ddb3040>
worker_int: <function WorkerInt.worker_int at 0x7f2c8ddb3160>
worker_abort: <function WorkerAbort.worker_abort at 0x7f2c8ddb3280>
pre_exec: <function PreExec.pre_exec at 0x7f2c8ddb33a0>
pre_request: <function PreRequest.pre_request at 0x7f2c8ddb34c0>
post_request: <function PostRequest.post_request at 0x7f2c8ddb3550>
child_exit: <function ChildExit.child_exit at 0x7f2c8ddb3670>
worker_exit: <function WorkerExit.worker_exit at 0x7f2c8ddb3790>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f2c8ddb38b0>
on_exit: <function OnExit.on_exit at 0x7f2c8ddb39d0>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2022-08-03 16:23:16 +0000] [7] [INFO] Starting gunicorn 20.1.0
[2022-08-03 16:23:16 +0000] [7] [DEBUG] Arbiter booted
[2022-08-03 16:23:16 +0000] [7] [INFO] Listening at: http://0.0.0.0:5000 (7)
[2022-08-03 16:23:16 +0000] [7] [INFO] Using worker: sync
[2022-08-03 16:23:16 +0000] [9] [INFO] Booting worker with pid: 9
[2022-08-03 16:23:16 +0000] [10] [INFO] Booting worker with pid: 10
[2022-08-03 16:23:16 +0000] [11] [INFO] Booting worker with pid: 11
[2022-08-03 16:23:16 +0000] [7] [DEBUG] 3 workers
[2022-08-03 16:23:34 +0000] [10] [DEBUG] GET /cgi-bin/ExportLogs.sh
[2022-08-03 16:23:39 +0000] [10] [DEBUG] POST /
[2022-08-03 16:23:39 +0000] [10] [DEBUG] GET /downloader.php
[2022-08-03 16:24:04 +0000] [10] [DEBUG] GET /
[2022-08-03 16:24:04 +0000] [10] [DEBUG] GET /swaggerui/droid-sans.css
[2022-08-03 16:25:16 +0000] [10] [DEBUG] Ignoring EPIPE
[2022-08-03 16:25:39 +0000] [10] [DEBUG] PUT /wp-content/plugins/w3-total-cache/pub/sns.php
[2022-08-03 16:26:01 +0000] [10] [DEBUG] GET /index.php
[2022-08-03 16:26:06 +0000] [10] [DEBUG] GET /listing/
[2022-08-03 16:26:42 +0000] [10] [DEBUG] GET /filter/jmol/js/jsmol/php/jsmol.php
[2022-08-03 16:27:50 +0000] [9] [DEBUG] POST /ViewPoint/admin/Site/ViewPointLogin
[2022-08-03 16:28:07 +0000] [10] [DEBUG] GET /wp-json/wp/v2/asked-question
[2022-08-03 16:28:57 +0000] [11] [DEBUG] GET /client/index.html
[2022-08-03 16:31:11 +0000] [11] [DEBUG] GET /_users/_all_docs
[2022-08-03 16:31:30 +0000] [11] [DEBUG] POST /json-rpc/
[2022-08-03 16:33:15 +0000] [9] [DEBUG] GET /wp-json/anycomment/v1/auth/wordpress
[2022-08-03 16:33:20 +0000] [11] [DEBUG] GET /sell-media-search/
[2022-08-03 16:33:21 +0000] [11] [DEBUG] GET /index.php
[2022-08-03 16:33:55 +0000] [10] [DEBUG] GET /module/smartblog/archive
[2022-08-03 16:34:12 +0000] [9] [DEBUG] GET /.git/config
[2022-08-03 16:34:22 +0000] [10] [DEBUG] GET /docker-compose.yml
[2022-08-03 16:34:26 +0000] [9] [DEBUG] GET /docker-compose.prod.yml
[2022-08-03 16:34:30 +0000] [10] [DEBUG] GET /docker-compose.production.yml
[2022-08-03 16:34:34 +0000] [10] [DEBUG] GET /docker-compose.staging.yml
[2022-08-03 16:34:38 +0000] [10] [DEBUG] GET /docker-compose.dev.yml
[2022-08-03 16:34:42 +0000] [9] [DEBUG] GET /docker-compose-dev.yml
[2022-08-03 16:34:46 +0000] [11] [DEBUG] GET /docker-compose.override.yml
[2022-08-03 16:34:49 +0000] [9] [DEBUG] GET /docker-cloud.yml
[2022-08-03 16:35:47 +0000] [9] [DEBUG] GET /maint/modules/endpointcfg/endpointcfg.php

unable to connect with correct nginx + certbot config

I have an nginx web app that runs on a Google VM instance and works perfectly, but when I try to add SSL support I can no longer access the site ("Unable to Connect"). I believe I configured nginx correctly for SSL. I can also see in the logs that nginx and certbot have started.
http {
  upstream react {
    server client:3000;
  }
  upstream phoenix {
    server web:4000;
  }

  server {
    # Listen to port 443 on both IPv4 and IPv6.
    listen 80 default_server;
    listen 443 ssl default_server reuseport;
    listen [::]:443 ssl default_server reuseport;

    # Domain names this server should respond to.
    server_name tabi.blanknodes.com www.tabi.blanknodes.com;

    # Load the certificate files.
    ssl_certificate /etc/letsencrypt/live/tabi/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/tabi/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/tabi/chain.pem;

    # Load the Diffie-Hellman parameter.
    ssl_dhparam /etc/letsencrypt/dhparams/dhparam.pem;

    location / {
      proxy_pass https://react;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }

    location /api {
      proxy_pass https://phoenix/api;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $server_name;
    }

    location /socket {
      proxy_pass https://phoenix/socket;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }
}
Logs:
nginx_1 | Starting the Nginx service
nginx_1 | Starting the certbot autorenewal service
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: using the "epoll" event method
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: nginx/1.21.0
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: built by gcc 8.3.0 (Debian 8.3.0-6)
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: OS: Linux 5.4.104+
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker processes
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker process 122
nginx_1 | 2021/06/23 13:26:34 [notice] 117#117: start worker process 123
nginx_1 | Couldn't find the dhparam file '/etc/letsencrypt/dhparams/dhparam.pem'; creating it...
nginx_1 | mkdir: created directory '/etc/letsencrypt/dhparams'
nginx_1 |
nginx_1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nginx_1 | % ATTENTION! %
nginx_1 | % %
nginx_1 | % This script will now create a 2048 bit Diffie-Hellman %
nginx_1 | % parameter to use during the SSL handshake. %
nginx_1 | % %
nginx_1 | % >>>>> This MIGHT take a VERY long time! <<<<< %
nginx_1 | % (Took 65 minutes for 4096 bit on an old 3GHz CPU) %
nginx_1 | % %
nginx_1 | % However, there is some randomness involved so it might %
nginx_1 | % be both faster or slower for you. 2048 is secure enough %
nginx_1 | % for today and quite fast to generate. These files will %
nginx_1 | % only have to be created once so please be patient. %
nginx_1 | % A message will be displayed when this process finishes. %
nginx_1 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nginx_1 |
nginx_1 | Will now output to the following file: '/etc/letsencrypt/dhparams/dhparam.pem'
nginx_1 | Generating DH parameters, 2048 bit long safe prime, generator 2
nginx_1 | This is going to take a long time
I'm using docker-nginx-certbot for my nginx + certbot.

Dockerising Rails application: Error - "unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1" while running "docker-compose up"

I am dockerising my Rails application with nginx, Passenger, Ruby 2.4.0 and Rails 5.1.6.
I was following this guide to set up phusion/passenger on Docker.
When I run docker-compose up, it throws the following error:
nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
Here is the full trace.
postgres_1 | 2018-08-31 09:26:31.495 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2018-08-31 09:26:31.495 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2018-08-31 09:26:31.502 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2018-08-31 09:26:31.526 UTC [47] LOG: database system was shut down at 2018-08-31 09:26:31 UTC
postgres_1 | 2018-08-31 09:26:31.531 UTC [1] LOG: database system is ready to accept connections
company_data_1 | *** Running /etc/my_init.d/30_presetup_nginx.sh...
company_data_1 | Aug 31 09:26:31 caeb10c704df syslog-ng[11]: EOF on control channel, closing connection;
company_data_1 | *** Running /etc/rc.local...
company_data_1 | *** Booting runit daemon...
company_data_1 | *** Runit started as PID 18
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 25) 0s
company_data_1 | Aug 31 09:26:31 caeb10c704df cron[23]: (CRON) INFO (pidfile fd = 3)
company_data_1 | Aug 31 09:26:31 caeb10c704df cron[23]: (CRON) INFO (Running @reboot jobs)
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 30) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 30) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 36) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | ok: run: /etc/service/nginx-log-forwarder: (pid 36) 0s
company_data_1 | nginx: [emerg] unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 34#34: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 37#37: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:32 [emerg] 24#24: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 28#28: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:34 [emerg] 31#31: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 34#34: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
company_data_1 | 2018/08/31 09:26:36 [emerg] 37#37: unexpected "d" in /etc/nginx/conf.d/00_app_env.conf:1
Dockerfile
FROM ubuntu:16.04
FROM phusion/passenger-ruby24
LABEL Name=company_data Version=0.0.1
EXPOSE 80
RUN rm -f /etc/service/nginx/down
# ADD default /etc/nginx/sites-enabled/default
RUN rm /etc/nginx/sites-enabled/default
ADD company_financials.conf /etc/nginx/sites-enabled/company_financials.conf
ADD 00_app_env.conf /etc/nginx/conf.d/00_app_env.conf
ADD passenger.conf /etc/nginx/passenger.conf
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
RUN apt-get install -y nginx openssh-server git-core openssh-client curl
RUN apt-get install -y build-essential
WORKDIR /home/app/company_financials
COPY . /home/app/company_financials
RUN chmod -R 777 /home/app/company_financials
# install RVM, Ruby, and Bundler
RUN \curl -sSL https://get.rvm.io | bash
RUN /bin/bash -l -c "rvm install ruby 2.4.0"
RUN /bin/bash -lc 'rvm --default use ruby-2.4.0'
RUN /bin/bash -l -c "which ruby"
RUN /bin/bash -l -c "ls /usr/local/rvm/rubies/"
RUN /bin/bash -l -c "ls /home/app/company_financials"
RUN /bin/bash -l -c "gem install bundler --no-ri --no-rdoc"
RUN /bin/bash -l -c "bundle install"
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
docker-compose.yml
version: '2.1'
services:
  company_data:
    image: company_data
    build: .
    ports:
      - "80:80"
    env_file:
      - .env
    links:
      - postgres
      - redis
      - sidekiq
  redis:
    image: redis
    command: ["redis-server", "--appendonly", "yes"]
    hostname: redis
    restart: always
  postgres:
    image: 'postgres:10.3-alpine'
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - postgres
      - redis
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
  worker_connections 768;
  # multi_accept on;
}

http {
  ##
  # Basic Settings
  ##
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  # server_tokens off;
  # server_names_hash_bucket_size 64;
  # server_name_in_redirect off;
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ##
  # SSL Settings
  ##
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
  ssl_prefer_server_ciphers on;

  ##
  # Logging Settings
  ##
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  ##
  # Gzip Settings
  ##
  gzip on;
  gzip_disable "msie6";
  # gzip_vary on;
  # gzip_proxied any;
  # gzip_comp_level 6;
  # gzip_buffers 16 8k;
  # gzip_http_version 1.1;
  # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  ##
  # Phusion Passenger config
  ##
  # Uncomment it if you installed passenger or passenger-enterprise
  ##
  include /etc/nginx/passenger.conf;

  ##
  # Virtual Host Configs
  ##
  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}
passenger.conf
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/local/rvm/rubies/ruby-2.4.0/bin/ruby;
nginx_server.conf
server {
  listen 80;
  server_name "~^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$" localhost;
  passenger_enabled on;
  passenger_user app;
  root /home/app/company_financials/public;
}
00_app_env.conf
passenger_app_env development;
As per https://www.phusionpassenger.com/library/config/nginx/reference/#passenger_app_env, the valid contexts for passenger_app_env are http, server, location, and if. What I see from your question is that 00_app_env.conf is not nested in any context.
You need to put this directive in a context.
For example, your passenger.conf file is in the http context because you included it (include /etc/nginx/passenger.conf;) inside the http block. You need to do the same with 00_app_env.conf.
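As an illustration, a minimal sketch of that fix, assuming you move the directive into the server context of nginx_server.conf instead of shipping it as a bare file in conf.d:

server {
  listen 80;
  server_name "~^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$" localhost;
  passenger_enabled on;
  passenger_user app;
  passenger_app_env development;  # moved here from 00_app_env.conf
  root /home/app/company_financials/public;
}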

Nginx, Unicorn, net::ERR_TOO_MANY_REDIRECTS

When I try to visit my Rails app on a DigitalOcean server, I get the following error in the browser: redirected you too many times. In the console: net::ERR_TOO_MANY_REDIRECTS.
At this point Capistrano deploys successfully with no errors, and the app runs perfectly locally (no errors there either).
Here are the relevant files, starting with the error logs for nginx, unicorn, and Rails, followed by the configuration files for nginx and unicorn.
/etc/defaults/unicorn
CONFIGURED=yes
TIMEOUT=60
APP_ROOT=/home/rails/rails_project/current
CONFIG_RB=/etc/unicorn.conf
PID=/var/run/unicorn.pid
RAILS_ENV="production"
UNICORN_OPTS="-D -c $CONFIG_RB -E $RAILS_ENV"
PATH=/home/rails/.rvm/gems/ruby-2.2.2/bin:/home/rails/.rvm/gems/ruby-2.2.2@global/bin$
export GEM_HOME=/home/rails/.rvm/gems/ruby-2.2.2
export GEM_PATH=/home/rails/.rvm/gems/ruby-2.2.2:/home/rails/.rvm/gems/ruby-2.2.2@glo$
export HOME=/home/rails
DAEMON=/home/rails/.rvm/gems/ruby-2.2.2/bin/unicorn
/var/log/unicorn/unicorn.log
I, [2016-05-13T22:26:59.271424 #11584] INFO -- : reaped #<Process::Status: pid 11587 exit 0> worker=0
I, [2016-05-13T22:26:59.272911 #11584] INFO -- : reaped #<Process::Status: pid 11590 exit 0> worker=1
I, [2016-05-13T22:26:59.273219 #11584] INFO -- : reaped #<Process::Status: pid 11592 exit 0> worker=2
I, [2016-05-13T22:26:59.273573 #11584] INFO -- : reaped #<Process::Status: pid 11595 exit 0> worker=3
I, [2016-05-13T22:26:59.274209 #11584] INFO -- : master complete
I, [2016-05-13T22:27:00.664033 #12489] INFO -- : unlinking existing socket=/var/run/unicorn.sock
I, [2016-05-13T22:27:00.664570 #12489] INFO -- : listening on addr=/var/run/unicorn.sock fd=10
I, [2016-05-13T22:27:00.668023 #12489] INFO -- : worker=0 spawning...
I, [2016-05-13T22:27:00.669161 #12489] INFO -- : worker=1 spawning...
I, [2016-05-13T22:27:00.670383 #12492] INFO -- : worker=0 spawned pid=12492
I, [2016-05-13T22:27:00.671245 #12489] INFO -- : worker=2 spawning...
I, [2016-05-13T22:27:00.676617 #12489] INFO -- : worker=3 spawning...
I, [2016-05-13T22:27:00.680864 #12495] INFO -- : worker=1 spawned pid=12495
I, [2016-05-13T22:27:00.681924 #12489] INFO -- : master process ready
I, [2016-05-13T22:27:00.689107 #12497] INFO -- : worker=2 spawned pid=12497
I, [2016-05-13T22:27:00.696755 #12500] INFO -- : worker=3 spawned pid=12500
I, [2016-05-13T22:27:00.802486 #12492] INFO -- : Refreshing Gem list
I, [2016-05-13T22:27:00.804209 #12500] INFO -- : Refreshing Gem list
I, [2016-05-13T22:27:00.807876 #12497] INFO -- : Refreshing Gem list
I, [2016-05-13T22:27:00.811879 #12495] INFO -- : Refreshing Gem list
I, [2016-05-13T22:27:10.403242 #12500] INFO -- : worker=3 ready
I, [2016-05-13T22:27:10.404469 #12497] INFO -- : worker=2 ready
I, [2016-05-13T22:27:10.406947 #12495] INFO -- : worker=1 ready
I, [2016-05-13T22:27:10.407474 #12492] INFO -- : worker=0 ready
/var/log/nginx/error.log
2016/05/13 21:00:18 [error] 31654#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 148.75.53.23, server: codepajamas.com, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "codepajamas.com"
2016/05/13 21:05:33 [error] 31654#0: *5 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 148.75.53.23, server: codepajamas.com, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "codepajamas.com"
2016/05/13 21:19:52 [error] 31654#0: *8 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 180.76.15.29, server: codepajamas.com, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "www.codepajamas.com"
2016/05/13 21:21:40 [error] 31654#0: *12 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 148.75.53.23, server: codepajamas.com, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "codepajamas.com"
2016/05/13 21:30:33 [error] 31654#0: *16 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 148.75.53.23, server: codepajamas.com, request: "GET / HTTP/1.1", upstream: "http://unix:/var/run/unicorn.sock/", host: "codepajamas.com"
/home/rails/rails_project/current/log/production.log
(empty)
/etc/unicorn.conf
listen "unix:/var/run/unicorn.sock"
worker_processes 4
user "rails"
working_directory "/home/rails/rails_project/current"
pid "/var/run/unicorn.pid"
stderr_path "/var/log/unicorn/unicorn.log"
stdout_path "/var/log/unicorn/unicorn.log"
/etc/nginx/nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
  worker_connections 768;
}

http {
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;
  gzip on;
  gzip_disable "msie6";
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-available/rails
upstream app_server {
  server unix:/var/run/unicorn.sock fail_timeout=0;
}

server {
  listen 443 ssl;
  server_name <mydomain>.com www.<mydomain>.com;
  ssl_certificate /etc/letsencrypt/live/<mydomain>.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/<mydomain>.com/privkey.pem;
  root /home/rails/rails_project/current/public;
  index index.htm index.html;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/ssl/certs/dhparam.pem;
  ssl_ciphers <removed don't want you to see>
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_stapling on;
  ssl_stapling_verify on;
  add_header Strict-Transport-Security max-age=15768000;

  location / {
    try_files $uri/index.html $uri.html $uri @app;
  }

  location ~* ^.+\.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi)$ {
    try_files $uri @app;
  }

  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
  }

  location ~ /.well-known {
    allow all;
    root /usr/share/nginx/html;
  }
}

server {
  # redirect HTTP to HTTPS
  listen 80;
  server_name <mydomain>.com www.<mydomain>.com;
  return 301 https://$host$request_uri;
}
Any help is much appreciated.
Okay, so after a lot of digging I discovered the error message was hinting that I was forcing HTTPS and getting into a redirect loop. I simply had to comment out a line in config/environments/production.rb:
# Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
# config.force_ssl = true
This was commented out by default; I must have uncommented it. Remember: when something breaks that was working previously, check your git diff to see all the changes you made. It appears the Rails app and nginx were both trying to force SSL, which created a redirect loop. These other posts were also helpful:
Why am I getting infinite redirect loop with force_ssl in my Rails app?
Nginx configuration leads to endless redirect loop
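For the record, the likely mechanism: behind a reverse proxy, config.force_ssl decides whether a request was secure by looking at the X-Forwarded-Proto header, and the @app location above never sets it, so Rails saw plain HTTP and issued the HTTPS redirect on every request. A minimal sketch of the header that should let force_ssl stay enabled (untested against this exact setup):

location @app {
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;  # lets Rails see the request was already HTTPS
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://app_server;
}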
