changing nginx conf file - docker

I installed openresty alpine docker image and mounted conf.d to define the server in there. It works fine.
Next, I want to change nginx.conf to set worker_processes auto;. However, worker_processes is defined in nginx.conf itself, not in conf.d. I tried to volume-mount nginx.conf in my docker-compose file as:
volumes:
  - ./conf.d:/etc/nginx/conf.d
  - ./conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
However, this creates a directory named nginx.conf inside ./conf instead of mounting the file.
How can I mount/modify nginx.conf?

You are mounting to the wrong path inside the container if you want to override the root nginx configuration.
Nginx Config Files
The Docker image installs its own nginx.conf file. If you want to override it directly, you can replace it in your own Dockerfile or via a volume bind-mount.
For the Linux images, that nginx.conf contains the directive include /etc/nginx/conf.d/*.conf;, so every nginx configuration file in that directory is included. The default virtual-host configuration contains the original OpenResty server configuration and is copied to /etc/nginx/conf.d/default.conf.
docker run -v /my/custom/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
Second, it is better to use an absolute path for the mount.
docker run -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf openresty/openresty:1.15.8.2-1-alpine
or
docker run -v abs_path/nginx.conf:/etc/nginx/nginx.conf openresty/openresty:1.15.8.2-1-alpine
OpenResty main config:
docker run -v $PWD/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf openresty/openresty:1.15.8.2-1-alpine
You should mount the exact file; otherwise it will break the container.
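In docker-compose form (matching the paths from the question), the override could look like the sketch below. The service name and image tag are assumptions; also note that the host file ./conf/nginx.conf must already exist, because Docker creates a host directory of that name when the source path does not exist:

```yaml
# docker-compose.yml (sketch)
services:
  openresty:
    image: openresty/openresty:alpine
    ports:
      - "80:80"
    volumes:
      # server blocks picked up via include /etc/nginx/conf.d/*.conf;
      - ./conf.d:/etc/nginx/conf.d
      # main config actually loaded by the OpenResty image
      - ./conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro
```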
Here is the default config for /usr/local/openresty/nginx/conf/nginx.conf:
# nginx.conf -- docker-openresty
#
# This file is installed to:
# `/usr/local/openresty/nginx/conf/nginx.conf`
# and is the file loaded by nginx at startup,
# unless the user specifies otherwise.
#
# It tracks the upstream OpenResty's `nginx.conf`, but removes the `server`
# section and adds this directive:
# `include /etc/nginx/conf.d/*.conf;`
#
# The `docker-openresty` file `nginx.vh.default.conf` is copied to
# `/etc/nginx/conf.d/default.conf`. It contains the `server` section
# of the upstream `nginx.conf`.
#
# See https://github.com/openresty/docker-openresty/blob/master/README.md#nginx-config-files
#
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}

Related

Connect two wordpress containers with same NGINX docker

I use nginx in a docker to connect my two wordpress websites, which are dockerized too.
I can set up one website with the following settings:
In docker-compose.yml
nginx:
  image: nginx:alpine
  volumes:
    - ./web_ndnb_prod/src:/var/www/html
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
  depends_on:
    - web_ndnb_test
    - web_ndnb_prod
In my NGINX conf file located in /nginx/conf.d
server {
    [...]
    root /var/www/html/;
    [...]
}
However, to add a 2nd website I tried changing the root, and the websites now return a 404.
In docker-compose.yml
nginx:
  image: nginx:alpine
  volumes:
    - ./web_ndnb_prod/src:/var/www/web_ndnb_prod
    - ./web_ndnb_test/src:/var/www/web_ndnb_test
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
  depends_on:
    - web_ndnb_test
    - web_ndnb_prod
In one of the 2 NGINX conf files
server {
    [...]
    root /var/www/web_ndnb_prod/;
    [...]
}
If I execute
sudo docker exec -ti nginx ls /var/www/web_ndnb_prod
It outputs the wordpress files correctly
Why does Nginx not find them?
Edit 1
The main nginx.conf file is
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
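When one nginx instance serves two sites, each conf file in conf.d generally needs its own server_name so nginx can route a request to the right virtual host; with two server blocks that both match every request, only one of them wins. A minimal sketch of the two conf files (file names and domain names are assumptions, not from the question):

```nginx
# /etc/nginx/conf.d/prod.conf
server {
    listen 80;
    server_name prod.example.com;
    root /var/www/web_ndnb_prod;
    # ... rest of the prod config ...
}

# /etc/nginx/conf.d/test.conf
server {
    listen 80;
    server_name test.example.com;
    root /var/www/web_ndnb_test;
    # ... rest of the test config ...
}
```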

Can't connect from one docker container to another by its public domain name

I have an application composed of containerized web-services deployed with docker-compose (it's a test env). One of the containers is nginx that operates as a reverse proxy for services and also serves static files. A public domain name points to the host machine and nginx has a server section that utilizes it.
The problem I am facing is that I can't talk to nginx by that public domain name from containers launched on this same machine - the connection always times out. (For example, I tried doing a curl https://<mypublicdomain>.com)
Referring to the containers by name (using Docker's hostnames) works just fine. Requests to the same domain name from other machines also work fine.
I understand this has to do with how docker does networking, but fail to find any docs that would outline what exactly goes wrong here. Could anyone explain the root of the issue to me or maybe just point in the right direction?
(For extra context: originally I was going to use this to set up monitoring with prometheus and blackbox exporter to make it see the server the same way anyone from the outside would do + to automatically check that SSL is working. For now I pulled back to point the prober to nginx by its docker hostname)
Nginx image
FROM nginx:stable
COPY ./nginx.conf /etc/nginx/nginx.conf.template
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
COPY ./dhparam/dhparam-2048.pem /dhparam-2048.pem
COPY ./index.html /var/www/index.html
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yaml
version: "3"
networks:
  mainnet:
    driver: bridge
services:
  my-gateway:
    container_name: my-gateway
    image: aturok/manuwor_gateway:latest
    restart: always
    networks:
      - mainnet
    ports:
      - 80:80
      - 443:443
    expose:
      - "443"
    volumes:
      - /var/stuff:/var/www
      - /var/certs:/certsdir
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
(I show only the nginx service, as others are irrelevant - I would for example spin up a nettools container and not connect it to the mainnet network - still expect the requests to reach nginx, since I am using the public domain name. The problem also happens with the containers connected to the same network)
nginx.conf (normally it uses a bunch of env vars; they are replaced here, and the irrelevant backend is removed):
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    #include /etc/nginx/conf.d/*.conf;
    server {
        listen 80;
        server_name mydomain.com;
        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
        location / {
            return 301 https://$host$request_uri;
        }
    }
    server {
        listen 443 ssl;
        server_name mydomain.com;
        ssl_certificate /certsdir/fullchain.pem;
        ssl_certificate_key /certsdir/privkey.pem;
        server_tokens off;
        ssl_buffer_size 8k;
        ssl_dhparam /dhparam-2048.pem;
        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;
        root /var/www/;
        index index.html;
        location / {
            root /var/www;
            try_files $uri /index.html;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Note: certificates are ok when I access the server from elsewhere
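This usually comes down to Docker's bridge networking not hairpinning traffic back through the host's public address. One commonly used workaround, rather than routing out through the public IP, is to give the nginx service a network alias equal to the domain name, so that containers on the same network resolve the name directly to nginx. A sketch against the compose file above (the alias value is an assumption standing in for the real domain):

```yaml
services:
  my-gateway:
    # ... same service definition as above ...
    networks:
      mainnet:
        aliases:
          # containers attached to mainnet now resolve this name to nginx
          - mydomain.com
```

Note that TLS certificate validation still works with this approach, since the containers connect using the real domain name.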

starting nginx docker container with custom nginx config file

My Requirements
I am working on a Windows 10 machine
I have my test app running on http://localhost:3000/
I need to have a reverse proxy set up so http://localhost:80 redirects to http://localhost:3000/ (I will be adding further rewrite rules once I get the basic setup up and running)
Steps
I am following instructions from
https://www.docker.com/blog/tips-for-deploying-nginx-official-image-with-docker/
I'm trying to create a container (name = mynginx1) specifying my own nginx conf file
$ docker run --name mynginx1 -v C:/zNGINX/testnginx/conf:/etc/nginx:ro -P -d nginx
where "C:/zNGINX/testnginx/conf" contains the file "default.conf" and its contents are
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://localhost:3000;
    }
}
A container ID is returned, but "docker ps" does not show it running.
Viewing the container logs using "docker logs mynginx1" shows the following error
2020/03/30 12:27:18 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
What am I doing wrong?
There were two errors in what I was doing.
(1) In the conf file, I was using "proxy_pass http://localhost:3000;".
"localhost" inside the container is the CONTAINER itself, not MY computer. Therefore this needed changing to
proxy_pass http://host.docker.internal:3000;
(2) The path I mounted my config file to in the container was not right; I needed to add "conf.d":
docker run --name mynginx1 -v C:/zNGINX/testnginx/conf:/etc/nginx/conf.d:ro -P -d nginx
The documentation I was reading (multiple websites) did not mention adding the "conf.d" directory at the end of the path. However, if you view the "/etc/nginx/nginx.conf" file, there is a clue on the last line:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
The "include /etc/nginx/conf.d/*.conf;" line indicates that nginx loads any file ending in ".conf" from the "/etc/nginx/conf.d/" directory.
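Putting both fixes together, the default.conf from the question would read roughly as follows (a sketch; host.docker.internal resolves to the host machine on Docker Desktop, which matches the Windows setup described above):

```nginx
# C:/zNGINX/testnginx/conf/default.conf, mounted into /etc/nginx/conf.d/
server {
    listen 80;
    server_name localhost;
    location / {
        # reach the app running on the Windows host, not inside the container
        proxy_pass http://host.docker.internal:3000;
    }
}
```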

nginx index directive works fine locally but gives 404 on ec2

I have a web project that I want to deploy using docker-compose and nginx.
Locally, I:
docker-compose build
docker-compose push
If I docker-compose up, I can access localhost/ and get redirected to my index.html.
Now on my ec2 instance (a regular ec2 instance where I installed docker and docker-compose) I docker-compose pull, then docker-compose up.
All the containers launch correctly and I can exec sh into my nginx container and see there's a /facebook/index.html file.
If I go to [instance_ip]/index.html, everything works as expected.
If I go to [instance_ip]/, I get a 404 response.
nginx receives the request (I see it in the access logs) but does not redirect to index.html.
Why is the index directive not able to redirect to my index.html file?
I tried to:
Reproduce locally by removing all local images and pulling from my registry.
Kill my ec2 instance and launch a new one.
But I got the same result.
I'm using docker-compose 1.11.1 and docker 17.05.0. On the ec2 instance it's docker 17.03.1, and I tried both docker-compose 1.11.1 and 1.14.1 (a sign that I'm a bit desperate ;)).
An extract from my docker-compose file:
nginx:
  image: [image from registry]
  build:
    context: ./
    dockerfile: deploy/nginx.dockerfile
  ports:
    - "80:80"
  depends_on:
    - web
My nginx image starts from alpine, installs nginx, adds the index.html file, and copies my conf file to /etc/nginx/nginx.conf.
Here's my nginx config. I checked that it is present on the running containers (both locally and on ec2).
# prevent from exiting when using `run` to launch container
daemon off;
worker_processes auto;
#
pid /run/nginx.pid;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    ##
    # Basic Settings
    sendfile off;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    server {
        error_log /var/log/nginx/file.log debug;
        listen 80 default_server;
        # root /home/domain.com;
        # Bad developers use underscore in headers.
        underscores_in_headers on;
        # root should be out of location block
        root /facebook;
        location / {
            index index.html;
            # autoindex on;
            try_files $uri @app;
        }
        location @app {
            include uwsgi_params;
            # Using docker-compose linking, the nginx docker-compose service depends on a 'web' service.
            uwsgi_pass web:3033;
        }
    }
}
I have no idea why the container is behaving differently on the ec2 instance.
Any pointers appreciated!
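One thing worth checking in the config above: try_files $uri @app only tests the literal URI, never the directory form, so for a request to / the index directive never gets a chance to serve /facebook/index.html. A variant that also tries the directory (a sketch, not verified against this exact deployment):

```nginx
location / {
    index index.html;
    # $uri/ makes nginx try the directory form, at which point
    # the index directive can resolve / to /facebook/index.html
    try_files $uri $uri/ @app;
}
```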

Elastic Beanstalk Redis Fail, Webapp Unresponsive

Can't get past these Sidekiq errors.
I'm trying to migrate from Heroku to AWS EB. I have a Rails app running Rails 4.2.0 and Ruby 2.3 on a Linux machine, but I keep running into issues. The webapp won't load - it simply times out over and over.
INFO: Running in ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
INFO: See LICENSE and the LGPL-3.0 for licensing details.
INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
INFO: Booting Sidekiq 3.5.4 with redis options {:url=>nil}
ERROR: heartbeat: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
Redis keeps telling me its URL is nil despite what seems to be a solid setup. (It works on another app I managed to get running with the same configuration. I also find the ERROR MISCONF notice troublesome, but the Redis URL isn't even being set. Further, both are in the same security group.)
This is my config/sidekiq.rb:
rails_root = Rails.root || File.dirname(__FILE__) + '/../..'
rails_env = Rails.env || 'development'
redis_config = YAML.load_file(rails_root.to_s + '/config/redis.yml')
redis_config.merge! redis_config.fetch(Rails.env, {})
redis_config.symbolize_keys!
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['REDIS_HOST']}:#{redis_config[:port]}/12" }
end
Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['REDIS_HOST']}:#{redis_config[:port]}/12" }
end
And my config/redis.yml:
development:
  host: localhost
  port: 6379
test:
  host: localhost
  port: 6379
production:
  host: ENV['REDIS_HOST']
  port: 6379
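As an aside (an observation, not necessarily the root cause here): YAML does not evaluate Ruby, so host: ENV['REDIS_HOST'] is read as the literal string "ENV['REDIS_HOST']". If the host is meant to come from the environment, the file would need ERB interpolation, e.g.:

```yaml
production:
  host: <%= ENV['REDIS_HOST'] %>
  port: 6379
```

and the loader would then need an ERB pass, such as YAML.load(ERB.new(File.read(path)).result), instead of a plain YAML.load_file.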
My application.yml:
REDIS_HOST: project-name-001.random-token.0001.use1.cache.amazonaws.com
Here's the setup_swap.config, sidekiq.config, and nginx.config.
I've also seen this issue, but I assume it's unrelated. Perhaps I'm mistaken? If irrelevant, will address in another post.
Starting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
What could it be?
Is there anything important I'm missing?
Edit: Add nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 ;
listen [::]:80 ;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 ;
# listen [::]:443 ssl http2 ;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers <redacted>;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
Updated response: I updated nginx.conf to read: include /etc/nginx/conf.d/webapp_healthd.conf; but still got the following:
[root] service nginx restart
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
Stopping nginx: [ OK ]
Starting nginx: nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[ OK ]
And also, the following persists:
ERROR: heartbeat: MISCONF Redis is configured to save RDB snapshots,
but is currently not able to persist on disk. Commands that may modify
the data set are disabled. Please check Redis logs for details about
the error.
Update 2: I removed the duplicate references to localhost port 80 and nginx stopped complaining, but I still get the "Heartbeat MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk" error.
Output from Sidekiq.redis(&:info):
{
"redis_version"=>"3.2.8",
"redis_git_sha1"=>"00000000",
"redis_git_dirty"=>"0",
"redis_build_id"=>"12e5c8be08dc4d3",
"redis_mode"=>"standalone",
"os"=>"Linux 4.4.51-40.60.amzn1.x86_64 x86_64",
"arch_bits"=>"64",
"multiplexing_api"=>"epoll",
"gcc_version"=>"4.8.3",
"process_id"=>"24835",
"run_id"=>"83a8de8b50f482a4e271228435b2f0c8e3fa5b5c",
"tcp_port"=>"6379",
"uptime_in_seconds"=>"341217",
"uptime_in_days"=>"3",
"hz"=>"10",
"lru_clock"=>"1108155",
"executable"=>"/usr/local/bin/redis-server",
"config_file"=>"/etc/redis/redis.conf",
"connected_clients"=>"2",
"client_longest_output_list"=>"0",
"client_biggest_input_buf"=>"0",
"blocked_clients"=>"0",
"used_memory"=>"842664",
"used_memory_human"=>"822.91K",
"used_memory_rss"=>"3801088",
"used_memory_rss_human"=>"3.62M",
"used_memory_peak"=>"924360",
"used_memory_peak_human"=>"902.70K",
"total_system_memory"=>"1043574784",
"total_system_memory_human"=>"995.23M",
"used_memory_lua"=>"37888",
"used_memory_lua_human"=>"37.00K",
"maxmemory"=>"0",
"maxmemory_human"=>"0B",
"maxmemory_policy"=>"noeviction",
"mem_fragmentation_ratio"=>"4.51",
"mem_allocator"=>"jemalloc-4.0.3",
"loading"=>"0",
"rdb_changes_since_last_save"=>"177",
"rdb_bgsave_in_progress"=>"0",
"rdb_last_save_time"=>"1493941570",
"rdb_last_bgsave_status"=>"err",
"rdb_last_bgsave_time_sec"=>"0",
"rdb_current_bgsave_time_sec"=>"-1",
"aof_enabled"=>"0",
"aof_rewrite_in_progress"=>"0",
"aof_rewrite_scheduled"=>"0",
"aof_last_rewrite_time_sec"=>"-1",
"aof_current_rewrite_time_sec"=>"-1",
"aof_last_bgrewrite_status"=>"ok",
"aof_last_write_status"=>"ok",
"total_connections_received"=>"17",
"total_commands_processed"=>"141824",
"instantaneous_ops_per_sec"=>"0",
"total_net_input_bytes"=>"39981126",
"total_net_output_bytes"=>"72119284",
"instantaneous_input_kbps"=>"0.00",
"instantaneous_output_kbps"=>"0.00",
"rejected_connections"=>"0",
"sync_full"=>"0",
"sync_partial_ok"=>"0",
"sync_partial_err"=>"0",
"expired_keys"=>"3",
"evicted_keys"=>"0",
"keyspace_hits"=>"14",
"keyspace_misses"=>"533",
"pubsub_channels"=>"0",
"pubsub_patterns"=>"0",
"latest_fork_usec"=>"160",
"migrate_cached_sockets"=>"0",
"role"=>"master",
"connected_slaves"=>"0",
"master_repl_offset"=>"0",
"repl_backlog_active"=>"0",
"repl_backlog_size"=>"1048576",
"repl_backlog_first_byte_offset"=>"0",
"repl_backlog_histlen"=>"0",
"used_cpu_sys"=>"167.52",
"used_cpu_user"=>"46.03",
"used_cpu_sys_children"=>"0.00",
"used_cpu_user_children"=>"0.00",
"cluster_enabled"=>"0",
"db0"=>"keys=1,expires=0,avg_ttl=0"
}
Interestingly, I can't find my Redis logs to investigate further. In my redis.conf, all I see is this:
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
I've even run find / -path /sys -prune -o -path /proc -prune -o -name *redis* and don't see ANY log files. (╯°□°)╯︵ ┻━┻
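With logfile "" Redis logs to standard output (and to /dev/null when daemonized), which would explain why no log file turns up. Note also that the INFO output above shows rdb_last_bgsave_status => "err", which is exactly the condition that triggers the MISCONF error. A redis.conf sketch to get a findable log file and then see why the background save is failing (paths are assumptions):

```conf
# /etc/redis/redis.conf (sketch)
logfile /var/log/redis/redis-server.log   # write logs somewhere findable
dir /var/lib/redis                        # RDB dump dir; must be writable by the redis user
# Once the underlying disk/permission issue is fixed, leave this at its
# default (yes); setting it to no only silences the MISCONF symptom:
# stop-writes-on-bgsave-error no
```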
What's also strange is that production.log is simply not getting written to. Check the permissions: -rw-r--r-- 1 webapp webapp 0 May 8 20:01 production.log
Please share your /etc/nginx/nginx.conf. I guess your nginx.conf includes other server conf files from the conf.d folder - check for the line include /etc/nginx/conf.d/*.conf; in your nginx.conf. If it is there, it might load the file twice, or load another default file with the same server name. You can change it to include /etc/nginx/conf.d/webapp_healthd.conf or whatever name you want, but first check what files are actually on the machine.
Also check the /etc/nginx/sites-enabled/ directory for any temp files such as ~default or .save. Check with ls -lah, delete them, restart nginx and check for errors - or do it via ebextensions and deploy again.
UPDATE
Try removing the whole server { ... } section from nginx.conf, and make sure to include your file /etc/nginx/conf.d/webapp_healthd.conf inside the http block; that file already has a server listening on port 80 with server_name localhost.
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/webapp_healthd.conf;
    index index.html index.htm;
}
003_nginx.config
files:
  "/etc/nginx/conf.d/webapp_healthd.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      upstream my_app {
        server unix:///var/run/puma/my_app.sock;
      }
      log_format healthd '$msec"$uri"'
                         '$status"$request_time"$upstream_response_time"'
                         '$http_x_forwarded_for';
      server {
        listen 80;
        server_name _ localhost; # need to listen to localhost for worker tier
        root /var/app/current/public;
        if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
          set $year $1;
          set $month $2;
          set $day $3;
          set $hour $4;
        }
        access_log /var/log/nginx/access.log main;
        access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
        try_files $uri/index.html $uri @my_app;
        location @my_app {
          proxy_pass http://my_app; # match the name of the upstream directive defined above
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /assets {
          alias /var/app/current/public/assets;
          gzip_static on;
          gzip on;
          expires max;
          add_header Cache-Control public;
        }
        # redirect server error pages to the static page /40x.html
        error_page 404 /404.html;
        location = /40x.html {
        }
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
      }
  "/opt/elasticbeanstalk/hooks/appdeploy/post/03_restart_nginx.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      rm /etc/nginx/conf.d/webapp_healthd.conf.bak
      rm /etc/nginx/conf.d/custom.conf
      service nginx restart
