nginx.conf:
user nginx;
worker_processes auto;
error_log /dev/null;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
access_log off;
error_log /dev/null;
log_not_found off;
server_tokens off;
server_names_hash_bucket_size 64;
server_names_hash_max_size 2048;
server_name_in_redirect off;
client_body_timeout 300s;
client_header_timeout 300s;
keepalive_timeout 0;
send_timeout 300s;
client_body_buffer_size 64m;
client_header_buffer_size 2m;
client_max_body_size 48m;
proxy_connect_timeout 70s;
proxy_send_timeout 90s;
proxy_read_timeout 90s;
large_client_header_buffers 4 8k;
##
# Gzip Settings
##
gzip on;
gzip_http_version 1.1;
gzip_disable "msie6";
gzip_static on;
gzip_vary on; #vary header
gzip_min_length 0; # compress regardless of response size
gzip_proxied any;
gzip_comp_level 3; # low-spec server: 2, good server: 6
gzip_buffers 16 8k;
gzip_types text/plain text/css text/javascript text/xml application/javascript application/json application/x-javascript application/xml application/xml+rss image/svg+xml image/svg;
limit_conn_zone $binary_remote_addr zone=ddos_conn:10m;
limit_req_zone $binary_remote_addr zone=ddos_req:10m rate=4r/s;
limit_req_status 500;
server {
listen 8010;
server_name 0.0.0.0;
client_max_body_size 48m;
client_body_timeout 5s;
client_header_timeout 5s;
location / {
proxy_pass http://0.0.0.0:8002;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
#proxy_ignore_client_abort on;
proxy_redirect off;
proxy_connect_timeout 70s;
proxy_send_timeout 90s;
proxy_read_timeout 90s;
keepalive_timeout 90;
limit_req zone=ddos_req burst=4 nodelay;
limit_conn ddos_conn 8;
}
}
server {
listen 8011;
location / {
return 200 'pong';
keepalive_timeout 90;
}
}
}
and the commands.
For the nginx container:
docker run --name nginx_con -d -p 0.0.0.0:8011:8011 -p 0.0.0.0:8010:8010 --restart always -v /Users/mac/docker/conf_files/nginx.conf:/etc/nginx/nginx.conf:ro -v /Users/mac/docker/conf_files/error_log:/var/log/nginx/error_log -v /Users/mac/docker/conf_files/access_log:/var/log/nginx/access_log nginx:1.15.8-alpine
For the API container:
sudo docker run --name was_con -d -p 0.0.0.0:8002:8002 --net=host --restart always -e MODE='deploy' -v /Users/mac/docker/conf_files/uwsgi.ini:/uwsgi.ini apiaccount/apiproject:latest
When I go to 0.0.0.0:8011, it works (returns 'pong').
When I go to 0.0.0.0:8002, it also works.
But when I go to 0.0.0.0:8010, it does not work and returns 502 Bad Gateway.
How can I solve this?
I'm running this on macOS.
nginx can't reach 0.0.0.0:8002 from inside Docker. You have to provide the IP address or DNS name of the API container instead,
e.g. proxy_pass http://was_con:8002;. You can find the IP with the docker inspect was_con command:
"Networks": {
"name": {
IPAddress": "172.18.0.9"
But since you're running the API on the host network, you need the host IP as seen from Docker; the default is 172.17.0.1.
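On Docker Desktop for Mac there is also the special hostname host.docker.internal, which resolves to the host machine from inside a container. A minimal sketch of the location block under that assumption (the hostname is a Docker Desktop feature, not something from your config), with the API published on host port 8002:
location / {
    # reach the API published on the macOS host from inside the nginx container
    proxy_pass http://host.docker.internal:8002;
}
Alternatively, run both containers on the same user-defined bridge network and use proxy_pass http://was_con:8002;.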
When I try to log in to my Nexus Docker repo, I get an error:
docker login https://registry.mysite.online
Username: admin
Password:
Error response from daemon: login attempt to https://registry.mysite.online/v2/ failed with status: 404 Not Found
I added a Docker hosted repo to Nexus without specifying any port.
Same error.
Nexus itself is behind an nginx reverse proxy; here's the config:
http {
client_body_buffer_size 32k;
client_header_buffer_size 8k;
large_client_header_buffers 8 64k;
proxy_send_timeout 120;
proxy_read_timeout 300;
proxy_buffering off;
tcp_nodelay on;
ssl_certificate /etc/letsencrypt/live/mysite.online/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mysite.online/privkey.pem;
ssl_dhparam /usr/lib/python3/dist-packages/certbot/ssl-dhparams.pem;
client_max_body_size 1G;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
server {
server_name registry.mysite.online;
listen *:443 ssl;
location / {
proxy_pass http://localhost:8081/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
There aren't any problems with the Helm repos, so it's not a TLS issue.
I've already tried googling it and didn't find an answer to my question.
I tried:
Setting up a separate server block for port 5000
Setting an HTTPS port of 443 for the repo
and got the same error each time.
Found it!
You must get the Nexus Docker ports via the Nexus Swagger UI, located at http://site/#admin/system/api.
Find the /v1/repositories/docker/hosted query and execute it.
You'll get a response something like this:
curl -X 'POST' \
'https://registry.site.online/service/rest/v1/repositories/docker/hosted' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-H 'NX-ANTI-CSRF-TOKEN: 0.9591684909938468' \
-H 'X-Nexus-UI: true' \
-d '{
"name": "internal",
"online": true,
"storage": {
"blobStoreName": "default",
"strictContentTypeValidation": true,
"writePolicy": "allow_once",
"latestPolicy": true
},
"cleanup": {
"policyNames": [
"string"
]
},
"component": {
"proprietaryComponents": true
},
"docker": {
"v1Enabled": false,
"forceBasicAuth": true,
"httpPort": 8082,
"httpsPort": 8083,
"subdomain": "docker-a"
}
}'
"docker": {} - it's what we need
and after that you can configure a port through which you'll be able to login to docker
I couldn't configure docker login through https port, so I used an http (8082)
Final nginx.conf could looks like this
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
client_body_buffer_size 32k;
client_header_buffer_size 8k;
large_client_header_buffers 8 64k;
proxy_send_timeout 120;
proxy_read_timeout 300;
proxy_buffering off;
tcp_nodelay on;
ssl_certificate /etc/letsencrypt/live/site.online/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/site.online/privkey.pem;
ssl_dhparam /usr/lib/python3/dist-packages/certbot/ssl-dhparams.pem;
client_max_body_size 1G;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
server {
server_name nexus.site.online;
listen *:443 ssl;
location / {
proxy_pass http://localhost:8081/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
server {
server_name registry.site.online;
listen *:443 ssl;
location / {
proxy_pass http://localhost:8082/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
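With that in place, docker login should go through the standard HTTPS port of the registry virtual host (hostname taken from the server block above):
docker login registry.site.online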
We are using Artifactory on OpenShift as a Docker registry. The installation was done via the Helm chart from JFrog. So far everything works, except uploading large Docker images (> 10 GB) to the registry.
We are running the Nginx reverse proxy in one pod and Artifactory in another pod, so it behaves as if Nginx were not on the same server as Artifactory itself.
On the console it looks like the push is working: the smaller layers are pushed and the large one is also uploading, but after some seconds it starts uploading again from the beginning.
Artifactory throws this error:
2021-08-23T05:41:53.624Z [jfrt ] [ERROR] [ ] [.j.a.c.g.GrpcStreamObserver:97] [default-executor-755] - refreshing affected platform config stream - got an error (status: Status{code=INTERNAL, description=Received unexpected EOS on DATA frame from server., cause=null})
io.grpc.StatusRuntimeException: INTERNAL: Received unexpected EOS on DATA frame from server.
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:478)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at org.jfrog.access.client.grpc.AuthorizationInterceptor$AuthenticatedClientCall$RejoiningClientCallListener.onClose(AuthorizationInterceptor.java:73)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:413)
at io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:742)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:721)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
The Artifactory Nginx conf looks like this (mostly generated by Artifactory):
server {
listen 443 ssl;
listen 80;
server_name ~(?<repo>.+)\.my.url.ch my.url my.nonssl.url;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /var/opt/jfrog/nginx/ssl/tls.crt;
ssl_certificate_key /var/opt/jfrog/nginx/ssl/tls.key;
ssl_password_file /var/opt/jfrog/nginx/ssl/tls.pass;
ssl_ciphers HIGH:!aNULL:!MD5;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
rewrite ^/$ /ui/ redirect;
rewrite ^/ui$ /ui/ redirect;
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 2400s;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_buffer_size 128k;
proxy_buffers 40 128k;
proxy_busy_buffers_size 128k;
#proxy_buffering off;
#proxy_request_buffering off;
proxy_pass http://devlab-artifactory:8082;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Strict-Transport-Security always;
location ~ ^/artifactory/ {
proxy_pass http://artifactory:8081;
}
}
}
nginx.conf
# Main Nginx configuration file
worker_processes 4;
error_log stderr warn;
pid /tmp/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
variables_hash_max_size 1024;
variables_hash_bucket_size 64;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 128;
types_hash_max_size 2048;
types_hash_bucket_size 64;
proxy_read_timeout 2400s;
client_header_timeout 2400s;
client_body_timeout 2400s;
proxy_connect_timeout 75s;
proxy_send_timeout 2400s;
proxy_buffer_size 128k;
proxy_buffers 40 128k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 250m;
proxy_http_version 1.1;
client_max_body_size 100G;
client_body_buffer_size 128k;
client_body_in_file_only clean;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format timing 'ip = $remote_addr '
'user = \"$remote_user\" '
'local_time = \"$time_local\" '
'host = $host '
'request = \"$request\" '
'status = $status '
'bytes = $body_bytes_sent '
'upstream = \"$upstream_addr\" '
'upstream_time = $upstream_response_time '
'request_time = $request_time '
'referer = \"$http_referer\" '
'UA = \"$http_user_agent\"';
access_log /var/opt/jfrog/nginx/logs/access.log timing;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
We tried a lot of things here, such as larger client body sizes and disabling proxy buffering, but I did not manage to upload more than 5.6 GB.
On Harbor I managed to upload this kind of image, so it should somehow be possible to do the same thing with Artifactory.
This often happens to me due to NGINX's proxy buffering. Check the NGINX logs to see whether that is where the issue occurs.
I suggest trying NGINX with proxy buffering disabled, using the following:
proxy_buffering off;
proxy_ignore_headers "X-Accel-Buffering";
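In the Artifactory server block above, these would go inside the location / block. A sketch; proxy_request_buffering off is an extra setting I would also try for very large layer uploads, it is not part of the generated config:
location / {
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_ignore_headers "X-Accel-Buffering";
    client_max_body_size 0;
    proxy_pass http://devlab-artifactory:8082;
}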
I think the best approach is to minimize your Docker image size by using a multi-stage Dockerfile.
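For illustration only, a minimal multi-stage sketch (assuming a Go service; the point is that build tooling stays in the first stage and only the small final stage gets pushed to the registry):
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM alpine:3.14
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]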
I deployed a website with Docker and bitnami/nginx as the image: https://www.10studio.tech/demo. After deployment, I realized that files like analyzejs.js were not gzipped:
Here is docker-compose.yml:
version: "3"
services:
docusaurus:
image: bitnami/nginx:1.16
restart: always
volumes:
- ./build:/app
- ./certs:/certs:ro
- ./my_server_block.conf:/opt/bitnami/nginx/conf/server_blocks/my_server_block.conf:ro
ports:
- "3001:3001"
- "3002:3002"
Here is my_server_block.conf:
server {
listen 3002;
absolute_redirect off;
root /app;
location = / {
rewrite ^(.*)$ https://$http_host/docs/introduction redirect;
}
location / {
try_files $uri $uri/ =404;
}
}
server {
listen 3001 ssl;
ssl_certificate /certs/server.crt;
ssl_certificate_key /certs/server.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://localhost:3002;
proxy_redirect off;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
}
}
Here is /opt/bitnami/nginx/conf/nginx.conf, where gzip seems to be enabled:
I have no name!#8317023de7ec:/app$ cat /opt/bitnami/nginx/conf/nginx.conf
# Based on https://www.nginx.com/resources/wiki/start/topics/examples/full/#nginx-conf
# user www www; ## Default: nobody
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log "/opt/bitnami/nginx/logs/access.log";
add_header X-Frame-Options SAMEORIGIN;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
sendfile on;
tcp_nopush on;
tcp_nodelay off;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
keepalive_timeout 65;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
include "/opt/bitnami/nginx/conf/server_blocks/*.conf";
# HTTP Server
server {
# port to listen on. Can also be set to an IP:PORT
listen 8080;
location /status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
}
Does anyone know what's wrong here and how I could enable gzip?
I also just stumbled over the same problem. It seems the Docker image already has gzip enabled. I also have an nginx deployed for the whole server, which acts as a reverse proxy for the different Docker containers on the server. What worked for me was to also enable gzip in the global nginx configuration, /etc/nginx/nginx.conf.
I don't know if you also have a wrapping nginx. Hope this helps.
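A minimal sketch of the gzip settings for the http block of the outer /etc/nginx/nginx.conf; the compression level and types list here are only example values:
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/plain text/css text/javascript application/javascript application/json application/xml image/svg+xml;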
I have two Docker containers deployed using Docker Compose.
One is nginx and the other is my Flask application. I am only using nginx as a static server for Let's Encrypt certification.
If I deploy my Flask app without nginx, I can successfully curl/ping my server. However, the moment nginx is introduced, I am not able to connect.
What I want is at least to access my server via its numeric external IP, e.g. xx.xx.xx.xx, and then via my domain, which points to the same IP. (My domain is actually a subdomain, e.g. api.domain.com.)
My docker compose is:
services:
nginx:
build:
context: ./nginx
dockerfile: Dockerfile
args:
DOMAIN: ${DOMAIN}
FLASK: application
ports:
- 80:80
- 443:443
volumes:
- /etc/letsencrypt:/etc/letsencrypt
depends_on:
- application
application:
build:
context: ./flask_app
dockerfile: Dockerfile
ports:
- 5000:5000
nginx.conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 8192;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
log_not_found off;
types_hash_max_size 2048;
client_max_body_size 16M;
include mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ecdh_curve X25519:sect571r1:secp521r1:secp384r1;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_ciphers 'TLS13+AESGCM+AES128:TLS13+AESGCM+AES256:TLS13+CHACHA20:EECDH+AESGCM:EECDH+CHACHA20';
ssl_stapling on;
ssl_stapling_verify on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
include conf.d/*.conf;
}
flask_app.conf
server {
listen 80;
listen [::]:80;
server_name www.${DOMAIN} ${DOMAIN};
location ^~ /.well-known/acme-challenge/ {
root /var/www/_letsencrypt;
}
location / {
return 301 https://${DOMAIN}${DOLLAR}request_uri;
}
}
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name www.${DOMAIN} ${DOMAIN};
ssl_certificate /etc/letsencrypt/live/${DOMAIN}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/${DOMAIN}/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/${DOMAIN}/chain.pem;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
# You might want to change the CSP policy to fit your needs - see https://content-security-policy.com/
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header Allow "GET, POST, HEAD" always;
access_log /var/log/nginx/${DOMAIN}.access.log;
error_log /var/log/nginx/${DOMAIN}.error.log warn;
location / {
proxy_http_version 1.1;
proxy_cache_bypass ${DOLLAR}http_upgrade;
proxy_hide_header X-Powered-By;
proxy_hide_header Server;
proxy_hide_header X-AspNetMvc-Version;
proxy_hide_header X-AspNet-Version;
proxy_set_header Proxy "";
proxy_set_header Upgrade ${DOLLAR}http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host ${DOLLAR}host;
proxy_set_header X-Real-IP ${DOLLAR}remote_addr;
proxy_set_header X-Forwarded-For ${DOLLAR}proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto ${DOLLAR}scheme;
proxy_set_header X-Forwarded-Host ${DOLLAR}host;
proxy_set_header X-Forwarded-Port ${DOLLAR}server_port;
proxy_pass http://application:5000;
}
location ~* \.(?:css|cur|js|jpe?g|gif|htc|ico|png|html|xml|otf|ttf|eot|woff|woff2|svg)${DOLLAR} {
expires 7d;
add_header Pragma public;
add_header Cache-Control public;
proxy_pass http://application:5000;
}
if ( ${DOLLAR}request_method !~ ^(GET|POST|HEAD)${DOLLAR} ) {
return 405;
}
if (${DOLLAR}http_user_agent ~* LWP::Simple|BBBike|wget) {
return 403;
}
location ~ /\.(?!well-known) {
deny all;
}
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}
I'm not sure why your nginx config contains ${DOLLAR} in multiple places. I don't think this is valid syntax, and I can't find any documentation relating to it. Lines like:
proxy_set_header Host ${DOLLAR}host;
should actually be:
proxy_set_header Host $host;
As for using ${DOMAIN} in the nginx conf, I would avoid this and opt for a simpler configuration. Just specify the domain in the nginx config file:
server_name www.example.com example.com;
I'd suggest familiarising yourself with the official nginx image docs; under "Complex configuration" they show how to copy a working config out of a running container and then modify it to your needs.
Once you have this working, if you really want to specify the domain in your docker-compose file and treat your nginx config as a template that is rendered at container start time, you could read the section "Using environment variables in nginx configuration", which shows how to use envsubst to achieve this. This is probably not required for single-site deployments, however.
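A sketch of that envsubst approach, assuming the official nginx image (1.19 or newer), where files mounted under /etc/nginx/templates/*.template are rendered into /etc/nginx/conf.d/ at start-up; the file names and example.com are assumptions:
docker-compose.yml:
  nginx:
    image: nginx:1.21
    environment:
      - DOMAIN=example.com
    ports:
      - 80:80
    volumes:
      - ./templates:/etc/nginx/templates:ro
templates/default.conf.template:
  server {
    listen 80;
    server_name www.${DOMAIN} ${DOMAIN};
    location / {
      proxy_pass http://application:5000;
    }
  }
As far as I know, the image's entrypoint only substitutes environment variables that are actually defined, so nginx's own variables like $host are left untouched and the ${DOLLAR} workaround is not needed.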
I am following this guide to set up a Rails service using Nginx and Unicorn: http://ariejan.net/2011/09/14/lighting-fast-zero-downtime-deployments-with-git-capistrano-nginx-and-unicorn/
When I start Nginx without Unicorn, I get a 502 Bad Gateway error,
and as soon as I start the Unicorn server with the command unicorn_rails -c config/unicorn.rb -D, the request times out and I get a 504 Gateway Time-out error. The CPU usage of the ruby process is 100%, and it seems like something is stuck in a loop, but I do not understand what is happening.
nginx/1.2.6 (Ubuntu)
This is my /etc/nginx/nginx.conf
user ubuntu staff;
# Change this depending on your hardware
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay off;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
# gzip_vary on;
gzip_proxied any;
gzip_min_length 500;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and this is my /etc/nginx/sites-available/default
upstream home {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
# for UNIX domain socket setups:
server unix:/tmp/home.socket fail_timeout=0;
}
server {
# if you're running multiple servers, instead of "default" you should
# put your main domain name here
listen 80;
# you could put a list of other domain names this application answers
server_name patellabs.com;
root /home/ubuntu/apps/home/current/public;
access_log /var/log/nginx/home_access.log;
rewrite_log on;
location / {
#all requests are sent to the UNIX socket
proxy_pass http://home;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
# if the request is for a static resource, nginx should serve it directly
# and add a far future expires header to it, making the browser
# cache the resource and navigate faster over the website
# this probably needs some work with Rails 3.1's asset pipe_line
location ~ ^/(images|javascripts|stylesheets|system)/ {
root /home/ubuntu/apps/home/current/public;
expires max;
break;
}
}