I'm trying to run and use Docker behind nginx as a reverse proxy. Everything works fine except when Docker responds with a raw stream ("application/vnd.docker.raw-stream") instead of a normal HTTP response. This happens with endpoints like /start and /attach, documented here: https://docs.docker.com/engine/api/v1.21/#operation/ExecStart
In these cases my nginx configuration doesn't forward the Docker response to the client. I searched around and found just one blog article, which suggests patching the actual nginx C source: https://blog.yadutaf.fr/2014/12/12/how-to-run-docker-behind-an-nginx-reverse-proxy/
I followed the blog above completely; however, setting r->upstream->upgrade = 1; seems to have no effect on the /start HTTP endpoint in Docker. Nginx simply doesn't respond. Is there any way around this? This is my nginx.conf file at the moment:
daemon off;
error_log /dev/stdout info;
# error_log logs/error.log debug;
events {
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stdout main;
    # include mime.types;
    # default_type application/octet-stream;
    # sendfile on;
    # keepalive_timeout 65;
    upstream dockerpool {
        # session_sticky cookie=sessionid fallback=off mode=insert option=indirect;
        # backup server
        # server nginx_dev_test:80;
        server socat:2376;
    }
    server {
        listen 80;
        location / {
            # The upstream here must be a nginx variable
            set $ups dockerpool;
            proxy_buffering off;
            proxy_pass http://$ups;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
    }
}
For anyone struggling with the same problem: I did extensive research on the internet, and there's no sane way to get what I wanted working with nginx; even if you solve this, you'll blow your head off trying to autoscale/load-balance this architecture.
Today I moved to HAProxy using stick tables, and it works like a charm. Nginx is not suitable for this use case.
Update: STUPID ME. This COULD work with nginx and/or HAProxy; just make sure you upgrade (downgrade?) your HTTP connection to a raw TCP connection when running docker commands, etc.
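For completeness, a minimal sketch of what that connection upgrade can look like in plain nginx config (no C patch needed). The socat:2376 upstream is the one from the question, and the map block is the standard WebSocket-style upgrade idiom, which also covers Docker's hijacked raw-stream connections:

http {
    # Pass the client's Upgrade header through; tell the upstream to
    # close the connection when the client didn't ask for an upgrade.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://socat:2376;
            proxy_http_version 1.1;             # connection upgrades need HTTP/1.1
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_buffering off;                # stream bytes as they arrive
            proxy_read_timeout 1h;              # keep long-lived attach sessions open
        }
    }
}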
I have a fully dockerised application:
nginx as proxy
a backend server (express.js)
a database (mongodb)
a frontend server (express.js)
goaccess for logging
The problem is that when I hit my backend endpoint with a POST request, the response is never sent to the client. A 499 code is logged by nginx, along with this message:
epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream,
The client is the browser, there is no doubt about it.
The error arises after 1 min of processing in Firefox and 5 min in Chrome. As far as I know, these times match the timeout settings of these browsers. I could increase the timeout in Firefox, but it is not a viable solution.
When I get rid of the proxy, the request completes and the client gets the response in about 15 min. So I think there is a problem with the nginx configuration, but I don't know what.
So far I have tried increasing every timeout you can imagine, but that didn't change anything.
I also tried setting proxy_ignore_client_abort in nginx, but it is not useful in my case. Indeed, the connection between nginx and my backend stays alive and the request completes after 15 min (code 200 in the nginx logs), but the UI is not updated because the client has terminated its connection with nginx.
I think the browser assumes nginx is dead, because it doesn't receive any data, so it closes the TCP connection.
I'll try later to "stimulate" this TCP connection while the request is still processing by switching between my website's pages (so the browser should not close the connection), but if I have to do weird stuff like that to get my backend's result, it is not a viable solution.
There should be a way to process long requests without running into these browser timeouts, but I don't know how.
Any help would be appreciated :)
My nginx configuration:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;
events {
    multi_accept on;
    worker_connections 65535;
}
http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;
    types_hash_bucket_size 64;
    client_max_body_size 16M;
    # mime
    include mime.types;
    default_type application/octet-stream;
    # logging
    log_format my_log '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" ';
    access_log /var/log/nginx/access.log my_log;
    error_log /var/log/nginx/error.log info;
    # limits
    limit_req_log_level warn;
    limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s;
    # SSL
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    # Mozilla Intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    # OCSP
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
    resolver_timeout 2s;
    # Connection header for WebSocket reverse proxy
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ""      close;
    }
    map $remote_addr $proxy_forwarded_elem {
        # IPv4 addresses can be sent as is
        ~^[0-9.]+$ "for=$remote_addr";
        # IPv6 addresses need to be bracketed and quoted
        ~^[0-9A-Fa-f:.]+$ "for=\"[$remote_addr]\"";
        # Unix domain socket names cannot be represented in RFC 7239 syntax
        default "for=unknown";
    }
    map $http_forwarded $proxy_add_forwarded {
        # If the incoming Forwarded header is syntactically valid, append to it
        "~^(,[ \\t]*)*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*([ \\t]*,([ \\t]*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*)?)*$" "$http_forwarded, $proxy_forwarded_elem";
        # Otherwise, replace it
        default "$proxy_forwarded_elem";
    }
    # Load configs
    include /etc/nginx/conf.d/localhost.conf;
}
and localhost.conf:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name localhost;
    root /usr/share/nginx/html;
    ssl_certificate /etc/nginx/live/localhost/cert.pem;
    ssl_certificate_key /etc/nginx/live/localhost/key.pem;
    include /etc/nginx/conf.d/security.conf;
    include /etc/nginx/conf.d/proxy.conf;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log info;
    # nginx renders files or proxies the request
    location / {
        try_files $uri @front;
    }
    location @front {
        proxy_pass http://frontend:80;
    }
    location ^~ /api/v1 {
        proxy_read_timeout 30m; # because an inference with SIMP can take some time
        proxy_send_timeout 30m;
        proxy_connect_timeout 30m;
        proxy_pass http://backend:4000;
    }
    location = /report.html {
        root /usr/share/goaccess/html/;
    }
    location ^~ /ws {
        proxy_pass http://goaccess:7890;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_read_timeout 7d;
        proxy_connect_timeout 3600;
    }
    include /etc/nginx/conf.d/general.conf;
}
EDIT:
The request is sent via the Angular HttpClient; maybe this module is built in a way that aborts requests if a response is not sent within a short time frame. I'll try to investigate that.
OK, I think I can answer my own question.
HTTP requests are not designed for long-running work. When a request is issued, a response should be delivered as quickly as possible.
When you are doing a long processing job, you should use a workers-and-messages architecture (or event-driven architecture) with tools like RabbitMQ or Kafka. You can also use polling (though it is not the most efficient solution).
So in my POST handler, what I should do when data arrives is send a message to my broker and then immediately issue an appropriate response (like "request is processing").
A worker subscribes to a queue, receives the message previously delivered, does the job, and then replies back to my backend. We can then use a STOMP (WebSocket) plugin to route the result to the front end.
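As a rough sketch of the polling variant, the client-side flow could look like this (the /api/v1/jobs endpoints are hypothetical, not part of my actual backend):

# 1. Submit the job; the backend enqueues a message and answers
#    immediately (e.g. 202 Accepted with a job id) instead of
#    blocking for 15 minutes.
curl -X POST https://localhost/api/v1/jobs -d @payload.json
# -> {"jobId": "42", "status": "processing"}

# 2. Poll for the result; each request completes quickly,
#    so neither the browser nor nginx ever hits a timeout.
curl https://localhost/api/v1/jobs/42
# -> {"jobId": "42", "status": "done", "result": "..."}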
I'm trying to change the settings for nginx ssl_protocols, but the changes aren't reflected on the server.
At first I thought it was because we were on Ubuntu 12.04, but we've since updated to 14.04.
Nginx version:
nginx version: nginx/1.10.1
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --sbin-path=/usr/local/sbin/nginx --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module
OpenSSL version:
OpenSSL 1.0.1f 6 Jan 2014
nginx.conf:
http {
    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;
    tcp_nopush on;
    keepalive_timeout 65;
    tcp_nodelay off;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log debug;
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    client_body_timeout 10;
    client_header_timeout 10;
    sendfile on;
    # output compression
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_proxied any;
    gzip_types text/plain text/html text/css text/js application/x-javascript application/javascript application/json;
    # include config for each site here
    include /etc/nginx/sites/*;
}
/etc/nginx/sites/site.conf:
server {
    listen 443 ssl;
    server_name server_name;
    root /home/deploy/server_name/current/public;
    access_log /var/log/nginx/server_name.access.log main;
    ssl_certificate /usr/local/nginx/conf/ssl/wildcard.server_name.com.crt;
    ssl_certificate_key /usr/local/nginx/conf/ssl/wildcard.server_name.com.key.unsecure;
    ssl_client_certificate /usr/local/nginx/conf/ssl/geotrust.crt;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA;
    ssl_prefer_server_ciphers on;
    location ~ ^/assets/ {
        expires max;
        add_header Cache-Control public;
        add_header ETag "";
        break;
    }
    location / {
        try_files $uri @server_name;
        proxy_set_header X-Forwarded-Proto https;
    }
    location @server_name {
        include proxy.conf;
        proxy_pass http://server_name;
        proxy_set_header X-Forwarded-Proto https;
    }
    # stats url
    location /nginx_stats {
        stub_status on;
        access_log off;
    }
}
The config files get loaded properly and are both being used as intended. If it has any relevance, the server is running Ruby on Rails with Unicorn.
Does anyone have an idea what could be wrong?
Description
I had a similar problem. My changes would be applied (nginx -t would warn about duplicate and invalid values), but TLSv1.0 and TLSv1.1 would still be accepted. The line in my sites-enabled/ file reads:
ssl_protocols TLSv1.2 TLSv1.3;
I ran grep -R 'protocol' /etc/nginx/* to find other mentions of ssl_protocols, but I only found the main configuration file /etc/nginx/nginx.conf and my own site config.
Underlying problem
The problem was caused by a file included by certbot/letsencrypt, at /etc/letsencrypt/options-ssl-nginx.conf. In certbot 0.31.0 (certbot --version) the file includes this line:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
This somewhat sneakily enabled these versions of TLS.
I was tipped off by Libre Software.
0.31.0 is the most up-to-date version I was able to get for Ubuntu 18.04 LTS.
Solution
TLS versions <1.2 were disabled by default in the certbot nginx config starting from certbot v0.37.0 (thank you mnordhoff). I copied the file from there into the letsencrypt config (options-ssl-nginx.conf), added a note for myself and subsequent maintainers, and everything was right with the world again.
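For reference, this is the relevant line after the fix (the note is my own; the other directives in the file stay unchanged):

# /etc/letsencrypt/options-ssl-nginx.conf
# NOTE: manually updated to match certbot >= 0.37.0. Check this list
# if an older certbot ever overwrites this file.
ssl_protocols TLSv1.2 TLSv1.3;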
How to not get into this mess in the first place
grepping one level higher (/etc/* instead of /etc/nginx/*) would have let me find the culprit. But a more reliable and powerful tool is nginx -T, which prints out all the configuration files nginx actually considers.
Other useful commands:
nginx -s reload after you change configs
nginx -v to find out your nginx version. To enable TLS version 1.3, you need version 1.13.0+.
openssl version: you need at least OpenSSL 1.1.1 "built with TLSv1.3 support"
curl -I -v --tlsv<major.minor> <your_site> for testing whether a certain version of TLS is in fact enabled (see the example after this list)
journalctl -u nginx --since "10 minutes ago" to make absolutely sure something else isn't going on.
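For example (your_site stands in for your actual hostname; --tls-max needs curl 7.54+):

# Dump the fully resolved config and find every protocol directive
nginx -T | grep -i 'ssl_protocols'

# From the outside: this handshake should now fail...
curl -I -v --tlsv1.1 --tls-max 1.1 https://your_site
# ...while TLS 1.2 should still succeed
curl -I -v --tlsv1.2 https://your_site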
Want to add another (somewhat obscure) possibility, since the certbot one didn't cover my case. Mine applies only to nginx installs with multiple domains. Basically, the settings you specify for your specific domain may be overridden by the server default, and that default comes from the first server name encountered when reading the config (basically alphabetically). Details on this page:
http://nginx.org/en/docs/http/configuring_https_servers.html
A common issue arises when configuring two or more HTTPS servers listening on a single IP address:
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate www.example.com.crt;
    ...
}
server {
    listen 443 ssl;
    server_name www.example.org;
    ssl_certificate www.example.org.crt;
    ...
}
With this configuration a browser receives the default server’s certificate, i.e. www.example.com regardless of the requested server name. This is caused by SSL protocol behaviour. The SSL connection is established before the browser sends an HTTP request and nginx does not know the name of the requested server. Therefore, it may only offer the default server’s certificate.
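One way around relying on the alphabetical order is to mark the intended default explicitly; a minimal sketch using the placeholder names from the quote above:

# Whichever server block carries default_server wins for this IP:port,
# regardless of file order; its ssl_* settings apply to clients that
# don't send a matching SNI name.
server {
    listen 443 ssl default_server;
    server_name www.example.com;
    ssl_certificate www.example.com.crt;
    ...
}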
The issue wasn't in the server itself, but in the AWS Load Balancer having the wrong SSL ciphers selected.
I'm working on an NGINX Plus setup as a reverse proxy for traffic management and routing in my Azure cloud solution.
I'm just getting started, and everything works independently, but when I try to use proxy_pass to route web traffic to a .NET Web App that lives in the cloud, I get 404 errors.
I've tried with an app I've had deployed for a while (a .NET MVC web app) and also with a node express app that is nothing more than the basic offering, as a test:
http://rpsexpressnodetest.azurewebsites.net/
Each of these runs as expected when I go directly to it, but when I enable the pass-through I get a 404 error.
I'm using the following config file for nginx:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream web_rps {
        server rpsexpressnodetest.azurewebsites.net;
    }
    # ssl_certificate /etc/nginx/ssl/server.crt;
    # ssl_certificate_key /etc/nginx/ssl/server.key;
    # drop requests with no Host header
    # server {
    #     listen 80 default_server;
    #     server_name "";
    #     return 444;
    # }
    server {
        listen *:80;
        # listen *:443 ssl;
        root /usr/share/nginx/html;
        location / {
            proxy_pass http://web_rps;
        }
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
In any case, if I navigate to http://rpsnginx.cloudapp.net/ (my nginx VM), I always get a 404 web app not found error:
Error 404 - Web app not found.
The web app you have attempted to reach is not available in this
Microsoft Azure App Service region. This could be due to one of
several reasons:
The web app owner has registered a custom domain to point to the Microsoft Azure App Service, but has not yet configured Azure to
recognize it. Click here to read more.
The web app owner has moved the web app to a different region, but the DNS cache is still directing to the old IP Address that was used
in the previous region. Click here to read more.
If I remove the pass-through proxy, I get the standard "Welcome to NGINX" index.html file, so nginx itself seems to work just fine too...
I sincerely hope my new(b)ness is causing the issue.
Any assistance would be a great help!
First off, big props to NGINX Support for getting back to me as quickly as I could transpose this post from an email I sent them...
More importantly, here is the answer provided by them that worked!
My guess is that this is the source of the problem.
Try adding the following directive to the "location /" block:
proxy_set_header Host rpsexpressnodetest.azurewebsites.net;
Worked like a champ!
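In context, the location block then looks like this (a sketch based on the config in the question; Azure App Service routes requests by the Host header, so it must carry the *.azurewebsites.net name rather than the proxy's own hostname):

location / {
    proxy_pass http://web_rps;
    # Azure matches the Host header against the web app's hostname;
    # without this, nginx forwards its own host and Azure returns 404.
    proxy_set_header Host rpsexpressnodetest.azurewebsites.net;
}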
I searched Google for deploying multiple Rails websites using Phusion Passenger 3.0.17 with nginx, but I didn't get relevant results. Anyhow, I completed the Passenger nginx setup by running the passenger-install-nginx-module command.
Ques 1) I am looking for a proper beginner tutorial on running multiple Rails websites using Phusion Passenger 3.0.17 with nginx.
Ques 2) I am looking for commands to start, stop, and restart the whole Passenger nginx server (i.e. all websites) and also individual Rails websites.
Note: I am not looking for a Passenger Standalone solution. I am using REE 1.8.7 and Rails 2.3.14.
According to the documentation for Passenger, you create a new vhost for each app you want to deploy, point the site root at your app's public directory, and add the passenger_enabled directive. Exactly the same as deploying with Apache.
http {
    ...
    server {
        listen 80;
        server_name www.mycook.com;
        root /webapps/mycook/public;
        passenger_enabled on;
    }
    ...
}
More here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_a_ror_app
In regards to question 2: restarting depends on what you are trying to do. I'm going to assume you're using a distro that uses init.d.
There are 3 cases where you do a different kind of 'restart' (summarized as commands after this list).
You have an issue with some config you have on nginx, or it's behaving strangely.
In that case you restart the nginx service like this: /etc/init.d/nginx restart
The next case is that you have a Rails or Sinatra app deployed on nginx with the Passenger module,
and you want it to pick up some changes you just pushed to the server.
Passenger watches the tmp/restart.txt file in your application, so simply running touch tmp/restart.txt while cd'd into the app's folder will tell Passenger to reload the application.
And the last case for restarting/reloading is a reload of nginx itself.
You use this when you add or change your vhosts:
/etc/init.d/nginx reload. This reloads your vhosts and other config without dropping connections.
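Putting the three cases together (paths assume an init.d-based distro, and /webapps/mycook is the example app from the vhost above):

# 1. Full restart: nginx misbehaving or core config changed
/etc/init.d/nginx restart

# 2. Redeploy a single app: Passenger watches this file
cd /webapps/mycook && touch tmp/restart.txt

# 3. Pick up new/changed vhosts without dropping connections
/etc/init.d/nginx reload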
Have a gander at the Passenger documentation; it is very thorough. nginx-passenger docs
Here is a step-by-step tutorial on Configuring Nginx for multiple virtual hosts: http://articles.slicehost.com/2007/12/7/ubuntu-gutsy-installing-nginx-via-aptitude
Note that:
You cannot restart an individual website/virtual host: if you change some configuration in the nginx conf, as stuartc mentions, you have to restart nginx for the changes to take effect. You can, however, do a $ touch current/tmp/restart.txt in the server app directory after pushing files, if you want to apply a production fix directly.
I have experienced problems with nginx restart on Ubuntu; explicit stop and start seem to give more assured results. Use <NGINX_HOME>/bin/nginx -s stop to stop and then <NGINX_HOME>/bin/nginx to start.
To help you, here are my configuration files.
nginx.conf:
#user nobody;
worker_processes 4;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    passenger_root /rails/common/ruby-1.9.2-p290/lib/ruby/gems/1.9.1/gems/passenger-3.0.17;
    passenger_ruby /rails/common/ruby-1.9.2-p290/bin/ruby_with_env;
    passenger_max_pool_size 30;
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    gzip on;
    include /rails/common/nginx/conf/sites-enabled/*.conf;
}
A sample site.conf inside the sites-enabled folder:
server {
    listen 80;
    server_name domainname1.com;
    root /rails/myapp1/current/public;
    passenger_enabled on;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
    if (-f $document_root/system/maintenance.html) {
        return 503;
    }
    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^(.*)$ /system/maintenance.html break;
    }
}
A new file in sites-enabled is all it takes to add a new site.
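For example, adding a second site could look like this (domainname2.com and myapp2 are placeholders):

# Drop a new vhost next to the existing one
cat > /rails/common/nginx/conf/sites-enabled/myapp2.conf <<'EOF'
server {
    listen 80;
    server_name domainname2.com;
    root /rails/myapp2/current/public;
    passenger_enabled on;
}
EOF

# Reload nginx so it picks up the new site without dropping connections
<NGINX_HOME>/bin/nginx -s reload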
I have a server with the following components:
Ubuntu 12.04
Nginx 1.2.2
Passenger 3.0.15
I'm running a Ruby on Rails app on this server.
Now, in my nginx error.log, I find this error popping up regularly:
[ pid=12615 thr=3065355072 file=ext/nginx/HelperAgent.cpp:923 time=2012-10-22 09:31:03.929 ]: Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser.
Does anybody have an idea where this issue comes from?
This is my Nginx conf:
user deployer staff;
worker_processes 5;
error_log /var/log/nginx/error.log notice;
pid /var/log/nginx/nginx.pid;
events {
    worker_connections 2048;
    multi_accept on;
}
http {
    passenger_root /var/lib/gems/1.9.1/gems/passenger-3.0.15;
    passenger_ruby /usr/bin/ruby1.9.1;
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    server {
        passenger_enabled on;
        server_name xxx.com;
        listen 80;
        rails_env production;
        location ^~ /assets/ {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
        }
        root /var/rails/alfa_paints/current/public;
        error_page 404 /404.html;
        error_page 500 502 503 504 /500.html;
    }
}
Your configuration looks fine. I think the error is exactly what it says: the end user clicked "stop" in their browser, closing the TCP connection to the server. Everything in your application stack is likely working as designed. Unless you have end users complaining about the app not working, that's the most likely explanation.
That said, if you're seeing this error a lot, the next question to ask is: why are users hitting the stop button so much? Maybe part of your application is taking too long to respond, and you need to either speed it up or add some sort of progress indicator. You might look back at your logs and see if you can correlate the errors with a particular kind of request.
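A rough way to do that correlation from the shell, assuming the main log format from the config above (nginx records status 499 in the access log when the client closes the connection first):

# Most-aborted request paths: field 9 is the status, field 7 the URL
awk '$9 == 499 { print $7 }' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head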
Another situation may be like this one: nginx Timeout serving large requests.
To resolve this problem, you can try the following:
gem install passenger                            # use the latest passenger
passenger_root_path=$(passenger-config --root)   # get passenger_root_path
# download the latest nginx sources and cd into nginx-X.X.X
./configure --with-http_ssl_module --with-http_gzip_static_module --with-cc-opt=-Wno-error --add-module=$passenger_root_path/ext/nginx
make
make install
After this, configure nginx.conf and restart; you will find everything is OK!
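One thing to double-check after a rebuild like this: the paths in nginx.conf must point at the same Passenger the module was compiled against (the values below mirror the config shown in the question and will differ per machine):

http {
    # must match the gem that provided --add-module=.../ext/nginx
    passenger_root /var/lib/gems/1.9.1/gems/passenger-X.X.X;
    passenger_ruby /usr/bin/ruby1.9.1;
    ...
}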