Here is the story:
My server is a cloud server running CentOS, and it serves a number of web pages.
These web pages fall into three main categories: my homework, my small projects, and my big projects.
To manage them efficiently, I decided to move them into Docker containers. Here is the structure:
This is my plan, but WebSocket does not work with this setup:
port 80 --- nginx (on the physical machine)
    |
    |--- another port --- container port 80 --- nginx --- static files
    |                                             |--- container port --- back-end server
    |--- another port --- ...
    ...
################################################################################
WebSocket works fine with this setup:
physical port --- container port 80 --- nginx --- static files
                                          |--- container port --- back-end server
I set up nginx on the physical machine to listen on port 80 and proxy requests to my containers, and the nginx inside each container proxies requests on to my back-end server. With this setup everything works fine except WebSocket: web pages load and AJAX requests to my back-end server get responses, but for WebSocket the back-end server receives the request, upgrades it, and holds the connection, while the browser never gets any response until nginx closes the connection with a 504 error after the timeout is exceeded.
When I bind the container's port 80 directly to a physical port instead of going through the nginx proxy on the physical machine, everything works fine.
I don't think it is a header issue, because I set the headers in code before upgrading the request.
I can't figure out why. Can anybody help me?
Here are my configurations:
##############################
the nginx.conf on the physical machine
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    upstream myminiprojs {
        server 127.0.0.1:19306;
    }
    .......... # a few more upstreams

    server {
        listen 80;
        server_name # can't be published;
        charset utf-8;
        add_header Cache-Control no-store;

        location / {
            root /root/coding;
        }
        location /homeworks {
            proxy_pass http://myminiprojs;
        }
        ...............
    }
}
###################
the nginx.conf in the container
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    upstream chatroomAjax {
        server 127.0.0.1:19306;
    }

    server {
        listen 80;
        charset utf-8;
        add_header Cache-Control no-store;

        location / {
            root /app;
        }
        location /homeworks/chatroom/ajax {
            proxy_pass http://chatroomAjax;
        }
    }
}
#############################
a bit of code from my back-end server
// Go WebSocket server
{
    scheduleBroadCast := func(w http.ResponseWriter, r *http.Request) {
        // Force the upgrade headers in code instead of relying on the proxy
        // to pass them through (the workaround discussed below).
        r.Header.Set("Connection", "Upgrade")
        r.Header.Set("Upgrade", "websocket")
        fmt.Println(r.Header.Get("Connection"), r.Header.Get("Upgrade"))

        var upGrader websocket.Upgrader
        upGrader.CheckOrigin = func(r *http.Request) bool {
            return true
        }

        conn, err := upGrader.Upgrade(w, r, nil)
        if err != nil {
            fmt.Println("websocket upgrade failed", err)
            fmt.Println(r.Method)
            return
        }
        broadCast.AddListener(conn)
    }
    http.HandleFunc(route+"/websocket/", scheduleBroadCast)
}
I did not use proxy_set_header to pass the Upgrade headers on to the back-end server. Instead, I tried to make the back-end server upgrade the HTTP request to a WebSocket regardless of the headers.
With only a single nginx proxy that works, but in this setup the second nginx never realizes that the request is a WebSocket connection, so it does not work.
So it is necessary to use proxy_set_header.
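For reference, the WebSocket-aware proxy configuration on the physical machine could look roughly like this (a sketch only; the upstream name is taken from the config above, and the same three proxy_* directives would also be needed in the container's nginx location that forwards the WebSocket path to the back-end server):

location /homeworks {
    proxy_pass http://myminiprojs;
    # Forward the client's upgrade request as a real WebSocket handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Optional: keep idle WebSocket connections open longer than the 60s default
    proxy_read_timeout 3600s;
}

Without the Upgrade and Connection headers being passed along, each nginx hop treats the handshake as a plain HTTP request and waits for a normal response, which matches the 504 behaviour described above.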
Related
A Go gRPC server is running on an Amazon Linux 2 EC2 instance. A gRPC-Web wrapper is used, which makes the server available to a Next.js application. Two ports are exposed: one for regular gRPC requests and another for gRPC-Web requests. Nginx is configured to reverse proxy the requests, and TLS is enabled.
Regular gRPC server
server {
    listen 8000 http2;
    listen [::]:8000;
    server_name example.org;

    location / {
        grpc_pass grpc://localhost:5000;
        grpc_read_timeout 300;
        grpc_send_timeout 300;
    }
}
gRPC-Web server
server {
    server_name example.org;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:5001;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    }

    access_log /var/log/nginx/example.org/access.log;
    error_log /var/log/nginx/example.org/error.log;

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
A server-side stream has been implemented. It sends an initial response soon after the connection is established and further responses for internal events. It works fine for regular gRPC requests but not for gRPC-Web.
Once the client makes a request, the status goes to pending, and only when the stream closes does the client get the response; interim responses never reach the client. Client requests are logged in the server and reach it immediately, but the response is delayed. Sometimes, after about a minute, the client gets this error: "(failed) net::ERR_INCOMPLETE_CHUNKED_ENCODING". I expect the behaviour to be similar to the regular gRPC calls.
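One thing worth ruling out here (an assumption on my part, not a confirmed diagnosis): nginx buffers proxied responses by default, which can hold back the individual chunks of a streamed gRPC-Web response until the upstream finishes. A sketch of the gRPC-Web location block with buffering disabled, based on the config above:

location / {
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:5001;
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    # Pass each chunk of the streamed response to the client as it arrives
    # instead of buffering it until the stream closes.
    proxy_buffering off;
}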
I have a fully dockerised application:
nginx as proxy
a backend server (express.js)
a database (mongodb)
a frontend server (express.js)
goaccess for logging
The problem is that when I hit my backend endpoint with a POST request, the response is never sent to the client. nginx logs a 499 code along with this message:
epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream,
The client is the browser; there is no doubt about that.
The error arises after 1 minute of processing in Firefox and 5 minutes of processing in Chrome. As far as I know, these times match the timeout settings of these browsers. I could increase the timeout in Firefox, but that is not a viable solution.
When I get rid of the proxy, the request completes and the client gets the response after about 15 minutes. So I think there is a problem with the nginx configuration, but I don't know what it is.
So far I have tried increasing every timeout you can imagine, but that didn't change anything.
I also tried setting proxy_ignore_client_abort in nginx, but it does not help in my case. The connection between nginx and my backend stays alive and the request completes after 15 minutes (code 200 in the nginx logs), but the UI is not updated because the client has already terminated its connection with nginx.
I think the browser assumes nginx is dead because it doesn't receive any data, so it closes the TCP connection.
I'll try later to "stimulate" this TCP connection while the request is still processing by switching between my website's pages (so the browser should not close the connection), but if I have to do weird things like that to get my backend's result, it is not a viable solution.
There should be a way to handle long requests without running into these browser timeouts, but I don't know how.
Any help would be appreciated :)
My nginx configuration:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    multi_accept on;
    worker_connections 65535;
}

http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;
    types_hash_bucket_size 64;
    client_max_body_size 16M;

    # mime
    include mime.types;
    default_type application/octet-stream;

    # logging
    log_format my_log '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" ';
    access_log /var/log/nginx/access.log my_log;
    error_log /var/log/nginx/error.log info;

    # limits
    limit_req_log_level warn;
    limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s;

    # SSL
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Mozilla Intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

    # OCSP
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
    resolver_timeout 2s;

    # Connection header for WebSocket reverse proxy
    map $http_upgrade $connection_upgrade {
        default upgrade;
        "" close;
    }

    map $remote_addr $proxy_forwarded_elem {
        # IPv4 addresses can be sent as-is
        ~^[0-9.]+$ "for=$remote_addr";
        # IPv6 addresses need to be bracketed and quoted
        ~^[0-9A-Fa-f:.]+$ "for=\"[$remote_addr]\"";
        # Unix domain socket names cannot be represented in RFC 7239 syntax
        default "for=unknown";
    }
    map $http_forwarded $proxy_add_forwarded {
        # If the incoming Forwarded header is syntactically valid, append to it
"~^(,[ \\t]*)*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*([ \\t]*,([ \\t]*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*)?)*$" "$http_forwarded, $proxy_forwarded_elem";
        # Otherwise, replace it
        default "$proxy_forwarded_elem";
    }
    # Load configs
    include /etc/nginx/conf.d/localhost.conf;
}
and localhost.conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name localhost;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/live/localhost/cert.pem;
    ssl_certificate_key /etc/nginx/live/localhost/key.pem;

    include /etc/nginx/conf.d/security.conf;
    include /etc/nginx/conf.d/proxy.conf;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log info;

    # nginx renders files or proxies the request
    location / {
        try_files $uri @front;
    }

    location @front {
        proxy_pass http://frontend:80;
    }

    location ^~ /api/v1 {
        proxy_read_timeout 30m; # because an inference with SIMP can take some time
        proxy_send_timeout 30m;
        proxy_connect_timeout 30m;
        proxy_pass http://backend:4000;
    }

    location = /report.html {
        root /usr/share/goaccess/html/;
    }

    location ^~ /ws {
        proxy_pass http://goaccess:7890;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_read_timeout 7d;
        proxy_connect_timeout 3600;
    }

    include /etc/nginx/conf.d/general.conf;
}
EDIT:
The request is sent via the Angular HttpClient; maybe this module is built to abort requests if a response is not received within a short time frame. I'll investigate that.
OK, I think I can answer my own question.
HTTP requests are not designed for long-running work: when a request is issued, a response should be delivered as quickly as possible.
For a long-running processing job, you should use a worker-and-messages architecture (or event-driven architecture) with tools like RabbitMQ or Kafka. You can also use polling (but it is not the most efficient solution).
So in my POST handler, what I should do when the data arrives is send a message to my broker and then immediately return an appropriate response (such as "request is processing").
The worker subscribes to a queue, receives the message, does the job, and then replies back to my back end. A STOMP (WebSocket) plugin can then route the result to the front end.
I have created a domain (domain.com) and a subdomain (abc.domain.com), and generated SSL certificates for both using Let's Encrypt. Both Django projects are hosted on AWS EC2, and I created a proxy server for them as follows:
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/domain/fullchain.pem;
        proxy_ssl_certificate_key /home/domain/privkey.pem;
    }
}

server {
    listen 443 ssl;
    server_name abc.example.com;

    location / {
        proxy_pass https://1.2.3.4:445;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/subdomain/fullchain.pem;
        proxy_ssl_certificate_key /home/subdomain/privkey.pem;
    }
}
I start the proxy server and both projects, and starting them doesn't give any problem. The problem is that when I enter https://example.com in the browser it does not show the page, but when I request the domain with the port number, https://example.com:444, the page shows up. I don't know what I am missing.
In order to make https://example.com work, you need to configure Nginx with a proper SSL configuration, which includes the ssl_certificate and ssl_certificate_key directives; it does not look like you are using them.
proxy_ssl_certificate is for the HTTPS connection between Nginx and the proxied server, which in your case is the Django application.
ssl_certificate is for the HTTPS connection between the user's browser and Nginx, which is what you need to make https://example.com work as expected.
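A minimal sketch of what the example.com server block could then look like with TLS terminated by Nginx (the certificate paths are reused from the question and are an assumption about where the browser-facing certificate lives):

server {
    listen 443 ssl;
    server_name example.com;

    # TLS between the user's browser and Nginx
    ssl_certificate /home/domain/fullchain.pem;
    ssl_certificate_key /home/domain/privkey.pem;

    location / {
        # Optional TLS between Nginx and the upstream Django app
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
    }
}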
For more details, check the nginx documentation on configuring HTTPS servers.
I am working on a live streaming service that will be used by a pretty small community, and I am currently unable to figure out how to secure the nginx server that takes in RTMP from OBS/XSplit and puts it out as HLS. The HLS is played using Video.js. User accounts are handled with the Devise module, and the usernames are used to create the paths for the viewer and the streamer. I need to add stream keys so that nobody can stream to another user's account. This can be done with on_publish and PHP in nginx.conf, but that wouldn't work with my RoR/Devise account system. How can I do this within Devise and Rails?
Edit: I was asked to add some code. I don't know of any way to do this in Rails, but here is a PHP and Python example: https://github.com/Nesseref/nginx-rtmp-auth
The NGINX http server is running on port 8080, but is behind an apache2 proxy on port 80, as this machine hosts multiple websites.
Here's my NGINX config
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    #gzip on;

    server {
        listen 8080;
        server_name localhost;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /tmp;
            add_header Cache-Control no-cache;
            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }

        location / {
            root html;
            index index.html index.htm;
        }

        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
        }

        location /stat.xsl {
            root /var/www/html/;
        }

        #error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            deny play all;
            hls on;
            hls_path /tmp/hls;
            hls_fragment 15s;
        }
    }
}
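For illustration, the on_publish hook mentioned in the question would sit inside the application block roughly like this (a sketch only; the /streams/authenticate route and port 3000 are hypothetical names for an endpoint the Rails app would expose to check the stream key against Devise's user records):

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            deny play all;

            # nginx-rtmp POSTs the stream name and arguments (including the
            # stream key) to this URL; a 2xx response allows publishing,
            # anything else rejects it.
            on_publish http://127.0.0.1:3000/streams/authenticate;

            hls on;
            hls_path /tmp/hls;
            hls_fragment 15s;
        }
    }
}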
I'm working on an NGINX Plus setup as a reverse proxy for traffic management and routing in my Azure cloud solution.
I'm just getting started and everything works independently, but when I try to use proxy_pass to route web traffic to a .NET web app hosted in the cloud, I get 404 errors.
I've tried this with an app I've had deployed for a while (a .NET MVC web app) and also with a Node Express app that is nothing more than the basic template, as a test:
http://rpsexpressnodetest.azurewebsites.net/
Each of these runs as expected when I go directly to it, but when I enable the pass-through I get a 404 error.
I'm using the following config file for nginx:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream web_rps {
        server rpsexpressnodetest.azurewebsites.net;
    }

    # ssl_certificate /etc/nginx/ssl/server.crt;
    # ssl_certificate_key /etc/nginx/ssl/server.key;

    # drop requests with no Host header
    # server {
    #     listen 80 default_server;
    #     server_name "";
    #     return 444;
    # }

    server {
        listen *:80;
        # listen *:443 ssl;
        root /usr/share/nginx/html;

        location / {
            proxy_pass http://web_rps;
        }
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
In any case, if I navigate to http://rpsnginx.cloudapp.net/ (my nginx VM), I always get a 404 "web app not found" error:
Error 404 - Web app not found.
The web app you have attempted to reach is not available in this
Microsoft Azure App Service region. This could be due to one of
several reasons:
The web app owner has registered a custom domain to point to the Microsoft Azure App Service, but has not yet configured Azure to
recognize it. Click here to read more.
The web app owner has moved the web app to a different region, but the DNS cache is still directing to the old IP Address that was used
in the previous region. Click here to read more.
If I remove the pass-through proxy, I get the standard "Welcome to NGINX" index.html page, so nginx itself seems to work just fine too...
I sincerely hope my new(b)ness is causing the issue.
Any assistance would be a great help!
First off, big props to NGINX Support for getting back to me as quickly as I could transpose this post from an email I sent them...
More importantly, here is the answer provided by them that worked!
My guess is that this is the source of the problem.
Try adding the following directive to the "location /" block:
proxy_set_header Host rpsexpressnodetest.azurewebsites.net;
Worked like a champ!
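For reference, the corrected location block would then look something like this (a sketch based on the config above; Azure App Service selects the site by Host header, so the upstream's own hostname has to be sent instead of the one the client used):

location / {
    # Azure App Service routes by Host header, so send the upstream's
    # hostname rather than the proxy VM's.
    proxy_set_header Host rpsexpressnodetest.azurewebsites.net;
    proxy_pass http://web_rps;
}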