Data not streamed from the server to the client using gRPC-Web and an Nginx reverse proxy

A Go gRPC server is running on an Amazon Linux 2 EC2 instance. A gRPC-Web wrapper is used, which makes the server available to a Next.js application. Two ports are exposed: one for regular gRPC requests and another for gRPC-Web requests. Nginx is configured to reverse proxy the requests, and TLS is enabled.
Regular gRPC server:
server {
    listen 8000 http2;
    listen [::]:8000;
    server_name example.org;

    location / {
        grpc_pass grpc://localhost:5000;
        grpc_read_timeout 300;
        grpc_send_timeout 300;
    }
}
gRPC-Web server:
server {
    server_name example.org;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:5001;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    }

    access_log /var/log/nginx/example.org/access.log;
    error_log /var/log/nginx/example.org/error.log;

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
A server-side stream has been implemented. It sends an initial response soon after the connection is established and further responses for internal events. It works fine for regular gRPC requests but not for gRPC-Web.
Once the client makes a request, the status goes to pending, and only once the stream closes does the client get the response. Interim responses do not arrive from the server. Requests from the client are logged on the server; they reach the server immediately, but the response is delayed. Sometimes, after 1 minute, the client gets this error: "(failed)net::ERR_INCOMPLETE_CHUNKED_ENCODING". I expect the behaviour to be similar to regular gRPC calls.
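That symptom (interim messages showing up only when the stream ends) matches what Nginx's default response buffering produces for a chunked gRPC-Web stream. A minimal sketch of the gRPC-Web location with buffering disabled; whether this alone fixes this particular setup is an assumption, but the directives themselves are standard Nginx:

# inside the existing gRPC-Web server block (port 443)
location / {
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:5001;
    proxy_buffering off;        # flush streamed messages to the client as they are produced
    proxy_cache off;            # never hold the chunked body in a cache
    proxy_read_timeout 300;
    proxy_send_timeout 300;
    proxy_connect_timeout 300;
}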

Related

How to define one domain for both a static webpage and a Shiny Server app?

I have a Shiny Server app using AWS EC2 & Route 53, with Nginx & Certbot for SSL. Right now my domain name is used by the app.
I would like to have a static homepage to welcome users and offer access to log in to the app.
The purpose is to have a homepage intro so it can be indexed by Google.
Can I use one domain for that (for both the app and the webpage)?
How should I define and manage my domain to do so?
I hope I made my question clear enough.
Thanks in advance.
I forgot to mention that my static website is in an AWS S3 bucket (and not on the EC2 + Nginx server).
I'm not sure about the syntax for the nginx.conf. The following is the nginx.conf that is working fine now:
server {
    listen 80;
    listen [::]:80;
    # redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    # listen 443 means the Nginx server listens on the 443 port.
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /etc/letsencrypt/live/app.mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.mydomain/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam.pem
    ssl_dhparam /etc/nginx/snippets/dhparam.pem;

    # intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    # verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /etc/letsencrypt/live/app.mydomain/chain.pem;

    # Replace it with your (sub)domain name.
    server_name app.mydomain;

    # The reverse proxy, keep this unchanged:
    location / {
        proxy_pass http://localhost:3838;
        proxy_redirect http://localhost:3838/ $scheme://$host/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 20d;
        proxy_buffering off;
    }
}
And if I understood @AlexStoneham correctly, I need to add something like this:
server {
    server_name mydomain;

    location / {
        proxy_pass $scheme://$host.s3-website-eu-central-1.amazonaws.com$request_uri;
    }
}
But that addition doesn't work. Should I add the 443 listener block to it and add the SSL certificate all over again?
app.mydomain is for the Shiny app and is working fine now.
mydomain should direct to the S3 static webpage.
Thanks.
Use Nginx server blocks in your Nginx conf and subdomains in your Route 53 conf.
Leverage a subdomain like app.yourdomain.com for the Shiny app, configured with Nginx to serve the Shiny app in one server block. Set up another subdomain like www.yourdomain.com for the static pages, configured with Nginx to serve the static pages in another server block (a rough sketch follows the links below).
See:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-routing-traffic-for-subdomains.html
for the route53 details
and:
https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/
for the nginx details
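A rough sketch of that layout, assuming for illustration that the static pages live on the same host; the port, paths, and certificate lines are placeholders to adapt:

# Shiny app on one subdomain
server {
    listen 443 ssl;
    server_name app.yourdomain.com;
    # ssl_certificate / ssl_certificate_key for app.yourdomain.com go here

    location / {
        proxy_pass http://localhost:3838;
    }
}

# Static pages on another subdomain
server {
    listen 443 ssl;
    server_name www.yourdomain.com;
    # ssl_certificate / ssl_certificate_key for www.yourdomain.com go here

    root /var/www/homepage;    # illustrative path
    index index.html;
}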
The nginx.conf was OK and didn't need anything added, because the static webpage is in an S3 bucket and not on Nginx/EC2.
The issue was that in one of my many tries I had created a Certbot certificate for "mydomain", which was the same name as the S3 bucket.
That clashed and caused the problem when trying to link my S3 bucket with that domain name through Route 53 (the S3 website endpoint is HTTP, not HTTPS).
The solution was to delete that specific SSL certificate from my EC2 server (with Nginx on it):
$ sudo certbot certificates   # shows the existing certificates
$ sudo certbot delete         # choose the certificate to delete, in my case: "mydomain"

Nginx client closes connection

I have a fully dockerised application:
nginx as proxy
a backend server (express.js)
a database (mongodb)
a frontend server (express js)
goaccess for logging
The problem is when I hit my backend endpoint with a POST request, the response is never sent to the client. A 499 code is logged by nginx along with this log
epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream,
The client is the browser, there is no doubt about it.
The error arises after 1 minute of processing in Firefox and 5 minutes in Chrome. As far as I know, these times match the timeout settings of these browsers. I could increase the timeout in Firefox, but it is not a viable solution.
When I get rid of the proxy, the request completes and the client gets the response in about 15 minutes. So I think there is a problem with the Nginx configuration, but I don't know what.
So far I have tried increasing every timeout you can imagine, but that didn't change anything.
I also tried setting proxy_ignore_client_abort in Nginx, but it is not useful in my case. Indeed, the connection between Nginx and my backend stays alive and the request completes after 15 minutes (code 200 in the Nginx logs), but the UI is not updated because the client has terminated its connection with Nginx.
I think the browser assumes Nginx is dead, because it doesn't receive any data, so it closes the TCP connection.
I'll try later to "stimulate" this TCP connection while the request is still processing by switching between my website pages (so the browser should not close the connection), but if I have to do weird stuff to get my backend result, it is not a viable solution.
There should be a way to process long requests without running into these browser timeouts, but I don't know how.
Any help would be appreciated :)
My nginx configuration:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;
events {
multi_accept on;
worker_connections 65535;
}
http {
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
log_not_found off;
types_hash_max_size 2048;
types_hash_bucket_size 64;
client_max_body_size 16M;
# mime
include mime.types;
default_type application/octet-stream;
# logging
log_format my_log '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" ';
access_log /var/log/nginx/access.log my_log;
error_log /var/log/nginx/error.log info;
# limits
limit_req_log_level warn;
limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s;
# SSL
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# Mozilla Intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
# OCSP
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
resolver_timeout 2s;
# Connection header for WebSocket reverse proxy
map $http_upgrade $connection_upgrade {
default upgrade;
"" close;
}
map $remote_addr $proxy_forwarded_elem {
# IPv4 addresses can be sent as-is
~^[0-9.]+$ "for=$remote_addr";
# IPv6 addresses need to be bracketed and quoted
~^[0-9A-Fa-f:.]+$ "for=\"[$remote_addr]\"";
# Unix domain socket names cannot be represented in RFC 7239 syntax
default "for=unknown";
}
map $http_forwarded $proxy_add_forwarded {
# If the incoming Forwarded header is syntactically valid, append to it
"~^(,[ \\t]*)*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*([ \\t]*,([ \\t]*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*)?)*$" "$http_forwarded, $proxy_forwarded_elem";
# Otherwise, replace it
default "$proxy_forwarded_elem";
}
# Load configs
include /etc/nginx/conf.d/localhost.conf;
}
and localhost.conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name localhost;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/live/localhost/cert.pem;
    ssl_certificate_key /etc/nginx/live/localhost/key.pem;

    include /etc/nginx/conf.d/security.conf;
    include /etc/nginx/conf.d/proxy.conf;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log info;

    # nginx renders files or proxies the request
    location / {
        try_files $uri @front;
    }

    location @front {
        proxy_pass http://frontend:80;
    }

    location ^~ /api/v1 {
        proxy_read_timeout 30m; # because an inference with SIMP can take some time
        proxy_send_timeout 30m;
        proxy_connect_timeout 30m;
        proxy_pass http://backend:4000;
    }

    location = /report.html {
        root /usr/share/goaccess/html/;
    }

    location ^~ /ws {
        proxy_pass http://goaccess:7890;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_read_timeout 7d;
        proxy_connect_timeout 3600;
    }

    include /etc/nginx/conf.d/general.conf;
}
EDIT:
The request is sent via the Angular HttpClient; maybe this module is built in a way that aborts requests if a response is not sent within a short time frame. I'll try to investigate that.
OK, I think I can answer my own question.
HTTP requests are not designed for long-running work. When a request is issued, a response should be delivered as quickly as possible.
When you are doing a long processing job, you should use a workers-and-messages architecture (or event-driven architecture) with tools like RabbitMQ or Kafka. You can also use polling (but it is not the most efficient solution).
So, in my POST handler, what I should do when the data arrives is send a message to my broker and then immediately return an appropriate response (like "request is processing").
The worker subscribes to a queue, receives the message previously delivered, does the job, and then replies back to my backend. We can then use a STOMP (WebSocket) plugin to route the result to the front end.
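A minimal sketch of that handoff, assuming an Express backend and RabbitMQ through amqplib; the queue name, broker URL, and route below are illustrative, not the original project's names:

import express from "express";
import amqp from "amqplib";

const JOBS_QUEUE = "long-jobs"; // illustrative queue name, consumed by the worker

async function main() {
  const connection = await amqp.connect("amqp://rabbitmq:5672");
  const channel = await connection.createChannel();
  await channel.assertQueue(JOBS_QUEUE, { durable: true });

  const app = express();
  app.use(express.json());

  // The POST handler only enqueues the job and answers immediately,
  // so the browser never waits long enough to hit its own timeout.
  app.post("/api/v1/jobs", (req, res) => {
    channel.sendToQueue(JOBS_QUEUE, Buffer.from(JSON.stringify(req.body)), {
      persistent: true,
    });
    res.status(202).json({ status: "processing" });
  });

  app.listen(4000);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

The worker consumes from the same queue and, once the job is done, the result can be pushed to the front end over the STOMP/WebSocket route described above.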

Nginx Internal Server Error with Docker throws error 500

I'm trying to deploy an Nginx application in Docker. After installing certificates with Certbot, I have this nginx.conf:
server {
    listen 80;
    server_name web.com www.web.com;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl default_server;
    server_name web.com www.web.com;

    location / {
        proxy_pass https://www.web.com;
    }

    ssl_certificate /etc/letsencrypt/live/web.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/web.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
When I try to access my web URL, the browser shows 500 Internal Server Error. nginx/1.15.12.
I can't see the logs, so I don't know what I have to do.
The SSL certificate works fine because the lock appears in the URL bar.
Can you check whether the container has started or not?
If the container is running, you can connect to it and then check the Nginx logs (they should be available at /var/log/nginx/error.log).
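For example, assuming the container is named something like web-nginx (the name here is illustrative):
$ docker ps                           # confirm the nginx container is up
$ docker logs web-nginx               # error.log is often symlinked to stderr in nginx images
$ docker exec -it web-nginx tail -n 50 /var/log/nginx/error.log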

Docker port is not working over HTTPS after setting up SSL with Ubuntu Nginx

I set up the Let's Encrypt certificate directly on an AWS EC2 Ubuntu instance running Nginx and a Docker server using port 9998. The domain is set up on Route 53. HTTP is redirected to HTTPS.
So https://example.com is working fine, but https://example.com:9998 gets ERR_SSL_PROTOCOL_ERROR. If I use the IP address, like http://10.10.10.10:9997, it works, and I checked that the server using port 9998 is okay.
The snapshot of the server on Docker is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999111000 img-server "/bin/sh -c 'java -j…" 21 hours ago Up 21 hours 0.0.0.0:9998->9998/tcp hellowworld
It seems something is missing between Nginx and the server using port 9998. How can I fix it?
Where have you configured the SSL certificate? Only in Nginx?
The reason you cannot visit https://example.com:9998 using the SSL protocol is that that port provides HTTP service rather than HTTPS.
I suggest not publishing port 9998 of hellowworld and proxying all the traffic with Nginx (if Nginx is also started with Docker and on the same network).
Configure HTTPS in Nginx and let the origin server provide HTTP.
This is a sample configuration: https://github.com/newnius/scripts/blob/master/nginx/config/conf.d/https.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443;
    server_name example.com;

    access_log logs/example.com/access.log main;
    error_log /var/log/nginx/debug.log debug;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://apache:80;
        proxy_set_header Host $host;
        proxy_set_header CLIENT-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ /.well-known {
        allow all;
        proxy_pass http://apache:80;
    }

    # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
    location ~ /\. {
        deny all;
    }
}

SSL certificate or Nginx proxy server not working

I have created a domain (domain.com) and a subdomain (abc.domain.com), and generated SSL certificates for both using Let's Encrypt. Both Django projects are hosted on AWS EC2, and I created a proxy server for them, which is as follows:
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/domain/fullchain.pem;
        proxy_ssl_certificate_key /home/domain/privkey.pem;
    }
}

server {
    listen 443 ssl;
    server_name abc.example.com;

    location / {
        proxy_pass https://1.2.3.4:445;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/subdomain/fullchain.pem;
        proxy_ssl_certificate_key /home/subdomain/privkey.pem;
    }
}
I start the proxy server and both projects, and starting them does not give any problem. The problem is that when I enter https://example.com in the browser, it does not show the page, but when I request the domain with the port number, https://example.com:444, it does show the page. I do not know what I am missing.
In order to make https://example.com work, you need to correctly configure Nginx for SSL, which includes using the ssl_certificate and ssl_certificate_key directives; it does not seem that you are using them.
proxy_ssl_certificate is for the HTTPS connection between Nginx and the proxied server, which in your case is the Django application.
ssl_certificate is for the HTTPS connection between the user's browser and Nginx, which is what you need for https://example.com to work as expected.
For more details, check configuring HTTPS servers.
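A minimal sketch of the first server block with the browser-facing certificate added; the certificate paths are illustrative, and the proxy_ssl_* directives are only needed if the Django apps really are served over HTTPS on ports 444/445:

server {
    listen 443 ssl;
    server_name example.com;

    # certificate presented to the user's browser
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
    }
}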
