I have a simple application with a public address, let's say http://example.com.
I need to set up a reverse proxy that forwards requests to this service, so I'm running nginx as a Docker image with the following configuration.
http {
    server {
        listen 80 default_server;
        listen [::]:80 default_server;

        location / {
            proxy_http_version 1.1;
            proxy_set_header "Connection" "";
            resolver 127.0.0.11;
            proxy_pass http://example.com;
        }
    }
}
But it is not working; I'm receiving a bunch of
send() failed (111: Connection refused) while resolving, resolver:
127.0.0.11:53
And finally,
example.com could not be resolved (110: Operation timed out),
The solution was to use Google Public DNS. On top of that, I had to turn off IPv6, as it was causing other issues.
resolver 8.8.8.8 ipv6=off;
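For reference, the relevant location block with the fix applied might look like this (a sketch; the rest of the configuration is unchanged):

location / {
    proxy_http_version 1.1;
    proxy_set_header "Connection" "";

    # Use Google Public DNS instead of Docker's embedded resolver and
    # skip AAAA (IPv6) lookups, which were causing other issues here.
    resolver 8.8.8.8 ipv6=off;

    proxy_pass http://example.com;
}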
Related
How to call GRPC Server which is located in docker container on Swarm cluster from NGINX reverse proxy?
GRPC Server is in a container/service called webui, with a Kestrel development certificate installed.
The NGINX proxy is located outside the stack and routes access to the Swarm stacks.
The GRPC Client is located on a separate virtual machine on another network; the browser page at https://demo.myorg.com is available.
Part of nginx.conf:
server {
    listen 443 ssl;
    server_name demo.myorg.com;
    ...
    location / {
        proxy_pass https://namestack_webui;
    }
}
GRPC Client appsettings.json:
{
    "ConnectionStrings": {
        "Database": "Data Source=Server_name;Initial Catalog=DB;User Id=user;Password=pass;MultipleActiveResultSets=True;"
    },
    ...
    "GRPCServerUri": "https://demo.myorg.com/",
    ...
}
The problem: when connecting the GRPC Client to the Server, I get this error:
END] GetOpcDaServerSettingsQuery. Time spent: 7,7166ms
fail: Grpc.Net.Client.Internal.GrpcCall[6]
Error starting gRPC call.
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.Security.Authentication.AuthenticationException: Authentication failed, see inner exception.
---> System.ComponentModel.Win32Exception (0x80090367): No common application protocol exists between the client and the server. Application protocol negotiation failed..
--- End of inner exception stack trace ---
I tried creating and specifying a Kestrel development certificate (for the GRPC Client) that is loaded into the Swarm stack (namestack) and through which the other containers in the stack are authenticated, but the error is the same.
I understand that it is necessary to specify the GRPC Server container address (https://namestack_webui) in appsettings.json, but it is behind NGINX, and I can only specify the GRPC host address (https://demo.myorg.com). Can you tell me what is wrong?
I couldn't find a complete solution for this case online.
I finally figured it out, and I'm publishing the solution for discussion.
If there are no objections, I'll mark it as correct; at least it works for me, and it should work for you too.
To proxy gRPC connections through NGINX, the location section in the configuration must specify something similar to the URL /PackageName.ServiceName/MethodName (this is described at https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-7.0#unable-to-start-aspnet-core-grpc-app-on-macos ).
This URL can be confirmed with the developer or found in the logs when the gRPC client connects.
The grpc_pass grpcs://name_container; directive should be used for proxying.
The http2 protocol should be used.
So the correct nginx configuration file in my case should look like this:
server {
    listen 443 ssl http2;
    server_name demo.myorg.com;

    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH;
    add_header Strict-Transport-Security 'max-age=604800';

    underscores_in_headers on;
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass https://name_container;

        # Configuration for WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache off;
        # WebSockets were implemented after http/1.0
        proxy_http_version 1.1;

        # Configuration for ServerSentEvents
        proxy_buffering off;

        # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
        proxy_read_timeout 100s;

        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-URL-SCHEME https;
    }

    location /App.Name.Api.Contract.ApiService/UpdateOpcDaTags {
        grpc_pass grpcs://name_container;
    }
}
Background
My setup is based on the following tutorial:
Dockerizing Django with Postgres, Gunicorn, and NGINX
TL;DR: (italics: not covered by tutorial; personal adventure)
3 Docker services for: nginx -> django -> postgres (arrow indicating dependency)
Nginx proxy passes requests to exposed port in Django service.
HTTP (non-SSL) requests working
require SSL connections by redirecting http -> https
Details
I've generated a self-signed certificate to test out ssl redirects with NGINX locally before trying to get it to work on a VPS in production. I'm quite new to working with NGINX and so I'm not entirely sure what's going wrong or how to diagnose problems.
Here's what I want to happen with the NGINX file I've provided below... (spoilers: it doesn't):
Go to http://localhost
Get redirected to https://localhost
Warning from browser about a self-signed cert; accept warning and proceed
Site rendered fine, SSL redirect working!
But this isn't the case. I get a 502 Bad Gateway, and NGINX outputs the following logs:
prod_1 | 192.168.144.1 - - [03/Jun/2019:00:01:44 +0000] "GET / HTTP/1.1" 502 158 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:67.0) Gecko/20100101 Firefox/67.0" "-"
prod_1 | 2019/06/03 00:01:44 [error] 8#8: *1 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 192.168.144.1, server: , request: "GET / HTTP/1.1", upstream: "https://192.168.144.3:8000/", host: "localhost"
Can anyone tell me what's going on or how to fix it? I feel like there's probably a whole bunch wrong with my conf file even outside of the SSL redirect, but I don't really know how to identify any problems. The conf file is below...
upstream mysite {
    server web:8000;
}

# redirect http traffic to https
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    location / {
        proxy_pass https://mysite;
        proxy_ssl_server_name on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }

    location /assets/ {
        alias /usr/src/site/assets/;
    }

    location /media/ {
        alias /usr/src/site/media/;
    }

    ssl_certificate /etc/ssl/certificates/site.crt;
    ssl_certificate_key /etc/ssl/certificates/site.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
}
Based on your error
prod_1 | 2019/06/03 00:01:44 [error] 8#8: *1 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 192.168.144.1, server: , request: "GET / HTTP/1.1", upstream: "https://192.168.144.3:8000/", host: "localhost"
I would say it's likely that you have a protocol mismatch between Nginx and Django: Django is probably expecting non-secure (http) communication, while your nginx configuration tells Nginx to communicate with Django via https:
proxy_pass https://mysite;
My suggestion is to make sure that the communication between Nginx and Django is using the same protocol, either http or https.
Whether you want to use http or https is up to you. There are two differing schools of thought on whether http is secure here.
The first school of thought is that http IS secure in this scenario, since the communication is within the same machine.
The second school of thought is to secure all communications, and to use https. However if you agree with this line of thinking, you will have to make sure that the communication between your web server and database is also using a secure protocol. After all, you are only as secure as your weakest link.
I tend to lean towards the first school of thought, though this is not necessarily what is appropriate for you.
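If you go with plain http between Nginx and Django, the fix is to proxy to the upstream over http instead of https. A minimal sketch of the adjusted location block, assuming Gunicorn/Django serves plain HTTP on port 8000 inside the container:

location / {
    # Gunicorn/Django serves plain HTTP inside the Docker network,
    # so talk to the upstream over http rather than https.
    proxy_pass http://mysite;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
}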
I set up the Let's Encrypt certificate directly on an AWS EC2 Ubuntu instance running Nginx, with a Docker server using port 9998. The domain is set up on Route 53. HTTP is redirected to HTTPS.
So https://example.com is working fine, but https://example.com:9998 gets ERR_SSL_PROTOCOL_ERROR. If I use the IP address, like http://10.10.10.10:9997, it works, and I have checked that the server on port 9998 is okay.
The snapshot of the server on docker is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999111000 img-server "/bin/sh -c 'java -j…" 21 hours ago Up 21 hours 0.0.0.0:9998->9998/tcp hellowworld
It seems something is missing between Nginx and the server using port 9998. How can I fix it?
Where have you configured the SSL certificate? Only in Nginx?
The reason you cannot visit https://example.com:9998 over SSL is that that port provides an http service rather than https.
I suggest not publishing port 9998 of hellowworld, and instead proxying all the traffic through nginx (if Nginx is also started with Docker and on the same network); see the Docker sketch after the sample configuration below.
Configure https in Nginx and let the origin server provide http.
This is a sample configuration https://github.com/newnius/scripts/blob/master/nginx/config/conf.d/https.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443;
    server_name example.com;

    access_log logs/example.com/access.log main;
    error_log /var/log/nginx/debug.log debug;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://apache:80;
        proxy_set_header Host $host;
        proxy_set_header CLIENT-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ /.well-known {
        allow all;
        proxy_pass http://apache:80;
    }

    # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
    location ~ /\. {
        deny all;
    }
}
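Applied to the setup in the question, the Docker side could look roughly like this (a sketch; the network name app-net is an assumption, and in your case the upstream would be the hellowworld container on port 9998 rather than apache):

# Create a shared network, attach both containers to it, and publish
# only nginx's ports to the host; port 9998 stays internal.
docker network create app-net
docker run -d --name hellowworld --network app-net img-server
docker run -d --name nginx --network app-net -p 80:80 -p 443:443 nginx

Inside that network, nginx can then reach the app with proxy_pass http://hellowworld:9998;.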
I am working on an SOA system, and I am using Consul service discovery with nginx and Registrator. Everything is dockerized. The idea is to have all of these backend services running inside Docker containers, visible to the Consul server, and to use nginx as a load balancer to route requests to the correct service.
I've set up Consul and Registrator successfully and tested them using the Consul UI. If I spin up a service inside Docker (Redis, for example), I can see that Consul discovers it. The problem I am having is configuring nginx to connect to the upstream servers. I have a bunch of PHP services running inside a container, and I want nginx to connect to the correct upstream server and serve the response; however, nginx always returns a 502.
Here is my nginx.conf file:
upstream app-cluster {
    least_conn;
    {{range service "app-http"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}server 127.0.0.1:65535; # force a 502{{end}}
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://app-cluster;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
    }
}
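For context, this file is a consul-template template: consul-template renders it into a real nginx config and reloads nginx whenever the "app-http" service list changes. A minimal invocation might look like this (the paths here are assumptions for illustration):

# Re-render the upstream block when services change, then reload nginx.
consul-template \
  -consul-addr "127.0.0.1:8500" \
  -template "/etc/consul-template/nginx.conf.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload"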
nginx error log :
2018/08/29 09:56:29 [error] 27#27: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.10.24, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:32795/", host: "aci-host-01:8080"
Does anyone know of a comprehensive guide on this, or have an idea where the problem might be?
Thanks in advance.
I have two docker containers:
One container runs my Spring Boot application, which listens on port 8080:
This container exposes port 8080 to other Docker containers.
The container's IP in the Docker network is 172.17.0.2.
The other container runs nginx, which publishes port 80.
I can successfully put my Spring Boot app behind nginx with the following conf in my nginx container:
server {
    server_name <my-ip>;
    listen 80;

    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
Doing a GET request to my REST API (http://my-ip/context-url) works fine.
I am now trying to put my application behind nginx with https. My nginx conf is as follows:
server {
    server_name <my-ip>;
    listen 80;
    return 301 https://$server_name$request_uri;
}

server {
    server_name <my-ip>;
    listen 443;

    ssl on;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        proxy_pass http://172.17.0.2:8080/;
    }
}
However, I now cannot access my application through either http or https.
http redirects to https, and the result is ERR_CONNECTION_REFUSED.
The problem was that I was publishing only port 80, not port 443, when running the nginx container. The nginx configuration itself is right.
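In other words, the nginx container needs both ports published to the host, something along these lines (a sketch; the image and container names are placeholders):

# Publish both the HTTP and HTTPS ports of the nginx container to the host.
docker run -d --name nginx \
  -p 80:80 \
  -p 443:443 \
  my-nginx-image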