I have an official Nginx image to which I added certificates and the .conf file so that it listens on ports 443 and 80. I just added the following to the official Nginx Dockerfile:
ADD gp-search2.conf /etc/nginx/conf.d/
ADD STAR.GREY.COM.crt /etc/ssl/certs/
ADD wildcard_grey.com.key /etc/ssl/certs/
I'm using Azure Container Instances, and I'm creating the container successfully with this command:
az container create --resource-group RG --name nginx --image xxxxx.azurecr.io/api-s:nginx4 --cpu 1 --memory 1.5 --registry-username xxxxxx --registry-password xxxxxxxxxx --ip-address public --ports 443 --dns-name-label prod3
After this I get a container created on Azure successfully, with a public IP and the FQDN that I provided (prod3.eastus.azurecontainer.io). I also created a new node in Dynect.net for the domain we have, newcontainer.example.com, and added the public IP of the new container there, so that the valid certificates on the container match that domain.
If I access the container with the public FQDN or the IP that Azure provides, it works fine, but if I try to access it over HTTPS I get:
This page isn't working. If the problem continues, contact the site owner. HTTP ERROR 400
and
WARNING: cannot verify newcontainer.example.com's certificate, issued by 'CN=Network Solutions OV Server CA 2,O=Network Solutions L.L.C.,L=Herndon,ST=VA,C=US': Unable to locally verify the issuer's authority.
Even though:
the certificates are valid
the certificates are in the same path that I indicate in the .conf file in the Nginx container (otherwise the container wouldn't start).
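One more check worth doing in this situation is confirming that the certificate and key actually belong together, by comparing their modulus digests. A minimal sketch; it generates a throwaway self-signed pair just to demonstrate the check, so point the two comparison commands at your real .crt/.key instead:

```shell
# Throwaway self-signed pair for demonstration only;
# replace demo.crt/demo.key with your real certificate and key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=demo" -days 1 2>/dev/null
# A certificate and its private key must share the same RSA modulus.
cert_md5=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
[ "$cert_md5" = "$key_md5" ] && echo "certificate and key match"
```

If the two digests differ, nginx is serving a key that does not match the certificate, which browsers report as a generic handshake/400-style failure.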
This is the .conf file I have:
upstream searchapl {
    server 40.x.x.x:8080 fail_timeout=0;
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name _;
    #status_zone go-backend-servers;

    ssl_certificate /etc/ssl/certs/STAR.client.COM.pem;
    ssl_certificate_key /etc/ssl/certs/STAR.client.COM.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://searchapl;
        proxy_redirect http:// https://;

        # Socket.IO support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeout settings
        proxy_connect_timeout 159s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
    }
}
This Nginx is used as a reverse proxy that redirects to a Tomcat container, which is working fine. The redirect works successfully if I enter the container IP; it takes me to Tomcat. But over 443 I get the certificate issue. What else can I check? The certificate and key have read access.
OK, now I have moved forward a small step, but I am still getting an error. I created a .pem file instead of the .crt, containing the whole chain: the primary certificate plus the intermediate and root certificates.
I no longer get the "unable to verify certificate authority" error, but now I am getting:
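For reference, the order inside such a bundle matters: leaf certificate first, then intermediates, then (optionally) the root. A sketch using stand-in PEM blocks; in practice you would cat your actual certificate files in this order:

```shell
# Stand-in PEM blocks; replace these with your real certificate files.
printf -- '-----BEGIN CERTIFICATE-----\nleaf\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nintermediate\n-----END CERTIFICATE-----\n' > intermediate.crt
printf -- '-----BEGIN CERTIFICATE-----\nroot\n-----END CERTIFICATE-----\n' > root.crt
# Leaf first, then the chain - nginx serves this file as ssl_certificate.
cat server.crt intermediate.crt root.crt > fullchain.pem
grep -c 'BEGIN CERTIFICATE' fullchain.pem   # -> 3
```

If the leaf comes last instead of first, many clients will still report the chain as unverifiable.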
--2018-08-24 21:05:05--  https://xxxxxxxxxxxx.com/
Resolving xxxxxxxxxxxx.com (gp_searchv2.grey.com)... 23.x.x.x
Connecting to xxxxxxxxx.com (xxxxxxxxxx.com)|23.x.x.x|:443... connected.
HTTP request sent, awaiting response... 400
2018-08-24 21:05:05 ERROR 400: (no description).
How do I call a gRPC server, located in a Docker container on a Swarm cluster, from an NGINX reverse proxy?
The gRPC server is in a container/service called webui, with a Kestrel development certificate installed.
The NGINX proxy is located outside the stack and routes access to the Swarm stacks.
The gRPC client is located on a separate virtual machine on another network; the browser page at https://demo.myorg.com is available.
Part of nginx.conf:
server {
    listen 443 ssl;
    server_name demo.myorg.com;
    ...
    location / {
        proxy_pass https://namestack_webui;
    }
}
gRPC client appsettings.json:
{
    "ConnectionStrings": {
        "Database": "Data Source=Server_name;Initial Catalog=DB;User Id=user;Password=pass;MultipleActiveResultSets=True;"
    },
    ...
    "GRPCServerUri": "https://demo.myorg.com/",
    ...
}
When connecting the gRPC client to the server, I get this error:
END] GetOpcDaServerSettingsQuery. Time spent: 7,7166ms
fail: Grpc.Net.Client.Internal.GrpcCall[6]
Error starting gRPC call.
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.Security.Authentication.AuthenticationException: Authentication failed, see inner exception.
---> System.ComponentModel.Win32Exception (0x80090367): No common application protocol exists between the client and the server. Application protocol negotiation failed..
--- End of inner exception stack trace ---
I tried writing and specifying a Kestrel development certificate (for the gRPC client) that is loaded into the Swarm stack (namestack), through which the other containers in the stack are authenticated; the error is the same.
I understand that it is necessary to specify the gRPC server container address (https://namestack_webui) in appsettings.json, but it is behind NGINX, and I can only specify the gRPC host address (https://demo.myorg.com). Can you tell me what is wrong?
I could not find a ready-made solution for this case online.
I finally figured it out and found a solution to my question, and I am publishing it for discussion.
If there are no objections, I will mark it as correct; at least it works for me, and it should work for you too.
To proxy gRPC connections through NGINX, the location section of the configuration must specify a path of the form /PackageName.ServiceName/MethodName (this is described at https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-7.0#unable-to-start-aspnet-core-grpc-app-on-macos).
This URL can be checked with the developer, or in the logs when the gRPC client connects.
The proxying directive grpc_pass grpcs://name_container; should be used.
The http2 protocol should be used.
So the correct nginx configuration file in my case looks like this:
server {
    listen 443 ssl http2;
    server_name demo.myorg.com;

    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH;
    add_header Strict-Transport-Security 'max-age=604800';

    underscores_in_headers on;
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass https://name_container;

        # Configuration for WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache off;
        # WebSockets were implemented after http/1.0
        proxy_http_version 1.1;

        # Configuration for ServerSentEvents
        proxy_buffering off;

        # Configuration for LongPolling, or if your KeepAliveInterval is longer than 60 seconds
        proxy_read_timeout 100s;

        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-URL-SCHEME https;
    }

    location /App.Name.Api.Contract.ApiService/UpdateOpcDaTags {
        grpc_pass grpcs://name_container;
    }
}
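One caveat about the location / block above: $connection_upgrade is not a built-in nginx variable. It is conventionally defined with a map in the http context, for example:

```nginx
# Maps the client's Upgrade header to the Connection header sent upstream:
# "upgrade" when a WebSocket upgrade is requested, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

Without such a map, nginx fails to start with an "unknown variable" error on that proxy_set_header line.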
One nginx server is exposed to the internet and redirects the traffic.
A second nginx server is only available internally and listens on port 6880 (BookStack, a wiki, is hosted via Docker).
On the local network everything is unencrypted, while to the outside only HTTPS via port 443 is available.
The application (BookStack) works fine on the local network (HTTP).
When accessing the application from the outside via HTTPS, the page is displayed, but all links are http instead of https. (For example, http://.../logo.png appears in the login page's source code, where https://.../logo.png would be correct.)
Where and how do I switch to https?
First server, sites-enabled/bookstack (already contains the redirect to https):
server {
    listen 80;
    server_name bookstack.example.org;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name bookstack.example.org;

    ssl_certificate /etc/letsencrypt/live/<...>/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/<...>/privkey.pem;

    location / {
        try_files index index.html $uri $uri/index.html $uri.html @bookstack;
    }

    location @bookstack {
        proxy_redirect off;
        proxy_set_header X-FORWARDED_PROTO https;
        proxy_set_header Host bookstack.example.org;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://<internal ip>:6880;
    }
}
Second server sites-enabled/bookstack:
server {
    listen 80;
    listen [::]:80;
    server_name bookstack.example.org;
    index index.html index.htm index.nginx-debian.html index.php;

    location / {
        proxy_pass http://<local docker container ip>:6880;
        proxy_redirect off;
    }
}
The Docker container also deploys its own nginx config, but I didn't touch that one yet.
I solved the problem. The nginx settings above should work; the issue was that the Docker container was still using the wrong APP_URL. Running the artisan update script (bookstackapp.com/docs/admin/commands) did not suffice, but reinstalling the Docker container did it.
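For anyone hitting the same symptom: BookStack builds its links from the APP_URL environment variable, so it must be set to the external HTTPS URL rather than the internal HTTP address. A compose-file sketch (the image placeholder is illustrative; keep whatever image your container already uses):

```yaml
services:
  bookstack:
    image: <your bookstack image>   # illustrative placeholder
    environment:
      # Must be the external HTTPS URL, not the internal http://...:6880 one.
      - APP_URL=https://bookstack.example.org
```

After changing APP_URL, the container has to be recreated for the value to take effect, which matches the observation above that reinstalling the container fixed it.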
I followed this guide (https://michalklempa.com/2019/04/nifi-registry-nginx-proxy-tls-basic-auth/) to set up nginx basic auth, but instead of a proxy for nifi-registry I set it up for NiFi. Auth is working and the page is accessible, but somehow the processor-configure window does not open. The issue is due to nginx, since direct access to NiFi through the exposed HTTP ports works; it just doesn't work behind the nginx proxy.
Below is the config I am using:
server {
    listen 9988 ssl;
    root /usr/share/nginx/html;
    index index.html;
    server_name _;

    ssl_certificate /etc/nginx/server_cert.pem;
    ssl_certificate_key /etc/nginx/server_key.pem;
    ssl_client_certificate /etc/nginx/client_cert.pem;
    ssl_verify_client optional;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # enables server-side protection from BEAST attacks
    ssl_prefer_server_ciphers on;

    # Disable insecure cipher suites, e.g. MD5, DES, RC4, PSK
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4:@STRENGTH";
    # -!MEDIUM: exclude encryption cipher suites using 128 bit encryption.
    # -!LOW: exclude encryption cipher suites using 64 or 56 bit encryption algorithms.
    # -!EXPORT: exclude export encryption algorithms, including 40 and 56 bit algorithms.
    # -!aNULL: exclude the cipher suites offering no authentication (currently the anonymous DH and anonymous ECDH algorithms).
    #   These cipher suites are vulnerable to a "man in the middle" attack and so their use is normally discouraged.
    # -!eNULL: exclude the "NULL" ciphers, i.e. those offering no encryption.
    #   Because these offer no encryption at all and are a security risk, they are disabled unless explicitly included.
    # -@STRENGTH: sort the current cipher list in order of encryption algorithm key length.

    location / {
        if ($ssl_client_verify = SUCCESS) {
            set $auth_basic off;
        }
        if ($ssl_client_verify != SUCCESS) {
            set $auth_basic "Restricted Content. Please provide Nifi Authentication:";
        }
        auth_basic $auth_basic;
        auth_basic_user_file /etc/nginx/nginx.htpasswd;

        proxy_pass http://172.18.0.77:8181/; # actual container ip/port of nifi
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-User $remote_user;
        proxy_set_header Authorization "";
        proxy_set_header X-ProxyScheme $scheme;
        proxy_set_header X-ProxyHost $hostname;
        proxy_set_header X-ProxyPort $server_port;
        proxy_set_header X-ProxyContextPath "/";
    }
}
I tried passing the container IP of NiFi, the host, and nginx for X-ProxyHost, but instead of immediately giving "Unable to communicate with NiFi", it spins for a while and eventually gives the same error. What needs to be modified here? Any help would be appreciated.
nginx noob here!
After much fiddling with multiple IP/hostname combinations, I was able to fix it with the config changes below.
I had to add NiFi env properties to the docker-compose file:
environment:
  - NIFI_REMOTE_INPUT_HOST=<private ip of nifi container e.g. 172.18.0.77>
  - NIFI_WEB_PROXY_CONTEXT_PATH=/
  - NIFI_WEB_HTTP_HOST=<private ip of nifi container>
  - NIFI_WEB_HTTP_PORT=8181
And for the nginx config, I modified proxy_set_header X-ProxyHost to "localhost" (since the nginx server needed proxyHost defined as the loopback server):
proxy_set_header X-ProxyHost localhost;
Hope this helps someone in the same boat scratching their head :)
I set up a Let's Encrypt certificate directly on an AWS EC2 Ubuntu instance running nginx, plus a Docker server using port 9998. The domain is set up on Route 53. HTTP is redirected to HTTPS.
So https://example.com is working fine, but https://example.com:9998 gets ERR_SSL_PROTOCOL_ERROR. If I use the IP address, like http://10.10.10.10:9998, it works, and I checked that the server on port 9998 is okay.
The snapshot of the server on docker is:
CONTAINER ID   IMAGE        COMMAND                  CREATED        STATUS        PORTS                    NAMES
999111000      img-server   "/bin/sh -c 'java -j…"   21 hours ago   Up 21 hours   0.0.0.0:9998->9998/tcp   hellowworld
It seems something is missing between Nginx and the server using port 9998. How can I fix it?
Where have you configured the SSL certificate? Only in nginx?
The reason you cannot visit https://example.com:9998 over the SSL protocol is that that port provides an HTTP service rather than HTTPS.
I suggest not publishing port 9998 of hellowworld and instead proxying all the traffic with nginx (if nginx is also started with Docker and on the same network).
Configure HTTPS in nginx and let the origin server provide HTTP.
This is a sample configuration: https://github.com/newnius/scripts/blob/master/nginx/config/conf.d/https.conf
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    access_log logs/example.com/access.log main;
    error_log /var/log/nginx/debug.log debug;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://apache:80;
        proxy_set_header Host $host;
        proxy_set_header CLIENT-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ /.well-known {
        allow all;
        proxy_pass http://apache:80;
    }

    # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
    location ~ /\. {
        deny all;
    }
}
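If nginx itself runs in Docker, one way to follow this advice is to put both containers on the same user-defined network and simply not publish port 9998 at all; a compose sketch, where the service names, image tags, and network name are illustrative:

```yaml
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    networks: [web]
  hellowworld:
    image: img-server   # the Java server from the question
    # No "ports:" mapping - the app is only reachable inside the network,
    # e.g. via proxy_pass http://hellowworld:9998;
    networks: [web]
networks:
  web:
```

With this layout, only nginx terminates TLS; the origin server never needs to speak HTTPS, which removes the ERR_SSL_PROTOCOL_ERROR on the app port entirely.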
I have set up a Docker registry that is accessible only via localhost, and an nginx proxy that is accessible to the outside world and redirects requests to the registry if they are authorized.
I use client certificates for this purpose, following this tutorial.
I finally got nginx running and authorizing requests from a browser correctly (if I import the .pfx certificate into the browser, the server responds; if not, 403 is returned, which is the desired behavior).
I now try to communicate with my registry (through nginx) from a Docker client, using login, pull, and push:
docker login 10.11.2.7:5043
docker pull 10.11.2.7:5043/my-ubuntu
docker push 10.11.2.7:5043/my-ubuntu
The problem I face is that, no matter what I have tried, I always get a 400 response with a small HTML page saying:
No required SSL certificate was sent
Both the registry and the Docker client that does the pull/push run under Ubuntu.
The same problem happens whether I request the pull/push from the same machine that the registry/nginx run on, or from another Docker client.
I tried:
following this tutorial, without success;
inserting the certificates under /usr/local/share/ca-certificates/test and running
sudo update-ca-certificates
creating the file /etc/docker/daemon.json and inserting the following content:
{
    "insecure-registries" : [ "10.11.2.7:5043" ]
}
I restarted the Docker engine in all cases.
Still, the same error appears.
Here is my /etc/docker/certs.d/ content:
10.11.2.7:5043/
    user.cert
    user.key
Additional info: the registry listens on port 5000; nginx listens on 5043 (HTTPS). After every pull/push attempt, the following logs appear:
Aug 17 15:55:15 alkis-Latitude-E6530 dockerd[18438]: time="2018-08-17T15:55:15.323847790+02:00" level=info msg="Attempting next endpoint for pull after error: error parsing HTTP 400 response body: invalid character '<' looking for beginning of value: \"<html>\\r\\n<head><title>400 No required SSL certificate was sent</title></head>\\r\\n<body bgcolor=\\\"white\\\">\\r\\n<center><h1>400 Bad Request</h1></center>\\r\\n<center>No required SSL certificate was sent</center>\\r\\n<hr><center>nginx/1.15.2</center>\\r\\n</body>\\r\\n</html>\\r\\n\""
Aug 17 15:55:15 alkis-Latitude-E6530 dockerd[18438]: time="2018-08-17T15:55:15.323944310+02:00" level=error msg="Handler for POST /v1.38/images/create returned error: error parsing HTTP 400 response body: invalid character '<' looking for beginning of value: \"<html>\\r\\n<head><title>400 No required SSL certificate was sent</title></head>\\r\\n<body bgcolor=\\\"white\\\">\\r\\n<center><h1>400 Bad Request</h1></center>\\r\\n<center>No required SSL certificate was sent</center>\\r\\n<hr><center>nginx/1.15.2</center>\\r\\n</body>\\r\\n</html>\\r\\n\""
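As a side note, the "No required SSL certificate was sent" check that nginx performs can be reproduced locally: the client certificate must verify against the CA configured in ssl_client_certificate. A throwaway sketch; every file below is generated demo material, not the real user.cert/user.key:

```shell
# Create a demo CA and a client certificate signed by it.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=demo-client" 2>/dev/null
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.cert -days 1 2>/dev/null
# Essentially what nginx does against the ssl_client_certificate CA:
openssl verify -CAfile ca.crt client.cert   # -> client.cert: OK
```

If the real client certificate fails this verify step against the CA nginx is configured with, the 400 above is exactly what nginx returns.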
My nginx.conf:
events {
    worker_connections 1024;
}

http {
    map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
        '' 'registry/2.0';
    }

    upstream docker-registry {
        server registry:5000;
    }

    server {
        listen 443 ssl; # nginx is itself a docker container. The actual port for the outside world is 5043.
        server_name my_registry.com;

        # SSL
        ssl_certificate /etc/nginx/conf.d/domain_new.crt;
        ssl_certificate_key /etc/nginx/conf.d/domain_new.key;
        #ssl_dhparam /etc/nginx/conf.d/dhparam.pem;

        # client certificate
        ssl_client_certificate /etc/nginx/conf.d/user.crt;
        # make verification optional, so we can display a 403 message to those
        # who fail authentication
        ssl_verify_client on;
        #ssl_crl /etc/nginx/conf.d/ca.crl;

        # Recommendations from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;

        client_max_body_size 0; # 0 means no limit
        chunked_transfer_encoding on;

        location /v2/ {
            # Do not allow connections from docker 1.5 and earlier
            # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
            if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
                return 404;
            }

            if ($ssl_client_verify != SUCCESS) {
                return 403;
            }

            # To add basic authentication to v2 use the auth_basic setting.
            add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;

            proxy_pass http://docker-registry;
            proxy_set_header Host $http_host; # required for docker client's sake
            proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 900;
        }

        # The following "location" section is only for testing purposes. It serves a couple of small html files.
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}