Missing openssl certificate in localhost environment - docker

I am using nginx, Docker, and Laravel, and I want to add a certificate in my local environment.
docker-compose service for nginx:
findraNginx:
  container_name: findra_nginx
  hostname: nginx
  image: nginx:1.21
  restart: unless-stopped
  depends_on:
    - nadal
    - findraMinio
  ports:
    - "5001:80"
    - "5002:443"
  volumes:
    - ./docker/virtualhost.conf:/etc/nginx/conf.d/default.conf
    - ./nadal/docker/ssl/cert.pem:/etc/nginx/ssl/cert.pem
    - ./nadal/docker/ssl/cert-key.pem:/etc/nginx/ssl/cert-key.pem
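Before debugging the certificate itself, it may help to confirm that the mounts and the config actually land inside the container (a quick check, using the container_name above):
# the two .pem files should be listed here
docker exec findra_nginx ls -l /etc/nginx/ssl/
# nginx -t also verifies that the ssl_certificate paths in the config resolve
docker exec findra_nginx nginx -t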
virtualhost.conf:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
index index.php index.html index.htm;
root /var/www/public;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/cert-key.pem;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ ^/.+\.php(/|$) {
fastcgi_pass nadal:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
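Note that container port 443 is published as 5002 on the host, so the HTTPS URL during local testing is https://localhost:5002, and whatever name you browse to must be covered by the certificate's SAN. A quick, hedged check that TLS is actually being served there:
# -k skips verification, so this only proves nginx answers TLS on 5002
curl -vk https://localhost:5002/
# to also verify the chain, point curl at the CA generated in the steps below
# (assumed path; this only succeeds if the SAN covers "localhost")
curl --cacert ca.pem https://localhost:5002/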
I followed these steps to generate the .pem files for nginx:
Generate RSA
openssl genrsa -aes256 -out ca-key.pem 4096
Generate a public CA Cert
openssl req -new -x509 -sha256 -days 365 -key ca-key.pem -out ca.pem
Create an RSA key
openssl genrsa -out cert-key.pem 4096
Create a Certificate Signing Request (CSR)
openssl req -new -sha256 -subj "/CN=myname" -key cert-key.pem -out cert.csr
Create an extfile with all the alternative names
echo "subjectAltName=DNS:your-dns.record,IP:mylocalip" >> extfile.cnf
Create cert
openssl x509 -req -sha256 -days 365 -in cert.csr -CA ca.pem -CAkey ca-key.pem -out cert.pem -extfile extfile.cnf -CAcreateserial
and when I try to open the app in Chrome, the browser still reports a certificate error.
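For what it's worth, certificates produced this way are signed by your own private CA, so Chrome will only accept them once ca.pem is imported into the operating system or browser trust store; until then a warning is expected even if the files are correct. A hedged way to sanity-check the generated files before mounting them (file names as in the steps above):
# the SAN must contain the exact host name or IP you type into the browser
openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'
# the leaf certificate should verify against the private CA
openssl verify -CAfile ca.pem cert.pem
# the key and certificate must belong together (the two digests must match)
openssl x509 -in cert.pem -noout -modulus | openssl md5
openssl rsa -in cert-key.pem -noout -modulus | openssl md5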

Related

Docker nginx self-signed certificate - can't connect to https

I have been following a few tutorials to try and get my SSL cert working with my Docker environment. I have decided to go down the route of a wildcard certificate from Let's Encrypt. I have generated the certificate with the following command:
certbot certonly --manual \
--preferred-challenges=dns \
--email {email_address} \
--server https://acme-v02.api.letsencrypt.org/directory \
--agree-tos \
--manual-public-ip-logging-ok \
-d "*.servee.co.uk"
NOTE: I am using multi-tenancy, so I need the wildcard on my domain.
This works: the certificate has been generated on my server. I am now trying to use it with my Docker nginx container.
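As an aside, certbot normally stores the issued files under /etc/letsencrypt/live/<cert-name>/ on the host, and for a wildcard request the leading "*." is typically stripped from the directory name, so the files to copy or mount would usually be found like this (assumed default layout; the lineage name may differ on your system):
sudo ls -l /etc/letsencrypt/live/servee.co.uk/
# expected entries: fullchain.pem, privkey.pem, cert.pem, chain.pem (symlinks)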
My docker-compose.yml file looks like this:
...
services:
  nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    ports:
      - 433:433
      - 80:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - app
      - mysql
    networks:
      - laravel
...
This is my Dockerfile
FROM nginx:stable-alpine
COPY ./fullchain.pem /etc/nginx/fullchain.pem
COPY ./privkey.pem /etc/nginx/privkey.pem
ADD nginx.conf /etc/nginx/nginx.conf
ADD default.conf /etc/nginx/conf.d/default.conf
RUN mkdir -p /var/www/html
RUN addgroup -g 1000 laravel && adduser -G laravel -g laravel -s /bin/sh -D laravel
RUN chown laravel:laravel /var/www/html
I am copying the pem files into the nginx container so I can use them.
Here is my default.conf file which should be loading my certificate
server {
listen 80;
index index.php index.html;
server_name servee.co.uk;
root /var/www/html/public;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass app:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
}
server {
listen 443 ssl;
server_name servee.co.uk;
ssl_certificate /etc/nginx/fullchain.pem;
ssl_certificate_key /etc/nginx/privkey.pem;
index index.php index.html;
location / {
proxy_pass http://servee.co.uk; #for demo purposes
}
}
The nginx container builds successfully, and when I bash into it I can find the pem files. The issue is that when I go to https://servee.co.uk I just get an Unable to connect error. If I go to http://servee.co.uk it works fine.
I'm not sure what I have missed. This has really put me off Docker because it's such a pain to get SSL working, so hopefully it's an easy fix.
You need to update your docker-compose.yml file to use port 443 instead of 433 to match your nginx configuration. Try the docker-compose.yml file below.
...
services:
  nginx:
    build:
      context: docker/nginx
      dockerfile: Dockerfile
    ports:
      - 443:443
      - 80:80
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - app
      - mysql
    networks:
      - laravel
...
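After rebuilding, a quick way to confirm that the container now exposes 443 and actually serves the certificate (domain name as in the question):
docker-compose up -d --build nginx
# 443 should now appear in the PORTS column
docker-compose ps nginx
# -k skips chain verification; look for the TLS handshake and certificate details
curl -vk https://servee.co.uk/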

How to enable SSL for custom port in docker (without redirection)

I'm having trouble setting up SSL with a custom port in Docker (without redirection).
These are my files:
docker-compose.yml
version: "3.6"
services:
apache-php:
container_name: apache-php
image: php:7.4.8-apache
restart: unless-stopped
volumes:
- ./web:/var/www/html
- ./ssl:/etc/apache2/ssl
- ./sites-enabled:/etc/apache2/sites-enabled
- ./ports.conf:/etc/apache2/ports.conf
working_dir: /var/www/html
ports:
- 80:80
- 443:443
- 2805:2805
sites-enabled/example.com.conf
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule headers_module modules/mod_headers.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
Listen 2805
Listen 443
<VirtualHost *:80>
ServerName example.com
DocumentRoot /var/www/html
ErrorLog /var/www/logs/error.log
CustomLog /var/www/logs/access.log combined
</VirtualHost>
<VirtualHost *:2805>
ServerName example.com
DocumentRoot /var/www/manage
ErrorLog /var/www/logs/manage-error.log
CustomLog /var/www/logs/manage-access.log combined
</VirtualHost>
<VirtualHost *:443>
ServerName example.com
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/example.com.crt
SSLCertificateKeyFile /etc/apache2/ssl/example.com.key
SSLProxyEngine On
<Location />
ProxyPass http://example.com:2805/
ProxyPassReverse http://example.com:2805/
</Location>
</VirtualHost>
I generated the certificate files using the commands from lynxman's answer at https://serverfault.com/a/224127
openssl genrsa 2048 > ssl/example.com.key
chmod 400 ssl/example.com.key
openssl req -new -x509 -nodes -sha256 -days 365 -key ssl/example.com.key -out ssl/example.com.crt
Then I run the docker-compose up command and everything works fine.
I can access http://example.com:2805 but can't access the domain over SSL at https://example.com:2805.
And the browser shows this message:
This site can't provide a secure connection
example.com sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
Any help is much appreciated as I am really struggling here.
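One hedged observation about the config above: the *:2805 VirtualHost has no SSLEngine or SSLCertificate* directives, so Apache answers plain HTTP on that port, which is exactly what ERR_SSL_PROTOCOL_ERROR usually means on the browser side. A quick way to see what port 2805 actually speaks:
# attempt a TLS handshake against port 2805 (run from the host)
openssl s_client -connect example.com:2805 -servername example.com </dev/null
# a "wrong version number" or similar handshake error here means the port is
# serving plain HTTP; compare with the same command against port 443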

Certbot command in docker-compose issues SSL certificate with invalid CA

The problem
I'm trying to use certbot to auto-generate a TLS certificate for Nginx in my multi-container Docker configuration. Everything works as expected except that the Certificate Authority (CA) is invalid.
When I visit my site, I see that Fake LE Intermediate X1, an invalid authority, issued the certificate.
My setup
Here is the docker-compose.yml file where I call certbot to generate the certificate:
version: '2'
services:
  apollo:
    restart: always
    networks:
      - app-network
    build: .
    ports:
      - '1337:1337'
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --noninteractive --keep-until-expiring --webroot --webroot-path=/var/www/html --email myemail@example.com --agree-tos --no-eff-email -d mydomain.com
  webserver:
    image: nginx:latest
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - apollo
    networks:
      - app-network
volumes:
  postgres: ~
  certbot-etc:
  certbot-var:
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/user/project_name/dhparam/
      o: bind
  web-root:
networks:
  app-network:
I don't think that Nginx is the issue because the HTTP -> HTTPS redirect works, and the browser receives a certificate. But just in case it's relevant: here's the nginx.conf where I refer to the certificate and configure an HTTP -> HTTPS redirect.
events {}
http {
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mydomain.com;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
location / {
try_files $uri @apollo;
}
location @apollo {
proxy_pass http://apollo:1337;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
}
What I've tried
Initially, I called certonly with the --staging argument in the certbot container definition in docker-compose.yml. This could definitely cause the invalid CA problem. However, I have since tried revoking the certificate and re-running the command multiple times, but no luck.
I have tried removing the --keep-until-expiring flag in the certbot container definition of docker-compose.yml. This causes certbot to generate a new certificate, but it did not resolve the CA issue.
Visiting crt.sh, I can see that certbot did issue valid certificates for my domain.
So the problem seems to lie not in the generation of these certificates, but in the way my docker-compose/certbot configuration is referring to them.
You can try to add the --force-renewal flag:
command: >-
  certonly
  --webroot
  --webroot-path=/var/www/html
  --email myemail@example.com
  --agree-tos
  --no-eff-email
  --force-renewal
  -d mydomain.com
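After the certbot container has re-run with --force-renewal (and without --staging), it may be worth confirming that the live lineage and the certificate nginx actually serves now have a real Let's Encrypt issuer instead of Fake LE Intermediate X1, for example:
# list the lineages certbot knows about in the shared volume
docker-compose run --rm certbot certificates
# check the issuer of the certificate the webserver presents
echo | openssl s_client -connect mydomain.com:443 -servername mydomain.com 2>/dev/null | openssl x509 -noout -issuer -dates
# nginx only picks up renewed files after a reload
docker exec webserver nginx -s reload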

Self-signed certificate not working on docker nginx container

I can't get an SSL certificate to work on my local setup. Everything starts OK, but the browsers report "connection not secure". I've added the cert to the macOS keychain and selected "always trust". I have dnsmasq in place so all .test domains resolve to my local setup, but I've also tried adding the domain to /etc/hosts. Nothing works.
This is my nginx Dockerfile:
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/default.conf
RUN apt-get update && \
apt-get install -y openssl && \
mkdir /etc/nginx/ssl && \
openssl dhparam -out "/etc/nginx/ssl/dh.pem" 2048 && \
openssl req -x509 -newkey rsa:2048 -keyout "/etc/nginx/ssl/key.pem" -out "/etc/nginx/ssl/cert.pem" -days 365 -nodes \
-subj "/C=HR/ST=Zagreb/L=Zagreb/O=XOO/OU=IT Department/CN=domain.test"
This is the main part of default.conf
server {
listen 80 default_server;
root /var/www/html;
index index.html index.php;
listen 443 ssl;
server_name default_server;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
ssl_dhparam /etc/nginx/ssl/dh.pem;
charset utf-8;
# only this domain
add_header Strict-Transport-Security "max-age=31536000";
# apply also on subdomains
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
# protocols
ssl_protocols TLSv1.2 TLSv1.3;
# ciphers
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# SSL stapling
ssl_stapling on;
ssl_stapling_verify on;
And finally, this is part of my docker-compose.yml:
version: '1'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 80:80
      - 443:443
    links:
      - php
    volumes_from:
      - app
    networks:
      - default
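A hedged note on the Dockerfile above: the certificate is generated with only a CN, and current browsers require a subjectAltName, so "always trust" in the keychain is not enough on its own; the cert is also re-generated on every image build, so the copy trusted in the keychain can stop matching what the container serves. A sketch of the same openssl req call with a SAN added (assumes OpenSSL 1.1.1+ for -addext, and the domain.test name used above):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/nginx/ssl/key.pem -out /etc/nginx/ssl/cert.pem \
  -subj "/C=HR/ST=Zagreb/L=Zagreb/O=XOO/OU=IT Department/CN=domain.test" \
  -addext "subjectAltName=DNS:domain.test,DNS:*.domain.test"
# compare the fingerprint the container serves with the one trusted in the keychain
echo | openssl s_client -connect domain.test:443 -servername domain.test 2>/dev/null | openssl x509 -noout -fingerprint -sha256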

Reverse proxy on windows using docker - nginx is not forwarding https to IIS

I have a Win10 box where I run Docker and two Windows containers: one runs Nginx and acts as a reverse proxy to the other, which runs IIS.
It works fine for http, but the forwarding from nginx to IIS fails for https.
The individual containers accept https on their own, so I know the certificates are installed correctly. I use self-signed certificates.
I'm thinking that there might be a setting in nginx.conf that I am not aware of that is causing it.
I can do:
+---------------------------+--------------------------+------+
| https://localhost         | points to nginx          | OK   |
+---------------------------+--------------------------+------+
| https://localhost:5003    | points to iis            | OK   |
+---------------------------+--------------------------+------+
| https://localhost/mysite  | points to iis via nginx  | FAIL |
+---------------------------+--------------------------+------+
And the error: the browser shows a 504 Gateway Time-out.
There are similar questions, but they refer to http only.
There is a tutorial on DigitalOcean that describes how to set up nginx with https, which I have largely followed, but it still doesn't work.
Nginx - access.log:
"GET /mysite HTTP/1.1" 504 585 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
Nginx - error.log:
*5 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET /mysite HTTP/1.1", upstream: "https://172.18.0.2:5003/", host: "localhost"
IIS Logs:
C:\inetpub\logs is empty
Question
How can I make nginx forward https to the IIS container?
Setup
Setting up docker network:
docker network create -d nat --subnet=172.18.0.0/16 nginx-proxy-network
Build commands:
cd nginx-proxy
docker build -t nginx-proxy .
cd ..\iis
docker build -t iis .
Starting nginx container:
docker run -d -p 80:80 -p 443:443 --network nginx-proxy-network --ip 172.18.0.3 nginx-proxy
Starting iis container:
docker run -d -p 5002:80 -p 5003:443 --network nginx-proxy-network --ip 172.18.0.2 iis
Nginx
Generate certificates for nginx:
C:\openssl\openssl.exe genrsa -des3 -out localhost.key 2048
C:\openssl\openssl.exe req -new -key localhost.key -out localhost.csr -config C:\openssl\openssl.conf
C:\openssl\openssl.exe x509 -req -days 365 -in localhost.csr -signkey localhost.key -out localhost.crt
It asks for a password that I then store in a txt file.
Nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost ;
location /mysite {
proxy_pass http://172.18.0.2/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
root html;
index index.html index.htm;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
# HTTPS server
server {
listen *:443 ssl;
server_name localhost ;
ssl on;
ssl_password_file C:\cert\pwdcert.txt;
ssl_certificate C:\cert\localhost.crt;
ssl_certificate_key C:\cert\localhost.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
location /mysite {
proxy_pass https://172.18.0.2:5003/;
# proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
root html;
index index.html index.htm;
}
}
}
Nginx Dockerfile:
FROM microsoft/windowsservercore
COPY nginx/ /nginx
RUN mkdir "C:\\cert"
COPY *.crt /cert
COPY *.key /cert
COPY pwdcert.txt /cert
WORKDIR /nginx
CMD ["nginx"]
IIS
IIS Dockerfile:
FROM microsoft/aspnet
COPY iisscripts.ps1 /
RUN powershell -noexit "C:\iisscripts.ps1"
COPY mysite/ /inetpub/wwwroot/
iisscripts.ps1:
$cert = New-SelfSignedCertificate -DnsName "localhost" -CertStoreLocation cert:\LocalMachine\My
New-WebBinding -Name "Default Web Site" -IP "*" -Port 443 -Protocol https
new-item -path IIS:\SslBindings\0.0.0.0!443 -Value $cert
Are you able to curl the IIS URL from the container running nginx? Exec into the nginx container, then try:
curl https://[my domain or IP address]
Just as I thought. Since the certificate you have installed in your IIS container is not trusted by the container running NGINX, it will not proxy the connection. You either need to tell NGINX not to verify the upstream certificate by adding the following to your NGINX configuration:
proxy_ssl_verify off;
Or use a trusted certificate.
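For completeness, the error log above shows an upstream timeout (10060) rather than a TLS failure, so it may also be worth checking plain reachability from the nginx container. Note that -p 5003:443 publishes 5003 on the host only, while on the container network IIS listens on 443. A hedged check (replace <nginx_container_id> with the actual container ID; IP as in the setup above):
docker exec <nginx_container_id> powershell -Command "Test-NetConnection 172.18.0.2 -Port 5003; Test-NetConnection 172.18.0.2 -Port 443"
# if 443 succeeds but 5003 fails, the proxy_pass target https://172.18.0.2:5003/
# is the likely culprit rather than the certificate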
