Docker nginx error: openssl: command not found - docker

I am using nginx as a proxy to forward requests to other components (servers).
Each component, including nginx, is implemented as a docker container, i.e. I have docker containers for 'nginx-proxy', 'dashboard-server', 'backend-server' (REST API), and 'landing-server' (landing page). The latter 3 components are all NodeJS Express servers and work properly. docker-compose build completes with no errors, and when I start the containers with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d the NodeJS containers work fine, but the nginx container gives me this error (from docker-compose logs nginx-proxy):
Attaching to docker_nginx-proxy_1
nginx-proxy_1 | /start.sh: line 5: openssl: command not found
nginx-proxy_1 | Creating dhparams…\c
nginx-proxy_1 | ok
nginx-proxy_1 | Starting nginx…
nginx-proxy_1 | 2017/08/23 23:27:20 [emerg] 6#6:
BIO_new_file("/etc/letsencrypt/live/admin.domain.com/fullchain.pem")
failed (SSL: error:02001002:system library:fopen:No such file or directory:
fopen('/etc/letsencrypt/live/admin.domain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx-proxy_1 | nginx: [emerg]
BIO_new_file("/etc/letsencrypt/live/admin.domain.com/fullchain.pem") failed (SSL: error:02001002:system library:fopen:
No such file or directory:fopen('/etc/letsencrypt/live/admin.domain.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I am using Let's Encrypt for the SSL certificates; however, the command certbot certonly --webroot -w /var/www/letsencrypt -d admin.domain.com -d api.domain.com -d www.domain.com -d domain.com results in a Connection Refused error because the nginx server does not start, so it cannot serve the challenge requests.
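The docker-compose files themselves are not shown; purely for context, a minimal sketch (an assumption, not the actual compose file) of how the nginx-proxy service could share the certbot webroot and certificate directories with the host would be:
version: "3"
services:
  nginx-proxy:
    build: ./nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # assumed host paths: certbot on the host writes here, nginx only reads
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /var/www/letsencrypt:/var/www/letsencrypt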
My nginx Dockerfile (nginx-proxy/Dockerfile):
FROM nginx:1.12
COPY start.sh /start.sh
RUN chmod u+x /start.sh
COPY conf.d /etc/nginx/conf.d
COPY sites-enabled /etc/nginx/sites-enabled
ENTRYPOINT ["/start.sh"]
My start.sh file (nginx-proxy/start.sh):
#!/bin/bash
if [ ! -f /etc/nginx/ssl/dhparam.pem ]; then
echo "Creating dhparams…\c"
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
echo "ok"
fi
echo "Starting nginx…"
nginx -g 'daemon off;'
My default.conf file (nginx-proxy/conf.d/default.conf):
include /etc/nginx/sites-enabled/*.conf;
My api.conf file (the others are similar) (nginx-proxy/sites-enabled/api.conf):
server {
listen 80;
server_name api.domain.com;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/letsencrypt;
}
location = /.well-known/acme-challenge/ {
return 404;
}
return 301 https://$host$request_uri;
}
server {
listen 443;
server_name api.domain.com;
ssl on;
ssl_certificate /etc/letsencrypt/live/api.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.domain.com/privkey.pem;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:1m;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
client_max_body_size 0;
chunked_transfer_encoding on;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/letsencrypt;
}
location = /.well-known/acme-challenge/ {
return 404;
}
location / {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://backend-server:3000;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
}
}
Any ideas?

I found the solution.
In my nginx Dockerfile, I had to use
FROM nginx:1.12-alpine
RUN apk update \
&& apk add openssl
...
Then the openssl command worked properly.
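If you would rather stay on the Debian-based nginx:1.12 image, installing the openssl package with apt should work as well (a sketch, assuming the standard Debian package name):
FROM nginx:1.12
# assumption: the Debian-based image only lacks the openssl CLI, not the library
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssl \
    && rm -rf /var/lib/apt/lists/*
...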

First, you can try editing your start.sh file to call openssl by its full path, /usr/bin/openssl. Does /usr/bin/openssl exist in the container?
Second, your nginx server will not start until the files /etc/letsencrypt/live/api.domain.com/fullchain.pem and /etc/letsencrypt/live/api.domain.com/privkey.pem exist.
So delete or comment out every server block that handles port 443 and keep only the server blocks that handle port 80. Your api.conf then becomes:
server {
listen 80;
server_name api.domain.com;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/letsencrypt;
}
location = /.well-known/acme-challenge/ {
return 404;
}
return 301 https://$host$request_uri;
}
Then start your nginx server and retry installing the Let's Encrypt certificate.
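Put together, the bootstrap sequence might look roughly like this (assuming the certificates and the webroot are bind-mounted into the container, as sketched earlier):
# 1. bring the stack up with only the port-80 server blocks enabled
docker-compose build nginx-proxy
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# 2. answer the ACME challenge over plain HTTP
certbot certonly --webroot -w /var/www/letsencrypt -d admin.domain.com -d api.domain.com -d www.domain.com -d domain.com
# 3. restore the 443 server blocks, rebuild the proxy image, and restart it
docker-compose build nginx-proxy
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d nginx-proxy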

Related

Mailu with docker on a Raspberry Pi running a "native" nginx web server

I configured Nextcloud on a Raspberry Pi 4 running Ubuntu Server 18.04.4 64-bit, following Carsten Rieger's guide, so nginx is now installed and running on the Pi. Then, using the Mailu configuration, I installed a mail server with Docker Compose. I changed the standard configuration because ports 80 and 443 conflict between the "native" nginx and the "docker container" nginx, so in the container I use 8080 and 8443.
How must I configure the native nginx so that when I visit mail.mydomain.com it redirects to ports 8080 and 8443?
How can I obtain HTTPS certificates for mail.mydomain.com with Let's Encrypt?
If I understand your problem correctly, you would like to use the nginx on the host as a proxy server to redirect traffic to the docker container.
Extend your nginx.conf on the host:
http {
...
# redirect http to https from your domain
server {
listen 80;
server_name localhost <your domain> <secondary domain>;
return 301 https://<your domain>$request_uri;
}
# simple reverse-proxy
server {
listen 443;
server_name localhost <your domain> <secondary domain>;
ssl on;
# if you use let's encrypt (certbot) /etc/letsencrypt/live/<your domain>/fullchain.pem
ssl_certificate <path to certificate>;
# if you use let's encrypt (certbot) /etc/letsencrypt/live/<your domain>/privkey.pem
ssl_certificate_key <path to key>;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don’t use SSLv3 ref: POODLE
client_max_body_size 200M;
# pass requests for dynamic content to rails/turbogears/zope, et al
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
...
}
To your second question: yes, you can use Let's Encrypt to obtain the certificate, either standalone or with the nginx plugin.
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx
Obtain your certificate without the nginx plugin (you need to stop nginx first because certbot uses port 80):
$ sudo certbot certonly --standalone -d <your domain> -d <secondary domain>
Obtain your certificate with the nginx plugin
$ sudo certbot --nginx -d <your domain> -d <secondary domain>
In any case, you need to reload nginx after a certificate is retrieved or renewed:
$ sudo service nginx reload
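Renewal can be automated as well; one common pattern (the --deploy-hook flag exists in recent certbot releases) is a cron entry that reloads nginx after each successful renewal:
# e.g. in root's crontab (crontab -e)
0 3 * * * certbot renew --quiet --deploy-hook "service nginx reload"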

Reverse proxy on windows using docker - nginx is not forwarding https to IIS

I have a Win10 box where I run Docker and two Windows containers: one runs Nginx and acts as a reverse proxy to the other container, which runs IIS.
It works fine for http but the redirection from nginx to IIS fails for https.
The individual containers accept https on their own, so I know the certificates are installed correctly. I use self-signed certificates.
I'm thinking that there might be a setting in nginx.conf that I am not aware of that is causing it.
Here is what works and what fails:
+---------------------------+--------------------------+------+
| https://localhost         | points to nginx          | OK   |
+---------------------------+--------------------------+------+
| https://localhost:5003    | points to iis            | OK   |
+---------------------------+--------------------------+------+
| https://localhost/mysite  | points to iis via nginx  | FAIL |
+---------------------------+--------------------------+------+
The error is shown in the nginx logs below.
There are similar questions, but they refer to http only.
There is a tutorial on DigitalOcean that describes how to set up nginx with https, which I have largely followed, but it still doesn't work.
Nginx - access.log:
"GET /mysite HTTP/1.1" 504 585 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
Nginx - error.log:
*5 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET /mysite HTTP/1.1", upstream: "https://172.18.0.2:5003/", host: "localhost"
IIS Logs:
C:\inetpub\logs is empty
Question
How can I make nginx forward https to the IIS container?
Setup
Setting up docker network:
docker network create -d nat --subnet=172.18.0.0/16 nginx-proxy-network
Build commands:
cd nginx-proxy
docker build -t nginx-proxy .
cd ..\iis
docker build -t iis .
Starting nginx container:
docker run -d -p 80:80 -p 443:443 --network nginx-proxy-network --ip 172.18.0.3 nginx-proxy
Starting iis container:
docker run -d -p 5002:80 -p 5003:443 --network nginx-proxy-network --ip 172.18.0.2 iis
Nginx
Generate certificates for nginx:
C:\openssl\openssl.exe genrsa -des3 -out localhost.key 2048
C:\openssl\openssl.exe req -new -key localhost.key -out localhost.csr -config C:\openssl\openssl.conf
C:\openssl\openssl.exe x509 -req -days 365 -in localhost.csr -signkey localhost.key -out localhost.crt
It asks for a password that I then store in a txt file.
Nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost ;
location /mysite {
proxy_pass http://172.18.0.2/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
root html;
index index.html index.htm;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
# HTTPS server
server {
listen *:443 ssl;
server_name localhost ;
ssl on;
ssl_password_file C:\cert\pwdcert.txt;
ssl_certificate C:\cert\localhost.crt;
ssl_certificate_key C:\cert\localhost.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
location /mysite {
proxy_pass https://172.18.0.2:5003/;
# proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
root html;
index index.html index.htm;
}
}
}
Nginx - Docker file:
FROM microsoft/windowsservercore
COPY nginx/ /nginx
RUN mkdir "C:\\cert"
COPY *.crt /cert
COPY *.key /cert
COPY pwdcert.txt /cert
WORKDIR /nginx
CMD ["nginx"]
IIS
IIS Docker file:
FROM microsoft/aspnet
COPY iisscripts.ps1 /
RUN powershell -noexit "C:\iisscripts.ps1"
COPY mysite/ /inetpub/wwwroot/
iisscripts.ps1:
$cert = New-SelfSignedCertificate -DnsName "localhost" -CertStoreLocation cert:\LocalMachine\My
New-WebBinding -Name "Default Web Site" -IP "*" -Port 443 -Protocol https
new-item -path IIS:\SslBindings\0.0.0.0!443 -Value $cert
Are you able to curl the IIS URL from the container running nginx?
Exec into the nginx container, then try:
curl https://[my domain or IP address]
Just as I thought. As the certificate you have installed in your IIS container is not trusted by the container running NGINX, it will not proxy the connection. You either need to tell NGINX not to verify the upstream SSL certificate by adding the following to your NGINX configuration:
proxy_ssl_verify off;
Or use a trusted certificate.
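For what it's worth, a quick way to test this from the nginx side (the container id is a placeholder, and this assumes PowerShell is available inside the Windows container) is:
docker exec -it <nginx container id> powershell
# check basic reachability of the IIS container on the TLS port
Test-NetConnection 172.18.0.2 -Port 5003
# then try an actual HTTPS request
Invoke-WebRequest https://172.18.0.2:5003/ -UseBasicParsing
A timeout here would point at container networking rather than certificates, while a trust error would support the certificate explanation.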

Running NGinX with SSL on Docker container

I am trying to run Nginx in a docker container and access it over HTTPS. The container starts and works fine on port 80 but fails on 443. Please help!
Here are the steps I followed:
Default.conf
server {
listen 443;
server_name localhost;
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#ssl_ciphers HIGH:!aNULL:!MD5;
#ssl_prefer_server_ciphers on;
ssl_prefer_server_ciphers off;
#ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
#ssl_ecdh_curve auto;
#ssl_session_cache shared:SSL:10m;
ssl_session_cache none;
ssl_session_tickets off;
#ssl_stapling on;
#ssl_stapling_verify on;
#ssl_stapling_verify off;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
keepalive_timeout 1200;
client_body_timeout 1200;
client_header_timeout 1200;
proxy_read_timeout 300;
client_max_body_size 200M;
client_body_buffer_size 200M;
proxy_request_buffering off;
proxy_max_temp_file_size 0;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
ssi on;
}
My Dockerfile:
FROM nginx
ENV LANG C.UTF-8
RUN apt-get update; apt-get install -y \
openssl
COPY default.conf /etc/nginx/conf.d/
COPY index.html /usr/share/nginx/html/
CMD ["echo", "Moved conf files"]
ADD cert.sh /cert.sh
RUN chmod a+x /cert.sh
CMD ["echo", "Added cert.sh"]
RUN ./cert.sh;
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]
I am building the docker image with docker build -t nginx . and running the container with docker run --name mynginx4 -P -d nginx.
I can see /etc/nginx/ssl/nginx.crt and /etc/nginx/ssl/nginx.key are present in the container. But when I curl I get this error
BANL151bc2453:docker-nginx-ssl-secure pgupta14$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2dc5dd90ef0 nginx "nginx -g 'daemon of…" 48 seconds ago Up 35 seconds 0.0.0.0:32817->80/tcp, 0.0.0.0:32816->443/tcp mynginx4
curl https://localhost:32816
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:32816
Output of curl -vvv
curl https://localhost:32816 -vvv
* Rebuilt URL to: https://localhost:32816/
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 32816 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:32816
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:32816
In the output of curl -vvv, I see it is using /etc/ssl/cert.pem. Shouldn't it be using my certificates which are present at /etc/nginx/ssl/?

service nginx restart fails

I checked the config syntax by running nginx -t and got these results:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
But when I run service nginx restart, it fails.
I have a config file named a.com in the sites-enabled folder, here's the content:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name a.com;
# root /usr/share/nginx/html;
# index index.html index.htm;
root /home/a/public;
client_max_body_size 10G;
location / {
proxy_pass http://localhost:3000;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
}
}
I'm on Ubuntu 14.10 and want to deploy a Rails server.
I killed the nginx process manually, then started nginx again, and that solved the problem.
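If it helps, the manual kill-and-restart might look roughly like this (PIDs will differ):
sudo lsof -i :80          # find the process still holding port 80
sudo kill <pid>           # or: sudo nginx -s stop
sudo service nginx start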
I had this issue and using sudo solved it:
sudo service nginx restart
It might help to enable logs to check the errors:
https://www.nginx.com/resources/admin-guide/logging-and-monitoring/

Nginx + Passenger to serve rails apps in different sub URIs

I'm running a Rails app on a Debian server (IP 192.168.1.193) with Passenger standalone:
$ cd /home/hector/webapps/first
$ passenger start -a 127.0.0.1 -p 3000
And I want to serve this app through Nginx as a reverse proxy under a different sub-URI:
http://192.168.1.193/first
My nginx.conf server:
...
server {
listen 80;
server_name 127.0.0.1;
root /home/hector/webapps/first/public;
passenger_base_uri /first/;
location /first/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
}
}
...
Then I run the Nginx server
$ /opt/nginx/sbin/nginx
With one Rails app running, this configuration seems to work fine.
But when I try to add my second app
$ cd /home/hector/webapps/second
$ passenger start -a 127.0.0.1 -p 3001
with this nginx.conf file:
...
server {
listen 80;
server_name 127.0.0.1;
root /home/hector/webapps/first/public;
passenger_base_uri /first/;
location /first/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
}
}
server {
listen 80;
server_name 127.0.0.1;
root /home/hector/webapps/second/public;
passenger_base_uri /second/;
location /second/ {
proxy_pass http://127.0.0.1:3001;
proxy_set_header Host $host;
}
}
...
and I reload the Nginx server configuration
$ /opt/nginx/sbin/nginx -s reload
nginx: [warn] conflicting server name "127.0.0.1" on 0.0.0.0:80, ignored
I get a warning and I cannot access the second app from
http://192.168.1.193/second/
The server returns 404 for the second app and the first app is still running.
I think you just have to put both locations into the same server:
server {
listen 80;
server_name 127.0.0.1;
location /first/ {
root /home/hector/webapps/first/public;
passenger_base_uri /first/;
proxy_pass http://127.0.0.1:3000/;
proxy_set_header Host $host;
}
location /second/ {
root /home/hector/webapps/second/public;
passenger_base_uri /second/;
proxy_pass http://127.0.0.1:3001/;
proxy_set_header Host $host;
}
}
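Depending on the Rails version, the apps themselves may also need to know they are mounted under a sub-URI so that generated links and asset paths carry the right prefix. One commonly used approach (an assumption on my part, not something the original answer requires) is to set RAILS_RELATIVE_URL_ROOT when starting each app:
cd /home/hector/webapps/second
RAILS_RELATIVE_URL_ROOT=/second passenger start -a 127.0.0.1 -p 3001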
