I have a docker-compose file that sets up nginx to serve a static React site. When I run docker-compose up, I try to share a folder containing my SSL certificates with the container, but nginx is not able to find them.
version: "3.3"
services:
app:
container_name: frontend
image: colymore/frontend:latest
ports:
- 80:80
- 443:443
restart: always
volumes:
- /etc/letsencrypt/live/colymore.me/:/certs/:ro
labels:
- com.centurylinklabs.watchtower.enable=true
networks:
net:
In the nginx config I have:
ssl_certificate /certs/fullchain.pem;
ssl_certificate_key /certs/privkey.pem;
And the file exists:
colymore@colymore.me$ ls /etc/letsencrypt/live/colymore.me/fullchain.pem
/etc/letsencrypt/live/colymore.me/fullchain.pem
But when I run nginx, it gives an error:
52a5118c5c40_frontend | nginx: [emerg] cannot load certificate "/certs/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/certs/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
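A likely cause (an assumption based on how Let's Encrypt lays out its files, not something the error alone proves): the entries under /etc/letsencrypt/live/ are symlinks into /etc/letsencrypt/archive/, so bind-mounting only the live/colymore.me/ directory leaves dangling symlinks inside the container, and nginx reports the files as missing. Mounting the whole /etc/letsencrypt tree keeps the symlink targets reachable, for example:

volumes:
  # mount the whole tree so the live/ symlinks can still
  # resolve to ../../archive/ inside the container
  - /etc/letsencrypt/:/etc/letsencrypt/:ro

with the nginx directives then pointing at /etc/letsencrypt/live/colymore.me/fullchain.pem and privkey.pem.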
When I start the container, I get the following errors:
nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2022/09/04 23:08:42 [emerg] 1#1: cannot load certificate "/etc/ssl/5master.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/5master.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx | nginx: [emerg] cannot load certificate "/etc/ssl/5master.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/5master.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx exited with code 1
Although in the Dockerfile I copy the certificate files, which are in the build context folder.
nginx conf:
server {
    listen 80;
    listen 443 ssl;
    server_name 5master.com;
    ssl_certificate /etc/ssl/5master.crt;
    ssl_certificate_key /etc/ssl/5master.key;
}
Dockerfile:
FROM python:3.8
WORKDIR /usr/src/app
ADD . /usr/src/app
COPY requirements.txt ./
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["uwsgi", "app.ini"]
COPY nginx.conf /etc/nginx/conf.d
COPY 5master.crt /etc/ssl/5master.crt
COPY 5master.key /etc/ssl/5master.key
docker-compose:
version: "3.8"
services:
api:
build: .
restart: "always"
environment:
FLASK_APP: run.py
volumes:
- .:/usr/src/app
nginx:
build: ./nginx
container_name: nginx
restart: always
volumes:
- /application/static/:/static
depends_on:
- api
ports:
- "80:80"
- "443:443"
What could be the problem?
You are installing the certificates into your Python API image, not into your nginx image. That is, in your docker-compose.yaml you are building two images:
api:
  build: .
nginx:
  build: ./nginx
The Dockerfile in your question appears to be for the Python API image. Since that image isn't used by nginx, it doesn't make any sense to install the certificates there.
You need to modify your nginx/Dockerfile to install the certificates.
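A minimal sketch of that nginx/Dockerfile, assuming the certificate files sit next to it in the ./nginx build context and a stock nginx base image (the destination paths are the ones your nginx config references):

FROM nginx:latest
# install the site config and the certificates into the image
# that actually runs nginx
COPY nginx.conf /etc/nginx/conf.d/
COPY 5master.crt /etc/ssl/5master.crt
COPY 5master.key /etc/ssl/5master.key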
I have attempted to build a Django / Gunicorn / Nginx configuration to run on AWS. The database container is running separately. When performing the docker-compose build step, however, the nginx step is failing. The files are shown below:
The docker-compose.yml (which builds from Dockerfile.prod):
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    container_name: app
    command: gunicorn The6ix.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
    networks:
      - dbnet
    expose:
      - 8000
    environment:
      aws_access_key_id: ${aws_access_key_id}
      aws_secret_access_key: ${aws_secret_access_key}
  nginx:
    build: ./nginx
    ports:
      - 1337:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - web
volumes:
  static_volume:
  media_volume:
networks:
  dbnet:
    external: true
My nginx Dockerfile (in ./nginx folder):
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
My nginx.conf file (in ./nginx folder):
upstream The6ix {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://The6ix;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/staticfiles/;
    }

    location /media/ {
        alias /home/app/web/mediafiles/;
    }
}
The error log from my nginx container is as follows:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/03 17:33:59 [emerg] 1#1: host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
nginx: [emerg] host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
Some posts mention the volume creation step in the .yaml file as the culprit. Is there a better way to sequence this so nginx runs correctly?
I had suspected that the error was the result of my configuration copy. In the end, it was actually a network error. The following modifications corrected the issue:
In a shell:
docker network create nginx_network
Modification to the docker-compose.yaml file:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn The6ix.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    networks:
      - dbnet
      - nginx_network
    ports:
      - "8000:8000"
    environment:
      aws_access_key_id: ${aws_access_key_id}
      aws_secret_access_key: ${aws_secret_access_key}
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - web
    networks:
      - nginx_network
volumes:
  static_volume:
  media_volume:
networks:
  dbnet:
    external: true
  nginx_network:
    external: true
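As a side note (a sketch of an alternative, not part of the original fix): if nginx_network only needs to connect services defined in this one compose file, you can let Compose create and manage it instead of pre-creating it with docker network create, by declaring it without external: true:

networks:
  dbnet:
    external: true    # still created outside this compose file
  nginx_network: {}   # created and owned by docker-compose itself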
I have a docker container that is running NGINX. Within the container I have an SSL cert that is currently being copied into the container. I would like to avoid this approach and instead have the SSL cert passed in, so it is not stored on the container. In the docker-compose file, I have specified the public and private portions of the SSL certs as volumes, and I have removed the commands from the Dockerfile that copy the values onto the image. However, when running docker-compose up I get an error that the certificate cannot be loaded because it does not exist. Any advice on how I can accomplish this would be helpful. Thanks!
Docker Compose
version: "3"
services:
nginx:
container_name: nginx
build: nginx
volumes:
- ./:/var/www
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/files/localhost.crt:/etc/nginx/ssl/nginx.crt
- ./nginx/files/localhost.key:/etc/nginx/ssl/nginx.key
ports:
- 80:80
- 443:443
networks:
- MyNetwork
networks:
MyNetwork:
Dockerfile
FROM nginx:latest
COPY nginx.conf /etc/nginx/conf.d/
Error
[emerg] 1#1: cannot load certificate "/etc/nginx/nginx/files/localhost.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/nginx/files/localhost.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
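The doubled path in the error (/etc/nginx/nginx/files/localhost.crt) suggests the config still references the old relative path nginx/files/localhost.crt, which nginx resolves against its /etc/nginx prefix. A sketch of the directives aligned with the volume mounts above (assuming the rest of the server block is unchanged):

server {
    listen 443 ssl;
    # point at the paths the compose file mounts the certs to
    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
}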
I have the following docker-compose setup using Certbot and Nginx
version: "3"
services:
web:
image: nginx:latest
user: root
restart: always
volumes:
- ./public:/var/www/html
- ./conf.d:/etc/nginx/conf.d
- certbot:/etc/nginx/ssl
- certbot:/var/www/certbot
ports:
- 80:80
- 443:443
certbot:
image: certbot/certbot:latest
command: certonly --webroot --webroot-path=/var/www/certbot --email email#gmail.com --agree-tos --no-eff-email -d www.domain.com -d domain.com
volumes:
- certbot:/etc/letsencrypt
- ./logs/certbot:/var/log/letsencrypt
- certbot:/var/www/certbot
volumes:
certbot:
Everything works as expected: certbot creates the SSL certificates and puts them in the correct place for nginx to read them.
The issue is that nginx does not have the correct permissions to read the files, giving the following error in the logs.
2020/12/04 09:50:03 [error] 27#27: *1 cannot load certificate "/etc/nginx/ssl/live/domain.com/fullchain.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/nginx/ssl/live/domain.com/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: 31.187.58.166, server: 0.0.0.0:443
I am running the nginx image as root (user: root in the docker-compose file), yet nginx still doesn't have permission to read this file. How is this possible?
This can be a false message. It 'can' load the certificate, but maybe it cannot validate it.
nginx: [warn] "ssl_stapling" ignored, issuer certificate not found for certificate
Try this
sudo nginx -t
nginx will tell you what really happened
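If nginx runs in a container, the same check can be run inside it, for example (web is the service name from the compose file above):

docker-compose exec web nginx -t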
I am trying to reverse proxy a request through an nginx container in a swarm to a standalone container which shares the same overlay network.
tldr; I receive the following error:
2018/03/15 19:00:35 [emerg] 1#1: invalid host in upstream "http://nginx" in /etc/nginx/nginx.conf:96
nginx: [emerg] invalid host in upstream "http://nginx" in /etc/nginx/nginx.conf:96
The standalone container contains an app which has another nginx frontend:
version: "3"
services:
nginx:
restart: always
container_name: my.nginx
build: ./nginx
networks:
- default
- my-overlay-network
depends_on:
- another-service
... other services
networks:
my-overlay-network:
external: true
I start this app with docker-compose up -d.
My swarm contains the reverse proxy:
version: "3"
services:
reverseproxy:
build: ./reverseproxy
image: reverse_proxy
networks:
- my-overlay-network
ports:
- "80:80"
- "443:443"
volumes:
- /etc/letsencrypt:/etc/letsencrypt
deploy:
replicas: 10
restart_policy:
condition: on-failure
networks:
my-overlay-network:
external: true
If I start up the nginx swarm without specifying a proxy_pass for the standalone app, I can successfully ping the other host like so:
ping http://nginx/
I can confirm the other host receives this request based on the nginx logs.
However, if I specify the docker standalone app in the reverse proxy:
upstream standalone {
    server http://nginx/;
}
and
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # ... other stuff ...

    location / {
        proxy_pass http://standalone/;
    }
}
I get the following errors:
2018/03/15 19:00:35 [emerg] 1#1: invalid host in upstream "http://nginx" in /etc/nginx/nginx.conf:96
nginx: [emerg] invalid host in upstream "http://nginx" in /etc/nginx/nginx.conf:96
Try adding a resolver before the upstream:

resolver 127.0.0.11;

upstream standalone {
    server nginx;  # host only; a full URL is not valid here
}

127.0.0.11 is the address of Docker's embedded DNS server.
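Note that resolver only affects names nginx looks up at runtime; names in an upstream block are normally resolved once at startup. A common workaround (a sketch, not part of the original answer) is to drop the upstream block and use a variable, which forces a per-request lookup through the resolver:

resolver 127.0.0.11 valid=10s;

server {
    listen 443 ssl http2;

    location / {
        # using a variable defers DNS resolution to request time
        set $standalone nginx;
        proxy_pass http://$standalone;
    }
}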
The issue is that the syntax for upstream is incorrect: it takes a host and port, so it should be:
upstream standalone {
    server nginx;
}
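With that change the existing proxy_pass http://standalone/; works unchanged. The port defaults to 80, so spell it out (server nginx:80;) if the standalone nginx listens on a different port.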