Certbot can't find config file in Docker container

Two months ago, I set up a website with SSL thanks to Let's Encrypt. The details of how I did it are now quite blurry.
The site is hosted inside several docker containers (nginx, PHP, MySQL). There is a certbot container which should perform the renewal of the SSL certificate. This container is launched once a week and aborts immediately.
I have checked the logs and found the error below. My research was unsuccessful, and I have no idea what file certbot is complaining about.
usage:
certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate.
certbot: error: Unable to open config file: certonly -n --webroot --webroot-path=/var/lib/challenge --email contact#**********.com --agree-tos --no-eff-email -d www.**********.com --key-type ecdsa. Error: No such file or directory
Do you have any idea what the problem is?
Thanks in advance.
EDIT:
The contents of /etc/letsencrypt are
accounts archive cli.ini csr keys live renewal renewal-hooks
Inside cli.ini, I have:
key-type = ecdsa
elliptic-curve = secp384r1
rsa-key-size = 4096
email = contact#attom.eu
authenticator = webroot
webroot-path = /var/lib/challenge
agree-tos = true
The docker-compose.yml contains:
version: '3'
services:
  nginx:
    build: ./nginx
    container_name: nginx
    restart: unless-stopped
    depends_on:
      - php
    networks:
      - app-network
    volumes:
      - {{ mounted_dir_app }}/public:/var/www/html:ro
      - certbotdata:/etc/letsencrypt:ro
      - challenge:/home/challenge
    ports:
      - "80:80"
      - "443:443"
    env_file:
      - .env
      - ".env.$ENV"
    healthcheck:
      test: curl -IsLk $$SITE_URL | head -n 1 | grep -q -e ^HTTP -e 200
      start_period: 30s
      interval: 10s
      timeout: 3s
      retries: 5
  php:
    #skip
  mysql:
    #skip
  certbot:
    depends_on:
      nginx:
        condition: service_healthy
    build:
      context: ./certbot
      args:
        - "ENV=$ENV"
    container_name: certbot
    env_file:
      - .env
      - ".env.$ENV"
    volumes:
      - certbotdata:/etc/letsencrypt
      - challenge:/var/lib/challenge
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  certbotdata:
  challenge:
Edit:
The certbot Dockerfile is:
ARG ENV
FROM certbot/certbot as cert-prod
CMD certonly -n --webroot --webroot-path=/var/lib/challenge --email contact#**********.com --agree-tos --no-eff-email -d www.**********.com --key-type ecdsa
FROM alpine as cert-dev
RUN apk update && apk add openssl
CMD mkdir -p /etc/letsencrypt/live/www.**********.com && \
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -subj "/C=**/ST=**********/L=**********" \
      -keyout /etc/letsencrypt/live/www.**********.com/privkey.pem -out /etc/letsencrypt/live/www.**********.com/fullchain.pem
FROM cert-${ENV}
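For context on the error above: a shell-form CMD like the one in the cert-prod stage is wrapped by Docker in /bin/sh -c, and the certbot/certbot image's entrypoint is certbot, so the container effectively runs certbot /bin/sh -c 'certonly -n --webroot ...'. Certbot then parses -c <string> as --config <string>, which is exactly the "Unable to open config file: certonly -n --webroot ..." message in the log. A sketch of the same stage using an exec-form CMD (flags copied from above) would be:
# Sketch only: exec form hands each flag directly to the image's "certbot"
# entrypoint instead of wrapping the whole string in /bin/sh -c.
FROM certbot/certbot as cert-prod
CMD ["certonly", "-n", "--webroot", "--webroot-path=/var/lib/challenge", "--email", "contact#**********.com", "--agree-tos", "--no-eff-email", "-d", "www.**********.com", "--key-type", "ecdsa"]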

Related

docker-compose nginx certbot not found certificate

I want to create a docker-compose setup with several services, and in it I want to generate a certificate for my domain name with Certbot/LetsEncrypt. But when I run it, I always get an error saying it can't find a certificate, even though I'm doing everything that should be needed to generate it.
version: '3.8'
services:
  proxy-nginx:
    build: .
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./certbot/www:/var/www/certbot/
      - ./certbot/conf/:/etc/nginx/ssl/
    depends_on:
      - nestjs
    restart: unless-stopped
  certbot:
    image: certbot/certbot:latest
    depends_on:
      - proxy-nginx
    volumes:
      - ./certbot/www/:/var/www/certbot/
      - ./certbot/conf/:/etc/letsencrypt/
    command: certonly --webroot --webroot-path=/var/www/certbot --email emain#gmail.com --agree-tos --no-eff-email --staging 0 --force-renewal -d www.mydomaine -d mydomaine
  nestjs:
    build:
      context: ./BACKEND
      dockerfile: Dockerfile
    ports:
      - 3000:3000
Here is the result:
cannot load certificate "/etc/nginx/ssl/live/mydomaine/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/ssl/live/mydomaine/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
In my nginx.conf file, I have one proxy server and one server for the front end and back end of my application. The problem is that nginx can't find the certificate, and I don't know why.
Normally the certificate should be generated in the folder /etc/nginx/ssl/live/mydomaine.be/, but that's not the case.
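Since nginx reads the certificate from the same host folder that certbot writes to (./certbot/conf in the compose file above), one way to narrow this down is to check whether certbot actually produced anything there, and what it logged; for example:
# Does the live/ directory exist on the host, and what did the certbot service say?
ls -R ./certbot/conf/live/
docker-compose logs certbot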
This is how I use it and it works.
docker-compose.yml
services:
  node:
    container_name: node-server
    build: .
    environment: # process.env.
      NODE_ENV: production
    networks:
      - app-network
  nginx:
    image: 'nginx:1.23.3'
    container_name: nginx-server
    depends_on:
      - node
    volumes:
      - './volumes/nginx/production/nginx.conf:/etc/nginx/nginx.conf:ro'
      - './volumes/nginx/production/conf.d/:/etc/nginx/conf.d'
      - './volumes/certbot/letsencrypt:/etc/letsencrypt'
      - './volumes/certbot/www:/var/www/certbot'
    networks:
      - app-network
    ports:
      - '80:80' # To access nginx from outside
      - '443:443' # To access nginx from outside
networks:
  app-network:
    driver: bridge
Docker run certbot
docker run --rm --name temp_certbot \
  -v /home/app-folder/volumes/certbot/letsencrypt:/etc/letsencrypt \
  -v /home/app-folder/volumes/certbot/www:/tmp/letsencrypt \
  -v /home/app-folder/volumes/certbot/log:/var/log \
  certbot/certbot:v1.8.0 \
  certonly --webroot --agree-tos --renew-by-default \
  --preferred-challenges http-01 --server https://acme-v02.api.letsencrypt.org/directory \
  --text --email info#domain.com \
  -w /tmp/letsencrypt -d domain.com
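Certificates obtained this way are only picked up by nginx after a reload, so renewal is usually the same docker run followed by a reload. A rough sketch, using certbot renew instead of re-running certonly (paths, image tag and container name are taken from the files above; when and how you schedule it is up to you):
# Renew against the same volumes, then reload nginx so it re-reads /etc/letsencrypt
docker run --rm --name temp_certbot \
  -v /home/app-folder/volumes/certbot/letsencrypt:/etc/letsencrypt \
  -v /home/app-folder/volumes/certbot/www:/tmp/letsencrypt \
  -v /home/app-folder/volumes/certbot/log:/var/log \
  certbot/certbot:v1.8.0 \
  renew --webroot -w /tmp/letsencrypt
docker exec nginx-server nginx -s reload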

nest js connect to elastic in docker compose with xpack.security.http.ssl.enabled=true

I'm stuck connecting to Elasticsearch from my NestJS app running in Docker. I'm getting this error message:
ResponseError: security_exception: [security_exception] Reason: missing authentication credentials for REST request [/companies]
This is my docker-compose file:
version: "3.8"
services:
  postgres:
    container_name: benchy-db
    image: postgres:latest
    volumes:
      - ./db_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=benchy
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=0000
    networks:
      - elastic
  server:
    container_name: benchy-api
    build:
      context: ./
    restart: on-failure
    command: bash -c "npm run db:run && npm run rebuild"
    ports:
      - "4000:4000"
    depends_on:
      - postgres
      - kibana
    environment:
      DB_HOST: postgres
      DB_PORT: 5432
    networks:
      - elastic
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: esnode1\n"\
          "    dns:\n"\
          "      - esnode1\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://esnode1:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://esnode1:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "Good to go!";
      '
    networks:
      - elastic
  esnode1:
    depends_on:
      - setup
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esnode1-data:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=esnode1
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/esnode1/esnode1.key
      - xpack.security.http.ssl.certificate=certs/esnode1/esnode1.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/esnode1/esnode1.key
      - xpack.security.transport.ssl.certificate=certs/esnode1/esnode1.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  kibana:
    depends_on:
      - esnode1
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibana-data:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://esnode1:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    networks:
      - elastic
networks:
  elastic:
    name: elastic
    driver: bridge
volumes:
  db_data:
  certs:
  esnode1-data:
    driver: local
  kibana-data:
    driver: local
This is my .env:
#ELASTIC VARIABLES
ELASTIC_PASSWORD=DKS481!~=KS!KDJ
KIBANA_PASSWORD=DKS481!~=KS!KDJ
ELASTIC_USERNAME=elastic
STACK_VERSION=8.2.2
CLUSTER_NAME=docker-cluster
LICENSE=basic
ES_PORT=9200
# ES_PORT=127.0.0.1:9200
KIBANA_PORT=5601
MEM_LIMIT=1073741824
And this is the connection from my NestJS app:
const elasticClient = new Client({
  node: 'https://esnode1:9200',
  auth: {
    username: process.env.ELASTIC_USERNAME,
    password: process.env.ELASTIC_PASSWORD,
  },
  tls: {
    ca: readFileSync('./certs/ca/ca.crt'),
    rejectUnauthorized: false
  }
});
This is the certs folder generated by Elastic, and this is the cert I'm using.
Elastic with Kibana is working correctly and I can log in to Kibana, but from my NestJS app I can't connect. In my environment, Elasticsearch will only be used inside my VM, while Kibana should be accessible from outside. Because of that, I'm wondering whether I need xpack security for Elasticsearch at all; maybe I can secure just Kibana.
Appreciate any help!
Apologies! The issue is fixed. I didn't import the env in the file where I'm initializing elasticClient, so the message was completely clear ("credentials missing"), while I had been thinking there was something wrong with the certs or whatever.
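For anyone hitting the same thing: the fix amounts to loading the .env file before the client is constructed. A minimal sketch, assuming dotenv is used for that (in NestJS, ConfigModule.forRoot() achieves the same; the variable names are the ones from the .env above):
import 'dotenv/config'; // assumption: dotenv is installed; must run before process.env is read below
import { readFileSync } from 'fs';
import { Client } from '@elastic/elasticsearch';

const elasticClient = new Client({
  node: 'https://esnode1:9200',
  auth: {
    username: process.env.ELASTIC_USERNAME ?? 'elastic',
    password: process.env.ELASTIC_PASSWORD ?? '',
  },
  tls: {
    ca: readFileSync('./certs/ca/ca.crt'),
    rejectUnauthorized: false,
  },
});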

LetsEncrypt cert renewal script not working via docker compose

I have a website running SSL set up using Let's Encrypt. I have written/used a script following this guide, but the certs are not renewed automatically. Every 90 days I need to manually run the Let's Encrypt renewal command to get new certs for my website.
This is what my docker-compose looks like for nginx and certbot:
nginx:
  build: nginx-image
  image: km-nginx
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  ports:
    - 80:80
    - 443:443
  depends_on:
    - keycloak
    - km-app
  links:
    - keycloak
    - km-app
  environment:
    - PRODUCTION=true
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
  image: certbot/certbot
  restart: unless-stopped
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
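One way to check whether that loop can actually renew, without waiting 12 hours, is a one-off dry run against the same volumes (the --entrypoint override bypasses the looping entrypoint; --dry-run talks to the staging endpoint and does not touch the real certificates):
docker-compose run --rm --entrypoint certbot certbot renew --webroot -w /var/www/certbot --dry-run
If the dry run succeeds, the nginx service's 6-hour reload loop above should then pick up renewed files on its own.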

Docker compose isn't writing files or directories to host

I am following the DigitalOcean tutorial to install WordPress via Docker:
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-docker-compose
The tutorial says that if the certbot service's state is anything other than Exit 0 there's a problem, which is what I see below, but there are no log files where it says to look. I'm newish to Docker, thanks for helping!
Edit: I'm noticing that none of the volumes from this docker-compose were created on the host.
Name        Command                          State    Ports
-------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 1
db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp
wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp
Docker-compose.yml here
version: '3'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network
  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - app-network
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy#example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
volumes:
  certbot-etc:
  wordpress:
  dbdata:
networks:
  app-network:
    driver: bridge
The volumes being created here are named volumes.
To list named volumes, run:
docker volume ls
Also, per the comment above, you can check the certbot logs with:
docker-compose logs certbot
The volumes and container logs won't show up with plain docker commands unless you use the exact container and volume names (which include the compose project prefix); you can find those with:
docker-compose ps and docker volume ls
Or use the docker-compose variants above.
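If you want to see where those named volumes actually live, they sit under Docker's data root rather than in the project directory; something like the following prints the mount point (the volume name carries the compose project prefix, so substitute yours for <project>):
docker volume inspect --format '{{ .Mountpoint }}' <project>_certbot-etc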

Portainer - how to specify SSL in docker-compose.yml?

I'm trying to deploy an instance of Portainer to a docker swarm. I'm not sure how to set the correct flag to enable SSL.
From the docs:
$ docker run -d -p 443:9000 --name portainer --restart always -v ~/local-certs:/certs -v portainer_data:/data portainer/portainer --ssl --sslcert /certs/portainer.crt --sslkey /certs/portainer.key
https://portainer.readthedocs.io/en/stable/deployment.html
But how do you translate that into a docker compose yml file?
Possibly I'm a bit late to the party, but it looks like you have to use Portainer's flags to enable SSL (as stated in the documentation), and composerize.com lost that part somewhere, so you should add this to your compose file:
command:
  --sslcert /certs/portainer.crt
  --sslkey /certs/portainer.key
or for full compose file:
version: '3'
services:
  portainer:
    image: portainer/portainer
    container_name: portainer
    restart: always
    ports:
      - '443:9000'
    volumes:
      - '~/local-certs:/certs'
      - 'portainer_data:/data'
    command:
      --sslcert /certs/portainer.crt
      --sslkey /certs/portainer.key
According to Portainer documentation:
By default, Portainer’s web interface and API is exposed over HTTP. This is not secured, it’s recommended to enable SSL in a production environment.
To do so, you can use the following flags --ssl, --sslcert and --sslkey:
$ docker run -d -p 443:9000 --name portainer --restart always \
    -v ~/local-certs:/certs -v portainer_data:/data \
    portainer/portainer --ssl --sslcert /certs/portainer.crt --sslkey /certs/portainer.key
You can use the following commands to generate the required files (the first two are alternative ways to generate the key, RSA or ECDSA):
$ openssl genrsa -out portainer.key 2048
$ openssl ecparam -genkey -name secp384r1 -out portainer.key
$ openssl req -new -x509 -sha256 -key portainer.key -out portainer.crt -days 3650
Note that Certbot could be used as well to generate a certificate and a key.
As Rubin suggests, you can use https://composerize.com/ to generate a docker-compose.yml from docker command.
So, your docker-compose file should be something like this:
version: '3'
services:
  portainer:
    image: portainer/portainer
    container_name: portainer
    restart: always
    ports:
      - '443:9000'
    volumes:
      - '~/local-certs:/certs'
      - 'portainer_data:/data'
    command:
      --ssl
      --sslcert /certs/portainer.crt
      --sslkey /certs/portainer.key
volumes:
  portainer_data:
https://composerize.com/ can help to translate your docker command into a docker-compose.yml
The following works for me:
version: '3'
services:
  portainer:
    image: portainer/portainer-ce
    volumes:
      - "/local-certs:/certs"
      - "portainer_data:/data"
    restart: always
    ports:
      - "9000:9000"
    container_name: portainer
    command:
      - --ssl
      - --sslcert
      - /certs/wildcard.crt
      - --sslkey
      - /certs/wildcard.key
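Since the question mentions a swarm: the same compose file (including the command: list) can be deployed as a stack, for example:
docker stack deploy -c docker-compose.yml portainer
Keep in mind that docker stack deploy ignores keys like container_name and restart, but the --ssl flags in command: are passed to the service as-is.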
