Traefik certificate resolver folder/file privilege - docker

I set up Traefik with Docker Compose. I created a folder acme and handed it to the certificate resolver as in the configuration below. Traefik then creates acme.json inside it.
I see that its owner:group is the same as the owner of the acme folder. Are there any constraints on what the owner/group of the acme folder or of the acme.json file needs to be? This file is used by Traefik and its mode is 600, so I assumed the owner is important, but it seems to work with any owner I set there.
certificatesResolvers:
  certResolver:
    acme:
      email: "admin@setlog.com"
      storage: "/acme/acme.json"
      httpChallenge:
        entryPoint: "web"
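From what I have seen, Traefik only enforces the file mode (it refuses to use an acme.json that is more permissive than 600) and otherwise just needs its own process user to be able to read and write the file; the owner itself is not checked, which would match what you observed. A common pattern is to pre-create the file before the first start (a sketch):

```shell
# Pre-create the ACME storage with the restrictive mode Traefik expects;
# ownership is whatever user runs these commands, which Traefik does not check.
mkdir -p acme
touch acme/acme.json
chmod 600 acme/acme.json
ls -l acme/acme.json
```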

Related

Traefik yml acme email value from environment variables

I use a compose file with two services (a Python app and Traefik); in the Dockerfile I load all the environment variables.
For Traefik I use a YML file to define the services. In that YML file I have a node for certificatesResolvers that looks like this:
certificatesResolvers:
  letsencrypt:
    acme:
      email: "email@domain.com"
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
I want to set the email from an environment variable, so the YML file should look like this:
certificatesResolvers:
  letsencrypt:
    acme:
      email: '{{env "USER_EMAIL"}}'
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
With the YML written this way, I get the following in the logs:
level=info msg="Starting provider *acme.Provider {\"email\":\"{{env \\\"USER_EMAIL\\\"}}\",\"caServer\":\"https://acme-v02.api.letsencrypt.org/directory\",\"storage\":\"/etc/traefik/acme/acme.json\",\"keyType\":\"RSA4096\",\"httpChallenge\":{\"entryPoint\":\"web\"},\"ResolverName\":\"letsencrypt\",\"store\":{},\"ChallengeStore\":{}}"
level=error msg="Unable to obtain ACME certificate for domains \"domain.com\": cannot get ACME client acme: error: 400 :: POST :: https://acme-v02.api.letsencrypt.org/acme/new-acct :: urn:ietf:params:acme:error:invalidEmail :: Error creating new account :: \"{{env \\\"USER_EMAIL\\\"}}\" is not a valid e-mail address, url: " providerName=letsencrypt.acme routerName=web-secure-router#file rule="Host(`domain.com`)"
I tried with:
email: '{{env "USER_EMAIL"}}'
email: '`{{env "USER_EMAIL"}}`'
email: "{{env 'USER_EMAIL'}}"
email: "{{env USER_EMAIL}}"
But none of those worked.
In the same YML file I have a node that looks like this:
http:
  routers:
    web-secure-router:
      rule: 'Host(`{{env "PROJECT_HOSTNAME"}}`)'
      entryPoints:
        - web-secure
      service: fastapi
      tls:
        certResolver: letsencrypt
In that section I get the right value of the PROJECT_HOSTNAME variable, in this case domain.com, as you can see in the logs above.
This may not be the solution, but it is a different way of doing things you can try: instead of using a Traefik YML file, pass the configuration as commands in the docker-compose YML;
https://github.com/nasatome/docker-network-utils/blob/389324b6795d07684dac9bfe7dc5315bcd7eef7c/reverse-proxy/traefik/docker-compose.yml
Another thing to try would be to use:
${USER_EMAIL}
instead of
{{env "USER_EMAIL"}}
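A sketch of that command-based approach (flag names follow Traefik v2's CLI-style static configuration; the traefik:v2.4 tag and the ${USER_EMAIL} substitution from the shell or an .env file are assumptions, since Docker Compose performs that substitution itself):

```yaml
services:
  traefik:
    image: traefik:v2.4
    command:
      # static configuration as CLI flags; compose substitutes ${USER_EMAIL}
      - "--entrypoints.web.address=:80"
      - "--certificatesresolvers.letsencrypt.acme.email=${USER_EMAIL}"
      - "--certificatesresolvers.letsencrypt.acme.storage=/etc/traefik/acme/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
```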
To clarify why you cannot use your own user-defined environment variable for certificatesResolvers: it is part of the static configuration, whereas the http node is part of the dynamic configuration (where you can use your own variables, such as PROJECT_HOSTNAME).
You can still use Traefik's own environment variables to set the email for your certificate resolver, with the variable TRAEFIK_CERTIFICATESRESOLVERS_<NAME>_ACME_EMAIL.
I haven't tested this myself, but I think the following should do the trick:
services:
  traefik:
    environment:
      TRAEFIK_CERTIFICATESRESOLVERS_LETSENCRYPT_ACME_EMAIL: ${USER_EMAIL}
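For completeness: ${USER_EMAIL} here is substituted by Docker Compose itself, from the invoking shell or from an .env file next to the compose file, before Traefik ever sees the configuration. For example (the address is a placeholder):

```ini
# .env — picked up automatically by docker-compose in the project directory
USER_EMAIL=admin@example.com
```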

Traefik: no space left on device

I'm trying to enable the file provider to register dynamic configuration, but I get the error:
Cannot start the provider *file.Provider: error adding file watcher: no space left on device
Traefik uses fsnotify to add new watchers, and fsnotify is limited by the Linux setting /proc/sys/fs/inotify/max_user_watches.
I tried to change the setting inside the Docker container with sudo:
sudo sysctl -w fs.inotify.max_user_watches=12288
but I get an error:
sysctl: error setting key 'fs.inotify.max_user_watches': Read-only file system
Traefik configuration:
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
providers:
  docker: {}
  file:
    directory: '/config'
    watch: true
api:
  dashboard: true
certificatesResolvers:
  le:
    acme:
      email: myemail@mail.com
      storage: acme.json
      httpChallenge:
        entryPoint: web
Traefik version: 2.2.1
When I run Traefik on another machine (or on my Mac), or when I set watch to false in the configuration, it works like a charm, but I need to watch file changes.
Please tell me how I can change this setting with sudo in an Alpine container, or how to solve this issue another way.
Well, I tried to change max_user_watches inside the Docker container. That was the wrong idea: I needed to change max_user_watches on the Linux host where I run the Docker containers.
After running:
sudo sysctl -w fs.inotify.max_user_watches=12288
it worked fine.
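To make that host-side change survive reboots, the usual sysctl pattern is as follows (a sketch; the value 12288 is taken from the answer above and is workload-dependent):

```shell
# Check the current limit on the Docker host (not inside the container)
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running kernel (lost on reboot)
sudo sysctl -w fs.inotify.max_user_watches=12288

# Persist it across reboots
echo "fs.inotify.max_user_watches=12288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```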

Keycloak SSL setup using docker image

I am trying to deploy keycloak using docker image (https://hub.docker.com/r/jboss/keycloak/ version 4.5.0-Final) and facing an issue with setting up SSL.
According to the docs
The Keycloak image allows you to specify both a private key and a certificate for serving HTTPS. In that case you need to provide two files:
tls.crt - a certificate
tls.key - a private key
Those files need to be mounted in the /etc/x509/https directory. The image will automatically convert them into a Java keystore and reconfigure WildFly to use it.
I followed the given steps and mounted a volume with a folder containing the necessary files (tls.crt and tls.key), but I am facing issues with the SSL handshake, getting an
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
error that blocks Keycloak from loading in the browser.
I used letsencrypt to generate PEM files and openssl to create the .crt and .key files. I also tried creating those files with openssl alone to narrow down the issue, and the behavior is the same. Some additional info, in case it matters:
By default, when I simply specify the port binding -p 8443:8443 without mounting the cert volume at /etc/x509/https, the Keycloak server generates a self-signed certificate and I have no issue viewing the app in the browser.
I guess this might be more of a certificate-creation issue than anything specific to Keycloak, but I'm unsure how to get this working.
Any help is appreciated.
I also faced the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error using the jboss/keycloak Docker image and free certificates from letsencrypt, even after considering the advice in the other comments. I now have a working (and quite easy) setup, which might also help you.
1) Generate letsencrypt certificate
At first, I generated my letsencrypt certificate for the domain sub.example.com using certbot. You can find detailed instructions and alternative ways to obtain a certificate at https://certbot.eff.org/ and in the user guide at https://certbot.eff.org/docs/using.html.
$ sudo certbot certonly --standalone
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c' to cancel): sub.example.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for sub.example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/sub.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/sub.example.com/privkey.pem
Your cert will expire on 2020-01-27. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
2) Prepare docker-compose environment
I use docker-compose to run Keycloak via Docker. The config and data files are stored under /srv/docker/keycloak/:
Folder config contains the docker-compose.yml.
Folder data/certs contains the certificates I generated via letsencrypt.
Folder data/keycloak_db is mapped into the database container to make its data persistent.
Put the certificate files in the right path
When I first had issues using the original letsencrypt certificates with Keycloak, I tried the workaround of converting the certificates to another format, as mentioned in the comments on the other answers, which also failed. Eventually I realized that my problem was caused by the permissions set on the mapped certificate files.
So what worked for me was simply to copy and rename the files provided by letsencrypt and mount them into the container.
$ cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt
$ cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key
$ chmod 755 /srv/docker/keycloak/data/certs/
$ chmod 604 /srv/docker/keycloak/data/certs/*
docker-compose.yml
In my case, I needed to use the host network of my docker host. This is not best practice and should not be required for your case. Please find information about configuration parameters in the documentation at hub.docker.com/r/jboss/keycloak/.
version: '3.7'

networks:
  default:
    external:
      name: host

services:
  keycloak:
    container_name: keycloak_app
    image: jboss/keycloak
    depends_on:
      - mariadb
    restart: always
    ports:
      - "8080:8080"
      - "8443:8443"
    volumes:
      - "/srv/docker/keycloak/data/certs/:/etc/x509/https" # map certificates to container
    environment:
      KEYCLOAK_USER: <user>
      KEYCLOAK_PASSWORD: <pw>
      KEYCLOAK_HTTP_PORT: 8080
      KEYCLOAK_HTTPS_PORT: 8443
      KEYCLOAK_HOSTNAME: sub.example.com
      DB_VENDOR: mariadb
      DB_ADDR: localhost
      DB_USER: keycloak
      DB_PASSWORD: <pw>
    network_mode: host

  mariadb:
    container_name: keycloak_db
    image: mariadb
    volumes:
      - "/srv/docker/keycloak/data/keycloak_db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: <pw>
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: <pw>
    network_mode: host
Final directory setup
This is how my final file and folder setup looks:
$ cd /srv/docker/keycloak/
$ tree
.
├── config
│   └── docker-compose.yml
└── data
    ├── certs
    │   ├── tls.crt
    │   └── tls.key
    └── keycloak_db
Start container
Finally, I was able to start my software using docker-compose.
$ cd /srv/docker/keycloak/config/
$ sudo docker-compose up -d
We can double-check the mounted certificates within the container.
## open an interactive shell in the keycloak container
$ sudo docker exec -it keycloak_app /bin/bash
## open directory of certificates
$ cd /etc/x509/https/
$ ll
-rw----r-- 1 root root 3586 Oct 30 14:21 tls.crt
-rw----r-- 1 root root 1708 Oct 30 14:20 tls.key
Considering the setup from the docker-compose.yml, Keycloak is now available at https://sub.example.com:8443.
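One caveat with the copy step above: letsencrypt certificates expire after roughly 90 days, so the two cp commands have to be re-run after each renewal. A cron sketch (paths and container name as in this answer; untested):

```shell
# /etc/cron.d/certbot-renew (sketch): attempt renewal twice a day; certbot
# only renews certificates that are close to expiry, then the hook re-copies
# the files and restarts the container.
0 3,15 * * * root certbot renew --quiet --deploy-hook "cp /etc/letsencrypt/live/sub.example.com/fullchain.pem /srv/docker/keycloak/data/certs/tls.crt && cp /etc/letsencrypt/live/sub.example.com/privkey.pem /srv/docker/keycloak/data/certs/tls.key && docker restart keycloak_app"
```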
After some research, the following method worked (for self-signed certs; I still have to figure out how to do it with a letsencrypt CA for prod):
generate a self-signed cert using the keytool
keytool -genkey -alias localhost -keyalg RSA -keystore keycloak.jks -validity 10950
convert .jks to .p12
keytool -importkeystore -srckeystore keycloak.jks -destkeystore keycloak.p12 -deststoretype PKCS12
generate .crt from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nokeys -out tls.crt
generate .key from .p12 keystore
openssl pkcs12 -in keycloak.p12 -nocerts -nodes -out tls.key
Then use the tls.crt and tls.key for volume mount /etc/x509/https
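If keytool is not available, a roughly equivalent pair of files can be produced with openssl alone, without the intermediate keystore (a sketch, not the answer's exact method; subject and validity are placeholders):

```shell
# Create a self-signed key/cert pair directly with openssl, producing the
# same two files Keycloak expects under /etc/x509/https.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout tls.key -out tls.crt

# Sanity-check what was produced
openssl x509 -in tls.crt -noout -subject
```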
Also, in the application being secured, specify the following properties in the keycloak.json file:
"truststore" : "path/to/keycloak.jks",
"truststore-password" : "<jks-pwd>",
For anyone who is trying to run Keycloak with a passphrase protected private key file:
Keycloak runs the script /opt/jboss/tools/x509.sh to generate the keystore based on the provided files in /etc/x509/https as described in https://hub.docker.com/r/jboss/keycloak - Setting up TLS(SSL).
Unfortunately, this script does not take a passphrase into account. But with a small modification at Docker build time you can fix it yourself:
Within your Dockerfile add:
RUN sed -i -e 's/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\/-out "${KEYSTORES_STORAGE}\/${PKCS12_KEYSTORE_FILE}" \\\n -passin pass:"${SERVER_KEYSTORE_PASSWORD}" \\/' /opt/jboss/tools/x509.sh
This command modifies the script, appending the parameter that passes in the passphrase:
-passin pass:"${SERVER_KEYSTORE_PASSWORD}"
The value comes from an environment variable you are free to set: SERVER_KEYSTORE_PASSWORD.
Tested with Keycloak 9.0.0

Docker Registry behind TLS enabled reverse proxy (Traefik) - Remote Error: Bad Certificate

So I am doing everything Dockerized here. Traefik is running in a container, as is my docker Registry instance. I am able to push/pull just fine from the registry if I hit it at mydomain.com:5000/myimage.
The problem comes when I try to hit it through 443 using mydomain.com/myimage. The setup I have here is Traefik reverse proxy listening on 443 at mydomain.com, and forwarding that request internally to :5000 of my Registry instance.
When I go to push/pull from the Traefik URL, it hangs and counts down, waiting to retry in a loop. When I look at the logs of the Registry, I can see the instance IS in fact in communication with the reverse proxy (Traefik); however, I get this error in the log over and over (on each push retry from the client side):
2018/05/31 21:10:43 http: TLS handshake error from proxy_container_ip:port: remote error: tls: bad certificate
Docker Registry is really tight and strict when it comes to TLS. I'm using all self-signed certs here, as I'm still in development. Any idea what is causing this error? I assume that either the Traefik proxy detects that the certificate offered by the Registry is not to be trusted (self-signed) and therefore does not complete the push request, or the other way around: the Registry, when sending the response back through the Traefik proxy, detects that the proxy is not to be trusted.
I can provide additional information if needed. Current setup is that both Traefik and Registry have their own set of .crt and .key files. Both (of course) TLS enabled.
Thanks.
Here is a working solution with a self-signed certificate that you can try out on https://labs.play-with-docker.com
Server
Add a new instance node1 in your Docker playground. We configure it as our server. Create a directory for the certificates:
mkdir /root/certs
Create wildcard certificate *.domain.local:
$ openssl req -newkey rsa:2048 -nodes -keyout /root/certs/domain.local.key -x509 -days 365 -out /root/certs/domain.local.crt
Generating a 2048 bit RSA private key
...........+++
...........+++
writing new private key to '/root/certs/domain.local.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:
State or Province Name (full name) []:
Locality Name (eg, city) []:
Organization Name (eg, company) []:
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:*.domain.local
Email Address []:
Create two files docker-compose.yml and traefik.toml in directory /root. You can download them using:
wget https://gist.github.com/maiermic/cc9c9aab939f7ea791cff3d974725e4a/raw/8c5d787998d33c752f2ab369a9393905780d551c/docker-compose.yml
wget https://gist.github.com/maiermic/cc9c9aab939f7ea791cff3d974725e4a/raw/8c5d787998d33c752f2ab369a9393905780d551c/traefik.toml
docker-compose.yml
version: '3'
services:
  frontproxy:
    image: traefik
    command: --api --docker --docker.swarmmode
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/ssl:ro
      - ./traefik.toml:/etc/traefik/traefik.toml:ro
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
    deploy:
      labels:
        - traefik.port=8080
        - traefik.frontend.rule=Host:traefik.domain.local
  docker-registry:
    image: registry:2
    deploy:
      labels:
        - traefik.port=5000 # default port exposed by the registry
        - traefik.frontend.rule=Host:registry.domain.local
        - traefik.frontend.auth.basic=user:$$apr1$$9Cv/OMGj$$ZomWQzuQbL.3TRCS81A1g/ # user:password, see https://docs.traefik.io/configuration/backends/docker/#on-containers
traefik.toml
defaultEntryPoints = ["http", "https"]

# Redirect HTTP to HTTPS and use certificate, see https://docs.traefik.io/configuration/entrypoints/
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/etc/ssl/domain.local.crt"
        keyFile = "/etc/ssl/domain.local.key"

# Docker Swarm Mode Provider, see https://docs.traefik.io/configuration/backends/docker/#docker-swarm-mode
[docker]
  endpoint = "tcp://127.0.0.1:2375"
  domain = "docker.localhost"
  watch = true
  swarmMode = true
Initialize Docker Swarm (replace <ip-of-node1> with the IP address of node1, for example 192.168.0.13):
docker swarm init --advertise-addr <ip-of-node1>
Deploy traefik and Docker registry:
docker stack deploy myregistry -c ~/docker-compose.yml
Client
Since we don't have a DNS server, we change /etc/hosts (replace <ip-of-node1> with the IP address of our server node1, for example 192.168.0.13):
echo "<ip-of-node1> registry.domain.local traefik.domain.local" >> /etc/hosts
You should now be able to request the health status from traefik
$ curl -ksS https://traefik.domain.local/health | jq .
{
"pid": 1,
"uptime": "1m37.501499911s",
"uptime_sec": 97.501499911,
"time": "2018-07-19 07:30:35.137546789 +0000 UTC m=+97.600568916",
"unixtime": 1531985435,
"status_code_count": {},
"total_status_code_count": {},
"count": 0,
"total_count": 0,
"total_response_time": "0s",
"total_response_time_sec": 0,
"average_response_time": "0s",
"average_response_time_sec": 0
}
and you should be able to request all images (none) from our registry
$ curl -ksS -u user:password https://registry.domain.local/v2/_catalog | jq .
{
"repositories": []
}
Let's configure docker on our client. Create the directory for the registry certificates:
mkdir -p /etc/docker/certs.d/registry.domain.local/
Get the certificate from our server:
scp root@registry.domain.local:/root/certs/domain.local.crt /etc/docker/certs.d/registry.domain.local/ca.crt # Are you sure you want to continue connecting (yes/no)? yes
Now you should be able to login to our registry and add an image:
docker login -u user -p password https://registry.domain.local
docker pull hello-world:latest
docker tag hello-world:latest registry.domain.local/hello-world:latest
docker push registry.domain.local/hello-world:latest
If you request all images from our registry after that, you should see
$ curl -ksS -u user:password https://registry.domain.local/v2/_catalog | jq .
{
"repositories": [
"hello-world"
]
}

Docker compose: ensure volume mounted before running CMD

I've built a container that has nginx and some HTTPS config inside it.
The certificates are generated automatically by another container using https://letsencrypt.org/. The nginx container also ships some default self-signed certificates to use until the certbot container has generated the good ones. This is how my config looks:
version: '2'
services:
  # Nginx, the master of puppets, listens on port 80
  nginx:
    image: mycompany/nginx:v1.2.8
    depends_on: [api, admin, front, postgres, redis, certbot]
    ports: ["80:80", "443:443"]
    volumes:
      - acme_challenge:/var/www/acme_challenge
      - ssl_certs:/var/certs
    environment:
      ACME_CHALLENGE_PATH: /var/www/acme_challenge
      # Where the container will put the default certs
      DEFAULT_SSL_CERTS_PATH: /var/default_certs
      # Use temporary self-signed keys by default
      SSL_CERTIFICATE: /var/default_certs/selfsigned.crt
      SSL_CERTIFICATE_KEY: /var/default_certs/selfsigned.key
      # Once certbot generates certs I change the config to this and recreate the container
      # SSL_CERTIFICATE: /var/certs/mycompany.com/fullchain.pem
      # SSL_CERTIFICATE_KEY: /var/certs/mycompany.com/privkey.pem

  # Certbot renews SSL certificates periodically
  certbot:
    image: mycompany/certbot:v1.0.9
    restart: on-failure:3
    environment:
      - WEBROOT_PATH=/var/www/acme_challenge
      - SIGNING_EMAIL=info@yavende.com
      - DOMAINS=mycompany.com, api.mycompany.com
    volumes:
      - acme_challenge:/var/www/acme_challenge
      - ssl_certs:/etc/letsencrypt/live

volumes:
  acme_challenge:
  ssl_certs:
This is more or less how it works:
The nginx container is configured to use some self-signed certificates.
docker-compose up -d launches certbot and nginx in parallel.
Meanwhile, certbot runs a process to generate the certificates. Assume this succeeds.
After a while, I attach to the nginx container, run ls /var/certs, and the certbot-generated certs are there. Nice!
I modify the nginx container's configuration to use those new certificates (via the SSL_CERTIFICATE* env vars) and recreate the container.
Nginx fails to run because the files are not there, even though I know the files are there (checked with several methods).
I suspect that the image's command (CMD) is run regardless of whether the volumes have been attached to the container yet.
Is this true? Should I write some bash to wait until these files are present?
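If you do end up writing a wait, a minimal entrypoint sketch could look like this (wait_for_certs and the nginx invocation are illustrative, not your image's actual entrypoint; the paths come from the SSL_CERTIFICATE* env vars your compose file already defines):

```shell
#!/bin/sh
# Block until both certificate files exist and are non-empty, then start nginx.
wait_for_certs() {
  until [ -s "$1" ] && [ -s "$2" ]; do
    echo "waiting for $1 and $2 ..."
    sleep 2
  done
}

# Only wait and start nginx when the env vars are actually configured.
if [ -n "$SSL_CERTIFICATE" ] && [ -n "$SSL_CERTIFICATE_KEY" ]; then
  wait_for_certs "$SSL_CERTIFICATE" "$SSL_CERTIFICATE_KEY"
  exec nginx -g 'daemon off;'
fi
```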
Disclaimer: this is a plug for my own docker image.
I have made a very nice docker image based on nginx for this exact purpose, with features such as automatic letsencrypt management, http basic auth, virtual hosts etc. managed through passing a simple json config through an environment variable. I use it in production, so it is stable.
You can find it here, and it's at tcjn/json-webrouter on docker hub.
All you need to do is pass something like this in to the CONFIG environment variable:
{"servers": [
  {"ServerName": "example.com", "Target": "192.168.2.52:32407", "Https": true},
  {"ServerName": "*.example.com", "Target": "192.168.2.52:4444", "Https": true},
  {"ServerName": "secret.example.com", "Target": "192.168.2.52:34505", "Https": true, "Auth": {"Realm": "Login for secret stuff", "Set": "secret_users"}}
], "auth": {
  "secret_users": {"bob": "HASH GENERATED BY openssl passwd"}
}}
And yes, it is just as simple as "Https": true. You can find all the possible options in the github repo.
