Traefik and multiple domains, some with Let's Encrypt and some from an SSL provider - docker

I'm configuring multiple domains in docker-compose and would like to use Traefik to route traffic. It has a very nice ability to create and manage Let's Encrypt certificates, but some of my domains have enterprise SSL certificates from an SSL provider (Comodo).
My question: is it possible to configure Traefik so that it generates and manages Let's Encrypt certificates and at the same time serves "static" certificates from other providers, e.g. Comodo?
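Yes, Traefik 1.x supports exactly this mix: static certificates are listed under the https entrypoint's TLS section and are picked by SNI, while ACME covers every other routed domain. A minimal sketch of a traefik.toml, where the certificate paths, email and onHostRule choice are assumptions, not from the question:

```shell
# Sketch of a Traefik 1.x config mixing ACME and provider-issued certs.
# All paths, the email and domain names below are placeholders.
cat > traefik.toml <<'EOF'
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      # "Static" certificates bought from a provider such as Comodo.
      # Traefik selects them by SNI; ACME handles the remaining domains.
      [[entryPoints.https.tls.certificates]]
      certFile = "/certs/enterprise.example.com.crt"
      keyFile  = "/certs/enterprise.example.com.key"

# Let's Encrypt for everything that has no static certificate above.
[acme]
email = "admin@example.com"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"
EOF
```

With onHostRule = true, Traefik requests a Let's Encrypt certificate for each routed host that is not already matched by one of the static certificates.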

Related

Install ssl on Kubernetes digital ocean load balancer

I have a website running on container ports 80 and 443, using a ready-made Docker image from Docker Hub.
I have enabled SSL on Kubernetes using an ingress.
I cannot generate a certificate inside the pod. If I create the cert inside the pod manually, then when the Apache service restarts the container restarts and the whole pod is recreated, so everything reverts to the Docker image's defaults.
So how do I install SSL on this website? Currently it is giving a self-signed certificate error.
It is like you are describing: you are using a self-signed certificate.
If you want to remove the warning or error, you will have to get a certificate from a well-known CA like Comodo or Symantec. If not, you will have to trust the CA that you used to create your self-signed certificate. This is an example of how to do it on Ubuntu.
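The Ubuntu trust step amounts to copying the CA certificate into the system store and rebuilding the trust bundle. A sketch, written out as a script; my-root-ca.crt is a placeholder name for your CA file:

```shell
# Sketch of the Ubuntu trust step; my-root-ca.crt is a placeholder.
cat > trust-ca.sh <<'EOF'
#!/bin/sh
# Copy the CA cert (the file must end in .crt) into the local store,
# then rebuild /etc/ssl/certs so system tools trust it.
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/my-root-ca.crt
sudo update-ca-certificates
EOF
chmod +x trust-ca.sh
```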
Hope it helps!

Is it okay to use a double proxy-server setup? One on the host and one in a container

One nginx server on the host machine with a renewable SSL certificate (via certbot), and another nginx server inside a Docker container. The first should redirect all traffic to the second.
Currently I have one nginx service inside a container, and I mount the certificates stored on the host into that server via volumes. But with this kind of setup I can't renew the certificates automatically...
You could try to obtain and renew the certificate in a container and share it with the proxy server (mount a common directory holding the certificates):
renew the cert in the container
reload nginx in the container
reload nginx on the proxy server
It's not a simple solution - you should always monitor whether your proxy server has lost the mounted directory - but it looks like a solution.
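The three steps above could be wired together in one host-side script run from cron. Everything here is an assumption for illustration: the shared /srv/certs directory, the app-nginx container name, and using a throwaway certbot container for the renewal:

```shell
# Sketch of a cron-driven renewal script; names and paths are assumptions.
cat > renew-and-reload.sh <<'EOF'
#!/bin/sh
set -e
# 1. Renew the certificate with a throwaway certbot container that
#    writes into the directory shared with both nginx instances.
docker run --rm -v /srv/certs:/etc/letsencrypt certbot/certbot renew
# 2. Reload nginx inside the app container.
docker exec app-nginx nginx -s reload
# 3. Reload the host proxy so it picks up the new certificate.
sudo nginx -s reload
EOF
chmod +x renew-and-reload.sh
```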

How to make nginx-proxy generate a self-signed certificate for every Docker container?

On my VPS, I can have my Docker containers automatically listen to a domain on port 443 and retrieve https certificates for those domains with nginx-proxy, docker-gen and docker-letsencrypt-nginx-proxy-companion.
On my localhost, for development purposes, I'm using nginx-proxy and docker-gen in conjunction with dnsmasq to make every container listen to a domain ending in .test on port 80.
It would be useful for certain aspects of development to be able to test local sites on https as well. Many browser features are blocked on http domains.
Is it possible to do something similar to the Let's Encrypt method for servers, but using self-signed certificates signed by a self-trusted root authority, which I can import into Google Chrome?
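One way to get there is a local root CA whose certificate you import into Chrome, plus per-site leaf certificates carrying a SAN for the .test domain (Chrome ignores the CN and requires a SAN). A plain-openssl sketch, where every file name and the myapp.test domain are made up; the mkcert tool automates the same idea:

```shell
# Local root CA (import rootCA.crt into Chrome / the OS trust store).
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout rootCA.key -out rootCA.crt -subj "/CN=Local Dev Root"

# Leaf certificate for a .test domain, with the SAN Chrome requires.
openssl req -newkey rsa:2048 -nodes \
  -keyout site.key -out site.csr -subj "/CN=myapp.test"
printf 'subjectAltName=DNS:myapp.test\n' > san.ext
openssl x509 -req -in site.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -out site.crt -days 825 -extfile san.ext

# Sanity check: the leaf chains to the local root.
openssl verify -CAfile rootCA.crt site.crt
```

docker-gen could then template the nginx-proxy vhosts to point at one such site.crt/site.key pair per container, mirroring what the letsencrypt companion does for real domains.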

Entire SSL Certificate Chain on Traefik

We have a Docker Swarm cluster with Consul + Traefik as a proxy for our microservices. Traefik v1.6.1 is installed, and now we have to configure the SSL certificate.
This certificate is a wildcard certificate (*.mydomain.com) to cover our microservices available on subdomains such as "microservice2.mydomain.com".
For this purpose, we would like to know whether it is possible to set the entire certificate chain on Traefik, and how it could be done.
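With Traefik 1.x this is done by concatenating the leaf certificate and the intermediates into a single PEM bundle (leaf first) and pointing certFile at that bundle. A sketch, with all file names assumed:

```shell
# Build the bundle: server certificate first, then the intermediate(s).
# (File names are placeholders for your provider-issued files.)
#   cat STAR.mydomain.com.crt intermediate.crt > fullchain.crt

# Reference the bundle in the TLS section (Traefik 1.6 TOML syntax):
cat > traefik-tls.toml <<'EOF'
[entryPoints]
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/certs/fullchain.crt"
      keyFile  = "/certs/STAR.mydomain.com.key"
EOF
```

Traefik then presents the whole chain to clients, which is what tools like the SSL Labs test check for.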
Regards,

Configure docker repo with https without domain name

I have a website that I'm running on a DigitalOcean droplet, which I want to continuously deploy via Docker and a TeamCity build server that I have running on my home server. I want to enable https on my Docker repo with a self-signed certificate and without a domain name.
Let's say my home's ip address is 10.10.10.10 and the docker repo is running on port 5000.
I followed the steps here; however, Docker on my website complained that it cannot connect to the Docker repo on my home server because the certificate doesn't specify an IP in the SAN extension.
Okay. So I created a new certificate without the CN field and only an IP in the SAN, and now my cert config on my website looks like...
/etc/docker/certs.d/10.10.10.10:5000/ca.crt
I also added the cert to my general certs (Ubuntu 16.04 btw)
Then I try to pull the image from my home server to my website...
docker pull 10.10.10.10:5000/personal_site:latest
However, I'm getting this error.
Error response from daemon: Get https://10.10.10.10:5000/v1/_ping: x509:
certificate signed by unknown authority (possibly because of "x509:
invalid signature: parent certificate cannot sign this kind of
certificate" while trying to verify candidate authority certificate "serial:xxx")
I thought by adding my cert to the /etc/docker/... it would accept a self-signed cert. Anyone have any advice here?
You can't use a self-signed certificate for this; it needs to be a CA certificate. Follow the same steps required to create a certificate for a Docker host and store your CA in /etc/docker/certs.d/.... Alternatively, you can define 10.10.10.10 as an insecure registry as part of the Docker daemon startup (dockerd --insecure-registry 10.10.10.10:5000 ...) and Docker will ignore any certificate issues.
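The "CA certificate" route can be sketched with plain openssl: a throwaway CA plus a server certificate whose SAN carries the registry's IP. The 10.10.10.10 address and port come from the question; all file names are placeholders:

```shell
# Throwaway CA; its ca.crt is what goes under /etc/docker/certs.d/.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=registry-ca"

# Server key + CSR for the registry host.
openssl req -newkey rsa:2048 -nodes \
  -keyout registry.key -out registry.csr -subj "/CN=10.10.10.10"

# Sign it with an IP-type SAN, which the docker client checks.
printf 'subjectAltName=IP:10.10.10.10\n' > san.ext
openssl x509 -req -in registry.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out registry.crt -days 365 -extfile san.ext

# Confirm the server cert chains to the CA.
openssl verify -CAfile ca.crt registry.crt
```

The registry serves registry.crt/registry.key, while the docker client gets the CA certificate (not the server certificate) as /etc/docker/certs.d/10.10.10.10:5000/ca.crt - mixing those two files up produces exactly the "certificate signed by unknown authority" error above.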
I just did the same thing following these instructions: create a private repo with a password, without a domain name and SSL. That requires you to add the certificate on the client and the domain to the hosts file (if you'd like to have a domain of your own without registering a new one).
