I have a Docker droplet running on DigitalOcean, but it serves HTTP and I want to serve HTTPS.
I don't want to purchase a domain name. Actually, I don't need a domain name; an IP address is enough.
Certbot and Let's Encrypt do not allow creating an SSL certificate for a bare IP address.
Is there any solution to this problem?
How can I do that?
Thank you for your help.
Use openssl to generate the certificates and make the Docker daemon trust them.
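A minimal sketch of generating such a certificate with openssl, assuming OpenSSL 1.1.1+ (for `-addext`); the IP 203.0.113.10 and the filenames are placeholders, substitute your droplet's address:

```shell
# Generate a self-signed certificate whose SAN carries the bare IP address
# (browsers and Docker check the SAN, not just the CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=203.0.113.10" \
  -addext "subjectAltName=IP:203.0.113.10"

# Inspect the certificate to confirm the IP landed in the SAN extension
openssl x509 -in server.crt -noout -ext subjectAltName
```

The certificate will still be untrusted by default; each client (or the Docker daemon) has to be told to trust it explicitly.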
I'm on a corporate intranet. The container host is a Windows server and the container will be deployed on Nano Server. I have an ASP.NET Core site that uses Kestrel.
I'm able to get a self-signed cert installed via a volume, but our browsers throw a cert validation error. The problem with getting a valid cert installed is that the container site is only reachable via an IP address, and I'm aware you can't tie a non-self-signed cert to an IP. Since this is an intranet, the IP address is obviously not public. Would a corporate-level cert work for a container host that's a member server?
Any ideas are appreciated.
I have a website running in a container on ports 80 and 443. I am using a ready-made Docker image from Docker Hub.
I have enabled SSL on Kubernetes using an Ingress.
I cannot generate a certificate inside the pod. If I create the cert inside the pod manually, then whenever the Apache service restarts, the container restarts and the whole pod is recreated, so the setup reverts to the Docker image defaults.
So how do I install SSL on this website? Currently, it is giving a self-signed certificate error.
It is just as you are describing: you are using a self-signed certificate.
If you want to remove the warning or error, you will have to get a certificate from a well-known CA such as Comodo or Symantec. Otherwise, you will have to trust the CA that you used to create your self-signed certificate. This is an example of how to do it on Ubuntu.
Hope it helps!
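A sketch of the Ubuntu steps, assuming `ca.crt` is the certificate of the CA you signed your cert with (the filenames here are hypothetical; a throwaway CA is generated first so the example is self-contained):

```shell
# Generate a throwaway CA certificate purely for illustration -
# in practice, use the CA you actually signed your site cert with.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=my-internal-ca"

# Trust it system-wide on Ubuntu/Debian (these two need root):
#   cp ca.crt /usr/local/share/ca-certificates/my-internal-ca.crt
#   update-ca-certificates
```

After `update-ca-certificates`, tools that use the system trust store (curl, apt, most CLI clients) will accept certificates signed by that CA; browsers often keep their own trust store and may need the CA imported separately.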
I want one nginx server on the host machine with a renewable SSL certificate (via certbot), and another nginx server inside a Docker container. The first one should redirect all traffic to the second.
Currently I have one nginx service inside the container, and I connect the certificates stored on the host to that server via volumes. But with this kind of setup I can't renew the certificates automatically...
You could try to obtain and renew the certificate in the container and share it with the proxy server (mount a common directory containing the certificates):
renew the cert in the container
reload nginx in the container
reload nginx on the proxy server
It's not a simple solution: you always have to monitor whether your proxy server has lost the mounted directory. But it looks like a workable solution.
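The reload steps above could be wired into a certbot deploy hook so they run automatically after each renewal; a sketch, where the container name `web` and the hook filename are assumptions:

```shell
# Write a hook script that certbot will run after every successful renewal.
# It assumes both nginx instances read the certs from the shared mount,
# so only reloads are needed - no file copying.
cat > renew-hook.sh <<'EOF'
#!/bin/sh
# reload nginx inside the container (container name "web" is hypothetical)
docker exec web nginx -s reload
# reload nginx on the host proxy
nginx -s reload
EOF
chmod +x renew-hook.sh

# Register the hook, e.g.:
#   certbot renew --deploy-hook ./renew-hook.sh
```

`--deploy-hook` only fires when a certificate was actually renewed, so it is safe to run `certbot renew` from cron or a systemd timer.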
I have a website running on a DigitalOcean droplet, which I want to continuously deploy via Docker and a TeamCity build server that I run on my home server. I want to enable HTTPS on my Docker registry, with a self-signed certificate and without a domain name.
Let's say my home IP address is 10.10.10.10 and the Docker registry is running on port 5000.
I followed the steps here; however, Docker on my website complained that it cannot connect to the registry on my home server because the certificate doesn't specify an IP in the SAN extension.
Okay, so I created a new certificate without the CN field and with only an IP in the SAN, and now my cert config on my website looks like this:
/etc/docker/certs.d/10.10.10.10:5000/ca.crt
I also added the cert to my system's trusted certificates (Ubuntu 16.04, by the way).
Then I try to pull the image from my home server to my website...
docker pull 10.10.10.10:5000/personal_site:latest
However, I'm getting this error.
Error response from daemon: Get https://10.10.10.10:5000/v1/_ping: x509:
certificate signed by unknown authority (possibly because of "x509:
invalid signature: parent certificate cannot sign this kind of
certificate" while trying to verify candidate authority certificate "serial:xxx")
I thought that by adding my cert under /etc/docker/..., Docker would accept a self-signed cert. Does anyone have any advice here?
You can't use a self-signed certificate for this; it needs to be a CA certificate. Follow the same steps required to create a certificate for a Docker host and store your CA in /etc/docker/certs.d/.... Alternatively, you can define 10.10.10.10 as an insecure registry as part of the Docker daemon startup (dockerd --insecure-registry 10.10.10.10:5000 ...), and Docker will then ignore any certificate issues.
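The "parent certificate cannot sign this kind of certificate" error typically means the cert placed in certs.d lacks the CA basic constraint. A sketch of the CA route with openssl, using the 10.10.10.10 address from the question (all filenames are hypothetical):

```shell
# 1. Create a real CA certificate (openssl's -x509 default includes
#    basicConstraints CA:TRUE, which is what the docker daemon expects here)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry-ca.key -out registry-ca.crt -subj "/CN=registry-ca"

# 2. Create the registry's key and a signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout registry.key -out registry.csr -subj "/CN=10.10.10.10"

# 3. Sign the request with the CA, adding the IP SAN
printf "subjectAltName=IP:10.10.10.10\n" > san.ext
openssl x509 -req -in registry.csr -CA registry-ca.crt -CAkey registry-ca.key \
  -CAcreateserial -days 365 -out registry.crt -extfile san.ext

# 4. Verify the chain - essentially the check the docker daemon performs
openssl verify -CAfile registry-ca.crt registry.crt
# prints: registry.crt: OK
```

The registry then serves `registry.crt`/`registry.key`, and the client side only needs `registry-ca.crt` copied to /etc/docker/certs.d/10.10.10.10:5000/ca.crt.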
I just did the same thing following these instructions: create a private repo with a password, without a domain or SSL. That will require you to add the certificate on the client and the domain to the hosts file (if you would like to have a domain without registering a new one).
From reading the Docker Remote API documentation:
Docker Daemon over SSL
Ruby Docker-API
It appears that the correct way to connect to remote Docker machines is to let the application know the location of the certificates and connect using SSL/TLS with those certificates.
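For reference, the daemon side of that setup is typically configured like this; a sketch, where the paths and port 2376 are the conventional defaults rather than anything mandated by the thread:

```json
{
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem",
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"]
}
```

A client then connects with its own cert and key, e.g. `docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://<host>:2376 version`.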
Is there a way to avoid having a user hand over the certificate, key, and CA? Whoever has those certificates has root access to the Docker machine.