Docker certificate version issue

I run a web server behind HAProxy, with a certificate issued by Let's Encrypt.
When I run the Docker container directly with the certificate applied as a PEM file, the certificate works normally.
However, when I deploy the same setup as a stack on Docker Swarm, the browser reports a certificate version error.
TLS 1.2 is specified as the minimum version in the HAProxy configuration. Is the certificate handled differently when deploying as a Docker Swarm stack?
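For reference, the relevant HAProxy configuration looks roughly like this (the file paths and backend name are placeholders, not taken from the question):

# haproxy.cfg (sketch): terminate TLS with the Let's Encrypt PEM
# and enforce TLS 1.2 as the minimum version (ssl-min-ver needs HAProxy 1.8+)
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem ssl-min-ver TLSv1.2
    default_backend web

When deploying as a stack, it is worth confirming that the same PEM file and HAProxy configuration actually end up inside the service's containers (for example via a bind mount or a Docker secret), since a stack deploy does not automatically carry over files from a docker run invocation.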

Related

Install SSL on a Kubernetes DigitalOcean load balancer

I have a website running in a container on ports 80 and 443. I am using a ready-made Docker image from Docker Hub.
I have enabled SSL on Kubernetes using an ingress.
I cannot generate a certificate inside the pod. If I create the certificate inside the pod manually, then restarting Apache restarts the container and the whole pod is recreated, and the setup reverts to the image's defaults.
So how do I install SSL on this website? Currently it gives a self-signed certificate error.
It is as you describe: you are using a self-signed certificate.
If you want to remove the warning or error, you will have to get a certificate from a well-known CA such as Comodo or Symantec. Otherwise, you will have to trust the CA that you used to create your self-signed certificate. This is an example of how to do it on Ubuntu.
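A minimal sketch of those Ubuntu steps, assuming your CA certificate is in a file named my-ca.crt (the filename is a placeholder):

# Copy the CA certificate into the system trust store
sudo cp my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
# Rebuild /etc/ssl/certs so the CA is trusted system-wide
sudo update-ca-certificates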
Hope it helps!

Docker for Windows not using client certificates

I'm having an issue connecting to a private Docker registry.
Setup:
Windows 10 (using Hyper-V)
Docker for Windows version: 18.03.0-ce
Steps I did:
Placed client certificates in C:/ProgramData/Docker/certs.d/<private_registry_url>/
Placed client certificates in C:/Users//.docker/certs.d/<private_registry_url>/
Root certificate imported in Windows "Trusted Root Certification Authorities" (otherwise I'm getting "X509: certificate signed by unknown authority")
Result: executing docker login <private_registry_url> and entering the correct credentials gives me:
Error response from daemon: Get https://<private_registry_url>/v2/: remote error: tls: bad certificate
which means that Docker is not sending the correct client certificates. (When I execute curl --cert <file.cert> --key <file.key> https://<private_registry_url>/v2/ with the client certificates, I am able to connect.)
When I connect to the running Hyper-V machine, the /etc/docker/certs.d/ folder is missing. If I manually create /etc/docker/certs.d/<private_registry_url> and put my client certificates inside, everything starts to work! But after restarting Docker for Windows, the certs.d folder in the Hyper-V machine is missing again and I'm not able to connect to the private registry.
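For reference, the manual workaround described above amounts to roughly the following inside the Hyper-V VM (the filenames are placeholders; per registry, Docker looks for a ca.crt plus a client.cert/client.key pair):

# Recreate the per-registry certificate folder the daemon reads
mkdir -p /etc/docker/certs.d/<private_registry_url>
# ca.crt: CA that signed the registry's server certificate
# client.cert / client.key: the client certificate pair
cp ca.crt client.cert client.key /etc/docker/certs.d/<private_registry_url>/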
Any ideas what am I doing wrong?

Self-signed docker registry in CircleCI

I'm using CircleCI 2.0 and I have a private Docker registry with a self-signed certificate. I'm able to configure my local Docker just as documented here; the problem is in CircleCI:
I'm using remote Docker, so when I try to log in to the Docker registry it fails with Error response from daemon: Get https://docker-registry.mycompany.com/v2/: x509: certificate signed by unknown authority.
Is there a way to install the certificate on the remote Docker host? I don't have access to the Docker host's shell, and I don't want to use the machine executor type.
It's not possible. It can only be accomplished with CircleCI's Enterprise-level offering.
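For comparison, the local-Docker configuration referenced in the question is just the standard per-registry trust directory, which only works where you control the Docker host, which is exactly what remote Docker does not give you:

# On a host you control: make the daemon trust the registry's CA
sudo mkdir -p /etc/docker/certs.d/docker-registry.mycompany.com
sudo cp ca.crt /etc/docker/certs.d/docker-registry.mycompany.com/ca.crt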

Configure docker repo with https without domain name

I have a website that I'm running on a DigitalOcean droplet, which I want to continuously deploy via Docker and a TeamCity build server that I run on my home server. I want to enable HTTPS on my Docker repo, with a self-signed certificate, and without a domain name.
Let's say my home's ip address is 10.10.10.10 and the docker repo is running on port 5000.
I followed the steps here; however, Docker on my website's droplet complained that it cannot connect to the Docker repo on my home server because the certificate doesn't specify an IP in the SAN extension.
Okay. So I created a new certificate without the CN field and with only an IP in the SAN, and now my cert config on my website looks like...
/etc/docker/certs.d/10.10.10.10:5000/ca.crt
I also added the cert to my system's trusted certificates (Ubuntu 16.04, by the way).
Then I try to pull the image from my home server to my website...
docker pull 10.10.10.10:5000/personal_site:latest
However, I'm getting this error:
Error response from daemon: Get https://10.10.10.10:5000/v1/_ping: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "serial:xxx")
I thought that by adding my cert under /etc/docker/... Docker would accept a self-signed cert. Does anyone have any advice here?
You can't use a self-signed certificate for this; it needs to be a CA certificate. Follow the same steps required to create a certificate for a Docker host and store your CA in /etc/docker/certs.d/.... Alternatively, you can define 10.10.10.10 as an insecure registry as part of the Docker daemon startup (dockerd --insecure-registry 10.10.10.10:5000 ...) and Docker will ignore any certificate issues.
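A rough sketch of that CA-signed approach with OpenSSL, using the IP from the question (key sizes, validity periods, and filenames are arbitrary choices):

# 1. Create a CA key and a self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 365 -subj "/CN=home-registry-ca" -out ca.crt
# 2. Create the registry's key and a certificate signing request
openssl genrsa -out server.key 4096
openssl req -new -key server.key -subj "/CN=10.10.10.10" -out server.csr
# 3. Sign the CSR with the CA, putting the IP in the SAN (bash process substitution)
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -extfile <(printf "subjectAltName=IP:10.10.10.10") -sha256 -days 365 -out server.crt
# 4. On the client, trust the CA for this registry
sudo mkdir -p /etc/docker/certs.d/10.10.10.10:5000
sudo cp ca.crt /etc/docker/certs.d/10.10.10.10:5000/ca.crt

The registry itself would then be started with server.crt and server.key.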
I just did the same thing following instructions for creating a private repo with a password, without a domain, and with SSL. That approach requires you to add the certificate on the client and a domain entry in the hosts file (if you'd like a domain of your own without registering a new one).
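The hosts-file part of that is just a one-line mapping on the client (the name registry.local is hypothetical):

# /etc/hosts on the client machine
10.10.10.10 registry.local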

Uploading and storing Docker certificates for connecting to remote Docker machines

From reading the Docker Remote API documentation:
Docker Daemon over SSL
Ruby Docker-API
It appears that the correct way to connect to remote Docker machines is to give the application the location of the certificates and have it connect over SSL/TLS using them.
Is there a way to avoid having a user hand over the certificate, key, and CA? Handing those over gives whoever holds them root access to the Docker machine.
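For context, the standard certificate-based client setup looks like this (the host name and paths are examples):

# Point the Docker client at a remote daemon over TLS
export DOCKER_HOST=tcp://remote-host:2376
export DOCKER_TLS_VERIFY=1
# Directory containing ca.pem, cert.pem and key.pem
export DOCKER_CERT_PATH=$HOME/.docker/remote-host
docker info

Whoever holds cert.pem and key.pem can run any command against the daemon, which is effectively root on that machine, hence the concern in the question.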
