How to add private registry certs to Docker Machine - docker

I upgraded my Mac (OS X) from an older Docker installation to Docker Toolbox, meaning that I'm now working with Docker Machine, and in the process discovered that certs I had working for push/pull with a private registry are not there, and I can't for the life of me figure out how to get them in place. At the moment when I try a test pull I get the dreaded x509: certificate signed by unknown authority error. I've searched around, looked at issues on GitHub, but nothing has worked for me. I even tried ssh'ing into the machine VM and manually copying them into /etc/ssl/certs, and various other things, with no luck. And I certainly don't want to get into the "insecure-registry" stuff. This used to work with boot2docker prior to moving to docker-machine.
This seems like a very simple question: I have a couple of .crt files that I need put in the right place so that I can do a push/pull. How does one do this? And secondarily, how can this not be documented anywhere? Can we wish for a docker-machine add-cert command someday?
Thanks for any help, and I hope a good answer here can stick around to assist others who run into this.

Okay so let's imagine I have a registry running at the address: 192.168.188.190:5000 and I have a proper certificate for this address.
I would now run the following commands to install the root certificate into my machine:
docker-machine scp ./dockerCA.crt $MACHINE_NAME:dockerCA.crt
docker-machine ssh $MACHINE_NAME sudo mkdir -p /etc/docker/certs.d/192.168.188.190:5000
docker-machine ssh $MACHINE_NAME sudo mv dockerCA.crt /etc/docker/certs.d/192.168.188.190:5000/dockerCA.crt
Set the variable MACHINE_NAME to whatever the name of your machine is. The machine will now trust your root certificate.
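To check that everything is in place, point your shell at the machine and try pulling from the registry (myimage here is just a placeholder for something you have actually pushed):
eval $(docker-machine env $MACHINE_NAME)
docker pull 192.168.188.190:5000/myimage:latest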

Having the same issue, I read the Docker documentation on how to add a certificate to my computer.
As you mentioned that you are on an updated Mac OS X, proceed by doing the following:
Copy the cert file from your docker registry to your hard drive, e.g.
scp user@docker.reg.ip:/path/to/crt/domain.crt /tmp/domain.crt
Add the certificate to your trusted certificates using the following command
sudo security add-trusted-cert -d -r trustRoot \
-k /Library/Keychains/System.keychain /tmp/domain.crt
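To double-check that the certificate actually landed in the System keychain, you can query it back out (the name below is a placeholder; use whatever common name your certificate carries):
security find-certificate -a -c "docker.reg.ip" /Library/Keychains/System.keychain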
Restart your local Docker daemon, and you should now be able to push your local Docker images to the Docker registry.
If you are running on any other operating systems please check this site on how to add trusted root certificates.

Related

.NET Core in Docker, expired SSL CA certificate

I have a Docker image, based on the microsoft/dotnet:2.1-runtime image (Linux).
However, since 1/6, the .NET code (and other code) in the container is not able to connect to certain sites that use a COMODO CA certificate.
E.g. the following fails inside the container, due to the expired certificate:
curl https://code.jquery.com
Result:
curl: (60) SSL certificate problem: certificate has expired
I have tried calling update-ca-certificates inside the container, but that does not change anything.
My desktop browsers have somehow updated their CA certs themselves, but the Docker container has not.
I don't really need curl to work (that was just a test to show the issue), but I do need my own .NET code to work (which causes similar error). Any ideas how I can force the container to update the CA certs, or make this work somehow? I obviously do not want to skip certificate validation!
Not sure if this is the answer, but after you update the certificates, commit the change back into the Docker image itself. The pseudo-commands look like this:
$ docker run -it -p <port>:<port> --name <container name> <image> bash
root@xxxx:/<ca-cert folder># update-ca-certificates
Don't exit out of the container.
On the host machine:
$ docker commit <container name> <new image name>
docker commit will create a new image from the running container.
Theory
You are probably running update-ca-certificates after you start a container instance, using some of the steps shared in these answers.
This will probably work just once, if your docker run command looks something like the one below:
$ docker run -p 8000:80 <image name> bash
and inside the bash, you updated the certificate.
This fix only lasts for the lifetime of that container. When the container dies and a new one is created, it is created from the same old image (with the expired cert).
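If you would rather not rely on docker commit, a more repeatable option is to refresh the CA bundle at image build time instead. A rough sketch of a Dockerfile, assuming a Debian-based image like microsoft/dotnet:2.1-runtime and that the distro mirrors already ship an updated ca-certificates package:
FROM microsoft/dotnet:2.1-runtime
# Refresh the distro CA bundle so the expired root is replaced by the current one
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/* \
    && update-ca-certificates
# ...the rest of your image (COPY of the app, ENTRYPOINT, etc.) stays as before
Rebuilding the image this way means every container started from it already has the refreshed certificates.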

Sharing docker credentials between Windows and WSL

Environment
Windows version and build: Version 2004 (OS Build 19037.1)
Docker Edge version 2.1.6.1
Ubuntu 18.04 on WSL 2
Current setup and status:
docker installed on windows
created aliases for docker, docker-compose, docker-credential-desktop, etc ...
Running commands such as docker build, docker ps, docker pull, docker images all work fine. Now I would like to push an image, so of course I have to log in first.
Problem: logging into docker hub.
I run docker login in the WSL terminal
I put in my username and password
I get the following error
Error saving credentials: error storing credentials - err: exec: "docker-credential-desktop": executable file not found in %PATH%, out: ``
What I've tried so far
docker login from PowerShell works fine. So I created a symbolic link between /mnt/c/Users/<winusername>/.docker and /home/<wslusername>/.docker. The equivalent works fine for .aws, but for .docker it was not able to share or even acknowledge the credentials, so it asked again for the user and password and threw the same error as above.
This worked for me,
sudo ln -s /mnt/c/Program\ Files/Docker/Docker/resources/bin/docker-credential-desktop.exe /usr/bin/docker-credential-desktop.exe
This links the executable from the Windows path into a Linux path; alternatively you can add the Windows directory to your Linux PATH, as sketched below.
Refer: https://github.com/docker/for-win/issues/6652
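If you prefer the PATH route over the symlink, something like this in your WSL shell profile should do it (assuming the default Docker Desktop install location):
echo 'export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"' >> ~/.bashrc
source ~/.bashrc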
Update Feb 2021
This is all much simpler now. If you are using WSL2 on a recent release of Windows, just install Docker on the Windows side and ensure two configuration settings:
In General: use the WSL 2 based engine
In Resource/WSL Integration: enable integration with your default WSL distro
You will have to restart docker. Once it is done, everything works transparently.
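A quick sanity check from the WSL terminal once Docker is back up (both should work without any aliases or symlinks):
docker version
docker login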
Below here can be ignored
It turns out that the integration between Docker and WSL is better than I thought. Though it could have been better documented. I was going to change tack and try to install docker in the WSL. So I got rid of all the aliases and restarted my session. Lo and behold, when I ran docker there was still something running.
This is because the Edge version of Docker creates the appropriate symbolic links, and now I can log into Docker Hub without any problem.

Docker pull image without ssl in Kubernetes with docker private registry

When I try to deploy something from my Docker registry, I get this error every time:
x509: cannot validate certificate for 10.2.10.7 because it doesn't contain any IP SANs
Question:
How can I disable SSL when deploying images from my Docker registry to Kubernetes?
Assuming relaxed security is OK for your environment, a way to accomplish what you want in Kubernetes is to configure Docker to treat the private registry as an insecure registry.
Per the doc here:
With insecure registries enabled, Docker goes through the following steps:
First, try using HTTPS. If HTTPS is available but the certificate is invalid, ignore the error about the certificate.
If HTTPS is not available, fall back to HTTP.
Notice that the change to /etc/docker/daemon.json described in that doc - adding "insecure-registries" configuration - has to be applied to all nodes in the Kubernetes cluster on which pods/containers can be scheduled to run. Plus, Docker has to be restarted for the change to take effect.
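For reference, the daemon.json entry looks roughly like this on each node (assuming the registry from the question listens on port 5000; adjust the host and port to your setup), followed by restarting Docker:
{
  "insecure-registries": ["10.2.10.7:5000"]
}
$ sudo systemctl restart docker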
Also note that the above assumes the cluster uses the Docker container runtime and not some other runtime (e.g. CRI-O) that supports the Docker image format and registry.
As you're using a self-signed TLS certificate, you need to add the certificate to the known certificates list.
Grab your .crt file and copy it to the client machine's SSL certificates directory.
For ubuntu:
$ sudo cp registry.crt /usr/local/share/ca-certificates/registry.crt
$ sudo update-ca-certificates
Now restart docker:
$ sudo systemctl restart docker
For CentOS 7:
copy the certificate inside /etc/pki/ca-trust/source/anchors/
Use update-ca-trust command
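Put together, the CentOS 7 equivalent of the Ubuntu steps above is roughly:
$ sudo cp registry.crt /etc/pki/ca-trust/source/anchors/registry.crt
$ sudo update-ca-trust
$ sudo systemctl restart docker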
My problem was with certificates because I used self-signed TLS certificates, which is not a good idea. You run into the known certificates list: you have to add the certificate on each client and run the relevant update command each time (update-ca-trust on CentOS 7, as above). Even then, you might hit another certificate issue with a different error code.
To resolve this issue, I used a third-party Certificate Authority, Let's Encrypt.

docker-machine install fails due to 'Couldn't read CA cert' error

I am trying to setup docker-machine locally on my Windows machine and I followed the install instructions at the Docker Machine Page.
Per the instructions, I ran the following commands in my bash terminal
To install Docker client binary
$ curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_windows-amd64.exe > /bin/docker-machine
and to install Docker machine binary
$ curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_windows-amd64.exe > /bin/docker-machine
When I try to run docker-machine -v, I get the following error:
FATAL[0000] Couldn't read ca cert 'C:\Users\Me\.boot2docker\certs\boot2doker-vm\ca.pm: open 'C:\Users\Me\.boot2docker\certs\boot2docker-vm'\ca.pem: The filename, directory name, or volume label syntax is incorrect.
I did some searching and came across a few posts, but can't really see any connection to what would be causing my issues...
https://github.com/hypriot/kitematic/pull/1
https://github.com/docker/machine/issues/908
I installed Docker Machine today on my Windows 7 machine and ran the command without any problem.
Did you use boot2docker on your machine before? If you did, that might be related, as mine is a clean machine without any pre-existing Docker installations.
It's referring to the boot2docker environment.
see this:
DOCKER_CERT_PATH="/Users//.docker/machine/machines/dev"
I got the same error and was able to resolve it by changing the path in the environment variable to be Unix-style.
Inside msysgit bash shell:
export DOCKER_CERT_PATH=/C/Users/Me/.boot2docker/certs/boot2docker-vm
This resolved it for boot2docker.
Note that I had also tried using docker-machine before boot2docker, and it previously failed with the same error; that case was not resolved the same way the boot2docker one could be. For now only boot2docker is working for me.
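A quick way to sanity-check the variable is to list the directory it points to, confirm ca.pem is actually there, and then re-run the command that failed (paths as in the export above):
ls "$DOCKER_CERT_PATH"
docker-machine -v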

Unable to connect to docker hub from China

I'm getting the same thing every time when trying to run busybox, either with Docker on Fedora 20 or with boot2docker in VirtualBox:
[me@localhost ~]$ docker run -it busybox
Unable to find image 'busybox:latest' locally
Pulling repository busybox
FATA[0105] Get https://index.docker.io/v1/repositories/library/busybox/images: read tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without using a VPN tunnel, so I tried setting a proxy in the network settings to the one provided by Astrill when using VPN sharing, but it always times out.
I'm currently in China, where there basically is no Internet due to the firewall. npm, git and wget seem to use the Astrill proxy in the terminal (when it is set in the network settings of Fedora 20), but somehow I either can't get the Docker daemon to use it or something else is wrong.
It seems the answer was not so complicated, according to the following documentation (I had read it before but thought setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
In the Astrill app (I'm sure other provider application provide something similar) there is an option for vpn sharing which will create a proxy; it can be found under settings => vpn sharing.
For git, npm and wget, setting the proxy in the UI (gnome-control-center => Network => network proxy) is enough, but when using sudo it's better to do a sudo su, set the environment variables, and then run the command that needs the proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the Docker service with sudo service docker restart while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it stopped working.
