From Ubuntu running in WSL2, I'm attempting to push a docker image to a private docker registry. It fails with the following error:
Get "https://docker.example.com/v2/": x509: certificate signed by unknown authority
I'm using Docker Desktop with the WSL2 integration enabled.
Where do I install the root CA certificate so that docker can find it? I've tried the following:
Installing it in the Trusted Root Certification Authority store in Windows and restarting Docker Desktop Service.
Installing it in the Ubuntu CA store.
Neither of these works.
The CA certificate should be placed in the directory C:\ProgramData\docker\certs.d\. Then restart the Docker service with restart-service docker from an elevated PowerShell.
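A minimal sketch of that, assuming the registry host is docker.example.com (from the error above) and the CA file is named my-root-ca.crt; the per-registry subfolder follows Docker's documented certs.d layout:
New-Item -ItemType Directory -Force -Path C:\ProgramData\docker\certs.d\docker.example.com
Copy-Item .\my-root-ca.crt C:\ProgramData\docker\certs.d\docker.example.com\ca.crt
Restart-Service docker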
I have a ticketing system (OTOBO/OTRS) that runs in docker on Ubuntu 20.04 and uses an nginx reverse proxy for HTTPS. We only renew our certificates for 1 year at a time, and it's now time to update the existing SSL cert. How do I go about updating the certificate that's currently being used? The current certs exist in a Docker volume and are located in /etc/nginx/ssl/.
I have tried simply copying the certificates into the Nginx proxy container, replacing the existing ones. After a reboot, the site was no longer reachable. Below are the commands I ran.
sudo docker cp cert.crt container_id:/etc/nginx/ssl/
sudo docker cp cert.key container_id:/etc/nginx/ssl/
sudo docker exec otobo_nginx_ssl nginx -s reload
Does the above look correct or am I missing a step? I hardly ever have to use docker and am very green to it.
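A quick way to check that the new files are actually in place and that the config still parses before reloading (same container name and path as above):
sudo docker exec otobo_nginx_ssl ls -l /etc/nginx/ssl/
sudo docker exec otobo_nginx_ssl nginx -t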
I have 2 machines:
1) macOS laptop
2) Windows 7 PC.
The Mac is too slow for deploying the whole application, so I've created the docker machine on Windows.
docker-machine create -d virtualbox my-app
docker-machine env my-app > connect.bat
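For reference, connect.bat ends up containing roughly the following (cmd format; the cert path and <user> are placeholders):
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.100:2376
SET DOCKER_CERT_PATH=C:\Users\<user>\.docker\machine\machines\my-app
SET DOCKER_MACHINE_NAME=my-app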
I now have a docker machine that is accessible from Windows at tcp://192.168.99.100:2376.
Then I am able to build and run my docker images on that machine with
docker-compose build <image>
docker-compose up <image>
The issue is that I need the application sources on that machine in order to build the containers, so I also have to code on it and connect through remote desktop, which is very inconvenient.
The desired setup is: develop and compile the sources on the Mac, build the docker images there, and deploy them to the docker machine running on Windows inside VirtualBox.
I've started trying to connect to this docker machine from the Mac. I copied the certificate files and tried to connect with
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=<WINDOWS_MACHINE_HOST>:2376 version
and got: error during connect: Get https://<WINDOWS_MACHINE_HOST>:2376/v1.32/version: x509: certificate is valid for 192.168.99.100, not <WINDOWS_MACHINE_HOST>.
Attempts to regenerate the certificates following the instructions (https://docs.docker.com/engine/security/https/#secure-by-default) didn't help; I still got the same error.
Is such a setup even possible? If yes, how? I would appreciate any help.
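For reference, a minimal sketch of regenerating the server certificate so that it also names the Windows host, based on the linked guide (file names and the 365-day validity are placeholders; ca.pem and ca-key.pem are the existing CA files, and <WINDOWS_MACHINE_HOST> is the host from above):
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=<WINDOWS_MACHINE_HOST>" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = DNS:<WINDOWS_MACHINE_HOST>,IP:192.168.99.100,IP:127.0.0.1 > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
The daemon in the VM then has to be restarted with the new server-cert.pem and server-key.pem for the change to take effect.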
I have an existing VM with docker installed (CoreOS), and I can connect to docker with the following PowerShell commands.
docker-machine create --driver generic --generic-ip-address=$IP --generic-ssh-key=$keyPath --generic-ssh-user=user vm
docker-machine env vm | Invoke-Expression # Set Environment Variables
Everything worked fine. I was able to build and run containers.
Then I told my build server to run the PowerShell script, and it ran successfully. But then I lost the connection on my dev machine and got the following error:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "$IP": x509: certificate signed by unknown authority
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
When I remove the machine with docker-machine rm vm and recreate it, it works again.
How can I share an SSH key to a remote docker host without recreating the docker-machine?
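One thing worth noting: the error message above already suggests a way to repair the certificates without removing the machine (machine name as above; per the warning, this restarts the Docker daemon and may stop running containers):
docker-machine regenerate-certs vm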
I installed Docker Desktop for Windows on Windows 10 following https://docs.docker.com/docker-for-windows/install/#install-docker-for-windows. It does not use VirtualBox and the default VM to host docker.
I am able to run containers, but how do I connect to the docker VM with ssh?
docker-machine ls does not show my docker host.
I tried to connect to docker@10.0.75.1, but it requires a password, and tcuser, the password used for the boot2docker VM, does not match:
ssh docker@10.0.75.1
Could not create directory '/home/stan/.ssh'.
The authenticity of host '10.0.75.1 (10.0.75.1)' can't be established.
RSA key fingerprint is .... Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/stan/.ssh/known_hosts).
docker@10.0.75.1's password:
Write failed: Connection reset by peer
Run this:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
Just run this from your CLI and it'll drop you into a container with full permissions on the Moby VM. It only works for the Moby Linux VM (it doesn't work for Windows containers). Note this also works on Docker for Mac.
Reference:
https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/
As far as I know you can't connect to the docker VM using SSH and you cannot connect to the console/terminal using Hyper-V Manager either. https://forums.docker.com/t/how-can-i-ssh-into-the-betas-mobylinuxvm/10991/17
Does Docker Swarm support using a Docker Registry with a self-signed certificate?
I've created my cluster based on the steps from the official Docker documentation; it uses swarm master/nodes running inside containers.
It works well, but as soon as I try to log in to my Docker Registry I get the error message:
$ docker -H :4000 login https://...:443
...
Error response from daemon: Get https://.../v1/users/: x509: certificate signed by unknown authority
Is there an additional option which needs to be set, like --insecure-registry? Or do I need to somehow update Docker Swarm container?
You need to add your self-signed cert or personal CA to the list of trusted certificates on the host. For some reason, docker doesn't use the certificates on the daemon for this authentication. Here are the commands for a Debian host:
sudo mkdir -p /usr/local/share/ca-certificates
sudo cp ca.pem /usr/local/share/ca-certificates/ca-local.crt
sudo update-ca-certificates
sudo systemctl restart docker
The docker restart at the end is required for the daemon to reload the OS certificates.
As luka5z saw in the latest documentation, you can also add the certs directly to each docker engine by copying the cert to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt. This avoids trusting the self-signed CA on the entire OS.
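A minimal sketch of that per-registry layout, reusing the example host:port from the docs and the ca.pem file from above:
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.pem /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt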
Is there a way I could update it with the required certificates?
Docker 17.06 will bring the command docker swarm ca (PR 48).
Meaning a docker swarm ca --rotate will be enough.
root@ubuntu:~# docker swarm ca --help
Usage: docker swarm ca [OPTIONS]
Manage root CA
Options:
--ca-cert pem-file Path to the PEM-formatted root CA certificate to use for the new cluster
--ca-key pem-file Path to the PEM-formatted root CA key to use for the new cluster
--cert-expiry duration Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
-d, --detach Exit immediately instead of waiting for the root rotation to converge
--external-ca external-ca Specifications of one or more certificate signing endpoints
--help Print usage
-q, --quiet Suppress progress output
--rotate Rotate the swarm CA - if no certificate or key are provided, new ones will be generated
Here is a demo.
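For example, rotating to a CA you provide yourself would look roughly like this (file names are placeholders; the flags are the ones listed above):
docker swarm ca --rotate --ca-cert new-root-ca.pem --ca-key new-root-ca-key.pem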
I also encountered your problem.
I was not able to identify the root cause of this, or what sets this limitation.
But I managed to find a workaround:
If the registry is insecure, make sure you start each docker daemon accordingly on each host.
You can find info on how to change the daemon options here: https://docs.docker.com/engine/admin/systemd/
E.g., from my conf: --insecure-registry <private registry>. After that, run:
systemctl daemon-reload
systemctl restart docker
docker login <private registry>
on each docker host, and pull the needed images.
After that you have all the images locally and it will not try to pull them anymore.
I know this is not the best solution :(
PS: I also had to add these parameters to each docker daemon:
--cluster-advertise=<host:ip> --cluster-store=consul://<consul ip:consul port>
Without these I could not run containers on different hosts; they all ended up running on one randomly chosen host.
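For reference, a sketch of one way to set those daemon options via a systemd drop-in, following the linked page (the drop-in file name is arbitrary; the registry and consul addresses are the placeholders from above):
# /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry <private registry> --cluster-advertise=<host:ip> --cluster-store=consul://<consul ip:consul port>
followed by the systemctl daemon-reload and systemctl restart docker from above.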