Does Docker Swarm support using a Docker Registry with a self-signed certificate?
I've created my cluster based on the steps in the official Docker documentation; it uses Swarm master/nodes running inside containers.
It works well, but as soon as I try to log in to my Docker Registry I get this error message:
$ docker -H :4000 login https://...:443
...
Error response from daemon: Get https://.../v1/users/: x509: certificate signed by unknown authority
Is there an additional option that needs to be set, like --insecure-registry? Or do I need to somehow update the Docker Swarm container?
You need to add your self-signed cert or personal CA to the list of trusted certificates on the host. For some reason, Docker doesn't use the certificates on the daemon for this authentication. Here are the commands for a Debian host:
sudo mkdir -p /usr/local/share/ca-certificates
sudo cp ca.pem /usr/local/share/ca-certificates/ca-local.crt
sudo update-ca-certificates
sudo systemctl restart docker
The docker restart at the end is required for the daemon to reload the OS certificates.
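If you want to confirm the CA is now trusted at the OS level before retrying the login, a check along these lines can help (the registry address is a placeholder for your own):
openssl s_client -connect myregistrydomain.com:443 -CApath /etc/ssl/certs </dev/null
# Look for "Verify return code: 0 (ok)" in the output; a non-zero code means the CA is still not trusted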
As luka5z saw in the latest documentation, you can also add the certs directly to each Docker Engine by copying the cert to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt. This avoids trusting the self-signed CA on the entire OS.
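For example, a minimal sketch of the per-engine approach (the registry host, port, and ca.pem path are placeholders for your own values):
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.pem /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
# The daemon uses this CA only for that registry, so the OS trust store stays untouched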
Is there a way I could update it with the required certificates?
Docker 17.06 will bring the command docker swarm ca (PR 48), meaning a docker swarm ca --rotate will be enough.
root@ubuntu:~# docker swarm ca --help
Usage: docker swarm ca [OPTIONS]
Manage root CA
Options:
--ca-cert pem-file Path to the PEM-formatted root CA certificate to use for the new cluster
--ca-key pem-file Path to the PEM-formatted root CA key to use for the new cluster
--cert-expiry duration Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
-d, --detach Exit immediately instead of waiting for the root rotation to converge
--external-ca external-ca Specifications of one or more certificate signing endpoints
--help Print usage
-q, --quiet Suppress progress output
--rotate Rotate the swarm CA - if no certificate or key are provided, new ones will be generated
Here is a demo.
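In practice, the rotation is a one-liner on a manager node; a minimal sketch (the PEM file names are placeholders if you supply your own CA material):
docker swarm ca --rotate
# or rotate to a CA you provide yourself
docker swarm ca --rotate --ca-cert ca.pem --ca-key ca-key.pem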
I also encountered your problem.
I was not able to identify the root cause of this, or what sets this limitation.
But I managed a workaround:
If the registry is insecure, make sure you start each Docker daemon accordingly on each host.
You can find info on how to change daemon options here: https://docs.docker.com/engine/admin/systemd/
E.g., from my config: --insecure-registry <private registry> (see the drop-in sketch after these steps). After that:
systemctl daemon-reload
systemctl restart docker
docker login <private registry>
Run these on each Docker host and pull the needed images.
After that you have all the images locally and it will not try to pull them anymore.
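As a sketch of what the daemon-options change can look like with systemd (the drop-in path follows the linked docs; the registry address and the dockerd path are placeholders, and on older releases the daemon binary may be docker daemon instead):
# /etc/systemd/system/docker.service.d/docker-options.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --insecure-registry 192.168.0.100:5000
Then run systemctl daemon-reload and systemctl restart docker as above.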
I know this is not the best solution :(
PS: I also had to add these parameters to each docker daemon:
--cluster-advertise=<host ip>:<port> --cluster-store=consul://<consul ip>:<consul port>
Without these I could not run containers on different hosts; they all ran on one randomly chosen host.
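For illustration, with placeholder addresses (the host IP, daemon port, and Consul address are assumptions), the extra options could look like:
DOCKER_OPTS="--cluster-advertise=192.168.0.10:2376 --cluster-store=consul://192.168.0.5:8500"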
Related
I am running GitLab in a Docker container and wanted to configure HTTPS following these instructions. To do this, I opened a shell in the container with the following command in PowerShell:
docker exec -ti -u root b836c4cdfd37 bash
After entering the command sudo ufw allow https, the following error message is displayed:
WARN: initcaps
[Errno 2] iptables v1.8.4 (legacy): can't initialize iptables table `filter': Permission denied (you must be root)
Maybe iptables or your kernel needs to be upgraded.
Skipping adding existing rule
Skipping adding existing rule (v6)
How can I execute the sudo ufw allow https command without errors?
The instructions you are following are for installing GitLab directly on an Ubuntu host, not for running GitLab inside of a Docker container, which requires a different set of installation and networking steps. In your scenario, ufw will not work as described in the guide you're following because Docker manages the networking for your container by default. Docker's networking can interfere with attempts to manage your firewall configuration with ufw or iptables. Even if you manage to get the command to work, you'll find that Docker's network management can bypass your ufw configuration in your container anyway.
To install GitLab in docker, you should follow the official docker installation instructions. You can also review all the other installation methods for GitLab for additional context.
If you really want to continue installing GitLab "manually" inside of a container, just skip the UFW steps and make sure you have configured port mapping for the GitLab container from the Docker host (e.g. to map HTTP/HTTPS, docker run -p 80:80 -p 443:443 ...).
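As a rough sketch of the official-image route (the hostname, volume paths, and image tag are examples to adapt):
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest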
I have a ticketing system (OTOBO/OTRS) that runs in Docker on Ubuntu 20.04 and uses an nginx reverse proxy for HTTPS. We only renew our certificates for one year at a time, and it's now time to update the existing SSL cert. How do I go about updating the certificate that's currently being used? The current certs exist in a Docker volume and are located in /etc/nginx/ssl/.
I have tried just copying the certificates into the nginx proxy container, replacing the existing ones. After a reboot, the site was no longer reachable. Below is an example of the commands I ran.
sudo docker cp cert.crt container_id:/etc/nginx/ssl/
sudo docker cp cert.key container_id:/etc/nginx/ssl/
sudo docker exec otobo_nginx_ssl nginx -s reload
Does the above look correct, or am I missing a step? I hardly ever have to use Docker and am very green to it.
I have an existing VM with Docker installed (CoreOS), and I can connect to Docker with the following PowerShell commands.
docker-machine create --driver generic --generic-ip-address=$IP --generic-ssh-key=$keyPath --generic-ssh-user=user vm
docker-machine env vm | Invoke-Expression # Set Environment Variables
Everything worked fine. I was able to build and run containers.
Then I told my build server to run the PowerShell script, and it ran successfully. But then I lost the connection on my dev machine and got the following exception:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "$IP": x509: certificate signed by unknown authority
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
When I recreate my config (docker-machine rm vm followed by the create command above), it works again.
How can I share an SSH key with a remote Docker host without recreating the docker-machine?
I've been searching on Google but I cannot find any answer.
Is it possible to connect via SSH to a VirtualBox Docker container that I have just started up? I have the IP of the virtual machine, but if I try to connect by SSH it of course asks me for a password.
Regards.
See https://github.com/BITPlan/docker-stackoverflowanswers/tree/master/33232371 to repeat the steps.
On my Mac OS X machine
docker-machine env default
shows
export DOCKER_HOST="tcp://192.168.99.100:2376"
So I added an entry
192.168.99.100 docker
to my /etc/hosts
so that ping docker works.
As a Dockerfile I am using:
# Ubuntu image
FROM ubuntu:14.04
which I am building with
docker build -t bitplan/sshtest:0.0.1 .
and testing with
docker run -it bitplan/sshtest:0.0.1 /bin/bash
Now ssh docker will react with
The authenticity of host 'docker (192.168.99.100)' can't be established.
ECDSA key fingerprint is SHA256:osRuE6B8bCIGiL18uBBrtySH5+iGPkiHHiq5PZNfDmc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'docker,192.168.99.100' (ECDSA) to the list of known hosts.
wf#docker's password:
But here you are connecting to the Docker machine, not your image!
The SSH port is port 22. You need to redirect it to another port and configure your image to support SSH login as root or a valid user.
See e.g. https://docs.docker.com/examples/running_ssh_service/
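A minimal sketch of what that looks like once the image actually runs an SSH daemon (the image tag and host port 2222 are placeholders; the linked example shows how to build such an image):
docker run -d -p 2222:22 bitplan/sshtest:0.0.2   # assumes the image was rebuilt to start sshd
ssh -p 2222 root@docker                          # now you reach the container, not the machine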
Are you trying to connect to a running container, or trying to connect to the VirtualBox image running the Docker daemon?
If the first, you cannot just SSH into a running container unless that container is running an ssh daemon. The easiest way to get a shell into a running container is with docker exec -ti <container name/id> /bin/sh. Do a docker ps to see running containers.
If the second, if your host was created with docker-machine then you can ssh into it with docker-machine ssh <machine name>. You can see all of your running machines with docker-machine ls.
If this doesn't help, can you clarify your question a little and provide details about how you're creating your image and starting the container?
You can use SSH keys for passwordless access.
Here's an intro:
https://wiki.archlinux.org/index.php/SSH_keys
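A minimal sketch of the usual flow (the user and IP are placeholders; this assumes the target already runs an SSH daemon):
ssh-keygen -t rsa -b 4096              # generate a key pair (accept the defaults)
ssh-copy-id user@192.168.99.100        # install the public key on the target
ssh user@192.168.99.100                # should no longer prompt for a password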
I'm trying to run a Docker daemon on Ubuntu 14.04. I have a private registry running on the same host on port 5000. The registry is running on HTTP, not HTTPS, which is fine for my purposes.
When I try to start the docker daemon with sudo service docker.io start, I see this error in syslog:
kernel: [9200489.966734] init: docker.io main process (9328) terminated with status 2
/etc/default/docker.io has just this one option
DOCKER_OPTS="--insecure-registry 192.168.0.100:5000"
When I try to start the daemon by hand with sudo docker.io --insecure-registry 192.168.0.100:5000 -d, I get an error message saying flag provided but not defined: --insecure-registry
I've read the documentation on this, and it looks like I'm doing everything right, but clearly I'm missing something. What am I doing wrong?
I have a feeling you are running Docker v1.2, not v1.3. You might need to update your Docker version; take a look at docker -h and see if the flag is available.
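A quick check along these lines (the grep pattern is just the flag name from the question):
docker -v                                       # show the installed version
docker -h 2>&1 | grep -- --insecure-registry    # prints the flag's help line only if this version supports it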