How to secure docker client connection by default? - docker

I'm using HTTPS to protect the Docker daemon socket and followed all the steps mentioned here. The environment variables are set as below:
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=~/.docker   # all my client, CA & server certificates and keys live here
DOCKER_HOST=tcp://$HOST:2376
The command below works (when I pass the CA, client certificate & key explicitly):
docker --tlsverify --tlscacert=~/.docker/ca.pem --tlscert=~/.docker/client-cert.pem --tlskey=~/.docker/client-key.pem -H=$HOST:2376 ps
According to the Docker documentation, I can secure Docker client connections by default and should not need to pass certificates every time, but plain "docker ps" doesn't work for me. It always expects the client certificate to be passed.
I also tried executing the following:
docker-compose --tlsverify --tlscacert=~/.docker/ca.pem --tlscert=~/.docker/client-cert.pem --tlskey=~/.docker/client-key.pem -H=$HOST:2376 up
ERROR: TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly
How can I secure Docker client connections by default? I just want to run "docker ps" without passing the client certificate every time, since it already exists in ~/.docker.
I have also referred to a similar question here.

I found the answer myself! When I followed the official documentation instructions, the generated client certificate and key were named cert.pem and key.pem, but I had renamed cert.pem to client-cert.pem and key.pem to client-key.pem in my ~/.docker directory.
Apparently, Docker picks up the client certificate and key by default only if they are named cert.pem and key.pem. So my issue was caused by renaming the client certificate and key.
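Putting the fix together as a sketch (the placeholder files below stand in for the real certificates; $HOST is the daemon host from the question):

```shell
# Placeholder files just for this sketch -- use your real certs in practice
mkdir -p "$HOME/.docker"
touch "$HOME/.docker/ca.pem" "$HOME/.docker/client-cert.pem" "$HOME/.docker/client-key.pem"

# The Docker CLI looks for exactly these names in $DOCKER_CERT_PATH:
#   ca.pem, cert.pem, key.pem
mv "$HOME/.docker/client-cert.pem" "$HOME/.docker/cert.pem"
mv "$HOME/.docker/client-key.pem"  "$HOME/.docker/key.pem"

export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker"
export DOCKER_HOST=tcp://$HOST:2376

# Now "docker ps" needs no --tlscacert/--tlscert/--tlskey flags
```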

Related

Getting 'Host key verification failed' when using ssh in docker context

I am setting up a docker context as described here and configured the SSH key and the context. Unfortunately, I keep getting an error from Docker while in the new context:
docker context use myhostcontext
docker ps
error during connect: Get "http://docker.example.com/v1.24/containers/json": command [ssh -l user -- myhost docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.
Surprisingly, when I ssh into user@myhost the connection is established as it should be.
ssh -vv user@myhost shows that it uses the key given in ~/.ssh/config.
Additional Info:
Platform: Ubuntu 20.04
Docker: 20.10.23
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f 31 Mar 2020
Here is what I've done:
I've created a docker context with
docker context create myhostcontext --docker "host=ssh://user@myhost"
I also created a new ssh keypair with ssh-keygen (tried with rsa and ecdsa),
executed ssh-add /path/to/key and ssh-copy-id -i /path/to/key user@myhost
I tried using "id_rsa" as the key name as well as "myhost", to make sure it's not just a default-naming problem.
Looking at several instructions (e.g. this question) unfortunately did not help. I also checked the authorized_keys on the remote host against the public key on my local machine; they match.
My ~/.ssh/config looks like this
Host myhost
HostName myhost
User user
StrictHostKeyChecking no
IdentityFile ~/.ssh/myhost
Also removing entries from known_host did not help.
Using the remote hosts IP instead of its name did not help either.
Installing ssh-askpass just shows me that the authenticity of the host could not be established (the default message when connecting to a host for the first time). Since I later want to use the docker context in a CI/CD environment, I don't want any non-CLI steps.
The only other possible "issue" that comes to my mind is that the user on the remote host is different from the one I am using on the client. But, if I understand correctly, that should not be an issue, and I also would not know how to work around it.
Any help or suggestion is highly appreciated, since I am struggling with this for days.
Thanks in advance :)
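Not an answer from the original thread, but a commonly suggested workaround for this exact symptom: the ssh process that Docker spawns runs non-interactively, so it cannot answer the first-connection host-key prompt (hence the ssh_askpass error). Pre-populating known_hosts avoids the prompt entirely (myhost is the example host name from the question):

```shell
# Record myhost's host key once, non-interactively
ssh-keyscan myhost >> ~/.ssh/known_hosts

# The context should now connect without prompting
docker context use myhostcontext
docker ps
```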

How do I configure ssl to run correctly on docker

Last week, I created a self-signed SSL certificate and it only works on port 80. I then tried to replace it with a Let's Encrypt certificate. I deleted all of the certificate files, and I have tried restarting Docker and the daemon. I have also tried removing the override file that I put the certificate in.
To create the self-signed SSL certificate, I used the following script
https://raw.githubusercontent.com/AlexisAhmed/DockerSecurityEssentials/main/Docker-TLS-Authentication/secure-docker-daemon.sh
For the override.conf file at /etc/systemd/system/docker.service.d/override.conf I used the following configuration:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D -H unix:///var/run/docker.sock --tlsverify --tlscert=/home/<user>/.docker/server-cert.pem --tlscacert=/home/<user>/.docker/ca.pem --tlskey=/home/<user>/.docker/server-key.pem -H tcp://0.0.0.0:2376
I have restarted the Docker container, but that also failed to work.
Furthermore, I have also deleted the SSL certificate, so I have no idea where it is pulling the old one from.
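For reference (not from the original post): a systemd drop-in override like the one above only takes effect after the unit files are reloaded and the service itself is restarted; restarting a container is not enough. A typical sequence is:

```shell
# Reload systemd unit files so the drop-in override is picked up
sudo systemctl daemon-reload

# Restart the Docker service (not just a container)
sudo systemctl restart docker

# Verify which ExecStart line the service is actually using
systemctl cat docker.service
```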

Docker pull image without ssl in Kubernetes with docker private registry

When I try to deploy something from my private Docker registry, I always get errors like:
x509: cannot validate certificate for 10.2.10.7 because it doesn't contain any IP SANs
Question:
How can I disable SSL when deploying images from a private Docker registry to Kubernetes?
Assuming relaxed security is OK for your environment, a way to accomplish in Kubernetes what you want is to configure Docker to connect to the private registry as an insecure registry.
Per the doc here:
With insecure registries enabled, Docker goes through the following steps:
First, try using HTTPS. If HTTPS is available but the certificate is invalid, ignore the error about the certificate.
If HTTPS is not available, fall back to HTTP.
Notice that the change to /etc/docker/daemon.json described in that doc - adding "insecure-registries" configuration - has to be applied to all nodes in the Kubernetes cluster on which pods/containers can be scheduled to run. Plus, Docker has to be restarted for the change to take effect.
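As a sketch, a minimal /etc/docker/daemon.json for this looks like the following (the :5000 port is an assumption; use your registry's actual host:port from the error message):

```json
{
  "insecure-registries": ["10.2.10.7:5000"]
}
```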
It is also worth noting that the above assumes the cluster uses the Docker container runtime, and not some other runtime (e.g. CRI-O) that supports the Docker image format and registry.
As you're using a self-signed TLS certificate, you need to add the certificate to the known certificates list.
Grab your .crt file and copy it to the client machine's SSL certificates directory.
For ubuntu:
$ sudo cp registry.crt /usr/local/share/ca-certificates/registry.crt
$ sudo update-ca-certificates
Now restart docker:
$ sudo systemctl restart docker
For CentOS 7:
copy the certificate into /etc/pki/ca-trust/source/anchors/
run the update-ca-trust command
My problem was with certificates because I used self-signed TLS certificates, which is not a good idea. You may run into the known-certificates list and have to add certificates each time, running update-ca-trust (on CentOS 7). Even then, you might encounter another certificate issue with a different error code.
To resolve this issue I used a third-party certificate authority, Let's Encrypt.

How to access to container docker from browser in windows 10

I am running Docker on Windows 10 Professional Edition. I need to access a container from the browser.
[screenshot of the running container]
I tried to access it at http://172.17.0.2:9000 and http://localhost:9000.
But my browser says:
This site can’t be reached
172.17.0.2 took too long to respond.
Any ideas to resolve this?
Use simpleDockerUI, which is a Chrome extension, and enter your Docker daemon IP: https://"docker-machine ip":2376
Before connecting via simpleDockerUI, import the Docker certificates into Chrome's certificate store.
Go to the folder where the Docker certificates are installed (on my machine it was C:\Users\"name"\.docker\machine\machines\default), then do the following steps:
1) $ cat cert.pem ca.pem >> clientcertchain.pem
2) $ openssl pkcs12 -inkey key.pem -in clientcertchain.pem -export -out import.pfx -passout pass:"password"
3) now go to Google Chrome settings --> manage certificates
4) under Trusted Root Certification Authorities, import ca.pem (it will prompt for the password, same as above)
5) import import.pfx as a personal certificate under the Personal certificates tab
(it will ask you to set a password, so set it)
To test the connection, open a new tab in Google Chrome and type https://ip:2376/_ping
You should get an OK response.
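The commands in steps 1) and 2) can be tried end-to-end with a throwaway key and certificate (everything below is a demo; in the real setup you would use the files from the .docker\machine directory instead):

```shell
# Generate a throwaway self-signed key + certificate for the demo
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout key.pem -out cert.pem

# Step 1: build the client certificate chain (here just the one cert)
cat cert.pem > clientcertchain.pem

# Step 2: bundle key + chain into a PKCS#12 file Chrome can import
openssl pkcs12 -inkey key.pem -in clientcertchain.pem \
  -export -out import.pfx -passout pass:password

# Sanity check: the bundle opens with the same password
openssl pkcs12 -in import.pfx -passin pass:password -noout
```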
Or use the portainer image:
docker run -d -p 9000:9000 portainer/portainer
Your container's web service should bind to host 0.0.0.0 instead of localhost; that way you can access it from your local machine.
Simply go to Settings -> General and activate "Expose daemon on tcp://localhost:2375 without TLS". Per the docs:
Click this option to enable legacy clients to connect to the Docker daemon. You must use this option with caution, as exposing the daemon without TLS can result in remote code execution attacks.
https://docs.docker.com/docker-for-windows/

How to browse container files in Docker for Windows? My folder mapping didn't work

I run a Windows machine and I'm super new to Docker. I'm trying to set up Let's Encrypt on my site for Home Assistant purposes.
I created a folder C:/Docker/LetsEncrypt on my Windows machine and then ran this command:
PS C:\Users\test> docker run -it --rm -p 80:80 --name certbot -v "C:Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt" -v "C:Docker/LetsEncrypt/var/lib/letsencrypt:/var/lib/letsencrypt" -v "C:Docker/LetsEncrypt/var/log/letsencrypt:/var/log/letsencrypt" quay.io/letsencrypt/letsencrypt:latest certonly --standalone --standalone-supported-challenges http-01 --email myemail@mail.com -d mysite.duckdns.org
This is the result I got
Warning: This Docker image will soon be switching to Alpine Linux.
You can switch now using the certbot/certbot repo on Docker Hub.
The standalone specific supported challenges flag is deprecated. Please use the --preferred-challenges flag instead.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
/opt/certbot/venv/local/lib/python2.7/site-packages/josepy/jwa.py:107: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
signer = key.signer(self.padding, self.hash)
-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: a
-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
-------------------------------------------------------------------------------
(Y)es/(N)o: y
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mysite.duckdns.org
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/mysite.duckdns.org/privkey.pem
Your cert will expire on 2018-06-22. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Looks like everything is fine, except I can't find the files fullchain.pem and privkey.pem on my Windows machine, where they are supposed to be inside C:\Docker\LetsEncrypt\etc\letsencrypt.
What am I missing?
Here is the command you executed:
PS C:\Users\test> docker run -it --rm -p 80:80 --name certbot
-v "C:Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt"
-v "C:Docker/LetsEncrypt/var/lib/letsencrypt:/var/lib/letsencrypt"
-v "C:Docker/LetsEncrypt/var/log/letsencrypt:/var/log/letsencrypt"
quay.io/letsencrypt/letsencrypt:latest
certonly --standalone --standalone-supported-challenges
http-01 --email myemail@mail.com -d mysite.duckdns.org
Docker allows you to mount directories from your local machine so that, inside the launched container, those same directories are mapped to new names while the directory contents are identical. For example, the command above says
-v "C:Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt"
which is a volume pair: left of the : delimiter is a directory local to your machine, C:Docker/LetsEncrypt/etc/letsencrypt, and on the right-hand side is what that same directory is called from the perspective inside the container, /etc/letsencrypt. This mapping keeps the container's internal view isolated from any given person's local directory structure. Now look closely at this message:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
That is from the perspective of inside the container, so now you are armed with the knowledge to discover where your missing keys are.
SOLUTION: when inside the container it says
/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
that same file is mapped to your local machine at location
C:Docker/LetsEncrypt/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
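The mapping can be demonstrated with any small image (a sketch; Docker Desktop, the alpine image, and the hypothetical host folder C:/Docker/demo are all assumed to be available):

```shell
# Write a file to /data inside the container ...
docker run --rm -v "C:/Docker/demo:/data" alpine \
  sh -c "echo hello > /data/test.txt"

# ... and the very same file appears on the Windows host
type C:\Docker\demo\test.txt
```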
