My company is using self-signed TLS certificates for internal IT systems. In order to connect to said systems from Linux servers (Ubuntu 20.04 LTS), e.g., by means of curl, we have to put the CA certificate mycompany.crt in /usr/local/share/ca-certificates and do a sudo update-ca-certificates. Then everything works fine on the servers.
Now, when I run a container by executing podman run -it ubuntu:20.04 and do a
curl https://myinternalserver.mycompany/api/foo
I get an error
curl: (60) SSL certificate problem: unable to get local issuer certificate
Please note that curl was only an example. In our production case there are .NET applications and other programs inside the container that fail with similar errors due to missing CA certificates.
What's the easiest way to make our internal CA certificates from the host OS (in /usr/local/share/ca-certificates) known to the container?
Should I mount /usr/local/share/ca-certificates into the container and execute update-ca-certificates in my ENTRYPOINT/CMD?
Or should I even bake the CA certificates into my container images? But then I would have to build custom images for each and every third-party container only for the purpose of the CA certificates.
The only viable way to work with containers and certificates is volumes; baking certificates into images is a nightmare. Thankfully, this question has been quite thoroughly answered here. Hopefully this helps.
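A minimal sketch of the volume approach, assuming an Ubuntu/Debian-based image: mount the host's compiled bundle (/etc/ssl/certs/ca-certificates.crt, which update-ca-certificates produces) over the same path in the container.
podman run -it \
  -v /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro \
  ubuntu:20.04
Mounting the compiled bundle read-only avoids having to run update-ca-certificates in an ENTRYPOINT, but it only helps images that read that Debian/Ubuntu path; Red Hat-based images keep their bundle under /etc/pki/ca-trust/extracted/ instead, and on SELinux hosts the mount may additionally need a :z/:Z label option.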
In general, we need a consistent way to add CA certificates to (and remove them from) the set of CA certificates that tools like podman and docker start with for basic communication with the rest of the world.
Where does podman get its ca-certificates from? I'm NOT talking about registry credentials.
Knowing the location of that directory/file on at least two platform groups, Debian and Red Hat, would be invaluable.
If I can adjust the CA certs on my host to allow curl to function, why can't that configuration carry over to curl running in a container on the same host? On Ubuntu, curl looks at /etc/ssl/certs/ca-certificates.crt.
Apologies to those offering to manipulate the content of Dockerfiles (I've got 10,000+ Dockerfiles, they come from multiple sources, and they're constantly being updated) or to change the command line arguments used to launch a container (podman/docker used by k8s, for example) - these are one-off, non-scalable solutions that avoid answering the underlying problem.
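For the two platform groups mentioned above, the usual host-side locations and tools are as follows (standard distro defaults, stated as general knowledge rather than taken from the answers):
# Debian/Ubuntu: anchors go here; the tool rebuilds /etc/ssl/certs/ca-certificates.crt
sudo cp mycompany.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# RHEL/CentOS/Fedora: anchors go here; the tool rebuilds the bundles under /etc/pki/ca-trust/extracted/
sudo cp mycompany.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract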
Related
My company uses a proxy to inspect our SSL/TLS traffic, this often causes some hurdles when developing locally because we need to ensure that various tools trust the company's CA.
One case where this causes problems is when building Docker images: for many of our devs, build steps that need to download assets from the internet (for example apt install or pip install) will fail due to certificate validation errors. One workaround we have found is to copy the company's CA cert into the build container and then append it in various locations as necessary, e.g.:
COPY company_ca.crt company_ca.crt
RUN cat company_ca.crt >> /etc/ssl/certs/ca-certificates.crt
RUN cat company_ca.crt >> `python -c "import requests; print(requests.certs.where())"`
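A variant that keeps the distro bundle consistent might look like this (a sketch, assuming a Debian/Ubuntu-based image with the ca-certificates package installed; REQUESTS_CA_BUNDLE is an environment variable honored by the requests library, which makes the certifi append unnecessary):
# Let the distro tooling rebuild the bundle instead of appending to it by hand
COPY company_ca.crt /usr/local/share/ca-certificates/company_ca.crt
RUN update-ca-certificates
# Point requests at the system bundle rather than its vendored certifi copy
ENV REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt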
However, for other devs these steps are not necessary. My suspicion is that if the host system is correctly configured to trust the company's CA, then this should automatically carry over to the docker engine and any containers. But I don't know enough about how docker handles TLS to say this for certain.
Which brings me the actual question: how does docker handle SSL/TLS certificate verification? Does each container verify against its own cert store? Or is there some way of discovering trusted certs from the host? Or does the docker engine act as a sort of SSL/TLS termination proxy, where it makes requests on behalf of containers so that they don't need to concern themselves with certs at all? Is any of this different for the docker build tools vs the container runtime?
Is it possible to run publicly available containers as-is when running reverse proxies that sign traffic with a custom root CA?
Example: Zscaler internet security
Corporate environments often run proxies.
While it is possible to install a custom root CA certificate file into a custom-built docker image and successfully run the container (e.g. COPY ... custom certificate ... and RUN ... install custom certificate ...), and it is also possible to mount the certificate into a container and then run a custom "entrypoint" command to install it, it does not seem possible to simply tell Docker to trust what the host trusts.
For example, when Zscaler signs responses with their root CA, docker container network requests will fail to validate the response, because they do not recognize the Zscaler root CA.
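For reference, the "mount the certificate and run a custom entrypoint" workaround mentioned above looks roughly like this (the image name, the wrapped command, and the certificate path are placeholders, and a Debian/Ubuntu-based image with the ca-certificates package is assumed):
docker run --rm \
  -v "$(pwd)/zscaler_root.crt:/usr/local/share/ca-certificates/zscaler_root.crt:ro" \
  --entrypoint /bin/sh \
  some/public-image \
  -c "update-ca-certificates && exec /original/entrypoint"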
Scenario:
Run a public docker image on a Windows computer with Zscaler Client installed
When the container starts, if it makes network requests, they are routed through Zscaler
Most and perhaps all network requests will fail to process the response, because the container OS and the tools do not trust the Zscaler certificate
This problem is highlighted when tools like Docker Compose or Kubernetes Helm attempt to run multiple containers at a time and many of them require network (of course).
In the distant future, it might be possible to use something like OCI hooks.
I am running a secure private docker registry on a machine (server) within my local network. My kubernetes and helm clients are running on another machine (host).
I am using a Certificate Authority (CA) on the server machine in order to verify HTTPS pull requests from the host. That is why I am copying the registry's CA certificate to the host, as mentioned here:
Instruct every Docker daemon to trust that certificate. For Linux:
Copy the domain.crt file to
/etc/docker/certs.d/myregistrydomain.com:5000/ca.crt on every Docker
host. You do not need to restart Docker
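Rendered as commands, the quoted step is just (hostname and port as in the docs excerpt above):
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp domain.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt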
This solution works like a charm, but only for a few nodes.
From a Kubernetes or Helm perspective, is there a way to generate a secret or state the username/password inside the charts, so that, when 1000 hosts exist, there is no need to copy the CA to each and every single host?
Server and hosts run CentOS 7.
What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should never run the two separately. The only exception is with very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing, and easily take over the whole system. There are probably more direct ways to root the host. The only requirement to run this command is access to the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
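An illustrative check (the hostname is a placeholder): if this returns the daemon's version information as JSON, the API is reachable with no authentication at all.
curl http://some-docker-host:2375/version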
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The various common system-automation tools (Ansible, Chef, Salt Stack) all can invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system) be aware that it's a massive security vulnerability, equivalent to disabling your root password.
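For reference, the kind of daemon flag that warning refers to looks like this (0.0.0.0 is illustrative; any tcp:// listener without TLS has the same problem):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375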
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)
I created a docker stack to deploy to a swarm. Now I'm a bit confused about what the proper way to deploy it to a real server looks like.
Of course I can
scp my docker-stack.yml file to a node of my swarm
ssh into the node
run docker stack deploy -c docker-stack.yml stackname
So then I thought of the docker-machine tool.
I tried
docker-machine create -d none --url=tcp://<RemoteHostIp>:2375 node1
which only seems to work if you open the port without TLS?
I received the following:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.178.49:2375": dial tcp 192.168.178.49:2375: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I already tried to generate a certificate & copy it over to the host:
ssh-keygen -t rsa
ssh-copy-id myuser@node1
Then I ran
docker-machine --tls-ca-cert PathToMyCert --tls-client-cert PathToMyCert create -d none --url=tcp://192.168.178.49:2375 node1
With the following result:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "node1:2375": There was an error reading certificate
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried it with the generic driver
$ docker-machine create -d generic --generic-ssh-port "22" --generic-ssh-user "MyRemoteUser" --generic-ip-address 192.168.178.49 node1
Running pre-create checks...
Creating machine...
(node1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Error creating machine: Error detecting OS: OS type not recognized
How do I add the remote docker host with docker-machine properly with TLS?
Or is there a better way to deploy stacks to a server/into production?
I often read that you shouldn't expose the docker port, but never once how to do it properly. And I can't believe that they don't provide a simple way to do this.
Update & Solution
I think both answers have their merits.
I found the official Deploy to Azure doc (it's the same for AWS).
The answer from @Tarun Lalwani pointed me in the right direction and it's almost the official solution. That's the reason I accepted his answer.
For me the following commands worked:
ssh -fNL localhost:2374:/var/run/docker.sock myuser@node1
Then you can run either:
docker -H localhost:2374 stack deploy -c stack-compose.yml stackname
or
export DOCKER_HOST=tcp://localhost:2374
docker stack deploy -c stack-compose.yml stackname
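A quick sanity check before deploying is simply (illustrative, using the same tunnel port):
docker -H localhost:2374 info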
The answer from @BMitch is also valid and the security concern he mentioned shouldn't be ignored.
Update 2
The answer from @bretf describes an awesome way to connect to your swarm, especially if you have more than one. It's still beta but works for swarms which are reachable from the internet and don't have an ARM architecture.
I would prefer not to open/expose the docker port, even with TLS. I would rather use an SSH tunnel and then do the deployment:
ssh -L 2375:127.0.0.1:2375 myuser@node1
And then use
export DOCKER_HOST=tcp://127.0.0.1:2375
docker stack deploy -c docker-stack.yml stackname
You don't need docker-machine for this. Docker has detailed steps for configuring TLS in their documentation. The steps involve (a rough sketch follows this list):
create a CA
create and sign a server certificate
configure the dockerd daemon to use this cert and validate client certs
create and sign a client certificate
copy the CA and client certificate files to your client machine's .docker folder
set some environment variables to use the client certificates and the remote docker host
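A condensed sketch of those steps, loosely following the Docker documentation (node1 stands in for your daemon's hostname; filenames and validity periods are just examples):
# 1. Create a CA
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# 2. Create and sign a server certificate (SANs must cover the name/IP you connect to)
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=node1" -sha256 -new -key server-key.pem -out server.csr
echo "subjectAltName = DNS:node1,IP:127.0.0.1" > extfile.cnf
echo "extendedKeyUsage = serverAuth" >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf
# 3. Point the daemon at the server cert and require client certs
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376
# 4. Create and sign a client certificate
openssl genrsa -out key.pem 4096
openssl req -subj "/CN=client" -new -key key.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf
# 5. Copy the CA and client cert/key to the client machine
mkdir -p ~/.docker && cp ca.pem cert.pem key.pem ~/.docker/
# 6. Point the client at the remote daemon and enable TLS verification
export DOCKER_HOST=tcp://node1:2376 DOCKER_TLS_VERIFY=1
Keep in mind that anyone holding ca-key.pem or the client cert/key pair effectively has root on the daemon's host, which is the security concern raised above.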
I wouldn't use the ssh tunnel method in a multi-user environment, since any user with access to 127.0.0.1 would have root access to the remote docker host without a password or any auditing.
If you're using Docker for Windows or Docker for Mac, Docker Cloud has a more automated way to set up your TLS certs and get you securely connected to a remote host for free. Under "Swarms" there's "Bring your own Swarm", which runs an agent container on your Swarm managers to let you easily use your local docker cli without manual cert setup. It still requires the Swarm port to be open to the internet, but this setup ensures it has TLS mutual auth enabled.
Here's a YouTube video showing how to set it up. It can also support group permissions for adding/removing other admins from remotely accessing the Swarm.