My company uses a proxy that inspects our SSL/TLS traffic, which often causes hurdles when developing locally because we need to ensure that various tools trust the company's CA.
One case where this causes problems is when building Docker images: for many of our devs, build steps that need to download assets from the internet (for example, apt or pip install) fail due to certificate validation errors. One workaround we have found is to copy the company's CA cert into the build container and then append it in various locations as necessary, e.g.:
# make the corporate CA available inside the build container
COPY company_ca.crt company_ca.crt
# append it to the system bundle that curl and most TLS tools read
RUN cat company_ca.crt >> /etc/ssl/certs/ca-certificates.crt
# append it to the certifi bundle that the Python requests library consults
RUN cat company_ca.crt >> `python -c "import requests; print(requests.certs.where())"`
However, for other devs these steps are not necessary. My suspicion is that if the host system is correctly configured to trust the company's CA, then this should automatically carry over to the Docker engine and any containers. But I don't know enough about how Docker handles TLS to say this for certain.
Which brings me to the actual question: how does Docker handle SSL/TLS certificate verification? Does each container verify against its own cert store? Or is there some way of discovering trusted certs from the host? Or does the Docker engine act as a sort of SSL/TLS termination proxy, making requests on behalf of containers so that they don't need to concern themselves with certs at all? Is any of this different for the Docker build tools vs. the container runtime?
Related
My company is using self-signed TLS certificates for internal IT systems. In order to connect to said systems from Linux servers (Ubuntu 20.04 LTS), e.g., by means of curl, we have to put the CA certificate mycompany.crt in /usr/local/share/ca-certificates and do a sudo update-ca-certificates. Then everything works fine on the servers.
Now, when I run a container by executing podman run -it ubuntu:20.04 and do a
curl https://myinternalserver.mycompany/api/foo
I get an error
curl: (60) SSL certificate problem: unable to get local issuer certificate
Please note that curl was only an example. In our production case there are .NET applications and other programs inside the container that fail with similar errors due to missing CA certificates.
What's the easiest way to make our internal CA certificates from the host OS (in /usr/local/share/ca-certificates) known to the container?
Should I mount /usr/local/share/ca-certificates into the container and execute update-ca-certificates in my ENTRYPOINT/CMD?
Or should I even bake the CA certificates into my container images? But then I would have to build custom images for each and every third-party container only for the purpose of the CA certificates.
The only viable way to work with containers and certificates is to use volumes; baking certificates into images is a nightmare. Thankfully, this question has been quite thoroughly answered here. Hopefully this helps.
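For the Ubuntu case above, a minimal sketch of the volume approach might look like this (assuming a Debian/Ubuntu-based image and the host path from the question; the ca-certificates and curl packages are installed first because the base image ships without them):

# mount the host's CA directory read-only, refresh the container's trust store,
# then call the internal API
podman run -it \
  -v /usr/local/share/ca-certificates:/usr/local/share/ca-certificates:ro \
  ubuntu:20.04 \
  bash -c 'apt-get update && apt-get install -y ca-certificates curl &&
           update-ca-certificates &&
           curl https://myinternalserver.mycompany/api/foo'

For a long-running third-party image, the same mount plus a small entrypoint wrapper that runs update-ca-certificates before the real command gives the same result without rebuilding the image.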
In general, we need a consistent way to add/remove ca-certificates to the set of ca-certificates that tools like podman and docker start with for basic communication with the rest of the world.
Where does podman get its ca-certificates from? I'm NOT talking about registry credentials.
The location of that directory/file on at least two platform groups, Debian and Red Hat, would be invaluable.
If I can adjust the CA certs on my host to allow curl to function, why can't that configuration carry over to curl running in a container on the same host? On Ubuntu, curl looks at /etc/ssl/certs/ca-certificates.crt.
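For context, "adjusting the CA certs on my host" means the standard distro tooling, roughly like this (a sketch; the certificate file name is a placeholder, and the paths are the documented trust-store locations for the two platform groups mentioned above):

# Debian/Ubuntu hosts
sudo cp mycompany_ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# Red Hat/Fedora hosts
sudo cp mycompany_ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract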
Apologies to those offering to manipulate the content of Dockerfiles (I've got 10,000+ Dockerfiles, they come from multiple sources, and they're constantly being updated) or to change the command-line arguments used to launch a container (podman/docker used by k8s, for example) - these are one-off, non-scalable solutions that avoid answering the underlying problem.
Is it possible to run publicly available containers as-is when running reverse proxies that sign traffic with a custom root CA?
Example: Zscaler internet security
Corporate environments often run proxies.
While it is possible to install a custom root CA certificate file into a custom-built Docker image and then run the container successfully (e.g. COPY ... custom certificate ... and RUN ... install custom certificate ...), and it is also possible to mount the certificate into a container and then run a custom "entrypoint" command to install it (sketched below), it does not seem possible to simply tell Docker to trust what the host trusts.
For example, when Zscaler signs responses with their root CA, docker container network requests will fail to validate the response, because they do not recognize the Zscaler root CA.
Scenario:
Run a public docker image on a Windows computer with Zscaler Client installed
When the container starts, if it makes network requests, they are routed through Zscaler
Most and perhaps all network requests will fail to process the response, because the container OS and the tools do not trust the Zscaler certificate
This problem is highlighted when tools like Docker Compose or Kubernetes Helm attempt to run multiple containers at a time, many of which (of course) require network access.
In the distant future, it might be possible to use something like OCI hooks.
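For reference, the per-container workaround mentioned above (mount the certificate and install it via a custom entrypoint) might look roughly like this; zscaler_root_ca.crt, some/public-image, and my-app are placeholders, and a Debian/Ubuntu-based image with the ca-certificates package is assumed:

# mount the corporate root CA and refresh the trust store before the image's real command
docker run --rm \
  -v "$PWD/zscaler_root_ca.crt:/usr/local/share/ca-certificates/zscaler_root_ca.crt:ro" \
  --entrypoint sh \
  some/public-image \
  -c 'update-ca-certificates && exec my-app'

As noted above, repeating this for every container in a Compose file or Helm chart is exactly the part that does not scale.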
What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should never run the two separately. The only exception is with very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing and easily take over the whole system. There are probably more direct ways to root the host. The only requirement is that you can access the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The common system-automation tools (Ansible, Chef, SaltStack) can all invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system) be aware that it's a massive security vulnerability, equivalent to disabling your root password.
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)
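If the goal is simply to run docker commands against a remote daemon, a less dangerous pattern on reasonably recent Docker versions is to let the CLI tunnel over SSH rather than exposing a TCP port at all; a sketch, with the user and host names as placeholders:

# one-off: point the CLI at the remote daemon over SSH
export DOCKER_HOST=ssh://admin@remote-host.example.com
docker ps

# or save it as a named context and switch to it when needed
docker context create remote --docker "host=ssh://admin@remote-host.example.com"
docker context use remote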
Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands are issued (docker run, ...) with a remote machine as the target? When controlling Docker locally, this does not concern me.
Running Docker Swarm does not require this step either, as the security between the nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps because the overlay network in a swarm is encrypted by default.
Basically, when my target machine will always be localhost there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run -v/:/host busybox sh
# vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, while convenient, are actually quite insecure. A quick web search for "Docker cryptojacking" will very quickly show you the consequences.
I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running on a remote host, and a docker client on a different machine. I need my solution not to be client/server OS dependent, so that it is relevant to any machine with a docker client/daemon, etc.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
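Concretely, "configure docker on both sides" boils down to something like the following (a sketch based on that guide; the certificate file names follow its conventions and the host name is a placeholder):

# on the daemon host: require TLS client certificates on the network socket
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376

# on the client: present the matching client certificate
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://daemon-host.example.com:2376 version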
This method is rather clunky in my opinion, because sometimes it's a problem to copy files to each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with this method the traffic is not encrypted, so it is not really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  # fetch the TLS client credentials for host "$1" from Vault and point the
  # docker CLI at that host
  mkdir -p "$HOME/.docker/$1"
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # -r writes the raw PEM contents rather than a JSON-quoted string
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_TLS_VERIFY=1
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
}
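Usage would then look something like this (the host name is a placeholder):

dockerHost swarm-manager-1.example.com   # fetch the certs from Vault and point the CLI at that host
docker ps                                # subsequent commands talk to that daemon over TLS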
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the swarm functionality and require only single host support. Ansible only requires SSH access which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose or you can just invoke your shell scripts in Ansible. No need to expose the Docker daemon to the external world.
A very simple example file (playbook.yml)
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook:
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host’s image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
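The same docker_container task from the playbook above can also be run ad hoc, without writing a playbook (same placeholder host as before):

ansible all -i 'username@mysshhost.com,' -m docker_container -a "name=helloworld image=hello-world"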