Use Docker's remote API in a secure manner

I am trying to find an effective way to use the Docker remote API securely.
I have a Docker daemon running on a remote host and a Docker client on a different machine. The solution must not depend on the client or server OS, so that it works on any machine with a Docker client or daemon.
So far, the only way I have found to do this is to create certificates on a Linux machine with openssl and copy them to the client and server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
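For reference, the setup in that guide boils down to starting the daemon with TLS verification enabled and pointing the client at the matching certificates; a rough sketch, with placeholder paths and hostnames (the full openssl certificate-generation steps are on the linked page):
# on the daemon host, after generating ca.pem, server-cert.pem and server-key.pem
dockerd --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376
# on the client, using the matching client certificate
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://remote-host:2376 version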
This method is rather clunky in my opinion, because it is sometimes a problem to copy files onto each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with this method the traffic is not encrypted, so it is not really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?

Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way, I'd probably use some sort of centralized storage for the TLS client keys, most likely HashiCorp Vault, which stores the keys encrypted, requires some level of authentication to retrieve them, and supports access control on them. You could write a shell function like this (untested):
dockerHost() {
  # fetch the per-host TLS client certificates out of Vault (KV v2 layout)
  mkdir -p "$HOME/.docker/$1"
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # jq -r writes the raw PEM contents rather than a JSON-quoted string
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
  export DOCKER_TLS_VERIFY=1
}
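Usage would then look something like this (the hostname is a placeholder, and it assumes the Vault path secret/docker/<host> holds those three PEM files):
dockerHost docker01.example.com
docker ps    # now talks to the remote daemon over TLS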
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.

Based on your comments, I would suggest going with Ansible if you don't need the Swarm functionality and only require single-host support. Ansible only requires SSH access, which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose, or you can just invoke your shell scripts from Ansible. There is no need to expose the Docker daemon to the outside world.
A very simple example file (playbook.yml):
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook:
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host’s image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.

Related

What is the best way to deliver docker containers to a remote host?

I'm new to docker and docker-compose. I'm using a docker-compose file with several services. I have containers and images on my local machine when I work with docker-compose, and my task is to deliver them to a remote host.
I found several solutions:
1. I could build my images, push them to some registry, and pull them on the production server. But for this option I need a private registry, and as I see it a registry is an unnecessary element; I want to run the containers directly.
2. Save the Docker images to tar files and load them on the remote host. I saw the post Moving docker-compose containersets around between hosts, but in this case I need shell scripts. Or I can use docker directly (Docker image push over SSH (distributed)), but then I lose the benefits of docker-compose.
3. Use docker-machine (https://github.com/docker/machine) with the generic driver. But in this case I can only deploy from one machine, or I need to configure certificates (How to set TLS Certificates for a machine in docker-machine). And again, that isn't a simple solution for me.
4. Use docker-compose with the host parameter (-H). But with this option I need to build the images on the remote host. Is it possible to build an image on my local machine and push it to the remote host? I could use docker-compose push (https://docs.docker.com/compose/reference/push/) to the remote host, but for that I need to create a registry on the remote host and pass the hostname as a parameter to docker-compose every time.
What is the best practice to deliver docker containers to a remote host?
Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
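As a sketch of the registry flow (the image name and registry host are placeholders):
# build and tag locally, then push to a registry reachable from both machines
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# on the remote host, pull and run the same tag (docker-compose can reference it too)
docker pull registry.example.com/myapp:1.0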
If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
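If you do go the save/load route, it looks roughly like this (filenames and hosts are placeholders):
docker save -o myapp.tar registry.example.com/myapp:1.0
scp myapp.tar user@remote-host:
ssh user@remote-host docker load -i myapp.tar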
There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
Independently of the images you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here. There is no way to transfer these within the pure Docker ecosystem.

Scenarios for Docker daemon and client / CLI on separate boxes?

What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should never run the two separately. The only exception is with very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing and easily take over the whole system. (There are probably more direct ways to root the host.) The only requirement to run this command is access to the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The various common system-automation tools (Ansible, Chef, Salt Stack) all can invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system) be aware that it's a massive security vulnerability, equivalent to disabling your root password.
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)

Dealing with dockers and containers in production

I am new to the topic of containers and hope this forum is the right place to ask this question.
I am learning Docker and containers and now have some skills using the docker commands and dealing with containers. I understand that Docker has two main parts, the docker client (docker.exe) and the docker server (dockerd.exe). In development both are installed on my local machine (I manually installed them on Windows Server 2016), following Nigel Poulton's tutorial here: https://app.pluralsight.com/course-player?clipId=f1f27565-e2bf-4e58-96f3-bc2c3b160ec9
Now when it comes to real production use, how would I configure my docker client to communicate with a remote docker server? I tried to research this on the internet but honestly could not find a simple answer. I installed Docker Desktop on my Windows 10 machine and noticed that it created a Hyper-V machine, which may be a Linux machine; my understanding is that this machine runs the docker server my docker client interacts with, but I do not understand how this interaction gets done.
I would appreciate some guidance or a clear answer to my questions.
In production environments you never have a remote Docker daemon. Generally you interact with Docker either through a dedicated orchestrator (Kubernetes, Docker Swarm, Nomad, AWS ECS), or through a general-purpose system automation tool (Chef, Ansible, Salt Stack), or if you must by directly ssh'ing to the system and running docker commands there.
Remote access to the Docker daemon is something of a security disaster. If you can access the Docker daemon at all, you can edit any file on the host system as root, and pretty trivially take over the whole thing. (Google "Docker cryptojacking" for some real-world examples.) In principle you can secure it with mutual TLS, but this is a tricky setup.
The other important best practice is that Docker images should be self-contained. Don't try to deploy a Docker image to production and also separately copy your application code. The same Ansible setup that can deploy a Docker container can also install Node directly on the target system, avoiding a layer; and it's tricky to copy application code into a Kubernetes volume, especially when Kubernetes pods can restart outside your direct control. Deploy (and test!) your images with all of the code COPYed in via the Dockerfile, minimizing the use of bind mounts.
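To illustrate the difference in shell terms (image name and paths are placeholders):
# self-contained image: the Dockerfile COPYs the application code in at build time
docker build -t myapp:1.0 .
docker run -d myapp:1.0
# avoid in production: injecting the code from the host at run time
docker run -d -v "$PWD/src:/app" myapp:1.0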

Adding user management for our container via dockerfile

We're using Docker for our project. We have a monitoring service (for our native application) which is running on Docker.
Currently there is no user management for this monitoring service. Is there any way we can add user management from the Dockerfile?
Please note that I'm not looking for Docker container user management.
In simple words, the functionality I'm looking for is:
Add a user and password in the Dockerfile.
When accessing the external IP, the same user and password must be provided to view the running monitoring service.
From the limited information about your setup, I would say that authentication should be handled by your monitoring service itself. If it is some kind of web app, you could use simple HTTP basic auth as a first step.
Docker doesn't know what's running in your container, and its networking setup is limited to a simple pass-through between containers or from host ports to containers. It's typical to run programs with many different network protocols in Docker (Web servers, MySQL, PostgreSQL, and Redis all have different wire protocols) and there's no way Docker on its own could inject an authorization step.
Many non-trivial Docker setups include some sort of HTTP reverse proxy container (frequently Nginx, occasionally Apache) that can both serve a Javascript UI's built static files and also route HTTP requests to a backend service. You can add an authentication control there. The mechanics of doing this would be specific to your choice of proxy server.
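A rough sketch of that pattern with Nginx as the proxy (the container names, ports, and file paths are placeholders; the monitoring service is assumed to listen on port 8080 inside the Docker network):
# create a password file (htpasswd comes from the apache2-utils package)
htpasswd -cB .htpasswd admin
# put both containers on one network and publish only the proxy
docker network create monitoring
docker run -d --name monitor --network monitoring my-monitoring-image
docker run -d --name proxy --network monitoring -p 80:80 \
  -v "$PWD/.htpasswd:/etc/nginx/.htpasswd:ro" \
  -v "$PWD/nginx.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx
The nginx.conf mounted here would carry the auth_basic and auth_basic_user_file directives plus a proxy_pass http://monitor:8080; pointing at the backend.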
Consider that any information you include in the Dockerfile or pass via docker build --build-arg options can be easily retrieved by anyone who has your image (by looking at its docker history, or by running a debug shell on it). You may need to inject the credentials at run time using a bind mount if you consider them sensitive and you're not storing your image somewhere with strong protections (for example, if anyone can docker pull it from Docker Hub).
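For example (hypothetical image and argument names), a password passed as a build argument and consumed in a RUN step stays visible to anyone holding the image:
docker build --build-arg ADMIN_PASSWORD=s3cret -t my-monitoring-image .
docker history --no-trunc my-monitoring-image   # the layer commands expose ADMIN_PASSWORD=s3cret
# safer: keep the secret out of the image and mount it at run time
docker run -d -v "$PWD/credentials.txt:/run/secrets/credentials:ro" my-monitoring-image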

Protecting Docker daemon

Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands are issued (docker run,...) with a remote machine as the target? When controlling docker locally this does not concern me.
Running Docker Swarm does not require this step either, as the security between nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps, because the overlay network in a swarm is encrypted by default.
Basically, when my target machine will always be localhost there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run --rm -it -v /:/host busybox sh
# vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
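Concretely (the docker group is the common default; the user alice is a placeholder):
ls -l /var/run/docker.sock            # typically srw-rw---- root:docker
sudo usermod -aG docker alice         # alice can now run docker commands...
                                      # ...and is therefore effectively root on this host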
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, while convenient, are actually quite insecure. A quick Web search for "Docker cryptojacking" will quickly turn up the consequences.
