What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should never run the two separately. The only exception is with very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -it -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing, and easily take over the whole system. There are probably more direct ways to root the host. The only requirement to run this command is that you have permission to access the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
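For example, from any machine that can reach that port, a command like the following (the hostname here is purely illustrative) has exactly the same effect as the local example above:

docker -H tcp://victim.example.com:2375 run --rm -v /:/host busybox cat /host/etc/shadow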
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The common system-automation tools (Ansible, Chef, SaltStack) can all invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system), be aware that it's a massive security vulnerability, equivalent to disabling your root password.
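The setups to watch out for look something like this, either on the dockerd command line or as a "hosts" entry in /etc/docker/daemon.json:

# do NOT do this: it exposes the root-equivalent API, unauthenticated, to the network
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375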
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)
Related
I've been reading about security issues with building docker images within a docker container by mounting the docker socket.
In my case, I am accessing Docker via an API, docker-py.
Now I am wondering, are there security issues with building images using docker-py on a plain ubuntu host (not in a docker container) since it also communicates on the docker socket?
I'm also confused as to why there would be security differences between running docker from the command line vs this sdk, since they both go through the socket?
Any help is appreciated.
There is no difference: if you have access to the socket, you can send a request to run a container with access matching that of the dockerd engine. That engine is typically running directly on the host as root, so you can use the API to get root access directly on the host.
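The CLI, docker-py, and anything else that talks to the daemon all speak the same HTTP API over the same socket. You can see this for yourself with plain curl (assuming your user can read the socket; the API version prefix is optional):

curl --unix-socket /var/run/docker.sock http://localhost/version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json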
Methods to lock this down include running the dockerd daemon inside of a container. However, that container is typically privileged, which is itself not secure: you can gain root in the other container and use the privileged access to gain root on the host.
The best option I've seen is running the engine rootless, so that an escape from the container only gets you access to the user the daemon is running as. However, realize that rootless mode has its drawbacks: the host needs to be pre-configured to support it, and networking and filesystem configuration are done at the user level, which has functionality and performance implications. The second good option is to run the build without a container runtime at all. This has its own drawbacks too, like not having a 1-for-1 replacement of the Dockerfile RUN syntax, so your image is built mainly from the equivalent of COPY steps plus commands run on the host outside of any container.
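As a rough sketch of what enabling rootless mode can look like (the setup script ships with Docker's rootless-extras package, whose exact name varies by distro):

# run as the unprivileged user after installing the rootless extras
dockerd-rootless-setuptool.sh install
# point the client at the per-user socket instead of /var/run/docker.sock
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info    # "rootless" should show up under Security Options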
I conduct security testing on programs and applications. Often, when I want to run a tool, I need to add libraries and other dependencies onto the system that I'm testing in order for the tool to work properly. Also, the test environment doesn't have internet access, which makes installing these dependencies more difficult.
I was thinking about containerizing multiple tools so that I could put them onto the systems that I'm testing and they will have all of their dependencies. I am considering doing this in Docker, so anytime you see a reference to a container it implies that it's a Docker container.
Some of the tools I would like to use are nmap, strace, wireshark, and others for monitoring network traffic, processes and memory.
My questions are:
Can I run these tools "locally", or will there be networking required, as though they are coming from a different machine?
Is there anything required to put onto the test system for the container to run properly?
Docker allows you to access devices on the host itself using the --privileged flag.
Additionally you can add the --network=host flag, which means your container's network stack is no longer isolated from the host.
Using the flags above should allow you to run your tools in Docker without additional requirements on the host (other than a Docker runtime plus your container images).
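For example, something like the following (the image name and subnet are just placeholders for whatever you package your tool into and whatever you are testing) runs nmap against the host's own network:

# --network=host shares the host's network stack; --privileged allows raw sockets, device access, etc.
docker run --rm -it --network=host --privileged my-nmap-image nmap -sS 192.168.1.0/24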
However, it does mean you need to get your Docker images onto the host. Normally downloading an image requires networking, but what you can do is export an image to a tar file with docker save and load that tar on the test system with docker load.
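Roughly (image and file names are placeholders):

# on a machine with internet access
docker pull my-tools-image
docker save -o my-tools-image.tar my-tools-image
# copy the tar across by whatever means you have, then on the test system:
docker load -i my-tools-image.tar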
Docker is, first and foremost, an isolation system that hides details of the host system from the container environment. All of the tools you describe need low-level access to the host network and process space: you actively do not want the isolation that Docker is providing here.
To give two specific examples: without special setup, you can’t tcpdump the host’s network interface, because a container has its own isolated network stack; and you can’t strace host processes because a container has its own process ID space and restricted Linux capabilities that disallow it.
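You can see the process-space isolation directly, along with the flag that punches through it:

# default: the container sees only its own PID namespace (here, just the ps process itself)
docker run --rm busybox ps
# sharing the host's PID namespace exposes every host process to the container
docker run --rm --pid=host busybox ps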
(From a security point of view, also consider that anyone who can run any docker command can trivially root the host, and there is a reasonably common misconfiguration to publish this ability to the network. You’d prefer to not need to install Docker if you’re interested in securing an existing system.)
I’d probably put this set of tools on a USB drive (if that’s allowed in your environment) and run them directly off of there. A tar file of tools would work as well, if that’s easier to transfer. Either building them into static binaries or providing a chroot environment would make them independent of what’s installed in the host environment, but also wouldn’t block you from observing all of the things you’re trying to observe.
Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands are issued (docker run,...) with a remote machine as the target? When controlling docker locally this does not concern me.
Running Docker swarm does not require this step either as the security between the nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps due to overlay network in a swarm being encrypted by default.
Basically, when my target machine will always be localhost there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run -it -v /:/host busybox sh
# vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
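On a typical installation the socket looks something like this (exact ownership and timestamps will vary):

$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jun  1 09:00 /var/run/docker.sock

Membership in that docker group is what grants the access, which is why it is effectively root-equivalent.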
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
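To make the socket-mounting case concrete: a container that has been handed the host's Docker socket can launch new containers directly on the host's daemon, host mounts included (the docker:cli image here is just a convenient client; any image containing a docker binary works):

docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
# and then, from the shell inside that container:
docker run --rm -v /:/host busybox ls /host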
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, while convenient, are actually quite insecure. A quick Web search for "Docker cryptojacking" will very quickly find you the consequences.
I am setting up a series of Linux command line challenges (for internal use/training), similar to those at OverTheWire.org's Bandit. From some reading I have done of their infrastructure, they setup things as such:
All ssh-based games on OverTheWire run in Docker containers. When you login with SSH to one of the games, a fresh Docker container is created just for you. Noone else is logged in into your container, nor are there any files from other players lying around. We opted for this setup to provide each player with a clean environment to experiment and learn in, which is automatically cleaned up when you log out.
This seems like an ideal solution, since everyone who logs in gets a completely clean environment (destroyed on logout) so that simultaneous players do not interfere with each other.
I am very new to Docker and understand it in principle, but am unsure about how to setup a similar system - particularly spawn new Docker instances on SSH login to a server and then destroy the instance on logout/disconnection.
I'd appreciate any advice on how to design/implement this kind of setup.
It seems to me there are two main goals here: first, understanding what Docker really does and how it works; second, the system that orchestrates the whole setup.
Let me give a brief introduction. Without going into detail, Docker is a platform that works like system virtualization: it lets you isolate a process, an operating system or a whole application without any kind of hypervisor. A container shares the kernel of the host system, and everything it contains is isolated from the host and from the rest of the containers.
So the basic thing you are looking for is a system that orchestrates containers, each running an SSH server with port 22 open. Although there are many ways to reach this goal, one of them is a Docker image that ships an sshd server, for example:
docker run -itd --rm rastasheep/ubuntu-sshd bash
Docker needs a foreground process to keep the container alive. By using -it you create an interactive session with the bash interpreter, which keeps the container alive and gives you a bash terminal inside an isolated Ubuntu environment; -d detaches it so it keeps running in the background.
--rm: removes the container once you exit from it.
rastasheep/ubuntu-sshd: the name of the Docker image.
As you can see, what is still missing is the piece that connects your application to the Docker platform. One approach is Python's Docker library (docker-py), which drives the Docker client programmatically. As a first step, I would recommend installing Docker on your own computer and trying to create a couple of Ubuntu containers running an SSH server, then connecting to them from your host. That will help you see whether you really need an sshd server at all and, if so, what network setup is required to route all the clients into their containers. Read the official Docker networking documentation.
With the example described above, a fresh terminal is started and there is no need to connect to the container over SSH at all. This way you won't need to route traffic, find free host ports to connect your host to the containers, or check for and shut down each container once its connection has finished (otherwise the container would stay alive).
There are many ways your system could be built, and I would strongly recommend starting by creating some containers with the docker tool to understand how it works.
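For the "fresh container per SSH login" part, one common sketch is to force every login of a dedicated account into a throwaway container via sshd's ForceCommand directive. The directives below are real sshd_config options, but the user name and image are placeholders, and the account still needs access to the Docker socket, which is root-equivalent, so treat it accordingly:

# in /etc/ssh/sshd_config on the host
Match User player
    ForceCommand docker run --rm -it --network none challenge-image /bin/bash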
I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running in a remote host, and a docker client on a different machine. I need my solution to not be client/server OS dependent, so that it would be relevant to any machine with a docker client/daemon etc.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
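Concretely, the configuration that document arrives at looks roughly like this (certificate file names follow the guide; paths are wherever you generated them):

# on the daemon host
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
# on the client
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=daemon-host:2376 version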
This method is rather clunky in my opinion, because sometimes it's a problem to copy files around and put them on each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but in this method the traffic is not encrypted and it is not really secure that way.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  # fetch the per-host TLS client certificates from Vault
  mkdir -p "$HOME/.docker/$1"
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # -r writes the raw PEM contents rather than a JSON-quoted string
    echo "$JSON" | jq -r --arg f "$f" '.data.data[$f]' > "$HOME/.docker/$1/$f"
  done
  chmod 600 "$HOME/.docker/$1/key.pem"
  # point the Docker CLI at the remote daemon over verified TLS
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
  export DOCKER_TLS_VERIFY=1
}
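Then something like dockerHost build01 (a hypothetical host name, which would need a matching secret/docker/build01 path in Vault) makes every subsequent docker command in that shell talk to that host over TLS:

dockerHost build01
docker ps    # now runs against tcp://build01:2376 with the fetched client certificates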
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the swarm functionality and require only single host support. Ansible only requires SSH access which you probably already have available.
It's very easy to manage an existing service that's defined in Docker Compose, or you can just invoke your shell scripts from Ansible. There's no need to expose the Docker daemon to the external world.
A very simple example file (playbook.yml)
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook
ansible-playbook -i username@mysshhost.com, playbook.yml
(The trailing comma tells Ansible to treat the host as an inline inventory list rather than an inventory file name.)
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host’s image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.