Alice and Bob are both members of the docker group on the same host. Alice wants to run some long-running calculations in a docker container, then copy the results to her home folder. Bob is very nosy, and Alice doesn't want him to be able to read the data that her calculation is using.
Is there anything that the system administrator can do to keep Bob out of Alice's docker containers?
Here's how I think Alice should get data in and out of her container, based on named volumes and the docker cp command, as described in this question and this one.
$ pwd
/home/alice
$ date > input1.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -v sandbox1:/data alpine echo OK
OK
$ docker cp input1.txt run1:/data/input1.txt
$ docker run --rm -v sandbox1:/data alpine sh -c "cp /data/input1.txt /data/output1.txt && date >> /data/output1.txt"
$ docker cp run1:/data/output1.txt output1.txt
$ cat output1.txt
Thu Oct 5 16:35:30 PDT 2017
Thu Oct 5 23:36:32 UTC 2017
$ docker container rm run1
run1
$ docker volume rm sandbox1
sandbox1
$
I create an input file, input1.txt and a named volume, sandbox1. Then I start a container named run1 just so I can copy files into the named volume. That container just prints an "OK" message and quits. I copy the input file, then run the main calculation. In this example, it copies the input to the output and adds a second timestamp to it.
After the calculation finishes, I copy the output file, then remove the container and the named volume.
Is there any way to stop Bob from loading his own container that mounts the named volume and shows him Alice's data? I've set up Docker to use a user namespace, so Alice and Bob don't have root access to the host, but I can't see how to make Alice and Bob use different user namespaces.
Alice and Bob have been granted virtual root access to the host by being in the docker group.
The docker group grants them access to the Docker API via a socket file. There is currently no facility in Docker to differentiate between users of the Docker API. The Docker daemon runs as root, and by virtue of what the Docker API allows, Alice and Bob will be able to work around any barriers you try to put in place.
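To make that concrete, nothing stops Bob from mounting Alice's named volume (from the question above) in a container of his own:
$ docker run --rm -v sandbox1:/data alpine cat /data/input1.txt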
User Namespaces
User namespace isolation stops a user inside a container from breaking out of it as a privileged or different user: in effect, the container process runs on the host as an unprivileged user.
An example would be:
Alice is given ssh access to container A running in namespace_a.
Bob is given ssh access to container B in namespace_b.
Because the users now exist only inside the containers, they can't modify each other's files on the host. Even if both containers mapped the same host volume, files without world read/write/execute permissions would be safe from the other user's container. Since neither has any control over the daemon, they can't do anything to break out.
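For reference, daemon-wide user-namespace remapping is enabled through the daemon configuration; a minimal sketch (this assumes the subordinate uid/gid ranges in /etc/subuid and /etc/subgid are in place):
$ cat /etc/docker/daemon.json
{
  "userns-remap": "default"
}
$ sudo systemctl restart docker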
Docker Daemon
The namespace doesn't secure the Docker daemon and API itself, which is still a privileged process. The first way around a user namespace is setting the host namespace on the command line:
docker run --privileged --userns=host busybox fdisk -l
The docker exec, docker cp and docker export commands will give someone with access to the Docker API the contents of any created containers.
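For example, reusing the container name from the question, Bob could do:
$ docker cp run1:/data/input1.txt ./copied.txt      # pull Alice's input file out
$ docker export run1 | tar -tvf - | head            # list everything in her container's filesystem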
Restricting Docker Access
It is possible to restrict access to the API, but only if users with shell access are kept out of the docker group.
One option is allowing a limited set of docker commands via sudo, or providing sudo access to scripts that hard-code the docker parameters:
#!/bin/sh
docker run --userns=whom image command
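A matching sudoers rule could then look something like this (the script path is a placeholder, not from the original):
alice ALL=(root) NOPASSWD: /usr/local/bin/run-calculation.sh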
For automated systems, access can be provided via an additional shim API with appropriate access controls in front of the Docker API that then passes on the "controlled" request to Docker. dockerode or docker-py can be easily plugged into a REST service and interface with Docker.
Related
I am working with Docker, and before I execute any command on the Docker CLI I need to switch to the root user using the command
sudo su - root
Can anyone please tell me why we need to switch to root user to perform any operation on Docker Engine?
You don't need to switch to root for docker CLI commands; it is common to add your user to the docker group instead:
sudo groupadd docker
sudo usermod -aG docker $USER
see: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
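After logging out and back in, a quick sanity check looks like this:
$ newgrp docker                  # or start a fresh login session to pick up the new group
$ docker run --rm hello-world    # should now work without sudo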
The reason the Docker daemon runs as root:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
Using docker commands, you can trivially get root-level access to any part of the host filesystem. The most basic example is
docker run --rm -v /:/host busybox cat /host/etc/shadow
which will get you a file of encrypted passwords that you can crack offline at your leisure; but if I wanted to actually take over the machine, I'd just write my own lines into /host/etc/passwd and /host/etc/shadow to create an alternate uid-0 user with no password, and go to town.
Docker doesn't really have any way to limit what docker commands you can run or what files or volumes you can mount. So if you can run any docker command at all, you have unrestricted root access to the host. Putting it behind sudo is appropriate.
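As a sketch of just how unrestricted that access is, any member of the docker group can get a root shell on the host's filesystem in one line:
docker run --rm -it -v /:/host busybox chroot /host /bin/sh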
The other important corollary to this is that using the dockerd -H option to make the Docker socket network-accessible is asking for your system to get remotely rooted. Google "Docker cryptojacking" for some more details and prominent real-life examples.
$ docker run --rm -it busybox
/ # who
<empty>
In the next session I try to attach to this docker container, expecting a second user to appear, but no luck again:
$ docker attach `docker container ls | grep busybox | cut -d" " -f1`
/ # who
<empty again>
So the question is: why is no login recorded, neither by the first run-and-attach nor by subsequent attaches? And why is there not even a single login in this container?
who reads the list of users from /var/run/utmp. On a regular Linux system, the login program prompts for the username and password and then starts the user's shell. It also updates /var/run/utmp with the new user.
The same thing happens for SSH and Telnet servers. They are expected to update /var/run/utmp.
In a Docker container, login is usually not executed. Docker isolates resources from the host system with Linux namespaces; it does not provide a complete Linux system. When you enter a Docker container, the given entrypoint or command is executed as PID 1.
Subsequent docker exec calls are handled in a similar way. Docker enters the namespace of the container and executes the given command.
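A quick way to convince yourself (treat the output as an assumption; it depends on the image):
$ docker run --rm busybox sh -c 'who; ls -l /var/run/utmp'
On a typical busybox image, who prints nothing and utmp is missing or empty, because no login, sshd, or telnetd process ever ran to record a session.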
EDIT: after some reading I see Alexander's answer as more to the point. A couple of useful links I read along the way:
https://docs.docker.com/engine/security/security/
https://lwn.net/Articles/531114
As far as I understand, the busybox Docker image is very basic and does not support all the functionality of a full-fledged Linux system. In "Here I thought I understood Docker until I saw the BusyBox docker image" there is a discussion of what that image is and what it is for.
The Problem:
Let's say you need to be able to create containers on your host from inside a container. Why?! Imagine your "continuous everything" process is automated in a Jenkins pipeline, and that process includes creating containers or services for testing. Even though containers and virtual machines enforce isolation from the host, this is a valid scenario.
The solution:
Sorry, Windows folks, did you expect this answer to cover Windows? Just a clue: you can enable tcp://localhost:2375. Coming back to a production-grade answer, follow these steps:
Spin up your instance, bind-mounting /var/run/docker.sock from your host into your container:
docker container run --name container -v /var/run/docker.sock:/var/run/docker.sock image
Like any file, docker.sock exposes its owning user id and group id; any user whose groups include "docker" is allowed to talk to the daemon through the client. So run the following script (as root inside the container):
#!/usr/bin/env bash
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
# Mirror the host's docker group inside the container so a non-root user can use the socket
if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})      # numeric gid that owns the socket on the host
    groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}   # (re)create a "docker" group with that gid
    usermod -aG ${DOCKER_GROUP} youruser             # add your user to it
fi
Don't freak out; this won't harm your system. If the socket file docker.sock exists (as it should), the script reads its group id, creates a group called docker inside the container, and gives that group the same group id as the docker group on the host. (Confused? Remember that we are inside the container that needs access to the host's Docker; we ran "docker container exec -it -u root container bash" to get into it.) Then the user called "youruser" is added to the "docker" group.
(Almost there!) Install the Docker client inside your container using your favorite package manager. I run the same version of client and server and it works like a charm; other version combinations may work, but matching them avoids surprises.
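For example, on a Debian/Ubuntu-based image something along these lines should work (the docker.io package name is an assumption; adjust it, or drop in a static client binary, for other distributions):
apt-get update && apt-get install -y docker.io   # installs the docker CLI (the bundled daemon goes unused)
docker version                                   # the client should now reach the host daemon via the mounted socket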
After following these steps, you will be able to run docker commands in the usual way. Just remember that from in there you can do anything, including shooting yourself in the foot!
I can view the list of running containers with docker ps or equivalently docker container ls (added in Docker 1.13). However, it doesn't display the user who launched each Docker container. How can I see which user launched a Docker container? Ideally I would like the list of running containers along with the user who launched each of them.
You can try this:
docker inspect $(docker ps -q) --format '{{.Config.User}} {{.Name}}'
Edit: Container name added to output
There's no built in way to do this.
You can check the user that the application inside the container is configured to run as by inspecting the container for the .Config.User field, and if it's blank the default is uid 0 (root). But this doesn't tell you who ran the docker command that started the container. User bob with access to docker can run a container as any uid (this is the docker run -u 1234 some-image option to run as uid 1234). Most images that haven't been hardened will default to running as root no matter the user that starts the container.
To understand why, realize that docker is a client/server app, and the server can receive connections in different ways. By default, this server runs as root, and users can submit requests with any configuration. These requests may come over a unix socket (you could sudo to root to connect to that socket), you could expose the API to the network (not recommended), or you may have another layer of tooling on top of docker (e.g. Kubernetes with the docker-shim). The big issue in that list is the difference between network requests and a unix socket: network requests don't tell you who is running the client on the remote host, and even if they did, you'd be trusting that remote client to provide accurate information. And since the API is documented, anyone with a curl command could submit a request claiming to be a different user.
In short, every user with access to the docker API is an anonymized root user on your host.
The closest you can get is to either place something in front of docker that authenticates users and populates something like a label, or to trust users to populate that label honestly (there's nothing in docker validating these settings).
$ docker run -l "user=$(id -u)" -d --rm --name test-label busybox tail -f /dev/null
...
$ docker container inspect test-label --format '{{ .Config.Labels.user }}'
1000
Beyond that, if you have a deployed container, sometimes you can infer the user by looking through the configuration and finding volume mappings back to that user's home directory. That gives you a strong likelihood, but again, not a guarantee since any user can set any volume.
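For example, something like this lists each container's mounts; a source path under a user's home directory is a strong hint:
docker container inspect $(docker ps -q) --format '{{ .Name }}{{ range .Mounts }} {{ .Source }}:{{ .Destination }}{{ end }}'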
I found a solution. It is not perfect, but it works for me.
I start all my containers with an environment variable ($CONTAINER_OWNER in my case) which includes the user. Then, I can list the containers with the environment variable.
Start container with environment variable
docker run -e CONTAINER_OWNER=$(whoami) MY_CONTAINER
Start docker compose with environment variable
echo "CONTAINER_OWNER=$(whoami)" > deployment.env # Create env file
docker-compose --env-file deployment.env up
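Note that --env-file only makes the variable available for substitution; the service definition still has to pass it into the container, roughly like this (service name assumed):
services:
  my_service:
    environment:
      CONTAINER_OWNER: ${CONTAINER_OWNER}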
List containers with the environment variable
for container_id in $(docker container ls -q); do
    # assumes bash is available inside each container
    echo $container_id $(docker exec $container_id bash -c 'echo "$CONTAINER_OWNER"')
done
As far as I know, docker inspect will show only the configuration the container started with. Because the entrypoint (or any init script) might change the user, those changes will not be reflected in the docker inspect output.
To work around this, you can overwrite the default entrypoint set by the image with --entrypoint="" and specify a command like whoami or id after it.
You asked specifically to see all running containers and the user who launched them, so this solution is only partial: it gives you the in-container user when it doesn't appear in the docker inspect output:
docker run --entrypoint "" <image-name> whoami
Maybe somebody will proceed from this point to a full solution (:
Read more about --entrypoint "" here.
If you are used to the ps command, run ps on the Docker host and grep for part of the process your container is running. For example, if you have a Tomcat container running, you might run the following command to see which user started its process:
ps aux | grep tomcat
This is possible because containers are nothing but processes managed by docker. However, this only works on a single host. Docker provides alternatives for getting container details, as mentioned in the other answers.
This command will print the uid and gid of the user the container's main process runs as:
docker exec <CONTAINER_ID> id
ps aux | less
Find the process's name (the one running inside the container) in the list (last column) and you will see the user who ran it in the first column.
I am launching a Jenkins docker container for CI work, and the host OS I am using is CoreOS. Inside the Jenkins container I also installed the docker CLI in order to run builds in docker containers on the host system. To do that, I use the configuration below to mount the Docker socket into the Jenkins container:
volumes:
- /jenkins/data:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock:rw
When I launch the container and run a docker command, I get the error below:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.29/containers/json: dial unix /var/run/docker.sock: connect: permission denied
/var/run is owned by root, but my user is jenkins. How can I solve the permission issue so that the jenkins user can use docker commands through the mapped socket?
I have tried the command below, but the container doesn't allow me to run sudo:
$ sudo usermod -a -G docker jenkins
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified
There's nothing magical about permissions in Docker: they work just like permissions outside of Docker. That is, if you want a user to have access to a file (like /var/run/docker.sock), then either that file needs to be owned by the user, or they need to be a member of the appropriate group, or the permissions on the file need to permit access to anybody.
Exposing /var/run/docker.sock to a non-root user is a little tricky, because typical solutions (just chown/chmod things from inside the container) will potentially break things on your host.
I suspect the best solution may be:
Ensure that /var/run/docker.sock on your host is group-writable (e.g., create a docker group on your host and make sure that users in that group can use Docker).
Pass the numeric group id of your docker group into the container as an environment variable.
Have an ENTRYPOINT script in your container that runs as root and (a) creates a group with the matching numeric gid, (b) adds the jenkins user to that group, and then (c) execs your container's CMD as the jenkins user.
So, your entrypoint script might look something like this (assuming that you have passed in a value for $DOCKER_GROUP_ID in your docker-compose.yml):
#!/bin/sh
groupadd -g "$DOCKER_GROUP_ID" docker   # create a docker group matching the host's gid
usermod -a -G docker jenkins            # let the jenkins user reach the mounted socket
exec runuser -u jenkins -- "$@"         # drop privileges and run the container's CMD
You would need to copy this into your image and add the appropriate ENTRYPOINT directive to your Dockerfile.
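The Dockerfile additions might look something like this (paths are assumptions):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
On the compose side you would pass the gid through, e.g. export DOCKER_GROUP_ID=$(stat -c '%g' /var/run/docker.sock) on the host and set DOCKER_GROUP_ID: "${DOCKER_GROUP_ID}" under the service's environment.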
You may not have the runuser command. You can accomplish something similar using sudo or su or other similar commands.