From a security perspective, I am trying to understand the difference between:
1. Running the image itself as a non-root user, via a USER directive in the Dockerfile
2. Running the image as root and running the service alone as a non-root user
In the second option, there would be a startup script that runs as root and starts the service as a non-root user.
Are these two equivalent? Is the second option vulnerable, given that the startup script runs as root? Does it matter whether or not the script exits after starting the service?
Excuse me if this question has already been asked and answered.
The really important thing is that, once your service is up and running, it's not running as root. (This is somewhat less important in Docker than outside it, but "don't be root" is still considered a best practice.) Both options you propose are valid approaches.
Your second option, "start up as root and then drop privileges", isn't common in Docker, but it matches in spirit what most Unix daemons do. The official HashiCorp Consul image is the one image I know of that actually does it. In particular, it expects to start up with a data directory mounted, so it runs chown -R consul on the data directory as root before the daemon proper starts. I'd expect this to be a pretty typical use of the pattern.
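For illustration, here's a minimal sketch of such an entrypoint, assuming gosu is installed in the image; appuser and /data are placeholder names, and this is not the actual Consul script:

#!/bin/sh
set -e
# Still root here: fix ownership of the bind-mounted data directory.
chown -R appuser:appuser /data
# Drop privileges and replace this shell with the real service.
exec gosu appuser "$@"

The exec matters: it makes the service replace the shell as the container's main process, so it receives signals directly.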
If you don’t need to do this sort of pre-launch setup, specifying some arbitrary non-root USER at the end of your Dockerfile is mechanically easier and checks the same “don’t be root” box.
There are many SO questions around trying to run a container as the current host user, or trying to run a tool packaged in Docker against the host filesystem. This is awkward, since a key design goal of Docker is to isolate the containers from these host details. If you need to choose the user the container process is running as, you want the standard docker run -u option and would need the first option.
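For example, to run the container process as the invoking host user (some-image and some-tool are placeholders):

docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/work some-image some-tool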
Related
During my studies, I came across the fact that Docker containers support neither sudo nor systemd services. Not that I need these tools, but I'm curious about the topic and couldn't find adequate reasoning.
Docker is aimed at being minimal, since there can be many, many containers running at the same time. The idea is to reduce memory and disk usage. Since containers already run as root to begin with unless otherwise specified, there's no need to have sudo. Also, since most containers only ever run one process, there's no need for a service manager like systemd. Even if they did need to run more than one process, there are smaller programs like supervisord.
sudo is unnecessary in Docker. A container generally runs a single process, and if you intend it to run as a non-root user, you generally don't want it to be able to become root arbitrarily. In a Dockerfile, you can use USER to switch users as many times as you'd like; outside of Docker, you can use docker run -u root or docker exec -u root to get a root shell no matter how the container is configured.
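As a sketch of that pattern, a Dockerfile can do all of its privileged setup as root and switch to a non-root user only at the end, so sudo never enters the picture (the image and package choices here are only illustrative):

FROM debian:bookworm
# Root by default: install packages and create an unprivileged user.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/* \
 && useradd -r appuser
# Everything from here on runs as the unprivileged user.
USER appuser
CMD ["id"]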
Mechanically, sudo is a poor fit for non-interactive environments (in particular, it's prone to prompting for a user password), and users in Docker aren't usually configured with passwords at all. The most common recipe I see involves echo plain-text-password | passwd user, in a file committed to source control and easily retrieved via docker history; this is not good security practice.
systemd is unnecessary in Docker. A container generally runs a single process, so you don't need a process manager. Running systemd instead of the process you're trying to run also means you don't get anything useful from docker logs, can't use Docker restart policies effectively, and generally miss out on the core Docker ecosystem.
systemd also runs against the Unix philosophy of "make each program do one thing well". If you look at the set of things listed out on the systemd home page it sets up a ton of stuff; much of that is system-level things that belong to the host (swap, filesystem mounts, kernel parameters) and other things that you can't run in Docker (console getty processes). This also means you usually can't run systemd in a container without it being --privileged, which in turn means it can interfere with this system-level configuration.
There are some good technical reasons to run a dedicated init process in Docker, but a lightweight single-process init like tini is a better choice.
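As a hedged sketch of that approach, on an Alpine base (my-service stands in for your real binary):

FROM alpine:3.19
RUN apk add --no-cache tini
# tini runs as PID 1, reaps zombies, and forwards signals to its child.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["my-service"]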
Besides what @Aplet123 mentioned, consider that containers don't have root access to the host and can't even see other processes on the system (unless run with the --pid=host option), so even if every process within the container runs as root, the harm it can do to the host is limited. So there's little need to further limit that already-limited environment with non-root users. And when there is only one user, there's no need for sudo.
Also, starting and stopping containers as services can be done by Docker itself, so the Docker daemon (which is itself typically started via systemd) is in effect the master "systemd" for all containers. So there's no need for systemd inside the container either, for example when you want to start your Apache HTTP server.
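For example, rather than writing a systemd unit for Apache, you can let the Docker daemon supervise the container (image tag illustrative):

docker run -d --name web --restart unless-stopped httpd:2.4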
Due to reasons, we have two networks. On network A, the USER who should execute the process in the container might be usera. On network B, the User might be userb. The uid/gid of the users must match ldap definitions, and these are well defined. There are persistent files written by the process to bind mounted directories on the SAN (and clearly a different SAN on each network), so the process owner is important.
If there were only a single USER, I would do the following:
FROM <base image>
RUN groupadd -g 999 usera && useradd -u 999 -g 999 usera
USER usera
CMD ["process", "'params"]
Then the running process would be owned by usera, and all would be well.
However, it would be nice if it were possible to build a single container, but at the point of container startup have the user be set via some parameter.
I suspect it might be possible by having an ENTRYPOINT added to the Dockerfile, and then perhaps sending values via the docker run -e USER=[usera|userb], but I am just coming up to speed with Docker, so I'm not sure exactly how that would work.
I've looked at processes in containers should not run as root, which gave some suggestions. Also, we absolutely cannot have the container run as root. I also looked at Docker Replicate UID/GID in container, which provided a hint on possibly sending values via -e, but the admonishment about the id mismatch on the build system and running system doesn't apply.
How may I achieve this different user owning a process, possibly by passing in a value (though, if I can have a sophisticated enough script, I can detect what network the container is running on, and I could potentially set some variable automatically)?
Edit: due to auditing and review requirements, it would be cleaner to enforce that a user is explicitly set (and fail to start if one is not provided) rather than rely on, e.g., the --user parameter to docker run. Nonetheless, if --user is the only or best approach, then so be it.
You have two options:
Run a script as root (this will be the entrypoint), pass the UID/GID as environment variables, use usermod/groupmod to change the user and group IDs, then exec the real process as the new user; a sketch of this pattern follows after these options. Check the gogs/gogs image for an example of a container where the UID/GID can be customized.
Use the --user switch on the docker run command so the process starts with the correct UID/GID. You don't need to create a user in the Dockerfile with this option, as the UID will be overridden by the one from the command line.
The problem with the second approach is that you must prepare the filesystem permissions beforehand, as you cannot chown/chmod once the process has started.
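To make the first option concrete, here is a rough sketch of such an entrypoint. It is not taken from gogs/gogs; it assumes gosu is installed in the image, PUID/PGID and /data are illustrative names, and it fails fast when no identity is provided, which matches the auditing requirement from the question:

#!/bin/sh
set -e
# Refuse to start unless the caller provides an identity.
: "${PUID:?PUID must be set}"
: "${PGID:?PGID must be set}"
# Remap the user baked into the image to the requested UID/GID.
groupmod -o -g "$PGID" usera
usermod -o -u "$PUID" usera
# Fix ownership of the bind-mounted SAN directory, then drop root.
chown -R usera:usera /data
exec gosu usera process params

It would then be invoked with something like docker run -e PUID=999 -e PGID=999 ... on each network.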
I currently run the official Tensorflow Docker Container (GPU) with Nvidia-Docker:
https://hub.docker.com/r/tensorflow/tensorflow/
https://gcr.io/tensorflow/tensorflow/
However, I can't find a way to set a default user for the container. The default user for this container is "root", which is dangerous in terms of security and problematic because it gives root access to the shared folders.
Let's say my host machine runs with the user "CNNareCute"; is there any way to launch my containers as the same user?
Docker containers run as root by default. You can override the user by passing --user <user> to the docker run command. Note, however, that this might be problematic if the container process needs root access inside the container.
The security concern you mention is handled in Docker using user namespaces. User namespaces basically map users in the container to a different pool of users on the host. Thus you can map the root user inside the container to an ordinary user on the host, and the security concern should be mitigated.
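For example, with remapping enabled, an /etc/subuid entry such as the following (dockremap is the user Docker creates for the default mapping) maps container uid 0 to host uid 100000, container uid 1 to 100001, and so on across 65536 uids:

dockremap:100000:65536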
AFAIK, Docker images run as root by default. This means that any Dockerfile using the image as a base doesn't have to jump through hoops to modify it. You can carry out user modification in a Dockerfile the same way you would on any other Linux box, which gives you the configuration you need.
You won't be able to use your host's users in containers dynamically without first creating them in the container, and even then they will in effect be separate users that happen to share a name.
You can run commands in, and ssh into, containers as a specific user, provided it exists in the container. For example, a PHP application that needs commands run with www-data privileges would be run as follows:
docker exec --user www-data application_container_1 sh -c "php something"
So in short, you can set up whatever users you like and use them to run scripts, but the default will be root, and root will exist unless you remove it, which may also have repercussions...
When I run a container as a normal user I can map and modify directories owned by root on my host filesystem. This seems to be a big security hole. For example I can do the following:
$ docker run -it --rm -v /bin:/tmp/a debian
root@14da9657acc7:/# cd /tmp/a
root@f2547c755c14:/tmp/a# mv df df.orig
root@f2547c755c14:/tmp/a# cp ls df
root@f2547c755c14:/tmp/a# exit
Now my host filesystem will execute the ls command when df is typed (mostly harmless example). I cannot believe that this is the desired behavior, but it is happening in my system (debian stretch). The docker command has normal permissions (755, not setuid).
What am I missing?
Maybe it is good to clarify a bit more. I am not at the moment interested in what the container itself does or can do, nor am I concerned with the root access inside the container.
Rather I notice that anyone on my system that can run a docker container can use it to gain root access to my host system and read/write as root whatever they want: effectively giving all users root access. That is obviously not what I want. How to prevent this?
There are many Docker security features available to help with Docker security issues. The specific one that will help you is User Namespaces.
Basically you need to enable User Namespaces on the host machine with the Docker daemon stopped beforehand:
dockerd --userns-remap=default &
Note that this will forbid containers from running in privileged mode (a good thing from a security standpoint). The command starts the Docker daemon, which is why the daemon must be stopped before running it. When you then enter a Docker container, you can restrict it to the current non-privileged user:
docker run -it --rm -v /bin:/tmp/a --user UID:GID debian
Regardless, afterwards try entering the Docker container with your original command:
docker run -it --rm -v /bin:/tmp/a debian
If you attempt to manipulate the host filesystem that was mapped into the container (in this case /bin), where files and directories are owned by root, you will receive a Permission denied error. This demonstrates that user namespaces provide the security functionality you are looking for.
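If you want the remapping to persist rather than launching dockerd by hand, one common approach is to set it in the daemon configuration and restart the service (paths assume a typical systemd-based host):

cat >/etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
systemctl restart docker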
I recommend going through the Docker lab on this security feature at https://github.com/docker/labs/tree/master/security/userns. I have done all of the labs and opened issues and PRs to ensure their integrity, and can vouch for them.
Access to run docker commands on a host is access to root on that host. This is the design of the tool since the functionality to mount filesystems and isolate an application requires root capabilities on linux. The security vulnerability here is any sysadmin that grants access to users to run docker commands that they wouldn't otherwise trust with root access on that host. Adding users to the docker group should therefore be done with care.
I still see Docker as a security improvement when used correctly, since applications run inside a container are restricted from what they can do to the host. The ability to cause damage is given with explicit options to running the container, like mounting the root filesystem as a rw volume, direct access to devices, or adding capabilities to root that permit escaping the namespace. Barring the explicit creation of those security holes, an application run inside a container has much less access than it would if it was run outside of the container.
If you still want to try locking down users with access to docker, there are some additional security features. User namespacing is one of those which prevents root inside of the container from having root access on the host. There's also interlock which allows you to limit the commands available per user.
You're missing that containers run as uid 0 internally by default, so this is expected. If you want to restrict permissions further inside the container, build the image with a USER statement in the Dockerfile. This switches to the named user at runtime instead of running as root.
Note that the uid of this user is not necessarily predictable, as it is assigned inside the image you build and won't necessarily map to anything on the outside system. However, the point is, it won't be root.
Refer to Dockerfile reference for more information.
Docker has more or less always had a USER command to run a process as a specific user, but in general a lot of things had to run as root.
I have seen a lot of images that use an ENTRYPOINT with gosu to de-elevate the process to run.
I'm still a bit confused about the need for gosu. Shouldn't USER be enough?
I know quite a bit has changed in terms of security with Docker 1.10, but I'm still not clear about the recommended way to run a process in a docker container.
Can someone explain when I would use gosu vs. USER?
Thanks
EDIT:
The Docker best practice guide is not very clear: it says that if the process can run without privileges, use USER, and that if you need sudo, you might want to use gosu.
That is confusing, because one can install all sorts of things as root in the Dockerfile, then create a user and give it the proper privileges, then finally switch to that user and run the CMD as that user.
So why would we need sudo or gosu then?
Dockerfiles are for creating images. I see gosu as more useful as part of container initialization, once you can no longer change users between RUN commands in your Dockerfile.
After the image is created, something like gosu allows you to drop root permissions at the end of your entrypoint inside a container. You may initially need root access for some initialization steps (fixing UIDs, host-mounted volume permissions, etc.). Once initialized, you run the final service without root privileges and as PID 1 so it handles signals cleanly.
Edit:
Here's a simple example of using gosu in an image for docker and jenkins: https://github.com/bmitch3020/jenkins-docker
The entrypoint.sh looks up the gid of the /var/lib/docker.sock file and updates the gid of the docker user inside the container to match. This allows the image to be ported to other Docker hosts where the gid may differ. Changing the group requires root access inside the container. Had I used USER jenkins in the Dockerfile, I would have been stuck with the gid of the docker group as defined in the image, which wouldn't work if it didn't match that of the Docker host the container runs on. But root access can be dropped when running the app, which is where gosu comes in.
At the end of the script, the exec call prevents the shell from forking gosu as a child; instead, gosu replaces the shell as PID 1. gosu in turn does the same, switching the uid and then exec'ing the Jenkins process so that it takes over as PID 1. This allows signals to be handled correctly; they would otherwise be ignored by a shell running as PID 1.
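As a rough sketch of that entrypoint pattern (simplified, not the exact script from the linked repo, and assuming the socket is mounted at /var/run/docker.sock):

#!/bin/sh
set -e
# Match the container's docker group to the gid that owns the socket.
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
groupmod -g "$SOCK_GID" docker
# Drop root and hand PID 1 over to the real process.
exec gosu jenkins "$@"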
I am using gosu and entrypoint.sh because I want the user in the container to have the same UID as the user that created the container.
Docker Volumes and Permissions.
The purpose of the container I am creating is for development. I need to build for Linux, but I still want all the convenience of local (OS X) editing, tools, etc. Keeping the UIDs the same inside and outside the container keeps file ownership a lot more sane and prevents some errors (container user cannot edit files in a mounted volume, etc.).
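For instance, following the same remap-at-startup pattern sketched earlier (PUID/PGID and dev-image are illustrative names):

docker run -it -e PUID="$(id -u)" -e PGID="$(id -g)" -v "$PWD":/src dev-image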
Another advantage of using gosu is signal handling. You may, for instance, handle SIGHUP to reload the process, as you would normally achieve via systemctl reload <process> or the like.
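Because the final process ends up as PID 1 thanks to the exec/gosu chain, a reload signal sent to the container reaches it directly, for example:

docker kill --signal=HUP mycontainer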