Single container definition with different users - docker

Due to reasons, we have two networks. On network A, the user who should execute the process in the container might be usera. On network B, the user might be userb. The uid/gid of each user must match LDAP definitions, and these are well defined. There are persistent files written by the process to bind-mounted directories on the SAN (a different SAN on each network, of course), so the process owner is important.
If there were only a single USER, I would do the following:
FROM <base image>
RUN groupadd -g 999 usera && useradd -u 999 -g 999 usera
USER usera
CMD ["process", "params"]
Then the running process would be owned by usera, and all would be well.
However, it would be nice if it were possible to build a single container, but at the point of container startup have the user be set via some parameter.
I suspect it might be possible by having an ENTRYPOINT added to the Dockerfile, and then perhaps sending values via the docker run -e USER=[usera|userb], but I am just coming up to speed with Docker, so I'm not sure exactly how that would work.
I've looked at processes in containers should not run as root, which gave some suggestions. Also, we absolutely cannot have the container run as root. I also looked at Docker Replicate UID/GID in container, which provided a hint on possibly sending values via -e, but the admonishment about the id mismatch on the build system and running system doesn't apply.
How may I achieve this different user owning a process, possibly by passing in a value (though, if I can have a sophisticated enough script, I can detect what network the container is running on, and I could potentially set some variable automatically)?
Edit: due to auditing and review requirements, it would be cleaner if the container could enforce that a user is set (and fail to start if one were not provided), rather than relying on, e.g., the --user parameter to docker run. Nonetheless, if --user is the only/best approach, then so be it.

You have two options:
Run a script as root (this will be the entrypoint), pass the UID/GID as environment variables, use usermod/groupmod to change the user/group ids, then exec the real process as the new user (a sketch follows below). Check the gogs/gogs image for an example of a container where the UID/GID can be customized.
Use the --user switch on the docker run command so the process starts with the correct UID/GID. You don't need to create a user in the Dockerfile with this option, as the UID will be overridden by the one from the command line.
The problem with the second approach is that you must prepare the filesystem permissions beforehand, since the non-root process cannot chown/chmod them once it has started.
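For the first option, here is a minimal sketch of such an entrypoint, assuming the image creates a placeholder user appuser at build time, the ids arrive via docker run -e APP_UID=... -e APP_GID=..., the persistent state lives under a bind-mounted /data, and gosu is installed in the image (su-exec works the same way; all names here are illustrative):
#!/bin/sh
# entrypoint.sh - hypothetical names; adjust to your image
set -e
# Refuse to start if no identity was provided (the auditing requirement above).
if [ -z "$APP_UID" ] || [ -z "$APP_GID" ]; then
    echo "APP_UID and APP_GID must be set" >&2
    exit 1
fi
# Rewrite the placeholder user/group to the ids for this network.
groupmod -g "$APP_GID" appuser
usermod -u "$APP_UID" appuser
chown -R appuser:appuser /data
# Drop root and replace this shell with the real process as PID 1.
exec gosu appuser "$@"
The Dockerfile would then end with ENTRYPOINT ["/entrypoint.sh"] and CMD ["process", "params"], so the CMD arrives in the script as "$@".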

Related

Root User Docker

I understand that it's considered a bad security practice to run Docker images as root, but I have a specific situation that I wanted to pass by the community to see if anyone can help.
We are currently using a pipeline on an Amazon Linux 2 instance with a single user called ec2-user. Unfortunately, a lot of the scripts we're using for our pipeline have hard-coded paths baked in (notably /home/ec2-user/) ... which may or may not reference the $HOME variable.
I've been talking to one of the engineers who is building a Docker image for our pipeline and suggested that he create a new user entirely so the root user isn't running our pipeline.
For example:
# add clip user
RUN groupadd -r clip && useradd -r -g clip clip
# disable root
RUN chsh -s /usr/sbin/nologin root
# set environment variables
ENV HOME /home/clip
ENV DEBIAN_FRONTEND noninteractive
However, the engineer mentioned that the clip user inside the container will have some uid that may or may not exist in the host machine. For example, if the clip user had uid 1001 in the container, but 1001 was john in the host, all the files created as the clip user inside the container would be owned by john on the outside.
Further, he is more concerned about the situation where the clip user has a uid in the container that doesn’t exist in the host’s passwd. In that case files created by the clip user in the container would be owned by a bare unassociated uid on the host.
If we decided to pass in ids from the host as the user/group to run the image, the kernel would be fine with it (it's the same kernel as the host), and when all is said and done, files created inside the container would be owned by the user/group you pass in. However, the container wouldn't know who that user/group are, so it would just use the raw ids, and stuff like $HOME or whoami won't work.
With that said, we're curious if anyone else has experienced these problems and if anyone has found solutions?
Everything you say is totally normal. The container has its own /etc/passwd file, and so a given numeric user ID might map to different user names (or to none at all) in the host and in the container. Beyond some cosmetic issues around debug shells, it doesn't usually matter whether the current numeric uid is actually present in the container's /etc/passwd, and there's no reason a container uid would need to be mapped in the host's /etc/passwd.
Note that there are a couple of ways to directly assume another user ID in Docker, either using the docker run -u option or the Dockerfile USER directive. The RUN chsh command you propose doesn't really do anything and doesn't prevent becoming root inside a container.
clip user inside the container will have some uid that may or may not exist in the host machine.
True, totally normal.
For example, if the clip user had uid 1001 in the container, but 1001 was john in the host, all the files created as the clip user inside the container would be owned by john on the outside.
This is partially true, but only in the case where you've explicitly mapped a host directory into the container with a docker run -v option. Otherwise, the host user with uid 1001 won't be able to navigate to the /var/lib/docker/... directory that actually contains the container files, so it doesn't matter that they could hypothetically write them.
The more usual case around this is to explicitly supply a host uid so that the container process can save its state in a mapped host directory. Pass a numeric uid to the docker run -u option; there's no particular need for that uid to exist in the container's /etc/passwd.
docker run \
  -u $(id -u) \
  -v "$PWD/data:/data" \
  ...
the container wouldn’t know who that user/group are, so it’ll just use the raw ids, and stuff like $HOME or whoami won’t work.
Unless your application explicitly calls these things, they won't usually matter. "Home directory" is a pretty poorly defined concept in a Docker container since it's usually a wrapper around a single process.

Running docker image as root user and service as non root user

From a security perspective, I am trying to understand the difference between
Running the image itself as a non-root user with the USER directive in the Dockerfile
Running the image as root and running the service alone as a non-root user
In the second option, there will be a startup script which runs as root and starts the service as a non-root user.
Are these two equivalent? Is the second option vulnerable, considering the startup script runs as root? Does it matter whether it exits (or doesn't) after starting the service?
Excuse me if this question is already asked and answered.
The really important thing is that, once your service is up and running, it's not running as root. (This is a little less important in Docker than elsewhere, but "don't be root" is still considered a best practice.) Both options you propose are valid approaches.
Your second option, "start up as root and then drop privileges", isn't common in Docker, but it matches in spirit what most Unix daemons do. The official HashiCorp Consul image is the one example I know of that actually does it. In particular, it expects to start up with some data directory mounted, so while still root it runs chown -R consul on the data directory before the daemon proper starts. I'd expect this to be a pretty typical use of this pattern.
If you don’t need to do this sort of pre-launch setup, specifying some arbitrary non-root USER at the end of your Dockerfile is mechanically easier and checks the same “don’t be root” box.
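A minimal sketch of that simpler pattern (base image, paths, and names are placeholders):
FROM debian:bookworm-slim
RUN groupadd -r app && useradd -r -g app app
COPY --chown=app:app . /app
USER app
CMD ["/app/run"]
Everything before the USER line still runs as root, so package installation and permission setup are unaffected; only the final process runs unprivileged.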
There are many SO questions about trying to run a container as the current host user, or trying to run a tool packaged in Docker against the host filesystem. This is awkward, since a key design goal of Docker is to isolate containers from these host details. If you need to choose the user the container process runs as, you want the standard docker run -u option, which means the entrypoint no longer starts as root, so you would need the first option.

Understanding Docker user/uid creation

Even after going through a lot of material and SO answers, I'm still not clear on Docker uid/user usage and implementation.
I understand the below points:
An instance of an image is called a container.
uid/gid is maintained by the underlying kernel, not by the container.
The kernel understands uid/gid numbers, not usernames/group names; a name is just a human-readable alias.
All containers are processes maintained by the Docker daemon and are visible as processes on the host machine (ps -ef).
root (uid 0) is the default user within a container; this can be changed either by the USER instruction in the Dockerfile or by passing the -u flag to docker run.
With all of the above said, when I have the below command in my Dockerfile, I presume that a new user (my-user) will be created with an incremented uid.
RUN addgroup my-group && adduser -D my-user -G my-group
What happens if I run the same image multiple times, i.e. multiple containers? Will the same uid be assigned to all processes?
What happens if I add the same command above in another image and run that image as a container? Will I get a new uid or the same uid as the previous one?
How does the uid increment happen in the container in relation to the host machine?
Any pointers would be helpful.
Absent user namespace remapping, there are only two things that matter:
What the numeric user ID is; and
What's in the /etc/passwd file.
Remember that each container and the host have separate filesystems, so each of these things could have separate /etc/passwd files.
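A quick way to see this, assuming an image built from the question's Dockerfile (the image name is a placeholder):
docker run --rm my-image id my-user
This resolves my-user against the container's /etc/passwd; on the host, the same name likely doesn't exist at all.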
What happens if I run the same image multiple times, i.e. multiple containers? Will the same uid be assigned to all processes?
Yes, because each container gets a copy of the same /etc/passwd file from the image.
What happens if I add the same command above in another image and run that image as a container? Will I get a new uid or the same uid as the previous one?
It depends on what adduser actually does; it could be the same or different.
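If you need the uid to be deterministic across images, don't rely on adduser's auto-increment; pin it explicitly (a sketch, with 1001 as an arbitrary example, using the same BusyBox-style flags as the question):
RUN addgroup -g 1001 my-group && adduser -D -u 1001 -G my-group my-user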
How does the uid increment happen in the container in relation to the host machine?
They're completely and totally independent.
Also remember that you can docker push/docker pull a built image to run it on a different host. That will bring the image's /etc/passwd file along with it, but the host environment could be totally different. Correspondingly, it's not a best practice to try to match some specific host's uid mapping in a Dockerfile, because it will be wrong if you try to run the same image anywhere else.
When you add a user in a RUN statement, it does not create a user on the host. If you do not specify a user with the USER statement in your Dockerfile or the -u flag when starting the container (assuming the parent Dockerfiles also do not include a USER statement), the container process on the host will simply run as the root user, provided you started the Docker daemon as root.
So if you create a user with RUN addgroup my-group && adduser -D my-user -G my-group, it simply creates a user inside the container, i.e. the user is local to the container. Each instance (container) of that image you run will have the same uid for that user inside the container. Note: that user will not exist on the host.
If you want to run the container process on the host as another user (one which exists on the host), then you have 3 options:
Add a USER statement in the Dockerfile
Use the -u flag while running the container
Use Docker's user namespace feature
I highly recommend understanding the user namespace and mappings by reading this documentation: Isolate containers with a user namespace
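For reference, remapping is configured on the daemon side, not per image. A minimal sketch of /etc/docker/daemon.json, where "default" tells dockerd to create and use a dockremap user (an explicit user:group pair can be given instead):
{
  "userns-remap": "default"
}
After restarting the daemon, root inside a container maps to an unprivileged subordinate uid range on the host.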

How to run official Tensorflow Docker Image as a non root user?

I currently run the official Tensorflow Docker Container (GPU) with Nvidia-Docker:
https://hub.docker.com/r/tensorflow/tensorflow/
https://gcr.io/tensorflow/tensorflow/
However, I can't find a way to set a default user for the container. The default user for this container is "root", which is dangerous in terms of security and problematic because it gives root access to the shared folders.
Let's say my host machine runs with the user "CNNareCute"; is there any way to launch my containers as the same user?
Docker containers run as root by default. You can override the user by passing --user <user> to the docker run command. Note, however, that this might be problematic if the container process needs root access inside the container.
The security concern you mention is handled in Docker using user namespaces. User namespaces basically map users in the container to a different pool of users on the host. Thus you can map the root user inside the container to a normal user on the host, and the security concern should be mitigated.
AFAIK, Docker images run as root by default. This means that any Dockerfile using the image as a base doesn't have to jump through hoops to modify it. You could carry out user modification in a Dockerfile the same way you would on any other Linux box, which would give you the configuration you need.
You won't be able to use users (dynamically) from your host in the containers without creating them in the container first, and they will in effect be separate users of the same name.
You can run commands and ssh into containers as a specific user, provided it exists in the container. For example, a PHP application needing commands run with www-data privileges would be run as follows:
docker exec --user www-data application_container_1 sh -c "php something"
So in short, you can set up whatever users you like and use them to run scripts, but the default will be root, and it will exist unless you remove it, which may also have repercussions...

Docker using gosu vs USER

Docker kind of always had a USER command to run a process as a specific user, but in general a lot of things had to run as ROOT.
I have seen a lot of images that use an ENTRYPOINT with gosu to de-elevate the process to run.
I'm still a bit confused about the need for gosu. Shouldn't USER be enough?
I know quite a bit has changed in terms of security with Docker 1.10, but I'm still not clear about the recommended way to run a process in a docker container.
Can someone explain when I would use gosu vs. USER?
Thanks
EDIT:
The Docker best practice guide is not very clear: it says if the process can run without privileges, use USER; if you need sudo, you might want to use gosu.
That is confusing because one can install all sorts of things as ROOT in the Dockerfile, then create a user and give it proper privileges, then finally switch to that user and run the CMD as that user.
So why would we need sudo or gosu then?
Dockerfiles are for creating images. I see gosu as more useful as part of container initialization, when you can no longer change users between RUN commands in your Dockerfile.
After the image is created, something like gosu allows you to drop root permissions at the end of your entrypoint inside a container. You may initially need root access to do some initialization steps (fixing uids, host-mounted volume permissions, etc.). Then, once initialized, you run the final service without root privileges and as pid 1 to handle signals cleanly.
Edit:
Here's a simple example of using gosu in an image for docker and jenkins: https://github.com/bmitch3020/jenkins-docker
The entrypoint.sh looks up the gid of the /var/lib/docker.sock file and updates the gid of the docker group inside the container to match. This allows the image to be ported to other docker hosts where the gid on the host may differ. Changing the group requires root access inside the container. Had I used USER jenkins in the Dockerfile, I would be stuck with the gid of the docker group as defined in the image, which wouldn't work if it doesn't match that of the docker host it's running on. But root access can be dropped when running the app, which is where gosu comes in.
At the end of the script, the exec call prevents the shell from forking gosu, and instead it replaces pid 1 with that process. Gosu in turn does the same, switching the uid and then exec'ing the jenkins process so that it takes over as pid 1. This allows signals to be handled correctly which would otherwise be ignored by a shell as pid 1.
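A condensed sketch of that entrypoint pattern (names are illustrative; use whatever socket path the host mounts, /var/lib/docker.sock in the image above):
#!/bin/sh
set -e
# Align the container's docker group with the gid that owns the mounted socket.
SOCK_GID=$(stat -c '%g' /var/lib/docker.sock)
groupmod -g "$SOCK_GID" docker
# Drop root and hand PID 1 straight to the jenkins process for clean signal handling.
exec gosu jenkins "$@"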
I am using gosu and entrypoint.sh because I want the user in the container to have the same UID as the user that created the container.
Docker Volumes and Permissions.
The purpose of the container I am creating is development. I need to build for Linux, but I still want all the convenience of local (OS X) editing, tools, etc. By keeping the UIDs the same inside and outside the container, file ownership stays a lot more sane and some errors are prevented (container user cannot edit files in the mounted volume, etc.).
Another advantage of using gosu is signal handling. You may, for instance, trap SIGHUP to reload the process, as you would normally achieve via systemctl reload <process> or the like.
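For example, once exec and gosu have made the service PID 1, a reload can be triggered from the host (the container name is a placeholder), and the signal is delivered directly to the service rather than being swallowed by a wrapper shell:
docker kill --signal=HUP mycontainer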
