Switching Between Root and Non-Root Users in Docker

So I'm trying to deploy a Django app on Minikube, but in one of the containers, the image requires me to be root for certain tasks, then switch to the postgres user to create some databases, and then switch back to root to run more commands.
I know I can use the USER instruction in Docker, but that breaks certain tasks depending on which user I'm in. I have also tried running su - postgres, but that returns an error saying the command must be run from a terminal.
Any thoughts on how to fix this?

The typical tool for this is gosu. When included in your container, you'd run gosu postgres $cmd, where the command is whatever you need to run. If it's the only command you need to have running in the container at the end of your entrypoint script, then you'd exec gosu postgres $cmd. The gosu page includes details on why you'd use the tool, the main reasons being TTY and signal handling. Note that the end of their readme also lists a few other alternatives that are worth considering.
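As a rough sketch of that pattern (the postgres user and the setup steps are placeholders for whatever your image actually needs), the entrypoint does its root-level work first and then drops privileges for the final command:
#!/bin/sh
set -e
# ... root-only initialization steps go here ...
# replace the shell with the final command, running as postgres,
# so it runs as pid 1 and receives signals directly
exec gosu postgres "$@"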

If, say, your container is based on the official Postgres image, you can try creating a script for all your root tasks and COPY that script into the container's /docker-entrypoint-initdb.d folder. Any .sql and .sh scripts in this folder will be executed AFTER the entrypoint calls initdb, running via gosu postgres as seen in the entrypoint script.
If you need to sandwich the initdb between two sets of root tasks, then you will have to write your own entrypoint script.
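For example, a minimal Dockerfile for that approach could look like this (the image tag and script name are placeholders):
FROM postgres:15
# scripts copied here run after initdb, as the postgres user
COPY create-databases.sh /docker-entrypoint-initdb.d/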

Related

Dockerfile hide variables (user creation)

I am trying to build a Docker image from Ubuntu 18.04.
To administer the container, I am creating a default user:
# set default user
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
My problem is that I would like to set a secure password on it, and my Dockerfile is intended to be versioned with Git.
So my question is: is there a way to load variables into a Dockerfile from a .env file or anything else?
I have seen an option on the docker run command, but not for docker build; am I wrong?
Anything you write in the Dockerfile can be trivially retrieved in plain text with docker history. Any file in the image can be very easily retrieved by anyone who can run any docker command. There is no way around either limitation.
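As a quick illustration (the image name and password here are made up), a build argument used in a RUN step is visible to anyone holding the image:
$ docker build -t demo --build-arg PASSWORD=hunter2 .
$ docker history --no-trunc demo
# the layer commands are listed, including |1 PASSWORD=hunter2 /bin/sh -c ...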
Do NOT try to set user passwords for your Docker images like this. In most cases it shouldn't be necessary to formally "log in" to a container at all. Let the container run the single application process it needs to run, and don't try to set up an ssh daemon, sudo, or other things you'd have in a more complete server environment.
(You shouldn't usually need a shell inside a container; you don't for other kinds of processes like your Nginx server, for example. If you do, you can get one with docker exec, and if your main process runs as a non-root user, you can add a -u root option to be root in that shell. Again, you can't prevent an end user from being able to do this.)
If you are using a standalone container, you can keep the variables in a script that stays outside of version control and run it via a RUN or ENTRYPOINT instruction. That script would contain your password information, and then you can carry on with the build of your image.
If you are using Docker Swarm, you can use secrets. More information is available at the following link, which also explains the differences between Windows and Linux:
https://docs.docker.com/engine/swarm/secrets/
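A brief sketch of that approach (the secret, service, and image names are placeholders); the value shows up inside the container under /run/secrets rather than being baked into the image:
$ printf 'hunter2' | docker secret create user_password -
$ docker service create --name app --secret user_password myimage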

docker-compose user mapping [duplicate]

I would like to volume mount a directory from a Docker container to my workstation, so that when I edit the content in the volume mount from my workstation, it is updated in the container as well. This would be very useful for testing and developing web applications in general.
However, I get a permission denied error in the container, because the UIDs in the container and on the host aren't the same. Isn't the original purpose of Docker to make development faster and easier?
This answer works around the issue I am facing when volume mounting a Docker container to my work station. But by doing this, I make changes to the container that I won't want in production, and that defeats the purpose of using Docker during development.
The container is Alpine Linux, the workstation Fedora 29, and the editor Atom.
Question
Is there another way, so both my work station and container can read/write the same files?
There are multiple ways to do this, but the central issue is that bind mounts do not include any UID mapping capability: the UID on the host is what appears inside the container and vice versa. If those two UIDs do not match, you will read/write files with different UIDs and likely experience permission issues.
Option 1: Get a Mac or run Docker inside of VirtualBox. Both of these environments have a filesystem integration that dynamically updates the UIDs; on Mac, that is implemented with osxfs. Be aware that this convenience comes with a performance penalty.
Option 2: Change your host. If the UID on the host matches the UID inside the container, you won't experience any issues. You'd just run a usermod on your user on the host to change your UID there, and things will happen to work, at least until you run a different image with a different UID inside the container.
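For example, if the container runs as UID 1000 (the username and UID here are placeholders, and the account must not be logged in while you change it):
$ sudo usermod -u 1000 developer
usermod will also re-own the files in that user's home directory to the new UID.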
Option 3: Change your image. Some will modify the image to a static UID that matches their environment, often to match a UID in production. Others will pass a build arg, with something like --build-arg UID=$(id -u) as part of the build command, and then a Dockerfile along these lines:
FROM alpine
ARG UID=1000
# -D creates the user without a password so the build doesn't prompt for one
RUN adduser -D -u ${UID} app
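The corresponding build then passes in your UID (the image tag is a placeholder):
$ docker build --build-arg UID=$(id -u) -t app-image .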
The downside of this is that each developer may need a different image, so they are either building locally on each workstation, or you centrally build multiple images, one for each UID that exists among your developers. Neither of these is ideal.
Option 4: Change the container UID. This can be done in the compose file, or on a one-off container with something like docker run -u $(id -u) your_image. The container will now be running with the new UID, and files in the volume will be accessible. However, the username inside the container will not necessarily map to your UID, which may look strange to any commands you run inside the container. More importantly, any files owned by the user inside the container that you have not hidden with your volume will have the original UID and may not be accessible.
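In a compose file, that looks roughly like this (a sketch; it assumes UID and GID are exported in your shell, since compose only interpolates environment variables):
services:
  app:
    image: your_image
    user: "${UID:-1000}:${GID:-1000}"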
Option 5: Give up, run everything as root, or change permissions to 777, allowing everyone to access the directory with no restrictions. This won't map to how you should run things in production, and the container may still write new files with limited permissions, making them inaccessible to you outside the container. It also creates the security risks of running code as root or leaving filesystems open to both read and write from any user on the host.
Option 6: Set up an entrypoint that dynamically updates your container. Despite your not wanting to change your image, this is my preferred solution for completeness. Your container does need to start as root, but only in development, and the app will still run as the user, matching the production environment. The first step of that entrypoint is to change the user's UID/GID inside the container to match your volume's UID/GID. This is similar to option 4, but now files inside the image that were not replaced by the volume have the right UIDs, and the user inside the container will show with the changed UID, so commands like ls show the username inside the container rather than a UID that may map to another user or to no one at all. While this is a change to your image, the code only runs in development, and only as a brief entrypoint to set up the container for that developer, after which the process inside the container will look identical to one in a production environment.
To implement this, I make the following changes. First, the Dockerfile now includes a fix-perms script and gosu from a base image I've pushed to Docker Hub (this is a Java example, but the changes are portable to other environments):
FROM openjdk:jdk as build
# add this copy to include fix-perms and gosu, or install them directly
COPY --from=sudobmitch/base:scratch / /
RUN apt-get update \
 && apt-get install -y maven \
 && useradd -m app
COPY code /code
RUN mvn build
# add an entrypoint to call fix-perms
COPY entrypoint.sh /usr/bin/
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["java", "-jar", "/code/app.jar"]
USER app
The entrypoint.sh script calls fix-perms and then uses exec and gosu to drop from root to the app user:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  # running on a developer laptop as root
  fix-perms -r -u app -g app /code
  exec gosu app "$@"
else
  # running in production as a user
  exec "$@"
fi
The developer compose file mounts the volume and starts as root:
version: '3.7'
volumes:
  m2:
services:
  app:
    build:
      context: .
      target: build
    image: registry:5000/app/app:dev
    command: "/bin/sh -c 'mvn build && java -jar /code/app.jar'"
    user: "0:0"
    volumes:
      - m2:/home/app/.m2
      - ./code:/code
This example is taken from my presentation available here: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#fix-perms
Code for fix-perms and other examples are available in my base image repo: https://github.com/sudo-bmitch/docker-base
Since the UIDs in your containers are baked into the container definition, you can safely assume that they are relatively static. In this case, you can create a user on your host system with the matching UID and GID. Switch to the new account, and then make your edits to the files. Your host OS will not complain, since it thinks it's just the user accessing its own files, and your container OS will see the same.
Alternatively, you can consider editing these files as root.
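For instance, if the container user has UID and GID 1000 (placeholder values; check the real ones with docker exec <container> id), you could create and switch to a matching host account:
$ sudo groupadd -g 1000 appgroup
$ sudo useradd -u 1000 -g 1000 -m appuser
$ sudo -u appuser -s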

Docker: How to disable root user in container?

When delivering images to customers, they usually run
$ docker-compose up -d
to deploy them in production. It is then quite easy to get root and to see or modify all files:
$ docker-compose exec <service> /bin/sh
/bin/sh(root)# ...
How can I prevent customers from getting full access rights to all files as root when running the container? Maybe this is not possible at all in Docker, but then it should at least be made more complicated for users to get full access to anything inside the container.
Is there a best practice for introducing non-root accounts in containers?
You can’t. You can always run
docker exec -u 0 (container ID) sh
to get a root shell. (Assuming the image has a shell, but almost all do.)
Also remember that anyone who can run any docker command can edit any file on the host, and from there can trivially become root, and can prod around in /var/lib/docker to their heart’s content.
It's generally considered good practice to set containers to run as non-root, using RUN adduser to create a user with the base distribution's tools and then a Dockerfile USER directive, but an operator can override this at runtime if they really want to.
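A minimal sketch of that practice (the base image, user name, and command are placeholders):
FROM ubuntu:22.04
# create an unprivileged account for the application
RUN useradd --create-home appuser
# ... install and configure the app as root here ...
# everything from here on, including the container process, runs as appuser
USER appuser
CMD ["/usr/local/bin/app"]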

Docker using gosu vs USER

Docker has kind of always had a USER command to run a process as a specific user, but in general a lot of things had to run as root.
I have seen a lot of images that use an ENTRYPOINT with gosu to de-elevate the process to run.
I'm still a bit confused about the need for gosu. Shouldn't USER be enough?
I know quite a bit has changed in terms of security with Docker 1.10, but I'm still not clear about the recommended way to run a process in a docker container.
Can someone explain when I would use gosu vs. USER?
Thanks
EDIT:
The Docker best practice guide is not very clear: it says if the process can run without privileges, use USER; if you need sudo, you might want to use gosu.
That is confusing, because one can install all sorts of things as root in the Dockerfile, then create a user and give it proper privileges, then finally switch to that user and run the CMD as that user.
So why would we need sudo or gosu at all?
Dockerfiles are for creating images. I see gosu as more useful as part of container initialization, when you can no longer change users between run commands in your Dockerfile.
After the image is created, something like gosu allows you to drop root permissions at the end of your entrypoint inside of a container. You may initially need root access to do some initialization steps (fixing UIDs, host-mounted volume permissions, etc.). Then, once initialized, you run the final service without root privileges and as pid 1 to handle signals cleanly.
Edit:
Here's a simple example of using gosu in an image for docker and jenkins: https://github.com/bmitch3020/jenkins-docker
The entrypoint.sh looks up the gid of the /var/run/docker.sock file and updates the gid of the docker user inside the container to match. This allows the image to be ported to other docker hosts where the gid on the host may differ. Changing the group requires root access inside the container. Had I used USER jenkins in the Dockerfile, I would be stuck with the gid of the docker group as defined in the image, which wouldn't work if it doesn't match that of the docker host it's running on. But root access can be dropped when running the app, which is where gosu comes in.
At the end of the script, the exec call prevents the shell from forking gosu; instead, gosu replaces pid 1. Gosu in turn does the same, switching the uid and then exec'ing the jenkins process so that it takes over as pid 1. This allows signals to be handled correctly, which would otherwise be ignored by a shell running as pid 1.
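The gid-matching step described there might look roughly like this (a sketch, not the exact script from that repo; the group and user names are assumptions):
#!/bin/sh
# align the container's docker group with the gid of the mounted socket
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
groupmod -g "$SOCK_GID" docker
# drop from root to jenkins and take over pid 1
exec gosu jenkins "$@"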
I am using gosu and entrypoint.sh because I want the user in the container to have the same UID as the user that created the container.
See Docker Volumes and Permissions.
The purpose of the container I am creating is development. I need to build for Linux, but I still want all the convenience of local (OS X) editing, tools, etc. By keeping the UIDs the same inside and outside the container, file ownership stays a lot more sane, and it prevents some errors (container user cannot edit files in the mounted volume, etc.).
Another advantage of using gosu is signal handling. You may trap, for instance, SIGHUP to reload the process, as you would normally achieve via systemctl reload <process> or the like.

Can I run a bash script from a file in a separate docker volume before a container starts?

I have a situation where I need to source a bash environment file that lives in a separate volume (pulled in via volumes_from in docker-compose) when a container starts, so that all future commands run on the container execute under that bash environment (it runs some scripts and sets a lot of dynamic variables pulled in from other places). The reason I'm using a volume instead of just adding this command directly to the image is that the environment file I need to include is outside the Dockerfile context, and Dockerfiles don't support that.
I tried adding a source /path/to/volume/envfile line to the root user's .bashrc file in the hope that it would be run when the container started, but that didn't work. I'm assuming that's because the volumes aren't actually mounted until after the container / shell has started and the .bashrc commands have already run (which makes sense).
Does anyone have any idea how I can accomplish something like this? I'm open to alternative methods; however, the one thing I can't change here is moving the file I need inside of the Docker context, as that would break quite a number of other things.
My (slightly edited) Dockerfile and docker-compose.yml files: https://gist.github.com/joeellis/235d90799eb647ab00ec
EDIT: As a test, I'm trying to run rake db:create:all on the container, like docker-compose run app rake db:create:all, which returns an error that the environment file I need cannot be found / loaded. Interestingly enough, if I shell into the container and run the command, it all works just fine. So maybe when a container is given a command via run, it doesn't necessarily open up a shell, but uses something else?
The problem is that the shell within which your /src/app/bin/start-app is run is not an interactive shell, so .bashrc is not read!
You can fix this as follows:
Add the env file to /root/.profile instead:
RUN echo "source /src/puppet/path/to/file/env/bootstrap" >> /root/.profile
And also run your command as a login shell (sh is bash anyhow for you via the hack in the Dockerfile :) ):
command: "sh -lc '/src/app/bin/start-app'"
This should work fine :)
The problem really is just that the file isn't sourced, because you're running in a non-interactive shell when going via the docker command instruction.
It works when you shell into the container because, bam, you've got an interactive shell that sources that file :)
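If you'd rather not rely on login-shell semantics, an alternative sketch is a small entrypoint that sources the file from the mounted volume and then runs whatever command was given (the bootstrap path comes from the question; the rest is a placeholder):
#!/bin/sh
# load the environment from the mounted volume, then hand off to the command
. /src/puppet/path/to/file/env/bootstrap
exec "$@"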
