Making a Docker container write files that the host machine can delete

I have a docker-based build environment - in order to build my project, I run a docker container with the --volume parameter, so it can access my project directory and build it.
The problem is that the files created by the container cannot be deleted from the host machine. The only workaround I currently have is to start an interactive container with the directory mounted and delete the files from inside it.
Bottom line question: Is it possible to make Docker write files to the mounted area with permissions such that the host can later delete them?

This has less to do with Docker and more to do with basic Unix file permissions. Your docker containers are running as root, which means any files created by the container are owned by root on your host. You fix this the way you fix any other file permission problem, by either (a) ensuring that the files/directories are created with your user id, (b) ensuring that permissions allow you to delete the files even if they're not owned by you, or (c) using elevated privileges (e.g., sudo rm ...) to delete the files.
Depending on what you're doing, option (a) may be easy. If you can run the container as a non-root user, e.g.:
docker run -u $UID -v $HOME/output:/some/container/path ...
...then everything will Just Work, because the files will be created with your userid.
If the container must run as root initially, you may be able to take care of root actions in your ENTRYPOINT or CMD script, and then switch to another uid to run the main application. To do this, you would need to pass your user id into the container (e.g., as an environment variable), and then later use something like runuser to switch to the new userid:
exec runuser -u $TARGET_UID /some/command
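For example, a minimal entrypoint along those lines might look like the sketch below. This assumes a Debian-based image where useradd and runuser are available; TARGET_UID and the /output path are placeholders for whatever you pass in (e.g. docker run -e TARGET_UID=$(id -u) ...) and mount.
#!/bin/sh
# entrypoint.sh (sketch): do root-only setup, then drop to the host's UID
set -e
mkdir -p /output
chown "$TARGET_UID" /output
# create a throwaway user with the host's UID so runuser can resolve it by name
useradd --uid "$TARGET_UID" --non-unique --no-create-home hostuser 2>/dev/null || true
exec runuser -u hostuser -- "$@"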
If neither of the above is an option, then sudo rm -rf mydirectory should work just as well as spinning up an interactive container.

If you only need the build artifacts in order to copy them into the Docker image in the next stage, it is probably worth using a multi-stage build.
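A rough sketch of that option; the base images, build command, and artifact path here are placeholders, not taken from your project:
# stage 1: build inside the container, no host bind mount required
FROM gcc:12 AS build
COPY . /src
WORKDIR /src
RUN make              # placeholder for your real build command
# stage 2: copy only the build artifacts into the image that ships
FROM debian:bookworm-slim
COPY --from=build /src/myapp /usr/local/bin/myapp
CMD ["myapp"]
Nothing is written back to the host at all in this workflow, so the permission problem disappears.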

Related

Dockerfile hide variables (user creation)

I am trying to generate a docker image from Ubuntu 18.04.
To administer the container I am creating a default user:
# set default user
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
My problem is that I would like to set a secure password on it, and my Dockerfile is intended to be versioned with Git.
So my question is: is there a way to load variables into the Dockerfile from a .env file or anything else?
I have seen an option on the docker run command, but not for docker build. Am I wrong?
Anything you write in the Dockerfile can be trivially retrieved in plain text with docker history. Any file in the image can be very easily retrieved by anyone who can run any docker command. There is no way around either limitation.
Do NOT try to set user passwords for your Docker images like this. In most cases it shouldn't be necessary to formally "log in" to a container at all. Let the container run the single application process it needs to run, and don't try to set up an ssh daemon, sudo, or other things you'd have in a more complete server environment.
(You shouldn't usually need a shell inside a container; you don't for other kinds of processes like your Nginx server, for example. If you do, you can get one with docker exec, and if your main process runs as a non-root user, you can add a -u root option to be root in that shell. Again, you can't prevent an end user from being able to do this.)
If you are using a standalone container, you can put the variables in a script and have the image run that script via RUN or ENTRYPOINT. The script would contain your password information, and you can then carry on with the build of your image.
If you are using Docker Swarm, you can use secrets; more information is available at the following link, which also explains the differences between Windows and Linux:
https://docs.docker.com/engine/swarm/secrets/
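For reference, a minimal sketch of the secrets approach; the secret and service names are placeholders, and this assumes a swarm is already initialized and the stack is deployed with docker stack deploy:
printf 'my-secret-password' | docker secret create db_password -
The secret is then referenced from the compose file and appears inside the container at /run/secrets/db_password:
version: "3.7"
services:
  app:
    image: myimage
    secrets:
      - db_password
secrets:
  db_password:
    external: true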

Permissions on Volume files created by Docker

Well I'm running a container which will create/modify some files that I need to persist outside the life-cycle of the container. As a result, I am mounting a folder into the container as a volume for that purpose:
docker run -v $(pwd)/data:/app/data my-image
On the first run, $(pwd)/data does not exist, so Docker will create it for me. However, it creates it as the user its own daemon process is being executed as, which is root. This is problematic since the user within the container obviously is not root.
To fix this permission issue, I have created a user group in the host machine with the same ID as the user within the container, and have given them access to the parent folder of $(pwd)/data, and added group permission settings on the folder (chmod g+s). So if I run the following command as root:
sudo mkdir whatever
the folder whatever will be modifiable by that group and, by extension, by the container's user. However, when Docker creates the folder $(pwd)/data, it is still created as belonging to the group root, and again the user within the container cannot modify data inside it.
Right now, I have worked around this by making the required folders before running the container. However, this is a dirty workaround, and I am looking for a cleaner solution. Has anyone else faced this issue? Is it an issue with Docker not respecting the group permission settings on the parent folder, or am I missing something here / doing something wrong?
I don't consider making the folders first a dirty workaround, that's a best practice to me. Docker is running a bind mount for you, and this will fail when the source directory does not exist. For user convenience, docker will create this directory for you when you make a host mount and the directory is missing, but there's no need to use this functionality and you'll find other types of mounts will not perform the directory creation for you (e.g. the --mount syntax often used by swarm mode).
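For example, a sketch of pre-creating the bind-mount source before the first run, assuming the user inside the image has UID/GID 1000:
mkdir -p "$(pwd)/data"
sudo chown 1000:1000 "$(pwd)/data"   # match the UID/GID of the container user
docker run -v "$(pwd)/data:/app/data" my-image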
You can start your container as root and fix the permissions on the directory with an entrypoint, before running something like an exec gosu app_user app_server at the end to drop from root to the user. I do this in the entrypoint in my docker-base images, which is useful for a developer workflow where developers often need to dynamically update the container to work with their host environment.
Otherwise, you can switch to named volumes. These will be initialized when they are new or empty using the directory contents and permissions from the image. They are a best practice when you need persistence but not direct filesystem access to the contents of the volume from users on the host. You would instead access the named volume from inside of containers that mount the volume.
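A short sketch of the named-volume alternative; the volume name app-data is arbitrary:
docker volume create app-data
docker run --mount type=volume,source=app-data,target=/app/data my-image
# inspect the contents from another container instead of from the host
docker run --rm --mount type=volume,source=app-data,target=/data alpine ls -l /data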

docker-compose user mapping [duplicate]

I would like to volume mount a directory from a Docker container to my workstation, so that when I edit the content in the volume mount from my workstation it is updated in the container as well. It would be very useful for testing and developing web applications in general.
However, I get a permission denied error in the container, because the UIDs in the container and on the host aren't the same. Isn't the original purpose of Docker that it should make development faster and easier?
This answer works around the issue I am facing when volume mounting a Docker container to my work station. But by doing this, I make changes to the container that I won't want in production, and that defeats the purpose of using Docker during development.
The container is Alpine Linux, work station Fedora 29, and editor Atom.
Question
Is there another way, so both my work station and container can read/write the same files?
There are multiple ways to do this, but the central issue is that bind mounts do not include any UID mapping capability: the UID on the host is what appears inside the container and vice versa. If those two UIDs do not match, you will read/write files with different UIDs and likely experience permission issues.
Option 1: get a Mac or deploy docker inside of VirtualBox. Both of these environments have a filesystem integration that dynamically updates the UIDs. For Mac, that is implemented with OSXFS. Be aware that this convenience comes with a performance penalty.
Option 2: Change your host. If the UID on the host matches the UID inside the container, you won't experience any issues. You'd just run a usermod on your user on the host to change your UID there, and things will happen to work, at least until you run a different image with a different UID inside the container.
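As a sketch, assuming your host account is called dev and the UID inside the image is 1000:
# dev must not be logged in while its UID changes; run this from another account or a root console
sudo usermod -u 1000 dev
# usermod re-owns files under /home/dev automatically; anything owned by the old UID
# elsewhere on the filesystem has to be chowned by hand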
Option 3: Change your image. Some will modify the image to use a static UID that matches their environment, often to match a UID in production. Others will pass a build arg with something like --build-arg UID=$(id -u) as part of the build command, and then use it in the Dockerfile with something like:
FROM alpine
ARG UID=1000
RUN adduser -u ${UID} -D app
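Each developer then builds with their own UID, for example (the image tag is arbitrary):
docker build --build-arg UID=$(id -u) -t my-image:dev .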
The downside of this is that each developer may need a different image, so they are either building locally on each workstation, or you are centrally building multiple images, one for each UID that exists among your developers. Neither of these is ideal.
Option 4: Change the container UID. This can be done in the compose file, or on a one-off container with something like docker run -u $(id -u) your_image. The container will now be running with the new UID, and files in the volume will be accessible. However, the username inside the container will not necessarily map to your UID, which may look strange to any commands you run inside the container. More importantly, any files owned by the user inside the container that you have not hidden with your volume will have the original UID and may not be accessible.
Option 5: Give up, run everything as root, or change permissions to 777 allowing everyone to access the directory with no restrictions. This won't map to how you should run things in production, and the container may still write new files with limited permissions making them inaccessible to you outside the container. This also creates security risks of running code as root or leaving filesystems open to both read and write from any user on the host.
Option 6: Setup an entrypoint that dynamically updates your container. Despite not wanting to change your image, this is my preferred solution for completeness. Your container does need to start as root, but only in development, and the app will still be run as the user, matching the production environment. However, the first step of that entrypoint will be to change the user's UID/GID inside the container to match your volume's UID/GID. This is similar to option 4, but now files inside the image that were not replaced by the volume have the right UIDs, and the user inside the container will now show up with the changed UID, so commands like ls show the username inside the container, not a UID that may map to another user or to no one at all. While this is a change to your image, the code only runs in development, and only as a brief entrypoint to setup the container for that developer, after which the process inside the container will look identical to that in a production environment.
To implement this I make the following changes. First the Dockerfile now includes a fix-perms script and gosu from a base image I've pushed to the hub (this is a Java example, but the changes are portable to other environments):
FROM openjdk:jdk as build
# add this copy to include fix-perms and gosu or install them directly
COPY --from=sudobmitch/base:scratch / /
RUN apt-get update \
&& apt-get install -y maven \
&& useradd -m app
COPY code /code
RUN mvn build
# add an entrypoint to call fix-perms
COPY entrypoint.sh /usr/bin/
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["java", "-jar", "/code/app.jar"]
USER app
The entrypoint.sh script calls fix-perms and then uses exec with gosu to drop from root to the app user:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  # running on a developer laptop as root
  fix-perms -r -u app -g app /code
  exec gosu app "$@"
else
  # running in production as a user
  exec "$@"
fi
The developer compose file mounts the volume and starts as root:
version: '3.7'
volumes:
  m2:
services:
  app:
    build:
      context: .
      target: build
    image: registry:5000/app/app:dev
    command: "/bin/sh -c 'mvn build && java -jar /code/app.jar'"
    user: "0:0"
    volumes:
      - m2:/home/app/.m2
      - ./code:/code
This example is taken from my presentation available here: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#fix-perms
Code for fix-perms and other examples are available in my base image repo: https://github.com/sudo-bmitch/docker-base
Since the UIDs in your containers are baked into the container definition, you can safely assume that they are relatively static. In this case, you can create a user on your host system with the same UID and GID as the one inside the container. Switch to the new account, and then make your edits to the files. Your host OS will not complain, since it thinks it's just the user accessing its own files, and your container OS will see the same.
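A sketch of that, assuming the user inside the container has UID/GID 1234, naming the host account appuser, and using a placeholder file path:
# create a host account with the same UID/GID as the container user
sudo groupadd -g 1234 appuser
sudo useradd -u 1234 -g 1234 appuser
# edit the shared files as that account so ownership matches on both sides
sudo -u appuser vi ./code/index.html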
Alternatively, you can consider editing these files as root.

Docker: How to disable root user in container?

When delivering images to customers, they usually run
$ docker-compose up -d
to deploy them in production. It is easy to get root and to see / modify all files:
$ docker-compose exec <service> /bin/sh
/bin/sh(root)# ...
How can I prevent customers from getting full access rights to all files as root when running the container? Maybe this is not possible at all in Docker, but then it should at least be more complicated for users to get full access to anything inside the container.
Is there a best practice to introduce non-root accounts in containers?
You can’t. You can always run
docker exec -u 0 (container ID) sh
to get a root shell. (Assuming the image has a shell, but almost all do.)
Also remember that anyone who can run any docker command can edit any file on the host, and from there can trivially become root, and can prod around in /var/lib/docker to their heart’s content.
It’s generally considered good practice to set containers to run as non-root by RUN adduser to create a user using the base distribution’s tools and then a Dockerfile USER directive, but an operator can override this at runtime if they really want to.
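A minimal sketch of that pattern for a Debian-based image; the user name and application are placeholders:
FROM debian:bookworm-slim
# create an unprivileged account with the distribution's own tooling
RUN useradd --create-home --shell /usr/sbin/nologin appuser
COPY app /usr/local/bin/app
# everything from here on, including the main process, runs as appuser
USER appuser
CMD ["app"]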

how to disallow docker cp option

I have created a docker image for my running environment.
For some reason I need to put some encryption keys in the container, since it requires them for its operation.
Is there some way I can block the option to execute docker cp and pull those keys?
Thanks
No.
Docker doesn't have any way to selectively limit which commands a user can run. Also, if you can docker run anything at all, you can, for instance, put yourself in the host's /etc/sudoers file and start poking around in /var/lib/docker for *.key files: anyone who can run Docker commands has unrestricted root access to the host.
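To make that concrete, anyone who can start containers can simply bind-mount the relevant host directory and read it as root, along the lines of:
# any user with access to the docker daemon can mount host paths and read them as root
docker run --rm -v /var/lib/docker:/hostdocker alpine find /hostdocker -name '*.key'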
