Dockerfile hide variables (user creation) - docker

I am trying to build a Docker image based on Ubuntu 18.04.
To administer the container I am creating a default user:
# set default user
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
My problem is that I would like to set a secure password on it, and my Dockerfile is meant to be versioned with Git.
So my question is: is there a way to load variables into the Dockerfile from a .env file or anything else?
I have seen such an option on the docker run command, but not for docker build; am I wrong?

Anything you write in the Dockerfile can be trivially retrieved in plain text with docker history. Any file in the image can be very easily retrieved by anyone who can run any docker command. There is no way around either limitation.
Do NOT try to set user passwords for your Docker images like this. In most cases it shouldn't be necessary to formally "log in" to a container at all. Let the container run the single application process it needs to run, and don't try to set up an ssh daemon, sudo, or other things you'd have in a more complete server environment.
(You shouldn't usually need a shell inside a container; you don't for other kinds of processes like your Nginx server, for example. If you do, you can get one with docker exec, and if your main process runs as a non-root user, you can add a -u root option to be root in that shell. Again, you can't prevent an end user from being able to do this.)
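For example, anyone who can pull or run the image can do something like this (the image name is just a placeholder):
# every RUN, ENV and ARG layer is shown in plain text
docker history --no-trunc myimage:latest
# any file baked into the image can be read directly
docker run --rm myimage:latest cat /etc/shadow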

If you are using a standalone container, you can keep the variables in a separate script and use a RUN instruction, or the ENTRYPOINT, to execute that script. The script would hold your password information, and you can then carry on with the build of your image.
If you are using Docker Swarm, you can use secrets; more information is available at the following link, which also explains the differences between Linux and Windows:
https://docs.docker.com/engine/swarm/secrets/
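As a rough sketch of the Swarm approach (the secret value, names and image are placeholders), the secret is created once on a manager node and mounted into the running service, so it never ends up in the image or its history:
# create the secret from stdin on a swarm manager
printf 'S3cureP@ssw0rd' | docker secret create user_password -
# attach it to a service; the container can read it at /run/secrets/user_password
docker service create --name myapp --secret user_password myimage:latest
Note this only helps at run time for Swarm services; it does not make a password baked into the image at build time any safer.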

Related

docker-compose user mapping [duplicate]

I would like to volume mount a directory from a Docker container to my workstation, so that when I edit the content in the volume mount from my workstation it is updated in the container as well. This would be very useful for testing and developing web applications in general.
However, I get a permission denied error in the container, because the UIDs in the container and on the host aren't the same. Isn't the original purpose of Docker that it should make development faster and easier?
This answer works around the issue I am facing when volume mounting a Docker container to my workstation. But by doing this, I make changes to the container that I don't want in production, and that defeats the purpose of using Docker during development.
The container is Alpine Linux, the workstation Fedora 29, and the editor Atom.
Question
Is there another way, so both my work station and container can read/write the same files?
There are multiple ways to do this, but the central issue is that bind mounts do not include any UID mapping capability: the UID on the host is what appears inside the container and vice versa. If those two UIDs do not match, you will read/write files with different UIDs and likely experience permission issues.
Option 1: get a Mac or deploy docker inside of VirtualBox. Both of these environments have a filesystem integration that dynamically updates the UIDs. For Mac, that is implemented with OSXFS. Be aware that this convenience comes with a performance penalty.
Option 2: Change your host. If the UID on the host matches the UID inside the container, you won't experience any issues. You'd just run a usermod on your user on the host to change your UID there, and things will happen to work, at least until you run a different image with a different UID inside the container.
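A hedged sketch of that host-side change, where 1000 is the UID used inside the container and 1001 is your current UID (both placeholders); it has to be run from another account, and files owned by the old UID need to be re-owned afterwards:
sudo usermod -u 1000 yourname
# re-own files still belonging to the old numeric UID
sudo find /home/yourname -user 1001 -exec chown -h yourname {} +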
Option 3: Change your image. Some will modify the image to a static UID that matches their environment, often to match a UID in production. Others will pass a build arg with something like --build-arg UID=$(id -u) as part of the build command, and then use it in the Dockerfile with something like:
FROM alpine
ARG UID=1000
RUN adduser -u ${UID} app
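For reference, the matching build command would look roughly like this (the image tag is a placeholder):
docker build --build-arg UID=$(id -u) -t your_image:dev .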
The downside of this is that each developer may need a different image, so they are either building locally on each workstation, or you centrally build multiple images, one for each UID that exists among your developers. Neither of these is ideal.
Option 4: Change the container UID. This can be done in the compose file (see the sketch below), or on a one-off container with something like docker run -u $(id -u) your_image. The container will now be running with the new UID, and files in the volume will be accessible. However, the username inside the container will not necessarily map to your UID, which may look strange in any commands you run inside the container. More importantly, any files owned by the user inside the container that you have not hidden with your volume will have the original UID and may not be accessible.
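In a compose file the same idea looks roughly like this (a sketch only; it assumes UID and GID are exported in your shell, e.g. with export UID GID="$(id -g)"):
version: '3.7'
services:
  app:
    image: your_image
    user: "${UID}:${GID}"
    volumes:
      - ./code:/code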
Option 5: Give up, run everything as root, or change permissions to 777 allowing everyone to access the directory with no restrictions. This won't map to how you should run things in production, and the container may still write new files with limited permissions making them inaccessible to you outside the container. This also creates security risks of running code as root or leaving filesystems open to both read and write from any user on the host.
Option 6: Set up an entrypoint that dynamically updates your container. Despite not wanting to change your image, this is my preferred solution for completeness. Your container does need to start as root, but only in development, and the app will still be run as the user, matching the production environment. However, the first step of that entrypoint will be to change the user's UID/GID inside the container to match your volume's UID/GID. This is similar to option 4, but now files inside the image that were not replaced by the volume have the right UIDs, and the user inside the container will now show with the changed UID, so commands like ls show the username inside the container rather than a UID that may map to another user or to no one at all. While this is a change to your image, the code only runs in development, and only as a brief entrypoint to set up the container for that developer, after which the process inside the container will look identical to that in a production environment.
To implement this I make the following changes. First the Dockerfile now includes a fix-perms script and gosu from a base image I've pushed to the hub (this is a Java example, but the changes are portable to other environments):
FROM openjdk:jdk as build
# add this copy to include fix-perms and gosu or install them directly
COPY --from=sudobmitch/base:scratch / /
RUN apt-get update \
&& apt-get install -y maven \
&& useradd -m app
COPY code /code
RUN mvn build
# add an entrypoint to call fix-perms
COPY entrypoint.sh /usr/bin/
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["java", "-jar", "/code/app.jar"]
USER app
The entrypoint.sh script calls fix-perms and then exec and gosu to drop from root to the app user:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  # running on a developer laptop as root
  fix-perms -r -u app -g app /code
  exec gosu app "$@"
else
  # running in production as a user
  exec "$@"
fi
The developer compose file mounts the volume and starts as root:
version: '3.7'
volumes:
  m2:
services:
  app:
    build:
      context: .
      target: build
    image: registry:5000/app/app:dev
    command: "/bin/sh -c 'mvn build && java -jar /code/app.jar'"
    user: "0:0"
    volumes:
      - m2:/home/app/.m2
      - ./code:/code
This example is taken from my presentation available here: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#fix-perms
Code for fix-perms and other examples are available in my base image repo: https://github.com/sudo-bmitch/docker-base
Since the UIDs in your containers are baked into the container definition, you can safely assume that they are relatively static. In this case, you can create a user on your host system with the matching UID and GID. Switch to the new account, and then make your edits to the files. Your host OS will not complain, since it thinks it's just the user accessing its own files, and your container OS will see the same.
Alternatively, you can consider editing these files as root.
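A rough sketch of that approach, assuming the container runs as UID 100 and GID 101 (placeholder values):
sudo groupadd -g 101 appgroup
sudo useradd -u 100 -g 101 -M appuser
# edit the bind-mounted files as that user
sudo -u appuser vi ./code/index.html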

Is there a good and secure way to allow non-root user to start a docker image?

I have a scenario where I want to let non-root users start a docker image and run it. It's a very simple image - we have a stupid proprietary piece of software that insists on blocking a certain port, making concurrent runs of that software impossible. I was thinking of fixing that with Docker.
The problem is that normal users (it's part of a compile process) should be able to spin this up. How do I go about that in a sane and secure fashion?
If the desired docker command is static, create a simple start script, store it in /usr/local/bin and make it executable. Make an entry in /etc/sudoers to allow the desired users to run this command with sudo without a password.
E.g. create the file /usr/local/bin/alpine.docker:
#! /bin/sh
docker run --rm -it alpine sh
Make the script secure (non-root users should not be able to edit it):
sudo chown root:root /usr/local/bin/alpine.docker
Set reasonable permissions and make it executable:
sudo chmod 554 /usr/local/bin/alpine.docker
Create an entry in /etc/sudoers with visudo:
username ALL = (root) NOPASSWD: /usr/local/bin/alpine.docker
Now the user username can run sudo alpine.docker without a password.
Warning:
Don't add users to group docker if they should not have root privileges.
Note:
For this solution, you need to install sudo. But the user username does not need to be a member of the group sudo.
Note 2:
A similar setup is possible with policykit / pkexec. But I am not familiar with it.
I prefer the solution in https://stackoverflow.com/a/50876910/348975, but an alternative is to use something like Docker Machine or dind (https://hub.docker.com/_/docker/) to create a brand new throwaway Docker instance.
Then you set the environment variable export DOCKER_HOST=tcp://${IP_ADDRESS}:2376 and can use that Docker instance without root.
This is probably not necessary for the OP's case, but where it would come in handy is if the image had to be run with arbitrary privileges:
docker container run --privileged ...
Can you escalate from --privileged to root on the host? I don't know that you can't. I would rather assume you can and isolate that Docker instance.
Since OP has one simple static predetermined docker command that OP is confident can not be escalated, I feel https://stackoverflow.com/a/50876910/348975 is the preferred solution.
If you are paranoid, you can use both https://stackoverflow.com/a/50876910/348975 and my solution together.
Create the docker group and add your user to the docker group.
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
You can follow the Docker documentation for more details: manage-docker-as-a-non-root-user

How to run official Tensorflow Docker Image as a non root user?

I currently run the official Tensorflow Docker Container (GPU) with Nvidia-Docker:
https://hub.docker.com/r/tensorflow/tensorflow/
https://gcr.io/tensorflow/tensorflow/
However, I can't find a way to set a default user for the container. The default user for this container is "root", which is dangerous in terms of security and problematic because it gives root access to the shared folders.
Let's say my host machine runs with the user "CNNareCute"; is there any way to launch my containers with the same user?
Docker containers by default run as root. You can override the user by passing --user <user> to the docker run command. Note, however, that this might be problematic if the container process needs root access inside the container.
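A minimal sketch (GPU flags omitted; add --runtime=nvidia or --gpus all depending on your nvidia-docker setup):
docker run --user $(id -u):$(id -g) --rm -it tensorflow/tensorflow bash
Since that numeric UID usually has no entry in the container's /etc/passwd, expect an "I have no name!" prompt and no usable home directory inside the container.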
The security concern you mention is handled in Docker using user namespaces. User namespaces basically map users in the container to a different pool of users on the host. Thus you can map the root user inside the container to a normal user on the host, and the security concern should be mitigated.
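A hedged sketch of enabling that on the daemon; it requires a daemon restart, applies to all containers, and relies on subordinate ID ranges in /etc/subuid and /etc/subgid:
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# then restart the daemon
sudo systemctl restart docker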
AFAIK, docker images run by default as root. This means that any Dockerfile using the image as a base doesn't have to jump through hoops to modify it. You could carry out user modification in a Dockerfile the same way you would on any other Linux box, which would give you the configuration you need.
You won't be able to use users (dynamically) from your host in the containers without creating them in the container first - and they will be in effect separate users of the same name.
You can run commands and ssh into containers as a specific user provided it exists on the container. For example, a PHP application needing commands run that retain www-data privileges, would be run as follows:
docker exec --user www-data application_container_1 sh -c "php something"
So, in short, you can set up whatever users you like and use them to run scripts, but the default will be root, and it will exist unless you remove it, which may also have repercussions...

Making docker container write files that the host machine can delete

I have a docker-based build environment - in order to build my project, I run a docker container with the --volume parameter, so it can access my project directory and build it.
The problem is that the files created by the container cannot be deleted by the host machine. The only workaround I currently have is to start an interactive container with the directory mounted and delete it.
Bottom line question: Is it possible to make Docker write the files to the mounted area with permissions such that the host can later delete them?
This has less to do with Docker and more to do with basic Unix file permissions. Your docker containers are running as root, which means any files created by the container are owned by root on your host. You fix this the way you fix any other file permission problem, by either (a) ensuring that the files/directories are created with your user id, or (b) ensuring that permissions allow you to delete the files even if they're not owned by you, or (c) using elevated privileges (e.g., sudo rm ...) to delete the files.
Depending on what you're doing, option (a) may be easy. If you can run the container as a non-root user, e.g.:
docker run -u $UID -v $HOME/output:/some/container/path ...
...then everything will Just Work, because the files will be created with your userid.
If the container must run as root initially, you may be able to take care of root actions in your ENTRYPOINT or CMD script, and then switch to another uid to run the main application. To do this, you would need to pass your user id into the container (e.g., as an environment variable), and then later use something like runuser to switch to the new userid:
exec runuser -u $TARGET_UID /some/command
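A rough sketch of such an entrypoint, assuming the image already has a non-root user named app and the UID is passed in as the TARGET_UID environment variable (names and paths are placeholders, and usermod/runuser must exist in the image):
#!/bin/sh
# entrypoint.sh: root does the setup, then the app runs as the requested UID
set -e
if [ -n "$TARGET_UID" ]; then
  usermod -u "$TARGET_UID" app        # move the app user onto the host's UID
  chown -R app /some/container/path   # root-only fixups on the mounted data
fi
exec runuser -u app -- "$@"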
If neither of the above is an option, then sudo rm -rf mydirectory should work just as well as spinning up an interactive container.
If you only need the build artifacts in order to put them into the Docker image in the next stage, then it is probably worth using the multi-stage build option.
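A minimal sketch of that pattern (image names, the Makefile and paths are placeholders): the compile happens in a throwaway build stage, and only the artifact is copied into the final image, so nothing root-owned ever lands in a host directory:
# build stage: full toolchain, may run as root
FROM gcc:12 AS builder
WORKDIR /src
COPY . .
RUN make
# final stage: copy only the built binary
FROM debian:bookworm-slim
COPY --from=builder /src/myapp /usr/local/bin/myapp
CMD ["myapp"]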

How to allow users to run (but not manage) docker containers?

I would like to allow users of my docker containers (on a shared Linux server) to do
docker run
But not any of the other commands: build, inspect, ...
My use case is that of wrapped applications inside containers.
I was wondering if there is a best practice for this?
Typically, you could use a sudoers configuration in order to allow users to execute the docker command only for docker run.
See "How can I use docker without sudo?" for the theory.
Make sure your user is not in the docker group, and use sudo to execute only docker run as root.
See as an example "sudo / su to user in a specific group"
