How to invoke docker container within docker without root? - docker

I know there are solutions using docker in docker (docker/dind), but people give reasons not to use it and instead recommend exposing the host's Docker socket to the first container by adding the option:
--volume /var/run/docker.sock:/var/run/docker.sock
I am a docker user on a server (in the docker user group), and I am able to run a container with the above option. But once I am inside that container and try to start another container with docker run image_name, I get the error:
dial unix /var/run/docker.sock: connect: permission denied
I know this error is expected, as the user inside my container is not in the host's docker group. I have seen people solve this by adding USER root to the Dockerfile. Since I don't have sudo access on the server, I wonder whether there is a way to invoke docker in docker without root?
Many thanks!

On the docker host, do not change the file permissions on docker.sock to anything like 777. Doing so exposes a security hole: anyone on the host, including every untrusted user, could gain root access to the host with a command like:
docker run -it --rm -v /:/host busybox sh
To access the docker socket from inside of a container, you'll want to either run your container as root, e.g.:
docker run -it --rm -u "0:0" -v /var/run/docker.sock:/var/run/docker.sock docker docker version
Or you can run your container with the docker gid inside the container:
docker run -it --rm -u "1000:$(getent group docker | cut -f3 -d:)" -v /var/run/docker.sock:/var/run/docker.sock docker docker version
In production, you would configure every docker host so the docker group has a predictable GID, and create the container user inside the image with that GID. That would be part of your Dockerfile with something like:
ARG DOCKER_GID=999
RUN useradd -u 5000 -g $DOCKER_GID app
USER app
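Depending on the base image, a group with that GID may need to exist before useradd will accept it; a fuller sketch (the base image, group name, and app user name are illustrative):
FROM debian:bookworm-slim
ARG DOCKER_GID=999
# create a docker group with the host's GID, then a non-root user in that group
RUN groupadd -g "$DOCKER_GID" -o docker \
 && useradd -u 5000 -g "$DOCKER_GID" -m app
USER app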
My preferred solution for portable environments, particularly development environments, is to start the container as root, and dynamically adjust the group id inside the container to match the file gid of a volume mount. For an example of this, there's a fix-perms script in my docker-base repo that can be run in an entrypoint. The fix-perms script contains code like:
# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi
And then an entrypoint would check for being root before fixing the permissions, and then drop to running as a non-root user, e.g.:
if [ "$(id -u)" = "0" -a -e /var/run/docker.sock ]; then
fix-perms -r -g docker /var/run/docker.sock
fi
# run process as the container user "app" if currently root
if [ "$(id -u)" = "0" ]; then
exec gosu app "$#"
else
exec "$#"
fi
By doing this check, the same image can be locked down in production by using a predictable GID on the docker hosts that matches what is baked into the image. On all other hosts, where the docker GID is not controlled, the container starts as root, fixes the permissions, and then drops to the app user inside the container.
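In practice only the launch differs between the two environments; a rough sketch, assuming an image built as above with an app user and an entrypoint that runs fix-perms (the image name my-image is illustrative):
# production: docker GID baked into the image matches the host, run directly as the app user
docker run -u app -v /var/run/docker.sock:/var/run/docker.sock my-image
# dev/other hosts: GID unknown, start as root and let the entrypoint fix permissions and drop to app
docker run -u 0:0 -v /var/run/docker.sock:/var/run/docker.sock my-image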

Related

Docker entrypoint to run commands with different users in interactive mode

I have a custom image with docker installed (docker-in-docker). When running the image, the user needs to be $USERNAME (and not root). However, the docker service requires root to start.
Getting docker to run as non-root seems overly complicated, so I have attempted to use su in the entrypoint instead; that works, but it is not interactive.
FROM ubuntu:18.04
# ... A lot of steps here to install stuff that are not really relevant to the problem.
COPY container-helpers/entrypoint.sh .
USER root
ENV ENTRY_USER $USERNAME
ENTRYPOINT [ "./entrypoint.sh" ]
CMD "pulumi up"
And entrypoint.sh is:
#!/bin/bash
set -e
service docker start
export ENV_PATH=$PATH
su $ENTRY_USER -lp <<EOSU
set -e
export PATH=$ENV_PATH
. $NVM_DIR/nvm.sh
pulumi stack select -c dev
npx meteor-deploy stack configure default
$@ # Run given arguments as a command
EOSU
I run it as:
$ docker run --env-file local.env --privileged -it meteor-deploy-leaderboard
* Starting Docker: docker [ OK ]
Logging in using access token from PULUMI_ACCESS_TOKEN
error: --yes must be passed in to proceed when running in non-interactive mode
Or, if you don't want to take pulumi's word for it:
$ docker run --env-file local.env --privileged -it meteor-deploy-leaderboard bash; echo "exited"
* Starting Docker: docker [ OK ]
Logging in using access token from PULUMI_ACCESS_TOKEN
exited
Any idea how I can pass on the tty to the su command properly?

Docker: Map external to internal user (how to apply '--user', how to execute .bashrc)?

Running a docker image with a command line such as:
> docker run -it -v $OutsideDir:$InsideDir -u $(id -u):$(id -g) c0ffeebaba bash
I am able to work on my data as the current host user from inside the docker container. However, running 'whoami' inside the container reports that the UID is unknown.
So the shell runs as a user without a home directory. How can I get some initialization done for that user? Is there a way to map the uid and gid of an external user to a specific user name inside the container? Can this be done dynamically, so that it works for any user specified through the '--user' flag as shown above?
My first approach would have been to use 'CMD' in the Dockerfile such as
CMD ["source", "/home/the_user/.bashrc" ]
But, that does not work.
A relatively simple solution would be to wrap the docker run in a script, mapping in the /etc/passwd and /etc/group files from the host onto the container, as well as the user's home directory, so something like:
#!/bin/bash -p
# command starts with mapping passwd and group files
cmd=(docker run -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro)
# add home directory:
myhome=$(getent passwd $(id -nu) | awk -F: '{print $6}')
cmd+=(-v $myhome:$myhome)
# add userid and groupid mappings:
cmd+=(-u $(id -u):$(id -g))
# then pass through any other arguments:
cmd+=("$#")
"${cmd[#]}"
This can be run as:
./runit.sh -it --rm alpine id
or, for a shell (alpine doesn't have bash by default):
./runit.sh -it --rm centos bash --login
You can throw in a -w $HOME to get it to start in the user's home directory, etc.
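For example, using the hypothetical runit.sh wrapper above:
./runit.sh -it --rm -w $HOME centos bash --login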

how to correctly use system user in docker container

I'm starting containers from my docker image like this:
$ docker run -it --rm --user=999:998 my-image:latest bash
where the uid and gid are for a system user called sdp:
$ id sdp
uid=999(sdp) gid=998(sdp) groups=998(sdp),999(docker)
but: container says "no"...
groups: cannot find name for group ID 998
I have no name!@75490c598f4c:/home/myfolder$ whoami
whoami: cannot find name for user ID 999
what am I doing wrong?
Note that I need to run containers based on this image on multiple systems and cannot guarantee that the uid:gid of the user will be the same across systems which is why I need to specify it on the command line rather than in the Dockerfile.
Thanks in advance.
This sort of error will happen when the uid/gid does not exist in the /etc/passwd or /etc/group file inside the container. There are various ways to work around that. One is to directly map these files from your host into the container with something like:
$ docker run -it --rm --user=999:998 \
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
my-image:latest bash
I'm not a fan of that solution since files inside the container filesystem may now have the wrong ownership, leading to potential security holes and errors.
Typically, the reason people want to change the uid/gid inside the container is because they are mounting files from the host into the container as a host volume and want permissions to be seamless across the two. In that case, my solution is to start the container as root and use an entrypoint that calls a script like:
if [ -n "$opt_u" ]; then
OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
NEW_UID=$(stat -c "%u" "$1")
if [ "$OLD_UID" != "$NEW_UID" ]; then
echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
usermod -u "$NEW_UID" -o "$opt_u"
if [ -n "$opt_r" ]; then
find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
fi
fi
fi
The above is from a fix-perms script that I include in my base image. What happens there is that the uid of the user inside the container is compared to the uid of the file or directory that is mounted into the container (as a volume). When those IDs do not match, the user inside the container is modified to have the same uid as the volume, and any files inside the container owned by the old uid are updated. The last step of my entrypoint is to call something like:
exec gosu app_user "$#"
That is a bit like an su command that runs the "CMD" value as app_user, but with exec logic that replaces pid 1 with the "CMD" process so signals are handled properly. I then run it with a command like:
$ docker run -it --rm --user=0:0 -v /host/vol:/container/vol \
-e RUN_AS=app_user --entrypoint /entrypoint.sh \
my-image:latest bash
Have a look at the base image repo I've linked to, including the example with nginx that shows how these pieces fit together, and avoids the need to run containers in production as root (assuming production has known uid/gid's that can be baked into the image, or that you do not mount host volumes in production).
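Putting the pieces together, an entrypoint for this uid-matching case might look roughly like the following. This is a sketch only: it assumes fix-perms and gosu are installed in the image, that fix-perms accepts a -u option analogous to the -g option shown earlier, that the volume is mounted at /container/vol, and that RUN_AS names the user to drop to (all of these names are illustrative):
#!/bin/sh
# if started as root, align the container user's uid with the mounted volume,
# then drop privileges before running the command
if [ "$(id -u)" = "0" -a -d /container/vol ]; then
  fix-perms -r -u "$RUN_AS" /container/vol
  exec gosu "$RUN_AS" "$@"
fi
# already non-root: just run the command
exec "$@"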
It's strange to me that there's no built-in command-line option to simply run a container with the "same" user as the host so that file permissions don't get messed up in the mounted directories. As mentioned by OP, the -u $(id -u):$(id -g) approach gives a "cannot find name for group ID" error.
I'm a docker newb, but here's the approach I've been using in case it helps others:
# See edit below before using this.
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && su - $USER"
I.e. add a user (useradd) with a matching name, make it sudo (usermod), then open a terminal with that user (su -).
Edit: I've just found that this causes a E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) error when trying to use apt. Using sudo gives the error -su: sudo: command not found because sudo isn't installed by default on the image I'm using. So the command becomes even more hacky and requires running an apt update and apt install sudo at launch:
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && apt update && apt install sudo && passwd -d $USER && su - $USER"
Not ideal! I'd have hoped there was a much more simple way of doing this (using command-line options, not creating a new image), but I haven't found one.
1) Make sure that user 999 has the right privileges on the current directory; try something like this in your Dockerfile:
FROM
RUN mkdir /home/999-user-dir && \
chown -R 999:998 /home/999-user-dir
WORKDIR /home/999-user-dir
USER 999
Try to spin up a container from this image without the user argument and see if that works.
2) Another possible reason is a permission issue on the files below; make sure group 998 has read permission on them:
-rw-r--r-- 1 root root 690 Jan 2 06:27 /etc/passwd
-rw-r--r-- 1 root root 372 Jan 2 06:27 /etc/group
Thanks
So, on your host you probably see your user and group:
$ cat /etc/passwd
sdp:x:999:998::...
But inside the container, you will not see them in /etc/passwd.
This is the expected behavior: the host and the container are completely separate as long as you don't mount the /etc/passwd file into the container (and you shouldn't, from a security perspective).
If you specified a default user inside your Dockerfile, the --user flag overrides the USER instruction, so you are left without a username inside your container. Note, though, that specifying uid:gid means the container process has the permissions of the user with that uid on the host.
As for your requirement not to specify a user in the Dockerfile: that shouldn't be a problem. You can set it at runtime as you did, as long as that uid matches an existing user uid on the host.
If you have to run some of the containers in privileged mode, consider using user namespaces.
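For reference, user namespace remapping is enabled on the Docker daemon, e.g. in /etc/docker/daemon.json (a minimal sketch; the daemon must be restarted afterwards, and remapping changes how volume ownership maps to the host):
{
  "userns-remap": "default"
}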

Connect to docker container as user other than root

By default, when you run
docker run -it [myimage]
OR
docker attach [mycontainer]
you connect to the terminal as the root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since these commands attach to or execute within an existing container, they use that container's current user directly.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as a particular user, then either:
start the container as that user with docker run --user <user>, or set it in your Dockerfile using USER, or
change the user inside the container using su <user>.
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user / -u option takes a username or UID (format: <name|uid>[:<group|gid>]).
It works for me like this:
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
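For example, a minimal Dockerfile sketch (the appuser name is illustrative):
FROM ubuntu:20.04
# build steps before this point still run as root
RUN useradd -m appuser
USER appuser
CMD ["bash"]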
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both set the $USER environment variable and mount the /etc/passwd file. That way, I can compile in the /siem folder and the files there keep my ownership instead of root's.
My solution:
#!/bin/bash
user_cmds="$#"
GID=$(id -g $USER)
UID=$(id -u $USER)
RUN_SCRIPT=$(mktemp -p $(pwd))
(
cat << EOF
addgroup --gid $GID $USER
useradd --no-create-home --home /cmd --gid $GID --uid $UID $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF
) > $RUN_SCRIPT
trap "rm -rf $RUN_SCRIPT" EXIT
docker run -v $(pwd):/cmd --rm my-docker-image "bash /cmd/$(basename ${RUN_SCRIPT})"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume mounted
to /cmd inside the container.
I am using this workflow to allow my dev team to cross-compile C/C++ code for an arm64 target whose BSP I maintain (my-docker-image contains the cross-compiler, sysroot, make, cmake, etc.). With this, a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
Where cross_compile.sh is the script shown above. The addgroup/useradd machinery allows user-ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations...
For docker-compose, in docker-compose.yml:
version: '3'
services:
  app:
    image: ...
    user: ${UID:-0}
    ...
In .env:
UID=1000
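To keep this portable across hosts, one option is to generate .env from the current host uid before starting; a small sketch (assumes .env holds nothing else, otherwise append rather than overwrite):
# regenerate .env with the current host uid, then start the stack
echo "UID=$(id -u)" > .env
docker-compose up -d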
Execute command as www-data user: docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which is: "Compile webpack stuff in a nodejs container on Windows running Docker Desktop with WSL2, and have the built assets owned by your currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
Also this material helped me understand what is going on.

How to run docker image as a non-root user?

I'm new to docker. When I run a docker image like the ubuntu image using the command,
sudo docker run -i -t ubuntu:14.04
By default, it enters the container as root, like this.
I searched for this, but I couldn't figure out how to start a docker image as a non-root user, as I'm a complete beginner on this topic.
It would be great if someone explains with an example of how to run a docker image as a non root user.
The docker run command has the -u parameter to allow you to specify a different user. In your case, and assuming you have a user named foo in your docker image, you could run:
sudo docker run -i -t -u foo ubuntu:14.04 /bin/bash
NOTE: The -u parameter is the equivalent of the USER instruction for Dockerfile.
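If the image does not already contain such a user, it can be created at build time; a minimal sketch (the foo user name matches the example command above):
FROM ubuntu:14.04
# create a non-root user that can be selected with -u foo or the USER instruction
RUN useradd -ms /bin/bash foo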
This is admittedly hacky, but good for those quick little containers you start just to test something quickly:
#!/bin/bash
set -eu
NAME=$1
IMG=$2
#UID=$(id -u)
USER=$(id -un)
GID=$(id -g)
GROUP=$(id -gn)
docker run -d -v /tmp:/tmp -v "/home/$USER:/home/$USER" -h "$NAME" --name "$NAME" "$IMG" /bin/bash
docker exec "$NAME" /bin/bash -c "groupadd -g $GID $GROUP && useradd -M -s /bin/bash -g $GID -u $UID $USER"
Full version of the script I use here:
https://github.com/ericcurtin/staging/blob/master/d-run
udocker is a basic variant of docker which runs in user space:
udocker is a basic user tool to execute simple docker containers in user space without requiring root privileges. Enables download and execution of docker containers by non-privileged users in Linux systems where docker is not available. It can be used to pull and execute docker containers in Linux batch systems and interactive clusters that are managed by other entities such as grid infrastructures or externally managed batch or interactive systems.
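A rough usage sketch, assuming udocker is installed (the image and container names are just examples):
udocker pull ubuntu:20.04
udocker create --name=mycontainer ubuntu:20.04
udocker run mycontainer /bin/bash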
It is not advisable to allow running docker without sudo as Docker has no auditing or logging built in, while sudo does.
If you want to give docker access to non-root users, Red Hat recommends setting up sudo.
Add an entry like the following to /etc/sudoers.
dwalsh ALL=(ALL) NOPASSWD: /usr/bin/docker
Now, set up an alias in ~/.bashrc for running the docker command:
alias docker="sudo /usr/bin/docker"
Now when the user executes the docker command as non-root, it will be allowed and properly logged.
docker run -ti --privileged -v /:/host fedora chroot /host
Look at the journal or /var/log/messages.
journalctl -b | grep docker.*privileged
Aug 04 09:02:56 dhcp-10-19-62-196.boston.devel.redhat.com sudo[23422]: dwalsh : TTY=pts/3 ; PWD=/home/dwalsh/docker/src/github.com/docker/docker ; USER=root ; COMMAND=/usr/bin/docker run -ti --privileged -v /:/host fedora chroot /host
