I'm starting containers from my docker image like this:
$ docker run -it --rm --user=999:998 my-image:latest bash
where the uid and gid are for a system user called sdp:
$ id sdp
uid=999(sdp) gid=998(sdp) groups=998(sdp),999(docker)
but: container says "no"...
groups: cannot find name for group ID 998
I have no name!@75490c598f4c:/home/myfolder$ whoami
whoami: cannot find name for user ID 999
What am I doing wrong?
Note that I need to run containers based on this image on multiple systems, and I cannot guarantee that the uid:gid of the user will be the same across systems, which is why I need to specify it on the command line rather than in the Dockerfile.
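For example, on each host I run something like this (just a sketch; it assumes the sdp user exists on that host):
$ docker run -it --rm --user="$(id -u sdp):$(id -g sdp)" my-image:latest bash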
Thanks in advance.
This sort of error will happen when the uid/gid does not exist in the /etc/passwd or /etc/group file inside the container. There are various ways to work around that. One is to directly map these files from your host into the container with something like:
$ docker run -it --rm --user=999:998 \
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
my-image:latest bash
I'm not a fan of that solution since files inside the container filesystem may now have the wrong ownership, leading to potential security holes and errors.
Typically, the reason people want to change the uid/gid inside the container is because they are mounting files from the host into the container as a host volume and want permissions to be seamless across the two. In that case, my solution is to start the container as root and use an entrypoint that calls a script like:
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
The above is from a fix-perms script that I include in my base image. What's happening there is the uid of the user inside the container is compared to the uid of the file or directory that is mounted into the container (as a volume). When those id's do not match, the user inside the container is modified to have the same uid as the volume, and any files inside the container with the old uid are updated. The last step of my entrypoint is to call something like:
exec gosu app_user "$@"
This is a bit like an su command to run the "CMD" value as app_user, but with exec logic that replaces pid 1 with the "CMD" process to better handle signals. I then run it with a command like:
$ docker run -it --rm --user=0:0 -v /host/vol:/container/vol \
-e RUN_AS=app_user --entrypoint /entrypoint.sh \
my-image:latest bash
Have a look at the base image repo I've linked to (sudo-bmitch/docker-base on GitHub), including the nginx example that shows how these pieces fit together and avoids the need to run containers in production as root (assuming production has known uid/gid's that can be baked into the image, or that you do not mount host volumes in production).
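For reference, a minimal entrypoint along these lines might look like the following. This is only a sketch: it assumes gosu and a fix-perms script are installed in the image, and the fix-perms flags and the RUN_AS variable are illustrative rather than the exact interface of that repo.
#!/bin/sh
# entrypoint.sh - align the container user with the mounted volume, then drop privileges
set -e
if [ "$(id -u)" = "0" ] && [ -n "$RUN_AS" ]; then
  # adjust the uid/gid of $RUN_AS to match the owner of the mounted volume
  fix-perms -r -u "$RUN_AS" /container/vol
  # run the CMD as $RUN_AS, replacing pid 1 so signals are handled properly
  exec gosu "$RUN_AS" "$@"
fi
exec "$@"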
It's strange to me that there's no built-in command-line option to simply run a container with the "same" user as the host so that file permissions don't get messed up in the mounted directories. As mentioned by OP, the -u $(id -u):$(id -g) approach gives a "cannot find name for group ID" error.
I'm a docker newb, but here's the approach I've been using in case it helps others:
# See edit below before using this.
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && su - $USER"
I.e. add a user (useradd) with a matching name, make it sudo (usermod), then open a terminal with that user (su -).
Edit: I've just found that this causes an E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) error when trying to use apt. Using sudo gives the error -su: sudo: command not found because sudo isn't installed by default on the image I'm using. So the command becomes even more hacky and requires running apt update and apt install sudo at launch:
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && apt update && apt install sudo && passwd -d $USER && su - $USER"
Not ideal! I'd have hoped there was a much more simple way of doing this (using command-line options, not creating a new image), but I haven't found one.
1) Make sure that the user 999 has the right privileges on the current directory; you need to try something like this in your Dockerfile:
FROM <your-base-image>
RUN mkdir /home/999-user-dir && \
chown -R 999:998 /home/999-user-dir
WORKDIR /home/999-user-dir
USER 999
Try to spin up the container using this image without the user argument and see if that works.
2) Another reason could be a permission issue on the files below; make sure your group 998 has read permission on them:
-rw-r--r-- 1 root root 690 Jan 2 06:27 /etc/passwd
-rw-r--r-- 1 root root 372 Jan 2 06:27 /etc/group
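For example, you can check them from inside the image (a sketch):
$ docker run --rm my-image:latest ls -l /etc/passwd /etc/group
and, if they turn out to be too restrictive, restore the usual modes in the Dockerfile:
RUN chmod 644 /etc/passwd /etc/group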
Thanks
So, on your host you probably see your user and group:
$ cat /etc/passwd
sdp:x:999:998::...
But inside the container, you will not see them in /etc/passwd.
This is the expected behavior: the host and the container are completely separated as long as you don't mount the /etc/passwd file into the container (and you shouldn't do that, from a security perspective).
Now, if you specified a default user inside your Dockerfile, the --user option overrides the USER instruction, so you are left without a username inside your container. But note that specifying the uid:gid option means the container has the permissions of the user with the same uid value on the host.
As for your request not to specify a user in the Dockerfile - that shouldn't be a problem. You can set it at runtime like you did, as long as that uid matches an existing user's uid on the host.
If you have to run some of the containers in privileged mode - please consider using user namespace.
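For example, user namespace remapping is enabled on the Docker daemon roughly like this (a sketch; it requires a daemon restart and affects all containers on the host):
$ cat /etc/docker/daemon.json
{
  "userns-remap": "default"
}
$ sudo systemctl restart docker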
I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name';
ARG group='docker';
# create group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create user if not exists
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add user to the group (if it was present and not created at the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
That I build this way:
docker build --tag my-gulp:latest .
and finally run by script this way:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
that logs me into the docker container properly but when I want to see images
docker images
or try to pull image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How to create docker image from chekote/gulp:latest so I can use docker inside it without the error?
Or maybe the error is because of wrong docker run command?
A quick way to avoid that: add your user to the group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
Should be good from there.
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 as docker and the other one names gid 16 the same; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/group file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile (it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always docker exec -u 0 to get a root shell) though installing some non-root user is often considered a best practice. You could reduce the Dockerfile to
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
Open a terminal and type this command:
sudo chmod 666 /var/run/docker.sock
let me know the results...
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case.
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
The error has nothing to do with the docker pull or docker image subcommands; rather, it means you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).
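For example, the usual way to grant that access is to add your user to the docker group (note that this effectively gives the user root-equivalent access to the host):
$ sudo usermod -aG docker "$USER"
$ newgrp docker   # or log out and back in for the group change to take effect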
I need to compile some programs using files in a docker container. Once compiled, the container is no longer used.
Therefore I always use the following command:
docker run --rm -v my_file:docker_file my_images my_command
But I find that there are always some problems.
For example, take a simple C language program that outputs "hello, world" as an example.
docker run -it --rm -v /home/cuiyujie/workspace/workGem5/gem5/hello.c:/home/cuiyujie/workspace/workGem5/gem5/hello.c -v /home/cuiyujie/workspace/workGem5/gem5/build:/home/cuiyujie/workspace/workGem5/gem5/build gerrie/gem5-bare-env
After entering the container, I execute gcc hello.c -o hello and then cp hello build.
I found outside the container that the hello file belongs to root.
-rwxr-xr-x 1 root root 16696 Feb 23 10:23 hello*
I don't have permission to delete it. What should I do so that it is owned by the host user instead?
If you run your container as your own UID, files created in the host volumes will be owned by your UID. That comes with the disclaimer that your container needs to be designed to run as a user other than root (i.e. it must not need access to files owned by root inside the container). Here's an example of running as your uid/gid with full access to your home directory, using bash on Linux (the $(id -u) expansion may not work in other environments):
docker container run \
-u "$(id -u):$(id -g)" -w "$(pwd)" \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-v "$HOME:$HOME" \
<your_image>
You can use chown to change the ownership of a file. You'll need permission to run it with sudo though.
$ sudo chown $USER hello
If you also want to change the group of the file to your primary group, you can put a . after the user:
$ sudo chown $USER. hello
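Equivalently, with the colon form and an explicit group (a small sketch):
$ sudo chown "$USER:$(id -gn)" hello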
I'm trying to let a docker container access a letsencrypt certificate from the host file system.
I do not want to run the docker container as root, but rather as a user with very specific access rights.
Neither do I want to change the permissions of the certificate.
All I want, is for the given user, to have access to read the certificate inside the docker container.
The certificate has the following setup:
-rw-r----- 1 root cert-group
The user who's going to run the docker container, is in the cert-group:
uid=113(myuser) gid=117(myuser) groups=117(myuser),999(cert-group),998(docker)
This works as long as we're on the host - I am able to read the file as expected with the user "myuser".
Now I want to do this within a docker container with the certificate mounted as a volume.
I have done multiple test cases, but none with any luck.
A simple docker-compose file for testing:
version: '3.7'
services:
test:
image: alpine:latest
volumes:
- /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro
command: >
sh -c 'ls -l / && cat /etc/passwd && cat /etc/group && cat /cert.pem'
user: "113:117"
restart: "no"
This outputs a lot, but the most important parts are:
test_1 | -rw-r----- 1 root ping 3998 Jul 15 09:51 cert.pem
test_1 | cat: can't open '/cert.pem': Permission denied
test_1 | ping:x:999:
Here I assume that "ping" is an internal group in the Alpine image; however, I'm getting some mixed information about how this interacts with the host.
From this article https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf my takeaway is that there's a single kernel handling all permissions (the host's), and therefore, if the same uid and gid are used, the permissions would carry over from the host. However, even though the running user is 113:117, which on the host is part of group 999, it still doesn't give me access to read the file.
Next I found this article https://medium.com/@nielssj/docker-volumes-and-file-system-permissions-772c1aee23ca where especially this bullet point caught my attention:
The container OS enforces file permissions on all operations made in
the container runtime according to its own configuration. For example,
if a user A exists in both host and container, adding user A to group
B on the host will not allow user A to write to a directory owned by
group B inside the container unless group B is created inside the
container as well and user A is added to it.
This made me think that maybe a custom Dockerfile was needed to add the user inside the container and make the user part of group 999 (which is known as ping, as stated earlier):
FROM alpine:latest
RUN adduser -S --uid 113 -G ping myuser
USER myuser
Running this gives me the exact same result, now with myuser appended to passwd though:
test_1 | myuser:x:113:999:Linux User,,,:/home/myuser:/sbin/nologin
These are just a couple of the things that I've tried.
Another is syncing /etc/passwd and /etc/group with volumes, as found in some other blog:
volumes:
- /etc/passwd:/etc/passwd
- /etc/group:/etc/group
This makes it visually look correct inside the container, but it doesn't change the end result - still permission denied.
Any help or pointers in the right direction would be really appreciated since I'm running out of ideas.
Docker containers do not know the uid/gid of the user running the container on the host. All requests to run containers go through the docker socket, and then to the docker engine that is often running as root, and no uid/gid's are passed in those API calls. The docker engine is just running the container as the user specified in the Dockerfile or as part of the container create command (in this case, from the docker-compose.yml).
Once inside the container, the mapping from uid/gid to names is done with the /etc/passwd and /etc/group file that is inside the container. Importantly, at the filesystem level, uid/gid values are not being mapped between the container and the host (with the exception of user namespaces, but if implemented properly, that would only make this problem worse). And all filesystem operations happen at the uid/gid level, not based on names. So when you do a host volume mount, the uid/gid's are passed directly through.
The issue you are encountering here is how you are telling the container to pick the uid/gid to run the container processes. By specifying user: "113:117" you have told the container to not only specify the uid (113), but also the gid (117) of the process. When that's done, none of the secondary groups from /etc/group are assigned to the user. To get those secondary groups assigned, you want to only specify the uid, user: "113", which will then lookup the group assignments from the /etc/passwd and /etc/group file inside the container. E.g.:
user: "113"
Unfortunately, the lookup for group membership is done by docker before any volumes are mounted, so you have the following scenario.
First, create an image with an example user assigned to a few groups:
$ cat df.users
FROM alpine:latest
RUN addgroup -g 4242 group1 \
&& addgroup -g 8888 group2 \
&& adduser -u 1000 -D -H test \
&& addgroup test group1 \
&& addgroup test group2
$ docker build -t test-users -f df.users .
...
Next, run that image, comparing the id on the host to the id inside the container:
$ id
uid=1000(bmitch) gid=1000(bmitch) groups=1000(bmitch),24(cdrom),25(floppy),...
$ docker run -it --rm -u bmitch -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
docker: Error response from daemon: unable to find user bmitch: no matching entries in passwd file.
Whoops, docker doesn't see the entry from /etc/passwd; let's try with the test user we created in the image:
$ docker run -it --rm -u test -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch) groups=4242,8888
That works, and assigns the groups from the /etc/group file in the image, not the one we mounted. We can also see that uid works too:
$ docker run -it --rm -u 1000 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch) groups=4242,8888
As soon as we specify the gid, the secondary groups are gone:
$ docker run -it --rm -u 1000:1000 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch)
And if we run without overriding the /etc/passwd and /etc/group file, we can see the correct permissions:
$ docker run -it --rm -u test test-users:latest id
uid=1000(test) gid=1000(test) groups=4242(group1),8888(group2)
Likely the best option is to add a container user with the group membership matching the uid/gid values from the host. For host volumes, I've also solved this problem with a base image that dynamically adjusts the user or group inside the container to match the uid/gid of the file mounted in a volume. This is done as root, and then gosu is used to drop permissions back to the user. You can see that at sudo-bmitch/docker-base on github, specifically the fix-perms script that I would run as part of an entrypoint.
Also, be aware that mounting the /etc/passwd and /etc/group files can break file permissions of other files within the container filesystem, and this user may gain access inside the container that is not appropriate (e.g. gid 999 maps to the ping group inside the container, so the user could end up with group access to files or commands that a normal user shouldn't have). This is why I tend to adjust the container user/group rather than completely replace these files.
Actually your solution is not wrong. I did the same with a few differences.
This is my Dockerfile:
FROM alpine:latest
RUN addgroup -S cert-group -g 117 \
&& adduser -S --uid 113 -G cert-group myuser
USER myuser
And my docker-compose.yml:
version: '3.7'
services:
test:
build:
dockerfile: ./Dockerfile
context: .
command: >
sh -c 'ls -l / && cat /etc/passwd && cat /etc/group && cat /cert.pem'
volumes:
- "/tmp/test.txt:/cert.pem:ro"
restart: "no"
My '/tmp/test.txt' is assigned to 113:117.
IMHO, the problem is that your docker-compose.yml doesn't use your image. You should remove the image: line and add build: instead.
I have gone through the same issue today and luckily, the below solution helped me.
"Add :Z to your volumes mounts"
Reference: https://github.com/moby/moby/issues/41202
Note: unfortunately this is only an issue on CentOS; I didn't face any problem with Ubuntu.
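For example, with the compose file from the question the mount would become something like this (a sketch; :Z asks Docker to apply an SELinux label for the container):
volumes:
  - /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro,Z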
Running a docker image with a command line such as:
> docker run -it -v $OutsideDir:$InsideDir -u $(id -u):$(id -g) c0ffeebaba bash
I am able to work on my data as the current host user from inside the docker container. However, running whoami inside the container reports that the UID is unknown.
So the shell is executed as a user without a home directory. How can I get some initialization done for that user? Is there a way to map the user id and group id of an external user to a specific user name inside the container? Can this be done dynamically, so that it would work for any user specified through the '--user' flag as shown above?
My first approach would have been to use 'CMD' in the Dockerfile such as
CMD ["source", "/home/the_user/.bashrc" ]
But, that does not work.
A relatively simple solution would be to wrap the docker run in a script, mapping in the /etc/passwd and /etc/group files from the host onto the container, as well as the user's home directory, so something like:
#!/bin/bash -p
# command starts with mapping passwd and group files
cmd=(docker run -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro)
# add home directory:
myhome=$(getent passwd $(id -nu) | awk -F: '{print $6}')
cmd+=(-v $myhome:$myhome)
# add userid and groupid mappings:
cmd+=(-u $(id -u):$(id -g))
# then pass through any other arguments:
cmd+=("$@")
"${cmd[@]}"
This can be run as:
./runit.sh -it --rm alpine id
or, for a shell (alpine doesn't have bash by default):
./runit.sh -it --rm centos bash --login
You can throw in a -w $HOME to get it to start in the user's home directory, etc.
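For example, building on the invocation above (a sketch):
$ ./runit.sh -it --rm -w "$HOME" centos bash --login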
By default when you run
docker run -it [myimage]
OR
docker attach [mycontainer]
you connect to the terminal as root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since these commands attach/execute into an existing process, they use that process's current user directly.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as a particular user, then either start the container with that user (docker run --user <user>, or the USER instruction in your Dockerfile), or change the user inside the container using su <user>.
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user, -u option takes a username or UID (format: <name|uid>[:<group|gid>]).
Then, it works for me like this,
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
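For example (a minimal sketch; the appuser name is just illustrative):
FROM ubuntu:20.04
RUN useradd -m appuser
USER appuser
CMD ["bash"]
A container started from the resulting image then runs its CMD as appuser.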
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both specify the $USER environment variable and mount the /etc/passwd file. This way I can compile in the /siem folder and retain ownership of the files there, rather than having them owned by root.
My solution:
#!/bin/bash
user_cmds="$#"
# note: bash reserves the read-only variable UID, so use different names
GROUP_ID=$(id -g $USER)
USER_ID=$(id -u $USER)
RUN_SCRIPT=$(mktemp -p $(pwd))
(
cat << EOF
addgroup --gid $GROUP_ID $USER
useradd --no-create-home --home /cmd --gid $GROUP_ID --uid $USER_ID $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF
) > $RUN_SCRIPT
trap "rm -rf $RUN_SCRIPT" EXIT
docker run -v $(pwd):/cmd --rm my-docker-image "bash /cmd/$(basename ${RUN_SCRIPT})"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume mounted to /cmd inside the container.
I am using this workflow to allow my dev-team to cross-compile C/C++ code for the arm64 target, whose bsp I maintain (the my-docker-image contains the cross-compiler, sysroot, make, cmake, etc). With this a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
Where cross_compile.sh is the script shown above. The addgroup/useradd machinery allows user-ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations...
For docker-compose, in the docker-compose.yml:
version: '3'
services:
app:
image: ...
user: ${UID:-0}
...
In .env:
UID=1000
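For example, the .env file can be generated from the current host user (a sketch):
$ echo "UID=$(id -u)" > .env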
Execute a command as the www-data user:
docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which was: "compile webpack assets in a Node.js container on Windows running Docker Desktop with WSL2, and have the built assets owned by the currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
Also this material helped me understand what is going on.