Distroless docker + sudo

I have dissected Google's distroless images to see how they're built, as I'd like to use them, but my company wants images we have built ourselves.
It looks like Google builds glibc, libssl, and openssl in their base images, so I did the same. I added a statically linked busybox, curl, and sudo for testing purposes. However, when I log in as my user, sudo tells me that it is unable to read libraries that exist on the system. At first I thought it had to do with root's environment, because sudo is setuid, but passwd is also setuid and it works.
Here is some output:
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ldd /usr/bin/sudo
linux-vdso.so.1 (0x00007fff5e728000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f4bf61a1000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f4bf5f6a000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4bf5d4c000)
libc.so.6 => /lib64/libc.so.6 (0x00007f4bf59b3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4bf6621000)
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libutil.so.1
lrwxrwxrwx 1 root root 15 Oct 2 17:45
/lib64/libutil.so.1 -> libutil-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libcrypt.so.1
lrwxrwxrwx 1 root root 16 Oct 2 17:45
/lib64/libcrypt.so.1 -> libcrypt-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libpthread.so.0
lrwxrwxrwx 1 root root 18 Oct 2 17:45
/lib64/libpthread.so.0 -> libpthread-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libc.so.6
lrwxrwxrwx 1 root root 12 Oct 2 17:45
/lib64/libc.so.6 -> libc-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ sudo
sudo: error while loading shared libraries: libutil.so.1: cannot open
shared object file: No such file or directory
If I su - root and execute sudo, it works:
Distroless 🐳 [gdanko@5bd77574a894]:~ $ su - root
Password:
Distroless 🐳 [root@5bd77574a894]:~ # sudo
usage: sudo -h | -K | -k | -V
usage: sudo -v [-AknS] [-g group] [-h host] [-p prompt] [-u user]
usage: sudo -l [-AknS] [-g group] [-h host] [-p prompt] [-U user] [-u user] [command]
usage: sudo [-AbEHknPS] [-C num] [-g group] [-h host] [-p prompt] [-T timeout] [-u user] [VAR=value] [-i|-s] [<command>]
usage: sudo -e [-AknS] [-C num] [-g group] [-h host] [-p prompt] [-T timeout] [-u user] file ...
I was wondering if it was something in how Google built glibc, but I could not find anything glaring.
The other thing is, if I use Google's distroless base for my image and copy my sudo over, it works in the container. I have tried a number of different things to troubleshoot this, but I am at a loss. Can anyone see something I may be missing?
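One lead worth checking (my assumption; the post does not confirm the cause): glibc's dynamic loader runs setuid binaries in secure-execution mode, where it ignores LD_LIBRARY_PATH and trusts only /etc/ld.so.cache and the paths compiled into the loader. A hand-built glibc whose loader looks for its cache or config in a nonstandard place could then fail for a normal user while working for root, whose environment differs. A minimal sketch of inspecting the bit that triggers this mode:

```shell
# Create a scratch file and give it the setuid bit (the leading 4 in 4755);
# any binary carrying this bit is loaded by ld.so in secure-execution mode.
tmp=$(mktemp)
chmod 4755 "$tmp"
stat -c '%a' "$tmp"   # prints 4755
rm -f "$tmp"
```

If that is the cause, regenerating /etc/ld.so.cache with ldconfig inside the image (so /lib64 is found without any environment help) would be the thing to try.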

Related

How to have consistent ownership of mounted volumes in docker?

I'm trying to understand some weird behavior that I'm seeing with docker. I want to run a container that writes cached data to a volume mounted from the host, which I can later reuse in future executions of the container and also read from the host.
On the host, I see that my user has the user id 1000:
# This is on the host
$ id
uid=1000(juan-munoz) gid=1000(juan-munoz) groups=1000(juan-munoz)...
I'm running the container without any special flags for the user so it runs as root:
# This is on the container
$ id
uid=0(root) gid=0(root) groups=0(root)
Also, there is already a user with id=1000:
# The image is provided by AWS and apparently it includes a user with this ID
$ id -nu 1000
ec2-user
I have mounted a directory with -v /some/local/directory:/var/mounted. Locally, this directory is owned by my user (id=1000):
# On the host
ls -ld /some/local/directory
drwx------ 2 juan-munoz juan-munoz 4.0K Jun 21 16:21 /some/local/directory
In the container, if I check the directory, I see that it's owned by root. This is the first part that I don't understand.
# ls -ld /var/mounted
drwx------ 2 root root 4096 Jun 22 03:36 /var/mounted
Why root? I would have expected the user id to be preserved across the mount.
If I then try to change the user to 1000, this happens:
# Inside the container
$ chown -R 1000:1000 /var/mounted
ls -ld /var/mounted
drwx------ 2 ec2-user 1000 4096 Jun 22 03:36 /var/mounted
Which looks good to me, but when I do that, if I look at what happened on the host I see the following:
# On the host
ls -ld /some/local/directory
drwx------ 2 100999 100999 4.0K Jun 21 20:36 /some/local/directory
So either the host or the container has a messed-up owner. If I chown 1000:1000 on the host, the container sees it as root; if I chown 1000:1000 in the container, the host sees it as 100999.
What am I doing wrong here?
Repro steps
$ mkdir $HOME/testing
$ docker run -it --name=ubuntu-test --entrypoint="/bin/bash" --rm -v $HOME/testing:$HOME/testing ubuntu:latest
# Inside the container
$ cd /home/myusername/testing
$ touch file.txt
root@b98b7a5445e3:/home/juan-munoz/testing# ls -l
total 0
-rw-r--r-- 1 root root 0 Jun 30 23:52 file.txt
# Outside the container
$ ls -l $HOME/testing
total 0
-rw-r--r-- 1 juan-munoz juan-munoz 0 Jun 30 16:52 file.txt
Observed behavior:
On some computers, from outside the container, the file is owned by the local user; on others, it's owned by root.
Expected behavior:
To be consistent across computers
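An aside that may explain the 100999 seen earlier (my assumption, not established in the question): it is consistent with user-namespace remapping, as used by rootless Docker, userns-remap, or Docker Desktop's file sharing. Container uid 0 maps to your own uid, and container uid N (N >= 1) maps into the subordinate range from /etc/subuid. Assuming a typical entry like juan-munoz:100000:65536, the arithmetic works out:

```shell
# Container uid N (N >= 1) lands at subuid_start + N - 1 on the host,
# so chown 1000 inside the container shows up as 100999 outside.
subuid_start=100000   # first value of the /etc/subuid entry (assumed)
container_uid=1000
echo $((subuid_start + container_uid - 1))   # prints 100999
```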
I encountered the same issue under Ubuntu 22.04 with Docker Desktop installed. After uninstalling Docker Desktop and installing Docker Engine instead, things work the way I want: consistent ownership of mounted directories between the host and the container.
Here is my use case: I want to start an Ubuntu 20.04 container and use it as the compiling environment for my application. The Dockerfile is:
FROM ubuntu:20.04
ARG USER=docker
ARG UID=1000
ARG GID=1000
# create a new user with the same UID & GID but no password
RUN groupadd --gid ${GID} ${USER} && \
useradd --create-home ${USER} --uid=${UID} --gid=${GID} --groups root && \
passwd --delete ${USER}
# add the user to the sudo group and allow the sudo group passwordless sudo
RUN apt update && \
apt install -y sudo && \
adduser ${USER} sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# prevent tzdata from asking for configuration
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt -y install tzdata
RUN apt install -y build-essential git cmake
# setup default user when enter the container
USER ${UID}:${GID}
WORKDIR /home/${USER}
The building script:
#!/bin/bash
set -e
# Note: UID is a readonly variable in bash, so assigning UID=$(id -u)
# would fail; expand id directly in the build args instead.
docker build --build-arg USER="$USER" \
--build-arg UID="$(id -u)" \
--build-arg GID="$(id -g)" \
--tag "ubuntu-env" \
--file ./Dockerfile \
--progress=plain \
.
Since I use it as the compiling environment, the whole home directory is mounted for convenience:
#!/bin/bash
set -e
docker run -it \
--name "build-env" \
--user "${USER}" \
--workdir "${PWD}" \
--env TERM=xterm-256color \
--volume="$HOME":"$HOME" \
--detach \
"ubuntu-env" \
/bin/bash
Somehow, with Docker Desktop installed, the owner of the mounted home directory is always root rather than the host user. After it is uninstalled, the mounted directory gets the expected owner in the container, i.e. the host user.
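This matches how the two products mount files: Docker Engine bind-mounts the host directory directly (same inodes, same numeric ids), while Docker Desktop runs the daemon in a VM and shares files through a translation layer that can present different ownership. If you're unsure which backend a machine uses, one quick check (assuming the docker CLI is available; the fallback branches are for machines where it isn't) is:

```shell
# Docker Desktop reports "Docker Desktop" as the daemon host's operating
# system; a native Linux engine reports the distro name instead.
if command -v docker >/dev/null 2>&1; then
  docker info --format '{{.OperatingSystem}}' 2>/dev/null || echo "daemon not reachable"
else
  echo "docker CLI not installed"
fi
```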

Prevent changing of permissions in mounts with rootless containers

In rootful containers, the solution to this problem is to run with --user "$(id -u):$(id -g)"; however, this does not work for rootless container systems (rootless docker, or in my case podman):
$ mkdir x
$ podman run --user "$(id -u):$(id -g)" -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'echo hi >> /x/test'
bash: /x/test: Permission denied
So for rootless container systems I should remove --user, since the container's root user is automatically mapped to the calling user:
$ podman run -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'echo hi >> /x/test'
$ ls -al x
total 12
drwxr-xr-x 2 asottile asottile 4096 Sep 3 10:02 .
drwxrwxrwt 18 root root 4096 Sep 3 10:01 ..
-rw-r--r-- 1 asottile asottile 3 Sep 3 10:02 test
but, because this is now the root user, it can change ownership to uids which are undeletable from outside the container:
$ podman run -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'mkdir -p /x/1/2/3 && chown -R nobody /x/1'
$ ls -al x/
total 16
drwxr-xr-x 3 asottile asottile 4096 Sep 3 10:03 .
drwxrwxrwt 18 root root 4096 Sep 3 10:01 ..
drwxr-xr-x 3 165533 asottile 4096 Sep 3 10:03 1
-rw-r--r-- 1 asottile asottile 3 Sep 3 10:02 test
$ rm -rf x/
rm: cannot remove 'x/1/2/3': Permission denied
so my question is: how do I allow writes to a mount, but prevent changing ownership for rootless containers?
I think --user "$(id -u):$(id -g)" --userns=keep-id will get you what you want.
$ id -un
erik
$ id -gn
erik
$ mkdir x
$ podman run -v "$PWD/x:/x:Z" --user $(id -u):$(id -g) --userns=keep-id docker.io/library/ubuntu:focal bash -c 'mkdir -p /x/1/2/3 && chown -R nobody /x/1'
chown: changing ownership of '/x/1/2/3': Operation not permitted
chown: changing ownership of '/x/1/2': Operation not permitted
chown: changing ownership of '/x/1': Operation not permitted
$ ls x
1
$ ls -l x
total 0
drwxr-xr-x. 3 erik erik 15 Sep 6 19:34 1
$ ls -l x/1
total 0
drwxr-xr-x. 3 erik erik 15 Sep 6 19:34 2
$ ls -l x/1/2
total 0
drwxr-xr-x. 2 erik erik 6 Sep 6 19:34 3
$
Regarding deleting files and directories that are not owned by your normal UID and GID (but by ids from the extra ranges in /etc/subuid and /etc/subgid), you can use podman unshare rm filepath and podman unshare rm -rf directorypath.
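As a sanity check on the numbers above: the 165533 owner left behind on the host is just "nobody" (uid 65534) pushed through the subordinate-uid mapping, assuming the usual /etc/subuid range starting at 100000. podman unshare works because it re-enters that user namespace, where 165533 maps back to a uid your processes are allowed to act as:

```shell
# Host uid for container uid N (N >= 1) under a subuid range starting at
# 100000 is subuid_start + N - 1; for nobody (65534) that is 165533.
subuid_start=100000
nobody_uid=65534
echo $((subuid_start + nobody_uid - 1))   # prints 165533
```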

Getting Read Only Filesystem Error inside a docker container

This command
echo 1 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
when run inside a CentOS docker container (running on Mac), gives:
echo 1 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
tee: /proc/sys/net/ipv6/conf/all/disable_ipv6: Read-only file system
1
When run inside a CentOS virtual machine, it succeeds and gives no error.
The directory permissions inside docker container and VM are exactly the same:
VM:
$ ls -ld /proc/sys/net/ipv6/conf/all/disable_ipv6
-rw-r--r-- 1 root root 0 Jan 4 21:09 /proc/sys/net/ipv6/conf/all/disable_ipv6
docker:
$ ls -ld /proc/sys/net/ipv6/conf/all/disable_ipv6
-rw-r--r-- 1 root root 0 Jan 5 05:05 /proc/sys/net/ipv6/conf/all/disable_ipv6
This is a fresh, brand new container.
Docker version:
$ docker --version
Docker version 18.09.0, build 4d60db4
What am I missing?
A hackish solution is to add extended privileges to the container with --privileged:
$ docker run --rm -ti centos \
bash -c "echo 1 | tee /proc/sys/net/ipv6/conf/all/disable_ipv6"
tee: /proc/sys/net/ipv6/conf/all/disable_ipv6: Read-only file system
1
vs
$ docker run --privileged --rm -ti centos \
bash -c "echo 1 | tee /proc/sys/net/ipv6/conf/all/disable_ipv6"
1
You can use --cap-add to grant a precise privilege instead of --privileged.
However, --sysctl looks like the best solution, rather than hacking networking in the container with --privileged:
$ docker run --sysctl net.ipv6.conf.all.disable_ipv6=1 \
--rm -ti centos bash -c "cat /proc/sys/net/ipv6/conf/all/disable_ipv6"
1

How to fix "dial unix /var/run/docker.sock: connect: permission denied" when group permissions seem correct?

I'm suddenly having issues after an update of Ubuntu 18.04: previously I've used docker without issue on the system, but suddenly I cannot. As far as I can tell, the permissions look correct:
$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
$ ls -last /var/run/docker.sock
0 srw-rw---- 1 root docker 0 Jul 14 09:10 /var/run/docker.sock
$ whoami
brandon
$ cat /etc/group | grep docker
docker:x:995:brandon
nvidia-docker:x:994:
EDIT:
Group information:
$ groups
brandon
$ groups brandon
brandon : brandon adm cdrom sudo dip plugdev games lpadmin sambashare docker
$ whoami
brandon
Update
Since the original post where I upgraded a system from 17.04 to 18.04, I've done two upgrades from 16.04 to 18.04, and neither of the later systems had the issue. So it might be something to do with the 17.04 to 18.04 upgrade process. I've yet to perform a fresh 18.04 installation.
sudo setfacl --modify user:<user name or ID>:rw /var/run/docker.sock
It doesn't require a restart and is more secure than usermod or chown.
As @mirekphd pointed out, the user ID is required when the user name only exists inside the container, but not on the host.
add the user to the docker group.
sudo usermod -aG docker $USER
sudo reboot
Just give the right permissions to the docker.sock file:
sudo chmod 666 /var/run/docker.sock
The way to fix it is to run:
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
that works for me :)
Ubuntu 18.04:
sudo setfacl --modify user:$USER:rw /var/run/docker.sock
Somehow, I found this page when I didn't have the correct permissions on my docker.sock after installing Docker via snap. So, if you have the same issue, read on.
sudo adduser $USER docker does not work, because the group is "root", not "docker":
$ ls -l /var/run/docker.sock
srw-rw---- 1 root root 0 Jul 11 09:48 /var/run/docker.sock
so it would have to be sudo adduser $USER root.
On a machine where Docker was not installed via snap, the group is "docker":
# ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jul 3 04:18 /var/run/docker.sock
The correct way, according to docker.help, is to run the following BEFORE sudo snap install docker:
$ sudo addgroup --system docker
$ sudo adduser $USER docker
$ newgrp docker
Then the group will be "docker":
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jul 11 10:59 /var/run/docker.sock
Source: https://github.com/docker-archive/docker-snap/issues/1 (yes, the first issue :D)
The easiest way to fix it is to run:
$ sudo setfacl -m "g:docker:rw" /var/run/docker.sock
And then, as already mentioned, run the following commands for your user:
$ sudo addgroup --system docker
$ sudo adduser $USER docker
$ newgrp docker
That's it :) Have fun!
I did the quick fix and it worked immediately.
sudo chmod 777 /var/run/docker.sock
It looks like a permission issue:
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
sudo setfacl -m "g:docker:rw" /var/run/docker.sock
Or simply use the command below, which will fix this issue:
sudo chmod -x /var/run/docker.sock
Specific to Ubuntu, there is a known issue with lightdm that removes secondary groups from the user as part of the GUI login. You can follow that issue here: https://bugs.launchpad.net/lightdm/+bug/1781418
You can try switching off of lightdm or apply the workaround mentioned in the bug report:
[Comment out the below lines from /etc/pam.d/lightdm:]
auth optional pam_kwallet.so
auth optional pam_kwallet5.so
Temporary options include logging into your machine with something like an ssh or su -l command, or running the newgrp docker command. These will only affect the current shell and would need to be done again with each new terminal.
Outside of this issue, the general commands to give a user direct access to the docker socket (and therefore root access to the host) are:
sudo usermod -aG docker $(id -un) # you can often use $USER in place of the id command
newgrp docker # affects the current shell, logging out should affect all shells
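Since several of the answers here hinge on whether the group change has actually reached your shell, a quick check (standard id and grep, nothing docker-specific) is:

```shell
# Group membership is captured at login: usermod -aG changes /etc/group,
# but a running shell keeps its old group list until newgrp or re-login.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active in this shell"
else
  echo "docker group not active in this shell"
fi
```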
I was able to solve this on my Linux Machine using the below command.
> sudo setfacl --modify user:$USER:rw /var/run/docker.sock
Note: please check that you have sudo access; otherwise this command will fail.
How to check sudo access?
$ whoami
> rahul
$ groups
> useracc
$ groups useracc
> Here you can see sudo and other access details.
I fixed this issue by using the following command:
sudo chmod -x /var/run/docker.sock
Please note: not only the group name is important, but apparently also the gid of the group.
So if the docker group on the host system has a gid of e.g. 995:
cat /etc/group | grep docker
docker:x:995:brandon
you must make sure the docker group inside your container has the same gid. You can do this as part of a launch script, or simply by using exec and doing it manually:
groupmod -g 995 docker
Hope this helps anyone who comes here; it took me a while to find this answer.
This issue is resolved by following the below process.
Check whether the "docker" group is created or not:
cmd: cat /etc/group | grep docker
output: docker:x:995:
Check the permissions of the "/var/run/docker.sock" file:
cmd: ls -l /var/run/docker.sock
output: srw-rw---- 1 root root 0 Jul 14 09:10 /var/run/docker.sock
Add the docker group to the "/var/run/docker.sock" file:
cmd: sudo setfacl -m "g:docker:rw" /var/run/docker.sock
output: srw-rw---- 1 root docker 0 Jul 14 09:10 /var/run/docker.sock
Now it will work; if possible, restart the docker service.
To restart the docker service:
cmd: sudo systemctl restart docker

Permission denied when getting contents generated by a docker container on the local filesystem

I use the following command to run a container:
docker run -it -v /home/:/usr/ ubuntu64 /bin/bash
Then I run a program in the container. The program generates some files in /usr/, which also appear in /home/, but outside the container I can't access the generated files: Permission denied.
I think this may be because the files are generated by root inside the container, while outside the container my user has no root privileges. How do I solve it?
What I want to do is access the files generated by the program (installed in the container) from outside the container.
You need to use the -u flag:
docker run -it -v $PWD:/data -w /data alpine touch nouser.txt
docker run -u `id -u` -it -v $PWD:/data -w /data alpine touch onlyuser.txt
docker run -u `id -u`:`id -g` -it -v $PWD:/data -w /data alpine touch usergroup.txt
Now if you do ls -alh on the host system:
$ ls -alh
total 8.0K
drwxrwxr-x 2 vagrant vagrant 4.0K Sep 9 05:22 .
drwxrwxr-x 30 vagrant vagrant 4.0K Sep 9 05:19 ..
-rw-r--r-- 1 root root 0 Sep 9 05:21 nouser.txt
-rw-r--r-- 1 vagrant root 0 Sep 9 05:21 onlyuser.txt
-rw-r--r-- 1 vagrant vagrant 0 Sep 9 05:22 usergroup.txt
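The pattern generalizes: the filesystem stores only numeric ids, and each side (host or container) resolves them against its own /etc/passwd, which is why the same file can show vagrant on the host and root or no name at all inside a container. The same mechanics can be seen without Docker:

```shell
# New files get the creating process's numeric uid/gid; the names shown
# by ls are looked up separately, in the /etc/passwd of whoever runs it.
tmp=$(mktemp -d)
touch "$tmp/f"
stat -c '%u:%g' "$tmp/f"   # prints the same pair as "$(id -u):$(id -g)"
rm -rf "$tmp"
```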
