Why does a simple Dockerfile give "permission denied"?

I am learning to use Docker with ROS, and I am surprised by this error message:
FROM ros:kinetic-robot-xenial
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
RUN apt-get update
This gives the following error message:
Step 7/7 : RUN apt-get update
---> Running in 95c40d1faadc
Reading package lists...
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
The command '/bin/sh -c apt-get update' returned a non-zero code: 100

apt-get generally needs to run as root, but once you've run a USER command, commands don't run as root any more.
You'll frequently run commands like this at the start of the Dockerfile: you want to take advantage of Docker layer caching if you can, and you'll usually be installing dependencies the rest of the Dockerfile needs. Also for layer-caching reasons, it's important to run apt-get update and other installation steps in a single step. So your Dockerfile would typically look like
FROM ros:kinetic-robot-xenial
# Still root
RUN apt-get update \
&& apt-get install ...
# Copy in application (still as root, won't be writable by other users)
COPY ...
CMD ["..."]
# Now as the last step create a user and default to running as it
RUN adduser ros
USER ros
If you need to, you can explicitly USER root to switch back to root for subsequent commands, but it's usually easier to read and maintain Dockerfiles with less user switching.
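For example, a brief switch back for one install step might look like this (a sketch; the package name is a placeholder):
# temporarily become root again for an install step
USER root
RUN apt-get update && apt-get install -y some-extra-package
# drop back to the unprivileged user
USER ros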
Also note that neither sudo nor user passwords are really useful in Docker. It's hard to use sudo in a script in general, and a lot of what happens in Docker happens in scripts. Containers also almost never run things like getty or sshd that could accept user passwords, and any password you set in a Dockerfile is trivial to read back from docker history, so there's no point in setting one. Conversely, if you're in a position to get a shell in a container, you can always pass -u root to the docker run or docker exec command to get a root shell.
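For example, assuming a running container named my-container:
docker exec -u root -it my-container /bin/bash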

Switch to the root user with:
USER root
and then every command should work.

Try putting this line at the end of your Dockerfile:
USER $USERNAME
Once this line appears in the Dockerfile, subsequent commands run with that user's permissions (and in this case the user doesn't need to install anything). Before that line, you are root by default.
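Applied to the Dockerfile from the question, a sketch of that reordering:
FROM ros:kinetic-robot-xenial
# root-only steps first
RUN apt-get update
# then create the non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
ENV HOME /home/$USERNAME
# everything after this line runs as ros
USER $USERNAME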

You add the user ros to the group sudo, but you then try to run apt-get update without using sudo. The command therefore runs unprivileged, and you get the permission denied error.
Use sudo to run the command:
FROM ros:kinetic-robot-xenial
RUN whoami
RUN apt-get update
# create non-root user
RUN apt-get install sudo
RUN echo "ros ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
RUN whoami
RUN sudo apt-get update
All in all, that does not make much sense. It is OK to prepare a Docker image (e.g. install software) as the root user. If you are concerned about security (which is a good thing), leave out the sudo setup and instead make sure that the process(es) that run when the image is executed (i.e. when the container is created) run as your unprivileged user.
Also consider multi-stage builds if you want to separate the preparation of the image from the actual runnable thing:
https://docs.docker.com/develop/develop-images/multistage-build/
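A minimal sketch of that split (the build commands, paths, and app name are placeholders, not from the question):
# stage 1: prepare and build as root
FROM ros:kinetic-robot-xenial AS build
RUN apt-get update && apt-get install -y build-essential
COPY . /src
RUN make -C /src
# stage 2: copy only the built artifact and run it unprivileged
FROM ros:kinetic-robot-xenial
COPY --from=build /src/app /usr/local/bin/app
RUN adduser --disabled-password --gecos "" ros
USER ros
CMD ["app"]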

Related

Docker: run privileged background process as unprivileged user

I'm building a Dockerfile where I want to run a background process as a privileged user, but then switch to an unprivileged user for the entrypoint.
Like so:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y openvpn
COPY some.ovpn some.ovpn
RUN openvpn --config some.ovpn --daemon
RUN useradd -ms /bin/bash newuser
USER 1000
I know that the process won't exist anymore once I start the container - that's why I need a service or something like that.
What have I tried
setuid
supervisord
systemctl (didn't get a working PoC)
add the command to sudoers (didn't get a working PoC)
Now I'm thinking of cron jobs - but there has to be a much easier solution for running a root process in the background while keeping the container unprivileged.
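For what it's worth, one common pattern (a sketch, not from the original post; it assumes the util-linux runuser tool that ships with ubuntu:latest) is to stay root, start the daemon from an entrypoint script at container start, and only then drop privileges:
#!/bin/sh
# docker-entrypoint.sh (hypothetical): runs as root when the container starts
set -e
# start the VPN daemon now, at run time rather than build time
openvpn --config /some.ovpn --daemon
# hand the main command over to the unprivileged user
exec runuser -u newuser -- "$@"
The Dockerfile would then drop its USER 1000 line and instead end with COPY docker-entrypoint.sh / and ENTRYPOINT ["/docker-entrypoint.sh"].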

Docker build with same Dockerfile produces different results on Windows vs. Linux

I have a simple Dockerfile based on eclipse-temurin:11.0.15_10-jre-focal which is just creating a user and copying a jar and some config files into the user's home directory:
FROM eclipse-temurin:11.0.15_10-jre-focal
RUN apt-get update -y && apt-get install -y vim-tiny iputils-ping && rm -rf /var/lib/apt/lists/*
ARG APP_USR=bulkload
RUN useradd --user-group --create-home --base-dir /opt --shell /bin/bash $APP_USR
USER $APP_USR
COPY --chown=$APP_USR:$APP_USR target/${project.artifactId}-${project.version}-all.jar src/test/resources/bulk-load-config.json /opt/$APP_USR/
COPY --chown=$APP_USR:$APP_USR src/test/resources/*.properties /opt/$APP_USR/config/
COPY --chown=$APP_USR:$APP_USR content /opt/$APP_USR/content/
CMD ["bash"]
When I build it on Windows ("DockerVersion": "20.10.14"), everything is as expected. When I build it using Azure DevOps pipeline ("DockerVersion": "20.10.11" on Linux), there are anomalies:
User's home directory is owned by root
All the files and directories copied via COPY command are also owned by root (in spite of --chown switch)
I don't understand this behavior. The useradd command is executed inside the container, so it shouldn't matter whether the build is done on Windows or Linux. Furthermore, I would expect the COPY commands to fail if the --chown flag couldn't be applied, but they didn't. I suppose I must be doing something wrong, but what?
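One way to see what each build actually produced is to list the numeric owners inside the image (the image name here is a placeholder):
docker run --rm my-image ls -ln /opt/bulkload
One plausible explanation (an assumption, not confirmed by the question) is that the two environments used different builders: BuildKit is the default in Docker Desktop on Windows at that version, while the legacy builder is still the default on Linux, and the two builders have not always handled variable expansion in COPY --chown identically.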

Variables in Dockerfile don't seem to be recognized?

I am building an image using a Dockerfile. I would like to set the username of the container via the command line to avoid permission issues.
The Dockerfile is shown below; it uses the variables USER_NAME, USER_ID, and GROUP_ID. But when I build, the problem keeps appearing.
The error is: groupadd: option '--gid' requires an argument
I'm guessing that both ${GROUP_ID} and ${USER_NAME} are being treated as empty strings, but shouldn't they be assigned values when the container is created?
I've googled a few examples, and based on them I don't quite see where the problem is.
Please help me!
Thanks!
FROM matthewfeickert/docker-python3-ubuntu:latest
ARG USER_NAME
ARG USER_ID
ARG GROUP_ID
RUN groupadd -r --gid ${GROUP_ID} ${USER_NAME}
RUN useradd --no-log-init -r -g ${GROUP_ID} -u ${USER_ID} ${USER_NAME}
USER ${USER_NAME}
WORKDIR /usr/local/src
When you run the container, you can specify an arbitrary user ID with the docker run -u option.
docker run -u 1003 ... my-image
This doesn't require any special setup in the image. The user ID won't exist in the container's /etc/passwd file but there aren't really any consequences to this, beyond some cosmetic issues with prompts in interactive debugging shells.
A typical use of this is to give your container access to a bind-mounted data directory:
docker run \
-e DATA_DIR=/data \
-v "$PWD/app-data:/data" \
-u $(id -u) \
... \
my-image
I'd generally recommend not passing a specific user ID into your image build. This would make the user ID "baked in", and if someone with a different host uid wanted to run the image, they'd have to rebuild it.
It's often a good practice to set up some non-root user, but it doesn't matter what its user ID is so long as it's not zero. In turn, it's also typically a good practice to leave most of your application source code owned by the root user so that the application can't accidentally overwrite itself.
FROM matthewfeickert/docker-python3-ubuntu:latest
# Create an arbitrary non-root user; we don't care about its uid
# or other properties
RUN useradd --system user
# Still as root, do the normal steps to install and build the application
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
# Still as root, make sure the data directory exists
ENV DATA_DIR=/data
RUN mkdir "$DATA_DIR" && chown user "$DATA_DIR"
# VOLUME ["/data"]
# Normal metadata to run the container, only switching users now
EXPOSE 5000
USER user
CMD ["./app.py"]
This setup will still work with the extended docker run command shown initially: the docker run -v option will cause the container's /data directory to take on its numeric uid owner from the host, which (hopefully) matches the docker run -u uid.
You can pass the build args as shown below.
docker build --build-arg USER_NAME=test --build-arg USER_ID=805 --build-arg GROUP_ID=805 -t tag1 .
Also, as a best practice, consider adding default values to the args, so that if the user doesn't specify them the defaults will be picked up.
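For example (the names and IDs are just illustrative defaults):
# defaults apply when no --build-arg is passed
ARG USER_NAME=appuser
ARG USER_ID=1000
ARG GROUP_ID=1000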

Validating: problems found while running docker build

When I try to build a Docker image using docker build -t audio:1.0.1 ., it builds an image (with an IMAGE ID, but not the name I intended during the build) that automatically runs and stops (but does not get removed) immediately after the build process finishes.
The image shows up, without a TAG or a REPOSITORY, when I execute docker images.
How do I troubleshoot this to build a "normal" image?
My Docker version is 18.09.1, and I am using it on macOS Mojave Version 10.14.1
Following is the content of my Dockerfile:
FROM ubuntu:latest
# Run a system update to get it up to speed
# Then install python3 and pip3 as well as redis-server
RUN apt-get update && apt-get install -y python3 python3-pip \
&& pip3 install --trusted-host pypi.python.org jupyter \
&& jupyter nbextension enable --sys-prefix widgetsnbextension
# Create a new system user
RUN useradd -ms /bin/bash audio
# Change to this new user
USER audio
# Set the container working directory to the user home folder
# WORKDIR /home/jupyter
WORKDIR /home/audio
EXPOSE 8890
# Start the jupyter notebook
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8890"]
You have the error right there in the build output: useradd failed to create the group because it already exists, so the docker build was aborted. Note that the audio group is a system one, so maybe you don't want to use it.
So either create a user with a different name, or pass -g audio to the useradd command so it uses the existing group.
If you need to make the user creation conditional then you can use the getent command to check the user/group existence, for example:
# option 1: create the user only if it doesn't already exist
RUN getent passwd audio > /dev/null || useradd -ms /bin/bash audio
# option 2: create the user, reusing the existing audio group if there is one
RUN if getent group audio > /dev/null; then useradd -ms /bin/bash -g audio audio; else useradd -ms /bin/bash audio; fi

How to view GUI apps from inside a docker container

When I try to run a GUI, like xclock for example I get the error:
Error: Can't open display:
I'm trying to use Docker to run a ROS container, and I need to see the GUI applications that run inside of it.
I did this once just using a Vagrant VM and was able to use X11 to get it done.
So far I've tried putting ways #1 and #2 into a Dockerfile based on the info here:
http://wiki.ros.org/docker/Tutorials/GUI
Then I tried copying most of the dockerfile here:
https://hub.docker.com/r/mjenz/ros-indigo-gui/~/dockerfile/
Here's my current docker file:
# Set the base image to use to ros:kinetic
FROM ros:kinetic
# Set the file maintainer (your name - the file's author)
MAINTAINER me
# Set ENV for x11 display
ENV DISPLAY $DISPLAY
ENV QT_X11_NO_MITSHM 1
# Install an x11 app like xclock to test this
RUN apt-get update
RUN apt-get install x11-apps --assume-yes
# Stuff I copied to make a ros user
ARG uid=1000
ARG gid=1000
RUN export uid=${uid} gid=${gid} && \
groupadd -g ${gid} ros && \
useradd -m -u ${uid} -g ros -s /bin/bash ros && \
passwd -d ros && \
usermod -aG sudo ros
USER ros
WORKDIR /home/ros
# Sourcing this before .bashrc runs breaks ROS completions
RUN echo "\nsource /opt/ros/kinetic/setup.bash" >> /home/ros/.bashrc
# Copy entrypoint script into the image, this currently echos hello world
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
My personal preference is to inject the DISPLAY variable and share the X11 unix socket with something like:
docker run -it --rm -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /etc/localtime:/etc/localtime:ro \
my-gui-image
Sharing the localtime just allows the timezone to match up as well; I've been using this for email apps.
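Depending on the host's X server access control, you may also need to permit local connections first, for example (assumption: a host running Xorg with xhost available):
xhost +local: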
The other option is to spin up a VNC server, run your app on that server, and then connect to the container with a VNC client. I'm less a fan of that one since you end up with two processes running inside the container making signal handling and logs a challenge. It does have the advantage that the app is better isolated so if hacked, it doesn't have access to your X display.
