Docker container fails to write to a (non root) folder

I run a docker container to extract files from a source folder into a destination folder. The source folder resides in my user's home directory, so reading from and writing to it is not a problem. The destination folder, on the other hand, is accessible only by a nonrootuser.
When I run the docker container as the nonrootuser, I cannot write to the container's folders (permission denied).
When I run the container as my own user, I cannot write to the destination folder.
Setup
I build the image like this
docker build -t lftp .
based on the following Dockerfile:
Dockerfile
FROM debian:10
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install lftp dos2unix man
# Adding the scripts
COPY scripts /scripts
WORKDIR /work
# Adding the nonrootuser and his uid (`id -u nonrootuser`)
RUN useradd -u 47001 nonrootuser && mkhomedir_helper nonrootuser
Then I run the container, binding the following volumes:
download_folder
destination_folder <-> this folder needs to be accessed by the nonrootuser
docker run -ti --rm --name=lftp_untar -u `id -u nonrootuser`:`id -g nonrootuser` -v ${download_folder}:/source -v ${destination_folder}:/target lftp bash /scripts/execute_untar.sh /source /target
Where:
execute_untar.sh
#!/bin/bash
source=$1
target=$2
if [ ! -d "$source" ]; then
  echo "Can't access $source"
  exit 1
fi
if [ ! -d "$target" ]; then
  echo "Can't access $target"
  exit 1
fi
if [ ! -w "$target" ]; then
  echo "Can't write to $target"
  exit 1
fi
# Then Read files from /scripts and /work folder
exclude_file=$(readlink -f /scripts/exclude.txt)
log_file=$(readlink -f untar.log)

The permission-denied issue has to do with the fact that the directories you mount with
-v ${destination_folder}:/target
-v ${download_folder}:/source
require root permissions from the perspective of the container environment. Also take a look at Can I control the owner of a bind-mounted volume in a docker image?
I would suggest mounting the target and source folders under the nonrootuser's home directory when you run the container, in order to match their permissions. This way you will have the needed write access:
docker run -ti --rm --name=lftp_untar -u `id -u nonrootuser`:`id -g nonrootuser` -v ${download_folder}:/home/nonrootuser/source -v ${destination_folder}:/home/nonrootuser/target lftp bash /scripts/execute_untar.sh /home/nonrootuser/source /home/nonrootuser/target
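If the target is still not writable, a quick sanity check (not part of the original answer, reusing the same variable names) is to compare the host-side ownership of the destination folder with the UID/GID passed to docker run -u:

# Assumed check: the owner of the destination folder on the host should match
# the UID/GID that the container process runs as.
stat -c '%u:%g' "${destination_folder}"
echo "$(id -u nonrootuser):$(id -g nonrootuser)"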

Related

Docker entrypoint user switch

I am creating a docker image to be used as base for other applications. The requirements are:
application must run as non root user
optionally, certificates must be loaded before executing the application
I created the following Dockerfile
FROM node:14.15.1-alpine3.11
# Specify node/npm related envs
ENV NPM_CONFIG_LOGLEVEL=warn \
NO_UPDATE_NOTIFIER=1
# Change cwd for next commands
WORKDIR /home/node/code
# Set local registry
RUN echo "registry=http://192.168.100.175:4873" > /home/node/.npmrc && \
chown -R node:node /home/node && \
apk add --update --no-cache tzdata=2021a-r0 ca-certificates=20191127-r2 su-exec=0.2-r1
# Need root to update CA certificates in entrypoint.sh and then switch back to restricted user
USER root
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT [ "./entrypoint.sh" ]
# Execute the service entrypoint
CMD ["sh"]
and entrypoint.sh
#!/bin/sh
DIR_CRT="/home/node/certificates"
if [ "$(ls -A ${DIR_CRT})" ]; then
cp -r "${DIR_CRT}/." /usr/local/share/ca-certificates/
update-ca-certificates
echo "******* Updated CA certificates *******"
fi
exec su-exec node "$@"
This seems to cover the requirements, but I noticed that if I open a shell inside the image it always runs as the node user, even if I specify a different one on the command line:
$ docker run --rm -it -u root docker.repo.asts.com/scc-2.0/app-tg:1.6.0-beta50 whoami
node
Is it possible to keep both requirements and still be able to execute a direct command as the requested user?
docker run -u xxx works only if you did not use exec in the entrypoint to change the user of PID 1. E.g.
$ docker run --rm -it node:14.15.1-alpine3.11 whoami
root
$ docker run --rm -u node -it node:14.15.1-alpine3.11 whoami
node
After you use exec su-exec node "$@" to change the user to node, there is no way to use -u xxx anymore. The only option is to override the entrypoint as shown next, but I don't see much point in that:
docker run --rm -u root --entrypoint=/bin/sh xxx
But you could still use docker exec -u root or docker exec -u node to get a shell for that user in an existing container.
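If you do want docker run -u to be honoured as well, one possible variation (a sketch, not the original answer's code) is to drop privileges in the entrypoint only when the container actually starts as root:

#!/bin/sh
# Sketch: keep the certificate update, but only switch to the node user when
# the container was started as root, so `docker run -u <user>` is respected.
DIR_CRT="/home/node/certificates"
if [ "$(id -u)" = "0" ]; then
    if [ "$(ls -A ${DIR_CRT})" ]; then
        cp -r "${DIR_CRT}/." /usr/local/share/ca-certificates/
        update-ca-certificates
        echo "******* Updated CA certificates *******"
    fi
    exec su-exec node "$@"
fi
exec "$@"

With this variant, docker run --rm -it -u root <image> whoami would print root, while a default start (as root) still ends up running the command as node.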

docker volume masks parent folder in container?

I'm trying to use a Docker container to build a project that uses Rust; I'm trying to build as my user. I have a Dockerfile that installs Rust in $HOME/.cargo, and then I'm trying to docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host into the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged -v /mnt:/mnt -v /dev:/dev --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw --workdir /home/stefan/<path/to/project> --name <container-name> -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro -u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround that lets me map the source code from a folder under $HOME on the host, but keep the container's $HOME?
I'm on Ubuntu 18.04, docker 19.03.12, on x86-64.
The Dockerfile expands the variable during the build, where the user is root, so your own user does not exist inside the image.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work - there is no /home/$USER in the container; mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any directory. I could've mapped $HOME from the host, but then the container would control the Rust versions on the host and would not be as self-contained. I also ended up needing to install Rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
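For reference, a minimal sketch of such a non-standard install (the paths here are illustrative, not the exact ones from my setup):

# Sketch with illustrative paths: point CARGO_HOME and RUSTUP_HOME at a
# writable location before running the rustup installer.
export CARGO_HOME=/opt/cargo
export RUSTUP_HOME=/opt/rustup
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain "$RUST_VERSION"
export PATH="$CARGO_HOME/bin:$PATH"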

Can't Delete file created via Docker

I used a docker image to run a program on our school's server using this command.
docker run -t -i -v /target/new_directory 990210oliver/mycc.docker:v1 /bin/bash
After I ran it, it created a directory on my account called new_directory. Now I don't have permissions to delete or modify the files.
How do I remove this directory?
I also had this problem.
After:
docker run --name jenkins -p 8080:8080 -v $HOME/jenkins:/var/jenkins_home jenkins jenkins
I couldn't remove files in $HOME/jenkins.
Ricardo Branco's answer didn't work for me because chown gave me:
chown: changing ownership of '/var/jenkins_home': Operation not permitted
Solution:
exec /bin/bash into the container as the root user:
docker exec -it --privileged --user root container_id /bin/bash
then:
cd /var/jenkins_home/ && rm -r * .*
I made @siulkilulki's answer into one line:
docker exec --privileged --user root <CONTAINER_ID> chown -R "$(id -u):$(id -g)" <TARGET_DIR>
Note that the container must be running for this to work.
Change the owner of all the files in the directory to your user ID from within the container running as root, then exit the container and remove the directory.
docker run --rm -v /target/new_directory 990210oliver/mycc.docker:v1 chown -R $(id -u):$(id -g) /target/new_directory
exit
rm -rf $HOME/new_directory
I had the same problem. I am using Ubuntu 18.04. I ran the following commands and was then able to delete the files locally. (I have an app dir inside the docker project dir.)
cd to your docker project dir
sudo chown -R $(whoami):$(whoami) app/
docker run -v {absolute path to dir with the file}:/to_delete -it ubuntu /bin/bash
Then just:
$ cd to_delete
$ rm -rf <file/dir>
Here is a solution that does not require --privileged.
Game Plan
Determine the UIDs of all offending files created by previous docker runs. Use docker to find them, since an in-container UID is not the same as the host UID. An offending file is any file not owned by the container user root, which maps to the current user running docker.
Run a container using each discovered UID and delete the offending files (or chown them).
Code
# Assumes that the current dir is the volume
# find files owned by docker-internal UIDs (not root) on the mounted volume:
BAD_FILE_UIDS=$(docker run --rm -v $(pwd):/build alpine sh -c 'find /build -mindepth 1 -not -user root | xargs stat -c "%u" | sort -u')
if [ -n "${BAD_FILE_UIDS}" ] ; then
  for uid in $BAD_FILE_UIDS ; do
    echo "Cleaning up files owned by $uid using docker"
    docker run --rm -v $(pwd):/build --user $uid:0 alpine find /build -mindepth 1 -user $uid -delete
  done
fi
You can change the -delete to -exec chown SOME_USER {} \; to chown the files instead (note that changing ownership requires running the cleanup container as root rather than as $uid).
The above works well for use in CI as post-build cleanup.
Try this:
docker stop $CONTAINER_NAME
docker rm -v $CONTAINER_NAME
I guess this should remove the mounted dir. If it doesn't, do this explicitly:
sudo rm -rf /target/new_directory
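As a side note (not part of the original answer), if the run also left anonymous volumes behind, those can be listed and removed explicitly:

# List dangling (unreferenced) volumes and remove one by name.
docker volume ls -f dangling=true
docker volume rm <volume_name>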

Unable to find user root: no matching entries in passwd file in Docker

I have containers for multiple Atlassian products; JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I'm usually using:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure, and the script is running once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines towards the Bitbucket container the result is:
unable to find user root: no matching entries in passwd file
It's failing on the docker cp command, but only for the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' (defined in the Dockerfile) and 'root' users.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that; I think the script may have removed or disabled the root user.
This issue is caused by a docker engine bug, but it is tracked privately and Docker is asking users to restart the engine!
It seems that the bug is likely to be older than two years!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue, replicated on my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but it does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error

Why is /etc/hosts file empty in my docker container?

I created a minimal docker container, following https://github.com/snoyberg/haskell-scratch, containing a single Haskell application. When run, the application works fine, except that it cannot resolve hosts from /etc/hosts because the file is empty, which implies linking does not work correctly (or at least that I need to use numeric addresses, which is impractical...).
I can see the file pointed at by HostsPath in container config is correctly populated but it seems it gets overwritten at some point when container starts.
docker version is 1.6.2 on Mac OS X Yosemite.
Container is built in several stages. First stage builds a container with a specially populated filesystem:
FROM ubuntu:trusty
MAINTAINER arnaud@capital-match.com
RUN apt-get install -qqy libgmp-dev netbase
ADD . /
RUN chmod +x /create_rootfs.sh
RUN /create_rootfs.sh
the create_rootfs.sh file contains the following:
#!/bin/sh
ROOTFS=/rootfs
echo "Creating directories"
mkdir -p /rootfs/bin
mkdir -p /rootfs/lib
mkdir /rootfs/lib/x86_64-linux-gnu
mkdir /rootfs/lib64
mkdir -p /rootfs/usr/lib/x86_64-linux-gnu/gconv
# mkdir -p /rootfs/etc
echo "Copying library files"
cp -L /bin/sh /rootfs/bin/
#cp -L /etc/protocols /rootfs/etc
#cp -L /etc/services /rootfs/etc
cp -L /lib/x86_64-linux-gnu/libc.so.6 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libdl.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libm.so.6 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libpthread.so.0 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libutil.so.1 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/librt.so.1 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libz.so.1 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libnss_files.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libnss_dns.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libresolv.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib64/ld-linux-x86-64.so.2 /rootfs/lib64/
cp -L /usr/lib/x86_64-linux-gnu/gconv/UTF-16.so /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/UTF-7.so /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/gconv-modules /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/libgmp.so.10 /rootfs/usr/lib/x86_64-linux-gnu/
Then I export the content of this filesystem to build a new image:
docker run capitalmatch/tinybuilder tar -cC /rootfs . | docker import - capitalmatch/tiny
The final container is built from "tiny", adding some .tar.gz files. It is then run as:
docker run --link stunnel:monitor capitalmatch/app
The stunnel container is run as:
docker run --name=stunnel -p 5555:5555 -v $(pwd)/stunnel:/etc/stunnel capitalmatch/stunnel
I expect /etc/hosts to contain an entry for monitor, which is indeed the case before it is mounted. When I run another container built in a more "classical" way, e.g. based on ubuntu:trusty, I find the /etc/hosts file to be correctly populated and everything works fine, so I suspect it is the way the container is built that gets in the way.
/etc/hosts is regenerated every time, based on how you run your container.
Moreover, if you put something into this file in the Dockerfile, it will last until the end of the build process across all layers, but it will be wiped out when the container is started.
Editing networking config files
Starting with Docker v.1.2.0, you can now edit /etc/hosts, /etc/hostname and /etc/resolv.conf in a running container. This is useful if you need to install bind or other services that might override one of those files.
Note, however, that changes to these files will not be saved by docker commit, nor will they be saved during docker run. That means they won't be saved in the image, nor will they persist when a container is restarted; they will only "stick" in a running container.
source: https://docs.docker.com/articles/networking/#editing-networking-config-files
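Since the file is generated when the container starts, a quick diagnostic (a suggestion on top of the answer above, assuming the stunnel container is already running) is to inspect /etc/hosts in a throwaway linked container:

# A working --link should produce a "monitor" entry in the generated /etc/hosts.
docker run --rm --link stunnel:monitor ubuntu:trusty cat /etc/hosts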
If the /etc/hosts file in your container doesn't contain the expected entries, it probably means that the container is not being initialized properly.
Please provide information on how you actually run your containers or, to keep things simple, just a prepared docker-compose.yml file.
I don't have an answer but I have a workaround: use
FROM busybox
...
and everything works ok.
