docker volume masks parent folder in container?

I'm trying to use a Docker container to build a project that uses Rust, and I'm trying to build as my own user. I have a Dockerfile that installs Rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged -v /mnt:/mnt -v /dev:/dev --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw --workdir /home/stefan/<path/to/project> --name <container-name> -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro -u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround that lets me map the source code from a folder under $HOME on the host while keeping the container's own $HOME?
I'm on Ubuntu 18.04, Docker 19.03.12, x86-64.

The Dockerfile is expanded on the build machine, where the user is root, so your host user doesn't exist inside the image.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \

I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the Rust install from there, i.e. from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work: there is no /home/$USER in the container. Mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create a home directory. I could've mapped $HOME from the host, but then the container would control the Rust versions on the host and would not be self-contained. I also ended up needing to install Rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that (a sketch follows).
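A minimal sketch of that non-standard install, assuming a writable scratch path; the /build/toolchain location and the use of rustup's --no-modify-path flag are my choices, not part of the original script:
export RUSTUP_HOME=/build/toolchain/rustup   # both variables are honored by rustup and cargo
export CARGO_HOME=/build/toolchain/cargo
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain "$RUST_VERSION"
export PATH="$CARGO_HOME/bin:$PATH"          # cargo/rustc binaries land in $CARGO_HOME/bin
--no-modify-path keeps the installer away from the non-writable $HOME/.bashrc.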

Related

Using current user when running container in docker-compose

Is there a way to exec into a bash shell of a specific container as the current user? I tried running docker-compose exec -u $USER phoenix bash but it says unable to find user raz: no matching entries in passwd file
I tried another way, adding user-creation commands to the Dockerfile.
FROM elixir:latest
ARG USER_ID
ARG GROUP_ID
RUN addgroup --gid $GROUP_ID raz
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID raz
USER raz
RUN apt-get update && \
apt-get install -y postgresql-client && \
apt-get install -y inotify-tools && \
apt-get install -y nodejs && \
curl -L https://npmjs.org/install.sh | sh && \
mix local.hex --force && \
mix archive.install hex phx_new 1.5.3 --force && \
mix local.rebar --force
COPY . /app
WORKDIR /app
COPY ./entrypoint.sh /entrypoint.sh
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT ["/entrypoint.sh"]
but when I run docker-compose build I get a permission denied error when running the apt-get commands.
I also looked at gosu as a way to step down from root, but it seems complicated.
Is it possible for the user added in the Dockerfile to have the same permissions as my current user?
I'm running WSL2 btw.
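For what it's worth, the permission denied during docker-compose build happens because the apt-get RUN comes after USER raz, and apt-get needs root. A reordered sketch of the same Dockerfile, untested, with the root-only steps done before dropping privileges:
FROM elixir:latest
ARG USER_ID
ARG GROUP_ID
# root-only steps first: package installation needs root
RUN apt-get update && \
    apt-get install -y postgresql-client inotify-tools nodejs && \
    curl -L https://npmjs.org/install.sh | sh
RUN addgroup --gid $GROUP_ID raz && \
    adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID raz
# drop privileges only after the root-only work is done
USER raz
# mix writes under $HOME, so it can run as the unprivileged user
RUN mix local.hex --force && \
    mix archive.install hex phx_new 1.5.3 --force && \
    mix local.rebar --force
# COPY, WORKDIR, and ENTRYPOINT lines as in the original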
This question is pretty interesting. Let me begin with a short explanation:
Understanding the problem
In fact, a user that exists inside a container is valid only inside that container. What you're trying to do is use a user that exists outside the container, i.e. on your Docker host, inside a container. Unfortunately that can't be done in the normal way.
For instance, let me try to change to my user in order to get this container:
$ docker run -it --rm --user jon ubuntu whoami
docker: Error response from daemon: unable to find user jon: no matching entries in passwd file.
I tried to run a stock ubuntu container on my Docker host; although the user exists on my local machine, Docker says it can't find the user.
$ id -a
uid=1000(jon) gid=1001(jon) groups=1001(jon),3(sys),90(network),98(power),108(vboxusers),962(docker),991(lp),998(wheel),1000(autologin)
The command above was executed on my computer, proving that the "jon" user exists.
Making my username available inside a container: a docker trick
I suppose that you didn't create a user inside your container. For the demonstration I'm going to use the stock ubuntu image.
The trick is to mount the two files responsible for your user and group definitions into the container, enabling the container to see you inside of it.
$ docker run -it --rm --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro --user $(id -u) ubuntu whoami
jon
For a more complete example:
$ docker run -it --rm --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro --user $(id -u):$(id -g) ubuntu "id"
uid=1000(jon) gid=1001(jon) groups=1001(jon)
Notice that I mounted two volumes pointing to two files, /etc/passwd and /etc/group.
Both are mounted read-only (the :ro suffix) just for safety.
Also notice that I used id -u, which returns my user id (1000 in my case), so the container runs with the same UID that my entry in /etc/passwd defines.
Caveat
If you try to set the username to jon rather than the UID, you're going to run into an issue:
$ docker run -it --rm --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro --user jon ubuntu whoami
docker: Error response from daemon: unable to find user jon: no matching entries in passwd file.
This happens because the Docker engine resolves the username before mounting the volumes, so the user would have to exist in the image before the container runs. If you provide a numeric UID instead, it doesn't need to exist inside the container, which is why the trick works:
https://docs.docker.com/engine/reference/run/#user
I hope this helps. Be safe!
Building on top of the answer by Joepreludian, focusing on docker-compose:
You can use the user: and volumes: options in the compose file. For example:
my-service:
  image: ubuntu:latest
  user: ${MY_UID}:${MY_GID}
  volumes:
    - /etc/passwd:/etc/passwd:ro
    - /etc/group:/etc/group:ro
and define these variables where you are starting your compose:
MY_UID="$(id -u)" MY_GID="$(id -g)" docker-compose up
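If you'd rather not set the variables on every invocation, docker-compose also reads them from a .env file next to the compose file; a minimal sketch, assuming a UID and GID of 1000 (adjust to your own id -u / id -g output):
# .env, in the same directory as docker-compose.yml
MY_UID=1000
MY_GID=1000
After that, a plain docker-compose up resolves ${MY_UID}:${MY_GID} automatically.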

run docker container as an arbitrary user passed to it while running the image

I want to run a docker container as an arbitrary user which is passed to the image while running it. For example docker run -u 1000 myimage.
The above is possible. However, I want to create a home directory for this user 1000 while starting the container (possibly through CMD) and do my container service stuff within that directory.
Is this possible? Some pointers on ways to achieve it would be useful.
First save your current user and group in variables:
export uid=$(id -u)
export gid=$(id -g)
Then to run your image,you have two options:
1) Run the image from the location of the app directory itself:
sudo docker run -d \
--user $uid:$gid \
-v $(pwd):/home/$USER \
--workdir="/home/$USER" \
myimage
2) Create a new directory for the app, e.g. at /home/$USER/app, but then you will have to pass the Dockerfile's CMD yourself on the command line.
For example if this was your Dockerfile:
FROM node:7
WORKDIR /app
COPY package.json /app
COPY . /app
CMD node bin/www
You would run it like this:
sudo docker run -d \
--user $uid:$gid \
-v $(pwd):/home/$USER \
--workdir="/home/$USER" \
hello-express \
bash -c "cp -rf /app/* /home/$USER/; node bin/www"
Here you pass the user to the container using $uid:$gid and you mount the user's home directory as a volume and then set it as the working directory.
I know it's quite complex, but it's the only way to achieve exactly what you want.
If you want a simpler solution, consider planning it differently. See this example for running a docker container as a non-root user.
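On the "create a home directory while starting the container" part of the question, a hedged sketch of the CMD/entrypoint route (the script name, the user-<uid> naming, and the chmod 1777 /home prerequisite are my assumptions, not from the answer above):
#!/bin/sh
# entrypoint.sh: give whatever UID the container was started with a usable home
# assumes the Dockerfile ran: RUN mkdir -p /home && chmod 1777 /home
export HOME=/home/user-$(id -u)
mkdir -p "$HOME"
exec "$@"
Wire it in with ENTRYPOINT ["/entrypoint.sh"], and docker run -u 1000 myimage then gets a writable /home/user-1000.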

How to retrieve file from docker container?

I have a simple Dockerfile which creates a zip file and I'm trying retrieve the zip file once it is ready. My Dockerfile looks like this:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip
ENTRYPOINT ["zip","-r","-9"]
CMD ["/lib64.zip", "/lib64"]
After reading through the docs I feel like something like this should do it, but I can't quite get it to work.
docker build -t ubuntu-libs .
docker run -d --name ubuntu-libs --mount source=$(pwd)/,target=/lib64.zip ubuntu-libs
One other side question: is it possible to rename the zip file from the command line?
Edit:
This is different from the duplicate question mentioned in the comments: they're using cp to copy a file from a running Docker container, while I'm trying to mount a directory upon instantiation.
There are multiple ways to do this.
Using docker cp:
docker cp <container_hash>:/path/to/zip/file.zip /path/on/host/new_name.zip
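docker cp also works on stopped containers, so a hedged end-to-end sketch for this image (assuming it's tagged ubuntu-libs) could be:
id=$(docker run -d ubuntu-libs)          # zip runs as the container's ENTRYPOINT
docker wait "$id"                        # block until the zip job finishes
docker cp "$id":/lib64.zip ./renamed.zip # the destination name answers the renaming side question
docker rm "$id"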
Using docker volumes:
As you were hinting at in your question, you can also mount a path from the container to your host. You can either specify where on the host you want the mount point to be, or leave it unspecified and let docker choose. The two options require different approaches.
Let docker choose host mount location
docker volume create random_volume_name
docker run -d --name ubuntu-libs -v random_volume_name:<path/to/mount/in/container> ubuntu-libs
The content will be located on your host, here:
ls -l /var/lib/docker/volumes/random_volume_name/_data/
Let me choose host mount location
docker run -d --name ubuntu-libs -v <existing/mount/point/on/host>:<path/to/mount/in/container> ubuntu-libs
This creates a clean/empty location that is shared as per the locations defined in the command. Now you need to modify your Dockerfile to copy the artifacts to this path, something like:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip
CMD zip -r -9 /lib64.zip /lib64 && cp /lib64.zip <path/to/mount/in/container>/
The content will now be located on your host, here:
ls -l <existing/mount/point/on/host>
I've got to give a shout-out to @joaofnfernandes from here, who does a great job explaining.
As @flagg19 commented, you should be binding a directory onto a directory. You can make up directories inside the container, and you can override the CMD arguments. Doing both, plus adding type=bind, leads to great success:
docker run -d --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs /out/lib64.zip /lib64
Or of course you could change the Dockerfile CMD to write to /out/lib64.zip instead of /lib64.zip:
FROM ubuntu
RUN apt-get update && apt-get install -y build-essential gcc zip && mkdir /out
ENTRYPOINT ["zip","-r","-9"]
CMD ["/out/lib64.zip", "/lib64"]
docker run -d --rm --mount type=bind,source="$(pwd)",target=/out ubuntu-libs
Either way, I recommend adding --rm and getting rid of --name. No need to keep around the container after it's done.

Unable to find user root: no matching entries in passwd file in Docker

I have containers for multiple Atlassian products; JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I'm usually using:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure, and the script is running once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines towards the Bitbucket container the result is:
unable to find user root: no matching entries in passwd file
It's failing on the docker cp command, but only towards the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker engine bug which is tracked privately; Docker is asking users to restart the engine!
It seems the bug is likely more than two years old!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue; I replicated it on my old version 1.10.3 and on versions up to at least 1.17.
As mentioned by @sorin, the Docker forum thread says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution... (the commands are spelled out below).
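For completeness, that workaround is just (container name illustrative):
docker stop "$DOCKER_CONTAINER" && docker start "$DOCKER_CONTAINER"
# or, equivalently:
docker restart "$DOCKER_CONTAINER"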
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
See here to watch docker stopping the wrong container:
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j@eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
According to the docs, --filter ancestor=<image> matches containers whose image is the specified image or a descendant of it, so it can find the "wrong" containers when your environment images are built from one another or end up sharing the same image layers.
So to be sure my images are separate right from the start I added this line to the start of my dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
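A hedged alternative for anyone who would rather not bake a dummy line into the image: labels filter the same way without the sed step (the environment label name is my choice):
docker build -t "$serverImageName" --label environment="$env" .
docker ps -q --filter label=environment=live
Containers inherit their image's labels, so the filter stays accurate per environment without editing the Dockerfile.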
