Can't Delete file created via Docker - docker

I used a docker image to run a program on our school's server using this command.
docker run -t -i -v /target/new_directory 990210oliver/mycc.docker:v1 /bin/bash
After I ran it, it created a directory in my account called new_directory. Now I don't have permission to delete or modify the files.
How do I remove this directory?

I also had this problem.
After:
docker run --name jenkins -p 8080:8080 -v $HOME/jenkins:/var/jenkins_home jenkins jenkins
I couldn't remove files in $HOME/jenkins.
Ricardo Branco's answer didn't work for me because chown gave me:
chown: changing ownership of '/var/jenkins_home': Operation not permitted
Solution:
exec /bin/bash into container as a root user:
docker exec -it --privileged --user root container_id /bin/bash
then:
cd /var/jenkins_home/ && rm -r * .*

I made @siulkilulki's answer into one line:
docker exec --privileged --user root <CONTAINER_ID> chown -R "$(id -u):$(id -g)" <TARGET_DIR>
Note that here the CONTAINER must be up.
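If you don't have the container id handy, a small sketch (assuming the container was started from the image in the question above) is to look it up by image and run the chown in one go:
# Hypothetical helper: the image name is the one from the question above.
CID=$(docker ps -q --filter ancestor=990210oliver/mycc.docker:v1)
docker exec --privileged --user root "$CID" chown -R "$(id -u):$(id -g)" /target/new_directory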

Change the owner of all the files in the directory to your user ID from within the container running as root, then exit the container and remove the directory.
docker run --rm -v /target/new_directory 990210oliver/mycc.docker:v1 chown -R $(id -un):$(id -un) /target/new_directory
exit
rm -rf $HOME/new_directory

I had the same problem. I am using Ubuntu 18.04. I ran the following commands and was then able to delete the files locally. (I have an app directory inside my Docker project directory.)
cd to your Docker project directory, then:
sudo chown -R $(whoami):$(whoami) app/

docker run -v {absolute path to dir with the file}:/to_delete -it ubuntu /bin/bash
Then just:
$ cd to_delete
$ rm -rf <file/dir>

Here is a solution that does not require --privileged.
Game Plan
Determine the UIDs of all offending files created by previous docker runs. Use docker to find them, since an in-container UID is not the same as the host UID. An offending file is any file not owned by the container user root, which maps to the current user running docker.
Run a container using each discovered UID and delete the offending files (or chown them).
Code
# Assumes that current dir is the volume
# Find files owned by Docker-internal UIDs (not root) on the mounted volume:
BAD_FILE_UIDS=$(docker run --rm -v $(pwd):/build alpine sh -c 'find /build -mindepth 1 -not -user root | xargs stat -c "%u" | sort -u')
if [ -n "${BAD_FILE_UIDS}" ] ; then
  for uuid in $BAD_FILE_UIDS ; do
    echo "Cleaning up files owned by $uuid using docker"
    docker run --rm -v $(pwd):/build --user $uuid:0 alpine find /build -mindepth 1 -user $uuid -delete
  done
fi
You can change the -delete to -exec chown SOME_USER {} \; to chown the files instead of deleting them (see the sketch below).
The above works well for use in CI as post-build cleanup.
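For the chown variant, here is a minimal sketch (not the original answer's exact code): run the helper container as root, which is allowed to chown, and hand the files back to whoever invokes the cleanup. HOST_UID and HOST_GID are simply that user's ids.
# Sketch of the chown variant: the container runs as root (alpine's default),
# so chown is permitted, and the files are given to the invoking host user.
HOST_UID=$(id -u)
HOST_GID=$(id -g)
for uuid in $BAD_FILE_UIDS ; do
  docker run --rm -v $(pwd):/build alpine \
    find /build -mindepth 1 -user $uuid -exec chown "$HOST_UID:$HOST_GID" {} \;
done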

Try this:
docker stop $CONTAINER_NAME
docker rm -v $CONTAINER_NAME
I guess this should remove the mounted dir. If it doesn't, do this explicitly:
sudo rm -rf /target/new_directory

Related

Docker container fails to write to a (non root) folder

I run a docker container in order to extract files from a source folder into a destination folder. The source folder resides in my user's home directory so there is no problem to read from it or write. The destination folder on the other hand, is accessed only by a nonrootuser.
When I run the docker container with the nonrootuser, I cannot write to the container's folders (permission denied).
On the other hand, when I run the container with my user, I cannot write to the destination folder.
Setup
I build the image like this
docker build -t lftp .
based on the following Dockerfile:
Dockerfile
FROM debian:10
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install lftp dos2unix man
# Adding the scripts
COPY scripts /scripts
WORKDIR /work
# Adding the nonrootuser and his uid (`id -u nonrootuser`)
RUN useradd -u 47001 nonrootuser && mkhomedir_helper nonrootuser
Then I run the container, binding the following volumes:
download_folder
destination_folder <-> this folder needs to be accessed by the nonrootuser
docker run -ti --rm --name=lftp_untar -u `id -u nonrootuser`:`id -g nonrootuser` -v ${download_folder}:/source -v ${destination_folder}:/target lftp bash /scripts/execute_untar.sh /source /target
Where:
execute_untar.sh
#!/bin/bash
source=$1
target=$2
if [ ! -d $source ]; then
  echo Can\'t access $source
  exit 1
fi
if [ ! -d $target ]; then
  echo Can\'t access $target
  exit 1
fi
if [ ! -w $target ]; then
  echo Can\'t write to $target
  exit 1
fi
# Then Read files from /scripts and /work folder
exclude_file=$(readlink -f /scripts/exclude.txt)
log_file=$(readlink -f untar.log)
The permission-denied issue comes from the fact that the mounted directories
-v ${destination_folder}:/target
-v ${download_folder}:/source
require root permissions from the perspective of the container environment. Also take a look at Can I control the owner of a bind-mounted volume in a docker image?
I would suggest that when you run the container, you mount the target and source folders under the nonrootuser home directory, in order to match their permissions. This way you will have the needed write access:
docker run -ti --rm --name=lftp_untar -u `id -u nonrootuser`:`id -g nonrootuser` -v ${download_folder}:/home/nonrootuser/source -v ${destination_folder}:/home/nonrootuser/target lftp bash /scripts/execute_untar.sh /home/nonrootuser/source /home/nonrootuser/target

How to migrate volume data from docker-for-mac to colima

How do I move volumes from docker-for-mac into colima?
The steps below copy all the volumes from docker-for-mac and move them to colima.
Note: there will be a lot of volumes you may not want to copy over, since they are temporary ones. You can ignore them by simply adding a | grep "YOUR FILTER" to the for loop, either before or after the awk (see the sketch after the script).
The following code makes 2 assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need, now copy-and-paste this into your terminal. No need to touch anything.
(
  # set -x # uncomment to debug
  set -e
  # ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
  tmpconfig=$(mktemp);
  # Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
  (limactl show-ssh --format config colima | grep -v "^ ControlPath\|^ User"; echo " ForwardAgent=yes") > $tmpconfig;
  # Setup root account
  ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
  # Loop over each volume inside docker-for-mac
  for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}'); do
    echo $volume_name;
    # Make the volume backup
    DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox sleep infinity;
    DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
    DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
    DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
    # Restore the backup inside colima
    DOCKER_CONTEXT=colima docker volume create $volume_name;
    ssh -F $tmpconfig root@lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
    scp -r -F $tmpconfig /tmp/$volume_name.tar root@lima-colima:/tmp/$volume_name.tar;
    ssh -F $tmpconfig root@lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
  done
)
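As mentioned in the note above, the filter is just one extra pipe stage on the volume listing. A sketch, with "myapp" as a hypothetical placeholder pattern:
# Placeholder filter: keep only volumes whose name matches "myapp"
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}' | grep "myapp"); do
  echo "$volume_name would be migrated";
done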

How to delete the files generated in the docker container outside the container?

I need to compile some programs using files in a docker container. Once compiled, the container is no longer used.
Therefore I always use the following command:
docker run --rm -v my_file:docker_file my_images my_command
But I find that there are always some problems.
Take a simple C program that outputs "hello, world" as an example.
docker run -it --rm -v /home/cuiyujie/workspace/workGem5/gem5/hello.c:/home/cuiyujie/workspace/workGem5/gem5/hello.c -v /home/cuiyujie/workspace/workGem5/gem5/build:/home/cuiyujie/workspace/workGem5/gem5/build gerrie/gem5-bare-env
After entering the container, I execute gcc hello.c -o hello and then cp hello build.
I found outside the container that the hello file belongs to root.
-rwxr-xr-x 1 root root 16696 2月 23 10:23 hello*
I don't have permission to delete it. What should I do to make it owned by the host user?
If you run your container as your own UID, files created in the host volumes will be owned by your UID. That comes with the disclaimer that your container needs to be designed to run as a user other than root (e.g. not need access to files owned by root inside the container). Here's an example of running as your uid/gid and full access to your home directory using bash on Linux (the $(id -u) expansion may not work in other environments):
docker container run \
-u "$(id -u):$(id -g)" -w "$(pwd)" \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-v "$HOME:$HOME" \
<your_image>
You can use chown to change the ownership of a file. You'll need permission to run it with sudo though.
$ sudo chown $USER hello
If you also want to change the group of the file to your primary group, you can put a . after the user:
$ sudo chown $USER. hello

how to correctly use system user in docker container

I'm starting containers from my docker image like this:
$ docker run -it --rm --user=999:998 my-image:latest bash
where the uid and gid are for a system user called sdp:
$ id sdp
uid=999(sdp) gid=998(sdp) groups=998(sdp),999(docker)
but: container says "no"...
groups: cannot find name for group ID 998
I have no name!@75490c598f4c:/home/myfolder$ whoami
whoami: cannot find name for user ID 999
what am I doing wrong?
Note that I need to run containers based on this image on multiple systems and cannot guarantee that the uid:gid of the user will be the same across systems which is why I need to specify it on the command line rather than in the Dockerfile.
Thanks in advance.
This sort of error will happen when the uid/gid does not exist in the /etc/passwd or /etc/group file inside the container. There are various ways to work around that. One is to directly map these files from your host into the container with something like:
$ docker run -it --rm --user=999:998 \
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
my-image:latest bash
I'm not a fan of that solution since files inside the container filesystem may now have the wrong ownership, leading to potential security holes and errors.
Typically, the reason people want to change the uid/gid inside the container is because they are mounting files from the host into the container as a host volume and want permissions to be seamless across the two. In that case, my solution is to start the container as root and use an entrypoint that calls a script like:
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
The above is from a fix-perms script that I include in my base image. What's happening there is the uid of the user inside the container is compared to the uid of the file or directory that is mounted into the container (as a volume). When those id's do not match, the user inside the container is modified to have the same uid as the volume, and any files inside the container with the old uid are updated. The last step of my entrypoint is to call something like:
exec gosu app_user "$@"
Which is a bit like an su command to run the "CMD" value as the app_user, but with some exec logic that replaces pid 1 with the "CMD" process to better handle signals. I then run it with a command like:
$ docker run -it --rm --user=0:0 -v /host/vol:/container/vol \
    -e RUN_AS=app_user --entrypoint /entrypoint.sh \
    my-image:latest bash
Have a look at the base image repo I've linked to, including the example with nginx that shows how these pieces fit together, and avoids the need to run containers in production as root (assuming production has known uid/gid's that can be baked into the image, or that you do not mount host volumes in production).
It's strange to me that there's no built-in command-line option to simply run a container with the "same" user as the host so that file permissions don't get messed up in the mounted directories. As mentioned by OP, the -u $(id -u):$(id -g) approach gives a "cannot find name for group ID" error.
I'm a docker newb, but here's the approach I've been using in case it helps others:
# See edit below before using this.
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && su - $USER"
I.e. add a user (useradd) with a matching name, make it sudo (usermod), then open a terminal with that user (su -).
Edit: I've just found that this causes a E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) error when trying to use apt. Using sudo gives the error -su: sudo: command not found because sudo isn't installed by default on the image I'm using. So the command becomes even more hacky and requires running an apt update and apt install sudo at launch:
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && apt update && apt install sudo && passwd -d $USER && su - $USER"
Not ideal! I'd have hoped there was a much more simple way of doing this (using command-line options, not creating a new image), but I haven't found one.
1) Make sure that the user 999 has the right privileges on the current directory; you need to try something like this in your Dockerfile:
FROM
RUN mkdir /home/999-user-dir && \
chown -R 999:998 /home/999-user-dir
WORKDIR /home/999-user-dir
USER 999
Try to spin up the container using this image without the user argument and see if that works.
2) Another reason could be a permission issue on the files below; make sure your group 998 has read permission on them:
-rw-r--r-- 1 root root 690 Jan 2 06:27 /etc/passwd
-rw-r--r-- 1 root root 372 Jan 2 06:27 /etc/group
Thanks
So, on your host you probably see your user and group:
$ cat /etc/passwd
sdp:x:999:998::...
But inside the container, you will not see them in /etc/passwd.
This is the expected behavior; the host and the container are completely separated, as long as you don't mount the /etc/passwd file inside the container (and you shouldn't do that, from a security perspective).
Now, if you specified a default user inside your Dockerfile, the --user option overrides the USER instruction, so you are left without a username inside your container. But note that specifying the uid:gid option means that the container has the permissions of the user with the same uid value on the host.
Now, for your request not to specify a user in the Dockerfile - that shouldn't be a problem. You can set it at runtime like you did, as long as that uid matches an existing user uid on the host.
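As a minimal illustration of that host-side mapping (the out directory is hypothetical, and the image is assumed to contain a touch binary): even though the container prints "I have no name!", files it creates on a bind mount end up owned by uid 999 on the host:
mkdir -p out && chmod 777 out   # make the demo directory writable by uid 999
docker run --rm --user=999:998 -v "$PWD/out:/out" my-image:latest touch /out/marker
ls -ln out/marker               # the host should show owner 999 and group 998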
If you have to run some of the containers in privileged mode - please consider using user namespace.

Connect to docker container as user other than root

By default, when you run
docker run -it [myimage]
OR
docker attach [mycontainer]
you connect to the terminal as root user, but I would like to connect as a different user. Is this possible?
For docker run:
Simply add the option --user <user> to change to another user when you start the docker container.
docker run -it --user nobody busybox
For docker attach or docker exec:
Since the command attaches to / executes in the existing process, it uses the current user of that process directly.
docker run -it busybox # CTRL-P/Q to quit
docker attach <container id> # then you have root user
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
docker run -it --user nobody busybox # CTRL-P/Q to quit
docker attach <container id>
/ $ id
uid=99(nobody) gid=99(nogroup)
If you really want to attach as a specific user, then either:
start the container with that user (docker run --user <user>) or set it in your Dockerfile using USER, or
change the user inside the container using su - <user>
You can run a shell in a running docker container using a command like:
docker exec -it --user root <container id> /bin/bash
As an updated answer from 2020: the --user, -u option takes a username or UID (format: <name|uid>[:<group|gid>]).
Then, it works for me like this,
docker exec -it -u root:root container /bin/bash
Reference: https://docs.docker.com/engine/reference/commandline/exec/
You can specify USER in the Dockerfile. All subsequent actions will be performed using that account. You can specify USER one line before the CMD or ENTRYPOINT if you only want to use that user when launching a container (and not when building the image). When you start a container from the resulting image, you will attach as the specified user.
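A minimal sketch of that placement (the appuser name and base image are just assumptions): the USER line sits right before the CMD, so the build steps still run as root but the started container runs as appuser.
FROM debian:10
# Build-time steps run as root
RUN useradd -m appuser && mkdir -p /app && chown appuser /app
WORKDIR /app
# Switch users just before CMD so only the running container uses appuser
USER appuser
CMD ["bash"]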
The only way I am able to make it work is by:
docker run -it -e USER=$USER -v /etc/passwd:/etc/passwd -v `pwd`:/siem mono bash
su - magnus
So I have to both set the $USER environment variable and mount the /etc/passwd file. This way, I can compile in the /siem folder and retain ownership of the files there, rather than having them owned by root.
My solution:
#!/bin/bash
user_cmds="$@"
GID=$(id -g $USER)
UID=$(id -u $USER)
RUN_SCRIPT=$(mktemp -p $(pwd))
(
cat << EOF
addgroup --gid $GID $USER
useradd --no-create-home --home /cmd --gid $GID --uid $UID $USER
cd /cmd
runuser -l $USER -c "${user_cmds}"
EOF
) > $RUN_SCRIPT
trap "rm -rf $RUN_SCRIPT" EXIT
docker run -v $(pwd):/cmd --rm my-docker-image "bash /cmd/$(basename ${RUN_SCRIPT})"
This allows the user to run arbitrary commands using the tools provided by my-docker-image. Note how the user's current working directory is volume mounted
to /cmd inside the container.
I am using this workflow to allow my dev team to cross-compile C/C++ code for the arm64 target, whose BSP I maintain (my-docker-image contains the cross-compiler, sysroot, make, cmake, etc.). With this, a user can simply do something like:
cd /path/to/target_software
cross_compile.sh "mkdir build; cd build; cmake ../; make"
Where cross_compile.sh is the script shown above. The addgroup/useradd machinery allows user-ownership of any files/directories created by the build.
While this works for us, it seems sort of hacky. I'm open to alternative implementations ...
For docker-compose. In the docker-compose.yml:
version: '3'
services:
  app:
    image: ...
    user: ${UID:-0}
    ...
In .env:
UID=1000
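A hedged usage note for the setup above: docker-compose reads UID from .env or from the shell environment, and bash does not export its UID variable by default, so either keep the .env entry or export it before starting the app service:
export UID                   # bash defines UID but does not export it by default
docker-compose up -d
docker-compose exec app id   # the process should report your host uid instead of 0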
Execute command as www-data user: docker exec -t --user www-data container bash -c "ls -la"
This solved my use case, which is: "Compile webpack stuff in a nodejs container on Windows running Docker Desktop with WSL2 and have the built assets owned by your currently logged-in user."
docker run -u 1000 -v "$PWD":/build -w /build node:10.23 /bin/sh -c 'npm install && npm run build'
Based on the answer by eigenfield. Thank you!
Also this material helped me understand what is going on.
