Docker Issue: Removing a Bind-Mounted Volume

I have been unable to find any help online for this simple mistake I made, so I was looking for some help. I am using a server to run a docker image in a container and I mistyped and have caused an annoyance for myself. I ran the command
docker run --rm -v typo:location docker_name
and since I had a typo in the directory to mount, it created a directory on the host machine, and when the container ended the directory remained. I tried to remove it, but I just get the error
rm -rf typo/
rm: cannot remove 'typo': Permission denied
I know now that I should have used --mount instead of -v for safety, but the damage is done; how can I remove this directory without having access to the container that created it?
I apologize in advance, my knowledge of docker is quite limited. I have mostly learned it only to use a particular image and I do a bunch of Google searches to do the rest.

The first rule of Docker security is that if you can run any docker command at all, you can get unrestricted root access over the entire host.
So you can fix this issue by running a container as root, bind-mounting the parent directory, and deleting the directory in question from inside it:
docker run \
--rm \
-v "$PWD:/pwd" \
busybox \
rm -rf /pwd/typo
I do not have sudo permissions
You can fix that
docker run --rm -v /:/host busybox vi /host/etc/sudoers
(This has implications in a lot of places. Don't indiscriminately add users to a docker group, particularly on a multi-user system, since they can trivially get root access. Be careful publishing the host's Docker socket into a container, since the container can then root the host; perhaps redesign your application to avoid needing it. Definitely do not expose the Docker socket over the network.)
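As a concrete illustration of the socket warning (a sketch, not part of the original answer; the docker:cli image tag is an assumption): any container that can reach the host's Docker socket can drive the host's daemon as if it were root on the host, e.g.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
lists (and could just as easily start or delete) every container on the host.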

Related

code changes in the docker container as a root user

I'm experimenting with the docker concept for C++ development. I wrote a Dockerfile that includes instructions for installing all of the necessary libraries and tools. CMake is being used to build C++ code and install binaries. Because of a specific use case, the binaries should be executed as root user. I'm running the docker container with the following command.
docker run -it --rm -u 0 -v $(pwd):/mnt buildmcu:focal
The issue now is that I am a root user inside the Docker container, and if I make any code changes or create a new file inside the Docker container, I get a permission error on the host machine if I try to access it. I need to run sudo chmod ... to change the permissions. Is there any way to allow source modification in the docker container and on the host machine without permission errors?
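That question is not answered in this thread, but one possible workaround (a hedged sketch, not from the original posts): since you are root inside the container and the source tree is bind-mounted, you can hand ownership of anything you created back to your host account from inside the container before exiting, e.g.
chown -R 1000:1000 /mnt    # 1000:1000 is a placeholder for the host user's UID:GID (see id -u / id -g on the host)
The -u $(id -u):$(id -g) trick shown in the Open/Edit/Save answer further down solves the same problem when the container does not need to run as root.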

Deleting shared volume created by a docker container

I created a docker container that shares a folder with its host using the following command:
docker run -dit -P -v $(pwd):/home/shared ubuntu
The container became unresponsive so I deleted it and pruned it using the following steps mentioned here:
docker rm $(docker ps -aq)
docker rmi $(docker images -q)
docker system prune
Now, I would like to delete the file that resides on the host and was created by the container. However, I keep getting this error msg:
rm file1
rm: remove write-protected regular file ‘file1’? Y
rm: cannot remove ‘file1’: Permission denied
I don't have sudo privilege on this host, can I do something about it, or do I need to ask the system administrator's help for this?
Thank you
The Docker daemon runs as root and your user (apparently in the docker group, given the lack of sudo and the file permission issues) has access to the API (typically exposed via a local unix socket). Through the Docker API you should be able to delete the created files.
If it is truly a volume, you should simply be able to use the docker volume rm command.
docker volume rm <your-volume>
If you instead mounted a path from the host, you should be able to launch a new container to delete the file(s), re-using the same mount point (for instance, start a basic bash container with docker run, rm the specified files, and then remove the new container).
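For example, reusing the busybox approach from the first answer (a sketch; file1 is the file from the question, and the command is run from its parent directory):
docker run --rm -v "$PWD:/pwd" busybox rm -f /pwd/file1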

Open/Edit/Save from on the host with app running in container

We have recently had a lot of problems deploying the Linux version of our app for the client (updated libraries, missing libraries, install paths), and we are looking to use Docker for deployment.
Our app has a UI, so we naturally map that using
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
and we can actually see the UI popping up.
But when it's time to open a file, the problems start. We want to browse only the host system and save any output file on the host (the output directory is determined by the location of the opened file).
Which strategy would you suggest for this?
We want the client to not see the difference between the app running locally or inside Docker. We are working on a launch script so it looks like the client would still be double-clicking on it to start the app. We can add all the configuration we need in there for the docker run command.
After recommendations by @CFrei and @Robert, here's a solution that seems to work well:
docker run \
-ti \
-u $(id -u):$(id -g) \
-v /home/$(id -un):/home/$(id -un) \
-v /etc/passwd:/etc/passwd \
-w $(pwd) \
MyDockerImage
And now, every file created inside that container is perfectly located in the right directory on the host with the ownership by the user.
And from inside the container, it really looks like the host, which will be very useful for the client.
Thanks again for your help guys!
And I hope this can help someone else.
As you may know, the container has its own filesystem that is provided by the image it runs on top of.
You can map a host's directory or file to a path inside the container, where your program expects it to be. This is known as a docker volume. You're already doing that with the X11 socket communication (the -v flag).
For instance, for a file:
docker run -v /absolute/path/in/the/host/some.file:/path/inside/container/some.file
For a directory:
docker run -v /absolute/path/in/the/host/some/dir:/path/inside/container/some/dir
You can provide as many -v flags as you might need.
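For example, a single run can mount both a file and a directory (the paths here are purely illustrative):
docker run \
-v /home/user/app.conf:/etc/app/app.conf \
-v /home/user/data:/var/lib/app/data \
your_image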
Here you can find more useful information.

How to sync dir from a container to a dir from the host?

I'm using Vagrant, so the container is inside a VM. Below is my shell provision script:
#!/bin/bash
CONFDIR='/apps/hf/hf-container-scripts'
REGISTRY="tutum.co/xxx"
VER_APP="0.1"
NAME=app
cd $CONFDIR
sudo docker login -u xxx -p xxx -e xxx@gmail.com tutum.co
sudo docker build -t $REGISTRY/$NAME:$VER_APP .
sudo docker run -it --rm -v /apps/hf:/hf $REGISTRY/$NAME:$VER_APP
Everything runs fine and the image is built. However, the syncing command (the last one above) doesn't seem to work. I checked in the container: the /hf directory exists and it has files in it.
One other problem is that if I manually execute the syncing command, it succeeds, but I can only see the files from the host if I ls /hf. It seems that docker empties /hf and places the files from the host into it. I want it the other way around, or better yet, to merge them.
Yeah, that's just how volumes work I'm afraid. Basically, a volume is saying, "don't use the container file-system for this directory, instead use this directory from the host".
If you want to copy files out of the container and onto the host, you can use the docker cp command.
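For instance, with this question's paths (the container name is an assumption):
docker cp <container-name-or-id>:/hf/. /apps/hf/
copies the contents of /hf out of the container into /apps/hf on the host.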
If you tell us what you're trying to do, perhaps we can suggest a better alternative.

Docker - how can I copy a file from an image to a host?

My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact (in my case it's a .zip produced by sbt dist in ../target/), but I think this question also applies to jars, binaries, etc.
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:out $IMAGENAME /bin/cp/../blah.zip /out
but I'm running boot2docker on OSX and I don't know how to write directly to my Mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script to extract blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure docker functionality (no bash needed).
Edit: the container does not even have to be run in this solution.
Edit 2: thanks to @Jonathan Dumaine for --rm so the container will be removed afterwards. I just never tried it, because it sounded illogical to copy something from somewhere that had already been removed by the previous command, but I tried it and it works.
Edit 3: due to the comments we found out --rm is not working as expected; it does not remove the container because it never runs, so I added functionality to delete the created container afterwards (--name tc = temporary-container).
Edit 4: this error appeared; it seems like a bug in docker, because t is in a-z and this did not happen a few months before.
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to copy the file from the running container to a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update: as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general, once you have pulled an image, the image is stored on your system and so are all of its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
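For example, something along these lines could search an image's layers for a file without creating a container (a rough sketch: it assumes the overlay2 driver, root access, and that jq is installed; some.file is a placeholder):
docker image inspect IMAGE_NAME:TAG \
| jq -r '.[0].GraphDriver.Data | (.UpperDir // empty), (.LowerDir // empty)' \
| tr ':' '\n' \
| xargs -I{} sudo find {} -name 'some.file'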
First, pull the docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the docker container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on MacOS. I can assure you that scripts based on "docker cp" are portable, because any command is relayed inside boot2docker, and the binary stream is relayed back to the docker command line client running on your Mac. So write operations from the docker client are executed inside the server and written back to the executing client instance!
I am sharing a backup script for docker volumes with any docker container I provide, and my backup scripts are tested both on Linux and MacOS with boot2docker. The backups can be easily exchanged between platforms. Basically I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my jenkins container, named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between the Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and tutorial here: blacklabelops/jenkins
You could bind-mount a local path on the host to a path in the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.
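If the build writes its output somewhere else inside the image, the last step of the script running in the container can still copy it into the mounted path, e.g. (paths are illustrative):
cp /build/output/blah.zip /app/
and the file then appears under ./target on the host.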
