I have a Docker container in which I executed the command
# rm -rf /etc/
rm: cannot remove '/etc/hosts': Device or resource busy
rm: cannot remove '/etc/resolv.conf': Device or resource busy
rm: cannot remove '/etc/hostname': Device or resource busy
How can I recover the deleted files and directories?
UPD: regarding the comment pointing to https://unix.stackexchange.com/questions/91297/how-to-undelete-just-deleted-directory-with-rm-r-command
That does not work for me because I have removed the /etc/ directory and am unable to install any additional software inside the Docker container.
You're not going to be able to recover them reliably without a backup, which would most likely be a docker commit of the container or a snapshot of the underlying Docker filesystem.
You can get the original /etc back from the image you started the container from though, which is at least better than where you are now.
docker run --rm {your_image} tar -cf - /etc | docker exec -i {your_container_missing_etc} tar -C / -xvf -
{your_image} being the image your container is running.
{your_container_missing_etc} being the id or name of the container missing /etc.
Related
I am using Docker and Docker Compose to manage my containers. For backup reasons, I previously had all my Docker files (volumes etc.) living in /home/docker, which was symlinked via /var/lib/docker -> /home/docker.
After a while I decided to move my /home/docker directory to a different SSD using
$ cp -r /home/docker /my/new/ssd/docker
$ rm /var/lib/docker
$ ln -s /my/new/ssd/docker /var/lib/docker
$ rm -r /home/docker
which I fear changed all the permissions since I can't run most of the containers anymore due to permission issues.
Example:
Azuracast throws following error:
{"level":"error","time":"2022-07-22T23:30:02.243","sender":"service","message":"error initializing data provider: open /var/azuracast/sftpgo/sftpgo.db: permission denied"}
where /var/azuracast is being stored on a docker volume.
I now want to restore all those permissions.
Is there a way to restore Docker permissions for all existing volumes or to tell Docker to take care of this?
What I tried so far:
I recursively changed all ownership to root:root using chown -R root:root /my/new/ssd/docker.
This problem is causing serious issues for my server environment. I'm aware that using cp -r instead of rsync -aAX was a huge mistake, so I would greatly appreciate any help here.
Thanks a lot in advance.
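For reference, a permission-preserving move would have looked something like the sketch below (paths are the ones from the question; it assumes you can stop the daemon so nothing writes to the directory mid-move). Note this cannot repair ownership after the fact, since that metadata was lost in the cp -r copy:

```shell
# Hypothetical paths from the question above. rsync -aAX preserves
# ownership, permissions, ACLs and extended attributes; cp -r run as
# root does not preserve ownership, which is what broke the containers.
sudo systemctl stop docker
sudo rsync -aAX /home/docker/ /my/new/ssd/docker/
sudo rm /var/lib/docker                        # remove the old symlink
sudo ln -s /my/new/ssd/docker /var/lib/docker
sudo systemctl start docker
```

Once the original copy is gone, ownership can't be recovered from the files themselves; removing the affected named volumes and letting the images re-initialize them on next start is usually the more realistic path.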
I have a docker container that I downloaded from docker hub that has an entire filesystem inside it. Due to certain reasons, I want to remove all the files from inside the container and recreate the filesystem on my local machine.
I can't run the image as a container on its own for more than a few seconds because it relies on some other commands.
THINGS I HAVE TRIED:
I have seen answers with docker export and docker save, but both of these give me a tarball which has 20-30 folders inside it, each of which has a .tar file inside. I do not want to go through this manually.
I wrote a simple bash one-liner that helped me extract the files from each .tar into a plain directory: for d in ./*/ ; do (cd "$d" && tar xvf ./*.tar); done. This gave me a mess of files that I would have to rebuild into a filesystem on my own.
If the image has the tar command available, you can run:
docker run --rm yourimage tar -C / -cf- . | tar -C /path/to/root -xf-
This will tar up the contents of the image, and then untar it on your host at the location of your choice (/path/to/root in the example above).
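The same tar pipe pattern can be demonstrated locally without Docker, which makes it easier to see what the two -C flags do:

```shell
# Stream a directory tree through a pipe and unpack it elsewhere.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "hello" > "$src/etc/hosts"
# -C changes directory first; -cf- writes the archive to stdout, -xf- reads it from stdin
tar -C "$src" -cf- . | tar -C "$dst" -xf-
cat "$dst/etc/hosts"   # → hello
```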
I once wrote a tool called undocker for extracting a docker image to a local directory; you would use it like this:
docker save myimage | undocker
I haven't used it much in the past several years, but it seems to work on a few test images. This is useful if you're trying to extract the contents of an image in which you can't run tar.
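If the image can't stay running long enough even for tar, a stopped container works too: docker export (unlike docker save) produces a single flat filesystem tarball, so there are no per-layer folders to merge. A sketch, using the same placeholder names as above:

```shell
# Create a container without ever starting it, export its flattened
# filesystem, then clean up. extract-tmp is an arbitrary name;
# yourimage and /path/to/root are placeholders as in the answer above.
docker create --name extract-tmp yourimage
docker export extract-tmp | tar -C /path/to/root -xf-
docker rm extract-tmp
```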
When using docker cp to move files from my local machine /tmp/data.txt to the container, it fails with the error:
lstat /tmp/data.txt: no such file or directory
The file exists and I can run stat /tmp/data.txt and cat /tmp/data.txt without any issues.
Even if I create another file in /tmp like data2.txt I get the exact same error.
But if I create a file outside /tmp like in ~/documents and copy it with docker cp it works fine.
I checked out the documentation for docker cp and it mentions:
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and mounts created by the user in the container
but doesn't mention /tmp as such a directory.
I'm running on Debian 10, but a friend of mine who is on Ubuntu 20.04 can do it just fine.
We're both using the same version of docker (19.03.11).
What could be the cause?
I figured out the solution.
I had installed Docker as a snap. I uninstalled it (sudo snap remove docker) and reinstalled it following the official Docker guidelines for installing on Debian.
After this, it worked just fine.
I think it might've been due to snap packages having limited access to system resources, but I don't know for sure.
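If reinstalling isn't an option, a possible workaround (assuming a container named mycontainer; untested against a snap-confined daemon) is to stream the file over stdin instead of letting the daemon resolve the host path:

```shell
# docker exec -i keeps stdin open, so the daemon never has to
# stat /tmp/data.txt on the host the way docker cp does.
cat /tmp/data.txt | docker exec -i mycontainer sh -c 'cat > /tmp/data.txt'
```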
I am using linux RedHat 7.
I have no options to change the configuration of my server.
When I run
docker load -i images.tar
Error processing tar file(exit status 1): write /665b3743d81d9b5952e83a3f55aec18bd8eb082696215e534fa1da6247e99855/layer.tar: no space left on device
There is very little space on the / mount, but I have lots available in /apps
How do I tell docker to use my /apps mount when I run docker load?
I found the answer to my question.
Basically, move the docker folder from /var/lib to /apps, then create a symlink back to it in /var/lib. Stop the Docker daemon first so its state isn't corrupted mid-move:
sudo systemctl stop docker
sudo mv /var/lib/docker /apps/docker
sudo ln -s /apps/docker /var/lib/docker
sudo systemctl start docker
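An alternative to the symlink, if editing the daemon configuration is allowed on your server, is Docker's data-root setting, which tells the daemon directly where to keep its data (paths follow the question; this assumes write access to /etc/docker/daemon.json):

```shell
sudo systemctl stop docker
sudo mv /var/lib/docker /apps/docker
# data-root is the daemon.json key for relocating Docker's storage
echo '{ "data-root": "/apps/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
```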
Here is the relevant part of my Dockerfile :
RUN cd /path/to/future/volume \
&& if [ ! -d "init_data" ]; then tar xvzf init_data.tar.gz; fi \
&& chmod 775 init_data
In my docker-compose, I'm using a named volume mapped on /path/to/future/volume (my-volume:/path/to/future/volume), so Docker is supposed to copy all files from /path/to/future/volume to my host directory.
The copy itself works (and all user and group ownership seems correct) EXCEPT my chmod 775 init_data is not applied.
If, after the volume creation, I go into the container and launch "chmod 775 init_data", it works inside the container and in the volume, but I need to set this group write permission at build time.
Why is this happening and what could I do as a workaround?
Docker only initializes named volumes when they are empty. Once they have data inside, the initialization step is skipped (otherwise it would risk overwriting or deleting your data).
So to see changes introduced from your new image, you need to remove not only your old container, but also your old volume.
Note that this doesn't apply to host volumes (binding a path directly from the docker host into the container). Even when empty, host volumes are never initialized.
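Assuming a Compose-managed named volume (my-volume as in the question; Compose usually prefixes volume names with the project name, so check docker volume ls for the real name), forcing re-initialization looks like:

```shell
docker compose down           # stop and remove the containers
docker volume rm my-volume    # delete the stale volume (its data is lost!)
docker compose up -d --build  # recreate; Docker re-copies from the new image
```

docker compose down -v removes the project's named volumes in one step, with the same caveat that their data is gone for good.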