docker volume permissions broke after copying

I am using Docker and Docker Compose to manage my containers. For backup reasons, I previously had all my Docker files (volumes etc.) stored on /home/docker, which was symlinked via /var/lib/docker -> /home/docker.
After a while I decided to move my /home/docker directory to a different SSD using
$ cp -r /home/docker /my/new/ssd/docker
$ rm /var/lib/docker
$ ln -s /my/new/ssd/docker /var/lib/docker
$ rm -r /home/docker
which I fear changed all the file ownership and permissions, since I can't run most of the containers anymore due to permission errors.
Example:
Azuracast throws following error:
{"level":"error","time":"2022-07-22T23:30:02.243","sender":"service","message":"error initializing data provider: open /var/azuracast/sftpgo/sftpgo.db: permission denied"}
where /var/azuracast is being stored on a docker volume.
I now want to restore all those permissions.
Is there a way to restore Docker permissions for all existing volumes or to tell Docker to take care of this?
What I tried so far:
I recursively changed all ownership to root:root using chown -R root:root /my/new/ssd/docker, which did not help.
This problem is causing serious issues for my server environment. I'm aware that using cp -r instead of rsync -aAX (which preserves ownership, permissions, ACLs, and extended attributes) was a huge mistake, so I would greatly appreciate any help here.
Thanks a lot in advance.
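For reference, here is a minimal sketch of how such a move can be done with metadata intact, assuming a systemd-based host; the -aAX flags are what preserve the ownership, permissions, ACLs, and extended attributes that plain cp -r loses:
# stop the daemon so nothing changes mid-copy
sudo systemctl stop docker
# -a preserves ownership, permissions, and timestamps; -A ACLs; -X extended attributes
sudo rsync -aAX /home/docker/ /my/new/ssd/docker/
sudo rm /var/lib/docker
sudo ln -s /my/new/ssd/docker /var/lib/docker
sudo systemctl start docker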

Related

Can't use docker cp to copy file from /tmp

When using docker cp to copy a file from my local machine (/tmp/data.txt) into the container, it fails with the error:
lstat /tmp/data.txt: no such file or directory
The file exists and I can run stat /tmp/data.txt and cat /tmp/data.txt without any issues.
Even if I create another file in /tmp like data2.txt I get the exact same error.
But if I create a file outside /tmp like in ~/documents and copy it with docker cp it works fine.
I checked out the documentation for docker cp and it mentions:
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and mounts created by the user in the container
but doesn't mention /tmp as such a directory.
I'm running on Debian 10, but a friend of mine who is on Ubuntu 20.04 can do it just fine.
We're both using the same version of docker (19.03.11).
What could be the cause?
I figured out the solution.
I had installed Docker as a snap. I uninstalled it (sudo snap remove docker) and reinstalled it following the official Docker installation guide for Debian.
After this, it worked just fine.
I think it might've been due to snap confinement: strictly confined snaps typically see a private /tmp rather than the host's, which would explain why the daemon couldn't find my file - but I don't know for sure.
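If you want to check whether your Docker daemon came from snap before reinstalling, a quick sketch (the apt-based install steps themselves are in the official guide at https://docs.docker.com/engine/install/debian/):
# list the docker snap, if one is installed
snap list docker
# if it is there, remove it and install from Docker's apt repository instead
sudo snap remove docker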

docker load : no space left on device

I am using Red Hat Enterprise Linux 7.
I have no options to change the configuration of my server.
When I run
docker load -i images.tar
I get:
Error processing tar file(exit status 1): write /665b3743d81d9b5952e83a3f55aec18bd8eb082696215e534fa1da6247e99855/layer.tar: no space left on device
There is very little space on the / mount, but I have lots available in /apps
How do I tell docker to use my /apps mount when I run docker load?
I found the answer to my question.
Basically, move the docker folder from /var/lib to /apps, then create a symlink back to it in /var/lib. Stop the Docker daemon first so nothing writes to the directory while it moves:
sudo systemctl stop docker
sudo mv /var/lib/docker /apps/docker
sudo ln -s /apps/docker /var/lib/docker
sudo systemctl start docker
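An alternative that avoids the symlink, assuming your Docker version supports it (the data-root key replaced the older graph option around Docker 17.05), is to point the daemon at the new location in /etc/docker/daemon.json after moving the data there:
{
  "data-root": "/apps/docker"
}
Then restart the daemon with sudo systemctl restart docker.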

Docker data volume and permission

I'm trying to write a Dockerfile to run the Pydio community edition. I have an almost-working Dockerfile.
RUN mv pydio-core-${PYDIO_VERSION} /var/www/pydio-core
RUN chmod -R 770 /var/www/pydio-core
RUN chmod -R 777 /var/www/pydio-core/data/files/ /var/www/pydio-core/data/personal/
RUN chown -R www-data:www-data /var/www/pydio-core
VOLUME /var/www/pydio-core/data/files
VOLUME /var/www/pydio-core/data/personal
This works, except that when the container is started for the first time, the permissions on the files and personal folders are 755 and their owner is not www-data but UID 1000. So once it has started, I must connect to the container to fix the permissions (770) and ownership (www-data), and then everything works.
I just wonder if there is something in my Dockerfile that could explain the problem, or if the issue comes from the Pydio source code itself.
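One common workaround, not from this thread but sketched here for completeness, is to fix ownership in an entrypoint script at container start, so that it also applies to freshly created volumes; this assumes the container starts as root:
#!/bin/sh
# entrypoint.sh: fix ownership and permissions on the volume directories,
# then hand control to whatever command the container was given
chown -R www-data:www-data /var/www/pydio-core/data/files /var/www/pydio-core/data/personal
chmod -R 770 /var/www/pydio-core/data/files /var/www/pydio-core/data/personal
exec "$@"
Wire it in with ADD entrypoint.sh /entrypoint.sh, RUN chmod +x /entrypoint.sh, and ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile.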

Undelete files in docker

I have a Docker container and executed the command
# rm -rf /etc/
rm: cannot remove '/etc/hosts': Device or resource busy
rm: cannot remove '/etc/resolv.conf': Device or resource busy
rm: cannot remove '/etc/hostname': Device or resource busy
How to recover deleted files and directories?
UPD: regarding the comment pointing to https://unix.stackexchange.com/questions/91297/how-to-undelete-just-deleted-directory-with-rm-r-command
It does not work for me because I have removed the /etc/ directory and am unable to install any additional software inside the Docker container.
You're not going to be able to recover them reliably without a backup, which would most likely come in the form of a docker commit of the container or a snapshot of the underlying Docker filesystem.
You can get the original /etc back from the image you started the container from though, which is at least better than where you are now.
docker run --rm {your_image} tar -cf - /etc | docker exec -i {your_container_missing_etc} tar -C / -xvf -
{your_image} being the image your container is running.
{your_container_missing_etc} being the id or name of the container missing /etc
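Before experimenting, it may also be worth snapshotting the damaged container with docker commit (as mentioned above), so you can retry if the restore goes wrong; the image name here is just an example:
# snapshot the current (broken) container state as an image
docker commit {your_container_missing_etc} rescue-snapshot:before-etc-restore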

How can I make a host directory mount with the container directory's contents?

What I am trying to do is set up a docker container for ghost where I can easily modify the theme and other content. So I am making /opt/ghost/content a volume and mounting that on the host.
It looks like I will have to manually copy the theme into the host directory, because when I mount it, it is an empty directory, so my content directory ends up totally empty. I am pretty sure I am doing something wrong.
I have tried a few different variations, including using ADD with the default themes folder and putting VOLUME at the end of the Dockerfile, but I keep ending up with an empty content directory.
Does anyone have a Dockerfile doing something similar that is already working that I can look at?
Or maybe I can use the docker cp command somehow to populate the volume?
I may be missing something obvious or have made a silly mistake in my attempts to achieve this. But the basic thing is I want to be able to upload a new set of files into the ghost themes directory using a host-mounted volume and also have the casper theme in there by default.
This is what I have in my Dockerfile right now:
FROM ubuntu:12.04
MAINTAINER Jason Livesay "ithkuil@gmail.com"
RUN apt-get install -y python-software-properties
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get -qq update
RUN apt-get install -y sudo curl unzip nodejs=0.10.20-1chl1~precise1
RUN curl -L https://en.ghost.org/zip/ghost-0.3.2.zip > /tmp/ghost.zip
RUN useradd ghost
RUN mkdir -p /opt/ghost
WORKDIR /opt/ghost
RUN unzip /tmp/ghost.zip
RUN npm install --production
# Volumes
RUN mkdir /data
ADD run /usr/local/bin/run
ADD config.js /opt/ghost/config.js
ADD content /opt/ghost/content/
RUN chown -R ghost:ghost /opt/ghost
ENV NODE_ENV production
ENV GHOST_URL http://my-ghost-blog.com
EXPOSE 2368
CMD ["/usr/local/bin/run"]
VOLUME ["/data", "/opt/ghost/content"]
As far as I know, empty host-mounted (bound) volumes still will not receive contents of directories set up during the build, BUT data containers referenced with --volumes-from WILL.
So now I think the answer is, rather than writing code to work around non-initialized host-mounted volumes, forget host-mounted volumes and instead use data containers.
Data containers use the same image as the one you are trying to persist data for (so they have the same directories etc.).
docker run -d --name myapp_data mystuff/myapp echo Data container for myapp
Note that the data container will run and then exit, so it won't stay running. If you want to keep it running you can use something like sleep infinity instead of echo, although that consumes more resources and isn't necessary unless you have some specific reason to want all of your relevant containers to show as running.
You then use --volumes-from to use the directories from the data container:
docker run -d --name myapp --volumes-from myapp_data mystuff/myapp
https://docs.docker.com/userguide/dockervolumes/
You need to place the VOLUME directive before actually adding content to it.
My answer is completely wrong! Look here: it seems there is actually a bug. If the VOLUME instruction comes after the directory already exists in the container, then changes to it are not persisted.
The Dockerfile should always end with a CMD or an ENTRYPOINT.
UPDATE
My solution would be to ADD the files to a directory inside the container, then use a shell script as an entrypoint, in which I copy the files into the shared volume and do all the other tasks.
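A minimal sketch of that entrypoint approach; the /opt/ghost/content-default path and the seeding check are illustrative, not from the original answer:
#!/bin/sh
# seed the mounted (possibly empty) volume from defaults baked into the image
if [ -z "$(ls -A /opt/ghost/content)" ]; then
    cp -a /opt/ghost/content-default/. /opt/ghost/content/
fi
# then hand off to the container's command
exec "$@"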
I've been looking into the same thing. The problem I encountered was that I was using a relative local mount path, something like:
docker run -i -t -v ../data:/opt/data image
Switching to an absolute local path fixed this up for me:
docker run -i -t -v /path/to/my/data:/opt/data image
Can you confirm whether you were doing a relative path, and whether this helps?
Docker v1.8.1 preserves data in a volume if you mount it with the run command. From the Docker docs:
Volumes are initialized when a container is created. If the container’s
base image contains data at the specified mount point, that existing
data is copied into the new volume upon volume initialization.
Example: An image defines /var/www/html as a volume and populates it with the data of a web application. Your Docker host provides a mount directory /my/host/dir. You start the image with
docker run -v /my/host/dir:/var/www/html image
and you will get all the data from /var/www/html in the host's /my/host/dir.
This data will persist even if you delete the container or the image.
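A related approach, assuming Docker 1.9 or later: named volumes are also initialized from the image content at the mount point, which gives the same pre-populated behaviour without hard-coding a host path:
# create a named volume and let docker populate it from the image on first use
docker volume create ghost_content
docker run -d --name ghost -v ghost_content:/opt/ghost/content image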
