I am using Red Hat Linux 7.
I have no options to change the configuration of my server.
When I run:
docker load -i images.tar
I get the following error:
Error processing tar file(exit status 1): write /665b3743d81d9b5952e83a3f55aec18bd8eb082696215e534fa1da6247e99855/layer.tar: no space left on device
There is very little space on the / mount, but I have plenty available in /apps.
How do I tell docker to use my /apps mount when I run docker load?
I found the answer to my question.
Basically, move the docker directory from /var/lib to /apps, then create a symlink back to it in /var/lib:
sudo mv /var/lib/docker /apps/docker
sudo ln -s /apps/docker /var/lib/docker
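One caveat: stop the Docker daemon before moving the directory so nothing writes to it mid-move. A minimal sketch of the full sequence, assuming systemd manages the Docker service (as on a stock RHEL 7 install):
# stop the daemon so /var/lib/docker is quiescent during the move
sudo systemctl stop docker
sudo mv /var/lib/docker /apps/docker
sudo ln -s /apps/docker /var/lib/docker
sudo systemctl start docker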
Related
I am using Docker and Docker Compose to manage my containers. For backup reasons, I previously had all my Docker files (volumes etc.) running on /home/docker which was symlinked via /var/lib/docker -> /home/docker.
After a while, I decided to move my /home/docker directory to a different SSD using:
$ cp -r /home/docker /my/new/ssd/docker
$ rm /var/lib/docker
$ ln -s /my/new/ssd/docker /var/lib/docker
$ rm -r /home/docker
which I fear changed all the permissions since I can't run most of the containers anymore due to permission issues.
Example:
Azuracast throws the following error:
{"level":"error","time":"2022-07-22T23:30:02.243","sender":"service","message":"error initializing data provider: open /var/azuracast/sftpgo/sftpgo.db: permission denied"}
where /var/azuracast is being stored on a docker volume.
I now want to restore all those permissions.
Is there a way to restore Docker permissions for all existing volumes or to tell Docker to take care of this?
What I tried so far:
I recursively changed ownership to root:root using chown -R root:root /my/new/ssd/docker.
This problem is causing serious issues for my server environment, and I'm aware that using cp -r instead of rsync -aAX was a huge mistake, so I would greatly appreciate any help here.
Thanks a lot in advance.
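For reference, an ownership-preserving move along the lines the question already hints at (rsync -aAX keeps owners, permissions, ACLs, and extended attributes) would have looked roughly like this, reusing the paths from above:
# stop Docker so nothing writes to the data directory during the copy
sudo systemctl stop docker
# -a preserves owners/permissions/timestamps, -A keeps ACLs, -X keeps xattrs
sudo rsync -aAX /home/docker/ /my/new/ssd/docker/
# -sfn replaces the existing /var/lib/docker symlink in place
sudo ln -sfn /my/new/ssd/docker /var/lib/docker
sudo systemctl start docker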
I have a Docker container that I downloaded from Docker Hub that has an entire filesystem inside it. For certain reasons, I want to extract all the files from inside the container and recreate the filesystem on my local machine.
I can't run the image as a container on its own for more than a few seconds because it relies on some other commands.
THINGS I HAVE TRIED:
I have seen answers suggesting docker export and docker save, but both of these give me a tarball with 20-30 folders inside, each of which contains its own .tar file. I do not want to go in and unpack all of these manually.
I wrote a simple bash one-liner to extract the files from each .tar into a plain directory:
for d in ./*/ ; do (cd "$d" && tar xvf ./*.tar); done
This gave me a mess of files that I would have to rebuild into a filesystem on my own.
If the image has the tar command available, you can run:
docker run --rm yourimage tar -C / -cf- . | tar -C /path/to/root -xf-
This will tar up the contents of the image and then untar it on your host in the location of your choice (/path/to/root in the example above).
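If the image exits too quickly even for that, a variant that never starts a process in the container is docker create followed by docker export; unlike docker save, docker export emits a single flattened filesystem tarball. A sketch, reusing the names from above:
# create (but do not start) a container from the image
cid=$(docker create yourimage)
# export its flattened filesystem and unpack it on the host
docker export "$cid" | tar -C /path/to/root -xf-
docker rm "$cid"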
I once wrote a tool called undocker for extracting a docker image to a local directory; you would use it like this:
docker save myimage | undocker
I haven't used it much in the past several years, but it seems to work on a few test images. This is useful if you're trying to extract the contents of an image in which you can't run tar.
When using docker cp to copy the file /tmp/data.txt from my local machine to the container, it fails with the error:
lstat /tmp/data.txt: no such file or directory
The file exists and I can run stat /tmp/data.txt and cat /tmp/data.txt without any issues.
Even if I create another file in /tmp, like data2.txt, I get the exact same error.
But if I create a file outside /tmp, such as in ~/documents, and copy it with docker cp, it works fine.
I checked out the documentation for docker cp and it mentions:
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and mounts created by the user in the container
but doesn't mention /tmp as such a directory.
I'm running on Debian 10, but a friend of mine who is on Ubuntu 20.04 can do it just fine.
We're both using the same version of docker (19.03.11).
What could be the cause?
I figured out the solution.
I had installed Docker as a snap. I uninstalled it (sudo snap remove docker) and reinstalled it following the official Docker guidelines for installing on Debian.
After this, it worked just fine.
I think it might've been due to snap packages having limited access to system resources, but I don't know for sure.
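For what it's worth, a strictly confined snap can typically only reach user files under /home via its home interface, which would fit the symptom of ~/documents working while /tmp fails. If you want to verify, snapd can list which interfaces a snap has connected:
# shows the interfaces (and thus host access) granted to the docker snap
snap connections docker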
I'm building a Docker image which also involves a small yum install. I'm currently in a location where firewalls and access controls make docker pull, yum install, etc. extremely slow.
In my case, it's a JRE8 Docker image built from this official image script.
My problem:
Building the image requires just two packages (gzip + tar), which together are only about 1 MB (132 kB + 865 kB). But yum inside the docker build will first download the repository metadata, which is over 80 MB. While 80 MB is generally small, here it took over an hour just to download. If my colleagues need to build, this would be a sheer waste of productive time, not to mention the frustration.
Workarounds I'm aware of:
Since this image may not need the full power of yum, I can simply grab the *.rpm files, COPY them in the Dockerfile, and use rpm -i instead of yum
I can save the built image and distribute it locally
I could also find the closest mirror for Docker Hub, but not for yum
My bet:
I have a copy of the Linux CD with about the same version
I can add commands in the Dockerfile to rename the *.repo files to *.repo.old
Add a cdrom.repo in /etc/yum.repos.d/ inside the container
Use yum to install the most common packages from the CD-ROM instead of the internet
My problem:
I can't figure out how to make a CD-ROM repo available from inside the container build without using httpd.
In plain linux I do this:
mkdir /cdrom
mount /dev/cdrom /cdrom
cat > /etc/yum.repos.d/cdrom.repo <<EOF
[cdrom]
name=CDROM Repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-oracle
EOF
Any help appreciated.
Docker containers cannot access host devices. I think you will have to write a wrapper script around the docker build command to do the following:
First, mount the CD-ROM to a directory within the Docker build context (that would be a sub-directory of the directory where your Dockerfile exists).
Call docker build using the contents of this directory.
Unmount the CD-ROM.
So:
cd docker_build_dir
mkdir cdrom
mount /dev/cdrom cdrom
docker build "$#" .
umount cdrom
In the Dockerfile, you would first COPY that directory into the image and then install from it:
COPY cdrom /cdrom
RUN rpm -ivh /cdrom/rpms_you_need
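If you want to keep yum's dependency resolution rather than use raw rpm -i, here is a rough Dockerfile sketch that mirrors the plain-Linux steps from the question, assuming the CD contents were copied into the build-context directory cdrom as above (the gpgkey path is the one from the question):
COPY cdrom /cdrom
# park the stock repos, point yum at the local CD copy, then install
RUN for f in /etc/yum.repos.d/*.repo; do mv "$f" "$f.old"; done && \
    printf '[cdrom]\nname=CDROM Repo\nbaseurl=file:///cdrom\nenabled=1\ngpgcheck=1\ngpgkey=file:///cdrom/RPM-GPG-KEY-oracle\n' \
        > /etc/yum.repos.d/cdrom.repo && \
    yum -y install gzip tar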
I have a Docker container and executed this command:
# rm -rf /etc/
rm: cannot remove '/etc/hosts': Device or resource busy
rm: cannot remove '/etc/resolv.conf': Device or resource busy
rm: cannot remove '/etc/hostname': Device or resource busy
How can I recover the deleted files and directories?
UPD: regarding the comment about https://unix.stackexchange.com/questions/91297/how-to-undelete-just-deleted-directory-with-rm-r-command
It does not work for me because I have removed the /etc/ directory and am unable to install any additional software inside the Docker container.
You're not going to be able to recover them reliably without a backup, which would most likely come in the form of a docker commit of the container or a snapshot of the underlying Docker filesystem.
You can get the original /etc back from the image you started the container from though, which is at least better than where you are now.
docker run --rm {your_image} tar -cf - /etc | docker exec -i {your_container_missing_etc} tar -C / -xvf -
{your_image} being the image your container is running from.
{your_container_missing_etc} being the ID or name of the container that is missing /etc.
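A concrete invocation with hypothetical names, myimage for the image and broken_app for the container:
# stream a fresh /etc from the image into the damaged container
docker run --rm myimage tar -cf - /etc | docker exec -i broken_app tar -C / -xvf -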