Can't use docker cp to copy file from /tmp - docker

When using docker cp to copy a file from /tmp/data.txt on my local machine into the container, it fails with the error:
lstat /tmp/data.txt: no such file or directory
The file exists and I can run stat /tmp/data.txt and cat /tmp/data.txt without any issues.
Even if I create another file in /tmp like data2.txt I get the exact same error.
But if I create a file outside /tmp like in ~/documents and copy it with docker cp it works fine.
I checked out the documentation for docker cp and it mentions:
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and mounts created by the user in the container
but doesn't mention /tmp as such a directory.
I'm running on Debian 10, but a friend of mine who is on Ubuntu 20.04 can do it just fine.
We're both using the same version of docker (19.03.11).
What could be the cause?

I figured out the solution.
I had installed Docker as a snap. I uninstalled it (sudo snap remove docker) and reinstalled it following the official Docker guidelines for installing on Debian.
After this, it worked just fine.
I think it might've been due to snap packages having limited access to system resources - but I don't know for sure.
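For anyone hitting the same thing, the switch looked roughly like this (commands as per the official Docker install guide for Debian 10 at the time; check the current guide before copying):
sudo snap remove docker
sudo apt-get update && sudo apt-get install -y ca-certificates curl gnupg lsb-release
# add Docker's apt key and repository, then install the engine
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io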

Related

docker volume permissions broke after copying

I am using Docker and Docker Compose to manage my containers. For backup reasons, I previously had all my Docker files (volumes etc.) running on /home/docker which was symlinked via /var/lib/docker -> /home/docker.
After a while I decided to move my /home/docker directory to a different SSD using
$ cp -r /home/docker /my/new/ssd/docker
$ rm /var/lib/docker
$ ln -s /my/new/ssd/docker /var/lib/docker
$ rm -r /home/docker
which I fear changed all the permissions since I can't run most of the containers anymore due to permission issues.
Example:
Azuracast throws following error:
{"level":"error","time":"2022-07-22T23:30:02.243","sender":"service","message":"error initializing data provider: open /var/azuracast/sftpgo/sftpgo.db: permission denied"}
where /var/azuracast is being stored on a docker volume.
I now want to restore all those permissions.
Is there a way to restore Docker permissions for all existing volumes or to tell Docker to take care of this?
What I tried so far:
I recursively changed all permissions to root:root using chown -R root:root /my/new/ssd/docker.
This problem is causing serious issues for my server environment. I'm aware that using cp -r instead of rsync -aAX was a huge mistake, so I would greatly appreciate any help here.
Thanks a lot in advance.
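For reference, a permission-preserving version of the move (the rsync -aAX route mentioned above) would have looked roughly like this; it is only a sketch and assumes the original /home/docker still exists and that the Docker daemon is stopped first:
# stop the daemon so files are not changing mid-copy
sudo systemctl stop docker
# -a preserves owners and modes, -A copies ACLs, -X copies extended attributes
sudo rsync -aAX /home/docker/ /my/new/ssd/docker/
sudo ln -sfn /my/new/ssd/docker /var/lib/docker
sudo systemctl start docker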

Docker -v command wipes the container

I am creating a Docker container that will run a Minecraft server (yes, I know these already exist). And of course I want the world to be saved when the container is turned off.
This is my dockerfile:
FROM anapsix/alpine-java
COPY ./ /home
CMD ["java","-jar","/home/main.jar"]
EXPOSE 25565
Then I build the image:
docker build -t minecraftdev .
Run the container:
docker run -dp 25565:25565 -v C:/Users/user/server:/home minecraftdev
And then the files in the image, server.properties, the server jar file and EULA.txt, are wiped.
Is there another way I don't know of to get the container to store data, without placing the files in the server folder?
Thank you for your answers. I was able to fix it with -v C:/Users/user/server/world:/home/world, as the world files are stored in that folder, instead of masking all the files in /home, which I didn't know -v did.
Minecraft creates its files next to server.jar and I don't know how to make it store them in another place.
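One alternative sketch: mount a named volume instead of a host folder. When an empty named volume is mounted over a directory that already has files in the image, Docker copies those files into the volume on first use, so server.properties, the jar and EULA.txt are not masked (mc_data is just an example volume name):
# existing image files under /home are copied into the empty volume on first start
docker volume create mc_data
docker run -dp 25565:25565 -v mc_data:/home minecraftdev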

Image's rootfs is incomplete while building from Dockerfile

I'm building an image for Jetson from a Dockerfile. Here's an excerpt from it:
FROM nvcr.io/nvidia/l4t-pytorch:r32.4.4-pth1.6-py3
# some installation
RUN ls -l /usr/local/cuda-10.2/targets/aarch64-linux/lib/
# more installation
The ls command returns just a couple of files. However when I run the resulting container and use its shell, this directory contains many more files.
The problem is that I need some of the libraries from that folder to install something. I want to be able to install it from the Dockerfile but only can do so from the container's shell.
Why is the directory incomplete and is there a way to force-build it so it's ready when I need it?
Thanks.
Solved it by adding "default-runtime": "nvidia" to /etc/docker/daemon.json. Further details here: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
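For reference, the relevant part of /etc/docker/daemon.json would look roughly like this (per the linked jetson-containers guide; restart the Docker daemon afterwards):
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}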

Mount-ing a CDROM repo during docker build

I'm building a Docker image which also involves a small yum install. I'm currently in a location where firewalls and access controls make docker pull, yum install, etc. extremely slow.
In my case, it's a JRE8 Docker image using this official image script.
My problem:
Building the image requires just 2 libraries (gzip + tar), which combined are only 132 kB + 865 kB. But yum inside the docker build script will first download the repo information, which is over 80 MB. While 80 MB is generally small, here it took over 1 hour just to download. If my colleagues need to build, this would be a sheer waste of productive time, not to mention frustration.
Workarounds I'm aware of:
Since this image may not need the full power of yum, I can simply grab the *.rpm files, COPY them in the Dockerfile and use rpm -i instead of yum
I can save the built image and distribute it locally
I could also find the closest mirror for Docker Hub, but not for yum
My bet:
I have a copy of the Linux CD with about the same version
I can add commands in the Dockerfile to rename the *.repo files to *.repo.old
Add a cdrom.repo in /etc/yum.repos.d/ inside the container
Use yum to install the most common packages from the CDROM instead of the internet
My problem:
I'm not able to work out how to mount a CD-ROM repo from inside the container build without using httpd.
In plain linux I do this:
mkdir /cdrom
mount /dev/cdrom /cdrom
cat > /etc/yum.repos.d/cdrom.repo <<EOF
[cdrom]
name=CDROM Repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-oracle
EOF
Any help appreciated.
Docker containers cannot access host devices. I think you will have to write a wrapper script around the docker build command to do the following:
First, mount the CD-ROM to a directory within the Docker build context (that would be a sub-directory of wherever your Dockerfile lives).
Call the docker build command using the contents of that directory.
Unmount the CD-ROM.
so,
cd docker_build_dir
mkdir cdrom
mount /dev/cdrom cdrom
docker build "$@" .
umount cdrom
In the Dockerfile, you would simply do this:
RUN cd cdrom && rpm -ivh rpms_you_need
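If you would rather keep using yum against the CD contents (as in the original plan), a rough, untested variation is to COPY the mounted directory into the image and point a local repo file at it. This bloats the image with the CD contents and assumes the CD carries repodata at its root (gpgcheck is disabled here for brevity):
# copy the CD contents from the build context and register them as a local yum repo
COPY cdrom /cdrom
RUN printf '[cdrom]\nname=CDROM Repo\nbaseurl=file:///cdrom\nenabled=1\ngpgcheck=0\n' > /etc/yum.repos.d/cdrom.repo && \
    yum --disablerepo='*' --enablerepo=cdrom install -y gzip tar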

How can I make a host directory mount with the container directory's contents?

What I am trying to do is set up a docker container for ghost where I can easily modify the theme and other content. So I am making /opt/ghost/content a volume and mounting that on the host.
It looks like I will have to manually copy the theme into the host directory because when I mount it, it is an empty directory. So my content directory is totally empty. I am pretty sure I am doing something wrong.
I have tried a few different variations including using ADD with default themes folder, putting VOLUME at the end of the Dockerfile. I keep ending up with an empty content directory.
Does anyone have a Dockerfile doing something similar that is already working that I can look at?
Or maybe I can use the docker cp command somehow to populate the volume?
I may be missing something obvious or have made a silly mistake in my attempts to achieve this. But the basic thing is I want to be able to upload a new set of files into the ghost themes directory using a host-mounted volume and also have the casper theme in there by default.
This is what I have in my Dockerfile right now:
FROM ubuntu:12.04
MAINTAINER Jason Livesay "ithkuil@gmail.com"
RUN apt-get install -y python-software-properties
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get -qq update
RUN apt-get install -y sudo curl unzip nodejs=0.10.20-1chl1~precise1
RUN curl -L https://en.ghost.org/zip/ghost-0.3.2.zip > /tmp/ghost.zip
RUN useradd ghost
RUN mkdir -p /opt/ghost
WORKDIR /opt/ghost
RUN unzip /tmp/ghost.zip
RUN npm install --production
# Volumes
RUN mkdir /data
ADD run /usr/local/bin/run
ADD config.js /opt/ghost/config.js
ADD content /opt/ghost/content/
RUN chown -R ghost:ghost /opt/ghost
ENV NODE_ENV production
ENV GHOST_URL http://my-ghost-blog.com
EXPOSE 2368
CMD ["/usr/local/bin/run"]
VOLUME ["/data", "/opt/ghost/content"]
As far as I know, empty host-mounted (bound) volumes still will not receive contents of directories set up during the build, BUT data containers referenced with --volumes-from WILL.
So now I think the answer is, rather than writing code to work around non-initialized host-mounted volumes, forget host-mounted volumes and instead use data containers.
Data containers use the same image as the one you are trying to persist data for (so they have the same directories etc.).
docker run -d --name myapp_data mystuff/myapp echo Data container for myapp
Note that it will run and then exit, so your data containers for volumes won't stay running. If you want to keep them running you can use something like sleep infinity instead of echo, although this will obviously take more resources and isn't necessary or useful unless you have some specific reason -- like assuming that all of your relevant containers are still running.
You then use --volumes-from to use the directories from the data container:
docker run -d --name myapp --volumes-from myapp_data mystuff/myapp
https://docs.docker.com/userguide/dockervolumes/
You need to place the VOLUME directive before actually adding content to it.
My answer is completely wrong! Look here: it seems there is actually a bug. If the VOLUME command happens after the directory already exists in the container, then changes are not persisted.
The Dockerfile should always end with a CMD or an ENTRYPOINT.
UPDATE
My solution would be to ADD the files to a directory in the container, then use a shell script as the entrypoint, in which I copy the files into the shared volume and do all the other tasks.
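A minimal sketch of such an entrypoint, assuming the default theme was ADDed to a hypothetical /opt/ghost/content-default directory in the image (adjust the final command to however the image normally starts Ghost):
#!/bin/sh
# seed the mounted (possibly empty) volume from files baked into the image
if [ -z "$(ls -A /opt/ghost/content)" ]; then
    cp -R /opt/ghost/content-default/. /opt/ghost/content/
    chown -R ghost:ghost /opt/ghost/content
fi
# then hand off to the normal start command
exec npm start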
I've been looking into the same thing. The problem I encountered was that I was using a relative local mount path, something like:
docker run -i -t -v ../data:/opt/data image
Switching to an absolute local path fixed this up for me:
docker run -i -t -v /path/to/my/data:/opt/data image
Can you confirm whether you were doing a relative path, and whether this helps?
Docker V1.8.1 preserves data in a volume if you mount it with the run command. From the docker docs:
Volumes are initialized when a container is created. If the container’s
base image contains data at the specified mount point, that existing
data is copied into the new volume upon volume initialization.
Example: An image defines the
/var/www/html
as a volume and populates it with the data of a web application. Your Docker host provides a mount directory
/my/host/dir
You start the image by
docker run -v /my/host/dir:/var/www/html image
then you will get all the data from /var/www/html in the host's /my/host/dir.
This data will persist even if you delete the container or the image.
