Storing local files in Docker Volume for sharing - docker

I'm new to Docker, so this may be an obvious question that I'm just not using the right search terms to find an answer to, so my apologies if that is the case.
I'm trying to stand up a new CI/CD pipeline using a purpose-built container. So far, I've been using someone else's container, but I need more control over the available dependencies, so I need my own container. To that end, I've built a container (Ubuntu), and I have a local (host) directory for the dependencies, and another for the project I'm building. Both are connected to the container using Docker volumes (the -v option), like this:
docker run --name buildbox \
-v /projectpath:/home/project/ \
-v /dependencies:/home/libs \
buildImage buildScript.sh
Since this is eventually going to live in a Docker repo and be accessed by a GitLab CI/CD pipeline, I want to store the dependencies directory in as small a container as possible that I can push up to the Docker repo alongside my Ubuntu build container. That way I can have the pipeline pull both containers, map the dependencies container to the build container (--volumes-from), and map the project to be built using the -v option; e.g.:
docker run --name buildbox \
-v /projectpath:/home/project/ \
--volumes-from depend_vol \
buildImage buildScript.sh
Thus, I pull buildImage and depend_vol from the Docker repo, run buildImage while attaching the dependencies container and project directory as volumes, then run the build script (and extract the build artifact when it's done). The reason I want them separate is in case I want to create different build containers that use common libraries, or if I want to create version specific dependency containers without having a full OS stored (I have plans for this).
Now, I could just start a lightweight generic container (like busybox) and copy everything into it, but I was wondering if there was simply a way to attach the volume and then store the contents in the image when the container shuts down. Everything I've seen about making a portable data store / volume starts with all the data already copied into the container.
But I want to take my local host dependencies directory and store it in a container. Is there a straightforward way to do this? Am I missing something obvious?

So this works, even if it's not quite what I was hoping for, since I'm still doing a lot of file copying (just with tarballs).
# Create a tarball of the files on the host to store, don't store the full path
tar -cvf /home/projectFiles.tar -C /home/projectFiles/ .
# Start a lightweight docker container (busybox) with a volume connection to the host (/home:/backup), then extract the tarball into the container
# cd to the drive root and untar the tarball
docker run --name libraryVolume \
-v /home:/backup \
busybox \
/bin/sh -c \
"cd / && mkdir /projectLibs && tar -xvf /backup/projectFiles.tar -C /projectLibs"
# Don't forget to commit the container image
docker commit libraryVolume
That's it. Then push to the repo.
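For the push step itself, note that docker commit needs an image name (or a docker tag afterwards) before the image can be pushed. A minimal sketch, where the registry host and repository name are placeholders:
# Commit the stopped container to a named image (all names here are examples)
docker commit libraryVolume library-volume:1.0
# Tag it for your registry and push
docker tag library-volume:1.0 registry.example.com/myproject/library-volume:1.0
docker push registry.example.com/myproject/library-volume:1.0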
To use it, pull the repo, then start the data volume:
docker run --name projLib \
-v /projectLibs \
--entrypoint "/bin/sh" \
libraryVolume
Then start the container (projBuild) that is going to reference the data volume (projLib).
docker run -it --name projBuild \
--volumes-from=projLib \
-v /home/mySourceCode:/buildProject \
--entrypoint /buildProject/buildScript.sh \
builderImage
Seems to work.

Related

Copying a file from container to locally by using volume mounts

Trying to copy files from the container to the local machine first.
So, I have a custom Dockerfile with RUN mkdir /test1 && touch /test1/1.txt; I build my image, and I have created an empty folder at the local path /root/test1,
and docker run -d --name container1 -v /root/test1:/test1 Image:1
I tried to copy files from the container to the local folder so I could use them later on, but the empty local folder takes precedence over the container path and leaves the container's /test1 empty.
Could someone please help me here?
For example, I have built my own custom Jenkins image; the first time I launch it I need to copy all the configuration and changes from the container to the local machine, so that if I later delete the container and launch it again I don't need to configure everything from scratch.
Thanks,
The relatively new --mount flag replaces the -v/--volume mount. It's easier to understand (syntactically) and is also more verbose (see https://docs.docker.com/storage/volumes/).
You can mount and copy with:
docker run -i \
--rm \
--mount type=bind,source="$(pwd)"/root/test1,target=/test1 \
<image> \
/bin/bash << COMMANDS
cp <files> /test1
COMMANDS
where you need to substitute your image name for <image> and adjust the cp command to your needs. I'm not sure if you need the "$(pwd)" part.
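As a concrete variant for the asker's paths (assuming the image is tagged image:1 — Docker image names must be lowercase — and that it contains a shell and cp), copying the image's files out into the bind-mounted host folder could look like this:
# Mount the host folder at a different path so it does not shadow /test1 inside the image,
# then copy the container's /test1 contents into it (paths taken from the question above)
docker run --rm \
--mount type=bind,source=/root/test1,target=/mnt/out \
image:1 \
cp -r /test1/. /mnt/out/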
Off the top of my head, without testing to confirm, I think it is
docker cp container1:/path/on/container/filename /path/on/hostmachine/
EDIT: Yes, that should work. Also, "container1" is used here because that was the container name provided in the example.
In general it works like this:
Container to host:
docker cp containername:/containerpath/ /hostpath/
Host to container:
docker cp /hostpath/ containername:/containerpath/

Docker, mount all user directories to container

Adding the -v option can mount directories into the container; for example, after mounting /home/me/my_code into the container, we can see that directory from inside it.
Currently, in my Dockerfile, the user is docker and the workspace is /home/docker. How can I mount all my directories in /home/me to /home/docker, so that when I enter the container it is convenient to run my tasks and explore files just like in /home/me?
While building an image through a Dockerfile, COPY or ADD is used to copy files with the necessary content into the image during the build, for example when installing npm binaries and the like.
Since you are looking for the flexibility of having the same local filesystem inside the container, you can try out "bind mounts".
bash-3.2$ docker run \
> -it \
> --name devtest \
> --mount type=bind,source=/Users/anku/,target=/app \
> nginx:latest \
> bash
root@c072896c7bb2:/#
root@c072896c7bb2:/# pwd
/
root@c072896c7bb2:/# cd app
root@c072896c7bb2:/app# ls
Applications Documents Library Music Projects PycharmProjects anaconda3 'iCloud Drive (Archive)' 'pCloud Drive' testrun.bash
Desktop Downloads Movies Pictures Public 'VirtualBox VMs' gitlab minikube-linux-amd64 starup.sh
root@c072896c7bb2:/app#
There are two kinds of mechanisms to manage persistent data.
Volumes are completely managed by Docker.
A bind mount mounts a file or directory on the host machine into the container. Any changes made from the host machine or from inside the container are synced.
I suggest going through Differences between --volume and --mount behavior.
Choose whatever works best for you.
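For illustration, a minimal sketch of the two mechanisms side by side using the --mount syntax (the volume name, image, and paths are placeholders):
# Named volume: storage is created and managed by Docker under /var/lib/docker/volumes
docker run -it --mount type=volume,source=mydata,target=/app/data nginx:latest bash
# Bind mount: an existing host directory is mapped into the container as-is
docker run -it --mount type=bind,source="$HOME"/my_code,target=/home/docker/my_code nginx:latest bash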

Mount a nfs share in docker build to install software

I am building a Docker image from a Dockerfile. However, I am doing some installs from files that are currently hosted on an NFS share. In regular CentOS I mount the drive with mount.nfs, then run the install commands and point to the NFS share as the repository for the install files.
Is there any way to do this with Docker? I read a few posts about docker run -v, but I am not at the run stage yet; I first need to create the image.
The alternative is to copy the whole repository via zip or tar, then unarchive it, do the install, and then delete the files. However, I think this will end up as a huge image.
You'll need experimental software (as of writing) for doing this.
First of all, you have to create a buildx builder instance:
docker buildx create --name insecure --driver docker-container \
--driver-opt image=moby/buildkit:master \
--buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host'
As of today, the latest release (v0.9.0) of buildkit doesn't have the --insecure support, so you need master.
You should issue this command as the user who does the build.
Then you'll need to add these to your Dockerfile:
# syntax = docker/dockerfile:experimental
RUN --security=insecure mkdir /nfs && \
mount -t nfs -o nolock -o vers=4 $SERVER_IP:/nfs /nfs && \
ls -la /nfs
Third, you have to do your build with buildx and give the following options (--allow and --builder along with your normal options):
docker buildx build --allow security.insecure,network.host \
--builder insecure \
-t image:tag --file=Dockerfile .
You should then have your NFS server mounted at /nfs.
Be aware that this mount will be present only within the same RUN instruction, because each RUN step runs in its own container. The next RUN line will see only an empty /nfs directory.
So you should do everything that needs data from /nfs within that single RUN step!
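In practice that means mounting, installing, and unmounting all in one RUN instruction, roughly like this ($SERVER_IP, the installer path, and the install command are placeholders):
# syntax = docker/dockerfile:experimental
# Everything that needs the NFS data happens inside this single RUN step
RUN --security=insecure mkdir /nfs && \
    mount -t nfs -o nolock -o vers=4 $SERVER_IP:/nfs /nfs && \
    /nfs/installers/install.sh && \
    umount /nfs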
When you are building a Docker image you have full access to the host's file system, which means you should simply be able to write in your Dockerfile
ADD /nfs-path/file /path-inside-docker-image/file
You don't need any additional action in Docker to do that.

Docker \ commit a container with its data

There is lots of documentation, but I am still missing something. My goal is to run a one-time registry (2.0), push a couple of images to it, and export/commit the container.
I need to take it as a zip file to a machine without internet access.
The thing is, the images I pushed to the registry aren't kept: whenever I import the registry to test, it comes up empty. I understand that commit/export will not work on mounted volumes, so how do I "disable" the volumes of the initial registry container?
I would rather suggest you decouple the image (registry v2) from the data for transport by copying the needed images separately and then mounting them into the registry container when running it.
Kind of like this:
On the machine where you are preparing the registry, run a registry container using something like
docker run -d \
--name registry \
--restart=always \
-e SEARCH_BACKEND=sqlalchemy \
-e STORAGE_PATH=/srv/docker-registry \
-v /srv/data/docker-registry:/srv/docker-registry \
-p 127.0.0.1:5002:5000 \
registry:2.0.0
Then tag your images to localhost:5002/repo-name/image-name (the host port published above) and execute
docker push localhost:5002/repo-name/image-name
After that, tar/zip/whatever /srv/data/docker-registry and do
docker save -o ~/docker-registry-v2 registry:2.0.0
Copy the two archives to the target machine,
docker load -i ~/docker-registry-v2
Untar/unzip/whatever the data archive and run the registry again with a similar run command as above, supplying the directory you unpacked the data archive to as the first path after -v.
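On the target machine, the whole sequence could look roughly like this (the data archive name is illustrative, and it is assumed the archive contains the docker-registry directory):
# Load the saved registry image and unpack the data archive
docker load -i ~/docker-registry-v2
tar -xf docker-registry-data.tar -C /srv/data
# Run the registry again, mounting the restored data directory
docker run -d \
--name registry \
--restart=always \
-e SEARCH_BACKEND=sqlalchemy \
-e STORAGE_PATH=/srv/docker-registry \
-v /srv/data/docker-registry:/srv/docker-registry \
-p 127.0.0.1:5002:5000 \
registry:2.0.0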
With this technique, the repos and images in your registry will also survive container destroys and restarts.

Docker - how can I copy a file from an image to a host?

My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy out the build artifact (in my case it's a .zip produced by sbt dist in ../target/, but I think this question also applies to jars, binaries, etc.).
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:/out $IMAGENAME /bin/cp ../blah.zip /out
but I'm running boot2docker on OS X and I don't know how to write directly to my Mac host filesystem (read-write volumes mount inside my boot2docker VM), which means I can't easily share a script that extracts blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure Docker functionality (no bash needed).
Edit: the container does not even have to be run for this solution.
Edit 2: thanks to @Jonathan Dumaine for --rm so the container will be removed afterwards. I had just never tried it, because it sounded illogical to copy something from somewhere that had already been removed by the previous command, but I tried it and it works.
Edit 3: due to the comments we found out that --rm is not working as expected; it does not remove the container because it never runs, so I added functionality to delete the created container afterwards (--name tc = temporary container).
Edit 4: this error appeared; it seems like a bug in Docker, because t is in a-z and this did not happen a few months before.
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
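If the --name and --rm quirks get in the way, an equivalent variant of the same one-liner that uses a shell variable instead of a fixed container name would be (same image and file path as above):
# Create a temporary container, copy the file out of it, then remove the container
id=$(docker create registry.example.com/ansible-base:latest) && \
docker cp "$id":/home/ansible/.ssh/id_rsa ./hacked_ssh_key && \
docker rm -v "$id"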
A much faster option is to copy the file from the container to a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update: as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general, once you have pulled an image, the image is stored on your system and so are all its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using $ docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
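If you do want to hunt through the layers directly, a rough sketch (requires root, assumes the overlay2 driver; the file name blah.zip is just a placeholder):
# List the diff directories of the image's layers and search each one for the wanted file
docker image inspect IMAGE_NAME:TAG | jq -r '.[0].GraphDriver.Data | .UpperDir, .LowerDir' \
| tr ':' '\n' | while read -r dir; do
sudo find "$dir" -name 'blah.zip' 2>/dev/null
done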
First, pull the Docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on macOS. I can assure you that scripts based on "docker cp" are portable, because every command is relayed into the boot2docker VM and the binary stream is relayed back to the docker command-line client running on your Mac. So write operations from the docker client are executed inside the server and written back to the executing client instance!
I am sharing a backup script for Docker volumes with every Docker container I provide, and my backup scripts are tested both on Linux and on macOS with boot2docker. The backups can easily be exchanged between platforms. Basically I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my Jenkins container, which has the name jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between the Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
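A hypothetical restore counterpart under the same assumptions (container name jenkins_jenkins_1, backup directory, and archive name as in the example above) could look like this:
# Extract the backup archive back into the jenkins volume from a throwaway busybox container
docker run --rm --volumes-from jenkins_jenkins_1 \
-v /Users/github/jenkins/backups:/backup busybox \
tar xf /backup/JenkinsBackup-2015-07-09-14-26-15.tar -C /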
You could bind a local path on the host to a path on the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.
