Docker, mount all user directories to container - docker

Adding the -v option can mount directories into the container; for example, after mounting /home/me/my_code into the container, we can see that directory from inside it.
Currently, in my Dockerfile, the user is docker and the workspace is /home/docker. How can I mount all my directories in /home/me to /home/docker, so that when I enter the container it is convenient to run my tasks and explore files just as in /home/me?
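A minimal sketch of what the question is asking for, assuming the image is called my-image (a hypothetical name) and the container user's home is /home/docker:
# Bind-mount the host home directory onto the container user's home (image name is hypothetical)
docker run -it -v /home/me:/home/docker my-image bash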

While building an image through a Dockerfile, COPY or ADD is used to copy files with the necessary content into the image during the build, for example when installing npm binaries and the like.
Since you are looking for the flexibility of having the same local filesystem available inside the container, you can try out "bind mounts".
bash-3.2$ docker run \
> -it \
> --name devtest \
> --mount type=bind,source=/Users/anku/,target=/app \
> nginx:latest \
> bash
root@c072896c7bb2:/#
root@c072896c7bb2:/# pwd
/
root@c072896c7bb2:/# cd app
root@c072896c7bb2:/app# ls
Applications Documents Library Music Projects PycharmProjects anaconda3 'iCloud Drive (Archive)' 'pCloud Drive' testrun.bash
Desktop Downloads Movies Pictures Public 'VirtualBox VMs' gitlab minikube-linux-amd64 starup.sh
root@c072896c7bb2:/app#
There are two kinds of mechanisms for managing persistent data.
Volumes are completely managed by Docker.
A bind mount mounts a file or directory on the host machine into the container. Any changes made from the host machine or from inside the container are synced.
I suggest going through Differences between --volume and --mount behavior.
Choose whatever works best for you.
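For comparison, a sketch of the same bind mount from the example above written with the -v shorthand instead of --mount (behavior differs slightly: -v creates a missing host path as a directory, while --mount reports an error):
# -v/--volume shorthand equivalent of the --mount example above
docker run -it --name devtest -v /Users/anku/:/app nginx:latest bash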

Related

Storing local files in Docker Volume for sharing

I'm new to Docker, so this may be an obvious question that I'm just not using the right search terms to find an answer to, so my apologies if that is the case.
I'm trying to stand up a new CI/CD pipeline using a purpose-built container. So far, I've been using someone else's container, but I need more control over the available dependencies, so I need my own. To that end, I've built a container (Ubuntu), and I have a local (host) directory for the dependencies and another for the project I'm building. Both are connected to the container using Docker volumes (the -v option), like this:
docker run --name buildbox \
-v /projectpath:/home/project/ \
-v /dependencies:/home/libs \
buildImage buildScript.sh
Since this is going to eventually live in a Docker repo and be accessed by a GitLab CI/CD Pipeline, I want to store the dependencies directory in as small of a container as possible that I can push up to the Docker repo alongside my Ubuntu build container. That way I can have the Pipeline pull both containers, map the dependencies container to the build container (--volumes-from), and map the project to be built using the -v option; e.g.:
docker run --name buildbox \
-v /projectpath:/home/project/ \
--volumes-from depend_vol \
buildImage buildScript.sh
Thus, I pull buildImage and depend_vol from the Docker repo, run buildImage while attaching the dependencies container and project directory as volumes, then run the build script (and extract the build artifact when it's done). The reason I want them separate is in case I want to create different build containers that use common libraries, or if I want to create version specific dependency containers without having a full OS stored (I have plans for this).
Now, I could just start a lightweight generic container (like busybox) and copy everything into it, but I was wondering if there was simply a way to attach the volume and then store the contents in the image when the container shuts down. Everything I've seen about making a portable data store / volume starts with all the data already copied into the container.
But I want to take my local host dependencies directory and store it in a container. Is there a straightforward way to do this? Am I missing something obvious?
So this works, even if it's not quite what I was hoping for, since I'm still doing a lot of file copying (just with tarballs).
# Create a tarball of the files on the host to store, don't store the full path
tar -cvf /home/projectFiles.tar -C /home/projectFiles/ .
# Start a lightweight docker container (busybox) with a volume connection to the host (/home:/backup), then extract the tarball into the container
# cd to the drive root and untar the tarball
docker run --name libraryVolume \
-v /home:/backup \
busybox \
/bin/sh -c \
"cd / && mkdir /projectLibs && tar -xvf /backup/projectFiles.tar -C /projectLibs"
# Don't forget to commit the container image
docker commit libraryVolume
That's it. Then push to the repo.
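One detail the recipe glosses over: to push the committed image, it needs a repository name and tag. A hedged sketch, using registry.example.com/library-volume purely as a placeholder:
# Name the committed container's image so it can be pushed (repository and tag are placeholders)
docker commit libraryVolume registry.example.com/library-volume:1.0
docker push registry.example.com/library-volume:1.0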
To use it, pull the images from the repo, then start the data volume container:
docker run --name projLib \
-v /projectLibs \
--entrypoint "/bin/sh" \
libraryVolume
Then start the container (projBuild) that is going to reference the data volume (projLib).
docker run -it --name projBuild \
--volumes-from=projLib \
-v /home/mySourceCode:/buildProject \
--entrypoint /buildProject/buildScript.sh \
builderImage
Seems to work.

Copying a file from a container to the local machine by using volume mounts

Trying to copy files from the container to the local machine first.
So, I have a custom Dockerfile with RUN mkdir /test1 && touch /test1/1.txt, then I build my image, and I have created an empty folder at the local path /root/test1,
and then run docker run -d --name container1 -v /root/test1:/test1 Image:1.
I tried to copy files from the container to the local folder so I could use them later on, but the empty local folder takes precedence and makes the container path empty.
Could someone please help me here?
For example, I have built my own custom Jenkins image. The first time I launch it I need to copy all the configurations and changes from the container to the local machine, so that later, if I delete the container and launch it again, I don't have to configure it from scratch.
Thanks,
The relatively new --mount flag replaces the -v/--volume mount. It's easier to understand (syntactically) and is also more verbose (see https://docs.docker.com/storage/volumes/).
You can mount and copy with:
# Image:1 is the image name from the question; substitute your own image
docker run -i \
--rm \
--mount type=bind,source="$(pwd)"/root/test1,target=/test1 \
Image:1 \
/bin/bash << COMMANDS
cp <files> /test1
COMMANDS
where you need to adjust the cp command to your needs. I'm not sure if you need the "$(pwd)" part.
Off the top of my head, without testing to confirm, I think it is
docker cp container1:/path/on/container/filename /path/on/hostmachine/
EDIT: Yes, that should work. Also, "container1" is used here because that was the container name provided in the example.
In general it works like this
container to host
docker cp containername:/containerpath/ /hostpath/
host to container
docker cp /hostpath/ containername:/containerpath/
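Applied to the question's example, a hedged sketch (this assumes container1 was started without the empty bind mount over /test1; with the mount in place there is nothing at that path to copy):
# Copy the file baked into the image out of the container to the current host directory
docker cp container1:/test1/1.txt ./1.txt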

Link a docker container folder to a host folder

I am new to docker and I am trying to do the following: I would like to have a folder on my host machine which is synched with a folder in the Docker container. I need this since I would like to write on some files of the container folder with the usual software tools I use on my host machine (e.g., sublime text, vscode). Then, once I am done editing the files on my host computer, I will compile them in the docker container and test them directly there.
My workflow is the following:
In the Dockerfile I clone a git repository, let's call it repo1; it will then be in the docker container at /root/repo1.
I build the container (and I remove the old ones, not important for this question)
# Run docker, setup and keep running
echo Running docker, setting it up and keeping it running ...
docker run -dt \
--privileged \
-v /path_to_existing_folder_on_host_machine:/root/repo1 \
-e DISPLAY=:0 \
-p 14556:14556/udp \
--name name_container_1 \
name_container_1
echo ... Finished setting up docker and kept it running in the background
The folders are synced: if I create a file on the host machine, I can see it from the docker container. However, I end up with a folder that is empty on both the host and the container.
EDIT: I understand now that what I was doing is wrong, since mounting a volume from the host machine effectively "overrides" files that exist in the container. Therefore, I think I have to find another solution.
Maybe you want to mount your host folder like this:
docker run -v <host-file-system-directory>:<docker-file-system-directory>
Refer to Access a bash script variable outside the docker container in which the script is running.
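Regarding the EDIT above: one common workaround, sketched here with the names from that question, is to copy the repository out of the image first, then bind-mount that seeded host copy so the mount no longer hides anything:
# Create a stopped container from the image just to copy files out of it
docker create --name repo_copy name_container_1
# Copy the cloned repo from the image to the host (creates ./repo1 if it does not exist)
docker cp repo_copy:/root/repo1 ./repo1
docker rm repo_copy
# Now bind-mount the seeded host copy back over /root/repo1
docker run -dt --privileged -v "$(pwd)"/repo1:/root/repo1 --name name_container_1 name_container_1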

Exporting a Docker container with contents of mounted volume

I am trying to export a docker container that uses a mounted local volume as its root, which I run with docker run --privileged -v /path/to/local/files:/root --name cse303dev -it cse303 to mount that local directory.
What is the best way of exporting the container AND all of the contents of that local mounted directory into a simple tar file of some sort? And then how would I easily re-import this container and run it so that I can see and use all of those files copied from the local machine that exported it? Is this possible?
You almost never "export" a container per se. Containers are generally intended to be freely destroyed and recreated.
In your case you already have the data you care about stored outside the container (in a bind-mounted folder, which is the easy case), so you can just copy that directory tree to wherever else and then run a new copy of the container there.
(cd /path/to/local/files; tar cvzf ~/local-files.tar.gz .)
scp local-files.tar.gz there:
ssh there
mkdir files
(cd files; tar xvzf ../local-files.tar.gz)
docker run -v $PWD/files:/root cse303
This is trickier if you're storing the data in a named volume. The Docker documentation describes how to back up the contents of a named volume and you'd have to go through that procedure.
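For reference, a minimal sketch of that documented backup pattern, assuming a named volume called mydata (a hypothetical name):
# Archive the contents of the named volume into a tarball in the current host directory
docker run --rm -v mydata:/volume -v "$(pwd)":/backup busybox \
tar czf /backup/mydata-backup.tar.gz -C /volume .
# Restore the tarball into a (new or existing) named volume
docker run --rm -v mydata:/volume -v "$(pwd)":/backup busybox \
tar xzf /backup/mydata-backup.tar.gz -C /volume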
If you want to export the entire contents of the container (including the mounted volumes, which might be a bad idea depending upon what you have mounted), then you want to run tar inside the container and pipe out the data to a file:
docker run --privileged -v /path/to/local/files:/root ${IMAGE_NAME} \
tar -cf - -C / --exclude=proc --exclude=sys . | gzip > myfile.tgz
I would highly recommend excluding /proc and /sys (as in the above example) or you will likely encounter issues.
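To address the re-import half of the question: a filesystem tarball like the one above can be turned back into an image with docker import (a sketch; the image name cse303-restored is a placeholder, and note that import drops image metadata such as CMD and ENV):
# Create an image from the exported filesystem tarball, then run a shell in it
docker import myfile.tgz cse303-restored
docker run -it cse303-restored /bin/bash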

how to map a local folder as the volume in the docker container or image?

I am wondering if I can map the volume in docker to another folder on my Linux host. The reason I want to do this is that, if I don't misunderstand, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I am thinking about changing it to a host folder I do have access to (for example /tmp/) when I create the image. I'm able to modify the Dockerfile if this can be done before creating the image. Or must it be done after creating the image, or after creating the container?
I found this article which helps me to use a local directory as the volume in docker.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host, though they are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible, but you'd need root access to run it. Note that with access to docker on the host, there are likely a lot of ways to gain root privileges on the host. Creating a symlink (or bind mount) to the volume's directory should be all that's needed to give you a convenient path to it.
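A sketch of that idea for a named volume called test (the one used in the example below); this needs root because the path under /var/lib/docker is root-only:
# Print the host path where Docker stores the named volume
sudo docker volume inspect --format '{{ .Mountpoint }}' test
# Typically something like /var/lib/docker/volumes/test/_data; symlink it somewhere convenient
sudo ln -s /var/lib/docker/volumes/test/_data /tmp/test-volume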
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.
