How to merge host folder with container folder in Docker? - docker

I have Wikimedia running on Docker. Wikimedia's extensions reside in the extensions/ folder, which initially contains the built-in extensions (one extension = one subfolder).
Now I wish to add new extensions. However, I'd rather not modify the Dockerfile or commit new layers to the existing container.
Is it possible to create a folder on the host (e.g. /home/admin/wikimedia/extensions/) that is merged with (rather than overwrites) the extensions folder in the container? Then whenever I want to install a new extension, I just copy its folder into /home/admin/wikimedia/extensions/ on the host.

You can mount a volume from your host to a location separate from the extensions folder, then in your startup script copy the contents into the container's directory. You will need to rebuild your image once.
For example:
Dockerfile:
COPY startup-script /usr/local/bin/startup-script
RUN chmod +x /usr/local/bin/startup-script
CMD /usr/local/bin/startup-script
startup-script:
#!/bin/bash
cp -r /mnt/extensions/. /path/to/wikipedia/extensions/
/path/to/old-startup-script "$@"
docker run -d -v /home/admin/wikimedia/extensions:/mnt/extensions wikipedia
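The key detail is that copying the mounted folder's *contents* merges them into the target directory instead of replacing it. A Docker-free sketch of what the startup script does (directory names here are illustrative, not from the original setup):

```shell
# Simulate the startup-script merge on plain directories.
# "builtin" plays the role of the image's extensions/ folder,
# "mounted" the host folder bind-mounted at /mnt/extensions.
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p builtin/VisualEditor mounted/NewExtension
echo "shipped" > builtin/VisualEditor/extension.json
echo "added"   > mounted/NewExtension/extension.json

# Copying the contents (note the trailing /.) merges the mounted
# extensions alongside the built-in ones; nothing is removed.
cp -r mounted/. builtin/

ls builtin    # both VisualEditor and NewExtension are now present
```

The same `cp -r src/. dst/` form is what the startup script above relies on.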
That is one way to get around this problem. The other way would be to maintain a separate data container for the extensions; you mount this and maintain it outside of the wikipedia container. It would have to contain all the extensions.
You can start one like so:
docker run -d -v /home/admin/wikimedia/extensions:/path/to/wikipedia/extensions --name extensions busybox tail -f /dev/null
docker run -d --volumes-from extensions wikipedia

Related

Docker : Dynamically created file copy to local machine

I am new to Docker. I'm dynamically creating a file inside a Docker container and want it copied to my local machine at the same time; please let me know how this is possible through volumes.
For now, I have to run the command below again and again to check the file's data:
docker cp source destination
How can this be done through volumes? The file format will be .csv or .xlsx. I mean, what should I write in the Dockerfile so that it copies the file?
What you need is a volume. You have to add your current directory as a volume to the container when you first create it, so that they are the same folder. By doing this, you'll be able to sync the files in that folder automatically. (I'm assuming you're using Docker for a development environment.)
This is how I run my container.
docker run -d -it --name {container_name} --volume $PWD:{directory_in_container} --entrypoint /bin/bash {image_name}
That is, you have to add --volume $PWD:{directory_in_container} to your own run command.
If you have a problem again, just add more detail to your question.
Things you can add might be your Dockerfile, and how you first run your container.

Copying a file from container to locally by using volume mounts

Trying to copy files from the container to the local first
So, I have a custom Dockerfile with RUN mkdir /test1 && touch /test1/1.txt; then I build my image and create an empty folder locally at /root/test1,
and docker run -d --name container1 -v /root/test1:/test1 Image:1
I tried to copy files from the container to the local folder so I could use them later on, but the (empty) local folder takes precedence and makes the container's directory empty.
Could you please someone help me here?
For example, I have built my own custom Jenkins image. The first time I launch it, I need to copy all the configuration and changes from the container to local storage, so that if I later delete the container and launch it again I don't need to configure everything from scratch.
Thanks,
The relatively new --mount flag is an alternative to the -v/--volume mount. It's easier to understand (syntactically) and is also more verbose (see https://docs.docker.com/storage/volumes/).
You can mount and copy with:
docker run -i \
--rm \
--mount type=bind,source="$(pwd)"/root/test1,target=/test1 \
<image> \
/bin/bash << COMMANDS
cp <files> /test1
COMMANDS
where you need to adjust the cp command to your needs. I'm not sure if you need the "$(pwd)" part.
Off the top of my head, without testing to confirm, I think it is
docker cp container1:/path/on/container/filename /path/on/hostmachine/
EDIT: Yes, that works. Also, "container1" is used here because that was the container name given in the example.
In general it works like this
container to host
docker cp containername:/containerpath/ /hostpath/
host to container
docker cp /hostpath/ containername:/containerpath/

Docker, mount all user directories to container

Adding the -v option can mount directories into the container; for example, mounting /home/me/my_code into the container lets us see that directory from inside it.
Currently, in my Dockerfile, the user is docker and the workspace is /home/docker. How can I mount all my directories in /home/me to /home/docker, so that when I enter the container it is convenient to run my tasks and explore files just as in /home/me?
While building an image through a Dockerfile, COPY or ADD is used to copy files with the necessary content into the image during the build, for example when installing npm binaries.
Since you are looking for the flexibility of having the same local FS inside the container, you can try out bind mounts.
bash-3.2$ docker run \
> -it \
> --name devtest \
> --mount type=bind,source=/Users/anku/,target=/app \
> nginx:latest \
> bash
root@c072896c7bb2:/#
root@c072896c7bb2:/# pwd
/
root@c072896c7bb2:/# cd app
root@c072896c7bb2:/app# ls
Applications Documents Library Music Projects PycharmProjects anaconda3 'iCloud Drive (Archive)' 'pCloud Drive' testrun.bash
Desktop Downloads Movies Pictures Public 'VirtualBox VMs' gitlab minikube-linux-amd64 starup.sh
root@c072896c7bb2:/app#
There are two kinds of mechanisms to manage persisting data.
Volumes are completely managed by Docker.
Bind mounts mount a file or directory on the host machine into the container. Any changes made from the host machine or from inside the container are synced.
I suggest going through Differences between --volume and --mount behavior.
Choose what works best for you.

Mounting multiple files with same extension via docker run

I am aware that a single file, say hello_world.py, in my local file system can be mounted (not copied) on a docker container by
docker run -v local_directory/hello_world.py:docker_directory/hello_world.py other_params
My question is whether it is possible to use a similar syntax to mount multiple files with the same extension in a directory into a docker container. I was experimenting with *.py, to no avail:
docker run -v local_directory/*.py:docker_directory/*.py other_params
Is my only option to explicitly write individual -v statements for each .py file in the docker run command?
While *-style mappings are not possible, there are certainly ways around it so you don't have to map each file individually. One possibility is to mount local_directory into the container, then create symlinks in a for loop:
docker run -v local_directory:custom_directory other_params
for i in local_directory/*.py
do
docker exec -it <container_name> ln -s custom_directory/$(basename "$i") docker_directory/
done
No, as of Docker version 19.03.2, build 6a30dfc, it isn't possible to use relative paths to mount files in Docker, nor to use wildcard patterns.
A clean solution for your case would be to mount the entire folder and point the command at the script's path inside it, such as:
docker run -v my_folder:/docker/my_folder python:3 python /docker/my_folder/my-script.py
more info

how to map a local folder as the volume the docker container or image?

I am wondering if I can map a Docker volume to another folder on my Linux host. The reason I want to do this is that, if I don't misunderstand, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I'm thinking of changing it to a host folder I do have access to (for example /tmp/) when I create the image. I'm able to modify the Dockerfile if this can be done before creating the image, or must it be done after creating the image or after creating the container?
I found this article which helps me to use a local directory as the volume in docker.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host, though they are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible; you'd need root access to run it. Note that with access to docker on the host, there are likely many ways to obtain root privileges on the host. Creating a bind mount or symlink to the target folder should be all that's needed (a hard link won't work for directories).
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may also need to run chown -R $uid /target inside a container to change ownership of everything to your uid on the host.
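The tar pipe in that command is a general idiom for copying one directory tree into another while preserving permissions; it behaves the same outside Docker (directory names here are illustrative):

```shell
# The same tar-pipe idiom without containers: copy everything under
# source_dir into target_dir.
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p source_dir/sub target_dir
echo "hello" > source_dir/sub/file.txt

# -c creates an archive of source_dir's contents (-C changes directory
# first), and the second tar extracts it under target_dir.
tar -cC source_dir . | tar -xC target_dir

cat target_dir/sub/file.txt   # hello
```

Inside the disposable container, /source and /target are simply the two mounted volumes.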
