Move docker bind-mount to volume

Currently, I run my containers like this, for example:
docker run -v /nexus-data:/nexus-data sonatype/nexus3
After reading the documentation, I discovered volumes, which are completely managed by Docker. For various reasons, I want to change the way I run my containers, to something like this:
docker run -v nexus-data:/nexus-data sonatype/nexus3
I want to transfer my existing bind mounts to volumes.
But I don't want to lose the data in the /nexus-data folder. Is there a way to transfer this folder to the new volume without restarting everything? I also have Jenkins and Sonar containers, for example; I just want to change the way I persist data. Is there a proper way to do this?

You can try the following steps so that you will not lose your current nexus-data.
docker run -v nexus-data:/nexus-data sonatype/nexus3
docker cp /nexus-data/. <container-name-or-id>:/nexus-data/
docker stop <container-name-or-id>
docker start <container-name-or-id>
docker cp will copy data from your host machine's /nexus-data folder into the container's /nexus-data folder, which is your mounted volume.
Let me know if you face any issues while performing these steps.
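Alternatively, you can copy the data without starting the application at all, using a one-off helper container that mounts both the old host directory and the new named volume. A minimal sketch, assuming the paths from the question:
# Create the named volume, then copy the old bind-mount data into it
docker volume create nexus-data
docker run --rm -v /nexus-data:/from -v nexus-data:/to alpine sh -c 'cp -a /from/. /to/'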

Here's another way to do this, that I just used successfully with a Heimdall container. It's outlined in the documentation for the sonatype/nexus3 image:
Stop the running container (e.g. named nexus3)
Create a Docker volume called nexus-data with the following command: docker volume create nexus-data
By default, Docker will store the volume's content at /var/lib/docker/volumes/nexus-data/_data/
Simply copy the directory you had previously been using as a bind mount into the aforementioned volume directory (you'll need superuser privileges to do this, or the user must be part of the docker group): cp -R /path/to/nexus-data/* /var/lib/docker/volumes/nexus-data/_data/
Restart the nexus3 container with docker run --name=nexus3 -v nexus-data:/nexus-data sonatype/nexus3 (note that options like --name must come before the image name)
Your container will be back up and running, with the files previously persisted in /path/to/nexus-data/ now mirrored in the Docker volume. Check that functionality is the same, of course, and if so, you can delete the /path/to/nexus-data/ directory.
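Putting those steps together as a sketch (the /path/to/nexus-data placeholder is from the steps above):
docker stop nexus3
docker volume create nexus-data
sudo cp -R /path/to/nexus-data/* /var/lib/docker/volumes/nexus-data/_data/
docker run --name=nexus3 -v nexus-data:/nexus-data sonatype/nexus3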
Q.E.D.


Combining VOLUME + docker run -v

I was looking for an explanation on the VOLUME entry when writing a Dockerfile and came across this statement
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't have any -v option.
You can declare it at runtime with docker run -v [host-dir:]container-dir.
combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.
But I'm having a hard time understanding this line:
...combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
For example, let's say I have a config file on my host machine, and I run a container based off the image I made with the Dockerfile I wrote. Will it copy the config file to where the volume I stated in the VOLUME entry is?
Would it be something like this (pseudocode)?
# Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-server
VOLUME /etc/mysql/conf.d
CMD ["systemctl", "start", "mysql"]
And when I run it
docker run -it -v /path/to/config/file:/etc/mysql/conf.d ubuntu_based_image
Is this what they mean?
You probably don't want VOLUME in your Dockerfile. It's not necessary to mount files or directories at runtime, and it has confusing side effects like making subsequent RUN commands silently lose state.
If an image does have a VOLUME, and you don't mount anything else there when you start the container, Docker will create an anonymous volume and mount it for you. This can result in space leaks if you don't clean these volumes up.
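For example, you can list the leaked anonymous volumes and clean them up:
# Anonymous volumes no longer attached to any container
docker volume ls -qf dangling=true
# Remove all unused volumes (prompts for confirmation)
docker volume prune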
You can use a docker run -v option on any container directory regardless of whether or not it's declared as a VOLUME.
If you docker run -v /host/path:/container/path, the two directories are actually the same; nothing is copied, and writes to one are (supposed to be) immediately visible on the other.
docker run -v /host/path:/container/path bind mounts aren't visible in /var/lib/docker at all.
You shouldn't usually be looking at content in /var/lib/docker (and can't if you're not on a native-Linux host). If you need to access the volume file content directly, use a bind mount rather than a named or anonymous volume.
Bind mounts like you've shown are appropriate for injecting config files into containers, and for reading log files back out. Named volumes are appropriate for stateful applications' storage, like the data for a MySQL database. Neither type of volume is appropriate for code or libraries; build these directly into Docker images instead.
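As a sketch of both patterns together (the image and paths here are illustrative, not taken from the question):
# Bind mount injects a config file read-only; a named volume holds the data
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/my.cnf:/etc/mysql/conf.d/my.cnf:ro" \
  -v mysql-data:/var/lib/mysql \
  mysql:8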

Docker NodeRed committed container does not maintain flows and modules

I'm working on a project using Node-RED deployed with Docker, and I would like to save the state of my deployment, including flows, settings, and newly added modules, so that I can save the image and load it on another host, replicating exactly the same Node-RED instance.
I created the container using:
docker run -itd --name my-nodered node-red
After implementing the flows and installing some custom modules, with the container running, I used these commands:
docker commit my-nodered my-project-nodered/my-nodered:version1
docker save my-project-nodered/my-nodered:version1 > tar-archive.tar.gz
And on another machine I imported the image using:
docker load < tar-archive.tar.gz
And ran it using:
docker run -itd my-project-nodered/my-nodered:version1
And I obtain a vanilla Node-RED container with a default /data directory; my flows and added modules are not maintained.
What am I missing? Could it be that my /data directory is overwritten, as well as my settings.js file in the home directory? In that case, what is the best practice to achieve my goal?
Thanks a lot in advance.
docker commit will not work here, because there is a volume defined in the Dockerfile:
# User configuration directory volume
VOLUME ["/data"]
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
docker commit doesn't consider volumes at all, so you'll get an unchanged image with nothing preloaded in it.
You can see the official documentation:
Managing User Data
Once you have Node-RED running with Docker, we need to ensure any added nodes or flows are not lost if the container is destroyed. This user data can be persisted by mounting a data directory to a volume outside the container. This can either be done using a bind mount or a named data volume.
Node-RED uses the /data directory inside the container to store user configuration data.
nodered-user-data-in-docker
One way is to restore your config directory on another machine, for example backup-config, then:
docker run -it -p 1880:1880 -v $PWD/backup-config/:/data --name mynodered nodered/node-red-docker
or, if you want to pull a flows file from some repo, you can try something like this (fetching the file first, then mounting it as the default flows file):
wget https://raw.githubusercontent.com/openenergymonitor/oem_node-red/master/flows_emonpi.json
docker run -it --rm -v "$PWD/flows_emonpi.json":/data/flows.json nodered/node-red-docker
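To get your existing flows and modules off the first machine in the first place, a minimal sketch using the container name from the question:
# Copy the user data out of the container (works while running or stopped)
docker cp my-nodered:/data ./backup-config
# Move backup-config to the other machine, then mount it as shown above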

Docker volume bind empty volume or convert files to folders

I'm running a container with the Docker socket bind-mounted so that it can run a sibling container, and in that sibling container I try to mount a volume to access some data. However, in the sibling container, the volume is either empty or the file is converted to a folder...
Running the first container:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it example /bin/bash
root@3aa35965846a:/home/node/example# ls some_volume/
test.txt
root@3aa35965846a:/home/node/example# cat some_volume/test.txt
hello
Running the second container:
root@3aa35965846a:/home/node/example# docker run -v /home/node/example/some_volume/:/some_volume/ -it node:10 /bin/bash
root@6a84739fbb92:/# ls /some_volume/
test.txt
root@6a84739fbb92:/# cat /some_volume/test.txt/
cat: /some_volume/test.txt/: Is a directory
The first time I run the second container, the volume is empty. If I try to mount a file directly, it is converted to a folder, and after that, if I try to mount the folder as in the example above, it contains only the file I tried to mount earlier, and that file is a folder.
How is this possible? If I mount a volume outside the first container I don't have any problems. How can I fix this?
The first path in the docker run -v option is always on the host system. For example, if you
docker run -v /etc:/x busybox cat /x/shadow
it will dump out the host's encrypted password file, regardless of whether you ran this command directly from the host or from a container.
There isn't a way to share an arbitrary directory from one container to another. If the launching container knows something about its own directory structure (in particular that some directory was mounted from a specific host path or named volume) then it can replicate that to the other container, but that's not a generic answer. The other behaviors you're seeing are just a consequence of those directories not existing on the host system.
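For instance, if the launcher itself was started with a bind mount whose host path it knows, it can reuse that host path for the sibling. A sketch, with hypothetical paths:
# On the host: bind mount the data directory and tell the launcher its host path
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/user/some_volume:/home/node/example/some_volume \
  -e HOST_VOLUME=/home/user/some_volume -it example /bin/bash
# Inside the launcher: pass the host path, not the container path
docker run -v "$HOST_VOLUME":/some_volume/ -it node:10 /bin/bash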
In general I would advise not using Docker for short-lived processes that principally interact with the outside world through the filesystem. Take whatever program you'd run in the other container, install it in your image's Dockerfile, and run it directly without going through Docker.
If you really can't avoid this workflow, the only thing I've found to work reliably is to docker create the container, docker cp files in, docker start it, and docker wait for it to finish. When it's done, docker cp the result out before docker rm it. That's a kind of painstaking workflow but it gets around the problem of the two containers not sharing any filesystem space.
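A sketch of that workflow, where the command and file names are hypothetical:
# Create the container without starting it
cid=$(docker create node:10 node /tmp/process.js)
# Copy the inputs in (/tmp already exists in the image)
docker cp ./some_volume/. "$cid":/tmp/
# Run it and wait for it to exit
docker start "$cid"
docker wait "$cid"
# Copy the result out, then clean up
docker cp "$cid":/tmp/output.txt ./output.txt
docker rm "$cid"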

How to mount current directory as read-only but still allow changes inside the container?

I have a situation where:
I want to mount a directory ~/tmp/mycode to /mycode readonly
I want to be able to edit the files inside the container, so I can't just mount it read-only with docker run -v /my/local/path/tmp/mycode:/mycode:ro
I don't want it to persist changes to the host filesystem though, so I can't mount it read/write
~/tmp/mycode is rather large
Basically I want to be able to edit the files in the mounted volume but not have those changes persisted.
My current workflow is to create a dummy container using a dockerfile:
ADD . /mycode
and then execute that container.
However, as the repository grows, this step takes longer and longer to perform, because the only way I can think of is to make a complete copy of ~/tmp/mycode in order to be able to manipulate the files in the container.
I've also thought about mounting the directory, copying it inside the container, and committing that container, but that has the same issue.
Is there a way to run a docker container to allow file edits without persisting them on the host short of copying the whole directory?
I am using the latest docker for mac, currently Version 17.03.1-ce-mac5 (16048).
This is fairly trivial to do with docker and overlay:
docker run --name myenv --privileged -v /my/local/path/tmp/mycode:/mnt/rocode:ro -it ubuntu /bin/bash
docker exec -d myenv /sbin/mount -t overlay overlay -o lowerdir=/mnt/rocode,upperdir=/mycode,workdir=/mnt/code-workdir /mycode
This should mount the code from your directory read-only and create the overlay inside the container, so that /mnt/rocode is read-only but /mycode is writable.
Make sure that your kernel is 3.18+ and that you have overlay in your /proc/filesystems.
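A consolidated sketch of the same approach; note that the overlay driver needs the upperdir and workdir to exist (and workdir to be empty) before mounting, so they are created first here:
docker run -d --name myenv --privileged -v /my/local/path/tmp/mycode:/mnt/rocode:ro -it ubuntu /bin/bash
# Create the writable upper and work directories the overlay needs
docker exec myenv mkdir -p /mycode /mnt/code-workdir
docker exec myenv /sbin/mount -t overlay overlay -o lowerdir=/mnt/rocode,upperdir=/mycode,workdir=/mnt/code-workdir /mycode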

Change volume configuration in docker-compose without losing the data

My docker-compose file has a data container which isn't mapped to a local directory on the host machine, and I want to change it from:
volumes:
- /var/www/html
to
volumes:
- /html:/var/www/html
But when I restart the container, it will remove the current data container and replace it with a new one.
I know that the container is actually still there, but is there an easy way to do this without creating a new data container?
My docker-compose version is 1.7.1 (under boot2docker).
Thanks.
Try at your own risk:
Create your host directory /html as you wish
Run docker inspect {container_name} | grep Source and grab your volume path on the host system. It'll be something like /var/lib/docker/volumes/abdb15a2eff[...]/_data
Copy the content of that directory to your host directory
Recreate the container as you wish.
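A sketch of those steps (the container name and volume hash are placeholders; yours will differ):
docker inspect my_data_container | grep Source
#   "Source": "/var/lib/docker/volumes/<volume-hash>/_data"
sudo cp -a /var/lib/docker/volumes/<volume-hash>/_data/. /html/
docker-compose up -d --force-recreate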
One safe way to do this is to create a backup of the data from inside the Docker image. Then restore that backup to the directory on your host machine. The Docker Volumes Tutorial mentions a process like this near the bottom.
Here's how you'd do it:
First, mount a directory from your host machine into the container if you don't already have one mounted in. Maybe a volume like ./:/backup. Next, run a backup command like this:
docker-compose run service-name tar czvf /backup/html_data.tar.gz /var/www/html
Now you have html_data.tar.gz in your current directory. Extract it wherever you want and be on your way!
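For example, to unpack it into a /html directory for the new bind mount, stripping the var/www/html prefix that tar stored:
mkdir -p /html
tar xzvf html_data.tar.gz -C /html --strip-components=3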
(I'm assuming, based on the way you indicated your volumes, that you're using docker-compose. The process is similar for vanilla Docker.)
Alternate approach, with --volumes-from
Get the name (or hash) of the container with the data you want to copy. You can do this with docker ps. For this example, let's call it container1. Now run this command to back up its data:
docker run --rm --volumes-from container1 -v $(pwd):/backup ubuntu:latest tar czvf /backup/html_data.tar.gz /var/www/html
Note that the image you use (ubuntu:latest) is not important as long as it can tar things.
