Rancher volume on Local File System - docker-volume

I'm using Rancher with only one host, for testing. When deploying a container, I need to access the container's persistent data from the host, to simplify my tests. One option I see would be to mount a local filesystem path into the container, as /srv/myfolder:/etc/myfolder. I've already done that with plain Docker.
I've tried to do it from Rancher, but it does not work. Do I need to do something specific?
The second option would be to use a docker volume. I've tried that, and it works, but I don't know how I could access it from the docker host. Is there a way to do it, or is it not possible by default?
Thank you
Fabrice

I think it is related to this answer.
You can create a docker volume bound to a local directory of your choice, like this:
docker volume create -d local -o type=none -o o=bind \
-o device=/srv/myfolder container_etc_volume
Then you can use it like:
docker run -d -v container_etc_volume:/etc/myfolder .....
Then you can access it from host:
ls -la /srv/myfolder
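To verify the two-way sharing, here is a minimal sketch (the container name myapp is an assumption, not from the original post; pass --name myapp to the docker run command above):
# create a file from inside the container
docker exec myapp touch /etc/myfolder/hello.txt
# the same file is immediately visible on the host
ls -la /srv/myfolder/hello.txt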

Related

Mounting host directory to container

I am launching a container for my application, but my app needs a few config files to log in. The files are stored in a host directory. How can I mount the host file path into the container?
host directory : /opt/myApp/config
Docker command used currently :
sudo docker run -d --name myApp-container -p 8090:8080 myApp-image
Please suggest the changes in docker command to achieve this.
You need to use the -v/--volume flag like this:
-v <host dir>:<container dir>:ro
In your case it will be:
-v /opt/myApp/config:/opt/myApp/config:ro
You can use this flag multiple times. You can also drop the :ro part if you want the directory to be writable.
See Docker documentation on volumes.
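Putting it together with the command from the question, a sketch of the full invocation could look like this (image and container names taken from the question):
sudo docker run -d --name myApp-container -p 8090:8080 \
  -v /opt/myApp/config:/opt/myApp/config:ro \
  myApp-image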

How to mount the root directory of docker container as a NFS mount point

I'm new to docker, and I'm trying to mount the root directory of a docker container as an NFS mount point.
For example, I have an NFS mount point test:/home/user/3243, and I'm trying:
docker run -it -v "test:/home/user/3243":/ centos7 /bin/bash
That failed, of course, so I tried this:
mount -t nfs test:/home/user/3243 /mnt/nfs/3243
docker run -it -v /mnt/nfs/3243:/ centos7 /bin/bash
but it failed again. How can I do this? Is it possible at all?
A couple of issues here:
You cannot mount to the root directory of a container. So docker run -v /foo:/ will never work.
With the syntax of your first attempt, -v test:/foo:bar, Docker would see this as wanting to create a "named" volume called "test".
You should be able to first do the NFS mount, then do docker run -v /mnt/nfs/3243:/foo to have the nfs path mounted to /foo.
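For example, a sketch using the paths from the question (mounting the NFS share at /data inside the container instead of /; /data is an arbitrary choice):
mount -t nfs test:/home/user/3243 /mnt/nfs/3243
docker run -it -v /mnt/nfs/3243:/data centos7 /bin/bash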
But again, you can't mount to /.
That is currently discussed (since mid 2014) in issue 4213.
One recent workaround by Jeroen van Bemmel (jbemmel) was:
It appears that NFS functionality depends on the underlying storage driver (aufs, devicemapper, etc.), as well as the sharing of file handles between processes (see the blog post "docker: devicemapper fix for 'device or resource busy' (EBUSY)"), i.e. 'unshare' may have an impact on NFS mounts.
I've moved away from using the 'MOUNTPOINT=/vm/nfs' as I am not sure if that event is even emitted.
Instead I created an upstart file like this:
cat > /etc/init/ecdn.conf << EOF
description "eCDN container"
author "Jeroen van Bemmel"
# mounted MOUNTPOINT=/vm/nfs doesn't seem to work, at least not the first time
start on started docker and virtual-filesystems
stop on starting rc RUNLEVEL=[016]
respawn
script
exec /usr/bin/docker start -a ecdn
end script
pre-stop script
/usr/bin/docker stop ecdn
# dont /usr/bin/docker rm ecdn
end script
EOF
and then create the container like this:
script -c "docker create -it --name='ecdn' --volume /vm:/usr/share/nginx/html/vm:ro image/name"

Mounting volumes on Bluemix containers and sharing between them does not work

I've created a volume with
$ cf ic volume create mosquitto_config
This information shows up as expected:
$ cf ic volume list
mosquitto_config
Then, I've created two containers that are based on an image, which contains the VOLUME ["/etc/mosquitto"] line in its Dockerfile, and on which I'm able to log in via SSH:
$ cf ic run -p 22:22 --volume mosquitto_config:/etc/mosquitto --name ssh-test registry.ng.bluemix.net/{reg-name}/{image-name}:latest
$ cf ic run -p 22:22 --volume mosquitto_config:/etc/mosquitto --name ssh-test-2 registry.ng.bluemix.net/{reg-name}/{image-name}:latest
After logging in, I see the mount point /etc/mosquitto as a directory in both containers. However, if I create a file in that directory within one container, the new file does not show up in the other container. As far as I understand the volume concept, the new file should show up in the other container. Is this currently not working, or how do I set it up correctly?
I think this way of sharing volumes is not supported by docker.
In order to give a container access to another container's volumes, you can simply pass the --volumes-from argument to docker run. For example:
$ docker run -it -h NEWCONTAINER --volumes-from container-test debian /bin/bash
All the volumes mounted in 'container-test' will be available to 'NEWCONTAINER' (with the same mount options)
It’s important to note that it works even if the container-test is not running: a volume will never be deleted as long as a container is linked to it.
For further help check this url
http://container-solutions.com/understanding-volumes-docker/
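Applied to the setup from the question, a plain-docker sketch could look like this (container and image names are made up for illustration; the cf ic equivalents on Bluemix may differ):
# the first container owns the anonymous volume created by VOLUME ["/etc/mosquitto"]
docker run -d --name mosquitto-config-holder my-mosquitto-image
# the second container reuses exactly the same volume
docker run -d --name mosquitto-consumer --volumes-from mosquitto-config-holder my-mosquitto-image
# a file written in one is visible in the other
docker exec mosquitto-config-holder touch /etc/mosquitto/test.conf
docker exec mosquitto-consumer ls /etc/mosquitto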

Docker volume conflict

I have a dockerized web application that I'm running in an HA setup. I have a cron job that runs dockup every midnight to back up my important information stored in other containers. Now I would like to back up and aggregate the logs from my web application too. The problem is, how do I do that? If I use the VOLUME key in the Dockerfile to expose /logs to the host machine, there would be a collision, because there would be two /logs directories on the dockup container.
I have checked dockup. It does not have a /logs directory; it seems to use /var/logs for log output.
$ docker run -it --name dockup borja/dockup bash
Otherwise, yes, it would be a problem, because the volume would be mounted under the same name and the current container's processes would also log to that folder. Not good.
You have a few alternatives:
1. Use a logging container like fluentd; its tutorial also covers writing to S3 buckets, like dockup does. The tutorial can be found here.
2. Tweak your container, e.g. with symbolic links, to log or relay the logs to a different volume (a sketch is at the end of this answer).
3. Access the logs not through containers but through native docker, and copy them to S3 yourself or run dockup on your locally mounted log file:
$ docker logs container/name > logfile.log
$ docker run --rm \
--env-file env.txt \
-v $(pwd)/logfile.log:/customlogs/logfile.txt \
--name dockup borja/dockup
Now you can take the folder /customlogs/ as your backup path inside the env.txt.
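For the collision itself, one possible sketch (volume and image names here are made up for illustration) is to give the web application's logs their own named volume and mount it into dockup under a unique path, so nothing clashes with the container's own /logs:
$ docker volume create webapp_logs
$ docker run -d --name webapp -v webapp_logs:/logs my-webapp-image
$ docker run --rm --env-file env.txt -v webapp_logs:/customlogs/webapp --name dockup borja/dockup
Then point the backup path in env.txt at /customlogs/webapp, as described above.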

Port data out of docker container

I use this method below to port data out of one container.
docker run --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
But there is one problem with it: every time it performs a backup, a zombie container is left behind. What is a good way to get that container's id and remove it afterward?
Thanks
Apparently, you just want to be able to export volume data. To do that, you just need to start your initial container with a volume pointing to a directory on the host with the -v option. You can tar on the host without creating a container for it. Your current tactic seems a bit over-engineered ;)
The easy way to remove the container after executing the command is to use the --rm option, described in the docker run reference. For example:
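For the backup command from the question, that just means adding --rm so the helper container cleans itself up (the placeholders are kept from the question):
docker run --rm --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz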
However, if you feel that the container you are creating will have data that you will need to
1. update in real time
2. access after the container has been created
then you may also mount a host directory as a container volume and access the contents of that directory from the host.
If you start a container using the -v/--volume option, you can then reference the volume directory created on the host:
$ docker run -v /volume_directory ubuntu
$ container=$(docker ps -n=1 -q)
$ docker inspect -f '{{.Volumes}}' $container
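If you follow the bind-mount suggestion above instead, a minimal sketch (the host path /srv/appdata is an arbitrary example) needs no helper container at all:
$ docker run -d -v /srv/appdata:/volume_directory ubuntu
$ tar -czf volume.tgz -C /srv/appdata .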
