Can't get docker run -v to work - docker

I want the /home/moodle folder on the host to be the same as the /var/www/html folder in the container.
I tried running this command:
sudo docker run -d -P --name moodle --link DB:DB -p 8080:80 -v /home/moodle:/var/www/html jhardison/moodle
It adds this to docker inspect:
"Mounts": [
{
"Type": "bind",
"Source": "/home/moodle",
"Destination": "/var/www/html",
"Mode": "",
"RW": true,
"Propagation": ""
},
But the /home/moodle folder is empty and does not match /var/www/html in the container.

Here is an extract from the Docker documentation:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the
container at /webapp. If the path /webapp already exists inside the
container’s image, the /src/webapp mount overlays but does not remove
the pre-existing content. Once the mount is removed, the content is
accessible again. This is consistent with the expected behavior of the
mount command.
Mounting an empty folder will overlay whatever contents your container had at that path. To share data from within a container, you could mount your folder at some empty directory and let your application copy data into it, or run bash commands inside your container to do the same, as described in another thread:
docker exec -it mycontainer /bin/bash
Or you could use docker cp, as described in another Stack Overflow thread:
docker cp <containerId>:/file/path/within/container /host/path/target
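For this particular case, a hedged sketch of one way to pre-populate the host folder before mounting it (the paths and image are taken from the question; the sketch assumes the image ships its files in /var/www/html):
# start a throwaway container without the bind mount, just to read the image's files
sudo docker run -d --name moodle-tmp jhardison/moodle
# copy the image's web root out to the host folder you want to share
sudo docker cp moodle-tmp:/var/www/html/. /home/moodle/
sudo docker rm -f moodle-tmp
# now the bind mount shows the same content on both sides
sudo docker run -d -P --name moodle --link DB:DB -p 8080:80 -v /home/moodle:/var/www/html jhardison/moodle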

If you mount a host directory onto an image directory that previously existed, the content of the image directory is not removed, but while the mount is in place it is the content of your host directory that you will see at that path in the container.
Take a look at the Docker docs for further information: https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume

Related

Docker: what is the use of local volumes, and some observations about volumes

I have the following Dockerfile:
RUN touch /root/testing
VOLUME ["/root"]
After I build and inspect, under Config I see:
"Volumes": {
"/root": {}
},
After I run /bin/bash and inspect:
"Mounts": [
{
"Type": "volume",
"Name": "fc1dc25de37d6d7593a21443cd2bef74a0a6a4e3276b8353199054404665c398",
"Source": "/var/lib/docker/volumes/fc1dc25de37d6d7593a21443cd2bef74a0a6a4e3276b8353199054404665c398/_data",
"Destination": "/root",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
When I start a container, Docker creates a local volume and mounts it at /root. It also copies the contents of /root from the image into that local volume.
If I run this on the host, we can see the testing file in it:
ls /var/lib/docker/volumes/fc1dc25de37d6d7593a21443cd2bef74a0a6a4e3276b8353199054404665c398/_data
testing
But the local volume is destroyed as soon as the container is removed.
So what is the purpose of a local volume? If by mistake I remove the container but still want some data created by my container in the local volume, that's not possible, since the local volume is also deleted.
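If you do end up in that situation, a hedged sketch of how to keep the data anyway, assuming the container still exists (mycontainer is a placeholder name): look up the anonymous volume's name first, and remove the container without the -v flag so the volume survives.
# list the volume name(s) and mount point(s) of a container
docker inspect -f '{{ range .Mounts }}{{ .Name }} {{ .Destination }}{{ "\n" }}{{ end }}' mycontainer
# remove the container but keep its anonymous volumes (no -v flag)
docker rm mycontainer
# the volume is still listed and its data is still under .../_data
docker volume ls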
I wanted to try named volumes.
I created
docker volume create test
Then in my Dockerfile:
RUN touch /root/testing
VOLUME [{"Name":"test","Destination":"/root","external":"true"}]
OR
VOLUME [ "Name:{"Destination":"/root","external":"true"}"]
When I try to build, I get:
Error response from daemon: when using JSON array syntax, arrays must be comprised of strings only
Then the only option left is to mount the volume from the command line rather than from the Dockerfile:
docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash
[root@7c7001221c14 /]# ls /root
testing
Now I check the test volume contents:
$ docker run --rm -it --mount source=test,destination=/tmp/myvolume archlinux/base ls /tmp/myvolume
testing
Here, since the test volume was completely empty, Docker copied the contents of /root (i.e. the file testing) from the image into the test volume when I ran docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash.
But if the test volume is not empty before I run docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash, i.e.:
sudo cd /var/lib/docker/volumes/test/_data
rm -rf *
mkdir hellophp
and then do
docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash
[root@7c7001221c14 /]# ls /root
hellophp
So my observations are:
---- VOLUME ["/path/in/container/"] will only create local (anonymous) volumes; we can't use named volumes here
---- If I want to use named volumes, then:
a) create a named volume
docker volume create test
b) mount the named volume into the container path
--mount source=test,destination=/path/in/container
------ *** Most important observation (see the sketch below)
IF the named volume is empty (no files in it), then after running
docker run --rm -it --mount source=test,destination=/path/in/container IMAGENAME CMD
it will copy the contents of /path/in/container into the test volume and then mount the test volume at /path/in/container.
ELSE (i.e. the named volume has some files in it), then after running
docker run --rm -it --mount source=test,destination=/path/in/container IMAGENAME CMD
it will not change the test volume by copying files from /path/in/container into it before mounting.
It will just mount the test volume at /path/in/container, so any files that existed at /path/in/container in the image will not be available in the container.
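A hedged sketch that reproduces both branches with a throwaway named volume and the stock busybox image (the names demo and busybox are placeholders; the behaviour should match the archlinux runs above):
docker volume create demo
# 1) empty volume: the image's /etc is copied into the volume on first mount
docker run --rm --mount source=demo,destination=/etc busybox ls /etc/passwd   # file is there
# 2) change the volume's content
docker run --rm --mount source=demo,destination=/etc busybox rm /etc/passwd
# 3) non-empty volume: mounted as-is, nothing is copied from the image again,
#    so the image's /etc/passwd does not reappear
docker run --rm --mount source=demo,destination=/etc busybox ls /etc/passwd   # No such file or directory
docker volume rm demo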
If you are running a database in docker you can mount a local directory directly into your container using the -v option on the run command.
docker run -d \
-v <local path>:<container path>:z \
..
..
<your image>
The actual storage will be persistent on your local filesystem, and accessible in the container when the container is running.
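For example, a minimal sketch for a Postgres container (the image tag, host path, password and the :z SELinux label are assumptions for illustration, not part of the answer above):
docker run -d \
  -v /srv/pgdata:/var/lib/postgresql/data:z \
  -e POSTGRES_PASSWORD=example \
  postgres:15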
Also read this
https://docs.docker.com/storage/volumes/

Can I mount a file from a volume in Docker into the container?

I put a security file like id_rsa into the Docker volume larrycai-vol, and I am trying to mount it into the container as a file.
$ docker volume inspect larrycai-vol
[
    {
        "CreatedAt": "2018-05-18T06:02:24Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/larrycai-vol/_data",
        "Name": "larrycai-vol",
        "Options": {},
        "Scope": "local"
    }
]
Is there some command like the one below? (It doesn't work now.)
docker run -it -v larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
I know it works if I mount the file from the Docker host:
docker run -it -v $PWD/.ssh/id_rsa:/root/.ssh/id_rsa own_ubuntu
Beyond the proposed solution, please let me add a couple of considerations to avoid further misunderstandings.
Docker allows mounting host directories as volumes (from a path on your host to a container path, e.g. -v $PWD/.ssh:/root/.ssh; this would replace the complete .ssh folder at the destination, so it's not recommended, although possible)
Docker allows mounting named volumes (from a named volume to a container path, e.g. -v larrycai-vol:/root/.ssh; this would also replace the complete .ssh folder at the destination, so it's not recommended, although possible)
Docker allows mounting single files from the host into a container (example: -v $PWD/.ssh/id_rsa:/root/.ssh/id_rsa = your second example)
Docker allows mounting single files from a named volume's storage directory into a container (example: -v /var/lib/docker/volumes/YOUR_NAMED_VOLUME_NAME/_data/file:/dest_dir/file = what you're trying to do)
Your mistake is that you're telling Docker to mount id_rsa from a larrycai-vol directory (a host path), not from the Docker named volume.
In other words, the three following commands are equivalent:
docker run -it -v ./larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
docker run -it -v $PWD/larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
docker run -it -v larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
So, if the larrycai-vol directory (not the named volume, but a directory on your host) doesn't exist, the command doesn't work as you want.
In short, to do what you're trying, you have to create a bind mount to the directory where the named volume stores its data:
docker run -it -v /var/lib/docker/volumes/larrycai-vol/_data/id_rsa:/root/.ssh/id_rsa own_ubuntu
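A quick way to verify the file is visible inside the container; the :ro flag is an optional addition (not in the original command) so the key can't be modified from inside:
docker run --rm -it \
  -v /var/lib/docker/volumes/larrycai-vol/_data/id_rsa:/root/.ssh/id_rsa:ro \
  own_ubuntu ls -l /root/.ssh/id_rsa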

Docker volume access from host

I have a Dockerfile that looks like this. How can I access this volume from the host? I checked the volumes folder where Docker is installed.
FROM busybox
MAINTAINER Erik Kaareng-sunde <esu@enonic.com>
RUN mkdir -p /enonic-xp/home
RUN adduser -h /enonic-xp/ -H -u 1337 -D -s /bin/sh enonic-xp
RUN chown -R enonic-xp /enonic-xp/
VOLUME /enonic-xp/home
ADD logo.txt /logo.txt
CMD cat /logo.txt
ls
$ docker volume ls
DRIVER VOLUME NAME
local b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
I would like to be able to cd into that volume.
inspect
docker volume inspect b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719/_data",
        "Name": "b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719",
        "Options": {},
        "Scope": "local"
    }
]
After looking at a lot of posts, I finally found a post that addresses the question asked here:
Getting path and accessing persistent volumes in Docker for Mac
Note: this works only for Mac.
The path for the tty may also be present here:
~/Library/Containers/com.docker.docker/Data/vm/*/tty
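A hedged sketch of how that tty is typically used (the exact path depends on your Docker for Mac version, so treat it as an assumption): attach to the Docker VM with screen and browse /var/lib/docker/volumes from inside it.
screen ~/Library/Containers/com.docker.docker/Data/vm/*/tty
# inside the VM:
ls /var/lib/docker/volumes/
# detach again with Ctrl-a d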
Instead of doing it within the Dockerfile, you can simply mount with docker run -v /path/in/host:/path/in/container image-name ....
docker volume ls lists all volumes, and docker volume inspect lets you inspect a volume. If you can't find your volume with docker volume ls, try docker inspect on your container and check for volume info there.
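On a native Linux host, a hedged sketch of how to get from the volume name to a directory you can list or cd into (the volume ID is the one from the question; access under /var/lib/docker usually requires root, and on Docker for Mac this path only exists inside the VM mentioned above):
# print just the mountpoint of the volume
MP=$(docker volume inspect -f '{{ .Mountpoint }}' b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719)
echo "$MP"
sudo ls -la "$MP"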

Can I export a container with data and everything to spawn a complete copy on another computer?

I am looking into Docker and trying to wrap my head around it, so I might have misunderstood the concept.
I have installed the sebp/elk (ElasticSearch-Logstash-Kibana) and have a working container running. I have setup some indices and posted some data to logstash, which is stored with the container. If I restart the container everything works as expected. Now I am interested in exporting the container as it is, to launch on another computer with the configurations and data I have setup.
So I have tried exporting the container, importing it as a new image, and running a container from the new image. The container works, but it starts up as a new container without all the data that I put into the original container.
I also tried to commit my changes to the image, then save the image and load it again and then run the container from the new image. That also works, but again without any data.
So when I inspect the original container, I can see that it has a mounted volume, so I figured that I should try to export the elasticsearch data to a .tar file and then import into the new container. But that didn't work either.
This is the mount inspection of the original container:
"Mounts": [
{
"Name": "fe17e920f9d17e177ac899b1617a8c51231c8a3b34007f463d082e5be2677412",
"Source": "/var/lib/docker/volumes/fe17e920f9d17e177ac899b1617a8c51231c8a3b34007f463d082e5be2677412/_data",
"Destination": "/var/lib/elasticsearch",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Here is how I tried to export it:
docker run --rm --volumes-from elk -v $(pwd):/volumes original/sebp/elk:exported tar cvf /volumes/elk-volume.tar /var/lib/elasticsearch
... and this is how I tried to import it:
docker run --rm --volumes-from elk-imported -v $(pwd):/volumes original/sebp/elk:exported bash -c "cd /volumes && tar xvf /volumes/elk-volume.tar --strip 1"
Is it possible to export a Docker container to get an exact copy of it with data and everything or am I approaching this problem the wrong way?
Your approach is correct: the docker export command does not export the contents of volumes associated with the container, so you have to export the container and then back up the volume data separately.
Be sure you have the elk-imported container already created before restoring the volume data:
docker run -v /volumes --name elk-imported original/sebp/elk:exported /bin/bash
docker run --rm --volumes-from elk-imported -v $(pwd):/volumes original/sebp/elk:exported bash -c "cd /volumes && tar xvf /volumes/elk-volume.tar --strip 1"
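Putting the whole transfer together, a hedged sketch of one way to move both the image and the volume data to another machine (the tar file names and the use of docker create are assumptions, not part of the original answer):
# on the source machine
docker run --rm --volumes-from elk -v $(pwd):/volumes original/sebp/elk:exported \
  tar cvf /volumes/elk-volume.tar /var/lib/elasticsearch
docker save original/sebp/elk:exported > elk-image.tar

# copy elk-image.tar and elk-volume.tar to the target machine, then:
docker load < elk-image.tar
docker create --name elk-imported original/sebp/elk:exported        # the image's VOLUME creates the data volume
docker run --rm --volumes-from elk-imported -v $(pwd):/volumes original/sebp/elk:exported \
  bash -c "cd / && tar xvf /volumes/elk-volume.tar"
docker start elk-imported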

Files unavailable when mounting the VOLUME with -v

I have a Docker Image which has a VOLUME ["/log"].
While running the container I am mounting a folder present on the host.
I want all the logs written by docker at VOLUME ["/log"] to be available to the host.
docker run --name=test -v ${pwd}/hostlogfolder:/log dockerimage:1
The logs are not getting written to hostlogfolder.
But the logs are available inside the container at /log:
docker exec -it test bash
cd /log
What is the correct way to mount the folder?
You are almost there, the command needs a small correction:
docker run --name=test -v $(pwd)/hostlogfolder:/log dockerimage:1
Note that the brackets are different:
wrong: ${pwd}
right: $(pwd)
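A quick way to see the difference in a shell (sketch; the printed directory will of course depend on where you run it):
echo "${pwd}"   # expands the shell variable pwd, which is normally unset -> empty string
echo "$(pwd)"   # runs the pwd command -> prints your current directory
# so -v ${pwd}/hostlogfolder:/log effectively mounted /hostlogfolder at the filesystem root,
# not the hostlogfolder next to where you ran the command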
Once running, you can verify the mounted volumes using:
docker inspect <container id> -- you can get the container id using docker ps
Check the Mounts section of the command output.
"Mounts": [
{
"Source": "<host path>",
"Destination": "<container path>",
"Mode": "",
"RW": true
}]
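Alternatively, a hedged one-liner to print just the mounts of this container (the container name test comes from the question):
docker inspect -f '{{ json .Mounts }}' test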
If you want to create an image with predefined settings, such as a volume, you have to use a Dockerfile. Or, if you don't need to create an image, you can just pass the volume when you create the container.
I think you need to read the documentation about creating volumes and other directives. It might be very useful for you.
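For the Dockerfile route, a minimal sketch (the base image and path are placeholders, not from the question):
FROM training/webapp
# declare a volume at this path; data written there is stored outside the
# container's writable layer and can be shared with --volumes-from
VOLUME ["/webapp/logs"]
And for the per-container route, the documentation's example looks like this: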
docker run -d -P --name web -v /webapp training/webapp python app.py
https://docs.docker.com/engine/userguide/containers/dockervolumes/

Resources