Mounting a folder to a Docker image remains indefinitely stuck - docker

I'm trying to mount a folder to a docker image in Ubuntu 20.04:
(base) raphy@pc:~$ sudo docker image ls
REPOSITORY                         TAG       IMAGE ID       CREATED       SIZE
docker.tigergraph.com/tigergraph   latest    6c55bb15e2a6   7 days ago    10.6GB
hello-world                        latest    feb5d9fea6a5   6 weeks ago   13.3kB
(base) raphy@pc:~$ sudo docker run -t -i -v /home/raphy/ConceptNet/ 6c55bb15e2a6
It doesn't give any error, but it remains stuck indefinitely.
Update 1)
(base) raphy@pc:~$ sudo docker run -t -i -v /home/raphy/ConceptNet:/6c55bb15e2a6/ConceptNet bash
Unable to find image 'bash:latest' locally
latest: Pulling from library/bash
a0d0a0d46f8b: Pull complete
ae2d64a5f3ef: Pull complete
1e5367194cc8: Pull complete
Digest: sha256:91767623eb341f1717bb37b059e77e8de439c8044064808f6f9bfdc942e8d30c
Status: Downloaded newer image for bash:latest
bash-5.1# ^C
What am I doing wrong?
SOLVED in this way:
(base) raphy@pc:~$ sudo docker run -d -p 14022:22 -p 9000:9000 -p 14240:14240 --name tigergraph --ulimit nofile=1000000:1000000 -v ~/ConceptNet/:/home/tigergraph/myconceptnet -t docker.tigergraph.com/tigergraph:latest
https://docs.tigergraph.com/start/get-started/docker#2.-prepare-a-shared-folder-on-host-os-to-be-shared-with-docker-container
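As a quick sanity check (purely illustrative; it assumes the tigergraph container started above is running):
sudo docker ps --filter name=tigergraph
sudo docker exec -it tigergraph ls /home/tigergraph/myconceptnet   # should list the contents of ~/ConceptNet from the host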

Your docker command is insufficient to run TigerGraph; a simple run command won't do. Follow the instructions at https://docs.tigergraph.com/start/get-started/docker

Your intention is to bind a local path on your host into your container, but what you are really doing is attaching a new, empty local volume at /home/raphy/ConceptNet/ inside your container. Just run:
docker exec {your container id} ls /home/raphy/ConceptNet/
to see that the path was created inside your container.
You can also use:
docker inspect {your container id} | less
and check the "Mounts" section to see which volumes are really attached to your container. The output will look something like:
"Mounts": [
    {
        "Type": "volume",
        "Name": "43f6d9846728547b77666705d2b5a4be1d1e644af80f3bb53d86fe105f57bfc6",
        "Source": "/var/lib/docker/volumes/43f6d9846728547b77666705d2b5a4be1d1e644af80f3bb53d86fe105f57bfc6/_data",
        "Destination": "/home/raphy/ConceptNet/",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
Here,
"Name": "43f6d9846728547b77666705d2b5a4be1d1e644af80f3bb53d86fe105f57bfc6"
is the name of the volume you've unintentionally created and attached at the path /home/raphy/ConceptNet/ in your container.
If you want to mount a local directory into your container, just use:
sudo docker run -t -i -v /home/raphy/ConceptNet/:/some_path/ 6c55bb15e2a6
And if you want a shell inside your container, it's better to include your command at the end of docker run, like:
sudo docker run -t -i -v /home/raphy/ConceptNet/:/some_path/ 6c55bb15e2a6 /bin/sh
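Once the container is up, a quick way to confirm the bind mount took effect (a sketch only; the container id is a placeholder) is:
# from inside the container, the host folder's contents should appear here
ls /some_path/
# or, from the host, print only the Mounts section of the running container
sudo docker inspect --format '{{ json .Mounts }}' <container-id>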

Try this:
-v, --volume=[host-src:]container-dest[:<options>]:
Reference Link
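For illustration, with all three parts filled in (the ro option here is just an example that makes the mount read-only):
sudo docker run -t -i -v /home/raphy/ConceptNet:/data:ro 6c55bb15e2a6 /bin/sh
Omitting host-src (for example -v /data) creates an anonymous volume instead of a bind mount, which is exactly what happened in the original command.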

Related

Why are files in /var/lib/docker/volumes/<volume>/_data/ not visible from within a docker container?

Learning about Docker (on Ubuntu 18.04 LTS (bionic)), and specifically about managing persistent data, I found Docker volumes.
Following the example over there, I tried to add some files to a volume, and then list them from within a container:
root@srv /v/l/machines# docker volume create hello
hello
root@srv /v/l/machines# docker run -d -v hello:/world busybox ls /world
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
57c14dd66db0: Pull complete
Digest: sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
Status: Downloaded newer image for busybox:latest
d488dd535de01209ccc4f4bbf9a269d7932868ca41c9fe538d7a95fad66cefae
There is no data in the volume so the ls output is empty. This is OK.
root@srv /v/l/machines# docker volume inspect hello
[
    {
        "CreatedAt": "2019-01-14T14:57:47+01:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/hello/_data",
        "Name": "hello",
        "Options": {},
        "Scope": "local"
    }
]
root@srv /v/l/machines# date > /var/lib/docker/volumes/hello/_data/a.txt
root@srv /v/l/machines# date > /var/lib/docker/volumes/hello/_data/b.txt
root@srv /v/l/machines# docker run -d -v hello:/world busybox ls /world
ced5591203511f2f9a0194431ba8fca81df8442c38be993de454cadb1b93da09
root@srv /v/l/machines# docker run -d -v hello:/world busybox ls /world
7987ce187747016e81469cb1a150aa0a85ded58521fbc03f1a0f55e2e07358f0
root@srv /v/l/machines# ls /var/lib/docker/volumes/hello/_data/
a.txt b.txt
This part I do not understand. I added some files to the place pointed out by docker volume inspect, but they do not seem to be visible from within a Docker container which mounted that volume. Why is it so?
Your container is running in detached mode, which is why you do not see any output.
Try running docker logs <container-id> and it should show the output of your ls command.
Alternatively, you can omit the -d flag to run the container in the foreground. This is particularly useful when you just want to try things out.
Documentation: docker run – detached vs foreground
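For example, a rough end-to-end check (the container id is a placeholder):
docker run -d -v hello:/world busybox ls /world      # prints only the container id
docker logs <container-id>                           # shows a.txt and b.txt
docker run --rm -v hello:/world busybox ls /world    # foreground: the listing is printed directly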

Docker volume path in Windows [duplicate]

This question already has answers here:
Locating data volumes in Docker Desktop (Windows)
(17 answers)
Closed 4 years ago.
I am relatively new to Docker. I want to use a database with a volume to persist data. I am on Windows 10.
I want to check where the volumes are created on my machine.
When I run the command
C:\Users\satul>docker volume inspect 368984d12c3525d8752d249347cfd563afb46c847e1c109afa9785bf54b89701
[
    {
        "CreatedAt": "2018-06-25T22:43:29Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/368984d12c3525d8752d249347cfd563afb46c847e1c109afa9785bf54b89701/_data",
        "Name": "368984d12c3525d8752d249347cfd563afb46c847e1c109afa9785bf54b89701",
        "Options": null,
        "Scope": "local"
    }
]
Since this is a Windows box, I do not have the folder /var/lib/docker/volumes/. Where exactly is the volume folder on Windows, so that I can back it up if required?
You should not back up volumes by backing up the /var/lib/docker/volumes directory. Instead you should use the following command (it will create the backup in your current working directory):
docker run --rm --volumes-from container-name -v $(pwd):/backup ubuntu tar cvf /backup/backup_name.tar /mount/point/inside/container
Eg. for Docker registry the command looks like this:
docker run --rm --volumes-from registry -v $(pwd):/backup ubuntu tar cvf /backup/registry_backup.tar /var/lib/registry
And to restore the backup you should use this command:
docker run --rm --volumes-from container-name -v $(pwd):/backup ubuntu bash -c "cd /mount/point/inside/container && tar xvf /backup/registry_backup.tar --strip number_of_leading_directory_components_in_mount_point_path"
Eg. to restore backup of Docker registry:
docker run --rm --volumes-from registry -v $(pwd):/backup ubuntu bash -c "cd /var/lib/registry && tar xvf /backup/registry_backup.tar --strip 3"
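As a quick sanity check after creating a backup, you can list the archive's contents on the host, for example:
tar tvf registry_backup.tar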
Usually /var/lib/docker is mounted on C:\Users\Public\Documents\Hyper-V\Virtual hard disks. You can check it out by looking at your docker settings.
A Docker volume is just a directory on your host machine with all your container data, so you can use any method you wish to back up your data. You can read more about Docker volumes in the official documentation.
See also

Docker volume access from host

I have a Dockerfile that looks like this. How can I access this volume from the host? I checked the volumes folder where Docker is installed.
FROM busybox
MAINTAINER Erik Kaareng-sunde <esu@enonic.com>
RUN mkdir -p /enonic-xp/home
RUN adduser -h /enonic-xp/ -H -u 1337 -D -s /bin/sh enonic-xp
RUN chown -R enonic-xp /enonic-xp/
VOLUME /enonic-xp/home
ADD logo.txt /logo.txt
CMD cat /logo.txt
ls
$ docker volume ls
DRIVER    VOLUME NAME
local     b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
I would like to be able to cd into that volume.
inspect
docker volume inspect b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719/_data",
        "Name": "b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719",
        "Options": {},
        "Scope": "local"
    }
]
After looking at a lot of posts, I finally found one that addresses the question asked here.
Getting path and accessing persistent volumes in Docker for Mac
Note: this works only for Mac.
The path for the tty may also be present here:
~/Library/Containers/com.docker.docker/Data/vm/*/tty
Instead of doing it within the Dockerfile, you can simply mount with docker run -v /path/in/host:/path/in/container image-name ....
docker volume ls lists all volumes, and docker volume inspect lets you inspect a volume. If you can't find your volume with docker volume ls, try docker inspect on your container and check the "Mounts" info there.
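Roughly, the commands mentioned above look like this (the names are placeholders):
docker volume ls
docker volume inspect <volume-name>                              # shows the Mountpoint on the host
docker inspect --format '{{ json .Mounts }}' <container-name>    # shows what the container actually mounts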

Can I export a container with data and everything to spawn a complete copy on another computer?

I am looking into Docker and trying to wrap my head around it, so I might have misunderstood the concept.
I have installed the sebp/elk (Elasticsearch-Logstash-Kibana) image and have a working container running. I have set up some indices and posted some data to Logstash, which is stored with the container. If I restart the container everything works as expected. Now I am interested in exporting the container as it is, to launch it on another computer with the configuration and data I have set up.
So I have tried exporting the container, importing it as a new image, and running a container from the new image. The container works, but it starts up as a new container without all the data that I put into the original container.
I also tried to commit my changes to the image, then save the image and load it again and then run the container from the new image. That also works, but again without any data.
So when I inspect the original container, I can see that it has a mounted volume, so I figured that I should try to export the elasticsearch data to a .tar file and then import into the new container. But that didn't work either.
This is the mount inspection of the original container:
"Mounts": [
    {
        "Name": "fe17e920f9d17e177ac899b1617a8c51231c8a3b34007f463d082e5be2677412",
        "Source": "/var/lib/docker/volumes/fe17e920f9d17e177ac899b1617a8c51231c8a3b34007f463d082e5be2677412/_data",
        "Destination": "/var/lib/elasticsearch",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
Here is how I tried to export it:
docker run --rm --volumes-from elk -v $(pwd):/volumes original/sebp/elk:exported tar cvf /volumes/elk-volume.tar /var/lib/elasticsearch
... and this is how I tried to import it:
docker run --rm --volumes-from elk-imported -v $(pwd):/volumes original/sebp/elk:exported bash -c "cd /volumes && tar xvf /volumes/elk-volume.tar --strip 1"
Is it possible to export a Docker container to get an exact copy of it with data and everything or am I approaching this problem the wrong way?
Your approach is correct: the docker export command does not export the contents of volumes associated with the container, so you have to export the container and then back up the volume data separately.
Be sure the elk-imported container is already running before doing the volume restore.
docker run -v /volumes --name elk-imported original/sebp/elk:exported /bin/bash
docker run --rm --volumes-from elk-imported -v $(pwd):/volumes original/sebp/elk:exported bash -c "cd /volumes && tar xvf /volumes/elk-volume.tar --strip 1"

How can I add a volume to an existing Docker container?

I have a Docker container that I've created simply by installing Docker on Ubuntu and doing:
sudo docker run -i -t ubuntu /bin/bash
I immediately started installing Java and some other tools, spent some time with it, and stopped the container by
exit
Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker -v /somedir run ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume.
All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume.
Is this indeed the only way to add a volume to an existing container?
You can commit your existing container (that is, create a new image from the container's changes) and then run it with your new mounts.
Example:
$ docker ps -a
CONTAINER ID    IMAGE           COMMAND        CREATED               STATUS                           PORTS    NAMES
5a8f89adeead    ubuntu:14.04    "/bin/bash"    About a minute ago    Exited (0) About a minute ago             agitated_newton
$ docker commit 5a8f89adeead newimagename
$ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash
If it's all OK, stop your old container, and use this new one.
You can also commit a container using its name, for example:
docker commit agitated_newton newimagename
That's it :)
There is no way to add a volume to a running container, but to achieve this objective you can use the commands below:
Copy files/folders between a container and the local filesystem:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
For reference see:
https://docs.docker.com/engine/reference/commandline/cp/
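For instance, a purely illustrative pair of commands (the container names and paths are made up):
# copy data out of a container to the host
docker cp mycontainer:/var/lib/myapp ./myapp-data
# copy data from the host into another container
docker cp ./myapp-data mycontainer2:/var/lib/myapp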
I've successfully mounted the /home/<user-name> folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Next, replace its contents with something like this (you can copy the proper contents from another container with the proper settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
    "/mnt": {
        "Source": "/home/<user-name>",
        "Destination": "/mnt",
        "RW": true,
        "Name": "",
        "Driver": "",
        "Type": "bind",
        "Propagation": "rprivate",
        "Spec": {
            "Type": "bind",
            "Source": "/home/<user-name>",
            "Target": "/mnt"
        },
        "SkipMountpointCreation": false
    }
}
Restart the docker service: service docker restart
This works for me with Ubuntu 18.04.1 and Docker 18.09.0
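Put together, the procedure above might look roughly like this (the container id, editor, and paths are placeholders; this is an outline under the assumptions above, not a tested script):
sudo docker stop <container-id>                                        # the container must not be running
sudo nano /var/lib/docker/containers/<container-id>/config.v2.json     # add the MountPoints entry shown above
sudo service docker restart
sudo docker start <container-id>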
Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish.
As he also points out
This will not work on filesystems which are not based on block devices.
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).
Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that
YMMV
Unfortunately, the switch to mount a volume is only found in the run command.
docker run --help
-v, --volume list Bind mount a volume (default [])
There is a way you can work around this though so you won't have to reinstall the applications you've already set up on your container.
Export your container
docker container export -o ./myimage.docker mycontainer
Import as an image
docker import ./myimage.docker myimage
Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash
A note on using Docker Windows containers, after I had to search for a solution to this problem for a long time!
Conditions:
Windows 10
Docker Desktop (latest version)
using Docker Windows Container for image microsoft/mssql-server-windows-developer
Problem:
I wanted to mount a host directory into my Windows container.
Solution, as partially described here:
create docker container
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
go to command shell in container
docker exec -it <CONTAINERID> cmd.exe
create DIR
mkdir DirForMount
stop container
docker container stop <CONTAINERID>
commit container
docker commit <CONTAINERID> <NEWIMAGENAME>
delete old container
docker container rm <CONTAINERID>
create new container with new image and volume mounting
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME>
After this, I had solved this problem for Docker Windows containers.
My answer will be a little different. You can stop your container, add the volume, and restart it. To do this, follow these steps:
docker volume create ubuntu-volume
docker stop <container-name>
sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash
You can stop and remove the container, append the existing volume in a startup script, and restart from the image. As long as the already existing partitions keep the data, you shouldn't experience any loss of information. This should also work the same way with a Dockerfile and Docker Compose.
E.g. (Solr image):
(initial script)
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
The same script with the second volume added:
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Use a symlink to the already mounted drive:
ln -s Source_path target_path_which_is_already_mounted_on_the_running_docker
The best way is to copy all the files and folders from a directory in the container onto your local file system with: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
SRC_PATH is on container
DEST_PATH is on localhost
Then do docker-compose down, attach a volume to the same DEST_PATH, and run the Docker containers again using docker-compose up -d (a rough outline follows the snippet below).
Add the volume as follows in docker-compose.yml:
volumes:
- DEST_PATH:SRC_PATH
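A rough outline of that flow (the container name and paths are placeholders):
docker cp <container>:SRC_PATH ./DEST_PATH       # copy the data out first
docker-compose down
# then add "- ./DEST_PATH:SRC_PATH" under the service's volumes: in docker-compose.yml
docker-compose up -d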
