Unable to share/mount Volume with Docker Toolbox on Windows 10 - docker

I am trying to set up my project with Docker. I am using Docker Toolbox on Windows 10 Home, and I am very new to Docker. To my understanding, I have to copy my files into a new container and add a volume so that I can persist the changes made by gulp.
Here is my folder structure
-- src
|- dist
|- node-modules
|- gulpfile.js
|- package.json
|- Dockerfile
The Dockerfile code
FROM node:8.9.4-alpine
RUN npm install -g gulp
CMD [ "ls", "source" ]
I tried many variations of docker run -v, e.g.:
docker run -v /$(pwd):/source <container image>
docker run -v //c/Users/PcUser/Desktop/proj:/source <container image>
docker run -v //d/proj:/source <container image>
docker run -v /d/proj:/source <container image>
But no luck.
Can anyone describe how you would set this up for yourself with the same structure? And why am I not able to mount my host folder?
P.S.: If I use two containers, one for compiling my code with gulp and one with nginx to serve the content of the dist folder, how will I do that?

@sxm1972 Thank you for your effort and help.
You are probably using Windows Pro or a server edition. I am using Windows 10 Home edition
Here is how I solved it, so other people using the same setup can fix their issue.
There may be a better way to solve this; please comment if you know a more efficient one.
So...
First, the question: why don't I see my shared volume from the PC in my container?
Ans: If we use Docker's Boot2Docker with VirtualBox (which I am), then whenever a volume is mounted it refers to a folder inside the Boot2Docker VM, not on the Windows host.
Image: Result of -v with docker in VirtualBox Boot2Docker
So with this, if we try to use $ ls it will show an empty folder, which in my case it did.
So we have to actually mount the folder into the Boot2Docker VM if we want to share our files from the Windows environment with the container.
Image: Resulting Mounts Window <-> Boot2Docker <-> Container
To achieve this we have to manually mount the folder into the VM with the following command:
vboxmanage sharedfolder add default --name "<folder_name_on_vm>" --hostpath "<path_to_folder_on_windows>" --automount
If you get an error running the command saying vboxmanage is not found, add the VirtualBox folder path to your system PATH. For me it was C:\Program Files\Oracle\VirtualBox.
After running the command, you'll see <folder_name_on_vm> at the VM's root. You can check it with docker-machine ssh default and then ls /. After confirming that the folder <folder_name_on_vm> exists, you can use it as a volume for your container:
docker run -it -v /<folder_name_on_vm>:/source <container> sh
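One extra step that may be needed: --automount generally takes effect on the next VM boot, so if the folder doesn't show up inside the VM right away, restarting the machine usually picks it up:
docker-machine restart default
docker-machine ssh default ls /   # the shared folder should now be listed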
Hope this helps...!
P.S. If you are feeling lazy and don't want to mount a folder, you can place your project inside your C:/Users folder, as it is mounted by default on the VM as shown in the image.
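P.P.S. For the gulp + nginx part of the question, one approach is a multi-stage Dockerfile: the first stage compiles, the second serves. A minimal sketch, assuming your default gulp task writes the build into dist/:
FROM node:8.9.4-alpine AS builder
WORKDIR /source
COPY package.json .
RUN npm install && npm install -g gulp
COPY . .
RUN gulp
FROM nginx:alpine
COPY --from=builder /source/dist /usr/share/nginx/html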

The problem is that the base image you use runs the node REPL as its default command. If you run the image as docker run -it node:8.9.4-alpine you will see a node prompt, and it will not run the npm command like you want.
The way I worked around the problem is to create your own base image using the following Dockerfile:
FROM node:8.9.4-alpine
CMD ["sh"]
Build it as follows:
docker build -t mynodealpine .
Then build your image (tagged mynodealpinenew in the session below) using this modified Dockerfile:
FROM mynodealpine
RUN npm install -g gulp
CMD [ "/bin/sh", "-c", "ls source" ]
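The second image is presumably built and run along these lines (the tag mynodealpinenew matches the session output below):
docker build -t mynodealpinenew .
docker run -it -v c:/users/smallya/testnode:/source mynodealpinenew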
For the problem regarding the mounting of volumes: if you are using Docker for Windows, you need to go into Settings (click the icon in the system tray) and enable Shared Drives.
Here is the output I was able to get:
PS C:\users\smallya\testnode> dir
Directory: C:\users\smallya\testnode
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/18/2018 11:11 AM dist
d----- 2/18/2018 11:11 AM node_modules
-a---- 2/18/2018 11:13 AM 77 Dockerfile
-a---- 2/18/2018 11:12 AM 26 gulpfile.js
-a---- 2/18/2018 11:12 AM 50 package.json
PS C:\users\smallya\testnode> docker run -it -v c:/users/smallya/testnode:/source mynodealpinenew
Dockerfile dist gulpfile.js node_modules package.json
PS C:\users\smallya\testnode>

Thanks for the question. A possibly simpler configuration via the VirtualBox graphical dialogue worked for me, without using the command line, albeit maybe not as versatile:
configure the shared folder inside the VirtualBox shared folders configuration dialogue,
and then call the mount like this:
docker run --volume //d/docker/nginx:/etc/nginx <image>
This binds the /etc/nginx directory in my container to
D:\Program Files\Docker Toolbox\nginx
Source:
https://medium.com/@Charles_Stover/fixing-volumes-in-docker-toolbox-4ad5ace0e572

Related

How to print the current directory of the docker image which is running in a centOS7 OS from windows docker desktop [duplicate]

I've noticed with docker that I need to understand what's happening inside a container or what files exist in there. One example is downloading images from the docker index - you don't have a clue what the image contains, so it's impossible to start the application.
What would be ideal is to be able to ssh into them, or equivalent. Is there a tool to do this, or is my conceptualisation of Docker wrong in thinking I should be able to do this?
Here are a couple different methods...
A) Use docker exec (easiest)
Docker version 1.3 or newer supports the exec command, which behaves similarly to nsenter. It can run a new process in an already running container (the container must still have its PID 1 process running). You can run /bin/bash to explore the container's state:
docker exec -t -i mycontainer /bin/bash
see Docker command line documentation
B) Use Snapshotting
You can evaluate container filesystem this way:
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem using bash (for example)
docker run -t -i mysnapshot /bin/bash
This way, you can evaluate the filesystem of the running container at that precise moment. The container is still running, and no future changes are included.
You can later delete the snapshot (the filesystem of the running container is not affected!):
docker rmi mysnapshot
C) Use ssh
If you need continuous access, you can install sshd in your container and run the daemon:
docker run -d -p 22 mysnapshot /usr/sbin/sshd -D
# you need to find out which port to connect to:
docker ps
This way, you can run your app using ssh (connect and execute what you want).
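To find which host port was mapped to the container's port 22, docker port also works; the ssh invocation below assumes you configured credentials when installing sshd, and the port number is illustrative:
docker port <container> 22   # prints e.g. 0.0.0.0:49154
ssh root@localhost -p 49154  # assumes root login was set up in the image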
D) Use nsenter
Use nsenter; see Why you don't need to run SSHd in your Docker containers.
The short version is: with nsenter, you can get a shell into an existing container, even if that container doesn't run SSH or any kind of special-purpose daemon.
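A typical invocation, run as root on the Docker host, looks like this (the container name is a placeholder):
PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
nsenter --target $PID --mount --uts --ipc --net --pid sh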
UPDATE: EXPLORING!
This command should let you explore a running docker container:
docker exec -it name-of-container bash
The equivalent for this in docker-compose would be:
docker-compose exec web bash
(web is the name-of-service in this case and it has tty by default.)
Once you are inside do:
ls -lsa
or any other bash command like:
cd ..
This command should let you explore a docker image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
once inside do:
ls -lsa
or any other bash command like:
cd ..
The -it stands for interactive... and tty.
This command should let you inspect a running docker container or image:
docker inspect name-of-container-or-image
You might want to do this to find out if there is any bash or sh in there. Look for Entrypoint or Cmd in the JSON output.
NOTE: This answer relies on common tools being present, but if there is no bash shell or common tools like ls, you could first add them in a layer, if you have access to the Dockerfile:
example for alpine:
RUN apk add --no-cache bash
Otherwise, if you don't have access to the Dockerfile, just copy the files out of a newly created container and look through them:
docker create <image>  # returns a container ID; the container is never started
docker cp <container ID>:<source_path> <destination_path>
docker rm <container ID>
cd <destination_path> && ls -lsah
see docker exec documentation
see docker-compose exec documentation
see docker inspect documentation
see docker create documentation
In case your container is stopped or doesn't have a shell (e.g. hello-world mentioned in the installation guide, or non-alpine traefik), this is probably the only possible method of exploring the filesystem.
You may archive your container's filesystem into a tar file:
docker export adoring_kowalevski > contents.tar
Or list the files:
docker export adoring_kowalevski | tar t
Do note that, depending on the image, this might take some time and disk space.
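A single file can also be pulled out of the archive without unpacking everything; for example, to print /etc/hostname (note that paths in the export have no leading slash):
docker export adoring_kowalevski | tar -xOf - etc/hostname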
Before container creation:
If you want to explore the structure of the image that is mounted inside the container, you can do
sudo docker image save image_name > image.tar
tar -xvf image.tar
This gives you visibility into all the layers of the image and its configuration, which is present in JSON files.
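As a quick orientation inside the archive, manifest.json lists the layer tarballs and points at the image config; for example:
tar -xf image.tar manifest.json
cat manifest.json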
After container creation:
For this there are already a lot of answers above. My preferred way would be:
docker exec -t -i container /bin/bash
The most upvoted answer works for me when the container is actually started, but when it isn't possible to run it, and you for example want to copy files out of the container, this has saved me before:
docker cp <container-name>:<path/inside/container> <path/on/host/>
Thanks to docker cp (link) you can copy directly from the container as if it were any other part of your filesystem.
For example, recovering all files inside a container:
mkdir /tmp/container_temp
docker cp example_container:/ /tmp/container_temp/
Note that you don't need to specify that you want to copy recursively.
The filesystem of the container is in the data folder of Docker, normally in /var/lib/docker. To start a container and inspect its filesystem (this assumes the aufs storage driver), do the following:
hash=$(docker run -d busybox top)  # run detached so the container ID is captured and the container stays up
cd /var/lib/docker/aufs/mnt/$hash
And now the current working directory is the root of the container.
You can use dive to view the image content interactively in a TUI:
https://github.com/wagoodman/dive
Try using
docker exec -it <container-name> /bin/bash
There is a possibility that bash is not installed; in that case you can use
docker exec -it <container-name> sh
On Ubuntu 14.04 running Docker 1.3.1, I found the container root filesystem on the host machine in the following directory:
/var/lib/docker/devicemapper/mnt/<container id>/rootfs/
Full Docker version information:
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa
In my case no shell except sh was available in the container, so this worked like a charm:
docker exec -it <container-name> sh
I use another dirty trick that is aufs/devicemapper agnostic.
I look at the command that the container is running, e.g. with docker ps,
and if it's an apache or java process I just do the following:
sudo -s
cd /proc/$(pgrep java)/root/
and voilà, you're inside the container.
Basically, as root you can cd into the /proc/<PID>/root/ folder as long as that process is run by the container. Beware that symlinks will not make sense while using that mode.
The most upvoted answer is good, except if your container isn't a full Linux system.
Many containers (especially Go-based ones) don't have any standard binaries (no /bin/bash or /bin/sh). In that case, you will need to access the container's files directly:
Works like a charm:
name=<name>
dockerId=$(docker inspect -f {{.Id}} $name)
mountId=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$dockerId/mount-id)
cd /var/lib/docker/aufs/mnt/$mountId
Note: You need to run it as root.
Only for Linux:
The simplest way I use is the proc directory; the container must be running in order to inspect its files.
Find out the process id (PID) of the container and store it into some variable
PID=$(docker inspect -f '{{.State.Pid}}' your-container-name-here)
Make sure the container process is running, then use the variable to get into the container's folder:
cd /proc/$PID/root
If you want to get into the directory without storing the PID first, just use this long command:
cd /proc/$(docker inspect -f '{{.State.Pid}}' your-container-name-here)/root
Tips:
After you get inside the container, everything you do will affect the actual process of the container, such as stopping the service or changing the port number.
Hope it helps
Note:
This method only works while the container is still running; once the container has stopped or been removed, the directory no longer exists.
None of the existing answers address the case of a container that exited (and can't be restarted) and/or doesn't have any shell installed (e.g. distroless ones). This one works as long as you have root access to the Docker host.
For a real manual inspection, find out the layer IDs first:
docker inspect my-container | jq '.[0].GraphDriver.Data'
In the output, you should see something like
"MergedDir": "/var/lib/docker/overlay2/03e8df748fab9526594cfdd0b6cf9f4b5160197e98fe580df0d36f19830308d9/merged"
Navigate into this folder (as root) to find the current visible state of the container filesystem.
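Since MergedDir is exposed in the inspect output, a Go template can skip the jq step entirely (overlay2 driver assumed):
cd $(docker inspect -f '{{.GraphDriver.Data.MergedDir}}' my-container)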
This will launch a bash session for the image:
docker run --rm -it --entrypoint=/bin/bash name-of-image
On newer versions of Docker you can run docker exec [container_name] [command], which runs a command inside your container.
So to get a list of all the files in a container, just run docker exec [container_name] ls.
I wanted to do this, but I was unable to exec into my container as it had stopped and wasn't starting up again due to some error in my code.
What worked for me was to simply copy the contents of the entire container into a new folder like this:
docker cp container_name:/app/ new_dummy_folder
I was then able to explore the contents of this folder as one would do with a normal folder.
For me, this one works well (thanks to the last comments for pointing out the directory /var/lib/docker/):
chroot /var/lib/docker/containers/2465790aa2c4*/root/
Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.
For the docker aufs driver:
This script will find the container's root dir (tested on docker 1.7.1 and 1.10.3):
if [ -z "$1" ] ; then
    echo 'usage: docker-find-root <container_id_or_name>'
    exit 1
fi
CID=$(docker inspect --format '{{.Id}}' "$1")
if [ -n "$CID" ] ; then
    if [ -f /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id ] ; then
        F1=$(cat /var/lib/docker/image/aufs/layerdb/mounts/$CID/mount-id)
        d1=/var/lib/docker/aufs/mnt/$F1
    fi
    if [ ! -d "$d1" ] ; then
        d1=/var/lib/docker/aufs/diff/$CID
    fi
    echo "$d1"
fi
This answer will help those (like myself) who want to explore the docker volume filesystem even if the container isn't running.
List running docker containers:
docker ps
=> CONTAINER ID "4c721f1985bd"
Look at the docker volume mount points on your local physical machine (https://docs.docker.com/engine/tutorials/dockervolumes/):
docker inspect -f {{.Mounts}} 4c721f1985bd
=> [{ /tmp/container-garren /tmp true rprivate}]
This tells me that the local physical machine directory /tmp/container-garren is mapped to the /tmp docker volume destination.
Knowing the local physical machine directory (/tmp/container-garren) means I can explore the filesystem whether or not the docker container is running. This was critical to helping me figure out that there was some residual data that shouldn't have persisted even after the container was not running.
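If you prefer a friendlier shape than the raw {{.Mounts}} dump, a Go template can print each mapping on its own line:
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' 4c721f1985bd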
If you are using Docker v19.03, follow the steps below.
# find ID of your running container:
docker ps
# create image (snapshot) from container filesystem
docker commit 12345678904b5 mysnapshot
# explore this filesystem
docker run -t -i mysnapshot /bin/sh
For an already running container, you can do:
dockerId=$(docker inspect -f {{.Id}} [docker_id_or_name])
cd /var/lib/docker/btrfs/subvolumes/$dockerId
You need to be root in order to cd into that dir. If you are not root, try 'sudo su' before running the command.
Edit: Following v1.3, see Jiri's answer - it is better.
Another trick is to use the atomic tool to do something like:
mkdir -p /path/to/mnt && atomic mount IMAGE /path/to/mnt
The Docker image will be mounted to /path/to/mnt for you to inspect it.
My preferred way to understand what is going on inside a container is to expose a port and run a server in there:
docker run -it -p 8000:8000 image
Then start a simple file server inside it:
python -m SimpleHTTPServer
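Then, from the host, the container's filesystem is browsable over HTTP through the published port (assuming the -p 8000:8000 mapping above):
curl http://localhost:8000/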
If you are using the AUFS storage driver, you can use my docker-layer script to find any container's filesystem root (mnt) and read-write layer:
# docker-layer musing_wiles
rw layer : /var/lib/docker/aufs/diff/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
mnt : /var/lib/docker/aufs/mnt/c83338693ff190945b2374dea210974b7213bc0916163cc30e16f6ccf1e4b03f
Edit 2018-03-28: docker-layer has been replaced by docker-backup.
The docker exec command to run a command in a running container can help in multiple cases.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Options:
  -d, --detach               Detached mode: run command in the background
      --detach-keys string   Override the key sequence for detaching a container
  -e, --env list             Set environment variables
  -i, --interactive          Keep STDIN open even if not attached
      --privileged           Give extended privileges to the command
  -t, --tty                  Allocate a pseudo-TTY
  -u, --user string          Username or UID (format: <name|uid>[:<group|gid>])
  -w, --workdir string       Working directory inside the container
For example:
1) Getting a bash shell in the running container's filesystem:
docker exec -it containerId bash
2) Getting a bash shell as root, to have the required rights:
docker exec -it -u root containerId bash
This is particularly useful for doing some processing as root in a container.
3) Getting a bash shell with a specific working directory:
docker exec -it -w /var/lib containerId bash
Oftentimes I only need to explore the docker filesystem because my build won't run, so docker run -it <container_name> bash is impractical. I also do not want to waste time and memory copying filesystems, so docker cp <container_name>:<path> <target_path> is impractical too.
While possibly unorthodox, I recommend re-building with ls as the final command in the Dockerfile:
CMD [ "ls", "-R" ]
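Then rebuild and run; the listing prints without needing a shell in the image (the tag is illustrative):
docker build -t debug-ls .
docker run --rm debug-ls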
I've found the easiest all-in-one solution to view, edit, and copy files with a GUI app inside almost any running container:
mc editing files in docker
inside the container, install mc and ssh: docker exec -it <container> /bin/bash, then at the prompt install the mc and ssh packages;
in the same exec-bash console, run mc;
press ESC, then 9, then ENTER to open the menu and select "Shell link...";
using "Shell link...", open SCP-based filesystem access to any host with an ssh server running (including the one running docker) by its IP address;
do your job in the graphical UI.
This method overcomes all issues with permissions, snap isolation, etc., lets you copy directly to any machine, and is the most pleasant to use for me.
I had an unknown container that was doing some production workload, and I did not want to run any command in it.
So, I used docker diff.
This lists all files that the container has changed, and it is therefore well suited to exploring the container filesystem.
To get only a folder you can just use grep:
docker diff <container> | grep /var/log
It will not show files from the docker image. Depending on your use case this can help or not.
Late to the party, but in 2022 we have VS Code

Docker bind mount directory in /tmp not working

I'm trying to mount a directory in /tmp to a directory in a container, namely /test. To do this, I've run:
docker run --rm -it -v /tmp/tmpl42ydir5/:/test alpine:latest ls /test
I expect to see a few files when I do this, but instead I see nothing at all.
I tried moving the folder into my home directory and running again:
docker run --rm -it -v /home/theolodus/tmpl42ydir5/:/test alpine:latest ls /test
at which point I see the expected output. This makes me think I have misconfigured something and/or the permissions have bitten me. Have I missed a step in installing docker? I did it via sudo snap install docker, and then configured docker to let me run as non-root by adding myself to the docker group. Running as root doesn't help...
Host machine is Ubuntu 20.04, docker version is 19.03.11
When running docker as a snap, all files that Docker uses, such as Dockerfiles and bind-mount sources, need to be in $HOME.
Ref: https://snapcraft.io/docker
The /tmp filesystem simply isn't accessible to the docker engine when it's running within the snap isolation. You can install docker directly on Ubuntu from the upstream Docker repositories for a more traditional behavior.
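So the quick workaround, short of reinstalling, is to move the directory under $HOME first, for example:
mv /tmp/tmpl42ydir5 ~/
docker run --rm -it -v ~/tmpl42ydir5/:/test alpine:latest ls /test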

docker bind mounting not work with "not home" folder

I'm trying to play with bind mounting, and I ran into some strange behavior. I understand that bind mounting mounts a host folder into the container filesystem, obscuring the original container content. Now when I try, for example:
docker run -it -v /home/user:/tmp ubuntu bash
the /tmp folder of the container contains the user's home, but when I try to bind a non-home folder like /var/lib:
docker run -it -v /var/lib:/tmp ubuntu bash
the /tmp folder inside the container is empty. Why does this happen?
Moreover, if inside that last container I run, for example, touch foo and then start another container with the same binding:
docker run -it -v /var/lib:/tmp ubuntu bash
I'll find the foo file inside the /tmp folder.
Additional info: I run an Ubuntu 19 server inside a VMware virtual machine.
I found a "dirty" solution: I had previously installed docker via snap; I reinstalled docker via apt and now it works fine. This will remain a mystery.

Volume not mounting properly when running shell inside a container

I want to encrypt my Kubernetes file to integrate it with Travis CI, and for that I am installing the Travis CI CLI via a docker container. When the container runs and I mount my current working directory to /app, it just creates an empty folder.
I have also added the folder under shared folders in VirtualBox, but nothing seems to work. I am using Docker Toolbox on Windows 10 Home.
docker run -it -v ${pwd}:/app ruby:2.3 sh
It creates the empty app folder along with the other folders in the container but does not mount the volumes.
I also tried using
docker run -it -v //c/complex:/app ruby:2.3 sh
as someone suggested using the name you specify in VirtualBox.
Use the full path of the current directory:
docker run -it -v <full path of current directory>:/app ruby:2

using local folder into docker container

Hi, I have a Windows machine, installed Docker Desktop on it, and created an Ubuntu container.
In Docker settings I checked my C: drive under the shared drives option, and I created a folder named mydata under /opt in this container.
Now I run this command:
docker my_container_name run -v /Users/john/Documents/DOCKER_FOLDER:/opt/mydata
But I don't see the files from DOCKER_FOLDER in the /opt/mydata folder.
Not sure what I am doing wrong.
The right command is:
docker run -v c:/Users/john/Documents/DOCKER_FOLDER:/opt/mydata <image> ls /opt/mydata
so you need to specify the drive letter, an image, and a command to run.
