I've been searching for more than two days and unfortunately none of the solutions have worked so far.
I want to know where the mapped folder is. Let's say:
$host> docker run -ti -v /home:/home2 continuumio/anaconda3 bin/bash
$root#9fce1cb15cae> cd /home2
$root#9fce1cb15cae: /home2# ls
$root#9fce1cb15cae: /home2# touch test.txt
$root#9fce1cb15cae: /home2# ls
test.txt
Now if I exit the container and search for test.txt, I can't find it anywhere.
If I look into the WSL folder I have these two distros, and if I run wsl -l I get this:
└ $ wsl -l
Windows Subsystem for Linux Distributions:
docker-desktop (Default)
docker-desktop-data
So I'm assuming the mapping should be available inside docker-desktop, as it's the default.
If I run wsl ls /home I get an empty result.
docker-desktop-data doesn't have a /home folder to begin with.
If I search inside Windows for /home, there is no such folder.
If I look for Docker volumes, there aren't any volumes available,
and if I inspect the mounted volume
So, in summary:
there is no local folder on my Windows machine named home
the docker-desktop /home folder is empty
docker-desktop-data doesn't have a /home folder at all
on drive C: I don't have any home folder
So my big question is: where is the mapped folder?
Interestingly, if I use a Windows path as the mapping source, the mapping works quite well:
$host> docker run -ti -v c:\users\me\repo\testing-docker:/home2 continuumio/anaconda3 bin/bash
$root#9fce1cb15cae> cd /home2
$root#9fce1cb15cae: /home2# ls
$root#9fce1cb15cae: /home2# touch test.txt
$root#9fce1cb15cae: /home2# ls
test.txt
If I go to testing-docker, I can see test.txt.
So, magically, if I use a Windows path it works well, but as soon as it's a Linux path it gets mapped to somewhere I don't know and can't find. It doesn't throw any errors. If I remap /home into another container, I can see that test.txt is there, so I'm sure the file is somewhere, but I don't know where. So the question is: where does it get mapped, and where could it be if it isn't inside WSL? I tried all the folders mentioned in other SO answers, but none of them worked for me.
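One way to make sense of the asymmetry: with the WSL 2 backend, a bare Linux path such as /home is resolved inside Docker Desktop's own utility VM, not on Windows and not in a distro you can browse, while a Windows path is translated into a host mount inside that VM. The sketch below shows the kind of translation that happens for a Windows source; the mount root /run/desktop/mnt/host/ is an assumption based on commonly reported Docker Desktop layouts and may differ between versions.

```shell
# Sketch: translate a Windows path into the path Docker Desktop's
# WSL 2 backend is commonly reported to use inside its utility VM.
# The /run/desktop/mnt/host/ prefix is an assumption, not guaranteed.
win_to_vm_path() {
  p=$(printf '%s' "$1" | tr '\\' '/')                          # backslashes -> slashes
  drive=$(printf '%s' "$p" | cut -c1 | tr '[:upper:]' '[:lower:]')  # "C" -> "c"
  rest=$(printf '%s' "$p" | cut -c3-)                          # drop the "C:" prefix
  printf '/run/desktop/mnt/host/%s%s\n' "$drive" "$rest"
}

win_to_vm_path 'C:\Users\me\repo\testing-docker'
# -> /run/desktop/mnt/host/c/Users/me/repo/testing-docker
```

A bare /home, by contrast, never leaves the VM, which is consistent with the observation that remounting /home into another container is the only place the file shows up. Running docker inspect on the container and reading its Mounts section shows which source path the daemon actually used.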
I'm using Docker version 20.10.20, build 9fdeb9c
Related
I'm trying to play with bind mounting and I ran into some strange behavior. I understand that bind mounting mounts a host folder into the container's file system, obscuring the original container content. Now when I try, for example:
docker run -it -v /home/user:/tmp ubuntu bash
the /tmp folder of the container contains the user's home, but when I try to bind a non-home folder like /var/lib:
docker run -it -v /var/lib:/tmp ubuntu bash
the /tmp folder inside the container is empty. Why does this happen?
Moreover, if I run touch foo inside that last container and then run another container with the same bind:
docker run -it -v /var/lib:/tmp ubuntu bash
I'll find the foo file inside the /tmp folder.
Additional info: I'm running Ubuntu 19 Server inside a VMware virtual machine.
I found a "dirty" solution: I had previously installed Docker via snap; I reinstalled Docker via apt and now it works fine. This will remain a mystery. (Presumably the snap-packaged daemon, being confined, resolved host paths differently, but I haven't confirmed that.)
Hi, I have a Windows machine, I installed Docker Desktop on it and created an Ubuntu container.
In the Docker settings I checked my C: drive under the Shared Drives option, and I created a folder named mydata under /opt in this container.
Now I run this command:
docker my_container_name run -v /Users/john/Documents/DOCKER_FOLDER:/opt/mydata
But I don't see the files under DOCKER_FOLDER in the /opt/mydata folder.
Not sure what I am doing wrong.
The right command is:
docker run -v c:/Users/john/Documents/DOCKER_FOLDER:/opt/mydata my_image_name ls /opt/mydata
So you need to specify the drive letter in the host path, put the image name (not the container name) after the options, and give a command to run.
I'm on a Windows 10 machine, in PowerShell, trying to mount a folder. I thought that using "." in the path would map it to the current directory, but obviously not:
cd c:/
docker run -it -v ./uniquename:/var/uniquename alpine sh
/ # touch /var/uniquename/test
/ # ls /var/uniquename
test
/ # exit
cd c:/tmp
docker run -it -v ./uniquename:/var/uniquename alpine sh
/ # ls /var/uniquename
test
/ # exit
Now, my question is not how to map a volume relative to the current directory - that would be $(pwd), or "use absolute paths", or whatever. My question is:
Where on my host are the mapped folder and the newly created file located?
Turns out Docker strips out the ./, so the -v flag grabs "uniquename" as the name of a Docker volume.
> docker volume ls
DRIVER VOLUME NAME
local uniquename
Mystery solved. Figuring out where Docker stores volumes is left as an exercise for the reader.
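The classification that bit you here can be sketched roughly: when the source of -v doesn't look like a path, Docker treats it as a named volume. The function below is a simplified approximation of that rule (the real daemon matches volume names against its own regex, so treat this as an illustration, not Docker's actual logic):

```shell
# Rough sketch of how the -v source is classified. Simplified:
# the real daemon checks volume names against [a-zA-Z0-9][a-zA-Z0-9_.-]*.
classify_mount_source() {
  case "$1" in
    /*|[A-Za-z]:*) echo "bind mount" ;;    # absolute Unix path or Windows drive path
    *)             echo "named volume" ;;  # anything else becomes a volume name
  esac
}

classify_mount_source uniquename      # -> named volume
classify_mount_source /c/Users/me     # -> bind mount
```

Once PowerShell/Docker stripped the ./, plain uniquename fell into the named-volume branch. As for where it lives: docker volume inspect uniquename --format '{{ .Mountpoint }}' prints the daemon's storage path, which on Docker Desktop for Windows is inside the backend VM rather than directly on C:.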
I am trying to set up my project with Docker. I am using Docker Toolbox on Windows 10 Home. I am very new to Docker. To my understanding, I have to copy my files into a new container and add a volume so that changes made by gulp are persisted.
Here is my folder structure
-- src
|- dist
|- node_modules
|- gulpfile.js
|- package.json
|- Dockerfile
The Dockerfile code
FROM node:8.9.4-alpine
RUN npm install -g gulp
CMD [ "ls", "source" ]
I tried many variations of docker run -v, e.g.:
docker run -v /$(pwd):/source <container image>
docker run -v //c/Users/PcUser/Desktop/proj:/source <container image>
docker run -v //d/proj:/source <container image>
docker run -v /d/proj:/source <container image>
But no luck.
Can anyone describe how you would set this up for yourself with the same structure? And why am I not able to mount my host folder?
P.S.: If I use two containers, one for compiling my code with gulp and one with nginx to serve the content of the dist folder, how would I do that?
@sxm1972 Thank you for your effort and help.
You are probably using Windows Pro or a server edition; I am using Windows 10 Home edition.
Here is how I solved it, so other people using the same setup can solve their issue.
There may be a better way to solve this; please comment if you know a more efficient one.
So...
First, the question: why don't I see my shared volume from the PC in my container?
Answer: If we use Docker's Boot2Docker with VirtualBox (which I am), then whenever a volume is mounted it refers to a folder inside the Boot2Docker VM.
Image: Result of -v with docker in VirtualBox Boot2Docker
So if we try $ ls it will show an empty folder, which in my case it did.
So we actually have to mount the folder into the Boot2Docker VM if we want to share our files from the Windows environment with the container.
Image: Resulting Mounts Window <-> Boot2Docker <-> Container
To achieve this we have to manually mount the folder into the VM with the following command:
vboxmanage sharedfolder add default --name "<folder_name_on_vm>" --hostpath "<path_to_folder_on_windows>" --automount
If you get an error running the command saying vboxmanage is not found, add the VirtualBox folder to your system PATH. For me it was C:\Program Files\Oracle\VirtualBox.
After running the command, you'll see <folder_name_on_vm> in the root. You can check it with docker-machine ssh default and then ls /. After confirming that the folder <folder_name_on_vm> exists, you can use it as a volume for your container:
docker run -it -v /<folder_name_on_vm>:/source <container> sh
Hope this helps...!
P.S. If you are feeling lazy and don't want to mount a folder, you can place your project inside your C:/Users folder, as it is mounted by default on the VM, as shown in the image.
The problem is that the base image you use runs the node REPL as its default command. If you run the image as docker run -it node:8.9.4-alpine you will see a node prompt, and it will not run the command you want.
The way I worked around the problem is to create your own base image using the following Dockerfile:
FROM node:8.9.4-alpine
CMD ["sh"]
Build it as follows:
docker build -t mynodealpine .
Then build your image using this modified Dockerfile:
FROM mynodealpine
RUN npm install -g gulp
CMD [ "/bin/sh", "-c", "ls source" ]
For the problem regarding mounting of volumes: since you are using Docker for Windows, you need to go into Settings (click the icon in the system tray) and enable Shared Drives.
Here is the output I was able to get:
PS C:\users\smallya\testnode> dir
Directory: C:\users\smallya\testnode
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/18/2018 11:11 AM dist
d----- 2/18/2018 11:11 AM node_modules
-a---- 2/18/2018 11:13 AM 77 Dockerfile
-a---- 2/18/2018 11:12 AM 26 gulpfile.js
-a---- 2/18/2018 11:12 AM 50 package.json
PS C:\users\smallya\testnode> docker run -it -v c:/users/smallya/testnode:/source mynodealpinenew
Dockerfile dist gulpfile.js node_modules package.json
PS C:\users\smallya\testnode>
Thanks for the question. A possibly simpler configuration via a VirtualBox graphical dialog worked for me, without using the command line, albeit maybe not necessarily more versatile:
configure the shared folder inside VirtualBox's Shared Folders configuration dialog,
and then call mount like this:
docker run --volume //d/docker/nginx:/etc/nginx nginx
This binds the /etc/nginx directory in my container to
D:\Program Files\Docker Toolbox\nginx
source:
https://medium.com/@Charles_Stover/fixing-volumes-in-docker-toolbox-4ad5ace0e572
I'm trying to mount a directory containing a credentials file into a container with the -v flag, but instead of mounting the file as a file, it mounts it as a directory:
Run script:
$ mkdir creds/
$ cp key.json creds/key.json
$ ls -l creds/
total 4
-rw-r--r-- 1 root root 2340 Oct 12 22:59 key.json
$ docker run -v /*pathToWorkingDirectory*/creds:/creds *myContainer* *myScript*
When I look at the debug output of the docker run command, I see that it creates the /creds/ directory, but for some reason creates key.json as a subdirectory under it rather than mounting the file.
I've seen some other posts saying that if you tell docker run to mount a file it can't find, it will create a directory on the container with that filename. But since I didn't specify the filename, and it knew to name the new directory key.json, it seems like it was able to find the file yet created it as a directory anyway? Has anyone run into this before?
In case it's relevant, the script is run with Docker-in-Docker in another container as part of GitLab's CI process.
You are running Docker-in-Docker. This means that when you specify a -v volume, Docker will look for the directory on the host, since the shared socket enabling Docker-in-Docker actually means your run command starts a container alongside the runner container.
I explain this in more detail in this SO answer:
https://stackoverflow.com/a/46441426/2078207
Also notice the comment below that answer to get a direction toward a solution.
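The mechanism can be illustrated without Docker at all: the -v source is resolved by the daemon that receives the request, i.e. against the host filesystem, and when nothing exists at that path the daemon creates it as an empty directory, which is exactly how key.json ended up as a directory. A minimal simulation, using temp directories to stand in for the two filesystems (this is an illustration of the behavior, not Docker's actual code):

```shell
# Simulate: the file exists in the runner container's filesystem, but the
# HOST daemon resolves the -v source against its own (host) filesystem.
runner=$(mktemp -d)   # stands in for the runner container's filesystem
host=$(mktemp -d)     # stands in for the host's filesystem

mkdir -p "$runner/creds" && touch "$runner/creds/key.json"   # file only in the runner

resolve_bind_source() {
  # $1 = -v source path, $2 = root of the filesystem the daemon actually sees
  if [ -e "$2$1" ]; then
    echo "bind existing path"
  else
    mkdir -p "$2$1"   # like Docker, create a missing source as a directory
    echo "created empty directory"
  fi
}

resolve_bind_source /creds/key.json "$host"
# -> created empty directory
```

Since the host has no /creds/key.json, the "mount" materializes as an empty directory with the file's name. Workarounds follow from this: either make the credentials available at a path the host daemon can see, or copy the file into the target container (e.g. with docker cp) instead of bind-mounting it.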