Shared folder in Docker on Windows: not only the "C:/Users/" path - docker

I'm new to Docker, I come from Vagrant.
I'm using Docker (1.9.1) inside my "D:/Works/something/DockerFirstTime" folder.
Now I create the machine with
docker-machine create first
and simple Dockerfile:
FROM ruby:2.2-onbuild
and simple Gemfile:
source 'https://rubygems.org'
gem 'rails'
Now with this command I want to use a shared folder like in Vagrant in the same hard drive of my Dockerfile:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install
But it doesn't work.
How to do this?
I know that Docker only shares the /c/User/folder, is that right?
How can I use the folder with the files, modify my files with an editor in Windows, and then restart the server, like in a normal shell on a single PC or like in Vagrant?

This question and this question have a similar root problem: mounting a non-C:/ drive folder in boot2docker. I wrote an in-depth answer to the other question that provides the same information as the first half of @VonC's answer.
From Docker Docs:
All other paths come from your virtual machine's filesystem. [...] In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
To get your folder mounted in a container:
This mounts your entire D:\ drive; you can change the file paths to be more granular and specific.
Share the directory with VBox:
This only needs to be done once.
In Windows CMD:
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
Mount the shared directory in your VM:
This will need to be done each time you restart the VM.
In the Boot2Docker VM terminal:
mount -t vboxsf -o uid=1000,gid=50 d-share /d
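Side note: since the mount is lost on each restart, a common workaround (my suggestion, not from the docs quoted above) is to put the mount command in boot2docker's bootlocal.sh, which runs on every boot:
# inside the Boot2Docker VM; /var/lib/boot2docker/bootlocal.sh runs at the end of each boot
sudo sh -c 'echo "mkdir -p /d && mount -t vboxsf -o uid=1000,gid=50 d-share /d" > /var/lib/boot2docker/bootlocal.sh'
sudo chmod +x /var/lib/boot2docker/bootlocal.sh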
To see the sources and an explanation of how this works, see my full answer to the other, similar question.
After this you can use the -v/--volume flag in Docker to mount this folder, or any sub-folders or files, into containers. If you mounted your whole D:\ drive, you can use that exact docker run command from your question and it should now work. If you mounted a specific part of your drive, you will have to change the paths to match, as in the sketch below.
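For example, if you shared only D:\Works\something instead of the whole drive, the sequence would look like this (a sketch; "works-share" and /works are names I picked for illustration, and you may need sudo inside the VM):
# in Windows CMD: share just the project's parent folder
VBoxManage sharedfolder add "boot2docker-vm" --name "works-share" --hostpath "D:\Works\something"
# in the Boot2Docker VM terminal: create a mount point and mount it
mkdir -p /works
mount -t vboxsf -o uid=1000,gid=50 works-share /works
# then change the docker run paths to match
docker run -it -v /works/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install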
To edit in Windows, run in Docker:
Also from Docker Docs:
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time.
Since it is a VBox shared directory, you should be able to see changes made on the Windows side reflected in the boot2docker VM.
You may need to restart containers to see the changes actually appear; this depends on how the program running inside the container (in your case, Ruby) uses the files. If the files are compiled into an app when the container starts, for example, you will definitely need to restart the container to see the changes.
Note:
Beware the CRLF vs. LF line-ending difference when writing files in Windows and reading them in Linux. Make sure your text editor saves files with Unix line endings, or you may start to see errors caused by '^M' appended to the end of all your lines.
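For example (my suggestion, not part of the original answer), you can convert an affected file with dos2unix, or tell Git on Windows to check files out with LF endings; "script.rb" here is a hypothetical file name:
# inside the container or VM (dos2unix may need to be installed first)
dos2unix /usr/src/app/script.rb
# on Windows: commit LF and keep LF on checkout
git config --global core.autocrlf input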

I know that Docker only shares the /c/User/folder, is that right?
It does, and it is able to do so because the VirtualBox VM used for providing a Linux host for docker is sharing C:\Users.
For docker to see another folder, you would need to:
use VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "D:\Works"
then mount /D/Works within a VM session, as mentioned in "share windows folder (other than c/Users/) with docker container (using docker windows client)", and mentioned in boot2docker:
mount -t vboxsf -o uid=1000,gid=50 sharename /some/mount/location
The issue with that last alternative is described in "Introduction to boot2docker" (scroll down to the "Shared folders" section):
The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large number of small files (big git repos) or anything that is filesystem-read heavy (grunt), performance becomes a factor.
The best solution I have come up with so far is using vagrant with a customized version of boot2docker with NFS support enabled, which has very little “hacking” to get working which is nice.
And a good enough selling point for me is the speed increase by using NFS instead of vboxsf, it’s pretty staggering actually.
This is the project that I have been using: https://vagrantcloud.com/yungsang/boxes/boot2docker.
The magic sauce in the volume sharing is in this line:
config.vm.synced_folder ".", "/vagrant", type: "nfs"
This tells Vagrant to share your current directory into the boot2docker VM at /vagrant, using NFS.
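A minimal Vagrantfile built around that box might look like this (a sketch based on the line above; adjust the box name and paths to your setup):
# Vagrantfile: boot2docker box with an NFS-backed shared folder
Vagrant.configure("2") do |config|
  config.vm.box = "yungsang/boot2docker"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end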
However, that project seems quite old and would need to be adapted in order to include the latest boot2docker.iso (docker 1.9.1).

Related

Can I access files created in a Docker container from my local machine (e.g. C drive or Desktop folder)?

I have Windows 10 and an SSD (e.g. a Samsung 256 GB SSD).
If I create a Docker Ubuntu container, go somewhere inside it (e.g. /home/myname), and create a test.txt containing "hello world", it will be at "/home/myname/test.txt".
test.txt has its own size (say 8 KB), so I think that space must ultimately come from my Samsung SSD.
I can access test.txt using 'docker attach', and I also know how to mount with the -v option so I can change or update that file (I know it is just duplicated from the container).
But I want to see or access test.txt from my Windows 10 C drive or desktop, or find it with the Windows find/search function, to understand where it actually lives on my SSD.
Sorry for my limited English and basic understanding of computing.
The following comes from https://docs.docker.com/storage/, but it was not enough for me:
By default all files created inside a container are stored on a writable container layer. This means that:
The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount. If you’re running Docker on Windows you can also use a named pipe.
Keep reading for more information about these two ways of persisting data.
Try the suggestions here:
https://stackoverflow.com/a/27320731/13064727
I think this is a two-step process; maybe you are missing the first step.
It seems you don't fully understand how -v works.
$ docker run -ti --rm -v "<your_windows_path>:/apps" -w /apps ubuntu bash
root@b2fd40f5f423:/apps# echo "helloworld" > test.txt
-w /apps (WORKDIR) makes sure the file you create in the container lands under the path that is reflected to your Windows path.
From your Windows system, you should be able to find this file on your local disk or SSD under <your_windows_path>.
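As a sanity check from the Windows side (same placeholder path as above):
REM in Windows CMD: the file created inside the container should be visible here
type "<your_windows_path>\test.txt"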

File is empty with shared volume

I have a little problem with the docker.
I'm trying to use a volume shared with my computer. I can see the files on my computer, but they are empty when viewed from my container.
I tried to create a file in the /root of my container (outside the shared volume) and I can see the file without any problem.
If I do echo test > test.txt (in my shared volume), the file content is empty.
I execute this command :
docker run -v "D:\My App:/home/app" -it MyImage /bin/bash
In the /home/app folder, I can see the files on my computer. But if I do:
cat /home/app/test.txt
It tells me there's nothing in the file, even though the file does contain text (the file exists).
If I create a file from my container, in the shared volume, I find it on my computer (and it is not empty).
If I create a file from my computer, I find it in the container, but it is empty when I try to display it.
Currently, when I do a cat test.txt, it doesn't display anything, while it should display this is a test.
First, check your Docker for Windows settings:
If your D:\ drive is not shared, you won't see much in your container.
docker/for-win issue 25 points out multiple possible issues:
If you are using Docker Toolbox:
In my case, Docker Toolbox created a VM named default in VirtualBox, and I added the shared folder in the VM: VirtualBox -> default (VM) -> Settings -> Shared Folders -> Add.
Then you can specify the paths in both your machine and the mapped path in the VM, like:
The 1st field is the path in your machine, like D:\my\app
The 2nd is the path in the VM, like /my-vm/app
Choose to Mount Automatically
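If you prefer the command line over the VirtualBox GUI, the equivalent is roughly this (the Docker Toolbox VM is usually named "default"; the share name and path here are examples):
VBoxManage sharedfolder add "default" --name "my-app" --hostpath "D:\my\app" --automount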
Another:
One of the issues I had when learning was trying to mount a volume in my container, but then having a folder that conflicted.
For example, I'd make my working dir /foo/bar, then try to use a volume for /foo/bar/private as well, BUT already have a folder called private in my initial mount.
I would see no error, but I'd see the first folder and not my second volume.
Or:
docker/for-win issue 2151: "Volumes mounted from a Linux WSL instance don't resolve in container".
It refers to "how to use Docker with WSL".
The last thing we need to do is set things up so that volume mounts work. This tripped me up for a while because check this out…
When using WSL, Docker for Windows expects you to supply your volume paths in a format that matches this: /c/Users/nick/dev/myapp.
But, WSL doesn’t work like that. Instead, it uses the /mnt/c/Users/nick/dev/myapp format.
Honestly I think Docker should change their path to use /mnt/c because it’s more clear on what’s going on, but that’s a discussion for another time.
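The guide linked above also describes a WSL-side workaround: change WSL's automount root so drives appear at /c instead of /mnt/c, matching the format Docker for Windows expects. A sketch (requires closing and reopening the WSL terminal):
# /etc/wsl.conf -- mount Windows drives at /c, /d, ... instead of /mnt/c, /mnt/d, ...
[automount]
root = /
options = "metadata"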

Where docker volumes are located?

I need to know where Docker volumes are located when using Docker Machine on macOS.
The installation uses boot2docker, so a VM is working behind the scenes.
Example:
docker volume create test-data
docker inspect shows a path, but where can I find the specific (physical) location?
It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
macOS uses a virtual machine, so it's different from Linux, where you can access volumes directly at /var/lib/docker/volumes.
On macOS you need to connect to the VM to find your volumes.
If you use persistent data volumes in Docker and you want to access them from the command line: if your Docker host is Linux, that's not a problem; you can find Docker volumes under the /var/lib/docker/volumes path.
However, that’s not the case when you use Docker for Mac.
Try to cd /var/lib/docker/volumes from your macOS terminal; you'll get nothing.
You see, your Mac machine isn’t a real Docker host. Docker for Mac runs a virtual machine and hides it from you to make things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect on that VM.
In order to accomplish this, we need to use a serial terminal on Mac. There’s a terminal application called “screen” that’s going to help us.
We need to “screen into” the Docker driver by executing a command:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you’re inside Docker’s VM and you can cd into volumes dir by typing: cd /var/lib/docker/volumes
Profit, you got there!
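Side note: Ctrl-A followed by D detaches from the screen session and leaves the VM alone; screen -r reattaches to it later.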
If you need to transfer files from your MacOS host into Docker host you can refer to File Sharing
Hope this helps you!
If you installed Docker using snap, the volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/
With the official Docker install, they are at:
/var/lib/docker/volumes/
Normally, if you want to "know" where a volume lives, you would map a volume to the local filesystem. When you create a named volume, you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
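The output will look roughly like this (values are illustrative; the Mountpoint is a path inside the VM, not on your Mac):
[
    {
        "CreatedAt": "2019-07-01T12:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/test-data/_data",
        "Name": "test-data",
        "Options": {},
        "Scope": "local"
    }
]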

Mounting development docker container directory on host

I am using docker for software development, as I can bundle all my dependencies (compilers, libraries, ...) within a nice contained environment, without polluting the host.
The way I usually do things (which I guess is pretty common): I have a directory on the host that only contains the source code, which is mounted into a development container using a docker volume, where my software gets built and executed. Thanks to volumes being in sync, any changes in the source is reflected within the container.
Here is the pitfall: when using a code editor, software dependencies are considered broken, as they are not accessible from the host. Therefore linting, etc. does not work.
I would like to be able to mount, let's say, /usr/local/include from the container onto the host so that, by correctly configuring my editor, I can fix all the warnings.
I guess a docker volume is not the solution here, because it would override the container's file system...
Also, I'm using Windows (no choice here) therefore my flow is:
Windows > Samba > Linux Host > Docker > Container
and I'd prefer not switching IDE (VS Code).
Any ideas? Thank you!
You basically wish you could reverse mount a volume from the container to the host. This is unfortunately not possible with Docker, and there are variants of this question here: How to mount a directory in docker container to host
You're stuck with copying the files from the container to the host. Whether the host path can match /usr/local/include or you have to use a different folder depends on your setup.
The easiest solution which would not require changing the docker image would be to use docker cp to copy the files.
Otherwise, you could automate this by having the image on entry (after installing all dependencies) copy the files to /tmp/include and mount a host volume to that location.
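A sketch of that approach (the entrypoint script, the /tmp/include location, and the host path are my own naming, not a convention):
#!/bin/sh
# entrypoint.sh -- copy the container's headers into a host-mounted folder, then run the real command
cp -r /usr/local/include/. /tmp/include/
exec "$@"
And run the container with something like:
docker run -v "$HOME/include-cache:/tmp/include" my-dev-image make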
I use https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/13 to expose Python libraries from inside the container to a local folder so that Neovim can read them for autocomplete/jump-to-definition.

How can I use a local file on container?

I'm trying to create a container to run a program. I'm using a preconfigured image, and now I need to run the program. However, it's a machine-learning program and I need a dataset from my computer to run it.
The file is too large to be copied to the container. It would be best if the program running in the container searched for the dataset in a local directory on my computer, but I don't know how I can do this.
Is there any way to make this reference with some Docker command? Or using a Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
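To check that the mount round-trips (hello.txt is just a throwaway test file):
# on the host
touch /Users/andy/mydata/hello.txt
# in a container: should list hello.txt
docker run --rm -v /Users/andy/mydata:/mnt/mydata myimage ls /mnt/mydata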
Keep in mind, if you are using Docker for Mac or Docker for Windows there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory. So, you can mount files or directories on macOS using the -v flag.
Update July 2019:
I've updated the documentation link and naming to be correct. These types of mounts are called "bind mounts". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mount paths on the host).
