Mounting development docker container directory on host - docker

I am using docker for software development, as I can bundle all my dependencies (compilers, libraries, ...) within a nice contained environment, without polluting the host.
The way I usually do things (which I guess is pretty common): I have a directory on the host that contains only the source code, which is mounted into a development container using a docker volume, where my software gets built and executed. Since the mount stays in sync, any changes to the source are reflected within the container.
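For reference, a minimal sketch of that workflow (the paths and image name here are placeholders, not my actual setup):
# bind-mount the host source directory into the dev container and build there
docker run -it --rm -v /home/me/project/src:/src -w /src my-dev-image make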
Here is the pitfall: when using a code editor on the host, the software dependencies are considered broken because they are not accessible from the host. Therefore linting, etc. does not work.
I would like to be able to mount, let's say, /usr/local/include from the container onto the host so that, by correctly configuring my editor, I can fix all the warnings.
I guess a docker volume is not the solution here, because mounting one would hide the container's files at that path rather than expose them...
Also, I'm using Windows (no choice here) therefore my flow is:
Windows > Samba > Linux Host > Docker > Container
and I'd prefer not to switch IDEs (I use VS Code).
Any ideas? Thank you!

You basically wish you could reverse-mount a volume, from the container to the host. Unfortunately, this is not possible with Docker; there are variants of this question here: How to mount a directory in docker container to host
You're stuck with copying the files from the container to the host. Whether the host path can match /usr/local/include or you have to use a different folder depends on your setup.
The easiest solution, which would not require changing the docker image, would be to use docker cp to copy the files.
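For example, a minimal sketch (the container name dev and the destination folder are placeholders):
# copy the container's headers onto the host
docker cp dev:/usr/local/include ./container-include
You can then point the editor's include path at ./container-include and re-run the copy whenever the dependencies change.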
Otherwise, you could automate this by having the image, on entry (after installing all dependencies), copy the files to /tmp/include and mount a host volume at that location.
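A sketch of that automation, assuming a hypothetical entrypoint.sh baked into the image (none of these names come from the original setup):
#!/bin/sh
# entrypoint.sh: publish the headers into the mounted folder, then run the real command
cp -r /usr/local/include/. /tmp/include/
exec "$@"
and on the host:
docker run -v /home/me/container-include:/tmp/include my-dev-image make
Because /tmp/include is backed by the host folder, the copied headers appear in /home/me/container-include, where the editor can index them.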

I use https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/13 to expose Python libraries from inside the container to a local folder so that Neovim can read the libraries for autocomplete / jump-to-definition.
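Roughly, the trick from that thread is to create a named volume that is backed by a plain host directory and mount it over the container's library path; when the volume is empty on first use, Docker copies the image's existing contents into it, which populates the host directory. A sketch with placeholder paths and image name:
# create a named volume backed by a host folder
docker volume create --driver local --opt type=none --opt o=bind --opt device=/home/me/pylibs pylibs
# the first mount copies the image's site-packages into the volume, i.e. into /home/me/pylibs
docker run --rm -v pylibs:/usr/local/lib/python3.11/site-packages my-python-image true
Note that this copy-on-first-use behavior applies to named volumes only (not plain bind mounts), and only while the volume is empty.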

Related

Overlay a folder in Docker with one from the host

My situation is the following:
I have a docker image/container in which I am compiling. I had to install some components to $HOME via the Dockerfile (so while creating the image).
Let's say one of those components is in ~/.config, but there are other folders too.
I would like to be able to override the files in .config by mounting a home folder from the host on top of the one inside docker: whenever you place a file in the mounted folder, it overrides the one that is already inside the container.
So in theory, this is exactly what an OverlayFS does, right? The lower directory would be the one inside the Docker container, and the upper directory would be the one on my host.
Is there a way to accomplish that?
So far I have found the following related topics:
https://serverfault.com/questions/841238/how-to-use-overlayfs-with-docker-volumes
Drawback: the answer only shows how to use overlayfs on the host, but getting access to the lower container/image directory is not that self-explanatory and also feels dirty.
Can I mount docker host directory as copy on write/overlay?
Drawback: using mount -t overlay inside docker does not work on newer kernels because overlay-over-overlay mounts are disabled.
I also thought about manipulating the docker files on host directly, i.e. the directories where docker stores the files, but that feels a bit dirty.
To do so, I would declare VOLUME /home/user at the end of the Dockerfile. I would then find the files of that directory in /var/lib/docker/volumes/user/_data. I could then create an overlayfs on my host, using that directory as the lower directory and my other folder as the upper one, and remount the result using docker run --volume. Unfortunately this would require root (su) rights to access the /var/lib/docker directory.
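A sketch of what that would look like (the paths are illustrative, and everything below needs root):
# lower: the data docker placed in the volume; upper: my host folder with the overriding files
mkdir -p /home/me/overlay/work /home/me/overlay/merged
mount -t overlay overlay -o lowerdir=/var/lib/docker/volumes/user/_data,upperdir=/home/me/overrides,workdir=/home/me/overlay/work /home/me/overlay/merged
# then bind the merged view back into a new container
docker run -v /home/me/overlay/merged:/home/user my-image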
The other way around would be to bind-mount single files, but that's maybe a bit hackish too.

Docker for Windows - Export Volume Data

I created two docker containers with Compose on Docker for Windows, using WordPress and MariaDB. I've created a volume for WordPress that points to my PC's normal filesystem, but MariaDB's is still contained within the Hyper-V virtual hard disk.
The mount point is at /var/lib/docker/volumes/1995...ca3/_data
I've tried looking at previous answers, but the link that would explain how to back up, copy, or restore volumes redirects to a general volume explanation. Most plugins or scripts I've seen for Docker typically refer to a *nix environment.
Would anyone know of a modern method to export and import volumes mounted to Linux containers in Docker for Windows?
The way I normally do this is to start a container that mounts two volumes, the source volume and the destination volume, and run a command in that container that copies the contents of one volume to the other. I don't have a copy of Windows at hand to find out how to copy all files recursively, but I'm sure it can do it quite easily.
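A minimal sketch of that approach (the volume names are placeholders, and alpine stands in for any small image that has cp):
# copy everything from one named volume into another
docker run --rm -v mariadb_data:/from -v backup_data:/to alpine sh -c "cp -a /from/. /to/"
If the destination should be a host folder instead, replace backup_data with an absolute host path to make it a bind mount.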

How can I configure the Go SDK and GOPATH from a docker container?

I'm trying to configure a golang project with JetBrains Gogland and docker compose. I want to use GOPATH and go from the docker container. I mean using the go installation from the container for autocomplete etc. without installing golang on the local machine.
the project structure is:
project root
  docker-compose.yml
  back/
    Dockerfile
    main.go
    some other packages
  front/
    all the front files...
After that, I want to deploy my back folder to /go/src/app in the docker container. The problem is that when I develop the project I can't use autocomplete, as this project is not in my local GOPATH and there are different golang versions in the docker container and on my local machine.
I already read this question but I still can't solve my issue.
At the moment this is not possible, nor do I see how it could become possible in the future. Mounting a volume in docker means you "hide" the contents of that folder from the container and use the files on the host instead. As such, any time you mount a directory from your machine, the container's own files at that location won't be available to the machine. This means you can't have Go installed in the container and then mount a folder and use that location for the Go sources. If you are thinking "I'll just mount things in another place and do some symlink magic / copy files around", that's just a bad idea that leads nowhere.
Gogland supports remote debugging as of EAP 10, released a few weeks ago. This allows you to debug applications running in containers or on remote hosts. As such, you can install Go and the source code on your machine but have them running in containers.

How can I use a local file in a container?

I'm trying to create a container to run a program. I'm using a preconfigured image and now I need to run the program. However, it's a machine learning program and I need a dataset from my computer to run it.
The file is too large to be copied to the container. It would be best if the program running in the container searched for the dataset in a local directory on my computer, but I don't know how I can do this.
Is there any way to do this reference with some docker command? Or using Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
Keep in mind, if you are using Docker for Mac or Docker for Windows, only specific directories on the host are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory. So, you can mount files or directories on macOS using [...]
Update July 2019:
I've updated the documentation link and naming to be correct. This type of mount is called a "bind mount". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mounting paths on the host).

Shared folder in Docker. With Windows. Not only "C/user/" path

I'm new to Docker; I come from Vagrant.
I'm using Docker (1.9.1) inside my "D:/Works/something/DockerFirstTime" folder.
Now I create the machine with
docker-machine create first
and simple Dockerfile:
FROM ruby:2.2-onbuild
and simple Gemfile:
source 'https://rubygems.org'
gem 'rails'
Now with this command I want to use a shared folder like in Vagrant in the same hard drive of my Dockerfile:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install
But it doesn't work.
How to do this?
I know that Docker only shares the /c/Users/ folder, is that right?
How can I use the folder with the files and modify my files with an editor in Windows, and then restart the server like in a normal shell on a single PC or like in Vagrant?
This question and this question have a similar root problem: mounting a non-C:/-drive folder in boot2docker. I wrote an in-depth answer to the other question that provides the same information as the first half of @VonC's answer.
From Docker Docs:
All other paths come from your virtual machine's filesystem. [...] In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
To get your folder mounted in a container:
This mounts your entire D:\ drive; you can change the file paths to be more granular and specific.
Share the directory with VBox:
This only needs to be done once.
In windows CMD:
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
Mount the shared directory in your VM:
This will need to be done each time you restart the VM.
In the Boot2Docker VM terminal:
mount -t vboxsf -o uid=1000,gid=50 d-share /d
To see sources and explanation for how this works see my full answer to the other similar question
After this you can use the -v/--volume flag in Docker to mount this folder, or any sub-folders or files, into containers. If you mounted your whole D:\ drive you can use the exact docker run command from your question and it should now work. If you mounted a specific part of your drive you will have to change the paths to match.
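For example, after sharing and mounting the whole drive as above, the command from the question should work as written:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install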
To edit in Windows, run in Docker:
Also from Docker Docs:
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time.
As a VBox shared directory you should be able to see changes made from the Windows side reflected in the boot2docker vm.
You may need to restart containers to see the changes actually appear; this depends on how the program running inside the container, in your case ruby, uses the files. If the files are compiled into an app when the container starts, for example, you will definitely need to restart the container to see the changes.
Note:
Beware the CRLF vs. LF line-ending difference when writing files in Windows and reading them in Linux. Make sure your text editor saves files with Unix line endings, or else you may start to see errors caused by '^M' appended to the end of all your lines.
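If files already have Windows endings, converting them is a one-liner (a sketch; dos2unix may need to be installed, and the sed variant assumes GNU sed):
# convert CRLF to LF in place
dos2unix script.sh
# or, without dos2unix
sed -i 's/\r$//' script.sh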
I know that Docker only shares the /c/Users/ folder, is that right?
It does, and it is able to do so because the VirtualBox VM used to provide a Linux host for docker is sharing C:\Users.
For docker to see another folder, you would need to:
use VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "D:\Works"
then mount /D/Works within a VM session, as mentioned in "share windows folder (other than c/Users/) with docker container (using docker windows client)", and as mentioned in boot2docker:
mount -t vboxsf -o uid=1000,gid=50 sharename /some/mount/location
The issue with that last alternative is described in "Introduction to boot2docker" (scroll down to the "Shared folders" section):
The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large number of small files (big git repos) or anything that is filesystem-read heavy (grunt), performance becomes a factor.
The best solution I have come up with so far is using vagrant with a customized version of boot2docker with NFS support enabled, which takes very little "hacking" to get working, which is nice.
And a good enough selling point for me is the speed increase by using NFS instead of vboxsf; it's pretty staggering actually.
This is the project that I have been using https://vagrantcloud.com/yungsang/boxes/boot2docker.
The magic sauce in the volume sharing is in this line.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
This tells Vagrant to share your current directory into the boot2docker VM at the /vagrant directory, using NFS.
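Typical usage, for context (a sketch assuming Vagrant and that box are installed):
# bring up the NFS-backed boot2docker VM and enter it
vagrant up
vagrant ssh
# inside the VM, /vagrant is the NFS-shared project directory
docker run -v /vagrant:/usr/src/app -w /usr/src/app ruby:2.2 bundle install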
However, that project seems quite old and would need to be adapted in order to include the latest boot2docker.iso (docker 1.9.1).
