File is empty with shared volume - docker

I have a small problem with Docker.
I'm trying to share a volume with my computer. I can see the files from my container, but they appear empty.
I tried creating a file in /root of my container (outside the shared volume) and I can read it without any problem.
If I create a file with echo test > test.txt in my shared volume, its content appears empty from the container.
I run this command:
docker run -v "D:\My App:/home/app" -it MyImage /bin/bash
In the /home/app folder, I can see the files from my computer. But if I do:
cat /home/app/test.txt
it displays nothing, even though the file exists and contains text.
If I create a file from my container in the shared volume, I find it on my computer (and it is not empty).
If I create a file from my computer, I find it in the container, but it is empty when I try to display it.
Currently, when I do cat test.txt, it displays nothing.
It should display this is a test.

First, check your Docker for Windows settings:
If your D:\ drive is not shared, you won't see much in your container.
docker/for-win issue 25 points out multiple possible issues:
If you are using Docker Toolbox:
In my case, Docker Toolbox created a VM named default in VirtualBox, and I added the shared folder in the VM: VirtualBox -> default (VM) -> Settings -> Shared Folders -> Add.
Then you can specify the path on your machine and the mapped path in the VM:
The first field is the path on your machine, like D:\my\app
The second field is the path in the VM, like /my-vm/app
Choose "Mount Automatically".
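The CLI equivalent of those GUI steps would be something like the following sketch (VM name default and the paths are the ones from this example; adjust to yours, and note you may need the VM powered off for a permanent share):
VBoxManage sharedfolder add "default" --name "my-app" --hostpath "D:\my\app" --automount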
Another:
One of the issues I had when learning was trying to mount a volume in my container while already having a folder that conflicted with it.
For example, I'd make my working directory /foo/bar, then try to use a volume for /foo/bar/private as well, BUT already have a folder called private in my initial mount.
I would see no error, but I'd see the first folder and not my second volume.
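A hypothetical sketch of one way this can happen, with a named volume mounted over a path that already exists in the image:
docker run -v myvol:/foo/bar/private -w /foo/bar my-image ls /foo/bar/private
On first use, Docker pre-populates the empty named volume myvol with the image's existing private/ contents, so you silently see the image's files rather than the ones you expected, with no error reported.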
Or:
docker/for-win issue 2151: "Volumes mounted from a Linux WSL instance don't resolve in container".
It refers to "how to use Docker with WSL".
The last thing we need to do is set things up so that volume mounts work. This tripped me up for a while because check this out…
When using WSL, Docker for Windows expects you to supply your volume paths in a format that matches this: /c/Users/nick/dev/myapp.
But, WSL doesn’t work like that. Instead, it uses the /mnt/c/Users/nick/dev/myapp format.
Honestly, I think Docker should change their path to use /mnt/c because it's clearer about what's going on, but that's a discussion for another time.
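As a workaround sketch from the WSL side, you can strip the leading /mnt before handing the path to Docker (this assumes your Windows drives are mounted under /mnt):
docker run -it -v "$(pwd | sed 's|^/mnt||'):/app" -w /app ubuntu bash
So /mnt/c/Users/nick/dev/myapp becomes /c/Users/nick/dev/myapp, the format Docker for Windows expects.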

Related

Overlay a folder in Docker with one from the host

My situation is the following:
I have a Docker image/container in which I am compiling. I had to install some components to $HOME via the Dockerfile (that is, while building the image).
Let's say one of those components is in ~/.config, but there are other folders too.
I would like to be able to override the files in .config by mounting a home folder from the host on top of the one inside Docker. Whenever you place a file in the mounted folder, it overrides the one that is already inside the container.
So in theory, this is exactly what OverlayFS does, right? The lower directory would be the one inside the Docker container, and the upper directory would be the one on my host.
Is there a way to accomplish that?
So far I have found the following related topics:
https://serverfault.com/questions/841238/how-to-use-overlayfs-with-docker-volumes
Drawback: the answer only shows how to use OverlayFS on the host; getting access to the lower container/image directory is not self-explanatory, and it also feels dirty.
Can I mount docker host directory as copy on write/overlay?
Drawback: using mount -t overlay inside Docker does not work on newer kernels because the overlay-over-overlay option is disabled.
I also thought about manipulating Docker's files on the host directly, i.e. the directories where Docker stores the files, but that feels a bit dirty too.
To do so, I would declare VOLUME /home/user at the end of the Dockerfile. I would then find the files of that directory in /var/lib/docker/volumes/user/_data, create an OverlayFS on my host using that directory as the lower directory and my other folder as the upper one, and remount the merged directory using docker run --volume. Unfortunately, this requires root rights to access the /var/lib directory.
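A sketch of that workflow, assuming the named volume is called user and the host folder to overlay is /home/me/upper (needs root; the work directory must be an empty directory on the same filesystem as the upper directory):
mkdir -p /home/me/upper /home/me/work /home/me/merged
sudo mount -t overlay overlay \
    -o lowerdir=/var/lib/docker/volumes/user/_data,upperdir=/home/me/upper,workdir=/home/me/work \
    /home/me/merged
docker run -v /home/me/merged:/home/user my-image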
The other way around would be to bind-mount individual files, but that's maybe a bit hackish too.
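For completeness, bind-mounting a single file would look like this (hypothetical paths); the host file then shadows the one baked into the image:
docker run -v /home/me/.config/app.conf:/home/user/.config/app.conf my-image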

Can I access files created in a Docker container from my local machine (e.g. the C drive or Desktop folder)?

I have Windows 10 and an SSD (e.g. a Samsung SSD, 256 GB).
If I create a Docker Ubuntu container, go somewhere inside it (e.g. /home/myname), and create a test.txt containing "hello world", it will be at /home/myname/test.txt.
Since test.txt has its own size (say 8 KB), I think it must take up room on the Samsung SSD.
I can access test.txt using docker attach, and I also know how to mount with the -v option so that I can change or update that file (I know it is just duplicated from the container).
But I want to see or access test.txt from my Windows 10 C drive or desktop, or find it with the Windows search function, to see how it exists on my Samsung SSD.
Sorry for my limited English and basic computing knowledge.
The following comes from https://docs.docker.com/storage/, but it is not enough for me:
By default all files created inside a container are stored on a writable container layer. This means that:
The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount. If you’re running Docker on Windows you can also use a named pipe.
Keep reading for more information about these two ways of persisting data.
Try the suggestions here:
https://stackoverflow.com/a/27320731/13064727
I think this is a two-step process; maybe you are missing the first step.
It seems you don't understand how -v works:
$ docker run -ti --rm -v "<your_windows_path>:/apps" -w /apps ubuntu bash
root@b2fd40f5f423:/apps# echo "helloworld" > test.txt
-w /apps (WORKDIR) makes sure that the file you create in the container ends up in the path that is mapped to your Windows path.
From your Windows system, you should then be able to find this file on your local disk or SSD under <your_windows_path>.
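For example, from a Windows command prompt, something like:
type <your_windows_path>\test.txt
should print helloworld once the container has written the file.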

How to overwrite a file inside a Docker container at runtime

I want to overwrite a file inside a Docker container with a file of the same name on my host operating system.
docker run -v does not overwrite this file, even though both files have the same name.
Any idea how I can achieve this?
I think you mean to keep the file synced between your host and your Docker instance. You should use the docker run -v option as you wrote in the question, but instead of giving the file name, give the folder name.
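A sketch with hypothetical paths, mounting the parent folder rather than the file itself:
docker run -v /host/config:/app/config my-image
Any file you place in /host/config, including one with the same name as a file baked into the image's /app/config, is then what the container sees.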
Further read on the topic: https://github.com/moby/moby/issues/15793
You can use docker cp to replace the file inside the container with the one from the host. That way you don't have to restart the container, as you would when using volume mounts. See the docker cp reference.
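A minimal sketch (container name and paths are hypothetical):
docker cp ./app.conf my-container:/app/app.conf
Note that whether the running program picks up the replaced file depends on whether it re-reads it after startup.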

How to keep dot files in a Docker container?

I have installed some software in a Docker image. When I run the software, it creates some settings files (dot files) under the root home folder. The problem is that those files are wiped when I quit the container.
Is there a way to keep those dot files after I quit the container? I know I can manually commit the container to an image, but that is not an elegant solution: it means that every time I use the container, I need to save it to an image.
Any better solutions?
Thanks!
A simple solution would be to use a volume.
docker volume create configuration
And then you just run each container with it.
docker run -d -v configuration:container_configuration_dir your_image_name
The left side of the : is the name of the volume created with the first command, and the right side is the directory inside the container where your dot files are created.
Keep in mind how mounts work, and for more details check the Docker docs on volumes.
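For dot files under the root home folder, that could look like this (the image name is hypothetical):
docker volume create configuration
docker run -it -v configuration:/root my-image
On first use, Docker copies the image's existing /root contents into the empty volume, and any dot files the software creates there afterwards survive across containers.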

Shared folder in Docker on Windows, not only the "C:/Users" path

I'm new to Docker; I come from Vagrant.
I'm using Docker (1.9.1) inside my "D:/Works/something/DockerFirstTime" folder.
Now I create the machine with
docker-machine create first
and a simple Dockerfile:
FROM ruby:2.2-onbuild
and a simple Gemfile:
source 'https://rubygems.org'
gem 'rails'
Now, with this command, I want to use a shared folder like in Vagrant, on the same hard drive as my Dockerfile:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install
But it doesn't work.
How can I do this?
I know that Docker only shares the /c/Users/ folder, is that right?
How can I use the folder with the files, modify my files with an editor in Windows, and then restart the server, like in a normal shell on a single PC or like in Vagrant?
This question and this question have a similar root problem: mounting a folder from a non-C:/ drive in boot2docker. I wrote an in-depth answer to the other question that provides the same information as the first half of @VonC's answer.
From the Docker docs:
All other paths come from your virtual machine's filesystem. [...] In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
To get your folder mounted in a container:
The steps below mount your entire D:\ drive; you can simply change the file paths to be more granular and specific.
Share the directory with VBox:
This only needs to be done once.
In a Windows CMD prompt:
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
Mount the shared directory in your VM:
This will need to be done each time you restart the VM.
In the Boot2Docker VM terminal:
mount -t vboxsf -o uid=1000,gid=50 d-share /d
To see sources and an explanation of how this works, see my full answer to the other, similar question.
After this you can use the -v/--volume flag in Docker to mount this folder, or any of its sub-folders or files, into containers. If you mounted your whole D:\ drive, you can use the exact docker run command from your question and it should now work. If you mounted a specific part of your drive, you will have to change the paths to match.
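For example, with the whole D:\ drive mounted at /d inside the VM as above, the command from the question becomes:
docker run -it -v /d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install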
To edit in Windows and run in Docker:
Also from the Docker docs:
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time.
As a VBox shared directory, you should be able to see changes made on the Windows side reflected in the boot2docker VM.
You may need to restart containers to see the changes actually appear; this depends on how the program running inside the container (in your case, Ruby) uses the files. If the files are compiled into an app when the container starts, for example, you will definitely need to restart the container to see the changes.
Note:
Beware the CR LF vs. LF line ending difference when writing files in Windows and reading them in Linux. Make sure your text editor is saving files with Unix line endings or else you may start to see errors caused by '^M' appended to the end of all your lines.
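A quick way to check and fix this from the Linux side (a sketch; sed rewrites the file in place):
file test.txt
sed -i 's/\r$//' test.txt
file reports "with CRLF line terminators" for affected files, and the sed command strips the carriage returns.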
I know that Docker only shares the /c/Users/ folder, is that right?
It does, and it is able to do so because the VirtualBox VM used to provide a Linux host for Docker shares C:\Users.
For docker to see another folder, you would need to:
use VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "D:\Works"
then mount /D/Works within a VM session, as mentioned in "share windows folder (other than c/Users/) with docker container (using docker windows client)" and in boot2docker:
mount -t vboxsf -o uid=1000,gid=50 sharename /some/mount/location
The issue with that last alternative is described in "Introduction to boot2docker" (scroll down to the "Shared folders" section):
The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large number of small files (big git repos) or anything that is filesystem-read heavy (grunt), performance becomes a factor.
The best solution I have come up with so far is using Vagrant with a customized version of boot2docker with NFS support enabled, which takes very little "hacking" to get working, which is nice.
And a good enough selling point for me is the speed increase from using NFS instead of vboxsf; it's pretty staggering, actually.
This is the project that I have been using https://vagrantcloud.com/yungsang/boxes/boot2docker.
The magic sauce in the volume sharing is in this line.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
This tells Vagrant to share your current directory into the boot2docker VM at /vagrant, using NFS.
However, that project seems quite old and would need to be adapted in order to include the latest boot2docker.iso (docker 1.9.1).