Unable to use the volume flag with a windows container - docker-volume

I am trying to map a host folder into the container in the same way that is easily accomplished on Linux/macOS via -v "$(pwd)":/code. I can't come up with a simple example that works with Windows Containers.
docker build -t="webdav" .
docker run --rm -it -v C:\junk:C:\code --name webdav webdav powershell
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container f0fa313478fddb73e34d47699de0fc3c2a3bdb202ddcfc2a124c5c8b7523ec09 encountered an error during Start: failure in a Windows system call: The connection with the Virtual Machine hosting the container was closed. (0xc037010a).
I have tried countless other variations, and the accepted answer here gives the same error.
The docs seem to refer only to Docker Toolbox. The example there only gives me "invalid bind mount spec".
My Dockerfile
FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
RUN powershell -Command mkdir /code
WORKDIR /code
ADD * /code/
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.14393 N/A Build 14393
Version 17.03.1-ce-win5 (10743)
Channel: stable
b18e2a5
Disclaimer: I originally posted this on the docker forums but haven't had any responses.

EDIT:
Found it. https://docs.docker.com/engine/reference/builder/#volume
"When using Windows-based containers, the destination of a volume inside the container must be one of: a non-existing or empty directory; or a drive other than C:"
Or here: https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only
"The following examples will fail when using Windows-based containers, as the destination of a volume or bind-mount inside the container must be one of: a non-existing or empty directory; or a drive other than C:. Further, the source of a bind mount must be a local directory, not a file."
It strikes me that these are non-obvious places to document this difference. Where did you look for documentation of this issue? I'll add this there :)
Is there a general need for a summary of differences between Linux and Windows?
OLD ANSWER (for context)
Here's a step-by-step guide on mounting volumes with the GUI:
https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c
From reading through some other forum posts it sounds like special characters in passwords may trip things up.
If you are still having issues here is one thread you could read through:
https://github.com/docker/docker/issues/23992
Hope this helps!

I'm not sure if/where the moby repo docs publish to the Docker docs, but this issue indicates that a volume cannot reference an existing, non-empty folder in the container. In my example, I was first creating c:\code. If I change the command:
docker run --rm -it -v C:\junk:C:\code2 --name webdav webdav powershell
... it will create and mount c:\code2 in the container to point to c:\junk on the host.
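For what it's worth, here is a minimal sketch of how the Dockerfile could be adjusted so the mount target is never pre-created; the C:\app staging path is my own choice, not from the original:
FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
# stage the application files somewhere other than the intended mount target
WORKDIR /app
ADD * /app/
# C:\code is never created in the image, so -v C:\junk:C:\code can mount it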

Related

docker run not syncing local folder in windows

I want to sync a local folder with a folder in a docker container. I am using a Windows system with the WSL 2 backend. I tried running the following command as per the instructions of a docker course instructor, but it didn't seem to sync:
docker run -v ${pwd}:\app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image
I faced a similar issue when I started syncing local folders with a docker container on my Windows system. The solution was actually quite simple: instead of -v ${pwd}:\app:ro, your first volume should be -v ${pwd}:/app:ro. Notice the / instead of \. Since your docker container is a Linux container, the container-side path must use /.
As @Sysix pointed out, Docker will always overwrite the folder in the container with the one on the host (whether it already existed or not). Only files created on the host, or created in the container at runtime, will end up in that folder/volume.
Learn more about bind mounts and volumes here.
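For completeness, a sketch of the full corrected command, changing only the separator (everything else is taken from the question):
docker run -v ${pwd}:/app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image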

file mounts as directory instead of file in docker-in-docker (dind)

When the command docker run --rm -v $(pwd)/api_tests.conf:/usr/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation is run, the api_tests.conf file is mounted in the container as a directory instead of a file.
I went through Single file volume mounted as directory in Docker and a few other similar questions on Stack Overflow, but was unable to find the right solution.
I have tested the same code on my local Mac laptop, and there the file from the local machine mounts into the container as a file, but locally I don't have a docker-in-docker setup.
I have Dockerfile as below.
FROM alpine:latest
MAINTAINER Basavaraj
RUN apk add --no-cache python3 \
&& pip3 install --upgrade pip
WORKDIR /api-automation
COPY . /api-automation
RUN pip --no-cache-dir install .
ENTRYPOINT "some command"
and I have the build.sh file as below,
#!/bin/bash
docker pull local.artifactory.swg-devops.com/api-automation
# creating file with name "api_tests.conf" by adding configuration data
echo "configuration data" > api_tests.conf
# it displays all the configuration data written to api_tests.conf
cat $(pwd)/api_tests.conf
docker run --rm -v $(pwd)/api_tests.conf:/usr/.aiops/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation
Now we are calling the build.sh file from a GoCD environment.
It looks like the docker run command is executed in docker-in-docker (dind), and as a result the client spawns the docker container on a different host, and the file (api_tests.conf) being created does not exist on that host.
Because of this, the file (api_tests.conf) is mounted as an empty directory in the container.
What are the different solutions for mounting the file in a docker-in-docker environment?
Can we share the file (api_tests.conf) we created with the host where the docker container is spawned?
I think the problem you're having is most likely because of using dind, although it's worth pointing out that this issue would also occur if you had mounted the docker socket into another container.
This is because when you ask for a directory to be mounted, your docker client (CLI) doesn't actually mount the file/directory itself; it just passes a request to the docker daemon to mount that location from the daemon's local file system. And this is where the problem lies: if you're using dind or sharing the docker socket, that file system usually isn't the one you think it is, and hence the file/directory doesn't exist from the daemon's point of view.
So in your case $(pwd) is probably being expanded to some well-known/existing directory path, and the docker daemon then creates and mounts that path as a directory, since the file doesn't exist on its file system. That's my guess at least, as I've seen similar behaviour before when using dind/docker-socket sharing in other setups.
One crazy solution to this would be to bind mount the files you want into the dind container at startup, and then you could try subsequently bind mounting those files from within the dind container into any subsequent containers. However bear in mind this is precisely the kind of file system usage that's warned against in the dind documentation because of instability and potential data loss, so be warned.
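A rough sketch of that idea, assuming you control how the dind daemon container is started and using /work as an arbitrary staging path (both are assumptions, not from the original setup):
# 1. start the dind daemon with the file bind-mounted in from the real host
docker run -d --privileged --name dind -v "$(pwd)/api_tests.conf:/work/api_tests.conf" docker:dind
# 2. from a client pointed at that daemon, mount the path as the daemon sees it
docker run --rm -v /work/api_tests.conf:/usr/.aiops/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation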
Hope this helps.

Docker run -v to link volume from windows host to linux container

My scenario is a virtual machine running Windows, where I am using Docker with Linux containers.
My goal is to launch a SQL Server container, and I am having a hard time getting persistent data.
My question is how to use the run -v option to link a host folder with a container, with an explanation if possible.
I have read here, but I'm sorry, I don't fully understand. The Docker documentation did not clarify it either.
My attempts failed:
docker create volume sql-data
docker run -v sql-data:C:/temp/
Error response from daemon: invalid mode: /temp/
From what I've read this is a known error, but I can't find a solution or any updated information about it.
Thanks in advance.
Your problem is directly addressed in the docker docs, see here (I recommend you read the Mount volume section in full, it is fairly short). Actually, the docs specifically point out that your syntax will NOT work. To get your command to work, your destination path should be one of the following (from the same link):
a non-existing or empty directory; or a drive other than C:. Further,
the source of a bind mount must be a local directory, not a file.
Furthermore, the docs specify:
On Windows, the paths must be specified using Windows-style semantics.
Apply the above statement to your command and it should work. I'm not a Windows guy, but I would try:
docker run -v sql-data:c:\emptyDir
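Since the question mentions Linux containers, it may also help to show the Linux-container form of the same idea. Here is a sketch using the official SQL Server image, where /var/opt/mssql is the image's data directory; the password and tag are placeholders:
docker volume create sql-data
docker run -d -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" -v sql-data:/var/opt/mssql --name sql1 mcr.microsoft.com/mssql/server:2019-latest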

Share windows directory to Linux docker container

I've been trying all day to accomplish a simple example of sharing a Windows directory with a Linux container running on a Windows Docker host.
I have read all the guidelines and run the following:
docker run -it --rm -p 5002:80 --name mount-test --mount type=bind,source=D:\DockerArea\PortScanner,target=/app/PortScannerWorkingDirectory barebonewebapi:latest
The source PortScanner directory on the host machine has a text file in it. The container is created successfully.
The issue is that when I'm trying to
docker exec -it mount-test /bin/bash
and then list the mounted directory PortScannerWorkingDirectory in the container, it just shows as empty. Nor can the C# code read the contents of the host file in the mapped directory.
Am I missing something simple here? I feel stuck and can't share files from the host Windows machine with the Linux container.
After several days of dealing with the issue I've found a fairly simple answer. Although I already had the C and D drives shared with Docker in the Docker settings, I experimented and re-shared both drives (there's a special Reset Credentials button for that purpose in the Docker settings for Windows). After that the issue was resolved. I'm saving it here in the hope that it may help someone else, since this seems to be a glitch with permissions or similar.
The issue is quite hard to diagnose: when it happens, the Docker container just silently writes into its writable layer and no error pops up.
Go to Docker settings -> Shared Drives -> Reset credentials,
then check the drive and click the Apply button.
Then execute the following command, as suggested by Docker:
docker run --rm -v c:/Users:/data alpine ls /data

Docker tries to mkdir the folder that I mount

Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run:
docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash
This command exits with the following error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.
I'm using Docker Toolbox on Windows 10 Home.
For anyone running mac/osx and encountering this, I restarted docker desktop in order to resolve this issue.
Edit: It would appear this also fixes the issue on Windows 10
My trouble was a fuse-mounted volume (e.g. sshfs, etc.) that got mounted again into the container. It didn't help that the fuse mount had the same ownership as the user inside the container.
I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.
Eventually it helped to mount the fuse volume with the allow_other option. Be aware that this opens access to any user. Better might be allow_root – not tested, as blocked for other reasons.
I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.
Make sure the folder is being shared with the Docker embedded VM. This differs between the various types of Docker desktop installs. With Toolbox, I believe you can find the shared folders in the VirtualBox configuration. You should also note that these directories are case-sensitive. One way to debug is to try:
docker run --rm -it -v "/:/host" ubuntu /bin/bash
And see what the filesystem looks like under "/host".
I have encountered this problem on Docker (Windows) after upgrading to 2.2.0.0 (42247). The issue was with the casing of the folder name that I provided in my arguments to the docker command.
Did you use this container before? You could try to remove all the docker-volumes before re-executing your command.
docker volume rm $(docker volume ls -qf dangling=true)
I tried your command locally (MacOS) without any error.
I met this problem too.
I used to run the following command to share the folder with the container
docker run ... -v c:/seleniumplus:/dev/seleniumplus ...
But it doesn't work anymore.
I am using Windows 10 as the host.
My docker has recently been upgraded to "19.03.5 build 633a0e".
I did change my Windows password recently.
I followed the instructions to re-share the "C" drive, restarted Docker and even restarted the computer, but it didn't work :-(.
All of a sudden, I noticed that the folder is "C:\SeleniumPlus" in File Explorer, so I ran
docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...
And it did work. So the Windows shared folder path is case-sensitive in the latest Docker ("19.03.5 build 633a0e").
I am working in Linux (WSL2 under Windows, to be more precise) and my problem was that there existed a symlink for that folder on my host:
# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists.
# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23 2019 /etc/localtime -> ../usr/share/zoneinfo/UTC
It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
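In other words, a sketch of the variant that worked (the rest of the command is elided just as in the original):
docker run --rm -it -v /usr/share/zoneinfo/UTC:/etc/localtime ...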
I had this issue when I was working with Docker in a CryFS-encrypted directory on Ubuntu 20.04 LTS. The same probably happens on other UNIX-like OSes.
The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: make sure the user_allow_other setting in it is uncommented. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).
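A minimal sketch of the two steps (the directory names are placeholders as above):
# /etc/fuse.conf: make sure this line is present and uncommented
user_allow_other
# then mount the encrypted directory so root (and hence Docker) can see it
cryfs <secretdir> <opendir> -o allow_root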
Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.
I had the exact same error. In my case, I had used c instead of C when changing into my directory.
I solved this by restarting docker and rebuilding the images.
I put user_allow_other in /etc/fuse.conf. Then mounting as in the example below solved the problem.
$ sshfs -o allow_other user@remote_server:/directory/
I had this issue in WSL, likely caused by leaving some containers alive too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely to start fresh.
docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up
Then, I restarted WSL (wsl --shutdown), restarted docker desktop, and tried my command again.
If you work with a separate Windows user with which you share the volume (usually C:), you need to make sure it has access to the folders you are working with, including their parents, up to your home directory.
Also make sure that EFS (Encrypting File System) is disabled for the shared folders.
See also my answer here.
I had the same issue when developing using Docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths, and tried to make directories instead.
Pruning docker volumes / images / containers did not solve the issue. A simple restart of docker-desktop did the job.
This error crept up for me because my docker-compose file was looking for the APPDATA path on my machine on macOS. macOS doesn't have an APPDATA environment variable, so I just created a .env file with the contents:
APPDATA=~/Library/
And my problem was solved.
I faced this error when another running container was already using the folder being mounted in the docker run command. Check for this and, if the other container is not needed, stop it. The best solution is to use a volume, created with the following command:
docker volume create
Then mount this volume into multiple containers if it needs to be shared between them.
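A small sketch of that pattern (the volume and image names are placeholders):
docker volume create shared-data
docker run -d --name writer -v shared-data:/data some-image
docker run -d --name reader -v shared-data:/data another-image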
For anyone having this issue on a Linux-based OS, try remounting the remote folders used by the Docker image. This helped me on Ubuntu:
sudo mount -a
I am running Docker Desktop (Docker engine v20.10.5) on Windows 10 and faced a similar error. I removed the existing image from the Docker Desktop UI, deleted the folder in question (for me deleting the folder was an option because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.
In my case my volume path (in a .env file for docker-compose) had a space in it
/Volumes/some\ thing/folder
which did work on Docker 3 but didn't after updating to Docker 4. So I had to set my env variable to:
"/Volumes/some thing/folder"
I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default even root can't see a directory mounted by gocryptfs; only the user who executed the gocryptfs command can. To fix this, add user_allow_other to /etc/fuse.conf and use the -allow_other flag, e.g. gocryptfs -allow_other encrypted mnt
In my specific instance, Windows couldn't tell me who owned my SSL certs (probably Docker). I took ownership of the SSL certs again under Properties, added read permission for docker-users and my user, and that seemed to fix the problem. After tearing my hair out for 3 days with just the "Daemon: Access Denied" error, I finally got a meaningful error related to another answer above, "mkdir failed" or whatever, on a mounted file (the SSL cert).
