Docker: cannot start MSSQL Windows container after copying files into volume

After creating a Docker container as follows:
PS> docker run -d -p 1433:1433 --name sql1 -v sql1data:C:/sqldata -e sa_password=MyPass123 -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
I stopped my container and copied a backup file into my volume:
PS> docker cp .\DataBase.bak sql1:C:\data
After that I can no longer start my container; the error message is as follows:
Error response from daemon: container 5fe22f4ac151d7fc42541b9ad2142206c67b43579ec6814209287dbd786287dc encountered an error during Start: failure in a Windows system call: Le système de calcul s’est fermé de façon inattendue. (0xc0370106) (French: "The compute system shut down unexpectedly.")
Error: failed to start containers: sql1
I can start and stop any other container, the problem occurs only after copying the file into the volume.
I'm using Windows containers.
My Docker version is 18.06.0-ce-win72 (19098).
The only workaround I found is to not copy any files into my container volume.

This seems to be caused by file ownership and permissions. When you make a backup by copying the files and then use those files for a new Docker container, the MySQL daemon in your Docker container finds that the ownership and permissions of its files have changed.
I think the best thing to do is to create a plain MySQL Docker container and see who owns your backup files in that container (I guess it must be uid 1000), then change the owner of your backup files to that user id and create a container with volumes mapped to your backup files.
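A minimal sketch of that check, assuming the official mysql image on a Linux daemon (the tag, backup path, and uid below are illustrative; in recent official images the mysql user is uid 999, so check rather than assume):
docker run --rm --entrypoint id mysql:8.0 mysql
# suppose it prints uid=999(mysql) gid=999(mysql); give your backup files to that uid on the host
sudo chown -R 999:999 ./backup
# then start a container with the backup mapped as the data directory
docker run -d -v "$PWD/backup:/var/lib/mysql" -e MYSQL_ROOT_PASSWORD=secret mysql:8.0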


My changes were lost in new Docker container

Steps to reproduce:
Download and run postgres:9.6.24:
docker run --name my_container --restart=always -d -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=pgmypass postgres:9.6.24
Here is the result:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
879883bfc84a postgres:9.6.24 "docker-entrypoint.s…" 26 seconds ago Up 25 seconds 127.0.0.1:5432->5432/tcp my_container
OK.
Open the file /var/lib/postgresql/data/pg_hba.conf inside the container:
docker exec -it my_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
IPv4 local connections:
host all all 127.0.0.1/32 trust
Replace the file /var/lib/postgresql/data/pg_hba.conf inside the container with my file, copying and overwriting it from the host:
tar --overwrite -c pg_hba.conf | docker exec -i my_container /bin/tar -C /var/lib/postgresql/data/ -x
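If the tar pipe feels opaque, an equivalent using docker cp (same container and path as above) would be:
docker cp pg_hba.conf my_container:/var/lib/postgresql/data/pg_hba.conf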
Make sure the file has been modified. Go inside the container and open the changed file:
docker exec -it my_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
IPv4 local connections:
host all all 0.0.0.0/0 trust
As you can see, the content of the file was changed.
Create new image from container
docker commit my_container
See result:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> ee57ad4bc6b4 3 seconds ago 200MB
postgres 9.6.24 027ccf656dc1 12 months ago 200MB
Now tag my new image
docker tag ee57ad4bc6b4 my_new_image:1.0.0
See result:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
my_new_image 1.0.0 ee57ad4bc6b4 About a minute ago 200MB
postgres 9.6.24 027ccf656dc1 12 months ago 200MB
OK.
Stop and delete the old container:
docker stop my_container
docker rm my_container
See result:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As you can see, no containers exist. OK.
Create a new container from the new image:
docker run --name my_new_container --restart=always -d -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=pg1210 my_new_image:1.0.0
See result:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a965dbbd991 my_new_image:1.0.0 "docker-entrypoint.s…" 7 seconds ago Up 6 seconds 127.0.0.1:5432->5432/tcp my_new_container
Open the file /var/lib/postgresql/data/pg_hba.conf inside the container:
docker exec -it my_new_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
IPv4 local connections:
host all all 127.0.0.1/32 trust
As you can see, my changes to the file are lost; the content of the file is the original, not my changes.
P.S. This problem occurs only with the file pg_hba.conf. E.g. if I create a folder and file /Downloads/myfile.txt in the container, that file is not lost in my container "my_new_container".
Editing files inside a container with docker exec will, in general, cause you to lose work. You mention docker commit, but that's almost never a best practice. (If this was successful, but then you discovered PostgreSQL 9.6.24 had some critical bug and you had to upgrade, could you recreate the exact same image?)
In the case of the postgres image, the files in /var/lib/postgresql/data are always stored in a Docker volume or mount point. In your case you didn't use a docker run -v option, but the image is configured to create an anonymous volume in that directory. The volume is not included in docker commit, which is why you're not seeing it on the rebuilt container. (Also see docker postgres with initial data is not persisted over commits.)
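You can see that anonymous volume on the running container before committing; for example (container name from the question):
docker inspect -f '{{ json .Mounts }}' my_container
# expect output listing a volume whose Destination is /var/lib/postgresql/data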
For editing a configuration file, the easiest thing to do is to store the data on the host system. Create a directory to hold it, and extract the configuration file from the image. (Since the data directory is created by the image's startup script, you need a slightly longer path to get it out.)
mkdir pgdata
docker run -d --name pgtmp postgres:9.6.24
docker cp pgtmp:/var/lib/postgresql/data/pg_hba.conf ./pgdata
docker stop pgtmp
docker rm pgtmp
$EDITOR pgdata/pg_hba.conf
Now when you run the container, provide this data directory as a bind mount. That will inject the configuration file, but also cause the database data to persist over container exits.
docker run -v "$PWD/pgdata:/var/lib/postgresql/data" -u $(id -u) ... postgres:9.6.24
Note that this sequence doesn't use docker exec or "go inside" containers at all, and you haven't created an image without corresponding source. Everything is run with commands from the host. If you do need to reset the database data, in this setup, it's just files, and you can rm -rf pgdata, maybe saving the modified configuration file along the way.
(If I'm reading this configuration change correctly, you're trying to globally disable passwords and instead allow trust authentication for all inbound connections. That's not usually a good idea, especially since username/password authentication is standard in every database library I've encountered. You probably still want the volume to persist data, but I might not make this change to pg_hba.conf.)
A Docker container's filesystem is effectively disposable: if you create a file inside the container, then remove and re-create the container, the file will not be there.
What you want to do is one of two things:
Map your container to a local directory (volume).
Create a Dockerfile based on the postgres image that applies these modifications in a script (a sketch follows after the links below).
docker volume usages
Dockerfile Reference
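For the second option, a sketch assuming the official postgres image's first-run hook directory /docker-entrypoint-initdb.d is acceptable for your case (the file names my_pg_hba.conf and install-hba.sh are hypothetical):
FROM postgres:9.6.24
COPY my_pg_hba.conf /tmp/my_pg_hba.conf
COPY install-hba.sh /docker-entrypoint-initdb.d/install-hba.sh
And install-hba.sh, which the entrypoint runs once when the database is first initialized ($PGDATA is set by the image):
#!/bin/bash
cp /tmp/my_pg_hba.conf "$PGDATA/pg_hba.conf"
Note the hook only runs against a fresh data directory, so the copy happens on first start, not on every boot.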

Docker run -v : Unable to mount a bind volume : "invalid volume specification"

I'm quite new to Docker. I'm running on Windows 10 Enterprise and am trying to containerize an existing app that runs on Windows (so it's a Windows container). I don't know if this matters, but the container is rather large (8 GB).
I need to share a config file (that lives on the host) with the container that the app will use when starting. I was thinking that a bind volume was simplest.
Problem: On running the image I get docker: Error response from daemon: invalid volume specification: '<source path>:<target path>'
The image was built with this command:
docker build -t my_image .
Here is the Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8
WORKDIR /app
COPY . .
ENTRYPOINT .\application.exe ..\Resources
Here is what I've tried
docker run -it -v c:/Users/my_user:/app my_image
I've tried every combination of C:/, C:\, C:\\, /c/, //c/, \c\, \\c\, etc.
I've tried multiple combinations of /app, //app, \app, \\app, C:\app, etc.
I've also tried with and without :rw appended to the end
I've tried the --mount syntax, which consistently outputs: docker: Error response from daemon: invalid mount config for type "bind": invalid mount path: '/app'. (I tried a bunch of variations of /app here too.)
I've tried every possible combination (except the right one). Please help!
Since you are using a Windows container, your file path changes. Try the command below, from the docs Persistent Storage in Windows Containers:
docker run -it -v c:\Users\my_user:c:\app my_image
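If the -v parsing keeps fighting you, the --mount form spells out source and target explicitly and may be easier to debug (a sketch with the same paths as above):
docker run -it --mount "type=bind,source=C:\Users\my_user,target=C:\app" my_image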
If you are using PowerShell to run docker run, you can try this approach. It worked for me in Windows PowerShell (VS Code PowerShell):
docker run -v ${pwd}\src:/app/src -d -p 3000:3000 --name react-app-c2 react-app-image
Here react-app-c2 is the container name and react-app-image is the image name.
-v is for volume, and ${pwd} is the current working directory.
/app/src is the container directory.

How to scp files from local machine directly to a docker container on a remote machine (without having to repeatedly copy)?

I'm new to Docker and I want to copy files to/from my local machine directly to a docker container that's on a remote machine without having to scp files from my local to my remote and then using docker cp to copy those files to the container. My container does not have an SSH server installed on it nor do I want to rebuild my image to include it.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding, so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
According to your comment replying to David's, here is how to bind-mount the directory for your visualization files into your container:
On the host system create a directory, e.g. mkdir /home/sarah/viz/. Then, mount it to your docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place the files in the directory /data/viz – which then lands in /home/sarah/viz/ on the host system, where you can download them to your local computer with scp or rsync or however you can connect to the remote machine.
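Concretely, pulling the results down to your local machine could look like this (host and paths taken from above; adjust for your account):
scp -r remote_account_name@remote_ip:/home/sarah/viz ./viz
# or incrementally, which is nicer for repeated transfers:
rsync -avz remote_account_name@remote_ip:/home/sarah/viz/ ./viz/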
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just do docker-compose up -d and everything acts according to the config in the compose-file.

Docker Volume point to host Directory in Dockerfile

I have the following Dockerfile :
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
This resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|<file relative-to="jboss.server.log.dir" path="${jboss.host.name}-server.log"/>|' /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging of the resulting JBoss to log to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far so good.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
How can I achieve this in the Dockerfiles, and not with the -v parameter when running data/volume?
Thanks a lot in advance
Inside Dockerfiles you can only define volumes that live in /var/lib/docker/volumes. This is because every host can be different from the next.
Docker uses /var/lib/docker as "docker area" where it stores all docker-related data. It's the directory that's guaranteed on every host because it gets created on installation.
If you were to point a volume in the Dockerfile to, say, /home/mbieren/docker_vol, the image would produce multiple errors when executed on a different host, as that directory does not exist and the user probably has insufficient permissions to create it.
Docker gets around that problem by not allowing custom mount paths to be set in the Dockerfile.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile, then launch your container using
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Your syntax to launch the container, docker run -ti, makes the container shell interactive, whereas -d is the normal mode to spin it up as a daemon running in the background.

Are you trying to mount a directory onto a file (or vice-versa)?

I have Docker version 17.06.0-ce. When I try to install NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
It shows:
docker: Error response from daemon: oci runtime error:
container_linux.go:262: starting container process caused
"process_linux.go:339: container init caused \"rootfs_linux.go:57:
mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\"
at
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If do not mount the nginx.conf file, everything is okay. So, how can I mount the configuration file?
This should no longer happen (since v2.2.0.0), see here
If you are using Docker for Windows, this error can happen if you have recently changed your password.
How to fix:
First make sure to delete the broken container's volume
docker rm -v <container_name>
Update: The steps below may work without needing to delete volumes first.
Open Docker Settings
Go to the "Shared Drives" tab
Click on the "Reset Credentials..." link on the bottom of the window
Re-Share the drives you want to use with Docker
You should be prompted to enter your username/password
Click "Apply"
Go to the "Reset" tab
Click "Restart Docker"
Re-create your containers/volumes
Credit goes to BaranOrnarli on GitHub for the solution.
TL;DR: Remove the volumes associated with the container.
Find the container name using docker ps -a then remove that container using:
docker rm -v <container_name>
Problem:
The error you are facing might occur if you previously tried running the docker run command while the file was not present at the location where it should have been in the host directory.
In that case the Docker daemon would have created a directory in its place, which later fails to map to the proper file when the correct files are put in the host directory and the docker command is run again.
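You can reproduce that behavior in isolation on a Linux daemon; assuming no missing.conf exists in the current directory, Docker creates it on the host as a directory and mounts it as one:
docker run --rm -v "$PWD/missing.conf:/etc/app.conf" alpine ls -ld /etc/app.conf
ls -ld missing.conf
# both show a directory, and the stray missing.conf/ now sits on the host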
Solution:
Remove the volumes that are associated with the container. If you are not concerned about other container volumes, you can also use:
# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
That's because Docker recognized $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.
Test with
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
Otherwise, open a Docker issue.
It's working fine for me with the same configuration.
The explanation given by @Ayushya was the reason I hit this somewhat confusing error message, and the necessary housekeeping can be done easily like this:
$ docker container prune
$ docker volume prune
Answer for people using Docker Toolbox
There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.
Description of the problem:
Docker Toolbox bypasses the Hyper-V requirement of Docker by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs to have access to the project files from the host machine, which here it doesn't.
After I installed Docker Toolbox it created the VirtualBox VM and only mounted C:\Users into the machine, as /c/Users. My project was in C:\projects, so nowhere on the mounted volume. When I was sending the path to the VM, it would not exist, as C:\projects isn't mounted. Hence, the error above.
Let's say I had my project containing my ngnix config in C:/projects/project_name/
Fixing it:
Go to VirtualBox, right click on Default (the VM from Docker) > Settings > Shared Folders
Clicking the small icon with the plus on the right side, add a new share. I used Folder Path C:\projects and Folder Name projects.
The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that you can now reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.
To use relative paths, please consider naming the path c/projects not projects
Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
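To confirm the share is actually visible inside the Toolbox VM (assuming the machine is named default), you can check from the host:
docker-machine ssh default "ls /projects"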
In my compose file, I now reference the nginx.conf like this:
volumes:
  - /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf
I had the same problem. I was using Docker Desktop with WSL on Windows 10 1709.
Cause of the problem:
The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:
/c/Users/username/app
BUT, WSL instead uses the format:
/mnt/c/Users/username/app
This is confusing because when I checked the path in the console I saw it, and to me everything looked correct. I wasn't aware of Docker for Windows' expectations about volume paths.
Solution to the problem:
I bound the custom mount points to fix the Docker for Windows and WSL differences:
sudo mount --bind /mnt/c /c
As suggested in this amazing guide: Setting Up Docker for Windows and WSL to Work Flawlessly, and everything is working perfectly now.
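For what it's worth, the guide also makes that remapping permanent via /etc/wsl.conf inside WSL (a sketch; WSL must be restarted afterwards):
[automount]
root = /
With that, drives mount at /c, /d, etc. instead of /mnt/c, matching what Docker for Windows expects.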
Before I started using WSL I was using Git Bash and I had this problem as well.
On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General
Maybe someone will find this useful. My compose file had the following volume mounted:
./file:/dir/file
As ./file did not exist, it was created and mounted into ABC (by default as a folder).
In my case I had a container that resulted from
docker commit ABC cool_image
When I later created ./file and ran docker-compose up, I got the error:
[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.
The solution was:
touch ./file
docker run --name ABC -v "$(pwd)/file:/dir/file" abc_image
# ... desired changes to ABC
docker commit ABC cool_image
I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount files, make sure your files and folders are inside the C drive.
Example: C:\Users\%USERNAME%\Desktop
I'll share my case here as this may save a lot of time for someone else in the future.
I had a perfectly working docker-compose setup on my macOS, until I started using docker-in-docker in GitLab CI. I was only given permission to work as Master in the repository, and the GitLab CI is self-hosted and was set up by someone else; no other info was shared about how it's set up.
The following caused the issue:
volumes:
  - ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf
Only when I noticed that this might be running under Windows (hours of scratching my head) did I try renaming wordpress.conf to default.conf and setting just the directory pathnames:
volumes:
  - ./.docker/nginx/wordpress:/etc/nginx/conf.d
This solved the problem!
I had the same issue: docker-compose was creating a directory instead of a file, then crashing midway.
What I did:
Run the container without any mapping.
Copy the .conf file to the host location:
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
Remove the container (docker-compose down).
Put the mapping back.
Bring the container back up (docker-compose up -d).
Docker Compose will find the .conf file and map it, instead of trying to create a directory.
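As plain commands, the same dance might look like this (containername is a placeholder, as above; comment the mapping out first):
docker-compose up -d
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
docker-compose down
# restore the volume mapping in docker-compose.yml, then:
docker-compose up -d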
unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I had a similar error with nginx in a Mac environment.
Docker didn't recognize the default.conf file correctly. Once I changed the relative path to an absolute path, the error was fixed.
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
In Windows 10, I just got this error without changing anything in my docker-compose.yml file or my Docker configuration in general.
In my case, I was using a VPN with a firewall policy that blocks port 445.
After disconnecting from the VPN the problem disappears.
So I recommend checking your firewall and not using a proxy or VPN when running Docker Desktop.
Check Docker for windows - Firewall rules for shared drives for more details.
I hope this will help someone else.
Could you please use the absolute/complete path instead of $PWD/conf/nginx.conf? Then it will work.
Example: docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b
root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9ead15988a9 nginx "nginx -g 'daemon of…" 7 seconds ago Up 6 seconds 0.0.0.0:90->80/tcp nginx-container5
e2b195a691a4 nginx "/bin/bash" 16 minutes ago Up 16 minutes 0.0.0.0:80->80/tcp test-nginx
I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:
echo $PWD
/mnt/d/nginx
docker run --name nginx -d \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
I resolved it by changing the path for the file on the host system to a UNIX style absolute path:
docker run --name nginx -d \
-v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
or using a Windows-style absolute path with / instead of \ as path separator:
docker run --name nginx -d \
-v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
To strip the /mnt prefix, which seems to cause problems, from the path, I use bash variable expansion:
-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf
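With the $PWD from above, that expansion drops the prefix as intended:
echo ${PWD/mnt\/}
/d/nginx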
Updating Virtual Box to 6.0.10 fixed this issue for Docker Toolbox
https://github.com/docker/toolbox/issues/844
I was experiencing this kind of error:
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root#a5020b4d6cc2:/# exit
exit
After updating VirtualBox, all commands worked just fine 🎉
Had the same head-scratcher: I did not have the file locally, so Docker created it as a folder.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile logs.txt/
For me, this did not work:
volumes:
  - ./:/var/www/html
  - ./nginx.conf:/etc/nginx/conf.d/site.conf
But this works fine (obviously I moved my config file into a new directory too):
volumes:
  - ./:/var/www/html
  - ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf
I had this problem under Windows 7 because my Dockerfile was on a different drive.
Here's what I did to fix the problem:
Open VirtualBox Manager
Select the "default" container and edit the settings.
Select Shared Folders and click the icon to add a new shared folder
Folder Path: x:\
Folder Name: /x
Check Auto-mount and Make Permanent
Restart the virtual machine
At this point, docker-compose up should work.
I got the same error on Windows 10 after a Docker update: 2.3.0.2 (45183).
... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I was using absolute paths like this //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf with a single / for the root.
Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.
Adding the root path as a file sharing resource will now permit Docker to access the resource to mount it to the container. Note that you may need to erase the contents on your Docker container to attempt to re-mount the volume.
For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.
In my case it was a problem with Docker for Windows and a partition encrypted by BitLocker. If you have project files on an encrypted partition, then after a restart and unlocking the drive, Docker doesn't see the project files properly.
All you need to do is restart Docker.
CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:
COPY --chown=node:node dist/app.js /usr/app/app.js
and then during development I mount that file via docker-compose:
volumes:
  - ./dist/app.js:/usr/app/app.js
I would intermittently get the "Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type." error, or some version of it.
The problem was that CleanWebpackPlugin was deleting the file before webpack re-built it. If Docker tried to mount the file while it was deleted, Docker would fail. It was intermittent.
Either remove CleanWebpackPlugin completely or configure its options to play nicer.
I had this happen when the json file on the host had the executable permission set. I don't know the reason behind this.
For me, it was enough to just do this:
docker compose down
docker compose up -d
I have solved the mount problem. I am using a Windows 7 environment, and the same problem happened to me:
Are you trying to mount a directory onto a file?
There is a default sync directory at C:\Users\, so I moved my project to C:\Users\ and then recreated the project. Now it works.
