I am getting the below error while building an image
Step 1/10 : FROM ubuntu:14.04
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I think the issue is that you are behind a proxy, in which case you need to write a manual configuration in a Docker systemd drop-in file. That will override the default docker.service file.
If you are using Docker for Windows, then simply set the default DNS to 8.8.8.8 on the "vEthernet (DockerNAT)" network adapter. But remember, this is not best practice, as you will be sending DNS queries outside your office network.
In a Linux environment, you can add the environment variable HTTP_PROXY or HTTPS_PROXY, depending on whether your proxy uses port 80 or 443 respectively, as shown below in /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
and for HTTPS_PROXY in /etc/systemd/system/docker.service.d/https-proxy.conf:
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
Then just restart Docker after a daemon reload:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
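To verify that the daemon actually picked up the proxy variables, you can inspect its environment; the output should list the values you configured:
$ sudo systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/ HTTPS_PROXY=https://proxy.example.com:443/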
Hope this works.
Reference: https://docs.docker.com/engine/admin/systemd/#httphttps-proxy
I had the same problem and the following fix has worked for me:
https://github.com/moby/moby/issues/22635#issuecomment-260063252
In my case I added the following two nameserver lines to the /etc/resolv.conf file.
before:
nameserver 127.0.0.53
after:
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 127.0.0.53
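To confirm the fix, one quick check is to resolve the registry from inside a container (the busybox image choice here is mine; any small image with nslookup works):
docker run --rm busybox nslookup registry-1.docker.io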
I was facing the same issue when trying to build or pull an image with Docker on Win10. Changing the DNS of the Docker vEthernet (DockerNAT) network adapter to 8.8.8.8 fixed it for me, as described in this GitHub issue.
To change the DNS, go to Docker (tray icon) -> Settings -> Resources -> Network and set a fixed DNS server IP = 8.8.8.8.
Changing the DNS server in the configuration of the Windows network adapter worked too.
After restarting, Docker is able to pull and build images again.
Version Info:
Windows 10 x64 Enterprise Version 1709
$ docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:05:22 2017
OS/Arch: windows/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
On Mac OS X, I fixed this issue by not using the experimental virtualization framework.
Preferences -> Experimental Features
I got the same error and it was resolved by
docker logout registry-1.docker.io
I had the same issue and only found out after 30 minutes that I was on a VPN network for work which blocks other sites. I went off the VPN, and it worked :) This is definitely a network issue. When it said "not authenticated", I thought perhaps I needed some login credentials or so.
I faced this problem when performing an Ansible AWX installation.
I had my own private DNS servers, 192.168.0.254 and 192.168.0.253, but was receiving the same error.
The issue was resolved after changing my DNS back to 8.8.8.8 and 8.8.4.4.
This error occurs on Big Sur 11.3.1, Intel when you check the box for "Use new virtualization framework" under the Experimental Features tab. Unchecking the box and restarting Docker fixed this problem for me.
This may be an old one, but a fix is available here:
https://success.docker.com/article/i-get-x509-certificate-signed-by-unknown-authority-error-when-i-try-to-login-to-my-dtr-with-default-certificates
Run the following commands on each server:
export DOMAIN_NAME=bootstrap.node1.local
export TCP_PORT=5000
# fetch the registry's certificate and add it to the system trust store
openssl s_client -connect $DOMAIN_NAME:$TCP_PORT -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM | tee /etc/pki/ca-trust/source/anchors/$DOMAIN_NAME.crt
# rebuild the CA trust database, then restart Docker so it picks up the change
update-ca-trust
/bin/systemctl restart docker.service
I had the same issue with a registry deployed in Swarm. Restarting Docker helped, but after some time the issue occurred again.
Redeploying the registry with docker-compose fixed it:
sudo docker-compose up -d
and everything works fine.
I also had problems with image pulls timing out, with both:
docker pull hello-world
kubeadm config images pull
Perhaps this problem started for me when upgrading the VM from Ubuntu 18 to 20, but there were also many Kubernetes-related config changes I made, so I am not sure.
Anyway, this solution resolved it for me:
https://stackoverflow.com/a/51648635/11416610
Thanks @nils!
In case the above link breaks, here is a quote:
I had the same issue yesterday. Since I am behind a company proxy, I had to define the http-proxy for the docker daemon in:
/etc/systemd/system/docker.service.d/http-proxy.conf
The problem was that I misconfigured the https_proxy, as described here. I used https:// in the https_proxy environment variable, which caused this error.
This configuration works for me:
cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment=http_proxy=http://IP:PORT/
Environment=no_proxy=localhost,127.0.0.1
Environment=https_proxy=http://IP:PORT/
Remember that you have to restart the docker daemon after changing
this configuration. You can achieve this by using:
systemctl daemon-reload
systemctl restart docker
I was getting the same error. I am using an Ubuntu 20.04 system.
Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I added the missing nameserver lines inside /etc/resolv.conf (opened with sudo nano /etc/resolv.conf):
nameserver 8.8.8.8
nameserver 8.8.4.4
This is how it looks now.
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 127.0.0.53
options edns0 trust-ad
I faced this issue on Ubuntu when trying to build Elasticsearch, and I got this error:
ERROR: Get https://docker.elastic.co/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
It was a network connection problem: I was using a VPN, and for some reason the lookup seemed to search for the registry domain inside the local network. I disconnected my VPN connection and everything worked fine.
Windows 10, home PC: none of the solutions worked for me. What worked was to uninstall Docker, restart the PC, and use "Run as administrator" while installing the exe. Worked!!
In my case, my company needed to add my IP to the whitelist in order to access the cloud.docker files. So do not hesitate to ask the responsible person if you get such an error.
My issue was with Windows WSL: not only do you have to set the static DNS servers as mentioned above, in both the Docker Desktop client and your containers, but you also need to add
[network]
generateResolvConf = false
to /etc/wsl.conf in your Linux distributions. You will need to reboot your distribution as outlined in https://superuser.com/questions/1126721/rebooting-ubuntu-on-windows-without-rebooting-windows, or you can reboot your PC.
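For reference, a minimal sketch of the remaining steps under that setup (the nameserver value is illustrative; wsl --shutdown is run from PowerShell on the Windows side so WSL picks up the new config):
sudo rm /etc/resolv.conf
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
wsl --shutdown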
None of those solutions worked for me.
I finally made it work simply by updating Docker (macOS).
I experienced this issue when trying to push to Docker.
I updated Docker Desktop (via the GUI)
I also ran docker system prune, which prompts:
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Confirm this by entering yes
It could be a temporary network issue. I had the same issue. I would try these two:
Re-run the command again
Restart Docker Desktop
I had the same issue. I was getting this error while following a Udemy course. Since I was new to Docker, I was actually building the image with an incorrect repository name (I was using the instructor's username instead of my own Docker repository username). When you push an image to Docker Hub, use your own Docker repository name; hence, build the image using your username:
docker build . -t docker_username/example:latest
where . represents the current directory where your Dockerfile resides.
First log in to your Docker repository using Docker Desktop on your system.
Hope this solves someone's problem.
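For completeness, the subsequent login and push then use the same username (docker_username is a placeholder for your own Docker Hub account):
docker login
docker push docker_username/example:latest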
My solution was:
docker image prune and docker volume prune
Experienced this error while I was trying to docker pull odoo,
and my solution was: sudo systemctl restart docker
Just log in through the terminal and use the below command
docker login
Enter username and password
I was stuck too and tried everything I could, then I tried these:
https://fedingo.com/how-to-uninstall-docker-in-ubuntu/
Make sure you repeat steps 1 and 2 mentioned in the link until step 1 shows nothing, then proceed with step 3 and the next steps.
Then delete the docker folder from here:
/etc/systemd/system/docker.service.d/
Then follow:
https://docs.docker.com/engine/install/ubuntu/
I have faced this error sometimes: my Docker image built smoothly before, but after I removed all images it happened again (even though I made no change to the Docker configuration files and there was no error in the code).
So I think it may be caused by the connection, since the build does a "Get https://registry-1.docker.io/v2/:....."
I changed DNS to the Google DNS servers 8.8.8.8 and 8.8.4.4, and then it worked.
Good luck!
Just add --dns 8.8.8.8 at the end of your shell command.
I got this error from my own Internet connection. Switched to another provider, all good.
Check in case a VPN is blocking it.
I am using docker for the first time and I was trying to implement this -
https://docs.docker.com/get-started/part2/#tag-the-image
At one stage I was trying to connect to localhost with this command -
$ curl http://localhost:4000
which showed this error-
curl: (7) Failed to connect to localhost port 4000: Connection refused
However, I solved this with the following commands -
$ docker-machine ip default
$ curl http://192.168.99.100:4000
After that everything was going fine, but in the last part I was trying to run the app by using the following line according to the tutorial...
$ docker run -p 4000:80 anibar/get-started:part1
But, I got this error
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_bohr (63f5691ef18ad6d6389ef52c56198389c7a627e5fa4a79133d6bbf13953a7c98): Bind for 0.0.0.0:4000 failed: port is already allocated.
You need to make sure that the previous container you launched is killed before launching a new one that uses the same port:
docker container ls
docker rm -f <container-name>
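If you are not sure which container is holding the port, you can filter the listing by published host port (4000 here matches the example above):
docker container ls --filter publish=4000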
Paying tribute to IgorBeaz, you need to stop the currently running container. For that, you first need to know the current CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 12a32e8928ef
Finally you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
I tried all the above answers and none of them worked; in my case even docker container ls doesn't show any container running. It looks like the problem is that the docker proxy is still using the ports although there are no containers running. In my case I was using Ubuntu. Here's what I did that solved the problem: just run the following two commands (and start Docker again afterwards):
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
I solved it this way:
First, I stopped all running containers:
docker-compose down
Then I executed an lsof command to find the process using the port (for me it was port 9000):
sudo lsof -i -P -n | grep 9000
Finally, I "killed" the process (in my case, it was a VSCode extension):
kill -9 <process id>
The quick fix is to just restart Docker:
sudo service docker stop
sudo service docker start
The above two answers are correct but didn't work for me.
I kept seeing blank output for docker container ls, so I tried docker container ls -a, and after that it showed all the processes, both previously exited and running.
Then docker stop <container id> or docker container stop <container id> didn't work,
then I tried docker rm -f <container id> and it worked.
After that I tried docker container ls -a again, and this process wasn't present anymore.
When I used the nginx Docker image, I also got this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint recursing_knuth (9186f7d7f523732b99d3510029cde9679f3f3fe7b7eb5f612d54c4aacea58220): Bind for 0.0.0.0:8080 failed: port is already allocated.
And I solved it using following commands:
$ docker container ls
$ docker stop [CONTAINER ID]
Then, running this docker container (like this) again works:
$ docker run -v $PWD/vueDemo:/usr/share/nginx/html -p 8080:80 -d nginx:alpine
You just need to stop the previous docker container.
I had the same problem with docker-compose. To fix it:
Kill the docker-proxy process
Restart Docker
Start docker-compose again
docker ps will reveal the list of containers running on Docker. Find the one running on the port you need and note down its container ID.
Stop and remove that container using the following commands:
docker stop <container id>
docker rm <container id>
Now run docker-compose up and your services should run as you have freed the needed port.
On Linux, sudo systemctl restart docker solved the issue for me.
For anyone having this problem with docker-compose.
When you have more than one project (i.e. in different folders) with similar services, you need to run docker-compose stop in each of your other projects.
If you are using Docker-Desktop, you can quit Docker Desktop and then restart it. It solved the problem for me.
In my case, there was no process to kill.
Updating docker fixed the problem.
It might be a conflict with the same port specified in docker-compose.yml and docker-compose.override.yml or the same port specified explicitly and using an environment variable.
I had a docker-compose.yml with ports on a container specified using environment variables, and a docker-compose.override.yml with one of the same ports specified explicitly. Apparently docker tried to open both on the same container. docker container ls -a listed neither because the container could not start and list the ports.
For me the containers were not showing up as running, so NOTHING was using port 9010 (in my case), BUT Docker still complained.
I did not want to reset my Docker (for Windows), so what I did to resolve it was simply:
Remove the network that I knew a container had previously been using with the port in question (9010): docker network ls to find it, then docker network rm <name or id>
I actually used a new network rather than the old (buggy) one, but that shouldn't be needed
Restart Docker
That was the only way it worked for me. I can't explain it, but somehow the "old" network was still bound to that port (9010) and Docker kept on "blocking" it (whining about it).
FOR WINDOWS:
I killed every process that Docker uses and restarted the Docker service in Services. My containers are working now.
The issue is about ports that are still in use by Docker even though you are not using them at that moment.
On Linux, you can run sudo netstat -tulpn to see what is currently listening on that port. You can then choose to configure either that process or your Docker container to bind to a different port to avoid the conflict.
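For example, to see what currently owns port 8080 (the port number is illustrative):
sudo netstat -tulpn | grep 8080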
Stopping the container didn't work for me either. I changed the port in docker-compose.yml.
For me, the problem was mapping the same port twice.
Due to a parametric docker run, it ended up being something like
docker run -p 4000:80 -p 4000:80 anibar/get-started:part1
Notice the double mapping on port 4000.
The log is not informative enough in this case: it doesn't state that the double mapping was the cause, and that the port is no longer bound after the docker run command returns with a failure.
Don't forget the easiest fix of all....
Restart your computer.
I have tried most of the above and still couldn't fix it. Then just restart my Mac and then it's all back to normal.
For anyone still looking for a solution, just make sure you have bound your port the right way round in your docker-compose.yml.
It goes:
- <EXTERNAL SERVER PORT>:<INTERNAL CONTAINER PORT>
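For example, to expose a container's internal port 80 on host port 8080 (the numbers are illustrative):
ports:
  - '8080:80'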
Had the same problem. Went to Docker for Mac Dashboard and clicked restart. Problem solved.
My case was dumb XD: I was exposing port 80 twice :D
ports:
- '${APP_PORT:-80}:80'
- '${APP_PORT:-8080}:8080'
APP_PORT is defined, thus 80 was exposed twice.
I tried almost all the solutions and found out the probable/possible reason/solution. If you are using traefik or any other networking server, they internally facilitate a proxy for load balancing. Most use that blueprint as-is, and it works pretty fine; it then passes load control entirely to nginx or similar proxy servers. So stopping, killing (the networking server), or pruning might not help.
Solution for traefik with nginx,
sudo /etc/init.d/nginx stop
# or
sudo service nginx stop
# or
sudo systemctl stop nginx
Credits: How to stop docker processes
Making Docker Stop Itself <- Safe and Fast
This is the best way to stop containers and all unstoppable processes: making Docker do the job.
Go to Docker Settings > Resources. Change any of the resources, then click Apply and Restart.
Docker will stop itself and every one of its processes, even the most stubborn ones that might not be killed by other commonly used commands such as kill, or wilder commands like rm suggested by others.
I ran into a similar problem before, and all the good, proper tips from my colleagues somehow did not work out. I share this safe trick whenever someone on my team asks me about this.
Error response from daemon: driver failed programming external connectivity on endpoint foobar
Bind for 0.0.0.0:8000 failed: port is already allocated
hope this helps!
Simply restart your computer, so the Docker service gets restarted.
I have Docker version 17.06.0-ce. When I try to install NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
it shows:
docker: Error response from daemon: oci runtime error:
container_linux.go:262: starting container process caused
"process_linux.go:339: container init caused \"rootfs_linux.go:57:
mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\"
at
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If I do not mount the nginx.conf file, everything is okay. So how can I mount the configuration file?
This should no longer happen (since v2.2.0.0), see here
If you are using Docker for Windows, this error can happen if you have recently changed your password.
How to fix:
First make sure to delete the broken container's volume
docker rm -v <container_name>
Update: The steps below may work without needing to delete volumes first.
Open Docker Settings
Go to the "Shared Drives" tab
Click on the "Reset Credentials..." link on the bottom of the window
Re-Share the drives you want to use with Docker
You should be prompted to enter your username/password
Click "Apply"
Go to the "Reset" tab
Click "Restart Docker"
Re-create your containers/volumes
Credit goes to BaranOrnarli on GitHub for the solution.
TL;DR: Remove the volumes associated with the container.
Find the container name using docker ps -a then remove that container using:
docker rm -v <container_name>
Problem:
The error you are facing might occur if you previously tried running the docker run command while the file was not present at the location where it should have been in the host directory.
In this case the docker daemon would have created a directory in its place, which later fails to map to the proper file when the correct files are put in the host directory and the docker command is run again.
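A quick way to reproduce that failure mode, under the assumption above (missing.conf is a hypothetical file that does not yet exist on the host; the run fails with the mount error and leaves a directory behind):
docker run --rm -v $PWD/missing.conf:/etc/nginx/nginx.conf nginx
ls -ld missing.conf    # Docker has created missing.conf as a directory on the host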
Solution:
Remove the volumes that are associated with the container. If you are not concerned about other container volumes, you can also use:
# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
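If you would rather not wipe every volume, you can first check which volumes a given container actually uses (the format string here is my own; the Name field is populated for named volumes):
docker inspect -f '{{ range .Mounts }}{{ .Name }} {{ end }}' <container_name>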
Because docker will recognize $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.
Test with
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
Otherwise, open a Docker issue.
It's working fine for me with the same configuration.
The explanation given by @Ayushya was the reason I hit this somewhat confusing error message, and the necessary housekeeping can be done easily like this:
$ docker container prune
$ docker volume prune
Answer for people using Docker Toolbox
There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.
Description of the problem:
Docker Toolbox bypasses the Hyper-V requirement of Docker by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs to have access to the project folders from the host machine. Which here it doesn't.
After I installed Docker Toolbox, it created the VirtualBox VM and only mounted C:\Users to the machine, as /c/Users/. My project was in C:\projects, so nowhere on the mounted volume. When I was sending the path to the VM, it would not exist, as C:\projects isn't mounted. Hence the error above.
Let's say I had my project containing my ngnix config in C:/projects/project_name/
Fixing it:
Go to VirtualBox, right click on Default (the VM from Docker) > Settings > Shared Folders
Click the small icon with the plus on the right side to add a new share. I used the following settings:
The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that now you can reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.
To use relative paths, please consider naming the path c/projects not projects
Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
In my docker file, I now reference the nginx.conf like this:
volumes:
- /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf
I had the same problem. I was using Docker Desktop with WSL in Windows 10 17.09.
Cause of the problem:
The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:
/c/Users/username/app
BUT, WSL instead uses the format:
/mnt/c/Users/username/app
This is confusing, because when checking the file in the console I saw it, and to me everything looked correct. I wasn't aware of the Docker for Windows expectations about volume paths.
Solution to the problem:
I bound custom mount points to fix the Docker for Windows and WSL differences:
sudo mount --bind /mnt/c /c
As suggested in this amazing guide: Setting Up Docker for Windows and WSL to Work Flawlessly, and everything is working perfectly now.
Before I started using WSL I was using Git Bash and I had this problem as well.
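If you want the bind to survive reboots instead of re-running the mount, WSL's documented automount option can change where drives are mounted; in /etc/wsl.conf:
[automount]
root = /
With that, C: appears at /c instead of /mnt/c, which matches what Docker for Windows expects.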
On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General
Maybe someone will find this useful. My compose file had the following volume mounted:
./file:/dir/file
As ./file did not exist, it was mounted into ABC as a directory (the default).
In my case I had a container that resulted from
docker commit ABC cool_image
When I later created ./file and ran docker-compose up, I had the error:
[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.
The solution was:
touch ./file
docker run --name ABC -v $(pwd)/file:/dir/file abc_image
# ... desired changes to ABC
docker commit ABC cool_image
I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount your files, make sure your files and folders are inside the C DRIVE.
Example: C:\Users\%USERNAME%\Desktop
I'll share my case here, as it may save a lot of time for someone else in the future.
I had a perfectly working docker-compose setup on my macOS, until I started using docker-in-docker in GitLab CI. I was only given permission to work as Master in the repository, and the GitLab CI is self-hosted and was set up by someone else; no other info was shared about how it's set up, etc.
The following caused the issue:
volumes:
- ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf
Only when I noticed that this might be running under Windows (hours of scratching my head), I tried renaming wordpress.conf to default.conf and just set the directory pathnames:
volumes:
- ./.docker/nginx/wordpress:/etc/nginx/conf.d
This solved the problem!
I had the same issue: docker-compose was creating a directory instead of a file, then crashing midway.
What I did:
Run the container without any mapping.
Copy the .conf file to the host location:
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
Remove the container (docker-compose down).
Put the mapping back.
Re-mount the container.
Docker Compose will find the .conf file and map it, instead of trying to create a directory.
unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I had a similar error with nginx in a Mac environment.
Docker didn't recognize the default.conf file correctly. Once I changed the relative path to the absolute path, the error was fixed.
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
In Windows 10, I just get this error without changing anything in my docker-compose.yml file or Docker configuration in general.
In my case, I was using a VPN with a firewall policy that blocks port 445.
After disconnecting from the VPN the problem disappears.
So I recommend checking your firewall and not using a proxy or VPN when running Docker Desktop.
Check Docker for windows - Firewall rules for shared drives for more details.
I hope this will help someone else.
Could you please use the absolute/complete path instead of $PWD/conf/nginx.conf? Then it will work.
EX: docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b
root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9ead15988a9 nginx "nginx -g 'daemon of…" 7 seconds ago Up 6 seconds 0.0.0.0:90->80/tcp nginx-container5
e2b195a691a4 nginx "/bin/bash" 16 minutes ago Up 16 minutes 0.0.0.0:80->80/tcp test-nginx
I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:
echo $PWD
/mnt/d/nginx
docker run --name nginx -d \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
I resolved it by changing the path for the file on the host system to a UNIX style absolute path:
docker run --name nginx -d \
-v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
or using a Windows-style absolute path with / instead of \ as path separator:
docker run --name nginx -d \
-v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
To strip the /mnt prefix (which seems to cause problems) from the path, I use Bash variable expansion:
-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf
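To illustrate what that expansion does (using the same $PWD as above):
$ echo ${PWD/mnt\/}
/d/nginx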
Updating Virtual Box to 6.0.10 fixed this issue for Docker Toolbox
https://github.com/docker/toolbox/issues/844
I was experiencing this kind of error:
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root@a5020b4d6cc2:/# exit
exit
After updating VirtualBox, all commands worked just fine.
Had the same head-scratcher: because I did not have the file locally, Docker created it as a folder.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile logs.txt/
For me, this did not work:
volumes:
- ./:/var/www/html
- ./nginx.conf:/etc/nginx/conf.d/site.conf
But this works fine (obviously I moved my config file inside a new directory too):
volumes:
- ./:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf
I had this problem under Windows 7 because my Dockerfile was on a different drive.
Here's what I did to fix the problem:
Open VirtualBox Manager
Select the "default" container and edit the settings.
Select Shared Folders and click the icon to add a new shared folder
Folder Path: x:\
Folder Name: /x
Check Auto-mount and Make Permanent
Restart the virtual machine
At this point, docker-compose up should work.
I got the same error on Windows10 after an update of Docker: 2.3.0.2 (45183).
... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I was using absolute paths like this //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf with a single / for the root.
Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.
Adding the root path as a file sharing resource will now permit Docker to access the resource to mount it to the container. Note that you may need to erase the contents on your Docker container to attempt to re-mount the volume.
For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.
In my case it was a problem with Docker for Windows and a partition encrypted by BitLocker. If you have project files on an encrypted partition, then after a restart and drive unlock Docker doesn't see the project files properly.
All you need to do is restart Docker.
CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:
COPY --chown=node:node dist/app.js /usr/app/app.js
and then during development I mount that file via docker-compose:
volumes:
- ./dist/app.js:/usr/app/app.js
I would intermittently get the "Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type." error, or some version of it.
The problem was that CleanWebpackPlugin was deleting the file before webpack re-built it. If Docker tried to mount the file while it was deleted, Docker would fail. It was intermittent.
Either remove CleanWebpackPlugin completely or configure its options to play nicer.
I had this happen when the json file on the host had the executable permission set. I don't know the reason behind this.
For me, it was enough to just do this:
docker compose down
docker compose up -d
I have solved the mount problem. I am using a Win 7 environment, and the same problem happened to me.
Are you trying to mount a directory onto a file?
The VM has a default shared directory at C:\Users\, so I moved my project to C:\Users\, then recreated the project. Now it works.
I have changed my hard drive, and before switching I copied all data from the old hard drive to the new one using rsync. Everything is working fine on the new SSD. However, I wasn't able to run Docker containers. I tried uninstalling and reinstalling, and tried the example command from the docs:
sudo docker run hello-world
But I am getting this error:
docker: Error response from daemon: Container command '/hello' not found or does not exist..
Not sure if it is related to storage drivers, since it was working fine on my old HDD. I tried changing the storage driver in the /etc/default/docker file by modifying the line:
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --storage-driver=devicemapper"
It switches to devicemapper, but I am still getting the same error.
Try changing "build: ." to "image: hello-world" in the .yml file.
How can I add a new nameserver in /etc/resolv.conf (Dockerfile)?
In my Dockerfile I use:
FROM ubuntu:14.04
RUN echo "nameserver 10.111.122.1" >> /etc/resolv.conf
In my test I use:
docker run --rm 746cb98d6c9b cat /etc/resolv.conf
I didn't get my change (the new nameserver)... So I tried adding it manually with
docker run --rm 746cb98d6c9b echo "nameserver 10.111.122.1" >> /etc/resolv.conf
and I get
zsh: permission denied: /etc/resolv.conf
How can I change the permissions of this file, OR use a root user, OR use chmod in Docker files? My real task is to add a DNS server for the build of this Dockerfile.
I'm using Linux Mint.
I get a correct result with a ping test on the docker run command (with --dns).
So, one of the ways you can add new DNS information to your container's build process is by adding some startup options to your Docker daemon. The documentation for that process reveals that the option you'll use is --dns. The location of your configuration file depends on your specific distro. On my Linux Mint machine, the file is /etc/default/docker. Look for the DOCKER_OPTS= line and add the appropriate --dns=x.x.x.x entries to that line.
For example, if you want to use Google's DNS, you should change that line to look like this:
DOCKER_OPTS="--dns=8.8.4.4 --dns=8.8.8.8"
Additionally, in the absence of --dns or --dns-search startup options, Docker will use the /etc/resolv.conf of the host it's running on instead.
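Note that on newer, systemd-based installations /etc/default/docker may be ignored; the daemon-wide equivalent is the "dns" key in /etc/docker/daemon.json (a documented dockerd option):
{
  "dns": ["8.8.4.4", "8.8.8.8"]
}
followed by a daemon restart: sudo systemctl restart docker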
The DNS configuration of a Docker container may be adjusted during the creation of the container and does not need to be hard-coded in the Docker image itself.
Passing a single DNS server to the container works by providing the --dns parameter:
$ docker run --rm --dns=8.8.8.8 <image>
You're free to provide more than one DNS server and you can also define other DNS related options like the DNS search name or common DNS options:
$ docker run --rm --dns=8.8.8.8 --dns=8.8.4.4 --dns-search=your.search.domain --dns-opt=timeout:50 <image>
If you pass cat /etc/resolv.conf as command to your container, you can easily verify that the passed DNS configuration options made it into the container's DNS configuration:
$ docker run --rm --dns=8.8.4.4 --dns=8.8.8.8 --dns-search=your.domain.name --dns-opt=timeout:50 alpine cat /etc/resolv.conf
search your.domain.name
nameserver 8.8.4.4
nameserver 8.8.8.8
options timeout:50
Please also refer to the docker run configuration which can be found at https://docs.docker.com/engine/reference/commandline/run/