I am currently despairing at the attempt of setting up a docker build step in Atlassian Bamboo.
For starters, I just want to create a build configuration that runs the hello-world image as a proof of concept. So far, I have failed.
I have tried following the steps on https://confluence.atlassian.com/bamboo0609/using-bamboo/jobs-and-tasks/configuring-tasks/configuring-the-docker-task-in-bamboo , but to no avail.
My setup is this:
We have Bamboo installed on an Ubuntu server. I also installed Docker on that server and added the bamboo user to the docker usergroup and restarted the server to make sure the permission change takes effect. At this point, docker run hello-world works when I run it directly on the server. I can also confirm that this is the server that Bamboo runs on since Bamboo went offline whenever I restarted the server that I installed Docker on.
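For context, the group change amounts to something like this (assuming the service user is literally named bamboo, as in my setup):
sudo usermod -aG docker bamboo    # add the bamboo user to the docker group
sudo reboot                       # or at least re-login, so the group change takes effect
docker run hello-world            # verify directly on the server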
Then, I added the docker capability to the server (the agent is the default agent, so it inherits this capability from the server). As the Docker path, I have tried various things, none of which worked (i.e., the following errors remained the same for each of these; a quick way to locate the real binary is shown after this list):
/snap/docker (the first folder that I found on a manual search)
/usr/bin/docker (the recommended path, though on inspecting the Ubuntu server I quickly found out that no docker executable exists under /usr/bin on that server)
/var/snap/docker/common/var-lib-docker (the path that Docker returns as its Root Directory when I run docker info on the Ubuntu server)
/var/snap/docker (for good measure)
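For what it's worth, a quick way to locate the actual executable the shell uses is:
command -v docker                     # prints the path the shell resolves, e.g. /snap/bin/docker or /usr/bin/docker
readlink -f "$(command -v docker)"    # follows symlinks to the real binary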
Now, for the runner, I have tried two different approaches.
First, I tried using a Docker runner with the following settings:
Command: Run a Docker container
Docker image: hello-world
This returns the following error message:
┊
Error occurred while running Task 'Hello World Docker Test(5)' of type com.atlassian.bamboo.plugins.bamboo-docker-plugin:task.docker.cli.com.atlassian.bamboo.task.TaskException: Failed to execute task
┊
Caused by: com.atlassian.bamboo.docker.DockerException: Error running Docker run command
┊
Caused by: com.atlassian.utils.process.ProcessException: Error executing /snap/docker run --volume /var/atlassian/application-data/bamboo/xml-data/build-dir/CAM-DOC-JOB1:/data --workdir /data --rm hello-world
┊
The second was just to run a shell runner for the command docker run hello-world, which returned the following error:
docker: not found
At this point, I feel like I'm out of ideas. Everything points towards Bamboo for some reason not finding Docker on the server, even though I can clearly confirm that it is there. I have tried various different approaches of telling Bamboo where to find Docker, but none of them have worked.
It's obvious that I'm doing something wrong, but I can't figure out what. Or maybe the problem lies in an entirely different direction altogether? Anyway, I would be grateful for any insight shared on this matter.
Okay, I found out what caused this strange behaviour.
The problem was that I installed Docker using sudo snap install docker, and apparently installing docker via snap causes problems with Bamboo.
So I got it to work using these simple steps:
[Server] Uninstalled Snap Docker using sudo snap remove docker
[Server] Reinstalled Docker using sudo apt install docker.io
[Bamboo] Changed the path to Docker in the Server Capabilities to /usr/bin/docker
After that, the hello-world image build succeeded and printed the expected output to the log.
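For anyone reproducing this fix, the corrected path can be confirmed on the server before updating the capability:
command -v docker         # prints /usr/bin/docker after the apt install
docker run hello-world    # sanity check as the bamboo user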
I have Docker version 17.06.0-ce. When I try to run NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
It shows the following error:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/appdata/nginx/conf/nginx.conf\\\" to rootfs \\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\\" at \\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\\" caused \\\"not a directory\\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If I do not mount the nginx.conf file, everything is okay. So, how can I mount the configuration file?
This should no longer happen (since v2.2.0.0), see here
If you are using Docker for Windows, this error can happen if you have recently changed your password.
How to fix:
First make sure to delete the broken container's volume
docker rm -v <container_name>
Update: The steps below may work without needing to delete volumes first.
Open Docker Settings
Go to the "Shared Drives" tab
Click on the "Reset Credentials..." link on the bottom of the window
Re-Share the drives you want to use with Docker
You should be prompted to enter your username/password
Click "Apply"
Go to the "Reset" tab
Click "Restart Docker"
Re-create your containers/volumes
Credit goes to BaranOrnarli on GitHub for the solution.
TL;DR: Remove the volumes associated with the container.
Find the container name using docker ps -a then remove that container using:
docker rm -v <container_name>
Problem:
The error you are facing can occur if you previously ran the docker run command while the file was not present at its expected location in the host directory.
In that case the docker daemon would have created a directory in its place, which later fails to map to the proper file when the correct file is put in the host directory and the docker command is run again.
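This behaviour is easy to reproduce; a minimal sketch (paths illustrative):
# host file missing: Docker silently creates a directory in its place
docker run -d --name web -v "$PWD/conf/nginx.conf:/etc/nginx/nginx.conf" nginx
ls -ld conf/nginx.conf    # drwxr-xr-x ... conf/nginx.conf  <- a directory, not a file
rm -r conf/nginx.conf     # the directory must go before the real file can take its place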
Solution:
Remove the volumes that are associated with the container. If you are not concerned about other container volumes, you can also use:
# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
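If you would rather not wipe everything, you can first list the named volumes a specific container uses and remove only those (names illustrative):
docker inspect -f '{{ range .Mounts }}{{ .Name }} {{ end }}' <container_name>
docker volume rm <volume_name>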
That happens because Docker recognizes $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.
Test with
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
Otherwise, open a Docker issue.
It's working fine for me with the same configuration.
The explanation given by @Ayushya was the reason I hit this somewhat confusing error message, and the necessary housekeeping can be done easily like this:
$ docker container prune
$ docker volume prune
Answer for people using Docker Toolbox
There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.
Description of the problem:
Docker Toolbox bypasses Docker's Hyper-V requirement by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs access to the project folders on the host machine, which here it doesn't have.
After I installed Docker Toolbox, it created the VirtualBox VM and mounted only C:\Users into the machine, as /c/Users. My project was in C:\projects, so nowhere on the mounted volume. When I passed the path to the VM, it did not exist, as C:\projects isn't mounted. Hence the error above.
Let's say I had my project containing my nginx config in C:/projects/project_name/
Fixing it:
Go to VirtualBox, right click on Default (the VM from Docker) > Settings > Shared Folders
Click the small icon with the plus on the right side to add a new share. I used Folder Path C:\projects and Folder Name projects.
The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that now you can reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.
To use relative paths, please consider naming the share c/projects instead of projects.
Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
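If you prefer the command line over the VirtualBox GUI, the same share can be created with VBoxManage; a sketch, assuming the VM is named default as above:
docker-machine stop default
VBoxManage sharedfolder add default --name "projects" --hostpath "C:\projects" --automount
docker-machine start default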
In my docker-compose file, I now reference the nginx.conf like this:
volumes:
- /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf
I had the same problem. I was using Docker Desktop with WSL on Windows 10 17.09.
Cause of the problem:
The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:
/c/Users/username/app
BUT, WSL instead uses the format:
/mnt/c/Users/username/app
This is confusing, because when checking the file in the console I could see it, and everything looked correct to me. I simply wasn't aware of what Docker for Windows expects for volume paths.
Solution to the problem:
I bound a custom mount point to fix the difference between Docker for Windows and WSL:
sudo mount --bind /mnt/c /c
As suggested in this amazing guide: Setting Up Docker for Windows and WSL to Work Flawlessly, and everything is working perfectly now.
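To make this permanent, the same guide suggests changing the WSL automount root in /etc/wsl.conf so that drives are mounted at /c instead of /mnt/c (restart WSL afterwards):
[automount]
root = /
options = "metadata"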
Before I started using WSL I was using Git Bash and I had this problem as well.
On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General
Maybe someone will find this useful. My compose file had the following volume mounted:
./file:/dir/file
As ./file did not exist on the host, Docker created it as a folder and mounted that into ABC.
In my case I had a container resulted from
docker commit ABC cool_image
When I later created ./file and ran docker-compose up, I got the error:
[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.
The solution was:
touch ./file
docker run --name ABC -v "$(pwd)/file:/dir/file" abc_image
# ... desired changes to ABC
docker commit ABC cool_image
I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount files, make sure your files and folders are inside the C drive.
Example: C:\Users\%USERNAME%\Desktop
I'll share my case here as this may save a lot of time for someone else in the future.
I had a perfectly working docker-compose setup on my macOS, until I started using docker-in-docker in GitLab CI. I had only been given Master permissions in the repository, and the GitLab CI is self-hosted and was set up by someone else; no other information was shared about how it is set up.
The following caused the issue:
volumes:
- ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf
Only when I noticed that this might be running under Windows (after hours of head scratching) did I try renaming wordpress.conf to default.conf and just set the directory pathnames:
volumes:
- ./.docker/nginx/wordpress:/etc/nginx/conf.d
This solved the problem!
I had the same issue: docker-compose was creating a directory instead of a file, then crashing midway.
What I did:
Run the container without any mapping.
Copy the .conf file to the host location:
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
Remove the container (docker-compose down).
Put the mapping back.
Re-create the container (docker-compose up).
Docker Compose will find the .conf file and map it, instead of trying to create a directory.
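Roughly, the whole dance looks like this (container and file names illustrative):
docker-compose up -d                                        # run once without the nginx.conf mapping
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf  # copy the real file to the host
docker-compose down
# re-add the volume mapping in docker-compose.yml, then:
docker-compose up -d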
unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I had a similar error with nginx in a Mac environment.
Docker didn't recognize the default.conf file correctly. Once I changed the relative path to an absolute path, the error was fixed.
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
On Windows 10, I just got this error without having changed anything in my docker-compose.yml file or my Docker configuration in general.
In my case, I was using a VPN with a firewall policy that blocks port 445.
After disconnecting from the VPN the problem disappears.
So I recommend checking your firewall rules and not using a proxy or VPN when running Docker Desktop.
Check Docker for Windows - Firewall rules for shared drives for more details.
I hope this will help someone else.
Could you please use the absolute/complete path instead of $PWD/conf/nginx.conf? Then it will work.
Example: docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b
root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9ead15988a9 nginx "nginx -g 'daemon of…" 7 seconds ago Up 6 seconds 0.0.0.0:90->80/tcp nginx-container5
e2b195a691a4 nginx "/bin/bash" 16 minutes ago Up 16 minutes 0.0.0.0:80->80/tcp test-nginx
I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:
echo $PWD
/mnt/d/nginx
docker run --name nginx -d \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
I resolved it by changing the path for the file on the host system to a UNIX style absolute path:
docker run --name nginx -d \
-v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
or using a Windows-style absolute path with / instead of \ as the path separator:
docker run --name nginx -d \
-v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
To strip the /mnt prefix (which seems to cause problems) from the path, I use bash parameter expansion:
-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf
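For illustration, with the working directory from above:
echo $PWD            # /mnt/d/nginx
echo ${PWD/mnt\/}    # /d/nginx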
Updating VirtualBox to 6.0.10 fixed this issue for Docker Toolbox
https://github.com/docker/toolbox/issues/844
I was experiencing this kind of error:
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root@a5020b4d6cc2:/# exit
exit
After updating VirtualBox, all commands worked just fine 🎉
Had the same head-scratcher: I did not have the file locally, so Docker created it as a folder.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile logs.txt/
For me, this did not work:
volumes:
- ./:/var/www/html
- ./nginx.conf:/etc/nginx/conf.d/site.conf
But this works fine (I obviously moved my config file into the new directory too):
volumes:
- ./:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf
I had this problem under Windows 7 because my Dockerfile was on a different drive.
Here's what I did to fix the problem:
Open VirtualBox Manager
Select the "default" VM (the one created by Docker) and edit the settings.
Select Shared Folders and click the icon to add a new shared folder
Folder Path: x:\
Folder Name: /x
Check Auto-mount and Make Permanent
Restart the virtual machine
At this point, docker-compose up should work.
I got the same error on Windows 10 after an update of Docker to 2.3.0.2 (45183).
... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I was using absolute paths like this //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf with a single / for the root.
Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.
Adding the parent path as a file sharing resource permits Docker to access the resource and mount it into the container. Note that you may need to remove and re-create your Docker container before the volume mounts correctly.
For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.
In my case it was a problem with Docker for Windows and a partition encrypted by BitLocker. If your project files are on the encrypted partition, Docker doesn't see them properly after you restart and unlock the drive.
All you need to do is restart Docker.
CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:
COPY --chown=node:node dist/app.js /usr/app/app.js
and then during development I mount that file via docker-compose:
volumes:
- ./dist/app.js:/usr/app/app.js
I would intermittently get the Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type. error or some version of it.
The problem was that CleanWebpackPlugin was deleting the file before webpack rebuilt it. If Docker tried to mount the file while it was deleted, the mount would fail. Hence the intermittent behaviour.
Either remove CleanWebpackPlugin completely or configure its options to play nicer.
I had this happen when the json file on the host had the executable permission set. I don't know the reason behind this.
For me, it was enough to just do this:
docker compose down
docker compose up -d
I have solved the mount problem. I am using a Windows 7 environment, and the same problem happened to me.
Are you trying to mount a directory onto a file?
The Docker Toolbox VM has a default shared directory at C:\Users, so I moved my project under C:\Users and then recreated the project. Now it works.
I am trying to integrate Docker into my CI platform. After getting this working properly with a Docker-in-Docker solution, I came across a blog post by one of the Docker maintainers, where he says that instead of using Docker-in-Docker for CI, I should simply mount /var/run/docker.sock into my CI container.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So I tried this. I ran the following command:
docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins
Using jenkins as my CI container.
When running the above command, jenkins starts up properly, and I can jump into the container to see that the docker.sock file is located in the /var/run/ path.
However, when I run the docker command inside the container, it returns the following message:
bash: docker: command not found
Does anyone know what I am missing in order to make this work per the author's instructions?
I am using Docker v. 1.11.1, on a fresh CentOS 7 box.
Thanks in advance
Figured this out today. The above command will work as long as the Docker client and its dependencies are present inside the container. In my case, I ended up writing a simple Dockerfile, which also included the line:
RUN curl -sSL https://get.docker.com/ | sh
This installed Docker on the container, and when I ran docker images from within the container, I could see all of the images from my host machine. I am now able to use all of the docker commands from within the container.
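For reference, the Dockerfile amounted to something like this; a sketch, assuming the stock jenkins image:
FROM jenkins
USER root
# install the Docker client so the docker CLI exists inside the container
RUN curl -sSL https://get.docker.com/ | sh
USER jenkins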
To create a Docker container in Bluemix we need to install the container plug-in and container extension. After installing the container extension, Docker should be running, but it shows this error:
[root@oc0608248400 Desktop]# cf ic login
** Retrieving client certificates from IBM Containers
** Storing client certificates in /root/.ice/certs
Successfully retrieved client certificates
** Checking local docker configuration
Not OK
Docker local daemon may not be running. You can still run IBM Containers on the cloud
There are two ways to use the CLI with IBM Containers:
Option 1) This option allows you to use `cf ic` for managing containers on IBM Containers while still using the docker CLI directly to manage your local docker host.
Leverage this Cloud Foundry IBM Containers plugin without affecting the local docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2) Leverage the docker CLI directly. In this shell, override local docker environment to connect to IBM Containers by setting these variables, copy and paste the following:
Notice: only commands with an asterisk(*) are supported within this option
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/root/.ice/certs
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
exec: "docker": executable file not found in $PATH
Please suggest what I should do next.
The error is already telling you what is wrong:
exec: "docker": executable file not found in $PATH
means the shell cannot find the docker executable.
Thus the following should tell you where it is located, and that directory then needs to be appended to the PATH environment variable.
dockerpath=$(dirname "$(find / -name docker -type f -perm /a+x 2>/dev/null | head -n 1)")
export PATH="$PATH:$dockerpath"
This searches the whole filesystem for a file named docker with the executable bit set (discarding error messages), takes the directory containing the first match, stores it in $dockerpath, and then appends it to PATH for the current session.
The problem seems to be that your docker daemon isn't running.
Try running:
sudo service docker restart
If you've just installed docker you may need to reboot your machine first.
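On CentOS 7, which the question mentions, the daemon is managed by systemd, so the equivalent would be:
sudo systemctl status docker    # check whether the daemon is running
sudo systemctl start docker     # start it if it is not
sudo systemctl enable docker    # start it automatically at boot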