Docker image with /dev/kvm module - docker

I want to use KVM in my Docker image. kvm and qemu-system-x86_64 are installed under /usr/bin, but when I run e.g. /usr/bin/qemu-system-x86_64 -enable-kvm
I'm getting
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: failed to initialize KVM: No such file or directory
I was able to fix the error by adding --device=/dev/kvm to the docker run command.
My question is: why does this kvm device need to be attached from my host? Why is it required to be under the /dev directory?
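For context, /dev/kvm is a device node exposed by the host kernel's kvm module; a container shares the host's kernel and has no kernel of its own, so the node must be passed in from the host explicitly. A minimal sketch, assuming the host supports KVM; the image name my-qemu-image is a hypothetical placeholder:

```shell
# On the host: make sure the KVM module is loaded (kvm_amd on AMD CPUs).
sudo modprobe kvm_intel

# Pass the host's device node into the container so QEMU can open it.
docker run --device=/dev/kvm my-qemu-image \
  qemu-system-x86_64 -enable-kvm -m 512 -nographic
```

Without --device (or a privileged container), /dev inside the container contains only the default nodes Docker creates, which is why QEMU reports that it cannot access the KVM kernel module.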

Related

Warning when trying to run tensorflow with Docker on Windows

I cannot start TensorFlow with the image downloaded from TensorFlow.
I used Docker on Windows 10, and the error output said this:
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
I tried searching Google for the problem but could not find anything; my experience with Docker is nil.
This warning means that files created in the mounted directory were created as root (your Docker container ran with root permissions), so as a non-root user on the host you may require sudo to access or change them.
A quick search shows that there are many blog references available; check these:
Docker creates files as root in mounted volume
Running a Docker container as a non-root user
Setup Docker for windows using windows subsystem linux
https://jtreminio.com/blog/running-docker-containers-as-current-host-user/
https://medium.com/better-programming/running-a-container-with-a-non-root-user-e35830d1f42a
https://docs.docker.com/install/linux/linux-postinstall/

Make systemctl work from inside a container in a debian stretch image

Purpose - What do I want to achieve?
I want to access systemctl from inside a container running on a Kubernetes node (AMI: running Debian Stretch)
Working setup:
Node AMI: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
Node Directories Mounted in the container to make systemctl work:
/var/run/dbus
/run/systemd
/bin/systemctl
/etc/systemd/system
Not Working setup:
Node AMI: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
Node Directories Mounted in the container to make systemctl work:
/var/run/dbus
/run/systemd
/bin/systemctl
/etc/systemd/system
Debugging in an attempt to solve the problem
To debug why the debian-stretch image does not support systemctl with the same mounts as debian-jessie:
1) I began by spinning up an nginx deployment, mounting the above-mentioned volumes in it
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl exec -it nginx-deployment /bin/bash
root@nginx-deployment-788f65877d-pzzrn:/# systemctl
systemctl: error while loading shared libraries: libsystemd-shared-
232.so: cannot open shared object file: No such file or directory
2) The above error showed that the file libsystemd-shared-232.so was not found, so I looked for its actual path on the node.
admin@ip-10-0-20-11:~$ sudo find / -iname 'libsystemd-shared-232.so'
/lib/systemd/libsystemd-shared-232.so
3) Mounted /lib/systemd in the nginx pod and ran systemctl again
kubectl exec -it nginx-deployment /bin/bash
root@nginx-deployment-587d866f54-ghfll:/# systemctl
systemctl: error while loading shared libraries: libcap.so.2:cannot
open shared object file: No such file or directory
4) Now systemctl was failing with a new missing shared-object error
root@nginx-deployment-587d866f54-ghfll:/# systemctl
systemctl: error while loading shared libraries: libcap.so.2: cannot
open shared object file: No such file or directory
5) To solve the above error I again searched the node for libcap.so.2 and found it at the path below.
admin@ip-10-0-20-11:~$ sudo find / -iname 'libcap.so.2'
/lib/x86_64-linux-gnu/libcap.so.2
6) Seeing that this directory was not mounted in my pod, I mounted the path below in the nginx pod.
/lib/x86_64-linux-gnu mounted in the nginx pod(deployment)
7) The nginx pod is not able to come up after adding the above mount, and gets the error below:
$ k logs nginx-deployment-f9c5ff956-b9wn5
standard_init_linux.go:178: exec user process caused "no such file
or directory"
Please suggest how to debug further, and which mounts are required to make systemctl work from inside a container in a Debian Stretch environment.
Any pointers to take the debugging further would be helpful.
Rather than mounting some of the library files from the host you can just install systemd in the container.
$ apt-get -y install systemd
Now, that won't necessarily make systemctl run. You will need systemd to be running in your container, spawned by /sbin/init. /sbin/init needs to run as root, so essentially you would have to run this with the privileged flag in the pod or container security context on Kubernetes. This is insecure, and there is a long history around running systemd in a container, where the Docker folks were mostly against it (security) and the Red Hat folks said it was needed.
Nevertheless, the Red Hat folks figured out a way to make it work without the privileged flag. You need:
/run mounted as a tmpfs in the container.
/sys/fs/cgroup mounted as read-only is ok.
/sys/fs/cgroup/systemd/ mounted as read/write.
Use STOPSIGNAL SIGRTMIN+3
In Kubernetes you need an emptyDir to mount a tmpfs. The others can be mounted as host volumes.
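The list above can be sketched as a Pod spec. This is a minimal, untested sketch assuming a hypothetical image that already has systemd installed; the image and volume names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: systemd-pod
spec:
  containers:
  - name: app
    image: my-systemd-image   # hypothetical image with systemd installed
    volumeMounts:
    - name: run
      mountPath: /run                      # tmpfs via emptyDir
    - name: cgroup
      mountPath: /sys/fs/cgroup
      readOnly: true
    - name: cgroup-systemd
      mountPath: /sys/fs/cgroup/systemd    # read/write
  volumes:
  - name: run
    emptyDir:
      medium: Memory                       # makes the emptyDir a tmpfs
  - name: cgroup
    hostPath:
      path: /sys/fs/cgroup
  - name: cgroup-systemd
    hostPath:
      path: /sys/fs/cgroup/systemd
```

The emptyDir with medium: Memory covers the tmpfs requirement; the two hostPath volumes cover the cgroup mounts.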
After mounting your host's /lib directory into the container, your Pod will most probably not start, because the Docker image's /lib directory contained libraries needed by the Nginx server that should start in that container. With /lib mounted from the host, the libraries required by Nginx are no longer accessible. This results in a No such file or directory error when trying to start Nginx.
To make systemctl available from within the container, I would suggest simply installing it within the container, instead of trying to mount the required binaries and libraries into the container. This can be done in your container's Dockerfile:
FROM whatever
RUN apt-get update && apt-get install -y systemd
No need to mount /bin/systemctl or /lib/ with this solution.
I had a similar problem where one of the lines in my Dockerfile was:
RUN apt-get install -y --reinstall systemd
but after restarting Docker, when I tried to use the systemctl command, the output was:
Failed to connect to bus: No such file or directory.
I solved this issue by adding following to my docker-compose.yml:
volumes:
- "/sys/fs/cgroup:/sys/fs/cgroup:ro"
It can be done also by:
sudo docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro {other options}

Are you trying to mount a directory onto a file (or vice-versa)?

I have Docker version 17.06.0-ce. When I try to install NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
It shows that
docker: Error response from daemon: oci runtime error:
container_linux.go:262: starting container process caused
"process_linux.go:339: container init caused \"rootfs_linux.go:57:
mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\"
at
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If I do not mount the nginx.conf file, everything is okay. So how can I mount the configuration file?
This should no longer happen (since v2.2.0.0); see here.
If you are using Docker for Windows, this error can happen if you have recently changed your password.
How to fix:
First make sure to delete the broken container's volume
docker rm -v <container_name>
Update: The steps below may work without needing to delete volumes first.
Open Docker Settings
Go to the "Shared Drives" tab
Click on the "Reset Credentials..." link on the bottom of the window
Re-Share the drives you want to use with Docker
You should be prompted to enter your username/password
Click "Apply"
Go to the "Reset" tab
Click "Restart Docker"
Re-create your containers/volumes
Credit goes to BaranOrnarli on GitHub for the solution.
TL;DR: Remove the volumes associated with the container.
Find the container name using docker ps -a then remove that container using:
docker rm -v <container_name>
Problem:
The error you are facing can occur if you previously ran the docker run command while the file was not present at its expected location in the host directory.
In that case the Docker daemon creates a directory inside the container in its place, which later fails to map to the proper file once the correct file is put in the host directory and the docker command is run again.
Solution:
Remove the volumes that are associated with the container. If you are not concerned about other container volumes, you can also use:
# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
This happens because Docker recognizes $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.
Test with
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
Otherwise, open a Docker issue.
It's working fine for me with the same configuration.
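The file-versus-directory mixup above can be reproduced without Docker at all. A small sketch that simulates what the daemon does when the host file is missing at first run (the temp directory and names are illustrative):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/conf/nginx.conf"        # wrong: a directory, not a file
if [ -d "$tmp/conf/nginx.conf" ]; then result=directory; else result=file; fi
echo "nginx.conf is a $result"         # prints: nginx.conf is a directory
rm -rf "$tmp"
```

Running the same -d test against your real $PWD/conf/nginx.conf tells you immediately whether you are in this situation.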
The explanation given by @Ayushya was the reason I hit this somewhat confusing error message, and the necessary housekeeping can be done easily like this:
$ docker container prune
$ docker volume prune
Answer for people using Docker Toolbox
There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.
Description of the problem:
Docker Toolbox bypasses the Hyper-V requirement of Docker by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs to have access to the project files from the host machine, which here it doesn't.
After I installed Docker Toolbox, it created the VirtualBox VM and only mounted C:\Users to the machine, as /c/Users. My project was in C:\projects, so nowhere on the mounted volume. When I sent the path to the VM, it would not exist, as C:\projects isn't mounted. Hence the error above.
Let's say I had my project containing my ngnix config in C:/projects/project_name/
Fixing it:
Go to VirtualBox, right click on Default (the VM from Docker) > Settings > Shared Folders
Click the small plus icon on the right side to add a new share. I used the following settings:
The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that now you can reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.
To use relative paths, please consider naming the path c/projects not projects
Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
In my docker file, I now reference the nginx.conf like this:
volumes:
- /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf
I had the same problem. I was using Docker Desktop with WSL in Windows 10 17.09.
Cause of the problem:
The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:
/c/Users/username/app
BUT, WSL instead uses the format:
/mnt/c/Users/username/app
This is confusing, because when checking the file in the console I saw it and everything seemed correct. I wasn't aware of Docker for Windows' expectations about volume paths.
Solution to the problem:
I bound custom mount points to bridge the Docker for Windows and WSL differences:
sudo mount --bind /mnt/c /c
As suggested in this amazing guide: Setting Up Docker for Windows and WSL to Work Flawlessly. Everything is working perfectly now.
Before I started using WSL I was using Git Bash and I had this problem as well.
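The path mismatch above is mechanical, so it can also be handled with a tiny helper instead of a bind mount. A hypothetical sketch (the function name is mine, not part of Docker or WSL):

```shell
# Strip the /mnt prefix WSL uses so the path matches the /c/... form
# Docker for Windows expects for -v volume mounts.
wsl_to_docker_path() {
  printf '%s\n' "${1#/mnt}"
}

converted=$(wsl_to_docker_path /mnt/c/Users/username/app)
echo "$converted"   # prints: /c/Users/username/app
```

You could then write -v "$(wsl_to_docker_path "$PWD")/conf:/etc/nginx" in your docker run commands.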
On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General
Maybe someone finds this useful. My compose file had the following volume mounted:
./file:/dir/file
As ./file did not exist, it was mounted into ABC (by default as a folder).
In my case I had a container resulted from
docker commit ABC cool_image
When I later created ./file and ran docker-compose up, I had the error:
[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.
The solution was:
touch ./file
docker run --name ABC -v "$(pwd)/file":/dir/file abc_image
# ... desired changes to ABC
docker commit ABC cool_image
I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount files, make sure your files and folders are inside the C drive.
Example: C:\Users\%USERNAME%\Desktop
I'll share my case here as this may save a lot of time for someone else in the future.
I had a perfectly working docker-compose setup on my macOS, until I started using docker-in-docker in GitLab CI. I was only given permission to work as Master in a repository, and the GitLab CI is self-hosted and set up by someone else; no other info was shared about how it is set up.
The following caused the issue:
volumes:
- ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf
Only when I noticed that this might be running under Windows (hours scratching my head) did I try renaming wordpress.conf to default.conf and just set the directory pathnames:
volumes:
- ./.docker/nginx/wordpress:/etc/nginx/conf.d
This solved the problem!
I had the same issue: docker-compose was creating a directory instead of a file, then crashing midway.
What I did:
Run the container without any mapping.
Copy the .conf file to the host location:
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
Remove the container (docker-compose down).
Put the mapping back.
Bring the container back up.
Docker Compose will find the .conf file and map it, instead of trying to create a directory.
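The steps above can be sketched like this, assuming Compose v2 (which provides docker compose cp) and a hypothetical service named web; adjust the names to your compose file:

```shell
docker compose up -d web                         # run once without the mapping
docker compose cp web:/etc/nginx/nginx.conf ./nginx.conf
docker compose down                              # remove the container
# add ./nginx.conf:/etc/nginx/nginx.conf under volumes:, then
docker compose up -d web                         # compose now maps the real file
```

Because ./nginx.conf now exists as a regular file before the mapping is applied, Compose binds it as a file instead of creating a directory in its place.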
unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I had a similar error with nginx in a Mac environment.
Docker didn't recognize the default.conf file correctly. Once I changed the relative path to an absolute path, the error was fixed.
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
In Windows 10, I just get this error without changing anything in my docker-compose.yml file or Docker configuration in general.
In my case, I was using a VPN with a firewall policy that blocks port 445.
After disconnecting from the VPN the problem disappears.
So I recommend checking your firewall and not using a proxy or VPN when running Docker Desktop.
Check Docker for windows - Firewall rules for shared drives for more details.
I hope this will help someone else.
Could you please use the absolute/complete path instead of $PWD/conf/nginx.conf? Then it will work.
Ex: docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b
root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9ead15988a9 nginx "nginx -g 'daemon of…" 7 seconds ago Up 6 seconds 0.0.0.0:90->80/tcp nginx-container5
e2b195a691a4 nginx "/bin/bash" 16 minutes ago Up 16 minutes 0.0.0.0:80->80/tcp test-nginx
I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:
echo $PWD
/mnt/d/nginx
docker run --name nginx -d \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
I resolved it by changing the path for the file on the host system to a UNIX style absolute path:
docker run --name nginx -d \
-v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
or using a Windows-style absolute path with / instead of \ as the path separator:
docker run --name nginx -d \
-v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
To strip the /mnt prefix that seems to cause problems from the path, I use bash parameter expansion:
-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf
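To see what that expansion does in isolation (bash-specific syntax: ${var/pattern} deletes the first occurrence of the pattern):

```shell
# Example path as WSL would report it; "mnt/" is removed once.
p=/mnt/d/nginx
stripped="${p/mnt\/}"
echo "$stripped"   # prints: /d/nginx
```

So /mnt/d/nginx/conf/nginx.conf becomes /d/nginx/conf/nginx.conf, the form Docker for Windows accepts.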
Updating VirtualBox to 6.0.10 fixed this issue for Docker Toolbox.
https://github.com/docker/toolbox/issues/844
I was experiencing this kind of error:
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root@a5020b4d6cc2:/# exit
exit
After updating VirtualBox, all commands worked just fine 🎉
I had the same head-scratcher: because the file did not exist locally, Docker created it as a folder.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile logs.txt/
For me, this did not work:
volumes:
- ./:/var/www/html
- ./nginx.conf:/etc/nginx/conf.d/site.conf
But this works fine (obviously I moved my config file inside a new directory too):
volumes:
- ./:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf
I had this problem under Windows 7 because my Dockerfile was on a different drive.
Here's what I did to fix the problem:
Open VirtualBox Manager
Select the "default" VM and edit its settings.
Select Shared Folders and click the icon to add a new shared folder
Folder Path: x:\
Folder Name: /x
Check Auto-mount and Make Permanent
Restart the virtual machine
At this point, docker-compose up should work.
I got the same error on Windows 10 after an update of Docker: 2.3.0.2 (45183).
... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I was using absolute paths like this //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf with a single / for the root.
Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.
Adding the root path as a file sharing resource will now permit Docker to access the resource to mount it to the container. Note that you may need to erase the contents on your Docker container to attempt to re-mount the volume.
For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.
In my case it was a problem with Docker for Windows and a partition encrypted by BitLocker. If your project files are on an encrypted partition, Docker doesn't see them properly after a restart and drive unlock.
All you need to do is restart Docker.
CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:
COPY --chown=node:node dist/app.js /usr/app/app.js
and then during development I mount that file via docker-compose:
volumes:
- ./dist/app.js:/usr/app/app.js
I would intermittently get the Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type. error or some version of it.
The problem was that CleanWebpackPlugin was deleting the file before webpack re-built it. If Docker tried to mount the file while it was deleted, Docker would fail. It was intermittent.
Either remove CleanWebpackPlugin completely or configure its options to play nicer.
I had this happen when the JSON file on the host had the executable permission set. I don't know the reason behind this.
For me, it was enough to just do this:
docker compose down
docker compose up -d
I have solved the mount problem. I am using a Windows 7 environment, and the same problem happened to me:
Are you trying to mount a directory onto a file?
The container has a default sync directory at C:\Users\, so I moved my project to C:\Users\ and recreated the project. Now it works.

Docker External File Access Not in /Users/ on OSX

So, despite Docker 1.3 now allowing easy access to external storage on OSX through boot2docker for files in /Users/, I still need to access files not in /Users/. I have a settings file in /etc/settings/ that I'd like my container to have access to. Also, the CMD in my container writes logs to /var/log in the container, which I'd rather have it write to /var/log on the host. I've been playing around with VOLUME and passing stuff in with -v at run, but I'm not getting anywhere. Googling hasn't been much help. Can someone who has this working provide help?
Since boot2docker now includes VirtualBox Guest Additions, you can share folders on the host computer (OSX) with guest operating systems (boot2docker-vm). /Users/ is automatically mounted, but you can mount/share custom folders. In your host console (OSX):
$ vboxmanage sharedfolder add "boot2docker-vm" --name settings-share --hostpath /etc/settings --automount
Start boot2docker and ssh into it ($boot2docker up / $boot2docker ssh).
Choose where you want to mount the "settings-share" (/etc/settings) in the boot2docker VM :
$ sudo mkdir /settings-share-on-guest
$ sudo mount -t vboxsf settings-share /settings-share-on-guest
Given that /settings is the volume declared in the Docker container, add -v /settings-share-on-guest:/settings to the docker run command to mount the host directory settings-share-on-guest as a data volume.
Works on Windows, not tested on OSX but should work.

Moving docker root folder to a new drive / partition

I am trying to move the "/var/lib/docker" folder from one disk to another since that is taking up too much space. I keep running into some errors relating to permissions!
According to these questions:
How do I move a docker container's image to a persistent disk?
How to run docker LXC containers on another partition?
My disk is mounted on "/data" and I copied the "/var/lib/docker" folder to "/data/docker"
This is what I tried:
Tried out the -g flag in DOCKER_OPTS with "/data/docker"
Tried creating a symbolic link from the new disk drive
I tried doing a bind mount from /data/docker
However, in all cases I get an error about missing permissions to write to "/dev/null" (as user root) when I try to launch services inside my container.
I simply copied the folder to the new disk. This copied all the permissions as well (this is an ext4 filesystem with the same filesystem-level permissions as the original disk on which Docker exists now).
Specs:
The filesystem I am using is aufs.
Docker version is 0.7.6
Ubuntu 12.04
How do I move the data properly? Do I need an upgrade first?
I just did the following and it seems to work well:
as root:
service docker stop
mv /var/lib/docker /data/
# reboot and get root
service docker stop
rm -rf /var/lib/docker && ln -s /data/docker /var/lib/
service docker start
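A slightly more cautious variant of the steps above keeps the original directory as a fallback until the new location is verified. This is a sketch, assuming the new disk is mounted at /data:

```shell
sudo service docker stop
sudo rsync -aHAX /var/lib/docker/ /data/docker/   # preserves perms, ACLs, xattrs
sudo mv /var/lib/docker /var/lib/docker.bak       # keep a fallback copy
sudo ln -s /data/docker /var/lib/docker
sudo service docker start                          # verify, then delete the .bak
```

The rsync flags matter here: a plain cp without archive options can drop ownership or extended attributes, which is one way to end up with the "/dev/null" permission errors described in the question.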
To add custom startup options to docker in Debian / Ubuntu (such as using a different data directory):
Edit /lib/systemd/system/docker.service:
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker -d $DOCKER_OPTS -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
In /etc/default/docker set :
DOCKER_OPTS="-g /srv/docker"
In more recent Docker versions on Ubuntu you need to edit /etc/docker/daemon.json instead:
{
"data-root": "/new/location"
}
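After editing daemon.json, restart the daemon and confirm the new root is in effect; docker info can report it directly:

```shell
sudo systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should print /new/location
```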
