Does adding :cached to a volume mount for Mac performance tuning affect Docker for Windows volume mounts?
I'm working on a team with both Mac and Windows machines, and the mount still seems to work on Windows, but I want to see if anyone has more to add on this.
Here is the docker docs link https://docs.docker.com/docker-for-mac/osxfs-caching/
But they don't say anything about how it affects Windows.
An example they give in their docs:
docker run -v /Users/yallop/project:/project:cached alpine command
Cheers.
It may only have an effect on Mac.
In more detail: the :delegated and :cached flags are redundant since Docker Desktop 2.4.0.0, where gRPC FUSE file sharing is used by default.
Source: https://github.com/docker/for-mac/issues/5402
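For teams sharing a compose file, it should be safe to keep the suffix checked in: as noted above it only has an effect on Mac, and (as the question observes) Windows machines still parse it fine. A minimal compose entry along these lines (paths are illustrative) is what that typically looks like:
services:
  app:
    image: alpine
    volumes:
      - ./project:/project:cached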
Related
I'm using a platform (Cytomine) on Ubuntu 18.04 to run some containerized deep learning applications (the platform handles the Docker images and containers automatically, so I only need to create the image and provide its download URL to the platform). So far it's working well, but now I need to enable GPU support to run the model efficiently. So I did some local tests with nvidia-docker to run the model container manually with GPU support; it was really easy to get working because I just had to add one option to the run command:
docker run --gpus all
However, because I cannot add this option to the code on the Cytomine platform I need to find a way of adding/enabling that option by default to all the containers run by docker.
I tried adding this option to the files /etc/docker/daemon.json and /etc/docker/key.json and then restarted Docker with sudo systemctl restart docker. However, it didn't work.
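For reference, the daemon.json change that is commonly suggested for making GPUs available to every container (registering the NVIDIA runtime and making it the default; this assumes nvidia-container-runtime is installed) looks roughly like this, which is a different approach from pasting --gpus all into the file:
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}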
Also, I found how to create docker config files (docker config); however, this seems to work only with Docker Swarm and I'm not going to use a Swarm for this project.
Thus, I'm looking for a straightforward solution that can be deployed properly. Is there any way to enable this option (--gpus all) by default when running any Docker container? (like somehow including it on the Dockerfile?)
Thanks!
Here is my docker-compose file (sorry for the images but the WSL terminal won't let me copy-paste indented text):
The intention is that external_stuff contains my mounts directory. When I look in the mounts directory I clearly see my drives:
However, when I run docker-compose up I only see a single folder ("c") as opposed to all of my drives, and when I navigate into that folder it appears empty:
I tried running sudo -E docker-compose up but that makes no difference.
What's going on here, and how do I fix it?
My system:
Docker Desktop version 2.1.0.5
Windows build 1903 / OS Build 18362.476
I think I'm running WSL 1, but I really have no idea. If I run wsl -l from PowerShell it just spits out a bunch of command-line options.
I'm running Ubuntu 18.04.2 LTS directly from the "Ubuntu" app in Windows.
Dockerfile:
FROM python:3
Docker Desktop runs in a VM, and you need to share your drives with it.
When running Docker Desktop through WSL, you will still need to share the drives you are using.
For this, you simply have to go into Docker Desktop Settings > Shared Drives and then allow sharing of your drives.
Then you can work with Docker Desktop through WSL using Linux commands, Linux paths, etc.
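For example (the host path is hypothetical), once the C drive is shared, a bind mount in the compose file can reference it and the contents should appear in the container; depending on your WSL mount configuration the host side may need to be written as /c/... or /mnt/c/...:
version: "3"
services:
  app:
    image: python:3
    volumes:
      - /c/Users/me/external_stuff:/external_stuff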
Disclaimer: WSL and Docker Desktop can be really unstable with shared volumes, permissions, inotify events, etc. You can find more information about these problems in the answer to this question: Docker is not recompiling upon changing anything in angular project in windows
I need to know where Docker volumes are located when using docker-machine on macOS.
The installation uses boot2docker, so a VM runs behind the scenes.
Example:
docker volume create test-data
docker inspect shows a path, but where can I find the specific (physical) location?
It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
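If all you need is to see what is inside the volume, you can do that without entering the VM at all by mounting it into a short-lived container; a minimal sketch using the test-data volume from the question:
# list the contents of the named volume test-data via a throwaway Alpine container
docker run --rm -it -v test-data:/data alpine ls -la /data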
macOS uses a virtual machine, so it's different from Linux, where you can access volumes directly at /var/lib/docker/volumes.
On macOS you have to connect to that VM to find your volumes.
Suppose you use persistent data volumes in Docker and you want to access them from the command line.
If your docker host is Linux, that’s not a problem; you can find Docker volumes by /var/lib/docker/volumes path.
However, that’s not the case when you use Docker for Mac.
Try cd /var/lib/docker/volumes from your macOS terminal and you'll get nothing.
You see, your Mac machine isn’t a real Docker host. Docker for Mac runs a virtual machine and hides it from you to make things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect to that VM.
In order to accomplish this, we need to use a serial terminal on Mac. There’s a terminal application called “screen” that’s going to help us.
We need to “screen into” the Docker driver by executing a command:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you’re inside Docker’s VM and you can cd into volumes dir by typing: cd /var/lib/docker/volumes
Profit, you got there!
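From there, something along these lines (using the test-data volume from the question as an example) shows the files on disk; named volumes keep their data under a _data subdirectory:
cd /var/lib/docker/volumes
ls
ls test-data/_data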
If you need to transfer files from your macOS host into the Docker host, you can refer to File Sharing.
Hope this helps you!
If you have installed docker using snap then volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/
If you installed Docker the official way, volumes are located at:
/var/lib/docker/volumes/
Normally, if you want to "know" where a volume lives, you would map a volume to the local filesystem. When you create a named volume you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
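The output is a JSON array whose Mountpoint field is the path in question; an illustrative example (the exact fields and values depend on your setup) looks like:
[
    {
        "CreatedAt": "2021-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/test-data/_data",
        "Name": "test-data",
        "Options": {},
        "Scope": "local"
    }
]
On macOS, as the other answers explain, that Mountpoint lives inside the VM rather than on your Mac's filesystem.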
In Nvidia's developer page (https://devblogs.nvidia.com/nvidia-docker-gpu-server-application-deployment-made-easy/)
It states that nvidia-docker provides "driver-agnostic CUDA images".
I would just like to clarify whether this is specific only to the driver version, or whether it also applies to the OS.
For example:
Host = CentOS
Docker Image/Container = Ubuntu
Does using nvidia-docker provide a way to use the CentOS host's Nvidia driver in the Ubuntu Docker container?
Currently I keep two Dockerfiles, one supporting an Ubuntu host and one a CentOS host, and I manually mount /dev/nvidia0 and copy the library files (or install the driver) inside the Docker image.
I've already asked Nvidia about this, but I'm still waiting for them to answer.
I'll be trying it myself too, but I thought I'd try my luck in case anyone on SO already knows the answer.
Thank you in advance guys.
I've tested this and it does work.
"driver-agnostic CUDA images" is not only limitted to different versions of the driver but also across different OS (binary)
Thank you.
I'm trying to create a container to run a program. I'm using a preconfigured image, and now I need to run the program. However, it's a machine learning program and it needs a dataset from my computer to run.
The file is too large to copy into the container. It would be best if the program running in the container could read the dataset from a local directory on my computer, but I don't know how to do this.
Is there any way to do this reference with some docker command? Or using Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
Keep in mind, if you are using Docker for Mac or Docker for Windows there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory.
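For completeness, the equivalent on Docker for Windows would look something like this (the path is illustrative, and the drive has to be shared in Docker Desktop's settings first):
docker run -v C:\Users\andy\mydata:/mnt/mydata myimage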
Update July 2019:
I've updated the documentation link and naming to be correct. These types of mounts are called "bind mounts". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mount paths on the host).