I am working on an IoT Edge module with image detection capabilities. For the image processing/analysing I am using Detectron, which needs to run in a Docker NVIDIA runtime.
Is it possible to enable an NVIDIA runtime for IoT Edge modules and Docker Moby, and how? I am not able to figure out how to make it work. There are entries about the topic here, but I am still not able to get it working:
https://github.com/moby/moby/issues/23917
https://github.com/NVIDIA/nvidia-docker/wiki/Internals
I figured out how to get it working with Docker CE; unfortunately, the documentation says Docker CE is not supported by IoT Edge (only Moby is). I haven't found any side effects yet, but for production it would be nice to understand the impact.
You can try setting the runtime to nvidia in the create options in your deployment.json, in addition to any other create options you specify.
"createOptions": {
"HostConfig": {
"runtime": "nvidia"
}}
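For context, here is a minimal sketch of where this sits in the modules section of a deployment manifest. The module name and image are placeholders, and note that in the final deployment.json the createOptions value is typically a stringified JSON object:

"modules": {
    "detectron": {
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
            "image": "myregistry.azurecr.io/detectron:latest",
            "createOptions": "{\"HostConfig\":{\"runtime\":\"nvidia\"}}"
        }
    }
}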
I have two questions, as I am currently trying to learn Minikube and now want to install it:
1- Which driver is preferable for Minikube (KVM or Docker)? Does one have some sort of advantage over the other?
2- Is it possible to install and run Minikube inside a VM managed by KVM?
1 - There is no "better" or "worse". Using Docker is the default and with that the most supported version.
2 - Yes, it is possible to run Minikube inside a VM.
Found more answers to my question after some digging in the documentation.
https://minikube.sigs.k8s.io/docs/drivers/
There are several drivers available, but when using Minikube inside a VM the preferred driver will be either none or ssh. I ran into some networking issues when I initially used the docker driver, but resolved them after digging more into the documentation. Otherwise docker is fine, as long as you can sort out networking issues between the host and the guest VM (see the sketch below).
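For reference, the driver is selected at start time; a minimal sketch, assuming minikube and a container runtime are already installed in the guest:

minikube start --driver=docker      # default and most widely supported
minikube start --driver=kvm2        # needs libvirt/KVM (and nested virtualization inside a VM)
sudo minikube start --driver=none   # runs components directly on the guest, avoids nested networking

Note that the none driver requires root and an existing container runtime on the machine itself.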
I'm trying to set up a Container-Optimized OS (COS) on GCE with a GPU, following the instructions at https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus. After creating the VM, the instructions say to SSH in and run cos-extensions install gpu. That works; you can see during the install that it runs nvidia-smi, which prints out the driver version (440.33.01) and connects to the card.
But it installs the NVIDIA binaries and libraries in /var/lib/nvidia, which is mounted noexec in this OS (it's very locked down). That means none of the libraries or utilities work. And when you mount them into a Docker container, they don't work there either; they're still noexec.
The only workaround I've found is to copy the whole /var/lib/nvidia dir to a tmpfs scratch disk and use it from there.
Am I using it wrong, or is it just broken?
This doesn't look like a containerd issue, but rather expected Container-Optimized OS behaviour: COS provides another level of hardening by supplying security-minded default values for several features.
If you look at the documentation for the Container-Optimized OS filesystem, everything under /var is mounted noexec except for
/var/lib/google
/var/lib/docker
/var/lib/toolbox
Those are mounted with writable, executable and stateful properties.
On the other hand, Ubuntu with containerd does not apply the same strict exec/noexec mount restrictions as COS, so it could be a good idea to use Ubuntu-based images instead of COS as a workaround.
Another option is to copy the contents of /var/lib/nvidia under another mount point that was not mounted with the noexec option, as you already did.
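For illustration, a rough sketch of that copy workaround; /mnt/nvidia is a hypothetical path, and tmpfs mounts are executable by default:

sudo mkdir -p /mnt/nvidia
sudo mount -t tmpfs tmpfs /mnt/nvidia
sudo cp -r /var/lib/nvidia/* /mnt/nvidia/
/mnt/nvidia/bin/nvidia-smi   # should now execute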
Turns out I wasn't doing anything wrong. This is confirmed now as a bug in cos-extensions: https://issuetracker.google.com/issues/164134488
Odd, because it seems like this would have shown up in testing.
There aren't any good production workarounds at the moment, because as a user it's hard to modify COS's behavior without some advanced scripting.
How do I set the shared drives in Docker for Windows? I am using the latest version 18, both Stable and Edge. My settings screen is shown below. It's missing some options like Shared Drives, Advanced and Network, which are shown in the second image. Why am I missing these options?
My settings:
Screen from a website:
It seems you are running Docker for Windows using "Windows containers". If you switch to "Linux containers" you'll see the "Shared Drives" option. Take a look at this video.
According to the Docker documentation, shared drives for Windows containers are not implemented:
Volume mounting requires shared drives for Linux containers (not for Windows containers).
Update:
Since 2018, Docker Desktop has been using a new UI. I recorded a new video showing how to solve this problem.
Update:
If you are using WSL 2 you will experience the same problem. Take a look at this video.
In the new UI these settings are placed under Resources.
I ended up here because "Shared Drives" was missing from my Docker settings. If you are missing it too, but Docker is set for Linux containers, then it is because of WSL 2.
If you are using Docker on WSL 2 there is no such option, but you can directly attach volumes from the filesystem with docker run -v c:\...\your-folder:/mount ... without specifying them in the Docker settings.
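For example (the Windows path here is hypothetical; assumes Docker Desktop with the WSL 2 backend):

docker run --rm -it -v C:\Users\me\my-project:/mount alpine ls /mount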
Can issues result if a Docker image requires a kernel feature not provided by the host OS kernel (e.g. an image which requires a very specific kernel version)? Is this issue guaranteed to be prevented in some way?
Can issues result if a docker image requires a kernel feature not provided by host OS kernel
Yes, but note that the Docker installation page recommends a minimum kernel level for Docker itself to run.
For instance, on Red Hat, "your kernel must be 3.10 at minimum".
If the image you run requires more recent kernel features, it won't work even though docker itself will.
Is this issue guaranteed to be prevented in some way?
Not really, as illustrated in "Docker - the pain of finding the right distribution+kernel+hardware combination".
As noted in "Can a docker image based on Ubuntu run in Redhat?"
Most Linux kernel variants are sufficiently similar that applications will not notice. However if the code relies on something specific in the kernel that is not there, Docker can't help you.
Plus:
System architecture is a limitation.
x86_64 images won't run on ARM, for example; i.e. you won't run the official ubuntu image on a Raspberry Pi.
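If in doubt, you can compare the host and the image with standard commands:

uname -m    # host architecture, e.g. x86_64 or aarch64
uname -r    # host kernel release
docker image inspect --format '{{.Os}}/{{.Architecture}}' ubuntu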
The Apache Mesos page describes that Mesos enables task isolation through "Linux Containers". What container technology is this, LXC?
Does "native Docker support" mean that the above container technology can be swapped to Docker? What does it mean when Mesos states that Docker can be used either as an Executor or a Task? If Docker is used as an Executor, doesn't it mean that there should be a "Docker Framework" somewhere?
Actually, Mesos supports several containerizers:
Docker (see http://mesos.apache.org/documentation/latest/docker-containerizer/)
Mesos Containerizer (default)
Custom External Containerizer (see http://mesos.apache.org/documentation/latest/external-containerizer/)
"Native Docker support", in my understanding, refers to the support for many of the Docker-specific options (see for example the configuration options here: http://mesos.apache.org/documentation/latest/configuration/).
Short Update: Please note that option 3 (external containerizer) is deprecated.
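As a rough sketch, the Docker containerizer is enabled on an agent via the --containerizers flag (the master address and work directory below are placeholders):

mesos-agent --master=zk://zk.example:2181/mesos \
            --containerizers=docker,mesos \
            --work_dir=/var/lib/mesos

With both listed, the agent uses the first containerizer that supports a given task, so tasks without a Docker image still run under the Mesos containerizer.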
Do you know what a Virtual Machine (like VMware or Virtual PC) is? Docker is something like a much more 'lightweight' virtual machine (of course it's more than that, but let's keep it simple here). Further information can be found here http://en.wikipedia.org/wiki/Docker_%28software%29 and here https://www.docker.com/.