How to install the NVIDIA driver in a Windows Docker container? - docker

I am doing a feasibility study on Docker (Windows containers only, not Linux). I am using Windows Server 1809 with container support on AWS, which comes bundled with Docker by default, on a g3 instance (Tesla M60).
1)
I know nvidia-docker is not available for Windows. I want to confirm that plain Docker does not support the GPU either, so I tried to install the NVIDIA driver inside the container. The Dockerfile I am using is shown below:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# copy the extracted NVIDIA driver package into the image
COPY nvidia-driver-folder nvidia-driver-folder
WORKDIR /nvidia-driver-folder
# silent install: no reboot, no EULA prompt
RUN setup.exe -s -clean -noreboot -noeula
The nvidia-driver-folder contains setup.exe for installing the driver. I also tried running the same command inside the container, that is:
docker run -it sampleapp cmd
Then, inside the container:
setup.exe -s -clean -noreboot -noeula
After that I checked Program Files inside the container, and no NVIDIA-related folder was created. On a normal system (my local machine), even if no GPU is present, the command above at least creates a folder named NVIDIA Corporation.
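(The sampleapp image here is just the result of a plain docker build of the Dockerfile above, e.g.
docker build -t sampleapp .
run from the folder that contains the Dockerfile and nvidia-driver-folder.)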
2)
Is there any other way to get the GPU working inside a Windows Docker container?
Please help me with the above two questions.

Related

Docker compose volumes: where can they be found on the Windows host?

I have a docker-compose file with a volumes section for the given container:
video-streaming:
  image: video-streaming
  build:
    context: ./video-streaming
    dockerfile: Dockerfile-dev
  container_name: video-streaming
  volumes:
    - /tmp/history/npm-cache:/root/.npm:z
I'm running Docker on Windows and the image is Linux-based.
When I enter the container and add a file to /root/.npm, then close the container and run it again, the file is still there, so the volume works. But where can I find its location on the Windows host?
You should find the volumes in C:\ProgramData\docker\volumes. The directory name will be a hash, which you can check with docker inspect.
If not, then note that you are simply bind-mounting the host directory /tmp/history/npm-cache into your container; that directory is your volume.
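A quick way to confirm where Docker actually put it is to inspect the running container's mounts (container name taken from your compose file; the exact output varies a bit by Docker version):
docker inspect video-streaming --format "{{ json .Mounts }}"
The Source field of the bind mount entry is the host-side path.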
When using Docker for Windows, the question is whether you are using the old Docker Toolbox or the newer Docker Desktop that uses WSL/WSL2.
Docker Desktop configured for Linux containers and WSL/WSL2
The Docker engine is actually not running on Windows but inside the WSL instance; Docker Desktop just makes the docker commands available on Windows for ease of use.
So the volumes are probably inside that WSL instance (Linux).
You can find out which WSL instances you have by typing wsl -l in PowerShell.
Their filesystem is available under the \\wsl$ path on Windows.
In your case, the volume is not named; it is in the exact location you specified for it:
/tmp/history/npm-cache, but inside the WSL instance that the Docker engine is installed on.
Through WSL
In PowerShell, run wsl ls /tmp/history; you should see npm-cache there.
The wsl command lets you run Linux commands on the actual Linux WSL instance (the default one), which is probably the one running the Docker engine.
Alternatively, you can connect to that Linux instance by just typing wsl and going to that path with cd /tmp/history.
Once inside the WSL instance, you can run explorer.exe . to open Explorer at that location (on Windows).
Notice that the path will always start with \\wsl$, so you can go to that path on Windows and see all of your WSL instances and their filesystems. Try searching for "npm-cache" in Explorer; you might find it.
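Put together, a rough version of that whole check looks like this (assuming the default WSL instance is the one running the Docker engine):
# from PowerShell
wsl -l -v
wsl ls /tmp/history
# drop into the default WSL instance
wsl
# now inside Linux
cd /tmp/history/npm-cache
explorer.exe .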
Via Docker commands
docker volume ls will give you all of the available volumes. Yours is not named, so it is probably one of the UUID-looking ones. You can inspect each one to find its location (probably still inside the WSL instance):
docker volume inspect {the-uuid-of-the-volume}
Once you inspect it, you will see that each volume has a Mountpoint field which points to the location of the volume (inside the WSL instance).
Unnamed volumes are created with permissions different from your user's, so you might need sudo to interact with them from the WSL terminal.
If you go through the Windows File Explorer on \\wsl$, you might not need extra permissions.
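A rough example of that flow (the volume ID here is made up for illustration):
docker volume ls
docker volume inspect 8d3c0cf7e1a2 --format "{{ .Mountpoint }}"
The second command prints just the Mountpoint path of that volume.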

Docker Volume Mapping: File is not getting copied from Docker linux container to local windows machine

I am trying to do volume mapping from a Linux container to my local Windows machine.
Here are the steps:
Installed the latest version of Docker Desktop for Windows (2.4.0.0); it is currently using the WSL2-based engine.
Started a container using my own image built on top of the alpine image. The working directory is set to '/app' in the Linux container.
An output file (Report.html) is created under a folder (Reports) in my Linux container once it runs. I am able to view the file in the container.
I would like to save the output file to a folder named 'Output' under my user directory on my local Windows machine.
Ran the following command in PowerShell in admin mode:
docker run -it -v ~/Output:/app/Reports <imagename>
Issue:
The output file (Report.html) does not get copied to the Output folder on the local machine.
Note:
I don't see the option to select a drive for file sharing in Docker settings.
Please guide me on how I can resolve this.
Using an absolute path in place of ~ worked, i.e. docker run -v C:/Users/12345/Output:/app/Reports <imagename>
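If you prefer not to hard-code the user directory, PowerShell's $HOME variable should also work as the absolute path (image name kept as a placeholder):
docker run -it -v "$HOME/Output:/app/Reports" <imagename>
PowerShell expands $HOME inside the double quotes to C:\Users\<user>; a bare ~ is not expanded when passed to an external command like docker, which is likely why the original command did not behave as expected.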

Docker volumes on WSL2 using Docker Desktop

I'm just trying out WSL 2 with Docker for Windows and I'm having an issue with mounted volumes:
version: "3.7"
services:
node:
build: .
container_name: node
hostname: node
volumes:
- ./app:/app
stdin_open: true
The container builds and starts fine, and I can access it with docker exec, but the /app folder inside the container isn't bound to my laptop's app folder. However, the right path is actually correctly mounted on the running container:
(here I run pwd on the host to check that it matches exactly what is mounted in the container)
➜ app pwd
/mnt/c/Users/willi/devspace/these/app
And a Portainer screen showing which paths are mounted where in the container confirms that everything matches.
The files I create in the app folder on the host are not visible in the app folder of the container, and vice versa. This is weird and I don't know how to debug it.
Additional info:
Windows 10 Pro 10.0.19041
Docker for Windows version: 2.3.0.4
docker version output in WSL: 19.03.12
docker-compose version: 1.26.2
Thanks
As @Pablo mentioned, the best practice seems to be using the WSL filesystem for mapped volumes.
Take a look at the Docker documentation concerning WSL 2:
Best practices
To get the best out of the file system performance when bind-mounting files:
Store source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path>) in the Linux filesystem, rather than the Windows filesystem.
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
If you have concerns about the size of the docker-desktop-data VHDX, or need to change it, take a look at the WSL tooling built into Windows.
If you have concerns about CPU or memory usage, you can configure limits on the memory, CPU, Swap size allocated to the WSL 2 utility VM.
To avoid any potential conflicts with using WSL 2 on Docker Desktop, you must uninstall any previous versions of Docker Engine and CLI installed directly through Linux distributions before installing Docker Desktop.
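Applied to the setup above, that means keeping the project itself inside the WSL distro rather than under /mnt/c. A rough sketch from a WSL shell, using the paths from the question (the target directory ~/these is just an example):
cp -r /mnt/c/Users/willi/devspace/these ~/these
cd ~/these
docker-compose up --build
With the project living in ~/these, the relative ./app bind mount resolves to the Linux filesystem, so file change events propagate into the container.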
Everything works perfectly now; it turned out that my problem was that my WSL distro was still on version 1. You can verify it with the command wsl -l -v:
  NAME                   STATE     VERSION
* docker-desktop-data    Stopped   2
  docker-desktop         Stopped   2
  Ubuntu-20.04           Running   2    <- This was at 1
Upgrading to WSL 2 fixed it.
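For reference, converting the distro is a single command (distro name taken from the wsl -l -v output above):
wsl --set-version Ubuntu-20.04 2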

Running attended installer inside docker windows/servercore

I've been attempting to move an application to the cloud for a while now and have most of the services set up in pods running in a k8s cluster. The last piece has been giving me trouble: I need to set up an image with an older piece of software that cannot be installed silently. I then attempted, in my Dockerfile, to install its .NET dependencies (2005.x86, 2010.x86, 2012.x86, 2015.x86, 2015.x64) and manually transfer a local install of the program, but that also did not work.
Is there any way to run through a guided install in a remote Windows image, or to determine all of the file changes made by an installer in order to make them manually?
You can track the changes done by the installer following these steps:
start a new container based on your base image
docker run --name test -d <base_image>
open a shell in the new container (I am not familiar with Windows so you might have to adapt the command below)
docker exec -ti test cmd
Run whatever commands you need inside the container. When you are done, exit the container.
Examine the changes to the container's filesystem:
docker container diff test
You can also use docker container export to export the container's filesystem as a tar archive, and then docker image import to create an image from that archive.
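A rough sketch of that capture step (container and image names are just examples):
docker container export test -o app-install.tar
docker image import app-install.tar my-app-base:captured
Note that an image created with docker image import does not carry over metadata such as CMD or WORKDIR, so you may need to set those again (docker image import accepts --change for that).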

Same Ubuntu image fetched for docker machine and docker container but more binaries are available in docker-machine

I created a container and logged in
docker run -it -d ubuntu bash
I checked fdisk -l; it is NOT available.
But when I create a machine using:
docker-machine create -d "virtualbox" --swarm-image "ubuntu" dev3
The command fdisk is available in the machine.
Question: I guess the binaries come from the image, so how is this happening? And how can I add fdisk without creating a custom image or installing it after container creation?
Both are on the same host.
Your two commands are doing completely different things.
In the first case, you're pulling down the ubuntu docker image and starting a container.
In the second case, you're building a virtual machine in VirtualBox using a VM image named ubuntu. This is a completely different operation, and the ubuntu VM image has nothing to do with the ubuntu container image. The minimal set of packages required to actually boot a machine is substantially larger than what is required to start a container, so it's no surprise that the virtual machine has packages you don't find in the container image.
For example, a container doesn't interact with block devices, so there is no need to have fdisk installed. If you really need fdisk in a container image (which, again, is unlikely, although there are some use cases where that makes sense), you would build a custom image from a Dockerfile, e.g.:
FROM ubuntu:eoan
RUN apt-get update; apt-get -y install fdisk
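For example, to build that image and confirm the binary is present (the image tag is arbitrary):
docker build -t ubuntu-fdisk .
docker run --rm ubuntu-fdisk fdisk --version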
