docker build with nvidia runtime

I have a GPU application that does unit-testing during the image building stage.
With Docker 19.03, one can specify nvidia runtime with docker run --gpus all but I also need access to the gpus for docker build because I do unit-testing. How can I achieve this goal?
For older versions of Docker that used nvidia-docker2 it was not possible to specify a runtime during the build stage, BUT you could set the default runtime to nvidia, and docker build worked fine that way. Can I do that in Docker 19.03, which doesn't need nvidia-docker anymore? If so, how?

You need to use nvidia-container-runtime, as explained in the docs: "It is also the only way to have GPU access during docker build".
Steps for Ubuntu:
Install nvidia-container-runtime:
sudo apt-get install nvidia-container-runtime
Edit/create the /etc/docker/daemon.json with content:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
Restart docker daemon:
sudo systemctl restart docker
Build your image (now GPU available during build):
docker build -t my_image_name:latest .
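You can verify the new default took effect with docker info (it should report the default runtime as nvidia), and a throwaway Dockerfile like the sketch below should print the GPU table at build time. The CUDA base tag here is only an example; pick one matching your driver:
FROM nvidia/cuda:11.0-base
# runs during the build; succeeds only if the nvidia runtime is the default
RUN nvidia-smi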

A "solution" I found is to first run a base image with the host nvidia drivers mounted on it
docker run -it --rm --gpus ubuntu
And then build my app within the container manually and commit the resulting image, roughly as sketched below.
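A minimal sketch of that commit workflow, with placeholder container and image names:
docker run -it --gpus all --name gpu-build ubuntu
# ...install dependencies, build, and run the unit tests inside the container, then exit...
docker commit gpu-build my_image_name:latest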
This is not ideal and it would be best to have access to nvidia-smi during the build phase.

How to install qemu emulator for arm in a docker container

My goal is to build a Docker Build image that can be used as a CI stage that's capable of building a multi-architecture image.
FROM public.ecr.aws/docker/library/docker:20.10.11-dind
# Add the buildx plugin to Docker
COPY --from=docker/buildx-bin:0.7.1 /buildx /usr/libexec/docker/cli-plugins/docker-buildx
# Create a buildx image builder that we'll then use within this container to build our multi-architecture images
RUN docker buildx create --platform linux/amd64,linux/arm64 --name=my-builder --use
^ This builds the container I need, but does not include the arm64 emulator. That means when I try to use it to build a multi-architecture image via a command like docker buildx build --platform=$SUPPORTED_ARCHITECTURES --build-arg PHP_VERSION=8.0.1 -t my-repo:latest ., I get the error:
error: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c apt-get update && apt-get -y install -q ....
The solution is to run docker run --rm --privileged tonistiigi/binfmt --install arm64 as part of the CI steps, which uses the buildx container I previously built. However, I'd really like to understand why the emulator cannot seem to be installed in the container by adding something like this to the Dockerfile:
# Install arm emulator
COPY --from=tonistiigi/binfmt /usr/bin/binfmt /usr/bin/binfmt
RUN /usr/bin/binfmt --install arm64
"I'd really like to understand why the emulator cannot seem to be installed in the container"
Because when you perform a RUN command, the result is to capture the filesystem changes from that step and save them as a new layer in your image. But the qemu setup command isn't really modifying the filesystem; it's modifying the host kernel, which is why it needs --privileged to run. You'll see evidence of those kernel changes in /proc/sys/fs/binfmt_misc/ on the host after configuring qemu. It's not possible to pass that flag to an image build: all steps run from a Dockerfile are unprivileged, with no access to host devices and no ability to alter the host kernel.
The standard practice in CI systems is to configure the host in advance, and then run the docker build. In GitHub Actions, that's done with the setup-qemu-action before running the build step.
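On other CI systems the equivalent is two separate steps, with the privileged qemu registration done on the host before the unprivileged build, e.g.:
# one-time setup per CI runner; needs --privileged because it alters the host kernel
docker run --rm --privileged tonistiigi/binfmt --install arm64
# the cross-platform build itself can then run unprivileged
docker buildx build --platform linux/amd64,linux/arm64 -t my-repo:latest .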

Linking a graphics card in Docker via OpenCL

I am on Ubuntu 19.10 and would like to use OpenCL inside docker.
Inside the docker container I have installed opencl-headers, ocl-icd-opencl-dev and clinfo.
When I run clinfo on my machine outside of docker I have following response:
Number of platforms                1
  Platform Name                    NVIDIA CUDA
  Platform Vendor                  NVIDIA Corporation
  Platform Version                 OpenCL 1.2 CUDA 10.2.159
...
When the same command is run inside docker:
Number of platforms                0
I thought the docker container should be able to use my graphics card, but I am unsure if/how I should allow it.
Thanks for any insights.
You need to start docker with the --gpus all option, e.g.:
docker run --rm --gpus all nvidia/opencl clinfo
You can also expose just a specific GPU:
docker run -it --rm --gpus "device=0" ubuntu nvidia-smi
Read more here: https://docs.docker.com/config/containers/resource_constraints/
If you get this error:
$ docker run --rm --gpus all nvidia/opencl clinfo
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
On Ubuntu, you can remedy it with:
$ sudo apt install -y nvidia-container-toolkit; sudo systemctl restart docker
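If you would rather bake the OpenCL userland into your own image than use nvidia/opencl, a minimal Dockerfile sketch looks like this; the ENV line asks the NVIDIA container toolkit to mount the compute libraries (which include the OpenCL ICD), and the base image tag is just an example:
FROM nvidia/cuda:10.2-base
# OpenCL headers, the ICD loader, and the clinfo diagnostic tool
RUN apt-get update && apt-get install -y opencl-headers ocl-icd-opencl-dev clinfo
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
Running it with docker run --rm --gpus all <image> clinfo should then report the NVIDIA platform.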

Dev environment for gcc with Docker?

I would like to create a minimalist dev environment for occasional developers who only need Docker.
The ecosystem would have:
code-server image to run Visual Studio Code
gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked into docker-in-docker, which could be a solution:
Docker
code-server
docker run -it -v ... gcc make
docker run -it -v ... git git commit ...
docker run -it -v ... ubuntu ./program
But it seems perhaps a bit overkill. What would be the proper way to have a full, well-separated dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium)?
I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then you can, for example, copy files to the image or specify commands to run:
RUN apt-get update && apt-get install -y gcc make
RUN apt-get install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container:
CMD /bin/bash
Then you can build the image with docker build -f Dockerfile -t devenv:latest . (the trailing dot sets the build context). This builds a new image named devenv with the tag latest from the file Dockerfile.
Then you can create and start a container from the image using docker run devenv:latest.
Since the CMD above starts a shell, run it interactively with docker run -it devenv:latest.
If you want to, you can also use the code-server base image instead of ubuntu:latest.
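Putting those pieces together, a minimal sketch of such a Dockerfile (the package list matches the tools named in the question) could be:
FROM ubuntu:latest
# compiler, build tool, and version control for the dev environment
RUN apt-get update && apt-get install -y gcc make git
# drop into a shell by default
CMD ["/bin/bash"]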

How to extend existing docker container?

The tensorflow docker image is available at https://hub.docker.com/r/tensorflow/tensorflow/. To extend this image with additional libraries such as requests, I'm aware of two options:
Run the container and pip install requests inside it
Append pip install requests to the Dockerfile that builds this image
Is there an alternative option? Something like creating the tensorflow/tensorflow container from a Dockerfile and then installing requests on top of it.
Reading How to extend an existing docker image?, is the way to accomplish this to create a Dockerfile with these contents?:
FROM tensorflow/tensorflow
RUN pip install requests
Your original assertion is correct; create a new Dockerfile:
FROM tensorflow/tensorflow
RUN pip install requests
now build it (note that the name should be lower case):
docker build -t me/mytensorflow .
run it:
docker run -it me/mytensorflow
execute a shell in it (docker ps -ql gives us an id of the last container to run):
docker exec -it `docker ps -ql` /bin/bash
get logs from it:
docker logs `docker ps -ql`
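To confirm the new layer actually contains the library, you can also run a throwaway container against the image (python is on the PATH in tensorflow/tensorflow):
docker run --rm me/mytensorflow python -c "import requests; print(requests.__version__)"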
The ability to extend other images is what makes Docker really powerful. You can also go look at their Dockerfile:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker
and start from there without extending their image. This is a best practice for people using Docker in production, so you know everything is built in-house and not by some hacker sneaking stuff into your infrastructure. Cheers, and happy building!
You can also enter the running container via:
docker exec -it CONTAINER_ID /bin/bash
or, if a name is set:
docker exec -it CONTAINER_NAME /bin/bash
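If you don't know the ID or name, docker ps lists both for every running container:
docker ps   # CONTAINER ID is the first column, NAMES the last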

Docker image versioning and lifecycle management

I am getting into Docker and am trying to better understand how it works out there in the "real world".
It occurs to me that, in practice:
You need a way to version Docker images
You need a way to tell the Docker engine (running on a VM) to stop/start/restart a particular container
You need a way to tell the Docker engine which version of an image to run
Does Docker ship with built-in commands for handling each of these? If not what tools/strategies are used for accomplishing them? Also, when I build a Docker image (via, say, docker build -t myapp .), what file type is produced and where is it located on the machine?
Docker has all you need to build images and run containers. You can create your own image by writing a Dockerfile, or pull one from Docker Hub.
In the Dockerfile you specify another image as the basis for your image and run commands to install things. Images can have tags; for example, the ubuntu image can have the latest or 12.04 tag, specified with the ubuntu:latest notation.
Once you have built the image with docker build -t image-name . you can create containers from that image with docker run --name container-name image-name.
docker ps to see running containers
docker rm <container name/id> to remove containers
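The stop/start/restart part of the question is also covered by built-in commands, and docker tag adds a version tag to an image you already built (names below are placeholders):
docker stop my-container
docker start my-container
docker restart my-container
docker tag myapp:latest myapp:1.0   # same image, additional version tag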
Suppose we have a Dockerfile in a Git repository, as below.
-> Build from Git without versioning:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments
Here, fecomments is the branch name and comments is the folder name.
-> Building from Git with a tag and version:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments -t lordash/comments:v1.0
-> Now if you want to build from a directory: first go to the comments directory, then run sudo docker build .
-> If you want to add a tag you can use the -t or --tag flag:
sudo docker build -t lordash . or sudo docker build -t lordash/comments .
-> Now you can version your image with the help of tags:
sudo docker build -t lordash/comments:v1.0 .
-> You can also apply multiple tags to an image:
sudo docker build -t lordash/comments:latest -t lordash/comments:v1.0 .
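Once tagged, those versions can be pushed to a registry such as Docker Hub for distribution (assuming you are logged in via docker login):
sudo docker push lordash/comments:v1.0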
