Create a GPU-enabled container in a Dockerfile

I'm new to Docker and I am trying to run TensorFlow within a GPU-enabled Docker container. I have successfully followed this guide: https://www.tensorflow.org/install/docker. I understand that when I want to run a GPU-enabled container I have to add the --gpus all argument to the run command, like so: docker run --gpus all tensorflow/tensorflow:latest-gpu. I'm wondering, however, if there is a way to create a Dockerfile that builds an image with GPU support already enabled, so that the --gpus all argument can be omitted from the run command. Or can I pass this as an argument to docker build rather than docker run?

I found the answer to this for my specific use case. I was trying to set up a Docker container to work with VS Code. VS Code has the concept of "dev containers": to create/use a dev container you must create a devcontainer.json file, in which you can specify run arguments by adding the key/value
"runArgs": ["--gpus","all"]

Related

How to Recreate a Docker Container Without Docker Compose

TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file along with running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, since the docker (without compose) documentation doesn't really seem to use the term "recreate" at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I already found answers about applying specific configuration changes like e.g. this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, since the docker (without compose) documentation doesn't really seem to use the term "recreate" at all.
docker-compose is a high-level tool; it performs in a single operation what would require multiple commands using the docker CLI. When docker-compose says, "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
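For example, for a hypothetical service named web whose port mapping changed in the compose file, the manual equivalent would be roughly:

docker stop web
docker rm web
docker run -d --name web -p 8080:80 nginx:latest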
Let's say it recognizes a change in container1. It then does the equivalent of (not literally these commands, since it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
which is roughly equivalent to the following (depending on your compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes projectname_container1
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and maybe more.
So I would advise using the docker compose commands for single services instead of the docker CLI where suitable.
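For example, to force a single service to be recreated (and rebuilt, if it has a build section), something like this should do it:

docker compose up -d --build --force-recreate container1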
The issue is that the variables and settings are not exposed through any docker apis. It may be possible by way of connecting directly to the docker socket, parsing the variables, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. You can save the command you need to run into a text file, name it with a .sh extension, make it executable (chmod +x), and then run it. When you stop/delete the container, you can just rerun the shell script.
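A minimal sketch of such a script, with a hypothetical container name, image, and options:

#!/bin/sh
# Remove the old container if it exists, then recreate it with the full configuration
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp -p 8080:80 -e APP_ENV=prod myimage:latest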
Another thing you can do is replace the docker command with a function (in something like your ~/.bashrc) that stores the arguments to a text file and rechecks that text file when given a passed argument (like "recreate" followed by a name). However, I'm more a fan of keeping docker containers in their own shell scripts, as it's more portable.
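A rough sketch of that wrapper idea (entirely illustrative; adjust to taste):

# In ~/.bashrc: log every "docker run" invocation so it can be replayed later
docker() {
    if [ "$1" = "run" ]; then
        echo "docker $*" >> ~/.docker_run_history
    fi
    command docker "$@"
}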

Auto mount volumes

I wonder if it is possible to make Docker automatically mount volumes during the build or run phase. With podman it is easy, using /usr/share/containers/mounts.conf, but I need to use Docker CE.
If it is not, can I somehow use the host's RHEL subscription during the Docker build phase? I need to use a RHEL UBI image and I have to use my company's Satellite.
A container image build in docker is designed to be self-contained and portable. It shouldn't matter whether you run the build on your host or on a CI server in the cloud. To achieve that, builds rely on the build context and args passed to the build command, rather than on other settings from the host, where possible.
buildah has taken a different approach with its tooling, allowing you to use components from the host in your build, which gives you more flexibility, but also more fragility.
That's a long way of saying this "feature" doesn't exist in docker, and if it ever gets created, I doubt it would look like what you're describing. Instead, with BuildKit, you can inject secrets from the build command line, and they are mounted into only the steps where they are required. An example of this is available in the BuildKit docs:
# syntax = docker/dockerfile:1.3
FROM python:3
RUN pip install awscli
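# The secret is mounted only for this step and is never written into an image layer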
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And to build that Dockerfile, you would pass the secret as a CLI arg:
$ docker build --secret id=aws,src=$HOME/.aws/credentials .
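Applied to the RHEL question, the same mechanism could in principle pass the host's subscription material into a UBI build without baking it into the image. A rough sketch with entirely hypothetical ids, paths, and package names (real entitlement typically involves a certificate/key pair plus repo configuration from your Satellite, so treat this only as the general shape):

# syntax = docker/dockerfile:1.3
FROM registry.access.redhat.com/ubi8/ubi
# Hypothetical: the entitlement cert is visible only during this step
RUN --mount=type=secret,id=entitlement,target=/etc/pki/entitlement/entitlement.pem \
    dnf install -y some-package

$ docker build --secret id=entitlement,src=/path/to/entitlement.pem .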

Is it possible to specify build options from within a Dockerfile to control the build process?

Consider the following docker command, which builds an image from a Dockerfile:
docker image build --network host -t test -f Dockerfile .
Is it possible to specify options of docker image build in the Dockerfile instead of the command-line (in this case --network host)?
This could be useful when the host running docker image build ... applies a fixed set of flags that would otherwise have to be overridden with custom flags on every invocation.
In general, no.
In this particular case (network access): kinda, actually, using BuildKit, a new build system for Docker.
If you're using BuildKit (export DOCKER_BUILDKIT=1) you can add a comment at the top of the Dockerfile to enable newer syntax. You can specify different versions of the syntax, which basically works by downloading a new builder, implemented as a Docker image.
(A lot more details here: https://pythonspeed.com/articles/docker-buildkit/).
The latest experimental BuildKit syntax has an option for setting network access per build step. Scroll to the bottom of https://hub.docker.com/r/docker/dockerfile/ for details; the short version:
Add #syntax=docker/dockerfile:1.2-labs as the first line of the Dockerfile.
Change RUN mycommand to RUN --network=host mycommand.
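Putting those together, a minimal sketch (the apt-get step is just an illustrative command that needs host network access):

#syntax=docker/dockerfile:1.2-labs
FROM ubuntu:20.04
# This step runs with the host's network; other steps keep the default sandbox
RUN --network=host apt-get update

Depending on your Docker version, you may also have to grant the entitlement when invoking the build, e.g. docker buildx build --allow network.host .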

Is there a point in Docker start?

So, is there a point to the command start, as in docker start -i albineContainer?
If I do this, I can't really do anything with the Alpine system inside the container; I would have to do a run and create another container with the -it flag and sh after it (or /bin/bash, I don't remember exactly right now).
Is that how it usually goes? Delete and rebuild containers, and pass -it if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
I'm new to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
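For example, using the Jenkins image linked above (the container name my-jenkins is arbitrary):

docker run -d -p 8080:8080 --name my-jenkins jenkins   # runs detached; no terminal attached
docker exec -it my-jenkins bash                        # attach an interactive shell to it
docker logs -f my-jenkins                              # follow its stdout without attaching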
You almost never use docker start. It's only possible to use it in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
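As a small illustration (container names are arbitrary), the two sequences below end in the same state:

docker create --name web1 -p 8080:80 nginx   # create only; the container exists but is stopped
docker start web1                            # start its main process

docker run -d --name web2 -p 8081:80 nginx   # create + start in a single step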
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.

Difference between docker container commit and docker commit command [duplicate]

Can anyone help me understand the difference between docker run and docker container run?
When I run docker run --help and docker container run --help from the docker command line, I see the following:
Run a command in a new container.
Is there any difference in how they run the container internally or both are same doing same work?
According to https://forums.docker.com/t/docker-run-and-docker-container-run/30526, docker run is the old form; it may be deprecated eventually, but this is not confirmed.
They are exactly the same.
Prior to Docker 1.13 only the docker run command was available. The CLI commands were then refactored to have the form docker COMMAND SUBCOMMAND, where in this case the COMMAND is container and the SUBCOMMAND is run. This was done to give a more intuitive grouping of commands, since the number of commands at the time had grown substantially.
You can read more under CLI restructured.
docker run? No, we aren't even hiding it; it's staying as a permanent alias.
The rest, not any time soon. Maybe in a year or two, if we're good about converting all the docs to the new form and communicating the new canonical way of doing things.
So they are exactly the same, only the format changed; see the discussion in this PR: https://github.com/moby/moby/pull/26025
Maybe a bit late, but I wanted to share a broader and cleaner picture from The Docker Handbook regarding the question:
Previously [...] the generic syntax for this command is as follows:
docker run <image name>
Although this is a perfectly valid command, there is a better way of dispatching commands to the docker daemon.
Prior to version 1.13, Docker had only the previously mentioned command syntax. Later on, the command-line was restructured to have the following syntax:
docker <object> <command> <options>
In this syntax:
<object> indicates the type of Docker object you'll be manipulating. This can be a container, image, network or volume object.
<command> indicates the task to be carried out by the daemon, in this case the run command.
<options> can be any valid parameter that can override the default behavior of the command, like the --publish option for port mapping.
Thus, in docker container run:
container is the object, and
run is the command to be executed by the Docker daemon.
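You can confirm the equivalence yourself; these two commands behave identically:

docker run --rm hello-world
docker container run --rm hello-world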
