How are the various Docker container build tools related to one another?

The Docker technology stack has accumulated several different tools for building container images: docker build, docker-compose build, docker buildx build, docker buildx bake, and buildctl.
How do these relate to each other? And what is recommended for a new Docker-based project?

Here is my incomplete understanding.
At first Docker Engine provided an API endpoint for requesting a build of a container image from a Dockerfile. The Docker CLI build command, docker build, invoked this API according to its command line options.
Then docker-compose (Docker Compose V1) came around and provided a way to describe the build options for a container image in YAML, invoking the same Docker Engine API.
In 2017, Docker started the Moby Project with the intention of creating open source backend components for working with containers.
One of these components is BuildKit, which is like a compiler for container images. It builds images from a low-level build definition format called LLB. Frontend tools can translate inputs including, but not limited to, Dockerfiles into LLB. (buildctl is just a tool for interacting with BuildKit from the command line.)
An important thing to note is that BuildKit is independent from the Docker Engine. As BuildKit matured, existing tools aimed to switch from using the Docker Engine API to using BuildKit.
The Docker CLI buildx plugin provides one command line interface for invoking BuildKit. docker buildx build builds a single container image from command line options, and docker buildx bake can read those options from a docker-compose.yml file.
Eventually, the normal docker build command also learned to use BuildKit. This mode has to be enabled by setting the DOCKER_BUILDKIT environment variable or a daemon.json configuration option; it is the default on Docker Desktop.
Originally, docker-compose used the Docker Engine API directly. A COMPOSE_DOCKER_CLI_BUILD environment variable was later added to make it invoke docker build instead. Most recently, the tool was rewritten from scratch, and Docker Compose V2 (now docker compose) can invoke BuildKit itself; the COMPOSE_DOCKER_CLI_BUILD option is no longer supported.
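For a new project, the modern commands all converge on BuildKit. A rough sketch of roughly equivalent invocations, assuming a Dockerfile and a docker-compose.yml in the current directory (the myapp tag is a placeholder):

$ DOCKER_BUILDKIT=1 docker build -t myapp .     # classic command, BuildKit enabled explicitly
$ docker buildx build -t myapp .                # buildx plugin, always uses BuildKit
$ docker buildx bake                            # build all services from docker-compose.yml
$ docker compose build                          # Compose V2, drives BuildKit itself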

Related

Auto mount volumes

I wonder if it is possible to make Docker automatically mount volumes during the image build or container run phase. With podman it is easy, using /usr/share/containers/mounts.conf, but I need to use Docker CE.
If it is not, can I somehow use the host's RHEL subscription during the Docker build phase? I need to use a RHEL UBI image and I have to use the company's Satellite.
A container image build in docker is designed to be self-contained and portable. It shouldn't matter whether you run the build on your host or on a CI server in the cloud. To achieve that, builds rely on the build context and args to the build command, rather than other settings on the host, where possible.
buildah seems to have taken a different approach with their tooling, allowing you to use components from the host in your build, giving you more flexibility, but also fragility.
That's a long way of saying the "feature" doesn't exist in docker, and if it gets created, I doubt it would look like what you're describing. Instead, with buildkit, they allow you to inject secrets from the build command line, which are mounted into the steps where they are required. An example of this is available in the buildkit docs:
# syntax = docker/dockerfile:1.3
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And to build that Dockerfile, you would pass the secret as a CLI argument (note that --secret requires BuildKit to be enabled):
$ docker build --secret id=aws,src=$HOME/.aws/credentials .

Is it possible to specify build options from within a Dockerfile to control the build process?

Consider the following docker command, which builds an image from a Dockerfile:
docker image build --network host -t test -f Dockerfile .
Is it possible to specify options of docker image build in the Dockerfile instead of the command-line (in this case --network host)?
This could be useful, since the host running docker image build ... uses a fixed set of flags, which could then be overridden by custom flags in the Dockerfile.
In general, no.
In this particular case (network access): kinda, actually, using BuildKit, a new build system for Docker.
If you're using BuildKit (export DOCKER_BUILDKIT=1) you can add a comment at the top of the Dockerfile to enable newer syntax. And you can specify different versions of the syntax, which basically works by downloading a new builder, implemented as a Docker image.
(A lot more details here: https://pythonspeed.com/articles/docker-buildkit/).
The latest experimental BuildKit syntax has an option for setting network access per build step. Scroll to bottom of https://hub.docker.com/r/docker/dockerfile/ for details, but short version:
Add #syntax=docker/dockerfile:1.2-labs as first line of Dockerfile.
Change RUN mycommand to RUN --network=host mycommand.
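Putting those two steps together, a minimal sketch of such a Dockerfile (the base image and the fetched URL are just illustrative placeholders):

#syntax=docker/dockerfile:1.2-labs
FROM alpine:3
# This step runs with host network access instead of the default build network
RUN --network=host wget -q http://localhost:8000/artifact.tar.gz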

Do Docker buildkit builds appear in `docker images` and `docker ps`? Why not?

Recent versions of Docker include an overhauled build system called BuildKit that can be used with export DOCKER_BUILDKIT=1. I've noticed when it is running there is no trace of the builds in docker images or docker ps. Why not?
BuildKit runs under runc and containerd rather than directly in docker. This gives it more portability to run in other environments that do not want the full docker daemon installed and running. Because of this architecture, you will only see the resulting image that is exported from BuildKit, and not each of the individual steps as untagged images.
If you're looking to clean up the BuildKit cache, there is docker builder prune.
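For example, assuming a recent Docker CLI with the buildx plugin, you can inspect and then clean the BuildKit cache like this:

$ docker buildx du        # show how much disk the build cache is using
$ docker builder prune    # remove dangling build cache (add --all to clear everything)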
For more details on BuildKit, including how to run it as a standalone container or process, see their github repo: https://github.com/moby/buildkit/

Docker in Docker, Building docker agents in a docker contained Jenkins Server

I am currently running Jenkins with Docker. When trying to build docker apps, I am unsure whether I should use Docker in Docker (DinD) by bind-mounting the /var/run/docker.sock file or by installing another instance of Docker inside my Jenkins container. I actually saw that it was previously discouraged to use anything other than docker.sock.
I don't actually understand why we should use something other than the docker daemon from the host, apart from not polluting it.
sources : https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "jenkins in docker container needs docker" case is to add your host as a node (slave) in Jenkins. This will make every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount docker socket in jenkins container: You will lose context. The files you want to COPY inside the image are located inside the workspace in the Jenkins container, while your docker daemon is running on the host. COPY fails for sure.
Install docker client in jenkins container: You have to alter the official jenkins image. Adds complexity. And you will lose context too.
Add your host as jenkins node: Perfect. You have the context. No altering the official image.
Without completely understanding why you would need to use Docker in Docker (I suspect you need to meet some special requirements for the environment in which you build the actual image), may I suggest multi-stage building of docker images? You might find it useful, as it enables you to first build the build environment and then build the actual image (hence the name 'multi-stage build'). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
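A minimal multi-stage sketch, assuming a Go program at ./main.go (all names here are illustrative, not from the question):

# Build stage: full toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# CGO disabled so the static binary also runs on musl-based images
RUN CGO_ENABLED=0 go build -o /out/app ./main.go

# Final stage: only the compiled binary, no toolchain
FROM alpine:3
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]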

Best practice using docker inside Jenkins?

Hi, I'm learning how to use Jenkins integrated with Docker and I don't understand what I should do to make them communicate.
I'm running Jenkins inside a Docker container and I want to build an image in a pipeline. So I need to execute some docker commands inside the Jenkins container.
So the question here is where docker comes from. I understand that we need to bind-mount the docker host daemon's socket into the Jenkins container, but this container still needs the binaries to execute docker commands.
I have seen some approaches to achieve this and I'm confused what should I do. I have seen:
bind mount the docker binary (/usr/local/bin/docker:/usr/bin/docker)
installing docker in the image
if I'm not wrong, the Blue Ocean image comes with Docker pre-installed (I have not found any documentation on this)
Also I don't understand what Docker plugins for Jenkins can do for me.
Thanks!
Docker has a client-server architecture. The server is the docker daemon and the client is basically the command line interface that allows you to execute docker ... from the command line.
Thus, when running Jenkins inside Docker, you will need to be able to connect to the daemon. This is achieved by binding /var/run/docker.sock into the container.
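For example, a sketch of starting Jenkins with the host's socket bound in (the image tag, ports, and volume name are common defaults, adjust as needed):

$ docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts

Note that the jenkins user inside the container still needs a docker client binary and permission to read/write the mounted socket.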
At this point you need something to communicate with the daemon, which is the server. You can do that by providing access to the docker binaries, either by mounting them from the host or by installing the client binaries inside the Jenkins container.
Alternatively, you can communicate with the daemon using the Docker REST API, without having the docker client binaries inside the Jenkins container. You can, for instance, build an image using the API.
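For instance, the daemon's REST API can be reached directly over the bound socket with curl (the image tag and context.tar below are hypothetical; your daemon may also expect a versioned path like /v1.41/...):

$ curl --unix-socket /var/run/docker.sock http://localhost/images/json   # list images
$ curl --unix-socket /var/run/docker.sock \
    -X POST -H "Content-Type: application/x-tar" \
    --data-binary @context.tar \
    "http://localhost/build?t=myimage"                                   # build from a tar of the context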
Also I don't understand what Docker plugins for Jenkins can do for me
The Docker plugin for Jenkins isn't useful for the use case that you described. This plugin allows you to provision Jenkins slaves using Docker. You can, for instance, run a compilation inside a Docker container that gets automatically provisioned by Jenkins.
It is not best practice to use Docker with Jenkins. It is also not a bad practice. The relationship between Jenkins and Docker is not determined in such a manner that having Docker is good or bad.
Jenkins is a Continuous Integration Server, which is a fancy way of saying "a service that builds stuff at various times, according to predefined rules"
If your end result is a docker image to be distributed, you have Jenkins call your docker build command, collect the output, and report on the success / failure of the docker build command.
If your end result is not a docker image, you have Jenkins call your non-docker build command, collect the output, and report on the success / failure of the non-docker build.
How you have the build launched depends on how you would build the product. Makefiles are launched with make, Apache Ant with ant, Apache Maven with mvn package, docker with docker build and so on. From Jenkin's perspective, it doesn't matter, provided you provide a complete set of rules to launch the build, collect the output, and report the success or failure.
Now, for the 'Docker plugin for Jenkins'. As @yamenk stated, Jenkins uses build slaves to perform the build. That plugin will launch the build slave within a Docker container. The thing built within that container may or may not be a docker image.
Finally, running Jenkins inside a docker container just means you need to bind your Docker-ized Jenkins to the external world, as @yamenk indicates, or you'll have trouble launching builds.
Bind-mounting the docker binary into the jenkins image only works if the jenkins image is "close enough" - it has to contain the required shared libraries!
So when using a standard jenkins/jenkins:2.150.1 image on an Ubuntu 18.04 host, this unfortunately does not work. (It looked so nice and slim ;)
So the requirement is to build or find a docker image which contains a docker client compatible with the host's docker service.
Many people seem to install docker in their jenkins image....
