How to use custom Dockerfile name in Jenkins pipeline?

I am using Jenkins to build and publish Docker images. However, I need to use two Dockerfiles in one application, just for my use case.
The requirement is: can I build and publish two different Docker images from two different Dockerfiles in one source tree, or do I need to use a custom file name (like the -f argument of the docker command) so that I can set up two pipelines in Jenkins?
By default Jenkins picks up the file named Dockerfile.
e.g.: docker build -t my-image -f Dockerfile-custom-name .

def projectImage = docker.build("imageName:tag", "-f Dockerfile-custom-name .")
This way you can even add --build-arg or any other argument before the -f. It is important that the -f and the trailing . come at the end of the second parameter.
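For instance, a minimal sketch of that pattern with an extra build argument (the image name and the APP_ENV argument are placeholders):

def projectImage = docker.build("my-app:${env.BUILD_NUMBER}",
    "--build-arg APP_ENV=prod -f Dockerfile-custom-name .")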
Here's the documentation for it:
Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
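The snippet that quote refers to is along these lines (a sketch of the documented pattern, not a verbatim copy):

def testImage = docker.build("test-image", "./dockerfiles/test")

When the second argument is a directory rather than a string ending in -f ... ., it is used as the build context, and Docker looks for a file named Dockerfile inside it.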
Edit (answer to comments):
Yes, my first solution proposal works in a Jenkins pipeline.
This should also work:
sh '''docker build -t docker1-tag:latest -t us.gcr.io/project-name/docker1-tag:latest -f "Dockerfile1" .'''
The only downside of this one is that you don't have the image in a variable, so you cannot use the run, withRun, push, and other methods.
In your specific case, I would build the image with the name/tag of us.gcr.io/project-name/docker1-tag:latest, so I can call the push method and the image will be pushed to the registry. To have the second tag, a separate sh '''docker tag us.gcr.io/project-name/docker1-tag:latest docker1-tag:latest''' call would suffice.
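Putting the pieces together, a sketch of that approach in a scripted pipeline (names are taken from the question; authentication to the registry, e.g. via docker.withRegistry or gcloud, is assumed to be configured):

def image = docker.build("us.gcr.io/project-name/docker1-tag:latest", "-f Dockerfile1 .")
// Push to the registry, then add the short local tag
image.push()
sh 'docker tag us.gcr.io/project-name/docker1-tag:latest docker1-tag:latest'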

Related

Is it possible to cache multi-stage docker builds?

I recently switched to multi-stage docker builds, and it doesn't appear that there's any caching of intermediate builds. I'm not sure if this is a docker limitation, something that just isn't available, or whether I'm doing something wrong.
I am pulling down the final build and doing a --cache-from at the start of the new build, but it always runs the full build.
This appears to be a limitation of docker itself and is described under this issue - https://github.com/moby/moby/issues/34715
The workaround is to (sketched in the commands after this list):
Build the intermediate stages with a --target
Push the intermediate images to the registry
Build the final image with a --target and use multiple --cache-from paths, listing all the intermediate images and the final image
Push the final image to the registry
For subsequent builds, pull the intermediate + final images down from the registry first
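A sketch of those steps, assuming a stage named builder in the Dockerfile and a placeholder registry path registry.example.com/myapp:

# Build and push the intermediate stage explicitly
docker build --target builder \
  --cache-from registry.example.com/myapp:builder \
  -t registry.example.com/myapp:builder .
docker push registry.example.com/myapp:builder

# Build the final image using both images as cache sources, then push it
docker build \
  --cache-from registry.example.com/myapp:builder \
  --cache-from registry.example.com/myapp:latest \
  -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

# On the next CI run, pull both images first so their layers can be reused
docker pull registry.example.com/myapp:builder || true
docker pull registry.example.com/myapp:latest || true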
Since the previous answer was posted, there is now a solution using the BuildKit backend: https://docs.docker.com/engine/reference/commandline/build/#specifying-external-cache-sources
This involves passing the argument --build-arg BUILDKIT_INLINE_CACHE=1 to your docker build command. You will also need to ensure BuildKit is being used by setting the environment variable DOCKER_BUILDKIT=1 (on Linux; I think BuildKit might be the default backend on Windows when using recent versions of Docker Desktop). A complete command line solution for CI might look something like:
export DOCKER_BUILDKIT=1
# Use cache from remote repository, tag as latest, keep cache metadata
docker build -t yourname/yourapp:latest \
  --cache-from yourname/yourapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 .
# Push new build up to remote repository replacing latest
docker push yourname/yourapp:latest
Some of the other commenters are asking about docker-compose. It works for this too, although you additionally need to set the environment variable COMPOSE_DOCKER_CLI_BUILD=1 so that docker-compose uses the docker CLI (and hence BuildKit, thanks to DOCKER_BUILDKIT=1). You can then set BUILDKIT_INLINE_CACHE: 1 in the args: section of the build: section of your YAML file to ensure the required --build-arg is set.
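As a sketch, assuming a single service named app and the same image name as above (both placeholders):

# docker-compose.yml
version: "3.8"
services:
  app:
    image: yourname/yourapp:latest
    build:
      context: .
      args:
        BUILDKIT_INLINE_CACHE: 1
      cache_from:
        - yourname/yourapp:latest

Build it with both variables set:

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build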
I'd like to add another important point to the answer.
--build-arg BUILDKIT_INLINE_CACHE=1 caches only the final stage's layers, so it helps only when the earlier stages are unchanged.
To enable caching of the layers for the whole multi-stage build, that argument should be replaced by a cache export with mode=max, e.g. --cache-to type=registry,mode=max (the inline exporter only supports the default min mode). See the documentation and the sketch below.
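A sketch of that full-cache variant using buildx with the registry cache exporter (the buildcache tag is a placeholder; a builder using the docker-container driver is assumed):

docker buildx build \
  --cache-from type=registry,ref=yourname/yourapp:buildcache \
  --cache-to type=registry,ref=yourname/yourapp:buildcache,mode=max \
  -t yourname/yourapp:latest --push .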

How to remove intermediate images from a build after the build?

When you build your multi-stage Dockerfile with
docker build -t myimage .
it produces the final image tagged myimage, and also intermediate images. To be completely clear, we are talking here not about containers, but about images. Running docker images afterwards shows them as untagged <none> entries next to the tagged image.
These <none> images are what I'm talking about.
Now this "issue" has been discussed to some extent here and here.
Here are some relevant parts:
If these intermediate images would be purged/pruned automatically, the build cache would be gone with each build, therefore forcing you to rebuild the entire image each time.
So okay, it does not make sense to prune them automatically.
Some people do this:
For now, I'm using docker image prune -f after my docker build -t app . command to cleanup those intermediate images.
But unfortunately this is not something I can do. As one discussion participant commented:
It removes "all dangling images", so in shared environments (like Jenkins slave) it's more akin to shooting oneself in the foot. :)
And this is a scenario I found myself in.
So nothing to be "fixed" on Docker side. But how can I remove those extra images, from a single particular build only?
Update
After reading the very nice answer from d4nyll below, which is a big step forward, I'd like to add some more constraints to the question ;) First, let me sum up the answer:
One can use ARG to pass a build id from CI/CD to Dockerfile builder
Then one can use LABEL syntax to add build id metadata to the stage images being built
Then one can use the --filter option of docker image prune command to remove only the images with the current build id
This is a big step forward, but I'm still struggling into how to fit it into my usage scenario without adding unnecessary complexity.
In my case the requirement is that the application developers who author the Dockerfiles and check them into source control are responsible for making sure that their Dockerfiles build the image to their satisfaction. They are not required to craft all their Dockerfiles in a specific way "so our CI/CD process does not break". They simply have to provide a Dockerfile that produces a correct docker image.
Thus, I'm not really in a position to request them to add stuff in the Dockerfile for every single application, just for the sake of CI/CD pipeline. This is something that CI/CD pipeline is expected to handle all by itself.
The only way I can see to make this work is to write a Dockerfile parser that detects multi-stage builds, injects a label per stage, and then builds the modified Dockerfile. That is complexity I'm very hesitant to add to the CI/CD pipeline.
Do I have a better (read simpler) options?
As ZachEddy and thaJeztah mentioned in one of the issues you linked to, you can label the intermediate images and docker image prune those images based on this label.
Dockerfile (using multi-stage builds)
FROM node as builder
LABEL stage=builder
...
FROM node:dubnium-alpine
...
After you've built your image, run:
$ docker image prune --filter label=stage=builder
For Automation Servers (e.g. Jenkins)
If you are running the builds in an automation server (e.g. Jenkins), and want to remove only the intermediate images from that build, you can
Set a unique build ID as an environment variable inside your Jenkins build
Add an ARG instruction for this build ID inside your Dockerfile
Pass the build ID to docker build through the --build-arg flag
FROM node as builder
ARG BUILD_ID
LABEL stage=builder
LABEL build=$BUILD_ID
...
FROM node:dubnium-alpine
...
$ docker build --build-arg BUILD_ID .
$ docker image prune --filter label=stage=builder --filter label=build=$BUILD_ID
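Tied together in a Jenkins declarative pipeline, the whole flow might look like this sketch (myimage is a placeholder; BUILD_TAG is a standard Jenkins environment variable):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Pass the Jenkins build tag through to the Dockerfile's ARG
                sh 'docker build --build-arg BUILD_ID="${BUILD_TAG}" -t myimage .'
            }
        }
    }
    post {
        always {
            // Remove only the intermediate stage images produced by this build
            sh 'docker image prune -f --filter label=stage=builder --filter label=build="${BUILD_TAG}"'
        }
    }
}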
If you want to persist the build ID in the image (perhaps as a form of documentation accessible within the container), you can add an ENV instruction that takes its value from the ARG build argument. This also lets you use the same environment replacement to set the label value to the build ID.
FROM node as builder
ARG BUILD_ID
ENV BUILD_ID=$BUILD_ID
LABEL stage=builder
LABEL build=$BUILD_ID
...
FROM node:dubnium-alpine
...
We're doing exactly this, applying the labels to the Dockerfile at build time from our Jenkinsfile (${env.BUILD_TAG} is interpolated by Groovy before sed runs):
sed -i '/^FROM/a\
LABEL build_id=${env.BUILD_TAG}\
' Dockerfile
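With the label injected that way, the per-build cleanup becomes a single command, sketched here with the same Jenkins BUILD_TAG value:

docker image prune -f --filter label=build_id="${BUILD_TAG}"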
Probably too late to help the OP, but hopefully this will be useful to someone facing the same problem.
You can pass another flag to docker build which will automatically remove the intermediate containers:
docker build --force-rm -t myimage .
The easy way is to run the cmd docker rmi -f $(docker images -f "dangling=true" -q)
A little late, but the best option is
docker builder prune -a
If you do not want to use the cache at all, you can use the --no-cache=true option on the docker build command. See "Leverage build cache" in the Docker documentation.
Use the command below to delete all intermediate <none> images:
docker rmi $(docker images -a | awk '$1 == "<none>" {print $3}')

Is it possible to add environment variables in automated builds in docker hub?

I want to automate my build process and need to pass an environment variable to run some of the commands in the Dockerfile. I was wondering if there is any way to do this on Docker Hub. I know Docker Cloud has something like this, but I was wondering whether the functionality exists on Docker Hub, since there is the --build-arg argument in the CLI for normal builds.
Set up Automated builds
Docker Hub (https://hub.docker.com) can automatically build images from source code in an external repository and automatically push the built images to your Docker Hub repositories (e.g. https://cloud.docker.com/u/binbash/repository/list).
When you set up automated builds (also called autobuilds), you create a list of branches and tags that you want to build into Docker images. When you push code to a source code branch (currently only GitHub / Bitbucket are supported) for one of those listed image tags, the push uses a webhook to trigger a new build, which produces a Docker image. The built image is then pushed to the Docker Hub registry.
For detailed implementation steps please refer to https://docs.docker.com/docker-hub/builds/
Environment variables for builds
You can set values for environment variables, which are actually mapped to build ARG values (docker build --build-arg) and used exclusively at build time (https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg).
These are NOT to be confused with the environment variables (ENV vars) used by your service at runtime (docker run --env MYVAR1=foo - https://docs.docker.com/v17.12/edge/engine/reference/commandline/run/#set-environment-variables--e-env-env-file).
These Environment Variables configured from the Docker Hub UI are used in your build processes when you configure an automated build. Add your build environment variables by clicking the plus sign next to the Build environment variables section, and then entering a variable name and the value.
When you set variable values from the Docker Hub UI, they can be used by the commands you set in hooks files (THIS IS VERY IMPORTANT and is expanded on below), but they are stored so that only users who have admin access to the Docker Hub repository can see their values. This means you can use them to safely store access tokens or other information that should remain secret.
Build hook examples (to implement Docker Hub UI Env vars)
Adding variables from the auto-build's web UI makes them available inside the hooks. In the hook, you then use that value to set a custom build arg with --build-arg. Finally, you consume this custom build arg inside your Dockerfile, e.g. to set an environment variable with the ENV instruction.
Example:
Say you want an environment variable TERRAFORM_VERSION='0.12.0-beta2' in your build environment.
Step 1.
Add this in the auto-build’s web UI for ‘build environment variables’
Step 2.
Create a custom build hook, i.e. create a folder called hooks in the same directory as your Dockerfile. Inside the hooks folder, create a file called build; this is the custom build hook, and Docker Hub will use it to build your image. Contents of build:
#!/bin/bash
docker build -t $IMAGE_NAME --build-arg TERRAFORM_VERSION=$TERRAFORM_VERSION .
NOTE: Here $TERRAFORM_VERSION is coming from the web UI.
Step 3:
In your Dockerfile
ARG TERRAFORM_VERSION
ENV TERRAFORM_VERSION $TERRAFORM_VERSION
NOTE: Here $TERRAFORM_VERSION is coming from the custom build args in your bash script file named build.
Complete example: https://github.com/binbashar/public-docker-images/tree/master/terraform-resources
That's it! It should work now. Renaming 'build environment variables' to 'custom hook environment variables' in Docker Hub would probably make this concept easier to understand in the official documentation (https://docs.docker.com/docker-hub/builds/advanced/).
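To confirm the variable actually made it into the image, you could run something like this sketch (the image name is a placeholder, and printenv is assumed to be available in the image):

docker run --rm yourname/yourimage:latest printenv TERRAFORM_VERSION
# Expected output: 0.12.0-beta2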
Extra Points!
There are a number of key environment variables set when a build script is launched, all of which you can use in your hooks and all of which can be useful when crafting custom build args.
SOURCE_BRANCH: the name of the branch or the tag that is currently being tested.
SOURCE_COMMIT: the SHA1 hash of the commit being tested.
COMMIT_MSG: the message from the commit being tested and built.
DOCKER_REPO: the name of the Docker repository being built.
DOCKERFILE_PATH: the Dockerfile currently being built.
DOCKER_TAG: the Docker repository tag being built.
IMAGE_NAME: the name and tag of the Docker repository being built. (This variable is a combination of DOCKER_REPO:DOCKER_TAG.)
An example:
Step 1. Create the Dockerfile:
ARG NODE_VERSION
FROM node:$NODE_VERSION
Step 2. Create the hooks/build file:
#!/bin/bash
# Derive the node version from the Docker Hub tag, e.g. "myapp-12" -> "12"
NODE_VERSION=$(echo "$DOCKER_TAG" | cut -d "-" -f2)
if [ "$DOCKER_TAG" == "latest" ]
then
    docker build . --build-arg NODE_VERSION="${DOCKER_TAG}" -t "${IMAGE_NAME}"
else
    docker build . --build-arg NODE_VERSION="${NODE_VERSION}" -t "${IMAGE_NAME}"
fi
Source: github.com/SamuelA

Can a variable be used in docker FROM?

I am wondering if an env variable can be used in a docker FROM? The reason for this is to control the tagging. For example, say I have this line in my Dockerfile:
FROM myApp
What I want is this:
FROM myApp:${VERSION}
This way I can say docker build -t myApp --build-arg VERSION=9 .
The process to build docker images for this app is the same. I don't want to have Dockerfiles that are almost identical just to use a different base image. If I want to build version 9, it should use version 9 of the base image.
Quoting this link:
This is now possible if anyone comes here looking for answers: https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
FROM instructions support variables that are declared by any ARG instructions that occur before the first FROM.
ARG CODE_VERSION=latest
FROM base:${CODE_VERSION}
CMD /code/run-app
FROM extras:${CODE_VERSION}
CMD /code/run-extras
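One caveat worth knowing: an ARG declared before the first FROM lives outside the build stages, so to use its value inside a stage you must re-declare it there. A minimal sketch:

ARG CODE_VERSION=latest
FROM base:${CODE_VERSION}
# Re-declare without a value to inherit the outer CODE_VERSION
ARG CODE_VERSION
RUN echo "building from base:${CODE_VERSION}"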
This is feasible with at least this docker version:
docker --version
Docker version 18.09.8, build bfed4f5
It requires a preset variable in the Dockerfile, e.g.
ARG TAG=latest
FROM traefik:${TAG}
Then you can override the preset with the following:
docker build --build-arg TAG=2.2.8 -t my-app .
The version number will not show up during the build. If you want to test that it works, reference a non-existing version: the build will fail with manifest for traefik:<version> not found.
You could simply generate your Dockerfile from a template. Put something like this in a Makefile (note the recipe line must be indented with a tab):
MYTAG=latest
.PHONY: Dockerfile
Dockerfile: Dockerfile.in
	sed 's/MYTAG/$(MYTAG)/' $< > $@ || rm -f $@
Then you can run:
make MYTAG=8; docker build -t my-app-8 .
This would only make sense if you are frequently building images that
require a different tag in the FROM line.
It was not possible at the time of writing.
Alternatively, you can use a floating tag like FROM myApp:latest and overwrite the latest tag whenever you create a new version.
Build your container image programmatically using buildah (it can consume a Dockerfile too).
So for your use-case:
VERSION=v0.1.0
myCon=$(buildah from myApp:${VERSION})
buildah config --cmd "sleep 1d" $myCon
buildah commit $myCon $USER/sleeping1d
You can obviously script this, save it, and invoke it. One more advantage is that buildah doesn't need a running docker daemon, which is great for CI. It's also an open-source project; check out the project page.
BTW I saw this issue lately which is exactly what you want - https://github.com/projectatomic/buildah/issues/581
Unfortunately this wasn't always possible: historically the first line of a Dockerfile had to be a FROM instruction, which precluded starting with ARG. There is a good answer above from larsks about generating a Dockerfile, but I'd also like to suggest merely creating different Dockerfiles and then specifying a particular one in your docker build command using the -f switch:
docker build -t codemiester/app:latest -f ./Dockerfile.apache2.ubuntu

Labelling images in docker

I've got a jenkins server monitoring a git repo and building a docker image on code change. The .git directory is ignored as part of the build, but I want to associate the git commit hash with the image so that I know exactly what version of the code was used to make it and check whether the image is up to date.
The obvious solution is to tag the image with something like "application-name-branch-name:commit-hash", but for many develop branches I only want to keep the last good build, and adding more tags will make cleaning up old builds harder (rather than using the jenkins build number as the image is built, then retagging to :latest and untagging the build number)
The other possibility is labels, but while these looked promising initially, they proved more complicated in practice.
The only way I can see to apply a label directly to an image is in the Dockerfile, which cannot use the build environment variables, so I'd need to use some kind of templating to produce a custom Dockerfile.
The other way to apply a label is to start up a container from the image with some simple command (e.g. bash) and passing in the labels as docker run arguments. The container can then be committed as the new image. This has the unfortunate side effect of making the image's default command whatever was used with the labelling container (so bash in this case) rather than whatever was in the original Dockerfile. For my application I cannot use the actual command, as it will start changing the application state.
None of these seem particularly ideal - has anyone else found a better way of doing this?
Support for this was added in docker v1.9.0, so updating your docker installation to that version would fix your problem if that is OK with you.
Usage is described in the pull-request below:
https://github.com/docker/docker/pull/15182
As an example, take the following Dockerfile file:
FROM busybox
ARG GIT_COMMIT=unknown
LABEL git-commit=$GIT_COMMIT
and build it into an image named test as anyone would do naïvely:
docker build -t test .
Then inspect the test image to check what value ended up for the git-commit label:
docker inspect -f '{{index .ContainerConfig.Labels "git-commit"}}' test
unknown
Now, build the image again, but this time using the --build-arg option:
docker build -t test --build-arg GIT_COMMIT=0123456789abcdef .
Then inspect the test image to check what value ended up for the git-commit label:
docker inspect -f '{{index .ContainerConfig.Labels "git-commit"}}' test
0123456789abcdef
References:
Docker build command documentation for the --build-arg option
Dockerfile reference for the ARG directive
Dockerfile reference for the LABEL directive
You can specify a label on the command line when creating your image. So you would write something like
docker build -t myproject --label "myproject.version=githash" .
Instead of hard-coding the version, you can also take it directly from git:
docker build -t myproject --label "myproject.version=`git describe`" .
To read out the label from your images you can use docker inspect with a format string:
docker inspect -f '{{index .Config.Labels "myproject.version"}}' myproject
If you are using docker-compose, you could add the following to the build section:
labels:
  git-commit-hash: ${COMMIT_HASH}
where COMMIT_HASH is the environment variable that holds your commit hash.
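A sketch of the full round trip, with placeholder service and image names:

export COMMIT_HASH="$(git rev-parse HEAD)"
docker-compose build app
docker inspect -f '{{ index .Config.Labels "git-commit-hash" }}' yourname/app:latest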
