Jenkins + Docker - How To Deal With Versions

I've got Jenkins set up to do 2 things in 2 separate jobs:
Build an executable jar and push to Ivy repo
Build a docker image, pulling in the jar from the Ivy repo, and push image to a private docker registry
During step 1 the jar will have some version which will be appended to the filename (e.g. my-app-0.1-SNAPSHOT, my-app-1.0-RELEASE, etc.). The problem that I'm facing is that in the Dockerfile we have to pull in the correct jar file based on the version number from the upstream build. Additionally, I would ideally like the docker image to be tagged with that same version number.
Would love to hear from the community about any possible solutions to this problem.
Thanks in advance!!

Obviously you need a unique version from (1) to refer to in (2).
0.1 -> 0.2 -> 0.3 -> ...
Not too complicated in terms of how things work together from a build / Docker point of view. I guess the far bigger challenge is to give up SNAPSHOT builds in the development workflow.
With your current Jenkins: release every build you create a container for.
Much better alternative: choose a CI/CD server that uses build pipelines. And if you haven't already done so, take a look at the underlying concept here.
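As a rough illustration, here is a minimal shell sketch of carrying one unique version through both jobs, assuming the version is derived from Jenkins' BUILD_NUMBER and that job 1 uses Gradle with an Ivy publish task; the registry, app name and build-arg are illustrative, not your actual setup:

# job 1: build the jar with the unique version and publish it to the Ivy repo
VERSION="1.${BUILD_NUMBER}"
./gradlew -Pversion="${VERSION}" publish

# job 2 (given the same VERSION, e.g. passed as a job parameter): build and push the image with the same tag
docker build --build-arg JAR_VERSION="${VERSION}" -t registry.example.com/my-app:"${VERSION}" .
docker push registry.example.com/my-app:"${VERSION}"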

You could use the Groovy Postbuild Plugin to extract the exact name of the generated .jar file with a regular expression at the end of step 1.
Then for step 2, you could have a Dockerfile template and replace in it some placeholder with the exact jar name, build the image and push it to your registry.
Or, if you don't use a Dockerfile, you could keep in your Docker registry a premade Docker image which has everything but the jar file, and add the jar to it with these steps (sketched below):
create a container from the image
add the jar file into the container using the docker cp command
commit the container into a new image
push the new image to your docker registry
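A minimal sketch of those four steps, assuming docker create for the first one; the image names, container name and jar path are illustrative:

docker create --name app-staging registry.example.com/my-app-base:latest
docker cp my-app-1.0-RELEASE.jar app-staging:/opt/app/app.jar
docker commit app-staging registry.example.com/my-app:1.0-RELEASE
docker push registry.example.com/my-app:1.0-RELEASE
docker rm app-staging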

My customer had the same need. We ended up putting placeholders in the Dockerfile, which are replaced using sed just before the docker build.
This way, you can use the version in multiple locations, either in the FROM line or in any filenames.
Example:
FROM custom:#placeholder#
ENV VERSION #placeholder#
RUN wget ***/myjar-${VERSION}.jar && ...
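A minimal sketch of the sed replacement, assuming the version comes in as a Jenkins job parameter; Dockerfile.template is an illustrative name (sed -i on the Dockerfile itself works just as well):

VERSION="1.0-RELEASE"
sed "s/#placeholder#/${VERSION}/g" Dockerfile.template > Dockerfile
docker build -t registry.example.com/my-app:"${VERSION}" .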
Regarding consistency, a unique version is used:
from a job parameter (Jenkins)
to build the artifact (Maven)
to tag the Docker image (Docker)
to tag the Git repository containing the Dockerfile (Git)

Related

Github: Building a docker image in a repository with multiple Dockerfiles

I have a repository that has multiple directories, each containing a Python script, a requirements.txt, and a Dockerfile. I want to use GitHub Actions to build an image on merge to the main branch and push this image to GitHub Packages. Each Dockerfile defines a running environment for the accompanying Python script. But I want to build only the Dockerfile in the directory where changes were made, and tag each image with the directory name and a version number that is independent of the other directories' versions. How can I accomplish this? An example workflow.yml file would be awesome.
Yes, it is possible.
In order to trigger a separate build for every Dockerfile, you will need one workflow per Dockerfile. Each workflow should be triggered on push to main and specify a paths filter for its Dockerfile's directory.
For building the images themselves you can use the Build and push Docker images GitHub Action (docker/build-push-action).
I saw on another thread how to build only the specific Dockerfile that changed, using another GitHub Action:
Build Specific Dockerfile from set of dockerfiles in github action
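A minimal workflow sketch for one directory, assuming it is called service-a and that pushing to GitHub Packages (ghcr.io) with the workflow's built-in token is acceptable; the directory name and the 1.0.<run number> version scheme are illustrative assumptions:

name: build-service-a
on:
  push:
    branches: [main]
    paths:
      - "service-a/**"
permissions:
  contents: read
  packages: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # log in to the GitHub container registry with the workflow's token
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # build only this directory's Dockerfile and tag with the directory name plus a per-run number
      - uses: docker/build-push-action@v4
        with:
          context: ./service-a
          push: true
          tags: ghcr.io/${{ github.repository }}/service-a:1.0.${{ github.run_number }}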

Overwrite docker image with latest tag in artifactory

I am currently pushing the same image twice to Artifactory, once with a build-number tag and once with the latest tag.
For the next release I would like to overwrite the image currently tagged latest with the new image. Below is the way I am trying it from an Azure DevOps build.
Docker Build Command:
$(docker_registry)/$(Build.Repository.Name):$(BuildNbr)
Docker Push Command
$(docker_registry)/$(Build.Repository.Name):$(BuildNbr)
The same as above with the latest tag, then:
docker pull $(docker_registry)/imageName:latest
docker rmi --force $(docker_registry)/imageName:latest   # removing latest image from artifactory NOT WORKING
docker pull $(docker_registry)/imageName:$(BuildNbr)
docker tag $(docker_registry)/imageName:$(BuildNbr) $(docker_registry)/imageName:latest
docker push $(docker_registry)/imageName:latest
Somehow the above flow is not working and the latest image is not getting overwritten.
Am I making a mistake? I believe the rmi command will not delete the image from Artifactory.
You can achieve this through include/exclude patterns on your permissions. You can create a new Permission Target that gives overwrite/delete permissions to tags with "latest" in them:
Remove the include pattern **
Add the include pattern **/latest/*
Add only the required docker repositories (for example "docker local")
Add anyone to this permission target that needs to be able to overwrite/delete.
Then, for all other permission targets, which define the remainder of the tags (1.1, 1.2, etc.), do not provide overwrite/delete permissions. With this, you will be able to overwrite latest and not the other tags.
You can read about include/exclude patterns as they relate to permission targets here.
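For reference, a minimal sketch of the push flow once the permission target allows overwriting latest; the variable names are illustrative stand-ins for the Azure DevOps variables above, and no docker rmi is needed, since rmi only removes images from the local cache, not from Artifactory:

docker build -t "${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NBR}" .
docker tag "${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NBR}" "${DOCKER_REGISTRY}/${IMAGE_NAME}:latest"
docker push "${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NBR}"
docker push "${DOCKER_REGISTRY}/${IMAGE_NAME}:latest"   # re-points latest to the new build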

How to get only list of dependencies that are installed from Dockerfile in a container and not preinstalled?

I have a docker image downloaded from docker hub and that contains pre-installed dependencies which I don't want. I want to find out all the dependencies + transitive dependencies that are installed from my "Dockerfile" on top of the docker hub base-image. I tried looking for an open source program to do this but I could not find anything that seems suitable.
You can try to build your own image, or, if they share their source, you can modify theirs.
Sample of the mysql Docker source:
https://github.com/docker-library/mysql/tree/master/8.0
You can clone their repo and then modify the Dockerfile to your requirements.
But if their base image contains pre-installed dependencies that you don't want, then you have to build your own.
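A minimal sketch of that approach, using the mysql source linked above; the resulting image name is illustrative:

git clone https://github.com/docker-library/mysql.git
cd mysql/8.0
# edit the Dockerfile here to drop the packages you don't need, then build your own image
docker build -t my-mysql:8.0-custom .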

How can Cloud Build take dynamic parameters to increment a registry tag?

I want my Cloud Build to push an image to a registry with an incremented tag. So, when the trigger arrives from GitHub, build the image, and if the latest tag was 1.10, tag the new one 1.11. Similarly, the 1.11 value will serve in multiple other steps in the build.
Reading the registry and incrementing the tag is easy (in a bash Cloud Build step), but Cloud Build has no way to pass parameters. (Substitutions come from outside the Cloud Build process, for example from the Git tags, and are not generated inside the process.)
This StackOverflow question and this article say that Cloud Build steps can communicate by writing files to the workspace directory.
That is clumsy. But worse, this requires using shell steps exclusively, not the native docker-building steps, nor the native image command.
How can I do this?
Sadly, you can't. Each Cloud Builder image runs in its own sandbox, and only the /workspace directory is mounted from one step to the next. Consequently, environment variables, installed binaries and so on don't persist from one container to the next one.
You have to use a shell script each time :( The easiest way is to keep a file in your /workspace directory (for example an env.var file):
# load the environment variable
source /workspace/env.var
# Add variable
echo "NEW=Variable" >> /workspace/env.var
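A fuller cloudbuild.yaml sketch of the same idea; the step images and tag value are illustrative, and $$ is Cloud Build's escape so bash variables aren't treated as substitutions:

steps:
  # step 1: compute the next tag (your incrementing logic goes here) and persist it to the shared workspace
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    args:
      - -c
      - echo "1.11" > /workspace/tag   # hard-coded for illustration
  # step 2: read the tag back and use it for the docker build and push
  - name: gcr.io/cloud-builders/docker
    entrypoint: bash
    args:
      - -c
      - |
        TAG=$$(cat /workspace/tag)
        docker build -t "gcr.io/$PROJECT_ID/my-app:$${TAG}" .
        docker push "gcr.io/$PROJECT_ID/my-app:$${TAG}"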
For this kind of thing, Cloud Build is annoying...

How do I pass a docker image from one TeamCity build to another?

I need to split a TeamCity build, which builds a Docker image and pushes it to a Docker registry, into two separate builds:
a) The one that builds the docker image and publishes it as an artifact
b) The one that accepts the docker artifact from the first build and pushes it into a registry
The log says that these three commands are running:
docker build -t thingy -f /opt/teamcity-agent/work/55abcd6/docker/thingy/Dockerfile /opt/teamcity-agent/work/55abcd6
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest
There's plenty of other stuff going on, but I figured that this is the important part.
So I have copied the initial build two times, with the first command in the first build, and the next two in the second build.
I have set the first build as a snapshot dependency for the second build, and run it. And what I get is:
FileNotFoundError: [Errno 2] No such file or directory: 'docker': 'docker'
Which probably is because some of the files are missing.
Now, I did want to publish the Docker image as an artifact and make the first build an artifact dependency, but I can't find where Docker puts its files, and all the searches containing "docker" and "file" in them just lead to a bunch of articles about what the Dockerfile is.
So what can I do to make it so that the second build can use the resulting image and/or environment from the first build?
In all honesty, I didn't understand exactly what you are trying to do here.
However, this might help you:
You can save the image as a tar file:
docker save -o <image_file_name>.tar <image_tag>
This archive can then be moved and imported somewhere else.
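In TeamCity terms, a minimal sketch of passing the image between the two builds as an artifact; the tar filename and Dockerfile path are illustrative, and build B is assumed to declare an artifact dependency on the tar file from build A:

# build A: build the image and publish the tar as a build artifact
docker build -t thingy -f docker/thingy/Dockerfile .
docker save -o thingy.tar thingy

# build B: load the image from the artifact, then tag and push it
docker load -i thingy.tar
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest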
You can get a lot of information about an image or a container with "docker inspect":
docker inspect <image_tag>
Hope this helps.
