To give some background: we have nearly 20 micro-services, each of which has its own Jenkinsfile. Although each micro-service may run some extra steps in its build (e.g. building an extra Docker image), most of the steps are the same across all micro-services, with the only difference being the parameters for each step, for example the repository path.
Now, looking at Jenkins' Multi Configuration project, it seems perfect to have one such job and apply the build steps to all these projects. However, I have some doubts:
Are we able to use Multi Configuration to create Multi Branch jobs for each micro-service?
Are we able to support the extra steps that each micro-service may have while the common steps are being generated by Multi Configuration?
To make this clearer, let me give you an example.
micro-service-one:
|__ Jenkinsfile
|___ { step1: maven build
step2: docker build
step3: docker build (extra Dockerfile)
}
micro-service-two:
|__ Jenkinsfile
|___ { step1: maven build
step2: docker build
}
Now what I'm thinking is that my Multi Configuration job will look something like this:
Axis:
name: micro-service-one micro-service-two
docker_repo: myrepo.com/micro-service-one myrepo.com/micro-service-two
DSL Script:
multibranchPipelineJob("folder-build/${name}") {
    branchSources {
        git {
            // double quotes so the axis variables are interpolated
            id("bitbucket-${name}")
            remote("git@bitbucket.org:myproject/${name}.git")
            credentialsId("some_cred_id")
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            daysToKeep(2)
            numToKeep(10)
        }
    }
    triggers {
        periodic(5)
    }
}
But I'm not sure how to use the axis variables in the Jenkinsfile for each application. Is it even possible to have the Jenkinsfile generated by Multi Configuration?
Note: you might ask why I need this. The answer: to reduce the time we spend modifying or updating these Jenkinsfiles. When a change is needed, we have to check out nearly 20 or more repositories and modify them one by one as our environment evolves and new features or fixes are needed.
Related
We have multiple Jenkins pipeline jobs with steps like:
Build -> unit-tests -> push to artifactory
Build -> unit-tests -> deploy
Build -> unit-tests -> integration tests
etc.
Management wants to unify all that into one big-ass pipeline, and currently my team has two approaches for how to do it:
a) Create one big-ass pipeline job with all the stages inside.
The con of this is that we don't need to deploy or publish to Artifactory on every single build, so there would be some if statements inside that skip stages when needed, which will make the build history a total mess, because one build can do a different thing from another (e.g. build #1 publishes binaries, and build #2 runs integration tests). The pro is that we have everything in one workspace and one Jenkinsfile.
b) Create a separate job for each unit of work,
like 'build', 'integration tests', 'publishing' and 'deploying', and then create one orchestrator job that calls the smaller jobs in sequence, wrapped in stages. The con of this is that we still have CI spread over different jobs, and artifacts have to be passed between them. The pro, of course, is that we can run them independently if needed, so if you only need unit tests, you run only the unit-tests job, which also results in a normal and meaningful build history.
Could you please point out whether you would go with a) or b), or how you would do it instead?
If the reason for unifying them is code repetition, look at shared libraries. Your build and unit-test stages, which are common to all pipelines, can go into the shared library, and you can just call the library code from the different pipelines.
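For illustration, here is a minimal sketch of such a library step, assuming a global step defined in vars/buildAndTest.groovy (the file name and the shell commands are placeholders for your setup):
// vars/buildAndTest.groovy in the shared library repository
def call() {
    node {
        stage('Build') {
            // common build logic shared by all pipelines
            sh 'mvn -B clean package'
        }
        stage('Unit tests') {
            sh 'mvn -B test'
        }
    }
}
Each pipeline can then load the library and call buildAndTest() before its own publish, deploy or integration-test stages.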
We have one "big ass pipeline", spiced up with
stage('Push') {
    when {
        expression { env.PUSH_TO_ARTIFACTORY }
        beforeAgent true
    }
    steps {
        // etc.
    }
}
Regarding history, you can change your build description, so for builds that push you can add a * symbol at the end, e.g.
def is_push = env.PUSH_TO_ARTIFACTORY ? " *" : ""
currentBuild.displayName += is_push
Having everything in one file means that you don't need to figure out which file to look at as you fix things.
We're developing an application which consists of three parts:
Backend (Java EE) (A)
Frontend (vuejs) (B)
Admin frontend (React) (C)
For each of the above, the status quo is:
Maintained in its own Git repository
Has its own docker-compose.yml
Has its own Jenkinsfile
The Jenkinsfile for each component includes a "Deploy" stage which basically just runs the following command:
docker stack deploy -c docker-compose.yml $stackName
This approach, however, doesn't feel "right". We're struggling with some questions like:
How can we deploy the "complete application"? Our first guess was using a separate docker-compose.yml which contains the services of A, B and C.
But where would we keep this file? Definitely not in one of the Git repos as it doesn't belong there. A fourth repo?
How could we start the deployment of this combined docker compose file if there are changes in one of the above repos for A, B, C?
We're aware that these questions might not be quite specific, but they show our confusion regarding this topic.
Do you have any good practices how to orchestrate these three service components?
Well, one way to do this is to make the 3 deployments separate pipelines; then, as the last step per application, you would just call the particular deployment. For example, for the backend:
stage("deploy backend") {
steps {
build 'deploy backend'
}
}
Then a separate pipeline to deploy all the apps would just do
stage("deploy all") {
steps {
build 'deploy backend'
build 'deploy frontend'
build 'deploy admin frontend'
}
}
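Since the three deployments are independent, the same orchestrator could also fan them out concurrently; here is a sketch using the parallel step with the same downstream job names:
stage("deploy all") {
    steps {
        script {
            // trigger the three downstream deploy jobs at the same time
            parallel(
                backend: { build 'deploy backend' },
                frontend: { build 'deploy frontend' },
                admin: { build 'deploy admin frontend' }
            )
        }
    }
}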
An open question is where you would keep the docker-compose.yml.
I'm assuming that automatic deployment would be available just for your master branch, so I would keep it in each project. You would also need an additional Jenkins configuration file for the deployment pipeline, meaning you would have a simple pipeline 'deploy backend' pointing to this new Jenkins configuration file in the master branch of 'backend'. But then it all depends on your gitflow.
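As a sketch, that additional configuration file could be as small as the following (the file name Jenkinsfile.deploy and the stack name backend are assumptions):
// Jenkinsfile.deploy in the master branch of 'backend',
// used as the script path of the 'deploy backend' job
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sh 'docker stack deploy -c docker-compose.yml backend'
            }
        }
    }
}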
Using the kubernetes-plugin, how does one build an image in a prior stage for use in a subsequent stage?
Looking at the podTemplate API it feels like I have to declare all my containers and images up front.
In semi-pseudo code, this is what I'm trying to achieve.
pod {
    container('image1') {
        stage1 {
            $ pull/build/push 'image2'
        }
    }
    container('image2') {
        stage2 {
            $ do things
        }
    }
}
The Jenkins Kubernetes Pipeline Plugin initializes all slave pods during pipeline startup. This also means that all container images used within the pipeline need to be available in some registry. Perhaps you can give us more context about what you are trying to achieve; maybe there are other solutions to your problem.
There are certainly ways to dynamically create a pod from a built container and connect it as a slave at build time, but I already feel that this approach is not solid and will bring some complications.
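For reference, declaring the containers up front with the plugin's podTemplate step looks roughly like this (container names and images are placeholders; both images must already exist in a registry when the pod starts):
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'docker:latest', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Build') {
            container('maven') {
                sh 'mvn -B clean package'
            }
        }
        stage('Docker build') {
            container('docker') {
                // an image built here only becomes usable by another pod
                // template after it has been pushed to a registry
                sh 'docker build -t myrepo.com/myimage .'
            }
        }
    }
}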
I have a git monorepo with different apps. Currently I have a single Jenkinsfile in the root folder that contains the pipeline for all apps. It is very time-consuming to execute the full pipeline for all apps when a commit changes only one app.
We use a GitFlow-like approach to branching, so Multibranch Pipeline jobs in Jenkins are a perfect fit for our project.
I'm looking for a way to have several jobs in Jenkins, each one triggered only when the code of the appropriate application changes.
The perfect solution for me looks like this:
I have several Multibranch Pipeline jobs in Jenkins. Each one looks for changes only to a given directory and its subdirectories, and each one uses its own Jenkinsfile. Jobs poll git every X minutes, and if there are changes to the appropriate directories in existing branches, a build is initiated; if there are new branches with changes to the appropriate directories, a build is initiated as well.
What stops me from implementing this:
I'm missing a way to define which folders' commits must be ignored during the scan executed by the Multibranch pipeline. The "Additional behaviours" section for a Multibranch pipeline doesn't have the "Polling ignores commits in certain paths" option that Pipeline or Freestyle jobs have. But I want to use a Multibranch pipeline.
The solution described here doesn't work for me because, if there is a new branch with changes only to "project1", then whenever the Multibranch pipeline for "project2" is triggered it will discover this new branch anyway and build it. That means each of my Multibranch pipelines will be executed at least once for every new branch, no matter whether there were changes to the appropriate code or not.
I'd appreciate any help or suggestions on how I can implement several Multibranch pipelines watching the same git repository but triggered only when the appropriate pieces of code change.
This can be accomplished by using the Multibranch build strategy extension plugin. With this plugin, you can define a rule so that a build is only initiated when the changes belong to a certain sub-directory.
Install the plugin
On the Multibranch pipeline configuration, add a Build strategy
Select Build included regions strategy
Put a sub-folder in the field, such as subfolder/**
This way the changes will still be discovered, but they won't initiate a build if they don't belong to a certain set of files or folders.
This is the best approach I'm aware of so far, but I think the ideal solution would be one where such changes don't even get discovered.
Edit: Gerrit Code Review plugin configuration
In case you're using the Gerrit Code Review plugin, you can also prevent new changes from being discovered by using a custom query.
I solved this by creating a project that builds other projects depending on the files changed. For example, from your repo root:
/Jenkinsfile
#!/usr/bin/env groovy
pipeline {
    agent any
    options {
        timestamps()
    }
    triggers {
        bitbucketPush()
    }
    stages {
        stage('Build project A') {
            when {
                changeset "project-a/**"
            }
            steps {
                build 'project-a'
            }
        }
        stage('Build project B') {
            when {
                changeset "project-b/**"
            }
            steps {
                build 'project-b'
            }
        }
    }
}
You would then have other Pipeline projects with their own Jenkinsfile (e.g., project-a/Jenkinsfile).
I know that this post is quite old, but I solved this problem by changing the "Include branches" parameter for SVN repositories (this can possibly also be done using the "Filter by name (with wildcards)" property for git repos). Instead of supplying only the actual branch name, I also included the subfolder. So instead of supplying only "trunk", I used "trunk/subfolder". This limits scanning to only that specific directory. Note that I have not yet fully tested this solution.
At the moment we use JJB to compile Jenkins jobs (mostly pipelines already) in order to configure about 700 jobs, but JJB2 does not seem to scale well for building pipelines, and I am looking for a way to drop it from the equation.
Mainly, I would like to be able to have all these pipelines stored in a single centralized repository.
Please note that keeping the CI config (Jenkinsfile) inside each repository and branch is not possible in our use case; we need to keep all pipelines in a single "jenkins-jobs.git" repo.
As far as I know this is not possible yet, but it is in progress. See: https://issues.jenkins-ci.org/browse/JENKINS-43749
I think this is the purpose of Jenkins shared libraries.
I didn't develop such a library myself, but I am using some. Basically:
Develop the "shared code" of the Jenkins pipeline in a shared library
It can contain the whole pipeline (sequence of steps)
Add this library to the Jenkins server
In each project, add a Jenkinsfile that imports it using @Library (see the sketch below)
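For example, an individual project's Jenkinsfile could then shrink to something like this (the library name jenkins-jobs-lib and the step standardPipeline are placeholders for whatever you define):
@Library('jenkins-jobs-lib') _
// the whole pipeline logic lives in the shared library step
standardPipeline()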
As @Juh_ said, you can use Jenkins shared libraries. Here are the complete steps. Suppose that we have three branches:
master
develop
stage
and we want to create a single Jenkinsfile so that we can change things in only one place. All you need is to create a new branch, e.g. common. This branch MUST have the standard shared-library structure, shown below. What we are interested in for now is adding a new groovy file in the vars directory, e.g. common.groovy. Here we can put the common Jenkinsfile logic that you wish to use across all branches.
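A minimal layout for that branch would be (only the vars directory matters for this example):
common (branch root)
|__ vars
    |__ common.groovy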
Here is a sample:
def call() {
    node {
        stage("Install Stage from common file") {
            if (env.BRANCH_NAME.equals('master')) {
                echo "npm install from common files master branch"
            } else if (env.BRANCH_NAME.equals('develop')) {
                echo "npm install from common files develop branch"
            }
        }
        stage("Test") {
            echo "npm test from common files"
        }
    }
}
You must wrap your code in a call function for it to be usable from other branches. Now that we have finished the work in the common branch, we need to use it in our branches. Go to any branch in which you wish to use this pipeline, e.g. master, create a Jenkinsfile, and put this one line of code:
common()
This will call the common function that you created earlier in the common branch and will execute the pipeline. (This assumes the common branch has been configured in Jenkins as a shared library that is loaded implicitly; otherwise you also need a @Library annotation in the Jenkinsfile.)