We're developing an application which consists of three parts:
Backend (Java EE) (A)
Frontend (vuejs) (B)
Admin frontend (React) (C)
The status quo for each of the above is:
Maintained in its own Git repository
Has its own docker-compose.yml
Has its own Jenkinsfile
The Jenkinsfile for each component includes a "Deploy" stage which basically just runs the following command:
docker stack deploy -c docker-compose.yml $stackName
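For illustration, such a Deploy stage looks roughly like this (how $stackName is set varies; here it is assumed to come from the environment or job parameters):
stage("Deploy") {
    steps {
        // $stackName is assumed to be an environment variable or job parameter
        sh 'docker stack deploy -c docker-compose.yml $stackName'
    }
}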
However, this approach doesn't feel "right". We're struggling with questions like:
How can we deploy the "complete application"? Our first guess was a separate docker-compose.yml which contains the services of A, B and C.
But where would we keep this file? Definitely not in one of the existing Git repos, as it doesn't belong there. A fourth repo?
How would we trigger a deployment of this combined docker compose file when there are changes in one of the repos for A, B or C?
We're aware that these questions might not be very specific, but they show our confusion regarding this topic.
Do you have any good practices for orchestrating these three service components?
Well, one way to do that is to make the 3 deployments separate pipelines; then, as the last step in each application's pipeline, you would just call the particular deployment. For example, for the backend:
stage("deploy backend") {
steps {
build 'deploy backend'
}
}
Then a separate pipeline to deploy all the apps would just do:
stage("deploy all") {
steps {
build 'deploy backend'
build 'deploy frontend'
build 'deploy admin frontend'
}
}
The open question is where you would keep the docker-compose.yml.
I'm assuming that automatic deployment would be available only for your master branch, so I would still keep it in each project. You would also need an additional Jenkins configuration file for the deployment pipeline - meaning you would have a simple pipeline 'deploy backend' pointing to this new Jenkins configuration file in the master branch of 'backend'. But then it all depends on your gitflow.
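For illustration, the 'deploy backend' job itself could be little more than a checkout followed by the stack deploy. The sketch below assumes the job is configured as "Pipeline script from SCM" against the master branch of 'backend', and the stack name is a placeholder:
pipeline {
    agent any
    stages {
        stage("deploy backend") {
            steps {
                // check out the same repo/branch this pipeline definition came from
                checkout scm
                // 'backend' is an assumed stack name; docker-compose.yml lives in the backend repo
                sh 'docker stack deploy -c docker-compose.yml backend'
            }
        }
    }
}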
To give some background, we have nearly 20 micro-services, each of which has its own Jenkinsfile. Although each micro service may do some extra steps in its build (e.g. building an extra docker image), most of these steps are the same across all the micro services, the only difference being the parameters for each step, for example the repository path etc.
Now, looking at Jenkins' Multi Configuration project, it seems perfect to have one of these jobs and apply the build steps to all these projects. However, there are some doubts that I have:
Are we able to use Multi Configuration to create Multi Branch jobs for each microservice?
Are we able to support extra steps that each micro service may have while the common steps are being generated by Multi Configuration?
To be more clear, let me give you an example.
micro-service-one:
|__ Jenkinsfile
    |___ { step1: maven build
          step2: docker build
          step3: docker build (extra Dockerfile)
        }
micro-service-two:
|__ Jenkinsfile
    |___ { step1: maven build
          step2: docker build
        }
Now what I'm thinking is that my Multi Configuration will look something like this:
Axis:
name: micro-service-one micro-service-two
docker_repo: myrepo.com/micro-service-one myrepo.com/micro-service-two
DSL Script:
multibranchPipelineJob("folder-build/${name}") {
    branchSources {
        git {
            id("bitbucket-${name}")
            remote("git@bitbucket.org:myproject/${name}.git")
            credentialsId("some_cred_id")
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            daysToKeep(2)
            numToKeep(10)
        }
        defaultOrphanedItemStrategy {
            pruneDeadBranches(true)
            daysToKeepStr("2")
            numToKeepStr("10")
        }
    }
    triggers {
        periodic(5)
    }
}
But I'm not sure how to use the axis variables in the Jenkinsfile for each application. Is it even possible to have the Jenkinsfile generated by the Multi Configuration project?
Note: You might ask why I need this. The answer is to reduce the time we spend on modifying or updating these Jenkinsfiles. When a change is needed, we have to check out nearly 20 or more repositories and modify them one by one as our environment evolves and new features or fixes are needed.
At the moment we use JJB to compile Jenkins jobs (mostly pipelines already) in order to configure about 700 jobs, but JJB2 does not seem to scale well for building pipelines and I am looking for a way to drop it from the equation.
Mainly I would like to be able to have all these pipelines stored in a single centralized repository.
Please note that keeping the CI config (Jenkinsfile) inside each repository and branch is not possible in our use case; we need to keep all pipelines in a single "jenkins-jobs.git" repo.
As far as I know this is not possible yet, but it is in progress. See: https://issues.jenkins-ci.org/browse/JENKINS-43749
I think this is the purpose of Jenkins shared libraries.
I didn't develop such a library myself, but I am using some. Basically:
Develop the "shared code" of the jenkins pipeline in a shared library
it can contains the whole pipeline (seq of steps)
Add this library to the jenkins server
In each project, add a jenkinsfile that "import" those using #Library
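For example, such a Jenkinsfile can be as small as this; the library name and the step name below are assumptions (the library has to be registered under that name in the Jenkins global configuration, and the step would live in its vars directory):
@Library('my-shared-library') _
// buildAndDeploy() is a hypothetical step defined in vars/buildAndDeploy.groovy of the library
buildAndDeploy()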
As @Juh_ said, you can use Jenkins shared libraries. Here are the complete steps. Suppose that we have three branches:
master
develop
stage
and we want to create a single Jenkins file so that we can change it in only one place. All you need is to create a new branch, e.g. common. This branch MUST have the standard shared library directory structure. What we are interested in for now is adding a new groovy file in the vars directory, e.g. common.groovy. Here we can put the common Jenkins file that you wish to be used across all branches.
Here is a sample:
def call() {
    node {
        stage("Install Stage from common file") {
            if (env.BRANCH_NAME.equals('master')) {
                echo "npm install from common files master branch"
            }
            else if (env.BRANCH_NAME.equals('develop')) {
                echo "npm install from common files develop branch"
            }
        }
        stage("Test") {
            echo "npm test from common files"
        }
    }
}
You must wrap your code in a call function in order for it to be usable from other branches. Now that we have finished the work in the common branch, we need to use it in our other branches. Go to any branch in which you wish to use this pipeline, e.g. master, create a Jenkinsfile and put this one line of code in it:
common()
This will call the common function that you created before in the common branch and will execute the pipeline.
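Note that this one-liner assumes the shared library is configured in Jenkins to load implicitly. If it is not, the Jenkinsfile would also need to import the library explicitly; the library name below is an assumption, and the part after @ selects the common branch:
// 'common-lib' is a hypothetical name under which the library is registered in Jenkins
@Library('common-lib@common') _
common()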
Here is something I want to achieve:
I have a Jenkins project which has 4 upstream projects. But I don't want to trigger this project when the upstream jobs are done building; instead I want to trigger the project via the remote API, which then waits on the upstream projects until they are done building, if these projects are building.
Let's say all 4 upstream projects can build the source code from any branch passed via the API, but I want the downstream project to start only when a specific branch is passed to these upstream projects.
Scenario:
Let's say I have two clusters, A and B. For the sake of this question, I want to deploy my code to cluster A, i.e. front end and backend code. Now I have one project to build the front end and one project to build the backend (these two projects can build code for cluster A and B, based on the branch passed). I also have two deploy projects for cluster A which will deploy the front end and backend. So, when I pass a branch to build code for cluster A, it will trigger the build projects. But I only want the two deploy projects to start when this specific branch was passed.
If you want to control the builds remotely then use the Jenkins CLI - I have found it very useful: http://jenkinshost:8080/cli
You need to get the SSH key config right: add the public key of the user running the CLI to the user you want to run the job as in Jenkins, using the Jenkins user configuration (not on the command line).
Test the key setup with:
java -jar jenkins-cli.jar -s http://jenkinshost:8080 who-am-i
This should then report which user will be used to run the build in Jenkins.
But I think you can use the Conditional Build Step plugin for your problem
https://wiki.jenkins-ci.org/display/JENKINS/Conditional+BuildStep+Plugin
This will allow you to put a conditional wrapper around a build step i.e.
if branch==branchA then
trigger step - deploy to clusterA
if branch==branchB then
trigger step - deploy to clusterB
Personally I find this plugin a bit clunky and it makes the job config page a little messy.
Another solution I came up with was to always call the child job and then let it decide whether it should run.
So I have a script step at the start of the child job to see if it should run:
if [ "${branch}" = "Not the right branch name" ] ; then
    echo "EXIT_GREEN"
    exit 1
fi
You have now failed this job, which would cause the parent job to go red, but by using the Groovy Postbuild plugin https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin you can add a post build step like this:
if (manager.logContains(".*EXIT_GREEN.*")) {
    manager.addBadge("info.gif", "This job had nothing to do")
    manager.build.@result = hudson.model.Result.SUCCESS
}
The child job has now run green (with an info icon against the build) but has actually not done anything. Obviously, if the branch is one you want to deploy, then the first script step does not run the exit 1 and the job continues as normal.
Every Jenkins pipeline does pretty much the same thing - at least in a small team with multiple projects:
Build (from the same source code repo) --> run tests --> publish artifacts (to the same artifact repo)
We are creating many new projects and they all have a very similar lifecycle. Is it possible to create a template pipeline from which I can create concrete pipelines and make the necessary changes to the jobs?
There are a couple of approaches that work well for me and my team.
Part 1) is to identify which orchestration plugin suits you best in Jenkins.
Plugins and approaches that worked well for me were:
a) Use http://ci.openstack.org/jenkins-job-builder/
It abstracts the job definitions and flows using a higher-level library. It allows you to define jobs in YAML, which is fairly simple, and it supports most of the common use cases (jobs, templates, flows).
These YAML files can then be consumed by the jenkins-job-builder Python CLI tool through an orchestration tool such as Ansible, Puppet or Chef.
You can use YAML anchors to replace blocks that are common to multiple jobs, or even template them with a template engine (ERB, Jinja2).
b) Use the workflow-plugin, https://github.com/jenkinsci/workflow-plugin
The workflow plugin allows you to have a single workflow in groovy, instead of a set of jobs that chain together.
"For example, to check out and build several repositories in parallel, each on its own slave:
parallel repos.collectEntries { repo -> [/* thread label */ repo, {
    node {
        dir('sources') { // switch to subdir
            git url: "https://github.com/user/${repo}"
            sh 'make all -Dtarget=../build'
        }
    }
}]}
"
If you build these workflow definitions from a template engine (ERB, Jinja2) and integrate them with a configuration management tool (again Ansible, Chef or Puppet), it becomes a lot easier to make small and larger changes that affect one or all of the jobs.
For example, you can template that some Jenkins boxes compile, publish and deploy the artifacts into a development environment, while others simply deploy the artifacts into a QA environment.
This can all be achieved from the same template, using if/then statements and macros in Jinja2/ERB.
Ex (an abstraction):
if ($environment == dev) then compile, publish, deploy($environment)
elif ($environment == qa) then deploy($environment)
Part 2) is to make sure all the Jenkins configuration for all the jobs and flows is kept in source control, and to make sure that a change to a job definition in source control is automatically propagated to the Jenkins server(s) (again Ansible, Puppet, Chef).
Or even have a Jenkins job that monitors its own repo of job definitions and automatically updates itself.
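One concrete (hypothetical) way to do the self-updating part, if you use the Job DSL plugin, is a small seed pipeline that checks out the job-definitions repository and re-applies every definition it finds; the repository URL and file layout below are assumptions:
node {
    // assumed repository holding all the job definitions
    git url: 'git@example.com:infra/jenkins-jobs.git'
    // process every Job DSL script in the repo and delete jobs that were removed from it
    jobDsl targets: 'jobs/**/*.groovy',
           removedJobAction: 'DELETE'
}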
When you achieve #1 and #2 you should be at a position where you can, with some confidence, allow all your team members to make changes to their jobs/projects, giving you information about who changed what and when, and be able to roll back changes easily from change control when things go wrong.
It's pretty much about getting Jenkins to deploy code from a series of templated jobs that were themselves defined in code.
Another approach we've been following is managing jobs via Ansible templates. We started way before the jenkins_job module became available, and are using the url module to talk to Jenkins, but the overall approach is the same:
j2 templates are created for the different jobs
a loop goes over the project definitions and updates the jobs and views in Jenkins
by default the common definition is used, and only a very minimal description is required:
default_project:
  jobs:
    Build:
      template: build.xml.j2
    Release: ...

projects:
  DefaultProject1:
    properties:
      repository: git://../..

  CustomProject2:
    properties:
      a: b
      c: d
    jobs:
      Custom-Build:
        template: custom.j2
I'm new to Jenkins. I have a requirement where I need to run part of a job on the Master node and the rest on a slave node.
I tried searching on forums but couldn't find anything related to that. Is it possible to do this?
If not, I'll have to break it into two separate jobs.
EDIT
Basically I have a job that checks out source code from SVN, then compiles and builds jar files. After that, it builds a Wise installer for the application. I'd like to do the source code checkout and compilation on the master (Linux) and delegate the Wise Installer setup to a Windows slave.
It's definitely easier to do this with two separate jobs; you can make the master job trigger the slave job (or vice versa).
If you publish the files that need to be bundled into the installer as build artifacts from the master build, you can pull them onto the slave via a Jenkins URL and create the installer. Use the "Archive artifacts" post build step in the master build to do this.
The Pipeline Plugin allows you to write jobs that run on multiple slave nodes. You don't even have to go create other separate jobs in Jenkins -- just write another node statement in the Pipeline script and that block will just run on an assigned node. You can specify labels if you want to restrict the type of node it runs on.
For example, this Pipeline script will execute parts of it on two different nodes:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    bat "mytest.exe"
}
Some more information at the Pipeline plugin tutorial. (Note that it was previously called the Workflow Plugin.)
You can use the Multijob plugin, which adds the idea of a build phase that runs other jobs in parallel as a build step. You can still continue to use the regular freestyle job build and post-build options as well.