How do I modularize the order of execution and/or dependency relationships between jobs in a CircleCI orb?

Our CircleCI orb defines two jobs - A and B.
Our customers use, and are expected to use, these jobs in a specific way: they define a job C of their own and invoke A and B such that A runs before C and B runs after C.
Currently, our customers enforce that ordering themselves with the 'requires' key.
Is there a way to:
abstract this detail (the order of job execution) away from our customers?
ideally, require that the jobs can only be run in that exact order?
This is an actual example of a workflow our customer is using.
customers_workflow:
  jobs:
    - our-orb/a:
        filters:
          branches:
            only: master
    - c:
        requires: [our-orb/a]
        filters:
          branches:
            only: master
    - our-orb/b:
        requires: [c]
        filters:
          branches:
            only: master

The only potential solution I've come up with so far is to require c to be a shell script ("c.sh") instead of a CircleCI job, and to combine jobs a and b into a single job ab that consecutively runs the commands from a, the shell script c.sh from the invoking repo, and then the commands from b; a rough sketch follows.
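Purely as a sketch of that idea (the executor and the command names a-steps/b-steps are made up, assuming the orb exposes A's and B's logic as reusable commands):

jobs:
  ab:
    executor: default            # illustrative executor
    steps:
      - checkout
      - a-steps                  # hypothetical command holding job A's steps
      - run:
          name: Run the customer's intermediate step
          command: bash ./c.sh   # script supplied by the invoking repo
      - b-steps                  # hypothetical command holding job B's steps

The customer then invokes only our-orb/ab in their workflow and never has to express the ordering themselves.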

Related

Avoid trigger Bitbucket pipeline when the title starts with Draft or WIP

To automate our CI process, I need to run the Bitbucket pipelines only when the PR title does not start with "Draft" or "WIP". Atlassian only offers this feature: https://support.atlassian.com/bitbucket-cloud/docs/use-glob-patterns-on-the-pipelines-yaml-file/.
I tried with the regex ^(?!Draft:|WIP:).+ like this:
pipelines:
  pull-requests:
    '^(?!Draft:|WIP:).+':
      - step:
          name: Tests
but the pipeline does not start under any circumstances (with or without Draft:/WIP:). Any suggestions?
Note that the PR pattern you define in the pipelines is matched against the source branch, not the PR title. Specifically, I used to define an effectively empty pipeline for PRs from wip/* branches, e.g.
pipelines:
  pull-requests:
    wip/*:
      - step:
          name: Pass
          script:
            - exit 0
    "**":
      - step:
          name: Tests
          # ...
But this workflow requires you to work on wip/* branches and to change their source branch later on, which is somewhat cumbersome, and developers simply did not opt in.
This works, though.

Jenkins pipeline partitioning

We have multiple Jenkins pipeline jobs with steps like:
Build -> unit-tests -> push to artifactory
Build -> unit-tests -> deploy
Build -> unit-tests -> integration tests
etc.
Management wants to unify all that into one big ass pipeline, and currently my team has two approaches for how to do it:
a) Create one big ass pipeline job with all the stages inside.
The con is that we do not need to deploy or publish to Artifactory on every single build, so there would be some if statements inside that skip stages when needed - which makes the build history a total mess, because one build can do a different thing from another (e.g. build #1 publishes binaries, and build #2 runs integration tests). The pro is that we have everything in one workspace and one Jenkinsfile.
b) Create a separate job for each unit of work.
Like 'build', 'integration tests', 'publishing' and 'deploying', and then create one orchestrator job that calls the smaller jobs in sequence, wrapped in stages. The cons are that CI is still spread over different jobs and artifacts have to be passed between them. The pros, of course, are that we can run them independently if needed, so if you only need unit tests you run only the unit-tests job, which also results in a normal and meaningful build history.
Could you please point out if you would go with a or b, or otherwise how would you do it instead?
If the reason for unifying them is code repetition, look at shared libraries. Your Build and unit-tests stages, which are common to all pipelines, can go into the shared library, and you can just call the library code from the different pipelines; a minimal sketch follows.
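As a minimal sketch of that idea (the library name, step name and make targets are made up), the common part could live in a vars/ step of the shared library:

// vars/buildAndTest.groovy in the shared library (hypothetical name)
def call() {
    stage('Build') {
        sh 'make build'      // stand-in for your real build step
    }
    stage('Unit tests') {
        sh 'make test'       // stand-in for your real test step
    }
}

and each pipeline would then only add its own specific stages:

@Library('my-shared-lib') _      // library name is an assumption
node {
    checkout scm
    buildAndTest()               // common Build + unit-tests from the library
    // pipeline-specific stages (publish, deploy, integration tests) go here
}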
We have one "big ass pipeline", spiced up with
stage('Push') {
  when {
    expression { env.PUSH_TO_ARTIFACTORY }
    beforeAgent true
  }
  steps {
    // etc.
Regarding history, you can change your build description, so for builds that push you can add a * symbol at the end, e.g.
def is_push = env.PUSH_TO_ARTIFACTORY ? " *" : ""
currentBuild.displayName += "${is_push}"
Having everything in one file means that you don't need to figure out which file to look at as you fix things.

Aggregating test results in Jenkins with Parametrized Jobs

I understand this post is similar to:
Aggregating results of downstream is no test in Jenkins
and also to:
Aggregating results of downstream parameterised jobs in Jenkins
Nonetheless, I am not able to figure out, for my case, how to make this work. I am currently using Jenkins 1.655.
I have jobs A, B, C - A being the upstream job. What I want to do is have A call B and B call C. Each needs to block and wait for the completion of the next. If one fails, all fail. B and C generate unit test reports, so I want to aggregate these reports in A and then publish that result in A. So, here's the current setup of the jobs:
Job A:
Build Steps
Execute shell: echo $(date) > aggregate
Trigger Parametrized Build Job: Job B
Post Build Steps
Aggregate downstream test results
Record fingerprints of files to track usage: set Files to fingerprint to aggregate
Publish JUnit test result report (report files from B and C)
Job B:
Build Steps
Copy artifacts from another project: copy from upstream job aggregate file
Run tests to generate unit test reports
Trigger Parametrized Build Job: Job C
It ultimately fails here because aggregate is only archived in the
Post Build Steps of Job A. How can I archive an artifact in the Build Step?
Post Build Steps
Aggregate downstream test results (unit test.xml generated)
Record fingerprints of files to track usage: set Files to fingerprint to aggregate
I won't post Job C here for simplicity but it follows pretty much what B does.
So, summing it up, I want to have interlinked jobs that depend on each other and use the Parameterized Trigger plugin, and the upstream job must aggregate the test results of all downstream jobs.
Any help appreciated, thanks!
If you have no limitation on where to run your jobs, you can always specify that they run on the same workspace/machine - this will solve all your issues.
If for some reason you can't run them in the same workspace, then instead of using the Copy Artifact plugin you can use the link in Jenkins to the workspace (guessing you're using the Parameterized Trigger Plugin), so it'll be easy to wget the "aggregate" file of job A from the triggered job, using the defined TRIGGERED_BUILD_NUMBER_ ("Last build number triggered") from A. This will also help you keep track of the jobs B and C you triggered, so you can get the artifacts from there.
Hope it helps!
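Purely for illustration (not part of the original answer), fetching an archived file over HTTP from inside a downstream build could look roughly like this, assuming job A archives the aggregate file and passes its own build number down as a parameter (here called UPSTREAM_BUILD_NUMBER):

# Hypothetical parameter passed via the Parameterized Trigger Plugin;
# the standard Jenkins artifact URL layout is assumed.
wget "${JENKINS_URL}/job/A/${UPSTREAM_BUILD_NUMBER}/artifact/aggregate"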

Travis CI: branch filters in build matrix items

We are wondering whether there is any way to add filters to Travis matrix items. In our particular case, we wish to run certain jobs only on specific branches.
The following example would be an ideal way for configuring this scenario, however it doesn't seem to work:
matrix:
  include:
    - env: BUILD_TYPE=release
      branches:
        only:
          - master
    - env: BUILD_TYPE=ci
      branches:
        only:
          - develop
As a workaround, we can exit from the build script immediately by checking the appropriate env vars (TRAVIS_BRANCH), but it is very far from ideal as launching the slave machine and cloning the repo takes a considerable amount of time.
You can now achieve this with the beta feature Conditional Build Stages
jobs:
  include:
    - stage: release
      if: branch = master
      env: BUILD_TYPE=release
    - stage: ci
      if: branch = develop
      env: BUILD_TYPE=ci

Template workflows in Jenkins

Every Jenkins pipeline does pretty much the same thing - at least in a small team with multiple projects.
Build (from the same sourcecode repo) --> run tests --> publish artifacts (to the same artifact repo)
We are creating many new projects and they all have a very similar lifecycle. Is it possible to create a template pipeline from which I can create concrete pipelines and make the necessary changes to the jobs?
There are a couple of approaches that I use that work well for me and my team.
Part 1) is to identify which orchestration plugin in Jenkins suits you best.
Plugins and approaches that worked well for me were:
a) Use http://ci.openstack.org/jenkins-job-builder/
It abstracts the job definitions and flows using a higher-level library. It allows you to define jobs in YAML, which is fairly simple, and it supports most of the common use cases (jobs, templates, flows).
These YAML files can then be consumed by the jenkins-job-builder Python CLI tool through an orchestration tool such as Ansible, Puppet or Chef.
You can use YAML anchors to replace blocks that are common to multiple jobs, or even template them with a template engine (ERB, Jinja2), for example:
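As a tiny illustration of the anchor idea (plain YAML, deliberately not tied to the exact jenkins-job-builder schema), a shared block is defined once and merged into several job definitions:

common: &common_job              # shared block defined once
  node: linux
  wrappers:
    - timestamps

jobs:
  project-a-build:
    <<: *common_job              # pull in the shared keys
    builders:
      - shell: make all
  project-b-build:
    <<: *common_job
    builders:
      - shell: make test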
b) Use the workflow-plugin, https://github.com/jenkinsci/workflow-plugin
The workflow plugin allows you to have a single workflow in groovy, instead of a set of jobs that chain together.
"For example, to check out and build several repositories in parallel, each on its own slave:
parallel repos.collectEntries {repo -> [/* thread label */repo, {
  node {
    dir('sources') { // switch to subdir
      git url: "https://github.com/user/${repo}"
      sh 'make all -Dtarget=../build'
    }
  }
}]}
"
If you build these workflow definitions from a template engine (ERB, Jinja2) and integrate them with a configuration management tool (again Ansible, Chef, Puppet), it becomes a lot easier to make small and large changes that affect one or all of the jobs.
For example, you can template that some jenkins boxes compile, publish and deploy the artifacts into a development environment, while others simply deploy the artifacts into a QA environment.
This can all be achieved from the same template, using if/then statements and macros in jinja2/erb.
Ex (an abstraction):
if ($environment == dev) then compile, publish, deploy($environment)
elif ($environment == qa) then deploy($environment)
Part 2) is to make sure all the Jenkins configuration for all the jobs and flows is kept in source control, and to make sure a change to a job definition in source control is automatically propagated to the Jenkins server(s) (again Ansible, Puppet, Chef).
Or even have a Jenkins job that monitors its own repo of job definitions and automatically updates itself.
When you achieve #1 and #2 you should be in a position where you can, with some confidence, allow all your team members to make changes to their jobs/projects, giving you information on who changed what and when, and be able to roll back changes easily from change control when things go wrong.
It's pretty much about getting Jenkins to deploy code from a series of templated jobs that were themselves defined in code.
Another approach we've been following is managing jobs via Ansible templates. We started way before the jenkins_job module became available, and are using the url module to talk to Jenkins, but the overall approach will be the same:
j2 templates are created for the different jobs
a loop goes over the project definitions and updates jobs and views in Jenkins (a rough sketch of such a task follows the data below)
by default the common definition is used, and only a very minimal description is required:
default_project:
  jobs:
    Build:
      template: build.xml.j2
    Release: ...

projects:
  DefaultProject1:
    properties:
      repository: git://../..

  CustomProject2:
    properties:
      a: b
      c: d
    jobs:
      Custom-Build:
        template: custom.j2
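Purely as an illustration of that loop (using the jenkins_job module mentioned above rather than the url module we actually use; the flattened job_list variable, the URL and the credentials are assumptions), the Ansible task could look roughly like:

# Assumes job_list has been flattened beforehand into entries such as
#   { project: DefaultProject1, name: Build, template: build.xml.j2 }
- name: Create or update Jenkins jobs from the project definitions
  community.general.jenkins_job:
    name: "{{ item.project }}-{{ item.name }}"
    config: "{{ lookup('template', item.template) }}"
    url: "https://jenkins.example.com"     # assumption
    user: "{{ jenkins_user }}"             # assumption
    token: "{{ jenkins_api_token }}"       # assumption
  loop: "{{ job_list }}"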
