bitbucket plugin in jenkins pipeline should have several repositories as trigger

We use several git repositories as sources for common builds. Since any one of them, not just a single designated one, might change on its own, we need several repositories to be watched in order to trigger the invocation of a Jenkins pipeline script.
I guess that others might also be interested in this kind of extended functionality. Any proposal or hint on how to resolve this is highly appreciated.
We have not started using the Bitbucket plugin yet, because as it stands it would be too incomplete to serve our needs.

You could try using web hooks
Put your web hook at project level, and changes to repositories within the project will trigger the web hook.
Then on the Jenkins side you will need the Generic Webhook Trigger plugin.
Bitbucket sends a JSON payload to Jenkins, which consumes it.
You can use the plugin's JSONPath expression tester to pull the relevant fields out of the payload. From those you can decide which build should be triggered.
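For illustration, here is a minimal sketch of a declarative pipeline using the Generic Webhook Trigger plugin; the token, the repository names, and the JSONPath expressions are placeholders and depend on the exact payload your Bitbucket version sends:

pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                // JSONPath expressions below are assumptions; adjust them to your payload
                [key: 'repo', value: '$.repository.slug'],
                [key: 'ref',  value: '$.changes[0].ref.displayId']
            ],
            token: 'common-build-token',
            printContributedVariables: true,
            // only trigger when the pushed repository is one we care about
            regexpFilterText: '$repo',
            regexpFilterExpression: '^(repo-one|repo-two|repo-three)$'
        )
    }
    stages {
        stage('Build') {
            steps {
                echo "Triggered by ${env.repo} on ${env.ref}"
            }
        }
    }
}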
The newer way of doing this is to use the Bitbucket Server plugin for Jenkins

Related

Best route to take for a Jenkins job with hundreds of sub jobs

Currently, at my organization we have a few repositories which contain ~500+ projects that need to be built to satisfy unit testing (really integration testing), and I am trying to think of a new way of approaching the situation.
Currently, the pipeline for building the projects is templatized and is stored on our Bitbucket server. All the projects get built in parallel, so once the jobs are queued, they all go to the master node to do a SCM check of the pipeline.
This creates stress on the master node, and for some reason it is not able to utilize every available node, and every executor on those nodes, to its fullest potential. Conversely, if the pipeline is not stored in SCM, it does the complete opposite: it DOES use every possible node and any available executor on those nodes.
Is there something I am missing about the SCM checkout version that makes it different than storing the pipeline locally on Jenkins? I understand that you need to do an SCM poll, and I am assuming only the master can do the SCM poll for the original Jenkinsfile.
I've tried:
Looking to see if I am potentially throttling the build, but I do not see anything
Checking that "Disable concurrent builds" is not enabled within the pipeline
Lightweight checkout, which seems to work with the Git plugin but not with the Bitbucket Server Integration plugin; however, Atlassian mentioned this will never be a feature, so this doesn't really matter.
I am trying to see if there is a possible way to change the infrastructure since I don't have much of a choice in how certain programs are setup since they are very tightly coupled.
I could in theory just keep the pipeline locally on Jenkins and use that as a template rather than checking it into SCM; however, making local changes to the template does not change the sub-jobs that use it (I could implement this feature, but SCM already does it). Plus, having the template pipeline checked into Bitbucket allows better control, so I am trying to avoid that option.

Is there a way to set/change the changeSet (changelog) content from pipeline script? Needed for preflight type of job

I have a preflight job using perforce in which I retrieve a branch, unshelve (apply) a given changelist on it and then build to validate that the change in question has not broken the build. Very similar to what you would do for a GitHub Pull Request type of CI.
I use the official checkout() pipeline call to get the branch, as it simplifies dealing with the Perforce creds, and that causes the Jenkins build to include the changelog of that branch. Yet those changes are of no interest to me, as my interest is in the changelist I am unshelving on top of that branch.
Can I, from the pipeline script, clear and fill currentBuild.changeSet? If so, would someone have an example, and which fields can I set under currentBuild.changeSet.items?
Or is this only possible by going the plugin route, in the same way the p4/git plugins do it?
My advice: don't play with currentBuild.changeSet. It also contains the changesets of the shared libraries you are using. I personally don't rely on it anymore.
However, here is an article on how to access the changelog from a pipeline:
https://support.cloudbees.com/hc/en-us/articles/217630098-How-to-access-Changelogs-in-a-Pipeline-Job-
Here is an example of how to implement that in a pipeline:
https://issues.jenkins-ci.org/browse/JENKINS-58441
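For reference, reading the entries from a pipeline (the approach the CloudBees article describes) looks roughly like this:

def changeLogSets = currentBuild.changeSets
for (changeLogSet in changeLogSets) {
    for (entry in changeLogSet.items) {
        // each entry is one commit/changelist recorded against this build
        echo "${entry.commitId} by ${entry.author}: ${entry.msg}"
        for (file in entry.affectedFiles) {
            echo "  ${file.editType.name} ${file.path}"
        }
    }
}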
Finally, in an ideal world, don't share your Jenkins with management or non-developers/testers; share only a dashboard connected to a database that you fill with the relevant information you need. I use InfluxDB + Grafana for that, via the InfluxDB plugin.

Using Jira to trigger a Jenkins build

I have a strange use case here, I know, but basically I have a CI/CD solution that starts with a developer creating a zip file of a set of resources. This zip is then sucked into SVN via the tool's internal programs.
Currently the solution works, using the FSTrigger plugin to poll for an updated zip. When it sees it, the process kicks off and we're happy.
Going forward, I'd like the builds to be triggered by a Jira issue reaching a certain status, and I have been looking at the Jira Trigger plugin. It looks like it will satisfy me with regard to triggering the build and passing data from Jira to Jenkins to use for delivery notes etc. However, it would still depend on the zip file being in a certain location to be picked up.
I'm wondering if it's possible to attach the zip to the Jira task, and then, when the task status hits 'build', kick off the Jenkins job and copy the zip so it can be picked up by the Jenkins build task.
For reasons too complex to mention, checking the zip into SVN first won't really work.
When your Jenkins build is triggered via the jira-trigger-plugin, you are able to access the JIRA_ISSUE_KEY environment variable, which contains the key of the JIRA issue whose status has changed.
With the JIRA issue key, you can hit the Get Issue JIRA REST API to retrieve the issue details. The issue details contain the attachment information, which can then be used to download the zip in Jenkins.
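As a rough sketch of that flow in a scripted pipeline (the JIRA base URL, the 'jira-creds' credentials id, and the use of jq on the agent are assumptions, not something the plugin provides):

node {
    stage('Fetch attachment from JIRA') {
        // JIRA_ISSUE_KEY is contributed by the jira-trigger-plugin
        withCredentials([usernameColonPassword(credentialsId: 'jira-creds', variable: 'JIRA_AUTH')]) {
            sh '''
                issue=$(curl -s -u "$JIRA_AUTH" \
                    "https://jira.example.com/rest/api/2/issue/${JIRA_ISSUE_KEY}?fields=attachment")
                url=$(echo "$issue" | jq -r '.fields.attachment[0].content')
                curl -s -u "$JIRA_AUTH" -o input.zip "$url"
            '''
        }
    }
    // input.zip is now in the workspace for the rest of the build to consume
}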

How to create complex value stream with multiple pipelines with Jenkins WorkFlow

How do you implement a complex value stream with multiple pipelines in Jenkins Workflow? Similar to what you can do with Go CD: How do I do CD with Go?: Part 2: Pipelines and Value Streams.
For a distributed system I would like each dev team and operations team to start with their own delivery pipeline. One change needs to trigger only the pipeline of the team that made the change. It then needs to trigger a new pipeline that takes the latest successful artifacts from each of the teams' pipelines and moves on from there. This means that the artifacts from the other teams are not rebuilt or retested, as they were not changed. And after the fan-in we can run a set of automated tests to verify the correct behaviour of the distributed system with the change.
In the documentation I only find that you can pull from multiple VCSs, but I assume everything is then built and tested on every change, which is something I want to avoid.
If each delivery pipeline is in its own Jenkins job, how can I visualize the complete pipeline, and what is the best way to pull in the last successful artifacts or versions from the other pipelines?
There is no direct equivalent in Jenkins for value streams, and Workflow jobs do not behave any differently in that respect: you can have upstream jobs and downstream jobs correlated with triggers (in this case the build step, or the core ReverseBuildTrigger), and use (for example) the Copy Artifact plugin to transfer artifacts to downstream builds. Similarly, you could use an external repository manager as the “source of truth” and define job triggers based on snapshots pushed to the repository.
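As a rough sketch (the job names are placeholders, and this assumes a Copy Artifact plugin version that exposes the copyArtifacts step), the upstream flow triggers the downstream job, and the downstream flow pulls in the last successful artifacts:

// in the upstream Workflow script: kick off the integration job without waiting for it
build job: 'integration-pipeline', wait: false

// in the downstream Workflow script: copy the latest successful team artifacts
node {
    copyArtifacts projectName: 'team-a-pipeline',
                  selector: lastSuccessful(),
                  filter: 'build/libs/**'
}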
That said, part of the purpose of Workflow is to avoid the need for complex job chains in most situations¹, since it is usually easier to reason about, debug, and customize a single script with standard control flow operators and local variables than to manage a set of interdependent jobs. If the main problem with a single flow is that you need to avoid rebuilding unmodified parts, one solution would be to use something like JENKINS-30412 to check the changelog of particular repository checkouts and skip build steps if empty. I think there would be more features needed to make such a system work in the general case that workspaces are clobbered or discarded by other builds.
¹One case where you definitely need separate jobs is that for security reasons the teams contributing to different projects must not be able to see one another’s sources or build logs.
Assuming that each of your dev teams works on a different module of your project and „One change needs to trigger only the pipeline of the team that made the change“ I'd use Git Submodules:
Submodules allow you to keep a Git repository as a subdirectory of another Git repository.
with one repo per team that becomes a submodule of a main module repo. This will be transparent to the teams since they just keep working on their designated repos.
The main module is also the aggregator project for your module projects in terms of the build tool. So, you have the options:
to build each repo/pipeline individually or
to build the whole (main) project at once.
A build pipeline that comprises one or more build jobs is associated with every team/repo/module.
The main pipeline is merely a collection of downstream jobs which represent the starting points of the team/repo/module pipelines.
The build triggers can be manual, timed, or on source changes.
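As a sketch, the main-module pipeline could then check everything out recursively via the Git plugin's submodule option (the URL and branch here are placeholders):

checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    userRemoteConfigs: [[url: 'https://bitbucket.example.com/scm/proj/main-module.git']],
    // pull in each team's submodule as part of the aggregate checkout
    extensions: [[$class: 'SubmoduleOption',
                  recursiveSubmodules: true,
                  parentCredentials: true]]
])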
A decision also has to be made:
whether you version your modules individually, such that other modules depend on release versions only.
Advantage:
Others rely on released, usually more stable versions.
Modules can decide which version of a dependency they want to use.
Disadvantages:
Releases have to be prepared for each module.
It may take longer until the latest changes are available to others.
Modules have to decide which version of a dependency they want to use. And they have to adapt it every time they need functionality added in a newer version.
or whether you use one version for the entire project (which is inherited by the modules then): ...-SNAPSHOT during the development cycle, a release version when releasing the project.
In this case, if there are modules that are essential for others, e.g. a core module, a successful build of it should also trigger a build of the dependent modules, so that incompatibilities are recognized as early as possible (see the sketch after this list).
Advantages:
Latest changes are immediately available to others.
A release is prepared for the whole project only once it is to be delivered.
Disadvantages:
Latest changes immediately available to others may introduce not so stable (snapshot) code.
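For the core-module case mentioned above, a minimal sketch of a dependent module's pipeline using the core ReverseBuildTrigger via the declarative upstream trigger (the job name and build command are placeholders):

pipeline {
    agent any
    triggers {
        // rebuild this module whenever the core module builds successfully
        upstream(upstreamProjects: 'core-module', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'
            }
        }
    }
}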
Re „How can I visualize the complete pipeline“
I'm not aware of any plugin that can do this with Workflows at the moment.
There's the Build Graph View Plugin, which was originally created for Build Flows, but it's more than two years old now:
Downstream builds are identified by DownStreamRunDeclarer extension point.
Default one is using Jenkins dependencyGraph and UpstreamCause and as such can detect common build chain.
build-flow plugin is contributing one to render flow execution as a graph
some Jenkins plugins may later contribute dedicated solutions.
(You know, „may“ and „later“ often become will not and never in development. ;)
There's the Build Pipeline Plugin but it apparently is also not suitable for Workflows:
This plugin provides a Build Pipeline View of upstream and downstream connected jobs [...]
Re „way to pull in the last successful artifacts“
Apparently it's not that smooth with Gradle:
By default, Gradle does not define any repositories.
I'm using Maven, where there are local and remote repositories; the latter can also be:
[...] internal repositories set up on a file or HTTP server within your company, used to share private artifacts between development teams and for releases.
Have you considered using a binary repository manager like Artifactory or Nexus?
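For instance, a hypothetical Gradle (Groovy DSL) snippet that points a module's build at such an internal repository, so one pipeline can resolve the artifacts another pipeline published (the URL and coordinates are placeholders):

repositories {
    maven {
        // internal Artifactory/Nexus instance shared by the teams
        url 'https://repo.example.com/libs-release-local'
    }
}
dependencies {
    implementation 'com.example.teama:core:1.4.2'
}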
From what I have seen, people are moving towards smaller, independent pieces of code delivery rather than monolithic deployments. But clearly, there will still be dependencies between different components. At the very least, for example, if you had one script that provisioned your infrastructure and another that built and deployed your app, you would want to be sure your infrastructure update script was run before your app deployment. On the other hand, your infrastructure does not depend on deploying your app code - it can be updated at its own pace, so long as it ideally passes some testing.
As mentioned in another post, you really have two options to accomplish this dependency:
Have a single pipeline (workflow script) that checks out code from both repos and puts them through the same pipeline simultaneously. Any change to one requires the full boat pipeline for everything.
Have two pipelines and this would allow each to go at its own pace independent of what the other does. This isn't a problem for the infrastructure code, but it very well could be for the app code. If you pushed your app code to production without the infrastructure update having happened first, the results may not be pleasant.
What I've started to do with Jenkins Workflow is establish a dependency between my flows. Basically, I declare that one flow is dependent on a particular version (in this case, simply BUILD_NUM) and so before I do a production deploy I verify that the last successful build of the other pipeline has completed first. I'm able to do this using the Jenkins API as part of my flow script that waits for that build or greater to succeed, like so
import hudson.EnvVars
import hudson.model.*

// build number of the other pipeline that this flow depends on
int independentBuildNum = 16

// poll until the dependent pipeline has a successful build at or past that number
waitUntil {
    verifyDependentPipelineCompletion("FLDR_CM/WorkflowDepedencyTester2", independentBuildNum)
}

boolean verifyDependentPipelineCompletion(String jobName, int buildNum) {
    def hi = jenkins.model.Jenkins.instance
    Item dep2 = hi.getItemByFullName(jobName)
    hi = null
    def jobs = dep2.getAllJobs().toArray()
    def onlyJob = jobs[0] // always 1 job...I think?
    def targetedBuild = onlyJob.getLastSuccessfulBuild()
    // read BUILD_ID from the characteristic env vars of the last successful build
    EnvVars me = targetedBuild.getCharacteristicEnvVars()
    def es = me.entrySet()
    int targetBuildNum = 0
    def vars = es.iterator()
    while (vars.hasNext()) {
        def envVar = vars.next()
        if (envVar.getKey().equals("BUILD_ID")) {
            targetBuildNum = Integer.parseInt(envVar.getValue())
        }
    }
    // keep waiting while the required build number has not been reached yet
    if (buildNum > targetBuildNum) {
        return false
    }
    return true
}
Disclaimer that I am just beginning this process so I do not have much real-world experience with this yet, but will update this thread if I have more relevant information. Any feedback welcome.

when to use trigger builds remotely in jenkins

I have searched many links but could not find an appropriate answer. I need to know when I should use "trigger builds remotely". I have gone through integrating Jenkins with SVN, and there I saw that I need to check this option, but I do not have any idea what it is for.
This is used so that when someone commits to source control (SVN), it can ping Jenkins to trigger a build.
The "trigger builds remotely" option is used when you want to trigger a build from another tool:
As explained, you can trigger a job using the Jenkins URL:
http://your-jenkins-url/job/your-job/build
You can secure this URL by using an authentication token.
I hope it helps :)
You don't have to use it. There are multiple ways a build can be triggered:
Build periodically - based on repeating cron schedule.
Poll SCM - based on SCM commits. This is also required when building on SCM hooks.
Build after/before other projects / Various - multiple ways to setup cross-project dependencies for selecting when to build.
Trigger builds remotely - to delegate the logic for monitoring/triggering builds to 3rd party applications/scripts.
The last one is used when you don't want Jenkins to be doing the triggering of the jobs (but Jenkins will still do the execution). It allows you to trigger a build through a specific URL. To avoid unauthorized triggering (since there is no login at this point), an authentication token can be provided.
This URL can be invoked any way you want: manually, command-line script, or some other 3rd-party application.

Resources