Jenkins - Option to have all Jenkinsfiles put in one separate folder

I am planning to create a project in Jenkins and I am thinking of using an Organization folder for it.
Since the project has several applications (a mobile app with backend and frontend parts), I have several repos that will need to be separate jobs.
My question is: is it possible (and is it good or bad practice) to put the Jenkinsfiles for all the apps in one separate folder (called Jenkinsfiles, for example) from which I would invoke the corresponding file?
Until now I have been placing the Jenkinsfile in the repo of the app I am building, but now, for the whole project, I need to decide which approach to take, so I would appreciate any input on the decision.

If you are asking "can it be done", the answer is yes, most likely - but how will you manage which Jenkinsfile is being run?
As I see it, the main idea is to have the Jenkinsfile next to the code being deployed.
If you have several applications in your project repository, maybe you could have separate repositories with "just" the Jenkinsfile and deployment config. You could then use the Jenkinsfile in the main code repo to trigger all the sub-jobs.
I have done something similar, and have seen several companies with separate jobs for prod/user-test that require separate permissions to be triggered as well.
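If you go with the last suggestion, a minimal sketch of such a "main" Jenkinsfile could look like the following; the job paths are placeholders for your actual per-application jobs:

// Minimal sketch, assuming the per-application pipeline jobs already exist in
// Jenkins; the job paths below are hypothetical placeholders.
pipeline {
    agent any
    stages {
        stage('Backend') {
            steps {
                build job: 'myproject/backend-app'
            }
        }
        stage('Frontend') {
            steps {
                build job: 'myproject/frontend-app'
            }
        }
    }
}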

Related

Jenkins pipeline Continuous Integration

I have 20-30 projects that I am working on, each with its own git repo, and each repo has several branches that are not dependent on the other projects. I am looking for a way to come up with a Jenkins pipeline that accommodates all the projects in a CI/CD ecosystem, or do I need to create a separate pipeline for each repo?
Is there a way I can use one Jenkinsfile for all these projects?
How do you share data between pipelines if module 3 is dependent on data coming from modules 1 and 2?
Do I need to create 30 hooks/tokens if I have 30 projects?
I was able to create dependent build triggers between the first three, such that if A and B build then C will build, using the SCM polling option and build triggers.
Thanks in advance. Appreciate any help, feedback or suggestions.
You can use shared libraries in Jenkins pipelines. It is a fairly involved process that requires writing the libraries in Groovy.
As Pipeline is adopted for more and more projects in an organization,
common patterns are likely to emerge. Oftentimes it is useful to share
parts of Pipelines between various projects to reduce redundancies and
keep code "DRY".
Pipeline has support for creating "Shared Libraries" which can be
defined in external source control repositories and loaded into
existing Pipelines.
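For reference, here is a minimal sketch of what consuming such a Shared Library looks like from a Jenkinsfile; the library name and the custom step are assumptions, not something Jenkins provides out of the box:

// Assumes a Global Pipeline Library named 'my-shared-lib' is configured in
// Jenkins and contains vars/buildAndTest.groovy; both names are hypothetical.
@Library('my-shared-lib') _

pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                // Custom step provided by the shared library
                buildAndTest(mavenGoals: 'clean verify')
            }
        }
    }
}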

share Jenkinsfile between developers and DevOps team

I have a Jenkinsfile which I want to share with the developers of various projects, so every project team will use the same Jenkinsfile, which is shared by me.
The next thing is that every project team should be able to add custom steps to my Jenkinsfile as per their requirements.
So if, after a few weeks, I want to add a few more steps to my Jenkinsfile and replicate that across all the project Jenkinsfiles, how can I do that without affecting the custom steps of the development teams?
This is a bit hard to answer, but it mostly sounds like a use case for a Shared Library: stored in a separate repository and loaded (explicitly, or implicitly by other Jenkinsfiles/pipelines), this allows you to extract common functionality into functions of your own.
Depending on your actual change, there is a (too broad) range of possibilities, from having a deploy() step that you maintain and others use, to having the teams supply the pieces, i.e. the steps executed within the pipeline, while you centrally define the flow around them, e.g. including error handling, notifications etc.
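As a hedged sketch of the second variant (the library owns the flow, the teams supply the pieces), a shared library could expose a step like the hypothetical standardPipeline below:

// vars/standardPipeline.groovy in the shared library (name and stages are
// illustrative): the library defines the flow and error handling, a team's
// Jenkinsfile only supplies its custom steps as a closure.
def call(Closure customSteps) {
    node {
        try {
            stage('Checkout') {
                checkout scm
            }
            stage('Custom steps') {
                customSteps()   // whatever the project team plugs in
            }
            stage('Deploy') {
                echo 'Centrally maintained deploy logic would go here'
            }
        } catch (err) {
            echo "Build failed: ${err}"   // central error handling / notifications
            throw err
        }
    }
}

A team's Jenkinsfile then shrinks to something like standardPipeline { sh './run-my-tests.sh' }, and changes you make to the surrounding flow reach every project without touching their custom steps.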

Is there a trick to debug shared groovy libraries without pushing?

I'm adding to, and maintaining, Groovy files to build a set of repositories - previously they were built with freestyle Jenkins jobs. I support some code in shared libraries and, to be honest (mainly for DRY reasons), I want to do that more.
However, the only way I know of to test and debug those library files is to push the changes on a git branch. I know about the "replay" trick to test the main Jenkinsfile. Is there some approach I've missed to do something similar for library code?
If you set up a job to load the shared library instead of relying on a globally set up shared library (you can have both going, for this particular job), then it is possible to hit "replay" and have all your shared library steps show up as editable files.
This can be helpful in iterative development without a million commits.
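In pipeline code, loading the library explicitly from a development branch (instead of relying on the implicit global load) looks roughly like this; the library name, branch and repository URL are placeholders:

// Explicitly load the shared library from the branch under development so that
// "Replay" exposes its files as editable; names and URL are placeholders.
library identifier: 'my-shared-lib@feature/my-change',
        retriever: modernSCM([
            $class: 'GitSCMSource',
            remote: 'https://example.com/org/my-shared-lib.git'
        ])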
There is the 3rd party Jenkins Pipeline Unit testing framework.
While it does not yet cover all features of Pipeline, it is well documented and maintained, so I would consider starting to use it (once I revisit our Jenkins setup).
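For orientation, a minimal JenkinsPipelineUnit test might look roughly like this; the script path and the stubbed step are assumptions:

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

// Rough sketch of a JenkinsPipelineUnit test; 'Jenkinsfile' is a placeholder
// for whatever pipeline script or library file you want to exercise.
class ExamplePipelineTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub (or override) steps the script calls so the test runs offline
        helper.registerAllowedMethod('sh', [String], { cmd -> println "sh: ${cmd}" })
    }

    @Test
    void runsWithoutErrors() {
        // loadScript parses and runs the script against the mocked steps
        loadScript('Jenkinsfile')
        printCallStack()
    }
}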

Continuous Integration with BitBucket

I'm developing a private webapp in JSF which is available over the internet, and I have now reached a stage where I want to introduce CI (which I'm fairly new to) into the whole process. My current project setup looks like this:
myApp-persistence: maven project that handles DB access (DAOs and hibernate stuff)
myApp-core: maven project, that includes all the Java code (Beans and Utils). It has a dependency on myApp-persistence.jar
myApp-a: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-b: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-a and myApp-b are independent from each other, they are just different instances of the core for two different platforms and only display certain components differently or call different bean-methods.
Currently I'm deploying manually, i.e. use the eclipse built-in export as war function and then manually upload it to the deployments dir of my wildfly server on prod. I'm using BitBucket for versioning control and just recently discovered pipelines in BitBucket and implemented one for each repository (every project is a separate repo). Now myApp-persistence builds perfectly fine because all dependencies are accessible via the public maven repo but myApp-core (hence myApp-a and myApp-b, too) fails of course because myApp-persistence isn't published on the central maven repo.
Is it possible to tell BitBucket somehow to use the myApp-persistence.jar in the corresponding repo on BitBucket?
If yes, how? And can I also tell BitBucket to deploy directly to prod in case the build including tests ran fine?
If no, what would be a best practice to do that? I was thinking of using a second dev server (already available, so no big deal) as a CI server, but then I would still need some advice or recommendations on which tools (Jenkins, Artifactory, etc.) to use.
One important note maybe: I'm the only person working on this project, so this might seem like overkill, but for me the process of setting it up is quite some valuable experience. That said, I'm not necessarily looking for the quickest solution but for the most professional and convenient one.
From my point of view, you can find the solution in this post: https://christiangalsterer.wordpress.com/2015/04/23/continuous-integration-for-pull-requests-with-jenkins-and-stash/. It guides you step by step through setting everything up. The post is from 2015, but the process and idea are still the same. Hope it helps.
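If the Jenkins-plus-repository-manager route mentioned in the question is the direction you take, the core idea is to deploy myApp-persistence to a private Maven repository (Artifactory, Nexus) so the other modules can resolve it like any other dependency; a rough sketch of such a Jenkins job, with a placeholder repository, might be:

// Hypothetical sketch: build myApp-persistence and publish the jar to a private
// repository manager. Assumes the pom's <distributionManagement> points at e.g.
// https://nexus.example.com/repository/maven-releases/ (placeholder URL).
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                sh 'mvn clean verify'
            }
        }
        stage('Publish to private repo') {
            steps {
                sh 'mvn deploy -DskipTests'
            }
        }
    }
}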

How to create complex value stream with multiple pipelines with Jenkins WorkFlow

How do you implement a complex value stream with multiple pipelines in Jenkins Workflow, similar to what you can do with GoCD (How do I do CD with Go?: Part 2: Pipelines and Value Streams)?
For a distributed system I would like each dev team and operations team to start with their own delivery pipeline. One change needs to trigger only the pipeline of the team that made the change. It then needs to trigger a new pipeline that takes the latest successful artifacts from each of the teams' pipelines and moves on from there. This means that the artifacts from the other teams are not rebuilt or retested, as they were not changed. After the fan-in we can run a set of automated tests to verify the correct behaviour of the distributed system with the change.
In the documentation I only find that you can pull from multiple VCSs, but I assume everything is then built and tested with every change, which is something I want to avoid.
If each delivery pipeline is in its own Jenkins job, how can I visualize the complete pipeline, and what is the best way to pull in the last successful artifacts or versions from the other pipelines?
There is no direct equivalent in Jenkins for value streams, and Workflow jobs do not behave any differently in that respect: you can have upstream jobs and downstream jobs correlated with triggers (in this case the build step, or the core ReverseBuildTrigger), and use (for example) the Copy Artifact plugin to transfer artifacts to downstream builds. Similarly, you could use an external repository manager as the “source of truth” and define job triggers based on snapshots pushed to the repository.
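As a rough sketch of that job-chaining approach, the fan-in job could collect the last successful artifacts of the team pipelines like this (requires the Copy Artifact plugin; job names and the test command are placeholders):

// Fan-in pipeline sketch: each team pipeline is assumed to archive its output
// with archiveArtifacts and to trigger this job via the build step or a trigger.
node {
    stage('Collect latest artifacts') {
        copyArtifacts projectName: 'team-a-pipeline', selector: lastSuccessful()
        copyArtifacts projectName: 'team-b-pipeline', selector: lastSuccessful()
    }
    stage('System tests') {
        sh './run-system-tests.sh'   // hypothetical system test entry point
    }
}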
That said, part of the purpose of Workflow is to avoid the need for complex job chains in most situations¹, since it is usually easier to reason about, debug, and customize a single script with standard control flow operators and local variables than to manage a set of interdependent jobs. If the main problem with a single flow is that you need to avoid rebuilding unmodified parts, one solution would be to use something like JENKINS-30412 to check the changelog of particular repository checkouts and skip build steps if empty. I think there would be more features needed to make such a system work in the general case that workspaces are clobbered or discarded by other builds.
¹One case where you definitely need separate jobs is that for security reasons the teams contributing to different projects must not be able to see one another’s sources or build logs.
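Regarding the idea above of skipping build steps for unmodified parts, a rough sketch within a single Workflow job (the module layout and commands are placeholders, and accessing changelog entries may need script approval depending on your sandbox settings) could look like:

// Returns true if the current build's changelog contains an entry touching the
// given path prefix; used to skip rebuilding unmodified modules.
def changedUnder(String pathPrefix) {
    for (changeSet in currentBuild.changeSets) {
        for (entry in changeSet.items) {
            for (path in entry.affectedPaths) {
                if (path.startsWith(pathPrefix)) {
                    return true
                }
            }
        }
    }
    return false
}

node {
    checkout scm
    stage('Backend') {
        if (changedUnder('backend/')) {
            sh 'mvn -f backend/pom.xml clean install'   // hypothetical module layout
        } else {
            echo 'No backend changes, skipping rebuild'
        }
    }
}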
Assuming that each of your dev teams works on a different module of your project and „One change needs to trigger only the pipeline of the team that made the change“ I'd use Git Submodules:
Submodules allow you to keep a Git repository as a subdirectory of another Git repository.
i.e. one repo per team, each becoming a submodule of a main module repo. This will be transparent to the teams since they just work on their designated repos.
The main module is also the aggregator project for your module projects in terms of the build tool. So, you have the options:
to build each repo/pipeline individually or
to build the whole (main) project at once.
A build pipeline that comprises one or more build jobs is associated to every team/repo/module.
The main pipeline is merely a collection of downstream jobs which represent the starting points of the team/repo/module pipelines.
The build triggers can be any of manually, timed or on source changes.
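A sketch of how the main pipeline could check out the aggregator together with its submodules and build everything at once (repository URL and branch are placeholders; the SubmoduleOption extension comes with the Git plugin):

// Hedged sketch: check out the main (aggregator) repo including submodules,
// then build the whole project in one go. URL and branch are placeholders.
node {
    stage('Checkout with submodules') {
        checkout([
            $class: 'GitSCM',
            branches: [[name: '*/master']],
            userRemoteConfigs: [[url: 'https://example.com/org/main-module.git']],
            extensions: [[
                $class: 'SubmoduleOption',
                recursiveSubmodules: true,
                parentCredentials: true
            ]]
        ])
    }
    stage('Build whole project') {
        sh 'mvn clean install'
    }
}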
A decision also has to be made:
whether you version your modules individually, such that other modules depend on release versions only.
Advantage:
Others rely on released, usually more stable versions.
Modules can decide which version of a dependency they want to use.
Disadvantages:
Releases have to be prepared for each module.
It may take longer until the latest changes are available to others.
Modules have to decide which version of a dependency they want to use. And they have to adapt it every time they need functionality added in a newer version.
or whether you use one version for the entire project (which is then inherited by the modules): ...-SNAPSHOT during the development cycle, and a release version when releasing the project.
In this case, if there are modules that are essential for others, e.g. a core module, a successful build of it should trigger a build of the dependent modules, as well, so that incompatibilities are recognized as early as possible.
Advantages:
Latest changes are immediately available to others.
A release is prepared for the whole project only once it is to be delivered.
Disadvantages:
Latest changes immediately available to others may introduce not so stable (snapshot) code.
Re „How can I visualize the complete pipeline“
I'm not aware of any plugin that can do this with Workflows at the moment.
There's the Build Graph View Plugin which originally has been created for Build Flows, but it's more than two years old now:
Downstream builds are identified by DownStreamRunDeclarer extension point.
Default one is using Jenkins dependencyGraph and UpstreamCause and as such can detect common build chain.
build-flow plugin is contributing one to render flow execution as a graph
some Jenkins plugins may later contribute dedicated solutions.
(You know, „may“ and „later“ often become will not and never in development. ;)
There's the Build Pipeline Plugin but it apparently is also not suitable for Workflows:
This plugin provides a Build Pipeline View of upstream and downstream connected jobs [...]
Re „way to pull in the last successful artifacts“
Apparently it's not that smooth with Gradle:
By default, Gradle does not define any repositories.
I'm using Maven, where there are local and remote repositories; the latter can also be:
[...] internal repositories set up on a file or HTTP server within your company, used to share private artifacts between development teams and for releases.
Have you considered using a binary repository manager like Artifactory or Nexus?
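For illustration, pulling the other teams' latest released artifacts from such an internal repository in a Gradle build would look roughly like this (URL and coordinates are placeholders):

// build.gradle sketch: resolve other pipelines' artifacts from an internal
// repository manager such as Artifactory or Nexus; URL and coordinates are
// placeholders, and the java plugin is assumed for the 'implementation' scope.
repositories {
    maven {
        url 'https://nexus.example.com/repository/maven-releases/'
    }
}

dependencies {
    implementation 'com.example.teama:service-api:1.4.2'
}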
From what I have seen, people are moving towards smaller, independent pieces of code delivery rather than monolithic deployments. But clearly, there will still be dependencies between different components. At the very least, for example, if you had one script that provisioned your infrastructure and another that built and deployed your app, you would want to be sure your infrastructure update script was run before your app deployment. On the other hand, your infrastructure does not depend on deploying your app code - it can be updated at its own pace, so long as it ideally passes some testing.
As mentioned in another post, you really have two options to accomplish this dependency:
Have a single pipeline (workflow script) that checks out code from both repos and puts them through the same pipeline simultaneously. Any change to one requires the full boat pipeline for everything.
Have two pipelines and this would allow each to go at its own pace independent of what the other does. This isn't a problem for the infrastructure code, but it very well could be for the app code. If you pushed your app code to production without the infrastructure update having happened first, the results may not be pleasant.
What I've started to do with Jenkins Workflow is establish a dependency between my flows. Basically, I declare that one flow is dependent on a particular version (in this case, simply BUILD_NUM), and so before I do a production deploy I verify that the last successful build of the other pipeline has completed first. I'm able to do this using the Jenkins API, as part of my flow script, which waits for that build or greater to succeed, like so:
import hudson.EnvVars
import hudson.model.*

// Build number of the other pipeline that this flow depends on
int independentBuildNum = 16

// Poll until the other pipeline has a successful build at or beyond that number
waitUntil {
    verifyDependentPipelineCompletion("FLDR_CM/WorkflowDepedencyTester2", independentBuildNum)
}

boolean verifyDependentPipelineCompletion(String jobName, int buildNum) {
    def hi = jenkins.model.Jenkins.instance
    Item dep2 = hi.getItemByFullName(jobName)
    hi = null
    def jobs = dep2.getAllJobs().toArray()
    def onlyJob = jobs[0] // always 1 job...I think?
    def targetedBuild = onlyJob.getLastSuccessfulBuild()
    EnvVars me = targetedBuild.getCharacteristicEnvVars()
    def es = me.entrySet()
    int targetBuildNum = 0
    def vars = es.iterator()
    // Read the BUILD_ID of the last successful build of the other pipeline
    while (vars.hasNext()) {
        def envVar = vars.next()
        if (envVar.getKey().equals("BUILD_ID")) {
            targetBuildNum = Integer.parseInt(envVar.getValue())
        }
    }
    // Not done yet if the required build number has not been reached
    if (buildNum > targetBuildNum) {
        return false
    }
    return true
}
Disclaimer that I am just beginning this process so I do not have much real-world experience with this yet, but will update this thread if I have more relevant information. Any feedback welcome.
