Share a Jenkinsfile between developers and the DevOps team

I have a Jenkinsfile which I want to share with all developers of various projects, so every project team will use the same Jenkinsfile, maintained and shared by me.
The next thing is that every project team should be able to add custom steps to my Jenkinsfile as per their requirements.
So, if after a few weeks I want to add a few more steps to my Jenkinsfile and replicate them across all the project Jenkinsfiles, how can I do that without affecting the custom steps of all these development teams?

This is a bit hard to answer, but it mostly sounds like a use case for a Shared Library: stored in a separate repository and loaded (explicitly or implicitly) by other Jenkinsfiles/pipelines, it allows you to extract common functionality into functions of your own.
Depending on your actual changes, there is a (too broad) range of possibilities: from providing a deploy() step that you maintain and others use, to having the teams supply pieces (i.e. steps executed within the pipeline) while you centrally define the flow around them, e.g. including error handling, notifications, etc.
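As a minimal sketch of the latter approach (assuming a Shared Library configured in Jenkins; the name standardPipeline, the config keys, and the commands are all illustrative, not an established convention), the library could expose a global variable that wraps the common flow and accepts a closure for the team-specific steps:

```groovy
// vars/standardPipeline.groovy in the shared library repository
def call(Map config = [:], Closure customSteps = null) {
    node {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            sh(config.buildCommand ?: 'mvn clean package')
        }
        // Teams hook their own steps in here, outside the shared flow
        if (customSteps) {
            stage('Custom') {
                customSteps()
            }
        }
        stage('Deploy') {
            echo "Deploying ${config.app ?: 'application'}"
        }
    }
}
```

Each project's Jenkinsfile then stays tiny, and when you later extend standardPipeline, every project picks up the change without touching its custom steps:

```groovy
// Jenkinsfile in a project repository
@Library('my-shared-library') _

standardPipeline(app: 'billing-service') {
    sh './run-integration-tests.sh'   // project-specific step
}
```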

Related

Jenkins - Option to have all jenkinsfiles put in one separate folder

I am planning to create a project in Jenkins and I am thinking of using an Organization folder for that.
Since the project has several applications (a mobile app with backend and frontend parts), I have several repos that will need to be separate jobs.
My question is: is it possible (and is it good or bad practice) to put the Jenkinsfiles for all the apps in one separate folder (called Jenkinsfiles, for example) from which I will invoke the corresponding file?
Until now I have been placing the Jenkinsfile in the repo of the app I am building, but now, with the whole project, I need to decide which approach to take, so I would appreciate any contribution to the decision making.
If you are asking "can it be done", the answer is: yes, most likely. But how will you manage which Jenkinsfile is being run?
As I see it, the main idea is to have the Jenkinsfile next to the code being deployed.
If you have several applications in your project repository, maybe you could have separate repositories with "just" the Jenkinsfile and deployment config. You could then use the Jenkinsfile in the main code repo to trigger all the sub-jobs, as sketched below.
I have done something similar, and have seen several companies with separate jobs for prod/user-test that require separate permissions to be triggered as well.
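A minimal sketch of such an "umbrella" Jenkinsfile, assuming the per-application jobs already exist in Jenkins (the job names are illustrative):

```groovy
// Jenkinsfile in the main code repo, triggering the per-application jobs
pipeline {
    agent any
    stages {
        stage('Trigger sub-jobs') {
            steps {
                build job: 'myproject/backend'                   // waits for the result
                build job: 'myproject/frontend'
                build job: 'myproject/mobile-app', wait: false   // fire and forget
            }
        }
    }
}
```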

Is there a trick to debug shared groovy libraries without pushing?

I'm adding to, and maintaining, Groovy files to build a set of repositories; previously they were built with freestyle Jenkins jobs. I keep some code in shared libraries and, to be honest (mainly for DRY reasons), I want to do that more.
However, the only way I know to test and debug those library files is to push the changes on a git branch. I know about the "replay" trick to test the main Jenkinsfile. Is there some approach I've missed to do something similar for library code?
If you set up a job to load the shared library instead of relying on a globally set up shared library (you can have both going, for this particular job), then it is possible to hit "replay" and have all your shared library steps show up as editable files.
This can be helpful in iterative development without a million commits.
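For example, instead of a global @Library annotation, the Jenkinsfile can load the library dynamically with the library step; the library name, branch, and repository URL below are placeholders:

```groovy
// Loading the shared library dynamically; after a run, "Replay" offers
// the library sources as editable files alongside the Jenkinsfile
library identifier: 'my-shared-library@my-feature-branch',
        retriever: modernSCM([$class: 'GitSCMSource',
                              remote: 'https://example.com/my-shared-library.git'])
```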
There is the 3rd party Jenkins Pipeline Unit testing framework.
While it does not yet cover all features of Pipeline, it is well documented and maintained, so I would consider starting to use it (once I revisit our Jenkins setup).
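A minimal test sketch with that framework could look as follows; the exact method names (e.g. loadScript vs. runScript) have varied between versions, so treat this as an illustration rather than a copy-paste recipe:

```groovy
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Mock the steps the script calls so it can run outside Jenkins
        helper.registerAllowedMethod('sh', [String]) { cmd -> println "sh: $cmd" }
    }

    @Test
    void pipelineRunsSuccessfully() {
        runScript('Jenkinsfile')   // execute the pipeline script under test
        printCallStack()           // show every step the script invoked
        assertJobStatusSuccess()
    }
}
```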

How to reuse Build Parameters across multiple Jenkins jobs?

I'm planning to reuse the same set of build parameters (like 10 of them) across dozens of jobs.
One way is to create a job and clone it. But what if I want to change the build parameters at a later time, when I already have hundreds of similar jobs? Editing all of them one by one could be a nightmare.
Is there any way of managing parameterized projects?
As a solution to this problem I would imagine some option or plugin where I can define a global set of parameters and reuse them across my jobs.
You could try using the Configuration Slicing Plugin. This plugin allows you to perform mass configuration (including parameters) for a group of jobs.
Alternatively, you could try writing a Groovy management script to set the group of parameters on all those jobs at once. A good starting point would be this; note that it just prints the current jobs' parameters, so you would have to alter the script to do what you want.
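A sketch along those lines for the Jenkins script console; it only lists each job's parameters, and a mass update would mean replacing each job's ParametersDefinitionProperty:

```groovy
import hudson.model.Job
import hudson.model.ParametersDefinitionProperty
import jenkins.model.Jenkins

// Print the parameters of every parameterized job on this controller
Jenkins.instance.allItems(Job).each { job ->
    def prop = job.getProperty(ParametersDefinitionProperty)
    if (prop != null) {
        println job.fullName
        prop.parameterDefinitions.each { p ->
            println "  ${p.name} (${p.class.simpleName})"
        }
    }
}
```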
Unfortunately, the Inheritance Plugin mentioned in another answer is not maintained anymore; it is buggy and has some limitations, such as Trigger Parameterized Builds not being usable in parent projects. It is also difficult to override specific configuration, and it does not play well with the Folders plugin.
Alternative ways are:
Job DSL Plugin, which lets you define jobs with Groovy DSL scripts that can be used as templates (via a "seed" job) and then run those DSL scripts to generate your jobs (read the tutorial). It's actively maintained on GitHub. For more advanced solutions you may use Pipelines instead. A seed-script sketch follows after this list.
Template Project Plugin, which allows you to set up a template project that has all the settings you want to share across your other jobs (by selecting "use all the publishers from this project" and picking the template project).
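As a rough illustration of the seed-job idea: the shared parameters live in one helper that the seed script applies to every generated job (the job names, repository URLs, and build steps are all made up):

```groovy
// Job DSL seed script: define the shared parameters once, stamp them
// onto every generated job
def addCommonParameters(def generatedJob) {
    generatedJob.parameters {
        stringParam('BRANCH', 'main', 'Branch to build')
        booleanParam('RUN_TESTS', true, 'Run the test suite')
    }
}

['app-a', 'app-b', 'app-c'].each { name ->
    def generated = job("build-${name}") {
        scm {
            git("https://example.com/${name}.git")
        }
        steps {
            shell('./build.sh')
        }
    }
    addCommonParameters(generated)
}
```

Changing the parameter set later means editing the helper and re-running the seed job, instead of editing hundreds of jobs by hand.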
How about the EZ Templates Plugin (check also its GitHub page)?
Just remember that when you create a template, that job shouldn't actually do anything other than being a template (meaning: you should not run that job), and you should put only the minimum common config there, nothing else, or things can get messy. That way you shouldn't have any problems.
Using the Parameterized Trigger Plugin you can save the properties in a properties file and pass them across jobs. In the subsequent jobs you can then override them or use them as they are.
Also this would help: Retrieve parameters from properties file.
You could also consider using Pipeline Global Library.
This plugin adds that functionality by creating a "shared library script" Git repository inside Jenkins. Every Pipeline script in your Jenkins sees these shared library scripts on its classpath.
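For instance, a helper class committed to that repository becomes importable from every pipeline (the package and class names are illustrative):

```groovy
// src/org/example/Utils.groovy in the shared library repository
package org.example

class Utils implements Serializable {
    // Build a tag like "feature-login-42" from a branch name and build number
    static String buildTag(String branch, int buildNumber) {
        return "${branch.replaceAll('/', '-')}-${buildNumber}"
    }
}
```

A Jenkinsfile can then import org.example.Utils and call Utils.buildTag(...) directly.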
Try the Inheritance Plugin, which can help to solve the problem. We can read in the plugin description:
Instead of having to define the same property multiple times across as many projects; it should be possible for many projects to refer to the same property that is defined only once. In other words, everything that is defined multiple times, but used in the same way, should be defined only once and simply referred to many times.
So to define the property only once across multiple jobs, you need to:
Create a new job as Inheritance Project.
You may set it as an abstract project and choose This build is parameterized.
Add Inheritable Parameter and set it as Overwritable.
After saving, set this project as parent, so parameters can be inherited.
Check the Jenkins Inheritance Plugin Tutorial Video for overview of the main features. See also GitHub page.
Unfortunately the plugin is not well maintained and it can be buggy when using with the latest Jenkins (e.g. #22885).
You may manage this using a single properties file which can be injected into all the jobs.

Power tradeoff between buildscript and CI server

Although this question specifically involves Gradle and Bamboo, it really is a question about any build system (Ant/Maven/Gradle/etc.) and any CI tool (Bamboo/Jenkins/Hudson/etc.).
I was always under the impression that the purpose of a CI build is to:
Check out code from VCS
Run a buildscript (Gradle, etc.)
Deploy a binary (WAR, etc.) to an environment
Hence, all the guts and heavy-lifting (running automated tests, code analysis, test coverage, compiling, Javadocs, packaging, etc.) was all to be done from inside the buildscript.
But Bamboo seems to allow you to break this heavy-lifting out of the buildscript and into Bamboo itself. In Bamboo, you can add build stages and decompose the stages into tasks. Each task is something just as atomic/fundamental as an Ant task.
So it got me thinking: how much should one empower the CI tool? What typical buildscript functionality should be transferred over to Bamboo/CI? For instance, should I be compiling from a Gradle task, or from a Bamboo task? The same goes for all tasks/stages.
For some reason, I view this as the same problem as whether to use stored procedures or put all the data processing at the application layer. What are the pros/cons of each approach?
TL;DR at the bottom
My experience is with Jenkins, so examples will relate to that.
One thing with any build system (be it a CI server or a buildscript) is that it should be stable, simple, and self-contained, so that an untrained receptionist (with printed instructions and proper credentials) could run it.
Ease of use and re-use
Based on the above, one would think that a buildscript wins. Not always. As with the receptionist example, it's about ease of use and ease of reproducibility.
If a buildscript has interdependent build targets that only work in the correct order, depends on pre-supplied property files that have to be adjusted for the correct branch ahead of the build, relies on environment variables that no one remembers creating in the first place, and needs SCM revision numbers that have to be obtained by looking through the commit log for the last month... it is in no way better than a Jenkins job that can be triggered with a single button.
Likewise, a Jenkins workflow could rely on multiple dependent jobs, each manually pre-configured before the build, and need artifacts uploaded from one place to another... which no receptionist will do.
So, at this point, a good self-contained buildscript that only requires an ant build command to do everything from beginning to end is just as good as a Jenkins job that only requires the Build Now... button to be pressed.
Self-contained
It is easy to think that, since Jenkins will (at some point) end up calling at least a portion of a buildscript (say, ant compile), Jenkins is "compartmentalizing" the buildscript into multiple steps, thus breaking away from being self-contained.
However, you should instead zoom out by one level and treat the whole Jenkins job configuration as a single XML file (which, by the way, can be stored and versioned through an SCM just like the buildscript).
So, at this point, it doesn't matter if the whole build logic is inside a single buildfile, or a single XML job configuration file. Both can be self-contained when done right.
The devil you know
In the majority of cases, it comes down to what you know.
Some people find it easier to use the Jenkins UI to visually arrange their build workflow, reporting, emailing, and archiving (and, for anything that doesn't fit as wanted, to find a plugin). For them, figuring out a buildscript language is more time-consuming than simply trying it in the UI.
Others prefer to know exactly what every single line of their build script does, and don't like giving control to some piece of foreign code obfuscated by UI.
Both points have merit from all sides of the Quality-Time-Budget triangle.
The presentation
So far, things have been more or less balanced. However:
My Jenkins will email a detailed HTML report with a link to a job page and send it straight up to the (non-tech-savvy) CEO. He can look at the list of the latest builds, along with the SCM changes for each build, linking him to the JIRA issues fixed in each build (all hyperlinks to the relevant places). He can select the build with the set of changes that he wants and click "install iOS package" right off the iPad he just used to view all this information. Meanwhile, I can go to the same job page, review the build logs and artifacts of each build, check the build-time trends, and compare the parameters used between the failing and succeeding jobs (and I didn't have to write any echoes to display all that; it's just there, because Jenkins does it for you).
With a buildscript, even if you piped the output to a file, would you send that to your (non-tech-savvy) CEO? Unlikely. But wait, you know this devil very well. A few quick changes and hacks, a couple of Red Bulls... and months of thankless work (mostly after hours) later... you've created a buildscript that will create and start a webserver, prepare HTML reports, collect statistics and history, email all the relevant people, and publish everything on a webpage, just like Jenkins did. (Oh, if people could only see all the magic you did escaping and sanitizing all that HTML content in a buildscript.) But wait... this only works for a single project.
So, a full case of Red Bulls later, you've managed to make it generic enough to build any project, and you've created...
Another Jenkins/Bamboo/CI-server
Congratulations. Come up with a name, market it, and make some cash off it, because this ultimate buildscript just became another CI solution à la Jenkins.
TL;DR:
Provided the CI-server can be configured simply and intuitively so that a receptionist could run the build, and provided the configuration can be self-contained (through whatever storage method the CI-server uses) and versioned in SCM, it all comes down to the Quality-Time-Budget triangle.
If you have little time and budget to learn the CI server, you can still greatly increase the quality (at least of the presentation) by embracing the CI-server's way of organizing stuff.
If you have unlimited time and budget, by all means, make your own Jenkins with the buildscript.
But considering that the "unlimited" part is rather unrealistic, I would embrace the CI-server as much as possible. Yes, it's a change. However, a little time invested in learning the CI-server and how it compartmentalizes the different parts of the build flow into tasks can go a long way toward increasing the quality.
Likewise, if you have no time and/or budget, figuring out the quirks of all the plugins/tasks/etc. and how it all comes together will only bring your overall quality down, or even drag the time/budget down with it. In such cases, use the CI-server for the bare minimum needed to trigger your existing buildscripts. However, in some cases, the "bare minimum" is no better than not using the CI-server in the first place. And when you are at that point... ask yourself:
Why do you want a CI-server in the first place?
Personally (and with today's tools), I'd take a pragmatic approach. I'd do as much as feasible on the build side (clearly better from an automation perspective), and the rest (e.g. distribution of work across machines) on the CI server. Anything that a developer might want to do on his own machine should definitely be automated on the build level. As to the concrete steps you gave, I'd generally check out code from the CI server, and deploy binaries from the build. I'd try to make every CI job look the same, invoking the build tool in the same way (e.g. gradlew ciBuild).
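In Gradle terms, that could be a single aggregate task which every CI job invokes identically; a minimal sketch (the task name ciBuild and its wiring are illustrative):

```groovy
// build.gradle (Groovy DSL): one entry point for the CI server, keeping
// the heavy lifting (compile, test, verify) inside the buildscript
tasks.register('ciBuild') {
    group = 'ci'
    description = 'Aggregate task invoked uniformly by every CI job.'
    dependsOn 'build', 'check'
}
```

The CI job then only ever runs ./gradlew ciBuild, whatever the project.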
In Bamboo, you can add build stages and decompose the stages into tasks. Each task is something just as atomic/fundamental as an Ant task.
To some extent, this overlap in functionality is natural, as neither build tool nor CI server can assume existence of the other, and both want to provide as complete a solution as possible.
For some reason, I view this as the same problem as to whether or not to use stored procedures or put the data processing all at the application layer.
It's not an unfair comparison, and hence opinions will be just as diverse, contextual, and nuanced.
Disclaimer: I'm a Gradle(ware) developer.

Reuse parts of a TFS build process template

The TFS build flow is defined in TFS 2010's build template (which in fact is a Windows Workflow Foundation file with a *.xaml extension).
This was pretty convenient for dealing with a single build definition in a simple project, but in the near future we'll have a more complicated project with many very different build definitions, some of which will at the same time share significant common parts of their logic.
There is no wish to have the common logic replicated in each build template, and on the other hand having one super-smart-parameterizable build is not considered the best idea either.
Long story short, the questions are:
is there any possibility to put common logic into another build template (or whatever) and reuse it?
If not, do you have any approaches/recommendations for such a situation?
UPDATE
As K.Hoff mentioned, there is a possibility to create custom activities, but I want to go deeper and reuse not only activities but sequences as well (put simply, similar to what Ant or NAnt do: include one file in another, call one sequence from another, etc.).
I would recommend you check whether it is possible to write a code activity which executes a workflow (.xaml file) containing the common build functionality. Such a code activity could then be put into several "master" build templates, making it possible to reuse the common flow.
Here is an example of how to dynamically load and execute a workflow: http://msdn.microsoft.com/en-us/vs2010trainingcourse_introtowf_topic8.aspx.
We have a similar situation, but since most of our build scenarios are similar (i.e. get -> build -> test -> deploy) we have mostly solved it with one big definition and custom activities. But we also make use of the ExecuteWorkflow activity available from the Community TFS Build Extensions.
This works well for "simple" scenarios; the reason we don't use it more extensively is that it's quite complicated to pass parameters between workflow executions. Here's a link to a problem I had with this (and, further down, the solution I found).
You can create custom code activities as explained here and reuse them in other build templates.
Another way is to implement good old MSBuild scripts and put them into MSBuild execution activities to reuse them in many build process templates.
I can't find a quick way to reuse complete sequences; the only way we found is to write the activities as generically as possible and inject parameters to get them to run.
But I don't think it's a TFS problem; it's a Workflow problem.
