Jenkins Declarative Pipeline without a Jenkinsfile in the repository

I am trying to upgrade my current regression infrastructure to use the Pipeline plugin, and I realize there are two methods: scripted pipeline and declarative pipeline. Going through multiple articles, I get the impression that declarative pipeline is more future-proof and more powerful, hence I am inclined to use it. But there seem to be the following restrictions, which I don't want in my setup:
The Jenkinsfile needs to be in the repository. I don't want to keep my Jenkinsfile in the code repository.
Since the Jenkinsfile needs to be in SCM, does that mean I cannot test any modification to the file until I check it in to the repository?
Any details on the above will be very helpful.

Declarative pipelines are compiled to scripted ones, so scripted pipelines will definitely not go away. But declarative pipelines are a bit easier to handle, so you should be fine with them.
You don't have to check a Jenkinsfile into VCS. You can also set up a job of type Pipeline and define the pipeline script directly in the job. But this has the usual disadvantages, such as no version history.
When using multibranch pipelines, i.e., every branch containing a Jenkinsfile generates its own job, you just push your changed pipeline to a new branch and execute it. Once it works, you merge it.
This approach certainly lengthens the feedback cycle a bit, but it applies the same principles as when writing your software. For experimentation, just set up a Pipeline-type job and play around, as in the sketch below. Afterwards, commit it to a branch, test it, review it, merge it.
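A minimal declarative pipeline like the sketch below (the stage names and shell commands are just placeholders) can be pasted straight into the "Pipeline script" field of such a job for experimentation and later moved into a Jenkinsfile on a branch:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // replace with your real build command
                    sh 'echo building'
                }
            }
            stage('Test') {
                steps {
                    // replace with your real test command
                    sh 'echo testing'
                }
            }
        }
    }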

You can use the Pipeline Multibranch Defaults Plugin for that. It allows you to define the Jenkinsfile in the Jenkins web UI itself (via the Config File Provider plugin) and then reference that file from a Multibranch Pipeline.
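A default Jenkinsfile stored that way is just a normal pipeline definition; the sketch below is only illustrative, and because the pipeline no longer comes from the branch itself, you may need an explicit checkout scm to get the branch's sources into the workspace:

    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps {
                    // the multibranch job still knows which branch triggered the build
                    checkout scm
                }
            }
            stage('Build') {
                steps {
                    sh 'echo build the branch here'
                }
            }
        }
    }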

Related

Best route to take for a Jenkins job with hundreds of sub jobs

Currently, at my organization we have a few repositories which contain ~500+ projects that need to be built to satisfy unit testing (really integration testing), and I am trying to think of a new way of approaching the situation.
Currently, the pipeline for building the projects is templatized and is stored on our Bitbucket server. All the projects get built in parallel, so once the jobs are queued, they all go to the master node to do an SCM checkout of the pipeline.
This creates stress on the master node, and for some reason it is not able to utilize every available node, and every executor on those nodes, to its fullest potential. Conversely, if the pipeline is not stored in SCM, it does the complete opposite: it DOES use every possible node with any available executor on that node.
Is there something I am missing about the SCM checkout version that makes it different from storing the pipeline locally on Jenkins? I understand that you need to do an SCM poll, and I am assuming only the master can do the SCM poll for the original Jenkinsfile.
I've tried:
Looking to see if I am potentially throttling the build, but I do not see anything
Confirmed that "Disable concurrent builds" is not enabled within the pipeline
Lightweight checkout seems to work when I do it with the Git plugin, but not with the Bitbucket Server Integration plugin; however, Atlassian mentioned this will never be a feature, so this doesn't really matter.
I am trying to see if there is a possible way to change the infrastructure, since I don't have much of a choice in how certain programs are set up because they are very tightly coupled.
I could in theory just keep the pipeline locally on Jenkins and use that as a template rather than checking it into SCM; however, making changes locally to the template does not change the sub-jobs that use it (I could implement this feature, but SCM already does it). Plus, having the template pipeline checked into Bitbucket allows better control, so I am trying to avoid that option.

Is it possible to define a stage of a Jenkins declarative pipeline in a helper method or shared library?

I am working on a team that has a lot of projects with independent Jenkins declarative pipeline files. Most of those files duplicate a lot of the pipeline definition. We would benefit from sharing collections of steps, and even entire stages, in a shared library. While the former appears to be possible with declarative pipelines, the latter does not.
The question of stage definitions was covered in June 2018 here, with the accepted answer being NO. A later answer recommended using script blocks to share stages, ending up with a messy mix of scripted and declarative pipeline. Moreover, the request on the Jenkins Jira is still open and hasn't had any meaningful movement since April 2018.
So before I waste a bunch of time jamming a square peg into a round hole, or ignoring a perfectly reasonable solution, is the best answer to just embrace scripted pipelines? That syntax would allow me to define reusable chunks of the pipeline at any level. I could have one-liner pipelines for cases where a "cookie cutter" build is acceptable, or larger customized pipelines that reuse a few complete stages but perhaps deploy code in a different manner.
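For what it's worth, newer versions of the declarative syntax do appear to allow putting an entire pipeline { } block inside a shared library global variable (per the Jenkins shared-library documentation), even though individual stages still cannot be shared. A rough sketch, with all names illustrative:

    // vars/standardBuild.groovy in the shared library
    def call(Map config = [:]) {
        pipeline {
            agent any
            stages {
                stage('Build') {
                    steps {
                        // fall back to a default command if the caller gives none
                        sh "${config.buildCommand ?: 'make'}"
                    }
                }
                stage('Test') {
                    steps {
                        sh "${config.testCommand ?: 'make test'}"
                    }
                }
            }
        }
    }

    // Jenkinsfile in each project
    @Library('my-shared-library') _
    standardBuild(buildCommand: 'mvn -B package')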

What exactly is "Declarative Pipeline" in Jenkins? How to switch from the previous "pipeline" concept?

I am a bit confused about the concept of "Declarative Pipeline" in Jenkins.
Right now, I am using several jobs of the "Multibranch Pipeline" type.
I maintain "Shared Libraries", which contain a vars folder with reusable functionality. In the same repository there is also a jobs folder that contains "complete pipelines"; these are meant to be configured as a normal "Pipeline" type of job and then get triggered from Jenkinsfiles that are watched by the Multibranch Pipeline job.
How do I convert this to a "Declarative Pipeline"? What are the best documentation resources to get started on the topic of "normal vs. declarative"?
Here are a couple of good articles to get you started on scripted vs. declarative pipelines (I'm not the author):
https://www.blazemeter.com/blog/how-to-use-the-jenkins-scripted-pipeline
https://www.blazemeter.com/blog/how-to-use-the-jenkins-declarative-pipeline
The paragraph below is from the second article and sums it up nicely. One big difference that I see between scripted and declarative pipelines is that declarative pipelines are expected to be stored in a source code control system and checked out each time they're run. Declarative pipelines are newer than scripted ones, but scripted pipelines are not going away.
Jenkins provides you with two ways of developing your pipeline code: Scripted and Declarative. Scripted pipelines, also known as “traditional” pipelines, are based on Groovy as their Domain-specific language. On the other hand, Declarative pipelines provide a simplified and more friendly syntax with specific statements for defining them, without needing to learn Groovy.
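As a rough illustration (a trivial single-stage build, used only as an example), the same job looks like this in each style:

    // Scripted: plain Groovy built around node/stage blocks
    node {
        stage('Build') {
            sh 'make'
        }
    }

    // Declarative: a structured pipeline { } block with dedicated sections
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'make'
                }
            }
        }
    }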

Is it possible to have a common repository for multiple pipeline jobs?

I have 11 jobs running on the Jenkins master node, all of which have a very similar pipeline setup. For now I have given each job its very own Jenkinsfile that specifies the stages within the job, and all of them build just fine. But wouldn't it be better to have a single repo with some files (preferably a single Jenkinsfile and some libraries) required to run all the jobs that have a similar pipeline structure, with the few differences taken care of with a workaround?
If there is a way to accomplish this, please let me know.
Use a Shared Library to define the common functionality. Your 11 Jenkinsfiles can then be as small as a single call to the function implementing the pipeline, as in the sketch below.
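As a sketch (the library and function names are illustrative), each repository's Jenkinsfile could shrink to something like:

    // Jenkinsfile in each of the 11 repositories
    @Library('common-pipeline-library') _
    commonPipeline(project: 'service-a')   // defined in vars/commonPipeline.groovy of the library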
Besides using a Shared Library, you can create a Groovy file with the common functionality and call its methods via load() (see the documentation and example). This is an easier approach, but as your pipelines grow in complexity it may impose some limitations.
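A minimal sketch of the load() approach, assuming a common.groovy file that is available in the workspace (the file name and method are illustrative; the loaded script has to end with return this so its methods can be called):

    // common.groovy -- shared helper methods
    def buildAndTest(String target) {
        sh "make ${target}"
        sh 'make test'
    }
    return this

    // Jenkinsfile
    node {
        checkout scm
        def common = load 'common.groovy'
        common.buildAndTest('all')
    }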

How to execute job dsl script in Jenkins workflow (pipeline) plugin

I am trying to combine the nice branch handling of Workflow Multibranch with the powerful job generation of the Job DSL plugin. So basically I want each branch to regenerate its jobs from a script in the repository and run the main one.
But I don't see a way to run the "Process Job DSLs" step from a workflow script. Maybe there is a built-in way to execute custom build steps in Workflow, but I just can't find it.
You could create a separate job that processes the Job DSL, and then call it with the proper parameters from the workflow via a "build job: xxx" step, as sketched below.
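A minimal sketch of that trigger step (the job name and parameter are illustrative):

    // in the multibranch Jenkinsfile: trigger the separate seed job that runs the Job DSL scripts
    build job: 'seed-job',
          parameters: [string(name: 'BRANCH', value: env.BRANCH_NAME)]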
Not quite sure where you are going with this, but perhaps what you really want is multibranch binding for Job DSL, or to manually iterate branches.
Alternatively, with Workflow alone you can probably accomplish your goal, whatever that is.
It seems that the jobDsl step can be used directly in the pipeline.
Have a look at the Snippet Generator to generate the exact syntax.
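For reference, a minimal sketch of such a step (the target path and removedJobAction value are illustrative, and removedJobAction is optional):

    // run the Job DSL scripts checked out from the repository
    jobDsl targets: 'jobs/**/*.groovy',
           removedJobAction: 'DELETE'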
