How to migrate from Jenkinsfile to pipeline_config.groovy which uses JTE? - jenkins

I'm in the process of migrating git-based projects to use a shared Pipeline definition from a governance tier built with Jenkins Templating Engine.
While testing, I cloned the project and pushed it to a new repository in Bitbucket, where it was recognized by Jenkins and the template was used immediately based on the definitions in pipeline_config.groovy. However, this is not a sane migration path for existing projects. How do I get Jenkins to use the template on branches without a Jenkinsfile, and the Jenkinsfile on branches that still have one?
The result of "Scan Multibranch Pipeline Now", according to the logs, is 'Jenkinsfile' not found. Skipping. I assume that a new project recognizer is provided by the Jenkins Templating Engine plugin.
I assume that every project using Git Flow has to perform this migration, so I'm confused that there's no documentation.
I'm using Jenkins 2.306 and JTE plugin 2.3.

It seems that newer versions of JTE or Jenkins allow a mixture of branches with JTE and without.
In case you have to deal with versions that don't, do the following:
Once I removed the Jenkinsfile from every branch and put a pipeline_config.groovy on every branch, the Multibranch plugin recognized the project first as removed and, at the next scan, as present with all branches, all of which were using the Jenkins Templating Engine.
Not the best migration, IMO, but a great opportunity to clean up old branches. Since my project was using Git Flow, I needed to make a technical hotfix release to also migrate away from the Jenkinsfile on master.
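For reference, a minimal pipeline_config.groovy could look like the sketch below; the library names are placeholders and have to match whatever libraries your governance tier actually provides:

// pipeline_config.groovy (sketch) - replaces the Jenkinsfile on each branch
libraries {
    maven        // hypothetical build library defined by the governance tier
    sonarqube    // hypothetical static-analysis library
}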

Related

Jenkins Pipeline from SCM. Can we select the branch?

We have our pipeline Groovy scripts for Jenkins in SCM (Git). I believe it currently gets the scripts from master by default.
Are we able to specify the branch we want to use for the groovy scripts?
There is a setting in that section, but if I understand correctly it seems to be for the build branch (as it allows setting multiple branches).
It appears that the branch option is the branch for the pipeline script.
I assume that multiple branches means you can run different versions of that script from different branches. I suspect I am missing when that would be used, but it does answer my original question.
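For illustration only (this assumes the Job DSL plugin and uses a placeholder repository URL), the same "Pipeline script from SCM" configuration expressed as code makes it clearer that the branch setting is where the pipeline script itself is read from:

pipelineJob('my-app-build') {
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://example.com/scm/pipelines.git') }  // placeholder URL
                    branch('*/master')   // branch the pipeline script is fetched from
                }
            }
            scriptPath('Jenkinsfile')    // path of the script within that branch
        }
    }
}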

Use one Jenkinsfile or multiple Jenkinsfiles

We are currently using Windows / Jenkins 2.107.1 (no pipeline), and I am researching moving to pipeline. We have a nightly build job that fetches from repositories, and submits and waits on other jobs. I see 9 jobs running on the same Master node (we only have a master) at the same time. I am not clear on whether we should have one Jenkinsfile or multiple Jenkinsfiles. It will not be a multibranch pipeline, as we do not create test branches and then merge back to a master. In the repository we have a product1.0 branch, a product2.0 branch, etc., and build only one branch (the latest one). While I do like the Blue Ocean editor, it is only for MultiBranch pipelines.
Do I combine all the jobs into one Jenkinsfile, or create multiple Jenkinsfiles for each of the existing jobs (Jenkinsfilestart, JenkinsfileFetchCVs, JenkinsFileFetchGit, Jenkinsfilenextjob, etc.) and have one call the other? Do I create all the old jobs as Jenkinsfiles, or as scripts executed by the one master Jenkinsfile? Do I do this in Declarative or Scripted?
I have set up a Jenkins pipeline on a test VM, but I am not clear on which way to go yet.
Looking for directions and/or examples. Is there documentation on how to convert existing Jenkins non-pipeline systems?
I found this after doing the initial post: https://wiki.jenkins.io/display/JENKINS/Convert+To+Pipeline+Plugin.
It does help a little in that it gives you some converted steps, but it cannot convert all the steps, and it will leave comments in the pipeline script like "//Unable to convert a build step referring to...please verify and convert manually if required." There is an option "Recursively convert downstream jobs if any", and if you select it, it appears to add all the downstream jobs to the same pipeline script, which really confuses the job parameters. There is also an option to "Commit JenkinsFile". I will play with this some more, but it is not the be-all and end-all of converting to pipeline, and I am still not sure whether I should have one or more scripts.
Added 07/26/19 -
Let’s see if I have my research to date correct…
A Declarative pipeline (Pipeline script from SCM) is stored in a Jenkinsfile in the repository. Every time that Jenkins job is executed, a fetch from the repository is done (to get the latest version of the Jenkinsfile).
A Pipeline script is stored as part of the config.xml file in the Jenkins\Jobs folder (it is not stored in the repository, or in a separate Jenkinsfile in the jobs folder). There is a fetch from the repository only if the job requires it (you do not need to do a repository fetch to get the Pipeline script).
Besides our nightly product build, we also have other jobs. I could create a separate Declarative Jenkinsfile for each of them (JenkinsfileA, JenkinsfileB, etc.) and store them in the repository too (in the same branch as the main Jenkinsfile), but that would mean that every one of those additional jobs, just to get its particular Jenkinsfile, would also need to do a repository fetch (basically fetching/cloning the repository branch for each job, and having multiple copies of the repository branch unnecessarily downloaded to the workspace of each job).
That does not make sense to me (unless my understanding of things to date is incorrect). Because the main product build does require a fetch every time it is run (to get any possible developer check-ins), I do not see a problem using a Declarative Jenkinsfile for that job. The other jobs (if we do not leave them in the classic (non-pipeline) format for the time being) will be Pipeline scripts.
Is there any way (or are there plans) to do a Declarative pipeline without having to store it in the repository and do a fetch every time (lessening the need to become a Groovy developer)? The Blue Ocean script editor appears to be an easier tool for creating pipeline scripts, but it is only for MultiBranch pipelines (which we don't do).
Serialization (restarting a job): is that only for when a node goes down, or can you restart a pipeline job (Declarative or Scripted) from any point if it fails?
I see that there are places to look to see which Jenkins plugins have been ported to pipeline, but is there anything that can be run against the classic jobs that you have, to determine up front which jobs are going to have problems being converted to pipeline?
08/02/19...
Studying and playing with pipelines. I see that you can use Declarative in the Pipeline Script window, but it still stores it in the config.xml file. And I have played with the combination of both Declarative and non-Declarative in the same script.
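As an illustration of what that mixing looks like (the stage name and commands are placeholders only), Declarative syntax can embed Scripted (plain Groovy) logic inside a script block, which is the supported way to combine the two:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'a declarative step'
                script {
                    // plain Groovy / Scripted pipeline code
                    def products = ['product1.0', 'product2.0']
                    products.each { p -> echo "would build ${p}" }
                }
            }
        }
    }
}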
I am trying to understand the Blue Ocean interface; the word "MultiBranch" is throwing me a little. We do not create test branches and then merge them back into the master. In the repository, we have branches for each release of the product, and we rarely go back to previous branches/versions. So, if I am working on branchV9 right now, do I also need a Jenkinsfile in the Master branch, or in any of the previous version branches?
I have been playing with Blue Ocean (which only does MultiBranch pipelines). I am on a Windows system, Jenkins 2.176.2, and have all the latest Blue Ocean plugins as of today (1.18.0). I am accessing a local Git repository (not GitHub), and am running into the following...
If I try to use "c:\GitRepos\Pipelines1.git", I get "not a valid name"...
Why does it do this?
If you have a single job that would be executed on multiple branches (with possibly optional stages, depending on the branch name, tag, or other criteria), then you could still utilize a multibranch pipeline.
In general, I would say the paradigm shift focuses mainly on converting the old jobs to stages in order to automate your build process. If you have a semi- or fully automated CI/CD flow, it could look like this (see the sketch after the list):
Multibranch pipeline project (all branches) with the following stages (1st jenkinsfile)
build (all branches)
unit tests (all branches), publish report
publish artifacts (master and release branches)
build and publish docker (master and release branches)
deploy to test (master and release branches)
run integration tests (master and release branches)
deploy to staging (master and release branches), possibly ending with a manual step to confirm that the result of the deployment was as expected
deploy to production (release branches)
Pipeline job for nightly tests (another Jenkinsfile). What's the result here? Would it break the CI/CD flow?
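A rough sketch of the flow above as a single Jenkinsfile used by a multibranch pipeline; the stage names, branch patterns, and echo placeholders are examples only, not a complete implementation:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'build - runs on all branches' }
        }
        stage('Unit tests') {
            steps { echo 'test and publish report - runs on all branches' }
        }
        stage('Publish artifacts') {
            when {
                anyOf {
                    branch 'master'
                    branch 'release/*'
                }
            }
            steps { echo 'publish - master and release branches only' }
        }
        stage('Deploy to production') {
            when { branch 'release/*' }
            steps { echo 'deploy - release branches only' }
        }
    }
}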

Jenkins Pipeline - How to maintain over time

I am currently using CloudBees Jenkins Core as my Jenkins solution.
I am using Jenkins Pipelines to write our Jenkins job configuration. These pipelines are stored in GitHub repositories. Each Jenkins job, when created, is connected to a GitHub repository from which the source code is pulled; that is also where the Jenkinsfile is stored and where Jenkins reads it from.
Below are some high-level screenshots of how our Jenkins jobs are configured.
The advantage of the way these jobs are configured is that the Jenkinsfile is always read from the master branch. Meaning, if a rogue developer tries to remove stages from the Jenkinsfile within their own branch, it doesn't matter, because the Jenkinsfile is always read from the master branch (which is always protected).
However, the one massive drawback to this is: how do the teams and developers who are DevOps engineers make changes to the Jenkinsfile? For example, let's say a developer creates a branch called feature-jenkins-search and edits the Jenkinsfile, adding a new stage to the pipeline. Whenever they push these changes to GitHub to test, they can't test, as the Jenkinsfile is always read from the master branch. Does that mean DevOps engineers have to work directly on the master branch? Surely this is not the best way to go and there is a better configuration to set up?
We do still want to provide the security that a rogue developer cannot simply remove stages from the pipeline.
You should really look into the Jenkins multibranch pipeline feature. A Jenkins multibranch pipeline allows you to create a single configuration item in Jenkins (a bit like a folder) that can detect all the branches and pull requests in a GitHub repository that contain a Jenkinsfile, and build them using automatically created jobs. Inside this multibranch pipeline object, once it is configured in Jenkins, you will find a number of jobs to build the various branches and pull requests in the GitHub repository.
So your developers should maintain a Jenkinsfile in every branch they work on in GitHub to build that branch in your Jenkins server.
It is possible to make the Jenkinsfile do branch-specific handling, if required, with conditional stages (when conditions) in the Jenkinsfile in each branch.
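For example, a minimal, illustrative pipeline (the branch name and shell commands are placeholders):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }   // placeholder build command, runs for every branch
        }
        stage('Deploy') {
            when { branch 'master' }    // conditional stage: only runs for builds of master
            steps { sh 'make deploy' }  // placeholder deploy command
        }
    }
}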
You can lock down the master branch so that code and Jenkinsfile changes from other branches can only be merged with an approved PR (pull request). There is good integration between Jenkins and GitHub such that you can configure the master branch to only allow a PR to be merged if the PR is buildable in Jenkins. So if developers add new stages / processing to a Jenkinsfile on a branch being merged to master, it should be validated so that builds of your master branch are not broken.
There is a lot of configurability in the Jenkins multi-branch pipeline object for detection and handling of branches and it may be necessary to experiment to get it right for what you need with your team. If you cannot find this feature in Jenkins, it is probably because the correct Jenkins pipeline and GitHub related plugins are not installed.
You could also have a look at a similar Jenkins feature called the GitHub Organization Folder, which allows you to detect and build all repos and branches at the GitHub organization level. But when starting out, I would suggest looking into the multibranch pipeline at the single-repo level first.
These features are discussed in the Jenkins pipeline documentation. We use these features with our internal GitHub and Jenkins server and it works very well.
I think you will find the idea of using a single Jenkinsfile in the master branch to be used for building all branches is unworkable, as you have seen!

Pipeline to use artifacts from 2 projects associated by the same git branch name

The company I work for is evaluating Jenkins 2.71, in particular the pipeline and Blue Ocean plugins. We have also tested GoCD, and we need, as in GoCD, a way for a pipeline to automatically fetch the artifacts from two other pipelines (taking the last successful result of each one of them). Here is our case.
We have these initial pipelines (build & run tests), which reflect 2 projects:
frontend, ~ 15 minutes
backend, ~10 minutes
I created a pipeline called configure (~1 minute), with e.g. a parameter called customer-name, which takes the backend and frontend files and puts them together, then applies customer-specific configurations and customizations and produces deployable artifacts. Instead of "customer-name" I could also parallelize this job to create all the artifacts for each customer at once, separated into different directories.
The next pipeline would be to deploy them on different test servers, separated for each customer. This could also be part of the same configure pipeline; we still have to see how to put things together in Jenkins...
Ideally, I need the configure pipeline to be triggered automatically (or also on demand) after each frontend or backend success, and take as input the last successful artifacts from these two pipelines. But not just the last successful build: we need the Git branch name as the dependency.
E.g. we have:
backend branches:
master
release/2017.2
frontend branches:
master
release/2017.2
In the pipeline editor, I found a Build Triggers option and set it as follows: Build after other projects are built > Projects to watch: frontend, backend > checked "Trigger only if build is stable" (or better yet, in my test environment full of failures, "Trigger even if the build is unstable").
Searching further, I found the Copy Artifact plugin.
But now the big question, how to fetch the last successful artifacts from these pipelines with the same git branch name?
Because we don't want to mix, e.g., a backend build of "release/2017.2" with frontend "master", it has to find the last successful build having the same relationship, or parameter, or whatever you want to call it; in our case the association is the Git branch name.
Is it possible to achieve this? If yes, how?
The Copy Artifact plugin seems to work in a freestyle project. Would it work in a pipeline? That's also a concern...
Thanks
Yes, the Copy Artifact plugin does work in both freestyle and pipeline projects; pipeline uses the copyArtifact function that I referenced in my comment. Note that if you go to the Pipeline Syntax link, it's kind of hidden: you have to first select "step: General Build Step" from the drop-down, then it will give you the Copy Artifact pipeline command builder.
I'm going to assume that your frontend and backend projects are built as multibranch pipelines, as that would probably be easiest to maintain so that you don't have to keep creating new projects for every release. You can reference these projects from other projects by referencing <project name>/<branch name> (sometimes I've had to replace the / with %2f instead, I think mostly on freestyle projects). You could then set up your configure project as a parameterized build (either pipeline or freestyle), say with a string parameter of PROJECT_BRANCH_NAME. Then put the following in your frontend/backend project pipeline scripts to trigger a build of your configure project:
build job: 'configure', parameters: [[$class: 'StringParameterValue', name: 'PROJECT_BRANCH_NAME', value: env.BRANCH_NAME]]
Then you should just be able to make your configure project reference the frontend/%PROJECT_BRANCH_NAME% and backend/%PROJECT_BRANCH_NAME% (or ${env.PROJECT_BRANCH_NAME} in a pipeline script) when copying the artifacts.
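On the configure side, a sketch of the copy step (using the copyArtifacts step from the Copy Artifact plugin; the target directories are placeholders, and branch names containing / may need the %2f encoding mentioned above):

node {
    // copy the last successful artifacts from the same branch that triggered this build
    copyArtifacts projectName: "frontend/${params.PROJECT_BRANCH_NAME}",
                  selector: lastSuccessful(),
                  target: 'frontend-artifacts'
    copyArtifacts projectName: "backend/${params.PROJECT_BRANCH_NAME}",
                  selector: lastSuccessful(),
                  target: 'backend-artifacts'
    // ...assemble the customer-specific packages from the two directories...
}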
Also, is there a particular reason why you're evaluating specifically Jenkins 2.7? 2.7 is a year old now, and there have been a few new LTS releases since then. I'd recommend staying reasonably up-to-date unless you know there's a specific reason you want 2.7.

Can I store Jenkins configuration in the project repo (like Travis CI)?

How do you maintain the Jenkins job configuration in SCM along side the source code?
As source code evolves, so does the job configuration. It would be ideal to be able to keep the job configuration in SCM, for the following benefits:
easy to see a history of the changes, including the author and the description
able to rebuild an old branch/tag by checking out that revision and having the build just work
not having to scroll through the UI to find the appropriate section and make changes
I see there is a Jenkins Job Builder plugin. I prefer a solution along the lines of Travis CI, where the job configuration is maintained in a YAML file (.travis.yml). Any good suggestions?
Note: Most of our projects are using Java & Maven.
Update 2016: Jenkins now provides a Jenkinsfile, which gives you exactly this. It is supported by the core Jenkins developers and actively developed.
Benefits:
Creating a Jenkinsfile, which is checked into source control, provides a number of immediate benefits:
Code review/iteration on the Pipeline
Audit trail for the Pipeline
Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
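As a minimal, illustrative example (the Maven invocation and report path are placeholders, assuming a Maven project like those mentioned in the question), such a Jenkinsfile checked into the repository root might look like:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean verify' }   // placeholder build command
        }
    }
    post {
        always { junit '**/target/surefire-reports/*.xml' }  // publish test results
    }
}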
I've written a plugin that does this!
Other than my plugin, you have some (limited) options with existing Jenkins plugins:
Use a single test script
If you configure your Jenkins to simply run:
$ bash run_tests.sh
You can then check in a run_tests.sh file into your SCM repo and you're now tracking changes for how you run tests. However, this won't track configuration of any plugins.
Similarly, if you're using Maven, the Maven Project Plugin simply runs a specified goal for your repo.
The Literate Plugin does allow Jenkins to run the commands in your README.md, but it hasn't yet been released.
Track changes to Jenkins configuration
You can use the SCM Sync configuration plugin to write configuration changes to SCM, so you at least have a persistent record. This is global, across all projects on your Jenkins instance.
There's also the job config history plugin, which stores config history on the filesystem.
Write Jenkins configuration from SCM
The Jenkins job builder project you mentioned lets you check config changes into SCM and have them applied to your Jenkins instance. Again, this is across all projects on your Jenkins instance.
Write Jenkins configuration from another job
You can use the Job DSL Plugin with a repo of Groovy scripts. Jenkins then polls that repo and executes the Groovy scripts, which create job configurations.
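An illustrative Job DSL script (the job name, repository URL, and schedule are placeholders) of the kind such a seed job would execute to create or update a job from code:

job('my-app-nightly') {
    scm {
        git('https://example.com/scm/my-app.git', 'master')  // placeholder repo and branch
    }
    triggers {
        cron('H 2 * * *')        // run nightly
    }
    steps {
        maven('clean verify')    // placeholder Maven goals
    }
}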
Discussions
Issue 996 (now closed) discusses this, and it has also been discussed on the mailing list: 'Keeping track of Hudson's configuration changes', and 'save hudson config in svn'.
You can do all of this, and a lot more, with the Workflow plugin. Workflow is one of the most advanced techniques for using Jenkins, and it has very strong support.
It is based on a Groovy DSL and allows you to keep the whole configuration in the SCM of your choice (e.g. Git, SVN...).
