Configuring a single Jenkins release job to release from trunk or branches with Perforce as SCM

This question is similar to How to configure a single Jenkins job to make the release process from trunk or branches?, but in this case Perforce is the SCM being used within Jenkins. Currently in Jenkins I have the following:
One release job per branch/trunk.
Each job has a separate Perforce workspace mapping the necessary branch/trunk.
Upon running the job, the jenkins-perforce-plugin synchronises the complete workspace and then runs the Maven release plugin.
Ideally I would like to have one release job that can point to any branch, synchronise the code from that branch, and carry out a Maven release. However, with Perforce workspaces, I will require a view mapping for each branch/trunk. Is there a way to tell the jenkins-perforce-plugin to synchronise only a particular path within the workspace view? That way I could make the release job parameterised: a parameter passes in the branch path, the jenkins-perforce-plugin synchronises the job's Perforce workspace to that path only, and the build is carried out from there.

If I were trying to implement this as described, I would create a parameterized build where I could hand in a Perforce label name. The Perforce Jenkins plugin can sync to a label; I would create labels for each release specifying the paths that should be synced, with a revision specifier of #head.
Jenkins should then sync just the files you want for that build. The workspace would of course map everything; the labels will specify the files to fetch.
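In pipeline form the same idea could look like the sketch below. This is a minimal sketch, not the plugin's exact configuration: the parameter name, depot path, and use of the p4 command-line client are all assumptions for illustration.

    // Jenkinsfile sketch: sync a Perforce workspace to a label passed in as a
    // build parameter, then run the Maven release.
    // 'RELEASE_LABEL' and '//depot/...' are illustrative, not from the post.
    pipeline {
        agent any
        parameters {
            string(name: 'RELEASE_LABEL', defaultValue: 'release-1.0',
                   description: 'Perforce label that pins the files for this release')
        }
        stages {
            stage('Sync') {
                steps {
                    // Sync only the files tagged by the label; the client view
                    // can map everything, the label limits what is fetched.
                    sh "p4 sync //depot/...@${params.RELEASE_LABEL}"
                }
            }
            stage('Release') {
                steps {
                    sh 'mvn -B release:prepare release:perform'
                }
            }
        }
    }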

Related

Jenkins / Poll SCM : How to retrieve which SVN repository change triggered build job in declarative pipeline

I'm currently trying to set up a build job connected to multiple (3) SVN repositories with a single pipeline, using the Poll SCM scheme. All the contents from the repositories build into a single artifact. The repositories are, say:
https://myrepo.com/mainSrc/main
https://myrepo.com/libs/library1
https://myrepo.com/libs/library2
When I set all these repos in the Pipeline configuration, the build job successfully checks out all the paths into the workspace.
However, the point is that I need to track which repository started the build job, in order to write some conditional steps. Is there a solution for this? I googled for it; there are ways to obtain the SVN revision for a path, but that's not what I'm looking for.
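One approach (my sketch, not from the original thread): inspect currentBuild.changeSets inside the pipeline and match the affected paths against the known repository roots. The path prefixes below are assumptions about the checkout layout and would need adjusting.

    // Run inside a script { } block in a declarative pipeline.
    // Walks the change sets that triggered this build and records which
    // repository each changed file belongs to.
    def triggeringRepos = [] as Set
    for (changeSet in currentBuild.changeSets) {
        for (entry in changeSet.items) {
            for (path in entry.affectedPaths) {
                // Illustrative prefixes: adjust to match your checkout dirs.
                if (path.startsWith('main/'))     { triggeringRepos << 'mainSrc/main' }
                if (path.startsWith('library1/')) { triggeringRepos << 'libs/library1' }
                if (path.startsWith('library2/')) { triggeringRepos << 'libs/library2' }
            }
        }
    }
    echo "Changes came from: ${triggeringRepos}"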

Use one Jenkinsfile or multiple Jenkinsfiles

We are currently using Windows / Jenkins 2.107.1 (no pipeline), and I am researching moving to pipeline. We have a nightly build job that fetches from repositories, and submits and waits on other jobs. I see 9 jobs running on the same master node (we only have a master), at the same time. I am not clear on whether we should have one Jenkinsfile or multiple Jenkinsfiles. It will not be a multibranch pipeline, as we do not create test branches and then merge back to a master. In the repository we have a product1.0 branch, a product2.0 branch, etc., and build only one branch (the latest one). While I do like the Blue Ocean editor, it is only for multibranch pipelines.
Do I combine all the jobs into one Jenkinsfile, or create multiple Jenkinsfiles, one for each of the existing jobs (JenkinsfileStart, JenkinsfileFetchCVS, JenkinsfileFetchGit, JenkinsfileNextJob, etc., and have one call the other)? Do I create all the old jobs as Jenkinsfiles, or as scripts executed by the one master Jenkinsfile? Do I do this in Declarative or Scripted?
I have set up a Jenkins pipeline on a test VM, but am not clear on which way to go yet.
I am looking for directions and/or examples. Is there documentation on how to convert existing Jenkins non-pipeline systems?
I found this after the initial post: https://wiki.jenkins.io/display/JENKINS/Convert+To+Pipeline+Plugin.
It helps a little in that it gives you some converted steps, but it cannot convert all the steps, and will leave comments in the pipeline script ("//Unable to convert a build step referring to...please verify and convert manually if required."). There is an option "Recursively convert downstream jobs if any"; if you select it, it appears to add all the downstream jobs to the same pipeline script, and really confuses the job parameters. There is also an option to "Commit JenkinsFile". I will play with this some more, but it is not the be-all and end-all of converting to pipeline, and I am still not sure whether I should have one or more scripts.
Added 07/26/19 -
Let’s see if I have my research to date correct…
A Declarative pipeline (Pipeline script from SCM) is stored in a Jenkinsfile in the repository. Every time the Jenkins job is executed, a fetch from the repository is done (to get the latest version of the Jenkinsfile).
A Pipeline script is stored as part of the config.xml file in the Jenkins\Jobs folder (it is not stored in the repository, or in a separate Jenkinsfile in the jobs folder). There is a fetch from the repository only if the job requires one (you do not need a repository fetch to get the Pipeline script).
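For reference, the Jenkinsfile fetched in the first case is just a text file at the root of the repository; a minimal declarative one looks like the sketch below (the stage and step are illustrative):

    // Minimal declarative Jenkinsfile, committed to the repository root.
    // With "Pipeline script from SCM", Jenkins fetches this file on every
    // run before executing it; with "Pipeline script", the same text lives
    // in the job's config.xml instead.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    echo 'build steps go here'
                }
            }
        }
    }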
Besides our nightly product build, we also have other jobs. I could create a separate declarative Jenkinsfile for each of them (JenkinsfileA, JenkinsfileB, etc.) and store them in the repository too (in the same branch as the main Jenkinsfile), but that would mean that every one of those additional jobs, just to get its particular Jenkinsfile, would also need to do a repository fetch (basically fetching/cloning the repository branch for each job, with multiple copies of the branch unnecessarily downloaded to each job's workspace).
That does not make sense to me (unless my understanding to date is incorrect). Because the main product build does require a fetch every time it is run (to pick up any developer check-ins), I do not see a problem using a declarative Jenkinsfile for that job. The other jobs (if we do not leave them in the classic, non-pipeline format for the time being) will be Pipeline scripts.
Is there any way of (or plans for) doing Declarative pipeline without having to store the script in the repository and fetch it every time (lessening the need to become a Groovy developer)? The Blue Ocean script editor appears to be an easier tool for creating pipeline scripts, but it is only for multibranch pipelines (which we don't do).
Serialization (restarting a job): is that only for when a node goes down, or can you restart a pipeline job (Declarative or Scripted) from any point if it fails?
I see that there are places to look to see which Jenkins plugins have been ported to pipeline, but is there anything that can be run against your existing classic jobs to determine up front which of them will have problems being converted to pipeline?
08/02/19...
Studying and playing with pipelines. I see that you can use Declarative in the Pipeline script window, but it is still stored in the config.xml file. I have also played with combining Declarative and non-Declarative in the same script.
I am trying to understand the Blue Ocean interface; the word "MultiBranch" is throwing me a little. We do not create test branches and then merge them back into master. In the repository we have branches for each release of the product, and we rarely go back to previous branches/versions. So, if I am working on branchV9 right now, do I also need a Jenkinsfile in the master branch, or in any of the previous version branches?
I have been playing with Blue Ocean (which only does MultiBranch pipelines). I am on a Windows system, Jenkins 2.176.2, and have all the latest Blue Ocean plugins as of today (1.18.0). I am accessing a local Git repository (not GitHub), and am running into the following...
If I try to use "c:\GitRepos\Pipelines1.git", I get "not a valid name"...
Why does it do this?
If you have a single job that would be executed on multiple branches (with possibly optional stages, depending on the branch name, tag, or other criteria), then you could still utilize a multibranch pipeline.
In general I would say that the paradigm shift focuses mainly on converting the old jobs into stages in order to automate your build process. A semi- or fully-automated CI/CD flow could look like this:
Multibranch pipeline project (all branches) with the following stages (1st Jenkinsfile; see the sketch after this list):
build (all branches)
unit tests (all branches), publishing a report
publish artifacts (master and release branches)
build and publish docker (master and release branches)
deploy to test (master and release branches)
run integration tests (master and release branches)
deploy to staging (master and release branches), possibly ending with a manual step to confirm the deployment behaved as expected
deploy to production (release branches)
Pipeline job for nightly tests (a second Jenkinsfile): what is the expected result here? Would a failure break the CI/CD flow?
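A minimal sketch of how those branch-gated stages can be expressed in a declarative Jenkinsfile; the branch name patterns and stage bodies are illustrative:

    // Multibranch Jenkinsfile sketch: 'when { branch ... }' restricts a stage
    // to particular branches, so one file serves every branch.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { echo 'build on every branch' }
            }
            stage('Unit tests') {
                steps { echo 'run unit tests and publish the report' }
            }
            stage('Publish artifacts') {
                when { anyOf { branch 'master'; branch 'release/*' } }
                steps { echo 'publish artifacts' }
            }
            stage('Deploy to production') {
                when { branch 'release/*' }
                steps { echo 'deploy to production' }
            }
        }
    }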

Inconsistent Jenkins workspace path on slave machines

We have some jobs set up which share a workspace. The workflow for the various branches is:
Build a big honking C++ project called foo.
Execute several downstream tests, each of which uses the workspace of foo.
We accomplish this by assigning the Use custom workspace field of the downstream jobs to the build workspace.
Recently, we took one branch and assigned it to be built on a Jenkins slave machine rather than on the master. I was surprised to find that on the master, the foo repository was cloned to $JENKINS_JOBS_PATH/FOO/workspace/foo_repo, while on the slave, the repository was cloned to $JENKINS_JOBS_PATH/FOO/foo_repo.
Is this by design, or have we somehow configured master and slave inconsistently?
Older versions of Jenkins put the workspace under the ${JENKINS_HOME}/jobs/JOB/workspace directories. After upgrading, this pattern stays with the Jenkins instance. New versions put the workspaces in ${JENKINS_HOME}/workspace/. I suspect the slaves don't need to follow the old pattern (especially if it is a newer slave), so the directories may not be consistent across machines.
You can change the location of the workspaces on the master in Jenkins -> Configure Jenkins -> Advanced.
I think the safe way to handle this: if you are going to use a custom workspace, you should use it for all of your jobs, including the first one that builds the big honking C++ project.
If you did this all in a pipeline, you could run everything in a single job and have more control over where the files are. You also have the option of stash and unstash (see the sketch below), though if the files are huge, stash may not be the way to go.
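A minimal sketch of that single-job pipeline approach; the build commands, paths, and stash name are illustrative:

    // Build the C++ project once, then reuse its outputs in a test stage
    // via stash/unstash instead of sharing a custom workspace.
    pipeline {
        agent any
        stages {
            stage('Build foo') {
                steps {
                    sh 'make foo'                                  // illustrative build command
                    // For very large outputs, stash can be slow; archiving
                    // artifacts or a shared filesystem may fit better.
                    stash name: 'foo-build', includes: 'build/**'
                }
            }
            stage('Downstream tests') {
                steps {
                    unstash 'foo-build'
                    sh './run_tests.sh build'                      // illustrative test runner
                }
            }
        }
    }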
Alternatively, you can omit the 'Use custom workspace' option for each job, change the master and/or slave workspace root paths instead, and use the path
%WORKSPACE%/../foo_repo
or, equivalently,
./../foo_repo
This works because
%WORKSPACE% = [master or slave node workspace root]/[job name]
and therefore
%WORKSPACE%/../ = [master or slave node workspace root]

How do I trigger deploy after the successful build of a specific branch?

I have a Jenkins task that triggers on any changes made to a gitlab project.
There are a few situations I'd like to be able to set up, however I'm not sure how to best accomplish them. Most of it centers around being able to do the following:
Once the job is complete, I'd like to trigger another job that takes the contents of the first job's workspace (emptying out the initial one).
I'd like for a way to only run certain other jobs when the workspace contains a specific branch (automatically deploy develop branch to a preview environment).
"to trigger another job that takes the contents of the first job's workspace" see Shared workspace plugin:
This plugin allows to share workspaces by Jenkins jobs with the same SCM repos.
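For the second part (deploying only the develop branch), one sketch (not from the original answer) is a pipeline that gates a downstream trigger on the branch name; 'deploy-preview' is a hypothetical job name:

    // After a successful build, kick off a (hypothetical) preview deploy job,
    // but only when the develop branch was built.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { echo 'build the project' }
            }
            stage('Trigger preview deploy') {
                when { branch 'develop' }      // needs a multibranch pipeline
                steps {
                    build job: 'deploy-preview', wait: false
                }
            }
        }
    }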

How to sync to a Perforce pending changelist in jenkins

I have a pending changelist that I want to test in the build on our Jenkins server. I tried to do this using a P4 label but syncing to the label does not pick up my pending changelist.
Is there any way to do this with Jenkins SCM configuration?
In general, unless you have some very special circumstances, no other workspace can sync your pending changelist's changes, because they exist ONLY on your own workstation, not on the server. The server knows the names of the files in your pending changelist, but not their contents.
To make your changes accessible to the automated build tools, there are generally two approaches:
You can shelve the changes, then instruct the build tool to build from the shelf, or
You can check your changes into a development branch, then instruct the build tool to build the branch.
Or, of course, you could check your changes into the mainline, and have the build tools build them normally, but I'm guessing from your question you don't want to do that.
You can use the P4 Plugin. Follow the steps below (a pipeline version is sketched after them):
Create a Jenkins job.
Select the checkbox: This project is parameterized.
Add a parameter named Changelist.
For the first build step, select the Perforce unshelve option, use ${Changelist} as the changelist, and set the resolve type as per your requirements.
Under Build Triggers, select Trigger builds remotely (e.g., from scripts).
To trigger the build:
Shelve the changelist.
Trigger the build with the URL http://jenkinsurl/job/projectname/buildWithParameters?Changelist=<shelved_cl>
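If you use a pipeline instead of a freestyle job, the same flow might look like the sketch below; it shells out to the p4 command-line client, and the depot path and build command are illustrative:

    // Pipeline sketch: build a shelved changelist passed in as a parameter.
    pipeline {
        agent any
        parameters {
            string(name: 'Changelist', description: 'Shelved changelist number to unshelve')
        }
        stages {
            stage('Sync and unshelve') {
                steps {
                    sh 'p4 sync //depot/...#head'            // illustrative depot path
                    sh "p4 unshelve -s ${params.Changelist}" // pull in the shelved files
                }
            }
            stage('Build') {
                steps {
                    sh 'make'                                // illustrative build command
                }
            }
        }
    }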
