I am getting this NullPointerException and the lock is not getting freed. It was working correctly for a long time and suddenly started to throw these exceptions.
Does anyone have any idea?
stage('Deploy to iDev') {
    steps {
        script {
            lock(resource: "$DEV_LOCK", inversePrecedence: true) {
                milestone(15)
                ansiColor('xterm') {
                    ansibleTower credential: '',
                        extraVars: "$DEV_ANSIBLE_PARAMS",
                        importTowerLogs: true,
                        importWorkflowChildLogs: false,
                        inventory: '',
                        jobTags: '',
                        jobTemplate: "$DEV_ANSIBLE_ID",
                        jobType: 'run',
                        limit: '',
                        removeColor: false,
                        skipJobTags: '',
                        templateType: 'job',
                        towerServer: "$TOWER_SERVER",
                        verbose: true
                }
            }
            if ("$DEV_CONTAINER_JOB" != 'NA') {
                build job: "$DEV_CONTAINER_JOB"
            }
            if ("$DEV_TEST_JOB" != 'NA') {
                build job: DEV_TEST_JOB, parameters: [[$class: DEV_TEST_PARAMS_CLASS, name: DEV_TEST_PARAMS_NAME, value: DEV_TEST_PARAMS_VALUE]]
            }
        }
    }
    post {
        failure {
            // We want to email the development team.
        }
        aborted {
            echo "aborted.. during deploy to iDev"
        }
    }
}
The errors are below:
java.lang.NullPointerException
at org.jenkins.plugins.lockableresources.LockableResourcesManager.freeResources(LockableResourcesManager.java:323)
at org.jenkins.plugins.lockableresources.LockableResourcesManager.unlockNames(LockableResourcesManager.java:367)
at org.jenkins.plugins.lockableresources.LockStepExecution$Callback.finished(LockStepExecution.java:125)
at org.jenkinsci.plugins.workflow.steps.BodyExecutionCallback$TailCall.onSuccess(BodyExecutionCallback.java:114)
at org.jenkinsci.plugins.workflow.cps.CpsBodyExecution$SuccessAdapter.receive(CpsBodyExecution.java:368)
at com.cloudbees.groovy.cps.Outcome.resumeFrom(Outcome.java:73)
As per this, you need to do it like below, as lock is a declarative step or wrapper:
stage('Deploy to iDev') {
    steps {
        lock(resource: "$DEV_LOCK", inversePrecedence: true) {
            script {
                .
                .
                .
            }
        }
    }
}
You might run into problems with $DEV_LOCK too, depending on how you defined it. You might be able to do "${env.DEV_LOCK}" or "${DEV_LOCK}".
Looking a bit closer, I think you only need the script block for the if statements. You could even put the build job... calls into separate stages using when clauses with expressions, lose the script block altogether, and lock the whole pipeline as per my first linked answer; a rough sketch of that shape follows.
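A rough sketch of that shape, assuming the Lockable Resources plugin's pipeline-level lock option and placeholder stage names (not a drop-in replacement):

pipeline {
    agent any
    options {
        // Sketch only: lock the whole pipeline instead of a single stage
        lock(resource: "${env.DEV_LOCK}", inversePrecedence: true)
    }
    stages {
        stage('Deploy to iDev') {
            steps {
                milestone(15)
                // ansiColor / ansibleTower call from the question goes here
            }
        }
        stage('Container job') {
            when { expression { return env.DEV_CONTAINER_JOB != 'NA' } }
            steps {
                build job: env.DEV_CONTAINER_JOB
            }
        }
        stage('Test job') {
            when { expression { return env.DEV_TEST_JOB != 'NA' } }
            steps {
                build job: env.DEV_TEST_JOB
            }
        }
    }
}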
At first glance, I would say that $DEV_LOCK doesn't exist at the moment it is evaluated. Just for the sake of argument, you can try changing it to a static string temporarily, let's say
lock(resource: "foo", inversePrecedence: true)
Getting a bit deeper into it, and seeing this line of the error stack trace:
org.jenkins.plugins.lockableresources.LockableResourcesManager.freeResources(LockableResourcesManager.java:323)
judging by the date of this post and the commit history of the plugin, I would say that we are talking about this line in the plugin's code: https://github.com/jenkinsci/lockable-resources-plugin/blob/79034dcd1c12f88030b0990356ad9f7c63d1937e/src/main/java/org/jenkins/plugins/lockableresources/LockableResourcesManager.java#L323
The line, in its surrounding context, is:
     private synchronized void freeResources(List<String> unlockResourceNames, @Nullable Run<?, ?> build) {
321.     for (String unlockResourceName : unlockResourceNames) {
322.         for (LockableResource resource : this.resources) {
323.             if (resource != null && resource.getName() != null && resource.getName().equals(unlockResourceName)) {
We can see that resource is not null and resource.getName() is not null (both are checked first), so the only possible null thing there is unlockResourceName, which makes sense since it is not checked in the lines above.
So it looks like the resource name of your resource (remember that your resource was $DEV_LOCK) happens to be null.
So I would say that the test proposed above, using just lock(resource: "foo", inversePrecedence: true) to see if the problem comes from there, would be a good starting point. If it works, then you can decide whether you really need $DEV_LOCK as an env variable, or whether you can change it to something static. If you do need it, take it from there and try to find out where it gets unset, or whether it is actually set anywhere.
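If the variable turns out to be the culprit but you still want it configurable, one option is to define it in an environment block so it can never be null when lock evaluates it. A minimal sketch, assuming a declarative pipeline; the value is a placeholder:

pipeline {
    agent any
    environment {
        DEV_LOCK = 'dev-deploy-lock'   // placeholder resource name
    }
    stages {
        stage('Deploy to iDev') {
            steps {
                lock(resource: "${env.DEV_LOCK}", inversePrecedence: true) {
                    echo "holding ${env.DEV_LOCK}"
                }
            }
        }
    }
}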
Related
In this integration pipeline in Jenkins, I am triggering different builds in parallel using the build step, as follows:
stage('trigger all builds')
{
    parallel
    {
        stage('componentA')
        {
            steps
            {
                script
                {
                    def myjob = build job: 'componentA', propagate: true, wait: true
                }
            }
        }
        stage('componentB')
        {
            steps
            {
                script
                {
                    def myjob = build job: 'componentB', propagate: true, wait: true
                }
            }
        }
    }
}
I would like to access the return value of the build step, so that I can know in my Groovy scripts which job name and number were triggered.
I have found in the examples that the object returned has getters like getProjectName() or getNumber() that I can use for this.
But how do I know the exact class of the returned object and the list of methods I can call on it? This seems to be missing from the Pipeline documentation. I am asking for this case in particular, but generally speaking, how can I know the class of the returned object and its documentation?
The step documentation is generated based on some files that are bundled with the plugin, which sometimes isn't enough. One easy way would be to just print out the class of the result object by calling getClass:
def myjob=build job: 'componentB', propagate: true, wait: true
echo "${myjob.getClass()}"
This output would tell you that the result (in this case) is a org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper which has published Javadoc.
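Once you know it is a RunWrapper, you can call its published getters directly; for example (a small sketch, reusing the job from the question):

def myjob = build job: 'componentB', propagate: true, wait: true
// a few of the getters documented in the RunWrapper Javadoc
echo "Triggered ${myjob.getProjectName()} #${myjob.getNumber()}"
echo "Result: ${myjob.getResult()}"
echo "URL: ${myjob.getAbsoluteUrl()}"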
For other cases, I usually have to dive into the Jenkins source code. Here is my general strategy:
Figure out which plugin the step comes from either by the step documentation, jenkins.io steps reference, or just searching the internet
From the plugin site, go to the source code repository
Search for the String literal of the step name, and find the step type that returns it. In this case, it looks to be coming from the BuildTriggerStep class, which extends AbstractStepImpl
@Override
public String getFunctionName() {
    return "build";
}
Look at the nested DescriptorImpl to see what execution class is returned
public DescriptorImpl() {
    super(BuildTriggerStepExecution.class);
}
Go to BuildTriggerStepExecution and look at the execution body in the start() method
Reading over the workflow step README shows that something should call context.onSuccess(value) to return a result. There is one place in that file, but that is only the "no-wait" case, which always returns immediately with null (source).
if (step.getWait()) {
    return false;
} else {
    getContext().onSuccess(null);
    return true;
}
OK, so it isn't completing in the step execution, so it must be somewhere else. We can also search the repository for onSuccess and see what else might trigger it from this plugin. We find that a RunListener implementation handles setting the result asynchronously for the step execution if it has been configured that way:
for (BuildTriggerAction.Trigger trigger : BuildTriggerAction.triggersFor(run)) {
    LOGGER.log(Level.FINE, "completing {0} for {1}", new Object[] {run, trigger.context});
    if (!trigger.propagate || run.getResult() == Result.SUCCESS) {
        if (trigger.interruption == null) {
            trigger.context.onSuccess(new RunWrapper(run, false));
        } else {
            trigger.context.onFailure(trigger.interruption);
        }
    } else {
        trigger.context.onFailure(new AbortException(run.getFullDisplayName() + " completed with status " + run.getResult() + " (propagate: false to ignore)"));
    }
}
run.getActions().removeAll(run.getActions(BuildTriggerAction.class));
The trigger.context.onSuccess(new RunWrapper(run, false)); call is where the RunWrapper result comes from.
The result of the downstream job is given in the result attribute of the returned object.
I recommend using propagate: false to get control over how the result of the downstream job affects the current build.
Example:
pipeline {
    [...]
    stages {
        stage('Dummy') {
            steps {
                echo "Hello world #1"
            }
        }
        stage('Fails') {
            steps {
                script {
                    downstream = build job: 'Pipeline Test 2', propagate: false
                    if (downstream.result != 'SUCCESS') {
                        unstable(message: "Downstream job result is ${downstream.result}")
                    }
                }
            }
        }
    }
    [...]
}
In this example, the current build is set to UNSTABLE whenever the downstream build has not been successful.
The result can be: SUCCESS, FAILURE, UNSTABLE, or ABORTED.
For your other question see Is there a built-in function to print all the current properties and values of an object? and Groovy / grails how to determine a data type?
I got the class name from the build log:
13:20:52 org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper@5fe8d20f
and then I found the docs at https://javadoc.jenkins.io/; you can get everything you need from this page:
link: https://javadoc.jenkins.io/plugin/workflow-support/org/jenkinsci/plugins/workflow/support/steps/build/RunWrapper.html
My Jenkinsfile has several parameters. Every time I make an update to the parameters (e.g. remove or add a new input) and commit the change to my SCM, I do not see the job input screen updated accordingly in Jenkins; I have to run an execution, cancel it, and only then see my updated fields in:
properties([
    parameters([
        string(name: 'a', defaultValue: 'aa', description: '*', ),
        string(name: 'b', description: '*', ),
        string(name: 'c', description: '*', ),
    ])
])
Any clues?
One of the ugliest things I've done to get around this is create a Refresh parameter which basically exits the pipeline right away. This way I can run the pipeline just to update the properties.
pipeline {
    agent any
    parameters {
        booleanParam(name: 'Refresh',
            defaultValue: false,
            description: 'Read Jenkinsfile and exit.')
    }
    stages {
        stage('Read Jenkinsfile') {
            when {
                expression { return parameters.Refresh == true }
            }
            steps {
                echo("Ended pipeline early.")
            }
        }
        stage('Run Jenkinsfile') {
            when {
                expression { return parameters.Refresh == false }
            }
            stage('Build') {
                // steps
            }
            stage('Test') {
                // steps
            }
            stage('Deploy') {
                // steps
            }
        }
    }
}
There really must be a better way, but I'm yet to find it :(
Unfortunately TomDotTom's answer was not working for me - I had the same issue, and my Jenkins required an additional stages block under 'Run Jenkinsfile' because of the following error:
Unknown stage section "stage". Starting with version 0.5, steps in a stage must be in a ‘steps’ block.
Also, I am using params instead of parameters as the variable to check the condition (as described in the Jenkins syntax).
pipeline {
    agent any
    parameters {
        booleanParam(name: 'Refresh',
            defaultValue: false,
            description: 'Read Jenkinsfile and exit.')
    }
    stages {
        stage('Read Jenkinsfile') {
            when {
                expression { return params.Refresh == true }
            }
            steps {
                echo("stop")
            }
        }
        stage('Run Jenkinsfile') {
            when {
                expression { return params.Refresh == false }
            }
            stages {
                stage('Build') {
                    steps {
                        echo("build")
                    }
                }
                stage('Test') {
                    steps {
                        echo("test")
                    }
                }
                stage('Deploy') {
                    steps {
                        echo("deploy")
                    }
                }
            }
        }
    }
}
Applied to Jenkins 2.233.
The Jenkinsfile needs to be executed in order to update the job properties, so you need to start a build with the new file.
Apparently it is a known Jenkins "issue" or "hidden secret": https://issues.jenkins.io/browse/JENKINS-41929.
I overcome this automatically using the Jenkins Job DSL plugin.
I have a Job DSL seed job for my pipelines that checks for changes in the git repository containing my pipeline.
pipelineJob('myJobName') {
    // sets RELOAD=true for when the job is 'queued' below
    parameters {
        booleanParam('RELOAD', true)
    }
    definition {
        cps {
            script(readFileFromWorkspace('Jenkinsfile'))
            sandbox()
        }
    }
    // queue the job to run so it re-downloads its Jenkinsfile
    queue('myJobName')
}
Upon changes, the seed job runs and re-generates the pipeline's configuration, including params. After the pipeline is created/updated, Job DSL will queue the pipeline with the special param RELOAD.
The pipeline then reacts to it in the first stage and aborts early. (Apparently there is no way in Jenkins to stop a pipeline at the end of a stage without an error causing a "red" pipeline.)
As the parameters in the Jenkinsfile are declared via properties, they will be set over anything set by the seed job, like RELOAD. At this point the pipeline is ready with its actual params, without any sign of RELOAD to confuse users.
properties([
    parameters([
        string(name: 'PARAM1', description: 'my Param1'),
        string(name: 'PARAM2', description: 'my Param2'),
    ])
])

pipeline {
    agent any
    stages {
        stage('Preparations') {
            when { expression { return params.RELOAD == true } }
            // Because this: https://issues.jenkins-ci.org/browse/JENKINS-41929
            steps {
                script {
                    if (currentBuild.getBuildCauses('hudson.model.Cause') != null) {
                        currentBuild.displayName = 'Parameter Initialization'
                        currentBuild.description = 'On first build we just load the parameters as they are not available of first run on new branches. A second run has been triggered automatically.'
                        currentBuild.result = 'ABORTED'
                        error('Stopping initial build as we only want to get the parameters')
                    }
                }
            }
        }
        stage('Parameters') {
            steps {
                echo 'Running real job steps...'
            }
        }
    }
}
The end result is that every time I update anything in the pipeline repository, all jobs generated by the seed are updated and run to get the updated params list. There will be a "Parameter Initialization" message to indicate such a run.
There is potentially a way to improve this and only update the affected pipelines, but I haven't explored that, as all my pipelines are in one repository and I'm happy with always updating them.
Another upgrade could be that, if someone doesn't like the "abort" with "error", you could add a when condition to every other stage to skip it if the RELOAD parameter is set, but I find adding when to every other stage cumbersome.
I initially tried @TomDotTom's answer, but I didn't like the manual effort.
Scripted pipeline workaround - can probably make it work in declarative as well.
Since you are using SCM, you can check which files have changed since the last build (see here), and then decide what to do based on it.
Note that poll SCM on the job must be enabled to detect the Jenkinsfile changes automatically.
node('master') {
    checkout scm
    if (checkJenkinsfileChanges()) {
        return // exit the build immediately
    }
    echo "build" // build stuff
}

private Boolean checkJenkinsfileChanges() {
    filesChanged = getChangedFilesList()
    jenkinsfileChanged = filesChanged.contains("Jenkinsfile")
    if (jenkinsfileChanged) {
        if (filesChanged.size() == 1) {
            echo "Only Jenkinsfile changed, quitting"
        } else {
            echo "Rescheduling job with updated Jenkinsfile"
            build job: env.JOB_NAME
        }
    }
    return jenkinsfileChanged
}

// returns a list of changed files
private String[] getChangedFilesList() {
    changedFiles = []
    for (changeLogSet in currentBuild.changeSets) {
        for (entry in changeLogSet.getItems()) { // for each commit in the detected changes
            for (file in entry.getAffectedFiles()) {
                changedFiles.add(file.getPath()) // add changed file to list
            }
        }
    }
    return changedFiles
}
I solve this by using the Jenkins Job Builder python package. The main goal of this package is to achieve Jenkins Job as Code.
To solve your problem, I can simply use something like the below and keep it in SCM, with a Jenkins pipeline that listens for changes to the jobs.yaml file and rebuilds the job for me, so that whenever I trigger my job all the needed parameters are ready.
jobs.yaml
- job:
    name: 'job-name'
    description: 'deploy template'
    concurrent: true
    properties:
      - build-discarder:
          days-to-keep: 7
      - rebuild:
          rebuild-disabled: false
    parameters:
      - choice:
          name: debug
          choices:
            - Y
            - N
          description: 'debug flag'
      - string:
          name: deploy_tag
          description: "tag to deploy, default to latest"
      - choice:
          name: deploy_env
          choices:
            - dev
            - test
            - preprod
            - prod
          description: "Environment"
    project-type: pipeline
    # you can use either DSL or pipeline SCM
    dsl: |
      node() {
        stage('info') {
          print params
        }
      }
    # pipeline-scm:
    #   script-path: Jenkinsfile
    #   scm:
    #     - git:
    #         branches:
    #           - master
    #         url: 'https://repository.url.net/x.git'
    #         credentials-id: 'jenkinsautomation'
    #         skip-tag: true
    #         wipe-workspace: false
    #         lightweight-checkout: true
config.ini
[job_builder]
allow_duplicates = False
keep_descriptions = False
ignore_cache = True
recursive = False
update = all
[jenkins]
query_plugins_info = False
url = http://localhost:8080
Command to load / update the job:
jenkins-jobs --conf config.ini -u $JENKINS_USER -p $JENKINS_PASSWORD update jobs.yaml
Note - to use the jenkins-jobs command, make sure you install the jenkins-job-builder python package.
This package has a lot of features, like creating (free-style, pipeline, multibranch), updating, deleting, and validating Jenkins job configurations. It supports templates - meaning that with one generic template you can build any number of similar jobs, dynamically generate parameters, etc.
I have the following Jenkins DSL file:
if (params["BUILD_SNAPSHOT"] == "true") {
    parallel(
        {
            build("company-main-build-snapshot")
        },
        {
            build("1-company-worker-build-snaphsot", WORKER_NAME: "sharding-worker")
        }
    )
}

parallel (
    {
        build("company-deployment-info",
            API_KEY: "aaaaa5dd4cd58b94215f9cddd4441c391b4ddde226ede98",
            APP: "company-Staging-App")
    },
    {
        build("company-salt-role-deploy",
            ENV: "staging",
            ROLE: "app")
    },
    {
        build("company-deployment-info",
            API_KEY: "aaaaa5dd4cd58b94215f9cddd4441c391b4ddde226ede98",
            APP: "company-Staging-Shardwork")
    },
    {
        build("company-salt-workers-deploy",
            ENVIRONMENT: "staging",
            WORKER_TYPE: "shardwork")
    }
)

if (params["REST_TEST"] == "true") {
    build("company_STAGING_python_rest_test")
}
My task is to convert/rewrite this workflow file's content into a Jenkins pipeline Jenkinsfile.
I have some example files for reference but I'm having a hard time understanding how I should even begin...
Can anyone please shed some light on this subject?
First, have a good look at the Jenkins pipeline documentation; it is a great start and provides a whole bunch of information, such as build parameters usage or parallel steps.
Here are a few more hints for you to explore:
Parameters
Just use the parameter name as a variable, such as:
if (BUILD_SNAPSHOT) {
...
}
Call other jobs
You can also use the build step, such as:
build job: '1-company-worker-build-snaphsot', parameters: [stringParam(name: 'WORKER_NAME', value: "sharding-worker")]
Use functions
Instead of calling downstream jobs using build steps each time, you might want to consider using pipeline functions from another Groovy script, either from your current project or even from an external, checked out Groovy script.
As an example, you could replace your second job call from:
build("1-company-worker-build-snaphsot", WORKER_NAME: "sharding-worker")
to:
git 'http://urlToYourGit/projectContainingYourScript'
pipeline = load 'global-functions.groovy'
pipeline.buildSnapshot("sharding-worker")
...of course, the init phase (Git checkout and pipeline loading) is only needed once before you can call all of your external script's functions.
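For reference, the loaded script itself might look roughly like this - a sketch, where the function bodies are my own assumptions and the trailing return this is what makes the loaded functions callable:

// global-functions.groovy -- sketch of a loadable helper script
def buildMainSnapshot() {
    build job: 'company-main-build-snapshot'
}

def buildWorkerSnapshot(String workerName) {
    build job: '1-company-worker-build-snaphsot',
          parameters: [string(name: 'WORKER_NAME', value: workerName)]
}

// 'load' needs the script to return itself
return this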
In short
To sum it up a little bit, your code could be converted to something along these lines :
node {
    git 'http://urlToYourGit/projectContainingYourScript'
    pipeline = load 'global-functions.groovy'

    if (BUILD_SNAPSHOT) {
        parallel (
            phase1: { pipeline.buildMainSnapshot() },
            phase2: { pipeline.buildWorkerSnapshot("sharding-worker") }
        )
    }

    parallel (
        phase1: { pipeline.phase1(params...) },
        phase2: { pipeline.phase2(params...) },
        phase3: { pipeline.phase3(params...) },
        phase4: { pipeline.phase4(params...) }
    )

    if (REST_TEST) {
        pipeline.finalStep()
    }
}
I have a Jenkins Pipeline job that, for part of the build, uses a node with a lot of downtime. I'd like this step performed if the node is online and skipped without failing the build if the node is offline.
This is related, but different from the problem of skipping parts of a Matrix Project.
I tried to programmatically check if a node is online like so.
jenkins.model.Nodes.getNode('my-node').toComputer().isOnline()
This runs up against the Jenkins security sandbox:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: unclassified method java.lang.Class getNode java.lang.String
I tried setting a timeout that will be tripped if the node is offline.
try {
    timeout(time: 10, unit: 'MINUTES') {
        node('my-node') {
            // Do optional step
        }
    }
} catch (e) {
    echo 'Time out on optional step. Node down?'
}
This has a major downside: I have to know the longest time the step could take, and then wait even longer when the node is down. I tried working around that with a "canary" step:
try {
    timeout(time: 1, unit: 'SECONDS') {
        node('my-node') {
            echo 'Node is up. Performing optional step.'
        }
    }
    node('my-node') {
        echo 'This is an optional step.'
    }
} catch (e) {
    echo 'Time out on optional step. Node down?'
}
This skips the step if the node is up but busy with another job. It is the best solution I have come up with so far. Is there a way to check whether the node is online without using a timeout?
This should work:
Jenkins.instance.getNode('my-node').toComputer().isOnline()
see http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html
There is a pipeline call for this.
nodesByLabel 'my-node'
It returns [] if no node is online, and an ArrayList of the online instances otherwise.
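In a scripted pipeline it can be used as a simple guard, roughly like this (a sketch; nodesByLabel is provided by the Pipeline Utility Steps plugin):

// only run the optional step when at least one 'my-node' agent is online
if (nodesByLabel('my-node').size() > 0) {
    node('my-node') {
        echo 'Node is up. Performing optional step.'
    }
} else {
    echo 'No online node with label my-node. Skipping optional step.'
}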
I simply did this:
pipeline {
    agent none
    environment { AGENT_NODE = "somenode" }
    stages {
        stage('Offline Node') {
            when {
                beforeAgent true
                expression {
                    return nodesByLabel(env.AGENT_NODE).size() > 0
                }
            }
            agent {
                label "${env.AGENT_NODE}"
            }
            steps {
                ...
            }
        }
    }
}
What is the best way that I can detect if a parameter in a parameterized build exists or not?
The closest solution I found was to do this in groovy:
node {
    groovy.lang.Binding myBinding = getBinding()
    boolean mybool = myBinding.hasVariable("STRING_PARAM1")
    echo mybool.toString()
    if (mybool) {
        echo STRING_PARAM1
        echo getProperty("STRING_PARAM1")
    } else {
        echo "STRING_PARAM1 is not defined"
    }

    mybool = myBinding.hasVariable("DID_NOT_DEFINE_THIS")
    if (mybool) {
        echo DID_NOT_DEFINE_THIS
        echo getProperty("DID_NOT_DEFINE_THIS")
    } else {
        echo "DID_NOT_DEFINE_THIS is not defined"
    }
}
Is using getBinding() the proper API to do this, or is there a better way?
You can use try-catch to check for the existence of a parameter:
try {
    echo TEST1
    echo 'TEST1 is defined'
} catch (err) {
    echo 'TEST1 is not defined'
}
When you are using Pipelines, you have access to the params object, which is a Java map, so you can use its containsKey method, i.e.:
if (params.containsKey("STRING_PARAM1")) {
    echo "STRING_PARAM1 exists as parameter with value ${STRING_PARAM1}"
} else {
    echo "STRING_PARAM1 is not defined"
}
When you're in sandbox mode (or running from SCM), you're not allowed to use getBinding(). At least, that's what I've run into so far.
What I've used so far is the following method: in the workflow file, at the top, I insert the following:
properties([[$class: 'ParametersDefinitionProperty', parameterDefinitions: [[$class: 'StringParameterDefinition', defaultValue: 'default_value', description: '', name: 'your_parameter']]]])
This way your parameter will have a default value, which will be overridden when it is supplied as a build parameter.
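On newer Jenkins versions the same declaration can usually be written with the shorter symbol syntax, equivalent in intent to the $class form above:

properties([
    parameters([
        string(name: 'your_parameter', defaultValue: 'default_value', description: '')
    ])
])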
You could use the params variable in the newer versions of Jenkins.
Here is how I read the PLATFORM parameter of a parametrized build. It is a String parameter.
def platform = params?.PLATFORM?.trim()
stage("checkPlatform") {
    if (platform) {
        echo "Going to build for platform: ${platform}"
        // ...
    } else {
        echo "No platform given. Cancelling build"
        error("No platform given")
    }
}
stage("...") {
    ///...
}
There is a tutorial here: https://st-g.de/2016/12/parametrized-jenkins-pipelines
I wrote "newer versions of Jenkins" above. Here is the definition from that tutorial:
As of workflow-cps version 2.18, a new params global variable provides sane access also on the first run (by returning specified default values).