Calling a Scriptler script within another Scriptler script - Jenkins

I'm using the Scriptler plugin for Jenkins, and am having a hard time finding any information on how to share the Scriptler scripts I'm writing with other scripts. I've tried using the ScriptHelper from the Scriptler API, but ran into issues when passing arguments to the script.
Has anyone else come across this and solved it? Is there a standard way (without calling the Jenkins REST API) to execute one script from another?
More Details
We have a full-build MultiJob that contains many phase jobs, each with its own artifacts that have a three-day time-to-live. When this full build job is promoted, a Scriptler script runs against it, pulling each phase job's artifacts into the full build job. By doing so, we can keep the full build alive forever without changing the lifetime of the artifacts for each phase job (essentially 'keep this build forever' on the full build, ignoring the lifetimes set in the phase jobs).
We also want to pull these artifacts into a deploy job. The idea is that we can point a deploy job at a full build, and it will pull out the artifacts we specify. If the full build is promoted, the script will pull the artifacts directly from the full build job; otherwise, it will pull them from the individual phase jobs. Since we have two scripts that work with MultiJobs, I would like to share this code between them.
The script would take a MultiJob name and build number, and return the individual phase job's build numbers, build statuses, and artifact information.

This is possible using Groovy's own capabilities, though I don't know whether Scriptler supports it directly. If you are running on the master node, you can use Groovy's evaluate. Scriptler scripts are stored as Groovy files on the file system of the master node, in the $JENKINS_HOME/scriptler/scripts directory. The Scriptler ID is the file name within that directory.
Here is a very simple example using two files. The first is the parameterized function, findByScm.groovy, which finds jobs using a given source-control type. The second script, findByGitScm.groovy, evaluates the first for Git SCMs and prints the results.
findByScm.groovy
import jenkins.model.*

jenkins = Jenkins.instance
// Notice that myScmType is deliberately not defined in this script;
// the calling script must bind it before evaluating this file
scmJobs = jenkins.items.findAll { job ->
    job.scm != null && job.scm.type == myScmType
}
findByGitScm.groovy
// This is supplying the argument to findByScm.groovy
myScmType = 'hudson.plugins.git.GitSCM'
// Now we are evaluating the script
evaluate(new File("${System.getProperty('JENKINS_HOME')}/scriptler/scripts/findByScm.groovy"))
// scmJobs is a variable which was introduced in findByScm.groovy
scmJobs.each { println it }
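
If you would rather pass arguments explicitly instead of relying on shared script-level variables, the same evaluation can be driven through a GroovyShell with its own Binding. A minimal sketch of that approach (the variable names are illustrative):

import groovy.lang.Binding
import groovy.lang.GroovyShell

// Bind the argument explicitly rather than leaking it through script scope
def binding = new Binding([myScmType: 'hudson.plugins.git.GitSCM'])
def shell = new GroovyShell(binding)
shell.evaluate(new File("${System.getProperty('JENKINS_HOME')}/scriptler/scripts/findByScm.groovy"))

// Variables the evaluated script sets are readable from the binding
binding.getVariable('scmJobs').each { println it }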

Related

Jenkins CHANGES_SINCE_LAST_SUCCESS plugin variable is empty

I have installed the Changes Since Last Success plugin for Jenkins jobs. Inside the build step of a Jenkins job, I am trying to echo the value of the CHANGES_SINCE_LAST_SUCCESS variable into a file in my job's workspace, but unfortunately the variable has no value.
You need the Email-ext plugin instead.
The Changes Since Last Success plugin has nothing to do with the CHANGES_SINCE_LAST_SUCCESS variable. From its wiki page:
https://wiki.jenkins.io/display/JENKINS/Changes+Since+Last+Success+Plugin
This plugin adds a build action to aggregate changes from all previous builds to the last successful one. The primary goal is to generate a changelog to be used for continuous delivery, as an aggregate for all changes since the last deployable artifact.
Additionally, this plugin can be used to generate a changelog for an arbitrary range of builds:...
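Back to the original problem: CHANGES_SINCE_LAST_SUCCESS is an Email-ext content token, so it is only expanded inside Email-ext content. As a minimal sketch, assuming the Email-ext plugin is installed (the recipient address is illustrative), a Pipeline emailext step could use it like this:

// Single quotes matter: the token is expanded by Email-ext, not by Groovy
emailext(
    to: 'team@example.com',
    subject: 'Changes since last successful build',
    body: '${CHANGES_SINCE_LAST_SUCCESS}'
)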

JMeter & Jenkins - passing jmeter parameters to downstream build

The Setup - A Jenkins job using the Jenkins parameters testApp and testEnv. The Windows batch execution step looks like this:
C:\jmeter\apache-jmeter-3.2\bin\jmeter.bat -n -t C:\JMeter\Scripts\API_scripts\%testApp%.jmx -Jtestenv=%testEnv% -JtestApp=%testApp% -JtestBrowser=NA -l C:\AUTO_Results\jtl\%testApp%_%testEnv%.jtl
Post-build Actions
Console output (build log) parsing with a global rule, so that failures logged in the Jenkins console window mark the JMeter script as failed (discussed in Jenkins shows JMeter script failure even though the script actually passed).
Triggered parameterized build - this is a separate JMeter script that updates a wiki page with either PASS/FAIL and uploads the JMeter report.
The Issue - How do I get the downstream triggered build to use the parameters from the upstream job? I set 'Parameters = Current build parameters', but it's not applying those. Also, I won't know the value of the testResult parameter until the upstream build finishes. I tried adding %testResult%=PASS to the 'Predefined parameters' box.
As per Parameterized Trigger Plugin page:
The parameters section can contain a combination of one or more of the following:
a set of predefined properties
properties from a properties file read from the workspace of the triggering build
the parameters of the current build
Subversion revision: makes sure the triggered projects are built with the same revision(s) of the triggering build. You still have to make sure those projects are actually configured to checkout the right Subversion URLs.
Restrict matrix execution to a subset: allows you to specify the same combination filter expression as you use in the matrix project configuration and further restricts the subset of the downstream matrix builds to be run.
So you basically need to copy over the parameters you would like to have in the "downstream" job from the current one.
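For example, the 'Predefined parameters' box of the Parameterized Trigger step can forward the upstream values explicitly (a sketch using the parameter names from the question):

testApp=${testApp}
testEnv=${testEnv}

A value like testResult that is only known once the upstream build has run would instead have to be written to a properties file during the build and passed via the 'Parameters from properties file' option.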
As a workaround for current Performance plugin limitations, you can consider running JMeter using the Taurus tool as a wrapper; it has a flexible and powerful pass/fail criteria subsystem which will return a non-zero exit code to Jenkins, triggering a build failure, in case of issues in the test. If everything goes well, the Taurus exit code will be 0, which Jenkins considers successful. Check out the How to Run Taurus with the Jenkins Performance Plugin article for more details.
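A minimal Taurus configuration along these lines might look like the following sketch (the script path and criteria are illustrative):

execution:
- executor: jmeter
  scenario:
    script: C:\JMeter\Scripts\API_scripts\myApp.jmx
reporting:
- module: passfail
  criteria:
  - failures>0% for 10s, stop as failed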

Jenkins how to create pipeline manual step

Prior to Jenkins 2, I was using the Build Pipeline Plugin to build and manually deploy the application to a server.
Old configuration:
That works great, but I want to use the new Jenkins Pipeline, generated from a Groovy script (Jenkinsfile), to create the manual step.
So far I have come up with the input Jenkins step.
The Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from a repository
    stage 'Build'
    // Run the build
}
stage 'deployment'
input 'Do you approve deployment?'
node {
    // deploy things
}
But this waits for user input, noting that the build is not completed. I could add a timeout to the input, but that won't allow me to pick/trigger a build and deploy it later on.
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins Pipeline as before with the Build Pipeline Plugin?
This is a huge gap in the Jenkins Pipeline capabilities, IMO. It is definitely hard to provide, due to the fact that a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as 'workspace.tar.gz'), and then have another pipeline copy the artifact and untar it into the new workspace. This allows the second pipeline to pick up where the previous one left off. Of course, there is no way to guarantee that the second pipeline is not executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try; a sketch of the idea follows.
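A minimal sketch of that archive/restore idea, assuming the Copy Artifact plugin is available (the job name 'build-pipeline' is illustrative):

// First pipeline: bundle the workspace and archive it
node {
    // ... checkout and build ...
    sh 'tar -czf workspace.tar.gz --exclude=workspace.tar.gz .'
    archiveArtifacts artifacts: 'workspace.tar.gz'
}

// Second pipeline: restore it and pick up where the first left off
node {
    step([$class: 'CopyArtifact', projectName: 'build-pipeline', filter: 'workspace.tar.gz'])
    sh 'tar -xzf workspace.tar.gz'
    // ... deploy things ...
}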
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md

Up to what extent can I use environment variables in Jenkins jobs?

I have a lot of jobs that contain very similar configuration values.
Then I had the idea to use the EnvInject Plugin to read a generated Properties file, which contains most of my configuration.
However, I don't know to what extent I can use environment variables in a Jenkins job configuration.
For instance, in a Maven job, I can specify the Root POM to be ${JOB_NAME}/pom.xml. Jenkins will tell me it can't find the file, but the job actually works.
Configuring other parts (like the number of builds to keep) fails miserably (the variable is simply removed).
So does anyone have experience in using environment variables to cut down the copy/paste configuration in Jenkins?
If your objective is to cut down on copy and paste, then the Job DSL plugin might suit your needs better.
You can build a template job (via the using statement), then use that to build your main jobs.
Modified from the tutorial:
job {
    name 'DSL-Tutorial-1-Test'
    using 'seed-job'
    scm {
        git('git://github.com/jgritman/aws-sdk-test.git')
    }
    triggers {
        scm('*/15 * * * *')
    }
    steps {
        maven('-e clean test')
    }
}
In addition, if you need to change all the jobs, you can change your template and rebuild the main jobs.

Is it possible to run part of a job on the master and the other part on a slave?

I'm new to Jenkins. I have a requirement where I need to run part of a job on the Master node and the rest on a slave node.
I tried searching on forums but couldn't find anything related to that. Is it possible to do this?
If not, I'll have to break it into two separate jobs.
EDIT
Basically, I have a job that checks out source code from SVN, then compiles and builds jar files. After that, it builds a Wise installer for this application. I'd like to do the source-code checkout and compilation on the master (Linux) and delegate the Wise installer setup to a Windows slave.
It's definitely easier to do this with two separate jobs; you can make the master job trigger the slave job (or vice versa).
If you publish the files that need to be bundled into the installer as build artifacts from the master build, you can pull them onto the slave via a Jenkins URL and create the installer. Use the "Archive artifacts" post build step in the master build to do this.
The Pipeline Plugin allows you to write jobs that run on multiple slave nodes. You don't even have to create other separate jobs in Jenkins - just write another node statement in the Pipeline script and that block will run on an assigned node. You can specify labels if you want to restrict the type of node it runs on.
For example, this Pipeline script will execute parts of it on two different nodes:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    bat "mytest.exe"    // use bat, not sh, on Windows nodes
}
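Note that each node gets its own workspace, so files do not move between the two blocks automatically. A minimal sketch of handing files from one node to the next with the Pipeline stash/unstash steps (the stash name and paths are illustrative):

node('linux') {
    // ... checkout and compile ...
    stash name: 'jars', includes: 'build/**'
}
node('windows') {
    unstash 'jars'    // restores build/** into this node's workspace
    // ... build the installer from the compiled files ...
}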
Some more information at the Pipeline plugin tutorial. (Note that it was previously called the Workflow Plugin.)
You can use the Multijob plugin, which adds the idea of a build phase, a build step that runs other jobs in parallel. You can still use the regular freestyle build and post-build options as well.
