Scripted pipeline: wrap stage - jenkins

I would like to be able to wrap a 'stage' in Jenkins, so I can execute custom code at the start and end of a stage, so something like:
myStage('foo') {
}
I thought I could do this by using metaClass:
//Wrap stages to automatically trace
def originalMethod = this.metaClass.getMetaMethod("stage", null)
this.metaClass.myStage = { args ->
    println "Beginning of stage"
    println "Args: " + args
    def result = originalMethod.invoke(delegate, args)
    println "End of stage"
    return result
}
But it appears the Groovy script itself is a Binding, which doesn't have a metaClass:
groovy.lang.MissingPropertyException: No such property: metaClass for class: groovy.lang.Binding
I'm still learning how Groovy and Jenkins Pipeline work, so perhaps I'm just missing something.

I am not familiar with the metaclass concept but I think that a simple solution to your problem is to define a wrapped stage as a function.
Here's an example of how you'd define such a function:
def wrappedStage(name, Closure closure) {
    stage(name) {
        echo "Beginning of stage"
        def result = closure.call()
        echo "End of stage"
        return result
    }
}
and this is how you would call it:
wrappedStage('myStage') {
    echo 'hi'
}
The return value of wrappedStage would only make sense when the body of your stage actually returns something, for example:
If you call another job, eg:
wrappedStage('myStage') {
    build job: 'myJob'
}
you will get back an org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper, which you can use to access information about the job you ran, such as its result, variables, etc. (see the sketch just after these examples).
If you print something to the console, eg:
wrappedStage('myStage') {
    echo 'hi'
}
you will get back null.
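Building on the first example, a minimal sketch (the downstream job name is hypothetical) of reading the triggered job's status through that RunWrapper could look like this:
def run = wrappedStage('myStage') {
    build job: 'myJob'   // hypothetical downstream job
}
echo "Downstream result: ${run.result}"        // e.g. SUCCESS
echo "Downstream build number: ${run.number}"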
Note that in my example I am not printing args, because as I understand stage, it only takes two arguments: the stage name and the closure it should run. The name of the stage will already be printed in the log, and I don't know how much value you'd get from printing the code you're about to execute, but if that's something you want to do, take a look at this.
If you have a more specific case in mind for what you'd want to wrap, you can add more params to the wrapper and print all the information you want through those extra parameters.
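As a rough illustration (the extra parameters here are made up), such a wrapper could look like this:
def wrappedStage(name, Map extraInfo, Closure closure) {
    stage(name) {
        echo "Beginning of stage ${name}, extra info: ${extraInfo}"
        def result = closure.call()
        echo "End of stage ${name}"
        return result
    }
}

wrappedStage('myStage', [owner: 'team-a', ticket: 'ABC-123']) {
    echo 'hi'
}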

Related

Using env variables in a Jenkinsfile function?

I'm triggering one Jenkins job from the other, all via Jenkinsfiles.
stage("Trigger another job")
{
build([job:"Job2", wait:true, propagate:true, parameters: [string(name:'branch_my',value:"${env.ghprbActualCommit}")]])
}
Note that parameter branch_my is sent to Job2. However, the pipeline Job2 needs to work even when branch_my is NOT defined, for example when it is triggered manually.
Jenkinsfile for Job2 looks like:
pipeline
{
    // ...
    steps
    {
        customBranches()
        // etc...
    }
}

def customBranches()
{
    if ( env.branch_my != null)
    {
        sh "switch_to ${env.branch_my}"
    }
}
However, the customBranches() if statement never evaluates to true. When I do
sh "echo 'Env branch_my is: ${env.branch_my} '"
I get Env branch_my is: some_value, which is OK, so the if statement should evaluate to true - but it does not.
I tried adding ${} like so: if ( ${env.branch_my} != null), but that failed completely: No such DSL method "$" found.
What's wrong with my customBranches()?
The problem isn't the Jenkinsfile syntax, but the Jenkins job configuration: the job must be marked as parameterized in the GUI and a string parameter branch_my needs to be defined.
Note, the parameters can be added via the Jenkinsfile itself:
parameters { string(name: 'branch_my', defaultValue: 'master', description: '') }
However, this just adds the parameter to the GUI, so you end up with the same thing.
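A minimal sketch of how Job2's Jenkinsfile could look with the parameter declared and the guarded call (stage name is illustrative):
pipeline {
    agent any
    parameters {
        string(name: 'branch_my', defaultValue: 'master', description: '')
    }
    stages {
        stage('Prepare') {
            steps {
                script { customBranches() }
            }
        }
    }
}

def customBranches() {
    // works for manual runs too, once the parameter (with its default) exists
    if (env.branch_my != null) {
        sh "switch_to ${env.branch_my}"
    }
}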

Jenkins pipeline - how to load a Jenkinsfile without first calling node()?

I have a somewhat unique setup where I need to be able to dynamically load Jenkinsfiles that live outside of the src I'm building. The Jenkinsfiles themselves usually call node() and then some build steps. This causes multiple executors to be eaten up unnecessarily because I need to have already called node() in order to use the load step to run a Jenkinsfile, or to execute the groovy if I read the Jenkinsfile as a string and execute it.
What I have in the job UI today:
@Library(value='myGlobalLib@head', changelog=false) _
node {
    load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
The Jenkinsfile that's loaded usually also calls node(). For example:
node('agent-type-foo') {
    someBuildFlavor {
        buildProperty = "some value unique to this build"
        someConfig = ["VALUE1", "VALUE2", "VALUE3"]
        runTestTarget = true
    }
}
This causes 2 executors to be consumed during the pipeline run. Ideally, I load the Jenkinsfiles without first calling node(), but whenever I try, I get an error message stating:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
Is there any way to load a Jenkinsfile or execute Groovy without first having hudson.FilePath context? I can't seem to find anything in the docs. I'm at the point where I'm going to preprocess the Jenkinsfiles to remove their initial call to node(), call node() with the value the Jenkinsfile was using, and then load the rest of the file, but that's somewhat too brittle for me to be happy with.
When using the load step, Jenkins evaluates the file. You can wrap your Jenkinsfile's logic into a function (named run() in my example) so that it is loaded but not run automatically.
def run() {
    node('agent-type-foo') {
        someBuildFlavor {
            buildProperty = "some value unique to this build"
            someConfig = ["VALUE1", "VALUE2", "VALUE3"]
            runTestTarget = true
        }
    }
}

// This return statement at the end of the Jenkinsfile is important
return this
Call it from your job script like this:
def jenkinsfile
node {
    jenkinsfile = load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
jenkinsfile.run()
This way there are no nested node blocks, because the first one is closed before the run() function is called.

custom jenkins declarative pipeline dsl with named arguments

I've been trying for a while now to start working towards moving our freestyle projects over to Pipeline. To do so, I feel like it would be best to build up a shared library, since most of our builds are the same. I read through this blog post from Jenkins. I came up with the following:
// vars/buildGitWebProject.groovy
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
    pipeline {
        agent {
            node {
                label 'master'
                customWorkspace "c:\\jenkins_repos\\${args.repositoryName}\\${args.branchName}"
            }
        }
        environment {
            REPOSITORY_NAME = "${args.repositoryName}"
            BRANCH_NAME = "${args.branchName}"
            SOLUTION_NAME = "${args.solutionName}"
        }
        options {
            buildDiscarder(logRotator(numToKeepStr: '3'))
            skipStagesAfterUnstable()
            timestamps()
        }
        stages {
            stage("checkout") {
                steps {
                    script {
                        assert REPOSITORY_NAME != null : "repositoryName is null. Please include it in configuration."
                        assert BRANCH_NAME != null : "branchName is null. Please include it in configuration."
                        assert SOLUTION_NAME != null : "solutionName is null. Please include it in configuration."
                    }
                    echo "building with ${REPOSITORY_NAME}"
                    echo "building with ${BRANCH_NAME}"
                    echo "building with ${SOLUTION_NAME}"
                    checkoutFromGitWeb(args)
                }
            }
            stage('build and test') {
                steps {
                    executeRake(
                        "set_assembly_to_current_version",
                        "build_solution[$args.solutionName, Release, Any CPU]",
                        "copy_to_deployment_folder",
                        "execute_dev_dropkick"
                    )
                }
            }
        }
        post {
            always {
                sendEmail(args)
            }
        }
    }
}
In my pipeline project I configured the Pipeline to use "Pipeline script", and the script is as follows:
buildGitWebProject {
    repositoryName:'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo='testuser@domain.com'
}
I've tried with and without the environment block, but the result ends up being the same: the value is 'null' for each of those arguments. Oddly enough, the script portion of the code doesn't make the build fail either... so I'm not sure what's wrong with that. Also, the echo parts show null as well. What am I doing wrong?
Your Closure body is not behaving the way you expect/believe it should.
At the beginning of your method you have:
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
Your call body is:
buildGitWebProject {
    repositoryName:'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo='testuser@domain.com'
}
Let's take a stab at debugging this.
If you add a println(args) after the body() in your call(body) method you will see something like this:
[emailTo:testuser@domain.com]
But, only one of the values got set. What is going on?
There are a few things to understand here:
What does setting a delegate of a Closure do?
Why does repositoryName:'my-git-repo' not do anything?
Why does emailTo='testuser#domain.com' set the property in the map?
What does setting a delegate of a Closure do?
This one is mostly straightforward, but I think it helps to understand. Closure is powerful and is the Swiss Army knife of Groovy. The delegate essentially sets what the this is in the body of the Closure. You are also using the resolveStrategy of Closure.DELEGATE_FIRST, so methods and properties from the delegate are checked first, and then from the enclosing scope (owner) - see the Javadoc for an in-depth explanation. If you call methods like size(), put(...), entrySet(), etc., they are all first called on the delegate. The same is true for property access.
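A tiny plain-Groovy sketch (outside Jenkins) of method calls being resolved against the delegate first:
def args = [:]
def body = {
    put('greeting', 'hello')   // looked up on the delegate (the map) before the owner
}
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = args
body()
assert args == [greeting: 'hello']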
Why does repositoryName:'my-git-repo' not do anything?
This may appear to be a Groovy map literal, but it is not. These are actually labeled statements. If you surround it with square brackets instead, like [repositoryName:'my-git-repo'], then it would be a map literal. But that is all you would be doing there: creating a map literal that is never used. We want to make sure these values are actually consumed in the Closure.
Why does emailTo='testuser#domain.com' set the property in the map?
This is using the map property notation feature of Groovy. As mentioned earlier, you have set the delegate of the Closure to def args = [:], which is a Map. You also set the resolveStrategy to Closure.DELEGATE_FIRST. This makes your emailTo='testuser@domain.com' resolve against args, which is why the emailTo key is set to the value. It is equivalent to calling args.emailTo='testuser@domain.com'.
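Putting the last two points together in a small plain-Groovy sketch:
def args = [:]
def body = {
    repositoryName:'my-git-repo'      // labeled statement: the string is evaluated and discarded
    emailTo = 'testuser@domain.com'   // property-style assignment: ends up as args.emailTo = '...'
}
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = args
body()
assert args == [emailTo: 'testuser@domain.com']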
So, how do you fix this?
If you want to keep your Closure syntax approach, you could change the body of your call to anything that essentially stores values in the delegated args map:
buildGitWebProject {
    repositoryName = 'my-git-repo'
    branchName = 'qa'
    solutionName = 'my_csharp_solution.sln'
    emailTo = 'testuser@domain.com'
}

buildGitWebProject {
    put('repositoryName', 'my-git-repo')
    put('branchName', 'qa')
    put('solutionName', 'my_csharp_solution.sln')
    put('emailTo', 'testuser@domain.com')
}

buildGitWebProject {
    delegate.repositoryName = 'my-git-repo'
    delegate.branchName = 'qa'
    delegate.solutionName = 'my_csharp_solution.sln'
    delegate.emailTo = 'testuser@domain.com'
}

buildGitWebProject {
    // example of a Map literal where the square brackets are not needed
    putAll(
        repositoryName: 'my-git-repo',
        branchName: 'qa',
        solutionName: 'my_csharp_solution.sln',
        emailTo: 'testuser@domain.com'
    )
}
Another way would be to have your call take in the Map as an argument and remove your Closure.
def call(Map args) {
    // no more args and delegates needed right now
}
buildGitWebProject(
    repositoryName: 'my-git-repo',
    branchName: 'qa',
    solutionName: 'my_csharp_solution.sln',
    emailTo: 'testuser@domain.com'
)
There are also some other ways you could model your API; it will depend on the UX you want to provide.
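For example, one hypothetical variant keeps the Map signature but merges in defaults for optional values:
def call(Map args) {
    // hypothetical defaults; entries supplied by the caller win over these
    def defaults = [branchName: 'master', emailTo: 'builds@example.com']
    args = defaults + args
    // ... the rest of the pipeline then uses args.repositoryName, args.branchName, etc.
}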
Side note around declarative pipelines in shared library code:
It's worth keeping in mind the limitations of declarative pipelines in shared libraries. It looks like you are already doing it in vars, but I'm just adding it here for completeness. At the very end of the documentation it is stated:
Only entire pipelines can be defined in shared libraries as of this time. This can only be done in vars/*.groovy, and only in a call method. Only one Declarative Pipeline can be executed in a single build, and if you attempt to execute a second one, your build will fail as a result.

Dynamic variable in Jenkins pipeline with groovy method variable

I have a Groovy Jenkinsfile for a declarative pipeline and two Jenkins variables, named OCP_TOKEN_VALUE_ONE and OCP_TOKEN_VALUE_TWO, with their corresponding values. The problem comes when I try to pass a method parameter and use it in an sh command.
I have the following code:
private def deployToOpenShift(projectProps, environment, openshiftNamespaceGroupToken) {
    sh """/opt/ose/oc login ${OCP_URL} --token=${openshiftNamespaceGroupToken} --namespace=${projectProps.namespace}-${environment}"""
}
The problem is that the method parameter openshiftNamespaceGroupToken holds the name of the variable that has been set in Jenkins, not its value. It needs to be dynamic, but Jenkins doesn't resolve the Jenkins variable's value, just the String that was passed in; that is, the result is:
--token=OCP_TOKEN_VALUE_ONE
If I put in the code
private def deployToOpenShift(projectProps, environment, openshiftNamespaceGroupToken) {
    sh """/opt/ose/oc login ${OCP_URL} --token=${OCP_TOKEN_VALUE_ONE} --namespace=${projectProps.namespace}-${environment}"""
}
works perfectly, but it is not dynamic, which is the whole point of the method parameter. I have tried with the """ stuff as you can see, but it's not working.
Any extra idea?
Edited with the code that calls the method:
...
projectProps = readProperties file: './gradle.properties'
openShiftTokenByGroup = 'OCP_TOKEN_' + projectProps.namespace.toUpperCase()
...
stage ('Deploy-Dev') {
    agent any
    steps {
        milestone ordinal: 10, label: "Deploy-Dev Milestone"
        deployToOpenShift(projectProps, 'dev', openShiftTokenByGroup)
    }
}
I have found two different ways to do that. One is using Groovy's evaluate, like this:
def openShiftTokenByGroup = 'OCP_TOKEN_' + projectProps.namespace.toUpperCase()
evaluate("${openShiftTokenByGroup}") //This will resolve the configured value in Jenkins
The second one is the same approach but in the sh command with eval escaping the $ character:
sh """
eval \$$openShiftTokenByGroup
echo "Token: $openShiftTokenByGroup
"""
This will do the magic too and you'll get the Jenkins configured value.

println in "call" method of "vars/foo.groovy" works, but not in method in class

I'm iterating through building a Jenkins pipeline shared library, so my Jenkinsfile is a little cleaner.
I'm using the following page for guidance: https://jenkins.io/doc/book/pipeline/shared-libraries/ .
I first defined several methods in individual files, like "vars/methodName.groovy", with a "call()" method in the code. This works ok, and I particularly note that "println" calls in these methods are seen in the Jenkins console output.
I then decided I wanted to save some state between method calls, so I added a new file in "vars" named "uslutils.groovy" that begins like this (minus some required imports):
class uslutils implements Serializable {
I defined some "with<property>" methods that set a property and return this.
I then wrote a "public String toString()" method in "uslutils" that looks something like this:
public String toString() {
    println "Inside uslutils.toString()."
    return "[currentBuild[${currentBuild}] mechIdCredentials[${mechIdCredentials}] " +
        "baseStashURL[${baseStashURL}] jobName[${jobName}] codeBranch[${codeBranch}] " +
        "buildURL[${buildURL}] pullRequestURL[${pullRequestURL}] qBotUserID[${qBotUserID}] " +
        "qBotPassword[${qBotPassword}]]"
}
Then, inside my Jenkinsfile, after setting the uslutils properties, I added a line like this:
println "uslutils[${uslutils}]"
Then I ran my job, and the curious thing is that I didn't see the "uslutils" line, or the Inside uslutils.toString(). line. However, I did modify the one functional method I've added to "uslutils" so far (besides the "with" methods), which returns a string value, by appending an "x" to the value. My Jenkinsfile was printing the result from that, and it did show the additional "x".
Note that no errors occurred here; it just seemed to omit the println output from within the shared library class and, even stranger, to omit the output from the println call in the Jenkinsfile that was implicitly calling the uslutils.toString() method. Note that the println calls in the original call() methods WERE seen in the console output.
Any ideas what might be happening here?
Update:
I now have the following lines in my Jenkinsfile (among others):
println "uslutils.qBotPassword[${uslutils.qBotPassword}]"
println "uslutils[${uslutils}]"
println "uslutils.toString()[${uslutils.toString()}]"
And to repeat, here is the "uslutils.toString()" method:
public String toString() {
    println "Inside uslutils.toString()."
    return "[currentBuild[${currentBuild}] mechIdCredentials[${mechIdCredentials}] " +
        "baseStashURL[${baseStashURL}] jobName[${jobName}] codeBranch[${codeBranch}] " +
        "codeURL[${codeURL}] buildURL[${buildURL}] pullRequestURL[${pullRequestURL}] qBotUserID[${qBotUserID}] " +
        "qBotPassword[${qBotPassword}]]"
}
Here are corresponding lines of output from the build:
[Pipeline] echo
uslutils.qBotPassword[...]
[Pipeline] echo
uslutils.toString()[[currentBuild[org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper@41fb2c94] mechIdCredentials[121447d5-0fe4-470d-b785-6ce88225ef01] baseStashURL[https://...] jobName[unified-service-layer-build-pipeline] codeBranch[master] codeURL[ssh://git@...] buildURL[http://...] pullRequestURL[] qBotUserID[...] qBotPassword[...]]
As you can see, the line attempting to print "uslutils[${uslutils}]" was simply ignored. The line attempting to print "uslutils.toString()[${uslutils.toString()}]" did render, but also note that the Inside uslutils.toString(). did not render.
I'm still looking for an explanation for this behavior, but perhaps this summarizes it more succinctly.
I did some digging and found this issue: https://issues.jenkins-ci.org/browse/JENKINS-41953. Basically, in a normal pipeline script println is aliased to the echo step, but when you're in a class, i.e. outside of the CPS pipeline script, the echo step isn't available and the println is ignored (since, as I understand it, there is no logger available).
What you can do is propagate the script environment into your class methods using a variable and call echo through that variable (I found the solution in this thread). Like this:
class A {
    Script script

    public void a() {
        script.echo("Hello")
    }
}

def a = new A(script: this)
echo "Calling A.a()"
a.a()
outputs:
Started by user jon
[Pipeline] echo
Calling A.a()
[Pipeline] echo
Hello
[Pipeline] End of Pipeline
Finished: SUCCESS
Which is what we want. For comparison, here is without propagation:
class A {
    public void a() {
        println "Hello"
    }
}

def a = new A()
echo "Calling A.a()"
a.a()
Gives:
Started by user jon
[Pipeline] echo
Calling A.a()
[Pipeline] End of Pipeline
Finished: SUCCESS
