What I'm trying to do
I have a script that looks something like this:
def doStuff() {
    println 'stuff done'
}
return this
I am loading this script in another script so that I have a Groovy Script object that I can call doStuff from. This is in a script, call it myscript.groovy, that looks like this:
Script doer = load('doStuff.groovy')
doer.doStuff()
I would like to be able to mock the Script object that is returned by load, stub doStuff, and assert that it is called. Ideally, something like the following (assume that load is already mocked):
given:
Script myscript = load('myscript.groovy')
Script mockDoer = Mock(Script)
when:
myscript.execute()
then:
1 * load('doStuff.groovy') >> mockDoer
1 * mockDoer.doStuff()
However, I am getting an NPE at the line:
doer.doStuff()
How can I mock the Script object in a way that I can make sure that the doStuff method is stubbed and called properly in my test?
Why I'm doing it this way
I know this is a bit of a weird use case. I figured I should give some context for why I am trying to do this, in case people want to suggest completely different ways of doing it that might not apply to what I am trying to do.
I recently started working on a project that uses some fairly complex Jenkins Pipeline scripts. In order to modularize the scripts to some degree, utility functions and pieces of different pipelines are contained in different scripts and loaded and executed similarly to how doStuff.groovy is above.
I am trying to make a small change to the scripts at the same time as introducing some testing using this library: https://github.com/lesfurets/JenkinsPipelineUnit
In one test in particular I want to mock a particular utility method and assert that it is called depending on parameters to the pipeline.
Because the scripts are currently untested and reasonably complex, because I am new to them, and because many different projects depend on them, I am reluctant to make any sweeping changes to how the code is structured or modularized.
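One possible sketch with JenkinsPipelineUnit, assuming a test class that extends BasePipelineTest (so helper and loadScript are available): since myscript.groovy declares Script doer, whatever the stubbed load returns must itself be a Script, so the sketch parses a tiny replacement script instead of using a Spock Mock. None of this is from the original post; it is only one way the stubbing could be shaped.
// Sketch only: stub the 'load' step so it hands back a minimal Script that records calls to doStuff.
def calls = []
def shell = new GroovyShell(new Binding([calls: calls]))
Script mockDoer = shell.parse('def doStuff() { calls << "doStuff" }; return this')
helper.registerAllowedMethod('load', [String]) { String path -> mockDoer }
def myscript = loadScript('myscript.groovy')
myscript.execute()            // the entry point used in the question
assert calls == ['doStuff']   // doStuff was invoked exactly once on the stub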
Related
It seems like it's really difficult to be able to store a bunch of variables for use in shared code in Jenkins/Groovy scripted pipelines. I've tried a bunch of methods and none of them seem to give the desired result.
This method looked the most promising, but the values all came back as null in the calling pipeline. Get Global Variables in jenkins pipeline.
My code is something like:
import org.blabla.JobHelper
println("env.NO_PROXY: -->${env.NO_PROXY}<--")
And in the JobHelper.groovy file, I've defined
package org.blabla.project
env.NO_PROXY = 'localhost,127.0.0.1,169.254.169.254'
The names have been changed a bit to protect the innocent, but you get the idea.
The script just prints null for the value.
Is there a simple way (or indeed any way) that I can pull in a bunch of variables from a shared library file? This feels like it should be a really simple exercise, but after spending many hours searching I'm none the wiser.
In general, env is only available once the pipeline has started, but Groovy scripts are resolved much earlier.
I'm using static class members as global variables. Applied to your code sample, it would look like this:
JobHelper.groovy
package org.blabla.project
// The class must be named like the file that contains it.
class JobHelper {
    static String getNO_PROXY() { 'localhost,127.0.0.1,169.254.169.254' }
}
Elsewhere:
import org.blabla.project.JobHelper
println("NO_PROXY: -->${JobHelper.NO_PROXY}<--")
Note that Groovy automatically generates properties from get*() and set*() methods, so we can use the short form instead of having to write JobHelper.getNO_PROXY().
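For illustration, both forms below resolve to the same generated getter:
assert JobHelper.NO_PROXY == JobHelper.getNO_PROXY()   // property access is translated to the getter call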
I am looking for a way to speed up our build pipeline.
The biggest impact would come from doing certain things only if they have not already been done.
So basically, I have already parameterized some optional stages, which works fine, but there are some things I'd like to skip if a previous execution was successful.
I have searched the docs, especially the section on when, but there was nothing that did the job.
So the question is: Is there something I can do in a declarative pipeline to always skip a stage, except when
It has never been run successfully before
The last time it ran was not successful (e.g. if I allow its execution to be forced by passing a parameter)
I thought about using the build number (e.g. only run it on the first execution of the pipeline), but this doesn't cut it for two reasons:
I'm using milestones to prevent multiple executions of the pipeline for a given branch/PR
If the first run fails, it would never be tried again
Oh, and I also thought about putting all of the logic in post -> regression and forcing the stage to fail on the first pipeline run, but this doesn't seem to be a good idea either.
One option, although not ideal at large scale, is to store the different states as global environment variables (Manage Jenkins -> Configure System -> Global properties -> Environment variables).
These variables are available to all jobs and can store values in a server-wide scope.
You can then update or set them from code using the following Groovy method:
@NonCPS
def updateGlobalEnvVariable(String name, String value) {
    def globalNodeProperties = Jenkins.getInstance().getGlobalNodeProperties()
    def envVarsNodePropertyList = globalNodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    if (envVarsNodePropertyList == null || envVarsNodePropertyList.size() == 0) {
        def envVarsNodePropertyClass = this.class.classLoader.loadClass('hudson.slaves.EnvironmentVariablesNodeProperty')
        globalNodeProperties.add(envVarsNodePropertyClass.newInstance())
        // re-read the list so the property that was just added is picked up below
        envVarsNodePropertyList = globalNodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
    }
    envVarsNodePropertyList.get(0).getEnvVars().put(name, value)
}
This function will create the variable if it does not exist, or update its value if it already exists. It is also best placed in a shared library, from which it will be available to all pipelines.
A nice advantage of this technique is that you can always 'reset' the different stages from the configuration page. However, if you need to store state for many stages, it can clutter the configuration page with a bit too much information.
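To sketch how this could look from a declarative pipeline: the stage, parameter and variable names below are made up, and updateGlobalEnvVariable is assumed to be exposed as a step by the shared library.
stage('Expensive step') {
    when {
        // run only when forced, or when no earlier run has recorded success
        expression { params.FORCE_EXPENSIVE_STEP || env.EXPENSIVE_STEP_DONE != 'true' }
    }
    steps {
        echo 'doing the expensive work'
    }
    post {
        success {
            // remember the success server-wide so later builds can skip this stage
            updateGlobalEnvVariable('EXPENSIVE_STEP_DONE', 'true')
        }
    }
}
Because the stage-level post block only records success after the steps completed, a failed run leaves the flag untouched and the stage runs again next time.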
Maybe you could use a custom shared workspace, then create/store some state file there (for example lastexecfailed.state) that could be used by your next executions, and look for that file on the shared workspace at the beginning of your execution.
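A rough sketch of that idea in scripted form, using a success marker instead of the suggested failure marker (the marker path, params.FORCE_RUN and doTheWork are placeholders; fileExists and writeFile are standard Pipeline steps):
def successMarker = 'shared-ws/stage.succeeded'        // hypothetical file in the shared workspace
if (params.FORCE_RUN || !fileExists(successMarker)) {
    doTheWork()                                        // placeholder for the real stage logic
    // only reached if doTheWork() did not throw, so the marker records a successful run
    writeFile file: successMarker, text: new Date().toString()
} else {
    echo 'Skipping: this stage already succeeded in an earlier run'
}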
I recently switched my logback configuration file from logback.xml to logback.groovy. Using a DSL with Groovy is more versatile than XML for this sort of thing.
I need to analyse this file programmatically, like I analysed the previous XML file (any of innumerable parsing tools). I realise that this will be imperfect, as a DSL config file sits on top of an object which it configures and must be executed, so its results are inevitably dynamic, whereas an XML file is static.
If you want to include one Groovy file in another file there are solutions. This one worked for me.
But I'm struggling to find what I need from the results.
If I put a function like this in the DSL file ...
def greet() {
    println "hello world"
}
... not only can I execute it (config.greet() as below), but I can also see it listed when I go
GroovyShell shell = new GroovyShell()
def config = shell.parse( logfileConfigPath.toFile() )
println "config.class.properties ${config.class.properties}"
But if I put a line like this in the DSL file...
def MY_CONSTANT = "XXX"
... I have no idea how to find it and get its value (it is absent from the confusing and copious output from config.class.properties).
PS printing out config.properties just gives this:
[class:class logback, binding:groovy.lang.Binding@564fa2b]
... and yes, I did look at config.binding.properties: there was nothing.
Further thought
My question is, more broadly, about what if any tools are available for analysis of Groovy DSL configuration files. Given that such a file is pretty meaningless without the underlying object it is configuring (an object implementing org.gradle.api.Project in the case of Gradle; I don't know what class it may be in the case of logback), you would have thought there would need to be instrumentation to kind of hitch up such an object and then observe the effects of the config file in a controlled, observable way. If Groovy DSL config files are to be as versatile as their XML counterparts surely you need something along those lines? NB I have a suspicion that org.gradle.tooling.model.GradleProject or org.gradle.tooling.model.ProjectModel might serve that purpose. Unfortunately, at the current time I am unable to get GradleConnector working, as detailed here.
I presume there is nothing of this kind for logback, and at the moment I have no knowledge of its DSL or configurable object, or the latter's class or interface...
The use of def creates a local variable in the execution of the script that is not available in the binding of the script; see this. Even dropping def will not expose MY_CONSTANT in the binding because parsing the script via GroovyShell.parse() does not interpret/execute the code.
To expose MY_CONSTANT in config's binding, change def MY_CONSTANT = "XXX" to MY_CONSTANT = "XXX" and execute the config script via config.run().
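A tiny standalone illustration of that point (deliberately not the real logback.groovy, which would need its DSL methods available before it could run):
def shell = new GroovyShell()
def config = shell.parse('MY_CONSTANT = "XXX"')        // no 'def', so the assignment targets the binding
config.run()                                           // parse() alone compiles but does not execute
assert config.binding.getVariable('MY_CONSTANT') == 'XXX'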
So I have just created a Geb script that tests the creation of a report. Let's call this Script A.
I have other test cases I need to run that depend on the previous report being created, but I still want Script A to be a standalone test. We will call the subsequent script Script B.
Furthermore, Script A generates a pair of numbers that will be needed in subsequent scripts (to verify that data got recorded accurately).
Is there a way I can set up Geb such that Script B executes Script A and is able to pull those two numbers from Script A to be used in Script B?
In summary, there will be a number of scripts that depend on the actions of Script A (which is itself a test). I want to be able to modularize Script A so that it can be executed from other scripts. What would be the best way to do this?
For reuse and to avoid repeating yourself, I would put the report creation into a separate method in a new class such as ReportGenerator. This would generate the report given a set of parameters (if required) and return the report figures for use in whatever test you like.
You could then call that in any spec you want, with no reliance on other specs.
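A rough sketch of that shape (every name here is hypothetical and would need to be adapted to the real pages and report parameters):
import geb.Browser

// Simple value object for the two figures the later specs need.
class ReportFigures {
    BigDecimal total
    BigDecimal average
}

class ReportGenerator {
    // Creates the report through the UI and returns its figures for reuse in any spec.
    static ReportFigures createReport(Browser browser, Map reportParams = [:]) {
        // browser.to(CreateReportPage); fill in reportParams; submit; read the figures back.
        // Details omitted because they depend on the real page objects.
        return new ReportFigures(total: BigDecimal.ZERO, average: BigDecimal.ZERO)
    }
}
Each spec, including the one that is currently Script A, can then call ReportGenerator.createReport(browser, ...) in its setup and assert against the returned figures, with no spec depending on another having run first.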
I am trying to use TestRail as a test case management system, so integrating TestRail with Jenkins would be useful.
This is what I want to achieve:
Let's say I manually create three test cases in TestRail with case IDs C1, C2 and C3, and these test cases have unique automated test names such as A1, A2, and A3 (that is, a field in TestRail will hold this unique information).
When I hit "Start Automated Tests" button and run a Jenkins job from testrail (considering I have already implemented UI script for testrail that has this button):
, I want to run a script/something that takes the case ID's of the selected test cases and annotate those IDs to the actual Java tests temporarily so that it can run those specific tests and post results back to the Testrail.
Approach I can think of:
When I hit "Start Automated Tests" button on Testrail, I can make a script to run to create an XML file that will include the desired selected test cases from Testrail. This XML will then be provided as a default input to the Jenkins job and it will run the test cases mentioned in the XML file. This XML will be temporary and will be replaced everytime the selection is made from the Testrail. However, how do you do it? I am a newbie to the Testrail and read its API and looks like API will be useful to post the results back to the Testrail. But, how do we achieve the mapping of the ID's?
Also, any advice on posting results back to TestRail would be useful.
This isn't TestNG specific, but you can make a custom annotation in Java. You can update a TestRail test in a test run through the API either by the test ID (using add_result) or by both the case ID and run ID (using add_result_for_case): http://docs.gurock.com/testrail-api2/reference-results
The case ID doesn't ever change, so you can just hard-code these IDs in your tests.
Here is what I'm using for this purpose:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface TestData
{
    int testId() default 0;
    String[] tags() default "";
}
My test method then looks like this (using JUnit, but it shouldn't be much different with other frameworks):
@Test
@TestData(
    testId = 177,
    tags = {"smoke", "authentication"}
)
public void testName()
{
    //Do the test
}
I then use a JUnit-specific way to get the test method name for use in my teardown method, but I'm sure there are a variety of ways to do that. Once you have the test method name, here is how I read the annotation:
@After
public void baseTearDown() throws Exception
{
    //Good place to record test results
    Method testMethod = getClass().getMethod(testName);
    if (testMethod.isAnnotationPresent(TestData.class))
    {
        TestData testData = testMethod.getAnnotation(TestData.class);
        //Do something with testData.testId();
        System.out.println("Test ID = " + testData.testId());
    }
    //other cleanups
}
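Where the comment above says "Do something with testData.testId()", the call to TestRail could look roughly like this. It is only a sketch, written in Groovy for brevity: the TestRail URL, the environment variable, the credentials and the hard-coded status are all illustrative, and error handling is omitted. add_result_for_case takes the run ID and case ID in the URL and a JSON body with at least a status_id (1 = passed, 5 = failed).
// Rough sketch: POST a result for one case to TestRail's add_result_for_case endpoint.
def runId  = System.getenv('TESTRAIL_RUN_ID')                      // e.g. injected by the Jenkins job
def caseId = 177                                                   // testData.testId() in the teardown above
def url  = new URL("https://example.testrail.io/index.php?/api/v2/add_result_for_case/${runId}/${caseId}")
def conn = url.openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.setRequestProperty('Content-Type', 'application/json')
conn.setRequestProperty('Authorization',
        'Basic ' + 'user@example.com:api-key'.bytes.encodeBase64().toString())
conn.outputStream.withWriter { it << '{"status_id": 1, "comment": "Passed from Jenkins"}' }
println "TestRail responded with HTTP ${conn.responseCode}"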
This mkyong link gives some pretty basic examples of both creating an annotation and reading it with reflection. This is what I used to get started:
https://www.mkyong.com/java/java-custom-annotations-example/
If you're starting the test run in your code, then you can just keep track of the test run ID and use it as needed. If not, my preference is to define and set some environment variables using Jenkins or other scripts that your code can read from, so you don't have to deal with passing around files for some really basic key-value pairs.
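For example (the variable names here are made up), the test code then only needs something like:
// Read the run ID and credentials that the Jenkins job (or a wrapper script) exported.
String runId  = System.getenv('TESTRAIL_RUN_ID')
String apiKey = System.getenv('TESTRAIL_API_KEY')
if (runId == null || apiKey == null) {
    throw new IllegalStateException('TestRail environment variables are not set')
}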