I'm currently writing a lot of Groovy for very specific Jenkins scenarios.
The problem is that I have to keep track of the current CpsScript instance for its context (getting properties, the environment, and so on) and its invokeMethod (workflow steps and the like).
Currently this means I pass this from the pipeline Groovy script to my entry class, and from there it's passed on to every class separately, which is very annoying.
The script instance is created by the CpsFlowExecution and stored within the Continuable instance and the CpsThreadGroup, neither of which lets you retrieve it.
It seems that GlobalVariable-derived extensions receive it so that they have a context, but I'm not currently knowledgeable enough to write my own extension to leverage that.
So the question is:
Does anyone know of a way to keep track of the CpsScript instance that doesn't require me to pass it on to every new class I create? (Or alternatively: obtain it from anywhere - does this really need to be so hard?)
I've continued looking into ways to accomplish this. I even wrote a Jenkins plugin that provides a cpsScript global variable. Unfortunately you need the instance to provide a context for that call, so it's useless.
So, as the "least bad solution"(tm), I created a class I called ScriptContext that I can use as a base class for my pipeline classes (it implements Serializable).
When you write your pipeline script, you either pass it the CpsScript statically once:
ScriptContext.script = this
Or, if you derive from it (make sure to call super()):
new MyPipeline(this)
If your class is derived from ScriptContext, your work is done. Everything will work as though you hadn't created a class but had just used the automagic conversion. If you use any CpsScript-level functions besides println, you might want to add those here as well.
Anywhere else, you can just call ScriptContext.script to get the script instance.
The class code (removed most of the comments to keep it as short as possible):
package ...
import org.jenkinsci.plugins.workflow.cps.*
class ScriptContext implements Serializable {
    protected static CpsScript _script = null

    ScriptContext(CpsScript script = null) {
        if (!_script && script) {
            _script = script
        }
    }

    ScriptContext withScript(CpsScript script) {
        setScript(script)
        this
    }

    static void setScript(CpsScript script) {
        if (!_script && script) {
            _script = script
        }
    }

    static CpsScript getScript() {
        _script
    }

    // functions defined in CpsScript itself are not automatically found
    void println(what) {
        _script.println(what)
    }

    /**
     * For derived classes we provide missing method functionality by trying to
     * invoke the method in script context.
     */
    def methodMissing(String name, args) {
        if (!_script) {
            throw new GroovyRuntimeException('ScriptContext: No script instance available.')
        }
        return _script.invokeMethod(name, args)
    }

    /**
     * For derived classes we provide missing property functionality.
     * Note: Since it's sometimes unclear whether a property is an actual property or
     * just a function name without brackets, use evaluate for this instead of getProperty.
     * @param name
     * @return
     */
    def propertyMissing(String name) {
        if (!_script) {
            throw new GroovyRuntimeException('ScriptContext: No script instance available.')
        }
        _script.evaluate(name)
    }

    /**
     * Wrap in node if needed.
     * @param body
     * @return
     */
    protected <V> V node(Closure<V> body) {
        if (_script.env.NODE_NAME != null) {
            // Already inside a node block.
            body()
        } else {
            _script.node {
                body()
            }
        }
    }
}
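For illustration, here is a minimal scripted-pipeline sketch of both usage patterns; the shared-library name and the MyPipeline class are hypothetical, not part of the class above:
@Library('my-shared-lib') _  // hypothetical library containing ScriptContext

// Variant 1: set the script once, statically
ScriptContext.script = this

// Variant 2: derive from ScriptContext and hand the script to super()
class MyPipeline extends ScriptContext {
    MyPipeline(script) { super(script) }

    void run() {
        // node() is ScriptContext's wrapper, so this works inside or outside a node block
        node {
            echo "building on ${env.NODE_NAME}"  // step and env resolve via methodMissing/propertyMissing
        }
    }
}

new MyPipeline(this).run()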
Is something like this possible, i.e. using the JobDSL API from a class outside the main DSL script?
//main_jobdsl_script.groovy:
new JobCreator().createJob()
//JobCreator.groovy:
job("new-job") {
steps {
batchFile("Hello World")
}
}
When running it I get the error
13:03:18 ERROR: No signature of method: JobCreator.job() is applicable for argument types:
(org.codehaus.groovy.runtime.GStringImpl, StartJobCreator$_createJob_closure1)
values: ["new-job", de.dbh.jobcreation.StartJobCreator$_createStartJob_closure1#374d293]
I want to avoid the main script getting too big and cluttered, and would rather divide the code into several scripts/classes.
Yes, it is possible. The current script has access to all API methods, so you need to pass it to the custom class.
//main_jobdsl_script.groovy:
new JobCreator(this).createJob()
//JobCreator.groovy:
class JobCreator {
    private final Object context

    JobCreator(Object context) {
        this.context = context
    }

    void createJob() {
        context.job('new-job') {
            steps {
                batchFile('Hello World')
            }
        }
    }
}
This question is a follow-on to this great answer: Is there a way to upload jars for a dataflow job so we don't have to serialize everything?
This made me realize "ok, what I want is injection with no serialization so that I can mock and test".
Our current method requires our APIs/mocks to be serializable, BUT THEN I have to put static fields in the mock because it gets serialized and deserialized, creating a new instance that Dataflow uses.
My colleague pointed out that perhaps this needs to be a sink and would then be treated differently? <- We may try that later and update, but we are not sure right now.
My desire is to replace the APIs with mocks from the top during testing. Does someone have an example of this?
Here is our bootstrap code, which does not know whether it is in production or inside a feature test. We test end-to-end results with no Apache Beam imports in our tests, meaning we could swap to any tech if we wanted to pivot and still keep all our tests. Not only that, we catch far more integration bugs and can refactor without rewriting tests, since the contracts we test are customer ones we can't easily change.
import com.google.inject.Inject;
import org.apache.beam.sdk.Pipeline;

public class App {
    private Pipeline pipeline;
    private RosterFileTransform transform;

    @Inject
    public App(Pipeline pipeline, RosterFileTransform transform) {
        this.pipeline = pipeline;
        this.transform = transform;
    }

    public void start() {
        pipeline.apply(transform);
        pipeline.run();
    }
}
Notice that everything we do is Guice Injection based so the Pipeline may be direct runner or not. I may need to modify this class to pass things through :( but anything that works for now would be great.
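For context, here is a sketch of how the Guice wiring behind that constructor might look; ProdModule and its bindings are assumptions for illustration, while Pipeline.create and PipelineOptionsFactory are the real Beam APIs:
import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Hypothetical production module; a feature-test module would bind mocks instead.
public class ProdModule extends AbstractModule {
    @Override
    protected void configure() {
        // Assumes RosterFileTransform has an @Inject constructor, so only the
        // Pipeline needs an explicit provider here.
    }

    @Provides
    Pipeline providePipeline() {
        PipelineOptions options = PipelineOptionsFactory.create();
        return Pipeline.create(options); // direct runner by default; Dataflow when options say so
    }
}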
The function I am trying to get our api (and its mock and impl) into with no serialization is this:
private class ValidRecordPublisher extends DoFn<Validated<PractitionerDataRecord>, String> {
    @ProcessElement
    public void processElement(@Element Validated<PractitionerDataRecord> element) {
        // how do we get microServiceApi in here without serializing it?
        microServiceApi.writeRecord(element.getValue());
    }
}
I am not sure how to pass in microServiceApi in a way that avoids serialization. I would also be OK with delayed creation after deserialization, using a Guice Provider and provider.get(), if there is a solution there.
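For instance, a delayed-creation sketch using Beam's real @Setup lifecycle hook; SomeFactory here is a placeholder for whatever lookup would replace injection:
private class ValidRecordPublisher extends DoFn<Validated<PractitionerDataRecord>, String> {
    // transient: never serialized; re-created on the worker after deserialization
    private transient MicroServiceApi microServiceApi;

    @Setup
    public void setup() {
        microServiceApi = SomeFactory.get(MicroServiceApi.class); // hypothetical lookup
    }

    @ProcessElement
    public void processElement(@Element Validated<PractitionerDataRecord> element) {
        microServiceApi.writeRecord(element.getValue());
    }
}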
Solved it in such a way that mocks no longer need statics or serialization, via one single class bridging the world of Dataflow (in prod and in test), like so:
NOTE: There is additional magic-ness we have in our company that passes headers through from service to service and through Dataflow, and some of that is in here, which you can ignore (i.e. the RouterRequest request = Current.request();). So anyone else will have to pass projectId into getInstance each time.
import java.io.Serializable;

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Module;
import com.google.inject.util.Modules;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public abstract class DataflowClientFactory implements Serializable {
    private static final Logger log = LoggerFactory.getLogger(DataflowClientFactory.class);

    public static final String PROJECT_KEY = "projectKey";

    private transient static Injector injector;
    private transient static Module overrides;
    private static int counter = 0;

    public DataflowClientFactory() {
        counter++;
        log.info("creating again (usually due to deserialization). counter=" + counter);
    }

    public static void injectOverrides(Module dfOverrides) {
        overrides = dfOverrides;
    }

    private synchronized void initialize(String project) {
        if (injector != null)
            return;
        /*
         * The hardest part is this piece since this is specific to each Dataflow,
         * so each project subclasses DataflowClientFactory.
         * This solution is the best ONLY in the fact of time crunch and it works
         * decently for end to end testing without developers needing fancy
         * wrappers around mocks anymore.
         */
        Module module = loadProjectModule();
        Module modules = Modules.combine(module, new OrderlyDataflowModule(project));
        if (overrides != null) {
            modules = Modules.override(modules).with(overrides);
        }
        injector = Guice.createInjector(modules);
    }

    protected abstract Module loadProjectModule();

    public <T> T getInstance(Class<T> clazz) {
        if (!Current.isContextSet()) {
            throw new IllegalStateException("Someone on the stack is extending DoFn instead of OrderlyDoFn so you need to fix that first");
        }
        RouterRequest request = Current.request();
        String project = (String) request.requestState.get(PROJECT_KEY);
        initialize(project);
        return injector.getInstance(clazz);
    }
}
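For the consuming side, a hypothetical project subclass and call site; MyProjectModule is an assumption, and OrderlyDoFn/Current are the internal classes mentioned above:
// Each Dataflow project supplies its own bindings:
public class MyProjectClientFactory extends DataflowClientFactory {
    @Override
    protected Module loadProjectModule() {
        return new MyProjectModule(); // hypothetical project-specific Guice module
    }
}

// Execution-time code then asks the factory instead of deserializing clients:
// MicroServiceApi api = clientFactory.getInstance(MicroServiceApi.class);
// and tests call DataflowClientFactory.injectOverrides(mockModule) up front to swap in mocks.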
I suppose this may not be what you're looking for, but your use case makes me think of using factory objects. They may depend on the pipeline options that you pass (i.e. your PipelineOptions object), or on some other configuration object.
Perhaps something like this:
class MicroserviceApiClientFactory implements Serializable {
    // PipelineOptions itself is not serializable, so capture the values we need
    // at construction time (i.e. at pipeline submission).
    private final boolean useMock;
    private final String endpoint;

    MicroserviceApiClientFactory(PipelineOptions options) {
        MySpecialOptions specialOpts = options.as(MySpecialOptions.class);
        this.useMock = specialOpts.getMockMicroserviceApi();
        this.endpoint = specialOpts.getMicroserviceEndpoint();
    }

    public MicroserviceApiClient getClient() {
        if (useMock) {
            return new MockedMicroserviceApiClient(...); // Or whatever
        } else {
            return new MicroserviceApiClient(endpoint); // Or whatever parameters it needs
        }
    }
}
And for your DoFns and any other execution-time objects that need it, you would pass the factory:
private class ValidRecordPublisher extends DoFn<Validated<PractitionerDataRecord>, String> {
    private final MicroserviceApiClientFactory msFactory;
    private transient MicroserviceApiClient microServiceApi;

    ValidRecordPublisher(MicroserviceApiClientFactory msFactory) {
        this.msFactory = msFactory;
    }

    @ProcessElement
    public void processElement(@Element Validated<PractitionerDataRecord> element) {
        if (microServiceApi == null) microServiceApi = msFactory.getClient();
        microServiceApi.writeRecord(element.getValue());
    }
}
This should allow you to encapsulate the mocking functionality into a single class that lazily creates your mock or your client at pipeline execution time.
Let me know if this matches what you want somewhat, or if we should try to iterate further.
I have no experience with Guice, so I don't know if Guice configurations can easily pass the boundary between pipeline construction and pipeline execution (serialization / submitting JARs / etc.).
Should this be a sink? Maybe: if you have an external service and you're writing to it, you can write a PTransform that takes care of it - but the question of how you inject the various dependencies will remain.
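For completeness, a sketch of that PTransform idea under the same assumptions, reusing the hypothetical factory from above; PTransform, ParDo, and PDone are the real Beam types:
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PDone;

class WriteToMicroservice extends PTransform<PCollection<Validated<PractitionerDataRecord>>, PDone> {
    private final MicroserviceApiClientFactory factory;

    WriteToMicroservice(MicroserviceApiClientFactory factory) {
        this.factory = factory;
    }

    @Override
    public PDone expand(PCollection<Validated<PractitionerDataRecord>> input) {
        // The DoFn only carries the small serializable factory, never the client itself.
        input.apply(ParDo.of(new ValidRecordPublisher(factory)));
        return PDone.in(input.getPipeline());
    }
}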
Currently I'm trying to register the findFiles step in my JenkinsPipelineUnit tests.
My setup is as follows:
src/
    test/
        groovy/
            TestJavaLib.groovy
vars/
    javaLib.groovy
javaApp.jenkinsfile
Inside TestJavaLib.groovy I have:
...
import com.lesfurets.jenkins.unit.RegressionTest
import com.lesfurets.jenkins.unit.BasePipelineTest
class TestJavaLibraryPipeline extends BasePipelineTest implements RegressionTest {

    // Some overridden setUp() which loads shared libs
    // and registers methods referenced in javaLib.groovy

    void registerPipelineMethods() {
        ...
        def fileList = [new File("testFile1"), new File("testFile2")]
        helper.registerAllowedMethod('findFiles', { f -> return fileList })
        ...
    }
}
and my javaLib.groovy contains this currently failing part:
...
def pomFiles = findFiles glob: "target/publish/**/${JOB_BASE_NAME}*.pom"
if (pomFiles.length < 1) { // Fails with java.lang.NullPointerException: Cannot get property 'length' on null object
    error("no pom file found")
}
...
I have tried multiple closures returning various objects, but every time I get an NPE.
The question is: how do I correctly register the findFiles method?
N.B. I'm very new to mocking and closures in Groovy.
Looking at the source code and examples on GitHub, I see a few overloads of the method (here):
void registerAllowedMethod(String name, List<Class> args = [], Closure closure)
void registerAllowedMethod(MethodSignature methodSignature, Closure closure)
void registerAllowedMethod(MethodSignature methodSignature, Function callback)
void registerAllowedMethod(MethodSignature methodSignature, Consumer callback)
It doesn't look like you are registering the right signature with your call; I'm actually surprised you aren't getting a MissingMethodException with your current call pattern.
You need to add the rest of the method signature during registration. The findFiles step takes a Map of parameters (glob: "target/publish/**/${JOB_BASE_NAME}*.pom" is a map literal in Groovy). One way to register that type would be like this:
helper.registerAllowedMethod('findFiles', [Map.class], { f -> return fileList })
Note that your pipeline code then reads pomFiles.length, so the stubbed return value needs a length property (an array, or a map like [length: n] as in the answers below) rather than a plain List.
I also faced the same issue. However, I was able to mock the findFiles() method using the following method signature:
helper.registerAllowedMethod(method('findFiles', Map.class), { map ->
    return [['path': 'testPath/test.zip']]
})
So I found a way to mock findFiles when I needed the length property:
helper.registerAllowedMethod('findFiles', [Map.class], { [length: findFilesLength ?: 1] })
This also lets me change the findFilesLength variable in tests to exercise different conditions in the pipeline, like the one in my OP.
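For completeness, a minimal sketch of where that variable could live in a JenkinsPipelineUnit test class; the wiring is an assumption based on the OP's setup:
import com.lesfurets.jenkins.unit.BasePipelineTest

class TestJavaLibraryPipeline extends BasePipelineTest {
    // individual tests set this before running the pipeline to simulate 0..n found files
    int findFilesLength = 1

    @Override
    void setUp() {
        super.setUp()
        helper.registerAllowedMethod('findFiles', [Map.class], { [length: findFilesLength ?: 1] })
    }
}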
I have a Jenkins pipeline job that has been working fine. I have a small handful of similar pipelines, and I've been duplicating a small set of reusable utility methods into each one. So, I've started to construct a shared library to reduce that duplication.
I'm using the following page for guidance: https://jenkins.io/doc/book/pipeline/shared-libraries/ .
For each method that I move into the shared library, I create a "vars/methodname.groovy" file in the shared library, and change the method name to "call".
I've been doing these one at a time and verifying the pipeline job still works, and this is all working fine.
The original set of methods referenced several "global" variables, like "env.JOB_NAME" and "params.". In order for the methods to work in the shared library, I added references to those env vars and params as parameters to the methods. This also works fine.
However, I don't like the fact that I have to pass these "global" variables, that are essentially static from the start of the job, sometimes through a couple of levels of these methods that I've put into the shared library.
So, I've now created something like the "vars/acme.groovy" example from that doc page. I'm going to define instance variables to store all of those "global" variables, and move each of the single methods defined in each of the "vars/methodname.groovy" files into this new class as instance methods.
I also defined a "with" method in the class for each of the instance variables (setter that returns "this" for chaining).
I initially would configure it inside my "node" block with something like the following (the file in the library is called "vars/uslutils.groovy"):
uslutils.withCurrentBuild(currentBuild).with...
And then when I need to call any of the reused methods, I would just do "uslutils.methodname(optionalparameters)".
I also added a "toString()" method to the class, just for debugging (since debugging Jenkinsfiles is so easy :) ).
What's odd is that I'm finding that if I call this toString() method from the pipeline script, the job hangs forever, and I have to manually kill it. I imagine I'm hitting some sort of non-obvious recursion in some Groovy AST, but I don't see what I'm doing wrong.
Here is my "vars/uslutils.groovy" file in the shared library:
import hudson.model.Cause
import hudson.triggers.TimerTrigger
import hudson.triggers.SCMTrigger
import hudson.plugins.git.GitStatus
class uslutils implements Serializable {
    def currentBuild
    String mechIdCredentials
    String baseStashURL
    String jobName
    String codeBranch
    String buildURL
    String pullRequestURL
    String qBotUserID
    String qBotPassword

    def getCurrentBuild() { return currentBuild }
    String getMechIdCredentials() { return mechIdCredentials }
    String getBaseStashURL() { return baseStashURL }
    String getJobName() { return jobName }
    String getCodeBranch() { return codeBranch }
    String getBuildURL() { return buildURL }
    String getPullRequestURL() { return pullRequestURL }
    String getQBotUserID() { return qBotUserID }
    String getQBotPassword() { return qBotPassword }

    def withCurrentBuild(currentBuild) { this.currentBuild = currentBuild; return this }
    def withMechIdCredentials(String mechIdCredentials) { this.mechIdCredentials = mechIdCredentials; return this }
    def withBaseStashURL(String baseStashURL) { this.baseStashURL = baseStashURL; return this }
    def withJobName(String jobName) { this.jobName = jobName; return this }
    def withCodeBranch(String codeBranch) { this.codeBranch = codeBranch; return this }
    def withBuildURL(String buildURL) { this.buildURL = buildURL; return this }
    def withPullRequestURL(String pullRequestURL) { this.pullRequestURL = pullRequestURL; return this }
    def withQBotUserID(String qBotUserID) { this.qBotUserID = qBotUserID; return this }
    def withQBotPassword(String qBotPassword) { this.qBotPassword = qBotPassword; return this }

    public String toString() {
//        return "[currentBuild[${this.currentBuild}] mechIdCredentials[${this.mechIdCredentials}] " +
//            "baseStashURL[${this.baseStashURL}] jobName[${this.jobName}] codeBranch[${this.codeBranch}] " +
//            "buildURL[${this.buildURL}] pullRequestURL[${this.pullRequestURL}] qBotUserID[${this.qBotUserID}] " +
//            "qBotPassword[${this.qBotPassword}]]"
        return this.mechIdCredentials
    }
}
Note that I've simplified the toString() method temporarily until I figure out what I'm doing wrong here.
This is what I added at the top of my "node" block:
uslutils.currentBuild = currentBuild
println "uslutils[${uslutils}]"
When I run the job, it prints output from the lines that come before this, and then it just shows the spinner forever, until I kill the job. If I comment out the "println", it works fine.
I have been trying to create my first Jenkins plugin. Everything is great except that the global config does not persist after the Jenkins service is restarted.
The config saves fine as long as the service is not restarted.
The global config jelly file...
<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:d="jelly:define" xmlns:l="/lib/layout" xmlns:t="/lib/hudson" xmlns:f="/lib/form">
    <!--
        Jenkins uses a set of tag libraries to provide uniformity in forms.
        To determine where this tag is defined, first check the namespace URI,
        and then look under $JENKINS/views/. For example, <f:section> is defined
        in $JENKINS/views/lib/form/section.jelly.

        It's also often useful to just check other similar scripts to see what
        tags they use. Views are always organized according to its owner class,
        so it should be straightforward to find them.
    -->
    <f:section title="Hello World Builder">
        <f:entry title="French" field="useFrench"
                 description="Check if we should say hello in French">
            <f:checkbox />
        </f:entry>
    </f:section>
</j:jelly>
After saving, Jenkins constructs a config file named
examplePlugin.examplePlugin.HelloWorldBuilder.xml
with content
<useFrench>false</useFrench>
The descriptor itself is the following.
// Overridden for better type safety.
// If your plugin doesn't really define any property on Descriptor,
// you don't have to do this.
@Override
public DescriptorImpl getDescriptor() {
    return (DescriptorImpl) super.getDescriptor();
}

/**
 * Descriptor for {@link HelloWorldBuilder}. Used as a singleton.
 * The class is marked as public so that it can be accessed from views.
 *
 * <p>
 * See <tt>src/main/resources/hudson/plugins/hello_world/HelloWorldBuilder/*.jelly</tt>
 * for the actual HTML fragment for the configuration screen.
 */
@Extension // This indicates to Jenkins that this is an implementation of an extension point.
public static final class DescriptorImpl extends BuildStepDescriptor<Builder> {
    /**
     * To persist global configuration information,
     * simply store it in a field and call save().
     *
     * <p>
     * If you don't want fields to be persisted, use <tt>transient</tt>.
     */
    private boolean useFrench;

    /**
     * Performs on-the-fly validation of the form field 'name'.
     *
     * @param value
     *     This parameter receives the value that the user has typed.
     * @return
     *     Indicates the outcome of the validation. This is sent to the browser.
     */
    public FormValidation doCheckName(@QueryParameter String value)
            throws IOException, ServletException {
        if (value.length() == 0)
            return FormValidation.error("Please set a name");
        if (value.length() < 4)
            return FormValidation.warning("Isn't the name too short?");
        return FormValidation.ok();
    }

    public boolean isApplicable(Class<? extends AbstractProject> aClass) {
        // Indicates that this builder can be used with all kinds of project types
        return true;
    }

    /**
     * This human readable name is used in the configuration screen.
     */
    public String getDisplayName() {
        return "Say hello world";
    }

    @Override
    public boolean configure(StaplerRequest req, JSONObject formData) throws FormException {
        // To persist global configuration information,
        // set that to properties and call save().
        useFrench = formData.getBoolean("useFrench");
        // ^Can also use req.bindJSON(this, formData);
        //  (easier when there are many fields; need set* methods for this, like setUseFrench)
        save();
        return super.configure(req, formData);
    }

    /**
     * This method returns true if the global configuration says we should speak French.
     *
     * The method name is bit awkward because global.jelly calls this method to determine
     * the initial state of the checkbox by the naming convention.
     */
    public boolean getUseFrench() {
        return useFrench;
    }
}
Any help with why this is not reloading on restart would be very welcome, since this seems to be a problem with the example project created by the Maven archetype.
So this is a problem with the hello world example: you need to tell the descriptor in its constructor to load the persisted configuration.
public DescriptorImpl() {
    load();
}
That fixes the issue I was seeing with the configuration not being persisted.
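Put together, the relevant part of the descriptor looks like this (abridged from the code above; load() and save() are inherited from hudson.model.Descriptor):
@Extension
public static final class DescriptorImpl extends BuildStepDescriptor<Builder> {
    private boolean useFrench;

    public DescriptorImpl() {
        // Re-reads examplePlugin.examplePlugin.HelloWorldBuilder.xml at startup,
        // restoring whatever the last configure() + save() persisted.
        load();
    }

    @Override
    public boolean configure(StaplerRequest req, JSONObject formData) throws FormException {
        useFrench = formData.getBoolean("useFrench");
        save();
        return super.configure(req, formData);
    }
}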