How to retain Cobertura ratcheted configuration in job-dsl-plugin?

The Cobertura plugin in Jenkins supports ratcheting by ticking these boxes:
Health auto update
Stability auto update
When these boxes are ticked, the coverage metric targets (on the Jenkins configuration page) are updated on every successful build.
These values are overridden by job-dsl-plugin whenever the seed job is triggered. How can I retain these values when my seed job runs?

It seems there is no pretty way to do this right now, but here is my solution.
Solution
1. Execute a Groovy script and store the current Cobertura configuration of every job in a JSON file.
The Cobertura configuration can be retrieved like this:
def coberturaPublisher = project.getPublishersList().get(CoberturaPublisher)
coberturaPublisher.healthyTarget.getTarget(CoverageMetric.METHOD)
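A minimal sketch of such a script, assuming it runs as a system Groovy build step with access to the Jenkins model (the output file name and JSON shape are assumptions, chosen to match the seed script below):

import groovy.json.JsonOutput
import hudson.model.AbstractProject
import hudson.plugins.cobertura.CoberturaPublisher
import hudson.plugins.cobertura.targets.CoverageMetric
import jenkins.model.Jenkins

// Collect the current Cobertura targets of every job that has the publisher,
// keyed by job name, so the seed job can restore them later.
def config = [:]
Jenkins.instance.getAllItems(AbstractProject).each { project ->
    def publisher = project.publishersList.get(CoberturaPublisher)
    if (publisher != null) {
        config[project.fullName] = [cobertura: [method: [
            healthy  : publisher.healthyTarget.getTarget(CoverageMetric.METHOD),
            unhealthy: publisher.unhealthyTarget.getTarget(CoverageMetric.METHOD),
            failing  : publisher.failingTarget.getTarget(CoverageMetric.METHOD),
        ]]]
    }
}
// File name is a placeholder; raw values (e.g. 8000000 for 80%) are kept as-is.
new File('cobertura-config.json').text = JsonOutput.toJson(config)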
2. job-dsl-plugin to configure cobertura by using the JSON file if it's available
job-dsl's normal CoberturaContext methods can't be called here because the data captured in the first step has a different representation from the method parameters:
80% is stored as 8000000 in the JSON file
80% must be passed in as 80, not 8000000, to the CoberturaContext methods.
As of today, I can't simply divide the stored value by 100000 because the methods accept an Integer instead of a double. To retain the precision of the ratcheted configuration, I have to bypass the validation by manipulating the targets directly:
coberturaContext.targets = [
    'METHOD': new CoberturaContext.CoberturaTarget(
        targetType: CoberturaContext.TargetType.METHOD,
        healthyTarget: 8000000,
        unhealthyTarget: previousConfig ? previousConfig.cobertura.method.unhealthy : 0,
        failingTarget: previousConfig ? previousConfig.cobertura.method.failing : 0
    ),
    // remaining coverage metrics follow the same pattern
]
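For completeness, previousConfig above can be loaded near the top of the seed script from the file written in step 1, something like this (file name and shape match the sketch in step 1; jobName is a placeholder for the job being generated):

import groovy.json.JsonSlurper

// Load the previously saved ratchet values for this job,
// or null on the very first run when the file doesn't exist yet.
def configFile = new File('cobertura-config.json')
def allConfig = configFile.exists() ? new JsonSlurper().parse(configFile) : [:]
def previousConfig = allConfig[jobName]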
Why bother creating the JSON file when you can call the Jenkins API directly?
My seed job is configured along the lines of this example here, so I have an additional classpath in the job configuration. When I tried to hit the Jenkins API directly, I got class-loading issues for the Cobertura plugin classes.

Related

Jenkins Addon in Jenkins Pipeline

I have a parameterized project with the variable VAR1.
I'm using the Xray for JIRA Jenkins Plugin. There you can fill in four parameters:
JIRA Instance
Issues
Filter
File Path
I'm new to Jenkins, but what I have learned so far is that you can't fill these fields with environment variables. Something like
Issues: ${VAR1} - doesn't work.
So I thought I could do this with a pipeline. When I click on Pipeline Syntax and choose step: General Build Step, I can choose Xray: Cucumber Features Export Task. I then fill the fields with my environment variable and click Generate Pipeline Script. The output is as follows:
step <object of type com.xpandit.plugins.xrayjenkins.task.XrayExportBuilder>
That doesn't work. What am I doing wrong?
Everything you're doing is OK, but what you want is not supported by Jenkins, pipeline or not, since the parameters are loaded prior to the pipeline flow and prior to the definition of ${VAR1}.
You can try to overcome this by defining the 'Issues' value as a pipeline-internal value instead of a parameter and basing it on the ${VAR1} value, as in the sketch below.
If it must be a parameter, use two jobs, where one defines the value of 'Issues' based on ${VAR1} and passes it to the other job, which receives 'Issues' as a fixed value.
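A rough sketch of the pipeline-internal approach could look like this. The $class name is taken from the generated output above; the field names (serverInstance, issues, filter, filePath) are assumptions, so check the exact keys in the Snippet Generator for your plugin version:

pipeline {
    agent any
    parameters {
        string(name: 'VAR1', defaultValue: '', description: 'Issue keys to export')
    }
    stages {
        stage('Export features') {
            steps {
                // Pass the parameter into the step's configuration map instead of
                // the freestyle form field, which cannot expand environment variables.
                step([$class        : 'XrayExportBuilder',
                      serverInstance: 'my-jira-instance',   // placeholder
                      issues        : params.VAR1,
                      filter        : '',
                      filePath      : 'features/'])
            }
        }
    }
}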

Dataflow/Beam Templates, Productionization, Initialization, and ValueProviders

I have an Apache Beam job running on Google Cloud Dataflow, and as part of its initialization it needs to run some basic sanity/availability checks on services, pub/sub subscriptions, GCS blobs, etc. It's a streaming pipeline intended to run ad infinitum that processes hundreds of thousands of pub/sub messages.
Currently it needs a whole heap of required, variable parameters: which Google Cloud project it needs to run in, which bucket and directory prefix it's going to be storing files in, which pub/sub subscriptions it needs to read from, and so on. It does some work with these parameters before pipeline.run is called - validation, string splitting, and the like. In its current form, in order to start a job we've been passing these parameters to a PipelineOptionsFactory and issuing a new compile every single time, but it seems like there should be a better way. I've set up the parameters to be ValueProvider objects, but because they're being called outside of pipeline.run, Maven complains at compile time that ValueProvider.get() is being called outside of a runtime context (which, yes, it is).
I've tried using NestedValueProviders as in the Google "Creating Templates" document, but my IDE complains if I try to use NestedValueProvider.of to return a string as shown in the document. The only way I've been able to get NestedValueProviders to compile is as follows:
NestedValueProvider<String, String> pid = NestedValueProvider.of(
    pipelineOptions.getDataflowProjectId(),
    (SerializableFunction<String, String>) s -> s
);
(String pid = NestedValueProvider.of(...) results in the following error: "incompatible types: no instance(s) of type variable(s) T,X exist so that org.apache.beam.sdk.options.ValueProvider.NestedValueProvider conforms to java.lang.String")
I have the following in my pipelineOptions:
ValueProvider<String> getDataflowProjectId();
void setDataflowProjectId(ValueProvider<String> value);
Because of the volume of messages we're going to be processing, adding these checks at the front of the pipeline for every message that comes through isn't really practical; we'll hit daily account administrative limits on some of these calls pretty quickly.
Are templates the right approach for what I want to do? How do I go about actually productionizing this? Should (can?) I compile with maven into a jar, then just run the jar on a local dev/qa/prod box with my parameters and just not bother with ValueProviders at all? Or is it possible to provide a default to a ValueProvider and override it as part of the options passed to the template?
Any advice on how to proceed would be most appreciated. Thanks!
The way templates are currently implemented, there is no point at which you can perform initialization/validation that is "post-template creation" but "pre-pipeline start".
All of the existing validation executes during template creation. If the validation detects that the values aren't available (due to being a ValueProvider), the validation is skipped.
In some cases it is possible to approximate validation by adding runtime checks, either as part of the initial splitting of a custom source or as part of the @Setup method of a DoFn. In the latter case, the @Setup method will run once for each instance of the DoFn that is created. If the pipeline is batch, after 4 failures for a specific instance it will fail the pipeline.
Another option for productionizing pipelines is to build the JAR that runs the pipeline, and have a production process that runs that JAR to initiate the pipeline.
Regarding the compile error you received -- NestedValueProvider.of returns a ValueProvider, so it isn't possible to get a String out of it directly. You could, however, put the validation code into the SerializableFunction that is run within the NestedValueProvider, as sketched below.
Although I believe this will currently re-run the validation every time the value is accessed, it wouldn't be unreasonable to have the NestedValueProvider cache the translated value.
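A sketch of that idea, assuming the getDataflowProjectId option from the question (shown in Groovy syntax; the validation logic is a placeholder):

import org.apache.beam.sdk.options.ValueProvider
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider
import org.apache.beam.sdk.transforms.SerializableFunction

// Run the sanity check inside the SerializableFunction: it executes at runtime,
// when a worker first reads the value, rather than at template creation time.
ValueProvider<String> pid = NestedValueProvider.of(
        pipelineOptions.getDataflowProjectId(),
        { String projectId ->
            if (projectId == null || projectId.isEmpty()) {
                throw new IllegalArgumentException('dataflowProjectId must be set')
            }
            return projectId
        } as SerializableFunction<String, String>)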

How to get interactivity with maven release plugin

The maven release plugin in Jenkins prompts you for input when doing a release:prepare. It gives you a suggested value, but you can also override that suggestion with your own value. How can you do the same thing when writing a Jenkins pipeline in Groovy? The only thing that I've found online is https://gist.github.com/cyrille-leclerc/552e3103139557e0196a, and that method is strictly command-line: it only gives batch mode with no user interaction.
The parameter definitions passed to the input step can specify default values. Try it in Snippet Generator.
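A minimal sketch (the message, parameter name, and suggested default are placeholders):

// Prompt the user mid-pipeline; the default is pre-filled and can be
// overridden, much like release:prepare's interactive suggestion.
def version = input(
        message: 'Prepare release',
        parameters: [string(name: 'RELEASE_VERSION',
                            defaultValue: '1.0.1',
                            description: 'Accept or override the suggested version')])
echo "Releasing version ${version}"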

How to pass info from one job to another

I have a Jenkins job (jobA) which calls another one (jobB).
A batch file called by jobA generates a string which needs to be passed into jobB.
How can I get that string out of jobA and into jobB?
Might it be possible to, say, set an environment variable to that string, somehow turn it into a jenkins parameter, and then pass that parameter into jobB?
Currently, my only other idea is to write the string out to a file in jobA, save that file as an artifact, pass that artifact into jobB, and then have jobB read that file. That seems a really kludgey way to do it, though.
It seems that there must be a better way.
One option is to use the Jenkins Parameterized Trigger Plugin.
Then you can, for example, set jobB's parameters based on a properties file generated by jobA, along these lines:
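Sketched as job-dsl (the job names, file name, and variable are placeholders; the same setup can be done in the UI with a "Trigger parameterized build" post-build action reading parameters from a properties file):

job('jobA') {
    steps {
        // The batch step computes the string and writes it as a key=value pair.
        batchFile('echo MY_STRING=some-generated-value> results.properties')
    }
    publishers {
        downstreamParameterized {
            trigger('jobB') {
                parameters {
                    // jobB receives MY_STRING as a build parameter.
                    propertiesFile('results.properties')
                }
            }
        }
    }
}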

Updating the Jenkins Variable with a value calculated in a batch file

I am new to Jenkins. My aim is to define a build job in which I have created an environment variable, say "x", by checking the "This build is parameterized" option. I am executing a batch file that performs a set of instructions, and I want x to be updated with a value calculated in the batch file. Any suggestions on how I can update the Jenkins variable with the value calculated in the batch file? I have tried the EnvInject plugin but cannot work out how to update the variable.
Thanks in advance
The EnvInject plugin should be helpful here.
But - you need to update your batch file to create a properties file.
As the first step in the build action, I pass some properties to my test.
Then I use the property, like a parameter passed to Jenkins, to execute a Windows batch command. This bat file creates a result file in .properties format.
As the third step, I read the result properties file. It can contain the same properties with updated values.
Now - your original properties have been updated with new values and can be used in your subsequent build steps, for example:
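A sketch of that round trip in job-dsl terms (names are placeholders; the environmentVariables build step corresponds to EnvInject's "Inject environment variables" step):

job('example') {
    parameters {
        stringParam('x', 'initial', 'Will be recalculated by the batch step')
    }
    steps {
        // 1. The batch file computes the new value and writes it as key=value.
        batchFile('echo x=calculated-value> result.properties')
        // 2. EnvInject re-reads the file, overwriting x for later build steps.
        environmentVariables {
            propertiesFile('result.properties')
        }
        // 3. Subsequent steps see the updated value.
        batchFile('echo x is now %x%')
    }
}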
