I have a shared library of my own in which a third-party Java library gets 'grabbed' and used more or less as follows:
package mypackage

@Grab(group='some.group', module='some-module', version='1.0')
import some.group.some-module.Utils

@Singleton
class Grabber implements Serializable {
    static void callExternalMethod(String params) {
        Utils.method(params) // I would like to see what this one prints to stdout
    }
}
The pipeline is more or less
@Library('mylibrary')
import mypackage.*

node {
    ...
    stage("run") {
        Grabber.callExternalMethod("some parameters")
    }
}
The Grab works and the method is called, but I would like what Utils.method() normally prints to stdout to appear in the Jenkins build output.
Is there a way to do this?
I'm writing a shared library that will get used in Pipelines.
class Deployer implements Serializable {
    def steps

    Deployer(steps) {
        this.steps = steps
    }

    def deploy(env) {
        // convert environment from steps to a list (conversion omitted)
        def envlist = []
        def output = new StringBuffer()
        def error = new StringBuffer()
        def process = "ls -l".execute(envlist, null)
        process.consumeProcessOutput(output, error)
        process.waitFor()
        println output
        println error
    }
}
In the Jenkinsfile, I import the library, call the class and execute the deploy function inside a script section:
stage('mystep') {
    steps {
        script {
            def deployer = com.mypackage.HelmDeployer("test")
            deployer.deploy()
        }
    }
}
However, no output or errors are printed on the Console log.
Is it possible to execute stuff inside a shared library class? If so, how, and what am I doing wrong?
Yes, it is possible, but the solution is not really obvious. Every call that is usually done in the Jenkinsfile but was moved to the shared library needs to reference the steps object you passed in.
You can also reference the Jenkins environment by calling steps.env.
I will give you a short example:
class Deployer implements Serializable {
    def steps

    Deployer(steps) {
        this.steps = steps
    }

    def callMe() {
        // Always call the steps object
        steps.echo("Test")
        steps.echo("${steps.env.BRANCH_NAME}")
        steps.sh("ls -al")
        // Your command could look something like this:
        // def output = steps.sh(script: "ls -l", returnStdout: true)
        ...
    }
}
You also have to import the class from your shared library and create an instance of it. Define the following outside of your pipeline block:
import com.mypackage.Deployer // path is relative to the src/ folder of the shared library
def deployer = new Deployer(this) // 'this' refers to the pipeline script object of Jenkins, which provides the steps
Then you can call it in your pipeline as follows:
... script { deployer.callMe() } ...
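Applied to the Deployer class from the question, a minimal sketch could look like the following (this assumes you only need the command output in the build log, and it drops the hand-rolled Process handling in favor of the sh step):

class Deployer implements Serializable {
    def steps

    Deployer(steps) {
        this.steps = steps
    }

    def deploy(env) {
        // sh prints stdout and stderr to the build log by default
        steps.sh("ls -l")
        // or capture the output if you need it as a Groovy value
        def output = steps.sh(script: "ls -l", returnStdout: true).trim()
        steps.echo(output)
    }
}

The sh step already runs with the node's environment, so the manual environment-list handling from the question is usually not needed.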
According to the Beam website,
Often it is faster and simpler to perform local unit testing on your
pipeline code than to debug a pipeline’s remote execution.
I want to use test-driven development for my Beam/Dataflow app that writes to Bigtable for this reason.
However, following the Beam testing documentation I get to an impasse--PAssert isn't useful because the output PCollection contains org.apache.hadoop.hbase.client.Put objects, which don't override the equals method.
I can't get the contents of the PCollection to do validation on them either, since
It is not possible to get the contents of a PCollection directly - an
Apache Beam or Dataflow pipeline is more like a query plan of what
processing should be done, with PCollection being a logical
intermediate node in the plan, rather than containing the data.
So how can I test this pipeline, other than manually running it? I'm using Maven and JUnit (in Java since that's all the Dataflow Bigtable Connector seems to support).
The Bigtable Emulator Maven plugin can be used to write integration tests for this:
1. Configure the Maven Failsafe plugin and change your test case's ending from *Test to *IT so it runs as an integration test.
2. Install the Bigtable Emulator in the gcloud SDK on the command line:
   gcloud components install bigtable
   Note that this required step is going to reduce code portability (e.g. will it run on your build system? On other devs' machines?), so I'm going to containerize it using Docker before deploying to the build system.
3. Add the emulator plugin to the pom per the README.
4. Use the HBase Client API and see the example Bigtable Emulator integration test to set up your session and table(s).
5. Write your test as normal per the Beam documentation, except instead of using PAssert actually call CloudBigtableIO.writeToTable and then use the HBase Client to read the data from the table to verify it.
Here's an example integration test:
package adair.example;

import static org.apache.hadoop.hbase.util.Bytes.toBytes;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.hamcrest.collection.IsIterableContainingInAnyOrder;
import org.junit.Assert;
import org.junit.Test;

import com.google.cloud.bigtable.beam.CloudBigtableIO;
import com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration;
import com.google.cloud.bigtable.hbase.BigtableConfiguration;

/**
 * A simple integration test example for use with the Bigtable Emulator maven plugin.
 */
public class DataflowWriteExampleIT {

  private static final String PROJECT_ID = "fake";
  private static final String INSTANCE_ID = "fakeinstance";
  private static final String TABLE_ID = "example_table";
  private static final String COLUMN_FAMILY = "cf";
  private static final String COLUMN_QUALIFIER = "cq";

  private static final CloudBigtableTableConfiguration TABLE_CONFIG =
      new CloudBigtableTableConfiguration.Builder()
          .withProjectId(PROJECT_ID)
          .withInstanceId(INSTANCE_ID)
          .withTableId(TABLE_ID)
          .build();

  public static final List<String> VALUES_TO_PUT = Arrays
      .asList("hello", "world", "introducing", "Bigtable", "plus", "Dataflow", "IT");

  @Test
  public void testPipelineWrite() throws IOException {
    try (Connection connection = BigtableConfiguration.connect(PROJECT_ID, INSTANCE_ID)) {
      Admin admin = connection.getAdmin();
      createTable(admin);

      List<Mutation> puts = createTestPuts();

      // Use Dataflow to write the data--this is where you'd call the pipeline you want to test.
      Pipeline p = Pipeline.create();
      p.apply(Create.of(puts)).apply(CloudBigtableIO.writeToTable(TABLE_CONFIG));
      p.run().waitUntilFinish();

      // Read the data from the table using the regular hbase api for validation
      ResultScanner scanner = getTableScanner(connection);
      List<String> resultValues = new ArrayList<>();
      for (Result row : scanner) {
        String cellValue = getRowValue(row);
        System.out.println("Found value in table: " + cellValue);
        resultValues.add(cellValue);
      }

      Assert.assertThat(resultValues,
          IsIterableContainingInAnyOrder.containsInAnyOrder(VALUES_TO_PUT.toArray()));
    }
  }

  private void createTable(Admin admin) throws IOException {
    HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(TABLE_ID));
    tableDesc.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
    admin.createTable(tableDesc);
  }

  private ResultScanner getTableScanner(Connection connection) throws IOException {
    Scan scan = new Scan();
    Table table = connection.getTable(TableName.valueOf(TABLE_ID));
    return table.getScanner(scan);
  }

  private String getRowValue(Result row) {
    return Bytes.toString(row.getValue(toBytes(COLUMN_FAMILY), toBytes(COLUMN_QUALIFIER)));
  }

  private List<Mutation> createTestPuts() {
    return VALUES_TO_PUT
        .stream()
        .map(this::stringToPut)
        .collect(Collectors.toList());
  }

  private Mutation stringToPut(String cellValue) {
    String key = UUID.randomUUID().toString();
    Put put = new Put(toBytes(key));
    put.addColumn(toBytes(COLUMN_FAMILY), toBytes(COLUMN_QUALIFIER), toBytes(cellValue));
    return put;
  }
}
In Google Cloud you can do e2e testing of your Dataflow pipeline easily, using real cloud resources like Pub/Sub topics and BigQuery tables.
By using the JUnit 5 Extension Model (https://junit.org/junit5/docs/current/user-guide/#extensions) you can create custom classes that handle the creation and deletion of the required resources for your pipeline.
You can find a demo/seed project here https://github.com/gabihodoroaga/dataflow-e2e-demo and a blog post here https://hodo.dev/posts/post-31-gcp-dataflow-e2e-tests/.
I am building a Jenkins pipeline and the job can be triggered remotely. I need to know which IP address triggered the job, so I have a little Groovy script that returns the remote IP. With the EnvInject plugin I can easily use this variable in a normal freestyle job, but how can I use it in the pipeline script? I can't use the EnvInject plugin with the pipeline plugin :(
Here is the little script for getting the IP:
import hudson.model.*
import static hudson.model.Cause.RemoteCause

def ipaddress = ""
for (CauseAction action : currentBuild.getActions(CauseAction.class)) {
    for (Cause cause : action.getCauses()) {
        if (cause instanceof RemoteCause) {
            ipaddress = cause.addr
            break
        }
    }
}
return ["ip": ipaddress]
You can create a shared library function (see here for examples and the directory structure). This is one of the undocumented (or at least very hard to find documentation for) features of Jenkins.
Put a file named ipTrigger.groovy in the vars directory, which sits in the workflow-libs directory at the root level of JENKINS_HOME, and put your code in that file.
The full filename will then be $JENKINS_HOME/workflow-libs/vars/ipTrigger.groovy
(You can even make a git repo for your shared libraries and clone it in that directory)
// workflow-libs/vars/ipTrigger.groovy
import hudson.model.*
import static hudson.model.Cause.RemoteCause

@com.cloudbees.groovy.cps.NonCPS
def call(currentBuild) {
    def ipaddress = ""
    for (CauseAction action : currentBuild.getActions(CauseAction.class)) {
        for (Cause cause : action.getCauses()) {
            if (cause instanceof RemoteCause) {
                ipaddress = cause.addr
                break
            }
        }
    }
    return ["ip": ipaddress]
}
After a restart of Jenkins, from your pipeline script, you can call the method by the filename you gave it.
So from your pipeline just call def trigger = ipTrigger(currentBuild)
The IP address will then be trigger.ip (sorry for the bad naming, I couldn't come up with something more original).
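For example, a minimal pipeline using it could look like this (assuming the shared library containing vars/ipTrigger.groovy is configured and loaded globally):

node {
    def trigger = ipTrigger(currentBuild)
    echo "Triggered from IP: ${trigger.ip}"
}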
I want to refactor my Jenkins pipeline script into classes for readability and reuse.
The problem is I get exceptions when doing so.
Let's look at a simple example:
When I run
echo currentBuild.toString()
everything is fine.
But when I extract it into a class like so:
class MyClass implements Serializable {
    def runBuild() {
        echo currentBuild.toString()
    }
}

new MyClass().runBuild()
I get an exception:
Started by user admin
Replayed #196
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: currentBuild for class: MyClass
What is the proper way of extracting pipeline code in to classes?
You are on the right track, but the problem is that you didn't pass the script object to the instance of your class, so you were calling steps and properties that are not defined in the class you created.
Here is one way to solve this:
// Jenkinsfile or pipeline script editor in your job
new MyClass(this).runBuild()

// Class declaration
class MyClass implements Serializable {
    def script

    MyClass(def script) {
        this.script = script
    }

    def runBuild() {
        script.echo script.currentBuild.toString()
    }
}
Your code is missing the declaration of the class field script:
class MyClass implements Serializable {
    def script

    MyClass(def script) {
        this.script = script
    }

    def runBuild() {
        script.echo script.currentBuild.toString()
    }
}
This code should be OK, @bram.
I would like to use a slightly more complex pipeline build via Jenkinsfiles, with some reusable steps, as I have a lot of similar projects. I'm using Jenkins 2.0 with the pipeline plugins. I know that you can load Groovy scripts which can contain some generic pieces of code, but I was wondering if these scripts can use some of the object-oriented features of Groovy, like traits. For example, say I had a trait called Step:
package com.foo.something.ci
trait Step {
    void execute() { echo 'Null execution' }
}
And a class that then implemented the trait in another file:
class Lint implements Step {
    def execute() {
        stage('lint')
        node {
            echo 'Do Stuff'
        }
    }
}
And then another class that contained the 'main' function:
class foo {
    def f = new Lint()
    f.execute()
}
How would I load and use all these classes in a Jenkinsfile, especially since I may have multiple classes each defining a step? Is this even possible?
Have a look at Shared Libraries. These enable the use of native Groovy code in Jenkins.
Your Jenkinsfile would include your shared library and then use the classes you defined. Be aware that you have to pass the steps variable of Jenkins if you want to use stage or the other variables defined in the Jenkins Pipeline plugin.
Excerpt from the documentation:
This is the class that would define your stages:
package org.foo

class Utilities implements Serializable {
    def steps

    Utilities(steps) { this.steps = steps }

    def mvn(args) {
        steps.sh "${steps.tool 'Maven'}/bin/mvn -o ${args}"
    }
}
You would use it like this:
@Library('utils') import org.foo.Utilities

def utils = new Utilities(steps)
node {
    utils.mvn 'clean package'
}
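Applied to the trait example from the question, a rough sketch could look like the following (the library name and the file layout under src/ are assumptions, and the pipeline script object is passed in as steps):

// src/com/foo/something/ci/Step.groovy
package com.foo.something.ci

trait Step {
    abstract void execute()
}

// src/com/foo/something/ci/Lint.groovy
package com.foo.something.ci

class Lint implements Step, Serializable {
    def steps

    Lint(def steps) { this.steps = steps }

    void execute() {
        // every pipeline step goes through the injected steps object
        steps.stage('lint') {
            steps.node {
                steps.echo 'Do Stuff'
            }
        }
    }
}

// Jenkinsfile
@Library('my-shared-library') import com.foo.something.ci.Lint

new Lint(this).execute()

Each step class then only needs the pipeline script object to call stage, node, echo and friends; if traits turn out to be awkward under Jenkins' CPS transformation, an abstract base class holding the steps field gives you the same structure.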