I am writing my own Jenkins shared library. Currently my lib looks like this:
root
|
|- vars
|---function1.groovy
|---function2.groovy
|---function3.groovy
Each function file contains a call() method and the code it executes. How can I combine all those functions in one file?
Put all functions into a single file:
root
|
|- vars
|---allFunctions.groovy
Rename the functions from call() to named methods:
def function1(String string) {
    echo "function1 - $string"
}

def function2(String string) {
    echo "function2 - $string"
}
Call them from a different file (e.g. vars/buildRepo.groovy) as:
allFunctions.function1('Hello world via function1')
allFunctions.function2('Hello world via function2')
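In a vars script these calls would typically sit inside a call() method; a minimal sketch of vars/buildRepo.groovy (the wrapper step is illustrative, not from the original answer):
// vars/buildRepo.groovy -- sketch of a step delegating to allFunctions
def call() {
    allFunctions.function1('Hello world via function1')
    allFunctions.function2('Hello world via function2')
}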
Call them from within the same file (e.g. vars/allFunctions.groovy) as:
function1('Hello world via function1')
function2('Hello world via function2')
As @matt-schuchard noted, this is described in the shared-library documentation under Defining global variables, using a log.groovy example: https://www.jenkins.io/doc/book/pipeline/shared-libraries/#defining-global-variables
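The pattern in that documentation example is the same; roughly (adapted from the docs):
// vars/log.groovy
def info(message) {
    echo "INFO: ${message}"
}

def warning(message) {
    echo "WARNING: ${message}"
}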
Imagine I have a java_binary target triggered by a custom rule that generates source code and places the generated sources under a directory, let's call it "root".
So after the code generation we will have something like this:
// bazel-bin/...../src/com/example/root
root:
  -> Foo.java
  -> Bar.java
  -> utils
     -> Baz.java
Now, I have another target, a java_library, that depends on the previously generated sources, so it depends on the custom rule.
My custom rule definition currently looks something like this:
def _code_generator(ctx):
    outputDir = ctx.actions.declare_directory("root")
    files = [
        ctx.actions.declare_file("root/Foo.java"),
        ctx.actions.declare_file("root/Bar.java"),
        ctx.actions.declare_file("root/utils/Baz.java"),
        # and many,
        # many other files
    ]
    outputs = []
    outputs.append(outputDir)
    outputs.extend(files)
    ctx.actions.run(
        executable = ...,  # executable pointing to the java_binary
        outputs = outputs,
        # ....
    )
This works. But as you can see, every anticipated file that is to be generated is hard-coded in the rule definition. This makes it very fragile, should the code generation produce a different set of files in the future (which it will).
(Without specifying each of the files, as shown above, Bazel will fail the build saying that the files have no generating action.)
So I was wondering, is there a way to read the content of the root directory and automatically, somehow, declare each of the files as an output?
What I tried:
The documentation of declare_directory says:
The contents of the directory are not directly accessible from Starlark, but can be expanded in an action command with Args.add_all().
And add_all says:
[...] Each directory File item is replaced by all Files recursively contained in that directory.
This sounds like there could be a way to get access to the individual files in the directory, but I am not sure how.
I tried:
outputDir = ctx.actions.declare_directory("root")
# ...
args = ctx.actions.args()
args.add_all(outputDir)
with the intention to access the individual files later from args, but the build fails with: "Error in add_all: expected value of type sequence or depset for values, got File".
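(Incidentally, that type error is just because add_all expects a sequence or depset; wrapping the directory File in a list satisfies it, though the expansion into individual files still happens only at execution time, never in Starlark. A minimal sketch:)
# add_all wants a sequence/depset, so wrap the single directory File in a list.
# The directory is expanded to the files it contains at execution time;
# the individual files remain invisible to Starlark here.
args = ctx.actions.args()
args.add_all([outputDir])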
Any other ideas on how to implement the rule, so that I don't have to hard-code each and every file that will be generated?
I am trying to write a Jenkins Shared Library for my CI process. I'd like to reference a class in the /src folder from a global function defined in the /vars folder, since that would allow me to put most of the logic in classes instead of in the global functions. I am following the repository structure documented in the official Jenkins documentation:
Jenkins Shared Library structure
Here's a simplified example of what I have:
/src/com/example/SrcClass.groovy
package com.example

class SrcClass {
    def aFunction() {
        return "Hello from src folder!"
    }
}
/vars/classFromVars.groovy
import com.example.SrcClass

def call(args) {
    def sc = new SrcClass()
    return sc.aFunction()
}
Jenkinsfile
@Library('<lib-name>') _
pipeline {
    ...
    post {
        always {
            classFromVars()
        }
    }
}
My goal was for the global functions in the /vars folder to act as a sort of public facade that I can use in my Jenkinsfile as a custom step, without having to instantiate a class in a script block (making it compatible with declarative pipelines). It all seems pretty straightforward to me, but I am getting this error when running the classFromVars file:
<root>\vars\classFromVars.groovy: 1: unable to resolve class com.example.SrcClass
 @ line 1, column 1.
   import com.example.SrcClass
   ^
1 error
I tried running the classFromVars script directly with the Groovy CLI, locally and on the Jenkins server, and got the same error in both environments. I also tried specifying the classpath when running the /vars script, getting the same error, with the following command:
<root>>groovy -cp <root>\src\com\example vars\classFromVars.groovy
Is what I'm trying to achieve possible? Or should I simply put all of my logic in the /vars class and avoid using the /src folder?
I have found several repositories on GitHub that seem to indicate this is possible, for example this one: https://github.com/fabric8io/fabric8-pipeline-library, which uses the classes in the /src folder in many of the classes in the /vars folder.
As @Szymon Stepniak pointed out, the -cp parameter in my groovy command was incorrect. It now works locally and on the Jenkins server. I have yet to explain why it wasn't working on the Jenkins server, though.
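For reference, the classpath has to point at the root of the package hierarchy (the src folder), not at the package directory itself, so the working command presumably looks like:
<root>>groovy -cp <root>\src vars\classFromVars.groovy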
I found that when I wanted to import a class from my shared library into a script in /vars, I needed to do it like this:
// Thanks to '_', the classes are imported automatically.
// It MUST have the '@' at the beginning, otherwise it will not work.
// When "@BRANCH" is omitted, the default branch of the git repo is used.
@Library('my-shared-library@BRANCH') _

// Only by calling them can you tell whether they exist or not.
def exampleObject = new example.GlobalVars()
// Then call methods or attributes from the class.
exampleObject.runExample()
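For context, this assumes the library contains a class along these lines (the path and contents are illustrative, not from the original post):
// src/example/GlobalVars.groovy (hypothetical)
package example

class GlobalVars {
    def runExample() {
        println "running example"
    }
}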
So I'm trying to define folder-level variables by putting them in a Groovy file in the /vars directory.
Alas, the documentation is so bad that it's impossible to figure out how to do that...
Assuming we have two globals G1 and G2, is this how we define them in the Groovy file?
#!Groovy
static string G1 = "G1"
static string G2 = "G2"
Assuming the Groovy file is called XYZ.Groovy, how do I define it in the folder so it's available for the folder's script?
Assuming I get over that, and that LIBXYZ is the name the folder associates with the stuff in the /vars directory, is it correct to assume that when I call
#Library("LIBXYZ") _
it will make XYZ available?
In that case, is XYZ.G1 the way to access the globals?
thanks, a.
I have a working example here as I was recently curious about this. I agree that the documentation is wretched.
The following is similar to the info in README.md.
Prep: note that folder here refers to Jenkins Folders from the CloudBees Folder plugin. It is a way to organize jobs.
Code Layout
The first part to note is src/net/codetojoy/shared/Bar.groovy :
package net.codetojoy.shared

class Bar {
    static def G1 = "G1"
    static def G2 = "G2"

    def id

    def emitLog() {
        println "TRACER hello from Bar. id: ${id}"
    }
}
The second part is vars/folderFoo.groovy:
def emitLog(message) {
    println "TRACER folderFoo. message: ${message}"

    def bar = new net.codetojoy.shared.Bar(id: 5150)
    bar.emitLog()

    println "TRACER test : " + net.codetojoy.shared.Bar.G1
}
Edit: To use a static/"global" variable in the vars folder, consider the following vars/Keys.groovy:
class Keys {
    static def MY_GLOBAL_VAR3 = "beethoven"
}
The folderFoo.groovy script can use Keys.MY_GLOBAL_VAR3.
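For example (a one-method sketch; the method name is illustrative):
// vars/folderFoo.groovy (sketch) -- classes defined in vars/ are visible by name
def emitKey() {
    println "TRACER key : " + Keys.MY_GLOBAL_VAR3
}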
And then usage (in my example: Basic.Folder.Jenkinsfile):
@Library('folderFoo') _

stage "use shared library"

node {
    script {
        folderFoo.emitLog 'pipeline test!'
    }
}
Jenkins Setup: Folder
- Go to New Item and create a new Folder
- Configure the folder with a new Pipeline library:
  - Name is folderFoo
  - Default version is master
  - Retrieval Method is Modern SCM
  - Source Code Management in my example is this repo
Jenkins Setup: Pipeline Job
- Create a new Pipeline job in the folder created above
- Though a bit confusing (and self-referential), I create a pipeline job that uses this same repo
- Specify the Jenkinsfile Basic.Folder.Jenkinsfile
- The job should run and use the library
I've got a project structured like this:
/
  Jenkinsfile
  build_tools/
    pipeline.groovy      # Functions which define the pipeline
    reporting.groovy     # Other misc build reporting stuff
    dostuff.sh           # A shell script used by the pipeline
    domorestuff.sh       # Another pipeline-supporting shell script
Is it possible to import the groovy files in /build_tools so that I can use functions inside those 2 files in my Jenkinsfile?
Ideally, I'd like to have a Jenkinsfile that looks something like this (pseudocode):
from build_tools.pipeline import build_pipeline
build_pipeline(project_name="my project", reporting_id=12345)
The bit I'm stuck on is how you write a working equivalent of that pretend import statement on line #1 of my pseudocode.
PS. Why I'm doing this: The build_tools folder is actually a git submodule shared by many projects. I'm trying to give each project access to a common set of build tooling to stop each project maintainer from reinventing this wheel.
The best-supported way to load shared groovy code is through shared libraries.
If you have a shared library like this:
simplest-jenkins-shared-library master % cat src/org/foo/Bar.groovy
package org.foo;

def awesomePrintingFunction() {
    println "hello world"
}
Shove it into source control and configure it in your Jenkins job, or even globally (this is one of the only things you do through the Jenkins UI when using Pipeline). [Screenshot: shared-library configuration in the Jenkins UI.]
and then use it, for example, like this:
pipeline {
    agent { label 'docker' }
    stages {
        stage('build') {
            steps {
                script {
                    @Library('simplest-jenkins-shared-library')
                    def bar = new org.foo.Bar()
                    bar.awesomePrintingFunction()
                }
            }
        }
    }
}
Output from the console log for this build would of course include:
hello world
There are lots of other ways to write shared libraries (like using classes) and to use them (like defining vars so you can use them in Jenkinsfiles in super-slick ways). You can even load non-Groovy files as resources. Check out the shared library docs for these extended use cases.
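For instance, resources can be pulled in with the libraryResource step; a minimal sketch (the resource path is illustrative):
// Loads a file stored under resources/ in the shared library as a String.
def script = libraryResource 'org/foo/scripts/dostuff.sh'
writeFile file: 'dostuff.sh', text: script
sh 'bash dostuff.sh'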
I am developing some fixtures in Java to use with FitNesse Slim. I run into problems (EXCEPTION: java.lang.NoClassDefFoundError) when I update my root page with paths like this:
!define TEST_SYSTEM {slim}
!path C:\WORKSPACE\Projects\iperoom_67_workspace\acceptance_test_project\bin
!path C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\common_util\target\classes
!path C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\dfc_util\target\classes
Where a class in, e.g., ...BASE\dfc_util\target\classes has the following import:
import no.joint.iperoom.test.AbstractDfcTest;
// ... rest of the class ...
Which gives the complete path in my local C drive workspace:
C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\dfc_util\target\classes\no\joint\iperoom\test
My question is: could I say, on the root page, something like
classpath: C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\*
as in, take in all the .class files from here and below? Something more general?
And possibly import several paths to .class files on the FitNesse test page:
|import|
|dfc_util.target.classes.no.joint.iperoom.test.AbstractDfcTest|
Or is there a better way to solve this problem with a growing number of !path entries in my root page, caused by one .class calling another .class, which calls another .class, and so forth?
Or maybe my fixture code is not good enough:
public class SessionHelperTest /* extends AbstractDfcTest */ {

    public boolean testNewSession() {
        System.out.println("Hello Joint");
        IDfSession session = SessionRegistry.getSuperUserSession("eRoomPCI_v_1_1");
        try {
            String si = session.getSessionId();
            System.out.println("The sessionId is:\n" + si);
            return true;
        } catch (DfException e) {
            e.printStackTrace();
            return false;
        }
    }
}
Cheers
Magnus
I don't think !path is going to work the way you want it to. If you define it at too low a level, I'm pretty sure it won't find your classes.
The !path works fine when you do any of the following:
This will get all of the class files under build/classes if it is under the folder FitNesse starts in:
!path build/classes
This will handle multiple jar files:
!path lib/*.jar
Important to note is that you can leverage environment variables for this. Assuming you have an environment variable called WORKSPACE defined that points to the base of your project, you can do this:
!path ${WORKSPACE}/acceptance_test_project/bin
!path ${WORKSPACE}/acceptance_test_project/common_util/target/classes
!path ${WORKSPACE}/acceptance_test_project/dfc_util/target/classes
The reality is that if your files are scattered across multiple folders, you will have to use multiple entries, if only to make sure you can control the order in which the path is processed. If you define the paths only on your FrontPage, everything below it will inherit the same path, so you only have to manage them in one location. The list might be longer than you prefer, but the maintenance is manageable.
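Putting that together, the FrontPage alone could carry the path definitions, and every sub-page would inherit them (paths taken from the question above, assuming the WORKSPACE environment variable points at iperoom_67_workspace):
!define TEST_SYSTEM {slim}
!path ${WORKSPACE}/acceptance_test_project/bin
!path ${WORKSPACE}/iperoom/BASE/common_util/target/classes
!path ${WORKSPACE}/iperoom/BASE/dfc_util/target/classes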