I have a declarative multibranch pipeline in Jenkins that uses shared libraries. In my shared library I have a global variable, say gb1.groovy, which I call from a steps block using:
gb1(binding)
Inside gb1.groovy I have code similar to:
Map x = [a: 1, b: 2]
binding.variables.put('x',x)
Later on in my pipeline I access variable x and I have access to the Map with all its values. The problem comes when I call a second global variable, say gb2(). Inside gb2.groovy, the binding that was done in gb1 for x is not available (I get an error). However, the env binding done by Jenkins is available in gb2.
Can anyone shed some light on why this is happening and what I could do to make x available in gb2 (short of passing the binding and extracting it, which is what I am doing today)?
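For reference, a minimal sketch of that explicit-binding workaround, reusing the names from the question:

// vars/gb2.groovy -- receives the binding explicitly
def call(binding) {
    // retrieve what gb1 stored via binding.variables.put('x', x)
    Map x = binding.variables['x']
    echo "x.a = ${x.a}, x.b = ${x.b}"
}

The pipeline then calls it as gb2(binding), just like gb1.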
I'm working on my 6th or 7th Jenkins script now, and I've already noticed they share a bit of code (essentially the same Groovy subroutines over and over again). I wouldn't like to continue that way and would rather learn some best practices.
It seems that "Shared Libraries" are the thing to use. (Or is there a better way when you just want to share Groovy code, not script steps etc.?)
Those scripts are part of a larger repo (that contains the source of the entire project, including the other scripts), stored in a subfolder Jenkins/Library with this structure:
Jenkins/Library
+- vars
| common_code.groovy
There is only a vars folder, no src. The documentation said
For Shared Libraries which only define Global Variables (vars/), or a Jenkinsfile which only needs a Global Variable, the annotation pattern @Library('my-shared-library') _ may be useful for keeping code concise. In essence, instead of annotating an unnecessary import statement, the symbol _ is annotated.
so I concluded that I wouldn't need a src folder and could do with vars alone.
The library is made available via "Configure Jenkins" > "Global Pipeline Libraries" with SourcePath set to "/Jenkins/Library/" and is brought in with the statement @Library('{name}') _ as the first line of the script.
However, when attempting to use the library, I get the error shown in the subject.
What's the problem? (I already searched around and found this instance of the problem, but that doesn't seem to fit for my issue - unless I misunderstood something.)
To specify the name of the library, you should set the same name in your Jenkins settings:
Name.
An identifier you pick for this library, to be used in the @Library annotation. An environment variable library.THIS_NAME.version will also be set to the version loaded for a build (whether that comes from the Default version here, or from an annotation after the @ separator).
The '{name}' parameter inside @Library() is a literal, so you need to register a library with exactly that name. It is not a variable: "${name}" is not a built-in variable and is undefined.
If you wish to name your library the same as your Jenkins pipeline, you could use the env.JOB_NAME variable, or check all environment and pre-defined variables:
println env.getEnvironment()
Or check job parameters only:
println params
Now step-by-step instructions:
Create your library, for example from Git SCM, as shown in the screenshot.
Put your library code in the project, e.g. <project_root_folder>/vars/common_code.groovy. You don't need the additional path Jenkins/Library. Also, you have named your file in 'snake case' style, which is not usual for Groovy:
The vars directory hosts scripts that define global variables accessible from Pipeline. The basename of each *.groovy file should be a Groovy (~ Java) identifier, conventionally camelCased.
So in 'camel case' your file should be named commonCode.groovy.
Write your library code:
// vars/commonCode.groovy
// Define your method
def call() {
    // do some stuff
    return 'Some message'
}
Write your pipeline. Example of scripted pipeline:
#!/usr/bin/env groovy
// yourPipeline.groovy file in your project
@Library('jenkins-shared-library') _
// Get the message from the method in your library
def messageText = commonCode() as String
println messageText
If you wish to define some global variables, this answer may also help you.
PS: Using the vars folder lets you load everything in it at once. If you wish to load code on demand, use import with classes from the src folder, as sketched below.
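For comparison, a minimal sketch of the src variant; the package and class names (org.example.Utils) are made up for illustration:

// src/org/example/Utils.groovy in the library repo
package org.example

class Utils implements Serializable {
    static String greet(String name) {
        return "Hello, ${name}"
    }
}

The pipeline then annotates the import itself, as in the documentation quote above:

// Jenkinsfile
@Library('jenkins-shared-library') import org.example.Utils

println Utils.greet('world')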
Is there a way to use a global pipeline library when defining an Active Choices Reactive Parameter in Jenkins?
I have added a global pipeline library in Jenkins, let's say PipelineLibrary, and I am able to use it successfully in my pipelines by loading it with @Library('PipelineLibrary') _. In this library I have a global function foo.bar(), which I would like to use also in the Groovy Script box when adding an Active Choices Reactive Parameter to several of my jobs.
So I would like to have something like this in the Groovy Script box of that parameter:
// Somehow take into use PipelineLibrary
return foo.bar();
What is the correct syntax to load the library here?
Or is it even possible? If not, is there some other way to share the Groovy script to several places without just copy-pasting the code in the GUI?
I think you're banging on this door - JENKINS-46394
Actually, I have found a way to do it. You just need to define new global variables in your Jenkinsfile and take their values from the shared library. Afterwards you can use the newly defined global variables in your Active Choices parameter script, as in the sketch below.
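A hedged sketch of one way to read this, assuming the Active Choices plugin and that foo.bar() returns the list of choices; the parameter name TARGET_ENV is hypothetical. The library call runs in the pipeline, where the library is available, and its result is baked into the parameter's script text:

@Library('PipelineLibrary') _

// Runs in the pipeline, not in the parameter's Groovy Script box.
def choices = foo.bar()

properties([
    parameters([
        [$class: 'ChoiceParameter',
         choiceType: 'PT_SINGLE_SELECT',
         name: 'TARGET_ENV',
         script: [$class: 'GroovyScript',
                  fallbackScript: [classpath: [], sandbox: true, script: 'return ["ERROR"]'],
                  // inspect() renders the list as a valid Groovy literal
                  script: [classpath: [], sandbox: true, script: "return ${choices.inspect()}"]]]
    ])
])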
I want to specify the machine type in a Vertex AI pipeline using the kfp SDK.
I don't know how to specify machine_type when executing a step as a component of the pipeline.
I tried kfp.v2.google.experimental.run_as_aiplatform_custom_job, but it ran as a CustomJobExecution instead of a ContainerExecution.
For that reason I want to use an Artifact, but the artifact is not mounted on this component.
Since I want to use the artifacts of the previous components and the Output[Artifact] feature, I want to execute it as a ContainerExecution instead of a CustomJobExecution.
You can use set_memory_limit and set_cpu_limit, just as you would with Kubeflow Pipelines. Vertex Pipelines will convert these limits to a machine type that satisfies your request.
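A minimal sketch, assuming a recent kfp SDK; the component and pipeline names are placeholders:

from kfp import dsl

@dsl.component
def train(message: str) -> str:
    # placeholder for real work
    return message

@dsl.pipeline(name='resource-limits-demo')
def pipeline():
    task = train(message='hello')
    # Vertex Pipelines picks a machine type satisfying these limits,
    # and the step still runs as an ordinary container execution.
    task.set_cpu_limit('4')
    task.set_memory_limit('16G')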
I am using Jenkins 2.6 on Red Hat Linux. I want to use the following in my Jenkinsfile, but when I try, Jenkins complains bitterly. (It seems only to dislike the syntax on the left-hand side of the = operator.):
def (a, b) = [6, 7]
It doesn't like Multiple Assignments it seems, yet Groovy 1.6 and later apparently support them, as per this post:
http://mrhaki.blogspot.co.uk/2009/09/groovy-goodness-multiple-assignments.html
I want to do this so that when I call a method that returns [6, 7] I can call it like this:
def (a, b) = mymethod()
def mymethod() {
    return [6, 7]
}
Can anyone tell me whether this should work in Jenkins and if so in which version of Jenkins? Or is it an unsupported feature? Or a bug?
Thanks
As explained here, Pipeline "scripts" are not simple Groovy scripts; they are heavily transformed before running, some parts on the master, some parts on slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported.
I wish it was made more clear in Jenkins docs & presentations (that's why I find myself repeating the paragraph above... there is much confusion about this). Pipeline is so simple it seems magic... well, it kinda is ;)
It seems multiple assignments are indeed not supported. The only reference I found to confirm this is an example of a commit where the author changes his code from multiple assignments to simple ones because of that limitation.
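As a sketch of that workaround, the example from the question rewritten with simple assignments:

// unpack the returned list by index instead of multiple assignment
def result = mymethod()
def a = result[0]
def b = result[1]

def mymethod() {
    return [6, 7]
}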
Probably a good idea to ask for it on the mailing list as suggested by @rjohnston.
It should be available (and indeed it works from the Script Console) however it looks like whatever is necessary to make it work from a pipeline script hasn't been done.
The dev mailing list (https://groups.google.com/forum/#!forum/jenkinsci-dev) may be able to tell you if this is a bug or just an unsupported feature!
A custom plugin we wrote for an older version of Jenkins uses an EnvironmentContributingAction to provide environment variables to the execution so they could be used in future build steps and passed as parameters to downstream jobs.
While attempting to convert our build to workflow, I'm having trouble accessing these variables:
node {
    // this step queries an API and puts the results in
    // environment variables called FE1|BE1_INTERNAL_ADDRESS
    step([$class: 'SomeClass', parameter: foo])

    // this ends up echoing 'null and null'
    echo "${env.FE1_INTERNAL_ADDRESS} and ${env.BE1_INTERNAL_ADDRESS}"
}
Is there a way to access the environment variable that was injected? Do I have to convert this functionality to a build wrapper instead?
EnvironmentContributingAction is currently limited to AbstractBuilds, which WorkflowRuns are not, so pending JENKINS-29537 which I just filed, your plugin would need to be modified somehow. Options include:
Have the builder add a plain Action instead, then register an EnvironmentContributor whose buildEnvironmentFor(Run, …) checks for its presence using Run.getAction(Class); see the sketch after this list.
Switch to a SimpleBuildWrapper which defines the environment variables within a scope, then invoke it from Workflow using the wrap step.
Depend on workflow-step-api and define a custom Workflow Step with comparable functionality but directly returning a List<String> or whatever makes sense in your context. (code sample)
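A hedged Java sketch of the first option; the class and variable names are invented to match the question:

import hudson.EnvVars;
import hudson.Extension;
import hudson.model.EnvironmentContributor;
import hudson.model.InvisibleAction;
import hudson.model.Run;
import hudson.model.TaskListener;

// Plain action attached by the builder; it only carries the data.
class AddressAction extends InvisibleAction {
    final String fe1InternalAddress;

    AddressAction(String fe1InternalAddress) {
        this.fe1InternalAddress = fe1InternalAddress;
    }
}

// Registered contributor: looks the action up on any Run (WorkflowRuns included)
// and publishes its data as an environment variable.
@Extension
public class AddressEnvironmentContributor extends EnvironmentContributor {
    @Override
    public void buildEnvironmentFor(Run r, EnvVars envs, TaskListener listener) {
        AddressAction action = r.getAction(AddressAction.class);
        if (action != null) {
            envs.put("FE1_INTERNAL_ADDRESS", action.fe1InternalAddress);
        }
    }
}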
Since PR-2975 was merged, you can use the new interface:
void buildEnvVars(@Nonnull Run<?, ?> run, @Nonnull EnvVars env, @CheckForNull Node node)
It will be used by the old types of builds as well.