In the Job DSL, there is the method readFileFromWorkspace(), which makes it possible to read a file's content from the workspace.
Now I would like to have something like readFilesFromDirectory(), which gives me all files in some directory.
The goal is to make it possible to choose from different Ansible playbooks:
choiceParam('PLAYBOOK_FILE', ['playbook1.yml', 'playbook2.yml'])
and to populate this list with existing files from a directory. Is something like this possible?
Well, shortly after asking this question, I found the solution: the Hudson API can be used:
hudson.FilePath workspace =
hudson.model.Executor.currentExecutor().getCurrentWorkspace()
def resultList = workspace.list().findAll { it.name ==~ /deploy.*\.yml/ }
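A minimal sketch of feeding that result into the choice parameter from the question (assuming this runs inside a Job DSL seed script; the job name is illustrative):

// Collect the matching playbook file names and offer them as choices
def playbooks = resultList.collect { it.name }
job('deploy-app') { // hypothetical job name
    parameters {
        choiceParam('PLAYBOOK_FILE', playbooks, 'Ansible playbook to run')
    }
}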
I am working on my 6th or 7th Jenkins script now, and I have already noticed they share a bit of code (essentially the same Groovy subroutines over and over again). I would rather not continue like that and instead learn some best practices.
It seems that "Shared Libraries" are the thing to do. (Or is there a better way when you just want to share Groovy code, not script steps etc.?)
Those scripts are part of a larger repo (that contains the source of the entire project, including the other scripts), stored in a subfolder Jenkins/Library with this structure:
Jenkins/Library
+- vars
| common_code.groovy
There is only a vars folder, no src. The documentation says:
For Shared Libraries which only define Global Variables (vars/), or a Jenkinsfile which only needs a Global Variable, the annotation pattern @Library('my-shared-library') _ may be useful for keeping code concise. In essence, instead of annotating an unnecessary import statement, the symbol _ is annotated.
so I concluded that I wouldn't need a src folder and can do with vars alone.
The library is made available via "Configure Jenkins" > "Global Pipeline Libraries" with SourcePath set to "/Jenkins/Library/" and is brought in with the statement @Library('{name}') _ as the first line of the script.
However, when attempting to use the library, I get the error shown in the subject.
What's the problem? (I already searched around and found this instance of the problem, but that doesn't seem to fit for my issue - unless I misunderstood something.)
To reference a library by name, you must configure a library with that same name in your Jenkins settings:
Name
An identifier you pick for this library, to be used in the @Library annotation. An environment variable library.THIS_NAME.version will also be set to the version loaded for a build (whether that comes from the Default version here, or from an annotation after the @ separator).
The '{name}' parameter inside of @Library() is taken literally, so you should add a library with exactly that name. It is not interpolated like "${name}", which is not a built-in variable and would be undefined anyway.
If you wish to name your library after your Jenkins pipeline, you could use the env.JOB_NAME variable, or inspect all environment and pre-defined variables:
println env.getEnvironment()
Or check job parameters only:
println params
Now step-by-step instructions:
Create your library, for example from Git SCM, under "Configure Jenkins" > "Global Pipeline Libraries".
Put your library code in the project, e.g. <project_root_folder>/vars/common_code.groovy. You don't need the additional path Jenkins/Library. Also, you have named your file in 'snake case' style, which is not usual for Groovy:
The vars directory hosts scripts that define global variables
accessible from Pipeline. The basename of each *.groovy file should be
a Groovy (~ Java) identifier, conventionally camelCased.
So your file, in 'camel case', should be named commonCode.groovy (matching the code below).
Write your library code:
// vars/commonCode.groovy
// Define your method
def call() {
    // do some stuff
    return 'Some message'
}
Write your pipeline. Example of a scripted pipeline:
#!/usr/bin/env groovy
// yourPipeline.groovy file in your project
@Library('jenkins-shared-library') _
// Get the message from the method in your library
def messageText = commonCode() as String
println messageText
If you wish to define some global variables, this answer may also help you.
PS: Using the 'vars' folder allows you to load everything in it at once. If you wish to load code dynamically, use an import from the src folder.
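For example, a minimal sketch of the src-based approach (the package and class names here are illustrative):

// src/org/example/CommonUtils.groovy
package org.example

class CommonUtils implements Serializable {
    // plain Groovy helper class, loaded on demand via import
    static String buildMessage(String name) {
        return "Hello, ${name}"
    }
}

// Jenkinsfile
@Library('jenkins-shared-library') import org.example.CommonUtils

println CommonUtils.buildMessage('world')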
We have a mix of DSL-seeded and manually created jobs on our Jenkins server.
I'd like to find all jobs NOT generated by DSL (or all generated by DSL at any time in the past).
I found no indication in job's config.xml that it was generated by DSL.
So, is it possible and how?
Well, I have the same problem as you.
I wonder how the link "Seed job" within the generated job is created. I don't see it in all of the generated jobs, either.
Unfortunately, I didn't get very far with my research.
In the script console, I listed the methods for one of my jobs (let's call it foo):
Jenkins.instance.getItems().each {
    if (it.getName() == 'foo') {
        println it
        it.class.getMethods().each { method ->
            println method
        }
    }
}
However, I didn't see any methods containing jobdsl there.
I found a file $JENKINS_HOME/javaposse.jobdsl.plugin.ExecuteDslScripts.xml that contains generated job names and their seed jobs. But I don't know whether there is an official Job DSL API for reading it.
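As an unofficial workaround, that file can at least be inspected from the script console (a sketch; its structure is an implementation detail of the Job DSL plugin and may change between versions):

// Script console: dump the plugin's record of generated jobs and their seed jobs
def f = new File(Jenkins.instance.rootDir, 'javaposse.jobdsl.plugin.ExecuteDslScripts.xml')
if (f.exists()) {
    println f.text
}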
So... if you find more information, I'd be glad to know - good luck!
All the tutorials that I have come across regarding writing a declarative pipeline suggest including the stages and steps in the Jenkinsfile.
But I have noticed one of my seniors writing it the opposite way. He uses the Jenkinsfile just for defining all the properties, i.e. his Jenkinsfile is just a properties file, nothing more, nothing less.
And for defining the pipeline he makes use of the shared library concepts where he writes his pipeline code in a file in the vars folder. I am not able to guess the wisdom behind this approach.
I haven't come across anything similar anywhere on the internet.
Any guidance in this regard is highly appreciated. I am a beginner in the Jenkins world.
As illustrated in Extending with Shared Libraries, that approach (which I use as well) allows you to:
keep the Jenkinsfile content to a minimum
enforce a standard way of doing a particular job (as coded in the shared library)
That shared library becomes a template of a process for which you provide only values in your Jenkinsfile before delegating the actual execution to the pre-defined library.
The OP Asif Kamran Malick notes that the documentation does include:
There is also a “builder pattern” trick using Groovy’s Closure.DELEGATE_FIRST, which permits Jenkinsfile to look slightly more like a configuration file than a program, but this is more complex and error-prone and is not recommended.
He then asks:
Why did the blogger prefer that way when it's actually discouraged in the official doc?
I checked, and we are also using Closure.DELEGATE_FIRST.
The reason is in the part "permits Jenkinsfile to look slightly more like a configuration file than a program".
It avoids us having to define a JSON block and keeps the parameters as a series of key=value lines, which is easier to read.
A call to a shared library is then:
#!/usr/bin/env groovy
@Library("MyLibraries") _
MyLibrary {
    config1 = 'value1'
    config2 = 'value2'
    ...
}
{
    anotherConfigA = 'valueA'
    anotherConfigB = 'valueB'
    ...
    astep(
        ...
    )
}
Then your Jenkins pipeline template in MyLibraries/vars/MyLibrary.groovy can use those closure blocks:
def call(Closure configBlock, Closure body) {
    def config = [:]
    configBlock.resolveStrategy = Closure.DELEGATE_FIRST
    configBlock.delegate = config
    configBlock()
    astep(
        ...
    ) {
        if (body) { body() }
    }
}
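A more concrete sketch of the same template, with the elided step replaced by an illustrative echo inside a node (the step name myDeploy and the config keys are assumptions for illustration, not the original library):

// MyLibraries/vars/myDeploy.groovy (hypothetical)
def call(Closure configBlock, Closure body = null) {
    // collect the key=value assignments from the first closure into a map
    def config = [:]
    configBlock.resolveStrategy = Closure.DELEGATE_FIRST
    configBlock.delegate = config
    configBlock()

    node {
        echo "Deploying with config1=${config.config1} and config2=${config.config2}"
        if (body) { body() } // optional extra steps supplied by the caller
    }
}

A Jenkinsfile then reads like a configuration file:

myDeploy {
    config1 = 'value1'
    config2 = 'value2'
}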
I'm writing a somewhat complex global pipeline library. The library really just orchestrates a complex build, with a whole bunch of steps exposed as vars/* and a single src/com/myorg/pipeline/utils.groovy class that handles all common pieces of functionality. Each Jenkinsfile defines all 'build' specific config, and passes it to a vars/myBuildFlavor.groovy step that then calls all steps required for that flavor of the build. The vars/myBuildFlavor.groovy step also reads a server config file that contains all config that is global to each Jenkins instance.
This setup works incredibly well. It allows users to either piece together their own builds from the steps I've exposed in the global library, or just set all build properties in their Jenkinsfile and call an existing flavor of a build that I've exposed as a step. What I'm struggling with is how I can access configuration values from both the 'build' and 'server' configuration, plus I have some random properties from steps early on in the build that I want to save and use later in the build. What is incredibly annoying is that I have to pass the entire context of the script around with 'this', or have extremely long method signatures to handle the juggling of all of these values.
What I'm thinking may be a good idea is to write a file in the workspace root that contains all build and server config values, plus any properties that I need later on in the build. Has anyone had to deal with this previously? Any major issues with my approach? Better ideas?
I haven't tried this, but you make me want to make sure this works. If you don't beat me to it, I'll give it a shot, so please report back...
The things in vars are created as singletons. So I think you should be able to do something like this:
// vars/customConfig.groovy
class customConfig implements Serializable {
    private String url
    private Map allTheThings = [:] // initialized so keys can be set directly

    def setUrl(myUrl) {
        url = myUrl
    }
    def getUrl() {
        url
    }
    def setAllTheThings(Map configMap) {
        allTheThings = configMap
    }
    def getAllTheThings() {
        return allTheThings
    }
    def coolMethod(myVar) {
        // pipeline steps like echo are not available inside a class, so use println
        println "This method does something cool with ${myVar} and with ${url}"
    }
}
Then access these things like:
customConfig.url = 'https://www.google.com'
echo ${customConfig.url}"
customConfig.coolMethod "FOOBAR"
customConfig.allTheThings.configItem1 = "BAZ"
customConfig.allTheThings.configItem2 = 12345
echo "${customConfig.allTheThings.configItem2} is an Int"
Since it is a "global var" or a singleton, I think you can use it everywhere and the values are all shared.
Let me know if this does what I think it will do.
env.JOB_NAME is the pipeline name suffixed with the branch name.
So env.JOB_NAME will be <jenkins_pipeline_name>_<my_branch>.
How can I just get the pipeline name and store it in a var in the environment{} block at the top of my Jenkinsfile, to use throughout the file?
I don't want to resort to a scripted pipeline, just declarative.
@red888 pointed out the following answer that worked like magic for me. I am pointing it out in an actual answer because I almost missed it:
env.JOB_BASE_NAME
Credit to @red888 in the comment above. Send upvotes his/her way.
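A minimal declarative sketch of storing it in the environment{} block, as the question asks (stage and variable names are illustrative):

pipeline {
    agent any
    environment {
        PIPELINE_NAME = "${env.JOB_BASE_NAME}"
    }
    stages {
        stage('Show') {
            steps {
                echo "Pipeline name: ${env.PIPELINE_NAME}"
            }
        }
    }
}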
Since you are using a multibranch job, the env variable returns the actual job name that was created from the branch, i.e. <jenkins_pipeline_name>_<my_branch>. So, without some mechanism to strip out the branch name, I don't think there is an out-of-the-box env variable for it in Jenkins.
You can check http://YOUR-JENKINS-SERVER.com/pipeline-syntax/globals#env, which will show you:
JOB_NAME
Name of the project of this build, such as "foo" or "foo/bar".
JOB_BASE_NAME
Short Name of the project of this build stripping off folder paths, such as "foo" for "bar/foo".
For a jenkins organization project with multiple branches and PRs, the JOB_NAME is actually YOUR_ORG_NAME/YOUR_REPO_NAME/YOUR_BRANCH_OR_PR_NAME, e.g. my-org/my-repo/PR-23, and the JOB_BASE_NAME is just PR-23. Just as Israel said, you can use env.JOB_NAME.tokenize('/') as String[] to get what you want.
If you want more fancy scripts in Jenkins, Groovy can be a good reference.
In a Jenkins pipeline:
def allJob = env.JOB_NAME.tokenize('/') as String[];
def projectName = allJob[0];
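The same expression also fits a declarative environment{} block (a sketch; PROJECT_NAME is an illustrative name):

pipeline {
    agent any
    environment {
        PROJECT_NAME = "${env.JOB_NAME.tokenize('/')[0]}"
    }
    stages {
        stage('Show') {
            steps { echo "Project: ${env.PROJECT_NAME}" }
        }
    }
}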
env.JOB_NAME is the project name... there is no doubt.
However, my issue was with the copyArtifact plugin. It said:
Unable to find project for artifact copy: YOUR_PROJECT_WITH_ARTIFACTS
This may be due to incorrect project name or permission settings;
In the end it was not an "incorrect project name", and env.JOB_NAME does work.
The issue was with permission settings, which can generally be fixed by:
pipeline {
    agent any
    options {
        copyArtifactPermission('*')
    }
    //..
}
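For completeness, a sketch of the copying side that this permission enables (the project name and selector are illustrative; copyArtifacts and lastSuccessful() come from the Copy Artifact plugin):

copyArtifacts(projectName: 'YOUR_PROJECT_WITH_ARTIFACTS', selector: lastSuccessful())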