I want to create a DSL extension for my Jenkins plugin (built using Maven), just like in the example of the Docker plugin for Jenkins. I see that the Groovy file Docker.groovy is in: src/main/resources/org/jenkinsci/plugins/docker/workflow/Docker.groovy
Does this Groovy file have to be within org.jenkinsci.plugins.docker.workflow, or can I just put it inside resources? What is the difference?
Also, if I define my DSL extension within the Groovy file in this manner, is the DSL extension available to call implicitly in the pipeline file?
In order to make a step available in the Pipeline DSL through your plugin, you need to define a subclass of Step that performs the needed task. This can be done entirely in Java, and it is the preferred method for expanding the Pipeline DSL within a Jenkins plugin.
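For illustration, a minimal sketch of such a step, written here in Groovy (the greet step name, its behavior, and all class names are hypothetical; it assumes the workflow-step-api plugin as a dependency):

import hudson.Extension
import hudson.model.TaskListener
import org.jenkinsci.plugins.workflow.steps.Step
import org.jenkinsci.plugins.workflow.steps.StepContext
import org.jenkinsci.plugins.workflow.steps.StepDescriptor
import org.jenkinsci.plugins.workflow.steps.StepExecution
import org.jenkinsci.plugins.workflow.steps.SynchronousStepExecution
import org.kohsuke.stapler.DataBoundConstructor

// Hypothetical step: `greet name: 'world'` prints a message to the build log
class GreetStep extends Step {

    final String name

    @DataBoundConstructor
    GreetStep(String name) { this.name = name }

    @Override
    StepExecution start(StepContext context) {
        new Execution(this, context)
    }

    static class Execution extends SynchronousStepExecution<Void> {
        private transient GreetStep step

        Execution(GreetStep step, StepContext context) {
            super(context)
            this.step = step
        }

        @Override
        protected Void run() throws Exception {
            // TaskListener is available because it is declared in getRequiredContext()
            context.get(TaskListener).logger.println("Hello, ${step.name}!")
            return null
        }
    }

    @Extension
    static class DescriptorImpl extends StepDescriptor {
        @Override
        String getFunctionName() { 'greet' }   // the name used in Pipeline scripts

        @Override
        Set<? extends Class<?>> getRequiredContext() {
            Collections.singleton(TaskListener)
        }
    }
}

Once the plugin is installed, a Pipeline script can call the step directly, e.g. greet name: 'world'.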
The Docker example you linked is unusual, and doesn't define a typical Pipeline DSL step (the docker directive in Pipeline functions like a cross between an agent, a step, and a context block). Furthermore, it appears to include a Java class that loads the Groovy script dynamically, which acts as the entry point into the directive.
Groovy can also be used to expand the Pipeline DSL; however, this is done within the context of a shared library, which is meant to be more of a boilerplate-reducing tool for internal use.
Related
I'm trying to wrap my head around how this declarative Jenkinsfile is Groovy. I want to write supporting code to execute this outside the Jenkins environment, in pure Groovy, if that's possible. I've been writing example Groovy code but still am unsure what pipeline, agent, and stages are.
Any tips to understand this structure are appreciated.
EDIT: I edited this question with simplified code below. I'm just wondering if there is a way that this can be turned into valid groovy code without the preprocessor/groovyshell environment that is utilized by Jenkins
pipeline {
    stages {
        // extra code here
    }
}
No, you can't run a Jenkinsfile as a standalone Groovy script. In short, Jenkins executes the pipeline code inside a pre-configured GroovyShell that knows how to evaluate things like pipeline, agent, stages, and so forth. However, there is a way to execute a Jenkinsfile without a Jenkins server: you can use the JenkinsPipelineUnit test library to write JUnit/Spock unit tests that evaluate your Jenkinsfile and display the call stack tree. It uses mocks, so you can treat it as interaction-based testing, to see whether a specific part of your pipeline gets executed. Plus, you can catch some code errors prior to running the pipeline on the server.
A simple unit test for the declarative pipeline can look like this:
import com.lesfurets.jenkins.unit.declarative.*
import org.junit.Test

class TestExampleDeclarativeJob extends DeclarativePipelineTest {

    @Test
    void should_execute_without_errors() throws Exception {
        def script = runScript("Jenkinsfile")
        assertJobStatusSuccess()
        printCallStack()
    }
}
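To run such a test, JenkinsPipelineUnit has to be on your test classpath; for example, with Gradle (the version shown is just an example, check the project page for the current one):

// build.gradle
dependencies {
    testImplementation 'com.lesfurets:jenkins-pipeline-unit:1.9'
}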
You can find more examples in the official README.md - https://github.com/jenkinsci/JenkinsPipelineUnit
Alternatively, you can try the Jenkinsfile Runner command-line tool, which can execute your Jenkinsfile outside of the Jenkins server - https://github.com/jenkinsci/jenkinsfile-runner
UPDATE
I edited this question with simplified code below. I'm just wondering if there is a way that this can be turned into valid groovy code without the preprocessor/groovyshell environment that is utilized by Jenkins.
Your pipeline code example looks like a valid Jenkinsfile, but you can't turn it into Groovy code that can be run, e.g., from the command line as a regular Groovy script:
$ groovy Jenkinsfile
This won't work, because Groovy is not aware of the Jenkins Pipeline syntax. The syntax is added as a DSL via the Jenkins plugin, and it uses a dedicated GroovyShell that is pre-configured to interpret the pipeline syntax correctly.
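That said, the snippet is syntactically valid Groovy: pipeline { ... } is just a method call taking a closure, and Jenkins supplies the real implementations. A toy sketch (not the actual Jenkins implementation) shows the idea:

// Toy stand-ins: once these methods exist, the Jenkinsfile-like
// snippet below parses and runs as plain Groovy
def pipeline(Closure body) { body() }
def stages(Closure body) { body() }

pipeline {
    stages {
        println 'runs as plain Groovy once the methods exist'
    }
}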
If you are interested in checking if the syntax of the Jenkins Pipeline is correct, there are a few different options:
npm-groovy-lint (https://github.com/nvuillam/npm-groovy-lint) can validate (and even auto-fix) the syntax of your Jenkinsfile without connecting to the Jenkins server,
Command-Line Pipeline Linter (https://www.jenkins.io/doc/book/pipeline/development/#linter) can send your pipeline code to the Jenkins server and validate its syntax.
These are a few tools that can help you catch syntax errors before you run the pipeline. But that's just a nice addition to your toolbox. The first step, as always, is to understand what the syntax means, and the official documentation (https://www.jenkins.io/doc/book/pipeline/syntax) is the best place to start.
I am using Jenkins Pipeline. I created 4 jobs, each job has some functions, and there is a redundant function that exists in all those jobs.
How can I put that redundant function in a shared place so that all those jobs can call it?
You are looking for a Jenkins shared library.
As the name suggests, you create a library - a pipeline shared among Jenkins jobs - in an SCM (Git, SVN, ...), and in your project you create a simple Jenkinsfile calling the library.
So, every build will check out your project, read the Jenkinsfile, and then check out the library with the pipeline.
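As a minimal sketch, assuming a library registered in Jenkins under the name my-shared-library (the library name and the sayHello step are hypothetical), the library repository would contain:

// vars/sayHello.groovy in the shared library repository
def call(String name = 'world') {
    echo "Hello, ${name}!"
}

And each project keeps a short Jenkinsfile that pulls the library in:

// Jenkinsfile in the project repository
@Library('my-shared-library') _

pipeline {
    agent any
    stages {
        stage('Greet') {
            steps {
                sayHello 'Jenkins'
            }
        }
    }
}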
I did it by:
1. Creating a folder in the Jenkins working home.
2. In that folder, creating a file.groovy that contains the functions I need.
3. Making sure the end of that file contains:
return this
4. In the Jenkinsfile, adding:
node { shared_functionality = load "FilePath.groovy" }
Step four includes the functions from the .groovy file in your Jenkinsfile, so you can add a node statement to your Jenkinsfiles to include the functions you need.
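For illustration, a sketch of this approach under assumed names (the shared/functions.groovy path and the deployApp function are hypothetical):

// shared/functions.groovy - checked into the repository or copied onto the node
def deployApp(String environment) {
    echo "Deploying to ${environment}"
}

// `load` returns the value of the script, so hand back the script object
return this

// Jenkinsfile - assumes the file above is already present in the workspace
node {
    def shared = load 'shared/functions.groovy'
    shared.deployApp('staging')
}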
I want to set up Jenkins from code to:
Create one initial pipeline
Create the Job DSL seed job and execute it to configure the jobs used in the pipeline
Configure Jenkins settings
Locales - set the locale to EN
Access control - lock down the system
I read many tutorials and questions and found the following ideas:
Using the Jenkins CLI
Some Job DSL interface for setting up a job as described here at the bottom
Using JenkinsSCI interface within a Groovy file located in init.groovy.d - see below
For testing I use Docker and have the following sample already running.
Dockerfile
# https://github.com/jenkinsci/docker/blob/master/README.md
FROM jenkins/jenkins:lts
USER root
COPY groovy/* /usr/share/jenkins/ref/init.groovy.d/
USER jenkins
EXPOSE 8080
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
groovy/jobs/test1-basic.groovy
#!/usr/bin/env groovy
import hudson.model.*
import jenkins.model.Jenkins
import hudson.tasks.Shell

def job = Jenkins.instance.createProject(FreeStyleProject, 'test1-basic')
job.buildersList.add(new Shell('echo hello world'))
job.save()
The sample sadly lacks:
the configuration part, as I do not know how to access the Locale plugin from within the Groovy code
the Job DSL integration - how to read the seed job and execute it once
I really did intensive research and could not find much about this initial setup part. It seems many people do this manually, or the legacy way by copying XML files. Could you help me out solving this and making it a "best practice" documentation for others?
If you are familiar with a configuration management tool like Chef, you can use it to configure your Jenkins instance. There is a Jenkins community cookbook which can be utilized to write a wrapper to suit your needs.
The jenkins_job resource in this cookbook lets you create any type of job, be it pipeline, freestyle, etc.; you just need to supply the required job configuration. You can template this with variables, so the job is created according to what you supply. Not just jobs: you can configure almost everything you would otherwise do manually with Chef, using the resource corresponding to each task.
One of the best parts about using Chef is that you can source control it and update the configuration as requirements change at any point in time.
If you are not planning to use a configuration management tool, you can check out the discussion here on how to achieve job creation with plugins.
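If you stay with the plain init.groovy.d approach from the question, a rough sketch of executing a Job DSL seed script at startup could look like this (the seed script path is an assumption, and the exact JenkinsJobManagement constructor may differ between Job DSL plugin versions):

import hudson.EnvVars
import javaposse.jobdsl.dsl.DslScriptLoader
import javaposse.jobdsl.plugin.JenkinsJobManagement

// Hypothetical location: a seed script baked into the Docker image
def seedScript = new File('/usr/share/jenkins/ref/jobdsl/seed.groovy')

// JenkinsJobManagement bridges the Job DSL engine to the running Jenkins instance
def management = new JenkinsJobManagement(System.out, new EnvVars(), new File('.'))
new DslScriptLoader(management).runScript(seedScript.text)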
How can I add/edit new code to my Jenkins instance that would be accessible in a DSL script? Context follows.
I've inherited a Jenkins instance. Part of this inheritance includes spending the night in a haunted house writing some new automation in Groovy via the Job DSL plugin. Since I'm fearful of ruining our Jenkins instance, my first step is setting up a local development instance.
I'm having trouble running one of our existing DSL scripts on my local development instance - my builds on the local server fail with the following in the Jenkins error console.
Processing DSL script jobs.groovy
ERROR: startup failed:
jobs.groovy: 1: unable to resolve class thecompanysname.jenkins.extensions
The script in question starts off like this.
import thecompanysname.jenkins.extensions

use(extensions) {
    def org = 'project-name'
    def project = 'test-jenkins-repo'
    def _email = 'foo@example.com'
So, as near as I can tell, it seems a predecessor has written some custom Groovy code that they're importing:
import thecompanysname.jenkins.extensions
What's not clear to me is
Where this code lives
How I can find it in our real Jenkins instance
How I can add it to my local instance
Specific answers are welcome, as are "here's how you can learn to fish" answers.
While there may be other ways to accomplish this, after a bit of poking around I discovered the following:
The Jenkins instance I've installed has an older version of the Job DSL plugin installed.
This version of the Job DSL plugin allowed you to set an additional classpath in the Process DSL Builds section of your job, pointing to additional jar files.
These jar files can give you access to additional classes in your Groovy scripts (i.e. thecompanysname.jenkins.extensions).
Unfortunately, more recent versions of the Job DSL plugin have removed this option, and it's not clear whether it's possible to add it back. That, however, is another question.
Configure Global Security -> uncheck "Enable script security for Job DSL scripts".
Works for me.
I want the Gradle plugin to pick up environment variables that are set in a withEnv step (or other wrapper-type steps). When I invoke Gradle using a sh step the variable is found, but when I use the Gradle plugin it is not.
The Gradle plugin performs the equivalent of this:
EnvVars env = run.getEnvironment(taskListener);
launcher.launch().cmds(args).envs(env).stdout(gca)
.pwd(rootLauncher).join();
The javadoc for run.getEnvironment() states:
Returns the map that contains environmental variables to be used for
launching processes for this build. BuildSteps that invoke external
processes should use this. This allows BuildWrappers and other project
configurations (such as JDK selection) to take effect.
Unlike earlier getEnvVars(), this map contains the whole environment,
not just the overrides, so one can introspect values to change its
behavior.
If I debug the plugin, I see that there are fewer than a dozen variables in the environment passed to the Gradle invocation, none of which are the variables withEnv should be providing. As best I can tell, the sh step uses a completely different extension point, and is simply given an instance of EnvVars that appears to be much more complete. I'm fairly certain the problem isn't in withEnv, but I don't see how to fix the Gradle plugin.
Am I using the wrong call? Or perhaps the wrong extension point?
Do not call Run.getEnvironment. Rather, use the EnvVars passed to SimpleBuildStep.perform.
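For illustration, a minimal sketch of a build step that consumes the environment this way (assuming a Jenkins core recent enough to offer the perform overload that receives EnvVars; the class name and Gradle command are hypothetical):

import hudson.EnvVars
import hudson.FilePath
import hudson.Launcher
import hudson.model.Run
import hudson.model.TaskListener
import hudson.tasks.Builder
import jenkins.tasks.SimpleBuildStep

class MyGradleStep extends Builder implements SimpleBuildStep {
    @Override
    void perform(Run<?, ?> run, FilePath workspace, EnvVars env,
                 Launcher launcher, TaskListener listener)
            throws InterruptedException, IOException {
        // env already includes variables contributed by withEnv and other
        // wrappers, so there is no need to call Run.getEnvironment here
        launcher.launch()
                .cmds('gradle', 'build')
                .envs(env)
                .stdout(listener)
                .pwd(workspace)
                .join()
    }
}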