I need to change some plugin config files before plugins are loaded. I looked into init.groovy.d; however, it seems to run Groovy scripts in that directory after plugins have been loaded, and therefore a restart would be required for the changes to apply. Is there a way to run Groovy scripts before Jenkins loads the plugins?
What you are requesting is not necessary. Generally, when you add plugins, they come unconfigured. Jenkins starts, loads the plugins, and then you configure them via init.groovy, CasC, etc., similar to adding them via the GUI (add, restart, configure).
We start with the war file, a wrapper, init.groovy.d, plus a variant of the Docker install_plugins.sh. Other than the war, everything (the wrapper and wrapper.conf, install_plugins.sh and the plugin list, and all the init scripts) is kept in a git repo, which we pull down.
We dump the plugins into the plugins directory, then launch jenkins.sh.
The init.groovy.d scripts run automatically after initialization and configure all the system, global, tool, and plugin values, as well as credentials, and also create and configure nodes.
NB: it's best to use one init script per section or plugin, as a failure in any init script fails quietly, effectively skipping the rest of that script.
You may need to call .save() after setting most parameters via init.groovy. Perhaps that's why you did not see the changes.
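For example, a minimal init.groovy.d sketch of that configure-then-save pattern (the git plugin and the values below are just an illustration; exact setter names vary by plugin and version):
import jenkins.model.Jenkins

// Illustrative only: set the git plugin's global user settings via its descriptor, then persist them.
def gitDesc = Jenkins.instance.getDescriptor('hudson.plugins.git.GitSCM')
if (gitDesc != null) {
    gitDesc.setGlobalConfigName('ci-bot')              // hypothetical value
    gitDesc.setGlobalConfigEmail('ci-bot@example.com') // hypothetical value
    gitDesc.save()                                     // without save(), the change may not persist
}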
If you were really paranoid, you could first invoke Hudson.instance.doQuietDown(), which effectively blocks the queue (multiple init.groovy scripts execute in lexical order), do all the configuration, then invoke doCancelQuietDown(); but we've had no issues without that.
This approach (init.groovy.d) works fine, but we're looking to switch to JCasC now that it has matured. CasC is simpler to manage and read (again, using a separate config file for each plugin).
Related
I upgraded Jenkins today from 1.618 to 2.3. This included installing a whole bunch of plugins that it recommended (mostly Pipeline plugins and their dependencies).
Since the upgrade, I get a new error (or, at least, a new unwanted behavior) any time a job kicks off another job. Any values passed to the child as "Predefined parameters" are ignored unless the child job already has those keys defined.
Let me illustrate: Let's say that I have a parent job and a child job.
Parent launches child through a "Trigger parameterized build on other projects" Post-build Action. In the definition of that Post-build Action, under the "Predefined parameters", I have FOO=BAR defined.
In Jenkins 1.618, when child was triggered this way, it would have FOO set as a parameter, with a value of BAR.
But in 2.3, FOO is not set on that build of child.
If I modify child so that FOO is always a parameter of that job, it will then pick up the FOO=BAR set from parent. This is an unacceptable work-around because we pass dozens of parameters this way, and defining them on both ends is too fragile and violates the "don't repeat yourself" principle.
I get the same results whether I'm triggering the child job through the "Trigger parameterized build on other projects" Post-build Action or through a MultiJob Phase of a MultiJob project.
Is this an intended change? Was it broken before, and we were just using it incorrectly? Or is this a bug?
According to the Jenkins 2 security updates, you can bypass this behavior by setting:
hudson.model.ParametersAction.keepUndefinedParameters=true
To validate this workaround, go to Manage Jenkins -> Script Console, and run:
System.setProperty("hudson.model.ParametersAction.keepUndefinedParameters", "true")
To make it permanent, change the Jenkins arguments as follows (and restart Jenkins afterwards):
On Windows, edit jenkins.xml in the Jenkins home directory, for example:
<arguments>
-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
-Dhudson.model.ParametersAction.keepUndefinedParameters=true
-jar "%BASE%\jenkins.war" --httpPort=8080
</arguments>
For most Linux distributions, you can modify JENKINS_ARGS inside the file:
/etc/default/jenkins (or jenkins-oc)
For CentOS, modify JENKINS_JAVA_OPTIONS inside file:
/etc/sysconfig/jenkins (or jenkins-oc)
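For example, on CentOS the resulting line in /etc/sysconfig/jenkins might look roughly like this (the headless flag is just the usual default, shown for context):
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dhudson.model.ParametersAction.keepUndefinedParameters=true"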
Here's a list of plugins reported as affected by the issue, each with an open bug already:
https://wiki.jenkins-ci.org/display/JENKINS/Plugins+affected+by+fix+for+SECURITY-170
There are a few solutions.
Command line:
java -Dhudson.model.ParametersAction.keepUndefinedParameters=true -jar jenkins.war
Groovy:
import jenkins.model.*;
System.setProperty("hudson.model.ParametersAction.keepUndefinedParameters", "true")
I couldn't find a start-to-end answer on how to set this for a Linux box. After a couple of hours of cross-referencing guides, this is what ended up working. There are apparently a couple of flavors of these Jenkins configurations; I'm using the Ubuntu flavor for this answer.
Get the Groovy scripting plugin
Discern where your $JENKINS_HOME is being set. By default, it's supposed to be at ~/.jenkins, but I didn't set this server up, so I had to go digging through some configuration files. In case you do too, this is what I had to do:
Check the contents of /etc/default/jenkins with vi to grab the value of $JENKINS_HOME -- mine was /var/lib/$NAME, and further up the file $NAME was set to jenkins, so it was /var/lib/jenkins
Change directories to the $JENKINS_HOME path
Look for a directory called init.groovy.d -- if it doesn't exist, make one and then cd into it. You might have to use sudo to create it
Create a new file in the init.groovy.d directory that ends in .groovy -- I just called mine params.groovy
Enter the following script code into the Groovy file we just made:
import jenkins.model.*;
System.setProperty("hudson.model.ParametersAction.keepUndefinedParameters", "true")
Save and close, then restart your Jenkins server.
That should unblock you, if you ran into the same problem I did. Your mileage may vary :) I ultimately used a start-up script to utilize that functionality in conjunction with this solution proposed by Jenkins.
I have inherited a system which uses Jenkins Job DSL to build the jobs for all our projects, I have little experience with configuring Jenkins and none at all with Jenkins Job DSL, so please be gentle.
Some of these projects are Gradle projects. There is a function, createGradleJob(), which creates the Gradle job. In this function we build the task list for the job, as a string, based upon some features of the project, e.g. if it is being built from the master branch we append the 'publish' task. All of these conditional tasks are currently appended based upon the project's branch name, or the presence or absence of certain files in the project's repo.
I would now like to add a new task to this task list conditional upon the contents of some of these files. Specifically, if certain keywords are detected in the project's build.gradle file, then certain tasks need to be appended to the task list.
So, is there a way in Jenkins Job DSL to check the contents of a file and use that as a conditional expression?
I have found that I can execute arbitrary shell commands using the shell function, so I thought I could just grep the file, but I can't locate the documentation for this function, so I'm not clear how I would be able to access the output of the commands in order to use it in a conditional expression.
I have found the textFinder function, but this appears to only allow you to fail the build (or mark it as unstable) as a result of finding, or failing to find, the text, not to use the result as a conditional expression.
It sounds like you want readFileFromWorkspace. It returns the contents of the file as a string. Simply read your file and parse the string as needed using the Groovy and/or Java string utilities.
It's not quite clear from your question, but if you're talking about reading files out of the repo to be checked out by the job you're generating, this function won't help. But if the file is already somewhere in the workspace (i.e. it's one of the files checked out by the seed job), you'll be fine.
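As a rough sketch (the file path, keyword, and task names below are made up for illustration), the conditional could look something like this in the seed job's DSL script:
// Read build.gradle from the seed job's workspace and append tasks based on its contents.
def buildGradle = readFileFromWorkspace('myproject/build.gradle')
def tasks = ['clean', 'build']
if (buildGradle.contains('someKeyword')) {   // hypothetical keyword
    tasks << 'extraTask'                     // hypothetical task
}
job('myproject-gradle') {
    steps {
        gradle(tasks.join(' '))
    }
}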
The shell command you found adds an "Execute Shell Script" build step to the job being generated. It doesn't actually execute the script there and then, it just copies the contents of the parameter verbatim into the build step ready to be executed when Jenkins runs the job.
For your continued sanity, here is a link to the Job DSL API Documentation
I'm trying to make a Jenkins job that only scans the test source files, so everything under /src/test/java (using Maven). I use the SonarQube Jenkins post-action for this.
When we used to configure Sonar in the pom file directly we could do this in a profile:
<sonar.sources>/src/test/java</sonar.sources>
<sonar.tests/>
That worked fine.
But in the Jenkins job I have to specify these as 'Additional properties', and I can't seem to specify an empty sonar.tests element. I tried -Dsonar.tests, -Dsonar.tests=, -Dsonar.tests=""; nothing works. When this element is not empty, Sonar will attempt to scan the test files twice and crash.
The post-build step is specifically and explicitly a Maven operation. Your problem comes from trying to use Maven to do something un-Mavenish; i.e. ignore the convention that files in the tests directory should be treated as tests.
Since you want to scan your tests as code, your best bet is to use the build step (which uses SonarQube Scanner) and set your scanner properties manually. That will make it easy to set your sources directory and to omit the tests directory.
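For example, the analysis properties passed to the scanner could look roughly like this (the property names are standard SonarQube ones; the project key and values are placeholders):
sonar.projectKey=my.org:my-project
sonar.projectVersion=1.0
sonar.sources=src/test/java
# intentionally no sonar.tests entry, so the test files are only indexed once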
I am new to Jenkins and just started configuring it. This is what I have done so far:
Installed and configured Jenkins to display the home page. Added PMD plugin.
Set HUDSON_HOME to a specific directory, C:\Work\Jenkins
Configured a test build to run a simple do-nothing ant script. It runs successfully
Written an independent pmdbuild.xml to run checks on a set of files in C:\myview (I am using ClearCase). This XML also copies the output pmd_results.xml to the workspace directory in $HUDSON_HOME/[job-name]/workspace
Now I added the pmdbuild.xml as a step in my primary build. So my build has 2 steps:
a. Run a simple script, do-nothing.
b. Run pmdbuild.xml which generate pmd_results.xml and place it in $HUDSON_HOME/[job-name]/workspace (HARD-CODED as Jenkins PMD plugin expects the file there)
Jenkins picks up the pmd_results.xml automatically with the plugin and displays warnings and everything.
Now the problem:
If I click on a filename in the PMD results, it gives a FileNotFoundException, as it is looking for the source file in $HUDSON_HOME/[job-name]/workspace.
My java code files are placed in C:\myview (a clearcase snapshot view)
My question is: do I need all my code files to be present inside $HUDSON_HOME/[job-name]/workspace? Or can't I tell Jenkins to look for the PMD input files in C:\myview, or any other directory, instead of $HUDSON_HOME/[job-name]/workspace?
Sorry for the extremely long description.
Jenkins expects all the code to be in the workspace. Usually Jenkins is used to check out a copy of the code into the workspace, and it then runs all build steps on the sources in the workspace.
That might seem restraining at first, but it saves you a lot of trouble if you need to move Jenkins to another server or create a slave instance.
So I would suggest you let Jenkins check out your code into the workspace (there should be a ClearCase plugin) and run the analysis on the checked-out code.
If there are compelling reasons why your code has to stay where it is (C:\myview in your case), you can still set the workspace of your build to that directory (you'll find this on the job configuration page; you need to click the 'Advanced' button to see the option).
I have about 100 jobs on my Hudson CI; is it possible to mass delete them?
The easiest way, IMHO, would be to use the script console. Go to http://your.hudson.url/script/
Delete jobs by running:
for(j in hudson.model.Hudson.theInstance.getProjects()) {
j.delete();
}
And this way gives you the option to easily use a condition to filter out the jobs to delete (see the sketch below).
FOR JENKINS
Current versions (2.x):
for(j in jenkins.model.Jenkins.theInstance.getAllItems()) {
j.delete()
}
Older versions:
for(j in jenkins.model.Jenkins.getInstance().getProjects()) {
j.delete();
}
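As noted above, the script approach makes it easy to filter which jobs get deleted; for example, on a current Jenkins (the name prefix is illustrative):
// Delete only jobs whose name starts with a given prefix
for (j in jenkins.model.Jenkins.instance.getAllItems()) {
    if (j.name.startsWith('old-')) {
        j.delete()
    }
}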
Just delete the job directories:
cd $HUDSON_HOME/jobs
rm -rf <JOB_NAME>
See: Administering Hudson
You can programmatically use the XML API (or the JSON flavor if you prefer that):
http://your.hudson.url/api/xml?xpath=//job/name&wrapper=jobs
Returns:
<jobs>
<name>firstJob</name>
<name>secondJob</name>
<!-- etc -->
</jobs>
Now iterate over the job names and send a POST request to
http://your.hudson.url/job/your.job.name/doDelete
(You can do this with any programming language you like that supports XML and HTTP)
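For instance, a rough Groovy sketch (the base URL is a placeholder; authentication and CSRF crumb handling are omitted):
// List the job names via the XML API, then POST to each job's doDelete URL.
def base = 'http://your.hudson.url'
def jobs = new XmlSlurper().parse(base + '/api/xml?xpath=//job/name&wrapper=jobs')
jobs.children().each { n ->
    // each child is a <name> element; names with special characters may need URL encoding
    def conn = new URL(base + '/job/' + n.text() + '/doDelete').openConnection()
    conn.requestMethod = 'POST'
    println n.text() + ': HTTP ' + conn.responseCode
}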
I had similar manageability problems with a Hudson instance that was running 500+ build jobs; it was impractical to manually maintain that many jobs using the GUI. However, you can provision jobs in Hudson remotely and programmatically by using the CLI, which is supplied as a jar file [http://wiki.hudson-ci.org/display/HUDSON/Hudson+CLI].
The command to delete a job would be something like:
java -jar hudson-cli.jar -s http://host:port/ delete-job jobname
And the rest of the commands you will need are here:
java -jar hudson-cli.jar -s http://host:port/ help
I wrapped the CLI in Python and created an XML file to hold the build configuration; I could then use this to manipulate my running instances of Hudson. This also provided the ability to 'reset' the CI instance back to a known configuration, which is handy if you suspect build failures were caused by manual changes in the UI, or if you are using a different CI server for each environment you deploy to (i.e. dev, test, prod) and need to provision a new one.
This has also got me out of a few binds when badly written plugins have mangled Hudson's own XML and I've needed to rebuild my instances. Hudson is also I/O bound, and for really loaded instances it is often faster to boot Hudson from scratch and populate its configuration this way.