Jenkins promoting: empty classpath entries not allowed

I have a freestyle Jenkins job B which runs after job A.
Now I choose:
Promote builds when...
Custom Groovy script
I check "Groovy Sandbox" and define a simple Groovy script.
When I try to save my job I get this error:
java.net.MalformedURLException: JENKINS-37599: empty classpath entries not allowed
I have to define a classpath entry: JAR file path or URL
Definition:
A path or URL to a JAR file. This path should be approved by an administrator or a user with the RUN_SCRIPT permission, or the script fails. Once the file or files are approved, they are treated as approved even if located at another path.
I really don't know what file I am supposed to point to, or what I have to do. Why doesn't it just work when I check the sandbox?

This is a Jenkins bug. It requires removing that classpath entry every time before saving the job. I found a workaround: set the value to any of the existing jars, for example https://your-jenkins-host/jnlpJars/slave.jar. This won't affect script execution and won't require you to remember to remove that stupid UI block every time you update your Jenkins job config.

I had a similar issue and I clicked the red box with the white "X" to close that additional classpath window. I then saved the script.

Related

Jenkins parameter from a properties file

I have 3 Jenkins jobs to be run in sequence:
Run an Ant file
Run another Ant file
Run a command line
All the above jobs use a file path which is set in a properties file.
For example: Job 1 executes an Ant file placed in the file path location.
Job 2 executes another Ant file placed in the same file path location.
Job 3 executes a command line to do an SVN update in the same file path location.
I need to parameterize the file path in all three builds from the properties file.
Can anyone help me with possible approach?
Thanks in advance.
This answer may be a little high level: you could use Jenkins Pipeline as code for this approach instead of using 3 freestyle jobs.
You can create 3 stages which perform these 3 steps, and Pipeline supports reading properties from different file types (JSON, YAML, plain properties files, etc.).
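As a minimal sketch of that idea (assuming the Pipeline Utility Steps plugin is installed, which provides readProperties, and that the properties file is already present in the workspace; the file name, property key, build files and paths below are made up for illustration):

pipeline {
    agent any
    stages {
        stage('Ant build 1') {
            steps {
                script {
                    // readProperties comes from the Pipeline Utility Steps plugin
                    def props = readProperties file: 'build.properties'
                    env.FILE_PATH = props['FILE_PATH']
                }
                sh "ant -f '${env.FILE_PATH}/build.xml'"
            }
        }
        stage('Ant build 2') {
            steps {
                sh "ant -f '${env.FILE_PATH}/other-build.xml'"
            }
        }
        stage('SVN update') {
            steps {
                sh "svn update '${env.FILE_PATH}'"
            }
        }
    }
}

On a Windows agent you would use bat instead of sh.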
Look for the "EnvInject" plugin. This lets you inject properties into your build as environment variables; these assignments survive build step boundaries.
If the property file is checked in, you can load it in the Build Environment section before the build steps start executing. If the property file is generated during the build sequence, you can add a build step between where the property file is created and where it is used.
Once set, if the property file contains FOO=/path/to/folder, then when configuring Jenkins you refer to $FOO or ${FOO}. For example, an Ant build step might specify ${FOO}/build.xml; in a Windows batch script execution step, FOO shows up as an environment variable and is referenced as %FOO% (e.g., echo Some_Useful_Piece_Of_Data > %FOO%\data.txt).
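As a small sketch (the file name, property name and path here are made up): a checked-in file such as env.properties containing

FILE_PATH=C:/projects/shared

can be selected via EnvInject's properties-file option in the Build Environment section (or via its "Inject environment variables" build step); after that, an Ant build step can use ${FILE_PATH}/build.xml as its build file, and a Windows batch step can reference it as %FILE_PATH%.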
More information can be found here: https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin

Pre-Defined parameters no longer passed to child job

I upgraded Jenkins today from 1.618 to 2.3. This included installing a whole bunch of plugins that it recommended (mostly Pipeline plugins and their dependencies).
Since the upgrade, I get a new error (or, at least, a new unwanted behavior) any time a job kicks off another job. Any values passed to the child as "Predefined parameters" are ignored unless the child job already has those keys defined.
Let me illustrate: Let's say that I have a parent job and a child job.
Parent launches child through a "Trigger parameterized build on other projects" Post-build Action. In the definition of that Post-build Action, under the "Predefined parameters", I have FOO=BAR defined.
In Jenkins 1.618, when child was triggered this way, it would have FOO set as a parameter, with a value of BAR.
But in 2.3, FOO is not set on that build of child.
If I modify child so that FOO is always a parameter of that job, it will then pick up the FOO=BAR set from parent. This is an unacceptable work-around because we pass dozens of parameters this way, and defining them on both ends is too fragile and violates the "don't repeat yourself" principle.
I get the same results whether I'm triggering the child job through the "Trigger parameterized build on other projects" Post-build Action or through a MultiJob Phase of a MultiJob project.
Is this an intended change? Was it broken before, and we were just using it incorrectly? Or is this a bug?
According to Jenkins 2 Security updates, you can bypass it by setting:
hudson.model.ParametersAction.keepUndefinedParameters=true
To validate this workaround, go to Manage Jenkins -> Script Console, and run:
System.setProperty("hudson.model.ParametersAction.keepUndefinedParameters", "true")
To make it permanent, change the Jenkins arguments as follows (and restart Jenkins afterwards):
On Windows, edit jenkins.xml in the Jenkins home directory, for example:
<arguments>
-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
-Dhudson.model.ParametersAction.keepUndefinedParameters=true
-jar "%BASE%\jenkins.war" --httpPort=8080
</arguments>
For most Linux distributions, you can modify JENKINS_ARGS in the file:
/etc/default/jenkins (or jenkins-oc)
For CentOS, modify JENKINS_JAVA_OPTIONS in the file:
/etc/sysconfig/jenkins (or jenkins-oc)
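For example, on CentOS the relevant line in /etc/sysconfig/jenkins could end up looking roughly like this (a sketch; keep whatever options are already there and append the new property):

JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dhudson.model.ParametersAction.keepUndefinedParameters=true"

Note that on some Debian/Ubuntu installs the JVM -D options live in the JAVA_ARGS variable of /etc/default/jenkins rather than in JENKINS_ARGS, so check which variable your init script actually passes to the JVM.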
Here's a list of plugins reported to be affected by the issue, each of which already has an open bug:
https://wiki.jenkins-ci.org/display/JENKINS/Plugins+affected+by+fix+for+SECURITY-170
There are some solutions.
Command line:
java -Dhudson.model.ParametersAction.keepUndefinedParameters=true -jar jenkins.war
Groovy (e.g. in the Script Console):
import jenkins.model.*;
System.setProperty("hudson.model.ParametersAction.keepUndefinedParameters", "true")
I couldn't find a start-to-end answer on how to set this up for a Linux box. After a couple of hours of cross-referencing guides, this is what ended up working. There are supposed to be a couple of flavors of these Jenkins configurations; I'm using the Ubuntu flavor for this answer.
Get the Groovy scripting plugin
Discern where your $JENKINS_HOME is being set. By default, it's supposed to be at ~/.jenkins, but I didn't set this server up, so I had to go digging through some configuration files. In case you do too, this is what I had to do:
Check the contents of /etc/default/jenkins with vi to grab the value of $JENKINS_HOME -- mine was /var/lib/$NAME, and further up the file $NAME was set to jenkins, so it was /var/lib/jenkins
Change directories to the $JENKINS_HOME path
Search for a directory called init.groovy.d -- if it doesn't exist, create it and then cd into it. You might have to use sudo to create it
Create a new file in the init.groovy.d directory that ends in .groovy -- I just called mine params.groovy
Enter the following script code into the Groovy file we just made:
import jenkins.model.*;
System.setProperty("hudson.model.ParametersAction.keepUndefinedParameters", "true")
Save and Close, then reboot your Jenkins server.
That should unblock you, if you ran into the same problem I did. Your mileage may vary :) I ultimately used a start-up script to utilize that functionality in conjunction with this solution proposed by Jenkins.

Jenkins Job DSL - behaviour conditional on file contents

I have inherited a system which uses Jenkins Job DSL to build the jobs for all our projects, I have little experience with configuring Jenkins and none at all with Jenkins Job DSL, so please be gentle.
Some of these projects are Gradle projects. There is a function, createGradleJob(), which creates the Gradle job. In this function we build the task list for the job, as a string, based upon some features of the project, e.g. if it is being built from the master branch we append the 'publish' task. All of these conditional tasks are currently appended based upon the project's branch name, or the presence or absence of certain files in the project's repo.
I would now like to add a new task to this task list conditional upon the contents of some of these files. Specifically, if certain keywords are detected in the project's build.gradle file, then certain tasks need to be appended to the task list.
So, is there a way in Jenkins Job DSL to check the contents of a file and use that as a conditional expression?
I have found that I can execute arbitrary shell commands using the shell function, so I thought I could just grep the file, but I can't locate the documentation for this function, so I'm not clear how I can access the output of the commands in order to use it in a conditional expression.
I have found the textFinder function, but this appears to only allow you to fail the build (or mark it as unstable) as a result of finding, or failing to find, the text, not to use the result as a conditional expression.
It sounds like you want readFileFromWorkspace. It returns the contents of the file as a string. Simply read your file and parse the string as needed using the Groovy and/or Java string utilities.
It's not quite clear from your question, but if you're talking about reading files out of the repo to be checked out by the job you're generating, this function won't help. But if the file is already somewhere in the workspace (i.e. it's one of the files checked out by the seed job), you'll be fine.
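Here is a minimal sketch of that idea (the project name, file path, keyword and task names are hypothetical, and build.gradle is assumed to be readable from the seed job's workspace):

def buildGradle = readFileFromWorkspace('myproject/build.gradle')

def gradleTasks = 'clean build'
if (buildGradle.contains('publishing')) {
    // the keyword was found in build.gradle, so append the extra task
    gradleTasks += ' publish'
}

job('myproject-gradle') {
    steps {
        gradle(gradleTasks)
    }
}

Note that readFileFromWorkspace and the contains() check run when the seed job generates the job, not when the generated job runs, which is exactly what you need when building up the task string.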
The shell command you found adds an "Execute Shell Script" build step to the job being generated. It doesn't actually execute the script there and then, it just copies the contents of the parameter verbatim into the build step ready to be executed when Jenkins runs the job.
For your continued sanity, here is a link to the Job DSL API Documentation

Jenkins creating wrong folder for the new jobs that were copied from other existing jobs

Sorry for the confusing title. I am totally new to Jenkins and have been handed Jenkins to maintain; it was set up by someone else.
This is a Jenkins master/slave configuration. I have 1 master and 3 slaves.
When I create a new job by copying an existing job, the new job works fine and there are no issues.
QUESTION: I see that in the Jenkins workspace, this new job creates a folder with the name of the original job it was copied from. Why is it not creating a folder with the name of the new job instead?
Now, this is certainly not a show stopper for me, but Jenkins seems to create a folder in the workspace for each job that is run, and this particular folder is causing some confusion (even if only notional).
Hence, could you help me find out why the new job is creating a workspace folder with the name of the original job it was copied from?
BTW, the above issue was seen on a Jenkins slave.
It can be solved by configuring the correct build workspace in the Jenkins job:
General > Advanced > custom workspace > "give your correct workspace"
I had the same problem:
I copied some Jenkins project and wondered about hard-coded workspace paths.
Console output of the copied project: the job failed due to the missing D: drive.
12:30:44 java.io.IOException: Failed to mkdirs: D:\TEAMS\WORKSPACE\RELEASE_1_1
The problem I had: the 'Advanced project options' were not expanded, and the configure GUI had such an enormous width that I didn't see the button to expand and show the 'advanced' settings.
In fact (thanks to sti): the original project had a hard-coded workspace path.
One possibility is that you accidentally triggered the wrong job. You could change the job to print the directory where it executes by adding something like:
echo "XXX $JOB_NAME running in directory $WORKSPACE"
into the build step script. Then look for XXX in the build console log.
Second possibility is that you found an old workspace of the original job. Jenkins leaves workspaces lying around just in case it needs them again so it does not have to make them from scratch.
Third possibility is that the original job is configured to use a hard-coded path as its workspace (Custom workspace). If you clone such a job, it would be a good idea to change the hard-coded path. An even better idea would be to let Jenkins manage the workspace and its naming.
And finally, if all the other possibilities have been checked, you may have found a bug. You could look for it in https://issues.jenkins-ci.org/ and create a bug report if it is a new one.

Hudson/Jenkins PMD Configuration

I am new to Jenkins and have just started configuring it. This is what I have done so far:
Installed and configured Jenkins to display the home page. Added the PMD plugin.
Set the HUDSON_HOME to a specific directory > C:\Work\Jenkins
Configured a test build to run a simple do-nothing Ant script. It runs successfully.
Wrote an independent pmdbuild.xml to run checks on a set of files in C:\myview (I am using ClearCase). This XML also copies the output pmd_results.xml to the workspace directory in $HUDSON_HOME/[job-name]/workspace.
Now I added the pmdbuild.xml as a step in my primary build. So my build has 2 steps:
a. Run a simple do-nothing script.
b. Run pmdbuild.xml, which generates pmd_results.xml and places it in $HUDSON_HOME/[job-name]/workspace (hard-coded, as the Jenkins PMD plugin expects the file there).
Jenkins picks up the pmd_results.xml automatically with the plugin and displays warnings and everything.
Now the problem:
If I click on a filename in the PMD results, it gives a FileNotFoundException because it is looking for the source file in $HUDSON_HOME/[job-name]/workspace.
My Java code files are placed in C:\myview (a ClearCase snapshot view).
My question is: do I need all my code files to be present inside $HUDSON_HOME/[job-name]/workspace? Or can I tell Jenkins to look for the PMD input files in C:\myview or any other directory instead of $HUDSON_HOME/[job-name]/workspace?
Sorry for the extremely long description.
Jenkins expects all the code to be in the workspace. Usually Jenkins is used to check out a copy of the code into the workspace, and then all build steps are run on the sources in the workspace.
This might seem restrictive at first, but it saves you a lot of trouble if you need to move Jenkins to another server or create a slave instance.
So I would suggest you let Jenkins check out your code into the workspace (there should be a ClearCase plugin) and run the analysis on the checked-out code.
If there are compelling reasons why your code has to stay where it is (C:\myview in your case), you can still set the workspace of your build to that directory (find this in the job configuration page; you need to click the 'Advanced' button to see the Custom workspace option).
