I have two tasks: task_1 should compress PNG files and task_2 should not, so I want to add a parameter to control it.
project.ext.set("compressPngs", 1);
task taskCompressPngs(type:Exec){
commandLine "myshell.sh"
args compressPngs
}
task task_1(dependsOn:'taskCompressPngs'){}
task task_2(dependsOn:'taskCompressPngs'){}
gradle.taskGraph.whenReady { taskGraph ->
if (taskGraph.hasTask(task_1))
{
compressPngs=1
}
if (taskGraph.hasTask(task_2))
{
compressPngs=0
}
}
But when I run task_1 or task_2, the 'compressPngs' value that 'taskCompressPngs' passes to my script 'myshell.sh' is always 1. Why, and how do I solve it?
taskCompressPngs gets configured before the configuration value is changed: whenReady fires only after every task has already been configured. Conditional configuration is rarely a good solution. A better approach is to declare two Exec tasks.
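For example, a minimal sketch of the two-task approach (the new task names are made up for illustration):

// One Exec task per flag value - no conditional configuration needed
task compressPngsOn(type: Exec) {
    commandLine "myshell.sh", "1"
}

task compressPngsOff(type: Exec) {
    commandLine "myshell.sh", "0"
}

task task_1(dependsOn: 'compressPngsOn') {}
task task_2(dependsOn: 'compressPngsOff') {}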
As others have mentioned, it's probably best to take the advice of @PeterNiederwieser and use two separate tasks, but if you really don't think you can, here are a couple of other options that should work.
1) Check Gradle startParameter
Configure your reusable task based on which task is passed to gradle on the command line.
task taskCompressPngs(type: Exec) {
    def compressPngs = 1
    if (gradle.startParameter.taskNames.toString().toLowerCase().contains("task_2")) compressPngs = 0
    commandLine "myshell.sh $compressPngs".tokenize()
}
This gives you a variable to use (gradle.startParameter.taskNames) that is available at configuration-time.
Here we change compressPngs to 0 only if task_2 is specified on the command line when running gradle.
That is, gradlew task_1 will run myshell.sh 1, but gradlew task_2 (or even gradlew task_1 task_2) will run myshell.sh 0.
This logic could also be applied to a project property outside of the taskCompressPngs task - if, for example, you wanted to change other tasks too.
Again, this only works if "task_2" is specified in the command used to run gradle.
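For instance, a minimal sketch of that project-property variant (the second task is made up purely to show reuse):

// Decide once, at configuration time, based on the requested task names
ext.compressPngs = gradle.startParameter.taskNames.toString().toLowerCase().contains("task_2") ? 0 : 1

task taskCompressPngs(type: Exec) {
    commandLine "myshell.sh $compressPngs".tokenize()
}

// Hypothetical second task reusing the same flag
task printCompressFlag {
    doLast { println "compressPngs = $compressPngs" }
}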
2) Use DefaultExecAction instead of Exec task
Instead of using a task of type Exec, you could write a custom task and check the taskGraph in it.
task taskCompressPngs << {
    def compressPngs = 1
    if (gradle.taskGraph.hasTask(task_2)) compressPngs = 0
    org.gradle.process.internal.DefaultExecAction e =
        new org.gradle.process.internal.DefaultExecAction(getServices().get(org.gradle.api.internal.file.FileResolver.class))
    e.commandLine("myshell.sh $compressPngs".tokenize())
    e.execute()
}
This just moves your existing logic from configuration-time to execution-time.
This requires the use of "internal" Gradle classes (which is bad), but it gives you a little more flexibility in how/when the shell command is run.
Note that these solutions were checked against Gradle 1.7 and Gradle 1.11.
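If you want to stay on public API, a similar execution-time check can be done with project.exec instead of DefaultExecAction (a sketch, not from the original answers, so treat it as untested against those versions):

task taskCompressPngs {
    doLast {
        // Runs at execution time, so the task graph is already resolved
        def compressPngs = gradle.taskGraph.hasTask(task_2) ? 0 : 1
        // project.exec is public API, unlike DefaultExecAction
        exec {
            commandLine "myshell.sh $compressPngs".tokenize()
        }
    }
}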
So, most of the questions and answers I've found on this subject are from people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job; leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that the thing Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds of the same job on different hosts, and they SHARE the workspace dir! Since each job has shell scripts busy writing files into that directory, this is extremely bad.
In "Custom workspace in jenkins" we are told to use a custom workspace, and I'm set up just like that.
In "Jenkins: how to run builds in unique directories" we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens when I use that is that the workspace name is, you guessed it, the literal "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}@2" just for good measure!)
I tried {$BUILD_ID}, same thing (uses that literally, does not substitute the number).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave, non-master host to reboot into an OS that does not have the capability to run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests = Arrays.asList(tests.split("\\r?\n"))
shellerror = 231
for (line in tests) {
    // ...
}
So let's call an example job 'foo' that loops through a list, as above, that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!), then goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace EVERY SINGLE JOB?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files are in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, comprising about 590 lines. So, I'm going to GREATLY reduce it. That being said:
node("master") { // 1
..... lots of stuff....
} // this matches the "node('master')" above
node(HOST) {
echo "on $HOST, check what os"
if (isUnix())
...some more stuff...
} // end of 'node(HOST)' above
if (isok == 0 ) {
node("master") {
echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
sleep 120
echo "----------------- Leaving MASTER ------------"
}
}
... lots 'o code ...
node(HOST) {
... etc
} // matches the latest 'node HOST' above
node("master") { // 120
.... code ...
for( line in tests) {
...code...
}
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws wsname' block directly under (almost) every 'node' opening, so it was:
node(name) { ws(wsname) { ..stuff that was in node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
Try using the customWorkspace node common option:
pipeline {
    agent {
        node {
            label 'node(s)-defined-label'
            customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    stages {
        // Your pipeline logic here
    }
}
customWorkspace
A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
Edit
Since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    // Note: each sh() step starts a fresh shell, so the cd below only
    // affects commands run within that same sh() call
    sh(script: "cd ${WORKSPACE}")
    // Do stuff here
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    dir(WORKSPACE) {
        // Do stuff here
    }
}
customWorkspace didn't work for me.
What worked:
stages {
    stage("SCM (For commit trigger)") {
        steps {
            ws('custom-workspace') { // Because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}' will not get substituted, but "${SOMEVAR}" will - this is how Groovy strings are handled (see the Groovy documentation on string handling).
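A quick illustration in plain Groovy:

def BUILD_ID = "42"
println '${BUILD_ID}'   // prints the literal text ${BUILD_ID}
println "${BUILD_ID}"   // prints 42 - double-quoted GStrings interpolate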
So if you have a

ws("/some/path/somewhere/${BUILD_ID}") {
    // something
}

on your node in your pipeline Jenkinsfile, it should do the trick in this regard.
The problem with @2 workspaces can occur when you allow concurrent builds of the project - I had the exact same problem with a custom ws() and @2 - simply disallow concurrent builds or work around that.
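If disallowing concurrent builds is acceptable, declarative pipelines expose that directly as an option (a minimal sketch):

pipeline {
    agent any
    options {
        // Only one build of this job at a time - also avoids the @2 workspace suffix
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                echo 'running without workspace contention'
            }
        }
    }
}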
Is it possible to define/specify a runner when starting tests from Cucumber's command line (cucumber.api.cli.Main)?
My reason for this is so I can generate XML reports in Jenkins and push the results to ALM Octane.
I kind of inherited this project, and it's using Gradle to do a JavaExec and call cucumber.api.cli.Main.
I know it's possible to do this with @RunWith(OctaneCucumber.class) when using the JUnit runner + Maven (or only the JUnit runner); otherwise that tag is ignored. I have the custom runner with that tag, but when I run from cucumber.api.cli.Main I can't find a way to run with it, and my tag just gets ignored.
What @Grasshopper suggested didn't exactly work, but it made me look in the right direction.
Instead of adding the code as a plugin, I managed to "hack/load" the Octane reporter by creating a copy of cucumber.api.cli.Main, using it as a base to run the CLI commands, and changing the run method a bit to add the plugin at runtime. I needed to do this because the plugin requires quite a few parameters in its constructor. It might not be the perfect solution, but it allowed me to keep the Gradle build process I initially had.
public static byte run(String[] argv, ClassLoader classLoader) throws IOException {
    RuntimeOptions runtimeOptions = new RuntimeOptions(new ArrayList<String>(asList(argv)));
    ResourceLoader resourceLoader = new MultiLoader(classLoader);
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
    // ==================== Added the following lines ====================
    // Hardcoded runner(?) class. If it's changed, it will need to be changed here also
    OutputFile outputFile = new OutputFile(Main.class);
    runtimeOptions.addPlugin(new HPEAlmOctaneGherkinFormatter(resourceLoader, runtimeOptions.getFeaturePaths(), outputFile));
    // ===================================================================
    runtime.run();
    return runtime.exitStatus();
}
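On the Gradle side, the existing JavaExec call can then point at the copied class instead of cucumber.api.cli.Main (a sketch; the class name, glue package, and feature path below are placeholders):

// Hypothetical JavaExec wiring; 'com.example.CustomCucumberMain' stands in
// for the modified copy of cucumber.api.cli.Main described above
task runCucumber(type: JavaExec) {
    main = 'com.example.CustomCucumberMain'
    classpath = sourceSets.test.runtimeClasspath
    // glue package and feature path are placeholders
    args '--glue', 'com.example.steps', 'src/test/resources/features'
}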
I am preparing a Docker image based on jenkins:lts. To set up the initial configuration I use the init.groovy.d scripts, but:
is that the best option?
is there a way to prevent those scripts from running again on the second start? (I do not want to overwrite any changes made after init)
I ended up using a file as a status marker:
// Skip execution if the init already ran once; do not overwrite UI config
final File status = new File("${System.getenv("JENKINS_HOME")}/init.groovy.d/uk-config.status")
if (status.exists()) {
    logger.info("First init already run")
    return
}
status.createNewFile()
It creates a file the first time the script runs, then on later runs checks whether the file exists and prevents execution the second time.
We use Clover for code coverage testing, but it interferes with stack traces and error information. I want to be able to use cloverGenerateReport when doing automated builds via Jenkins but to skip this step entirely when doing local builds.
I've tried the various suggestions from searches for 'gradle optional dependencies' but I can't seem to get clover completely out of the way.
Suggestions?
You can use the onlyIf method:
cloverGenerateReport.onlyIf {
    project.hasProperty('enableClover') ? Boolean.valueOf(project.getProperty('enableClover')) : false
}
On the command line you can enable it by providing the project property:
gradle cloverGenerateReport -PenableClover=true
One solution would be to check whether the environment variable JENKINS_HOME exists. If it does, then make cloverGenerateReport a dependency of another task.
In your build.gradle:
def env = System.getenv()
if (env.containsKey('JENKINS_HOME')) {
    reportTask.dependsOn cloverGenerateReport
}
I use the OWASP Dependency Check from its Ant task (no Gradle support yet) like this:
task checkDependencies() {
    ant.taskdef(name: 'checkDependencies',
            classname: 'org.owasp.dependencycheck.taskdefs.DependencyCheckTask',
            classpath: 'scripts/dependency-check-ant-1.2.5.jar')
    ant.checkDependencies(applicationname: "MyProject",
            reportoutputdirectory: "generated",
            dataDirectory: "generated/dependency-check-cache") {
        fileset(dir: 'WebContent/WEB-INF/lib') {
            include(name: '**.jar')
        }
    }
}
This works a little too well. Even though nothing declares this Ant task as a dependency (neither in Ant nor in Gradle), it is always executed first, even for a simple gradlew tasks. Why is that, and how can I avoid it? (The dependency check is quite slow.)
This is a very common confusion with Gradle. In your example above you are executing the Ant task during project configuration, when what you really intended was for it to run during task execution. To fix this, your execution logic should be placed within a task action, either by using a doLast { ... } block or the left-shift (<<) operator.
task checkDependencies << {
    // put your execution logic here
}
See the Gradle docs for more information about the Gradle build lifecycle.
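Applied to the snippet from the question, that would look something like this (a sketch; it simply moves the question's own Ant calls into a doLast action):

task checkDependencies {
    doLast {
        // At configuration time this ran immediately; inside doLast it only
        // runs when the task is actually requested
        ant.taskdef(name: 'checkDependencies',
                classname: 'org.owasp.dependencycheck.taskdefs.DependencyCheckTask',
                classpath: 'scripts/dependency-check-ant-1.2.5.jar')
        ant.checkDependencies(applicationname: "MyProject",
                reportoutputdirectory: "generated",
                dataDirectory: "generated/dependency-check-cache") {
            fileset(dir: 'WebContent/WEB-INF/lib') {
                include(name: '**.jar')
            }
        }
    }
}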