Gradle: wait for Ant task to complete

I have a Gradle task that calls ant.exec() to do an svn export into a directory:
/*
 * Get code from repository into the 'src' directory
 */
task getSource << {
    ant.exec(executable: svn_executable) {
        arg(value: 'export')
        arg(value: repository)
        arg(value: 'src')
    }
}
Then I have a task that deletes certain files in the exported directory:
task deletes(type: Delete) {
    ant.delete() {
        fileset(dir: "src", includes: "**/*template*")
    }
}
And then I have another task that calls getSource and deletes one after another.
The problem is that Gradle doesn't wait for getSource to complete before moving on to the next task, so at that point there are no files to delete yet.
Is there a way to get around this?
Thank you!

Your 'deletes' task calls ant.delete in Gradle's configuration phase instead of the execution phase. Have a look at the Gradle DSL reference on how to configure the 'Delete' task correctly: http://www.gradle.org/docs/current/dsl/org.gradle.api.tasks.Delete.html
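For example, a minimal sketch of the two usual fixes, assuming the exported files land in the 'src' directory as in the question (deletesWithAnt is just an illustrative name): either configure the Delete task itself, or keep ant.delete but move it into a doLast block so it runs during the execution phase.
task deletes(type: Delete) {
    // configures the task; the actual deletion happens when the task executes
    delete fileTree(dir: 'src', includes: ['**/*template*'])
}

// or keep ant.delete, but run it as a task action instead of at configuration time:
task deletesWithAnt {
    doLast {
        ant.delete {
            fileset(dir: 'src', includes: '**/*template*')
        }
    }
}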
hope that helps,
cheers,
René

And then I have another task that calls getSource and deletes one after another.
What exactly do you mean by this? A Gradle task cannot call other tasks; it can only depend on them.
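If the intent is to run them "one after another", a short sketch of expressing that with dependencies instead of calls (refreshSource is just an illustrative name):
// make sure getSource has finished before deletes runs
deletes.dependsOn getSource

// a single entry point that pulls in both tasks in the right order
task refreshSource(dependsOn: deletes)
Running gradle refreshSource then executes getSource first, then deletes.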

Related

Jenkins declarative pipeline #tmp folders

I'm trying to understand why Jenkins creates directories like the ones below. When I look for the coverage report, I find it in my-application-ms#2 rather than my-application-ms. I also checked the rest of the directories, and the only thing in them is SecretFiles, which is empty.
What is the best way to delete the extra directories, given that the current directory should always be my-application-ms? Should I specify each directory in the post section? Is there any downside to deleting the other directories?
my-application-ms
my-application-ms#2
my-application-ms#2#tmp
my-application-ms#tmp
post {
    failure {
        notifyBuild('FAILED')
    }
    success {
        notifyBuild('SUCCESSFUL')
    }
    aborted {
        notifyBuild('FAILED')
    }
    always {
        deleteDir() /* clean up our workspace */
    }
}
I think you should use the dedicated workspace-cleanup step instead; cleanWs should do the job for you.
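A minimal sketch of what that looks like in the post section, assuming the Workspace Cleanup plugin (which provides the cleanWs step) is installed:
post {
    always {
        cleanWs() // provided by the Workspace Cleanup plugin; replaces deleteDir()
    }
}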

How to put jobs inside a folder in Jenkins?

I'm trying to put jobs inside a folder using a Jenkins Job DSL script.
Currently I create a listView and put my jobs inside it; here is the code I'm using:
listView('MyJobsList') {
    jobs {
        map.each {
            name((it.key).trim())
        }
    }
    columns {
        status()
        weather()
        name()
        lastSuccess()
        lastFailure()
        lastDuration()
        buildButton()
    }
}
I want to do the same thing, but this time put the jobs in a folder.
Please refer to the Job DSL documentation below on creating a folder in Jenkins through Job DSL:
Folder
folder('folder-a') {
    description('Folder containing all jobs for folder-a')
}
job('folder-a/job-a') {
    // Job config goes here
}
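Putting the two together, a rough sketch that creates a folder and then generates the jobs from the question's map inside it (the folder and view names are just illustrative):
folder('my-folder') {
    description('Folder for the generated jobs')
}

// prefixing the folder path puts each generated job inside the folder
map.each { entry ->
    job("my-folder/${entry.key.trim()}") {
        // Job config goes here
    }
}

// the list view can live inside the folder as well
listView('my-folder/MyJobsList') {
    jobs {
        map.each { name(it.key.trim()) }
    }
    columns {
        status()
        name()
        buildButton()
    }
}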
Please take a look at the Jenkins file structure: https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins
There you can see where jobs are stored by default (job config and build logs). You cannot, and should not, change this file structure with a DSL script (the Job DSL plugin).

Jenkins execute action after delete build

I need to execute some action after a build has been deleted (by a user or automatically) on Jenkins.
Actually, I need the following:
Send an HTTP request with info about the deleted build.
Delete the build artifacts on the remote location.
About #2: I use the Artifact Deployer plugin to deploy builds, but because of https://issues.jenkins-ci.org/browse/JENKINS-26109 the deployed artifacts are not deleted on the remote location after the build has been deleted.
Is there any way to do something when a build is deleted? Maybe I have to write a script or create a plugin?
There is a class RunListener which has an onDeleted method.
Do something like this:
import hudson.model.Run;
import hudson.model.listeners.RunListener;
import hudson.Extension;

@Extension
public class DeleteListener extends RunListener<Run> {
    @Override
    public void onDeleted(Run r) {
        // your code here
    }
}
Also remember that if you delete the job itself, the onDeleted event will not be fired.
I don't think a plugin is required for this.
You can write a cron job that continuously checks the build directory for changes in its contents:
$JENKINS_HOME/jobs/your_job_here/builds/
If any of the folders is deleted, that means a build was deleted manually or automatically. You can then trigger a mail or perform whatever task you want, since you now know a build has been deleted.
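A rough sketch of that polling idea as a Groovy script run from cron; JENKINS_HOME, the job name, and the state-file location are assumptions to adapt:
// compare the current build directories against the set recorded on the last run
def buildsDir = new File(System.getenv('JENKINS_HOME'), 'jobs/your_job_here/builds')
def stateFile = new File('/var/tmp/known-builds.txt')

def current = (buildsDir.listFiles() ?: []).findAll { it.directory }*.name as Set
def known   = stateFile.exists() ? (stateFile.readLines() as Set) : current

// anything that was known before but is missing now was deleted (manually or automatically)
(known - current).each { deletedBuild ->
    println "Build ${deletedBuild} was deleted - send the HTTP request / clean up remote artifacts here"
}

stateFile.text = current.join('\n')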

Why does Gradle always call an ant task first?

I use the OWASP Dependency Check from its Ant task (no Gradle support yet) like this:
task checkDependencies() {
    ant.taskdef(name: 'checkDependencies',
            classname: 'org.owasp.dependencycheck.taskdefs.DependencyCheckTask',
            classpath: 'scripts/dependency-check-ant-1.2.5.jar')
    ant.checkDependencies(applicationname: "MyProject",
            reportoutputdirectory: "generated",
            dataDirectory: "generated/dependency-check-cache") {
        fileset(dir: 'WebContent/WEB-INF/lib') {
            include(name: '**.jar')
        }
    }
}
This works a little too well. Even though nothing declares this Ant task as a dependency (neither in Ant nor in Gradle), it is always executed first, even for a simple gradlew tasks invocation. Why is that, and how can I avoid it? (The dependency check is quite slow.)
This is a very common confusion with Gradle. In your example above you are executing the Ant tasks during project configuration. What you really intended was for it to run during task execution. To fix this, your execution logic should be placed within a task action, either by using a doLast {...} configuration block or using the left shift (<<) operator.
task checkDependencies << {
    // put your execution logic here
}
See the Gradle docs for more information about the Gradle build lifecycle.
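Applied to the task from the question, a sketch with both the taskdef and the checkDependencies call moved into a doLast block (paths and versions copied from the question as-is):
task checkDependencies {
    doLast {
        // now this only runs when the task is actually executed,
        // not on every Gradle invocation
        ant.taskdef(name: 'checkDependencies',
                classname: 'org.owasp.dependencycheck.taskdefs.DependencyCheckTask',
                classpath: 'scripts/dependency-check-ant-1.2.5.jar')
        ant.checkDependencies(applicationname: "MyProject",
                reportoutputdirectory: "generated",
                dataDirectory: "generated/dependency-check-cache") {
            fileset(dir: 'WebContent/WEB-INF/lib') {
                include(name: '**.jar')
            }
        }
    }
}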

Gradle - different tasks use different parameters

I have two tasks: task_1 should compress PNG files and task_2 should not, so I want to add a parameter to control it.
project.ext.set("compressPngs", 1)

task taskCompressPngs(type: Exec) {
    commandLine "myshell.sh"
    args compressPngs
}

task task_1(dependsOn: 'taskCompressPngs') {}
task task_2(dependsOn: 'taskCompressPngs') {}

gradle.taskGraph.whenReady { taskGraph ->
    if (taskGraph.hasTask(task_1)) {
        compressPngs = 1
    }
    if (taskGraph.hasTask(task_2)) {
        compressPngs = 0
    }
}
But when I run task_1 or task_2, the compressPngs value that taskCompressPngs passes to my script myshell.sh is always 1. Why, and how can I solve this?
taskCompressPngs gets configured before the configuration value is changed. Conditional configuration is rarely a good solution. A better approach is to declare two Exec tasks.
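A sketch of that two-task approach, reusing the script and task names from the question where possible (compressPngs and dontCompressPngs are just illustrative task names):
task compressPngs(type: Exec) {
    commandLine 'myshell.sh', '1'
}

task dontCompressPngs(type: Exec) {
    commandLine 'myshell.sh', '0'
}

task task_1(dependsOn: compressPngs)
task task_2(dependsOn: dontCompressPngs)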
As others have mentioned, it's probably best to take the advice of @PeterNiederwieser and use two separate tasks, but if you really don't think you can, here are a couple of other options that should work.
1) Check Gradle startParameter
Configure your reusable task based on which task is passed to Gradle on the command line.
task taskCompressPngs(type: Exec) {
    def compressPngs = 1
    if (gradle.startParameter.taskNames.toString().toLowerCase().contains("task_2")) compressPngs = 0
    commandLine "myshell.sh $compressPngs".tokenize()
}
This gives you a variable to use (gradle.startParameter.taskNames) that is available at configuration-time.
Here we change compressPngs to 0 only if task_2 is specified on the command line when running gradle.
I.e., gradlew task_1 will run myshell.sh 1, but gradlew task_2 (or even gradlew task_1 task_2) will run myshell.sh 0.
This logic could also be applied to a project property outside of the taskCompressPngs task - if, for example, you wanted to change other tasks too.
Again, this only works if "task_2" is specified in the command used to run gradle.
2) Use DefaultExecAction instead of Exec task
Instead of using a task of type Exec, you could write a custom task and check the taskGraph in it.
task taskCompressPngs << {
    def compressPngs = 1
    if (gradle.taskGraph.hasTask(task_2)) compressPngs = 0
    org.gradle.process.internal.DefaultExecAction e = new org.gradle.process.internal.DefaultExecAction(getServices().get(org.gradle.api.internal.file.FileResolver.class))
    e.commandLine("myshell.sh $compressPngs".tokenize())
    e.execute()
}
This just moves your existing logic from configuration time to execution time.
This requires the use of "internal" Gradle classes (which is bad), but it gives you a little more flexibility in how/when the shell command is run.
Note that these solutions were checked against Gradle 1.7 and Gradle 1.11.
