I want to use the workspace from my workflow job in the other jobs that I trigger via the 'build' command.
I need to make this flexible, since I want to be able to trigger those jobs from various workflows with different workspaces; that is why I cannot provide a hardcoded workspace path.
Here is some code:
node {
    git branch: branchName, credentialsId: '1337', url: 'https://i-didnt-provide-this.but-this-is-working.git'
    def buildType = 'xxx'
    def buildFlavor = 'yyy'
    def hockeyAppId = 'zzz'
    def buildTypeParam = new hudson.model.StringParameterValue('buildType', buildType)
    def buildFlavorParam = new hudson.model.StringParameterValue('buildFlavor', buildFlavor)
    def hockeyAppIdParam = new hudson.model.StringParameterValue('hockeyAppId', hockeyAppId)
    def outputApkFilenameParam = new hudson.model.StringParameterValue('fileName', "*-${buildFlavor}-${buildType}.apk")
    def proguardMappingParam = new hudson.model.StringParameterValue('mappingFile', "${buildFlavor}/${buildType}/mapping.txt")
    build job: 'android_compile', parameters: [buildTypeParam, buildFlavorParam] // This needs the same workspace
    build job: 'android_lint', parameters: [buildTypeParam, buildFlavorParam] // same here
    build job: 'android_upload_hockey', parameters: [hockeyAppIdParam, outputApkFilenameParam, proguardMappingParam] // and here
}
Thanks for your help in advance.
Rather than trying to share a workspace, which will not work, archive any files the downstream jobs need; they can then retrieve those files using the Copy Artifact plugin.
In this case, if you just want to check out the same Git revision in your downstream jobs, determine its commit hash and pass that to downstream builds as a parameter. JENKINS-26100 would save you from manually running git rev-parse HEAD or the like.
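As a rough sketch (the gitCommit parameter name and the downstream checkout are assumptions, and sh with returnStdout requires a reasonably recent Pipeline version), the upstream workflow could capture the commit hash right after checkout and hand it to each downstream build:

node {
    git branch: branchName, credentialsId: '1337', url: 'https://i-didnt-provide-this.but-this-is-working.git'
    // Capture the exact revision that was just checked out...
    def commit = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()
    // ...and pass it down; each downstream job would then check out this same
    // revision in its own workspace via the hypothetical 'gitCommit' parameter.
    build job: 'android_compile', parameters: [
        new hudson.model.StringParameterValue('buildType', 'xxx'),
        new hudson.model.StringParameterValue('buildFlavor', 'yyy'),
        new hudson.model.StringParameterValue('gitCommit', commit)
    ]
}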
I'm using the JFrog Artifactory plugin in my Jenkins pipeline to pull some in-house utilities that the pipelines use. I specify which version of the utility I want using a parameter.
After executing the server.download, I'd like to verify and report which version of the file was actually downloaded, but I can't seem to find any way to do that. I do get a buildInfo object returned from the server.download call, but I can't find any way to pull information from that object; I just get an object reference if I try to print it. I'd like to abort the build and send out a report if the version of the utility that was downloaded is incorrect.
The question I have is, "How does one verify that a file specified by a download spec is successfully downloaded?"
This functionality is only available in scripted pipeline at the moment, and is described in the documentation.
For example:
node {
    def server = Artifactory.server SERVER_ID
    def downloadSpec = readFile 'downloadSpec.json'
    def buildInfo = server.download spec: downloadSpec
    // buildInfo.getDependencies() lists every file the spec actually downloaded
    if (buildInfo.getDependencies().size() > 0) {
        def localPath = buildInfo.getDependencies()[0].getLocalPath()
        def remotePath = buildInfo.getDependencies()[0].getRemotePath()
        def md5 = buildInfo.getDependencies()[0].getMd5()
        def sha1 = buildInfo.getDependencies()[0].getSha1()
        echo localPath
    }
    server.publishBuildInfo buildInfo
}
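To address the abort requirement from the question, you could also inspect the list of downloaded dependencies and fail the build when the expected file is missing. This is only a sketch: expectedName and the UTIL_VERSION parameter are hypothetical placeholders for however your download spec is parameterized.

// Sketch: abort if the expected utility version was not downloaded.
// 'expectedName' and 'params.UTIL_VERSION' are placeholders, not part of the plugin API.
def expectedName = "my-utility-${params.UTIL_VERSION}.zip"
def downloadedPaths = buildInfo.getDependencies().collect { it.getRemotePath() }
if (!downloadedPaths.any { it.endsWith(expectedName) }) {
    error "Expected ${expectedName} was not downloaded; got: ${downloadedPaths}"
}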
In Gradle, I'd like to add both the current branch name and commit number as a suffix to my versionName. (Why? Because when I build my app in Jenkins to release it in HockeyApp, it's useful to show which branch and commit the app was built from!)
So when I enter this in the command prompt, my current branch name is returned:
git rev-parse --abbrev-ref HEAD
The same happens when I use this line in my Android Gradle build, using the code from either this answer or the following piece of Gradle code:
def getVersionNameSuffix = { ->
    def branch = new ByteArrayOutputStream()
    exec {
        // The command line to request the current branch:
        commandLine 'git', 'rev-parse', '--abbrev-ref', 'HEAD'
        standardOutput = branch
    }
    println "My current branch: " + branch
    def versionNameSuffix = "-" + branch
    // ... some other suffix additions ...
    return versionNameSuffix
}
buildTypes {
    debug {
        applicationIdSuffix ".test"
        versionNameSuffix getVersionNameSuffix()
    }
}
Resulting log (this is exactly what I want):
"My current branch: feature/MyFeature"
However, when I build my app in a Jenkins job, it will output a different result:
"My current branch: HEAD"
Why does this happen, and how can I correctly retrieve my current branch name in Jenkins?
EDIT:
I've used a different approach, which returns the branch name correctly in most cases, also on Jenkins:
git name-rev --name-only HEAD
Example output in prompt:
"My current branch: feature/MyFeature"
Example output in Jenkins:
"My current branch: remotes/origin/feature/MyFeature"
I can remove "remotes/origin/" if I like, so that's okay!
But this approach causes different trouble (in the prompt, in Gradle, and on Jenkins). When I have tagged the last commit, it won't output the branch name, but this:
"My current branch: tags/MyTag^0"
EDIT 2:
A third approach can be found here.
Including the comments below that answer, I could use grep \* to retrieve the branch in the prompt. However, I cannot use the backslash in the Gradle code. This fails:
commandLine 'git', 'branch', '|', 'grep', '\\*'
Any advice?
Try the environment variable BRANCH_NAME:
BRANCH_NAME
For a multibranch project, this will be set to the name of the branch being built, for example in case you wish to deploy to production from master but not from feature branches.
Access it with env.BRANCH_NAME
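If you need the value inside the Gradle script itself rather than in a Jenkinsfile, one option is to prefer that variable and only fall back to git for local builds. A minimal sketch, assuming BRANCH_NAME is actually present in the build environment (multibranch job) or passed in by you:

def getVersionNameSuffix = { ->
    // Prefer the branch name Jenkins injects; fall back to git for local builds.
    def branch = System.getenv('BRANCH_NAME')
    if (branch == null || branch.isEmpty()) {
        def output = new ByteArrayOutputStream()
        exec {
            commandLine 'git', 'rev-parse', '--abbrev-ref', 'HEAD'
            standardOutput = output
        }
        branch = output.toString().trim()
    }
    return "-" + branch
}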
I know it is possible to pass values from a parent to a child job using the Multijob Plugin.
Is it possible to pass variables from the child job back to the parent?
Yes, with a little work. If jobParent calls jobChild and you want variableChild1 (which you may have created in the jobChild job) to be visible in the jobParent job, follow these simple steps.
In the child job, create a file containing a variable=value pair for every variable you want to pass. Let's call it jobChild_envs.txt.
Once jobParent is done calling jobChild (I assume you are using a "Trigger/call builds on other projects" or similar build step), the next action is a "Copy artifacts from another project" build step (Copy Artifact plugin). You will need to tick the check box to flatten the copied files. https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Using this plugin, you'll be able to get a file/folder from jobChild's workspace into jobParent's workspace at a defined base location.
In jobParent, inject the environment variables from that file with an "Inject environment variables" build step (EnvInject plugin).
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
If the jobChild job created a .txt file containing a variable, for example:
variableChild1=valueChild1
then that variable will be available/visible to the parent/upstream job jobParent.
In pipeline builds, you can do this as follows. Let's say you want to save the child build's URL and pass it back to the parent pipeline.
In your child build...
// write out some data about the job
def jobData = [job_url: "${BUILD_URL}"]
def jobDataText = groovy.json.JsonOutput.toJson(jobData)
writeFile file: "jobDataChild.json", text: jobDataText, encoding: 'UTF-8'
// archive the artifacts
archiveArtifacts artifacts: "jobDataChild.json", onlyIfSuccessful: false
And you can retrieve this in the parent build...
step([$class: 'CopyArtifact', projectName: 'ChildJobName', filter: "jobDataChild.json", selector: [$class: 'LastCompletedBuildSelector']])
if (fileExists("jobDataChild.json")) {
    // readJSON comes from the Pipeline Utility Steps plugin
    def jobData = readJSON file: "jobDataChild.json"
    def jobUrl = jobData.job_url
}
To add to this answer years later: the way I'm doing it is by having a Redis instance that the pipelines can connect to and use to pass data back and forth.
sh "redis-cli -u $redis_url ping" // server is up
def redis_key = "$BUILD_TAG" // BUILD_TAG is always unique
build job: "child", propagate: true, wait: true, parameters: [
string(name: "redis", value: "$redis_url;$redis_key"),
]
/******** in child job ***********/
def (redis_url, redis_key) = env.redis.tokenize(";")
sh"redis-cli -u $redis_url ping" // we are connected on url
// lpush adds to an array in redis
sh"""
redis-cli -u $redis_url lpush $redis_key "MY_DATA"
"""
/******* in parent job after waiting for child job *****/
def data_from_child = sh(script: "redis-cli --raw -u $redis_url LRANGE $redis_key 0 -1", returnStdout: true)
data_from_child == "MY_DATA"? println("👍") : error("wow did not expect this")
I kind of like this approach better than passing files back and forth, because it allows scaling up via multiple worker nodes and executing multiple jobs in parallel.
I'm using the MultiJob plugin and have a job (Job-A) that triggers Job-B several times.
My requirement is to copy some artifacts (XML files) from each build.
The difficulty is that using the Copy Artifact plugin with the "last successful build" option will only take the last build of Job-B, while I need to copy from all the builds that were triggered by the same build of Job-A.
The flow looks like:
Job-A starts and triggers:
    Job-B build #1
    Job-B build #2
    Job-B build #3
** copy artifacts of all last 3 builds, not just #3 **
Note: Job-B could be executed on different slaves in the same run (I choose the slave to run on dynamically by setting a parameter on the upstream Job-A).
When all the builds are completed, I want Job-A to copy the artifacts from builds #1, #2, and #3, and not just from the last build.
How can I do this?
Here is a more generic Groovy script; it uses the Groovy plugin and the Copy Artifact plugin; see the instructions in the code comments.
It simply copies artifacts from all downstream jobs into the upstream job's workspace.
If you call the same job several times, you could use the build number in copyArtifact's 'target' parameter to keep the artifacts separate, as sketched after the script.
// This script copies artifacts from downstream jobs into the upstream job's workspace.
//
// To use, add an "Execute system groovy script" build step into the upstream job
// after the invocation of other projects/jobs, and specify
// "/var/lib/jenkins/groovy/copyArtifactsFromDownstream.groovy" as the script.
import hudson.plugins.copyartifact.*
import hudson.model.AbstractBuild
import hudson.Launcher
import hudson.model.BuildListener
import hudson.FilePath

for (subBuild in build.builders) {
    println(subBuild.jobName + " => " + subBuild.buildNumber)
    copyTriggeredResults(subBuild.jobName, Integer.toString(subBuild.buildNumber))
}

// Inspired by http://kevinormbrek.blogspot.com/2013/11/using-copy-artifact-plugin-in-system.html
def copyTriggeredResults(projName, buildNumber) {
    def selector = new SpecificBuildSelector(buildNumber)
    // CopyArtifact(String projectName, String parameters, BuildSelector selector,
    //              String filter, String target, boolean flatten, boolean optional)
    def copyArtifact = new CopyArtifact(projName, "", selector, "**", null, false, true)
    // use reflection because a direct call invokes a deprecated method
    // perform(Build<?, ?> build, Launcher launcher, BuildListener listener)
    def perform = copyArtifact.class.getMethod("perform", AbstractBuild, Launcher, BuildListener)
    perform.invoke(copyArtifact, build, launcher, listener)
}
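For example, to keep the artifacts of repeated invocations of the same job separate, the 'target' argument could point at a per-build subdirectory. This is an untested variant of the call above, following the constructor signature quoted in the comment:

// Variant: copy each sub-build's artifacts into its own subdirectory of the workspace.
def copyArtifact = new CopyArtifact(projName, "", selector, "**",
        "artifacts/" + projName + "/" + buildNumber, false, true)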
I suggest the following approach:
Use an "Execute system Groovy script" build step from the Groovy plugin to run the following script:
import hudson.model.*

// get upstream job
def jobName = build.getEnvironment(listener).get('JOB_NAME')
def job = Hudson.instance.getJob(jobName)
def upstreamJob = job.upstreamProjects.iterator().next()

// prepare build numbers
def n1 = upstreamJob.lastBuild.number
def n2 = n1 - 1
def n3 = n1 - 2

// set parameters
def pa = new ParametersAction([
    new StringParameterValue("UP_BUILD_NUMBER1", n1.toString()),
    new StringParameterValue("UP_BUILD_NUMBER2", n2.toString()),
    new StringParameterValue("UP_BUILD_NUMBER3", n3.toString())
])
Thread.currentThread().executable.addAction(pa)
This script will create three environment variables which correspond to the three last build numbers of the upstream job.
Add three "Copy artifacts from another project" build steps to copy artifacts from the last three builds of the upstream project (use the environment variables from the script above to set the build number):
Run the build and check the build log; you should have something like this:
Copied 2 artifacts from "A" build number 4
Copied 2 artifacts from "A" build number 3
Copied 1 artifact from "A" build number 2
Note: the script may need to be adjusted to catch unusual cases like "the upstream project has only two builds", "the current job doesn't have an upstream job", "the current job has more than one upstream job", etc.
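A defensive variant of the upstream lookup might look like the sketch below; it only covers the cases named above and is not a complete solution:

// Sketch of a more defensive upstream lookup (covers only the cases noted above).
def upstreamJobs = job.upstreamProjects
if (upstreamJobs.isEmpty()) {
    throw new IllegalStateException("Job '" + jobName + "' has no upstream project")
}
if (upstreamJobs.size() > 1) {
    println "Warning: more than one upstream project; using the first one"
}
def upstreamJob = upstreamJobs.iterator().next()
def lastBuild = upstreamJob.lastBuild
if (lastBuild == null || lastBuild.number < 3) {
    throw new IllegalStateException("Upstream project has fewer than three builds")
}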
You can use the following example from an "Execute shell" build step.
Please note that it can only be run from the Jenkins master machine, and only if the job calling this step is the one that triggered the MultiJob.
#--------------------------------------
# Copy Artifacts from MultiJob Project
#--------------------------------------
PROJECT_NAME="MY_MULTI_JOB"
ARTIFACT_PATH="archive/target"
TARGET_DIRECTORY="target"

mkdir -p $TARGET_DIRECTORY
runCount="TRIGGERED_BUILD_RUN_COUNT_${PROJECT_NAME}"
for ((i=1; i<=${!runCount}; i++))
do
    buildNumber="${PROJECT_NAME}_${i}_BUILD_NUMBER"
    cp $JENKINS_HOME/jobs/$PROJECT_NAME/builds/${!buildNumber}/$ARTIFACT_PATH/* $TARGET_DIRECTORY
done
#--------------------------------------
I recently updated the configuration of one of my hudson builds. The build history is out of sync. Is there a way to clear my build history?
Please and thank you
Use the script console (Manage Jenkins > Script Console) and something like this script to bulk delete a job's build history: https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/bulkDeleteBuilds.groovy
That script assumes you want to only delete a range of builds. To delete all builds for a given job, use this (tested):
// change this variable to match the name of the job whose builds you want to delete
def jobName = "Your Job Name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
// uncomment these lines to reset the build number to 1:
//job.nextBuildNumber = 1
//job.save()
This answer is for Jenkins (rather than the older Hudson).
Go to your Jenkins home page → Manage Jenkins → Script Console
Run the following script there. Change copy_folder to your project name
Code:
def jobName = "copy_folder"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
If you click Manage Hudson / Reload Configuration From Disk, Hudson will reload all the build history data.
If the data on disk is messed up, you'll need to go to your %HUDSON_HOME%\jobs\<projectname> directory and restore the build directories as they're supposed to be. Then reload config data.
If you're simply asking how to remove all build history, you can just delete the builds one by one via the UI if there are just a few, or go to the %HUDSON_HOME%\jobs\<projectname> directory and delete all the subdirectories there -- they correspond to the builds.
Afterwards restart the service for the changes to take effect.
Here is another option: delete the builds with cURL.
$ curl -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The above deletes build #1 to #56 for job myJob.
If authentication is enabled on the Jenkins instance, a user name and API token must be provided like this:
$ curl -u userName:apiToken -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The API token must be fetched from the /me/configure page in Jenkins. Just click on the "Show API Token..." button to display both the user name and the API token.
Edit: one might have to replace doDeleteAll by doDelete in the URLs above to make this work, depending on the configuration or the version of Jenkins used.
Here is how to delete all builds for all jobs using Jenkins scripting (run it in the Script Console).
def jobs = Jenkins.instance.projects.collect { it }
jobs.each { job -> job.getBuilds().each { it.delete() }}
You could temporarily modify the project configuration to keep only the last build, reload the configuration (which should discard the old builds), then change the setting back to your desired value.
If you want to clear the build history of MultiBranchProject (e.g. pipeline),
go to your Jenkins home page → Manage Jenkins → Script Console and run the following script:
def projectName = "ProjectName"
def project = Jenkins.instance.getItem(projectName)
project.getItems().each { job ->
    job.getBuilds().each { it.delete() }
    job.nextBuildNumber = 1
    job.save()
}
This one is the best option available; it deletes the build history of all Jenkins jobs. Run it in the Script Console:
Jenkins.instance.getAllItems(AbstractProject.class).each { it ->
    Jenkins.instance.getItemByFullName(it.fullName).builds.findAll { it.number > 0 }.each { it.delete() }
}
In case the jobs are grouped into folders, it's possible to either give the full name with forward slashes:
def job = Jenkins.instance.getItemByFullName("folder_name/job_name")
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
or traverse the hierarchy like this:
def folder = Jenkins.instance.getItem("folder_name")
def job = folder.getItem("job_name")
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
Deleting directly from the file system is not safe. You can run the script below to delete old builds from all jobs (recursively), keeping only the most recent ones.
def numberOfBuildsToKeep = 10
Jenkins.instance.getAllItems(AbstractItem.class).each {
    if (it.class.toString() != "class com.cloudbees.hudson.plugins.folder.Folder" && it.class.toString() != "class org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject") {
        println it.name
        def builds = it.getBuilds()
        for (int i = numberOfBuildsToKeep; i < builds.size(); i++) {
            builds.get(i).delete()
            println "Deleted " + builds.get(i)
        }
    }
}
Go to "Manage Jenkins" > "Script Console"
Run the following:
def jobName = "build_name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.save()
Another easy way to clean up builds is by adding the Discard Old Build plugin as a post-build action at the end of your jobs. Set a maximum number of builds to keep and then run the job again:
https://wiki.jenkins-ci.org/display/JENKINS/Discard+Old+Build+plugin
Go to %HUDSON_HOME%\jobs\<projectname>, remove the builds directory, remove the lastStable and lastSuccessful links, and remove the nextBuildNumber file.
After doing the above steps, go to the following from the UI:
Jenkins → Manage Jenkins → Reload Configuration from Disk
It will do what you need.
If using the Script Console method, try the following instead to take into account jobs that are grouped into folder containers.
def jobName = "Your Job Name"
def job = Jenkins.instance.getItemByFullName(jobName)
or
def jobName = "My Folder/Your Job Name
def job = Jenkins.instance.getItemByFullName(jobName)
Navigate to %JENKINS_HOME%\jobs\<jobName>.
Open the file "nextBuildNumber" and change the number. After that, reload the Jenkins configuration. Note: the "nextBuildNumber" file contains the next build number that will be used by Jenkins.
Tested on Jenkins 2.293 on Linux. It will remove all the build logs, but it will not reset the running build number.
cd /var/lib/jenkins/jobs
find . -name "builds" -exec rm -rf {} \;
Be careful with this command, because it executes rm -rf on each find result. You can run this first to validate that the results are only the builds folders of your jobs:
find . -name "builds"
If you are looking for a solution where you have a job inside a folder, you can use the getItemByFullName function. It also supports white space in folder and job names.
def jobName = "folder_name/job_name"
def job = Jenkins.instance.getItemByFullName(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()