Search through console output of a Jenkins job

I have a Jenkins job with 100+ builds. I need to search through all the builds of that job to find builds that have a certain string in the console output. Is there any plugin for that? How do I do that?

I often use the Jenkins Script Console for tasks like this. The Groovy plugin provides the Script Console, but if you're going to use the Script Console for periodic maintenance, you'll also want the Scriptler plugin which allows you to manage the scripts that you run.
From Manage Jenkins -> Script Console, you can write a Groovy script that iterates through the job's builds looking for the matching string:
JOB_NAME = "My Job"
BUILD_STRING = "Hello, world"

def job = Jenkins.instance.items.find { it.name == JOB_NAME }
for (build in job.builds) {
    def log = build.log
    if (log.contains(BUILD_STRING)) {
        println "${job.name}: ${build.id}"
    }
}

If there are no additional requirements, I would do it simply in the shell, e.g.:
find $JENKINS_HOME/jobs/haystack -name log -exec grep -l needle {} \; \
| sed 's|.*/\(.*\)/log|\1|'

To search the logs of all jobs:
I enhanced @DaveBacher's code to run in the Jenkins Script Console.
This helped me locate a sporadic error happening in multiple jobs.
NEEDLE = "string_i_am_looking_for"

for (job in Jenkins.instance.getAllItems(Job.class)) {
    for (build in job.builds) {
        def log = build.log
        if (log.contains(NEEDLE)) {
            println "${job.name}: ${build.id}"
        }
    }
}

Thanks everyone for your valuable solutions. After a bit of additional research, I found that there is a Jenkins plugin to do this:
https://wiki.jenkins-ci.org/display/JENKINS/Lucene-Search
It saves the console output so users can search it from the search box.

There is the Log Parser Plugin, which supports:
highlighting lines of interest in the log (errors, warnings, information)
dividing the log into sections and displaying a summary of the number of errors, warnings, and information lines within the log and its sections
linking the summary of errors and warnings into the context of the full log, making it easy to find a line of interest
showing a summary of errors and warnings on the build page
If it is old logs you are after, then @jil has the answer, assuming you are on Linux.

Just to throw another plugin out there, this blog post pointed me at the TextFinder plugin which allows you to search for text in either a workspace file or the console output, and override the build status as success/failure when the text is found.
The original poster doesn't say what should happen when the text is found, but it was searching for this functionality that brought me here.

To search for a regex pattern in all Jenkins jobs, and print the first matching line:
PATTERN = "(TLS|Build).*timeout"   // example pattern; set this to whatever you are searching for

for (job in Jenkins.instance.items) {
    for (build in job.builds) {
        try {
            def log = build.log
            def match = log =~ "\n(.*${PATTERN}.*)\n"
            if (match) {
                println "Job [${job.name}] - Build [${build.id}]: ${match[0][1]}"
            }
        }
        catch (Exception e) {
            println e
        }
    }
}
For example, searching in my builds for PATTERN = "(TLS|Build).*timeout" I found:
Job [OSP-AWS] - Build [83]: Build timeout: dial tcp [::1]:6443: connect: connection refused
Job [OSP-GCP] - Build [21]: Unable to connect to the server: net/http: TLS handshake timeout

Just use the standard Jenkins search (top right corner) with the console keyword:
console:"whatever you are looking for"

Related

Raise Abort in Jenkins Job from Batch script

I have a Jenkins job which does Git syncs and builds the source code.
I added and configured a "Post build task" option.
In the post-build task, I am searching for the keyword "TIMEOUT:" in the console output (this part is done) and want to declare the job as failed and aborted if the keyword matches.
How can I declare the job as aborted from the batch script if the keyword matches? Something like echo ABORT?
It is easier if you want to mark it as FAILED: just exit 1 will do that.
It is tricky to achieve "Aborted" from the Post build task plugin; it is much easier with the Groovy Postbuild plugin, which provides rich helper functions.
For example, a match function:
def matcher = manager.getLogMatcher(".*Total time: (.*)\$")
if (matcher?.matches()) {
    manager.addShortText(matcher.group(1), "grey", "white", "0px", "white")
}
Abort function:
def executor = build.executor ?: build.oneOffExecutor
if (executor != null) {
    executor.interrupt(Result.ABORTED)
}
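Putting the two together for the "TIMEOUT:" case from the question, here is a rough sketch of a single Groovy Postbuild script; the keyword and the short text are just examples, and the rest uses the plugin's manager API shown above:
import hudson.model.Result

// Sketch: search the console log for the keyword and abort the build if found.
// "TIMEOUT:" is the keyword from the question; adjust it to your log format.
def matcher = manager.getLogMatcher(".*TIMEOUT:.*")
if (matcher?.matches()) {
    manager.addShortText("TIMEOUT detected")
    def executor = manager.build.executor ?: manager.build.oneOffExecutor
    if (executor != null) {
        executor.interrupt(Result.ABORTED)
    }
}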
You can simply exit the flow and raise the error code that you want:
echo "Timeout detected!"
exit 1
Jenkins should detect the error and mark the build as failed.
The error code must be between 1 and 255. You can choose whatever you want; just be aware that some codes are reserved:
http://tldp.org/LDP/abs/html/exitcodes.html#EXITCODESREF
You can also consider using the time-out plugin:
https://wiki.jenkins.io/display/JENKINS/Build-timeout+Plugin
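If the job is (or can become) a pipeline job, the built-in timeout step gives similar behavior without an extra plugin. A minimal sketch, where the stage, duration, and build command are hypothetical:
node {
    stage('Build') {
        // Aborts everything inside the block if it runs longer than 30 minutes
        timeout(time: 30, unit: 'MINUTES') {
            bat 'build.cmd'   // hypothetical build command
        }
    }
}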
Another option is to send a request to <build URL>/stop, which is exactly what is done when you manually abort a build:
echo "Timeout detected!"
curl yourjenkins/job_name/11/stop

Jenkins Pipeline - How do I use the 'tool' option to specify a custom tool?

I have a custom tool defined within Jenkins via the Custom Tools plugin. If I create a freestyle project the Install custom tools option correctly finds and uses the tool (Salesforce DX) during execution.
However, I cannot find a way to do the same via a pipeline file. I have used the pipeline syntax snippet generator to get:
tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
I have put that into my stage definition:
stage('FetchMetadata') {
    print 'Collect Prod metadata via SFDX'
    tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
    sh('sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml')
}
but I get an error message stating line 2: sfdx: command not found
Is there some other way I should be using this snippet?
Full Jenkinsfile for info:
node {
    currentBuild.result = 'SUCCESS'
    try {
        stage('CheckoutRepo') {
            print 'Get the latest code from the MASTER branch'
            checkout scm
        }
        stage('FetchMetadata') {
            print 'Collect Prod metadata via SFDX'
            tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
            sh('sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml')
        }
        stage('ConvertMetadata') {
            print 'Unzip retrieved metadata file'
            sh('unzip unpackaged.zip .')
            print 'Convert metadata to SFDX format'
            sh('/usr/local/bin/sfdx force:mdapi:convert -r metadata/unpackaged/ -d force-app/')
        }
        stage('CommitChanges') {
            sh('git add --all')
            print 'Check if any changes need committing'
            sh('if ! git diff-index --quiet HEAD --; then echo "changes found - pushing to repo"; git commit -m "Autocommit from Prod # $(date +%H:%M:%S\' \'%d/%m/%Y)"; else echo "no changes found"; fi')
            sshagent(['xxx-xxx-xxx-xxx']) {
                sh('git push -u origin master')
            }
        }
    }
    catch (err) {
        currentBuild.result = 'FAILURE'
        print 'Build failed'
        error(err)
    }
}
UPDATE
I have made some progress using this example Jenkinsfile
My stage now looks like this:
stage('FetchMetadata') {
    print 'Collect Prod metadata via SFDX'
    def sfdxLoc = tool 'sfdx'
    sh script: "cd topLevel; ${sfdxLoc}/sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml"
}
Unfortunately, although it looks like Jenkins is now finding and running the sfdx tool, I get a new error:
TypeError: Cannot read property 'run' of undefined
at Object.<anonymous> (/var/lib/jenkins/.cache/sfdx/tmp/heroku-script-509584048:20:4)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.runMain (module.js:604:10)
at run (bootstrap_node.js:394:7)
at startup (bootstrap_node.js:149:9)
at bootstrap_node.js:509:3
I ran into the same problem. I got to this workaround:
environment {
    GROOVY_HOME = tool name: 'Groovy-2.4.9', type: 'hudson.plugins.groovy.GroovyInstallation'
}
stages {
    stage('Run Groovy') {
        steps {
            bat "${groovy_home}/bin/groovy <script.name>"
        }
    }
}
Somehow the tool path is not added to PATH by default (as was customary on my Jenkins 1.6 server install). Adding ${groovy_home} when executing the bat command fixes that for me.
This way of calling a tool is basically borrowed from the scripted pipeline syntax.
I am using this for all my custom tools (not only Groovy).
The tool part:
tool name: 'Groovy-2.4.9', type: 'hudson.plugins.groovy.GroovyInstallation'
was generated by the snippet generator like you did.
According to the Jenkins users mailing list, work is still ongoing for a definitive solution, so my solution really is a workaround.
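For context, here is a minimal sketch of how that stanza fits into a complete declarative pipeline. The agent, stage name, and script file are placeholders, and the tool name must match a Groovy installation configured under Global Tool Configuration:
pipeline {
    agent any
    environment {
        // Resolves the tool and stores its home directory in GROOVY_HOME
        GROOVY_HOME = tool name: 'Groovy-2.4.9', type: 'hudson.plugins.groovy.GroovyInstallation'
    }
    stages {
        stage('Run Groovy') {
            steps {
                // The tool home is not on PATH automatically, so prefix the call
                bat "${env.GROOVY_HOME}/bin/groovy my_script.groovy"
            }
        }
    }
}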
This is my first time commenting on Stack Overflow, but I've been looking for this answer for a few days and I think I have a potential solution. Expanding on Fholst's answer: that environment stanza may work for declarative syntax, but in a scripted pipeline you must use the withEnv() equivalent and pass the tool in via a GString, i.e. ${tool 'nameOfToolDefinedInGlobalTools'}.
For my particular use case, for reasons beyond my control, we do not have Maven installed on our Jenkins host machine, but there is one defined in the global tools configuration. This means I need to add mvn to the PATH before executing my sh commands within my steps. What I have been able to do is this:
withEnv(["PATH+MVN=${tool 'NameOfMavenTool'}/bin"]) {
    sh '''
        echo "PATH = ${PATH}"
    '''
}
This should give you what you need. Please ignore the triple single quotes on the sh line; I actually have several environment variables loaded there and simply removed them from my snippet.
Hope this helps anyone who has been searching for this solution for days. I feel your pain. I cobbled this together by looking through the console output of a declarative pipeline script (if you use the tools {} stanza it shows you how it builds those environment variables and wraps your subsequent declarative steps) and the following link: https://go.cloudbees.com/docs/cloudbees-documentation/use/automating-projects/jenkinsfile/
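Applied to the sfdx custom tool from the question, that approach looks roughly like this in a scripted pipeline. The PATH+SFDX label is arbitrary, and whether the sfdx binary sits in the tool root or a bin/ subfolder depends on how the custom tool is packaged:
node {
    // Resolve the custom tool and prepend its directory to PATH for the block below
    def sfdxHome = tool name: 'sfdx', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
    withEnv(["PATH+SFDX=${sfdxHome}"]) {
        sh 'sfdx force:mdapi:retrieve -r metadata/ -u DevHub -k ./metadata/package.xml'
    }
}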
You may be having a problem because of the path to your sfdx install folder if you are on Windows. The Dreamhouse Jenkinsfile was written for a Linux shell or Mac terminal, so some changes are necessary to make it work on Windows.
${sfdxLoc}/sfdx
Should be
\"${sfdxLoc}/sfdx\"
So that the command line handles any spaces properly.
https://wipdeveloper.com/2017/06/22/salesforce-dx-jenkins-jenkinsfile/

Using waitForQualityGate in a Jenkins declarative pipeline

The following SonarQube (6.3) analysis stage in a declarative pipeline in Jenkins 2.50 is failing with this error in the console log: http://pastebin.com/t2ja23vC. More specifically:
SonarQube installation defined in this job (SonarGate) does not match any configured installation. Number of installations that can be configured: 1.
Update: after changing "SonarQube" to "SonarGate" in the Jenkins settings (under SonarQube servers, so it'll match the Jenkinsfile), I get a different error: http://pastebin.com/HZZ6fY6V
java.lang.IllegalStateException: Unable to get SonarQube task id and/or server name. Please use the 'withSonarQubeEnv' wrapper to run your analysis.
The stage is a modification of the example from the SonarQube docs: https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner+for+Jenkins#AnalyzingwithSonarQubeScannerforJenkins-AnalyzinginaJenkinspipeline
stage ("SonarQube analysis") {
steps {
script {
STAGE_NAME = "SonarQube analysis"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, don't analyze."
}
else { // this is a PR build, run sonar analysis
withSonarQubeEnv("SonarGate") {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
}
}
}
}
stage ("SonarQube Gatekeeper") {
steps {
script {
STAGE_NAME = "SonarQube Gatekeeper"
if (BRANCH_NAME == "develop") {
echo "In 'develop' branch, skip."
}
else { // this is a PR build, fail on threshold spill
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
}
}
I also created a webhook, sonarqube-webhook, with the URL http://****/sonarqube-webhook/. Should it be like that, or http://****/sonarqube/sonarqube-webhook? To access the server dashboard I use http://****/sonarqube.
In SonarQube's Quality Gates section, I created a new quality gate (screenshot omitted).
I am not sure if the setting in SonarGate is correct. I do use jenkins-mocha to generate an lcov.info file that is used in Sonar to generate the coverage data.
Perhaps the quality gate is the wrong setting to use? The end goal is to fail the job in Jenkins if the coverage percentage is not met.
Finally, I am not sure if the SonarQube-related settings in the Jenkins system configuration are required at all (screenshots omitted; the port is 9000, not 900, the text is cut off in the screenshot).
The SonarQube Jenkins plugin scans the build output for two specific lines, which it uses to get the SonarQube report task properties and project URL. If your invocation of sonar-scanner does not output these lines, the waitForQualityGate() call won't have the task ID to look them up. So you will have to figure out the correct settings to make it more verbose.
See the extractSonarProjectURLFromLogs and extractReportTask methods in the SonarUtils class of the plugin to understand how they work:
ANALYSIS SUCCESSFUL, you can browse <project URL> is used to add a link to the badge (in the build history)
Working dir: <dir with report-task.txt> is used to pass the task ID to the waitForQualityGate step
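For example, running the scanner in debug mode inside the wrapper usually makes those lines show up in the build log. A sketch based on the stage from the question, where -X is the scanner's debug flag:
stage ("SonarQube analysis") {
    steps {
        script {
            withSonarQubeEnv("SonarGate") {
                // -X (debug) makes the scanner log its working directory and report task details
                sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner -X"
            }
        }
    }
}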
This was discovered to be a bug in the SonarQube scanner for Jenkins, when using a Jenkins slave for jobs (if the job is run on the master, it'd work). You can read more here: https://jira.sonarsource.com/browse/SONARJNKNS-282
I have tested this using a test build of v2.61 of the scanner plug-in and found it working.
The solution is to upgrade to v2.61 when released.
This stage will then work:
stage ("SonarQube analysis") {
steps {
withSonarQubeEnv('SonarQube') {
sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"
}
def qualitygate = waitForQualityGate()
if (qualitygate.status != "OK") {
error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
}
}
}
If you're running SonarQube in a Docker container, check that the memory isn't exhausted. We were maxing out, which seemed to be the issue.

Jenkins: remove old builds with command line

I delete old Jenkins builds with rm in the directory where the job is hosted:
my_job/builds/$ rm -rf [1-9]*
The old builds are still visible on the job page, though.
How do I remove them from the command line, without using the delete button in each build's user interface?
Here is another option: delete the builds remotely with cURL. (Replace the beginning of the URLs with whatever you use to access Jenkins with your browser.)
$ curl -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The above deletes build #1 to #56 for job myJob.
If authentication is enabled on the Jenkins instance, a user name and API token must be provided like this:
$ curl -u userName:apiToken -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The API token must be fetched from the /me/configure page in Jenkins. Just click on the "Show API Token..." button to display both the user name and the API token.
Edit: As pointed out by yegeniy in a comment below, one might have to replace doDeleteAll by doDelete in the URLs above to make this work, depending on the configuration.
It looks like this has been added to the CLI, or is at least being worked on: http://jenkins.361315.n4.nabble.com/How-to-purge-old-builds-td385290.html
Syntax would be something like this: java -jar jenkins-cli.jar -s http://my.jenkins.host delete-builds myproject '1-7499' --username $user --password $password
Check your Jenkins home directory:
"Manage Jenkins" ==> "Configure System"
Check the field "Home directory" (usually it is /var/lib/jenkins).
Command to delete all builds of all Jenkins jobs:
/jenkins_home/jobs> rm -rf */builds/*
After deleting, you should reload the configuration:
"Manage Jenkins" ==> "Reload Configuration from Disk"
You can do it with Groovy scripts using the Hudson API. Access the Script Console of your Jenkins installation at http://localhost:38080/script.
For example, to delete all old builds of all projects, use the following script.
Note: take care if you use fingerprints; you will lose all history.
import hudson.model.*

// For each project
for (item in Hudson.instance.items) {
    // check that job is not building
    if (!item.isBuilding()) {
        System.out.println("Deleting all builds of job " + item.name)
        for (build in item.getBuilds()) {
            build.delete()
        }
    }
    else {
        System.out.println("Skipping job " + item.name + ", currently building")
    }
}
Or for cleaning all workspaces:
import hudson.model.*

// For each project
for (item in Hudson.instance.items) {
    // check that job is not building
    if (!item.isBuilding()) {
        println("Wiping out workspace of job " + item.name)
        item.doDoWipeOutWorkspace()
    }
    else {
        println("Skipping job " + item.name + ", currently building")
    }
}
There are a lot of examples on the Jenkins wiki
Is there a reason you need to do this manually instead of letting Jenkins delete old builds for you?
You can change your job configuration to automatically delete old builds, based either on number of days or number of builds. No more worrying about it or having to keep track, Jenkins just does it for you.
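If you prefer to script that setting instead of clicking through each job, here is a rough Script Console sketch; the job name is hypothetical, and the LogRotator arguments are daysToKeep, numToKeep, artifactDaysToKeep, artifactNumToKeep:
import jenkins.model.Jenkins
import jenkins.model.BuildDiscarderProperty
import hudson.tasks.LogRotator

def job = Jenkins.instance.getItemByFullName('my_job')   // hypothetical job name
job.removeProperty(BuildDiscarderProperty)                // drop any existing discarder setting
job.addProperty(new BuildDiscarderProperty(new LogRotator(30, 50, -1, -1)))
job.save()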
The following script cleans out old builds of jobs. You should reload the configuration from disk if you delete builds manually:
import hudson.model.*

for (item in Hudson.instance.items) {
    if (!item.isBuilding()) {
        println("Deleting old builds of job " + item.name)
        for (build in item.getBuilds()) {
            // delete all except the last
            if (build.getNumber() < item.getLastBuild().getNumber()) {
                println "delete " + build
                try {
                    build.delete()
                } catch (Exception e) {
                    println e
                }
            }
        }
    } else {
        println("Skipping job " + item.name + ", currently building")
    }
}
From the Script Console run this, but you need to change the job name:
def jobName = "name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
From Jenkins Scriptler console run the following Groovy script to delete all the builds of jobs listed under a view:
import jenkins.model.Jenkins

hudson.model.Hudson.instance.getView('<ViewName>').items.each() {
    println it.fullDisplayName
    def jobname = it.fullDisplayName
    def item = hudson.model.Hudson.instance.getItem(jobname)
    def build = item.getLastBuild()
    if (item.getLastBuild() != null) {
        Jenkins.instance.getItemByFullName(jobname).builds.findAll {
            it.number <= build.getNumber()
        }.each {
            it.delete()
        }
    }
}
def jobName = "MY_JOB_NAME"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().findAll { it.number < 10 }.each { it.delete() }
if you had 12 builds this would clear out builds 0-9 and you'd have 12,11,10 remaining. Just drop in the script console
This script will configure the build retention settings of all of the Jenkins jobs.
Change the values 30 and 200 to suit your needs, run the script, then restart the Jenkins service.
#!/bin/bash
cd $HOME
for xml in $(find jobs -name config.xml)
do
    sed -i 's#<daysToKeep>.*#<daysToKeep>30</daysToKeep>#' $xml
    sed -i 's#<numToKeep>.*#<numToKeep>200</numToKeep>#' $xml
done
The script below works well with Folders and Multibranch Pipelines. It preserves only the last 10 builds of each job; that can be adjusted or removed (the if) if needed. Run it from the web Script Console (example URL: https://jenkins.company.com/script):
def jobs = Hudson.instance.getAllItems(hudson.model.Job.class)
for (job in jobs) {
    println(job)
    def recent = job.builds.limit(10)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println("\t Deleting build: " + build)
            build.delete()
        }
    }
}
In my opinion, none of the other answers are sufficient; you have to do:
echo "Cleaning:"
echo "${params.PL_JOB_NAME}"
echo "${params.PL_BUILD_NUMBER}"
build_number = params.PL_BUILD_NUMBER as Integer
sleep time: 5, unit: 'SECONDS'
wfjob = Jenkins.instance.getItemByFullName(params.PL_JOB_NAME)
wfjob.getBuilds().findAll { it.number >= build_number }.each { it.delete() }
wfjob.save()
wfjob.nextBuildNumber = build_number
wfjob.save()
wfjob.updateNextBuildNumber(build_number)
wfjob.save()
wfjob.doReload()
Otherwise the job will not be correctly reset and you will have to hit Build until you reach the next free number; in the meantime the Jenkins log will show:
java.lang.IllegalStateException: JENKINS-23152: ****/<BUILD_NUMBER> already existed;

How do I clear my Jenkins/Hudson build history?

I recently updated the configuration of one of my Hudson builds. The build history is out of sync. Is there a way to clear my build history?
Please and thank you
Use the script console (Manage Jenkins > Script Console) and something like this script to bulk delete a job's build history https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/bulkDeleteBuilds.groovy
That script assumes you want to only delete a range of builds. To delete all builds for a given job, use this (tested):
// change this variable to match the name of the job whose builds you want to delete
def jobName = "Your Job Name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
// uncomment these lines to reset the build number to 1:
//job.nextBuildNumber = 1
//job.save()
This answer is for Jenkins
Go to your Jenkins home page → Manage Jenkins → Script Console
Run the following script there. Change copy_folder to your project name
Code:
def jobName = "copy_folder"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
If you click Manage Hudson / Reload Configuration From Disk, Hudson will reload all the build history data.
If the data on disk is messed up, you'll need to go to your %HUDSON_HOME%\jobs\<projectname> directory and restore the build directories as they're supposed to be. Then reload config data.
If you're simply asking how to remove all build history, you can just delete the builds one by one via the UI if there are just a few, or go to the %HUDSON_HOME%\jobs\<projectname> directory and delete all the subdirectories there -- they correspond to the builds.
Afterwards restart the service for the changes to take effect.
Here is another option: delete the builds with cURL.
$ curl -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The above deletes build #1 to #56 for job myJob.
If authentication is enabled on the Jenkins instance, a user name and API token must be provided like this:
$ curl -u userName:apiToken -X POST http://jenkins-host.tld:8080/jenkins/job/myJob/[1-56]/doDeleteAll
The API token must be fetched from the /me/configure page in Jenkins. Just click on the "Show API Token..." button to display both the user name and the API token.
Edit: one might have to replace doDeleteAll by doDelete in the URLs above to make this work, depending on the configuration or the version of Jenkins used.
Here is how to delete ALL builds for ALL jobs using Jenkins scripting:
def jobs = Jenkins.instance.projects.collect { it }
jobs.each { job -> job.getBuilds().each { it.delete() }}
You could modify the project configuration temporarily to save only the last 1 build, reload the configuration (which should trash the old builds), then change the configuration setting again to your desired value.
If you want to clear the build history of a MultiBranchProject (e.g. a multibranch pipeline),
go to your Jenkins home page → Manage Jenkins → Script Console and run the following script:
def projectName = "ProjectName"
def project = Jenkins.instance.getItem(projectName)
project.getItems().each { job ->
    job.getBuilds().each { it.delete() }
    job.nextBuildNumber = 1
    job.save()
}
This one is the best option available; it deletes the build history of every job. Run it from the Script Console:
Jenkins.instance.getAllItems(AbstractProject.class).each {
    Jenkins.instance.getItemByFullName(it.fullName).builds.findAll { it.number > 0 }.each { it.delete() }
}
In case the jobs are grouped, it's possible to either give getItemByFullName the full name with forward slashes:
def job = Jenkins.instance.getItemByFullName("folder_name/job_name")
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
or traverse the hierarchy like this:
def folder = Jenkins.instance.getItem("folder_name")
def job = folder.getItem("job_name")
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
Deleting directly from the file system is not safe. You can run the script below to delete all builds from all jobs (recursively):
def numberOfBuildsToKeep = 10

Jenkins.instance.getAllItems(AbstractItem.class).each {
    if (it.class.toString() != "class com.cloudbees.hudson.plugins.folder.Folder"
            && it.class.toString() != "class org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject") {
        println it.name
        def builds = it.getBuilds()
        for (int i = numberOfBuildsToKeep; i < builds.size(); i++) {
            builds.get(i).delete()
            println "Deleted " + builds.get(i)
        }
    }
}
Go to "Manage Jenkins" > "Script Console"
Run below:
def jobName = "build_name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
job.save()
Another easy way to clean up builds is to add the Discard Old Build plugin as a post-build action in your jobs. Set a maximum number of builds to save and then run the job again:
https://wiki.jenkins-ci.org/display/JENKINS/Discard+Old+Build+plugin
Go to the %HUDSON_HOME%\jobs\<projectname> directory, remove the builds directory, and remove the lastStable and lastSuccessful links as well as the nextBuildNumber file.
After doing the above, go to Jenkins -> Manage Jenkins -> Reload Configuration from Disk in the UI.
It will do what you need.
If you are using the Script Console method, try the following instead, which takes into account jobs that are grouped into folder containers.
def jobName = "Your Job Name"
def job = Jenkins.instance.getItemByFullName(jobName)
or
def jobName = "My Folder/Your Job Name
def job = Jenkins.instance.getItemByFullName(jobName)
Navigate to: %JENKINS_HOME%\jobs\jobName
Open the file "nextBuildNumber" and change the number. After that, reload the Jenkins configuration. Note: the "nextBuildNumber" file contains the next build number that Jenkins will use.
Tested on Jenkins 2.293 on Linux. This removes all the build logs but does not reset the build numbering:
cd /var/lib/jenkins/jobs
find . -name "builds" -exec rm -rf {} \;
Be careful with this command, because it executes rm -rf on each find result. You can run this first to validate that the results are only the builds folders of your jobs:
find . -name "builds"
If you are looking for a solution where the job is inside a folder, you can use the getItemByFullName function. It also supports white space in folder and job names.
def jobName = "folder_name/job_name"
def job = Jenkins.instance.getItemByFullName(jobName)
job.getBuilds().each { it.delete() }
job.nextBuildNumber = 1
job.save()
