I am using Jenkins with multiple tools (Node, Maven, etc.), and some build versions are based on the build number. Jobs are configured with Pipeline (Bitbucket Team Project).
Sometimes Jenkins just removes most of the builds from branches and restarts building (as if first-time branch indexing were happening).
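For context, the version derivation looks roughly like the sketch below. This is a simplified stand-in, not the real Jenkinsfile; the tool names 'maven-3' and 'node-10' and the versions:set call are placeholders:
    pipeline {
        agent any
        tools {
            // assumes Maven and NodeJS tool installations configured under these (placeholder) names
            maven 'maven-3'
            nodejs 'node-10'
        }
        stages {
            stage('Build') {
                steps {
                    // the artifact version is derived from the Jenkins build number,
                    // so losing the build history also resets the generated versions
                    sh "mvn -B versions:set -DnewVersion=1.0.${env.BUILD_NUMBER}"
                    sh 'mvn -B package'
                }
            }
        }
    }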
Some logs:
Sep 7 07:50:07 jenkins-1 nice[21775]: INFO: Skipping job "blablabla" with type org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
Sep 7 07:50:44 jenkins-1 nice[21775]: WARNING: Failed to load Owner[blablabla/blablabla/initial/13:blablabla/blablabla/initial #13]. Unregistering
Sep 7 07:50:44 jenkins-1 nice[21775]: java.io.IOException: blablabla/blablabla/initial #13 did not yet start
Jenkins version: 2.73.2
Any ideas why it's happening?
I have a Jenkins Job DSL job that worked well until about January (it is not used that often). Last week, the job failed with the error message ERROR: java.io.IOException: Failed to persist config.xml (no stack trace, just that message). There were no changes to the job since the last successful execution in January.
[...]
13:06:22 Processing provided DSL script
13:06:22 New run name is '#15 (Branch_B20_2_x)'
13:06:22 ERROR: java.io.IOException: Failed to persist config.xml
13:06:22 [WS-CLEANUP] Deleting project workspace...
13:06:22 [WS-CLEANUP] Deferred wipeout is used...
13:06:22 [WS-CLEANUP] done
13:06:22 Finished: FAILURE
I thought that maybe some plugin was updated between January and now and the DSL script is no longer valid, so I changed my DSL script to the simplest one I could imagine (the example from the job-dsl plugin page):
job('example') {
    steps {
        shell('echo Hello World!')
    }
}
But the job still fails with the exact same error.
I checked the Jenkins logs, but there was nothing to see.
I am running Jenkins in a Docker Swarm container, and each job is executed in its own build agent container using the docker-swarm-plugin (no changes to that either; it worked in January).
The Docker daemon logs also show no errors.
The filesystem for the Jenkins workspace is not full, and the user in the build agent container has write access to that filesystem.
It does not even work when I mount an empty tmpfs to the workspace.
Does anyone have an idea what is going wrong, or at least a hint where to continue searching for this error?
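As a further sanity check, a rough scripted Pipeline sketch like the following (the agent label 'docker-swarm-agent' is a placeholder, not from my setup) would confirm that the agent container itself can write to the workspace, independently of job-dsl:
    node('docker-swarm-agent') {
        // write and read back a throwaway file to rule out filesystem/permission problems
        writeFile file: 'persist-check.txt', text: "written by build ${env.BUILD_NUMBER}"
        echo readFile('persist-check.txt')
    }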
Jenkins version: 2.281
job-dsl plugin version: 1.77
Docker version: 20.10.4
The problem was solved by updating Jenkins to 2.289.
It seems there was some problem with the combination of the previous versions. I will keep you updated if any of the next updates changes anything.
I'm using a Multibranch pipeline job to discover branches/tags/PRs and execute certain jobs. My repos are on GitHub, and the scan is able to discover everything except tags; I get the error below. When the "Discover tags" option is disabled in the multibranch job configuration, I don't see this error, but then no build is triggered when tags are created.
I tried creating multiple new repos, but it did not help.
Jenkins version: 2.150.1
Getting remote tags...
ERROR: [Sun Jan 06 16:00:21 UTC 2019] Could not fetch branches from source 3f765a8f-ee7f-4c6d-a655-f9ca3b2b25d3
org.kohsuke.github.GHException: Failed to retrieve https://repourl/branch/git/refs/tags
at org.kohsuke.github.Requester$PagingIterator.fetch(Requester.java:529)
at org.kohsuke.github.Requester$PagingIterator.hasNext(Requester.java:494)
at org.kohsuke.github.PagedIterator.fetch(PagedIterator.java:44)
at org.kohsuke.github.PagedIterator.hasNext(PagedIterator.java:32)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource$LazyTags$1$1.hasNext(GitHubSCMSource.java:2222)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.retrieve(GitHubSCMSource.java:1016)
at jenkins.scm.api.SCMSource._retrieve(SCMSource.java:374)
at jenkins.scm.api.SCMSource.fetch(SCMSource.java:284)
at jenkins.branch.MultiBranchProject.computeChildren(MultiBranchProject.java:634)
at com.cloudbees.hudson.plugins.folder.computed.ComputedFolder.updateChildren(ComputedFolder.java:277)
at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.run(FolderComputation.java:165)
at jenkins.branch.MultiBranchProject$BranchIndexing.run(MultiBranchProject.java:1025)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
[Sun Jan 06 16:00:21 UTC 2019] Finished branch indexing. Indexing took 0.42 sec
FATAL: Failed to recompute children of Pipelines » bl-calibration-orchestrator-v1
org.kohsuke.github.GHException: Failed to retrieve https://repourl/branch/git/refs/tags
at org.kohsuke.github.Requester$PagingIterator.fetch(Requester.java:529)
at org.kohsuke.github.Requester$PagingIterator.hasNext(Requester.java:494)
at org.kohsuke.github.PagedIterator.fetch(PagedIterator.java:44)
at org.kohsuke.github.PagedIterator.hasNext(PagedIterator.java:32)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource$LazyTags$1$1.hasNext(GitHubSCMSource.java:2222)
at org.jenkinsci.plugins.github_branch_source.GitHubSCMSource.retrieve(GitHubSCMSource.java:1016)
at jenkins.scm.api.SCMSource._retrieve(SCMSource.java:374)
at jenkins.scm.api.SCMSource.fetch(SCMSource.java:284)
at jenkins.branch.MultiBranchProject.computeChildren(MultiBranchProject.java:634)
at com.cloudbees.hudson.plugins.folder.computed.ComputedFolder.updateChildren(ComputedFolder.java:277)
at com.cloudbees.hudson.plugins.folder.computed.FolderComputation.run(FolderComputation.java:165)
at jenkins.branch.MultiBranchProject$BranchIndexing.run(MultiBranchProject.java:1025)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
That looks like a recent Jenkins issue (JENKINS-52397) which is still pending:
Org Scan blows up when repository has no tags
Given a GitHub Organization Folder that has the "Discover Tags" behavior; the scan blows up on every repository that doesn't have any tags.
Workaround of adding a single tag confirmed to work
This is linked to the JENKINS/GitHub Branch Source Plugin, and is still seen in Jenkins 2.152.
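If you need the workaround rather than the fix, what "adding a single tag" amounts to, sketched as a one-off scripted Pipeline (you could equally run the two git commands from any clone; the tag name is a placeholder and push credentials on the agent are assumed):
    node {
        checkout scm
        // push one placeholder tag so tag discovery has something to enumerate
        sh 'git tag -a scan-placeholder -m "placeholder tag for tag discovery"'
        sh 'git push origin scan-placeholder'
    }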
From the release notes of the Jenkins GitHub Branch Source Plugin:
Version 2.4.2
Release date: 2019-01-16
* JENKINS-52397: Org Scan blows up when repository has no tags #191
* INFRA-1934: Stop publishing to jenkinsci/jenkins repo on Docker Hub
ref: https://github.com/jenkinsci/github-branch-source-plugin/blob/b26aba6136024d4dfaafb9e2c36317128ceb82dd/CHANGELOG.md
Whether you already have that plugin version may depend on various factors. A Jenkins version of 2.160 (released 2019-01-16) or newer may, but is not guaranteed to, indicate that the fixed version of the plugin was at least available at setup time.
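To check whether your controller already has the fixed plugin, a small Script Console snippet (Manage Jenkins -> Script Console) can print the installed version; per the release notes above, you want github-branch-source 2.4.2 or newer:
    // prints the installed GitHub Branch Source plugin version, or a note if it is missing
    def plugin = Jenkins.instance.pluginManager.getPlugin('github-branch-source')
    println(plugin ? "${plugin.shortName} ${plugin.version}" : 'github-branch-source is not installed')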
After restarting our Jenkins 2.107.2 instance, it shows many previously finished pipeline runs in the "master" section of the homepage, with a partly-completed progress bar.
When looking at the console log for these runs, they were completed days ago (long before the restart), but are showing a "resuming" message afterwards.
[Pipeline] End of Pipeline
Finished: SUCCESS
Resuming build at Tue May 01 06:02:42 PDT 2018 after Jenkins restart
Resuming build at Thu May 03 16:11:45 PDT 2018 after Jenkins restart
How can I purge these old runs from Jenkins (where does Jenkins keep the state for these runs)? I have hundreds of them; stop/kill doesn't remove them either.
I see this in the run's build.xml file - is that what's causing it?
<completed>false</completed>
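For reference, this read-only Script Console sketch lists the runs Jenkins still considers in progress (it assumes Pipeline jobs of type WorkflowJob; it does not modify anything):
    // read-only: list pipeline runs that Jenkins still marks as building
    Jenkins.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob).each { job ->
        job.builds.findAll { it.isBuilding() }.each { run ->
            println "${run.fullDisplayName} started ${run.timestampString} ago and is still marked as building"
        }
    }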
https://issues.jenkins-ci.org/browse/JENKINS-50199 seemed to be the root cause.
After updating the pipeline plugins (latest "stable" versions as of 2018-05-07), the zombie runs disappeared and all is good again.
During daily work, I use the JIRA Trigger Plugin to trigger a Jenkins job when a Jira issue status changes. A teammate created two Jira issues with the same issue type and then changed the status of both almost at the same time. We can see the "Build is scheduled for ..." comment in both issues, but only one Jenkins build was triggered.
After enabling Jenkins logging at FINE level for troubleshooting, the log shows the following:
Sep 05, 2017 5:16:12 PM WARNING jenkins.model.lazy.LazyBuildMixIn newBuild
A new build could not be created in job Jira_Project_Feature_Updator
java.lang.IllegalStateException: JENKINS-23152: /var/lib/jenkins/jobs/Jira_Project_Feature_Updator/builds/181 already existed; will not overwrite with Jira_Project_Feature_Updator #181
    at hudson.model.RunMap.put(RunMap.java:188)
    at jenkins.model.lazy.LazyBuildMixIn.newBuild(LazyBuildMixIn.java:185)
    at hudson.model.AbstractProject.newBuild(AbstractProject.java:1019)
    at hudson.model.AbstractProject.createExecutable(AbstractProject.java:1218)
    at hudson.model.AbstractProject.createExecutable(AbstractProject.java:145)
    at hudson.model.Executor$1.call(Executor.java:358)
    at hudson.model.Executor$1.call(Executor.java:340)
    at hudson.model.Queue._withLock(Queue.java:1362)
    at hudson.model.Queue.withLock(Queue.java:1223)
    at hudson.model.Executor.run(Executor.java:340)
Sep 05, 2017 5:16:12 PM SEVERE hudson.model.Executor run
Unexpected executor death
java.lang.Error: java.lang.IllegalStateException: JENKINS-23152: /var/lib/jenkins/jobs/Jira_Project_Feature_Updator/builds/181 already existed; will not overwrite with Jira_Project_Feature_Updator #181
    at jenkins.model.lazy.LazyBuildMixIn.newBuild(LazyBuildMixIn.java:193)
    at hudson.model.AbstractProject.newBuild(AbstractProject.java:1019)
    at hudson.model.AbstractProject.createExecutable(AbstractProject.java:1218)
    at hudson.model.AbstractProject.createExecutable(AbstractProject.java:145)
    at hudson.model.Executor$1.call(Executor.java:358)
    at hudson.model.Executor$1.call(Executor.java:340)
    at hudson.model.Queue._withLock(Queue.java:1362)
    at hudson.model.Queue.withLock(Queue.java:1223)
    at hudson.model.Executor.run(Executor.java:340)
Caused by: java.lang.IllegalStateException: JENKINS-23152: /var/lib/jenkins/jobs/Jira_Project_Feature_Updator/builds/181 already existed; will not overwrite with Jira_Project_Feature_Updator #181
    at hudson.model.RunMap.put(RunMap.java:188)
    at jenkins.model.lazy.LazyBuildMixIn.newBuild(LazyBuildMixIn.java:185)
    ... 8 more
I simply think it is because the first change event triggered build #181, and the second event also wanted to create build #181, which already existed.
I am not sure whether this is caused by my configuration or by the plugin. The plugin works well apart from this problem.
Below is my version info:
Jenkins: 2.19.3
Jira: v6.3.6#6336-sha1:cf1622c
JIRA Trigger Plugin: 0.4.1 & 0.4.3-SNAPSHOT
I also tried the latest plugin build from https://ci.jenkins.io/blue/organizations/jenkins/Plugins%2Fjira-trigger-plugin/detail/master/10/artifacts/, but the result is the same.
I have a problem with Jenkins builds.
Sometimes my build hangs on the "Checking out Revision" operation. The problem is intermittent; I hit this issue in roughly 4-5 out of 10 builds.
I wait 10-20 minutes and then abort the hung build.
Console log:
Started by upstream project "master" build number 737
originally caused by:
Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/master/workspace/CI_BUILD/integration
Checkout:integration / /var/lib/jenkins/jobs/master/workspace/CI_BUILD/integration - hudson.remoting.LocalChannel@29b99c
Using strategy: Default
Last Built Revision: Revision e0963076406dd8bd6fcbd2d31ff37ad4ea60669a (origin/master)
Fetching changes from 1 remote Git repository
Fetching upstream changes from git@github.com:shaliko/shaliko.git
Commencing build of Revision e0963076406dd8bd6fcbd2d31ff37ad4ea60669a (origin/master)
Checking out Revision e0963076406dd8bd6fcbd2d31ff37ad4ea60669a (origin/master)
In Jenkins config:
6 executors
Quiet period 10
SCM checkout retry count 10
I updated Jenkins to version 1.500 and updated the Jenkins Git plugin to version 1.1.26, with no effect.
What should I check or update?
Fixed the issue by updating the JDK to version 7u11.
Thanks for the advice!