Our Jenkins instance is currently reporting BUILDS_ALL_TIME to be 999 for all builds of all jobs. Has anyone else experienced this, and does anyone know the path of least resistance to getting it to handle this environment variable as expected?
The back story:
Yesterday morning I updated all of the plugins on our Jenkins instance to the latest stable version. There were half a dozen or more plugins to be updated and I never pay close attention to them, but the Credentials Binding plugin stuck out because it turned my monitor red with a critical security update warning, which kicked off the whole process.
Yesterday afternoon, my coworker noted that the version number of one of his builds went from 1.0.0.7 to 1.0.0.999 and I was able to confirm the same thing with one of mine. Now all jobs that rely on the BUILDS_ALL_TIME environment variable report 999 for that variable in every build.
The Version Number Plugin is installed & up to date, and here's an excerpt from build.xml from an affected build:
<org.jvnet.hudson.tools.versionnumber.VersionNumberAction plugin="versionnumber@1.9">
  <info>
    <buildsToday>34</buildsToday>
    <buildsThisWeek>40</buildsThisWeek>
    <buildsThisMonth>40</buildsThisMonth>
    <buildsThisYear>40</buildsThisYear>
    <buildsAllTime>24</buildsAllTime> <!-- This is correct, is incremented properly between builds, and is updated appropriately when overridden in the job configuration GUI -->
  </info>
  <versionNumber>999.0.0</versionNumber> <!-- This is incorrect and NOT incremented properly between builds -->
</org.jvnet.hudson.tools.versionnumber.VersionNumberAction>
The timing of this behavior seems to be associated with the plugin upgrade (this association is by no means certain, but it's the best I've got at this point). Consequently, I tried downgrading each plugin with that option available in the management GUI, one-by-painful-one, to see if I could find the one culprit. This was fruitless. I'm not given the option to downgrade the Version Number plugin, but the last release of this thing was two years ago.
Well, after a year and a half of workarounds and self-loathing, the cause finally occurred to me. Somehow, someone had set the BUILDS_ALL_TIME environment variable globally on our build server. Once I unset it and restarted Jenkins, our builds' version numbers returned to their appropriate values.
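In case it helps anyone else, here's roughly how I'd verify and fix it now (a sketch assuming a Linux master running as a systemd service; the file holding the export will vary with your install):

# See whether the running Jenkins process inherited the variable
tr '\0' '\n' < /proc/$(pgrep -f jenkins.war | head -n1)/environ | grep BUILDS_ALL_TIME

# Remove the export from wherever it was defined
# (e.g. /etc/environment, /etc/default/jenkins, or a profile script), then:
sudo systemctl restart jenkins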
Related
After a recent update (both Jenkins and plugins), my Ivy Project settings can no longer be changed due to incompatible layouts (the table-to-div change in a minor version update, from Jenkins 2.263 to 2.264). This broke every plugin involved in configuring projects, but it went unnoticed for two months because our project settings haven't needed to change in quite a while, and the builds were still working fine in the meantime.
For reference, my build process is based on:
Ant for the build
Ivy for dependency resolution
Artifactory as a dependency repository
Subversion as a code repository (with Jenkins commit triggers)
JUnit with Cobertura, JMeter
FindBugs, CheckStyle, CLOC
Projects are based on Java and JavaDoc
I tried reverting to the earlier version of Jenkins, but this affected nearly every plugin, and I wasn't able to successfully revert to the plugin version combination from prior to the breaking update. After failing to revert the updates, I decided instead to plow forward in updating all of our 68 projects to accommodate the new plugin versions.
Unfortunately, I can't save any configuration changes to Ivy Projects. After trial and error, I've found that I can reproduce my builds using Freestyle Projects. However, Jenkins doesn't seem to offer any way to convert projects from one type to another. If I were to create new projects from scratch to replace my existing projects (all 68, including their dependencies and specific plugin settings), I would lose all of my previous build histories, including the build numbers (which carry over to our deployments) and our project metrics (which we use for performance evaluation). So, I don't want to lose all of that information.
How can I manually change an Ivy Project to a Freestyle Project?
I found a partial solution, but it doesn't seem to work for all projects.
Stop the Jenkins webapp (important).
For each Ivy Project that you want to convert to a Freestyle Project, rename the root element of jobs/[project]/config.xml from <hudson.ivy.IvyModuleSet plugin="ivy@2.1"> to <project> (don't forget to also change the closing tag at the end of the document from </hudson.ivy.IvyModuleSet> to </project>; see the sketch after these steps).
Restart Jenkins.
For most projects, I am then able to change the project configuration and save (importantly, Ant/Ivy-Artifactory Integration in a Freestyle Project is a feature-matched substitute for an Ivy Project).
However, three other projects still show up as Ivy Projects after changing the root element tag. What these projects have in common is that they all use the Performance Plugin. In order to finish converting these to Freestyle, I needed to additionally:
Disable the Performance Plugin
Restart Jenkins
Edit/Save the configuration for those projects as above.
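If you have many projects to convert, here is a rough sketch of the rename step as a script (assuming the plugin attribute in your config.xml reads ivy@2.1; check your own files, and keep the backups):

# Run only while Jenkins is stopped; job names are illustrative
cd "$JENKINS_HOME/jobs"
for job in project-a project-b; do
  cp "$job/config.xml" "$job/config.xml.bak"
  sed -i -e 's|<hudson.ivy.IvyModuleSet plugin="ivy@2.1">|<project>|' \
         -e 's|</hudson.ivy.IvyModuleSet>|</project>|' "$job/config.xml"
done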
Side effects and special considerations:
All of my build timestamps (prior to the change) are now listed as Dec 31, 1969 7:00 PM EDT, with the most recent build time shown as 50 yr. New build timestamps are correct. This was likely the result of no longer depending on the CloudBees plugin for build pipelines, which mapped build timestamps to build versions to avoid an old regression bug.
Every project immediately changed to red (Failed) on the dashboard, even though no builds had been attempted after the update, and the previous status was blue (Success) or yellow (Unstable). I suspect this is related to the above issue. After the next attempted build, whether successful or not, the status accurately reflects the build status.
No ability to use the Performance Plugin.
Several projects now show up as both an Upstream and a Downstream Project of each other, causing endless build cycles. There were three cases of this involving different combinations of projects, and in those cases, one or both projects needed to be removed from the build triggers. I suspect it had been this way for a while, but for some reason the endless cycles only happen after the update.
I suddenly have a lot of "Unreadable Data" across all of my Jenkins projects. Unfortunately, discarding it is an all-or-nothing process (can't pick a single project to test). I backed up my jobs directory and clicked Discard, and to my surprise everything still works.
It looks like I'm back in business. My build numbers have been preserved, and the only noticeable side effect is the 50 year old builds. If I encounter any other issues resulting from these changes, I will update this answer.
We're using SonarQube 5.6.6 (LTS) with SonarC# plugin version 6.5.0.3766 and TFS 2017 update 2 (SonarQube plugin v. 3.0.2).
We're repeatedly seeing that issues previously marked as resolved (Won't Fix) get reopened. My question is: why does SonarQube behave this way?
This issue is also mentioned in a number of different posts (1, 2, 3) on Stack Overflow, but with no root cause or resolution. Link 3 also states that this is an issue in SonarQube 6.2.
I'm curious as to whether this is due to a misconfiguration on our part or is inherent to SonarQube.
Our SonarQube server is running on a Win 2012 R2 server with a MS SQL backend, if that's relevant.
Furthermore, we're using TFVC for versioning and get blame through the SCM plugin. If an issue has been marked as resolved (Won't Fix), I've noticed that it appears to be reopened as a new issue (i.e. with no history available).
Example: A colleague marked an issue as resolved (Won't Fix) in a file which was last modified in TFVC back in November 2015. However, this morning the issue was marked as open and assigned to the developer who originally checked in the code. There is no history in SonarQube of the issue having previously been in a resolved state. It appears as if it's a new issue in a new file instead of a known issue which has already been resolved.
To avoid weird issues related to compiling our C# solution, we always clean our workspace completely prior to our build. I don't know whether that plays a role. Our builds are also executed on different build machines, so I don't know if that makes SonarQube think we're indeed always analyzing new files.
Could the use of different build machines and/or build definitions for the same SonarQube project result in this behavior?
I've also seen from the logs and reports that SonarQube appears to analyze the ENTIRE solution and not only the changed files. This makes our analysis very time-consuming and not at all suitable for a fast feedback loop. I suspect the long analysis time and the reopening issues are related.
Analysis of a project with approx. 280 KLOC takes approx. 8-10 min. as a background task on the server. That's on subsequent analyses (i.e. not the first run).
Is this related to the above mentioned problem of issues getting reopened by the server?
Strangely enough, leak periods appear to work correctly, i.e. we correctly identify issues within the right leak period. I've not verified this in detail yet, but it's definitely not ALL old issues that get reported as open within the leak period. We only see old issues reappear, if the file that contains them has been modified - this activates all the other issues (from a previous version or leak period) within that file.
I've not specified any additional cmdline parameters for the SonarQube scanner during builds apart from the TFVC SCM plugin and path for coverage data.
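For completeness, the begin step we effectively run looks like the sketch below. The project key and coverage path are placeholders, and explicitly pinning sonar.scm.provider is only an idea for ruling out SCM auto-detection differences between agents, not something I know to be the fix:

MSBuild.SonarQube.Runner.exe begin /k:"MyProject" /d:sonar.host.url="http://sonarqube:9000" /d:sonar.scm.provider=tfvc /d:sonar.cs.vscoveragexml.reportsPaths="coverage.coveragexml"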
We're using the VSTEST task v. 2 as otherwise it's not possible to collect code coverage in SonarQube when using TFS 2017 update 2 and VS 2017.
Please advise on any further data I should supply to help troubleshoot this.
Thank you for your help!
I'm going to upgrade our Jenkins CI to the latest version. I'm following this wiki page (going for the upgrade button on the "Manage Jenkins" page): How to upgrade Jenkins
My question is this: we have a lot of jobs that constantly run (some timed, some triggered). When upgrading, should I (or do I even need to) disable all jobs beforehand? And if there are jobs currently running, should I (or do I even need to) terminate them?
It depends a lot on how you deployed your CI. If you installed with the defaults (no custom settings), I assume you can follow the automatic procedure in the link you already provided.
When upgrading, should (or even need) I disable all jobs before hand?
When upgrading, you should put your Jenkins instance into quiet mode (Configure > Manage > Quiet down); this will prevent further builds from being executed and will let all running builds finish. I hope this answers both of your questions.
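If you prefer to script it, quiet mode can also be toggled over HTTP (a sketch; the URL and credentials are placeholders, and depending on your security setup you may also need a CSRF crumb):

# Enter quiet mode: no new builds are scheduled, running builds finish
curl -X POST -u admin:API_TOKEN http://your-jenkins:8080/quietDown
# After the upgrade, leave quiet mode
curl -X POST -u admin:API_TOKEN http://your-jenkins:8080/cancelQuietDown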
Speaking more about jobs, you should make a backup first in case something goes wrong.
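Something as simple as the following covers the job configs and build histories, assuming a default $JENKINS_HOME layout:

# Back up all jobs (configs + build records) while Jenkins is quiet
tar czf jenkins-jobs-backup-$(date +%F).tar.gz -C "$JENKINS_HOME" jobs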
Also, you should think carefully about plugins and review them all, since some of them might not work as you expect once you're on the fresh new Jenkins core. There is one plugin, Plugin Usage, which might help you understand your current status.
We have our projects configured with MSBuild script customization to modify the ApplicationVersion property in the project and copy that into the AssemblyInfo.cs file when the project builds. The problem is that we have TFS set up to run on a nightly schedule, with "Build even if nothing has changed since the previous build" unchecked. But since TFS itself is producing this version update, it rebuilds and increments every night. So this is sort of an infinite loop of our own design, and we're trying to figure out how to get out of it.
If the "changed since the previous build" detection is based on the history timestamp, ideally it'd be nice if when the version gets updated and commits to TFS it does it with a timestamp that precedes the build time. Is that even possible?
If the "changed since the previous build" detection is based on some boolean/bit flag, is there a way to reset it?
Using TFS 2012.
I'm assuming that you're checking in the new version of AssemblyInfo.cs once it's been updated, and this is why TFS is queuing a new build. Have you tried adding the comment ***NO_CI*** to the checkin? This will definitely suppress a CI build, but I'm not 100% certain it will work in your scenario.
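For example, if a script performs the version checkin, it can carry the marker in the comment (a sketch using tf.exe; the file path is a placeholder):

tf checkin Properties\AssemblyInfo.cs /comment:"Auto-increment ApplicationVersion ***NO_CI***" /noprompt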
Another option is generating the version number via an algorithm rather than just incrementing a counter and checking it back in to version control. This circumvents the issue of a new build being triggered.
i.e. if your version number looks something like
1.2.3.4
where 1 is the major version (modified by a human, not the build process),
2 is the minor version (also modified by a human),
and the final two parts are updated by an automated process.
You could use the number of days since January 2000 for part 3 (an arbitrary baseline, but something that changes on a daily basis) and either the latest changeset number in version control or the total number of builds performed by TFS for part 4.
This would fulfill two requirements: version numbers are unique for a given build of an assembly, and they always go up.
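As a sketch of that scheme (GNU date assumed; the fourth part is a placeholder for your own changeset number or build counter):

MAJOR=1   # maintained by a human
MINOR=2   # maintained by a human
# Days since 1 January 2000; changes daily, always increases
DAYS=$(( ( $(date +%s) - $(date -d 2000-01-01 +%s) ) / 86400 ))
REVISION=$LATEST_CHANGESET   # hypothetical: latest changeset or TFS build count
echo "$MAJOR.$MINOR.$DAYS.$REVISION"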
I would suggest that you don't check the new version number into TFS. There is no value in having the version number in there.
I typically set the checked-in assembly info numbers to all zeros (0.0.0.0) and never update them except locally for the build.
This gives you the benefit of always being able to identify locally built DLLs.
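For instance, the build can stamp the placeholder in the workspace copy without ever checking it in (a sketch; the version value and path are illustrative):

VERSION=1.2.3.4   # computed by the build, e.g. via the scheme above
# Rewrite only the workspace copy of AssemblyInfo.cs; never check this in
sed -i "s/AssemblyVersion(\"0\.0\.0\.0\")/AssemblyVersion(\"$VERSION\")/" Properties/AssemblyInfo.cs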
We are currently developing an app with multiple parallel streams of development. We have a Jenkins job to build each stream/release. So Job-A may be building release 1.1, and Job-B may be building release 1.2.
I think it would be best to have the build number shared across each release, such that if Job-A runs with build number 125 and Job-B runs next, it will run with build number 126. The reason I think this is the best strategy is that this is an Android app, which requires its versionCode parameter to be incremented each time it's submitted to Google Play. We use the Jenkins build number for the versionCode value.
Is there any way to configure Jenkins to share a build number across multiple jobs? Or, has anyone come up with a better solution to this problem?
Short answer: use timestamps or manually set versionCodes; keep things out of the CI server when not necessary. Or force the Jenkins build numbers.
Long answer: I like Jenkins to be responsible for automating something that also works on its own. So if I don't need Jenkins for the setup, I'm happy as well.
Also, if you use two branches, you probably commit to them in random order. Trying to tie the jobs together seems like unnecessary trouble that could become a problem later on. E.g. what if version 2.0 is built and QAed now, just waiting for the proper release date and for the marketing team to complete its job, but you need to release a v1.1.1 quick fix after that? Depending on the solution you pick, you may need to trigger some rebuilds to force a versionCode bump. New build, new QA?
Your real requirement for the versionCode is for it to be higher than the previous release.
From http://developer.android.com/guide/topics/manifest/manifest-element.html
android:versionCode - An internal version number. This number is used only to determine whether one version is more recent than another, with higher numbers indicating more recent versions. This is not the version number shown to users; that number is set by the versionName attribute. The value must be set as an integer, such as "100". You can define it however you want, as long as each successive version has a higher number. For example, it could be a build number. Or you could translate a version number in "x.y" format to an integer by encoding the "x" and "y" separately in the lower and upper 16 bits. Or you could simply increase the number by one each time a new version is released.
So here are 2 solutions:
manual bumping. In our projects, I use some sed scripts to automate bumping the build number before a release. As I also need to change a few things by hand (like the versionName prefix, and disabling/enabling debugging mode during development), I manually run a bumpversion script so that the next build in my branch has the appropriate version and versionCode numbers. Note that I use the Jenkins build number in versionName instead. This solution avoids the "1.1.1 needs to be out after v2 is ready" problem, provided you pick a large enough versionCode bump for v2.
another more automated yet still simple solution would be to use something built out of timestamps. The format YYMMDDHHMM is a good enough integer (< 2^31), and chances are that whatever version you release next will be prepared after the previous one and not within the same minute. So basically, when you build v1.1 it gets e.g. 1308131600, and if you build v1.2 a minute later it gets 1308131601. (This obviously doesn't help with the v1.1.1/v2 scenario.)
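Getting that value at build time is a one-liner (GNU date; minute precision as described):

# YYMMDDHHMM, e.g. 1308131600 for 2013-08-13 16:00
VERSION_CODE=$(date +%y%m%d%H%M)
echo "versionCode: $VERSION_CODE"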
Here are some ideas for scripts to generate/update versionCode Auto increment version code in Android app.
The jenkins way
Now, if you still want Jenkins in charge, a simple solution is to use something like https://wiki.jenkins-ci.org/display/JENKINS/Next+Build+Number+Plugin and configure your per-branch jobs with a large enough prefix to ensure no clash (see the sketch after the examples below). The setup is still pretty simple.
E.g.
110000 for branch 1.1
120000 for branch 1.2
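The same offsets can also be seeded by hand in each job's nextBuildNumber file (a sketch; job names are illustrative, and Jenkins should be stopped so it doesn't overwrite the files):

echo 110000 > "$JENKINS_HOME/jobs/myapp-1.1/nextBuildNumber"
echo 120000 > "$JENKINS_HOME/jobs/myapp-1.2/nextBuildNumber"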
You could look at the Multijob plugin, where you can add multiple parameterized jobs into a containing job:
https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin
You could also look at artifact archiving ("Archive the artifacts" in Hudson/Jenkins) and then pick the files up later.
I haven't tried it, but I'm thinking about going onto the build machine and, in each of the jobs, replacing the nextBuildNumber file with a symlink to a single file somewhere. What could possibly go wrong? Well, concurrent access might be an issue. There might also be an issue if Jenkins re-creates the file from scratch, i.e. with a remove and a create, instead of just opening it as normal.
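For what it's worth, the untested idea would look something like this (a shared counter file is my own assumption, not a supported Jenkins mechanism):

# Untested: point both jobs' nextBuildNumber at one shared counter
echo 200 > "$JENKINS_HOME/shared-nextBuildNumber"
for job in myapp-1.1 myapp-1.2; do
  ln -sf "$JENKINS_HOME/shared-nextBuildNumber" "$JENKINS_HOME/jobs/$job/nextBuildNumber"
done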