Jenkins version conflict with FindBugs

I am getting the following output every time I start Jenkins, and I can't get the Hudson FindBugs graph even though I activated it.
Manage Old Data
When there are changes in how data is stored on disk, Jenkins uses the following strategy: data is migrated to the new structure when it is loaded, but the file is not resaved in the new format. This allows for downgrading Jenkins if needed. However, it can also leave data on disk in the old format indefinitely. The table below lists files containing such data, and the Jenkins version(s) where the data structure was changed.
Sometimes errors occur while reading data (if a plugin adds some data and that plugin is later disabled, if migration code is not written for structure changes, or if Jenkins is downgraded after it has already written data not readable by the older version). These errors are logged, but the unreadable data is then skipped over, allowing Jenkins to startup and function properly.
Type | Name | Version
The form below may be used to resave these files in the current format. Doing so means a downgrade to a Jenkins release older than the selected version will not be able to read the data stored in the new format. Note that simply using Jenkins to create and configure jobs and run builds can save data that may not be readable by older Jenkins releases, even when this form is not used. Also if any unreadable data errors are reported in the right side of the table above, note that this data will be lost when the file is resaved.
Eventually the code supporting these data migrations may be removed. Compatibility will be retained for at least 150 releases since the structure change. Versions older than this are in bold above, and it is recommended to resave these files.
No old data was found.
Unreadable Data
It is acceptable to leave unreadable data in these files, as Jenkins will safely ignore it. To avoid the log messages at Jenkins startup you can permanently delete the unreadable data by resaving these files using the button below.
Type | Name | Error
hudson.maven.MavenModuleSet | nov 7 latest | NonExistentFieldException: No such field hudson.plugins.findbugs.FindBugsReporter.isRankActivated
Discard Unreadable Data

According to the main FindBugs author, this is expected behavior when you downgrade FindBugs from a newer version to an older one:
Once you upgrade to a new version you can't downgrade without getting such kind of exceptions (I only ensure backward compatibility). Can't you use the "manage old data wizard" in Jenkins to remove these new fields from your persisted Jenkins build files?
See the Nabble discussion.

Related

TFS testing is not picking up new code

I have a TFS 2017 instance running on windows server 2012 R2 with a test box running windows 10.
I am running into a very odd issue. Most of my automated CodedUI tests are running pretty well. However, I have noticed that code changes are not always picked up by TFS when it performs a new build, at least not in the testing code area.
The builds themselves work well, and new code always gets incorporated for those. However, when the latest build triggers a release containing CodedUI tests, those tests do not always grab the latest build.
I have noticed this primarily in my App.config file which contains connection strings that are not being updated. In one case I had three tests that ran apparently successfully, but then they ran again using the values from the old App.config file.
I also have found that changes to the [TestCategory()] attribute are not always picked up either. I use that category to specify which tests I want run in a particular release build. I use variations on the same word for my categories: CodedUI, CodedUIExtended, CodedUIStage. At first I thought the system was doing some sort of StartsWith and picking up the other names, but when I tell it to run CodedUI it is running both the CodedUI and CodedUIStage categories.
[TestCategory("CodedUI"), TestMethod]
public void UI_Login_AdminAuthenticate()
{
...
}
Because the CodedUIStage categories were recently renamed from CodedUI, this has led me to suspect some sort of caching in TFS.
Can anyone shed some light on why my category and app.config changes are not being picked up correctly? What is causing this, and could it be happening to the code itself as well when I attempt to deploy a fix/correction?
EDIT:
As suggested, I tried checking the clean option on my TFS build configuration, but it had no effect.
The release, which is triggered immediately upon build, does show that it is using the newly finished build number.
And looking at the artifacts, the test dll has the current date modified, so it looks like it was just created.
The test.dll.config seems to be the issue. In the artifacts it has a very old Date modified and is not the current version that is checked into TFS. Typically this would feel like a clean issue; however, TFS always deletes and re-copies all destination files when doing a build or release. I have verified that by watching the files being deleted and re-created on the file system during the process.
C:\agent\_work\r6\a\[artifact_name]\bin
EDIT2:
With a little more exploration, I found that the build artifacts are correct. It is when the Release copies those artifacts into the release process that the problem happens.
A week ago I renamed the folder in TFS containing my test project inside the solution. The old folder name is showing up in the artifacts the Release copies into itself. The new folder name is also showing up, which means I now have two dlls, and that is causing problems.
I am not sure where the Release is finding this copy of the old folder. I explicitly deleted it from the Release's copy of the build artifacts and re-ran the build and release and it showed back up.
Thanks to the suggestions from Daniel, I eventually figured out that after creating the artifacts, my build process was then publishing those to a separate place on the file system.
Unfortunately, the Copy and Publish Build Artifacts task does not have a clean feature like the basic Copy Files task does.
As such, whenever files are removed from the build, they still exist in that location when the Release process goes to grab what it thinks are the artifacts, and so it ends up grabbing extra files.
Manually deleting the old files from that secondary artifact destination location solved the issue.

What do all the options on GetOptions mean?

The MSDN documentation lists four options, with limited explanation:
Overwrite "Overwrite existing writable files if they conflict with the downloaded files." Does this apply to all files, or just ones we've told TFS we've edited?
GetAll "Gets all files." What files does TFS not normally get?
Preview "Executes a get without modifying the disk." This one seems pretty clear.
Remap "Remaps existing items on the disk to the server items where the content and disk location are not changing." I have no idea what this means.
Overwrite: will blindly overwrite writable files that you have not pended for edit. If you have marked a file as writable, then you have violated the contract with TFS, and it assumes that you have done this for a good reason (e.g., modifying the file without taking a checkout because you were working offline). This will generally produce a writable conflict on the file, but if you specify this flag, the writable file will be overwritten.
This only applies to server workspaces (local workspaces are always writable). This has no effect on files that you have pended for edit. Get will always produce conflicts for files that are edited locally and updated on the server; if you want to update files that are checked out, you must undo the checkout (or resolve the conflict with TakeTheirs).
Get All: will download every file and update it, even if TFS believes that the local version is the same as the remote version and that downloading a new version would be a no-op. TFS tracks every version that you have locally, as well as remotely, so this is only useful if you edit files locally without checking them out.
If you have kept them writable then, as mentioned above, this will be a writable conflict. If you have marked them read-only again, TFS assumes that you have not made any changes and will not bother updating them when you do a get (because it knows the file contents haven't changed). If you have manually changed the file contents, then specifying this flag will update those files to the server version.
Preview: will just fire events and provide results that indicate what would be downloaded with the given parameters.
Remap: is a clever option that allows you to perform an in-place branch switching (which is very common with some version control systems that branch at the repository level - like Git - but somewhat complicated in TFVC.)
Consider that you have mapped $/Foo/main to C:\Foo, and done a get latest. If you update your working folder mappings so that $/Foo/branches/feature now points to C:\Foo, then issue a get with Remap, then the server will download only the changed files between main and branches/feature, so it's an inexpensive way to update your local workspace to a feature branch.
(If you're looking for an example, this functionality exists in the command-line interface and in Team Explorer Everywhere but not in Visual Studio.)
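
For reference, these flags map onto the GetOptions enum in the TFS client object model. Here is a minimal sketch, assuming the Microsoft.TeamFoundation.VersionControl.Client API, with a hypothetical server URL and mapped local path:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class GetOptionsDemo
{
    static void Main()
    {
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection")); // hypothetical URL
        var vcs = collection.GetService<VersionControlServer>();
        var workspace = vcs.GetWorkspace(@"C:\Foo"); // hypothetical mapped path

        // Normal incremental get: only items TFS believes are out of date.
        workspace.Get(VersionSpec.Latest, GetOptions.None);

        // Re-download everything and clobber writable files that are not
        // checked out; the flags can be combined.
        workspace.Get(VersionSpec.Latest, GetOptions.GetAll | GetOptions.Overwrite);

        // Dry run: fires events and returns a GetStatus, but writes nothing.
        GetStatus status = workspace.Get(VersionSpec.Latest, GetOptions.Preview);
        Console.WriteLine("{0} file(s) would be updated", status.NumUpdated);
    }
}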

TFS check in replaces files

I get some of my local files replaced with another version when I check in a new version of my project.
We use TFS 2010.
Here's the situation in more detail:
A colleague made significant and incomplete changes to the project, leaving it nonfunctional, checked in that code, and went on a two-week vacation. Since these changes were not even required, the obvious course of action is to get the previous version and work from there.
The problem happens when I check in the new working version of the project: instead of just delivering files to the server I get files replaced on my machine with the server version. I never would have thought checking in would get files from the server!
When you check in your changes, Visual Studio will always try to merge them with the latest version on the server. It must, because the version history of each file is linear, unless you manually branch files.
If the changes on the server are incompatible with your local changes, the checkin will be blocked and you will have to resolve any conflicts locally before you can check in again.
What you're seeing is expected behavior, and it cannot be different due to the way TFVC works. Your colleague should not check in a version that doesn't work. In such cases he should create a shelveset (which will store the files separately from standard history).
In your case I'd select the checkin from your colleague, select Roll back, and check in the result of that. It essentially removes his changes, but they are not lost: by re-applying the rolled-back changeset (rolling back the rollback changeset) or by getting the specific version with the changes, he can continue working on them.
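
For the shelveset route, a minimal sketch using the TFS client object model might look like this; 'vcs' and 'workspace' are assumed to be set up as in the earlier example, and the shelveset name and comment are hypothetical:

using Microsoft.TeamFoundation.VersionControl.Client;

class ShelveDemo
{
    // Parks all pending changes in a shelveset instead of checking them in.
    static void ShelveEverything(VersionControlServer vcs, Workspace workspace)
    {
        PendingChange[] changes = workspace.GetPendingChanges();
        var shelveset = new Shelveset(vcs, "incomplete-feature-work", vcs.AuthorizedUser);
        shelveset.Comment = "Work in progress - do not check in yet";
        workspace.Shelve(shelveset, changes, ShelvingOptions.None);
    }
}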

Send notification to be received on TFS Get Latest Version

We are using TFS on Visual Studio 2013. When our developers Get Latest Version (GLV) and there is a new db script file received, it is stored under a specific folder, to be run using our custom update app.
What I want is that upon doing a GLV, they get a notification (in Visual Studio) that there are new scripts to run to update the db (generically speaking, that a new file has been added under a certain path).
Is there a way to achieve this with TFS?
It will not give you exactly what you want, but you can use built-in TFS alerts to notify you or the team when a file is checked in under a folder with a specific name, a specified path, or a file extension.
You could write a Visual Studio extension, triggered on Get Latest, which would check a certain path within source control. You would have to roll this out to all of your developers, and would either have to store the lookup paths centrally or redeploy the extension if the lookup paths changed.
Alternatively, you could add a bat/PowerShell script to your source control. Within this script you could do the get latest and also run any scripts that you would like to run. You would then have the developers get the latest version of this script and run it, which would get the rest of the files and also run the db scripts.
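
As a rough illustration of the extension/script idea, here is a minimal C# sketch using the client object model: it hooks the Getting event, which fires once per item fetched, to spot incoming files under a db-scripts folder during a Get Latest. The server URL, local path, and folder names are hypothetical:

using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class GetLatestWithScriptCheck
{
    static void Main()
    {
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection")); // hypothetical URL
        var vcs = collection.GetService<VersionControlServer>();
        var workspace = vcs.GetWorkspace(@"C:\Dev\MySolution"); // hypothetical path

        var newScripts = new List<string>();
        vcs.Getting += (sender, e) =>
        {
            // Collect anything fetched under the (hypothetical) db-scripts folder.
            if (e.ServerItem.StartsWith("$/MyProject/DbScripts/",
                                        StringComparison.OrdinalIgnoreCase))
                newScripts.Add(e.ServerItem);
        };

        workspace.Get(); // plain Get Latest Version

        if (newScripts.Count > 0)
        {
            Console.WriteLine("New or updated db scripts received:");
            newScripts.ForEach(Console.WriteLine);
            // ...notify the developer or launch the custom update app here...
        }
    }
}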
If I understand correctly, you want your users to be running their locally built solutions against the latest database version to keep everyone in sync. Why not just use the usual workflow to procure 'notifications' in the form of build output?
I've dealt with this in the past, and the best solution I've come up with is to write a custom MSBuild target for BeforeBuild in each of the projects that rely on the database being updated. The MSBuild target checks the version of the database installed (you would have to come up with a means of doing so; it can be tricky!).
If the currently deployed database does not match the version in the scripts you have just synced to, a build event can be raised: when the versions match, the target outputs a success message (or maybe nothing at all); when they do not match, the target issues a build warning or build error depending on the severity of the change (for example, depending on whether the variance is in the major or minor version).
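
The version lookup itself might look something like this minimal C# sketch, assuming the schema version is recorded in a dbo.SchemaVersion table (a hypothetical convention; substitute however your database tracks its version). The BeforeBuild target could invoke this via a small custom task or an external tool:

using System;
using System.Data.SqlClient;

static class DbVersionCheck
{
    // Returns the highest schema version recorded in the database.
    public static int GetDeployedVersion(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT MAX(Version) FROM dbo.SchemaVersion", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    // Compares deployed vs. expected and reports in build-friendly terms.
    public static bool IsUpToDate(string connectionString, int expectedVersion)
    {
        int deployed = GetDeployedVersion(connectionString);
        if (deployed != expectedVersion)
            Console.Error.WriteLine(
                "warning: database is at version {0}, scripts expect {1}",
                deployed, expectedVersion);
        return deployed == expectedVersion;
    }
}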

Automatic Versioning with Team Foundation Server 2012; Increment Only on Changed Assembly

I've been tasked with setting up a new Team Foundation/Build server at my company, with which we'll be starting a new project. Nobody here currently has experience with TFS, so I'm learning all of this on my own. Everything is working so far; The server's been set up, the Repository and Team Project has been created, the Build Server has been created, and I've created a simple hello world application to verify the source control and Continuous Integration builds (on the build server) run properly.
However, I'm having a problem setting up the automatic versioning. I've installed the TfsVersioning project, and it's working fine; I'm able to define a format for my assembly versions. I haven't yet decided what format I'll use; probably something like Major.Minor.Changeset.Revision (I'm aware of the potential problem regarding using the changeset number in the assembly version, so I may decide to switch to Major.Minor.Julian.Revision before we begin development).
The problem:
I don't want assemblies to have new file versions if their source code has NOT changed since the last build. With a Continuous Integration build this isn't a problem, as the build server will only grab the source files that have changed, causing an incremental build which produces only updated modules; the existing unchanged modules won't be rebuilt, so their versions will remain unchanged.
If I set up a nightly build, I'll want to clean the workspace and perform a Build-All. However, this means that ALL assemblies will get new versions (assuming the Assembly File Version includes the build number).
A solution?
This has prompted me to consider using the latest changeset number in the Assembly File Version. This way, if nothing has been committed between two successive Build-Alls, the versions won't be incremented. However, this would mean that a change and commit to a single file would force a version increment on ALL assemblies.
I'm looking for one of two things:
A way to only increment Assembly Version Numbers if their source/dependencies have changed since the last build. Successive Build-Alls should not cause changes in version numbers.
OR
A way for testers and non-developers to be able to tell version W.X.Y.Z and version W.X.Y.Z+1 of assembly 'Foo' are identical, even though they have differing file versions.
I've probably read about 20 articles on the subject, and nobody (except this guy) seems to address the issue. If what I'm asking for isn't common practice in Team Foundation ALM, how do I address the second bullet point above?
Thanks for your time!
This is something I did in the past. The solution has two critical points:
You must use an incremental build, i.e. Clean Workspace = None
The change to AssemblyInfo.cs must be computed for each project
The latter is the more complex part, and I will just draft the solution here.
In the custom MSBuild properties, use CustomAfterMicrosoftCommonTargets to inject a hook into the normal Visual Studio compile:
/property:CustomAfterMicrosoftCommonTargets=custom.proj
Also forward a value for the version
/property:BuildNumber=1.2.3.4
In custom.proj, redefine the BeforeCompile target to something like this:
<Target Name="BeforeCompile"
        Inputs="$(MSBuildAllProjects);
                @(Compile);
                @(_CoreCompileResourceInputs);
                $(ApplicationIcon);
                $(AssemblyOriginatorKeyFile);
                @(ReferencePath);
                @(CompiledLicenseFile);
                @(EmbeddedDocumentation);
                $(Win32Resource);
                $(Win32Manifest);
                @(CustomAdditionalCompileInputs)"
        Outputs="@(DocFileItem);
                 @(IntermediateAssembly);
                 @(_DebugSymbolsIntermediatePath);
                 $(NonExistentFile);
                 @(CustomAdditionalCompileOutputs)"
        Condition="'$(BuildNumber)'!=''">
  <Message Text="*TRACE* BuildNumber: $(BuildNumber)"/>
  <MyTasksThatReplaceAssemblyVersion
      BuildNumber="$(BuildNumber)"
      File="$(MSBuildProjectDirectory)\Properties\AssemblyInfo.cs"/>
</Target>
You need to have a task for replacing the AssemblyFileVersion in the AssemblyInfo.cs source. MSBuild Extension Pack has an AssemblyInfo task for this purpose.
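For illustration, the hypothetical MyTasksThatReplaceAssemblyVersion task used above could be as simple as this sketch (a regex replace over AssemblyInfo.cs; the MSBuild Extension Pack task mentioned above is a more robust alternative):

using System.Text.RegularExpressions;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class ReplaceAssemblyFileVersion : Task
{
    [Required] public string BuildNumber { get; set; }
    [Required] public string File { get; set; }

    public override bool Execute()
    {
        // Rewrite the AssemblyFileVersion attribute in place.
        string source = System.IO.File.ReadAllText(File);
        string patched = Regex.Replace(
            source,
            @"AssemblyFileVersion\(""[^""]*""\)",
            "AssemblyFileVersion(\"" + BuildNumber + "\")");
        System.IO.File.WriteAllText(File, patched);
        Log.LogMessage("Stamped {0} with version {1}", File, BuildNumber);
        return true;
    }
}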
I posted the full details at my blog here, here and here.
