I'm going to start with a little background regarding a recurring problem we've been having with TFS.
A short while ago we upgraded our in-house server to TFS 2013. Some of our projects had build definitions and continuous delivery set up prior to the upgrade and worked perfectly. However, after finalising the upgrade, our build definitions failed due to what the system reported as missing files (mainly AspNetCompileMerge.targets), and after investigating the issue we couldn't find the root cause.
To cut a long story short, we tried a number of ways to fix the problem; the only one that worked was commenting out the code below in the affected application's CSPROJ file.
<PropertyGroup>
  <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion>
  <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
</PropertyGroup>
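For context, this property group matters because further down the same CSPROJ it feeds an import along these lines (typical of MVC web projects; the exact targets file may vary):

  <Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets" Condition="'$(VSToolsPath)' != ''" />

If VisualStudioVersion defaults to 12.0 on a machine that has no v12.0 folder under $(MSBuildExtensionsPath32), every import rooted at VSToolsPath (including the publishing pipeline that pulls in AspNetCompileMerge.targets) fails as a missing file.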
However, problem number 2 cropped up a few days later when we migrated some of the projects to VSO without the build definitions. The cause turned out to be the lines I had commented out. After un-commenting the code in the CSPROJ file and checking the project in, everything started working again... until today!
This morning I reimplemented the build definitions for the affected projects and tried publishing to VSO, which is when I started getting the same error mentioned earlier. As I had already come across this issue before, the fix was to comment the code out again.
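As an aside, a workaround that is often suggested for this class of error (and which we haven't verified ourselves) is to pin the property from the build definition's MSBuild arguments instead of editing the CSPROJ, pointing it at a toolset that actually exists on the build agent, e.g.:

  /p:VisualStudioVersion=12.0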
The above information leads me to the following questions:
Did something change in TFS 2013 that causes build definitions to fail when the above code is present?
Why does the above code need to be removed when using build definitions in TFS 2013?
Is the above code important when build definitions are enabled or disabled?
Obviously it isn't the be-all and end-all if we can check in without the code; however, I would like to understand why it breaks on check-in.
I initially assumed this was a misconfiguration on our server, but the bug is also evident in the online version, so that theory went out of the window. The affected applications were originally built in VS 2012 on the MVC 5 platform, if that helps.
We're using SonarQube 5.6.6 (LTS) with SonarC# plugin version 6.5.0.3766 and TFS 2017 Update 2 (SonarQube plugin v. 3.0.2).
We're repeatedly seeing that issues that were previously marked as resolved (Won't Fix) get reopened. My question is: why does SonarQube behave this way?
This issue is also mentioned in a number of different posts (1, 2, 3) on StackOverflow, but with no root cause or resolution. Link 3 also states that this is an issue with SonarQube 6.2.
I'm curious as to whether this is due to a misconfiguration on our part or inherent behaviour of SonarQube.
Our SonarQube server is running on a Windows Server 2012 R2 machine with an MS SQL backend, if that's relevant.
Furthermore, we're using TFVC for versioning and get blame information through the SCM plugin. If an issue has been marked as resolved (Won't Fix), I've noticed that it reappears as a new issue (i.e. with no history available).
Example: a colleague marked an issue as resolved (Won't Fix) in a file which was last modified in TFVC back in November 2015. However, this morning the issue was marked as open and assigned to the developer who originally checked in the code. There is no history in SonarQube of the issue having previously been in a resolved state; it appears as a new issue in a new file instead of a known issue which has already been resolved.
To avoid weird issues related to compiling our C# solution, we always clean our workspace completely prior to our build; I don't know whether that matters. Our builds are also executed on different build machines, so I don't know if that makes SonarQube think we're indeed always analyzing new files.
Could the use of different build machines and/or build definitions for the same SonarQube project result in this behavior?
I've also seen from the logs and reports that SonarQube appears to analyze the ENTIRE solution and not only the changed files. This makes our analysis very time-consuming and not at all suitable for a fast feedback loop. I suspect the long analysis time and the reopening issues are related.
Analysis of a project with approx. 280 KLOC takes approx. 8-10 min. as a background task on the server. That's on subsequent analyses (i.e. not the first run).
Is this related to the above-mentioned problem of issues getting reopened by the server?
Strangely enough, leak periods appear to work correctly, i.e. we correctly identify issues within the right leak period. I've not verified this in detail yet, but it's definitely not ALL old issues that get reported as open within the leak period. We only see old issues reappear if the file that contains them has been modified; this reactivates all the other issues (from a previous version or leak period) within that file.
I've not specified any additional command-line parameters for the SonarQube scanner during builds, apart from the TFVC SCM plugin and the path for coverage data.
We're using the VSTest task v2, as otherwise it's not possible to collect code coverage in SonarQube when using TFS 2017 Update 2 and VS 2017.
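For reference, the scanner invocation that the TFS tasks effectively issue looks roughly like the following; the project key and paths are placeholders, sonar.scm.provider and sonar.cs.vscoveragexml.reportsPaths are the two settings mentioned above, and the scanner executable name varies by scanner version:

  SonarScanner.MSBuild.exe begin /k:"MyProject" /d:sonar.scm.provider=tfvc /d:sonar.cs.vscoveragexml.reportsPaths="**\*.coveragexml"
  MSBuild.exe MySolution.sln /t:Rebuild
  SonarScanner.MSBuild.exe end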
Please advise on any further data I should supply to help diagnose this.
Thank you for your help!
I have a small solution containing three Visual Studio projects. I'm working in Visual Studio 2015 using TFS 2015.
I have implemented a gated check-in, but for some reason the solution will not build on the TFS server. I'm referencing only one NuGet package - Entity Framework. I am not checking my packages folder into TFS, but my packages.config files are being included.
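For reference, the packages.config in each project just lists what the server needs to restore; mine look along these lines (the version number here is illustrative):

  <?xml version="1.0" encoding="utf-8"?>
  <packages>
    <package id="EntityFramework" version="6.1.3" targetFramework="net45" />
  </packages>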
I have previously set up a different project on the same server using the same build definition and it works fine.
In order to restore packages prior to build, you will need to run the following command as part of your build process.
nuget.exe restore path\to\solution.sln
One way to do that is to add another project that is responsible for building your solutions and making sure that the packages get restored before your solutions are built; a minimal sketch is shown below.
The following write-up walks you through getting that set up: nuget docs
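A minimal sketch of such a wrapper project, assuming nuget.exe is on the PATH (or checked in alongside the project) and with MySolution.sln as a placeholder name:

  <?xml version="1.0" encoding="utf-8"?>
  <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <!-- Restore NuGet packages before anything is compiled -->
    <Target Name="RestorePackages">
      <Exec Command="nuget.exe restore &quot;$(MSBuildThisFileDirectory)MySolution.sln&quot;" />
    </Target>
    <!-- Then build the real solution -->
    <Target Name="Build" DependsOnTargets="RestorePackages">
      <MSBuild Projects="$(MSBuildThisFileDirectory)MySolution.sln" Targets="Build" />
    </Target>
  </Project>

Pointing your build definition at this wrapper project instead of the solution guarantees the restore runs first.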
I managed to get it working, but I stumbled into the fix and don't know exactly what solved the problem. This is the first time I've really had to deal with TFS builds.
I know I only had one build definition defined, and it was intended for a different solution, of which this code was also a part. I think when I was checking in this solution it was actually trying to build the other one.
Apparently, I can't have my NuGet packages set up in different ways for code that is in two different solutions. Anyway, that's my best guess.
To prevent unexpected build breaks and test failures, we have been using gated check-ins. This works very well for our core solutions and has helped improve our quality.
As part of our overall architecture, we have a certain section of our code with many micro-services, each of which is a new solution. New solutions are added to this part of the code base regularly. These are important parts of the system, and I need to make sure they get compiled as part of a gated check-in without the chance of developer error.
Is there a way to configure TFS to find ALL solutions under a certain path and include them in a gated check-in build?
Thanks
Not without modifying the build process template, which is almost never a good idea. The new build system in TFS 2015 does allow that, however.
TFS 2015 vNext builds allow wildcards to search for all solutions. I haven't had success getting this to work with Visual Studio Build steps, which is what you would need, but it works well with the NuGet Installer and other build steps. We will not see gated builds in vNext builds until Update 2; see the TFS feature timeline.
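For example, the solution/path field of a vNext build step such as NuGet Installer accepts a wildcard (minimatch) pattern that picks up every solution in the repository:

  **\*.sln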
Introduction
I have a problem with Team Foundation Server Express 2013 on my machine. I have two build definitions on the same controller and agent, both of which run on the same server and in the same environment.
It should be noted that I already looked at the "similar questions" without any luck. This is clearly not related to the same root cause, and the symptoms are slightly different too.
One of them is a gated check-in build definition, which just compiles everything when committing to the development branch.
Another is a scheduled build definition, which runs every Saturday at 3 AM, building any changes that may have been committed to the main branch since the last run.
The gated build definition has a process based on the TfvcTemplate.12.xaml template, with only minor changes so that it skips the tests and just compiles the code.
The scheduled build definition's process is based on some Azure build definition template that might come from an older version of Visual Studio, or maybe on the TfvcContinuousDeployment.12.xaml template.
The issue
My gated build definition runs just as expected, without issues. It compiles the full solution, and only passes if the compilation succeeds.
The scheduled build definition, however, fails to compile (even before it reaches the point where it runs the unit tests). The error indicates missing Fakes assemblies.
I tried taking the assemblies and checking them in (which I would rather avoid), only to find that this build definition then runs just fine, but the gated one, which ran fine before, no longer does.
I thought about just running fakes.exe in the build template to generate them manually before compiling, but in my initial tests (to see if this theory would even work) it wouldn't even run from the command line, and it output some errors and warnings that I don't understand (though these are probably not relevant anyway, since I might be running fakes.exe with improper arguments).
Updates
Update #1
It should be noted that I have Visual Studio 2013 Ultimate installed on my build server as well. Both it and TFS 2013 Express have Update 3 installed, and the server is fully updated.
I ended up abandoning Fakes altogether and implementing Moq instead. It works a lot better, and it forces me to abandon shims and moles, which are often considered bad practise anyway.
I have set up a build controller etc. and the builds were failing; I have fixed those problems now, and the build then failed properly - as in, because of a genuine error.
I have fixed the error and checked the code back in, but now the code is not being extracted, although sometimes one folder of many is.
I have deleted the code from the build machine and requeued a build, but it keeps failing. It complains that it cannot find the solution that I specified as the build solution.
I have checked the check box to build even if nothing has changed.
Have I missed a setting somewhere for extracting the code?
TFS version is 2012 Express
Visual Studio version is 2010 Professional
I had this issue recently with TFS 2012. I think it boils down to this:
In the latest build definition files, it appears that a Clean task is performed before the workspace is updated. This means that if you do something that causes the Clean part of the build to fail, it will never download the new files that would fix it.
Recently, I was making big changes to my build file and inevitably made a lot of mistakes. I found that if one of these mistakes caused the Clean to break, I had to go onto the build server and change the file manually to get it working again.
Does this sound like it might be the same issue?
There are several properties in your build definition you can check. I would start by setting "Clean Workspace" to All to ensure the correct code is being pulled down and built.
There are other settings you can look at as well, like the agent set for the build and the "GetVersion" property. Check out the link below; it should help you in more detail.
Define a Build Process that is Based on the Default Template