I have a single agent pool with 4 agent machines. I am building my code, and the build succeeds using a single agent out of the 4.
Goal: I want to achieve compilation and testing with a unified agent pool, i.e. use the same pool for testing that I use for building.
I created a release definition with an agent phase and selected the option "Execute on multiple agents", using the same pool I used in the build (the idea is to achieve the exact functionality of a unified agent).
I added a Visual Studio Test (v2) task, set the search folder to $(BuildOutput), set Test Assemblies to *test*.dll and !\obj**, and selected "Run tests in parallel on multi-core machines".
Output:
The build runs successfully, but when it automatically triggers the release definition, the release shows these errors:
First error: No artifacts are available in the build 47777.
2018-07-16T13:19:38.0507114Z ##[error]Error: Preparing the test sources file failed. Error : Error: No test sources found matching the given filter '*test*.dll,!\obj**'
Question: am I going in the right direction for an implementation of a unified agent using VSTest v2?
What should I do to resolve these errors and get on the right track?
Thanks!
This is the key problem:
No artifacts are available in the build 47777.
Your build is not publishing any artifacts. Your build has to use the Publish Artifacts task to publish the build outputs in order to make them available in a release definition.
When artifacts are successfully published to a build, there is an "Artifacts" tab that appears on the build summary that will allow you to browse and validate the build outputs.
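As a sketch, the missing publish step would look something like this in YAML form (the artifact name and path are illustrative; the classic editor's "Publish Build Artifacts" task takes the same inputs):

```yaml
steps:
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'  # where your build copies its outputs
    ArtifactName: 'drop'                                # the name the release definition will see
    publishLocation: 'Container'
```

Once an artifact is published, point the VSTest v2 search folder in the release at the downloaded artifact directory; a minimatch pattern such as **\*test*.dll with an exclusion like !**\obj\** is more likely to match your assemblies than a filter rooted at a non-existent path.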
Related
I have an established CI pipeline comprising (prior to deployment):
TFS build
JFrog Artifactory for build artifact management
SoapUI and SpecFlow (BDD and iterative, parameterised) for web service functional test automation
I have no access to our build agent servers and no permission to install anything thereon. Instead, I've added the SoapUI binaries as links to my functional test project; the binaries are pulled from source control in the Get Sources step of every build.
This works okay, but it greatly increases the footprint of my test project (and of any other test project that would need SoapUI) and, by extension, the execution time of the build. Functional testing only executes on a small fraction of builds: only if the application codebase has changed, or if a sufficient interval has elapsed since the last full build and test.
For these reasons, I opted to remove the SoapUI binaries folder from my test project and instead deploy a SoapUI binaries zip archive to an Artifactory repository. With the addition of a PowerShell script step in my build definition, I can pull the SoapUI binaries as needed and extract to the desired location on the build server. Foolishly, I thought this might be straightforward...
I did manage to push the zipped SoapUI binaries folder to the Artifactory repo, and in my Development build definition I managed to script the PowerShell step to pull the zip archive and extract its contents to the same folder in the build binaries directory on the build agent server as it had originally occupied.
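The pull-and-extract step can be sketched as follows (shown in Python purely to illustrate the shape of the logic; the actual pipeline step was PowerShell, and the repo URL, archive name, and destination paths here are hypothetical):

```python
import urllib.request
import zipfile
from pathlib import Path

def fetch_archive(url: str, download_to: str) -> str:
    """Pull the zipped tool binaries (e.g. SoapUI) from an Artifactory repo.
    The URL comes from the build definition; credentials handling not shown."""
    urllib.request.urlretrieve(url, download_to)
    return download_to

def extract_archive(archive_path: str, dest_dir: str) -> list:
    """Extract the archive into the build binaries directory and return
    the names of the extracted entries."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```

The key point is that the extracted layout must exactly match where the test step expects to find testrunner.bat, which is what the question below turns on.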
However, when I execute my build, in the step where the SoapUI tests are executed, on the first test iteration, I see the following error returned to build console:
System.ComponentModel.Win32Exception: The directory name is invalid
I added a PowerShell step that lists filtered folder contents before the test execution step in both my Development (new) and my Production (original) builds for comparison. Both show the required testrunner.bat to be present, in the same folder on the build agent server.
The test project itself has been unchanged (except for the removal of the SoapUI binaries folder).
To summarise:
I'm trying to execute SoapUI tests in two builds; in each build, the same test project is used and the SoapUI binaries are in the same location when the test execution kicks off.
One build executes successfully without issue.
One build fails at test execution step, returning error "System.ComponentModel.Win32Exception: The directory name is invalid".
I'm very puzzled by this; insights and SoapUI wisdom most welcome.
Thanks for looking.
Turned out there was a discrepancy in the directory paths in testrunner.bat between the builds: a '_' where a '-' should have been.
We recently upgraded to SonarQube Community Edition Version 7.1 (build 11001). We are also using the TFS SonarQube extension Version 4.3.1. The "Publish Quality Gate Result" build step fails with the message:
[SQ] API GET '/api/ce/task' failed, status code was: 404
[SQ] Could not fetch task for ID 'AWRg8urbC5nyQrURbDKL'
This only happens on the linux build agent. It doesn't happen on the Windows build agent. What's also interesting is that the output from the "Run Code Analysis" step seems to indicate a different task ID:
=========== Run Code Analysis Output ===========
More about the report processing at http://sonarqube:9000/api/ce/task?id=AWSFWzxYmaH45QFNcZ_C
=========== Publish Quality Gate Result Output ===========
[SQ] Could not fetch task for ID 'AWRg8urbC5nyQrURbDKL'
The URL from the Code Analysis step is valid and returns a json response containing all of the data about the task. If I replace the ID in the URL with the ID from the Publish step, I get an error json response with the message "No activity found for task".
How can this be fixed so the build step doesn't fail?
UPDATE - FIXED
After setting system.debug=true on the build, I noticed that there were two report-task.txt files that were being processed by the Publish Quality Gate Result Task: /agent/_work/2/.sonarqube/out/.sonar/report-task.txt and /agent/_work/2/s/.scannerwork/report-task.txt. The task reads the contents of those files to get the URL and Task ID for the SQ analysis. The second was left over from an older build and contained an invalid task ID. Removing that file fixed the issue.
This error can occur if the build directory contains a report-task.txt file left over from a prior build. Make sure there are no report-task.txt files in the build directory by setting the Clean option to true in the build configuration.
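If a full clean is too costly, a small pre-build cleanup step along these lines would also work (sketched in Python; the function name is illustrative; run it before the analysis step so only the freshly written report-task.txt survives):

```python
from pathlib import Path

def remove_stale_report_tasks(build_dir: str) -> list:
    """Delete every report-task.txt left under the build directory so the
    next analysis run writes a single fresh file for the publish task to read."""
    removed = []
    for path in Path(build_dir).rglob("report-task.txt"):
        path.unlink()
        removed.append(str(path))
    return removed
```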
We are starting to migrate our old xaml build defs to VSTS web-based build defs. For each branch, we have a debug build def and a release build def. The debug build def is set up as a Continuous Integration build. As a test, I modified one source file and checked it in. The old xaml build def checked out the 1 file and seemed to have built only the project that changed (what we want and expect in a CI build). In the xaml build log I see the following:
<InformationField Name="Message" Value="1 file(s) were downloaded with a total size of 0.29 MB." />
and it ran the build in 3.3 minutes.
In the new VSTS build, I see that it does a "tf get /version:170936" and gets all the files in changeset id "170936":
2018-06-12T15:08:39.8409262Z Checking if artifacts directory exists: C:\BuildAgent\agent2\_work\1\a
2018-06-12T15:08:39.8409262Z Deleting artifacts directory.
2018-06-12T15:08:39.8409262Z Creating artifacts directory.
2018-06-12T15:08:39.8564882Z Checking if test results directory exists: C:\BuildAgent\agent2\_work\1\TestResults
2018-06-12T15:08:39.8564882Z Deleting test results directory.
2018-06-12T15:08:39.8564882Z Creating test results directory.
2018-06-12T15:08:39.8877401Z Starting: Get sources
2018-06-12T15:08:39.9033640Z Entering TfvcSourceProvider.PrepareRepositoryAsync
2018-06-12T15:08:39.9033640Z localPath=C:\BuildAgent\agent2\_work\1\s
2018-06-12T15:08:39.9033640Z clean=False
2018-06-12T15:08:39.9033640Z sourceVersion=170936
2018-06-12T15:08:39.9033640Z mappingJson={"mappings":[{"serverPath":"$/Path/To/Branch","mappingType":"map","localPath":"\\"}]}
2018-06-12T15:08:39.9033640Z Syncing repository: Project Name (TFVC)
2018-06-12T15:08:39.9033640Z workspaceName=ws_1_45
2018-06-12T15:09:06.7318304Z Workspace Name: ws_1_45;a6060273-b85e-4d4b-ac63-3fbbcafc308b
2018-06-12T15:09:06.7630780Z tf get /version:170936
2018-06-12T15:09:21.6070136Z Getting C:\BuildAgent\agent2\_work\1\s;C124440
2018-06-12T15:09:21.6070136Z Getting C:\BuildAgent\agent2\_work\1\s;C124440
2018-06-12T15:09:21.6226405Z Getting C:\BuildAgent\agent2\_work\1\s\.nuget;C158533
2018-06-12T15:09:21.6226405Z Getting C:\BuildAgent\agent2\_work\1\s\Build Scripts;C141602
2018-06-12T15:09:21.6226405Z Getting C:\BuildAgent\agent2\_work\1\s\Databases;C124440
~
~ The rest of branch...
~
and then it seems to rebuild all projects, taking 13.2 minutes to run, almost 10 minutes longer than the old xaml build.
Am I missing something with the new build def? I do not have the "Clean" checkbox checked in the VS Build task. I do have a build.clean variable, but it is blank by default; sometimes we want to clean, so we just set it to "all" at queue time.
Clicking about on the web shows the following MS VSTFS version: 15.105.25910.0
Any help is much appreciated.
With multiple build agents, even though you have not checked the Clean checkbox of the VS Build task and the clean option in the Repository settings is set to "false", the agent for each run is picked randomly, so it may not contain your source code from before. That's why you see TFS do a tf get /version:170936, get all the files in changeset 170936, and build all projects.
For multi-configuration, look in the Options tab of the build definition, and refer to the official article: How do I build multiple configurations for multiple platforms?
After that, it will split the configurations into multiple builds during the run.
Now that your build definition successfully builds multiple configurations, you can go ahead and enable parallel builds to reduce the duration/feedback time of your build. You specify this as an additional option on the Options page.
Take a look at this post Building multiple configurations and/or platforms in Build 2015
To narrow down whether the issue is related to multiple build agents, you could also send the TFS build to a specific agent via a demand and queue the build multiple times with clean=false on that same agent; it should then build only the project that changed (CI).
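Pinning the build to one agent can be done with a demand on the agent's name in the build definition's Demands tab (the agent name below is illustrative):

```
Agent.Name -equals BuildAgent02
```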
I've created a build definition using TF Build. It is the nightly build for our project. It should run the defined Unit Tests and it should package the Azure Cloud Service projects.
This build has been running for some time without the packaging step. This resulted in a successful build that also ran the Unit Tests.
Based on the following guide I have added the packaging of the Cloud Services: https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-continuous-delivery/. Basically it comes down to setting the target to Publish for msbuild (/target:Publish) in the Build definition.
The problem is that when a solution is built with the Publish target, the unit test projects are not built. MSBuild returns the message: Skipping unpublishable project. I have traced this back to the common MSBuild targets file: a project will only build when publishing if the project results in an exe, as can be seen here: http://referencesource.microsoft.com/#MSBuildFiles/C/ProgramFiles(x86)/MSBuild/14.0/bin_/amd64/Microsoft.Common.CurrentVersion.targets,217
What I have tried:
Forcing building of Unit Test projects in Publish builds.
I added the following MSBuild snippet to the unit test .csproj files in order to override the default targets on Publish:
<PropertyGroup>
<PublishDependsOn>
Build;
</PublishDependsOn>
</PropertyGroup>
Setting the output type of the Unit Test project to Console Application
In both cases MSBuild gives the error The specified project reference metadata for the reference "..\..csproj" is missing or has an invalid value: Project for all projects that are referenced by the unit test project.
I feel like I'm not on the right track. Is there a way I can both build the Unit Test projects and build and publish the Cloud Service projects?
OK, it was much simpler than I thought.
The /target argument of MSBuild can take multiple targets, which are built in turn. I changed my build definition to use /target:Build;Publish as the MSBuild parameters. This fixed the issue.
I got an error (no entry point specified for cloud service) doing /t:Build;Publish with my service. So I ran two separate MSBuild actions, one with Build and one with Publish, and that worked.
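For reference, the split invocation described above would look roughly like this (solution name and configuration are illustrative):

```
msbuild MySolution.sln /t:Build /p:Configuration=Release
msbuild MySolution.sln /t:Publish /p:Configuration=Release
```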
When an ANT build step fails in my build I'd like to archive the logs in order to determine the problem. The relevant logs, however, are not located in the workspace, so I have to use a full path to them.
The standard artifact archiving feature does not work well with full paths, so first I have to copy the logs into the workspace in some build step so that I can later archive them. I do not want to incorporate the copying code into the original ANT script (it does not really belong there). On the other hand, since the failing build step fails the build, I can't copy the artifacts into the workspace in a separate build step, as that step is never reached.
I am considering using ANT's -keep-going option, but how would I then fail the build?
Any other ideas (artifact plugins that handle full paths gracefully, for example)?
Update: I've worked around the problem by creating a symbolic link in the workspace to the directory that contains the files to be archived. Kludgy, but effective.
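For anyone trying the same workaround, the symbolic link can be created with a single command on a Windows agent (the paths are hypothetical; on a Unix agent, ln -s is the equivalent):

```
mklink /D "%WORKSPACE%\ant-logs" "D:\logs\ant"
```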
I would recommend using Flexible Publish plugin in conjunction with the Conditional Build Step plugin.
The Flexible Publish plugin allows you to schedule build steps to run AFTER the normal build steps. This lets you catch both successful and failed builds and execute something, say, a script that copies the files from OUTSIDE the workspace to INSIDE the workspace. The Conditional Build Step plugin allows you to conditionalize those steps so that they only run when the build fails. Using these two plugins, you can copy the files into the workspace upon failure, then archive them with the usual Jenkins mechanisms.
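The failure-only copy step itself can be a one-line batch command on a Windows agent (paths are hypothetical):

```
xcopy /Y /I "D:\logs\ant\*.log" "%WORKSPACE%\ant-logs\"
```

The standard "Archive the artifacts" post-build action can then pick up ant-logs\*.log from inside the workspace.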