Hi, I'm using TFS 2010 to build a (master) project, which itself sequentially calls two MSBuild tasks to build other (child) projects. The first child project uses a custom task that reads information about build configurations through the TFS API. Whenever the first child project executes this custom task (the task itself always executes successfully), the second call to the MSBuild task (in the master project) fails silently. In the log file I just get the following:
Task "MSBuild"
Global Properties:
<Some custom properties here>
Build FAILED.
0 Warning(s)
0 Error(s)
If that custom task is not executed, everything builds fine. Both projects use other custom tasks (from MSBuild.ExtensionPack, plus a couple written by me), and none of those makes the build fail.
Is there any way to troubleshoot the issue and find out what I am doing wrong?
It seems that applying the [LoadInSeparateAppDomain] attribute to the task class solves the issue (I also applied [Serializable] and derived the task class from AppDomainIsolatedTask). Still, I wonder how to troubleshoot such issues.
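For troubleshooting this kind of silent failure in general, one approach is to reproduce the master build outside of TFS with MSBuild's diagnostic verbosity, so the log shows why the nested MSBuild call gave up. A minimal sketch in PowerShell, with a placeholder project path:

# Sketch: run the master project locally with full diagnostic logging;
# /flp additionally writes a log file you can search for the first real error.
& "$env:WINDIR\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" `
    "C:\src\Master.proj" `
    /verbosity:diagnostic `
    "/flp:logfile=master.diag.log;verbosity=diagnostic"

For assembly load conflicts like the one the AppDomain isolation works around here, the Fusion log viewer (fuslogvw.exe) can also record exactly which assembly binding failed.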
I am working with TFS and facing an issue: there are different folders in the TFS repository, for example:
C#Project
Extreme
CCM
Basically, these are folders for different technologies, and TFS users only check in to their corresponding folder.
In the release pipeline I have various batch tasks, each of which executes a batch script file on the agent.
There are multiple batch tasks performing various actions, and my problem is that I want to execute the batch files conditionally.
For example, if changes occur in the C# application, some of the scripts should not execute; if changes occur in a specific folder, then only the corresponding .bat file should execute and the rest should not.
There is nothing out-of-the-box that allows you to execute tasks conditionally based on the folder in which the changes were made. However, you can achieve what you want in two ways:
1) Create a separate build definition for each of your technology areas. Use the path filter in your trigger to control which build gets triggered, based on the paths of the changed files.
2) Create a variable for each technology in your build definition. At the start of the build definition, add a PowerShell task (or something similar) that sets the appropriate variable(s) based on which files were changed. You can then use these variables in the custom conditions that control task execution; a sketch follows below.
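For option 2, a minimal sketch of such a PowerShell task, assuming a Git repository and hypothetical variable names; the ##vso[task.setvariable] logging command is what makes a variable visible to subsequent tasks:

# Sketch: set one pipeline variable per technology folder, based on the
# files changed in the triggering commit; folder and variable names are
# examples, so adjust them to your repository layout.
$changed = git diff --name-only HEAD~1 HEAD
if ($changed -match '^C#Project/') {
    Write-Host '##vso[task.setvariable variable=BuildCSharp]true'
}
if ($changed -match '^Extreme/') {
    Write-Host '##vso[task.setvariable variable=BuildExtreme]true'
}
if ($changed -match '^CCM/') {
    Write-Host '##vso[task.setvariable variable=BuildCCM]true'
}

Each batch task can then use a custom condition such as and(succeeded(), eq(variables['BuildCCM'], 'true')).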
We're using TFS Online to one-click-deploy our software.
From time to time we need to use some special scripts, which we store in a dedicated folder. This means that most of the time said folder stays empty.
If I now trigger a build, it contains, among others, an Archive Files task and a Delete Files task for that folder.
Now the question:
Is there any way to suppress these two tasks if the folder to be zipped/deleted is empty?
The tasks are the built-in ones.
Note: This is NOT an on-premises TFS.
You can specify conditions for running a task in VSTS. A condition is expressed as a nested set of functions; the agent evaluates the innermost function and works its way out, and the final result is a Boolean value that determines whether the task runs.
In your case, a solution would be:
Add a PowerShell task prior to the Archive Files task.
Use the PowerShell task to determine whether the folder is empty.
If the folder is empty, fail the PowerShell task (remember to check the Continue on error or Always run option); a sketch follows after these steps.
Add a condition to both the Archive Files and Delete Files tasks, such as Only when all previous tasks have succeeded.
After this, those two tasks will not run when the special folder is empty during the build pipeline.
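A minimal sketch of the PowerShell check from the steps above, assuming the special folder lives under the build sources directory (the subfolder name is a placeholder):

# Sketch: fail this task when the special-scripts folder is empty, so that
# the Archive Files and Delete Files tasks (conditioned on "Only when all
# previous tasks have succeeded") are skipped.
$folder = Join-Path $env:BUILD_SOURCESDIRECTORY 'SpecialScripts'   # placeholder name
if (-not (Get-ChildItem -Path $folder -Recurse -File -ErrorAction SilentlyContinue)) {
    Write-Host "$folder is empty; failing so the archive/delete tasks are skipped."
    exit 1   # a non-zero exit code fails the PowerShell task
}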
For more details, please refer to this thread: Specify conditions for running a task.
Build vNext tasks are an awesome improvement over the previous build process. One downside, though, is that I can't make some tasks conditional. I could create an additional build definition for every combination, but this clearly scales badly and causes a lot of additional work whenever we have to change some other part of the build.
Instead, I'd prefer to be able to write my own PowerShell tasks that call the existing build tasks. There is at least one downside to this (if no build asks specifically for the vso-task, the build agent won't download it), but considering that we are using on-premises TFS and build agents, I can live with that.
I tried to do something like the following:
$path = get-item "$env:AGENT_HOMEDIRECTORY\Tasks\NuGetPackager\0.1.56\NuGetPackager.ps1"
& "$path" -searchPattern $searchPattern -outputDir "$packageFolder" -configurationToPackage $configurationToPackage -nugetAdditionalArgs "$nugetAdditionalArgs -version $nugetVersion"
Sadly this causes the following error:
2016-04-12T09:50:22.3652811Z ##[error]import-module : Could not load file or assembly 'Microsoft.TeamFoundation.DistributedTask.Agent.Interfaces,
2016-04-12T09:50:22.3652811Z ##[error]Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find
2016-04-12T09:50:22.3652811Z ##[error]the file specified.
2016-04-12T09:50:22.3652811Z ##[error]At C:\Agent1\Tasks\NuGetPackager\0.1.56\NuGetPackager.ps1:19 char:1
Now, one solution I found on the web suggests adding the DLLs in question to the GAC, but I really, really don't want to do that. Also, the tasks clearly work just fine when called from TFS directly, so what configuration am I missing?
I tried adding the folder containing the DLLs to the path, and even called SetDllDirectory explicitly from PowerShell, but neither of those helped.
Environment: Windows Server 2012 R2 on both the build agent and the TFS server, TFS 2015 Update 1.
The PowerShell task host used by the build agent from 2015 RTM up to Update 2 is a custom host which does creative things to resolve assemblies and handle input/output. These tasks can't be called from outside the agent.
Plus, quite a few build tasks are implemented in Node, so you'd have to detect which is which and invoke each accordingly.
The build tasks are being migrated to the new vsts-task-lib, which will support out-of-agent invocation. That would allow exactly what you want.
In the meantime, you could take the existing tasks (in most cases they're a simple manifest plus a script) and add one string parameter to each, into which you pass a variable that you can then treat as the condition; a sketch of such a guard follows below. You'd need to replace all the standard tasks and push them again. If you keep the extension ID and the task GUID the same, they'll act as in-place replacements. This is probably the easiest way to do what you want without hacks that take away the task's UI. Just set the version number to something ridiculously high, like 100.0.1.83, so that you'll always end up using your version.
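A minimal sketch of such a guard, assuming a hypothetical string input named runThisTask added to the copied task's manifest:

# Sketch: early-exit guard at the top of the copied task's script;
# "runThisTask" is a hypothetical input added to the task's manifest,
# bound to a build variable so the task becomes conditional.
param(
    [string]$runThisTask = 'true'
    # ...the task's original parameters would follow here
)
if ($runThisTask -ne 'true') {
    Write-Host "runThisTask evaluated to '$runThisTask'; skipping this task."
    return   # exit without doing any work; the task still reports success
}
# ...the task's original body continues here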
Note: the new builds are meant to be repeatable, in that running the same build multiple times always yields the same results. Conditional actions can be captured in custom PowerShell scripts that are stored in source control and executed as part of the workflow.
We have a Visual Studio test controller with three registered test agents in a specific test environment set up for our nightly automation runs. I've seen ample documentation on having the build agents run the tests, but we need the test execution to go through the controller and run on the test agents instead.
My thought was to edit the build process template so it would trigger the execution of these remote tests and then wait for the test run results, but I have no experience with build templates, and I've been unable to find any examples showing how to accomplish this. And that is, of course, assuming that editing the build process template is the best/correct solution in the first place.
Can someone with experience with triggering remote execution of tests at the end of a build/deploy cycle point me in the right direction please?
Actually, you don't have to change anything in your template. Just make sure your build definition refers to the correct tests and a .testsettings file configured for remote execution.
Step 1:
Please open http://msdn.microsoft.com/en-us/library/ee256991.aspx and scroll down to the section "Add a test settings for remote execution or data collection to your solution". Follow this to create a test settings file for remote execution.
Step 2:
Edit your build definition: go to the Process page and, under the heading "2. Basic", open the Automated Tests dialog by clicking the "..." at the end. In the Automated Tests dialog, click "Add", then browse for your test settings file (the one you just created for remote execution) and confirm your choices.
Now save your build definition and queue a build. Automagically, your tests are now run on the remote system, because the .testsettings file tells the build system to do so.
I hope that is enough to get your remote tests working.
I have an Ant task which runs only if the lock file does not exist.
But if the build fails, the lock file is not deleted at the end of the task, and subsequently the task is not invoked by my scheduled jobs.
Is there any way to handle this so that, even if the build fails, my cleanUp task is still called to delete the lock files?
Look at this: Testing and exception handling with Ant
It contains a macrodef with trycatch.
This sounds to me like something that should be cleaned up at the beginning of any build.
Do you have an init target, or some target that all other targets depend on? I would just put the deletion of that file there, so that it always gets deleted even if a previous build failed.
However, it's a confusing requirement, and it doesn't sound very idiomatic. Ordinarily, target execution is controlled through dependencies and conditional properties; see the targets section of the manual for more details about if and unless. Creating a file is an expensive way to get functionality that is already present in Ant's core.