We're using TFS Online to one-click-deploy our software.
From time to time we need to use some special scripts, which we store in a dedicated folder. This means that most of the time said folder stays empty.
If I now trigger a build, it contains, among other tasks, an Archive Files task and a Delete Files task for that folder.
Now the question:
Is there any way to suppress these two tasks if the folder to be zipped/deleted is empty?
The tasks are the built-in ones.
Note: This is NOT on-premises TFS.
You can specify conditions for running a task in VSTS. Express the condition as a nested set of functions; the agent evaluates the innermost function and works its way outward. The final result is a boolean value that determines whether the task runs.
In your case, a solution would be:
Add a PowerShell task prior to the Archive Files task.
Use the PowerShell task to check whether the folder is empty.
If the folder is empty, fail the PowerShell task (remember to check the Continue on error or Always run option); a minimal sketch of such a script is shown below.
Add a condition to both the Archive Files and Delete Files tasks, such as Only when all previous tasks have succeeded.
After this, those two tasks will not run when the special folder is empty during the build.
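For reference, the PowerShell check could look something like the following. The folder path is an assumption for illustration; point it at wherever your special scripts live in the sources.

# Hypothetical location of the special scripts folder inside the checked-out sources.
$folder = Join-Path $env:BUILD_SOURCESDIRECTORY 'SpecialScripts'

# Count the files in the folder; a missing or empty folder yields 0.
$fileCount = (Get-ChildItem -Path $folder -Recurse -File -ErrorAction SilentlyContinue | Measure-Object).Count

if ($fileCount -eq 0) {
    Write-Host "Folder is empty - failing this step so the archive/delete tasks are skipped."
    exit 1
}

The non-zero exit code marks the PowerShell task as failed, which is what the conditions on the following tasks react to.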
For more details, please refer to the thread Specify conditions for running a task.
I am trying to implement parallel testing using the VSTest task as described in the article below.
https://learn.microsoft.com/en-us/azure/devops/pipelines/test/parallel-testing-vstest?view=azure-devops
Brief description of what I am doing:
I have two self-hosted agents installed on the same server.
When I run the tests with the single-agent option (on either of them), they run without any issue.
But when I apply the multi-agent option, either with
a) Simple slicing based on the number of tests and agents
or
b) Slicing based on test assemblies
I am getting the below error.
##[error]The slice of type 'Discovery' is 'Aborted' because of the error: System.Exception: No tests were discovered from the specified test sources.
Thanks in Advance,
Udaya Bhaskar.
Based on the YAML in the comments, this looks like a paths-on-disk issue; see the task help reference.
The PublishBuildArtifacts task has a default path of $(Build.ArtifactStagingDirectory); you have overridden this to $(Build.ArtifactStagingDirectory)\Packages. If this is set correctly, then when you look at the artefacts for the build you should be able to download your test assemblies and their dependencies, which were uploaded from this location.
The DownloadBuildArtifacts task has a default downloadPath of $(System.ArtifactsDirectory) which the YAML view indicates you haven't overridden.
The VSTest task has a default searchPath of $(System.DefaultWorkingDirectory); the YAML view indicates you have set this to $(Agent.BuildDirectory)\Bin.
The exact behaviour here may depend on how your agents have been set up for their disk paths. $(Agent.BuildDirectory) will usually be one of the numbered subdirectories from the agent's base working path. Interestingly, while DownloadBuildArtifacts' documentation says $(System.ArtifactsDirectory) is its default, this does not appear in the current predefined variables list; if it in fact refers to $(Build.ArtifactStagingDirectory), this defaults to the "numbered_build_subdirectory\a".
As your test search path will expand to "numbered_build_subdirectory\Bin", I expect that (assuming the files are being correctly published) they are being downloaded to a location that sits outside the search path the test task is targeting, which would explain why no tests are being found.
I would suggest modifying the download and search paths for DownloadBuildArtifacts and VSTest to be the same and relative to the base directory, e.g. $(Agent.BuildDirectory)\Tests (or whatever is appropriate for your pipeline).
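If you want to confirm where the downloaded assemblies actually land before VSTest runs, a throwaway PowerShell step like this can help (the path is illustrative; use whatever your download/search folder is set to):

# Hypothetical search path - substitute whatever the VSTest search folder points at.
$searchPath = Join-Path $env:AGENT_BUILDDIRECTORY 'Tests'

# List every assembly the VSTest task would be able to discover under that path.
Get-ChildItem -Path $searchPath -Recurse -Filter '*.dll' -ErrorAction SilentlyContinue |
    ForEach-Object { Write-Host $_.FullName }

If this prints nothing on the agents running the test slices, the download and search paths are not lining up.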
I am working with TFS and facing an issue. There are different folders in the TFS repository, for example:
C#Project
Extreme
CCM
Basically, these are folders for different technologies, and TFS users only check in to their corresponding folder.
In the release pipeline I have various batch tasks, each of which executes a batch script file on the agent.
There are multiple batch tasks performing various actions, and my problem is that I want to execute the batch files conditionally.
For example, if changes occur in the C# application, some of the scripts should not execute; if changes occur in a specific folder, then a specific .bat file should execute and the rest should not.
There is nothing out of the box that allows you to execute tasks conditionally based on the folder where the changes were made. However, you can do what you want in two ways:
1) Create a separate build definition for each of your technology areas. Use the Path filter in your trigger to control which build gets triggered based on the path of your changed files.
2) Create variables for each technology in your build definition. At the start of the build definition, add a PowerShell task (or something similar) that sets the appropriate variable(s) based on which files were changed, as sketched below. You can use these variables in the custom conditions for your task execution.
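For option 2, a rough sketch of such a script might look like the following. It assumes a Git repository and only compares against the previous commit; for TFVC you would query the changesets included in the build instead. The variable names and folder patterns are placeholders.

# Determine which files changed in the commit being built (assumes a Git repository).
$changedFiles = git diff --name-only HEAD~1 HEAD

# Hypothetical flags, one per technology folder.
$ccmChanged    = [bool]($changedFiles | Where-Object { $_ -like 'CCM/*' })
$csharpChanged = [bool]($changedFiles | Where-Object { $_ -like 'C#Project/*' })

# Expose the flags as pipeline variables so later tasks can use them in custom conditions.
Write-Host "##vso[task.setvariable variable=CcmChanged]$ccmChanged"
Write-Host "##vso[task.setvariable variable=CSharpChanged]$csharpChanged"

Each batch task can then use a custom condition that checks the corresponding variable.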
I am currently learning the ins and outs of Jenkins and Pipeline.
One thing I do not yet understand is the following:
A Jenkins job by default can be executed concurrently (I can check the checkbox "Do not allow concurrent builds" if I don't want that).
What I don't understand is the following:
Let's say Jenkins checks out code in /var/lib/jenkins/workspace/my-project-workspace/
Now how would it be possible to run concurrent builds without conflicts?
Let's say that build nr 1 checks out code in that path and starts testing it, and while doing that, build nr 2 is started and checks out code in that same path.
How will that not conflict with build nr 1?
I am probably missing something obvious here... Please help :)
The subdirectory inside the workspace/ folder will not always be your project name, but a (randomly) generated directory name. That's all the magic.
When this option is checked, multiple builds of this project may be executed in parallel.
By default, only a single build of a project is executed at a time — any other requests to start building that project will remain in the build queue until the first build is complete.
This is a safe default, as projects can often require exclusive access to certain resources, such as a database, or a piece of hardware.
But with this option enabled, if there are enough build executors available that can handle this project, then multiple builds of this project will take place in parallel. If there are not enough available executors at any point, any further build requests will be held in the build queue as normal.
Enabling concurrent builds is useful for projects that execute lengthy test suites, as it allows each build to contain a smaller number of changes, while the total turnaround time decreases as subsequent builds do not need to wait for previous test runs to complete.
This feature is also useful for parameterized projects, whose individual build executions — depending on the parameters used — can be completely independent from one another.
Each concurrently executed build occurs in its own build workspace, isolated from any other builds. By default, Jenkins appends "@" to the workspace directory name, e.g. "@2".
The separator "@" can be changed by setting the hudson.slaves.WorkspaceList Java system property when starting Jenkins. For example, "hudson.slaves.WorkspaceList=-" would change the separator to a hyphen.
For more information on setting system properties, see the wiki page.
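For example, if you launch Jenkins directly from the WAR file, the property can be passed on the command line (a sketch; if Jenkins runs as a service you would add the property to the service configuration instead):

# Start Jenkins with a hyphen as the concurrent-workspace separator instead of "@".
java '-Dhudson.slaves.WorkspaceList=-' -jar jenkins.war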
However, if you enable the Use custom workspace option, all builds will be executed in the same workspace. Therefore caution is required, as multiple builds may end up altering the same directory at the same time.
Build vNext tasks are an awesome improvement over the previous build process. One downside though is that I can't make some tasks conditional. I can create an additional build for every combination, but this clearly scales badly and causes lots of additional work if we have to change some other part of the build.
Instead I'd prefer being able to write my own PowerShell tasks that can call existing build tasks. There is at least one downside to this (if no build asks specifically for the vso-task the build agent won't download it), but considering we are using on-premise TFS and build agents I can live with this.
I tried to do something like the following:
$path = get-item "$env:AGENT_HOMEDIRECTORY\Tasks\NuGetPackager\0.1.56\NuGetPackager.ps1"
& "$path" -searchPattern $searchPattern -outputDir "$packageFolder" -configurationToPackage $configurationToPackage -nugetAdditionalArgs "$nugetAdditionalArgs -version $nugetVersion"
Sadly this causes the following error:
2016-04-12T09:50:22.3652811Z ##[error]import-module : Could not load file or assembly 'Microsoft.TeamFoundation.DistributedTask.Agent.Interfaces,
2016-04-12T09:50:22.3652811Z ##[error]Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find
2016-04-12T09:50:22.3652811Z ##[error]the file specified.
2016-04-12T09:50:22.3652811Z ##[error]At C:\Agent1\Tasks\NuGetPackager\0.1.56\NuGetPackager.ps1:19 char:1
Now, one solution I found on the web indicates that I could add the looked-for DLLs to the GAC, but I really, really don't want to. Also, the tasks clearly work just fine when called from TFS directly, so what configuration am I missing?
I tried adding the folder containing the DLLs to the PATH and even calling SetDllDirectory explicitly in PowerShell, but neither of those helps.
Environment: Windows Server 2012 R2 on both build agent and TFS server. TFS 2015 Update 1.
The PowerShell task host that's used by the build agent for 2015 RTM up to Update 2 is a custom host which does creative things to resolve assemblies and handle input/output. These tasks can't be called from outside the agent.
Plus, quite a few build tasks are implemented using Node, so you'll have to detect which one is which and invoke them accordingly.
The build tasks are being migrated to a new vsts-task-lib, which will support out-of-agent invocation. These would allow exactly what you want.
In the meantime, you could take the existing tasks (they're a simple manifest plus script in most cases) and add one string parameter into which you pass a variable that you can then treat as the condition; a sketch of such a guard is shown below. You'd need to replace all the standard tasks and push them again. If you keep the extension ID and the task GUID the same, they'll act as in-place replacements. This is probably the easiest way to do what you want without having to perform all kinds of hacks that take away the task's UI. Just set the version number to something ridiculously higher, like 100.0.1.83; that way you'll always end up using your version.
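As a rough illustration (the parameter name is made up; the real input name is whatever you add to the task's manifest), the cloned task's script could start with a guard like this:

param(
    # ... the task's existing parameters ...
    [string]$runCondition   # hypothetical extra string input surfaced in the task UI
)

# Skip all work unless the variable passed in from the build definition evaluates to 'true'.
if ($runCondition -ne 'true') {
    Write-Host "Condition '$runCondition' not met - skipping this task."
    return
}

# ... original task logic continues below ...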
Note: the new builds are meant to be repeatable, in that calling the same build multiple times always yields the same results. Conditional actions can be captured in custom PowerShell scripts that are stored in source control; these can be executed as part of the workflow.
We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause... problems, such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in ant to directly check if another ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning and deleted at the end of a build. I'd prefer a way that handles user-cancelled builds nicely, but it's not necessary. It needs to work if a build succeeded and if a build failed (but was not killed by the user).
If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort-of semaphore based on a directory might be better because the tempfile really is a unique tempfile. The first thing your script does is use mkdir to create a shared resource directory name, but it only does this if the directory does not exist.
Upon exit it invokes delete on this shared resource name.
The idea is that the content and name of the directory is meaningless -- it only serves as an "IPC" cooperative locking mechanism.
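The mechanism itself would use Ant's own mkdir and delete tasks. Purely to illustrate the idea outside Ant, here is the same cooperative directory lock sketched in PowerShell around a hypothetical build invocation; the lock path and build command are made up:

# Hypothetical lock directory shared by all build scripts.
$lockDir = 'C:\builds\build.lock'

try {
    # Creating the directory fails if it already exists, so only one build acquires the lock.
    New-Item -ItemType Directory -Path $lockDir -ErrorAction Stop | Out-Null
}
catch {
    Write-Host 'Another build appears to be running - aborting.'
    exit 1
}

try {
    # Run the actual build (hypothetical command).
    & ant -f build.xml
}
finally {
    # Release the lock even if the build fails.
    Remove-Item -Path $lockDir -Recurse -Force
}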
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (or has it been renamed?)
[Update]
Perhaps Do I have a way to check the existence of a directory in Ant (not a file)? would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around. Instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems.)