Overriding TestSetting Parameters with *.RunSettings in VSTS build with SpecRun - specflow

Just been googling around and found that parameters can't be overridden with SpecRun in a VSTS build's VSTest task when using *.RunSettings. Just wondering if anyone has successfully done this before?

You can modify the content of the file before running the tests, for example by using the Tokenization task:
Using Tokenization (Token Replacement) for Builds/Releases in vNext/TFS 2015
On the other hand, there is a thread about reading parameter values in SpecFlow tests: How to read test run settings parameter value in specflow tests?
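For context, the file in question is a *.runsettings file with a TestRunParameters section. A minimal sketch is below; the parameter name and the token value are placeholders for illustration (the `#{...}#` token style is what the Tokenization task looks for):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <!-- The VSTest task's "Override test run parameters" input would normally
         replace this value; with SpecRun it reportedly is not applied, hence
         the token-replacement workaround before the test run. -->
    <Parameter name="AppUrl" value="#{AppUrl}#" />
  </TestRunParameters>
</RunSettings>
```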

Related

Specflow test runsettings & usersecrets

I have some specflow tests which use runsettings files to pass parameters through to the test cases and some of these parameters are secrets.
In our build and release pipelines on ADO I have figured out that I can choose the run settings file and override its parameters with pipeline variables, but I am trying to figure out a way that we can still run these tests locally.
I want to be able to create user secrets using the .NET tool and somehow have them passed into the tests at runtime, replacing anything in the runsettings with a matching ID.
I don't want anything written into the runsettings files themselves, as that still poses a risk that someone might commit the runsettings files with the secrets in them.
Any idea how I can accomplish this? I'm struggling to find anything.

Azure DevOps - Task Group - Visual Studio Test task - Path to custom test adapters - set variable by script in previous tasks does not work

We use xUnit to run tests using Visual Studio Test task after Visual Studio Build task.
The solution is created on .NET Fw 4.6.2 and using xUnit version 2.4.1.
There is some kind of bug that makes the test run finish with all tests GREEN but the task itself failing.
As a workaround, the Visual Studio Test task's Path to custom test adapters property can be pointed directly at Packages/xunit.runner.visualstudio.2.4.1/build/_common, where the framework's testing DLLs are placed.
So far so good: using that workaround gives a way to go.
As an improvement I wrote a PS script that runs before the Test task, setting a variable that I then use in the Visual Studio Test task's Path to custom test adapters property.
The script is supposed to set my custom variable "XunitRunnerFolder" to a programmatically selected path.
The problem is that although the variable is correctly set in the PS script (confirmed by reading it in an extra script that I added between the PS and Test tasks), the Test task still shows the initial value of "XunitRunnerFolder" instead of the current/changed one.
Recap:
initial value of XunitRunnerFolder is "auto" (defined on Task Group input variables)
PS script changes the XunitRunnerFolder variable to Packages/xunit.runner.visualstudio.2.4.1/build/_common, using ##vso[task.setvariable variable=XunitRunnerFolder;]xxx/Packages/xunit.runner.visualstudio.2.4.1/build/_common
an extra script that reads the $Env:XunitRunnerFolder confirms its value is set to the (2.) value
Visual Studio Test task has set in its property Path to custom test adapters the variable $(XunitRunnerFolder), but in its log it writes-out that the value is the initial one ("auto")
Is there, please, a way to solve this, so that the Test task correctly uses the newly set value?
Two days of googling haven't helped, maybe because I'm not a native English speaker and not asking correctly.
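The logging command from the recap can be sketched as a one-liner echoed from a script step; note that a variable set this way only becomes visible to subsequent tasks, never to the task that emits it:

```shell
# Emit the VSTS logging command to set a build variable.
# A variable set this way is only visible to *subsequent* tasks in the job,
# not to the script task that echoes it.
echo "##vso[task.setvariable variable=XunitRunnerFolder;]Packages/xunit.runner.visualstudio.2.4.1/build/_common"
```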

How to get nUnit test report after TFS 2015 build?

Good day.
I have a TFS build with a Visual Studio Test step. All tests run, but I can't get detailed info about them, only the quantity (passed/failed/other) and the run duration.
Also, I see "Outcome: Failed" and "This build doesn't have test results for the outcome you've selected".
How to configure this outcome?
You can't directly use the built-in outcome to analyze the detailed test results.
You need to use the Publish Test Results task: first generate an XML file containing the results, then point the publish step at that file so the test results show up in the build output.
For more details on how to use this, refer to this similar tutorial with xUnit tests: Execute and publish xUnit Tests results with .NET Core and VSTS
Note: for now VSTS/TFS does not support the NUnit3 format. Source link: Support NUnit2 format
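In newer YAML pipelines the publish step looks roughly like the sketch below (TFS 2015 itself uses the equivalent Publish Test Results step in the web-based build editor; the results file path here is an assumption):

```yaml
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'NUnit'            # NUnit 2 XML; NUnit 3 is not supported
    testResultsFiles: '**/TestResult.xml' # assumed output path of the NUnit run
```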

Can you ask for user input for TFS 2015 CI build?

This seems simple enough, but I can't find a solution for this online.
I am integrating SonarQube into our build definitions that get triggered on check in. I want the version SonarQube uses to be tied back to the project number defined by the business side of things.
Ideally, I would like to be able to prompt the user for input. When you go to check in and it kicks off the build, it would ask you for the project number to be used as the version for SonarQube. Is this something TFS 2015 supports?
User input for build definitions
As far as I know, build definitions that are not manually triggered do not prompt for user input. A prompt allowing users to set build variables is shown for manually triggered builds from the VSTS web page.
SonarQube project version
I would recommend against using the build or assembly version in your build tasks. This is because the SonarQube concept of version is quite different from the build concept. SonarQube uses versions as a baselining mechanism / to determine the leak period. If you bump the version number often, the leak period will be too short to be actionable.
I'd recommend keeping the SonarQube project version in sync with your release schedule instead.
The short answer to this question is no, there is no way to prompt for input on a non-manually triggered CI build.
Here's what we did to work around this:
I wrote a PowerShell script that reads a config file and sets the values as environment variables exposed to later build steps; those variables are then what's specified in the Sonar Begin Analysis build task.
I packaged that script up as a custom build task that reads a "sonar.config" file.
This means all we have to do is add a "sonar.config" file to each solution we want to run Sonar analysis for, defining the key, name and version for the project, and the build task will populate all the necessary environment variables as the first step in the build.
So not a perfect solution, but it gets the job done without us having to add a lot of extra code to our solutions.
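A minimal bash sketch of the same idea (the original was a PowerShell custom task, and the key=value format of "sonar.config" is an assumption):

```shell
# Hypothetical sonar.config with key=value lines; the real format is whatever
# your custom build task expects.
cat > sonar.config <<'EOF'
SonarProjectKey=my-project
SonarProjectName=My Project
SonarProjectVersion=1.4
EOF

# Turn each entry into a logging command so the variables become available
# to the later "Sonar Begin Analysis" step.
while IFS='=' read -r key value; do
  [ -n "$key" ] && echo "##vso[task.setvariable variable=$key;]$value"
done < sonar.config
```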

How to make undocumented SpecFlow tests fail (not be marked inconclusive) when using Jenkins and NUnit?

We use SpecFlow and NUnit in Visual Studio at work. Very useful, etc, etc. Once we've finished development the checked in code goes to a build server which uses Jenkins to build and run all the tests. Very helpful, etc.
However, there is an annoying hole: if you a/ use a step in SpecFlow that you haven't defined, a default step definition is used which marks the test as Inconclusive; then b/ NUnit ignores Inconclusive tests (apparently MSTest fails them instead); and then c/ Jenkins doesn't detect a problem and passes the build. This means our build server can pass tests that aren't even defined correctly.
So, can anyone EITHER:
- tell me how to get SpecFlow to throw errors on missing steps? (can't find an option for it or anything on the web) OR:
- tell me how to get NUnit to treat Inconclusive tests as failing? (once more, can't find an option for it or anything on the web) OR:
- somehow get Jenkins to pick up the Inconclusive results and treat that as failing?
All suggestions will be considered, left-field ones included! Thank you.
There is a configuration option to change this behavior.
See here for the documentation. The configuration we're interested in here is missingOrPendingStepsOutcome.
The default setting is:
missingOrPendingStepsOutcome="Inconclusive"
We simply need to change (or add) the setting as follows:
missingOrPendingStepsOutcome="Error"
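In a classic SpecFlow App.config this setting lives on the runtime element under the specFlow section, roughly like this (surrounding configuration omitted):

```xml
<specFlow>
  <!-- Fail on missing or pending step definitions instead of reporting
       them as Inconclusive, so the build server catches them -->
  <runtime missingOrPendingStepsOutcome="Error" />
</specFlow>
```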
