TFS multi-configuration with different sets of variables

I need to deploy the same project and the same version to different environments. Each environment requires its own values for the given variables, e.g. ENV1 requires var1a, var1b, var1c and ENV2 requires var2a, var2b and var2c. You cannot have a combination of var1a and var2b. That means I need to run exactly the same build, but with a different set of variables.
There are about 20 variables and 50 environments, so changing them manually 1000 times per version is not exactly an option.
I could create a different build definition for each set of variable values and end up with 50 definitions, but that seems a little redundant. Not to mention that if I want to remove a step, I would need to update 50 definitions.
Can I somehow link a variable group to a build configuration and make the build definition switch the values automatically?

If you would prefer not to use another tool like Octopus Deploy (even though it could achieve what you need) and stick with TFS, then, assuming you are already on TFS 2015 or a higher version, you can take advantage of creating and deploying releases with Release Definitions in Release Management, and use custom variables to give the same variable a different value depending on the environment it is used in.

Related

How do I reuse release and environment variables in other existing release definitions in TFS 2018?

In TFS 2018 I have a list of release definitions which should share the same variables. These variables consist of release and environment variables. For one release definition, all the required release and environment variables are already defined.
However, I haven't found a way to easily reuse the defined variables in other existing release definitions in TFS 2018. Is there a way to do this?
Creating variable groups is not an efficient option, since defining the same variable name multiple times in a group is not possible.
Here is an example of defining the same variable name multiple times with different values and different scopes in a specific release definition. This is what I also want in a shared group:
Create variable groups for shared variables and link them to the releases and/or environments as appropriate.

Passing release variables between two agent phases

I'm working with TFS on-premise.
My issue is that during a release I have two agent phases separated by a manual intervention.
In the first agent phase, I set a variable with:
Write-Verbose $("##vso[task.setvariable variable={0};]{1}" -f $variablename, $variable)
The problem is that in the second agent phase this variable no longer exists, even when the same agent is used for the second release phase.
How may I pass a variable between two agent phases during the same release?
There is no way to persist variables (whether PowerShell variables or VSTS user-defined variables) between two agent phases (or environments) for now.
There is a related issue, Variables set via logging commands are not persistent between agents, which you can follow.
The workaround for now is to define the variable again in the next agent phase.
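Since `##vso` logging commands are just lines written to the task's stdout, the redefinition can be done from any scripting step, not only PowerShell. A minimal Python sketch (the variable name and value below are placeholders):

```python
def set_variable_command(name, value):
    """Build the VSTS/TFS logging command that sets a release variable
    for the tasks that follow in the *current* agent phase."""
    return "##vso[task.setvariable variable={0};]{1}".format(name, value)

# Printing the command to stdout is what makes the agent pick it up.
print(set_variable_command("myValue", "myValue"))
```

Emitting this line again at the start of the second agent phase makes the value available to the tasks that follow it within that phase.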
You can share a variable between the agent phases by using the TFS Rest API and creating a new variable in the release.
You can create a helper module to facilitate this task.
Get the release by using the environment variable $Env:Release_ReleaseId.
Add a NoteProperty, using Add-Member, to the variables hashtable of the release returned in step 1, where the name is your desired variable name and the value is a ConfigurationVariableValue.
Save the release with the added variable.
In order to use this approach, you would set the variable in the first agent phase. Then, in the second agent phase, you can simply read the TFS variable using the $(VariableName) syntax.
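The steps above can be sketched as follows. This is only an outline of the request body that such a helper module would send; the collection URL, project name, and api-version are assumptions to adapt to your server, and the actual GET/PATCH calls are left out:

```python
import json

def release_url(collection_url, project, release_id):
    # Endpoint for a single release. The api-version shown is an
    # assumption and may differ on your TFS version.
    return "{0}/{1}/_apis/release/releases/{2}?api-version=3.0-preview".format(
        collection_url, project, release_id)

def add_release_variable(release, name, value):
    # Mimics adding a ConfigurationVariableValue-shaped entry to the
    # 'variables' dictionary of the release returned by the GET request.
    release["variables"][name] = {"value": value}
    return release

# Stand-in for the body returned when fetching $Env:Release_ReleaseId:
release = {"variables": {}}
body = json.dumps(add_release_variable(release, "MyVar", "42"))
# 'body' would then be sent back to release_url(...) to save the release.
```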
I've used the "Variable dehydration task" to write the value to my artifact folder in a build pipeline. I then read the JSON with inline PowerShell. Currently, I'm reading it in every task in my release pipeline, which seems mental to me, but it sort of works. You ought to be able to set a global or environment variable and use that instead. Supposedly fixed in 2017, but I'm using 2015.
The right way to do it is using Variablegroups to persist between pipelines: https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml

TFS 2017 - Is it possible to access the release variable in an environment which was set in another environment in the same release?

In TFS 2017, I set a variable in the first environment during execution using the command below:
$myValue= "myValue"
Write-Host ("##vso[task.setvariable variable=myValue;]$myValue")
Is it possible to access this variable in the 2nd environment in the same release?
No. You are using an environment-level variable, which is for values that vary from environment to environment (and are the same for all the tasks in an environment).
However, this can be achieved; you just need to use release definition variables instead.
Share values across all of the environments by using release definition variables. Choose a release definition variable when you need to use the same value across all the environments and tasks in the release definition, and you want to be able to change the value in a single place. You define and manage these variables in the Variables tab in a release definition.
For more details about variables in Release Management, please refer to this tutorial.
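For reference, release definition variables (like other release variables in scope) are exposed to scripts as environment variables, with the name upper-cased and dots replaced by underscores. A small Python sketch, where the variable name is a placeholder:

```python
import os

def get_release_variable(name, default=None):
    """Release variables are exposed to scripts as environment variables:
    the name is upper-cased and '.' becomes '_'."""
    return os.environ.get(name.upper().replace(".", "_"), default)

os.environ["MY_VALUE"] = "myValue"       # simulate the agent's environment
print(get_release_variable("my.value"))  # -> myValue
```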

Jenkins Configuration change

How do I change the name of a configuration in Jenkins? It is "default"; I want to change it to "linux".
You need to specify axis to your Multi-Configuration Project; on the project configuration page click "Add axis":
Your choice here depends on what you are trying to achieve. Do you want your build to run on a specific slave/agent, or do you simply want to set an environment variable to different values? There is a summary of the different axes here.
But for simplicity's sake, let's take the user-defined axis; this will provide an environment variable which is set to the value provided:
In this case I've entered linux and windows and on the project view I get:
And each time I build the project it will run two instances of the project, one with platform equal to linux and one with platform equal to windows.

Jenkins and multi-configuration (matrix) jobs

Why are there two kinds of jobs for Jenkins, the multi-configuration project and the free-style project? I read somewhere that once you choose one of them, you can't (easily) convert to the other. Why wouldn't I always pick the multi-configuration project in order to be safe for future changes?
I would like to set up a build for a project building on both Windows and Unix (and other platforms as well). I found this question, which asks the same thing, but I don't really get the answer. Why would I need three matrix projects (and not three free-style projects), one for each platform? Why can't I keep them all in one matrix, with platforms AND (for example) gcc version on one axis and (my) software versions on the other?
I also read this blog post, but that builds everything on the same machine, with just different Python versions.
So, in short: how do most people configure a multi-configuration project targeting many different platforms?
The two types of jobs have separate functions:
Free-style jobs: these allow you to build your project on a single computer or label (a group of computers, e.g. "Windows-XP-32").
Multi-configuration jobs: these allow you to build your project on multiple computers or labels, or a mix of the two, e.g. Windows-XP, Windows-Vista, Windows-7 and RedHat - useful for checking compatibility or building for multiple platforms (Qt programs?)
If you have a project which you want to build on Windows & Unix, you have two options:
Create a separate free-style job for each configuration, in which case you have to maintain each one individually
Create one multi-configuration job and select two (or more) labels/computers/slaves - one for Windows and one for Unix. In this case, you only have to maintain one job for the build.
You can keep your gcc versions on one axis and software versions on another. There is no reason you should not be able to.
The question that you link to has a fair point, but one that does not relate to your question directly: in his case, he had a multi-configuration job A, which - on success - triggered another job B. Now, in a multi-configuration job, if one of the configurations fails, the entire job fails (obviously, since you want your project to build successfully on all your configurations).
IMHO, for building the same project on multiple platforms, the better way to go is to use a multi-configuration style job.
Another option is to use a python build step to check the current OS and then call an appropriate setup or build script. In the python script, you can save the updated environment to a file and inject the environment again using the EnvInject plugin for subsequent build steps. Depending on the size of your build environment, you could also use a multi-platform build tool like SCons.
You could create a script (e.g. build) and a batch file (e.g. build.bat) that get checked in with your source code. In Jenkins in your build step you can call $WORKSPACE/build - Windows will execute build.bat whereas Linux will run build.
Another option is to use a user-defined axis combined with slaves (windows, linux, ...); you then need to add a filter for each combination and use the Conditional BuildStep Plugin to set the build step specific to each platform (Execute shell, Windows batch command, ...).
This link has a tutorial; it is in Portuguese, but it's easy to work it out from the images:
http://manhadalasanha.wordpress.com/2013/06/20/projeto-de-multiplas-configuracoes-matrix-no-jenkins/
You could use the variable that jenkins create when you define a configuration matrix axis. For example:
You create a slave axis with name OSTYPE and check the two slaves (Windows and Linux). Then you create two separate build steps and check for the OSTYPE environment variable.
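Jenkins exports each axis as an environment variable named after the axis, so a single scripted step could branch on it instead of two separate build steps. A minimal sketch, with the axis values and commands as placeholders:

```python
import os

def step_for(ostype):
    # Map the OSTYPE axis value to the platform-specific build command.
    commands = {
        "windows": "build.bat",
        "linux": "./build.sh",
    }
    return commands[ostype]

# Inside the job, Jenkins sets OSTYPE to the current axis value:
# command = step_for(os.environ["OSTYPE"])
```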
You could instead use a higher-level scripting language such as Python, which is multi-platform and can achieve the same functionality independent of the slaves' names, in just one build step.
If you go the matrix route with Windows and something else, you'll want the XShell plugin. You just create your two build scripts such as "build.bat" for cmd and "build" for bash, and tell XShell to run "build". The right one will be run in each case.
A hack to have batch files run on Windows and shell scripts on Unix:
On Unix, make batch files exit with 0 exit status:
ln -s /bin/true /bin/cmd
On Windows, find a true.exe, name it sh.exe, and place it somewhere in the PATH.
Alternatively, if you have any sh.exe installed on Windows (From Cygwin, Git, or other source), add this to the top of the shell script in Jenkins:
[ -n "$WINDIR" ] && exit 0
Why wouldn't you always pick the multi-configuration job type?
Some reasons come to mind:
Because jobs should be easy to create and configure. If it is hard to configure any job in your environment, you are probably doing something wrong outside the scope of the Jenkins job. If you are happy that you managed to create that one job and it finally runs, and you are reluctant to do this whole work again, that's where you should try to improve.
Because multi-configuration jobs are more complex. They usually require you to think about both the main job and the different sub-job variables, and they tend to grow in complexity to a level beyond being manageable. So in a single-job scenario, you'd probably waste thought on not using that complexity, and when extending the build variables, things might grow in the wrong direction. I'd suggest using simple jobs as the default, and multi-configuration jobs only if there is a need for multiple configurations.
Because executing multi-configuration jobs might need more job slots on the slaves than single jobs. There will always be a master job that is executed on a special, invisible slot (that's no problem by itself) and triggers the sub jobs, but if those sub jobs themselves trigger sub jobs, you might easily end in a deadlock if there are more sub jobs than slots, and some sub jobs again trigger sub jobs that then cannot execute because there are no more open slots. This problem can be circumvented with some configuration on the slaves, but it is present and can occur when several multi-configuration jobs run concurrently.
So in essence: The multi configuration job is a more complex thing, and because complexity should be avoided unless necessary, the regular freestyle job is a better default.
If you want to select which slave runs the job, you need to use a multi-configuration project; otherwise you won't be able to select/limit the slaves it runs on. There are three ways to do it, and I've tried them all: the Tie plugin works only for the master job, and "Restrict where this project can be run" in Advanced Project Options is not rock-solid either, so you want to use the Slave axis, which is proven to work correctly today.