Is there a way to add inputs to a release or environment deployment, each time a new release gets triggered?
For example, I would want a parameter when launching a release or environment deployment that could be used inside of a step. Is this possible to accomplish through a step or some other way?
Not that I know of; I wanted something similar a while back.
I did manage to work around it by putting a file with the parameter into the artifact, so the value is effectively fixed at build time.
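A minimal sketch of that workaround, assuming a JSON parameter file and the standard Build.ArtifactStagingDirectory / System.DefaultWorkingDirectory variables (the file name, parameter, and artifact alias are illustrative):

# Build step: write the parameter into the artifact staging directory so it ships with the artifact.
$param = @{ TargetRegion = "west-eu" }   # illustrative parameter
$param | ConvertTo-Json | Set-Content "$Env:BUILD_ARTIFACTSTAGINGDIRECTORY\release-params.json"

# Release step: read the parameter back from the downloaded artifact.
# "MyArtifact" is an assumed artifact alias.
$param = Get-Content "$Env:SYSTEM_DEFAULTWORKINGDIRECTORY\MyArtifact\release-params.json" | ConvertFrom-Json
Write-Host "Deploying to $($param.TargetRegion)"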
I know this is old, but in case anybody else comes across it: this is possible by creating a Draft Release. It's not as intuitive as build variables, but it is possible:
1. Create a Draft Release from the release definition.
2. Change the variable values in the Draft Release and save.
3. Start the release from the Draft Release.
Found the solution here:
Defining variables while creating a Release in VSTS
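If you prefer to script it rather than click through the Draft Release UI, here is a hedged sketch of queueing a release with a variable override via the REST API (the account URL, definition id, variable name, and API version are all assumptions, and depending on your VSTS/TFS version the variable may need to be marked as settable at release time):

# Queue a new release and override a release-scoped variable.
$pat  = "<personal access token>"                                   # assumption
$uri  = "https://fabrikam.vsrm.visualstudio.com/MyProject/_apis/release/releases?api-version=4.1-preview"
$body = @{
    definitionId = 12                                               # assumed release definition id
    description  = "Queued with a runtime parameter"
    variables    = @{ MyParameter = @{ value = "some runtime value" } }
} | ConvertTo-Json -Depth 5
$auth = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType "application/json" -Headers @{ Authorization = $auth }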
I am not using ANT at all, so the proposed duplicate does not answer this question about Jenkins.
I am working on a build script that will increment the version number of the program. To do this, the version file will be checked out, the next version number computed and written back, and then the file checked in.
It occurs to me that this will trigger yet another build in an endless cycle. When we used TFS builds, we could put a string like ***NOCI*** in the check-in comment and that check-in would be ignored and not trigger a new build.
Is there any such option for Jenkins or a technique I can apply myself to solve this?
I am using the TFS plugin to access my SCM.
The Subversion SCM plugin allows you to specify paths that will be excluded when polling for new versions. The Git SCM plugin can also be configured to exclude certain regions.
By excluding the file that contains the version number, you will be able to avoid the vicious circle that you observed.
Since you cannot cloak or .tfignore your versioning file, you can use the NOCIOption property and pass in the flag for it in your check-in comments.
You would set up the NOCIOption property of the SyncWorkspace workflow activity in TFS and, when checking in your version change, pass the ***NO_CI*** flag in the check-in comment. This is kind of hackish and could be avoided if you used GlobalAssemblyInfo.cs versioning, linked throughout your project, instead.
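For example, a hedged sketch of the check-in step from a build script, with the marker in the comment (the file path is illustrative):

# Check in the bumped version file without triggering another CI build.
# ***NO_CI*** in the comment is what suppresses the CI trigger.
& tf.exe checkin .\GlobalAssemblyInfo.cs /comment:"Bump version ***NO_CI***" /noprompt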
I suggest not using your "versioning" file at all, as it is fundamentally flawed for exactly this reason of cyclic check-ins. I would suggest using a GlobalAssemblyInfo.cs linked throughout your .NET solution and stamping it prior to calling MSBuild. It works like a champ for setting and linking versioning throughout the .NET projects in your solution. You implement global assembly info in your solution as described in this answer here.
You can read more about it at "What are the best practices for using assembly attributes". You could simply stamp this file (via PowerShell or whatever) and call MSBuild, and your version will be present in all DLLs.
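A minimal stamping sketch, assuming a GlobalAssemblyInfo.cs at the repository root and a version number supplied by the build (both are assumptions):

# Stamp the shared assembly info before calling MSBuild.
$version = "1.2.3.4"                     # assumed to come from the build
$file    = ".\GlobalAssemblyInfo.cs"     # assumed location of the linked file
(Get-Content $file) `
    -replace 'AssemblyVersion\("[^"]*"\)',     "AssemblyVersion(""$version"")" `
    -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$version"")" |
    Set-Content $file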
I realize that TFS is meant for continuous delivery, so assigning an entire Release Version is not a native concept. However, we need to deploy a few products manually, and we would like to assign the version to a specific person to deploy. Currently I do not see any options to associate an assignee with a Release Version. Is it possible to do this?
You could disable the Continuous deployment trigger and have someone deploy the project manually. Or you could define pre-deployment approvers, so that a specific person has to check and approve the release before it is deployed.
I'm working with TFS on-premise.
My issue is that during a release I have two agent phases separated by a manual intervention.
In the first agent phase, I set a variable with:
Write-Verbose $("##vso[task.setvariable variable={0};]{1}" -f $variablename, $variable)
The problem is that in the second agent phase this variable doesn't exist anymore, even when the same agent is used for the second phase.
How can I pass a variable between two agent phases during the same release?
There is currently no way to persist variables (whether PowerShell variables or VSTS user-defined variables) across agent phases (or environments).
There is a related issue, "Variables set via logging commands are not persistent between agents", that you can follow.
The workaround for now is to define the variable again in the next agent phase.
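In other words, at the start of the second agent phase you recompute or re-read the value and issue the logging command again (the variable name and value here are illustrative):

# Re-create the variable at the start of the second agent phase;
# nothing set in the first phase carries over, so recompute or re-read it here.
$value = "recomputed-value"
Write-Host "##vso[task.setvariable variable=MyVariable]$value"
# Subsequent tasks in this phase can then use $(MyVariable).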
You can share a variable between the agent phases by using the TFS Rest API and creating a new variable in the release.
You can create a helper module to facilitate this task.
1. Get the release by using the environment variable $Env:Release_ReleaseId.
2. Add a NoteProperty, using Add-Member, to the variables hashtable of the release returned in step 1, where the name is your desired variable name and the value is a ConfigurationVariableValue.
3. Save the release with the added variable.
In order to use this approach, you would set the variable in your first agent phase. Then, in the second agent phase, you can simply read the TFS variable using the $(VariableName) syntax.
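A hedged sketch of such a helper, assuming an on-premises collection reachable with default credentials (the URLs, API version, and variable name are assumptions):

# Persist a value by writing it back into the release via the REST API.
$collectionUrl = $Env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI     # ends with a trailing slash
$project       = $Env:SYSTEM_TEAMPROJECT
$releaseId     = $Env:RELEASE_RELEASEID
$uri           = "$collectionUrl$project/_apis/release/releases/${releaseId}?api-version=4.1-preview"

# 1. Get the current release.
$release = Invoke-RestMethod -Uri $uri -Method Get -UseDefaultCredentials

# 2. Add (or overwrite) a variable on the release.
$release.variables | Add-Member -MemberType NoteProperty -Name "MySharedVariable" `
    -Value ([pscustomobject]@{ value = "some value" }) -Force

# 3. Save the release with the added variable.
Invoke-RestMethod -Uri $uri -Method Put -UseDefaultCredentials `
    -Body ($release | ConvertTo-Json -Depth 10) -ContentType "application/json"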
I've used the "Variable dehydration task" to write the value to my artifact folder in a build pipeline. I then read the JSON with inline PowerShell. Currently I'm reading it in every task in my release pipeline, which seems mental to me, but it sort of works. You ought to be able to set a global or environment variable and use that instead. Supposedly fixed in 2017, but I'm using 2015.
The right way to do it is to use variable groups to persist values between pipelines: https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml
I have a requirement to do a dependent build using Jenkins. Following is the requirement:
Project 1 has a branch that is used by two release lines. For example, project1's development branch ikt/master is shared between the release lines rel1.2_4GB and rel_1.2_2JB.
Whenever a change is submitted to ikt/master of project1, it should trigger builds of both release lines, rel1.2_4GB and rel_1.2_2JB, simultaneously.
Each build result should wait for the other build to pass, meaning both builds should be green.
Please suggest steps using a plugin as well as without a plugin (if possible).
Kind Regards,
I think your best option is to use the Parameterized Trigger Plugin to do this.
It's very simple and easy to use and you can trigger several child jobs and wait for their results. Based on their results, you can choose to fail or pass the build.
I suggest you read some more about it and do some experimenting. It works very well for me.
I hope this helps.
NOTE - I suggest not wasting time looking for a non-plugin solution. If you have a good tool, use it. Don't lose time trying to be smarter...
I have a Jenkins build that can either be triggered via scheduling, by a user requesting it, or by being called as a build step from other builds. If this build is called as part of another build, it needs to save some information for the larger build to use. I want to pass this information back up by writing to a file. The only problem is having the builds agree on a location to write to.
One approach is to write it to a well known location, but this does not allow several builds to be run in parallel since one will clobber the other.
Another is to add a build parameter to the build that other builds will fill in with a file location to write to. This, to me, seems like a bit of a hack since it means that whenever the build is run, it will need to have a parameter passed in, even if it is just starting with the default value.
The final approach that I considered was having the parent build set an environment variable in the build and having the child check for the existence and content of the variable and act appropriately. Unfortunately, I cannot find a way to set this up in Jenkins.
It seems to me that a combination of archiving artifacts in a post-build step and the Copy Artifact Plugin would do the job.
It sounds like you need the Parameterized Trigger Plugin.