User permissions for TFS Build server

I am creating a build using the new TFS 2015 build definitions. I have MSBuild tasks as well as npm/gulp tasks. I am looking at using variables to allow me to build and deploy to each environment, with DEV being the only one that runs on check-in. However, I don't want just anyone to be able to start a deploy to production. How would I go about limiting the users that can start a deploy to production? I'd prefer to have only one build definition, for ease of maintenance.

Use the Release hub capabilities for deployments and create an approval workflow for your environment pipeline. Approvals can be assigned per environment, so only the designated approvers can let a deployment proceed to production.

Related

How to build different configs in Azure DevOps release pipeline?

I currently have an Azure DevOps release pipeline containing Test, Acceptance and Production stages, which are triggered in that order. The Test stage is triggered when there is a new build available to deploy.
The problem I have with this is that all stages currently deploy the exact same artifact. But this is wrong, since they are deploying to different environments that need to have their own version of the Web.config.
How do I change my setup in such a way that all environments get the right package? Should I change my build setup in such a way that it builds for multiple different configs or should I have separate builds for each environment? And how do I select what artifact each stage of the release pipeline should deploy?
This is what my release pipeline looks like now: [screenshot omitted]
Each environment can have its own variables defined. Simply click on the Variables tab and make sure you scope each of those variables to the proper environment.
Then, using the Azure App Service Deploy task (if targeting Azure) or the IIS Web App Deploy task, you can update your configuration files with the values of your variables; the task documentation describes how to do so.
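For example, if you enable the task's XML variable substitution option, release variables whose names match appSettings keys or connection string names are swapped into the deployed web.config. A minimal sketch, assuming hypothetical ApiBaseUrl and DefaultConnection variables scoped per environment:

    <!-- web.config inside the build artifact; the placeholder values below are
         overwritten at deploy time by release variables with matching names -->
    <configuration>
      <appSettings>
        <!-- Define ApiBaseUrl once per stage (Test, Acceptance, Production) -->
        <add key="ApiBaseUrl" value="https://localhost/api" />
      </appSettings>
      <connectionStrings>
        <!-- A release variable named DefaultConnection replaces this value -->
        <add name="DefaultConnection"
             connectionString="Server=.;Database=Dev;Integrated Security=true" />
      </connectionStrings>
    </configuration>

The same artifact is then deployed unchanged to every stage, and only these values differ per environment.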

Jenkins TFS plugin deploy successful changes only

I'd like to do partial builds of changes from today and deploy them to other environments (TST, Support or UAT, Prod). Can Jenkins do this on its own?
It can, but a better question is whether it should. Jenkins is a build system, not a deploy system. You should instead use a release management tool like Octopus Deploy or Release Management for Visual Studio.
http://nakedalm.com/blog/create-release-management-pipeline-professional-developers/

How do you manage multiple releases in multiple environments in continuous integration/delivery?

I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, and use some variant of, e.g., git-flow with a develop branch. Once tagged, the work goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests and its artifacts are ready; it is tagged as valid, but the business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> Deploy to Staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration pieces along the way. In a build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some stuff is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of continuous delivery boils down to the build pipeline. http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
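To make that concrete, the wiring in each downstream job is just a Copy Artifact build step pointing at its upstream job, plus an archive step of its own. A rough sketch of the relevant fragment of a downstream job's config.xml; the job name is made up, and the element names are from memory and vary across plugin versions (you would normally set this up through the job configuration UI rather than by hand):

    <builders>
      <!-- Pull the binary produced by the upstream job (latest successful build) -->
      <hudson.plugins.copyartifact.CopyArtifact>
        <project>test-and-deploy-uat</project>
        <filter>target/*.war</filter>
        <target>incoming/</target>
        <selector class="hudson.plugins.copyartifact.StatusBuildSelector"/>
      </hudson.plugins.copyartifact.CopyArtifact>
      <!-- ...deploy steps for this environment go here... -->
    </builders>
    <publishers>
      <!-- Re-archive the same binary for the next downstream job -->
      <hudson.tasks.ArtifactArchiver>
        <artifacts>incoming/*.war</artifacts>
      </hudson.tasks.ArtifactArchiver>
    </publishers>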
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that lets it easily be propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging and check-ins always seem to be a touchy subject when it comes to continuous integration, so I won't go into it much. But a lot of people share the idea that: "If different members of the team are working on separate branches, then by definition, they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint ... which gets the job done effectively, giving you the entire life of any individual artifact. A more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and about the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you could let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.

Which is the best practice for using Jenkins?

Using a single server that contains only one Jenkins instance, building for dev, test, etc.?
Using a separate Jenkins instance on each of the dev and test servers to build and run tests?
Edit:
This is a step-by-step explanation of our deployment and release model:
Our server-side developers develop and commit/push their code to GitHub.
The CI server where Jenkins is located polls SCM, fetches the changes, and builds; unit tests run as part of this (within the CI server).
After the build, the artifacts are deployed to the repository server (an Artifactory server).
Then the CI server deploys the latest successful build to the Development Server.
Then the client mobile developers can develop against the latest successful snapshot build of the server side.
This is our standard deployment process.
By the way,
We also do a test deployment to the test server via the CI server, using another job on Jenkins (same CI server), but this is handled/triggered manually.
Preproduction and production transitions are done manually as well (preproduction and production are different servers, of course).
Questions;
Integration tests should be run on the test server. How can I achieve that while building the system on the remote CI server instead of building it on the same machine (the test server)?
As a further step, what would be the best option for constructing a continuous delivery system?
Thanks
A good approach is to have a single CI system that builds the system continuously as development makes changes. Each build also runs all the unit tests and results in some kind of package that can be deployed. That can be further connected with automation that deploys the package and runs other tests, or it can be used by, e.g., testers to test the system further.
Depending on your release model and branching strategy as well as type of system/product this basic setup can be adjusted to fit your needs.
If you want more details please explain what you build and how you release/deploy.

TFS or Teamcity, how to automate deployment to various environments?

Looking for advice on how to handle this scenario.
We have 3 environments: Dev, QA and Production.
Currently, pushing the code to each environment is a manual process; I'm wondering how something like CruiseControl or TeamCity could streamline it.
How can we push to the various environments in an automated way?
How should TFS be set up to make this happen? I.e., a master branch, feature branches, etc.
Scenario:
Developer#1 pushes their changes to the Dev and QA servers.
Developer#2 pushes their changes to the Dev and QA servers.
Now we need to only push Developer#1's changes to production.
Should the main branch have only the code that should be going to production?
To control what gets pushed to each environment, KMoraz's approach would be the correct one: using branches and merging.
Now, for build and deployment automation, the latest setup I've been using is with TeamCity.
My setup is:
Trunk build: compiles on every commit, runs all unit tests, generates code coverage reports, runs FxCop
Static analysis build: runs nightly against Trunk, executing Duplicate Finder (TeamCity), ConQAT code clone analysis, StatSVN, and ReSharper code inspections (TeamCity)
DEV Deployment (dependency on Trunk build): on every commit, if the Trunk build is successful, the application is automatically deployed to a DEV environment, using MS WebDeploy with config transformations.
QA Deployment: triggered manually through TeamCity's interface (at the click of a button) when moving to QA. Deploys the application to the QA server using MS WebDeploy with config transformations.
You would also set up builds for different branches, depending on your needs, especially for branches created for releases of stable versions.
The key part is having different Visual Studio build configurations (just as you have "Release" and "Debug", you should have "Dev", "QA", etc.), which you use along with web.config transformations to get WebDeploy to configure your environment for you.
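For instance, extra configurations are just conditional property groups in the project file. A sketch of what a hypothetical "QA" configuration (created via Build > Configuration Manager) might look like in a .csproj:

    <!-- Hypothetical QA configuration in the .csproj; Visual Studio generates
         a group like this when you add a configuration in Configuration Manager -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'QA|AnyCPU' ">
      <OutputPath>bin\QA\</OutputPath>
      <DefineConstants>TRACE</DefineConstants>
      <Optimize>true</Optimize>
      <DebugType>pdbonly</DebugType>
    </PropertyGroup>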
That way you'd have different web.Dev.config, web.QA.config transformations, one for each build configuration, with specific settings.
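A minimal web.QA.config transform might look like this (the Environment app setting is a made-up example; the xdt attributes are the standard transform syntax, applied by WebDeploy when publishing the QA configuration):

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- Replace the value of the matching key when publishing as QA -->
        <add key="Environment" value="QA"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
      <system.web>
        <!-- Strip debug="true" outside of local development -->
        <compilation xdt:Transform="RemoveAttributes(debug)" />
      </system.web>
    </configuration>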
There's an excellent series of posts by Troy Hunt called "You're deploying it wrong!" which guides you through the setup of automated builds and deployments.
http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity.html
It was very useful to me when setting this up.
Now we need to only push Developer#1's changes to production.
-Developer #1 checked in his code to the Dev branch. After QA verified his changes, you merge the changes to the Main branch and build a release for production from Main.
Should the main branch have only the code that should be going to production?
-Yes. Ideally, production releases should be built from the Main branch.
How can we push to the various environments in an automated way?
-In TFS, a common practice is defining a build definition per branch and/or build type. Apart from the source and build type, each definition can also have its own tasks, e.g. run unit tests, publish to certain folders, deploy build artifacts to Lab Management, etc. For example:
ProjectName-Main-Gated
ProjectName-Dev-CI
ProjectName-Dev-Nightly
ProjectName-Test-CI
