Create environment with multiple servers - TFS Release Management

Is there a way (or some plugin/add-on) to add servers to an environment in TFS Release Management 2015?
I came from a team that used Octopus Deploy for DevOps. One thing that was extremely helpful was the ability to add multiple servers to an environment. Then, when you execute deployment steps on an environment, it applies those actions to all the servers that are part of the environment -- making deployments super easy. I have yet to find similar functionality in TFS Release Management and it's quite sad. They have a concept of an environment, but it's more like a "stage" than a logical/physical group of servers. To deploy the same step to multiple servers in an environment, you have to re-create the step multiple times or specifically write the names of all the servers in each step. Sad!

There isn't a built-in feature that executes deployment steps against an environment and applies them to all the servers in that environment.
But in web-based release management, many remote-deployment steps/tasks, such as PowerShell on Target Machines and IIS Web Deployment, accept a comma-separated list of machine IP addresses or FQDNs along with ports.
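For example, the Machines input of a PowerShell on Target Machines task can take a list like the following (the addresses and variable names here are illustrative; 5986 is the default WinRM HTTPS port):

```
Machines:    192.0.2.11:5986, 192.0.2.12:5986, web01.contoso.com:5986
Admin Login: $(AdminUser)
Password:    $(AdminPassword)
```

One task configured this way runs the script on every machine in the list, which approximates the "run on all servers in the environment" behavior from Octopus Deploy.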
There is an article that may benefit you: Environments in Release Management
Regarding server- and client-based release management, an environment there can include multiple servers, but you need to add the steps multiple times, once per server. I recommend using web-based release management instead.

Related

How to create a deployment pipeline using Jenkins

What does it mean to deploy code from dev to prod environments using Jenkins? Can anyone please help? I currently have the source code in my GitLab, and I need to deploy this code from the dev environment to the prod environment.
Thanks in advance.
Source code present in GitLab is just the set of files that are needed to create a WAR/EAR/JAR to run the application.
It's the environment files, if present, that make the application behave slightly differently in each environment (DEV/PROD). The data you see in DEV will not be the same as what you see in PROD (where the application is live), because developers tend to test and modify code/data to ensure the application works as expected. This is fine in DEV but a big no-no in PROD, as it would impact the business.
Deploying code from dev to prod environments just means building the application with the right environment files, e.g. DEV points to the xyz DB while PROD points to the abc DB.
All of this can be achieved with Jenkins, and if your project uses Maven/Gradle, then a single command can achieve the above. (A bit of googling will help you here.)
If your project doesn't use Maven/Gradle, then you will have to replace the environment file each time a build happens, based on a parameter passed from Jenkins.
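As a rough sketch of that parameterized approach (the file names and contents below are made up for illustration), the Jenkins job could copy the right environment file into place before packaging:

```shell
# Two hypothetical environment files (stand-ins for files in the repo).
mkdir -p config src/main/resources
echo "db.url=jdbc:mysql://xyz-dev/app"  > config/app-dev.properties
echo "db.url=jdbc:mysql://abc-prod/app" > config/app-prod.properties

# DEPLOY_ENV would be a Jenkins build parameter; default to dev here.
ENV="${DEPLOY_ENV:-dev}"
cp "config/app-${ENV}.properties" src/main/resources/app.properties
```

With Maven you would instead select a build profile, e.g. `mvn clean package -Pprod`, assuming the profiles are defined in the project's pom.xml.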
This whole process is part of DevOps culture. In simple terms it looks like this:
Developer pushes changes to source control (e.g. GitLab).
The build server (e.g. Jenkins) automatically downloads the latest changes and builds the application (i.e. creates setup files or just the binaries). Usually you also run tests (unit, integration, automation tests, etc.). If something fails, the developers get notified about it. This whole process is called continuous integration.
If everything went right, you can deploy your application to production. This part of the process is called continuous deployment.
It's a common strategy for web apps. For larger projects, a QA team tests the software, and the software gets deployed once the QA team approves it.

How to deploy separate team projects to one server

Using TFS 2018, I have a need to deploy (individually) different team projects to the same target server. Until update 2 is released, it is not possible to share deployment groups.
However, there must be a way to deploy different team projects to the same server.
My thought was maybe I have to create one release agent for each project, since I cannot share a deployment pool. However, I read a TechNet post from 2016 that says
It is recommended to limit the number of agents, in a build machine, to the number of CPU cores it has.
Whether the article was being ambiguous and meant build or release agents, or meant build agents only, I don't know. My target server has 4 CPUs, and I need the option to deploy any number of individual, independent team projects to the same server, so it's starting to look like creating a separate deployment agent per team project is not going to be feasible.
Until update 2 is released, it is not possible to share the same deployment group. My question is: how do I actually achieve this necessary outcome of independently deploying more than one team project to the same server?
Please remember that I am restricted to TFS. VSTS is not an option in my scenario.
That recommendation is really more for build servers. Build servers have very different requirements in terms of CPU/memory than release agents. Build agents are very memory and CPU intensive while running builds. You're not going to be running builds on your release agents.
The release agent is going to be idle the vast majority of the time. I don't see a problem with creating a second deployment group with a second agent install as a workaround until you can upgrade.

TFS private build agents

I'm in the process of establishing a dedicated build machine with several build agents for doing CI/CD for multiple team projects.
I've configured one agent pool against our TFS server, and installed 10 agents from that pool as services on our machine.
Our work is mostly .NET, but we do have some Python and JS stuff as well.
My question is: what are the pros/cons of using one machine with all the toolsets/dependencies?
Is there any good practice I'm missing? I would love to hear some opinions.
Yes, you can run multiple agents on a single machine. You might want multiple build agents to be able to run builds in parallel.
The biggest advantage is avoiding the upfront capital costs (physical servers, Microsoft server licensing, etc.), and you only need to set up the build machine once (install the toolsets/dependencies, etc.).
However, please note that builds are typically IO-constrained (disk/network read/write speeds) and also constrained by the memory/CPU consumed. So running too many parallel builds on one machine will actually degrade performance. It may also affect calls to TFS and possibly lead to time-outs.
Besides, you may need to install additional software components, or upgrade existing ones, on the build server, and that may require a reboot to take effect, which can interrupt builds in progress. Although you can try to limit this as much as possible, you could eventually end up with conflicting software as the projects advance, interrupting the existing builds.
So I recommend deploying separate build machines, with an agent installed on each of them, to distribute the load (e.g. 3 build machines with 3 build agents).

How to build and deploy Azure Cloud Service with multiple configurations in VSTS Release management?

We are using Team Services to maintain our web projects and Azure for hosting. Right now, there are several web roles (ASP.NET MVC) and worker roles which are hosted as cloud services. We are going to set up continuous integration and delivery for them.
As you know, Team Services build definitions suggest using the Azure Cloud Service template for building and the Azure Cloud Service Deployment task to deploy. We've tried it for a single cloud service and it works.
In our case, there are a web project (web role) and a scheduler (worker role) as separate cloud services, and they should be deployed together (in sequence); call that the DEV environment. But we have many more environments: dev, qa, ta, demo, preview, production, etc. Furthermore, each of them has slightly different web.config, ServiceDefinition.csdef and ServiceConfiguration.cscfg files. And it became a much more complicated task than just deploying one cloud service.
Questions are:
Should we build dozens of cloud service packages (artifacts) and later decide which of them to deploy? Could you suggest a proper way to do this? (In most cases it will only be the Dev environment, and we would waste time and resources building the other artifacts.)
Would it be better to build one common artifact and later replace all the configurations for the specific environment? (This is a more complicated task, because the cloud service package is zipped with the preconfigured ServiceDefinition and ServiceConfiguration.)
What is the best way to replace configuration tokens (web.config, ServiceConfiguration, etc.) at deployment time, or should it be done while the projects are being built?
I would be grateful if you suggest any best practices.
For an Azure cloud project, it's better to apply the environment-specific changes to the project before building, so you can build the project during the release process.
To deploy to the corresponding environment, you can configure an artifact filter with a build tag.
For example:
Add a file (e.g. JSON, XML or TXT) to the project that is used to determine which environments the release should deploy to.
Add a PowerShell task to the build definition to read the data from that file (step 1) and add build tag(s) through the logging command (Write-Host "##vso[build.addbuildtag]build tag").
Add a Publish Build Artifacts task to upload the source files.
Create a release definition, link the artifacts, and add multiple environments.
Configure pre-deployment conditions for each environment: Enable artifact filters => Select artifact => Specify build tags.
Add tasks (e.g. Visual Studio Build) and variables for each environment to deploy to the corresponding environment.
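A minimal sketch of the first two steps (the file name deploy-env.txt is an assumption; logging commands are picked up from anything the task writes to standard output, so plain shell works as well as PowerShell):

```shell
# Step 1: a file checked into the project names the target environment.
echo "qa" > deploy-env.txt

# Step 2: during the build, read the file and tag the build via a logging command.
TAG="$(cat deploy-env.txt)"
echo "##vso[build.addbuildtag]${TAG}"
```

The release's artifact filter on build tags then lets only the matching environment pick up this build.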
On the other hand, regarding replacing values, there are many options, such as Replace Tokens and XDT Transform.
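For example, a bare-bones token-replacement step at deployment time could look like this (the __DbServer__ token syntax and the file contents are illustrative; the Replace Tokens extension follows a similar pattern):

```shell
# A config file containing a token to be filled in per environment.
echo '<connectionStrings server="__DbServer__" />' > web.config

# Replace the token with the environment-specific value.
sed 's/__DbServer__/abc-prod/' web.config > web.config.tmp && mv web.config.tmp web.config
```

In a real release you would take the replacement value from an environment-scoped release variable instead of hard-coding it.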

Create a local end-to-end development environment

I am beginning to use Terraform to control staging and production environments on various cloud providers (AWS, for example). Is there a way to use Terraform configuration files to create a local development environment for, say, a multi-tier application environment, or do I have to maintain a separate configuration (via, say, Vagrant) for my development needs?
This may not be too difficult to do with two tools since most components are dockerized, but it would be nice to have a single configuration.
The problem with a cross-platform orchestration tool is that it ends up catering to the lowest common denominator of features available on all clouds. Terraform just describes the infrastructure using the resources available from the desired provider.
So, long story short, you'll need separate configurations; but if you're deploying to a cloud, there is nothing stopping you from using that configuration for QA or acceptance-test environments.
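To make that concrete, the provider-specific resource types are what force the split; a hypothetical pair of root modules might look like this (all names and values are placeholders):

```terraform
# prod/main.tf - production on AWS
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

# dev/main.tf - local development against the Docker provider
resource "docker_container" "web" {
  name  = "web-dev"
  image = "nginx:latest"
}
```

The two resource blocks can't be merged into one configuration that "just works" locally and in the cloud, because each resource type belongs to exactly one provider.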
