Using TFS 2018, I have a need to deploy (individually) different team projects to the same target server. Until update 2 is released, it is not possible to share deployment groups.
However, there must be a way to deploy different team projects to the same server.
My thought was maybe I have to create one release agent for each project, since I cannot share a deployment pool. However, I read a TechNet post from 2016 that says
It is recommended to limit the number of agents, in a build machine,
to the number of CPU cores it has.
Whether the article is being ambiguous and means build or release agents, or means build agents only, I don't know. My target server has 4 CPUs and I need the option to deploy any number of individual, independent Team Projects to the same server, so it's starting to look like creating a separate deployment agent per team project is not going to be feasible.
Again, until Update 2 is released it is not possible to share the same deployment group. My question is: how do I actually achieve this necessary outcome of independently deploying more than one Team Project to the same server?
Please remember that I am restricted to TFS. VSTS is not an option in my scenario.
That recommendation is really more for build servers. Build servers have very different requirements in terms of CPU/memory than release agents. Build agents are very memory and CPU intensive while running builds. You're not going to be running builds on your release agents.
The release agent is going to be idle the vast majority of the time. I don't see a problem with creating a second deployment group with a second agent install as a workaround until you can upgrade.
I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers you can focus on code rather than on dependencies and environment. But once you check in your code, you expect tools like TeamCity, Jenkins, or Bamboo to take care of the integration build, unit/integration tests, and deployment to target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle, though it comes into play when execution happens on the server. So why do I see articles listing it as one of the key pieces of DevOps?
I could be wrong, as I am not a DevOps guru, so please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and its dependencies in a single unit (a container) that can be run anywhere the Docker engine is installed. Why is this useful? For multiple reasons, but in terms of CI/CD it helps engineers separate configuration from code, cuts the time spent on dependency management, and can be used to scale (with the help of some other tools, of course). The list goes on.
For example: if I have a single code repository, in my build script I can pull in environment-specific dependencies to create a container that behaves functionally the same in each environment, because I'm building from the same source repository, yet each image can contain its own set of environment-specific certificates, configuration files, and so on.
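A rough sketch of how that might look (the image name, build argument, and paths here are made up for illustration, not anything from the original example): the build script passes the target environment into docker build so the right certificates and configuration get baked into the image.

    # Hypothetical Dockerfile fragment: copy in the config set for the environment being built
    ARG TARGET_ENV=dev
    COPY config/${TARGET_ENV}/ /app/config/

    # Build script: one image per environment, all from the same source tree
    docker build --build-arg TARGET_ENV=qa   -t myapp:qa   .
    docker build --build-arg TARGET_ENV=prod -t myapp:prod .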
Another example: if you have multiple build servers, you can create a set of utility Docker containers that your CI/CD pipeline pulls down to perform a particular operation during a stage. The only dependency on your build server then becomes the Docker engine, and you can change, add, or modify these utility containers independently of any operation performed by another utility container.
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is, and what Docker can do, is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and a given pattern may not apply to your particular use case.
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
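A minimal sketch of that flow, assuming a private registry at registry.example.com and an app called myapp (both placeholders):

    # On the build machine: build the image from the Dockerfile and push it
    docker build -t registry.example.com/myapp:1.0.42 .
    docker push registry.example.com/myapp:1.0.42

    # On the target host, which needs nothing but Linux and Docker: pull and run
    docker pull registry.example.com/myapp:1.0.42
    docker run -d --name myapp -p 80:8080 registry.example.com/myapp:1.0.42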
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid piling up dependencies and conflicts on your CI machine by building everything in Docker containers that carry the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs the new version. With containers that's not a problem.
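For instance, each build can run inside a throwaway container so the CI machine itself only needs Docker installed; the Maven images and project folders below are purely illustrative:

    # Build one service against an old toolchain and another against a new one,
    # without installing either toolchain on the CI machine
    docker run --rm -v "$PWD/legacy-service:/src" -w /src maven:3-jdk-8  mvn package
    docker run --rm -v "$PWD/new-service:/src"    -w /src maven:3-jdk-11 mvn package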
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can support new development, enhancement, and production support tasks easily. Docker containers define the exact versions of software in use; this means we can decouple a developer's environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won't Do Much for You: You can't achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as unit testing, integration testing, automated acceptance testing (AAT), static code analysis, code review sign-offs and pull request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won't Work: Leapfrogging as an IT strategy rarely works. More often than not, new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms: This is the right time to jump onto the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day one, and since then Linux support has been optimized to the point of having access to pint-sized Linux distributions as base images.
Agile is a Must to Achieve DevOps: DevOps is a must to achieve Agile. The point of Agile is adding and demonstrating value iteratively to all stakeholders; without DevOps you likely won't be able to demonstrate that value to stakeholders in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing your applications. Containers on a single host are isolated from each other yet share the same OS resources, which frees up RAM, CPU, storage, and so on. Docker makes it easy to package your application along with all of its required dependencies in an image. For most applications there are readily available base images, and you can create a customized base image as well; you build your own custom image by writing a simple Dockerfile. You can then ship that image to a central registry and pull it from there to deploy into various environments such as QA, STAGE, and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture once the build is ready. Initially the CI server (Jenkins) checks the code out from SCM into a temporary workspace, where the application is built. Once the build artifact is ready, you can package it as an image together with its dependencies; Jenkins does this by executing simple docker build commands.
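As a rough sketch (the registry address and image name are placeholders), the commands a Jenkins job runs at that point might look like this:

    # Run from the Jenkins workspace once the build artifact exists
    docker build -t registry.example.com/myapp:${BUILD_NUMBER} .
    docker push registry.example.com/myapp:${BUILD_NUMBER}

    # QA, STAGE, and PROD later pull and run the exact same image
    docker pull registry.example.com/myapp:${BUILD_NUMBER}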
Docker removes the well-known "matrix from hell" problem by making environments independent through its container technology. As an open-source project, Docker changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers: it also involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, and so on, and all of that comes under the CI/CD process.
DevOps is a culture, methodology, or set of practices for delivering development work very fast. Docker is one of the tools in that DevOps culture: it deploys applications as containers, which use fewer resources.
Docker just packages the developer's environment so it can run on another system, so the developer need not worry about code that works on their machine but fails in production because of differences in environment and operating system.
It just makes the code portable to other environments.
Is there a way (or some plugin/add-on) to add servers to an environment in TFS Release Management 2015?
I came from a team that used Octopus Deploy for DevOps. One thing that was extremely helpful was the ability to add multiple servers to an environment. Then, when you execute deployment steps on an environment, it applies those actions to all the servers that are part of the environment -- making deployments super easy. I have yet to find similar functionality in TFS Release Management and it's quite sad. They have a concept of an environment, but it's more like a "stage" than a logical/physical group of servers. To deploy the same step to multiple servers in an environment, you have to re-create the step multiple times or specifically write the names of all the servers in each step. Sad!
There isn't a feature that executes deployment steps against an environment and applies them to all of the servers in that environment.
But for web-based release management, you can provide a comma-separated list of machine IP addresses or FQDNs, along with ports, for many remote-deployment steps/tasks, such as PowerShell on Target Machines, IIS Web Deployment, and so on.
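For example, the Machines input of the PowerShell on Target Machines task takes a list along these lines (the host names are made up; 5985 and 5986 are the default WinRM HTTP/HTTPS ports):

    webserver01.contoso.local:5985, webserver02.contoso.local:5985, 192.168.10.30:5986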
There is an article that may benefit you: Environments in Release Management
Regarding server- and client-based release management, an environment there can include multiple servers, but you need to add the steps multiple times, once per server. I recommend that you use web-based release management.
I am going to be responsible for implementing TeamCity in our development environment pretty soon. I have searched around and found no real answers, so does anyone know if there is a "best practice" when it comes to a build server? Is it OK to install TeamCity on the same server as TFS, and is that even preferred? Or should I install it onto a dedicated server (which I can do)?
Thanks
I would think that Microsoft's own advice about TFS would also be relevant here:
You can host a build server on the same computer as your Team
Foundation Application-Tier Server, but, in most of these situations,
this build server should not host any build agents. Build agents place
heavy demands on the processor, which could significantly decrease the
performance of your application tier. In addition, you might want to
avoid running any build server components on the application tier
because installing Team Foundation Build Service increases the attack
surface on the computer.
So you might see unnecessary slowdowns in other operations like version control, work item tracking, etc.
Install it on its own server; you don't want it grinding TFS to a halt when it is performing a build.
You could install the TeamCity server on the TFS server, but use a separate machine if you can. Since it's the agents that do the actual work, it's those that definitely need to be on a machine other than TeamCity and TFS if possible.
How would you implement an automated build and deploy system for Windows services. Things to keep in mind:
The service will have to be stopped on the target machine.
The service entry in the Windows registry might need to be created/updated.
Some, but not all, of the services might need to be automatically started.
I am willing to use TFS for this, but it isn't a requirement. The target machines will always be development machines, we won't be doing this for production servers.
The automated build part can be done in multiple ways - TFS, TeamCity (what we use), CruiseControl.NET, etc. That in turn could call a build script in NAnt (again, what we use), MSBuild, etc.
As for stopping and installing a service remotely, see How to create a Windows service by using Sc.exe. Note that you could shell/exec out to this from your build script if there isn't a built-in task. (I haven't tried this recently, so do a quick spike first to make sure it works in your environment.)
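For reference, the sc.exe calls involved look roughly like this; the machine name, service name, and binary path are placeholders, and note that sc.exe requires the space after binPath= and start=:

    rem Stop and remove the existing service entry on the target machine
    sc \\DEVBOX01 stop MyService
    sc \\DEVBOX01 delete MyService

    rem Recreate the entry pointing at the freshly deployed binaries, then start it
    sc \\DEVBOX01 create MyService binPath= "D:\Services\MyService\MyService.exe" start= auto
    sc \\DEVBOX01 start MyService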
Alternatively, it's probably possible (and likely more elegant) in Windows PowerShell 2.0.
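A minimal PowerShell sketch, assuming the service already exists on the target, the admin share is reachable, and the account running the script has admin rights there (machine, service, and paths are placeholders):

    # Stop the remote service and wait for it to report Stopped
    $svc = Get-Service -ComputerName DEVBOX01 -Name MyService
    $svc.Stop()
    $svc.WaitForStatus('Stopped', '00:00:30')

    # Copy the new binaries over the admin share, then bring the service back up
    Copy-Item -Path .\build\output\* -Destination \\DEVBOX01\D$\Services\MyService -Recurse -Force
    $svc.Start()
    $svc.WaitForStatus('Running', '00:00:30')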