VMware or Docker containers for TeamCity CI

We have a setup of 200 VMs as build agents working with TeamCity. We are thinking of moving to Docker to save on the VMware licensing costs. Does anyone have prior experience with which provides better performance?
My goal is not to compromise on performance, but if Docker gives even the same performance as VMware, we'll switch to Docker.
My build agent VMs run either Windows or Ubuntu. Builds on Linux mainly use Python, and the Windows systems mainly use Visual Studio (different versions). We'll be doing performance tests ourselves, but I want to know whether someone has done this before and experienced any benefits.

I recently built my own ephemeral Docker container build agents. I've been using them for about 4 months, building 25+ different projects. The dependency management is much nicer than having different VMs running your build agents. You also have the option of running many build agents on a single VM. I did not see a performance decrease when I switched from VMs to Docker build agents. Using a swarm manager, it is very easy to spin up more or fewer agents depending on your needs.
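For a sense of what that scaling looks like, here is a minimal sketch using the Docker SDK for Python, assuming the agents run as a swarm service; the service name "teamcity-agent" is hypothetical:

```python
# Minimal sketch: scale build agents up or down on a Docker swarm.
# Assumes the agents run as a swarm service named "teamcity-agent" (hypothetical).
import docker

client = docker.from_env()

service = client.services.get("teamcity-agent")
service.scale(10)  # run ten agent replicas; pass a smaller number to scale down
print(f"Scaled service {service.name}")
```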
If you are interested, I also have a helpful script that automates authorizing a new agent from Docker. TeamCity has no way to automatically authorize an agent; it seems it has to be done through the API.
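For reference, a minimal sketch of what such an authorization script might look like against TeamCity's REST API; the server URL and credentials are placeholders:

```python
# Minimal sketch: auto-authorize newly connected TeamCity agents via the REST API.
# SERVER and AUTH are placeholders; adapt to your installation.
import requests

SERVER = "https://teamcity.example.com"  # hypothetical server URL
AUTH = ("admin", "password")             # placeholder credentials

# List agents that are connected but not yet authorized.
resp = requests.get(
    f"{SERVER}/app/rest/agents",
    params={"locator": "authorized:false"},
    headers={"Accept": "application/json"},
    auth=AUTH,
)
resp.raise_for_status()

for agent in resp.json().get("agent", []):
    # Flip the 'authorized' flag; the endpoint takes a plain-text body.
    requests.put(
        f"{SERVER}/app/rest/agents/id:{agent['id']}/authorized",
        data="true",
        headers={"Content-Type": "text/plain"},
        auth=AUTH,
    ).raise_for_status()
    print(f"Authorized agent {agent['name']} (id {agent['id']})")
```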
TL;DR: Docker build agents are much easier to use, have less overhead, and are ephemeral, with no (visible) performance decrease.

Related

CI/CD with Jenkins and Vagrant

I wanted to build a Jenkins server which would run tests of my Puppet code on Vagrant. The issue I found is that we already run our servers as VMs, either in VMware or AWS, and Vagrant will not work as another layer of virtualization.
Does anyone have an idea how I can create a test platform for my Puppet code? What I want to test is the deployment of manifests on the nodes themselves, i.e. if I deploy a web server class or make changes to it, I would like to check whether it affects/breaks the deployment of other classes.
The idea would be to iterate over all the classes/roles and see if the deployments pass. I would like to make this automatic and independent of our engineers. At the moment we are running manual tests with vagrant up, but there are too many roles to do that by hand.
Any ideas on how I can tackle this?
You can use either the Docker or the AWS provider for Vagrant.
In the case of the AWS provider, you need to set up rsync to get your environment onto the newly launched instance.
If your Vagrant scripts are robust, you can use the same scripts for both local deployment on your workstation and AWS/Docker deployment on the CI server.
There are drawbacks to these techniques: in the case of Docker, you are limited to the same kernel that the Jenkins server is running; in the case of AWS, you will incur additional costs. However, with AWS you don't need to allocate as many resources to your Jenkins server, so you might even save money this way, because you will be paying for extra VMs only while you are running your tests. Just make sure you shut them down when you are done (a cleanup sketch follows below).
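As a rough illustration of that cleanup step, here is a minimal sketch using boto3; the tag key and value are assumptions, so adapt the filter to however your instances are actually tagged:

```python
# Minimal sketch: terminate AWS test instances after a CI run, using boto3.
# The tag key/value ("Purpose" / "vagrant-ci-test") are hypothetical.
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as CI test machines.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Purpose", "Values": ["vagrant-ci-test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if instance_ids:
    ec2.terminate_instances(InstanceIds=instance_ids)
    print(f"Terminated: {instance_ids}")
```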
Is there any special reason why you want to use Vagrant? I'm not sure whether you are setting up your production environment with Vagrant or not.
If you are not bound to Vagrant, I would recommend thinking about using a Docker image to prepare a lightweight environment in which to run your setups and verifications.
When doing your tests, spin up a container from an image that contains your Puppet distribution and run your setups/tests inside it. If you have special kernel requirements, use a separate Jenkins slave/agent machine rather than executing jobs on the Jenkins master.
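As a rough sketch of that idea (using the Docker SDK for Python), something like the following would run a no-op smoke test in a throwaway container; the image name and manifest path are assumptions for illustration:

```python
# Minimal sketch: run a Puppet no-op smoke test inside a disposable container,
# using the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# "puppet/puppet-agent" is a hypothetical image containing your Puppet
# distribution; the manifests path is likewise illustrative.
output = client.containers.run(
    image="puppet/puppet-agent",
    command=["apply", "--noop", "/manifests/site.pp"],
    volumes={"/path/to/your/manifests": {"bind": "/manifests", "mode": "ro"}},
    remove=True,  # dispose of the container after the run
)
print(output.decode())
```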
If you are not sure how to get started using Jenkins with Docker, have a look at the examples section of the Jenkins documentation. The provided examples show the declarative pipeline syntax, which is still fairly new. Also consider the collapsed "Toggle Scripted Pipeline" sections, which show the Groovy pipeline scripts that are a lot more forgiving for Jenkins pipeline beginners.
Those should be good pointers for getting started with running and testing your Puppet scripts inside Docker. For building and using a Docker image, there are more than enough tutorials out there.
Let me know if this was a hint in the right direction or if I misinterpreted your question.

Advantages/Disadvantages of Running Jenkins Slaves for Dev/Test/Prod?

Let's start by agreeing that we want to adhere to typical Docker/DevOps principles. Therefore, we want to keep tasks isolated, configurations version-controlled, and overall customization to a minimum.
The Landscape:
Jenkins is being used as the CI/CD tool on your cloud instance of choice.
The Plan:
Create separate instances for test/staging/prod, each with Docker installed
Spin up Jenkins slave containers on each instance, controlled by the Jenkins master
When a commit is pushed to the 'test' branch, the Jenkins master sends the task to the 'Test' slave, which ultimately spins up a version of the application
Similarly, after tests run successfully and code is pushed to the staging or prod branches, Jenkins has the branch-respective slave build the application.
The Question(s):
What is wrong with this approach?
What could be improved in this approach?
There are a few questions you should ask yourself when taking this approach; a lot of them are covered in this blog post.
The final paragraph suggests exposing the Docker socket to the CI container, allowing you to build images on the host machine instead of inside the CI container, saving you from a lot of the pain that comes with running Docker in Docker.
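As an illustration of the socket-sharing idea (not the blog post's exact setup), here is a minimal sketch using the Docker SDK for Python; the image and ports are the stock Jenkins defaults, and the volume name is illustrative:

```python
# Minimal sketch: start a Jenkins CI container with the host's Docker socket
# mounted in, so builds it launches run on the host daemon rather than in
# Docker-in-Docker.
import docker

client = docker.from_env()

jenkins = client.containers.run(
    image="jenkins/jenkins:lts",
    detach=True,
    ports={"8080/tcp": 8080, "50000/tcp": 50000},
    volumes={
        # Share the host daemon's socket with the CI container.
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        # Named volume for Jenkins state (illustrative name).
        "jenkins_home": {"bind": "/var/jenkins_home", "mode": "rw"},
    },
)
print(f"Jenkins container started: {jenkins.short_id}")
```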
Another question you should ask is which orchestration service to use for controlling the master and slave containers. I had a great time following this blog post by Stelligent to quickly create everything I needed on AWS ECS using a CloudFormation stack, but other solutions are obviously an option.
So all in all, I don't see anything wrong with your approach, as long as you exercise caution and follow best practices.
Good luck.

Dockerizing Jenkins builds - slaves as containers or builds as containers?

I'm trying to figure out the best strategy for containerizing builds in a Jenkins CI/CD infrastructure using Docker. From what I see, I have two options:
(1) Use ephemeral slaves that get provisioned on-demand on Docker hosts using the Docker Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
Once the build completes, the slave is disposed of. As a consequence, only one build ever runs on a given slave.
(2) Use static slaves (e.g. VMs) that run builds inside Docker containers using the CloudBees Docker Custom Build Environment Plugin: https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin As a consequence, multiple (isolated) builds can run on a single slave.
What are the main advantages/disadvantages of one approach over the other? When and why should I choose one over the other? This is not at all obvious to me.
I suspect builds are lighter weight than slaves, so for a CI/CD infrastructure orchestrating a large end-to-end pipeline with many jobs running, (2) would be more scalable: each Jenkins slave incurs at least two threads on the master node.
Edit
My preference is option 1 (ephemeral slaves) with the Docker plugin.
With this plugin, you declare your build images in the global Jenkins settings, and you can assign labels to your Docker images.
In your job, you just have to use the relevant labels, and the Docker plugin will create the corresponding slave in a new container.
With the Docker plugin, Jenkins will spin up a new slave in a few seconds, so even if you're using a pipeline with a lot of stages, it will work fine.
This is what I'm going to implement at Forgerock (my company):
2 powerful bare-metal machines (with SSDs, 32 cores, and 1 TB of RAM)
The Jenkins Docker plugin
Maven artifact caching using Artifactory (so as not to download the Internet)
Docker containers that use a local Maven cache (so I'm sure not to pick up an old/odd Maven artifact)
I did a PoC on a small bare-metal machine and it works well :)
If you are using ephemeral slaves without Maven caching, performance can become a problem; a sketch of the cache mount follows below.
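As an illustration of that caching idea (a sketch, not the exact setup described above), a build container can mount a shared local repository from the host so each fresh agent doesn't re-download every artifact; the image name and host paths here are assumptions:

```python
# Minimal sketch: run a Maven build in a disposable container with a shared
# local repository mounted from the host, using the Docker SDK for Python.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="maven:3-jdk-8",                   # illustrative build image
    command=["mvn", "-B", "clean", "verify"],
    working_dir="/workspace",
    volumes={
        "/srv/ci/workspace": {"bind": "/workspace", "mode": "rw"},
        # Shared local repository: survives the container, avoids re-downloads.
        "/srv/ci/m2-cache": {"bind": "/root/.m2", "mode": "rw"},
    },
    remove=True,  # dispose of the container after the build
)
print(logs.decode())
```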
Regarding Jenkins plugins, there is a new one developed by Nicolas De Loof: the Docker Slaves plugin.
I still have to try this new plugin.

Jenkins & .NET Build server

I would like to set up, from scratch, a build server for .NET applications using Jenkins. Please note that I'm new to Jenkins CI.
Several Questions:
1) How should I decide on the build server specs? Apart from the OS, which would be Windows Server 2012, how should I decide on the RAM, CPU, and HDD space?
2) Should Jenkins sit on the build machine or not? What is the recommended approach? I understood that the build server should be isolated from the Jenkins master.
3) How do I decide on the master/slave approach? When should I use only a master, and when should I use a master and one or more slaves?
4) How would you recommend running the build and deployment tasks in Jenkins CI: using NAnt/Python or another scripting language?
Thanks, and sorry for the ignorance :)
Responding to each in turn:
You can run Jenkins as a Windows service (instructions here), and the machine can be a VM, so it doesn't have to be huge.
a) RAM and CPU: I'll put these together; they depend on how many jobs you plan to have running at the same time. The default number of build executors is 3 but can be increased as a global config change.
b) HDD: This depends on how many jobs you plan to have. Jenkins checks out the source code (as well as the compiled output) to its home directory on a per-job basis. This can get big. I would also recommend using the ThinBackup plugin to back up the Jenkins configuration.
Jenkins is the build machine. A vanilla installation of Jenkins is the master. In my experience, you will not need a separate slave machine unless you need to do native builds on other platforms or have LOTS and LOTS of jobs. I've seen single masters running happily with hundreds of jobs.
Further to 2. above, I suggest you start with a master and set up a slave later if you really need one.
As you have stated you are building .NET applications, you can simply install the MSBuild plugin, which should serve you well. Builds for .NET applications in Jenkins are freestyle builds, so you will often be using Windows batch build steps as well. This blog is also a great resource on Jenkins in a .NET environment.

Jenkins CI: should I have a server for Jenkins and a dedicated slave for building?

I am using Jenkins for CI,
I've heard that I should have a dedicated server and a dedicated slave for running Jenkins and building tasks, respectively.
Is this true?
I can understand this if the server is not powerful enough to handle both Jenkins itself and the build tasks, but is there any defined technical reason for it?
Best practice is to have a separate machine for the Jenkins server and not to use it for builds at all.
This has nothing to do with CPU power or memory resources: a build machine should have a predefined configuration, and Jenkins should not be part of it (Jenkins' requirements may even conflict with those of the build machine).
You should be able to boot/clone/upgrade/restore/trash the build machine without any impact on Jenkins.
Of course you can settle for a single machine if your resources are limited, but if you are serious about build automation, Jenkins should have its own server.
You probably don't need dedicated hardware or a dedicated VM to run a Jenkins server, because the actual Jenkins process (with no builds running) uses very few resources. But it all depends on what you want to accomplish with your Jenkins setup.
Do you want to run continuous builds across multiple platforms for multiple projects? Then using a master with slaves is the only way to go. If, on the other hand, you're running fairly simple builds for just a few projects, then you only need one machine to run the builds and the Jenkins process.
You can configure Jenkins to have multiple builds running concurrently so if you have a quad-core machine, you can safely run 2 builds and possibly a third once you analyze resource usage.
At my last gig, I used a quad-core machine with 8GB RAM to run:
Jenkins running Selenium builds
VirtualBox VM with Windows XP
Two instances of Tomcat each with two applications deployed.
And the machine still had more to spare.