For one of our applications we are trying to automate the deployment process. We have end-to-end Continuous Integration (Build/Test/Package/Report) implemented for this application, and now we are trying to implement automated deployment. The application needs to be deployed to 2000 servers, with 50 clients under each server. Some of the components will be installed on the server and some on the clients.
Tools used for CI: Jenkins, GitHub, MSBuild, NUnit, SpecFlow, WiX.
I have read about the difference between continuous delivery and continuous deployment and understood that continuous delivery means the code/change is proven ready to go live at any point in time, while continuous deployment means the proven code/change is automatically deployed to the production servers.
Most of the articles on the net explain how to automate the deployment part of continuous delivery/deployment to a single server (DEV/Staging/Preproduction/Production). None of the articles talks about deploying the application to a large number of servers and clients.
Now my questions are:
1) Is deploying the application to 2000+ servers and clients part of continuous deployment, or should it be handled outside of CI/CD?
2) If it can be handled within CI/CD, how do I model this in the Jenkins delivery pipeline and trigger the deployment to all the servers from the CI tool?
3) Is there any tool that I can integrate with the CI tool to automate the deployment?
Thanks
I'd keep these 2 aspects separate:
deployment to a large number of servers (it doesn't matter if the artifacts to deploy come from CI or not)
hooking such deployment into CI to perform CD
For #1 - I'm sure there are professional IT tools out there that could be used. Check with your IT department. If deployment doesn't require superuser permissions, or if you have such privileges (and knowledge), you could also write your own custom deployment/management tools.
For #2 - CD doesn't really specify whether you should deploy to a single server, a certain percentage of your production servers, or all of them. Your choice should be based on what makes sense or is more appropriate for your particular context. How exactly is it done if you decide to go that way? It really depends on #1 - you just need to trigger the process from your CI. It should be a breeze with a custom deployment tool :)
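If you do go down the custom-tool route, a minimal sketch of such a fan-out script (assuming SSH/SCP access to the targets; the hostnames, artifact path and install command below are hypothetical placeholders) could look like this:

import subprocess
from concurrent.futures import ThreadPoolExecutor

SERVERS = [f"app{i:04d}.example.com" for i in range(2000)]   # hypothetical target list
ARTIFACT = "build/output/app-package.msi"                    # hypothetical artifact path

def deploy(host):
    # Copy the package to the target and run a (hypothetical) install command on it.
    subprocess.run(["scp", ARTIFACT, f"deploy@{host}:/tmp/"], check=True)
    subprocess.run(["ssh", f"deploy@{host}", "install-app /tmp/app-package.msi"], check=True)
    return host

# Limit concurrency so 2000 parallel copies don't overwhelm the network or the CI host.
with ThreadPoolExecutor(max_workers=20) as pool:
    for host in pool.map(deploy, SERVERS):
        print(f"deployed to {host}")

The CI job would then simply call a script like this after the package stage - that is the "trigger the process from your CI" part.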
The key requirement (IMHO) in continuous deployment is process orchestration. Jenkins isn't an ideal tool for this, but you can write your own Groovy script wrapper or invoke the jobs remotely from another orchestration tool. Another issue with Jenkins, at least for me, is the difficulty of tracking progress.
I would model it as follows:
Divide the deployment process into logical levels, e.g. Data centers -> Applications -> Pools, and create a wrapper for each level. That will allow you to see the progress at the highest level in the main wrapper and drill down as needed.
Every wrapper should finish as SUCCESS only if ALL downstream jobs were SUCCESSFUL; otherwise it should be UNSTABLE or FAILURE (a rough sketch of such a wrapper follows the list below). This way there is no chance of missing something in the low-level jobs.
1 job per product/application/package
1 job to control a single sequence run. For example, I would use MCollective to run the installation job sequentially/in parallel on the selected servers
1 wrapper job for every logical level
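To make the wrapper idea concrete, here is a rough, hypothetical sketch (not a prescribed implementation; the job names, parameters and credentials are placeholders) of a wrapper that triggers one downstream deployment job per pool through the Jenkins REST API and reports SUCCESS only if every downstream run succeeded:

import sys
import time
import requests

JENKINS_URL = "https://jenkins.example.com"
AUTH = ("ci-user", "api-token")          # Jenkins user + API token

def run_and_wait(job, params):
    # Trigger a parameterized job and block until it finishes, returning its result.
    queued = requests.post(f"{JENKINS_URL}/job/{job}/buildWithParameters",
                           auth=AUTH, params=params)
    queued.raise_for_status()
    queue_url = queued.headers["Location"]               # queue item URL (ends with a slash)
    build_url = None
    while build_url is None:                             # wait until the build leaves the queue
        item = requests.get(f"{queue_url}api/json", auth=AUTH).json()
        build_url = (item.get("executable") or {}).get("url")
        time.sleep(10)
    while True:                                          # wait until the build finishes
        build = requests.get(f"{build_url}api/json", auth=AUTH).json()
        if build["result"] is not None:
            return build["result"]
        time.sleep(30)

pools = ["pool-a", "pool-b", "pool-c"]
results = {pool: run_and_wait("deploy-pool", {"POOL": pool}) for pool in pools}

# SUCCESS only if every downstream run succeeded, as described above.
sys.exit(0 if all(r == "SUCCESS" for r in results.values()) else 1)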
I would use:
mcollective - as mentioned above
Foreman to query Puppet for the list of servers for every sequence run
The package/application/artifact installation on the server I would prefer to do with native OS tooling, e.g. yum on the Linux servers. That lets you rely on its built-in installation verification mechanism
I'm sure I missed something, but I hope this gives you an acceptable starting point.
I am a beginner user of Jenkins. I am trying to put a development process onto a DevOps pipeline that includes Jenkins, GitHub, SonarQube and IBM UCD.
It is not a very complicated deployment process, and it uses Windows machines.
There are three environments: QA, DEV and PROD.
I know that I need to install one IBM UCD agent for each of those three, but do I need to have three slaves in Jenkins as well, or could just one Jenkins master do the deployment for all three environments? Which way is better?
Usually for a complex deployment process companies use the "Master+Agent" scheme, but in your case there is no need to create an advanced Jenkins setup with master and agents if you can build everything on one host and you don't have any additional projects or restrictions.
From the official documentation:
It is pretty common when starting with Jenkins to have a single server which runs the master and all builds, however Jenkins architecture is fundamentally "Master+Agent". The master is designed to do co-ordination and provide the GUI and API endpoints, and the Agents are designed to perform the work. The reason being that workloads are often best "farmed out" to distributed servers. This may be for scale, or to provide different tools, or build on different target platforms. Another common reason for remote agents is to enact deployments into secured environments (without the master having direct access).
For additional information you can read the following articles: this and this.
I feel it's a little crazy that I couldn't find anything along these lines, especially as it's an incredibly simple requirement: is there a way to deploy from Jenkins using SSH/SCP, yet write only one instance of a transfer-set/exec script?
As it stands, deploying to servers is kind of INSANE in that I need to create a new "Deploy to SSH" task, choose a different server from the dropdown and then copy/paste all transfer-sets and execs from the previous entry. Then do it again. And again. And again.
There must be a better way?
This may not be a short-term, immediate solution to your question (in the long run, though, it can be used).
Your requirement sounds to me like you need a configuration management tool. You could use Chef, Puppet or Ansible, and the automation of this deployment can be done with Jenkins CI.
One example of how to deploy an application on JBoss using Ansible:
- name: Deploy a hello world application
  jboss: src=/tmp/hello-1.0-SNAPSHOT.war deployment=hello.war state=present
Of course, this will require installing Ansible and a little bit of initial configuration. Ansible is the simplest of these deployment mechanisms.
Check this for more details - http://docs.ansible.com/ansible/intro.html
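If Jenkins drives the deployment, the build step can boil down to invoking ansible-playbook; as an illustrative sketch (the playbook and inventory file names here are made up):

import subprocess

# Fail the Jenkins build if the (hypothetical) playbook fails on any host.
subprocess.run(
    ["ansible-playbook", "-i", "inventory/production", "deploy-jboss.yml"],
    check=True,
)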
I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, and use some variant of, e.g., git-flow to have a develop branch. Once tagged, it goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin along with the Copy Artifact plugin https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> deploy to staging(1.6) -> demo(1.5) -> prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, only changing configuration pieces along the way. In the build job, the artifacts are created and then archived. In every job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the "Test and Deploy to UAT" job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline. http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that it can easily be propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging and check-ins always seem to be a touchy subject when it comes to continuous integration, so I won't go into it much. But a lot of people share the idea that "if different members of the team are working on separate branches, then by definition, they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint ... which gets the job done effectively, giving you the entire life of any individual artifact. A slightly more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in the corresponding upstream job.
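Inside Jenkins, the Copy Artifact plugin takes care of that. If you ever need the same "latest archived artifact from the upstream job" behaviour from an external script, a hedged sketch against the Jenkins REST API (the job name, artifact path and credentials are hypothetical) could be:

import requests

JENKINS_URL = "https://jenkins.example.com"
AUTH = ("ci-user", "api-token")
UPSTREAM_JOB = "test-and-deploy-to-uat"      # hypothetical upstream job name
ARTIFACT_PATH = "target/myapp.war"           # hypothetical archived artifact path

# lastSuccessfulBuild always points at the newest archived binary of the upstream job.
url = f"{JENKINS_URL}/job/{UPSTREAM_JOB}/lastSuccessfulBuild/artifact/{ARTIFACT_PATH}"
resp = requests.get(url, auth=AUTH)
resp.raise_for_status()

with open("myapp.war", "wb") as f:
    f.write(resp.content)
# ...then hand the downloaded binary to whatever performs the deployment for this stage.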
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you could let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.
Using a single server that contains only one Jenkins instance building for dev, test, etc.
Using a separate Jenkins on each of the dev and test servers to build and run tests.
Edit:
This is a step-by-step explanation of our deployment and release model:
Our server-side developers develop and commit/push their code to GitHub.
The CI server where Jenkins is located polls SCM, fetches the changes and builds (within the CI server), then runs the unit tests.
After the build, the artifacts are deployed to the repository server (Artifactory server).
Then the CI server deploys the latest successful build to the development server.
Then the client mobile developers can develop against the latest successful snapshot build of the server side.
This is our standard deployment process.
By the way,
We are also doing a test deployment to the test server via the CI server with a different job on Jenkins (same CI server), but this is handled/triggered manually.
Preproduction and production transitions are also done manually. (Preproduction and production are different servers, of course.)
Questions:
Integration tests should be run on the test server. How can I achieve that while building the system on the remote CI server instead of building it on the same machine (the test server)?
As a further step, what would be the best option for constructing a continuous delivery system?
Thanks
A good approach is to have a single CI system that builds the system continuously as development makes changes. Each build will also run all the unit tests and result in some kind of package that can be deployed. That can be further connected with automation that deploys the package and runs other tests, or the package can be used by, e.g., testers to further test the system.
Depending on your release model and branching strategy, as well as the type of system/product, this basic setup can be adjusted to fit your needs.
If you want more details, please explain what you build and how you release/deploy.
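For the specific question about running integration tests on the test server while building on the CI server, one common pattern is to have the CI job push the package to the test server and run the tests there over SSH. A minimal sketch, with a hypothetical host, service and test script:

import subprocess

TEST_SERVER = "deploy@test.example.com"        # hypothetical test server
PACKAGE = "build/myapp-1.0-SNAPSHOT.jar"       # hypothetical build output

# Copy the freshly built package to the test server and restart the (hypothetical) service.
subprocess.run(["scp", PACKAGE, f"{TEST_SERVER}:/opt/myapp/"], check=True)
subprocess.run(["ssh", TEST_SERVER, "systemctl restart myapp"], check=True)

# Run the integration test suite on the test server; a non-zero exit code fails the CI build.
subprocess.run(["ssh", TEST_SERVER, "/opt/myapp/run-integration-tests.sh"], check=True)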
Use case:
A CI server polls some VCS repository and runs the test suite for each revision. If two or more revisions were committed, even within a relatively small time interval, I want the CI server to put each of them in a queue, run the tests for each, store the results, and never run the tests again for those commits. And I don't want the CI server to launch jobs in parallel, to avoid performance issues and crashes in case of many simultaneous jobs.
Which CI server is able to handle this?
My additional, less important requirement is that I use Python, so it is desirable to use software written in Python. I have looked at the Buildbot project, and I would especially like to see reviews of this tool regarding whether it is usable in general and whether it can replace the most popular solutions like Travis or Jenkins.
I have used Jenkins to do this (with Subversion mainly, C/C++ builds, and also bash/python scripted jobs).
The easiest and default way of handling VCS/SCM changes in Jenkins is to poll for changes at a set interval. A build is triggered if there is any change. More than one commit may be included in a build (e.g. if 2 commits are made close together) when using this method. Jenkins shows links back to the SCM and the SCM update that was done, as well as the build logs, and you can easily configure build outputs and test result presentation.
https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-Buildsbysourcechanges
What VCS/SCM are you using? Jenkins interfaces with a good few VCS/SCMs:
https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Sourcecodemanagement
This question answers how to make Jenkins build on every subversion commit:
Jenkins CI: How to trigger builds on SVN commit
TeamCity is free (up to a certain number of builds and build agents) and feature-rich. It's very easy to install and configure, although it may take some time to find your way through the wealth of options. It is extremely well documented: http://www.jetbrains.com/teamcity/documentation/
It is written in Java but supports many tools natively and others through command-line execution, so you can build anything with it that you want. (I use it mostly for Ruby.) It understands the output of many testing tools; if you're not using one of them maybe yours can emulate their output. It's quite extensible; it has a REST API and a plugin API.
It can be configured to build on each commit, or to build all of the commits that arrived in a given time period, or to trigger in other ways. Docs here: http://confluence.jetbrains.com/display/TCD8/Configuring+VCS+Triggers
By default it starts a single build agent and runs one build at a time on that build agent. You can run more build agents for speed. If you don't want to run more than one build on a machine, only start one build agent on each machine.
I don't want the CI server to launch jobs in parallel, to avoid performance issues and crashes in case of many simultaneous jobs.
In Buildbot you can limit the number of running jobs on a slave with the max_builds parameter, or with locks.
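As a sketch of what that could look like in an 0.8.x-era master.cfg fragment (the slave name, password and lock name are placeholders; newer Buildbot versions use Worker instead of BuildSlave):

from buildbot.buildslave import BuildSlave
from buildbot.locks import MasterLock

c = BuildmasterConfig = {}

# max_builds=1 keeps this slave to one running build at a time.
c['slaves'] = [BuildSlave("example-slave", "pass", max_builds=1)]

# Alternatively (or additionally), a master lock shared by builders guarantees that only
# one of them runs a build at a time, even across different slaves.
build_lock = MasterLock("serialize-builds", maxCount=1)
# ...then pass locks=[build_lock.access('exclusive')] to the relevant BuilderConfig(s).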
As for Buildbot and Python, you can coordinate parallel builds through configuration, for example:
Modeling Parallel Processes: Steps
svn up
configure
make
make test
make dist
In addition, you can also try using a Triggerable scheduler for your builder which performs steps U, V, W (see the sketch after the quoted docs below).
From the docs:
The Triggerable scheduler waits to be triggered by a Trigger step (see
Triggering Schedulers) in another build. That step can optionally wait
for the scheduler's builds to complete. This provides two advantages
over Dependent schedulers.
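A minimal, hypothetical master.cfg fragment showing the pattern (the builder and scheduler names are made up):

from buildbot.schedulers.triggerable import Triggerable
from buildbot.steps.trigger import Trigger
from buildbot.process.factory import BuildFactory

# A scheduler that only runs when another build triggers it.
c['schedulers'].append(Triggerable(name="run-uvw", builderNames=["uvw-builder"]))

# In the upstream builder's factory: trigger "run-uvw" and wait for it to finish.
upstream = BuildFactory()
upstream.addStep(Trigger(schedulerNames=["run-uvw"], waitForFinish=True))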
References:
how to lock steps in buildbot
Coordinating Parallel Builds with Buildbot
There is a Throttle Concurrent Builds Plugin for Jenkins and Hudson. It allows you to specify the number of concurrent builds per job. This is what it says on the plugin page:
It should be noted that Jenkins, by default, never executes the same Job in parallel, so you do not need to actually throttle anything if you go with the default. However, there is the option Execute concurrent builds if necessary, which allows for running the same Job multiple times in parallel, and of course if you use the categories below, you will also be able to restrict multiple Jobs.
There is also GitLab CI, a very nice modern Ruby project that uses runners to distribute builds, so you could, I guess, limit the number of runners to 1 to get the effect you are after. It's tightly integrated with GitLab, so I don't know how hard it would be to use it as a standalone service.
www.gitlab.com
www.gitlab.com/gitlab-ci
To run the tests only once for every revision, you can do something like this:
build
post-build
check if the revision of the build is in /tmp/jenkins-test-run
if the revision is in the file, skip the tests
if the revision is NOT in the file, run the tests
if we ran the tests, write the revision to /tmp/jenkins-test-run
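As a rough sketch of that post-build step in Python (assuming the Subversion revision is exposed to the build as the SVN_REVISION environment variable, which Jenkins sets for Subversion checkouts; the test command is a placeholder):

import os
import subprocess

SEEN_FILE = "/tmp/jenkins-test-run"
revision = os.environ["SVN_REVISION"]

# Collect the revisions we have already tested.
seen = set()
if os.path.exists(SEEN_FILE):
    with open(SEEN_FILE) as f:
        seen = {line.strip() for line in f}

if revision in seen:
    print(f"revision {revision} already tested, skipping")
else:
    subprocess.run(["./run-tests.sh"], check=True)    # run the test suite (placeholder)
    with open(SEEN_FILE, "a") as f:                   # remember this revision
        f.write(revision + "\n")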