I have a build matrix for my tests. There are two builds that I don't want to run concurrently: they both hit a remote server, and hitting it at the same time would cause problems. Is there a setting to disable concurrency in the build matrix?
There certainly is: you can dial down the maximum number of jobs that run per repository: http://blog.travis-ci.com/2014-07-18-per-repository-concurrency-setting/
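If you prefer the command line, the same limit can reportedly be set with the Travis CLI. The setting key below is quoted from memory of that post, so double-check it there:

```sh
# Allow at most one concurrent job for this repository (assumed setting key)
travis settings maximum_number_of_builds --set 1
```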
I am working on a robotic process automation project where I need to automate 10 different process flows. The robot needs to run 24/7. My solution is hosted in the AWS cloud, and I have 10 cloud machines to run the scripts.
I have a master Jenkins job which retrieves the list of automated jobs to execute from a database, and I have 10 different jobs configured on the Jenkins server. The number of jobs I need to run at the same time varies: it may be N different scripts, or N instances of the same script with different data combinations.
The challenge I am facing is that, in the post-build action, I am not able to control the list of scripts/jobs to run based on the output of the Jenkins master job. Is there any way to run only the jobs I need, based on the output of a build command?
I was able to achieve it using the Jenkins Flexible Publish plugin.
I have many long-running jobs that take almost a day to complete. Splitting them is not possible. If the network fails, all progress is lost.
How can a slave survive disconnections?
EDIT 1
I have around 300 slaves running on Windows, tied to a single Jenkins instance.
Slaves are connected using the manual method java -jar slave.jar -jnlpUrl <serverUrl> <slaveName>. I cannot run them as a regular Windows service because some tests manipulate GUI elements and require a real interactive session; otherwise the tests fail.
EDIT 2
According to the Jenkins Cookbook, I should be using the Cygwin + OpenSSH approach instead of a custom script with the JNLP connector. Could this improve stability?
Jenkins was not originally designed for builds to survive server or slave restarts. There is a CloudBees Long-Running Build plugin that supports long-running builds, but unfortunately it is available only to enterprise users and is still in beta.
I didn't find any free alternative, so I would suggest trying to improve your network stability and splitting up your long-running jobs. At the very least, you could divide your tests into logical groups (test suites).
Jenkins now has a Workflow plugin. It claims to handle server restarts and loss of connectivity with slaves.
From the link:
A key feature of a workflow execution is that it's suspendable. That is, while the workflow is running your script, you can shut down Jenkins or lose a connectivity to a slave. When it comes back, Jenkins will still remember what it was doing, and your workflow script resumes execution as if it was never interrupted. A technique known as the "continuation-passing style" execution plays a key role in achieving this.
(not tested at all)
Edit: copied from @Jesse Glick's comments:
Workflow is open source and available for anyone running Jenkins 1.580.1+ or later. CloudBees Jenkins Enterprise does include a checkpoint feature, but this is not necessary simply to have a build survive slave disconnections and Jenkins restarts: that is automatic
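For illustration, a minimal Workflow script might look like the sketch below (the Workflow DSL is Groovy; the node label and shell command are placeholders, not from the question):

```groovy
// Minimal Workflow script sketch; node label and command are placeholders.
node('long-running-slave') {
    // Work inside this step is durable: if Jenkins restarts or the slave
    // reconnects while it runs, the build resumes instead of failing.
    sh 'make full-test-suite'
}
```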
For one of our applications we are trying to automate the deployment process. We have end-to-end continuous integration implemented (build/test/package/report) for this application. Now we are trying to implement automated deployment. The application needs to be deployed to 2000 servers, with 50 clients under each server. Some of the components will be installed on the servers and some on the clients.
Tools used for CI: Jenkins, GitHub, MSBuild, NUnit, SpecFlow, WiX.
I have read about the difference between continuous delivery and continuous deployment, and I understand that continuous delivery means the code/change is proven ready to go live at any point in time, while continuous deployment means the proven code/change is automatically deployed to the production servers.
Most of the articles on the net explain how to automate the deployment part of continuous delivery/deployment to a single server (dev/staging/preproduction/production). None of them talk about deploying an application to a large number of servers and clients.
Now my questions are:
1) Is deploying an application to 2000+ servers and clients part of continuous deployment, or should it be handled outside of CI/CD?
2) If it can be handled within CI/CD, how do I model this in the Jenkins delivery pipeline and trigger the deployment to all the servers from the CI tool?
3) Is there any tool I can integrate with the CI tool to automate the deployment?
Thanks
I'd keep these two aspects separate:
deployment to a large number of servers (it doesn't matter whether the artifacts being deployed come from CI or not)
hooking such a deployment into CI to perform CD
For #1 - I'm sure there are professional IT tools out there that could be used. Check with your IT department. If the deployment doesn't require superuser permissions, or if you have such privileges (and the knowledge), you could also write your own custom deployment/management tools.
For #2 - CD doesn't really specify whether you should deploy to a single server, to a certain percentage of your production servers, or to all of them. Your choice should be based on what makes sense or is more appropriate for your particular context. How exactly is it done if you decide to go that way? That really depends on #1 - you just need to trigger the process from your CI server. It should be a breeze with a custom deployment tool :)
The key requirement (IMHO) in continuous deployment is process orchestration. Jenkins isn't an ideal tool for this, but you can write your own Groovy script wrapper or invoke the jobs remotely from another orchestration tool (see the sketch at the end of this answer). Another issue with Jenkins, at least for me, is that it is difficult to track progress.
I would model it as the following:
Divide the deployment process into logical levels, e.g. data centers -> applications -> pools, and create a wrapper for each level. This lets you see progress at the highest level in the main wrapper and drill down when needed.
Every wrapper should finish as SUCCESS only if ALL of its downstream jobs were successful; otherwise it should be UNSTABLE or FAILURE. That way there is no chance of missing a failure in the low-level jobs.
1 job per product/application/package
1 job to control a single sequence run. For example, I would use MCollective to run the installation jobs sequentially or in parallel on the selected servers.
1 wrapper job for every logical level
I would use:
MCollective - as mentioned above
Foreman to query Puppet and select the server list for each sequence run
For the package/application/artifact installation on the servers, I would prefer native OS tooling, e.g. yum on Linux servers. That lets you rely on its built-in installation verification.
I'm sure I missed something, but I hope this gives you an acceptable starting point.
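As a sketch of the "invoke the jobs remotely" option mentioned above: an external orchestrator can drive Jenkins jobs through the Jenkins REST API. The server URL, job names, and credentials below are placeholders, and error handling and CSRF crumbs are omitted for brevity:

```python
# Trigger Jenkins jobs from an external orchestrator via the REST API.
import time
import requests

JENKINS = "http://jenkins.example.com"   # placeholder server
AUTH = ("user", "api-token")             # placeholder credentials

def trigger_and_wait(job, poll=10):
    """Queue a Jenkins job, poll until it finishes, and return its result."""
    queued = requests.post("%s/job/%s/build" % (JENKINS, job), auth=AUTH)
    queued.raise_for_status()
    item_url = queued.headers["Location"]        # URL of the queue item

    # Wait for the queue item to turn into an actual build.
    while True:
        info = requests.get(item_url + "api/json", auth=AUTH).json()
        if "executable" in info:
            build_url = info["executable"]["url"]
            break
        time.sleep(poll)

    # Wait for the build to finish and report SUCCESS/UNSTABLE/FAILURE.
    while True:
        build = requests.get(build_url + "api/json", auth=AUTH).json()
        if not build["building"]:
            return build["result"]
        time.sleep(poll)

# Run the per-pool installation jobs in sequence, stopping on failure.
for job in ["deploy-pool-a", "deploy-pool-b"]:
    result = trigger_and_wait(job)
    print(job, "->", result)
    if result != "SUCCESS":
        break
```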
I am trying to implement continuous integration using Jenkins, and I came across the scenario below.
I have a build, say Build A, which is configured to run every hour. This job requires another process (an independent background Java process). But sometimes this background process does not respond, and we have to restart it in order to complete Build A without exceptions. If the process is down, we get console exceptions and the build fails.
I have found a solution for this:
Abort the current Build A and start Build B.
Trigger Build A after Build B succeeds.
But
What I am looking for is: if there is a console exception, pause this build and trigger Build B (which restarts the process), then resume Build A once Build B succeeds.
There is no easy known way to do that in Jenkins. It would be much easier to start (and possibly restart) the fixture process from the build itself. Perhaps even integrate it into your build/test tool, so that the CI job can be easily replicated and reproduced locally.
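A minimal sketch of that idea, not specific to Jenkins: have the build script itself babysit the fixture process and restart it if it dies mid-run. The fixture and test commands below are placeholders:

```python
# Start the background fixture, run the tests, restart the fixture on crash.
import subprocess
import time

FIXTURE_CMD = ["java", "-jar", "fixture.jar"]   # placeholder background process
TEST_CMD = ["make", "test"]                     # placeholder test command

def start_fixture():
    proc = subprocess.Popen(FIXTURE_CMD)
    time.sleep(5)  # crude wait for the fixture to come up
    return proc

fixture = start_fixture()
try:
    tests = subprocess.Popen(TEST_CMD)
    while tests.poll() is None:
        if fixture.poll() is not None:          # fixture died: restart it
            fixture = start_fixture()
        time.sleep(2)
    raise SystemExit(tests.returncode)          # propagate test result to CI
finally:
    fixture.terminate()
```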
Use case:
The CI server polls some VCS repository and runs the test suite for each revision. If two or more revisions are committed, even within a relatively small time interval, I want the CI server to put each of them in a queue, run the tests for each, store the results, and never run the tests again for those commits. I also don't want the CI server to launch jobs in parallel, to avoid performance issues and crashes in case of many simultaneous jobs.
Which CI server is able to handle this?
My additional, less important requirement is that I use Python, so software written in Python is desirable. For that reason I looked at the Buildbot project, and I would especially like to see reviews of this tool: is it usable in general, and can it replace the most popular solutions like Travis or Jenkins?
I have used Jenkins to do this (with Subversion mainly, C/C++ builds, and also Bash/Python scripted jobs).
The easiest (and default) handling of VCS/SCM changes in Jenkins is to poll for changes at a set interval. A build is triggered if there is any change. With this method, more than one commit may be included in a build (e.g. if two commits land close together). Jenkins shows links back to the SCM and the SCM updates made, as well as build logs, and you can easily configure build outputs and test result presentation.
https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-Buildsbysourcechanges
What VCS/SCM are you using? Jenkins interfaces with a good few VCS/SCM systems:
https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Sourcecodemanagement
This question answers how to make Jenkins build on every subversion commit:
Jenkins CI: How to trigger builds on SVN commit
TeamCity is free (up to a certain number of build configurations and build agents) and feature-rich. It's very easy to install and configure, although it may take some time to find your way through the wealth of options. It is extremely well documented: http://www.jetbrains.com/teamcity/documentation/
It is written in Java but supports many tools natively and others through command-line execution, so you can build anything with it that you want. (I use it mostly for Ruby.) It understands the output of many testing tools; if you're not using one of them maybe yours can emulate their output. It's quite extensible; it has a REST API and a plugin API.
It can be configured to build on each commit, or to build all of the commits that arrived in a given time period, or to trigger in other ways. Docs here: http://confluence.jetbrains.com/display/TCD8/Configuring+VCS+Triggers
By default it starts a single build agent and runs one build at a time on that agent. You can run more build agents for speed. If you don't want more than one build running on a machine, start only one build agent on each machine.
I don't want the CI server to launch jobs in parallel, to avoid performance issues and crashes in case of many simultaneous jobs.
In Buildbot you can limit the number of jobs running on a slave with the max_builds parameter, or with locks.
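A sketch of what that looks like in master.cfg (0.8-era slave terminology to match the question; slave names and passwords are placeholders):

```python
# Fragment of a Buildbot master.cfg limiting build concurrency.
from buildbot.buildslave import BuildSlave
from buildbot import locks

# At most one build at a time on this slave.
c['slaves'] = [BuildSlave("slave1", "password", max_builds=1)]

# Alternatively, one master-wide lock serializes builds across all builders:
# pass locks=[build_lock.access('counting')] to each BuilderConfig.
build_lock = locks.MasterLock("serialize-builds", maxCount=1)
```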
As for Buildbot and Python, you may coordinate parallel builds by configuration, for example:
Modeling Parallel Processes: Steps
svn up
configure
make
make test
make dist
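For illustration, a build factory running those steps might look like the sketch below (0.8-era Buildbot API; the repository URL is a placeholder):

```python
# Build factory for: svn up, configure, make, make test, make dist.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import Configure, Compile, Test, ShellCommand

factory = BuildFactory()
factory.addStep(SVN(svnurl="svn://example.com/repo/trunk", mode="update"))
factory.addStep(Configure())                          # runs ./configure
factory.addStep(Compile())                            # runs 'make all'
factory.addStep(Test())                               # runs 'make test'
factory.addStep(ShellCommand(command=["make", "dist"]))
```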
In addition, you can also try using a Triggerable scheduler for a builder which performs steps U, V, W.
From the docs:
The Triggerable scheduler waits to be triggered by a Trigger step (see Triggering Schedulers) in another build. That step can optionally wait for the scheduler's builds to complete. This provides two advantages over Dependent schedulers.
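A sketch of wiring this up in master.cfg (scheduler and builder names are placeholders; c and factory come from the surrounding configuration):

```python
# Triggerable scheduler fired by a Trigger step in another build.
from buildbot.schedulers.triggerable import Triggerable
from buildbot.steps.trigger import Trigger

# Scheduler that only runs when explicitly triggered.
c['schedulers'].append(
    Triggerable(name="run-uvw", builderNames=["builder-uvw"]))

# In the upstream builder's factory: fire the scheduler and wait for it.
factory.addStep(Trigger(schedulerNames=["run-uvw"], waitForFinish=True))
```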
References:
how to lock steps in buildbot
Coordinating Parallel Builds with Buildbot
There is a Throttle Concurrent Builds Plugin for Jenkins and Hudson. It allows you to specify the number of concurrent builds per job. This is what it says on the plugin page:
It should be noted that Jenkins, by default, never executes the same Job in parallel, so you do not need to actually throttle anything if you go with the default. However, there is the option Execute concurrent builds if necessary, which allows for running the same Job multiple times in parallel, and of course if you use the categories below, you will also be able to restrict multiple Jobs.
There is also GitLab CI, a very nice modern Ruby project that uses runners to distribute builds, so you could, I guess, limit the number of runners to one to get the effect you are after. It's tightly integrated with GitLab, so I don't know how hard it would be to use it as a standalone service.
www.gitlab.com
www.gitlab.com/gitlab-ci
To run the tests only once for each revision, you can do something like this:
build
post-build:
check if the revision of the build is in /tmp/jenkins-test-run
if the revision is in the file, skip the tests
if the revision is NOT in the file, run the tests
if we ran the tests, write the revision to /tmp/jenkins-test-run
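A minimal sketch of that post-build check in Python. SVN_REVISION is assumed to be set by the Jenkins Subversion plugin; the test command is a placeholder:

```python
# Skip the test run if this revision was already tested.
import os
import subprocess

STATE_FILE = "/tmp/jenkins-test-run"
revision = os.environ["SVN_REVISION"]   # assumption: set by the SCM plugin

tested = set()
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        tested = {line.strip() for line in f}

if revision in tested:
    print("revision %s already tested, skipping" % revision)
else:
    subprocess.check_call(["make", "test"])   # placeholder test command
    # check_call raises on failure, so only completed runs are recorded.
    with open(STATE_FILE, "a") as f:
        f.write(revision + "\n")
```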