Jenkins parameterized job that reuses old build with same parameters - jenkins

Background:
We have a number of Jenkins top-level jobs that use (and share) other jobs as a sort of subroutine. To control the overall flow we use the Jenkins Parameterized Trigger Plugin.
The top-level jobs then gather test reports, build reports, etc. from the sub-builds and conveniently publish them in one go. It's working really well.
Problem at hand: Each of the top-level jobs is started with a number of parameters, and only a selection of these is passed on to the sub-jobs. For some sub-jobs, the parameters are the same as they were some time ago, when the sub-job was last called from this top-level job, but our top-level script isn't aware of this. In essence, we waste build time rebuilding the sub-job with the same parameters.
In a perfect world the Parameterized Trigger Plugin would have an option like
Do not rebuild job if identical parameters (and configuration unchanged).
which would perform the following steps:
Compare the build parameters of all kept builds of the given job to the current parameters.
If a matching build is found and the job configuration is unchanged since that build, set up an environment variable pointing to the old build found above.
If no matching build is found, or the job configuration has changed since that build, perform the build as usual.
Unfortunately it does not seem to exist, nor can I find an alternative plugin that provides the functionality I seek.
Groovy to the rescue?
This is where I guess the Scriptler Plugin and a Groovy script would come in handy: they would allow me to perform the check in the sub-job and then set an environment variable that I can use in the Conditional BuildStep Plugin to either perform the build as usual, or skip it and set up the build environment variables using the EnvInject Plugin.
My programming question: I'm new to Groovy, and to Java for that matter. I have lots of other programming experience (assembly, C, and scripts), though. I've searched for example scripts all over but haven't found anything remotely similar to what I want to do here. Any hints on how to accomplish this, including alternative takes on the functionality, would be highly appreciated!

You're going in the right direction already. As there's no ready-made plugin available, the best way to implement a custom solution is to use Groovy.
Conceptually, it is better to implement the build-or-don't-build decision on the triggering side (i.e., in the top-level job). Once a sub-job has been triggered, it is difficult (and awkward) to prevent its actual execution or to reuse previous results in case of identical parameters. (The latter basically means implementing memoization for your sub-jobs; it's an interesting feature as such -- AFAIK there's no plugin for that, but it could be implemented with some scripting in the sub-jobs.)
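As a starting point, here is a minimal system Groovy sketch of that check, run from the triggering side. The sub-job name 'sub-job' and the parameter map are hypothetical placeholders, and the script needs the Jenkins runtime, so treat it as a sketch rather than a drop-in solution:

```groovy
// System Groovy sketch (runs inside the Jenkins JVM).
// Hypothetical: look for a kept, successful build of 'sub-job'
// whose parameters match the ones we are about to trigger with.
import jenkins.model.Jenkins
import hudson.model.*

def wanted = [TARGET: 'arm', CONFIG: 'release']   // parameters we would pass (placeholders)
def job = Jenkins.instance.getItemByFullName('sub-job', Job)

def match = job.builds.find { build ->
    if (!build.isKeepLog() || build.result != Result.SUCCESS) return false
    def pa = build.getAction(ParametersAction)
    if (pa == null) return false
    // every wanted parameter must be present with the same value
    wanted.every { k, v -> pa.getParameter(k)?.value == v }
}

if (match != null) {
    println "Reusing ${match.fullDisplayName} -- no need to trigger"
    // e.g. expose the old build number via EnvInject for the report-gathering steps
} else {
    println "No matching build found -- trigger 'sub-job' as usual"
}
```

The "configuration unchanged" part of the check could be approximated by comparing the last-modified time of the job's config.xml against the candidate build's timestamp.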
Regarding your programming question: personally, I also started from a more embedded/C-ish background. From my experience, if you're planning to work with Jenkins more closely and for longer, then learning Groovy will definitely pay off. Initially I was reluctant to learn "just another scripting language", but Groovy has some very interesting concepts, and you will be able to control Jenkins in a much more flexible, powerful, and efficient way than by just using plugins or the external REST/CLI APIs. You will also be less dependent on finding and running too many plugins, which is a plus from an administration point of view.

Related

A basic question about continuous integration

This is not a programming question, but I don't know a more active forum, and besides, programmers are the best people to answer my question.
I am trying to understand the rationale behind continuous integration. On the one hand, I understand that it is good practice to commit your code daily before heading home, whether or not the coding and testing are complete. On the other hand, there is the continuous integration concept, where the minute something is committed, it triggers a build and all the test cases are run. Aren't the two things contradictory? If we commit whatever code is done daily, it will cause daily failed builds. Why don't we manually trigger builds once the coding and testing are complete?
Usually, saving your code daily is about making sure your work will not be lost.
Continuous Integration (CI), on the other hand, is about testing whether what you produced is OK. In the majority of projects, CI isn't applied to individual branches (e.g., feature, bugfix); it's applied to major branches (e.g., master, develop, releases). These branches aren't updated daily, since updating them requires a pull request and someone to approve that pull request.
The use case for running CI on individual branches (feature, bugfix) is to check, before merging a pull request into a major branch, that the tests pass and the code builds.
So, to sum up: yes, you need to commit your code daily, but you don't need to run CI on it daily.
I suggest you check out the Gitflow workflow: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow
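As an illustration, the "CI only on major branches" idea can be expressed directly in a declarative Jenkins pipeline (a sketch; the branch names and test command are placeholders):

```groovy
// Sketch: run the expensive test stage only on major branches,
// so daily commits on feature branches don't produce failed CI builds.
pipeline {
    agent any
    stages {
        stage('Tests') {
            when {
                anyOf { branch 'master'; branch 'develop' }  // example branches
            }
            steps {
                sh './run-tests.sh'   // placeholder test command
            }
        }
    }
}
```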
The answer is obvious.
1. Committing code: In general, code is committed only after it has been tested locally.
Consider Developer_A working on Component_A: they can commit with minimal verification, as their scope is only to develop Component_A.
Now imagine a complex system with 50 developers developing Component_B...Component_Z++.
If someone commits code without minimal testing, it will most probably give you a failed result.
Alternatively, developers might commit to a development branch; that altogether depends on the SCM strategy adopted in the project.
2. Continuous Integration test scope:
On the other hand, the integrator principally collects and combines the different pieces of code (software components) into one container and performs different tests.
Most importantly, the integrator needs to ensure that all the components developed by the different developers fit together and that, in the end, the software works as expected. To ensure that, the integrator has acceptance criteria, and to proactively prevent things that can go wrong, it is important to automate these criteria with the help of continuous integration.
Among all factors, it is important to give developers feedback on the quality of the software. It is economically best for the project to know about a bug as early as possible; hence continuous integration and DevOps.
In a complex system, it is worthwhile to have an automated watcher to catch the mistakes that sneak past developers.
3. Tools and automation:
To create a human-independent system, automation tools like Jenkins are helpful.
Based on the testing strategy, different testing levels can be performed with the help of automation tools.

Jenkins Plugin Management best practices

We are planning to implement Jenkins for the whole organization. We want to go with a single Jenkins instance, used by many teams, with a slave (agent) architecture. I want to know if there are any best practices for plugin management. Since teams will request different plugins, how can I manage these plugin installations?
Thanks in advance for all your help
I would install Docker on all agent machines and instruct teams to make use of Docker containers in their Pipelines as much as possible. That way you forego the need to install different programming languages and plugins on all your agents.
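For illustration, a declarative pipeline that builds inside a container could look like this (a sketch; the image name and build command are just examples):

```groovy
// Jenkinsfile sketch: the build runs inside a Docker container,
// so the agent machine only needs Docker installed.
pipeline {
    agent {
        docker { image 'maven:3.8-jdk-11' }   // example image
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'      // example build command
            }
        }
    }
}
```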
For plugins that really have to be added to Jenkins, I'd set up a test instance of your Jenkins server to try out each plugin and see whether it clashes with existing plugins. Try to keep the number of plugins low, encourage people to use only quality plugins that receive regular updates, and remove plugins you no longer need.
One issue you will encounter is that Jenkins has no (as far as I can find) authorization strategy for plugins. Basically, all plugins are available to everyone. This may be an issue if teams have different and conflicting requirements (e.g., a team not allowed to use SSH or HTTP requests). If you have a homogeneous approach to software development (code, infrastructure, tools, etc.), then it becomes a matter of scale only.
In a large organization, you may also have trouble merely finding maintenance windows for a single Jenkins instance. It also creates a single point of failure. Are you OK with that, or do you need high availability?
You may benefit from several masters (per business unit or product) and use the JCasC (Jenkins Configuration as Code) plugin to manage common configurations and make your life easier.
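As a hedged sketch, a minimal JCasC jenkins.yaml that several masters could share might look like this (all values are placeholders):

```yaml
# jenkins.yaml -- Configuration as Code sketch (placeholder values)
jenkins:
  systemMessage: "Managed by JCasC -- do not edit via the UI"
  numExecutors: 0          # build on agents only, not on the master
  mode: EXCLUSIVE
unclassified:
  location:
    url: "https://jenkins.example.com/"   # placeholder URL
```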

Karate API Test Debugging in Jenkins

This is sort of an open-ended question/request (hope that's allowed).
On my team we are using Karate API testing for our project, which we love. The tests are easy to write and fairly understandable to people without coding backgrounds. The biggest problem we're facing is that these API tests have some inherent degree of flakiness (since the code we're testing makes calls to other systems). When running the tests locally on my machine, it's easy to see where the test failed. However, we're also using a Jenkins pipeline, and when the tests fail in Jenkins it's difficult to see why/how they failed. By default we get a message like this:
com.company.api.OurKarateTests > [crossdock] Find Crossdock Location.[1:7] LPN is invalid FAILED
com.intuit.karate.exception.KarateException
Basically, all this tells us is the file name and starting line of the scenario that failed. We do have our pipeline set up so that we can pass in a debug flag and get more information. There are two problems with this, however: one is that you have to remember to include this flag in every commit you want information on; the other is that we go from having not enough information to far too much (reading through a 24 MB file of the whole build).
What I'm looking for is suggestions on how to improve this process, preferably without making changes to the Jenkins pipeline (another team manages this, and it will likely take a long time). Though if changing the pipeline is the only way to do this, I'd like to know that. I'm willing to "think outside the box" and entertain unorthodox solutions (like, posting to a slack integration).
We're currently on Karate version 0.9.3, but I will probably plan to upgrade to 0.9.5 as part of this effort. I've read a bit about the changes. Would the "ExecutionHook" thing be a good way to do this? I will be experimenting with this on my own a bit.
Have other teams/devs faced this issue? What were your solutions? Again we really love Karate, just struggling with the integration of it to Jenkins.
Aren't you using the Cucumber Reporting library as described here: https://github.com/intuit/karate/tree/master/karate-demo#example-report
If you do, you will get an HTML report with all traffic (and anything you print) in-line with the test steps, and of course error traces. Most teams find this sufficient for build troubleshooting; there is no need to dig through logs.
Do try to upgrade as well, because we keep trying to improve the usefulness of the logs, and you may see improvements if the failure was in a JS block or in karate-config.js.
Else, yes, the ExecutionHook would be a good thing to explore -- but I would be really surprised if the HTML report does not give you what you need.

Entering the second knowledge level of jenkins-scripted-pipeline

It is easy to find simple examples of declarative or scripted pipelines. But when you reach the point of going deep into scripting, you need much more information. When you're not familiar with the world of web, Java, and Groovy, you run out of questions to ask in order to go further. Googling turns up solutions with magical "hudson.model.Hudson..." strings, .methods tricks, or e.g. @NonCPS annotations. Those solutions work, but I'm searching for the bigger context, so I can work my way up from the bottom instead of down from the top. I'm looking for the knowledge that is obvious to insiders.
I'm looking for links/books/API references or introductions that provide an entrance to the knowledge around the Jenkins scripted pipeline, e.g. like this one =).
I am not looking for answers to the questions below from the Stack Overflow community; that would be too much! I am looking for links to documentation to get deep into the topic. I assume that to an insider, their insider knowledge is not obvious, so I'm stating some questions here to make clear what I would describe as insider knowledge.
Example questions:
Like "hudson.model.Hudson...": where do I get those magical dot-separated strings?
Is there a documentation of the Jenkins Api?
How can I find documentation for the classes and methods usable in Jenkins, e.g. something like X.Y.collect?
Is there a way to debug a pipeline?
Is there a faster way of testing code than running it in a pipeline every time?
How does the inner mechanism work?
Is the knowledge more about Groovy, or about Jenkins in general? Or is it Java?
Why does println MyArrayList.getClass() print class java.util.ArrayList, which is a Java class? Does Groovy inherit the types from Java, or does the pipeline inherit the types from Jenkins, which is Java?
...
Asking one question at a time:
where do I get those magical dot.separated strings?
Those are inner Java classes of the Jenkins core (or its plugins). For the former, Javadoc is available; the latter have their code on GitHub.
classes and methods usable in jenkins
Almost every Java and Groovy class/method is usable.
debug a pipeline?
You can only replay it, making changes on each run.
testing
You have two approaches: the LesFurets one and the real-unit one.
innards
A wide question with a wider answer: pipelines are loaded, transformed, and run as near-to-Groovy code (the @NonCPS annotation alters this behaviour).
Knowledge about Java, Groovy, and Jenkins all applies.
Groovy indeed extends Java; hence, both languages apply.
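A small sketch illustrating that last point (the pipeline skeleton is an example; the printed class is ordinary Java):

```groovy
// Scripted-pipeline sketch: Groovy values are plain Java objects.
node {
    def list = [1, 2, 3]              // Groovy list literal
    echo list.getClass().toString()   // class java.util.ArrayList -- a Java class
    echo doubled(list).toString()     // [2, 4, 6] -- collect is a Groovy (GDK) method
}

// @NonCPS: run as ordinary Groovy, outside the CPS interpreter
// (useful for GDK methods that are not CPS-safe).
@NonCPS
def doubled(items) {
    items.collect { it * 2 }
}
```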

Any Hosted CI service that natively support JUnit XML reports?

Does anybody know a good, solid CI service that provides the common feature of build parallelization BUT also supports JUnit reports?
The current ones we have looked at (semaphoreapp, circleCI, travisCI, ...) are good but relatively useless to us, as we have to manually investigate which tests failed, since when, and how often, thus negating a lot of the benefits of a hosted service.
Things that we're looking to know (and are all provided by JUnit / Jenkins):
If the build failed, which test cases caused it?
Total Number of Failures / Total Number of Tests (trends to better analyze things)
The individual track record of any test (so we know exactly when it broke, whether it's intermittent, ...)
You mentioned the most famous CI services but there are alternatives where you can get a higher customization level, like installing plugins, fine configuration, etc.
CloudBees and ClinkerHQ are both based on Jenkins, offered as a service. You can also get very useful metrics (coverage, failures, graphs, execution times, etc.) thanks to Jenkins plugins and SonarQube. I think Jenkins and SonarQube are a perfect match for you.
Notifications are very important too. You want to be notified when something is wrong. This feature is available on both.
Regards,
Antonio.
DISCLAIMER: I'm deeply involved in ClinkerHQ
