Is there any way to store metrics of a build in Jenkins so that they are both visible on the build page and accessible to other jobs, which can then take action based on them?
In my particular case I am running a matrix configuration job. It is performing 25 or so builds. Each build result is archived as an artifact. Each build result has a metric indicating its quality. It is currently stored in a file among the artifacts.
A second job needs to take the build artifact with the best quality metric. Currently it is copying the artifacts from all 25 builds for evaluation, and deletes everything but the best one. But this takes time as each build artifact is about 100MB.
Additionally, it would be nice if the build quality metric was published visually per build in Jenkins.
My current best idea is to first copy only the metric report file from each build, evaluate them, and then copy the complete artifacts from the best build only. Perhaps using the Groovy plugin or something similar.
But I am hoping there is a more integrated solution which also makes the metrics of each build more easily viewable on the build page.
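For illustration, the two-phase idea could be sketched as a system Groovy step in the second job. This is only a sketch under assumptions: the directory layout (`reports/<config>/timing.txt`, each file holding one number) and the way the winner is handed to a later Copy Artifact step are hypothetical and depend on your plugins.

```groovy
// Hypothetical layout: a first Copy Artifact step has fetched only the
// small report files (reports/<config>/timing.txt). This step picks the
// best configuration so a second Copy Artifact step can fetch just its
// full 100 MB artifact.
def best = null
new File(build.workspace.toString(), 'reports').eachDir { dir ->
    def margin = new File(dir, 'timing.txt').text.trim().toBigDecimal()
    if (best == null || margin > best.margin) {
        best = [config: dir.name, margin: margin]
    }
}
// Hand the winning configuration to the following Copy Artifact step,
// e.g. via a parameter or env variable (exact mechanism plugin-dependent).
println "Best configuration: ${best.config} (margin ${best.margin})"
```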
The "Plot plugin" is very nice for visualising metrics, but it does not seem to make the metrics available to other jobs?
(Background for those interested: The matrix build job is performing FPGA Place&Route iterations distributed over a build farm, and the quality metric is the achieved timing margin for each iteration)
EDIT
For clarification, below is an image where I have tried to illustrate the current setup. All jobs run on Jenkins slaves; none run on the Jenkins master, which only holds the artifacts.
As can be seen, a total of 2×25×100 MB = 5 GB of data is copied between the Jenkins slaves and the Jenkins master, which takes a significant amount of time.
If the metrics could be evaluated earlier, as little as 2×100 MB would need to be copied.
Related
I have a multi-module Maven project that installs a whole bunch of artifacts (with different classifiers) into the local Maven repository. I also have a second Maven project that uses the Maven Dependency Plugin to collect those artifacts into a number of different directories (for installer-building purposes). And finally I have a Jenkins instance that I want to do all of that for me.
There are a number of requirements I would like to see fulfilled:
1. Building the source code (and running the tests) and building the installers should be two separate jobs, Job A and Job B.
2. Job A needs to finish quickly; as it contains the tests, the developers should get feedback as fast as possible.
3. The artifacts of Job B take up a lot of space, but they need to be archived, so this job should only run when the results of Job A meet certain requirements (which are not a part of this problem).
4. Job B needs to be connected to Job A. It must be possible to tell exactly which Job A instance created the files that were used in the build of Job B. (It is also possible that I need a run of Job B for a particular build of Job A which was three weeks and 200 builds ago.)
5. And finally, both jobs should be able to be executed locally on a developer’s machine, so I would love to keep most of the configuration within Maven and only relegate to Jenkins what’s absolutely necessary. (Using the Copy Artifacts Plugin I can collect the artifacts from Job A into the required directories in Job B, but when removing the collection from the Maven project I also take away the developer’s ability to do local builds.)
Parts of 3 and 4 can be achieved using the Promoted Builds plugin for Jenkins. However, I cannot seem to make sure that the files collected in Job B are exactly the files created by a certain run of Job A. During development all our version numbers of all involved projects are suffixed with “-SNAPSHOT” so that an external job has no way of knowing whether it actually got the correct file or whether it was given a newer file because another instance of Job A has been running concurrently. The version numbers are then increased directly before a release.
Here are some things I have tried and found to be unsatisfactory:
Use a local repository in the workspace directory of Job A. This will, upon each build, download all of the dependencies from our Nexus. While this does not have a huge impact on the disk space, it does consume way too much time.
Merge Job A and Job B into a single job. As Job B takes more time than Job A, developers have to wait longer for feedback, it still uses a lot of disk space, and it doesn’t really solve the problem, as there is still the possibility of another Job A+B running at the same time.
Am I missing something obvious here? Are Maven or Jenkins or the combination of both unable to do what I want? What else could I try?
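One hedged illustration of the "connected builds" requirement: give every Job A run a unique version derived from the Jenkins build number, so Job B can ask Maven for exactly that version instead of an ambiguous `-SNAPSHOT`. The sketch below assumes the Versions Maven Plugin and a pipeline-style job; the version string and stage names are placeholders.

```groovy
// Hypothetical Job A pipeline fragment: stamp the build number into the
// version before deploying, so each run produces uniquely identifiable
// artifacts (e.g. 1.2.3-b42-SNAPSHOT instead of a shared 1.2.3-SNAPSHOT).
stage('Stamp version') {
    sh "mvn versions:set -DnewVersion=1.2.3-b${env.BUILD_NUMBER}-SNAPSHOT"
}
stage('Build and test') {
    sh 'mvn clean deploy'   // deploy to Nexus under the unique version
}
// Job B would then be parameterised with the Job A build number and
// resolve exactly that version via the Maven Dependency Plugin.
```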
I'm looking for a way to use Jenkins to build a single code base for multiple CPU architectures. At the moment these are amd64 and armhf, although the list may expand in the future. The ideal situation would be to run the build on a number of different Jenkins slaves with different CPU architectures.
These build jobs are not compiler-based (Maven, Gradle, etc.) but system-independent shell scripts (bash and Python) which auto-detect their CPU architecture and produce build artifacts to match the CPU.
I may be missing something really obvious, but I don't see a way to automatically run a build a number of times across different architectures, or to bind a specific build to a specific architecture.
Could anyone point me in the right direction?
Funny you should ask this question now. Published last Friday (2019-11-22) ...
You should review the Jenkins blog post: Welcome to the Matrix
I often find myself needing to run the same actions on a bunch of
different configurations. Up to now, that meant I had to make multiple
copies of the same stages in my pipelines. When I needed to make
changes, I had to make the same changes in multiple places throughout
my pipeline. Maintaining even a small number of configurations was
difficult for larger pipelines.
The post covers: a single-configuration pipeline, a pipeline for multiple platforms and browsers, and excluding invalid combinations.
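A minimal declarative-pipeline sketch of the `matrix` directive described in that post. The axis values and agent labels (`amd64`, `armhf`) and the `./build.sh` script are placeholders for your own setup:

```groovy
// Run the same build stage once per architecture, each cell on an agent
// whose label matches the axis value.
pipeline {
    agent none
    stages {
        stage('BuildAndTest') {
            matrix {
                axes {
                    axis {
                        name 'ARCH'
                        values 'amd64', 'armhf'
                    }
                }
                agent { label "${ARCH}" }   // bind each cell to a matching slave
                stages {
                    stage('Build') {
                        steps {
                            sh './build.sh'   // script auto-detects the CPU
                        }
                    }
                }
            }
        }
    }
}
```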
We use a multi-configuration build according to the BuildConfiguration variable and run Release and Debug in parallel with Clean: false in one of our builds.
In the agent queue, we have two agents that meet the requirements for this particular build definition.
The problem is that specific agents cannot be pinned for this build, so there is no guarantee that Debug will always be built on agent X and Release on agent Y.
If Release has once been built on agent X, its files remain there and are not deleted. If something is then copied over them when populating the drop, "outdated" files end up there.
One option would be Clean: All, but we do not want to lose the incremental mode.
Is there a solution for this problem?
No, your scenario is simply not supported. You CAN work around it by having one queue / set of tags that effectively forms a group of ONE agent, but that is it.
Otherwise you are simply out of scope. Tasks on agents are supposed to be standalone. Clean: false is meant purely as a performance tuning (no need to recompile things that have not changed, etc.), NOT as a way for follow-up jobs to rely on the state another job has left on an agent.
What I do in some scenarios like that is use my own file server as a buffer. Given that my agents run locally and have a VERY high-bandwidth connection (200 gigabit per server), I can move compiled results into a buffer folder and back with essentially zero overhead (as in: zero perceived overhead). Particularly in multi-agent jobs that really helps (downloading Selenium tests 16 times for 16 agents: no, thanks).
I am publishing build results to SonarQube with Jenkins.
Each commit on git is triggering a jenkins build.
My problem is that build duration is not deterministic so build #2 can finish before build #1.
Consequently, results are published to Sonar in wrong order and differential view shows wrong results.
For example, if I fixed a unit test in build #2, the results of build #1 will tell me that the test is failing again.
The build version is set in the results, and it should be used to order builds instead of the publication date.
Is there any way to do it?
Thank you.
The SonarQube platform is going to process the analysis reports in the order it receives them. It has no way of knowing anything about your Jenkins build number.
Your best bet is to enable the Throttle Concurrent Builds option to make sure that each new job waits its turn. That's the only way to ensure the order you expect.
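In a pipeline job, a similar serialisation can be sketched with the built-in `disableConcurrentBuilds()` option instead of the Throttle Concurrent Builds plugin. The stage name and `mvn sonar:sonar` step are placeholders for your own analysis step:

```groovy
// Prevent two runs of this job from executing at the same time:
// build #2 waits in the queue until build #1 has finished, so the
// analysis reports reach SonarQube in build order.
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Analyse') {
            steps {
                sh 'mvn sonar:sonar'
            }
        }
    }
}
```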
I'm running a number of static analysis tools and I want to track the results from build to build. For example, if a commit to a branch increases the number of security vulnerabilities, I want to send an email to the committer. I know there are plugins like Sonar and Analysis Collector, but they don't cover all of the areas of analysis I want and they don't seem to have the ability to trigger actions based on build trends (correct me if I'm wrong).
You can use the Groovy Postbuild Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin
It lets you extract data (such as the number of vulnerabilities detected) from the current build's log with num_vul = manager.getLogMatcher(regexp)
And compare that to previous builds by extracting the same info from their logs, e.g.:
currentBuildNumber = manager.build.number
manager.setBuildNumber(currentBuildNumber - 1)  // point the manager at the previous build's log
prev_num_vul = manager.getLogMatcher(regexp)
Then, if the number of vulnerabilities has gone up, call manager.buildFailure(), which sets the build status to FAILURE, and have the next post-build step be the Email-ext plugin, which can send an email to the committer in the event of a failure.
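Putting those pieces together, a hedged sketch of the whole post-build script might look like this. The log line format ("Vulnerabilities found: N") is an assumption; adapt the regexp to whatever your analysis tool actually prints.

```groovy
// Groovy Postbuild sketch: compare this build's vulnerability count to
// the previous build's, and fail the build if it went up.
def regexp = /Vulnerabilities found: (\d+)/

// Pull the count out of a log matcher, or null if the line wasn't found.
def countFor = { matcher ->
    (matcher != null && matcher.matches()) ? matcher.group(1).toInteger() : null
}

def current = countFor(manager.getLogMatcher(regexp))
def currentNumber = manager.build.number
manager.setBuildNumber(currentNumber - 1)   // switch to the previous build's log
def previous = countFor(manager.getLogMatcher(regexp))
manager.setBuildNumber(currentNumber)       // switch back to the current build

if (current != null && previous != null && current > previous) {
    manager.buildFailure()   // Email-ext can then notify the committer
}
```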
I would recommend the SonarQube tool, which does just what you describe. You mention that you already looked at it, but maybe you missed the Notifications feature or the Build Breaker Plugin. There are more SonarQube features centered around Jenkins integration. SonarQube is free to use.
If you are still missing something, it might be worthwhile asking specifically how that aspect could be covered by SonarQube. Just my two cents.