How to execute one build many times with different compiler flags in Jenkins? - jenkins

I'm new to working with Jenkins and could use some help.
Right now, I have a project I want to build with Jenkins, and I have a rough idea of how to build a simple project. What I'm wondering is: can I build the project with certain compiler flags, and then build it again with different flags, automatically?
My goal is to be able to submit a program to Jenkins and have it compile the program, run some tests, and then start over with different compiler settings. I then check the results to see under which compiler settings the code runs fastest. I need to use Jenkins and I need to do this testing.
My current strategy was to set up a master/agent system and have the master go through a pipeline where, at each step, it compiles the code a certain way and pushes it to the appropriate agent queue to be executed. Is this feasible? How should I go about this?

I don't know if I understood you correctly, but as I read it you want to run the same compile/tests with different flags.
I would do this in a single Jenkins pipeline, running the stages sequentially, like:
Stages:
CheckSCM (git clone)
Build with flag1
cleanWs (clean workspace)
CheckSCM
Build with flag2
cleanWs
CheckSCM
Build with flag3
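A minimal scripted-pipeline sketch of that idea. The flag sets and the make commands are placeholders for your real build and test steps, cleanWs() needs the Workspace Cleanup plugin, and checkout scm assumes the pipeline itself is loaded from SCM (otherwise use an explicit git step):
node {
    def flagSets = ['-O0', '-O2', '-O3']            // placeholder compiler flag sets
    for (flags in flagSets) {
        stage("Checkout ${flags}") {
            cleanWs()                               // Workspace Cleanup plugin
            checkout scm
        }
        stage("Build and test ${flags}") {
            sh "make clean all CFLAGS='${flags}'"   // placeholder build command
            sh 'make test'                          // placeholder test command
        }
    }
}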

Related

Jenkins - Job A sets the build number for Job B without reloading project configuration from disk

I want to have one Jenkins job control the build number of another job but without the inconvenience of reloading the entire project configuration from disk. I have seen that it's easily possible to directly update the nextBuildNumber file of the target job (I can do this as a build step of Job A) but this does not take effect immediately. Restarting Jenkins or even reloading the Jenkins configs from disk takes way too long and can only be done when there are no builds in progress.
I have tried the groovy script mentioned in the post linked below by running it from the Manage Jenkins > Script Console. The same post also suggests the script can be saved as a file and run from the CLI. Can it be run from a build step?
I want Job A to determine Job B's next build number and set it so that Job B can run (later in the same day) with the desired build number.
https://stackoverflow.com/a/20077362/4306857
Perhaps I should clarify: I'm not familiar with Groovy, so I'm looking at the various build step options like "Execute Windows batch command", which I have a lot of experience with. I can see an "Invoke Gradle script" option, so I was wondering if there may be a plugin that can run Groovy scripts?
The reason this requirement has arisen is that we are compiling a product for two different platforms. We want to compile the codebase almost simultaneously for both platforms with two jobs (A & B) which will both update the JIRA cases included in the builds. We feel it will be less confusing to have both these jobs running with the same build number so that when we talk about a particular issue being addressed in build #75, say, we won't have to qualify that by stating the full job name. If JOB-A build #75 and JOB-B build #75 are both compiled on the same day from the same codebase we can test and compare the results of both builds with far less confusion than if the build numbers go out of sync.
Obviously, in the short term we will use the Set Next Build Number plugin to manually keep the build numbers in step but we want to automate this if possible.
It depends on whether or not you are using the Version Number plugin, i.e. whether these options are checked in the job configuration:
[x] Create a formatted version number
Build Display Name: [x] Use the formatted version number for build display name
Assuming you are NOT, this Groovy script will do it:
// system Groovy: set the next build number of another job
def nextNumber = 42
def job = Jenkins.instance.getItemByFullName('path/to/jobName')   // full job path, including any folders
job.nextBuildNumber = nextNumber
job.save()
You will need the Groovy plugin for that. Place the script in an "Execute system Groovy script" build step, and make sure to choose the system Groovy variant. That executes on the master, where the job config and metadata are stored, so you have access to the Jenkins internals and data.
I'd suggest you should really be using the above options rather than relying on "keeping both jobs in sync" via a script or manually. You can then pass the label to be used from the first job as a parameter to the second job. That would also require the Parameterized Trigger as well as the Version Number plugin.
You can even use ${BUILD_DATE_FORMATTED} or ${BUILD_TIMESTAMP}, etc.
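If you end up driving this from a pipeline job instead of freestyle builds, passing such a label downstream looks roughly like this; the job name and parameter name are made up for illustration:
// hypothetical: trigger JOB-B with the label computed in JOB-A
build job: 'JOB-B', parameters: [
    string(name: 'RELEASE_LABEL', value: env.BUILD_NUMBER)
]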
Postscript: thinking about the problem space from a different perspective, that of running two or more builds on different platforms (simultaneously), there's a plugin for that: the Matrix Project plugin. You can run it as a freestyle job on multiple nodes, or do the equivalent as matrix building in a scripted pipeline. Not sure how that would tie in to JIRA.
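For the scripted-pipeline variant of that idea, the usual pattern is a map of branches run with parallel. This is only a sketch; the node labels and build commands are assumptions, not anything from your setup:
// matrix-style build across platforms in scripted pipeline
def platforms = ['linux', 'windows']
def branches = [:]
for (p in platforms) {
    def platform = p                     // capture the value for the closure
    branches[platform] = {
        node(platform) {                 // assumes agents labelled 'linux' / 'windows'
            checkout scm
            if (isUnix()) { sh 'make all' } else { bat 'build.cmd' }   // placeholder builds
        }
    }
}
parallel branches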

Accessing information from previous Jenkins pipeline run

I have a pipeline set up that builds multiple project configurations. The order of building the configurations doesn't matter; all that matters is whether all builds succeeded.
What I'd like to know is which build configuration failed in the previous run of the pipeline (if the pipeline failed). The plan is to use this information to start the next build from that configuration, as it's likely the one which is going to fail again. I can already access the previous pipeline build status using currentBuild.getPreviousBuild().result. Either of the following would work for me:
Is it possible to store information between pipeline runs (full runs, not pipeline stages in the same run)?
or
Is it possible to get the stage at which previous pipeline has failed on?
You can persist the name of the failed configuration in the build description and read it back on the next run via currentBuild.previousBuild.description.
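A rough sketch of that idea; the configuration name 'configA' and the make command are placeholders:
def lastFailed = currentBuild.previousBuild?.description   // null on the very first run
stage('configA') {
    try {
        sh 'make configA'                                   // placeholder build
    } catch (err) {
        currentBuild.description = 'configA'                // remember which configuration failed
        throw err
    }
}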
As an alternative, you could archive a file with whatever you need and access it via currentBuild.previousBuild.rawBuild.artifactManager.root(). From there on, having a VirtualFile, you would just have to traverse the archived artifacts. Beware that this cleaner but longer solution will (probably) require approving a good number of methods for the pipeline sandbox.
PS: As stated by Jacek Ślimok, after setting an env variable with
env.FAILING_BUILD="foo"
it will be available to the next build via
currentBuild.previousBuild.buildVariables.FAILING_BUILD
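Put together, a hedged sketch of that variant; the configuration name and build command are placeholders:
try {
    sh 'make config2'                   // placeholder build
} catch (err) {
    env.FAILING_BUILD = 'config2'       // exposed to the next run via buildVariables
    throw err
}
// and in the next run:
def previousFailure = currentBuild.previousBuild?.buildVariables?.FAILING_BUILD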

How to achieve Roll back using Jenkins

I know this forum is not meant to provide strategies.
Using Jenkins, I have set up CI and CD to my Dev, QA and Staging environments. I am stuck on a rollback strategy for all my environments.
1. What happens if my build fails in Dev?
2. What happens if my build fails in QA but passed in Dev?
3. What happens if my build fails in Staging but passed in Dev and QA?
How should I roll back and get things done, considering the DB is not in place? I have created a sample workflow but am not sure it's the right process.
Generally you can achieve this in two ways:
Set up some sort of release management tool that tracks every execution of your pipeline and snapshots the variables, artifacts, etc. that were used in that exact execution; then you can just run an earlier release of it (check tools like Octopus Deploy).
If you are using a branching strategy with tags, you can parameterize your jobs, passing the tag you want to build, and build the "earlier tag" if something fails. Also check the rebuild option for older job executions.
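As a sketch of the tag-parameterized approach, assuming a made-up repo URL, tag format and deploy command rather than your actual setup:
pipeline {
    agent any
    parameters {
        string(name: 'RELEASE_TAG', defaultValue: 'v1.0.0', description: 'Git tag to build/deploy')
    }
    stages {
        stage('Checkout tag') {
            steps {
                checkout([$class: 'GitSCM',
                          branches: [[name: "refs/tags/${params.RELEASE_TAG}"]],
                          userRemoteConfigs: [[url: 'https://example.com/your-repo.git']]])
            }
        }
        stage('Build and deploy') {
            steps {
                sh './deploy.sh'    // placeholder deploy command
            }
        }
    }
}
To roll back, you re-run the job with an earlier tag as RELEASE_TAG.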

Jenkins with Shared jobs

I am working with Jenkins, and we have quite a few projects that all use the same tasks, i.e. we set a few variables, change the version, restore packages, start SonarQube, build the solution, run unit/integration tests, stop SonarQube, etc. The only difference would be something like {Solution_Name}; everything else is exactly the same.
My question is: is there a way to create one 'shared' job that does all that work, while the job for building the project passes the variables down to that shared worker job? What I'm looking for is the ability to not have to create all the tasks for every one of our services/components. It would be really nice if each of our services/components could have only two tasks: one to set the variables, another to run the shared job.
Is this possible?
Thanks in advance.
You could potentially benefit from looking into the new pipelines as code feature.
https://jenkins.io/doc/book/pipeline/
Using this pattern, you define your build pipeline in a Groovy script rather than in the Jenkins UI. The script is then kept in the codebase of the project it builds, in a file called Jenkinsfile.
By checking this pipeline into a git repository, you can create a minimal configuration on the Jenkins side and simply tell it to look at a specific repo and do whatever the pipeline says.
There are a few benefits to this approach if it works for your setup. The big one is that your build pipeline is fully versioned, just like the project it builds. The repository also becomes portable: it can be built on any Jenkins installation, across as many jobs as you like, as long as the pipeline plugins are installed.
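As a hedged sketch, a per-project Jenkinsfile could then shrink to something like the following, where SOLUTION_NAME is the only value that changes between services; the restore/build/test commands are placeholders for your actual steps (including the SonarQube start/stop):
pipeline {
    agent any
    environment {
        SOLUTION_NAME = 'MyService.sln'    // the only per-project value
    }
    stages {
        stage('Restore') { steps { bat 'nuget restore %SOLUTION_NAME%' } }                       // placeholder
        stage('Build')   { steps { bat 'msbuild %SOLUTION_NAME% /p:Configuration=Release' } }    // placeholder
        stage('Test')    { steps { bat 'vstest.console.exe Tests\\bin\\Release\\Tests.dll' } }   // placeholder
    }
}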

Jenkins multi-configuration, ignore failed build

I've set up a Jenkins multi-configuration project in order to run builds on two different slave environments (Erlang R15B03 and Erlang 17.3). This is in order to start preparing our projects for an actual release on a 17.3 production environment.
Currently the 17.3 build for all projects is failing because of dependency failures, which need to be fixed as we go along, while the R15B03 builds are all passing.
How can I make it so that Jenkins (for now) ignores the 17.3 result and marks the build as successful if the R15B03 build passes?
It is not a good idea to have a build pass if anything goes wrong. It is like commenting out failing tests to fix them later - you forget about them.
You should probably set up two separate Jenkins builds for R15B03 and 17.3. This way you will see, every time, that R15B03 passed and 17.3 did not, until you fix all dependencies.
You can use the Parameterized Trigger plugin to have a conditional trigger. Set up your multi-configuration project to always trigger, and then check the parameters to decide which downstream build to trigger.
