I am configuring a Jenkins workflow, and the requirement is to use Linux (server1) for the compilation part of the workflow and then Windows (server2) for testing, because the testing tool is not compatible with Linux. After testing is complete, I need to switch back to the same Linux machine (server1) to continue the rest of the workflow.
How do I switch slaves within the same workflow? If that is not possible, what are other ways to achieve this?
Suggestions appreciated!
If by Jenkins workflow you mean Jenkins Pipeline, then you can do it like this:
node('server1') {
    // compilation steps on the Linux machine
    node('server2') {
        // testing steps on the Windows machine
    }
    // continue the rest of the workflow on server1
}
You can send files between the nodes using the stash/unstash steps.
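For example, here is a minimal sketch of passing files both ways with stash/unstash; the build and test commands and the stash names are made-up placeholders:

node('server1') {
    sh 'make'                              // compile on Linux
    stash name: 'binaries', includes: 'build/**'
    node('server2') {
        unstash 'binaries'                 // copies the files into the Windows workspace
        bat 'run-tests.bat'                // hypothetical test command
        stash name: 'results', includes: 'reports/**'
    }
    unstash 'results'                      // back on server1 with the test reports
    sh './finish-workflow.sh'              // continue the rest of the workflow
}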
A possible way is to use a wrapper job which starts the compilation and test jobs via the Trigger/call builds on other projects step.
That way you can move the artifacts with the help of the Archive the artifacts option (which you would use in the compilation job's post-build actions) and the Copy Artifacts Plugin (which you'd use in a test job build step).
You can define on which machine/label your jobs run, either statically via the default configuration in the job, or dynamically with the help of the NodeLabel Plugin.
Note
You can also try option 3 mentioned here; however, I'm not sure if it works when files have to be moved between different machines.
It might be worth checking out though, because if it works, it could be a lot more convenient.
Related
I'm looking at options of configuring Jenkins as code. What I've found so far are those two options:
Configuration as code plugin (https://github.com/jenkinsci/configuration-as-code-plugin/blob/master/README.md)
Pipeline (https://jenkins.io/doc/book/pipeline/)
What I don't understand yet is how these two work with each other. Do they both do the same thing, so I should choose one or the other? Or do they do different things, in which case it would be great to know whether the two can actually work together.
You can use pipeline to configure the build process as code.
You can use Jenkins Configuration as Code to configure the Jenkins instance as code.
You should also have a look at Job DSL Plugin to configure the jobs (everything but the build process) as code.
You may have a look at this repo to see it all work together: https://github.com/tomasbjerre/jenkins-configuration-as-code-sandbox
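For illustration, here is a minimal Job DSL sketch that defines a single pipeline job as code; the job name and repository URL are placeholder values:

// Job DSL script: creates a pipeline job whose build process
// lives in the repository's Jenkinsfile.
pipelineJob('example-app') {
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://example.com/example-app.git') }
                    branch('master')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}

This illustrates the division of labour: Job DSL creates and configures the job, while the Jenkinsfile it points at describes the build process itself.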
Background
I am using Jenkins with the Build Pipeline plugin to build some fairly complicated projects that require multiple compilation steps:
Build source RPM.
Build binary RPM (this is performed twice, once for each platform).
Deploy to YUM repository.
My strategy for solving build requirements involves splitting the common work into parameterized jobs that can be reused across projects and branches, with each job representing one stage in the pipeline. Each stage is triggered with parameters, and build artifacts are passed along to the next job in the pipeline. However, I'm having some trouble with this strategy, and could really use some tips on how to go about solving this problem in the most elegant and flexible way possible.
To be more specific, there are two common libraries, which are shared by other projects (but not all projects). The libraries are built differently from the dependent projects, but it should not be necessary to specify in Jenkins what the dependent projects are.
There are multiple branches, the master branch (rebuilt nightly), the develop branch (polled for changes), feature branches (also polled), and release branches (polled, but built for release). The branches are built the same way across multiple projects.
We create multiple repositories every month, and whilst it is feasible to expect a little setup for a new project, generally we want this to be as simple and automated as possible.
The Problems
I have many projects with multiple branches, and I do not wish to build all branches or even all projects in the same way. Because most of the build steps are similar, I can turn these common steps into parameterized build jobs, and get each job to trigger the next in the chain, passing parameters and build artifacts along the chain. However, this falls apart if one of the steps needs to be skipped, because I don't know of a way to conditionally skip a build step. This implies I would need to copy the build jobs so that I can customise them for each pipeline, resulting in a very large number of build jobs. I could use a combination of plugins to create a job generator (e.g. the flow DSL, job DSL, etc.), and hide as much as possible from the users, but what's the most elegant Jenkins solution to this? Are there any plugins, or examples that I might have missed? What's your experience of doing this?
Because step 2 can be split into two jobs that can run in parallel, this introduces a complexity that is causing me problems with my pipeline. My first attempt was to trigger a parameterized build job twice with different parameters, and then join the jobs afterwards using the Join plugin, but it was beginning to look like it would be complicated to copy in the build artifacts from the two upstream jobs. This is significant, because I need the build artifacts from both jobs for stage 3. What's the most elegant solution for joining parallel jobs and copying artifacts from all of them? Are there any examples that I might have missed?
I need to combine test results generated from both of the jobs in stage 2, and copy them to the job that triggers the build. What's the best way to handle this?
I'm happy to read articles, presentations, technical articles, reference documentation, write scripts and whatever else necessary to make this work nicely, but I'm not a Jenkins expert. If anyone can give me some advice on these 3 problems then that would be helpful. Additionally, I would appreciate any constructive advice on how to get the best out of pipeline CI builds in Jenkins, if relevant.
For the first point, I wrote the Job Generator plugin specifically to address this use case. You can find more info on the Job Generator wiki page.
There is also a plugin of the same type with a different approach (the job generator runs as a build step); it is called Jobcopy Builder.
The other approaches you mentioned require some kind of DSL and can be a good choice too.
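As an aside (not part of the plugin-based approaches above): if Jenkins Pipeline is an option for you, the fan-out/fan-in of point 2 can be expressed directly with parallel plus stash/unstash. A rough sketch, in which all node labels and shell commands are assumptions:

node('srpm-builder') {
    // stage 1: build the source RPM once and stash it
    sh 'make srpm'                               // placeholder build command
    stash name: 'srpm', includes: '*.src.rpm'
}
parallel platformA: {
    node('platform-a') {
        unstash 'srpm'
        sh 'rpmbuild --rebuild *.src.rpm'        // stage 2, first platform
        stash name: 'rpms-a', includes: '**/*.rpm'
    }
}, platformB: {
    node('platform-b') {
        unstash 'srpm'
        sh 'rpmbuild --rebuild *.src.rpm'        // stage 2, second platform
        stash name: 'rpms-b', includes: '**/*.rpm'
    }
}
node('deployer') {
    unstash 'rpms-a'                             // stage 3 sees both artifact sets
    unstash 'rpms-b'
    sh './deploy-to-yum.sh'                      // placeholder deploy command
}

The join problem disappears because the script itself only continues after both parallel branches have finished.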
Is it possible to set the build result for a build after that build is complete?
I could not find any plugins that do this already, and I was considering writing my own, but I wanted to see if this was even possible before going down that path.
(I have looked at existing code and how the "Fail The Build" plugin works as an example, but my understanding of the Jenkins code base is not advanced enough to understand what all the possibilities are.)
Use case: we have a build pipeline, and near the end of the pipeline there is a deploy-to-qa step that deploys the artifact to a QA environment. We have automated tests before this step to try to catch any problems with the artifact, but our test coverage is not very high in some areas so bugs could still slip through the cracks. I'd like to have the ability to mark a deploy-to-qa build as FAILED after the fact, to denote that that particular pipeline was invalid and is not a candidate for production release. (Basically the same as this Build Pipeline Plugin issue)
After some more investigation in the code, I believe that this is not possible.
From hudson.model.Run:
public void setResult(Result r) {
    // state can change only when we are building
    assert state==State.BUILDING;
    // snip
    ...
}
So the build result cannot change except when in "building" state.
I could try to muck with the lastSuccessful and lastStable symlinks (as is done with the delete() function in hudson.model.AbstractBuild), but then those would be reset as soon as Jenkins reloaded the build results from jobs/JOBNAME/builds/.
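That said, a workaround that is commonly shared (hacky, and untested here, so treat it as an assumption rather than a supported API) is to bypass the setter from the Script Console by writing the field directly and then persisting the build record:

// Script Console sketch: force a finished build's result to FAILURE.
// The job name 'deploy-to-qa' and build number 42 are placeholders.
import hudson.model.Result
import jenkins.model.Jenkins

def build = Jenkins.instance.getItemByFullName('deploy-to-qa')
                   .getBuildByNumber(42)
build.@result = Result.FAILURE   // direct field access skips the BUILDING assert
build.save()                     // writes build.xml so the change survives a reload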
I have an untested suggestion: make a parameterized build where a parameter determines whether the build fails or not (for example, a simple bat/shell script that tests the environment variable the parameter sets and does exit 0 or exit 1). This assumes that the build pipeline's manually triggered step will ask for the parameters rather than use the default values.
If it does not support interactive build parameters, then some other way is needed to tell this extra build step whether it should fail or not. Maybe edit the upstream build's description or display name to indicate failure, and then allow the build pipeline to continue to this extra build step, which probably has to use a system Groovy script to dig out the upstream build's description or display name.
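A rough system Groovy sketch of that last idea (all names and the marker string are assumptions, and it is untested):

// System Groovy build step: fail this extra step if the upstream
// build's description contains a marker edited in by a human.
import hudson.model.*
import jenkins.model.Jenkins

def build = Thread.currentThread().executable
def cause = build.getCause(Cause.UpstreamCause)
if (cause) {
    def upstream = Jenkins.instance.getItemByFullName(cause.upstreamProject)
                          .getBuildByNumber(cause.upstreamBuild)
    if (upstream?.description?.contains('MARK-FAILED')) {   // hypothetical marker
        build.setResult(Result.FAILURE)   // legal here: this build is still running
    }
}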
I have seen several debates on this topic previously, and the outcome was always that it is theoretically possible to do so, but the codebase is not designed to allow this and it would have to be a very hacky workaround.
It's also been said that this is a bad practice in general, although I don't remember what the argument against it was.
I am facing the same requirement. I haven't found an appropriate plugin; changing the build status is not just a flag, it has other impacts, e.g. on links (latest successful build, etc.). So instead of changing the status of the build, I looked for a way of qualifying the build. The Promoted Builds Plugin applies flags to builds to define, for example, different quality stages. Build promotions can be performed manually or based on, for example, successful builds of downstream projects. Any successful build can be qualified, and based on the promotion, additional build and post-build actions can be executed, e.g. tagging or archiving.
Actually, I was able to do it by manually changing the build.xml to <result>FAILURE</result>.
I then played a little with mklink to create some symbolic links, renamed lastSuccessfulBuild to lastFailedBuild, and it worked. If you are allowed to access the filesystem from within a Jenkins plugin, then it is possible to write one.
In case you are fine with deleting the current build and re-running it (setting the next BUILD_NUMBER to the deleted one), you could use this plugin to tell it to fail instead of succeed:
Fail The Build Plugin
I have a fairly complicated Jenkins job that builds, unit tests and packages a web application. Depending on the situation, I would like to do different things once this job completes. I have not found a re-usable/maintainable way to do this. Is that really the case or am I missing something?
The options I would like to have once my complicated job completes:
Do nothing
Start my low-risk-change build pipeline:
copies my WAR file to my artifact repository
deploys to production
Start my high-risk-change build pipeline:
copies my WAR file to my artifact repository
deploys to test
run acceptance tests
deploy to production
I have not found an easy way to do this. The simplest, but not very maintainable approach would be to make three separate jobs, each of which kicks off a downstream build. This approach scares me for a few reasons including the fact that changes would have to be made in three places instead of one. In addition, many of the downstream jobs are also nearly identical. The only difference is which downstream jobs they call. The proliferation of jobs seems like it would lead to an un-maintainable mess.
I have looked at using several approaches to keep this as one job, but none have worked so far:
Make the job a multi-configuration project (https://wiki.jenkins-ci.org/display/JENKINS/Building+a+matrix+project). This provides a way to inject the job with a parameter. I have not found a way to make the "build other projects" step respond to a parameter.
Use the Parameterized-Trigger plugin (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin). This plugin lets you trigger downstream-jobs based on certain triggers. The triggers appear to be too restrictive though. They're all based on the state of the build, not arbitrary variables. I don't see any option provided here that would work for my use case.
Use the Flexible Publish plugin (https://wiki.jenkins-ci.org/display/JENKINS/Flexible+Publish+Plugin). This plugin has the opposite problem as the parameterized-trigger plugin. It has many useful conditions it can check, but it doesn't look like it can start building another project. Its actions are limited to publishing type activities.
Use Flexible Publish + Any Build Step plugin (https://wiki.jenkins-ci.org/display/JENKINS/Any+Build+Step+Plugin). The Any Build Step plugin allows making any build action available to the Flexible Publish plugin. While more actions were made available once this plugin was activated, those actions didn't include "build other projects."
Is there really not an easy way to do this? I'm surprised that I haven't found one, and even more surprised that I haven't really seen anyone else trying to do this. Am I doing something unusual? Is there something obvious that I am missing?
If I understood it correctly, you should be able to do this by following these steps:
First Build Step:
Does the regular work. In your case: building, unit testing and packaging of the web application
Depending on the result, let it create a file with a specific name.
That is, if you want the low-risk pipeline to run afterwards, create a file named low-risk.prop (one way to do this is shown in the sketch after these steps)
Second Build Step:
Create a Trigger/call builds on other projects step from the Parameterized Trigger plugin.
Enter the name of your low-risk job into the Projects to build field
Click on: Add Parameter
Choose: Parameters from properties File
Enter low-risk.prop into the Use properties from file field
Enable Don't trigger if any files are missing
Third Build Step:
Check if a low-risk.prop file exists
Delete the file
Do the same for the high-risk job
Now you should have the following Setup:
if a file called low-risk.prop is created during the first build step, the low-risk job will be started
if a file called high-risk.prop is created during the first build step, the high-risk job will be started
if there's no .prop file, nothing happens
And that's what you wanted to achieve. Isn't it?
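As for the file-creating decision in the first build step, one possible shape for it (a hypothetical sketch; RISK_LEVEL is an assumed build parameter, not something from the original answer) is a system Groovy step like this:

// System Groovy build step: drop a marker .prop file into the
// workspace according to an assumed RISK_LEVEL build parameter.
import hudson.model.*

def build = Thread.currentThread().executable
def risk = build.buildVariableResolver.resolve('RISK_LEVEL')   // e.g. 'low' or 'high'
if (risk in ['low', 'high']) {
    build.workspace.child("${risk}-risk.prop")
         .write("RISK_LEVEL=${risk}\n", 'UTF-8')
}
// with no RISK_LEVEL set, no .prop file appears and nothing is triggered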
Have you looked at the Conditional BuildStep Plugin? (https://wiki.jenkins.io/display/JENKINS/Conditional+BuildStep+Plugin)
I think it can do what you're looking for.
If you want a conditional post-build step, there is a plugin for that:
https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task
It will search the console log for a RegEx you specify, and if found, will execute a custom script. You can configure fairly complex criteria, and you can configure multiple sets of criteria each executing different post build tasks.
It doesn't provide you with the usual "build step" actions, so you've got to write your own script there. You can trigger execution of the same job with different parameters, or of another job with some parameters, in the standard ways that Jenkins supports (for example, using curl).
Yet another alternative is Jenkins text finder plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Text-finder+Plugin
This is a post-build step that allows you to forcibly mark a build as "unstable" if a regex is found in the console text (or even in some file in the workspace). So, in your build steps, depending on your conditions, echo a unique line into the console log, and then match that line with a regex. You can then use "Trigger parameterized builds" and set the condition to "unstable". This has the added benefit of visually marking the build differently (with a yellow ball); however, you only get one conditional option with this method, and from your OP it looks like you need two.
Try a combination of these two methods.
Do you use Ant for your builds?
If so, it's possible to do conditional building in Ant by having a set of environment variables that your build scripts use to decide what to build. In Jenkins, your job will then nominally build all of the projects, but each actual build will decide whether it builds or just short-circuits.
I think the way to do it is to add an intermediate job that you put in the post-build step, pass it all the parameters your downstream jobs could possibly need, and then, within that job, place conditional build steps for the real downstream jobs.
The simplest approach I found is to trigger the other jobs remotely, so that you can use the Conditional BuildStep Plugin (or any other plugin) to build the other jobs conditionally.
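For example (a hedged sketch: the URL, job name and token are all placeholders, and "Trigger builds remotely" must be enabled on the target job for this to work):

// Trigger another job over HTTP from any script that can run Groovy.
def url = new URL('https://jenkins.example.com/job/downstream-job/build?token=TOKEN')
def conn = url.openConnection()
conn.requestMethod = 'POST'
println "Triggered downstream-job: HTTP ${conn.responseCode}"   // Jenkins answers 201 on success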
Why are there two kinds of jobs for Jenkins: the multi-configuration project and the free-style project? I read somewhere that once you choose one of them, you can't (easily) convert to the other. Why wouldn't I always pick the multi-configuration project in order to be safe for future changes?
I would like to set up a build for a project building both on Windows and Unix (and other platforms as well). I found this question, which asks the same thing, but I don't really get the answer. Why would I need three matrix projects (and not three free-style projects), one for each platform? Why can't I keep them all in one matrix, with platforms AND (for example) gcc version on one axis and (my) software versions on the other?
I also read this blog post, but that builds everything on the same machine, with just different Python versions.
So, in short: how do most people configure a multi-configuration project targeting many different platforms?
The two types of jobs have separate functions:
Free-style jobs: these allow you to build your project on a single computer or label (a group of computers, e.g. "Windows-XP-32").
Multi-configuration jobs: these allow you to build your project on multiple computers or labels, or a mix of the two, e.g. Windows-XP, Windows-Vista, Windows-7 and RedHat - useful for checking compatibility or building for multiple platforms (Qt programs?)
If you have a project which you want to build on Windows & Unix, you have two options:
Create a separate free-style job for each configuration, in which case you have to maintain each one individually
You have one multi-configuration job, and you select two (or more) labels/computers/slaves - one for Windows and one for Unix. In this case, you only have to maintain one job for the build
You can keep your gcc versions on one axis and your software versions on another. There is no reason you should not be able to.
The question that you link to has a fair point, but one that does not relate to your question directly: in his case, he had a multi-configuration job A which - on success - triggered another job B. Now, in a multi-configuration job, if one of the configurations fails, the entire job fails (obviously, since you want your project to build successfully on all of your configurations).
IMHO, for building the same project on multiple platforms, the better way to go is to use a multi-configuration style job.
Another option is to use a python build step to check the current OS and then call an appropriate setup or build script. In the python script, you can save the updated environment to a file and inject the environment again using the EnvInject plugin for subsequent build steps. Depending on the size of your build environment, you could also use a multi-platform build tool like SCons.
You could create a script (e.g. build) and a batch file (e.g. build.bat) that get checked in with your source code. In Jenkins in your build step you can call $WORKSPACE/build - Windows will execute build.bat whereas Linux will run build.
An option is to use a user-defined axis combined with slaves (windows, linux, ...); you then add a filter for each combination and use the Conditional BuildStep Plugin to make each build step specific to one platform (Execute shell, Execute Windows batch command, ...).
This link has a tutorial, but it is in Portuguese; it's easy to work out based on the images, though...
http://manhadalasanha.wordpress.com/2013/06/20/projeto-de-multiplas-configuracoes-matrix-no-jenkins/
You could use the variables that Jenkins creates when you define a configuration matrix axis. For example:
You create a slave axis named OSTYPE and check the two slaves (Windows and Linux). Then you create two separate build steps and check the OSTYPE environment variable in each.
Alternatively, you could use a more capable scripting language like Python, which is multi-platform and can achieve the same functionality independent of the slaves' names, in just one build step.
If you go the matrix route with Windows and something else, you'll want the XShell plugin. You just create your two build scripts such as "build.bat" for cmd and "build" for bash, and tell XShell to run "build". The right one will be run in each case.
A hack to have batch files run on Windows and shell scripts on Unix:
On Unix, make batch files exit with 0 exit status:
ln -s /bin/true /bin/cmd
On Windows, find a true.exe, name it sh.exe, and place it somewhere in the PATH.
Alternatively, if you have an sh.exe installed on Windows (from Cygwin, Git, or another source), add this to the top of the shell script in Jenkins:
[ -n "$WINDIR" ] && exit 0
Why wouldn't you always pick the multi-configuration job type?
Some reasons come to mind:
Because jobs should be easy to create and configure. If it is hard to configure any job in your environment, you are probably doing something wrong outside the scope of the Jenkins job. If you are happy that you managed to create that one job and it finally runs, and you are reluctant to do all that work again, that's where you should try to improve.
Because multi-configuration jobs are more complex. They usually require you to think about both the main job and the different sub-job variables, and they tend to grow in complexity beyond a manageable level. So in a single-configuration scenario you'd probably waste thought on complexity you don't need, and when extending the build variables, things might grow in the wrong direction. I'd suggest using simple jobs as the default, and multi-configuration jobs only when there is a real need for multiple configurations.
Because executing multi-configuration jobs may need more job slots on the slaves than single jobs. There will always be a master job that is executed in a special, invisible slot (that's no problem by itself) and triggers the sub-jobs; but if those sub-jobs themselves trigger more sub-jobs, you might easily end up in a deadlock when there are more sub-jobs than slots and some sub-jobs trigger further sub-jobs that then cannot execute because there are no open slots left. This can be worked around with some configuration on the slaves, but the risk is there and might only surface when several multi-configuration jobs run concurrently.
So in essence: the multi-configuration job is a more complex thing, and because complexity should be avoided unless necessary, the regular freestyle job is the better default.
If you want to select which slave you run the job on, you need to use a multi-configuration project; otherwise you won't be able to select/limit the slaves you run it on. There are three ways to do it, and I've tried them all: the Tie plugin works only for the master job, and the Restrict setting in the Advanced Project Options is not a rock-solid trigger either, so you want to use the Slave axis, which is proven to work correctly today.