I have set up my Hudson job A. Job A depends on jobs B and C, which I have set up with "Build other projects". This works well, although each job lives in a separate directory in my workspace (the default structure). But I need jobs B and C in job A's workspace (its root folder).
I have considered two approaches:
Change the workspace for job A and push that location to the other jobs via "Trigger parameterized build on other projects", then use an Ant build script to copy them to that location, since I couldn't find an option to change the folder that job B or C goes into.
Trigger job B and then C from a build script as part of job A. This is done via remote calls (I found it somewhere on Stack Overflow), but that option is missing in my configuration and I couldn't find any plugin that would add it.
The ideal approach for me would be to use an Ant build script and trigger jobs B and C from there, with antsvn or something like that. But I can't find a solid example of this.
The reason why I want it this way is simple: job B is a CMS which is essential for job A, and job C has Python scripts that need to be executed before a new version can land on the production server (this part is already done with py-ant).
Or maybe there is some better way to manage dependencies like this. Any help is appreciated.
I hope it makes sense.
Think of Jobs "B" and "C" as producing "artifacts" that Job "A" needs. Then, all you have to do is import the artifacts produced by Jobs "B" and "C" whenever you build Job "A".
Your jobs shouldn't share workspaces. Otherwise what happens if Job "A" is building when Job "B" or "C" is triggered? You'll have multiple builds going on at once. However, if you separate out what "A" needs from jobs "B" and "C", you can have Job "A" import those dependencies. There are two ways of doing this:
The hard but correct way: You should create a release repository where jobs can fetch the artifacts they need. If this sounds Maven-ish to you, well, it is. However, I've used Maven's architectural approach without Maven projects and it works fine. You can use something like Artifactory or Nexus as your release repository. Then use wget or curl to fetch items from the repository, and use Maven's deploy:deploy-file plugin to send your stuff over. You will need Maven (which is a Java process) to run deploy:deploy-file, but you don't need a Maven project, or even a Java project; the deploy:deploy-file goal doesn't even require a Maven pom.xml file. Think of it more as a command-line utility for sending stuff to your release repository.
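A minimal sketch of both directions, assuming a Nexus-style repository at a made-up URL (the coordinates and repository id are placeholders):

# Job B publishes its artifact to the release repository; no pom.xml needed,
# all coordinates are passed on the command line.
mvn deploy:deploy-file \
    -Durl=http://nexus.example.com/repository/releases \
    -DrepositoryId=releases \
    -Dfile=build/cms.zip \
    -DgroupId=com.example \
    -DartifactId=cms \
    -Dversion=1.0.0 \
    -Dpackaging=zip

# Job A fetches that artifact at build time (standard Maven repository layout).
curl -O http://nexus.example.com/repository/releases/com/example/cms/1.0.0/cms-1.0.0.zip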
The easy, but incorrect way: Hudson has a Copy Artifacts plugin that you can use to do this. The problem is that it's easy to set up, but hard to keep track of afterwards. Plus, it makes you dependent on a very specific tool: if you decide to move away from Hudson, you might not be able to duplicate this functionality.
I have a multi-module Maven project that installs a whole bunch of artifacts (with different classifiers) into the local Maven repository. I also have a second Maven project that uses the Maven Dependency Plugin to collect those artifacts into a number of different directories (for installer-building purposes). And finally I have a Jenkins server that I want to do all of that for me.
There are a number of requirements I would like to see fulfilled:
1. Building the source code (and running the tests) and building the installers should be two separate jobs, Job A and Job B.
2. Job A needs to finish quickly; as it contains the tests, the developers should get feedback as fast as possible.
3. The artifacts of Job B take up a lot of space but need to be archived, so this job should only run when the results of Job A meet certain requirements (which are not part of this problem).
4. Job B needs to be connected to Job A. It must be possible to tell exactly which Job A build created the files that were used in a given Job B build. (It is also possible that I need a run of Job B for a particular build of Job A from three weeks and 200 builds ago.)
5. And finally, both jobs should be executable locally on a developer's machine, so I would love to keep most of the configuration within Maven and only relegate to Jenkins what's absolutely necessary. (Using the Copy Artifacts Plugin I can collect the artifacts from Job A into the required directories in Job B, but by removing the collection step from the Maven project I also take away the developer's ability to do local builds.)
Parts of 3 and 4 can be achieved using the Promoted Builds plugin for Jenkins. However, I cannot seem to make sure that the files collected in Job B are exactly the files created by a certain run of Job A. During development all our version numbers of all involved projects are suffixed with “-SNAPSHOT” so that an external job has no way of knowing whether it actually got the correct file or whether it was given a newer file because another instance of Job A has been running concurrently. The version numbers are then increased directly before a release.
Here are some things I have tried and found to be unsatisfactory:
Use a local repository in the workspace directory of Job A (see the sketch after this list). This will, upon each build, download all of the dependencies from our Nexus. While this does not have a huge impact on disk space, it consumes far too much time.
Merge Job A and Job B into a single job. As Job B takes more time than Job A, developers have to wait longer for feedback, it still uses a lot of disk space, and it doesn't really solve the problem: there is still the possibility of another Job A+B running at the same time.
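For reference, the workspace-local repository from the first item is just a command-line flag; a sketch (the goals are illustrative):

# Keep the local Maven repository inside Job A's workspace so concurrent
# builds cannot overwrite each other's installed artifacts.
mvn -Dmaven.repo.local="$WORKSPACE/.repository" clean install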
Am I missing something obvious here? Are Maven or Jenkins or the combination of both unable to do what I want? What else could I try?
I have a project that has 3-5 different mercurial branches going at all times. I want to schedule a weekly Jenkins test to run our tests on all relevant branches.
What I want, I think, is a parameterized build, with the branch name as the parameter, and then to have a list of branches, and once a week, run the parameterized build with each of the parameters in the list.
However, I see that you can't send parameters into a triggered build. I assume that there is a plugin for this. Is Job Generator the correct plugin? Is there something better?
I should mention that currently we are doing this with multiple SCMs, with the body of the build being an sh loop that runs through each directory and runs the tests (roughly as sketched below). This is really inefficient, and a pain to maintain...
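For context, the current build body looks roughly like this (directory and script names are made up):

# One checkout per branch via the Multiple SCMs setup, then a serial loop.
for branch_dir in default testing devel; do
    (cd "$branch_dir" && ./run_tests.sh)   # hypothetical test entry point
done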
I can suggest one solution, but it couldn't be called elegant.
First, you need to create a multi-configuration project (aka matrix project).
In this project, declare one node (it can be the already-existing master node)
and one axis (for example BRANCH; be careful not to clash with the variables listed in Jenkins Set Environment Variables) with values corresponding to each branch (for example default, testing, devel, etc.).
Then add a build step to your project that checks the environment variable (the previously declared $BRANCH) and discovers which branch this build was launched for (the main idea is illustrated by the bash sketch below).
And finally, you need to manually get the sources from the corresponding branch.
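A sketch of such a build step, assuming Mercurial and example branch names:

# $BRANCH is this subproject's value of the matrix axis.
case "$BRANCH" in
    default|testing|devel)
        hg pull && hg update --clean "$BRANCH"   # get sources for this branch
        ;;
    *)
        echo "Unknown branch: $BRANCH" >&2
        exit 1
        ;;
esac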
Next build steps can be the same for all branches.
This approach has a set of drawbacks:
1. You cannot trigger this project from repository changes (with the Mercurial plugin you can poll only one branch).
2. All subprojects will be rebuilt even if they have not changed.
3. Appropriate only for statically defined branches.
4. Not elegant.
But it has one advantage over a parameterized build:
1. All artifacts (and build logs) of the branches are stored in separate directories (because they are separate subprojects).
I am trying to set up a rather unique build flow, and I haven't found a plugin or a way to do it with Jenkins yet.
There is one job called "JOB A" which is used by itself and creates a standalone installer.
Then there is "JOB B" which creates another installer but it needs to include everything built in "JOB A" in addition to some other stuff. Now I could just copy JOB A build steps into JOB B, but I want to actually build JOB A and maybe even use those artifacts later as well.
It cannot be a build trigger, because JOB B needs to continue building after JOB A has finished. And I cannot use something like flow, because that creates a JOB C which only sequences the other jobs, and I would still need to go into A and B to get the artifacts.
Bonus points would be if it checked JOB A source code in git for any changes since its last build when building JOB B and decide if it needs to build it again.
I looked at many plugins and I can't seem to find one that would do this.
I hope my explanation was not confusing. Sorry if it was, I could elaborate.
If I understand correctly what you want, then what you need is:
Custom (shared) workspace
Parameterized Trigger Plugin
For both JOB A and JOB B, set Custom Workspace to the same folder on the server. (You can even leave the JOB A workspace as is, and just point the JOB B custom workspace at the workspace of JOB A.) I am not at my work computer with Jenkins and can't provide screenshots, so I will refer to this great guide for more info on how to set up a custom workspace.
Then, whenever appropriate, have JOB A execute a build step Trigger/call builds on other projects, namely JOB B. You can even pass it all the same parameters that JOB A had. By default, this will not wait for JOB B to complete. It will kick off JOB B, meanwhile JOB A will finish running, and then JOB B completes whenever it is done.
If needed, you can check-mark Block until triggered projects finish their builds, and then JOB A will wait for JOB B to finish before continuing.
So, the above will:
Share workspace, and not do extra checkouts if code didn't change
Let JOB A and JOB B exist independently, each with its own artifacts, and each able to be triggered separately.
JOB B will get everything from JOB A through shared workspace and passed parameters.
I have a fairly complicated Jenkins job that builds, unit tests and packages a web application. Depending on the situation, I would like to do different things once this job completes. I have not found a re-usable/maintainable way to do this. Is that really the case or am I missing something?
The options I would like to have once my complicated job completes:
Do nothing
Start my low-risk-change build pipeline:
copies my WAR file to my artifact repository
deploys to production
Start my high-risk-change build pipeline:
copies my WAR file to my artifact repository
deploys to test
run acceptance tests
deploy to production
I have not found an easy way to do this. The simplest, but not very maintainable, approach would be to make three separate jobs, each of which kicks off a downstream build. This approach scares me for a few reasons, including the fact that changes would have to be made in three places instead of one. In addition, many of the downstream jobs are also nearly identical; the only difference is which downstream jobs they call. The proliferation of jobs seems like it would lead to an unmaintainable mess.
I have looked at using several approaches to keep this as one job, but none have worked so far:
Make the job a multi-configuration project (https://wiki.jenkins-ci.org/display/JENKINS/Building+a+matrix+project). This provides a way to inject the job with a parameter. I have not found a way to make the "build other projects" step respond to a parameter.
Use the Parameterized-Trigger plugin (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin). This plugin lets you trigger downstream-jobs based on certain triggers. The triggers appear to be too restrictive though. They're all based on the state of the build, not arbitrary variables. I don't see any option provided here that would work for my use case.
Use the Flexible Publish plugin (https://wiki.jenkins-ci.org/display/JENKINS/Flexible+Publish+Plugin). This plugin has the opposite problem as the parameterized-trigger plugin. It has many useful conditions it can check, but it doesn't look like it can start building another project. Its actions are limited to publishing type activities.
Use Flexible Publish + Any Build Step plugin (https://wiki.jenkins-ci.org/display/JENKINS/Any+Build+Step+Plugin). The Any Build Step plugin allows making any build action available to the Flexible Publish plugin. While more actions were made available once this plugin was activated, those actions didn't include "build other projects."
Is there really not an easy way to do this? I'm surprised that I haven't found it, and even more surprised that I haven't really seen anyone else trying to do this. Am I doing something unusual? Is there something obvious that I am missing?
If I understood it correctly, you should be able to do this by following these steps:
First Build Step:
Does the regular work; in your case: building, unit testing and packaging the web application.
Depending on the result, let it create a file with a specific name.
That is, if you want the low-risk pipeline to run afterwards, create a file low-risk.prop (a sketch follows below).
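A sketch of that decision logic (the RISK variable and the condition are placeholders for whatever your actual result check is):

# Write the marker file whose KEY=VALUE lines become the parameters
# of the job triggered in the second build step.
if [ "$RISK" = "low" ]; then
    echo "PIPELINE=low-risk" > low-risk.prop
elif [ "$RISK" = "high" ]; then
    echo "PIPELINE=high-risk" > high-risk.prop
fi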
Second Build Step:
Create a Trigger/call builds on other projects step from the Parameterized Trigger plugin.
Enter the name of your low-risk job into the Projects to build field.
Click on: Add Parameter
Choose: Parameters from properties File
Enter low-risk.prop into the Use properties from file Field
Enable Don't trigger if any files are missing
Third Build Step:
Check if a low-risk.prop file exists
Delete the File
Do the same for the high-risk job
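The cleanup can be a one-liner:

# Remove the marker files so a later build cannot trigger on stale ones.
rm -f low-risk.prop high-risk.prop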
Now you should have the following setup:
if a file called low-risk.prop appears during the first build step, the low-risk job will be started
if a file called high-risk.prop appears during the first build step, the high-risk job will be started
if there's no .prop file, nothing happens
And that's what you wanted to achieve, isn't it?
Have you looked at the Conditional Build Plugin? (https://wiki.jenkins.io/display/JENKINS/Conditional+BuildStep+Plugin)
I think it can do what you're looking for.
If you want a conditional post-build step, there is a plugin for that:
https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task
It will search the console log for a RegEx you specify, and if found, will execute a custom script. You can configure fairly complex criteria, and you can configure multiple sets of criteria each executing different post build tasks.
It doesn't provide you with the usual "build step" actions, so you've got to write your own script there. You can trigger execution of the same job with different parameters, or of another job with some parameters, in the standard ways that Jenkins supports (for example using curl, as sketched below).
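For example, triggering a downstream parameterized job through the Jenkins remote API could look like this (URL, job name, parameter and credentials are all placeholders):

# buildWithParameters is the standard endpoint for parameterized jobs;
# requires a user API token (newer Jenkins may also want a CSRF crumb).
curl -X POST "http://jenkins.example.com/job/deploy-low-risk/buildWithParameters" \
    --user "user:apitoken" \
    --data "WAR_VERSION=1.2.3"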
Yet another alternative is the Jenkins Text-finder plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Text-finder+Plugin
This is a post-build step that lets you forcibly mark a build as "unstable" if a regex is found in the console text (or even in some file in the workspace). So, in your build steps, depending on your conditions, echo a unique line into the console log, and then match that line with a regex. You can then use "Trigger parameterized builds" and set the condition to "unstable". This has the added benefit of visually marking the build differently (with a yellow ball); however, you get only one conditional option with this method, and from your OP it looks like you need two.
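A sketch of the marker idea (the variable and marker string are arbitrary):

# In a build step: emit a unique marker line when the pipeline should run.
if [ "$NEEDS_DEPLOY" = "yes" ]; then    # hypothetical condition
    echo "MARKER_TRIGGER_DEPLOY"        # Text-finder regex: ^MARKER_TRIGGER_DEPLOY$
fi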
Try a combination of these two methods.
Do you use Ant for your builds?
If so, it's possible to do conditional building in Ant by having a set of environment variables that your build scripts use to decide what to build. In Jenkins, your build will then nominally build all of the projects, but each actual build script decides whether to really build or just short-circuit (sketched below).
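The idea, sketched as a shell wrapper around the Ant calls (in Ant itself you would guard targets with <condition>; DEPLOY_MODE and the target names are assumptions):

# The build always runs; the pipeline targets short-circuit unless requested.
ant build
case "$DEPLOY_MODE" in
    low-risk)  ant deploy-production ;;
    high-risk) ant deploy-test acceptance-tests deploy-production ;;
    *)         echo "No pipeline selected; stopping here." ;;
esac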
I think the way to do it is to add an intermediate job that you put in the post-build step and pass to it all the parameters your downstream jobs could possibly need, and then within that job place conditional builds for the real downstream jobs.
The simplest approach I found is to trigger other jobs remotely, so that you can use Conditional Build Plugin or any other plugins to build other jobs conditionally.
Why are there two kinds of jobs in Jenkins, the multi-configuration project and the free-style project? I read somewhere that once you choose one of them, you can't (easily) convert to the other. Why wouldn't I always pick the multi-configuration project, to be safe for future changes?
I would like to set up a build for a project building on both Windows and Unix (and other platforms as well). I found this question, which asks the same thing, but I don't really get the answer. Why would I need three matrix projects (and not three free-style projects), one for each platform? Why can't I keep them all in one matrix, with platforms AND (for example) gcc versions on one axis and (my) software versions on the other?
I also read this blog post, but that builds everything on the same machine, with just different Python versions.
So, in short: how do most people configure a multi-configuration project targeting many different platforms?
The two types of jobs have separate functions:
Free-style jobs: these allow you to build your project on a single computer or label (a group of computers, e.g. "Windows-XP-32").
Multi-configuration jobs: these allow you to build your project on multiple computers or labels, or a mix of the two (e.g. Windows-XP, Windows-Vista, Windows-7 and RedHat), which is useful for checking compatibility or building for multiple platforms (Qt programs?)
If you have a project which you want to build on Windows & Unix, you have two options:
Create a separate free-style job for each configuration, in which case you have to maintain each one individually
You have one multi-configuration job, and you select 2 (or more) labels/computers/slaves - 1 for Windows and 1 for Unix. In this case, you only have to maintain one job for the build
You can keep your gcc versions on one axis and your software versions on another. There is no reason you should not be able to.
The question that you link makes a fair point, but one that does not relate to your question directly: in his case, he had a multi-configuration job A which, on success, triggered another job B. Now, in a multi-configuration job, if one of the configurations fails, the entire job fails (obviously, since you want your project to build successfully on all of your configurations).
IMHO, for building the same project on multiple platforms, the better way to go is to use a multi-configuration style job.
Another option is to use a Python build step to check the current OS and then call an appropriate setup or build script. In the Python script, you can save the updated environment to a file and inject that environment again using the EnvInject plugin for subsequent build steps. Depending on the size of your build environment, you could also use a multi-platform build tool like SCons.
You could create a script (e.g. build) and a batch file (e.g. build.bat) that get checked in with your source code. In Jenkins in your build step you can call $WORKSPACE/build - Windows will execute build.bat whereas Linux will run build.
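A minimal sketch of the Unix half (the build command is an assumption; build.bat would contain the Windows equivalent):

#!/bin/sh
# build -- checked in next to build.bat. Jenkins invokes $WORKSPACE/build,
# so Unix runs this script while Windows resolves build.bat instead.
set -e
mvn clean package   # placeholder for your real build command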
An option is to use a user-defined axis combined with slaves (windows, linux, ...). You then need to add a filter for each combination and use the Conditional BuildStep Plugin to make each build step specific to one platform (Execute shell, Execute Windows batch command, ...).
This link has a tutorial; it is in Portuguese, but it's easy to work it out from the images:
http://manhadalasanha.wordpress.com/2013/06/20/projeto-de-multiplas-configuracoes-matrix-no-jenkins/
You could use the variables that Jenkins creates when you define a configuration matrix axis. For example:
You create a slave axis named OSTYPE and check the two slaves (Windows and Linux). Then you create two separate build steps that check the OSTYPE environment variable, as sketched below.
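Each build step is then guarded by a check like this (the axis values are whatever you defined):

# Run this step only on the Linux slave of the matrix.
if [ "$OSTYPE" = "linux" ]; then    # OSTYPE is the user-defined slave axis
    ./configure && make             # hypothetical Unix build commands
fi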
Alternatively, you could use a portable scripting language like Python, which is multi-platform and can achieve the same functionality independently of the slaves' names, in just one build step.
If you go the matrix route with Windows and something else, you'll want the XShell plugin. You just create your two build scripts such as "build.bat" for cmd and "build" for bash, and tell XShell to run "build". The right one will be run in each case.
A hack to have batch files run on Windows and shell scripts on Unix:
On Unix, make batch files exit with 0 exit status:
ln -s /bin/true /bin/cmd
On Windows, either find a true.exe, rename it to sh.exe, and place it somewhere in the PATH.
Alternatively, if you have any sh.exe installed on Windows (From Cygwin, Git, or other source), add this to the top of the shell script in Jenkins:
[ -n "$WINDIR" ] && exit 0
Why wouldn't you always pick the multi-configuration job type?
Some reasons come to mind:
Because jobs should be easy to create and configure. If it is hard to configure any job in your environment, you are probably doing something wrong outside the scope of the Jenkins job. If you are happy that you managed to create that one job and it finally runs, and you are reluctant to do all of this work again, that's where you should try to improve.
Because multi-configuration jobs are more complex. They usually require you to think about both the main job and the different sub-job variables, and they tend to grow in complexity beyond a manageable level. In a single-job scenario you would waste thought on complexity you don't use, and when extending the build variables, things can grow in the wrong direction. I'd suggest using simple jobs as the default, and multi-configuration jobs only when there is an actual need for multiple configurations.
Because executing multi-configuration jobs may need more job slots on the slaves than single jobs. There will always be a master job that is executed in a special, invisible slot (that's no problem by itself) and triggers the sub-jobs. But if those sub-jobs themselves trigger sub-jobs, you can easily end up in a deadlock: there are more sub-jobs than slots, and some sub-jobs trigger further sub-jobs that then cannot execute because there are no open slots left. This can be circumvented with some configuration setup on the slaves, but the problem exists and will typically only occur when several multi-configuration jobs run concurrently.
So in essence: The multi configuration job is a more complex thing, and because complexity should be avoided unless necessary, the regular freestyle job is a better default.
If you want to select which slave the job runs on, you need to use a multi-configuration project; otherwise you won't be able to select/limit the slaves you run it on. There are three ways to do it, and I've tried them all: the Tie plugin works only for the master job, and the Restrict option in Advanced Project Options is not a rock-solid trigger either, so you want to use the Slave axis, which is proven to work correctly today.