I have several separate GitHub repositories in a GitHub organization for which I want to run the same build tests with Travis CI.
That is, I want to be able to use the same .travis.yml for all of these repositories. Moreover, I'd like to be able to update this file and have those changes be valid for each repository.
I could copy the .travis.yml into each repository. But if I have a hundred or two hundred repositories, that gets annoying fast.
Is there any way to simply point each repository to an external .travis.yml rather than having to put a duplicate .travis.yml file in each repository?
There isn't a way to do this with a remote .travis.yml file, as Travis CI looks for that file at the root of the repository. An alternative approach I would suggest to accomplish your goal:
Build automation that updates each repository's .travis.yml from a shared common file. Using your favorite scripting language, update the file in all specified repositories and push the changes to GitHub/GitLab automatically. This should keep maintenance of your repositories down to a bit of extra automated work.
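For example, here is a minimal Python sketch of that automation. The repository list, file paths, and commit message are placeholders for your own setup, and it assumes git is on the PATH with push access to each repo:

    #!/usr/bin/env python3
    """Push a shared .travis.yml to every repository in a list (sketch)."""
    import shutil
    import subprocess
    import tempfile
    from pathlib import Path

    SHARED_FILE = Path("common/.travis.yml")   # the single source of truth
    REPOS = [                                  # hypothetical clone URLs
        "git@github.com:my-org/repo-one.git",
        "git@github.com:my-org/repo-two.git",
    ]

    def sync(repo_url):
        with tempfile.TemporaryDirectory() as tmp:
            subprocess.run(["git", "clone", repo_url, tmp], check=True)
            shutil.copy(SHARED_FILE, Path(tmp) / ".travis.yml")
            subprocess.run(["git", "-C", tmp, "add", ".travis.yml"], check=True)
            # Commit and push only if the file actually changed.
            staged = subprocess.run(["git", "-C", tmp, "diff", "--cached", "--quiet"])
            if staged.returncode != 0:
                subprocess.run(["git", "-C", tmp, "commit", "-m",
                                "Sync shared .travis.yml"], check=True)
                subprocess.run(["git", "-C", tmp, "push"], check=True)

    for url in REPOS:
        sync(url)

Run it from a cron job or a dedicated CI job whenever the shared file changes, and every repository picks up the new configuration on its next build.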
I have been searching far and wide to see if I can find information on Jenkins incremental pipeline builds that do not involve Maven.
The general idea is that I want to build a generic project and run specific steps of the pipeline if the underlying code has changed. If the code did not change, I want to re-use the results from a previous build.
The reason why I want to do this, is to drastically reduce build times for huge projects.
Imagine that you only need to fix one line in an SCSS file, but the whole project needs to be rebuilt, repackaged, etc. because of it. In the meantime, the site is live and broken, waiting 15 minutes for the fix.
Can someone give a basic example of how such a build can be created or where I can find more information on incremental building?
The only thing I have been able to find is incremental building for Maven projects, but this is not applicable for me.
The standard solution is to create modules that depend on each other.
Publish the built artifacts of your modules to a binary repository like Sonatype Nexus (you can easily create a private npm repo as well as a proxy npm repo).
During the build, download the dependencies instead of building them.
If this solution is not the one you want to take, you will have a hard time hacking together a solution. To persist the state of your steps, an easy solution is to create files in the job workspace and read them on the next build.
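As a rough Python illustration of that state-file idea: hash the files a step depends on, keep the digest in a stamp file in the workspace, and rerun the step only when the digest changes. The paths and the build command here are hypothetical, and it assumes the workspace is not wiped between builds:

    #!/usr/bin/env python3
    """Skip a build step when its inputs have not changed (sketch)."""
    import hashlib
    import subprocess
    from pathlib import Path

    def digest(paths):
        # Stable hash over file names and contents.
        h = hashlib.sha256()
        for p in sorted(paths):
            h.update(str(p).encode())
            h.update(p.read_bytes())
        return h.hexdigest()

    def run_if_changed(name, inputs, command, workspace=Path(".")):
        stamp = workspace / f".stamp-{name}"
        current = digest(inputs)
        if stamp.exists() and stamp.read_text() == current:
            print(f"{name}: inputs unchanged, reusing previous result")
            return
        subprocess.run(command, check=True)
        stamp.write_text(current)  # persists in the workspace for the next build

    # Example: only rebuild the stylesheets when a .scss file changed.
    run_if_changed("styles", Path("src").rglob("*.scss"),
                   ["npm", "run", "build:css"])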
I'm trying to set up master/slave on two OS X machines, using only the slave for builds.
I have one main Git repo for the project, but a few additional files are kept in a separate Git repository. After adding the two Git repositories, I noticed that after checkout there are two project directories under the workspace, and one has "#2" in the name.
That causes a problem with copying/accessing files between the two repositories using the $WORKSPACE variable, as the directory "project_name#2" is not logically part of $WORKSPACE (I get a file-not-found error). I also can't know which repo will end up in which directory on each new build.
Even more confusing, sometimes it even pulls one repo to the master machine, even though I've configured the build to execute only on the slave.
I'd appreciate any advice or suggestions.
If you're using the Git plugin, you can use the advanced clone settings and specify a subfolder to clone into. Make sure to clone each repository into a different folder under your workspace, and then you can access both. I'm doing that in some of my projects and it works like a charm.
Use the Multiple SCMs Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Multiple+SCMs+Plugin
Good luck!
I would like each of my Git repositories to have its own build.xml file, but avoid having to copy and paste a lot of macrodefs used by the different build scripts.
What would be the best way to organize this?
Adding the Ant macrodefs to a separate Git repository and making them available to all the build projects on my Jenkins server?
Adding them, for instance, to a directory in the Ant installation folder?
Does anybody have experience with this kind of setup?
I do the same. I feel strongly that every project should be stand-alone and not depend on another source code repository. To achieve this I package my common macrodefs as an Antlib. These are simply jar files that can be imported into the Ant build like other third-party tasks.
The following answer explains how to create an antlib:
How to manage a common ant build script across multiple project build jobs on jenkins?
The really big advantage of this approach comes when you save the Antlib jar in a versioned repository like Nexus. I use Ivy to manage my dependencies, and this allows my common build logic to be versioned, enabling me to safely change logic without breaking older builds. When you think about it, this is the same kind of workflow implemented by Maven (which also downloads plugins).
I have looked into Jenkins tutorials, and almost all of them mention that we should provide the URL of the Git repo.
Fine.
But once Jenkins has access to the Git repo, what part of the project does it look at to figure out which tests should be run, or whether to run them at all, etc.? Is it some configuration file in the repo?
I guess that depends on what kind of project your repo holds, if I understand the question correctly. The provided URL gives Jenkins the information to do a git clone of that URL, which checks out the project into the Jenkins workspace.
Then, according to the project type, let's say it's a Maven project, you fill in the goals you'd like Jenkins to run, usually clean test. They are then run at the top level, the root of the project, on the assumption that a pom.xml will be found there. If not, you'll have to tell Jenkins where to look.
A clearer answer would perhaps be easier to give if you said what kind of project you'd like to build.
I have a project with several dependencies on remote repositories (all on github.com at the moment, if that helps). The dependencies don't change often. It would be nice if there were a way to keep the existing rebar.config files the same, so that they point to the upstream repositories, but to be able to cache the repos (or a snapshot) locally so that clean builds don't need to go to the internet.
Is there any way of doing this, e.g. rebar command-line options, environment settings, Git options, etc.?
I suppose you could do a couple of things:
1. Make your own local clone of all the repositories and change rebar.config to take the deps from there (see the sketch after this list). At first glance it seems like a horrible solution, but it has a lot of advantages: GitHub is not always available, clone speed will increase, and, most valuable of all, projects evolve, and one day you will find that everything is broken because one of the deps changed its API on the master branch.
2. You could make a local deps folder with all the repos you need and share it via a symlink with every project that needs it.
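A rough Python sketch of option 1: keep bare mirrors of each dependency so clones can come from the local file system instead of the internet. The mirror location and the dependency list are hypothetical; afterwards you would point the deps entries in rebar.config at the mirror paths (e.g. /var/mirrors/cowboy.git):

    #!/usr/bin/env python3
    """Maintain local bare mirrors of remote dependencies (sketch)."""
    import subprocess
    from pathlib import Path

    MIRROR_ROOT = Path("/var/mirrors")                   # hypothetical location
    DEPS = ["https://github.com/ninenines/cowboy.git"]   # example dependency

    for url in DEPS:
        target = MIRROR_ROOT / url.rsplit("/", 1)[-1]    # /var/mirrors/cowboy.git
        if target.exists():
            # Refresh the existing mirror while the network is available.
            subprocess.run(["git", "-C", str(target), "remote", "update"],
                           check=True)
        else:
            subprocess.run(["git", "clone", "--mirror", url, str(target)],
                           check=True)

Clean builds then fetch from the mirror paths, and you control exactly when the snapshots are refreshed from upstream.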
Rebar has a feature that lets you add a custom script file, rebar.config.script, to modify rebar's configuration dynamically. This lets you implement something similar to #danechkin's answer #2 except using an environment variable to switch between the local shared deps folder and the default one for the project. No changes to rebar.config needed. The example at https://github.com/basho/rebar/wiki/Dynamic-configuration shows how to do this.