Setting up Travis CI for multiple sub-projects in one repo

I have several folders in one GitHub project, each with a different .travis.yml file.
What is the correct way to set up Travis CI so that I can specify which folder / sub-project to build?
I can add before_script: cd function, but is there a way to trigger the build only when function is updated on the develop branch? Basically, I don't want to trigger the build when I update an unrelated sub-folder.
Any advice is much appreciated.

You can add a list of branches to build:
branches:
  only:
    - develop
    - release
    - master
And do different steps for each branch:
deploy:
  - provider: script
    script: bash -x deploy/script_deploy.sh develop
    on:
      branch: develop
Do you need something else?
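As for building only when a particular folder changes: Travis CI has no built-in path filter, but the job can inspect the commit range itself and bail out early. A minimal sketch, assuming the sub-project lives in function/ and using Travis's TRAVIS_COMMIT_RANGE variable (which can be empty on the first push of a branch); build.sh is a hypothetical entry point:
script:
  - |
    # Only do real work when files under function/ changed in this push
    if git diff --name-only $TRAVIS_COMMIT_RANGE | grep -q '^function/'; then
      cd function && ./build.sh
    else
      echo "No changes under function/; skipping."
    fi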

Related

BitBucket pipeline is not using cache for npm install

I have a single BitBucket repository containing the code for an Angular app in a folder called ui and a Node API in a folder called api.
My BitBucket pipeline runs ng test for the Angular app, but the node_modules folder isn't being cached correctly.
This is my BitBucket Pipeline yml file:
image: trion/ng-cli-karma

pipelines:
  default:
    - step:
        caches:
          - angular-node
        script:
          - cd ui
          - npm install
          - ng test --watch=false

definitions:
  caches:
    angular-node: /ui/node_modules
When the build runs, it shows:
Cache "angular-node": Downloading
Cache "angular-node": Extracting
Cache "angular-node": Extracted
But when it performs the npm install step it says:
added 1623 packages in 41.944s
I am trying to speed up the build, and I can't work out why npm needs to install the dependencies when they should already be contained in the cache that was just restored.
My guess is that your cache path is not correct. There is a pre-configured node cache (named "node") that can simply be activated, so usually there is no need to define a custom cache. (Here the default cache fails because your node build is in a sub-folder of the clone directory, so you do need a custom cache.)
Cache paths are relative to the clone directory. Bitbucket clones into /opt/atlassian/pipelines/agent/build, which is probably why your absolute cache path did not work.
Simply making the cache reference relative should do the trick:
pipelines:
  default:
    - step:
        caches:
          - angular-node
        script:
          - cd ui
          - npm install
          - ng test --watch=false

definitions:
  caches:
    angular-node: ui/node_modules
That may fix your issue.
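For reference, if the app lived at the repository root rather than in ui/, the pre-defined node cache mentioned above would be enough. A sketch:
pipelines:
  default:
    - step:
        caches:
          - node   # pre-defined Bitbucket cache covering node_modules in the clone directory
        script:
          - npm install
          - npm test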

Share gitlab-ci.yml between projects

We are thinking of moving our CI from Jenkins to GitLab. We have several projects that share the same build workflow. Right now we use a shared library where the pipelines are defined, and the Jenkinsfile inside each project only calls a method from the shared library that defines the actual pipeline, so changes only have to be made at a single point to affect several projects.
I am wondering if the same is possible with GitLab CI? As far as I have found out, it is not possible to define the gitlab-ci.yml outside the repository. Is there another way to define a pipeline and share this config with several projects to simplify maintenance?
GitLab 11.7 introduces new include methods, such as include:file:
https://docs.gitlab.com/ee/ci/yaml/#includefile
include:
  - project: 'my-group/my-project'
    ref: master
    file: '/templates/.gitlab-ci-template.yml'
This will allow you to create a new project on the same GitLab instance which contains a shared .gitlab-ci.yml.
First let me start by saying: thank you for asking this question! It triggered me to search for a solution (again) after often wondering whether this was even possible myself. We also have 20 - 30 projects that are nearly identical, each with a .gitlab-ci.yml file of about 400 - 500 LOC that has to be changed whenever one thing changes.
So I found a working solution:
Inspired by the Auto DevOps .gitlab-ci.yml template GitLab itself created, where a single script defines all the functions used and every before_script loads them, I came up with the following setup:
Multiple project repos (project-1, project-2) requiring a shared set of CI jobs / functions
A functions script containing all shared functions in a separate repo
Files
So, using a shared CI functions script (functions.sh):
#!/bin/bash

function list_files {
  ls -lah
}

function current_job_info {
  echo "Running job $CI_JOB_ID on runner $CI_RUNNER_ID ($CI_RUNNER_DESCRIPTION) for pipeline $CI_PIPELINE_ID"
}
A common and generic .gitlab-ci.yml:
image: ubuntu:latest

before_script:
  # Install curl
  - apt-get update -qqq && apt-get install -qqqy curl
  # Get shared functions script
  - curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/master/functions.sh
  # Set permissions
  - chmod +x functions.sh
  # Run script and load functions
  - . ./functions.sh

job1:
  script:
    - current_job_info
    - list_files
You could copy-paste this file from project-1 to project-2 and it would use the same shared GitLab CI functions.
These examples are pretty verbose for demonstration purposes; optimize them any way you like.
Lessons learned
So after applying the construction above on a large scale (40+ projects) I want to share some lessons learned so you don't have to find out the hard way:
Version (tag / release) your shared CI functions script; otherwise changing one thing can make all pipelines fail at once (see the pinning sketch below).
Using different Docker images could cause an issue with the requirement for bash to load the functions (e.g. I use some Alpine-based images for CLI-tool-based jobs, and they only have sh by default).
Use project-level CI/CD secret variables to personalize build jobs per project, like environment URLs etc.
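For example, to apply the first lesson to the before_script above, pin the download to a tag instead of master (v1.0.0 is a hypothetical tag name):
before_script:
  # Pin the shared functions to a released tag so changes on master can't break this pipeline
  - curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/v1.0.0/functions.sh
  - chmod +x functions.sh
  - . ./functions.sh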
Since GitLab 12.6, it's possible to define an external .gitlab-ci.yml file.
To customize the path:
Go to the project's Settings > CI / CD.
Expand the General pipelines section.
Provide a value in the Custom CI configuration path field.
Click Save changes.
...
If the CI configuration will be hosted on an external site, the URL link must end with .yml:
http://example.com/generate/ci/config.yml
If the CI configuration will be hosted in a different project within
GitLab, the path must be relative to the root directory in the other
project, with the group and project name added to the end:
.gitlab-ci.yml#mygroup/another-project
my/path/.my-custom-file.yml#mygroup/another-project
Use the include feature (available since GitLab 10.6):
https://docs.gitlab.com/ee/ci/yaml/#include
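In its earliest form, include simply took the URL of a reachable YAML file. A minimal sketch with a placeholder URL:
include: 'https://example.com/ci-templates/common.yml'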
So, here is what I eventually came up with:
Right now we use a mixed approach of @stefan-van-gastel's idea of a shared CI library and the relatively new include feature of GitLab 11.7. We are very satisfied with this approach, as we can now manage our build pipeline for 40+ repositories in a single repository.
I have created a repository called ci_shared_library containing
a shell script for every single build job, containing the execution logic for that step.
a pipeline.yml file containing the whole pipeline config. In the before_script we clone the ci_shared_library to /tmp/shared to be able to execute the scripts.
stages:
  - test
  - build
  - deploy
  - validate

services:
  - docker:dind

before_script:
  # Clear existing shared library
  - rm -rf /tmp/shared
  # Get shared library
  - git clone https://oauth2:${GITLAB_TOKEN}@${SHARED_LIBRARY} /tmp/shared
  - cd /tmp/shared && git checkout master && cd $CI_PROJECT_DIR
  # Set permissions
  - chmod -R +x /tmp/shared
  # Open access to the registry
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

test:
  stage: test
  script:
    - /tmp/shared/test.sh

build:
  stage: build
  script:
    - /tmp/shared/build.sh
  artifacts:
    paths:
      - $CI_PROJECT_DIR/target/RPMS/x86_64/*.rpm
    expire_in: 3h
  only:
    - develop
    - /release/.*/

deploy:
  stage: deploy
  script:
    - /tmp/shared/deploy.sh
  artifacts:
    paths:
      - $CI_PROJECT_DIR/tmp/*
    expire_in: 12h
  only:
    - develop
    - /release/.*/

validate:
  stage: validate
  script:
    - /tmp/shared/validate.sh
  only:
    - develop
    - /release/.*/
Every project that wants to use this pipeline config has to have a .gitlab-ci.yml. In this file, the only thing to do is to include the shared pipeline.yml file from the ci_shared_library repo.
# .gitlab-ci.yml
include:
  - project: 'ci_shared_library'
    ref: master
    file: 'pipeline.yml'
With this approach, everything related to the pipeline lives in one single repository and is reusable. We have the whole pipeline template in one file, but I think it would even be possible to split it up so that every single job lives in its own YAML file. That way it would be more flexible, and one could create default jobs that can be combined differently for projects that have similar jobs but don't all need every job... (a sketch of that split follows below).
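A sketch of such a split, with hypothetical per-job files in the shared repo (with this pre-13.6 syntax, project and ref are repeated for each file):
# .gitlab-ci.yml (sketch); jobs/test.yml and jobs/build.yml are hypothetical files
include:
  - project: 'ci_shared_library'
    ref: master
    file: 'jobs/test.yml'
  - project: 'ci_shared_library'
    ref: master
    file: 'jobs/build.yml'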
With GitLab 13.5 (October 2020), the include feature is even more useful:
Validate expanded GitLab CI/CD configuration with the API
Writing and debugging complex pipelines is not a trivial task. You can use the include keyword to help reduce the length of your pipeline configuration files.
However, if you wanted to validate your entire pipeline via the API previously, you had to validate each included configuration file separately which was complicated and time consuming.
Now you have the ability to validate a fully-expanded version of your pipeline configuration through the API, with all the include configuration included.
Debugging large configurations is now easier and more efficient.
See Documentation and Issue.
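For instance, the fully-expanded configuration can be fetched and validated through the project-level CI lint endpoint. A sketch, where gitlab.example.com and the API_TOKEN variable are placeholders:
# Hypothetical job that asks the API for the expanded, validated config of this project
validate-config:
  stage: test
  script:
    - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/ci/lint"'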
And:
See GitLab 13.6 (November 2020)
Include multiple CI/CD configuration files as a list
Previously, when adding multiple files to your CI/CD configuration using the include:file syntax, you had to specify the project and ref for each file. In this release, you now have the ability to specify the project, ref, and provide a list of files all at once. This prevents you from having to repeat yourself and makes your pipeline configuration less verbose.
See Documentation and Issue.
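With 13.6, the earlier multi-file sketch can drop the repetition; a sketch (file paths hypothetical):
include:
  - project: 'ci_shared_library'
    ref: master
    file:
      - 'jobs/test.yml'
      - 'jobs/build.yml'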
You could look into the concept of Dynamic Child pipeline.
It has evolved with GitLab 13.2 (July 2020):
Dynamically generate Child Pipeline configurations with Jsonnet
We released Dynamic Child Pipelines back in GitLab 12.9, which allow you to generate an entire .gitlab-ci.yml file at runtime.
This is a great solution for monorepos, for example, when you want runtime behavior to be even more dynamic.
We’ve now made it even easier to create CI/CD YAML at runtime by including a project template that demonstrates how to use Jsonnet to generate the YAML.
Jsonnet is a data templating language that provides functions, variables, loops, and conditionals that allow for fully parameterized YAML configuration.
See documentation and issue.
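A minimal sketch of the underlying mechanism, with a generator job whose output feeds a child pipeline (the file name pipeline.jsonnet and the availability of the jsonnet binary in the image are assumptions):
generate-config:
  stage: build
  script:
    # Render the parameterized Jsonnet template into plain CI YAML
    - jsonnet pipeline.jsonnet > generated-pipeline.yml
  artifacts:
    paths:
      - generated-pipeline.yml

child-pipeline:
  stage: deploy
  trigger:
    include:
      # Run the generated YAML as a dynamic child pipeline
      - artifact: generated-pipeline.yml
        job: generate-config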

How to cache an entire directory (PlatformIO dependencies) in Bitbucket Pipelines?

I am running a CI pipeline to build firmware for the ESP8266 using PlatformIO and Bitbucket Pipelines. My code builds successfully, and now I want to cache the directory that contains the PlatformIO libraries (.piolibdeps). Here are the contents of my platformio.ini file:
[env:nodemcuv2]
platform = espressif8266
board = nodemcuv2
framework = arduino
upload_port = 192.168.1.108
lib_deps =
    ESPAsyncTCP@1.1.0
    OneWire
    Time
    FauxmoESP
    Blynk
    DallasTemperature
    ArduinoJson
    Adafruit NeoPixel
How do I cache this directory in Bitbucket Pipelines? Please see below the contents of my bitbucket-pipelines.yml file; with this, the defined directory is not being cached. What's wrong here?
image: eclipse/platformio

pipelines:
  branches:
    develop:
      - step:
          name: Build Project
          caches: # caches the dependencies
            - directories
          script: # Modify the commands below to build your repository.
            - pio ci --project-conf=./Code/UrbanAquarium.Firmware/platformio.ini ./Code/UrbanAquarium.Firmware/src
            - pwd

definitions:
  caches:
    directories: ./Code/UrbanAquarium.Firmware/.piolibdeps
And here is my folder structure.
In case you're still looking for an answer: I think you got it almost right, but you probably need to specify a custom --build-dir (so that you can point your cache at the same path) as well as --keep-build-dir (see https://docs.platformio.org/en/latest/userguide/cmd_ci.html). Also, I'm not sure why you specified a ./Code/UrbanAquarium.Firmware/ prefix.
That said, I tried the above and it quickly got ugly, so for now I only cache ~/.platformio, as well as the default pip cache:
image: python:2.7.16

pipelines:
  default:
    - step:
        caches:
          - pip
          - pio
        script:
          - pip install -U platformio
          - platformio update
          - platformio ci src/ --project-conf=platformio.ini

definitions:
  caches:
    pio: ~/.platformio

Preventing conflicts when deploying multiple distros to PyPI using Travis-CI

I'd like Travis CI to build and deploy the following artefacts to PyPI whenever a new commit hits the master branch:
Python 2 wheel
Python 3 wheel
Source
To make this happen, I've added the following to .travis.yml:
language: python

python:
  - '2.7'
  - '3.5'
  - '3.6'

deploy:
  on:
    branch: master
  provider: pypi
  distributions: bdist_wheel sdist
For normal build/test, the configuration works great. However, it introduces a race condition when deploying to PyPI:
Uploading distributions to https://upload.pypi.org/legacy/
Uploading PyOTA-2.0.0b1.tar.gz
HTTPError: 400 Client Error: File already exists. for url: https://upload.pypi.org/legacy/
What changes should I make to .travis.yml to get Travis CI to deploy the correct artefacts to PyPI?
I ran into this problem today and eventually found this under-documented gem:
deploy:
  provider: pypi
  skip_existing: true
  ...
I use skip_existing: true on a project to get source and wheels published exactly once, even though I test across a couple of different configurations and Python versions. Handy. More details in this resolved GitHub issue. I also submitted a documentation diff.
Some days I think outside the box; other days it's just a really big box.
Previously, this project needed separate wheels for Python 2 and Python 3, so I needed Travis CI to build wheels using different versions of Python.
But recently I got the project to build universal wheels correctly, so now Travis can build all of the deployment artefacts using any one version of Python.
I modified .travis.yml accordingly, and everything is working great:
deploy:
  on:
    branch: master
    python: '3.6'

Multiple Environments in Travis CI

Travis has an easy way to test a project against different PHP versions.
Now I want to run tests for plugins. For that I wrote a script that is called in the install phase of .travis.yml which checks out the main project and moves my plugin source into the correct directory. Then the tests are run. So far so good.
Now I would like to provide two of these scripts. One that checks out the main project's current master branch and one that checks out the latest stable version. The plugin should be tested against both checkouts in completely separate test runs just like it is ran against different PHP versions.
Is there a simple way to configure this in .travis.yml?
You need to use the env option:
env:
  - TEST_NAME=my_test_1
  - TEST_NAME=my_test_2
  - TEST_NAME=my_test_3

script:
  - ./test-run.sh --test-name=${TEST_NAME}
documentation (Set environment variables section)
example
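Applied to the question, each env value would select which checkout runs in its own, completely separate build; a sketch where checkout-project.sh and the ref names are illustrative:
env:
  - MAIN_PROJECT_REF=master   # current master branch
  - MAIN_PROJECT_REF=stable   # latest stable version
install:
  - ./checkout-project.sh ${MAIN_PROJECT_REF}
script:
  - phpunit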
