In my travis script I have the following:
after_success:
- ember build --environment=production
- ember build --environment=staging --output-path=dist-staging
After both of these build, I conditionally deploy to S3 the one that is appropriate, based on the current git branch.
It works, but it would save time if I only built the one I actually need. What is the easiest way to build based on the branch?
Use the test command, as shown here:
after_success:
- test $TRAVIS_BRANCH = "master" &&
  ember build
All Travis environment variables are available here.
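Applied to the question's two-environment setup, a sketch could look like this (the staging branch name is an assumption; adjust it to whichever branch you deploy staging from):
after_success:
  - test "$TRAVIS_BRANCH" = "master" && ember build --environment=production
  - test "$TRAVIS_BRANCH" = "staging" && ember build --environment=staging --output-path=dist-staging
Commands in after_success do not affect the build result, so the non-matching branch simply short-circuits without failing anything.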
You can execute a shell script in after_success and check the current branch using Travis environment variables:
#!/bin/bash
if [[ "$TRAVIS_BRANCH" != "master" ]]; then
  echo "We're not on the master branch."
  # analyze current branch and react accordingly
  exit 0
fi
Put the script somewhere in the project and use it like:
after_success:
- ./scripts/deploy_to_s3.sh
There may be other Travis variables useful to you; they are listed here.
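For the original S3 use case, a sketch of what scripts/deploy_to_s3.sh could contain (the bucket names and the aws s3 sync commands are placeholders, not taken from the question):
#!/bin/bash
set -e
# skip deploys for pull request builds
if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
  echo "Pull request build - skipping deploy."
  exit 0
fi
case "$TRAVIS_BRANCH" in
  master)
    ember build --environment=production
    aws s3 sync dist/ s3://my-production-bucket --delete
    ;;
  staging)
    ember build --environment=staging --output-path=dist-staging
    aws s3 sync dist-staging/ s3://my-staging-bucket --delete
    ;;
  *)
    echo "Branch $TRAVIS_BRANCH is not deployed."
    ;;
esac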
With the following entry, the script will only be executed if the build is not a PR and the branch is master.
after_success:
- 'if [ "$TRAVIS_PULL_REQUEST" = "false" -a "$TRAVIS_BRANCH" = "master" ]; then bash doit.sh; fi'
It is not enough to evaluate TRAVIS_BRANCH: it is set to master when a PR against master is opened from a fork.
See also the description of TRAVIS_BRANCH on https://docs.travis-ci.com/user/environment-variables/:
for push builds, or builds not triggered by a pull request, this is the name of the branch
for builds triggered by a pull request this is the name of the branch targeted by the pull request
for builds triggered by a tag, this is the same as the name of the tag (TRAVIS_TAG)
If you work with tags you have to consider TRAVIS_TAG as well. If TRAVIS_TAG is set, TRAVIS_BRANCH is set to the value of TRAVIS_TAG.
after_success:
- if [ "$TRAVIS_PULL_REQUEST" = "false" -a \( "$TRAVIS_BRANCH" = "master" -o -n "$TRAVIS_TAG" \) ]; then doit.sh; fi
I would say the above solutions are good because they transfer to non-Travis build systems as well, but Travis CI does have a built-in feature for this:
stages:
  - name: deploy
    # require the branch name to be master (note: for PRs this is the base branch name)
    if: branch = master
Although I could not get it to work with after_success, the following page has a section on "Testing Conditions", which I didn't set up.
https://docs.travis-ci.com/user/conditional-builds-stages-jobs/
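For reference, a fuller sketch of how such a conditional stage might sit alongside a regular test job (the ember test and deploy script names are placeholders):
stages:
  - test
  - name: deploy
    if: branch = master AND type != pull_request
jobs:
  include:
    - stage: test
      script: ember test
    - stage: deploy
      script: ./scripts/deploy_to_s3.sh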
I am trying to create a base image for my repo that is optionally re-built when branches (merge requests) make changes to dependencies.
Let's say I have this pipeline configuration:
stages:
  - Test
  - Build

variables:
  image: main

Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        - path/to/a
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv

Build:
  stage: Build
  image: $image
  script:
    - echo build from $image
Let's say I push to a new branch and the first commit changes path/to/a: the Docker image is built and pushed, the dotenv is updated, and the Build job successfully uses image=a.
Now, let's say I push a new commit to the same branch. The new commit does not change path/to/a, so the Changes A job does not run, and the Build stage pulls the "wrong" default image=main. I would like it to still pull image=a, since the branch builds on top of the previous commit.
Any ideas on how to deal with this?
Is there a way to make rules.changes refer to origin/main?
Any other ideas on how to achieve what I am trying to do?
Is there a way to make rules.changes refer to origin/main?
Yes, there is, since GitLab 15.3 (August 2022):
Improved behavior of CI/CD changes with new branches
Configuring CI/CD jobs to run on pipelines when certain files are changed by using rules: changes is very useful with merge request pipelines.
It compares the source and target branches to see what has changed, and adds jobs as needed.
Unfortunately, changes does not work well with branch pipelines.
For example, if the pipeline runs for a new branch, changes has nothing to compare to and always returns true, so jobs might run unexpectedly.
In this release we’re adding compare_to to rules:changes for both jobs and workflow:rules, to improve the behavior in branch pipelines.
You can now configure your jobs to check for changes between the new branch and the defined comparison branch.
Jobs that use rules:changes:compare_to will work the way you expect, comparing against the branch you define.
This is useful for monorepos, where many independent jobs could be configured to run based on which component in the repo is being worked on.
See Documentation and Issue.
You can use it only as part of a job, and it must be combined with rules:changes:paths.
Example:
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        paths:
          - Dockerfile
        compare_to: 'refs/heads/branch1'
In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event.
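Adapted to the pipeline from the question above, a sketch might look like this (assuming main is the branch you want to compare against):
Changes A:
  stage: Test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        paths:
          - path/to/a
        compare_to: 'refs/heads/main'
  script:
    - docker build -t a .
    - docker push a
    - echo 'image=a' > dotenv
  artifacts:
    reports:
      dotenv: dotenv
With this rule the job runs whenever the branch differs from main in path/to/a, not just when the triggering commit touched it.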
There is a project setting which defines how your MR pipelines are set up. It only applies to merge requests and can be found in Settings -> Merge requests, under the section Merge options:
Each commit individually - nothing checked
This means each commit is treated on its own, and the changes checks are done against the triggering commit by itself.
Enable merged results pipelines
This will merge your MR with the target branch before running the CI jobs. It also evaluates all of the changes within the MR as a whole, not commit by commit.
Merge trains
This is a whole different chapter and not relevant for this use case, but for completeness I have to mention it: see https://gitlab.com/help/ci/pipelines/merge_trains.md
What you are looking for is option 2, merged results pipelines. But as I said, this only works in merge request pipelines and not general branch pipelines, so you would also need to adapt your rules to something like:
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    changes:
      - path/to/a
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
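If the project does not already run merge request pipelines at all, you would typically also need a workflow block along these lines (a common pattern, shown here as a sketch):
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'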
I'm trying to write a pretty basic GitLab CI job.
I want:
When I push to develop, GitLab builds a Docker image with the tag "develop".
When I push to main, GitLab checks that the current commit has a tag and builds an image with that tag; otherwise the job is not triggered.
Build and publish docker image:
  stage: build
  rules:
    - if: '($CI_COMMIT_BRANCH == "main" && $CI_COMMIT_TAG && $CI_PIPELINE_SOURCE == "push")'
      variables:
        TAG: $CI_COMMIT_TAG
    - if: '($CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push")'
      variables:
        TAG: develop
  script:
    - echo $TAG
    - ...<other commands>
But it doesn't work as expected: $CI_COMMIT_TAG is empty, even though the commit that triggers the job (a merge commit) has a tag.
The explanation that I found does not help me achieve my goal using "if" statements.
The solution based on workflow suggested here is not helpful either.
It seems like a pretty common job, with an intuitive use of a variable called CI_COMMIT_TAG.
But it just does not work. Can someone kindly explain how to achieve my goal?
Gitlab CI/CD has multiple 'pipeline sources', and some of the Predefined Variables only exist for certain sources.
For example, if you simply push a new commit to the remote, the value of CI_PIPELINE_SOURCE will be push. For push pipelines, many of the Predefined Variables will not exist, such as CI_COMMIT_TAG, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME, CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME, etc.
However if you create a Git Tag either in the GitLab UI or from a git push --tags command, it will create a Tag pipeline, and variables like CI_COMMIT_TAG will exist, but CI_COMMIT_BRANCH will not.
One variable that will always be present regardless of what triggered the pipeline is CI_COMMIT_REF_NAME. For push sources where the commit is tied to a branch, this variable will hold the branch name. If the commit isn't tied to a branch (i.e., there was once a branch for that commit but it has since been deleted), it will hold the full commit SHA. If the pipeline is for a tag, it will hold the tag name.
For more information, read through the different Pipeline Sources (in the description of the CI_PIPELINE_SOURCE variable) and the other variables in the docs linked above.
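To illustrate the point about pipeline types, a rules block that keys off the variables that actually exist in each pipeline might look like this (a sketch only; it does not verify that the tag points at main, which is what the script below handles):
Build and publish docker image:
  stage: build
  rules:
    # tag pipeline: CI_COMMIT_TAG exists, CI_COMMIT_BRANCH does not
    - if: '$CI_COMMIT_TAG'
      variables:
        TAG: $CI_COMMIT_TAG
    # branch pipeline: CI_COMMIT_BRANCH exists, CI_COMMIT_TAG does not
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        TAG: develop
  script:
    - echo $TAG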
What I would do is move this check into the script section, where we can make it as complex as we need, and either immediately exit 0 (so the job effectively doesn't run, but also doesn't fail) or run the rest of the script.
Build and publish docker image:
  stage: build
  script:
    - if [ "$CI_PIPELINE_SOURCE" != 'push' ]; then exit 0; fi
    - if [ "$CI_COMMIT_REF_NAME" != 'develop' ] && [ -z "$CI_COMMIT_TAG" ]; then exit 0; fi
    - if [ -n "$CI_COMMIT_TAG" ]; then git branch --contains "$CI_COMMIT_TAG" | grep main; TAG_ON_MAIN=$?; fi
    - if [ -n "$CI_COMMIT_TAG" ] && [ "$TAG_ON_MAIN" -ne 0 ]; then exit 0; fi
    - echo $TAG
    - ...<other commands>
This is a bit confusing, so here it is line by line:
If the $CI_PIPELINE_SOURCE variable isn't 'push', we exit 0.
If the $CI_COMMIT_REF_NAME (again, either a commit SHA, tag name, or branch name) isn't develop and $CI_COMMIT_TAG is empty, exit 0
If $CI_COMMIT_TAG isn't empty, we run a command to see if the tag was based on main, git branch --contains <tag_name>. This will return all the branches this tag is a part of (which is, the branch it was created from, and all branches that exist after the tag was made). We then pass the results through grep to look for main. If main is in the result list, the exit code is 0 which we can get with the special variable $? (always returns the previous command's exit code). We then set this exit code to a variable to use in the next conditional.
We check, when there is a tag, whether the exit code of the grep from step 3 is non-zero (that is, whether main is not in the list of branches the tag is part of), and if so we exit 0.
After all of these, we can be sure that the pipeline source is push, and that either there is a tag and it's on the main branch, or there isn't a tag and the pipeline is for the develop branch.
I've created a Jenkins Multibranch Pipeline with the GitHub Branch Source plugin. The Jenkinsfile essentially just calls a Cake Build script (build.ps1, build.cake) that contains all the build/deploy logic. This allows me to move to another CI service easily.
Unfortunately, I cannot seem to figure out how to add my Cake Build scripts as a trusted file so that PR's from forks will pull the files from the source repo instead. The Trust setting of the Discover pull requests from forks behavior seems to indicate that there can be other trusted files besides Jenkinsfile:
Nobody
Pull requests from forks will all be treated as untrusted. This means that where Jenkins requires a trusted file (e.g. Jenkinsfile) the contents of that file will be retrieved from the target branch on the origin repository and not from the pull request branch on the fork repository.
However, I cannot seem to find any documentation on adding other trusted files. The primary reason for this is to prevent a PR from a fork from accessing credentials from the Cake script. They wouldn't be able to change Jenkinsfile, but they could still change the Cake script to expose the credentials.
Is it actually possible to add other trusted files?
It seems like Jenkins does not support this. My solution is to check out the untrusted files manually from the base version instead. First, get the hash of the base commit with:
def commit = sh(
    script: 'git rev-parse HEAD',
    returnStdout: true
).trim()
def base = sh(
    script: "git rev-list --parents -n 1 ${commit}",
    returnStdout: true
).trim().split('\\s+')[2]
git rev-list --parents -n 1 ${commit} returns the hash of the current commit (a merge commit created by Jenkins) followed by its parents: the latest commit of the PR and the latest commit of the target branch, separated by spaces (e.g. 05e9322574ea03003f87dcbb44f172e6fa62581f b3f6ef892af9c645f490106757d7d05df3a26060 069ffd55ae36414a51b4de166aef86966f9447a8). Hence, we grab the hash of the latest commit of the target branch with trim().split('\\s+')[2].
Now we can do sh "git checkout ${base} FILE" on any file that we don't trust from the PR.
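Put together in a scripted Jenkinsfile, a sketch might look like this; build.ps1 and build.cake are the files from the question, while the surrounding node/checkout steps are assumptions about how the job is laid out:
node {
    checkout scm

    // hash of the merge commit that Jenkins created for the PR
    def commit = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()

    // third field = latest commit of the target branch (see explanation above)
    def base = sh(
        script: "git rev-list --parents -n 1 ${commit}",
        returnStdout: true
    ).trim().split('\\s+')[2]

    // restore the trusted copies of the build scripts before running anything from the PR
    sh "git checkout ${base} -- build.ps1 build.cake"

    // ... now invoke the Cake build as before
}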
This does not work if the PR is already merged with the latest version of the target branch. So what I did is something like this:
// revert untrusted files to the base version and back them up before we execute any
// untrusted code, so an attacker doesn't get a chance to inject malicious content
def latest = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()
sh "git checkout origin/${env.CHANGE_TARGET}"
def baseCompose = readFile('docker-compose.yml')
// switch back to the latest commit
sh "git checkout ${latest}"
sh 'git clean -d -f -f -q -x'
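The backed-up content can then be written back over whatever the PR provided, before any untrusted code runs, using the standard writeFile step (a sketch):
// overwrite the PR's copy with the trusted version captured from the target branch
writeFile file: 'docker-compose.yml', text: baseCompose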
I set up a simple Jenkinsfile that just echoes a few steps.
I set up a new repo on Bitbucket (git) with two branches called master and develop.
When I commit something to master, both branches check out and build in Jenkins. The same happens on the develop branch.
Is it possible to make Jenkins build only master when there is a commit to the master branch, and behave the same way for develop?
I think you can use a temporary text file to save the last successful build SHA ($LAST_SUCCESSFUL_BUILD_SHA) of each branch. Then, when a new commit arrives from the repo, we check which branch the commit comes from.
CURRENT_SHA=$(git rev-parse HEAD)
if [ "$FORCE_REBUILD" = true ] || [ "$CURRENT_SHA" != "$LAST_SUCCESSFUL_BUILD_SHA" ]; then
  echo "New commits available OR it was forced to build."
else
  echo "Already up-to-date. Skip build."
  curl -v -X POST --data "description=no changes, skip." ${JENKINS_BUILD_URL}submitDescription --user <username>:<password>
  curl -v -X POST ${BUILD_URL}stop --user <username>:<password>
  echo "Waiting for abort to take effect :D"
fi
If the new commit comes from another branch, it's good to use the Jenkins REST API to skip the build.
I have applied this for my freestyle jobs but haven't tried with Jenkinsfile.
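For completeness, a sketch of how the per-branch SHA file mentioned above might be read and updated (the JENKINS_HOME path and GIT_BRANCH variable are assumptions; any persistent location on the build machine would do):
# read the previously recorded SHA for this branch (empty on the first build)
SHA_FILE="$JENKINS_HOME/last_successful_sha_$(echo "$GIT_BRANCH" | tr / _).txt"
LAST_SUCCESSFUL_BUILD_SHA=$(cat "$SHA_FILE" 2>/dev/null || true)
# ... run the comparison shown above ...
# after a successful build, record the new SHA for the next run
echo "$CURRENT_SHA" > "$SHA_FILE"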
I'm working on a project that deploys to a provider not currently supported by Travis, so I've written my deployment step in an after_success block. However, I'd like to configure Travis to only deploy on new tags. I know this is possible when using the deploy: block, by adding
deploy:
# ...
on:
tags: true
to the deploy: block.
Is the same possible in after_success? If not, is there another way to only do certain actions in after_success if I'm on a new tag?
If Travis doesn't support this, I can just write a shell script to run after all successes, check if on a new tag, and then conditionally do the deployment, but I'd much prefer to be able to have Travis do it automatically.
Thanks!
Yep! I needed the exact same thing and worked around it by doing:
after_success:
  if ([ "$TRAVIS_BRANCH" == "master" ] || [ ! -z "$TRAVIS_TAG" ]) &&
     [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "This will deploy!";
  else
    echo "This will not deploy!";
  fi
I hope they introduce the on: tags: functionality for the after_success event; it would make things easier and keep the build script cleaner.
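If the deploy step grows beyond a simple echo, the same check can be pulled into a small script invoked from after_success; a sketch, with the provider-specific deploy left as a placeholder:
#!/bin/bash
set -e
# skip pull request builds
if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
  echo "Skipping deploy: pull request build."
  exit 0
fi
# only deploy tagged builds
if [ -z "$TRAVIS_TAG" ]; then
  echo "Skipping deploy: not a tagged build."
  exit 0
fi
echo "Deploying release $TRAVIS_TAG..."
# ./deploy-to-provider.sh "$TRAVIS_TAG"   # placeholder for the provider-specific deploy command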