Perform after_success actions conditionally? - travis-ci

I'm working on a project that deploys to a provider not currently supported by Travis, so I've written my deployment step in an after_success block. However, I'd like to configure Travis to only deploy on new tags. I know this is possible when using the deploy: block, by adding the following to it:
deploy:
  # ...
  on:
    tags: true
Is the same possible in after_success? If not, is there another way to only do certain actions in after_success if I'm on a new tag?
If Travis doesn't support this, I can just write a shell script to run after all successes, check if on a new tag, and then conditionally do the deployment, but I'd much prefer to be able to have Travis do it automatically.
Thanks!

Yep! I need the exact same thing and worked around it by doing:
after_success: |
  if ([ "$TRAVIS_BRANCH" == "master" ] || [ ! -z "$TRAVIS_TAG" ]) &&
     [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "This will deploy!"
  else
    echo "This will not deploy!"
  fi
I hope they introduce the on: tags: functionality for the after_success event; it would make things easier and keep the build script cleaner.
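Until then, for the tag-only case from the question, a minimal one-line sketch might look like this (deploy.sh is just a placeholder for your own deployment script):
after_success:
  # deploy only for tag builds that are not pull requests (deploy.sh is a placeholder)
  - 'if [ -n "$TRAVIS_TAG" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then ./deploy.sh; fi'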

Related

$CI_COMMIT_TAG in "if" statements of regular job

I'm trying to make a pretty basic GitLab CI job.
I want:
When I push to develop, GitLab builds a Docker image with the tag "develop".
When I push to main, GitLab checks that the current commit has a tag and builds an image with that tag; otherwise the job is not triggered.
Build and publish docker image:
  stage: build
  rules:
    - if: '($CI_COMMIT_BRANCH == "main" && $CI_COMMIT_TAG && $CI_PIPELINE_SOURCE == "push")'
      variables:
        TAG: $CI_COMMIT_TAG
    - if: '($CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push")'
      variables:
        TAG: develop
  script:
    - echo $TAG
    - ...<other commands>
But it doesn't work as expected: $CI_COMMIT_TAG is empty, even though the commit that triggers the job (a merge commit) has a tag.
The explanation I found does not help me achieve my goal using "if" statements.
A solution based on workflow suggested here was not helpful either.
It seems like a pretty common job with an intuitive way of using a variable called COMMIT_TAG, but it just does not work. Can someone kindly explain how to achieve my goal?
Gitlab CI/CD has multiple 'pipeline sources', and some of the Predefined Variables only exist for certain sources.
For example, if you simply push a new commit to the remote, the value of CI_PIPELINE_SOURCE will be push. For push pipelines, many of the Predefined Variables will not exist, such as CI_COMMIT_TAG, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME, CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME, etc.
However, if you create a Git tag either in the GitLab UI or with a git push --tags command, it will create a tag pipeline, and variables like CI_COMMIT_TAG will exist, but CI_COMMIT_BRANCH will not.
One variable that will always be present regardless of what triggered the pipeline is CI_COMMIT_REF_NAME. For push sources where the commit is tied to a branch, this variable holds the branch name. If the commit isn't tied to a branch (i.e., there was once a branch for that commit but it has since been deleted), it holds the full commit SHA. Or, if the pipeline is for a tag, it holds the tag name.
For more information, read through the different Pipeline Sources (in the description of the CI_PIPELINE_SOURCE variable) and the other variables in the docs linked above.
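Given that, the original rule ($CI_COMMIT_BRANCH == "main" && $CI_COMMIT_TAG) can never match: a tag pipeline has CI_COMMIT_TAG but no CI_COMMIT_BRANCH, and a branch push pipeline has the opposite. A minimal rules-based sketch under that constraint, treating any tag pipeline as the release case instead of checking the branch (and assuming a GitLab version recent enough to support rules:variables), might be:
Build and publish docker image:
  stage: build
  rules:
    # tag pipelines: CI_COMMIT_TAG is set, CI_COMMIT_BRANCH is not
    - if: '$CI_COMMIT_TAG'
      variables:
        TAG: $CI_COMMIT_TAG
    # branch push pipelines to develop
    - if: '$CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"'
      variables:
        TAG: develop
  script:
    - echo $TAG
Rules alone cannot verify that the tag actually sits on main, which is what the script-based approach below checks explicitly.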
What I would do is move this check to the script section so we can make it more complex for our benefit, and either immediately exit 0 so that the job doesn't run and it doesn't fail, or run the rest of the script.
Build and publish docker image:
  stage: build
  script:
    - if [ "$CI_PIPELINE_SOURCE" != "push" ]; then exit 0; fi
    - if [ "$CI_COMMIT_REF_NAME" != "develop" ] && [ -z "$CI_COMMIT_TAG" ]; then exit 0; fi
    - if [ -n "$CI_COMMIT_TAG" ]; then git branch --contains "$CI_COMMIT_TAG" | grep main; TAG_ON_MAIN=$?; fi
    - if [ "${TAG_ON_MAIN:-0}" -ne 0 ]; then exit 0; fi
    - echo $TAG
    - ...<other commands>
This is a bit confusing, so here it is line by line:
1. If the $CI_PIPELINE_SOURCE variable isn't 'push', we exit 0.
2. If $CI_COMMIT_REF_NAME (again: either a commit SHA, a tag name, or a branch name) isn't develop and $CI_COMMIT_TAG is empty, we exit 0.
3. If $CI_COMMIT_TAG isn't empty, we run git branch --contains <tag_name> to see whether the tag is based on main. This returns all the branches the tagged commit is part of (the branch it was created from, plus any branches created after the tag was made). We then pipe the result through grep to look for main. If main is in the result list, the exit code is 0, which we can read from the special variable $? (it always holds the previous command's exit code). We store that exit code in a variable for the next conditional (see the local example after this list).
4. We check whether the exit code of the grep from step 3 is non-zero (that is, main is not in the list of branches <tag_name> is part of), and if so we exit 0.
After all of these, we can be sure that the pipeline source is push, and that either there is a tag and it is on the main branch, or there is no tag and the pipeline is for the develop branch.
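To see what the tag check in step 3 does, you can reproduce it locally (the tag name v1.2.3 is just an example):
git fetch --tags
# list every branch whose history contains the tagged commit
git branch --contains v1.2.3
# the same check the job performs: grep for main and read the exit code
git branch --contains v1.2.3 | grep main
echo $?    # 0 if main is in the list, non-zero otherwise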

Jobs vs script: What is their difference in Travis-CI?

It seems that I can put commands, such as echo "helloworld", in script or in jobs section of .travis.yml. What is their difference?
They are completely different pieces of functionality defined in .travis.yml:
script: is a build/job phase in which you run commands for that specific step. [1]
jobs: lets you define multiple jobs within the .travis.yml file, and each job runs its own build with its own script section. [2]
[1]https://docs.travis-ci.com/user/job-lifecycle/#the-job-lifecycle
[2]https://docs.travis-ci.com/user/build-matrix/#listing-individual-jobs
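As a rough illustration of the difference, compare these two .travis.yml fragments (the echo commands are arbitrary examples):
# fragment 1: a single implicit job; the commands run in its script phase
script:
  - echo "helloworld"

# fragment 2: an explicit jobs section; each entry is its own job with its own script
jobs:
  include:
    - name: "first job"
      script: echo "hello from job 1"
    - name: "second job"
      script: echo "hello from job 2"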

Jenkins post build task if status changed from "failed" to "success" (fixed)

In a Jenkins freestyle job (on an older 1.6x version, no support for 2.x pipeline jobs) I would like to run a shell command (curl -XPOST ...) as a post build step if the build status recovered(!) from FAILED to SUCCESS.
However, all plugins for determining the build status I am aware of can only do something if the current build status IS FAILED or SUCCESS but don't take into account whether it recovered in comparison to the last build.
Is there any way how to achieve this, e.g. using the Groovy Post build plugin and some lines of scripting?
I found that something like this is a good way to go.
You can build up some interesting logic, and the "currentBuild" variable has some decent documentation here: currentBuild variable doc
script {
    if ( ( currentBuild.resultIsBetterOrEqualTo("SUCCESS") && currentBuild.previousBuild.resultIsWorseOrEqualTo("UNSTABLE") ) || currentBuild.resultIsWorseOrEqualTo("UNSTABLE") ) {
        echo "If current build is good, and last build is bad, or current build is bad"
    }
}
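If you specifically want the FAILURE-to-SUCCESS transition (and you are on a pipeline job rather than the freestyle job from the question), a narrower sketch using the same currentBuild API might be:
script {
    // previousBuild is null on the very first build of a job
    def prev = currentBuild.previousBuild
    if (currentBuild.currentResult == 'SUCCESS' && prev != null && prev.result == 'FAILURE') {
        echo "Build recovered from FAILURE to SUCCESS"
        // e.g. sh 'curl -XPOST ...' to fire the notification from the question
    }
}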
Meanwhile I found a way to achieve this. It is not necessarily pretty and I still appreciate alternative solutions :)
First of all, a plugin is needed which lets you execute shell commands in a Post Build step. There might be different ones, I am using the PostBuildScript plugin for that.
Then, create an "Execute a set of scripts" post build step, set the step to execute to Build step, and select Execute shell.
In there, I run the following shell script lines which use my Jenkins server's REST API in combination with a Python one-liner (you could also use jq or something else for this) to determine the status of the current build as well as of the last completed build:
statusOfCurrentBuild=$(curl --silent "${BUILD_URL}api/json" | python -c "import sys, json; print(json.load(sys.stdin)['result'])")
statusOfLastBuild=$(curl --silent "${JOB_URL}/lastCompletedBuild/api/json" | python -c "import sys, json; print(json.load(sys.stdin)['result'])")
if [ "${statusOfCurrentBuild}" == "SUCCESS" ] && [ "${statusOfLastBuild}" == "FAILURE" ]
then
echo "Build was fixed"
# do something interesting here
fi
Depending on your Jenkins settings, using the REST API might require authentication.
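Since jq is mentioned as an alternative to the Python one-liner, here is a sketch of the same check using jq, with optional basic auth (JENKINS_USER and JENKINS_API_TOKEN are placeholders you would have to supply yourself):
# --user is only needed if your Jenkins requires authentication; -r strips the JSON quotes
statusOfCurrentBuild=$(curl --silent --user "$JENKINS_USER:$JENKINS_API_TOKEN" "${BUILD_URL}api/json" | jq -r '.result')
statusOfLastBuild=$(curl --silent --user "$JENKINS_USER:$JENKINS_API_TOKEN" "${JOB_URL}/lastCompletedBuild/api/json" | jq -r '.result')
if [ "${statusOfCurrentBuild}" = "SUCCESS" ] && [ "${statusOfLastBuild}" = "FAILURE" ]; then
  echo "Build was fixed"
  # do something interesting here
fi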

Jenkins Workflow CD with Kubernetes

To clarify, this is not a question about running Jenkins in Kubernetes; this is about deploying to Kubernetes from Jenkins.
I have recently settled on using Jenkins (and the workflow/pipeline plugin) to orchestrate our delivery process. Currently, I'm using the imperative style to deploy as per below:
stage 'Deploy to Integ'
// Clean up old releases
sh "kubectl delete svc,deployment ${serviceName} || true"
def cmd = """kubectl run ${serviceName} --image=${dockerRegistry}/${serviceName}:${env.BUILD_NUMBER} --replicas=2 --port=${containerPort} --expose --service-overrides='{ "spec": { "type": "LoadBalancer" }}' """
// execute shell for the command above
sh cmd
This works well because ${env.BUILD_NUMBER} persists through the pipeline, making it easy for me to ensure the version I deploy is the same all the way through. The problem is that this imperative style isn't scalable, so I would like to switch to the declarative approach and keep the definition in VCS.
Unfortunately, the declarative approach comes with the adverse effect of needing to explicitly state the version of the image (to be deployed) in the yaml. One way around this might be to use the latest tag, but that comes with its own risks. For example, let's take the scenario where I'm about to deploy latest to production and a new version gets tagged latest. The new latest may not have gone through testing.
I could get into changing the file programmatically, but that feels rather clunky, and it doesn't help developers who have the file checked out understand what the latest version actually is.
What have you done to solve this issue? Am I missing something obvious? What workflow are you using?
In my yaml file (server.origin.yml), I set my image as image-name:$BUILD_NUMBER
Then I run: envsubst < ./server.origin.yml > ./server.yml
This command replaces the string $BUILD_NUMBER with the value of the environment variable.
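Wired into the scripted pipeline from the question, that could look roughly like this (the kubectl apply step is an assumption about how the rendered file gets deployed, and envsubst must be available on the build agent):
stage 'Deploy to Integ'
// server.origin.yml contains e.g. image: image-name:$BUILD_NUMBER
// envsubst replaces $BUILD_NUMBER (and any other referenced env vars) with their current values
sh "envsubst < ./server.origin.yml > ./server.yml"
// apply the rendered manifest, which now pins the image to this build's number
sh "kubectl apply -f ./server.yml"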

Travis conditional on branch after_success

In my travis script I have the following:
after_success:
- ember build --environment=production
- ember build --environment=staging --output-path=dist-staging
After both of these build, I conditionally deploy to S3 the one that is appropriate, based on the current git branch.
It works, but it would save time if I only built the one I actually need. What is the easiest way to build based on the branch?
Use the test command, as used here:
after_success:
  - test $TRAVIS_BRANCH = "master" &&
    ember build
All travis env variables are available here.
You can execute shell script in after_success and check the current branch using travis environment variables:
#!/bin/bash
# note: [[ ]] is bash-specific, so this script uses bash rather than plain sh
if [[ "$TRAVIS_BRANCH" != "master" ]]; then
  echo "We're not on the master branch."
  # analyze current branch and react accordingly
  exit 0
fi
Put the script somewhere in the project and use it like:
after_success:
- ./scripts/deploy_to_s3.sh
There might be other useful travis variables to you, they are listed here.
With the following entry the script will only be executed if it is not a PR and the branch is master.
after_success:
- 'if [ "$TRAVIS_PULL_REQUEST" = "false" -a "$TRAVIS_BRANCH" = "master" ]; then bash doit.sh; fi'
It is not enough to evaluate TRAVIS_BRANCH. For pull request builds, TRAVIS_BRANCH is set to the name of the target branch, so it is "master" for any PR against master, including PRs created from forks.
See also the description of TRAVIS_BRANCH on https://docs.travis-ci.com/user/environment-variables/:
for push builds, or builds not triggered by a pull request, this is the name of the branch
for builds triggered by a pull request this is the name of the branch targeted by the pull request
for builds triggered by a tag, this is the same as the name of the tag (TRAVIS_TAG)
If you work with tags you have to consider TRAVIS_TAG as well. If TRAVIS_TAG is set, TRAVIS_BRANCH is set to the value of TRAVIS_TAG.
after_success:
- if [ "$TRAVIS_PULL_REQUEST" = "false" -a \( "$TRAVIS_BRANCH" = "master" -o -n "$TRAVIS_TAG" \) ]; then doit.sh; fi
I would say the above solutions are good because they would also transfer to non-Travis-CI build systems, but there is a built-in TravisCI feature for something similar:
stages:
  - name: deploy
    # require the branch name to be master (note: for PRs this is the base branch name)
    if: branch = master
Although I could not get it to work with after_success, the following page has a section on "Testing Conditions", which I didn't get around to setting up.
https://docs.travis-ci.com/user/conditional-builds-stages-jobs/
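A fuller sketch of that approach, moving the deploy into its own conditional job (the script path is the one from the earlier answer and is only an example):
jobs:
  include:
    - stage: deploy
      # run only for pushes of master or of a tag, never for pull requests
      if: (branch = master OR tag IS present) AND type = push
      script: ./scripts/deploy_to_s3.sh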
