Is there a way to skip a stage in travis ci based on environment variables - travis-ci

We are using build stages to deploy and to run our different kinds of regression tests in parallel. Is there a way to skip certain scripts within a stage based on an environment variable that is available when the Travis job kicks off?
I read the Travis CI documentation and related SO questions, and this is the best I could come up with. Here is a snippet of our current setup:
jobs:
  include:
    - stage: Setup
      script:
        - reset_db
        - reset_es_index
      name: Setting up Environment
      if: type IN (push, api)
    - stage: Tests
      script:
        - run_fast
      name: Fast tests
      if: type IN (push, api)
      allow_failure: true
    - script:
        - run_api_tests
      name: Api tests
      if: type = api AND env(TRIGGER_REPO) != com_ui_project
    - script:
        - run_slow_tests
      name: Slow tests
      if: type = api AND env(TRIGGER_REPO) != com_ui_project
As you can see, I want to run the Api tests and Slow tests jobs in the Tests stage only if the Travis build was triggered by an API call AND the custom environment variable TRIGGER_REPO is NOT com_ui_project. Otherwise, run the Fast tests job and skip the others.
Right now, if the build was triggered by an API call, all the jobs in the Tests stage run. How can I avoid that?
I tried the following variations too:
- $TRIGGER_REPO != com_ui_project
- env(TRIGGER_REPO) != "com_ui_project"
- $TRIGGER_REPO!="com_ui_project" (thinking that shell-style syntax might help; it did not)
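
One workaround worth noting (my own sketch, not from the thread): since TRIGGER_REPO is available as an ordinary environment variable once the job is running, you can keep the if: condition on the trigger type only and do the repo check at the shell level inside the script, e.g.:

- script:
    - if [ "$TRIGGER_REPO" != "com_ui_project" ]; then run_api_tests; else echo "Skipping API tests for $TRIGGER_REPO"; fi
  name: Api tests
  if: type = api

The job still spins up, but it exits immediately without doing any real work when the variable matches.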

Related

Avoid trigger Bitbucket pipeline when the title starts with Draft or WIP

To automate our CI process, I need to run the Bitbucket pipelines only when the PR title does not start with "Draft" or "WIP". Atlassian offers only these features: https://support.atlassian.com/bitbucket-cloud/docs/use-glob-patterns-on-the-pipelines-yaml-file/.
I tried with the regex ^(?!Draft:|WIP:).+ like this:
pipelines:
  pull-requests:
    '^(?!Draft:|WIP:).+':
      - step:
          name: Tests
but the pipeline does not start under any circumstances (with or without Draft:/WIP:). Any suggestions?
Note that the PR pattern you define in the pipelines is matched against the source branch, not the PR title. To work within that, I used to define an empty pipeline for PRs from wip/* branches, e.g.
pipelines:
  pull-requests:
    wip/*:
      - step:
          name: Pass
          script:
            - exit 0
    "**":
      - step:
          name: Tests
          # ...
But this workflow requires you to work on wip/* branches and to change the source branch later on. This is somewhat cumbersome, and developers just did not opt in.
This works, though.
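
Another way to approximate title-based filtering (my own sketch, not part of the answer): fetch the PR title at runtime through the Bitbucket API and exit early. This assumes an app password exposed as APP_USER/APP_PASSWORD and jq available in the build image; BITBUCKET_PR_ID and BITBUCKET_REPO_FULL_NAME are variables Bitbucket provides in pull-request pipelines.

pipelines:
  pull-requests:
    "**":
      - step:
          name: Tests
          script:
            # Look up the PR title via the Bitbucket 2.0 API.
            - title=$(curl -s --user "$APP_USER:$APP_PASSWORD" "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_FULL_NAME/pullrequests/$BITBUCKET_PR_ID" | jq -r '.title')
            # Succeed immediately for Draft/WIP titles instead of running tests.
            - if echo "$title" | grep -qE '^(Draft|WIP):'; then echo "Skipping draft PR"; exit 0; fi
            - ./run-tests.sh

The step still runs on every PR, but it becomes a cheap no-op for drafts.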

How to run the same Bitbucket Pipeline with different environment variables for different branches?

I have a monorepo project that is deployed to 3 environments: testing, staging, and production. Deploys to testing come from the next branch, while staging and production deploys come from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), while deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and a server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same; the only things changing are the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, thus basically letting me choose the set of env variables to inject into the fixed sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and can't find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but each of them has different steps. Or do I need to merge those into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod: # As you say the steps are the same, only variable values will define the destination.
      - variables: # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
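
As a side note (my own addition, not from the answer): a custom pipeline like this can also be started through the Bitbucket API, supplying the variable values in the request body, which is handy for scripting the manual trigger. A sketch, assuming an app password:

curl -s -X POST --user "$APP_USER:$APP_PASSWORD" \
  -H "Content-Type: application/json" \
  "https://api.bitbucket.org/2.0/repositories/<workspace>/<repo_slug>/pipelines/" \
  -d '{
        "target": {
          "type": "pipeline_ref_target",
          "ref_type": "branch",
          "ref_name": "master",
          "selector": {"type": "custom", "pattern": "deploy-to-staging-or-prod"}
        },
        "variables": [
          {"key": "VAR1", "value": "value1"},
          {"key": "VAR2", "value": "value2"}
        ]
      }'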
UPD
If you need to use Deployments instead of providing each variable separately, you can utilise the manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh

pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
The key docs for understanding this pipeline are still this one, plus this one for YAML anchors. Keep in mind that I introduced the 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual. It acts as a stopper before the deploy steps, which can only be manual due to your requirements.

Triggering the Jenkins job from the GitLab pipeline stage and on successfully completion of the job move to next stage

Can you please help? I have the following scenario, and I went through many videos and blogs but could not find anything matching my use case.
Requirement:
Write a CI/CD pipeline in GitLab that facilitates the following stages, in this order:
- verify # unit test, sonarqube, pages
- build # package
- publish # copy artifact in repository
- deploy # deploy artifact to the runtime in a test environment
- integration # run postman/integration tests
All other stages are fine and working, but for the deploy stage I have to trigger an existing Jenkins job through the Jenkins remote API (because of a few restrictions), using the script below. The problem is that the API call is asynchronous: it starts the Jenkins job and returns immediately, so the deploy stage completes and the pipeline moves on to the next stage (integration).
Run Jenkins Job:
  image: maven:3-jdk-8
  tags:
    - java
  environment: development
  stage: deploy
  script:
    - artifact_no=$(grep -m1 '<version>' pom.xml | grep -oP '(?<=>).*(?=<)')
    - curl -X POST http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build --user mkumar:1121053c6b6d19bf0b3c1d6ab604f22867 --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}"
Note: we are using GitLab CE, and the Jenkins CI project integration is not available.
I am looking for a way to trigger the Jenkins job from the pipeline such that the integration stage starts executing only on successful completion of the Jenkins job.
Thanks for the help!
Retrieving the status of a Jenkins job that is triggered programmatically through the remote access API is notorious for being quite convoluted.
Normally you would expect to receive, in the response headers under the Location attribute, a URL that you can poll to get the status of your request, but unfortunately there are some in-between steps to reach that point. You can find a guide in this post. You may also have a look at this older post.
Once you have the URL, you can poll it and parse the job status, and then exit 0 or exit 1 in your script to force the job that is invoking the external job to succeed or fail, depending on how you want to assert the result of the remote job.
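
To make that concrete, here is a minimal polling sketch (my own illustration, not from the answer). It assumes Jenkins returns the queue item URL in the Location header, that jq is available in the image, and that credentials are supplied via JENKINS_USER/JENKINS_TOKEN CI variables:

# Trigger the job; Jenkins answers 201 with the queue item URL in the Location header.
queue_url=$(curl -s -i -X POST "http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build" \
    --user "$JENKINS_USER:$JENKINS_TOKEN" \
    --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}" \
  | grep -i '^location:' | tr -d '\r' | awk '{print $2}')

# Wait until the queue item has been assigned an actual build URL.
build_url=""
while [ -z "$build_url" ] || [ "$build_url" = "null" ]; do
  sleep 10
  build_url=$(curl -s --user "$JENKINS_USER:$JENKINS_TOKEN" "${queue_url}api/json" | jq -r '.executable.url')
done

# Poll the build; .result stays null while the job is still running.
result="null"
while [ "$result" = "null" ]; do
  sleep 30
  result=$(curl -s --user "$JENKINS_USER:$JENKINS_TOKEN" "${build_url}api/json" | jq -r '.result')
done

# Fail the GitLab job unless Jenkins reports SUCCESS, which blocks the integration stage.
[ "$result" = "SUCCESS" ] || exit 1

These lines can live in a helper script invoked from the deploy job's script: section.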

circleCI CLI - Cannot find a job named `build` to run in the `jobs:` section of your configuration file

I'm using the CircleCI CLI locally to test my .circleci/config.yml. This is what it looks like:
version: 2.1
jobs:
  test:
    docker:
      - image: circleci/node:4.8.2
    steps:
      - checkout
      - run: echo 'test step'
workflows:
  version: 2
  workflow:
    jobs:
      - test
This fails with the following error:
* Cannot find a job named build to run in the jobs: section of your configuration file.
If you expected a workflow to run, check your config contains a top-level key called 'workflows:'
The 'hello world' workflow from the CLI docs works fine.
What am I missing here?
The same CircleCI CLI documentation mentioned above says, in the 'limitations' section:
The CLI tool does not provide support for running workflows. By nature, workflows leverage running jobs concurrently on multiple machines allowing you to achieve faster, more complex builds. Because the CLI is only running on your machine, it can only run single jobs (which make up parts of a workflow).
So I guess running workflows with orbs works (as in the 'hello world' example), but running workflows with your own jobs does not work with the CLI.
Testing Jobs Locally
If you're looking to test your config locally like I was, you can still execute your individual jobs locally. In the same documentation linked above, under the title 'Running a Job', when using a config with version 2.1+, you can explicitly call one of your jobs like so:
circleci config process .circleci/config.yml > process.yml
circleci local execute -c process.yml --job JOB_NAME
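
For the config above the job is named test, so the concrete invocation would presumably be:

circleci config process .circleci/config.yml > process.yml
circleci local execute -c process.yml --job test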

Do Travis CI tests in the same stage happen in the same instance?

The Travis docs state that build stages are a way to group jobs, running the jobs in each stage in parallel but running the stages themselves sequentially.
I know that all jobs in a stage run in parallel, but do these jobs run on the same instance, i.e. do they share the same env variables?
Say I have 3 tests under a stage.
- stage: 'Tests'
  name: 'Test1'
  script: ./dotest1
-
  name: 'Test2'
  script: ./dotest2
-
  name: 'Test3'
  script: ./dotest3
If I set export bleh_credential=$some_credential in test1, does it get carried over to test2? It seems like it shouldn't, as they run in parallel, correct? If that's the case, can I set a stage-wide env variable, or should I set it every time I run a new test?
No, jobs all run in fresh containers, so nothing in the job's process can be shared between them. If you need some persistence between jobs, Travis requires you to use an external storage system like S3. Read more about it here: https://docs.travis-ci.com/user/build-stages/#data-persistence-between-stages-and-jobs
I would set the env vars for each job, perhaps using YAML anchors for the defaults: https://gist.github.com/bowsersenior/979804
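
The anchors approach could look roughly like the sketch below (my own illustration; the variable and job names are placeholders, and Travis may warn about the extra top-level key holding the anchor). Alternatively, a variable under env: global: in .travis.yml is visible to every job in the build, which covers the 'stage-wide' case too.

# Reusable per-job env defaults, merged into each job via a YAML anchor.
_default_env: &default_env
  env:
    - bleh_credential=$some_credential

jobs:
  include:
    - stage: 'Tests'
      <<: *default_env
      name: 'Test1'
      script: ./dotest1
    - <<: *default_env
      name: 'Test2'
      script: ./dotest2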
