Do Travis CI tests in the same stage happen in the same instance? - travis-ci

The Travis docs state that Build Stages are a way to group jobs, running the jobs within each stage in parallel but running the stages themselves sequentially, one after another.
I know that all jobs in a stage are run in parallel, but do these jobs run in the same instance, i.e. do they share the same env variables?
Say I have 3 tests under a stage.
- stage: 'Tests'
  name: 'Test1'
  script: ./dotest1
-
  name: 'Test2'
  script: ./dotest2
-
  name: 'Test3'
  script: ./dotest3
If I set export bleh_credential=$some_credential in Test1, does it get carried over to Test2? It seems like it shouldn't, as they run in parallel, correct? If that's the case, can I set a stage-wide env variable, or should I set it every time I run a new test?

No, jobs are all run on new containers, so nothing in the job process can be shared between them. If you need some persistence between jobs, Travis requires you to use an external storage system like S3. Read more about it here: https://docs.travis-ci.com/user/build-stages/#data-persistence-between-stages-and-jobs
I would set the env vars for each job, perhaps using YAML anchors for the defaults: https://gist.github.com/bowsersenior/979804
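A minimal sketch of that anchor approach, reusing the question's placeholder variable (the shared_env key name is arbitrary and exists only to hold the anchor; Travis may warn about the unknown top-level key but the build still runs):

shared_env: &shared_env
  - bleh_credential=$some_credential

jobs:
  include:
    - stage: 'Tests'
      name: 'Test1'
      env: *shared_env       # each job gets its own copy of the variables
      script: ./dotest1
    - name: 'Test2'
      env: *shared_env
      script: ./dotest2
    - name: 'Test3'
      env: *shared_env
      script: ./dotest3

Alternatively, variables declared under env: global: apply to every job in the build, which is usually the simpler option when the values are the same across all stages.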

Related

Gitlab CI/CD variables: how to access environment scoped variables within jobs?

I have defined the following CI/CD variable (VAULT_PATH) in my gitlab project.
As you can see in the image, the variable is environment scoped so, in order to access its value within my jobs ($VAULT_PATH), I have added the environment property to each job.
job_build_preprod:
  environment: preprod
  script:
    - echo $VAULT_PATH

job_deploy_preprod:
  environment: preprod
  script:
    - echo $VAULT_PATH

job_build_production:
  environment: production
  script:
    - echo $VAULT_PATH

job_deploy_production:
  environment: production
  script:
    - echo $VAULT_PATH
The problem I am facing with this approach is that my "build" jobs are being tagged as deployment jobs (because I am adding the environment property) when they are not.
But if I do not add the environment property, I cannot access the environment scoped variable that I need.
So, is there another way to access environment scoped variables within jobs without using the environment property?
I need to use them within build jobs, but I do not want gitlab to tag those build jobs as deployment jobs to the environment.
Check out the action keyword inside environment: https://docs.gitlab.com/ee/ci/yaml/#environmentaction.
There are a few actions you can use that won't trigger a deployment.
E.g. for build jobs you can use prepare:
job_build_preprod:
  script:
    - echo $VAULT_PATH
  environment:
    name: preprod
    action: prepare
    url: https://test.com
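For comparison, a sketch of the matching deploy job (same names as in the question): because it omits action, it keeps the default start action, so it is still recorded as a deployment to preprod, while the prepare job above is not.

job_deploy_preprod:
  script:
    - echo $VAULT_PATH
  environment:
    name: preprod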

How to run the same Bitbucket Pipeline with different environment variables for different branches?

I have a monorepo project that is deployed to 3 environments - testing, staging and production. Deploys to testing come from the next branch, while staging and production from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), but deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same, the only thing changing is the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, thus basically letting me choose the set of env variables to inject into the set sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and haven't been able to find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but both of them have different steps. Or do I need to merge those into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod: # As you say the steps are the same, only variable values will define the destination.
      - variables: # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
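Note that custom pipelines don't run automatically on push; you trigger them manually (or on a schedule) from the Bitbucket UI by choosing Run pipeline on a branch or commit and picking deploy-to-staging-or-prod, and that is the point at which Bitbucket prompts you for the values of VAR1 and VAR2.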
UPD
If you need to use Deployments instead of providing each variable separately, you can utilise the manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh

pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
The key docs for understanding this pipeline are still this one, plus this one for YAML anchors. Keep in mind that I introduced the 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual. It acts as a stopper in front of the deploy steps, which can only be manual due to your requirements.

Jobs vs script: What is their difference in Travis-CI?

It seems that I can put commands, such as echo "helloworld", in script or in jobs section of .travis.yml. What is their difference?
They are completely different pieces of functionality defined in .travis.yml.
script: is a build/job phase in which you run the commands for that specific step. [1]
jobs: lets you define multiple jobs within the .travis.yml file, and each job is an additional build job that can define its own script. [2]
[1]https://docs.travis-ci.com/user/job-lifecycle/#the-job-lifecycle
[2]https://docs.travis-ci.com/user/build-matrix/#listing-individual-jobs
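As a rough illustration of the difference (a minimal sketch, not taken from the docs): the first config runs one build job with a single script phase, while the second defines two jobs, each with its own script.

# one build job, one script phase
language: ruby
script:
  - echo "helloworld"

# versus: an explicit list of jobs, each with its own script
language: ruby
jobs:
  include:
    - name: Job A
      script: echo "hello from job A"
    - name: Job B
      script: echo "hello from job B"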

Is there a way to skip a stage in travis ci based on environment variables

We are using stages to deploy and to run our different kinds of regression tests in parallel. Is there a way I can skip certain scripts within stages based on an environment variable that is available when the Travis job kicks off?
I read the Travis CI documentation and SO questions, and this is the best I could come up with. This is a snippet of our current setup.
jobs:
  include:
    - stage: Setup
      script:
        - reset_db
        - reset_es_index
      name: Setting up Environment
      if: type IN (push, api)
    - stage: Tests
      script:
        - run_fast
      name: Fast tests
      if: type IN (push, api)
      allow_failure: true
    - script:
        - run_api_tests
      name: Api tests
      if: type = api AND env(TRIGGER_REPO) != com_ui_project
    - script:
        - run_slow_tests
      name: Slow tests
      if: type = api AND env(TRIGGER_REPO) != com_ui_project
As you can see, I want to run the Api tests and Slow tests scripts in the Tests stage only if the Travis job was triggered by an API call AND the custom environment variable TRIGGER_REPO is NOT com_ui_project. Otherwise, run the Fast tests script and skip the other scripts.
Right now, if the build was triggered by an API call, all the scripts in the Tests stage run. How can I avoid that?
I tried the following too
$TRIGGER_REPO != com_ui_project
env(TRIGGER_REPO) != "com_ui_project"
$TRIGGER_REPO!="com_ui_project" (Thinking that using shell formatting might help. It did not)

circleCI CLI - Cannot find a job named `build` to run in the `jobs:` section of your configuration file

I'm using the CircleCI CLI locally to test my .circleci/config.yml. This is what it looks like:
version: 2.1
jobs:
  test:
    docker:
      - image: circleci/node:4.8.2
    steps:
      - checkout
      - run: echo 'test step'
workflows:
  version: 2
  workflow:
    jobs:
      - test
This fails with the following error:
* Cannot find a job named build to run in the jobs: section of your configuration file.
If you expected a workflow to run, check your config contains a top-level key called 'workflows:'
The 'hello world' workflow from the CLI docs works fine.
What am I missing here?
The same CircleCI CLI documentation mentioned above says this in the 'Limitations' section:
The CLI tool does not provide support for running workflows. By nature, workflows leverage running jobs concurrently on multiple machines allowing you to achieve faster, more complex builds. Because the CLI is only running on your machine, it can only run single jobs (which make up parts of a workflow).
So I guess running workflows with orbs works (as in the 'hello world' example), but running workflows with your own jobs does not work with the CLI.
Testing Jobs Locally
If you're looking to test your config locally like I was, you can still execute your individual jobs locally. In the same documentation linked above, under the title 'Running a Job', when using config version 2.1+ you can explicitly call one of your jobs like so:
circleci config process .circleci/config.yml > process.yml
circleci local execute -c process.yml --job JOB_NAME
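With the config shown above, JOB_NAME would be test, e.g.:
circleci local execute -c process.yml --job test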
