Travis CI - Using repository environment variables in .travis.yml

I'm looking to declare environment variables in my Travis CI repository settings and use them in my .travis.yml file to deploy an application and post a build notification in Slack.
I've set the environment variables in my Travis CI repository settings.
My .travis.yml file looks like this:
language: node_js
node_js:
  - '0.12'
cache:
  directories:
    - node_modules
deploy:
  edge: true
  provider: cloudfoundry
  api: $CF_API
  username: $CF_USERNAME
  password: $CF_PASSWORD
  organization: $CF_ORGANIZATION
  space: $CF_SPACE
notifications:
  slack: $NOTIFICATIONS_SLACK
When I put the values directly into the .travis.yml file, everything works as planned.
However, when I refer to the environment variables set in the repository settings instead, I receive no Slack notification on the build status, and the deployment fails.
Am I following this process correctly, or is there something I'm overlooking?

I think it is probably too early in Travis CI's sequence for your environment variables to be read.
What I would suggest instead is to encrypt them using the travis command-line tool.
E.g.
$ travis encrypt
Reading from stdin, press Ctrl+D when done
username
Please add the following to your .travis.yml file:
secure: "TD955qR6qvpVIz3fLkGeeUhV76deeVRaLVYjW9YjV6Ob7wd+vPtACZ..."
Pro Tip: You can add it automatically by running with --add.
Then I would copy/paste the secure: "TD955qR6qvpVIz3fLkGeeUhV76d..." result at the appropriate place in your .travis.yml file:
language: node_js
node_js:
  - '0.12'
cache:
  directories:
    - node_modules
deploy:
  edge: true
  provider: cloudfoundry
  api:
    secure: "bHU4+ZDFeZcHpuE/WRpgMBcxr8l..."
  username:
    secure: "TD955qR6qvpVIz3fLkGeeUhV76d..."
You can find more details about how to encrypt sensitive data on Travis CI here.
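As a side note, the --add flag mentioned in the Pro Tip can write the encrypted value straight into the right key for you. A minimal sketch, assuming the travis gem is installed and you run it from the repository root (deploy.password is just an example key path):
$ travis encrypt --add deploy.password
Reading from stdin, press Ctrl+D when done
This prompts for the secret on stdin and appends the corresponding secure: entry under deploy.password in your .travis.yml, so there is nothing to copy/paste by hand.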
Hope this helps.

Related

How to run the same Bitbucket Pipeline with different environment variables for different branches?

I have a monorepo project that is deployed to 3 environments: testing, staging and production. Deploys to testing come from the next branch, while staging and production deploys come from the master branch. Testing deploys should run automatically on every commit to next (but I'm also fine with having to trigger them manually), while deploys from the master branch should be triggered manually. In addition, every deploy may consist of a client push and a server push (depending on the files changed). The commands to deploy to each of the hosts are exactly the same; the only things that change are the host itself and the environment variables.
Therefore I have 2 questions:
Can I make Bitbucket prompt me for the deployment target when I manually trigger the pipeline, thus basically letting me choose the set of env variables to inject into the fixed sequence of commands? I've seen a screenshot of this in a tutorial, but I lost it and can't find it since.
Can I have parallel sequences of commands? I'd like the server and the client push to run simultaneously, but both of them have different steps. Or do I need to merge those into the same step with multiple scripts to achieve that?
Thank you for your help.
The answer to both of your questions is 'Yes'.
The feature that makes it possible is called custom pipelines. Here is a neat doc that demonstrates how to use them.
There is a parallel keyword which you can use to define parallel steps. Check out this doc for details.
If I'm not misinterpreting the description of your setup, your final pipeline should look very similar to this:
pipelines:
  custom:
    deploy-to-staging-or-prod: # As you say the steps are the same, only variable values will define the destination.
      - variables: # List variable names under here, and Bitbucket will prompt you to supply their values.
          - name: VAR1
          - name: VAR2
      - parallel:
          - step:
              script:
                - ./deploy-client.sh
          - step:
              script:
                - ./deploy-server.sh
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
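For illustration, the values you enter at the variables prompt arrive as plain environment variables inside the step, so a script such as deploy-client.sh (hypothetical sketch; treating VAR1 as the target host and VAR2 as a credential is my assumption) could consume them directly:
#!/bin/sh
# deploy-client.sh - hypothetical sketch: prompted pipeline variables
# are available here as ordinary environment variables.
set -eu
echo "Deploying client to host: $VAR1"
# ...actual deploy commands using "$VAR1" and "$VAR2" would go here...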
UPD
If you need to use Deployments instead of providing each variable separately, you can use a manual trigger type:
definitions:
  steps:
    - step: &RunTests
        script:
          - ./run-tests.sh
    - step: &DeployFromMaster
        script:
          - ./deploy-from-master.sh

pipelines:
  branches:
    next:
      - step:
          script:
            - ./deploy-to-testing.sh
    master:
      - step: *RunTests
      - parallel:
          - step:
              <<: *DeployFromMaster
              deployment: staging
              trigger: manual
          - step:
              <<: *DeployFromMaster
              deployment: production
              trigger: manual
The key docs for understanding this pipeline are still this one, plus this one for YAML anchors. Keep in mind that I introduced the 'RunTests' step on purpose: since a pipeline is triggered on a commit, you can't make the first step manual. It acts as a stopper before the deploy steps, which can only be manual due to your requirements.

Travis CI Trigger Build option on repo disabled

I'm working on my first Travis CI project. I'm not sure how it works yet.
The thing is, when I try to use the Trigger Build option (image example), it appears as not allowed; it doesn't let me click it.
And this is my .travis.yml file:
language: node_js
cache:
  directories:
    - ~/.npm
node_js:
  - '12'
git:
  depth: 3
script:
  - yarn build
deploy:
  provider: pages
  edge: true
  skip-cleanup: true
  keep-history: true
  github-token: $GITHUB_TOKEN
  local-dir: dist/
  target-branch: gh-pages
  commit_message: "Deploy Release"
  on:
    branch: main
Did you confirm your account via the email that Travis CI sent you?
With the new sign-up flow, Travis CI requires you to confirm your account once you sign up for travis-ci.com. The confirmation email goes to the address attached to your GitHub account. After confirmation, the "Trigger Build" option becomes visible.

Publishing to NPMJS with Travis CI

I've set up Travis CI to run a few scripts that should:
Deploy some static pages to Github pages
Deploy an NPM package to npmjs
Item 1 works, Item 2 doesn't.
Here's what my travis.yml file looks like:
language: node_js
node_js:
  - '10'
script:
  - gulp build
  - gulp npmDist
deploy:
  - provider: pages
    local_dir: dist-site/
    skip_cleanup: true
    github_token: "$GITHUB_TOKEN"
    on:
      branch: master
  - provider: npm
    email: myemail@mydomain.com
    api_key:
      secure: THE-API-KEY-I-GOT-BY-CREATING-A-TOKEN-ON-NPMJS-AND-ENCRYPTING-IT-USING-TRAVIS-ENCRYPT-COMMAND-IN-TERMINAL
    on:
      tags: true
      repo: githubaccount/reponame
      all_branches: true
I trigger the script in two ways:
- When I merge to master, it deploys to GitHub Pages.
- When I create a tag and push it to master, it should deploy the package to npmjs.
As stated, the first part of the file works, as it actually deploys to GitHub Pages.
Here's the error I get from npmjs:
npm ERR! publish Failed PUT 401
npm ERR! code E401
npm ERR! You must be logged in to publish packages. : package-name
(Oh, and a strange thing: Travis returns "Build Passed" and a successful (green) status, even though there's obviously something wrong.)
Hope this makes sense? Thanx in advance for any help.
Fixed it. Instead of having this in the .travis.yml file:
api_key:
  secure: THE-API-KEY-I-GOT-BY-CREATING-A-TOKEN-ON-NPMJS-AND-ENCRYPTING-IT-USING-TRAVIS-ENCRYPT-COMMAND-IN-TERMINAL
I changed it to:
api_key: "$NPM_TOKEN"
...and added the NPM token as an environment variable inside the Travis CI dashboard.
(Still curious as to why it didn't work, but I can't be bothered to do anything about it, as I've already wasted way too much time on this issue today.)
I had the same problem. I just removed all previous keys and generated them again, and my code looks like this:
deploy:
  provider: npm
  email: $NPM_USER
  api_key: $NPM_TOKEN
To create your NPM_TOKEN you must:
Go to your npm profile
Tokens
Create Token
Select "Read and Publish" and create it.
Then you can specify it in your env variables for the corresponding project.
The key does not have to be encrypted, and the user is your email address. That will be it.
You will receive a notification like:
Installing deploy dependencies
dpl.2
Preparing deploy
dpl.3
Deploying application
+ your-artifact@x.x.x

Get screenshot of failed tests from Travis CI

For local runs I know how to download the failed test screenshots:
scp -P 2222 vagrant@127.0.0.1:/tmp/features_article_feature_817.png ~/Downloads/.
How do we download the screenshots from Travis CI?
For people who get here via Google, there is an alternative approach.
You can run a (failing) job/build in debug mode, which gives you access to an interactive session via SSH. See the Travis docs for more information on how to do this.
Once in your interactive environment, you can run your build phases and find info on failing specs in your tmp folder.
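For reference, debug mode is enabled per job through the Travis API. Roughly (a sketch; <your-api-token> and <job-id> are placeholders, and the host is api.travis-ci.com for repositories built on travis-ci.com):
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token <your-api-token>" \
  -d '{ "quiet": true }' \
  https://api.travis-ci.com/job/<job-id>/debug
The restarted job's log then prints an ssh command you can use to attach to the debug session.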
You can't really SSH into Travis CI. What you can do is upload your build artifacts (like screenshots) to Amazon S3. Here's an example config that would upload all PNG files found in the /tmp directory:
# .travis.yml
addons:
  artifacts: true
  paths:
    - $(ls /tmp/*.png | tr "\n" ":")
You'll also have to configure some Amazon-specific environment variables:
ARTIFACTS_KEY=(AWS access key id)
ARTIFACTS_SECRET=(AWS secret access key)
ARTIFACTS_BUCKET=(S3 bucket name)
Environment variables can be encrypted and securely defined in your .travis.yml with the travis tool.
Read more about amazon s3 uploader and secure variables in Travis CI docs:
https://docs.travis-ci.com/user/uploading-artifacts/
https://docs.travis-ci.com/user/environment-variables/#Defining-encrypted-variables-in-.travis.yml
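For example (a sketch; the values are placeholders), the artifacts credentials can be added as encrypted globals from the command line:
$ travis encrypt ARTIFACTS_KEY=<aws-access-key-id> --add
$ travis encrypt ARTIFACTS_SECRET=<aws-secret-access-key> --add
Each command appends a secure: entry under env.global in .travis.yml; ARTIFACTS_BUCKET is not secret, so it can stay as a plain repository setting or env entry.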
There's a bit of an error in the YAML here: paths should be indented under artifacts. The .travis.yml file would have
# .travis.yml
addons:
  artifacts:
    paths:
      - $(ls /tmp/*.png | tr "\n" ":")

Prevent Travis deploying multiple times when I'm running my tests on multiple Node versions?

I test on three different Node versions (mainly to alert me to any compatibility issues that might arise if I was forced to switch to another version in production):
sudo: false
language: node_js
node_js:
  - iojs
  - '0.12'
  - '0.10'
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
matrix:
  allow_failures:
    - node_js: iojs
But that means my ./deploy.sh script is run three times, from three different containers! I obviously only want one of the successful builds to be deployed. The other builds are just for catching Node issues.
Is there a way to configure it so it only runs my deploy script after one of the jobs? Maybe another setting under on:?
The docs for the script provider don't cover this.
What about setting a node: '0.10' option under on:? Like so:
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
    node: '0.10'
This should run the deploy job only on the node: '0.10' target.
From the official Travis CI deployment docs:
jdk, node, perl, php, python, ruby, scala, go: For language runtimes that support multiple versions, you can limit the deployment to happen only on the job that matches the desired version.
You could try using a conditional release.
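For reference, a conditional release keys the deploy off a shell condition under on:. A rough sketch, assuming a comparison against the job's TRAVIS_NODE_VERSION is what you want:
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
    condition: $TRAVIS_NODE_VERSION = 0.10
This runs the deploy only on the job whose Node version matches the condition, which achieves the same effect as the node key shown above.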
