I use and love Travis CI for continuous integration on an open source project on GitHub. I like the fast container builds, so I set sudo: false globally in my .travis.yml.
However, in one particular build of my build matrix I want to spin up my own Docker container, so I think I need sudo: true there. Does this mean that I need to use sudo: true for all of my builds, or is there some way around this? I would like to set sudo: true for just one build. Alternatively, is it possible to have multiple .travis.yml scripts in the same GitHub repository?
As shown in the numpy .travis.yml script, you can specify sudo: true per entry in the matrix include list.
matrix:
  include:
    - python: 2.7
      sudo: true
      dist: trusty
      env: ...
    - python: 2.7
      env: ...
I am pushing Docker images to our private registry via Jenkins with the following command:
def dockerImage = docker.build("repo/myapp:${env.BUILD_NUMBER}")
(BUILD_NUMBER increases after every build.)
Because I am new to using Helm, I could not decide how I should set the image tag in values.yaml.
I would like to deploy my app to multiple environments such as:
dev
test
prod
Let's say I was able to deploy my app via Helm to dev, and the latest BUILD_NUMBER is:
100 for dev
101 for test
102 for prod
What should be the tag value, then?
image:
  repository: registry/myrepo/image
  tag:
You should put some tag into your values.yaml to act as the default. Every Helm Chart has one; you can check the official Helm Charts for examples.
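For example, a minimal image block with a default tag could look like the following (the repository path and the stable default are placeholders):
image:
  repository: registry/myrepo/image
  tag: stable    # default, used when no tag is given at install/upgrade time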
Now, you have two options on how to act with the different environments.
Option 1: Command line parameters
While installing your Helm Chart, you can specify the tag name dynamically with --set. For example:
$ helm install --set image.tag=12345 <your-chart-name>
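Applied to the build numbers above, the per-environment installs might look like this (a sketch; release naming and cluster context are omitted):
$ helm install --set image.tag=100 <your-chart-name>   # dev
$ helm install --set image.tag=101 <your-chart-name>   # test
$ helm install --set image.tag=102 <your-chart-name>   # prod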
Option 2: Separate values.yaml files
You can store separate values.yaml in your repository, like:
values.dev.yaml
values.prod.yaml
Then, update the correct values in your Jenkins pipeline.
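For example, a dev deployment from Jenkins could combine the environment file with the build number (a sketch; the release name myapp and chart path ./mychart are assumptions):
$ helm upgrade --install myapp ./mychart -f values.dev.yaml --set image.tag=${BUILD_NUMBER}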
I just ran into this issue with GitHub Actions. As the other answer already noted, set the image.tag in values.yaml to something as a default. I use latest as the default.
The problem is that the helm upgrade command only upgrades if the image tag is different. So "latest" isn't unique enough for Helm to do the rolling update. I use a SHA in GitHub Actions as my unique version tag. You could tag the image like this:
def dockerImage = docker.build("repo/myapp:${env.NAME}-${env.BUILD_NUMBER}")
Then in your helm command just add a --set:
helm upgrade <helm-app> <helm-chart> --set image.tag=${env.NAME}-${env.BUILD_NUMBER}
This command will override whatever value is in values.yaml. Keep in mind that the --set value must follow the structure of your values.yaml so in this case image is a top level object, with a property named tag:
values.yaml:
image:
  name:
  tag:
  pullPolicy:
port:
replicaCount:
Maybe it's late, but I hope this helps someone later with a similar query.
I had a similar situation and was looking for options. It was painful, as Helm 3's helm package command doesn't come with the --set option that exists in version 2.
Solution:
Implemented with the Python Jinja package and environment variables, using the steps below.
Create a values.yaml.j2 file inside your chart directory, containing your values along with templates as below.
name: {{ APPLICATION | default("SampleApp") }}
labelname: {{ APPLICATION | default("SampleApp") }}
image:
  imageRepo: "SampleApp"
  imageTag: {{ APPLICATION_IMAGE_TAG | default("1.0.27") }}
Dependency packages (in container):
sh 'yum -y install python3'
sh 'yum -y install python3-pip'
sh 'yum -y install python-setuptools'
sh 'python3 -m pip install jinja-cli'
Sample environment variables in your build pipeline:
APPLICATION = "AppName"
APPLICATION_VERSION = '1.0'
APPLICATION_CHART_VERSION = '1.0'
APPLICATION_IMAGE_TAG = "1.0.${env.BUILD_NUMBER}"
Now in your pipeline, before packaging the chart, apply the templates with one jinja command, like below.
sh "jinja CHART_DIR/values.yaml.j2 -X 'APP.*' -o CHART_DIR/values.yaml"
sh "helm package CHART_DIR"
Done!
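Putting it together, the relevant pipeline steps reduce to something like this sketch (shell form; CHART_DIR stands for your chart directory, and the variable values are the samples above):
# Environment assumed to be set by the build pipeline
export APPLICATION="AppName"
export APPLICATION_IMAGE_TAG="1.0.${BUILD_NUMBER}"
# Render values.yaml from the Jinja template, then package the chart
jinja CHART_DIR/values.yaml.j2 -X 'APP.*' -o CHART_DIR/values.yaml
helm package CHART_DIR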
If I:
run a build in travis, and
tests pass correctly, but
there is some problem uploading coverage results to codecov,
...Travis goes ahead and deploys anyway. How can I stop Travis from deploying in this case?
The deploy-regardless-of-upload-failure in action: [build log screenshot]
Here's my .travis.yml:
dist: trusty
language: python
python:
  - '3.6'
# Install tox and codecov
install:
  - pip install tox-travis
  - pip install codecov
# Use tox to run tests in the matrix of environments
script:
  - tox -r
# Push the results back to codecov
after_success:
  - codecov --commit=$TRAVIS_COMMIT
# Deploy updates on master to pypi, which will only succeed if there's been a version bump
deploy:
  provider: pypi
  skip_cleanup: true
  skip_existing: true
  user: me
  password:
    secure: "stuff"
  on:
    branch: master
According to this bitbucket issue (https://bitbucket.org/ned/coveragepy/issues/139/easy-check-for-a-certain-coverage-in-tests), if you add the --fail-under switch to the coverage report command, it will exit with a non-zero exit code (which Travis will see as a failure) when code coverage is below the given percentage.
That would make the script section of your .travis.yml file look like:
script:
  - coverage run --source="mytestmodule" setup.py test
  - coverage report --fail-under=80
Of course you could replace 80 with whatever percentage you'd like.
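Note that --fail-under guards against low coverage rather than against the upload itself failing. Since after_success cannot fail a Travis build, one way to make a failed codecov upload block the deploy (a sketch, untested) is to run the upload from the script section instead:
script:
  - tox -r
  # A non-zero exit here fails the build, so deploy never runs
  - codecov --commit=$TRAVIS_COMMIT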
I'm trying to create two different actions within Travis CI. The first action is to execute a script on every push on every branch. This is currently working as desired. The second is to trigger a different script only when git push origin --tags is run. In short:
Execute script1 always (currently working)
Execute script2 when tags are pushed
Here is what I'm trying:
language: python
python:
  - 3.7
matrix:
  include:
    - python: 3.7
      sudo: true
install:
  - pip install -r requirements.txt
script: # Always want this to happen
  - invoke package
branches:
  only:
    - master
    - /^x\/.*/
deploy: # Want this to occur on git push origin --tags
  provider: script
  script: invoke release
  on:
    tags: true
The deploy section is not being triggered, and I can find no evidence of the invoke release script being invoked.
Update:
It may be due to the way I'm pushing tags. I'm seeing this log in Travis now:
Skipping a deployment with the script provider because this is not a tagged commit
Solved it from this GitHub issue. I changed the deploy section to this:
deploy:
  provider: script
  script: invoke release
  on:
    tags: true
    all_branches: true
but I had to remove the branches section. The deployment script was invoked nonetheless.
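For reference, a tag push that triggers such a deploy looks like this (v1.2.3 is just an example tag name):
git tag v1.2.3
git push origin --tags   # Travis builds the tagged commit and runs deploy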
I'm looking to declare environment variables in my Travis CI repository settings and use them in my .travis.yml file to deploy an application and post a build notification in Slack.
I've set environment variables in my Travis CI repository settings like so: [screenshot of the repository settings page]
My .travis.yml file appears as the following:
language: node_js
node_js:
  - '0.12'
cache:
  directories:
    - node_modules
deploy:
  edge: true
  provider: cloudfoundry
  api: $CF_API
  username: $CF_USERNAME
  password: $CF_PASSWORD
  organization: $CF_ORGANIZATION
  space: $CF_SPACE
notifications:
  slack: $NOTIFICATIONS_SLACK
When I add the values into the .travis.yml file as they are, everything works as planned.
However, when I try to refer to the environment variables set in the repository, I receive no Slack notification on a build status, and the deployment fails.
Am I following this process correctly, or is there something I'm overlooking?
I think it is probably too early in Travis CI's sequence for your environment variables to be read.
What I would suggest is to rather encrypt them using the travis command-line tool.
E.g.
$ travis encrypt
Reading from stdin, press Ctrl+D when done
username
Please add the following to your .travis.yml file:
secure: "TD955qR6qvpVIz3fLkGeeUhV76deeVRaLVYjW9YjV6Ob7wd+vPtACZ..."
Pro Tip: You can add it automatically by running with --add.
Then I would copy/paste the secure: "TD955qR6qvpVIz3fLkGeeUhV76d..." result at the appropriate place in your .travis.yml file:
language: node_js
node_js:
  - '0.12'
cache:
  directories:
    - node_modules
deploy:
  edge: true
  provider: cloudfoundry
  api:
    secure: "bHU4+ZDFeZcHpuE/WRpgMBcxr8l..."
  username:
    secure: "TD955qR6qvpVIz3fLkGeeUhV76d..."
You can find more details about how to encrypt sensitive data on Travis CI in the official documentation.
Hope this helps.
I test on three different Node versions (mainly to alert me to any compatibility issues that might arise if I was forced to switch to another version in production):
sudo: false
language: node_js
node_js:
  - iojs
  - '0.12'
  - '0.10'
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
matrix:
  allow_failures:
    - node_js: iojs
But that means my ./deploy.sh script is run three times, from three different containers! I obviously only want one of the successful builds to be deployed. The other builds are just for catching Node issues.
Is there a way to configure it so it only runs my deploy script after one of the jobs? Maybe another setting under on:?
The docs for script provider don't cover this.
What about setting a node: '0.10' option under on:? Like so:
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
    node: '0.10'
This should run the deploy job only on the node: '0.10' target.
From the official Travis CI deployment docs:
"jdk, node, perl, php, python, ruby, scala, go: For language runtimes that support multiple versions, you can limit the deployment to happen only on the job that matches the desired version."
You could try using a conditional release.
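A sketch of what that could look like, using the condition key under on: together with the TRAVIS_NODE_VERSION environment variable (adapted from the Travis docs, untested):
deploy:
  skip_cleanup: true
  provider: script
  script: ./deploy.sh
  on:
    branch: master
    # Only deploy from the job running Node 0.10
    condition: $TRAVIS_NODE_VERSION = 0.10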