How to set up a Travis/Rails project to submit to Coverity Scan? - ruby-on-rails

I'm looking for a standard Travis/Coverity setup for a Rails application.
My current .travis.yml file looks like this:
# environment settings
env:
  - DB=sqlite
  - DB=mysql
  - DB=postgresql
env:
  global:
    # The next declaration is the encrypted COVERITY_SCAN_TOKEN, created
    # via the "travis encrypt" command using the project repo's public key
    - secure: "<SECURE>"
# project language
language: ruby
rvm:
  - 2.3.1
# branches to build (whitelist)
branches:
  only:
    - master
    - coverity_scan
    - testing
# command to run before install
before_install:
  - echo -n | openssl s_client -connect scan.coverity.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | sudo tee -a /etc/ssl/certs/ca-
# arguments for the bundler
bundler_args: --without production development
# addons
addons:
  coverity_scan:
    project:
      name: "<PROJECT_NAME>"
      description: "Build submitted via Travis CI"
    notification_email: <MY_EMAIL>
    build_command_prepend: ""
    build_command: "--no-command"
    branch_pattern: coverity_scan
# script
script:
  - RAILS_ENV=test bundle exec rake db:migrate --trace
  - bundle exec rake db:test:prepare
  - bundle exec rspec spec/
  - bundle exec cucumber
# run before script
before_script:
  - mysql -e 'create database my_app_test'
  - psql -c 'create database my_app_test' -U postgres
I'm not sure what to put in the build_command part of addons.coverity_scan. I have already tried leaving it empty, --no-command, bundle install, and bundle install --jobs=3 --retry=3, but none of them worked. --no-command, for example, gives me the following message:
Coverity Scan analysis selected for branch coverity_scan.
Coverity Scan analysis authorized per quota.
$ curl -s https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh | COVERITY_SCAN_PROJECT_NAME="$PROJECT_NAME" COVERITY_SCAN_NOTIFICATION_EMAIL="<MY_EMAIL>" COVERITY_SCAN_BUILD_COMMAND="--no-command" COVERITY_SCAN_BUILD_COMMAND_PREPEND="" COVERITY_SCAN_BRANCH_PATTERN=coverity_scan bash
Note: COVERITY_SCAN_PROJECT_NAME and COVERITY_SCAN_TOKEN are available on Project Settings page on scan.coverity.com
Coverity Scan configured to run on branch coverity_scan
Coverity Scan analysis authorized per quota.
Downloading Coverity Scan Analysis Tool...
2016-09-13 23:26:36 URL:https://scan.coverity.com/download/Linux [449455458/449455458] -> "/tmp/cov-analysis-Linux.tgz" [1]
Extracting Coverity Scan Analysis Tool...
/tmp/coverity-scan-analysis ~/build/<PROJECT_NAME>
~/build/<PROJECT_NAME>
Running Coverity Scan Analysis Tool...
Coverity Build Capture (64-bit) version 8.5.0.3 on Linux 3.13.0-92-generic x86_64
Internal version numbers: db70178643 p-kent-push-26368.949
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
For more details, please look at:
/home/travis/build/<PROJECT_NAME>/cov-int/build-log.txt
Extracting SCM data for 0 files...
Please see the log file '/home/travis/build/<PROJECT_NAME>/cov-int/scm_log.txt' for warnings and SCM command issues.
[WARNING] Unable to gather all SCM data - see log at /home/travis/build/<PROJECT_NAME>/cov-int/scm_log.txt for details.
Successfully added SCM data for 0 files
Tarring Coverity Scan Analysis results...
Uploading Coverity Scan Analysis results...
And because I'm using Travis I'm not able to look into the log files...
When the command is left empty, the build fails with an error saying that a command needs to be given, and nothing is done.
Can someone help me with some kind of standard setup for a Rails app?
Thanks in advance!

Instructions for using Coverity SCAN with the new languages supported by 8.5 are found here: https://scan.coverity.com/download?tab=other . We assume that users first read these instructions prior to attempting Travis CI integration. It is also highly recommended to test local integration and ensure clean captures before attempting any CI.

Ruby is supported in Coverity 8.5. Debugging this is difficult without the log. It's possible the SCAN kit is missing a config for Ruby, which can be fixed by running cov-configure --ruby.
The more likely problem, however, is that Ruby is what Coverity considers a "non-compiled" language (meaning you don't usually have a build that compiles it before it can be used). So this means you need to inform cov-build about where your Ruby files are. Looking at your Travis config, I expect you need to add this to your build_command: --fs-capture-search /path/to/your/ruby/files
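A hedged sketch of how those two suggestions could be wired into the addon config (the --no-command flag is carried over from your existing file; putting cov-configure --ruby in build_command_prepend and searching ./app and ./lib are assumptions about a typical Rails layout, so adjust to wherever your Ruby sources actually live):
addons:
  coverity_scan:
    project:
      name: "<PROJECT_NAME>"
      description: "Build submitted via Travis CI"
    notification_email: <MY_EMAIL>
    build_command_prepend: "cov-configure --ruby"
    build_command: "--no-command --fs-capture-search ./app --fs-capture-search ./lib"
    branch_pattern: coverity_scan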

It seems that it is not possible to analyse Rails projects (and interpreted languages in general; correct me if I'm wrong). I deleted my project from Coverity.

Related

Appium tests on Amazon Device Farm getting stuck in custom test environment at npm install *.tgz

I have Android tests written in WebdriverIO/Appium. They run fine on my local machine, but on Amazon Device Farm the execution gets stuck in the custom test environment at npm install *.tgz.
Tech stack selected on Amazon Device Farm: Node 14.19.1, Appium 1.22.2
# Phases are collection of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      # By default the Node.js version installed is 11.4.0, however we enable you to change it
      # using the Node version manager (nvm). An example "nvm" command below changes the version to 14.19.1
      - export NVM_DIR=$HOME/.nvm
      - . $NVM_DIR/nvm.sh
      - nvm install 14.19.1
      # Unpackage and install the node modules that you uploaded in the test phase.
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz
      # This test execution environment uses Appium version 1.9.1 by default, however we enable you to change it using
      # the Appium version manager (avm). An example "avm" command below changes the version to 1.22.2.
      # For your convenience, we have preinstalled the following versions: 1.22.2, 1.19.0, 1.18.3, 1.18.1, 1.18.0, 1.17.1, 1.16.0, 1.15.1, 1.14.2, 1.14.1, and 1.13.0.
      # To use one of these Appium versions, change the version number in the "avm" command below to your desired version:
      - export APPIUM_VERSION=1.22.2
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # We recommend starting the appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated during run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp
        --default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\",
        \"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\",
        \"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}"
        >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
        if [ $start_appium_timeout -gt 60 ];
        then
        echo "appium server never started in 60 seconds. Exiting";
        exit 1;
        fi;
        grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
        if [ $? -eq 0 ];
        then
        echo "Appium REST http interface listener started on 0.0.0.0:4723";
        break;
        else
        echo "Waiting for appium server to start. Sleeping for 1 second";
        sleep 1;
        start_appium_timeout=$((start_appium_timeout+1));
        fi;
        done;
  # The test phase includes commands that start your test suite execution.
  test:
    commands:
      # Go into the root folder containing your source code and node_modules
      - echo "Navigate to test source code"
      # Change the directory to the node_modules folder as it has your test code and the dependency node modules.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node test"
      # Enter the command below to start the tests. The command should be similar to what you use to run the tests locally.
      # For example, assuming you run your tests locally using the command "node YOUR_TEST_FILENAME.js", enter the same command below:
      - npm run test:adf:android
  # The post test phase includes commands that are run after your tests are executed.
  post_test:
    commands:
# The artifacts phase lets you specify the location where your test logs and device logs will be stored.
# It also lets you specify the location of the test logs and artifacts which you want Device Farm to collect.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories
  - $DEVICEFARM_LOG_DIR
I followed this blog to set up Amazon Device Farm.

Bitbucket Server Installation Error

I'm attempting to install Bitbucket server on my linux server. I'm following the steps here. I'm stuck at step 3. I've installed Bitbucket server, and now when trying to "Setup Bitbucket Server" I'm not able to access it from my browser.
I've done the following:
Using SSH I went to the directory containing /atlassian/bitbucket/4.4.1/.
I ran the command bin/start-bitbucket.sh.
It gives the following message:
Starting Atlassian Bitbucket as current user
-------------------------------------------------------------------------------
JAVA_HOME "/usr/local/jdk" does not point to a valid Java home directory.
-------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Bitbucket is being run with a umask that contains potentially unsafe settings.
The following issues were found with the mask "u=rwx,g=rwx,o=rx" (0002):
- access is allowed to 'others'. It is recommended that 'others' be denied
all access for security reasons.
- write access is allowed to 'group'. It is recommend that 'group' be
denied write access. Read access to a restricted group is recommended
to allow access to the logs.
The recommended umask for Bitbucket is "u=,g=w,o=rwx" (0027) and can be
configured in setenv.sh
----------------------------------------------------------------------------------
Using BITBUCKET_HOME: /home/wbbstaging/atlassian/application-data/bitbucket
Using CATALINA_BASE: /home/wbbstaging/atlassian/bitbucket/4.4.1
Using CATALINA_HOME: /home/wbbstaging/atlassian/bitbucket/4.4.1
Using CATALINA_TMPDIR: /home/wbbstaging/atlassian/bitbucket/4.4.1/temp
Using JRE_HOME: /usr/local/jdk
Using CLASSPATH: /home/wbbstaging/atlassian/bitbucket/4.4.1/bin/bitbucket-bootstrap.jar:/home/wbbstaging/atlassian/bitbucket/4.4.1/bin/bootstrap.jar:/home/wbbstaging/atlassian/bitbucket/4.4.1/bin/tomcat-juli.jar
Using CATALINA_PID: /home/wbbstaging/atlassian/bitbucket/4.4.1/work/catalina.pid
Existing PID file found during start.
Removing/clearing stale PID file.
Tomcat started.
Success! You can now use Bitbucket at the following address:
http://localhost:7990/
If you cannot access Bitbucket at the above location within 3 minutes, or encounter any other issues starting or stopping Atlassian Bitbucket, please see the troubleshooting guide at:
I try to access http://myserveraddress:7990, but i receive ERR_CONNECTION_REFUSED message. Is it because of the message JAVA_HOME "/usr/local/jdk" does not point to a valid Java home directory?
My server is running:
CentOS Linux release 7.2.1511
And I'm attempting to install
Bitbucket Server 4.4.1
Make sure you have Java installed by running java -version.
If it's not installed, start there. If it is installed, verify where by running find / -name java.
Open /root/.bash_profile in your text editor. (I prefer to use the vi editor.)
Paste in the two lines given below (note that my version may be different from what you see):
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=/usr/java/jdk1.7.0_21/bin:$PATH
Now load the Java variables without a system restart (on a system restart they are set by default):
source /root/.bash_profile
Now check the Java version, JAVA_HOME and PATH variables. They should show the values you just set.
java -version
echo $JAVA_HOME
echo $PATH
Below is my system’s root bash_profile file
[root@localhost ~]# cat /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export JAVA_HOME=/usr/java/jdk1.7.0_21
export PATH=/usr/java/jdk1.7.0_21/bin:$PATH
[root@localhost ~]#

fpm is not recognised if executing script with jenkins and ssh

I am trying to execute a script over an SSH connection with Jenkins. I am using the SSH plugin and it is well configured. I manage to execute the first part of the script, but when I try to execute an fpm command it says:
fpm: command not found
If I connect to the instance and run the same script that I call via Jenkins, it runs with no error (fpm is installed).
So I created a test script, test.sh:
#!/bin/bash -x
fpm
but with Jenkins I get the same error, fpm: command not found, while if I execute it myself I get the normal "missing parameters" output:
Missing required -s flag. What package source did you want? {:level=>:warn}
Missing required -t flag. What package output did you want? {:level=>:warn}
No parameters given. You need to pass additional command arguments so that I know what you want to build packages from. For example, for '-s dir' you would pass a list of files and directories. For '-s gem' you would pass a one or more gems to package from. As a full example, this will make an rpm of the 'json' rubygem: `fpm -s gem -t rpm json` {:level=>:warn}
Fix the above problems, and you'll be rolling packages in no time! {:level=>:fatal}
What am I missing? Why can't it find fpm if it is installed?
Make sure fpm is in /usr/bin.
It seems that the problem was that fpm was installed in /home/user2connect/bin/, which is not on the PATH of the non-interactive session Jenkins opens, so the command was not recognised. To fix this I had to call it with the full path:
/home/user2connect/bin/fpm ...
I chose to reinstall fpm using sudo, so now it works.
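An alternative is a sketch like the one below (it assumes fpm really lives in /home/user2connect/bin, as found above): extend PATH at the top of the script Jenkins runs, so the non-interactive session can resolve the command.
#!/bin/bash -x
# make the gem binary directory visible to the non-interactive SSH session (path assumed from above)
export PATH="/home/user2connect/bin:$PATH"
command -v fpm   # should now print the fpm location
fpm "$@"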

Pronto-Rubocop on Gitlab is not working, wrong URL being called

I'm using:
Gitlab 7.11.2
Rails 3.2
Pronto 0.4.2
pronto-rubocop 0.4.4
And I'm having trouble setting up the script to run RuboCop on git commits. I want to use Pronto so that only the changes are checked. I am not using GitLab.com's hosted service, so I am not sure how to proceed when I reach this point of the Pronto setup, https://github.com/mmozuras/pronto:
Set the GITLAB_API_ENDPOINT environment variable to your API endpoint URL. If you are using Gitlab.com's hosted service your endpoint will be https://gitlab.com/api/v3. Set the GITLAB_API_PRIVATE_TOKEN environment variable to your Gitlab private token which you can find in your account settings.
Then just run it:
GITLAB_API_ENDPOINT="https://gitlab.com/api/v3" GITLAB_API_PRIVATE_TOKEN=token pronto run -f gitlab -c origin/master
Where do I run the last command?:
GITLAB_API_ENDPOINT="https://gitlab.com/api/v3" GITLAB_API_PRIVATE_TOKEN=token pronto run -f gitlab -c origin/master
I created a .gitlab-ci.yml file and it has:
before_script:
  - ruby -v
  - gem install bundler --no-ri --no-rdoc
  - bundle --without postgres
rubocop:
  script: bundle exec pronto run --index
But I can't tell if this is running.
I also did not set up the GitlabFormatter mentioned on the Pronto page; the information I found about it was vague and unhelpful to me.
Can someone point me in the right direction?
Thanks!
UPDATE
So I gave up on the .gitlab-ci.yml route for now because the job on gitlab-ci worked. I figured out the endpoint and installed all the necessary requirements on the server, and everything bundles correctly.
At the very end of the process, I run:
GITLAB_API_ENDPOINT="https://gitlab.com/api/v3" GITLAB_API_PRIVATE_TOKEN=token pronto run -f gitlab -c origin/master
But the URL it generates is incorrect and I cannot add comments to the commit:
https://gitlab.com/api/v3/projects/10022%2Fname%2Fapp-name/repository/commits/ad87234..asdf87923/comments
I'm getting a
(Gitlab::Error::NotFound)
and it looks like it is coming from this part of the URL:
../10022%2Fname%2Fapp-name/..
because I can see the JSON when I change it to
../name/app-name/..
Any idea on how to call the correct URL?
Finally got this working. I am hosting GitLab on my own server. The API endpoint will look up the correct project as long as you have "api/v3" at the end; it's when you specify projects that things can go wrong. You'll know this if you see errors similar to these:
"Project not found" --> don't specify the project
"Unauthorized" --> are you using the correct user token?
Finally, even after Pronto has run and told you how many errors were found or how many comments were left, they might not always show up on the commit; BUT if you submit a merge request the RuboCop comments will show up.
To have the GitLab build fail if there are style guide violations, run pronto with the option:
--exit-code
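For example (the same command as above with the extra flag appended; the endpoint and token are placeholders as before):
GITLAB_API_ENDPOINT="https://gitlab.com/api/v3" GITLAB_API_PRIVATE_TOKEN=token pronto run -f gitlab -c origin/master --exit-code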

How to run travis-ci locally

I'd rather not have to push every little change to .travis.yml and every little change I make to the source in order to run the build. With Jenkins you can download Jenkins and run it locally. Does Travis offer something like this?
Note: I've seen the travis-ci cli and downloaded it, but all it seems
to do is call their API, which then connects to my GitHub repo, so if
I don't push, it won't matter that I restart the last build.
This process allows you to completely reproduce any Travis build job on your computer. Also, you can interrupt the process at any time and debug. Below is an example where I perfectly reproduce the results of job #191.1 on php-school/cli-menu.
Prerequisites
You have public repo on GitHub
You ran at least one build on Travis
You have Docker set up on your computer
Set up the build environment
Reference: https://docs.travis-ci.com/user/common-build-problems/
Make up your own temporary build ID
BUILDID="build-$RANDOM"
View the build log, open the show more button for WORKER INFORMATION and find the INSTANCE line, paste it in here and run (replace the tag after the colon with the newest available one):
INSTANCE="travisci/ci-garnet:packer-1512502276-986baf0"
Run the headless server
docker run --name $BUILDID -dit $INSTANCE /sbin/init
Run the attached client
docker exec -it $BUILDID bash -l
Run the job
Now you are inside your Travis environment. Run su - travis to begin.
This step is well defined but it is more tedious and manual. You will find every command that Travis runs in the environment. To do this, look for everything in the right column which has a tag like 0.03s.
On the left side you will see the actual commands. Run those commands, in order.
Result
Now is a good time to run the history command. You can restart the process and replay those commands to run the same test against an updated code base.
If your repo is private: ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB", then cat ~/.ssh/id_rsa.pub and add the key to your GitHub account (Settings → SSH and GPG keys).
FYI: you can git pull from inside docker to load commits from your dev box before you push them to GitHub
If you want to change the commands Travis runs then it is YOUR responsibility to figure out how that translates back into a working .travis.yml.
I don't know how to clean up the Docker environment; it looks complicated, and maybe this leaks memory.
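A minimal cleanup sketch (using the $BUILDID and $INSTANCE variables defined above; this stops and removes the container and, optionally, the downloaded image):
docker stop $BUILDID     # stop the headless container
docker rm $BUILDID       # remove the container
docker rmi $INSTANCE     # optionally remove the image to reclaim disk space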
Travis-ci offers a new container-based infrastructure that uses docker. This can be very useful if you're trying to troubleshoot a travis-ci build by reproducing it locally. This is taken from Travis CI's documentation.
Troubleshooting Locally in a Docker Image
If you're having trouble tracking down the exact problem in a build it often helps to run the build locally. To do this you need to be using our container based infrastructure (ie, have sudo: false in your .travis.yml), and to know which Docker image you are using on Travis CI.
Running a Container Based Docker Image Locally
Download and install the Docker Engine.
Select an image from Docker Hub. If you're not using a language-specific image pick ci-ruby. Open a terminal and start an interactive Docker session using the image URL:
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Switch to the travis user:
su - travis
Clone your git repository into the / folder of the image.
Manually install any dependencies.
Manually run your Travis CI build command.
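A sketch of those last three steps inside the container (YOUR_USER/YOUR_PROJECT and the rake task are placeholders; run whatever your .travis.yml script: section actually runs):
cd /                                                      # the docs suggest cloning into the / folder
git clone https://github.com/YOUR_USER/YOUR_PROJECT.git   # clone your repository
cd YOUR_PROJECT
bundle install                                            # manually install any dependencies
bundle exec rake                                          # manually run your Travis CI build command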
UPDATE: I now have a complete turnkey, all-in-one answer, see https://stackoverflow.com/a/49019950/300224. Only took 3 years to figure out!
According to the Travis documentation: https://github.com/travis-ci/travis-ci there is a concoction of projects that collude to deliver the Travis CI web service we know and love. The following subset of projects appears to allow local make test functionality using the .travis.yml in your project:
travis-build
travis-build creates the build script for each job. It takes the configuration from the .travis.yml file and creates a bash script that is then run in the build environment by travis-worker.
travis-cookbooks
travis-cookbooks holds the Chef cookbooks that are used to provision the build environments.
travis-worker
travis-worker is responsible for running the build scripts in a clean environment. It streams the log output to travis-logs and pushes state updates (build starting/finishing) to travis-hub.
(The other subprojects are responsible for communicating with GitHub, their web interface, email, and their API.)
Similar to Scott McLeod's answer, but this also generates a bash script to run the steps from the .travis.yml.
Troubleshooting Locally in Docker with a generated Bash script
# choose the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
# to create ~/.travis
travis version
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `AUTHOR/PROJECT` on GitHub
cd ~/builds
mkdir AUTHOR
cd AUTHOR
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
Use the wwtd (what would travis do) Ruby gem to run tests on your local machine, roughly as they would run on Travis.
It will recreate the build matrix and run each configuration, great to sanity check setup before pushing.
gem i wwtd
wwtd
tl;dr Use image specified at https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image in combination with https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
EDIT 2019-12-06
#troubleshooting-locally-in-a-docker-image section was replaced by #running-builds-in-debug-mode which also describes how to SSH to the job running in the debug mode.
EDIT 2019-07-26
#troubleshooting-locally-in-a-docker-image section is no longer part of the docs; here's why
https://github.com/travis-ci/docs-travis-ci-com/issues/2342
https://blog.travis-ci.com/2018-10-04-combining-linux-infrastructures
https://blog.travis-ci.com/2018-11-30-announcing-xenial-build-environment-for-enterprise
Though, it's still in git history: https://github.com/travis-ci/docs-travis-ci-com/pull/2193.
Look for (quite old, couldn't find newer) image versions at: https://travis-ci.org/travis-ci/docs-travis-ci-com/builds/230889063#L661.
I wanted to inspect why one of the tests in my build failed with an error I didn't get locally.
Worked.
What actually worked was using the image specified at Troubleshooting Locally in a Docker Image docs page. In my case it was travisci/ci-garnet:packer-1512502276-986baf0.
I was able to add travis compile following the steps described at https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
dm@z580:~$ docker run --name travis-debug -dit travisci/ci-garnet:packer-1512502276-986baf0 /sbin/init
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
travisci/ci-garnet packer-1512502276-986baf0 6cbda6a950d3 11 months ago 10.2GB
dm@z580:~$ docker exec -it travis-debug bash -l
root@912e43dbfea4:/# su - travis
travis@912e43dbfea4:~$ cd builds/
travis@912e43dbfea4:~/builds$ git clone https://github.com/travis-ci/travis-build
travis@912e43dbfea4:~/builds$ cd travis-build
travis@912e43dbfea4:~/builds/travis-build$ mkdir -p ~/.travis
travis@912e43dbfea4:~/builds/travis-build$ ln -s $PWD ~/.travis/travis-build
travis@912e43dbfea4:~/builds/travis-build$ gem install bundler
travis@912e43dbfea4:~/builds/travis-build$ bundle install --gemfile ~/.travis/travis-build/Gemfile
travis@912e43dbfea4:~/builds/travis-build$ bundler binstubs travis
travis@912e43dbfea4:~/builds/travis-build$ cd ..
travis@912e43dbfea4:~/builds$ git clone --depth=50 --branch=master https://github.com/DusanMadar/PySyncDroid.git DusanMadar/PySyncDroid
travis@912e43dbfea4:~/builds$ cd DusanMadar/PySyncDroid/
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ ~/.travis/travis-build/bin/travis compile > ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ sed -i 's,--branch\\=\\\x27\\\x27,--branch\\=master,g' ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ bash ci.sh
Everything from .travis.yml was executed as expected (dependencies installed, tests ran, ...).
Note that before running bash ci.sh I had to change --branch\=\'\'\ to --branch\=master\ (see the second-to-last sed -i ... command) in ci.sh.
If that doesn't work, the command below will help to identify the target line number and you can edit the line manually.
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ cat ci.sh | grep -in branch
840: travis_cmd git\ clone\ --depth\=50\ --branch\=\'\'\ https://github.com/DusanMadar/PySyncDroid.git\ DusanMadar/PySyncDroid --echo --retry --timing
889:export TRAVIS_BRANCH=''
899:export TRAVIS_PULL_REQUEST_BRANCH=''
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$
Didn't work.
Followed the accepted answer for this question but didn't find the image mentioned in the INSTANCE line (travis-ci-garnet-trusty-1512502259-986baf0) at https://hub.docker.com/u/travisci/.
The build worker version points to a travis-ci/worker commit, and its travis-worker-install references quay.io/travisci/ as the image registry. So I tried that.
dm@z580:~$ docker run -it -u travis quay.io/travisci/travis-python /bin/bash
travis@370c23a773c9:/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
travis@370c23a773c9:/$
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/travisci/travis-python latest 753a216d776c 3 years ago 5.36GB
Definitely not Trusty (Ubuntu 14.04) and not small either.
You could try Trevor, which uses Docker to run your Travis build.
From its description:
I often need to run tests for multiple versions of Node.js. But I don't want to switch versions manually using n/nvm or push the code to Travis CI just to run the tests.
That's why I created Trevor. It reads .travis.yml and runs tests in all versions you requested, just like Travis CI. Now, you can test before push and keep your git history clean.
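A minimal usage sketch (assuming Trevor is published on npm under that name and is run with no arguments from the project root, as its README describes; check the project page for the current invocation):
npm install --global trevor   # install the CLI (assumed package name)
cd your-project               # directory containing .travis.yml
trevor                        # reads .travis.yml and runs the tests in Docker for each listed version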
I'm not sure what your original reason for running Travis locally was; if you just wanted to play with it, then stop reading here as this is irrelevant for you.
If you already have experience with hosted Travis and you want to get the same experience in your own datacenter, read on.
Since Dec 2014 Travis CI offers an Enterprise on-premises version.
http://blog.travis-ci.com/2014-12-19-introducing-travis-ci-enterprise/
The pricing is part of the article as well:
The licensing is done per seats, where every license includes 20 users. Pricing starts at $6,000 per license, which includes 20 users and 5 concurrent builds. There's a premium option with unlimited builds for $8,500.
I wasn't able to use the answers here as-is. For starters, as noted, the Travis help document on running jobs locally has been taken down. All of the blog entries and articles I found are based on that. The new "debug" mode doesn't appeal to me because I want to avoid the queue times and the Travis infrastructure until I've got some confidence I have gotten somewhere with my changes.
In my case I'm updating a Puppet module and I'm not an expert in Puppet, nor particularly experienced in Ruby, Travis, or their ecosystems. But I managed to build a workable test image out of tips and ideas in this article and elsewhere, and by examining the Travis CI build logs pretty closely.
I was unable to find recent images matching the names in the CI logs (for example, I could find travisci/ci-sardonyx, but could not find anything with "xenial" or with the same build name). From the logs it appears images are now transferred via AMQP instead of a mechanism more familiar to me.
I was able to find an image travisci/ubuntu-ruby:16.04 which matches the OS I'm targeting for my particular case. It does not have all the components used in the Travis CI environment, so I built a new one based on it, with some components added to the image and others added in the container at runtime depending on the need.
So I can't offer a clear procedure, sorry. But what I did, essentially boiled down:
Find a recent Travis CI image in Docker Hub matching your target OS as closely as possible.
Clone the repository to a build directory, and launch the container with the build directory mounted as a volume and the working directory set to that volume (a minimal launch sketch follows at the end of this answer).
Now the hard work: go through the Travis build log and set up the environment. In my case, this meant setting up RVM, and then using bundle to install the project's dependencies. RVM appeared to be already present in the Travis environment but I had to install it; everything else came from reproducing the commands in the build log.
Run the tests.
If the results don't match what you saw in the Travis CI logs, go back to (3) and see where to go.
Optionally, create a reusable image.
Dev and test locally and then push and hopefully your Travis results will be as expected.
I know this is not concrete and may be obvious, and your mileage will definitely vary, but hopefully this is of some use to somebody. The Dockerfile and a README for my image are on GitHub for reference.
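For step (2) above, a minimal launch sketch (the image name and the host path are placeholders; use whatever image and checkout location fit your project):
docker run -it \
  -v "$PWD/your-module:/build" \
  -w /build \
  travisci/ubuntu-ruby:16.04 /bin/bash
# -v mounts the cloned build directory into the container; -w makes it the working directory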
It is possible to SSH to Travis CI environment via a bounce host. The feature isn't built in Travis CI, but it can be achieved by the following steps.
On the bounce host, create a travis user and ensure that you can SSH to it.
Put these lines in the script: section of your .travis.yml (e.g. at the end).
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
Where $bouncehostip is the IP/host of your bounce host, and $sshpassword is your defined SSH password. These variables can be added as encrypted variables.
Push the changes. You should be able to make an SSH connection to your bounce host.
Source: Shell into Travis CI Build Environment.
Here is the full example:
# use the new container infrastructure
sudo: required
dist: trusty
language: python
python: "2.7"
script:
  - echo travis:$sshpassword | sudo chpasswd
  - sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
  - sudo service ssh restart
  - sudo apt-get install sshpass
  - sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travisci@$bouncehostip
See: c-mart/travis-shell at GitHub.
See also: How to reproduce a travis-ci build environment for debugging
