How to make gitlab-runner read from a specified file instead of .gitlab-ci.yml? - docker

I'm using gitlab-runner locally with sudo gitlab-runner exec docker [job name] in the root of a repository which contains the .gitlab-ci.yml. Is it possible to read the runner configuration from another file than .gitlab-ci.yml?
I'm aware that I can manage different configurations in different jobs in the same .gitlab-ci.yml.
I'm aware that gitlab-runner exec has been deprecated, with a replacement planned but not yet implemented (see https://gitlab.com/gitlab-org/gitlab-runner/issues/2797 for details), so I'm thankful for suggestions regarding the successor as well.
I created https://gitlab.com/krichter/gitlab-runner-local-different-name with a YAML file named a containing a "Hello world" echo in a job called main, which I'd like to run as an example without having to rename a to .gitlab-ci.yml.
Running it with sudo gitlab-runner exec docker main --env CI_CONFIG_PATH=a fails due to
WARNING: Since GitLab Runner 10.0 this command is marked as DEPRECATED and will be removed in one of upcoming releases
fatal: ambiguous argument 'HEAD~1': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
FATAL: open .gitlab-ci.yml: no such file or directory

You really can't: .gitlab-ci.yml is hardcoded into the gitlab-runner exec code. I've created an issue about this problem. Please upvote it if it's still relevant for you.

You can run gitlab-runner exec docker [job] --env CI_CONFIG_PATH=[pathtoyml]
See:
https://gitlab.com/gitlab-org/gitlab-runner/issues/312
See also:
https://docs.gitlab.com/ce/ci/variables/README.html
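A minimal sketch of that invocation against the example repo from the question, assuming the alternative config a sits in the repository root (note that, as the question shows, runner versions that hard-code .gitlab-ci.yml in exec may still fail):
# Invocation suggested in the linked issue; may not work on runner
# versions where exec hard-codes .gitlab-ci.yml.
sudo gitlab-runner exec docker main --env CI_CONFIG_PATH=a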

Related

How to "redirect" old docker-compose commands to new docker compose version?

since the new version of docker compose has arrived, the naming of the command has changed from
$ docker-compose [...]
to
$ docker compose [...]
Is there any way to "redirect" commands that already exist, written in the old way, to the new one?
It would be very useful, for instance, to keep install scripts or makefiles from failing.
If you have any info or idea, it would be great.
Thanks in advance.
(I have already tried an alias named "docker-compose" that takes the rest of the query as arguments and transforms it into the new "docker compose" form, but it is not very conclusive and seems to give me a lot of problems.)
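A minimal sketch of one possible workaround, assuming the Compose v2 plugin is installed: a small wrapper script named docker-compose placed on PATH, which forwards every legacy invocation to the new command:
#!/bin/sh
# Hypothetical shim, saved e.g. as /usr/local/bin/docker-compose and made executable.
# Forwards all arguments unchanged to the Compose v2 plugin.
exec docker compose "$@"
Unlike an alias, a script on PATH also works for non-interactive callers such as makefiles and install scripts.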

DITA-OT with docker and bootstrap plugin ... how do I get access to the plugin directory?

I'm pretty new to Docker and also to DITA. I would like to run DITA in a Docker container - I installed and set up everything (under Windows) according to the instructions in
Installing plug-ins in a Docker image
I also need the bootstrap plugin - so, my simple dockerfile looks like:
FROM docker.pkg.github.com/dita-ot/dita-ot/dita-ot:3.4
RUN dita --install https://github.com/infotexture/dita-bootstrap/archive/3.3.zip
Then I built the image and created the container:
docker image build -t dita_test:1.0 .
docker container run -it -v /c/Admin/DITA:/src dita_test:1.0 -i /src/my.ditamap -o /src/out/dita-bootstrap -f html5-bootstrap -v
The output is generated without errors and everything looks good ... but here is what I don't understand:
how can I pass arguments to the bootstrap plugin? (e.g. --args.css site.css)
how can I make the bootstrap directory available outside of the container ? (want to extend the bootstrap.hdf.xml file ...)
I found older documentation where the /opt/dita-ot/DITA-OT directory was mounted, but it doesn't work, or I'm misunderstanding it.
Help would be great ... thanks!
Might this version 3.4 documentation for Running the dita command from a Docker image help? (3.4 being the most current version of the DITA-OT at this time...)
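As a hedged sketch for both questions (assuming the image's entrypoint is the dita command and DITA-OT lives under /opt/dita-ot - worth verifying inside your image): plugin parameters such as args.css can be appended to the run command like on a local install, and the installed plugin files can be copied out of a container for inspection:
# Pass a plugin parameter to the dita command:
docker container run -it -v /c/Admin/DITA:/src dita_test:1.0 -i /src/my.ditamap -o /src/out/dita-bootstrap -f html5-bootstrap --args.css=site.css -v
# Copy the plugin directory out of a (stopped) container to edit bootstrap.hdf.xml;
# the /opt/dita-ot path is an assumption:
docker container create --name dita_tmp dita_test:1.0
docker cp dita_tmp:/opt/dita-ot/plugins /c/Admin/DITA/plugins
docker container rm dita_tmp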

Apache PredictionIO - Docker run failed

I have been trying the http://predictionio.apache.org/install/install-docker/ tutorial. I have successfully built the Docker image; however, when I try docker run I get the error Can't open /etc/predictionio/pio-env.sh.
docker build -t predictionio/pio pio
docker run -ti predictionio/pio
PS: If I comment out the last line CMD ["sh", "/usr/bin/pio_run"], I can build and run the Docker image successfully. I can open the file from the Docker bash too.
I think you need to grant permission to execute this file. Add the following line at the end of your Dockerfile:
RUN chmod +x pio_run.sh
Also, you might need to change CMD to ENTRYPOINT like the following:
ENTRYPOINT ["sh","/usr/bin/pio_run.sh"]
Your output states you are running Windows. Did you use the default command prompt or the Docker terminal? I had the same error messages in the past on Windows, but mysteriously they disappeared after trying the tutorial again. I am not sure what I did differently, except that I might have used the Docker terminal instead of the default command prompt...
Could you also try using docker-compose instead of plain docker commands as described in the tutorial?
Ensure your storage (Postgres, MySQL or ElasticSearch) is running before starting PIO as well.
Just resolved it on my machine.
When you cloned the repository on Windows, git converted the end-of-line symbols from Unix style (\n) to Windows style (\r\n).
You need to open file C:\wherever-you-cloned-pio-repository\predictionio\docker\pio\pio_run and change it back (for e.g. using Visual Studio Code, or Notepad++). Then you need to rebuild the image and it should work.
Also, for the future, you may want to disable the automatic conversion; see Disable git EOL Conversions.
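For example, the conversion can be switched off globally before cloning:
# keep line endings exactly as committed (no CRLF conversion on checkout)
git config --global core.autocrlf false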

How to "Unlock Jenkins"?

I am installing Jenkins 2 on Windows. After installing, a page is opened with the URL:
http://localhost:8080/login?from=%2F
The content of the page is the "Unlock Jenkins" screen.
Question:
How to "Unlock Jenkins"?
PS: I have looked for the answer in the documentation and on Google.
Starting from version 2.0 of Jenkins you may use
-Djenkins.install.runSetupWizard=false
to prevent this screen.
According to the documentation:
jenkins.install.runSetupWizard - Set to false to skip install wizard. Note that this leaves Jenkins unsecured by default.
Development-mode only: Set to true to not skip showing the setup wizard during Jenkins development.
More details about Jenkins properties can be found on this Jenkins Wiki page.
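For example, when launching the WAR directly (the system property must come before -jar):
java -Djenkins.install.runSetupWizard=false -jar jenkins.war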
Check https://wiki.jenkins-ci.org/display/JENKINS/Logging to see where Jenkins is logging its files.
e.g. for Linux, use the command: less /var/log/jenkins/jenkins.log
And scroll down to the part: "Jenkins initial setup is required. An admin user has been created ... to proceed to installation:
[randompasswordhere] <--- Copy and paste
Linux
By default logs should be made available in /var/log/jenkins/jenkins.log, unless customized in /etc/default/jenkins (for *.deb) or /etc/sysconfig/jenkins (for *.rpm)
Windows
By default logs should be at %JENKINS_HOME%/jenkins.out and %JENKINS_HOME%/jenkins.err, unless customized in %JENKINS_HOME%/jenkins.xml
Mac OS X
Log files should be at /var/log/jenkins/jenkins.log, unless customized in org.jenkins-ci.plist
Open the file: e:\Program Files (x86)\Jenkins\secrets\initialAdminPassword
Copy the file's content, e.g.: 47c5d4f760014e54a6bffc27bd95c077
Paste it into the input at: http://localhost:8080/login?from=%2F
DONE
Some of the above instructions seem to have gone out of date. As of the released version 2.0, creating the following file will cause Jenkins to skip the unlock screen:
${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion
This file must contain the string 2.0 without any line terminators. I'm not sure if this is required but Jenkins also sets the owner/group to be the same as the Jenkins server, so that's probably a good thing to mimic as well.
I did not need to create the upgraded or .last_exec_version files.
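A minimal sketch of that setup, assuming JENKINS_HOME is set and the server runs as the jenkins user:
# printf avoids the line terminator that echo would append
printf '2.0' > "${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion"
# mimic the owner/group Jenkins itself would use
chown jenkins:jenkins "${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion"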
I assume you were running jenkins.war manually with java -jar jenkins.war; in that case, all logging information is output to standard out by default, so just type the token shown there to unlock Jenkins 2.0.
If you were not running jenkins with java -jar jenkins.war, then you can always follow this Official Document to find the correct log location.
Open your terminal and type code below to find all the containers.
docker container list -a
You will find jenkinsci/blueocean and/or docker:dind. If not, then run:
docker container run --name jenkins-docker --rm --detach ^
--privileged --network jenkins --network-alias docker ^
--env DOCKER_TLS_CERTDIR=/certs ^
--volume jenkins-docker-certs:/certs/client ^
--volume jenkins-data:/var/jenkins_home ^
--volume "%HOMEDRIVE%%HOMEPATH%":/home ^
docker:dind
and
docker container run --name jenkins-blueocean --rm --detach ^
--network jenkins --env DOCKER_HOST=tcp://docker:2376 ^
--env DOCKER_CERT_PATH=/certs/client --env DOCKER_TLS_VERIFY=1 ^
--volume jenkins-data:/var/jenkins_home ^
--volume jenkins-docker-certs:/certs/client:ro ^
--volume "%HOMEDRIVE%%HOMEPATH%":/home ^
--publish 8080:8080 --publish 50000:50000 jenkinsci/blueocean
Then run:
docker run jenkinsci/blueocean
or
docker run docker:dind
and copy and paste the secret key.
One method to prevent the installation wizard is to do the following in $JENKINS_HOME:
Create an empty file named .last_exec_version
Create a file named upgraded
If left empty, a banner will prompt you to "upgrade" to 2.0 (which just means install a bunch of new plugins like Pipeline)
If the contents of that file is 2.0, you'll receive no banner and it will act like a regular old Jenkins install
Remember, this wizard is in place to prevent unauthorized access to Jenkins during setup. However, bypassing this wizard can be useful if, for example, you want to deploy automated installations of Jenkins with something like Ansible/Puppet/etc.
This was tested against Jenkins 2.0-beta-1 – so these instructions may not work in future beta or stable releases.
On the Mac, use:
sudo more /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
I have seen a lot of responses to the question, and most of them were actual solutions, but they solve the problem when Jenkins runs as a standalone CI without an application container, using the command:
java -jar jenkins.war
But when running on Tomcat, as is the case in this scenario, Jenkins logs are written to the catalina logs, since the software is running in a container.
So you need to go to the logs folder:
C:\Program Files\tomcat_folder\Tomcat 8.5\logs\catalina.log
in my own case. Just scroll to about the middle and look for the generated password, which is essentially a token; copy and paste it to unlock Jenkins.
I hope this fixes your problem.
Step 1: Open the terminal on your mac
Step 2: Run
sudo cat /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
Step 3: It will ask for a password; type your Mac password and press Enter
Step 4: A key will be generated.
Step 5: Copy and paste the security token in Jenkins
Step 6: Click continue
I found the token in the following file in the installation dir:
<jenkins install dir>\users\admin\config.xml
and the element
<jenkins.install.SetupWizard_-AuthenticationKey>
<key> THE KEY </key>
</jenkins.install.SetupWizard_-AuthenticationKey>
You might see it in catalina.out. I installed the Jenkins war in Tomcat and I can see this in catalina.out
The below method does not work anymore on 2.42.2
Create an empty file named .last_exec_version
Create a file named upgraded
If left empty, a banner will prompt you to "upgrade" to 2.0 (which just means install a bunch of new plugins like Pipeline)
If the contents of that file is 2.0, you'll receive no banner and it will act like a regular old Jenkins install
Mostly Jenkins will show you the path to initialAdminPassword; if you don't find it there, then you have to check the Jenkins logs.
In the log you will see:
05-May-2017 01:01:41.854 INFO [Jenkins initialization thread] jenkins.install.SetupWizard.init
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
7c249e4ed93c4596972f57e55f7ff32e
This may also be found at: /opt/tomcat/.jenkins/secrets/initialAdminPassword
Use a little shortcut to get to the folder:
cmd + shift + g
then insert /Users/Shared/Jenkins
There you can see the secrets folder - it probably shows that it's locked.
To unlock it: right-click the folder and click info, then click the lock at the bottom. Now you can change the rights shown at the bottom of the window.
Hope that helped :)
Greetings, Stefanie ^__^
If you are unable to find the Jenkins password in the location C:\Windows\System32\config\systemprofile\.jenkins\secrets\initialAdminPassword
after installing the Jenkins generic war on a Tomcat server, try the below
Solution:
Set the environment variable JENKINS_HOME to your Jenkins path, say
JENKINS_HOME="C:/users/username/apachetomcat/webapps/jenkins"
Place jenkins.war in the webapps folder of Tomcat and start Tomcat;
the initial admin password gets generated in the path
C:\Program Files (x86)\Apache Software Foundation\Tomcat 9.0\webapps\jenkins\secrets\initialAdminPassword
Yet another way to bypass the unlock screen is to copy the UpgradeWizard state to the InstallUtil last execution version, add an install.runSetupWizard file with the contents 'false', and update the config.xml installStateName from NEW to RUNNING.
docker exec -it jenkins bash
sed -i s/NEW/RUNNING/ /var/jenkins_home/config.xml
echo 'false' > /var/jenkins_home/jenkins.install.runSetupWizard
cp /var/jenkins_home/jenkins.install.UpgradeWizard.state /var/jenkins_home/jenkins.install.InstallUtil.lastExecVersion
exit
docker restart jenkins
For reference, this is the command I use to run jenkins:
docker run --rm --name jenkins --network host -u root -d -v jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean:1.16.0
You will also want to update the config with the Root URL:
echo "<?xml version='1.1' encoding='UTF-8'?><jenkins.model.JenkinsLocationConfiguration><jenkinsUrl>http://<IP>:8080/</jenkinsUrl></jenkins.model.JenkinsLocationConfiguration>" > jenkins.model.JenkinsLocationConfiguration.xml
exit
docker restart jenkins
In case you installed/upgraded to a new version of Jenkins and are unable to find the admin credentials on the server, then ...
If you are using an old version of Jenkins and on top of it you are trying to reinstall/upgrade to a new version of Jenkins, then
the files under "JENKINS_HOME", namely -
${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion
${JENKINS_HOME}/jenkins.install.UpgradeWizard.state
will cause Jenkins to skip the unlock (or admin credentials) screen, and the webpage will directly ask you for a username and password. Even on the server you will not be able to find "${JENKINS_HOME}/secrets/initialAdminPassword".
In such a case, don't panic. Just try to use the old admin user credentials on the newly installed/upgraded Jenkins server.
In simple language: if you had admin credentials admin/admin on the old Jenkins server, then after upgrading, the new server won't ask you to set a password for the admin user again; in fact, it will use the old credentials.
I have found the password in C:\Program Files\Jenkins\jenkins.err. Open jenkins.err text file and scroll down, and you will find the password.
Go to C:\Program Files (x86)\Jenkins\secrets,
then open the initialAdminPassword file with Notepad++ and paste its content.
That's it.
--> If you are using a Linux machine, log in as the root user: sudo su
--> Then go to the below path: cd /var/lib/jenkins/secrets
--> View the initialAdminPassword file to see the secret key.
--> Paste the secret key into the Jenkins window and it will be unlocked.
https://issues.jenkins-ci.org/browse/JENKINS-35981
Try this %2Fjenkins%2F instead of %2Fjenkins in the browser
Open the terminal on your mac and open new window (command+T)
Paste sudo cat /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
It will ask for a password; type your password (I gave my Mac password; I haven't checked if any other password would work) and press Enter
A key would be generated.
Copy the key and paste it where it asks you to enter admin password
click continue
The problem is fixed in the latest version (mine is 2.4). The error comes from %2fjenkins%2f in the URL. The previous version came with %2fjenkins, and the same error used to occur. They have resolved the issue, but the URL has changed from %2fjenkins to %2fjenkins%. So, as a summary: currently %2fjenkins% appears in the URL; before entering the admin password, change it to %2fjenkins. Along with that, add an empty .last_exec_version file.
If someone chooses to run Jenkins as a Docker container, they may face the same problem as me.
Because accessing the Jenkins Blue Ocean Docker container is quite different,
a common problem is /var/lib/jenkins/secrets: No such file or directory.
You need to access it through Docker; the link Jenkins provides is quite helpful,
except <docker-container-name> may not be specified, in which case you may need to use the container ID.
After
docker exec -it jenkins-blueocean bash
or
docker exec -it YOUR_JENKINS_CONTAINER_ID bash
The /var/lib/jenkins/secrets/initialAdminPassword would be accessible.
The password would be there.
I set up Jenkins using Brew, but when I restarted my Mac, Jenkins was asking for the initialAdminPassword (the screenshot attached in the question).
The problem was that it was not generated under the secrets directory.
So I found the Jenkins process running on port 8080 using $ sudo lsof -i -n -P | grep TCP and killed it using $ sudo kill 66 (66 was the process ID).
Then I downloaded the latest jenkins .war file from: https://jenkins.io/download/
And executed command: $ java -jar jenkins.war (Make sure you are in jenkins.war directory).
And that's it - everything is working fine.
This works well when you are stuck with Docker on Windows and are using Git-Bash
Presuming something like:
# docker run --detach --publish 8080:8080 --volume jenkins_home:/var/jenkins_home --name jenkins jenkins/jenkins:lts
Execute to get the Container ID, for example "d56686cb700d"
# docker ps -l
Now tell Docker to return the password written in the logs for that Container ID:
# docker logs d56686cb700d 2>&1 | grep -A5 -B5 Admin
2>&1 redirects stderr to stdout
-A5 includes 5 lines AFTER the line with "Admin" in it
-B5 includes 5 lines BEFORE the line with "Admin" in it
Output example:
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
47647383733f4387a0d53c873334b707
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
*************************************************************
*************************************************************
*************************************************************
I found it under below directory. Full issue detail https://github.com/jenkinsci/ibm-security-appscansource-scanner-plugin/issues/2
C:\Windows\SysWOW64\config\systemprofile\AppData\Local\Jenkins\.jenkins
Open the jenkins.err file in C:\Program Files\Jenkins\.
In that file, check for a hash key after this line:
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
Paste it into the Jenkins prompt. Worked for me.
To solve this problem for a Docker container on Ubuntu 18.04.5 LTS (Bionic Beaver) - Ubuntu Releases:
1- Connect to your Docker server or Ubuntu server with SSH or another method
2- Run sudo docker ps
3- Copy the container name (the "NAMES" column)
4- Run sudo docker logs "your_container_name"
5- Find the following sentence: "Jenkins initial setup is required. An admin user has been created and a password generated. Please use the following password to proceed to installation:" and copy the password (see the combined sketch below)
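Put together, steps 2-5 boil down to something like this (the container name is a placeholder):
sudo docker ps
sudo docker logs your_container_name 2>&1 | grep -B1 -A4 'initial setup'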

How to run travis-ci locally

I'd rather not have to push every little change to .travis.yml and every little change I make to the source in order to run the build. With Jenkins you can download Jenkins and run it locally. Does Travis offer something like this?
Note: I've seen the travis-ci CLI and downloaded it, but all it seems to do is call their API, which then connects to my GitHub repo, so if I don't push, it won't matter that I restart the last build.
This process allows you to completely reproduce any Travis build job on your computer. Also, you can interrupt the process at any time and debug. Below is an example where I perfectly reproduce the results of job #191.1 on php-school/cli-menu.
Prerequisites
You have public repo on GitHub
You ran at least one build on Travis
You have Docker set up on your computer
Set up the build environment
Reference: https://docs.travis-ci.com/user/common-build-problems/
Make up your own temporary build ID
BUILDID="build-$RANDOM"
View the build log, open the "show more" button for WORKER INFORMATION, find the INSTANCE line, paste it in here, and run (replace the tag after the colon with the newest available one):
INSTANCE="travisci/ci-garnet:packer-1512502276-986baf0"
Run the headless server
docker run --name $BUILDID -dit $INSTANCE /sbin/init
Run the attached client
docker exec -it $BUILDID bash -l
Run the job
You are now inside your Travis environment. Run su - travis to begin.
This step is well defined, but it is more tedious and manual. You will find every command that Travis runs in the environment. To do this, look for everything in the right column which has a tag like 0.03s.
On the left side you will see the actual commands. Run those commands, in order.
Result
Now is a good time to run the history command. You can restart the process and replay those commands to run the same test against an updated code base.
If your repo is private: ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB", then cat ~/.ssh/id_rsa.pub and add the key in your GitHub SSH settings
FYI: you can git pull from inside docker to load commits from your dev box before you push them to GitHub
If you want to change the commands Travis runs then it is YOUR responsibility to figure out how that translates back into a working .travis.yml.
I don't know how to clean up the Docker environment, it looks complicated, maybe this leaks memory
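One hedged cleanup sketch (my addition, not from the original answer): removing the container and, optionally, the image should reclaim the space:
docker rm -f "$BUILDID"
# the instance image is around 10 GB; remove it too if you are done with it
docker rmi "$INSTANCE"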
Travis-ci offers a new container-based infrastructure that uses docker. This can be very useful if you're trying to troubleshoot a travis-ci build by reproducing it locally. This is taken from Travis CI's documentation.
Troubleshooting Locally in a Docker Image
If you're having trouble tracking down the exact problem in a build, it often helps to run the build locally. To do this you need to be using our container-based infrastructure (i.e., have sudo: false in your .travis.yml), and to know which Docker image you are using on Travis CI.
Running a Container Based Docker Image Locally
Download and install the Docker Engine.
Select an image from Docker Hub. If you're not using a language-specific image pick ci-ruby. Open a terminal and start an interactive Docker session using the image URL:
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Switch to the travis user:
su - travis
Clone your git repository into the / folder of the image.
Manually install any dependencies.
Manually run your Travis CI build command (sketched below).
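A sketch of those last three steps for a Ruby project (OWNER/REPO and the install/script commands are placeholders; substitute whatever your .travis.yml actually runs):
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
su - travis
git clone https://github.com/OWNER/REPO.git ~/REPO
cd ~/REPO
bundle install
bundle exec rake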
UPDATE: I now have a complete turnkey, all-in-one answer, see https://stackoverflow.com/a/49019950/300224. Only took 3 years to figure out!
According to the Travis documentation: https://github.com/travis-ci/travis-ci there is a concoction of projects that collude to deliver the Travis CI web service we know and love. The following subset of projects appears to allow local make test functionality using the .travis.yml in your project:
travis-build
travis-build creates the build
script for each job. It takes the configuration from the .travis.yml file and
creates a bash script that is then run in the build environment by
travis-worker.
travis-cookbooks
travis-cookbooks holds the
Chef cookbooks that are used to provision the build environments.
travis-worker
travis-worker is responsible for
running the build scripts in a clean environment. It streams the log output to
travis-logs and pushes state updates (build starting/finishing)
to travis-hub.
(The other subprojects are responsible for communicating with GitHub, their web interface, email, and their API.)
Similar to Scott McLeod's answer, but this also generates a bash script to run the steps from the .travis.yml.
Troubleshooting Locally in Docker with a generated Bash script
# choose the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
# to create ~/.travis
travis version
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `AUTHOR/PROJECT` on GitHub
cd ~/builds
mkdir AUTHOR
cd AUTHOR
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
Use the wwtd (what would travis do) ruby gem to run tests on your local machine, roughly as they would run on travis.
It will recreate the build matrix and run each configuration, great to sanity check setup before pushing.
gem i wwtd
wwtd
tl;dr Use image specified at https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image in combination with https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
EDIT 2019-12-06
#troubleshooting-locally-in-a-docker-image section was replaced by #running-builds-in-debug-mode which also describes how to SSH to the job running in the debug mode.
EDIT 2019-07-26
#troubleshooting-locally-in-a-docker-image section is no longer part of the docs; here's why
https://github.com/travis-ci/docs-travis-ci-com/issues/2342
https://blog.travis-ci.com/2018-10-04-combining-linux-infrastructures
https://blog.travis-ci.com/2018-11-30-announcing-xenial-build-environment-for-enterprise
Though, it's still in git history: https://github.com/travis-ci/docs-travis-ci-com/pull/2193.
Look for (quite old, couldn't find newer) image versions at: https://travis-ci.org/travis-ci/docs-travis-ci-com/builds/230889063#L661.
I wanted to inspect why one of the tests in my build failed with an error I didn't get locally.
Worked.
What actually worked was using the image specified at Troubleshooting Locally in a Docker Image docs page. In my case it was travisci/ci-garnet:packer-1512502276-986baf0.
I was able to run travis compile following the steps described at https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
dm@z580:~$ docker run --name travis-debug -dit travisci/ci-garnet:packer-1512502276-986baf0 /sbin/init
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
travisci/ci-garnet packer-1512502276-986baf0 6cbda6a950d3 11 months ago 10.2GB
dm@z580:~$ docker exec -it travis-debug bash -l
root@912e43dbfea4:/# su - travis
travis@912e43dbfea4:~$ cd builds/
travis@912e43dbfea4:~/builds$ git clone https://github.com/travis-ci/travis-build
travis@912e43dbfea4:~/builds$ cd travis-build
travis@912e43dbfea4:~/builds/travis-build$ mkdir -p ~/.travis
travis@912e43dbfea4:~/builds/travis-build$ ln -s $PWD ~/.travis/travis-build
travis@912e43dbfea4:~/builds/travis-build$ gem install bundler
travis@912e43dbfea4:~/builds/travis-build$ bundle install --gemfile ~/.travis/travis-build/Gemfile
travis@912e43dbfea4:~/builds/travis-build$ bundler binstubs travis
travis@912e43dbfea4:~/builds/travis-build$ cd ..
travis@912e43dbfea4:~/builds$ git clone --depth=50 --branch=master https://github.com/DusanMadar/PySyncDroid.git DusanMadar/PySyncDroid
travis@912e43dbfea4:~/builds$ cd DusanMadar/PySyncDroid/
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ ~/.travis/travis-build/bin/travis compile > ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ sed -i 's,--branch\\=\\\x27\\\x27,--branch\\=master,g' ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ bash ci.sh
Everything from .travis.yml was executed as expected (dependencies installed, tests ran, ...).
Note that before running bash ci.sh I had to change --branch\=\'\'\ to --branch\=master\ in ci.sh (see the sed -i command above).
If that doesn't work, the command below will help to identify the target line number, and you can edit the line manually.
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ cat ci.sh | grep -in branch
840: travis_cmd git\ clone\ --depth\=50\ --branch\=\'\'\ https://github.com/DusanMadar/PySyncDroid.git\ DusanMadar/PySyncDroid --echo --retry --timing
889:export TRAVIS_BRANCH=''
899:export TRAVIS_PULL_REQUEST_BRANCH=''
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$
Didn't work.
Followed the accepted answer for this question but didn't
find the image (travis-ci-garnet-trusty-1512502259-986baf0) mentioned as INSTANCE at https://hub.docker.com/u/travisci/.
Build worker version points to a travis-ci/worker commit, and its travis-worker-install references quay.io/travisci/ as the image registry. So I tried it.
dm@z580:~$ docker run -it -u travis quay.io/travisci/travis-python /bin/bash
travis@370c23a773c9:/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
travis@370c23a773c9:/$
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/travisci/travis-python latest 753a216d776c 3 years ago 5.36GB
Definitely not Trusty (Ubuntu 14.04) and not small either.
You could try Trevor, which uses Docker to run your Travis build.
From its description:
I often need to run tests for multiple versions of Node.js. But I don't want to switch versions manually using n/nvm or push the code to Travis CI just to run the tests.
That's why I created Trevor. It reads .travis.yml and runs tests in all versions you requested, just like Travis CI. Now, you can test before push and keep your git history clean.
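Usage presumably follows the standard npm flow (an assumption on my part - check the project's README):
# assumed install and invocation; trevor reads .travis.yml from the current directory
npm install --global trevor
cd your-project && trevor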
I'm not sure what was your original reason for running Travis locally, if you just wanted to play with it, then stop reading here as it's irrelevant for you.
If you already have experience with hosted Travis and you want to get the same experience in your own datacenter, read on.
Since Dec 2014 Travis CI offers an Enterprise on-premises version.
http://blog.travis-ci.com/2014-12-19-introducing-travis-ci-enterprise/
The pricing is part of the article as well:
The licensing is done per seats, where every license includes 20 users. Pricing starts at $6,000 per license, which includes 20 users and 5 concurrent builds. There's a premium option with unlimited builds for $8,500.
I wasn't able to use the answers here as-is. For starters, as noted, the Travis help document on running jobs locally has been taken down. All of the blog entries and articles I found are based on that. The new "debug" mode doesn't appeal to me because I want to avoid the queue times and the Travis infrastructure until I've got some confidence I have gotten somewhere with my changes.
In my case I'm updating a Puppet module and I'm not an expert in Puppet, nor particularly experienced in Ruby, Travis, or their ecosystems. But I managed to build a workable test image out of tips and ideas in this article and elsewhere, and by examining the Travis CI build logs pretty closely.
I was unable to find recent images matching the names in the CI logs (for example, I could find travisci/ci-sardonyx, but could not find anything with "xenial" or with the same build name). From the logs it appears images are now transferred via AMQP instead of a mechanism more familiar to me.
I was able to find an image travisci/ubuntu-ruby:16.04 which matches the OS I'm targeting for my particular case. It does not have all the components used in Travis CI, so I built a new one based on it, with some components added to the image and others added in the container at runtime, depending on the need.
So I can't offer a clear procedure, sorry. But what I did, essentially boiled down:
Find a recent Travis CI image in Docker Hub matching your target OS as closely as possible.
Clone the repository to a build directory, and launch the container with the build directory mounted as a volume, with the working directory set to the target volume
Now the hard work: go through the Travis build log and set up the environment. In my case, this meant setting up RVM, and then using bundle to install the project's dependencies. RVM appeared to be already present in the Travis environment but I had to install it; everything else came from reproducing the commands in the build log.
Run the tests.
If the results don't match what you saw in the Travis CI logs, go back to (3) and see where to go.
Optionally, create a reusable image.
Dev and test locally and then push and hopefully your Travis results will be as expected.
I know this is not concrete and may be obvious, and your mileage will definitely vary, but hopefully this is of some use to somebody. The Dockerfile and a README for my image are on GitHub for reference.
It is possible to SSH to a Travis CI environment via a bounce host. The feature isn't built into Travis CI, but it can be achieved with the following steps.
On the bounce host, create a travis user and ensure that you can SSH to it.
Put these lines in the script: section of your .travis.yml (e.g. at the end).
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
Where $bouncehostip is the IP/host of your bounce host, and $sshpassword is your defined SSH password. These variables can be added as encrypted variables.
Push the changes. You should be able to make an SSH connection to your bounce host.
Source: Shell into Travis CI Build Environment.
Here is the full example:
# use the new container infrastructure
sudo: required
dist: trusty
language: python
python: "2.7"
script:
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travisci@$bouncehostip
See: c-mart/travis-shell at GitHub.
See also: How to reproduce a travis-ci build environment for debugging
