I am struggling with the same problem mentioned by Gavin on
this question.
Specifically, with the new
docker build secret information
feature.
What is the right way to use that feature?
Looking around on the internet I only found variations of the same example from the Docker documentation mentioned above, which prints the secret at build time. Maybe I didn't fully understand the example, so please help me.
If there is no way to get the secret at build time and use it in another part of a Dockerfile (e.g. an ARG variable or a RUN command), when and how can that new feature be used to truly keep my secret safe and also do the work?
My goal is to use this new feature at build time and also keep my secret information safe in case someone gets my image file and runs docker history on it.
For example, if I have a Dockerfile like this:
FROM influxdb:2.0
ENV DOCKER_INFLUXDB_INIT_MODE=setup
ENV DOCKER_INFLUXDB_INIT_USERNAME=admin
ENV DOCKER_INFLUXDB_INIT_PASSWORD="mypassword"
How can I use that new feature mentioned in the Docker documentation to set my variable (DOCKER_INFLUXDB_INIT_PASSWORD), for example, in a way that it will not be logged in the image history?
Thanks in advance
How can I use that new feature mentioned in the Docker documentation to
set my variable (DOCKER_INFLUXDB_INIT_PASSWORD), for example, in a way
that it will not be logged in the image history?
It depends on whether you need the secret only at build time, or
whether you actually want to use it at runtime. The latter situation
is probably more common, and there's already a canonical solution:
If you want to keep DOCKER_INFLUXDB_INIT_PASSWORD out of your image
history, just don't set it during the build process. Require it to be
set at runtime, e.g., via the -e VAR=VALUE argument to docker run
(or the --env-file option):
docker run -e DOCKER_INFLUXDB_INIT_PASSWORD=mypassword myimage
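If you'd rather not put the password on the command line (where it may end up in your shell history), the --env-file option works the same way. A minimal sketch, with an arbitrary file name:
# keep influx.env out of version control
echo 'DOCKER_INFLUXDB_INIT_PASSWORD=mypassword' > influx.env
docker run --env-file influx.env myimage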
You could have an ENTRYPOINT script that checks for the variable at
runtime and exits with a helpful error message if it's not set.
The official Docker images for things like MySQL and PostgreSQL work
this way.
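A minimal sketch of such an entrypoint check (the script name and wording are only examples, not something taken from the official images):
#!/bin/sh
# docker-entrypoint.sh (hypothetical): refuse to start if the secret is missing
if [ -z "$DOCKER_INFLUXDB_INIT_PASSWORD" ]; then
    echo "DOCKER_INFLUXDB_INIT_PASSWORD must be set, e.g. docker run -e DOCKER_INFLUXDB_INIT_PASSWORD=..." >&2
    exit 1
fi
# hand control over to the container's main process
exec "$@"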
In contrast, a build secret is meant for secrets that are only
required at build time. For example, let's say your build process
needs to do something like this:
RUN curl -o /data/mydataset -u username:password \
https://example.com/dataset
You'd like to share your Dockerfile and associated sources with
other people, but you don't want to share your username and password.
This is where build secrets come in. You would instead stick your
authentication information in a file, and modify your Dockerfile to
read that information from a secret:
RUN --mount=type=secret,id=mysecret \
curl -o /data/mydataset -u $(cat /run/secrets/mysecret) \
https://example.com/dataset
In this example, once we've copied the remote file into the image,
we're done with the secret: we don't need it in order to run a
container from the image; it was only required during the build
process.
NB: The above assumes that you're providing the secret at build time as described in the documentation, so your build command might look something like:
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt -t myimage .
When using Google Endpoints with Cloud Run to provide the container service, one creates a YAML file (Swagger 2.0 format) to specify the paths with all configurations. For EVERY CHANGE, the following is what I do, based on the documentation (https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions):
Step 1: Deploying the Endpoints configuration
gcloud endpoints services deploy openapi-functions.yaml \
--project ESP_PROJECT_ID
This gives me the following output:
Service Configuration [CONFIG_ID] uploaded for service [CLOUD_RUN_HOSTNAME]
Then,
Step 2: Download the gcloud_build_image script to your local machine:
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
Then,
Step 3: Redeploy the Cloud Run service
gcloud run deploy CLOUD_RUN_SERVICE_NAME \
--image="gcr.io/ESP_PROJECT_ID/endpoints-runtime-serverless:CLOUD_RUN_HOSTNAME-CONFIG_ID" \
--allow-unauthenticated \
--platform managed \
--project=ESP_PROJECT_ID
Is this the process for every API path change? Or is there a simpler direct method of updating the YAML file and uploading it somewhere?
Thanks.
Based on the documentation, yes, this would be the process for every API path change. However, this may change in the future, as this feature is currently in beta, as stated in the documentation you shared.
You may want to look over here in order to create a feature request to GCP so they can improve this feature in the future.
In the meantime, I would advise creating a script for this process, since it is always the same steps; a bash script that runs these commands would help you automate the task, as sketched below.
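For example (a rough sketch only; the project/hostname/service values are the placeholders from the question, and the sed parsing of CONFIG_ID assumes the "Service Configuration [CONFIG_ID] uploaded ..." output format shown above):
#!/usr/bin/env bash
# redeploy-endpoints.sh: automate the three steps above
set -euo pipefail

ESP_PROJECT_ID="ESP_PROJECT_ID"
CLOUD_RUN_HOSTNAME="CLOUD_RUN_HOSTNAME"
CLOUD_RUN_SERVICE_NAME="CLOUD_RUN_SERVICE_NAME"

# Step 1: deploy the Endpoints configuration and capture the CONFIG_ID from the output
CONFIG_ID=$(gcloud endpoints services deploy openapi-functions.yaml \
  --project "$ESP_PROJECT_ID" 2>&1 \
  | sed -n 's/^Service Configuration \[\(.*\)\] uploaded.*/\1/p')

# Step 2: build the ESP image for that configuration
./gcloud_build_image -s "$CLOUD_RUN_HOSTNAME" -c "$CONFIG_ID" -p "$ESP_PROJECT_ID"

# Step 3: redeploy the Cloud Run service with the new image
gcloud run deploy "$CLOUD_RUN_SERVICE_NAME" \
  --image="gcr.io/$ESP_PROJECT_ID/endpoints-runtime-serverless:$CLOUD_RUN_HOSTNAME-$CONFIG_ID" \
  --allow-unauthenticated \
  --platform managed \
  --project="$ESP_PROJECT_ID"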
Hope you find this useful.
When you use the default Cloud Endpoints image as described in the documentation, the parameter --rollout_strategy=managed is automatically set.
You then have to wait up to a minute for the new configuration to be picked up; at least that's what I observe in my deployments. Give it a try!
I have a task to containerize a Spring & React web-app so that non-technical staff can make use of the container to demo the app to clients. Currently we develop on OSX & deploy to Tomcat on AWS managed by a 3rd party firm, and the non-technical staff use Windows laptops for their stuff.
So far I have bash scripts in OSX which will create a Packager container that has a Java 8 SDK & maven installed, & which will compile the app into a war file. A second script creates and initializes a mongodb container & gives it a name, and the third script creates a Tomcat/Java 8 container, loads the war file into it, links it to the mongodb container & sets it running. In bash on OSX this works fine, but I found it didn't work if I tried it in cygwin on Windows 10, and my CMD/Powershell-fu is too weak to script it in a Windows native fashion.
So, I'm trying to do the script in something that'll run on OSX, an AWS Linux server & Windows 10, & being a Java developer myself I thought of Groovy. This is my first time scripting Docker using Groovy, so I've ended up resorting to structures like:
println "docker build -f Dockerfile.packager -t mycontainer .".execute().text
I wonder if Docker has a Java or Groovy API that I could plug into & do things like:
docker.build("Dockerfile.packager").tag("mycontainer")
Currently my script is determining the location of the project root & building up the Docker run command as a string, like:
File emToo = new File(System.getProperty("user.dir")+"/.m2")
String currentDirectory = new File(".").getCanonicalPath()
String projectRoot = new File(currentDirectory+"/../").getCanonicalPath()
I get an option string from the user via a command line prompt, "Do you want QA or Dev?" & then:
String dockerRunCmd = "docker run -it -v $projectRoot/:/usr/local/build/myproject:cached -v ${emToo.getCanonicalPath()}:/root/.m2:cached mycontainer $option"
println dockerRunCmd.execute().text
Currently it doesn't seem to do anything after asking for the option - it's kinda bombing out. I get the run command output to screen, & if I copy/paste that into a command line in the scripts directory it falls over saying that the parent pom can't be found. Remember though that if I run the OSX bash script to do this, it works just fine. The bash script is basically:
#! /usr/bin/env bash
CWD=`pwd`
options=$1
docker run -it -v $CWD/../:/usr/local/build/myproject:cached -v ~/.m2:/root/.m2:cached --rm mycontainer $options
...which I think amounts to the same thing, right? Where's it going wrong?
UPDATE: I've found a bug - I should have been setting emToo to
new File(System.getProperty("user.home")+"/.m2"). user.dir just picks up the current directory, & the maven .m2 directory is in the user's home, usually. Currently though, the script gives me a run command that works if I cut/paste into a command line, but which doesn't allow me to call .execute() on the string in Groovy. If I can get that to work, there'll be no need for the docker-client projects suggested.
There are different ways to communicate with Docker from Groovy or Java (SDKs are listed at https://docs.docker.com/engine/api/sdks/#other-languages):
Groovy (https://github.com/gesellix/docker-client)
Java (https://github.com/docker-java/docker-java)
Many others can be also found on github.
But since you are using Maven, it will probably be easier for you to use the excellent docker-maven-plugin (https://dmp.fabric8.io), which can build and push images, run containers, etc.
I am installing Jenkins 2 on Windows. After installing, a page is opened with the URL:
http://localhost:8080/login?from=%2F
The content of the page asks me to "Unlock Jenkins" by entering an initial admin password.
Question:
How do I "Unlock Jenkins"?
PS: I have looked for the answer in the documentation and on Google.
Starting from version 2.0 of Jenkins you may use
-Djenkins.install.runSetupWizard=false
to prevent this screen.
According to the documentation:
jenkins.install.runSetupWizard - Set to false to skip install wizard. Note that this leaves Jenkins unsecured by default.
Development-mode only: Set to true to not skip showing the setup wizard during Jenkins development.
More details about Jenkins properties can be found on this Jenkins Wiki page.
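For example, when launching the WAR directly, the property can be passed on the java command line (adjust accordingly for your service wrapper or servlet container):
java -Djenkins.install.runSetupWizard=false -jar jenkins.war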
Check https://wiki.jenkins-ci.org/display/JENKINS/Logging to see where Jenkins is logging its files.
e.g. for Linux, use the command: less /var/log/jenkins/jenkins.log
And scroll down to the part: "Jenkins initial setup is required. An admin user has been created ... to proceed to installation:"
[randompasswordhere] <--- Copy and paste
Linux
By default logs should be made available in /var/log/jenkins/jenkins.log, unless customized in /etc/default/jenkins (for *.deb) or via /etc/sysconfig/jenkins (for *.rpm)
Windows
By default logs should be at %JENKINS_HOME%/jenkins.out and %JENKINS_HOME%/jenkins.err, unless customized in %JENKINS_HOME%/jenkins.xml
Mac OS X
Log files should be at /var/log/jenkins/jenkins.log, unless customized in org.jenkins-ci.plist
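Assuming one of the default Linux locations above, a quick way to pull the password out is something like:
sudo grep -A 3 "proceed to installation" /var/log/jenkins/jenkins.log
# or read the generated secret directly from the default JENKINS_HOME
sudo cat /var/lib/jenkins/secrets/initialAdminPassword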
Open the file e:\Program Files (x86)\Jenkins\secrets\initialAdminPassword,
copy its content (e.g. 47c5d4f760014e54a6bffc27bd95c077),
and paste it into the input at http://localhost:8080/login?from=%2F.
DONE
Some of the above instructions seem to have gone out of date. As of the released version 2.0, creating the following file will cause Jenkins to skip the unlock screen:
${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion
This file must contain the string 2.0 without any line terminators. I'm not sure if this is required but Jenkins also sets the owner/group to be the same as the Jenkins server, so that's probably a good thing to mimic as well.
I did not need to create the upgraded or .last_exec_version files.
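In shell terms, that boils down to something like the following (the jenkins:jenkins owner is an assumption based on a typical package install; match whatever user your Jenkins server actually runs as):
# printf, unlike echo, writes the string without a trailing newline
printf '2.0' > "${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion"
sudo chown jenkins:jenkins "${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion"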
I assume you were running jenkins.war manually with java -jar jenkins.war; in that case all logging information is output to standard out by default, so just copy the token from there to unlock Jenkins 2.0.
If you were not running Jenkins with java -jar jenkins.war, then you can always follow this Official Document to find the correct log location.
Open your terminal and type the command below to list all the containers.
docker container list -a
You should find jenkinsci/blueocean and/or docker:dind; if not, then run:
docker container run --name jenkins-docker --rm --detach ^
--privileged --network jenkins --network-alias docker ^
--env DOCKER_TLS_CERTDIR=/certs ^
--volume jenkins-docker-certs:/certs/client ^
--volume jenkins-data:/var/jenkins_home ^
--volume "%HOMEDRIVE%%HOMEPATH%":/home ^
docker:dind
and
docker container run --name jenkins-blueocean --rm --detach ^
--network jenkins --env DOCKER_HOST=tcp://docker:2376 ^
--env DOCKER_CERT_PATH=/certs/client --env DOCKER_TLS_VERIFY=1 ^
--volume jenkins-data:/var/jenkins_home ^
--volume jenkins-docker-certs:/certs/client:ro ^
--volume "%HOMEDRIVE%%HOMEPATH%":/home ^
--publish 8080:8080 --publish 50000:50000 jenkinsci/blueocean
run command
docker run jenkinsci/blueocean
or
docker run docker:dind
Copy and Paste the secret key.
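If you are not sure where that secret key is, it can usually be pulled from the container started above (using the container name from the commands above):
docker logs jenkins-blueocean
# or read the file directly from the Jenkins home volume
docker exec jenkins-blueocean cat /var/jenkins_home/secrets/initialAdminPassword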
One method to prevent the installation wizard is to do the following in $JENKINS_HOME:
Create an empty file named .last_exec_version
Create a file named upgraded
If left empty, a banner will prompt you to "upgrade" to 2.0 (which just means install a bunch of new plugins like Pipeline)
If the contents of that file is 2.0, you'll receive no banner and it will act like a regular old Jenkins install
Remember, this wizard is in place to prevent unauthorized access to Jenkins during setup. However, bypassing this wizard can be useful if, for example, you want to deploy automated installations of Jenkins with something like Ansible/Puppet/etc.
This was tested against Jenkins 2.0-beta-1 – so these instructions may not work in future beta or stable releases.
On the Mac, use:
sudo more /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
I have seen a lot of responses to the question; most of them are actually solutions, but they solve the problem when Jenkins runs as a standalone CI without an application container, using the command:
java -jar jenkins.war
But when running on Tomcat, as is the case in this scenario, the Jenkins logs are written to the Catalina logs, since the software is running in a container.
So you need to go to the logs folder:
C:\Program Files\tomcat_folder\Tomcat 8.5\logs\catalina.log
in my own case. Scroll roughly to the middle to find the generated password, which is essentially a token, and copy and paste it to unlock Jenkins.
I hope this fixes your problem.
Step 1: Open the terminal on your mac
Step 2: cat the password file:
sudo cat /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
Step 3: It will ask for a password; type your Mac password and press Enter
Step 4: A key will be printed.
Step 5: Copy and paste the security token in Jenkins
Step 6: Click continue
I found the token in the following file in the installation dir:
<jenkins install dir>\users\admin\config.xml
and the element
<jenkins.install.SetupWizard_-AuthenticationKey>
<key> THE KEY </key>
</jenkins.install.SetupWizard_-AuthenticationKey>
You might see it in catalina.out. I installed the Jenkins WAR in Tomcat and I can see this in catalina.out.
The below method does not work anymore on 2.42.2
Create an empty file named .last_exec_version
Create a file named upgraded
If left empty, a banner will prompt you to "upgrade" to 2.0 (which just means install a bunch of new plugins like Pipeline)
If the contents of that file is 2.0, you'll receive no banner and it will act like a regular old Jenkins install
Usually Jenkins will show you the path to initialAdminPassword. If you don't find it there, then you have to check the Jenkins logs.
In the log you will see:
05-May-2017 01:01:41.854 INFO [Jenkins initialization thread] jenkins.install.SetupWizard.init
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
7c249e4ed93c4596972f57e55f7ff32e
This may also be found at: /opt/tomcat/.jenkins/secrets/initialAdminPassword
Use a little shortcut to get to the folder:
cmd + shift + g
then insert /Users/Shared/Jenkins
There you can see the secrets folder - it probably shows that it's locked.
To unlock it: right-click on the folder and click Get Info, then click on the lock at the bottom. Now you can change the rights shown at the bottom of the window.
Hope that helped :)
Greetings, Stefanie ^__^
If you are unable to find the Jenkins password in the location C:\Windows\System32\config\systemprofile\.jenkins\secrets\initialAdminPassword
after installing the generic Jenkins WAR on a Tomcat server, try the following.
Solution:
Set the environment variable JENKINS_HOME to your Jenkins path, say
JENKINS_HOME="C:/users/username/apachetomcat/webapps/jenkins"
Place jenkins.war in the webapps folder of Tomcat and start Tomcat;
the initial admin password gets generated at
C:\Program Files (x86)\Apache Software Foundation\Tomcat 9.0\webapps\jenkins\secrets\initialAdminPassword
Yet another way to bypass the unlock screen is to copy the UpgradeWizard state to the InstallUtil last execution version, add an install.runSetupWizard file with the contents 'false', and update the config.xml installStateName from NEW to RUNNING.
docker exec -it jenkins bash
sed -i s/NEW/RUNNING/ /var/jenkins_home/config.xml
echo 'false' > /var/jenkins_home/jenkins.install.runSetupWizard
cp /var/jenkins_home/jenkins.install.UpgradeWizard.state /var/jenkins_home/jenkins.install.InstallUtil.lastExecVersion
exit
docker restart jenkins
For reference, this is the command I use to run jenkins:
docker run --rm --name jenkins --network host -u root -d -v jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean:1.16.0
You will also want to update the config with the Root URL:
echo "<?xml version='1.1' encoding='UTF-8'?><jenkins.model.JenkinsLocationConfiguration><jenkinsUrl>http://<IP>:8080/</jenkinsUrl></jenkins.model.JenkinsLocationConfiguration>" > jenkins.model.JenkinsLocationConfiguration.xml
exit
docker restart jenkins
In case you have installed/upgraded a new version of Jenkins and are unable to find the admin credentials on the server:
If you were using an old version of Jenkins and on top of it you reinstall/upgrade to a new version of Jenkins, then
the files under JENKINS_HOME, namely
${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion
${JENKINS_HOME}/jenkins.install.UpgradeWizard.state
will cause Jenkins to skip the unlock (admin credentials) screen, and the webpage will directly ask you for a username and password. Even on the server you will not be able to find "${JENKINS_HOME}/secrets/initialAdminPassword".
In such a case, don't panic. Just try the old admin user credentials on the newly installed/upgraded Jenkins server.
In simple language: if you had admin credentials of admin/admin on the old Jenkins server, then after upgrading, the new server won't ask you to set a password for the admin user again; in fact it will use the old credentials.
I found the password in C:\Program Files\Jenkins\jenkins.err. Open the jenkins.err text file and scroll down, and you will find the password.
Go to C:\Program Files (x86)\Jenkins\secrets
then with Notepad++ open the initialAdminPassword file and paste its content.
That's it.
--> If you are using a Linux machine, then log in as the root user: sudo su
--> Then go to the path below: cd /var/lib/jenkins/secrets
--> Just view the initialAdminPassword file; you can see the secret key.
--> Paste the secret key into the Jenkins window and it will be unlocked.
https://issues.jenkins-ci.org/browse/JENKINS-35981
Try this %2Fjenkins%2F instead of %2Fjenkins in the browser
Open the terminal on your Mac and open a new window (command+T)
Paste sudo cat /Users/Shared/Jenkins/Home/secrets/initialAdminPassword
It will ask for a password; type your password (I gave my Mac password; I haven't checked whether any other password would work) and press Enter
A key will be printed.
Copy the key and paste it where it asks you to enter the admin password
Click continue
The problem can be fixed in the latest version (mine is 2.4). The error comes because of %2fjenkins%2f in the URL. The previous version came with %2fjenkins and the same error used to occur. They have resolved the issue, but the URL has changed from %2fjenkins to %2fjenkins%. So, as a summary, the URL currently contains %2fjenkins%; before pasting the admin password, change it to %2fjenkins. Along with that, add an empty .last_exec_version file.
If you choose to run Jenkins as a Docker container, you may face the same problem I did.
Because accessing-the-jenkins-blue-ocean-docker-container is quite different,
a common problem is /var/lib/jenkins/secrets: No such file or directory.
You need to access it through Docker; the link Jenkins provides is quite helpful.
Except that <docker-container-name> may not be specified, in which case you may need to use the container ID.
After
docker exec -it jenkins-blueocean bash
or
docker exec -it YOUR_JENKINS_CONTAINER_ID bash
The /var/lib/jenkins/secrets/initialAdminPassword would be accessible.
The password would be there.
I set up Jenkins using Brew, but when I restarted my Mac, Jenkins was asking for the initialAdminPassword (the screenshot attached in the question).
And the problem was that it was not generated under the secrets directory.
So I found the Jenkins process running on port 8080 using $ sudo lsof -i -n -P | grep TCP and killed it using $ sudo kill 66 (66 was the process ID).
Then I downloaded the latest Jenkins .war file from: https://jenkins.io/download/
And executed the command $ java -jar jenkins.war (make sure you are in the jenkins.war directory).
And that's it; everything is working fine.
This works well when you are stuck with Docker on Windows and are using Git-Bash
Presuming something like:
# docker run --detach --publish 8080:8080 --volume jenkins_home:/var/jenkins_home --name jenkins jenkins/jenkins:lts
Execute this to get the container ID, for example "d56686cb700d":
# docker ps -l
Now tell Docker to return the password written in the logs for that Container ID:
# docker logs d56686cb700d 2>&1 | grep -A5 -B5 Admin
2>&1 redirects stderr to stdout
-A5 includes 5 lines AFTER the line with "Admin" in it
-B5 includes 5 lines BEFORE the line with "Admin" in it
Output example:
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
47647383733f4387a0d53c873334b707
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
*************************************************************
*************************************************************
*************************************************************
I found it under the directory below. Full issue details: https://github.com/jenkinsci/ibm-security-appscansource-scanner-plugin/issues/2
C:\Windows\SysWOW64\config\systemprofile\AppData\Local\Jenkins\.jenkins
Open the jenkins.err file in C:\Program Files\Jenkins\.
In that file, check for a hash key after these lines:
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
And paste it into the Jenkins prompt. Worked for me.
To solve this problem for a Docker container on Ubuntu 18.04.5 LTS (Bionic Beaver):
1- Connect to your Docker server or Ubuntu server with SSH or another method
2- Run sudo docker ps
3- Copy the container name (the "NAMES" column)
4- Run sudo docker logs your_container_name
5- Find the following sentence: "Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:" and copy the password
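Put together, the lookup is roughly the following (the grep is only a convenience to avoid scrolling, and the container name is whatever step 3 gave you):
sudo docker ps
sudo docker logs your_container_name 2>&1 | grep -A 3 "proceed to installation"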
I'd rather not have to push every little change to .travis.yml and every little change I make to the source in order to run the build. With Jenkins you can download Jenkins and run it locally. Does Travis offer something like this?
Note: I've seen the travis-ci cli and downloaded it, but all it seems
to do is call their API, which then connects to my GitHub repo, so if
I don't push, it won't matter that I restart the last build.
This process allows you to completely reproduce any Travis build job on your computer. Also, you can interrupt the process at any time and debug. Below is an example where I perfectly reproduce the results of job #191.1 on php-school/cli-menu.
Prerequisites
You have public repo on GitHub
You ran at least one build on Travis
You have Docker set up on your computer
Set up the build environment
Reference: https://docs.travis-ci.com/user/common-build-problems/
Make up your own temporary build ID
BUILDID="build-$RANDOM"
View the build log, open the show more button for WORKER INFORMATION and find the INSTANCE line, paste it in here and run (replace the tag after the colon with the newest available one):
INSTANCE="travisci/ci-garnet:packer-1512502276-986baf0"
Run the headless server
docker run --name $BUILDID -dit $INSTANCE /sbin/init
Run the attached client
docker exec -it $BUILDID bash -l
Run the job
Now you are inside your Travis environment. Run su - travis to begin.
This step is well defined but it is more tedious and manual. You will find every command that Travis runs in the environment. To do this, look for everything in the right column which has a tag like 0.03s.
On the left side you will see the actual commands. Run those commands, in order.
Result
Now is a good time to run the history command. You can restart the process and replay those commands to run the same test against an updated code base.
If your repo is private: ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB" then cat ~/.ssh/id_rsa.pub and add the key in your GitHub SSH settings
FYI: you can git pull from inside docker to load commits from your dev box before you push them to GitHub
If you want to change the commands Travis runs then it is YOUR responsibility to figure out how that translates back into a working .travis.yml.
I don't know how to clean up the Docker environment, it looks complicated, maybe this leaks memory
Travis-ci offers a new container-based infrastructure that uses docker. This can be very useful if you're trying to troubleshoot a travis-ci build by reproducing it locally. This is taken from Travis CI's documentation.
Troubleshooting Locally in a Docker Image
If you're having trouble tracking down the exact problem in a build it often helps to run the build locally. To do this you need to be using our container based infrastructure (ie, have sudo: false in your .travis.yml), and to know which Docker image you are using on Travis CI.
Running a Container Based Docker Image Locally
Download and install the Docker Engine.
Select an image from Docker Hub. If you're not using a language-specific image pick ci-ruby. Open a terminal and start an interactive Docker session using the image URL:
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Switch to the travis user:
su - travis
Clone your git repository into the / folder of the image.
Manually install any dependencies.
Manually run your Travis CI build command.
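As a sketch, the manual part might look like this inside the container (the repository URL, install step and build command are placeholders; use whatever your .travis.yml actually defines):
# after su - travis, inside the container
cd /                    # per the docs above; use ~ instead if / is not writable for you
git clone https://github.com/YOUR_USER/YOUR_REPO.git
cd YOUR_REPO
bundle install          # or whatever install step your .travis.yml uses
bundle exec rake        # or whatever your .travis.yml script command is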
UPDATE: I now have a complete turnkey, all-in-one answer, see https://stackoverflow.com/a/49019950/300224. Only took 3 years to figure out!
According to the Travis documentation: https://github.com/travis-ci/travis-ci there is a concoction of projects that collude to deliver the Travis CI web service we know and love. The following subset of projects appears to allow local make test functionality using the .travis.yml in your project:
travis-build
travis-build creates the build
script for each job. It takes the configuration from the .travis.yml file and
creates a bash script that is then run in the build environment by
travis-worker.
travis-cookbooks
travis-cookbooks holds the
Chef cookbooks that are used to provision the build environments.
travis-worker
travis-worker is responsible for
running the build scripts in a clean environment. It streams the log output to
travis-logs and pushes state updates (build starting/finishing)
to travis-hub.
(The other subprojects are responsible for communicating with GitHub, their web interface, email, and their API.)
Similar to Scott McLeod's but this also generates a bash script to run the steps from the .travis.yml.
Troubleshooting Locally in Docker with a generated Bash script
# choose the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
# to create ~/.travis
travis version
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `AUTHOR/PROJECT` on GitHub
cd ~/builds
mkdir AUTHOR
cd AUTHOR
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
Use wwtd (what would travis do) ruby gem to run tests on your local machine roughly as they would run on travis.
It will recreate the build matrix and run each configuration, great to sanity check setup before pushing.
gem i wwtd
wwtd
tl;dr Use image specified at https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image in combination with https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
EDIT 2019-12-06
#troubleshooting-locally-in-a-docker-image section was replaced by #running-builds-in-debug-mode which also describes how to SSH to the job running in the debug mode.
EDIT 2019-07-26
#troubleshooting-locally-in-a-docker-image section is no longer part of the docs; here's why
https://github.com/travis-ci/docs-travis-ci-com/issues/2342
https://blog.travis-ci.com/2018-10-04-combining-linux-infrastructures
https://blog.travis-ci.com/2018-11-30-announcing-xenial-build-environment-for-enterprise
Though, it's still in git history: https://github.com/travis-ci/docs-travis-ci-com/pull/2193.
Look for (quite old, couldn't find newer) image versions at: https://travis-ci.org/travis-ci/docs-travis-ci-com/builds/230889063#L661.
I wanted to inspect why one of the tests in my build failed with an error I didn't get locally.
Worked.
What actually worked was using the image specified at Troubleshooting Locally in a Docker Image docs page. In my case it was travisci/ci-garnet:packer-1512502276-986baf0.
I was able to add travis compile following the steps described at https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
dm@z580:~$ docker run --name travis-debug -dit travisci/ci-garnet:packer-1512502276-986baf0 /sbin/init
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
travisci/ci-garnet packer-1512502276-986baf0 6cbda6a950d3 11 months ago 10.2GB
dm@z580:~$ docker exec -it travis-debug bash -l
root@912e43dbfea4:/# su - travis
travis@912e43dbfea4:~$ cd builds/
travis@912e43dbfea4:~/builds$ git clone https://github.com/travis-ci/travis-build
travis@912e43dbfea4:~/builds$ cd travis-build
travis@912e43dbfea4:~/builds/travis-build$ mkdir -p ~/.travis
travis@912e43dbfea4:~/builds/travis-build$ ln -s $PWD ~/.travis/travis-build
travis@912e43dbfea4:~/builds/travis-build$ gem install bundler
travis@912e43dbfea4:~/builds/travis-build$ bundle install --gemfile ~/.travis/travis-build/Gemfile
travis@912e43dbfea4:~/builds/travis-build$ bundler binstubs travis
travis@912e43dbfea4:~/builds/travis-build$ cd ..
travis@912e43dbfea4:~/builds$ git clone --depth=50 --branch=master https://github.com/DusanMadar/PySyncDroid.git DusanMadar/PySyncDroid
travis@912e43dbfea4:~/builds$ cd DusanMadar/PySyncDroid/
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ ~/.travis/travis-build/bin/travis compile > ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ sed -i 's,--branch\\=\\\x27\\\x27,--branch\\=master,g' ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ bash ci.sh
Everything from .travis.yml was executed as expected (dependencies installed, tests ran, ...).
Note that before running bash ci.sh I had to change --branch\=\'\'\ to --branch\=master\ (see the second to last sed -i ... command) in ci.sh.
If that doesn't work, the command below will help to identify the target line number and you can edit the line manually.
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ cat ci.sh | grep -in branch
840: travis_cmd git\ clone\ --depth\=50\ --branch\=\'\'\ https://github.com/DusanMadar/PySyncDroid.git\ DusanMadar/PySyncDroid --echo --retry --timing
889:export TRAVIS_BRANCH=''
899:export TRAVIS_PULL_REQUEST_BRANCH=''
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$
Didn't work.
Followed the accepted answer for this question but didn't
find the image (travis-ci-garnet-trusty-1512502259-986baf0) mentioned by INSTANCE at https://hub.docker.com/u/travisci/.
The Build worker version points to a travis-ci/worker commit, and its travis-worker-install references quay.io/travisci/ as the image registry. So I tried that.
dm@z580:~$ docker run -it -u travis quay.io/travisci/travis-python /bin/bash
travis@370c23a773c9:/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
travis@370c23a773c9:/$
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/travisci/travis-python latest 753a216d776c 3 years ago 5.36GB
Definitely not Trusty (Ubuntu 14.04) and not small either.
You could try Trevor, which uses Docker to run your Travis build.
From its description:
I often need to run tests for multiple versions of Node.js. But I don't want to switch versions manually using n/nvm or push the code to Travis CI just to run the tests.
That's why I created Trevor. It reads .travis.yml and runs tests in all versions you requested, just like Travis CI. Now, you can test before push and keep your git history clean.
I'm not sure what your original reason for running Travis locally was; if you just wanted to play with it, then stop reading here, as the rest is irrelevant for you.
If you already have experience with hosted Travis and you want to get the same experience in your own datacenter, read on.
Since Dec 2014 Travis CI offers an Enterprise on-premises version.
http://blog.travis-ci.com/2014-12-19-introducing-travis-ci-enterprise/
The pricing is part of the article as well:
The licensing is done per seats, where every license includes 20 users. Pricing starts at $6,000 per license, which includes 20 users and 5 concurrent builds. There's a premium option with unlimited builds for $8,500.
I wasn't able to use the answers here as-is. For starters, as noted, the Travis help document on running jobs locally has been taken down. All of the blog entries and articles I found are based on that. The new "debug" mode doesn't appeal to me because I want to avoid the queue times and the Travis infrastructure until I've got some confidence I have gotten somewhere with my changes.
In my case I'm updating a Puppet module and I'm not an expert in Puppet, nor particularly experienced in Ruby, Travis, or their ecosystems. But I managed to build a workable test image out of tips and ideas in this article and elsewhere, and by examining the Travis CI build logs pretty closely.
I was unable to find recent images matching the names in the CI logs (for example, I could find travisci/ci-sardonyx, but could not find anything with "xenial" or with the same build name). From the logs it appears images are now transferred via AMQP instead of a mechanism more familiar to me.
I was able to find an image travisci/ubuntu-ruby:16.04 which matches the OS I'm targeting for my particular case. It does not have all the components used in the Travis CI build, so I built a new one based on this, with some components added to the image and others added in the container at runtime depending on the need.
So I can't offer a clear procedure, sorry. But what I did, essentially boiled down:
Find a recent Travis CI image in Docker Hub matching your target OS as closely as possible.
Clone the repository to a build directory, and launch the container with the build directory mounted as a volume, with the working directory set to the target volume
Now the hard work: go through the Travis build log and set up the environment. In my case, this meant setting up RVM, and then using bundle to install the project's dependencies. RVM appeared to be already present in the Travis environment but I had to install it; everything else came from reproducing the commands in the build log.
Run the tests.
If the results don't match what you saw in the Travis CI logs, go back to (3) and see where to go.
Optionally, create a reusable image.
Dev and test locally and then push and hopefully your Travis results will be as expected.
I know this is not concrete and may be obvious, and your mileage will definitely vary, but hopefully this is of some use to somebody. The Dockerfile and a README for my image are on GitHub for reference.
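To make steps 1 and 2 above a bit more concrete, this is roughly what I mean (the image tag and paths are just examples; my actual Dockerfile and README are in the repository linked above):
git clone https://github.com/YOUR_USER/your-puppet-module.git build
docker run -it \
  -v "$PWD/build":/home/travis/build \
  -w /home/travis/build \
  travisci/ubuntu-ruby:16.04 /bin/bash
# then, inside the container, reproduce the environment setup (RVM, bundle install)
# and run the tests exactly as they appear in the Travis CI build log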
It is possible to SSH to Travis CI environment via a bounce host. The feature isn't built in Travis CI, but it can be achieved by the following steps.
On the bounce host, create a travis user and ensure that you can SSH to it.
Put these lines in the script: section of your .travis.yml (e.g. at the end).
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
Where $bouncehostip is the IP/host of your bounce host, and $sshpassword is your defined SSH password. These variables can be added as encrypted variables.
Push the changes. You should be able to make an SSH connection to your bounce host.
Source: Shell into Travis CI Build Environment.
Here is the full example:
# use the new container infrastructure
sudo: required
dist: trusty
language: python
python: "2.7"
script:
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travisci@$bouncehostip
See: c-mart/travis-shell at GitHub.
See also: How to reproduce a travis-ci build environment for debugging