Chef/Knife: running two environments on the same machine

We have a series of cookbooks that are common across many environments. Up until recently, they've always been on different machines. Now we have a requirement that we need to run the same series of cookbooks across two environment files on the same machine.
What is the best practice here for doing this? Simply calling:
knife winrm %server% chef-client -m -x %user% -P %pass% -E %environment%
Is that the same as:
knife node edit %server%
And changing the environment there? We need to be able to switch environments from the command line. Please advise, TIA!

The tool you are looking for is knife-flip by John Cowie from Etsy:
$ knife node flip mynode.foo.com myenv [--preview]
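If installing a plugin isn't an option, newer Chef releases also ship a built-in knife subcommand for this; a minimal sketch, assuming Chef 12 or later and using the same placeholder node and environment names:
$ knife node environment set mynode.foo.com myenv
Either way the environment is changed on the Chef server, and the next chef-client run (triggered however you like, e.g. via knife winrm as in the question) converges against the new environment.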

In which terminal do I run these commands?

I'm trying to create a buildpack with libwebp, following this tutorial, which starts with the following commands:
heroku create buildpack-stager
heroku run bash --app buildpack-stager
curl -O https://mupdf.googlecode.com/files/mupdf-1.3-source.tar.gz
tar -xvzf mupdf-1.3-source.tar.gz
...
Should these initial commands be run in the terminal of my application on Heroku?
heroku create should be run on your local machine. This creates a new app on Heroku, so if you already have an app to work with it may not be necessary.
heroku run bash should also be run on your local machine. This spins up a one-off dyno for your app and runs the command bash on it.
From there, it looks like they want you to continue with curl, tar, etc. in the same shell as the previous step. The effect will be that these commands are run on the one-off dyno you're using, but you don't need to change terminals or anything.
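Putting the steps together, a rough sketch of the flow (the app name buildpack-stager comes from the question; the comments mark where each command runs):
# on your local machine
heroku create buildpack-stager
heroku run bash --app buildpack-stager
# the previous command drops you into a shell on a one-off dyno; continue there
curl -O https://mupdf.googlecode.com/files/mupdf-1.3-source.tar.gz
tar -xvzf mupdf-1.3-source.tar.gz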
You might find the Heroku CLI commands documentation page helpful.
You should run all of these commands in your local terminal (bash, etc.).

Best way to use rails console with cloud foundry

We've been experimenting with CF over Heroku and running into some issues. One of them deals with accessing the Rails console in a CF application instance (AI). We're using Pivotal's PWS and have tried a number of things, including:
cd app; export HOME=$(pwd); source .profile.d/0_ruby.sh; rails c
and
cd app; export HOME=$(pwd); source .profile.d/*.sh; rails c
Both of which are hit or miss and typically don't work.
It seems a bit ridiculous that it's THIS much work to access the rails console via CF. I feel like there has to be a better, faster way.
Does anyone have any tips?
For anyone saying we should cf ssh in, here is what happens:
vcap@2f4663e4-f876-490c-65e2-a498:~$ cd app
vcap@2f4663e4-f876-490c-65e2-a498:~/app$ ls .profile.d/
000_multi-supply.sh 0_ruby.sh
vcap@2f4663e4-f876-490c-65e2-a498:~/app$ source .profile.d/0_ruby.sh
vcap@2f4663e4-f876-490c-65e2-a498:~/app$ cd ..
vcap@2f4663e4-f876-490c-65e2-a498:~$ rails c
bash: rails: command not found
vcap@2f4663e4-f876-490c-65e2-a498:~$ source app/.profile.d/000_multisupply.sh
vcap@2f4663e4-f876-490c-65e2-a498:~$ rails c
bash: rails: command not found
As of writing this, to fire up a Rails console run:
cf ssh my-app -t -c "/tmp/lifecycle/launcher /home/vcap/app 'rails c' ''"
This will SSH into the container and use the lifecycle launcher, which sets up the environment for you, to execute the command.
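The same pattern should work for other one-off commands, with the second launcher argument being the command to run; a sketch, assuming your app is called my-app as above:
cf ssh my-app -t -c "/tmp/lifecycle/launcher /home/vcap/app 'rake db:migrate' ''"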

Setting up an SSH tunnel on Heroku: installing SSH key pairs?

My RoR app needs to access a remote database (FWIW it's mysql hosted on rds.amazonaws.com). The only way to access it is through an SSH tunnel.
I've already tested access on my local machine. I'm setting up the tunnel via the equivalent of:
ssh -f -N -L 3307:longname.rds.amazonaws.com:3306 remote_user@remote_host.com
(but see https://stackoverflow.com/a/27305457/558639 to see how I'm actually doing it). At any rate, I will need to install an SSH key pair (both private and public parts) on Heroku for this to work.
I'm in unfamiliar territory here, though. I could write a script that runs at the beginning of a Heroku session and installs the keys. What's the right way to accomplish this without exposing the private key unnecessarily?
Here's what I've come up with. (See SSH tunneling from Heroku for a longer description.)
set up a bunch of environment variables, including the public and private keys, using heroku config:set NAME1=value1 NAME2=value2 etc. (a concrete sketch follows the script below)
create .profile.d/web-setup.sh with the following contents. Note that as per https://devcenter.heroku.com/articles/profiled, any file in the .profile.d directory will be run when the dyno is first set up.
NOTE: This way, the private SSH key only appears as a configuration variable in the heroku environment. Since other sensitive information is kept there, I assume that this is a relatively safe approach.
The .profile.d/web-setup.sh file contains:
# file=.profile.d/web-setup.sh
# create keypair files on this dyno
echo $0: creating public and private key files
mkdir -p ${HOME}/.ssh
echo "${PUBLIC_KEY}" > ${HOME}/.ssh/heroku_id_rsa.pub
chmod 644 ${HOME}/.ssh/heroku_id_rsa.pub
# note the use of double quotes to preserve newlines!
echo "${PRIVATE_KEY}" > ${HOME}/.ssh/heroku_id_rsa
chmod 600 ${HOME}/.ssh/heroku_id_rsa
# You may need to preload known-hosts here. See
# https://stackoverflow.com/questions/21575582/ssh-tunneling-from-heroku/27361295#27361295
# on how to do that.
# open a tunnel if not already running
SSH_CMD="ssh -f -i ${HOME}/.ssh/heroku_id_rsa -N -L ${LOCAL_PORT}:${REMOTE_MYSQL_HOST}:${MYSQL_PORT} ${REMOTE_USER}#${REMOTE_SITE}"
PID=`pgrep -f "${SSH_CMD}"`
if [ $PID ] ; then
  echo $0: tunnel already running on ${PID}
else
  echo $0: launching tunnel
  $SSH_CMD
fi
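For the config:set step above, a minimal sketch of loading the key material and tunnel settings (the variable names match the script; the host and port values are taken from the question's example, and the quoting preserves the newlines in the keys):
heroku config:set PUBLIC_KEY="$(cat ~/.ssh/heroku_id_rsa.pub)" PRIVATE_KEY="$(cat ~/.ssh/heroku_id_rsa)"
heroku config:set LOCAL_PORT=3307 MYSQL_PORT=3306 REMOTE_MYSQL_HOST=longname.rds.amazonaws.com REMOTE_USER=remote_user REMOTE_SITE=remote_host.com
The app's database.yml can then point at 127.0.0.1 and $LOCAL_PORT instead of the RDS host directly.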

Setting up OpenProject on OpenShift

I'm trying to install OpenProject on OpenShift but I'm having difficulties in understanding the process. I've managed to create an OpenShift application and SSH into the domain, however I don't have permissions to download the zip file / create the folder as in the instructions.
I have to mention that my GIT/Ruby/Openshift knowledge is very limited.
Has anyone tried this before? Can you tell me if it's possible and how?
Thanks!
You'll want to ssh into your gear
$ rhc ssh -a <app_name>
Then cd into the data dir
$ cd $OPENSHIFT_DATA_DIR
Wget your file
$ wget https://github.com/opf/openproject/archive/2.4.0.zip
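Then unpack it in place (a sketch, assuming unzip is available on the gear; the GitHub archive unpacks into an openproject-2.4.0 directory):
$ unzip 2.4.0.zip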
Despite this issue being so old, I would like to share my efforts. I also wanted to get OpenProject running on OpenShift. My approach was to first get an initial pod running with root permissions to set things up via the web UI, and then run the individual services without root permissions.
Details can be found in this Github repository:
https://github.com/jngrb/openproject-openshift

How to run travis-ci locally

I'd rather not have to push every little change to .travis.yml and every little change I make to the source in order to run the build. With Jenkins you can download Jenkins and run it locally. Does Travis offer something like this?
Note: I've seen the travis-ci cli and downloaded it, but all it seems to do is call their API, which then connects to my GitHub repo, so if I don't push, it won't matter that I restart the last build.
This process allows you to completely reproduce any Travis build job on your computer. Also, you can interrupt the process at any time and debug. Below is an example where I perfectly reproduce the results of job #191.1 on php-school/cli-menu.
Prerequisites
You have a public repo on GitHub
You ran at least one build on Travis
You have Docker set up on your computer
Set up the build environment
Reference: https://docs.travis-ci.com/user/common-build-problems/
Make up your own temporary build ID
BUILDID="build-$RANDOM"
View the build log, click the "show more" toggle for WORKER INFORMATION and find the INSTANCE line, paste it in here and run (replace the tag after the colon with the newest available one):
INSTANCE="travisci/ci-garnet:packer-1512502276-986baf0"
Run the headless server
docker run --name $BUILDID -dit $INSTANCE /sbin/init
Run the attached client
docker exec -it $BUILDID bash -l
Run the job
You are now inside your Travis environment. Run su - travis to begin.
This step is well defined but it is more tedious and manual. You will find every command that Travis runs in the environment. To do this, look for everything in the right column which has a tag like 0.03s.
On the left side you will see the actual commands. Run those commands, in order.
Result
Now is a good time to run the history command. You can restart the process and replay those commands to run the same test against an updated code base.
If your repo is private: ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB", then cat ~/.ssh/id_rsa.pub and add the key to your GitHub account (Settings > SSH and GPG keys)
FYI: you can git pull from inside docker to load commits from your dev box before you push them to GitHub
If you want to change the commands Travis runs then it is YOUR responsibility to figure out how that translates back into a working .travis.yml.
I don't know how to clean up the Docker environment; it looks complicated and may leak memory
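For the cleanup point above, a minimal sketch using the $BUILDID container name from earlier:
docker stop $BUILDID
docker rm $BUILDID
# optionally reclaim stopped containers and dangling images as well (interactive prompt)
docker system prune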
Travis CI offers a container-based infrastructure that uses Docker. This can be very useful if you're trying to troubleshoot a Travis CI build by reproducing it locally. This is taken from Travis CI's documentation.
Troubleshooting Locally in a Docker Image
If you're having trouble tracking down the exact problem in a build it often helps to run the build locally. To do this you need to be using our container based infrastructure (ie, have sudo: false in your .travis.yml), and to know which Docker image you are using on Travis CI.
Running a Container Based Docker Image Locally
Download and install the Docker Engine.
Select an image from Docker Hub. If you're not using a language-specific image pick ci-ruby. Open a terminal and start an interactive Docker session using the image URL:
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Switch to the travis user:
su - travis
Clone your git repository into the / folder of the image.
Manually install any dependencies.
Manually run your Travis CI build command.
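Put together, a rough sketch of such a session (AUTHOR/PROJECT is a placeholder for your repository, and the install/test commands must be taken from your own .travis.yml):
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
su - travis
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
bundle install     # or whatever your "install:" step is
bundle exec rake   # or whatever your "script:" step is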
UPDATE: I now have a complete turnkey, all-in-one answer, see https://stackoverflow.com/a/49019950/300224. Only took 3 years to figure out!
According to the Travis documentation: https://github.com/travis-ci/travis-ci there is a concoction of projects that collude to deliver the Travis CI web service we know and love. The following subset of projects appears to allow local make test functionality using the .travis.yml in your project:
travis-build
travis-build creates the build script for each job. It takes the configuration from the .travis.yml file and creates a bash script that is then run in the build environment by travis-worker.
travis-cookbooks
travis-cookbooks holds the Chef cookbooks that are used to provision the build environments.
travis-worker
travis-worker is responsible for running the build scripts in a clean environment. It streams the log output to travis-logs and pushes state updates (build starting/finishing) to travis-hub.
(The other subprojects are responsible for communicating with GitHub, their web interface, email, and their API.)
Similar to Scott McLeod's answer, but this also generates a bash script to run the steps from the .travis.yml.
Troubleshooting Locally in Docker with a generated Bash script
# choose the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
# to create ~/.travis
travis version
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `AUTHOR/PROJECT` on GitHub
cd ~/builds
mkdir AUTHOR
cd AUTHOR
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
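As the comment above notes, the generated ci.sh ignores the build matrix and any environment variables defined in the Travis settings UI, so you may have to export those yourself before running it; a sketch with a hypothetical variable name:
export MY_SECRET_TOKEN="value-from-travis-settings"   # placeholder: set whatever your build expects
bash ci.sh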
Use the wwtd (what would travis do) Ruby gem to run tests on your local machine roughly as they would run on Travis.
It will recreate the build matrix and run each configuration, great to sanity check setup before pushing.
gem i wwtd
wwtd
tl;dr Use image specified at https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image in combination with https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
EDIT 2019-12-06
The #troubleshooting-locally-in-a-docker-image section was replaced by #running-builds-in-debug-mode, which also describes how to SSH into the job running in debug mode.
EDIT 2019-07-26
The #troubleshooting-locally-in-a-docker-image section is no longer part of the docs; here's why:
https://github.com/travis-ci/docs-travis-ci-com/issues/2342
https://blog.travis-ci.com/2018-10-04-combining-linux-infrastructures
https://blog.travis-ci.com/2018-11-30-announcing-xenial-build-environment-for-enterprise
Though, it's still in git history: https://github.com/travis-ci/docs-travis-ci-com/pull/2193.
Look for (quite old, couldn't find newer) image versions at: https://travis-ci.org/travis-ci/docs-travis-ci-com/builds/230889063#L661.
I wanted to inspect why one of the tests in my build failed with an error I didn't get locally.
Worked.
What actually worked was using the image specified at Troubleshooting Locally in a Docker Image docs page. In my case it was travisci/ci-garnet:packer-1512502276-986baf0.
I was able to set up travis compile by following the steps described at https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
dm@z580:~$ docker run --name travis-debug -dit travisci/ci-garnet:packer-1512502276-986baf0 /sbin/init
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
travisci/ci-garnet packer-1512502276-986baf0 6cbda6a950d3 11 months ago 10.2GB
dm@z580:~$ docker exec -it travis-debug bash -l
root@912e43dbfea4:/# su - travis
travis@912e43dbfea4:~$ cd builds/
travis@912e43dbfea4:~/builds$ git clone https://github.com/travis-ci/travis-build
travis@912e43dbfea4:~/builds$ cd travis-build
travis@912e43dbfea4:~/builds/travis-build$ mkdir -p ~/.travis
travis@912e43dbfea4:~/builds/travis-build$ ln -s $PWD ~/.travis/travis-build
travis@912e43dbfea4:~/builds/travis-build$ gem install bundler
travis@912e43dbfea4:~/builds/travis-build$ bundle install --gemfile ~/.travis/travis-build/Gemfile
travis@912e43dbfea4:~/builds/travis-build$ bundler binstubs travis
travis@912e43dbfea4:~/builds/travis-build$ cd ..
travis@912e43dbfea4:~/builds$ git clone --depth=50 --branch=master https://github.com/DusanMadar/PySyncDroid.git DusanMadar/PySyncDroid
travis@912e43dbfea4:~/builds$ cd DusanMadar/PySyncDroid/
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ ~/.travis/travis-build/bin/travis compile > ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ sed -i 's,--branch\\=\\\x27\\\x27,--branch\\=master,g' ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ bash ci.sh
Everything from .travis.yml was executed as expected (dependencies installed, tests ran, ...).
Note that before running bash ci.sh I had to change --branch\=\'\'\ to --branch\=master\ (see the second to last sed -i ... command) in ci.sh.
If that doesn't work, the command below will help identify the target line number so you can edit the line manually.
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ cat ci.sh | grep -in branch
840: travis_cmd git\ clone\ --depth\=50\ --branch\=\'\'\ https://github.com/DusanMadar/PySyncDroid.git\ DusanMadar/PySyncDroid --echo --retry --timing
889:export TRAVIS_BRANCH=''
899:export TRAVIS_PULL_REQUEST_BRANCH=''
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$
Didn't work.
Followed the accepted answer for this question but didn't find the image (travis-ci-garnet-trusty-1512502259-986baf0) mentioned in the build log's INSTANCE line when searching at https://hub.docker.com/u/travisci/.
The build worker version points to a travis-ci/worker commit, and its travis-worker-install references quay.io/travisci/ as the image registry. So I tried that.
dm@z580:~$ docker run -it -u travis quay.io/travisci/travis-python /bin/bash
travis@370c23a773c9:/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
travis@370c23a773c9:/$
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/travisci/travis-python latest 753a216d776c 3 years ago 5.36GB
Definitely not Trusty (Ubuntu 14.04) and not small either.
You could try Trevor, which uses Docker to run your Travis build.
From its description:
I often need to run tests for multiple versions of Node.js. But I don't want to switch versions manually using n/nvm or push the code to Travis CI just to run the tests.
That's why I created Trevor. It reads .travis.yml and runs tests in all versions you requested, just like Travis CI. Now, you can test before push and keep your git history clean.
I'm not sure what your original reason for running Travis locally was; if you just wanted to play with it, stop reading here, as the rest is irrelevant for you.
If you already have experience with hosted Travis and you want to get the same experience in your own datacenter, read on.
Since December 2014, Travis CI has offered an Enterprise on-premises version.
http://blog.travis-ci.com/2014-12-19-introducing-travis-ci-enterprise/
The pricing is part of the article as well:
The licensing is done per seats, where every license includes 20 users. Pricing starts at $6,000 per license, which includes 20 users and 5 concurrent builds. There's a premium option with unlimited builds for $8,500.
I wasn't able to use the answers here as-is. For starters, as noted, the Travis help document on running jobs locally has been taken down. All of the blog entries and articles I found are based on that. The new "debug" mode doesn't appeal to me because I want to avoid the queue times and the Travis infrastructure until I've got some confidence I have gotten somewhere with my changes.
In my case I'm updating a Puppet module and I'm not an expert in Puppet, nor particularly experienced in Ruby, Travis, or their ecosystems. But I managed to build a workable test image out of tips and ideas in this article and elsewhere, and by examining the Travis CI build logs pretty closely.
I was unable to find recent images matching the names in the CI logs (for example, I could find travisci/ci-sardonyx, but could not find anything with "xenial" or with the same build name). From the logs it appears images are now transferred via AMQP instead of a mechanism more familiar to me.
I was able to find an image travisci/ubuntu-ruby:16.04 which matches the OS I'm targeting for my particular case. It does not have all the components used in the Travis CI environment, so I built a new one based on it, with some components added to the image and others added in the container at runtime depending on the need.
So I can't offer a clear procedure, sorry. But what I did essentially boiled down to the following:
Find a recent Travis CI image in Docker Hub matching your target OS as closely as possible.
Clone the repository to a build directory, and launch the container with the build directory mounted as a volume, with the working directory set to the target volume
Now the hard work: go through the Travis build log and set up the environment. In my case, this meant setting up RVM, and then using bundle to install the project's dependencies. RVM appeared to be already present in the Travis environment but I had to install it; everything else came from reproducing the commands in the build log.
Run the tests.
If the results don't match what you saw in the Travis CI logs, go back to (3) and see where to go.
Optionally, create a reusable image.
Dev and test locally and then push and hopefully your Travis results will be as expected.
I know this is not concrete and may be obvious, and your mileage will definitely vary, but hopefully this is of some use to somebody. The Dockerfile and a README for my image are on GitHub for reference.
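For step 2 above, a hedged sketch of launching the container with the build directory mounted and used as the working directory (the image tag matches the one mentioned earlier; adjust to your target OS):
docker run -it -v "$PWD":/build -w /build travisci/ubuntu-ruby:16.04 /bin/bash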
It is possible to SSH to the Travis CI environment via a bounce host. The feature isn't built into Travis CI, but it can be achieved by the following steps.
On the bounce host, create a travis user and ensure that you can SSH to the host as that user.
Put these lines in the script: section of your .travis.yml (e.g. at the end).
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
Where $bouncehostip is the IP/host of your bounce host, and $sshpassword is your defined SSH password. These variables can be added as encrypted variables.
Push the changes. You should be able to make an SSH connection to your bounce host.
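Once that last script line establishes the reverse tunnel, you should be able to hop from the bounce host back into the running build; a sketch, run on the bounce host itself (port 9999 matches the -R option above, and the password is the $sshpassword you set):
ssh -p 9999 travis@localhost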
Source: Shell into Travis CI Build Environment.
Here is the full example:
# use the new container infrastructure
sudo: required
dist: trusty
language: python
python: "2.7"
script:
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
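The $sshpassword and $bouncehostip values used above can be stored as encrypted environment variables with the travis CLI; a sketch, run inside the repository with your own values substituted:
travis encrypt sshpassword=YourSecretPassword --add
travis encrypt bouncehostip=203.0.113.10 --add
The --add flag writes the encrypted entries into the env section of .travis.yml.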
See: c-mart/travis-shell at GitHub.
See also: How to reproduce a travis-ci build environment for debugging
