Jenkins / Hudson environment variables

I am running Jenkins as user jenkins, which has $PATH set to something, but when I go into the Jenkins web interface, on the System Properties page (http://$host/systemInfo), I see a different $PATH.
I have installed Jenkins on CentOS with the native rpm from the Jenkins website. I am using the startup script provided with the installation, started via sudo /etc/init.d/jenkins start
Can anyone please explain to me why that happens?

Michael,
Two things:
When Jenkins connects to a computer, it uses the sh shell, not the bash shell (at least this is what I have noticed - I may be wrong). So any changes you make to $PATH in your .bashrc file are not considered.
Also, any changes you make to $PATH in your local shell (one that you personally ssh into) will not show up in Jenkins.
To change the path that Jenkins uses, you have two options (AFAIK):
1) Edit your /etc/profile file and add the paths that you want there
2) Go to the configuration page of your slave, and add environment variable PATH, with value: $PATH:/followed-by/paths/you/want/to/add
If you use the second option, your System Information will still not show it, but your builds will see the added paths.
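A rough sketch of option 1 (the extra directory is only a placeholder for whatever you actually need):
# appended to /etc/profile
export PATH=$PATH:/opt/extra-tools/bin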

I kept running into this problem, but now I just add:
source /etc/profile
as the first step in my build process. Now all my subsequent steps run with the expected environment and Jenkins operates smoothly.
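For example, a shell build step using this trick might look like the following minimal sketch (the make invocation is just a stand-in for whatever the job really runs):
#!/bin/bash
source /etc/profile   # pull in the system-wide PATH and friends
make build            # placeholder for the actual build command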

You can also edit the /etc/sysconfig/jenkins file to make any changes to the environment variables, etc. I simply added source /etc/profile to the end of the file. /etc/profile has all of the proper PATH variables set up. When you do this, make sure you restart Jenkins:
/etc/init.d/jenkins restart
We are running ZendServer CE which installs pear, phing, etc in a different path so this was helpful. Also, we don't get the LD_LIBRARY_PATH errors we used to get with Oracle client and Jenkins.

I tried /etc/profile, ~/.profile and ~/.bash_profile and none of those worked. I found that editing ~/.bashrc for the jenkins slave account did.

The information in this answer is out of date. You need to go to Manage Jenkins -> Configure System, where you can add an environment variable as a key-value pair.
e.g. export MYVAR=test becomes key MYVAR with value test.

I found two plugins for that.
One loads the values from a file and the other lets you configure the values in the job configuration screen.
Envfile Plugin — This plugin enables you to set environment variables via a file. The file's format must be the standard Java property file format.
EnvInject Plugin — This plugin makes it possible to add environment variables and execute a setup script in order to set up an environment for the Job.
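For either plugin, the file is plain key=value lines in the standard Java properties format, for example (the values here are illustrative only):
PATH=/usr/local/bin:/usr/bin:/bin
JAVA_HOME=/usr/lib/jvm/default-java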

On my newer EC2 instance, simply adding the new value to the PATH in the Jenkins user's .profile and then restarting Tomcat worked for me.
On an older instance where the config is different, using #2 from Sagar's answer was the only thing that worked (i.e. .profile, .bash* didn't work).

Couldn't you just add it as an environment variable in Jenkins settings:
Manage Jenkins -> Global properties > Environment variables:
And then click "Add" to add a property PATH and its value to what you need.

This is how I solved this annoying issue:
I changed the PATH variable as Sagar suggested in his second option, but I still got a different PATH value than I expected.
Eventually I found out that it was the EnvInject plugin that replaced my PATH variable!
So I could either uninstall EnvInject or just use it to inject the PATH variable.
As many of our Jenkins jobs use that plugin, I didn't want to uninstall it...
So I created a file: environment_variables.properties under my Jenkins home directory.
This file contained the path environment value that I needed:
PATH=$PATH:/usr/local/git/bin/.
From the Jenkins web interface: Manage Jenkins -> Configure System.
In that screen - I ticked the Prepare jobs environment option, and in the Properties File Path field I entered the path to my file: /var/lib/jenkins/environment_variables.properties.
This way every Jenkins job we have receives whatever variables I put in this environment_variables.properties file.

Jenkins also supports the <variable>+<name> format (e.g. PATH+JAVA) to prepend to a variable; this works for any variable, not only PATH.
It can be set in the global environment variables or in a node's environment variables.
This is also supported in the pipeline step withEnv:
node {
    withEnv(['PATH+JAVA=/path/to/java/bin']) {
        ...
    }
}
Just take note, it prepends to the variable. If it must be appended you need to do what the other answers show.
You may also use the syntax PATH+WHATEVER=/something to prepend /something to $PATH. For details, see the Pipeline steps documentation and the Javadoc for EnvVars.

I only made progress on this issue after a /etc/init.d/jenkins force-reload. I recommend trying that before anything else, and using it rather than restart.

On my Ubuntu 13.04, I tried quite a few tweaks before succeeding with this:
Edit /etc/init/jenkins.conf
Locate the spot where "exec start-stop-server..." begins
Insert the environment update just before that, i.e.
export PATH=$PATH:/some/new/path/bin

Add /usr/bin/bash as the Shell executable under
Jenkins -> Manage Jenkins -> Configure System -> Shell -> Shell executable.
Jenkins uses sh, so even /etc/profile didn't work for me.
Once I added this, I had all the environment variables.

Solution that worked for me
source ~/.bashrc
Explanation
I first verified Jenkins was running bash, with echo $SHELL and echo $BASH (note I'm explicitly putting #!/bin/bash at the top of the textarea in Jenkins; I'm not sure if that's a requirement to get bash). Sourcing /etc/profile as others suggested was not working.
Looking at /etc/profile I found
if [ "$PS1" ]; then
...
and inspecting "$PS1" found it null. I tried spoofing $PS1 to no avail like so
export PS1=1
bash -c 'echo $PATH'
however this did not produce the desired result (adding the rest of the $PATH I expected to see). But if I tell bash to be interactive
export PS1=1
bash -ci 'echo $PATH'
the $PATH was altered as I expected.
I was trying to figure out how to properly spoof an interactive shell to get /etc/bash.bashrc to load, but it turns out all I needed was down in ~/.bashrc, so simply sourcing it solved the problem.
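The build step ended up as something like this minimal sketch (assuming the needed exports live in ~/.bashrc and the file doesn't bail out early for non-interactive shells):
#!/bin/bash
source ~/.bashrc   # pull in the PATH additions defined there
echo $PATH         # sanity check that the expected entries are present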

I tried all the things above - they didn't work for me. I found two solutions (both for an SSH slave).
Option I (slave settings)
Go to the slave settings and add a new environment variable:
PATH
${PATH}:${HOME}/.pub-cache/bin:${HOME}/.local/bin
The "${HOME}" part is important; it makes the additional PATH absolute.
A relative path did not work for me.
Option II (pipeline-script)
pipeline {
    agent {
        label 'your-slave'
    }
    environment {
        PATH = "/home/jenkins/.pub-cache/bin:$PATH"
    }
    stages {
        stage('Test') {
            steps {
                ansiColor('xterm') {
                    echo "PATH is: $PATH"
                }
            }
        }
    }
}

On Ubuntu I just edit /etc/default/jenkins, add source /etc/profile at the end, and it works for me.

Running the command with the environment variable set inline is also effective. Of course, you have to do it for each command you run, but you probably have a job script, so you probably only have one command per build. My job script is a python script that uses the environment to decide which python to use, so I still needed to put /usr/local/bin/python2.7 on its path:
PATH=/usr/local/bin <my-command>
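For instance, with a hypothetical job script named myjob.py, the whole build step could be the single line below. Note that this form sets PATH only for that one command, and that it replaces the path rather than appending; use PATH=/usr/local/bin:$PATH ... if the rest of the default path still matters.
PATH=/usr/local/bin python2.7 myjob.py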

What worked for me was overriding the PATH environment for the slave.
Set: PATH
To: $PATH:/usr/local/bin
Then disconnecting and reconnecting the slave.
Despite what the system information page was showing, it worked.

I have Jenkins 1.639 installed on SLES 11 SP3 via zypper (the package manager).
Installation configured jenkins as a service
# service jenkins
Usage: /etc/init.d/jenkins {start|stop|status|try-restart|restart|force-reload|reload|probe}
Although /etc/init.d/jenkins sources /etc/sysconfig/jenkins, any env variables set there are not inherited by the jenkins process because it is started in a separate login shell with a new environment like this:
startproc -n 0 -s -e -l /var/log/jenkins.rc -p /var/run/jenkins.pid -t 1 /bin/su -l -s /bin/bash -c '/usr/java/default/bin/java -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --javaHome=/usr/java/default --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=8009 --debug=9 --handlerCountMax=100 --handlerCountMaxIdle=20 &' jenkins
The way I managed to set env vars for the jenkins process is via .bashrc in its home directory - /var/lib/jenkins. I had to create /var/lib/jenkins/.bashrc as it did not exist before.
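A minimal sketch of what that /var/lib/jenkins/.bashrc can contain (the extra directories are only examples; the java path matches the startproc line above):
# /var/lib/jenkins/.bashrc - created by hand for the jenkins user
export PATH=$PATH:/usr/local/bin:/opt/tools/bin
export JAVA_HOME=/usr/java/default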

1- Add to your profile file .bash_profile (it is in the /home/your_user/ folder):
vi .bash_profile
add:
export JENKINS_HOME=/apps/data/jenkins
export PATH=$PATH:$JENKINS_HOME
==> this is the Jenkins workspace
2- If you use Jetty:
go to the jenkins.xml file
and add:
<Arg>/apps/data/jenkins</Arg>

Here is what I did on Ubuntu 18.04 LTS with Jenkins 2.176.2.
I created a .bash_aliases file and added the PATH, proxy variables and so on there.
At the beginning of .bashrc this was defined:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
So it checks whether we are starting a non-interactive shell and, if so, does nothing here.
At the bottom of .bashrc there was an include for .bash_aliases:
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
So I moved the .bash_aliases loading to the top of .bashrc, just above the non-interactive check.
This didn't work at first, but then I disconnected the slave and re-connected it so it loaded the variables again. You don't need to restart the whole Jenkins instance if you are modifying slave variables - just disconnect and re-connect.
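A sketch of such a ~/.bash_aliases on the agent (the directory and proxy host below are placeholders, not my actual values):
# ~/.bash_aliases - now sourced at the top of .bashrc, before the interactive check
export PATH=$PATH:/opt/mytools/bin
export http_proxy=http://proxy.example.com:8080
export https_proxy=$http_proxy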

If your pipeline is executed on a remote node connected via SSH, Jenkins actually runs an agent application there that performs the incoming actions.
By default the zsh shell is used, not bash (my Jenkins is version 2.346.3).
Furthermore, the jenkins-agent runs a non-login shell, so PATH keeps its default value; even if you put configuration in .zshrc, it will be skipped.
My choice is to put the following shebang at the start of the script:
#!/bin/bash -l
The -l option makes bash run in login mode, in which case bash reads the configuration in /etc/profile and ~/.bash_profile.
If you run the script in a Jenkins pipeline it will look like:
steps {
    sh '''#!/bin/bash -l
    env
    '''
}

Related

How to add to a slave's PATH using the Slave Setup Plugin?

I have 2 RHEL machines setup in a Master/Slave configuration using Jenkins ver. 1.609.2
The slave is being launched via SSH Slaves Plugin 1.10.
I'm trying to use the Slave Setup Plugin v 1.9 to install the tools that will be necessary for my slave machine to run builds. In particular I am installing sqlplus.
Here is the script that I am running in order to try installing sqlplus:
if command -v sqlplus >/dev/null; then
echo "sqlplus already setup. Nothing to do."
else
#Create directory for sqlplus and unzip it there.
mkdir /jenkins/tools/sqlplus
tar -xvf sqlplussetup/instantclient-basiclite-linux.x64-12.1.0.2.0.tar.gz -C /jenkins/tools/sqlplus || { echo 'unzip failed' ; exit 1; }
tar -xvf sqlplussetup/instantclient-sqlplus-linux.x64-12.1.0.2.0.tar.gz -C /jenkins/tools/sqlplus || { echo 'unzip failed' ; exit 1; }
cd /jenkins/tools/sqlplus/instantclient_12_1
#Create links for the Oracle libs
ln -s libclntsh.so.12.1 libclntsh.so || { echo 'Could not create link' ; exit 1; }
ln -s libocci.so.12.1 libocci.so || { echo 'Could not create link' ; exit 1; }
#Add two lines to .bashrc only if they don't already exist. Export LD_LIBRARY_PATH and add sqlplus to PATH.
grep -q -F 'export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH' /home/jenkins/.bashrc || echo 'export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH' >> /home/jenkins/.bashrc
grep -q -F 'export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1' /home/jenkins/.bashrc || echo 'export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1' >> /home/jenkins/.bashrc
#Export variables so they can be used right away
export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH
export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1
echo "sqlplus has been setup."
fi
This script runs successfully and everything appears to work until I try to run a build and execute the sqlplus command. The build fails because sqlplus is not a recognized command.
My main question is this:
What is the proper way to automatically add an environment variable when launching a slave?
Please note I am looking for an automated way of doing this. I don't want to go into the configuration screen for my slave, tick a checkbox and specify an environment variable. That is counter-productive to what I am trying to achieve which is a slave that is immediately usable for builds once connected.
I pretty much understand why my script doesn't work. When Jenkins is launching the slave it first makes an SSH connection and then it runs my setup script using the command
/bin/sh -xe /jenkins/tmp/hudson8035138410767957141.sh
where the contents of hudson8035138410767957141.sh are my script from above. So obviously, the export isn't going to work. I was hoping that adding the exports to the .bashrc file would get around this, but it does not work. I think this is because this script is executed after the ssh connection is established and therefore the .bashrc has already been read.
Problem is I can't figure out any way to work around this limitation.
Bash does not read any of its startup files (.bashrc, .profile etc.) for non-interactive shells that don't have the --login option set explicitly -- that's why the exports don't work.
So, solution "A" is to keep the bashrc magic that you suggest above, and to add the --login option by changing the first line in your build step to
#!/bin/bash --login
<your script here>
The explicit shebang on the first line will also prevent the excessive debug output that you get from the default -x option (see your console snippet above).
Alternative solution "B" uses the fact that bash will source any script whose name is given in $BASH_ENV (if that variable is defined and the file exists). Define that variable globally in your slave properties (e.g., set to /jenkins/tools/setup.sh) and add exports as needed during slave setup. Every bash shell build step will read the settings then.
With solution "B" you don't need to use the --login option and you don't have to mess up the .bashrc. However, the "BASH_ENV" feature is only active when bash runs in "bash mode". As Jenkins starts the shell via sh, bash tries to emulate historic sh, which does not have that feature. So, also for B, you need a shebang:
#!/bin/bash
<your script here>
But you'd need that anyway to get rid of the tracing output, which is usually too much in production setups.
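A sketch of solution B using the paths from the question (BASH_ENV would be defined as a global or node-level environment variable pointing at this file):
# /jenkins/tools/setup.sh - sourced automatically by every bash build step that starts with #!/bin/bash
export LD_LIBRARY_PATH=/jenkins/tools/sqlplus/instantclient_12_1:$LD_LIBRARY_PATH
export PATH=$PATH:/jenkins/tools/sqlplus/instantclient_12_1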

Ansible: How to globally set PATH for solaris

I am writing Ansible playbooks to set up and install our applications on Solaris servers.
The problem is that the (bash) scripts which I need to execute all assume that a certain directory lies on the PATH, namely /data/bin - which would normally not be a problem were it not for Ansible ignoring all the .profile and .bashrc config.
Now, I know that you can specify the environment for shell tasks via the environment flag, for example like this:
- shell: printenv
  environment:
    PATH: /usr/bin:/usr/sbin:/data/bin
This correctly puts the /data/bin folder on the PATH, and the printenv command displays it correctly (and my bash scripts run correctly).
But. There are two problems however:
First of all, it is very annoying to have to specify the environment over and over again. I know that you can define the environment in a variable in some playbook base file and then reference that, but you still have to set environment: ... on every single shell task.
Secondly, the above example does not allow me to specify the path dynamically, e.g. as PATH: $PATH:/data/bin, because Ansible executes this in a way that does not resolve $PATH, so the command fails catastrophically. So essentially this overrides any other changes to PATH.
I am looking for a solution where
the additional PATH entry should only be added once
the additional PATH entry should not override entries added by other tasks
P.S. I found this nice explanation on how to do this on Linux, but it makes use of /etc/environment which does not exist on Solaris. (And /etc/profile is once again ignored by Ansible.)
Try adding -o SendEnv=PATH to ssh_args in ansible.cfg. This requires that:
the shell in which you run Ansible has /data/bin in its PATH (or however Ansible allows you to modify the current/local PATH variable), and
the remote machine has AcceptEnv set correctly.
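A sketch of the pieces involved (this assumes OpenSSH on the Solaris host; note that setting ssh_args replaces Ansible's default SSH options, so keep any defaults you still want):
# ansible.cfg on the control machine
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o SendEnv=PATH
# /etc/ssh/sshd_config on the Solaris host
AcceptEnv PATH
# and in the shell that runs ansible-playbook
export PATH=$PATH:/data/bin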

How to add PATH variable to sudo in Fabric

When I try to use Fabric to deploy an Apache server remotely, I encounter a problem. I tried to add a new path to the PATH variable first using sudo(), then I tried to echo $PATH using sudo() too. However, I found that the new path apparently wasn't added to PATH at all. As a result, I cannot execute the binaries in that path via sudo().
[name#IP:port] Executing task 'reboot'
[name#IP:port] sudo: export PATH=$PATH:/new/path/to/add/install/bin
[name#IP:port] out: sudo password:
[name#IP:port] sudo: echo $PATH
[name#IP:port] out: sudo password:
[name#IP:port] out: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Could anyone tell me how to add a path variable to sudo command in Fabric? Thanks in advance.
1) It should be habit to always give a full path to the executable when running as root, to avoid having trojan horses pushed into your PATH.
2) Setting an environment variable via export works only for the current shell session - which is the one invoked by sudo. Once your command (export, in this case) is executed, the shell exits and takes your environment variable with it. The next time you execute sudo, a new shell (with the default environment) is set up, which knows nothing about your previous export.
3) The configuration file /etc/sudoers usually contains an entry like Defaults env_reset, the effect of which is that environment variables set in the calling environment are not copied to the environment invoked by sudo, so calling export in your current environment and then executing sudo does not work either. This is done for security reasons (ref. 1) above).
4) It is possible to set up /etc/sudoers to make exceptions to 3), via env_keep. Refer to man sudoers for details. However, see 1) - it is not a good idea.
5) There is the -E option to sudo, which allows keeping the caller's environment (including e.g. an extended PATH), but this requires SETENV being set in /etc/sudoers. Again, refer to man sudoers for details, and be mindful of 1).
Use
sudo('PATH=$PATH:/new/path/to/add/install/bin command')

Where to put environment variables when using nginx and Passenger on Ubuntu

I was trying to set up a system similar to Heroku, where I would store secret keys in environment variables and then access them from my Rails app like this:
secret = ENV['EMAIL_PASSWORD']
I know heroku lets you do heroku config:add EMAIL_PASSWORD=secret, and I wanted to do something like that for my own ubuntu box running nginx and Passenger.
Should I add these variables as exports in .bashrc or .bash_login so that on system reboot these variables are automatically set?
I'm not sure when each of those files gets read in.
You can use the dotenv gem, which loads the .env file as environment variables. You can generate a .env file for each environment; it need not be, and indeed should not be, checked into your repository.
Keep in mind that nginx may not be running under the same environment as you are, and usually (pronounced "Apache") we add env-vars in the server config file via SetEnv. However, nginx doesn't have such a feature... nor does it need one, I believe.
Use
sudo -E /usr/local/sbin/nginx
when running nginx for it to be aware of your own user env vars.
Or, check out the env command:
env EMAIL_PASSWORD=secret
To answer your question, yes, you should use export statements in your shell config files.
This is documented in nginx. It removes all environment variables except TZ when running the workers. If you want to add an environment variable, add the following to the top of the nginx configuration:
# The top of the configuration usually has things like:
user user-name;
pid pid-file-name;
# Add to this:
env VAR1=value1;
env VAR2=value2;
# OR simply add:
env VAR1;
# To inherit the VAR1 from whatever you set in bash
The normal export or anything you do in bash has no guarantee of getting passed on to nginx, due to the way the init scripts are written (we don't know if they're using sudo with a clean environment, etc). So I'd rather put these in the nginx configuration file itself, rather than depending on the shell to do it.
(This is probably overkill, but maybe it'll be useful.)
Some things to keep in mind:
Environment variables are somewhat public, and can be seen by other processes as easily as adding an option to the ps(1) command (like ps e $$ in bash) or looking at /proc/*/environ, though both are restricted at least to the same user (or root) on modern systems. Don't rely on them being secret if you have another fairly easy option available.
~/.bashrc is the wrong place for environment variables, since they can be computed once at login in ~/.bash_login, ~/.bash_profile, or ~/.profile, depending on your usage, and passed down to all descendent shells. In contrast, ~/.bashrc actions tend to be recomputed on every shell invocation (unless explicitly disabled).
Putting bash code in the ~/.profile can confuse other sh-descendent shells and non-shell tools which try to read that file, so having the bash-specific ~/.bash_login or ~/.bash_profile contain the bash-specific things, and using . ~/.profile for the more general things (LESS, EDITOR, VISUAL, LC_COLLATE, LS_COLORS, etc.), is friendlier to the other tools.
Environment variables in ~/.profile should be in the old Bourne shell form (VAR=value ; export VAR). On Linux, this isn't usually critical, though on other Unixen this can be a big issue when an older version of "sh" tries to read them.
Some X sessions will only read ~/.profile, not ~/.bash_login or the others mentioned above. Some will look for a ~/.xsession file, which will need to be modified to include . $HOME/.profile if it doesn't already.
System-wide settings would be put instead in something like /etc/profile.d/similar-to-heroku.sh. Note that the ".sh" is only present since the file will be used with "." or "source" - shell scripts should never have command-name extensions in any form of Unix/Linux.
Most environment variables get ditched when one sudos to root, as ybakos points out. Similar issues show up in crontabs, at jobs, etc. When in doubt, adding env | sort > /tmp/envvars or the like to a suspect script can really help in debugging.
Be aware some distributions have shell startup scripts so contorted they end up actually defying the order given in the bash(1) manual page. Anytime you find a default user ~/.profile checking for $BASH or $BASH_VERSION, you may be in one of these, um..., "interesting" environments, and may have to read through them to figure out where the control flow goes (they should be using a bash-specific ~/.bash_profile or ~/.bash_login, which includes the more generic ~/.profile by reference, thus letting the bash executable do the work instead of having to write $BASH checks in shell code).
~/.bash_profile (or ~/.bash_login) can certainly include . ~/.bashrc, but the environment variables belong in ~/.bash_profile (if bash-specific) or in the ~/.profile included from it (if you're using this mechanism and have envvars for everything else in there), as DeWitt says. Just remember to put the . ~/.bashrc AFTER .bash_profile's . ~/.profile and other environment variables, so that both login shells and all other invocations of ~/.bashrc can rely on the envvars already being set. An example ~/.bash_profile:
# .bash_profile
[ -r ~/.profile ] && . ~/.profile # envvars
[ -r ~/.bashrc ] && . ~/.bashrc # functions, per-tty settings, etc.
#---eof
The [ -r ... ] && ... works in any Bourne shell descendent and doesn't cause errors/aborts if the .profile is missing (I personally have a ~/.profile.d/*.sh setup as well, but this is left as an entirely optional exercise).
Note that bash only reads the first file of these three which it finds:
~/.bash_profile
~/.bash_login
~/.profile
...so once you have that one, the use of the other two is entirely under control of the user, from bash's perspective.
I put them in my nginx config, specifically in the server definition for the app using the passenger_env_var command:
server {
    server_name www.foo.com;
    root /webapps/foo/public;
    passenger_enabled on;
    passenger_env_var DATABASE_USERNAME foo_db;
    passenger_env_var DATABASE_PASSWORD secret;
    passenger_env_var SECRET_KEY_BASE the_secret_keybase;
}
This works for me. See the phusion passenger docs for more info.
I have a script in the /usr/local/bin folder that sets some env vars and then executes Ruby. In my (Apache, not Nginx) conf file I point the Ruby path at that file in /usr/local/bin.
example:
#!/bin/sh
# set up env vars here
export FOO=bar
export PATH_TO_FOO=/bar/bin
export PATH=$PATH:$PATH_TO_FOO
# and execute Ruby with any arguments passed to this script
exec "/usr/bin/ruby" "$@"
You should read this response to another question, it will help:
https://stackoverflow.com/a/11765775/1217298
Edited:
OK, sorry, I read it too fast. You can check how to save your ENV variables here:
https://help.ubuntu.com/community/EnvironmentVariables
http://www.cyberciti.biz/faq/set-environment-variable-linux/
If you use Nginx as the server on your local computer, you can define your env variable in your nginx config file.
location / {
    ...
    fastcgi_param EMAIL_PASSWORD secret; # EMAIL_PASSWORD = secret
    ...
}
I'm using rbenv as a version manager. A good solution for storing environment variables for the project was to install the rbenv-vars plugin and put them in a .rbenv-vars file.
Here is a useful post:
Deploying app ENV variables with Rbenv, Passenger and Capistrano
For those battling this who are using RVM: make sure that your default environments file includes your user's .bashrc and .profile files.
file: $rvm_path/environments/default
to find the path run this command:
ls -lah `whereis rvm`/environments/default
add these two lines before the first line in that file:
source $HOME/.bashrc
source $HOME/.profile
The best place to keep env variables for your project is /etc/profile.d/YOUR_FILE.sh.
The documentation explains in detail where to keep env variables for different scenarios.
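For example (the filename is arbitrary; the variable is the one from the question):
# /etc/profile.d/myapp.sh
export EMAIL_PASSWORD=secret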
In case anyone had the same type of question as I did, here's a nice little writeup about the different .bash* files: http://www.joshstaiger.org/archives/2005/07/bash_profile_vs.html
In summary:
For the most part:
.bash_profile is read when you log into the computer and .bashrc is read when you start a new terminal. For Mac OSX .bash_profile is read with every terminal window you start.
So, the recommended procedure is to source .bashrc from .bash_profile so all the variables are set when you login to the computer. Just add this to .bash_profile:
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
You have to add the export lines to your .profile file in your home folder...
Environment variables are set on login...

How to run Ruby and GIT commands in one place on Windows

I have Ruby and Git installed on my Windows box. To run Git commands I am using Git Bash. To run Ruby commands I am using the command line.
I have not been successful running Git commands from the CMD line, nor can I seem to run Ruby commands from inside Git Bash. I would love to be able to run commands for both Git and Ruby from Git Bash (ideal) or at least from the CMD line.
What is the best way to go about this?
I run git commands from the CMD session all the time.
Make sure your PATH environment variable includes the 'cmd' directory from a msysgit distro:
Path=C:\Path\To\Git\1.7.1\cmd
If not, add it in your session:
set PATH=%PATH%;C:\Path\To\Git\1.7.1\cmd
and you are done. Git and Ruby commands in your CMD shell.
The reverse (Ruby commands) is possible in Git Bash, by adding to the PATH a value like /c/path/to/Ruby/186-27/bin
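For example, inside a Git Bash session (the Ruby path is illustrative; adjust it to your install):
export PATH=$PATH:/c/Ruby186-27/bin
Putting that line in ~/.bashrc (or ~/.bash_profile) in your Git Bash home directory should make it stick across sessions.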
To elaborate on VonC's answer of making Ruby available in Git-Bash.
All you have to do is add the path to your Ruby bin folder in your windows environment variables. It doesn't have to be in the format /c/path/to/ruby, it can be C:\Ruby193\bin.
Step by step for Windows 7:
Start
Search programs and files (default textbox after hitting the Start icon), Search for 'environment'
Select 'edit the system environment variables'
Click 'Environment Variables' (bottom right of the form)
Add to the 'System Variables' 'PATH' the following ';C:\Ruby193\bin' (without the single quotes)
Restart your shell
Make sure to close your git-bash shell and restart it to pick up the new environment variable.
Go to My Computer -> Properties -> Advanced system settings ->
Environment Variables
Add a New System variable. Variable name = RUBY_BIN. Variable
value = C:\Ruby193\bin (path may vary).
Add a New System variable. Variable name = MSYSGIT_BIN. Variable
value = C:\msysgit\bin (path may vary).
Append ;%RUBY_BIN%;%MSYSGIT_BIN% to Path variable, under System variables.
Restart shell.
This will allow you to run ruby, git or sh (Git Bash) commands from Command Prompt, as well as Ruby from Git Bash.
My personal setup uses msysgit and tortoisegit. I've found that using TortoiseGit's Pageant manager for the keys lets me use everything from any command line, including powershell. The only annoyance is I have to have pageant running with the keys added, which seem to clear on every reboot. Fortunately I don't reboot often.
The combined answers from VonC and Rots helped me achieve the desired results.
However, since I was not familiar with editing environment variables, I must have accidentally overwritten the path for my Node.js files.
As a result, my solution involved adding the Ruby path and the Node.js path to my user variables instead of the system variables.
I'm using a Windows 7 machine.
While running the Ruby installer, select the option "Add Ruby executables to your PATH". Then all Git/Ruby commands will run from Git Bash.
