Ansible: How to globally set PATH for Solaris

I am writing Ansible playbooks to set up and install our applications on Solaris servers.
The problem is that the (bash) scripts which I need to execute all assume that a certain directory, namely /data/bin, lies on the PATH - which would normally not be a problem, were it not for Ansible ignoring all of the .profile and .bashrc configuration.
Now, I know that you can specify the environment for shell tasks via the environment flag, for example like this:
- shell: printenv
  environment:
    PATH: /usr/bin:/usr/sbin:/data/bin
This correctly puts the /data/bin folder on the PATH, and the printenv command displays it (or my bash scripts run correctly).
There are two problems, however:
First of all, it is very annoying to have to specify the environment over and over again. I know that you can define the environment in some playbook base file variable and then reference that, but you still have to set environment: ... on every single shell task.
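For illustration, a minimal sketch of what I mean (the vars file location and the script name are just examples):

# group_vars/all.yml
solaris_env:
  PATH: /usr/bin:/usr/sbin:/data/bin

# ...and then, again, on every single task:
- shell: /data/bin/run_app.sh
  environment: "{{ solaris_env }}"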
Secondly, the above example does not allow me to specify the path dynamically, e.g. as PATH: $PATH:/data/bin - because Ansible does not resolve $PATH when it executes the task, so the command fails catastrophically. Essentially, this means the hard-coded value overrides any other changes to PATH.
I am looking for a solution where:
the additional PATH entry should only be added once
the additional PATH entry should not override entries added by other tasks
P.S. I found this nice explanation on how to do this on Linux, but it makes use of /etc/environment which does not exist on Solaris. (And /etc/profile is once again ignored by Ansible.)

Try adding -o SendEnv=PATH to ssh_args in ansible.cfg. This requires that:
the shell in which you run Ansible has /data/bin in its PATH (or however else Ansible allows you to modify the current/local PATH variable), and
the remote machine has AcceptEnv set correctly.
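For illustration, a rough sketch of the two pieces this involves (your existing ssh_args will likely differ, so extend rather than replace them):

# ansible.cfg on the control machine
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o SendEnv=PATH

# /etc/ssh/sshd_config on the Solaris host
AcceptEnv PATH

Restart the SSH daemon after changing sshd_config so that AcceptEnv takes effect.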

Related

How to solve the problem of environment variables configured with modules being lost when starting tmux?

I'm trying to use the modules program to configure my Linux machine's environment variables. I added the following command to my .bashrc to set the environment variables:
module load gcc/5.5
I expect the above command to add gcc5.5/bin to $PATH. If I open a new terminal, gcc5.5/bin is in $PATH, but if I start tmux, gcc5.5/bin is not added.

RHEL - Environment variable

I have an environment file named .env337_dev. I need to run this file to set the environment before running another command. How do I run this file?
Inside, the file contains several variables like this:
export AB_HOME=/et/dev/abinitio/sit1/abinitio-V2 #/gcc3p32 # for 32-bit
export PATH=${AB_HOME}/bin:${PATH}
Apart from the . ./.env337_dev command, which will run it and set the environment, is there any other way to run this file?
Are you looking for the user-specific .bashrc (bash is the default shell on RHEL 6) or a system-wide /etc/profile.d/<something>.sh? For the first, you would edit $HOME/.bashrc and append a line like . .env337_dev (it is still run before any "regular" command, because .bashrc is Bash's standard personal initialization file). For the second option, source the file by an absolute path.
If this doesn't answer your question, a more specific question and/or more details would be very helpful.
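To make the two options concrete, a minimal sketch (the profile.d file name is just an example, and the env file is assumed to live in your home directory):

# Option 1: user-specific - append to $HOME/.bashrc
. $HOME/.env337_dev

# Option 2: system-wide - e.g. /etc/profile.d/abinitio-env.sh, sourcing the file by absolute path
. /home/your_user/.env337_dev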
You tagged this ab-initio, so you should only be setting a few environment variables, including:
export AB_HOME=<path-to-co>operating-system>
export PATH=$AB_HOME/bin:$PATH
If you are working with Ab Initio web applications:
export AB_APPLICATION_HUB=<path-to-application-hub>
export JAVA_HOME=<path-to-jdk>
export PATH=$JAVA_HOME/bin:$PATH
and specific settings for different applications, e.g.
export AB_MHUB_HOME=<path-to-metadata-hub-installation>
Typically you put those into the file .profile in your home directory, which shells evaluate for login sessions.

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say it explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode, then all my environment specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time when using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder together with Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk.
To see the result, execute the command "docker inspect CONTAINER_ID" inside the EC2 instance and you will see the environment variable.
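The uploaded archive would then look roughly like this (the .config file name is only an example):

application.zip
  .ebextensions/
    env-vars.config
  Dockerrun.aws.json
  Dockerfile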
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file, or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
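For the Python case, a minimal sketch of loading that file (the variable name is only an example):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv("/var/app/current/.env")          # loads the KEY=VALUE pairs into os.environ
app_env = os.environ.get("APP_ENVIRONMENT")   # e.g. "staging" or "production"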
I don't think the docs are a miss as Rohit Banga's answer suggests, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python) etc ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.

How to add PATH variable to sudo in Fabric

When I try to use Fabric to deploy an Apache server remotely, I encounter a problem. I tried to add a new path to the PATH variable first using sudo(), then I tried to echo $PATH using sudo() too. However, I found that the new path apparently wasn't added to PATH at all. As a result, I cannot execute the binaries in that path via sudo().
[name#IP:port] Executing task 'reboot'
[name#IP:port] sudo: export PATH=$PATH:/new/path/to/add/install/bin
[name#IP:port] out: sudo password:
[name#IP:port] sudo: echo $PATH
[name#IP:port] out: sudo password:
[name#IP:port] out: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Could anyone tell me how to add a path variable to sudo command in Fabric? Thanks in advance.
1) It should be a habit to always give the full path to the executable when running as root, to avoid having trojan horses pushed into your PATH.
2) Setting an environment variable via export works only for the current shell session - which is the one invoked by sudo. Once your command (export, in this case) is executed, the shell exits, and takes your environment variable with it. The next time you execute sudo, a new shell (with a default environment) is set up, which knows nothing about your previous export.
3) The configuration file /etc/sudoers usually contains an entry like Defaults env_reset, the effect of which is that environment variables set in the calling environment are not copied to the environment invoked by sudo, so calling export in your current environment and then executing sudo does not work either. This is done for security reasons (see 1) above).
4) It is possible to set up /etc/sudoers to make exceptions to 3), via env_keep. Refer to man sudoers for details. However, see 1) - it is not a good idea.
5) There is the -E option to sudo, which allows keeping the caller's environment (including e.g. an extended PATH), but this requires SETENV being set in /etc/sudoers. Again, refer to man sudoers for details, and be mindful of 1).
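For reference, the sudoers directives mentioned in 3) and 4) look roughly like this (illustrative only - always edit /etc/sudoers with visudo, and keep 1) in mind):

Defaults env_reset
Defaults env_keep += "PATH"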
Use
sudo('PATH=$PATH:/new/path/to/add/install/bin command')

Jenkins / Hudson environment variables

I am running Jenkins from the user jenkins, which has $PATH set to something, but when I go into the Jenkins web interface, in the System Properties window (http://$host/systemInfo) I see a different $PATH.
I have installed Jenkins on CentOS with the native rpm from the Jenkins website. I am using the startup script provided with the installation: sudo /etc/init.d/jenkins start
Can anyone please explain to me why that happens?
Michael,
Two things:
When Jenkins connects to a computer, it goes to the sh shell, and not the bash shell (at least this is what I have noticed - I may be wrong). So any changes you make to $PATH in your bashrc file are not considered.
Also, any changes you make to $PATH in your local shell (one that you personally ssh into) will not show up in Jenkins.
To change the path that Jenkins uses, you have two options (AFAIK):
1) Edit your /etc/profile file and add the paths that you want there
2) Go to the configuration page of your slave, and add environment variable PATH, with value: $PATH:/followed-by/paths/you/want/to/add
If you use the second option, your System Information will still not show it, but your builds will see the added paths.
I kept running into this problem, but now I just add:
source /etc/profile
As the first step in my build process. Now all my subsequent rules are loaded for Jenkins to operate smoothly.
You can also edit the /etc/sysconfig/jenkins file to make any changes to the environment variables, etc. I simply added source /etc/profile to the end of the file. /etc/profile has all of the proper PATH variables set up. When you do this, make sure you restart Jenkins:
/etc/init.d/jenkins restart
We are running ZendServer CE which installs pear, phing, etc in a different path so this was helpful. Also, we don't get the LD_LIBRARY_PATH errors we used to get with Oracle client and Jenkins.
I tried /etc/profile, ~/.profile and ~/.bash_profile and none of those worked. I found that editing ~/.bashrc for the jenkins slave account did.
The information in this answer is out of date. You need to go to Configure Jenkins, and you can then click to add an environment variable key-value pair from there.
E.g. for export MYVAR=test, MYVAR is the key and test is the value.
I found two plugins for that.
One loads the values from a file and the other lets you configure the values in the job configuration screen.
Envfile Plugin — This plugin enables you to set environment variables via a file. The file's format must be the standard Java property file format.
EnvInject Plugin — This plugin makes it possible to add environment variables and execute a setup script in order to set up an environment for the Job.
On my newer EC2 instance, simply adding the new value to the Jenkins user's .profile's PATH and then restarting tomcat worked for me.
On an older instance where the config is different, using #2 from Sagar's answer was the only thing that worked (i.e. .profile, .bash* didn't work).
Couldn't you just add it as an environment variable in Jenkins settings:
Manage Jenkins -> Global properties > Environment variables:
And then click "Add" to add a property PATH and set its value to what you need.
This is how I solved this annoying issue:
I changed the PATH variable as #sagar suggested in his 2nd option, but I still got a different PATH value than I expected.
Eventually I found out that it was the EnvInject plugin that replaced my PATH variable!
So I could either uninstall EnvInject or just use it to inject the PATH variable.
As many of our Jenkins jobs use that plugin, I didn't want to uninstall it...
So I created a file: environment_variables.properties under my Jenkins home directory.
This file contained the path environment value that I needed:
PATH=$PATH:/usr/local/git/bin/.
From the Jenkins web interface: Manage Jenkins -> Configure System.
In that screen - I ticked the Prepare jobs environment option, and in the Properties File Path field I entered the path to my file: /var/lib/jenkins/environment_variables.properties.
This way every Jenkins job we have receive whatever variables I put in this environment_variables.properties file.
Jenkins also supports the format PATH+<name> to prepend to any variable, not only PATH:
This works for global environment variables as well as node environment variables.
This is also supported in the pipeline step withEnv:
node {
  withEnv(['PATH+JAVA=/path/to/java/bin']) {
    ...
  }
}
Just take note, it prepends to the variable. If it must be appended you need to do what the other answers show.
See the pipeline steps document here, or the Java docs on EnvVars here.
You may also use the syntax PATH+WHATEVER=/something to prepend /something to $PATH.
I only had progress on this issue after a "/etc/init.d/jenkins force-reload". I recommend trying that before anything else, and using that rather than restart.
On my Ubuntu 13.04, I tried quite a few tweaks before succeeding with this:
Edit /etc/init/jenkins.conf
Locate the spot where "exec start-stop-server..." begins
Insert the environment update just before that, i.e.
export PATH=$PATH:/some/new/path/bin
Add /usr/bin/bash at Jenkins -> Manage Jenkins -> Configure System -> Shell -> Shell executable.
Jenkins uses sh, so even /etc/profile doesn't work for me.
When I add this, I have all the environment variables.
Solution that worked for me
source ~/.bashrc
Explanation
I first verified Jenkins was running BASH, with echo $SHELL and echo $BASH (note I'm explicitly putting #!/bin/bash atop the textarea in Jenkins; I'm not sure if that's a requirement to get BASH). Sourcing /etc/profile as others suggested was not working.
Looking at /etc/profile I found
if [ "$PS1" ]; then
...
and inspecting "$PS1" I found it null. I tried spoofing $PS1, to no avail, like so:
export PS1=1
bash -c 'echo $PATH'
however this did not produce the desired result (adding the rest of the $PATH I expected to see). But if I tell bash to be interactive:
export PS1=1
bash -ci 'echo $PATH'
the $PATH was altered as I expected.
I was trying to figure out how to properly spoof an interactive shell to get /etc/bash.bashrc to load; however, it turns out all I needed was down in ~/.bashrc, so simply sourcing it solved the problem.
I tried all the things from above - they didn't work for me. I found two solutions (both for an SSH slave).
Option I (slave settings)
Go to the slave settings
Add a new environment variable
PATH
${PATH}:${HOME}/.pub-cache/bin:${HOME}/.local/bin
The "${HOME}" part is important: it makes the additional PATH absolute.
A relative path did not work for me.
Option II (pipeline script)
pipeline {
  agent {
    label 'your-slave'
  }
  environment {
    PATH = "/home/jenkins/.pub-cache/bin:$PATH"
  }
  stages {
    stage('Test') {
      steps {
        ansiColor('xterm') {
          echo "PATH is: $PATH"
        }
      }
    }
  }
}
On Ubuntu I just edit /etc/default/jenkins, add source /etc/profile at the end, and it works for me.
Running the command with the environment variable set is also effective. Of course, you have to do it for each command you run, but you probably have a job script, so you probably only have one command per build. My job script is a Python script that uses the environment to decide which Python to use, so I still needed to put /usr/local/bin/python2.7 on its path:
PATH=/usr/local/bin <my-command>
What worked for me was overriding the PATH environment for the slave.
Set: PATH
To: $PATH:/usr/local/bin
Then disconnecting and reconnecting the slave.
Despite what the system information was showing, it worked.
I have Jenkins 1.639 installed on SLES 11 SP3 via zypper (the package manager).
The installation configured Jenkins as a service:
# service jenkins
Usage: /etc/init.d/jenkins {start|stop|status|try-restart|restart|force-reload|reload|probe}
Although /etc/init.d/jenkins sources /etc/sysconfig/jenkins, any env variables set there are not inherited by the jenkins process because it is started in a separate login shell with a new environment like this:
startproc -n 0 -s -e -l /var/log/jenkins.rc -p /var/run/jenkins.pid -t 1 /bin/su -l -s /bin/bash -c '/usr/java/default/bin/java -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --javaHome=/usr/java/default --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=8009 --debug=9 --handlerCountMax=100 --handlerCountMaxIdle=20 &' jenkins
The way I managed to set env vars for the jenkins process is via .bashrc in its home directory - /var/lib/jenkins. I had to create /var/lib/jenkins/.bashrc as it did not exist before.
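For illustration, the kind of content that file could hold (a minimal sketch; the added directory is just an example):

# /var/lib/jenkins/.bashrc
export PATH=$PATH:/usr/local/bin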
1- Add to your profile file .bash_profile (it is in the /home/your_user/ folder):
vi .bash_profile
add:
export JENKINS_HOME=/apps/data/jenkins
export PATH=$PATH:$JENKINS_HOME
==> this is the Jenkins workspace
2- If you use Jetty:
go to the jenkins.xml file
and add:
<Arg>/apps/data/jenkins</Arg>
Here is what I did on Ubuntu 18.04 LTS with Jenkins 2.176.2.
I created a .bash_aliases file and added the path, proxy variables and so on there.
At the beginning of .bashrc this was defined:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
So it checks whether we are starting a non-interactive shell, and if so it does nothing more here.
At the bottom of .bashrc there was an include for .bash_aliases:
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
So I moved the .bash_aliases loading earlier in .bashrc, just above the non-interactive check.
This didn't work at first, but then I disconnected the slave and re-connected it so it loaded the variables again. You don't need to restart the whole Jenkins instance if you are modifying slave variables; just disconnect and re-connect.
If your pipeline is executed on a remote node that is connected via SSH, then Jenkins actually runs an agent application that performs the incoming actions.
By default the zsh shell is used, not bash (my Jenkins is version 2.346.3).
Furthermore, the Jenkins agent runs a non-login shell, which leaves you with the default PATH values: even if you put some configuration in .zshrc, it will be skipped.
My choice is to put the following shebang at the start of the script:
#!/bin/bash -l
The -l option makes bash run in login mode, in which case bash applies the configuration specified in /etc/profile and ~/.bash_profile.
If you run script in Jenkins pipeline it will look like:
steps {
  sh '''#!/bin/bash -l
    env
  '''
}
