I'm trying to get a Jenkins job to run sfdx force:data:soql:query commands in order to migrate configuration data sets between our production org and our sandboxes after a refresh. Certain configurations do not persist through a refresh, so we need a way to move that data.
Running the queries from the command line on the Jenkins server works as expected; however, when the job runs it fails with the following error:
'C:\Program' is not recognized as an internal or external command, operable program or batch file.
Build step 'Execute shell' marked build as failure
The job does three things:
Authorizes to the DevHub, lists the connected orgs, and then performs a SOQL query to just print some data - 16 lines to be exact. Here are the commands in the shell script of the job:
sfdx force:auth:jwt:grant -i ${CONNECTED_APP_CONSUMER_KEY} -u ${DEV_HUB} -f ${JENKINS_HOME}/certs/prod/server.key -r [...] -a DevHub
sfdx force:org:list
sfdx force:data:soql:query -u ${DEV_HUB} -q "SELECT Id, Name FROM [...tablename...]" -r human
I am completely stumped on why this is happening. Again, running the SOQL command directly on the server through PowerShell or Command Line works as expected. I would appreciate any help with this.
This one stumped me for a long time but we finally got it figured out.
If you are seeing this error, check your machine's environment variables. I saw a ton of other answers pointing to this as the issue - the SFDX install path had spaces in it, as in C:\Program Files\SFDX\bin - but they only showed some weird command-line FOR loop that made no sense whatsoever.
What we did was completely uninstall all of SFDX, making sure none of it was left on the machine, and reinstall into a folder we made where there were no spaces in the path name.
Once we did that, our job worked like it was supposed to. I hope this helps others who run into this same issue.
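To make the failure mode concrete, here is a minimal sketch (paths hypothetical) of how an unquoted path with spaces breaks in a Windows shell step, and the two usual ways around it:

:: Fails: cmd splits the unquoted command at the space and tries to run 'C:\Program'
C:\Program Files\SFDX\bin\sfdx.cmd force:org:list

:: Works: quote the full path to the executable
"C:\Program Files\SFDX\bin\sfdx.cmd" force:org:list

:: Works: install to a path with no spaces, which is the fix we used
C:\SFDX\bin\sfdx.cmd force:org:list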
In a given Dockerfile, I want to set a variable based on the content of another ENV variable (which is injected into the container beforehand, or defined within the Dockerfile).
I'm looking at something like this:
FROM centos:7
ENV ENABLE_REMOTE_DEBUG "true"
ENV DEBUG_FLAG=""
RUN if [ "$ENABLE_REMOTE_DEBUG" = "true" ] ; then echo "set debug flag" ;export DEBUG_FLAG="some_flags"; else echo "remote debug not set" ; fi
RUN echo debug flags: ${DEBUG_FLAG}
## Use the debug flag in the Entrypoint : java jar ${DEBUG_FLAG} ...
The problem with this Dockerfile is that $DEBUG_FLAG is not properly set (or is not visible in the next line?), since the output is empty:
debug flags:
What am I missing here? (I would prefer not to call an external bash script.)
Let’s take a look at what’s going on when a Dockerfile is used to build a Docker image.
Each line is executed in a fresh container. The resulting container state, after the line has been interpreted, is saved in a temporary image and used to start a container for the next command.
These temp images do not save any state apart from the files on disk, Docker-specific properties like EXPOSE, and a few image settings. Temp images have the advantage of enabling fast subsequent builds through the cache.
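To see this concretely, here is a minimal sketch (variable name hypothetical) of how a shell export dies with its RUN line:

# Each RUN line below gets a fresh shell in a fresh container
RUN export FOO=bar        # FOO exists only in this RUN's shell
RUN echo "FOO is: $FOO"   # prints "FOO is: " - the export did not survive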
Coming to your question: if you want to do this using RUN instead of a shell script, here is a workaround:
RUN if [ "$ENABLE_REMOTE_DEBUG" = "true" ] ; then echo "set debug flag" ;echo 'export DEBUG_FLAG="some_flags"' >>/tmp/myenv; else echo "remote debug not set" ; fi
RUN source /tmp/myenv;echo debug flags: ${DEBUG_FLAG}
Since the image is based on CentOS, the default shell is bash, so sourcing your own environment file works fine. In other shells where source might not be available, reading the file with the POSIX . (dot) command will help.
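To actually consume the flag when the container starts, the same file can be sourced in the entrypoint. A hedged sketch, with the jar name hypothetical:

# Source the env file written above (if it exists), then exec the app with the flag
ENTRYPOINT ["/bin/bash", "-c", "[ -f /tmp/myenv ] && source /tmp/myenv; exec java $DEBUG_FLAG -jar app.jar"]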
You cannot set environment variables using export in a RUN instruction and expect them to be available in the next instruction. Only filesystem changes created by a RUN instruction are persisted; other things, like environment variables, are discarded.
You should move the logic to a shell script.
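A minimal sketch of that approach, with file and jar names hypothetical; the conditional moves from build time to container start time:

#!/bin/sh
# entrypoint.sh - compute the flag when the container starts
if [ "$ENABLE_REMOTE_DEBUG" = "true" ]; then
  DEBUG_FLAG="some_flags"
fi
exec java $DEBUG_FLAG -jar app.jar

And in the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]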
I'm trying to write (what I thought would be) a simple bash script that will:
run virtualenv to create a new environment at $1
activate the virtual environment
do some more stuff (install django, add django-admin.py to the virtualenv's path, etc.)
Step 1 works quite well, but I can't seem to activate the virtualenv. For those not familiar with virtualenv, it creates an activate file that activates the virtual environment. From the CLI, you run it using source
source $env_name/bin/activate
Where $env_name, obviously, is the name of the dir that the virtual env is installed in.
In my script, after creating the virtual environment, I store the path to the activate script like this:
activate="`pwd`/$ENV_NAME/bin/activate"
But when I call source "$activate", I get this:
/home/clawlor/bin/scripts/djangoenv: 20: source: not found
I know that $activate contains the correct path to the activate script, in fact I even test that a file is there before I call source. But source itself can't seem to find it. I've also tried running all of the steps manually in the CLI, where everything works fine.
In my research I found this script, which is similar to what I want but is also doing a lot of other things that I don't need, like storing all of the virtual environments in a ~/.virtualenv directory (or whatever is in $WORKON_HOME). But it seems to me that he is creating the path to activate, and calling source "$activate" in basically the same way I am.
Here is the script in its entirety:
#!/bin/sh
PYTHON_PATH=~/bin/python-2.6.1/bin/python
if [ $# = 1 ]
then
ENV_NAME="$1"
virtualenv -p $PYTHON_PATH --no-site-packages $ENV_NAME
activate="`pwd`/$ENV_NAME/bin/activate"
if [ ! -f "$activate" ]
then
echo "ERROR: activate not found at $activate"
return 1
fi
source "$activate"
else
echo 'Usage: djangoenv ENV_NAME'
fi
DISCLAIMER: My bash script-fu is pretty weak. I'm fairly comfortable at the CLI, but there may well be some extremely stupid reason this isn't working.
If you're writing a bash script, call it by name:
#!/bin/bash
/bin/sh is not guaranteed to be bash. This caused a ton of broken scripts in Ubuntu some years ago (IIRC).
The source builtin works just fine in bash; but you might as well just use dot like Norman suggested.
In the POSIX standard, which /bin/sh is supposed to respect, the command is . (a single dot), not source. The source command is a csh-ism that has been pulled into bash.
Try
. $env_name/bin/activate
Or if you must have non-POSIX bash-isms in your code, use #!/bin/bash.
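Applied to the script in the question, either fix is a one-line change; a minimal sketch:

# Option 1: keep the #!/bin/sh shebang and use the POSIX dot command
. "$activate"
# Option 2: change the shebang to #!/bin/bash and keep using 'source'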
On Ubuntu, if you execute the script with sh scriptname.sh, you get this problem.
Try executing the script with ./scriptname.sh instead.
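A quick way to see the difference (script name is this answer's placeholder):

sh scriptname.sh    # forces /bin/sh (dash on Ubuntu), where 'source' is not found
./scriptname.sh     # honors the script's own shebang, so bash's 'source' works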
It's best to add the full path of the file you intend to source, e.g.
source ./.env instead of source .env
or source /var/www/html/site1/.env
Is it possible to reload $PATH from a Chef recipe?
I'm interested in the response about process signals given in the following thread:
How to have Chef reload global PATH
I don't quite understand the example that user omribahumi gives.
I would like a clearer example with chef-client / a recipe to understand it.
From what he explains, it seems it is possible with that workaround.
Thanks.
Well I see two reasons for this request:
Add something to the path for immediate execution => easy, just update the ENV['PATH'] variable within the Chef run.
Extend the PATH system-wide to include something just installed.
For the second, you may update the /etc/environment file (on Ubuntu) or add a file to /etc/profile.d (a better idea, to keep control over it).
But obviously the new PATH variable won't be available to already-running processes (including your current shell); it will only apply to processes launched after the file update.
To explain the link you gave a little more, what it does is:
create a file with export commands to set env variables
echo 'export MYVAR="my value"' > ~/my_environment
create a bash function loading env vars from a file
function reload_environment { source ~/my_environment; }
set a trap in bash to do something on a signal; here, run the function when bash receives SIGHUP
trap reload_environment SIGHUP
Launch the function for a first sourcing of the env file; there are two ways:
easy one: launch the function
reload_environment
complex one: get the PID of your current shell and send it a SIGHUP signal
kill -HUP `echo $$`
All of this applies only to the current shell unless you set it up in your .bashrc.
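Putting the pieces together, a sketch of what would go in ~/.bashrc (the ~/my_environment file name is taken from the steps above):

# ~/.bashrc (excerpt)
reload_environment() { [ -f ~/my_environment ] && . ~/my_environment; }
trap reload_environment SIGHUP   # re-read the env file whenever SIGHUP arrives
reload_environment               # initial load for each new shell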
Not exactly what you were asking for indeed, but I hope you'll understand there's no way to update the environment of an already-running process.
The best you can do is update the PATH with whatever method you wish (something in /etc/profile.d, for example) and execute a wall (if Chef runs as root) to tell users to reload their environments:
echo 'reload your shell env by executing: source /etc/profile' | wall
Once again, this can work for humans, but not for other processes already running; those will have to be restarted.
I am running Jenkins as user jenkins, which has $PATH set to something, but when I go into the Jenkins web interface, in the System Properties window (http://$host/systemInfo) I see a different $PATH.
I have installed Jenkins on CentOS with the native RPM from the Jenkins website. I am using the startup script provided with the installation: sudo /etc/init.d/jenkins start
Can anyone please explain to me why that happens?
Michael,
Two things:
When Jenkins connects to a computer, it uses the sh shell, not the bash shell (at least this is what I have noticed - I may be wrong). So any changes you make to $PATH in your .bashrc file are not considered.
Also, any changes you make to $PATH in your local shell (one that you personally ssh into) will not show up in Jenkins.
To change the path that Jenkins uses, you have two options (AFAIK):
1) Edit your /etc/profile file and add the paths that you want there
2) Go to the configuration page of your slave, and add environment variable PATH, with value: $PATH:/followed-by/paths/you/want/to/add
If you use the second option, your System Information will still not show it, but your builds will see the added paths.
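For option 1, the addition to /etc/profile would look something like this (directory hypothetical):

# appended to /etc/profile so that new login shells pick up the extra directory
export PATH=$PATH:/opt/yourtool/bin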
I kept running into this problem, but now I just add:
source /etc/profile
as the first step in my build process. Now the environment my subsequent steps need is loaded, and Jenkins operates smoothly.
You can also edit the /etc/sysconfig/jenkins file to make any changes to the environment variables, etc. I simply added source /etc/profile to the end of the file; /etc/profile has all of the proper PATH variables set up. When you do this, make sure you restart Jenkins:
/etc/init.d/jenkins restart
We are running ZendServer CE, which installs pear, phing, etc. in a different path, so this was helpful. Also, we no longer get the LD_LIBRARY_PATH errors we used to get with the Oracle client and Jenkins.
I tried /etc/profile, ~/.profile and ~/.bash_profile and none of those worked. I found that editing ~/.bashrc for the jenkins slave account did.
The information in this answer is out of date. You need to go to Configure Jenkins, and you can then click to add an environment variable key-value pair from there.
E.g. for export MYVAR=test: MYVAR is the key, and test is the value.
I found two plugins for that.
One loads the values from a file and the other lets you configure the values in the job configuration screen.
Envfile Plugin — This plugin enables you to set environment variables via a file. The file's format must be the standard Java property file format.
EnvInject Plugin — This plugin makes it possible to add environment variables and execute a setup script in order to set up an environment for the Job.
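For the Envfile plugin, that file is plain Java property format; a hedged sketch with hypothetical names and values:

# env.properties - one KEY=value pair per line, no 'export' keyword
PATH=/usr/local/bin:/usr/bin:/bin
DEPLOY_ENV=staging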
On my newer EC2 instance, simply adding the new value to PATH in the Jenkins user's .profile and then restarting Tomcat worked for me.
On an older instance where the config is different, using #2 from Sagar's answer was the only thing that worked (i.e. .profile, .bash* didn't work).
Couldn't you just add it as an environment variable in Jenkins settings:
Manage Jenkins -> Global properties > Environment variables:
And then click "Add" to add a property named PATH with its value set to what you need.
This is how I solved this annoying issue:
I changed the PATH variable as #sagar suggested in his 2nd option, but I still got a different PATH value than I expected.
Eventually I found out that it was the EnvInject plugin that replaced my PATH variable!
So I could either uninstall EnvInject or just use it to inject the PATH variable.
As many of our Jenkins jobs use that plugin, I didn't want to uninstall it...
So I created a file: environment_variables.properties under my Jenkins home directory.
This file contained the path environment value that I needed:
PATH=$PATH:/usr/local/git/bin
From the Jenkins web interface: Manage Jenkins -> Configure System.
In that screen - I ticked the Prepare jobs environment option, and in the Properties File Path field I entered the path to my file: /var/lib/jenkins/environment_variables.properties.
This way, every Jenkins job we have receives whatever variables I put in this environment_variables.properties file.
Jenkins also supports the format PATH+<name> for prepending, and this works for any variable, not only PATH:
Global Environment variables or node Environment variables:
This is also supported in the pipeline step withEnv:
node {
withEnv(['PATH+JAVA=/path/to/java/bin']) {
...
}
}
Just take note that it prepends to the variable. If it must be appended, you need to do what the other answers show (a sketch follows below).
You may also use the syntax PATH+WHATEVER=/something to prepend /something to $PATH.
See the pipeline steps document here, or the Java docs on EnvVars here.
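A minimal sketch of appending instead of prepending, by expanding the current value explicitly (directory hypothetical):

node {
    // Groovy interpolation reads the agent's current PATH and appends to it
    withEnv(["PATH=${env.PATH}:/opt/mytool/bin"]) {
        sh 'echo $PATH'
    }
}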
I only had progress on this issue after a "/etc/init.d/jenkins force-reload". I recommend trying that before anything else, and using that rather than restart.
On my Ubuntu 13.04, I tried quite a few tweaks before succeeding with this:
Edit /etc/init/jenkins.conf
Locate the spot where "exec start-stop-server..." begins
Insert the environment update just before that, i.e.
export PATH=$PATH:/some/new/path/bin
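Put together, the edited section would look roughly like this (a placement sketch; the export line is the one from the steps above):

# /etc/init/jenkins.conf (excerpt)
export PATH=$PATH:/some/new/path/bin   # the added line
exec start-stop-server ...             # the existing line the steps refer to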
Add /usr/bin/bash at Jenkins -> Manage Jenkins -> Configure System -> Shell -> Shell executable.
Jenkins uses sh by default, so even /etc/profile didn't work for me.
After I added this, I had the full environment.
Solution that worked for me
source ~/.bashrc
Explanation
I first verified Jenkins was running bash, with echo $SHELL and echo $BASH (note I'm explicitly putting #!/bin/bash atop the textarea in Jenkins; I'm not sure if that's a requirement to get bash). Sourcing /etc/profile as others suggested was not working.
Looking at /etc/profile I found
if [ "$PS1" ]; then
...
and on inspecting "$PS1" I found it null. I tried spoofing $PS1, to no avail, like so:
export PS1=1
bash -c 'echo $PATH'
However, this did not produce the desired result (adding the rest of the $PATH I expected to see). But if I tell bash to be interactive:
export PS1=1
bash -ci 'echo $PATH'
the $PATH was altered as I expected.
I was trying to figure out how to properly spoof an interactive shell to get /etc/bash.bashrc to load, but it turns out all I needed was down in ~/.bashrc, so simply sourcing it solved the problem.
I tried all the things from above - none worked for me.
I found two solutions (both for an SSH slave).
Option I (slave settings):
Go to the slave settings
Add a new environment variable
PATH
${PATH}:${HOME}/.pub-cache/bin:${HOME}/.local/bin
The "${HOME}" part is important; it makes the additional PATH absolute.
A relative path did not work for me.
Option II (pipeline-script)
pipeline {
agent {
label 'your-slave'
}
environment {
PATH = "/home/jenkins/.pub-cache/bin:$PATH"
}
stages {
stage('Test') {
steps {
ansiColor('xterm') {
echo "PATH is: $PATH"
}
}
}
}
}
On Ubuntu I just edited /etc/default/jenkins and added source /etc/profile at the end, and it works for me.
Running the command with the environment variable set is also effective. Of course, you have to do it for each command you run, but you probably have a job script, so you probably have only one command per build. My job script is a Python script that uses the environment to decide which Python to use, so I still needed to put /usr/local/bin/python2.7 in its path:
PATH=/usr/local/bin <my-command>
What worked for me was overriding the PATH environment for the slave.
Set: PATH
To: $PATH:/usr/local/bin
Then disconnecting and reconnecting the slave.
Despite what the system information page was showing, it worked.
I have Jenkins 1.639 installed on SLES 11 SP3 via zypper (the package manager).
The installation configured Jenkins as a service:
# service jenkins
Usage: /etc/init.d/jenkins {start|stop|status|try-restart|restart|force-reload|reload|probe}
Although /etc/init.d/jenkins sources /etc/sysconfig/jenkins, any env variables set there are not inherited by the jenkins process because it is started in a separate login shell with a new environment like this:
startproc -n 0 -s -e -l /var/log/jenkins.rc -p /var/run/jenkins.pid -t 1 /bin/su -l -s /bin/bash -c '/usr/java/default/bin/java -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --javaHome=/usr/java/default --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --httpPort=8080 --ajp13Port=8009 --debug=9 --handlerCountMax=100 --handlerCountMaxIdle=20 &' jenkins
The way I managed to set env vars for the jenkins process is via .bashrc in its home directory - /var/lib/jenkins. I had to create /var/lib/jenkins/.bashrc as it did not exist before.
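The file itself is just an ordinary rc file; a sketch of the kind of content that goes in it (directory hypothetical):

# /var/lib/jenkins/.bashrc - environment for the jenkins service process
export PATH=$PATH:/usr/local/bin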
1 - Add to your profile file ".bash_profile" (it is in the "/home/your_user/" folder):
vi .bash_profile
Add:
export JENKINS_HOME=/apps/data/jenkins
export PATH=$PATH:$JENKINS_HOME
==> this is the Jenkins workspace.
2 - If you use Jetty:
go to the jenkins.xml file
and add:
<Arg>/apps/data/jenkins</Arg>
Here is what I did on Ubuntu 18.04 LTS with Jenkins 2.176.2.
I created a .bash_aliases file and put the PATH, proxy variables, and so on there.
At the beginning of .bashrc, this was defined:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
So it checks: if we are starting a non-interactive shell, nothing here is executed.
At the bottom of .bashrc there was an include for .bash_aliases:
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
So I moved the .bash_aliases loading to earlier in .bashrc, just above the non-interactive check.
This didn't work at first, but then I disconnected the slave and re-connected it so it loaded the variables again. You don't need to restart the whole Jenkins instance if you are modifying slave variables; just disconnect and re-connect the slave.
If your pipeline is executed on a remote node connected via SSH, then Jenkins actually runs an agent application that performs the incoming actions.
By default the zsh shell is used, not bash (my Jenkins is version 2.346.3).
Furthermore, jenkins-agent runs a non-login shell, which means you get the default PATH values; even if you put some configuration in .zshrc, it will be skipped.
My choice is to put the following shebang at the start of the script:
#!/bin/bash -l
The -l option makes bash run in login mode, in which case bash applies the configuration specified in /etc/profile and ~/.bash_profile.
If you run the script in a Jenkins pipeline, it will look like this:
steps {
sh '''#!/bin/bash -l
env
'''
}