Jenkins Job to run SOQL query

I'm trying to get a Jenkins job to run sfdx force:data:soql:query commands in order to migrate configuration data sets between our production org and our sandboxes after a refresh. Certain configurations do not persist on a refresh so we need a way to move that data.
Running the queries from the command line on the Jenkins server works as expected; however, when the job runs it fails with the following error:
'C:\Program' is not recognized as an internal or external command, operable program or batch file.
Build step 'Execute shell' marked build as failure
The job does three things:
Authorizes to the DevHub, lists the connected orgs, and then performs a SOQL query that just prints some data (16 lines, to be exact). Here are the commands in the shell script of the job:
sfdx force:auth:jwt:grant -i ${CONNECTED_APP_CONSUMER_KEY} -u ${DEV_HUB} -f ${JENKINS_HOME}/certs/prod/server.key -r [...] -a DevHub
sfdx force:org:list
sfdx force:data:soql:query -u ${DEV_HUB} -q "SELECT Id, Name FROM [...tablename...]" -r human
I am completely stumped on why this is happening. Again, running the SOQL command directly on the server through PowerShell or Command Line works as expected. I would appreciate any help with this.

This one stumped me for a long time but we finally got it figured out.
If you are seeing this error, make sure to check your machine's environment variables. I saw a ton of other answers pointing to the same root cause, namely that the SFDX install path had spaces in it, as in C:\Program Files\SFDX\bin, but they only showed some odd command-line FOR loop that made no sense whatsoever.
What we did was completely uninstall SFDX, making sure none of it was left on the machine, and reinstall it into a folder we created whose path contained no spaces.
Once we did that, our job worked like it was supposed to. I hope this helps others who run into the same issue.
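If reinstalling to a space-free path is not an option, quoting the full path to the executable in the build step should also stop cmd.exe from splitting the command at the space in C:\Program Files. A minimal sketch for an "Execute Windows batch command" step; the install location and the sfdx.cmd filename here are assumptions, so adjust them to your agent:
rem Hypothetical install location; the quotes keep cmd.exe from splitting at the space.
"C:\Program Files\SFDX\bin\sfdx.cmd" force:org:list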

Related

How to run repo from a script inside a container in a jenkins job

I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user-name and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True.
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any Jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with its standard streams, you could run it simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would if you ran it simply as repo_script.sh. By piping standard output and error to a different process, the file descriptors appear to repo_script.sh as pipes rather than TTYs. You could also direct output to a file, or even to /dev/null if you do not care about the output:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association between the TTY and standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be subtle differences that could bite you in unexpected ways. This way also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; just using output redirections within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to test only whether standard input is a TTY. It looks to me like the author of that script did not think it through: there is simply no point waiting for input if there is no terminal device associated with standard input, so from that alone the script could determine that everything needs to run without user interaction, or stop with an error if that is not possible.
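To confirm that these redirections actually break the TTY association inside the container, you can re-run the isatty probe from the question with the same redirections applied (a quick sanity check, not part of the fix itself):
python -c "import os; print os.isatty(0), os.isatty(1)" < /dev/null 2>&1 | cat
# prints: False False, even when docker launched the container with --tty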

PsExec is not recognized as an internal or external command

I have a job that needs to run a script on a remote computer. I'm doing so by using psexec via "Execute windows batch command":
C:\PsExec.exe \\computername -u username -p password -accepteula c:\xxx.exe
When I run the job I get the following error:
c:\PsExec.exe is not recognized as an internal or external command
** PsExec.exe is located under c:\
Any ideas?
First, define the psexec.exe path in the "PATH" environment variable, or else place the psexec.exe file in C:\Windows\System32\.
PsExec.exe can be downloaded here:
https://download.sysinternals.com/files/PSTools.zip
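If you prefer not to copy the file into System32, a sketch of the "Execute windows batch command" step that prepends the PsExec folder to PATH for the duration of the build (C:\PSTools is an assumed unzip location for PSTools.zip):
rem Hypothetical unzip location; prepend it to PATH for this build only.
set PATH=C:\PSTools;%PATH%
PsExec.exe \\computername -u username -p password -accepteula c:\xxx.exe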
One possible explanation is the version of PsExec.exe: 32-bit or 64-bit.
If you have the 32-bit one on a 64-bit machine, that command would indeed not be recognized. PsExec64.exe would be.
I can see the age of this question and my answer may not be relevant to this topic since I was technically trying to solve a different problem, but maybe this will help other people who are stuck.
c:\PsExec.exe is not recognized as an internal or external command
I was trying to disable the Maintenance Configurator with PsExec (my problem is the never-ending maintenance bug) and kept running into the same error as the OP, BUT I got PsExec64 to run this command:
C:\PsExec64.exe -s schtasks /change /tn "\Microsoft\Windows\TaskScheduler\Maintenance Configurator" /DISABLE
by checking the "Run this program as an administrator" option under the Compatibility settings for PsExec64.exe.
I don't know if this has solved my problem yet, but I think the OP would have been able to run his process if he had done this. Dear OP, did you ever solve that?

Ensuring all .sh curl download scripts download using gnu parallel

I'm executing the following command, which runs a group of scripts, each script being a curl download.
parallel --resume-failed --joblog logshd.log {1} ::: SH/*.sh
The set of files downloaded is quite large. I've noticed some files don't download.
I hoped that the --resume-failed parameter would ensure that all the downloads that failed resume and complete.
I'm not clear on whether that means I need to run the process a second time, or whether the resume should happen within the single run.
From the GNU parallel documentation:
Where --resume-failed reads the commands from the command line (and ignores the commands in the joblog), --retry-failed ignores the command line and reruns the commands mentioned in the joblog.
I'm not clear on what "ignores the command line" or "ignores the commands in the joblog" means. Could that be clarified?
Can --resume-failed and --retry-failed be declared within the same command, and if so, what is the effect of that?
Regards
Conteh
If we assume the download fails intermittently, then your answer is --retries 10. It will run the command up to 10 times before giving up.
--resume-failed and --retry-failed are both used when GNU Parallel has finished, and you then figure out that you want to retry some of the jobs again.
The difference between the two is in how to retry the command.
--retry-failed will run exactly the same command as failed before. It does that by looking in the joblog for the command. This is typically what you want.
--resume-failed is used if you figure out that the failing command actually needed some other parameter: i.e., GNU Parallel should not run exactly the same command, but should instead run a (typically slightly changed) command with the same parameters.
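Concretely, against the joblog from the question, the two retry styles could look like this (a sketch; the --retry-failed form assumes the original run has already written logshd.log):
# Retry each failing download up to 10 times within a single run:
parallel --retries 10 --joblog logshd.log {1} ::: SH/*.sh
# After a run has finished, rerun only the jobs the joblog marks as failed:
parallel --retry-failed --joblog logshd.log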

fpm is not recognised if executing script with jenkins and ssh

I am trying to execute a script over an SSH connection with Jenkins. I am using the SSH plugin and it is well configured. I manage to execute the first part of the script, but when I try to execute an fpm command it says:
fpm: command not found
If I connect to the instance and run the same script that I call via Jenkins, it runs and there is no error (fpm is installed).
So I created a test script, test.sh:
#!/bin/bash -x
fpm
but with Jenkins I get the same error, fpm: command not found, whereas if I execute it directly I get the normal "parameter needed" output:
Missing required -s flag. What package source did you want? {:level=>:warn}
Missing required -t flag. What package output did you want? {:level=>:warn}
No parameters given. You need to pass additional command arguments so that I know what you want to build packages from. For example, for '-s dir' you would pass a list of files and directories. For '-s gem' you would pass a one or more gems to package from. As a full example, this will make an rpm of the 'json' rubygem: `fpm -s gem -t rpm json` {:level=>:warn}
Fix the above problems, and you'll be rolling packages in no time! {:level=>:fatal}
What am I missing? Why can't it find fpm if it is installed?
Make sure fpm is in /usr/bin.
It seems the problem arose because fpm was installed in /home/user2connect/bin/, so the command was not recognised. To fix this I had to call it with the whole path:
/home/user2connect/bin/fpm ...
I chose to reinstall fpm using sudo, so now it works.
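An alternative to reinstalling is to extend PATH at the top of the script, since a non-interactive SSH session typically does not source the profile that puts ~/bin on PATH. A sketch, reusing the path from the answer above:
#!/bin/bash -x
# Make fpm resolvable in the non-interactive SSH session (path taken from above).
export PATH="/home/user2connect/bin:$PATH"
fpm --version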

Jenkins accessing Window Server

I have the following problem: I have an Ant task in Jenkins CI that (apparently) needs access to OSX's Window Server (it needs to show a window). After doing some research, it appears that only the currently logged-in user and the root user (or sudo) can access OSX's Window Server.
The Ant task (Adobe ADL) is one that actually 'runs' a build, so it has to pop up a screen.
I'm on a MacBook running OSX 10.7.something (Lion), Jenkins 1.487, Ant 1.8.4.
What I have tried so far:
To start with, I tried the 'barebone' <exec> task to invoke ADL. It works, but I get an error which means that Jenkins, running as a daemon (with homedir /Users/shared/Jenkins/Home), cannot access OSX's Window Server.
Ran Jenkins as myself, by changing USER_NAME, GROUP_NAME, JENKINS_HOME in the Jenkins launchd.conf file: https://wiki.jenkins-ci.org/display/JENKINS/Thanks+for+using+OSX+Installer
This gave a lot of errors/trouble, which I tried to solve in communication with the creator of the Jenkins CI, but unfortunately to no avail.
Tried to have Ant run an <exec> task (running a shell script) in which I try to sudo with a password, using this sneaky way of passing a password on stdin: echo <password> | sudo -S <command>. This is really bad, but as I'm running Jenkins locally (not reachable from outside my LAN) it's no problem.
Tried to have Ant run an <exec> task using a 'redirector' with my password as the input string. Also super bad, but yeah, I just want it to work. Which it did not.
Tried a Jenkins SSH plugin: didn't work. I could, however, SSH to my own localhost using Terminal. Thing is, I don't know what the Jenkins SSH plugin was trying to do (how can I figure that out, anyway?), so I don't know why it wouldn't work.
Tried to have Ant run an SSHEXEC task (which, after some hours, finally worked; Ant for Mac is borked, something with optional .jar tasks not being renamed correctly), but I'm getting a "com.jcraft.jsch.JSchException: Auth fail", which I googled for and can't seem to resolve. The only applicable solution was to have sshd accept password auths; I did that, and still got the same error.
I think what I want to accomplish was NOT worth the two days I have spent on this problem so far, although I learned a lot. However, I just want this to work and will not accept defeat, yet :)
My question: have you had to solve a similar problem, and how did you go about it? Are there any other methods I can try? Is there a method mentioned here that should JUST _WORK_ and I did something wrong?
[edit] I have decided to go with the Jenkins standalone app, as I think (for me) this is a nicer solution in total, as my laptop is not a build server. Also, the Jenkins app can start at startup, so it actually acts as a local server.
Just a quick guess: if you don't want the interactivity of the script, and the script can do without it, you can try to set headless mode on the Java command line:
-Djava.awt.headless=true
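If the build goes through Ant, one way to get that flag onto the JVM from a Jenkins shell step is via ANT_OPTS (a sketch; the target name "build" is an assumption):
# Run Ant with the JVM in headless mode so AWT never contacts the Window Server.
export ANT_OPTS="-Djava.awt.headless=true"
ant build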
I have decided to go with the Jenkins standalone app, as I think (for me) this is a nicer solution anyway, as my laptop is not a (headless) build server. Also, the Jenkins app can start at startup, so it acts as a server too.
