For loop shell script in Jenkins

I'm trying to delete pods through a Jenkins stage, but the for loop that should delete them isn't working. Any suggestions?
stage(' second') {
    withCredentials([usernamePassword(credentialsId: "xxx", passwordVariable: 'xxx', usernameVariable: 'xxx')]) {
        sh '''
        xxxx logging into cluster   <=== working here
        pods= $(oc -n ns get pods | grep -i pod** | awk {'print $1'})   <=== getting list of pods working fine
        for pod in "${pods}"
        do
            echo "$pod"
            oc -n ns delete $pod;
        done
        '''
    }
}

You should probably remove the double quotes surrounding "${pods}". Something like below:
for pod in ${pods}
do
    echo "$pod"
    oc -n ns delete $pod;
done
Also, if your default shell is not Bash, try setting a shebang line at the top of your sh block:
#!/bin/bash

Also remove the space in pods= $(oc ...; it should be pods=$(oc .... You can verify with echo "${pods}" right after setting it.
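Putting these fixes together, a minimal sketch of the corrected sh block might look like this (the namespace ns and the grep filter are placeholders from the question, and note that oc delete usually needs a resource type, so pod is added here as an assumption):

sh '''#!/bin/bash
# collect the pod names (no space after the = sign)
pods=$(oc -n ns get pods | grep -i "pod" | awk '{print $1}')
# iterate without quoting so word splitting yields one name per iteration
for pod in ${pods}
do
    echo "$pod"
    # assumption: oc delete needs the resource type, hence "pod"
    oc -n ns delete pod "$pod"
done
'''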

Related

No such DSL method 'openShiftInstall' found among steps

I'm trying to call the following function in Jenkins:
def openShiftInstall(char CLUSTER_URL, char TOKEN) {
    sh '''
    oc login --token='''TOKEN''' --server='''CLUSTER_URL'''
    if helm history --max 1 $ARTIFACT_ID 2>/dev/null | grep FAILED | cut -f1 | grep -q 1; then
        helm delete --purge $ARTIFACT_ID
    fi
    helm upgrade --install -f container-root/helm/values-devref.yaml --set image.version=$RELEASE_VERSION $ARTIFACT_ID container-root/helm --namespace ${NAMESPACE} --debug
    '''
}
in this stage
environment {
    NAMESPACE = 'test'
    OC_a_DEV_TOKEN = 123
    OC_b_DEV_TOKEN = 456
    OC_a_DEV_CLUSTER_URL = 'https://api.ocp4-a.net:443'
    OC_b_DEV_CLUSTER_URL = 'https://api.ocp4-b.net:443'
}
steps {
    container('oc') {
        script {
            openShift.openShiftInstall("${OC_a_DEV_CLUSTER_URL}", "${OC_a_DEV_TOKEN}")
            //openShift.openShiftInstall(${OC_b_DEV_CLUSTER_URL}, ${OC_b_DEV_TOKEN})
        }
    }
}
I'm getting this error:
java.lang.NoSuchMethodError: No such DSL method 'openShiftInstall' found among steps
Can somebody help me understand what I'm doing wrong here?
Or maybe there is a better solution for looping through multiple cluster installations with different parameters? Right now I'm just trying to run the same function twice, but even a single call is not working with the parameters.
You seem to be accepting a char in the function openShiftInstall, but you are passing a String. Change the method signature like below.
def openShiftInstall(def CLUSTER_URL, def TOKEN) {
    sh '''
    oc login --token='''TOKEN''' --server='''CLUSTER_URL'''
    if helm history --max 1 $ARTIFACT_ID 2>/dev/null | grep FAILED | cut -f1 | grep -q 1; then
        helm delete --purge $ARTIFACT_ID
    fi
    helm upgrade --install -f container-root/helm/values-devref.yaml --set image.version=$RELEASE_VERSION $ARTIFACT_ID container-root/helm --namespace ${NAMESPACE} --debug
    '''
}
I changed it to the following and it seems to be working now.
def openShiftInstall(String CLUSTER_URL, String TOKEN) {
    unstash 'lapc-docker-root'
    sh '''
    oc login --token=''' + TOKEN + ''' --server=''' + CLUSTER_URL + '''
    if helm history --max 1 $ARTIFACT_ID 2>/dev/null | grep FAILED | cut -f1 | grep -q 1; then
        helm delete --purge $ARTIFACT_ID
    fi
    helm upgrade --install -f container-root/helm/values-devref.yaml --set image.version=$RELEASE_VERSION $ARTIFACT_ID container-root/helm --namespace ${NAMESPACE} --debug
    '''
}
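To handle several clusters, one option is to loop over URL/token pairs and call the function once per pair. A minimal sketch reusing the environment variables from the question and the openShift shared-library object shown above:

script {
    // pair each cluster URL with its token (values come from the environment block above)
    def clusters = [
        [url: env.OC_a_DEV_CLUSTER_URL, token: env.OC_a_DEV_TOKEN],
        [url: env.OC_b_DEV_CLUSTER_URL, token: env.OC_b_DEV_TOKEN]
    ]
    clusters.each { c ->
        openShift.openShiftInstall(c.url, c.token)
    }
}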

ENDSSH command not found

I'm writing a Jenkins pipeline (Jenkinsfile), and within the script clause I have to SSH to a box and run some commands. I think the problem has to do with the env vars that I'm using within the quotes. I'm getting an "ENDSSH: command not found" error and I'm at a loss. Any help would be much appreciated.
stage("Checkout my-git-repo"){
steps {
script {
sh """
ssh -o StrictHostKeyChecking=accept-new -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null -i ${JENKINS_KEY} ${JENKINS_KEY_USR}#${env.hostname} << ENDSSH
echo 'Removing current /opt/my-git-repo directory'
sudo rm -rf /opt/my-git-repo
echo 'Cloning new my-git-repo repo into /opt'
git clone ssh://${JENKINS_USR}#git.gitbox.com:30303/my-git-repo
sudo mv /home/jenkins/my-git-repo /opt
ENDSSH
"""
}
}
}
-bash: line 6: ENDSSH: command not found
I'm personally not familiar with Jenkins, but I'd guess the issue is the whitespace before ENDSSH.
Whitespace in front of the delimiter is not allowed.
(https://linuxize.com/post/bash-heredoc/)
Try either removing the indentation:
stage("Checkout my-git-repo"){
steps {
script {
sh """
ssh -o StrictHostKeyChecking=accept-new -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null -i ${JENKINS_KEY} ${JENKINS_KEY_USR}#${env.hostname} << ENDSSH
echo 'Removing current /opt/my-git-repo directory'
sudo rm -rf /opt/my-git-repo
echo 'Cloning new my-git-repo repo into /opt'
git clone ssh://${JENKINS_USR}#git.gitbox.com:30303/my-git-repo
sudo mv /home/jenkins/my-git-repo /opt
ENDSSH
"""
}
}
}
OR ensure that the whitespace is only tabs and replace << with <<-:
Appending a minus sign to the redirection operator (<<-) will cause all leading tab characters to be ignored. This allows you to use indentation when writing here-documents in shell scripts. Leading whitespace characters other than tabs are not allowed.
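For example, a sketch of the <<- form; the indentation in front of the remote commands and the closing ENDSSH below is meant to be real tab characters (spaces will not be stripped):

// NOTE: the here-document body below must be indented with tabs for <<- to strip them
sh """
	ssh -o StrictHostKeyChecking=accept-new -i ${JENKINS_KEY} ${JENKINS_KEY_USR}@${env.hostname} <<-ENDSSH
	echo 'Removing current /opt/my-git-repo directory'
	sudo rm -rf /opt/my-git-repo
	ENDSSH
"""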

Jenkins pipeline: kubectl: not found

I have the following Jenkinsfile:
node {
    stage('Apply Kubernetes files') {
        withKubeConfig([credentialsId: 'jenkins-deployer', serverUrl: 'https://192.168.64.2:8443']) {
            sh 'kubectl apply -f '
        }
    }
}
While running it, I get "kubectl: not found". I installed the Kubernetes CLI plugin in Jenkins and generated the secret via kubectl create sa jenkins-deployer. What's wrong here?
I know this is a fairly old question, but I decided to describe an easy workaround that might be helpful.
To use the Kubernetes CLI plugin we need to have an executor with kubectl installed.
One possible way to get kubectl is to install it in the Jenkins pipeline, like in the snippet below.
NOTE: I'm using ./kubectl get pods to list all Pods in the default Namespace. Additionally, you may need to change the kubectl version (v1.20.5).
node {
    stage('List pods') {
        withKubeConfig([credentialsId: 'kubernetes-config']) {
            sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
            sh 'chmod u+x ./kubectl'
            sh './kubectl get pods'
        }
    }
}
As a result, in the Console Output, we can see that it works as expected:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl
...
[Pipeline] sh
+ chmod u+x ./kubectl
[Pipeline] sh
+ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-zhxwb 1/1 Running 0 34s
my-jenkins-0 2/2 Running 0 134m
You call kubectl from the shell script step. To be able to do that, the agent (node) executing the build needs to have kubectl available as an executable.
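Alternatively, if the builds already run on Kubernetes through the Kubernetes plugin, you can schedule the step in a pod container whose image ships kubectl. A rough sketch; the bitnami/kubectl image and the kubernetes-config credentials ID are only example values:

podTemplate(containers: [
    // any image that contains kubectl will do; bitnami/kubectl is just one example
    containerTemplate(name: 'kubectl', image: 'bitnami/kubectl:latest', command: 'sleep', args: '99d')
]) {
    node(POD_LABEL) {
        stage('List pods') {
            container('kubectl') {
                withKubeConfig([credentialsId: 'kubernetes-config']) {
                    sh 'kubectl get pods'
                }
            }
        }
    }
}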

Jenkins pipeline: multiline shell commands with pipe

I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next. I found that wrapping the commands in a pair of three single quotes (''') can accomplish this. However, I am facing issues when using a pipe to feed the output of one command to another. For example:
stage('Test') {
    sh '''
    echo "Executing Tests"
    URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
    echo $URL
    RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
    echo $RESULT
    '''
}
The commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins snippet generator for pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: After a while even that started to give problems. There were certain commands executing prior to these, one of them grunt serve and the other ./ngrok http 9000. I added some delay after each of these commands and that solved the problem for now.
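As an alternative to a fixed delay, one option is to poll the ngrok API until a tunnel is actually reported before continuing. A rough sketch; the 30-second timeout is arbitrary:

sh '''
# wait up to ~30s for ngrok to report a public URL
for i in $(seq 1 30); do
    URL=$(curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url')
    [ -n "$URL" ] && [ "$URL" != "null" ] && break
    sleep 1
done
echo "Tunnel URL: $URL"
'''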
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage('Prepare destination host') {
    sh '''
    ssh -t -t user@host 'bash -s << 'ENDSSH'
    if [[ -d "/path/to/some/directory/" ]];
    then
        rm -f /path/to/some/directory/*.jar
    else
        sudo mkdir -p /path/to/some/directory/
        sudo chmod -R 755 /path/to/some/directory/
        sudo chown -R user:user /path/to/some/directory/
    fi
ENDSSH'
    '''
}
Special Notes:
- The last ENDSSH' should not have any characters before it, so it must sit at the start of a new line.
- Use ssh -t -t if you have sudo within the remote shell command.
I split the commands with &&
node {
    FOO = "world"
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world

Jenkins pipeline sh does not seem to respect pipe in shell command

I am using a Jenkinsfile in a pipeline on version 2.32.2.
For various reasons I want to extract the version string from the pom. I was hoping I wouldn't have to add the maven help plugin and use evaluate.
I quickly came up with a little sed expression to get it out of the pom; it uses pipes and works on the command line in the Jenkins workspace on the executor.
$ sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'
1.0.0-SNAPSHOT
It could probably be optimized, but I want to understand why the pipeline seems to be failing on piped sh commands. I've played with various string formats and am currently using a dollar slashy string.
The pipeline step looks like the following to allow for easy output of the command string:
script {
    def ver_script = $/sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'/$
    echo "${ver_script}"
    POM_VERSION = sh(script: "${ver_script}", returnStdout: true)
    echo "${POM_VERSION}"
}
When run in the Jenkins pipeline, I get the following console output, where it seems to be separating the piped commands into separate commands:
[Pipeline] script
[Pipeline] {
[Pipeline] echo
sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'
[Pipeline] sh
[FRA-198-versioned-artifacts-44SD6DBQOGOI54UEF7NYE4ECARE7RMF7VQYXDPBVFOHS5CMSTFLA] Running shell script
+ sed -n /<version>/,/<version/p pom.xml
+ head -1
+ sed s/[[:blank:]]*<\/*version>//g
sed: couldn't write 89 items to stdout: Broken pipe
[Pipeline] }
[Pipeline] // script
Any guidance on how to properly use piped commands in a Jenkinsfile?
I finally put some thought into it and realized that pipe subshells are probably causing the issue. I know some of the evils of eval, but I ended up wrapping this in an eval:
script {
    def ver_script = $/eval "sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'"/$
    echo "${ver_script}"
    POM_VERSION = sh(script: "${ver_script}", returnStdout: true)
    echo "${POM_VERSION}"
}
I know this is kind of a late answer, but for whoever needs a solution without eval, you can use /bin/bash -c "script" to make the pipe work:
script {
    POM_VERSION = sh(script: "/bin/bash -c 'sed -n \'/<version>/,/<version/p\' pom.xml | head -1 | sed \'s/[[:blank:]]*<\/*version>//g\'\''", returnStdout: true)
    echo "${POM_VERSION}"
}
The only problem with this method is the hellish escaping, but this way the pipe's subshell is handled by our boy /bin/bash -c.
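If the escaping gets out of hand, another variant is to keep the dollar slashy string from the question and simply prefix it with bash -c, so Bash handles the pipes without any backslash-escaped quotes. A sketch under the same assumptions as the question (pom.xml at the workspace root):

script {
    // dollar slashy string: no backslash escaping needed, Bash runs the whole pipeline
    def ver_script = $/bash -c "sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'"/$
    POM_VERSION = sh(script: ver_script, returnStdout: true).trim()
    echo "${POM_VERSION}"
}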
If your environment allows it, a simple solution to this problem is to place the script containing the pipes into a file and then run that file with sh, like so:
script.sh:
#!/bin/sh
kubectl exec --container bla -i $(kubectl get pods | awk '/foo-/{ print $1 }') -- php /code/dostuff
Jenkinsfile:
stage('Run script with pipes') {
    steps {
        sh "./script.sh"
    }
}
The pipeline-utility-steps plugin nowadays includes a readMavenPom step, which allows you to access the version as follows:
version = readMavenPom().getVersion()
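A small usage sketch, assuming the Pipeline Utility Steps plugin is installed and pom.xml is already in the workspace:

node {
    stage('Read version') {
        // readMavenPom comes from the Pipeline Utility Steps plugin; file defaults to pom.xml
        def pom = readMavenPom file: 'pom.xml'
        echo "Version: ${pom.version}"
    }
}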
Nothing detailed above worked for me using the scripted Jenkinsfile syntax with Groovy. I was able to get it working, however; the type of quotation marks you use is important. In the example below, I am trying to fetch the latest git tag from GitHub.
...
stage("Get latest git tag") {
if (env.CHANGE_BRANCH == 'master') {
sh 'git fetch --tags'
TAGGED_COMMIT = sh(script: 'git rev-list --branches=master --tags --max-count=1', returnStdout: true).trim()
LATEST_TAG = sh(script: 'git describe --abbrev=0 --tags ${TAGGED_COMMIT}', returnStdout: true).trim()
VERSION_NUMBER = sh(script: "echo ${LATEST_TAG} | cut -d 'v' -f 2", returnStdout: true).trim()
echo "VERSION_NUMBER: ${VERSION_NUMBER}"
sh 'echo "VERSION_NUMBER: ${VERSION_NUMBER}"'
}
}
...
Notice how the shell execution to assign LATEST_TAG works as expected (assigning the variable to v2.1.0). If we were to try the same thing (with single quotes) to assign VERSION_NUMBER, it would NOT work - the pipe messes everything up. Instead, we wrap the script in double quotes.
The first echo prints VERSION_NUMBER: 2.1.0 but the second prints VERSION_NUMBER:. If you want VERSION_NUMBER to be available in the shell commands, you have to assign the output of the shell command to env.VERSION_NUMBER as shown below:
...
stage("Get latest git tag") {
if (env.CHANGE_BRANCH == 'master') {
sh 'git fetch --tags'
TAGGED_COMMIT = sh(script: 'git rev-list --branches=master --tags --max-count=1', returnStdout: true).trim()
LATEST_TAG = sh(script: 'git describe --abbrev=0 --tags ${TAGGED_COMMIT}', returnStdout: true).trim()
env.VERSION_NUMBER = sh(script: "echo ${LATEST_TAG} | cut -d 'v' -f 2", returnStdout: true).trim()
echo "VERSION_NUMBER: ${VERSION_NUMBER}"
sh 'echo "VERSION_NUMBER: ${VERSION_NUMBER}"'
}
}
...
The first echo prints VERSION_NUMBER: 2.1.0 and the second prints VERSION_NUMBER: 2.1.0.
I am also struggling with the use of pipes inside my Jenkins pipeline, but as a side note: if you want a simple way to extract the version of a Maven pom, here's a very clean one I found in another post and that I'm using:
stage('Preparation') {
    version = getVersion()
    print "version : " + version
}

def getVersion() {
    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
    matcher ? matcher[0][1] : null
}
gives you:
[Pipeline] echo
releaseVersion : 0.1.24
[Pipeline] sh
