I can't seem to get the variable glooNamespaceExists to actually print. I see the response from kubectl in the console, but the variable seems to be null. I'm looking to skip this entire build stage if the namespace already exists.
stage('Setup Gloo Ingress Controller') {
    def glooNamespaceExists = sh(script: "kubectl get ns gloo-system -o jsonpath='{.status.phase}'")
    if (glooNamespaceExists != 'Active') {
        sh 'helm repo add gloo https://storage.googleapis.com/solo-public-helm'
        sh 'helm repo update'
        sh 'kubectl create namespace gloo-system'
        sh 'helm install gloo gloo/gloo --namespace gloo-system'
    }
}
EDIT: Console output of run:
[Pipeline] { (Setup Gloo Ingress Controller)
[Pipeline] sh
+ kubectl get ns gloo-system -o jsonpath={.status.phase}
Active
[Pipeline] sh
+ helm repo add gloo https://storage.googleapis.com/solo-public-helm
Please add returnStdout: true to your sh step, which will make it return the command's exact output. Without it the step returns nothing (which is why your variable is null); with returnStatus: true it would return the exit code instead. Your command should look like this:
def glooNamespaceExists = sh(script: "kubectl get ns gloo-system -o jsonpath='{.status.phase}'", returnStdout: true)
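For completeness, here is a sketch of the whole stage with that fix applied. The .trim() and the || true are additions of mine: trim() guards against a stray trailing newline in the captured output, and || true stops a missing namespace (where kubectl exits non-zero) from failing the step instead of falling through to the install:

stage('Setup Gloo Ingress Controller') {
    def glooNamespaceExists = sh(
        script: "kubectl get ns gloo-system -o jsonpath='{.status.phase}' || true",
        returnStdout: true
    ).trim()
    if (glooNamespaceExists != 'Active') {
        sh 'helm repo add gloo https://storage.googleapis.com/solo-public-helm'
        sh 'helm repo update'
        sh 'kubectl create namespace gloo-system'
        sh 'helm install gloo gloo/gloo --namespace gloo-system'
    }
}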
I have a Jenkins pipeline that uses an if statement to check if a docker container is running. I run the following command to get the running state:
def containerStatus = sh(script: "ssh -o StrictHostKeyChecking=no -l <user> <server> 'docker container inspect -f '{{.State.Status}}' ${tagName}'", returnStdout: true)
I have added
echo containerStatus
and in the Jenkins console the output for this is "running"
However, when I have the following in the pipeline:
if(containerStatus.toString() == 'running'){
echo 'Initial status: Container running'
...some code...
}
this condition is not executed (I hit my defined error condition). I have also tried removing the .toString(), but no luck.
The complete stage in the pipeline is:
stage("Container") {
steps {
script{
def containerStatus = sh(script: "ssh -o StrictHostKeyChecking=no -l <user> <server> 'docker container inspect -f '{{.State.Status}}' ${tagName}'", returnStdout: true)
echo containerStatus
if(containerStatus.toString() == 'running'){
echo 'Initial status: Container running'
...some code...
}
else {
error "Container not running"
}
}
}
}
You need to trim the resulting output: returnStdout captures the trailing newline, so containerStatus actually holds "running\n" and the equality check fails:
def containerStatus = sh(script: "ssh -o StrictHostKeyChecking=no -l <user> <server> 'docker container inspect -f '{{.State.Status}}' ${tagName}'", returnStdout: true).trim()
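If you want to see why, print the value between delimiters; any trailing whitespace becomes visible (a quick diagnostic using the same variable as above):

echo "status=[${containerStatus}]" // untrimmed, the closing bracket lands on the next line; trimmed, you get status=[running]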
Does anybody know why:
…
steps
{
    script
    {
        sshagent(credentials: ['jenk'])
        {
            sh "git remote show …" // This does not work!
            bat "git remote show …" // This works??
        }
    }
}
...
The 'jenk' credentials are managed via Jenkins->credentials->System->global credentials
EDIT:
Sorry, I forgot the error message:
Host key verification failed
fatal: Could not read from remote repository
Jenkins was configured to use CYGWIN_NT-6.3-WOW (i686 Cygwin) for the sh commands. In the end, the following commands cleared everything up:
if (isUnix())
{
    echo "Jenkins runs on Linux"
}
else
{
    echo "Jenkins runs on Windows"
}
echo "show shell kernel version (uname -a) : "
def res = sh (script: "uname -a", returnStdout: true)
echo "${res}" //=>CYGWIN_NT-6.3-WOW...
res2 = sh (script: "ls -al ~/.ssh", returnStdout: true)
echo "${res2}"
So the solution to the problem above is adding the SSH keys to Cygwin.
If you need your credentials you could do this:
https://codurance.com/2019/05/30/accessing-and-dumping-jenkins-credentials/
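If you'd rather provision the key from the pipeline itself, here is a sketch of one way to do it. It assumes the 'jenk' credential is an SSH private key, that ~/.ssh resolves to Cygwin's home directory, and that your-git-host is a placeholder for the real server:

withCredentials([sshUserPrivateKey(credentialsId: 'jenk', keyFileVariable: 'SSH_KEY')]) {
    // copy the bound key where Cygwin's ssh looks for it
    sh 'mkdir -p ~/.ssh && chmod 700 ~/.ssh'
    sh 'cp "$SSH_KEY" ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa'
    // pre-trust the host so "Host key verification failed" goes away
    sh 'ssh-keyscan your-git-host >> ~/.ssh/known_hosts'
    sh 'git remote show origin'
}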
On my Mac, the wget command works. How do I fix this issue in Jenkins?
Error Message
wget
https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip
/Users/don/.jenkins/workspace/demo@tmp/durable-2702e009/script.sh:
line 1: wget: command not found
Full Pipeline Script
node('master') {
    def home = sh(script: "echo $ANDROID_HOME", returnStdout: true).trim()
    def SDKPath = "$home/Android/sdk"
    stage("Preparing SDK") {
        // Check SDK Downloaded
        def isSDKDownloaded = sh(script: "test -e sdk-tools-linux-4333796.zip && echo true || echo false", returnStdout: true).trim()
        if (isSDKDownloaded == "false") {
            // Download SDK
            sh "wget 'https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip'"
        }
        // Check if SDK is Extracted
        def isExtracted = sh(script: "test -e $SDKPath/tools && echo true || echo false", returnStdout: true).trim()
        if (isExtracted == "false") {
            sh "mkdir -p $SDKPath"
            // Unzip SDK
            sh "unzip sdk-tools-linux-4333796.zip -d $SDKPath"
        }
        // Install SDK Tools
        sh "yes | $SDKPath/tools/bin/sdkmanager 'build-tools;28.0.3' 'platform-tools' 'platforms;android-27'"
        sh "ls $SDKPath/licenses"
        // See installed and available SDK
        sh "$SDKPath/tools/bin/sdkmanager --list"
        // Accept all SDK licences
        sh "yes | $SDKPath/tools/bin/sdkmanager --licenses"
    }
    def selectedBranch = SELECTED_RELEASE_BRANCH
    stage('Checkout') {
        git branch: selectedBranch, url: 'git@gitlab.com:o-apps/demo.git'
        // Remove existing local properties
        sh 'rm local.properties ||:'
        // Write sdk.dir path into local properties file
        sh "echo 'sdk.dir=$SDKPath' >> local.properties"
    }
    stage('Setup Tools') {
        withCredentials([file(credentialsId: 'android_keystore', variable: 'KEYFILE')]) {
            sh "cp \$KEYFILE app/key.jks"
        }
    }
    stage('Build Release APK') {
        sh "./gradlew clean assembleRelease"
    }
    stage('Upload to Play Store') {
        androidApkUpload googleCredentialsId: 'key', apkFilesPattern: '**/*-release.apk', trackName: 'alpha'
    }
    stage('Cleanup Credential') {
        sh "rm app/key.jks"
    }
}
This is probably due to the $PATH environment variable, which is different between your user and the user running Jenkins. Your user may be altering its $PATH by extending it in a shell resource file (~/.bashrc, ~/.zshrc).
Not to worry, you can use the full path.
To find out the full path to wget, run this on the machine that runs the pipeline (the one labelled master):
% which wget
/usr/local/bin/wget
(Your path may naturally be different.)
Now use the full path:
// Download SDK
sh "/usr/local/bin/wget 'https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip'"
I am trying to deploy to a k8s cluster using Helm 3 and Jenkins. Jenkins and k8s are running on different servers. I merged the kubeconfig files so that all the cluster information is in one config file in the .kube directory. I would like to deploy my app to the related environment and namespace according to the GIT_BRANCH value. I have two questions about the script below.
1. What is the best way to store k8s cluster credentials and use them in the pipeline? I saw some plugins, such as Kubernetes CLI, but I can't tell whether they cover my requirement. If I use that plugin, should I store the kubeconfig file on the Jenkins machine manually, or does the plugin handle this by uploading the config file?
2. Should I change anything in the script below to follow best practices?
stage('Deploy to dev') {
    script {
        steps {
            if (env.GIT_BRANCH.contains("dev")) {
                def namespace = "dev"
                def ENV = "development"
                withCredentials([file(credentialsId: ...)]) {
                    // change context with related namespace
                    sh "kubectl config set-context $(kubectl config current-context) --namespace=${namespace}"
                    // Deploy with Helm
                    echo "Deploying"
                    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
                }
            }
        }
    }
}
stage('Deploy to Test') {
    script {
        steps {
            if (env.GIT_BRANCH.contains("test")) {
                def namespace = "test"
                def ENV = "test"
                withCredentials([file(credentialsId: ...)]) {
                    // change context with related namespace
                    sh "kubectl config set-context $(kubectl config current-context) --namespace=${namespace}"
                    // Deploy with Helm
                    echo "Deploying"
                    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
                }
            }
        }
    }
}
}
stage('Deploy to Production') {
    when {
        anyOf {
            environment name: 'DEPLOY_TO_PROD', value: 'true'
        }
    }
    steps {
        script {
            DEPLOY_PROD = false
            def namespace = "production"
            withCredentials([file(credentialsId: 'kube-config', variable: 'kubecfg')]) {
                // Change context with related namespace
                sh "kubectl config set-context $(kubectl config current-context) --namespace=${namespace}"
                // Deploy with Helm
                echo "Deploying to production"
                sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
            }
        }
    }
}
I have never tried this, but in theory the credentials variable is available as an environment variable. Try using KUBECONFIG as the variable name:
withCredentials([file(credentialsId: 'secret', variable: 'KUBECONFIG')]) {
    // change context with related namespace; note the escaped \$ so the
    // command substitution runs in the shell rather than failing in Groovy
    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
    // Deploy with Helm
    echo "Deploying"
    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
}
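This works because the file credential binding writes the secret to a temporary file and exposes that file's path in the variable you name; since both kubectl and helm honor the KUBECONFIG environment variable, no manual copying into ~/.kube is needed.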
A workaround that worked for me:
withCredentials([file(credentialsId: 'k8s-dk-staging', variable: 'KUBECRED')]) {
    sh 'cat $KUBECRED > ~/.kube/config'
    sh './deploy-app.sh'
}
I don't like doing that; ideally I would like to use KUBECONFIG, but for now this is what works for me.
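For what it's worth, one caveat with this workaround: it overwrites ~/.kube/config for anything else running on that agent and leaves the secret on disk after the build, whereas the KUBECONFIG binding above only lives for the duration of the withCredentials block and is cleaned up automatically.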
I am using a Jenkinsfile in a pipeline on version 2.32.2.
For various reasons I want to extract the version string from the pom. I was hoping I wouldn't have to add the maven help plugin and use evaluate.
I quickly came up with a little sed expression to get it out of the pom; it uses pipes and works on the command line in the Jenkins workspace on the executor.
$ sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'
1.0.0-SNAPSHOT
It could probably be optimized, but I want to understand why the pipeline seems to be failing on piped sh commands. I've played with various string formats and am currently using a dollar slashy string.
The pipeline step looks like the following to allow for easy output of the command string:
script {
    def ver_script = $/sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'/$
    echo "${ver_script}"
    POM_VERSION = sh(script: "${ver_script}", returnStdout: true)
    echo "${POM_VERSION}"
}
When run in the jenkins pipeline I get the following console output where it seems to be separating the piped commands into separate commands:
[Pipeline] script
[Pipeline] {
[Pipeline] echo
sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'
[Pipeline] sh
[FRA-198-versioned-artifacts-44SD6DBQOGOI54UEF7NYE4ECARE7RMF7VQYXDPBVFOHS5CMSTFLA] Running shell script
+ sed -n /<version>/,/<version/p pom.xml
+ head -1
+ sed s/[[:blank:]]*<\/*version>//g
sed: couldn't write 89 items to stdout: Broken pipe
[Pipeline] }
[Pipeline] // script
Any guidance out there on how to properly use piped commands in a Jenkinsfile?
I finally put some thought into it and realized that pipe subshells are probably causing the issue. I know some of the evils of eval, but I ended up wrapping this in an eval:
script {
    def ver_script = $/eval "sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\/*version>//g'"/$
    echo "${ver_script}"
    POM_VERSION = sh(script: "${ver_script}", returnStdout: true)
    echo "${POM_VERSION}"
}
I know this is kind of a late answer, but for whoever needs the solution without eval, you can use /bin/bash -c "script" to make pipes work:
script {
    POM_VERSION = sh(script: "/bin/bash -c 'sed -n \'/<version>/,/<version/p\' pom.xml | head -1 | sed \'s/[[:blank:]]*<\/*version>//g\'\''", returnStdout: true)
    echo "${POM_VERSION}"
}
The only problem with this method is the hellish escaping, yet this way the subshells of the pipe will be handled by our boy /bin/bash -c.
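If you'd rather dodge most of that escaping, a triple-single-quoted Groovy string may help: it doesn't interpolate, and the inner single quotes don't need escaping, so only the sed delimiter still needs a doubled backslash. A sketch of the same command:

script {
    POM_VERSION = sh(
        script: '''sed -n '/<version>/,/<version/p' pom.xml | head -1 | sed 's/[[:blank:]]*<\\/*version>//g' ''',
        returnStdout: true
    ).trim()
    echo "${POM_VERSION}"
}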
If your environment allows it, I've found that a simple solution to this problem is to place the script containing pipes into a file, and then run that file with sh, like so:
script.sh:
#!/bin/sh
kubectl exec --container bla -i $(kubectl get pods | awk '/foo-/{ print $1 }') -- php /code/dostuff
Jenkinsfile:
stage('Run script with pipes') {
    steps {
        sh "./script.sh"
    }
}
The pipeline-utility-steps plugin nowadays includes a readMavenPom step, which allows you to access the version as follows (note the parentheses; the step is a method call):
version = readMavenPom().getVersion()
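For example, assuming the plugin is installed and a pom.xml sits at the workspace root (the file parameter is optional and defaults to pom.xml):

def pom = readMavenPom file: 'pom.xml'  // returns the parsed Maven model
echo "Building version ${pom.version}"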
So nothing detailed above worked for me using the scripted Jenkinsfile syntax with Groovy. I was able to get it working, however. The type of quotation marks you use is important. In the example below, I am trying to fetch the latest git tag from GitHub.
...
stage("Get latest git tag") {
if (env.CHANGE_BRANCH == 'master') {
sh 'git fetch --tags'
TAGGED_COMMIT = sh(script: 'git rev-list --branches=master --tags --max-count=1', returnStdout: true).trim()
LATEST_TAG = sh(script: 'git describe --abbrev=0 --tags ${TAGGED_COMMIT}', returnStdout: true).trim()
VERSION_NUMBER = sh(script: "echo ${LATEST_TAG} | cut -d 'v' -f 2", returnStdout: true).trim()
echo "VERSION_NUMBER: ${VERSION_NUMBER}"
sh 'echo "VERSION_NUMBER: ${VERSION_NUMBER}"'
}
}
...
Notice how the shell execution to assign LATEST_TAG works as expected (assigning the variable to v2.1.0). If we were to try the same thing (with single quotes) to assign VERSION_NUMBER, it would NOT work, and it isn't actually the pipe that breaks it: a single-quoted Groovy string is not interpolated, so ${LATEST_TAG} is left for the shell to expand, and the shell has no variable by that name. Wrapping the script in double quotes lets Groovy substitute the value before the shell runs.
The first echo prints VERSION_NUMBER: 2.1.0 but the second prints just VERSION_NUMBER:, because the Groovy variable is not visible to the shell. If you want VERSION_NUMBER to be available in the shell commands, you have to assign the output of the shell command to env.VERSION_NUMBER as shown below:
...
stage("Get latest git tag") {
if (env.CHANGE_BRANCH == 'master') {
sh 'git fetch --tags'
TAGGED_COMMIT = sh(script: 'git rev-list --branches=master --tags --max-count=1', returnStdout: true).trim()
LATEST_TAG = sh(script: 'git describe --abbrev=0 --tags ${TAGGED_COMMIT}', returnStdout: true).trim()
env.VERSION_NUMBER = sh(script: "echo ${LATEST_TAG} | cut -d 'v' -f 2", returnStdout: true).trim()
echo "VERSION_NUMBER: ${VERSION_NUMBER}"
sh 'echo "VERSION_NUMBER: ${VERSION_NUMBER}"'
}
}
...
The first echo prints VERSION_NUMBER: 2.1.0 and the second prints VERSION_NUMBER: 2.1.0.
I am also struggling with the usage of pipes inside my Jenkins pipeline, but as a side note, if you want a simple way to extract the version of a Maven pom, here's a very clean one I found in another post and that I'm using:
stage('Preparation') {
    version = getVersion()
    print "version : " + version
}
def getVersion() {
    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
    matcher ? matcher[0][1] : null
}
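One caveat with this approach: java.util.regex.Matcher is not serializable, so if Jenkins happens to persist pipeline state while the matcher is alive you can get a NotSerializableException. A common workaround (a sketch, keeping the readFile step outside the non-CPS code) is:

def getVersion() {
    extractVersion(readFile('pom.xml'))
}

@NonCPS
def extractVersion(String pomText) {
    // the matcher is created and discarded inside this @NonCPS method,
    // so it is never part of the persisted pipeline state
    def matcher = pomText =~ '<version>(.+)</version>'
    matcher ? matcher[0][1] : null
}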
gives you:
[Pipeline] echo
releaseVersion : 0.1.24
[Pipeline] sh