How can I create a gulp task to run a shell script? - Jenkins

I have this shell script and I want to create a gulp task to execute it.
# delete old version tags (all but the newest 4), locally and on origin
for tag in `git tag -l | sort -V | head -n -4`
do
  echo $tag
  if [ -n "$(echo $tag | grep -P "(^v.*-*)")" ]; then
    echo VERSION=$(echo $tag | grep -P "(^v.*-*)")
    git tag -d $(echo $tag | grep -P "(^v.*-*)")
    git push origin :$tag
  fi
done
I'm using these plugins in my gulpfile:
var gulp = require('gulp'),
shell = require('gulp-shell'),
pckg = require('./package.json'),
runSequence = require('run-sequence');
Is there any solution?
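Not part of the original question, but a minimal sketch of what such a task could look like with gulp-shell, assuming the script above is saved as clean-tags.sh next to the gulpfile (the task name and file name are placeholders):

var gulp = require('gulp'),
    shell = require('gulp-shell');

// 'clean-tags' and './clean-tags.sh' are hypothetical names used for illustration
gulp.task('clean-tags', shell.task('bash ./clean-tags.sh'));

shell.task() wraps the command so gulp runs it like any other task, so it can also be chained with runSequence, e.g. runSequence('clean-tags', done).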

Related

Jenkins: pass a shell script variable to a downstream job

I'm new to Jenkins.
I have a job with an "Execute Shell" build step, and in that shell script I initialise some variables with values I take from some source.
I need to pass these values to a downstream job.
The values I want to pass are $IMG_NAME and $IMG_PATH from this shell script:
#!/bin/bash -x
whoami
echo "BASE_PATH: $BASE_PATH"
declare -A BRANCHES
for i in $(echo $BRANCHES_LIST | tr ',' '\n'); do BRANCHES[$i]= ; done
echo 'user' | sudo -S umount -rf /mnt/tlv-s-comp-prod/drops/
echo 'user' | sudo -S mount.nfs -o nolock,nfsvers=3 tlv-s-comp-prod:/export/drops /mnt/tlv-s-comp-prod/drops
ls /mnt/tlv-s-comp-prod/drops/
echo "cleanup workspace"
rm ${WORKSPACE}/*.txt &> /dev/null
i="0"
while [ $i -lt 6 ]
do
if [[ ${BASE_PATH} == *"Baseline"* ]]; then
unset BRANCHES[@]
declare -A BRANCHES
BRANCHES[Baseline]=
fi
for BRANCH in "${!BRANCHES[@]}"; do
echo "BRANCH: $BRANCH"
if [ $BRANCH == "Baseline" ]; then BRANCH=; fi
img_dir=$(ls -td -- ${BASE_PATH}/${BRANCH}/*/ | head -n 1)
echo "img_dir: $img_dir"
IMG_PATH=$(ls $img_dir*.rpm)
echo "IMG_PATH: $IMG_PATH"
cd $img_dir
IMG_NAME=$(ls *.rpm) > env.properties
if [ ! -z "$IMG_NAME" ]; then
if [ $(( $(date +%s) - $(stat -c %Z $IMG_PATH) )) -lt 10000800 ]; then
echo "IMG_NAME: ${IMG_NAME}"
#BRANCHES[$BRANCH]=$IMG_PATH
#echo "REG_OSA_SOFTSYNC_BUILD_IMG_FULL_PATH=${BRANCHES[$BRANCH]}" >> ${WORKSPACE}/$BRANCH.txt
echo "BRANCH_NAME=$BRANCH" >> ${WORKSPACE}/${BRANCH}_branch.txt
echo "REG_OSA_SOFTSYNC_BUILD_NAME=$BRANCH-$IMG_NAME" >> ${WORKSPACE}/${BRANCH}_branch.txt
else
echo "$IMG_NAME is out dated"
fi
else
echo "IMG_NAME is empty"
fi
BRANCH_NAME=""
done
TEMP=$BRANCH_NAME
echo "TEMP: $TEMP"
if [ $(ls ${WORKSPACE}/*_branch.txt | wc -l) == $(echo ${#BRANCHES[@]}) ]; then break; fi
#for i in $(ls *_branch.txt); do i=$(echo $i | awk -F '_branch.txt' '{print $1}'); if [ $(echo ${!BRANCHES[@]} | grep $i | wc -l) == 0 ]; then state=1 break; fi done
i=$[$i+1]
sleep 1800
done
This is the "Trigger parameterized build on other projects" configuration:
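(The screenshot of that configuration is not included here.) For completeness, one common way to hand such values to a downstream job, offered as a sketch rather than something from the original post, is to write them to a properties file at the end of the Execute Shell step and point the Parameterized Trigger plugin's "Parameters from properties file" option at it; the file name below is a placeholder:

# hypothetical file name; run after IMG_NAME and IMG_PATH are set
echo "IMG_NAME=${IMG_NAME}" >  ${WORKSPACE}/downstream.properties
echo "IMG_PATH=${IMG_PATH}" >> ${WORKSPACE}/downstream.properties

The downstream job then receives IMG_NAME and IMG_PATH as build parameters.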

Why is wc -l returning 0 in a sh step subshell in Jenkins/Groovy

I have a Jenkins script that looks like this
stage ("Build and Deploy") {
steps {
script {
def statusCode = sh(script:"""ssh ${env.SERVER_NAME} << EOF
cd ${env.LOCATION}
git clone -b ${env.GIT_BRANCH} ${env.GIT_URL} ${env.FOLDER}
cd ${env.FOLDER}
... some other stuff goes here but isn't relevant ..
sudo docker-compose up -d --build
if [ ! \$(sudo docker container ls -f "name=config-provider-*" | wc -l ) -eq 4 ]
then
exit 1
fi
""", returnStatus:true).toString().trim()
if (statusCode == "1") {
error("At least one container failed to start")
}
}
}
}
What I want is to exit with error code 1 in the script if the number of running containers is not equal to 3 (wc -l == 4, including the header), but the if statement is evaluating true and exiting with error code 1 even though I know that the containers are successfully running.
I have tried
echo sh(script: """ssh ${env.SERVER_NAME} << EOF
echo \$(sudo docker container ls -f "name=config-provider-*" | wc -l)
""", returnStdout:true).toString()
and
echo sh(script: """ssh ${env.SERVER_NAME} << EOF
echo \$(sudo docker container ls -f "name=config-provider-*")
""", returnStdout:true).toString()
The latter output 4 lines within Jenkins showing all of the running containers, as expected, but the former, which includes "| wc -l", returned and printed out 0 in Jenkins.
I have reproduced the steps of this script line by line manually from start to finish and it works as intended when not run from within Jenkins.
Additionally, manually running the command:
[ ! $(sudo docker container ls -f "name=config-provider-*" | wc -l ) -eq 4 ] && echo failed
echoes nothing, and the following command returns an output of 4, which is expected.
echo $(sudo docker container ls -f "name=config-provider-*" | wc -l )
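One explanation consistent with these symptoms, offered here as an assumption rather than something the post establishes: because the here-document delimiter EOF is unquoted, the shell on the Jenkins agent expands $(sudo docker container ls ... | wc -l) locally, before anything is sent over ssh; if that command produces no output on the agent, wc -l prints 0 and the check misfires. Quoting the delimiter defers expansion to the remote host. The shell-level equivalent of that change looks like this (server name and path are placeholders; in the pipeline, the Groovy ${env.*} interpolation still happens before the shell sees the script):

# 'EOF' (quoted) keeps the body literal on the Jenkins agent,
# so the command substitution runs on the remote host instead
ssh "$SERVER_NAME" << 'EOF'
cd /some/location
if [ "$(sudo docker container ls -f "name=config-provider-*" | wc -l)" -ne 4 ]; then
  exit 1
fi
EOF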

cron not running in Alpine Docker

I have created and added the below entry in my entry-point.sh for my Docker image.
# start cron
/usr/sbin/crond &
exec "${DIST}/bin/ss" "$@"
my crontab.txt looks like below:
bash-4.4$ crontab -l
*/5 * * * * /cleanDisk.sh >> /apps/log/cleanDisk.log
So when I run the Docker container, I don't see any file called cleanDisk.log being created.
I have set up all permissions, and crond is running as a process in my container; see below.
bash-4.4$ ps -ef | grep cron
12 sdc 0:00 /usr/sbin/crond
208 sdc 0:00 grep cron
So, can anyone guide me on why the log file is not getting created?
My cleanDisk.sh looks like below. Since it runs for the very first time and doesn't match all the criteria, I would expect it at least to print "No Error file found on Host $(hostname)" in cleanDisk.log.
#!/bin/bash
THRESHOLD_LIMIT=20
RETENTION_DAY=3
df -Ph /apps/ | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output
do
#echo $output
used=$(echo $output | awk '{print $1}' | sed s/%//g)
partition=$(echo $output | awk '{print $2}')
if [ $used -ge ${THRESHOLD_LIMIT} ]; then
echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)"
FILE_COUNT=$(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print | wc -l)
if [ ${FILE_COUNT} -gt 0 ]; then
echo "There are ${FILE_COUNT} files older than ${RETENTION_DAY} days on Host $(hostname)."
for FILENAME in $(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print);
do
ERROR_FILE_SIZE=$(stat -c%s ${FILENAME} | awk '{ split( "B KB MB GB TB PB" , v ); s=1; while( $1>1024 ){ $1/=1024; s++ } printf "%.2f %s\n", $1, v[s] }')
echo "Before Deleting Error file ${FILENAME}, the size was ${ERROR_FILE_SIZE}."
rm -rf ${FILENAME}
rc=$?
if [[ $rc -eq 0 ]];
then
echo "Error log file ${FILENAME} with size ${ERROR_FILE_SIZE} is deleted on Host $(hostname)."
fi
done
fi
if [ ${FILE_COUNT} -eq 0 ]; then
echo "No Error file found on Host $(hostname)."
fi
fi
done
Edit:
My Dockerfile looks like this:
FROM adoptopenjdk/openjdk8:jdk8u192-b12-alpine
ARG SDC_UID=20159
ARG SDC_GID=20159
ARG SDC_USER=sdc
RUN apk add --update --no-cache bash \
busybox-suid \
sudo && \
echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
RUN addgroup --system ${SDC_USER} && \
adduser --system --disabled-password -u ${SDC_UID} -G ${SDC_USER} ${SDC_USER}
ADD --chown=sdc:sdc crontab.txt /etc/crontabs/sdc/
RUN chgrp sdc /etc/cron.d /etc/crontabs /usr/bin/crontab
# Also tried to run like this but not working
# RUN /usr/bin/crontab -u sdc /etc/crontabs/sdc/crontab.txt
USER ${SDC_USER}
EXPOSE 18631
RUN /usr/bin/crontab /etc/crontabs/sdc/crontab.txt
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["dc", "-exec"]
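Two things worth checking, offered as assumptions rather than a confirmed diagnosis: busybox crond reads per-user crontabs directly from /etc/crontabs/<user>, so /etc/crontabs/sdc should be the crontab file itself rather than a directory containing crontab.txt, and busybox crond generally needs to run as root to execute other users' jobs. Running crond with verbose logging also makes such failures visible in the container logs; a sketch of an entry-point.sh fragment along those lines:

# start crond with verbose logging so scheduling/permission errors
# show up in `docker logs` (-b background, -l 0 most verbose, -L log file)
/usr/sbin/crond -b -l 0 -L /dev/stdout
exec "${DIST}/bin/ss" "$@"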

Jenkins doesn't pick up Quality Gate failure

I want my Jenkins build to fail if the code doesn't have 90% test coverage. For that, I have installed the Quality Gates plugin, which should check the SonarQube analysis.
I have the following configuration in Jenkins, under Quality Gates:
Name: SonarQubeServer
SonarQube Server URL: http://my-server.com:9000
SonarQube account login: admin
SonarQube account password: ****
SonarQube displays: Quality Gate Failed
Jenkins displays: SonarQube analysis completed: SUCCESS and the build passes.
Any idea why Jenkins doesn't get that the quality gate failed?
Eventually I realised that I should have added Quality Gates as a Post Build Action for every job I was using it on.
You can do that with shell commands; sharing this info in case someone needs it.
To mark the build as a failure when the Quality Gate is not passed, using the Sonar REST API: add an "Execute Shell" step after the Sonar step and use the code below.
Tip: introduce a sleep of 10s before this step, just to ensure that the Sonar site has been updated with the task result status.
Fetching the task URL from report-task.txt in the workspace:
url=$(cat $WORKSPACE/.sonar/report-task.txt | grep ceTaskUrl | cut -c11- )
Fetching the task attributes from the Sonar server:
curl -u admin:${admin_pwd} -L $url | python -m json.tool
Getting the task status to check whether the Sonar scan completed successfully:
curl -u admin:${admin_pwd} -L $url -o task.json
status=$(python -m json.tool < task.json | grep -i "status" | cut -c20- | sed 's/.(.)$/\1/'| sed 's/.$//' )
echo ${status}
If the Sonar scan completed successfully, set the analysis ID and URL:
if [ $status = SUCCESS ]; then
analysisID=$(python -m json.tool < task.json | grep -i "analysisId" | cut -c24- | sed 's/.(.)$/\1/'| sed 's/.$//')
analysisUrl="https://sonar.net/api/qualitygates/project_status?analysisId=${analysisID}
echo ${analysisID}
echo ${analysisUrl}
else
echo "Sonnar run was not sucess"
exit 1
fi
Fetching the Quality Gate details using the analysis URL:
curl -u admin:$admin_pwd ${analysisUrl} | python -m json.tool
curl -u admin:$admin_pwd ${analysisUrl} | python -m json.tool | grep -i "status" | cut -c28- | sed 's/.$//' >> tmp.txt
cat tmp.txt
sed -n '/ERROR/p' tmp.txt >> error.txt
cat error.txt
if [ $(cat error.txt | wc -l) -eq 0 ]; then
echo "Quality Gate Passed ! Setting up SonarQube Job Status to Success ! "
else
exit 1
echo "Quality Gate Failed ! Setting up SonarQube Job Status to Failure ! "
fi
Cleaning up the files:
unset url
unset status
unset analysisID
unset analysisUrl
task.json
tmp.txt
error.txt
In response to Sri, who has some typos/errors in his solution.
This is Sonar 4.5.5, building with sonar-scanner.
if [ -e tmp.txt ];
then
rm tmp.txt
rm error.txt
rm task.json
fi
sleep 5
cat $WORKSPACE/.scannerwork/report-task.txt
url=$(cat $WORKSPACE/.scannerwork/report-task.txt | grep ceTaskUrl | cut -c11- )
echo $url
curl -u admin:pswd -L $url | python -m json.tool
curl -u admin:pswd -L $url -o task.json
status=$(python -m json.tool < task.json | grep -i "status" | cut --delimiter=: --fields=2 | sed 's/"//g' | sed 's/,//g' )
echo ${status}
if [ $status = SUCCESS ]; then
analysisID=$(python -m json.tool < task.json | grep -i "analysisId" | cut -c24- | sed 's/"//g' | sed 's/,//g')
analysisUrl="http://sonarserver/sonarqube/api/qualitygates/project_status?analysisId=${analysisID}"
echo ${analysisID}
echo ${analysisUrl}
else
echo "Sonar run was not success"
exit 1
fi
curl -u admin:pswd ${analysisUrl} | python -m json.tool
curl -u admin:pswd ${analysisUrl} | python -m json.tool | grep -i "status" | cut -c28- | sed 's/.$//' >> tmp.txt
cat tmp.txt
sed -n '/ERROR/p' tmp.txt >> error.txt
cat error.txt
if [ $(cat error.txt | wc -l) -eq 0 ]; then
echo "Quality Gate Passed ! Setting up SonarQube Job Status to Success ! "
else
echo "Quality Gate Failed ! Setting up SonarQube Job Status to Failure ! "
exit 1
fi
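As a side note, the grep/cut/sed parsing above is fragile against changes in the JSON layout. A sketch of the same check using jq, assuming jq is available on the agent and reusing the placeholder credentials and server URL from the answer above:

# read the compute-engine task URL written by sonar-scanner
url=$(grep ceTaskUrl "$WORKSPACE/.scannerwork/report-task.txt" | cut -d= -f2-)
status=$(curl -s -u admin:pswd "$url" | jq -r '.task.status')
if [ "$status" != "SUCCESS" ]; then echo "Sonar run was not a success"; exit 1; fi
analysisId=$(curl -s -u admin:pswd "$url" | jq -r '.task.analysisId')
gate=$(curl -s -u admin:pswd "http://sonarserver/sonarqube/api/qualitygates/project_status?analysisId=${analysisId}" | jq -r '.projectStatus.status')
if [ "$gate" != "OK" ]; then echo "Quality Gate Failed!"; exit 1; fi
echo "Quality Gate Passed!"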
The Quality Gates plugin returns just a status: passed or failed, so you can trigger another Jenkins job based on the result of those two flags. But if you want the flag to be "passed" only when the coverage result is > 90%, you have to configure that in SonarQube, not Jenkins. In this situation you can imagine this scenario:
test coverage < 90% -> flag: failed. Jenkins doesn't call the other job.
test coverage > 90% -> flag: passed. Jenkins calls the other job.
I think this can help you somehow.

Jenkins to automate deb reprepro repository creation/signing

Problem statement:
I can sign repos from inside a normal terminal (also inside Docker). From a Jenkins job, repo creation/signing fails; the job hangs.
Configuration:
Jenkins spawns a Docker container to create/sign the deb repository.
Private and public keys are all present.
gpg-agent is installed in the Docker container to sign the packages.
The ~/.gnupg/gpg.conf file has "use-agent" enabled.
Progress:
I can start gpg-agent using Jenkins in the Docker container.
I can use gpg-preset-passphrase to cache the passphrase.
I can use [OUTSIDE JENKINS]
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
to fetch the passphrase from gpg-agent and sign the repo.
Problem:
From inside a Jenkins job, the command "reprepro --ask-passphrase -Vb ..." hangs.
Code:
Starting gpg-agent:
GPGAGENT=/usr/bin/gpg-agent
GNUPG_PID_FILE=${GNUPGHOME}/gpg-agent-info
GNUPG_CFG=${GNUPGHOME}/gpg.conf
GNUPG_CFG=${GNUPGHOME}/gpg-agent.conf
function start_gpg_agent {
GPG_TTY=$(tty)
export GPG_TTY
if [ -r "${GNUPG_PID_FILE}" ]
then
source "${GNUPG_PID_FILE}"
count=$(ps lax | grep "${GPGAGENT}" | grep "$SSH_AGENT_PID" | wc -l)
if [ $count -eq 0 ]
then
if ! ${GPGAGENT} 2>/dev/null; then
$GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
--daemon --enable-ssh-support \
--allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
if [[ $? -eq 0 ]]
then
echo "INFO::agent started"
else
echo "INFO::Agent could not be started. Exit."
exit -101
fi
fi
fi
else
$GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
--daemon --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
fi
}
options file:
default-cache-ttl 31536000
default-cache-ttl-ssh 31536000
max-cache-ttl 31536000
max-cache-ttl-ssh 31536000
enable-ssh-support
debug-all
Saving the passphrase:
/usr/lib/gnupg2/gpg-preset-passphrase -v --preset --passphrase ${_passphrase} ${_fp}
Finally (for completeness), sign the repo:
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
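A guess at the hang, not confirmed by the post: a Jenkins job has no controlling terminal, so GPG_TTY=$(tty) comes back empty and --ask-passphrase blocks waiting for a pinentry prompt that can never appear. Since the passphrase is already preset in gpg-agent, a sketch that avoids prompting altogether might look like this:

# guard the tty lookup: under Jenkins there is no terminal, so leave GPG_TTY unset
if tty -s; then
  GPG_TTY=$(tty)
  export GPG_TTY
fi

# the passphrase is already cached in gpg-agent via gpg-preset-passphrase,
# so let reprepro sign non-interactively instead of asking for it
/usr/lib/gnupg2/gpg-preset-passphrase -v --preset --passphrase "${_passphrase}" "${_fp}"
reprepro -Vb . includedeb "${_repo_name}" "${_pkg_location}"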
