In coverage.py, how to get minimum file coverage? - code-coverage

I am running coverage with this line, where $FILES is the list of files modified in the current pull request.
coverage run --include=$FILES -m pytest tests -vv -rf -x
This works fine; however, this is the result I get:
Name               Stmts   Miss  Cover
--------------------------------------
src/database.py      153     24    84%
src/endpoints.py      94     20    79%
src/models.py        184      0   100%
--------------------------------------
TOTAL                431     44    90%
As you can see, the TOTAL line is the combined coverage across all files, which is 90%. However, my desired check is that the minimum per-file coverage is over 80%, which endpoints.py fails. How do I get this info and check it?
I am running coverage in a GitHub Action, and this is the relevant part of my workflow file:
- name: Run pytest
  if: env.file_size > 1
  run: |
    FILES=$(sed 's/ /,/g' <<< "${{ steps.files.outputs.files_updated }} ${{ steps.files.outputs.files_created }}")
    echo "Files: $FILES"
    coverage run --include=$FILES -m pytest tests -vv -rf -x
  continue-on-error: true
- name: Build coverage report
  if: env.file_size > 1
  run: |
    COVERAGE=$(coverage report --include=$FILES)
    echo "$COVERAGE"
    echo 'COVERAGE<<EOF' >> $GITHUB_ENV
    echo "$COVERAGE" >> $GITHUB_ENV
    echo 'EOF' >> $GITHUB_ENV
- name: Get Coverage %
  if: env.file_size > 1
  run: |
    LAST_LINE=$(tail -1 <<< "${{ env.COVERAGE }}")
    SCORE=$(sed 's/.*[[:space:]]\([0-9]\+\)%/\1/' <<< "$LAST_LINE")
    echo "Modified file code coverage score is $SCORE"
    echo 'SCORE<<EOF' >> $GITHUB_ENV
    echo "$SCORE" >> $GITHUB_ENV
    echo 'EOF' >> $GITHUB_ENV

Coverage.py itself doesn't offer a minimum-file-coverage check. You can get the data in JSON form and then compute it yourself.
I wrote a proof-of-concept for assessing goals like this: https://nedbatchelder.com/blog/202111/coverage_goals.html
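If all you need is the minimum, a minimal sketch of that JSON approach could look like this. It assumes coverage.py 5.0+ (which provides the coverage json command) and takes the 80% threshold from the question:
coverage json -o coverage.json
python - <<'EOF'
# Minimal sketch: exit non-zero if any measured file is below the threshold.
import json, sys
THRESHOLD = 80.0  # the question's desired per-file minimum
data = json.load(open("coverage.json"))
worst_pct, worst_file = min(
    (f["summary"]["percent_covered"], name)
    for name, f in data["files"].items()
)
print(f"Minimum file coverage: {worst_pct:.0f}% ({worst_file})")
sys.exit(0 if worst_pct >= THRESHOLD else 1)
EOF
A non-zero exit code fails the workflow step, which is exactly what a per-file gate needs.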

Related

No such file found in Jenkins pipeline

Here is my Groovy file:
timestamps {
    node('cf_slave') {
        checkout scm
        stage('Read the file') {
            def PWD = pwd()
            withEnv(["prj_option=${params.project}"]) {
                def response = sh(returnStdout: true, script: 'sh \'jenkins/security/get_values.sh\'')
            }
        }
    }
}
This is my get_values.sh file:
echo "The project option is:" $prj_option
prj_name=$(echo "$prj_option" | tr '[:upper:]' '[:lower:]')
file_name="va_input_file_$prj_name.txt"
echo "The file name is:" $file_name
ls -la
chmod 775 jenkins/security/$file_name
ls -la
get_input_values() {
    file=$1
    IFS=''
    while read line
    do
        if [ `echo ${line} | grep -E -c -w "NAME_SPACE"` -gt 0 ]; then
            NAME_SPACE=$(echo "${line}" | cut -d'=' -f2)
            echo "The name space value is $NAME_SPACE"
        elif [ `echo ${line} | grep -E -c -w "IMAGE_NAMES"` -gt 0 ]; then
            values=$(echo "${line}" | cut -d'=' -f2)
            echo "The docker images are $values"
        else
            echo "Please provide input for namespace and docker images to be scanned by VA_TOOL"
        fi
    done < ${file}
}
images=$(get_input_values ${file_name})
So my text file is under the jenkins/security folder of the git repo, but unfortunately I am getting this error:
16:05:28 + sh jenkins/security/get_values.sh
16:05:28 jenkins/security/get_values.sh: 16: jenkins/security/get_values.sh: cannot open va_input_file_icp.txt: No such file
Unfortunately, there is a ticket for this in Jenkins (https://issues.jenkins-ci.org/browse/JENKINS-51245) which was closed as a duplicate of this ticket: (https://issues.jenkins-ci.org/browse/JENKINS-27413)
JENKINS-27413 was raised in 2015, and is still open. The File Parameter appears not to work in Jenkins Pipeline. It does however work when used in a Freestyle project. While not ideal, I would recommend changing your job to be a Freestyle project if that's feasible.

GNU parallel arguments

From the example
seq 1 100 | parallel -I ## \
> 'mkdir top-##;seq 1 100 | parallel -X mkdir top-##/sub-{}'
How do -X, ##, and {} work? Also, what is the behavior when '1' or '.' is placed inside {}? Is \> used for redirection here?
I was trying to go through the tutorial from https://www.youtube.com/watch?v=P40akGWJ_gY&list=PL284C9FF2488BC6D1&index=2 and reading through the man parallel page. I am able to gather some basic knowledge, but not exactly how to use it.
Let's do the easy stuff first.
The backslash (\) is just telling the shell that the following line is a continuation of the current one, and the greater than sign (>) is the shell prompting for the continuation line. It is no different from typing:
echo \
hi
where you will actually see this:
echo \
> hi
hi
So, I am saying you can ignore \> and just run the command on a single line.
Next, the things in {}. These are described in the GNU Parallel manual page, but essentially:
{1} refers to the first parameter
{2} refers to the second parameter, and so on
Test this with the following where the column separator is set to a space but we use the parameters in the reverse order:
echo A B | parallel --colsep ' ' echo {2} {1}
B A
{.} refers to a parameter, normally a filename, with its extension removed
Test this with:
echo fred.dat | parallel echo {.}
fred
Now let's come to the actual question, with the continuation line removed as described above and with everything on a single line:
seq 1 100 | parallel -I ## 'mkdir top-##;seq 1 100 | parallel -X mkdir top-##/sub-{}'
So, this is essentially running:
seq 1 100 | parallel -I ## 'ANOTHER COMMAND'
Ole has used ## in place of {} in this command so that the substitutions in the outer parallel command don't get confused with those in the second, inner one. So, where you see ##, you just need to replace it with the values from the first seq 1 100.
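You can see -I in isolation with a tiny test (my own example, not from the original command; -k just keeps the output in input order):
seq 1 3 | parallel -k -I ## echo 'outer value is ##'
outer value is 1
outer value is 2
outer value is 3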
The second parallel command is pretty much the same as the first one, but here Ole has used -X. If you watch the video you link to, you will see that he shows how it works earlier on. It passes "as many parameters as possible" to a single command, subject to the system's ARG_MAX limit. So, if you want 10,000 directories created, instead of this:
seq 1 10000 | parallel mkdir {}
which will start 10,000 separate processes, each one running mkdir, you will start one mkdir but with 10,000 parameters:
seq 1 10000 | parallel -X mkdir
That avoids the need to create 10,000 separate processes and speeds things up.
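You can see what -X does with a dry run (again a small illustration of mine; -j1 forces a single job slot so all the arguments land on one command line):
seq 1 8 | parallel -j1 -X --dry-run mkdir
mkdir 1 2 3 4 5 6 7 8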
Let's now look at the outer parallel invocation and do a dry run to see what it would do, without actually doing anything:
seq 1 100 | parallel -k --dry-run -I ## 'mkdir top-##;seq 1 100 | parallel -X mkdir top-##/sub-{}'
Output
mkdir top-1;seq 1 100 | parallel -X mkdir top-1/sub-{}
mkdir top-2;seq 1 100 | parallel -X mkdir top-2/sub-{}
mkdir top-3;seq 1 100 | parallel -X mkdir top-3/sub-{}
mkdir top-4;seq 1 100 | parallel -X mkdir top-4/sub-{}
mkdir top-5;seq 1 100 | parallel -X mkdir top-5/sub-{}
mkdir top-6;seq 1 100 | parallel -X mkdir top-6/sub-{}
mkdir top-7;seq 1 100 | parallel -X mkdir top-7/sub-{}
mkdir top-8;seq 1 100 | parallel -X mkdir top-8/sub-{}
...
...
mkdir top-99;seq 1 100 | parallel -X mkdir top-99/sub-{}
mkdir top-100;seq 1 100 | parallel -X mkdir top-100/sub-{}
So, now you can see it is going to start 100 processes, each of which will make a directory then start 100 further processes that will each create 100 subdirectories.

Jenkins doesn't pick up Quality Gate failure

I want my Jenkins build to fail if the code doesn't have 90% test coverage. For that, I have installed the Quality Gates plugin, which should check the SonarQube analysis.
I have the following configuration in Jenkins, under Quality Gates:
Name: SonarQubeServer
SonarQube Server URL: http://my-server.com:9000
SonarQube account login: admin
SonarQube account password: ****
SonarQube displays: Quality Gate Failed
Jenkins displays: SonarQube analysis completed: SUCCESS and the build passes.
Any idea why Jenkins doesn't get that the quality gate failed?
Eventually I realised that I should have added Quality Gates as a Post Build Action for every job I was using it on.
You can do that with shell commands; sharing this info in case someone needs it.
To mark the build as a failure when the quality gate is not passed, using the Sonar REST API: add an "Execute Shell" step after the Sonar step and use the code below.
Tip: introduce a sleep of 10s before this step, just to ensure the Sonar site has been updated with the task result status.
# Fetch the task URL from report-task.txt in the workspace
url=$(cat $WORKSPACE/.sonar/report-task.txt | grep ceTaskUrl | cut -c11- )

# Fetch the task attributes from the Sonar server
curl -u admin:${admin_pwd} -L $url | python -m json.tool

# Record the task status, to check whether the Sonar scan completed successfully
curl -u admin:${admin_pwd} -L $url -o task.json
status=$(python -m json.tool < task.json | grep -i "status" | cut -c20- | sed 's/.\(.\)$/\1/' | sed 's/.$//')
echo ${status}

# If the Sonar scan completed successfully, set the analysis ID and URL
if [ $status = SUCCESS ]; then
    analysisID=$(python -m json.tool < task.json | grep -i "analysisId" | cut -c24- | sed 's/.\(.\)$/\1/' | sed 's/.$//')
    analysisUrl="https://sonar.net/api/qualitygates/project_status?analysisId=${analysisID}"
    echo ${analysisID}
    echo ${analysisUrl}
else
    echo "Sonar run was not a success"
    exit 1
fi

# Fetch the quality gate details using the analysis URL
curl -u admin:$admin_pwd ${analysisUrl} | python -m json.tool
curl -u admin:$admin_pwd ${analysisUrl} | python -m json.tool | grep -i "status" | cut -c28- | sed 's/.$//' >> tmp.txt
cat tmp.txt
sed -n '/ERROR/p' tmp.txt >> error.txt
cat error.txt
if [ $(cat error.txt | wc -l) -eq 0 ]; then
    echo "Quality Gate Passed ! Setting up SonarQube Job Status to Success !"
else
    echo "Quality Gate Failed ! Setting up SonarQube Job Status to Failure !"
    exit 1
fi

# Clean up the files and variables
rm -f task.json tmp.txt error.txt
unset url status analysisID analysisUrl
In response to Sri, whose solution has some typos/errors. This is SonarQube 4.5.5, building with sonar-scanner:
if [ -e tmp.txt ]; then
    rm tmp.txt
    rm error.txt
    rm task.json
fi
sleep 5
cat $WORKSPACE/.scannerwork/report-task.txt
url=$(cat $WORKSPACE/.scannerwork/report-task.txt | grep ceTaskUrl | cut -c11- )
echo $url
curl -u admin:pswd -L $url | python -m json.tool
curl -u admin:pswd -L $url -o task.json
status=$(python -m json.tool < task.json | grep -i "status" | cut --delimiter=: --fields=2 | sed 's/"//g' | sed 's/,//g' )
echo ${status}
if [ $status = SUCCESS ]; then
    analysisID=$(python -m json.tool < task.json | grep -i "analysisId" | cut -c24- | sed 's/"//g' | sed 's/,//g')
    analysisUrl="http://sonarserver/sonarqube/api/qualitygates/project_status?analysisId=${analysisID}"
    echo ${analysisID}
    echo ${analysisUrl}
else
    echo "Sonar run was not a success"
    exit 1
fi
curl -u admin:pswd ${analysisUrl} | python -m json.tool
curl -u admin:pswd ${analysisUrl} | python -m json.tool | grep -i "status" | cut -c28- | sed 's/.$//' >> tmp.txt
cat tmp.txt
sed -n '/ERROR/p' tmp.txt >> error.txt
cat error.txt
if [ $(cat error.txt | wc -l) -eq 0 ]; then
    echo "Quality Gate Passed ! Setting up SonarQube Job Status to Success !"
else
    echo "Quality Gate Failed ! Setting up SonarQube Job Status to Failure !"
    exit 1
fi
The Quality Gates plugin returns just a status, passed or failed, so you can trigger other Jenkins jobs based on those two flags. But if you want the flag to be "passed" only when the coverage result is > 90%, you have to configure that in SonarQube, not Jenkins. In that situation you can imagine this scenario:
test coverage < 90 -> flag: failed. Jenkins doesn't call the other job.
test coverage > 90 -> flag: passed. Jenkins calls the other job.
I think this can help you somehow.
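If it helps, the threshold itself can also be set through SonarQube's web API instead of the UI. A hedged sketch (the create_condition endpoint exists, but parameter names vary across SonarQube versions, and the gateId, credentials and URL below are illustrative):
curl -u admin:${admin_pwd} -X POST \
  "http://my-server.com:9000/api/qualitygates/create_condition" \
  -d "gateId=1" -d "metric=coverage" -d "op=LT" -d "error=90"
With op=LT and error=90, the gate reports failed whenever coverage drops below 90%.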

Error running egrep on Solaris

Sun OS 5.8
Bash shell script
Oracle 10g database
Error 1 the command executing at the time of the error was egrep ORA-\|TNS-\|PLS-\|Error\|PLW-\|IMP-\|EXP-\|RMAN-\|SQL- alert_work.log > alert.err on line 11
The "egrep" line runs successfully when I run it manually. But in a bash script (cron job) it gets the above error. Here is the script:
#!/bin/bash
SID=$ORACLE_SID
DOMAIN=$(uname -n)
DBALIST='dbak#xxx.com'
YESTERDAY=`TZ=CST+24 date +%Y-%m-%d`
cd $ORACLE_HOME/admin/$SID/bdump
mv alert_${SID}.log alert_work.log
touch alert_${SID}.log
cat alert_work.log >> alert_${SID}.hist
egrep ORA-\|TNS-\|PLS-\|Error\|PLW-\|IMP-\|EXP-\|RMAN-\|SQL- alert_work.log > alert.err
if [ `cat alert.err | wc -l` -gt 0 ]
then
    mailx -s "${DOMAIN}.${SID} ALERT LOG ERRORS FOUND" $DBALIST < alert.err
fi
/usr/bin/mv alert_work.log $ORACLE_HOME/admin/$SID/bdump/hist/alert_${SID}_${YESTERDAY}.log
exit
I am suspicious of your egrep regular expression. The fact that you have not quoted it, and that you create the script from within a Bash script and then run that script, leads me to think that you will end up with:
egrep ORA-|TNS-|PLS-|Error|PLW-|IMP-|EXP-|RMAN-|SQL- alert_work.log > alert.err
which is not what you intended. Try:
egrep 'ORA-\|TNS-\|PLS-\|Error\|PLW-\|IMP-\|EXP-\|RMAN-\|SQL-' alert_work.log > alert.err
That should preserve the backslashes.
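You can see the difference the quoting makes with a quick test of my own:
echo ORA-\|TNS-\|PLS-
ORA-|TNS-|PLS-
echo 'ORA-\|TNS-\|PLS-'
ORA-\|TNS-\|PLS-
Unquoted, the shell consumes the backslashes before egrep ever sees them; quoted, they reach egrep intact.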

Jenkins - Configure Jenkins to poll changes in SCM

I am working with Jenkins and I would like to run the Maven goals when there is a change in the SVN repository. I've attached a picture of my current configuration.
I know that checking the repository every 5 minutes is crazy. I would like to run it only when there is a new change, but I could not find a way. Anyway, it is not checking the repository at all. What am I doing wrong?
I believe best practice these days is H/5 * * * *, which means every 5 minutes with a hashing factor to avoid all jobs starting at EXACTLY the same time.
I think your cron expression is not correct. According to what you describe, you may need to change the schedule to
*/5 * * * *
What you have in your schedule now means it will poll the SCM at 5 minutes past every hour.
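For reference, the three schedules side by side (standard Jenkins cron syntax):
H/5 * * * *    # every 5 minutes, hashed so jobs don't all start at once
*/5 * * * *    # every 5 minutes, exactly on the 0/5 minute marks
5 * * * *      # once an hour, at 5 minutes past the hour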
That's an old question, I know, but in my opinion it is missing the proper answer.
The optimal workflow here is to use SVN's post-commit hook so that the Jenkins job is triggered only after an actual commit is made, and in no other case. This way you avoid unneeded polls of your SCM system.
You may find the following links interesting:
The Jenkins wiki's post-commit hook description on the Subversion Plugin's doc site, where you will find a documented example of the script you are interested in.
The hook-scripts contrib directory in the official Apache Subversion source repository.
A similar question on Stack Overflow.
In my setup on the corporate SVN server, I use the following (censored) script as a post-commit hook on the Subversion server side:
#!/bin/sh
# POST-COMMIT HOOK
REPOS="$1"
REV="$2"
#TXN_NAME="$3"
LOGFILE=/var/log/xxx/svn/xxx.post-commit.log
MSG=$(svnlook pg --revprop $REPOS svn:log -r$REV)
JENK="http://jenkins.xxx.com:8080/job/xxx/job/xxx/buildWithParameters?token=xxx&username=xxx&cause=xxx+r$REV"
JENKtest="http://jenkins.xxx.com:8080/view/all/job/xxx/job/xxxx/buildWithParameters?token=xxx&username=xxx&cause=xxx+r$REV"

echo post-commit $* >> $LOGFILE 2>&1

# trigger Jenkins job - xxx
svnlook changed $REPOS -r $REV | cut -d' ' -f4 | grep -qP "branches/xxx/xxx/Source"
if test 0 -eq $? ; then
    echo $(date) - $REPOS - $REV: >> $LOGFILE
    svnlook changed $REPOS -r $REV | cut -d' ' -f4 | grep -P "branches/xxx/xxx/Source" >> $LOGFILE 2>&1
    echo logmsg: $MSG >> $LOGFILE 2>&1
    echo curl -qs $JENK >> $LOGFILE 2>&1
    curl -qs $JENK >> $LOGFILE 2>&1
    echo -------- >> $LOGFILE
fi

# trigger Jenkins job - xxxx
svnlook changed $REPOS -r $REV | cut -d' ' -f4 | grep -qP "branches/xxx_TEST"
if test 0 -eq $? ; then
    echo $(date) - $REPOS - $REV: >> $LOGFILE
    svnlook changed $REPOS -r $REV | cut -d' ' -f4 | grep -P "branches/xxx_TEST" >> $LOGFILE 2>&1
    echo logmsg: $MSG >> $LOGFILE 2>&1
    echo curl -qs $JENKtest >> $LOGFILE 2>&1
    curl -qs $JENKtest >> $LOGFILE 2>&1
    echo -------- >> $LOGFILE
fi

exit 0
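As a usage note (the repository path below is illustrative): on the Subversion server, the script goes into the repository's hooks directory and must be executable:
cp post-commit /path/to/repos/hooks/post-commit
chmod +x /path/to/repos/hooks/post-commit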
