unable to get pylint output to populate the violations graph - jenkins

my build steps:
cd $WORKSPACE
export TERM="linux"
. venv/bin/activate
pylint --rcfile=pylint.cfg $(find handlers -maxdepth 1 -name "*.py" -print) > pylint.log || exit 0
contents of pylint.log:
************* Module handlers
C: 1, 0: Missing module docstring (missing-docstring)
C: 8, 0: Missing function docstring (missing-docstring)
************* Module handlers.foo
C: 1, 0: Black listed name "foo" (blacklisted-name)
C: 1, 0: Missing module docstring (missing-docstring)
C: 1, 0: Missing function docstring (missing-docstring)
E: 2,11: Undefined variable 'a' (undefined-variable)
E: 2,13: Undefined variable 'b' (undefined-variable)
Report
======
...
(the report continues with statistics by type, raw metrics, external dependencies)
the xml filename pattern for pylint is:
**/pylint.log
with the source path pattern being:
**/
Even after all this, and with pylint.log showing that I have lint errors, the graph shows nothing.
Any ideas how to get pylint and the Violations plugin working nicely together?

It seems the correct pylint command is the following:
pylint --rcfile=pylint.cfg $(find handlers -maxdepth 1 -name "*.py" -print) --msg-template="{path}:{line}: [{msg_id}({symbol}), {obj}] {msg}" > pylint.log || exit 0
Note the addition of the --msg-template parameter.
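With that template, each violation comes out in a machine-parseable path:line form that the plugin can pick up. A hypothetical sample line (the path and message are illustrative, not taken from the log above):

handlers/foo.py:2: [E0602(undefined-variable), foo] Undefined variable 'a'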

I had this problem myself. I think it was a result of the way I installed the Violations plugin. It worked after restarting Jenkins:
$ sudo service jenkins restart

Related

bwa fail to load index using nextflow

I am writing a bwa mapping module using Nextflow (DSL 2), modules/map_reads.nf, to map single-end reads. When I execute this workflow it does not return an error in the terminal, and it outputs BAM files with the correct file names. However, I found that the BAM files are not correctly mapped, and I also found an error in .command.err:
[E::bwa_idx_load_from_disk] fail to locate the index files
I have checked that the paths are correct and have also executed the shell command directly in the terminal.
I would appreciate any suggestions or solutions to this problem.
modules/map_reads.nf
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

process mapping {
    conda 'envs/bwa.yml'
    publishDir 'results/mapped', mode: 'copy'

    input:
    tuple val(sample_id), file(fastq)
    file index

    output:
    tuple val(sample_id), file('*.bam')

    script:
    """
    bwa mem $index $fastq | samtools view -b - > ${sample_id}.bam
    """
}

workflow {
    fastq_data = channel.fromPath( 'data/samples/*.fastq' ).map { file -> tuple(file.baseName, file) }
    index = channel.fromPath( 'data/genome.fa' )
    mapping( fastq_data, index )
}
Here is my conda environment file, envs/bwa.yml:
name: bwa
channels:
- bioconda
- defaults
dependencies:
- bwa
- samtools=1.9
[E::bwa_idx_load_from_disk] fail to locate the index files
BWA MEM expects a number of index files to be provided via its first argument, but you've only localized the genome FASTA file:
index = channel.fromPath( 'data/genome.fa' )
BWA MEM only requires the actual index files; it doesn't require the FASTA file (or FASTA index), so you can save some time and resources by skipping the localization of the FASTA (this is especially relevant if you localize from S3, for example, since the FASTA file is often a couple of gigabytes). Also ensure you use a value channel here, so that the channel can be used an unlimited number of times:
process mapping {
    conda 'envs/bwa.yml'
    publishDir 'results/mapped', mode: 'copy'

    input:
    tuple val(sample_id), path(fastq)
    path bwa_index

    output:
    tuple val(sample_id), path("${sample_id}.bam")

    script:
    def idxbase = bwa_index[0].baseName
    """
    bwa mem "${idxbase}" "${fastq}" | samtools view -b - > "${sample_id}.bam"
    """
}

workflow {
    fastq_data = channel.fromPath( 'data/samples/*.fastq' ).map { file ->
        tuple(file.baseName, file)
    }
    bwa_index = file( 'data/genome.fa.{amb,ann,bwt,pac,sa}' )
    mapping( fastq_data, bwa_index )
}
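This assumes the index files already exist alongside the FASTA. If they don't, they can be generated once with bwa index, which produces exactly the .amb, .ann, .bwt, .pac and .sa files the glob above expects:

bwa index data/genome.fa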
Also, the reason you didn't see an error on the command line is that, by default, the return value of a pipeline is the exit status of the last command. In your BWA MEM / SAMtools pipeline, the last command (i.e. samtools) completes successfully and returns exit status zero. The option you want to add is called 'pipefail'; from man bash:
pipefail
        If set, the return value of a pipeline is the value of the
        last (rightmost) command to exit with a non-zero status, or
        zero if all commands in the pipeline exit successfully. This
        option is disabled by default.
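A quick way to see the difference:

$ bash -c 'false | true; echo $?'
0
$ bash -c 'set -o pipefail; false | true; echo $?'
1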
Usually, you can just add the following to your nextflow.config to have it applied to all processes:
process {
    shell = [ '/bin/bash', '-euo', 'pipefail' ]
}
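Equivalently, if you'd rather not change the shell globally, the option can be set at the top of an individual process script; a minimal sketch:

script:
"""
set -o pipefail
bwa mem "${idxbase}" "${fastq}" | samtools view -b - > "${sample_id}.bam"
"""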

gcov generating correct output but gcovr does not

Running through the getting-started example from gcovr (https://gcovr.com/en/stable/guide.html#getting-started), I can build the file, and I am seeing the following output from running gcovr -r .:
% gcovr -r .
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
example.cpp 0 0 --%
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------
If I run gcov example.cpp directly I can see that the generated .gcov data is correct:
% gcov example.cpp
File 'example.cpp'
Lines executed:87.50% of 8
Creating 'example.cpp.gcov'
I am unsure where the disconnect is between this gcov output and gcovr's interpretation of it.
I have tried downgrading to an older gcovr version, running the command on other projects, and switching Python versions, but have not seen any different behavior.
My gcov and gcc are from the Xcode command line tools; gcovr was installed with pip (within pyenv, with Python 3.8.5).
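For reference, the build-and-run steps in that guide are roughly as follows (flags recalled from the guide, so treat this as a sketch):

g++ -fprofile-arcs -ftest-coverage example.cpp -o program
./program
gcovr -r .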
Edit: adding verbose output:
gcovr -r . -v
Filters for --root: (1)
- re.compile('^/Test/')
Filters for --filter: (1)
- DirectoryPrefixFilter(/Test/)
Filters for --exclude: (0)
Filters for --gcov-filter: (1)
- AlwaysMatchFilter()
Filters for --gcov-exclude: (0)
Filters for --exclude-directories: (0)
Scanning directory . for gcda/gcno files...
Found 2 files (and will process 1)
Pool started with 1 threads
Processing file: /Test/example.gcda
Running gcov: 'gcov /Test/example.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /Test' in '/var/folders/bc/20q4mkss6457skh36yzgm2bw0000gp/T/tmpo4mr2wh4'
Finding source file corresponding to a gcov data file
currdir /Test
gcov_fname /var/folders/bc/20q4mkss6457skh36yzgm2bw0000gp/T/tmpo4mr2wh4/example.cpp.gcov
[' -', ' 0', 'Source', 'example.cpp\n']
source_fname /Test/example.gcda
root /Test
fname /Test/example.cpp
Parsing coverage data for file /Test/example.cpp
Gathered coveraged data for 1 files
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
example.cpp 0 0 --%
------------------------------------------------------------------------------
TOTAL 0 0 --%
------------------------------------------------------------------------------

Parsing config file with sections in Jenkins Pipeline and get specific section

I have to parse a config file with section values in a Jenkins Pipeline. Below is the example config file:
[deployment]
10.7.1.14
[control]
10.7.1.22
10.7.1.41
10.7.1.17
[worker]
10.7.1.45
10.7.1.42
10.7.1.49
10.7.1.43
10.7.1.39
[edge]
10.7.1.13
Expected Output:
control1 = 10.7.1.17, control2 = 10.7.1.22, control3 = 10.7.1.41
I tried the code below in my Jenkins Pipeline script section, but it seems to be the wrong function to use:
def cluster_details = readProperties interpolate: true, file: 'inventory'
echo cluster_details
def Var1= cluster_details['control']
echo "Var1=${Var1}"
Could you please help me with an approach to achieve the expected result?
According to the documentation, readProperties reads Java properties files, not INI files:
https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#readproperties-read-properties-from-files-in-the-workspace-or-text
I think that to read an INI file you have to find a library for that,
e.g. https://ourcodeworld.com/articles/read/839/how-to-read-parse-from-and-write-to-ini-files-easily-in-java
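Alternatively, since the file is just a host inventory, a plain shell step can pull a section out directly; a minimal awk sketch (assuming the file is named inventory):

# print the hosts listed under [control], stopping at the next section
awk '/^\[/ { in_section = ($0 == "[control]"); next }
     in_section && NF { print }' inventory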
I got the solution to the problem:
control_nodes = sh (script: """
manish=\$(ansible control -i inventory --list-host |sort -t . -g -k1,1 -k2,2 -k3,3 -k4,4 |awk '{if(NR>1)print}' |awk '{\$1=\$1;print}') ; \
echo \$manish
""",returnStdout: true).trim()
echo "Cluster Control Nodes are : ${control_nodes}"
def (control_ip1,control_ip2,control_ip3) = control_nodes.split(' ')
//println c1 // this also works
echo "Control 1: ${control_ip1}"
echo "Control 2: ${control_ip2}"
echo "Control 3: ${control_ip3}"
Explanation:
In the script section, I am getting the list of hostnames. Using sort, I sort the hostnames on the dot (.) delimiter; the first awk then removes the first line of the output, and the second awk removes the leading whitespace.
Using returnStdout saves the shell output to a Jenkins variable, which holds the list of IPs separated by whitespace.
Once I have the values in the Jenkins variable, I extract the individual IPs using the split method.
Hope it helps.

what does `{}` mean as a file name in output from egrep?

I am on Ubuntu 12.04, and I ran a find command to add something to all of my Python files:
find . -iname "*.py" -exec echo "import os" >> {} \;
The command runs without error and I want to validate the results so I egrep all of the files:
egrep -in "import os" *
And I get results looking like this:
{}:35:import os
{}:36:import os
{}:37:import os
{}:38:import os
{}:39:import os
...and the numbers go up to 51 for some reason. What does this mean?
Thank you.
Your first command:
find . -iname "*.py" -exec echo "import os" >> {} \;
is looking for files ending in .py, and for each one it is putting the string "import os" into a file called {}. Presumably there are 51 matches.
So when you run egrep, the * matches all files, including your file called {}. With {}:35:import os it is telling you that in the file {}, at line 35, there is the string you are looking for.
This command:
find . -iname "*.py" -exec echo "import os" >> {} \;
...creates a file named {} (in bash, and in other shells which honor redirections in positions other than the head and tail of a command; this is an extension which the POSIX sh standard does not require). It does not modify the files found by find. This is because the >> acts as a redirection for the shell that starts find; it does not modify the behavior of -exec. And even if it did, -exec invokes the given command directly with execve(); it does not start that command through a shell, so it does not honor shell constructs such as redirections, and on any shell not implementing the extension above you would simply be passing a literal >> as an argument to echo, still not performing a redirection into the individual files found.
Now, if you did want to modify the files found by find, you might do so like this:
find . -iname '*.py' -exec sh -c 'for f; do echo "import os" >>"$f"; done' _ {} +
Noteworthy differences:
The redirection is performed inside a shell started by -exec sh -c; thus, there is a shell present to honor it after the individual filenames have been resolved. (The _ fills in $0, so every filename lands in the positional parameters that for f iterates over.)
-exec ... {} + is used, which is much more efficient than -exec ... {} \; (the former runs as few shells as possible; the latter runs one command per file found).
{} is a placeholder that find replaces with each filename matching the given conditions; in this case, {} would be replaced with the filenames that match the pattern "*.py".
However, your find command isn't actually doing that: the >> {} is not part of the -exec block but is interpreted by the shell as a redirect for the whole find command, so the {} never gets replaced by find, and instead you are redirecting into a file literally called {}. To make things clearer, the command you are actually executing is this:
find . -iname "*.py" -exec echo "import os" \; >> {}
Meaning: for every *.py file, you add a line containing "import os" to a file called {}. The output of grep is just filename:linenumber:matched_line, so you get a {} in there, as that is the filename.
If you are wondering how the \; survives and why you are not getting:
find: missing argument to `-exec'
The shell doesn't actually care where in the command line the redirect occurs:
echo 1 2 3 4 5 6 7 > foo
is the same as:
echo 1 2 > foo 3 4 5 6 7
and gives you this each time:
$ cat foo
1 2 3 4 5 6 7
Also worth mentioning: >> is an append operator, so even after you fix your command you would be adding to the end of the Python files, while import os probably should go at the top of each file.
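If the goal is to insert the line at the top of each file instead, GNU sed (the default on Ubuntu) can do the insertion in place; a minimal sketch:

# insert "import os" before line 1 of every .py file
# (note: sed's 1i never fires on a completely empty file)
find . -iname '*.py' -exec sed -i '1i import os' {} +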

waff wiki function in ns-3 does not get parameters

In ns-3 simulator documentation they provide a simple bash function to ease your life:
function waff {
    CWD="$PWD"
    cd $NS3DIR
    ./waf --cwd="$CWD" $*
    cd -
}
This function is supposed to execute the ./waf program that lives in the ns-3 root folder, but with the working directory set to the folder you are actually in.
So, in the case of ~/project$ waff --run first, waf will run the first script in the ~/project folder.
But if I try to run any simulation by adding a parameter to the script's command, like ~/project$ waff --run "first --PrintHelp", it throws an error:
waf: error: no such option: --PrintHelp
It only works when I actually run the scripts from the root folder without the waff function.
How do I modify the function to make it expand the $* into a single double-quoted argument?
Well, I feel embarrassed, because the solution was way easier than expected.
If anyone using DCE has the same problem, it's as easy as quoting the $*, replacing:
./waf --cwd="$CWD" $*
with:
./waf --cwd="$CWD" "$*"
This function works for me with bash (assuming you have defined the environment variable $NS3DIR):
function waff {
    CWD="$PWD"
    cd $NS3DIR >/dev/null
    ./waf --cwd="$CWD" "$@"
    cd - >/dev/null
}
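Note the "$@": inside double quotes, $@ expands each argument as a separate word, while "$*" joins all the arguments into a single word. A quick illustration (show_args is a hypothetical helper, just to make the expansion visible):

show_args() { for a in "$@"; do printf '<%s>\n' "$a"; done; }

star() { show_args "$*"; }
at()   { show_args "$@"; }

star --run "first --PrintHelp"   # one word:  <--run first --PrintHelp>
at   --run "first --PrintHelp"   # two words: <--run> <first --PrintHelp>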
Proof that it works:
$ waff --run "wifi-simple-adhoc --help"
Waf: Entering directory `/home'
Waf: Leaving directory `/home'
'build' finished successfully (2.013s)
ns3.22-wifi-simple-adhoc-debug [Program Arguments] [General Arguments]
Program Arguments:
--phyMode: Wifi Phy mode [DsssRate1Mbps]
--rss: received signal strength [-80]
--packetSize: size of application packet sent [1000]
--numPackets: number of packets generated [1]
--interval: interval (seconds) between packets [1]
--verbose: turn on all WifiNetDevice log components [false]
General Arguments:
--PrintGlobals: Print the list of globals.
--PrintGroups: Print the list of groups.
--PrintGroup=[group]: Print all TypeIds of group.
--PrintTypeIds: Print all TypeIds.
--PrintAttributes=[typeid]: Print all attributes of typeid.
--PrintHelp: Print this help message.
$ waff --run wifi-simple-adhoc --command-template=" %s --help"
Waf: Entering directory `/home'
Waf: Leaving directory `/home'
'build' finished successfully (1.816s)
ns3.22-wifi-simple-adhoc-debug [Program Arguments] [General Arguments]
Program Arguments:
--phyMode: Wifi Phy mode [DsssRate1Mbps]
--rss: received signal strength [-80]
--packetSize: size of application packet sent [1000]
--numPackets: number of packets generated [1]
--interval: interval (seconds) between packets [1]
--verbose: turn on all WifiNetDevice log components [false]
General Arguments:
--PrintGlobals: Print the list of globals.
--PrintGroups: Print the list of groups.
--PrintGroup=[group]: Print all TypeIds of group.
--PrintTypeIds: Print all TypeIds.
--PrintAttributes=[typeid]: Print all attributes of typeid.
--PrintHelp: Print this help message.
