Jenkins PyLint Warnings tool parses log files but reports 'found 0 issues'

I have set up Jenkins to run pylint on all Python source files, and all the log files are generated (apparently correctly) into a sub-directory as follows:
Source\pylint_logs\pylint1.log, pylint2.log, ..., pylint75.log
I have included a --msg-template definition based on the instructions on my Jenkins Configure page: Post-build Actions->Record compiler warnings and static analysis results->Static Analysis Tools. The template is shown as:
msg-template={path}:{line}: [{msg_id}, {obj}] {msg} ({symbol})
An example of one of the log files being generated by Jenkins/pylint is as follows:
************* Module FigureView
myapp\Views\FigureView.py:1: [C0103, ] Module name "FigureView" doesn't conform to snake_case naming style (invalid-name)
myapp\Views\FigureView.py:30: [C0103, FigureView.__init__] Attribute name "ax" doesn't conform to snake_case naming style (invalid-name)
------------------------------------------------------------------
Your code has been rated at 8.57/10 (previous run: 8.57/10, +0.00)
For the PyLint Report File Pattern, I have: Source/pylint_logs/pylint*.log
It appears that PyLint Warnings is parsing the files because the console output looks like this:
[PyLint] Searching for all files in 'D:\Jenkins\workspace\PROJECT' that match the pattern 'Source/pylint_logs/pylint*.log'
[PyLint] -> found 75 files
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint1.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
[PyLint] Successfully parsed file D:\Jenkins\workspace\PROJECT\Source\pylint_logs\pylint10.log
[PyLint] -> found 0 issues (skipped 0 duplicates)
This repeats for all 75 files, even though there are plenty of issues in the log files.
What is odd is that when I was first prototyping the use of Jenkins on this project, I set it up to just run pylint on a single file. I ran across another StackOverflow post that showed a msg-template that allowed me to get it working (unable to get pylint output to populate the violations graph). I even got the graph to show up for the PyLint Warnings Trend. I used the following definition per the post:
msg-template={path}:{line}: [{msg_id}({symbol}), {obj}] {msg}
Note that this format is slightly different from the one recommended by my Jenkins page (shown earlier). Even though this worked for a single file, neither template now seems to work for multiple files, or else there is something other than the template causing the problem. My graph has flat-lined, and I always get 0 issues reported.
I have had trouble finding useful documentation on the Jenkins PyLint Warnings tool. Does anyone have any ideas or pointers to documentation I can research further? Thanks much!

Ensure you pass the --output-format parameter on the pylint command line. Example:
pylint --exit-zero --output-format=parseable module1 module2 > pylint.report
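If you drive the analysis from a pipeline rather than a freestyle job, the same reports can be recorded with the Warnings NG plugin's recordIssues step. A minimal sketch, assuming the warnings-ng plugin is installed (the stage name is arbitrary and the pattern mirrors the question's layout):
stage('Record pylint warnings') {
    steps {
        recordIssues tools: [pyLint(pattern: 'Source/pylint_logs/pylint*.log')]
    }
}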

You have to set pylint's --msg-template option in .pylintrc as:
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}
output-format=text
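For reference, in a .pylintrc generated with pylint --generate-rcfile both options live under the [REPORTS] section; a minimal sketch (verify the section name against your pylint version):
[REPORTS]
output-format=text
msg-template={path}: {line}: [{msg_id} ({symbol}), {obj}] {msg}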

Related

How to "reduce" Jenkins Pipeline output path

We were building our solution without any "Pipeline" in Jenkins until recently, so I'm currently in the process of moving our build to multibranch pipelines.
The issue I'm running into is that our solution has a lot of structure (lots of subfolders, and sometimes long names).
Currently, the Jenkins pipeline checks everything out into a folder that looks like:
D:\ws\ght-build_feature_pipelines-TMQ33LB5OQIQ5VXVMFKFDG2HWCD4MUOGEGUWJUOMZ5D2GI42BIQA
which is very long, and we are now reaching MSBuild's 260-character path limit:
C:\Program Files (x86)\Microsoft Visual
Studio\2017\Professional\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2991,5):
error MSB3553: Resource file
"obj\Release\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv.dddddddddd.Resources.resources"
has an invalid name. The item metadata "%(FullPath)" cannot be applied
to the path
"obj\Release\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv.dddddddddd.Resources.resources".
The specified path, file name, or both are too long. The fully
qualified file name must be less than 260 characters, and the
directory name must be less than 248 characters.
[D:\ws\ght-build_feature_pipelines-TMQ33LB5OQIQ5VXVMFKFDG2HWCD4MUOGEGUWJUOMZ5D2GI42BIQA\Src\bbbbbb\dddddd\dddddddddddddd\yyyyyyy\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv\xx.aaaaaaaaaa.yyy.bbbbbb.dddddddddddddd.yyyyyyy.vvv.csproj]
We have so many cases where the paths are long that refactoring everything would be a huge job, so I'm looking for how to give Jenkins a shorter workspace path.
What I finally did:
pipeline {
    agent {
        node {
            label 'windows-node'
            customWorkspace "D:\\ws\\${env.BRANCH_NAME}"
        }
    }
    options {
        skipDefaultCheckout()
    }
    ...
}
And I have a step that does the checkout; a minimal sketch of it follows below. It was easier for me to have a per-job behavior, without touching the Jenkins global settings.
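The checkout stage can be as simple as this (the stage name is arbitrary; checkout scm checks out the revision the multibranch job is tracking):
stage('Checkout') {
    steps {
        checkout scm
    }
}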
Update (for any recent Jenkins instances)
Turns out that with recent Jenkins versions PATH_MAX seems to be ignored.
The only thing it does is issue a warning in the Jenkins log when it is set below a certain value, which does not actually matter, as the setting itself is ignored anyway (as seen on Jenkins 2.249.3). See also: JENKINS-2111
As far as I can tell, the new setting was introduced in jenkins-branch-api 2.0.21:
There's a new property: MAX_LENGTH.
It defaults to 32 characters.
You can set it the same way as PATH_MAX:
As a Java property, to ensure that Jenkins starts up using the right setting, e.g.:
-Djenkins.branch.WorkspaceLocatorImpl.MAX_LENGTH=40
or during run-time, using the script console:
jenkins.branch.WorkspaceLocatorImpl.MAX_LENGTH=40
For older Jenkins instances
Actually, there's a Java property you can set to specify the length of the directory name, e.g.:
-Djenkins.branch.WorkspaceLocatorImpl.PATH_MAX=20
To make it permanent, you have to specify this property in the Jenkins Java startup configuration file.
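For example, on a Debian-style install this would typically go into /etc/default/jenkins (the file location and the existing arguments depend on how Jenkins was installed, so treat this line as a sketch):
JAVA_ARGS="-Djava.awt.headless=true -Djenkins.branch.WorkspaceLocatorImpl.PATH_MAX=20"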
You may also read and write this property using the Jenkins script console, for temporary changes or just to give it a try; it takes effect immediately, e.g.:
println jenkins.branch.WorkspaceLocatorImpl.PATH_MAX
jenkins.branch.WorkspaceLocatorImpl.PATH_MAX = 20
println jenkins.branch.WorkspaceLocatorImpl.PATH_MAX
Setting this value to 0 disables the path shortening and restores the old path-generation behavior.
For details please check:
https://issues.jenkins-ci.org/browse/JENKINS-34564
https://issues.jenkins-ci.org/browse/JENKINS-38706

Different checksum results for jar files compiled on subsequent build?

I am working on verifying that the jar files present on remote Unix boxes match those built on a local machine (Windows and Cygwin) with the same JVM.
As a proof of concept, I am trying to verify that consecutive builds on my machine produce jar files with the same checksum. I tried the following:
Generated the jar file a first time using an Ant script
Calculated the checksum (e.g. "xyz abc")
Generated the jar file again with the same Ant script, without changing anything
Got a different checksum but the same byte count (e.g. "xvw abc")
I am not sure how Java internally produces the class files and then the jar files. Can someone please help me understand the following points:
Does the cksum utility of Unix/Cygwin consider the timestamp of the file when computing the value?
Will the checksum be different for the compiled class files/jar file if we keep everything else the same [compiler version + source code + machine + environment]?
Answer to question 1: cksum doesn't consider the timestamp of the archive (e.g. the jar file) itself, but it does see the timestamps of the files inside the jar, because those are stored in the archive's bytes.
Answer to question 2: The checksums of the individual class files will be the same with all other things equal (source code, compiler, etc.). The checksums of the jar files will be different. Causes of differences include the timestamps of the files stored inside the jar, and files being added to the archive in a different order (e.g. caused by parallel builds).
If you want to create a reproducible build with gradle you can do so with the config below:
tasks.withType(AbstractArchiveTask) {
    preserveFileTimestamps = false
    reproducibleFileOrder = true
}
Maven allows something similar; sorry, I don't know how to do this with Ant.
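For what it's worth, newer Ant versions appear to have a comparable knob: a sketch assuming Ant 1.10.2+, which added a modificationtime attribute to <zip>/<jar> tasks (paths are illustrative; note zip timestamps cannot predate 1980):
<jar destfile="build/app.jar" basedir="build/classes" modificationtime="1980-01-01T00:00:00Z"/>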
More info here:
https://dzone.com/articles/reproducible-builds-in-java
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=74682318

How do I get code coverage for all branches using Pester?

Multiple test cases have been written to test a new Chocolatey function using Pester. How do I check whether all branches have been covered?
The current version of Pester (3.0) does support code coverage.
Simply use
Invoke-Pester -CodeCoverage *.ps1
to get a full report of code coverage (coverage %) and a summary of all the code lines (branches) not executed during testing:
Tests completed in 10.11s
Passed: 66 Failed: 0 Skipped: 0 Pending: 0
Code coverage report:
Covered 99,20 % of 501 analyzed commands in 22 files.
Missed commands:
File Function Line Command
---- -------- ---- -------
Set-ProgressColor.ps1 Set-ProgressColor 19 Write-Verbose "Progress colors are only supported on the PowerShell com...
UPDATE 2:
Thanks to oɔɯǝɹ for pointing out that a version of Pester supporting code coverage has now been released.
UPDATE 1:
As of Pester Version 3.0, it is now possible to get code coverage information, using:
Invoke-Pester -CodeCoverage <path to file>
This is documented in the project wiki page:
https://github.com/pester/Pester/wiki/Code-Coverage
NOTE: In order to use this, you will require PowerShell version 3.0
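If you want to narrow the analysis to particular functions (e.g. a single Chocolatey helper), -CodeCoverage also accepts hashtables; a sketch assuming Pester 3/4 syntax, with an illustrative file and function name:
Invoke-Pester -CodeCoverage @{ Path = '.\Install-ChocolateyPackage.ps1'; Function = 'Install-ChocolateyPackage' }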
ORIGINAL ANSWER:
To the best of my knowledge, Pester doesn't currently support code coverage analysis, but it is something that is being worked on.
There is an open issue for this feature here:
https://github.com/pester/Pester/issues/53
You can see it being worked on here:
http://davewyatt.wordpress.com/2014/06/29/code-coverage-analysis-for-pester-feedback-request/
And there is a screenshot of it working here:
https://twitter.com/nohwnd/status/485093995929157632
So basically, hold tight, and there will hopefully be something soon.
In terms of the actual Chocolatey code base, there is quite a good convention being used, namely that for each *.ps1 file there "should" be a corresponding *.Tests.ps1 file. If this second file doesn't exist, then there are no unit tests for that function.

Running F# xUnit Fact from TestDriven.NET reporting "It looks like you're trying to execute an xUnit.net unit test."

I am trying to run xUnit tests (from an F# module, if it makes any difference) using TestDriven.NET, but whatever I do I get this error:
It looks like you're trying to execute an xUnit.net unit test.
For xUnit 1.5 or above (recommended):
Please ensure that the directory containing your 'xunit.dll' reference also contains xUnit's
test runner files ('xunit.dll.tdnet', 'xunit.runner.tdnet.dll' etc.)
For earlier versions:
You need to install support for TestDriven.Net using xUnit's 'xunit.installer.exe' application.
You can find xUnit.net downloads and support here:
http://www.codeplex.com/xunit
I tried following the suggestions, i.e. I copied the files
xunit.dll.tdnet
xunit.extensions.dll
xunit.gui.clr4.exe
xunit.runner.tdnet.dll
xunit.runner.utility.dll
xunit.runner.utility.xml
xunit.xml
to the folder with xunit.dll and I ran xunit.installer.exe. How can I get it to work?
I just figured out that I forgot to make the test a function in F# (so it was just a value). The error message couldn't be more misleading though!
You have two problems:
1. Your Fact is broken
If you hover over the ``please work`` bit, you'll see something like: unit -> int
For a Fact to be picked up by an xUnit runner, it needs to yield unit (void).
Hence, one key thing to get right first is to not return anything. In other words, replace your 123 with () (or an Assertion).
You can guard against this by putting a :unit stipulation on the test:
[<Fact>]
let ``please work`` () : unit = 123
This will force a compilation error.
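For comparison, a minimal corrected version that compiles and passes (a sketch; the assertion is illustrative):
open Xunit

[<Fact>]
let ``please work`` () : unit =
    Assert.Equal(123, 123)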
2. TestDriven.NET reports it cannot find the xunit.tdnet modules
It's critical to get step 1 right first. Then retry, and the problem should be gone.
If it remains...
Either try the VS-based runner, which should work as long as it's installed and xunit.dll is getting to your output directory, or look at the docs for your version of TestDriven.NET for detailed troubleshooting notes. (Executive summary: if the .tdnet file is in your output directory, or you undo and redo xunit.installer from the folder containing the packages, it should just work, especially if you are on the latest version.)

Getting Statistics to show up in TC

I've set up TeamCity with my sln file and got the unit tests to show up with TeamCity's CppUnit plugin, and I get test results in the TeamCity UI.
Now I'm trying to get trending reports to show up for my unit tests and code coverage.
As for code coverage, we're using vsinstr.exe and vsperfmon.exe, which produce an XML file.
I'm not quite sure as of what steps I should be taking to make the trending reports and code coverage(not as important) to show up.
I've already seen this post, but the answer seems to require editing the build script, which I don't think would work for my case since I'm building through MSBuild and the .sln file, and the tests are run through that build.
So basically I'm trying to get the Statistics tab to show up, and I'm not sure where to begin.
Just add a simple PowerShell step to your build configuration. Something like this:
# Emit a TeamCity service message that records a custom build statistic
function TeamCity-SetBuildStatistic([string]$key, [string]$value) {
    Write-Output "##teamcity[buildStatisticValue key='$key' value='$value']"
}

# Load the metrics XML and report each metric as a build statistic
$outputFile = 'MetricsResults.xml'
$xml = [xml] (Get-Content $outputFile)
$metrics = $xml.CodeMetricsReport.Targets.Target[0].Modules.Module.Metrics
$metrics.Metric |
    foreach { TeamCity-SetBuildStatistic "$($_.Name)" $_.Value.Replace(',', '') }
It uses XML output from FxCop Metrics. You have to update the script for your actual schema.
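For example, a metric named MaintainabilityIndex with value 85 would be emitted as the service message below (names and values are illustrative); TeamCity picks such messages up automatically and can chart them on the Statistics tab:
##teamcity[buildStatisticValue key='MaintainabilityIndex' value='85']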
