Sonar analysis: how to get rid of "Too many duplication references" in Jenkins pipeline log

A large portion of our code is based on a template; because of this, the Sonar scanner falsely reports that code as having too many duplications.
I know it's only a warning, but it fills our Jenkins pipeline logs with warnings we'll never fix, to the extent that real issues get overlooked.
Following https://stackoverflow.com/a/52869313/1817610 I added sonar.cpd.exclusions=**/*.w to our scanner properties, but that does not eliminate the warnings.
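For reference, the relevant snippet of our scanner properties (a sonar-project.properties file in a default setup; ours may be named differently):

# exclude the template-based sources from copy/paste detection
sonar.cpd.exclusions=**/*.w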
The log fragment below shows 25 lines for a single source; we have more than a thousand sources like that.
00:21:14 INFO: 3970/4255 - current file: X:/cce/build/develop/git/smartlisa/appl/src/erprap/fwkal-u.w
00:21:14 WARN: Too many duplication references on file src/erprap/fwkal-u.w for block at line 523. Keep only the first 100 references.
00:21:14 WARN: Too many duplication references on file src/erprap/fwkal-u.w for block at line 525. Keep only the first 100 references.
00:21:14 WARN: Too many duplication references on file src/erprap/fwkal-u.w for block at line 523. Keep only the first 100 references.
00:21:14 WARN: Too many duplication references on file src/erprap/fwkal-u.w for block at line 527. Keep only the first 100 references.
/// trimmed 20 similar lines
...
00:21:14 WARN: Too many duplication references on file src/erprap/fwkal-u.w for block at line 523. Keep only the first 100 references.
We are using SonarScanner 4.7.0.2747.

I found a solution using the https://plugins.jenkins.io/log-file-filter/ plugin.
I added a filter to replace
.* WARN: Too many duplication references on file .*\n
with an empty string.
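The plugin takes regex/replacement pairs in the Jenkins global configuration, so the entry amounts to this (the exact field names may differ between plugin versions):

regular expression: .* WARN: Too many duplication references on file .*\n
replacement:        (left empty)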

You only had to google for "WARN: Too many duplication references" to find https://community.sonarsource.com/t/supress-ignore-warning-warn-too-many-duplication-references-on-file/27946.
This tells you that you can't control that.
And to be frank, it isn't a "false" report. I understand where you're coming from, as I used to work on a project where we generated a lot of our application code. It's really not a good strategy; you'd be better off with an architecture where common code is inherited or composed, not duplicated.

Related

With Jenkins Job Builder (JJB) what's the preferred way to inject values into a static set of job configuration files?

I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (so as not to damage existing production jobs), names for branches, nodes etc.
I don't want to use JJB's defaults approach for this since I'm already using it for configuration at a different place, and it results in conflicts when used in projects and jobs together.
The ideal way of doing this that I can think of would be calling JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
This would give me variables I can use in the actual job definition files.
Since this option seemingly doesn't exist, I'm currently trying to provide files which contain those variables and somehow 'inject' them into my project.
These are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders with additional files containing my configuration I can switch between.
But unfortunately I don't know how I would reference values I've defined in different YAML files. Is that even possible?
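To illustrate what I mean: within a single file, an anchor plus a standard YAML merge key works fine, roughly like this (the job name is illustrative):

- dynamic-config: &dynamic-config
    folder-prefix: "experimental/"
    node: "test-node"

- job:
    name: "example-job"
    <<: *dynamic-config

But as far as I can tell, plain YAML anchors only resolve within a single document, so I see no way to reference an anchor defined in a different file.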
2 - having include files as described in the documentation
While that sounds promising, I didn't manage to actually make this run. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
  in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the example, by putting it inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
giving me a different error but still an error:
yaml.scanner.ScannerError: while scanning a simple key
  in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
  in "/home/me/my/project.yml", line 8, column 5
In both cases it doesn't make a difference whether the specified include file exists or not, which makes me doubt you can just 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straightforward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
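Each of those .inc files contains nothing but the scalar value for its key; job-configs/active/folder-prefix.inc, for example, is just the single line (value illustrative):

"experimental/"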
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
2nd update:
Looks like I'm trying something similar to the guy from this question.
Can someone confirm that this is currently still a problem? What's the JJB way of handling this?

Seeing "Bad parsing rule" for Jenkins Log Parser plugin

I am trying to use the Log Parser Plugin with Jenkins. Following is my rule file, which I have taken from the sample given at the link.
# match line starting with 'error', case-insensitive
error /(?i)^error/
# list of warnings here...
warning /[Ww]arning/
warning /WARNING/
# create a quick access link to lines in the report containing 'INFO'
info /INFO/
# each line containing 'BUILD' represents the start of a section for grouping errors and warnings found after the line.
# also creates a quick access link.
start /BUILD/
I still see the following at the end of the Parsed Console Output page:
NOTE: Some bad parsing rules have been found:
Bad parsing rule: , Error:1
Bad parsing rule: , Error:1
Bad parsing rule: , Error:1
I did come across this, but it didn't help, as I am not using spaces anywhere.
Can someone help me resolve this issue?
It appears you have extra whitespace somewhere in the file that the plugin is interpreting as an attempt to define a rule. Maybe try running it with the empty lines removed. That plugin has given me quite a bit of trouble as well; it's not very well documented (as is the case with many Jenkins plugins).
I had tried no spaces in the pattern, but that did not work. It turns out that the parsing rules file does not support empty lines in it. Once I removed the empty lines, I no longer got the "Bad parsing rule: , Error:1" messages.
I think that since the line is empty, it doesn't echo any rule after the first colon. It would have been nice if the line number where the problem is had been reported.
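In other words, a rules file like

error /(?i)^error/

warning /[Ww]arning/

(with blank lines between the rules) appears to produce one "Bad parsing rule: , Error:1" per empty line, while the same rules on consecutive lines parse cleanly.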
I posted the same to this thread too - Log parsing rules in Jenkins
Hopefully, it helps out other folks who may be wondering what is causing this.

clang-tidy: Analyze file with multiple errors

Is it possible to analyze a C/C++ file in clang-tidy, while ignoring its syntax/compilation errors?
I have a very big file that has several compilation errors, but I still want to analyze it with clang-tidy.
I'm getting the following error message:
20 warnings and 20 errors generated.
Error while processing <myfile.c>
error: too many errors emitted, stopping now [clang-diagnostic-error]
I saw that with a smaller file it is possible to have some syntax errors and still have issues like an index past the end of the array displayed.
Is there a way to still have my file analyzed despite the errors (like increasing the number of allowed errors)?
You may instruct clang-tidy to keep processing after errors by applying -ferror-limit=0 to the compilation flags, that is, by adding the following to the clang-tidy command line:
-extra-arg=-ferror-limit=0
The default value for -ferror-limit is indeed 20.
Alternatively, you may want to set the limit to a higher number of your choice, rather than disabling the limitation completely.
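For example (the file name and the compile flags after -- are illustrative):

# disable the error limit entirely
clang-tidy -extra-arg=-ferror-limit=0 myfile.c -- -Iinclude

# or raise it to a number of your choice
clang-tidy -extra-arg=-ferror-limit=200 myfile.c -- -Iinclude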
Note that if you are using the run-clang-tidy.py script, rather than clang-tidy directly, you'll need version 5.0 for -extra-arg parameter support.

Failure on CSV import into Neo4j 2.2.0-RC01

I'm having some weird issues when using the batch load into Neo4j 2.2.0-RC1. I am trying to import 10 different node sets (for different labels) along with 12 relationship files. The data sets vary in size - some node types have ~200-300k records, some are small (50-100 records). For most node types I have a separate file with a header and separate file with data for each of the sets (the data is generated from the DB and I want to be able to regenerate the dump files without worrying about preparing the :ID columns, describing data types etc.)
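The invocation looks roughly like this (file names and labels are illustrative; each set passes its header file followed by its data file):

neo4j-import --into graph.db \
    --nodes:Person "person-header.csv,person-data.csv" \
    --nodes:Company "company-header.csv,company-data.csv" \
    --relationships:WORKS_AT "works-at-header.csv,works-at-data.csv" \
    --processors 1 --stacktrace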
I have re-run the import task a number of times (with options --processors 1 --stacktrace) and I keep getting different errors (without a single change in the actual dataset), which makes me think it might be something concurrency-related. Sometimes the import simply hangs with a message like this:
Nodes
[>:36.75 MB/s------------------------|*PROPERTIES-----------------------------------------|NOD|] 0
In most cases, it crashes with an error like below, except the number of nodes that it manages to import fine differs from run to run.
[>:27.23 MB/s-------------|*PROPERTIES--------------------------|NO|v:19.62 MB/s---------------]100k
Import error: Panic called, so exiting
java.lang.RuntimeException: Panic called, so exiting
at org.neo4j.unsafe.impl.batchimport.staging.StageExecution.stillExecuting(StageExecution.java:63)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisor.anyStillExecuting(ExecutionSupervisor.java:79)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisor.finishAwareSleep(ExecutionSupervisor.java:102)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisor.supervise(ExecutionSupervisor.java:64)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutionSupervisors.superviseDynamicExecution(ExecutionSupervisors.java:65)
at org.neo4j.unsafe.impl.batchimport.ParallelBatchImporter.executeStages(ParallelBatchImporter.java:226)
at org.neo4j.unsafe.impl.batchimport.ParallelBatchImporter.doImport(ParallelBatchImporter.java:151)
at org.neo4j.tooling.ImportTool.main(ImportTool.java:263)
Caused by: java.lang.RuntimeException: Panic called, so exiting
at org.neo4j.unsafe.impl.batchimport.staging.AbstractStep.assertHealthy(AbstractStep.java:189)
at org.neo4j.unsafe.impl.batchimport.staging.ProducerStep.process(ProducerStep.java:77)
at org.neo4j.unsafe.impl.batchimport.staging.ProducerStep$1.run(ProducerStep.java:54)
Caused by: java.lang.IllegalStateException: Nodes for any specific group must be added in sequence before adding nodes for any other group
at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.EncodingIdMapper.put(EncodingIdMapper.java:137)
at org.neo4j.unsafe.impl.batchimport.NodeEncoderStep.process(NodeEncoderStep.java:76)
at org.neo4j.unsafe.impl.batchimport.NodeEncoderStep.process(NodeEncoderStep.java:41)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutorServiceStep$2.call(ExecutorServiceStep.java:96)
at org.neo4j.unsafe.impl.batchimport.staging.ExecutorServiceStep$2.call(ExecutorServiceStep.java:87)
at org.neo4j.unsafe.impl.batchimport.executor.DynamicTaskExecutor$Processor.run(DynamicTaskExecutor.java:217)
I managed to run it successfully once, which, again, seems to imply that some sort of timing issue is at play.
Unfortunately I cannot provide the datasets as they contain confidential data.
The weirdest thing of all is that if I split the load into 2 different sets (the datasets are almost separate subgraphs, they have only 2 relationships in common) then all works fine (so not likely to be data related), but even loading just nodes doesn't work if I put them all into a single command. And because it's not possible to force a load into an existing database, loading it in 2 steps is sadly not an option.
1) Is that a known issue and if so, any ETA on a fix / issue that I could follow?
2) If not, is there any troubleshooting I can do to get to the bottom of it? The messages.log file in the target DB directory contains VERY little output, it would be nice if I could get some more details on what's going wrong.
I've spotted the problem, thanks for reporting/asking. The next release will include the fix, along with an additional set of integration tests for the import tool. I'll provide a link to the commit once it's in.

Delphi Text Files get NULLS (0's) written to them instead of text

Unfortunately this question may be a bit vague. I have a problem that I am finding difficult to describe; it is intermittent and I cannot reproduce it myself. I am just hoping that someone else has seen something like it before.
My application has quite a lot of text and ini files that get written when it closes down. Typically this would be in response to a Close event, but may also be triggered by a WM_ENDSESSION. Unfortunately at the moment I am not sure if both or only one of these events can result in the problem I am about to describe, because I have been unable to reproduce this problem myself.
The issue I have is that for some users, some of the text and INI files end up being written as NULLs. The file sizes end up looking about right, but instead of text, every character is written as x00. So instead of 500 bytes of regular ASCII text I end up with 500 x00 bytes. I also have an application log file that can sometimes end up with nulls written to it as well. However, the logging of x00s to the log file does not necessarily correspond to the exact time at which x00s were written to the config files.
For my files I am using TMemIniFile or TStringList, which means that ultimately TStrings.SaveToFile is being called for all of my config files.
sl := TStringList.Create;
try
  SourceList.GetSpecificSubset(sl);
  AppLogLogLine('Commands: Saving Always Available list. List has ' + IntToStr(sl.Count) + ' commands.');
  sl.SaveToFile(fn);
finally
  sl.Free;
end;
But then I also have instances where I already have a TStringList in memory and I just call SaveToFile on it. For TMemIniFile the structure would look similar to the above. In some instances I may have an outer loop writing multiple lists. Some of those will result in files being written correctly, and some will be full of x00s.
EDIT: GetSpecificSubset is simply a function that populates "sl" with a list of command names. I have "GetAllUsersCommands", "GetHiddenCommands", "GetAlwaysVisibleCommands" etc. Note that my log file also writes this kind of thing, as a check on how big those lists are:
16/10/2013 11:17:49 AM: Commands: Saving Any User list. List has 8 commands.
16/10/2013 11:17:49 AM: Commands: Saving Always Visible list. List has 17 commands.
16/10/2013 11:17:49 AM: Commands: Saving Always Hidden list. List has 2 commands.
I had accidentally left the logging line out of the code above originally; it is included now. So this log line is the last thing written before calling TStrings.SaveToFile, and at that point it thinks it has data. Even if somehow each line of text were NULLs, I would still expect to see #13#10 (CR/LF) in the files, but that is not happening.
Here's a screen capture from a hex editor (image not reproduced here):
EDIT 2: I just realised I left out a very important piece of information. This is only intermittent; it works 99% of the time. When saving files at shutdown it might not even be all files. Even if I have a loop saving multiple similar files, some may work fine and others may fail.
