Count in generated .fpr file does not match count reported by FPRUtility command on the same FPR file - fortify

In my Jenkins job, an FPR file is generated after running the scan on a particular build of the code. If I download the FPR file and open it in SCA Workbench, it shows me the following counts: Critical-0, High-0, Medium-0 and Low-313.
But when I run the FPRUtility commands below on the same FPR file from the command line, I get these counts: Critical-2, High-7, Medium-0 and Low-314.
These are the commands that I ran:
FPRUtility -project [myfprfilename].fpr -information -search -query "[fortify priority order]:critical"
FPRUtility -project [myfprfilename].fpr -information -search -query "[fortify priority order]:high"
FPRUtility -project [myfprfilename].fpr -information -search -query "[fortify priority order]:medium"
FPRUtility -project [myfprfilename].fpr -information -search -query "[fortify priority order]:low"
Initially I thought it was showing the count of suppressed and hidden issues, so in the FPR file, under Options, I enabled 'show suppressed issues' and 'show hidden issues', but the count still did not match the count displayed by the FPRUtility command.
I want to know where the extra count comes from and what I can do to remove the extra issues from the count.

I suspect this is a filter issue; perhaps a default filter set in your AuditWorkbench is hiding some of the issues from the raw counts.

Determine perforce changelist number after running p4.run("sync") in Jenkins SCM pipeline

On the Jenkins server, the Perforce plugin (P4) is installed.
Within my Jenkins job pipeline (implemented as a shared library in Groovy), there is a pipeline stage that syncs from Perforce to the Jenkins workspace:
p4.run("sync")
I want to determine the changelist number of this operation. I need to use this changelist number in the later stages of the pipeline.
I am thinking of doing the following:
p4.run("sync")
changelist_number = p4.run("changes -m1 #have")
Will this work? Or is there a better solution? I am also very unfamiliar with this topic, so it would be nice if you could explain what all of this means.
The changelist number (that is, the highest changelist number associated with any synced revision) is returned as part of the p4 sync output if you're running in tagged mode:
C:\Perforce\test\merge>p4 changes ...
Change 226 on 2020/11/12 by Samwise#Samwise-dvcs-1509687817 'foo'
Change 202 on 2020/10/28 by Samwise#Samwise-dvcs-1509687817 'Populate //stream/test.'
C:\Perforce\test\merge>p4 -Ztag sync ...
... depotFile //stream/test/merge/foo.txt
... clientFile c:\Perforce\test\merge\foo.txt
... rev 2
... action updated
... fileSize 20
... totalFileSize 20
... totalFileCount 1
... change 226
Tagged output is converted into a dictionary that's returned by the run method, so you should be able to just do:
changelist_number = p4.run("sync")[0]["change"]
to sync and get the changelist number as a single operation.
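Outside Jenkins, the lookup can be illustrated with a short Python sketch that parses tagged output of the shape shown above into a dictionary (a simplified stand-in for what the P4 plugin does internally, not its actual code):

```python
# Sketch: parse "p4 -Ztag sync"-style output into a dict, similar to what
# the P4 Groovy client does before returning it from run(). The sample
# lines mirror the tagged output shown above; the parser is illustrative.

def parse_ztag(output):
    """Turn '... key value' tagged lines into a dict for one file record."""
    record = {}
    for line in output.splitlines():
        if not line.startswith("... "):
            continue  # skip anything that is not a tagged field
        key, _, value = line[4:].partition(" ")
        record[key] = value
    return record

sample = """\
... depotFile //stream/test/merge/foo.txt
... clientFile c:/Perforce/test/merge/foo.txt
... rev 2
... action updated
... change 226"""

record = parse_ztag(sample)
print(record["change"])  # -> 226
```

The `[0]["change"]` lookup in the Groovy snippet is doing exactly this: taking the first per-file record and reading its `change` field.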
There are some edge cases here -- deleted files aren't synced and so the deleted revisions won't factor into that changelist number.
A more ironclad method is to put the horse before the cart -- get the current changelist number (from the depot, not limited to what's in your client), and then sync to that exact number. That way consistency is guaranteed; if a new changelist is submitted between the two commands, your stored changelist number still matches what you synced to.
changelist_number = p4.run("changes", "-m1", "-ssubmitted")[0]["change"]
p4.run("sync", "@${changelist_number}")
Any other client syncing to that changelist number is guaranteed to get the same set of revisions (subject to its View).

error: ORA-02289 - Sequence doesn't exist in Agile PLM 9.3.5

Not sure if this is the right place to ask this question.
I am facing issues while performing any action in Agile PLM 9.3.5. I upgraded PLM from 9.3.3 to 9.3.5 and also checked the sequence table; all the sequences are available. Still, I get the above error while creating any object or updating any user profile.
Thanks!
You can try this if the issue is still not resolved:
After you upgrade to Agile 9.3.5, you need to run the 'reorder_query.bat' batch script in the [AUT_HOME]/AUT/bin directory. This tool clears out temporary records and gaps to compact the query table so that sequence IDs can be reused. This information is in the Agile Database Upgrade Guide.
If that doesn't work, please refer to Doc ID 1606365.1 in the MOS knowledge base.
If you don't have access, here is an excerpt describing the plan of action.
Stop the application server and bounce the database server to make sure all in-flight transactions are committed. While the database is down, take a cold backup. Leave the application server down during this process to prevent users from connecting.
Download the attached script, GAP_HUNTER_GC_v1.0.sql, to a machine that has the Oracle database client installed and can connect to your Agile schema through SQL*Plus, and run it. The on-screen output will look similar to this:
SQL> @GAP_HUNTER_GC_v1.0.sql
You are logging on DB User - AGILE
Your agile database data version is 9.3.095.0
Your agile database schema version is 9.3.095
Please enter the gap threshold, default 5000:
Please enter the number of top largest gaps, default 10:
>>>>>>>> Start to collect gap ....
>>>>>>>> Prepare for scanning tables....
>>>>>>>> Start to collect tables and Generate the mapping tables ....
>>>>>>>> Step 1: Collect Reused ids....Begin time:20131208 11:39:17
table is not existing:Regulation_addorreplace_action
table is not existing:Regulation_addorreplace_task
table is not existing:INSTANCES
table is not existing:REFERENCE_OBJECT
>>>>>>>> Step 2: Generate gap .... Begin time:20131208 11:39:17
>>>>>>>> Step 3: Finish the Gap Hunter Process ....
>>>>>>>> Report: There are 0 id(s) have been collected in the GAP
Sequence Indexer Number, Gap Size, Starting Number, Ending Number
67018473, 131226320, 1352956646, 1484182965
50955717, 94058060, 1031324895, 1125382954
89993219, 87600000, 1812982965, 1900582964
78036370, 87424300, 1573458652, 1660882951
29531387, 77700000, 601882965, 679582964
86572585, 68412680, 1744470274, 1812882953
59910085, 67800000, 1210682962, 1278482961
25834330, 59801320, 527781692, 587583011
83797585, 55500000, 1688882958, 1744382957
12104050, 47011460, 252171585, 299183044
>>>>>>>> End .........
The output from step 2 is written to log files in the directory from which SQL*Plus was launched. Look for the following files:
gap_hunter_version.log
gap_hunter.log
gap_hunter_report.log
Open the gap_hunter_report.log file and review the first set of numbers in the list. For example:
Sequence Indexer Number, Gap Size, Starting Number, Ending Number
67018473, 131226320, 1352956646, 1484182965
This indicates the largest available gap: a gap size of 131226320, starting at 1352956646 and ending at 1484182965.
Drop and recreate the AGILEOBJECTIDSEQUENCE sequence using the starting and ending numbers identified in the report:
drop sequence AGILEOBJECTIDSEQUENCE;
create sequence AGILEOBJECTIDSEQUENCE minvalue 1 maxvalue [Ending Number] increment by 20 cache 20 noorder nocycle start with [Starting Number];
For example:
SQL> drop sequence AGILEOBJECTIDSEQUENCE;
Sequence dropped.
SQL> create sequence AGILEOBJECTIDSEQUENCE minvalue 1 maxvalue 1484182965 increment by 20 cache 20 noorder nocycle start with 1352956646;
Sequence created.
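For convenience, the last two steps (reviewing the report and building the DDL) can be sketched in Python. This assumes the comma-separated layout of gap_hunter_report.log shown above; it is an illustration, not part of the Oracle-supplied script:

```python
# Sketch: pick the largest gap from a gap_hunter_report.log-style listing
# and build the CREATE SEQUENCE statement from its starting/ending numbers.
# The report format is assumed to match the columns shown earlier.

REPORT = """\
Sequence Indexer Number, Gap Size, Starting Number, Ending Number
67018473, 131226320, 1352956646, 1484182965
50955717, 94058060, 1031324895, 1125382954"""

def largest_gap(report):
    """Return [indexer, gap_size, start, end] for the widest gap."""
    rows = [line.split(",") for line in report.splitlines()[1:]]
    rows = [[int(col) for col in row] for row in rows]
    return max(rows, key=lambda row: row[1])  # column 1 is Gap Size

_, gap, start, end = largest_gap(REPORT)
ddl = (f"create sequence AGILEOBJECTIDSEQUENCE minvalue 1 maxvalue {end} "
       f"increment by 20 cache 20 noorder nocycle start with {start};")
print(ddl)
```

Since the report is already sorted by gap size, the first data row normally wins; the explicit `max` just makes that assumption unnecessary.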

Create graph panel with multiple queries

I have the following monitoring stack:
collecting data with telegraf-0.12
storing in influxdb-0.12
visualisation in grafana (3beta)
I am collecting "system" data from several hosts and I want to create a graph showing the "system.load1" of several hosts, NOT merged. I thought I could simply add multiple queries to the graph panel.
When creating my graph panel, I create the first query and see the result, but when I add the second query I get an error.
Here is the panel creation with 2 queries
Here is the query generated by the panel:
SELECT mean("load1") FROM "system" WHERE "host" = 'xxx' AND time > now() - 24h GROUP BY time(1m) fill(null) SELECT mean("load1") FROM "system" WHERE "host" = 'yyy' AND time > now() - 24h GROUP BY time(1m) fill(null)
And the error:
{
"error": "error parsing query: found SELECT, expected ; at line 2, char 1",
"message": "error parsing query: found SELECT, expected ; at line 2, char 1"
}
So I can see that the generated query is malformed (two SELECT statements on one line, without even a ';'), but I don't know how to use Grafana to achieve what I want.
When I show or hide each query individually I see the corresponding graph.
I have created a similar graph (with multiple series) with Chronograf, but I would rather use Grafana as it gives me much more control and more plugins...
Is there something I am doing wrong here?
After reading a couple of threads in the GitHub issues, here is a quick fix.
As mentioned by @schup, the problem and its solution are described here:
https://github.com/grafana/grafana/issues/4533
The binaries are currently not fixed in grafana-3beta (they might be in the next few weeks). So there are two options: fix the source and compile, or patch an existing install.
I actually had to patch my current install:
/usr/share/grafana/public/app/app.<number_might_differ_here>.js
sed --in-place=backup 's/join("\\n");return k=k.replace/join(";\\n");return k=k.replace/;s/.replace(\/%3B\/gi,";").replace/.replace/' app.<number_might_differ_here>.js
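For clarity, here is a small Python illustration of what that sed patch changes: grafana-3beta joined the per-series InfluxQL statements with a bare newline, while InfluxDB expects multiple statements to be separated by a semicolon (illustration only; the real fix is in Grafana's minified JS):

```python
# Sketch: the two per-host statements from the panel above, and the two
# ways of concatenating them. Joining with "\n" reproduces the parse error
# from the question; joining with ";\n" is what the patched build sends.

queries = [
    'SELECT mean("load1") FROM "system" WHERE "host" = \'xxx\'',
    'SELECT mean("load1") FROM "system" WHERE "host" = \'yyy\'',
]

broken = "\n".join(queries)   # "found SELECT, expected ;" in InfluxDB
fixed = ";\n".join(queries)   # valid multi-statement InfluxQL

print(fixed)
```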
Hope this helps (and that it will soon be fixed upstream).
This seems to be an API change in InfluxDB 0.11:
https://github.com/grafana/grafana/issues/4533

spss custom tables crashing when row matches column

I've defined a function for running batches of custom tables:
DEFINE !xtables (myvars=!CMDEND)
CTABLES
/VLABELS VARIABLES=!myvars retailer total DISPLAY=LABEL
/TABLE !myvars [C][COLPCT.COUNT PCT40.0, TOTALS[UCOUNT F40.0]] BY retailer [c] + total [c]
/SLABELS POSITION=ROW
/CRITERIA CILEVEL=95
/CATEGORIES VARIABLES=!myvars ORDER=D KEY=COLPCT.COUNT (!myvars) EMPTY=INCLUDE TOTAL=YES LABEL='Base' POSITION=AFTER
/COMPARETEST TYPE=PROP ALPHA=.05 ADJUST=BONFERRONI ORIGIN=COLUMN INCLUDEMRSETS=YES CATEGORIES=ALLVISIBLE MERGE=YES STYLE=SIMPLE SHOWSIG=NO
!ENDDEFINE.
I can then run a series of commands to run these in one batch:
!XTABLES MYVARS=q1.
!XTABLES MYVARS=q2.
!XTABLES MYVARS=q3.
However, if a table has the same row and column, Custom Tables freezes:
!XTABLES MYVARS=retailer.
The culprit appears to be SLABELS. I hadn't encountered this problem before v24.
I tried replicating a CTABLES spec as close as possible to yours and found that VLABELS does not like the same variable being specified twice.
GET FILE="C:\Program Files\IBM\SPSS\Statistics\23\Samples\English\Employee data.sav".
CTABLES /VLABELS VARIABLES=Gender Gender DISPLAY=LABEL
/TABLE Gender[c][COLPCT.COUNT PCT40.0, TOTALS[UCOUNT F40.0]]
BY Gender[c] /SLABELS POSITION=ROW
/CATEGORIES VARIABLES=Gender ORDER=D KEY=COLPCT.COUNT(Gender) .
Which yields an error message:
VLABELS: Text GENDER. The same keyword, option, or subcommand is used more than once.
The macro has a parameter named MYVARS, which suggests that more than one variable can be listed; however, doing that generates an invalid command, so that is something else to watch out for. I can see the infinite loop in V24; in V23, an error message is produced.
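If you generate the batch of !XTABLES calls outside SPSS, you can guard against a variable that also appears on the column axis before the syntax ever reaches CTABLES. A Python sketch (plain string handling, not SPSS's own scripting API; the variable names come from this question and the guard list is an assumption):

```python
# Sketch: build the batch of !XTABLES commands but skip any variable that
# would also appear on the column axis (retailer/total here), since putting
# the same variable on both axes freezes Custom Tables as described above.

COLUMN_VARS = {"retailer", "total"}  # assumed column-axis variables

def build_batch(myvars):
    lines = []
    for var in myvars:
        if var.lower() in COLUMN_VARS:
            continue  # same variable on row and column: invalid table
        lines.append(f"!XTABLES MYVARS={var}.")
    return "\n".join(lines)

print(build_batch(["q1", "q2", "retailer", "q3"]))
```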

How to calculate the total number of Jenkins jobs

In our working Jenkins instance there are hundreds of executing jobs. I'm interested in whether Jenkins stores the total number of jobs which are currently executing somewhere, or whether I should count them manually one by one.
Try "$N1 - $T jobs" to display the total number of jobs with this plugin.
You can also use:
$T - The Total number of jobs
$S - The number of jobs currently Succeeding
$F - The number of jobs currently Failing
$U - The number of jobs currently Unstable
$D - The number of jobs currently Disabled
I'm trying to use this plugin as well.
It's easy to install and configure; everything is in Configure System.
For example, in my screen/case, I put this:
$N1: $T pipelines, $S Ok, $F Failed, $U Unstable, $D Disabled
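If a plugin is not an option, similar totals can be read from the Jenkins JSON API: the root endpoint /api/json lists all jobs, and a job whose color field ends in _anime is currently building. A hedged Python sketch; the live request is left commented out, and counting building jobs this way is an approximation of "currently executing":

```python
# Sketch: count total jobs and currently-building jobs from a Jenkins
# /api/json?tree=jobs[name,color] response. Assumptions: JENKINS_URL points
# at your instance and anonymous read access is allowed (add auth otherwise).
import json
from urllib.request import urlopen

def summarize(api_json):
    """Return (total_jobs, building_jobs) from a parsed /api/json payload."""
    jobs = api_json.get("jobs", [])
    total = len(jobs)
    # Jenkins appends "_anime" to a job's color while it is building.
    running = sum(1 for j in jobs if j.get("color", "").endswith("_anime"))
    return total, running

# Live call (commented out so the sketch stays self-contained):
# data = json.load(urlopen(JENKINS_URL + "/api/json?tree=jobs[name,color]"))

sample = {"jobs": [{"name": "a", "color": "blue"},
                   {"name": "b", "color": "red_anime"}]}
print(summarize(sample))  # -> (2, 1)
```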
