SAP - Code Coverage Analyzer Results

I'm trying to extract the results of the Code Coverage analysis of my own ABAP program from the database in SAP.
On the following website, Coverage Analyzer: Technology, I found this information:
Initially, RSCVR_COLLECT transfers data to the 'staging area' of table COVRES0. Finally, the new data is aggregated with the statistics in tables COVRES and COVREF, among others.
In the table COVRES I can see one row where my program is listed, but there is no further information about statistics like branch coverage, etc.
Can anybody give me a hint where I can find the results of the Code Coverage Analyzer, so that I can extract them for further processing?
Best regards
Bernhard

Copied from the comments above:
Instead of using the code of the Code Coverage Analyzer (transaction SCOV), use the Coverage API:
the IF_SCV* interfaces in SE24.
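For orientation only, a rough ABAP sketch of that pattern follows. Every class, method, and constant name below is an assumption from memory, not a verified signature; check the IF_SCV* interfaces and their implementing classes in SE24 on your release before relying on it:

" Sketch only - names below are assumed, verify them in SE24 first.
" Runs ABAP Unit with coverage measurement, then reads branch coverage.
DATA(runner) = cl_aucv_test_runner_coverage=>create( ).
runner->run_for_program_keys(
  EXPORTING program_keys    = program_keys      " programs to measure
  IMPORTING coverage_result = DATA(cov_result)
            aunit_result    = DATA(unit_result) ).

" The coverage result exposes a measured tree; branch coverage is read
" per node through the API rather than from the COVRES table.
DATA(root)   = cov_result->build_coverage_result( )->get_root_node( ).
DATA(branch) = root->get_coverage( ce_scv_coverage_type=>branch ).
cl_demo_output=>display( branch->get_percentage( ) ).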

Related

How can I export the data from the results table in the ImageJ Pendant Drop plugin?

I am using the Pendant Drop plugin (http://www.msc.univ-paris-diderot.fr/~daerr/misc/pendent_drop.html) to get the surface tension of droplets. It produces a table of results in a window called Results; however, it does not have the usual File, Save As, etc. options. Also, when I try the getResults and nResults commands in a macro, I get no results at all.
Do I need to edit the plugin to be able to output the results? My aim is to output the results as a CSV file.
Pendent Drop is an ImageJ2-style plugin that generates a SciJava Table. In an up-to-date Fiji installation, you can save such tables using File > Export > Table....
The macro functions getResults and nResults do not work on those tables, because they require an ImageJ1 ResultsTable window.
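If you prefer to save the table from a script instead of the menu, a minimal Groovy sketch for Fiji's script editor could look like the following; it assumes an up-to-date Fiji where SciJava table I/O is available, and the output path is illustrative:

#@ Table table
#@ IOService io

// Saves the injected SciJava table to CSV via the generic I/O service.
// Assumption: a table I/O plugin that understands .csv is installed,
// which is the case in current Fiji distributions.
io.save(table, "/tmp/pendant-drop-results.csv")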
See also this topic on the image.sc forum. In general, questions like this one are much better asked on https://forum.image.sc (see also the description of the imagej tag).

How can I count the total lines of code in a TFS project collection?

Team,
Could you please advise: I want to know how to calculate the lines of code for my TFS project collection. I need to calculate it for the entire instance.
Thank you
Note: I'm assuming you're using TFVC, not Git.
You should be able to get this from the data warehouse (Tfs_Warehouse) assuming you have Reporting Services configured.
There is a Code Churn table. I believe you should be able to sum the NetLinesAdded field to get the total number of lines of code.
The Analysis Cube has a Total Lines field, as well.
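For example, a query along these lines against Tfs_Warehouse should work (the fact table name here is an assumption; check your warehouse schema):

-- Sketch: total net lines added across the whole warehouse.
-- FactCodeChurn is the assumed name of the Code Churn fact table.
SELECT SUM(NetLinesAdded) AS TotalNetLinesAdded
FROM dbo.FactCodeChurn;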
However, you can also get this information from your file system with PowerShell, for example:
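# Counts every non-empty line in all *.cs files under the path (adjust -Include for other languages):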
(gci -Path 'C:\Users\Daniel\Source\Repos\' -rec -Include '*.cs' | select-string .).Count
This comes with the caveat that "lines of code" is, in almost every single case, a totally meaningless, worthless number.

PENTAHO data integration data source/destination mapping

I'm reaching out hoping to find answers about a Pentaho Data Integration limitation.
I'm currently working on a 1-to-1 data source integration and would like to make it n-to-1-n. This requires dynamic job creation, and I would like to know if any of you have come across such an issue. My 1-to-1 works perfectly: it integrates from different data source types (CSV, databases such as MySQL, Oracle, ...) into the same destination, and I need to make it n-to-1-n.
There is a Metadata Injection Step just for that.
A use case similar to yours is described by Diethard here.
Because it seems that you have a lot of different source formats, it may be a good investment to read the use case of Jens, the author of the step, here, which (apart from the automation) is precisely your case.
AFAIK, in Pentaho DI it is not possible to create dynamic transformations for arbitrary data sources. PDI expects the input columns to be available in the input stream before it loads the data to the target database. For example, if you are using one data source (in MySQL) and loading it to a CSV output, the CSV output step expects the input columns to be present in the data source step (Table input). If you are trying to load any n arbitrary data sources, you need to define the input columns/fields for each of them individually.
Alternatively, there are a few things you can explore:
1. Fast Dump in the Text File Output step:
There is an option to fast-dump the data set in the Text file output step. Here you don't need to define any output columns: the input fields are automatically dumped as-is, without formatting. You can use this to map all of the input sources to a CSV format and then load them to their targets.
2. Extending Java and Kettle together to build a solution:
PDI allows you to write custom Java code on top of Kettle; you can check this blog for more. You could use this idea to write custom code that passes the fields of the n data sources to Kettle as parameters and executes them, as in the sketch below. (Note: I haven't tried this myself, just thinking out loud here.)
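For illustration, embedding Kettle from Java usually follows the pattern below; the transformation file name and parameter name are assumptions, and error handling is kept minimal:

import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunSourceToTarget {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();                               // initialize the Kettle engine
        TransMeta meta = new TransMeta("source_to_target.ktr"); // illustrative transformation file
        Trans trans = new Trans(meta);
        trans.setParameterValue("SOURCE_NAME", args[0]);        // assumed named parameter in the .ktr
        trans.execute(null);                                    // run with no extra arguments
        trans.waitUntilFinished();
        if (trans.getErrors() > 0) {
            throw new IllegalStateException("Transformation finished with errors");
        }
    }
}

You would then invoke this once per data source, passing each source's name (or connection details) as the parameter.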
Hope this helps :)

lcov html report coverage limits

Lcov's genhtml tool converts an lcov coverage info file into an HTML report. It is possible to color-code the results table, indicating low, medium, and high coverage, with the following lcov configuration file options:
genhtml_hi_limit
genhtml_med_limit
However, these limits seem to apply globally to all types of coverage metrics, i.e. line, function, and branch. Is there a way to set individual limits for the line, function, and branch coverage metrics? Or can this be achieved with CSS somehow?
Although the genhtml documentation describes only the global limits, inspecting the source shows the following limits that can be set in the lcovrc file to set colors specific to the coverage types:
genhtml_branch_hi_limit, genhtml_branch_med_limit
genhtml_function_hi_limit, genhtml_function_med_limit
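Based on those option names, an lcovrc along these lines should give per-metric colors (the percentage thresholds are just example values):

# Global thresholds (percent) used by genhtml for color coding
genhtml_hi_limit = 90
genhtml_med_limit = 75

# Per-metric overrides found in the genhtml source
genhtml_branch_hi_limit = 80
genhtml_branch_med_limit = 50
genhtml_function_hi_limit = 90
genhtml_function_med_limit = 75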

How do I plot benchmark data in a Jenkins matrix project

I have several Jenkins matrix projects in where I output benchmark results (i.e. execution times) in a CSV file. I'd like to plot these execution times as a function of the build number, so I can see if my projects are regressing over time.
I can confirm the Plot Plugin is a correct and quite useful approach. BTW, it supports CSV as well: plot configuration example.
I've been using it for several years without any problems. Benchmark results were generated as a property file, with the benchmark id (series id) used as the key and the result as the value. One build produces one result for each benchmark. Having that data, it is quite easy to create a plot configuration and track performance.
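For illustration, each build wrote a property file along these lines (the series ids and values here are made up); the Plot Plugin then picks up one data point per series per build:

# benchmark id = series id, value = this build's result
benchmark.parse=12.7
benchmark.render=48.3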
This may help you:
https://wiki.jenkins-ci.org/display/JENKINS/Plot+Plugin
It adds plotting capabilities to Jenkins.
