lcov HTML report coverage limits

lcov provides the genhtml tool, which converts the lcov coverage info file into an HTML report. It is possible to color-code the results table, indicating low, medium and high coverage, with the following lcov configuration file options:
genhtml_hi_limit
genhtml_med_limit
However, these limits seem to apply globally to all types of coverage metrics, i.e. line, function and branch. Is there a way to set individual limits for the line, function and branch coverage metrics? Or can this be achieved with CSS somehow?

Although the genhtml documentation describes only the global limits, inspecting the source shows the following options that can be set in the lcovrc file to apply colors specific to coverage types:
genhtml_branch_hi_limit, genhtml_branch_med_limit
genhtml_function_hi_limit, genhtml_function_med_limit
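For example, a minimal lcovrc sketch combining the global and per-metric options (the threshold values below are illustrative, not lcov defaults):

# Global thresholds, in percent: coverage >= hi is "high", >= med is "medium"
genhtml_hi_limit = 90
genhtml_med_limit = 75

# Per-metric overrides for branch and function coverage
genhtml_branch_hi_limit = 80
genhtml_branch_med_limit = 60
genhtml_function_hi_limit = 95
genhtml_function_med_limit = 85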

Related

How to directly access/use Tensorflow Extended StatisticsGen statistics?

I'm experimenting with TFX for common ML pipeline work. I'm struggling to actually use the StatisticsGen component to inspect and analyze data statistics.
With TFDV alone I can access statistics in a straightforward manner:
import tensorflow_data_validation as tfdv
stats = tfdv.generate_statistics_from_csv('data.csv', delimiter=',')
stats # This gives a JSON-like output
but in the case of TFX itself, StatisticsGen generates a binary FeatureStats.pb file in artifacts/StatisticsGen/statistics/...
How do I extract the actual statistics from StatisticsGen to use them for checking data (or any other purpose)? I'm aware that the interactive context can visualize the stats, but that is unhelpful in a production environment.
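One plausible way to read that artifact, assuming FeatureStats.pb is a binary serialized DatasetFeatureStatisticsList proto (the format TFDV uses for statistics; the path below is illustrative):

from tensorflow_metadata.proto.v0 import statistics_pb2

# Path to the StatisticsGen output artifact (fill in the elided directories)
stats_path = 'artifacts/StatisticsGen/statistics/.../FeatureStats.pb'

# Parse the binary proto directly
stats = statistics_pb2.DatasetFeatureStatisticsList()
with open(stats_path, 'rb') as f:
    stats.ParseFromString(f.read())

# Walk the per-feature statistics programmatically
for dataset in stats.datasets:
    for feature in dataset.features:
        print(feature.path.step, feature.type)

Depending on the TFDV version, tfdv.load_stats_binary(stats_path) may load the same format in one call.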

How can I export the data from the results table in Image J Pendant Drop plugin?

I am using the Pendant Drop plug-in (http://www.msc.univ-paris-diderot.fr/~daerr/misc/pendent_drop.html) to get the surface tension of droplets. It produces a table of results in a window called Results; however, it does not have the usual File, Save As, etc. options. Also, when I try the getResults and nResults commands in a macro, it doesn't give me any results and says the number of results is 0.
Do I need to edit the plug-in to be able to output the results? My aim is to output the results as a CSV file.
Pendent Drop is an ImageJ2-style plugin that generates a SciJava Table. In an up-to-date Fiji installation, you can save such tables using File > Export > Table....
The macro functions getResults and nResults do not work on those tables, because they require an ImageJ1 ResultsTable window.
See also this topic on the image.sc forum. In general, questions like this one are much better asked on https://forum.image.sc (see also the description of the imagej tag).

Windowing appears to work when running on the DirectRunner, but not when running on Cloud Dataflow

I'm trying to break fusion with a GroupByKey. This creates one huge window, and since my job is big I'd rather start emitting output early.
With the direct runner, using something like what I found here, it seems to work. However, when run on Cloud Dataflow it seems to batch the GBK together and not emit output until the source nodes have "succeeded".
I'm doing a bounded/batch job. I'm extracting the contents of archive files and then writing them to GCS.
Everything works, except it takes longer than I expected and CPU utilization is low. I suspect that this is due to fusion: my hypothesis is that the extraction is fused to the write operation, so there's a pattern of extraction with higher CPU, followed by lower CPU while we're doing network calls, and back again.
The code looks like:
.apply("Window",
    Window.<MyType>into(new GlobalWindows())
        .triggering(
            Repeatedly.forever(
                AfterProcessingTime.pastFirstElementInPane()
                    .plusDelayOf(Duration.standardSeconds(5))))
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes())
.apply("Add key", MapElements...)
.apply(GroupByKey.create())
Locally I verify using debug logs, so I can see that work is being done after the GBK. The gap between the first extraction finishing and the first post-GBK operation usually reflects the 5 s delay (or whichever other value I change it to: 1, 5, 10, 20, 30).
On GCP I verify by looking at the pipeline structure and I can see that everything after the GBK is "not started" and the output collection of the GBK is empty ("-") while the input collection has millions of elements.
Edit:
This is on Beam v2.10.0.
The extraction is being done by a SplittableDoFn (not sure if this is relevant).
It looks like the answer you referred to was for a streaming pipeline (unbounded input). For a batch pipeline processing a bounded input, GroupByKey will not emit until all data for a given key has been processed. Please see here for more details.
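If the goal is specifically to break fusion in a batch pipeline (rather than to get early output, which a batch GroupByKey will not give you), a common alternative is an explicit Reshuffle in place of the trigger-plus-GroupByKey pattern. A minimal sketch against the pipeline fragment above (Reshuffle is marked internal in the Beam Java SDK, but it is the widely used fusion break):

// import org.apache.beam.sdk.transforms.Reshuffle;
// Materializes the PCollection between the extraction and write stages,
// breaking fusion so they run as separate stages with independent parallelism.
// Note: this does not make a batch pipeline emit early; stages still run to completion.
.apply("Break fusion", Reshuffle.<MyType>viaRandomKey())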

SAP - Code Coverage Analyzer Results

I'm trying to extract the results of the Code Coverage analysis of my own ABAP program from the database in SAP.
At the following website Coverage Analyzer: Technology I find this information:
Initially, RSCVR_COLLECT transfers data to the 'staging area' of table COVRES0. Finally, the new data is aggregated with the statistics in tables COVRES and COVREF, among others.
In the table COVRES I can see one row where my program is listed, but there is no further information about statistics like branch coverage, etc.
Can anybody give me a hint, where I can find the results of the Code Coverage Analyzer, so I am able to extract them for further processing?
Best regards
Bernhard
Copied from the comments above:
Instead of using the code of the Code Coverage Analyzer (transaction SCOV), use the Coverage API:
see the IF_SCV* interfaces in SE24.

How do I plot benchmark data in a Jenkins matrix project

I have several Jenkins matrix projects in which I output benchmark results (i.e. execution times) in a CSV file. I'd like to plot these execution times as a function of the build number, so I can see if my projects are regressing over time.
I can confirm the Plot Plugin is a correct and quite useful approach. By the way, it supports CSV as well: plot configuration example
I've been using it for several years without any problem. Benchmark results were generated as a properties file, with the benchmark id (series id) used as a key and the result as a value. One build produces one result for each benchmark. Having that data, it is quite easy to create a plot configuration and track performance, as sketched below.
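For illustration, such a properties file might look like this (the benchmark names and values are made up):

# benchmarks.properties, written by the build; one key per benchmark series
sort_large_input_ms=1234
parse_log_file_ms=310

Each build writes a fresh file, and the Plot Plugin picks up one data point per series per build.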
This may help you:
https://wiki.jenkins-ci.org/display/JENKINS/Plot+Plugin
It adds plotting capabilities to Jenkins.
