How to load coverage_db?

I'm trying to write functional coverage for my design. I wrote the covergroups I need, but now I'm having difficulty carrying my coverage between test runs.
Here are a few code examples:
`include "uvm_macros.svh"
package t;
  import uvm_pkg::*;

  class test extends uvm_test;
    `uvm_component_utils(test)

    byte unsigned data;

    covergroup cg;
      dat: coverpoint data;
    endgroup

    function new(string name, uvm_component parent);
      super.new(name, parent);
      cg = new();
    endfunction: new

    task run_phase(uvm_phase phase);
      phase.raise_objection(this);
      repeat(5) begin
        #5 data = $urandom();
        #10 cg.sample();
      end
      phase.phase_done.set_drain_time(this, 50ns);
      phase.drop_objection(this);
    endtask
  endclass: test
endpackage

module top();
  import uvm_pkg::*;
  import t::*;

  initial begin
    $load_coverage_db("coverage_report.db");
    $set_coverage_db_name("coverage_report.db");
    run_test();
  end
endmodule
If I try to run the test above, I get this error:
** Error: (vsim-6844) Covergroup '/t/test/cg' has no instance created in simulation, ignoring it
Clearly, the problem is that cg is created after the test starts, while the coverage database is loaded before the covergroup instance exists. So I moved $load_coverage_db into run_phase like this:
`include "uvm_macros.svh"
package t;
  import uvm_pkg::*;

  class test extends uvm_test;
    `uvm_component_utils(test)

    byte unsigned data;

    covergroup cg;
      dat: coverpoint data;
    endgroup

    function new(string name, uvm_component parent);
      super.new(name, parent);
      cg = new();
    endfunction: new

    task run_phase(uvm_phase phase);
      $load_coverage_db("coverage_report.db");
      phase.raise_objection(this);
      repeat(5) begin
        #5 data = $urandom();
        #10 cg.sample();
      end
      phase.phase_done.set_drain_time(this, 50ns);
      phase.drop_objection(this);
    endtask
  endclass: test
endpackage

module top();
  import uvm_pkg::*;
  import t::*;

  initial begin
    $set_coverage_db_name("coverage_report.db");
    run_test();
  end
endmodule
Now I'm getting this warning instead:
** Warning: (vsim-6841) Covergroup instance '/t::test::cg ' exists in simulation but not found in database
What do I need to do to get my old coverage into the test?

After I had already written a Python script that builds and updates my own SQLite database after each test run, from coverage exported to a text file, I finally found out that QuestaSim has a vcover merge command that merges the coverage of all tests.
As far as I understand, this is the generally accepted practice: keep the coverage of each test separately and merge the results together, rather than loading the accumulated coverage into each new test.
One remaining problem is that vcover has very sparse built-in help and is barely mentioned in the documentation.
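For anyone else landing here, the flow I ended up with looks roughly like this (the .ucdb file names are mine; check vcover merge -help and vcover report -help for the exact options in your Questa version):
# in each test's vsim session: save that run's coverage when simulation exits
coverage save -onexit test1.ucdb

# after all runs: merge the per-test databases into one
vcover merge merged.ucdb test1.ucdb test2.ucdb test3.ucdb

# produce a readable report from the merged database
vcover report -details merged.ucdb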


Can arrays of registers be used in tasks in Verilog?

I am trying to implement a task for a ripple-carry adder in Verilog HDL. The tool reports this error at line 1: "root scope declaration is not allowed in Verilog 95/2K mode".
task rca;  // <--- line 1
  input [15:0] in1, in2;
  output reg [15:0] out2;
  reg [15:0] c;
  integer i;
  begin
    c = 16'b0;
    for (i = 0; i < 16; i = i + 1)
    begin
      out2[i] = in1[i] ^ in2[i] ^ c[i];
      c[i+1] = (in1[i] & in2[i]) | (in2[i] & c[i]) | (c[i] & in1[0]);
    end
  end
endtask
You have to declare your task inside a module. I suggest creating a file with a .vh extension (Verilog header), placing your task in that file, and pulling it in with the `include directive. For example:
module example_module(
  // list of ports
);
  `include "tasks.vh"
  // Use your task here
endmodule
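For instance, with the rca task from the question moved into tasks.vh, a quick self-test of the include could look like this (signal names here are illustrative):
module example_module();
  reg [15:0] a, b;
  reg [15:0] sum;

  // pulls the task declaration into this module's scope
  `include "tasks.vh"

  initial begin
    a = 16'd5;
    b = 16'd3;
    rca(a, b, sum);  // call the included task
    $display("sum = %d", sum);
  end
endmodule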

Dart: testing a command-line program

Suppose I have the following program, increment.dart:
import 'dart:io';

void main() {
  var input = int.parse(stdin.readLineSync());
  print(++input);
}
and I want to test it with something like expect() from the test package:
test('Increment', () {
  expect(/*call program with input 0*/, equals(1));
});
Elaborating on my use case:
I use this website to practice solving puzzles. It has an online IDE, but it lacks debugging tools and the programs use standard I/O. So to debug my code locally I have to replace every stdin.readLineSync() with hardcoded test values and then repeat that for every test. I'm looking for a way to automate this (much like how things work on their site).
Following @jamesdlin's suggestion, I looked up info about Processes, found this example, and whipped up the following test:
@TestOn('vm')
import 'dart:convert';
import 'dart:io';

import 'package:test/test.dart';

void main() {
  test('Increment 0', () async {
    final input = 0;
    final path = 'increment.dart';
    final process = await Process.start('dart', [path]);

    // Send input to increment.dart's stdin.
    process.stdin.writeln(input);

    final lineStream =
        process.stdout.transform(Utf8Decoder()).transform(LineSplitter());

    // Test output of increment.dart.
    expect(
        lineStream,
        emitsInOrder([
          // Values match individual events.
          '${input + 1}',
          // By default, more events are allowed after the matcher finishes
          // matching. This asserts instead that the stream emits a done event
          // and nothing else.
          emitsDone
        ]));
  });
}
Trivia:
@TestOn()
Used to specify a platform selector, i.e. which platforms the test may run on.
Process.start()
Used to run commands from the program itself, like ls -l (code: Process.start('ls', ['-l'])). The first argument takes the command to be executed and the second takes the list of arguments to pass.
Testing streams
expect() can match a Stream directly using stream matchers such as emitsInOrder() and emitsDone, as in the test above.

Script returns a value on stdout but I am not able to get the value in the return parameter

I am using the CallExecutable function to call a Python script. The Python script returns a value on stdout, but I am not able to get the value in the return parameter. Is there a way to pass the value to the result variable?
This is the CANape script that I am using:
double err;
char result[];
err = CallExecutable("C:\\Program Files (x86)\\Python38-32\\python.exe", "C:\\Users\\XXXX\\Desktop\\Read_Current.py 1", 1, result);
print("%s", result);
Thanks in advance
The result buffer provided to CallExecutable receives the exit code of the Python program, not its stdout. The following Python code, if called, will exit with 123, and that is the value result will hold in your code above.
import sys
sys.exit(123)
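Given that constraint, one option is to have the script encode an integer result in its exit code. A sketch (the measurement itself is a placeholder, and exit codes are limited to 0-255 on POSIX, so real values may need scaling or clamping):
# Read_Current.py (sketch): report an integer reading via the exit code,
# since CallExecutable's result parameter receives the exit code.
import sys

def read_current(channel):
    # placeholder for the real measurement
    return 42

if __name__ == '__main__':
    channel = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    sys.exit(read_current(channel))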
If you are looking to pass richer data back from the Python script, I've done this using the function DLL mechanism (there is some demo code included with CANape for this). It involves a small C++ wrapper to interface with Python or other languages.

I want Canopy web testing results to show in VS 2013 test explorer... and I'm SO CLOSE

I'm trying to figure out how to get the canopy test results to show in the VS test explorer. I can get my tests to show up, and it will run them, but it always shows a pass. It seems like the run() function is "eating" the results, so VS never sees a failure.
I'm fairly sure the conflict is in how canopy converts the exceptions it catches into its own test results: normally you'd want run() to succeed regardless of the outcome and report results through canopy's own reporting.
Maybe I should be redirecting output and interpreting that in the MS testing code?
So here is how I have it set up right now...
The Visual Studio test runner looks at this file for what it sees as tests; these call the canopy methods that do the real testing.
open canopy
open runner
open System
open Microsoft.VisualStudio.TestTools.UnitTesting

[<TestClass>]
type testrun() =

    // Look in the output directory for the web drivers
    [<ClassInitialize>]
    static member public setup(context : TestContext) =
        canopy.configuration.ieDir <- "."
        canopy.configuration.chromeDir <- "."
        // start an instance of the browser
        start ie
        ()

    [<TestMethod>]
    member x.LocationNoteTest() =
        let myTestModule = new myTestModule()
        myTestModule.all()
        run()

    [<ClassCleanup>]
    static member public cleanUpAfterTesting() =
        quit()
        ()
myTestModule looks like:
open canopy
open runner
open System

type myTestModule() =
    // some helper methods
    member x.basicCreate() =
        context "The meat of my tests"
        "Test1" &&& fun _ ->
            // some canopy test statements like...
            url "http://theURL.com/"
            "#title" == "The title of my page"
            // Does the text of the button match expectations?
            "#addLocation" == "LOCATION"
            // add a location note
            click ".btn-Location"

    member x.all() =
        x.basicCreate()
        // I could add additional tests here, or call them individually
I have it working now. I put the line below after run() in each test.
Assert.IsTrue(canopy.runner.failedCount = 0, results.ToString())
so now my tests look something like:
[<TestMethod>]
member x.LocationNoteTest() =
    let locationTests = new LocationNote()
    // Add the tests to the canopy suite.
    // Note: this just defines the tests to run; the canopy portion
    // of the tests does not actually execute until run is called.
    locationTests.all()
    // Tell canopy to run all the tests in the suites.
    run()
    Assert.IsTrue(canopy.runner.failedCount = 0, results.ToString())
Canopy and the UnitTesting infrastructure overlap in what they want to take care of. I want the UnitTesting infrastructure to be the thing reporting the summary and details of all tests, so I needed a way to reset the canopy portion between tests rather than tracking canopy's last known state and comparing against it. For this to work, the canopy suite can only hold one test at a time, but we can have as many tests as we want at the UnitTesting level. To adjust for that, we do the below in [<TestInitialize>].
runner.suites <- [new suite()]
runner.failedCount <- 0
runner.passedCount <- 0
It might make sense to have something within canopy that could be called or configured when the user wants to use a different unit testing infrastructure around canopy.
Additionally, I wanted the output that includes the error information to appear as it normally does when a test fails, so I capture console out in a StringBuilder and clear it in [<TestInitialize>]. I set it up by including the line below in [<ClassInitialize>], where common.results is the StringBuilder I then use in the asserts.
System.Console.SetOut(new System.IO.StringWriter(common.results))
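Putting those pieces together, the per-test reset might look like this inside the [<TestClass>] type (a sketch: using [<TestInitialize>] as the reset point and common.results as the shared StringBuilder are my conventions, not canopy's):
// Runs before every [<TestMethod>]: give canopy a fresh suite and counters,
// and clear the console output captured from the previous test.
[<TestInitialize>]
member x.resetCanopyState() =
    runner.suites <- [new suite()]
    runner.failedCount <- 0
    runner.passedCount <- 0
    common.results.Clear() |> ignore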
Create a mutable type to pass into the 'myTestModule.all' call which can be updated accordingly upon failure and asserted upon after 'run()' completes.

TestDataConfig.groovy not found, build-test-data plugin proceeding without config file

I am getting the following error when including the Build mixin in unit tests:
TestDataConfig.groovy not found, build-test-data plugin proceeding without config file
It works like a charm in the integration tests but not in unit tests. I mean, the 'build' plugin itself works in unit tests, but TestDataConfig is not populating default values.
Thank you
First, verify the version of build-test-data in your BuildConfig.groovy:
test ":build-test-data:2.0.3"
Second, check your test. If you want to build objects you need:
import grails.buildtestdata.mixin.Build
...
@TestFor(TestingClass)
@Build([TestingClass, SupportClass, AnotherClass])
class TestingClassTest {
    @Test
    void testMethod() {
        def tc1 = TestingClass.build()
        def sc1 = SupportClass.build()
        def ac1 = AnotherClass.build()
    }
}
Third, check the domain constraints; you could have property validations, like unique, that fail when you build two instances. You need to set those properties in code:
def tc1 = TestingClass.build(uniqueProperty: 'unique')
def tc2 = TestingClass.build(uniqueProperty: 'special')
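Finally, for the default values themselves: the plugin reads a TestDataConfig.groovy from the test classpath, typically grails-app/conf/TestDataConfig.groovy. A minimal sketch, with placeholder class and property names:
// grails-app/conf/TestDataConfig.groovy
testDataConfig {
    sampleData {
        'com.example.TestingClass' {
            uniqueProperty = 'default-value'
        }
    }
}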
I guess the dependency scope should be:
test ":build-test-data:2.0.3"
since it is just used for testing, right?
