I used the covfile with a select_functional command in the run options of the regression, and the coverage groups appear, but I can't see the assertions in IMC in order to map them. If I run a single test and pass the covfile it works, but in regressions it doesn't.
The covfile should not only be in the run_options of the regression, but also in the run_options of the compile script.
I assume you are using Cadence irun since you mention IMC. I also assume you are using vManager as your regression tool. As for why the coverage does not show up in the regression UCD: I suspect you are not using the same snapshot in the regression as when you run the sim locally, which creates the snapshot with the covfile on the fly. Double-check your irun.log to see which snapshot it is using, then check the elaboration arguments of that snapshot (which you can find under the .INCA_lib/ directory) to see whether the snapshot has coverage turned on (-coverage all).
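As a rough illustration of the idea (the file and source names below are placeholders; the only options taken from the answer above are -coverage all and -covfile), the coverage switches have to be present when the snapshot is elaborated, not only when the regression runs it:

```
# Hedged sketch: enable coverage at elaboration, then reuse the same snapshot at run time.
# "my_cov.ccf" and "tb_top.sv" are placeholder names.
irun -elaborate -coverage all -covfile my_cov.ccf tb_top.sv   # build the snapshot with coverage enabled
irun -R -covfile my_cov.ccf                                   # regression runs must reuse this snapshot
```

If the snapshot the regression picks up was elaborated without coverage, IMC will have nothing to map the assertions against, no matter what the run options say.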
I have a Google Test based test suite. Since the tests manipulate the filesystem and do other things that I don't want to be left behind in case of a test crash, besides just not playing nicely with running tests in parallel, I want to run each test case in a new container. I am currently using CTest (aka CMake test) to run the gtest binary, but I am not very attached to either of these, so if the best option is some other tool, I can accept that.
Can anyone suggest a way to automate this? Right now I am adding each individual test case manually to CTest with a call to docker run as part of the test command, but it is brittle and time-consuming. Maybe I am doing this wrong?
You can run your GTest runner with --gtest_list_tests to list all tests.
You can then loop through this list and call your GTest runner with --gtest_filter set to the name of a specific test.
The format of the list is a bit awkward to parse, so you need some shell scripting skills to get the actual test names (see the sketch below).
Check the exit code of the GTest runner to know whether the test succeeded or failed.
I do not know if this integrates well with CTest.
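A minimal sketch of such a loop, assuming the test binary is baked into an image called my-tests at /app/test_runner (both names are placeholders), with one throw-away container per test:

```
#!/usr/bin/env bash
# Sketch: enumerate the gtest cases and run each one in its own container.
# The image name, runner path, and the simple parsing below are assumptions to adapt.
set -u

image=my-tests
runner=/app/test_runner
suite=""
failed=0

# --gtest_list_tests prints "Suite." lines followed by indented test names.
while IFS= read -r line; do
  case "$line" in
    '  '*)  # an indented line is a test inside the current suite
            name="$suite$(echo "$line" | awk '{print $1}')"   # drop trailing comments like "# GetParam() = ..."
            echo "=== $name ==="
            # --rm throws the container away, so filesystem side effects cannot leak between tests
            docker run --rm "$image" "$runner" --gtest_filter="$name" \
              || { echo "FAILED: $name"; failed=1; }
            ;;
    *.)     suite="$line" ;;   # a suite line already ends with ".", e.g. "MyFixture."
  esac
done < <(docker run --rm "$image" "$runner" --gtest_list_tests)

exit "$failed"
```

If you want to stay with CTest, each of those docker run commands could equally be emitted as an individual add_test() entry by a small script at configure time, which would remove the manual bookkeeping the question describes.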
My managers want us to determine which tests might have to be run, based on code changes that were made to the application we are testing.
However, it is hard to know which tests actually need to be re-verified as a result of a code change. What we commonly do is test the entire area where the code change occurred, or even the entire project/solution.
We were told this could be achieved with TFS Build or MTM tools. Could someone share the details?
PS: We are running TFS 2015 Update 4 and VS 2017.
There is a concept called Test Impact Analysis (TIA) which helps analyze the impact of development changes on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
The Test Impact Analysis (TIA) feature specifically enables this: TIA is all about incremental validation by automatic test selection. For a given code commit entering the pipeline, TIA will select and run only the relevant tests required to validate that commit. Thus, that test run is going to complete faster, if there is a failure you will get to know about it faster, and because it is all scoped by relevance, analysis will be faster as well.
Test Impact Analysis for managed automated tests is available via a checkbox in the 2.* preview version of the VSTest task.
If enabled, only the relevant set of managed automated tests that need to be run to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio, and is presently supported in CI for managed automated tests.
However, this is only available with TFS 2017 Update 1 (it needs the 2.* preview version of the VSTest task). For more details, please refer to this blog: Accelerated Continuous Testing with Test Impact Analysis.
Is it possible to run my simulation directly without seeing the GUI? All I am interested in are console output data, so there is no need for me to interact with a GUI to play, pause or reset my simulation.
Yes, it is. The easiest way to do this is to run it as a batch model (even if you only do one run at a time).
If you're using the standard distribution with Eclipse and have created projects as described in the docs, you will have a Run configuration called Batch <your_model_name> model. If you run this, you'll launch in batch mode: input parameters will be taken from a batch file you write as a one-off job (or update as often as you want to change parameters).
See this documentation for more information on how to set up and run batch models.
Is there a way to aggregate code coverage data from jmockit-coverage and emma coverage? I can run the two different coverage steps in two separate junit ant tasks and generate the coverage data in two directories. Just not sure if the coverage outputs from these two are compatible and can be merged to display together.
No, they can't be aggregated into a single HTML report, since these are two different code coverage tools, which know nothing about each other. Of course, someone could create yet another tool which would do that; personally, I don't see much value in it, though.
> No valid coverage data available
>
> Tracking and trending code coverage works best when like is compared with like. In this regard it is best to only track builds when all unit tests are passing.
>
> This plugin will not report code coverage until there is at least one stable build.
What should I do to get a rails stats report?
#krs - From what you've said in your comments, you are correct.
rails stats only provides code coverage metrics for tests at that point in time; it does not intend to keep data between builds and report test results vs. code coverage.
You can connect your CI to an external service to see test coverage vs. test results (e.g. https://coveralls.io).
Please add the coverage.xml file outside the project. It works fine for me.