Test coverage in REST Assured

I am planning to use REST Assured for API testing in our project.
I am wondering if any mechanism is available to determine the test coverage.
I did a Google search but didn't find an answer.

First of all, what kind of coverage?
- Requirement
- Code
- Endpoint
- Method
- Bug
- Environment
- etc.
I guess it is endpoint and/or requirement coverage you want to measure. Create a coverage matrix where you mark the covered endpoints and their methods (y axis) against the test cases (x axis). The endpoints can be listed up front, since you should already know them; the rest can be collected at runtime with a collector utility.
I personally used a custom annotation on tests to mark the endpoint(s) they exercise, while the test result was provided by a TestNG ITestListener. At the end, a custom reporter produced my own coverage result as a heatmap.
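A minimal sketch of that idea, assuming TestNG 7+ (where ITestListener provides default implementations for the methods you don't override); the annotation CoversEndpoint and the listener EndpointCoverageListener are names made up for illustration:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.testng.ITestListener;
import org.testng.ITestResult;

// Hypothetical marker for the endpoint(s) a test exercises, e.g. @CoversEndpoint("GET /users/{id}").
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface CoversEndpoint {
    String[] value();
}

// Register via @Listeners(EndpointCoverageListener.class) or in testng.xml.
class EndpointCoverageListener implements ITestListener {

    private static final Map<String, Set<String>> coverage = new ConcurrentHashMap<>();

    @Override
    public void onTestSuccess(ITestResult result) {
        record(result, "PASSED");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        record(result, "FAILED");
    }

    private static void record(ITestResult result, String status) {
        CoversEndpoint covers = result.getMethod()
                .getConstructorOrMethod()
                .getMethod()
                .getAnnotation(CoversEndpoint.class);
        if (covers == null) {
            return;
        }
        for (String endpoint : covers.value()) {
            coverage.computeIfAbsent(endpoint, k -> ConcurrentHashMap.newKeySet())
                    .add(result.getName() + " [" + status + "]");
        }
    }

    // A custom reporter can read this map to render the coverage matrix/heatmap.
    public static Map<String, Set<String>> getCoverage() {
        return coverage;
    }
}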

Related

Bazel - best documentation for which providers are used by any given rule?

I am writing a custom rule that takes inputs from cc_library, cc_binary, apple_static_library, and a few other platform-specific rules. I'd like to see what each of them provides when I reference ctx.attr.foo inside the custom rule's implementation function.
There is a list of providers here https://docs.bazel.build/versions/master/skylark/lib/skylark-provider.html but it doesn't say which rules are using them.
Is there a best practice for viewing what these rules are providing me, or does it require going through the source for each one?
This is how you get all providers and output groups from a target:
bazel cquery my_target --output=starlark --starlark:expr="providers(target)"
You can get a list of providers for a given target with dir. Something like this is helpful for debugging:
def _print_attrs_impl(ctx):
    for target in ctx.attr.targets:
        print('%s: %s' % (target.label, dir(target)))
Printing from inside a rule you're developing is often helpful too, to verify targets are actually what you expect them to be.
You can also apply dir to the providers themselves, to see what fields they have.

How to get a dataflow job's step details using Java Beam SDK?

I'm using the Java Beam SDK for my Dataflow job, and the com.google.api.services.dataflow.model.Job class gives details about a particular job. However, it doesn't provide any method/property to get per-step information such as Elements Added, Estimated Size, etc.
Below is the code I'm using to get the job's information:
PipelineResult result = p.run();
String jobId = ((DataflowPipelineJob) result).getJobId();
DataflowClient client = DataflowClient.create(options);
Job job = client.getJob(jobId);
I'm looking for something like:
job.getSteps("step name").getElementsAdded();
job.getSteps("step name").getEstimatedSize();
Thanks in advance.
The SinkMetrics class provides a bytesWritten() method and an elementsWritten() method. In addition, the SourceMetrics class provides an elementsRead() and a bytesRead() method.
If you use the classes in the org.apache.beam.sdk.metrics package to query for these metrics and filter by step, you should be able to get the underlying metrics for the step (i.e., elements read).
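A rough sketch of such a query; the step name passed in is a placeholder for the actual transform name in your pipeline, and which counters show up depends on the runner and IOs used:
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.metrics.MetricQueryResults;
import org.apache.beam.sdk.metrics.MetricResult;
import org.apache.beam.sdk.metrics.MetricsFilter;

class StepMetricsPrinter {
    // Prints every counter (e.g. elements read/written) reported for the given step.
    static void printStepCounters(PipelineResult result, String stepName) {
        MetricQueryResults metrics = result.metrics().queryMetrics(
                MetricsFilter.builder()
                        .addStep(stepName)
                        .build());
        for (MetricResult<Long> counter : metrics.getCounters()) {
            // Some runners only report "attempted" values; getCommitted() may throw for them.
            System.out.println(counter.getName() + " = " + counter.getAttempted());
        }
    }
}
Calling StepMetricsPrinter.printStepCounters(result, "step name") after result.waitUntilFinish() should list the counters attached to that transform.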
I would add that if you're willing to look outside of the Beam Java SDK, then since you're running on Google Cloud Dataflow you can use the Google Dataflow API. In particular, you can use projects.jobs.getMetrics to fetch the detailed metrics for the job, including the number of elements written/read. You will need to do some parsing of the metrics, as there are hundreds of metrics for even a simple job, but the underlying data you're looking for is present via this API call (I just tested).
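If you go that route, a sketch using the generated API client (the same com.google.api.services.dataflow package the Job class above comes from) might look roughly like the following; exactly which metric names and context entries you need to filter on is something you'll have to work out from the response:
import com.google.api.services.dataflow.Dataflow;
import com.google.api.services.dataflow.model.JobMetrics;
import com.google.api.services.dataflow.model.MetricUpdate;
import java.io.IOException;

class DataflowJobMetricsDump {
    // Assumes 'dataflow' is an already-authorized Dataflow client instance.
    static void dumpMetrics(Dataflow dataflow, String projectId, String jobId) throws IOException {
        JobMetrics jobMetrics = dataflow.projects().jobs().getMetrics(projectId, jobId).execute();
        for (MetricUpdate metric : jobMetrics.getMetrics()) {
            // The step is identified inside the metric's structured name/context.
            System.out.println(metric.getName() + " -> " + metric.getScalar());
        }
    }
}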

Ruby on Rails coverage tool that shows what is not being covered

I am developing an application using Rails. Since the start we have been using RSpec to test our application, and we are using simplecov as a tool to show our test coverage.
But simplecov only shows the percentage of coverage per file. My question is whether there is a tool that shows which lines of code are not being covered?
If you click on the file name, simplecov will show you the lines that are covered (in green) and not covered (in red).
We use this in my workplace: https://codecov.io/#features
Codecov is used to help developers determine which lines of code were executed by their tests. There are three primary terms used to indicate the results of your tests: hit, partial and miss.
A value such as 54% comes from the calculation hits / (hits + partials + misses) = coverage.
- A hit is a line (aka statement) that is fully executed by your tests.
- A partial is a statement (typically a branch) that is not fully executed. For example, if true: ... will always be a partial hit, because the branch is never skipped: true is always true.
- A miss is a statement that was not executed by tests.
A grade of 54%, in simple terms, says "roughly half my code is tested". Use Codecov to investigate the methods and statements in your code that are not tested, to help guide you on where to write your next test and increase the coverage.

Machine parseable error messages

(From https://groups.google.com/d/msg/bazel-discuss/cIBIP-Oyzzw/caesbhdEAAAJ)
What is the recommended way for rules to export information about failures, so that downstream tools can include them in UIs?
Example use case:
I run bazel test //my:target, and one of the actions for //my:target fails because there is an unknown variable "usrname" in my/target.foo at line 7, column 10. The tool would also like to report that "username" is a valid variable and that this is a possible misspelling, and thus to suggest adding an "e".
One way I have thought of doing this is to have my action produce a separate file, //my:target.errors, placed in a separate output group, and write machine-parseable data there in addition to the human-readable data on stdout.
I can then find all of these files and parse the data in them in downstream tools.
Is there any prior work on this, or does everything just try to parse the human readable output?
I recommend running the error checkers as extra actions.
I don't think Bazel currently has hooks for custom error handlers like you describe. Please consider opening a feature request: https://github.com/bazelbuild/bazel/issues/new

Sample TFS 2010 Build Process Template for NCover [duplicate]

I was wondering if any of you guys had any experience generating code coverage reports in TFS Build Server 2010 while running NUnit tests.
I know it can be easily done with the packaged alternative (MSTest + enabling coverage on the testrunconfig file), but things are a little more involved when using NUnit. I've found some info here and there pointing to NCover, but it seems outdated. I wonder if there are other alternatives and whether someone has actually implemented this or not.
Here's more info about our environment/needs:
- TFS Build Server 2010
- Tests are in plain class libraries (not Test libraries - i.e., no testrunconfig files associated), and are implemented in NUnit. We have no MSTests.
- We are interested in running coverage reports as part of each build and if possible setting coverage threshold requirements for pass/fail criteria.
We've done it with NUnit and NCover and are pretty happy with our results. NUnit execution is followed by NUnitTfs execution in order to get our test results published in the build log. Then NCover kicks in, generating our code coverage results.
Two things could pose as disadvantages:
- NUnitTfs doesn't work well with NCover (at least I couldn't find a way to execute both in the same step), so, since NCover invokes NUnit, I have to run the unit tests twice: (1) to get the test results and (2) to get the coverage results through NCover. Naturally, that makes my builds last longer.
- Setting up the arguments for properly invoking NCover wasn't trivial. But since I installed it, I have never had to maintain it.
In any case, the resulting reporting (especially the trend aspect) is very useful for monitoring how our code evolves over time. Especially if you're working on a platform (as opposed to short-lived projects), trend reports are of great value.
EDIT
I'll try to present, in a quick-and-dirty manner, how I've implemented this; I hope it can be useful. We currently have NCover 3.4.12 on our build server.
Our simple naming convention regarding our NUnit assemblies is that if we have a production assembly "123.dll", then another assembly named "123_nunit.dll" exists that implements its tests. So, each build has several *_nunit.dll assemblies that are of interest.
The part of the build process template under "If not disable tests" is the one that has been reworked in order to achieve our goals, in particular the section that was named "Run MSTest for Test Assemblies". The whole implementation is described here after some cleanups to make the flow easier to understand (the picture was too large to be inserted directly here).
First, some additional arguments are implemented in the build process template and are then available to be set in each build definition.
We then form the NUnit args in "Formulate nunitCommandLine":
String.Format("{0} /xml={1}\\{2}.xml", nunitDLL, TestResultsDirectory, Path.GetFileNameWithoutExtension(nunitDLL))
This is then used in the "Invoke NUnit" activity.
If this succeeds and coverage is enabled for this build, we move to "Generate NCover NCCOV" (the coverage file for this particular assembly). For this we invoke NCover.Console.exe with the following args:
String.Format("""{0}"" ""{1}"" //w ""{2}"" //x ""{3}\{4}"" //literal //ias {5} //onlywithsource //p ""{6}""",
    NUnitPath,
    Path.GetFileName(nunitDLL),
    Path.GetDirectoryName(nunitDLL),
    Path.GetDirectoryName(Path.GetDirectoryName(nunitDLL)),
    Path.GetFileName(nunitDLL).Replace("_nunit.dll", ".nccov"),
    Path.GetFileNameWithoutExtension(nunitDLL).Replace("_nunit", ""),
    BuildDetail.BuildNumber)
All these run in the foreach loop "For all nunit dlls". When we exit the loop, we enter "Final NCover Activities", starting with the part "Merge NCCovs", where NCover.Console.exe is executed again, this time with different args:
String.Format("""{0}\*.nccov"" //s ""{0}\{1}.nccov"" //at ""{2}\{3}\{3}.trend"" //p {1} ",
    Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
    BuildDetail.BuildNumber,
    NCoverDropLocation,
    BuildDetail.BuildDefinition.TeamProject
)
When this has run, all NCCOV files of this build have been merged into one NCCOV file named after the build, and the trend file (which monitors the build definition throughout its life) has been updated with the elements of this current build.
We now only have to generate the final HTML report. This is done in "Generate final NCover rep", where we invoke NCover.reporting with the following args:
String.Format(" ""{0}\{1}.nccov"" //or FullCoverageReport //op ""{2}\{1}_NCoverReport.html"" //p ""{1}"" //at ""{3}\{4}\{4}_{5}.trend"" ",
    Path.GetDirectoryName(Path.GetDirectoryName(testAssemblies(0))),
    BuildDetail.BuildNumber,
    PathForNCoverResults,
    NCoverDropLocation,
    BuildDetail.BuildDefinition.TeamProject,
    BuildType
)
