How to write multiple unit tests in Dart in multiple files?

I am writing a Dart library and want to have it unit tested. I created a test directory and want to put my tests there. Because I am going to have a lot of tests, I want to split them across multiple files. My question is: what is the Dart convention for doing this? I want to be able to run all my tests easily, but I also want to be able to run just a single file of tests.
What are your suggestions?

It is common to separate tests into multiple files, and I am including an example of how you can do that.
Imagine that you have two test files, foo_test.dart and bar_test.dart, that contain tests for your program. foo_test.dart could look something like this:
library foo_test;

import 'package:unittest/unittest.dart';

void main() {
  test('foo test', () {
    expect("foo".length, equals(3));
  });
}
And bar_test.dart could look something like this:
library bar_test;

import 'package:unittest/unittest.dart';

void main() {
  test('bar test', () {
    expect("bar".length, equals(3));
  });
}
You could run either file, and the test contained in that file would execute.
Then I would create something like an all_tests.dart file that imports the tests from foo_test.dart and bar_test.dart. Here is what all_tests.dart could look like:
import 'foo_test.dart' as foo_test;
import 'bar_test.dart' as bar_test;

void main() {
  foo_test.main();
  bar_test.main();
}
If you executed all_tests.dart, both the tests from foo_test.dart and bar_test.dart would execute.
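For example, assuming the files above live in your package's test/ directory (the paths here are just illustrative), you could run either a single file or the whole suite on the Dart VM:
$ dart test/foo_test.dart   # run just this file's tests
$ dart test/all_tests.dart  # run everything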
One thing to note: for all this to work, you need to declare foo_test.dart and bar_test.dart as libraries (see the first line of each file). Then, in all_tests.dart, you can use import syntax to fetch the contents of the declared libraries.
This is how I organize most of my tests.

There is a tool that does exactly that, Dart Test Runner. An excerpt from that page:
Dart Test Runner will automatically detect and run all the tests in your Dart project in the correct environment (VM or Browser).
It detects any test written in a file suffixed with _test.dart where your test code is inside a main() function. It has no problem detecting and running unittest tests.
It's pretty easy to install it and run it. Just two commands:
$ pub global activate test_runner
$ pub global run test_runner
For more options, please check the Dart Test Runner page.

It's not necessary to have multiple files to isolate a test - see Running only a single test and Running a limited set of tests.
To isolate a test, change test() to solo_test().
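For example, a minimal sketch against the old unittest API (the test bodies are just placeholders); while a solo_test is registered, only that test runs:

import 'package:unittest/unittest.dart';

void main() {
  solo_test('only this test runs', () {
    expect('foo'.length, equals(3));
  });

  test('skipped while the solo_test above is registered', () {
    expect('bar'.isEmpty, isFalse);
  });
}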
So you can put all your tests in the same file (or into several parts).

In case it helps anybody trying to run a bunch of tests at once: I was writing tests, but my test file names did not end with _test.dart, so I was not able to run them all in one go.
If you want to run all tests at once, your test file names must end with _test.dart (for example, rename login_checks.dart to login_checks_test.dart).

Related

Can I add static analysis to a py_binary or py_library rule?

I have a repo which uses bazel to build a bunch of Python code. I would like to introduce various flavors of static analysis into the build and have the build fail if these static analyses throw errors. What is the best way to do this?
For example, I'd like to declare something like:
py_library_with_static_analysis(
    name = "foo",
    srcs = ["foo.py"],
)

py_library_with_static_analysis(
    name = "bar",
    srcs = ["bar.py"],
    deps = [":foo"],
)
in a BUILD file and have it error out if there are mypy/flake/etc. errors in foo.py. I would like to be able to do this gradually, converting libraries/binaries to static analysis one target at a time. I'm not sure whether I should do this via a new rule, a macro, an aspect, or something else.
Essentially, I think I'm asking how to run an additional command while building a py_binary/py_library and fail if that command fails.
I could create my own version of a py_library rule and have it run static analysis within the implementation but that seems like something which is really easy to get wrong (my guess is that native.py_library is quite complex?) and there doesn't seem to be a way to instantiate a native.py_library within a custom rule.
I've also played around with macros a bit, but haven't been able to get that to work either. I think my issue there is that a macro doesn't actually specify new commands, only new targets and I can't figure out how to make the static analysis target get force built along with the py_library/py_binary I'm interested in.
A macro that adds implicit test targets is not such a bad idea: The test targets will be picked up automatically when you run bazel test //..., which you could do in a gating CI to prevent imperfect code from merging.
Bazel supports a BUILD prelude (which is underdocumented) that you could use to replace all py_binary, py_library, and even py_test with your test-adding wrapper macros with minimal changes to existing code.
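For illustration, a minimal sketch of such a wrapper macro. It assumes a hypothetical run_mypy.py wrapper script that invokes mypy on its arguments and exits non-zero on findings; the macro and target names are made up, not an existing rule set.

# e.g. in //tools:py_rules.bzl, loaded from BUILD files (or the prelude)
def py_library_with_static_analysis(name, srcs = [], deps = [], **kwargs):
    # The real library target, unchanged.
    native.py_library(name = name, srcs = srcs, deps = deps, **kwargs)

    # Implicit companion test target; `bazel test //...` picks it up automatically.
    native.py_test(
        name = name + "_mypy_test",
        srcs = ["run_mypy.py"],  # hypothetical wrapper around mypy
        main = "run_mypy.py",
        args = ["$(rootpath %s)" % s for s in srcs],
        data = srcs,
    )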
If you somehow fail the build instead, it will make it harder to quickly prototype things. Sometimes you want to just quickly try something out, and you don't care about any pydoc violations yet.
In case you do want to fail the build, you might be able to use the Validations Output Group of a rule that you implement to wrap or replace your py_libraries.

How to stop duplicating scripts in K6?

I have to write about 20 different scripts in K6 for an application, and most of these scripts contain common functionality such as logging in, choosing some options, and so on.
Is there a better way to write K6 scripts without duplicating this common functionality? Can we implement the common methods somewhere and call them inside the default function, or something similar to that?
You can write your own module that contains the common functionality and then import it:
$ cat index.js
import { hello_world } from './modules/module.js';

export default function() {
  hello_world();
}

$ cat modules/module.js
export function hello_world() {
  console.log("Hello world");
}
You can read the k6 documentation on modules for more details.
Yes, you can move the common methods to separate JS files and then import them in the scripts that require them: https://docs.k6.io/docs/modules

Mocking a Groovy Script object with stubbed methods with Spock?

What I'm trying to do
I have a script that looks something like this:
def doStuff() {
    println 'stuff done'
}

return this
I am loading this script in another script so that I have a Groovy Script object that I can call doStuff from. This is in a script, call it myscript.groovy, that looks like this:
Script doer = load('doStuff.groovy')
doer.doStuff()
I would like to be able to mock the Script object that is returned by load, stub doStuff, and assert that it is called. Ideally, something like the following (assume that load is already mocked):
given:
Script myscript = load('myscript.groovy')
Script mockDoer = Mock(Script)

when:
myscript.execute()

then:
1 * load('doStuff.groovy') >> mockDoer
1 * mockDoer.doStuff()
However, I am getting an NPE at the line:
doer.doStuff()
How can I mock the Script object in a way that I can make sure that the doStuff method is stubbed and called properly in my test?
Why I'm doing it this way
I know this is a bit of a weird use case. I figured I should give some context why I am trying to do this in case people want to suggest completely different ways of doing this that might not apply to what I am trying to do.
I recently started working on a project that uses some fairly complex Jenkins Pipeline scripts. In order to modularize the scripts to some degree, utility functions and pieces of different pipelines are contained in different scripts and loaded and executed similarly to how doStuff.groovy is above.
I am trying to make a small change to the scripts at the same time as introducing some testing using this library: https://github.com/lesfurets/JenkinsPipelineUnit
In one test in particular I want to mock a particular utility method and assert that it is called depending on parameters to the pipeline.
Because the scripts are currently untested and reasonably complex, because I am new to them, and because many different projects depend on them, I am reluctant to make any sweeping changes to how the code is structured or modularized.

Importing classes for use in grails script _Events.groovy

In Grails 2.3.7 I'm using _Events.groovy to hook into WAR packaging to do some special processing:
_Events.groovy
import demo.utils.XmlUtil

eventCreateWarStart = { name, stageDir ->
    XmlUtil.doSomething()
    ...
    log.debug('done!')
}
When building the WAR, Grails complains about the XmlUtil import statement. _Events.groovy is not a class, so import statements don't work. How can I use a custom class in a script if I can't import it? And how can I perform logging instead of using println?
Update
Loading classes manually based on this and this seems to do the trick; I also got logging to work thanks to Aaron's answer below:
eventCreateWarStart = { name, stageDir ->
    def xmlUtil = loadRequiredClass('demo.utils.XmlUtil')
    xmlUtil.doSomething()
    ...
    grailsConsole.log('done!')
}

loadRequiredClass = { classname ->
    classLoader.loadClass(classname)
}
Questions
What implicit objects are available to Grails scripts?
It's a pain, but it does make sense when you think about it: _Events.groovy is part of the build process, which is also responsible for compiling the classes you are trying to use in _Events.groovy. It is definitely a catch-22 scenario, but I don't see how it could be made better without splitting _Events.groovy into separate files that compile and load at different stages of the build process.
You can use grailsConsole.log("hi") or grailsConsole.updateStatus("hi") to log output to the console.

Getting test results from Eunit in Erlang

I am working with Erlang and EUnit to do unit testing, and I would like to write a test runner to automate the running of my unit tests. The problem is that eunit:test/1 seems to return only ok or error, not a list of the tests that ran and what each returned in terms of passing or failing.
So is there a way to run the tests and get back some kind of data structure describing which tests ran and their pass/fail state?
If you are using rebar you don't have to implement your own runner. You can simply run:
rebar eunit
Rebar will compile and run all tests in the test directory (as well as the eunit tests inside your modules). Furthermore, rebar allows you to set the same options in rebar.config as in the shell:
{eunit_opts, [verbose, {report,{eunit_surefire,[{dir,"."}]}}]}.
You can use these options also in the shell:
> eunit:test([foo], [verbose, {report,{eunit_surefire,[{dir,"."}]}}]).
See also documentation for verbose option and structured report.
An alternative would be to use Common Test instead of EUnit. Common Test comes with a runner (the ct_run command) and gives you more flexibility in your test setup, but it is also a little more complex to use. Common Test is lacking when it comes to available macros, but it produces very comprehensible HTML reports.
There is no easy or documented way, but there are currently two ways you can do this. One is to give the option event_log when you run the tests:
eunit:test(my_module, [event_log])
(this is undocumented and was really only meant for debugging). The resulting file "eunit-events.log" is a text file that can be read by Erlang using file:consult(Filename).
The more powerful way (and not really all that difficult) is to implement a custom event listener and give it as an option to eunit:
eunit:test(my_module, [{report, my_listener_module}])
This isn't documented yet, but it ought to be. A listener module implements the eunit_listener behaviour (see src/eunit_listener.erl). There are only five callback functions to implement. Look at src/eunit_tty.erl and src/eunit_surefire.erl for examples.
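A minimal sketch of such a listener, based only on the callbacks named above (the module name and what the callbacks collect here are placeholders, not a finished implementation):

-module(my_listener_module).
-behaviour(eunit_listener).

-export([start/0, start/1,
         init/1, handle_begin/3, handle_end/3, handle_cancel/3, terminate/2]).

%% Listeners are conventionally started through eunit_listener:start/2.
start() -> start([]).
start(Options) -> eunit_listener:start(?MODULE, Options).

%% State: the per-test data proplists collected so far.
init(_Options) -> [].

handle_begin(_Kind, _Data, St) -> St.

%% Data is a proplist with entries such as the test's status and output.
handle_end(test, Data, St) -> [Data | St];
handle_end(_Kind, _Data, St) -> St.

handle_cancel(_Kind, _Data, St) -> St.

terminate({ok, _Summary}, St) ->
    io:format("collected ~b test results~n", [length(St)]),
    ok;
terminate({error, Reason}, _St) ->
    io:format("eunit error: ~p~n", [Reason]),
    ok.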
I've just pushed a very trivial listener to GitHub which stores the EUnit results in a DETS table. This can be useful if you need to process those data further, since they're stored as Erlang terms in the DETS table.
https://github.com/prof3ta/eunit_terms
Example of usage:
> eunit:test([fact_test], [{report,{eunit_terms,[]}}]).
All 3 tests passed.
ok
> {ok, Ref} = dets:open_file(results).
{ok,#Ref<0.0.0.114>}
> dets:lookup(Ref, testsuite).
[{testsuite,<<"module 'fact_test'">>,8,<<>>,3,0,0,0,
[{testcase,{fact_test,fact_zero_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_neg_test,0,0},[],ok,0,<<>>},
{testcase,{fact_test,fact_pos_test,0,0},[],ok,0,<<>>}]}]
Hope this helps.
