What's the codeceptjs equivalent of expect? - codeceptjs

I'm making an API call and want to verify that it succeeded. Something like this:
var response = await api.ensureSettings(data);
expect(response.code).toBe(200);
How can I check equivalence in codeceptjs?

You can use it with chai (http://chaijs.com/). After installing chai, just add:
var expect = require('chai').expect;
at the top of your test script; then you can use:
expect(response.code).to.be.equal(200);
I recommend postman/newman for testing your API, but you can do it with codeceptjs too.

You have to use an additional library, either codeceptjs-chai or codeceptjs-assert:
"codeceptjs-assert": "0.0.4",
"codeceptjs-chai": "^1.0.0",
With codeceptjs-assert you can then write:
I.assertStrictEqual(response.code, 200)

Related

Set an environment parameter in Nix for the build phase of buildGoModule?

I am trying to build a Go module with buildGoModule. My issue is that at build time go tries to reach proxy.golang.org, which is blocked on my network, and the solution is to set the environment variable GOPROXY.
I thought that passthru = { GOPROXY = "direct"; }; would do the job, but the error persists. So I would like to know a good way to pass an env variable.
Overriding GOPROXY should work, since I tested it separately in nix-shell and it works fine.
In buildGoModule it is possible to override go-modules derivation with overrideModAttrs.
Specifically for GOPROXY it would look like:
overrideModAttrs = (_: {
    GOPROXY = "whatever";
});

app()->environment() not using value set by config()

I'm writing a unit test for a function that calls app()->environment(). The phpunit.xml file sets the environment as testing. I want to test the function in other environments as well.
I've tried:
config('app.env', 'prod')
config('env', 'prod')
$_ENV['app.env'] = 'prod'
$_ENV['env'] = 'prod'
I also included the orchestral/testbench package and used this:
protected function getEnvironmentSetUp($app)
{
    // Both of the following
    $app['config']->set('app.env', 'prod');
    $app['config']->set('env', 'prod');
}
None of these have changed the output of app()->environment().
Am I missing something?
I ended up figuring this out by looking at the code for the ->environment() method. It uses $this['env'] for the comparison, so in my test I wrote:
app()['env'] = 'prod';
This seems to be the only way I could get it to work.

iOS Testing: Is there a way to skip tests?

I don't want to execute certain tests if the feature is currently disabled. Is there a way to "skip" a test (and to get appropriate feedback on console)?
Something like this:
func testSomething() {
    if !isEnabled(feature: feature) {
        skip("Test skipped, feature \(feature.name) is currently disabled.")
    }
    // actual test code with assertions here, but not run if skip above was called.
}
You can disable XCTest runs in Xcode by right-clicking on the test symbol in the editor gutter on the left.
You'll get a menu where you can select the "Disable" option.
Right-clicking again will allow you to re-enable it. Also, as stated in sethf's answer, you'll see entries for currently disabled tests in your .xcscheme file.
As a final note, I'd recommend against disabling a test and committing that change in your xcscheme. Tests are meant to fail, not be silenced because they're inconvenient.
Another possible solution, which I found in some article: prefix your skipped tests with something like "skipped_" (see the sketch after this list).
Benefits:
Xcode will not treat them as tests
You can easily find them using search
You can make them tests again by replacing "skipped_" with ""
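A minimal sketch of that renaming trick (class and method names here are hypothetical): XCTest only discovers instance methods whose names begin with "test", so the prefixed method stays compiled and searchable but is never run.

import XCTest

class FeatureTests: XCTestCase {
    // Discovered and run by XCTest: the name starts with "test".
    func testEnabledFeature() {
        XCTAssertTrue(true)
    }

    // Ignored by the test runner: the name no longer starts with "test".
    // Remove the "skipped_" prefix to turn it back into a test.
    func skipped_testDisabledFeature() {
        XCTAssertEqual(1 + 1, 2)
    }
}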
Beginning with Xcode 11.4 you'll be able to use XCTSkipUnless(_:_:file:line:).
The release notes read,
XCTest now supports dynamically skipping tests based on runtime conditions, such as only executing some tests when running on certain device types or when a remote server is accessible. When a test is skipped, Xcode displays it differently in the Test Navigator and Test Report, and highlights the line of code where the skip occurred along with an optional user description. Information about skipped tests is also included in the .xcresult for programmatic access.
To skip a test, call one of the new XCTSkip* functions from within a test method or setUp(). For example:
func test_canAuthenticate() throws {
    try XCTSkipIf(AuthManager.canAccessServer == false, "Can't access server")
    // Perform test…
}
The XCTSkipUnless(_:_:file:line:) API is similar to XCTSkipIf(_:_:file:line:) but skips if the provided expression is false instead of true, and the XCTSkip API can be used to skip unconditionally. (13696693)
I've found a way to do this by modifying my UI test .xcscheme file and adding a section called SkippedTests under TestableReference, then adding individual Test tags with an Identifier attribute containing the name of your class and test method. Something like:
<SkippedTests>
    <Test Identifier="ClassName/testMethodName" />
</SkippedTests>
Hope this helps
From Xcode 11.4+, you can use XCTSkipIf() or XCTSkipUnless().
try XCTSkipIf(<#skip condition#>, "message")
try XCTSkipUnless(<#non-skip condition#>, "message")
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests#overview
This is what test schemes are meant to do.
You can have different schemes targeting different testing situations or needs.
For example, you may want to create a scheme that runs all your tests (full regression scheme), or you may want to select a handful of them to do a quick smoke test on your app when small changes are made.
This way, you can select different schemes according to how much testing you need to do.
Just go to
Product >> Scheme
It's not that universal, but you can override invokeTest in XCTestCase and avoid calling super where necessary. I'm not sure about the appropriate feedback in console though.
For instance the following fragment makes the test run only on iOS Simulator with iPhone 7 Plus/iPad Pro 9.7"/iOS 11.4:
class XXXTests: XCTestCase {
    let supportedModelsAndRuntimeVersions: [(String, String)] = [
        ("iPhone9,2", "11.4"),
        ("iPad6,4", "11.4")
    ]

    override func invokeTest() {
        let environment = ProcessInfo().environment
        // These variables only exist in the simulator, so the test is also skipped on device.
        guard let model = environment["SIMULATOR_MODEL_IDENTIFIER"],
              let version = environment["SIMULATOR_RUNTIME_VERSION"] else {
            return
        }
        guard supportedModelsAndRuntimeVersions.contains(where: { $0 == (model, version) }) else {
            return
        }
        super.invokeTest()
    }
}
If you use Xcode 11 and a test plan, you can tweak your configuration to skip or allow specific tests. An Xcode test plan is a JSON file after all.
By default, all tests are enabled; you can skip a list of tests or a whole test file.
"testTargets" : [
{
"skippedTests" : [
"SkippedFileTests", // skip the whole file
"FileTests\/testSkipped()" // skip one test in a file
]
...
Conversely, you can also skip all tests by default and enable only a few.
"testTargets" : [
{
"selectedTests" : [
"AllowedFileTests", // enable the whole file
"FileTests\/testAllowed()" // enable only a test in a file
]
...
I'm not sure whether you can combine both configurations, though. Xcode flips the logic based on the "Automatically includes new tests" setting.
Unfortunately, there is no built-in test-case skipping. A test case either passes or fails.
That means you have to add that functionality yourself: you can add a function to XCTestCase (e.g. XCTestCase.skip) via an extension (a category in Objective-C) that prints the information to the console. However, you will have to put a return after calling it to prevent the remaining asserts from running.
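A minimal sketch of such a helper, using a Swift extension in place of an Objective-C category; the skip(_:) function and the featureEnabled flag are assumptions for illustration, not a real XCTest API:

import XCTest

extension XCTestCase {
    // Hypothetical helper: prints a skip notice to the console.
    // It cannot abort the test by itself, so callers must `return` right after it.
    func skip(_ message: String) {
        print("SKIPPED \(name): \(message)")
    }
}

class SettingsTests: XCTestCase {
    func testSomething() {
        let featureEnabled = false // stand-in for a real feature check
        guard featureEnabled else {
            skip("Feature is currently disabled.")
            return // prevents the asserts below from running
        }
        XCTAssertEqual(2 + 2, 4)
    }
}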
While other answers cover almost the same logic: if you don't want an extra file to manage conditions, you can mark your test function with throws and throw XCTSkip with a clear description explaining why it is skipped. A clear message is important, as it lets you read the reason in the Report Navigator and understand why the test was skipped without opening the related XCTestCase.
Example:
func test_whenInitializedWithAllPropertiesGraphQLQueryVariableDict_areSetCorrectly() throws {
    // Skip intentionally so that we remember to handle this.
    throw XCTSkip("This method should be implemented to test equality of NSMutableDictionary with heterogeneous items.")
}
Official iOS documentation
https://developer.apple.com/documentation/xctest/methods_for_skipping_tests
Use XCTSkipIf() or XCTSkipUnless() when you have a Boolean condition that you can use to evaluate when to skip tests.
Throw an XCTSkip error when you have other circumstances that result in skipped tests. For example:
func testSomethingNew() throws {
    guard #available(macOS <#VersionNumber#>, *) else {
        throw XCTSkip("Required API is not available for this test.")
    }
    // perform test using <#VersionNumber#> APIs...
}
There is no test-case skipping. You can wrap the test body in an if-else block and run/print your desired output instead, as sketched below.
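A minimal sketch of that if-else approach (featureIsEnabled is a hypothetical stand-in for your own runtime check); note this is not a real skip, since the test is still reported as passing:

import XCTest

class GatedTests: XCTestCase {
    func testSomething() {
        let featureIsEnabled = false // replace with your own condition
        if featureIsEnabled {
            XCTAssertEqual(2 + 2, 4)
        } else {
            // Assertions are not run; print the desired feedback instead.
            print("testSomething: feature disabled, assertions not run.")
        }
    }
}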

Is it possible to pass command-line arguments to a new isolate from spawnUri()

When starting a new isolate with spawnUri(), is it possible to pass command line args into that new isolate?
e.g. command line:
dart.exe app.dart "Hello World"
In app.dart
#import("dart:isolate");
main() {
    var options = new Options();
    print(options.arguments); // prints ["Hello World"]
    spawnUri("other.dart");
}
In other.dart
main() {
    var options = new Options();
    print(options.arguments); // prints [] when spawned from app.dart.
    // Is it possible to supply Options from another isolate?
}
Although I can pass data into other.dart through its SendPort, the specific use I want is to launch another Dart app that hasn't been written with a ReceivePort callback (such as pub.dart, or any other command-line app).
As far as I can tell the answer is currently no, and it would be hard to simulate via message passing because the options would not be available in main().
I think there are two good feature requests here. One is to be able to pass options on spawn() so that a script can run the same from the root isolate or a spawned isolate.
The other feature, which could be used to implement the first, is a way to pass messages that are handled by libraries before main() is invoked so that objects that main() depends on can be initialized with data from the spawning isolate.
Your example doesn't call print(options.arguments); in other.dart using the current stable SDK.
However,
spawnUri("other.dart");
spawns a URI. So how about spawnUri("other.dart?param=value#orViaHash"); and see whether you can find the param/value pair via
print(options.executable);
print(options.script);

How can I unit test a Task (sfBaseTask)?

How can I write a unit test for my Task (sfBaseTask)?
If you're asking how to write a unit test for a task, then first you need to initialize the configuration:
$configuration = ProjectConfiguration::hasActive() ? ProjectConfiguration::getActive() : new ProjectConfiguration(realpath($_test_dir . '/..'));
Later, as tasks are just classes, you can easily instantiate and test them:
$task = new myTask($configuration->getEventDispatcher(), new sfFormatter());
$task->run($argumentsArray, $optionsArray);
However, I think it's better to put the task logic into separate class(es) and use them in the task's execute() method. It's even easier to test that way.
