Root-level hooks in Playwright Test

Referring to the documentation provided by Playwright, it seems the hooks (for example, beforeAll / afterAll) can only be used inside a spec/test file, as below:
// example.spec.ts
import { test, expect } from '@playwright/test';

test.beforeAll(async () => {
  console.log('Before tests');
});

test.afterAll(async () => {
  console.log('After tests');
});

test('my test', async ({ page }) => {
  // ...
});
My question: is there any support for having a single afterAll() or beforeAll() hook in one file that is called for every test file? The code I want to put inside afterAll and beforeAll is common to all the test/spec files, and I don't want to duplicate it in every spec file.
Any suggestions or thoughts on this?
TIA
Allen

Here is an update after my findings: Playwright does not support root-level hooks.
This is currently not possible because parallel tests run in separate workers, and each worker runs the afterAll hook once its tests complete. The preferred solution for this concern is to use the global setup and global teardown options.
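As a minimal sketch (the file names global-setup.ts and global-teardown.ts are my own choice; adjust to your project), the config points at modules that run exactly once per test run, outside the worker processes:

// playwright.config.ts
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  globalSetup: require.resolve('./global-setup'),
  globalTeardown: require.resolve('./global-teardown'),
};
export default config;

// global-setup.ts -- runs once before all test files
async function globalSetup() {
  console.log('Before all tests'); // shared setup goes here
}
export default globalSetup;

// global-teardown.ts -- runs once after all test files
async function globalTeardown() {
  console.log('After all tests'); // shared cleanup goes here
}
export default globalTeardown;

Because these run in a single separate process, they can't share in-memory state with the test workers; pass anything the tests need through environment variables or files.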

Related

Dart Functions Framework usage

I'm new to the Dart functions framework. My goal is to use this package to create several functions and deploy them to Cloud Run (in combination with Firebase, but I guess that's irrelevant to this question).
I've run the quick starts and I've read all of the contents in the docs.
The quickstart mentions just one function at a time (e.g. Hello World, Cloud Events, etc.), like this:
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';

@CloudFunction()
Response function(Request request) {
  return Response.ok('Hello, World!');
}
But as you can see in the quickstarts, only one function is handled in a project at a time. What if I want to deploy several functions? Should I:
Write several functions in the same project/file, so that the functions framework compiles the server.dart by itself,
OR
Create a different functions_framework project for each function?
Let me be more specific. Should I do the following (option 1, which makes more sense to me):
import 'dart:math';

import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';

@CloudFunction()
Response function(Request request) {
  return Response.ok('Hello, World!');
}

@CloudFunction()
Response function2(Request request) {
  if (Random().nextBool()) {
    return Response.ok('Hello, World!');
  } else {
    return Response.internalServerError();
  }
}
Or should I build a separate folder, running build_runner for each function I need in my project?
Is there a difference and/or a best practice?
Thanks in advance.
EDIT. This question is about the deployment on Cloud Run itself, not just testing on my own PC. To test my own functions I did the following:
Run dart run build_runner build, so that it updates the server.dart file correctly (I can see that the framework does a lot behind the scenes and that the _nameToFunctionTarget is basically a router);
Run the server in two different terminals, like this: dart run bin/server.dart --port MYPORT --target MYFUNCTION (where MYPORT and MYFUNCTION are either 8080/8081 or function/function2 respectively).
I guess I'm just confused on how to correctly manage this framework once deployed on Cloud Run.
EDIT 2. I just gave up on using Dart as a serverless or backend language. There's just too much jargon, even for basic things. Every backend framework is either dead or maintained by a single enthusiast (props to him!). The language has not yet received enough love from the Google team or the community, and at this moment in time it's basically not possible to go full-stack on Dart alone. It's a dream, but it can't be realized now. Furthermore, Dart sorely lacks proper SDKs for Firestore, etc., so Firebase isn't an option. I find it easier to just learn Node.js and use the Firebase support for Functions written in Node.js, and I'll wait for more Dart support in the future, if there ever is any.
The documentation is a bit sparse right now (and I'm new to it too! I couldn't find any good examples, so here goes...)
You can only have a single function that is served. It should be named 'function' (the type and name can be overridden; see the cloudevent example, dartfn generate cloudevent).
You 'could' have many of these deployed so that each does a specific thing, such as processing cloudevents as above, but most people want something more REST-like (see next).
You need to attach a Router() so that the single entry point (function) is handled by specific logic in your code.
Example for REST
Add shelf_router: ^1.1.2 to pubspec.yaml (under dependencies:)
Delegate the @CloudFunction to use the Router()
functions.dart
import 'package:functions_framework/functions_framework.dart';
import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';

Router app = Router()
  ..get('/health', (Request request) {
    return Response.ok('healthy');
  })
  ..get('/user/<user>', (Request request, String user) {
    // fetch the user... (probably return as json)
    return Response.ok('hello $user');
  })
  ..post('/user', (Request request) {
    // convert request body to json and persist... (probably return as json)
    return Response.ok('saved the user');
  });

@CloudFunction()
Future<Response> function(Request request) => app.call(request);

Jira Zephyr Integration with WebdriverIO

Being completely new to WebdriverIO, I'm wondering how to update an existing test cycle (with a specific name format) in Jira. I am running the test suite on 3 different browsers and have separate test cycles for these in Jira. After execution, I want the suite to update these cycles with the status and screenshots for each browser respectively. Any help is much appreciated.
PS: At the moment I have a function that creates a new test cycle for each execution.
There are no plugins for Zephyr Scale integration so far, but you can use the Zephyr API to update your execution results.
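As a rough sketch of that idea (not a drop-in solution): the Zephyr Scale Cloud REST API exposes a test executions endpoint you can call after your run, for example from WebdriverIO's onComplete hook. The base URL, field names, and the PROJ keys below are assumptions based on the public Cloud API; check the docs for your Zephyr deployment (Cloud vs. Server) before relying on them.

// zephyr-report.ts -- hypothetical helper, names are illustrative
export const reportResult = async (
  testCaseKey: string,   // e.g. 'PROJ-T123'
  testCycleKey: string,  // key of the existing cycle to update
  passed: boolean
): Promise<void> => {
  const res = await fetch('https://api.zephyrscale.smartbear.com/v2/testexecutions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.ZEPHYR_TOKEN}`, // Zephyr Scale API token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      projectKey: 'PROJ',            // your Jira project key
      testCaseKey,
      testCycleKey,
      statusName: passed ? 'Pass' : 'Fail',
    }),
  });
  if (!res.ok) {
    throw new Error(`Zephyr Scale returned ${res.status}`);
  }
};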
I've created a Node.js lib for creating test runs and reporting results back to Zephyr Scale.
Maybe it can help you on your way.
If you have any questions or feedback, let me know!
https://www.npmjs.com/package/@dbouckaert/zephyr-scale-reporter
Example: get all testcases for a project
/**
 * This function will get all testcases for a certain project and add them to variables.testCasesArray
 * @returns {void}
 */
export const getAllTestcases = async (): Promise<void> => {
  await request(variables.url)
    .get(`/rest/tests/1.0/project/${variables.projectId}/testcases`)
    .auth(variables.username, variables.password)
    .expect(200)
    .then((res) => {
      variables.testCasesArray = res.body.testCases;
    });
};

How to change download directory during test run

I have a few tests that download files and assert the data inside. The problem I am facing is that the tests run in parallel, so I can't delete the download directory after each test or they would delete each other's files. The issue with not deleting it, however, is that the filename includes a timestamp (unique identifier) that is not known to the test, so it isn't possible to know which file to open. Is there a way to change default_directory for a given test in the middle of a test run? The idea is to be able to tell Capybara that, for certain tests, the download path is temporarily overridden to another path. I guess there could be cross-wiring here too, since other tests could still be running and expecting the original path to be set.
Alternatively, any suggestions on handling this?
My Capybara config looks like this:
Capybara.register_driver :chrome do |app| # registration block assumed; only its closing "end" was in the snippet
  options = Selenium::WebDriver::Chrome::Options.new
  preferences = {
    prompt_for_download: false,
    credentials_enable_service: false,
    default_directory: DownloadUtil::PATH
  }
  options.add_preference(:download, preferences)
  options.add_argument('--disable-infobars')
  options.add_argument('--headless')

  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
Most multi-process parallel test setups provide an environment variable you can use to configure things that need to differ between instances of the test runner (DB name, ports, etc.). In the case of parallel_rspec, that is TEST_ENV_NUMBER. Using it, you can configure the Selenium/Chrome instance in each test runner to use a different download directory, something like:
preferences = {
  prompt_for_download: false,
  credentials_enable_service: false,
  default_directory: DownloadUtil::PATH + ENV['TEST_ENV_NUMBER']
}

Use BlockingDataflowPipelineRunner and post-processing code for Dataflow template

I'd like to run some code after my pipeline finishes all processing, so I'm using BlockingDataflowPipelineRunner and placing code after pipeline.run() in main.
This works properly when I run the job from the command line using BlockingDataflowPipelineRunner. The code under pipeline.run() runs after the pipeline has finished processing.
However, it does not work when I try to run the job as a template. I deployed the job as a template (with TemplatingDataflowPipelineRunner), and then tried to run the template in a Cloud Function like this:
dataflow.projects.templates.create({
  projectId: 'PROJECT ID HERE',
  resource: {
    parameters: {
      runner: 'BlockingDataflowPipelineRunner'
    },
    jobName: `JOB NAME HERE`,
    gcsPath: 'GCS TEMPLATE PATH HERE'
  }
}, function(err, response) {
  if (err) {
    // etc
  }
  callback();
});
The runner parameter does not seem to take effect; I can put gibberish under runner and the job still runs.
The code I had under pipeline.run() does not run when each job runs -- it runs only when I deploy the template.
Is it expected that the code under pipeline.run() in main would not run each time the job runs? Is there a solution for executing code after the pipeline is finished?
(For context, the code after pipeline.run() moves a file from one Cloud Storage bucket to another. It's archiving the file that was just processed by the job.)
Yes, this is expected behavior. A template represents the pipeline itself and allows (re-)executing the pipeline by launching the template. Since the template doesn't include any of the code from the main() method, it doesn't allow doing anything after the pipeline execution.
Similarly, the dataflow.projects.templates.create API is just the API to launch the template.
The way the blocking runner accomplished this was to get the job ID from the created pipeline and periodically poll to observe when it has completed. For your use case, you'll need to do the same:
Execute dataflow.projects.templates.create(...) to create the Dataflow job. This should return the job ID.
Periodically (every 5-10s, for instance) poll dataflow.projects.jobs.get(...) to retrieve the job with the given ID, and check what state it is in.
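A rough sketch of that poll loop, staying with the callback-style Node.js client used in the question (the exact response shapes vary between googleapis versions, and moveProcessedFile is a placeholder for your archival step):

dataflow.projects.templates.create({
  projectId: 'PROJECT ID HERE',
  resource: {
    jobName: 'JOB NAME HERE',
    gcsPath: 'GCS TEMPLATE PATH HERE'
  }
}, function (err, job) {
  if (err) { return callback(err); }
  var jobId = job.id; // templates.create returns the created Job resource

  var poll = setInterval(function () {
    dataflow.projects.jobs.get({
      projectId: 'PROJECT ID HERE',
      jobId: jobId
    }, function (err, latest) {
      if (err) { clearInterval(poll); return callback(err); }
      if (latest.currentState === 'JOB_STATE_DONE') {
        clearInterval(poll);
        moveProcessedFile(); // placeholder: archive the file the job just processed
        callback();
      } else if (latest.currentState === 'JOB_STATE_FAILED' ||
                 latest.currentState === 'JOB_STATE_CANCELLED') {
        clearInterval(poll);
        callback(new Error('Job ended in state ' + latest.currentState));
      }
      // otherwise keep polling (every 10 seconds)
    });
  }, 10000);
});

In a Cloud Function you may prefer to let the function return and have a separate trigger (e.g. a scheduled check) do the polling, since a long-lived poll can hit the function's timeout.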

Read a data file for unit test in Dart

I'm using this snippet to read a data file in a unit test:
var file = new File('/Users/chambery/projects/Foo/src/resources/skills.yaml');
Future<String> finishedReading = file.readAsString();
finishedReading.then((text) {
  print(text);
  print(loadYaml(text));
});
Running in the Dart Editor I get no error (but no printout):
...
PASS: calc_ranks
PASS: load_skills
All 7 tests passed.
unittest-suite-success
(edit: removed command line error; dart vm was out-of-date)
I don't need an async file read.
I'm guessing that you don't tell the unittest framework that your test is asynchronous. The framework will therefore not wait for your asynchronous tests to finish and will assume that they passed.
Use expectAsyncX (where "X" is the number of arguments) to make sure that the framework waits for your asynchronous tests to finish.
See the unittest documentation: https://api.dartlang.org/docs/channels/stable/latest/unittest.html
If you are dealing with Futures, you can also use expect(future, completes).
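For example, a minimal sketch of the load_skills test wrapped with expectAsync1 (the path and the loadYaml call are taken from the snippet above; the argument-count suffix and imports depend on your unittest version):

import 'dart:io';

import 'package:unittest/unittest.dart';
import 'package:yaml/yaml.dart';

main() {
  test('load_skills', () {
    var file = new File('/Users/chambery/projects/Foo/src/resources/skills.yaml');
    // expectAsync1 tells the framework to wait until this one-argument
    // callback has actually been invoked before marking the test as done.
    file.readAsString().then(expectAsync1((text) {
      expect(loadYaml(text), isNotNull);
    }));
  });
}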
