I'm integrating a mock server into our tests. As part of this I need to clear all the requests that were caught by the mock server. Due to restrictions of the service we use, I can do this only once for all the endpoints. So I need to do it at the very beginning, before all suites, and only once.
It would be easy if we ran the suites sequentially, but we run them in parallel.
How can I implement a "before all suites" hook in that case?
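If you happen to be running the tests with Jest, its globalSetup option points at a module that runs exactly once in the main process before any suite starts, even when the suites themselves execute in parallel workers. A minimal sketch under that assumption; the clear-requests URL is a placeholder for whatever your mock server actually exposes:

```ts
// test/global-setup.ts
// Wired up via `globalSetup: '<rootDir>/test/global-setup.ts'` in the Jest config.
// Assumes Node 18+, where fetch is available globally.
export default async function globalSetup(): Promise<void> {
  // Placeholder endpoint: replace with your mock server's real "clear recorded requests" call.
  const res = await fetch('http://localhost:1080/requests', { method: 'DELETE' });
  if (!res.ok) {
    throw new Error(`Failed to clear recorded requests: ${res.status} ${res.statusText}`);
  }
}
```

Because globalSetup runs before the worker pool is spawned, the cleanup happens once no matter how many suites run concurrently.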
I have made an implementation of a service worker using Google's Workbox library and I want to test my custom route handlers and logic, but I am really unsure how best to test this. The tests should be performed in the browser because I will need to verify whether the cache or my database has been modified. I also load my custom routes and handlers via a config file, so I need to make sure the correct routes are registered.
Can somebody point me in the right direction? I have tried unit testing with Jest, but I could not get the functions that use parts of the Workbox library to work.
I can point you to a few different resources, but there's no one clear best, modern way to test service worker functionality (that I'm aware of).
The sw-appcache-behavior test suite
The test suite for sw-appcache-behavior is not directly related to Workbox, but it does test service worker caching behavior. The advantage of basing your tests off of this is that it's a newer and simpler codebase than Workbox, and it shows off how to run a test suite that uses Puppeteer to orchestrate in-browser tests, including examining cache contents after service worker event handlers have completed.
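For a feel of what that looks like, here is a rough sketch of the pattern, assuming Puppeteer and a locally served page that registers your service worker; the URL and cache name are placeholders rather than anything taken from that repository:

```ts
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Placeholder URL: a locally served page that registers the service worker under test.
  await page.goto('http://localhost:8080/index.html');

  // Wait for the service worker to activate, then exercise a route it should cache.
  await page.evaluate(async () => {
    await navigator.serviceWorker.ready;
    await fetch('/api/data.json');
  });

  // Inspect the cache contents from inside the browser context.
  const cachedUrls = await page.evaluate(async () => {
    const cache = await caches.open('runtime-cache'); // placeholder cache name
    const keys = await cache.keys();
    return keys.map((request) => request.url);
  });

  console.log(cachedUrls); // assert on this in a real test
  await browser.close();
})();
```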
The Workbox test suite
You could also look at how Workbox tests its own first-party code and use that as an example. I'd divide those tests up into three parts:
node unit tests
Some tests run in a Node environment using a mock service worker environment, and rely heavily on sinon stubs to observe calls to the underlying fetch() and Cache Storage APIs, ensuring that the expected calls are being made.
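The general shape of that approach, sketched by hand rather than copied from Workbox's own setup; the handler and cache name below are hypothetical, and the globals are stubbed with sinon so the test can assert on the calls that were made:

```ts
import sinon from 'sinon';
import { expect } from 'chai';

it('writes the fetched response into the expected cache', async () => {
  // Hand-rolled stand-ins for the fetch and Cache Storage globals.
  const putStub = sinon.stub().resolves();
  const openStub = sinon.stub().resolves({ put: putStub });
  (globalThis as any).caches = { open: openStub };
  (globalThis as any).fetch = sinon.stub().resolves(new Response('payload'));

  // Hypothetical handler under test: fetch a URL and store the response in a cache.
  async function handleRequest(url: string): Promise<void> {
    const response = await fetch(url);
    const cache = await caches.open('runtime-cache');
    await cache.put(url, response.clone());
  }

  await handleRequest('/api/data.json');

  // Observe the calls rather than real network or storage side effects.
  expect(openStub.calledOnceWith('runtime-cache')).to.equal(true);
  expect(putStub.calledOnce).to.equal(true);
});
```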
Current integration tests
These tests rely on Selenium (wrapped via the selenium-assistant library) to control instances of Safari, Chrome, and Firefox. Compared to the mocked environment used by the unit tests, they have the advantage of exercising behavior in a real browser. Unfortunately, there's a decent amount of pre-test workarounds and general flakiness involved in this approach to testing.
Future integration tests
We've been working on a modernization of the Workbox integration test suite to use Playwright instead of selenium-assistant to orchestrate tests across Safari, Firefox, and Chrome. The resulting code ends up being much less complex, with fewer browser-specific workarounds. This is probably what I would recommend modeling a new codebase off of.
Unfortunately, we haven't had enough time to prioritize migrating the entire existing test suite over to Playwright, but you can take a look at some of what has been migrated over in a branch, and use that as inspiration.
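For reference, a minimal Playwright test along those lines; again, the page URL and cache name are placeholders and not taken from the Workbox branch:

```ts
import { test, expect } from '@playwright/test';

test('service worker populates the runtime cache', async ({ page }) => {
  // Placeholder URL: a page that registers the service worker under test.
  await page.goto('http://localhost:8080/index.html');

  // Wait for the service worker to take control, then hit a route it should cache.
  await page.evaluate(async () => {
    await navigator.serviceWorker.ready;
    await fetch('/api/data.json');
  });

  // Assert on the cache contents from within the browser.
  const cachedUrls = await page.evaluate(async () => {
    const cache = await caches.open('runtime-cache'); // placeholder name
    return (await cache.keys()).map((request) => request.url);
  });

  expect(cachedUrls.some((url) => url.endsWith('/api/data.json'))).toBe(true);
});
```

Playwright's project configuration then runs the same test across Chromium, Firefox, and WebKit without browser-specific orchestration code.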
I have inherited a system that consists of a couple daemons that asynchronously process messages. I am trying to find a clean way to introduce integration testing into this system with minimal impact/risk on the existing programs. Here is a very simplified overview of their responsibilities:
Process 1 polls a queue for messages, and inserts a row into a DB for each one it dequeues.
Process 2 polls the DB for rows inserted by Process 1, does some calculations, and then deposits a file into a directory on the host and sends an email.
These processes are quite old and complex, and I am strongly inclined to avoid modifying them in any way. What I would like to do is put each of them in a container, and also stand up the dependencies (queue, DB, mail server) in other containers. This part is straightforward, but what I'm unsure about is the best way to orchestrate these tests. Since these processes consume and generate output asynchronously I will need to poll or wait for the expected outcome (mail sent, file created).
Normally I would just write a series of tests in a single test suite in my language of choice (Java, Go, etc.), and make the setUp / tearDown hooks responsible for resetting the environment to the desired state. But because these processes have a lot of internal state, I am afraid I cannot properly "clean up" after each distinct test. This would be a problem if, for example, one test failed to generate the desired output within a specific period of time so I marked it as failed, but a subsequent test was falsely marked as passed because the original test case actually did produce output (albeit much more slowly than anticipated) that was mistakenly attributed to the subsequent test. For these reasons I feel I need to recreate the world between each test.
In order to do this the only options I can see are:
Use a shell script to actually run my tests -- having it bring up the containers, execute a single test file, and then terminate my containers for each test.
Follow my usual pattern of setUp / tearDown in my existing test framework, but call out to Docker to terminate and start up the containers between each test (a rough sketch of this appears below).
Am I missing another option? Is there some kind of existing framework or pattern used for this sort of testing?
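As a concrete illustration of the second option, here is a rough sketch assuming Docker Compose, a Node test runner such as Mocha, and a hypothetical shared directory that Process 2 writes its output file into:

```ts
import { execSync } from 'node:child_process';
import { existsSync } from 'node:fs';

// Recreate the whole environment around every test so no internal state leaks between cases.
beforeEach(function () {
  this.timeout(120_000); // container startup can be slow
  execSync('docker compose up -d --force-recreate', { stdio: 'inherit' });
});

afterEach(function () {
  this.timeout(120_000);
  execSync('docker compose down -v', { stdio: 'inherit' });
});

// Poll for an asynchronous outcome, e.g. the file Process 2 is expected to drop.
async function waitForFile(path: string, timeoutMs = 60_000, intervalMs = 1_000): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (existsSync(path)) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out after ${timeoutMs}ms waiting for ${path}`);
}

it('produces the expected output file for a queued message', async function () {
  this.timeout(90_000);
  // enqueueTestMessage() would go here; it is hypothetical and stands for
  // pushing a message onto the containerised queue that Process 1 polls.
  await waitForFile('./test-output/report.csv'); // hypothetical path mounted from the container
});
```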
I've been working on a project that I want to add automated tests to. I have already added some unit tests, but I'm not confident in the process I've been using. I don't have much experience with automated tests, so I would like to ask for some advice.
The project is integrated with our web API, so it has a login process. Depending on the logged-in user, the API provides a configuration file which allows or disallows access to certain modules and permissions within the mobile application. We also have a sync process where the app calls several methods of the API to download files (PDFs, HTML, videos, etc.) and also receives a lot of data through JSON files. The user basically doesn't have to enter data, just use the information received in the sync process.
What I have done to add unit tests in this scenario so far was to simulate a logged-in user, then add some fixture objects to the user and test against them.
I was able to test the web service integration; I used Nocilla to return fake JSON and assert on the result. So far I have only been able to test individual requests, and I still don't know how I should test the sync process.
I'm having a hard time creating unit tests for my view controllers. Should I unit test just the business logic and do all the rest with tools like KIF / Calabash?
Is there an easy way to set up the fixture data and files?
Thanks!
Everybody's mileage may vary but here's what we settled on and why.
Unit tests: We use a similar strategy. The only difference is that we use OHTTPStubs instead of Nocilla, because we saw some extra flexibility there that we needed and were happy to trade off Nocilla's easier syntax.
Doing more complicated (non-single-query) test cases quickly lost its luster because we were essentially rebuilding whole HTTP request/response flows, and that wasn't very "unit". For functional tests, we did end up adopting KIF (at least for dev-focused efforts, assuming you don't have a separate QA department) for a few reasons:
We didn't buy/need the multi-language abstraction layer afforded by Appium.
We wanted to be able to run tests on many devices per build server.
We wanted more whitebox testing, and while Subliminal was attractive, we didn't want to build hooks into our main app code.
Testing view controller logic (anything that's not unit-oriented) is definitely much better done with KIF/Calabash or something similar, so that's the path I would suggest.
For bonus points, here are some other things we did; it goes to show what you could do, I guess:
We have a basic proof of concept that binds KIF commands to a JSON-RPC server. So you can run a test target on a device and have that device respond to HTTP requests, which then fire off test cases or KIF commands. One of the advantages of this is that you can reuse some of the test code you wrote for a single device in multi-device test cases.
Our CI server builds the integration tests as a downstream build of our main build (which includes the unit tests). When the build starts, we use XCTool to precompile the tests, and then we have some scripts that start a QuickTime screen recording, run the KIF tests, export the results, and archive everything on our CI server so we can watch a recording of the test run along with the test logs.
Not really part of this answer but happy to share that if you ping me.
I am currently evaluating how to test a rather big and complex web application, based on Rails 4 on the server side and EmberJS on the client side. In our app, the client exclusively communicates through a restful JSON API with the server.
We have done a lot of unit testing based on Konacha so far, and now want to set up integration/acceptance tests too. We are not sure whether we should write end-to-end tests, i.e. tests that include a running instance of our server, or whether we should integration-test the API and the client side separately.
Our preferred choice at the moment is end-to-end testing, because we fear that if we tested the API and the client separately, we would have twice the effort of creating and maintaining tests, and there might be small subtleties in the communication between the API and the client that we could not catch.
Well, we like modern and fast testing frameworks like Konacha, so we don't really want to use Selenium, not only because it feels a little bit old, but also because its performance is quite poor. Still, you need to control the creation of mock data on the server and the resetting of the server, which is why we came up with the following approach:
We implemented a Testing API which conceptually is used to control the state of the server, e.g. it has the following methods:
GET /api/test/setup # Simple bootstrapping of the database, e.g. populate table with ISO language codes etc...
GET /api/test/reset # Reset the database, using `database_cleaner` gem
A Konacha test case would then call setup and reset before and after each test case, respectively.
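In mocha terms (which Konacha is built on), the hooks could look roughly like this; callTestApi is a hypothetical helper, and you would use whatever HTTP utility is already available in your Konacha setup instead of the bare fetch used here:

```ts
// Hypothetical helper: issue a GET against the testing API and fail loudly on errors.
async function callTestApi(path: string): Promise<void> {
  const response = await fetch(path);
  if (!response.ok) {
    throw new Error(`${path} failed with status ${response.status}`);
  }
}

describe('orders acceptance', function () {
  beforeEach(async function () {
    // Bootstrap reference data (ISO language codes etc.) before every case.
    await callTestApi('/api/test/setup');
  });

  afterEach(async function () {
    // Wipe the database so the next case starts from a clean slate.
    await callTestApi('/api/test/reset');
  });

  it('lets a signed-in user place an order', async function () {
    // ... drive the Ember app and assert on the result ...
  });
});
```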
What do you think about this approach?
I'm not sure what I would call testing the API and the client separately, but even if you are considering that kind of test, you should still go for the end-to-end tests.
So yes, I think your idea of going for end-to-end testing is very good.
Your idea of setting up simple commands to allow test automation for your system (the setup and the reset commands) is very good as well. Be prepared to add more during automation: while an end-to-end test is conceptually a black-box test, in reality it's often a grey-box test, i.e. you will need to access the internal state of your system. I would call this the "operation and maintenance interface" of the system under test.
When I run the tests in my local environment, they all pass. However, when the tests run on a different machine, many of the VCR-related tests fail. I want to investigate the cause of this.
My guess is that the requests being made must be slightly different. Is this the best assumption?