Testing a service worker using Workbox - service-worker

I have made an implementation of a service worker using Google's Workbox library and I want to test my custom route handlers and logic, but I am really unsure how I can best test this. The tests should be performed in the browser because I will need to verify whether the cache or my database has been modified. I also load my custom routes and handlers via a config file, so I need to make sure the correct routes are registered.
Can somebody point me in the right direction? I have tried unit testing with Jest, but I could not get the functions that use parts of the Workbox library to work.

I can point you to a few different resources, but there's no one clear best, modern way to test service worker functionality (that I'm aware of).
The sw-appcache-behavior test suite
The test suite for sw-appcache-behavior is not directly related to Workbox, but it does test service worker caching behavior. The advantage of basing your tests off of this is that it's a newer and simpler codebase than Workbox, and it shows off how to run a test suite that uses Puppeteer to orchestrate in-browser tests, including examining cache contents after service worker event handlers have completed.
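For a sense of what that style looks like, here is a minimal sketch (TypeScript, mocha-style) of a Puppeteer-orchestrated test that inspects the Cache Storage after the service worker has handled a request; the test page URL, the cache name, and the route being fetched are placeholders for whatever your own config registers, not taken from sw-appcache-behavior itself:

import puppeteer from 'puppeteer';
import { expect } from 'chai';

describe('service worker caching', function () {
  this.timeout(30000);

  it('adds the fetched URL to the expected cache', async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('http://localhost:8080/test-page.html');

    // Wait until a service worker is active and controlling the page.
    await page.evaluate(() => navigator.serviceWorker.ready.then(() => true));

    // Trigger a request that your route handler is expected to cache.
    await page.evaluate(() => fetch('/api/data.json').then(() => true));

    // Examine the cache contents from within the page context. Depending on your
    // handler you may need to poll briefly, since caching can finish after fetch resolves.
    const cachedUrls = await page.evaluate(async () => {
      const cache = await caches.open('my-runtime-cache'); // placeholder cache name
      return (await cache.keys()).map((request) => request.url);
    });

    expect(cachedUrls).to.include('http://localhost:8080/api/data.json');
    await browser.close();
  });
});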
The Workbox test suite
You could also look at how Workbox tests its own first-party code and use that as an example. I'd divide those tests up into three parts:
Node unit tests
Some tests run in a Node environment using a mock service worker environment, and they rely heavily on sinon stubs to observe calls to the underlying fetch() and Cache Storage APIs, ensuring that the expected calls are being made.
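As a rough illustration of that style (not Workbox's actual test harness), you can stub the globals a handler touches and assert on the calls; myRouteHandler and the cache name below are placeholders for your own code:

import sinon from 'sinon';
import { expect } from 'chai';

it('puts a successful response into the runtime cache', async () => {
  // Node 18+ provides Request/Response globals; otherwise use a fetch polyfill.
  const fetchStub = sinon.stub().resolves(new Response('payload'));
  const putSpy = sinon.spy();
  const cachesStub = {
    open: sinon.stub().resolves({ put: putSpy, match: sinon.stub().resolves(undefined) }),
  };

  // Inject the stubs into the global scope that the handler expects to find.
  (globalThis as any).fetch = fetchStub;
  (globalThis as any).caches = cachesStub;

  // myRouteHandler is a stand-in for whatever handler you register via your config.
  await myRouteHandler(new Request('https://example.com/data.json'));

  expect(fetchStub.calledOnce).to.be.true;
  expect(cachesStub.open.calledWith('my-runtime-cache')).to.be.true;
  expect(putSpy.calledOnce).to.be.true;
});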
Current integration tests
These tests rely on Selenium (wrapped via the selenium-assistant library) to control instances of Safari, Chrome, and Firefox. The advantage over the mock environment used in the unit tests is that you exercise the behavior in a real browser environment. Unfortunately, there's a decent amount of pre-test workarounds and general flakiness involved in this approach to testing.
Future integration tests
We've been working on a modernization of the Workbox integration test suite to use Playwright instead of selenium-assistant to orchestrate tests across Safari, Firefox, and Chrome. The resulting code ends up being much less complex, with fewer browser-specific workarounds. This is probably what I would recommend modeling a new codebase off of.
Unfortunately, we haven't had enough time to prioritize migrating the entire existing test suite over to Playwright, but you can take a look at some of what has been migrated over in a branch, and use that as inspiration.
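For illustration, a hypothetical playwright.config.ts that fans the same suite out across the three engines looks roughly like this (standard Playwright configuration, not Workbox's actual setup; the webServer command and port are placeholders):

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './integration',
  webServer: {
    command: 'npm run serve-test-app', // placeholder script that serves the SW test pages
    url: 'http://localhost:8080',
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});

The test bodies themselves can then use the same in-page cache inspection shown in the Puppeteer sketch above, via Playwright's page.evaluate().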

Related

iOS automated tests questions

I've been working on a project to which I want to add automated tests. I have already added some unit tests, but I'm not confident in the process I've been using; I don't have much experience with automated tests, so I would like to ask for some advice.
The project is integrated with our web API, so it has a login process. Depending on the logged-in user, the API provides a configuration file which will allow / disallow access to some modules and permissions within the mobile application. We also have a sync process where the app accesses several methods from the API to download files (PDFs, HTML, videos, etc.) and also receives a lot of data through JSON files. The user basically doesn't have to insert data, just use the information received in the sync process.
What I did to add unit tests in this scenario so far was to simulate a logged-in user, then add some fixture objects to the user and test them.
I was able to test the web service integration; I used Nocilla to return fake JSONs and assert the result. So far I was only able to test individual requests, but I still don't know how I should test the sync process.
I'm having a hard time creating unit tests for my view controllers. Should I unit test just the business logic and do all the rest with tools like KIF / Calabash?
Is there an easy way to set up the fixture data and files?
Thanks!
Everybody's mileage may vary but here's what we settled on and why.
Unit tests: We use a similar strategy. The only difference is that we use OHHTTPStubs instead of Nocilla, because we saw some extra flexibility there that we needed and were happy to trade off Nocilla's easier syntax.
Doing more complicated (non-single-query) test cases quickly lost its luster because we were essentially rebuilding whole HTTP request/response flows, and that wasn't very "unit". For functional tests, we did end up adopting KIF (at least for dev-focused efforts, assuming you don't have a separate QA department) for a few reasons:
We didn't buy/need the multi-language abstraction layer afforded by Appium.
We wanted to be able to run tests on many devices per build server.
We wanted more whitebox testing, and while Subliminal was attractive, we didn't want to build hooks into our main app code.
Testing view controller logic (anything that's not unit-oriented) is definitely much more useful with KIF/Calabash or something similar, so that's the path I would suggest.
For bonus points, here are some other things we did. Goes to show what you could do I guess:
We have a basic proof of concept that binds KIF commands to a JSON-RPC server. So you can run a test target on a device and have that device respond to HTTP requests, which will then fire off test cases or KIF commands. One of the advantages of this is that you can reuse some of the test code you wrote for a single device in multi-device test cases.
Our CI server builds integration tests as a downstream build of our main build (which includes unit tests). When the build starts we use xctool to precompile the tests, and then we have some scripts that start a QuickTime screen recording, run the KIF tests, export the results, and then archive them on our CI server so we can see a live test run along with the test logs.
Not really part of this answer but happy to share that if you ping me.

Does Geb roll back the database to its virgin state after each test?

I've been getting my feet wet with Geb on Grails, but there's not a lot of documentation about how it behaves. For instance, how does Geb handle rollbacks? From what I'm observing, it runs the app and runs the tests in the browser itself without shutting it down between tests.
What happens to the database data when one spec (spec A) alters an object (object Z), and a few tests later another spec (spec B) alters the same object? Does Geb roll back the database to its virgin state every time a spec is run? I'm trying to confirm because I have Geb tests that ran fine when executed individually, but when I ran them as a suite some of them failed, and the best reason I could come up with is that the data isn't in pristine condition when a second test is run on it. Any thoughts?
Geb tests, and functional tests in general, are quite different from unit and integration tests. Unit and integration tests run in the same JVM, and the test runner starts a transaction before each test and rolls it back after the test runs, which has the effect of resetting the database (in reality it just keeps tests from changing the database). But any data inserted into the database before the tests start (e.g. from BootStrap) will be there for each test.
Functional tests, however, typically run in one JVM and make remote calls to your app running in a second JVM. This limits what you can do during tests; for example, you can't manipulate metaclasses, change Spring bean instance variables, or start and roll back transactions to isolate data changes between tests. You can do any of those things, but they would affect the local JVM only.
Geb could make these changes remotely, of course, but that would require modifying your application to add a controller or some other way of receiving remote calls, and it doesn't.
In general tests shouldn't be ordered and should be independent, but I find that when doing functional tests it makes sense to break that rule and order them, so that an earlier test does some inserts or other changes and later tests do further work and/or checks based on the earlier changes. I've also added test-only controller actions that can be used to roll back changes (via a transaction, or by deleting inserted data, undoing updates and deletes, etc.) and make other changes to assist the tests, but this has to be done carefully to ensure that it is only available during testing and doesn't become a significant security risk.

Acceptance Testing Rails using a "Testing API" for client side state control

I am currently evaluating how to test a rather big and complex web application, based on Rails 4 on the server side and EmberJS on the client side. In our app, the client exclusively communicates through a restful JSON API with the server.
We have done a lot of unit testing based on Konacha so far and now want to set up integration/acceptance tests too. We are not sure whether we should start writing end-to-end tests, i.e. tests that include a running instance of our server, or whether we should go for integration testing of the API and the client side separately.
Our preferred choice at the moment is end-to-end testing, because we fear that integration testing the API and client separately would mean twice the effort of creating and maintaining tests, and that there might be tiny little peculiarities of the communication between API and client which we could not catch.
Well, we like modern & fast testing frameworks like Konacha, so we don't really want to use Selenium, not only because it feels a little bit old, but also because its performance is quite poor. Still, you need to control the instantiation of mock data on the server and the reset of the server, which is why we came up with the following approach:
We implemented a Testing API which conceptually is used to control the state of the server, e.g. it has the following methods:
GET /api/test/setup # Simple bootstrapping of the database, e.g. populate table with ISO language codes etc...
GET /api/test/reset # Reset the database, using `database_cleaner` gem
A Konacha test case would then call setup and reset before and after each test case, respectively.
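As a sketch of what that could look like in a mocha-based Konacha spec (the endpoint paths are the ones above; whether you use fetch, jQuery, or Ember's ajax helper is up to your setup):

describe('lesson flow', function () {
  // Seed the database before every case, and wipe it clean again afterwards.
  beforeEach(() => fetch('/api/test/setup'));
  afterEach(() => fetch('/api/test/reset'));

  it('shows the seeded ISO languages', function () {
    // ...exercise the Ember app against the freshly seeded server...
  });
});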
What do you think about this approach?
Not sure what I would call testing the API and the client separately, but even if you think about running this kind of test, you should still go for the end-to-end test.
So yes, I think your idea of going for end-to-end testing is very good.
Your idea of setting up simple commands to allow test automation for your system (the setup and the reset commands) is very good as well. Be prepared to add more during automation: while an end-to-end test is conceptually a black-box test, in reality it's often a grey-box test, i.e. you will need to access the internal state of your system. I would call this the "operation and maintenance interface" of the system under test.

Automate testing of process flows in Rails

I am building an educational service, and it is a rather process-heavy application. I have a lot of user actions triggering various other actions, some immediate, some in the future.
For example, when a student completes a lesson for his day, the following should happen:
updating progress count for his user-module record
checking if he has completed a particular module and progressing him to the next one (which in turn triggers more actions)
triggering current emails to other users
triggering FUTURE emails to himself (ongoing lesson plans)
creating range of other objects (grading todos by teachers)
any other special case events
All these triggers are built into the observers of various objects, and the execution delayed using Sidekiq.
What is killing me is the testing, and the paranoia that I might break something whenever I push something. In the past I did a lot of assertion and validation checks, and they were sufficient. For this project, I think that is not enough, given the elevated complexity.
As such, I would like to implement a testing framework, but after reading through the various options (RSpec, Cucumber), it is not immediately clear what I should be investing my effort into, given my rather specific needs, especially for the observers and scheduled events.
Any advice and tips on what approach and framework would be the most appropriate? Would probably save my ass in the very near future ;)
Not that it matters, but I am using Rails 3.2 / Mongoid. Happy to upgrade if it works.
Testing can be a very subjective topic, with different approaches depending on the problems at hand.
I would say that given your primary need for testing end-to-end processes (often referred to as acceptance testing), you should definitely check out something like Cucumber or Steak. Both allow you to drive a headless browser and run through your processes. This kind of testing will catch any big show stoppers and allow you to modify the system and be notified of breaks caused by your changes.
Unit testing, although very important and something that should always be used alongside acceptance tests, isn't for end-to-end testing. It's primarily for testing the output of specific methods in isolation.
A common pattern to use is called Test Driven Development (TDD). In this, you write your acceptance tests first, in the "outer" test loop, and then code your app with unit tests as part of the "inner" test loop. The idea is that when you've finished the code in the inner loop, the outer loop should also pass, and you should have built up enough test coverage to have confidence that any future changes to the code will pass or fail the tests depending on whether the original requirements are still met.
And lastly, a test suite is something that should grow and change as your app does. You may find that whole sections of your test suite can (and maybe should) be rewritten depending on how the requirements of the system change.
Unit testing is a must. You can use RSpec or Test::Unit for that. It will give you at least 80% confidence.
Enable "render views" for controller specs. You will catch syntax errors and simple logical errors faster that way. There are also ways to test Sidekiq jobs; have a look at this.
Once you are confident that you have enough unit tests, you can start looking into using Cucumber/Capybara or RSpec/Capybara for feature testing.

How to test a class that connects to an FTP server?

I am developing a live update feature for my application. So far I have created almost all the unit tests, but I have no idea how to test a specific class that connects to an FTP server and downloads new versions.
To test this class, should I create an FTP test server and use it in my unit tests? If so, how can I make sure this FTP server is always consistent with my tests? Should I manually create every file I will need before the tests begin, or should I automate this in my test class (setup and teardown methods)?
This question also applies to unit testing classes that connect to any kind of server.
EDIT
I am already mocking my FTP class so I don't always need to connect to the FTP server in other tests.
Let me see if I got this right about what Warren said in his comment:
I would argue that once you're talking to a separate app over TCP/IP we should call that "integration tests". One is no longer testing a unit or a method, but a system.
When a unit test needs to communicate with another app (which can be an HTTP server or FTP server), is it no longer a unit test but an integration test? If so, am I doing it wrong by trying to use unit testing techniques to create this test? Is it correct to say that I should not unit test this class? It does make sense to me because it seems to be a lot of work for a unit test.
In testing, the purpose is always first to answer the question: what is tested - that is, the scope of the test.
So if you are testing a FTP server implementation, you'll have to create a FTP client.
If you are testing a FTP client, you'll have to create a FTP server.
You'll therefore have to downsize the test's extent until you reach a unitary level.
For your purpose, it may be, e.g.:
Getting a list of the current files installed for the application;
Getting a list of the files available remotely;
Getting a file update;
Checking that a file is correct (checksum?);
and so on...
Each tested item is to have some mocks and stubs. See this article about the difference between the two. In short (AFAIK), a stub is just an emulation object, which always works. And a mock (which should be unique in each test) is the element which may change the test result (pass or fail).
For the exact purpose of an FTP connection, you may e.g. (when testing the client side) have some stubs which return a list of files, and a mock which will test several possible issues of the FTP server (time-out, connection lost, wrong content). Then your client side shall react as expected. Your mock may be a true FTP server instance, but one which will behave as expected to trigger all potential errors. Typically, each error shall raise an exception, which is to be tracked by the test units, in order to pass/fail each test.
It is a bit difficult to write good testing code. A test-driven approach is a bit time-consuming at first, but it is always better in the long term. A good book is mandatory here, or at least some reference articles (like Martin Fowler's, as linked above). In Delphi, using interfaces and SOLID principles may help you write such code and create stubs/mocks for your tests.
From my experience, every programmer can sometimes get lost writing tests... good test writing can be more time-consuming than feature writing in some circumstances... you are warned! Each test shall be seen as a feature, and its cost shall be evaluated: is it worth it? Is another test not more suitable here? Is my test decoupled from the feature it is testing? Is it not already tested? Am I testing my code, or a third-party/library feature?
Out of the subject, but my two cents: HTTP/1.1 may be a better candidate nowadays than FTP, even for file updates. You can resume an HTTP connection, load HTTP content in chunks in parallel, and the protocol is more proxy-friendly than FTP. It is also much easier to host some HTTP content than FTP (some FTP servers also have known security issues). Most software updates are performed via HTTP/1.1 these days, not FTP (e.g. Microsoft products or most Linux repositories).
EDIT:
You may argue that you are making integration tests, when you use a remote protocol. It could make sense, but IMHO this is not the same.
To my understanding, integration tests take place when you let all your components work together as in the real application, then check that they are working as expected. My proposal about FTP testing is that you mock an FTP server in order to explicitly test all potential issues (timeout, connection or transmission errors...). This is something other than integration testing: code coverage is much bigger, and you are only testing one part of the code, not the whole code integration. Using a remote connection does not in itself make it an integration test: this is still unit testing.
And, of course, integration and system tests shall be performed after the unit tests. But FTP client unit tests can mock an FTP server, running it locally but simulating all the potential issues which may occur out in the real big World Wide Web.
If you are using Indy 10's TIdFTP component, then you can utilize Indy's TIdIOHandlerStream class to fake an FTP connection without actually making a physical connection to a real FTP server.
Create a TStream object, such as TMemoryStream or TStringStream, that contains the FTP responses you expect TIdFTP to receive for all of the commands it sends (use a packet sniffer to capture those beforehand to give you an idea of what you need to include), and place a copy of your update file in the local folder where you would normally download to. Create a TIdIOHandlerStream object and assign the TStream as its ReceiveStream, then assign that IOHandler to the TIdFTP.IOHandler property before calling Connect().
For example:
ResponseData := TStringStream.Create(
  '220 Welcome' + EOL +
  ... + // login responses here, etc...
  '150 Opening BINARY mode data connection for filename.ext' + EOL +
  '226 Transfer finished' + EOL +
  '221 Goodbye' + EOL);
IO := TIdIOHandlerStream.Create(FTP, ResponseData); // TIdIOHandlerStream takes ownership of ResponseData by default
FTP.IOHandler := IO;
FTP.Passive := False; // Passive=True does not work under this setup
FTP.Connect;
try
  FTP.Get('filename.ext', 'c:\path\filename.ext');
  // copy your test update file to 'c:\path\filename.ext'...
finally
  FTP.Disconnect;
end;
Unit tests are supposed to be fast, lightning fast. Anything that slows them down discourages you from wanting to run them.
They are also supposed to be consistent from one run to another. Testing an actual file transfer would introduce the possibility for random failures in your unit tests.
If the class you are testing does nothing more than wrap the API of the FTP library you are using, then you've reached one of the boundaries of your application and you don't need to unit test it. (Well, sometimes you do. It's called exploratory testing, but those tests are usually thrown away once you get your answer.)
If, however, there is any logic in the class, you should try to test it in isolation from the actual API. You do this by creating a wrapper for the FTP API. Then in your unit tests you create a test double that can stand in as a replacement for the wrapper. There are lots of variations that go by different names: stub, fake, mock object. The bottom line is you want to make sure your unit tests are isolated from any external influence. A unit test with sporadic behavior is less than useless.
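A minimal sketch of that wrapper-plus-test-double shape (written in TypeScript for brevity; in Delphi you would express the same idea with an interface and two implementing classes, and every name here is illustrative rather than from any real library):

interface FtpClient {
  listFiles(remoteDir: string): Promise<string[]>;
  download(remoteFile: string, localPath: string): Promise<void>;
}

// Production wrapper: a thin class delegating to the real FTP library goes here.

// Test double: backed by an in-memory map, no sockets involved.
class FakeFtpClient implements FtpClient {
  constructor(private files: Map<string, string>) {}
  async listFiles(): Promise<string[]> {
    return [...this.files.keys()];
  }
  async download(remoteFile: string, _localPath: string): Promise<void> {
    if (!this.files.has(remoteFile)) {
      throw new Error(`no such file: ${remoteFile}`);
    }
    // A fuller fake would write this.files.get(remoteFile) to _localPath.
  }
}

// The update logic is then unit-tested against the fake, never the network.
async function hasUpdate(ftp: FtpClient, currentVersion: string): Promise<boolean> {
  const files = await ftp.listFiles('/releases');
  // Naive illustration: any release file newer (lexically) than the current one counts.
  return files.some((name) => name > `app-${currentVersion}.zip`);
}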
Testing the actual file transfer mechanism should be done in integration testing, which is usually run less frequently because it's slower. Even in integration testing you'll want to control the test environment as much as possible (i.e. testing with an FTP server on the local network that is configured to mimic the production server).
And remember, you'll never catch everything up front. Errors will slip through no matter how good the tests are. Just make sure when they do that you add another test to catch them the next time.
I would recommend either buying or checking out a copy of xUnit Test Patterns by Gerard Meszaros. It's a treasure trove of useful information on what/when/how to unit test.
Just borrow the FTP or HTTP Server demo that comes with whatever socket component set you prefer (Indy, ICS, or whatever). Instant test server.
I would put it into a tools folder to go with my unit tests. I might write some code that checks if TestFtpServer.exe is already live, and if not, launch it.
I would keep it out of my unit test app's process memory space, thus the separate process.
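The check-and-launch idea, sketched in TypeScript/Node for brevity (the server exe name, Tools path, and port are placeholders; the equivalent in Delphi would probe the port and ShellExecute the server):

import net from 'node:net';
import { spawn } from 'node:child_process';

function isPortOpen(port: number, host = '127.0.0.1'): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ port, host });
    socket.once('connect', () => { socket.end(); resolve(true); });
    socket.once('error', () => resolve(false));
  });
}

export async function ensureTestFtpServer(): Promise<void> {
  if (await isPortOpen(2121)) return; // already live, reuse it
  spawn('Tools/TestFtpServer.exe', { detached: true, stdio: 'ignore' }).unref();
  // Give the server a moment to start listening before the tests run.
  for (let i = 0; i < 20 && !(await isPortOpen(2121)); i++) {
    await new Promise((resolve) => setTimeout(resolve, 250));
  }
}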
Note that by the time you get to FTP server operations, unit testing should really be called "integration testing".
I would not manually create files from my unit test. I would expect my code to check out from version control and build, as is, from a batch file which runs my test program. That test program knows about a sub-folder called Tools that contains EXEs, and maybe folders called ServerData and LocalData that hold the data that starts out on the server and gets transferred down to my local unit test app. Maybe you can hack your demo server to have it terminate a session part way through (when you want to test failures), but I still don't think you're going to get good coverage.
Note: if you're doing automatic updates, I think that no amount of unit testing is going to cut it. You need to deal with a lot of potential issues that are internet-related. What happens when your hostname doesn't resolve? What happens when a download gets part way through and fails? Automatic updating is not a great match for the capabilities of unit testing.
Write a couple of focused integration tests for the one component which knows how to communicate with an FTP server. For those tests you will need to start an FTP server before each test, put any files needed by the test on it, and shut the server down after the test.
With that done, in all other tests you won't use the component which really connects to an FTP server, but you will use a fake or mock version of it (which is backed by some in-memory data structure instead of real files and network sockets). That way you can write unit tests, which don't need an FTP server or network connection, for everything else except the FTP client component.
In addition to those tests, it might be desirable to also have some end-to-end tests which launch the whole program (unlike the component-level focused integration tests) and connect to a real FTP server. End-to-end tests can't cover all corner cases (unlike unit tests), but they can help to solve integration issues.

Resources