Structuring configuration for apps in Common Test suites - erlang

I have hit an issue with Common Test and the way I specify the configuration for the apps I test. I have several collections of test suites, where each collection has a ct_hook module to set up some things.
The way I configure apps I’m going to test is to call application:load/1 and then application:set_env/3 before I call application:ensure_all_started/1.
For an individual collection of test suites, this works well. However, when I run rebar3 ct, it (naturally) runs multiple test suites in succession, and if I need to configure an app that I'm going to use in a later run, it is then too late to call application:set_env/3 if that app was already loaded indirectly (as a dependency, or even a dependency's dependency) in an earlier suite's ct_hook:
init/2 in first_ct_hook:
% loads app_a, but also its dependency app_b and *app_b's* dependency app_z:
application:load(app_a),
application:set_env(app_a, database, my_db_config),

% …
% great success!

init/2 in second_ct_hook:
application:load(app_b), % loads app_b (its dependency app_z is already loaded)
application:set_env(app_a, database, my_db_config),
application:set_env(app_z, important, my_important_config), % oh no! too late!
What’s the proper way to do this?
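For reference, here is a minimal sketch of the pattern described above as a complete hook; the module name, app name, and config values are the hypothetical ones from the example, and error handling is kept to a minimum:

-module(first_ct_hook).
-export([init/2]).

%% Sketch only: load the app, set its environment, then start it (and its
%% dependencies) before the suites run, as described in the question.
init(_Id, Opts) ->
    case application:load(app_a) of
        ok -> ok;
        {error, {already_loaded, app_a}} -> ok
    end,
    ok = application:set_env(app_a, database, my_db_config),
    {ok, _Started} = application:ensure_all_started(app_a),
    {ok, Opts}.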

Related

Understanding XCTestCase life cycle - How to perform setup BEFORE Xcode unit test launches the app?

I am new to unit tests in Xcode and Swift and have some trouble understanding the life cycle of XCTestCase.
How/where do I add setup code that is executed before the actual app is launched?
The problem is that the host app is launched before any of the test setup methods (class func setUp(), func setUp(), func setUpWithError()) are executed.
Is it even possible to run test code before the host app launches?
Details:
As described in a previous question, my app uses a SQLite database to persist some data. When the app launches, a database connection is created and data is read from the database.
To make tests consistent and repeatable I would like to use a fresh database with some well-defined data every time I run the tests. To achieve this I tried to override setUpWithError, remove the existing db file, and move a file with pre-defined data into its place instead.
Unfortunately this does not work, because setUpWithError is executed only after the host app has launched. The same is true for all other test setup methods.
Moving a fresh database file into place before running the tests is only an example. The problem is the same for all local data which should be in place before the host app launches to ensure repeatable tests.
An answer to my previous question included a UIApplication extension with an isTesting method which can be used to check whether a test is being performed. While I could use this in my app code to set up the test data, I would consider this a bad solution. I would like to keep the test code completely separate from the production code. Is this possible?
There are several approaches to setting up data before running a test case:
NSPrincipalClass
As described here, you can create a class whose init method is executed before any test runs. This helps with setting up dependencies used by many test cases. I don't think this is the way to go in your case.
isTesting
Instead of setting up the code in your app target, you can also check for isTesting early in the didFinishLaunchingWithOptions method of your AppDelegate and simply return there. In that case the regular code is not executed, and you can run your custom database setup code in the setUp of your test class.
Dependency Injection
Right now you are doing integration testing, in my opinion. If you want proper unit tests, they should not operate against the database of the underlying app. Instead, create a separate (in-memory) database and inject it into the code you are testing. I think this is the approach you should follow. The benefits are:
Your app is not affected by your tests. Next time you start the app to test manually, your original data is still there.
Your unit tests are not affected by manual changes to the app.
Your unit tests are also independent of each other. Order of execution doesn't change the results when the database is set up fresh for each test case.
Is it even possible to run test code before the host app launches?
If you have to ask that, your tests are not unit tests. Unit tests test code, not the app. A test that requires the app to be running would be a UI test.
In fact, the best approach for unit tests is to put all your testable code into a framework and give the framework unit tests; that way, your tests run much faster because the app never has to launch at all.
It sounds to me like the problem lies deeper in your app's code: you have evidently not written your code in such a way as to be testable. So that would be your first move. Writing testable code, and writing unit tests, is an art; you have to separate out the "system under test", which should be your code alone, and make sure you are not testing anything that doesn't belong to you and whose workings are already known — like Core Data.

Progress ABL: How to test for WEBSPEED in the pre-processor

I want to conditionally compile some blocks of code depending on the type of client I'm running in. This is fine for batch and TTY, as I can use {&BATCH-MODE}, but how do I test for when the code is being compiled in a WebSpeed agent? E.g.:
&IF NOT {&SOMETHING} EQ "YES" &THEN
&ANALYZE-SUSPEND
foo
bar
&ANALYZE-RESUME
&ENDIF
It would be helpful if this did not rely on defines auto-generated by the architect in .w's etc., but that would be a nice-to-have, not essential.
Compile time isn't run time. If the program can be run in different ways (as part of a webpage using WebSpeed, as part of a batch job, as part of some other kind of client, etc.), you're most likely better off evaluating this at run time instead.
You can identify in what environment you're running:
SESSION:CLIENT-TYPE
This will identify your type of client.
DISPLAY SESSION:CLIENT-TYPE.
Type of client                        Attribute value
------------------------------------  -----------------------
ProVision standard ABL client         4GLCLIENT
WebClient                             WEBCLIENT
AppServer agent                       APPSERVER
WebSpeed agent                        WEBSPEED
Pacific Application Server agent      MULTI-SESSION-AGENT
Other special-purpose clients         Unknown value (?)
Documentation
Using VST
If you have at least one database connected, _Connect-ClientType tells you what kind of client this particular connection is:
Value     Client
--------  ---------------------
ABL       ABL client
SQLC      SQL client
WTA       Webspeed agent
APSV      AppServer agent
SQFC      SQL Federated client
Example:
FIND FIRST _myconnection NO-LOCK.
FIND FIRST _connect NO-LOCK WHERE _connect._connect-usr = _myconnection._MyConn-userid.
DISPLAY _connect._Connect-ClientType.
Based on OS
Perhaps you run on different OSes?
DISPLAY OPSYS.
Other ways
There are a number of other ways of doing this, including perhaps looking at the PROPATH, the working directory, etc.
Try to stick with a solution that won't change over time because of Progress upgrades, new OSes, new directory structures, etc.
IMHO there is no such preprocessor variable out of the box.
But you could create your own include file and include that in the relevant code. You need two versions of that file: one says
&GLOBAL-DEFINE WebSpeed WebSpeed
and the other
&GLOBAL-DEFINE NoWebSpeed NoWebSpeed
And then configure your compile sessions so that they find exactly one of the files in the PROPATH.
But as you will agree, this is probably dangerous, as the result relies heavily on the proper PROPATH being used during compilation. I'd rather attempt to use a runtime condition instead.
What are you trying to achieve in detail?
Finally figured it out this morning: {&webstream} and {&out} are not defined in normal sessions, so I can just test for that. Runtime is not an issue in my case; I just want to compile the code in all cases. In this shop (don't ask me why) every single piece of code is session-compiled. Poor CPU, but there you go. I could be defensive and add some logic with SESSION:CLIENT-TYPE for bells and whistles, you're right. If not CAN-DO then boogie :)

Running Geb + Spock tests headless

I have a number of Geb functional tests for a Grails application.
The tests work as expected when executed from the terminal or an IDE.
However, the tests need to be executed by Hudson, so they are run in headless mode using Xvfb.
The problem is that the tests keep failing, or behaving unexpectedly, returning errors like RequiredPageContentNotPresent and StaleElementReferenceException in places that don't make sense.
For example:
(at LicencePage is verified above, and the page isn't changed)
when:
addDocument(Data.Test_Doc_name,Data.Test_Doc_file)
sometimes throws
Failure: Add Actual Licence (HomePageSpec)
| geb.error.RequiredPageContentNotPresent: The required page content 'addDocument - SimplePageContent (owner: LicencePage, args: [Functional Test Doc, /var/lib/hudson/jobs/KB-Functional_Tests/workspace/app/../manual_test_data/so_v3/os_test_1], value: null)' is not present
at geb.content.TemplateDerivedPageContent.require(TemplateDerivedPageContent.groovy:61)
at geb.content.PageContentTemplate.create_closure1(PageContentTemplate.groovy:63)
at geb.content.PageContentTemplate.create(PageContentTemplate.groovy:82)
at geb.content.PageContentTemplate.get(PageContentTemplate.groovy:54)
at geb.content.NavigableSupport.getContent(NavigableSupport.groovy:45)
at geb.content.NavigableSupport.methodMissing(NavigableSupport.groovy:121)
at geb.Browser.methodMissing(Browser.groovy:194)
at geb.spock.GebSpec.methodMissing(GebSpec.groovy:51)
at HomePageSpec.Add Actual Licence (HomePageSpec.groovy:228)
The method addDocument() is defined on an 'abstract' page, which LicencePage extends. In most cases like this, if I copy the method code directly into my Spec, it works, although that ruins all the structure I have in my test pages.
Does anyone have experience running Geb tests with Xvfb? Have you faced these issues?
All tests pass when executed locally, and this is not a data issue as the DB is always cleared.
Also, without making any changes, the tests behave non-deterministically (on Hudson), so the above exception is not always thrown. Without any changes at all, the tests sometimes succeed and sometimes fail.
The description you gave seems to be the symptom of a flaky test suite. We were facing this problem as well some time ago. A good starting point is this presentation (around minute 35) and the documentation about waiting in Geb.
If you think it could have something to do with Xvfb (which I have no experience with), you could try using PhantomJS as the test runner and check whether it works correctly.

Testing Web Apps Using DRT

I'm trying to use DRT for running acceptance tests.
Because it's an acceptance test, I need to change the location to open the page under test. But of course, after I've done that, my test script is gone.
I tried to use iframes as a workaround, but Dart doesn't provide any means of getting the content of an iframe, which means it's possible to load the page under test into an iframe, but impossible to get its HTML.
I've checked all the DRT tests in the Dart repo:
http://code.google.com/p/dart/source/browse/#svn%2Fbranches%2Fbleeding_edge%2Fdart%2Ftests%2Fhtml
but it seems that none of them changes the location.
Is it possible to use DRT for running acceptance tests? Is there a workaround I didn't think of?
We haven't come up with a good trick (redirection or iframes) to load the app as it is written and run the test code on top of it. Instead, you can copy the entry point of the app and include the test code there, then run the modified app directly in DRT.
Here is an example from the web-ui codebase of a test that does this. This test runs the TodoMVC app and interacts with it:
https://github.com/dart-lang/web-ui/blob/master/test/data/input/todomvc_listorder_test.html
All we did was copy the original app's HTML, add the 'testing.js' script tag, and replace the Dart script tag with the test code. It might be possible to create a script that automates what we do manually today, but we haven't done that.

How do I skip an eunit test?

I wonder how to mark a specific test in EUnit in a way that will force it to be ignored (i.e. compiled, but not executed) on the next test run. I'm asking this question in a TDD context, i.e. I'd like to refactor in the green, but still have some test cases that I'll get to later.
I'd rather not comment out the test; that is a good way of forgetting about it. EUnit's test summary does have a skipped count, but I could not find any docs about that functionality.
You can temporarily remove the '_test' suffix from the test's name (or add another suffix, e.g. '_ignore'). It will still compile, but won't show up in the summary (as it will be treated like a regular function and thus ignored by EUnit).
This is a workaround, of course; EUnit should support such functionality, but I'm afraid it doesn't.
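A minimal sketch of that rename workaround (module and function names are made up for illustration):

-module(sample_tests).
-include_lib("eunit/include/eunit.hrl").

%% Exported only to silence the "unused function" warning for the renamed
%% (ignored) test below; EUnit itself will not run it.
-export([division_ignore/0]).

%% Picked up and run by EUnit because of the _test suffix.
addition_test() ->
    ?assertEqual(4, 2 + 2).

%% Was division_test/0; renamed so it is treated as a regular function and
%% therefore ignored on the next test run.
division_ignore() ->
    ?assertEqual(2, 4 div 2).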
EUnit's notion of "skipped" means that something prevented the test from running, such as a compilation failure, the node that was in charge of the test crashing, or the setup failing.
This concept is pretty deeply embedded in the code, so there's no simple way to get user-skipped tests.
