How would you run Jasmine tests in a CI environment *without Node.js* (Jenkins)?

I have a bunch of Jasmine tests that I would like to run on a Jenkins CI server.
At the moment, we use an HTML page that runs the specs, which a developer can open in a browser on their own machine.
The transition to CI would be easy if I had access to some kind of server-side test runner (like Karma), but for reasons I can't disclose, I cannot run Node.js on our CI server.
So, in the spirit of creativity under constraints, what could I use to automate Jasmine tests without Node? (Anything that can run with Maven and a JDK is probably fine...)

You can make your test automatically spawn a browser with the page that runs your unit tests. The tricky part, though, is getting the results back to the main test runner. The solution I found is to use a custom Jasmine reporter (you just need to implement the same functions as the other reporters), and when a spec has finished running, make an AJAX call to write that result to a file. The main runner just waits until something is written to that file to see the results. Once the tests are finished, don't forget to kill the browser, otherwise your CI server will be flooded with windows.
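Here is a minimal sketch of that idea, assuming a Jasmine 2.x-style reporter interface and a hypothetical server endpoint (/test-results) that appends the posted payload to the file the main runner polls:

    // Custom reporter: collect spec results and POST them when the run ends.
    // '/test-results' is a hypothetical endpoint on the server hosting the
    // spec page; it is expected to write the payload to a file.
    var fileReporter = {
      results: [],

      specDone: function (result) {
        this.results.push({ name: result.fullName, status: result.status });
      },

      jasmineDone: function () {
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/test-results', true);
        xhr.setRequestHeader('Content-Type', 'application/json');
        // The 'done' flag tells the main runner the suite has finished.
        xhr.send(JSON.stringify({ done: true, specs: this.results }));
      }
    };

    jasmine.getEnv().addReporter(fileReporter);

The main runner (for example a Maven exec step) can then spawn the browser, poll the results file until the 'done' flag appears, kill the browser process, and fail the build if any spec status is not 'passed'.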

Related

NUnit3 with SpecFlow runs in VS and as a batch command, but not in Jenkins

I have my Selenium tests written using SpecFlow (+SpecRun) and the NUnit framework (v3.8.1.0). I've configured Jenkins to run these tests. My Jenkins Windows batch command is as follows:
"C:\Program Files (x86)\NUnit.ConsoleRunner\3.7.0\tools\nunit3-console.exe"
C:\Projects\Selenium\ClassLibrary1\PortalTests\bin\Debug\PortalTests.dll
--test=TransactionTabTest;result="%WORKSPACE%\TestResults\TestR.xml";format=nunit3
When I trigger the build, the test seems to start running: the log gets as far as the end of NUNIT3-CONSOLE [inputfiles] [options], with a spinner indicating that the test is running, but it never finishes and the estimated remaining time is N/A.
Now, when I run this script with Windows cmd.exe:
"[PATH to Console.exe]\nunit3-console.exe" PortalTests.dll -- test=TransactionTabTest
the test passes successfully, and it also passes in VS.
I know this is a very generic question, but any clues will be much appreciated.
As you are using SpecFlow+Runner/SpecRun, you can find the documentation on how to configure it for the different build servers here: http://specflow.org/plus/documentation/SpecFlowPlus-and-Build-Servers/

Wallaby on a build server (CI)

We are currently using Wallaby.js for JavaScript unit testing. It works fine and is great. But within our development pipeline we of course want to run the same tests on the build server, in our case TFS.
Is it possible to use Wallaby on a TFS build server? And if yes, how?
If not, what is the way to go to run the Wallaby-configured unit tests on the build server?
Since we used the Karma test runner earlier, I tried to execute the new test configuration with it, but then I get
Can't find variable: wallaby
because our main/starting test file contains
wallaby.delayStart();
require.config({
baseUrl: 'app',
(originally from a Karma/RequireJS configuration)
How can I get around this?
Does anyone have experience with this scenario?
Wallaby.js's main idea is to integrate with editors, run tests for the code that you change, and display the results in the editor. You can't use Wallaby.js in a CI build.
You may consider invoking other test runners, or using a grunt/gulp task instead for JavaScript unit testing.
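That suggestion aside, one way to keep a single test bootstrap working under both Wallaby and another runner such as Karma is to guard the Wallaby-specific calls, since the wallaby global is only defined when Wallaby loads the file. A minimal sketch (the baseUrl value is just the one from the question):

    // Only call into wallaby when its global was actually injected, so the
    // same bootstrap file can also be loaded by karma without errors.
    if (typeof wallaby !== 'undefined') {
      wallaby.delayStart();
    }

    require.config({
      baseUrl: 'app'
      // ... rest of the shared requirejs configuration from the question
    });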
In TFS 2012 and later (it might work in 2010, but I'm not sure) you can extend the testing capabilities of the build system.
Check out these posts -
http://www.aspnetperformance.com/post/Unit-testing-JavaScript-as-part-of-TFS-Build.aspx
https://blogs.msdn.microsoft.com/visualstudioalm/2012/07/09/javascript-unit-tests-on-team-foundation-service-with-chutzpah/

Can yeoman tests be run in parallel?

We just hit an issue with yeoman-generator tests: they pass when run in isolation but fail when run in parallel with other tests.
Specifically, we call require('yeoman-generator').test.run() to run the generator and then use require('yeoman-generator').assert.file to check that the correct files were generated, which is what the documentation says to do. However, the assert would sometimes fail, saying the files don't exist.
How does the interaction between test.run() and assert.file work? Where are the files written? Is it a global variable / temp file that is always the same and can therefore be overwritten by other tests running at the same time?
This is the test, and an example of a failing build.
There's a GitHub issue with a detailed discussion, and here's a discussion of how the tests suddenly started passing when run in isolation.
We are using the Jest testing framework, which runs tests in parallel.
Looks like Yeoman tests can't be run in parallel.
require('yeoman-generator').test.run() does create a temp directory, but then it changes the current working directory to that directory. This interferes with other tests that also rely on the CWD, and therefore the Yeoman tests can't be run in parallel with other tests.
Relevant comment in run-context.js and process.chdir in helpers.js.
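A minimal sketch of the behaviour, assuming the classic test-helper API from the question ('../generators/app' is a hypothetical generator path); the point is that after run() the process-wide CWD has moved into the temp directory, so any other test resolving paths against process.cwd() at the same time can be affected:

    var helpers = require('yeoman-generator').test;
    var assert = require('yeoman-generator').assert;

    var originalCwd = process.cwd();

    helpers.run(require.resolve('../generators/app'))  // hypothetical path
      .on('end', function () {
        // The helper has chdir()'d into a fresh temp directory by now.
        console.log('cwd changed:', originalCwd, '->', process.cwd());
        // assert.file resolves relative paths against the *current* CWD,
        // which is why a parallel test changing it breaks the assertion.
        assert.file(['package.json']);
      });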

Jasmine CI and Capturing Test Result Output on Jenkins Server

Background:
I have inherited a Ruby on Rails 3.1.x project which is in need of some BDD and testing for its JavaScript code. So, following the instructions, I have added the jasmine gem for JS testing. This works OK via rake jasmine and gives me the local web server accessible via http://some-host.com:8888/
Problem:
What I want to do is use the tests on the CI server, which is running Jenkins. The Jenkins project is set up with the command rake jasmine:ci to run the CI variant of Jasmine. The output in the Jenkins build console log is below:
Waiting for jasmine server on 32901...
jasmine server started.
Waiting for suite to finish in browser ...
................
Finished in 0.00454 seconds
16 examples, 0 failures
* Stopping Xvfb :66.0 Xvfb
...done.
I'd like to capture the output (the view generated by the Jasmine web-server page) and preserve it with the build run. I've tried the obvious thing of looking for an -o <filename.out> option, but have not had any success.
Does anyone know how to capture the output in the context of running in a CI instance? Does it require PhantomJS?
I use PhantomJS in combination with a JUnit-compatible XML reporter for Jasmine. Then I simply use the JUnit Jenkins plugin.
The junit reporter and glue code can be found here:
https://github.com/larrymyers/jasmine-reporters
This GitHub project by Larry Myers has a good example setup for this. It contains both a Rhino and a PhantomJS setup. I have only tried the PhantomJS part, and I am really pleased with it.
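For reference, wiring the reporter into the spec runner page looks roughly like this; it's a sketch based on the jasmine-reporters setup for Jasmine 2.x, and the option names may differ between versions:

    // Register the JUnit XML reporter from jasmine-reporters; the XML files
    // it writes are what the Jenkins JUnit plugin consumes.
    jasmine.getEnv().addReporter(new jasmineReporters.JUnitXmlReporter({
      savePath: 'tmp/jasmine-results',  // hypothetical output directory
      consolidateAll: false             // one XML file per suite
    }));

In Jenkins, point the "Publish JUnit test result report" post-build action at the generated XML files (e.g. tmp/jasmine-results/*.xml).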

Gradle tests hang for Spring Security tests with embedded LDAP server

I have a set of tests for Spring Security 3.1.3 with an embedded LDAP server that run properly from Eclipse, or when run through Gradle with the -Dtest.single option. However, when I do a clean build to run the entire set of tests in the project, the execution hangs at the point where it hits those tests, and I have to kill the Gradle process. If I @Ignore the LDAP tests, the other tests work fine. These tests also work properly if I don't use the embedded server, i.e. connect to an external server. It is probably something to do with the fact that Gradle executes tests in a multi-threaded way while the tests try to host an in-memory server.
Has anybody faced similar issues? And how might I get more useful info on what is going on? --info or --debug on Gradle doesn't help, and the test reports (like the ones generated in case of a normal test failure) are also not generated when I kill the Gradle process.
You probably need to set maxParallelForks to 1.
Why don't you copy the approach used by Spring Security itself, which configures a separate task for integration tests? It sets maxParallelForks to 1 for those tests.
That way you can continue to benefit from running unit tests in parallel.
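A minimal sketch of that layout in a Groovy build script (the integTest task name and the include pattern are hypothetical; adjust them to wherever the LDAP tests live):

    // Separate task for the tests that start the embedded LDAP server,
    // forcing them to run in a single forked JVM.
    task integTest(type: Test) {
        include '**/ldap/**'       // hypothetical location of the LDAP tests
        maxParallelForks = 1
    }

    // Regular unit tests keep running in parallel and skip the LDAP tests.
    test {
        exclude '**/ldap/**'
        maxParallelForks = Runtime.runtime.availableProcessors()
    }

    check.dependsOn integTest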

Resources