MUnit integration test cases fail (timeout) when I create multiple suites but pass when I add all the test cases to a single file

I am trying to write integration test cases (without mocking responses) for my Mule flows using MUnit. I have created two different test suites for two of the flows under test. Individually these tests run successfully on my local machine, but the build fails in my CI/CD pipeline when running them all together.
If I add all the test cases to a single file, the build succeeds. I'm wondering if there is a fix for this, as I do not want to clutter a single file with all my test cases.
Below are specific details:
MUnit version: 2.3.0
Mule 4
CI/CD platform: GitHub Actions
Any help would be appreciated. Thanks in advance.

Related

Why do Bazel tests fail on Intel when Bazel is built using the bootstrap method?

While building Bazel v4.2.2, I observed that some of the tests fail on Intel (Ubuntu 18.04). Some of the tests pass on CI and some are excluded there. The difference between the two environments is that I used the bootstrapping method, whereas CI builds with an older Bazel binary. Below are my observations. These tests fail when I use the bootstrap method but pass when Bazel is built using the binary:
bazel_bootstrap_distfile_test
srcs_test
bazel_determinism_test
Tests that fail on Intel but are excluded from CI:
string_annotation_value_test
external_integration_test
workspace_resolved_test fails with both approaches, but passes on CI. I have been trying to find out why the test behavior differs between the bootstrap and binary methods, but have been unable to identify the cause. Could you please let me know whether there is a known issue or workaround to make these tests pass, or whether it is OK to ignore the tests that fail only with the bootstrap method or that are excluded from CI builds?

Wallaby on a build server (CI)

We are currently using Wallaby.js for JavaScript unit testing. It works fine and is great. But within our development pipeline we of course want to run the same tests on the build server, in our case TFS.
Is it possible to use Wallaby on a TFS build server? And if yes, how?
If not, what is the way to go to run the Wallaby-configured unit tests on the build server?
As we used the Karma test runner earlier, I tried to execute the new test configuration with it, but then I get
Can't find variable: wallaby
because in our main/starting test file it says:
    wallaby.delayStart();
    require.config({
        baseUrl: 'app',
(originally from a Karma/RequireJS configuration)
How do I get around this? Does anyone have experience with this scenario?
Wallaby.js's main idea is to integrate with editors: it runs tests for the code that you change and displays the results in the editor. You can't use Wallaby.js in a CI build.
You may consider invoking other test runners, or using a grunt/gulp task instead, for JavaScript unit testing.
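If the same bootstrap file must still load under another runner such as Karma, one possible workaround for the "Can't find variable: wallaby" error is to guard the Wallaby-specific call. A minimal sketch, assuming the bootstrap file shown in the question is shared between both runners:

    // Only call the Wallaby API when the file actually runs under Wallaby;
    // under Karma the `wallaby` global is undefined and the call is skipped.
    if (typeof wallaby !== 'undefined') {
        wallaby.delayStart();
    }

    require.config({
        baseUrl: 'app'
        // ...rest of the existing RequireJS configuration
    });

This only avoids the reference error; the Wallaby-specific behaviour (delayed start, change tracking) simply will not happen under Karma.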
In TFS 2012 and later (might work in 2010 but not sure) you can extend the testing capabilities of the build system.
Check out these posts:
http://www.aspnetperformance.com/post/Unit-testing-JavaScript-as-part-of-TFS-Build.aspx
https://blogs.msdn.microsoft.com/visualstudioalm/2012/07/09/javascript-unit-tests-on-team-foundation-service-with-chutzpah/

Grails test cases run twice when I execute grails test-app

I am using the Spock plugin in my Grails 2.3.4 application for automated unit and integration tests. When I run grails test-app, all the test cases run twice, and in the test report every spec file is listed twice. As the application grew, the number of test cases also grew, and all of them run twice. This doubles the time it takes to execute the test suite, both during development and when deploying through Jenkins. Can anyone help me fix this (any help will be appreciated)?
From the Grails 2.3.4 upgrade guide (http://grails.github.io/grails-doc/2.3.4/guide/upgradingFromPreviousVersionsOfGrails.html), under "Spock included by default":
You no longer need to add the Spock plugin to your projects. Simply
create Spock specifications as before and they will be run as unit
tests. In fact, don't install the Spock plugin, otherwise your
specifications will run twice [...].
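In practice that means removing the plugin declaration from the build configuration. A minimal sketch, assuming the plugin was declared in grails-app/conf/BuildConfig.groovy (the version shown is illustrative):

    // grails-app/conf/BuildConfig.groovy
    grails.project.dependency.resolution = {
        plugins {
            // Remove (or comment out) this line: Grails 2.3.x bundles Spock,
            // so declaring the plugin as well makes every spec run twice.
            // test ":spock:0.7"
        }
    }

After removing it, run grails clean (and grails refresh-dependencies) so the stale plugin is not picked up from the cache.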

Gradle tests hang for Spring Security tests with embedded LDAP server

I have a set of tests for Spring Security 3.1.3 with an embedded LDAP server that run properly from Eclipse, or through Gradle with the -Dtest.single option. However, when I do a clean build to run the entire set of tests in the project, execution hangs at the point where it hits those tests, and I have to kill the Gradle process. If I @Ignore the LDAP tests, the other tests work fine. The tests also work properly if I don't use the embedded server, i.e. connect to an external server. It is probably related to the fact that Gradle executes tests in a multi-threaded way while each forked process tries to host an in-memory server.
Has anybody faced similar issues? And how might I get more useful info on what is going on? Running Gradle with --info or --debug doesn't help, and the test reports (like the ones generated for a normal test failure) are not generated when the Gradle process is killed.
You probably need to set maxParallelForks to 1.
Why don't you copy the approach used by Spring Security itself, which configures a separate task for integration tests? It sets maxParallelForks to 1 for those tests.
That way you can continue to benefit from running unit tests in parallel.
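A minimal sketch of that layout, using Gradle 1.x-era syntax (the task name and test-selection pattern are illustrative, not Spring Security's actual build):

    // build.gradle
    task integrationTest(type: Test) {
        // reuse the main test source set; pick out the embedded-LDAP tests by convention
        testClassesDir = sourceSets.test.output.classesDir
        classpath = sourceSets.test.runtimeClasspath
        include '**/ldap/**'
        // one forked JVM at a time, so only one embedded LDAP server is ever started
        maxParallelForks = 1
    }

    test {
        // keep the LDAP tests out of the parallel unit-test run
        exclude '**/ldap/**'
        maxParallelForks = 4
    }

    check.dependsOn integrationTest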

Will Team Foundation Build Server execute unit tests sequentially or in parallel?

We use TFS 2010 and automated builds.
We also make use of MSTest.
I would like some concrete information about the build server's test execution method.
Will the test engine (on build server) run the unit tests sequentially or in parallel?
By default it will run them sequentially. You can customize the build workflow by adding a Parallel activity and running different sets of tests in each branch. Or, if you want to parallelize the test run across multiple build machines, you can have the build use multiple RunOnAgent activities (http://blogs.msdn.com/b/jimlamb/archive/2010/09/14/parallelized-builds-with-tfs2010.aspx).
Note: If you execute the tests across multiple test runs you will end up with multiple test reports (.trx files) that will not be merged together without further customization of the build.
@Dylan Smith's answer is correct, but does not cover a third option:
Executing unit tests in parallel on a multi-CPU/core machine
DANGER WILL ROBINSON: This is only applicable to VS2010 and mstest.exe. VS2012 has a new test runner that does not support parallel test execution (see the Visual Studio UserVoice item "Run unit tests in parallel"). The VS2012 test system can still use the legacy test runner, and you can make this work by specifying a .testsettings file via the MSTest/SettingsFile element (see "Configuring Unit Tests by using a .runsettings File").
How to: Enable parallel test execution
Ensure you have a multi-core/CPU machine
Ensure you are running only unit tests
Ensure your tests are thread-safe
Ensure you do not have any diagnostic data adapters turned on
Ensure you are running locally (cannot use TestController/TestAgent)
Modify your test settings file.
Right-click the test setting file and select "Open With" -> Open as Xml
Set the parallelTestCount attribute on the Execution element
Options are:
blank = 1 CPU/core (this is the default)
0 = auto-configure: the runner uses as many parallel tests as your CPU and core count allow
n = the number of tests to run in parallel
Save your settings file
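For reference, a sketch of what the relevant part of the saved .testsettings file looks like (a file generated by Visual Studio will contain additional elements):

    <?xml version="1.0" encoding="UTF-8"?>
    <TestSettings name="Parallel"
                  xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <!-- parallelTestCount="0" auto-configures based on CPU/core count -->
      <Execution parallelTestCount="0" />
    </TestSettings>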
