Setup and Teardown in TypeORM

We have setup and teardown in Jest, where we can run some code before and after all tests run: things like beforeAll, afterAll, globalSetup, etc.
Is there something similar in TypeORM to run code before and after running the migrations? I want to run some code before any of my migrations run when I execute npm run migrate:run. I can't find anything in the docs related to this.

Related

Running Protractor tests using npm

I am new to Protractor, Node and all. I have learned to create scripts with Protractor, but now I need to run these test scripts as part of the build.
I have seen people run it with npm test / npm test e2e. How can I achieve this?
My project structure:
webapp/test/e2e
In my project they are using webpack and Karma.
Sorry, I am really a noob at this and don't know how to set it up.
We also need to configure it in Jenkins, so it runs and generates a flag for success or failure.
Any suggestions on how to set it up would really be helpful. Please be as elaborate as possible, as it is hard for me to understand things as of now.
Thank you so much!
If the package.json file is in the root folder of the project, you can add the code below to it and run the tests using npm test and npm run test:e2e. This assumes you have Protractor and Karma installed locally in your project, which is a best practice (if they are not listed in package.json as dependencies, they probably are not installed locally).
"scripts": {
"postinstall": "node_modules/protractor/bin/webdriver-manager update",
"test": "node_modules/karma/bin/karma start webapp/test/unit/karma.conf.js"
"test:e2e": "node_modules/protractor/bin/protractor webapp/test/e2e/conf.js"
}
postinstall - This runs webdriver-manager update locally and automatically after npm install. Without it you could run into issues with missing drivers when using a CI server like Jenkins that is continuously checking out your repository (it also saves you a CI step).
test - This is equivalent to karma start karma.conf.js
test:e2e - This is equivalent to protractor conf.js, just run locally. To run these tests, type npm run test:e2e from the directory containing your package.json.
It is recommended not to combine Protractor and Karma (although it is possible): Karma should be used for unit and integration tests, and Protractor should be used for e2e tests.
As an extra tip, it is possible to pass Protractor or Karma command-line arguments through the npm scripts. Simply append them the way you normally would (e.g. "webapp/test/e2e/conf.js --baseUrl=https://yourbaseurl.com/").
Your question about integrating npm into Jenkins has already been answered fairly well here: how to run npm/grunt command from jenkins. Be aware that running your tests entirely in Jenkins will cause them to run headlessly; on Windows they will run in Session 0, where the screen resolution is smaller, which causes some tests to fail. Opening the Selenium server beforehand in a Terminal/PowerShell window will cause Jenkins to route your tests through it, so they will be displayed visually on your computer.
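For the Jenkins side, one straightforward option (not covered in the linked answer, and assuming a Pipeline job rather than a freestyle job) is a minimal Jenkinsfile that just calls the npm scripts; a non-zero exit code from npm test or npm run test:e2e marks the build as failed, which gives you the success/failure flag. The stage names below are arbitrary, and Node/npm are assumed to be available on the agent:

pipeline {
    agent any
    stages {
        stage('Install') {
            steps {
                // npm install also triggers the postinstall script above,
                // which keeps the webdriver binaries up to date on the CI node
                sh 'npm install'
            }
        }
        stage('Unit tests') {
            steps {
                sh 'npm test'
            }
        }
        stage('e2e tests') {
            steps {
                sh 'npm run test:e2e'
            }
        }
    }
}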

Can Yeoman tests be run in parallel?

We just hit an issue with yeoman-generator tests: they pass when run in isolation but fail when run in parallel with other tests.
Specifically, we call require('yeoman-generator').test.run() to run the generator and then use require('yeoman-generator').assert.file to check that the correct files were generated, which is what the documentation says to do. However, the assert would sometimes fail, saying the files don't exist.
How does the interaction between test.run() and assert.file work? Where are the files written? Is it a global variable / temp file that is always the same and can therefore be overwritten by other tests running at the same time?
This is the test, and an example of a failing build.
There's a GitHub issue with a detailed discussion, and here's a discussion of how the tests suddenly started passing when run in isolation.
We are using the Jest testing framework which runs tests in parallel.
Looks like Yeoman tests can't be run in parallel.
require('yeoman-generator').test.run() does create a temp directory, but it then changes the current working directory to that directory. This interferes with other tests that also rely on the CWD, so the Yeoman tests can't be run in parallel with other tests.
See the relevant comment in run-context.js and the process.chdir call in helpers.js.

Run all tests in spock

I want to run all Spock specs in a given directory sharing the same driver instance. I also want to keep running all the tests even if some of them fail, and generate an HTML report. With Cucumber this was easy to do by just running cucumber in the given directory.
How can I do this with Spock? I am using Geb with Spock for a Grails application's functional testing.
If your build.gradle file contains something like this:
task test(overwrite: true, dependsOn: drivers.collect { tasks["${it}Test"] })
then, to run the tests use the following commands:
gradlew chromeTest
gradlew firefoxTest
To run the tests for all the browsers, you can run the following command:
gradlew test
This should continue running even if one of the tests fails, and when it finishes, Gradle will give you the location of the HTML report.
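The drivers collection and the browser-specific ${it}Test tasks referenced by that task definition are not shown in the answer. The fragment below is a minimal build.gradle sketch of how they are typically wired up in a Geb + Gradle setup; the browser list and the geb.env property name are assumptions for illustration, not part of the original answer:

ext.drivers = ["chrome", "firefox"]

drivers.each { driver ->
    task "${driver}Test"(type: Test) {
        // system property read by GebConfig.groovy to pick the browser (assumed convention)
        systemProperty "geb.env", driver
        // keep executing the remaining specs even if some of them fail,
        // so the whole run completes and the HTML report is still written
        ignoreFailures = true
    }
}

// as in the snippet above: make "gradlew test" depend on every browser-specific task
task test(overwrite: true, dependsOn: drivers.collect { tasks["${it}Test"] })

With ignoreFailures set, a failing spec no longer aborts the task, and Gradle still writes the usual HTML test report for each task.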

Why does clean compile "test-app -unit -integration" run tests twice?

In my Jenkins job, my Grails target does:
clean compile "test-app -unit -integration"
And it outputs the test results twice.
I checked .jenkins/jobs/myjob/target/test-reports
and there are XML files corresponding to the tests, but there is no duplication. So everything looks like it only executed once. Same with the console log - I can only see the tests execute once.
However, when I look at the build results on Jenkins all the tests are duplicated.
I go to:
.jenkins/myjob/builds/buildnumber/junitResult.xml
and I can see the tests duplicated in it.
So it is as if Jenkins duplicates the tests when it creates the junitResult.xml file.
Any ideas why?

How do I run a subset of spock functional tests in grails?

In some other testing frameworks I'm used to tagging tests, e.g. #really_slow, #front_end,
and then running different batches of tests. For example, I might want to set up a build slave to run all the really_slow tests, or run all the tests tagged as front_end but none that are marked as really_slow.
To run my Spock + Geb tests in Grails, at the moment I just run grails test-app functional:
How do I tell it to run a subset?
You could use JUnit suites with @Category. Or you could use a SpockConfig.groovy with the following contents:
runner {
include foo.bar.FrontEnd, foo.bar.BackEnd
exclude foo.bar.Slow
}
Here, foo.bar.FrontEnd, foo.bar.BackEnd, and foo.bar.Slow are your own annotations. To activate the configuration file, you have to set a spock.configuration system property pointing to it.
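The annotations themselves are not provided by Spock; they are plain annotations you define and apply yourself. A minimal sketch of one of them (the spec and feature names below are made up for illustration):

package foo.bar

import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target

import spock.lang.Specification

// an ordinary annotation used purely as a tag; it needs runtime retention
// so the Spock runner can see it when applying the include/exclude rules
@Retention(RetentionPolicy.RUNTIME)
@Target([ElementType.TYPE, ElementType.METHOD])
@interface FrontEnd {}

// tag a whole spec (or an individual feature method) with it
@FrontEnd
class LoginPageSpec extends Specification {
    def "a front-end feature"() {
        expect:
        true
    }
}

With the SpockConfig.groovy above in effect (for example by passing -Dspock.configuration=/path/to/SpockConfig.groovy to the test run), only specs and features annotated with foo.bar.FrontEnd or foo.bar.BackEnd will run, and anything annotated with foo.bar.Slow will be excluded.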
