I'm trying to use --watch with mocha, but when I save a source file or a test file it doesn't re-run the tests. My environment uses docker-compose with the node:16-slim image, and my tests run inside it. The same config works in a bare-metal environment.
The dev Docker image runs the app with:
USER node
CMD ["npm", "run", "dev"]
And this npm script is:
"dev": "npx nodemon --inspect=0.0.0.0:1080 src/index.js",
test npm script:
"test:tdd": "cross-env NODE_ENV=test mocha --config .mocharc.tdd.js",
.mocharc.tdd.js:
module.exports = {
  "reporter": "dot",
  "watch": true,
  "watch-ignore": [],
  "file": 'test/common.js',
  "recursive": true
};
output:
> test-app@1.0.0 test:tdd
> cross-env NODE_ENV=test mocha --config .mocharc.tdd.js
!
0 passing (6ms)
1 failing
1) Events
abc:
MissingParamError: Missing param: Data
at updated (src/app/events.js:8:22)
at Context.<anonymous> (test/app/events.test.js:13:28)
ℹ [mocha] waiting for changes...
Versions:
➜ test-app git:(master) ✗ npx mocha --version
10.0.0
➜ test-app git:(master) ✗ node --version
v16.15.0
➜ test-app git:(master) ✗ npx nodemon --version
2.0.15
What can I do to fix this? Thanks in advance =)
I solved it: I added the watch-files attribute to the config file.
.mocharc.tdd.js:
module.exports = {
  "reporter": "dot",
  "watch": true,
  "watch-files": ['test/**/*.js', 'src/**/*.js'],
  "watch-ignore": ['node_modules'],
  "file": 'test/common.js',
  "recursive": true
};
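For reference, the same behaviour can be reproduced on the command line instead of the config file (a minimal sketch; the flags mirror the config above and the globs may need adjusting to your layout):
NODE_ENV=test npx mocha --reporter dot --recursive --file test/common.js --watch --watch-files 'src/**/*.js,test/**/*.js' --watch-ignore node_modules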
I am running Cypress tests inside a Docker container to generate an HTML test report.
Here is my folder structure:
As you can see in the cypress/reports/mocha folder, there are some JSON test results generated.
All tests are passing & the 3 JSON files there are populated.
Also, notice the empty cypress/reports/mochareports folder. This should contain the combined JSON of all test results, & an HTML test report.
Here is my package.json:
{
  "name": "cypress-docker",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "clean:reports": "mkdir -p cypress/reports && rm -R -f cypress/reports/* && mkdir cypress/reports/mochareports",
    "pretest": "npm run clean:reports",
    "scripts": "cypress run",
    "chrome:scripts": "cypress run --browser chrome",
    "firefox:scripts": "cypress run --browser firefox",
    "combine-reports": "mochawesome-merge cypress/reports/mocha/*.json > cypress/reports/mochareports/report.json",
    "generate-report": "marge cypress/reports/mochareports/report.json -f report -o cypress/reports/mochareports",
    "posttest": "npm run combine-reports && npm run generate-report",
    "test": "npm run scripts || npm run posttest",
    "chrome:test": "npm run pretest && npm run chrome:scripts || npm run posttest",
    "firefox:test": "npm run pretest && npm run firefox:scripts || npm run posttest"
  },
  "keywords": [],
  "author": "QA BOX <qabox@gmail.com>",
  "license": "MIT",
  "dependencies": {
    "cypress": "^6.8.0",
    "cypress-multi-reporters": "^1.4.0",
    "mocha": "^8.2.1",
    "mochawesome": "^6.2.1",
    "mochawesome-merge": "^4.2.0",
    "mochawesome-report-generator": "^5.1.0"
  }
}
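For reference, the reporting pipeline these scripts chain together can also be run step by step with the same commands (a sketch, assuming you are in the project directory inside the container):
npm run clean:reports
npx cypress run --browser chrome
npm run combine-reports && npm run generate-report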
Here is my cypress.json:
{
  "reporter": "cypress-multi-reporters",
  "reporterOptions": {
    "reporterEnabled": "mochawesome",
    "mochawesomeReporterOptions": {
      "reportDir": "cypress/reports/mocha",
      "quiet": true,
      "overwrite": false,
      "html": false,
      "json": true
    }
  }
}
Here are the commands I use to run the tests:
To build the image - docker build -t cyp-dock-mocha-report .
docker-compose run e2e-chrome
Here is my Dockerfile:
FROM cypress/included:6.8.0
RUN mkdir /cypress-docker
WORKDIR /cypress-docker
COPY ./package.json .
COPY ./package-lock.json .
COPY ./cypress.json .
COPY ./cypress ./cypress
RUN npm install
ENTRYPOINT ["npm", "run"]
Here is my docker-compose.yml:
version: "3"
services:
# this container will run Cypress test using built-in Electron browser
e2e-electron:
image: "cyp-dock-mocha-report"
command: "test"
volumes:
- ./cypress/videos:/cypress-docker/cypress/videos
- ./cypress/reports:/cypress-docker/cypress/reports
# this container will run Cypress test using Chrome browser
e2e-chrome:
image: "cyp-dock-mocha-report"
command: "chrome:test"
volumes:
- ./cypress/videos:/cypress-docker/cypress/videos
- ./cypress/reports:/cypress-docker/cypress/reports
# this container will run Cypress test using Firefox browser
# note that both Chrome and Firefox browsers were pre-installed in the Docker image
e2e-firefox:
image: "cyp-dock-mocha-report"
command: "firefox:test"
# if you want to debug FF run, pass DEBUG variable like
environment:
- DEBUG=cypress:server:browsers:firefox-util,cypress:server:util:process_profiler
volumes:
- ./cypress/videos:/cypress-docker/cypress/videos
- ./cypress/reports:/cypress-docker/cypress/reports
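For context on how these services invoke the scripts: the Dockerfile sets ENTRYPOINT ["npm", "run"], so each service's command becomes the npm script to execute. For example (a sketch; it ignores the volume mounts defined above):
docker-compose run e2e-chrome    # effectively runs npm run chrome:test inside the container
docker-compose run e2e-firefox   # effectively runs npm run firefox:test inside the container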
All tests are passing as you can see below:
I don't know why the Mochawesome HTML test report isn't being generated, or the merged JSON.
Can someone please tell me why the merged JSON & the HTML test report aren't being generated in the mochareports folder, & how I can get them to generate?
Thanks for giving me a hint on how to use Docker Compose with this image! I think I see where the issue is: in the package.json file, under scripts, instead of "merge", you wrote "marge":
"generate-report": "marge cypress/reports/mochareports/report.json -f report -o cypress/reports/mochareports"
I'm trying to install and run a Postman/Newman test collection with the HTML reporter (in a Jenkins podTemplate container using the Docker image from Postman's account), but it keeps failing because no suitable Newman version is found:
"npm WARN newman-reporter-htmlextra@1.19.6 requires a peer of newman@>=4 but none is installed. You must install peer dependencies yourself"
The Newman Docker image is "postman/newman:5.2-alpine".
And the run command is:
sh "newman run tests/collection.json -r htmlextra --reporter-htmlextra-export var/reports/newman/html/index.html";
I've also tried installing with (the "sh" prefix is because it's in a Groovy script in Jenkins):
sh "npm install -g newman@4.6.1"
sh "npm install -g newman-reporter-htmlextra"
and then executing the same run command as above.
sh "newman run tests/collection.json -r htmlextra --reporter-htmlextra-export var/reports/newman/html/index.html";
But the results are the same.
What's weird is that right after I get the error mentioned above, the Jenkinsfile executes the "newman run" command and successfully creates the test report file:
Using htmlextra version 1.19.6
Created the htmlextra report in this location: var/reports/newman/html/index.html
But then exits the script/job with FAILURE.
What am I missing?
Any advice?
Thank you!
That's an npm bug: https://github.com/npm/npm/issues/12905
For newman-reporter-htmlextra, newman is a peer dependency.
npm does not detect a peer dependency for global packages if the dependency and the package are not installed together.
In this case you can fix it by installing them together:
npm install -g newman newman-reporter-htmlextra
Try:
podTemplate(label: "newmanPodHtmlExtra", containers: [
  containerTemplate(name: "newman", image: "dannydainton/htmlextra", command: "cat", ttyEnabled: true),
]) {
  node("newmanPodHtmlExtra") {
    def testsFolder = "./tests";
    container("newman") {
      stage("Checkout") {
        checkout scm;
      }
      try {
        stage("Install & run Newman") {
          sh "npm install -g newman newman-reporter-htmlextra";
          sh "newman run ${testsFolder}/collection.json -r htmlextra --reporter-htmlextra-export var/reports/newman/html/index.html";
        }
      } catch (e) {
      } finally {
        stage("Show tests results") {
          publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: 'var/reports/newman/html', reportFiles: 'index.html', reportName: 'API Tests', reportTitles: ''])
        }
      }
    }
  }
}
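As a quick sanity check before the run step, you can confirm that both global packages resolved (a sketch; wrap each line in sh "..." inside the Groovy stage):
npm ls -g --depth=0 newman newman-reporter-htmlextra
newman --version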
I'm trying to use Travis for an open-source build of a PR. The configuration is quite simple, and the logs seem to show that the appropriate modules are installed when yarn install runs; I am also installing the same version of yarn that is used locally. The issue is that when I try to execute the scripts defined in the package.json scripts object, the modules are not found when running in Travis. Below are the config and the errors I receive at build time.
before_install:
  - curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 1.7.0
  - export PATH="$HOME/.yarn/bin:$PATH"
cache:
  yarn: true
  directories:
    - "node_modules"
env:
  - NODE_ENV=production
language: node_js
node_js:
  - 8
  - 9
  - "stable"
install:
  - yarn install
script:
  - yarn run lint
  - yarn test
The above produces the following output in the build logs:
yarn run v1.7.0
$ ./node_modules/.bin/eslint src/**
/bin/sh: 1: ./node_modules/.bin/eslint: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
The command "yarn run lint" exited with 1.
0.54s$ yarn run test
yarn run v1.7.0
$ ./node_modules/.bin/jest
/bin/sh: 1: ./node_modules/.bin/jest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
The command "yarn run test" exited with 1.
The package.json for this project is as follows:
"scripts": {
"test": "./node_modules/.bin/jest",
"lint": "./node_modules/.bin/eslint src/**",
"precommit": "lint-staged",
"format": "prettier --trailing-comma es5 --single-quote --write 'src/*/*.js' '!(node_modules)/**/*.js'"
},
I was having the same issue and was able to solve it by doing the following in the package.json file:
"scripts": {
"test": "test --passWithNoTests",
}
I followed the Travis CI documentation for creating multiple deployments and for notifications.
So this is my config: (the end has deploy and notifications)
sudo: required # is required to use docker service in travis
language: node_js
node_js:
  - 'node'
services:
  - docker
before_install:
  - npm install -g yarn --cache-min 999999999
  - "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
# Use yarn for faster installs
install:
  - yarn
# Init GUI
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - sleep 3 # give xvfb some time to start
script:
  - npm run test:single-run
cache:
  yarn: true
  directories:
    - ./node_modules
before_deploy:
  - npm run build:backwards
  - docker --version
  - pip install --user awscli # install aws cli w/o sudo
  - export PATH=$PATH:$HOME/.local/bin # put aws in the path
deploy:
  - provider: script
    script: scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    on:
      branch: travis
  - provider: script
    script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
    on:
      tags: true
notifications:
  email: false
But in Travis ("view config") this translates to the following, with no deploy and no notifications:
{
  "sudo": "required",
  "language": "node_js",
  "node_js": "node",
  "services": [
    "docker"
  ],
  "before_install": [
    "npm install -g yarn --cache-min 999999999",
    "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
  ],
  "install": [
    "yarn"
  ],
  "before_script": [
    "export DISPLAY=:99.0",
    "sh -e /etc/init.d/xvfb start",
    "sleep 3"
  ],
  "script": [
    "npm run test:single-run"
  ],
  "cache": {
    "yarn": true,
    "directories": [
      "./node_modules"
    ]
  },
  "before_deploy": [
    "npm run build:backwards",
    "docker --version",
    "pip install --user awscli",
    "export PATH=$PATH:$HOME/.local/bin"
  ],
  "group": "stable",
  "dist": "trusty",
  "os": "linux"
}
Try changing
script: scripts/deploy.sh ansyn/client $TRAVIS_TAG
to
script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
This will give a detailed trace of whether the script is being executed or not. I also looked into the build after those changes; it fails on the step below:
Step 4/9 : COPY ./dist /opt/ansyn/app
You need to change your deploy section to:
deploy:
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client-chrome.v.44 $TRAVIS_COMMIT
    skip_cleanup: true
    on:
      branch: travis
  - provider: script
    script: sh -x scripts/deploy.sh ansyn/client $TRAVIS_TAG
    skip_cleanup: true
    on:
      tags: true
So that the dist folder is still there during deploy and is not cleaned up.
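To double-check locally that the artifact the failing COPY step needs is actually present, something like this can be run before invoking the deploy script (a sketch; it assumes dist/ is produced at the repo root as in before_deploy):
npm run build:backwards
ls dist/    # COPY ./dist /opt/ansyn/app fails if this is empty or missing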
I am using Ansible local inside a Packer script to configure a Docker image. I have a role, test, with a main.yml file that's supposed to output some information and create a directory so I can see that the script actually ran. However, main.yml doesn't seem to get run.
Here is my playbook.yml:
---
- name: apply configuration
  hosts: all
  remote_user: root
  roles:
    - test
test/tasks/main.yml:
---
- name: Test output
  shell: echo 'testing output from test'
- name: Make test directory
  file: path=/test state=directory owner=root
When running this via packer build packer.json, I get the following output from the portion related to Ansible:
docker: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/playbook.yml --extra-vars "packer_build_name=docker packer_builder_type=docker packer_http_addr=" -c local -i /tmp/packer-provisioner-ansible-local/59a33ccb-bd9f-3b49-65b0-4cc20783f193/packer-provisioner-ansible-local037775056
docker:
docker: PLAY [apply configuration] *****************************************************
docker:
docker: TASK [setup] *******************************************************************
docker: ok: [127.0.0.1]
docker:
docker: PLAY RECAP *********************************************************************
docker: 127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
I used to run a different, more useful role this way and it worked fine. I hadn't run this for a few months and now it has stopped working. Any ideas what I am doing wrong? Thank you!
EDIT:
Here is my packer.json:
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:latest",
      "commit": true,
      "run_command": [ "-d", "-i", "-t", "--name", "{{user `ansible_host`}}", "{{.Image}}", "/bin/bash" ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get -y update",
        "apt-get -y install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/playbook.yml",
      "playbook_dir": "ansible",
      "role_paths": [
        "ansible/roles/test"
      ]
    }
  ]
}
This seems to be due to a bug in Packer. Everything works as expected with any Packer version other than 1.0.4, so I recommend either downgrading to 1.0.3 or installing the yet-to-be-released 1.1.0. My best guess is that this is caused by a known and fixed issue with how directories get copied by the docker builder when using the Ansible local provisioner.
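If you want to confirm which Packer you are on and pin a known-good release, something like this works on Linux (a sketch; the URL follows HashiCorp's standard release layout):
packer --version    # 1.0.4 is the affected release
curl -LO https://releases.hashicorp.com/packer/1.0.3/packer_1.0.3_linux_amd64.zip
unzip -o packer_1.0.3_linux_amd64.zip -d /usr/local/bin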