I'm configuring Karma to work with Jenkins CI as described here.
My junitReporter.outputFile, test-results.xml, is always empty.
Per the docs (linked above): "Please note the test-results.xml files will be written to subdirectories named after the browsers the tests were run in inside the present working directory (and you will need to tell Jenkins where to find them)."
I'm using PhantomJS to run my tests. I do not see any subdirectories named after PhantomJS.
Any ideas?
I ended up adding karma-junit-reporter to the plugins array in my karma.conf.js file and everything started working, like so:
// Which plugins to enable
plugins: [
    "karma-phantomjs-launcher",
    "karma-jasmine",
    "karma-junit-reporter"
],

// Continuous Integration mode
// if true, it captures browsers, runs the tests and exits
singleRun: true,

reporters: ['progress', 'junit'],

// the default configuration
junitReporter: {
    outputDir: 'test', // results will be saved as $outputDir/$browserName.xml
    outputFile: 'test-results.xml', // if included, results will be saved as $outputDir/$browserName/$outputFile
    suite: '', // suite will become the package name attribute in xml testsuite element
    useBrowserName: true // add browser name to report and classes names
},
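For context, here is how those pieces fit into a complete karma.conf.js (a minimal sketch; the frameworks and files entries below are assumptions for illustration):
// karma.conf.js
module.exports = function (config) {
    config.set({
        // assumed test framework and file layout for this sketch
        frameworks: ['jasmine'],
        files: [
            'src/**/*.js',
            'test/**/*.spec.js'
        ],
        browsers: ['PhantomJS'],
        plugins: [
            'karma-phantomjs-launcher',
            'karma-jasmine',
            'karma-junit-reporter'
        ],
        reporters: ['progress', 'junit'],
        singleRun: true,
        junitReporter: {
            outputDir: 'test',
            outputFile: 'test-results.xml',
            suite: '',
            useBrowserName: true
        }
    });
};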
I've scoured the internet and have found bits and pieces, but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything is working except when I try to watch or hot reload.
When I run lando npm run build:dev (which currently uses webpack --watch), I can see my changes compiled successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS changes is when I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server. I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different things with the config options for devServer. I'm just not getting a concise answer, and I don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack (though I'm not sure), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json: build outputs my production-ready files into i_screamz/js/dist, and build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev. The start script is left over from trying to get the devServer to work. I'd like to get webpack-dev-server running, as I'd love to have reloading working.
"scripts": {
"start": "npm run build:dev",
"build:dev": "webpack --watch --progress --config webpack.config.js",
"build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js - no Sass yet; this is just a working modular JS build at this point.
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');
const isDevMode = process.env.NODE_ENV !== 'production';
module.exports = {
mode: isDevMode ? 'development' : 'production',
devtool: isDevMode ? 'source-map' : false,
entry: {
main: ['./src/index.js']
},
output: {
filename: isDevMode ? 'main-dev.js' : 'main.js',
path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
},
module: {
rules: [
{
test: /\.js$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader'
}
}
]
},
plugins: [
new BrowserSyncPlugin({
proxy: {
target: 'http://[MYSITE].lndo.site/',
proxyReq: [
function(proxyReq) {
proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
}
]
},
open: false,
https: false,
files: [
{
match: ['**/*.css', '**/*.js'],
fn: (event, file) => {
if (event == 'change') {
const bs = require("browser-sync").get("bs-webpack-plugin");
if (file.split('.').pop()=='js') {
bs.reload();
} else {
bs.stream();
}
}
}
}
]
}, {
// prevent BrowserSync from reloading the page
// and let Webpack Dev Server take care of this
reload: false,
injectCss: true,
name: 'bs-webpack-plugin'
}),
],
watchOptions: {
aggregateTimeout: 300,
ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules'],
}
};
And here is the config I have set up in my .lando.yml. I did have the proxy key in here, but it's been removed since I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
settings.local.php
<?php
/**
 * Disable CSS and JS aggregation.
 */
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
(note that settings.local.php is only loaded if the include for it near the bottom of settings.php is uncommented).
Disable CSS & JS aggregation:
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit and it was so simple. I don't know why the 'Disabling Caching css/js aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge case predicament.
I have webpack set up within my theme root folder with my Drupal theme files. I run everything with Lando, including NPM. I found a nifty trick from thinkshout to switch between the dist-dev and dist libraries for development / production builds.
I should note my setup does not include hot reloading, but I can at least compile my files and refresh immediately to see my changes. The issue I was having before was that I would have to stop my watches to run lando drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server; I might try again, but I need to move on with my life at this point.
I also have not included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the basic bare-minimum setup working.
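For anyone who wants to pick the webpack-dev-server route back up, a rough starting point might look like the following (untested in this setup; the host, port, and proxy target are assumptions, and writeToDisk requires webpack-dev-server 3.1.10+):
// Hypothetical devServer block to merge into the webpack.config.js above
// (a sketch, not a verified setup)
devServer: {
    host: '0.0.0.0',   // bind to all interfaces so the browser can reach the Docker container
    port: 8080,
    writeToDisk: true, // Drupal serves files from disk, not from dev-server memory
    proxy: {
        '/': {
            target: 'http://[MYSITE].lndo.site/',
            changeOrigin: true
        }
    }
},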
In my company, I'm running a pipeline-as-code project in which my Jenkinsfile gets a dynamic IP from a shell script and injects it into a PrivateIP environment variable. The next step invokes a custom (in-house developed) plugin that accepts a "servers" argument as IP(s), though it apparently does not parse it correctly, because the error output indicates an unresolvable host.
I've echoed the PrivateIP variable immediately above the plugin step, and it definitely outputs the correct value.
The plugin works if given a hard-coded value for the IP, but fails if given anything dynamic. Built-ins such as dir don't give similar problems. I haven't been able to get a hold of the plugin developer to report the issue, nor have I gotten any responses for my issue. Is this typical for custom plugins? I've seen some documentation in the plugin developer docs that suggests only the initial environment stage is respected in pipeline plugins, and that otherwise a @StepContextParameter is needed to get a contextual environment.
stage('Provision') {
    environment {
        PrivateIP = """${sh(
            returnStdout: true,
            script: '${WORKSPACE}/cicd/parse-ip.sh'
        )}"""
    }
    steps {
        echo "Calling Playbook. PrivateIP: ${PrivateIP}"
        customPluginName env: 'AWS',
            os: 'Linux',
            parameter: '',
            password: '',
            playbook: 'provision.yaml',
            servers: PrivateIP,
            gitBranch: '{my branch}',
            gitUrl: '{URL}',
            username: '{custom user}'
    }
}
I'd expect the variable to be respected and the Ansible playbook to execute successfully.
Error
>>> fatal: [ansible_ssh_user={custom user}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible_ssh_user={custom user}: Name or service not known\r\n", "unreachable": true}
If this is in fact default behavior for custom plugins (not necessarily a bug), what are the good workarounds?
After digging into VSCode's documentation and playing with the tasks, I'm not sure I understand how tasks are defined.
I'd like to call my grunt tasks from VSCode. The problem is that I'm using a plugin called "load-grunt-config", which makes loading of grunt plugins automatic (no need to type grunt.loadNpmTasks() for each plugin you use) and makes it easy to have each plugin's configuration defined in its own file.
So my gruntfile is almost empty; tasks are defined in the file grunt/aliases.js, and the configuration for each plugin is separated into its own file (e.g. grunt/connect.js contains the configuration for the connect plugin).
When typing "run tasks", it shows every task available (including builtin ones, like svgmin, imagemin,...) so it's a total of 30 or 40.
No only it's slow but it also shows tasks I haven't defined and do not directly use.
(type grunt --help to see what it shows).
Is there a way to only show the aliases I have defined?
What's the point of defining the task in the tasks.json file?
Is there a way to directly run a task without having to list all tasks? (There are shortcuts for "build" & "test" tasks, so you can type "run test" and it will run the test task, but is there a way to define new tasks that will appear there too?)
VSCODE V0.7.0 TASKS SHORTCUTS
Gulp, Grunt and Jake are autodetected only if the corresponding files are present in the root of the opened folder. With the Gulp tasks below, the sub-tasks (optimize for serve-build, and inject for serve-dev) run additional sub-tasks, and so forth. This structure makes two top-level tasks available for the build and dev workflows.
gulpfile.js
// automate build node server start and restart on changes
gulp.task('serve-build', ['optimize'], function () {
    serve(false /* is Build */);
});

// automate dev node server start and restart on changes
gulp.task('serve-dev', ['inject'], function () {
    serve(true /* is Dev */);
});
The above Gulp tasks will be autodetected by VSCode, as will many others.
However, you can manually define two keyboard shortcut tasks for TEST and BUILD in your tasks.json file. Subsequently, you can run TWO top-level tasks with CTRL-SHFT-T and CTRL-SHFT-B respectively.
tasks.json
{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [
        "--no-color"
    ],
    "tasks": [
        {
            "taskName": "serve-dev",
            "isBuildCommand": false,
            "isTestCommand": true,
            "showOutput": "always",
            "args": []
        },
        {
            "taskName": "serve-build",
            "isBuildCommand": true,
            "isTestCommand": false,
            "showOutput": "always",
            "args": []
        }
    ]
}
Notice the two properties isTestCommand and isBuildCommand. These directly map to CTRL-SHFT-T and CTRL-SHFT-B, so you can map any task to these keyboard shortcuts depending on your needs. If you have more than one task with one of these properties set to true, VSCode will use the first such task in the tasks.json file.
With tasks structured and nested properly, you should be able to drive your workflows with these two keyboard shortcuts and avoid having VSCode enumerate your tasks each time via the CTRL+SHFT+P menus.
suppressTaskName
There's a global property suppressTaskName that can be used in the tasks.json. This property controls whether the task name is added as an argument to the command.
I'm not sure if this can help, but it's good to know about. It looks like a way to pass built-in arguments to the task runner. More here: Task Appendix. I haven't had a chance to test this to determine how it differs from the traditional "args": [] property. I know that prior to v0.7.0 there were problems with args: when it was used, the arguments and task names had to be reversed (see: to accept args, the task name and args had to be reversed).
The showOutput property can also be useful, with or without the problemMatcher property, for displaying output in the VSCode Output window.
{
    "version": "0.1.0",
    "command": "gulp",
    "showOutput": "silent",
    "isShellCommand": true,
    "args": [
        "--no-color"
    ],
    "tasks": [
        {
            "taskName": "gulp-argument",
            "suppressTaskName": true,
            ...
        },
        {
            "taskName": "test",
            "args": [],
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$msCompile"
            ...
        },
        {
        }....
    ]
}
Following are a couple of other Stack Overflow threads on VSCode tasks:
Define multiple tasks in VSCode
Configure VSCode to execute different task
Jenkins already builds my Maven Java project. I want the results of Karma unit tests to show up in Jenkins, but unfortunately I cannot introduce any configuration changes in Jenkins. How should Karma be configured to accomplish that?
In order for Jenkins to be able to parse Karma test results, they must be published in the JUnit XML format; the plugin that does that is karma-junit-reporter.
The JUnit test results (outputFile in the Karma configuration file) must be stored in target/surefire-reports/TESTS-TestSuite.xml.
reporters: ['progress', 'junit', 'coverage'],

port: 9876,
colors: true,
logLevel: config.LOG_INFO,

// don't watch for file changes
autoWatch: false,

// only run on a headless browser
browsers: ['PhantomJS'],

// just run one time
singleRun: true,

// remove `karma-chrome-launcher` because we will be running on the PhantomJS
// browser on Jenkins
plugins: [
    'karma-jasmine',
    'karma-phantomjs-launcher',
    'karma-junit-reporter',
    'karma-coverage',
    'karma-jenkins-reporter'
],

// changes type to `cobertura`
coverageReporter: {
    type: 'cobertura',
    dir: 'target/coverage-reports/'
},

// saves report at `target/surefire-reports/TEST-*.xml` because Jenkins
// looks for this location and file prefix by default.
junitReporter: {
    outputDir: 'target/surefire-reports/'
}
Old post, but top result in Google.
1. Configure karma.conf.js to use 'karma-junit-reporter' and report to outputDir = 'target/karma-reports/'.
2. In your pom.xml, give the test execution step an execution/id like 'karmaTests'.
3. In your pom.xml, set a property jenkins.karmaTest.reportsDirectory = target/karma-reports.
Full examples are on this blog post.
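For step 1, the relevant karma.conf.js fragment might look like this (a sketch; the plugin and reporter lists are assumed from the setups above):
reporters: ['progress', 'junit'],
plugins: [
    'karma-jasmine',
    'karma-phantomjs-launcher',
    'karma-junit-reporter'
],
junitReporter: {
    outputDir: 'target/karma-reports/'
},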
Background
I'm trying to use grunt-protractor-coverage in my grunt script to get code coverage for protractor functional e2e tests. To get started, I utilized this tutorial, with some minor modifications, and it works perfectly. Using this as a guide, I created a new gruntfile, substituting the "express" app with a Rails app backend.
The Problem
When running my gruntfile, I get the following stack trace:
../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/node_modules/glob/glob.js:130
throw new Error("must provide pattern")
^
Error: must provide pattern
at new Glob (../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/node_modules/glob/glob.js:130:11)
at glob (../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/node_modules/glob/glob.js:57:11)
at Function.globSync [as sync] (../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/node_modules/glob/glob.js:76:10)
at Function.ConfigParser.resolveFilePatterns (../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/lib/configParser.js:89:26)
at Runner.run (../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/lib/runner.js:323:24)
at process.<anonymous> (../dummy/node_modules/grunt-protractor-coverage/node_modules/protractor/lib/runFromLauncher.js:32:14)
at process.EventEmitter.emit (events.js:98:17)
at handleMessage (child_process.js:318:10)
at Pipe.channel.onread (child_process.js:345:11)
[launcher] Runner Process Exited With Error Code: 8
Tracing through the code in grunt's task.js file via node-inspector (https://github.com/node-inspector/node-inspector), it seems there are two possible issues:
1. I'm missing some parameter from my config file which would correctly retrieve the files needed.
2. There is a syntax issue.
Any idea why it's throwing this error?
My Config File
protractor_coverage: {
    options: {
        configFile: '/usr/local/lib/node_modules/protractor/referenceConf.js', // Default config file
        keepAlive: true, // If false, the grunt process stops when the test fails.
        noColor: false, // If true, protractor will not use colors in its output.
        coverageDir: '<%= dirs.instrumentedE2E %>',
        args: {}
    },
    phantom: {
        options: {
            args: {
                baseUrl: 'http://localhost:3000/',
                // Arguments passed to the command
                'browser': 'phantomjs'
            }
        }
    },
    chrome: {
        options: {
            args: {
                baseUrl: 'http://localhost:3000/',
                // Arguments passed to the command
                'browser': 'chrome'
            }
        }
    }
},
This error means that the plugin was unable to locate your spec files. There were a couple of bugs related to this that have been fixed in recent releases.
You'll generally want your protractorConf.js to be part of your project. I generally put it in a directory called 'tests'.
So your options would then look like:
options: {
    configFile: 'tests/protractorConf.js',
    keepAlive: true, // If false, the grunt process stops when the test fails.
    noColor: false, // If true, protractor will not use colors in its output.
    coverageDir: '<%= dirs.instrumentedE2E %>',
    args: {}
},
You could then put your specs in a 'tests/specs' directory, and in this protractorConf.js, reference them as:
specs: [
    'tests/specs/**/*.spec.js',
    '!**/exclude.spec.js'
],
This would get any file that ends with .spec.js in /tests/specs and any subdirectories therein, unless the file is named exclude.spec.js.
I had this issue also; for me, the fix was changing the path in my protractorConf file.
I originally had
specs: [
    './e2e/**/*.spec.js'
],
which worked fine with protractor and grunt-protractor-runner.
But for some reason, with protractor-coverage, this did not run. I changed it to
specs: [
    'test/e2e/**/*.spec.js'
],
and this solved my issue. Basically, protractor-coverage resolves the path from the base directory rather than from the config file.
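In other words, with a hypothetical layout like the one below, the same spec files need different prefixes depending on which plugin resolves them:
// Assumed project layout for illustration:
//   Gruntfile.js
//   test/protractorConf.js
//   test/e2e/login.spec.js

// Resolved relative to protractorConf.js - worked with grunt-protractor-runner:
specs: ['./e2e/**/*.spec.js'],

// Resolved from the directory grunt runs in - needed for grunt-protractor-coverage:
specs: ['test/e2e/**/*.spec.js'],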