nuxt pm2-runtime No pages directory found - docker

I have created a standard nuxt project (npx create-nuxt-app) and want to deploy it on the server with pm2-runtime. I have node v10.16.0 and npm 6.9.0. So I followed the documentation on the nuxt site: https://nuxtjs.org/faq/deployment-pm2
First I run npm run build, then I run pm2-runtime ecosystem.config.js. The error I get is the following:
ℹ Preparing project for development 13:33:36
ℹ Initial build may take a while 13:33:36
ERROR No pages directory found in /Users/Sites/nuxtapp/ecosystem.config.js. Did you mean to run nuxt in the parent (../) directory? 13:33:36
at Builder.validatePages (node_modules/@nuxt/builder/dist/builder.js:5653:13)
My ecosystem.config.js is as following:
module.exports = {
  apps: [
    {
      name: 'nuxtapp',
      exec_mode: 'cluster',
      cwd: './',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
    },
  ],
}
What am I doing wrong here?

Figured it out. The solution was adding rootDir: __dirname in nuxt.config.js
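For reference, that change looks like this in nuxt.config.js (a minimal sketch; the rest of the generated config is omitted):

// nuxt.config.js
module.exports = {
  // Pin Nuxt's root to this folder so it no longer tries to resolve
  // the pages directory relative to ecosystem.config.js.
  rootDir: __dirname,
}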

Related

AWS CDK Pipeline: Assets stage fails to find buildSpec FileAsset.yaml when publishAssetsInParallel is disabled

We are trying to get our multi-stack application deployed using the cdk pipeline library.
We have recently disabled the publishAssetsInParallel flag, as with the default setting our pipeline would create >20 FileAsset objects under the Assets stage, which AWS then rejects as too many CodeBuild projects running in parallel.
However, with this property now disabled, I'm getting the following error for the Assets stage:
[Container] 2022/11/14 12:04:24 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2022/11/14 12:04:24 Phase context status code: YAML_FILE_ERROR Message: stat /codebuild/output/src112668013/src/buildspec-c866864112c35d54804951dbe96b99440c9b891fde-FileAsset.yaml: no such file or directory
I'm assuming this is supposed to be a build spec created by the CDK pipeline, as we didn't need to create a build spec when things were running in parallel.
Here is the current pipeline code:
const pipeline = new CodePipeline(this, 'Pipeline', {
  publishAssetsInParallel: false,
  selfMutation: false,
  pipelineName: fullStackName('Pipeline', app),
  synth: new CodeBuildStep('SynthStep', {
    input: CodePipelineSource.codeCommit(repo, repoBranchName, {codeBuildCloneOutput: true}),
    buildEnvironment: {computeType: ComputeType.MEDIUM},
    installCommands: [
      'npm install -g yarn',
      'yarn install',
      'cd apps/cloud-app',
      'yarn install',
      'yarn global add aws-cdk'
    ],
    commands: [
      'yarn build',
      'cdk synth'
    ],
    primaryOutputDirectory: 'apps/cloud-app/cdk.out'
  })
});
UPDATE:
I reverted the publishAssetsInParallel flag to its default setting to compare, and it seems there is a fundamental difference in the way the FileAsset CodeBuild projects are created based on this flag. With it enabled, when I inspect the build details for one of the generated FileAsset projects, I can see that the buildspec section contains a concrete build spec, e.g.:
{
  "version": "0.2",
  "phases": {
    "install": {
      "commands": [
        "npm install -g cdk-assets@2"
      ]
    },
    "build": {
      "commands": [
        "cdk-assets --path \"MyStack.assets.json\" --verbose publish \"2357296280127ce793d8dbb13e6c907db22f5dcc57a173ba77fcd19a76d8f444:12345678910-eu-west-2\""
      ]
    }
  }
}
With the flag disabled, the buildspec simply contains a pointer to a buildspec file as below, which it then fails to find...
buildspec-c866864112c35d54804951dbe96b99440c9b891fde-FileAsset.yaml
Self-mutation has to be enabled - currently, asset updates mutate the pipeline.
Reference: https://github.com/aws/aws-cdk/issues/9080
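Concretely, that means flipping the flag in the pipeline code above (a minimal sketch; everything else stays the same):

const pipeline = new CodePipeline(this, 'Pipeline', {
  publishAssetsInParallel: false,
  // Asset changes mutate the pipeline, so the pipeline must be allowed
  // to update itself, or the generated FileAsset buildspec files are
  // never created where CodeBuild expects them.
  selfMutation: true,
  // ...rest of the configuration unchanged
});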

Cannot get webpack --watch or dev server to work using Lando to run a local Drupal environment

I've scoured the internet and have bits and pieces but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything is working except when I try to watch or hot reload.
When I run lando npm run build:dev (which currently uses webpack --watch), I can see my changes compiled successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS is after I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server: I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different things with the config options for devServer. I'm just not getting a concise answer, and I don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack (??), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json. build outputs my production-ready files into i_screamz/js/dist, build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev, and start is in here from trying to get the devServer to work. I'd like to get webpack-dev-server running, as I'd love to have reloading working.
"scripts": {
"start": "npm run build:dev",
"build:dev": "webpack --watch --progress --config webpack.config.js",
"build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js - no Sass yet; this is just a working modular JS build at this point.
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');

const isDevMode = process.env.NODE_ENV !== 'production';

module.exports = {
  mode: isDevMode ? 'development' : 'production',
  devtool: isDevMode ? 'source-map' : false,
  entry: {
    main: ['./src/index.js']
  },
  output: {
    filename: isDevMode ? 'main-dev.js' : 'main.js',
    path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
    publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      }
    ]
  },
  plugins: [
    new BrowserSyncPlugin({
      proxy: {
        target: 'http://[MYSITE].lndo.site/',
        proxyReq: [
          function(proxyReq) {
            proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
          }
        ]
      },
      open: false,
      https: false,
      files: [
        {
          match: ['**/*.css', '**/*.js'],
          fn: (event, file) => {
            if (event == 'change') {
              const bs = require("browser-sync").get("bs-webpack-plugin");
              if (file.split('.').pop() == 'js') {
                bs.reload();
              } else {
                bs.stream();
              }
            }
          }
        }
      ]
    }, {
      // prevent BrowserSync from reloading the page
      // and let Webpack Dev Server take care of this
      reload: false,
      injectCss: true,
      name: 'bs-webpack-plugin'
    }),
  ],
  watchOptions: {
    aggregateTimeout: 300,
    ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules'],
  }
};
And here is the config I have set up in my .lando.yml - I did have the proxy key in here, but it's been removed as I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
settings.local.php
<?php

/**
 * Disable CSS and JS aggregation.
 */
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
Disable CSS & JS aggregation.
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit, and it was so simple. I don't know why the 'Disabling Caching css/js aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge-case predicament.
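One prerequisite worth noting (my addition - this is the standard Drupal setup step, not something called out above): settings.local.php is only read if the include at the bottom of /web/sites/default/settings.php is uncommented:

if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}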
I have webpack set up within my theme root folder with my Drupal theme files. I run everything with Lando, including npm. I found a nifty trick from thinkshout to switch between the dist-dev and dist libraries for development / production builds; a sketch of what that swap might look like follows below.
I should note my setup does not include hot reloading, but I can at least compile my files, refresh immediately, and see my changes. The issue I was having before is that I would have to stop my watches to run drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server; I might try again, but I need to move on with my life at this point.
I've also not included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the basic bare-minimum setup working.
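Since the thinkshout post isn't quoted here, this is only my guess at the shape of that dev/prod library switch, using Drupal's hook_library_info_alter; the theme name, library name, and file keys are all hypothetical:

<?php
// mytheme.theme - hypothetical sketch of swapping the production bundle
// for the watched dev build; adjust names to your theme's libraries.yml.
function mytheme_library_info_alter(&$libraries, $extension) {
  if ($extension === 'mytheme' && isset($libraries['global'])) {
    // Replace the minified dist bundle with the non-minified dist-dev build.
    unset($libraries['global']['js']['js/dist/main.js']);
    $libraries['global']['js']['js/dist-dev/main-dev.js'] = [];
  }
}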

How can I inject environment variables during the build process in Github CI/CD?

I am creating a blog site with Gatsby and Contentful for learning purposes. I want to deploy my site to surge using GitHub Actions. I am using the gatsby-source-contentful plugin to get my content from Contentful at build time. The plugin requires spaceId and accessToken to access my Contentful space. During development on my local machine, I am providing these values to the plugin using environment variables saved in a .env file.
However, during the build process in Github actions, I am getting this error:
success open and validate gatsby-configs - 2.325s
error Invalid plugin options for "gatsby-source-contentful":
- "accessToken" is required
- "spaceId" is required
not finished load plugins - 1.002s
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! gatsby-contentful-blogsite@0.1.0 build: `tsc && gatsby build`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the gatsby-contentful-blogsite@0.1.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/runner/.npm/_logs/2021-02-18T17_53_11_281Z-debug.log
Error: Process completed with exit code 1.
Is there a way to tell Github actions about these environment variables (spaceId and accessToken) so that the gatsby-source-contentful plugin can be configured successfully?
Contentful DevRel here. 👋
// In your gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-contentful`,
      options: {
        spaceId: `your_space_id`,
        // Learn about environment variables: https://gatsby.dev/env-vars
        accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
      },
    },
  ],
}
In your Gatsby config you can reference environment variables as shown above. Then, in your GitHub repo, you can define a secret and expose it as an environment variable to the workflow. You can find more information in the GitHub Actions docs. Once you've exposed the environment variable to the action via the secret, it should work fine.
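For example, here is a minimal, hypothetical workflow step - the file name, job layout, and secret names are my own illustration, not from the original answer; only the env mapping from secrets is the essential part:

# .github/workflows/deploy.yml (hypothetical)
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - run: npm run build
        env:
          # Repository secrets, defined under Settings → Secrets.
          CONTENTFUL_SPACE_ID: ${{ secrets.CONTENTFUL_SPACE_ID }}
          CONTENTFUL_ACCESS_TOKEN: ${{ secrets.CONTENTFUL_ACCESS_TOKEN }}

Note that with this approach, spaceId in gatsby-config.js would also need to read from process.env (e.g. process.env.CONTENTFUL_SPACE_ID) rather than being hard-coded.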

How to expose the node binary to an Electron app without installing node

I'm building an Electron app with a plugin system based on yarn (installed as a node module), since I don't want to force users to install node or npm.
Some plugins depend on modules with the postinstall script "node index.js".
This script fails because node doesn't exist.
Since Electron ships with a version of node, I thought of adding the directory containing Electron's node to PATH, but I can't find the node binary in the packaged app.
Did I miss something?
Can you think of any other workaround?
I use electron-packager on Mac.
I don't think it's relevant to the issue, but here is the main part of the install function:
// fork comes from child_process, join from path
var modulePath = join(__dirname, '../', 'node_modules', 'yarn', 'bin', 'yarnpkg')
var args = ['add', `${plugin.name}@${plugin.version}`, '--json']
var child = fork(modulePath, args, { silent: true, cwd: this.pluginsPath })
process info:
{
  "PATH": "/usr/bin:/bin:/usr/sbin:/sbin",
  "versions": {
    "http_parser": "2.7.0",
    "node": "6.5.0",
    "v8": "5.3.332.45",
    "uv": "1.9.1",
    "zlib": "1.2.8",
    "ares": "1.10.1-DEV",
    "modules": "50",
    "openssl": "1.0.2h",
    "electron": "1.4.5",
    "chrome": "53.0.2785.113",
    "atom-shell": "1.4.5"
  }
}
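One possible workaround (a sketch of my own, not from the thread): Electron's binary can run as plain Node when the documented ELECTRON_RUN_AS_NODE environment variable is set, so you can shim a `node` executable onto PATH for the postinstall scripts. The shim directory and the reuse of modulePath/args from the excerpt above are illustrative:

// Hypothetical sketch: expose Electron itself as `node` for child processes.
const fs = require('fs')
const os = require('os')
const path = require('path')
const { fork } = require('child_process')

// Create a temporary directory containing a `node` symlink to the
// Electron binary that is currently running.
const shimDir = fs.mkdtempSync(path.join(os.tmpdir(), 'node-shim-'))
fs.symlinkSync(process.execPath, path.join(shimDir, 'node'))

const env = Object.assign({}, process.env, {
  // Put the shim first so postinstall scripts resolve `node` to it.
  PATH: shimDir + path.delimiter + process.env.PATH,
  // Tell Electron to behave as plain Node when invoked that way.
  ELECTRON_RUN_AS_NODE: '1',
})

// modulePath and args as in the excerpt above.
const child = fork(modulePath, args, { silent: true, cwd: this.pluginsPath, env })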

coveralls github integration (with qunit, istanbul, grunt)

I'm having issues getting coveralls to work. I've created a simple project here.
It seems to be outputting the report correctly, but I'm definitely missing a step somewhere because coveralls doesn't see me as being set up.
No branches show up, and it simply gives instructions on how to set it up.
I've tried to copy what qunit is doing, because they obviously have it working.
Here is what I've done so far.
I created the project, which uses node/grunt/qunit, created the coveralls account, and toggled the project on.
I then replaced the qunit reference in the devDependencies section of package.json with this:
"grunt-coveralls": "0.3.0",
"grunt-qunit-istanbul": "^0.4.0"
I've added this to my package.json.
"scripts": {
"ci": "grunt && grunt coveralls"
}
I've added this config for qunit in my Gruntfile.js.
options: {
  timeout: 30000,
  "--web-security": "no",
  coverage: {
    src: [ "src/<%= pkg.name %>.js" ],
    instrumentedFiles: "temp/",
    coberturaReport: "report/",
    htmlReport: "build/report/coverage",
    lcovReport: "build/report/lcov",
    linesThresholdPct: 70
  }
},
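For the grunt coveralls step to have something to upload, the Gruntfile also needs a coveralls task pointing at the lcov output. A sketch based on the grunt-coveralls README - the exact file name under build/report/lcov is my assumption, since istanbul conventionally writes lcov.info there:

coveralls: {
  all: {
    // Upload the lcov report produced by grunt-qunit-istanbul above.
    src: "build/report/lcov/lcov.info"
  }
},

along with grunt.loadNpmTasks('grunt-coveralls') next to the other task registrations.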
I then added this to my .travis.yml.
language: node_js
node_js:
  - "0.10"
before_install:
  - npm install -g grunt-cli
install:
  - npm install
before_script:
  - grunt
after_script:
  - npm run-script coveralls
I got it working; check the repo for the example: https://github.com/thorst/Code-Coverage-Qunit
While it's not always possible, I found Jasmine to be easier in multiple ways. I have a complete example here: https://github.com/thorst/Code-Coverage-Jasmine
I still haven't gotten Mocha to work, though. That (broken) repo is here: https://github.com/thorst/Code-Coverage-Mocha
