grunt deploy not deploying to target - Jenkins

I'm trying to deploy the Grunt output folder (dist) to server space using grunt-deploy in Jenkins. It returns a success message after grunt deploy, but it does not actually deploy to the given target. Also, the only authentication options are a server username and password, which does not seem like a secure method; if so, please point me to a correct method. There is also no option for a source path. This is my deploy config:
deploy: {
  liveservers: {
    options: {
      servers: [{
        host: 'host',
        port: 'port',
        username: 'user',
        password: 'pass'
      }],
      cmds_before_deploy: [],
      cmds_after_deploy: [],
      deploy_path: '/home/testdeploy'
    }
  }
}
please help me :(

Use the mkdir command to create a releases subfolder:
cd /home/testdeploy
mkdir releases
then retry. The existence of releases is a hardcoded assumption in the grunt-deploy source.
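If creating the folder by hand on each server is a pain, the cmds_before_deploy option from the config above might be able to do it at deploy time — a hedged sketch, assuming (as the option name suggests) these commands run on the remote host before the deploy:
cmds_before_deploy: ['mkdir -p /home/testdeploy/releases'],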
References
grunt-deploy: deploy.js source
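As for the username/password concern in the question: one common pattern is to keep credentials out of the committed Gruntfile and read them from environment variables that Jenkins injects at build time (for example via the Credentials Binding plugin). A sketch with hypothetical variable names:
// Gruntfile.js - same deploy target, but the credentials come from the
// environment (e.g. bound by Jenkins' Credentials Binding plugin) instead
// of being committed to the repo. DEPLOY_* names are examples.
deploy: {
  liveservers: {
    options: {
      servers: [{
        host: process.env.DEPLOY_HOST,
        port: process.env.DEPLOY_PORT,
        username: process.env.DEPLOY_USER,
        password: process.env.DEPLOY_PASS
      }],
      cmds_before_deploy: [],
      cmds_after_deploy: [],
      deploy_path: '/home/testdeploy'
    }
  }
}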

Related

Cannot get webpack --watch or dev server to work using Lando to run a local Drupal environment

I've scoured the internet and have bits and pieces but nothing is coming together for me. I have a local Drupal environment running with Lando. I've successfully installed and configured webpack. Everything is working except when I try to watch or hot reload.
When I run lando npm run build:dev (which currently uses webpack --watch), I can see my changes compiled successfully into the correct folder. However, when I refresh my Drupal site, I do not see those changes. The only time I see my updated JS changes is when I run lando drush cr to clear the cache. The same thing happens when I try to configure webpack-dev-server: I can get everything to watch for changes and compile correctly, but I cannot get my browser to reload my files; they stay cached. I'm at a loss.
I've tried configuring a proxy in my .lando.yml, and have tried different things with the config options for devServer. I'm just not finding a concise answer, and I don't have the knowledge to understand exactly what is happening. I believe it has to do with the Docker containers not being exposed to webpack (??), but I don't understand how to configure this properly.
These are the scripts I have set up in my package.json: build outputs my production-ready files into i_screamz/js/dist, build:dev starts a watch and compiles non-minified versions to i_screamz/js/dist-dev, and start is left over from trying to get the devServer to work. I'd like to get webpack-dev-server running as I'd love to have reloading working.
"scripts": {
"start": "npm run build:dev",
"build:dev": "webpack --watch --progress --config webpack.config.js",
"build": "NODE_ENV=production webpack --progress --config webpack.config.js"
},
This is my webpack.config.js - no sass yet, this is just a working modular js build at this point.
const path = require("path");
const BrowserSyncPlugin = require('browser-sync-webpack-plugin');

const isDevMode = process.env.NODE_ENV !== 'production';

module.exports = {
  mode: isDevMode ? 'development' : 'production',
  devtool: isDevMode ? 'source-map' : false,
  entry: {
    main: ['./src/index.js']
  },
  output: {
    filename: isDevMode ? 'main-dev.js' : 'main.js',
    path: isDevMode ? path.resolve(__dirname, 'js/dist-dev') : path.resolve(__dirname, 'js/dist'),
    publicPath: '/web/themes/custom/[MYSITE]/js/dist-dev'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader'
        }
      }
    ]
  },
  plugins: [
    new BrowserSyncPlugin({
      proxy: {
        target: 'http://[MYSITE].lndo.site/',
        proxyReq: [
          function (proxyReq) {
            proxyReq.setHeader('Cache-Control', 'no-cache, no-store');
          }
        ]
      },
      open: false,
      https: false,
      files: [
        {
          match: ['**/*.css', '**/*.js'],
          fn: (event, file) => {
            if (event === 'change') {
              const bs = require("browser-sync").get("bs-webpack-plugin");
              if (file.split('.').pop() === 'js') {
                bs.reload();
              } else {
                bs.stream();
              }
            }
          }
        }
      ]
    }, {
      // prevent BrowserSync from reloading the page
      // and let Webpack Dev Server take care of this
      reload: false,
      injectCss: true,
      name: 'bs-webpack-plugin'
    }),
  ],
  watchOptions: {
    aggregateTimeout: 300,
    ignored: ['**/*.woff', '**/*.json', '**/*.woff2', '**/*.jpg', '**/*.png', '**/*.svg', 'node_modules']
  }
};
And here is the config I have set up in my .lando.yml - I did have the proxy key in here but it's been removed as I couldn't get it set up right.
name: [MYSITE]
recipe: pantheon
config:
  framework: drupal8
  site: [MYPANTHEONSITE]
services:
  node:
    type: node
    build:
      - npm install
tooling:
  drush:
    service: appserver
    env:
      DRUSH_OPTIONS_URI: "http://[MYSITE].lndo.site"
  npm:
    service: node
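For what it's worth, the removed proxy key would look roughly like the following if BrowserSync were listening on port 3000 inside the node service - a guess at the shape based on Lando's proxy config format, not a verified setup:
proxy:
  node:
    # hypothetical hostname; Lando routes it to port 3000 of the node service
    - bs.[MYSITE].lndo.site:3000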
settings.local.php
<?php
/**
* Disable CSS and JS aggregation.
*/
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I've updated my code files above to reflect a final working setup with webpack. The main answer was a setting in
/web/sites/default/settings.local.php
Disable CSS & JS aggregation:
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
I found a working setup from saschaeggi and just tinkered around until I found this setting. So thank you! I also found more about what this means here. This issue took me way longer than I want to admit and it was so simple. I don't know why the 'Disabling Caching css/js aggregation' page never came up when I was furiously googling a caching issue. Hopefully this answer helps anyone else in this very edge case predicament.
I have webpack setup within my theme root folder with my Drupal theme files. I run everything with Lando, including NPM. I found a nifty trick to switch the dist-dev and dist libraries for development / production builds from thinkshout.
I should note my setup does not include hot reloading, but I can at least compile my files, refresh immediately, and see my changes. The issue I was having before was that I would have to stop my watches to drush cr, and that workflow was ridiculous. I've never gotten hot reloading to work with either BrowserSync or Webpack Dev Server; I might try again, but I need to move on with my life at this point.
I've also not included Sass yet, so these file paths will change to include compilation and output for both .scss and .js files, but this is the basic bare-minimum setup working.
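One gotcha worth noting: settings.local.php only takes effect if settings.php includes it. In a stock Drupal 8+ codebase that is the standard (usually commented-out) block at the bottom of web/sites/default/settings.php:
// In web/sites/default/settings.php - uncomment (or add) the standard
// local-settings include so the aggregation overrides above are picked up:
if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}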

nuxt pm2-runtime No pages directory found

I have created a standard nuxt project (npx create-nuxt-app) and want to deploy it on the server with pm2-runtime. I have node v10.16.0 and npm 6.9.0. So I followed the documentation on the nuxt site: https://nuxtjs.org/faq/deployment-pm2
First I run npm run build, then I run pm2-runtime ecosystem.config.js. The error I receive is the following:
ℹ Preparing project for development 13:33:36
ℹ Initial build may take a while 13:33:36
ERROR No pages directory found in /Users/Sites/nuxtapp/ecosystem.config.js. Did you mean to run nuxt in the parent (../) directory? 13:33:36
at Builder.validatePages (node_modules/@nuxt/builder/dist/builder.js:5653:13)
My ecosystem.config.js is as following:
module.exports = {
  apps: [
    {
      name: 'nuxtapp',
      exec_mode: 'cluster',
      cwd: './',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
    },
  ],
}
What am I doing wrong here?
Figured it out. The solution was adding rootDir: __dirname to nuxt.config.js.
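A minimal sketch of that change, with everything else in the generated config left as-is:
// nuxt.config.js
module.exports = {
  // Point Nuxt at the project directory itself, so it stops resolving
  // pages/ relative to the file pm2-runtime was handed.
  rootDir: __dirname,
  // ...the rest of the generated options are unchanged
}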

Problem when generating JUnit report on Jenkins Pipeline

I'm creating a post block in a Jenkins pipeline to publish test results using JUnit, HTML and Cobertura. The code looks like this:
post {
  always {
    publishTestResults(
      script: this,
      junit: [
        active: true,
        allowEmptyResults: true,
        archive: true,
        pattern: '**/reports/mocha.xml',
        updateResults: true
      ],
      cobertura: [
        active: true,
        allowEmptyResults: true,
        archive: true,
        pattern: '**/coverage/cobertura/cobertura-coverage.xml'
      ],
      html: [
        active: true,
        allowEmptyResults: true,
        archive: true,
        name: 'NYC/Mocha',
        path: '**/coverage/html'
      ],
      lcov: [
        active: true,
        allowEmptyResults: true,
        archive: true,
        name: 'LCOV Coverage',
        path: '**/coverage/lcov/lcov-report'
      ]
    )
    cobertura coberturaReportFile: 'coverage/cobertura/cobertura-coverage.xml'
    junit 'reports/mocha.xml'
    cleanWs()
    // deleteDir()
    script {
      FAILED_STAGE = env.STAGE_NAME
    }
  }
}
}
The problem is that when I execute the job on Jenkins, I receive an error message:
find . -wholename **/reports/mocha.xml -exec touch {} ;
touch: cannot touch './reports/mocha.xml': Permission denied
I suppose the issue is raised by the junit step. How can I solve this problem?
P.S.: The Jenkins server runs on Ubuntu. I tried to modify /etc/sudoers and add the line below to make Jenkins execute commands as root; it still did not solve my problem.
jenkins ALL=(ALL) NOPASSWD: ALL
From checking the code at https://github.com/SAP/jenkins-library/blob/5c13a0e2a20132336824c70b743c364bcb5341f4/vars/testsPublishResults.groovy#L136,
it looks like you can avoid the issue by setting updateResults to false.
If you absolutely have to update the timestamp on the result file, you'll have to open a terminal session, go to the project workspace (as the jenkins user), try to run touch ./reports/mocha.xml, and debug it from there.
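For reference, that would be the junit block from the question with only that one flag flipped:
junit: [
  active: true,
  allowEmptyResults: true,
  archive: true,
  pattern: '**/reports/mocha.xml',
  updateResults: false // skip touching the result files, so no write permission is needed
],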

Why Isn't a Dynamic Variable Parsed Correctly When Injected Into a Custom Pipeline as Code Plugin?

In my company, I'm running a pipeline-as-code project in which my Jenkinsfile gets a dynamic IP from a shell script and injects it into a PrivateIP environment variable. The next step invokes a custom (in-house developed) plugin that accepts a "servers" argument as IP(s), but it apparently does not parse it correctly, because the error output indicates an unresolvable host.
I've echoed the PrivateIP variable immediately above the plugin step, and it definitely outputs the correct value.
The plugin works if given a hard-coded value for the IP, but fails if given anything dynamic. Built-ins such as dir don't have similar problems. I haven't been able to get hold of the plugin developer to report the issue, nor have I gotten any responses to my issue. Is this typical for custom plugins? I've seen some documentation in the plugin developer docs that suggests only the initial environment stage is respected in pipeline plugins, and that otherwise a @StepContextParameter is needed to get a contextual environment.
stage('Provision') {
  environment {
    PrivateIP = """${sh(
      returnStdout: true,
      script: '${WORKSPACE}/cicd/parse-ip.sh'
    )}"""
  }
  steps {
    echo "Calling Playbook. PrivateIP: ${PrivateIP}"
    customPluginName env: 'AWS',
      os: 'Linux',
      parameter: '',
      password: '',
      playbook: 'provision.yaml',
      servers: PrivateIP,
      gitBranch: '{my branch}',
      gitUrl: '{URL}',
      username: '{custom user}'
  }
}
I'd expect the variable to be respected and execute an Ansible Playbook successfully.
Error
>>> fatal: [ansible_ssh_user={custom user}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible_ssh_user={custom user}: Name or service not known\r\n", "unreachable": true}
If this is in fact default behavior for custom plugins (not necessarily a bug), what are good workarounds?
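Not an answer to the plugin question itself, but one thing worth ruling out first: sh(returnStdout: true) keeps the trailing newline, and a stray newline inside servers would produce exactly this kind of garbled, unresolvable host. A sketch that computes the value in a script block and trims it (customPluginName and its parameters are copied from the question; whether the plugin accepts this is untested):
steps {
  script {
    // trim() strips the trailing newline that sh(returnStdout: true) returns
    def privateIp = sh(returnStdout: true, script: "${WORKSPACE}/cicd/parse-ip.sh").trim()
    echo "Calling Playbook. PrivateIP: ${privateIp}"
    customPluginName env: 'AWS',
      os: 'Linux',
      parameter: '',
      password: '',
      playbook: 'provision.yaml',
      servers: privateIp,
      gitBranch: '{my branch}',
      gitUrl: '{URL}',
      username: '{custom user}'
  }
}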

Grails deploy: no such warName for class: Tomcat

When I try to deploy to a remote Tomcat server using grails prod deploy tomcat I get the error from the title: no such warName for class: Tomcat.
Has anybody encountered that?
P.S. By contrast, the command mvn tomcat7:deploy works.
The problem was resolved by adding def warName = configureWarName() to Tomcat.groovy:
...
switch (cmd) {
    case 'deploy':
        war()
        def warName = configureWarName()
        println "Deploying application $serverContextPath to Tomcat"
        deploy(war: warName, url: url, path: serverContextPath, username: user, password: pass)
        break
...
