Travis - Executing bower in sh script fails - travis-ci

I'm trying to execute bower commands in an sh script that is run in the after_success phase of a Travis build. I installed bower in the install phase:
install:
- npm install -g bower
[...]
after_success:
- if [ ${TRAVIS_PULL_REQUEST} = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then
    ./my-script.sh;
  fi
Unfortunately, if I call bower in the sh script it produces the following output:
./my-script.sh: line 30: ./node_modules/.bin/bower: No such file or directory
I do not know how to proceed to fix the error. Any help would be greatly appreciated; thanks in advance!

I had to call the script using
bash my-script.sh;
instead of
./my-script.sh;
Now everything is working fine.
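Put together, the after_success condition from the question with the working invocation looks roughly like this (a sketch; both variables quoted for consistency):

# after_success step, with the script run through bash instead of relying on its execute bit
if [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then
  bash my-script.sh;
fi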

Related

Can't seem to get withPythonEnv to work in Jenkins

I am using an agent set up with multiple versions of Python (3.6, 3.7, 3.8, and 3.9) and have installed the withPythonEnv plugin to see if it can switch runtimes during builds. The project is found here: https://github.com/jenkinsci/pyenv-pipeline-plugin.
When I try to run some simple commands like this in Jenkins:
stage('Unit Test') {
    steps {
        withPythonEnv('/usr/bin/python3.8') {
            script {
                sh """
                    pwd
                    env
                    python --version
                    pip install --upgrade pip
                    pip install -r requirements-test.txt
                    python -m pytest foo/tests/ --cov foo --cov-report=xml --junitxml=junit.xml
                """
            }
        }
    }
    post {
        always {
            script {
                junit "junit.xml"
            }
        }
    }
}
I am constantly seeing the build fail, and I never get any additional logging as to why this is occurring. The only message I see is this:
ERROR: Error while creating virtualenv: Error: Command '['/home/jenkins/workspace/foo/.pyenv-usr-bin-python3.8/bin/python3.8', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.
Does anyone know how to get around this using withPythonEnv? Their docs really don't say much more than the examples provided, and I've tried a few of those already.
Could you use withEnv and set PYTHONPATH instead? Something like: withEnv(['PYTHONPATH=/usr/bin/python3.8']) { ... }
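To narrow down why ensurepip fails, it can also help to replay the failing step by hand; this is a sketch, assuming shell access to the agent and that /usr/bin/python3.8 is the interpreter passed to withPythonEnv:

# roughly the step withPythonEnv performs when it creates the virtualenv
/usr/bin/python3.8 -Im ensurepip --upgrade --default-pip
# or create a throwaway venv directly to surface the underlying error
/usr/bin/python3.8 -m venv /tmp/venv-check && /tmp/venv-check/bin/pip --version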

Running liquibase update from script in jenkins pipeline

I'm trying to set up a Jenkins pipeline which will run a Liquibase update whenever something is pushed to the master branch. The Liquibase Runner plugin for Jenkins has a security risk, so I can't install it and run Liquibase updates that way.
My liquibase* file (the bash script) is in my repository at the following path
/repo/liquibase/liquibase/liquibase*
I've set up the pipeline to run the following shell script. NOTE: I have the command set to liquibase --help for test purposes, but normally I'd want to run an update command.
export PATH=$PATH:/var/lib/jenkins/workspace/repo/liquibase
export PATH=$PATH:/var/lib/jenkins/workspace/repo/liquibase/liquibase
export PATH=$PATH:/var/lib/jenkins/workspace/repo/liquibase/liquibase/liquibase
export PATH=$PATH:/var/lib/jenkins/workspace/repo/liquibase/liquibase/jre/bin
cd liquibase
ls -ltr
chmod 755 liquibase/liquibase
chmod 755 liquibase/jre/bin/java.exe
liquibase --help
The liquibase --help command runs fine from the directory path /repo/liquibase in Git Bash. However, when I run it from Jenkins, I get the following error:
/var/lib/jenkins/workspace/Database_and_Repos/liquibase/liquibase/liquibase/jre/bin/java: No such file or directory
Build step 'Execute shell' marked build as failure
My liquibase file looks like this and it is the last line in the file that is causing the error.
#!/usr/bin/env bash

if [ ! -n "${LIQUIBASE_HOME+x}" ]; then
  # echo "LIQUIBASE_HOME is not set."

  ## resolve links - $0 may be a symlink
  PRG="$0"
  while [ -h "$PRG" ] ; do
    ls=`ls -ld "$PRG"`
    link=`expr "$ls" : '.*-> \(.*\)$'`
    if expr "$link" : '/.*' > /dev/null; then
      PRG="$link"
    else
      PRG=`dirname "$PRG"`"/$link"
    fi
  done

  LIQUIBASE_HOME=`dirname "$PRG"`

  # make it fully qualified
  LIQUIBASE_HOME=`cd "$LIQUIBASE_HOME" && pwd`
  # echo "Liquibase Home: $LIQUIBASE_HOME"
fi

# build classpath from all jars in lib
if [ -f /usr/bin/cygpath ]; then
  CP=.
  for i in "$LIQUIBASE_HOME"/liquibase*.jar; do
    i=`cygpath --windows "$i"`
    CP="$CP;$i"
  done
  for i in "$LIQUIBASE_HOME"/lib/*.jar; do
    i=`cygpath --windows "$i"`
    CP="$CP;$i"
  done
else
  if [[ $(uname) = MINGW* ]]; then
    CP_SEPARATOR=";"
  else
    CP_SEPARATOR=":"
  fi
  CP=.
  for i in "$LIQUIBASE_HOME"/liquibase*.jar; do
    CP="$CP""$CP_SEPARATOR""$i"
  done
  CP="$CP""$CP_SEPARATOR""$LIQUIBASE_HOME/lib/"
  for i in "$LIQUIBASE_HOME"/lib/*.jar; do
    CP="$CP""$CP_SEPARATOR""$i"
  done
fi

if [ -z "${JAVA_HOME}" ]; then
  # JAVA_HOME not set, try to find a bundled version
  if [ -d "${LIQUIBASE_HOME}/jre" ]; then
    JAVA_HOME="$LIQUIBASE_HOME/jre"
  elif [ -d "${LIQUIBASE_HOME}/.install4j/jre.bundle/Contents/Home" ]; then
    JAVA_HOME="${LIQUIBASE_HOME}/.install4j/jre.bundle/Contents/Home"
  fi
fi

if [ -z "${JAVA_HOME}" ]; then
  JAVA_PATH="$(which java)"
  if [ -z "${JAVA_PATH}" ]; then
    echo "Cannot find java in your path. Install java or use the JAVA_HOME environment variable"
  fi
else
  # Use path in JAVA_HOME
  JAVA_PATH="${JAVA_HOME}/bin/java"
fi

# add any JVM options here
JAVA_OPTS="${JAVA_OPTS-}"

"${JAVA_PATH}" -cp "$CP" $JAVA_OPTS liquibase.integration.commandline.Main ${1+"$@"}
Has anyone run into this problem with liquibase commands in Jenkins? I've been googling all day, but haven't found much similar to this exact issue. Any help in the right direction would be great.
(We're updating the Liquibase Runner plugin. We have a release that is being reviewed for the security issues by the Jenkins team now.)
The error message seems to say that your "Execute shell" command in your changelog is not working correctly. Maybe the command is not installed, or maybe it's calling a script that is not on your build machine.
One way to explore this is to add an "echo" of the "Execute shell" command beforehand. Also, I'd pass --logLevel=DEBUG to Liquibase to get a better idea of the command it's trying to run.
Thanks for using Liquibase and Jenkins! I'll be talking about it here next month: https://www.cloudbees.com/devops-world/.
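A minimal version of that debugging step inside the Jenkins "Execute shell" block might look like this (a sketch, assuming the PATH exports from the question are still in place):

# show exactly what is about to run, then let Liquibase log verbosely
echo "liquibase --logLevel=DEBUG --help"
liquibase --logLevel=DEBUG --help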
You could use the liquibase-maven-plugin and just call the maven phase in the pipeline:
sh "mvn resources:resources liquibase:update"
In my opinion, that is the best approach. Follow the official documentation for the plugin.
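Separately, the wrapper script quoted in the question only falls back to its bundled jre/ when JAVA_HOME is unset, and the failing path ends in jre/bin/java while only java.exe was chmod'd. A hedged workaround is to point JAVA_HOME at a JRE that actually exists on the Jenkins agent; the path below is purely illustrative:

# illustrative JDK location; use whatever Java installation the agent really has
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
liquibase --logLevel=DEBUG --help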

lerna run --parallel not working for rollup watch

Background:
I have a lerna monorepo using yarn workspaces, with two packages. I am using rollup as the bundler.
packages/module1/package.json:
{
  "scripts": {
    "watch": "rollup -c rollup.config.js --watch",
    "build": "NODE_ENV=production && rollup -c rollup.config.js"
  }
}
packages/module2/package.json:
{
  "scripts": {
    "watch": "rollup -c rollup.config.js --watch",
    "build": "NODE_ENV=production && rollup -c rollup.config.js"
  }
}
Expected Behavior:
lerna run build will run the build scripts for each package.
lerna run watch will run the watch scripts for each package in watch mode.
Current Behavior:
lerna run build works as expected. The build script runs properly for both packages.
lerna run watch just hangs there:
lerna notice cli v3.13.1
lerna info Executing command in 2 packages: "yarn run watch"
[[just hangs here]]
I have tried lerna run --parallel watch, and this only runs once. It exits after rollup completes. In other words, it never seems to be watching.
I believe the command you are looking for is lerna exec. This will run whatever command is passed to it over every package in your Monorepo.
lerna exec --parallel -- yarn build
If each package has the same build step, you could abstract it to the top level package.json like so:
lerna exec --parallel -- rollup -c=rollup.config.js
Which will go into each package and run that rollup command.
Sources:
Adding Rollup to a Monorepo
Creating a Monorepo with Lerna & Yarn Workspaces
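Following that approach, the watch scripts from the question could in principle be run the same way; this is a sketch, not verified against this repo:

lerna exec --parallel -- rollup -c rollup.config.js --watch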
It needs some tweaks to enable rollup to watch in parallel in a lerna monorepo.
lerna run --parallel watch
The command above will only run the watch for one package and block the rest. Here is what is going on inside rollup: the following snippet is the Watcher class constructor from the rollup GitHub code base. As you can see, the watcher actually accepts an array of configs, so you only need to write some wrapper code that merges all of your configs into one and then run the watch from that single config for all packages.
constructor(configs: GenericConfigObject[] | GenericConfigObject) {
  this.emitter = new (class extends EventEmitter implements RollupWatcher {
    close: () => void;
    constructor(close: () => void) {
      super();
      this.close = close;
      // Allows more than 10 bundles to be watched without
      // showing the `MaxListenersExceededWarning` to the user.
      this.setMaxListeners(Infinity);
    }
  })(this.close.bind(this));
  this.tasks = (Array.isArray(configs) ? configs : configs ? [configs] : []).map(
    config => new Task(this, config)
  );
  this.running = true;
  process.nextTick(() => this.run());
}

nyc coveralls integration not working

I'm trying to make nyc work with coveralls by following the instructions:
https://github.com/istanbuljs/nyc#integrating-with-coveralls
But I can't get it to work. Here is an example repo:
https://github.com/unional/showdown-highlightjs-extension
Travis build is successful: https://travis-ci.org/unional/showdown-highlightjs-extension
And Coveralls notices the build, but does not seem to get any data:
https://coveralls.io/github/unional/showdown-highlightjs-extension
Here is my .travis.yml:
language: node_js
notifications:
  email:
    on_success: never
    on_failure: change
node_js:
  - "stable"
before_install:
  - npm install -g npm
script:
  - npm run verify
after_script:
  - npm install coveralls && npm run coveralls
And here is my package.json:
{
  ...
  "scripts": {
    "coverage": "npm test && nyc check-coverage --branches 85 --functions 85 --lines 85",
    "coveralls": "nyc report --reporter=text-lcov | coveralls",
    "test": "npm run clean && tsc && nyc ava"
    ...
  },
  "nyc": {
    "exclude": [
      "scripts",
      "**/*.spec.*",
      "**/fixtures/**/*"
    ]
  },
  ...
}
Try adding your Coveralls repo API token (which can be found on the Coveralls page for your repo) to a new COVERALLS_REPO_TOKEN encrypted environment variable on Travis, as per the (somewhat sketchy) documentation on the Coveralls site.
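If you would rather keep the token in .travis.yml than in the Travis repo settings, the Travis CLI can encrypt it for you; a sketch, where <your-token> is the value from your Coveralls repo page:

# writes an encrypted COVERALLS_REPO_TOKEN entry into .travis.yml
travis encrypt COVERALLS_REPO_TOKEN=<your-token> --add env.global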
I found out the issue is in my tsconfig.json:
{
  "compilerOptions": {
    "sourceRoot": "/showdown-highlight-extension"
    ...
  }
}
This setting gives me the correct (I assume) source map in the browser. See "What's the proper way to set sourceRoot in typescript?"
But it is not liked by the coverage tool.
Once I remove it, it starts working.
I still need to find an answer to that question.
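One quick local check, given the scripts above, is to look at the lcov output before it ever reaches coveralls and confirm the SF: source paths point at real files; a sketch:

npm test
nyc report --reporter=text-lcov | head -n 20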

coveralls github integration (with qunit, istanbul, grunt)

I'm having issues getting coveralls to work. I've created a simple project here.
It seems to be outputting the report correctly, but I'm definitely missing a step somewhere because coveralls doesn't see me as being set up.
No branches show up, and it simply gives instructions on how to set it up.
I've tried to copy what qunit is doing, because they obviously have it working.
Here is what I've done so far.
Created the project that uses node/grunt/qunit as well as the coveralls account and toggled on the project.
I've then replaced the qunit reference in the devDependencies section in package.json with this.
"grunt-coveralls": "0.3.0",
"grunt-qunit-istanbul": "^0.4.0"
I've added this to my package.json.
"scripts": {
"ci": "grunt && grunt coveralls"
}
I've added this config for qunit in my Gruntfile.js.
options: {
  timeout: 30000,
  "--web-security": "no",
  coverage: {
    src: [ "src/<%= pkg.name %>.js" ],
    instrumentedFiles: "temp/",
    coberturaReport: "report/",
    htmlReport: "build/report/coverage",
    lcovReport: "build/report/lcov",
    linesThresholdPct: 70
  }
},
I then added this to my .travis.yml.
language: node_js
node_js:
  - "0.10"
before_install:
  - npm install -g grunt-cli
install:
  - npm install
before_script:
  - grunt
after_script:
  - npm run-script coveralls
I got it working; check the repo for the example: https://github.com/thorst/Code-Coverage-Qunit
While it's not always possible, I found jasmine to be easier in multiple ways. I have a complete example here: https://github.com/thorst/Code-Coverage-Jasmine
I still haven't gotten mocha to work, though. That (broken) repo is here: https://github.com/thorst/Code-Coverage-Mocha
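One detail worth noting in the snippets above: .travis.yml runs npm run-script coveralls, while the package.json scripts block only defines ci. A hedged fix is to make the two agree, for example:

# either call the script that package.json actually defines...
npm run-script ci
# ...or add a matching entry to package.json, e.g. "coveralls": "grunt coveralls"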
