Can't seem to get withPythonEnv to work in Jenkins - jenkins

I am using an agent set up with multiple versions of Python (3.6, 3.7, 3.8, and 3.9) and have installed the withPythonEnv plugin to see if it can switch runtimes during builds. The project is found here: https://github.com/jenkinsci/pyenv-pipeline-plugin.
When I try to run some simple commands like this in Jenkins:
stage('Unit Test') {
    steps {
        withPythonEnv('/usr/bin/python3.8') {
            script {
                sh """
                    pwd
                    env
                    python --version
                    pip install --upgrade pip
                    pip install -r requirements-test.txt
                    python -m pytest foo/tests/ --cov foo --cov-report=xml --junitxml=junit.xml
                """
            }
        }
    }
    post {
        always {
            script {
                junit "junit.xml"
            }
        }
    }
}
The build constantly fails, and I never get any additional logging as to why this is occurring. The only message I see is this:
ERROR: Error while creating virtualenv: Error: Command '['/home/jenkins/workspace/foo/.pyenv-usr-bin-python3.8/bin/python3.8', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.
Does anyone know how to get around this using withPythonEnv? The docs really don't say much more than the examples provided, and I've tried a few of those already.

Could you use withEnv and then set PYTHONPATH? Something like: withEnv(['PYTHONPATH=/usr/bin/python3.8']) (note that withEnv takes a list of KEY=VALUE strings).
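A minimal sketch of that suggestion in context. Keep in mind that PYTHONPATH controls Python's module search path rather than which interpreter runs, so prepending the interpreter's directory to PATH with Jenkins' PATH+XYZ syntax may be closer to the intent:
stage('Unit Test') {
    steps {
        // PATH+PYTHON prepends /usr/bin to PATH; the PYTHONPATH entry
        // mirrors the suggestion above
        withEnv(['PATH+PYTHON=/usr/bin', 'PYTHONPATH=/usr/bin/python3.8']) {
            sh 'python3.8 --version'
        }
    }
}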

Related

How to not mark Jenkins job as FAILURE when pytest tests fail

I have a Jenkins setup with a pipeline that uses pytest to run some test suites. Sometimes a test fails and sometimes the test environment crashes (random HTTP timeout, external lib error, etc.). The job parses the XML test result but the build is marked as FAILURE as long as pytest returns non-zero.
I want Jenkins to get exit code zero from pytest even if there are failed tests, but I also want other errors to be marked as failures. Are there any pytest options that can fix this? I found pytest-custom_exit_code, but it can only suppress the empty-test-suite error. Maybe some Jenkins option or bash snippet?
A simplified version of my Groovy pipeline:
pipeline {
    stages {
        stage('Building application') {
            steps {
                sh "./build.sh"
            }
        }
        stage('Testing application') {
            steps {
                print('Running pytest')
                sh "cd tests && python -m pytest"
            }
            post {
                always {
                    archiveArtifacts artifacts: 'tests/output/'
                    junit 'tests/output/report.xml'
                }
            }
        }
    }
}
I have tried to catch exit code 1 (meaning some tests failed), but Jenkins still receives exit code 1 and marks the build as FAILURE:
sh "cd tests && (python -m pytest; rc=\$?; if [ \$rc -eq 1 ]; then exit 0; else exit \$rc; fi)"
Your attempt does not work because Jenkins runs the shell with the errexit (-e) option enabled, which causes the shell to exit right after the pytest command, before it ever reaches your if statement. There is a one-liner, however, that will work because it is executed as one statement: https://stackoverflow.com/a/31114992/1070890
So your build step would look like this:
sh 'cd tests && python -m pytest || [[ $? -eq 1 ]]'
The || [[ $? -eq 1 ]] part turns pytest's exit code 1 (tests failed) into success, while any other non-zero exit code still fails the step.
My solution was to implement the support myself in pytest-custom-exit-code and create a pull request.
Since version 0.3.0 of the plugin I can use pytest --suppress-tests-failed-exit-code to get the desired behavior.
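With that flag, the test step in the pipeline above becomes something like this (a sketch, assuming pytest-custom-exit-code >= 0.3.0 is installed in the test environment):
sh "cd tests && python -m pytest --suppress-tests-failed-exit-code"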

Travis - Executing bower in sh script fails

I am trying to execute bower commands in a sh script that is run in the after_success phase of a Travis build. I installed bower in the install phase:
install:
  - npm install -g bower
[...]
after_success:
  - if [ ${TRAVIS_PULL_REQUEST} = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then
      ./my-script.sh;
    fi
Unfortunately, if I call bower in the sh script it produces the following output:
./my-script.sh: line 30: ./node_modules/.bin/bower: No such file or directory
I do not know how to proceed to fix this error. Any help would be greatly appreciated, thanks in advance!
I had to call the script using
bash my-script.sh;
instead of
./my-script.sh;
Now everything is working fine.

lerna run --parallel not working for rollup watch

Background:
I have a lerna monorepo using yarn workspaces, with two packages. I am using rollup as the bundler.
packages/module1/package.json:
{
  "scripts": {
    "watch": "rollup -c rollup.config.js --watch",
    "build": "NODE_ENV=production rollup -c rollup.config.js"
  }
}
packages/module2/package.json:
{
  "scripts": {
    "watch": "rollup -c rollup.config.js --watch",
    "build": "NODE_ENV=production rollup -c rollup.config.js"
  }
}
Expected Behavior:
lerna run build will run the build scripts for each package.
lerna run watch will run the watch scripts for each package in watch mode.
Current Behavior:
lerna run build works as expected. The build script runs properly for both packages.
lerna run watch just hangs there:
lerna notice cli v3.13.1
lerna info Executing command in 2 packages: "yarn run watch"
[[just hangs here]]
I have tried lerna run --parallel watch, and this only runs once. It exits after rollup completes. In other words, it never seems to be watching.
I believe the command you are looking for is lerna exec. This will run whatever command is passed to it in every package in your monorepo.
lerna exec --parallel -- yarn build
If each package has the same build step, you could abstract it to the top-level package.json like so:
lerna exec --parallel -- rollup -c=rollup.config.js
This will go into each package and run that rollup command.
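A minimal sketch of that top-level package.json (the "build" script name here is an assumption):
{
  "scripts": {
    "build": "lerna exec --parallel -- rollup -c rollup.config.js"
  }
}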
Sources:
Adding Rollup to a Monorepo
Creating a Monorepo with Lerna & Yarn Workspaces
Getting rollup to watch in parallel across the lerna monorepo needs some tweaks:
lerna run --parallel watch
The command above will only run for one package and block the rest. Here is the relevant piece of rollup's inner workings: the following snippet is the Watcher class constructor from the rollup GitHub code base. As you can see, the watcher actually accepts an array of configs, so you only need to write some wrapper code that merges all your configs into one and then run the watch from that single config for all packages (see the sketch after the constructor below).
constructor(configs: GenericConfigObject[] | GenericConfigObject) {
    this.emitter = new (class extends EventEmitter implements RollupWatcher {
        close: () => void;
        constructor(close: () => void) {
            super();
            this.close = close;
            // Allows more than 10 bundles to be watched without
            // showing the `MaxListenersExceededWarning` to the user.
            this.setMaxListeners(Infinity);
        }
    })(this.close.bind(this));
    this.tasks = (Array.isArray(configs) ? configs : configs ? [configs] : []).map(
        config => new Task(this, config)
    );
    this.running = true;
    process.nextTick(() => this.run());
}
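A minimal sketch of such a wrapper, as a rollup.config.js at the monorepo root; the package names and config paths are assumptions based on the layout above:
// rollup.config.js at the repo root
import module1Config from './packages/module1/rollup.config.js';
import module2Config from './packages/module2/rollup.config.js';

// rollup accepts an array of configs, so a single
// `rollup -c rollup.config.js --watch` at the root watches every package
export default [module1Config, module2Config];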

Jenkins 2.0: Running SBT in a docker container

I have the following Jenkinsfile:
def notifySlack = { String color, String message ->
    slackSend(color: color, message: "${message}: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}] (${env.BUILD_URL})")
}

node {
    try {
        notifySlack('#FFFF00', 'STARTED')
        stage('Checkout project') {
            checkout scm
        }
        scalaImage = docker.image('<myNexus>/centos-sbt:2.11.8')
        stage('Test project') {
            docker.withRegistry('<myNexus>', 'jenkins-nexus') {
                scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
                    sh 'sbt clean test'
                }
            }
        }
        if (env.BRANCH_NAME == 'master') {
            stage('Release new version') {
                docker.withRegistry('<myNexus>', 'jenkins-nexus') {
                    scalaImage.inside('-v /var/lib/jenkins/.ivy2:/root/.ivy2') { c ->
                        sh 'sbt release'
                    }
                }
            }
        }
        notifySlack('#00FF00', 'SUCCESSFUL')
    } catch (e) {
        currentBuild.result = "FAILED"
        notifySlack('#FF0000', 'FAILED')
        throw e
    }
}
Unfortunately when I reach the sbt clean test line I end up with the following error:
java.lang.IllegalArgumentException: URI has a query component
at java.io.File.<init>(File.java:427)
at sbt.IO$.uriToFile(IO.scala:160)
at sbt.IO$.toFile(IO.scala:135)
at sbt.Classpaths$.sbt$Classpaths$$bootRepository(Defaults.scala:1942)
at sbt.Classpaths$$anonfun$appRepositories$1.apply(Defaults.scala:1912)
at sbt.Classpaths$$anonfun$appRepositories$1.apply(Defaults.scala:1912)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at sbt.Classpaths$.appRepositories(Defaults.scala:1912)
at sbt.Classpaths$$anonfun$58.apply(Defaults.scala:1193)
at sbt.Classpaths$$anonfun$58.apply(Defaults.scala:1190)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.EvaluateSettings$MixedNode.evaluate0(INode.scala:175)
at sbt.EvaluateSettings$INode.evaluate(INode.scala:135)
at sbt.EvaluateSettings$$anonfun$sbt$EvaluateSettings$$submitEvaluate$1.apply$mcV$sp(INode.scala:69)
at sbt.EvaluateSettings.sbt$EvaluateSettings$$run0(INode.scala:78)
at sbt.EvaluateSettings$$anon$3.run(INode.scala:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
If I run a plain docker run ... followed by docker exec I get what I want, but I would like to use the built-in Jenkins functionality.
So this seems to be an SBT issue. I use version 0.13.16 inside the docker image. From what I understand, the classpath contains a query parameter that SBT doesn't like, doesn't know how to handle, or considers illegal.
I put no such query parameters there myself, so I thought that the .inside method adds them. I checked the env in the container and found a single entry RUN_CHANGES_DISPLAY_URL=<my_ip>/job/scheduler/job/fix-jenkins-pipeline/23/display/redirect?page=changes. I tried to unset it but didn't manage to.
I'm out of ideas and not really sure I'm heading in the right direction. Any help would be appreciated.
So after long and tedious searching, what finally worked for me was explicitly setting the .sbt and .ivy2 folders inside the docker container, like this:
sbt -Dsbt.global.base=.sbt -Dsbt.boot.directory=.sbt -Dsbt.ivy.home=.ivy2 clean test
That somehow prevents sbt from generating the ? folder and puts the aforementioned folders directly in the root of the checkout directory.
I spent a lot of time tracing this down through the code.
It looks like the easiest solution is to just pass -Duser.home=<path> to sbt, or to set it in the SBT_OPTS environment variable; then all the rest of the directories will be resolved as if <path> were the user's home directory.
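For example, either form below should work inside the container (a sketch; the /home/jenkins path is an assumption):
sh 'sbt -Duser.home=/home/jenkins clean test'
sh 'SBT_OPTS="-Duser.home=/home/jenkins" sbt clean test'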
I fixed this by setting Ivy's cache directory -> How to override the location of Ivy's Cache?
The problem was that it wasn't set, and by default sbt creates a ? folder which it then can't handle itself.
I created a custom Dockerfile to have more control over sbt.
Here are the steps I executed to solve the issue:
I created a file called ivysettings.xml with the following contents:
<ivysettings>
    <properties environment="env" />
    <caches defaultCacheDir="/home/jenkins/.ivy2/cache" />
</ivysettings>
and a Dockerfile:
FROM openjdk:8
RUN wget -O- "http://downloads.lightbend.com/scala/2.11.11/scala-2.11.11.tgz" \
    | tar xzf - -C /usr/local --strip-components=1
RUN curl -Ls https://git.io/sbt > /usr/bin/sbt && chmod 0755 /usr/bin/sbt
RUN adduser -u 1000 --disabled-password --gecos "" jenkins
ADD ./files/ivysettings.xml /home/jenkins/.ivy2/ivysettings.xml
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
CMD ["sbt"]
I then pushed the image to our private docker repository and our pipeline finally works!
The problem is that Jenkins runs as a specific user inside the container, but overriding it does the trick:
withDockerContainer(args: "-u root -v ${HOME}/.sbt:/root/.sbt -v ${HOME}/.ivy2:/root/.ivy2 -e HOME=/root",
                    image: 'xyz/sbt:v') {
    sh 'sbt clean test' // whatever sbt step the build needs
}

Jenkinsfile custom docker container "could not find FROM instruction"

I have just started working with Jenkinsfiles and Docker so apologies if this is something obvious.
I have a repo containing a Dockerfile and a Jenkins file.
The Dockerfile simply extends a base Ubuntu image (ubuntu:trusty) by adding several dependencies and build tools.
The Jenkinsfile currently only builds the Docker container for me:
node('docker') {
    stage "Prepare environment"
    checkout scm
    docker.build('build-image')
}
When I run the Jenkins build, the output log shows the Docker container being successfully created, but just before it should finish successfully, I get:
Successfully built 04ba77c72c74
[Pipeline] dockerFingerprintFrom
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: could not find FROM instruction in /home/emackenzie/jenkins/workspace/001_test-project_PR-1-ROWUV6YLERZKDQWCAGJK5MQHNKY7RJRHC2TH4DNOZSEKE6PZB74A/Dockerfile
Finished: FAILURE
I have been unable to find any guidance on why I am getting this error, so any help would be greatly appreciated.
Dockerfile:
FROM ubuntu:trusty
MAINTAINER Ed Mackenzie
# setup apt repos
RUN echo "deb http://archive.ubuntu.com/ubuntu/ trusty multiverse" >> /etc/apt/sources.list \
    && echo "deb-src http://archive.ubuntu.com/ubuntu/ trusty multiverse" >> /etc/apt/sources.list \
    && apt-get update
# python
RUN apt-get install -y python python-dev python-openssl
It's because your FROM line uses a tab for whitespace, instead of space(s). This is a bug in the Jenkins CI Docker workflow plugin, which expects the line to begin with FROM followed by a space.
From the jenkinsci/docker-workflow-plugin source on Github:
String fromImage = null;
// ... other stuff
if (line.startsWith("FROM ")) {
    fromImage = line.substring(5);
    break;
}
// ... other stuff ...
if (fromImage == null) {
    throw new AbortException("could not find FROM instruction in " + dockerfile);
}
If you use spaces instead of tabs, it should work fine.
I just ran into the same problem, and the solution was similar. Check whether the file is encoded with a BOM at the beginning (this can be done with something like Notepad++). If so, save it without the marker and the plugin will stop complaining.
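You can also check for and strip the marker from a shell (a sketch, assuming GNU sed; a UTF-8 BOM shows up as the bytes ef bb bf):
head -c 3 Dockerfile | xxd   # prints "efbb bf" if a BOM is present
sed -i '1s/^\xEF\xBB\xBF//' Dockerfile   # removes the BOM in place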
The error can also be solved by changing a lowercase "from" statement to "FROM", since the startsWith("FROM ") check shown above is case-sensitive.
