Jenkins executes second command with sh using docker.image.withRun - docker

I currently have a Jenkins script which starts a Docker container in which the Selenium tests are run using Maven. The Selenium tests are executed successfully, and Maven returns "Build Success".
The problem is as follows: instead of only executing the sh command specified in the Jenkinsfile, Jenkins also executes an unknown second sh command.
[Image: Jenkins Pipeline Step]
As shown in the image, the highlighted part is executed as a command, which it obviously is not, so the step fails with exit code 127.
Jenkinsfile:
node {
    stage('Checkout Code') {
        checkout scm
    }
    try {
        withEnv(["JAVA_HOME=${tool 'JDK 11.0'}", "PATH+MAVEN=${tool 'apache-maven-3.x'}/bin", "PATH+JAVA=${env.JAVA_HOME}/bin"]) {
            stage('Run Selenide Tests') {
                docker.image('selenium/standalone-chrome').withRun('-v /dev/shm:/dev/shm -P') { c ->
                    sh "mvn clean test -Denvironment=${env.Profile} -Dselenide.headless=true -Dselenide.remote=http://" + c.port(4444) + "/wd/hub"
                }
            }
        }
    } catch (e) {
        currentBuild.result = "FAILURE"
        throw e
    } finally {
        stage('Notify Slack Channel of Tests status') {
            // Hidden
        }
    }
}
Console Output (some parts hidden because not relevant):
+ docker run -d -v /dev/shm:/dev/shm -P selenium/standalone-chrome
+ docker port a15967ce0efbda908f6ba9bb7c8c633bb64e54a6557e5c23097ea47ed0540ff9 4444
+ mvn clean test -Denvironment=jenkins -Dselenide.headless=true -Dselenide.remote=http://0.0.0.0:49827
// Maven tests
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:14 min
[INFO] Finished at: 2022-04-14T15:36:38+02:00
[INFO] ------------------------------------------------------------------------
+ :::49821/wd/hub
/var/lib/jenkins/workspace/selenide-tests/test@tmp/durable-58ae7b8f/script.sh: 2:
/var/lib/jenkins/workspace/selenide-tests/test@tmp/durable-58ae7b8f/script.sh: :::49821/wd/hub: not found
+ docker stop a15967ce0efbda908f6ba9bb7c8c633bb64e54a6557e5c23097ea47ed0540ff9
a15967ce0efbda908f6ba9bb7c8c633bb64e54a6557e5c23097ea47ed0540ff9
+ docker rm -f a15967ce0efbda908f6ba9bb7c8c633bb64e54a6557e5c23097ea47ed0540ff9
a15967ce0efbda908f6ba9bb7c8c633bb64e54a6557e5c23097ea47ed0540ff9
ERROR: script returned exit code 127
Finished: FAILURE
Is this a common, easily solvable issue, or is something wrong with my Jenkinsfile, and how can I fix it?
Thanks

It seems the /wd/hub part comes from your executed line of code, which leads me to believe that your problem is due to the way you have added quotes.
Your line of code is:
sh "mvn clean test -Denvironment=${env.Profile} -Dselenide.headless=true -Dselenide.remote=http://" + c.port(4444) + "/wd/hub"
Specifically, you open your command with ", then close it after http:// with another ". I'm guessing Jenkins doesn't handle this concatenation the way you expect. Try creating the URL separately:
def url = "http://" + c.port(4444) + "/wd/hub"
and simply use this variable in the line you execute:
sh "mvn clean test -Denvironment=${env.Profile} -Dselenide.headless=true -Dselenide.remote=${url}"
I haven't used docker.image before so you might have to play around a bit to get this working.

After some more digging around the documentation and trying different stuff out, this is what worked for me:
node {
    stage('Checkout Code') {
        checkout scm
    }
    try {
        withEnv(["JAVA_HOME=${tool 'JDK 11.0'}", "PATH+MAVEN=${tool 'apache-maven-3.x'}/bin", "PATH+JAVA=${env.JAVA_HOME}/bin"]) {
            stage('Run Selenide Tests') {
                docker.image('selenium/standalone-chrome').withRun('-v /dev/shm:/dev/shm -p 4444:4444') {
                    def selenideRemote = "http://0.0.0.0:4444/wd/hub"
                    sh "mvn clean test -Denvironment=${env.Profile} -Dselenide.headless=true -Dselenide.remote=${selenideRemote}"
                }
            }
        }
    } catch (e) {
        currentBuild.result = "FAILURE"
        throw e
    } finally {
        stage('Notify Slack Channel of Tests status') {
            // Hidden
        }
    }
}
So I replaced .withRun('-v /dev/shm:/dev/shm -P') with .withRun('-v /dev/shm:/dev/shm -p 4444:4444') and replaced c.port(4444) with the selenideRemote variable.
Removing c.port(4444) stopped the second command from being executed. Replacing -P with -p 4444:4444 prevents the container's port 4444 from being mapped to a random port on the host, which also removes the need for c.port(4444).
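For what it's worth, the console output hints at where the second command came from: docker port printed two mappings (IPv4 0.0.0.0:49827 and IPv6 :::49821), so c.port(4444) returned two lines and the shell ran the second line as its own command. If the random host port is ever needed again, a sketch (untested) that keeps only the first mapping:
docker.image('selenium/standalone-chrome').withRun('-v /dev/shm:/dev/shm -P') { c ->
    // docker port can print one mapping per line (IPv4 and IPv6);
    // keep only the first line so the sh command stays a single line
    def hostPort = c.port(4444).split('\n')[0].trim()
    sh "mvn clean test -Denvironment=${env.Profile} -Dselenide.headless=true -Dselenide.remote=http://${hostPort}/wd/hub"
}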

Related

Jenkins with Multistage Dockerfile replace multiple groovy scripts

I inherited a setup for a Jenkins pipeline that had loads of tiny Groovy scripts calling one another, and I've spent the past week trying to make sense of it. I have converted part of this setup to a multi-stage Dockerfile with a BUILD stage and a DEPLOY stage, as we use a Maven image for the build portion and then deploy using a lightweight Alpine Linux based container.
Here's where I'm stuck. There was previously a script that just called this:
buildJava()
sonar()
buildJava.groovy
def call() {
    def version = getVersion()
    script { writeFile(file: "src/main/resources/version.json", text: '{"version": "' + version + '"}') }
stage("Build") {
docker.image('maven:3.6.1-jdk-11-slim').inside('-v $HOME:/var/maven -e MAVEN_CONFIG=/var/maven/.m2'){
sh "mvn -Duser.home=/var/maven -Dmaven.test.failure.ignore=true clean package"
}
junit '**/surefire-reports/*.xml'
}
}
sonar.groovy
def call() {
    stage("SonarQube Analysis") {
        withSonarQubeEnv('SonarQube') {
            sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
        }
    }
    stage("Quality Gate") {
        timeout(time: 10, unit: 'MINUTES') {
            sleep(10)
            def qg = waitForQualityGate()
            if (qg.status != 'OK') {
                error "Pipeline aborted due to quality gate failure: ${qg.status}"
            }
        }
    }
}
I think my hangup is on making sense of the volume (-v) in the buildJava.groovy.
My new Dockerfile looks like this:
#BUILD STAGE
FROM maven:3-amazoncorretto-11 as BUILD
COPY pom.xml /usr/src/app/pom.xml
RUN mvn -f /usr/src/app/pom.xml dependency:resolve -q
COPY src /usr/src/app/src
RUN mvn -f /usr/src/app/pom.xml clean package
ENV MAVEN_CONFIG=/var/maven/.m2
VOLUME $HOME:/var/maven
#RUN STAGE
FROM docker.artifactory.mycompany.com/_base_java11:1.0 as DEPLOY
VOLUME /tmp
COPY --from=BUILD /usr/src/app/target/*.jar app.jar
COPY --from=BUILD /usr/src/app/target/surefire-reports/ surefire-reports
# Labels
ARG GIT_VERSION=unspecified
LABEL git_version=$GIT_VERSION
# Labels
ARG JENKINS_BUILD_TAG=unspecified
LABEL jenkins_build_tag=$JENKINS_BUILD_TAG
ARG BRANCH=unspecified
LABEL branch=$BRANCH
ENV JAVA_OPTS=""
CMD java -javaagent:apm-agent.jar -Djava.security.egd=file:/dev/./urandom $JAVA_OPTS -jar /app.jar
and the relevant part of the Jenkinsfile looks like this:
buildImage = buildDocker(projectName, gitVersionString, deploymentEnvironment, ' --target BUILD ')
sonar()
image = buildDocker(projectName, gitVersionString, deploymentEnvironment, ' --target DEPLOY ')
The sonar.groovy file is still the same as before, as I'm trying to keep it as its own part of the Jenkins pipeline. The Jenkins build produces the dreaded
[INFO] JavaClasspath initialization
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.766s
[INFO] Finished at: Mon Sep 06 14:26:37 CDT 2021
[INFO] Final Memory: 99M/845M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar (default-cli) on project myService: Please provide compiled classes of your project with sonar.java.binaries property -> [Help 1]
output. I feel like I'm missing something simple here, but it just won't come to me. For now, I have taken the sonar() call out of the Jenkinsfile and everything runs fine. Essentially, I want the Sonar scan to remain its own Jenkins pipeline stage but run in the BUILD stage container. Any help would be greatly appreciated.
buildDocker.groovy
def call(projectName, version, deployEnv, optionalArgs = '') {
    def jenkinsBuildTag = env.BUILD_TAG.replaceAll(' ', '_')
    String dockerRegistry = "docker.artifactory.mycompany.com/"
    String dockerProjectName = dockerRegistry + "prefix-" + projectName + ":" + deployEnv
    String activeSpringProfiles = getActiveSpringProfiles(deployEnv)
    newImage = ''
    stage('build docker image') {
        // Build the new image defined in the Dockerfile, named after dockerProjectName,
        // using the various build arguments.
        newImage = docker.build(dockerProjectName,
            "--build-arg GIT_VERSION=${version} " +
            "--build-arg JENKINS_BUILD_TAG=${jenkinsBuildTag} " +
            "--build-arg CONFIGURATION=${deployEnv} " +
            "--build-arg BUILD_ENV=${deployEnv} " +
            "--build-arg SPRING_PROFILES_ACTIVE=${activeSpringProfiles} " +
            optionalArgs +
            ".")
    }
    return newImage
}
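One possible direction, sketched under the assumption that the image built with --target BUILD keeps the classes compiled by RUN mvn clean package: run the Sonar stage inside that image. The parameter, the /usr/src/app paths, and the sonar.java.binaries value below are assumptions based on the Dockerfile above, not tested code:
// sonar.groovy (sketch): run the scan inside the BUILD-stage image so
// the compiled classes are available to the scanner
def call(String buildImageName) { // e.g. the name returned by buildDocker(..., ' --target BUILD ')
    stage("SonarQube Analysis") {
        docker.image(buildImageName).inside {
            withSonarQubeEnv('SonarQube') {
                sh 'mvn -f /usr/src/app/pom.xml ' +
                   'org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar ' +
                   '-Dsonar.java.binaries=/usr/src/app/target/classes'
            }
        }
    }
}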

Unable to start the Jmeter-Server in background in Jenkins pipeline. Getting ConnectException

I have a requirement to implement distributed performance testing where I may need to launch multiple slave nodes in parallel when the user count is high. Hence I have to launch both master and slave nodes.
I have tried every way I know to start jmeter-server in the background, since it has to keep running on the slave node to receive incoming requests.
But I am still unable to start it in the background.
def node_builder = [:] // parallel branches, one per slave node
def node_ipAddr = []   // collected slave IP addresses

node(performance) {
    properties([disableConcurrentBuilds()])
    stage('Setup') {
        cleanAndInstall()
        checkout()
    }
    max_instances_to_boot = 1
    for (val in 1..max_instances_to_boot) {
        def instance_id = val
        node_builder[instance_id] = {
            timestamps {
                node(node_label) {
                    stage('Node -> ' + instance_id + ' Launch') {
                        def ipAddr = ''
                        script {
                            ipAddr = sh(script: 'curl http://xxx.xxx.xxx.xxx/latest/meta-data/local-ipv4', returnStdout: true)
                            node_ipAddr.add(ipAddr)
                        }
                        cleanAndInstall()
                        checkout()
                        println "Node IP Address: " + node_ipAddr
                        dir('apache-jmeter/bin') {
                            exec_cmd = "nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=$ipAddr > ${env.WORKSPACE}/jmeter-server-nohup.out &"
                            println 'Server Execution Command: ' + exec_cmd
                            sh exec_cmd
                        }
                        sleep time: 1, unit: 'MINUTES'
                        sh """#!/bin/bash
                        echo "============ jmeter-server.log ============"
                        cat jmeter-server.log
                        echo "============ nohup.log ============"
                        cat jmeter-server-nohup.out
                        """
                    }
                }
            }
        }
    }
    parallel node_builder
    stage('Execution') {
        exec_cmd = "apache-jmeter/bin/jmeter -n -t /home/jenkins/workspace/release-performance-tests/test_plans/delights/fd_regression_delight.jmx -e -o /home/jenkins/workspace/release-performance-tests/Performance-Report -l /home/jenkins/workspace/release-performance-tests/JTL-FD-773.jtl -R xx.0.3.210 -Jserver.rmi.ssl.disable=true -Dclient.tries=3"
        println 'Execution Command: ' + exec_cmd
        sh exec_cmd
    }
}
I am getting the following error:
Error in rconfigure() method java.rmi.ConnectException: Connection refused to host: xx.0.3.210; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
We're unable to provide the answer without seeing the contents of your nohup.out file, which is supposed to contain your script output.
Blind shot: by default JMeter uses secure communication between the master and the slaves, so you need a Java keystore containing the certificates necessary for request encryption. The script is create-rmi-keystore.sh, and you need to run it and perform the configuration before starting the JMeter slave.
If you don't need encrypted communication between master and slaves, you can turn this feature off so you won't have to create the keystore. This can be done either by adding the following command-line argument:
-Jserver.rmi.ssl.disable=true
like:
nohup jmeter-server -Jserver.rmi.ssl.disable=true &
or alternatively by adding the following line to the user.properties file (which lives in the "bin" folder of your JMeter installation):
server.rmi.ssl.disable=true
More information:
Configuring JMeter
Apache JMeter Properties Customization Guide
Remote hosts and RMI configuration
This was resolved by adding the following inside the node stage:
JENKINS_NODE_COOKIE=dontKillMe nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=xx.xx.xx.xxx > ${env.WORKSPACE}/jmeter-server-nohup.out &
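In the pipeline above, the launch step then looks roughly like this (a sketch; JENKINS_NODE_COOKIE=dontKillMe tells Jenkins' ProcessTreeKiller not to reap the background process when the step finishes):
dir('apache-jmeter/bin') {
    // keep jmeter-server alive after the sh step returns
    sh "JENKINS_NODE_COOKIE=dontKillMe nohup sh jmeter-server " +
       "-Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=${ipAddr} " +
       "> ${env.WORKSPACE}/jmeter-server-nohup.out &"
}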

Jenkins declarative pipeline with docker and git

I am trying to build a pipeline for a Node.js application using Git and Docker. I have made a declarative Jenkinsfile with which everything works smoothly. I have set SCM polling for every two minutes and it gets invoked correctly, but the problem is that while the old pipeline is still running, the new build gets queued with the message "Waiting for next available executor". I wanted to know if I have done everything correctly and what I am missing.
My complete code can be found here.
I have tried running npm start in the deliver.sh file with & to make it run in daemon mode, and used the input message option in the Jenkinsfile to stop the pipeline from finishing, because otherwise, with only "npm start &" and no "input message", the pipeline reaches the end and the app container gets killed. I am sure this approach is not correct. I then tried npm start without & and without input message; when SCM polling triggered, the pipeline started executing its stages, but since the previous container was already published to port 3000, it obviously cannot publish the new one to 3000, so the pipeline returns an error.
Dockerfile
FROM node:alpine
COPY . .
EXPOSE 3000
Jenkinsfile
pipeline {
    triggers {
        pollSCM 'H/2 * * * *'
    }
    agent {
        dockerfile {
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                // input message: 'Finished using the web site? (Click "Proceed" to continue)'
                // sh './jenkins/scripts/kill.sh'
            }
        }
    }
}
deliver.sh script
# set -x
# npm start &
npm start
# sleep 1
# copying process ID of npm start to file name pidfile, this id will
# be used when the user press any key to stop the app
# echo $! > .pidfile
# set +x
Any help in this regard would be highly appreciated.
Add
disableConcurrentBuilds()
inside an 'options' section to prevent two builds from running at the same time:
pipeline {
    triggers {
        pollSCM 'H/2 * * * *'
    }
    agent {
        dockerfile {
            args '-p 3000:3000'
        }
    }
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                // input message: 'Finished using the web site? (Click "Proceed" to continue)'
                // sh './jenkins/scripts/kill.sh'
            }
        }
    }
}

How to not mark Jenkins job as FAILURE when pytest tests fail

I have a Jenkins setup with a pipeline that uses pytest to run some test suites. Sometimes a test fails and sometimes the test environment crashes (random HTTP timeout, external lib error, etc.). The job parses the XML test results, but the build is marked as FAILURE whenever pytest returns a non-zero exit code.
I want Jenkins to get exit code zero from pytest even if there are failed tests, but I also want other errors to be marked as failures. Are there any pytest options that can fix this? I found pytest-custom_exit_code, but it can only suppress the "empty test suite" error. Maybe some Jenkins option or bash snippet?
A simplified version of my groovy pipeline:
pipeline {
    stages {
        stage ('Building application') {
            steps {
                sh "./build.sh"
            }
        }
        stage ('Testing application') {
            steps {
                print('Running pytest')
                sh "cd tests && python -m pytest"
            }
            post {
                always {
                    archiveArtifacts artifacts: 'tests/output/'
                    junit 'tests/output/report.xml'
                }
            }
        }
    }
}
I have tried to catch exit code 1 (meaning some tests failed), but Jenkins still receives exit code 1 and marks the build as FAILURE:
sh "cd tests && (python -m pytest; rc=\$?; if [ \$rc -eq 1 ]; then exit 0; else exit \$rc; fi)"
Your attempt does not work because Jenkins runs the shell with the errexit (-e) option enabled, which causes the shell to exit right after the pytest command, before it reaches your if statement. There is, however, a one-liner that will work because it is evaluated as a single statement: https://stackoverflow.com/a/31114992/1070890
So your build step would look like this:
sh 'cd tests && python -m pytest || [[ $? -eq 1 ]]'
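An alternative is to skip shell arithmetic entirely and use the sh step's returnStatus option, which returns the exit code instead of throwing (a sketch; in declarative syntax this needs a script block):
script {
    // returnStatus: true makes sh return the exit code instead of failing the step
    def rc = sh(script: 'cd tests && python -m pytest', returnStatus: true)
    // pytest exit code 1 means some tests failed; anything else non-zero is a real error
    if (rc > 1) {
        error("pytest crashed with exit code ${rc}")
    }
}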
My solution was to implement the support myself in pytest-custom-exit-code and create a pull request.
From version 0.3.0 of the plugin I can use pytest --suppress-tests-failed-exit-code to get the desired behavior.
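Wired into the stage from the question, that would look like this (sketch):
stage ('Testing application') {
    steps {
        // failed tests no longer produce a non-zero exit code; the junit
        // step below still marks the build from report.xml, while other
        // errors (crashes, collection failures) still fail the build
        sh "cd tests && python -m pytest --suppress-tests-failed-exit-code"
    }
    post {
        always {
            archiveArtifacts artifacts: 'tests/output/'
            junit 'tests/output/report.xml'
        }
    }
}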

What is a rule to make target in Docker container?

To learn how to manipulate Docker images from within Jenkins, I am reading this link.
What is a "rule to make a target" in Docker? The simple example below is failing because there is "no rule to make a target". What does this mean in Docker?
The Error And The Code That Triggers The Error
The sh 'make test' line of code in a Jenkinsfile from the link above is throwing an error when run inside the following block:
testImage.inside {
    sh "whoami"
    sh 'make test'
}
The actual error that Jenkins throws when trying to interpret the sh 'make test' line of code is:
make test — Shell Script <1s
[ple-dockere-containered-app-WWNVRTE6XFKMI4JPEVK2F2U3HOGDZICATW6XBFM2IQUW5PAG5FWA] Running shell script
+ make test
make: *** No rule to make target 'test'. Stop.
The complete Jenkinsfile is:
node {
// Clean workspace before doing anything
deleteDir()
try {
stage ('Clone') {
checkout scm
}
stage ('Build') {
def testImage = docker.build("test-image", "./app")
testImage.inside {
sh "whoami"
sh 'make test'
}
}
} catch (err) {
currentBuild.result = 'FAILED'
throw err
}
}
Note that the make test command is being run inside the container.
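For what it's worth, the message comes from GNU make, not Docker: testImage.inside mounts the Jenkins workspace and runs commands from it, and nothing there defines a test target. A sketch of the missing piece (the echo recipe is a placeholder for the project's real test command):
// give `make test` a rule to run: a Makefile with a `test` target in the
// workspace root (recipe lines must start with a tab)
writeFile file: 'Makefile', text: 'test:\n\techo "running tests"\n'
testImage.inside {
    sh 'make test' // now finds a rule for the `test` target
}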
