How to keep a Jenkins step 'alive' with declarative pipelines?

My use case is the following: I have a web application written in Node, and I've done a set of functional tests with Java and Selenium. I have configured a job in Jenkins using the new declarative pipelines syntax.
Here are the contents of the Jenkinsfile:
#!groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                nodejs(nodeJSInstallationName: 'node:8.2.0') {
                    sh 'echo $PATH'
                    sh 'npm -v'
                    sh 'node -v'
                    dir('src/webapp') {
                        sh 'npm install'
                        sh 'nohup npm start &> todomvc.out &'
                    }
                    sh './gradlew clean test'
                }
            }
        }
        stage('Clean up') {
            steps {
                deleteDir()
            }
        }
    }
}
As you can see, first I launch the webapp using npm start and send it to the background (in order to continue to the next step, which is the actual testing).
However, when the tests run, the webapp isn't available, making them fail.
I've tried replacing:
sh 'nohup npm start &> todomvc.out &'
with:
npm start
and when I go to the port I've specified, there is an instance of the webapp running as expected. However, this blocks the next steps.
What I want is to launch an instance of the webapp and then test it with ./gradlew clean test.
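One pattern worth trying here (a sketch under assumptions, not a verified fix): keep npm start in the background but redirect with POSIX syntax, since the sh step usually runs /bin/sh, where bash's &> may not behave as expected, and block until the app actually responds before the Gradle tests start. The port 3000 and the use of wait-on are assumptions:
dir('src/webapp') {
    sh 'npm install'
    // POSIX-compatible redirection instead of bash's '&>' (the sh step may not run bash)
    sh 'nohup npm start > todomvc.out 2>&1 &'
    // Block until the app answers before moving on (port 3000 is an assumption)
    sh 'npx wait-on http://localhost:3000'
}
sh './gradlew clean test'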

Related

How to specify JDK version in Jenkinsfile pipeline script

I have a pipeline script to deploy applications to a server. I'm building the project using Maven, and I want Jenkins to use a specific JDK version for building the project. My pipeline script looks like this:
pipeline {
    agent any
    tools {
        // Install the Maven version configured as "M3" and add it to the path.
        maven "Maven 3.6.3"
    }
    stages {
        stage('Build') {
            steps {
                // Run Maven on a Unix agent.
                sh "mvn clean package -DskipTests=true -U"
            }
            post {
                // If Maven was able to run the tests, even if some of the tests
                // failed, record the test results and archive the jar file.
                success {
                    archiveArtifacts "**/${war}"
                }
            }
        }
        stage('Deploy EQM Instance 1') {
            steps {
                sshagent(credentials: ['credentials']) {
                    sh "echo 1"
                    sh "echo Initializing deployment to Instance 1"
                    sh "scp target/${war} ${bastionHost}:/home/opc"
                    sh "echo 2.1"
                    sh "echo Uploaded war file to bastion instance"
                    sh "scp ${bastionHost}:/home/opc/${war} opc@${instanceDns}:/home/opc"
                    sh "echo 3.2"
                    sh "echo Uploaded war file from bastion instance to Instance 1 ${instanceDns}"
                    sh "echo Undeploying old war file"
                    sh "ssh ${bastionHost} -tt ssh opc@${instanceDns} sudo rm /opt/tomcat/webapps/${war}"
                    sh "echo 4.2.2"
                    sh "ssh ${bastionHost} -tt ssh opc@${instanceDns} sudo chown tomcat:tomcat -R ${war}"
                    sh "echo Deploying new war file"
                    sh "ssh ${bastionHost} -tt ssh opc@${instanceDns} sudo mv ${war} /opt/tomcat/webapps/"
                    sh "echo 4.3"
                }
            }
        }
    }
}
There are other jobs already configured on Jenkins, and I don't want to disturb them, so I want to specify the JDK version in this one job's configuration.
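For reference, a minimal sketch of pinning the JDK per job with the tools directive (assuming a JDK installation named 'JDK 11' has been configured under Manage Jenkins > Global Tool Configuration; that name is an assumption, and the Maven entry mirrors the one above):
pipeline {
    agent any
    tools {
        // Names must match installations configured under
        // Manage Jenkins > Global Tool Configuration ('JDK 11' is an assumption)
        jdk 'JDK 11'
        maven 'Maven 3.6.3'
    }
    stages {
        stage('Build') {
            steps {
                sh 'java -version'
                sh 'mvn clean package -DskipTests=true -U'
            }
        }
    }
}
Because the tools block lives in this job's Jenkinsfile, the other jobs on the controller keep their own configuration.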

How to run Jenkins parallel cypress on different agents?

I'm running parallel Cypress tests in Jenkins on the same slave, and it's working.
I want to change the parallel stages so that each stage runs on a different slave. How can I do it?
For example:
run "cypress tester A" on slave-1.
run "cypress tester B" on slave-2.
run "cypress tester C" on slave-3.
This is my current Jenkinsfile:
pipeline {
    options {
        timeout(time: 15, unit: 'MINUTES')
    }
    agent {
        docker {
            image 'cypress/base:12.18.2'
            label 'slave-1'
        }
    }
    parameters {
        string(defaultValue: 'master', description: 'Branch name', name: 'branchName')
    }
    stages {
        stage('build') {
            steps {
                echo 'Running build...'
                sh 'npm ci'
                sh 'npm run cy:verify'
            }
        }
        stage('cypress parallel tests') {
            environment {
                CYPRESS_RECORD_KEY = 'MY_CYPRESS_RECORD_KEY'
                CYPRESS_trashAssetsBeforeRuns = 'false'
            }
            parallel {
                stage('cypress tester A') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                stage('cypress tester B') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                stage('cypress tester C') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
            }
        }
    }
    post {
        always {
            cleanWs(deleteDirs: true)
            echo 'Tests are finished'
        }
    }
}
The cypress:run command is:
cypress run --record --parallel --config videoUploadOnPasses=false --ci-build-id $BUILD_TAG
I was able to get this to work by explicitly defining the agent within each parallel stage:
parallel {
    stage('cypress tester A') {
        agent {
            node {
                label "slave-1"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
    stage('cypress tester B') {
        agent {
            node {
                label "slave-2"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
    stage('cypress tester C') {
        agent {
            node {
                label "slave-3"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
}
However, one disadvantage I found is that now that you're running Cypress on each individual node/virtual machine, Cypress needs to know where to find the running instance of your application. Cypress looks at baseUrl in cypress.json to see where to find your app. It's common to use a localhost address for development, which means Cypress running on slave-1 will look for an app running on localhost of slave-1 - but there isn't one, so it will fail.
For simplicity's sake, I just did an npm install and npm start & npx wait-on http://localhost:3000 in each node:
stage('cypress tester A') {
    agent {
        node {
            label "slave-1"
        }
    }
    steps {
        echo "Running build ${env.BUILD_ID}"
        sh "npm install --silent"
        sh "npm start & npx wait-on http://localhost:3000"
        sh "npm run cypress:run"
    }
}
This is obviously not very efficient because you have to install and run the app on each node. However, you could potentially set up a previous stage on a dedicated node (say, slave-0) to install and serve your project, and use that. Within your Jenkinsfile you'll need to know the IP of slave-0, or you could look it up dynamically. Then, instead of installing and running your project on slave-1, 2 and 3, you would install and run it only on slave-0, and use the CYPRESS_BASE_URL environment variable to tell Cypress where to find the running instance of your app. If the IP of slave-0 is 2222.2222.2222.2222, you might try something like this:
pipeline {
    agent none
    stages {
        stage('Serve your project') {
            agent {
                label 'slave-0'
            }
            steps {
                sh 'npm install --silent'
                sh 'npm start & npx wait-on http://localhost:3000'
            }
        }
        stage('Cypress') {
            environment {
                CYPRESS_BASE_URL = 'http://2222.2222.2222.2222:3000'
                // other env vars
            }
            parallel {
                stage('cypress tester A') {
                    agent {
                        label 'slave-1'
                    }
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                // more parallel stages
            }
        }
    }
}
There are a lot of variations you could try, but hopefully that will get you started.

Npm test in Jenkins build takes 8 hours

My Jenkins build is still not finished after 8 hours. I have a simple React project that I want to set up Continuous Integration for.
My Jenkinsfile looks like this:
pipeline {
    agent {
        docker {
            image 'node'
            args '-u root'
        }
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
                sh 'npm install'
                sh 'npm install node'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                sh 'npm test'
            }
        }
    }
}
I think what is happening is that npm test is testing ALL the node modules. The build itself takes 44s.
Also, I have not been able to get npm install to install the node modules. As far as I understand, it should install node automatically?
How can I stop it taking so long?
Override the Docker entrypoint with --entrypoint ''.
The agent block will therefore look like:
agent {
    docker {
        image 'node'
        args '-u root --entrypoint \'\''
    }
}
This is a wild guess; it's all I can do with so little information.
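One more thing worth checking, separate from the entrypoint guess above (an assumption about the project, not part of the original answer): with a create-react-app/react-scripts project, npm test starts Jest in interactive watch mode and never exits, which matches a build that runs for hours. Setting CI=true makes the tests run once and exit:
stage('Test') {
    steps {
        echo 'Testing...'
        // react-scripts runs Jest in watch mode by default; CI=true makes the
        // run exit instead of waiting for file changes (assumes react-scripts)
        sh 'CI=true npm test'
    }
}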

mvn command not found in jenkins scripted pipeline

I am new to Jenkins. I am getting the below error when I try to trigger a scripted pipeline through Jenkins.
/Users/Shared/Jenkins/Home/workspace/pipelinedemo#tmp/durable-a4f2db2a/script.sh:
line 1: mvn: command not found
Below is the code snippet:
pipeline {
    agent any
    stages {
        stage('clone repo and clean it') {
            steps {
                sh "rm -rf my-app"
                sh "git clone https://github.com/Testing/my-app"
                sh "mvn clean -f my-app"
            }
        }
        stage('Test') {
            steps {
                sh "mvn test -f my-app"
            }
        }
        stage('Deploy') {
            steps {
                sh "mvn package -f my-app"
            }
        }
    }
}
Please note that I am able to run the mvn command through a freestyle project; I only get this error from the pipeline. Please answer this. Thanks in advance. Geeth
SSH to your Linux server, then:
1. Install Maven on your Linux machine.
2. Add it to your PATH variable (you should be able to run the mvn -version command there).
3. Restart your Jenkins service.
Then, you should be able to use it in your pipeline script.
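Alternatively, if a Maven installation is already configured in Jenkins (as in the "Maven 3.6.3" example earlier on this page), the tools directive can put mvn on the PATH for the pipeline without installing anything on the machine. A minimal sketch, assuming an installation with that exact name exists under Global Tool Configuration:
pipeline {
    agent any
    tools {
        // Must match the name of a Maven installation in
        // Manage Jenkins > Global Tool Configuration
        maven 'Maven 3.6.3'
    }
    stages {
        stage('Test') {
            steps {
                sh "mvn test -f my-app"
            }
        }
    }
}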

How to push the docker image to DTR using Jenkins multibranch pipeline

My problem statement is to push a Docker image to DTR using a Jenkins multibranch pipeline.
I want to configure my Jenkinsfile in such a way that it will build the image and then push it.
For building the image I will use the 10.1.2.3 machine, and the DTR will be https://something.dtr01.eastus2.cloudapp.azure.com/
I am totally new to Jenkins; as per the instructions I have put the sample Jenkinsfile in the git repo.
Please suggest the configuration changes in the Jenkinsfile.
node {
    // Clean workspace before doing anything
    deleteDir()
    try {
        stage ('Clone') {
            checkout scm
        }
        stage ('Build') {
            sh "echo 'shell scripts to build project...'"
        }
        stage ('Tests') {
            parallel 'static': {
                sh "echo 'shell scripts to run static tests...'"
            },
            'unit': {
                sh "echo 'shell scripts to run unit tests...'"
            },
            'integration': {
                sh "echo 'shell scripts to run integration tests...'"
            }
        }
        stage ('Deploy') {
            sh "echo 'shell scripts to deploy to server...'"
        }
    } catch (err) {
        currentBuild.result = 'FAILED'
        throw err
    }
}
Any help will be appreciated.
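As a starting point for the Build stage, here is a minimal sketch of building and pushing with the Docker Pipeline plugin. The plugin itself, the credentials id 'dtr-credentials', and the image name 'myorg/my-webapp' are assumptions; the registry URL is the one from the question:
node {
    deleteDir()
    stage ('Clone') {
        checkout scm
    }
    stage ('Build and push') {
        // Build the image from the Dockerfile in the workspace
        def image = docker.build("myorg/my-webapp:${env.BUILD_NUMBER}")
        // Log in to DTR and push; 'dtr-credentials' is a Jenkins
        // username/password credential id (an assumption)
        docker.withRegistry('https://something.dtr01.eastus2.cloudapp.azure.com/', 'dtr-credentials') {
            image.push()
            image.push('latest')
        }
    }
}
To make the build run on the 10.1.2.3 machine specifically, give that agent a label in Jenkins and allocate it with node('that-label') instead of the plain node block.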
