npm test in Jenkins build takes 8 hours

My Jenkins build is still not finished after 8 hours. I have a simple React project that I want to set up Continuous Integration for.
My Jenkinsfile looks like this:
pipeline {
    agent {
        docker {
            image 'node'
            args '-u root'
        }
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
                sh 'npm install'
                sh 'npm install node'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                sh 'npm test'
            }
        }
    }
}
I think what is happening is that npm test is testing ALL the node modules. The build itself takes 44s.
Also, I have not been able to get npm install to install the node modules. As far as I understand, it should install node automatically?
How can I stop it taking so long?

Override the docker entrypoint by passing --entrypoint \'\' in args.
The agent block will therefore look like:
agent {
    docker {
        image 'node'
        args '-u root --entrypoint \'\''
    }
}
This is a wild guess; with so little information, it's all I can suggest.
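Separately from the entrypoint guess: if the React project was bootstrapped with Create React App (an assumption, the question doesn't say), npm test runs Jest in interactive watch mode and never exits, which by itself would explain a never-finishing stage. Setting CI=true makes react-scripts run the tests once and exit; a non-interactive test stage might look like:

```groovy
stage('Test') {
    steps {
        echo 'Testing...'
        // CI=true makes react-scripts run the suite once and exit,
        // instead of starting interactive watch mode
        sh 'CI=true npm test'
    }
}
```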

Related

How to run Jenkins parallel cypress on different agents?

I'm running parallel cypress in Jenkins on the same slave, and it's working.
I want to change the parallel stages so that each stage runs on a different slave. How can I do that?
for example:
run "cypress tester A" on slave-1.
run "cypress tester B" on slave-2.
run "cypress tester C" on slave-3.
this is my current Jenkinsfile:
pipeline {
    options {
        timeout(time: 15, unit: 'MINUTES')
    }
    agent {
        docker {
            image 'cypress/base:12.18.2'
            label 'slave-1'
        }
    }
    parameters {
        string(defaultValue: 'master', description: 'Branch name', name: 'branchName')
    }
    stages {
        stage('build') {
            steps {
                echo 'Running build...'
                sh 'npm ci'
                sh 'npm run cy:verify'
            }
        }
        stage('cypress parallel tests') {
            environment {
                CYPRESS_RECORD_KEY = 'MY_CYPRESS_RECORD_KEY'
                CYPRESS_trashAssetsBeforeRuns = 'false'
            }
            parallel {
                stage('cypress tester A') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                stage('cypress tester B') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                stage('cypress tester C') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
            }
        }
    }
    post {
        always {
            cleanWs(deleteDirs: true)
            echo 'Tests are finished'
        }
    }
}
The cypress:run command is:
cypress run --record --parallel --config videoUploadOnPasses=false --ci-build-id $BUILD_TAG
I was able to get this to work by explicitly defining the agent within each parallel stage:
parallel {
    stage('cypress tester A') {
        agent {
            node {
                label "slave-1"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
    stage('cypress tester B') {
        agent {
            node {
                label "slave-2"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
    stage('cypress tester C') {
        agent {
            node {
                label "slave-3"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
}
However, one disadvantage I found is that now that you're running cypress on each individual node/virtual machine, cypress needs to know where to find the running instance of your application. Cypress looks at baseUrl in cypress.json to see where to find your app. It's common to use a localhost address for development, which means cypress running on slave-1 will look for an app running on localhost of slave-1 - but there isn't one, so it will fail.
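For reference, the baseUrl setting described above lives in cypress.json; a minimal sketch (the port and values are assumptions, not from the original thread):

```json
{
    "baseUrl": "http://localhost:3000",
    "videoUploadOnPasses": false
}
```

Cypress also lets you override this per run with the CYPRESS_BASE_URL environment variable, which is what the approach further below relies on.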
For simplicity's sake, I just did an npm install and npm start & npx wait-on http://localhost:3000 in each node:
stage('cypress tester A') {
    agent {
        node {
            label "slave-1"
        }
    }
    steps {
        echo "Running build ${env.BUILD_ID}"
        sh "npm install --silent"
        sh "npm start & npx wait-on http://localhost:3000"
        sh "npm run cypress:run"
    }
}
This is obviously not very efficient, because you have to install and run the app on each node. However, you could potentially set up a previous stage on a dedicated node (say, slave-0) to install and serve your project, and use that. Within your Jenkinsfile you'll need to know the IP of slave-0, either hard-coded or obtained dynamically. Then, instead of installing and running your project on slave-1, 2 and 3, you would install and run it only on slave-0, and use the CYPRESS_BASE_URL environment variable to tell cypress where to find the running instance of your app. If the IP of slave-0 is 2222.2222.2222.2222, you might try something like this:
pipeline {
    agent none
    stages {
        stage('Serve your project') {
            agent {
                label 'slave-0'
            }
            steps {
                sh 'npm install --silent'
                sh 'npm start & npx wait-on http://localhost:3000'
            }
        }
        stage('Cypress') {
            environment {
                CYPRESS_BASE_URL = 'http://2222.2222.2222.2222:3000'
                // other env vars
            }
            parallel {
                stage('cypress tester A') {
                    agent {
                        label 'slave-1'
                    }
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                // more parallel stages
            }
        }
    }
}
There are a lot of variations you can try, but hopefully this will get you started.
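The "get the IP dynamically" idea mentioned above could be sketched as follows (a sketch, not from the original thread; hostname -I is Linux-specific, and how you resolve the address depends on your network):

```groovy
stage('Serve your project') {
    agent { label 'slave-0' }
    steps {
        script {
            // capture slave-0's first IP so later stages can reach the app;
            // adjust the command for your agents' OS
            env.APP_HOST = sh(returnStdout: true, script: "hostname -I | awk '{print \$1}'").trim()
        }
        sh 'npm install --silent'
        sh 'npm start & npx wait-on http://localhost:3000'
    }
}
// later: environment { CYPRESS_BASE_URL = "http://${env.APP_HOST}:3000" }
```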

Jenkins | DSL | Workspace DIR issue

I have a Jenkins DSL job for a Java build, and I am stuck on a strange problem.
The job name is DSL, and I saw a workspace with the name DSL created. But when the job runs, it adds another workspace with the name DSL@2. The problem is that I cannot get the final jar file from the DSL workspace.
pipeline
{
    agent any
    stages
    {
        stage('Build')
        {
            agent {
                docker {
                    image 'maven:latest'
                    args '-v /home/ubuntu/jenkins/jenkins_home/.m2:/root/.m2'
                }
            }
            steps {
                git branch: "${params.branch}", url: "git@github.org/repo.git"
                sh 'mvn clean install -Dmaven.test.skip=true -Dfindbugs.skip=true'
                sh "ls -la target/name.jar"
            }
        }
        stage('Copy Artifects')
        {
            steps {
                //print "$params.IP"
                // sh '${params.IP}"
                sh "ls -la && pwd"
                sh "scp target/name.jar ubuntu@${params.IP}:/home/ubuntu/target/name.jar_2"
            }
        }
    }
}
Output of the job:
Compiling 19 source files to /var/jenkins_home/workspace/dsl@2/auth-client/target/classes
DSL@2 means you either have a concurrent job configured and two builds running at the same time, OR you hit a bug: https://issues.jenkins-ci.org/browse/JENKINS-30231
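If it's the former, concurrent builds can be switched off declaratively (a sketch; disableConcurrentBuilds is a standard declarative option):

```groovy
pipeline {
    agent any
    // forbid two builds of this job running at once,
    // which is what produces the DSL@2 workspace
    options {
        disableConcurrentBuilds()
    }
    stages {
        // ... existing stages ...
    }
}
```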
To address your issue:
you are running stage('Build') inside a docker container created from the maven image.
However, stage('Copy Artifects') runs OUTSIDE of that container.
To fix it, you need to move the agent{} block to the pipeline{} level, like this:
pipeline
{
    agent {
        docker {
            image 'maven:latest'
            args '-v /home/ubuntu/jenkins/jenkins_home/.m2:/root/.m2'
        }
    }
    stages
    {
        stage('Build')
        {
            steps {
                git branch: "${params.branch}", url: "git@github.org/repo.git"
                sh 'mvn clean install -Dmaven.test.skip=true -Dfindbugs.skip=true'
                sh "ls -la target/name.jar"
            }
        }
        stage('Copy Artifects')
        {
            steps {
                sh "ls -la && pwd"
                sh "scp target/name.jar ubuntu@${params.IP}:/home/ubuntu/target/name.jar_2"
            }
        }
    }
}
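If the two stages genuinely need different agents, the declarative alternative is to stash the jar in the stage that builds it and unstash it where it's consumed (a sketch using the names from the question; stash/unstash transfer files via the Jenkins controller):

```groovy
stage('Build') {
    agent { docker { image 'maven:latest' } }
    steps {
        sh 'mvn clean install -Dmaven.test.skip=true'
        // save the jar so a different agent's workspace can retrieve it
        stash name: 'app-jar', includes: 'target/name.jar'
    }
}
stage('Copy Artifects') {
    agent any
    steps {
        // restore the jar into this stage's workspace
        unstash 'app-jar'
        sh "ls -la target/name.jar"
    }
}
```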

Getting error in Jenkins pipeline: docker: command not found

Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps {
                withMaven(maven: 'maven_3_6_3') {
                    sh 'mvn clean compile'
                }
            }
        }
        stage('unit test and Package') {
            steps {
                withMaven(maven: 'maven_3_6_3') {
                    sh 'mvn package'
                }
            }
        }
        stage('Docker build') {
            steps {
                sh 'docker build -t dockerId/cakemanager .'
            }
        }
    }
}
docker build -t dockerId/cakemanager .
/Users/Shared/Jenkins/Home/workspace/CDCI-Cake-Manager_master@tmp/durable-e630df16/script.sh: line 1: docker: command not found
First install the Docker plugin from Manage Jenkins >> Manage Plugins >> click on Available, search for Docker, and install it.
Then configure it under Manage Jenkins >> Global Tool Configuration.
You need to manually install docker on your Jenkins master, or on the agents if you're running builds on them.
Here's the doc to install docker on macOS: https://docs.docker.com/docker-for-mac/install/
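A quick way to confirm whether the machine running the build actually has the docker CLI (run it in an sh step or directly on the host; note this only checks the PATH, not whether a Docker daemon is reachable):

```shell
# Report whether the docker CLI is visible to the shell Jenkins uses
if command -v docker >/dev/null 2>&1; then
    echo "docker found at: $(command -v docker)"
else
    echo "docker not found in PATH: $PATH"
fi
```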

How do I specify docker run args when using a dockerfile agent in jenkins

I am setting up a simple Jenkins pipeline with a dockerfile agent. My Jenkinsfile is as follows:
pipeline {
    agent {
        dockerfile {
            dir 'docker'
            args '-v yarn_cache:usr/local/share/.cache/yarn'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        stage('Build') {
            steps {
                sh 'yarn install'
                sh 'yarn run build'
            }
        }
        stage('Test') {
            steps {
                sh 'yarn run test'
            }
        }
    }
}
I would like the yarn cache to persist in a volume, so I want the image to be started with '-v yarn_cache:usr/local/share/.cache/yarn'.
With the given Jenkinsfile, jenkins stalls after creating the image.
The args parameter is not actually documented for the dockerfile agent, only for the docker agent.
Do I really have to use a predefined (and uploaded) image just to be able to pass arguments?
Cheers, Thomas
OK, figured it out:
It actually works just like I configured it; I had only forgotten the leading / in the volume path. So with
args '-v yarn_cache:/usr/local/share/.cache/yarn'
it works just fine.
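Putting the fix together, the working agent section reads:

```groovy
agent {
    dockerfile {
        dir 'docker'
        // note the leading / - the container-side mount path must be absolute
        args '-v yarn_cache:/usr/local/share/.cache/yarn'
    }
}
```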

How to keep a Jenkins step 'alive' with declarative pipelines?

My use case is the following: I have a web application written in Node, and I've done a set of functional tests with Java and Selenium. I have configured a job in Jenkins using the new declarative pipelines syntax.
Here are the contents of the Jenkinsfile:
#!groovy
pipeline {
agent any
stages {
stage('Test') {
steps {
nodejs(nodeJSInstallationName: 'node:8.2.0') {
sh 'echo $PATH'
sh 'npm -v'
sh 'node -v'
dir('src/webapp') {
sh 'npm install'
sh 'nohup npm start &> todomvc.out &'
}
sh './gradlew clean test'
}
}
}
stage('Clean up') {
steps {
deleteDir()
}
}
}
}
As you can see, first I launch the webapp using npm start and send it to the background, in order to continue to the next step, which is the actual testing.
However, when the tests run, the webapp isn't available, making them fail.
I've tried replacing:
sh 'nohup npm start &> todomvc.out &'
with:
npm start
and when I go to the port I've specified there is an instance of the webapp as expected. However, this blocks the next steps.
What I want is to launch an instance of the webapp and then test it with ./gradlew clean test.
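No accepted answer is recorded here; one common pattern (a hedged sketch, not from the original thread; the port 3000 is an assumption, the question doesn't state it) is to start the app in the background and then block until it responds before running the tests, much like the wait-on approach in the cypress question above:

```groovy
dir('src/webapp') {
    sh 'npm install'
    // start the app detached so the step returns immediately
    sh 'nohup npm start > todomvc.out 2>&1 &'
    // block until the assumed port answers before running the tests
    sh 'npx wait-on http://localhost:3000'
}
sh './gradlew clean test'
```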