I am trying to configure the pipeline to run automated e2e tests on each PR to the dev branch.
For that I pull the project and build it, but then I cannot run my tests, because while the project is running the pipeline never advances to the second stage.
The question is: when I build the project in Jenkins and it runs, how do I force my tests to run?
I tried parallel stage execution, but that doesn't work either, because my tests start running as soon as the project starts building.
My pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Cloning..'
                git branch: 'dev', url: 'https://github.com/...'
                echo 'Building..'
                sh 'npm install'
                sh 'npm run dev'
            }
        }
        stage('e2e Test') {
            steps {
                echo 'Cloning..'
                git branch: 'cypress-tests', url: 'https://github.com/...'
                echo 'Testing..'
                sh 'cd cypress-e2e'
                sh 'npm install'
                sh 'npm run dev'
            }
        }
    }
}
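Note, as an aside, that each `sh` step runs in its own fresh shell, so `sh 'cd cypress-e2e'` has no effect on the `sh` steps that follow it. Jenkins' built-in `dir` step scopes the working directory instead; a minimal sketch of the test steps, assuming a `cypress-e2e` folder exists in the workspace:

```groovy
// each `sh` spawns a new shell, so a lone `cd` is lost immediately;
// `dir` runs every enclosed step inside the given subdirectory
dir('cypress-e2e') {
    sh 'npm install'
    sh 'npm run dev'
}
```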
You can add a stage that clones the test branch, and then run the build and test stages at the same time using parallel. The following pipeline should work:
pipeline {
    agent any
    stages {
        stage('Clone branches') {
            steps {
                echo 'Cloning cypress-tests..'
                git branch: 'cypress-tests', url: 'https://github.com/...'
                echo 'Cloning dev..'
                git branch: 'dev', url: 'https://github.com/...'
            }
        }
        stage('Build and test') {
            parallel {
                stage('build') {
                    steps {
                        echo 'Building..'
                        sh 'npm install'
                        sh 'npm run dev'
                    }
                }
                stage('e2e Test') {
                    steps {
                        echo 'Testing..'
                        dir('cypress-e2e') {
                            sh 'npm install'
                            sh 'npm run dev'
                        }
                    }
                }
            }
        }
    }
}
Your build will then show the build and test stages running in parallel in the pipeline view.
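One caveat with this approach: if `npm run dev` starts a long-running dev server, its `sh` step never returns and the stage blocks. A sketch of one workaround, assuming the app serves on port 3000 (the port and log file name are placeholders):

```groovy
stage('build') {
    steps {
        sh '''
            npm install
            # start the dev server in the background so the step can return;
            # JENKINS_NODE_COOKIE=dontKillMe keeps Jenkins' ProcessTreeKiller
            # from reaping the server process when the step finishes
            JENKINS_NODE_COOKIE=dontKillMe nohup npm run dev > dev.log 2>&1 &
            # block until the app actually answers before moving on
            npx wait-on http://localhost:3000
        '''
    }
}
```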
I can think of two potential ways to handle this:
Execute each stage on a different node, so that a separate workspace is created for each stage. Example:
pipeline {
    agent any
    stages {
        stage('Build') {
            agent {
                label "node1"
            }
            steps {
                echo 'Cloning..'
                git branch: 'dev', url: 'https://github.com/...'
                echo 'Building..'
                sh 'npm install'
                sh 'npm run dev'
            }
        }
        stage('e2e Test') {
            agent {
                label "node2"
            }
            steps {
                echo 'Cloning..'
                git branch: 'cypress-tests', url: 'https://github.com/...'
                echo 'Testing..'
                dir('cypress-e2e') {
                    sh 'npm install'
                    sh 'npm run dev'
                }
            }
        }
    }
}
Create separate directories for BUILD_DIR and E2E_DIR.
cd into the relevant one for each stage and do the git checkout and the rest of the steps there. Example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // note: a `cd` inside one `sh` step does not carry over to the
                // next `sh` step, so use `dir` (which also creates the directory)
                dir("${WORKSPACE}/BUILD_DIR") {
                    echo 'Cloning..'
                    git branch: 'dev', url: 'https://github.com/...'
                    echo 'Building..'
                    sh 'npm install'
                    sh 'npm run dev'
                }
            }
        }
        stage('e2e Test') {
            steps {
                dir("${WORKSPACE}/E2E_DIR") {
                    echo 'Cloning..'
                    git branch: 'cypress-tests', url: 'https://github.com/...'
                    echo 'Testing..'
                    dir('cypress-e2e') {
                        sh 'npm install'
                        sh 'npm run dev'
                    }
                }
            }
        }
    }
}
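If you prefer to let Jenkins manage the separation itself, the `ws` step allocates (and locks) a custom workspace for the enclosed steps; a sketch of just the test stage under that approach (the `@e2e` suffix is an arbitrary choice):

```groovy
stage('e2e Test') {
    steps {
        // run this stage's steps in their own workspace directory
        ws("${WORKSPACE}@e2e") {
            git branch: 'cypress-tests', url: 'https://github.com/...'
            dir('cypress-e2e') {
                sh 'npm install'
                sh 'npm run dev'
            }
        }
    }
}
```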
I am running Jenkins and Docker on Ubuntu Server 20.04 LTS.
When the Jenkins pipeline starts running Docker commands, I get this error:
Sorry, home directories outside of /home are not currently supported.
See https://forum.snapcraft.io/t/11209 for details.
script returned exit code 1
Can anyone help me? Thank you!
The script is below:
pipeline {
    agent any
    tools {
        nodejs "nodejs"
        gradle 'gradle'
    }
    stages {
        stage('clean') {
            steps {
                sh 'docker-compose down'
                step([$class: 'WsCleanup'])
            }
        }
        stage('clone') {
            steps {
                git branch: 'develop', credentialsId: 'key', url: 'url'
            }
        }
        stage('front_build') {
            steps {
                dir('frontend') {
                    sh 'npm install'
                    sh 'npm run build'
                }
            }
        }
        stage('back_build') {
            steps {
                dir('backend') {
                    sh 'gradle clean'
                    sh 'gradle build'
                }
            }
        }
        stage('deploy') {
            steps {
                sh 'docker-compose up --build'
            }
        }
    }
}
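That particular error message is raised by snapd: a snap-packaged tool refuses to run for a user whose home directory lies outside /home, and the jenkins user's home is typically /var/lib/jenkins. A hedged diagnostic sketch, assuming docker may have been installed as a snap (run it once on the agent; package names should be verified for your setup):

```groovy
// one-off check on the agent: was docker installed via snap?
// if so, replacing the snap with the docker-ce apt package avoids
// snapd's "home directories outside of /home" restriction
node {
    sh '''
        snap list | grep -i docker || echo "docker is not a snap"
        which docker
    '''
}
```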
Hi, I have a project with e2e tests. The goal is to run these tests in Jenkins many times. Before actually running them, I have to install the Chrome browser every time, using exactly these commands in the Jenkinsfile:
sh 'wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb'
sh 'apt-get update && apt-get install -y ./google-chrome-stable_current_amd64.deb'
If I run this pipeline, say, 30 times in a minute, the browser will be downloaded from scratch 30 times. I would like to cache the browser. As I understand it, I can achieve that with volumes.
My whole Jenkinsfile, in declarative syntax, is:
pipeline {
    agent {
        docker {
            registryCredentialsId 'dockerhub-read'
            image 'node:17.3-buster'
            args '-v $HOME/google-chrome-stable_current_amd64.deb:/root/google-chrome-stable_current_amd64.deb'
            reuseNode true
        }
    }
    parameters {
        string(name: 'X_VAULT_TOKEN', defaultValue: '', description: 'Token for connection with Vault')
        string(name: 'SUITE_ACCOUNT', defaultValue: '', description: 'Account on which scenario/scenarios will be executed')
        string(name: 'Scenario', defaultValue: '', description: 'Scenario for execution')
        choice(name: 'Environment', choices:
            ['latest', 'sprint', 'production (EU1)', 'production (EU3)', 'production (US2)', 'production (US8)', 'production (AU3)'],
            description: 'Environment for tests')
    }
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage("Initialize") {
            steps {
                sh 'wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb'
                sh 'apt-get update && apt-get install -y ./google-chrome-stable_current_amd64.deb'
                sh 'yarn install'
                sh "./init.sh ${params.Environment} ${params.X_VAULT_TOKEN} ${params.SUITE_ACCOUNT}"
            }
        }
        stage("Run Feature tests") {
            steps {
                echo 'Running scenario'
                sh 'yarn --version'
                sh 'node --version'
                sh """yarn test --tags "#${params.Scenario}" """
            }
        }
    }
}
I'm trying to add this in the docker section:
args '-v $HOME/google-chrome-stable_current_amd64.deb:/root/google-chrome-stable_current_amd64.deb'
based on the "Caching data for containers" section of the article https://www.jenkins.io/doc/book/pipeline/docker/
This doesn't work. The browser is downloaded again and again. What's wrong?
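One likely reason the mount doesn't help: `wget` saves the .deb into the container's working directory (the workspace), not into /root, so the mounted file is never written to or read from. Mounting a cache *directory* and downloading only when the file is missing is more robust; a sketch, where the host path `$HOME/jenkins-cache` is an assumption:

```groovy
agent {
    docker {
        image 'node:17.3-buster'
        // mount a host directory, not a single file, as the cache
        args '-v $HOME/jenkins-cache:/cache'
        reuseNode true
    }
}
// ... later, in the Initialize stage:
stage("Initialize") {
    steps {
        sh '''
            DEB=/cache/google-chrome-stable_current_amd64.deb
            # download only when the cached copy is absent or empty
            [ -s "$DEB" ] || wget -O "$DEB" https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
            apt-get update && apt-get install -y "$DEB"
        '''
    }
}
```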
I'm running parallel Cypress stages in Jenkins on the same slave, and it's working.
I want to change the parallel stages so that each stage runs on a different slave. How can I do it?
For example:
run "cypress tester A" on slave-1,
run "cypress tester B" on slave-2,
run "cypress tester C" on slave-3.
This is my current Jenkinsfile:
pipeline {
    options {
        timeout(time: 15, unit: 'MINUTES')
    }
    agent {
        docker {
            image 'cypress/base:12.18.2'
            label 'slave-1'
        }
    }
    parameters {
        string(defaultValue: 'master', description: 'Branch name', name: 'branchName')
    }
    stages {
        stage('build') {
            steps {
                echo 'Running build...'
                sh 'npm ci'
                sh 'npm run cy:verify'
            }
        }
        stage('cypress parallel tests') {
            environment {
                CYPRESS_RECORD_KEY = 'MY_CYPRESS_RECORD_KEY'
                CYPRESS_trashAssetsBeforeRuns = 'false'
            }
            parallel {
                stage('cypress tester A') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                stage('cypress tester B') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                stage('cypress tester C') {
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
            }
        }
    }
    post {
        always {
            cleanWs(deleteDirs: true)
            echo 'Tests are finished'
        }
    }
}
The cypress:run command is:
cypress run --record --parallel --config videoUploadOnPasses=false --ci-build-id $BUILD_TAG
I was able to get this to work by explicitly defining the agent within each parallel stage:
parallel {
    stage('cypress tester A') {
        agent {
            node {
                label "slave-1"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
    stage('cypress tester B') {
        agent {
            node {
                label "slave-2"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
    stage('cypress tester C') {
        agent {
            node {
                label "slave-3"
            }
        }
        steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run cypress:run"
        }
    }
}
However, one disadvantage I found: now that you're running Cypress on each individual node/virtual machine, Cypress needs to know where to find the running instance of your application. Cypress looks at baseUrl in cypress.json to find your app. It's common to use a localhost address for development, which means Cypress running on slave-1 will look for an app running on localhost of slave-1 - but there isn't one, so it will fail.
For simplicity's sake, I just did an npm install and npm start & npx wait-on http://localhost:3000 on each node:
stage('cypress tester A') {
    agent {
        node {
            label "slave-1"
        }
    }
    steps {
        echo "Running build ${env.BUILD_ID}"
        sh "npm install --silent"
        sh "npm start & npx wait-on http://localhost:3000"
        sh "npm run cypress:run"
    }
}
This is obviously not very efficient, because you have to install and run the app on each node. However, you could potentially set up an earlier stage on a dedicated node (say, slave-0) to install and serve your project, and use that. Within your Jenkinsfile you'll need to know the IP of slave-0, or you could obtain it dynamically. Then, instead of installing and running your project on slave-1, 2 and 3, you would install and run it only on slave-0, and use the CYPRESS_BASE_URL environment variable to tell Cypress where to find the running instance of your app. If the IP of slave-0 is 2222.2222.2222.2222, you might try something like this:
pipeline {
    agent none
    stages {
        stage('Serve your project') {
            agent {
                label 'slave-0'
            }
            steps {
                sh 'npm install --silent'
                sh 'npm start & npx wait-on http://localhost:3000'
            }
        }
        stage('Cypress') {
            environment {
                CYPRESS_BASE_URL = 'http://2222.2222.2222.2222:3000'
                // other env vars
            }
            parallel {
                stage('cypress tester A') {
                    agent {
                        label 'slave-1'
                    }
                    steps {
                        echo "Running build ${env.BUILD_ID}"
                        sh "npm run cypress:run"
                    }
                }
                // more parallel stages
            }
        }
    }
}
There are a lot of variations you can try, but hopefully that will get you started.
In my application I have a build script in package.json.
The build creates a dist folder containing my application.
I set up a Jenkins master and a Jenkins agent as described in the boxboat "setup jenkins with docker" guide, and watched the video on YouTube.
But now, after doing this, I don't think my bash commands are running inside a container.
I want to clone the repo and run npm i and npm run build inside the Docker container.
How do I modify this configuration to be able to do that?
throttle(['throttleDocker']) {
    node('docker') {
        wrap([$class: 'AnsiColorBuildWrapper']) {
            try {
                stage('Build') {
                    checkout scm
                    sh '''
                        echo "in Setup"
                        docker ps -a
                        echo "after docker"
                        # ./ci/docker-down.sh
                        # ./ci/docker-up.sh
                    '''
                }
                stage('Test') {
                    parallel (
                        "unit": {
                            sh '''
                                echo "in unit"
                                # ./ci/test/unit.sh
                            '''
                        },
                        "functional": {
                            sh '''
                                echo "in functional"
                                # ./ci/test/functional.sh
                            '''
                        }
                    )
                }
                stage('Capacity Test') {
                    sh '''
                        echo "in Capacity Test"
                        # ./ci/test/stress.sh
                    '''
                }
            }
            finally {
                stage('Cleanup') {
                    sh '''
                        echo "in Cleanup"
                        # ./ci/docker-down.sh
                    '''
                }
            }
        }
    }
}
I tried the code below, but it doesn't work. I also tried adding an agent after the try.
stage('Build') {
    agent {
        docker {
            label 'docker'
            image 'node:latest'
        }
    }
    steps {
        checkout scm
        sh 'node -v'
    }
    ...
You can try the scripted pipeline below:
node {
    docker.image('yourimage').inside {
        stage('Build') {
            sh 'echo "Build stage inside container"'
        }
        stage('Test') {
            sh 'echo "Test Stage inside container"'
        }
    }
}
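Applied to the original goal (clone the repo and run npm inside the container), that pattern might look like the sketch below; `node:lts` is an assumed image, substitute your own:

```groovy
node('docker') {
    // every step inside this block executes in the container,
    // with the Jenkins workspace mounted as the working directory
    docker.image('node:lts').inside {
        stage('Build') {
            checkout scm
            sh 'npm i'
            sh 'npm run build'   // dist/ lands in the workspace on the host
        }
    }
}
```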
I have a Jenkins DSL job. It's a Java build. I am stuck on a strange problem.
The job name is DSL, and I saw that a workspace named DSL was created. But when the job runs, it adds another workspace named DSL@2. The problem is that I cannot get the final jar file from the DSL workspace.
pipeline {
    agent any
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'maven:latest'
                    args '-v /home/ubuntu/jenkins/jenkins_home/.m2:/root/.m2'
                }
            }
            steps {
                git branch: "${params.branch}", url: "git@github.org/repo.git"
                sh 'mvn clean install -Dmaven.test.skip=true -Dfindbugs.skip=true'
                sh "ls -la target/name.jar"
            }
        }
        stage('Copy Artifects') {
            steps {
                //print "$params.IP"
                // sh '${params.IP}"
                sh "ls -la && pwd"
                sh "scp target/name.jar ubuntu@${params.IP}:/home/ubuntu/target/name.jar_2"
            }
        }
    }
}
Output of the job:
Compiling 19 source files to /var/jenkins_home/workspace/dsl@2/auth-client/target/classes
DSL@2 means you either have concurrent builds configured and two builds running at the same time, OR you have hit this bug: https://issues.jenkins-ci.org/browse/JENKINS-30231
To address your issue:
you are running stage('Build') inside a Docker container created from the maven image.
However, stage('Copy Artifects') runs OUTSIDE of that container.
To fix it, move the agent{} up to the pipeline{} level, like this:
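If the extra workspace comes from concurrent builds, you can also forbid them outright:

```groovy
pipeline {
    agent any
    options {
        // a second build of this job waits for the first to finish
        // instead of allocating a separate DSL@2 workspace
        disableConcurrentBuilds()
    }
    // ... stages as before
}
```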
pipeline {
    agent {
        docker {
            image 'maven:latest'
            args '-v /home/ubuntu/jenkins/jenkins_home/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                git branch: "${params.branch}", url: "git@github.org/repo.git"
                sh 'mvn clean install -Dmaven.test.skip=true -Dfindbugs.skip=true'
                sh "ls -la target/name.jar"
            }
        }
        stage('Copy Artifects') {
            steps {
                sh "ls -la && pwd"
                sh "scp target/name.jar ubuntu@${params.IP}:/home/ubuntu/target/name.jar_2"
            }
        }
    }
}
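If you would rather keep the per-stage Docker agent, `stash`/`unstash` can hand the jar from the container stage to the later stage instead of relying on a shared workspace. A sketch of just the two stages (the stash name is arbitrary):

```groovy
stage('Build') {
    agent {
        docker {
            image 'maven:latest'
            args '-v /home/ubuntu/jenkins/jenkins_home/.m2:/root/.m2'
        }
    }
    steps {
        sh 'mvn clean install -Dmaven.test.skip=true'
        // save the jar on the controller so another agent can pick it up
        stash name: 'app-jar', includes: 'target/name.jar'
    }
}
stage('Copy Artifects') {
    steps {
        // restore the jar into this stage's workspace
        unstash 'app-jar'
        sh "scp target/name.jar ubuntu@${params.IP}:/home/ubuntu/target/name.jar_2"
    }
}
```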