I have a pipeline job for a Spring project built with Gradle:
pipeline {
agent any
triggers {
pollSCM '* * * * *'
}
tools {
jdk 'jdk-16'
}
stages {
stage('Build') {
steps {
sh 'java -version'
sh "chmod +x gradlew"
sh './gradlew assemble'
}
}
stage('Test') {
steps {
sh 'java -version'
sh "chmod +x gradlew"
sh './gradlew test'
}
}
stage('Publish Test Coverage Report') {
steps {
step([$class: 'JacocoPublisher',
execPattern: '**/build/jacoco/*.exec',
classPattern: '**/build/classes',
sourcePattern: 'src/main/java',
exclusionPattern: 'src/test*'
])
}
}
}
}
I am publishing the coverage report and it is available on the Jenkins server, but I also want to upload it to Codecov. On the Codecov page there is a guide for Jenkins and Java, but it covers a freestyle job: https://about.codecov.io/blog/how-to-set-up-codecov-with-java-and-jenkins/
name: Jenkins CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up JDK 11
uses: actions/setup-java@v2
with:
java-version: '11'
distribution: 'adopt'
- name: Grant execute permission for gradlew
run: chmod +x gradlew
- name: Build with Gradle
run: ./gradlew clean build
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run tests
run: ./gradlew clean build
- name: Coverage Report
run: ./gradlew jacocoTestReport
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
with:
fail_ci_if_error: false
How can I integrate this into my pipeline flow instead of using a workflow YAML file?
I ended up adding codecov commands to the Publish Test Coverage Report stage:
sh 'curl -Os https://uploader.codecov.io/latest/linux/codecov'
sh 'chmod +x codecov'
sh './codecov -t ${TOKEN}'
The Report Stage:
stage('Publish Test Coverage Report') {
steps {
step([$class: 'JacocoPublisher',
execPattern: '**/build/jacoco/*.exec',
classPattern: '**/build/classes',
sourcePattern: 'src/main/java',
exclusionPattern: 'src/test*'
])
sh 'curl -Os https://uploader.codecov.io/latest/linux/codecov'
sh 'chmod +x codecov'
sh './codecov -t ${TOKEN}'
}
}
It is the new beta uploader that replaces the deprecated Bash uploader. Commands for other operating systems: https://about.codecov.io/blog/introducing-codecovs-new-uploader/
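For reference, the token can also come from the Jenkins credentials store instead of a plain variable. A minimal sketch of the same stage, assuming a secret-text credential with the hypothetical ID codecov-token has been configured in Jenkins:

```groovy
// Hedged sketch: wrap the uploader call in withCredentials so the token
// is masked in the build log. 'codecov-token' is a hypothetical
// secret-text credential ID, not something from the original post.
stage('Publish Test Coverage Report') {
    steps {
        step([$class: 'JacocoPublisher',
              execPattern: '**/build/jacoco/*.exec',
              classPattern: '**/build/classes',
              sourcePattern: 'src/main/java',
              exclusionPattern: 'src/test*'
        ])
        withCredentials([string(credentialsId: 'codecov-token', variable: 'CODECOV_TOKEN')]) {
            sh 'curl -Os https://uploader.codecov.io/latest/linux/codecov'
            sh 'chmod +x codecov'
            sh './codecov -t $CODECOV_TOKEN'
        }
    }
}
```

The single-quoted Groovy string leaves $CODECOV_TOKEN to the shell, which is what lets Jenkins mask it.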
Related
I want to configure Jenkins 2.375.2 to build a Gradle project. But when I configure the pipeline using the Blue Ocean plugin and run it, I get this error:
+ ./gradlew build
/var/lib/jenkins/workspace/jenkins_master@tmp/durable-dcccf1cd/script.sh: 1: ./gradlew: not found
Jenkins file:
pipeline {
agent any
stages {
stage('Build Image') {
steps {
sh "echo 'building..'"
// configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
git credentialsId: '8f6bc3ab-9ef5-4d89-8e14-4972d63325c5', url: 'http://192.168.1.30:7990/scm/jen/spring-boot-microservice.git', branch: 'master'
// execute Java -jar ... and build docker image
sh './gradlew build'
sh 'docker build -t springio/gs-spring-boot-docker .'
}
}
}
}
I tried adding a Gradle tool configuration, but I still get the same error. Do you know how I can fix this issue?
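One thing worth checking before anything else is whether gradlew was actually committed to the repository. A hedged diagnostic sketch (not part of the original pipeline) that could be placed right after the checkout:

```groovy
// Hedged sketch: list the workspace and fail early with a clear message
// if the Gradle wrapper is missing from the checked-out repository.
stage('Check wrapper') {
    steps {
        sh 'ls -la'
        sh 'test -f gradlew || { echo "gradlew is missing - commit the Gradle wrapper or use a configured Gradle tool"; exit 1; }'
    }
}
```

If the wrapper is missing, either commit it (gradlew, gradlew.bat, and the gradle/wrapper directory) or invoke a Gradle installation configured under Jenkins tools instead of ./gradlew.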
Hi, I have a project with e2e tests. The goal is to run these tests in Jenkins many times. Before each run I have to install the Chrome browser, with exactly these commands in the Jenkinsfile:
sh 'wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb'
sh 'apt-get update && apt-get install -y ./google-chrome-stable_current_amd64.deb'
If I run this pipeline, say, 30 times in a minute, the browser will be downloaded from scratch 30 times. I would like to cache it. As far as I know, I can achieve that with volumes.
My whole JenkinsFile with declarative syntax is:
pipeline {
agent {
docker {
registryCredentialsId 'dockerhub-read'
image 'node:17.3-buster'
args '-v $HOME/google-chrome-stable_current_amd64.deb:/root/google-chrome-stable_current_amd64.deb'
reuseNode true
}
}
parameters {
string(name: 'X_VAULT_TOKEN', defaultValue: '', description: 'Token for connection with Vault')
string(name: 'SUITE_ACCOUNT', defaultValue: '', description: 'Account on which scenario/scenarios will be executed')
string(name: 'Scenario', defaultValue: '', description: 'Scenario for execution')
choice(name: 'Environment', choices:
['latest', 'sprint', 'production (EU1)', 'production (EU3)', 'production (US2)', 'production (US8)', 'production (AU3)'],
description: 'Environment for tests')
}
options {
disableConcurrentBuilds()
}
stages {
stage("Initialize") {
steps {
sh 'wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb'
sh 'apt-get update && apt-get install -y ./google-chrome-stable_current_amd64.deb'
sh 'yarn install'
sh "./init.sh ${params.Environment} ${params.X_VAULT_TOKEN} ${params.SUITE_ACCOUNT}"
}
}
stage("Run Feature tests") {
steps {
echo 'Running scenario'
sh 'yarn --version'
sh 'node --version'
sh """yarn test --tags "#${params.Scenario}" """
}
}
}
}
I'm trying to add this in the docker section:
args '-v $HOME/google-chrome-stable_current_amd64.deb:/root/google-chrome-stable_current_amd64.deb'
based on the "Caching data for containers" section of https://www.jenkins.io/doc/book/pipeline/docker/
This doesn't work: the browser is downloaded again and again. What's wrong?
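One likely reason the single-file mount fails is that Docker creates an empty directory on the host when the named file does not exist yet, so there is never a cached .deb to reuse. A sketch of the volume-caching idea, not a verified fix: mount a host directory (the hypothetical $HOME/jenkins-cache) and download only on a cache miss.

```groovy
// Hedged sketch: cache the .deb in a host directory mounted at /cache,
// and run wget only when the file is not already there.
agent {
    docker {
        registryCredentialsId 'dockerhub-read'
        image 'node:17.3-buster'
        args '-v $HOME/jenkins-cache:/cache'   // a directory, not a single file
        reuseNode true
    }
}
// ... later, in the Initialize stage:
stage("Initialize") {
    steps {
        sh '''
            [ -f /cache/google-chrome-stable_current_amd64.deb ] || \
                wget -O /cache/google-chrome-stable_current_amd64.deb \
                    https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
            apt-get update && apt-get install -y /cache/google-chrome-stable_current_amd64.deb
        '''
    }
}
```

The apt-get install step still runs on every build (the container is fresh each time); only the download is cached.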
I have a Jenkins pipeline for a .NET Core REST API and I am getting an error on the command that executes the JMeter tests:
[Pipeline] { (Performance Test)
[Pipeline] sh
+ docker exec 884627942e26 bash
[Pipeline] sh
+ /bin/sh -c cd /opt/apache-jmeter-5.4.1/bin
[Pipeline] sh
+ /bin/sh -c ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/golide/Reports/PerfHtmlReport
-n: 1: -n: ./jmeter: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Performance Test Report)
Stage "Performance Test Report" skipped due to earlier failure(s)
I have JMeter running as a Docker container on the server, as per this guide: Jmeter On Linux. I am able to extract the reports there, but the same command fails when I run it in the Jenkins context:
/bin/sh -c ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/golide/Reports/PerfHtmlReport
This is my pipeline :
pipeline {
agent any
triggers {
githubPush()
}
environment {
NAME = "cassavagateway"
REGISTRYUSERNAME = "golide"
WORKSPACE = "/var/lib/jenkins/workspace/OnlineRemit_main"
VERSION = "${env.BUILD_ID}-${env.GIT_COMMIT}"
IMAGE = "${NAME}:${VERSION}"
}
stages {
.....
.....
stage ("Publish Test Report") {
steps{
publishHTML target: [
allowMissing: false,
alwaysLinkToLastBuild: true,
keepAll: true,
reportDir: '/var/lib/jenkins/workspace/OnlineRemit_main/IntegrationTests/BuildReports/Coverage',
reportFiles: 'index.html',
reportName: 'Code Coverage'
]
archiveArtifacts artifacts: 'IntegrationTests/BuildReports/Coverage/*.*'
}
}
stage ("Performance Test") {
steps{
sh 'docker exec 884627942e26 bash'
sh '/bin/sh -c cd /opt/apache-jmeter-5.4.1/bin'
sh '/bin/sh -c ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/Reports/HtmlReport'
sh 'docker cp 884627942e26:/home/Reports/HtmlReport /var/lib/jenkins/workspace/FlexToEcocash_main/IntegrationTests/BuildReports/Coverage bash'
}
}
stage ("Publish Performance Test Report") {
steps{
step([$class: 'ArtifactArchiver', artifacts: '**/*.jtl, **/jmeter.log'])
}
}
stage ("Docker Build") {
steps {
sh 'cd /var/lib/jenkins/workspace/OnlineRemit_main/OnlineRemit'
echo "Running ${VERSION} on ${env.JENKINS_URL}"
sh "docker build -t ${NAME} /var/lib/jenkins/workspace/OnlineRemit_main/OnlineRemit"
sh "docker tag ${NAME}:latest ${REGISTRYUSERNAME}/${NAME}:${VERSION}"
}
}
stage("Deploy To K8S"){
steps {
sh 'kubectl apply -f {yaml file name}.yaml'
sh 'kubectl set image deployments/{deploymentName} {container name given in deployment yaml file}={dockerId}/{projectName}:${BUILD_NUMBER}'
}
}
}
}
My issues :
What do I need to change for that command to execute?
How can I incorporate a condition to break the pipeline if the tests fail?
Jenkins Environment : Debian 10
Platform : .Net Core 3.1
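As for the failing command itself: each sh step starts a fresh shell, so the docker exec, the cd, and the ./jmeter invocation in the stage above run in three unrelated shells, and ./jmeter ends up being resolved against the Jenkins workspace rather than the JMeter bin directory. A hedged sketch that runs everything inside the container in a single step (container ID and paths taken from the question):

```groovy
// Hedged sketch: one sh step, one docker exec, one shell inside the container,
// so the cd and ./jmeter happen in the same shell.
stage('Performance Test') {
    steps {
        sh '''docker exec 884627942e26 /bin/sh -c "cd /opt/apache-jmeter-5.4.1/bin && ./jmeter -n -t /home/getaccountperftest.jmx -l /home/golide/Reports/LoadTestReport.csv -e -o /home/golide/Reports/PerfHtmlReport"'''
    }
}
```

Note that sh fails the stage on a non-zero exit code, but JMeter itself may exit 0 even when samplers fail; failing the build on test results is usually done via the Performance Plugin's error thresholds rather than the exit code.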
The Shift-Left.jtl is a results file which JMeter will generate after executing the Shift-Left.jmx test plan.
By default it will be in CSV format; depending on what you're trying to achieve, you can:
Generate charts from the .CSV file
Generate HTML Reporting Dashboard
If you have the Jenkins Performance Plugin you can get performance trend graphs, the ability to automatically fail the build based on various criteria, etc.
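With the Performance Plugin installed, the trend graphs and failure criteria mentioned above can be wired into a pipeline stage. A minimal sketch, where the threshold value is an arbitrary example rather than a recommendation:

```groovy
// Hedged sketch: publish JMeter results and fail the build if the error
// rate exceeds 5% (arbitrary example threshold).
// The HTML dashboard can also be generated offline from the results file:
//   jmeter -g Shift-Left.jtl -o dashboard/
stage('Publish Performance Test Report') {
    steps {
        perfReport sourceDataFiles: '**/*.jtl', errorFailedThreshold: 5
    }
}
```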
I am trying to configure the pipeline to run automated e2e tests on each PR to the dev branch.
For that I pull the project and build it, but I cannot run my tests, because while the project is running the pipeline never switches to the second stage.
The question is: when I build the project in Jenkins and it runs, how do I force my tests to run?
I tried parallel stage execution, but that doesn't work either, because my tests start running while the project is still building.
My pipeline:
pipeline {
agent any
stages {
stage('Build') {
steps {
echo 'Cloning..'
git branch: 'dev', url: 'https://github.com/...'
echo 'Building..'
sh 'npm install'
sh 'npm run dev'
}
}
stage('e2e Test') {
steps {
echo 'Cloning..'
git branch: 'cypress-tests', url: 'https://github.com/...'
echo 'Testing..'
sh 'cd cypress-e2e'
sh 'npm install'
sh 'npm run dev'
}
}
}
}
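One detail worth noting (an observation, not something stated in the question): npm run dev typically starts a long-running dev server that never exits, so a stage containing it never completes and Jenkins never reaches the next stage. A hedged sketch that backgrounds the server so the pipeline can move on:

```groovy
// Hedged sketch: start the dev server in the background and keep it alive
// after the step ends. JENKINS_NODE_COOKIE=dontKillMe stops Jenkins'
// process cleanup from killing it; the log file name is an arbitrary example.
stage('Build') {
    steps {
        sh 'npm install'
        sh 'JENKINS_NODE_COOKIE=dontKillMe nohup npm run dev > dev-server.log 2>&1 &'
    }
}
```

The test stage would then still need to wait until the server actually accepts connections before running Cypress.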
You can add a stage for cloning the test branch and then run the build and test stages at the same time using parallel. The following pipeline should work:
pipeline {
agent any
stages {
stage ('Clone branches') {
steps {
echo 'Cloning cypress-tests'
git branch: 'cypress-tests', url: 'https://github.com/...'
echo 'Cloning dev ..'
git branch: 'dev', url: 'https://github.com/...'
}
}
stage('Build and test') {
parallel {
stage('build') {
steps {
echo 'Building..'
sh 'npm install'
sh 'npm run dev'
}
}
stage('e2e Test') {
steps {
echo 'Testing..'
// each sh step runs in a fresh shell, so chain the cd with &&
sh 'cd cypress-e2e && npm install && npm run dev'
}
}
}
}
}
}
You will have the following pipeline:
I can think of two potential ways to handle this:
Execute each stage on a different node, so that a different workspace is created for each stage. Example:
pipeline {
agent any
stages {
stage('Build') {
agent {
label "node1"
}
steps {
echo 'Cloning..'
git branch: 'dev', url: 'https://github.com/...'
echo 'Building..'
sh 'npm install'
sh 'npm run dev'
}
}
stage('e2e Test') {
agent {
label "node2"
}
steps {
echo 'Cloning..'
git branch: 'cypress-tests', url: 'https://github.com/...'
echo 'Testing..'
// each sh step runs in a fresh shell, so chain the cd with &&
sh 'cd cypress-e2e && npm install && npm run dev'
}
}
}
}
Create separate directories BUILD_DIR and E2E_DIR.
cd into the relevant one in each stage and do the git checkout and the rest of the steps there. Example:
pipeline {
agent any
stages {
stage('Build') {
steps {
// dir() keeps the working directory for every step inside it;
// a cd inside a separate sh step would not persist.
dir("${WORKSPACE}/BUILD_DIR") {
echo 'Cloning..'
git branch: 'dev', url: 'https://github.com/...'
echo 'Building..'
sh 'npm install'
sh 'npm run dev'
}
}
}
stage('e2e Test') {
steps {
dir("${WORKSPACE}/E2E_DIR") {
echo 'Cloning..'
git branch: 'cypress-tests', url: 'https://github.com/...'
echo 'Testing..'
sh 'cd cypress-e2e && npm install && npm run dev'
}
}
}
}
}
I run Jenkins from a Docker container, and I want to build a Docker image in a Jenkins pipeline, but docker does not exist in that container (the one Jenkins runs in).
The Jenkins container is deployed by Docker Compose; yml file:
version: "3.3"
services:
jenkins:
image: jenkins:alpine
ports:
- 8085:8080
volumes:
- ./FOR_JENKINS:/var/jenkins_home
What can we do to build a Docker image in the Jenkins pipeline?
Can we deploy a container that has Docker installed and use it just for builds, or something else? How do you handle this?
Edit:
Thanks @VonC, I checked your information, but... "permission denied"
Docker Compose file:
version: "3.3"
services:
jenkins:
image: jenkins:alpine
ports:
- 8085:8080
volumes:
- ./FOR_JENKINS:/var/jenkins_home
# - /var/run/docker.sock:/var/run/docker.sock:rw
- /var/run:/var/run:rw
Jenkinsfile:
pipeline {
agent any
stages {
stage('Build') {
steps {
echo "Compiling..."
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
}
}
/*stage('Unit Test') {
steps {
echo "Testing..."
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverage 'test-only * -- -F 4'"
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverageReport"
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt scalastyle || true"
}
}*/
stage('DockerPublish') {
steps {
echo "Docker Stage ..."
// Generate Jenkinsfile and prepare the artifact files.
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt docker:stage"
echo "Docker Build-2 ..."
// Run the Docker tool to build the image
script {
docker.withTool('docker') {
echo "D1- ..."
//withDockerServer([credentialsId: "AWS-Jenkins-Build-Slave", uri: "tcp://192.168.0.29:2376"]) {
echo "D2- ..."
sh "printenv"
echo "D3- ..."
//sh "docker images"
echo "D4- ..."
docker.build('my-app:latest', 'target/docker/stage').inside("--volume=/var/run/docker.sock:/var/run/docker.sock")
echo "D5- ..."
//base.push("tmp-fromjenkins")
//}
}
}
}
}
}
}
Result:
[job1] Running shell script
+ docker build -t my-app:latest target/docker/stage
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.29/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=my-app%3Alatest&target=&ulimits=null: dial unix /var/run/docker.sock: connect: permission denied
script returned exit code 1
Edit:
The last "permission denied" problem was fixed with:
>>sudo chmod 0777 /var/run/docker.sock
Working state:
Call on the host:
>>sudo chmod 0777 /var/run/docker.sock
Docker Compose file:
version: "3.3"
services:
jenkins:
image: jenkins:alpine
ports:
- 8085:8080
volumes:
- ./FOR_JENKINS:/var/jenkins_home
# - /var/run/docker.sock:/var/run/docker.sock:rw
- /var/run:/var/run:rw
Jenkinsfile:
pipeline {
agent any
stages {
stage('Build') {
steps {
echo "Compiling..."
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
}
}
/*stage('Unit Test') {
steps {
echo "Testing..."
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverage 'test-only * -- -F 4'"
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt coverageReport"
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt scalastyle || true"
}
}*/
stage('DockerPublish') {
steps {
echo "Docker Stage ..."
// Generate Jenkinsfile and prepare the artifact files.
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt docker:stage"
echo "Docker Build-2 ..."
// Run the Docker tool to build the image
script {
docker.withTool('docker') {
echo "D1- ..."
//withDockerServer([credentialsId: "AWS-Jenkins-Build-Slave", uri: "tcp://192.168.0.29:2376"]) {
echo "D2- ..."
sh "printenv"
echo "D3- ..."
//sh "docker images"
echo "D4- ..."
docker.build('my-app:latest', 'target/docker/stage')
echo "D5- ..."
//base.push("tmp-fromjenkins")
//}
}
}
}
}
}
}
My resolution:
I added some steps to the Jenkinsfile and ended up with:
pipeline {
agent any
//def app
stages {
stage('Build') {
steps {
echo "Compiling..."
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
}
}
stage('DockerPublish') {
steps {
echo "Docker Stage ..."
// Generate Jenkinsfile and prepare the artifact files.
sh "${tool name: 'sbt', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt docker:stage"
echo "Docker Build ..."
// Run the Docker tool to build the image
script {
docker.withTool('docker') {
echo "Environment:"
sh "printenv"
app = docker.build('ivanbuh/myservice:latest', 'target/docker/stage')
echo "Push to Docker repository ..."
docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
app.push("${env.BUILD_NUMBER}")
app.push("latest")
}
echo "Completed ..."
}
}
}
}
//https://boxboat.com/2017/05/30/jenkins-blue-ocean-pipeline/
//https://gist.github.com/bvis/68f3ab6946134f7379c80f1a9132057a
stage ('Deploy') {
steps {
sh "docker stack deploy myservice --compose-file docker-compose.yml"
}
}
}
}
You can look at "Docker in Docker in Jenkins pipeline". It includes the step:
inside the Jenkinsfile, I need to connect my build container to the outer Docker instance. This is done by mounting the Docker socket itself:
docker.build('my-build-image').inside("--volume=/var/run/docker.sock:/var/run/docker.sock") {
// The build here
}
You can see a similar approach in "Building containers with Docker in Docker and Jenkins".
In order to make the Docker from the host system available I need to make the API available to the Jenkins docker container. You can do this by mapping the docker socket that is available on the parent system.
I have created a small docker-compose file where I map both my volumes and the docker socket as following:
jenkins:
container_name: jenkins
image: myjenkins:latest
ports:
- "8080:8080"
volumes:
- /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home
- /var/run:/var/run:rw
Please note the special mapping the ‘/var/run’ with rw privileges, this is needed to make sure the Jenkins container has access to the host systems docker.sock.
And, as I mentioned before, you might need to run Docker in privileged mode.
Or, as the OP reported:
sudo chmod 0777 /var/run/docker.sock
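chmod 0777 on the socket works, but it makes the Docker daemon writable by every user on the host. A narrower alternative (a sketch under the assumption that the Jenkins container is started manually with docker run; the paths mirror the compose file above) is to grant the container the socket's owning group instead:

```shell
# Hedged sketch: give the container membership in the group that owns
# /var/run/docker.sock rather than making the socket world-writable.
DOCKER_GID=$(stat -c %g /var/run/docker.sock)
docker run -d \
  --group-add "$DOCKER_GID" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/FOR_JENKINS:/var/jenkins_home" \
  -p 8085:8080 \
  jenkins:alpine
```

With Docker Compose the equivalent is a group_add entry in the service definition; the group ID has to be supplied explicitly there, since compose files cannot run stat.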