I found that Jenkins just ignores my variable ${BuildFolder}; thanks for the help.
node {
    def BuildFolder = '/Build/${JOB_NAME}' + '.' + '${BUILD_ID}'
    stage ('prepare') {
        sh "echo Build Folder: ${BuildFolder}"
        sh "rm -rf ${BuildFolder} && mkdir -p ${BuildFolder}"
    }
    stage ('Checkout') {
        checkout([$class: 'GitSCM',
                  branches: [[name: '*/master']],
                  doGenerateSubmoduleConfigurations: false,
                  extensions: [[$class: 'RelativeTargetDirectory',
                                relativeTargetDir: '${BuildFolder}']],
                  submoduleCfg: [],
                  userRemoteConfigs: [[credentialsId: '',
                                       url: '']]])
    }
}
You can create variables before the node block starts; then it should work.
For example:
def BuildFolder = '/Build/${JOB_NAME}' + '.' + '${BUILD_ID}'
node
{
    stage ('prepare')
    {
        sh "echo Build Folder: ${BuildFolder}"
        sh "rm -rf ${BuildFolder} && mkdir -p ${BuildFolder}"
    }
}
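The other thing to watch is quoting: in Groovy, a single-quoted string is never interpolated, so the checkout step receives the literal text ${BuildFolder} instead of the path. A minimal sketch combining both points, assuming the same job as above (a double-quoted GString built from env, and the Groovy variable passed straight to relativeTargetDir; credentials and URL left empty as in the question):
def BuildFolder = "/Build/${env.JOB_NAME}.${env.BUILD_ID}"

node {
    stage ('prepare') {
        sh "echo Build Folder: ${BuildFolder}"
        sh "rm -rf ${BuildFolder} && mkdir -p ${BuildFolder}"
    }
    stage ('Checkout') {
        checkout([$class: 'GitSCM',
                  branches: [[name: '*/master']],
                  doGenerateSubmoduleConfigurations: false,
                  extensions: [[$class: 'RelativeTargetDirectory',
                                // pass the Groovy variable itself, not a single-quoted literal
                                relativeTargetDir: BuildFolder]],
                  submoduleCfg: [],
                  userRemoteConfigs: [[credentialsId: '', url: '']]])
    }
}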
Currently, my Cypress tests are running in a Docker container in one stage:
stage('Run E2E tests') {
    steps {
        withCredentials([
            sshUserPrivateKey(credentialsId: '*********', keyFileVariable: 'SSH_KEY_FILE', usernameVariable: 'SSH_USER')
        ]) {
            sh """
                eval `ssh-agent -s`
                ssh-add ${SSH_KEY_FILE}
                ~/earthly \
                    --no-cache \
                    --config=.earthly/config.yaml \
                    +e2e
                eval `ssh-agent -k`
            """
        }
    }
}
And I am publishing the test report via publishHTML:
post {
    always {
        echo "-- Archive report artifacts"
        archiveArtifacts artifacts: 'results', allowEmptyArchive: true
        echo "-- Publish HTML test result report"
        publishHTML (target: [
            allowMissing: false,
            alwaysLinkToLastBuild: false,
            keepAll: true,
            reportDir: 'results/html/',
            reportFiles: 'mochawesome-bundle.html',
            reportName: "Test Result Report"
        ])
    }
}
But I need to make the build fail if any test case fails in the Cypress mocha report.
What can be the solution for this?
Thanks in advance.
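If earthly forwards Cypress's non-zero exit code, the sh step in the stage above should already fail the build on its own. If it does not, one option is to parse the mochawesome JSON summary after the run and fail explicitly. A minimal sketch of that idea (the results/mochawesome.json path and the stats.failures field are assumptions about your report layout, and readJSON comes from the Pipeline Utility Steps plugin):
stage('Check E2E result') {
    steps {
        script {
            // hypothetical path to the merged mochawesome JSON report
            def report = readJSON file: 'results/mochawesome.json'
            if (report.stats.failures > 0) {
                // error() marks the build as failed
                error "Cypress reported ${report.stats.failures} failed test case(s)"
            }
        }
    }
}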
I am trying to tag my Docker image. I am using Jenkins to do that for me by declaring a string parameter.
In the docker-compose.yml file I have my image like so:
image: api:"${version}"
I get an error saying the tag is incorrect.
In my Jenkins pipeline I have a string parameter named version with default LATEST. However, I want to be able to enter v1 or v2 which will be used as an image tag.
I am doing it using blue-green deployment.
You can set the VERSION in the environment for the build using withEnv in your pipeline, for example:
# Jenkinsfile
---
stage('build'){
    node('vagrant'){
        withEnv([
            'VERSION=0.1'
        ]){
            git_checkout()
            dir('app'){
                ansiColor('xterm') {
                    sh 'mvn clean install'
                }
            }
            // build docker image with version
            sh 'docker build --rm -t app:${VERSION} .'
        }
    }
}
def git_checkout(){
    checkout([
        $class: 'GitSCM',
        branches: [[name: '*/' + env.BRANCH_NAME]],
        doGenerateSubmoduleConfigurations: false,
        extensions: [
            [$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true, recursiveSubmodules: false, reference: '', trackingSubmodules: true],
            [$class: 'AuthorInChangelog'],
            [$class: 'CloneOption', depth: 0, noTags: false, reference: '', shallow: false]
        ],
        submoduleCfg: [],
        userRemoteConfigs: [
            [credentialsId: '...', url: 'ssh://vagrant@ubuntu18/usr/local/repos/app.git']
        ]
    ])
}
# Dockerfile
---
FROM ubuntu:18.04
RUN apt update && \
apt install -y openjdk-11-jre && \
apt clean
COPY app/special-security/target/special-security.jar /bin
ENTRYPOINT ["java", "-jar", "/bin/special-security.jar"]
The version number set in the Jenkins build environment is used by the docker build command.
Note: the Java application that I'm building with Maven (e.g. mvn clean install) is purely for example purposes; the code is available at https://stackoverflow.com/a/54450290/1423507. Also, the colorized output in the Jenkins console requires the AnsiColor plugin, as explained at https://stackoverflow.com/a/53227633/1423507. Finally, I am not using docker-compose in this example; there is no difference for setting the version in the environment.
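If the tag should come from the string parameter named version mentioned in the question rather than the hard-coded 0.1, a small variation of the same idea works (params.version is the standard way to read a build parameter; everything else stays as above):
// forward the 'version' build parameter (e.g. LATEST, v1 or v2) into the environment
withEnv(["VERSION=${params.version}"]) {
    // the shell expands ${VERSION}, so the parameter value becomes the image tag
    sh 'docker build --rm -t app:${VERSION} .'
}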
The problem
In my Jenkins pipeline I have a string parameter named version with default LATEST. However, I want to be able to enter v1 or v2 and that is the tag the container uses.
Assuming that docker-compose runs inside your Jenkins pipeline, the ${version} you use in docker-compose.yml must be available in the shell environment where docker-compose runs; otherwise it will evaluate to nothing, giving you the error saying the tag is invalid.
The solution
Sorry, but I am not familiar with Jenkins, so I can't tell you how to properly set the value for ${version} in the shell environment where docker-compose is running; you will need to do some searching around this.
Tip
Just as a tip: in docker-compose.yml you can use shell-style variable expansion to assign default values to the variables you use, like:
image: api:"${version:-latest}"
or, if you want an explicit error:
image: api:"${version? Mipssing version for docker image!!!}"
I am running a MySQL sidecar like the following:
docker.image("mysql:5.6").withRun("-e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e", '--lower_case_table_names=1') { c ->
docker.image("mysql:5.6").inside("--link ${c.id}:mysql") {
/* Wait until MySQL service is up */
sh "while ! mysqladmin ping -u root -h mysql -p ; do sleep 1; done"
sh "mysql -u root -h mysql -p --batch -e 'show databases;'"
}
dockerRunArgs.add("--link ${c.id}:mysql")
docker.build(image, dockerBuildArgs.join(' ')).inside(dockerRunArgs.join(' ')) {
// the actual building, archiving, deployment, etc, stages go here
withCredentials([string(credentialsId: 'CREDENTIALID', variable: 'VARIABLE')]) {
stage('Build') {
sh 'chmod 777 ./build.sh'
sh "./build.sh"
}
stage('DB migrations checkout ') {
checkout([
$class: 'GitSCM',
branches: [[name: 'develop']],
userRemoteConfigs: [[
credentialsId: 'TOKEN',
url: 'mygithuburl.git'
]]
])
sh 'composer install --prefer-dist --no-interaction --no-dev --no-progress'
sh 'php artisan migrate:refresh --seed'
}
}
}
}
This is as shown in the Jenkins documentation. Now I need to run some other services like Redis, Elasticsearch, Memcached, and Beanstalkd. So where do I need to add these Docker images?
Right now I am building the Docker image inside the MySQL sidecar block. Is it possible to run each of the sidecar containers in one stage and then do the migrations and run the tests in the next stage?
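One common pattern (a sketch, not taken from the question's setup; the image tags and link aliases here are assumptions) is to nest one withRun block per service and add a --link for each one to the arguments of the build container. Elasticsearch and Beanstalkd would follow the same pattern. Note that withRun stops each sidecar as soon as its closure ends, so the stages that talk to the services have to run inside the nested closures rather than after them:
docker.image('mysql:5.6').withRun('-e MYSQL_ALLOW_EMPTY_PASSWORD=yes', '--lower_case_table_names=1') { mysql ->
    docker.image('redis:5').withRun('') { redis ->
        docker.image('memcached:1.6').withRun('') { memcached ->
            dockerRunArgs.add("--link ${mysql.id}:mysql")
            dockerRunArgs.add("--link ${redis.id}:redis")
            dockerRunArgs.add("--link ${memcached.id}:memcached")
            docker.build(image, dockerBuildArgs.join(' ')).inside(dockerRunArgs.join(' ')) {
                stage('DB migrations') {
                    // the services are reachable via the link aliases (mysql, redis, memcached)
                    sh 'php artisan migrate:refresh --seed'
                }
                stage('Tests') {
                    // run the test suite here, still inside the sidecar closures
                }
            }
        }
    }
}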
Based on this post, I'm trying to test this pipeline code in my environment:
pipeline {
    agent any
    stages {
        stage ('push artifact') {
            steps {
                sh '[ -d archive ] || mkdir archive'
                sh 'echo test > archive/test.txt'
                sh 'rm -f test.zip'
                zip zipFile: 'test.zip', archive: false, dir: 'archive'
                archiveArtifacts artifacts: 'test.zip', fingerprint: true
            }
        }
        stage('pull artifact') {
            steps {
                sh 'pwd'
                sh 'ls -l'
                sh 'env'
                step([$class: 'CopyArtifact',
                      filter: 'test.zip',
                      projectName: '${JOB_NAME}',
                      fingerprintArtifacts: true,
                      selector: [$class: 'SpecificBuildSelector', buildNumber: '${BUILD_NUMBER}']
                ])
                unzip zipFile: 'test.zip', dir: './archive_new'
                sh 'cat archive_new/test.txt'
            }
        }
    }
}
but it gives the error message:
ERROR: Unable to find project for artifact copy: test
This may be due to incorrect project name or permission settings; see help for project name in job configuration.
Finished: FAILURE
How can I fix this pipeline code?
If you have enabled authorization (like RBAC), you must grant the 'Copy Artifact' permission to the project. In the project configuration, under General -> Permission to Copy Artifact, check the box and set the projects that are allowed to copy the artifact.
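If you would rather keep this in the Jenkinsfile, newer releases of the Copy Artifact plugin also expose the setting as a job option named copyArtifactPermission (whether it exists depends on your plugin version, so treat this as an assumption to verify). A sketch of where it would go in the declarative pipeline from the question:
pipeline {
    agent any
    options {
        // allow other projects (here: any) to copy this job's artifacts
        copyArtifactPermission('*')
    }
    stages {
        stage('push artifact') {
            steps {
                archiveArtifacts artifacts: 'test.zip', fingerprint: true
            }
        }
    }
}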
Rather than using projectName: '${JOB_NAME}', what worked for me is using projectName: env.JOB_NAME. I.e. your complete copy-artifacts step would look like this:
step([$class: 'CopyArtifact',
      filter: 'test.zip',
      projectName: env.JOB_NAME,
      fingerprintArtifacts: true,
      selector: [$class: 'SpecificBuildSelector', buildNumber: env.BUILD_NUMBER]
])
Or using the more modern syntax:
copyArtifacts(
    filter: 'test.zip',
    projectName: env.JOB_NAME,
    fingerprintArtifacts: true,
    selector: specific(env.BUILD_NUMBER)
)
Jenkins pipeline versus Jenkins GUI.
I have a basic Jenkins job; it contains a bash step:
export CHROME_BIN=/usr/bin/google-chrome-stable
git --version
node --version
npm -version
java -version
npm install
Xvfb :99 &
export DISPLAY=:99
npm run ci
It works fine with no errors.
I tried to convert it to a new Jenkins pipeline:
node ('ubuntu-aws') {
    env.JAVA_HOME = "${tool '1.8.92'}"
    env.PATH = "${env.JAVA_HOME}/bin:${env.PATH}"
    sh 'java -version'
    timestamps {
        //sh "docker pull main-virtual.docker.vidible.aolcloud.net/main/travis-test"
        sh 'whoami'
        sshagent(['24195acf-44c2-4f07-98e4-13365b2e49dc']) {
            stage "git checkout"
            checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], gitTool: 'Default', submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'YYYYYYY', url: 'XXXXXXXX']]])
            sh '''git --version
                  node --version
                  npm --version'''
            withEnv(['CHROME_BIN=/usr/bin/google-chrome-stable', 'CONTINUOUS_INTEGRATION=true']) {
                sh 'env'
                sh 'java -version'
                stage "npm install"
                sh "npm install"
                stage "npm run ci"
                sh '''Xvfb :99 &'''
                sh 'export DISPLAY=:99'
                sh "npm run ci"
                //sh 'npm run build'
                sh 'ls'
            }
        } //ssh agent
        archiveArtifacts allowEmptyArchive: true, artifacts: 'dist/*.*', excludes: null
    }
}
and it failed
07:09:03 03 01 2017 07:08:14.838:ERROR [launcher]: Cannot start Chrome
07:09:03
07:09:03 03 01 2017 07:08:17.103:ERROR [launcher]: Cannot start Chrome
07:09:03
07:09:03 03 01 2017 07:08:18.887:ERROR [launcher]: Cannot start Chrome
Both jobs run on the same Jenkins server and on the same slave.
Any idea what causes this error?
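One likely difference from the freestyle job: every sh step in the pipeline runs in its own shell, so the export DISPLAY=:99 never reaches the npm run ci step, and the backgrounded Xvfb from the previous sh step may not survive either, leaving Karma unable to start Chrome. A minimal sketch of keeping those commands together (same commands as above, only regrouped; DISPLAY is set through withEnv instead of export):
withEnv(['CHROME_BIN=/usr/bin/google-chrome-stable', 'CONTINUOUS_INTEGRATION=true', 'DISPLAY=:99']) {
    // start Xvfb and run the tests in the same shell so the display is available
    sh '''
        Xvfb :99 &
        npm install
        npm run ci
    '''
}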