I have a multibranch pipeline job that is started by another Jenkins pipeline job, and I want to copy its artifacts to the upstream (calling) job. With simple pipelines this works, but as soon as the job name looks like PHPDummyApp/feature%2FnewDeploySetup I get an error like:
ERROR: Failed to copy artifacts from PHPDummyApp/feature%2FnewDeploySetup with filter: backend.zip
I have tried several variations:
PHPDummyApp/feature/newDeploySetup
PHPDummyApp%2Ffeature%2FnewDeploySetup
feature%2FnewDeploySetup
but they give another error:
Unable to find project for artifact copy: PHPDummyApp%2Ffeature%2FnewDeploySetup
PHPDummyApp/feature%2FnewDeploySetup seems to actually find the proper job, but not the artifact.
I have ensured that the artifact exists:
Working: createartifact Job
pipeline {
    agent { label 'php8' }
    options {
        disableConcurrentBuilds()
        copyArtifactPermission('fetchartifact')
    }
    stages {
        stage('Hello') {
            environment {
                BUILD_DIR = "dist"
            }
            steps {
                sh "mkdir ${BUILD_DIR}"
                sh "head -c 100000 /dev/urandom >${BUILD_DIR}/dummy1.txt"
                sh "head -c 100000 /dev/urandom >${BUILD_DIR}/dummy2.txt"
                //archiveArtifacts artifacts: '*.txt', fingerprint: true
                zip archive: true, defaultExcludes: false, dir: "${BUILD_DIR}", exclude: '', glob: '', zipFile: 'backend.zip'
            }
        }
    }
}
Working: fetchartifact Job
pipeline {
    agent { label 'php8' }
    options {
        disableConcurrentBuilds()
        buildDiscarder logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '4')
    }
    triggers {
        upstream 'createartifact'
    }
    environment {
        SOURCE_DIR = "${WORKSPACE}/src"
    }
    stages {
        stage('Fetch Artifact') {
            steps {
                build 'Artifacttest/createartifact'
                step([$class: 'CopyArtifact',
                      projectName: 'createartifact',
                      filter: 'backend.zip',
                      target: 'dist'])
                //copyArtifacts(projectName: 'createartifact', target: 'dist')
                sh "unzip dist/backend.zip"
                sh "ls dist"
            }
        }
    }
}
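For reference, a hedged sketch of one approach that has worked for multibranch source jobs: address the branch job by its URL-encoded full name and pin the copy to the build just triggered, using the return value of the build step. The job and branch names come from the question above; the specific() selector symbol is an assumption that depends on the installed Copy Artifact plugin version.

```groovy
steps {
    script {
        // URL-encoded full name: multibranch parent / branch 'feature/newDeploySetup'
        def upstreamBuild = build 'PHPDummyApp/feature%2FnewDeploySetup'
        copyArtifacts(
            projectName: 'PHPDummyApp/feature%2FnewDeploySetup',
            filter: 'backend.zip',
            target: 'dist',
            // copy from exactly the build we just triggered,
            // not the "last successful" default
            selector: specific("${upstreamBuild.number}")
        )
    }
}
```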
Related
I am using more than one agent in my declarative pipeline. Is there any way to copy artifacts (input.txt) from agent1 to agent2? Here is my declarative pipeline:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                label 'agent1'
            }
            steps {
                sh 'echo arjun > input.txt'
            }
            post {
                always {
                    archiveArtifacts artifacts: 'input.txt',
                                     fingerprint: true
                }
            }
        }
        stage('Test') {
            agent {
                label 'agent2'
            }
            steps {
                sh 'cat input.txt'
            }
        }
    }
}
You can use the Copy Artifact plugin, which does exactly that.
Given your Jenkinsfile, it then turns into this:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'agent1' }
            steps {
                sh 'echo arjun > input.txt'
            }
            post {
                always {
                    archiveArtifacts artifacts: 'input.txt', fingerprint: true
                }
            }
        }
        stage('Test') {
            agent { label 'agent2' }
            steps {
                // this brings artifacts from the job named as this one, and this build
                step([
                    $class: 'CopyArtifact',
                    filter: 'input.txt',
                    fingerprintArtifacts: true,
                    optional: true,
                    projectName: env.JOB_NAME,
                    selector: [$class: 'SpecificBuildSelector',
                               buildNumber: env.BUILD_NUMBER]
                ])
                sh 'cat input.txt'
            }
        }
    }
}
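As a side note, the step() form above can usually be written with the copyArtifacts shorthand; a sketch, assuming a reasonably recent Copy Artifact plugin where specific() is the pipeline symbol for SpecificBuildSelector:

```groovy
copyArtifacts(
    projectName: env.JOB_NAME,
    filter: 'input.txt',
    fingerprintArtifacts: true,
    optional: true,
    // same job, same build number: artifacts archived earlier in this run
    selector: specific(env.BUILD_NUMBER)
)
```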
Use stash and unstash.
Example from:
https://www.jenkins.io/doc/pipeline/examples/
// First we'll generate a text file in a subdirectory on one node and stash it.
stage "first step on first node"
// Run on a node with the "first-node" label.
node('first-node') {
    // Make the output directory.
    sh "mkdir -p output"
    // Write a text file there.
    writeFile file: "output/somefile", text: "Hey look, some text."
    // Stash that directory and file.
    // Note that the includes could be "output/", "output/*" as below, or even
    // "output/**/*" - it all works out basically the same.
    stash name: "first-stash", includes: "output/*"
}
// Next, we'll make a new directory on a second node, and unstash the original
// into that new directory, rather than into the root of the build.
stage "second step on second node"
// Run on a node with the "second-node" label.
node('second-node') {
    // Run the unstash from within that directory!
    dir("first-stash") {
        unstash "first-stash"
    }
    // Look, no output directory under the root!
    // pwd() outputs the current directory Pipeline is running in.
    sh "ls -la ${pwd()}"
    // And look, output directory is there under first-stash!
    sh "ls -la ${pwd()}/first-stash"
}
I am using more than one agent in my declarative pipeline and am trying to copy artifacts (input.txt) from agent1 to agent2 within the same pipeline. In the code below I grant permission to the main branch using copyArtifactPermission, but since I am using a multibranch pipeline, I need to grant permission to all of my branches. How can we define that?
pipeline {
    agent none
    options {
        copyArtifactPermission('main')
    }
    stages {
        stage('Build') {
            agent { label 'agent1' }
            steps {
                sh 'echo arjun > input.txt'
            }
            post {
                always {
                    archiveArtifacts artifacts: 'input.txt', fingerprint: true
                }
            }
        }
        stage('Test') {
            agent { label 'agent2' }
            steps {
                // this brings artifacts from the job named as this one, and this build
                step([
                    $class: 'CopyArtifact',
                    filter: 'input.txt',
                    fingerprintArtifacts: true,
                    optional: true,
                    projectName: env.BRANCH_NAME,
                    selector: [$class: 'SpecificBuildSelector',
                               buildNumber: env.BUILD_NUMBER]
                ])
                sh 'cat input.txt'
            }
        }
    }
}
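One hedged option: copyArtifactPermission accepts a comma-separated list of job-name patterns, and wildcards are allowed, so the branch jobs under a multibranch parent can be matched with a pattern instead of a single branch name (the job name below is illustrative):

```groovy
options {
    // '*' matches every branch job under the multibranch parent 'MyApp'
    copyArtifactPermission('MyApp/*')
}
```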
Pass a variable from one Jenkins pipeline job to another pipeline job:
I have the following job:
stage('Upgrade') {
    steps {
        build job: 'Upgrade',
              parameters: [string(name: 'sourcePath', value: '%publishPath%"\"%folderBuild%')]
    }
}
Call to the other job:
pipeline {
    agent { label 'master' }
    stages {
        stage('Upgrade') {
            steps {
                sh "ansible-playbook -i inventory playbook.yml --extra-vars "name=build_path value=%sourcePath%"
            }
        }
    }
}
Question: what's wrong?
stage('Upgrade') {
    steps {
        build job: 'Upgrade', parameters: [string(name: 'sourcePath', value: env.buildPath)]
    }
}
In the following job, you have to define a String parameter called sourcePath:
stages {
    stage('Upgrade') {
        steps {
            sh label: '', script: 'ansible-playbook -i inventory upgrade.yml -e "buildPath=${sourcePath}"'
        }
    }
}
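The sourcePath String parameter mentioned above would be declared in the downstream pipeline, e.g. (a minimal sketch):

```groovy
parameters {
    string(name: 'sourcePath',
           defaultValue: '',
           description: 'Build path passed in from the upstream job')
}
```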
In Ansible, create the variable as follows:
vars:
  build_path: "{{ buildPath }}"  # buildPath from the Jenkins job
We've set up a nightly build using a simple pipeline job that runs periodically every day, but the developers are not getting email notifications for it.
We're using the emailext plugin to send those emails, and Kubernetes agents as nodes.
The job is started by a timer because it's a periodic build, and it runs the following pipeline configuration (you can ignore the agent's container definition, as it's not relevant, IMO):
pipeline {
    agent {
        kubernetes {
            yaml """
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: agent
    image: python:3.7
    command:
    - cat
    tty: true
"""
        }
    }
    options {
        timestamps()
    }
    stages {
        stage('SCM') {
            steps {
                checkout(
                    changelog: false,
                    poll: false,
                    scm: [$class: 'GitSCM',
                          userRemoteConfigs: [[credentialsId: 'Git SSH Key', url: 'git@bitbucket.org:company/repository.git']],
                          branches: [[name: 'master']],
                          doGenerateSubmoduleConfigurations: false,
                          extensions: [[$class: 'CloneOption',
                                        depth: 1,
                                        noTags: false,
                                        reference: '',
                                        shallow: true],
                                       [$class: 'PruneStaleBranch'],
                                       [$class: 'GitLFSPull'],
                                       [$class: 'SubmoduleOption',
                                        disableSubmodules: false,
                                        parentCredentials: true,
                                        recursiveSubmodules: true,
                                        reference: '',
                                        trackingSubmodules: false]],
                          submoduleCfg: []]
                )
            }
        }
        stage('Build') {
            steps {
                container('agent') {
                    echo '-> install tox'
                    sh 'pip install tox'
                    sh 'python --version'
                    sh 'pip --version'
                }
            }
        }
        stage('Test') {
            steps {
                container('agent') {
                    sh 'tox -c ./tox.ini'
                }
            }
            post {
                always {
                    echo '-> collecting artifacts'
                    archiveArtifacts allowEmptyArchive: true, artifacts: '*.txt'
                    echo '-> collecting test results'
                    junit allowEmptyResults: true, testResults: 'output/pytest.xml'
                }
            }
        }
    }
    post {
        changed {
            emailext(
                subject: '$DEFAULT_SUBJECT',
                body: '$DEFAULT_CONTENT',
                recipientProviders: [culprits(),
                                     developers(),
                                     requestor(),
                                     brokenBuildSuspects(),
                                     brokenTestsSuspects(),
                                     upstreamDevelopers()]
            )
        }
    }
}
The above works when the job is started manually (the starting developer gets the relevant email); however, when the job is triggered by the periodic build (cron), the recipients' list is always empty:
An attempt to send an e-mail to an empty list of recipients, ignored.
What might be the problem?
Another way is to obtain the committers using git:
// Get all commits from the latest merge in an array
def gitCommits = sh(returnStdout: true, script: 'git log --merges -1 --format=%p').trim().split(' ')
// Get committer emails:
def emails = ""
gitCommits.each {
    // Double-quoted Groovy string so ${it} is actually interpolated
    emails = emails + sh(returnStdout: true, script: "git --no-pager show -s --format=%ae ${it}").trim() + ","
}
echo "${emails}"
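The collected addresses can then be handed to emailext directly instead of relying on recipient providers, which resolve to nothing for timer-triggered builds; a sketch:

```groovy
// 'emails' is the comma-separated string built above
emailext(
    to: emails,
    subject: '$DEFAULT_SUBJECT',
    body: '$DEFAULT_CONTENT'
)
```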
Obviously all the recipient providers return an empty list:
requestor() returns an empty list because there is no requestor.
upstreamDevelopers() returns an empty list because there is no upstream build.
Check the source code to figure out why culprits(), developers(), brokenBuildSuspects() and brokenTestsSuspects() return an empty list.
I am using the declarative syntax for my pipeline and would like to store the path of the workspace used in one of my stages, so that the same path can be used in a later stage.
I have seen that I can call pwd() to get the current directory, but how do I assign it to a variable that can be used between stages?
EDIT
I have tried to do this by defining my own custom variable and using it with the ws directive:
pipeline {
    agent { label 'master' }
    stages {
        stage('Build') {
            steps {
                script {
                    def workspace = pwd()
                }
                sh '''
                    npm install
                    bower install
                    gulp set-staging-node-env
                    gulp prepare-staging-files
                    gulp webpack
                '''
                stash includes: 'dist/**/*', name: 'builtSources'
                stash includes: 'config/**/*', name: 'appConfig'
                node('Protractor') {
                    dir('/opt/foo/deploy/') {
                        unstash 'builtSources'
                        unstash 'appConfig'
                    }
                }
            }
        }
        stage('Unit Tests') {
            steps {
                parallel (
                    "Jasmine": {
                        node('master') {
                            ws("${workspace}") {
                                sh 'gulp karma-tests-ci'
                            }
                        }
                    },
                    "Mocha": {
                        node('master') {
                            ws("${workspace}") {
                                sh 'gulp mocha-tests'
                            }
                        }
                    }
                )
            }
            post {
                success {
                    sh 'gulp combine-coverage-reports'
                    sh 'gulp clean-lcov'
                    publishHTML(target: [
                        allowMissing: false,
                        alwaysLinkToLastBuild: false,
                        keepAll: false,
                        reportDir: 'test/coverage',
                        reportFiles: 'index.html',
                        reportName: 'Test Coverage Report'
                    ])
                }
            }
        }
    }
}
In the Jenkins build console, I see this happens:
[Jasmine] Running on master in /var/lib/jenkins/workspace/_Pipelines_IACT-Jenkinsfile-UL3RGRZZQD3LOPY2FUEKN5XCY4ZZ6AGJVM24PLTO3OPL54KTJCEQ#2
[Pipeline] [Jasmine] {
[Pipeline] [Jasmine] ws
[Jasmine] Running in /var/lib/jenkins/workspace/_Pipelines_IACT-Jenkinsfile-UL3RGRZZQD3LOPY2FUEKN5XCY4ZZ6AGJVM24PLTO3OPL54KTJCEQ#2#2
The original workspace allocated from the first stage is actually _Pipelines_IACT-Jenkinsfile-UL3RGRZZQD3LOPY2FUEKN5XCY4ZZ6AGJVM24PLTO3OPL54KTJCEQ
So it doesn't look like it's working. What am I doing wrong here?
Thanks
pipeline {
    agent none
    stages {
        stage('Stage-One') {
            steps {
                echo 'StageOne.....'
                script { name = 'StackOverFlow' }
            }
        }
        stage('Stage-Two') {
            steps {
                echo 'StageTwo.....'
                echo "${name}"
            }
        }
    }
}
The above prints StackOverFlow in Stage-Two for echo "${name}", because a variable assigned without def in a script block goes into the global binding and is visible in later stages.
You can also use sh "echo ${env.WORKSPACE}" to get the absolute path of the directory assigned to the build as a workspace.
You could put the value into an environment variable, as described in this answer:
CURRENT_PATH = sh(
    script: 'pwd',
    returnStdout: true
).trim()
Which version are you running? Maybe you can just assign the WORKSPACE variable to an environment variable?
Or did I totally misunderstand, and this is what you are looking for?
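To make the earlier attempt concrete: a def inside a script block is local to that block, so one hedged fix is to stash the path in env, which is visible across stages (stage and label names below are illustrative):

```groovy
pipeline {
    agent { label 'master' }
    stages {
        stage('Build') {
            steps {
                script {
                    // env vars persist across stages; a 'def' does not
                    env.BUILD_WS = env.WORKSPACE
                }
            }
        }
        stage('Unit Tests') {
            steps {
                node('master') {
                    // Reuse the exact workspace allocated in the Build stage
                    ws(env.BUILD_WS) {
                        sh 'gulp karma-tests-ci'
                    }
                }
            }
        }
    }
}
```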