I am using Copy Artifact 1.46 in Jenkins 2.263.4 and want to copy a file from one job to another. However, it always fails with the error:
ERROR: Failed to copy artifacts from TestPack with filter: **
Have tried this with both a scripted pipeline job and a freestyle one, on both Windows and CentOS, but with the same result. I know it has found the job, because I get an error if the job name is wrong. The job I want to copy from only has a single text file in its root directory.
My pipeline script is:
node ("${env.Node}") {
stage('dodeploy') {
copyArtifacts(projectName: 'TestPack');
}
}
I have tried copyArtifacts with and without a filter and with and without a target. In the freestyle project I tried similar settings, but get exactly the same error.
Feel I must be missing something obvious, but cannot see what.
Turns out that I was not interpreting the 'Artifacts' part of 'copyArtifacts' literally enough. It looks as though copyArtifacts can only copy files that have previously been archived in a post-build step (or pipeline stage).
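For anyone hitting the same thing, here is a minimal sketch of the working setup (the file name is just a placeholder): the upstream job has to archive the file before copyArtifacts in the downstream job can find anything to copy.
// Upstream job (TestPack): archive the file so it becomes a build artifact.
node {
    stage('archive') {
        archiveArtifacts artifacts: 'somefile.txt'   // placeholder file name
    }
}
// Downstream job: the original copy step now has something to copy.
node("${env.Node}") {
    stage('dodeploy') {
        copyArtifacts(projectName: 'TestPack')
    }
}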
I have installed the copyArtifacts plugin and created two freestyle jobs: experiment-main and experiment-1
experiment-1 just creates a file called artifact.txt with the build # in it, and archives it.
experiment-main triggers experiment-1 and then tries to copy the artifact like this:
but this is the result:
Running as SYSTEM
Building on master in workspace /var/lib/jenkins/workspace/experiment-main
Waiting for the completion of experiment-1
experiment-1 #4 started.
experiment-1 #4 completed. Result was SUCCESS
Build step 'Trigger/call builds on other projects' changed build result to SUCCESS
ERROR: Unable to find a build for artifact copy from: experiment-1
Finished: FAILURE
which isn't what I expected (or at least what I was hoping for)
I hoped it would find the experiment-1 build that was downstream from the current build.
Any ideas?
I figured out that there are variables with the numbers of triggered builds that I can use. To figure out the variable, I just printed all the environment variables with env and then found the right variable in the list.
Then I configured the copy artifacts plugin to use that build number.
I couldn't do it the way @alex-o suggested, just getting the last build of the subjob, because I might have more than one job using the subjob at once; but if you don't have that problem, that might work for you.
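For illustration, the pipeline equivalent of a "Specific build" selector driven by such a variable would look roughly like the sketch below; the exact variable name is an assumption here (it depends on how the trigger step sanitizes the project name), so check the env output of your own build.
// TRIGGERED_BUILD_NUMBER_experiment_1 is a guessed name; verify it against your build's env output.
copyArtifacts(
    projectName: 'experiment-1',
    selector: specific("${env.TRIGGERED_BUILD_NUMBER_experiment_1}")
)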
Yes, this is unexpected behavior indeed.
The reason why this won't work is hidden in the help text of the "Upstream Project Name" input field:
Downstream builds are found using fingerprints of files. That is, a build that is triggered from a build isn't always considered downstream, but you need to fingerprint files used in builds to let Jenkins track them.
So, the Copy Artifact plugin relies on fingerprint data to determine job ancestry. For that reason, you cannot use the "Downstream build of..." feature with the current job as a parent: fingerprints are recorded in a post-build step, so an ongoing build of experiment-main does not have any fingerprints associated with it by the time it is looking for a matching build of experiment-1.
It is possible to modify fingerprint information at build run-time (e.g., via Groovy), but then, it's probably best to avoid the Copy-Artifact plugin entirely and to implement the whole procedure in Groovy right away.
Your best bet is probably to refer to experiment-1 via "Last successful build" and to ensure that this is the build that you triggered before (usually this will be correct, but depending on your setup there can be race conditions).
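In pipeline syntax, a rough equivalent of that "Last successful build" selector (subject to the race condition mentioned above) would be:
copyArtifacts(
    projectName: 'experiment-1',
    selector: lastSuccessful()
)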
In a nutshell:
How can I access the location of the produced artifacts within a shell script started in a build or post-build action?
The longer story:
I'm trying to setup a jenkins job to automate the building and propagation of debian packages.
So far, I have already been successful in using the debian-pbuilder plugin to perform the build process, such that Jenkins presents the final artifacts after successfully finishing the job:
mypackage_1+020200224114528.NOREV.4_all.deb
mypackage_1+020200224114528.NOREV.4_amd64.buildinfo
mypackage_1+020200224114528.NOREV.4_amd64.changes
mypackage_1+020200224114528.NOREV.4.dsc
mypackage_1+020200224114528.NOREV.4.tar.xz
Now I would like to also automate the deployment process into the local reprepro repository, which would actually just require invoking a simple shell script I've put together.
My problem: I find no way to determine the artifact location for that deployment script to operate on. The "debian-pbuilder" plugin generates the artifacts in a temporary directory ($WORKSPACE/binaries.tmp15567690749093469649), which changes with every build.
Since the artifacts are listed properly in the finished job status view, I would expect the artifact details to be provided to the script (e.g. by environment variables). But that is obviously not the case.
I've already searched extensively for a solution, but didn't find anything helpful.
Or is it me (still somewhat of a rookie with Jenkins) following a wrong approach here?
You can use archiveArtifacts. The binaries.tmp* directory is in the workspace, so you can archive from it, but clear the workspace with deleteDir() before the build starts.
Pipeline example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                deleteDir()
                ...
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'binaries*/**', fingerprint: true
        }
    }
}
You can also check https://plugins.jenkins.io/copyartifact/
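If the reprepro deployment ends up in a separate job, the archived binaries could then be pulled in with that plugin. A sketch, assuming a build job called 'mypackage-build' and a purely illustrative reprepro invocation:
node {
    stage('Deploy') {
        // pull the archived .deb files from the build job (job name is a placeholder)
        copyArtifacts(projectName: 'mypackage-build', filter: 'binaries*/**', selector: lastSuccessful())
        // adjust the repository path and distribution to your own reprepro setup
        sh 'reprepro -b /srv/reprepro includedeb stable binaries*/*.deb'
    }
}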
I have some Windows slaves on my Jenkins, so I need to copy files to them in a pipeline. I have heard about the Copy To Slave and Copy Artifact plugins, but they don't have pipeline syntax documentation, so I don't know how to use them in a pipeline.
A direct copy doesn't work:
def inputFile = input message: 'Upload file', parameters: [file(name: 'parameters.xml')]
new hudson.FilePath(new File("${ENV:WORKSPACE}\\parameters.xml")).copyFrom(inputFile)
This code returns an error:
Caused: java.io.IOException: Failed to copy /var/lib/jenkins/jobs/_dev/jobs/(TEST)job/builds/107/parameters.xml to d:\Jenkins\workspace\_dev\(TEST)job\parameters.xml
Is there any way to copy file from master to slave in Jenkins Pipeline?
As I understand it, copyFrom is executed on your Windows node; therefore the source path (which lives on the master) is not accessible.
I think you want to look into the stash/unstash steps (Jenkins Pipeline: Basic Steps), which work across different nodes. Also this example might be helpful.
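A minimal sketch of that approach, assuming the node labels 'master' and 'windows' stand in for wherever the file lives and for your Windows slave:
node('master') {
    // assume parameters.xml already exists in this workspace
    stash name: 'params', includes: 'parameters.xml'
}
node('windows') {
    unstash 'params'            // the file now appears in this node's workspace
    bat 'type parameters.xml'
}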
Pipeline DSL context runs on the master node even if you write node('someAgentName') in your pipeline.
Try to use stash/unstash, but it is bad for large files.
Try the External Workspace Manager Plugin. It has pipeline steps and is good for large files.
Try to use intermediate storage; archive() and sh("wget $url") will be helpful.
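A rough sketch of the intermediate-storage idea; the artifact URL pattern is standard Jenkins, but the node labels and the use of curl on the Windows side are assumptions, and you may need credentials if anonymous read access is disabled.
node('master') {
    // make the file downloadable as a build artifact
    archiveArtifacts artifacts: 'parameters.xml'
}
node('windows') {
    // BUILD_URL is set by Jenkins; add authentication if required by your instance
    bat "curl -o parameters.xml ${env.BUILD_URL}artifact/parameters.xml"
}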
If the requirement is to copy an executable to the test slave and to publish the test results, this is easy to do without the Copy to Slave plugin.
A shared folder should be created on each test slave (normal Windows shared folder).
After build: The build script copies the executable to the shared directory on each slave. A simple batch script using the copy command is sufficient for this.
stage ('Copy to slaves') {
    steps {
        bat 'call "copy-to-slave.bat"'
    }
}
During test: The test script copies the executable to another directory and runs it.
After test: Post-build action "Publish Robot Framework test results" can be used to report the test results. It is not necessary to copy the test result files back to the master first.
I recommend the Pipeline: Phoenix AutoTest plugin.
Jenkins plugin website:
https://plugins.jenkins.io/phoenix-autotest/#documentation
GitHub repository of plugin:
https://github.com/jenkinsci/phoenix-autotest-plugin
I'm setting up a Jenkins declarative pipeline, where I need to copy an artifact from a different job. The artifact is of substantial size, 10.8 M, and seems to get corrupted when copied. I save the copied artifact again as an artifact in the second job and see the size as 10.78 M. Is there any reason for this behaviour or ways to avoid it?
The resulting code from the pipeline seems corrupted and a byte-by-byte comparison reveals differences between the artifact in the first and second jobs.
I use the Copy Artifact Plugin for Jenkins like so:
step([$class: 'CopyArtifact',
      projectName: 'First_Job',
      filter: '**/*.rbf',
      fingerprintArtifacts: true,
      target: '.',
])
And I save the artifact for the second time like this:
archiveArtifacts artifacts: 'My_Artifact.rbf', fingerprint: true
The artifact is copied and renamed on the system using a bat script between copying to the second job and archiving again.
After digging around on the second build machine, I found that the problem was a 'bug' in the Copy Artifact plugin. The copied artifact wasn't being cleaned up correctly after each build, and the plugin neither overwrites the previous artifact nor gives a message saying it can't overwrite a file.
This gave the appearance of a successful copy while the pipeline used the old artifact.
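One way to guard against this (not necessarily how it was fixed here) is to clean the target location before copying, so a leftover file from a previous build cannot be silently reused. A sketch:
node {
    stage('Fetch artifact') {
        deleteDir()    // remove leftovers from previous builds before copying
        step([$class: 'CopyArtifact',
              projectName: 'First_Job',
              filter: '**/*.rbf',
              fingerprintArtifacts: true,
              target: '.'])
    }
}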
Prior to Jenkins 2, I was using the Build Pipeline Plugin to build and manually deploy an application to a server.
Old configuration:
That works great, but I want to use the new Jenkins pipeline, defined in a Groovy script (Jenkinsfile), to create the manual step.
So far I have come up with the Jenkins input step.
The Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from repository
    stage 'Build'
    // Run the build
}
stage 'deployment'
input 'Do you approve deployment?'
node {
    // deploy things
}
But this waits for user input, noting that the build is not completed. I could add a timeout to the input, but this won't allow me to pick/trigger a build and deploy it later on:
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins pipeline as I had with the Build Pipeline Plugin?
This is a huge gap in the Jenkins Pipeline capabilities IMO. It is definitely hard to provide due to the fact that a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as 'workspace.tar.gz'), and then have another pipeline copy the artifact and untar it into the new workspace. This allows the second pipeline to pick up where the previous one left off. Of course there is no way to guarantee that the second pipeline cannot be executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try.
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md
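A sketch of the archive-the-workspace idea described above; the job name 'build-pipeline' and the tar options are placeholders.
// Pipeline 1: build, then archive the entire workspace as a single artifact.
node {
    stage('Build') {
        // ... checkout and build ...
        sh 'tar czf workspace.tar.gz --exclude=workspace.tar.gz .'
        archiveArtifacts artifacts: 'workspace.tar.gz'
    }
}
// Pipeline 2: run manually whenever you decide to deploy; it restores the first pipeline's workspace.
node {
    stage('Deploy') {
        copyArtifacts(projectName: 'build-pipeline', selector: lastSuccessful())
        sh 'tar xzf workspace.tar.gz'
        // ... deploy things ...
    }
}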