Optimized alternative to stashing files in Jenkins

The Jenkins pipeline currently does a build and deploy, stashes some files, unstashes them, and runs end-to-end tests on those files, like so:
// build and deploy code
stash([
    name: 'end-to-end-tests',
    includes: <a bunch of files>
])
unstash('end-to-end-tests')
// code to run tests using npm run test:end-to-end-tests
In the interest of speeding up this pipeline, is there a way to get around the stash? I need the end-to-end-tests files in order to run my tests with the appropriate npm command later on, but how can I use them without stashing (if possible)?
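One common way around it, as a minimal sketch (the 'linux' label and the build commands below are assumptions, not from the question): stash/unstash only earns its cost when files must cross node or workspace boundaries, so if the build and the end-to-end tests can run on the same agent, a single node block lets the tests reuse the workspace directly:
// A minimal sketch, assuming the build and the end-to-end tests can share
// an agent; the 'linux' label and the build commands are assumptions.
node('linux') {
    sh 'npm ci && npm run build'          // build and deploy code
    // the built files are still in this workspace; no stash/unstash needed
    sh 'npm run test:end-to-end-tests'
}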

Related

Jobs vs script: What is their difference in Travis-CI?

It seems that I can put commands, such as echo "helloworld", in the script or in the jobs section of .travis.yml. What is the difference between them?
They are completely different pieces of functionality defined in .travis.yml:
script: is a build/job phase in which you run the commands for that specific step. [1]
jobs: is a section in which you can define multiple jobs within the .travis.yml file, where each job runs an additional build with its own script defined inside it. [2]
[1]https://docs.travis-ci.com/user/job-lifecycle/#the-job-lifecycle
[2]https://docs.travis-ci.com/user/build-matrix/#listing-individual-jobs
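A minimal sketch of the two side by side (the job names and commands are illustrative assumptions):
# A minimal illustrative sketch; job names and commands are assumptions.
language: node_js

# script: commands run during the script phase of a job
script:
  - echo "helloworld"

# jobs: define multiple jobs; each one can override phases such as script
jobs:
  include:
    - name: "unit tests"     # hypothetical job
      script: npm test       # overrides the top-level script for this job
    - name: "lint"           # hypothetical job
      script: npm run lint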

Copy file from Jenkins master to slave in Pipeline

I have some Windows slaves in my Jenkins setup, so I need to copy files to them in a pipeline. I heard about the Copy To Slave and Copy Artifact plugins, but they don't have a pipeline syntax manual, so I don't know how to use them in a pipeline.
A direct copy doesn't work:
def inputFile = input message: 'Upload file', parameters: [file(name: 'parameters.xml')]
new hudson.FilePath(new File("${ENV:WORKSPACE}\\parameters.xml")).copyFrom(inputFile)
This code returns an error:
Caused: java.io.IOException: Failed to copy /var/lib/jenkins/jobs/_dev/jobs/(TEST)job/builds/107/parameters.xml to d:\Jenkins\workspace\_dev\(TEST)job\parameters.xml
Is there any way to copy file from master to slave in Jenkins Pipeline?
As I understand it, copyFrom is executed on your Windows node, and therefore the source path is not accessible.
I think you want to look into the stash/unstash steps (Jenkins Pipeline: Basic Steps), which work across different nodes. Also this example might be helpful.
The Pipeline DSL context runs on the master node even if you write node('someAgentName') in your pipeline.
Try stash/unstash (a sketch follows below), but note that it is bad for large files.
Try the External Workspace Manager Plugin. It has pipeline steps and is good for large files.
Try an intermediate storage: archive() and sh("wget $url") will be helpful.
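A minimal sketch of the stash/unstash route (the node labels and file names are assumptions):
// A minimal sketch; the node labels and file names are assumptions.
node('linux') {
    writeFile file: 'parameters.xml', text: '<params/>'  // file created here
    stash name: 'params', includes: 'parameters.xml'     // routed via the master
}
node('windows') {
    unstash 'params'          // restores parameters.xml into this workspace
    bat 'type parameters.xml'
}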
If the requirement is to copy an executable to the test slave and to publish the test results, this is easy to do without the Copy To Slave plugin.
A shared folder should be created on each test slave (a normal Windows shared folder).
After the build: the build script copies the executable to the shared directory on each slave. A simple batch script using the copy command is sufficient for this (a sketch follows the stage snippet below).
stage ('Copy to slaves') {
    steps {
        bat 'call "copy-to-slave.bat"'
    }
}
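The referenced copy-to-slave.bat could look something like this sketch; the slave host names, share name, and executable path are assumptions:
REM copy-to-slave.bat -- a minimal sketch; the host names, share name,
REM and executable path are assumptions.
copy /Y build\MyApp.exe \\test-slave-1\shared\
copy /Y build\MyApp.exe \\test-slave-2\shared\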
During test: The test script copies the executable to another directory and runs it.
After test: Post-build action "Publish Robot Framework test results" can be used to report the test results. It is not necessary to copy the test result files back to the master first.
I recommend the Pipeline: Phoenix AutoTest plugin.
Jenkins plugin website:
https://plugins.jenkins.io/phoenix-autotest/#documentation
GitHub repository of plugin:
https://github.com/jenkinsci/phoenix-autotest-plugin

Files not appearing in Jenkins workspace

I have an npm script that executes some unit tests with Jest and then generates a test-results.xml file in a dist/ folder. It is executed from a deploy.sh script that has an npm run test line in it to kick off the test task.
When Jenkins runs my code, however, I can see that the tests are passing in the Jenkins console, but for some reason Jenkins can't see the test-results.xml file in the workspace. Hence, I get this error:
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
It's weird because I SSH'd into my server and can clearly see that my test script generated a test-results.xml file under dist/ folder, but for some reason the entire dist/ folder and everything in it is conspicuously absent from my Jenkins workspace.
This question was resolved in the question comments, but for future reference and for others in similar situations:
Make sure that you are looking at the same place as Jenkins. Jenkins always resolves the pattern provided in the Publish JUnit test result report step relative to the workspace root.
For freestyle jobs, or node steps in pipeline jobs, the location of the workspace root is printed in the console output; look for:
Building in workspace <jenkins_root>\workspace\<job_name>
or
Building remotely on Agent42 in workspace <path_to_workspaces>/workspace/<job_name>
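As a minimal pipeline sketch of getting the pattern and the workspace to line up (assuming deploy.sh runs inside the job's workspace):
// A minimal sketch, assuming deploy.sh runs inside the job's workspace so
// that dist/test-results.xml lands under the workspace root.
node {
    sh './deploy.sh'               // runs `npm run test`, writes dist/test-results.xml
    junit 'dist/test-results.xml'  // the pattern is resolved relative to the workspace root
}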

What are the differences between the {before_,}{install,script} .travis.yml options?

Inside the .travis.yml configuration file, what is the practical difference between the before_install, install, before_script and script options?
I have found no documentation explaining the differences between these options.
You don't need to use these sections, but if you do, you communicate the intent of what you're doing:
before_install:
  # execute all of the commands which need to be executed
  # before installing dependencies
  - composer self-update
  - composer validate

install:
  # install all of the dependencies you need here
  - composer install --prefer-dist

before_script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - mysql -u root -e 'CREATE DATABASE test'
  - bin/doctrine-migrations migrations:migrate

script:
  # execute all of the commands which
  # should make the build pass or fail
  - vendor/bin/phpunit
  - vendor/bin/php-cs-fixer fix --verbose --diff --dry-run
See, for example, https://github.com/localheinz/composer-normalize/blob/0.8.0/.travis.yml.
The difference is in the state of the job when something goes wrong.
Git 2.17 (Q2 2018) illustrates that in commit 3c93b82 (08 Jan 2018) by SZEDER Gábor (szeder).
(Merged by Junio C Hamano -- gitster -- in commit c710d18, 08 Mar 2018)
Its commit message illustrates the practical difference between the before_install, install, before_script and script options:
travis-ci: build Git during the 'script' phase
Ever since we started building and testing Git on Travis CI (522354d: Add Travis CI support, 2015-11-27, Git v2.7.0-rc0), we build Git in the 'before_script' phase and run the test suite in the 'script' phase (except in the later introduced 32-bit Linux and Windows build jobs, where we build in the 'script' phase).
Contrarily, the Travis CI practice is to build and test in the
'script' phase; indeed Travis CI's default build command for the
'script' phase of C/C++ projects is:
./configure && make && make test
The reason why Travis CI does it this way and why it's a better
approach than ours lies in how unsuccessful build jobs are
categorized. After something went wrong in a build job, its state can
be:
'failed', if a command in the 'script' phase returned an error. This is indicated by a red 'X' on the Travis CI web interface.
'errored', if a command in the 'before_install', 'install', or 'before_script' phase returned an error, or the build job exceeded the time limit. This is shown as a red '!' on the web interface.
This makes it easier, both for humans looking at the Travis CI web
interface and for automated tools querying the Travis CI API, to
decide when an unsuccessful build is our responsibility requiring
human attention, i.e. when a build job 'failed' because of a compiler
error or a test failure, and when it's caused by something beyond our
control and might be fixed by restarting the build job, e.g. when a
build job 'errored' because a dependency couldn't be installed due to
a temporary network error or because the OSX build job exceeded its
time limit.
The drawback of building Git in the 'before_script' phase is that one
has to check the trace log of all 'errored' build jobs, too, to see
what caused the error, as it might have been caused by a compiler
error.
This requires additional clicks and page loads on the web interface and additional complexity and API requests in automated tools.
Therefore, move building Git from the 'before_script' phase to the
'script' phase, updating the script's name accordingly as well.
'ci/run-builds.sh' now becomes basically empty, remove it.
Several of our build job configurations override our default 'before_script' to do nothing; with this change our default 'before_script' won't do
anything, either, so remove those overriding directives as well.
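The takeaway as a minimal .travis.yml sketch (prepare-env.sh is a hypothetical placeholder): keep only environment preparation in the earlier phases, and put everything that should be able to mark the build as 'failed', such as compiling and testing, in script:
# A minimal sketch; prepare-env.sh is a hypothetical placeholder.
language: c

before_script:
  # environment preparation only; an error here shows up as 'errored' (red '!')
  - ./prepare-env.sh

script:
  # build AND test here; an error here shows up as 'failed' (red 'X')
  - make
  - make test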

Is it possible to run part of Job on master and the other part on slave?

I'm new to Jenkins. I have a requirement where I need to run part of a job on the Master node and the rest on a slave node.
I tried searching on forums but couldn't find anything related to that. Is it possible to do this?
If not, I'll have to break it into two separate jobs.
EDIT
Basically, I have a job that checks out source code from SVN, then compiles and builds jar files. After that, it builds a Wise installer for this application. I'd like to do the source code checkout and compilation on the master (Linux) and delegate the Wise installer setup to a Windows slave.
It's definitely easier to do this with two separate jobs; you can make the master job trigger the slave job (or vice versa).
If you publish the files that need to be bundled into the installer as build artifacts from the master build, you can pull them onto the slave via a Jenkins URL and create the installer there. Use the "Archive artifacts" post-build step in the master build to do this.
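As a rough sketch of that pull step on the Windows slave (the Jenkins URL, job name, artifact path, and installer script below are all assumptions, and a curl binary must be available on the slave):
REM A rough sketch; the Jenkins URL, job name, artifact path, and the
REM installer script are all assumptions; curl must exist on the slave.
curl -O "http://jenkins.example.com/job/build-jars/lastSuccessfulBuild/artifact/build/app.jar"
call build-wise-installer.bat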
The Pipeline Plugin allows you to write jobs that run on multiple slave nodes. You don't even have to create separate jobs in Jenkins -- just write another node statement in the Pipeline script and that block will run on an assigned node. You can specify labels if you want to restrict the type of node it runs on.
For example, this Pipeline script executes different parts of the build on two different nodes:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "mytest.exe"
}
There is more information in the Pipeline plugin tutorial. (Note that it was previously called the Workflow Plugin.)
You can use the Multijob plugin, which adds the idea of a build phase that runs other jobs in parallel as a build step. You can still continue to use the regular freestyle job build and post-build options as well.
