Can I speed up Gradle daemon startup on Jenkins CI?

Every time I push my Gradle build to Jenkins, it spends a considerable amount of time on this step:
Starting a Gradle Daemon (subsequent builds will be faster)
The relevant part of my Jenkinsfile looks like this:
stage('Build') {
    steps {
        withGradle() {
            sh 'chmod +x gradlew'
            sh './gradlew build jar'
        }
    }
}
I assumed withGradle() would keep a Gradle daemon running persistently in the background on Jenkins to avoid this sort of thing, but at this point I'm not entirely sure it does anything; the docs for it are incredibly vague.
How do I improve build times with this system?

withGradle is contributed by Jenkins' Gradle Plugin and adds console output highlighting and build scan URL capturing (showing the build scan URL in the Jenkins UI). It certainly doesn't do anything with the Gradle daemon. You do not need withGradle to run your Gradle builds in Jenkins, unless of course you rely on build scans. Doing just
stage('Build') {
    steps {
        sh 'chmod +x gradlew'
        sh './gradlew build jar'
    }
}
is perfectly fine.
Gradle daemons stop themselves after being idle for 3 hours (see the Gradle FAQ). If a build runs just once a day, the daemon will certainly be gone by the next run. This is usually the reason why the daemon is absent and needs to be started again.
Gradle might also decide to start a new daemon instance if the running daemon is classified as incompatible (because the build environment, e.g. heap memory settings, changed). As far as I know, this is explicitly highlighted in the build output.
Regarding slow daemon startup performance, the usual advice is to run the build on the latest Gradle and Java versions.
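If the three-hour idle timeout is the culprit, it can also be raised on the CI machine. A minimal sketch, assuming a gradle.properties in the project root (or in the Jenkins user's ~/.gradle); the property takes milliseconds, and the 24-hour value here is just an illustration:
# Keep the daemon alive for 24 hours instead of the default 3
org.gradle.daemon.idletimeout=86400000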
One last tip: if you are using Git as your version control system, you can get rid of the sh 'chmod +x gradlew' step by letting Git set the executable flag via update-index:
git update-index --chmod=+x gradlew
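Note that update-index only stages the mode change; it still has to be committed before Jenkins sees it, for example:
git commit -m "Make gradlew executable"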

Related

How to run a jar through Jenkins as a separate process?

The question is this: I need to run a jar file on the node. In the Jenkins pipeline I write
stage('Start bot') {
    steps {
        sh 'nohup java -jar /home/oomnpe/workspace/oomnpe_bot/target/oomnpe_bot-1.0-jar-with-dependencies.jar'
    }
}
But the build runs endlessly after launching the jar, showing the application's logs when requests are made to it. If I stop the build, the application stops too.
How do I make the jar run on the remote machine while letting the build finish? Everywhere people write about "nohup", but I use it and it makes no difference.
Try the following. Jenkins' ProcessTreeKiller normally kills every process spawned by a build once the build finishes; setting JENKINS_NODE_COOKIE=dontkill exempts the process, and the trailing & lets the sh step return immediately. Check this issue for more details.
withEnv(['JENKINS_NODE_COOKIE=dontkill']) {
    sh "nohup java -jar /home/oomnpe/workspace/oomnpe_bot/target/oomnpe_bot-1.0-jar-with-dependencies.jar &"
}

Automation Testing in Docker Containers Triggered from Jenkins

I'm running the automation tests in Docker containers to take some pressure off the Windows agents. When I try to restore the NuGet packages with the Jenkins CI/CD pipeline script below
stage('Dotnet Restore') {
    steps {
        echo delimiter + ' Dotnet Restore ' + start
        sh "/tools/dotnet/5.0.400/dotnet restore ${solution} --configfile NuGet.Config"
        echo delimiter + ' Dotnet Restore ' + end
    }
}
the console output shows the following error:
error MSB4006: There is a circular dependency in the target dependency graph involving target "_GenerateRestoreProjectPathWalk".
Can someone guide me on how to find a solution for this issue?
The same bat script and shell script are working fine with other models.

Jenkins pipeline: SSH to a server gets stuck on job

I need to SSH to a server from a simple Jenkins pipeline and do a deploy, which is simply moving to a directory and doing a git fetch and some other commands (npm install among others). The thing is that when the Jenkins job SSHes to the remote server, it connects fine but then gets stuck, and I have to stop it. I have now reduced the script to the simplest case, just an "ssh to server" step and a "pwd" command, but it still connects and gets stuck until I abort. What am I missing? Here is the simple pipeline script and the output in a screenshot:
pipeline {
    agent any
    stages {
        stage('Connect to server') {
            steps {
                sh "ssh -t -t jenkins@10.x.x.xx"
                sh "pwd"
            }
        }
        stage('branch status') {
            steps {
                sh "git status"
            }
        }
    }
}
Jenkins executes each sh step as a separate shell script: the content is written to a temporary file on the node and only then executed. Each command runs in a separate session and is not aware of the previous one, so neither the SSH session nor changes to environment variables persist between the two.
More importantly, though, you are forcing pseudo-terminal allocation with the -t flag. This is pretty much the opposite of what you want to achieve, i.e. running shell commands non-interactively. Simply
sh "ssh jenkins#10.x.x.xx pwd"
is enough for your example to work. Placing the commands on separate lines would not work in a regular shell script either, regardless of Jenkins. However, you still need the private key available on the node, otherwise the job will hang, waiting for you to provide the password interactively. Normally you will want to use the SSH Agent Plugin to provide the private key at runtime.
script {
    sshagent(["your-ssh-credentials"]) {
        sh "..."
    }
}
For executing longer command sequences, see What is the cleanest way to ssh and run multiple commands in Bash?
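For illustration, here is a sketch of the deploy described in the question, assuming a credential ID "your-ssh-credentials" exists in Jenkins and a hypothetical app directory /home/jenkins/app on the remote:
script {
    sshagent(["your-ssh-credentials"]) {
        // One remote session for all deploy commands; each Jenkins
        // sh step would otherwise start a fresh, unrelated shell
        sh 'ssh jenkins@10.x.x.xx "cd /home/jenkins/app && git fetch && npm install"'
    }
}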

Jenkins: Using result artifacts in different steps - stash and unstash

I have a Jenkinsfile declarative pipeline which has two steps:
1. build an RPM file inside a Docker container
2. build a Docker image with the RPM and run it
The first step runs inside a Docker container because it requires a specific app to build the RPM.
The second step runs directly on a Jenkins slave, which can be a different slave than the one that ran the first step.
In order to use the RPM produced by the first step, I'm currently using the stash and unstash steps. If I don't use them, the second step doesn't have access to the RPM file.
The RPM file is about 215 MB, which is more than the recommended 100 MB limit, so I'd like to know if there is a better solution.
pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Gradle: build') {
            agent {
                docker {
                    image 'some-internal-image'
                }
            }
            steps {
                sh """
                    chmod +x gradlew
                    ./gradlew buildRpm
                """
            }
            post {
                success {
                    stash name: 'rpm', includes: 'Server/target/myapp.rpm'
                }
            }
        }
        stage('Gradle: build docker image') {
            steps {
                unstash 'rpm'
                sh """
                    chmod +x gradlew
                    ./gradlew buildDockerImage
                """
            }
        }
    }
}
You could use Docker's multi-stage builds, but I'm not aware of a nice implementation using Jenkins Pipelines.
We also stash several hundred megabytes to distribute them to build agents. I've experimented with uploading the artifacts to S3 and downloading them again from there, with no visible performance improvement (it only takes load off the Jenkins master).
So my very opinionated recommendation: keep it as it is and optimize once you really run into performance or load issues.
You can use Artifactory or any other binary repository manager.
From Artifactory's webpage:
As the first, and only, universal Artifact Repository Manager on the market, JFrog Artifactory fully supports software packages created by any language or technology.
...
...Artifactory provides an end-to-end, automated and bullet-proof solution for tracking artifacts from development to production.
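For example, with the Artifactory plugin's scripted pipeline API, the first stage could upload the RPM instead of stashing it. A sketch, assuming a server with ID 'my-artifactory' is configured in Jenkins and a generic repository 'my-repo' exists:
script {
    // 'my-artifactory' is the server ID configured under Manage Jenkins
    def server = Artifactory.server 'my-artifactory'
    server.upload spec: '''{
        "files": [{
            "pattern": "Server/target/myapp.rpm",
            "target": "my-repo/rpms/"
        }]
    }'''
}
The second stage can then fetch the RPM with a matching server.download spec instead of unstash.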

In Jenkins, on a Windows remote connected through Cygwin sshd, how to run an sh pipeline step?

We are porting our Jenkins pipeline to work on Windows environments.
The Jenkins master connects to our Windows remote, named winremote, using Cygwin sshd.
As described on this page, the Remote root directory of the node is given as a plain Windows path (in this case, it is set to C:\cygwin64\home\jenkins\jenkins-slave-dir)
This minimal pipeline example:
node("winremote")
{
echo "Entering Windows remote"
sh "ls -l"
}
fails with the error:
[Pipeline] echo
Entering Windows remote
[Pipeline] sh
[C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms] Running shell script
sh: C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms@tmp\durable-a739272f\script.sh: command not found
SSHing into the Windows remote, I was able to see that Jenkins actually created a workspace subdirectory in C:\cygwin64\home\jenkins\jenkins-slave-dir, but it is left empty.
Is there a known way to use the sh pipeline step on such a remote?
A PR from blatinville, which was merged a few hours after this question was posted, solves this first issue.
Sadly, it introduces another problem, described in ticket JENKINS-41225, with the error:
nohup: failed to run command 'sh': No such file or directory
There is a proposed PR with a quickfix for this issue.
Finally, there is a last problem with how the durable-task-plugin evaluates whether a task is still alive using 'ps', with another PR fixing it.
Temporary solution
Until those (or equivalent) fixes are applied, one can compile a Cygwin-compatible durable-task-plugin with the following commands:
git clone https://github.com/Adnn/durable-task-plugin.git -b cygwin_fixes
cd durable-task-plugin/
mvn clean install -DskipTests
This notably generates the target/durable-task.hpi file, which can be used to replace the durable-task.jpi file installed by Jenkins in its plugins folder. Jenkins then needs to be restarted.
