Jenkins: avoid tool installation if it is already installed

Not a Jenkins expert here. I have a scripted pipeline with a tool installation (Node). Unfortunately it is configured to pull in other dependencies, which now takes about 250 seconds overall. I'd like to add a condition that skips the installation if Node (with its packages) was already installed by a previous run, but I don't know where to start. Does Jenkins store metadata from previous runs that can be checked?
node {
    env.NODEJS_HOME = "${tool 'Node v8.11.3'}"
    env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
    env.PATH = "/opt/xs/bin:${env.PATH}"
    // ...
}
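One scripted-pipeline approach is to probe for a marker left by a previous installation and only invoke the `tool` step when it is missing. A sketch — the tool directory path and the `bin/node` marker are assumptions; check where your agent actually unpacks the tool:

```groovy
node {
    // Assumed location of the unpacked tool on this agent; the real path
    // depends on the agent's root directory and the installer descriptor.
    def nodeHome = "${env.HOME}/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/Node_v8.11.3"
    if (!fileExists("${nodeHome}/bin/node")) {
        // Only pay the ~250s installation cost when the binary is absent
        nodeHome = "${tool 'Node v8.11.3'}"
    }
    env.NODEJS_HOME = nodeHome
    env.PATH = "${nodeHome}/bin:${env.PATH}"
    env.PATH = "/opt/xs/bin:${env.PATH}"
}
```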

Are you using dynamic Jenkins agents (Docker containers)? In that case the tools will be installed every time you run a build.
Mount volumes into the containers, use persistent agents, or build your own Docker image with Node.js preinstalled.
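Both ideas can be sketched for a Docker agent in declarative syntax. The image tag and volume name below are examples, not values from the question:

```groovy
pipeline {
    agent {
        docker {
            // An image that already ships Node needs no tool installation
            image 'node:8.11.3-alpine'
            // Alternatively, keep your current image but persist the tool
            // cache between runs in a named volume (name is an example):
            // args '-v jenkins-tool-cache:/home/jenkins/tools'
        }
    }
    stages {
        stage('build') {
            steps {
                sh 'node -v && npm -v'
            }
        }
    }
}
```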
It also looks like you are using a workaround to install the Node.js tool.
Jenkins supports this natively (declarative style):
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.5'
    }
    stages {
        stage ('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
On the first run the tool will be installed. On subsequent runs it will not be, since it is already installed.

Related

Playwright Docker Image as Jenkins agent

I am trying to use the Playwright Docker image in Jenkins. In the official documentation, they give an example of how to use the Docker plugin:
pipeline {
    agent { docker { image 'mcr.microsoft.com/playwright:v1.25.0-focal' } }
    stages {
        stage('e2e-tests') {
            steps {
                // Depends on your language / test framework
                sh 'npm install'
                sh 'npm run test'
            }
        }
    }
}
However, using the Docker plugin is not an option for me and I have to use pod templates instead. Here is the setting that I am using:
With this setting I can see the pod is running, and I can run commands in the pod terminal. However, I get this message in the Jenkins logs; it eventually times out and the agent gets suspended.
Waiting for agent to connect (30/100):
What do I need to change in pod/container template config?
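A frequent cause of "Waiting for agent to connect" is that the pod's jnlp container was replaced by an image that does not run the Jenkins inbound agent. A hedged sketch (the container name and the sleep keep-alive are Kubernetes-plugin conventions, not values taken from the question's screenshot) keeps the plugin's default jnlp container and runs Playwright in a second container:

```groovy
pipeline {
    agent {
        kubernetes {
            // The Kubernetes plugin adds its default jnlp container
            // automatically; Playwright runs in a sidecar kept alive by sleep.
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: playwright
    image: mcr.microsoft.com/playwright:v1.25.0-focal
    command: ["sleep"]
    args: ["infinity"]
'''
        }
    }
    stages {
        stage('e2e-tests') {
            steps {
                container('playwright') {
                    sh 'npm install && npm run test'
                }
            }
        }
    }
}
```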

How to use multiple tools in Jenkins Pipeline

I need to use the nodejs as well as the terraform tools in my build stages. The declarative pipeline I used is:
pipeline {
    agent any
    tools { nodejs "node12.14.1" terraform "terraform-v0.12.19" }
    ...
Only the nodejs tool can be used; terraform is not installed and gives a "command not found" error.
We need to specify each tool on a new line instead:
pipeline {
    agent any
    tools {
        nodejs "node12.14.1"
        terraform "terraform-v0.12.19"
    }
    ...

Jenkins differences between tools and docker agent

Sorry, it might be a simple question, but what is the difference between using tools and a docker agent? I think using a docker agent is much more flexible than using tools. When should I use a docker agent and when tools?
Tools
pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
        jdk 'jdk8'
    }
    stages {
        stage ('Initialize') {
            steps {
                sh '''
                    echo "PATH = ${PATH}"
                    echo "M2_HOME = ${M2_HOME}"
                '''
            }
        }
        stage ('Build') {
            steps {
                sh 'mvn -Dmaven.test.failure.ignore=true install'
            }
        }
    }
}
Docker Agent
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}
These two options serve slightly different purposes. The tools block allows you to put specific versions of Maven, the JDK, or Gradle on your PATH. You can't use arbitrary versions; you can only use versions that are configured on the Global Tool Configuration Jenkins page:
If your Jenkins configuration contains only a single Maven version, e.g. Maven 3.6.3, you can use only this version. Specifying a version that is not configured in the Global Tool Configuration will cause your pipeline to fail.
pipeline {
    agent any
    tools {
        maven 'Maven 3.6.3'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
Using the tools block to specify different versions of supported tools is a good option if your Jenkins server cannot run Docker containers.
The docker agent, on the other hand, gives you total freedom when it comes to specifying tools and their versions. It does not limit you to Maven, the JDK, and Gradle, and it does not require any pre-configuration on your Jenkins server. The only tool you need is Docker, and you are free to use any tool you need in your Jenkins pipeline.
pipeline {
    agent {
        docker {
            image "maven:3.6.3-jdk-11-slim"
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
When to use one over another?
There is no single right answer to this question; it depends on the context. The tools block is very limiting, but it gives you control over which tools are used on your Jenkins server. In some cases, people decide not to use Docker in their Jenkins environment and prefer to control which tools are available to their users. We can agree with this or not. When it comes to the docker agent, you get access to any tool that can be shipped as a Docker container.
In some cases this is the best choice for using a specific version of a tool, since your operating system may not allow you to install the desired version. Of course, keep in mind that this power and flexibility come at a cost: you lose control over which tools are used in your Jenkins pipelines, and if you pull tons of different Docker images you will increase disk-space consumption. Not to mention that the docker agent allows you to run pipelines with tools that may consume lots of CPU and memory. (I have seen Jenkins pipelines starting Elasticsearch, Logstash, Zookeeper, and other services on nodes that were not prepared for that load.)
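If you do let pipelines run heavy tools in Docker agents, one mitigation (a sketch; the limit values are arbitrary examples) is to cap the container's resources via the `args` option, which passes flags straight to `docker run`:

```groovy
pipeline {
    agent {
        docker {
            image 'maven:3.6.3-jdk-11-slim'
            // Passed verbatim to `docker run`; example limits only
            args '--memory=2g --cpus=2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
```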

"Docker: command not found" from Jenkins on MacOS

When running jobs from a Jenkinsfile with Pipeline syntax and a Docker agent, the pipeline fails with "docker: command not found." I understand this to mean that either (1) Docker is not installed, or (2) Jenkins is not pointing to the correct Docker installation path. My situation is very similar to this issue: Docker command not found in local Jenkins multi branch pipeline. Jenkins is installed on macOS and running off localhost:8080. Docker is also installed (v18.06.0-ce-mac70).
That user's solution included a switch from declarative pipeline syntax to scripted node syntax. However, I want to resolve the issue while retaining the declarative syntax.
Jenkinsfile
#!groovy
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
        }
    }
    stages {
        stage('Unit') {
            steps {
                sh 'node -v'
                sh 'npm -v'
            }
        }
    }
}
Error message
docker inspect -f . node:7-alpine
docker: command not found
docker pull node:7-alpine
docker: command not found
In Jenkins' Global Tool Configuration, for Docker installations I tried both (1) install automatically (from docker.com) and (2) a local installation with installation root /usr/local/.
All of the relevant plugins appear to be installed as well.
I solved this problem here: https://stackoverflow.com/a/58688536/8160903
(Add Docker's path to Homebrew Jenkins plist /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist)
I would check the user who is running the jenkins process and make sure they are part of the docker group.
You can try adding the full path of the docker executable on your machine to Jenkins at Manage Jenkins > Global Tool Configuration.
I've seen it happen that the user who started Jenkins doesn't have the executable's location on $PATH.
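Note that the agent-level `docker inspect`/`docker pull` calls run before any stage starts, so they depend on the PATH of the Jenkins process itself (hence the plist fix above). For `sh` steps inside stages you can also prepend the directory explicitly; a sketch assuming Docker for Mac's usual /usr/local/bin location:

```groovy
pipeline {
    agent any
    environment {
        // /usr/local/bin is where Docker for Mac normally symlinks the
        // CLI; verify with `which docker` in a terminal first
        PATH = "/usr/local/bin:${env.PATH}"
    }
    stages {
        stage('check') {
            steps {
                sh 'docker version'
            }
        }
    }
}
```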

Jenkins Using result artifacts in different steps - stash and unstash

I have a declarative Jenkinsfile pipeline which has two steps:
build an RPM file inside a Docker container
build a Docker image with the RPM and run it
The first step is built inside a Docker container because it requires a specific app to build the RPM.
The second step runs directly on a Jenkins slave, which can be a different slave than the one that ran the first step.
In order to use the RPM produced by the first step I am currently using the stash and unstash steps. If I do not use them, the second step doesn't have access to the RPM file.
The RPM file is about 215 MB, which is more than the recommended 100 MB limit, so I'd like to know if there is a better solution.
pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Gradle: build') {
            agent {
                docker {
                    image 'some-internal-image'
                }
            }
            steps {
                sh """
                    chmod +x gradlew
                    ./gradlew buildRpm
                """
            }
            post {
                success {
                    stash name: 'rpm', includes: 'Server/target/myapp.rpm'
                }
            }
        }
        stage('Gradle: build docker image') {
            steps {
                unstash 'rpm'
                sh """
                    chmod +x gradlew
                    ./gradlew buildDockerImage
                """
            }
        }
    }
}
You could use Docker's multi-stage builds, but I'm not aware of a nice implementation of that using Jenkins Pipelines.
We also stash several hundred megabytes to distribute them to build agents. I've experimented with uploading the artifacts to S3 and downloading them again from there, with no visible performance improvement (it only takes load off the Jenkins master).
So my very opinionated recommendation: keep it as it is, and optimize once you really run into performance/load issues.
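If you do want an alternative to stash, archiving the RPM and copying it back is one option. A sketch that assumes the Copy Artifact plugin is installed (the step names come from that plugin; the paths are from the question):

```groovy
// In the first stage's post block, instead of stash:
post {
    success {
        archiveArtifacts artifacts: 'Server/target/myapp.rpm', fingerprint: true
    }
}

// In the second stage, instead of unstash (Copy Artifact plugin required):
steps {
    copyArtifacts projectName: env.JOB_NAME,
                  selector: specific(env.BUILD_NUMBER),
                  filter: 'Server/target/myapp.rpm'
    sh './gradlew buildDockerImage'
}
```

Archived artifacts are stored on the master, so this mostly shifts where the bytes travel rather than eliminating the transfer.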
You can use Artifactory or any other binary repository manager.
From Artifactory's webpage:
As the first, and only, universal Artifact Repository Manager on the market, JFrog Artifactory fully supports software packages created by any language or technology.
...
...Artifactory provides an end-to-end, automated and bullet-proof solution for tracking artifacts from development to production.
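A minimal sketch of pushing the RPM to a repository manager over plain HTTP (the URL, repository path, and credentials id are placeholders, not real values):

```groovy
stage('Publish RPM') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'repo-creds',
                                          usernameVariable: 'REPO_USER',
                                          passwordVariable: 'REPO_PASS')]) {
            // Generic HTTP PUT upload; most repository managers accept this
            sh 'curl -fu "$REPO_USER:$REPO_PASS" -T Server/target/myapp.rpm ' +
               'https://repo.example.com/artifactory/rpm-local/myapp.rpm'
        }
    }
}
```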
