Jenkins Declarative Pipeline won't work with Docker Swarm

Running this simple pipeline:
pipeline {
    agent { label 'docker-swarm' }
    /* ------------------- */
    stages {
        stage("Build") {
            agent {
                docker {
                    reuseNode true
                    image 'maven:3.5.0-jdk-8'
                }
            }
            steps {
                sh 'mvn -version'
            }
        }
    }
}
Produces this error:
Queued: All nodes of label ‘docker-swarm’ are offline
After ~1 minute the error message changes to:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
The strange thing is that when I test the connection in Manage Jenkins → Cloud, it connects without a problem.
Anybody got an idea how to fix this?

Update 11/27/2017: The screenshot showed an incorrect configuration which did not use the swarm, but instead a single manager node. To use the swarm, I switched to port 3376, and the Test Connection output changed to Version = swarm/1.2.8, API Version = 1.22. I noticed my mistake after builds never ran on the other nodes in the swarm and the swarm manager became overwhelmed.
In the original question it is not clear what versions of the various Jenkins plugins are being used, nor how the swarm was configured.
I have successfully used:
Jenkins 2.73.3
Docker Plugin 1.0.4
Standalone (aka Classic) Docker Swarm
At the time of writing, the newer Docker Swarm mode was not supported by the Docker Plugin, although a non-swarm Docker engine reportedly would work.
Configured like so:
The credentials provide the certificates needed to connect to the swarm. I do recall it taking me a few tries to get that right.
The following pipeline works:
pipeline {
    agent {
        docker 'maven:3.5.0-jdk-8'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -version'
            }
        }
    }
}
Giving output:
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T19:39:06Z)
Maven home: /usr/share/maven
Java version: 1.8.0_141, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-openjdk-amd64/jre
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "4.4.93-boot2docker", arch: "amd64", family: "unix"
The key differences with the pipeline in the original question are:
Use of a single pipeline scoped agent with the image.
Not using reuseNode.
Not using label.
I have not personally tried reuseNode or label.

Related

Playwright Docker Image as Jenkins agent

I am trying to use Playwright docker image in Jenkins. In the official documentation, they give an example of how to use Docker plugin:
pipeline {
    agent { docker { image 'mcr.microsoft.com/playwright:v1.25.0-focal' } }
    stages {
        stage('e2e-tests') {
            steps {
                // Depends on your language / test framework
                sh 'npm install'
                sh 'npm run test'
            }
        }
    }
}
However, using the Docker plugin is not an option for me and I have to use pod templates instead. Here is the setting that I am using:
With this setting, I can see the pod is running by executing commands in the pod terminal; however, I get this message in the Jenkins logs, and it eventually times out and the agent gets suspended.
Waiting for agent to connect (30/100):
What do I need to change in pod/container template config?

Docker in Jenkins on OpenShift

When using Docker in a Jenkins pipeline I got this error: "docker: command not found"
How can I start Docker on the OpenShift Jenkins?
Also, I have no permission to open the plugins page in Jenkins.
The easiest way is to use the Jenkins installation provided by OpenShift.
The doc for Jenkins in OpenShift is here: https://docs.openshift.com/container-platform/4.8/openshift_images/using_images/images-other-jenkins.html
To build Docker images, the proposed solution is to use "source-to-image" (s2i) and/or a "BuildConfig" to manage docker builds.
The doc also explains how to manage Jenkins authentication/authorization with Jenkins integrated with OCP (OAuth + role bindings to OpenShift).
Bonus points: Jenkins comes configured with all the required plugins (k8s, OCP...) and is automatically upgraded when you upgrade OCP.
Using Jenkins on OpenShift, you won't find any Docker runtime on your Jenkins master - and probably not on the slaves either: CRI-O is the preferred runtime (and the only one, as of 4.x).
If you want to start containers, you're supposed to use any of the OpenShift or Kubernetes plugins: creating Pods. In OpenShift, you won't need to configure or install anything on Jenkins itself - aside from writing your pipeline. As long as your Jenkins ServiceAccount has edit or admin privileges over a Project, then it would be able to manage containers in the corresponding Namespace.
pipeline {
    agent {
        node { label 'maven' }
    }
    stages {
        stage('clone') {
            steps {
                // clone the repository serving your YAMLs
                checkout scm
            }
        }
        stage('create') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            try {
                                timeout(5) {
                                    objectsFromTemplate = openshift.process("-f", "/path/to/clone/directory/deploy.yaml",
                                        '-p', "FRONTNAME=${params.buildHash}",
                                        '-p', "LDAP_IMAGE_TAG=${params.buildHash}",
                                        '-p', "ROOT_DOMAIN=${params.rootDomain}")
                                    echo "The template will create ${objectsFromTemplate.size()} objects"
                                    for (o in objectsFromTemplate) { o.metadata.labels["${templateSel}"] = "${templateMark}-${params.buildHash}" }
                                    created = openshift.create(objectsFromTemplate)
                                    created.withEach { echo "Created ${it.name()} from template with labels ${it.object().metadata.labels}" }
                                }
                            } catch (e) {
                                // DoSomething
                            }
                        }
                    }
                }
            }
        }
    }
}
If you don't have full access to your Jenkins admin interface, check the RoleBindings in that namespace: maybe you're not admin?
It won't prevent you from creating your own podTemplates (using ConfigMaps), Credentials (using Secrets), or jobs/Jenkinsfiles (using BuildConfigs). And you may still install additional plugins to Jenkins by adding to or changing the content of the INSTALL_PLUGINS environment variable in your Jenkins deployment.
Also note that the Jenkins ServiceAccount token can be used to authenticate against Jenkins as admin, working around the Jenkins OpenShift OAuth integration.
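The Pod-based approach described above can be sketched with the Kubernetes plugin's podTemplate step (a minimal sketch; the container image and commands here are illustrative assumptions, not taken from the question):

```groovy
// Sketch: run a build step inside a Pod created on demand by the Kubernetes plugin.
// The maven image and the sleep command are placeholder choices.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8-openjdk-11
    command: ['sleep']
    args: ['infinity']
''') {
    node(POD_LABEL) {
        container('maven') {
            sh 'mvn -version'
        }
    }
}
```

As long as the Jenkins ServiceAccount can create Pods in the target namespace, no extra node configuration is needed on the Jenkins side.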

Jenkins avoid tool installation if it is installed already

Not a Jenkins expert here. I have a scripted pipeline with a tool installed (Node). Unfortunately it was configured to pull in other dependencies, which now takes 250 seconds overall. I'd like to add a condition to skip this installation if it (Node with packages) was already installed in a previous run, but don't know where to start. Perhaps Jenkins stores metadata from previous runs that can be checked?
node {
    env.NODEJS_HOME = "${tool 'Node v8.11.3'}"
    env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
    env.PATH = "/opt/xs/bin:${env.PATH}"
    // ...
}
Are you using dynamic Jenkins agents (Docker containers)? In that case tools will be installed every time you run a build.
Mount volumes into the containers, use persistent agents, or build your own Docker image with Node.js preinstalled.
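For the mount-volumes option, a minimal declarative sketch (the image tag and cache path are assumptions; adjust them to your setup):

```groovy
pipeline {
    agent {
        docker {
            image 'node:8'
            // Reuse the npm cache from the host so dependencies are not
            // re-downloaded on every build (the host path is an example).
            args '-v $HOME/.npm:/root/.npm'
        }
    }
    stages {
        stage('build') {
            steps {
                sh 'npm ci'
            }
        }
    }
}
```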
As I see it, you are using a workaround to install the Node.js tool.
Jenkins supports this natively (declarative style):
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.5'
    }
    stages {
        stage('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
On the first run the tool will be installed. On subsequent runs it will not, since it is already installed.

Jenkins differences between tools and docker agent

Sorry, it might be a simple question, but what is the difference between using tools and the docker agent? I think the docker agent is much more flexible than tools. When should I use the docker agent versus tools?
Tools
pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
        jdk 'jdk8'
    }
    stages {
        stage('Initialize') {
            steps {
                sh '''
                    echo "PATH = ${PATH}"
                    echo "M2_HOME = ${M2_HOME}"
                '''
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -Dmaven.test.failure.ignore=true install'
            }
        }
    }
}
Docker Agent
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}
These two options serve slightly different purposes. The tools block allows you to add specific versions of Maven, JDK, or Gradle to your PATH. You can't use any arbitrary version - only versions that are configured on the Global Tool Configuration page in Jenkins.
If your Jenkins configuration contains only a single Maven version, e.g., Maven 3.6.3, you can use only this version. Specifying a version that is not configured in the Global Tool Configuration will cause your pipeline to fail.
pipeline {
    agent any
    tools {
        maven 'Maven 3.6.3'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
Using the tools block to specify different versions of supported tools will be a good option if your Jenkins server does not support running docker containers.
The docker agent, on the other hand, gives you total freedom when it comes to specifying tools and their versions. It does not limit you to maven, jdk, and gradle, and it does not require any pre-configuration in your Jenkins server. The only tool you need is docker, and you are free to use any tool you need in your Jenkins pipeline.
pipeline {
    agent {
        docker {
            image "maven:3.6.3-jdk-11-slim"
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
When to use one over another?
There is no single right answer to this question. It depends on the context. The tools block is very limiting, but it gives you control over what tools are used in your Jenkins. In some cases, people decide not to use docker in their Jenkins environment, and they prefer to control what tools are available to their users. We can agree with this or not. When it comes to using the docker agent, you get full access to any tools that can be shipped as a docker container.
In some cases, this is the best choice when it comes to using a tool with a specific version - your operating system may not allow you to install the desired version. Of course, you need to keep in mind that this power and flexibility comes with a cost. You lose control over what tools are used in your Jenkins pipelines. Also, if you pull tons of different docker images, you will increase disk space consumption. Not to mention that the docker agent allows you to run the pipeline with tools that may consume lots of CPU and memory. (I have seen Jenkins pipelines starting Elasticsearch, Logstash, Zookeeper, and other services, on nodes that were not prepared for that load.)
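If the resource consumption mentioned above is a concern, the docker agent accepts extra docker run arguments, so you can cap what a build container may consume (a sketch; the limit values are arbitrary examples):

```groovy
pipeline {
    agent {
        docker {
            image 'maven:3.6.3-jdk-11-slim'
            // Pass docker run flags so a heavy build cannot starve the node.
            args '--memory=2g --cpus=2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
```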

"Docker: command not found" from Jenkins on MacOS

When running jobs from a Jenkinsfile with Pipeline syntax and a Docker agent, the pipeline fails with "Docker: command not found." I understand this to mean that either (1) Docker is not installed; or (2) Jenkins is not pointing to the correct Docker installation path. My situation is very similar to this issue: Docker command not found in local Jenkins multi branch pipeline. Jenkins is installed on MacOS and running off of localhost:8080. Docker is also installed (v18.06.0-ce-mac70).
That user's solution involved a switch from declarative pipeline syntax to scripted node syntax. However, I want to resolve the issue while retaining the declarative syntax.
Jenkinsfile
#!groovy
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
        }
    }
    stages {
        stage('Unit') {
            steps {
                sh 'node -v'
                sh 'npm -v'
            }
        }
    }
}
Error message
docker inspect -f . node:7-alpine
docker: command not found
docker pull node:7-alpine
docker: command not found
In Jenkins Global Tool Configuration, for Docker installations I tried both (1) install automatically (from docker.com); and (2) local installation with installation root /usr/local/.
All of the relevant plugins appear to be installed as well.
I solved this problem here: https://stackoverflow.com/a/58688536/8160903
(Add Docker's path to Homebrew Jenkins plist /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist)
I would check which user is running the Jenkins process and make sure they are part of the docker group.
You can try adding the full path of the docker executable on your machine to Jenkins at Manage Jenkins > Global Tool Configuration.
I've seen it happen that the user who started Jenkins doesn't have the executable's location on $PATH.
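If editing the plist or the Global Tool Configuration is not possible, a partial workaround is to prepend Docker's install directory to PATH inside the pipeline itself (a sketch; note this only helps sh 'docker ...' steps, not the agent { docker { ... } } block, which resolves docker before the pipeline environment applies):

```groovy
pipeline {
    agent any
    environment {
        // /usr/local/bin is where Docker for Mac typically installs the CLI.
        PATH = "/usr/local/bin:${env.PATH}"
    }
    stages {
        stage('check') {
            steps {
                sh 'docker version'
            }
        }
    }
}
```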