Docker in Jenkins on OpenShift

When using Docker in a Jenkins pipeline, I get the error "docker: command not found".
How can I use Docker on OpenShift Jenkins?
Also, I have no permissions to open the plugins page in Jenkins.

The easiest way is to use the Jenkins installation provided by OpenShift.
The doc for Jenkins in OpenShift is here: https://docs.openshift.com/container-platform/4.8/openshift_images/using_images/images-other-jenkins.html
To build Docker images, the proposed solution is to use "source-to-image" (s2i) and/or a "BuildConfig" to manage "docker builds".
The doc also explains how to manage Jenkins authentication/authorization with Jenkins integrated into OCP (OAuth + role bindings to OpenShift).
Bonus points: Jenkins comes configured with all the required plugins (Kubernetes, OCP, ...) and is automatically upgraded when you upgrade OCP.
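For illustration, a minimal BuildConfig using the docker strategy could look like this (the repository URL and names are placeholders, not from the question):

```yaml
# Hypothetical BuildConfig: builds an image from a Dockerfile in a git repository,
# inside the cluster, so no Docker daemon is needed on the Jenkins pod itself.
# Names and URL are examples; adjust to your project.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://github.com/example/my-app.git
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest
```

A pipeline would then trigger it with something like `oc start-build my-app` rather than invoking `docker build` directly.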

Using Jenkins on OpenShift, you won't find any Docker runtime on your Jenkins master - and probably not on the slaves either: CRI-O is the preferred runtime (and the only runtime, as of 4.x).
If you want to start containers, you're supposed to use one of the OpenShift or Kubernetes plugins to create Pods. In OpenShift, you won't need to configure or install anything on Jenkins itself - aside from writing your pipeline. As long as your Jenkins ServiceAccount has edit or admin privileges over a Project, it will be able to manage containers in the corresponding Namespace.
pipeline {
    agent {
        node { label 'maven' }
    }
    stages {
        stage('clone') {
            steps {
                // clone the repository serving your YAMLs
                checkout scm
            }
        }
        stage('create') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            try {
                                timeout(5) {
                                    // process the template into a list of objects
                                    objectsFromTemplate = openshift.process("-f", "/path/to/clone/directory/deploy.yaml",
                                        '-p', "FRONTNAME=${params.buildHash}",
                                        '-p', "LDAP_IMAGE_TAG=${params.buildHash}",
                                        '-p', "ROOT_DOMAIN=${params.rootDomain}")
                                    echo "The template will create ${objectsFromTemplate.size()} objects"
                                    // templateSel/templateMark: label key and prefix defined elsewhere in the pipeline
                                    for (o in objectsFromTemplate) { o.metadata.labels["${templateSel}"] = "${templateMark}-${params.buildHash}" }
                                    created = openshift.create(objectsFromTemplate)
                                    created.withEach { echo "Created ${it.name()} from template with labels ${it.object().metadata.labels}" }
                                }
                            } catch(e) {
                                // DoSomething
                            }
                        }
                    }
                }
            }
        }
    }
}
If you don't have full access to your Jenkins admin interface, check the RoleBindings in that namespace: maybe you're not an admin?
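For reference, a RoleBinding granting the Jenkins ServiceAccount edit rights over a project might look like this (namespace and ServiceAccount names are assumptions):

```yaml
# Hypothetical RoleBinding: lets the "jenkins" ServiceAccount from the "ci"
# namespace manage objects in "my-project" (names are examples)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-edit
  namespace: my-project
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: ci
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```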
Not being a Jenkins admin won't prevent you from creating your own podTemplates (using ConfigMaps), Credentials (using Secrets), or jobs/Jenkinsfiles (using BuildConfigs). And you may still install additional plugins to Jenkins, by adding to or changing the content of the INSTALL_PLUGINS environment variable in your Jenkins deployment.
Also note that the Jenkins ServiceAccount token could be used to authenticate against Jenkins as admin, working around the Jenkins OpenShift OAuth integration.
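A sketch of setting INSTALL_PLUGINS on the OpenShift-provided Jenkins deployment (the plugin IDs and versions below are examples, not recommendations):

```yaml
# Hypothetical fragment of the Jenkins DeploymentConfig:
# extra plugins to install at container startup
spec:
  template:
    spec:
      containers:
        - name: jenkins
          env:
            - name: INSTALL_PLUGINS
              value: "ansicolor:1.0.2,timestamper:1.18"
```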

Related

Is there any kubernetes agent in Tekton pipeline similar to Jenkins Agent

I am new to Tekton pipelines. I am migrating a Jenkinsfile to a Tekton pipeline.
I have the following Jenkinsfile code:
pipeline {
    libraries {
        lib 'operator-pipeline-library'
    }
    parameters {
        string(name: 'OPERATOR_CATALOG_REPO', description: 'operator images that are in the catalog', defaultValue: 'operator-3rdparty-dev-local.net')
        string(name: 'OPERATOR_INDEX', description: 'Artifactory', defaultValue: 'operator-3rdparty-catalog-dev-local.net')
    }
    agent {
        kubernetes {
            cloud 'openshift'
            yamlFile 'operator-mirror-pod.yaml'
        }
    }
}
I want to know how to rewrite the following
agent {
    kubernetes {
        cloud 'openshift'
        yamlFile 'operator-pod.yaml'
    }
}
in Tekton pipelines
When I want to do something with Tekton, the first thing I would recommend checking is the Tekton Catalog. They have a GitHub repository.
In your case, they have the openshift-client task, which could be useful.
Depending on how your Jenkins was configured, you may have to reproduce its credentials configuration as Kubernetes Secrets - or, when executing commands against the local cluster, make sure your PipelineRun uses a ServiceAccount with enough privileges to create objects.
As a more generic answer to "how do I translate Jenkins Groovy into Tekton": you don't. You rewrite everything from scratch. And while that may be the beginning of a long road, at the end of it you're done with Groovy. You can script in Python, Ruby, shell, or whatever runtime you have in your pods - all of which are better options in terms of performance, maintenance, ...
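As a rough sketch (names and the SCRIPT parameter are assumptions; check the exact interface of the catalog task you install), a Tekton Pipeline using the openshift-client task could look like:

```yaml
# Hypothetical Tekton Pipeline: replaces the Jenkins kubernetes agent block with
# a task that runs oc commands (assumes the openshift-client catalog task is installed)
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: operator-mirror
spec:
  params:
    - name: OPERATOR_CATALOG_REPO
      default: operator-3rdparty-dev-local.net
  tasks:
    - name: mirror
      taskRef:
        name: openshift-client
      params:
        - name: SCRIPT
          value: "oc get pods"  # placeholder for your actual oc commands
```

The ServiceAccount then goes on the PipelineRun (field name per the v1beta1 API): `spec.serviceAccountName`, pointing at an account with the privileges mentioned above.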

Use existing service in Kubernetes via Jenkins

I have a Jenkins pipeline which runs an application in the cloud using the Kubernetes plugin.
So far, I have a simple YAML file which configures a pod. The Jenkins pipeline creates a pod and does some operations (it's parsing some data).
I've created a service (with 1 replica) which I deployed, and I want to use that in Jenkins instead of creating the same pod every time I run.
Can someone please advise how to do that?
Currently this is how I am running the pipeline:
stage('Parse logs') {
    agent {
        kubernetes {
            cloud 'sandbox'
            label 'log-parser'
            yamlFile 'jenkins/logparser.yaml'
        }
    }
    when {
        beforeAgent true
        expression { params.parse_logs }
    }
    steps {
        container('log-parser') {
            sh '/usr/local/openjdk-11/bin/java -jar /opt/log-parser/log-parser.jar --year=$year --month=$month --day=$day --hour=$hour'
        }
    }
}
Can you please advise how to use the created 'log-parser' service instead of creating a pod every time I run the pipeline?

Jenkins avoid tool installation if it is installed already

Not a Jenkins expert here. I have a scripted pipeline with a tool installed (Node). Unfortunately, it was configured to pull in other dependencies, which now takes 250 seconds overall. I'd like to add a condition to avoid this installation if it (Node with packages) was already installed previously, but I don't know where to start. Perhaps Jenkins stores meta info from previous runs that can be checked?
node {
    env.NODEJS_HOME = "${tool 'Node v8.11.3'}"
    env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
    env.PATH = "/opt/xs/bin:${env.PATH}"
    // ...
}
Are you using dynamic Jenkins agents (Docker containers)? In that case, tools will be installed every time you run a build.
Mount volumes into the containers, use persistent agents, or build your own Docker image with Node.js installed.
As I see it, you use a workaround to install the Node.js tool.
Jenkins supports this natively (declarative style):
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.5'
    }
    stages {
        stage('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
On the first run, the tool will be installed. On subsequent runs it will not be, since it is already there.
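For the scripted pipeline in the question, one possible workaround (a sketch; the marker file and package name are assumptions, and it only helps if the agent's filesystem persists between builds) is to guard the slow part with a fileExists check:

```groovy
node {
    env.NODEJS_HOME = "${tool 'Node v8.11.3'}"
    env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
    // Hypothetical guard: skip the slow extra installation when a marker file
    // from a previous run is still present (requires a persistent agent)
    if (!fileExists("${env.NODEJS_HOME}/.deps-installed")) {
        sh 'npm install -g some-extra-package'  // placeholder for the slow step
        writeFile file: "${env.NODEJS_HOME}/.deps-installed", text: 'done'
    }
}
```

On dynamic (container) agents the marker disappears with the pod, which is why the volume-mount or custom-image approaches above are the more robust fix.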

Triggering vSphere build via Jenkins pipeline agent

My goal is to set up a declarative pipeline job which automatically triggers the vSphere plugin to create a VM on which the build and tests run in a clean environment.
I've configured the vSphere Cloud Plugin in Jenkins' global settings to build slaves with the label "appliance-slave", and this does trigger for freestyle jobs with "Restrict where this project can be run" set to that label. However, the following example pipeline never triggers the vSphere plugin (based on tailing the Jenkins log):
pipeline {
    agent {
        label 'appliance-slave'
    }
    stages {
        stage('Test') {
            steps {
                sh "hostname && hostname -i"
            }
        }
    }
}
I've searched the documentation without any luck. Is there some configuration option or alternate agent declaration that I'm missing that would allow this?
Finally resolved the problem; the issue was that I needed to go into the actual slave configuration and set up the slave there. The vSphere plugin modifies the slave configuration page to allow exactly what I was trying to do: shutting down and reverting the VM once the build is complete.

Jenkins Declarative Pipeline won't work with Docker Swarm

Running this simple pipeline:
pipeline {
    agent { label 'docker-swarm' }
    /* ------------------- */
    stages {
        stage("Build") {
            agent {
                docker {
                    reuseNode true
                    image 'maven:3.5.0-jdk-8'
                }
            }
            steps {
                sh 'mvn -version'
            }
        }
    }
}
Produces this error:
Queued: All nodes of label ‘docker-swarm’ are offline
After ~1 minute the error message changes to:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
The strange thing is that when I test the connection in Manage Jenkins → Cloud, it connects without a problem.
Anybody got an idea how to fix this?
Update 11/27/2017: The screenshot shows an incorrect configuration which does not use the swarm, but instead uses a single manager node. To use the swarm, I switched to port 3376 and the Test Connection output changes to Version = swarm/1.2.8, API Version = 1.22. I noticed my mistake after never seeing builds run on the other nodes in the swarm, and the swarm manager getting overwhelmed.
In the original question it is not clear which versions of the various Jenkins plugins are being used, nor how the swarm was configured.
I have successfully used:
Jenkins 2.73.3
Docker Plugin 1.0.4
Standalone (aka Classic) Docker Swarm
At the time of writing, the newer Docker Swarm mode was not supported by the Docker Plugin, although a non-swarm Docker engine supposedly would work.
Configured with credentials that provide the certificates needed to connect to the swarm. I do recall it taking me a few tries to get that right.
The following pipeline works:
pipeline {
    agent {
        docker 'maven:3.5.0-jdk-8'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -version'
            }
        }
    }
}
Giving output:
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T19:39:06Z)
Maven home: /usr/share/maven
Java version: 1.8.0_141, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-openjdk-amd64/jre
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "4.4.93-boot2docker", arch: "amd64", family: "unix"
The key differences with the pipeline in the original question are:
Use of a single pipeline scoped agent with the image.
Not using reuseNode.
Not using label.
I have not personally tried reuseNode or label.