kubernetes jenkins docker command not found - docker

Installed Jenkins using helm
helm install --name jenkins -f values.yaml stable/jenkins
Jenkins plugins installed:
- kubernetes:1.12.6
- workflow-job:2.31
- workflow-aggregator:2.5
- credentials-binding:1.16
- git:3.9.3
- docker:1.1.6
Defined a Jenkins pipeline to build a Docker container:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
It throws the error: docker: not found

You can define an agent pod with containers for the required tools (Docker, Maven, Helm, etc.) in the pipeline:
First, create agentpod.yaml with the following values:
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - name: m2
      mountPath: /root/.m2
  - name: docker
    image: docker:19.03
    command:
    - cat
    tty: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  - name: m2
    hostPath:
      path: /root/.m2
Then configure the pipeline as:
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yamlFile 'agentpod.yaml'
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn package'
                }
            }
        }
        stage('Docker Build') {
            steps {
                container('docker') {
                    sh "docker build -t dockerimage ."
                }
            }
        }
    }
}

It seems like you have only installed the plugins but not the packages themselves. There are two possibilities:
Configure Jenkins to install the packages as tools:
Go to Manage Jenkins
Global Tool Configuration
Docker -> fill in a name (e.g. Docker-latest)
Check "Install automatically", then add an installer (Download from docker.com).
Then save.
If you have installed Docker on the agent machine, update the PATH variable in Jenkins with the location of the Docker binary.
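If you go the tool-configuration route, the configured installation can be put on the PATH from a pipeline. A minimal scripted sketch, assuming the tool was named Docker-latest as above (use whatever name you chose):
node {
    // Resolve the Docker installation configured under Global Tool Configuration
    def dockerHome = tool 'Docker-latest'
    // Prepend the tool's bin directory to PATH for the steps inside this block
    withEnv(["PATH+DOCKER=${dockerHome}/bin"]) {
        sh 'docker version'
    }
}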

Related

Jenkins pipeline - start pod with image built in previous stage in a different pod

I have a Jenkins deployed on a kubernetes cluster A, and it can spawn pods in the same cluster A. My workflow is the following:
- Start a pod with an image designed to build a docker image. This image will be built with a deterministic tag example/appname:${GIT_COMMIT}
- After the image was built in the previous pod, start a new pod with the image from the previous build (example/appname:${GIT_COMMIT}), and run multiple test tools
Currently I am successfully making a build with:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: aws-dockerizer
    image: example/aws-dockerizer:0.1.7
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
""") {
    node(POD_LABEL) {
        stage('Clone') {
            git url: 'https://github.com/example/my-app/', branch: '${build_branch_name}', credentialsId: 'github-app'
            container('aws-dockerizer') {
                stage('Build and deploy') {
                    withAWS(credentials: 'aws-credentials', region: 'eu-central-1') {
                        sh '''#!/usr/bin/env bash
                        git config --global --add safe.directory ${WORKSPACE}
                        scripts/build_and_push_docker_image_to_aws.sh
                        '''
                    }
                }
            }
        }
    }
}
I'd like to add the following stage to my pipeline. Note that the new pod "experience-${GIT_COMMIT}" CANNOT be started before the previous step is complete, because the image does not exist until then.
podTemplate(
    label: "experience-${GIT_COMMIT}",
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: experience
    image: example.dkr.ecr.eu-central-1.amazonaws.com/example/appname:${GIT_COMMIT}
    command: ['cat']
    tty: true
""") {
    stage('Run tests') {
        node("experience-${GIT_COMMIT}") {
            container('experience') {
                stage('Run Rspec') {
                    sh 'bundle exec rspec'
                }
            }
        }
    }
}
Any idea if this is possible? What are the DSL / concepts I need to use? How do I "merge" the two pieces of code to achieve what I want?
I've tried to play around a bit, but when I declare both pod templates at the beginning, it hangs the job until the 2nd pod is ready, which it never will be...
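One pattern worth trying (a sketch only, not verified against this exact setup): keep the two podTemplate blocks as separate, sequential scopes in the same scripted pipeline, so the second pod is only requested after the first block has finished pushing the image. The variables buildPodYaml and testPodYaml below are placeholders for the two YAML documents shown above:
// First pod: build and push example/appname:${GIT_COMMIT}
podTemplate(yaml: buildPodYaml) {
    node(POD_LABEL) {
        stage('Build and deploy') {
            // ... checkout, build and push the image as in the existing pipeline ...
        }
    }
}
// Second pod: requested only after the block above has completed,
// so the image tag already exists when Kubernetes pulls it
podTemplate(yaml: testPodYaml) {
    node(POD_LABEL) {
        stage('Run tests') {
            container('experience') {
                sh 'bundle exec rspec'
            }
        }
    }
}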

Curl in Kubernetes agent on Jenkins

I have a script that uses curl, and that script should be run in a Kubernetes agent on Jenkins. Here is my original agent configuration:
pipeline {
    agent {
        kubernetes {
            customWorkspace 'ng-cleaner'
            yaml """
kind: Pod
metadata:
spec:
  imagePullSecrets:
  - name: jenkins-docker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: agentpool
            operator: In
            values:
            - build
  schedulerName: default-scheduler
  tolerations:
  - key: type
    operator: Equal
    value: jenkins
    effect: NoSchedule
  containers:
  - name: jnlp
    env:
    - name: CONTAINER_ENV_VAR
      value: jnlp
  - name: build
    image: tixartifactory-docker.jfrog.io/baseimages/helm:helm3.2.1-helm2.16.2-kubectl.0
    ttyEnabled: true
    command:
    - cat
    tty: true
"""
        }
    }
The error message is "curl ....
/home/jenkins/agent/ng-cleaner#tmp/durable-0d154ecf/script.sh: 2: curl: not found"
What I tried:
Added a shell step to the main "build" container: shell: sh "apk add --no-cache curl" (also tried "apt install curl") - didn't help.
Added a new container with a curl image - didn't help either:
  - name: curl
    image: curlimages/curl:7.83.1
    ttyEnabled: true
    tty: true
    command:
    - cat
Any suggestions on how I can make it work?
I resolved it.
I needed to add a shell step to the main container:
shell: sh "apk add --no-cache curl"
and then place my script inside the container block:
stages {
    stage('MyStage') {
        steps {
            container('build') {
                script {
                    // curl commands go here
                }
            }
        }
    }
}
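An alternative worth noting (a sketch only, assuming the curlimages/curl sidecar from the question is kept in the pod YAML): each sh step runs in whichever container it is wrapped in, so the curl sidecar only helps if the script is placed inside container('curl'). The URL below is a placeholder:
stages {
    stage('MyStage') {
        steps {
            // Wrap the step so it runs in the curl sidecar instead of the jnlp/build container
            container('curl') {
                sh 'curl -fsS https://example.com/health'
            }
        }
    }
}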

Set environment variables to Jenkins pipeline for deploying kubernetes pods

I have a Kubernetes cluster where Jenkins is deployed and scaled.
Below is the podTemplate YAML that I run in a Jenkins pipeline:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker
    image: docker/compose
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
"""
) {
    def image = "image/name"
    node(POD_LABEL) {
        stage('Build and run Docker image') {
            git branch: 'test',
                credentialsId: 'github-credentials',
                url: 'https://github.com/project/project.git'
            container('docker') {
                sh "docker-compose up -d --build"
            }
        }
    }
}
I got an error:
[Pipeline] sh
+ docker-compose up -d --build
The ENV variable is not set. Defaulting to a blank string.
[Pipeline] }
What is the best practice for setting env vars during this kind of deployment?
Update:
Yes, it works outside of Jenkins.
I have listed the env var files in the docker-compose YAML file (the relevant line is env_file):
...
    context: ./validator
    ports:
      - '${VALIDATOR_PROD_ONE_PORT}:${VALIDATOR_PROD_ONE_PORT}'
    volumes:
      - ./validator/logs_1:/usr/src/app/logs
    env_file: .env.validator.test
...
Of course I can set an env var in the Jenkins pipeline before executing docker-compose build, for example:
container('docker') {
    sh 'echo "someEnvVar=value" > .env.validator.test'
    sh "docker-compose up -d --build"
}
This works too, but it is not beautiful (:
You should be able to set environment variables on your stage as follows:
stage('Build and run Docker image') {
    environment {
        SOME_ENV_VAR = "SOME_VAL"
    }
    git branch: 'test',
        credentialsId: 'github-credentials',
        url: 'https://github.com/project/project.git'
    container('docker') {
        sh "docker-compose up -d --build"
    }
}
This essentially sets shell environment variables, which should take precedence over those in the .env file.
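Since the question uses a scripted podTemplate block rather than a declarative pipeline, a withEnv wrapper (a standard pipeline step) gives the same effect there. A minimal sketch, reusing the VALIDATOR_PROD_ONE_PORT variable from the compose file with a placeholder value:
container('docker') {
    // Variables listed here are visible to docker-compose as shell environment variables
    withEnv(['VALIDATOR_PROD_ONE_PORT=8080']) {
        sh "docker-compose up -d --build"
    }
}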

Use "label" or define a pod template in jenkinsfile for kubernetes-plugin?

In General
I'm trying to use a label with the kubernetes-plugin for Jenkins, but I get a little bit confused.
In my pipeline below I'm trying to build a test job in parallel steps with different labels (agents).
I have already configured the plugin with a pod template and container in my Jenkins config, using the same settings as in the podTemplate defined in the pipeline.
Issue
The problem is that when I use the agent label in stage 2, the jnlp image is running instead of the image that I point to in the config, someimage:latest.
In stage 1, where I define the pod in the pipeline, there is no problem and the required images run fine.
Question
What am I doing wrong?
Here is my Jenkinsfile and the config of the kubernetes-plugin in Jenkins:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: k8s
    image: someimage:latest
    command:
    - sleep
    args:
    - infinity
    volumeMounts:
    - name: workspace-volume
      mountPath: /home/jenkins/agent
    workingDir: "/home/jenkins/agent"
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: "jenkins-worker-pvc"
      readOnly: false
"""
pipeline {
    agent none
    stages {
        stage("Parallel") {
            parallel {
                stage("1.k8s") {
                    agent {
                        kubernetes {
                            yaml podTemplate
                            defaultContainer 'k8s'
                        }
                    }
                    steps {
                        sh """
                        mvn -version
                        """
                    }
                }
                stage("2. k8s") {
                    agent { label 'k8s' }
                    steps {
                        sh """
                        mvn -version
                        """
                    }
                }
                stage("win") {
                    agent { label 'windows' }
                    steps {
                        bat "dir"
                    }
                }
            }
        }
    }
}
You did not specify an image for the stages with the labels k8s and windows.
You can read in the docs that:
The plugin creates a Kubernetes Pod for each agent started, defined by the Docker image to run, and stops it after each build.
Agents are launched using JNLP, so it is expected that the image connects automatically to the Jenkins master.
You are using podTemplate, and I would advise setting up containers; this might look like the following:
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }
        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}
You can read more about Container Configuration and Container Group Support in the plugin documentation.
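If you keep the pod template defined in the Jenkins config and select it with agent { label 'k8s' }, the steps still run in the default jnlp container unless you wrap them in a container step. A minimal sketch for stage 2, assuming the configured pod template defines a container named k8s:
stage("2. k8s") {
    agent { label 'k8s' }
    steps {
        // Without this wrapper the sh step runs in the jnlp container
        container('k8s') {
            sh "mvn -version"
        }
    }
}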

Run Kaniko in Jenkins Slave

I want to run Kaniko as a slave in Jenkins. My pipeline is running on the Docker plugin; how can I set the GCR credentials for Kaniko?
I want to upload the GCR credentials to the Jenkins master server.
My pipeline Groovy is shown below:
node("kaniko-jnlp") {
stage('Building Stage') {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
sh ''' /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-
skip-tls-verify --cache=true
--- destination=gcr.io/project/project:v1 '''
}
I am using Kaniko to build images and push to a private repo. My Kaniko docker image uses a Kubernetes pull-secret for authentication, but you should be able to use the following code:
stage('Kaniko') {
    environment {
        ARTIFACTORY_CREDS = credentials('your-credentials')
    }
    steps {
        sh "echo ********** EXAMPLE APP **********"
        container(name: 'kaniko', shell: '/busybox/sh') {
            withEnv(['PATH+EXTRA=/busybox']) {
                sh '''#!/busybox/sh
                /kaniko/executor --context `pwd` --cleanup --dockerfile=your/Dockerfile --build-arg ARTIFACTORY_USER=$ARTIFACTORY_CREDS_USR --build-arg ARTIFACTORY_PASS=$ARTIFACTORY_CREDS_PSW --destination=your.docker.repo/team/image:tag
                '''
            }
        }
    }
}
I run my whole pipeline encapsulated inside a pod; here is how I use Kaniko:
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: worker
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: dockercred
      mountPath: /root/.docker/
  volumes:
  - name: dockercred
    secret:
      secretName: dockercred
"""
        }
    }
    stages {
        stage('Stage 1: Build with Kaniko') {
            steps {
                container('kaniko') {
                    sh '''/kaniko/executor --context=git://github.com/repository/project.git \
                        --destination=repository/image:tag \
                        --insecure \
                        --skip-tls-verify \
                        -v=debug'''
                }
            }
        }
    }
}
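For GCR specifically, Kaniko can also authenticate with a service-account key exposed through GOOGLE_APPLICATION_CREDENTIALS. A sketch only, assuming a Kubernetes secret named kaniko-gcr-secret that holds a kaniko-secret.json key file for a service account allowed to push to GCR; this container fragment would replace the dockercred mount in the pod YAML above:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    env:
    # Kaniko reads this key when pushing to gcr.io
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-gcr-secret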
