Curl in Kubernetes agent on Jenkins - docker

I have a script that uses curl, and that script needs to run in a Kubernetes agent on Jenkins. Here is my original agent configuration:
pipeline {
    agent {
        kubernetes {
            customWorkspace 'ng-cleaner'
            yaml """
kind: Pod
metadata:
spec:
  imagePullSecrets:
  - name: jenkins-docker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: agentpool
            operator: In
            values:
            - build
  schedulerName: default-scheduler
  tolerations:
  - key: type
    operator: Equal
    value: jenkins
    effect: NoSchedule
  containers:
  - name: jnlp
    env:
    - name: CONTAINER_ENV_VAR
      value: jnlp
  - name: build
    image: tixartifactory-docker.jfrog.io/baseimages/helm:helm3.2.1-helm2.16.2-kubectl.0
    ttyEnabled: true
    command:
    - cat
    tty: true
"""
        }
    }
The error message is "curl ....
/home/jenkins/agent/ng-cleaner#tmp/durable-0d154ecf/script.sh: 2: curl: not found"
What I tried:
Added a shell step to the main "build" container: sh "apk add --no-cache curl"; also tried "apt install curl" - didn't help.
Added a new container with a curl image - didn't help either:
- name: curl
  image: curlimages/curl:7.83.1
  ttyEnabled: true
  tty: true
  command:
  - cat
Any suggestions on how I can make it work?

I resolved it.
I needed to add a shell step to the main container:
sh "apk add --no-cache curl"
and then place my script inside the container block:
stages {
    stage('MyStage') {
        steps {
            container('build') {
                script {
                    sh 'apk add --no-cache curl'  // then the curl script runs here
                }
            }
        }
    }
}

Related

Running a command inside a docker image in Jenkins Pipeline Stage

I have a simple Jenkins pipeline which creates a pod with 3 containers - jnlp, dind, example-test
This looks as follows -
agent {
    kubernetes {
        yaml """
apiVersion: v1
kind: Pod
metadata:
  name: example-pb
  annotations:
    container.apparmor.security.beta.kubernetes.io/dind: unconfined
    container.seccomp.security.alpha.kubernetes.io/dind: unconfined
  labels:
    some-label: label1
spec:
  serviceAccountName: example
  securityContext:
    runAsUser: 10000
    runAsGroup: 10000
  containers:
  - name: jnlp
    image: 'jenkins/jnlp-slave:4.3-4-alpine'
    args: ['\$(JENKINS_SECRET)', '\$(JENKINS_NAME)']
  - name: dind
    image: docker:dind
    securityContext:
      runAsUser: 0
      runAsGroup: 0
      fsGroup: 0
      privileged: true
    tty: true
    volumeMounts:
    - name: var-run
      mountPath: /var/run
  - name: example-test
    image: pranavbhatia/example-test:0.1
    securityContext:
      runAsUser: 0
      runAsGroup: 0
      fsGroup: 0
    volumeMounts:
    - name: var-run
      mountPath: /var/run
  volumes:
  - emptyDir: {}
    name: var-run
"""
    }
}
Also defined a few stages -
stages {
    stage ('DIND') {
        steps {
            container('dind') {
                sh 'pwd && echo "Pulling image" && docker pull ubuntu:18.04'
            }
        }
    }
    stage ('EXAMPLE') {
        steps {
            container('example-test') {
                sh './example'
            }
        }
    }
}
So now I have this "example" script in the container's root folder and I want to run it, but somehow the pipeline cannot find it.
The Dockerfile looks something like this -
FROM ubuntu:18.04
COPY ./example ./example
#make it executable
RUN chmod +x ./example
#command to keep container running in detached mode
CMD tail -f /dev/null
pwd returns "/home/jenkins/agent/workspace/test-pipeline", not a path inside the Docker container.
The output is as follows -
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: test-pipeline-14-s7167-4zcg5-s68gw in namespace dc-pipeline
Still waiting to schedule task
‘test-pipeline-14-s7167-4zcg5-s68gw’ is offline
Agent test-pipeline-14-s7167-4zcg5-s68gw is provisioned from template test-pipeline_14-s7167-4zcg5
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/dind: "unconfined"
    container.seccomp.security.alpha.kubernetes.io/dind: "unconfined"
    buildUrl: "http://jenkins-164-229:8080/job/test-pipeline/14/"
    runUrl: "job/test-pipeline/14/"
  labels:
    some-label: "label1"
    jenkins: "slave"
    jenkins/label: "test-pipeline_14-s7167"
  name: "test-pipeline-14-s7167-4zcg5-s68gw"
spec:
  containers:
  - args:
    - "$(JENKINS_SECRET)"
    - "$(JENKINS_NAME)"
    env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-164-229-agent:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "test-pipeline-14-s7167-4zcg5-s68gw"
    - name: "JENKINS_NAME"
      value: "test-pipeline-14-s7167-4zcg5-s68gw"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins-164-229:8080/"
    - name: "HOME"
      value: "/home/jenkins"
    image: "jenkins/jnlp-slave:4.3-4-alpine"
    name: "jnlp"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "pranavbhatia/example-test:0.1"
    name: "example-test"
    securityContext:
      runAsGroup: 0
      runAsUser: 0
    volumeMounts:
    - mountPath: "/var/run"
      name: "var-run"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "docker:dind"
    name: "dind"
    securityContext:
      privileged: true
      runAsGroup: 0
      runAsUser: 0
    tty: true
    volumeMounts:
    - mountPath: "/var/run"
      name: "var-run"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    beta.kubernetes.io/os: "linux"
  restartPolicy: "Never"
  securityContext:
    runAsGroup: 10000
    runAsUser: 10000
  serviceAccountName: "example"
  volumes:
  - emptyDir: {}
    name: "var-run"
  - emptyDir:
      medium: ""
    name: "workspace-volume"
Running on test-pipeline-14-s7167-4zcg5-s68gw in /home/jenkins/agent/workspace/test-pipeline
[Pipeline] {
[Pipeline] stage
[Pipeline] { (DIND)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ pwd
/home/jenkins/agent/workspace/test-pipeline
+ echo 'Pulling image'
Pulling image
+ docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
7595c8c21622: Pulling fs layer
d13af8ca898f: Pulling fs layer
70799171ddba: Pulling fs layer
b6c12202c5ef: Pulling fs layer
b6c12202c5ef: Waiting
d13af8ca898f: Verifying Checksum
d13af8ca898f: Download complete
70799171ddba: Verifying Checksum
70799171ddba: Download complete
b6c12202c5ef: Verifying Checksum
b6c12202c5ef: Download complete
7595c8c21622: Verifying Checksum
7595c8c21622: Download complete
7595c8c21622: Pull complete
d13af8ca898f: Pull complete
70799171ddba: Pull complete
b6c12202c5ef: Pull complete
Digest: sha256:a61728f6128fb4a7a20efaa7597607ed6e69973ee9b9123e3b4fd28b7bba100b
Status: Downloaded newer image for ubuntu:18.04
docker.io/library/ubuntu:18.04
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (EXAMPLE)
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ pwd
/home/jenkins/agent/workspace/test-pipeline
+ ./example
/home/jenkins/agent/workspace/test-pipeline#tmp/durable-26584660/script.sh: 1: /home/jenkins/agent/workspace/test-pipeline#tmp/durable-26584660/script.sh: ./example: not found
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
Any idea on how to fix this?
It may work to execute it with sh '/example' (without the dot). You installed the script in the root of the filesystem, but when Jenkins runs commands inside the container, the working directory is the workspace, not /.
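For example, the EXAMPLE stage above could call the script by its absolute path; a minimal sketch of that change:
stage ('EXAMPLE') {
    steps {
        container('example-test') {
            // The Dockerfile copied the binary into the image root, so call it by absolute path
            sh '/example'
        }
    }
}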

How to define workspace volume for jenkins pipeline declarative

I am trying to set up a declarative pipeline where I would like to persist the workspace as a volume claim so that a large git checkout can be faster. Based on the docs there are the options workspaceVolume and persistentVolumeClaimWorkspaceVolume, but I am not able to make them work - Jenkins always does the following:
volumeMounts:
- mountPath: "/home/jenkins/agent"
  name: "workspace-volume"
  readOnly: false
volumes:
- emptyDir: {}
  name: "workspace-volume"
Try something like
podTemplate(
    containers: [
        containerTemplate(name: 'tree', image: 'iankoulski/tree', ttyEnabled: true, command: 'cat')
    ],
    workspaceVolume: persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false),
) {
    node(POD_LABEL) {
        stage('read workspace') {
            checkout scm
            container('tree') {
                sh 'env'
                sh 'tree'
                sh 'test -f old-env.txt && cat old-env.txt'
                sh 'env > old-env.txt'
            }
        }
    }
}
Here is an example for a declarative pipeline:
pipeline {
    agent {
        kubernetes {
            yamlFile 'jenkins/pv-pod.yaml'
            workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false)
        }
    }
    // ... stages ...
}
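The referenced jenkins/pv-pod.yaml is not shown in the answer; a hypothetical minimal version (container name and image are placeholders) might look like:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: tree              # placeholder name, matching the scripted example above
    image: iankoulski/tree  # placeholder image
    command:
    - cat
    tty: true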
If you post your Jenkins deployment then I might be able to help with that. Meanwhile, you can look at this YAML that I used, which worked very well for me:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins:2.32.2
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}

Permission denied when connecting to docker daemon on jenkinsci/blueocean image deployed to kubernetes

Summary
Running a declarative pipeline job in jenkins which was deployed to a kubernetes cluster fails when using the docker agent with the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=node&tag=10.15.1: dial unix /var/run/docker.sock: connect: permission denied
How can I solve this permission error in the kubernetes declaration?
Background
We have a Jenkins server which was deployed to a Kubernetes cluster using the jenkinsci/blueocean image. The Kubernetes declaration was done as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
We then declare a declarative pipeline jenkins job as follows:
pipeline {
    agent {
        docker {
            image 'node:10.15.1'
            label 'master'
        }
    }
    stages {
        stage('Checkout source code') {
            steps {
                checkout scm
            }
        }
        stage('Build project') {
            steps {
                sh 'npm install'
                sh 'npm run compile'
            }
        }
        stage('Run quality assurance') {
            steps {
                sh 'npm run style:check'
                sh 'npm run test:coverage'
            }
        }
    }
}
This job fails with the aforementioned error. My suspicion is that the docker socket was mounted into the pod, but the user running the job does not have permission to access the socket. I, however, cannot add the user to the group in the created pod using sudo usermod -a -G docker $USER, since the pod will be recreated upon each redeploy.
Questions
Is it possible to mount the docker volume using the correct user in the kubernetes declaration?
Can I declare the pipeline differently, if it is not possible to set up the permission in the kubernetes declaration?
Is there some other solution which I have not thought about?
Thanks.
I, however, cannot add the user to the group in the created pod using
sudo usermod -a -G docker $USER since the pod will be recreated upon
each redeploy.
Actually, you can.
Define a usermod command for your container in the deployment yaml, e.g
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        - name: "USER"
          value: "Awemo"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
        command: ["/bin/sh"]
        args: ["-c", "usermod -aG docker $USER"]
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
So, whenever a new pod is created, the user will be added to the docker group.

kubernetes jenkins docker command not found

Installed Jenkins using helm
helm install --name jenkins -f values.yaml stable/jenkins
Jenkins plugins installed:
- kubernetes:1.12.6
- workflow-job:2.31
- workflow-aggregator:2.5
- credentials-binding:1.16
- git:3.9.3
- docker:1.1.6
Defined Jenkins pipeline to build docker container
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
It throws the error: docker not found
You can define an agent pod with containers for the required tools (Docker, Maven, Helm, etc.) in the pipeline.
First, create agentpod.yaml with the following values:
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - name: m2
      mountPath: /root/.m2
  - name: docker
    image: docker:19.03
    command:
    - cat
    tty: true
    privileged: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  - name: m2
    hostPath:
      path: /root/.m2
Then configure the pipeline as:
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yamlFile 'agentpod.yaml'
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn package'
                }
            }
        }
        stage('Docker Build') {
            steps {
                container('docker') {
                    sh "docker build -t dockerimage ."
                }
            }
        }
    }
}
It seems like you have only installed the plugins, not the Docker package itself. There are two possibilities.
Either configure the plugin to install Docker through Jenkins:
Go to Manage Jenkins
Global Tool Configuration
Docker -> fill in a name (e.g. Docker-latest)
Check "Install automatically" and then add an installer (download from docker.com)
Then save
Or, if Docker is already installed on the machine, update the PATH variable in Jenkins with the location of Docker.
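With the first option, the pipeline can then put the configured tool on the PATH before calling docker. A minimal sketch, assuming the installation was named 'Docker-latest' in Global Tool Configuration:
pipeline {
    agent any
    stages {
        stage('Docker Build') {
            steps {
                script {
                    // 'Docker-latest' is the (assumed) name given in Global Tool Configuration
                    def dockerHome = tool 'Docker-latest'
                    withEnv(["PATH+DOCKER=${dockerHome}/bin"]) {
                        sh 'docker version'
                    }
                }
            }
        }
    }
}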

Error pushing kaniko build image to azure container registry from jenkins groovy pipeline

I have a scenario where I run Jenkins in a K8s cluster in Minikube. I run a Groovy script within my Jenkins pipeline which builds a Docker image
using Kaniko (which builds a Docker image without a Docker daemon) and pushes it to Azure Container Registry. I have created a secret to authenticate to Azure.
But when I push an image - I get an error
" [36mINFO[0m[0004] Taking snapshot of files...
[36mINFO[0m[0004] ENTRYPOINT ["jenkins-slave"]
error pushing image: failed to push to destination Testimage.azurecr.io/test:latest: unexpected end of JSON input
[Pipeline] }"
My Groovy script:
def label = "kaniko-${UUID.randomUUID().toString()}"
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-pv
      mountPath: /root
  volumes:
  - name: jenkins-pv
    projected:
      sources:
      - secret:
          name: pass
          items:
          - key: .dockerconfigjson
            path: .docker/config.json
"""
) {
    node(label) {
        stage('Build with Kaniko') {
            git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
            container(name: 'kaniko', shell: '/busybox/sh') {
                sh '''#!/busybox/sh
                /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=testimage.azurecr.io/test:latest
                '''
            }
        }
    }
}
Could you please help me overcome the error? Also:
How do I know the name of the image built by Kaniko?
I am just pushing something like registry.acr.io/test:latest; is it perhaps an incorrect image name, and is that why I get the JSON error?
