I want to pull a Docker image from my private Docker registry. I cannot find a good way to describe the authentication in the Jenkinsfile. What do I need to add to the Jenkinsfile to pull my image "my-private-registry.image-name:tag"?
pipeline {
    agent {
        kubernetes {
            label "${jenkins_slave_id}"
            defaultContainer 'jnlp'
            serviceAccount 'jenkins'
            yaml """
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Never
  containers:
  - name: "my-private-container"
    image: "my-private-registry.image-name:tag"
    tty: true
    command:
    - cat
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  - name: "jnlp"
"""
        }
    }
    stages {
        stage("Do something") {
            steps {
                script {
                    container('my-private-container') {
                        script {
                            //Do something
                        }
                    }
                }
            }
        }
    }
}
Yes, I have added my imagePullSecret like this:
- name: "my-private-container"
image: "my-private-registry.image-name:tag"
imagePullSecrets: ['Docker-Registry']
tty: true
command:
- cat
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
I configured the secret referenced by my imagePullSecrets as a "Username and Password" credential.
When I try to run it, I get the following message:
Failed to pull image "my-private-registry.image-name:tag": unknown: Authentication is required
All you need is to create a cluster Secret object with your private registry credentials and then reference the secret name in your Pod template, as below:
kubectl create secret docker-registry regcred \
  --docker-server=YOUR-PRIVATE-REGISTRY --docker-username=USER --docker-password=PASSWORD
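Note that imagePullSecrets are only resolved within the Pod's own namespace, so if your Jenkins agent pods run in a dedicated namespace the secret has to be created there (the namespace name below is an assumption, adjust it to your setup):

kubectl create secret docker-registry regcred \
  --docker-server=YOUR-PRIVATE-REGISTRY --docker-username=USER --docker-password=PASSWORD \
  --namespace=jenkins   # assumption: the namespace your agent pods run in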
pipeline {
    agent {
        kubernetes {
            label "${jenkins_slave_id}"
            defaultContainer 'jnlp'
            serviceAccount 'jenkins'
            yaml """
apiVersion: v1
kind: Pod
spec:
  imagePullSecrets:
  - name: regcred
  restartPolicy: Never
  containers:
  - name: "my-private-container"
    image: "my-private-registry.image-name:tag"
    tty: true
    command:
    - cat
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  - name: "jnlp"
I'm a beginner in Kubernetes. What I would like to achieve is:
Pass a user's SSH private/public key to the Pod and then to the Docker container (there's a shell script that will be using this key).
So I would like to know if it's possible to do that with kubectl apply?
My pod.yaml looks like:
apiVersion: v1
kind: Pod
metadata:
  generateName: testing
  labels:
    type: testing
  namespace: ns-test
  name: testing-config
spec:
  restartPolicy: OnFailure
  hostNetwork: true
  containers:
  - name: mycontainer
    image: ".../mycontainer:latest"
You have to store the private/public key in a Kubernetes Secret object:
apiVersion: v1
kind: Secret
metadata:
  name: mysshkey
  namespace: ns-test
data:
  id_rsa: {{ value }}
  id_rsa.pub: {{ value }}
Now you can mount this secret as files in your container:
containers:
- image: "my-image:latest"
  name: my-app
  ...
  volumeMounts:
  - mountPath: "/var/my-app"
    name: ssh-key
    readOnly: true
volumes:
- name: ssh-key
  secret:
    secretName: mysshkey
The Kubernetes documentation also has a chapter on Using Secrets as files from a Pod.
It's not tested, but I hope it works.
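Keep in mind that the values under data: must be base64-encoded. Inside the container, the shell script could then use the mounted key roughly like this (a sketch; /var/my-app is the mount path from the example above, the target host is an assumption):

cp /var/my-app/id_rsa /tmp/id_rsa     # the secret mount is read-only
chmod 600 /tmp/id_rsa                 # ssh rejects keys with open permissions
ssh -i /tmp/id_rsa -o StrictHostKeyChecking=no user@example.com 'echo connected'   # example.com is a placeholder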
First, you create a secret with your keys:
kubectl create secret generic mysecret-keys \
  --from-file=privatekey=</path/to/the/key/file/on/your/host> \
  --from-file=publickey=</path/to/the/key/file/on/your/host>
Then you refer to the key files using the secret in your pod:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  ...
  containers:
  - name: mycontainer
    image: ".../mycontainer:latest"
    volumeMounts:
    - name: mysecret-keys
      mountPath: /path/in/the/container  # <-- privatekey & publickey will be mounted as files in this directory where your shell script can access them
  volumes:
  - name: mysecret-keys
    secret:
      secretName: mysecret-keys  # <-- mount the secret resource you created above
You can check the secret with kubectl get secret mysecret-keys --output yaml. You can check the pod and its mounting with kubectl describe pod testing-config.
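If the script needs the private key with strict file permissions, the secret volume also accepts a defaultMode (a sketch; 0400 is an assumption about what your script requires):

  volumes:
  - name: mysecret-keys
    secret:
      secretName: mysecret-keys
      defaultMode: 0400   # assumption: owner read-only, as ssh expects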
I am looking to create a PV/PVC for each Jenkins job running on a slave agent at runtime.
Basically, what I am trying to achieve is: create a PV, share it between pods, and later delete it when the job is done.
pipeline {
    agent {
        kubernetes {
            label 'scm'
            yaml """
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
---
apiVersion: "v1"
kind: "Pod"
spec:
  containers:
  - image: "jenkins/jnlp-slave:3.35-5-alpine"
    name: "jnlp"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "cat"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/wsp1"
      name: "workspace-volume"
      readOnly: false
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: claim1
"""
        }
    }
    stages {
        stage('Checkout code') {
            agent { label 'scm' }
            steps {
                git branch: 'master',
                    credentialsId: 'key',
                    url: 'giturl'
                sh "ls -lat"
            }
        }
        stage('Build') {
            agent {
                kubernetes {
                    label 'Build-pod'
                    yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
"""
                }
            }
            steps {
                sh "echo Workspace dir is ${pwd()}"
                sh "mvn clean install"
            }
        }
    }
}
The above script does not work, for obvious reasons. Are there any other solutions?
Also, how can I use a runtime-generated name for the PVC and reference it in the pod, instead of:
metadata:
  name: claim1
I am running Jenkins via Helm on Kubernetes.
You can create a PVC per pod using something like workspaceVolume: dynamicPVC(requestsSize: "10Gi"), but it will be tied to the pod and deleted when the pod is deleted.
https://github.com/jenkinsci/kubernetes-plugin/blob/342166c1864e84791f2e94dd823709eb6e672a6e/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/dynamicPVC.groovy
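In a declarative pipeline the same option goes on the kubernetes agent block. A sketch based on the plugin's workspaceVolume option (the container name, image, and requested size are assumptions):

pipeline {
    agent {
        kubernetes {
            workspaceVolume dynamicPVC(requestsSize: "10Gi")
            yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn -v'
                }
            }
        }
    }
}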
I am trying to set up a declarative pipeline where I would like to persist the workspace as a volume claim so a large git checkout can be faster. Based on the docs there are the options workspaceVolume and persistentVolumeClaimWorkspaceVolume, but I am not able to make it work - Jenkins always does the following:
volumeMounts:
- mountPath: "/home/jenkins/agent"
  name: "workspace-volume"
  readOnly: false
volumes:
- emptyDir: {}
  name: "workspace-volume"
Try something like:
podTemplate(
    containers: [
        containerTemplate(name: 'tree', image: 'iankoulski/tree', ttyEnabled: true, command: 'cat')
    ],
    workspaceVolume: persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false),
) {
    node(POD_LABEL) {
        stage('read workspace') {
            checkout scm
            container('tree') {
                sh 'env'
                sh 'tree'
                sh 'test -f old-env.txt && cat old-env.txt'
                sh 'env > old-env.txt'
            }
        }
    }
}
Here is an example for a declarative pipeline:
pipeline {
    agent {
        kubernetes {
            yamlFile 'jenkins/pv-pod.yaml'
            workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false)
        }
    }
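Note that in both snippets the claim named 'workspace' has to exist beforehand in the namespace where the agent pods are scheduled. A minimal sketch of such a claim (access mode and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace
spec:
  accessModes:
  - ReadWriteMany   # assumption: needed if several agent pods mount it at once
  resources:
    requests:
      storage: 10Gi   # assumption: size it for your checkouts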
If you post your Jenkins deployment, then I might be able to help with that.
Meanwhile, you can have a look at this YAML that I used and that worked very well for me.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins:2.32.2
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
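One caveat: emptyDir is wiped whenever the pod is rescheduled, so /var/jenkins_home does not actually survive a restart with this YAML. If persistence is the goal, the volume can be backed by a claim instead (a sketch; the claim name is an assumption and the PVC has to be created separately):

      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-home   # assumption: a pre-created PVC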
Summary
Running a declarative pipeline job in Jenkins, which was deployed to a Kubernetes cluster, fails when using the docker agent with the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=node&tag=10.15.1: dial unix /var/run/docker.sock: connect: permission denied
How can I solve this permission error in the kubernetes declaration?
Background
We have a Jenkins server which was deployed to a Kubernetes cluster using the jenkinsci/blueocean image. The Kubernetes declaration was done as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
We then declare a declarative Jenkins pipeline job as follows:
pipeline {
    agent {
        docker {
            image 'node:10.15.1'
            label 'master'
        }
    }
    stages {
        stage('Checkout source code') {
            steps {
                checkout scm
            }
        }
        stage('Build project') {
            steps {
                sh 'npm install'
                sh 'npm run compile'
            }
        }
        stage('Run quality assurance') {
            steps {
                sh 'npm run style:check'
                sh 'npm run test:coverage'
            }
        }
    }
}
This job fails with the aforementioned error. My suspicion is that the docker socket was mounted into the container, but the user running the job does not have permission to access the socket. I, however, cannot add the user to the group in the created pod using sudo usermod -a -G docker $USER since the pod will be recreated upon each redeploy.
Questions
Is it possible to mount the docker volume using the correct user in the kubernetes declaration?
Can I declare the pipeline differently, if it is not possible to set up the permission in the kubernetes declaration?
Is there some other solution which I have not thought about?
Thanks.
"I, however, cannot add the user to the group in the created pod using sudo usermod -a -G docker $USER since the pod will be recreated upon each redeploy."
Actually, you can.
Define a usermod command for your container in the deployment YAML, e.g.:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        - name: "USER"
          value: "Awemo"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
        command: ["/bin/sh"]
        args: ["-c", "usermod -aG docker $USER"]
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
So, whenever a new pod is created, the user will be added to the docker group.
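An alternative (not from the answer above) is to grant access to the socket through the pod's securityContext, which avoids overriding the image's entrypoint via command/args. A sketch, assuming the node's docker group GID is 999 (check with stat -c '%g' /var/run/docker.sock on the node):

    spec:
      securityContext:
        supplementalGroups: [999]   # assumption: GID of the group that owns /var/run/docker.sock on the node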
How can I change the securityContext of my pods in the Jenkins Kubernetes Plugin? For example, to run docker-in-docker images with privileged mode in a Docker environment.
I believe this should work (as per the docs):
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
    securityContext:
      allowPrivilegeEscalation: true
"""
) {
    node(label) {
        container('busybox') {
            sh "hostname"
        }
    }
}
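If the goal is real docker-in-docker (running a Docker daemon inside the agent pod), allowPrivilegeEscalation alone is usually not enough; the dind container needs privileged: true in its securityContext. A sketch of such a container based on the official docker:dind image (the image tag and the TLS setting are assumptions):

spec:
  containers:
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true   # required for the nested Docker daemon
    env:
    - name: DOCKER_TLS_CERTDIR   # assumption: disable TLS so other containers can reach tcp://localhost:2375
      value: ""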