How can I change the securityContext of my pods in the Jenkins Kubernetes Plugin? For example, to run Docker-in-Docker images in privileged mode.
I believe this should work (as per the docs):
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
    securityContext:
      allowPrivilegeEscalation: true
"""
) {
    node (label) {
        container('busybox') {
            sh "hostname"
        }
    }
}
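For actual Docker-in-Docker, allowPrivilegeEscalation alone is usually not enough; the container running the Docker daemon generally has to be fully privileged. A minimal sketch (the docker:dind image and the container name are my examples, not from the original answer):

def label = "dind-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:dind    # example image that ships a Docker daemon
    securityContext:
      privileged: true    # required for dockerd to start inside the pod
    tty: true
""") {
    node(label) {
        container('dind') {
            sh "docker version"
        }
    }
}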
I was able to publish a Docker image to Nexus using the Jenkins pipeline, but I cannot pull the Docker image back from Nexus. I used kaniko to build the image.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      hostNetwork: false
      containers:
      - name: test-app
        image: ip_adress/demo:0.1.0
        imagePullPolicy: Always
        resources:
          limits: {}
      imagePullSecrets:
      - name: registrypullsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-app
  name: test-app-service
  namespace: jenkins
spec:
  ports:
  - nodePort: 32225
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    app: test-app
  type: NodePort
Jenkins pipeline main script
stage ('Build Image'){
    container('kaniko'){
        script {
            sh '''
            /kaniko/executor --dockerfile `pwd`/Dockerfile --context `pwd` --destination="$ip_adress:8082/demo:0.1.0" --insecure --skip-tls-verify
            '''
        }
    }
}
stage('Kubernetes Deployment'){
    container('kubectl'){
        withKubeConfig([credentialsId: 'kube-config', namespace:'jenkins']){
            sh 'kubectl get pods'
            sh 'kubectl apply -f deployment.yml'
            sh 'kubectl apply -f service.yml'
        }
    }
}
I've created a Dockerfile for a Spring Boot Java application. I've pushed the image to Nexus using the Jenkins pipeline, but I can't deploy it:
kubectl get pod -n jenkins
test-app-... 0/1 ImagePullBackOff
kubectl describe pod test-app-.....
Error from server (NotFound): pods "test-app-.." not found
docker pull $ip_adress:8081/repository/docker-releases/demo:0.1.0
Error response from daemon: Get "https://$ip_adress/v2/": http: server gave HTTP response to HTTPS client
(ip_adress is a private IP address.)
How can I make the pull go over HTTP?
First of all, mark your registry ip:port as insecure on the node. If your nodes use Docker as the container runtime, this goes in /etc/docker/daemon.json, e.g. { "insecure-registries": ["172.16.4.93:5000"] }; if the runtime is containerd, the equivalent setting lives in /etc/containerd/config.toml instead.
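A minimal sketch of both variants (the registry address is the example one above; the exact containerd keys depend on your containerd version):

# /etc/docker/daemon.json (Docker runtime)
{
  "insecure-registries": ["172.16.4.93:5000"]
}

# /etc/containerd/config.toml (containerd runtime, CRI plugin)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."172.16.4.93:5000"]
  endpoint = ["http://172.16.4.93:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."172.16.4.93:5000".tls]
  insecure_skip_verify = true

Then restart the runtime (systemctl restart docker or systemctl restart containerd) on every node that pulls from the registry.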
If there is still a problem, add your Nexus registry credentials to your Kubernetes YAML as an imagePullSecret, as described here:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
If we want to pull images from a private registry in Kubernetes, we need to configure the registry endpoint and credentials as a secret and use it in the pod/deployment configuration.
Note: the secret must be in the same namespace as the Pod.
Refer to the official Kubernetes document linked above for more details about configuring a private registry.
In your case you are using the secret registrypullsecret, so check one more time whether that secret is configured properly. If not, try following the documentation mentioned above.
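A quick way to (re)create and inspect the secret; the username, password and server here are placeholders, and the jenkins namespace matches the manifests above:

$ kubectl create secret docker-registry registrypullsecret \
    --docker-server=$ip_adress:8082 \
    --docker-username=<nexus-user> \
    --docker-password=<nexus-password> \
    -n jenkins
$ kubectl get secret registrypullsecret -n jenkins -o yaml

Also note the ports: kaniko pushed to $ip_adress:8082, while the manual docker pull above used port 8081, so make sure the deployment's image field points at the repository you actually pushed to.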
I want to pull a Docker image from my private Docker registry, but I cannot find a good way to describe the authentication in the Jenkinsfile. What do I need to add to the Jenkinsfile to pull my image "my-private-registry.image-name:tag"?
pipeline {
    agent {
        kubernetes {
            label "${jenkins_slave_id}"
            defaultContainer 'jnlp'
            serviceAccount 'jenkins'
            yaml """
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Never
  containers:
  - name: "my-private-container"
    image: "my-private-registry.image-name:tag"
    tty: true
    command:
    - cat
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  - name: "jnlp"
"""
        }
    }
    stages {
        stage("Do something") {
            steps {
                script {
                    container('my-private-container') {
                        script {
                            //Do something
                        }
                    }
                }
            }
        }
    }
}
Yes, I have added my imagePullSecret like this:
- name: "my-private-container"
image: "my-private-registry.image-name:tag"
imagePullSecrets: ['Docker-Registry']
tty: true
command:
- cat
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
I configured my imagePullSecrets credential as "Username and Password".
When I try to run it, I get the following message:
Failed to pull image "my-private-registry.image-name:tag": unknown: Authentication is required
All you need is to create a cluster secret object with your private registry credentials, and then add the secret name to your Pod template as below:
kubectl create secret docker-registry regcred \
  --docker-server=YOUR-PRIVATE-REGISTRY --docker-username=USER --docker-password=PASSWORD
pipeline {
    agent {
        kubernetes {
            label "${jenkins_slave_id}"
            defaultContainer 'jnlp'
            serviceAccount 'jenkins'
            yaml """
apiVersion: v1
kind: Pod
spec:
  imagePullSecrets:
  - name: regcred
  restartPolicy: Never
  containers:
  - name: "my-private-container"
    image: "my-private-registry.image-name:tag"
    tty: true
    command:
    - cat
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  - name: "jnlp"
"""
        }
    }
    // stages as in your pipeline
}
I am looking to create a PV/PVC at runtime for each Jenkins job running on a slave agent.
Basically, what I am trying to achieve is: create a PV, share it between pods, and delete it later when the job is done.
pipeline {
    agent {
        kubernetes {
            label 'scm'
            yaml """
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 1Gi
---
apiVersion: "v1"
kind: "Pod"
spec:
  containers:
  - image: "jenkins/jnlp-slave:3.35-5-alpine"
    name: "jnlp"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "cat"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/wsp1"
      name: "workspace-volume"
      readOnly: false
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: claim1
"""
        }
    }
    stages {
        stage('Checkout code') {
            agent { label 'scm' }
            steps {
                git branch: 'master',
                    credentialsId: 'key',
                    url: 'giturl'
                sh "ls -lat"
            }
        }
        stage('Build') {
            agent {
                kubernetes {
                    label 'Build-pod'
                    yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
"""
                }
            }
            steps {
                sh "echo Workspace dir is ${pwd()}"
                sh "mvn clean install"
            }
        }
    }
}
The above script does not work, for obvious reasons. Do we have any other solutions? And how can I use a runtime-generated name for the PVC and reference it in the pod, instead of hardcoding:
metadata:
  name: claim1
I am running Jenkins via Helm on Kubernetes.
You can create a PVC per pod using something like workspaceVolume: dynamicPVC(requestsSize: "10Gi"), but it will be tied to the pod and deleted when the pod is deleted:
https://github.com/jenkinsci/kubernetes-plugin/blob/342166c1864e84791f2e94dd823709eb6e672a6e/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/dynamicPVC.groovy
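A minimal sketch along the lines of that test case (the maven container and the label are my examples):

def label = "dynamic-pvc-${UUID.randomUUID().toString()}"
podTemplate(label: label, workspaceVolume: dynamicPVC(requestsSize: "10Gi"), containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat')
]) {
    node(label) {
        container('maven') {
            // the workspace is backed by the dynamically provisioned PVC,
            // which is removed together with the pod when the build finishes
            sh "df -h ."
        }
    }
}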
Summary
Running a declarative pipeline job in Jenkins, which was deployed to a Kubernetes cluster, fails when using the Docker agent with the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=node&tag=10.15.1: dial unix /var/run/docker.sock: connect: permission denied
How can I solve this permission error in the kubernetes declaration?
Background
We have a Jenkins server which was deployed to a Kubernetes cluster using the jenkinsci/blueocean image. The Kubernetes declaration was done as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
We then declare a declarative pipeline Jenkins job as follows:
pipeline {
    agent {
        docker {
            image 'node:10.15.1'
            label 'master'
        }
    }
    stages {
        stage('Checkout source code') {
            steps {
                checkout scm
            }
        }
        stage('Build project') {
            steps {
                sh 'npm install'
                sh 'npm run compile'
            }
        }
        stage('Run quality assurance') {
            steps {
                sh 'npm run style:check'
                sh 'npm run test:coverage'
            }
        }
    }
}
This job fails with the aforementioned error. My suspicion is that the Docker socket was mounted into the container, but the user running the job does not have permission to access the socket. I, however, cannot add the user to the group in the created pod using sudo usermod -a -G docker $USER, since the pod will be recreated upon each redeploy.
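You can confirm this suspicion from inside the running container (the pod name is whatever your deployment created):

$ kubectl exec -it <jenkins-master-pod> -- sh
$ ls -l /var/run/docker.sock    # owner/group of the mounted socket
$ id                            # uid/gid and groups of the user running Jenkins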
Questions
Is it possible to mount the docker volume using the correct user in the kubernetes declaration?
Can I declare the pipeline differently, if it is not possible to set up the permission in the kubernetes declaration?
Is there some other solution which I have not thought about?
Thanks.
I, however, cannot add the user to the group in the created pod using sudo usermod -a -G docker $USER since the pod will be recreated upon each redeploy.
Actually, you can.
Define a usermod command for your container in the deployment YAML, e.g.:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        - name: "USER"
          value: "Awemo"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
        # overriding command/args replaces the image entrypoint, so re-exec
        # Jenkins after the usermod, otherwise the container exits immediately
        # (entrypoint path as in the official Jenkins image)
        command: ["/bin/sh"]
        args: ["-c", "usermod -aG docker $USER && exec /sbin/tini -- /usr/local/bin/jenkins.sh"]
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
So, whenever a new pod is created, the user will be added to the docker group.
I want to run a private Docker image on my minikube Kubernetes cluster, but the pod is never able to pull my image from Docker Hub. How can I pull a private image in Kubernetes and use it?
This is my YAML for the pod:
apiVersion: v1
kind: Pod
metadata:
  name: privaterepo
spec:
  containers:
  - name: private-reg-container
    image: raveena1/test
  imagePullSecrets:
  - name: regsecret
The log is:
container "private-reg-container" in pod "privaterepo" is waiting to start: trying and failing to pull image
You need to create a secret & use it in your YAML/JSON deployment file -
Create the secret (shown here for the Docker Hub registry; you can change the registry server URL) -
$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD --docker-email=vivekyad4v@gmail.com
deployment.yaml (use regsecret)-
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: local-simple-python
spec:
  replicas: 2
  selector:
    matchLabels:
      app: local-simple-python
  template:
    metadata:
      labels:
        app: local-simple-python
    spec:
      containers:
      - name: python
        image: vivekyad4v/local-simple-python:latest
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regsecret
Deploy -
$ kubectl create -f deployment.yaml
Your pods should now be able to fetch Docker images from the private registry.
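To verify the pull (the pod name is whatever the Deployment generates):

$ kubectl get pods
$ kubectl describe pod <pod-name>    # the Events section should show a successful image pull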
You can find more info on -
https://github.com/vivekyad4v/kubernetes/tree/master/kubernetes-for-beginners
Official doc - https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/