Jenkins in k8s doesn't save installed plugins

I have the following setup to persist Jenkins state using a PV/PVC. The problem is that the volume can't be mounted at /var/jenkins_home, although it mounts fine at any other folder. What should I do?
Or should I save the state of the Jenkins plugins to a folder and then restore them from there with some script?
jenkins-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: test-pvc
              mountPath: /var/jenkins_home/
      volumes:
        - name: test-pvc
          persistentVolumeClaim:
            claimName: test-pvc
pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  hostPath:
    path: /data/jenkins_home/
pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: jenkins-pv
  storageClassName: local-storage
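In case it helps others hitting the same symptom: with the official jenkins/jenkins image, a hostPath-backed volume mounted at /var/jenkins_home often fails because the container runs as UID 1000 (the jenkins user) while the host directory is owned by root. A common workaround (a sketch, not necessarily the fix in this exact case) is an init container that fixes ownership before Jenkins starts, reusing the volume name from the deployment above:
      initContainers:
        - name: fix-permissions
          image: busybox
          # chown the Jenkins home to the UID/GID used by the official image
          command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
          volumeMounts:
            - name: test-pvc
              mountPath: /var/jenkins_home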

I figured it out! Here is the working manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: jenkins-storage
              mountPath: /var/jenkins_home/
      volumes:
        - name: jenkins-storage
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
  selector:
    app: jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
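Note that the PVC in this working version omits storageClassName, so it is satisfied by the cluster's default StorageClass via dynamic provisioning rather than by the hand-made hostPath PV, which is likely why the mount now succeeds (my reading of the change, not stated by the author). You can confirm what the claim bound to with:
kubectl get pvc -n jenkins
kubectl describe pvc jenkins-pv-claim -n jenkins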

Related

Pod status as `CreateContainerConfigError` in Kubernetes cluster

I am new to Kubernetes and have to deploy TheHive in our infrastructure. I use the Docker image created by the community, thehiveproject/thehive.
Below are the scripts I'm using for deployment.
apiVersion: v1
kind: Service
metadata:
  name: thehive
  labels:
    app: thehive
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 30900
      protocol: TCP
  selector:
    app: thehive
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: thehive-pv-claim
  labels:
    app: thehive
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "local-path"
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: thehive
  labels:
    app: thehive
spec:
  selector:
    matchLabels:
      app: thehive
  template:
    metadata:
      labels:
        app: thehive
    spec:
      containers:
        - image: thehiveproject/thehive
          name: thehive
          env:
            - name: TH_NO_CONFIG
              value: 1
            - name: TH_SECRET
              value: "test#123"
            - name: TH_CONFIG_ES
              value: "elasticsearch"
            - name: TH_CORTEX_PORT
              value: "9001"
          ports:
            - containerPort: 9000
              name: thehive
          volumeMounts:
            - name: thehive-config-file
              mountPath: /etc/thehive/application.conf
              subPath: application.conf
            - name: thehive-storage
              mountPath: /etc/thehive/
      volumes:
        - name: thehive-storage
          persistentVolumeClaim:
            claimName: thehive-pv-claim
        - name: thehive-config-file
          hostPath:
            path: /home/ubuntu/k8s/thehive
Unfortunately, when I run
kubectl apply -f thehive-dep.yml
I get a CreateContainerConfigError. Elasticsearch is successfully deployed with the service name elasticsearch.
What am I doing wrong?
Thanks for any help :(
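One thing that stands out in the manifest (my observation; no accepted answer is quoted here): environment variable values in Kubernetes must be strings, so value: 1 for TH_NO_CONFIG needs quoting, and kubectl describe pod <pod-name> will show the concrete event behind the CreateContainerConfigError. A corrected fragment would look like:
          env:
            - name: TH_NO_CONFIG
              value: "1"   # env values must be quoted strings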

Issue with Jenkins Deployment File: Unknown resource kind: Deployment

I'm struggling to figure out what the solution might be, so I thought to ask here. I'm trying to use the code below to deploy a Jenkins pod to Kubernetes, but it fails with an Unknown resource kind: Deployment error:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
The output of kubectl api-versions is:
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Does anyone know what the problem might be?
If this is an indentation issue, I'm failing to see it.
It seems the apiVersion is deprecated. You can simply convert it to the current apiVersion and apply it.
$ kubectl convert -f jenkins-dep.yml
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins-deployment
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: jenkins
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins/jenkins:lts-alpine
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: jenkins-home
status: {}
$ kubectl convert -f jenkins-dep.yml -oyaml > jenkins-dep-latest.yml
Change the apiVersion from extensions/v1beta1 to apps/v1, and use kubectl version to check that the kubectl client and kube API server versions match and are not too old.
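For reference, the minimal change is a one-line edit at the top of the manifest (note that apps/v1 also makes spec.selector mandatory, which the manifest above already includes):
apiVersion: apps/v1   # was: extensions/v1beta1
kind: Deployment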

Jenkins container persistence on Kubernetes cluster - PersistentVolumeClaim (VMware/vSphere)

I'm trying to persist my Jenkins jobs on vSphere storage for when I delete the deployments/services.
I've tried the standard approach: I used a StorageClass, then made a PersistentVolumeClaim which is referenced in the .yaml file that creates the deployments.
storage-class.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mystorage
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
persistent-volume-claim.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0003
spec:
  storageClassName: mystorage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
jenkins.yml:
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-auto-ci
  labels:
    app: jenkins-auto-ci
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: jenkins-auto-ci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-auto-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-auto-ci
    spec:
      containers:
        - name: jenkins-auto-ci
          image: jenkins
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - name: http-port
              containerPort: 80
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: "/var"
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: pvc0003
I expect the Jenkins jobs to persist when I delete and recreate the deployments.
You should create a VMDK, which is a Virtual Machine Disk.
You can do that using govc, the vSphere CLI:
govc datastore.disk.create -ds datastore1 -size 2G volumes/myDisk.vmdk
Or using the ESXi CLI, by SSHing into the host as root and executing:
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
Once this is done, create your PV; let's call it vsphere_pv.yaml. It might look like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] volumes/myDisk"
    fsType: ext4
The datastore1 in this example was created in the root folder of vCenter; if yours is in a different location, you need to change the volumePath. If it's located in a DatastoreCluster, then set volumePath to "[DatastoreCluster/datastore1] volumes/myDisk".
Apply the YAML to Kubernetes with kubectl apply -f vsphere_pv.yaml.
You can check that it was created by describing it: kubectl describe pv pv0001.
Now you need a PVC, let's call it vsphere_pvc.yaml, to consume the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply the YAML to Kubernetes with kubectl apply -f vsphere_pvc.yaml.
You can check that it was created by describing it: kubectl describe pvc pvc0001.
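One caveat (my addition, not part of the original answer): if the cluster has a default StorageClass, a claim without storageClassName may be provisioned dynamically instead of binding to pv0001. Setting spec.volumeName pins the claim to that specific PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  volumeName: pv0001   # bind explicitly to the PV created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi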
Once this is done, your deployment YAML might look like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-auto-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-auto-ci
    spec:
      containers:
        - name: jenkins-auto-ci
          image: jenkins
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - name: http-port
              containerPort: 80
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: "/var"
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: pvc0001
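Separately from the storage setup (my observation, not in the original answer): mounting the volume at /var shadows the image's entire /var directory. The official Jenkins image keeps its state in /var/jenkins_home, so that narrower path is usually what you want if the goal is persisting jobs:
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home   # JENKINS_HOME in the official image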
All this is nicely explained in VMware's GitHub repository vsphere-storage-for-kubernetes.

How to link Tomcat with a MySQL DB container in Kubernetes

My Tomcat and MySQL containers are not connecting, so how can I link them so that my WAR file runs successfully?
I built my Tomcat image using this Dockerfile:
FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/stackoverflow/tmp/data"  # this is the path where my SQL init script is placed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
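Keep in mind (my note, not from the question): the mysql image only executes scripts in /docker-entrypoint-initdb.d when the data directory is initialized for the first time; on subsequent restarts they are skipped. The entrypoint logs which scripts it ran, so you can verify with something like:
kubectl logs deployment/mysql | grep docker-entrypoint-initdb.d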
tomcat.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: tomcat
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
        - image: suji165475/vignesh:tomcatserver
          name: tomcat
          env:
            - name: DB_PORT_3306_TCP_ADDR
              value: mysql   # service name of mysql
            - name: DB_ENV_MYSQL_DATABASE
              value: data-core
            - name: DB_ENV_MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: tomcat-persistent-storage
              mountPath: /var/data
      volumes:
        - name: tomcat-persistent-storage
          persistentVolumeClaim:
            claimName: tomcat-pv-claim
tomcatpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: tomcat-pv
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/app"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
I'm currently using type: NodePort for the Tomcat service. Do I have to use NodePort for MySQL as well? If so, should I give it the same nodePort or a different one?
Note: I'm running all of this on a server through a PuTTY terminal.
When Kubernetes starts a service, it adds environment variables for the host, port, etc. to pods. Try using the environment variable MYSQL_SERVICE_HOST.
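Since the mysql Service above is headless (clusterIP: None), it is also resolvable by DNS inside the cluster, so the application can reach the database through the service name directly; as far as I know, kubelet only injects *_SERVICE_HOST variables for services that have a cluster IP, which makes DNS the dependable route here. A JDBC URL along these lines (illustrative, matching the database name in the manifests) should work from the Tomcat pod:
jdbc:mysql://mysql:3306/data-core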

Kubernetes Deployment populates wrong Persistent Volume

I'm trying to create two deployments, one for WordPress, the other for MySQL, which refer to two different Persistent Volumes.
Sometimes, while deleting and recreating volumes and deployments, the MySQL deployment populates the WordPress volume (ending up with a database in the wordpress-volume directory).
This is clearer when you do kubectl get pv --namespace my-namespace:
mysql-volume       2Gi   RWO   Retain   Bound   flashart-it/wordpress-volume-claim   manual   1h
wordpress-volume   2Gi   RWO   Retain   Bound   flashart-it/mysql-volume-claim       manual
I'm pretty sure the settings are OK. Please find the YAML files below.
Persistent Volume Claims + Persistent Volumes
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/to/mount/mysql-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/to/mount/wordpress-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: wordpress-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployments
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wordpress
  namespace: my-namespace
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:5.0-php7.1-apache
          name: wordpress
          env:
            # ...
          ports:
            # ...
          volumeMounts:
            - name: wordpress-volume
              mountPath: /var/www/html
      volumes:
        - name: wordpress-volume
          persistentVolumeClaim:
            claimName: wordpress-volume-claim
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: my-namespace
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      namespace: my-namespace
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # ...
          ports:
            # ...
          volumeMounts:
            - name: mysql-volume
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-volume
          persistentVolumeClaim:
            claimName: mysql-volume-claim
It's expected behavior in Kubernetes. A PVC can bind to any available PV, provided the storage class matches, the access mode matches, and the storage size is sufficient. Names are not used to match PVCs to PVs.
A possible solution for your scenario is to use a label selector on the PVC to filter qualified PVs.
First, add a label to the PV (in this case: app=mysql):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-volume
  labels:
    app: mysql
Then, add a label selector to the PVC to filter PVs:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: mysql
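Alternatively (my suggestion, beyond the original answer), you can bind each claim to its volume by name with spec.volumeName, which skips label matching entirely:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: my-namespace
  name: mysql-volume-claim
spec:
  storageClassName: manual
  volumeName: mysql-volume   # bind to this PV explicitly
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi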
