Issue with Jenkins Deployment File: Unknown resource kind: Deployment - jenkins

I'm struggling to figure out what the solution might be, so I thought I'd ask here. I'm trying to use the code below to deploy a Jenkins pod to Kubernetes, but it fails with an "Unknown resource kind: Deployment" error:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts-alpine
        ports:
        - name: http-port
          containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
The output of kubectl api-versions is:
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Does anyone know what the problem might be?
If this is an indentation issue, I'm failing to see it.

It seems the apiVersion is deprecated. You can simply convert the manifest to the current apiVersion and apply it.
$ kubectl convert -f jenkins-dep.yml
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins-deployment
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: jenkins
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins/jenkins:lts-alpine
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: jenkins-home
status: {}
$ kubectl convert -f jenkins-dep.yml -oyaml > jenkins-dep-latest.yml

Change the apiVersion from extensions/v1beta1 to apps/v1, and use kubectl version to check that the kubectl client and the kube API server versions match and are not too old.
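For reference, a minimal sketch of the question's manifest with only the apiVersion changed (the rest can stay as it is, since it already defines the spec.selector that apps/v1 requires):
apiVersion: apps/v1   # only this line changes compared to the original
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts-alpine
        ports:
        - name: http-port
          containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}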

Related

What's the Kubernetes equivalent to docker --privileged?

I wanted to host a TDengine cluster in Kubernetes, but hit an error when I enabled coredump in the container.
I searched Stack Overflow and found the Docker solution, How to modify the `core_pattern` when building a docker image, but not the Kubernetes one.
Here's an example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - image: my-image
        name: my-app
        ...
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 0
With a small fix to the accepted answer, the final working yaml is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core
spec:
  selector:
    matchLabels:
      app: core
  template:
    metadata:
      labels:
        app: core
    spec:
      containers:
      - image: zitsen/enable-coredump:0.1.0
        name: core
        command: ["/bin/sleep", "3650d"]
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 0
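To address the question in the title directly: as far as I know, the closest Kubernetes equivalent to docker run --privileged is privileged: true in the container's securityContext. A minimal sketch, reusing the placeholder names from the question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: my-image        # placeholder image from the question
        name: my-app
        securityContext:
          privileged: true     # grants the container roughly the same access as docker --privileged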

How to push logs to a fluentd daemonset using logrus

I have a simple golang application running inside kubernetes which just logs whatever text we place in the request params. Code:
package main

import (
    "github.com/sirupsen/logrus"
    "net/http"
    "os"
)

var (
    logger = logrus.New()
)

func main() {
    //logFile, err := os.OpenFile("/var/log/app.log", os.O_CREATE | os.O_WRONLY | os.O_APPEND, 0666)
    //if err != nil {
    //    panic(err)
    //}
    logger.Out = os.Stdout
    http.HandleFunc("/get", func(w http.ResponseWriter, r *http.Request) {
        content := r.URL.Query().Get("m")
        if content == "" {
            logger.Warn("m is empty")
            content = "nothing"
        }
        logger.Info("content is - ", content)
        w.Write([]byte(content))
    })
    logger.Info("starting server")
    http.ListenAndServe(":9999", nil)
}
I have a fluentd daemonset running locally which is connected to elasticsearch (also running locally, but not in Kubernetes). Now I want my logs to go to elasticsearch and ultimately appear in kibana, but that's not happening and I am not able to debug it.
My fluentd yaml looks like -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "minikube-host"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_UID
          value: "0"
        # X-Pack Authentication
        # =====================
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "MyPw123"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Here minikube-host is just a proxy service that fluentd uses to connect to the host machine (elasticsearch runs directly on the host).
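One common way to build such a proxy service is a Service without a selector plus a manually created Endpoints object pointing at the host; a rough sketch of what minikube-host might look like (the host IP here is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: minikube-host
  namespace: kube-system
spec:
  ports:
  - port: 9200
    targetPort: 9200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: minikube-host        # must match the Service name
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.64.1         # hypothetical IP of the host machine, reachable from the cluster
  ports:
  - port: 9200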
fluentd rbac looks like -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
fluentd is able to push logs to ES, because I can see logs from the fluentd pod in kibana. In fact, I can see logs from the fluentd pod only (no app logs, no logs from pods running in kube-system).
Not sure if it is relevant: my application runs inside docker with this Dockerfile:
FROM golang:1.15 as base_image
WORKDIR /go/src/demo
ADD . .
RUN go get -d ./...
RUN go build -o /build/main-linux main.go
WORKDIR /build
CMD ["./main-linux"]
and my application deployment file is -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: yashschandra/demo
        imagePullPolicy: Always
        ports:
        - containerPort: 9999
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
My question is -
What am I missing in my golang application that prevents fluentd from receiving its logs (i.e. why don't I see "starting server" in kibana)?
I have tried both logging to stdout and logging to a file, but neither is working.
I feel it has something to do with piping the logs correctly to fluentd, but almost all the articles I have read on the topic do not do anything special.
references -
https://www.metricfire.com/blog/logging-for-kubernetes-fluentd-and-elasticsearch/
https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a

Jenkins in k8s doesn't save installed plugins

The task is the following: save Jenkins state using a PV/PVC. The problem is that the volume can't be mounted at /var/jenkins_home, but it mounts fine in any other folder. Can someone tell me what to do?
Or should I save the state of the Jenkins plugins to a folder and then restore them from there with some script?
jenkins-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - name: http-port
          containerPort: 8080
        volumeMounts:
        - name: test-pvc
          mountPath: /var/jenkins_home/
      volumes:
      - name: test-pvc
        persistentVolumeClaim:
          claimName: test-pvc
pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  storageClassName: local-storage
  hostPath:
    path: /data/jenkins_home/
pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: jenkins-pv
  storageClassName: local-storage
I figured it out. The following worked for me:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - name: http-port
          containerPort: 8080
        volumeMounts:
        - name: jenkins-storage
          mountPath: /var/jenkins_home/
      volumes:
      - name: jenkins-storage
        persistentVolumeClaim:
          claimName: jenkins-pv-clain
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000
  selector:
    app: jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-clain
  namespace: jenkins
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
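Note that the jenkins-pv-clain claim above has no storageClassName, so on a cluster without a default StorageClass it still needs a matching PersistentVolume to bind to; a minimal hostPath sketch, reusing the /data/jenkins_home path from the question:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/jenkins_home/    # node-local path, as in the question's pv.yml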

Jenkins container persistence on Kubernetes cluster - PersistentVolumeClaim (VMware/Vsphere)

I'm trying to persist my Jenkins jobs onto vSphere storage for when I delete the deployments/services.
I've tried the standard approach: create a StorageClass, then a PersistentVolumeClaim, which is referenced in the .yml file that creates the deployments.
storage-class.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mystorage
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
persistent-volume-claim.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0003
spec:
  storageClassName: mystorage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
jenkins.yml:
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-auto-ci
  labels:
    app: jenkins-auto-ci
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: jenkins-auto-ci
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-auto-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-auto-ci
    spec:
      containers:
      - name: jenkins-auto-ci
        image: jenkins
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - name: http-port
          containerPort: 80
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: "/var"
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: pvc0003
I expect the jenkins jobs to persist when I delete and recreate the deployments.
You should first create a VMDK (Virtual Machine Disk).
You can do that using govc, the vSphere CLI:
govc datastore.disk.create -ds datastore1 -size 2G volumes/myDisk.vmdk
Or using the ESXi CLI, by SSHing into the host as root and executing:
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
Once this is done, you should create your PV; let's call it vsphere_pv.yaml. It might look like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[datastore1] volumes/myDisk"
    fsType: ext4
The datastore1 in this example was created in the root folder of vCenter; if yours is in a different location, you need to change the volumePath. If it's located in a DatastoreCluster, set volumePath to "[DatastoreCluster/datastore1] volumes/myDisk".
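For example, a sketch of the same PV for the DatastoreCluster case, where only the volumePath changes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[DatastoreCluster/datastore1] volumes/myDisk"   # datastore located inside a DatastoreCluster
    fsType: ext4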
Apply the yaml to Kubernetes with kubectl apply -f vsphere_pv.yaml.
You can check that it was created by describing it: kubectl describe pv pv0001.
Now you need a PVC, let's call it vsphere_pvc.yaml, to consume the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc0001
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply the yaml to Kubernetes with kubectl apply -f vsphere_pvc.yaml.
You can check that it was created by describing it: kubectl describe pvc pvc0001.
Once this is done, your Deployment yaml might look like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-auto-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-auto-ci
    spec:
      containers:
      - name: jenkins-auto-ci
        image: jenkins
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - name: http-port
          containerPort: 80
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - name: jenkins-home
          mountPath: "/var"
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: pvc0001
All of this is nicely explained in the VMware GitHub project vsphere-storage-for-kubernetes.

k8s doesn't download docker container

When I run my command to apply the modifications, or just to create pods, services, and Deployments:
kubectl apply -f hello-kubernetes-oliver.yml
I don't get an error.
But when I do docker ps to see if the container was downloaded from my private registry, I see nothing. If I pull docker-all.attanea.net/hello_world:latest directly, it downloads the container.
I don't understand why the first command doesn't download my container.
You will find my hello-kubernetes-oliver.yml below:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-oliver
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-oliver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
In order to download images from a private registry, you need to create a Secret which is then used in the Deployment manifest:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username="your-name" --docker-password="your-pword" --docker-email="your-email"
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
regcred is the name of the Secret resource.
Then you attach the regcred secret in your deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
