spec.volumes[1].name: Duplicate value: "workspace-volume" - jenkins

I'm using Jenkins to start my builds in a Kubernetes cluster via the Kubernetes plugin.
When trying to set my Jenkins workspace-volume to medium: Memory so that it runs in RAM, I receive the following error:
spec.volumes[1].name: Duplicate value: "workspace-volume"
This is the corresponding yaml:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-job-xyz
  labels:
    identifier: jenkins-job-xyz
spec:
  restartPolicy: Never
  containers:
    - name: jnlp
      image: 'jenkins/jnlp-slave:alpine'
      volumeMounts:
        - name: workspace-volume
          mountPath: /home/jenkins
    - name: maven
      image: maven:latest
      imagePullPolicy: Always
      volumeMounts:
        - name: workspace-volume
          mountPath: /home/jenkins
  volumes:
    - name: workspace-volume
      emptyDir:
        medium: Memory
The only thing I added is the volumes: part at the end.

The volume workspace-volume is auto-generated by the Kubernetes plugin, so a manual declaration results in a duplicate entry.
To run the workspace-volume in RAM, set
workspaceVolume: emptyDirWorkspaceVolume(memory: true)
inside the podTemplate closure, according to the documentation.
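For reference, here is a minimal scripted-pipeline sketch of where that option goes (the container template, image, and steps are placeholders for illustration; the workspaceVolume line is the relevant part):

podTemplate(
    workspaceVolume: emptyDirWorkspaceVolume(memory: true),
    containers: [
        containerTemplate(name: 'maven', image: 'maven:latest', command: 'sleep', args: '99d')
    ]) {
    node(POD_LABEL) {
        container('maven') {
            // the workspace-volume backing /home/jenkins is now a memory-backed emptyDir
            sh 'mvn -version'
        }
    }
}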

Related

Jenkins, docker and kubernetes (minikube)

I'm using Jenkins to deploy a pipeline, so the first step I did was deploy Jenkins to minikube. It works at first, but every time I run minikube stop and restart it, Jenkins starts over from the beginning (the "Unlock Jenkins" screen). I just followed this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-kubernetes
and this is what Jenkins shows every time I restart minikube:
Hope someone has an answer for me! Thank you.
It looks like the secret is not mounted for the deployment. You can do it as follows.
Create the secret using:
kubectl create secret generic jenkins --from-literal=jenkins_password="ADD YOUR SECRET TOKEN Which you will find in jenkins pod logs"
and mount it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          env:
            - name: JENKINS_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: jenkins_password
                  name: jenkins
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-data
Next time it will not ask you for the token. I would also highly recommend using a PVC so the data persists; otherwise, if you install plugins or configure jobs, they will be gone the next time you restart Jenkins.
For the PVC you can use something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Binary not found in Kubernetes deployment

I'm trying to deploy RocketMQ on my testing cluster. I started from the scripts provided in the apache/rocketmq-docker repo on GitHub, but they do not work. I created my own YAML deployment starting from the one in the repo cited above, and it works for mqnamesrv, but not for the broker. Here are the two deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rocketmq-namesrv
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-namesrv
  template:
    metadata:
      labels:
        app: rocketmq-namesrv
    spec:
      containers:
        - name: namesrv
          image: myrepo/rocketmq:4.9.3-alpine
          command: ["sh", "mqnamesrv"]
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "128Mi"
              cpu: "400m"
          ports:
            - containerPort: 9876
          volumeMounts:
            - name: namesrv-log
              mountPath: /var/log
      volumes:
        - name: namesrv-log
          persistentVolumeClaim:
            claimName: rocketmq-namesrv-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rocketmq-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-broker
  template:
    metadata:
      labels:
        app: rocketmq-broker
    spec:
      containers:
        - name: broker
          image: myrepo/rocketmq:4.9.3-alpine
          command: ["sh", "mqbroker", "-n", "localhost:9876"]
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "128Mi"
              cpu: "400m"
          ports:
            - containerPort: 10909
            - containerPort: 10911
          volumeMounts:
            - name: broker-log
              mountPath: /var/log
            - name: broker-store
              mountPath: /home/rocketmq
      volumes:
        - name: broker-log
          persistentVolumeClaim:
            claimName: rocketmq-broker-log-pvc
        - name: broker-store
          persistentVolumeClaim:
            claimName: rocketmq-broker-store-pvc
The image rocketmq:4.9.3-alpine was created following the procedure in the apache/rocketmq-docker repo.
After the deployment, rocketmq-namesrv works, but the broker's pod logs: sh: can't open 'mqbroker': No such file or directory. But if I run the container manually with kubectl run -ti rocketmq-broker --image=myrepo/rocketmq:4.9.3-alpine --restart=Never -- sh mqbroker -n localhost:9876 it works...
What could the problem in the YAML be? Am I doing something wrong?
I think the problem is with the mount path:
- name: broker-store
  mountPath: /home/rocketmq
Mounting a volume on top of /home/rocketmq hides whatever the image ships in that directory, so your binaries are no longer there, hence the error.
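One possible fix (a sketch only; the /home/rocketmq/store path is an assumption and should match wherever your image actually writes its data) is to mount the store volume on a subdirectory instead of on top of the directory that holds the scripts:

          volumeMounts:
            - name: broker-log
              mountPath: /var/log
            - name: broker-store
              mountPath: /home/rocketmq/store   # subdirectory, so the mqbroker script under /home/rocketmq stays visible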

Deploy Dind and secure Docker Registry on Kubernetes (colon issues)

I have an issue with one of my projects. Here is what I want to do:
Have a private Docker registry on my Kubernetes cluster
Have a Docker daemon running so that I can pull / push and build images directly inside the cluster
For this project I'm using certificates to secure all those interactions.
1. How to reproduce:
Note: I'm working on a Linux-based system.
Here are the files that I'm using:
Deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: docker-graph-storage
              mountPath: /var/lib/docker
            - name: dind-registry-cert
              mountPath: >-
                /etc/docker/certs.d/registry:5000/ca.crt
          ports:
            - containerPort: 2376
      volumes:
        - name: docker-graph-storage
          emptyDir: {}
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
        - name: init-reg-vol
          secret:
            secretName: init-reg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          env:
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: REGISTRY_HTTP_TLS_KEY
              value: /certs/registry.pem
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
            - name: dind-registry-cert
              mountPath: /certs/
            - name: registry-data
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: registry
        - name: registry-data
          persistentVolumeClaim:
            claimName: registry-data
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: docker
          command: ['sleep', '200']
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          env:
            - name: DOCKER_HOST
              value: tcp://docker:2376
            - name: DOCKER_TLS_VERIFY
              value: '1'
            - name: DOCKER_TLS_CERTDIR
              value: /certs
            - name: DOCKER_CERT_PATH
              value: /certs/client
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/registry.crt
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: true
            - name: dind-registry-cert
              mountPath: /usr/local/share/ca-certificate/ca.crt
              readOnly: true
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
Services.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: docker
spec:
  selector:
    app: docker
  ports:
    - name: docker
      protocol: TCP
      port: 2376
      targetPort: 2376
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
    - name: registry
      protocol: TCP
      port: 5000
      targetPort: 5000
Pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certs-client
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    limits:
      storage: 50Gi
    requests:
      storage: 2Gi
status: {}
For the cert files I have the following folders: certs/, certs/client, and certs.d/registry:5000/, and I use these commands to generate the certs:
openssl req -newkey rsa:4096 -nodes -keyout ./certs/registry.pem -x509 -days 365 -out ./certs/registry.crt -subj "/C=''/ST=''/L=''/O=''/OU=''/CN=registry"
cp ./certs/registry.crt ./certs.d/registry\:5000/ca.crt
Then I use Secrets to pass those certs into the pods:
kubectl create secret generic registry --from-file=certs/registry.crt --from-file=certs/registry.pem
kubectl create secret generic ca.crt --from-file=certs/registry.crt
Then, to launch the project, the following command is used:
kubectl apply -f pvc.yaml,deployment.yaml,service.yaml
2. My issues
I have a problem with my docker pods, which fail with this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/727d0f2a-bef6-4217-a292-427c5d76e071/volumes/kubernetes.io~secret/dind-registry-cert:/etc/docker/certs.d/registry:5000/ca.crt:ro
So the problem seems to come from the colon in the path name. Then I tried to escape the colon and got this error instead:
error: error parsing deployment.yaml: error converting YAML to JSON: yaml: line 34: found unknown escape character
The real problem here is that if the folder is not named 'registry:5000', the certificate is not recognised as correct and I get an x509 error when trying to push an image from the client.
For the overall project, I know it can work like this, since I already managed to deploy it locally with docker-compose (here is the link to the GitHub project if any of you are curious).
So I looked into it a bit and found out that it's a recurring problem with Docker (I mean with Docker Desktop, for mounting volumes into containers), but I can't find anything about the same issue on Kubernetes.
Do any of you have any lead / suggestion / workaround on this matter?
As always, thanks for your time :)
------------------------------- EDIT following @HelloWorld's answer -------------------------------
Thanks to the symlink workaround, the ca.crt is now correctly mounted inside. However, since I was mounting it in the deployment that runs the Docker daemon, the entrypoint of the docker:dind container was overwritten by the command. For future readers, here is the solution I found: fetching dockerd-entrypoint.sh and running it manually.
Here is the deployment as I write these lines:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker
  template:
    metadata:
      labels:
        app: docker
    spec:
      containers:
        - name: docker
          image: docker:dind
          resources:
            limits:
              cpu: "0.5"
              memory: "256Mi"
            requests:
              memory: "128Mi"
          securityContext:
            privileged: true
          command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /random/registry.crt /etc/docker/certs.d/registry:5000/ca.crt && wget https://raw.githubusercontent.com/docker-library/docker/a73d96e731e2dd5d6822c99a9af4dcbfbbedb2be/19.03/dind/dockerd-entrypoint.sh && chmod +x dockerd-entrypoint.sh && ./dockerd-entrypoint.sh']
          volumeMounts:
            - name: dind-client-cert
              mountPath: /certs/client/
              readOnly: false
            - name: dind-registry-cert
              mountPath: /random/
              readOnly: false
          ports:
            - containerPort: 2376
      volumes:
        - name: dind-client-cert
          persistentVolumeClaim:
            claimName: certs-client
        - name: dind-registry-cert
          secret:
            secretName: ca.crt
I hope it will be useful to someone in the future :)
The only thing I came up with is using symlinks. I tested it and it works. I also tried searching for a better solution but didn't find anything satisfying.
Have a look at this example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: centos:7
      command: ['sh', '-c', 'mkdir -p /etc/docker/certs.d/registry:5000 && ln -s /some/random/path/ca.crt /etc/docker/certs.d/registry:5000/ca.crt && exec sleep 10000']
      volumeMounts:
        - mountPath: '/some/random/path'
          name: registry-cert
  volumes:
    - name: registry-cert
      secret:
        secretName: my-secret
And here is the template Secret I used:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  ca.crt: <<< some_random_Data >>>
I mounted this Secret at /some/random/path (no colon, so it doesn't throw errors) and created a symlink at /etc/docker/certs.d/registry:5000/ca.crt pointing to /some/random/path/ca.crt.
Of course you also need to create the directory structure before running ln -s ..., which is why I run mkdir -p ... first.
Let me know if you have any further questions. I'd be happy to answer them.
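To check that the link is in place (using the myapp-pod name from the example above), something like this should show ca.crt pointing at the mounted file:

kubectl exec myapp-pod -- ls -l /etc/docker/certs.d/registry:5000/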

Fluentd Failing to connect to ElasticSearch cluster

I have a local Kubernetes cluster where I added a Fluentd DaemonSet using the preconfigured Elasticsearch image (fluent/fluentd-kubernetes-daemonset:elasticsearch), per step 2 of this article. I also have an Elasticsearch cluster running in the cloud. You can pass some env variables to the fluentd-elasticsearch image for configuration. It looks pretty straightforward, but when running the Fluentd pod I keep getting the error:
"Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"fa0acce34bf64db9bc9e46f98743c185.westeurope.azure.elastic-cloud.com\", :port=>9243, :scheme=>\"https\", :user=>\"username\", :password=>\"obfuscated\"})!" plugin_id="out_es"
When I try to reach the Elasticsearch cluster from within the pod with
wget https://fa0acce34bf64db9bc9e46f98743c185.westeurope.azure.elastic-cloud.com:9243/
I get a 401 Unauthorized (because I haven't supplied the user/password here), but it at least shows that the address is reachable.
Why is it failing to connect?
I already set FLUENT_ELASTICSEARCH_SSL_VERSION to 'TLSv1_2'; I saw that that solved the problem for others.
DaemonSet configuration:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "fa0acce34bf64db9bc9e46f98743c185.westeurope.azure.elastic-cloud.com"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9243"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            - name: FLUENT_UID
              value: "0"
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "false"
            - name: FLUENT_ELASTICSEARCH_SSL_VERSION
              value: "TLSv1_2"
            - name: FLUENT_ELASTICSEARCH_USER
              value: "<user>"
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              value: "<password>"
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
For anyone else who runs into this problem:
I was following a tutorial that used the 'image: fluent/fluentd-kubernetes-daemonset:elasticsearch' image. When you check their Docker Hub page (https://hub.docker.com/r/fluent/fluentd-kubernetes-daemonset) you can see that the :elasticsearch tag is a year old and probably outdated.
I changed the image for the DaemonSet to a more recent and stable tag, 'fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch', and boom, it works now.
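In the DaemonSet above, that amounts to changing only the image line:

      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch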

How do I copy a Kubernetes configmap to a write enabled area of a pod?

I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps to allow us to change the IP address of the master in the sentinel.conf file. I started on this, but Redis can't write to the config file because the mount point for ConfigMaps is read-only.
I was hoping to run an init container and copy the Redis conf to a different dir within the pod, but the init container couldn't find the conf file.
What are my options? Init Container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: IP/redis-sentinel
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: sentinel-redis-config
            items:
              - key: redis-config-sentinel
                path: sentinel.conf
Following @P Ekambaram's proposal, you can try this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis-sentinel
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: redis-sentinel
    spec:
      hostNetwork: true
      containers:
        - name: redis-sentinel
          image: redis:5.0.4
          ports:
            - containerPort: 63790
            - containerPort: 26379
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config
      initContainers:
        - name: copy
          image: redis:5.0.4
          command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
          volumeMounts:
            - mountPath: /redis-master
              name: config
            - mountPath: /redis-master-data
              name: data
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: example-redis-config
            items:
              - key: redis-config
                path: redis.conf
In this example, the initContainer copies the file from the ConfigMap into a writable dir.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process. A minimal sketch of that idea follows.
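A sketch of that approach, reusing the image and volume names from the examples above (the exact file names and paths are illustrative and should match your ConfigMap):

      containers:
        - name: redis-sentinel
          image: redis:5.0.4
          # copy the read-only ConfigMap file into the writable emptyDir, then start Sentinel against the copy
          command: ["sh", "-c", "cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/sentinel.conf && exec redis-sentinel /redis-master-data/sentinel.conf"]
          volumeMounts:
            - mountPath: /redis-master-data
              name: data
            - mountPath: /usr/local/etc/redis/conf
              name: config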