Offline agent when creating a Kubernetes pod template using Jenkins scripted pipeline - jenkins

I have created a shared library and it has a groovy file named myBuildPlugin.groovy:
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
name: my-build
spec:
containers:
- name: jnlp
image: dtr.myhost.com/test/jenkins-build-agent:latest
ports:
- containerPort: 8080
- containerPort: 50000
resources:
limits:
cpu : 1
memory : 1Gi
requests:
cpu: 200m
memory: 256Mi
env:
- name: JENKINS_URL
value: http://jenkins:8080
- name: mongo
image: mongo
ports:
- containerPort: 8080
- containerPort: 50000
- containerPort: 27017
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 1
memory: 512Mi
imagePullSecrets:
- name: dtrsecret""")
{
node(label) {
pipelineParams.step1.call([label : label])
}
}
When I use myBuildPlugin in my project as above, the log shows it waits for an executor forever. When I look at Jenkins I can see the agent is being created, but for some reason Jenkins can't talk to it over port 50000 (or perhaps the pod can't talk back to Jenkins!).
Later I removed the yaml and used the following code instead:
podTemplate(label: 'mypod', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: 'dtr.myhost.com/test/jenkins-build-agent:latest',
        ttyEnabled: true,
        privileged: false,
        alwaysPullImage: false,
        workingDir: '/home/jenkins',
        resourceRequestCpu: '1',
        resourceLimitCpu: '100m',
        resourceRequestMemory: '100Mi',
        resourceLimitMemory: '200Mi',
        envVars: [
            envVar(key: 'JENKINS_URL', value: 'http://jenkins:8080'),
        ]
    ),
    containerTemplate(name: 'maven', image: 'maven:3.5.0', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true)
],
volumes: [
    emptyDirVolume(mountPath: '/etc/mount1', memory: false),
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
],
imagePullSecrets: [ 'dtrsecret' ],
)
{
    node('mypod') {
        pipelineParams.step1.call([label: 'mypod'])
    }
}
Still no luck. Interestingly, if I define all these containers via the Jenkins configuration page instead, things work smoothly (my cloud configuration and pod template configuration screenshots are not reproduced here).
It appears the issue happens whenever I change the label to something other than jenkins-jenkins-slave, even when the template is defined via Jenkins' configuration page. If that's the case, how am I supposed to create multiple pod templates for different types of projects?
Just today, I also tried to use pod inheritance as below without any success:
def label = 'kubepod-test'
podTemplate(label: label, inheritFrom: 'default',
    containers: [
        containerTemplate(name: 'mongodb', image: 'mongo', command: '', ttyEnabled: true)
    ]
)
{
    node(label) {
    }
}
Please help me with this issue. Thanks!

There's something iffy about your pod configuration: you can't have your Jenkins and Mongo containers both declaring port 50000. Generally you want each container to expose unique ports, since all containers in a pod share the same port space.
In this case it looks like you need port 50000 to set up the tunnel to the Jenkins agent. Keep in mind that the Jenkins plugin might be doing other things behind the scenes, such as setting up a Kubernetes Service or using the internal Kubernetes DNS.
In the second example, I don't even see port 50000 exposed at all.
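For illustration, here is a sketch of the container ports from the first example with the collision removed; only the jnlp container keeps 50000 for the agent tunnel, and mongo exposes just its own 27017 (all other fields as in the question):

  containers:
  - name: jnlp
    image: dtr.myhost.com/test/jenkins-build-agent:latest
    ports:
    - containerPort: 8080
    - containerPort: 50000   # agent tunnel back to the Jenkins master
  - name: mongo
    image: mongo
    ports:
    - containerPort: 27017   # mongo only needs its own port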

Related

Install cassandra exporter for prometheus monitoring in cassandra pod in kubernetes

I am using the Cassandra image with the following StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
Now I need to add the line below to cassandra-env.sh, either via a postStart hook or in the Cassandra YAML file:
JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/cassandra-exporter-agent-<version>.jar"
I was able to achieve this, but after this step Cassandra requires a restart; since it's already running as a pod, I don't know how to restart the process. So is there any way to perform this step before the pod starts, rather than after it is up?
I was given the following suggestion:
This won't work. Commands that run in postStart don't impact the running container; you need to change the startup command passed to Cassandra. The only way I know to do this is to create a new container image in the artifactory, based on the existing image, and pull from there.
But I don't know how to achieve this.
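For what it's worth, here is a minimal Dockerfile sketch of such a derived image. It assumes the exporter jar has been downloaded next to the Dockerfile, and that the base image sets CASSANDRA_HOME with cassandra-env.sh under $CASSANDRA_HOME/conf; substitute the actual <version> and adjust paths if your image differs:

# Bake the agent in at build time so no post-start restart is needed.
FROM gcr.io/google-samples/cassandra:v13
# Assumption: jar downloaded locally; replace <version> with the real one.
COPY cassandra-exporter-agent-<version>.jar "$CASSANDRA_HOME/lib/"
# Append the javaagent flag so it takes effect on every startup.
RUN echo 'JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/cassandra-exporter-agent-<version>.jar"' \
      >> "$CASSANDRA_HOME/conf/cassandra-env.sh"

Push the resulting image to your registry and point the StatefulSet's image: field at the new tag; each pod then starts with the agent already wired in.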

How to resolve the hostname between the pods in the kubernetes cluster?

I am creating two pods with a custom Docker image (Ubuntu is the base image). I am trying to ping each pod from the other's terminal. I can reach them by IP address but not by hostname. How can I achieve this without manually adding entries to /etc/hosts in the pods?
Note: I am not running any services on the node. I am basically trying to set up slurm with this.
Pod Manifest File:
apiVersion: v1
kind: Pod
metadata:
  name: slurmctld
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: slurmctld
  containers:
  - name: slurmctld
    image: slurmcontroller
    imagePullPolicy: Always
    ports:
    - containerPort: 6817
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
---
apiVersion: v1
kind: Pod
metadata:
  name: worker1
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: worker1
  containers:
  - name: worker1
    image: slurmworker
    imagePullPolicy: Always
    ports:
    - containerPort: 6818
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
From the docs here
In general a pod has the following DNS resolution:
pod-ip-address.my-namespace.pod.cluster-domain.example.
For example, if a pod in the default namespace has the IP address
172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:
172-17-0-3.default.pod.cluster.local.
Any pods created by a Deployment or DaemonSet exposed by a Service
have the following DNS resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example
If you don't want to deal with the ever-changing IP of a pod, you need to create a Service to expose the pods via stable DNS hostnames. Below is an example of a Service exposing the slurmctld pod.
apiVersion: v1
kind: Service
metadata:
  name: slurmctld-service
spec:
  selector:
    app: slurm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 6817
Assuming you are doing this in the default namespace, you should now be able to access it via slurmctld-service.default.svc.cluster.local.
You can also use hostname -i, which in all the Kubernetes installs I've tested resolves to the pod's IP address.
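Since the pod specs in the question already set spec.hostname, another option (a sketch, not part of the original answer) is to add a matching spec.subdomain to each pod and create a headless Service named after that subdomain; Kubernetes DNS then publishes each pod under its own hostname:

apiVersion: v1
kind: Service
metadata:
  name: slurm              # must equal the pods' spec.subdomain
spec:
  clusterIP: None          # headless: DNS resolves straight to pod IPs
  selector:
    app: slurm
  ports:
  - port: 6817

With subdomain: slurm added under each pod's spec, slurmctld becomes resolvable as slurmctld.slurm.default.svc.cluster.local (and worker1 likewise), with no /etc/hosts editing.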

kubernetes jenkins plugin does not create 2 containers

I have 2 Jenkins instances, one using Kubernetes plugin version 1.8 and the second version 1.18.
The older version is able to create both containers:
Agent specification [Kubernetes Pod Template] (mo-aio-build-supplier):
* [jnlp] mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca(resourceRequestCpu: 0.25, resourceRequestMemory: 256Mi, resourceLimitCpu: 1, resourceLimitMemory: 1.5Gi)
* [postgres] mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift
The newer version is not able to create the postgres container:
Container postgres exited with error 1. Logs: mkdir: cannot create directory '/home/jenkins': Permission denied
Both use the same podTemplate:
podTemplate(
    name: label,
    label: label,
    cloud: 'openshift',
    serviceAccount: 'jenkins',
    containers: [
        containerTemplate(
            name: 'jnlp',
            image: 'mynexus.services.theosmo.com/jenkins-slave-mo-aio:v3.11.104-14_jdk8',
            resourceRequestCpu: env.CPU_REQUEST,
            resourceLimitCpu: env.CPU_LIMIT,
            resourceRequestMemory: env.RAM_REQUEST,
            resourceLimitMemory: env.RAM_LIMIT,
            workingDir: '/tmp',
            args: '${computer.jnlpmac} ${computer.name}',
            command: ''
        ),
        containerTemplate(
            name: 'postgres',
            image: 'mynexus.services.theosmo.com:443/mo-base/mo-base-postgresql-95-openshift',
            envVars: [
                envVar(key: "POSTGRESQL_USER", value: "admin"),
                envVar(key: "POSTGRESQL_PASSWORD", value: "admin"),
                envVar(key: "POSTGRESQL_DATABASE", value: "supplier_data"),
            ]
        )
    ],
    volumes: [emptyDirVolume(mountPath: '/dev/shm', memory: true)]
)
Also, I've noticed the YAML created by the newer version is a bit odd:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "http://jenkins.svc:80/job/build-supplier/473/"
labels:
jenkins: "slave"
jenkins/mo-aio-build-supplier: "true"
name: "mo-aio-build-supplier-xfgmn-qmrdl"
spec:
containers:
- args:
- "********"
- "mo-aio-build-supplier-xfgmn-qmrdl"
env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_TUNNEL"
value: "jenkins-jnlp.svc:50000"
- name: "JENKINS_AGENT_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_AGENT_WORKDIR"
value: "/tmp"
- name: "JENKINS_URL"
value: "http://jenkins.svc:80/"
- name: "HOME"
value: "/home/jenkins"
image: "mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca"
imagePullPolicy: "IfNotPresent"
name: "jnlp"
resources:
limits:
memory: "1.5Gi"
cpu: "1"
requests:
memory: "256Mi"
cpu: "0.25"
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/tmp"
name: "workspace-volume"
readOnly: false
workingDir: "/tmp"
- env:
- name: "POSTGRESQL_DATABASE"
value: "supplier_data"
- name: "POSTGRESQL_USER"
value: "admin"
- name: "HOME"
value: "/home/jenkins"
- name: "POSTGRESQL_PASSWORD"
value: "admin"
image: "mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift"
imagePullPolicy: "IfNotPresent"
name: "postgres"
resources:
limits: {}
requests: {}
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins/agent"
nodeSelector: {}
restartPolicy: "Never"
serviceAccount: "jenkins"
volumes:
- emptyDir:
medium: "Memory"
name: "volume-0"
- emptyDir: {}
name: "workspace-volume"
As you can see above, the postgres container's entry starts directly under an env tree.
Any suggestions? Thanks in advance.
As far as I checked, here is what's going on.
The problem
Since Kubernetes Plugin version 1.18.0, the default working directory of the pod containers was changed from /home/jenkins to /home/jenkins/agent. But the default HOME environment variable enforcement is still pointing to /home/jenkins. The impact of this change is that if pod container images do not have a /home/jenkins directory with sufficient permissions for the running user, builds will fail to do anything directly under their HOME directory, /home/jenkins.
Resolution
There are different workaround to that problem:
Change the default HOME variable
The simplest and preferred workaround is to add the system property -Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent on Jenkins startup. This requires a restart.
This workaround will reflect the behavior of kubernetes plugin pre-1.18.0 but on the new working directory /home/jenkins/agent
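If your Jenkins master itself runs in Kubernetes, one way to pass that property is through JAVA_OPTS on the master container. This is only a sketch (the surrounding Deployment is whatever you already use); the official jenkins/jenkins image does read JAVA_OPTS at startup:

      # Fragment of the Jenkins master pod spec, not a complete manifest.
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        env:
        - name: JAVA_OPTS
          value: "-Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent"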
Use /home/jenkins as the working directory
A workaround is to change the working directory of pod containers back to /home/jenkins. This workaround is only possible when using YAML to define agent pod templates (see JENKINS-60977).
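A minimal sketch of that YAML workaround, pinning the jnlp container's working directory back to /home/jenkins:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    workingDir: /home/jenkins   # restore the pre-1.18.0 default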
Prepare images for Jenkins
A workaround could be to ensure that the images used in agent pods have a /home/jenkins directory that is owned by the root group and writable by the root group as mentioned in OpenShift Container Platform-specific guidelines.
Additionally, there is an open issue about this on the Jenkins tracker.
Hope this helps.

How to deploy elasticsearch in kubernetes established by AWS EKS [duplicate]

I am running my Kubernetes cluster on AWS EKS, which runs Kubernetes 1.10.
I am following this guide to deploy Elasticsearch in my cluster:
elasticsearch Kubernetes
The first time I deployed it, everything worked fine. Now when I redeploy it, I get the following error:
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2018-08-24T18:07:28,448][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] stopping ...
[2018-08-24T18:07:28,534][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] stopped
[2018-08-24T18:07:28,534][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] closing ...
[2018-08-24T18:07:28,555][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] closed
Here is my deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es-master
  labels:
    component: elasticsearch
    role: master
spec:
  replicas: 3
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-master
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: myesdb
        - name: NUMBER_OF_MASTERS
          value: "2"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
        - name: NETWORK_HOST
          value: "0.0.0.0"
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          requests:
            cpu: 0.25
          limits:
            cpu: 1
        ports:
        - containerPort: 9300
          name: transport
        livenessProbe:
          tcpSocket:
            port: transport
          initialDelaySeconds: 20
          periodSeconds: 10
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - emptyDir:
          medium: ""
        name: "storage"
I have seen a lot of posts about increasing the max file descriptor limit, but I am not sure how to do it on EKS. Any help would be appreciated.
Just want to append to this issue: if you create the EKS cluster with eksctl, you can append this to the NodeGroup creation YAML:

preBootstrapCommands:
  - "sed -i -e 's/1024:4096/65536:65536/g' /etc/sysconfig/docker"
  - "systemctl restart docker"

This will solve the problem for a newly created cluster by fixing the Docker daemon's default ulimit config.
Alternatively, update the default-ulimits parameter in '/etc/docker/daemon.json' on each node:

{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 65536,
      "Hard": 65536
    }
  }
}

and restart the Docker daemon.
This is the only thing that worked for me when setting up an EFK stack on EKS. Add this to your nodegroup creation YAML file under nodeGroups: (see the fuller sketch below), then create your nodegroup and schedule your ES pods on it.

preBootstrapCommands:
  - "sysctl -w vm.max_map_count=262144"
  - "systemctl restart docker"

How to write a kubernetes pod configuration to start two containers

I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together.
Currently I have tried the following configuration:
{
  "id": "podId",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "podId",
      "containers": [{
          "name": "type1",
          "image": "local/image"
        },
        {
          "name": "type2",
          "image": "local/secondary"
        }]
    }
  },
  "labels": {
    "name": "imageTest"
  }
}
However, when I execute kubecfg -c app.json create /pods I get the following error:
F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, Body:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), MultipartForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error:
{"kind":"Status","creationTimestamp":null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec:0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerManifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(nil), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500}
How can I modify the configuration accordingly?
Running kubernetes on a vagrant vm (yungsang/coreos).
The error in question here is "failed to find fit". This generally happens when you have a port conflict (trying to use the same hostPort too many times), or perhaps you don't have any worker nodes/minions.
I'd suggest you either use the Vagrantfile that is in the Kubernetes git repo (see http://kubernetes.io), as we have been trying to make sure that stays working while Kubernetes is under very active development, or, if you want to make it work with the CoreOS single-machine setup, hop on IRC (#google-containers on freenode) and try to get in touch with Kelsey Hightower.
Your pod spec file looks invalid.
According to http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers, a valid multi-container pod spec should look like this:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
The latest doc is at http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: ng
    image: nginx
    imagePullPolicy: IfNotPresent
