Unresponsive SonarQube in Kubernetes - docker

We're creating a Kubernetes deployment for SonarQube. When using the embedded H2 DB, the deployment works fine and SonarQube is reachable through the Ingress controller.
But when we set JDBC parameters for persistence, the SonarQube instance fails to respond to any request and logs the following error:
01:31:51.000 (unknown):0 warning: already initialized constant Input
01:31:51.000 WARNING: while creating new bindings for class org.jruby.rack.RackInput,
01:31:51.000 found an existing binding; you may want to run a clean build.
Here's the Kubernetes deployment descriptor:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonar-deployment
  namespace: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sonar
    spec:
      containers:
      - name: sonar
        image: sonarqube:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        env:
        - name: SONARQUBE_JDBC_USERNAME
          value: sonar
        - name: SONARQUBE_JDBC_PASSWORD
          value: sonar
        - name: SONARQUBE_JDBC_URL
          value: "jdbc:mysql://xxx.xxx.xxx.xxx/sonar?useUnicode=true&characterEncoding=utf8"

Deployment is still an experimental (extensions/v1beta1) feature in Kubernetes. Use a ReplicationController instead; here is my configuration, and it works in production.
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    app: sonarqube
  name: sonarqube
  namespace: services
spec:
  replicas: 1
  selector:
    app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - env:
        - name: SONARQUBE_JDBC_URL
          value: jdbc:mysql://mysql:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
        - name: SONARQUBE_JDBC_USERNAME
          value: sonar
        - name: SONARQUBE_JDBC_PASSWORD
          value: sonar
        image: sonarqube
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 20
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 60
        name: sonarqube
        ports:
        - containerPort: 9000
          protocol: TCP
        - containerPort: 9292
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 1000Mi
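The ReplicationController above only creates the pod; to reach SonarQube through the Ingress controller from the original question (or from other pods), you also need a Service in front of it. A minimal sketch, with the name and namespace assumed to match the ReplicationController:
apiVersion: v1
kind: Service
metadata:
  name: sonarqube            # assumption: any name your Ingress backend can point at
  namespace: services
  labels:
    app: sonarqube
spec:
  selector:
    app: sonarqube           # matches the pod labels from the ReplicationController template
  ports:
  - name: http
    port: 9000
    targetPort: 9000
    protocol: TCP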

Related

EKS connection refused when trying to talk to Jaeger agent daemonset

I recently deployed the jaeger agent as a daemonset on my k8s cluster alongside a collector. When trying to send spans to the agent using:
- name: JAEGER_AGENT_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
When looking at the application logs I see:
failed to flush Jaeger spans to server: write udp <Pod-Ip>:42531-><Node-Ip>:6831: write: connection refused
All nodes can reach each other, as the security group does not block ports between them; when using a sidecar agent, the spans are sent without issue.
To replicate:
Deploy the agent using:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: jaeger-agent
  labels:
    app: jaeger
    app.kubernetes.io/name: jaeger
    app.kubernetes.io/component: agent
  namespace: observability
spec:
  selector:
    matchLabels:
      app: jaeger
      app.kubernetes.io/name: jaeger
      app.kubernetes.io/component: agent
  template:
    metadata:
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: agent
    spec:
      containers:
      - name: jaeger-agent
        image: jaegertracing/jaeger-agent:1.18.0
        args: ["--reporter.grpc.host-port=<collector-name>:14250"]
        ports:
        - containerPort: 5775
          protocol: UDP
        - containerPort: 6831
          protocol: UDP
        - containerPort: 6832
          protocol: UDP
        - containerPort: 5778
          protocol: TCP
Then deploy the hotrod application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hotrod
  labels:
    app: hotrod
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hotrod
  template:
    metadata:
      labels:
        app: hotrod
    spec:
      containers:
      - name: hotrod
        image: jaegertracing/example-hotrod:latest
        imagePullPolicy: Always
        env:
        - name: JAEGER_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        ports:
        - containerPort: 8080
It looks like your DaemonSet is missing the hostNetwork property, which it needs in order to listen on the node IP.
You can check that article for further info: https://medium.com/@masroor.hasan/tracing-infrastructure-with-jaeger-on-kubernetes-6800132a677
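A minimal sketch of the change, assuming the rest of the DaemonSet stays as in the question; only hostNetwork and dnsPolicy are added:
spec:
  template:
    spec:
      hostNetwork: true                   # publish the agent's UDP/TCP ports on the node IP (status.hostIP)
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working (e.g. to resolve the collector) while on the host network
      containers:
      - name: jaeger-agent
        image: jaegertracing/jaeger-agent:1.18.0
        args: ["--reporter.grpc.host-port=<collector-name>:14250"]
Alternatively, you can keep the pod on the cluster network and declare a hostPort for each agent port instead.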

Configure prometheus to collect custom metrics from dockerized nodejs pod

I have set up prom-client (an unofficial client library for Prometheus) to collect the custom metrics I need.
I have the Prometheus server deployed from Helm following this EKS setup guide. Now I am trying to edit the default ConfigMap to collect my app metrics as well, but I'm getting this error:
parsing YAML file /etc/config/prometheus.yml: yaml: unmarshal errors:\n line 22: field cluster_ip not found in type kubernetes.plain\n line 25: cannot unmarshal !!str `default` into []string
This is what I have done as per the docs.
prometheus.yaml configmap file
apiVersion: v1
data:
  alerting_rules.yml: |
    {}
  alerts: |
    {}
  prometheus.yml: |
    global:
      evaluation_interval: 1m
      scrape_interval: 1m
      scrape_timeout: 10s
    rule_files:
    - /etc/config/recording_rules.yml
    - /etc/config/alerting_rules.yml
    - /etc/config/rules
    - /etc/config/alerts
    scrape_configs:
    ...DEFAULT CONFIGS...
    - job_name: my_metrics
      scrape_interval: 5m
      scrape_timeout: 10s
      honor_labels: true
      metrics_path: /api/metrics
      kubernetes_sd_configs:
      - role: service
        cluster_ip: 10.100.200.92
        namespaces:
          names:
            default
  recording_rules.yml: |
    {}
  rules: |
    {}
kind: ConfigMap
metadata:
  creationTimestamp: "2020-06-08T09:26:38Z"
  labels:
    app: prometheus
    chart: prometheus-11.3.0
    component: server
    heritage: Helm
    release: prometheus
  name: prometheus-server
  namespace: prometheus
  uid: 8fadb17a-f5c5-4f9d-a931-fa1f77684847
Here cluster_ip is the IP assigned to the Service that exposes my deployment.
My deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
    spec:
      containers:
      - image: IMAGE_URL:BUILD_NUMBER
        name: myapp
        resources:
          limits:
            cpu: "1000m"
            memory: "2400Mi"
          requests:
            cpu: "500m"
            memory: "2000Mi"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
          name: myapp
My service.yaml file, which exposes the deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    deploy: staging
    name: myapp
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
If there is a different or more efficient way to target my app for metrics collection, please let me know. Thanks.
This is what I am using to enable prometheus scraping inside the cluster.
In the scrape config, I have this snippet:
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - action: labeldrop
    regex: '(kubernetes_pod|app_kubernetes_io_instance|app_kubernetes_io_name|instance)'
This is taken directly from the default values for the prometheus helm chart: https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml#L1452
What that does is instruct Prometheus to scrape every pod that has the annotation:
prometheus.io/scrape: "true"
set. With these annotations on the pod you can then configure the port and path of the scrape:
prometheus.io/path: "/metrics"
prometheus.io/port: "9090"
So, you would need to modify your deployment.yaml to specify these annotations as well:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "<enter port of pod to scrape>"
        prometheus.io/path: "<enter path to scrape>"
    spec:
      containers:
      - image: IMAGE_URL:BUILD_NUMBER
        ...

Connection refused error when deploying couchbase in kubernetes {failed to connect to 127.0.0.1 port 8091: Connection refused}

I used the following yaml files to deploy couchbase in kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: couchbase-master-service
          servicePort: 8091
The pods started running and nothing seemed to be an issue at first glance. But when I tried to hit the host URL it gave me a bad gateway, and when I looked into the master pod's logs it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I have found that the master image is using this entrypoint script.
I ran this container image and it looks like the curl is failing because the 15s sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is raise this sleep to a higher value, but sleep is usually not the best option. (Actually this whole image is full of bad practices.)
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
while ! nc -z localhost 8091; do
  sleep 1
done

Jenkins slave JNLP4- connection timeout

I see this error in some of the Jenkins jobs
Cannot contact jenkins-slave-l65p0-0f7m0: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from 100.99.111.187/100.99.111.187:46776 failed. The channel is closing down or has closed down
I have a Jenkins master-slave setup.
On the slave, the following logs are found:
java.nio.channels.ClosedChannelException
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Jenkins is on a kubernetes cluster.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: default
  name: jenkins-deployment
spec:
  serviceName: "jenkins-pod"
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-pod
    spec:
      initContainers:
      - name: volume-mount-hack
        image: busybox
        command: ["sh", "-c", "chmod -R 777 /usr/mnt"]
        volumeMounts:
        - name: jenkinsdir
          mountPath: /usr/mnt
      containers:
      - name: jenkins-container
        imagePullPolicy: Always
        readinessProbe:
          exec:
            command:
            - curl
            - http://localhost:8080/login
            - -o
            - /dev/null
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 120
          periodSeconds: 10
        env:
        - name: JAVA_OPTS
          value: "-Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
        resources:
          requests:
            memory: "7100Mi"
            cpu: "2000m"
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - mountPath: /var/run
          name: docker-sock
        - mountPath: /var/jenkins_home
          name: jenkinsdir
      volumes:
      - name: jenkinsdir
        persistentVolumeClaim:
          claimName: "jenkins-persistence"
      - name: docker-sock
        hostPath:
          path: /var/run
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins
  labels:
    app: jenkins
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30099
    protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  labels:
    app: jenkins
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-master-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-slave-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      jenkins: slave
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: default
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins-pod
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
I doubt this has anything to do with kubernetes but still putting it out there.
I am assuming you are using the Jenkins Kubernetes Plugin.
You can increase "Timeout in seconds for Jenkins connection" under the Kubernetes pod template; it may solve your issue.
Description of "Timeout in seconds for Jenkins connection":
Specify the time in seconds up to which Jenkins should wait for the JNLP
agent to establish a connection. The value should be a positive integer,
the default being 100.
Did you configure the JNLP port in Jenkins itself? It is located in Manage Jenkins > Configure Global Security > Agents. Click the "Fixed" radio button (since you already assigned a TCP port). Set the "TCP port for JNLP agents" to 50000.
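If you happen to manage Jenkins with the Configuration as Code (JCasC) plugin, the equivalent setting would look roughly like this sketch (assuming JCasC is installed; 50000 matches the port exposed by the jenkins-discovery Service above):
jenkins:
  slaveAgentPort: 50000   # fixed TCP port for inbound (JNLP) agents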
I think, "jenkins-slave" is not a valid name. You can try rename it to "jnlp"
Explain here:
This was related to this issue. If the name of the custom agent is not jnlp, then another agent with the default jnlp image is created. This explains messages like channel already closed etc..
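For illustration, a hedged sketch of a pod template YAML for the Kubernetes plugin where the agent container uses the reserved jnlp name (the image and args here are assumptions based on the standard inbound agent, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
spec:
  containers:
  - name: jnlp                            # must be "jnlp", otherwise the plugin injects its own default jnlp container
    image: jenkins/inbound-agent:latest   # assumption: the standard inbound (JNLP) agent image
    args: ["$(JENKINS_SECRET)", "$(JENKINS_NAME)"]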

Kubernetes deployment database connection error

I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I'm encountering an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
      - name: pv-storage-glpi
        persistentVolumeClaim:
          claimName: pv-claim-glpi
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "glpi"
        - name: MYSQL_DATABASE
          value: "glpi"
        - name: MYSQL_USER
          value: "glpi"
        - name: MYSQL_PASSWORD
          value: "glpi"
        - name: GLPI_SOURCE_URL
          value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mariadb/
          name: pv-storage-glpi
          subPath: mariadb
      - name: glpi
        image: driket54/glpi
        ports:
        - containerPort: 80
          name: http
        - containerPort: 8090
          name: https
        volumeMounts:
        - mountPath: /var/glpidata
          name: pv-storage-glpi
          subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: http
    name: http
  - protocol: "TCP"
    port: 8090
    targetPort: https
    name: https
  - protocol: "TCP"
    port: 3306
    targetPort: mariadb
    name: mariadb
  type: NodePort
---
The Docker image is properly deployed, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer, since I don't have the expected Kubernetes knowledge, but I can't add a comment yet :(
What you should alter first is your GLPi version.
Use this link. It's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you may use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(I'm not sure about the host value in your environment, but you get the idea.)
