I recently deployed the Jaeger agent as a DaemonSet on my Kubernetes cluster, alongside a collector. When trying to send spans to the agent using:
- name: JAEGER_AGENT_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
When looking at the application logs I see:
failed to flush Jaeger spans to server: write udp <Pod-Ip>:42531-><Node-Ip>:6831: write: connection refused
All nodes can reach each other, since the security group does not block ports between them. When using a sidecar agent, the spans are sent without issue.
To replicate:
Deploy the agent using:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: jaeger-agent
  labels:
    app: jaeger
    app.kubernetes.io/name: jaeger
    app.kubernetes.io/component: agent
  namespace: observability
spec:
  selector:
    matchLabels:
      app: jaeger
      app.kubernetes.io/name: jaeger
      app.kubernetes.io/component: agent
  template:
    metadata:
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: agent
    spec:
      containers:
      - name: jaeger-agent
        image: jaegertracing/jaeger-agent:1.18.0
        args: ["--reporter.grpc.host-port=<collector-name>:14250"]
        ports:
        - containerPort: 5775
          protocol: UDP
        - containerPort: 6831
          protocol: UDP
        - containerPort: 6832
          protocol: UDP
        - containerPort: 5778
          protocol: TCP
Then deploy the HotROD example application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hotrod
  labels:
    app: hotrod
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hotrod
  template:
    metadata:
      labels:
        app: hotrod
    spec:
      containers:
      - name: hotrod
        image: jaegertracing/example-hotrod:latest
        imagePullPolicy: Always
        env:
        - name: JAEGER_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        ports:
        - containerPort: 8080
It looks like your DaemonSet is missing the hostNetwork property, which it needs in order to listen on the node IP; see the sketch below.
You can check this article for further info: https://medium.com/@masroor.hasan/tracing-infrastructure-with-jaeger-on-kubernetes-6800132a677
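For reference, a minimal sketch of that change against the DaemonSet above; the dnsPolicy line is an extra assumption that is commonly paired with hostNetwork so that in-cluster DNS (for the collector address) keeps working:
# Sketch: relevant portion of the jaeger-agent DaemonSet with host networking enabled,
# so the agent's UDP ports (5775/6831/6832) bind directly on the node IP.
spec:
  template:
    spec:
      hostNetwork: true                   # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet  # assumption: keeps cluster DNS resolution while on the host network
      containers:
      - name: jaeger-agent
        image: jaegertracing/jaeger-agent:1.18.0
        args: ["--reporter.grpc.host-port=<collector-name>:14250"]
        ports:
        - containerPort: 6831
          protocol: UDP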
Related
I have a clean Ubuntu 18.04 server with minikube, kubectl, and Docker installed.
I have several resources for it.
One Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express-deployment
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-db-secret
              key: mongo-db-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-db-secret
              key: mongo-db-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongo-db-configmap
              key: mongo-db-url
One internal Service, because I tried to connect through an Ingress:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
One Ingress for it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: "/"
        pathType: "Prefix"
        backend:
          service:
            name: mongo-express-service
            port:
              number: 8081
And one external Service, because I also tried to connect through it:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-external-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000
But none of these options works for me. I tried updating the hosts file by adding
192.168.47.2 my-host.com
but that didn't help either.
When I run curl my-host.com in the server terminal I get the correct response, but I can't reach it from my browser.
My domain points to this server, and when I use plain nginx everything works fine.
Do I need to add something else or update my config?
I hope you can help me.
I followed the official instructions and had no problem running the KIE Server and Workbench on Docker. However, when I try with Kubernetes I run into a problem: there is no Execution Server in the list (Business Central -> Deploy -> Execution Servers). Both are up and running, I can access Business Central, and http://localhost:31002/kie-server/services/rest/server/ responds correctly:
<response type="SUCCESS" msg="Kie Server info">
  <kie-server-info>
    <capabilities>KieServer</capabilities>
    <capabilities>BRM</capabilities>
    <capabilities>BPM</capabilities>
    <capabilities>CaseMgmt</capabilities>
    <capabilities>BPM-UI</capabilities>
    <capabilities>BRP</capabilities>
    <capabilities>DMN</capabilities>
    <capabilities>Swagger</capabilities>
    <location>http://localhost:8080/kie-server/services/rest/server</location>
    <messages>
      <content>Server KieServerInfo{serverId='kie-server-kie-server-7fcc96f568-2gf29', version='7.45.0.Final', name='kie-server-kie-server-7fcc96f568-2gf29', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Tue Oct 27 10:36:09 UTC 2020</content>
      <severity>INFO</severity>
      <timestamp>2020-10-27T10:36:09.433Z</timestamp>
    </messages>
    <mode>DEVELOPMENT</mode>
    <name>kie-server-kie-server-7fcc96f568-2gf29</name>
    <id>kie-server-kie-server-7fcc96f568-2gf29</id>
    <version>7.45.0.Final</version>
  </kie-server-info>
</response>
Here is the YAML file I am using to create the deployments and services:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-wb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie-wb
  template:
    metadata:
      labels:
        app: kie-wb
    spec:
      containers:
      - name: kie-wb
        image: jboss/drools-workbench-showcase:latest
        ports:
        - containerPort: 8080
        - containerPort: 8001
        securityContext:
          privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-wb
spec:
  selector:
    app: kie-wb
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  - name: "8001"
    port: 8001
    targetPort: 8001
  # type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: kie-wb-np
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31001
  selector:
    app: kie-wb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie
  template:
    metadata:
      labels:
        app: kie
    spec:
      containers:
      - name: kie
        image: jboss/kie-server-showcase:latest
        ports:
        - containerPort: 8080
        securityContext:
          privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-server
spec:
  selector:
    app: kie
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kie-server-np
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31002
  selector:
    app: kie
  # type: LoadBalancer
When deploying with Docker I use --link drools-wb:kie-wb:
docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest
In Kubernetes I created a Service called kie-wb, but that doesn't help.
What am I missing here?
I was working on a similar setup and used your YAML file as a start (thanks for that)!
I had to add the following snippet to the kie-server-showcase container:
env:
- name: KIE_WB_ENV_KIE_CONTEXT_PATH
  value: "business-central"
It does work now, at least as far as I can tell.
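For clarity, here is a sketch of roughly where that snippet lands in the kie-server Deployment from the question above (only the env block is new relative to the original manifest):
# Sketch: kie-server container from the Deployment above, with the extra env var added.
spec:
  template:
    spec:
      containers:
      - name: kie
        image: jboss/kie-server-showcase:latest
        env:
        - name: KIE_WB_ENV_KIE_CONTEXT_PATH   # from the answer above
          value: "business-central"
        ports:
        - containerPort: 8080
        securityContext:
          privileged: true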
Kind note: I have googled a lot and looked at many related questions on Stack Overflow, but I couldn't solve my issue, so please don't mark this as a duplicate!
I'm trying to deploy two services (one is Python Flask, the other is Node.js) on Google Kubernetes Engine. I have created two Kubernetes Deployments, one for each service, and two Kubernetes Services of type NodePort, one for each service. Then I created an Ingress and listed my endpoints, but the Ingress reports that one backend service is UNHEALTHY.
Here are my Deployments YAML definitions:
# Pyservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pyservice
  labels:
    app: pyservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: pyservice
  template:
    metadata:
      labels:
        app: pyservice
    spec:
      containers:
      - name: pyservice
        image: docker.io/arycloud/docker_web_app:pyservice
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: docksecret
# Nodeservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodeservice
  labels:
    app: nodeservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: nodeservice
  template:
    metadata:
      labels:
        app: nodeservice
        tier: web
    spec:
      containers:
      - name: nodeservice
        image: docker.io/arycloud/docker_web_app:nodeservice
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: docksecret
And, here are my services and Ingress YAML definitions:
# pyservcie service
kind: Service
apiVersion: v1
metadata:
  name: pyservice
spec:
  type: NodePort
  selector:
    app: pyservice
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30001
---
# nodeservcie service
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie
spec:
  type: NodePort
  selector:
    app: nodeservcie
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30002
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pyservice
          servicePort: 5000
      - path: /*
        backend:
          serviceName: pyservice
          servicePort: 5000
      - path: /node/svc/
        backend:
          serviceName: nodeservcie
          servicePort: 8080
The pyservice works fine, but the nodeservice shows up as an UNHEALTHY backend.
I have even edited the firewall rules for all the gke-... rules to allow all ports, just to get past this issue, but it still shows the UNHEALTHY status for the nodeservice.
What's wrong here?
Thanks in advance!
Why are you using the GCE ingress class and then specifying an nginx rewrite annotation? In case you haven't realised, that annotation won't do anything for the GCE ingress.
You have also got 'nodeservcie' as your selector instead of 'nodeservice'.
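Based on that observation, a minimal sketch of the corrected Service (only the selector changes; the Service name is left as-is so the Ingress backend reference still matches):
# Sketch: nodeservice Service with the selector matching the Deployment's pod label.
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie        # kept as-is, since the Ingress backend references this name
spec:
  type: NodePort
  selector:
    app: nodeservice       # was 'nodeservcie', which matches no pods
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30002
Renaming the Service itself to nodeservice would also work, as long as the Ingress backend serviceName is updated to match.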
I see this error in some of my Jenkins jobs:
Cannot contact jenkins-slave-l65p0-0f7m0: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from 100.99.111.187/100.99.111.187:46776 failed. The channel is closing down or has closed down
I have a Jenkins master-slave setup.
On the slave, the following logs are found:
java.nio.channels.ClosedChannelException
    at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
    at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
    at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Jenkins runs on a Kubernetes cluster:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: default
  name: jenkins-deployment
spec:
  serviceName: "jenkins-pod"
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-pod
    spec:
      initContainers:
      - name: volume-mount-hack
        image: busybox
        command: ["sh", "-c", "chmod -R 777 /usr/mnt"]
        volumeMounts:
        - name: jenkinsdir
          mountPath: /usr/mnt
      containers:
      - name: jenkins-container
        imagePullPolicy: Always
        readinessProbe:
          exec:
            command:
            - curl
            - http://localhost:8080/login
            - -o
            - /dev/null
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 120
          periodSeconds: 10
        env:
        - name: JAVA_OPTS
          value: "-Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
        resources:
          requests:
            memory: "7100Mi"
            cpu: "2000m"
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
        - mountPath: /var/run
          name: docker-sock
        - mountPath: /var/jenkins_home
          name: jenkinsdir
      volumes:
      - name: jenkinsdir
        persistentVolumeClaim:
          claimName: "jenkins-persistence"
      - name: docker-sock
        hostPath:
          path: /var/run
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins
  labels:
    app: jenkins
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30099
    protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  labels:
    app: jenkins
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-master-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-slave-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      jenkins: slave
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: default
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins-pod
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
I doubt this has anything to do with Kubernetes, but I'm putting it out there anyway.
I am assuming you are using the Jenkins Kubernetes plugin.
You can increase "Timeout in seconds for Jenkins connection" under the Kubernetes pod template. It may solve your issue.
Description of "Timeout in seconds for Jenkins connection":
Specify time in seconds up to which Jenkins should wait for the JNLP
agent to establish a connection. Value should be a positive integer,
default being 100.
Did you configure the JNLP port in Jenkins itself? It is located in Manage Jenkins > Configure Global Security > Agents. Click the "Fixed" radio button (since you already assigned a TCP port). Set the "TCP port for JNLP agents" to 50000.
I think "jenkins-slave" is not a valid name here. You can try renaming it to "jnlp".
Explanation:
This is related to how the Kubernetes plugin handles agent containers: if the name of the custom agent container is not jnlp, then another container with the default jnlp image is created. This explains messages like "channel already closed", etc.
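As a rough illustration of that rename, a pod template for the Kubernetes plugin can override the default agent container by using the reserved container name jnlp (the jenkins/inbound-agent image below is an assumption; substitute whatever image bundles your agent):
# Sketch: Kubernetes-plugin pod template where the custom agent container is named
# 'jnlp', so the plugin does not inject a second default agent container.
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp                          # must be 'jnlp' to replace the default agent container
    image: jenkins/inbound-agent:latest # assumption: any image that runs the Jenkins remoting agent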
I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*..100:32002) as an environment variable in the Kubernetes deployment script of lucky-word-client. How can I do this?
This is the Deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
      - name: lucky-server
        image: lucky-server-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
  - protocol: TCP
    targetPort: 8888
    port: 80
    nodePort: 32002
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
  - protocol: TCP
    targetPort: 8080
    port: 80
  type: NodePort
Kubernetes automatically injects Services as environment variables: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
But you should not use this. It won't work unless all the Services are in place when you create the pod. It was inspired by Docker, which has also moved on to DNS-based service discovery, so environment-based service discovery is a thing of the past.
Please rely on DNS-based service discovery instead. Minikube ships with kube-dns, so you can just use the lucky-server hostname (or one of the lucky-server[.default[.svc[.cluster[.local]]]] names). Read the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
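Following that advice, a minimal sketch of how the client Deployment could still receive the server address as an environment variable, but via the Service's DNS name rather than a NodePort (the variable name LUCKY_WORD_SERVER_URL is made up for illustration; the Service name lucky-server and its port 80 come from the manifests above):
# Sketch: lucky-client container pointing at the lucky-server Service by DNS name
# instead of a node IP and NodePort.
spec:
  template:
    spec:
      containers:
      - name: lucky-client
        image: lucky-client-img
        imagePullPolicy: Never
        env:
        - name: LUCKY_WORD_SERVER_URL     # hypothetical variable name read by the client app
          value: "http://lucky-server:80" # Service DNS name; 80 is the Service port defined above
        ports:
        - containerPort: 8080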