I am trying to connect a pod in my Kubernetes (k8s) cluster to a remote Jaeger server. I've tested this and it works well when both are on the same machine. However, when I run my app on k8s, it cannot connect to Jaeger even though I am using the machine's physical IP.
First, I tried this:
containers:
- name: api
env:
- name: OTEL__AGENT_HOST
value: <my-physical-ip>
- name: OTEL__AGENT_PORT
value: "6831"
After reading some docs on the internet, I added the Jaeger agent to my deployment as a sidecar container, like this:
containers:
- name: api
env:
- name: OTEL__AGENT_HOST
value: "localhost"
- name: OTEL__AGENT_PORT
value: "6831"
- image: jaegertracing/jaeger-agent
name: jaeger-agent
ports:
- containerPort: 5775
protocol: UDP
- containerPort: 6831
protocol: UDP
- containerPort: 6832
protocol: UDP
- containerPort: 5778
protocol: TCP
args: ["--reporter.grpc.host-port=<my-physical-ip>:14250"]
Both containers seem to run fine, but the Jaeger collector logs a warning like this:
{"level":"warn","ts":1641987200.2678068,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.
HandleStreams failed to receive the preface from client: read tcp 172.20.0.4:14250-><the-ip-of-machine-my-pods-are-deployed>:32852: i/o timeout\"","system":"grpc","grpc_log":true}
I exposed port 14267 on the Jaeger collector on the remote machine, then changed args: ["--reporter.grpc.host-port=<my-physical-ip>:14250"] to args: ["--reporter.grpc.host-port=<my-physical-ip>:14267"], and it works.
Have you tried using the Jaeger Operator? https://github.com/jaegertracing/jaeger-operator
This is how you install it:
kubectl create namespace observability
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.31.0/jaeger-operator.yaml -n observability
Then you can create a Jaeger instance that will bring up the Jaeger components (collector, agent, query). You can define storage too, e.g. Elasticsearch:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simple-prod-es
spec:
strategy: production
storage:
type: elasticsearch
options:
es:
server-urls: https://search-test-g7fbo7pzghdquvvgxty2pc6lqu.us-east-2.es.amazonaws.com
index-prefix: jaeger-span
username: test
password: xxxeee
Then in your application's deployment YAML you will need to configure the agent as a sidecar (or you can run the agent as a DaemonSet) so that requests can be forwarded to the collector, as sketched below.
More details here: https://www.jaegertracing.io/docs/1.31/operator/#deployment-strategies
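With the operator installed, one way to get the sidecar is to let the operator inject it via an annotation on your Deployment (or its namespace). A minimal sketch, assuming the operator is watching your namespace; the image name is hypothetical and the OTEL__* variables are carried over from the question above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  annotations:
    "sidecar.jaegertracing.io/inject": "true"   # the operator adds a jaeger-agent container to matching pods
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: my-api:latest              # hypothetical image name
        env:
        - name: OTEL__AGENT_HOST
          value: "localhost"              # the injected agent listens inside the pod
        - name: OTEL__AGENT_PORT
          value: "6831"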
Related
I'm trying to set up a few microservices in Kubernetes. Everything is working as expected, except the connection from one microservice to RabbitMQ.
Problem flow:
.NET Core app --> rabbitmq-kubernetes-service.yml --> RabbitMQ
In the .NET Core app the rabbit connection factory config looks like this:
"RabbitMQ": {
"Host": "rabbitmq-service",
"Port": 7000,
"UserName": "guest",
"Password": "guest"
}
The kubernetes rabbit service looks like this:
apiVersion: v1
kind: Service
metadata:
name: rabbitmq-service
spec:
selector:
app: rabbitmq
ports:
- port: 7000
targetPort: 5672
As well as the rabbit deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq
labels:
app: rabbitmq
spec:
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
containers:
- name: rabbitmq
image: <private ACR with vanilla cfg - the image is: rabbitmq:3.7.9-management-alpine>
imagePullPolicy: Always
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: "0.5"
ports:
- containerPort: 5672
So this setup is currently not working in k8s, although locally it works like a charm with a basic docker-compose.
However, what I can do in k8s is go through a LoadBalancer --> to the running RabbitMQ pod and access the management GUI with these config settings:
apiVersion: v1
kind: Service
metadata:
name: rabbitmqmanagement-loadbalancer
spec:
type: LoadBalancer
selector:
app: rabbitmq
ports:
- port: 80
targetPort: 15672
Where am I going wrong?
I'm assuming you are running the .NET Core app outside the Kubernetes cluster.
If this is indeed the case then you need to use type: LoadBalancer.
LoadBalancer is used to expose a service to the internet.
ClusterIP exposes the service on a cluster-internal IP, so the Service is only accessible from within the cluster; this is also the default ServiceType.
NodePort exposes the service on each Node's IP at a static port.
For more details regarding Services please check the Kubernetes docs.
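If the app really does run outside the cluster, a LoadBalancer Service for the AMQP port could look roughly like this; a sketch that reuses the app: rabbitmq selector and the 7000 -> 5672 mapping from the question (the Service name rabbitmq-external is just an example):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external     # example name
spec:
  type: LoadBalancer          # asks the cloud provider for an externally reachable IP
  selector:
    app: rabbitmq
  ports:
  - port: 7000                # port the .NET Core client connects to
    targetPort: 5672          # AMQP port inside the RabbitMQ container
The client config would then point at the load balancer's external IP or DNS name instead of the internal service name.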
You can check if the connection is working using a Python script:
#!/usr/bin/env python
import pika

# Open a connection to the broker (default AMQP port 5672)
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='RABBITMQ_SERVER_IP'))
channel = connection.channel()

# Declare the queue (idempotent) and publish a test message to it
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
This script will try to connect to RABBITMQ_SERVER_IP on port 5672 (the default AMQP port).
The script requires the pika library, which can be installed with pip install pika.
I am new to Prometheus and relatively new to kubernetes so bear with me, please. I am trying to test Prometheus out and have tried two different approaches.
The first approach is to run Prometheus as a Docker container outside of Kubernetes. To accomplish this I have created this Dockerfile:
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
and this yaml file:
global:
scrape_interval: 15s
external_labels:
monitor: 'codelab-monitor'
scrape_configs:
- job_name: 'kubernetes-apiservers'
scheme: http
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: endpoints
api_server: localhost:443
When I run this I get:
Failed to list *v1.Pod: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Service: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Endpoints: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
in a loop. Prometheus loads when I go to localhost:9090, but there is no data.
I thought deploying Prometheus as a Kubernetes deployment may help, so I made this yaml and deployed it.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: prometheus-monitor
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus-monitor
image: prom/prometheus
# args:
# - '-config.file=/etc/prometheus/prometheus.yaml'
imagePullPolicy: IfNotPresent
ports:
- name: webui
containerPort: 9090
The deployment was successful, but if I go to localhost:9090 I get 'ERR_SOCKET_NOT_CONNECTED'. (my port is forwarded)
Can anyone tell me the advantages of running Prometheus inside vs. outside Kubernetes, and how to fix at least one of these issues?
Also, my config file is commented out above because it was giving an error, and I will look into that once I am able to get Prometheus loaded.
Kubernetes does not map the port outside its cluster when you deploy your container.
You also have to create a service (can be inside the same file) to make it available from your workstation (append this to your prometheus yaml):
---
apiVersion: v1
kind: Service
metadata:
name: prometheus-web
labels:
app: prometheus
spec:
type: NodePort
ports:
- port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
name: webui
selector:
app: prometheus
NodePort opens the given port on all of your nodes. You should be able to see the frontend at http://localhost:30090/.
By default, Kubernetes allows ports 30000 to 32767 for the NodePort type (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport).
Please consider reading the documentation in general for more information on services in kubernetes: https://kubernetes.io/docs/concepts/services-networking/service/
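Alternatively, just for a quick look at the UI, you can skip the Service and forward a local port straight to the Deployment. A minimal sketch, assuming the Deployment is named prometheus-monitor as in the question:
# Forward local port 9090 to port 9090 of a pod in the prometheus-monitor Deployment
kubectl port-forward deployment/prometheus-monitor 9090:9090
# Then open http://localhost:9090/ in a browser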
There are two different issues here.
For the first setup (Prometheus running outside the cluster): you are trying to connect to localhost:443, where Prometheus expects to find a Kubernetes API server, but apparently nothing is listening on localhost:443. Are you doing port forwarding to your kube-apiserver?
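One way to make that first setup work is to run kubectl proxy on the same machine and point the service discovery config at it; the proxy then handles authentication. A rough sketch, assuming the default proxy port 8001:
# Open an authenticated proxy to the kube-apiserver on 127.0.0.1:8001
kubectl proxy --port=8001

# prometheus.yml: point service discovery at the proxy instead of localhost:443
scrape_configs:
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: http://localhost:8001   # no ca_file/bearer_token needed when going through the proxy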
For the second setup (the in-cluster Deployment), you need to expose your deployment's port, with something like:
kubectl expose deployment prometheus-monitor --type=LoadBalancer # or
kubectl expose deployment prometheus-monitor --type=NodePort
depending on how you want to expose your service. NodePort exposes it via a Service that maps to a port on your Kubernetes nodes (IPAddress:Port), and LoadBalancer exposes the deployment using an external load balancer, which varies depending on which cloud you are using (AWS, GCP, OpenStack, Azure, etc.). More about exposing your Deployments, DaemonSets, or StatefulSets here. More about Services here.
Hope it helps.
On a single Ubuntu 14.04 box
I've followed the same configuration as
http://dojoblog.dellemc.com/dojo/deploy-kafka-cluster-kubernetes/
I use Kubernetes version v1.10.2
( I also use apiVersion: apps/v1 in yml files. )
Basically I have set up a Kubernetes service for Kafka, and a Kafka deployment using the image wurstmeister/kafka. ZooKeeper is working OK, and the ZooKeeper and Kafka services are up.
The Kafka deployment is configured as per the blog: KAFKA_ADVERTISED_HOST_NAME = the Kafka service cluster IP, which for me is 10.106.84.132.
deployment config :
....
containers:
- name: kafka
image: wurstmeister/kafka
ports:
- containerPort: 9092
env:
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_ADVERTISED_HOST_NAME
value: 10.106.84.132
- name: KAFKA_ZOOKEEPER_CONNECT
value: zoo1:2181
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: topic1:3:3
Then I test Kafka subscribe and publish from outside the Kafka container on my host, but that fails as follows:
root@edmitchell-virtual-machine:~# kafkacat -b 10.106.84.132:9092 -t topic1
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Topic topic1 error: Broker: Leader not available
The best I could do overall was to delete and recreate the Kafka deployment with:
name: KAFKA_ADVERTISED_HOST_NAME
value: localhost
I can then subscribe and publish, but only from within the Kafka container; it doesn't work from outside. If I change the value to anything other than localhost, nothing works.
Any idea? It looks as if Kafka is not a good fit for Kubernetes? Maybe I should not deploy Kafka on Kubernetes at all...
Many thanks,
ed
Thank you, I now understand the NodePort function better.
I still have the same issue:
root@fnature-virtual-machine:~/Zookeeper# kafkacat -b 192.168.198.160:32748 -t topic1
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Topic topic1 error: Broker: Leader not available
I created the NodePort service as you said.
kafka-nodeport NodePort 10.111.234.104 9092:32748/TCP 27m
kafka-service LoadBalancer 10.106.84.132 9092:30351/TCP 1d
I also deleted/recreated the Kafka deployment with the following env:
KAFKA_ADVERTISED_PORT: 32748
KAFKA_ADVERTISED_HOST_NAME: 192.168.198.160
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_BROKER_ID: 1
KAFKA_CREATE_TOPICS: topic1:3:3
Also, if I run the following from inside the Kafka container, I get a similar "Leader not available" error:
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1 --from-beginning
If I create the Kafka deployment with KAFKA_ADVERTISED_HOST_NAME: localhost, then the above command works inside the Kafka container.
Also, 192.168.198.160 is the IP of the default interface ens33 on my Ubuntu VM.
I can't seem to find any logs for Kafka.
The Kafka broker registers an address in ZooKeeper via KAFKA_ADVERTISED_HOST_NAME. However, this address is a Kubernetes cluster IP (10.106.84.132), which is only reachable within the Kubernetes cluster, so a client outside the cluster cannot reach the Kafka broker using this address.
To resolve this problem, you can expose the Kafka service on a public IP, either through NodePort or LoadBalancer. For example, run kubectl expose svc $YOUR_KAFKA_SERVICE_NAME --name=kafka-nodeport --type=NodePort, then look up which nodePort was allocated: kubectl get svc kafka-nodeport -o yaml | grep nodePort. The Kafka broker will then be accessible via $KUBERNETES_NODE_IP:$NODEPORT.
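Putting that together, a rough sketch of the steps; the service name kafka-service, the node IP 192.168.198.160 and the nodePort 32748 are taken from the question and the follow-up above:
# 1. Expose the existing Kafka service on a NodePort and look up the allocated port
kubectl expose svc kafka-service --name=kafka-nodeport --type=NodePort
kubectl get svc kafka-nodeport -o yaml | grep nodePort    # e.g. nodePort: 32748

# 2. Re-create the Kafka deployment so the broker advertises an address reachable from outside
- name: KAFKA_ADVERTISED_HOST_NAME
  value: "192.168.198.160"    # a node IP reachable from the client
- name: KAFKA_ADVERTISED_PORT
  value: "32748"              # the nodePort from step 1

# 3. Clients outside the cluster then connect to 192.168.198.160:32748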
In k8s, the deployment kafka.yaml:
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: "test:1:1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper:2181"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INSIDE://:9092,OUTSIDE://kafka-com:30322"
- name: KAFKA_LISTENERS
value: "INSIDE://:9092,OUTSIDE://:30322"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INSIDE"
The Kafka Service, i.e. the address that external services (or a traefik proxy) use to call Kafka:
---
kind: Service
apiVersion: v1
metadata:
name: kafka-com
namespace: dev
labels:
k8s-app: kafka
spec:
selector:
k8s-app: kafka
ports:
- port: 9092
name: innerport
targetPort: 9092
protocol: TCP
- port: 30322
name: outport
targetPort: 30322
protocol: TCP
nodePort: 30322
type: NodePort
Ensure that the Kafka external port and the nodePort are consistent. Other services then call kafka-com:30322. I wrote this up on my blog: config_kafka_in_kubernetes. Hope this helps!
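With this in place, a client outside the cluster should be able to reach the broker through the advertised OUTSIDE listener, for example with kafkacat as used earlier in this thread. This assumes kafka-com resolves to a node or proxy IP from the client machine, and test is the topic created via KAFKA_CREATE_TOPICS above:
kafkacat -b kafka-com:30322 -t test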
I am new to Kubernetes and the Nginx Ingress tooling, and I am now trying to host a MySQL service behind a vhost in Nginx Ingress on AWS. I have created a file like this:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
type: NodePort
ports:
- port: 3306
protocol: TCP
selector:
app: mysql
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- name: http
containerPort: 3306
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mysql
labels:
app: mysql
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: mysql.example.com
http:
paths:
- path: /
backend:
serviceName: mysql
servicePort: 3306
My LoadBalancer (created by Nginx Ingress) port configuration looks like :
80 (TCP) forwarding to 32078 (TCP)
Stickiness options not available for TCP protocols
443 (TCP) forwarding to 31480 (TCP)
Stickiness options not available for TCP protocols
mysql.example.com is pointing to my ELB.
I was expecting that, from my local box, I could connect to MySQL with something like:
mysql -h mysql.example.com -u root -P 80 -p
This is not working out. If I use LoadBalancer instead of NodePort, it creates a new ELB for me, which works as expected.
I am not sure if this is the right approach for what I want to achieve here. Please help me out if there is a way to achieve the same using the Ingress with NodePort.
Kubernetes Ingress as a generic concept does not solve the issue of exposing/routing TCP/UDP services; as stated in https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md, you should use custom ConfigMaps if you want to do that with ingress-nginx. Also note that it will never use the hostname for routing, as that is a feature of HTTP, not TCP.
I succeeded in accessing MariaDB/MySQL hosted on Google Kubernetes Engine through ingress-nginx, using the hostname specified in the ingress created for the database's ClusterIP.
As per the docs, simply create the ConfigMap and expose the port in the Service defined for the Ingress.
This helped me figure out how to set the --tcp-services-configmap and --udp-services-configmap flags.
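For reference, the ConfigMap from those docs looks roughly like this for the mysql Service above; a sketch assuming the controller runs in the ingress-nginx namespace, was started with --tcp-services-configmap=ingress-nginx/tcp-services, and the mysql Service lives in default:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": "default/mysql:3306"    # <external port>: <namespace>/<service>:<service port>
You then also have to open port 3306 on the ingress controller's Service (and on the ELB in front of it); after that, mysql -h mysql.example.com -P 3306 -u root -p should reach the backend, keeping in mind that the hostname is only used for DNS resolution, not routing.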
I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
name: httpd
spec:
ports:
- port: 80
targetPort: 80
name: http
protocol: TCP
- port: 443
targetPort: 443
name: https
protocol: TCP
selector:
app: httpd
externalIPs:
- 10.128.0.2 # VM's internal IP
I can receive traffic fine on the external IP bound to the VM, but all of the requests reach httpd with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
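For completeness, this is roughly what it looks like; the annotation named above was the beta mechanism, and on newer clusters the equivalent is the spec.externalTrafficPolicy: Local field, which only applies to NodePort and LoadBalancer Services. A sketch:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  type: NodePort
  externalTrafficPolicy: Local    # only route to endpoints on the node that received the traffic, preserving the client source IP
  selector:
    app: httpd
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP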
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: caddy
spec:
replicas: 1
template:
metadata:
labels:
app: caddy
spec:
hostNetwork: true # <---------
containers:
- name: caddy
image: your_image
env:
- name: STATIC_BACKEND # example env in my custom image
value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver rather than Kubernetes'. That means you can no longer resolve cluster services by DNS name; for example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected as environment variables.
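For example, from inside the hostNetwork pod you could still reach a Service named static through the environment variables Kubernetes injects, assuming that Service already existed when the pod started:
# Service discovery env vars follow the pattern <SERVICE_NAME>_SERVICE_HOST / <SERVICE_NAME>_SERVICE_PORT
curl http://$STATIC_SERVICE_HOST:$STATIC_SERVICE_PORT/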