Not able to access kafka using service IP - docker

To deploy Kafka with Zookeeper we are using the YAML below.
Kafka is running successfully and we are able to write and read data on a Kafka topic through the pod IP, but we are not able to write to the Kafka topic through the service IP.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-cluster1
  namespace: default
  labels:
    app: zookeeper-cluster1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-cluster1
  template:
    metadata:
      labels:
        name: zookeeper-cluster1
        app: zookeeper-cluster1
    spec:
      hostname: zookeeper-cluster1
      containers:
      - name: zookeeper-cluster1
        image: wurstmeister/zookeeper:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-cluster1
  namespace: default
  labels:
    app: zookeeper-cluster1
spec:
  type: NodePort
  selector:
    app: zookeeper-cluster1
  ports:
  - name: zookeeper-cluster1
    protocol: TCP
    port: 2181
    targetPort: 2181
  - name: zookeeper-follower-cluster1
    protocol: TCP
    port: 2888
    targetPort: 2888
  - name: zookeeper-leader-cluster1
    protocol: TCP
    port: 3888
    targetPort: 3888
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-cluster
  namespace: default
  labels:
    app: kafka-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-cluster
  template:
    metadata:
      labels:
        name: kafka-cluster
        app: kafka-cluster
    spec:
      hostname: kafka-cluster
      containers:
      - name: kafka-cluster
        image: wurstmeister/kafka:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka-cluster:9092
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181
        ports:
        - containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster
  namespace: default
  labels:
    app: kafka-cluster
spec:
  type: NodePort
  selector:
    app: kafka-cluster
  ports:
  - name: kafka-cluster
    protocol: TCP
    port: 9092
    targetPort: 9092
Please suggest why the service associated with Kafka on port 9092 is not working.
Thanks
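A quick way to narrow this down is to produce through the Service DNS name from a throwaway pod inside the cluster. The Pod below is only a debugging sketch, not part of the manifests above; it assumes the wurstmeister image keeps the standard Kafka CLI scripts on its PATH and that a topic named test already exists.
# Hypothetical debug pod: pipes one message into the topic via the
# kafka-cluster Service name instead of the pod IP.
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client-test
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: wurstmeister/kafka:latest
    command: ["sh", "-c"]
    args:
    - >
      echo hello |
      kafka-console-producer.sh
      --broker-list kafka-cluster.default.svc.cluster.local:9092
      --topic test
If this in-cluster test works but external clients going through the NodePort still fail, a common culprit is KAFKA_ADVERTISED_LISTENERS: the broker advertises kafka-cluster:9092, which only resolves inside the cluster, so external clients cannot follow the metadata returned by the bootstrap connection.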

Related

Can't connect to minikube external service or ingress

I have a clean Ubuntu 18.04 server with minikube, kubectl and docker installed.
And I have several items for it.
One Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express-deployment
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-db-secret
              key: mongo-db-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-db-secret
              key: mongo-db-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongo-db-configmap
              key: mongo-db-url
One internal service, because I tried to connect through an ingress:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
One Ingress for it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: "/"
        pathType: "Prefix"
        backend:
          service:
            name: mongo-express-service
            port:
              number: 8081
And one external service, because I also tried to connect through it:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-external-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000
But none of these options works for me. I tried updating the hosts file by adding
192.168.47.2 my-host.com
but that didn't help either.
When I run curl my-host.com in the server terminal I receive the correct response, but I can't get it from my browser.
My domain points to my server, and when I use nginx alone everything works fine.
Maybe I need to add something else or update my config?
I hope you can help me.
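One thing worth noting, as an aside rather than a confirmed diagnosis: on a plain minikube install a LoadBalancer Service never gets an external IP unless minikube tunnel is running, and the minikube IP (apparently 192.168.47.2 here) is usually only routable from the server itself, which would match "curl works on the server but not from the browser". The nodePort part of the Service still works from the server, so a NodePort-only variant is the simpler thing to test; this sketch reuses the same selector and ports as above:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-external-service
spec:
  selector:
    app: mongo-express
  type: NodePort          # reachable at <minikube-ip>:30000 from the server itself
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000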

EKS connection refused when trying to talk to Jaeger agent daemonset

I recently deployed the Jaeger agent as a DaemonSet on my k8s cluster alongside a collector. I am trying to send spans to the agent using:
- name: JAEGER_AGENT_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
When looking at the application logs I see:
failed to flush Jaeger spans to server: write udp <Pod-Ip>:42531-><Node-Ip>:6831: write: connection refused
All nodes can access each other, as the security group does not block ports between them. When using a sidecar agent, the spans are sent without issue.
Replicate:
Deploy agent using:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: jaeger-agent
  labels:
    app: jaeger
    app.kubernetes.io/name: jaeger
    app.kubernetes.io/component: agent
  namespace: observability
spec:
  selector:
    matchLabels:
      app: jaeger
      app.kubernetes.io/name: jaeger
      app.kubernetes.io/component: agent
  template:
    metadata:
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: agent
    spec:
      containers:
      - name: jaeger-agent
        image: jaegertracing/jaeger-agent:1.18.0
        args: ["--reporter.grpc.host-port=<collector-name>:14250"]
        ports:
        - containerPort: 5775
          protocol: UDP
        - containerPort: 6831
          protocol: UDP
        - containerPort: 6832
          protocol: UDP
        - containerPort: 5778
          protocol: TCP
Then deploy hotrod application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hotrod
  labels:
    app: hotrod
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hotrod
  template:
    metadata:
      labels:
        app: hotrod
    spec:
      containers:
      - name: hotrod
        image: jaegertracing/example-hotrod:latest
        imagePullPolicy: Always
        env:
        - name: JAEGER_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        ports:
        - containerPort: 8080
It looks like your DaemonSet is missing the hostNetwork property, which it needs in order to listen on the node IP.
You can check this article for further info: https://medium.com/@masroor.hasan/tracing-infrastructure-with-jaeger-on-kubernetes-6800132a677
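For illustration, a minimal sketch of what that change could look like in the DaemonSet above (only the relevant part of the pod spec is shown; setting hostPort on the UDP container ports is an alternative if you do not want the agent on the host network):
spec:
  template:
    spec:
      hostNetwork: true                      # listen on the node IP, matching status.hostIP
      dnsPolicy: ClusterFirstWithHostNet     # keep cluster DNS so the collector name still resolves
      containers:
      - name: jaeger-agent
        image: jaegertracing/jaeger-agent:1.18.0
        args: ["--reporter.grpc.host-port=<collector-name>:14250"]
        ports:
        - containerPort: 6831
          protocol: UDP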

KIE server and workbench on Kubernetes

I followed the official instructions and had no problem running the KIE server and workbench on Docker. However, when I try it with Kubernetes I run into a problem: there is no Execution server in the list (Business Central -> Deploy -> Execution Servers). Both of them are up and running, I can access Business Central, and http://localhost:31002/kie-server/services/rest/server/ is responding correctly:
<response type="SUCCESS" msg="Kie Server info">
  <kie-server-info>
    <capabilities>KieServer</capabilities>
    <capabilities>BRM</capabilities>
    <capabilities>BPM</capabilities>
    <capabilities>CaseMgmt</capabilities>
    <capabilities>BPM-UI</capabilities>
    <capabilities>BRP</capabilities>
    <capabilities>DMN</capabilities>
    <capabilities>Swagger</capabilities>
    <location>http://localhost:8080/kie-server/services/rest/server</location>
    <messages>
      <content>Server KieServerInfo{serverId='kie-server-kie-server-7fcc96f568-2gf29', version='7.45.0.Final', name='kie-server-kie-server-7fcc96f568-2gf29', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Tue Oct 27 10:36:09 UTC 2020</content>
      <severity>INFO</severity>
      <timestamp>2020-10-27T10:36:09.433Z</timestamp>
    </messages>
    <mode>DEVELOPMENT</mode>
    <name>kie-server-kie-server-7fcc96f568-2gf29</name>
    <id>kie-server-kie-server-7fcc96f568-2gf29</id>
    <version>7.45.0.Final</version>
  </kie-server-info>
</response>
Here is the YAML file that I am using to create the deployments and services:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-wb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie-wb
  template:
    metadata:
      labels:
        app: kie-wb
    spec:
      containers:
      - name: kie-wb
        image: jboss/drools-workbench-showcase:latest
        ports:
        - containerPort: 8080
        - containerPort: 8001
        securityContext:
          privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-wb
spec:
  selector:
    app: kie-wb
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  - name: "8001"
    port: 8001
    targetPort: 8001
  # type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: kie-wb-np
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31001
  selector:
    app: kie-wb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie
  template:
    metadata:
      labels:
        app: kie
    spec:
      containers:
      - name: kie
        image: jboss/kie-server-showcase:latest
        ports:
        - containerPort: 8080
        securityContext:
          privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-server
spec:
  selector:
    app: kie
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kie-server-np
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31002
  selector:
    app: kie
  # type: LoadBalancer
When deploying to Docker I am using --link drools-wb:kie-wb
docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest
In Kubernetes I created a service called kie-wb, but that doesn't help.
What am I missing here?
I was working on a similar setup and used your YAML file as a start (thanks for that)!
I had to add the following snippet to the kie-server-showcase container:
env:
- name: KIE_WB_ENV_KIE_CONTEXT_PATH
  value: "business-central"
It does work now, at least as far as I can tell.
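In context, the kie-server Deployment's container spec would then look roughly like this (same image and ports as above; only the env block is new):
containers:
- name: kie
  image: jboss/kie-server-showcase:latest
  env:
  # Tell the KIE server where the workbench lives under its context path
  - name: KIE_WB_ENV_KIE_CONTEXT_PATH
    value: "business-central"
  ports:
  - containerPort: 8080
  securityContext:
    privileged: true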

Connection issue between services in kubernetes

I have three different images related to my application; they work fine with docker-compose but have issues running on a Kubernetes cluster in GCP.
Below is the deployment file.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-database
    tier: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-database
        tier: database
    spec:
      hostname: mysql
      containers:
      - image: mysql/mysql-server:5.7
        name: mysql
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        ports:
        - containerPort: 5432
          name: db
---
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: dgservice
    tier: dgservice
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dgservice
        tier: dgservice
    spec:
      hostname: dgservice
      containers:
      - image: gcr.io/sample/sample-image:check_1
        name: dgservice
        ports:
        - containerPort: 8080
          name: dgservice
---
apiVersion: v1
kind: Service
metadata:
  name: dg-ui
  labels:
    name: dg-ui
spec:
  type: NodePort
  ports:
  - nodePort: 30156
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: dg-ui
    tier: dg
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-ui
  labels:
    app: dg-ui
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dg-ui
        tier: dg
    spec:
      hostname: dg-ui
      containers:
      - image: gcr.io/sample/sample:latest
        name: dg-ui
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        - name: "DG_SERVICE_HOST"
          value: "dgservice"
        ports:
        - containerPort: 8000
          name: dg-ui
The image is being pulled successfully from GCR as well.
The connection between mysql and the UI service also works fine, and my data is getting migrated without any issues. But the connection is not established between the service and the UI.
Why is the UI not able to access the service in my application?
Your Service needs the same labels as your Deployment's pod template in order to create the Endpoints object (see the Endpoints sketch after the corrected Service below).
Endpoints are the API object behind a Service; they are where the Service routes connections when something connects to the Service's ClusterIP.
These are the labels of the deployment:
labels:
  app: dgservice
  tier: dgservice
New Service definition with the correct labels:
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice
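Once the selector matches the pod labels, the control plane creates an Endpoints object for the Service automatically, roughly like the sketch below (the pod IP is made up for illustration). With mismatched labels this object stays empty, so connections to the ClusterIP have nowhere to go.
# Illustrative only: the auto-created Endpoints object for the Service above
apiVersion: v1
kind: Endpoints
metadata:
  name: dgservice
subsets:
- addresses:
  - ip: 10.52.0.7   # hypothetical pod IP of the dgservice pod
  ports:
  - port: 8080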
I am assuming by "service" you are referring to your "dgservice". With the yaml presented above, I believe you also need to specify the DG_SERVICE_PORT (port 8080) to correctly access "dgservice".
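If the app reads DG_SERVICE_PORT the same way it reads DG_SERVICE_HOST (an assumption; it depends on the application code), that would be one more entry in the dg-ui container's env list:
- name: "DG_SERVICE_HOST"
  value: "dgservice"
- name: "DG_SERVICE_PORT"   # hypothetical: only useful if the app actually reads it
  value: "8080"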
As mentioned by Suresh in the comments, you should expose internal services using ClusterIP type. The NodePort is a superset of ClusterIP, and will expose the service internally to your cluster at service-name:port, and externally at node-ip:nodeport, targeting your deployment/pod at targetport.
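As a sketch of that suggestion, a ClusterIP variant of the dgservice Service could look like this (ClusterIP is also the default type, so simply omitting type has the same effect):
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: ClusterIP            # internal-only: reachable as dgservice:8080 inside the cluster
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice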

How to use AWS ELB with nginx ingress on k8s

1) I have SSL certs generated on AWS:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...fa5298fc
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx-lb-svc
  # namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: nginx-ingress-control-pod
  type: LoadBalancer
2) Then I have the nginx controller pod:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-control-pod
  labels:
    app: nginx-ingress-control-pod
spec:
  replicas: 1
  selector:
    app: nginx-ingress-control-pod
  template:
    metadata:
      labels:
        app: nginx-ingress-control-pod
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:1.0.0
        imagePullPolicy: Always
        name: nginx-ingress-control-pod
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        #- name: https
        #  containerPort: 443
        #  hostPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        args:
        #- -v=3
        #- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        #- -default-server-tls-secret=$(POD_NAMESPACE)/web-secret
3) Lastly, I am using Helm to deploy Grafana and Prometheus (this setup works when accessing via NodePort).
I just can not figure out the setup with ELB and ingress.
Btw, the ingress is part of the Grafana deployment and is correctly created:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2018-04-06T09:28:10Z
  generation: 1
  labels:
    app: graf-helmf-default-ns-grafana
    chart: grafana-0.8.5
    component: grafana
    heritage: Tiller
    release: graf-helmf-default-ns
  name: graf-helmf-default-ns-grafana
  namespace: default
  resourceVersion: "995865"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/graf-helmf-default-ns-grafana
  uid: d2991870-397c-11e8-9d...5a37f5a
spec:
  rules:
  - host: grafana.my.valid.domain.com
    http:
      paths:
      - backend:
          serviceName: graf-helmf-default-ns-grafana
          servicePort: 80
status:
  loadBalancer: {}
