Convert Kafka docker run command to Kubernetes

The Kubernetes YAML below creates the resources as expected, and there are no errors or restarts, but external requests to the Kafka broker at localhost:30005 hang indefinitely. My understanding is that the Docker notion of a "network" is achieved in Kubernetes via Services. Here I have a NodePort Service that brings traffic into the cluster, and a ClusterIP Service that handles communication between containers. Based on the working, non-Kubernetes docker run version of this Zookeeper/Kafka configuration, I suspect my issue lies somewhere in my Services and/or advertised listeners. The internet is replete with docker-compose files and Helm charts, but I can't seem to find a combination of configurations that works.
Kafka/Zookeeper YAML:
# Zookeeper
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-app
  template:
    metadata:
      labels:
        app: zookeeper-app
    spec:
      containers:
        - name: zookeeper-app
          image: confluentinc/cp-zookeeper:5.4.1
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_TICK_TIME
              value: "2000"
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cluster-svc
  labels:
    app: zk
spec:
  type: ClusterIP
  selector:
    app: zookeeper-app
  ports:
    - port: 2181
      targetPort: 2181
      protocol: TCP
---
# Kafka
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-app
  template:
    metadata:
      labels:
        app: kafka-app
    spec:
      containers:
        - name: kafka-app
          image: confluentinc/cp-kafka:5.4.1
          ports:
            - containerPort: 9092
            - containerPort: 29092
          env:
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: "zk-cluster-svc:2181"
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "INSIDE://kafka-cluster-svc:29092,OUTSIDE://localhost:30005"
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: "INSIDE"
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka-service
spec:
  type: NodePort
  ports:
    - port: 9092
      nodePort: 30005
      protocol: TCP
  selector:
    app: kafka-app
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster-svc
  labels:
    app: kafka-app
spec:
  selector:
    app: kafka-app
  ports:
    - port: 29092
      targetPort: 29092
      protocol: TCP
^ This is based on a working docker run version of these containers:
docker network create kafzoonet

docker run --name app-zookeeper -d --network="kafzoonet" \
  --env ZOOKEEPER_TICK_TIME=2000 \
  --env ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:5.4.1

docker run --name app-kafka -p 30005:9092 -d --network="kafzoonet" \
  --env KAFKA_BROKER_ID=1 \
  --env KAFKA_ZOOKEEPER_CONNECT="app-zookeeper:2181" \
  --env KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://app-kafka:29092,PLAINTEXT_HOST://localhost:30005" \
  --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" \
  --env KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT \
  --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:5.4.1
Are the services configured correctly for a single Kafka node? Kafka appears able to reach Zookeeper, but for some reason, when a client connects, it isn't getting the correct listener echoed back. Unfortunately I don't have any error messages to support this: the client hangs, and neither the Kafka nor the Zookeeper pod logs anything.
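One thing that stands out in the manifests above (a hedged observation, not a verified fix): KAFKA_LISTENERS is never set, and the OUTSIDE listener advertises localhost:30005 while the NodePort service forwards external traffic to containerPort 9092, so it's worth checking that the broker actually binds a listener on 9092. A minimal sketch that makes the bind ports explicit and aligned with what the two services target:

# In the kafka-app container's env list (sketch; the INSIDE/OUTSIDE split
# and port numbers are carried over from the manifests above):
- name: KAFKA_LISTENERS
  # bind INSIDE on 29092 (targeted by kafka-cluster-svc)
  # and OUTSIDE on 9092 (targeted by the NodePort service)
  value: "INSIDE://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092"
- name: KAFKA_ADVERTISED_LISTENERS
  # in-cluster clients get back the ClusterIP service name;
  # external clients get back the address they actually dialled
  value: "INSIDE://kafka-cluster-svc:29092,OUTSIDE://localhost:30005"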

Related

issues setting up traefik with kubernetes using a simple container

Not sure what I am missing; I'm trying to set up a simple Traefik environment with Kubernetes, proxying the errm/cheese:cheddar Docker container to cheddar.minikube.
Prerequisites:
have minikube set up
git clone # personal repo that is now deleted. see solution below
# setup.sh will delete current minikube environment then recreate it
./setup.sh
# add IP to minikube
echo `minikube ip` cheddar.minikube | sudo tee -a /etc/hosts
after running
minikube delete
minikube start
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
kubectl apply -f traefik-deployment.yaml -f traefik-whoami.yaml
with...
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.web.Address=:80
            - --entrypoints.websecure.Address=:443
            - --providers.kubernetescrd
          ports:
            - name: web
              containerPort: 8000
              # hostPort: 80
            - name: websecure
              containerPort: 4443
              # hostPort: 443
            - name: admin
              containerPort: 8080
              # hostPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
traefik-whoami.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
I was able to get a simple container working with Traefik in Kubernetes at:
echo `minikube ip`/notls
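A quick way to verify the route (a usage sketch; assumes the manifests above are applied and Traefik is bound to the node via hostNetwork):

# request the whoami service through Traefik's web entrypoint on :80
curl http://$(minikube ip)/notls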

Minikube - use a proxy running in localhost to access external service

I have a proxy running on localhost:8000 which gives me access to my service.
Previously I was using docker-compose.yml, and by including http_proxy=http://host.docker.internal:8000 I was able to reach my service from within the container.
I have switched to Kubernetes using Minikube, and I have started minikube with:
minikube start \
--docker-env HTTP_PROXY=http://host.minikube.internal:8000 \
--docker-env NO_PROXY=stats,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.39.0/24
The service and deployment yaml:
apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: service-space
spec:
  type: NodePort
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    run: service-name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  labels:
    app: service-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-name
  template:
    metadata:
      labels:
        app: service-name
    spec:
      containers:
        - name: service-name
          image: container-image:latest
          ports:
            - containerPort: 5432
From within the service-name container, when I try to reach the service, I get the following error:
curl: (6) Could not resolve host: ...
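One thing worth noting (with the sketch below an assumption rather than a verified answer): the --docker-env flags configure the Docker daemon inside the minikube VM, not the workloads, so a pod only sees the proxy if its own spec sets it. A sketch reusing the Deployment above, with the proxy carried in as container environment variables:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  namespace: service-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-name
  template:
    metadata:
      labels:
        app: service-name
    spec:
      containers:
        - name: service-name
          image: container-image:latest
          env:
            # Assumption: the host proxy is reachable from inside the
            # minikube VM as host.minikube.internal:8000, and the
            # application in this image honours HTTP_PROXY/NO_PROXY.
            - name: HTTP_PROXY
              value: "http://host.minikube.internal:8000"
            - name: NO_PROXY
              value: "127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.39.0/24"
          ports:
            - containerPort: 5432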

KIE server and workbench on Kubernetes

I followed the official instructions and had no problem running the KIE server and workbench on Docker. However, when I try it with Kubernetes I run into a problem: there is no Execution Server in the list (Business Central -> Deploy -> Execution Servers). Both are up and running, I can access Business Central, and http://localhost:31002/kie-server/services/rest/server/ responds correctly:
<response type="SUCCESS" msg="Kie Server info">
  <kie-server-info>
    <capabilities>KieServer</capabilities>
    <capabilities>BRM</capabilities>
    <capabilities>BPM</capabilities>
    <capabilities>CaseMgmt</capabilities>
    <capabilities>BPM-UI</capabilities>
    <capabilities>BRP</capabilities>
    <capabilities>DMN</capabilities>
    <capabilities>Swagger</capabilities>
    <location>http://localhost:8080/kie-server/services/rest/server</location>
    <messages>
      <content>Server KieServerInfo{serverId='kie-server-kie-server-7fcc96f568-2gf29', version='7.45.0.Final', name='kie-server-kie-server-7fcc96f568-2gf29', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Tue Oct 27 10:36:09 UTC 2020</content>
      <severity>INFO</severity>
      <timestamp>2020-10-27T10:36:09.433Z</timestamp>
    </messages>
    <mode>DEVELOPMENT</mode>
    <name>kie-server-kie-server-7fcc96f568-2gf29</name>
    <id>kie-server-kie-server-7fcc96f568-2gf29</id>
    <version>7.45.0.Final</version>
  </kie-server-info>
</response>
Here is the YAML file I am using to create the deployments and services:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-wb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie-wb
  template:
    metadata:
      labels:
        app: kie-wb
    spec:
      containers:
        - name: kie-wb
          image: jboss/drools-workbench-showcase:latest
          ports:
            - containerPort: 8080
            - containerPort: 8001
          securityContext:
            privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-wb
spec:
  selector:
    app: kie-wb
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
    - name: "8001"
      port: 8001
      targetPort: 8001
  # type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: kie-wb-np
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31001
  selector:
    app: kie-wb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kie-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kie
  template:
    metadata:
      labels:
        app: kie
    spec:
      containers:
        - name: kie
          image: jboss/kie-server-showcase:latest
          ports:
            - containerPort: 8080
          securityContext:
            privileged: true
---
kind: Service
apiVersion: v1
metadata:
  name: kie-server
spec:
  selector:
    app: kie
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kie-server-np
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31002
  selector:
    app: kie
  # type: LoadBalancer
When deploying to Docker I am using --link drools-wb:kie-wb:
docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest
In Kubernetes I created a Service called kie-wb, but that doesn't help.
What am I missing here?
I was working on a similar set-up and used your YAML file as a starting point (thanks for that)!
I had to add the following snippet to the kie-server-showcase container:
env:
  - name: KIE_WB_ENV_KIE_CONTEXT_PATH
    value: "business-central"
It does work now, at least as far as I can tell.
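In context, that snippet goes under the kie container in the kie-server Deployment above; a sketch of just the pod spec with only the env block added:

    spec:
      containers:
        - name: kie
          image: jboss/kie-server-showcase:latest
          env:
            # points the KIE server at the workbench's (Business Central's)
            # context path, per the snippet above
            - name: KIE_WB_ENV_KIE_CONTEXT_PATH
              value: "business-central"
          ports:
            - containerPort: 8080
          securityContext:
            privileged: true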

Connection refused error when deploying couchbase in kubernetes {failed to connect to 127.0.0.1 port 8091: Connection refused}

I used the following yaml files to deploy couchbase in kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
        - name: couchbase-master
          image: arungupta/couchbase:k8s
          env:
            - name: TYPE
              value: MASTER
          ports:
            - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
    - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
        - name: couchbase-worker
          image: arungupta/couchbase:k8s
          env:
            - name: TYPE
              value: "WORKER"
            - name: COUCHBASE_MASTER
              value: "couchbase-master-service"
            - name: AUTO_REBALANCE
              value: "false"
          ports:
            - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: xxx.com
      http:
        paths:
          - path: /
            backend:
              serviceName: couchbase-master-service
              servicePort: 8091
The pods started running, and nothing seemed wrong at first glance. But when I tried to hit the host URL it gave me a Bad Gateway, and when I looked into the master pod's logs it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but they also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I have found that the master image is using this entrypoint script.
I ran this container image, and it looks like the curl is failing because the 15s sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is raise that sleep to a higher value, but a fixed sleep is usually not the best option. (Actually this whole image is full of bad practices.)
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
while ! nc -z localhost 8091; do
  sleep 1
done
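If an unbounded wait makes you nervous, a capped variant (a sketch, not part of the original entrypoint script):

# give couchbase-server up to 120s to open 8091, then fail loudly
for _ in $(seq 1 120); do
  nc -z localhost 8091 && break
  sleep 1
done
nc -z localhost 8091 || { echo "couchbase-server never opened 8091" >&2; exit 1; }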

How to produce to a Kafka broker running inside a container from outside the Docker host?

I am trying to produce to a Kafka broker running inside a container launched by Kubernetes. I am playing with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS.
I tried setting these two env variables, KAFKA_ADVERTISED_LISTENERS = PLAINTEXT://<host-ip>:9092 and KAFKA_LISTENERS = PLAINTEXT://0.0.0.0:9092, and ran it with docker-compose, and I was able to produce from an application outside the host machine.
But setting these two env variables in the Kubernetes .yml file, I get a "No broker list available" exception.
What am I missing here?
Update:
kafka-pod.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: casb-deployment
  name: kafkaservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      # hostNetwork: true # to access docker outside of host container
      containers:
        - name: kafkaservice
          imagePullPolicy: IfNotPresent
          image: wurstmeister/kafka:1.1.0
          env: # for production
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://<host-ip>:9092"
            - name: KAFKA_LISTENERS
              value: "PLAINTEXT://0.0.0.0:9092"
            - name: KAFKA_CREATE_TOPICS
              value: "Topic1:1:1,Topic2:1:1"
            - name: KAFKA_MESSAGE_TIMESTAMP_TYPE
              value: "LogAppendTime"
            - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
              value: "LogAppendTime"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: "zookeeper:2181"
          ports:
            - name: port9092
              containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
    - name: port9092
      port: 9092
      targetPort: 9092
      protocol: TCP
I'm assuming you have a Kubernetes Service whose selector ties the ingress flow to your Kafka broker, and that it exposes a nodePort (as opposed to only a clusterIP).
https://kubernetes.io/docs/concepts/services-networking/service/
The Kafka pod should then be reachable at <node-address>:<nodePort> (localhost:<nodePort> when the client runs on the node itself).
You can also put a load balancer in front of your Kubernetes cluster and expose the pods through it, i.e., allow external ingress.
The next step is then to leverage a DNS record, so that outbound requests from your docker-compose-based containers go to DNS and come back into your Kubernetes cluster through the load balancer.
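A minimal NodePort sketch matching the kafkaservice manifests above (the service name and nodePort value are assumptions; the advertised listener would then need to name the address external clients actually dial, e.g. PLAINTEXT://<host-ip>:30092):

apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice-np   # hypothetical name for the external service
spec:
  type: NodePort
  selector:
    app: kafkaservice
  ports:
    - port: 9092
      targetPort: 9092
      nodePort: 30092   # assumption: any free port in the 30000-32767 range
      protocol: TCP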
