Can't access my local kubernetes service over the internet - docker

Implementation Goal
Expose a ZooKeeper instance, running on Kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on Ubuntu 14.04, backed by Docker containers.
I'm running a bare-metal k8s cluster, and I'm trying to expose a ZooKeeper service to the internet. Since my cluster is not running on a cloud provider, I set up MetalLB to provide a network load-balancer implementation for my ZooKeeper service.
On startup everything looks good: an external IP is assigned, and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-5c9894b5cd-9gh8m   1/1     Running   0          5h59m
speaker-j2z8q                 1/1     Running   0          5h59m
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes   ClusterIP      10.xxx.xxx.xxx   <none>        443/TCP                         6d19h
zk-cs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2181:30035/TCP                  56m
zk-hs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2888:30664/TCP,3888:31113/TCP   6m15s
When I curl the above-mentioned external IPs, I get a valid response (the empty reply just means ZooKeeper isn't speaking HTTP; the TCP connection itself succeeds):
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
So far it all looks good. I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me: I'm finding it impossible to expose this LB to the internet. I've tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node while minikube tunnel is running just sees the request time out.
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
    - port: 2888
      targetPort: 2888
      name: server
      protocol: TCP
    - port: 3888
      targetPort: 3888
      name: leader-election
      protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
    - name: client
      protocol: TCP
      port: 2181
      targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: zookeeper
          imagePullPolicy: Always
          image: "library/zookeeper:3.6"
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
            - name: zoo-config
              mountPath: /conf
      volumes:
        - name: zoo-config
          configMap:
            name: zoo-config
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=10
    syncLimit=4
MetalLB Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.1.1.1-172.1.1.10
Versions
minikube: v1.13.1
docker: 18.06.3-ce

You can do it with minikube, but the idea of minikube is just to test things in your local environment. By default it does not have the correct iptables rules, and yes, you can adjust that, but if your goal is simply to run without any cloud provider, I highly recommend using kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration, and you will be able to sort out your networking problems without headaches.
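A rough sketch of that route, in case it helps; the pod CIDR, CNI choice and taint key below are illustrative assumptions, not taken from your setup:
# on the node that will become the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# make kubectl work for your user (from the kubeadm docs)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install a CNI plugin (Flannel, Calico, ...) - use the manifest URL from that project's docs
kubectl apply -f <CNI-manifest.yml>

# if this is your only node, allow workloads on the control plane
# (the taint key is node-role.kubernetes.io/control-plane on newer versions)
kubectl taint nodes --all node-role.kubernetes.io/master-

# then install MetalLB as you already did and reuse your address-pool ConfigMap
After that, the LoadBalancer IPs handed out by MetalLB are ordinary IPs on your LAN, and exposing them to the internet becomes a plain firewall/port-forwarding question on your router or edge device.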

Related

GKE Kubernetes Ingress not routing traffic to microservices

I'm pretty new to Kubernetes and trying to deploy what I think is a pretty common use case onto a GKE cluster we have created with Terraform: microservices all hosted on one cluster. But I cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canvas-service-deployment
  labels:
    name: canvas-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: canvas-service
  template:
    metadata:
      labels:
        app: canvas-service
    spec:
      restartPolicy: Always
      containers:
        - name: canvas-service
          image: gcr.io/emile-learning-dev/canvas-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: canvas-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-service-deployment
  labels:
    name: video-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: video-service
  template:
    metadata:
      labels:
        app: video-service
    spec:
      restartPolicy: Always
      containers:
        - name: video-service
          image: gcr.io/emile-learning-dev/video-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: video-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  defaultBackend:
    service:
      name: canvas-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /canvas/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: canvas-service
                port:
                  number: 80
          - path: /video/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: video-service
                port:
                  number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name:             services-ingress
Namespace:        default
Address:          34.107.136.153
Default backend:  canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
  Host        Path        Backends
  ----        ----        --------
  *
              /canvas/*   canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
              /video/*    video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations:  ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
              kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
  Type    Reason     Age                  From                     Message
  ----    ------     ----                 ----                     -------
  Normal  Sync       9m36s                loadbalancer-controller  UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m34s                loadbalancer-controller  TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m26s                loadbalancer-controller  ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  IPChanged  9m26s                loadbalancer-controller  IP is now 34.107.136.153
  Normal  Sync       6m56s (x5 over 11m)  loadbalancer-controller  Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that the entire service route structure is on the /canvas or /video route, but thought that the /* was supposed to address exactly that. I'd like to basically make the root route for each service exist on the corresponding subpaths /canvas and /video. Would love to hear any thoughts you guys have as to what I'm doing wrong that's leading to traffic not being routed correctly.
If it's an issue with the GCP default Ingress resource or this isn't within its functionality, I'm totally open to using an nginx Ingress. But, I haven't been able to get an nginx Ingress to expose an IP at all so figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about this also please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /video and /canvas (instead of /video/* and /canvas/*) will make this work.
To understand the reason behind this, read the nginx ingress controller documentation about path and pathType. You can also use a regex pattern in the path, per the documentation here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: When in doubt, you can always go to the nginx ingress controller pods and check the nginx.conf file for the location blocks nginx generated.
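For example, a sketch of the rules section with the adjusted paths (shown here with pathType: Prefix; this is just one way to express it, and the rest of your Ingress stays as it is):
rules:
  - http:
      paths:
        - path: /canvas
          pathType: Prefix
          backend:
            service:
              name: canvas-service
              port:
                number: 80
        - path: /video
          pathType: Prefix
          backend:
            service:
              name: video-service
              port:
                number: 80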

How can I communicate with a DB External to my Kubernetes Cluster

Good afternoon, I have a question. I am new to Kubernetes and I need to connect to a DB that is outside of my cluster. I could only connect to the DB using hostNetwork: true; however, this is not recommended. In this case, is there a method to communicate with an external DB?
I'm including the YAML I am currently using; my pod contains one container running a Spring Boot application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: find-complementary-account-info
  labels:
    app: find-complementary-account-info
spec:
  replicas: 2
  selector:
    matchLabels:
      app: find-complementary-account-info
  template:
    metadata:
      labels:
        app: find-complementary-account-info
    spec:
      hostNetwork: true
      dnsPolicy: Default
      containers:
        - name: find-complementary-account-info
          image: find-complementary-account-info:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "350Mi"
            requests:
              memory: "300Mi"
          ports:
            - containerPort: 8081
          env:
            - name: URL_CONNECTION_BD
              value: jdbc:oracle:thin:#11.160.9.18:1558/DEFAULTSRV.WORLD
            - name: USERNAME_CONNECTION_BD
              valueFrom:
                secretKeyRef:
                  name: credentials-bd-pers
                  key: user_pers
            - name: PASSWORD_CONNECTION_BD
              valueFrom:
                secretKeyRef:
                  name: credentials-bd-pers
                  key: password_pers
---
apiVersion: v1
kind: Service
metadata:
  name: find-complementary-account-info
spec:
  type: NodePort
  selector:
    app: find-complementary-account-info
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30020
Does anyone have an idea how to communicate with an external DB? This is not a cloud cluster; it is on-premise.
The hostNetwork parameter is used for accessing pods from outside of the cluster; you don't need it here.
Pods inside the cluster can communicate with external hosts because their traffic is NATed on the way out. If that doesn't work, something external is preventing it, such as a firewall or a missing route.
The quickest way to check is to SSH into one of your Kubernetes cluster nodes and try
telnet 11.160.9.18 1558
Anyway, that IP address looks like a public one, so you should also check your company firewall, IMHO.
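If you also want a stable in-cluster name for the external DB (so the application doesn't hard-code the IP), one common pattern is a Service without a selector plus a manual Endpoints object. A sketch using the IP and port from your config; the name oracle-db is made up for illustration:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db          # hypothetical name, pick your own
spec:
  ports:
    - protocol: TCP
      port: 1558
      targetPort: 1558
---
apiVersion: v1
kind: Endpoints
metadata:
  name: oracle-db          # must match the Service name
subsets:
  - addresses:
      - ip: 11.160.9.18    # the external DB host
    ports:
      - port: 1558
Your JDBC URL could then point at oracle-db:1558 inside the cluster instead of the raw IP.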

single service with multiple exposed ports on a pod with multiple containers

I have gotten multiple containers to work in the same pod.
kubectl apply -f myymlpod.yml
kubectl expose pod mypod --name=myname-pod --port 8855 --type=NodePort
then I was able to test the "expose"
minikube service list
..
|-------------|-------------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|-------------------------|-----------------------------|
| default | kubernetes | No node port |
| default | myname-pod | http://192.168.99.100:30036 |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | No node port |
|-------------|-------------------------|-----------------------------|
Now, my myymlpod.yml has multiple containers in it.
One container has a service running on 8855, and one on 8877.
The article below hints at what I need to do.
https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/
Exposing multiple containers in a Pod
While this example shows how to
use a single container to access other containers in the pod, it’s
quite common for several containers in a Pod to listen on different
ports — all of which need to be exposed. To make this happen, you can
either create a single service with multiple exposed ports, or you can
create a single service for every port you’re trying to expose.
"create a single service with multiple exposed ports"
I cannot find anything on how to actually do this, expose multiple ports.
How does one expose multiple ports on a single service?
Thank you.
APPEND:
K8Containers.yml (below)
apiVersion: v1
kind: Pod
metadata:
  name: mypodkindmetadataname
  labels:
    example: mylabelname
spec:
  containers:
    - name: containername-springbootfrontend
      image: mydocker.com/webfrontendspringboot:latest
      resources:
        limits:
          memory: "800Mi"
          cpu: "800m"
        requests:
          memory: "612Mi"
          cpu: "400m"
      ports:
        - containerPort: 8877
    - name: containername-businessservicesspringboot
      image: mydocker.com/businessservicesspringboot:latest
      resources:
        limits:
          memory: "800Mi"
          cpu: "800m"
        requests:
          memory: "613Mi"
          cpu: "400m"
      ports:
        - containerPort: 8855
kubectl apply -f K8containers.yml
pod "mypodkindmetadataname" created
kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
mypodkindmetadataname   2/2     Running   0          11s
k8services.yml (below)
apiVersion: v1
kind: Service
metadata:
  name: myymlservice
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
    - name: myrestservice-servicekind-port-name
      port: 8857
      targetPort: 8855
    - name: myfrontend-servicekind-port-name
      port: 8879
      targetPort: 8877
  selector:
    name: mypodkindmetadataname
........
kubectl apply -f K8services.yml
service "myymlservice" created
........
minikube service myymlservice --url
http://192.168.99.100:30784
http://192.168.99.100:31751
........
kubectl describe service myymlservice
Name:                     myymlservice
Namespace:                default
Labels:                   name=myservicemetadatalabel
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"myservicemetadatalabel"},"name":"myymlservice","namespace":"default"...
Selector:                 name=mypodkindmetadataname
Type:                     NodePort
IP:                       10.107.75.205
Port:                     myrestservice-servicekind-port-name  8857/TCP
TargetPort:               8855/TCP
NodePort:                 myrestservice-servicekind-port-name  30784/TCP
Endpoints:                <none>
Port:                     myfrontend-servicekind-port-name  8879/TCP
TargetPort:               8877/TCP
NodePort:                 myfrontend-servicekind-port-name  31751/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
....
Unfortunately, it is still not working when I try to invoke the "exposed" items.
calling
http://192.168.99.100:30784/myrestmethod
does not work
and calling
http://192.168.99.100:31751
or
http://192.168.99.100:31751/index.html
does not work
Does anyone see what I'm missing?
APPEND (working now)
The selector does not match on the pod's metadata name; it matches on label(s).
k8containers.yml (partial at the top)
apiVersion: v1
kind: Pod
metadata:
  name: mypodkindmetadataname
  labels:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
spec:
  containers:
    # Main application container
    - name: containername-springbootfrontend
      image: mydocker.com/webfrontendspringboot:latest
      resources:
        limits:
          memory: "800Mi"
          cpu: "800m"
        requests:
          memory: "612Mi"
          cpu: "400m"
      ports:
        - containerPort: 8877
    - name: containername-businessservicesspringboot
      image: mydocker.com/businessservicesspringboot:latest
      resources:
        limits:
          memory: "800Mi"
          cpu: "800m"
        requests:
          memory: "613Mi"
          cpu: "400m"
      ports:
        - containerPort: 8855
k8services.yml
apiVersion: v1
kind: Service
metadata:
  name: myymlservice
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
    - name: myrestservice-servicekind-port-name
      port: 8857
      targetPort: 8855
    - name: myfrontend-servicekind-port-name
      port: 8879
      targetPort: 8877
  selector:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
Yes, you can create a single Service with multiple ports open, where each service port points to a container port.
kind: Service
apiVersion: v1
metadata:
  name: mymlservice
spec:
  selector:
    app: mymlapp
  ports:
    - name: servicename-1
      port: 4444
      targetPort: 8855
    - name: servicename-2
      port: 80
      targetPort: 8877
Here the target ports point to your container ports. As noted in the append above, make sure the selector matches labels that actually exist on the pod, otherwise the Service will have no endpoints.
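A quick way to verify the wiring, assuming the pod carries a label such as app: mymlapp that the selector above matches (the node IP and node ports below are placeholders):
# Endpoints must not be <none>; if it is, the selector does not match the pod labels
kubectl get endpoints mymlservice

# find the node ports Kubernetes assigned to the two service ports
kubectl get svc mymlservice

# then hit each service port through any node's IP
curl http://<node-ip>:<nodePort-for-8857>/myrestmethod
curl http://<node-ip>:<nodePort-for-8879>/index.html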

Jenkins slave JNLP4- connection timeout

I see this error in some of the Jenkins jobs
Cannot contact jenkins-slave-l65p0-0f7m0: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from 100.99.111.187/100.99.111.187:46776 failed. The channel is closing down or has closed down
I have a Jenkins master-slave setup.
On the slave, the following logs are found:
java.nio.channels.ClosedChannelException
    at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
    at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
    at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Jenkins is on a kubernetes cluster.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: default
  name: jenkins-deployment
spec:
  serviceName: "jenkins-pod"
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-pod
    spec:
      initContainers:
        - name: volume-mount-hack
          image: busybox
          command: ["sh", "-c", "chmod -R 777 /usr/mnt"]
          volumeMounts:
            - name: jenkinsdir
              mountPath: /usr/mnt
      containers:
        - name: jenkins-container
          imagePullPolicy: Always
          readinessProbe:
            exec:
              command:
                - curl
                - http://localhost:8080/login
                - -o
                - /dev/null
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 10
          env:
            - name: JAVA_OPTS
              value: "-Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
          resources:
            requests:
              memory: "7100Mi"
              cpu: "2000m"
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - mountPath: /var/run
              name: docker-sock
            - mountPath: /var/jenkins_home
              name: jenkinsdir
      volumes:
        - name: jenkinsdir
          persistentVolumeClaim:
            claimName: "jenkins-persistence"
        - name: docker-sock
          hostPath:
            path: /var/run
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins
  labels:
    app: jenkins
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30099
      protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  labels:
    app: jenkins
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-master-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-slave-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      jenkins: slave
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: default
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins-pod
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves
I doubt this has anything to do with Kubernetes, but I'm still putting it out there.
I am assuming you are using the Jenkins Kubernetes Plugin.
You can increase "Timeout in seconds for Jenkins connection" under the Kubernetes pod template. It may solve your issue.
Description of "Timeout in seconds for Jenkins connection":
Specify time in seconds up to which Jenkins should wait for the JNLP
agent to establish a connection. Value should be a positive integer,
default being 100.
Did you configure the JNLP port in Jenkins itself? It is located in Manage Jenkins > Configure Global Security > Agents. Click the "Fixed" radio button (since you already assigned a TCP port). Set the "TCP port for JNLP agents" to 50000.
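If you happen to manage Jenkins with the Configuration as Code plugin rather than through the UI, the same fixed agent port can be pinned in the JCasC YAML; this is only a sketch of that one setting, the rest of your casc file stays as it is:
jenkins:
  # equivalent of Manage Jenkins > Configure Global Security > Agents > Fixed
  slaveAgentPort: 50000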
I think, "jenkins-slave" is not a valid name. You can try rename it to "jnlp"
Explain here:
This was related to this issue. If the name of the custom agent is not jnlp, then another agent with the default jnlp image is created. This explains messages like channel already closed etc..
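For example, a pod template sketch where the agent container is explicitly named jnlp (the image shown, jenkins/inbound-agent, is the stock inbound agent image; substitute your own custom agent image if you have one):
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
spec:
  containers:
    # naming the container "jnlp" makes the Kubernetes plugin use it as the agent
    # instead of adding a second container with the default jnlp image
    - name: jnlp
      image: jenkins/inbound-agent:latest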

How to produce to kafka broker running inside container from outside the docker host?

I am trying to produce to a Kafka broker which is running inside a container launched by Kubernetes. I am playing with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS.
I tried setting these two env variables, KAFKA_ADVERTISED_LISTENERS = PLAINTEXT://<host-ip>:9092 and KAFKA_LISTENERS = PLAINTEXT://0.0.0.0:9092, and ran it using docker-compose. With that, I was able to produce from an application outside the host machine.
But when I set these two env variables in the Kubernetes .yml file, I get a "No broker list available" exception.
What am I missing here?
Update:
kafka-pod.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: casb-deployment
  name: kafkaservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafkaservice
    spec:
      hostname: kafkaservice
      #hostNetwork: true # to access docker outside of the host container
      containers:
        - name: kafkaservice
          imagePullPolicy: IfNotPresent
          image: wurstmeister/kafka:1.1.0
          env: # for production
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://<host-ip>:9092"
            - name: KAFKA_LISTENERS
              value: "PLAINTEXT://0.0.0.0:9092"
            - name: KAFKA_CREATE_TOPICS
              value: "Topic1:1:1,Topic2:1:1"
            - name: KAFKA_MESSAGE_TIMESTAMP_TYPE
              value: "LogAppendTime"
            - name: KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE
              value: "LogAppendTime"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: "zookeeper:2181"
          ports:
            - name: port9092
              containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice
  labels:
    app: kafkaservice
spec:
  selector:
    app: kafkaservice
  ports:
    - name: port9092
      port: 9092
      targetPort: 9092
      protocol: TCP
I'm assuming you have a Kubernetes Service whose selector links the ingress flow to your Kafka broker and which exposes a nodePort (as opposed to only a clusterIP).
https://kubernetes.io/docs/concepts/services-networking/service/
With that, the Kafka pod should be reachable through <node-ip>:<nodePort> (or localhost:<nodePort> from the node itself).
You can also put a load balancer in front of your Kubernetes cluster and expose the k8s pods through it, i.e. allow external ingress.
The next step is simply to set up a DNS record so that the outbound requests produced by your docker-compose-based containers resolve to that address and come back to your Kubernetes cluster through the load balancer.
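A sketch of what that could look like, assuming a fixed nodePort of 30092 and a reachable node address (both placeholders, not taken from your setup); the key point is that KAFKA_ADVERTISED_LISTENERS must contain the address external clients will actually dial:
apiVersion: v1
kind: Service
metadata:
  namespace: casb-deployment
  name: kafkaservice-external   # hypothetical extra Service for external clients
spec:
  type: NodePort
  selector:
    app: kafkaservice
  ports:
    - name: kafka-external
      port: 9092
      targetPort: 9092
      nodePort: 30092           # placeholder; any free port in the NodePort range
---
# and in the broker's env, advertise the externally reachable address:
#   - name: KAFKA_ADVERTISED_LISTENERS
#     value: "PLAINTEXT://<node-ip-or-dns>:30092"
#   - name: KAFKA_LISTENERS
#     value: "PLAINTEXT://0.0.0.0:9092"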
