Connect to MongoDB with Mongoose in both Kubernetes and Docker

I have a microservice which I developed and tested using docker-compose. Now I would like to deploy it to Kubernetes.
Part of my docker-compose file looks like this:
tasksdb:
  container_name: tasks-db
  image: mongo:4.4.1
  restart: always
  ports:
    - '6004:27017'
  volumes:
    - ./tasks_service/tasks_db:/data/db
  networks:
    - backend
tasks-service:
  container_name: tasks-service
  build: ./tasks_service
  restart: always
  ports:
    - "5004:3000"
  volumes:
    - ./tasks_service/logs:/usr/src/app/logs
    - ./tasks_service/tasks_attachments/:/usr/src/app/tasks_attachments
  depends_on:
    - tasksdb
  networks:
    - backend
I used Mongoose to connect to the database, and it worked fine:
const connection = "mongodb://tasks-db:27017/tasks";
const connectDb = () => {
  // connect once, with options, and return the promise so callers can await it
  return mongoose.connect(connection, { useNewUrlParser: true, useCreateIndex: true, useFindAndModify: false });
};
Using Kompose, I created a deployment file; however, I had to modify the PersistentVolume and PersistentVolumeClaim accordingly.
I have something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tasks-volume
  labels:
    type: local
spec:
  storageClassName: manual
  volumeMode: Filesystem
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.60.50
    path: /tasks_db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tasksdb-claim0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I changed the Mongo URL like this:
const connection = "mongodb://tasksdb.default.svc.cluster.local:27017/tasks";
My deployment looks like this:
apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasks-service
      name: tasks-service
    spec:
      ports:
        - name: "5004"
          port: 5004
          targetPort: 3000
      selector:
        io.kompose.service: tasks-service
    status:
      loadBalancer: {}
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasksdb
      name: tasksdb
    spec:
      ports:
        - name: "6004"
          port: 6004
          targetPort: 27017
      selector:
        io.kompose.service: tasksdb
    status:
      loadBalancer: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasks-service
      name: tasks-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: tasks-service
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
            kompose.version: 1.22.0 (955b78124)
          creationTimestamp: null
          labels:
            io.kompose.service: tasks-service
        spec:
          containers:
            - image: 192.168.60.50:5000/blascal_tasks-service
              name: tasks-service
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
          restartPolicy: Always
    status: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasksdb
      name: tasksdb
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: tasksdb
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
            kompose.version: 1.22.0 (955b78124)
          creationTimestamp: null
          labels:
            io.kompose.service: tasksdb
        spec:
          containers:
            - image: mongo:4.4.1
              name: tasks-db
              ports:
                - containerPort: 27017
              resources: {}
              volumeMounts:
                - mountPath: /data/db
                  name: tasksdb-claim0
          restartPolicy: Always
          volumes:
            - name: tasksdb-claim0
              persistentVolumeClaim:
                claimName: tasksdb-claim0
    status: {}
kind: List
metadata: {}
Since I have several services, I added an Ingress resource for routing:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: tasks-service
              servicePort: 5004
The deployment seems to run fine (screenshot of the running pods omitted).
However, I have three issues:
1. Although I can hit my default path, which just returns "tasks service is up", I cannot access my Mongoose routes such as /api/task/raise, which connect to the db; those requests fail with "...buffering timed out". I guess the path does not link up to the database service? The tasks-service pod logs show the same timeout (screenshot omitted).
2. Whenever there is a power surge and my machine goes off, bringing the db deployment back up fails until I delete the config files from the persistent volume. How do I prevent this corruption of files?
3. I have been researching an elaborate way of changing the master IP of my cluster, as I intend to transfer it to a different network. Any guidance, please?
I also checked the DNS logs with:
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
(output screenshot omitted)

Your tasksdb Service exposes port 6004, not 27017. Try using the following URL:
const connection = "mongodb://tasksdb.default.svc.cluster.local:6004/tasks";
Changing your network depends on which CNI networking plugin you are using; every plugin has different steps. For Calico, please see https://docs.projectcalico.org/networking/migrate-pools

I believe this is your ClusterIP Service for the MongoDB instance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: tasksdb
  name: tasksdb
spec:
  ports:
    - name: "6004"
      port: 6004
      targetPort: 27017
  selector:
    io.kompose.service: tasksdb
status:
  loadBalancer: {}
When you create an instance of MongoDB inside Kubernetes, it runs inside a pod. To connect to a pod, we go through a ClusterIP Service, and whenever we connect to a ClusterIP Service we use the name of that Service as the host in the connection URL. In this case your connection URL must be:
mongodb://tasksdb:6004/nameOfDatabase

Related

Kafka & Python app on Kubernetes in separate pods - NoBrokersAvailable()

To start with: I am somewhat of a newbie to Kubernetes, so I might omit some fundamentals.
I have a working containerized app that is orchestrated with docker-compose (and works alright), and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running in separate pods. The Python app constantly returns a NoBrokersAvailable() error no matter what I try to apply; it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and a network policy. I am running it locally on Minikube with a local Docker image registry.
The Python app connects to the following address:
KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092')
kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it):
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: kafka
  name: kafka-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: kafka
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.network/pipeline-network: "true"
        io.kompose.service: kafka
    spec:
      containers:
        - env:
            - name: KAFKA_LISTENERS
              value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092"
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092"
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT"
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: "LISTENER_INTERNAL"
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS
              value: "0"
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
              value: "1"
            - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper:2181
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: finnhub-streaming-data-pipeline-kafka:latest
          imagePullPolicy: Never
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh","-c","/kafka-setup-k8s.sh"]
          name: kafka-app
          ports:
            - containerPort: 9092
            - containerPort: 29092
          resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    app: kafka
  ports:
    - protocol: TCP
      name: firstport
      port: 9092
      targetPort: 9092
    - protocol: TCP
      name: secondport
      port: 29092
      targetPort: 29092
finnhub-producer.yaml (aka my Python app deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: finnhubproducer
  name: finnhubproducer
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: finnhubproducer
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.network/pipeline-network: "true"
        io.kompose.service: finnhubproducer
    spec:
      containers:
        - env:
            - name: KAFKA_PORT
              value: "9092"
            - name: KAFKA_SERVER
              value: kafka-service.default.svc.cluster.local
            - name: KAFKA_TOPIC_NAME
              value: market
          image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest
          imagePullPolicy: Never
          name: finnhubproducer
          ports:
            - containerPort: 8001
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: finnhubproducer
  name: finnhubproducer
spec:
  ports:
    - name: "8001"
      port: 8001
      targetPort: 8001
  selector:
    io.kompose.service: finnhubproducer
status:
  loadBalancer: {}
pipeline-network-networkpolicy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: pipeline-network
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/pipeline-network: "true"
  podSelector:
    matchLabels:
      io.kompose.network/pipeline-network: "true"
EDIT:
Dockerfile for Kafka image:
FROM confluentinc/cp-kafka:6.2.0
COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh
kafka-setup-k8s.sh:
# blocks until kafka is reachable
kafka-topics --bootstrap-server localhost:29092 --list
echo -e 'Creating kafka topics'
kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1
echo -e 'Successfully created the following topics:'
kafka-topics --bootstrap-server localhost:29092 --list
Your Service's selector is app: kafka, whereas the Deployment's pod template is labeled io.kompose.service: kafka, so they aren't connected.
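For example, a corrected Service whose selector matches the pod template labels would look roughly like this (a sketch based on the manifests above):
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    io.kompose.service: kafka   # matches the Deployment's pod template labels
  ports:
    - protocol: TCP
      name: firstport
      port: 9092
      targetPort: 9092
    - protocol: TCP
      name: secondport
      port: 29092
      targetPort: 29092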
I suggest you use Strimzi (or Confluent for Kubernetes if you want to use their images) rather than converting your existing Docker Compose file with Kompose, as Kompose rarely gets network policies correct. In fact, you can probably remove the network labels and the network policy completely, as they aren't really necessary within the same namespace.
Regarding your Python app, you shouldn't need to define the Kafka host and port separately; use one KAFKA_BOOTSTRAP_SERVERS variable, which can accept multiple brokers, including their ports.
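For instance, the producer's env block could be reduced to a single variable. This is only a sketch: KAFKA_BOOTSTRAP_SERVERS is an assumed variable name here, and your Python code would have to read it in place of the separate host/port variables:
env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: "kafka-service.default.svc.cluster.local:9092"   # comma-separated list if there are several brokers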
I managed to make it work by deleting the Services from the manifest and running kubectl expose deployment kafka-app. The issue comes from Kompose's labeling.

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure on kubernetes MySQL and Spring Boot

I am facing some issues with what I believe to be my .yaml file.
docker-compose works fine and the containers run as expected.
But running kompose convert on the file did not yield the desired result on k8s, and I am getting com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.
There are no pre-existing Docker containers, and docker-compose down was run prior to kompose convert.
The MySQL pod works fine and is accessible; Spring, however, is unable to connect to it.
docker-compose.yaml:
version: '3'
services:
  mysql-docker-container:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=1
      - MYSQL_DATABASE=db_fromSpring
      - MYSQL_USER=springuser
      - MYSQL_PASSWORD=ThePassword
    networks:
      - backend
    ports:
      - 3307:3306
    volumes:
      - /data/mysql
  spring-boot-jpa-app:
    command: mvn clean install -DskipTests
    image: bnsbns/spring-boot-jpa-image
    depends_on:
      - mysql-docker-container
    environment:
      - spring.datasource.url=jdbc:mysql://mysql-docker-container:3306/db_fromSpring
      - spring.datasource.username=springuser
      - spring.datasource.password=ThePassword
    networks:
      - backend
    ports:
      - "8087:8080"
    volumes:
      - /data/spring-boot-app
networks:
  backend:
Error:
2021-09-15 04:37:47.542 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
backend-network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: backend
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/backend: "true"
  podSelector:
    matchLabels:
      io.kompose.network/backend: "true"
mysql-docker-container-claim0-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: mysql-docker-container-claim0
  name: mysql-docker-container-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
mysql-docker-container-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: mysql-docker-container
  name: mysql-docker-container
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql-docker-container
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: mysql-docker-container
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: db_fromSpring
            - name: MYSQL_PASSWORD
              value: ThePassword
            - name: MYSQL_ROOT_PASSWORD
              value: "1"
            - name: MYSQL_USER
              value: springuser
          image: mysql:latest
          imagePullPolicy: ""
          name: mysql-docker-container
          ports:
            - containerPort: 3306
          resources: {}
          volumeMounts:
            - mountPath: /data/mysql
              name: mysql-docker-container-claim0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: mysql-docker-container-claim0
          persistentVolumeClaim:
            claimName: mysql-docker-container-claim0
status: {}
mysql-docker-container-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: mysql-docker-container
  name: mysql-docker-container
spec:
  ports:
    - name: "3307"
      port: 3307
      targetPort: 3306
  selector:
    io.kompose.service: mysql-docker-container
status:
  loadBalancer: {}
springboot-app-jpa-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: spring-boot-jpa-app
  name: spring-boot-jpa-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: spring-boot-jpa-app
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: spring-boot-jpa-app
    spec:
      containers:
        - args:
            - mvn
            - clean
            - install
            - -DskipTests
          env:
            - name: spring.datasource.password
              value: ThePassword
            - name: spring.datasource.url
              value: jdbc:mysql://mysql-docker-container:3306/db_fromSpring
            - name: spring.datasource.username
              value: springuser
          image: bnsbns/spring-boot-jpa-image
          imagePullPolicy: ""
          name: spring-boot-jpa-app
          ports:
            - containerPort: 8080
          resources: {}
          volumeMounts:
            - mountPath: /data/spring-boot-app
              name: spring-boot-jpa-app-claim0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: spring-boot-jpa-app-claim0
          persistentVolumeClaim:
            claimName: spring-boot-jpa-app-claim0
status: {}
springboot-jpa-app-persistence-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: spring-boot-jpa-app-claim0
  name: spring-boot-jpa-app-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
springboot-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: spring-boot-jpa-app
  name: spring-boot-jpa-app
spec:
  ports:
    - name: "8087"
      port: 8087
      targetPort: 8080
  selector:
    io.kompose.service: spring-boot-jpa-app
status:
  loadBalancer: {}
The solution, as posted by gohm'c, was that I had the incorrect port.
Facing this issue next: do I need to specify a ClusterIP/LoadBalancer?
$ kubectl expose deployment spring-boot-jpa-app --type=NodePort
Error from server (AlreadyExists): services "spring-boot-jpa-app" already exists
minikube service spring-boot-jpa-app
|-----------|---------------------|-------------|--------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------------|-------------|--------------|
| default | spring-boot-jpa-app | | No node port |
|-----------|---------------------|-------------|--------------|
😿 service default/spring-boot-jpa-app has no node port
The mysql-docker-container Service port is 3307; can you try:
env:
  ...
  - name: spring.datasource.url
    value: jdbc:mysql://mysql-docker-container:3307/db_fromSpring
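Alternatively, if you prefer to keep 3306 in the JDBC URL, you could change the Service to expose 3306 instead; a sketch of that variant:
apiVersion: v1
kind: Service
metadata:
  name: mysql-docker-container
spec:
  ports:
    - name: mysql
      port: 3306       # Service port now matches the port in the JDBC URL
      targetPort: 3306
  selector:
    io.kompose.service: mysql-docker-container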

Converting docker-compose to k8s manifest file

I am working on a task to migrate all applications from Docker containers to Kubernetes pods. I tried Kompose, but its output is even more confusing.
Can someone please help me out here? I have run out of options to try.
Here is how my docker-compose file looks like:
version: '2'
services:
  auth_module:
    build: .
    extra_hosts:
      - "dockerhost:172.21.0.1"
    networks:
      - default
      - mongo
    ports:
      - 3000
networks:
  mongo:
    external:
      name: mongo_bridge_network
Kompose output:
apiVersion: v1
items:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert -f docker-compose.yml -o kubemanifest.yaml
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: auth-module
      name: auth-module
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: auth-module
      strategy: {}
      template:
        metadata:
          annotations:
            kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert -f docker-compose.yml -o kubemanifest.yaml
            kompose.version: 1.21.0 (992df58d8)
          creationTimestamp: null
          labels:
            io.kompose.network/mongo_bridge_network: "true"
            io.kompose.service: auth-module
        spec:
          containers:
            - image: auth-module
              imagePullPolicy: ""
              name: auth-module
              resources: {}
          restartPolicy: Always
          serviceAccountName: ""
          volumes: null
    status: {}
  - apiVersion: extensions/v1beta1
    kind: NetworkPolicy
    metadata:
      creationTimestamp: null
      name: mongo_bridge_network
    spec:
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  io.kompose.network/mongo_bridge_network: "true"
      podSelector:
        matchLabels:
          io.kompose.network/mongo_bridge_network: "true"
kind: List
metadata: {}

Having a problem with deployment using Kubernetes [duplicate]

This question already has answers here:
no matches for kind "Deployment" in version "extensions/v1beta1"
(8 answers)
Closed 2 years ago.
I had this docker-compose file which was working absolutely fine. But then I used "kompose convert -f docker-compose.yaml -o deploy.yaml" to get a YAML file for the Kubernetes deployment.
But when I run "kubectl apply -f deploy.yaml"
I get this error:
"service/cms created
service/mysqldb created
persistentvolumeclaim/my-datavolume configured
unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
I am using minikube.
Please help me out.
docker-compose file content
version: "2"
services:
cms:
image: 1511981217/cms_mysql:0.0.2
ports:
- "8080:8080"
networks:
- cms-network
depends_on:
- mysqldb
mysqldb:
image: mysql:8
ports:
- "3306:3306"
networks:
- cms-network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=cmsdb
volumes:
- my-datavolume:/var/lib/mysql
networks:
cms-network:
volumes:
my-datavolume:
deploy.yaml file content
apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: cms
      name: cms
    spec:
      ports:
        - name: "8080"
          port: 8080
          targetPort: 8080
      selector:
        io.kompose.service: cms
    status:
      loadBalancer: {}
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: mysqldb
      name: mysqldb
    spec:
      ports:
        - name: "3306"
          port: 3306
          targetPort: 3306
      selector:
        io.kompose.service: mysqldb
    status:
      loadBalancer: {}
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: cms
      name: cms
    spec:
      replicas: 1
      strategy: {}
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
            kompose.version: 1.19.0 (f63a961c)
          creationTimestamp: null
          labels:
            io.kompose.service: cms
        spec:
          containers:
            - image: 1511981217/cms_mysql:0.0.2
              name: cms
              ports:
                - containerPort: 8080
              resources: {}
          restartPolicy: Always
    status: {}
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: mysqldb
      name: mysqldb
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
            kompose.version: 1.19.0 (f63a961c)
          creationTimestamp: null
          labels:
            io.kompose.service: mysqldb
        spec:
          containers:
            - env:
                - name: MYSQL_DATABASE
                  value: cmsdb
                - name: MYSQL_ROOT_PASSWORD
                  value: root
              image: mysql:8
              name: mysqldb
              ports:
                - containerPort: 3306
              resources: {}
              volumeMounts:
                - mountPath: /var/lib/mysql
                  name: my-datavolume
          restartPolicy: Always
          volumes:
            - name: my-datavolume
              persistentVolumeClaim:
                claimName: my-datavolume
    status: {}
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: my-datavolume
      name: my-datavolume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
    status: {}
kind: List
metadata: {}
extensions/v1beta1 for Deployment was removed in Kubernetes 1.16. Change extensions/v1beta1 to apps/v1 in the YAML and it should work on any cluster of version 1.16 or higher. Note that apps/v1 also requires a spec.selector whose matchLabels match the pod template labels, which your Deployments are missing.
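For example, the cms Deployment from the manifest above would become something like this (a sketch; only the apiVersion and the added selector change):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: cms
  name: cms
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: cms   # apps/v1 requires an explicit selector
  template:
    metadata:
      labels:
        io.kompose.service: cms
    spec:
      containers:
        - image: 1511981217/cms_mysql:0.0.2
          name: cms
          ports:
            - containerPort: 8080
      restartPolicy: Always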

Error: Kafka cannot connect to ZooKeeper on Kubernetes (single node)

I have a problem running Kafka and ZooKeeper on a single-node Kubernetes cluster. It works when I test on my laptop, but when I run it on a private server the kafka pod shows an error, and I don't know the network settings on that private server.
I used kompose to convert the docker-compose file to k8s YAML files.
The error is:
zoo1: Temporary failure in name resolution
How do I fix that error?
Thank you.
My deployment and service YAML files:
kafka1-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: kafka1
  name: kafka1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: kafka1
    spec:
      containers:
        - env:
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka1:9092
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_LOG4J_LOGGERS
              value: kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
          image: confluentinc/cp-kafka:4.0.0
          name: kafka1
          ports:
            - containerPort: 9092
          resources: {}
      hostname: kafka1
      restartPolicy: Always
status: {}
kafka1-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: kafka1
  name: kafka1
spec:
  ports:
    - name: "9092"
      port: 9092
      targetPort: 9092
  selector:
    io.kompose.service: kafka1
status:
  loadBalancer: {}
zoo1-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: zoo1
  name: zoo1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: zoo1
    spec:
      containers:
        - env:
            - name: ZOO_MY_ID
              value: "1"
            - name: ZOO_PORT
              value: "2181"
            - name: ZOO_SERVERS
              value: server.1=zoo1:2888:3888
          image: zookeeper:3.4.9
          name: zoo1
          ports:
            - containerPort: 2181
          resources: {}
      hostname: zoo1
      restartPolicy: Always
status: {}
zoo1-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: zoo1
  name: zoo1
spec:
  ports:
    - name: "2181"
      port: 2181
      targetPort: 2181
  selector:
    io.kompose.service: zoo1
status:
  loadBalancer: {}
You have to create headless Services for Kafka and ZooKeeper.
I would recommend using the Confluent Helm charts to run Kafka on Kubernetes; you can see in those charts how the headless services are created.
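A minimal sketch of what a headless Service for zoo1 could look like (clusterIP: None is what makes it headless, so DNS resolves directly to the pod IP rather than a virtual service IP):
apiVersion: v1
kind: Service
metadata:
  name: zoo1
spec:
  clusterIP: None   # headless: DNS returns the pod IPs directly
  selector:
    io.kompose.service: zoo1
  ports:
    - name: client
      port: 2181
      targetPort: 2181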
Also, what are the logs? What's happening exactly?
Hope that helps!
