For whatever reason, I am having issues pulling the images referenced in my minikube Kubernetes manifest file.
Here is how it looks:
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
      kompose.version: 1.26.1 (a9d05d509)
    creationTimestamp: null
    labels:
      io.kompose.service: afsim-controller
    name: afsim-controller
  spec:
    ports:
    - name: "5000"
      port: 5000
      targetPort: 5000
    selector:
      io.kompose.service: afsim-controller
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
      kompose.version: 1.26.1 (a9d05d509)
    creationTimestamp: null
    labels:
      io.kompose.service: 'mdo-geo'
    name: 'mdo-geo'
  spec:
    ports:
    - name: "5006"
      port: 5006
      targetPort: 5006
    selector:
      io.kompose.service: 'mdo-geo'
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
      kompose.version: 1.26.1 (a9d05d509)
    creationTimestamp: null
    labels:
      io.kompose.service: mdo-net
    name: mdo-net
  spec:
    ports:
    - name: "5009"
      port: 5009
      targetPort: 5009
    selector:
      io.kompose.service: mdo-net
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
      kompose.version: 1.26.1 (a9d05d509)
    creationTimestamp: null
    labels:
      io.kompose.service: 'afsim-controller'
    name: 'afsim-controller'
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: 'afsim-controller'
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
          kompose.version: 1.26.1 (a9d05d509)
        creationTimestamp: null
        labels:
          io.kompose.service: 'afsim-controller'
      spec:
        containers:
        - env:
          - name: MSG_SERVICE
          - name: MSG_SERVICE_HOST
          - name: MSG_SERVICE_PORT
          - name: NETWORK_BEHAVIOR
            value: /home/python3/network_behavior.yaml
          - name: SERVICE_PORT
            value: "5000"
          - name: USE_NET
            value: "1"
          image: docker-ng-repo.ms.northgrum.com/aic/mdo_afsim_controller
          imagePullPolicy: IfNotPresent
          name: 'afsim-controller'
          ports:
          - containerPort: 5000
          resources: {}
          volumeMounts:
          - mountPath: /home/python3/network_behavior.yaml
            name: afsim-controller-hostpath0
          - mountPath: /scenarios
            name: afsim-controller-hostpath1
        restartPolicy: Always
        volumes:
        - hostPath:
            path: /home/jsikala/mdo_startup/network_behavior.yaml
          name: afsim-controller-hostpath0
        - hostPath:
            path: /home/jsikala/mdo_afsim/scenarios/MDO
          name: afsim-controller-hostpath1
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
      kompose.version: 1.26.1 (a9d05d509)
    creationTimestamp: null
    labels:
      io.kompose.service: mdo-geo
    name: mdo-geo
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: mdo-geo
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
          kompose.version: 1.26.1 (a9d05d509)
        creationTimestamp: null
        labels:
          io.kompose.service: mdo-geo
      spec:
        containers:
        - env:
          - name: MSG_SERVICE
            value: kafka
          - name: MSG_SERVICE_HOST
            value: localhost
          - name: MSG_SERVICE_PORT
            value: "5006"
          - name: NETWORK_BEHAVIOR
            value: /home/python3/network_behavior.yaml
          - name: PLATFORM
            value: uuv_1
          - name: SERVICE_PORT
            value: "5006"
          - name: USE_NET
            value: "1"
          image: docker-ng-repo.ms.northgrum.com/aic/mdo_geo
          imagePullPolicy: IfNotPresent
          name: mdo-geo
          ports:
          - containerPort: 5006
          resources: {}
          volumeMounts:
          - mountPath: /home/python3/network_behavior.yaml
            name: mdo-geo-hostpath0
          - mountPath: /Data2/MDO/geo_data
            name: mdo-geo-hostpath1
        restartPolicy: Always
        volumes:
        - hostPath:
            path: /home/jsikala/mdo_startup/network_behavior.yaml
          name: mdo-geo-hostpath0
        - hostPath:
            path: /home/jsikala/mdo_startup/geo_data
          name: mdo-geo-hostpath1
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
      kompose.version: 1.26.1 (a9d05d509)
    creationTimestamp: null
    labels:
      io.kompose.service: 'mdo-net'
    name: 'mdo-net'
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: 'mdo-net'
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath -o kubemanifest.yaml
          kompose.version: 1.26.1 (a9d05d509)
        creationTimestamp: null
        labels:
          io.kompose.service: 'mdo-net'
      spec:
        containers:
        - env:
          - name: MSG_SERVICE
            value: kafka
          - name: MSG_SERVICE_HOST
            value: localhost
          - name: MSG_SERVICE_PORT
            value: "5009"
          - name: NETWORK_CONFIG
            value: /home/python3/network_config.yaml
          - name: SERVICE_PORT
            value: "5009"
          image: docker-ng-repo.ms.northgrum.com/aic/mdo_net
          imagePullPolicy: IfNotPresent
          name: mdo-net
          ports:
          - containerPort: 5009
          resources: {}
          volumeMounts:
          - mountPath: /home/python3/network_config.yaml
            name: mdo-net-hostpath0
        restartPolicy: Always
        volumes:
        - hostPath:
            path: /home/jsikala/mdo_startup/network_config.yaml
          name: mdo-net-hostpath0
  status: {}
kind: List
metadata: {}
When I attempt to get the pods I see this:
NAME                               READY   STATUS             RESTARTS   AGE
afsim-controller-777cf55c4-vf4b9   0/1     ImagePullBackOff   0          20h
mdo-geo-588b9d46-2hrnk             0/1     ImagePullBackOff   0          20h
mdo-net-6f44d9d6c5-bpw4k           0/1     ImagePullBackOff   0          20h
I have pulled the images locally, yet I still cannot get the images pulled by minikube. I've also added an imagePullPolicy and am still getting the ImagePullBackOff error. Any idea what may be causing this issue?
If the images are in a private registry, you should use an imagePullSecret
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
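For reference, a minimal sketch of that approach (the registry host is taken from the image names above; the secret name regcred and the credential values are placeholders to replace):

kubectl create secret docker-registry regcred \
  --docker-server=docker-ng-repo.ms.northgrum.com \
  --docker-username=<your-username> \
  --docker-password=<your-password>

Then reference the secret in the pod spec of each Deployment:

      spec:
        imagePullSecrets:
        - name: regcred
        containers:
        - image: docker-ng-repo.ms.northgrum.com/aic/mdo_afsim_controller
          name: afsim-controller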
Related
I am facing some issues with what I believe to be my .yaml file.
docker-compose works fine and the containers run as expected.
But running kompose convert on the file did not yield the desired result on k8s, and I am getting com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.
There are no existing Docker containers, and docker-compose down was used prior to kompose convert.
The mysql pod works fine and is accessible.
Spring is however unable to connect to it....
in docker-compose.yaml
version: '3'
services:
  mysql-docker-container:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=1
      - MYSQL_DATABASE=db_fromSpring
      - MYSQL_USER=springuser
      - MYSQL_PASSWORD=ThePassword
    networks:
      - backend
    ports:
      - 3307:3306
    volumes:
      - /data/mysql
  spring-boot-jpa-app:
    command: mvn clean install -DskipTests
    image: bnsbns/spring-boot-jpa-image
    depends_on:
      - mysql-docker-container
    environment:
      - spring.datasource.url=jdbc:mysql://mysql-docker-container:3306/db_fromSpring
      - spring.datasource.username=springuser
      - spring.datasource.password=ThePassword
    networks:
      - backend
    ports:
      - "8087:8080"
    volumes:
      - /data/spring-boot-app
networks:
  backend:
Error:
2021-09-15 04:37:47.542 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
backend-network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: backend
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          io.kompose.network/backend: "true"
  podSelector:
    matchLabels:
      io.kompose.network/backend: "true"
mysql-docker-container-claim0-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: mysql-docker-container-claim0
  name: mysql-docker-container-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
mysql-docker-container-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: mysql-docker-container
  name: mysql-docker-container
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mysql-docker-container
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: mysql-docker-container
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: db_fromSpring
        - name: MYSQL_PASSWORD
          value: ThePassword
        - name: MYSQL_ROOT_PASSWORD
          value: "1"
        - name: MYSQL_USER
          value: springuser
        image: mysql:latest
        imagePullPolicy: ""
        name: mysql-docker-container
        ports:
        - containerPort: 3306
        resources: {}
        volumeMounts:
        - mountPath: /data/mysql
          name: mysql-docker-container-claim0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: mysql-docker-container-claim0
        persistentVolumeClaim:
          claimName: mysql-docker-container-claim0
status: {}
mysql-docker-container-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: mysql-docker-container
  name: mysql-docker-container
spec:
  ports:
  - name: "3307"
    port: 3307
    targetPort: 3306
  selector:
    io.kompose.service: mysql-docker-container
status:
  loadBalancer: {}
springboot-app-jpa-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: spring-boot-jpa-app
  name: spring-boot-jpa-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: spring-boot-jpa-app
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: spring-boot-jpa-app
    spec:
      containers:
      - args:
        - mvn
        - clean
        - install
        - -DskipTests
        env:
        - name: spring.datasource.password
          value: ThePassword
        - name: spring.datasource.url
          value: jdbc:mysql://mysql-docker-container:3306/db_fromSpring
        - name: spring.datasource.username
          value: springuser
        image: bnsbns/spring-boot-jpa-image
        imagePullPolicy: ""
        name: spring-boot-jpa-app
        ports:
        - containerPort: 8080
        resources: {}
        volumeMounts:
        - mountPath: /data/spring-boot-app
          name: spring-boot-jpa-app-claim0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: spring-boot-jpa-app-claim0
        persistentVolumeClaim:
          claimName: spring-boot-jpa-app-claim0
status: {}
springboot-jpa-app-persistence-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: spring-boot-jpa-app-claim0
  name: spring-boot-jpa-app-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
springboot-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: spring-boot-jpa-app
  name: spring-boot-jpa-app
spec:
  ports:
  - name: "8087"
    port: 8087
    targetPort: 8080
  selector:
    io.kompose.service: spring-boot-jpa-app
status:
  loadBalancer: {}
The solution, as posted by gohm'c, was that I had the incorrect port.
I am facing this issue next; do I need to specify a cluster/load balancer?
$ kubectl expose deployment spring-boot-jpa-app --type=NodePort
Error from server (AlreadyExists): services "spring-boot-jpa-app" already exists
minikube service spring-boot-jpa-app
|-----------|---------------------|-------------|--------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------------|-------------|--------------|
| default | spring-boot-jpa-app | | No node port |
|-----------|---------------------|-------------|--------------|
😿 service default/spring-boot-jpa-app has no node port
The mysql-docker-container Service port is 3307; can you try:
env:
  ...
  - name: spring.datasource.url
    value: jdbc:mysql://mysql-docker-container:3307/db_fromSpring
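That works because the Service listens on port 3307 and forwards to targetPort 3306 on the mysql container, so in-cluster clients connecting through the Service name must use 3307. To double-check the mapping, something like:

kubectl get svc mysql-docker-container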
I am working on a task to migrate all applications from Docker containers to Kubernetes pods. I tried kompose, but its output is even more confusing.
Can someone please help me out here? I have run out of options to try.
Here is what my docker-compose file looks like:
version: '2'
services:
  auth_module:
    build: .
    extra_hosts:
      - "dockerhost:172.21.0.1"
    networks:
      - default
      - mongo
    ports:
      - 3000
networks:
  mongo:
    external:
      name: mongo_bridge_network
Kompose output:
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
        convert -f docker-compose.yml -o kubemanifest.yaml
      kompose.version: 1.21.0 (992df58d8)
    creationTimestamp: null
    labels:
      io.kompose.service: auth-module
    name: auth-module
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: auth-module
    strategy: {}
    template:
      metadata:
        annotations:
          kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
            convert -f docker-compose.yml -o kubemanifest.yaml
          kompose.version: 1.21.0 (992df58d8)
        creationTimestamp: null
        labels:
          io.kompose.network/mongo_bridge_network: "true"
          io.kompose.service: auth-module
      spec:
        containers:
        - image: auth-module
          imagePullPolicy: ""
          name: auth-module
          resources: {}
        restartPolicy: Always
        serviceAccountName: ""
        volumes: null
  status: {}
- apiVersion: extensions/v1beta1
  kind: NetworkPolicy
  metadata:
    creationTimestamp: null
    name: mongo_bridge_network
  spec:
    ingress:
    - from:
      - podSelector:
          matchLabels:
            io.kompose.network/mongo_bridge_network: "true"
    podSelector:
      matchLabels:
        io.kompose.network/mongo_bridge_network: "true"
kind: List
metadata: {}
I have a microservice which I developed and tested using docker-compose. Now I would like to deploy it to kubernetes.
Part of my docker-compose file looks like this:
tasksdb:
  container_name: tasks-db
  image: mongo:4.4.1
  restart: always
  ports:
    - '6004:27017'
  volumes:
    - ./tasks_service/tasks_db:/data/db
  networks:
    - backend
tasks-service:
  container_name: tasks-service
  build: ./tasks_service
  restart: always
  ports:
    - "5004:3000"
  volumes:
    - ./tasks_service/logs:/usr/src/app/logs
    - ./tasks_service/tasks_attachments/:/usr/src/app/tasks_attachments
  depends_on:
    - tasksdb
  networks:
    - backend
I used mongoose to connect to the database and it worked fine:
const connection = "mongodb://tasks-db:27017/tasks";
const connectDb = () => {
  mongoose.connect(connection, {useNewUrlParser:true, useCreateIndex:true, useFindAndModify: false});
  return mongoose.connect(connection);
};
Utilizing kompose, I created a deployment file; however, I had to modify the persistent volume and persistent volume claim accordingly.
I have something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tasks-volume
  labels:
    type: local
spec:
  storageClassName: manual
  volumeMode: Filesystem
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.60.50
    path: /tasks_db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tasksdb-claim0
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
I changed the mongo URL like this:
const connection = "mongodb://tasksdb.default.svc.cluster.local:27017/tasks";
My deployment looks like this:
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasks-service
    name: tasks-service
  spec:
    ports:
    - name: "5004"
      port: 5004
      targetPort: 3000
    selector:
      io.kompose.service: tasks-service
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasksdb
    name: tasksdb
  spec:
    ports:
    - name: "6004"
      port: 6004
      targetPort: 27017
    selector:
      io.kompose.service: tasksdb
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasks-service
    name: tasks-service
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: tasks-service
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
          kompose.version: 1.22.0 (955b78124)
        creationTimestamp: null
        labels:
          io.kompose.service: tasks-service
      spec:
        containers:
        - image: 192.168.60.50:5000/blascal_tasks-service
          name: tasks-service
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 3000
        restartPolicy: Always
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
      kompose.version: 1.22.0 (955b78124)
    creationTimestamp: null
    labels:
      io.kompose.service: tasksdb
    name: tasksdb
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: tasksdb
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
          kompose.version: 1.22.0 (955b78124)
        creationTimestamp: null
        labels:
          io.kompose.service: tasksdb
      spec:
        containers:
        - image: mongo:4.4.1
          name: tasks-db
          ports:
          - containerPort: 27017
          resources: {}
          volumeMounts:
          - mountPath: /data/db
            name: tasksdb-claim0
        restartPolicy: Always
        volumes:
        - name: tasksdb-claim0
          persistentVolumeClaim:
            claimName: tasksdb-claim0
  status: {}
Having several services I added an ingress resource for my routing:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: tasks-service
          servicePort: 5004
The deployment seems to run fine.
However, I have three issues:
Despite the fact that I can hit my default path, which just reads "tasks service is up", I cannot access my mongoose routes like /api/task/raise, which connects to the db; it says "..buffering timed out". I guess the path does not link up to the database service?
Whenever there is a power surge and my machine goes off, bringing up the db deployment fails until I delete the config files from the persistent volume. How do I prevent this corruption of files?
I have been researching how to change the master IP of my cluster, as I intend to transfer my cluster to a different network. Any guidance, please?
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
Your tasksdb Service exposes port 6004, not 27017. Try using the following URL:
const connection = "mongodb://tasksdb.default.svc.cluster.local:6004/tasks";
Changing your network depends on which networking CNI plugin you are using. Every plugin has different steps. For Calico, please see https://docs.projectcalico.org/networking/migrate-pools
I believe this is your ClusterIP Service setting for the mongodb instance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: tasksdb
  name: tasksdb
spec:
  ports:
  - name: "6004"
    port: 6004
    targetPort: 27017
  selector:
    io.kompose.service: tasksdb
status:
  loadBalancer: {}
When you create an instance of mongodb inside Kubernetes, it runs inside a pod. To connect to a pod, we have to go through a ClusterIP Service. Anytime we are trying to connect to a ClusterIP Service, we use the name of that Service as the host in the connection URL. In this case your connection URL must be:
mongodb://tasksdb:6004/nameOfDatabase
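As a quick sanity check that the Service name resolves and the port is reachable, one option is a throwaway client pod (a sketch; the pod name mongo-client is arbitrary, and the image tag matches the tasksdb deployment above):

kubectl run mongo-client --rm -it --restart=Never --image=mongo:4.4.1 -- mongo mongodb://tasksdb:6004/tasks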
This question already has answers here:
no matches for kind "Deployment" in version "extensions/v1beta1"
(8 answers)
Closed 2 years ago.
I had this docker-compose file which was working absolutely fine. But then I used "kompose convert -f docker-compose.yaml -o deploy.yaml" in order to get a yaml file for the Kubernetes deployment.
But when I go for "kubectl apply -f deploy.yaml"
I am getting this error:
"service/cms created
service/mysqldb created
persistentvolumeclaim/my-datavolume configured
unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
I am using minikube.
Please help me out.
docker-compose file content
version: "2"
services:
cms:
image: 1511981217/cms_mysql:0.0.2
ports:
- "8080:8080"
networks:
- cms-network
depends_on:
- mysqldb
mysqldb:
image: mysql:8
ports:
- "3306:3306"
networks:
- cms-network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=cmsdb
volumes:
- my-datavolume:/var/lib/mysql
networks:
cms-network:
volumes:
my-datavolume:
deploy.yaml file content
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: cms
    name: cms
  spec:
    ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
    selector:
      io.kompose.service: cms
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: mysqldb
    name: mysqldb
  spec:
    ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
    selector:
      io.kompose.service: mysqldb
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: cms
    name: cms
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
          kompose.version: 1.19.0 (f63a961c)
        creationTimestamp: null
        labels:
          io.kompose.service: cms
      spec:
        containers:
        - image: 1511981217/cms_mysql:0.0.2
          name: cms
          ports:
          - containerPort: 8080
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
      kompose.version: 1.19.0 (f63a961c)
    creationTimestamp: null
    labels:
      io.kompose.service: mysqldb
    name: mysqldb
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          kompose.cmd: kompose convert -f docker-compose.yml -o kubemanifests_2.yaml
          kompose.version: 1.19.0 (f63a961c)
        creationTimestamp: null
        labels:
          io.kompose.service: mysqldb
      spec:
        containers:
        - env:
          - name: MYSQL_DATABASE
            value: cmsdb
          - name: MYSQL_ROOT_PASSWORD
            value: root
          image: mysql:8
          name: mysqldb
          ports:
          - containerPort: 3306
          resources: {}
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: my-datavolume
        restartPolicy: Always
        volumes:
        - name: my-datavolume
          persistentVolumeClaim:
            claimName: my-datavolume
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: my-datavolume
    name: my-datavolume
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
kind: List
metadata: {}
extensions/v1beta1 was removed in Kubernetes version 1.16. Change extensions/v1beta1 to apps/v1 in the yaml and it should work with any Kubernetes cluster of version 1.16 or higher.
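Note that apps/v1 Deployments also require a spec.selector whose matchLabels match the pod template labels, and the kompose 1.19.0 output above omits it. A minimal sketch of the corrected cms Deployment (other fields as in the original):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: cms
  name: cms
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: cms  # required by apps/v1; must match the template labels
  template:
    metadata:
      labels:
        io.kompose.service: cms
    spec:
      containers:
      - image: 1511981217/cms_mysql:0.0.2
        name: cms
        ports:
        - containerPort: 8080
      restartPolicy: Always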
I have a problem running Kafka and ZooKeeper on a single-node Kubernetes cluster. I tested it on my laptop and it works, but when I run it on a private server the Kafka pod shows an error, and I don't know the network settings on the private server.
I used kompose to convert the docker-compose file to k8s yaml files.
zoo1: Temporary failure in name resolution
How do I fix that error?
Thank you.
My deployment and service yaml files:
kafka1-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: kafka1
  name: kafka1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: kafka1
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_LISTENERS
          value: PLAINTEXT://kafka1:9092
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_LOG4J_LOGGERS
          value: kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        image: confluentinc/cp-kafka:4.0.0
        name: kafka1
        ports:
        - containerPort: 9092
        resources: {}
      hostname: kafka1
      restartPolicy: Always
status: {}
kafka1-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: kafka1
  name: kafka1
spec:
  ports:
  - name: "9092"
    port: 9092
    targetPort: 9092
  selector:
    io.kompose.service: kafka1
status:
  loadBalancer: {}
zoo1-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: zoo1
  name: zoo1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: zoo1
    spec:
      containers:
      - env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_PORT
          value: "2181"
        - name: ZOO_SERVERS
          value: server.1=zoo1:2888:3888
        image: zookeeper:3.4.9
        name: zoo1
        ports:
        - containerPort: 2181
        resources: {}
      hostname: zoo1
      restartPolicy: Always
status: {}
zoo1-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: zoo1
  name: zoo1
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    io.kompose.service: zoo1
status:
  loadBalancer: {}
You have to create headless Services for Kafka and ZooKeeper.
I would recommend using the Confluent Helm charts to run Kafka in Kubernetes; you can see there how they created headless Services.
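For illustration, a headless Service is just a Service with clusterIP: None, so cluster DNS resolves the name directly to the pod IP instead of to a virtual IP. A minimal sketch for zoo1, reusing the port and label from the manifests above:

apiVersion: v1
kind: Service
metadata:
  name: zoo1
spec:
  clusterIP: None  # headless: DNS returns the pod IPs directly
  selector:
    io.kompose.service: zoo1
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181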
What are the logs? What's happening exactly?
Hope that helps!