Kubectl create error "could not find expected" - docker

I'm using Kubernetes version 1.20.5 with Docker 19.03.8 on a virtual machine. I'm trying to create a test ELK cluster with Kubernetes. When I run kubectl create, I get the following error:
error parsing testserver.yaml: error converting YAML to JSON: yaml: line 17: could not find expected ':'
I keep checking but can't find where the missing ":" should be. I validated the YAML with yamllint and it reports the YAML as valid. The YAML file looks like this:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode1
name: testnode1
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode1
template:
metadata:
labels:
app: testnode1
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "false"
- name: node.name
value: testnode1
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode1
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode1-claim0
# restartPolicy: Always
volumes:
- name: testnode1-claim0
hostPath:
path: /logtest/es1
type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
name: testnode1-service
namespace: testlog
labels:
app: testnode1
spec:
type: NodePort
ports:
- port: 9200
nodePort: 9201
targetPort: 9200
protocol: TCP
name: testnode1-9200
- port: 9300
nodePort: 9301
targetPort: 9300
protocol: TCP
name: testnode1-9300
selector:
app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode1
namespace: testlog
labels:
app: testnode1
spec:
clusterIP: None
selector:
app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode2
name: testnode2
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode2
template:
metadata:
labels:
app: testnode2
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode2
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode2
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode2-claim0
# restartPolicy: Always
volumes:
- name: testnode2-claim0
hostPath:
path: /logtest/es2
type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode2
namespace: testlog
labels:
app: testnode2
spec:
clusterIP: None
selector:
app: testnode2
----
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode3
name: testnode3
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode3
template:
metadata:
labels:
app: testnode3
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode3
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode3
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode3-claim0
# restartPolicy: Always
volumes:
- name: testnode3-claim0
hostPath:
path: /logtest/es3
type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
name: testnode3
namespace: testlog
labels:
app: testnode3
spec:
clusterIP: None
selector:
app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_HOSTS
value: http://testnode1:9200
- name: ELASTICSEARCH_URL
value: http://testnode1:9200
image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
name: kibana
# restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: testlog
labels:
app: kibana
spec:
clusterIP: None
selector:
app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
name: kibana-service
namespace: testlog
labels:
app: kibana
spec:
type: NodePort
ports:
- port: 5601
nodePort: 5602
targetPort: 5601
protocol: TCP
name: kibana
selector:
app: kibana
----
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: elasticsearch-hq
name: elasticsearch-hq
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch-hq
template:
metadata:
labels:
app: elasticsearch-hq
spec:
containers:
- image: elastichq/elasticsearch-hq
name: elasticsearch-hq
# restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-hq-service
namespace: testlog
labels:
app: elasticsearch-hq
spec:
type: NodePort
ports:
- port: 8081
nodePort: 8081
targetPort: 5000
protocol: TCP
name: elasticsearch-hq
selector:
app: elasticsearch-hq

There are a couple of issues in the YAML file:
1. Four dashes (----) are used as the document separator in some places, whereas the separator must be exactly three dashes (---).
2. Once you fix the first issue, you'll see the following errors for the NodePort services, because the valid range for nodePort is 30000-32767:
Error from server (Invalid): error when creating "testserver.yaml": Service "testnode1-service" is invalid: spec.ports[0].nodePort: Invalid value: 9201: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "kibana-service" is invalid: spec.ports[0].nodePort: Invalid value: 5602: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "elasticsearch-hq-service" is invalid: spec.ports[0].nodePort: Invalid value: 8081: provided port is not in the valid range. The range of valid ports is 30000-32767
Fixing both issues resolves the problem; a client-side dry run (kubectl create --dry-run=client -f testserver.yaml) catches the separator error before the manifest is sent to the cluster.
Below is the full working YAML file:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode1
name: testnode1
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode1
template:
metadata:
labels:
app: testnode1
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "false"
- name: node.name
value: testnode1
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode1
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode1-claim0
# restartPolicy: Always
volumes:
- name: testnode1-claim0
hostPath:
path: /logtest/es1
type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
name: testnode1-service
namespace: testlog
labels:
app: testnode1
spec:
type: NodePort
ports:
- port: 9200
nodePort: 31201
targetPort: 9200
protocol: TCP
name: testnode1-9200
- port: 9300
nodePort: 31301
targetPort: 9300
protocol: TCP
name: testnode1-9300
selector:
app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode1
namespace: testlog
labels:
app: testnode1
spec:
clusterIP: None
selector:
app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode2
name: testnode2
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode2
template:
metadata:
labels:
app: testnode2
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode2
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode2
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode2-claim0
# restartPolicy: Always
volumes:
- name: testnode2-claim0
hostPath:
path: /logtest/es2
type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode2
namespace: testlog
labels:
app: testnode2
spec:
clusterIP: None
selector:
app: testnode2
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode3
name: testnode3
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode3
template:
metadata:
labels:
app: testnode3
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode3
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode3
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode3-claim0
# restartPolicy: Always
volumes:
- name: testnode3-claim0
hostPath:
path: /logtest/es3
type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
name: testnode3
namespace: testlog
labels:
app: testnode3
spec:
clusterIP: None
selector:
app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_HOSTS
value: http://testnode1:9200
- name: ELASTICSEARCH_URL
value: http://testnode1:9200
image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
name: kibana
# restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: testlog
labels:
app: kibana
spec:
clusterIP: None
selector:
app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
name: kibana-service
namespace: testlog
labels:
app: kibana
spec:
type: NodePort
ports:
- port: 5601
nodePort: 31602
targetPort: 5601
protocol: TCP
name: kibana
selector:
app: kibana
---
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: elasticsearch-hq
name: elasticsearch-hq
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch-hq
template:
metadata:
labels:
app: elasticsearch-hq
spec:
containers:
- image: elastichq/elasticsearch-hq
name: elasticsearch-hq
# restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-hq-service
namespace: testlog
labels:
app: elasticsearch-hq
spec:
type: NodePort
ports:
- port: 8081
nodePort: 31081
targetPort: 5000
protocol: TCP
name: elasticsearch-hq
selector:
app: elasticsearch-hq

Related

Docker Desktop and Istio unable to access endpoint on macOS

I am trying to work on a sample project for Istio. I have two apps, demo1 and demo2.
demoapp YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-1-app
spec:
replicas: 1
selector:
matchLabels:
app: demo-1-app
template:
metadata:
labels:
app: demo-1-app
spec:
containers:
- name: demo-1-app
image: muzimil:demo-1
ports:
- containerPort: 8080
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: demo-1-app
spec:
selector:
app: demo-1-app
ports:
- port: 8080
name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-1-app
labels:
account: demo-1-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-2-app
spec:
replicas: 1
selector:
matchLabels:
app: demo-2-app
template:
metadata:
labels:
app: demo-2-app
spec:
containers:
- name: demo-2-app
image: muzimil:demo2-1
ports:
- containerPort: 8080
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: demo-2-app
spec:
selector:
app: demo-2-app
ports:
- port: 8080
name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: demo-2-app
labels:
account: demo-2-app
And my gateway is this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: demo-app-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: demo-service1
spec:
hosts:
- "*"
gateways:
- demo-app-gateway
http:
- match:
- uri:
exact: /demo1
route:
- destination:
host: demo-1-app
port:
number: 8080
- match:
- uri:
exact: /demo2
route:
- destination:
host: demo-2-app
port:
number: 8080
I tried to hit the URL as both localhost/demo1/getDetails and 127.0.0.1/demo1/getDetails,
but I always get a 404.
istioctl analyze does not report any errors.
To access the application, either change the istio-ingressgateway service to type NodePort or set up port forwarding for the Istio ingress gateway service. Edit the istio-ingressgateway service and change the service type:
type: NodePort
Kubernetes will then assign a node port, and you can provide the same node port value in the Istio Gateway (a sketch of the patched Service also follows the Gateway snippet below):
selector:
  istio: ingressgateway # use istio default controller
servers:
  - port:
      number: <nodeportnumber>
      name: http
      protocol: HTTP
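For reference, a minimal sketch of what the patched istio-ingressgateway Service might look like, assuming the default istio-system namespace and the default http2 port mapping (80 -> 8080); the nodePort value 31080 is only an illustrative choice inside the allowed 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      port: 80
      targetPort: 8080
      nodePort: 31080   # illustrative value; Kubernetes picks one if omitted
The port-forwarding alternative (kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80) avoids changing the service type and exposes the gateway on localhost:8080 instead.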

Bitnami Airflow scheduler cannot connect to the database while the webserver can, even though they have the same properties?

I want to configure Airflow on OpenShift.
I set up the database on OpenShift like below:
kind: Service
apiVersion: v1
metadata:
name: airflow-database
namespace: ersin-poc
spec:
ports:
- name: 5432-tcp
protocol: TCP
port: 5432
targetPort: 5432
selector:
deployment: airflow-database
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
and my database deployment is like below:
kind: Deployment
apiVersion: apps/v1
metadata:
name: airflow-database
namespace: ersin-poc
labels:
deployment: airflow-database
spec:
replicas: 1
selector:
matchLabels:
deployment: airflow-database
template:
metadata:
creationTimestamp: null
labels:
deployment: airflow-database
spec:
volumes:
- name: generic
persistentVolumeClaim:
claimName: generic
- name: empty1
emptyDir: {}
containers:
- resources: {}
name: airflow-database
env:
- name: POSTGRESQL_USERNAME
value: 'bn_airflow'
- name: POSTGRESQL_PASSWORD
value: 'bitnami1'
- name: POSTGRESQL_DATABASE
value: 'bitnami_airflow'
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: generic
mountPath: /bitnami/postgresql/
image: >-
bitnami/postgresql:latest
hostname: airflow-database
I can connect to this DB from my webserver, defined like below:
kind: Deployment
apiVersion: apps/v1
metadata:
name: airflow-webserver
namespace: ersin-poc
labels:
deployment: airflow-webserver
spec:
replicas: 1
selector:
matchLabels:
deployment: airflow-webserver
template:
metadata:
creationTimestamp: null
labels:
deployment: airflow-webserver
spec:
volumes:
- name: generic
persistentVolumeClaim:
claimName: generic
- name: empty1
emptyDir: {}
containers:
- resources: {}
name: airflow-webserver
env:
- name: AIRFLOW_HOME
value: /home/appuser
- name: USER
value: appuser
- name: AIRFLOW_FERNET_KEY
value: '46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho='
- name: AIRFLOW_SECRET_KEY
value: 'a25mQ1FHTUh3MnFRSk5KMEIyVVU2YmN0VGRyYTVXY08='
- name: AIRFLOW_EXECUTOR
value: 'CeleryExecutor'
- name: AIRFLOW_DATABASE_NAME
value: 'bitnami_airflow'
- name: AIRFLOW_DATABASE_USERNAME
value: 'bn_airflow'
- name: AIRFLOW_DATABASE_PASSWORD
value: 'bitnami1'
- name: AIRFLOW_LOAD_EXAMPLES
value: 'yes'
- name: AIRFLOW_PASSWORD
value: 'bitnami123'
- name: AIRFLOW_USERNAME
value: 'user'
- name: AIRFLOW_EMAIL
value: 'user#example.com'
- name: AIRFLOW_DATABASE_HOST
value: 'airflow-database'
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: '5432'
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: generic
mountPath: /home/appuser
- name: generic
mountPath: /home/appuser/logs/
- name: generic
mountPath: /home/appuser/dags/
image: >-
bitnami/airflow:latest
hostname: airflow-webserver
But when I try it with airflow-scheduler it gives an error:
airflow-scheduler 09:29:43.31 INFO ==> Trying to connect to the database server
airflow-scheduler 09:30:47.42 ERROR ==> Could not connect to the database
and my scheduler YAML is:
kind: Deployment
apiVersion: apps/v1
metadata:
name: airflow-scheduler
namespace: ersin-poc
labels:
deployment: airflow-scheduler
spec:
replicas: 1
selector:
matchLabels:
deployment: airflow-scheduler
template:
metadata:
labels:
deployment: airflow-scheduler
spec:
volumes:
- name: generic
persistentVolumeClaim:
claimName: generic
- name: empty1
emptyDir: {}
containers:
- resources: {}
name: airflow-scheduler
env:
- name: AIRFLOW_HOME
value: /home/appuser
- name: USER
value: appuser
- name: AIRFLOW_FERNET_KEY
value: '46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho='
- name: AIRFLOW_SECRET_KEY
value: 'a25mQ1FHTUh3MnFRSk5KMEIyVVU2YmN0VGRyYTVXY08='
- name: AIRFLOW_EXECUTOR
value: 'CeleryExecutor'
- name: AIRFLOW_DATABASE_NAME
value: 'bitnami_airflow'
- name: AIRFLOW_DATABASE_USERNAME
value: 'bn_airflow'
- name: AIRFLOW_DATABASE_PASSWORD
value: 'bitnami1'
- name: AIRFLOW_DATABASE_HOST
value: 'airflow-database'
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: '5432'
- name: AIRFLOW_WEBSERVER_HOST
value: 'airflow-webserver'
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: '8080'
- name: REDIS_HOST
value: 'airflow-redis'
- name: REDIS_PORT_NUMBER
value: '6379'
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: generic
mountPath: /home/appuser
- name: generic
mountPath: /home/appuser/logs/
- name: generic
mountPath: /home/appuser/dags/
image: >-
bitnami/airflow-scheduler:latest
hostname: airflow-scheduler
So I can't understand why I get this error when they have the same properties.
Thanks in advance.
EDIT
And I ran these commands in the scheduler pod to see whether I can connect to the DB or not:
psql -h airflow-database -p 5432 -U bn_airflow -d bitnami_airflow -W
pass: bitnami1
select * from public.ab_user;
and yes I can.
After a lot of searching, I decided to do this with the apache/airflow images (PostgreSQL and Redis are still Bitnami; that part doesn't matter).
You can see all the YAML files for Airflow on OpenShift here:
https://github.com/ersingulbahar/airflow_on_openshift
It now works as expected.

Pod status as `CreateContainerConfigError` in Kubernetes cluster

I am new to Kubernetes and have to deploy TheHive in our infrastructure. I use the Docker image created by the community, thehiveproject/thehive.
Below are the manifests I'm using for deployment.
apiVersion: v1
kind: Service
metadata:
name: thehive
labels:
app: thehive
spec:
type: NodePort
ports:
- port: 9000
targetPort: 9000
nodePort: 30900
protocol: TCP
selector:
app: thehive
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: thehive-pv-claim
labels:
app: thehive
spec:
accessModes:
- ReadWriteOnce
storageClassName: "local-path"
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: thehive
labels:
app: thehive
spec:
selector:
matchLabels:
app: thehive
template:
metadata:
labels:
app: thehive
spec:
containers:
- image: thehiveproject/thehive
name: thehive
env:
- name: TH_NO_CONFIG
value: 1
- name: TH_SECRET
value: "test#123"
- name: TH_CONFIG_ES
value: "elasticsearch"
- name: TH_CORTEX_PORT
value: "9001"
ports:
- containerPort: 9000
name: thehive
volumeMounts:
- name: thehive-config-file
mountPath: /etc/thehive/application.conf
subPath: application.conf
- name: thehive-storage
mountPath: /etc/thehive/
volumes:
- name: thehive-storage
persistentVolumeClaim:
claimName: thehive-pv-claim
- name: thehive-config-file
hostPath:
path: /home/ubuntu/k8s/thehive
Unfortunately, when I run
kubectl apply -f thehive-dep.yml
I get a CreateContainerConfigError. Elasticsearch is successfully deployed with the service name elasticsearch.
What am I doing wrong?
Thanks for any help.

how to link tomcat with mysql db container in kubernetes

My Tomcat and MySQL containers are not connecting, so how can I link them so that my WAR file runs successfully?
I built my Tomcat image using this Dockerfile:
FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
imagePullPolicy: "IfNotPresent"
env:
- name: MYSQL_ROOT_PASSWORD
value: root
- name: MYSQL_DATABASE
value: data-core
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-initdb-pv-claim
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-initdb-pv-volume
labels:
type: local
app: mysql
spec:
storageClassName: manual
capacity:
storage: 1Mi
accessModes:
- ReadOnlyMany
hostPath:
path: "/home/vignesh/stackoverflow/tmp/data" //this is the path were my
sql init script is placed.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-initdb-pv-claim
labels:
app: mysql
spec:
storageClassName: manual
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1Mi
tomcat.yaml
apiVersion: v1
kind: Service
metadata:
name: tomcat
labels:
app: tomcat
spec:
type: NodePort
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: tomcat
tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat
labels:
app: tomcat
spec:
selector:
matchLabels:
app: tomcat
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: tomcat
tier: frontend
spec:
containers:
- image: suji165475/vignesh:tomcatserver
name: tomcat
env:
- name: DB_PORT_3306_TCP_ADDR
value: mysql #service name of mysql
- name: DB_ENV_MYSQL_DATABASE
value: data-core
- name: DB_ENV_MYSQL_ROOT_PASSWORD
value: root
ports:
- containerPort: 8080
name: http
volumeMounts:
- name: tomcat-persistent-storage
mountPath: /var/data
volumes:
- name: tomcat-persistent-storage
persistentVolumeClaim:
claimName: tomcat-pv-claim
tomcatpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: tomcat-pv
labels:
type: local
app: mysql
spec:
storageClassName: manual
capacity:
storage: 1Mi
accessModes:
- ReadOnlyMany
hostPath:
path: "/app"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: tomcat-pv-claim
labels:
app: mysql
spec:
storageClassName: manual
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1Mi
I'm currently using type: NodePort for the Tomcat service. Do I have to use NodePort for MySQL as well? If so, should I give it the same nodePort or a different one?
Note: I am running all of this on a server through a PuTTY terminal.
When Kubernetes starts a Service, it adds environment variables for the host, port, etc. to pods created afterwards. Try using the environment variable MYSQL_SERVICE_HOST, as sketched below.
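As a hedged sketch (not part of the original answer): for a Service named mysql that has a ClusterIP, Kubernetes injects variables such as MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT into pods created after the Service exists, and they can be mapped onto the names the application already expects with $(VAR) expansion in the env block. Note that the mysql Service above is headless (clusterIP: None); service environment variables are only generated for Services with a ClusterIP, so either give it a ClusterIP or keep pointing the app at the DNS name mysql.
env:
  - name: DB_PORT_3306_TCP_ADDR
    value: "$(MYSQL_SERVICE_HOST)"   # expanded from the injected service variable, if present
  - name: DB_ENV_MYSQL_DATABASE
    value: data-core
  - name: DB_ENV_MYSQL_ROOT_PASSWORD
    value: root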

Connection issue between services in kubernetes

I have three different images related to my application, which works fine in docker-compose but has issues running on a Kubernetes cluster in GCP.
Below is the deployment file.
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql-database
spec:
type: NodePort
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql-database
tier: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql-database
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-database
tier: database
spec:
hostname: mysql
containers:
- image: mysql/mysql-server:5.7
name: mysql
env:
- name: "MYSQL_USER"
value: "root"
- name: "MYSQL_HOST"
value: "mysql"
- name: "MYSQL_DATABASE"
value: "xxxx"
- name: "MYSQL_PORT"
value: "3306"
- name: "MYSQL_PASSWORD"
value: "password"
- name: "MYSQL_ROOT_PASSWORD"
value: "password"
- name: "RAILS_ENV"
value: "production"
ports:
- containerPort: 5432
name: db
---
apiVersion: v1
kind: Service
metadata:
name: dgservice
labels:
app: dgservice
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
selector:
name: dgservice
tier: dgservice
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dgservice
labels:
app: dgservice
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: dgservice
tier: dgservice
spec:
hostname: dgservice
containers:
- image: gcr.io/sample/sample-image:check_1
name: dgservice
ports:
- containerPort: 8080
name: dgservice
---
apiVersion: v1
kind: Service
metadata:
name: dg-ui
labels:
name: dg-ui
spec:
type: NodePort
ports:
- nodePort: 30156
port: 8000
protocol: TCP
targetPort: 8000
selector:
app: dg-ui
tier: dg
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dg-ui
labels:
app: dg-ui
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: dg-ui
tier: dg
spec:
hostname: dg-ui
containers:
- image: gcr.io/sample/sample:latest
name: dg-ui
env:
- name: "MYSQL_USER"
value: "root"
- name: "MYSQL_HOST"
value: "mysql"
- name: "MYSQL_DATABASE"
value: "xxxx"
- name: "MYSQL_PORT"
value: "3306"
- name: "MYSQL_PASSWORD"
value: "password"
- name: "MYSQL_ROOT_PASSWORD"
value: "password"
- name: "RAILS_ENV"
value: "production"
- name: "DG_SERVICE_HOST"
value: "dgservice"
ports:
- containerPort: 8000
name: dg-ui
The image is being pulled successfully from GCR as well.
The connection between the mysql and ui services also works fine, and my data gets migrated without any issues. But the connection is not established between the service and the UI.
Why is the UI not able to access the service in my application?
Your deployment has the labels shown below, so the Service's selector needs to use the same labels in order for the Endpoints object to be created.
Endpoints are the API object behind a Service; they are where a Service routes connections when something connects to the Service's ClusterIP.
These are the labels of the dgservice deployment:
labels:
  app: dgservice
  tier: dgservice
New Service definition with the correct selector labels:
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice
I am assuming that by "service" you are referring to your "dgservice". With the YAML presented above, I believe you also need to specify DG_SERVICE_PORT (port 8080) to correctly access "dgservice".
As mentioned by Suresh in the comments, you should expose internal services using the ClusterIP type. NodePort is a superset of ClusterIP: it exposes the Service internally to the cluster at service-name:port and externally at node-ip:nodePort, targeting your deployment/pod at the targetPort. A hedged ClusterIP sketch for dgservice follows.
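A minimal sketch of that suggestion, assuming dgservice only needs to be reachable from inside the cluster (ClusterIP is also the default when type is omitted):
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice
The dg-ui pods can then reach it through cluster DNS at http://dgservice:8080.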

Resources