Pod status as `CreateContainerConfigError` in Kubernetes cluster - docker

I am new to Kubernetes and have to deploy TheHive in our infrastructure. I use the Docker image created by the community, thehiveproject/thehive.
Below are the manifests I'm using for the deployment.
apiVersion: v1
kind: Service
metadata:
  name: thehive
  labels:
    app: thehive
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 30900
      protocol: TCP
  selector:
    app: thehive
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: thehive-pv-claim
  labels:
    app: thehive
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "local-path"
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: thehive
  labels:
    app: thehive
spec:
  selector:
    matchLabels:
      app: thehive
  template:
    metadata:
      labels:
        app: thehive
    spec:
      containers:
        - image: thehiveproject/thehive
          name: thehive
          env:
            - name: TH_NO_CONFIG
              value: 1
            - name: TH_SECRET
              value: "test#123"
            - name: TH_CONFIG_ES
              value: "elasticsearch"
            - name: TH_CORTEX_PORT
              value: "9001"
          ports:
            - containerPort: 9000
              name: thehive
          volumeMounts:
            - name: thehive-config-file
              mountPath: /etc/thehive/application.conf
              subPath: application.conf
            - name: thehive-storage
              mountPath: /etc/thehive/
      volumes:
        - name: thehive-storage
          persistentVolumeClaim:
            claimName: thehive-pv-claim
        - name: thehive-config-file
          hostPath:
            path: /home/ubuntu/k8s/thehive
Unfortunately, when I run
kubectl apply -f thehive-dep.yml
I get a CreateContainerConfigError. Elasticsearch is successfully deployed with the service name elasticsearch.
What am I doing wrong?
Thanks for any help :(
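For reference, the concrete reason behind a CreateContainerConfigError (for example a ConfigMap or Secret referenced by the pod that does not exist) is reported in the pod's events. A minimal way to surface it, with the pod name as a placeholder:
# Show the pod's events; the failure reason appears at the bottom of the output
kubectl describe pod <thehive-pod-name>
# Or list recent events in the namespace, newest last
kubectl get events --sort-by=.metadata.creationTimestamp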

Related

TeamCity/EKS cluster

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-teamcity-server
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: example-teamcity-server
  template:
    metadata:
      labels:
        app: example-teamcity-server
        teamcity: server
    spec:
      containers:
        - name: example-teamcity-server
          image: jetbrains/teamcity-server
          imagePullPolicy: Always
          ports:
            - containerPort: 8111
          volumeMounts:
            - name: teamcity-server-datadir-volume
              mountPath: "/data/teamcity_server/datadir"
            - name: teamcity-server-logs-volume
              mountPath: "/opt/teamcity/logs"
      volumes:
        - name: teamcity-server-datadir-volume
          persistentVolumeClaim:
            claimName: teamcity-server-premium-datadir-disk
        - name: teamcity-server-logs-volume
          persistentVolumeClaim:
            claimName: teamcity-server-premium-logs-disk

Kubectl create error "could not find expected"

I'm using Kubernetes version 1.20.5 with Docker 19.03.8 on a virtual machine. I'm trying to create a test ELK cluster with Kubernetes. When I run kubectl create I get the following error:
error parsing testserver.yaml: error converting YAML to JSON: yaml: line 17: could not find expected ':'
I keep checking but can't find where the missing ":" should be. I validated the YAML with a YAML linter and it reports the file as valid. The YAML file is as follows:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode1
name: testnode1
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode1
template:
metadata:
labels:
app: testnode1
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "false"
- name: node.name
value: testnode1
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode1
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode1-claim0
# restartPolicy: Always
volumes:
- name: testnode1-claim0
hostPath:
path: /logtest/es1
type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
name: testnode1-service
namespace: testlog
labels:
app: testnode1
spec:
type: NodePort
ports:
- port: 9200
nodePort: 9201
targetPort: 9200
protocol: TCP
name: testnode1-9200
- port: 9300
nodePort: 9301
targetPort: 9300
protocol: TCP
name: testnode1-9300
selector:
app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode1
namespace: testlog
labels:
app: testnode1
spec:
clusterIP: None
selector:
app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode2
name: testnode2
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode2
template:
metadata:
labels:
app: testnode2
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode2
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode2
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode2-claim0
# restartPolicy: Always
volumes:
- name: testnode2-claim0
hostPath:
path: /logtest/es2
type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode2
namespace: testlog
labels:
app: testnode2
spec:
clusterIP: None
selector:
app: testnode2
----
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode3
name: testnode3
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode3
template:
metadata:
labels:
app: testnode3
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode3
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode3
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode3-claim0
# restartPolicy: Always
volumes:
- name: testnode3-claim0
hostPath:
path: /logtest/es3
type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
name: testnode3
namespace: testlog
labels:
app: testnode3
spec:
clusterIP: None
selector:
app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_HOSTS
value: http://testnode1:9200
- name: ELASTICSEARCH_URL
value: http://testnode1:9200
image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
name: kibana
# restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: testlog
labels:
app: kibana
spec:
clusterIP: None
selector:
app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
name: kibana-service
namespace: testlog
labels:
app: kibana
spec:
type: NodePort
ports:
- port: 5601
nodePort: 5602
targetPort: 5601
protocol: TCP
name: kibana
selector:
app: kibana
----
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: elasticsearch-hq
name: elasticsearch-hq
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch-hq
template:
metadata:
labels:
app: elasticsearch-hq
spec:
containers:
- image: elastichq/elasticsearch-hq
name: elasticsearch-hq
# restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-hq-service
namespace: testlog
labels:
app: elasticsearch-hq
spec:
type: NodePort
ports:
- port: 8081
nodePort: 8081
targetPort: 5000
protocol: TCP
name: elasticsearch-hq
selector:
app: elasticsearch-hq
There are a couple of issues in the YAML file:
You have used four - characters in some places, whereas the document separator is three (---).
Once you fix the first issue, you'll see the following errors for the NodePort services, because the valid range for nodePort is 30000-32767:
Error from server (Invalid): error when creating "testserver.yaml": Service "testnode1-service" is invalid: spec.ports[0].nodePort: Invalid value: 9201: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "kibana-service" is invalid: spec.ports[0].nodePort: Invalid value: 5602: provided port is not in the valid range. The range of valid ports is 30000-32767
Error from server (Invalid): error when creating "testserver.yaml": Service "elasticsearch-hq-service" is invalid: spec.ports[0].nodePort: Invalid value: 8081: provided port is not in the valid range. The range of valid ports is 30000-32767
Fixing both errors resolves the YAML issues.
Below is the full working YAML file:
#namespace define
apiVersion: v1
kind: Namespace
metadata:
name: testlog
---
#esnodes
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode1
name: testnode1
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode1
template:
metadata:
labels:
app: testnode1
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "false"
- name: node.name
value: testnode1
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode1
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode1-claim0
# restartPolicy: Always
volumes:
- name: testnode1-claim0
hostPath:
path: /logtest/es1
type: DirectoryOrCreate
---
#es1 portservice
apiVersion: v1
kind: Service
metadata:
name: testnode1-service
namespace: testlog
labels:
app: testnode1
spec:
type: NodePort
ports:
- port: 9200
nodePort: 31201
targetPort: 9200
protocol: TCP
name: testnode1-9200
- port: 9300
nodePort: 31301
targetPort: 9300
protocol: TCP
name: testnode1-9300
selector:
app: testnode1
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode1
namespace: testlog
labels:
app: testnode1
spec:
clusterIP: None
selector:
app: testnode1
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode2
name: testnode2
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode2
template:
metadata:
labels:
app: testnode2
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode2
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode2
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode2-claim0
# restartPolicy: Always
volumes:
- name: testnode2-claim0
hostPath:
path: /logtest/es2
type: DirectoryOrCreate
---
#es1 dns
apiVersion: v1
kind: Service
metadata:
name: testnode2
namespace: testlog
labels:
app: testnode2
spec:
clusterIP: None
selector:
app: testnode2
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: testnode3
name: testnode3
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: testnode3
template:
metadata:
labels:
app: testnode3
spec:
containers:
- env:
- name: ES_JAVA_OPTS
value: -Xms768m -Xmx768m
- name: MAX_LOCKED_MEMORY
value: unlimited
- name: bootstrap.memory_lock
value: "true"
- name: cluster.initial_master_nodes
value: testnode1,testnode2,testnode3
- name: cluster.name
value: testcluster
- name: discovery.seed_hosts
value: testnode1,testnode2,testnode3
- name: http.cors.allow-origin
value: "*"
- name: network.host
value: 0.0.0.0
- name: node.data
value: "true"
- name: node.name
value: testnode3
image: amazon/opendistro-for-elasticsearch:1.8.0
name: testnode3
securityContext:
privileged: true
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: testnode3-claim0
# restartPolicy: Always
volumes:
- name: testnode3-claim0
hostPath:
path: /logtest/es3
type: DirectoryOrCreate
---
#es3 dns
apiVersion: v1
kind: Service
metadata:
name: testnode3
namespace: testlog
labels:
app: testnode3
spec:
clusterIP: None
selector:
app: testnode3
---
#kibana dep
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_HOSTS
value: http://testnode1:9200
- name: ELASTICSEARCH_URL
value: http://testnode1:9200
image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
name: kibana
# restartPolicy: Always
---
#kibana dns
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: testlog
labels:
app: kibana
spec:
clusterIP: None
selector:
app: kibana
---
#kibana port servi
apiVersion: v1
kind: Service
metadata:
name: kibana-service
namespace: testlog
labels:
app: kibana
spec:
type: NodePort
ports:
- port: 5601
nodePort: 31602
targetPort: 5601
protocol: TCP
name: kibana
selector:
app: kibana
---
#elasticsearch-hq deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: elasticsearch-hq
name: elasticsearch-hq
namespace: testlog
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch-hq
template:
metadata:
labels:
app: elasticsearch-hq
spec:
containers:
- image: elastichq/elasticsearch-hq
name: elasticsearch-hq
# restartPolicy: Always
---
#elasticsearch-hq port service
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-hq-service
namespace: testlog
labels:
app: elasticsearch-hq
spec:
type: NodePort
ports:
- port: 8081
nodePort: 31081
targetPort: 5000
protocol: TCP
name: elasticsearch-hq
selector:
app: elasticsearch-hq
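As an extra check (not part of the fix itself), a client-side dry run will parse and validate a multi-document file like this without creating any objects:
# Report YAML and schema errors in testserver.yaml without applying anything
kubectl apply --dry-run=client -f testserver.yaml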

Jenkins in k8s doesn't save installed plugins

I have the following setup that saves the Jenkins state using a PV/PVC. The problem is that the volume can't be mounted at /var/jenkins_home, although it mounts fine in any other folder. What should I do?
Or should I save the state of the Jenkins plugins to a folder and then restore them from there with some script?
jenkins-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: test-pvc
              mountPath: /var/jenkins_home/
      volumes:
        - name: test-pvc
          persistentVolumeClaim:
            claimName: test-pvc
pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  hostPath:
    path: /data/jenkins_home/
pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: jenkins-pv
  storageClassName: local-storage
I figured it out. Here is the working configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: jenkins-storage
              mountPath: /var/jenkins_home/
      volumes:
        - name: jenkins-storage
          persistentVolumeClaim:
            claimName: jenkins-pv-clain
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
  selector:
    app: jenkins
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-clain
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
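Assuming the manifests above are applied to the jenkins namespace, a quick way to confirm the claim is bound and the state really lands on the volume (the pod name is a placeholder):
# Check that the PersistentVolumeClaim is Bound and the pod is Running
kubectl get pvc,pods -n jenkins
# Confirm Jenkins writes its home directory into the mounted volume
kubectl exec -n jenkins <jenkins-pod-name> -- ls /var/jenkins_home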

Kubernetes: Modeling Jobs/Cron tasks for Postgres + Tomcat application

I work on an open source system that consists of a Postgres database and a Tomcat server. I have Docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with Kubernetes.
Here is my first attempt:
apiVersion: v1
kind: Pod
metadata:
  name: dspace-pod
spec:
  volumes:
    - name: "pgdata-vol"
      emptyDir: {}
    - name: "assetstore"
      emptyDir: {}
    - name: my-local-config-map
      configMap:
        name: local-config-map
  containers:
    - image: dspace/dspace:dspace-6_x
      name: dspace
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
    #
    - image: dspace/dspace-postgres-pgcrypto
      name: dspacedb
      ports:
        - containerPort: 5432
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/pgdata"
          name: "pgdata-vol"
      env:
        - name: PGDATA
          value: /pgdata
I have a configMap that is setting the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspace-pod:5432/dspace
    dspace.hostname = dspace-pod
    dspace.baseUrl = http://dspace-pod:8080
    solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a third Docker image that contains the jars needed on the command line.
I am interested in modeling these command-line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a Job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test#test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
The following configuration has allowed me to start my services (tomcat and postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  # example of a simple property defined using --from-literal
  #example.property.1: hello
  #example.property.2: world
  # example of a complex property defined using --from-file
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspacedb-service:5432/dspace
    dspace.hostname = dspace-service
    dspace.baseUrl = http://dspace-service:8080
    solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
  name: dspacedb-service
  labels:
    app: dspacedb-app
spec:
  type: NodePort
  selector:
    app: dspacedb-app
  ports:
    - protocol: TCP
      port: 5432
      # targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspacedb-deploy
  labels:
    app: dspacedb-app
spec:
  selector:
    matchLabels:
      app: dspacedb-app
  template:
    metadata:
      labels:
        app: dspacedb-app
    spec:
      volumes:
        - name: "pgdata-vol"
          emptyDir: {}
      containers:
        - image: dspace/dspace-postgres-pgcrypto
          name: dspacedb
          ports:
            - containerPort: 5432
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/pgdata"
              name: "pgdata-vol"
          env:
            - name: PGDATA
              value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: dspace-service
  labels:
    app: dspace-app
spec:
  type: NodePort
  selector:
    app: dspace-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspace-deploy
  labels:
    app: dspace-app
spec:
  selector:
    matchLabels:
      app: dspace-app
  template:
    metadata:
      labels:
        app: dspace-app
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - image: dspace/dspace:dspace-6_x-jdk8-test
          name: dspace
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test#test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do persisting the volumes.
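One possible next step, sketched under the assumption that the cluster offers a default StorageClass (the claim name pgdata-pvc is made up for illustration), is to back pgdata-vol with a PersistentVolumeClaim instead of an emptyDir:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
In dspacedb-deploy, the pgdata-vol entry under volumes would then use persistentVolumeClaim with claimName: pgdata-pvc rather than emptyDir: {}, so the Postgres data survives pod restarts.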

How to link a Tomcat container with a MySQL DB container in Kubernetes

My Tomcat and MySQL containers are not connecting, so how can I link them so that my WAR file can run successfully?
I built my Tomcat image using this Dockerfile:
FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
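For reference, a minimal sketch of building and pushing that image under the tag referenced by tomcat.yaml below (assuming a Docker Hub account with that namespace):
# Build the WAR-carrying Tomcat image and push it so the cluster can pull it
docker build -t suji165475/vignesh:tomcatserver .
docker push suji165475/vignesh:tomcatserver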
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/stackoverflow/tmp/data"  # this is the path where my SQL init script is placed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
tomcat.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: tomcat
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
        - image: suji165475/vignesh:tomcatserver
          name: tomcat
          env:
            - name: DB_PORT_3306_TCP_ADDR
              value: mysql # service name of mysql
            - name: DB_ENV_MYSQL_DATABASE
              value: data-core
            - name: DB_ENV_MYSQL_ROOT_PASSWORD
              value: root
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: tomcat-persistent-storage
              mountPath: /var/data
      volumes:
        - name: tomcat-persistent-storage
          persistentVolumeClaim:
            claimName: tomcat-pv-claim
tomcatpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: tomcat-pv
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/app"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
I'm currently using type: NodePort for the Tomcat service. Do I have to use NodePort for MySQL as well? If so, should I give it the same nodePort or a different one?
Note: I am running all of this on a server through a PuTTY terminal.
When Kubernetes starts a Service, it adds environment variables for the host, port, etc. to the pods. Try using the environment variable MYSQL_SERVICE_HOST.
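A small sketch of how to check what the Tomcat container actually sees (the pod name is a placeholder); the Service DNS name mysql is an alternative to the injected variables:
# Print the MySQL-related environment inside the Tomcat pod
kubectl exec -it <tomcat-pod-name> -- env | grep -i mysql
# The Service DNS name also resolves in-cluster, so a JDBC URL such as
#   jdbc:mysql://mysql:3306/data-core
# works without hard-coding an IP.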
