I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I've run into an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
      - name: pv-storage-glpi
        persistentVolumeClaim:
          claimName: pv-claim-glpi
      containers:
      - name: mariadb
        image: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "glpi"
        - name: MYSQL_DATABASE
          value: "glpi"
        - name: MYSQL_USER
          value: "glpi"
        - name: MYSQL_PASSWORD
          value: "glpi"
        - name: GLPI_SOURCE_URL
          value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mariadb/
          name: pv-storage-glpi
          subPath: mariadb
      - name: glpi
        image: driket54/glpi
        ports:
        - containerPort: 80
          name: http
        - containerPort: 8090
          name: https
        volumeMounts:
        - mountPath: /var/glpidata
          name: pv-storage-glpi
          subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: http
    name: http
  - protocol: "TCP"
    port: 8090
    targetPort: https
    name: https
  - protocol: "TCP"
    port: 3306
    targetPort: mariadb
    name: mariadb
  type: NodePort
---
The Docker image is deployed properly, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer since I don't have the expected Kubernetes knowledge, but I can't add a comment yet :(
The first thing you should change is your GLPI version.
Use this link. It's the latest release:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you can use the CLI tools to set up the database.
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(not sure about the --host value in your environment, but you get the idea)
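As a rough sketch of how that could be run inside the cluster (untested; the pod name placeholder and the script's working directory inside the driket54/glpi image are assumptions, and since both containers share the pod's network namespace the database host would be 127.0.0.1 rather than a service name):

# look up the running GLPI pod in the jb namespace
kubectl -n jb get pods -l app=glpi

# run the installer inside the glpi container of that pod
kubectl -n jb exec -it <glpi-pod-name> -c glpi -- \
  php scripts/cliinstall.php --host=127.0.0.1 --db=glpi --user=glpi --pass=glpi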
I am trying to host my own Nextcloud server using Kubernetes.
I want my Nextcloud server to be accessed from http://localhost:32738/nextcloud but every time I access that URL, it gets redirected to http://localhost:32738/login and gives me 404 Not Found.
If I replace the path with:
path: /
then, it works without problems on http://localhost:32738/login but as I said, it is not the solution I am looking for. The login page should be accessed from http://localhost:32738/nextcloud/login.
Going to http://127.0.0.1:32738/nextcloud/ does work for the initial setup but after that it becomes inaccessible as it always redirects to:
http://127.0.0.1:32738/apps/dashboard/
and not to:
http://127.0.0.1:32738/nextcloud/apps/dashboard/
This is my yaml:
#Nextcloud-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-server-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-server-pod
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:22.2.0-apache
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_HOST
          value: nextcloud-database:5432
        volumeMounts:
        - name: server-storage
          mountPath: /var/www/html
          subPath: server-data
      volumes:
      - name: server-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Nextcloud-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-server-pod
  ports:
  - port: 80
    protocol: TCP
    name: nextcloud-server
---
#Database-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-database-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-database-pod
    spec:
      containers:
      - name: postgresql
        image: postgres:13.4
        env:
        - name: POSTGRES_DATABASE
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-rootpassword
        - name: PGDATA
          value: /var/lib/postgresql/data/
        volumeMounts:
        - name: database-storage
          mountPath: /var/lib/postgresql/data/
          subPath: data
      volumes:
      - name: database-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Database-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-database-pod
  ports:
  - port: 5432
    protocol: TCP
    name: nextcloud-database
---
#PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-pv
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
#PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
#Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: nextcloud-server
            port:
              number: 80
        pathType: Prefix
        path: /nextcloud(/.*)
---
#Secret
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud
  labels:
    app: nextcloud
immutable: true
stringData:
  db-name: nextcloud
  db-username: nextcloud
  db-password: changeme
  db-rootpassword: longpassword
  username: admin
  password: changeme
ingress-nginx was installed with:
helm install nginx ingress-nginx/ingress-nginx
Please tell me if you want me to supply more information.
In your case there is a mismatch between the URL exposed by the backend service and the path specified in the Ingress rule. That's why you get an error.
To avoid that you can use a rewrite rule.
With it, your Ingress paths will be rewritten to the value you provide.
For example, the annotation nginx.ingress.kubernetes.io/rewrite-target: /login will rewrite the URL /nextcloud/login to /login before the request is sent to the backend service.
But:
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions.
In the documentation you can find the following example:
$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - backend:
          service:
            name: http-svc
            port:
              number: 80
        path: /something(/|$)(.*)
        pathType: ImplementationSpecific
' | kubectl create -f -
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
So in the browser you would still see the wanted /nextcloud/login, but the rewrite changes the path to /login before the Ingress rule hands the request to your backend. I would suggest using one of the following options:
path: /nextcloud(/.*)
nginx.ingress.kubernetes.io/rewrite-target: /$1
or
path: /nextcloud/login
nginx.ingress.kubernetes.io/rewrite-target: /login
See also this article.
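Applied to your manifest, the documentation pattern above might look roughly like this (a sketch only; the (/|$)(.*) capture groups with /$2 follow the ingress-nginx docs example rather than your original single capture group, so treat it as something to experiment with):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /nextcloud(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: nextcloud-server
            port:
              number: 80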
I work on an open source system that is composed of a Postgres database and a Tomcat server. I have Docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with Kubernetes.
Here is my first attempt.
apiVersion: v1
kind: Pod
metadata:
  name: dspace-pod
spec:
  volumes:
  - name: "pgdata-vol"
    emptyDir: {}
  - name: "assetstore"
    emptyDir: {}
  - name: my-local-config-map
    configMap:
      name: local-config-map
  containers:
  - image: dspace/dspace:dspace-6_x
    name: dspace
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    volumeMounts:
    - mountPath: "/dspace/assetstore"
      name: "assetstore"
    - mountPath: "/dspace/config/local.cfg"
      name: "my-local-config-map"
      subPath: local.cfg
  #
  - image: dspace/dspace-postgres-pgcrypto
    name: dspacedb
    ports:
    - containerPort: 5432
      name: http
      protocol: TCP
    volumeMounts:
    - mountPath: "/pgdata"
      name: "pgdata-vol"
    env:
    - name: PGDATA
      value: /pgdata
I have a ConfigMap that sets the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspace-pod:5432/dspace
    dspace.hostname = dspace-pod
    dspace.baseUrl = http://dspace-pod:8080
    solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a third Docker image that contains the JARs that are needed on the command line.
I am interested in modeling these command-line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a Job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
      - name: "assetstore"
        emptyDir: {}
      - name: my-local-config-map
        configMap:
          name: local-config-map
      containers:
      - name: dspace-cli
        image: dspace/dspace-cli:dspace-6_x
        command: [
          "/dspace/bin/dspace",
          "create-administrator",
          "-e", "test#test.edu",
          "-f", "test",
          "-l", "admin",
          "-p", "admin",
          "-c", "en"
        ]
        volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
      restartPolicy: Never
The following configuration has allowed me to start my services (tomcat and postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  # example of a simple property defined using --from-literal
  #example.property.1: hello
  #example.property.2: world
  # example of a complex property defined using --from-file
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspacedb-service:5432/dspace
    dspace.hostname = dspace-service
    dspace.baseUrl = http://dspace-service:8080
    solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
  name: dspacedb-service
  labels:
    app: dspacedb-app
spec:
  type: NodePort
  selector:
    app: dspacedb-app
  ports:
  - protocol: TCP
    port: 5432
    # targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspacedb-deploy
  labels:
    app: dspacedb-app
spec:
  selector:
    matchLabels:
      app: dspacedb-app
  template:
    metadata:
      labels:
        app: dspacedb-app
    spec:
      volumes:
      - name: "pgdata-vol"
        emptyDir: {}
      containers:
      - image: dspace/dspace-postgres-pgcrypto
        name: dspacedb
        ports:
        - containerPort: 5432
          name: http
          protocol: TCP
        volumeMounts:
        - mountPath: "/pgdata"
          name: "pgdata-vol"
        env:
        - name: PGDATA
          value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: dspace-service
  labels:
    app: dspace-app
spec:
  type: NodePort
  selector:
    app: dspace-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspace-deploy
  labels:
    app: dspace-app
spec:
  selector:
    matchLabels:
      app: dspace-app
  template:
    metadata:
      labels:
        app: dspace-app
    spec:
      volumes:
      - name: "assetstore"
        emptyDir: {}
      - name: my-local-config-map
        configMap:
          name: local-config-map
      containers:
      - image: dspace/dspace:dspace-6_x-jdk8-test
        name: dspace
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
      - name: "assetstore"
        emptyDir: {}
      - name: my-local-config-map
        configMap:
          name: local-config-map
      containers:
      - name: dspace-cli
        image: dspace/dspace-cli:dspace-6_x
        command: [
          "/dspace/bin/dspace",
          "create-administrator",
          "-e", "test#test.edu",
          "-f", "test",
          "-l", "admin",
          "-p", "admin",
          "-c", "en"
        ]
        volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
      restartPolicy: Never
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do persisting the volumes.
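As a minimal sketch of what that next step might look like (assumptions: a hypothetical claim named pgdata-pvc, a cluster with a default StorageClass, and a made-up 5Gi size), the emptyDir volume in dspacedb-deploy could be swapped for a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc          # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # size is an assumption
---
# then, in dspacedb-deploy, replace the emptyDir volume with the claim:
#   volumes:
#   - name: "pgdata-vol"
#     persistentVolumeClaim:
#       claimName: pgdata-pvc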
My Tomcat and MySQL containers are not connecting, so how can I link them so that my WAR file can run successfully?
I built my Tomcat image using this Dockerfile:
FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        imagePullPolicy: "IfNotPresent"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: data-core
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-initdb-pv-claim
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/stackoverflow/tmp/data" # this is the path where my SQL init script is placed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
tomcat.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: tomcat
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
      - image: suji165475/vignesh:tomcatserver
        name: tomcat
        env:
        - name: DB_PORT_3306_TCP_ADDR
          value: mysql # service name of mysql
        - name: DB_ENV_MYSQL_DATABASE
          value: data-core
        - name: DB_ENV_MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: tomcat-persistent-storage
          mountPath: /var/data
      volumes:
      - name: tomcat-persistent-storage
        persistentVolumeClaim:
          claimName: tomcat-pv-claim
tomcatpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: tomcat-pv
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/app"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
I'm currently using type: NodePort for the Tomcat service. Do I have to use NodePort for MySQL as well? If so, should I give it the same nodePort or a different one?
Note: I am running all of this on a server through a PuTTY terminal.
When Kubernetes starts a Service, it injects environment variables for its host, port, etc. into the containers. Try using the environment variable MYSQL_SERVICE_HOST.
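A quick way to check what is actually injected (a sketch; the pod name is a placeholder, and the exact set of variables present depends on your Service definition):

# list the MySQL-related service variables inside the Tomcat pod
kubectl exec <tomcat-pod-name> -- printenv | grep MYSQL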
I have three different images related to my application; they work fine with docker-compose but have issues running on a Kubernetes cluster in GCP.
Below is the deployment file.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-database
    tier: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-database
        tier: database
    spec:
      hostname: mysql
      containers:
      - image: mysql/mysql-server:5.7
        name: mysql
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        ports:
        - containerPort: 5432
          name: db
---
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: dgservice
    tier: dgservice
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dgservice
        tier: dgservice
    spec:
      hostname: dgservice
      containers:
      - image: gcr.io/sample/sample-image:check_1
        name: dgservice
        ports:
        - containerPort: 8080
          name: dgservice
---
apiVersion: v1
kind: Service
metadata:
  name: dg-ui
  labels:
    name: dg-ui
spec:
  type: NodePort
  ports:
  - nodePort: 30156
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: dg-ui
    tier: dg
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-ui
  labels:
    app: dg-ui
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dg-ui
        tier: dg
    spec:
      hostname: dg-ui
      containers:
      - image: gcr.io/sample/sample:latest
        name: dg-ui
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        - name: "DG_SERVICE_HOST"
          value: "dgservice"
        ports:
        - containerPort: 8000
          name: dg-ui
The image is being pulled successfully from GCR as well.
The connection between MySQL and the UI service also works fine, and my data gets migrated without any issues. But the connection between the service and the UI is not established.
Why is the UI not able to access the service in my application?
Since your deployment's pods have the following labels, the Service's selector needs to use the same labels in order for the Endpoints object to be created.
Endpoints are the API object behind a Service; they are where a Service routes connections when a connection is made to the Service's ClusterIP.
These are the labels of your deployment's pod template:
labels:
  app: dgservice
  tier: dgservice
New Service definition with the correct selector labels:
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice
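After applying it, one way to confirm the Service now selects the pods is to check its Endpoints object (a sketch; the address shown is made up, and an empty ENDPOINTS column would mean the selector still matches nothing):

kubectl get endpoints dgservice
# NAME        ENDPOINTS         AGE
# dgservice   10.48.0.12:8080   1m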
I am assuming by "service" you are referring to your "dgservice". With the yaml presented above, I believe you also need to specify the DG_SERVICE_PORT (port 8080) to correctly access "dgservice".
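In the dg-ui container's env, that could look something like the fragment below (a sketch; whether the application actually reads DG_SERVICE_PORT is an assumption based on the existing DG_SERVICE_HOST variable):

        - name: "DG_SERVICE_PORT"
          value: "8080"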
As mentioned by Suresh in the comments, you should expose internal services using ClusterIP type. The NodePort is a superset of ClusterIP, and will expose the service internally to your cluster at service-name:port, and externally at node-ip:nodeport, targeting your deployment/pod at targetport.
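Following that advice, an internal-only version of the dgservice Service might look like this (a sketch; it simply drops the NodePort type so the Service defaults to ClusterIP and stays reachable inside the cluster as dgservice:8080):

apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  # type defaults to ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice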
Ok, following the examples and documentation on the Kubernetes website along with extensive research on Google, I still cannot get DNS resolution between the containers within my Pod.
I have a Service and a PetSet with 2 containers defined. When I deploy the PetSet and Service, they start and run successfully, but if I attempt to ping the host of one of my containers from the other by hostname or by the full domain name I get destination unreachable. I can ping by IP address though.
Here is my Kubernetes configuration file:
apiVersion: v1
kind: Service
metadata:
  name: ml-service
  labels:
    app: marklogic
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  #restartPolicy: OnFailure
  clusterIP: None
  selector:
    app: marklogic
  ports:
    - protocol: TCP
      port: 7997
      #nodePort: 31997
      name: ml7997
    - protocol: TCP
      port: 8000
      #nodePort: 32000
      name: ml8000
    # ... More ports defined
  #type: NodePort
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: 'marklogic'
          image: "{local docker registry ip}:5000/dcgs-sof/ml8-docker-final:v1"
          imagePullPolicy: Always
          command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
          ports:
            - containerPort: 7997
              name: ml7997
            - containerPort: 8000
              name: ml8000
            - containerPort: 8001
              name: ml8001
            - containerPort: 8002
              name: ml8002
            - containerPort: 8040
              name: ml8040
            - containerPort: 8041
              name: ml8041
            - containerPort: 8042
              name: ml8042
            - containerPort: 8050
              name: ml8050
            - containerPort: 8051
              name: ml8051
            - containerPort: 8060
              name: ml8060
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          lifecycle:
            preStop:
              exec:
                command: ["/etc/init.d/MarkLogic stop"]
          volumeMounts:
            - name: ml-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
I commented out the type: NodePort definition as I thought that might be the culprit, but still no success.
Additionally, if I run docker@minikube:/$ docker exec b4d21c4bc065 /bin/bash -c 'nslookup marklogic-1.marklogic.default.svc.cluster.local' it cannot resolve the name.
What am I missing???
You are resolving the wrong domain name.
See http://kubernetes.io/docs/user-guide/petset/#network-identity
You should try to resolve:
marklogic-0.ml-service.default.svc.cluster.local
If everything is within the default namespace, the DNS name is:
<pod_name>.<svc_name>.default.svc.cluster.local
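To check this from inside one of the pods, something like the following should work (a sketch; it assumes nslookup is available in the MarkLogic image, as in your earlier docker exec attempt):

# from the marklogic-0 pod, resolve its peer through the headless service
kubectl exec marklogic-0 -- nslookup marklogic-1.ml-service.default.svc.cluster.local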