OK, following the examples and documentation on the Kubernetes website, along with extensive research on Google, I still cannot get DNS resolution working between the containers within my Pod.
I have a Service and a PetSet with 2 containers defined. When I deploy the PetSet and Service, they start and run successfully, but if I attempt to ping the host of one of my containers from the other by hostname or by the full domain name, I get destination unreachable. I can ping by IP address, though.
Here is my Kubernetes configuration file:
apiVersion: v1
kind: Service
metadata:
  name: ml-service
  labels:
    app: marklogic
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  #restartPolicy: OnFailure
  clusterIP: None
  selector:
    app: marklogic
  ports:
    - protocol: TCP
      port: 7997
      #nodePort: 31997
      name: ml7997
    - protocol: TCP
      port: 8000
      #nodePort: 32000
      name: ml8000
    # ... More ports defined
  #type: NodePort
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: 'marklogic'
          image: "{local docker registry ip}:5000/dcgs-sof/ml8-docker-final:v1"
          imagePullPolicy: Always
          command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
          ports:
            - containerPort: 7997
              name: ml7997
            - containerPort: 8000
              name: ml8000
            - containerPort: 8001
              name: ml8001
            - containerPort: 8002
              name: ml8002
            - containerPort: 8040
              name: ml8040
            - containerPort: 8041
              name: ml8041
            - containerPort: 8042
              name: ml8042
            - containerPort: 8050
              name: ml8050
            - containerPort: 8051
              name: ml8051
            - containerPort: 8060
              name: ml8060
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          lifecycle:
            preStop:
              exec:
                command: ["/etc/init.d/MarkLogic", "stop"]
          volumeMounts:
            - name: ml-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: ml-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
I commented out the type: NodePort definition as I thought that might be the culprit, but still no success.
Additionally, if I run docker@minikube:/$ docker exec b4d21c4bc065 /bin/bash -c 'nslookup marklogic-1.marklogic.default.svc.cluster.local' it cannot resolve the name.
What am I missing?
You are resolving the wrong domain name.
See http://kubernetes.io/docs/user-guide/petset/#network-identity
You should try to resolve:
marklogic-0.ml-service.default.svc.cluster.local
If everything is within the default namespace, the DNS name is:
<pod_name>.<svc_name>.default.svc.cluster.local
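A quick way to check this from inside the cluster (assuming nslookup is available in the image, as in your docker exec test, and that kube-dns is running; the pod names follow the marklogic-N pattern your PetSet creates):

# Resolve one pet's stable DNS name from the other
kubectl exec marklogic-0 -- nslookup marklogic-1.ml-service.default.svc.cluster.local

# The headless Service name itself should return the IPs of both pets
kubectl exec marklogic-0 -- nslookup ml-service.default.svc.cluster.local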
Related
I'm trying to deploy Elasticsearch on Kubernetes. When I log in to the Docker container with
kubectl exec -c elasticsearch -ti elasticsearch-0 -- bash and run curl localhost:9200, I get the expected response from Elasticsearch:
{
  "name" : "elasticsearch-0",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "oUXVt_Z4ROG89ACF1jdTXw",
  "version" : {
    "number" : "7.17.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "5ad023604c8d7416c9eb6c0eadb62b14e766caff",
    "build_date" : "2022-04-19T08:11:19.070913226Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
but when I try to curl it from elsewhere on my cluster, or via the exposed ingress IP address, I get connection refused. I'm new to Kubernetes, so to check whether I understood the setup correctly I put an nginx container in the same pod and ran it on port 8080. When I curl nginx from the external IP I get a successful response, so I'm not sure where I went wrong, because the two setups look the same to me. Here's my Kubernetes manifest (an example of the kind of in-cluster test I mean is sketched after the manifest):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      imagePullSecrets:
        - name: acr-secret
      nodeSelector:
        mylabel: hihi
      terminationGracePeriodSeconds: 30
      initContainers:
        - name: init-service
          image: ubuntu
          command: [ 'sysctl', '-w', 'vm.max_map_count=262144' ]
          securityContext:
            privileged: true
        - name: init-permissions
          image: ubuntu
          command: [ 'chown', '-R', '1000:1000', '/usr/share/elasticsearch/data' ]
          securityContext:
            privileged: true
          volumeMounts:
            - name: esdata
              mountPath: /usr/share/elasticsearch/data
      containers:
        - name: nginx
          image: jamesesdocker.azurecr.io/nginxjames/latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
        - name: elasticsearch
          image: jamesesdocker.azurecr.io/es717/latest
          imagePullPolicy: Always
          ports:
            - containerPort: 9200
              name: es1
            - containerPort: 9300
              name: es2
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
            - name: esdata
              mountPath: /usr/share/elasticsearch/data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
    - metadata:
        name: esdata
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: elasticsearch
        resources:
          requests:
            storage: 2Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: elasticsearch
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: managed
  resourceGroup: james-elasticsearch
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - protocol: TCP
      port: 9200
      name: elasticsearch
    - protocol: TCP
      port: 8080
      name: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /es
            pathType: Prefix
            backend:
              service:
                name: elasticsearch
                port:
                  number: 9200
          - path: /nginx
            pathType: Prefix
            backend:
              service:
                name: elasticsearch
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
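For reference, the kind of in-cluster test I mean looks like this (curlimages/curl is just a throwaway client image I picked for the test; the service name and port come from the manifest above):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://elasticsearch:9200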
I have a clean Ubuntu 18.04 server with minikube, kubectl, and Docker installed.
I have defined several resources on it.
One deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express-deployment
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-db-secret
                  key: mongo-db-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-db-secret
                  key: mongo-db-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongo-db-configmap
                  key: mongo-db-url
One internal Service, because I tried to connect through an Ingress:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
One Ingress for it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
    - host: my-host.com
      http:
        paths:
          - path: "/"
            pathType: "Prefix"
            backend:
              service:
                name: mongo-express-service
                port:
                  number: 8081
And one external Service, because I also tried to connect through it directly:
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-external-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
But none of these options works for me. I tried updating the hosts file and adding
192.168.47.2 my-host.com
but that didn't help either.
When I run curl my-host.com in the server's terminal I get the correct response, but I can't reach it from my browser.
My domain points to this server, and when I use plain nginx (without Kubernetes) everything works fine.
Do I need to add something else or update my config?
I hope you can help me.
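For completeness, this is how the addresses involved can be checked (standard minikube/kubectl commands; the resource names match the manifests above):

# IP of the minikube node; NodePort and Ingress traffic goes to this address
minikube ip

# ADDRESS column shows where the ingress controller exposes dashboard-ingress
kubectl get ingress dashboard-ingress

# EXTERNAL-IP / NodePort of the LoadBalancer service
kubectl get service mongo-express-external-service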
I have a Docker Enterprise Kubernetes bare-metal cluster running on CentOS 8, and I am following the official docs to install the NGINX Ingress Controller using manifest files from Git: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
The pod seems to be running:
kubectl -n nginx-ingress describe pod nginx-ingress-fzr2j
Name: nginx-ingress-fzr2j
Namespace: nginx-ingress
Priority: 0
Node: server.example.com/172.16.1.180
Start Time: Sun, 16 Aug 2020 16:48:49 -0400
Labels: app=nginx-ingress
controller-revision-hash=85879fb7bc
pod-template-generation=2
Annotations: kubernetes.io/psp: privileged
Status: Running
IP: 192.168.225.27
IPs:
IP: 192.168.225.27
But my issue is the IP address it has selected: 192.168.225.27. This is a second network on this server. How do I tell nginx to use the 172.16.1.180 address that it has in the Node: part?
The DaemonSet config is:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
        #prometheus.io/scrape: "true"
        #prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
        - image: nginx/nginx-ingress:edge
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
            - name: readiness-port
              containerPort: 8081
            #- name: prometheus
            #  containerPort: 9113
          readinessProbe:
            httpGet:
              path: /nginx-ready
              port: readiness-port
            periodSeconds: 1
          securityContext:
            allowPrivilegeEscalation: true
            runAsUser: 101 #nginx
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          args:
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
I can't see any configuration option for which IP address to bind to.
The thing you are likely looking for is hostNetwork: true, which:
Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false
spec:
  template:
    spec:
      hostNetwork: true
      containers:
        - image: nginx/nginx-ingress:edge
          name: nginx-ingress
You would only then need to specify a bind address if it bothered you having the Ingress controller bound to all addresses on the host. If that's still a requirement, you can have the Node's IP injected via the valueFrom: mechanism:
...
containers:
  - env:
      - name: MY_NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
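If a component does need the address passed in explicitly, the injected variable can be referenced in the container args via Kubernetes' $(VAR) expansion. A sketch only; the flag name below is a hypothetical placeholder, not a real nginx-ingress option:

containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:edge
    env:
      - name: MY_NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
    args:
      # $(MY_NODE_IP) is expanded by Kubernetes from the env var above;
      # -some-bind-address-flag stands in for whatever option actually consumes it
      - -some-bind-address-flag=$(MY_NODE_IP)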
I work on an open-source system that is composed of a Postgres database and a Tomcat server. I have Docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with Kubernetes.
Here is my first attempt.
apiVersion: v1
kind: Pod
metadata:
  name: dspace-pod
spec:
  volumes:
    - name: "pgdata-vol"
      emptyDir: {}
    - name: "assetstore"
      emptyDir: {}
    - name: my-local-config-map
      configMap:
        name: local-config-map
  containers:
    - image: dspace/dspace:dspace-6_x
      name: dspace
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/dspace/assetstore"
          name: "assetstore"
        - mountPath: "/dspace/config/local.cfg"
          name: "my-local-config-map"
          subPath: local.cfg
    #
    - image: dspace/dspace-postgres-pgcrypto
      name: dspacedb
      ports:
        - containerPort: 5432
          name: http
          protocol: TCP
      volumeMounts:
        - mountPath: "/pgdata"
          name: "pgdata-vol"
      env:
        - name: PGDATA
          value: /pgdata
I have a configMap that is setting the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspace-pod:5432/dspace
    dspace.hostname = dspace-pod
    dspace.baseUrl = http://dspace-pod:8080
    solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a third Docker image that contains the JARs needed on the command line.
I am interested in modeling these command-line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a Job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test@test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
The following configuration has allowed me to start my services (tomcat and postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T19:14:38Z
  name: local-config-map
  namespace: default
data:
  # example of a simple property defined using --from-literal
  #example.property.1: hello
  #example.property.2: world
  # example of a complex property defined using --from-file
  local.cfg: |-
    dspace.dir = /dspace
    db.url = jdbc:postgresql://dspacedb-service:5432/dspace
    dspace.hostname = dspace-service
    dspace.baseUrl = http://dspace-service:8080
    solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
  name: dspacedb-service
  labels:
    app: dspacedb-app
spec:
  type: NodePort
  selector:
    app: dspacedb-app
  ports:
    - protocol: TCP
      port: 5432
      # targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspacedb-deploy
  labels:
    app: dspacedb-app
spec:
  selector:
    matchLabels:
      app: dspacedb-app
  template:
    metadata:
      labels:
        app: dspacedb-app
    spec:
      volumes:
        - name: "pgdata-vol"
          emptyDir: {}
      containers:
        - image: dspace/dspace-postgres-pgcrypto
          name: dspacedb
          ports:
            - containerPort: 5432
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/pgdata"
              name: "pgdata-vol"
          env:
            - name: PGDATA
              value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: dspace-service
  labels:
    app: dspace-app
spec:
  type: NodePort
  selector:
    app: dspace-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dspace-deploy
  labels:
    app: dspace-app
spec:
  selector:
    matchLabels:
      app: dspace-app
  template:
    metadata:
      labels:
        app: dspace-app
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - image: dspace/dspace:dspace-6_x-jdk8-test
          name: dspace
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
  name: dspace-create-admin
spec:
  template:
    spec:
      volumes:
        - name: "assetstore"
          emptyDir: {}
        - name: my-local-config-map
          configMap:
            name: local-config-map
      containers:
        - name: dspace-cli
          image: dspace/dspace-cli:dspace-6_x
          command: [
            "/dspace/bin/dspace",
            "create-administrator",
            "-e", "test@test.edu",
            "-f", "test",
            "-l", "admin",
            "-p", "admin",
            "-c", "en"
          ]
          volumeMounts:
            - mountPath: "/dspace/assetstore"
              name: "assetstore"
            - mountPath: "/dspace/config/local.cfg"
              name: "my-local-config-map"
              subPath: local.cfg
      restartPolicy: Never
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do persisting the volumes.
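A minimal sketch of that next step, assuming the cluster has a StorageClass that can provision volumes (the claim name and size here are placeholders): define a PersistentVolumeClaim and swap it in for the emptyDir.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
---
# In dspacedb-deploy, the emptyDir volume would then become:
# volumes:
#   - name: "pgdata-vol"
#     persistentVolumeClaim:
#       claimName: pgdata-pvc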
I'm trying to deploy the GLPI application (http://glpi-project.org/) on my Kubernetes cluster, but I've run into an issue.
Here is my deployment code:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-glpi
  labels:
    type: openebs
spec:
  storageClassName: openebs-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glpi
  namespace: jb
  labels:
    app: glpi
spec:
  selector:
    matchLabels:
      app: glpi
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: glpi
    spec:
      volumes:
        - name: pv-storage-glpi
          persistentVolumeClaim:
            claimName: pv-claim-glpi
      containers:
        - name: mariadb
          image: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "glpi"
            - name: MYSQL_DATABASE
              value: "glpi"
            - name: MYSQL_USER
              value: "glpi"
            - name: MYSQL_PASSWORD
              value: "glpi"
            - name: GLPI_SOURCE_URL
              value: "https://forge.glpi-project.org/attachments/download/2020/glpi-0.85.4.tar.gz"
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - mountPath: /var/lib/mariadb/
              name: pv-storage-glpi
              subPath: mariadb
        - name: glpi
          image: driket54/glpi
          ports:
            - containerPort: 80
              name: http
            - containerPort: 8090
              name: https
          volumeMounts:
            - mountPath: /var/glpidata
              name: pv-storage-glpi
              subPath: glpidata
---
apiVersion: v1
kind: Service
metadata:
  name: glpi
  namespace: jb
spec:
  selector:
    app: glpi
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: http
      name: http
    - protocol: "TCP"
      port: 8090
      targetPort: https
      name: https
    - protocol: "TCP"
      port: 3306
      targetPort: mariadb
      name: mariadb
  type: NodePort
---
The Docker image is deployed properly, but in my test phase, during the setup of the app, I get the following error while setting up the database (MySQL).
I've already checked the credentials (host, username, password) and they are correct.
Please help.
Not really an answer, since I don't have the Kubernetes knowledge expected, but I can't add a comment yet :(
What you should alter first is your GLPi version.
Use this link; it's the latest one:
https://github.com/glpi-project/glpi/releases/download/9.3.0/glpi-9.3.tgz
Then you can use the CLI tools to set up the database:
https://glpi-install.readthedocs.io/en/latest/command-line.html
Using what I get from your file:
php scripts/cliinstall.php --host=mariadb --db=glpi --user=glpi --pass=glpi
(not sure about the host value in your environment, but you get the idea)
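In the Kubernetes setup above, that command would have to run inside the glpi container. A sketch only; it assumes the driket54/glpi image ships GLPi's scripts/ directory in its working directory, and since both containers share the pod's network namespace, the database should be reachable on 127.0.0.1:

kubectl exec -n jb deploy/glpi -c glpi -- \
  php scripts/cliinstall.php --host=127.0.0.1 --db=glpi --user=glpi --pass=glpi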