How to resolve hostnames between pods in a Kubernetes cluster? - docker

I am creating two pods with a custom Docker image (Ubuntu is the base image). I am trying to ping one pod from the other's terminal. I can reach it using the IP address but not the hostname. How can I achieve this without manually adding entries to /etc/hosts in the pods?
Note: I am not running any Services in front of the pods. I am basically trying to set up Slurm with this.
Pod Manifest File:
apiVersion: v1
kind: Pod
metadata:
  name: slurmctld
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: slurmctld
  containers:
  - name: slurmctld
    image: slurmcontroller
    imagePullPolicy: Always
    ports:
    - containerPort: 6817
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
---
apiVersion: v1
kind: Pod
metadata:
  name: worker1
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: worker1
  containers:
  - name: worker1
    image: slurmworker
    imagePullPolicy: Always
    ports:
    - containerPort: 6818
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]

From the docs here
In general a pod has the following DNS resolution:
pod-ip-address.my-namespace.pod.cluster-domain.example.
For example, if a pod in the default namespace has the IP address
172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:
172-17-0-3.default.pod.cluster.local.
Any pods created by a Deployment or DaemonSet exposed by a Service
have the following DNS resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example
If you don't want to deal with the ever-changing IP of a pod, you need to create a Service to expose the pods via DNS hostnames. Below is an example of a Service to expose the slurmctld pod.
apiVersion: v1
kind: Service
metadata:
  name: slurmctld-service
spec:
  selector:
    app: slurm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 6817
Assuming you are doing this in the default namespace, you should now be able to access it via slurmctld-service.default.svc.cluster.local

You can also use hostname -i, which in all k8s installs I've tested resolves to the pod's IP address.
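For a quick check, here is a minimal sketch assuming the Service above has been applied in the default namespace and that the Ubuntu-based image provides bash and getent:
# Open a shell in the worker pod
kubectl exec -it worker1 -- /bin/bash
# Inside the pod: resolve the Service DNS name, then print this pod's own IP
getent hosts slurmctld-service.default.svc.cluster.local
hostname -i
Note that the selector app: slurm matches both pods in the manifests above, so traffic sent to the Service may land on either pod; a distinct label per pod (or one Service per pod) keeps the names unambiguous.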

Related

Knative service deployment fails with reason RevisionMissing

I have deployed a service on Knative. I iterated on the service code/Docker image and tried to redeploy it at the same address. I proceeded as follows:
Pushed the new Docker image to our private Docker repo
Updated the service YAML file to point to the new Docker image (see YAML below)
Deleted the service with the command: kubectl -n myspacename delete -f myservicename.yaml
Recreated the service with the command: kubectl -n myspacename apply -f myservicename.yaml
During the deployment, the service shows READY = Unknown and REASON = RevisionMissing, and after a while, READY = False and REASON = ProgressDeadlineExceeded. When looking at the logs of the pod with the following command kubectl -n myspacename logs revision.serving.knative.dev/myservicename-00001, I get the message:
no kind "Revision" is registered for version "serving.knative.dev/v1" in scheme "pkg/scheme/scheme.go:28"
Here is the YAML file of the service:
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myservicename
  namespace: myspacename
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        autoscaling.knative.dev/target: '1'
        autoscaling.knative.dev/minScale: '0'
        autoscaling.knative.dev/maxScale: '5'
        autoscaling.knative.dev/scaleDownDelay: 60s
        autoscaling.knative.dev/window: 600s
    spec:
      tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
      volumes:
      - name: nfs-volume
        persistentVolumeClaim:
          claimName: myspacename-models-pvc
      imagePullSecrets:
      - name: myrobotaccount-pull-secret
      containers:
      - name: myservicename
        image: quay.company.com/project/myservicename:0.4.0
        ports:
        - containerPort: 5000
          name: user-port
          protocol: TCP
        resources:
          limits:
            cpu: "4"
            memory: 36Gi
            nvidia.com/gpu: 1
          requests:
            cpu: "2"
            memory: 32Gi
        volumeMounts:
        - name: nfs-volume
          mountPath: /tmp/static/
        securityContext:
          privileged: true
        env:
        - name: CLOUD_STORAGE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myservicename-cloud-storage-password
              key: key
        envFrom:
        - configMapRef:
            name: myservicename-config
The procedure I followed above is correct; the problem was a bug in the code of the Docker image that Knative is serving. I was able to troubleshoot the issue by looking at the logs of the pods as follows:
First run the following command to get the pod name: kubectl -n myspacename get pods. Example of pod name = myservicename-00001-deployment-56595b764f-dl7x6
Then get the logs of the pod with the following command: kubectl -n myspacename logs myservicename-00001-deployment-56595b764f-dl7x6
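As an additional check, the objects behind the missing revision can be inspected directly; this is a sketch using the names from the YAML above and assumes Knative Serving's CRDs are installed (they register the ksvc, configuration and revision resource types):
# List the Knative Service, Configuration and Revisions in the namespace
kubectl -n myspacename get ksvc,configuration,revision
# Show the conditions explaining why the revision is not ready
kubectl -n myspacename describe revision myservicename-00001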

Configuring a Rails application in Kubernetes

I am configuring a Rails application in Kubernetes. I am using Redis, Sidekiq and a Postgres DB. Below is the YAML I am using.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: dev-app
  name: test-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: Dev-app
    spec:
      nodeSelector:
        cloud.io/sec-zone-green: "true"
      containers:
      - name: dev-application
        image: hub.docker.net/appautomation/dev.app.1.0:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo test; sleep 20;done"]
        resources:
          limits:
            memory: 8Gi
            cpu: 5
          requests:
            memory: 8Gi
            cpu: 5
        ports:
        - containerPort: 3000
      - name: dev-app-nginx
        image: hub.docker.net/appautomation/dev.nginx.1.0:latest
        resources:
          limits:
            memory: 4Gi
            cpu: 4
          requests:
            memory: 4Gi
            cpu: 4
        ports:
        - containerPort: 80
      - name: dev-app-redis
        image: hub.docker.net/appautomation/dev.redis.1.0:latest
        resources:
          limits:
            memory: 4Gi
            cpu: 4
          requests:
            memory: 4Gi
            cpu: 4
        ports:
        - containerPort: 6379
In kubectl I am not seeing any error, but when I try to fetch logs from the pod I get the output below. I can see three containers are built inside the pod. I exec'd into my dev-application container and ran rails s to check whether the server is running, but I get "/usr/local/bundle/gems/redis-3.3.5/lib/redis/connection/ruby.rb:229:in `getaddrinfo': getaddrinfo: Name or service not known (SocketError)". How do I check that my application is linked with Redis and Nginx? Is my YAML configuration correct, or do I need something like depends_on in my YAML file?
kubectl get pods
NAME READY STATUS RESTARTS AGE
dev-database-57b6ff5997-mgdhm 1/1 Running 0 11d
test-deployment-5f59864c8b-4t5b7 3/3 Running 0 8m44s
kubectl logs test-deployment-5f59864c8b-4t5b7
error: a container name must be specified for pod test-deployment-5f59864c8b-4t5b7, choose one of: [dev-application dev-app-nginx dev-app-redis]
Service YAML file
apiVersion: v1
kind: Service
metadata:
  namespace: Dev-app
  name: test-deployment
spec:
  selector:
    app: Dev-app
  ports:
  - name: Dev-application
    protocol: TCP
    port: 3001
    targetPort: 3000
  - name: redis
    port: 6379
    targetPort: 6379
You are not running the containers the right way. Ideally a Pod should run a single application; only when an application genuinely requires multiple containers should you put them together inside a single Pod or Deployment.
You should be deploying a single container per Pod or Deployment instead of three in one.
For the logs issue: if you run kubectl logs without specifying a container, you get the error you saw:
kubectl logs test-deployment-5f59864c8b-4t5b7
error: a container name must be specified for pod test-deployment-5f59864c8b-4t5b7, choose one of: [dev-application dev-app-nginx dev-app-redis]
The -c flag is used to check the logs of a specific container:
kubectl logs test-deployment-5f59864c8b-4t5b7 -c <any one name dev-application dev-app-nginx dev-app-redis>
Ideally, in a distributed setup you run Redis as a standalone Pod or Deployment so that all services can use it. Here you are running Redis alongside your application in the same Pod, so their lifecycles are coupled: if the Redis container crashes, Kubernetes restarts it right next to your application, and if the application container crashes it is likewise restarted in place, since Kubernetes automatically restarts failing containers inside the Pod.
I am getting "/usr/local/bundle/gems/redis-3.3.5/lib/redis/connection/ruby.rb:229:in `getaddrinfo': getaddrinfo: Name or service not known (SocketError)"
If you are getting this error, check that you have set the proper Redis host in the application code. If Redis, Nginx and the application are all running in a single Pod, the containers can reach each other over localhost, so Redis will be available to the application at localhost:6379.
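For example, a minimal sketch of pointing the Rails app at the in-Pod Redis; REDIS_URL is only an illustrative variable name, use whatever your application or Sidekiq configuration actually reads:
# Added under the dev-application container in the Deployment above
env:
- name: REDIS_URL               # hypothetical name; adjust to your app's config
  value: "redis://localhost:6379/0"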
If you want to debug further, try using the exec command to get a shell inside the pod:
kubectl exec -it test-deployment-5f59864c8b-4t5b7 -c dev-application -- /bin/bash
This way you will be inside the container and can test the connection to Redis using the CLI.
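For example, to check that the Redis container itself is up and answering (this assumes the Redis image ships redis-cli, which the manifest above does not guarantee):
kubectl exec -it test-deployment-5f59864c8b-4t5b7 -c dev-app-redis -- redis-cli -h localhost -p 6379 ping
# Expected reply: PONG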
Update:
Redis deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  ports:
  - port: 6379
    name: redis
  selector:
    app: redis
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redislabs/rejson
        args: ["--appendonly", "no", "--loadmodule"]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-volume
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
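Once Redis runs behind its own Service like this, the Rails Pod connects to it via the Service DNS name instead of localhost; again, REDIS_URL is only an illustrative variable name:
env:
- name: REDIS_URL               # "redis" resolves to the Service above when both are in the same namespace
  value: "redis://redis:6379/0"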

single service with multiple exposed ports on a pod with multiple containers

I have gotten multiple containers to work in the same pod.
kubectl apply -f myymlpod.yml
kubectl expose pod mypod --name=myname-pod --port 8855 --type=NodePort
then I was able to test the "expose"
minikube service list
..
|-------------|-------------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|-------------------------|-----------------------------|
| default | kubernetes | No node port |
| default | myname-pod | http://192.168.99.100:30036 |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | No node port |
|-------------|-------------------------|-----------------------------|
Now, my myymlpod.yml has multiple containers in it.
One container has a service running on 8855, and one on 8877.
The article below hints at what I need to do:
https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/
Exposing multiple containers in a Pod
While this example shows how to
use a single container to access other containers in the pod, it’s
quite common for several containers in a Pod to listen on different
ports — all of which need to be exposed. To make this happen, you can
either create a single service with multiple exposed ports, or you can
create a single service for every port you’re trying to expose.
"create a single service with multiple exposed ports"
I cannot find anything on how to actually do this, expose multiple ports.
How does one expose multiple ports on a single service?
Thank you.
APPEND:
K8Containers.yml (below)
apiVersion: v1
kind: Pod
metadata:
  name: mypodkindmetadataname
  labels:
    example: mylabelname
spec:
  containers:
  - name: containername-springbootfrontend
    image: mydocker.com/webfrontendspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "612Mi"
        cpu: "400m"
    ports:
    - containerPort: 8877
  - name: containername-businessservicesspringboot
    image: mydocker.com/businessservicesspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "613Mi"
        cpu: "400m"
    ports:
    - containerPort: 8855
kubectl apply -f K8containers.yml
pod "mypodkindmetadataname" created
kubectl get pods
NAME READY STATUS RESTARTS AGE
mypodkindmetadataname 2/2 Running 0 11s
k8services.yml (below)
apiVersion: v1
kind: Service
metadata:
  name: myymlservice
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
  - name: myrestservice-servicekind-port-name
    port: 8857
    targetPort: 8855
  - name: myfrontend-servicekind-port-name
    port: 8879
    targetPort: 8877
  selector:
    name: mypodkindmetadataname
........
kubectl apply -f K8services.yml
service "myymlservice" created
........
minikube service myymlservice --url
http://192.168.99.100:30784
http://192.168.99.100:31751
........
kubectl describe service myymlservice
Name: myymlservice
Namespace: default
Labels: name=myservicemetadatalabel
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"myservicemetadatalabel"},"name":"myymlservice","namespace":"default"...
Selector: name=mypodkindmetadataname
Type: NodePort
IP: 10.107.75.205
Port: myrestservice-servicekind-port-name 8857/TCP
TargetPort: 8855/TCP
NodePort: myrestservice-servicekind-port-name 30784/TCP
Endpoints: <none>
Port: myfrontend-servicekind-port-name 8879/TCP
TargetPort: 8877/TCP
NodePort: myfrontend-servicekind-port-name 31751/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
....
Unfortunately, it is still not working when I try to invoke the "exposed" items.
calling
http://192.168.99.100:30784/myrestmethod
does not work
and calling
http://192.168.99.100:31751
or
http://192.168.99.100:31751/index.html
does not work
Does anyone see what I'm missing?
APPEND (working now)
The selector does not match on "name", it matches on label(s).
k8containers.yml (partial at the top)
apiVersion: v1
kind: Pod
metadata:
  name: mypodkindmetadataname
  labels:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
spec:
  containers:
  # Main application container
  - name: containername-springbootfrontend
    image: mydocker.com/webfrontendspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "612Mi"
        cpu: "400m"
    ports:
    - containerPort: 8877
  - name: containername-businessservicesspringboot
    image: mydocker.com/businessservicesspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "613Mi"
        cpu: "400m"
    ports:
    - containerPort: 8855
k8services.yml
apiVersion: v1
kind: Service
metadata:
  name: myymlservice
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
  - name: myrestservice-servicekind-port-name
    port: 8857
    targetPort: 8855
  - name: myfrontend-servicekind-port-name
    port: 8879
    targetPort: 8877
  selector:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
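A quick way to confirm the label fix took effect is to check that the Service now has endpoints (they showed as <none> in the describe output above):
kubectl get endpoints myymlservice
# With matching labels, the pod IP should be listed here with ports 8855 and 8877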
Yes, you can create one single Service with multiple ports open, each Service port pointing to a container port.
kind: Service
apiVersion: v1
metadata:
  name: mymlservice
spec:
  selector:
    app: mymlapp
  ports:
  - name: servicename-1
    port: 4444
    targetPort: 8855
  - name: servicename-2
    port: 80
    targetPort: 8877
where the target ports point to your container ports.
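As a usage note, both ports are then reachable through the single Service from inside the cluster (curl being available in the client pod is an assumption):
curl http://mymlservice:4444/   # forwarded to containerPort 8855
curl http://mymlservice:80/     # forwarded to containerPort 8877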

Kubectl fail after a long time

I have set up a little cluster (one physical machine as the master and two VMs as the nodes). Now I have created an NFS directory to share as a persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs  # reference name
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.57.1
    path: "/mnt/shardisk"
and a claim that uses it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
and finally a stupid pod to use it:
kind: Pod
apiVersion: v1
metadata:
  name: nginx-nfs
spec:
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
Now I have created the cluster from the physical machine and joined it from the VMs. I used Calico for networking (because Flannel fails to start; if someone knows why, it would be wonderful to solve that too).
Now if I run kubectl describe pod everything looks fine, and the same goes for kubectl logs nginx-nfs, but if I try kubectl exec -it nginx-nfs /bin/bash,
everything freezes for a very long time and after that I get this:
Error from server: error dialing backend: dial tcp 10.0.2.15:10250: getsockopt: connection timed out
I have "solved" it: I was running Kubernetes across two different LANs, so the admin.conf had an IP that did not match the current one and it would not work. I solved it by creating the VMs internal to the host and NATing a static IP to them.
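A hedged sketch of how such a mismatch can be diagnosed and pinned down on a kubeadm cluster; the file path and IP below are assumptions for a Debian/Ubuntu node on a VirtualBox host-only network (10.0.2.15 in the error is the default VirtualBox NAT address):
# Compare the node addresses the API server dials with the ones you expect
kubectl get nodes -o wide
# Pin the kubelet's advertised IP to the reachable interface (placeholder IP)
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.57.10' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet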

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        ports:
        - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding this under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
      - name: secrets-volume
        secret:
          secretName: mailhog-login
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        resources:
          limits:
            cpu: 70m
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
        volumeMounts:
        - name: secrets-volume
          mountPath: /data/mailhog
          readOnly: true
        ports:
        - containerPort: 8025
        - containerPort: 1025
        args:
        - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of Docker's ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
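A rough sketch of that mapping, reusing the image from the deployment above (comments mark what each field overrides):
# Docker ENTRYPOINT  ->  Kubernetes command
# Docker CMD         ->  Kubernetes args
containers:
- name: mailhog
  image: us.gcr.io/com/mailhog:1.0.0
  # command is left unset, so the image's ENTRYPOINT is kept;
  # args replaces the image's CMD with the extra flag:
  args: ["-auth-file=/data/mailhog/auth.file"]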
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
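If the Dockerfile is not at hand, the image's ENTRYPOINT and CMD can also be read back from the image itself, for example:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mailhog/mailhog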
I needed something similar (my aim was to pass an application profile to the app), and what I did was the following:
Set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
- name: PROFILE
  value: "dev"
Use this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
