How to edit all the deployments of Kubernetes at a time - docker

We have hundreds of deployments, and in the config most of them have imagePullPolicy set to "IfNotPresent" while a few are set to "Always". Now I want to modify all deployments that have "IfNotPresent" to "Always".
How can we achieve this at a stroke?
Ex:
kubectl get deployment <deployment-name> -n test -o json | jq '.spec.template.spec.containers[0].imagePullPolicy="Always"' | kubectl -n test replace -f -
The above command helps to reset it for one particular deployment.

Kubernetes doesn't natively offer mass update capabilities. For that you'd have to use other CLI tools. That being said, for modifying existing resources, you can also use the kubectl patch function.
The script below isn't pretty, but will update all deployments in the namespace.
kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
Note: I used sed to strip the resource type from the name as kubectl doesn't recognize operations performed on resources of type deployment.extensions (and probably others).
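If you only want to touch the deployments whose current policy is IfNotPresent, a rough variation of the same idea (assuming jq is installed) filters on that field first; names and namespace here mirror the question:
kubectl get deployments -n test -o json \
  | jq -r '.items[] | select(.spec.template.spec.containers[0].imagePullPolicy == "IfNotPresent") | .metadata.name' \
  | xargs -I {} kubectl -n test patch deployment {} --type=json \
      -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
In practice, patching every deployment as above is harmless anyway, since anything already set to Always simply stays Always.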

Related

Pass docker host ip as env var into devcontainer

I am trying to pass an environment variable into my devcontainer that is the output of a command run on my dev machine. I have tried the following in my devcontainer.json with no luck:
"initializeCommand": "export DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"",
"containerEnv": {
"DOCKER_HOST_IP1": "${localEnv:DOCKER_HOST_IP}",
"DOCKER_HOST_IP2": "${containerEnv:DOCKER_HOST_IP}"
},
and
"runArgs": [
"-e DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"
],
(the point of the ifconfig/grep piped command is to provide me with the IP of my docker host which is running via Docker for Desktop (Mac))
Some more context
Within my devcontainer I am running some kubectl deployments (to a cluster running on Docker for Desktop) where I would like to configure a hostAlias for a pod (docs) such that the pod will direct requests to https://api.cancourier.local to the IP of the Docker host (which would then hit an ingress I have configured for that CNAME).
I could just pass in the output of the ifconfig command to my kubectl command when running from within the devcontainer. The problem is that I get two different results from this depending on whether I am running it on my host (10.0.0.89) or from within the devcontainer (10.1.0.1). 10.0.0.89 in this case is the "correct" IP as if I curl this from within my devcontainer, or my deployed pod, I get the response I'd expect from my ingress.
I'm also aware that I could just use the name of my k8s service (in this case api) to communicate between pods, but this isn't ideal. As for why, I'm running a Next.js application in a pod. The Next.js app on this pod has two "contexts":
my browser - the app serves up static HTML/JS to my browser where communicating with https://api.cancourier.local works fine
on the pod itself - running some things (i.e. _middleware) on the pod itself, where the pod does not currently know what https://api.cancourier.local is
What I was doing to temporarily get around this was to have a separate config on the pod: one for the "browser context" and the other for things running on the pod itself. This is less than ideal, because once I deploy this Next.js app (to Vercel) it won't be an issue (my API will be deployed on some publicly accessible CNAME). If I can accomplish what I was trying to do above, I'd be able to avoid this.
So I didn't end up finding a way to pass the output of a command run on the host machine as an env var into my devcontainer. However I did find a way to get the "correct" docker host IP and pass this along to my pod.
In my devcontainer.json I have this:
"runArgs": [
// https://stackoverflow.com/a/43541732/3902555
"--add-host=api.cancourier.local:host-gateway",
"--add-host=cancourier.local:host-gateway"
],
which augments the devcontainer's /etc/hosts with:
192.168.65.2 api.cancourier.local
192.168.65.2 cancourier.local
then in my Makefile where I store my kubectl commands I am simply running:
deploy-the-things:
DOCKER_HOST_IP = $(shell cat /etc/hosts | grep 'api.cancourier.local' | awk '{print $$1}')
	helm upgrade $(helm_release_name) $(charts_location) \
		--install \
		--namespace=$(local_namespace) \
		--create-namespace \
		-f $(charts_location)/values.yaml \
		-f $(charts_location)/local.yaml \
		--set cwd=$(HOST_PROJECT_PATH) \
		--set dockerHostIp=$(DOCKER_HOST_IP) \
		--debug \
		--wait
then within my helm chart I can use the following for the pod running my Next.js app:
hostAliases:
  - ip: {{ .Values.dockerHostIp }}
    hostnames:
      - "api.cancourier.local"
Highly recommend following this tutorial: Container environment variables
In this tutorial, 2 methods are mentioned:
Adding individual variables
Using an env file
Choose whichever is more comfortable for you. Good luck!
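As a rough illustration of those two approaches at the docker run level (the image name and value below are made up):
# Individual variables
docker run -e DOCKER_HOST_IP=10.0.0.89 my-image

# Env file
printf 'DOCKER_HOST_IP=10.0.0.89\n' > .env
docker run --env-file .env my-image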

Kubernetes Deployment dynamic port forwarding

I am moving a Docker image from Docker to a K8s Deployment. I have auto-scale rules on, so it starts at 5 replicas but can go to 12. The Docker image on K8s starts perfectly with a K8s service in front to cluster the Deployment.
Now each container has its own JVM which has a Prometheus app retrieving its stats. In Docker, this is no problem because the port that serves Prometheus info is dynamically created with a starting port of 8000, so the docker-compose.yml grows the port by 1 based on how many images are started.
The problem is that I can't find how to do this in a K8s [deployment].yml file. Because Deployment pods are dynamic, I would have thought there would be some way to set a starting HOST port to be incremented based on how many containers are started.
Maybe I am looking at this the wrong way, so any clarification would be helpful; meanwhile I will keep searching Google for any info on such a thing.
Well, after a lot of reading I came to the conclusion that K8s is not responsible for opening ports for a Docker image or providing ingress to your app on some odd port; that's simply not its job. A K8s Deployment just deploys the Pods you requested. You can set the ports option on a Deployment (spec.template.spec.containers[].ports), which, just like in Docker, is only informational. But it does let you query (with JSONPath or jq) for all pods (containers) that declare a Prometheus port. From that list you can rebuild the "targets" value in the prometheus.yml file, and having those targets makes them available to Grafana for a dashboard.
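Stripped down to its essence, that pod lookup can be a one-liner (a sketch only; 8055 is just the example port used in the script below, and jq must be installed):
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.containers[].ports[]?.containerPort == 8055) | .status.podIP'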
That's it, pretty easy. I was overcomplicating it because I did not understand it. I am including a script I quickly wrote to get something going; USE AT YOUR OWN RISK.
By the way, I use Pod and Container interchangeably.
#!/usr/bin/env bash
#set -x
_MyappPrometheusPort=8055
_finalIpsPortArray=()
_prometheusyamlFile=prometheus.yml
cd /docker/images/prometheus
#######################################################################################################################################################
#One container on the K8s System is weave and it holds the subnet we need to validate against.
#weave-net-lwzrk 2/2 Running 8 (7d3h ago) 9d 192.168.2.16 accl-ffm-srv-006 <none> <none>
_weavenet=$(kubectl get pod -n kube-system -o wide | grep weave | cut -d ' ' -f1 )
echo "_weavenet: $_weavenet"
#The default subnet is the one that lets us know the container is part of the Kubernetes network.
# Range: 10.32.0.0/12
# DefaultSubnet: 10.32.0.0/12
_subnet=$( kubectl exec -n kube-system $_weavenet -c weave -- /home/weave/weave --local status | sed -En "s/^(.*)(DefaultSubnet:\s)(.*)?/\3/p" )
echo "_subnet: $_subnet"
_cidr2=$( echo "$_subnet" | cut -d '/' -f2 )
echo "_cidr2: /$_cidr2"
#######################################################################################################################################################
#This is an array of the currently monitored containers that prometheus was started with.
#We will remove any containers from the array that fit the K8s Weavenet subnet with the myapp prometheus port.
_targetLineFound_array=($( egrep '^\s{1,20}-\s{0,5}targets\s{0,5}:\s{0,5}\[.*\]' $_prometheusyamlFile | sed -En "s/(.*-\stargets:\s\[)(.*)(\]).*/\2/p" | tr "," "\n"))
for index in "${_targetLineFound_array[@]}"
do
  _ip="${index//\'/}"
  _ipTocheck=$( echo $_ip | cut -d ':' -f1 )
  _portTocheck=$( echo $_ip | cut -d ':' -f2 )
  #We need to check if the IP is within the subnet mask attained from K8s.
  #The port must also be the prometheus port in case some other port is used also for Prometheus.
  #This means the IP should be removed since we will put the list of IPs from
  #K8s currently in production by Deployment/AutoScale rules.
  #Network: 10.32.0.0/12
  _isIpWithinSubnet=$( ipcalc $_ipTocheck/$_cidr2 | sed -En "s/^(.*)(Network:\s+)([0-9]{1}[0-9]?[0-9]?\.[0-9]{1}[0-9]?[0-9]?\.[0-9]{1}[0-9]?[0-9]?\.[0-9]{1}[0-9]?[0-9]?)(\/[0-9]{1}[0-9]{1}.*)?/\3/p" )
  if [[ "$_isIpWithinSubnet/$_cidr2" == "$_subnet" && "$_portTocheck" == "$_MyappPrometheusPort" ]]; then
    echo "IP managed by K8s will be deleted: _isIpWithinSubnet: ($_ip) $_isIpWithinSubnet"
  else
    _finalIpsPortArray+=("$_ip")
  fi
done
#######################################################################################################################################################
#This is an array of the current running myapp App containers with a prometheus port that is available.
#From this list we will add them to the prometheus file to be available for Grafana monitoring.
readarray -t _currentK8sIpsArr < <( kubectl get pods --all-namespaces --chunk-size=0 -o json | jq '.items[] | select(.spec.containers[].ports != null) | select(.spec.containers[].ports[].containerPort == '$_MyappPrometheusPort' ) | .status.podIP' )
for index in "${!_currentK8sIpsArr[@]}"
do
  _addIPToMonitoring=${_currentK8sIpsArr[index]//\"/}
  echo "IP Managed by K8s as myapp app with prometheus currently running will be added to monitoring: $_addIPToMonitoring"
  _finalIpsPortArray+=("$_addIPToMonitoring:$_MyappPrometheusPort")
done
######################################################################################################################################################
#we need to recreate this string and sed it into the file
#- targets: ['192.168.2.13:3201', '192.168.2.13:3202', '10.32.0.7:8055', '10.32.0.8:8055']
_finalPrometheusTargetString="- targets: ["
i=0
# Iterate the loop to read and print each array element
for index in "${!_finalIpsPortArray[@]}"
do
  ((i=i+1))
  _finalPrometheusTargetString="$_finalPrometheusTargetString '${_finalIpsPortArray[index]}'"
  if [[ $i != ${#_finalIpsPortArray[@]} ]]; then
    _finalPrometheusTargetString="$_finalPrometheusTargetString,"
  fi
done
_finalPrometheusTargetString="$_finalPrometheusTargetString]"
echo "$_finalPrometheusTargetString"
sed -i -E "s/(.*)-\stargets:\s\[.*\]/\1$_finalPrometheusTargetString/" ./$_prometheusyamlFile
docker-compose down
sleep 4
docker-compose up -d
echo "All changes were made. Exiting"
exit 0
Ideally, you should be using the average of the JVM metrics across all the replicas. There is no point in creating a different deployment with a different port if you are running the same Docker image across all the replicas.
I think keeping a single deployment, with resource requirements set on the deployment, would be the best practice.
You can get the JVM average of all the running replicas
sum(jvm_memory_max_bytes{area="heap", app="app-name",job="my-job"}) / sum(kube_pod_status_phase{phase="Running"})
As you are running the same Docker image across all replicas and the K8s Service by default handles the load balancing, average utilization is a sensible thing to monitor.
Still, if you want to filter and get different values per replica, you could create different deployments (not a good way at all) or use a StatefulSet.
You can also filter the data by hostname (pod name) in Prometheus, so you will get each replica's usage.

container labels in kubernetes

I am building my docker image with jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
--build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME\
I was using Docker but I am migrating to Kubernetes.
With docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes ?
I am aware of adding those labels in .Metadata.labels of my yaml files but I don't like it that much because
- it links that information to the deployment and not to the container itself
- it can be modified anytime
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
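To make the first suggestion concrete, here is a hedged sketch of a CI step that picks the tag (GIT_TAG is a hypothetical variable; GIT_COMMIT and IMAGE_NAME come from the question):
# Use the source control tag if present, otherwise a short commit hash
if [ -n "$GIT_TAG" ]; then TAG="$GIT_TAG"; else TAG="${GIT_COMMIT:0:12}"; fi
docker build -t "$IMAGE_NAME:$TAG" .
docker push "$IMAGE_NAME:$TAG"
The tag is then visible from Kubernetes via the pod spec, e.g. kubectl get pod <pod> -o jsonpath='{.spec.containers[0].image}'.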
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
Key                           Description
app.kubernetes.io/name        The name of the application
app.kubernetes.io/instance    A unique name identifying the instance of an application
app.kubernetes.io/version     The current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component   The component within the architecture
app.kubernetes.io/part-of     The name of a higher level application this one is part of
app.kubernetes.io/managed-by  The tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1
kind: Pod # or via a Deployment (apiVersion: apps/v1)
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
And use the downward API (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running Container:
1) Environment variables.
2) Volume files.
Below is an example of using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args: # < ------ We're using the mounted volumes inside the container
        - while true; do
            if [[ -e /etc/podinfo/labels ]]; then
              echo -en '\n\n'; cat /etc/podinfo/labels; fi;
            if [[ -e /etc/podinfo/annotations ]]; then
              echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
            sleep 5;
          done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes: # < -------- We're mounting the pod's labels and annotations in our example
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
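Once that pod is running, a quick way to check the mounted files (pod name taken from the manifest above):
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/podinfo/labels
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/podinfo/annotations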
Besides labels and annotations, the downward API exposes multiple additional options, such as:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A container's CPU limit, CPU request, memory limit, and memory request.
See the full list here.
(*) A nice blog discussing the downward API.
(**) You can view all your pods' labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or viewing only specific label with $ kubectl get pods -L <label_name>.

Kubernetes: Is it possible to hit multiple pods with a single request in a Kubernetes cluster

I want to clear cache in all the pods in my Kubernetes namespace. I want to send one request to the end-point which will then send a HTTP call to all the pods in the namespace to clear cache. Currently, I can hit only one pod using Kubernetes and I do not have control over which pod would get hit.
Even though the load-balancer is set to RR, continuously hitting the pods (n times, where n is the total number of pods) doesn't help, as other requests can creep in between.
The same issue was discussed here, but I couldn't find a solution for the implementation:
https://github.com/kubernetes/kubernetes/issues/18755
I'm trying to implement the clearing cache part using Hazelcast, wherein I will store all the cache and Hazelcast automatically takes care of the cache update.
If there is an alternative approach for this problem, or a way to configure kubernetes to hit all end-points for some specific requests, sharing here would be a great help.
Provided you have kubectl in your pod and access to the api-server, you can get all endpoint addresses and pass them to curl:
kubectl get endpoints <servicename> \
-o jsonpath="{.subsets[*].addresses[*].ip}" | xargs curl
Alternative without kubectl in the pod:
The recommended way to access the API server from a pod is by using kubectl proxy: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod. This would of course add at least the same overhead. Alternatively you can call the REST API directly; you just have to provide the token manually.
APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets \
| grep ^default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d " ")
If you provide the APISERVER and TOKEN variables, you don't need kubectl in your pod; this way you only need curl to access the API server and jq to parse the JSON output:
curl $APISERVER/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
UPDATE (final version)
APISERVER usually can be set to kubernetes.default.svc and the token should be available at /var/run/secrets/kubernetes.io/serviceaccount/token in the pod, so no need to provide anything manually:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); \
curl https://kubernetes.default.svc/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
jq is available here: https://stedolan.github.io/jq/download/ (< 4 MiB, but worth it for easily parsing JSON)
UPDATE: I published this article for this approach
I have had a similar situation. Here is how I resolved it (I'm using a namespace other than "default").
Setup access to cluster Using RBAC Authorization
Access to API is done by creating a ServiceAccount, assign it to the Pod and bind a Role to it.
1. Create a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
  namespace: my-namespace
2. Create a Role: in this section you need to provide the list of resources and the list of actions you'd like to have access to. Here is an example where you'd like to list the endpoints and also get the details of a specific endpoint.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list"]
3. Bind the Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-role-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: my-serviceaccount
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
4. Assign the ServiceAccount to the pods in your deployment (it should be under template.spec):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      serviceAccountName: my-serviceaccount
      containers:
        - name: my-pod
          ...
Access Clusters Using the Kubernetes API
Having all the security aspects set, you will have enough privilege to access the API within your Pod. All the required information to communicate with API Server is mounted under /var/run/secrets/kubernetes.io/serviceaccount in your Pod.
You can use the following shell script (probably add it to your COMMAND or ENTRYPOINT of the Docker image).
#!/bin/bash
# Point to the internal API server hostname
API_SERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICE_ACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICE_ACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT}/token)
# Reference the internal certificate authority (CA)
CA_CERT=${SERVICE_ACCOUNT}/ca.crt
From this point forward, it is just a simple REST API call. You can read these environment variables in any language of your choice and access the API.
Here is an example of listing the endpoints for your use case:
# List all the endpoints in the namespace the Pod is running in
curl --cacert ${CA_CERT} --header "Authorization: Bearer ${TOKEN}" -X GET \
"${API_SERVER}/api/v1/namespaces/${NAMESPACE}/endpoints"
# Get the endpoints object for a specific service (named my-deployment here) in the Pod's namespace
curl --cacert ${CA_CERT} --header "Authorization: Bearer ${TOKEN}" -X GET \
"${API_SERVER}/api/v1/namespaces/${NAMESPACE}/endpoints/my-deployment"
For more information on available API endpoints and how to call them, refer to API Reference.
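Putting it together for the cache-clearing use case, a rough sketch (the /clear-cache path and port 8080 are hypothetical, my-service stands for your service name, and jq is assumed to be available in the image) would fetch the endpoints and POST to every pod IP:
curl --cacert ${CA_CERT} --header "Authorization: Bearer ${TOKEN}" -s \
  "${API_SERVER}/api/v1/namespaces/${NAMESPACE}/endpoints/my-service" \
  | jq -r '.subsets[].addresses[].ip' \
  | xargs -I {} curl -s -X POST "http://{}:8080/clear-cache"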
For those of you trying to find an alternative, I have used hazelcast as distributed event listener. Added a similar POC on github: https://github.com/vinrar/HazelcastAsEventListener
I fixed this problem by using this script. You just have to write the equivalent command to make the API call. I used curl to do that.
Following is the usage of the script:
function usage {
  echo "usage: $PROGNAME [-n NAMESPACE] [-m MAX-PODS] -s SERVICE -- COMMAND"
  echo "  -s SERVICE    K8s service, i.e. a pod selector (required)"
  echo "     COMMAND    Command to execute on the pods"
  echo "  -n NAMESPACE  K8s namespace (optional)"
  echo "  -m MAX-PODS   Max number of pods to run on (optional; default=all)"
  echo "  -q            Quiet mode"
  echo "  -d            Dry run (don't actually exec)"
}
For example to run command curl http://google.com on all pods of a service with name s1 and namespace n1, you need to execute ./kcdo -s s1 -n n1 -- curl http://google.com.
I needed access to all pods so I could change the log level of a class, so I did the following from inside one of the pods:
# Change level to DEBUG
host <service-name> | awk '{print $4}' | while read line; do
  curl --location --request POST "http://$line:9111/actuator/loggers/com.foo.MyClassName" \
    --header 'Content-Type: application/json' \
    --data-raw '{"configuredLevel": "DEBUG"}'
done

# Query level on all pods
host <service-name> | awk '{print $4}' | while read line; do
  curl --location --request GET "http://$line:9111/actuator/loggers/com.foo.MyClassName"
  echo
done
You need host and curl to execute it.
Not sure if this is good practice.

How to find Kubernetes pod that's handling the request

I am running 2 pods and a service with Type: NodePort, to load balance the requests between pods. I want to know when I send a request to the service, which pod the request is forwarded to. Is there a way to find this, because looking at the response, it looks like all the requests are handled by same pod.
A Kubernetes Service load-balances across its backing pods by default (in the iptables proxy mode this is done with probabilistic rules rather than strict round-robin). When you create a Service, those iptables rules are generated on the node.
To be sure, ssh into the node and run iptables-save | less. Search for the name of the service. In the example below, a Service named microbot load-balances the microbot deployment with 3 replicas. There should be 2 entries in your case since you have just 2 pods.
-A KUBE-SVC-LX5ZXALLN4UQ7ZFL -m comment --comment "default/microbot:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-OZCDYTQTC3KQGJK5
-A KUBE-SVC-LX5ZXALLN4UQ7ZFL -m comment --comment "default/microbot:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SKIRAXBCCQB5R4MV
-A KUBE-SVC-LX5ZXALLN4UQ7ZFL -m comment --comment "default/microbot:" -j KUBE-SEP-SPMPNZCOIJIRSNNQ
If the iptables output doesn't look like the above, it's likely that your Service is not configured properly. Like what Heidi said, the pods are not associated with the Service.
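Rather than paging through the whole dump, you can also grep for the service name directly (microbot is the example service from above; substitute your own):
sudo iptables-save | grep 'default/microbot'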
Most likely you can't see the logs because of your current kubectl context and a missing namespace.
Try kubectl get pods -o wide --all-namespaces | grep <pod> to get the information of which namespace the pod is in and the node IP address.
Then feed the namespace and pod name into the command below to get a running tail of the last 100 lines of the log:
kubectl --namespace <namespace> logs --tail 100 -f <pod>
There is also the remote chance that the pod is not associated with the service. To check run kubectl describe services --namespace <namespace> <service> and look for the app name in the Selector: section
You can also exec into the container and see if the port is accessible or bound on the pod itself. If it isn't listening or answering, most likely it's due to the service not being associated with the application in the namespace.
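For the exec check mentioned above, a hedged example (the pod name, namespace, and port are placeholders; wget/curl availability depends on the image):
kubectl exec -it <pod> --namespace <namespace> -- sh -c 'wget -qO- http://localhost:8080/ || curl -s http://localhost:8080/'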
You can look at the application log file, if you're using one. If you print anything to stdout, use kubectl logs <pod> to see the message.
For testing, you can include the pod hostname in the response.
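One low-effort way to see which pod answered, as suggested, is to watch each pod's logs while sending requests through the NodePort (names and ports below are placeholders):
# Terminal 1 and 2: follow each pod's output
kubectl logs -f <pod-a>
kubectl logs -f <pod-b>

# Terminal 3: send a handful of requests through the service
for i in $(seq 1 10); do curl -s http://<node-ip>:<node-port>/ >/dev/null; done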
