Pod is in Pending state on single node Kubernetes cluster - docker

I am using the YAML file below to deploy a container on Kubernetes, with a replication factor of 3, on a self-hosted machine.
YAML File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mojo-deployment
  labels:
    app: mojo
spec:
  selector:
    matchLabels:
      app: mojo
  replicas: 3
  template:
    metadata:
      labels:
        app: mojo
    spec:
      containers:
      - name: mojo
        image: mojo:1.0.1
        ports:
        - containerPort: 9000
---
#Services Info
apiVersion: v1
kind: Service
metadata:
  name: mojo-services
spec:
  selector:
    app: mojo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
#Ingress Configuration
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mojo-ingress
  annotations:
    kubernetes.io/ingress.class: mojo
spec:
  backend:
    serviceName: mojo-services
    servicePort: 80
Steps:
Build the Docker image using `docker build -t mojo:1.0 .`
`docker image ls` shows me an image ID.
I skipped running the image as a container with docker. Do I need to do that, or will kubectl take care of it?
Run `kubectl apply -f Prod.yaml`. It shows:
deployment.apps/mojo-deployment created
service/mojo-services created
ingress.networking.k8s.io/mojo-ingress created
kubectl get service returns
kubectl get pod returns
kubectl get deployment returns
Questions:
Do I need to build the container before deploying the YAML file? I tried it, but Kubernetes is still not running the pods.
Why are all pods showing Pending status?
The Deployment is also showing a pending status.
I am trying to access the Ingress on port 80 but cannot reach it.
Edit
pod description
Name: mojo-deployment-6665bdc557-s57m7
Namespace: default
Priority: 0
Node: <none>
Labels: app=mojo
pod-template-hash=6665bdc557
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mojo-deployment-6665bdc557
Containers:
mojo:
Image: mojo:1.0
Port: 9000/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tjx6p (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-tjx6p:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tjx6p
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  70s (x45 over 67m)  default-scheduler  0/1 nodes are available: 1 node(s) were unschedulable.
Edit 2
After removing the taint from the master node.
1. kubectl get node returns
kubectl get pod returns
kubectl describe node : https://gist.github.com/amixpal/333bffd6ab91def749267f30d4ffb079

If you have only one node (the master), then usually a taint is added to it which makes the master node unschedulable. Remove the taint from the master (and all other nodes, if there is more than one) using the command below.
kubectl taint nodes --all node-role.kubernetes.io/master-
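To double-check that the taint is actually gone before the scheduler retries (the node name below is a placeholder for whatever kubectl get nodes reports):
kubectl get nodes
kubectl describe node <master-node-name> | grep -i taints
# expected after removal: Taints: <none>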
Edit: Based on the node describe output, the CNI is not ready.
Please make sure all CNI-related pods are running and healthy.
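For example, something like this should show whether the CNI pods are healthy (assuming the CNI runs in kube-system, as flannel, Calico and Weave normally do):
kubectl get pods -n kube-system -o wide
# look for the CNI pods (kube-flannel-*, calico-node-*, weave-net-*, ...) in Running state
kubectl get nodes
# the node should report Ready once the CNI is up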

Your container manifest should reference a Docker image that can be pulled, or the k8s node should already contain the Docker image locally:
containers:
- name: mojo
  image: mojo:1.0.1
  ports:
  - containerPort: 9000
Please answer: how does your mojo:1.0.1 Docker image end up on the Kubernetes nodes?
All pods wait for the image to be available.
The Deployment waits for all pods to reach the Running status.
The Service and Ingress only become usable once the Deployment is ready.
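For a locally built image that is not in any registry, a minimal sketch would be to keep the image on the node and tell Kubernetes not to pull it (assuming the image was built against the same Docker daemon the cluster uses):
containers:
- name: mojo
  image: mojo:1.0.1
  imagePullPolicy: IfNotPresent  # use the image already present on the node instead of pulling
  ports:
  - containerPort: 9000
Otherwise, push the image to a registry the cluster can reach and reference that image name in the Deployment.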

Related

How do I uncover the reason for "Pending" when my node.js app fails to deploy to Kubernetes?

I wrote a simple node.js app that listens on a port and returns HTML. I can docker run the node.js app and, with port forwarding in place, hit it happily.
+ docker run -p 7081:7081 split-server
Now I want to run the app in Kubernetes. I am on a Mac and set up minikube and VirtualBox. I also set up a local Docker registry for my local app, using instructions found here.
It doesn't work no matter what combination of things I try. Pending. The describe is below. I think I'm close, but I just can't get useful debugging output from kubectl:
+ kubectl describe pod split-server
Name: split-server-68fc6cdcd-gpk5m
Namespace: default
Priority: 0
Node: <none>
Labels: app=split-server
pod-template-hash=68fc6cdcd
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/split-server-68fc6cdcd
Containers:
app:
Image: split-server:latest
Port: 7081/TCP
Host Port: 0/TCP
Environment:
SPLIT_API_KEY: <API KEY>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8lzd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-f8lzd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m27s (x3 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
My YAML is...
apiVersion: v1
kind: Service
metadata:
  name: split-server
spec:
  selector:
    app: split-server
  ports:
    - port: 7081
      targetPort: 7081
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: split-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: split-server
  template:
    metadata:
      labels:
        app: split-server
    spec:
      containers:
        - name: app
          image: 192.168.4.26:5000/split-server:latest
          #image: split-server:latest
          ports:
            - containerPort: 7081
          env:
            - name: SPLIT_API_KEY
              value: <API KEY>
          imagePullPolicy: Always
And here is what docker has for its list of images:
docker images
REPOSITORY                             TAG      IMAGE ID       CREATED          SIZE
split-server                           latest   d2caa2d0c693   45 minutes ago   1.01GB
192.168.4.26:5000/local/split-server   latest   d2caa2d0c693   45 minutes ago   1.01GB
Where should I be hunting? What tools am I missing? kubectl logs comes back empty every time... should have a single line of logging if the app had come up properly.
The minikube node is marked as unschedulable for some reason (either manually or because there is a problem). You can try to remove the taint:
kubectl taint nodes --all node.kubernetes.io/unschedulable-
or add a toleration on your pod:
apiVersion: v1
kind: Pod
metadata:
  name: ...
  ...
spec:
  containers:
  - name: ...
    ...
  tolerations:
  - key: "node.kubernetes.io/unschedulable"
    operator: "Exists"
    effect: "NoSchedule"
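As an alternative to tolerating the taint (not part of the answer above, just a suggestion): if the node was simply cordoned, uncordoning it is usually the cleaner fix:
kubectl uncordon <node-name>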
The way to troubleshoot a pending pod is to look at the events you get when you describe the pod. In your case the master node is marked unschedulable, hence the issue.
The command to fix it is the one Hussein gave; also refer to this page to get an idea of how to troubleshoot a pending pod:
Troubleshooting pending pods
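For reference, the usual starting points look like this (substitute your pod name):
kubectl describe pod <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp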

Kubernetes K3D Ingress always error 404 not found

When making a call to my endpoint, e.g. curl localhost:8081/ticketapi/ticket/get, I'm receiving a 404 error. I know that a 404 means the route cannot be found, but I have no clue how to debug the problem or whether there is something off in my .yml files.
I'm really new to Kubernetes and Docker and I hope someone can help me solve this issue.
Thanks in advance.
My endpoint in swagger
Cluster create:
k3d cluster create --api-port 6550 -p "8081:80#loadbalancer" --agents 2 from here
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ticket-deployment
  labels:
    app: ticketapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ticketapi
  template:
    metadata:
      labels:
        app: ticketapi
    spec:
      containers:
      - name: ticketapi
        image: stanpanman/ticketapi:latest
        ports:
        - containerPort: 80
Service:
apiVersion: v1
kind: Service
metadata:
  name: ticket-service
spec:
  selector:
    app: ticketapi
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
        # Ticket API
        - path: /ticketapi
          pathType: Prefix
          backend:
            service:
              name: ticketapi
              port:
                number: 80
Describe pod:
Name: ticket-deployment-6549764674-swwjq
Namespace: default
Priority: 0
Node: k3d-k3s-default-agent-1/172.24.0.3
Start Time: Tue, 03 May 2022 11:31:50 +0200
Labels: app=ticketapi
pod-template-hash=6549764674
Annotations: <none>
Status: Running
IP: 10.42.2.4
IPs:
IP: 10.42.2.4
Controlled By: ReplicaSet/ticket-deployment-6549764674
Containers:
ticketapi:
Container ID: containerd://f830c5396c5886c9109f733a07d9c2e02cde32b7689ed85dbfb8e62dc503705a
Image: stanpanman/ticketapi:latest
Image ID: docker.io/stanpanman/ticketapi#sha256:428990ac8d10dbf74039eb25a6899448f4cb96997cb5edde2e9d62e66d547070
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 03 May 2022 11:32:03 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kgp8c (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-kgp8c:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/ticket-deployment-6549764674-swwjq to k3d-k3s-default-agent-1
Normal Pulling 17m kubelet Pulling image "stanpanman/ticketapi:latest"
Normal Pulled 17m kubelet Successfully pulled image "stanpanman/ticketapi:latest" in 12.656309048s
Normal Created 17m kubelet Created container ticketapi
Normal Started 17m kubelet Started container ticketapi
This could help you debug:
Check if your application is actually working at the pod level (make sure the pod name is correct):
kubectl port-forward pod/ticket-deployment-6549764674-swwjq 8080:80
#then try on your browser localhost:8080/whateverYouConfigured
If this doesn't work, the problem may come from your app (you can exec into the pod and run a curl localhost:80/whateverYouConfigured command).
If this worked, check whether the service is working correctly:
kubectl port-forward svc/ticket-service 8080:80
#then try on your browser localhost:8080/whateverYouConfigured
If this doesn't work, check the svc endpoints or labels (see the sketch after these steps).
If it worked, then you can do the same with the ingress:
kubectl port-forward <ingress-pod-name> 8080:<ingress-port>
If this works, check your k3d config / firewalling etc.
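As a small sketch of the endpoint/label check from the steps above (based on the manifests in the question; note the Ingress backend points at a service named ticketapi while the Service itself is called ticket-service, which is exactly the kind of mismatch this check surfaces):
kubectl get endpoints ticket-service
kubectl get pods --show-labels
kubectl describe ingress minimal-ingress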
Hope this can help: https://learnk8s.io/a/b0862c5c0f1e7a6db8145f7970dd9601.png

Ingress configuration issue in Docker kubernetes cluster

I am fairly new to Kubernetes and Docker in general and am experiencing issues.
I am running a single local Kubernetes cluster via Docker and am using skaffold to control the build up and teardown of objects within the cluster. When I run skaffold dev the build seems successful, yet when I attempt to make a request to my cluster via Postman the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere here. My request handling logic is simple and so I feel the issue is not in the route handling but the configuration of my cluster, specifically with the ingress controller. I will post below my skaffold yaml config and my ingress yaml config.
Any help is greatly appreciated, as I have struggled with this bug for some time.
ingress yaml config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
Note that I have a redirect in my /etc/hosts file from ticketing.dev to 127.0.0.1
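For reference, that is just a line like this in /etc/hosts:
127.0.0.1 ticketing.dev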
Auth service yaml config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: conorl47/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
skaffold yaml config:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: conorl47/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
For installing the ingress-nginx controller I followed the installation instructions at https://kubernetes.github.io/ingress-nginx/deploy/, namely the Docker Desktop installation instructions.
After running that command I see the following two Docker containers running in Docker desktop
The two services created in the ingress-nginx namespace are :
❯ k get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.6.146   <pending>     80:30036/TCP,443:30465/TCP   22m
ingress-nginx-controller-admission   ClusterIP      10.108.8.26    <none>        443/TCP                      22m
When I kubectl describe both of these services I see the following :
❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.6.146
IPs: 10.103.6.146
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30036/TCP
Endpoints: 10.1.0.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30465/TCP
Endpoints: 10.1.0.10:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32485
Events: <none>
and :
❯ kubectl describe service ingress-nginx-controller-admission -n ingress-nginx
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.8.26
IPs: 10.108.8.26
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.1.0.10:8443
Session Affinity: None
Events: <none>
As it seems, you have made the ingress controller service of type LoadBalancer. This usually provisions an external load balancer from your cloud provider of choice. That's also why it's still pending: it is waiting for the load balancer to be ready, but that will never happen here.
If you want that ingress service reachable outside your cluster, you need to use type NodePort.
Their docs are not great on this point, and it seems to be like this by default. You could download the content of https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml and modify it before applying. Or you can use Helm, which lets you configure this (see the sketch below).
You could also do it in this dirty fashion.
kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \
| sed s/LoadBalancer/NodePort/g \
| kubectl apply -f -
You could also edit the controller service in place.
kubectl edit svc ingress-nginx-controller -n ingress-nginx
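If you go the Helm route mentioned above, a rough sketch would be the following (assuming the standard ingress-nginx chart and its controller.service.type value):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort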

Need a working Kubectl binary inside an image

My goal is to have a pod with a working Kubectl binary inside.
Unfortunately, every kubectl image from Docker Hub that I booted using basic YAML resulted in CrashLoopBackOff or similar.
Has anyone got some YAML (deployment, pod, etc.) that would get me my kubectl?
I tried a bunch of images with this basic YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl-demo
  labels:
    app: deploy
    role: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy
      role: backend
  template:
    metadata:
      labels:
        app: deploy
        role: backend
    spec:
      containers:
      - name: kubectl-demo
        image: <SOME_IMAGE>
        ports:
        - containerPort: 80
Thx
Or, you can do this. It works in my context, with Kubernetes on VMs, where I know where the kubeconfig file is. You would need to make the necessary changes to make it work in your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      role: kubectl
  template:
    metadata:
      labels:
        role: kubectl
    spec:
      containers:
      - image: viejo/kubectl
        name: kubelet
        tty: true
        securityContext:
          privileged: true
        volumeMounts:
        - name: kube-config
          mountPath: /root/.kube/
      volumes:
      - name: kube-config
        hostPath:
          path: /home/$USER/.kube/
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
This is the result:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
kubectl-cb8bfc6dd-nv6ht 1/1 Running 0 70s
$ kubectl exec kubectl-cb8bfc6dd-nv6ht -- kubectl get no
NAME STATUS ROLES AGE VERSION
kubernetes-1-17-master Ready master 16h v1.17.3
kubernetes-1-17-worker Ready <none> 16h v1.17.3
As Suren already explained in the comments, kubectl is not a daemon, so kubectl will run, exit, and cause the container to restart.
There are a couple of workarounds for this. One of them is to use the sleep command with the infinity argument. This keeps the Pod alive, prevents it from restarting, and allows you to exec into it.
Here's an example of how to do that:
spec:
  containers:
  - image: bitnami/kubectl
    command:
      - sleep
      - "infinity"
    name: kctl
Let me know if this helps.
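Once the pod is up, a usage sketch could look like this (assuming the Deployment is named kubectl-demo as in the question, and that its service account has RBAC permissions for the call):
kubectl exec -it deploy/kubectl-demo -- kubectl get pods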

Can't get response from Express API in k8s-Skaffold from Postman

Trying to do something that should be pretty simple: starting up an Express pod and fetching localhost:5000/, which should respond with Hello World!.
I've installed ingress-nginx for Docker for Mac and minikube
Mandatory: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Docker for Mac: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
minikube: minikube addons enable ingress
I run skaffold dev --tail
It prints out Example app listening on port 5000, so apparently it is running.
Navigate to localhost and localhost:5000 and get a "Could not get any response" error
Also, I tried minikube ip, which is 192.168.99.100, and experienced the same results.
Not quite sure what I am doing wrong here. Code and configs are below. Suggestions?
index.js
// Import dependencies
const express = require('express');
// Set the ExpressJS application
const app = express();
// Set the listening port
// Web front-end is running on port 3000
const port = 5000;
// Set root route
app.get('/', (req, res) => res.send('Hello World!'));
// Listen on the port
app.listen(port, () => console.log(`Example app listening on port ${port}`));
skaffold.yaml
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: sockpuppet/server
      context: server
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: '**/*.js'
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress-service.yaml
      - k8s/server-deployment.yaml
      - k8s/server-cluster-ip-service.yaml
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: sockpuppet/server
          ports:
            - containerPort: 5000
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
Dockerfile.dev
FROM node:12.10-alpine
EXPOSE 5000
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Output from describe
$ kubectl describe ingress ingress-service
Name: ingress-service
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
localhost
/ server-cluster-ip-service:5000 (172.17.0.7:5000,172.17.0.8:5000,172.17.0.9:5000)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-service","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"server-cluster-ip-service","servicePort":5000},"path":"/"}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 16h nginx-ingress-controller Ingress default/ingress-service
Normal CREATE 21s nginx-ingress-controller Ingress default/ingress-service
Output from kubectl get po -l component=server
$ kubectl get po -l component=server
NAME READY STATUS RESTARTS AGE
server-deployment-cf6dd5744-2rnh9 1/1 Running 0 11s
server-deployment-cf6dd5744-j9qvn 1/1 Running 0 11s
server-deployment-cf6dd5744-nz4nj 1/1 Running 0 11s
Output from kubectl describe pods server-deployment. I noticed that Host Port is 0/TCP. Possibly the issue?
Name: server-deployment-6b78885779-zttns
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 08 Oct 2019 19:54:03 -0700
Labels: app.kubernetes.io/managed-by=skaffold-v0.39.0
component=server
pod-template-hash=6b78885779
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.39
skaffold.dev/run-id=c545df44-a37d-4746-822d-392f42817108
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/server-deployment-6b78885779
Containers:
server:
Container ID: docker://2d0aba8f5f9c51a81f01acc767e863b7321658f0a3d0839745adb99eb0e3907a
Image: sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Image ID: docker://sha256:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 08 Oct 2019 19:54:05 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qz5kr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-qz5kr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qz5kr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/server-deployment-6b78885779-zttns to minikube
Normal Pulled 7s kubelet, minikube Container image "sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7" already present on machine
Normal Created 7s kubelet, minikube Created container server
Normal Started 6s kubelet, minikube Started container server
OK, got this sorted out now.
It boils down to the kind of Service being used: ClusterIP.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
If I want to connect to a Pod or Deployment directly from outside of the cluster (something like Postman, pgAdmin, etc.) and I want to do it using a Service, I should be using NodePort:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in my case, if I want to continue using a Service, I'd change my Service manifest to:
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515
Make sure to manually set nodePort: <port>, otherwise it is random and a pain to use.
Then I'd get the minikube IP with minikube ip and connect to the Pod with 192.168.99.100:31515.
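As a quick check (using the minikube IP and nodePort from above, and the app from index.js), curling the NodePort from the host should return the Hello World! response:
curl http://192.168.99.100:31515/
# Hello World!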
At that point, everything worked as expected.
But that means having separate sets of development (NodePort) and production (ClusterIP) manifests, which is probably totally fine. Still, I want my manifests to stay as close as possible to the production version (i.e. ClusterIP).
There are a couple ways to get around this:
Using something like Kustomize where you can set a base.yaml and then have overlays for each environment where it just changes the relevant info avoiding manifests that are mostly duplicative.
Using kubectl port-forward. I think this is the route I am going to go. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:
kubectl port-forward services/postgres-cluster-ip-service 5432:5432
Or for the back-end and Postman:
kubectl port-forward services/server-cluster-ip-service 5000:5000
I'm playing with doing this through the ingress-service.yaml using nginx-ingress, but don't have that working quite yet. Will update when I do. But for me, port-forward seems the way to go since I can just have one set of production manifests that I don't have to alter.
Skaffold Port-Forwarding
This is even better for my needs. Appending this to the bottom of the skaffold.yaml and is basically the same thing as kubectl port-forward without tying up a terminal or two:
portForward:
  - resourceType: service
    resourceName: server-cluster-ip-service
    port: 5000
    localPort: 5000
  - resourceType: service
    resourceName: postgres-cluster-ip-service
    port: 5432
    localPort: 5432
Then run skaffold dev --port-forward.
