I have a very simple "Hello" Spring Boot application:
@RestController
public class HelloWorld {

    @RequestMapping("/")
    public String sayHello() {
        return "Hello Spring Boot!!";
    }
}
I packaged it with this Dockerfile:
FROM java:8
COPY ./springsimple-1.0-SNAPSHOT.jar /Users/a/Documents/dev/intellij/dockerImages/
WORKDIR /Users/a/Documents/dev/intellij/dockerImages/
EXPOSE 8090
CMD ["java", "-jar", "springsimple-1.0-SNAPSHOT.jar"]
pushed it to my container registry, and deployed it:
amhg$ kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
deployment.apps "testproject" created
amhg$ kubectl expose deployments testproject --port=5000 --type=LoadBalancer
service "testproject" exposed
The command kubectl get pods shows:
NAME READY STATUS RESTARTS AGE
testproject-bdf5b54d-gkk92 1/1 Running 0 41s
However, when I run kubectl proxy (Starting to serve on 127.0.0.1:8001) and then try the following, I get an error:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
What is missing?
The description of the pod is:
amhg$ kubectl describe pod testproject-bdf5b54d-gkk92
Name: testproject-bdf5b54d-gkk92
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 13:13:20 +0200
Labels: pod-template-hash=68916108
run=testproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"testproject-bdf5b54d","uid":"aa99808e-43c2-11e8-9537-0a58ac1f0f4...
Status: Running
IP: 10.244.0.40
Controlled By: ReplicaSet/testproject-bdf5b54d
Containers:
testproject:
Container ID: docker://6ed3878fa4476a5d2e56f0ba70908742702709c7505c7b19989efc6ff658ea55
Image: acontainerregistry.azurecr.io/hellospring:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/azure-vote-front@sha256:e2af252d275c99b802e21b3b469c75b256d7812ee71d7582cd759bd4faf5a6ec
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 19 Apr 2018 13:13:21 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 57m default-scheduler Successfully assigned testproject-bdf5b54d-gkk92 to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 57m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 57m kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/hellospring:v1" already present on machine
Normal Created 57m kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 57m kubelet, aks-nodepool1-39744669-0 Started container
Let's start from the beginning: it is almost always better to use YAML config files for anything in Kubernetes. They help with debugging when something goes wrong, and they let you repeat the action later.
First, you used this command to create the pod:
kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
The equivalent YAML looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
  labels:
    name: test-app # matched by the service selector below
spec:
  containers:
    - name: java-app
      image: acontainerregistry.azurecr.io/hellospring:v1
      ports:
        - containerPort: 8090
and you can apply it with:
kubectl apply -f ./pod.yaml
You get the same result as with your command, but you also have a config file you can reuse later.
You then tried to expose your deployment using the command:
kubectl expose deployments testproject --port=5000 --type=LoadBalancer
The YAML for your service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: java-service
  labels:
    name: test-app
spec:
  type: LoadBalancer
  ports:
    - port: 5000
      targetPort: 8090
      name: http
  selector:
    name: test-app
Doing the same thing in YAML lets you describe more of the configuration and be sure you haven't missed anything.
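You can apply it the same way (assuming you saved the manifest as service.yaml):
kubectl apply -f ./service.yaml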
You tried to curl localhost through the API-server proxy, but I'm not sure what you expected from this command:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
After you create the service, run kubectl describe service $service_name and look for these lines:
LoadBalancer Ingress: XX.XX.XX.XX
Port: http 5000/TCP
You can curl this address and get the answer from your application:
curl -v XX.XX.XX.XX:5000
Don't forget to open the port on Azure firewall.
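If LoadBalancer Ingress shows as <pending> at first, you can watch the service until Azure assigns the external IP (a quick check, using the service created by your expose command):
kubectl get service testproject --watch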
Related
I got an error when creating a deployment.
This is my Dockerfile, which I have run and tested locally; I also pushed it to Docker Hub:
FROM node:14.15.4
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install pm2 -g
COPY . .
EXPOSE 3001
CMD [ "pm2-runtime", "server.js" ]
On my Raspberry Pi 3 Model B, I installed k3s using curl -sfL https://get.k3s.io | sh -
Here is my controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-deployment
  labels:
    app: controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      containers:
        - name: controller
          image: akirayorunoe/node-controller-server
          ports:
            - containerPort: 3001
After applying this file, the pod goes into an error state.
When I check the pod logs, it says:
standard_init_linux.go:219: exec user process caused: exec format error
Here is the response from describe pod:
Name: controller-deployment-8669c9c864-sw8kh
Namespace: default
Priority: 0
Node: raspberrypi/192.168.0.30
Start Time: Fri, 21 May 2021 11:21:05 +0700
Labels: app=controller
pod-template-hash=8669c9c864
Annotations: <none>
Status: Running
IP: 10.42.0.43
IPs:
IP: 10.42.0.43
Controlled By: ReplicaSet/controller-deployment-8669c9c864
Containers:
controller:
Container ID: containerd://439edcfdbf49df998e3cabe2c82206b24819a9ae13500b013b9bac1df6e56231
Image: akirayorunoe/node-controller-server
Image ID: docker.io/akirayorunoe/node-controller-server@sha256:e1c51152f9d596856952d590b1ef9a486e136661076a9d259a9259d4df314226
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 21 May 2021 11:24:29 +0700
Finished: Fri, 21 May 2021 11:24:29 +0700
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-txm85 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-txm85:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-txm85
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m33s default-scheduler Successfully assigned default/controller-deployment-8669c9c864-sw8kh to raspberrypi
Normal Pulled 5m29s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.072053213s
Normal Pulled 5m24s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.018192177s
Normal Pulled 5m6s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.015959209s
Normal Pulled 4m34s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 2.921116157s
Normal Created 4m34s (x4 over 5m29s) kubelet Created container controller
Normal Started 4m33s (x4 over 5m28s) kubelet Started container controller
Normal Pulling 3m40s (x5 over 5m32s) kubelet Pulling image "akirayorunoe/node-controller-server"
Warning BackOff 30s (x23 over 5m22s) kubelet Back-off restarting failed container
You are trying to launch a container built for x86 (or x86_64, same difference) on an ARM machine. This does not work. Containers for ARM must be built specifically for ARM and contain ARM executables. While major projects are slowly adding ARM support to their builds, most random images you find on Docker Hub or whatever will not work on ARM.
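If you control the image, one way out is to publish a multi-arch image with Docker buildx; a minimal sketch, assuming your Dockerfile builds cleanly for ARM (the official node base images do ship ARM variants):
docker buildx build --platform linux/amd64,linux/arm/v7 -t akirayorunoe/node-controller-server --push .
Once pushed, delete the pod and the Pi will pull the variant matching its own architecture.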
I have set up a simple two-node Kubernetes cluster using K3S. I have deployed a very simple web app, but when I try to access it, I just get a "Gateway Timeout". I've tried to keep the setup as simple as possible, but I can't see where I'm going wrong. I've outlined my entire setup below, starting from two brand new Ubuntu 20.04 instances. Can anyone see where I'm going wrong?
This is my setup from start to finish:
On Master Node:
sudo ufw allow 80
sudo ufw allow 8080
sudo ufw allow 6443
sudo ufw allow 2379
sudo ufw allow 2380
sudo ufw allow 2379:10252/tcp
sudo ufw allow 30000:32767/tcp
export http_proxy=proxy.example.com:8082
export https_proxy=proxy.example.com:8082
curl -sfL https://get.k3s.io | sh -
cat /var/lib/rancher/k3s/server/node-token
sudo cat /var/lib/rancher/k3s/server/node-token
sudo cat /etc/rancher/k3s/k3s.yaml
On Agent:
sudo ufw allow 80
sudo ufw allow 8080
sudo ufw allow 6443
sudo ufw allow 2379
sudo ufw allow 2380
sudo ufw allow 2379:10252/tcp
sudo ufw allow 30000:32767/tcp
export http_proxy=proxy.example.com:8082
export https_proxy=proxy.example.com:8082
curl -sfL https://get.k3s.io | K3S_URL=https://vm1234.example.com:6443 K3S_TOKEN=K1060cf9217115ce1cb67d8450ea809b267ddc332b59c0c8ec6c6a30573f0b75eca::server:0b2be94c380be7bf4e16d94af36cac00 sh -
mkdir /etc/rancher/k3s/
sudo mkdir /etc/rancher/k3s/
sudo vim /etc/rancher/k3s/registries.yaml
sudo systemctl restart k3s-agent
On Local Workstation:
kubectl --kubeconfig k3s.yaml apply -f web-test-deployment.yaml
kubectl --kubeconfig k3s.yaml apply -f web-test-service.yaml
kubectl --kubeconfig k3s.yaml apply -f web-test-ingress.yaml
List running pods:
$ kubectl --kubeconfig k3s.yaml get po
NAME READY STATUS RESTARTS AGE
web-test-deployment-5594bffd47-2gpd2 1/1 Running 0 4m57s
Inspect running pod:
$ kubectl --kubeconfig k3s.yaml describe pod web-test-deployment-5594bffd47-2gpd2
Name: web-test-deployment-5594bffd47-2gpd2
Namespace: default
Priority: 0
Node: vm9876/10.192.110.200
Start Time: Fri, 28 Aug 2020 12:07:01 +0100
Labels: app=web-test
pod-template-hash=5594bffd47
Annotations: <none>
Status: Running
IP: 10.42.1.3
IPs:
IP: 10.42.1.3
Controlled By: ReplicaSet/web-test-deployment-5594bffd47
Containers:
web-test:
Container ID: containerd://c32d85da0642d3ccc00c61a5265280f9fcc11e8979d621690117878c89506440
Image: docker.example.com//web-test
Image ID: docker.example.com//web-test@sha256:cb568f5b6554284684815fc4ee17eb8cceb1aa90838a575fd3755b60bb7e44e7
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 28 Aug 2020 12:09:03 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wkzpx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-wkzpx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wkzpx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/web-test-deployment-5594bffd47-2gpd2 to vm9876
Normal Pulling 3m58s (x4 over 5m17s) kubelet, vm9876 Pulling image "docker.example.com/web-test"
Normal Pulled 3m16s kubelet, vm9876 Successfully pulled image "docker.example.com/web-test"
Normal Created 3m16s kubelet, vm9876 Created container web-test
Normal Started 3m16s kubelet, vm9876 Started container web-test
Show stack:
$ kubectl --kubeconfig k3s.yaml get all
NAME READY STATUS RESTARTS AGE
pod/web-test-deployment-5594bffd47-2gpd2 1/1 Running 0 5m43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 16m
service/web-test-service ClusterIP 10.43.100.212 <none> 8080/TCP 5m39s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web-test-deployment 1/1 1 1 5m44s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-test-deployment-5594bffd47 1 1 1 5m45s
List Ingress:
$ kubectl --kubeconfig k3s.yaml get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
web-test <none> * 10.94.230.224 80 5m55s
Inspect Ingress:
$ kubectl --kubeconfig k3s.yaml describe ing web-test
Name: web-test
Namespace: default
Address: 10.94.230.224
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/ web-test-service:8080 (10.42.1.3:8080)
Annotations: kubernetes.io/ingress.class: traefik
Events: <none>
Inspect Service:
kubectl --kubeconfig k3s.yaml describe svc web-test-service
Name: web-test-service
Namespace: default
Labels: app=web-test
Annotations: Selector: app=web-test
Type: ClusterIP
IP: 10.43.100.212
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.42.1.3:8080
Session Affinity: None
Events: <none>
$ curl http://10.94.230.224/web-test-service/
Gateway Timeout
These are my deployment manifests:
web-test-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-test
  name: web-test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-test
  strategy: {}
  template:
    metadata:
      labels:
        app: web-test
    spec:
      containers:
        - image: docker.example.com/web-test
          imagePullPolicy: Always
          name: web-test
          ports:
            - containerPort: 8080
      restartPolicy: Always
      volumes: null
web-test-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web-test
  name: web-test-service
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: web-test
web-test-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-test
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: web-test-service
              servicePort: 8080
Note: I've also tried a similar setup using Ambassador, but I'm getting similar results :-(
The Ingress is missing the annotation describing the entrypoint, as well as the host:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-test
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/router.entrypoints: http
spec:
  rules:
    - host: webtest.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: web-test-service # v1beta1 backend schema
              servicePort: 8080 # the service exposes 8080, not 80
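Once a host rule is in place, requests must carry that hostname to match the router; a quick way to test against the ingress address from your output (webtest.example.com is just the example host above):
curl -H "Host: webtest.example.com" http://10.94.230.224/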
I am using a YAML file to deploy containers on Kubernetes, with a replication factor, on a self-hosted machine.
YAML File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mojo-deployment
  labels:
    app: mojo
spec:
  selector:
    matchLabels:
      app: mojo
  replicas: 3
  template:
    metadata:
      labels:
        app: mojo
    spec:
      containers:
        - name: mojo
          image: mojo:1.0.1
          ports:
            - containerPort: 9000
---
# Services Info
apiVersion: v1
kind: Service
metadata:
  name: mojo-services
spec:
  selector:
    app: mojo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
# Ingress Configuration
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mojo-ingress
  annotations:
    kubernetes.io/ingress.class: mojo
spec:
  backend:
    serviceName: mojo-services
    servicePort: 80
Steps:
Build the Docker image using docker build -t mojo:1.0 .
docker image ls shows me an image ID.
I skipped any docker run/push step to deploy the image as a container. Do I need one, or will Kubernetes take care of it?
Run kubectl apply -f Prod.yaml. It shows:
deployment.apps/mojo-deployment created
service/mojo-services created
ingress.networking.k8s.io/mojo-ingress created
kubectl get service, kubectl get pod, and kubectl get deployment return output (screenshots omitted).
Questions:
Do I need to build the container before deploying the YAML file? I tried it, but Kubernetes is still not running it.
Why are all pods showing Pending status?
The deployment is also showing a pending status.
I am trying to access the Ingress on :80 and cannot access it.
Edit
pod description
Name: mojo-deployment-6665bdc557-s57m7
Namespace: default
Priority: 0
Node: <none>
Labels: app=mojo
pod-template-hash=6665bdc557
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mojo-deployment-6665bdc557
Containers:
mojo:
Image: mojo:1.0
Port: 9000/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tjx6p (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-tjx6p:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tjx6p
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 70s (x45 over 67m) default-scheduler 0/1 nodes are available: 1 node(s) were unschedulable.
Edit 2
After removing the taint from the master node:
kubectl get node and kubectl get pod return output (screenshots omitted).
kubectl describe node: https://gist.github.com/amixpal/333bffd6ab91def749267f30d4ffb079
If you have only one node (the master), a taint is usually added to it that makes the master unschedulable. Remove the taint from the master (and from all other nodes, if there is more than one) using the command below.
kubectl taint nodes --all node-role.kubernetes.io/master-
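You can confirm the taint is gone with a quick check:
kubectl describe nodes | grep -i taints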
Edit: Based on the node describe output, the CNI is not ready.
Please make sure all CNI-related pods are running and healthy.
Your container manifest should reference a Docker image that can be pulled, or the image should already be present on the k8s node:
containers:
  - name: mojo
    image: mojo:1.0.1
    ports:
      - containerPort: 9000
Please answer: how does your mojo:1.0.1 Docker image get onto the Kubernetes nodes?
All pods wait for their image to be available.
The deployment waits for all pods to reach the Running status.
K8s services make the ingress available once the deployment is ready.
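If the image only exists locally (built with docker build on the machine that runs the node), a common workaround is to tell Kubernetes not to try pulling it from a registry; a minimal sketch of that change, assuming the tag matches what docker image ls shows:
containers:
  - name: mojo
    image: mojo:1.0.1
    imagePullPolicy: IfNotPresent # use the node-local image instead of pulling
    ports:
      - containerPort: 9000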
Trying to do something that should be pretty simple: starting up an Express pod and fetching localhost:5000/, which should respond with Hello World!.
I've installed ingress-nginx for Docker for Mac and minikube
Mandatory: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Docker for Mac: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
minikube: minikube addons enable ingress
I run skaffold dev --tail
It prints out Example app listening on port 5000, so apparently it is running
I navigate to localhost and localhost:5000 and get a "Could not get any response" error
I also tried the minikube ip, which is 192.168.99.100, and got the same results
Not quite sure what I am doing wrong here. Code and configs are below. Suggestions?
index.js
// Import dependencies
const express = require('express');
// Set the ExpressJS application
const app = express();
// Set the listening port
// Web front-end is running on port 3000
const port = 5000;
// Set root route
app.get('/', (req, res) => res.send('Hello World!'));
// Listen on the port
app.listen(port, () => console.log(`Example app listening on port ${port}`));
skaffold.yaml
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: sockpuppet/server
      context: server
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: '**/*.js'
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress-service.yaml
      - k8s/server-deployment.yaml
      - k8s/server-cluster-ip-service.yaml
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: sockpuppet/server
          ports:
            - containerPort: 5000
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
Dockerfile.dev
FROM node:12.10-alpine
EXPOSE 5000
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Output from describe
$ kubectl describe ingress ingress-service
Name: ingress-service
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
localhost
/ server-cluster-ip-service:5000 (172.17.0.7:5000,172.17.0.8:5000,172.17.0.9:5000)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-service","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"server-cluster-ip-service","servicePort":5000},"path":"/"}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 16h nginx-ingress-controller Ingress default/ingress-service
Normal CREATE 21s nginx-ingress-controller Ingress default/ingress-service
Output from kubectl get po -l component=server
$ kubectl get po -l component=server
NAME READY STATUS RESTARTS AGE
server-deployment-cf6dd5744-2rnh9 1/1 Running 0 11s
server-deployment-cf6dd5744-j9qvn 1/1 Running 0 11s
server-deployment-cf6dd5744-nz4nj 1/1 Running 0 11s
Output from kubectl describe pods server-deployment. I noticed that Host Port is 0/TCP; possibly the issue?
Name: server-deployment-6b78885779-zttns
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 08 Oct 2019 19:54:03 -0700
Labels: app.kubernetes.io/managed-by=skaffold-v0.39.0
component=server
pod-template-hash=6b78885779
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.39
skaffold.dev/run-id=c545df44-a37d-4746-822d-392f42817108
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/server-deployment-6b78885779
Containers:
server:
Container ID: docker://2d0aba8f5f9c51a81f01acc767e863b7321658f0a3d0839745adb99eb0e3907a
Image: sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Image ID: docker://sha256:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 08 Oct 2019 19:54:05 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qz5kr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-qz5kr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qz5kr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/server-deployment-6b78885779-zttns to minikube
Normal Pulled 7s kubelet, minikube Container image "sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7" already present on machine
Normal Created 7s kubelet, minikube Created container server
Normal Started 6s kubelet, minikube Started container server
OK, got this sorted out now.
It boils down to the kind of Service being used: ClusterIP.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
If I am wanting to connect to a Pod or Deployment directly from outside of the cluster (something like Postman, pgAdmin, etc.) and I want to do it using a Service, I should be using NodePort:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in my case, if I want to continue using a Service, I'd change my Service manifest to:
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515
Make sure to manually set nodePort: <port>; otherwise it is randomly assigned and a pain to use.
Then I'd get the minikube IP with minikube ip and connect to the Pod with 192.168.99.100:31515.
At that point, everything worked as expected.
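For example, using the minikube IP and nodePort above:
$ curl http://192.168.99.100:31515/
Hello World!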
But that means having separate sets of development (NodePort) and production (ClusterIP) manifests, which is probably totally fine. But I want my manifests to stay as close as possible to the production version (i.e. ClusterIP).
There are a couple ways to get around this:
Using something like Kustomize, where you set a base.yaml and then have overlays for each environment that change only the relevant info, avoiding manifests that are mostly duplicates (a minimal sketch follows below).
Using kubectl port-forward. I think this is the route I am going to go. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:
kubectl port-forward services/postgres-cluster-ip-service 5432:5432
Or for the back-end and Postman:
kubectl port-forward services/server-cluster-ip-service 5000:5000
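For reference, a minimal Kustomize layout for the first option might look like this (a sketch; the overlay and file names are hypothetical):
# base/kustomization.yaml
resources:
  - server-cluster-ip-service.yaml

# dev/kustomization.yaml
resources:
  - ../base
patchesStrategicMerge:
  - service-nodeport-patch.yaml

# dev/service-nodeport-patch.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
Then kubectl apply -k dev/ deploys the NodePort variant while the base stays ClusterIP.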
I'm playing with doing this through the ingress-service.yaml using nginx-ingress, but don't have that working quite yet. Will update when I do. But for me, port-forward seems the way to go since I can just have one set of production manifests that I don't have to alter.
Skaffold Port-Forwarding
This is even better for my needs. Appending this to the bottom of skaffold.yaml is basically the same thing as kubectl port-forward, without tying up a terminal or two:
portForward:
  - resourceType: service
    resourceName: server-cluster-ip-service
    port: 5000
    localPort: 5000
  - resourceType: service
    resourceName: postgres-cluster-ip-service
    port: 5432
    localPort: 5432
Then run skaffold dev --port-forward.
I am using NFS for a volume in a Kubernetes pod created by a deployment.
Below are the details of all the files.
Filename: nfs-server.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      port: 2049
      protocol: TCP
    # Port 111 for UDP
    - name: udp-111
      port: 111
      protocol: UDP
---
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
  labels:
    role: nfs
spec:
  containers:
    - name: nfs-server-container
      image: cpuguy83/nfs-server
      securityContext:
        privileged: true
      args:
        # Pass the paths to share to the Docker image
        - /exports
Both the service and the pod are running (output screenshot omitted).
Now I have to use this in my web server. Below are the details of the deployment file for the web server.
Filename: deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
        - name: nfs-volume
          nfs:
            server: 10.99.56.195
            path: /exports
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-volume
              mountPath: /var/www/html
When I run this file without the volume, everything works fine. But when I run it with NFS, the pod gives the following error.
kubectl describe pod apache-deployment-577ffcf9bd-p8s75
It gives the following output:
Name: apache-deployment-577ffcf9bd-p8s75
Namespace: default
Priority: 0
Node: worker-node2/10.160.0.4
Start Time: Tue, 09 Jul 2019 09:53:39 +0000
Labels: app=apache
pod-template-hash=577ffcf9bd
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/apache-deployment-577ffcf9bd
Containers:
apache:
Container ID:
Image: mobingi/ubuntu-apache2-php7:7.2
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p9qdb (ro)
/var/www/html from nfs-volume (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nfs-volume:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.244.1.50
Path: /exports
ReadOnly: false
default-token-p9qdb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-p9qdb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m21s default-scheduler Successfully assigned default/apache-deployment-577ffcf9bd-p8s75 to worker-node2
Warning FailedMount 4m16s kubelet, worker-node2 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-r3a55a8a3287448a59f7e4dbefa0333af.scope
mount.nfs: Connection timed out
Warning FailedMount 2m10s kubelet, worker-node2 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-r5fe7befa141d4f989e14b291afa43208.scope
mount.nfs: Connection timed out
Warning FailedMount 2m3s (x2 over 4m18s) kubelet, worker-node2 Unable to mount volumes for pod "apache-deployment-577ffcf9bd-p8s75_default(29114119-5815-442a-bb97-03fa491206a4)": timeout expired waiting for volumes to attach or mount for pod "default"/"apache-deployment-577ffcf9bd-p8s75". list of unmounted volumes=[nfs-volume]. list of unattached volumes=[nfs-volume default-token-p9qdb]
Warning FailedMount 4s kubelet, worker-node2 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-rd30c104228ae43df933839b6da469107.scope
mount.nfs: Connection timed out
Can anyone please help solve this problem?
Make sure there is no firewall between the nodes.
Make sure nfs-utils is installed on the cluster nodes.
Here is a blog post about the Docker image you are using for the NFS server; you need to tweak the ports used by the NFS server:
https://medium.com/@aronasorman/creating-an-nfs-server-within-kubernetes-e6d4d542bbb9
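To rule out network or port problems before involving Kubernetes, you can also try the mount by hand from the worker node (a sketch; the client package is nfs-common on Debian/Ubuntu or nfs-utils on RHEL/CentOS, and the server address is the one from the failing mount above):
sudo apt-get install -y nfs-common   # client tools the kubelet needs for NFS mounts
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.244.1.50:/exports /mnt/nfs-test
If this also times out, the problem is connectivity or the NFS server's ports, not the pod spec.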