I am trying to start RStudio in a Docker container via Kubernetes. All objects are created, but when I try to open RStudio with the following commands on Ubuntu 18:
kubectl create -f rstudio-ing.yml
IP=$(minikube ip)
xdg-open http://$IP/rstudio/
I get this error: RStudio initialization error: unable connect to service.
The equivalent docker command works fine:
docker run -d -p 8787:8787 -e PASSWORD=123 -v /home/aabor/r-projects:/home/rstudio aabor/rstudio
The same operation fails in Kubernetes. The rstudio-ing.yml file creates all the objects without problems, and RStudio is accessible if I do not mount any folder, but as soon as I add the folder mount it produces the error. Any suggestions?
The content of rstudio-ing.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: r-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /rstudio/
        backend:
          serviceName: rstudio
          servicePort: 8787
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rstudio
spec:
  replicas: 1
  selector:
    matchLabels:
      service: rstudio
  template:
    metadata:
      labels:
        service: rstudio
        language: R
    spec:
      containers:
      - name: rstudio
        image: aabor/rstudio
        env:
        - name: PASSWORD
          value: "123"
        volumeMounts:
        - name: home-dir
          mountPath: /home/rstudio/
      volumes:
      - name: home-dir
        hostPath:
          # RStudio initialization error: unable connect to service
          path: /home/aabor/r-projects
---
apiVersion: v1
kind: Service
metadata:
  name: rstudio
spec:
  ports:
  - port: 8787
  selector:
    service: rstudio
This is the pod description:
Name:           rstudio-689c4fd6c8-fgt7w
Namespace:      default
Node:           minikube/10.0.2.15
Start Time:     Fri, 23 Nov 2018 21:42:35 +0300
Labels:         language=R
                pod-template-hash=2457098274
                service=rstudio
Annotations:    <none>
Status:         Running
IP:             172.17.0.9
Controlled By:  ReplicaSet/rstudio-689c4fd6c8
Containers:
  rstudio:
    Container ID:   docker://a6bdcbfdf8dc5489a4c1fa6f23fb782bc3d58dd75d50823cd370c43bd3bffa3c
    Image:          aabor/rstudio
    Image ID:       docker-pullable://aabor/rstudio@sha256:2326e5daa3c4293da2909f7e8fd15fdcab88b4eb54f891b4a3cb536395e5572f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 23 Nov 2018 21:42:39 +0300
    Ready:          True
    Restart Count:  0
    Environment:
      PASSWORD:  123
    Mounts:
      /home/rstudio/ from home-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mrkd8 (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  home-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /home/aabor/r-projects
    HostPathType:
  default-token-mrkd8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mrkd8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From               Message
  ----    ------                 ---- ----               -------
  Normal  Scheduled              10s  default-scheduler  Successfully assigned rstudio-689c4fd6c8-fgt7w to minikube
  Normal  SuccessfulMountVolume  10s  kubelet, minikube  MountVolume.SetUp succeeded for volume "home-dir"
  Normal  SuccessfulMountVolume  10s  kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-mrkd8"
  Normal  Pulling                9s   kubelet, minikube  pulling image "aabor/rstudio"
  Normal  Pulled                 7s   kubelet, minikube  Successfully pulled image "aabor/rstudio"
  Normal  Created                7s   kubelet, minikube  Created container
  Normal  Started                6s   kubelet, minikube  Started container
You have created a Service of type ClusterIP, which is only accessible from inside the cluster. To make it available from outside the cluster, change the Service type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: rstudio
spec:
  ports:
  - port: 8787
  selector:
    service: rstudio
  type: LoadBalancer
With a LoadBalancer-type Service you don't need the Ingress, and on minikube you can get the URL with:
$ minikube service rstudio --url
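If you would rather keep the Ingress and its path-based routing, a NodePort Service is another common way to reach a pod from outside a minikube cluster. A minimal sketch (my own illustration, not from the original answer; the nodePort value is an arbitrary pick from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: rstudio
spec:
  type: NodePort
  ports:
  - port: 8787
    targetPort: 8787
    nodePort: 30787  # hypothetical; any free port in 30000-32767
  selector:
    service: rstudio
You would then browse to http://$(minikube ip):30787/ directly.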
When I call my endpoint, e.g. curl localhost:8081/ticketapi/ticket/get, I receive a 404 error. I know that a 404 means the route cannot be found, but I have no clue how to debug the problem or whether there is something off in my .yml files.
I'm really new to Kubernetes and Docker, and I hope someone can help me solve this issue.
Thanks in advance.
My endpoint in Swagger:
Cluster create:
k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ticket-deployment
  labels:
    app: ticketapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ticketapi
  template:
    metadata:
      labels:
        app: ticketapi
    spec:
      containers:
      - name: ticketapi
        image: stanpanman/ticketapi:latest
        ports:
        - containerPort: 80
Service:
apiVersion: v1
kind: Service
metadata:
  name: ticket-service
spec:
  selector:
    app: ticketapi
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      # Ticket API
      - path: /ticketapi
        pathType: Prefix
        backend:
          service:
            name: ticketapi
            port:
              number: 80
Describe pod:
Name:         ticket-deployment-6549764674-swwjq
Namespace:    default
Priority:     0
Node:         k3d-k3s-default-agent-1/172.24.0.3
Start Time:   Tue, 03 May 2022 11:31:50 +0200
Labels:       app=ticketapi
              pod-template-hash=6549764674
Annotations:  <none>
Status:       Running
IP:           10.42.2.4
IPs:
  IP:  10.42.2.4
Controlled By:  ReplicaSet/ticket-deployment-6549764674
Containers:
  ticketapi:
    Container ID:   containerd://f830c5396c5886c9109f733a07d9c2e02cde32b7689ed85dbfb8e62dc503705a
    Image:          stanpanman/ticketapi:latest
    Image ID:       docker.io/stanpanman/ticketapi@sha256:428990ac8d10dbf74039eb25a6899448f4cb96997cb5edde2e9d62e66d547070
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 03 May 2022 11:32:03 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kgp8c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-kgp8c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---- ----               -------
  Normal  Scheduled  17m  default-scheduler  Successfully assigned default/ticket-deployment-6549764674-swwjq to k3d-k3s-default-agent-1
  Normal  Pulling    17m  kubelet            Pulling image "stanpanman/ticketapi:latest"
  Normal  Pulled     17m  kubelet            Successfully pulled image "stanpanman/ticketapi:latest" in 12.656309048s
  Normal  Created    17m  kubelet            Created container ticketapi
  Normal  Started    17m  kubelet            Started container ticketapi
This could help you debug:
Check whether your application is actually working at the pod level (make sure the pod name is correct):
kubectl port-forward pod/ticket-deployment-6549764674-swwjq 8080:80
# then try localhost:8080/whateverYouConfigured in your browser
If this doesn't work, the problem may come from your app; you can exec into the pod and run a curl localhost:80/whateverYouConfigured command.
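For example (pod name taken from the describe output above; this assumes curl is available inside the container image):
kubectl exec -it ticket-deployment-6549764674-swwjq -- curl -s localhost:80/whateverYouConfigured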
If this worked, check whether the Service is working correctly:
kubectl port-forward svc/ticket-service 8080:80
# then try localhost:8080/whateverYouConfigured in your browser
If this doesn't work, check the Service's endpoints and labels.
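A quick way to do that check (service name taken from the manifests above):
kubectl get endpoints ticket-service
kubectl get pods --show-labels
# If ENDPOINTS is empty, the Service selector does not match the pod labels.
# Also note the Ingress above points at a Service named "ticketapi" while the
# Service manifest creates "ticket-service"; the names are worth comparing.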
If it worked, you can do the same with the ingress:
kubectl port-forward <ingress-pod-name> 8080:<ingress-port>
If this works, check your k3d config / firewalling etc.
Hope this can help: https://learnk8s.io/a/b0862c5c0f1e7a6db8145f7970dd9601.png
I'm stuck with an annoying issue where my pod can't access the mounted persistent volume.
Kubeadm: v1.19.2
Docker: 19.03.13
Zookeeper image: library/zookeeper:3.6
Cluster info: locally hosted, no cloud provider
K8s configuration:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    targetPort: 2888
    name: server
    protocol: TCP
  - port: 3888
    targetPort: 3888
    name: leader-election
    protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    protocol: TCP
    port: 2181
    targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      volumes:
      - name: zoo-config
        configMap:
          name: zoo-config
      - name: datadir
        persistentVolumeClaim:
          claimName: zoo-pvc
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: "library/zookeeper:3.6"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper/data
        - name: zoo-config
          mountPath: /conf
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: local-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper/data
    clientPort=2181
    initLimit=10
    syncLimit=4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: zoo-pv
  labels:
    type: local
spec:
  storageClassName: local-storage
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>
I've tried running the pod as root with the following security context (which I know is a terrible idea, purely as a test), but this caused a bunch of other issues.
securityContext:
  fsGroup: 0
  runAsUser: 0
Once the pod starts up, the logs contain the following:
Zookeeper JMX enabled by default
Using config: /conf/zoo.cfg
<log4j Warnings>
Unable to access datadir, exiting abnormally
Inspecting the pod provides me with the following information:
~$ kubectl describe pod/zk-0
Name:         zk-0
Namespace:    default
Priority:     0
Node:         <node>
Start Time:   Sat, 26 Sep 2020 15:48:00 +0200
Labels:       app=zk
              controller-revision-hash=zk-6c68989bd
              statefulset.kubernetes.io/pod-name=zk-0
Annotations:  <none>
Status:       Running
IP:           <IP>
IPs:
  IP:  <IP>
Controlled By:  StatefulSet/zk
Containers:
  zookeeper:
    Container ID:   docker://281e177d677394604785542c231d21b71f1666a22e74c1c10ef88491dad7a522
    Image:          library/zookeeper:3.6
    Image ID:       docker-pullable://zookeeper@sha256:6c051390cfae7958ff427834937c353fc6c34484f6a84b3e4bc8c512b53a16f6
    Ports:          2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    3
      Started:      Sat, 26 Sep 2020 16:04:26 +0200
      Finished:     Sat, 26 Sep 2020 16:04:27 +0200
    Ready:          False
    Restart Count:  8
    Requests:
      cpu:        500m
      memory:     1Gi
    Environment:  <none>
    Mounts:
      /conf from zoo-config (rw)
      /var/lib/zookeeper/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-88x56 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-zk-0
    ReadOnly:   false
  zoo-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zoo-config
    Optional:  false
  default-token-88x56:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-88x56
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  17m                   default-scheduler  Successfully assigned default/zk-0 to <node>
  Normal   Pulled     17m                   kubelet            Successfully pulled image "library/zookeeper:3.6" in 1.932381527s
  Normal   Pulled     17m                   kubelet            Successfully pulled image "library/zookeeper:3.6" in 1.960610662s
  Normal   Pulled     17m                   kubelet            Successfully pulled image "library/zookeeper:3.6" in 1.959935633s
  Normal   Created    16m (x4 over 17m)     kubelet            Created container zookeeper
  Normal   Pulled     16m                   kubelet            Successfully pulled image "library/zookeeper:3.6" in 1.92551645s
  Normal   Started    16m (x4 over 17m)     kubelet            Started container zookeeper
  Normal   Pulling    15m (x5 over 17m)     kubelet            Pulling image "library/zookeeper:3.6"
  Warning  BackOff    2m35s (x71 over 17m)  kubelet            Back-off restarting failed container
To me it seems like the pod has full rw access to the volume, so I'm unsure why it's still refusing to access the directory. Any help will be appreciated!
After quite some digging, I finally figured out why it wasn't working. The logs were actually telling me all I needed to know in the end: the mounted PersistentVolumeClaim simply did not have the correct file permissions to read from the mounted hostPath directory /mnt/data.
To fix this, in a somewhat hacky way, I gave read, write and execute permissions to everyone:
chmod 777 /mnt/data
This is definitely not the most secure way of fixing the issue, and I would strongly advise against using it in any production-like environment.
A probably better approach would be the following:
sudo usermod -a -G 1000 1000
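Alternatively (my own sketch, not part of the original answer): since the StatefulSet above runs with runAsUser: 1000 and fsGroup: 2000, you could hand the hostPath directory to exactly that user and group instead of opening it up to everyone:
# match ownership to the pod's securityContext (runAsUser 1000, fsGroup 2000)
sudo chown -R 1000:2000 /mnt/data
sudo chmod -R 770 /mnt/data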
I have set up a simple two-node Kubernetes cluster using K3s. I have deployed a very simple web app, but when I try to access it, I just get a "Gateway Timeout". I've tried to keep the setup as simple as possible, but I can't see where I'm going wrong. I've outlined my entire setup below, starting from two brand-new Ubuntu 20.04 instances. Can anyone see where I'm going wrong?
This is my set up from start to finish:
On Master Node:
sudo ufw allow 80
sudo ufw allow 8080
sudo ufw allow 6443
sudo ufw allow 2379
sudo ufw allow 2380
sudo ufw allow 2379:10252/tcp
sudo ufw allow 30000:32767/tcp
export http_proxy=proxy.example.com:8082
export https_proxy=proxy.example.com:8082
curl -sfL https://get.k3s.io | sh -
cat /var/lib/rancher/k3s/server/node-token
sudo cat /var/lib/rancher/k3s/server/node-token
sudo cat /etc/rancher/k3s/k3s.yaml
On Agent:
sudo ufw allow 80
sudo ufw allow 8080
sudo ufw allow 6443
sudo ufw allow 2379
sudo ufw allow 2380
sudo ufw allow 2379:10252/tcp
sudo ufw allow 30000:32767/tcp
export http_proxy=proxy.example.com:8082
export https_proxy=proxy.example.com:8082
curl -sfL https://get.k3s.io | K3S_URL=https://vm1234.example.com:6443 K3S_TOKEN=K1060cf9217115ce1cb67d8450ea809b267ddc332b59c0c8ec6c6a30573f0b75eca::server:0b2be94c380be7bf4e16d94af36cac00 sh -
mkdir /etc/rancher/k3s/
sudo mkdir /etc/rancher/k3s/
sudo vim /etc/rancher/k3s/registries.yaml
sudo systemctl restart k3s-agent
On Local Workstation:
kubectl --kubeconfig k3s.yaml apply -f web-test-deployment.yaml
kubectl --kubeconfig k3s.yaml apply -f web-test-service.yaml
kubectl --kubeconfig k3s.yaml apply -f web-test-ingress.yaml
List running pods:
$ kubectl --kubeconfig k3s.yaml get po
NAME                                   READY   STATUS    RESTARTS   AGE
web-test-deployment-5594bffd47-2gpd2   1/1     Running   0          4m57s
Inspect running pod:
$ kubectl --kubeconfig k3s.yaml describe pod web-test-deployment-5594bffd47-2gpd2
Name:         web-test-deployment-5594bffd47-2gpd2
Namespace:    default
Priority:     0
Node:         vm9876/10.192.110.200
Start Time:   Fri, 28 Aug 2020 12:07:01 +0100
Labels:       app=web-test
              pod-template-hash=5594bffd47
Annotations:  <none>
Status:       Running
IP:           10.42.1.3
IPs:
  IP:  10.42.1.3
Controlled By:  ReplicaSet/web-test-deployment-5594bffd47
Containers:
  web-test:
    Container ID:   containerd://c32d85da0642d3ccc00c61a5265280f9fcc11e8979d621690117878c89506440
    Image:          docker.example.com/web-test
    Image ID:       docker.example.com/web-test@sha256:cb568f5b6554284684815fc4ee17eb8cceb1aa90838a575fd3755b60bb7e44e7
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 28 Aug 2020 12:09:03 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wkzpx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-wkzpx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wkzpx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age                    From               Message
  ----    ------     ----                   ----               -------
  Normal  Scheduled  <unknown>              default-scheduler  Successfully assigned default/web-test-deployment-5594bffd47-2gpd2 to vm9876
  Normal  Pulling    3m58s (x4 over 5m17s)  kubelet, vm9876    Pulling image "docker.example.com/web-test"
  Normal  Pulled     3m16s                  kubelet, vm9876    Successfully pulled image "docker.example.com/web-test"
  Normal  Created    3m16s                  kubelet, vm9876    Created container web-test
  Normal  Started    3m16s                  kubelet, vm9876    Started container web-test
Show stack:
$ kubectl --kubeconfig k3s.yaml get all
NAME                                       READY   STATUS    RESTARTS   AGE
pod/web-test-deployment-5594bffd47-2gpd2   1/1     Running   0          5m43s

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes         ClusterIP   10.43.0.1       <none>        443/TCP    16m
service/web-test-service   ClusterIP   10.43.100.212   <none>        8080/TCP   5m39s

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web-test-deployment   1/1     1            1           5m44s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/web-test-deployment-5594bffd47   1         1         1       5m45s
List Ingress:
$ kubectl --kubeconfig k3s.yaml get ing
NAME       CLASS    HOSTS   ADDRESS         PORTS   AGE
web-test   <none>   *       10.94.230.224   80      5m55s
Inspect Ingress:
$ kubectl --kubeconfig k3s.yaml describe ing web-test
Name:             web-test
Namespace:        default
Address:          10.94.230.224
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /     web-test-service:8080 (10.42.1.3:8080)
Annotations:  kubernetes.io/ingress.class: traefik
Events:       <none>
Inspect Service:
kubectl --kubeconfig k3s.yaml describe svc web-test-service
Name:              web-test-service
Namespace:         default
Labels:            app=web-test
Annotations:       <none>
Selector:          app=web-test
Type:              ClusterIP
IP:                10.43.100.212
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.42.1.3:8080
Session Affinity:  None
Events:            <none>
$ curl http://10.94.230.224/web-test-service/
Gateway Timeout
These are my deployment manifests:
web-test-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-test
  name: web-test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-test
  strategy: {}
  template:
    metadata:
      labels:
        app: web-test
    spec:
      containers:
      - image: docker.example.com/web-test
        imagePullPolicy: Always
        name: web-test
        ports:
        - containerPort: 8080
      restartPolicy: Always
      volumes: null
web-test-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web-test
  name: web-test-service
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: web-test
web-test-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-test
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: web-test-service
          servicePort: 8080
Note: I've also tried a similar setup using Ambassador, but I'm getting similar results :-(
The Ingress is missing an annotation describing the Traefik entrypoint, as well as a host in its rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-test
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/router.entrypoints: http
spec:
  rules:
  - host: webtest.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-test-service
            port:
              number: 8080
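With a host rule in place, requests must carry the matching Host header when you test against the address directly. A quick check (host and address taken from the rule and the kubectl get ing output above):
curl -H "Host: webtest.example.com" http://10.94.230.224/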
I am using NFS for a volume in a Kubernetes pod, created via a Deployment. Below are the details of all the files.
Filename: nfs-server.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
  # Open the ports required by the NFS server
  # Port 2049 for TCP
  - name: tcp-2049
    port: 2049
    protocol: TCP
  # Port 111 for UDP
  - name: udp-111
    port: 111
    protocol: UDP
---
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
  labels:
    role: nfs
spec:
  containers:
  - name: nfs-server-container
    image: cpuguy83/nfs-server
    securityContext:
      privileged: true
    args:
    # Pass the paths to share to the Docker image
    - /exports
Both the service and the pod are running. Now I have to use this in my web server. Below are the details of the deployment file for the web server.
Filename: deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
      - name: nfs-volume
        nfs:
          server: 10.99.56.195
          path: /exports
      containers:
      - name: apache
        image: mobingi/ubuntu-apache2-php7:7.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs-volume
          mountPath: /var/www/html
When I run this file without the volume, everything works fine. But when I run it with NFS, the pod gives the following error.
kubectl describe pod apache-deployment-577ffcf9bd-p8s75
gives the following output:
Name:           apache-deployment-577ffcf9bd-p8s75
Namespace:      default
Priority:       0
Node:           worker-node2/10.160.0.4
Start Time:     Tue, 09 Jul 2019 09:53:39 +0000
Labels:         app=apache
                pod-template-hash=577ffcf9bd
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/apache-deployment-577ffcf9bd
Containers:
  apache:
    Container ID:
    Image:          mobingi/ubuntu-apache2-php7:7.2
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-p9qdb (ro)
      /var/www/html from nfs-volume (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nfs-volume:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.244.1.50
    Path:      /exports
    ReadOnly:  false
  default-token-p9qdb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-p9qdb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                   From                   Message
  ----     ------       ----                  ----                   -------
  Normal   Scheduled    6m21s                 default-scheduler      Successfully assigned default/apache-deployment-577ffcf9bd-p8s75 to worker-node2
  Warning  FailedMount  4m16s                 kubelet, worker-node2  MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-r3a55a8a3287448a59f7e4dbefa0333af.scope
mount.nfs: Connection timed out
  Warning  FailedMount  2m10s                 kubelet, worker-node2  MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-r5fe7befa141d4f989e14b291afa43208.scope
mount.nfs: Connection timed out
  Warning  FailedMount  2m3s (x2 over 4m18s)  kubelet, worker-node2  Unable to mount volumes for pod "apache-deployment-577ffcf9bd-p8s75_default(29114119-5815-442a-bb97-03fa491206a4)": timeout expired waiting for volumes to attach or mount for pod "default"/"apache-deployment-577ffcf9bd-p8s75". list of unmounted volumes=[nfs-volume]. list of unattached volumes=[nfs-volume default-token-p9qdb]
  Warning  FailedMount  4s                    kubelet, worker-node2  MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-rd30c104228ae43df933839b6da469107.scope
mount.nfs: Connection timed out
Can anyone please help solve this problem?
Make sure there is no firewall between the nodes.
Make sure nfs-utils is installed on the cluster nodes.
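For example (a sketch; the package name differs by distro):
# Debian/Ubuntu
sudo apt-get install -y nfs-common
# RHEL/CentOS
sudo yum install -y nfs-utils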
Here is a blog post about the Docker image you are using for the NFS server; you need to make some tweaks to the ports used by the NFS server:
https://medium.com/@aronasorman/creating-an-nfs-server-within-kubernetes-e6d4d542bbb9
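To confirm basic reachability before involving the kubelet, you can also try the mount by hand from the worker node (a sketch; the server IP is the one from the FailedMount events above, and the tools come with nfs-utils/nfs-common):
showmount -e 10.244.1.50
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.244.1.50:/exports /mnt/nfs-test
# a "Connection timed out" here points at networking/firewalling, not Kubernetes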
I recently evaluated Kubernetes with a simple test project, and I was able to update the image of a StatefulSet with a command like this:
kubectl set image statefulset/cloud-stateful-set cloud-stateful-container=ncccloud:v716
I'm now trying to get our real system to work in Kubernetes, but the pods don't do anything when I try to update the image, even though I'm using basically the same command.
It says:
statefulset.apps "cloud-stateful-set" image updated
And kubectl describe statefulset.apps/cloud-stateful-set says:
Image: ncccloud:v716
But kubectl describe pod cloud-stateful-set-0 and kubectl describe pod cloud-stateful-set-1 say:
"Image: ncccloud:latest"
ncccloud:latest is an image which doesn't work:
$ kubectl get pods
NAME                               READY   STATUS             RESTARTS   AGE
cloud-stateful-set-0               0/1     CrashLoopBackOff   7          13m
cloud-stateful-set-1               0/1     CrashLoopBackOff   7          13m
mssql-deployment-6cd4ff766-pzz99   1/1     Running            1          55m
Another strange thing is that every time I try to apply the StatefulSet it says configured instead of unchanged.
$ kubectl apply -f k8s/cloud-stateful-set.yaml
statefulset.apps "cloud-stateful-set" configured
Here is my cloud-stateful-set.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloud-stateful-set
  labels:
    app: cloud
    group: service
spec:
  replicas: 2
  # podManagementPolicy: Parallel
  serviceName: cloud-stateful-set
  selector:
    matchLabels:
      app: cloud
  template:
    metadata:
      labels:
        app: cloud
        group: service
    spec:
      containers:
      - name: cloud-stateful-container
        image: ncccloud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cloud-stateful-storage
          mountPath: /cloud-stateful-data
  volumeClaimTemplates:
  - metadata:
      name: cloud-stateful-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Mi
Here is the full output of kubectl describe pod/cloud-stateful-set-1:
Name:           cloud-stateful-set-1
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Tue, 02 Jul 2019 11:03:01 +0300
Labels:         app=cloud
                controller-revision-hash=cloud-stateful-set-5c9964c897
                group=service
                statefulset.kubernetes.io/pod-name=cloud-stateful-set-1
Annotations:    <none>
Status:         Running
IP:             10.1.0.20
Controlled By:  StatefulSet/cloud-stateful-set
Containers:
  cloud-stateful-container:
    Container ID:   docker://3ec26930c1a81caa39d5c5a16c4e25adf7584f90a71e0110c0b03ecb60dd9592
    Image:          ncccloud:latest
    Image ID:       docker://sha256:394427c40e964e34ca6c9db3ce1df1f8f6ce34c4ba8f3ab10e25da6e89678830
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    139
      Started:      Tue, 02 Jul 2019 11:19:03 +0300
      Finished:     Tue, 02 Jul 2019 11:19:03 +0300
    Ready:          False
    Restart Count:  8
    Environment:    <none>
    Mounts:
      /cloud-stateful-data from cloud-stateful-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gzxpx (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  cloud-stateful-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cloud-stateful-storage-cloud-stateful-set-1
    ReadOnly:   false
  default-token-gzxpx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gzxpx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                         Message
  ----     ------                 ----               ----                         -------
  Normal   Scheduled              19m                default-scheduler            Successfully assigned cloud-stateful-set-1 to docker-for-desktop
  Normal   SuccessfulMountVolume  19m                kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pvc-4c9e1796-9c9a-11e9-998f-00155d64fa03"
  Normal   SuccessfulMountVolume  19m                kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-gzxpx"
  Normal   Pulled                 17m (x5 over 19m)  kubelet, docker-for-desktop  Container image "ncccloud:latest" already present on machine
  Normal   Created                17m (x5 over 19m)  kubelet, docker-for-desktop  Created container
  Normal   Started                17m (x5 over 19m)  kubelet, docker-for-desktop  Started container
  Warning  BackOff                4m (x70 over 19m)  kubelet, docker-for-desktop  Back-off restarting failed container
Here is the full output of kubectl describe statefulset.apps/cloud-stateful-set:
Name:               cloud-stateful-set
Namespace:          default
CreationTimestamp:  Tue, 02 Jul 2019 11:02:59 +0300
Selector:           app=cloud
Labels:             app=cloud
                    group=service
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app":"cloud","group":"service"},"name":"cloud-stateful-set","names...
Replicas:           2 desired | 2 total
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=cloud
           group=service
  Containers:
   cloud-stateful-container:
    Image:        ncccloud:v716
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /cloud-stateful-data from cloud-stateful-storage (rw)
  Volumes:  <none>
Volume Claims:
  Name:          cloud-stateful-storage
  StorageClass:
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Mi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age  From                    Message
  ----    ------            ---- ----                    -------
  Normal  SuccessfulCreate  25m  statefulset-controller  create Pod cloud-stateful-set-0 in StatefulSet cloud-stateful-set successful
  Normal  SuccessfulCreate  25m  statefulset-controller  create Pod cloud-stateful-set-1 in StatefulSet cloud-stateful-set successful
I'm using Docker Desktop on Windows, if it matters.
In my case, imagePullPolicy was already set to Always, and the following patch helped (see the k8s docs: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update):
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
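You can then watch the rollout progress (a sketch; "web" is the StatefulSet name from the patch example above):
kubectl rollout status statefulset/web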
In the StatefulSet YAML, change
imagePullPolicy: Never
to
imagePullPolicy: Always
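After changing the policy, re-apply the manifest. Note that a StatefulSet only replaces its pods when the pod template changes, so you may also need to trigger a restart by hand (a sketch; kubectl rollout restart requires kubectl 1.15 or newer):
kubectl apply -f k8s/cloud-stateful-set.yaml
kubectl rollout restart statefulset/cloud-stateful-set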