When I call my endpoint, e.g. curl localhost:8081/ticketapi/ticket/get, I receive a 404 error. I know a 404 means the route cannot be found, but I have no clue how to debug the problem or whether something is off in my .yml files.
I'm really new to Kubernetes and Docker and I hope someone can help me solve this issue.
Thanks in advance.
My endpoint in swagger
Cluster create:
k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2 (from here)
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ticket-deployment
labels:
app: ticketapi
spec:
replicas: 1
selector:
matchLabels:
app: ticketapi
template:
metadata:
labels:
app: ticketapi
spec:
containers:
- name: ticketapi
image: stanpanman/ticketapi:latest
ports:
- containerPort: 80
Service:
apiVersion: v1
kind: Service
metadata:
name: ticket-service
spec:
selector:
app: ticketapi
ports:
- protocol: TCP
port: 80
targetPort: 80
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: nginx-example
rules:
- http:
paths:
# Ticket API
- path: /ticketapi
pathType: Prefix
backend:
service:
name: ticketapi
port:
number: 80
Describe pod:
Name: ticket-deployment-6549764674-swwjq
Namespace: default
Priority: 0
Node: k3d-k3s-default-agent-1/172.24.0.3
Start Time: Tue, 03 May 2022 11:31:50 +0200
Labels: app=ticketapi
pod-template-hash=6549764674
Annotations: <none>
Status: Running
IP: 10.42.2.4
IPs:
IP: 10.42.2.4
Controlled By: ReplicaSet/ticket-deployment-6549764674
Containers:
ticketapi:
Container ID: containerd://f830c5396c5886c9109f733a07d9c2e02cde32b7689ed85dbfb8e62dc503705a
Image: stanpanman/ticketapi:latest
Image ID: docker.io/stanpanman/ticketapi@sha256:428990ac8d10dbf74039eb25a6899448f4cb96997cb5edde2e9d62e66d547070
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 03 May 2022 11:32:03 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kgp8c (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-kgp8c:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/ticket-deployment-6549764674-swwjq to k3d-k3s-default-agent-1
Normal Pulling 17m kubelet Pulling image "stanpanman/ticketapi:latest"
Normal Pulled 17m kubelet Successfully pulled image "stanpanman/ticketapi:latest" in 12.656309048s
Normal Created 17m kubelet Created container ticketapi
Normal Started 17m kubelet Started container ticketapi
This could help you debug:
Check if your application is actually working at the pod level (make sure the pod-name is correct):
kubectl port-forward pod/ticket-deployment-6549764674-swwjq 8080:80
#then try on your browser localhost:8080/whateverYouConfigured
If this doesn't work, the problem may come from your app (you can exec into the pod and run a curl localhost:80/whateverYouConfigured command, as in the example below).
If this worked, try to check if the service is working correctly:
kubectl port-forward svc/ticket-service 8080:80
#then try on your browser localhost:8080/whateverYouConfigured
If this doesn't work, check the svc endpoints or labels (example commands below)
If it worked, then you can do the same with the ingress:
kubectl port-forward <ingress-pod-name> 8080:<ingress-port>
If this works, check your k3d config / firewalling etc.
Hope this can help: https://learnk8s.io/a/b0862c5c0f1e7a6db8145f7970dd9601.png
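For the two checks above (curl from inside the pod, and the Service endpoints/labels), here is a minimal sketch of the commands, assuming the pod and object names from your manifests and that curl is available inside the image:
kubectl exec -it ticket-deployment-6549764674-swwjq -- curl -v localhost:80/ticketapi/ticket/get
#an empty endpoints list means the Service selector does not match the pod labels
kubectl get endpoints ticket-service
kubectl get pods --show-labels
#shows which backend Service and port the ingress rule resolved to
kubectl describe ingress minimal-ingress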
I wrote a simple Node.js app that listens on a port and returns HTML. I can docker run the Node.js app and, with port forwarding in place, hit it happily.
+ docker run -p 7081:7081 split-server
Now I want to run the app in Kubernetes. I am on a Mac and set up minikube and VirtualBox. I also set up a local Docker registry for my local app, using instructions found here.
It doesn't work no matter what combination of things I try; the pod stays Pending. The describe output is below. I think I'm close, but I just can't get useful debugging output from kubectl:
+ kubectl describe pod split-server
Name: split-server-68fc6cdcd-gpk5m
Namespace: default
Priority: 0
Node: <none>
Labels: app=split-server
pod-template-hash=68fc6cdcd
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/split-server-68fc6cdcd
Containers:
app:
Image: split-server:latest
Port: 7081/TCP
Host Port: 0/TCP
Environment:
SPLIT_API_KEY: <API KEY>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8lzd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-f8lzd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m27s (x3 over 13m) default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
My YAML is...
apiVersion: v1
kind: Service
metadata:
name: split-server
spec:
selector:
app: split-server
ports:
- port: 7081
targetPort: 7081
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: split-server
spec:
replicas: 1
selector:
matchLabels:
app: split-server
template:
metadata:
labels:
app: split-server
spec:
containers:
- name: app
image: 192.168.4.26:5000/split-server:latest
#image: split-server:latest
ports:
- containerPort: 7081
env:
- name: SPLIT_API_KEY
value: <API KEY>
imagePullPolicy: Always
And here is what docker has for its list of images:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
split-server latest d2caa2d0c693 45 minutes ago 1.01GB
192.168.4.26:5000/local/split-server latest d2caa2d0c693 45 minutes ago 1.01GB
Where should I be hunting? What tools am I missing? kubectl logs comes back empty every time, yet there should be a single line of logging if the app had come up properly.
The minikube node is marked as unschedulable for some reason (either manually or because there is a problem). You can try to remove the taint:
kubectl taint nodes --all node.kubernetes.io/unschedulable-
or add a toleration to your pod:
apiVersion: v1
kind: Pod
metadata:
name: ...
...
spec:
containers:
- name: ...
...
tolerations:
- key: "node.kubernetes.io/unschedulable"
operator: "Exists"
effect: "NoSchedule"
The way to troubleshoot a pending pod is to look at the events you get when you describe the pod. In your case the node is marked unschedulable, hence the issue you are facing.
The command to fix it would be as Hussein said; also refer to this page to get an idea of how to troubleshoot a pending pod:
Troubleshooting pending pods
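As a quick way to confirm whether the node is cordoned and to clear it, a short sketch (the node name minikube is an assumption; check kubectl get nodes for the real one):
#a cordoned node shows SchedulingDisabled in the STATUS column
kubectl get nodes
kubectl describe node minikube | grep -i taint
#uncordon clears the unschedulable flag, which also removes the corresponding taint
kubectl uncordon minikube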
I know there are many questions concerning this aspect... but until now I could not find any answers. I tried two images (Apache Solr and Neo4j). I tried different namespaces, ClusterIP, editing /etc/hosts, an Ingress, minikube tunnel, minikube ip, and all my requests got no response.
I tried these images standalone in Docker and they answer properly with localhost, 127.0.0.1 and my Ethernet IP (in my case 192.168.0.15). I guessed it could be an internal configuration (in Solr, Neo4j) allowing requests only from localhost, but since they replied to calls from the IP address and through a custom domain I set in /etc/hosts, I turned to the Kubernetes configuration.
Below are the following steps and environment:
1) MacOS 10.15 Catalina
2) minikube version: v1.24.0 - commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
3) Kubectl:
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
4) Docker:
Client:
Cloud integration: v1.0.22
Version: 20.10.11
API version: 1.41
Go version: go1.16.10
Git commit: dea9396
Built: Thu Nov 18 00:36:09 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.11
API version: 1.41 (minimum version 1.12)
Go version: go1.16.9
Git commit: 847da18
Built: Thu Nov 18 00:35:39 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
minikube start --mount --mount-string="/my/local/path:/analytics" --driver='docker'
kubectl apply -f neo4j-configmap.yaml
kubectl apply -f neo4j-secret.yaml
kubectl apply -f neo4j-volume.yaml
kubectl apply -f neo4j-volume-claim.yaml
kubectl apply -f neo4j.yaml
kubectl apply -f neo4j-service.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: neo4j-configmap
data:
neo4j-url: neo4j-service
---
apiVersion: v1
kind: Secret
metadata:
name: neo4j-secret
type: Opaque
data:
neo4j-user: bmVvNGoK
neo4j-password: bmVvNGoK
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: neo4j-volume
spec:
storageClassName: hostpath
capacity:
storage: 101Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/analytics/neo4j"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: neo4j-volume-claim
labels:
app: neo4j
spec:
storageClassName: hostpath
volumeMode: Filesystem
volumeName: neo4j-volume
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 101Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neo4j-application
labels:
app: neo4j
spec:
replicas: 1
selector:
matchLabels:
app: neo4j
template:
metadata:
labels:
app: neo4j
spec:
volumes:
- name: neo4j-storage
persistentVolumeClaim:
claimName: neo4j-volume-claim
containers:
- name: neo4j
image: neo4j:4.1.4
ports:
- containerPort: 7474
name: neo4j-7474
- containerPort: 7687
name: neo4j-7687
volumeMounts:
- name: neo4j-storage
mountPath: "/data"
---
apiVersion: v1
kind: Service
metadata:
name: neo4j-service
spec:
type: NodePort
selector:
app: neo4j
ports:
- protocol: TCP
port: 7474
targetPort: neo4j-7474
nodePort: 30001
name: neo4j-port-7474
- protocol: TCP
port: 7687
targetPort: neo4j-7687
nodePort: 30002
name: neo4j-port-7687
The bash steps were executed in that order. I have each YAML configuration in a separate file; I joined them here as one YAML just for presentation.
What part or parts of the setup process or configuration process am I missing?
Below follows the kubectl describe all output with only Neo4j. I tried HTTP and HTTPS requests to all possible IPs... I also connected to each pod and performed a curl inside the pod, and got successful responses.
Name: neo4j-application-7757948b98-2pxr2
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Sun, 09 Jan 2022 14:19:32 -0300
Labels: app=neo4j
pod-template-hash=7757948b98
Annotations: <none>
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/neo4j-application-7757948b98
Containers:
neo4j:
Container ID: docker://2deda46b3bb15712ff6dde5d2f3493c07b616c2eef3433dec6fe6f0cd6439c5f
Image: neo4j:4.1.4
Image ID: docker-pullable://neo4j@sha256:b1bc8a5c5136f4797dc553c114c0269537c85d3580e610a8e711faacb48eb774
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Sun, 09 Jan 2022 14:19:43 -0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5hq9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
kube-api-access-z5hq9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35m default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 35m default-scheduler Successfully assigned default/neo4j-application-7757948b98-2pxr2 to minikube
Normal Pulling 35m kubelet Pulling image "neo4j:4.1.4"
Normal Pulled 35m kubelet Successfully pulled image "neo4j:4.1.4" in 3.087215911s
Normal Created 34m kubelet Created container neo4j
Normal Started 34m kubelet Started container neo4j
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.0.1
IPs: 10.96.0.1
Port: https 443/TCP
TargetPort: 8443/TCP
Endpoints: 192.168.49.2:8443
Session Affinity: None
Events: <none>
Name: neo4j-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=neo4j
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.131.77
IPs: 10.98.131.77
Port: neo4j-port-7474 7474/TCP
TargetPort: neo4j-7474/TCP
NodePort: neo4j-port-7474 30001/TCP
Endpoints: 172.17.0.4:7474
Port: neo4j-port-7687 7687/TCP
TargetPort: neo4j-7687/TCP
NodePort: neo4j-port-7687 30002/TCP
Endpoints: 172.17.0.4:7687
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: neo4j-application
Namespace: default
CreationTimestamp: Sun, 09 Jan 2022 14:19:27 -0300
Labels: app=neo4j
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=neo4j
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=neo4j
Containers:
neo4j:
Image: neo4j:4.1.4
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: neo4j-application-7757948b98 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 35m deployment-controller Scaled up replica set neo4j-application-7757948b98 to 1
Name: neo4j-application-7757948b98
Namespace: default
Selector: app=neo4j,pod-template-hash=7757948b98
Labels: app=neo4j
pod-template-hash=7757948b98
Annotations: deployment.kubernetes.io/desired-replicas: 1
deployment.kubernetes.io/max-replicas: 2
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/neo4j-application
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=neo4j
pod-template-hash=7757948b98
Containers:
neo4j:
Image: neo4j:4.1.4
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 35m replicaset-controller Created pod: neo4j-application-7757948b98-2pxr2
As mentioned in the comments and in this post, the way you expose an app running in minikube via NodePort is by running the command:
minikube service <SERVICE_NAME> --url
This prints out a URL you can paste into your browser.
You also mentioned:
With the URL from minikube service I could reach the endpoint! 🏃 Starting tunnel for service neo4j-service. http://127.0.0.1:49523 and http://127.0.0.1:49524. But considering the domain of the application... What should I do with NodePort 30001? What is the correct way to configure a Kubernetes node?
The output you pasted is correct; you are getting a successful response. As for the NodePort: minikube maps this port to the URL you get when running the command mentioned above. Read more on accessing apps running in minikube here.
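For reference, whether NodePort 30001 is directly reachable depends on the driver: with a VM-based driver (e.g. VirtualBox) the node IP is routable from the host, while with the Docker driver on macOS it typically is not, which is why minikube service opens a tunnel to 127.0.0.1. A small sketch of both ways:
minikube service neo4j-service --url
#direct NodePort access, only works when the node IP is routable from the host (VM drivers)
curl http://$(minikube ip):30001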
I'm stuck with an annoying issue where my pod can't access the mounted persistent volume.
Kubeadm: v1.19.2
Docker: 19.03.13
Zookeeper image: library/zookeeper:3.6
Cluster info: Locally hosted, no cloud provider
K8s configuration:
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
selector:
app: zk
ports:
- port: 2888
targetPort: 2888
name: server
protocol: TCP
- port: 3888
targetPort: 3888
name: leader-election
protocol: TCP
clusterIP: ""
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
selector:
app: zk
ports:
- name: client
protocol: TCP
port: 2181
targetPort: 2181
type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 1
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
volumes:
- name: zoo-config
configMap:
name: zoo-config
- name: datadir
persistentVolumeClaim:
claimName: zoo-pvc
containers:
- name: zookeeper
imagePullPolicy: Always
image: "library/zookeeper:3.6"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper/data
- name: zoo-config
mountPath: /conf
securityContext:
fsGroup: 2000
runAsUser: 1000
runAsNonRoot: true
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.beta.kubernetes.io/storage-class: local-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: zoo-config
namespace: default
data:
zoo.cfg: |
tickTime=10000
dataDir=/var/lib/zookeeper/data
clientPort=2181
initLimit=10
syncLimit=4
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: zoo-pv
labels:
type: local
spec:
storageClassName: local-storage
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <node-name>
I've tried running the pod as root with the following security context, which I know is a terrible idea, purely as a test. This, however, caused a bunch of other issues.
securityContext:
fsGroup: 0
runAsUser: 0
Once the pod starts up, the logs contain the following:
Zookeeper JMX enabled by default
Using config: /conf/zoo.cfg
<log4j Warnings>
Unable to access datadir, exiting abnormally
Inspecting the pod provides me with the following information:
~$ kubectl describe pod/zk-0
Name: zk-0
Namespace: default
Priority: 0
Node: <node>
Start Time: Sat, 26 Sep 2020 15:48:00 +0200
Labels: app=zk
controller-revision-hash=zk-6c68989bd
statefulset.kubernetes.io/pod-name=zk-0
Annotations: <none>
Status: Running
IP: <IP>
IPs:
IP: <IP>
Controlled By: StatefulSet/zk
Containers:
zookeeper:
Container ID: docker://281e177d677394604785542c231d21b71f1666a22e74c1c10ef88491dad7a522
Image: library/zookeeper:3.6
Image ID: docker-pullable://zookeeper@sha256:6c051390cfae7958ff427834937c353fc6c34484f6a84b3e4bc8c512b53a16f6
Ports: 2181/TCP, 2888/TCP, 3888/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 3
Started: Sat, 26 Sep 2020 16:04:26 +0200
Finished: Sat, 26 Sep 2020 16:04:27 +0200
Ready: False
Restart Count: 8
Requests:
cpu: 500m
memory: 1Gi
Environment: <none>
Mounts:
/conf from zoo-config (rw)
/var/lib/zookeeper/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-88x56 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-zk-0
ReadOnly: false
zoo-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zoo-config
Optional: false
default-token-88x56:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-88x56
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/zk-0 to <node>
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.932381527s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.960610662s
Normal Pulled 17m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.959935633s
Normal Created 16m (x4 over 17m) kubelet Created container zookeeper
Normal Pulled 16m kubelet Successfully pulled image "library/zookeeper:3.6" in 1.92551645s
Normal Started 16m (x4 over 17m) kubelet Started container zookeeper
Normal Pulling 15m (x5 over 17m) kubelet Pulling image "library/zookeeper:3.6"
Warning BackOff 2m35s (x71 over 17m) kubelet Back-off restarting failed container
To me, it seems like the pod has full rw access to the volume, so I'm unsure why it's still refusing to access the directory. Any help will be appreciated!
After quite some digging, I finally figured out why it wasn't working. The logs were actually telling me all I needed to know in the end: the mounted persistentVolumeClaim simply did not have the correct file permissions to read from the mounted hostPath /mnt/data directory.
To fix this, in a somewhat hacky way, I gave read, write & execute permissions to all.
chmod 777 /mnt/data
An overview can be found here.
This is definitely not the most secure way of fixing the issue, and I would strongly advise against using it in any production-like environment.
Probably a better approach would be the following
sudo usermod -a -G 1000 1000
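A narrower alternative, assuming the pod keeps the securityContext from the manifest above (runAsUser: 1000, fsGroup: 2000), is to hand ownership of the host path to those IDs instead of opening it up to everyone (run this on the node that backs the hostPath volume):
sudo chown -R 1000:2000 /mnt/data
sudo chmod -R 770 /mnt/data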
I'm trying to start RStudio in a Docker container via Kubernetes. All objects are created, but when I try to open RStudio using these commands on Ubuntu 18:
kubectl create -f rstudio-ing.yml
IP=$(minikube ip)
xdg-open http://$IP/rstudio/
there is an error: RStudio initialization error: unable to connect to service.
The usual docker command works fine:
docker run -d -p 8787:8787 -e PASSWORD=123 -v /home/aabor/r-projects:/home/rstudio aabor/rstudio
The same operation in Kubernetes fails.
The rstudio-ing.yml file creates all the objects fine, and RStudio is accessible if I do not mount any folder. But if I add folder mounts it produces the error. Any suggestions?
The content of the rstudio-ing.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: r-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /rstudio/
backend:
serviceName: rstudio
servicePort: 8787
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rstudio
spec:
replicas: 1
selector:
matchLabels:
service: rstudio
template:
metadata:
labels:
service: rstudio
language: R
spec:
containers:
- name: rstudio
image: aabor/rstudio
env:
- name: PASSWORD
value: "123"
volumeMounts:
- name: home-dir
mountPath: /home/rstudio/
volumes:
- name: home-dir
hostPath:
#RStudio initialization error: unable connect to service
path: /home/aabor/r-projects
---
apiVersion: v1
kind: Service
metadata:
name: rstudio
spec:
ports:
- port: 8787
selector:
service: rstudio
This is pod description:
Name: rstudio-689c4fd6c8-fgt7w
Namespace: default
Node: minikube/10.0.2.15
Start Time: Fri, 23 Nov 2018 21:42:35 +0300
Labels: language=R
pod-template-hash=2457098274
service=rstudio
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controlled By: ReplicaSet/rstudio-689c4fd6c8
Containers:
rstudio:
Container ID: docker://a6bdcbfdf8dc5489a4c1fa6f23fb782bc3d58dd75d50823cd370c43bd3bffa3c
Image: aabor/rstudio
Image ID: docker-pullable://aabor/rstudio@sha256:2326e5daa3c4293da2909f7e8fd15fdcab88b4eb54f891b4a3cb536395e5572f
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Nov 2018 21:42:39 +0300
Ready: True
Restart Count: 0
Environment:
PASSWORD: 123
Mounts:
/home/rstudio/ from home-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mrkd8 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
home-dir:
Type: HostPath (bare host directory volume)
Path: /home/aabor/r-projects
HostPathType:
default-token-mrkd8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mrkd8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned rstudio-689c4fd6c8-fgt7w to minikube
Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "home-dir"
Normal SuccessfulMountVolume 10s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-mrkd8"
Normal Pulling 9s kubelet, minikube pulling image "aabor/rstudio"
Normal Pulled 7s kubelet, minikube Successfully pulled image "aabor/rstudio"
Normal Created 7s kubelet, minikube Created container
Normal Started 6s kubelet, minikube Started container
You have created a service of type ClusterIP, which can only be accessed from inside the cluster, not from outside. To make it available outside of the cluster, change the service type to LoadBalancer.
apiVersion: v1
kind: Service
metadata:
name: rstudio
spec:
ports:
- port: 8787
selector:
service: rstudio
type: LoadBalancer
In that case, the LoadBalancer-type service doesn't need the Ingress, and you can get the URL with:
$ minikube service rstudio --url
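For illustration, this is the kind of output to expect (the host and port are assigned by minikube, so the URL below is only an example):
$ minikube service rstudio --url
http://192.168.99.100:31234
$ xdg-open "$(minikube service rstudio --url)"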
I've created an example Rails 5 app that uses Google Cloud PostgreSQL.
I'm able to run the app locally with docker-compose up, but I'm not able to connect to it remotely when I deploy it to GCP.
I tried to replicate https://cloud.google.com/ruby/tutorials/bookshelf-on-kubernetes-engine where they use targetPort: http-server
The rails app is published on Github.
Am I doing anything obviously wrong? :-|
Running the app locally works
git clone git@github.com:stabenfeldt/k8s-colors.git
docker-compose up -d
docker-compose run colors rake db:create db:migrate
open http://localhost:3000
Create a GKE cluster
gcloud container clusters create color-cluster --num-nodes=2
Setup PostgreSQL Cloud SQL
I followed the instructions from https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine?authuser=1
and updated my config/database.yml and k8s/colors.yml with these values.
Deployed but stuck on ContainerCreating
kubectl apply -f k8s/colors.yml
kubectl get pods
NAME READY STATUS RESTARTS AGE
colors-d9f744dc-d5l5v 0/2 ContainerCreating 0 5m
colors-d9f744dc-spmws 0/2 ContainerCreating 0 5m
kubectl logs colors-d9f744dc-d5l5v -c colors # => Nothing logged
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
colors 2 2 2 0 7m
But fails to connect to the app
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
colors LoadBalancer 10.55.245.192 35.228.111.217 80:30746/TCP 1h
kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 1h
curl 35.228.111.217 # => No response! :-/
kubectl describe svc colors
Name: colors
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"colors","namespace":"default"},"spec":{"ports":[{"port":80,"targetPort":3000}]...
Selector: app=colors
Type: LoadBalancer
IP: 10.55.252.91
LoadBalancer Ingress: 35.228.203.46
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30964/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 1m service-controller ClusterIP -> LoadBalancer
Normal EnsuringLoadBalancer 1m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 30s service-controller Ensured load balancer
k8s/service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: colors
labels:
app: colors
spec:
replicas: 2
selector:
matchLabels:
app: colors
template:
metadata:
labels:
app: colors
spec:
containers:
- name: colors
image: docker.io/stabenfeldt/colors:latest
ports:
- name: http-server
containerPort: 3000
env:
- name: POSTGRES_HOST
value: 127.0.0.1:5432
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=PROJECT_ID:europe-west1:staging=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
---
apiVersion: v1
kind: Service
metadata:
name: colors
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
selector:
app: colors
kubectl describe deployment
Name: colors
Namespace: default
CreationTimestamp: Fri, 13 Jul 2018 10:37:06 +0200
Labels: app=colors
Annotations: deployment.kubernetes.io/revision=1
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"colors"},"name":"colors","namespace":"default"},"spec":{"repl...
Selector: app=colors
Replicas: 2 desired | 2 updated | 2 total | 0 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=colors
Containers:
colors:
Image: docker.io/stabenfeldt/colors:latest
Port: 3000/TCP
Environment:
POSTGRES_HOST: 127.0.0.1:5432
POSTGRES_USER: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
POSTGRES_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
Mounts: <none>
cloudsql-proxy:
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Port: <none>
Command:
/cloud_sql_proxy
-instances=MY-INSTANCE:europe-west1:staging=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: colors-d9f744dc (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 1m deployment-controller Scaled up replica set colors-d9f744dc to 2
kubectl describe service
Name: colors
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"colors","namespace":"default"},"spec":{"ports":[{"port":80,"targetPort":3000}]...
Selector: app=colors
Type: LoadBalancer
IP: 10.55.252.91
LoadBalancer Ingress: 35.228.203.46
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30964/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 4m service-controller ClusterIP -> LoadBalancer
Normal EnsuringLoadBalancer 4m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 3m service-controller Ensured load balancer
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.55.240.1
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 35.228.79.249:443
Session Affinity: ClientIP
Events: <none>
I don't see anything wrong outright, but here are a few tips for verifying that your Kubernetes objects look like they should, compared to your YAMLs:
Use the describe command to get more information about objects and make sure they are set up correctly.
For example, if you do kubectl describe deployment <deployment_name> you should verify the following line is present:
Port: 3000/TCP
And for your Service - kubectl describe service <service_name>:
LoadBalancer Ingress: <PUBLIC_IP>
Port: <unset> 80/TCP
TargetPort: 3000/TCP
Finally, I'm not sure if you want to apply the following in your LoadBalancer:
labels:
app: colors
Since you are using this label as a selector, it may be doing something funky and trying to load-balance to itself instead of to the containers running your app.
Also, as a side note on terminology: GCP (Google Cloud Platform) is the overarching name for Google's cloud services, while GKE (Google Kubernetes Engine) is the service providing you with a managed Kubernetes cluster.
Hope this helps.
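As a concrete sketch of those checks (object names taken from the manifests and outputs above):
kubectl describe deployment colors | grep -i port
kubectl describe service colors
#Endpoints: <none> means no Ready pod currently matches the app=colors selector
kubectl get endpoints colors
#the Events section usually explains why a pod is stuck in ContainerCreating (e.g. a missing secret)
kubectl describe pod colors-d9f744dc-d5l5v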
A working setup can be found in my example Rails app on GitHub.
k8s/colors.yml
# Remember to update MY-INSTANCE
apiVersion: v1
kind: Service
metadata:
name: colors-frontend
labels:
app: colors
tier: frontend
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: http-server
selector:
app: colors
tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: colors-frontend
labels:
app: colors
tier: frontend
spec:
replicas: 3
template:
metadata:
labels:
app: colors
tier: frontend
spec:
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
containers:
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=MY-INSTANCE:europe-west1:development=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: colors-app
image: docker.io/stabenfeldt/colors:1
imagePullPolicy: Always
env:
- name: RAILS_LOG_TO_STDOUT
value: "true"
- name: RAILS_ENV
value: development
- name: POSTGRES_HOST
value: 127.0.0.1
- name: POSTGRES_USERNAME
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
ports:
- name: http-server
containerPort: 3000
Your POSTGRES_HOST environment variable needs to be localhost instead of 127.0.0.1:5432. You do not need to add the port to POSTGRES_HOST.
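A minimal sketch of the corrected env entry (host only, with no port appended; the Cloud SQL proxy sidecar listens on 127.0.0.1:5432 inside the pod, and the port belongs in the database configuration instead):
- name: POSTGRES_HOST
  value: "localhost"   # or "127.0.0.1"; note there is no ":5432" suffix here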