I'm pretty new to Kubernetes and trying to deploy what I think is a pretty common use case onto a GKE cluster we created with Terraform: microservices all hosted on one cluster. But I cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canvas-service-deployment
  labels:
    name: canvas-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: canvas-service
  template:
    metadata:
      labels:
        app: canvas-service
    spec:
      restartPolicy: Always
      containers:
        - name: canvas-service
          image: gcr.io/emile-learning-dev/canvas-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: canvas-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-service-deployment
  labels:
    name: video-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: video-service
  template:
    metadata:
      labels:
        app: video-service
    spec:
      restartPolicy: Always
      containers:
        - name: video-service
          image: gcr.io/emile-learning-dev/video-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: video-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  defaultBackend:
    service:
      name: canvas-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /canvas/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: canvas-service
                port:
                  number: 80
          - path: /video/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: video-service
                port:
                  number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name: services-ingress
Namespace: default
Address: 34.107.136.153
Default backend: canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
Host Path Backends
---- ---- --------
*
/canvas/* canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
/video/* video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m36s loadbalancer-controller UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal Sync 9m34s loadbalancer-controller TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal Sync 9m26s loadbalancer-controller ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal IPChanged 9m26s loadbalancer-controller IP is now 34.107.136.153
Normal Sync 6m56s (x5 over 11m) loadbalancer-controller Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that the entire route structure of each service lives under the /canvas or /video prefix, but I thought that the /* was supposed to address exactly that. I'd like to basically make the root route of each service exist on the corresponding subpaths /canvas and /video. Would love to hear any thoughts you guys have as to what I'm doing wrong that's leading to traffic not being routed correctly.
If it's an issue with the GCP default Ingress resource, or this isn't within its functionality, I'm totally open to using an nginx Ingress. But I haven't been able to get an nginx Ingress to expose an IP at all, so I figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about this too, please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /canvas and /video (with pathType: Prefix) should make this work.
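As a minimal sketch (not your exact manifest) of what that change could look like, assuming a reasonably recent GKE version that honors pathType: Prefix:

# Hypothetical adjusted rules, replacing the /canvas/* and /video/* wildcards
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  rules:
    - http:
        paths:
          - path: /canvas
            pathType: Prefix
            backend:
              service:
                name: canvas-service
                port:
                  number: 80
          - path: /video
            pathType: Prefix
            backend:
              service:
                name: video-service
                port:
                  number: 80

Keep in mind that the GCE ingress does not rewrite the path, so the backend still receives the full request path (e.g. /canvas/health); the services need to serve their routes under those prefixes.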
To understand the reason behind this, read the nginx ingress controller documentation on path and pathType. You can also use a regex pattern in the path, as described here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: When in doubt, you can always exec into the nginx ingress controller pods and check the nginx.conf file to see the location blocks nginx has generated.
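If you do end up switching to the nginx ingress controller, the usual way to get the behavior you describe (each service's root served under /canvas and /video) is a regex path combined with the rewrite-target annotation, which strips the prefix before the request reaches the service. A rough sketch reusing your two services, not a drop-in manifest:

# Hypothetical nginx-ingress equivalent that rewrites /canvas/health to /health
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /canvas(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: canvas-service
                port:
                  number: 80
          - path: /video(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: video-service
                port:
                  number: 80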
Related
I am trying to set up a Jenkins server inside a Kubernetes container with minikube. Following the documentation, I have the following Kubernetes configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  selector:
    app: jenkins
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
    kubernetes.io/ingress.class: nginx
    ## tells ingress to check for regex in the config file
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: jenkins-service
                port:
                  number: 8080
I am trying to create a deployment (pod), a service for it, and an ingress to expose it.
I got the ip address of the ingress with kubectl get ingress and it is the following:
NAME CLASS HOSTS ADDRESS PORTS AGE
jenkins-ingress <none> * 80 5s
jenkins-ingress <none> * 192.168.49.2 80 5s
When I try to open 192.168.49.2 in the browser, I get a timeout. The same happens when I try to ping this IP address from the terminal.
The addon for ingress is enabled in minikube, and when I describe the ingress, I see the following:
Name: jenkins-ingress
Namespace: default
Address: 192.168.49.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/?(.*) jenkins-service:8080 (172.17.0.4:8080)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6m15s (x2 over 6m20s) nginx-ingress-controller Scheduled for sync
Can someone tell me what I am doing wrong and how I can access the Jenkins instance through the browser?
I am trying to learn the basics of Istio, so I have gone through the official documentation here in order to create an 80/20 canary deployment. I have also followed this guide from DigitalOcean, which explains it very clearly with a simple deployment: https://www.digitalocean.com/community/tutorials/how-to-do-canary-deployments-with-istio-and-kubernetes.
I have created a simple app with 2 different messages on the homepage, and then created the VirtualService, Gateway and DestinationRule. As mentioned in the guide, I get the external IP with kubectl -n istio-system get svc and then try to navigate to that address, but I get a 503 error. It seems very simple, but I have to be missing something. These are my 3 files (as far as I understand, no more files are necessary):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - "*"
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-app
            subset: v1
          weight: 80
        - destination:
            host: flask-app
            subset: v2
          weight: 20
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flask-app
  namespace: istio
spec:
  host: flask-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Here are the YAML files with the deployments and services for v1 and v2:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v1
  name: flask-deployment-v1
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        version: v1
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:1.3
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v2
  name: flask-deployment-v2
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
        version: v2
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:2.0
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service2
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
I have added the labels version: v1 and version: v2 to my deployments, and I have also run the kubectl label ns istio istio-injection=enabled command, but it is still not working.
You named the service flask-service and set the host in your VirtualService to flask-app.
The host field is not a selector but the FQDN of the service you want to route the traffic to. So it should be flask-service or, better, flask-service.istio.svc.cluster.local:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20
Alternatively, you could just name the service flask-app, matching the host you already use. But using the full FQDN <service-name>.<namespace-name>.svc.cluster.local is recommended in any case. From the docs:
Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> hosts
You btw don't need 2 services, just one. Your service has a selector for app: flask-app, so it can route traffic to v1 and v2. How the traffic is routed is defined by the VirtualService and DestinationRule. I would recommend removing the service flask-service2. If you need to route the traffic inside the mesh, add mesh to the gateways in the VirtualService, or create a new VirtualService for mesh-internal traffic, to reach both versions (see the sketch after the link below). More on that topic:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> gateways
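As a rough sketch of that second option (a separate VirtualService for mesh-internal traffic only; the name flask-app-internal is just a hypothetical choice), omitting the gateways field binds it to the mesh gateway by default:

# Hypothetical extra VirtualService applying the same 80/20 split to in-mesh traffic
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app-internal
  namespace: istio
spec:
  hosts:
    - flask-service.istio.svc.cluster.local
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20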
I have a Pod that contains an image of a NodeJS project serving a build of Vue.js (manual SSR).
And I have an Ingress that matches a host to the service connecting to the Pod, as follows:
The problem I'm facing is that static assets (CSS, JS and images) aren't served, so they need a "server" that handles these files.
I tried having the Pod run two containers, NodeJS + Nginx, copying the static files into Nginx's /var/www/html and serving them from there using an nginx location rule, but it looks like overkill to put that in production.
I'm curious what the best way to do this is. Maybe using Ingress controller rules/annotations, or some other way that I am missing?
My Ingress controller looks like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.ssr.local
      http:
        paths:
          - path: /
            backend:
              serviceName: service-internal-ssr
              servicePort: 8080
My Deployment looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ssr
spec:
  selector:
    matchLabels:
      app: ssr
  template:
    metadata:
      labels:
        app: ssr
    spec:
      containers:
        - name: ssr
          image: ...
          resources:
            limits:
              cpu: 0.5
              memory: 1000Mi
            requests:
              cpu: 0.2
              memory: 500Mi
          ports:
            - name: app-port
              containerPort: 2055
---
apiVersion: v1
kind: Service
metadata:
  name: service-internal-ssr
spec:
  type: ClusterIP
  selector:
    app: ssr
  ports:
    - port: 8080
      targetPort: 2055
Thank you in advance
I would suggest serving your static content from something like a bucket or CDN, something as close to the client as possible that is better suited for this use case. Using K8s for this is overkill.
Is it possible to fix the IP and port somewhere in my YAML?
My application has 3 parts: a frontend with its respective load balancer, a backend with its respective load balancer, and the database with a StatefulSet and a persistent volume. These 3 applications have their respective HPA rules.
I'm including the YAML of the backend. Is it possible to set the IP and the port? I am working locally, and every so often I have to change the port or the IP.
backend.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: XXXXXXX
          command: ["/bin/sh"]
          args: ["-c", "node index.js"]
          ports:
            - containerPort: 4000
          imagePullPolicy: IfNotPresent
          env:
            - name: HOST_DB
              value: "172.17.0.3"
            - name: PORT_DB
              value: "31109"
          resources:
            requests:
              memory: "128Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "1000m"
          readinessProbe:
            httpGet:
              path: /
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 15
            periodSeconds: 20
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 4000
      name: https
  type: LoadBalancer
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
The result looks like this: [screenshot]
In minikube the IP is fixed anyway, since it is the single node's IP. You can hardcode the NodePort by specifying nodePort in the service. Without nodePort specified in the service, Kubernetes will assign a port from the range 30000-32767:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 4000
      name: https
      nodePort: 30007
Follow this guide to expose applications via NodePort type service in minikube.
I'm trying to deploy two services on Google Container Engine. I have created a cluster with 3 nodes.
My Docker images are in a private Docker Hub repo, which is why I have created a secret and used it in the Deployments. The Ingress creates a load balancer in the Google Cloud console, but it shows that the backend services are not healthy, and in the Kubernetes section under workloads it says "Does not have minimum availability".
I'm new to Kubernetes, what could be the problem?
Here are my yamls:
Deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp
  labels:
    app: pythonaryapp
spec:
  replicas: 1 #We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp
  template:
    metadata:
      labels:
        app: pythonaryapp
    spec:
      containers:
        - name: pythonaryapp #1st container
          image: docker.io/arycloud/docker_web_app:pythonaryapp #Dockerhub image
          ports:
            - containerPort: 8080 #Exposes the port 8080 of the container
          env:
            - name: PORT #Env variable key passed to container that is read by app
              value: "8080" # Value of the env port.
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 10
      imagePullSecrets:
        - name: docksecret
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp1
  labels:
    app: pythonaryapp1
spec:
  replicas: 1 #We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp1
  template:
    metadata:
      labels:
        app: pythonaryapp1
    spec:
      containers:
        - name: pythonaryapp1 #1st container
          image: docker.io/arycloud/docker_web_app:pythonaryapp1 #Dockerhub image
          ports:
            - containerPort: 8080 #Exposes the port 8080 of the container
          env:
            - name: PORT #Env variable key passed to container that is read by app
              value: "8080" # Value of the env port.
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 10
      imagePullSecrets:
        - name: docksecret
---
And here's services.yaml:
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp
spec:
  type: NodePort
  selector:
    app: pythonaryapp
  ports:
    - protocol: TCP
      port: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp1
spec:
  type: NodePort
  selector:
    app: pythonaryapp1
  ports:
    - protocol: TCP
      port: 8080
---
And Here's my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysvcs
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: pythonaryapp
              servicePort: 8080
          - path: /<name>
            backend:
              serviceName: pythonaryapp1
              servicePort: 8080
Update:
Here's flask service code:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World, from Python Service.', 200

if __name__ == '__main__':
    app.run()
And, on running the container from its Docker image, it returns a 200 status code at the root path /.
Thanks in advance!
Have a look at this post. It might contain helpful tips for your issue.
For example, I see a readiness probe but no liveness probe in your config files.
This post suggests that "Does not have minimum availability" in k8s can be the result of a CrashLoopBackOff caused by a failing liveness probe.
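If you want to add one, here is a minimal sketch of a liveness probe for these containers, pointing at the root path (which, per your update, the Flask app does serve with a 200); the timing values are just illustrative:

# Hypothetical liveness probe to add next to the existing readinessProbe
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3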
In GKE, the Ingress is implemented by a GCP load balancer. The GCP LB checks the health of the service by calling it at the service address with the root path '/'. Make sure that your container can respond with 200 on the root path, or alternatively change the LB backend service health check route (you can do it in the GCP console).
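If you'd rather keep that change in YAML than in the console, GKE also offers a BackendConfig resource for customizing the load balancer health check. A rough sketch, assuming /healthz is a path your app actually serves (on older GKE versions the apiVersion may be cloud.google.com/v1beta1):

# Hypothetical BackendConfig pointing the GCLB health check at /healthz
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: pythonaryapp-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 8080
---
# Attach it to the Service with the backend-config annotation
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp
  annotations:
    cloud.google.com/backend-config: '{"default": "pythonaryapp-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: pythonaryapp
  ports:
    - protocol: TCP
      port: 8080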