I have Docker, Kubernetes (1.7) and Nginx all running on my RHEL7 server, with my own services running inside Docker containers and managed by Kubernetes. I know Kubernetes is working correctly with Docker because I can send a GET request to a Kubernetes pod using its own IP:PORT and it works. I set up Nginx with a default backend, and all of that is working; I verified it with the get pods and get svc commands, and everything is running as it should.
When I create the Ingress, I know Nginx picks it up, because when I run kubectl describe pods {NGINX-CONTROLLER} I can see that it updates its ingress and even logs the name I gave it. Now I get the IP address of the Kubernetes master with kubectl cluster-info and use that address to try to call my services, something along the lines of http://KUBEIPADDRESS/PATH/TO/MY/SERVICE, with no port number, but it doesn't work.
I have no idea what is going on. Can someone explain why Ingress and/or Nginx isn't routing to my services properly? My ingress and nginx files are below.
(Note: in the nginx yaml file, the Deployment for the nginx controller is all the way at the bottom.)
Ingress yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gateway-ingress
annotations:
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/rewrite-target: /
spec:
backend:
serviceName: default-http-backend
servicePort: 80
rules:
- host: testhost
http:
paths:
- path: /customer
backend:
serviceName: customer
servicePort: 9001
nginx controller yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: ingress
rules:
- apiGroups:
- ""
- "extensions"
resources:
- configmaps
- secrets
- services
- endpoints
- ingresses
- nodes
- pods
verbs:
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- apiGroups:
- ""
resources:
- events
- services
verbs:
- create
- list
- update
- get
- apiGroups:
- "extensions"
resources:
- ingresses/status
- ingresses
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: ingress-ns
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- ""
resources:
- services
verbs:
- get
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: ingress-ns-binding
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-ns
subjects:
- kind: ServiceAccount
name: ingress
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: ingress-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress
subjects:
- kind: ServiceAccount
name: ingress
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
        # Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: kube-system
labels:
k8s-app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
selector:
k8s-app: default-http-backend
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: nginx-ingress-controller
spec:
# hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
# however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
# that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
# like with kubeadm
hostNetwork: true
terminationGracePeriodSeconds: 60
serviceAccountName: ingress
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
name: nginx-ingress-controller
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
Also, when I run kubectl describe ing, I get:
Name: gateway-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
testhost
/customer customer:9001 ({IP}:9001,{IP}:9001)
Annotations:
rewrite-target: /
Events: <none>
Here are the deployment and service for the customer, in case anyone needs them:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: customer
labels:
run: customer
spec:
replicas: 2
template:
metadata:
labels:
run: customer
spec:
containers:
- name: customer
image: customer
imagePullPolicy: Always
ports:
- containerPort: 9001
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: customer
spec:
selector:
run: customer
type: NodePort
ports:
- name: port1
protocol: TCP
port: 9001
targetPort: 9001
There are some issues with your setup as far as I can see:
KUBEIPADDRESS in the URL you call: a bare IP address won't work, because you configured your Ingress to listen on the host testhost. So you need to call http://testhost/customer and configure your network to resolve testhost to the correct IP address.
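For a quick test without DNS, assuming <NODE_IP> is a placeholder for a node running the ingress controller (not a value from the question), you can either resolve testhost locally or set the Host header explicitly:
# /etc/hosts entry on the client machine (placeholder IP)
<NODE_IP>  testhost
# or override the Host header directly with curl
curl -H "Host: testhost" http://<NODE_IP>/customer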
But what is the correct IP address? You are trying to use the k8s master on port 80, and that won't work without further configuration. You need a NodePort Service for the ingress controller that exposes it on port 80 (and probably 443). To use such low ports you have to allow it with a kube-apiserver option; see --service-node-port-range on https://kubernetes.io/docs/admin/kube-apiserver/. Once that works, you can use the IP address of any node of your k8s cluster for testhost. Note: make sure no other application uses these ports on any node!
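A minimal sketch of such a NodePort Service, assuming the controller pods keep the k8s-app: nginx-ingress-controller label from the Deployment above and that --service-node-port-range has already been widened to include 80 and 443:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: nginx-ingress-controller   # matches the controller pods above
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 80        # only allowed once --service-node-port-range covers 80
    - name: https
      port: 443
      targetPort: 443
      nodePort: 443       # same restriction applies
Note that the controller Deployment above already sets hostNetwork: true and hostPort 80/443, so the node IP may already serve the ingress directly; the NodePort Service is simply the more general approach described here.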
I am trying to set up a Jenkins server inside a Kubernetes pod with minikube. Following the documentation for this, I have the following Kubernetes configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
---
apiVersion: v1
kind: Service
metadata:
name: jenkins-service
spec:
selector:
app: jenkins
ports:
- protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jenkins-ingress
annotations:
nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
kubernetes.io/ingress.class: nginx
## tells ingress to check for regex in the config file
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: jenkins-service
port:
number: 8080
I am trying to create a deployment (pod), a service for it, and an ingress to expose it.
I got the IP address of the ingress with kubectl get ingress, and it is the following:
NAME CLASS HOSTS ADDRESS PORTS AGE
jenkins-ingress <none> * 80 5s
jenkins-ingress <none> * 192.168.49.2 80 5s
When I try to open 192.168.49.2 in the browser, I get a timeout. The same happens when I try to ping this IP address from the terminal.
The addon for ingress is enabled in minikube, and when I describe the ingress, I see the following:
Name: jenkins-ingress
Namespace: default
Address: 192.168.49.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/?(.*) jenkins-service:8080 (172.17.0.4:8080)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6m15s (x2 over 6m20s) nginx-ingress-controller Scheduled for sync
Can someone tell me what I am doing wrong and how I can access the Jenkins instance through the browser?
I'm pretty new to Kubernetes and trying to deploy what I think is a fairly common use case onto a GKE cluster we created with Terraform: microservices, all hosted on one cluster. However, I cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
name: canvas-service-deployment
labels:
name: canvas-service
spec:
replicas: 2
selector:
matchLabels:
app: canvas-service
template:
metadata:
labels:
app: canvas-service
spec:
restartPolicy: Always
containers:
- name: canvas-service
image: gcr.io/emile-learning-dev/canvas-service-image:latest
readinessProbe:
httpGet:
path: /health
port: 8080
ports:
- name: root
containerPort: 8080
resources:
requests:
memory: "4096Mi"
cpu: "1000m"
limits:
memory: "8192Mi"
cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
name: canvas-service
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
name: video-service-deployment
labels:
name: video-service
spec:
replicas: 2
selector:
matchLabels:
app: video-service
template:
metadata:
labels:
app: video-service
spec:
restartPolicy: Always
containers:
- name: video-service
image: gcr.io/emile-learning-dev/video-service-image:latest
readinessProbe:
httpGet:
path: /health
port: 8080
ports:
- name: root
containerPort: 8080
resources:
requests:
memory: "4096Mi"
cpu: "1000m"
limits:
memory: "8192Mi"
cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
name: video-service
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: services-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
defaultBackend:
service:
name: canvas-service
port:
number: 80
rules:
- http:
paths:
- path: /canvas/*
pathType: ImplementationSpecific
backend:
service:
name: canvas-service
port:
number: 80
- path: /video/*
pathType: ImplementationSpecific
backend:
service:
name: video-service
port:
number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name: services-ingress
Namespace: default
Address: 34.107.136.153
Default backend: canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
Host Path Backends
---- ---- --------
*
/canvas/* canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
/video/* video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m36s loadbalancer-controller UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal Sync 9m34s loadbalancer-controller TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal Sync 9m26s loadbalancer-controller ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal IPChanged 9m26s loadbalancer-controller IP is now 34.107.136.153
Normal Sync 6m56s (x5 over 11m) loadbalancer-controller Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that each service's entire route structure lives under the /canvas or /video prefix, but I thought the /* was supposed to address exactly that. I'd basically like the root route of each service to live on the corresponding subpath (/canvas and /video). I'd love to hear any thoughts on what I'm doing wrong that's leading to traffic not being routed correctly.
If this is an issue with the default GCP Ingress resource or isn't within its functionality, I'm totally open to using an nginx Ingress. But I haven't been able to get an nginx Ingress to expose an IP at all, so I figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about that too, please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /video and /canvas will make this work.
To understand the reason behind this, read the nginx ingress controller documentation about path and pathType. You can also use a regex pattern in the path, as described in the documentation here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: when in doubt, you can always go into the nginx ingress controller pod and check the nginx.conf file to see the location blocks nginx generated.
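For illustration, a minimal sketch of the rules block with the paths suggested above (assuming pathType: Prefix and leaving the backends unchanged):
rules:
  - http:
      paths:
        - path: /canvas        # was /canvas/*
          pathType: Prefix
          backend:
            service:
              name: canvas-service
              port:
                number: 80
        - path: /video         # was /video/*
          pathType: Prefix
          backend:
            service:
              name: video-service
              port:
                number: 80
If the nginx ingress controller is used instead, the generated location blocks can be inspected from inside the controller pod (the rendered config normally lives at /etc/nginx/nginx.conf).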
I am trying to learn the basics of Istio, so I have gone through the official documentation in order to create an 80/20 canary deployment. I have also followed this guide from DigitalOcean, which explains it very clearly for a simple deployment: https://www.digitalocean.com/community/tutorials/how-to-do-canary-deployments-with-istio-and-kubernetes.
I have created a simple app with two different messages on the homepage, and then created the VirtualService, Gateway and DestinationRule. As mentioned in the guide, I get the external IP with kubectl -n istio-system get svc and then try to navigate to that address, but I get a 503 error. It seems very simple, so I must be missing something. These are my three files (as far as I understand, no other files are necessary):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
namespace: istio
name: flask-gateway
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- "*"
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: flask-app
namespace: istio
spec:
hosts:
- "*"
gateways:
- flask-gateway
http:
- route:
- destination:
host: flask-app
subset: v1
weight: 80
- destination:
host: flask-app
subset: v2
weight: 20
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: flask-app
namespace: istio
spec:
host: flask-app
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
Here are the YAML files with the deployments and services for v1 and v2:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
version: v1
name: flask-deployment-v1
namespace: istio
spec:
replicas: 1
selector:
matchLabels:
app: flask-app
template:
metadata:
labels:
version: v1
app: flask-app
spec:
containers:
- name: flask-app
image: latalavera/flask-app:1.3
ports:
- containerPort: 5000
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
name: flask-service
namespace: istio
spec:
selector:
app: flask-app
ports:
- port: 5000
protocol: TCP
targetPort: 5000
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
version: v2
name: flask-deployment-v2
namespace: istio
spec:
replicas: 1
selector:
matchLabels:
app: flask-app
template:
metadata:
labels:
app: flask-app
version: v2
spec:
containers:
- name: flask-app
image: latalavera/flask-app:2.0
ports:
- containerPort: 5000
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
name: flask-service2
namespace: istio
spec:
selector:
app: flask-app
ports:
- port: 5000
protocol: TCP
targetPort: 5000
type: ClusterIP
I have added the labels version: v1 and version: v2 to my deployments, and I have also run the kubectl label ns istio istio-injection=enabled command, but it is still not working.
You named the service flask-service and set the host in your VirtualService to flask-app.
The host field is not a selector but the FQDN of the service you want to route the traffic to, so it should be flask-service, or better, flask-service.istio.svc.cluster.local:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: flask-app
namespace: istio
spec:
hosts:
- "*"
gateways:
- flask-gateway
http:
- route:
- destination:
host: flask-service.istio.svc.cluster.local
subset: v1
weight: 80
- destination:
host: flask-service.istio.svc.cluster.local
subset: v2
weight: 20
Alternatively, you could just name the service flask-app, like the Deployment. But using the full FQDN <service-name>.<namespace-name>.svc.cluster.local is recommended in any case. From the docs:
Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> hosts
By the way, you don't need two services, just one. Your service has a selector for app: flask-app, so it can route traffic to both v1 and v2; how the traffic is split is defined by the VirtualService and DestinationRule. I would recommend removing the service flask-service2. If you need to route traffic inside the mesh as well, add mesh to the gateways of the VirtualService, or create a separate VirtualService for mesh-internal traffic, so both versions are reachable. More on that topic:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> gateways
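A minimal sketch of the single consolidated Service, assuming flask-service2 is removed and both deployments keep the app: flask-app label:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  namespace: istio
spec:
  selector:
    app: flask-app        # matches both the v1 and the v2 pods
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
The version split then lives entirely in the VirtualService and DestinationRule; if mesh-internal clients should get the same 80/20 split, mesh can be added to the VirtualService's gateways list as described above.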
Note: I have googled a lot and looked at many related questions here on Stack Overflow, but I couldn't solve my issue, so please don't mark this as a duplicate!
I'm trying to deploy two services (one Python Flask and the other Node.js) on Google Kubernetes Engine. I have created two Kubernetes Deployments, one for each service, and two Kubernetes Services of type NodePort, one for each service. Then I created an Ingress and defined my endpoints, but the Ingress says that one backend service is UNHEALTHY.
Here are my Deployments YAML definitions:
# Pyservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: pyservice
labels:
app: pyservice
namespace: default
spec:
selector:
matchLabels:
app: pyservice
template:
metadata:
labels:
app: pyservice
spec:
containers:
- name: pyservice
image: docker.io/arycloud/docker_web_app:pyservice
ports:
- containerPort: 5000
imagePullSecrets:
- name: docksecret
---
# Nodeservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nodeservice
labels:
app: nodeservice
namespace: default
spec:
selector:
matchLabels:
app: nodeservice
template:
metadata:
labels:
app: nodeservice
tier: web
spec:
containers:
- name: nodeservice
image: docker.io/arycloud/docker_web_app:nodeservice
ports:
- containerPort: 8080
imagePullSecrets:
- name: docksecret
And here are my Services and Ingress YAML definitions:
# pyservice service
kind: Service
apiVersion: v1
metadata:
name: pyservice
spec:
type: NodePort
selector:
app: pyservice
ports:
- protocol: TCP
port: 5000
nodePort: 30001
---
# nodeservcie service
kind: Service
apiVersion: v1
metadata:
name: nodeservcie
spec:
type: NodePort
selector:
app: nodeservcie
ports:
- protocol: TCP
port: 8080
nodePort: 30002
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: "gce"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: pyservice
servicePort: 5000
- path: /*
backend:
serviceName: pyservice
servicePort: 5000
- path: /node/svc/
backend:
serviceName: nodeservcie
servicePort: 8080
The pyservice is working fine, but the nodeservice shows up as an UNHEALTHY backend.
I have even edited the firewall rules for all gke-.... instances to allow all ports, just to get past this issue, but it still shows the UNHEALTHY status for nodeservice.
What's wrong here?
Thanks in advance!
Why are you using the GCE ingress class and then specifying an nginx rewrite annotation? In case you haven't realised, that annotation won't do anything with a GCE ingress.
You have also got 'nodeservcie' as your selector instead of 'nodeservice'.
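For illustration, a minimal sketch of the corrected Service, assuming the pods carry the app: nodeservice label from the Deployment above (and dropping the nginx rewrite annotation from the Ingress, since it has no effect on the GCE class):
# nodeservice service
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie          # keeping the existing name so the Ingress backend still matches
spec:
  type: NodePort
  selector:
    app: nodeservice         # fixed: was 'nodeservcie', which matched no pods
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30002
Alternatively, rename the Service itself to nodeservice, in which case the Ingress backend serviceName has to be updated to match.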
I'm trying to understand the differences between the manifest files used to bring up a Kubernetes cluster.
Say I have a file called pod.yml that defines my pod, that is, the containers running in it:
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: webserver
      image: httpd
      ports:
        - containerPort: 80
          hostPort: 80
And I have a replicator.yml file to launch 3 of these pods:
replicator.yml
kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "webserver-controller"
spec:
  replicas: 3
  selector:
    app: "webserver"
  template:
    metadata:
      labels:
        app: "webserver"   # must match the selector above
    spec:
      containers:
        - name: webserver
          image: httpd
          ports:
            - containerPort: 80
              hostPort: 80
Can I avoid the template section in replicator.yml if I'm already using pod.yml to define the images used to build the containers in the pod?
Do you need all three manifest files (pod.yml, service.yml and replicator.yml), or can you just use service.yml and replicator.yml to create the cluster?
If you are using a ReplicationController, Deployment, DaemonSet or PetSet, you don't need a separate pod definition. However, a Service should be defined if you want to expose the pods, and this can be done in the same file.
Example:
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: default
labels:
k8s-app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
name: default-http-backend
namespace: default
spec:
replicas: 1
selector:
k8s-app: default-http-backend
template:
metadata:
labels:
k8s-app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi