Following the instructions on the Keycloak docs site linked below, I'm trying to set up Keycloak to run in a Kubernetes cluster. I have an Ingress Controller set up which works successfully for a simple test page. Cloudflare points the domain to the ingress controller's IP.
Keycloak deploys successfully (Admin console listening on http://127.0.0.1:9990), but when going to the domain I get a message from NGINX: 503 Service Temporarily Unavailable.
https://www.keycloak.org/getting-started/getting-started-kube
Here's the Kubernetes config:
apiVersion: v1
kind: Service
metadata:
name: keycloak-cip
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
name: keycloak
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: nginx
service.beta.kubernetes.io/linode-loadbalancer-default-protocol: https
service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ "tls-secret-name": "my-secret", "protocol": "https" }'
spec:
rules:
- host: my.domain.com
http:
paths:
- backend:
serviceName: keycloak-cip
servicePort: 8080
tls:
- hosts:
- my.domain.com
secretName: my-secret
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:12.0.3
env:
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
value: "admin"
- name: PROXY_ADDRESS_FORWARDING
value: "true"
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
initialDelaySeconds: 90
periodSeconds: 5
failureThreshold: 30
successThreshold: 1
revisionHistoryLimit: 1
Edit:
TLS should be handled by the ingress controller.
--
Edit 2:
If I go into the controller using kubectl exec, I can do curl -L http://127.0.0.1:8080/auth which successfully retrieves the page:
<title>Welcome to Keycloak</title>. So I'm sure that Keycloak is running. It's just that either traffic doesn't reach the pod, or Keycloak doesn't respond.
If I use the ClusterIP instead but otherwise keep the call above the same, I get a Connection timed out. I tried both ports 80 and 8080 with the same result.
The following configuration is required to run Keycloak behind an ingress controller:
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: KEYCLOAK_HOSTNAME
value: "my.domain.com"
So I think adding the correct KEYCLOAK_HOSTNAME value should solve your issue.
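As a minimal sketch, the env section of the Keycloak container in your Deployment would then look roughly like this (my.domain.com stands in for whatever hostname you actually serve Keycloak on):

env:
- name: KEYCLOAK_USER
  value: "admin"
- name: KEYCLOAK_PASSWORD
  value: "admin"
# Make Keycloak honour the X-Forwarded-* headers set by the ingress controller
- name: PROXY_ADDRESS_FORWARDING
  value: "true"
# Fix the hostname Keycloak advertises, since requests arrive through the proxy
- name: KEYCLOAK_HOSTNAME
  value: "my.domain.com"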
I had a similar issue with Traefik Ingress Controller:
Can't expose Keycloak Server on AWS with Traefik Ingress Controller and AWS HTTPS Load Balancer
You can find the full code of my configuration here:
https://github.com/skyglass-examples/user-management-keycloak
Hello, have you tried adding this line:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
It looks like it is missing from your config file, which results in the 503 error. Check this for more input on the Kubernetes config.
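As a sketch, that annotation sits in the Ingress metadata next to the annotations you already have. Keep in mind it only applies if the Service actually forwards to Keycloak's TLS port (8443); if the Service keeps targeting 8080, the backend protocol stays HTTP:

metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Use HTTPS here only if the backend Service targets Keycloak's 8443 TLS port
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"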
I am trying to create an Ingress file to route URLs to the internal services, but after calling it in Postman it just returns a 503 error.
This is my Ingress config file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts/create
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 7000
This is my posts Deployment file and ClusterIP Service:
apiVersion: apps/v1
kind: Deployment
metadata:
name: posts-depl
spec:
replicas: 1
selector:
matchLabels:
app: posts
template:
metadata:
labels:
app: posts
spec:
containers:
- name: posts
image: 4765/posts
---
apiVersion: v1
kind: Service
metadata:
name: posts-clusterip-srv
spec:
selector:
app: posts
ports:
- name: posts
protocol: TCP
port: 7000
targetPort: 7000
When I send the request http://posts.com/posts/create in Postman, it just returns 503 Service Unavailable. I tried to curl the ClusterIP service with curl http://posts-clusterip-srv:7000, but it responds with Could not resolve host: posts-clusterip-srv.
I don't know what to do.
Does your app server accept requests on /?
As path: /posts/create will forward the request to your server, which will receive a request on /.
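If the app only serves the root path, one common way to get that behaviour with ingress-nginx is a rewrite. This is only a sketch, not part of your original setup; the use-regex and rewrite-target annotations and the capture-group path are standard ingress-nginx features:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # Strip the /posts/create prefix before proxying, so the app receives "/"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: posts.com
    http:
      paths:
      - path: /posts/create(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: posts-clusterip-srv
            port:
              number: 7000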
Concerning curl http://posts-clusterip-srv:7000, it depends on the setup of your cluster:
If you are using a local cluster on your computer, you should modify your /etc/hosts and add your local IP as posts.com; then you should be able to curl it.
If your cluster is on a server, it seems to be a DNS problem; the same way as above, you can add the server IP to your hosts file to avoid using DNS.
I'm pretty new to Kubernetes and am trying to deploy what I think is a pretty common use case onto a GKE cluster we created with Terraform: microservices, all hosted on one cluster. But I cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
name: canvas-service-deployment
labels:
name: canvas-service
spec:
replicas: 2
selector:
matchLabels:
app: canvas-service
template:
metadata:
labels:
app: canvas-service
spec:
restartPolicy: Always
containers:
- name: canvas-service
image: gcr.io/emile-learning-dev/canvas-service-image:latest
readinessProbe:
httpGet:
path: /health
port: 8080
ports:
- name: root
containerPort: 8080
resources:
requests:
memory: "4096Mi"
cpu: "1000m"
limits:
memory: "8192Mi"
cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
name: canvas-service
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
name: video-service-deployment
labels:
name: video-service
spec:
replicas: 2
selector:
matchLabels:
app: video-service
template:
metadata:
labels:
app: video-service
spec:
restartPolicy: Always
containers:
- name: video-service
image: gcr.io/emile-learning-dev/video-service-image:latest
readinessProbe:
httpGet:
path: /health
port: 8080
ports:
- name: root
containerPort: 8080
resources:
requests:
memory: "4096Mi"
cpu: "1000m"
limits:
memory: "8192Mi"
cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
name: video-service
spec:
type: NodePort
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: services-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
defaultBackend:
service:
name: canvas-service
port:
number: 80
rules:
- http:
paths:
- path: /canvas/*
pathType: ImplementationSpecific
backend:
service:
name: canvas-service
port:
number: 80
- path: /video/*
pathType: ImplementationSpecific
backend:
service:
name: video-service
port:
number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name: services-ingress
Namespace: default
Address: 34.107.136.153
Default backend: canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
Host Path Backends
---- ---- --------
*
/canvas/* canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
/video/* video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations: ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m36s loadbalancer-controller UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal Sync 9m34s loadbalancer-controller TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal Sync 9m26s loadbalancer-controller ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
Normal IPChanged 9m26s loadbalancer-controller IP is now 34.107.136.153
Normal Sync 6m56s (x5 over 11m) loadbalancer-controller Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that the entire route structure of each service lives under the /canvas or /video prefix, but I thought that the /* was supposed to address exactly that. I'd basically like the root route of each service to exist on the corresponding subpaths /canvas and /video. I'd love to hear any thoughts on what I'm doing wrong that's leading to traffic not being routed correctly.
If it's an issue with the default GCP Ingress resource or this isn't within its functionality, I'm totally open to using an nginx Ingress. But I haven't been able to get an nginx Ingress to expose an IP at all, so I figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about that too, please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /video and /canvas will make this work.
To understand the reason behind this, you need to read the nginx ingress controller documentation about path and pathType. You can also use a regex pattern in the path, according to the documentation here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: when in doubt, you can always go into the nginx ingress controller pods and check the nginx.conf file to see the location blocks nginx generated.
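A minimal sketch of the adjusted rules, following the suggestion above (the services and ports are the ones from the question; whether the apps themselves serve /canvas/health and /video/health, or still need a rewrite, is a separate question):

rules:
- http:
    paths:
    - path: /canvas
      pathType: Prefix
      backend:
        service:
          name: canvas-service
          port:
            number: 80
    - path: /video
      pathType: Prefix
      backend:
        service:
          name: video-service
          port:
            number: 80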
I have an application running in kubernetes pod (on my local docker desktop, with kubernetes enabled), listening on port 8080. I then have the following kubernetes configuration
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myrelease-foobar-app-gw
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: default-foobar-local-credential
hosts:
- test.foobar.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myrelease-foobar-app-vs
namespace: default
spec:
hosts:
- test.foobar.local
gateways:
- myrelease-foobar-app-gw
http:
- match:
- port: 443
route:
- destination:
host: myrelease-foobar-app.default.svc.cluster.local
subset: foobarAppDestination
port:
number: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: myrelease-foobar-app-destrule
namespace: default
spec:
host: myrelease-foobar-app.default.svc.cluster.local
subsets:
- name: foobarAppDestination
labels:
app.kubernetes.io/instance: myrelease
app.kubernetes.io/name: foobar-app
---
apiVersion: v1
kind: Service
metadata:
name: myrelease-foobar-app
namespace: default
labels:
helm.sh/chart: foobar-app-0.1.0
app.kubernetes.io/name: foobar-app
app.kubernetes.io/instance: myrelease
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- port: 8081
targetPort: 8080
protocol: TCP
name: http
selector:
app.kubernetes.io/name: foobar-app
app.kubernetes.io/instance: myrelease
This works fine. But I'd like to change that port 443 into something else, say 8443 (because I will have multiple Gateways). When I do this, I can't access the application anymore. Is there some configuration that I'm missing? I'm guessing I need to configure Istio to accept port 8443 too? I installed Istio using the following command:
istioctl install --set profile=default -y
Edit:
I've done a bit more reading (https://www.dangtrinh.com/2019/09/how-to-open-custom-port-on-istio.html), and I've done the following:
kubectl -n istio-system get service istio-ingressgateway -o yaml > istio_ingressgateway.yaml
edit istio_ingressgateway.yaml, and add the following:
- name: foobarhttps
nodePort: 32700
port: 445
protocol: TCP
targetPort: 8445
kubectl apply -f istio_ingressgateway.yaml
Change within my Gateway above:
- port:
number: 445
name: foobarhttps
protocol: HTTPS
Change within my VirtualService above:
http:
- match:
- port: 445
But I still can't access it from my browser (https://foobar.test.local:445).
I suppose that port has to be mapped on the Istio Ingress Gateway, so if you want to use a custom port, you might have to customize that.
But usually it is not a problem if multiple Gateways use the same port; it does not cause a clash. So for that use case it should not be necessary.
Fixed it. What I had done wrong in my edit above was this:
- name: foobarhttps
nodePort: 32700
port: 445
protocol: TCP
targetPort: 8443
(Notice that targetPort is still 8443.) I'm guessing there is an Istio component listening on port 8443 which handles all the HTTPS stuff. Thanks user140547 for the help!
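To recap, a sketch of the three places where the custom port has to agree, using the values from this thread (445 and foobarhttps were the values chosen here; targetPort stays 8443 because that is where the ingress gateway's HTTPS listener runs):

# istio-ingressgateway Service (ports list): expose the new port, keep targetPort 8443
- name: foobarhttps
  nodePort: 32700
  port: 445
  protocol: TCP
  targetPort: 8443

# Gateway (servers list): a server listening on the same port number
- port:
    number: 445
    name: foobarhttps
    protocol: HTTPS

# VirtualService (http match): route the traffic arriving on that port
http:
- match:
  - port: 445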
I have a couple of applications which run in Docker containers (all on the same VM).
In front of them, I have an nginx container as a reverse proxy.
Now I want to migrate that to Kubernetes.
When I start them locally with docker-compose, it works as expected.
On Kubernetes it does not.
nginx.conf
http {
server {
location / {
proxy_pass http://app0:80;
}
location /app1/ {
proxy_pass http://app1:80;
rewrite ^/app1(.*)$ $1 break;
}
location /app2/ {
proxy_pass http://app2:80;
rewrite ^/app2(.*)$ $1 break;
}
}
}
Edit: nginx.conf is not used on Kubernetes. I have to use an ingress controller for that:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: app0
spec:
replicas: 1
template:
metadata:
labels:
app: app0
spec:
nodeSelector:
"beta.kubernetes.io/os": linux
containers:
- name: app0
image: appscontainerregistry1.azurecr.io/app0:latest
imagePullPolicy: Always
ports:
- containerPort: 80
name: nginx
---
#the other apps
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
annotations:
# use the shared ingress-nginx
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: apps-url.com
http:
paths:
- path: /
backend:
serviceName: app0
servicePort: 80
- path: /app1
backend:
serviceName: app1
servicePort: 80
- path: /app2
backend:
serviceName: app2
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: loadbalancer
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: ingress-nginx
I get the response on / (app0). Unfortunately, the subroutes are not working. What am I doing wrong?
EDIT
I figured it out. I had missed installing the ingress controller. As described on this page (https://kubernetes.io/docs/concepts/services-networking/ingress/), the Ingress doesn't work if no controller is installed.
I used ingress-nginx as the controller (https://kubernetes.github.io/ingress-nginx/deploy/) because it had the best-described install guide I was able to find and I didn't want to use Helm.
I have one more question: how can I change my Ingress so that the subroutes work?
For example, k8url.com/app1/subroute always shows me the start page of my app1.
And if I use domain name proxying, it always replaces the domain name with the IP.
You have created the Deployment successfully, but along with it the Service should be there. The NGINX Ingress on Kubernetes manages traffic based on the Service.
So the flow goes like:
nginx-ingress > service > deployment pod.
You are missing the Services for both applications; create them and add the proper routes based on them in the Kubernetes Ingress.
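As a sketch, a Service for app1 that lines up with the Ingress backend above could look like this (it assumes the app1 Deployment labels its pods app: app1 the same way app0 does; app2 would get an equivalent Service):

apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  type: ClusterIP
  ports:
  - port: 80        # servicePort referenced by the Ingress
    targetPort: 80  # containerPort of the app1 container
  selector:
    app: app1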
Add this:
apiVersion: v1
kind: Service
metadata:
name: loadbalancer
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
selector:
app: ingress-nginx
Because you did not set the LoadBalancer Service to route to targetPort 80.
I'm trying to deploy two services on Google Container Engine; I have created a cluster with 3 nodes.
My Docker images are in a private Docker Hub repo, which is why I have created a secret and used it in the Deployments. The Ingress is creating a load balancer in the Google Cloud console, but it shows that the backend services are not healthy, and inside the Kubernetes section under Workloads it says "Does not have minimum availability".
I'm new to Kubernetes; what could the problem be?
Here are my YAML files:
Deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
name: pythonaryapp
labels:
app: pythonaryapp
spec:
replicas: 1 #We always want more than 1 replica for HA
selector:
matchLabels:
app: pythonaryapp
template:
metadata:
labels:
app: pythonaryapp
spec:
containers:
- name: pythonaryapp #1st container
image: docker.io/arycloud/docker_web_app:pythonaryapp #Dockerhub image
ports:
- containerPort: 8080 #Exposes the port 8080 of the container
env:
- name: PORT #Env variable key passed to container that is read by app
value: "8080" # Value of the env port.
readinessProbe:
httpGet:
path: /healthz
port: 8080
periodSeconds: 2
timeoutSeconds: 2
successThreshold: 2
failureThreshold: 10
imagePullSecrets:
- name: docksecret
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: pythonaryapp1
labels:
app: pythonaryapp1
spec:
replicas: 1 #We always want more than 1 replica for HA
selector:
matchLabels:
app: pythonaryapp1
template:
metadata:
labels:
app: pythonaryapp1
spec:
containers:
- name: pythonaryapp1 #1st container
image: docker.io/arycloud/docker_web_app:pythonaryapp1 #Dockerhub image
ports:
- containerPort: 8080 #Exposes the port 8080 of the container
env:
- name: PORT #Env variable key passed to container that is read by app
value: "8080" # Value of the env port.
readinessProbe:
httpGet:
path: /healthz
port: 8080
periodSeconds: 2
timeoutSeconds: 2
successThreshold: 2
failureThreshold: 10
imagePullSecrets:
- name: docksecret
---
And here's services.yaml:
kind: Service
apiVersion: v1
metadata:
name: pythonaryapp
spec:
type: NodePort
selector:
app: pythonaryapp
ports:
- protocol: TCP
port: 8080
---
---
kind: Service
apiVersion: v1
metadata:
name: pythonaryapp1
spec:
type: NodePort
selector:
app: pythonaryapp1
ports:
- protocol: TCP
port: 8080
---
And Here's my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mysvcs
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: pythonaryapp
servicePort: 8080
- path: /<name>
backend:
serviceName: pythonaryapp1
servicePort: 8080
Update:
Here's the Flask service code:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World, from Python Service.', 200

if __name__ == '__main__':
    app.run()
And, on running the container from its Docker image, it returns a 200 status code at the root path /.
Thanks in advance!
Have a look at this post. It might contain helpful tips for your issue.
For example, I see a readiness probe but no liveness probe in your config files.
This post suggests that "Does not have minimum availability" in Kubernetes could be the result of a CrashLoopBackOff caused by a failing liveness probe.
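As a sketch, a liveness probe mirroring the existing readiness probe could be added to each container; this assumes /healthz really is served by the app, otherwise the probe itself will cause restarts:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3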
In GKE, the Ingress is implemented by a GCP load balancer. The GCP LB checks the health of the service by calling the service address at the root path '/'. Make sure that your container responds with 200 on the root, or alternatively change the LB backend service health check route (you can do that in the GCP console).
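Given that the Flask app above only answers on /, one minimal sketch is to point the readiness probe at the root path; the /healthz path in the original Deployment is not implemented by that app, so the probe fails and the pods never become ready:

readinessProbe:
  httpGet:
    path: /      # the only route the Flask app serves
    port: 8080
  periodSeconds: 2
  timeoutSeconds: 2
  successThreshold: 2
  failureThreshold: 10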