Why is the external IP of AKS not working? Is something wrong with the YAML? - azure-aks

I'm trying to deploy a simple API to AKS. The API works fine in local Docker.
I get no errors, and AKS shows the status as Running. However, the browser does not respond. Could someone check the YAML or give a tip on what could be wrong?
C:\Azure\ACIPythonAPI\conda-flask-api> kubectl get service demo-flask-api --watch
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
demo-flask-api   LoadBalancer   10.0.105.135   11.22.33.44   80:31551/TCP   5m53s
aks.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: demo-flask-api
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo-flask-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-flask-api
  template:
    metadata:
      labels:
        app: demo-flask-api
    spec:
      containers:
        - name: demo-flask-api
          image: mycontainerregistry.azurecr.io/demo/flask-api:v1
          ports:
            - containerPort: 8080

It seems you did not define an Ingress resource nor an Ingress Controller.
Here is a walkthrough from Microsoft for installing ingress-nginx as an Ingress Controller on your cluster; a rough install sketch is shown below.
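The Microsoft walkthrough uses Helm; this is only a sketch of that install (the release name, namespace, and chart value used here are assumptions, so check the walkthrough for the exact steps):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Install the controller into its own namespace and hand it the static IP.
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.loadBalancerIP=<YOUR_STATIC_IP>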
You then expose the ingress controller service like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The Ingress Controller will then route incoming traffic to your demo Flask API via an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-flask-api
                port:
                  number: 80
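A quick way to verify the setup (a sketch only; add -n <namespace> if the controller runs in its own namespace and substitute your real static IP):

kubectl get service ingress-nginx-controller   # EXTERNAL-IP should show <YOUR_STATIC_IP>
kubectl get ingress minimal-ingress            # ADDRESS should eventually show the same IP
curl -i http://<YOUR_STATIC_IP>/               # should reach the Flask API through the Ingress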

Related

Why does my canary deployment not work with Istio?

I am trying to learn the basics of Istio, so I have gone through the official documentation in order to create an 80/20 canary deployment. I have also followed this DigitalOcean guide, which explains it very simply for a basic deployment: https://www.digitalocean.com/community/tutorials/how-to-do-canary-deployments-with-istio-and-kubernetes.
I have created a simple app with two different messages on the homepage, and then created the VirtualService, Gateway and DestinationRule. As mentioned in the guide, I get the external IP with kubectl -n istio-system get svc and then try to navigate to that address, but I get a 503 error. It seems very simple, but I have to be missing something. These are my three files (as far as I understand, no other files are necessary):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - "*"
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-app
            subset: v1
          weight: 80
        - destination:
            host: flask-app
            subset: v2
          weight: 20
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flask-app
  namespace: istio
spec:
  host: flask-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Here are the YAML files with the deployments and services for v1 and v2:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v1
  name: flask-deployment-v1
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        version: v1
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:1.3
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v2
  name: flask-deployment-v2
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
        version: v2
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:2.0
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service2
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
I have added the labels version: v1 and version: v2 to my deployments, and I have also run the kubectl label ns istio istio-injection=enabled command, but it is still not working.
You named the service flask-service and set the host in your VirtualService to flask-app.
The host field is not a selector but the FQDN of the service you want to route the traffic to. So it should be called flask-service or better flask-service.istio.svc.cluster.local:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20
Alternatively, you could just name the service flask-app, like the Deployment. But using the full FQDN <service-name>.<namespace-name>.svc.cluster.local is recommended in any case. From the docs:
Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> hosts
You btw don't need two services, just one. Your service has a selector for app: flask-app, so it can route traffic to both v1 and v2. How the traffic is routed is defined by the VirtualService and DestinationRule. I would recommend removing the service flask-service2. If you need to route traffic inside the mesh, add mesh to the gateways in the VirtualService, or create a new VirtualService for mesh-internal traffic, so both versions can be reached. More on that topic:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> gateways
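For illustration, a minimal sketch of the single Service that covers both versions (this is essentially the existing flask-service from the question, kept on port 5000):

apiVersion: v1
kind: Service
metadata:
  name: flask-service
  namespace: istio
spec:
  selector:
    app: flask-app        # matches the pods of both flask-deployment-v1 and flask-deployment-v2
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP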

Expose basepath in container through service in nginx ingress on kubernetes

My ASP.NET Core web application sets a base path in Startup, like this:
app.UsePathBase("/app1");
So the application runs under a base path. For example, if the application is running on localhost:5000, then app1 will be accessible at 'localhost:5000/app1'.
With the nginx ingress (or any ingress) we can expose the entire container through a service to the outside of the Kubernetes cluster.
My Kubernetes deployment YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  selector:
    matchLabels:
      app: app1
  replicas: 1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1-container
          image: app1:latest
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: app1-service
  labels:
    app: app1
spec:
  type: NodePort
  ports:
    - name: app1-port
      port: 5001
      targetPort: 5001
      protocol: TCP
  selector:
    app: app1
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /app1(/|$)(.*)
            backend:
              serviceName: server-service
              servicePort: 5001
The above ingress exposes the entire container at the 'localhost/app1' URL, so the application runs on the '/app1' virtual path. But the app1 application then ends up being accessible at 'localhost/app1/app1'.
So I want to know whether there is any way in the ingress to route a 'localhost/app1' request to the base path of the container application, 'localhost:5001/app1'.
If I understand correctly, the app is currently accessible at /app1/app1 and you would like it to be accessible at /app1.
To do this, don't use the rewrite annotation:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: server-service
              servicePort: 5001
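For what it's worth, on clusters where networking.k8s.io/v1beta1 is no longer served, a roughly equivalent Ingress in networking.k8s.io/v1 would look like this (the ingressClassName value is an assumption and depends on how your nginx controller is installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: server-service
                port:
                  number: 5001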

Exposing Neo4j Bolt using Kubernetes Ingress

I'm trying to build a Neo4j learning tool for some of our trainings. I want to use Kubernetes to spin up a Neo4j pod for each participant to use. Currently I'm struggling to expose the Bolt endpoint through an Ingress, and I don't know why.
Here are my deployment configs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neo-manager
      type: database
  template:
    metadata:
      labels:
        app: neo-manager
        type: database
    spec:
      containers:
        - name: neo4j
          imagePullPolicy: IfNotPresent
          image: neo4j:3.5.6
          ports:
            - containerPort: 7474
            - containerPort: 7687
              protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: neo4j-service
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  selector:
    app: neo-manager
    type: database
  ports:
    - port: 7687
      targetPort: 7687
      name: bolt
      protocol: TCP
    - port: 7474
      targetPort: 7474
      name: client
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: learn.neo4j.com
      http:
        paths:
          - path: /
            backend:
              serviceName: neo4j-service
              servicePort: 7474
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: learn
data:
  7687: "learn/neo4j-service:7687"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: learn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.16
          args:
            - /nginx-ingress-controller
            - --tcp-services-configmap=${POD_NAMESPACE}/tcp-services
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
The client gets exposed nicely and is reachable under learn.neo4j.com, but I don't know what to point it at in order to connect to the DB using Bolt. Whatever I try, it fails with ServiceUnavailable: Websocket Connection failure (WebSocket network error: The operation couldn’t be completed. Connection refused in the console).
What am I missing?
The nginx-ingress-controller by default creates HTTP(S) proxies only.
In your case you're trying to use a different protocol (Bolt), so you need to configure your ingress controller so that it creates a TCP proxy.
To do so, create a ConfigMap (in the nginx-ingress-controller namespace) similar to the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  7687: "<your neo4j namespace>/neo4j-service:7687"
Then, make sure your ingress controller has the following flag in its command:
--tcp-services-configmap tcp-services
This will make your nginx-ingress controller listen on port 7687 with a TCP proxy.
You can delete the neo4j-bolt-ingress Ingress; it's not going to be used.
Of course, you have to ensure that the ingress controller exposes port 7687 the same way it does ports 80 and 443, and you may have to adjust the settings of any firewall and load balancer you might have.
Source: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
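To illustrate that last point, the controller's own Service also has to expose the Bolt port; here is a minimal sketch (the service name, namespace, and selector are assumptions and depend on how your controller is deployed):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: bolt          # added so the TCP proxy for 7687 is reachable from outside
      port: 7687
      targetPort: 7687
      protocol: TCP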
It automatically tries to connect to port 7687 by default; if you enter the connection URL http://learn.neo4j.bolt.com:80 (or https), it works.
I haven't used Kubernetes ingress in this context before, but I think that when you use HTTP or HTTPS to connect to Neo4j, you still require external availability to connect to the Bolt port (7687). Does your setup allow for that?
Try using multi-path mapping in your Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: learn.neo4j.com
      http:
        paths:
          - path: /browser
            backend:
              serviceName: neo4j-service
              servicePort: 7474
          - path: /
            backend:
              serviceName: neo4j-service
              servicePort: 7687
You should then be able to access the UI at learn.neo4j.com/browser. The Bolt connect URL would have to be specified as:
bolt+s://learn.neo4j.com:443/

Google Kubernetes Engine Ingress UNHEALTHY backend service

Kind note: I have googled a lot and looked at many related questions on Stack Overflow, but couldn't solve my issue, so please don't mark this as a duplicate!
I'm trying to deploy two services (one is Python Flask, the other is Node.js) on Google Kubernetes Engine. I have created two Kubernetes Deployments, one for each service, and two Kubernetes Services of type NodePort, one for each service. Then I have created an Ingress and listed my endpoints, but the Ingress reports that one backend service is UNHEALTHY.
Here are my Deployments YAML definitions:
# Pyservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pyservice
  labels:
    app: pyservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: pyservice
  template:
    metadata:
      labels:
        app: pyservice
    spec:
      containers:
        - name: pyservice
          image: docker.io/arycloud/docker_web_app:pyservice
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: docksecret
---
# Nodeservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodeservice
  labels:
    app: nodeservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: nodeservice
  template:
    metadata:
      labels:
        app: nodeservice
        tier: web
    spec:
      containers:
        - name: nodeservice
          image: docker.io/arycloud/docker_web_app:nodeservice
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: docksecret
And, here are my services and Ingress YAML definitions:
# pyservcie service
kind: Service
apiVersion: v1
metadata:
  name: pyservice
spec:
  type: NodePort
  selector:
    app: pyservice
  ports:
    - protocol: TCP
      port: 5000
      nodePort: 30001
---
# nodeservcie service
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie
spec:
  type: NodePort
  selector:
    app: nodeservcie
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30002
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: pyservice
              servicePort: 5000
          - path: /*
            backend:
              serviceName: pyservice
              servicePort: 5000
          - path: /node/svc/
            backend:
              serviceName: nodeservcie
              servicePort: 8080
The pyservice is working fine, but the nodeservice shows as an UNHEALTHY backend.
I have even edited the firewall rules for all the gke-.... rules and allowed all ports just to get past this issue, but it is still showing the UNHEALTHY status for nodeservice.
What's wrong here?
Thanks in advance!
Why are you using the GCE ingress class and then specifying an nginx rewrite annotation? In case you haven't realised, that annotation won't do anything with the GCE ingress.
You have also got 'nodeservcie' as your selector instead of 'nodeservice'.
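For illustration, a sketch of that Service with the selector corrected so it actually matches the pods labelled app: nodeservice (the Service name is left as-is here only because the Ingress backend references it; renaming both consistently would be cleaner):

kind: Service
apiVersion: v1
metadata:
  name: nodeservcie        # kept to match the serviceName used in the Ingress
spec:
  type: NodePort
  selector:
    app: nodeservice       # must match the pod label from the nodeservice Deployment
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30002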

Kubernetes (Minikube): environment variable

I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*..100:32002) as an environment variable in my Kubernetes deployment script of lucky-word-client. How could I do that?
This is deployment of lucky-word-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-server
spec:
  selector:
    matchLabels:
      app: lucky-server
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-server
    spec:
      containers:
        - name: lucky-server
          image: lucky-server-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8888
This is the service of lucky-word-server:
kind: Service
apiVersion: v1
metadata:
  name: lucky-server
spec:
  selector:
    app: lucky-server
  ports:
    - protocol: TCP
      targetPort: 8888
      port: 80
      nodePort: 32002
  type: NodePort
This is the deployment of lucky-word-client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lucky-client
spec:
  selector:
    matchLabels:
      app: lucky-client
  replicas: 1
  template:
    metadata:
      labels:
        app: lucky-client
    spec:
      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
This is the service of lucky-word-client:
kind: Service
apiVersion: v1
metadata:
  name: lucky-client
spec:
  selector:
    app: lucky-client
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
  type: NodePort
Kubernetes automatically injects services as environment variables. https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
But you should not use this. It won't work unless all the services are in place when you create the pod. It is inspired by Docker, which has also moved on to DNS-based service discovery, so environment-based service discovery is a thing of the past.
Please rely on DNS service discovery. Minikube ships with kube-dns, so you can just use the lucky-server hostname (or one of the lucky-server[.default[.svc[.cluster[.local]]]] names). Read the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
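If you still want an explicit environment variable rather than resolving the hostname in code, here is a sketch of the relevant part of the lucky-client Deployment (the variable name LUCKY_SERVER_URL is an assumption; your Spring configuration would have to read it):

      containers:
        - name: lucky-client
          image: lucky-client-img
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            - name: LUCKY_SERVER_URL    # assumed name, consumed by the client's config
              value: "http://lucky-server.default.svc.cluster.local:80"   # the Service port, not the NodePort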
