EKS + NLB: `service.beta.kubernetes.io/aws-load-balancer-internal: true` not working with `service.beta.kubernetes.io/aws-load-balancer-type: nlb` - amazon-elb

I have an EKS Kubernetes 1.16.x cluster with three public subnets tagged with kubernetes.io/role/elb: 1 and three private subnets tagged with kubernetes.io/role/internal-elb: 1.
I'm attempting to create an internal NLB LoadBalancer service. By internal, I want it hosted on the three private subnets and not the three public subnets.
I'm following the docs at https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: true
  name: grafana-nlb
  namespace: prometheus
spec:
  ports:
    - name: service
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: prom
    app.kubernetes.io/name: grafana
  type: LoadBalancer
If I omit the service.beta.kubernetes.io/aws-load-balancer-internal: true annotation, everything seems to work perfectly and produce exactly what I expect. I get a public NLB that is hosted on the three public subnets only. I can see this via the AWS CLI with aws elbv2 describe-load-balancers, which shows "Scheme": "internet-facing" and "Type": "network".
If I create this with the service.beta.kubernetes.io/aws-load-balancer-internal: true annotation, I get a classic ELB rather than an NLB, and it's still public. It has "Scheme": "internet-facing" and is hosted on the three public subnets only. With the CLI, I can see the load balancer with aws elb describe-load-balancers but not with aws elbv2 describe-load-balancers.
This seems like broken behavior. Any tips on how I can troubleshoot or proceed?

The true needs to be quoted as "true" in the YAML.
This works:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
This causes the error I was experiencing:
service.beta.kubernetes.io/aws-load-balancer-internal: true
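For reference, here is a minimal sketch of the annotations block from the manifest above with the value quoted (only the quoting changes; everything else stays as-is):

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"  # quoted so YAML treats it as a string, not a boolean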

Related

Creating a connection string in the cloud for Redis

I have a Redis pod, and I expect connection requests to this pod from different clusters and from applications not running in the cloud.
Since Redis does not work over the HTTP protocol, accessing it via the Route I have defined below does not work with the connection string "route-redis.local:6379".
route.yml
apiVersion: v1
kind: Route
metadata:
  name: redis
spec:
  host: route-redis.local
  to:
    kind: Service
    name: redis
service.yml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    name: redis
You may have encountered this situation. In short, is there any way to access the Redis pod via a Route? If not, how do you solve this problem?
You already discovered that Redis does not work via the HTTP protocol, which is correct as far as I know. Routes work by inspecting the HTTP Host header of each request, which will not work for Redis. This means that you will not be able to use Routes for non-HTTP workloads.
Typically, such non-HTTP services are exposed via a Service and NodePorts. This means that each Worker Node that is part of your cluster will open this port and will forward the traffic to your application.
You can find more information in the Kubernetes documentation:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
You can define a NodePort like so (this example is for MySQL, which is also a non-HTTP workload):
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30036
      name: http
  selector:
    name: mysql
Of course, your administrator may limit the access to these ports, so it may or may not be possible to use these types of services on your OpenShift cluster.
You can also expose TCP services via an Ingress controller, at least with the NGINX one:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
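As a rough sketch of what that linked ingress-nginx approach could look like for Redis (the controller namespace and ConfigMap name below are assumptions taken from the standard ingress-nginx manifests, and you also need to expose the chosen port on the controller's Service): you map an external TCP port to the Redis Service in the tcp-services ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # ConfigMap the ingress-nginx controller is started with (assumed default name)
  namespace: ingress-nginx  # assumed controller namespace
data:
  "6379": "default/redis:6379"  # <external port>: "<namespace>/<service name>:<service port>"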

How to set HTTPS as default on GKE Ingress-gce

I currently have a working Frontend and Backend nodeports with an Ingress service setup with GKE's Google-managed certificates.
However, my issue is that when a user goes to samplesite.com, the browser defaults to HTTP. This means that the user needs to specifically type https://samplesite.com in the browser in order to get the HTTPS version of my website.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https? I understand that this can be forcefully done in my backend code as well but I want to separate concerns and handle this in my Kubernetes setup.
Here is my ingress.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: frontend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 5000
      targetPort: 80
      protocol: TCP
      name: http
---
kind: Service
apiVersion: v1
metadata:
  name: backend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - port: 8081
      targetPort: 9229
      protocol: TCP
      name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-frontend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-static-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-ssl
spec:
  backend:
    serviceName: frontend-node-service
    servicePort: 5000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-backend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-backend-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-api-ssl
spec:
  backend:
    serviceName: backend-node-service
    servicePort: 8081
Currently, GKE Ingress does not support HTTP->HTTPS redirects out of the box.
There is an ongoing Feature Request for it here:
Issuetracker.google.com: Issues: Redirect all HTTP traffic to HTTPS when using the HTTP(S) Load Balancer
There are some workarounds for it:
- Use a different Ingress controller, like nginx-ingress.
- Create an HTTP->HTTPS redirection in the GCP Cloud Console.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https?
To disable HTTP on GKE you can use the following annotation:
kubernetes.io/ingress.allow-http: "false"
This annotation will:
- Allow traffic only on port 443 (HTTPS).
- Deny traffic on port 80 (HTTP), resulting in error code 404.
Focusing on the previously mentioned workarounds:
Use a different Ingress controller, like nginx-ingress
One of the ways to get an HTTP->HTTPS redirection is to use nginx-ingress. You can deploy it by following the official documentation:
Kubernetes.github.io: Ingress-nginx: Deploy: GCE-GKE
This Ingress controller will create a Service of type LoadBalancer, which will be the entry point for your traffic. Ingress objects will respond on the LoadBalancer IP. You can download the manifest from the installation guide and modify it to use the static IP you have reserved in GCP. More reference can be found here:
Stackoverflow.com: How to specify static IP address for Kubernetes load balancer?
You will need to provide your own certificates or use a tool like cert-manager to get HTTPS traffic, as the networking.gke.io/managed-certificates annotation will not work with nginx-ingress.
I used this YAML definition, and without any other annotations I was always redirected to HTTPS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx" # IMPORTANT
spec:
  tls: # HTTPS PART
    - secretName: ssl-certificate # SELF PROVIDED CERT NAME
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-service
              servicePort: hello-port
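The secretName above refers to an ordinary TLS secret; assuming you already have the certificate and key as files, it could be created with something like this (the file paths are placeholders):

# Create the TLS secret referenced by secretName: ssl-certificate
kubectl create secret tls ssl-certificate \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key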
Create an HTTP->HTTPS redirection in the GCP Cloud Console.
There is also an option to manually create a redirection rule for your Ingress resource. You will need to follow official documentation:
Cloud.google.com: Load Balancing: Docs: HTTPS: Setting up HTTP -> HTTPS Redirect
Following that documentation, you will need to create an HTTP load balancer responding on the same IP as your Ingress resource (the reserved static IP) that redirects traffic to HTTPS.
Disclaimer!
Your Ingress resource will need to have the following annotation:
kubernetes.io/ingress.allow-http: "false"
Without it, you will not be able to create the redirection mentioned above.
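As a rough sketch of what that manual setup involves (the resource name is a placeholder and the exact steps are in the linked documentation): you import a URL map whose only job is to redirect to HTTPS, and attach it to the HTTP frontend on the same static IP.

# http-redirect.yaml -- a URL map that only issues a 301 redirect to HTTPS
kind: compute#urlMap
name: http-to-https-redirect
defaultUrlRedirect:
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  httpsRedirect: true

It would then be imported with gcloud compute url-maps import http-to-https-redirect --source http-redirect.yaml --global and used as the URL map of the HTTP forwarding rule.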

Not able to access service from the public ip in kubernetes

I am using Kubernetes and am running one service. The service is running and shows up in the service list, but I am not able to access it from the public IP of the instance. Below is my deployment file.
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
Here is my list of services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache-service NodePort 10.106.242.181 <none> 80:31807/TCP 9m5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
But when I check the same service with the following telnet command, using the public IP of the cluster/node, it is not responding.
telnet public-ip:31807
Any help would be appreciated.
What do you mean by cluster IP? Do you mean the node that acts as the Kubernetes master? It won't work if you use the master's IP, because masters don't have deployments scheduled on them, due to security concerns.
Exposing a service via NodePort means that the service listens on a particular port on each of the worker nodes, so you can reach any Kubernetes worker node on that NodePort and get a response. However, if you created the cluster using a cloud provider like AWS, the worker nodes' security groups are locked down by default, so you probably need to edit the security groups of the worker nodes to allow access to the service.
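For example, on AWS the worker nodes' security group would need an inbound rule for the NodePort (31807 in this case). A sketch with the AWS CLI, where the security group ID and CIDR are placeholders you would replace with your own values:

# Allow inbound TCP traffic to the NodePort on the worker nodes
# sg-0123456789abcdef0 and 203.0.113.0/24 are placeholders for the node security group and the client IP range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31807 \
  --cidr 203.0.113.0/24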

Make services accessible only from Private network Kubernetes

I have received a public IP address for my Kubernetes service, which I can configure as a load balancer IP in my NGINX ingress. This public IP address can be accessed from the public internet.
Is there a way, or some configuration, through which I can make these services accessible only from my client network in Kubernetes?
With the Kubernetes NGINX Ingress controller it is as simple as setting an annotation on your Ingress object, like:
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: '8.8.8.8/32'
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#whitelist-source-range
You can, as suggested, make use of a VPN and create an internal LoadBalancer, or you can look at Network Policies, which I consider the standard Kubernetes way to implement your solution.
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior in that namespace.
You will need to create a NetworkPolicy resource; in the spec you describe the desired behaviour using the available fields. I recommend checking the official documentation for more information about the structure.
PolicyTypes:
...
ingress: Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
...
Keep in mind that in order to implement them you need to use a networking solution which supports NetworkPolicy; if you just create the resource without a controller to implement it, it will have no effect.
Example of policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
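Applied to the original question, a minimal sketch would select the pods behind your service and only allow ingress from the client network's CIDR (the label and CIDR below are placeholders you would replace with your own):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-network-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app              # placeholder: label of the pods to protect
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.20.0.0/16 # placeholder: your client network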
Using a NetworkPolicy is nice, but a simpler approach would be to set the externalIP of the NGINX ingress controller to an IP address in the client network. This exposes the services only on the client network.
Below is a sample configuration for Helm:
helm install --name my-ingress stable/nginx-ingress \
--set controller.service.externalIPs=<IP address in client network>
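If you manage the controller's Service manifest directly instead of through Helm, the corresponding field is spec.externalIPs; a sketch, where the Service name, selector, and IP are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-nginx-controller  # placeholder: your controller Service name
spec:
  externalIPs:
    - 10.20.30.40                    # placeholder: an IP address in the client network
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app: nginx-ingress               # placeholder: must match the controller pods' labels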

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
    - port: 443
      targetPort: 443
      name: https
      protocol: TCP
  selector:
    app: httpd
  externalIPs:
    - 10.128.0.2 # VM's internal IP
I can receive traffic fine on the external IP bound to the VM, but all of the requests are received by the HTTP server with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
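On current Kubernetes versions that beta annotation has been replaced by the spec.externalTrafficPolicy field, which applies to NodePort and LoadBalancer Services; a sketch of the Service above adapted accordingly (the switch to type NodePort is an assumption on my part):

apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  type: NodePort                 # externalTrafficPolicy only applies to NodePort/LoadBalancer Services
  externalTrafficPolicy: Local   # keep traffic on the node that received it, preserving the client source IP
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
    - port: 443
      targetPort: 443
      name: https
      protocol: TCP
  selector:
    app: httpd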
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
        - name: caddy
          image: your_image
          env:
            - name: STATIC_BACKEND # example env in my custom image
              value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected via environment variables.
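If losing cluster DNS is a problem, the pod spec has a dnsPolicy field for exactly this case; a sketch of the relevant part of the template above:

    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # lets a hostNetwork pod keep resolving cluster services via the cluster DNS
      containers:
        - name: caddy
          image: your_image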
