Istio Sidecar - doc description vs. live cluster behavior - connection

My question is related to the Istio Sidecar resource.
First: in the description of the Sidecar, the following is stated:
When determining the Sidecar configuration to be applied to a workload
instance, preference will be given to the resource with a
workloadSelector that selects this workload instance, over a Sidecar
configuration without any workloadSelector.
Second: in the spec table, in the description of egress, the following is written:
If not specified, inherits the system detected defaults from the
namespace-wide or the global default Sidecar.
So, I've tried to apply two Sidecars in the "default" namespace, for the bookinfo example.
The first Sidecar is the default (selector-less) one in the default namespace, with an egress field restricting the connections:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default-sidecar
  namespace: default
spec:
  egress:
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./ratings.default.svc.cluster.local"
The second Sidecar has a workloadSelector selecting the ratings app, but no egress field:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: ratings-selected
  namespace: default
spec:
  workloadSelector:
    labels:
      app: ratings
  ingress:
  - port:
      number: 9080
      protocol: TCP
      name: somename
    defaultEndpoint: unix:///var/run/someuds.sock
So, from the egress description above, I expect the ratings app to be able to connect only to the details and ratings services (the egress of the default Sidecar).
But on the live cluster the behavior I get is that it can connect to all other services in my mesh (including productpage and reviews from bookinfo in the default namespace),
which means the second Sidecar overrides the first one and cancels the egress
restrictions specified there, even though the second Sidecar does not include an egress field.
Can anyone help me understand why this happens?
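For reference, the workaround I'm considering (on the assumption that a Sidecar with a workloadSelector completely replaces the selector-less one rather than merging with it) would be to repeat the egress hosts explicitly in the second Sidecar:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: ratings-selected
  namespace: default
spec:
  workloadSelector:
    labels:
      app: ratings
  egress:                 # repeated from the default Sidecar
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./ratings.default.svc.cluster.local"
  ingress:
  - port:
      number: 9080
      protocol: TCP
      name: somename
    defaultEndpoint: unix:///var/run/someuds.sock

But I'd still like to understand why the inheritance described in the docs doesn't apply here.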

Related

Can't be seen in kubernetes cluster

I created a YAML file to create a RabbitMQ Kubernetes cluster. I can see the pods, but when I run kubectl get deployment I can't see it there, and I can't access the RabbitMQ UI page.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  ports:
  - port: 5672
    protocol: TCP
    name: mqtt
  - port: 15672
    protocol: TCP
    name: ui
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit
spec:
  serviceName: rabbit
  replicas: 3
  selector:
    matchLabels:
      app: rabbit
  template:
    metadata:
      labels:
        app: rabbit
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq
      nodeSelector:
        rabbitmq: "clustered"
arghya-sadhu's answer is correct.
NB I'm unfamiliar with RabbitMQ, but you may need to use a different image (see 'Management Plugin') to include the UI.
See below for more details.
You should be able to hack your way to the UI on one (!) of the Pods via:
PORT=8888
kubectl port-forward pod/rabbit-0 --namespace=${NAMESPACE} ${PORT}:15672
And then browse localhost:${PORT} (if 8888 is unavailable, try another).
I suspect (!) this won't work unless you use the image with the management plugin.
Plus
The Service needs to select the StatefulSet's Pods.
Within the Service spec you should perhaps add:
selector:
  app: rabbit
Presumably (!?) you are using a private repo (because you have imagePullSecrets).
If you don't and wish to use DockerHub, you may remove the imagePullSecrets section.
It's useful to document (!) container ports albeit not mandatory:
In the StatefulSet
ports:
- containerPort: 5672
- containerPort: 15672
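Putting the Service suggestions together, a sketch (!) of what the amended Service could look like (only the selector is new relative to yours):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  type: NodePort
  selector:
    app: rabbit        # selects the StatefulSet's Pods
  ports:
  - port: 5672
    protocol: TCP
    name: mqtt
  - port: 15672
    protocol: TCP
    name: ui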
Debug
NAMESPACE="default" # Or ...
Ensure the StatefulSet is created:
kubectl get statefulset/rabbit --namespace=${NAMESPACE}
Check the Pods:
kubectl get pods --selector=app=rabbit --namespace=${NAMESPACE}
You can check that the Pods are bound to a (!) Service:
kubectl describe endpoints/rabbit --namespace=${NAMESPACE}
NB You should see 3 addresses (one per Pod)
Get the NodePort with either:
kubectl get service/rabbit --namespace=${NAMESPACE} --output=json
kubectl describe service/rabbit --namespace=${NAMESPACE}
You will need to use the NodePort to access both the MQTT endpoint and the UI.
StatefulSets and Deployments are different Kubernetes resources. You have created a StatefulSet; that's why you don't see a Deployment. If you run
kubectl get statefulset you should see it. Both a StatefulSet and a Deployment ultimately create Pods, so you should be able to see the RabbitMQ Pods if you run kubectl get pods.
Since you have created a NodePort Service, you should be able to access it via http://nodeip:nodeport, where nodeip is the IP of any worker node in your Kubernetes cluster.
You can find out the NodePort (a number between 30000-32767) with
kubectl describe services rabbit
Here is the doc on accessing a NodePort service from outside the cluster.

How to do Dynamic Proxying to Kubernetes Pods?

I'm wanting to create a service that can do some kind of dynamic proxying back to Kubernetes Pods. Basically I'll have hundreds of K8s Pods that are running the same application that map to a random port on the host (like 10456). However, each Pod is unique and I want traffic directed at a specific pod based on hostname. So when a request comes in for abc123.app.com, I'll have a proxy layer that does a lookup in a database to find what host and port that domain is running on (like 10.0.0.5:10456), then forward the request there. Is there a service that supports this? I've worked with Nginx a lot before, but I'm not clear if it could support this lookup functionality.
Has anyone built something like this before? What's the best way to build a proxy layer that can do lookups like that? How would I update the database when a pod moves from one host to another?
Thanks in advance!
EDIT:
I should have put this in there the first time, but the types of traffic going to these pods are RPC traffic and peer-to-peer traffic.
You're describing something very similar to what kubernetes ingress definitions do for http traffic.
An ingress definition configures an ingress controller to point requests for a hostname at a service. The service selects endpoints (pods) via label selectors. When pods move, kubernetes updates the service automatically.
The work on your end then just becomes pushing out config changes from your database to Kubernetes via one of the API clients, rather than driving a proxy directly. If your environment is extremely dynamic, requiring reconfiguration all the time, or you need to make dynamic decisions about where traffic should go, you might want to keep looking at a custom proxy, Istio, or OpenResty.
It sounds like you have unique deployments going to kubernetes already, so in addition to that include a service and ingress definition.
A simple example: a label on the pod, a service that uses that label, and then an ingress definition using the service.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: host-abc123
spec:
  containers:
  - name: host-abc123
    image: me/my-app:1.2.1
    ports:
    - containerPort: 10456
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: host-abc123
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - backend:
          serviceName: host-abc123
          servicePort: 80
apiVersion: v1
kind: Service
metadata:
  name: host-abc123
spec:
  selector:
    app: host-abc123   # selects the Pod labelled above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 10456
A single ingress definition could include all hosts, but I'm not sure how Kubernetes and the ingress controllers would cope with replacing it that regularly.
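As a rough sketch of what that single ingress could look like (the second hostname and its backend service name are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: all-hosts
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - backend:
          serviceName: host-abc123
          servicePort: 80
  - host: def456.bar.com              # placeholder for another tenant
    http:
      paths:
      - backend:
          serviceName: host-def456    # placeholder service
          servicePort: 80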
There are nginx-based ingress controllers too. You end up with an nginx server config per ingress/host definition.

Expose a Kubernetes pod on a bare metal cluster

I'm trying to expose a Kubernetes pod on a single node bare metal cluster without a domain.
In my understanding I have these options:
Expose using NodePort
Expose using an Ingress controller
Expose using ClusterIP and manually set an external IP
As I mentioned already, I only have a single-node cluster. This means the master is master and node at the same time, running directly on a Fedora host system.
The simplest solution is to use a NodePort, but the limitation here (if I'm right) is that the service port will be automatically selected from a given port range.
The next better solution is to use an ingress controller, but for this I need a public domain, which I don't have. So the ingress controller doesn't fit either.
What other options do I have? I just want to expose my service directly on port 9090.
Why not option 3? You can set externalIPs to your node IP:
apiVersion: v1
kind: Service
...
spec:
  externalIPs:
  - your node ip
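For example, a fuller sketch (the node IP 192.168.1.10, the service name, and the selector are placeholders; 9090 is the port you mentioned):

apiVersion: v1
kind: Service
metadata:
  name: my-service            # placeholder name
spec:
  selector:
    app: my-app               # placeholder: must match your pod labels
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
  externalIPs:
  - 192.168.1.10              # placeholder: your node's IP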
Also with NodePort, the service port can be specified.
You can set a custom port range for NodePort by adding this option to your apiserver settings (/etc/kubernetes/manifests/kube-apiserver.yaml):
--service-node-port-range portRange
Default: 30000-32767
A port range to reserve for services with NodePort visibility. Example: '30000-32767'.
Inclusive at both ends of the range.
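For illustration, a sketch of where that flag would sit in the static Pod manifest (the range value is just an example that would include 9090; keep your existing flags as they are):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... your existing flags stay here ...
    - --service-node-port-range=9000-32767   # example custom range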
This is the part from Kubernetes documentation related to Services:
If you want a specific port number, you can specify a value in the
nodePort field, and the system will allocate you that port or else the
API transaction will fail (i.e. you need to take care about possible
port collisions yourself). The value you specify must be in the
configured range for node ports.
Example for this answer was taken from the article Hosting Your Own Kubernetes NodePort Load Balancer:
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
name: nginx
spec:
type: NodePort
ports:
- port: 80
nodePort: 30080
name: http
- port: 443
nodePort: 30443
name: https
selector:
name: nginx

Make services accessible only from Private network Kubernetes

I have received a public IP address for my Kubernetes service, which I can configure as a load balancer IP in my NGINX ingress. This public IP address can be accessed from the public internet.
Is there a way, or some configuration, through which I can make these services accessible only from my client network in Kubernetes?
With the Kubernetes Nginx Ingress it is as simple as setting an annotation on your Ingress object, like:
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: '8.8.8.8/32'
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#whitelist-source-range
You can, as suggested, make use of a VPN and create an internal LoadBalancer, or you can check Network Policies, which I consider the standard Kubernetes way to implement your solution.
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior in that namespace.
You will need to create a NetworkPolicy resource; in the spec you will have to describe the behaviour using the available fields. I recommend checking the official documentation for more info regarding the structure.
PolicyTypes:
...
ingress: Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
...
Keep in mind that in order to implement them you need to use a
networking solution which supports NetworkPolicy; if you just
create the resource without a controller to implement it, it will have no
effect.
Example of policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: default
spec:
podSelector:
matchLabels:
role: db
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 172.17.0.0/16
except:
- 172.17.1.0/24
- namespaceSelector:
matchLabels:
project: myproject
- podSelector:
matchLabels:
role: frontend
ports:
- protocol: TCP
port: 6379
egress:
- to:
- ipBlock:
cidr: 10.0.0.0/24
ports:
- protocol: TCP
port: 5978
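Adapted to your case, a minimal sketch that only admits traffic from your client network (the CIDR 10.20.0.0/16 and the empty podSelector are placeholders/assumptions) could be:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-network-only
  namespace: default
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.20.0.0/16   # placeholder: your client network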
Using a NetworkPolicy is nice, but a simpler approach would be to set the externalIP of the nginx ingress controller to an IP address in the client network. This exposes the services only on the client network.
Below is the sample configuration for helm:
helm install --name my-ingress stable/nginx-ingress \
--set controller.service.externalIPs=<IP address in client network>

kubernetes unhealthy ingress backend

I followed the load balancer tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer which works fine when I use the Nginx image, but when I try to use my own application image the backend switches to unhealthy.
My application redirects on / (returns a 302) but I added a livenessProbe in the pod definition:
livenessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
    - name: X-health-check
      value: kubernetes-healthcheck
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: foo.bar.com
My ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  backend:
    serviceName: foo
    servicePort: 80
  rules:
  - host: foo.bar.com
Service configuration is:
kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 4001
Backend health in kubectl describe ing looks like:
backends: {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"}
and the rules on the ingress look like:
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     foo:80 (10.0.0.7:4001,10.0.1.6:4001)
Any pointers gratefully received; I've been trying to work this out for hours with no luck.
Update
I have added the readinessProbe to my deployment but something still appears to hit / and the ingress is still unhealthy. My probe looks like:
readinessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
    - name: X-health-check
      value: kubernetes-healthcheck
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: foo.com
I changed my service to:
kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 4001
    targetPort: 4001
Update2
After I removed the custom headers from the readinessProbe it started working! Many thanks.
You need to add a readinessProbe (just copy your livenessProbe).
It's explained in the GCE L7 Ingress Docs.
Health checks
Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer:
1. Respond with a 200 on '/'. The content does not matter.
2. Expose an arbitrary url as a readiness probe on the pods backing the Service.
Also make sure that the readinessProbe is pointing to the same port that you expose to the Ingress. In your case that's fine since you have only one port; if you add another one you may run into trouble.
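For clarity, a minimal sketch (the image name is a placeholder) of how the container spec could line up with that, probing the same port 4001 the Service targets and without the custom headers:

containers:
- name: foo
  image: registry.example.com/foo:latest   # placeholder image
  ports:
  - containerPort: 4001
  readinessProbe:
    httpGet:
      path: /ping      # same path as in the question
      port: 4001       # matches the Service targetPort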
I thought it worth noting that this is quite an important limitation in the documentation:
Changes to a Pod's readinessProbe do not affect the Ingress after it is created.
After adding my readinessProbe I basically deleted my ingress (kubectl delete ingress <name>) and then applied my yaml file again to re-create it and shortly after everything was working again.
I was having the same issue. I followed Tex's tip but continued to see that message. It turns out I had to wait a few minutes for the ingress to validate the service health. If someone is going through the same and has done all the steps like readinessProbe and livenessProbe, just ensure your ingress is pointing to a service that is a NodePort, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs on Stackdriver to get a better idea of what's going on.
I was also having exactly the same issue after updating my ingress readinessProbe.
I could see the Ingress status labeled "Some backend services are in UNKNOWN state" in yellow.
I waited for more than 30 minutes, yet the changes were not reflected.
After more than 24 hours the changes were reflected and the status turned green.
I couldn't find any official documentation for this, but it seems like a bug in the GCP Ingress resource.
If you don't want to change your pod spec, or rely on the magic of GKE pulling out your readinessProbe, you can also configure a BackendConfig like this to explicitly configure the health check.
This is also helpful if you want to use a script for your readinessProbe, which isn't supported by GKE ingress health checks.
Note that the BackendConfig needs to be explicitly referenced in your Service definition.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    # This points GKE Ingress to the BackendConfig below
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: ClusterIP
  ports:
  - name: health
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: http
    ...
  selector:
    ...
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 1234
    type: HTTP
    requestPath: /healthz
Every one of these answers helped me.
In addition, the HTTP probes need to return a 200 status. Stupidly, mine was returning a 301, so I just added a simple "ping" endpoint and all was well/healthy.
