Accessing HTTPS Istio Ingress Gateway from Pod - docker

I have a fairly simple setup in my kubernetes cluster, with two zones:
Low trust (public facing)
Medium trust (non public)
Both zones have Istio enabled, with:
An ingress gateway with SSL enabled. For testing within my local Docker Desktop, I use port 443 for the public-facing zone and port 443 for medium trust
Virtual service
Destination rule
I am deploying Apache HTTPD, acting as a reverse proxy, within the low trust zone. The plan is for HTTPD to then forward the traffic to the Istio ingress gateway in the medium trust zone.
Within the medium trust zone is a Spring Boot application.
So, let's say a user accesses https://lowtrust.avengers.local/avengers. This request will be serviced by the ingress gateway in lowtrust and will end up at the HTTPD, which then forwards the request to the ingress gateway in mediumtrust.
          LOWTRUST                             MEDIUMTRUST
| GW --> VS --> HTTPD Pod | ======> | GW --> VS --> Java Pod |
I have created a github repo to demonstrate this:
https://github.com/alexwibowo/avengersKubernetes
The HTTP proxy configuration is here: https://github.com/alexwibowo/avengersKubernetes/blob/main/httpd/conf/proxy.conf.
The Istio ingress gateway for lowtrust:
https://github.com/alexwibowo/avengersKubernetes/blob/main/kubernetes/avengers/charts/avengers-istio/templates/istio-httpd.yaml
and istio ingress gateway for mediumtrust:
https://github.com/alexwibowo/avengersKubernetes/blob/main/kubernetes/avengers/charts/avengers-istio/templates/istio-app.yaml
As you can see, both gateways have their own certs configured. At the moment, I kind of 'cheat' by modifying my /etc/hosts file to have the following:
127.0.0.1 lowtrust.avengers.local
<CLUSTER_IP_ADDRESS> mediumtrust.avengers.local
By doing this, when the HTTPD pod makes a request to 'mediumtrust.avengers.local', it gets directed to the Istio ingress gateway (that's my understanding, anyway).
I've heard that you can actually set up mutual TLS for the scenario I've described above. With this approach, I won't need to set up the certificate in my mediumtrust ingress gateway - I can just use 'ISTIO_MUTUAL'. I think for this I will also need to set up a 'proxy' service and virtual service in the lowtrust namespace. The virtual service will then manage the communication between lowtrust and mediumtrust, but I'm not 100% sure how to do this. Something like the sketch below is what I have in mind.
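(A rough, untested sketch of the mTLS option - my understanding is that the HTTPD would then proxy straight to the mediumtrust Service rather than to its ingress gateway, with the Istio sidecars handling the mutual TLS. 'cr1-avengers-app' is the app Service name from my charts.)

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cr1-avengers-app-mtls
  namespace: lowtrust
spec:
  host: cr1-avengers-app.mediumtrust.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # sidecars originate/terminate mutual TLS automatically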
Any help / advice is much appreciated!
Edit 1 (2021/07/01)
I've been reading more about this topic. Another option is to have a Service of type 'ExternalName' within the 'lowtrust' namespace,
which, if I might use the analogy, will act like a 'proxy' for connecting to the service in the other namespace.
e.g.:
apiVersion: v1
kind: Service
metadata:
  name: cr1-avengers-app
  namespace: "lowtrust"
spec:
  type: ExternalName
  externalName: "cr1-avengers-app.mediumtrust.svc.cluster.local"
  ports:
    - port: 8081
      targetPort: 8080
      protocol: TCP
      name: http
But by using this, I will effectively bypass the Istio VirtualService and DestinationRule that I've defined in the mediumtrust namespace.

The way I've managed to solve this locally is by having an entry in my Windows hosts file.
E.g.:
127.0.0.1 lowtrust.avengers.local
10.109.161.243 mediumtrust.avengers.local
10.109.161.243 is the Cluster IP address of my istio-ingressgateway. I got this by running kubectl get svc -n istio-system from the command line:
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                     AGE
istio-ingressgateway   LoadBalancer   10.109.161.243   localhost     15021:30564/TCP,80:31834/TCP,443:31828/TCP,445:32700/TCP,15012:30459/TCP,15443:30397/TCP   21d
I was also missing the 'SSLProxyEngine' directive in my reverse proxy configuration. So in the end my VirtualHost configuration looks like below:
E.g.:
<VirtualHost *:7000>
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    SSLProxyEngine on
    ProxyPass /avengers https://mediumtrust.avengers.local/avengers
    ProxyPassReverse /avengers https://mediumtrust.avengers.local/avengers

    CustomLog "/tmp/access.log" common
    ErrorLog /tmp/error.log
</VirtualHost>

Related

Ingress NGINX does not listen on URL with specified port

I am running Azure AKS with Kubenet networking, in which I have deployed several services, exposed on several ports.
I have configured URL-based routing and it seems to work for the services I could test.
I found out the following:
Sending URL and URL:80 returns the desired web page, but the URL displayed in the browser's address bar drops the port if I do send it; it looks like http://URL/.
When I try accessing other web pages or services, I get a strange phenomenon: calling the URL with the port number just waits until the browser says it's unreachable. Fiddler returns "time out".
When I access the service (1 of 3 I could check visibly) and do not provide the port, the Ingress rules I applied answer the request and I get the resulting web page, which is exposed on the internal service port.
I'm using this YAML for the RabbitMQ management page:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbit-admin-on-ingress
  namespace: mynamespace
spec:
  rules:
    - host: rabbit.my.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rabbitmq
                port:
                  number: 15672
  ingressClassName: nginx
and also apply this config (using kubectl apply -f config.file.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  15672: "mynamespace/rabbitmq:15672"
What happens is:
http://rabbit.my.local gets the rabbit admin page
http://rabbit.my.local:15672 gets a time out and I get frustrated
It seems this is also happening with another service I have running on port 8085, and perhaps even the DB running on the usual SQL port (might be a TCP-only connection).
Both are configured the same as the rabbitmq service in the YAML rules and config file, with their respective service names, namespaces and ports.
Please help me figure out how I can make Ingress accept URLs with the :PORT attached and answer them. Save me.
A quick reminder - :80 works fine, perhaps because it's one of the defaults for Ingress.
Thank you so much in advance.
Moshe

How to block outgoing traffic to an IP in iptables in K8s

I want to block outgoing traffic to an IP (e.g. the DB) in iptables in K8s.
I know that in K8s, iptables rules exist only at the node level,
and I'm not sure in which file changes should be made, or what commands or changes are required.
Please help me with this query.
Thanks.
You could deploy Istio, and specifically the Istio egress gateway.
This way you will be able to manage outgoing traffic within the Istio manifests.
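As a sketch of that direction (this is not from the answer above; the host is a placeholder): with Istio's meshConfig.outboundTrafficPolicy.mode set to REGISTRY_ONLY, only hosts declared through a ServiceEntry stay reachable, so any destination you do not declare - such as the DB - is blocked.

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: allowed-external-api    # placeholder name
spec:
  hosts:
    - api.example.com           # placeholder: only declared hosts remain reachable
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS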
You can directly run the iptables command (e.g. iptables -A OUTPUT -j REJECT) on a node if that's acceptable.
However, the file depends on the OS: /etc/sysconfig/iptables is the one for IPv4.
I would suggest checking out NetworkPolicy in Kubernetes; using that you can block the outgoing traffic.
https://kubernetes.io/docs/concepts/services-networking/network-policies/
No extra setup like Istio is required.
You can handle cluster security using NetworkPolicy; in the backend it uses iptables only.
For example, to restrict egress to a specific CIDR or IP (everything else gets blocked), you apply only a YAML like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
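If the goal is specifically to block a single IP (e.g. the DB) while leaving other egress open, a sketch along these lines may be closer (the CIDR is a placeholder, and it only takes effect with a CNI plugin that enforces NetworkPolicy, such as Calico):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-db-egress
  namespace: default
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 203.0.113.10/32   # placeholder for the DB IP to block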

Kubernetes: Microservices running on the same port?

I am building a full stack microservices web application, which so far consists of:
ReactJS (client microservice): listens on 3000
Authentication (Auth microservice): listens on 3000 // accidentally assigned the same port
Technically, what I have heard/learned so far is that we cannot have two Pods running on the same port.
I am really confused about how I am able to run the application (perfectly) like this, with the same port on different applications/pods.
ingress-nginx config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  ## our custom routing rules
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
I am really curious, am I missing something here?
Each Pod has its own network namespace and its own IP address, though the Pod-specific IP addresses aren't reachable from outside the cluster and aren't really discoverable inside the cluster. Since each Pod has its own IP address, you can have as many Pods as you want all listening to the same port.
Each Service also has its own IP address; again, not reachable from outside the cluster, though they have DNS names so applications can find them. Since each Service has its own IP address, you can have as many Services as you want all listening to the same port. The Service ports can be the same or different from the Pod ports.
The Ingress controller is reachable from outside the cluster via HTTP. The Ingress specification you show defines HTTP routing rules. If I set up a DNS service with a .dev TLD and define an A record for ticketing.dev that points at the ingress controller, then http://ticketing.dev/api/users/anything gets forwarded to http://auth-srv.default.svc.cluster.local:3000/ within the cluster, and http://ticketing.dev/otherwise goes to http://client-srv.default.svc.cluster.local:3000/. Those in turn will get forwarded to whatever Pods they're connected to.
There's no particular prohibition against multiple Pods or Services having the same port. I tend to like setting all of my HTTP Services to listen on port 80 since it's the standard HTTP port, even if the individual Pods are listening on port 3000 or 8000 or 8080 or whatever else.
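As a sketch only (the label and name are assumed, not taken from the question's repo), a Service that maps the standard HTTP port to a Pod listening on 3000 looks roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth            # assumed Pod label
  ports:
    - name: http
      port: 80           # the Service's own port (what an Ingress backend would reference)
      targetPort: 3000   # the port the container actually listens on

With a Service like that, the Ingress rule would point at servicePort 80 instead of 3000, while the Pod keeps listening on 3000.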
You have two different services in the backend: auth-srv and client-srv. Therefore, you have two different addresses and can use any port you want in each of them. That means you can use the same port in the two different services.

Kubernetes: Replace mod_cluster for back end services (reverse proxy)

I am migrating my current service to Kubernetes. Currently, back-end services are resolved via mod_cluster: the mod_cluster manager runs on httpd, and mod_cluster clients auto-register their web contexts with the httpd/mod_cluster manager on startup.
user --> ingress rule --> httpd [running mod_cluster manager] --> JBoss [mod_cluster clients]
I resolve my UI via the following Ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: httpd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: myk8s.myath.myserv.com
      http:
        paths:
          - path: /
            backend:
              serviceName: httpd
              servicePort: 443
  tls:
    - hosts:
        - myk8s.myath.myserv.com
This works well, resolves UI, can log in and resolve all static content etc.
Mod_cluster exposes services such as myservice. I disabled mod_cluster and created a Kubernetes Service myservice that resolves to the back-end Pod, thinking that the Ingress rule would get the request as far as httpd and that httpd would then be able to resolve the backend service via Kubernetes, but I get 404s because I am unable to resolve myservice.
The service can be resolved via reverse proxy rules such as the ones below, but this is not my preferred solution:
# Redirect to myjbossserv (a Service registered in Kubernetes)
ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
Any help much appreciated
The simplest way to solve this, catering for all HA and robustness use cases, was to use reverse proxy rules. There are multiple ways to configure these, such as at image build time or via ConfigMaps:
# Redirect to myjbossserv (a Service registered in Kubernetes)
ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
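For the ConfigMap route, a minimal sketch of delivering the same proxy rules as configuration rather than baking them into the image (the ConfigMap name and file name are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: httpd-proxy-conf        # illustrative name
data:
  proxy.conf: |
    # same reverse proxy rules as above
    ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
    ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/

The httpd Deployment would then mount this ConfigMap as a volume wherever its image includes extra configuration from, so the rules can be changed without rebuilding the image.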

Access service on subdomain in Kubernetes

I have following setup:
Private OpenStack Cloud - only the Web UI (Horizon) is accessible
(API is restricted but maybe I could get access)
I have used CoreOS with a setup of one master and three nodes
Resources are standardized (the OpenStack defaults)
I followed the getting-started guide for CoreOS on GitHub (i.e. I'm using the default cloud-config YAMLs provided)
As I read, extensions such as the Web UI (kube-ui) can be added as an add-on, which I have done (only kube-ui).
Now if I run a test such as simple-nginx, I get the following output:
creating pods:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
creating service:
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME       LABELS         SELECTOR       IP(S)   PORT(S)
my-nginx   run=my-nginx   run=my-nginx           80/TCP
get service info:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.161.90
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31170/TCP
Endpoints: 10.244.19.2:80,10.244.44.3:80
Session Affinity: None
No events.
I can access my service from every(!) external IP of the nodes.
My question now is as follows:
How can I access any started service via a subdomain, and how do I set up such a configuration (for example, I have domain.com)? Or could it be printed out which node IP I have to use to access my service (although I have only two replicas(?!))?
To describe my thoughts more understandably, I mean the following:
given domain: domain.com (pointing to the master)
start the service simple-nginx
the service can be accessed at simple-nginx.domain.com
Does your OpenStack cloud provider implementation support services of type LoadBalancer?
If so, the service controller should assign an ingress IP or hostname to the service, which should eventually show up in kubectl describe svc output. You could then set up external DNS for it.
If not, just use type=NodePort, and you'll still get a NodePort on each node. You can then follow the advice in the comment to create an Ingress resource, which can do the port and host remapping.
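A sketch of such an Ingress for the example above (host and backend names taken from the question; it assumes an ingress controller is running in the cluster and uses the current networking.k8s.io/v1 API):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-nginx
spec:
  rules:
    - host: simple-nginx.domain.com     # subdomain pointing at the ingress controller
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx          # the Service created by `kubectl expose`
                port:
                  number: 80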
