Was anyone able to integrate the three in the subject to achieve end-to-end TLS? To clarify, I'm talking about TLS between Application Gateway and Istio ingress.
There are some threads on Stack Overflow and an old issue on the AGIC GitHub repo, but I was not able to find any evidence that it's really working. If someone has it working, can you share the setup?
• According to the Istio documentation, any request through the gateway involves two connections, viz., the client's downstream connection inbound to the gateway, and the gateway's outbound connection to the destination, as shown in the figure in the Istio TLS configuration documentation.
Thus, in the above scenario, consider the gateway to be the Azure Application Gateway and the Istio ingress to be the destination; as a result, both of these are independent TLS connections.
For TLS connections, there are a few more options:
a) What protocol is encapsulated? If the connection is HTTPS, the server protocol should be configured as HTTPS. Otherwise, for a raw TCP connection encapsulated with TLS, the protocol should be set to TLS.
b) Is the TLS connection terminated or passed through? For passthrough traffic, configure the TLS mode field to PASSTHROUGH:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
...
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: PASSTHROUGH
In this mode, Istio will route based on the SNI information and forward the connection as-is to the destination. Mutual TLS can be configured through the TLS mode MUTUAL. When this is configured, a client certificate will be requested and verified against the configured caCertificates or credentialName:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
...
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: MUTUAL
    caCertificates: ...
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works but is often not the desired behavior.
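To make the double-encryption pitfall concrete, a DestinationRule along these lines (the name and host are illustrative, not from the original setup) would originate a second TLS layer on top of the connection a PASSTHROUGH gateway forwards as-is:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: originate-tls        # illustrative name
spec:
  host: myapp.example.com    # illustrative host
  trafficPolicy:
    tls:
      mode: SIMPLE           # client-side TLS origination; combined with a
                             # PASSTHROUGH gateway this double-encrypts traffic
```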
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition. In this scenario, with the Azure Application Gateway acting as the client of the Istio ingress, make sure the TLS settings on both sides are configured consistently. For more information, refer to the Istio documentation below:
https://istio.io/latest/docs/ops/configuration/traffic-management/tls-configuration/
https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/
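For completeness, a PASSTHROUGH gateway like the one above is typically paired with a VirtualService that matches on SNI, roughly like this (host, gateway, and service names are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-passthrough     # illustrative name
spec:
  hosts:
  - myapp.example.com         # illustrative host
  gateways:
  - mygateway                 # the PASSTHROUGH gateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - myapp.example.com
    route:
    - destination:
        host: myapp-service   # in-mesh service that terminates TLS itself
        port:
          number: 443
```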
I have a fairly simple setup in my kubernetes cluster, with two zones:
Low trust (public facing)
Medium trust (non public)
Both zones have Istio enabled, with:
Ingress gateway with SSL enabled. For testing within my local Docker Desktop, I use port 443 for the public-facing zone and port 443 for medium trust
Virtual service
Destination rule
I am deploying Apache HTTPD, acting as a reverse proxy, within the low trust zone. The plan is for HTTPD to then forward the traffic to the Istio ingress gateway in the medium trust zone.
Within the medium trust is a Spring boot application.
So, let's say a user is accessing https://lowtrust.avengers.local/avengers. This request will be serviced by the ingress gateway in lowtrust and will end up in HTTPD, which then forwards the request to the ingress gateway in mediumtrust.
       LOWTRUST                             MEDIUMTRUST
|GW --> VS --> HTTPD Pod| ======> |GW --> VS --> Java Pod|
I have created a github repo to demonstrate this:
https://github.com/alexwibowo/avengersKubernetes
The HTTP proxy configuration is here: https://github.com/alexwibowo/avengersKubernetes/blob/main/httpd/conf/proxy.conf.
The Istio ingress gateway for lowtrust:
https://github.com/alexwibowo/avengersKubernetes/blob/main/kubernetes/avengers/charts/avengers-istio/templates/istio-httpd.yaml
and istio ingress gateway for mediumtrust:
https://github.com/alexwibowo/avengersKubernetes/blob/main/kubernetes/avengers/charts/avengers-istio/templates/istio-app.yaml
As you can see, both gateways have their own certs configured. At the moment, I kind of 'cheat' by modifying my /etc/hosts file to have the following:
127.0.0.1 lowtrust.avengers.local
<CLUSTER_IP_ADDRESS> mediumtrust.avengers.local
By doing this, when the HTTPD pod makes a request to 'mediumtrust.avengers.local', it gets directed to the Istio ingress gateway (that's my understanding, anyway).
I've heard that you can actually set up mutual TLS for the scenario I've described above. With this approach, I won't need to set up the certificate in my mediumtrust ingress gateway, and can just use 'ISTIO_MUTUAL'. I think for this I will also need to set up a 'proxy' service and virtual service in the lowtrust namespace. The virtual service will then manage the communication between lowtrust and mediumtrust. But I'm not 100% sure how to do this.
Any help / advice is much appreciated!
Edit 1 (2021/07/01)
I've been reading more about this topic. Another option is to have a Service of type 'ExternalName' within the 'lowtrust' namespace, which, if I might use the analogy, acts like a 'proxy' for connecting to the service in the other namespace.
e.g.:
apiVersion: v1
kind: Service
metadata:
  name: cr1-avengers-app
  namespace: "lowtrust"
spec:
  type: ExternalName
  externalName: "cr1-avengers-app.mediumtrust.svc.cluster.local"
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
    name: http
But by using this, I will effectively bypass the Istio VirtualService and DestinationRule that I've defined in the mediumtrust namespace.
The way I've managed to solve this locally is by having an entry in my Windows hosts file.
E.g.:
127.0.0.1 lowtrust.avengers.local
10.109.161.243 mediumtrust.avengers.local
10.109.161.243 is the cluster IP address of my istio-ingressgateway. I got this by running kubectl get svc -n istio-system from the command line.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.109.161.243 localhost 15021:30564/TCP,80:31834/TCP,443:31828/TCP,445:32700/TCP,15012:30459/TCP,15443:30397/TCP 21d
I was also missing the 'SSLProxyEngine' directive in my reverse proxy configuration. In the end, my VirtualHost configuration looks like this:
<VirtualHost *:7000>
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    SSLProxyEngine on
    ProxyPass /avengers https://mediumtrust.avengers.local/avengers
    ProxyPassReverse /avengers https://mediumtrust.avengers.local/avengers
    CustomLog "/tmp/access.log" common
    ErrorLog /tmp/error.log
</VirtualHost>
I am installing Jenkins on GKE.
I want to use an Ingress (to avoid the LoadBalancer) but I also want it to have TLS enabled.
Here are the ingress-related values:
ingress:
  enabled: false
  # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
  apiVersion: "extensions/v1beta1"
  labels: {}
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
  # path: "/jenkins"
  # configures the hostname e.g. jenkins.example.com
  hostName:
  tls:
  # - secretName: jenkins.cluster.local
  #   hosts:
  #     - jenkins.cluster.local
Assuming I already have a CloudDNS (routable to my-network.mydomain.net) and I want jenkins accessible via jenkins.my-network.mydomain.net, how should I configure the above values?
What is the usefulness of the values.ingress.tls.secretName?
In case I enable tls, what will be the issuing authority of the corresponding certificate? Is this handled automatically by GCP?
The Ingress you set up will still need one load balancer. This load balancer receives traffic from clients and forwards it to the ingress controller (GKE ingress, nginx, etc.), so you are not really avoiding a load balancer completely in this case.
An Ingress is used to avoid an explosion of load balancers when Kubernetes Services of type LoadBalancer are used to serve external clients. In your case, instead of exposing the Jenkins master service directly via a load balancer, you can use an Ingress so that no more than one load balancer is created.
What is the usefulness of the values.ingress.tls.secretName?
It tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS Secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN), for jenkins.cluster.local.
You also need to create a Secret named jenkins.cluster.local:
apiVersion: v1
kind: Secret
metadata:
  name: jenkins.cluster.local
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
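Rather than base64-encoding the cert and key by hand, the same Secret can be generated from the PEM files with kubectl (the file paths here are assumptions):

```shell
# Assumed file paths; creates a Secret of type kubernetes.io/tls
kubectl create secret tls jenkins.cluster.local \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  --namespace default
```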
In case I enable tls, what will be the issuing authority of the corresponding certificate? Is this handled automatically by GCP?
It's not handled automatically by GCP. Check the "Options for providing SSL certificates" section of the official docs.
Of the three options, I believe you should follow "Self-managed certificates as Secret resources": provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate. Refer to the instructions for using certificates in Secrets for more information.
I'm running Istio in Google Kubernetes Engine. My application uses gRPC and has an Extensible Service Proxy container to link to the Google Endpoints service.
Istio on GKE by default blocks all egress requests, but that breaks the ESP container, since it needs to request some data from outside the Istio mesh.
The logs from the ESP informed me that it was trying to access IP 169.254.169.254 to get some metadata, so I opened an egress channel from Istio to let that happen, and that's fine.
But the next thing the ESP attempts is to "fetch the service config ID from the rollouts service". Again this is blocked, but this time the log error doesn't tell me the URL it's trying to access, only the path. So I don't know which URL to open up for egress.
This is the log entry:
WARNING:Retrying (Retry(total=2, connect=None, read=None, redirect=None,
status=None)) after connection broken by 'ProtocolError('Connection
aborted.', error(104, 'Connection reset by peer'))':
/v1/services/rev79.endpoints.rev79-232812.cloud.goog/rollouts?filter=status=SUCCESS
So can anyone tell me what URLs the ESP needs to access to be able to work?
For anyone else stuck with this problem:
The ESP needs access to two separate endpoints in order to run without crashing. They are:
servicemanagement.googleapis.com (HTTPS)
169.254.169.254 (HTTP)
To function correctly, it also needs:
servicecontrol.googleapis.com (HTTPS)
If you have strict egress filtering in your Istio mesh, you will need two ServiceEntry resources to make this happen.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: endpoints-cert
spec:
  hosts:
  - metadata.google # this field does not matter
  addresses:
  - 169.254.169.254/32
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: NONE
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: endpoints-metadata
spec:
  hosts:
  - "servicemanagement.googleapis.com"
  - "servicecontrol.googleapis.com"
  ports:
  - number: 80 # may not be necessary
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
If you are using an egress gateway, you will need additional configuration for both of these endpoints.
I eventually stumbled across what I was looking for by googling parts of the path together with some keywords.
This looks like what the ESP is trying to access:
https://servicemanagement.googleapis.com/v1/services/{serviceName}/rollouts/{rolloutId}
Indeed opening up a route to that host gets the ESP up and running.
I need to be able to read/write to an Azure Service Bus queue, and for that the hostname and ports need to be whitelisted by my IT team.
The connection string is: "Endpoint=sb://[myappname].servicebus.windows.net;...".
I have tried the hostname with port 443 (an assumption on my part), but that hasn't worked after whitelisting. So I then tried writing to the queue while capturing the traffic in Wireshark, but I got lost in all the network packet details there.
Can anyone please help me with this?
Thank you
TCP is used by default for transport operations. Please try opening ports 5671 and 5672. More information is available in the "AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide".
Azure Service Bus requires the use of TLS at all times. It supports connections over TCP port 5671, whereby the TCP connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then equivalent to AMQP 5671 connections.
If you use a library, try setting the ConnectivityMode to HTTPS (port 443):
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Https
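As a side note, the exact FQDN to hand to the IT team can be pulled straight out of the connection string; here is a quick shell sketch, with "myappname" standing in for the real namespace as in the question:

```shell
# Extract the host to whitelist from a Service Bus connection string.
CONN="Endpoint=sb://myappname.servicebus.windows.net;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxxx"
HOST=$(echo "$CONN" | sed -n 's|.*Endpoint=sb://\([^/;]*\).*|\1|p')
echo "$HOST"   # myappname.servicebus.windows.net
```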
I am running a RESTful service behind a self-signed cert through NGINX on Google Cloud Kubernetes infrastructure.
A Kubernetes LoadBalancer service exposes 443 and routes the traffic to those containers. All is working just fine as expected, other than having to ask internal clients to ignore the self-signed cert warning!
It is time to move to a CA cert, so the only option as far as I can see is the HTTPS load balancer, but I couldn't figure out how to reroute its traffic to the LoadBalancer service or directly to the pods.
Any help appreciated.
1. Update firewall rules for:
   IP: 130.211.0.0/22
   tcp:30000-32767
2. Create a NodePort type service:
   apiVersion: v1
   kind: Service
   metadata:
     name: yourservicenodeport
     labels:
       name: your-service-node-port
   spec:
     type: NodePort
     ports:
     - port: 80
       nodePort: 30001
     selector:
       name: yourpods
3. Create a health check for the nodePort, which in this case is 30001.
4. Create an Ingress service:
   apiVersion: extensions/v1beta1
   kind: Ingress
   metadata:
     name: youTheking-ingress-service
   spec:
     backend:
       serviceName: yourservice
       servicePort: 80
5. Wait for a few minutes; be patient.
6. Change the health check on the HTTP load balancer:
   a. Go to Load Balancing on the Networking tab.
   b. Click the Advanced menu.
   c. Go to Backend Services and edit.
   d. Update the health check option to use the one created for the NodePort service.
7. Repeat step 5 for the instance group to be recognized as healthy.
8. SSL is needed, so go back to the load balancer, edit, click Frontend Configuration, then add HTTPS with the cert.
You are ready to roll.
I'm not sure I fully understand your question, but I'll try to answer it anyway.
You have two options for exposing your service using a cert signed by a trusted CA:
Do what you are doing today, but with the real cert. You will probably want to put the cert into a Secret and point your nginx configuration at it to load the cert.
Replace nginx with the Google L7 load balancer. You would upload your certificate to Google, then configure the L7 load balancer to terminate HTTPS and forward traffic to your backends.
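The second option can also be driven from Kubernetes with an Ingress whose TLS section makes the L7 load balancer terminate HTTPS using a cert stored in a Secret; a sketch in the same extensions/v1beta1 style, where the names and the Secret are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-https-ingress        # placeholder name
spec:
  tls:
  - secretName: my-tls-secret   # kubernetes.io/tls Secret holding the CA-signed cert and key
  backend:
    serviceName: yourservice    # placeholder backend service
    servicePort: 80
```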