Envoy upstream error when using external auth - google-cloud-run

We use the Envoy 1.24 Docker image and have the Envoy YAML configured to route to service A successfully. After that we decided to add Envoy external authorization.
We did the configuration as per the instructions and got it working locally: the auth service runs as a Docker image, and Envoy runs in Docker pointing at the auth container's address.
The problem occurred when we shipped these two services to Google Cloud Run. While we had no auth configured, we could successfully trigger gRPC requests on service A. The moment we added external auth and pushed everything to Cloud Run, we ran into this error:
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: immediate connect error: Cannot assign requested address
Envoy config looks like this:
Filter:

- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    grpc_service:
      envoy_grpc:
        cluster_name: auth
    transport_api_version: V3

Cluster:

- name: auth
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}
  load_assignment:
    cluster_name: auth
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: cloud-run-address
              port_value: 443
If I hit the auth service directly with a gRPC Check request, I can do so without any issues. Even if I point the auth cluster at service A's URL instead, it resolves successfully and returns an error accordingly.
Since we use gRPC for communication, all services are HTTP/2 enabled.
What are we doing wrong? We need to be able to authorise using an external auth service hosted on Cloud Run. Any help getting this to work is appreciated. Thanks

Managed to get it working. I had to make a couple of changes to the cluster config:
- name: auth
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  dns_lookup_family: V4_ONLY
  http2_protocol_options: {}
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
  load_assignment:
    cluster_name: auth
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: internal_ip_address_from_lb
              port_value: 443
Furthermore, in the address field I put the IP address of a load balancer I created. It is an internal load balancer, configured to route directly to my Cloud Run auth service.
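For completeness, the same cluster can also be written with the typed_extension_protocol_options form used in the original config (cluster-level http2_protocol_options is the older style). This is only a sketch combining the two snippets above, not a verified config; the sni line is an assumption and may only be needed when pointing at a hostname rather than an LB IP:

- name: auth
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  dns_lookup_family: V4_ONLY
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      # assumption: set SNI if the upstream expects the Cloud Run hostname
      sni: cloud-run-address
  load_assignment:
    cluster_name: auth
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: internal_ip_address_from_lb
              port_value: 443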

Related

Ingress NGINX does not listen on URL with specified port

I am running Azure AKS with kubenet networking, in which I have deployed several services, exposed on several ports.
I have configured URL-based routing and it seems to work for the services I could test.
I found out the following:
Sending URL or URL:80 returns the desired web page, but if I include the port, the browser's address bar drops it and shows http://URL/
When I try to access other web pages or services, I get a strange phenomenon: calling the URL with the port number hangs until the browser says it's unreachable, and Fiddler reports a time out.
When I access a service (1 of 3 I could check visibly) without the port, the Ingress rules I applied answer the request and I get the resulting web page, which is exposed on the internal service port.
I'm using this YAML for the RabbitMQ management page:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbit-admin-on-ingress
  namespace: mynamespace
spec:
  rules:
  - host: rabbit.my.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672
  ingressClassName: nginx
I also apply this config (using kubectl apply -f config.file.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "15672": "mynamespace/rabbitmq:15672"
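For reference, a tcp-services entry like the one above generally only takes effect if the ingress-nginx controller is started with --tcp-services-configmap pointing at it and the controller's own Service also exposes the same port. A minimal sketch of the ports entry that would be added to the existing controller Service (names are illustrative, not from the original question):

# added to the existing ingress-nginx controller Service
ports:
- name: proxied-tcp-15672
  port: 15672
  targetPort: 15672
  protocol: TCP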
What happens is:
http://rabbit.my.local gets the rabbit admin page
http://rabbit.my.local:15672 gets a time out, and I get frustrated
It seems this is also happening on another service I have running on port 8085, and perhaps even the DB running on the usual SQL port (which might be a TCP-only connection).
Both are configured the same as the rabbitmq service in the YAML rules and config file, with their respective service names, namespaces and ports.
Please help me figure out how I can make Ingress accept URLs with the :PORT attached and answer them. Save me.
A quick reminder: :80 works fine, perhaps because it's one of the defaults for Ingress.
Thank you so much in advance.
Moshe

Integrating Azure Application Gateway, AGIC and Istio

Was anyone able to integrate the three in the subject to achieve end-to-end TLS? To clarify, I'm talking about TLS between Application Gateway and the Istio ingress.
There are some threads on StackOverflow and there is an old issue on the AGIC GitHub repo, but I was not able to find any evidence it's really working. If someone has it working, can you share the setup?
According to the Istio documentation, any request through the gateway involves two connections: the client's downstream connection into the gateway, and the gateway's upstream connection out to the destination.
Thus, in the above scenario, treat the Azure Application Gateway as the gateway and the Istio ingress as the destination; as a result, these two connections are independent TLS connections.
For TLS connections, there are a few more options:
a) What protocol is encapsulated? If the connection is HTTPS, the server protocol should be configured as HTTPS. Otherwise, for a raw TCP connection encapsulated with TLS, the protocol should be set to TLS.
b) Is the TLS connection terminated or passed through? For passthrough traffic, configure the TLS mode field to PASSTHROUGH:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
...
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: PASSTHROUGH
In this mode, Istio will route based on SNI information and forward the connection as-is to the destination. Mutual TLS can be configured through the TLS mode MUTUAL. When this is configured, a client certificate will be requested and verified against the configured caCertificates or credentialName:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
...
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  tls:
    mode: MUTUAL
    caCertificates: ...
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works but is often not the desired behavior.
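To make the double-encryption caveat concrete, here is a rough sketch (the name and host are illustrative, not from the original answer) of a DestinationRule that performs TLS origination; combined with a PASSTHROUGH Gateway it would wrap already-encrypted traffic in a second TLS layer:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: originate-tls-example        # illustrative name
spec:
  host: my-backend.default.svc.cluster.local   # illustrative host
  trafficPolicy:
    tls:
      mode: SIMPLE   # the proxy originates TLS towards this host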
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition. In this scenario, treat the gateway as the Azure Application Gateway and make sure the TLS settings are configured consistently on both sides. For more information, kindly refer to the Istio documentation below:
https://istio.io/latest/docs/ops/configuration/traffic-management/tls-configuration/
https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/
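As a rough illustration of the VirtualService consistency point above (hostnames and names are placeholders, not from the original answer), a PASSTHROUGH server is typically paired with a tls route matched on SNI rather than an http route:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: passthrough-example          # illustrative name
spec:
  hosts:
  - myapp.example.com                # illustrative host
  gateways:
  - my-gateway                       # illustrative gateway name
  tls:
  - match:
    - port: 443
      sniHosts:
      - myapp.example.com
    route:
    - destination:
        host: myapp.default.svc.cluster.local
        port:
          number: 443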

What URLs does the GKE extensible service proxy need to access

I'm running Istio in Google Kubernetes Engine. My application uses gRPC and has an Extensible Service Proxy container to link to the Google Endpoints service.
Istio on GKE by default blocks all egress requests, but that breaks the ESP container since it needs to request some data from outside the Istio mesh.
The logs from the ESP informed me it was trying to access IP 169.254.169.254 to get some metadata, so I opened up an egress channel from Istio to let that happen, and that's fine.
But the next thing the ESP attempts is to "fetch the service config ID from the rollouts service". Again this is blocked, but this time the log error doesn't tell me the URL it's trying to access, only the path, so I don't know what URL to open up for egress.
This is the log entry:
WARNING:Retrying (Retry(total=2, connect=None, read=None, redirect=None,
status=None)) after connection broken by 'ProtocolError('Connection
aborted.', error(104, 'Connection reset by peer'))':
/v1/services/rev79.endpoints.rev79-232812.cloud.goog/rollouts?filter=status=SUCCESS
So, can anyone tell me what URLs the ESP needs to access in order to work?
For anyone else stuck with this problem:
The ESP needs access to two separate endpoints in order to run without crashing. They are:
servicemanagement.googleapis.com (HTTPS)
169.254.169.254 (HTTP)
To function correctly, it also needs
servicecontrol.googleapis.com (HTTPS)
If you have strict egress filtering in your Istio mesh, you will need two ServiceEntry resources to make this happen.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: endpoints-cert
spec:
  hosts:
  - metadata.google # this field does not matter
  addresses:
  - 169.254.169.254/32
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: NONE
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: endpoints-metadata
spec:
  hosts:
  - "servicemanagement.googleapis.com"
  - "servicecontrol.googleapis.com"
  ports:
  - number: 80 # may not be necessary
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
If you are using an egress gateway, you will need additional configuration for both of these endpoints.
I eventually stumbled across what I was looking for by googling parts of the path with some key words.
This looks like what the ESP is trying to access:
https://servicemanagement.googleapis.com/v1/services/{serviceName}/rollouts/{rolloutId}
Indeed opening up a route to that host gets the ESP up and running.

Kubernetes: Replace mod_cluster for back end services (reverse proxy)

I am migrating my current service to Kubernetes. Currently, back-end services are resolved via mod_cluster: the mod_cluster manager runs on httpd, and mod_cluster clients auto-register their web contexts with the httpd/mod_cluster manager on startup:
user --> ingress rule --> httpd [running mod_cluster manager] --> JBoss [mod_cluster clients]
I resolve my UI via the following Ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: httpd
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: myk8s.myath.myserv.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 443
  tls:
  - hosts:
    - myk8s.myath.myserv.com
This works well: the UI resolves, I can log in, and all static content resolves, etc.
mod_cluster exposes services such as myservice. I disabled mod_cluster and created a Kubernetes Service named myservice that resolves to the back-end Pod, thinking that the Ingress rule would get the request as far as httpd and that httpd could then resolve the back-end service via Kubernetes, but I get 404s because myservice cannot be resolved.
The service can be resolved via reverse-proxy rules such as the ones below, but this is not the preferred solution:
# Redirect to myjbossserv (a Service registered in Kubernetes)
ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
Any help much appreciated
The simplest way to solve this, catering for all HA and robustness use cases, was to use reverse-proxy rules. There are multiple ways to configure these, such as at image build time or via ConfigMaps (a sketch of the ConfigMap approach follows the rules below):
# Redirect to myjbossserv (a Service registered in Kubernetes)
ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
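As a rough sketch of the ConfigMap approach mentioned above (the name and file name are illustrative, not from the original answer), the proxy rules can be shipped in a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: httpd-proxy-rules            # illustrative name
data:
  myservice-proxy.conf: |
    # Redirect to myjbossserv (a Service registered in Kubernetes)
    ProxyPass /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/
    ProxyPassReverse /myservice/services/command/ http://myjbossserv:8080/myservice/services/command/

The httpd Deployment would then mount this ConfigMap as a volume wherever the image picks up extra conf files; the exact path depends on the httpd image used.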

HTTP(S) Load Balancing for Kubernetes / Docker

I am running a RESTful service behind a self-signed cert through NGINX on Google Cloud Kubernetes infrastructure.
The Kubernetes service load balancer exposes 443 and routes the traffic to those containers. Everything works as expected, other than having to ask internal clients to ignore the self-signed cert warning!
It is time to move to a CA-signed cert, so the only option as far as I can see is the HTTPS load balancer, but I couldn't figure out how to route its traffic to the service load balancer, or directly to the pods in place of the service load balancer (HTTP load balancer).
Any help appreciated.
Update Firewall Rules for:
IP: 130.211.0.0/22
tcp:30000-32767
Create NodePort type service:
apiVersion: v1
kind: Service
metadata:
  name: yourservicenodeport
  labels:
    name: your-service-node-port
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: yourpods
Create a health check for the nodePort, which in this case is 30001.
Create an Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: youTheking-ingress-service
spec:
  backend:
    serviceName: yourservice
    servicePort: 80
Wait a few minutes; be patient.
Change the health check on the HTTP load balancer:
a. Go to Load Balancing under the Networking tab.
b. Click the advanced menu.
c. Go to Backend Services and click Edit.
d. Update the health check option and use the one created for the NodePort service.
Repeat step 5 so that the instance group is recognized as healthy.
If SSL is needed, go back to the load balancer, edit it, click Frontend Configuration, and add HTTPS with the cert.
You are ready to roll.
I'm not sure I fully understand your question, but I'll try to answer it anyway.
You have two options for exposing your service using a cert signed by a trusted CA:
Do what you are doing today but with the real cert. You will probably want to put the cert into a Secret and point your nginx configuration at it to load the cert (a rough sketch of such a Secret follows below).
Replace nginx with the Google L7 load balancer. You would upload your certificate to Google, configure the L7 balancer to terminate HTTPS, and forward traffic to your backends.
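For the first option, this is only a sketch of what the Secret might look like (the name is illustrative and the data values are placeholders); nginx would mount it as a volume and reference the files via its ssl_certificate and ssl_certificate_key directives:

apiVersion: v1
kind: Secret
metadata:
  name: my-tls-cert                  # illustrative name, not from the original answer
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>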
