Authentication scenario with Keycloak security proxy
I have built the architecture shown in the image.
A client authentication request reaches the Keycloak security proxy Docker container.
The proxy asks the actual Keycloak server Docker container.
The Keycloak server asks an external LDAP for the user's credentials.
The Keycloak server replies OK.
The Keycloak proxy replies OK and passes control to the external application URL.
The problem is that after successful authentication, the URL of the host server (i.e. where the Keycloak proxy container and the Keycloak server container reside) appears in the browser's address bar instead of the actual external application URL.
For example, if the host machine running the Keycloak containers is keycloak.containers.gr and the external application's domain name is www.external.application.gr, then after a successful login on the Keycloak SSO login page, the address bar shows http://keycloak.containers.gr instead of http://www.external.application.gr.
This breaks all the relative CSS and JS references attached to the site www.external.application.gr.
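To illustrate the breakage: with the address bar showing http://keycloak.containers.gr, a relative asset reference on the page such as (the path here is a placeholder)

<link rel="stylesheet" href="/static/main.css">

is resolved by the browser to http://keycloak.containers.gr/static/main.css instead of http://www.external.application.gr/static/main.css, so the stylesheets and scripts fail to load.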
KEYCLOAK SECURITY PROXY CONFIGURATION
I use a proxy.json file for the Keycloak security proxy configuration:
{
    "target-url": "http://www.external.application.gr",
    "bind-address": "0.0.0.0",
    "send-access-token": true,
    "http-port": "8180",
    "https-port": "8443",
    "applications": [
        {
            "base-path": "/",
            "adapter-config": {
                "realm": "internal_applications",
                "auth-server-url": "http://keycloak.containers.gr:8202/auth",
                "resource": "test_app",
                "ssl-required": "external",
                "credentials": {
                    "secret": "xxxxx-xxx-xxx-xxxx-xxxxxxxxxxx"
                }
            },
            "constraints": [
                {
                    "pattern": "/*",
                    "authenticate": true
                }
            ],
            "proxy-address-forwarding": true
        }
    ]
}
NOTE:
I have tried changing the "bind-address": "0.0.0.0" parameter from 0.0.0.0 to the IP of www.external.application.gr, but with no luck...
I have configured an Azure load balancer that points to my public IP over HTTP, and I can reach my website; it is working fine.
Now I want a routing rule that redirects the application traffic from HTTP to the secure HTTPS protocol with the help of an Azure application gateway.
Can we simply add our HTTPS services to the health probe after installing an SSL certificate on the load balancer? I don't have much knowledge of networking, so any help is highly appreciated.
I tried to reproduce the same in my environment and it worked successfully.
You can configure your public IP address for HTTPS using an Azure application gateway. First, create a self-signed certificate like below:
New-SelfSignedCertificate `
    -CertStoreLocation cert:\localmachine\my `
    -DnsName www.contoso.com

# Export the certificate to a pfx file
$pwd = ConvertTo-SecureString -String "Azure" -Force -AsPlainText
Export-PfxCertificate `
    -Cert cert:\localMachine\my\<YOURTHUMBPRINT> `
    -FilePath c:\appgwcert.pfx `
    -Password $pwd
Then create an application gateway; you can use your existing public IP address.
In the routing rule, add your frontend IP as public, set the protocol to HTTPS on port 443, and upload the pfx certificate.
Then create a listener on port 8080 with HTTP, add the same routing rule, and verify that the backend status is healthy.
When I tested the HTTP protocol with the IP address, it redirected successfully.
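If you prefer scripting this instead of the portal, here is a minimal sketch using the Azure CLI; the resource, gateway, and listener names are placeholders, and it assumes the HTTP and HTTPS listeners already exist on the gateway:

# Create a permanent HTTP-to-HTTPS redirect targeting the HTTPS listener
az network application-gateway redirect-config create `
    --resource-group myRG `
    --gateway-name myAppGateway `
    --name httpToHttps `
    --type Permanent `
    --target-listener httpsListener `
    --include-path true `
    --include-query-string true

# Attach the redirect to the HTTP listener via a routing rule
az network application-gateway rule create `
    --resource-group myRG `
    --gateway-name myAppGateway `
    --name httpToHttpsRule `
    --rule-type Basic `
    --http-listener httpListener `
    --redirect-config httpToHttps `
    --priority 100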
We use the envoy 1.24 Docker image and have the envoy YAML configured to route to service A successfully. After that, we decided to go with envoy external auth.
We did the configuration as per the instructions and got it to work locally. There is an auth service that runs as a Docker image, and we have envoy running in Docker, pointing to the auth Docker image's address.
The problem occurred when we shipped these two services to Google Cloud Run. While we didn't have any auth configured, we successfully managed to trigger gRPC requests on service A. The moment we added external auth and pushed everything to GCP Cloud Run, we ran into this issue:
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: immediate connect error: Cannot assign requested address
The envoy config looks like this:
Filter:

- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    grpc_service:
      envoy_grpc:
        cluster_name: auth
    transport_api_version: V3

Cluster:

- name: auth
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}
  load_assignment:
    cluster_name: auth
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: cloud-run-address
              port_value: 443
If I hit the auth service directly via a gRPC check request, I am able to do so without any issues. Even if I set the auth service's URL on service A, it resolves successfully and gives me the corresponding error.
Since we use gRPC for communication, all services are HTTP/2 enabled.
What are we doing wrong? We need to be able to authorize using the external auth service hosted on Cloud Run. Any help getting this to work is appreciated. Thanks.
Managed to get it working. I had to make a couple of changes to the cluster config:
- name: auth
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  dns_lookup_family: V4_ONLY
  http2_protocol_options: {}
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
  load_assignment:
    cluster_name: auth
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: internal_ip_address_from_lb
              port_value: 443
Furthermore, in the address field I added the IP address of a load balancer I created. It is an internal load balancer, configured to go directly to my Cloud Run auth service.
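One extra note, as an assumption rather than something from the answer above: if you point the cluster at the Cloud Run *.run.app hostname directly instead of an internal load balancer, you may also need to set SNI on the upstream TLS context, e.g.:

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    # hypothetical hostname; use your Cloud Run service's *.run.app address
    sni: auth-service-xyz.a.run.app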
I have a Flask app that was built based on the following instructions, which allows me to authenticate users against Azure AD.
https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-v2-python-webapp
The app works great when tested on localhost:5000. Now I want to deploy it to a production server using Docker and an nginx reverse proxy. I have created a Docker container so that the container's port is mapped to port 6000 on localhost. Then I added a proxy_pass in the nginx config to pass the traffic to the Docker container.
nginx.conf
location /app/authenticated-app/ {
    proxy_pass http://localhost:6000/;
    proxy_redirect default;
}
With this config, I can go to the login page via https://server/app/authenticated-app; however, when I click on login, the request that goes to Azure has a query parameter redirect_uri that is set to http://localhost:6000/getToken. Therefore, once I complete the login, the app gets redirected to that URL. Does anyone know how to fix this and get it redirected to the proper URL? I have already added https://server/app/authenticated-app/getToken under the redirect URIs on the Azure portal.
I had a similar issue, with nginx and my flask app both running in docker containers in the same stack and using a self-signed SSL certificate.
My nginx redirects requests as follows:
proxy_pass http://$CONTAINER_NAME:$PORT;
and the MSAL app uses that URL when building its redirect_uri:
def _build_auth_code_flow(authority=None, scopes=None):
    return _build_msal_app(authority=authority).initiate_auth_code_flow(
        scopes or [],
        redirect_uri=url_for("auth.authorized", _external=True))
I cheated a little bit by hardcoding the return URL I wanted (which is identical to the one I configured in my Azure app registration) in my config.py file and using that for the redirect_uri:
def _build_auth_code_flow(authority=None, scopes=None):
    return _build_msal_app(authority=authority).initiate_auth_code_flow(
        scopes or [],
        redirect_uri=current_app.config['HARDCODED_REDIRECT_URL_MICROSOFT'])
In my case, that URL would be https://localhost/auth/redirect/. I also needed to configure my nginx to redirect all requests from HTTP to HTTPS:
events {}

http {
    server {
        listen 80;
        server_name localhost;
        return 301 https://localhost$request_uri;
    }
    ...
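An alternative to hardcoding the redirect URL (a sketch, not part of the original answer): have nginx forward the original request context with proxy_set_header X-Forwarded-Proto $scheme;, proxy_set_header X-Forwarded-Host $host; and proxy_set_header X-Forwarded-Prefix /app/authenticated-app;, then wrap the Flask app in Werkzeug's ProxyFix so that url_for(..., _external=True) builds the public URL:

from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one proxy hop for the X-Forwarded-For/-Proto/-Host/-Prefix headers set by nginx,
# so externally-generated URLs use https://server/app/authenticated-app/... instead of localhost.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)

With that in place, the redirect_uri sent to Azure should be generated as https://server/app/authenticated-app/getToken without any hardcoding.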
I had the same issue. What I did:
Use CherryPy to enable SSL on a custom port.
cherrypy.config.update({'server.socket_host': '0.0.0.0',
                        'server.socket_port': 8443,
                        'engine.autoreload.on': False,
                        'server.ssl_module': 'builtin',
                        'server.ssl_certificate': 'crt',
                        'server.ssl_private_key': 'key'})
Then install nginx and proxy to https://127.0.0.1:8443.
Not sure if that will help, but this is what I did to get my Flask app working with MSAL.
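For completeness, a minimal sketch of serving the Flask app through CherryPy with that config (the from app import app line is a hypothetical import of your Flask application object):

import cherrypy

from app import app  # hypothetical: your Flask WSGI application object

# Graft the Flask WSGI app onto CherryPy's tree, then start the HTTPS server
cherrypy.tree.graft(app, '/')
cherrypy.config.update({'server.socket_host': '0.0.0.0',
                        'server.socket_port': 8443,
                        'engine.autoreload.on': False,
                        'server.ssl_module': 'builtin',
                        'server.ssl_certificate': 'crt',
                        'server.ssl_private_key': 'key'})
cherrypy.engine.start()
cherrypy.engine.block()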
I have a service that exposes an endpoint whose main purpose is to perform a custom HTTP call based on the JSON data provided (URL, port, body, headers, etc.).
The service runs inside a Docker container in a cloud environment.
How do I prevent a user from making circular calls back to the server itself, e.g. post http://myservice.com/invoke body: {'url': 'localhost:8080/invoke', 'method': 'get'}?
And how do I deny 'localhost', '0.0.0.0', or '127.0.0.1' URLs, so that the server cannot resolve and call them inside the container and execute requests against endpoints that are not exposed? (For example, my service also has a '/stats' endpoint that is not accessible from the public VPC but is accessible inside the private VPC, so calling myservice.com/invoke with {'url': 'localhost:8080/stats', 'method': 'get'} would expose this endpoint to users outside the private VPC.)
My simple Dockerfile:
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY build/my-service-1.0-SNAPSHOT-runner my-service
RUN chmod +x my-service
EXPOSE 8080
CMD ["./my-service"]
Thank you
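One common mitigation, as a minimal sketch (in Python for illustration, since the question doesn't show the service's source; translate it to your service's language): resolve the requested host before calling it and reject loopback, private, link-local, and unspecified addresses. Note that this alone doesn't defend against DNS rebinding; re-validating the resolved IP at connect time is stronger.

import ipaddress
import socket
from urllib.parse import urlparse

def is_target_allowed(url: str) -> bool:
    """Reject URLs whose host resolves to a loopback/private/link-local address."""
    # urlparse needs a scheme or a leading // to recognize the host part
    host = urlparse(url if "//" in url else "//" + url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected too
    for _, _, _, _, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_loopback or ip.is_private or ip.is_link_local or ip.is_unspecified:
            return False
    return True

# Both examples from the question are rejected:
assert not is_target_allowed("localhost:8080/invoke")
assert not is_target_allowed("http://127.0.0.1:8080/stats")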
I have multiple NGINX-uWSGI based Django applications deployed using Docker and hosted on EC2 (currently on different ports like 81, 82, ...). Now I wish to add sub-domains, so that sub1.domain.com and sub2.domain.com both work from the same EC2 instance.
I am fine with multiple ports, but they don't work via DNS settings alone, since a DNS record cannot point at a port:
sub1.domain.com -> 1.2.3.4:81
sub2.domain.com -> 1.2.3.4:82
What I cannot do:
Multiple IPs (ref): allocating a new IP for each deployed sub-domain is not possible.
NGINX Proxy (ref): this looks like the ideal solution, but it is not maintained by an org like Docker or NGINX, so I am unsure of its security and reliability.
What I am considering:
I am considering writing my own NGINX reverse proxy, similar to Apache Multiple Sub Domains With One IP Address, but then traffic will flow via multiple proxies, since there is already an NGINX-uWSGI proxy in the stack.
You can use an nginx upstream:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    server_name sub.test.com www.sub.test.com;

    location / {
        proxy_pass http://backend;
    }
}
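Applied to the setup in the question, a minimal sketch (assuming both sub-domains have A records pointing at the EC2 instance's IP, and the apps listen on ports 81 and 82) is one server block per sub-domain:

# Route each sub-domain to the matching app port on the same instance
server {
    listen 80;
    server_name sub1.domain.com;

    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name sub2.domain.com;

    location / {
        proxy_pass http://127.0.0.1:82;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}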