OAuth token validation from HAProxy or Apache mod_proxy

I have a microservice deployed on 3 nodes behind an HAProxy load balancer, all inside an internal network. The services are protected by an OAuth2 (APIS) authorization server. Now I want to move HAProxy into the DMZ, and I want to reject requests that do not have an auth token in the header and also validate the auth token by calling the OAuth REST API.
I couldn't find a way to do this in HAProxy. There is an httpchk option that can be used for health checks; I'm looking for a similar feature that can be used to validate each incoming request.
Can anyone suggest how to implement this using HAProxy or Apache mod_proxy?

There's the Apache module mod_auth_openidc that allows you to validate OAuth 2.0 tokens against an Authorization Server; see: https://github.com/zmartzone/mod_auth_openidc. That module can be combined with mod_proxy to achieve what you are looking for.
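For illustration, here is a minimal sketch of how the two modules could be combined; the introspection endpoint, client credentials, paths, and backend address are placeholders for your environment (directive names as documented in the mod_auth_openidc README):

# mod_auth_openidc in OAuth 2.0 resource server mode: each incoming
# bearer token is validated against the authorization server's
# introspection endpoint before the request is proxied.
OIDCOAuthIntrospectionEndpoint https://authserver.example.com/oauth2/introspect
OIDCOAuthClientID apache-gateway
OIDCOAuthClientSecret change-me

<Location /api/>
    AuthType oauth20
    Require valid-user
    # Forward validated requests to the internal service (mod_proxy)
    ProxyPass http://10.0.0.10:8080/api/
    ProxyPassReverse http://10.0.0.10:8080/api/
</Location>

With this in place, requests without a valid token should be rejected with a 401 before they ever reach the backend.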

In HAProxy I couldn't find a way to do this.
For the record, as of 2021 you can. Here's an official HAProxy blog post about using OAuth: https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-2-authentication/.
TL;DR: install the haproxy-lua-oauth script, then you can use a config like this snippet:
frontend api_gateway
    # Always use HTTPS to protect the secrecy of the token
    bind :443 ssl crt /usr/local/etc/haproxy/pem/test.com.pem
    # Accept GET requests and skip further checks
    http-request allow if { method GET }
    # Deny the request if it's missing an Authorization header
    http-request deny unless { req.hdr(authorization) -m found }
    # Verify the token by invoking the jwtverify Lua script
    http-request lua.jwtverify
    # Deny the request unless 'authorized' is true
    http-request deny unless { var(txn.authorized) -m bool }
    # (Optional) Deny the request if it's a POST/DELETE to a
    # path beginning with /api/hamsters, but the token doesn't
    # include the "write:hamsters" scope
    http-request deny if { path_beg /api/hamsters } { method POST DELETE } ! { var(txn.oauth_scopes) -m sub write:hamsters }
    # If no problems, send to the apiservers backend
    default_backend apiservers

Related

Redirecting to docker registry with nginx

All I would like to do is control the top endpoint (MY_ENDPOINT) where users will log in and pull images. The registry and containers are hosted externally (DOCKER_SAAS), so all I need is a seemingly simple redirect. Concretely, where you would normally do:
docker login -u ... -p ... DOCKER_SAAS
docker pull DOCKER_SAAS/.../...
I would like to allow:
docker login -u ... -p ... MY_ENDPOINT
docker pull MY_ENDPOINT/.../...
And even more optimally I would prefer:
docker login MY_ENDPOINT
docker pull MY_ENDPOINT/.../...
where the difference in the last item is that the endpoint contains a hashed version of the username and password, which is set into an Authorization header (using Basic), so the user doesn't even need to worry about a username and password, just their URL. I've tried a proxy_pass, as we are already doing for basic packaging (using HTTPS), but that fails with a 404 (in part because we do not handle /v2; do I need to redirect that through, too?). This led me to https://docs.docker.com/registry/recipes/nginx/, but that seems pertinent only if you are hosting the registry yourself. Is what I am trying to do even possible?
It sounds like there is also an Nginx or similar reverse proxy server in front of the DOCKER_SAAS. Does the infrastructure look like this?
[MY_ENDPOINT: nginx] <--> ([DOCKER_SAAS ENDPOINT: ?] <--> [DOCKER_REGISTRY])
My guess is that since the server [DOCKER_SAAS ENDPOINT: ?] is apparently configured with a fixed domain name, it expects exactly that domain name in the request header (e.g. Host: DOCKER_SAAS.TLD). So the problem is probably that when proxying from [MY_ENDPOINT: nginx] to [DOCKER_SAAS ENDPOINT: ?], the wrong Host header is sent along; by default the Host header MY_ENDPOINT.TLD is sent, but it should be DOCKER_SAAS.TLD instead. E.g.:
upstream docker-registry {
    server DOCKER_SAAS.TLD:8443;
}
server {
    ...
    server_name MY_ENDPOINT.TLD;
    location / {
        proxy_pass https://docker-registry/;
        proxy_set_header Host DOCKER_SAAS.TLD; # set the header explicitly
        ...
    }
}
or
server {
    ...
    server_name MY_ENDPOINT.TLD;
    location / {
        proxy_pass https://DOCKER_SAAS.TLD:8443/;
        proxy_set_header Host DOCKER_SAAS.TLD; # set the header explicitly
        ...
    }
}
Regarding this:
And even more optimally I would prefer: docker login MY_ENDPOINT
This could be set on the proxy server ([MY_ENDPOINT: nginx]), yes. (The Authorization: "Basic ..." header can be dynamically filled with the respective token extracted from MY_ENDPOINT, and so on.) However, the docker CLI would still ask for a username and password anyway. The user can enter dummy values to make the CLI happy, so this would also work:
docker login -u lalala -p 123 MY_ENDPOINT
But this would be inconsistent and would rather confuse the users, imho. So better to let it be...
This simple config works both with GitHub and Amazon ECR:
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_set_header Authorization "Basic ${NGINX_AUTH_CREDENTIALS}";
        proxy_pass https://registry.example.com;
    }
}
${NGINX_AUTH_CREDENTIALS} is a placeholder for the actual hash that Docker uses to authenticate. You can get it from $HOME/.docker/config.json after using docker login once:
{
    "auths": {
        "registry.example.com": {
            "auth": "THIS STRING"
        }
    }
}
Since the proxy injects/replaces the authentication header, there is no need to use docker login; just pull using the address of the proxy instead of the registry address.
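For example, assuming the proxy from the config above runs on localhost and using the thread's placeholder names, the pull needs no login step at all:

docker pull localhost/repo_name/image_name:latest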
Why 404?
I had several 40X errors trying to test the proxy to GitHub with curl:
Bad credentials return 404, not 401 or 403 as they normally would.
GET /v2/_catalog returns 404 (not supported on GitHub yet, in the backlog). Use GET /v2/repo_name/image_name/tags/list instead.
curl without -XGET returns 405; it gives a response anyway, but to get a 200 you need to explicitly use GET (-XGET).
Despite all that, docker pull worked flawlessly from the beginning, so I recommend using it for testing.
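For reference, a curl probe consistent with the notes above, again with the thread's placeholder names against the proxy on localhost:

# Explicitly use GET; the tags/list endpoint works where /v2/_catalog does not
curl -XGET http://localhost/v2/repo_name/image_name/tags/list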
How to handle /v2/
location / matches everything, including /v2/, so there is no particular need to handle it separately in the proxy.

Keycloak token introspection always fails with {"active":false}

I'm kind of desperate to make this Keycloak setup work. I can authenticate, but for some reason my token introspection always fails.
For example if I try to authenticate:
curl -d 'client_id=flask_api' -d 'client_secret=98594477-af85-48d8-9d95-f3aa954e5492' -d 'username=jean@gmail.com' -d 'password=superpassE0' -d 'grant_type=password' 'http://keycloak.dev.local:9000/auth/realms/skilltrock/protocol/openid-connect/token'
I get my access_token as expected:
{
"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnLVZJQ0VETnJ4NWRfN1pWQllCTC1tNDdTZWFNT3NDVlowSFdtZF9QQkZrIn0.eyJqdGkiOiIwNTBkYWI5MS1kMjA5LTQwYjctOTBkOS1mYTgzMWYyMTk1Y2MiLCJleHAiOjE1NDQ1MjIyNDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6ImFjY291bnQiLCJzdWIiOiI3NDA0MWNkNS1lZDBhLTQzMmYtYTU3OC0wYzhhMTIxZTdmZTAiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJmbGFza19hcGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJiOGI0MzA2Ny1lNzllLTQxZmItYmNkYi0xMThiMTU2OWU3ZDEiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJqZWFuIHBpZXJyZSIsInByZWZlcnJlZF91c2VybmFtZSI6ImplYW5AZ21haWwuY29tIiwiZ2l2ZW5fbmFtZSI6ImplYW4iLCJmYW1pbHlfbmFtZSI6InBpZXJyZSIsImVtYWlsIjoiamVhbkBnbWFpbC5jb20ifQ.x1jW1cTSWSXN5DsXT3zk1ra4-BcxgjXbbqV5cjdwKTovoNQn7LG0Y_kR8-8Pe8MvFe7UNmqrHbHh21wgZy1JJFYSnnPKhzQaiT5YTcXCRybSdgXAjnvLpBjVQGVbMse_obzjjE1yTdROrZOdf9ARBx6EBr3teH1bHMu32a5wDf-fpYYmHskpW-YoQZljzNyL353K3bmWMlWSGzXx1y7p8_T_1WLwPMPr6XJdeZ5kW0hwLcaJVyDhX_92CFSHZaHQvI8P095D4BKLrI8iJaulnhsb4WqnkUyjOvDJBqrGxPvVqJxC4C1NXKA4ahk35tk5Pz8uS33HY6BkcRKw7z6xuA",
"expires_in":300,
"refresh_expires_in":1800,
"refresh_token":"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJlYmY4ZDVlOC01MTM4LTRiNTUtYmZhNC02YzcwMzBkMTIwM2YifQ.eyJqdGkiOiI3NWQ1ODgyMS01NzJkLTQ1NDgtOWQwYS0wM2Q3MGViYWE4NGEiLCJleHAiOjE1NDQ1MjM3NDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6Imh0dHA6Ly9rZXljbG9hay5kZXYubG9jYWw6OTAwMC9hdXRoL3JlYWxtcy9za2lsbHRyb2NrIiwic3ViIjoiNzQwNDFjZDUtZWQwYS00MzJmLWE1NzgtMGM4YTEyMWU3ZmUwIiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImZsYXNrX2FwaSIsImF1dGhfdGltZSI6MCwic2Vzc2lvbl9zdGF0ZSI6ImI4YjQzMDY3LWU3OWUtNDFmYi1iY2RiLTExOGIxNTY5ZTdkMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIn0.omhube2oe79dXlcChOD9AFRdUep53kKPjD0HF14QioY",
"token_type":"bearer",
"not-before-policy":0,
"session_state":"b8b43067-e79e-41fb-bcdb-118b1569e7d1",
"scope":"email profile"
}
But if I try to introspect the access_token as shown below, Keycloak always returns {"active":false}. I really don't understand this behavior.
curl -X POST -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" -d "token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnLVZJQ0VETnJ4NWRfN1pWQllCTC1tNDdTZWFNT3NDVlowSFdtZF9QQkZrIn0.eyJqdGkiOiIwNTBkYWI5MS1kMjA5LTQwYjctOTBkOS1mYTgzMWYyMTk1Y2MiLCJleHAiOjE1NDQ1MjIyNDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6ImFjY291bnQiLCJzdWIiOiI3NDA0MWNkNS1lZDBhLTQzMmYtYTU3OC0wYzhhMTIxZTdmZTAiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJmbGFza19hcGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJiOGI0MzA2Ny1lNzllLTQxZmItYmNkYi0xMThiMTU2OWU3ZDEiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJqZWFuIHBpZXJyZSIsInByZWZlcnJlZF91c2VybmFtZSI6ImplYW5AZ21haWwuY29tIiwiZ2l2ZW5fbmFtZSI6ImplYW4iLCJmYW1pbHlfbmFtZSI6InBpZXJyZSIsImVtYWlsIjoiamVhbkBnbWFpbC5jb20ifQ.x1jW1cTSWSXN5DsXT3zk1ra4-BcxgjXbbqV5cjdwKTovoNQn7LG0Y_kR8-8Pe8MvFe7UNmqrHbHh21wgZy1JJFYSnnPKhzQaiT5YTcXCRybSdgXAjnvLpBjVQGVbMse_obzjjE1yTdROrZOdf9ARBx6EBr3teH1bHMu32a5wDf-fpYYmHskpW-YoQZljzNyL353K3bmWMlWSGzXx1y7p8_T_1WLwPMPr6XJdeZ5kW0hwLcaJVyDhX_92CFSHZaHQvI8P095D4BKLrI8iJaulnhsb4WqnkUyjOvDJBqrGxPvVqJxC4C1NXKA4ahk35tk5Pz8uS33HY6BkcRKw7z6xuA" http://localhost:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect
return
{"active":false}
Where am I wrong? I'm totally lost.
You need to make sure that you introspect the token using the same DNS hostname/port as the original token request. Unfortunately that's a not widely documented "feature" of Keycloak...
So use:
curl -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" -d "token=<token>" http://keycloak.dev.local:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect
A legacy alternative is to send a Host header with the introspection request that matches the frontend host where the token was obtained. I have tried that and can confirm it works. It requires configuring two URLs for the introspecting protected resource, though.
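A sketch of that legacy variant, reusing the curl call from above; the Host value must match the frontend host the token was issued for:

curl -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" \
  -H "Host: keycloak.dev.local:9000" \
  -d "token=<token>" \
  http://localhost:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect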
Since Keycloak 8 it is also possible to use OpenID Connect Discovery, as described in https://issues.redhat.com/browse/KEYCLOAK-11728 and the attached design document.
The latter solution involves two things:
Keycloak needs to know the frontend URL used for token retrieval by clients; that can happen at start time as described in the server installation docs (-Dkeycloak.frontendUrl or standalone.xml) or when defining a realm through its Frontend URL option in the Keycloak Management UI. The Keycloak docker container supports the environment variable KEYCLOAK_FRONTEND_URL for this. Note that the frontendUrl needs to include the base path where Keycloak is running (/auth by default, adjust as needed).
Optionally, the protected resource can use OIDC discovery to determine the introspection URL. Either use the Keycloak Java Adapter >=v8, which has built-in support for OIDC Discovery, or verify whether your protected resource supports it. See the related PR on the Keycloak Java Adapter for details.
When a protected resource uses the internal discovery endpoint http://keycloak:8998/auth/realms/myrealm/.well-known/openid-configuration, it will find the correct internal introspection endpoint:
{
    "issuer": "http://localhost:8998/auth/realms/myrealm",
    "authorization_endpoint": "http://localhost:8998/auth/realms/myrealm/protocol/openid-connect/auth",
    "token_endpoint": "http://keycloak:8080/auth/realms/myrealm/protocol/openid-connect/token",
    "token_introspection_endpoint": "http://keycloak:8080/auth/realms/myrealm/protocol/openid-connect/token/introspect"
}
Using the discovery endpoint is not required. If you just need to introspect an incoming token with Keycloak, you can directly use the introspection endpoint.
The introspect endpoint can also return {"active":false} if a session associated with that token doesn't exist in Keycloak, for example if you create the token, restart Keycloak, and then call introspect.
As mentioned earlier, the same domain should be used when obtaining and introspecting the token.
For me, I was using localhost to obtain the token and 127.0.0.1 to introspect it.

How to set up HAProxy to add an access token to client requests

I have a client that can only make requests without authentication information.
I would like to use HAProxy or a similar proxy solution to add OAuth authentication to these client requests.
I already succeeded in adding a Bearer token to the client requests. See below for the haproxy.cfg with some placeholders.
frontend front
    mode http
    bind *:8080
    http-request add-header Authorization "Bearer {{ .Env.ACCESS_TOKEN}}"
    default_backend servers

backend servers
    mode http
    server server1 myserver.com:443 ssl
The problem is that the access tokens have a TTL of 24 hours. So I need to refresh them or get a new token periodically.
Does HAProxy support this already?
I could write a script that gets a new access token periodically, updates the config, and restarts HAProxy. Is this a good approach when running HAProxy in Docker? Are there better solutions?
You could try creating/testing your script using Lua; it is now supported in the latest HAProxy versions. Check How Lua runs in HAProxy.
An example of this, but using Nginx + Lua, can be found in this project: https://github.com/jirutka/ngx-oauth
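If you go with the external-script approach you described, a minimal sketch could look like the following; the token endpoint, client credentials, template path, and container name are all assumptions for your setup (it also assumes jq is installed and the official HAProxy Docker image, which reloads its config on SIGHUP):

#!/bin/sh
# Fetch a fresh access token (client_credentials grant; adjust to your IdP).
TOKEN=$(curl -s -d 'grant_type=client_credentials' \
  -d 'client_id=my-client' -d 'client_secret=my-secret' \
  https://auth.example.com/oauth/token | jq -r '.access_token')

# Render haproxy.cfg from a template containing the token placeholder.
sed "s|{{ .Env.ACCESS_TOKEN}}|$TOKEN|" /etc/haproxy/haproxy.cfg.tmpl \
  > /etc/haproxy/haproxy.cfg

# Ask the HAProxy container to reload its configuration.
docker kill -s HUP haproxy

Run it from cron comfortably inside the 24-hour TTL, e.g. every few hours.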

AzureAD authentication to Icingaweb2

Is it possible to authenticate to Icingaweb2 through AzureAD (SAML/OAuth2/OpenID)?
This is actually possible to achieve using
https://github.com/bitly/oauth2_proxy
After this proxy is installed and configured, run it with -set-xauthrequest (details are in the GitHub repo wiki/README).
Set up Icinga Web 2 for external authentication by adding:
[autologin]
backend = external
to the authentication.ini file.
In the nginx/Apache configuration for Icinga Web 2 you need to add:
fastcgi_param REMOTE_USER $http_X_User;
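As a rough sketch of how those pieces could fit together on the nginx side; the ports, server name, php-fpm socket, and paths are assumptions, and oauth2_proxy is assumed to listen on 127.0.0.1:4180:

server {
    listen 443 ssl;
    server_name icinga.example.com;

    # oauth2_proxy endpoints (sign-in page, OAuth callback, auth checks)
    location /oauth2/ {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
    }

    location ~ ^/icingaweb2/index\.php(.*)$ {
        # Check every request against oauth2_proxy first
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;
        # With -set-xauthrequest, oauth2_proxy reports the user here
        auth_request_set $user $upstream_http_x_auth_request_user;

        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME /usr/share/icingaweb2/public/index.php;
        # Hand the authenticated user to Icinga Web 2 as REMOTE_USER
        fastcgi_param REMOTE_USER $user;
    }
}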
If you use the same cookie name and secret pair in the oauth2_proxy configuration, you will be authenticated to all your systems (Graylog SSO plugin, Icinga2, any of your sites) with a pure SSO experience.
Depending on how much information is available, you can also add a custom application to Azure AD.
That way, however, the connection can only be SAML.

How to set a cookie sent from a server to a client on a different port

I have a backend server (powered by Rails) whose APIs are used by an HTML5 frontend that runs on a simple Node development server.
Both are on the same host: my machine.
When I log in from the frontend to the backend, Rails sends me the session cookie. I can see it in the response headers; the problem is that browsers do not save it.
Policies are right: if I serve the same frontend directly from the Rails app, cookies are set correctly.
The only difference I can see is that when the frontend runs on the Node server, it runs on port 8080 and Rails is on port 3000. I know that cookies are not supposed to be port specific, so I am missing what is happening here.
Any thoughts or solutions?
(I need to keep the setup this way, with the frontend served from Node and the Rails backend on a different port.)
You're correct that cookies are port agnostic and that the browser will send the same cookies to myapp.local:3000 as to myapp.local:8080. The exception is XMLHttpRequest (XHR, a.k.a. AJAX) when doing a cross-origin request (CORS).
Solution: The request can be told to include cookies and auth headers by setting withCredentials to true on any XMLHttpRequest object. See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
Or if using the Fetch API, set the option credentials: 'include'. See: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
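For example, with the Fetch API (the URL and payload are made up; adjust to your login endpoint):

// Log in against the Rails backend on its own port, letting the
// browser store and send the session cookie across origins.
fetch('http://localhost:3000/login', {
  method: 'POST',
  credentials: 'include', // send/accept cookies despite CORS
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'user@example.com', password: 'secret' })
});

Note that for credentialed requests the Rails side must respond with Access-Control-Allow-Credentials: true and a concrete origin in Access-Control-Allow-Origin (a wildcard * is rejected by browsers).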
Alternative: since you tagged webpack-dev-server in your question, you might be interested in proxying requests to your Rails API through the webpack-dev-server to avoid any CORS issues in the first place. This is done in your webpack config:
proxy: {
    '/some/path': {
        target: 'https://other-server.example.com',
        secure: false
    }
}
See: https://webpack.js.org/configuration/dev-server/#devserverproxy
