How to configure Keycloak with KrakenD to access an Apache2 webserver (Docker)?

I'm struggling with krakend and keycloak. I have the following:
Apache2 webserver with a file at my-ip/keycloak-protected (directly on the host)
KrakenD at my-ip:8402 (as a Docker container)
Keycloak at my-ip:8080 (as a Docker container)
I want to protect /keycloak-protected so that only logged-in users can access it. For KrakenD I use the config from https://www.krakend.io/docs/authorization/keycloak/, changed to match my IP:
{
  "version": 3,
  "timeout": "3s",
  "endpoints": [
    {
      "endpoint": "/keycloak-protected",
      "extra_config": {
        "auth/validator": {
          "alg": "RS256",
          "jwk_url": "http://<my-ip>:8080/auth/realms/master/protocol/openid-connect/certs",
          "disable_jwk_security": true
        }
      },
      "backend": [
        {
          "host": ["http://<my-ip>:80"],
          "url_pattern": "/__health"
        }
      ]
    }
  ]
}
I then set up Keycloak: I create a client with Valid Redirect URIs = http://my-ip:8402/* and Access Type "public", and I also create a new user.
Using Postman, I try to access http://my-ip:8402/keycloak-protected with a GET request. I fill out the parameters on the Authorization tab and click "Get New Access Token", where I successfully log in and receive a (valid?) token. But when I then try to access http://my-ip:8402/keycloak-protected, I get 401 Unauthorized.
What am I missing here? Am I using postman wrong? Is the krakend config faulty? Or is the keycloak client not configured properly?
Thank you very much!
I followed the tutorials https://www.krakend.io/docs/authorization/keycloak/ and some parts of https://github.com/xyder/example-krakend-keycloak.
I would like to have different realms/clients/users to only access specific files on my apache2 webserver.

I found the solution: the example on the KrakenD website (https://www.krakend.io/docs/authorization/keycloak/) is faulty, at least for my Keycloak version. The URL in the jwk_url should be:
http://<my-ip>:8080/realms/master/protocol/openid-connect/certs
and NOT:
http://<my-ip>:8080/auth/realms/master/protocol/openid-connect/certs
(note the missing /auth: newer Keycloak distributions no longer serve under the /auth context path by default).
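For reference, here is how the corrected setup could be verified from the command line. This is only a sketch: <my-ip> is your host, and the client id "krakend-client" plus the test user credentials are placeholders for whatever you configured in the master realm (step 2 assumes the client allows the direct access / password grant; Postman's "Get New Access Token" uses a browser-based flow instead).

# 1) The jwk_url KrakenD uses should return a JSON key set (note: no /auth prefix):
curl http://<my-ip>:8080/realms/master/protocol/openid-connect/certs

# 2) Request a token with the password grant (public client, so no client secret):
curl -d "client_id=krakend-client" -d "grant_type=password" \
     -d "username=testuser" -d "password=testpassword" \
     http://<my-ip>:8080/realms/master/protocol/openid-connect/token

# 3) Call the protected endpoint through KrakenD with the access_token from step 2:
curl -H "Authorization: Bearer <access_token>" http://<my-ip>:8402/keycloak-protected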

Related

Oauth2_proxy with Keycloak : getting "invalid_token" with /userinfo API

I am trying Keycloak for the first time, using Keycloak as the provider with oauth2_proxy (https://github.com/oauth2-proxy/oauth2-proxy/blob/v5.1.1/providers/keycloak.go) to achieve user authentication via LDAP.
I have followed all the steps inside Keycloak to create a realm, client, client id, client secret etc. The Keycloak "/token" API also works. However, once I enter my username/password on the Keycloak login screen, I get the following error in oauth2_proxy:
[2020/05/30 10:15:37] [requests.go:25] 401 GET http://172.20.0.10:8080/auth/realms/master/protocol/openid-connect/userinfo {"error":"invalid_token","error_description":"Token verification failed"}
I am also passing the following parameters when bringing up the oauth2_proxy Docker container:
command: -upstream=static://200 -http-address=0.0.0.0:8080 -https-address=0.0.0.0:8443
-redirect-url="https://portal.acme.com/oauth2/callback"
-scope='test-scope' -email-domain=* -cookie-domain=* -cookie-secure=false -cookie-secret=skjgfsgfsf23524
-cookie-samesite="none" -provider=keycloak
-client-id='abcd-client' -client-secret='c0281257-b600-40b2-beae-68d1f2d72f02'
--tls-cert-file=/etc/acme.com.pem
--tls-key-file=/etc/acme.com.key
-login-url="http://localhost:7575/auth/realms/master/protocol/openid-connect/auth"
-redeem-url="http://172.20.0.10:8080/auth/realms/master/protocol/openid-connect/token"
-validate-url="http://172.20.0.10:8080/auth/realms/master/protocol/openid-connect/userinfo"
Can someone please help me figure out what could be missing or going wrong?
Any lead or hint would be really helpful.
I found the solution to this problem: the issuer in the JWT token did not match the URL I gave when bringing up the oauth2_proxy container.
To fix this, the oauth2_proxy container needs to talk to Keycloak through the host network and the port Keycloak exposes there. This requires two things (a minimal sketch follows below):
Use "host.docker.internal" as the host in all Keycloak URLs when bringing up oauth2_proxy, so that the oauth2_proxy container reaches Keycloak via the host network.
Map "host.docker.internal" to 127.0.0.1 on the local machine/host so that the browser redirect also resolves.
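A minimal sketch of what this can look like, assuming the quay.io/oauth2-proxy/oauth2-proxy image and the ports from the question (only the Keycloak-related flags are shown; keep the rest of your existing parameters):

# On the host: make the browser-facing name resolve locally.
echo "127.0.0.1 host.docker.internal" | sudo tee -a /etc/hosts

# Run oauth2_proxy so it reaches Keycloak via the host-published port.
# Docker Desktop resolves host.docker.internal automatically; on Linux (Docker 20.10+)
# --add-host=host.docker.internal:host-gateway provides the same mapping.
docker run --add-host=host.docker.internal:host-gateway quay.io/oauth2-proxy/oauth2-proxy \
  -provider=keycloak -client-id='abcd-client' -client-secret='<client-secret>' \
  -login-url="http://host.docker.internal:7575/auth/realms/master/protocol/openid-connect/auth" \
  -redeem-url="http://host.docker.internal:7575/auth/realms/master/protocol/openid-connect/token" \
  -validate-url="http://host.docker.internal:7575/auth/realms/master/protocol/openid-connect/userinfo"

Here 7575 is the host port from the question's login URL; use whatever host port your Keycloak container is published on.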
After this little hack, the setup works.
Thanks!

Why does Apereo CAS server redirect to localhost for OAuth2 endpoint?

I have set up a CAS server at 172.16.238.10 that generally works with the CAS protocol. However, for OAuth2 there is a strange redirection behavior:
REQ: https://172.16.238.10:8443/ooscas/oauth2.0/authorize
RESP: 302, Location: https://localhost:8443/ooscas/login?service=https%3A%2F%2Flocalhost%3A8443%2Fooscas%2Foauth2.0%2FcallbackAuthorize%3Fclient_name%3DCasOAuthClient
Never mind the service and client_name parameters for this staged example, but my question is about the hostname:
Where does the "localhost" come from? How can I configure that to be something else?
In a real OAuth2 webflow localhost will simply not work, even if 172.16.238.10 happens to be localhost. The reason is that by posting the login form to localhost, the CAS server then redirects to itself using localhost (https://localhost:8443/oauth2.0/callbackAuthorize) and that will lead to an internal SSL handshake error, because the server's certificate is not valid for localhost.
Most likely, you need to define the following:
cas.server.name=
cas.server.prefix=${cas.server.name}/cas
You're referencing the prefix in your setup, but its definition seems to be absent. If you don't define these properties, default values take effect (hence the localhost redirects).
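For this particular setup, a filled-in sketch might look like the following (the host is taken from the question, /etc/cas/config/cas.properties is the usual overlay location, and the prefix should match the actual context path, which is /ooscas in the URLs above):

# /etc/cas/config/cas.properties (hypothetical values -- adjust to your deployment)
cas.server.name=https://172.16.238.10:8443
cas.server.prefix=${cas.server.name}/ooscas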
PS Always specify the CAS version in your posts.

401 error when trying GET request to Hawkbit Server with Gateway Security Token

Q1:
I'm running a Hawkbit server on localhost in a docker container and activated the option "Allow a gateway to authenticate and manage multiple targets through a gateway security token" in the settings of the web UI that I access via http://localhost:8080/.
Now I'm using Postman to send a GET request to http://localhost:8080/default/controller/v1/25 with the header
key: GatewayToken, value: <The gateway token shown in the Hawkbit web UI>
Using this header, I'm supposed to be able to authenticate my Postman client against the Hawkbit server (see e.g. https://www.eclipse.org/hawkbit/concepts/authentication/). However, I'm always getting a "401 Unauthorized" response.
Even if I enable "Allow targets to download artifacts without security credentials", which should let any client fetch a resource even without authentication, I still get a 401.
What am I doing wrong?
Q2:
The Hawkbit server is running in Docker, started via "docker-compose up -d" as described here: https://www.eclipse.org/hawkbit/gettingstarted/
In order to solve the problem of Q1, I wanted to check the output of Hawkbit inside the container, but I'm not too familiar with Docker and couldn't find out how. I was able to get inside the container using
docker exec -it docker_hawkbit_1 /bin/sh
which brings me into the container's file system at /opt/hawkbit. But that's not what I was looking for. How can I see the log/output of the Hawkbit/Spring Boot application running inside the container?
Q1:
The key of the request should not be GatewayToken, but Authorization. The header of the request will then look as follows:
key: Authorization, value: GatewayToken <token>
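With the URL from the question, the request could then look roughly like this (the token value is a placeholder for the gateway security token shown in the Hawkbit UI):

curl -H "Authorization: GatewayToken <gateway-security-token>" \
     http://localhost:8080/default/controller/v1/25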
Q2:
Try the following command to see the logs:
docker logs -f docker_hawkbit_1
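If the stack was started with docker-compose as in the getting-started guide, the compose-level equivalent should also work when run from the directory containing the compose file (the service name "hawkbit" is inferred from the container name docker_hawkbit_1):

docker-compose logs -f hawkbit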

keycloak token introspection always fails with {"active":false}

I'm getting kind of desperate trying to make this Keycloak setup work. I can authenticate, but for some reason my token introspection always fails.
For example if I try to authenticate:
curl -d 'client_id=flask_api' -d 'client_secret=98594477-af85-48d8-9d95-f3aa954e5492' -d 'username=jean@gmail.com' -d 'password=superpassE0' -d 'grant_type=password' 'http://keycloak.dev.local:9000/auth/realms/skilltrock/protocol/openid-connect/token'
I get my access_token as expected:
{
"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnLVZJQ0VETnJ4NWRfN1pWQllCTC1tNDdTZWFNT3NDVlowSFdtZF9QQkZrIn0.eyJqdGkiOiIwNTBkYWI5MS1kMjA5LTQwYjctOTBkOS1mYTgzMWYyMTk1Y2MiLCJleHAiOjE1NDQ1MjIyNDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6ImFjY291bnQiLCJzdWIiOiI3NDA0MWNkNS1lZDBhLTQzMmYtYTU3OC0wYzhhMTIxZTdmZTAiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJmbGFza19hcGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJiOGI0MzA2Ny1lNzllLTQxZmItYmNkYi0xMThiMTU2OWU3ZDEiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJqZWFuIHBpZXJyZSIsInByZWZlcnJlZF91c2VybmFtZSI6ImplYW5AZ21haWwuY29tIiwiZ2l2ZW5fbmFtZSI6ImplYW4iLCJmYW1pbHlfbmFtZSI6InBpZXJyZSIsImVtYWlsIjoiamVhbkBnbWFpbC5jb20ifQ.x1jW1cTSWSXN5DsXT3zk1ra4-BcxgjXbbqV5cjdwKTovoNQn7LG0Y_kR8-8Pe8MvFe7UNmqrHbHh21wgZy1JJFYSnnPKhzQaiT5YTcXCRybSdgXAjnvLpBjVQGVbMse_obzjjE1yTdROrZOdf9ARBx6EBr3teH1bHMu32a5wDf-fpYYmHskpW-YoQZljzNyL353K3bmWMlWSGzXx1y7p8_T_1WLwPMPr6XJdeZ5kW0hwLcaJVyDhX_92CFSHZaHQvI8P095D4BKLrI8iJaulnhsb4WqnkUyjOvDJBqrGxPvVqJxC4C1NXKA4ahk35tk5Pz8uS33HY6BkcRKw7z6xuA",
"expires_in":300,
"refresh_expires_in":1800,
"refresh_token":"eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJlYmY4ZDVlOC01MTM4LTRiNTUtYmZhNC02YzcwMzBkMTIwM2YifQ.eyJqdGkiOiI3NWQ1ODgyMS01NzJkLTQ1NDgtOWQwYS0wM2Q3MGViYWE4NGEiLCJleHAiOjE1NDQ1MjM3NDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6Imh0dHA6Ly9rZXljbG9hay5kZXYubG9jYWw6OTAwMC9hdXRoL3JlYWxtcy9za2lsbHRyb2NrIiwic3ViIjoiNzQwNDFjZDUtZWQwYS00MzJmLWE1NzgtMGM4YTEyMWU3ZmUwIiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImZsYXNrX2FwaSIsImF1dGhfdGltZSI6MCwic2Vzc2lvbl9zdGF0ZSI6ImI4YjQzMDY3LWU3OWUtNDFmYi1iY2RiLTExOGIxNTY5ZTdkMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIl19LCJyZXNvdXJjZV9hY2Nlc3MiOnsiYWNjb3VudCI6eyJyb2xlcyI6WyJtYW5hZ2UtYWNjb3VudCIsIm1hbmFnZS1hY2NvdW50LWxpbmtzIiwidmlldy1wcm9maWxlIl19fSwic2NvcGUiOiJlbWFpbCBwcm9maWxlIn0.omhube2oe79dXlcChOD9AFRdUep53kKPjD0HF14QioY",
"token_type":"bearer",
"not-before-policy":0,
"session_state":"b8b43067-e79e-41fb-bcdb-118b1569e7d1",
"scope":"email profile"
}
But if I try to introspect the access_token as shown below, Keycloak always returns {"active":false}. I really don't understand this behavior.
curl -X POST -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" -d "token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJnLVZJQ0VETnJ4NWRfN1pWQllCTC1tNDdTZWFNT3NDVlowSFdtZF9QQkZrIn0.eyJqdGkiOiIwNTBkYWI5MS1kMjA5LTQwYjctOTBkOS1mYTgzMWYyMTk1Y2MiLCJleHAiOjE1NDQ1MjIyNDEsIm5iZiI6MCwiaWF0IjoxNTQ0NTIxOTQxLCJpc3MiOiJodHRwOi8va2V5Y2xvYWsuZGV2LmxvY2FsOjkwMDAvYXV0aC9yZWFsbXMvc2tpbGx0cm9jayIsImF1ZCI6ImFjY291bnQiLCJzdWIiOiI3NDA0MWNkNS1lZDBhLTQzMmYtYTU3OC0wYzhhMTIxZTdmZTAiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJmbGFza19hcGkiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJiOGI0MzA2Ny1lNzllLTQxZmItYmNkYi0xMThiMTU2OWU3ZDEiLCJhY3IiOiIxIiwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbIm9mZmxpbmVfYWNjZXNzIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsIm5hbWUiOiJqZWFuIHBpZXJyZSIsInByZWZlcnJlZF91c2VybmFtZSI6ImplYW5AZ21haWwuY29tIiwiZ2l2ZW5fbmFtZSI6ImplYW4iLCJmYW1pbHlfbmFtZSI6InBpZXJyZSIsImVtYWlsIjoiamVhbkBnbWFpbC5jb20ifQ.x1jW1cTSWSXN5DsXT3zk1ra4-BcxgjXbbqV5cjdwKTovoNQn7LG0Y_kR8-8Pe8MvFe7UNmqrHbHh21wgZy1JJFYSnnPKhzQaiT5YTcXCRybSdgXAjnvLpBjVQGVbMse_obzjjE1yTdROrZOdf9ARBx6EBr3teH1bHMu32a5wDf-fpYYmHskpW-YoQZljzNyL353K3bmWMlWSGzXx1y7p8_T_1WLwPMPr6XJdeZ5kW0hwLcaJVyDhX_92CFSHZaHQvI8P095D4BKLrI8iJaulnhsb4WqnkUyjOvDJBqrGxPvVqJxC4C1NXKA4ahk35tk5Pz8uS33HY6BkcRKw7z6xuA" http://localhost:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect
returns:
{"active":false}
Where am I wrong? I'm totally lost.
You need to make sure that you introspect the token using the same DNS hostname/port as the original token request. Unfortunately, that's a "feature" of Keycloak that is not widely documented...
So use:
curl -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" -d "token=<token>" http://keycloak.dev.local:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect
A legacy alternative is to send a Host header with the introspection request that matches the frontend host where the token was obtained. I have tried that and can confirm it works, though it requires configuring two URLs for the introspecting protected resource.
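With the hosts from this question, that legacy variant would look roughly like this (the token value is a placeholder):

curl -u "flask_api:98594477-af85-48d8-9d95-f3aa954e5492" \
     -H "Host: keycloak.dev.local:9000" \
     -d "token=<access_token>" \
     http://localhost:9000/auth/realms/skilltrock/protocol/openid-connect/token/introspect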
Since Keycloak 8 it is also possible to use OpenID Connect Discovery for this, as described in https://issues.redhat.com/browse/KEYCLOAK-11728 and the attached design document.
The latter solution involves two things:
Keycloak needs to know the frontend URL clients use to retrieve tokens. This can be set at start time as described in the server installation guide (-Dkeycloak.frontendUrl or standalone.xml), or when defining a realm through its Frontend URL option in the Keycloak Management UI. The Keycloak Docker container supports the environment variable KEYCLOAK_FRONTEND_URL for this (a docker run sketch follows below). Note that the frontendUrl needs to include the base path where Keycloak is running (/auth by default, adjust as needed).
Optionally, the protected resource can use OIDC discovery to determine the introspection URL. Either use the Keycloak Java Adapter >= v8, which has built-in support for OIDC discovery, or verify whether your protected resource supports it. See the related PR on the Keycloak Java Adapter for details.
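As a rough sketch, setting the frontend URL on a pre-Quarkus Keycloak container could look like this; the image name, port mapping, and URL are assumptions chosen to match the discovery example below:

docker run -p 8998:8080 \
  -e KEYCLOAK_FRONTEND_URL="http://localhost:8998/auth" \
  jboss/keycloak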
When a protected resource uses the internal discovery endpoint http://keycloak:8998/auth/realms/myrealm/.well-known/openid-configuration, it will find the correct internal introspection endpoint:
{
"issuer": "http://localhost:8998/auth/realms/myrealm",
"authorization_endpoint": "http://localhost:8998/auth/realms/myrealm/protocol/openid-connect/auth",
"token_endpoint": "http://keycloak:8080/auth/realms/myrealm/protocol/openid-connect/token",
"token_introspection_endpoint": "http://keycloak:8080/auth/realms/myrealm/protocol/openid-connect/token/introspect"
}
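For completeness, that document is simply what a plain GET of the discovery endpoint mentioned above returns:

curl http://keycloak:8998/auth/realms/myrealm/.well-known/openid-configuration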
Using the discovery endpoint is not required. If you just need to introspect an incoming token with Keycloak, you can directly use the introspection endpoint.
The introspect endpoint can also return {"active":false} if no session associated with that token exists in Keycloak anymore, for example if you create the token, restart Keycloak, and then call introspect.
As mentioned earlier, the same domain should be used when obtaining and introspecting the token.
In my case, I was using localhost to obtain the token and 127.0.0.1 to introspect it.

Dropbox OAuth callback to Mule using https

Dropbox requires the callback URL to be over HTTPS (when not using localhost).
Using Mule 3.6.0 with the latest dropbox connector, the callback defaults to http - thus only working with localhost. For production I need to use https for the OAuth dance.
What is the correct way to specify a https callback URL?
I've tried:
<https:connector name="connector.http.mule.default">
<https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>
<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
<dropbox:oauth-callback-config domain="production.mydomain.com" path="callback" />
</dropbox:config>
But it errors:
Endpoint scheme must be compatible with the connector scheme. Connector is: "https", endpoint is "http://production.mydomain.com:8052/callback"
Here's what I ended up with that solved the problem:
<https:connector name="connector.http.mule.default" doc:name="HTTP-HTTPS">
<https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>
<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
<dropbox:oauth-callback-config domain="myserver.domain.com" path="callback" connector-ref="connector.http.mule.default" localPort="8052" remotePort="8052"/>
</dropbox:config>
This works great for localhost, but not if you need the callback to go to something other than localhost (e.g. myserver.domain.com).
Reviewing mule.log, you can see that the connector binds to localhost (127.0.0.1) despite the config pointing to:
domain="myserver.domain.com"
Log Entry:
INFO ... Attempting to register service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Endpoint Service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Connector Service with name Mule.Ops:type=Connector,name="connector.http.mule.default.1"
The workaround is to force Mule to listen to 0.0.0.0 for connectors which define localhost as the endpoint.
In wrapper.conf set
wrapper.java.additional.x=-Dmule.tcp.bindlocalhosttoalllocalinterfaces=TRUE
