How does Prometheus get refreshed tokens from a sidecar program? (OAuth 2.0)

I have Prometheus set up in a Kubernetes Pod, which also has a sidecar that connects to an OAuth server and obtains a bearer token for the targets Prometheus scrapes.
The token expires after 2 weeks, and I then have to restart the Pod to get the new token working with Prometheus. I believe the sidecar is obtaining the new token, but that token is not being picked up by Prometheus, which is why the restart is needed.
Can anyone please help?
Thanks in advance.

Recent versions of Prometheus re-read the bearer_token_file on every HTTP request, so this should just work once the sidecar updates the file.
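For reference, a minimal scrape configuration using this mechanism might look like the following (the job name, token path, and target are illustrative, not taken from the question):

```yaml
scrape_configs:
  - job_name: 'secured-target'
    scheme: https
    # Prometheus re-reads this file for each scrape, so the sidecar can
    # overwrite it with a fresh token and no restart is needed.
    bearer_token_file: /var/run/secrets/tokens/oauth-token
    # Newer Prometheus versions spell this as:
    #   authorization:
    #     credentials_file: /var/run/secrets/tokens/oauth-token
    static_configs:
      - targets: ['target.example.com:443']
```

If restarting the Pod is still required, it is worth checking that the sidecar writes the new token to the same path Prometheus reads, rather than to a copy baked in at startup.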

Related

KeyCloak: Connection has been refused by the server. Connection timed out

Occasionally I receive a connection timeout when calling the /userinfo endpoint of my Keycloak server.
So far I have no indication of what's wrong or what causes the timeouts. There are no errors in the server.log I configured, and I cannot reproduce the issue; I only see the errors in the logs of the application trying to authenticate with Keycloak.
Is there some sort of connection limit that my Keycloak instance might enforce?
What additional logs can I activate to narrow down the problem?
I am currently on version 17.0.1.
Try running Keycloak in debug mode: kc.sh start --log-level=debug. If the /userinfo call reached Keycloak, there will be a debug log entry for it, and you can match the time the error occurred against the Keycloak log.
Do you have any other components between your application and Keycloak, such as a proxy or a DNS server? You would need to check their logs as well.
Also check out this document regarding the REST API in Keycloak: https://github.com/keycloak/keycloak-community/blob/main/design/rest-api-guideline.md#rate-lmiting

oauth2_proxy with Keycloak: getting "invalid_token" from the /userinfo API

I am trying Keycloak for the first time, using Keycloak as the provider with oauth2_proxy (https://github.com/oauth2-proxy/oauth2-proxy/blob/v5.1.1/providers/keycloak.go) to achieve user authentication via LDAP.
I have followed all the steps inside Keycloak to create a realm, a client, a client ID, a client secret, etc., and the Keycloak /token API works. However, once I submit my username/password on the Keycloak login screen, I get the following error in oauth2_proxy:
[2020/05/30 10:15:37] [requests.go:25] 401 GET http://172.20.0.10:8080/auth/realms/master/protocol/openid-connect/userinfo {"error":"invalid_token","error_description":"Token verification failed"}
Also, I am passing the following parameters when bringing up the oauth2_proxy docker container:
command: -upstream=static://200 -http-address=0.0.0.0:8080 -https-address=0.0.0.0:8443
-redirect-url="https://portal.acme.com/oauth2/callback"
-scope='test-scope' -email-domain=* -cookie-domain=* -cookie-secure=false -cookie-secret=skjgfsgfsf23524
-cookie-samesite="none" -provider=keycloak
-client-id='abcd-client' -client-secret='c0281257-b600-40b2-beae-68d1f2d72f02'
--tls-cert-file=/etc/acme.com.pem
--tls-key-file=/etc/acme.com.key
-login-url="http://localhost:7575/auth/realms/master/protocol/openid-connect/auth"
-redeem-url="http://172.20.0.10:8080/auth/realms/master/protocol/openid-connect/token"
-validate-url="http://172.20.0.10:8080/auth/realms/master/protocol/openid-connect/userinfo"
Can someone please help with what could be missing or going wrong?
Any lead or hint would be really helpful.
I found the solution to this problem: the issuer in the JWT did not match the URL I gave when bringing up the oauth2_proxy container.
To fix this, the docker container needs to talk to Keycloak over the host network and the port Keycloak exposes. This requires two things:
Use "host.docker.internal" as the host in all Keycloak URLs when bringing up oauth2_proxy, so that the oauth2_proxy container reaches Keycloak via the host network.
Map "host.docker.internal" to 127.0.0.1 on the local machine/host so that the browser redirect is also resolvable.
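Concretely, the two steps might look like this (the realm and paths follow the question; the port and exact values are illustrative):

```
# /etc/hosts on the host machine, so the browser can follow redirects:
127.0.0.1 host.docker.internal

# oauth2_proxy flags, all pointing at the host-reachable address so the
# issuer in the JWT matches what the proxy validates against:
-login-url="http://host.docker.internal:8080/auth/realms/master/protocol/openid-connect/auth"
-redeem-url="http://host.docker.internal:8080/auth/realms/master/protocol/openid-connect/token"
-validate-url="http://host.docker.internal:8080/auth/realms/master/protocol/openid-connect/userinfo"
```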
After this little hack, the setup works.
Thanks!

Spring Security - Google OAuth 2.0 - UnknownHostException www.googleapis.com

I've implemented Google OAuth login based on this tutorial: https://www.callicoder.com/spring-boot-security-oauth2-social-login-part-1/
It works correctly when the app runs locally. However, after deploying it on GKE, I'm unable to log in; the flow fails with the following error:
error: [invalid_token_response] An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: I/O error on POST request for "https://www.googleapis.com/oauth2/v4/token": www.googleapis.com; nested exception is java.net.UnknownHostException: www.googleapis.com
which comes from OAuth2AccessTokenResponseClient.
As I said before, it works fine on localhost, and I'm unable to debug it.
The app is deployed behind an Ingress with a static IP. I assigned that IP to my domain very recently, and the domain is registered in the Google APIs Authorised redirect URIs.
Google APIs use the OAuth 2.0 protocol for authentication and authorization. Google supports common OAuth 2.0 scenarios such as those for web server, installed, and client-side applications. Please have a look at this link.
We can follow the steps below to obtain OAuth 2.0 access tokens.
Step 1: Generate a code verifier and challenge
Step 2: Send a request to Google's OAuth 2.0 server
Step 3: Google prompts user for consent
Step 4: Handle the OAuth 2.0 server response
Step 5: Exchange authorization code for refresh and access tokens
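Step 1 can be sketched from the command line (this follows the S256 method of RFC 7636; a real Spring application would do this in application code, and Spring Security handles it for you):

```shell
# Generate a PKCE code verifier and its S256 code challenge (RFC 7636).
# base64url = base64 with '+/' translated to '-_' and '=' padding stripped.
verifier=$(openssl rand -base64 32 | tr -d '=' | tr '+/' '-_')
challenge=$(printf '%s' "$verifier" | openssl dgst -sha256 -binary \
            | openssl base64 | tr -d '=' | tr '+/' '-_')
echo "code_verifier:  $verifier"
echo "code_challenge: $challenge"
```

The challenge is sent with the authorization request (Step 2) and the verifier with the token exchange (Step 5), so Google's server can confirm both came from the same client.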
The problem was that the kube-dns pods didn't come up. I had set up a preemptible cluster and added a taint to its only node pool, which prevented kube-dns from starting:
Normal NotTriggerScaleUp 61s (x22798 over 2d18h) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had taints that the pod didn't tolerate
Warning FailedScheduling 44s (x141 over 26h) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
After removing the taint, the hostname got resolved.
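If you hit the same events, one way to inspect and clear the taint against the cluster (node and taint names here are placeholders, not the ones from this cluster):

```
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
$ kubectl taint nodes gke-pool-node-1 dedicated=preemptible:NoSchedule-
```

The trailing "-" removes the taint; alternatively, add a matching toleration to the kube-dns deployment if the taint is intentional.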

401 error when trying GET request to Hawkbit Server with Gateway Security Token

Q1:
I'm running a Hawkbit server on localhost in a docker container and activated the option "Allow a gateway to authenticate and manage multiple targets through a gateway security token" in the settings of the web UI that I access via http://localhost:8080/.
Now I'm using Postman to send a GET request to http://localhost:8080/default/controller/v1/25 with the header
key: GatewayToken, value: <The gateway token shown in the Hawkbit web UI>
Using this header, I'm supposed to be able to authenticate my Postman client against the Hawkbit server (compare e.g. https://www.eclipse.org/hawkbit/concepts/authentication/); however, I always get a "401 Unauthorized" response.
Even if I enable "Allow targets to download artifacts without security credentials", which should let any client fetch a resource even without authentication, I get a 401.
What am I doing wrong?
Q2:
The Hawkbit server is running in Docker, started via "docker-compose up -d" as described here: https://www.eclipse.org/hawkbit/gettingstarted/
In order to solve the problem of Q1, I wanted to check the output of Hawkbit inside the container, but I'm not too familiar with Docker and couldn't find out how. I was able to get inside the container using
docker exec -it docker_hawkbit_1 /bin/sh
which brings me into the container's file system at /opt/hawkbit. But that's not what I was looking for. How can I see the log/output of the Hawkbit/Spring Boot application running inside the container?
Q1:
The key of the request header should not be GatewayToken but Authorization. The header of the request will then look as follows:
key: Authorization, value: GatewayToken <token>
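Outside Postman, the same request can be built from the command line (the token value here is a placeholder; use the one shown in the Hawkbit web UI):

```shell
# Placeholder token; substitute the gateway token from the Hawkbit web UI.
GATEWAY_TOKEN="0123456789abcdef"
AUTH_HEADER="Authorization: GatewayToken $GATEWAY_TOKEN"
# Against the running server:
# curl -s http://localhost:8080/default/controller/v1/25 -H "$AUTH_HEADER"
echo "$AUTH_HEADER"
```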
Q2:
Try the following command to see the logs:
docker logs -f docker_hawkbit_1

How to setup HAProxy to add access token to client requests

I have a client that can only make requests without authentication information.
I would like to use HAProxy or a similar proxy solution to add OAuth authentication to these client requests.
I have already succeeded in adding a Bearer token to the client requests. See below for the haproxy.cfg with some placeholders.
frontend front
    mode http
    bind *:8080
    default_backend servers
    http-request add-header Authorization "Bearer {{ .Env.ACCESS_TOKEN}}"

backend servers
    mode http
    server server1 myserver.com:443 ssl
The problem is that the access tokens have a TTL of 24 hours, so I need to refresh them or fetch a new token periodically.
Does HAProxy support this already?
I can write some script to get a new access token periodically, update the config and restart HAProxy. Is this a good approach when running HAProxy in docker? Are there better solutions?
You could try creating and testing your script in Lua, which is supported in recent HAProxy versions; see "How Lua runs in HAProxy".
An example of this approach, but using Nginx + Lua, can be found in this project: https://github.com/jirutka/ngx-oauth
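The script-plus-reload approach from the question is also workable. A minimal sketch, in which TOKEN_URL, CLIENT_ID, CLIENT_SECRET, the template path, and the container name "haproxy" are all assumptions rather than given values:

```shell
#!/bin/sh
# Periodically fetch a fresh access token and reload HAProxy with it.

# Pull the access_token field out of an OAuth token response
# (naive JSON parsing; jq would be more robust).
extract_token() {
  sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p'
}

refresh_once() {
  token=$(curl -s -X POST "$TOKEN_URL" \
            -d grant_type=client_credentials \
            -d client_id="$CLIENT_ID" \
            -d client_secret="$CLIENT_SECRET" | extract_token)
  # Render haproxy.cfg from a template containing an __ACCESS_TOKEN__
  # marker, then reload. The official haproxy image runs in
  # master-worker mode and reloads gracefully on SIGHUP, so no
  # full container restart is needed.
  sed "s|__ACCESS_TOKEN__|$token|" haproxy.cfg.tmpl > haproxy.cfg
  docker kill -s HUP haproxy
}

# Schedule refresh_once externally, e.g. from cron every 12 h for a 24 h TTL.
```

Because the reload is graceful, in-flight connections are drained rather than dropped, which makes this approach reasonable for a dockerized HAProxy.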
