WSO2 token expires - OAuth 2.0

WSO2 Identity Server v5.1.0, with the embedded LDAP disabled, using a read-only LDAP for authentication.
When authenticating a user against WSO2 IS through the /oauth2/token API, WSO2 initially returns the following output:
{
"access_token": "fa738bd8c50d4506cf2c3566ed86adb8",
"refresh_token": "9b2d346cc05f827f4cab99bc9c90401a",
"scope": "openid",
"token_type": "Bearer",
"expires_in": 3600
}
When accessing the API again one second later, it reports expires_in as 3300.
So my question is: why are 300 seconds deducted every time after the first access?

Please check the value in identity.xml:
<OAuth> --> <TimestampSkew>300</TimestampSkew>
The default value is 300. When calculating the expiry time, the timestamp skew is also deducted to keep the client on the safe side (network delays, etc.). You can change the value as required.

From WSO2 docs:
Configuring the token expiration time
User access tokens have a fixed expiration time, which is set to 60 minutes by default. Before deploying the API Manager to users, extend the default expiration time by editing the <AccessTokenDefaultValidityPeriod> element in <PRODUCT_HOME>/repository/conf/identity.xml.
Also take the time stamp skew into account when configuring the expiration time. The time stamp skew is used to manage small time gaps in the system clocks of different servers. For example, let's say you have two Key Managers and you generate a token from the first one and authenticate with the other. If the second server's clock runs 300 seconds ahead, you can configure a 300s time stamp skew in the first server. When the first Key Manager generates a token (e.g., with the default life span, which is 3600 seconds), the time stamp skew is deducted from the token's life span. The new life span is 3300 seconds and the first server calls the second server after 3200 seconds.
You configure the time stamp skew using the <TimestampSkew> element in <PRODUCT_HOME>/repository/conf/identity.xml.
Tip: Ideally, the time stamp skew should not be larger than the token's life span. We recommend setting it to zero if the nodes in your cluster are synchronized. Also, note that when the API Gateway cache is enabled (it is enabled by default), even after a token expires, it will still be available in the cache for consumers until the cache expires, in approximately 15 minutes.

Given that I understand your query correctly, try
changing the value of <TimestampSkew>300</TimestampSkew>, found in IS_HOME\repository\conf\identity\identity.xml, to <TimestampSkew>0</TimestampSkew>.
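To make the arithmetic concrete, here is a minimal Python sketch (the function name is ours, not WSO2's) of how the lifetime a client sees relates to the configured validity period and the TimestampSkew:

```python
# Hypothetical illustration: the reported lifetime is the configured
# validity period minus the configured TimestampSkew (never negative).
def reported_expires_in(validity_seconds: int, timestamp_skew: int) -> int:
    """Return the token lifetime a client sees once the skew is deducted."""
    return max(validity_seconds - timestamp_skew, 0)

print(reported_expires_in(3600, 300))  # 3300, as observed in the question
print(reported_expires_in(3600, 0))    # 3600, after setting TimestampSkew to 0
```

This matches the behaviour described above: with the default skew of 300 a 3600-second token is reported as 3300; with a skew of 0 the full lifetime is reported.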


What does "offset" mean in Angular Auth OIDC config values?

In my project, there is a configuration value for OAuth that I do not understand:
auth: {
  maxIdTokenIatOffsetAllowedInSeconds: 600
},
Based on the documentation I am reading: https://nice-hill-002425310.azurestaticapps.net/docs/documentation/configuration
For maxIdTokenIatOffsetAllowedInSeconds it says:
The amount of offset allowed between the server creating the token, and the client app receiving the id_token.
What does the offset mean in this case? Is it some kind of timing unit?
I am assuming it means that each user can only receive one token every 600 seconds?
Can someone explain what the offset means, and what maxIdTokenIatOffsetAllowedInSeconds does to the token?
The docs you linked to are specifically for the angular-auth-oidc-client library so hopefully that's what you're using. In that case the maxIdTokenIatOffsetAllowedInSeconds is being used to determine how much clock skew is allowed between the issuing server and the consuming browser. In this case, 600 seconds would mean the clocks can be 10 minutes different from one another and the token will still be considered valid.
However, today I came across this issue and any value I pushed in higher than 299 was causing my token to be considered expired. I looked back through the changelog and found a recent-ish PR that added this check and a new configuration value that allows you to ignore it (disableIdTokenValidation).
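As a rough illustration of what such a clock-skew check does, here is a sketch in Python rather than the library's TypeScript; the function name and timestamps are hypothetical, not the library's actual code:

```python
import time

def id_token_iat_valid(iat, max_offset_seconds, now=None):
    """Accept the id_token only if its issued-at (iat) claim is within the
    allowed clock-skew window of the client's current time, in either direction."""
    now = time.time() if now is None else now
    return abs(now - iat) <= max_offset_seconds

# A token "issued" 600s ahead of our clock is still valid with a 600s allowance:
print(id_token_iat_valid(iat=1_700_000_600, max_offset_seconds=600, now=1_700_000_000))  # True
# 700s of skew exceeds the allowance, so the token is rejected:
print(id_token_iat_valid(iat=1_700_000_700, max_offset_seconds=600, now=1_700_000_000))  # False
```

So the setting has nothing to do with rate-limiting token issuance; it only bounds how far apart the issuing server's clock and the browser's clock may be.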

Increase access token time in GCP UI or via python API

I am using a VM on GCP and would like to increase the access token lifetime. By default it is 1 hour.
I am using the code below to generate a token:
import requests

# Ask the instance metadata server for the default service account's token
query = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
response = requests.get(query, headers={"Metadata-Flavor": "Google"}).json()
It generates a token in the following format; 'expires_in' shows the remaining time, in seconds, before the token expires.
{'access_token': 'ya29.c.Ko8BEAjJKLI1bUQBiIj0zZz5hw3TlLjyCoXxKtyslbEnyRj9eUWO0sVdW3512f64ynOoi6laZNnPV
O23nELV5fYhk2epYodI1kXXXXXXXXXXX', 'expires_in': 3035, 'token_type': 'Bearer'}
How can I increase the expires_in parameter, either through the UI, a configuration file, or the Python API, so that the token expires after a time I define?
The purpose of the long-lived token: I start a TPU from the Google VM using the TPU start API, the TPU turns on, and my machine learning model is trained on it. The ML models are quite complex and in many scenarios take more than 10 or 20 hours. After the computation is over, I want the TPU to shut itself down. At that point I get an error saying the token cannot be authenticated because it has expired.
Any example / way to increase the token expiry time, or some other way to get rid of this problem?
AFAIK, you can't do that with the metadata server. However, it's possible to generate a short-lived token and extend the access token's lifetime up to 12 hours.
Have a look at the Service Account Credentials API. Note that you need to update the organisation policy to extend the default (60 min) lifetime duration.
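A minimal sketch of building the generateAccessToken request from that API, assuming the caller already holds credentials allowed to impersonate the service account; the service-account email below is a placeholder:

```python
import json

def build_generate_access_token_request(sa_email, lifetime_seconds):
    """Build the URL and JSON body for an IAM Credentials generateAccessToken
    call with an extended lifetime (up to 43200s = 12h, and only once the
    organisation policy permits lifetime extension)."""
    url = (
        "https://iamcredentials.googleapis.com/v1/"
        f"projects/-/serviceAccounts/{sa_email}:generateAccessToken"
    )
    body = json.dumps({
        "scope": ["https://www.googleapis.com/auth/cloud-platform"],
        "lifetime": f"{lifetime_seconds}s",
    })
    return url, body

# Placeholder service account; 12 hours requested.
url, body = build_generate_access_token_request(
    "tpu-runner@my-project.iam.gserviceaccount.com", 43200)
# POST `body` to `url` with an Authorization header built from your own
# credentials, e.g.:
# requests.post(url, data=body,
#               headers={"Authorization": f"Bearer {caller_token}"})
```

The actual HTTP call is left commented out since it needs live credentials; the helper only assembles the request.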

What is the incentive for (re)using Google access token rather than asking a new one every time?

I've been using Google refresh/access token in my in-app purchase verification on server-side.
When refreshing the token, I get a new access token, valid for 3600 secs.
Why can't I just re-ask for a new access token when needed, rather than storing and re-using it?
Why can't I just re-ask for a new access token when needed, rather than storing and re-using it?
The simple answer is: you can. Technically speaking, you could ask for a new access token every time you want to make a request. As long as you store your refresh token, you will be able to access the server whenever you need.
I think a better question is why you would want to. If your application runs for 30 minutes, there is really no reason to request a new access token when you can just use the one that was returned to you the first time. You are making an extra round trip to the server with every request if you also request an access token first.
However, if you have an application that runs, say, once every five minutes as a cron job or a Windows service, one could argue that it's a pain to keep track of the token. Actually, it's not: just store the time it was generated and check that before using it.
Google is not going to stop you from requesting a new token every time, so feel free. However, I can't remember whether Google returns a new access token or the same one it generated last time. If a new one is generated every time, remember that they will all work for an hour, so don't leave them lying around.
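A small sketch of the "store the time it was generated and check it" approach described above (names are ours; the token fetcher and clock are injected so the idea is easy to demonstrate):

```python
import time

class TokenCache:
    """Reuse a cached access token until shortly before it expires,
    instead of asking the auth server for a new one on every request."""

    def __init__(self, fetch_token, margin_seconds=60, clock=time.time):
        self._fetch = fetch_token      # callable returning (token, expires_in)
        self._margin = margin_seconds  # refresh a little early, to be safe
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or self._clock() >= self._expires_at - self._margin:
            token, expires_in = self._fetch()
            self._token = token
            self._expires_at = self._clock() + expires_in
        return self._token

# Demo with a fake fetcher and a controllable clock:
calls = []
fake_clock = [0.0]
cache = TokenCache(lambda: (calls.append(1) or f"tok{len(calls)}", 3600),
                   clock=lambda: fake_clock[0])
print(cache.get())  # "tok1" - first call hits the "server"
print(cache.get())  # "tok1" - reused, no extra round trip
fake_clock[0] = 3590.0
print(cache.get())  # "tok2" - inside the safety margin, so refreshed
```

The safety margin plays the same role as the timestamp-skew discussed in the WSO2 thread above: refresh slightly before the nominal expiry so in-flight requests never carry a just-expired token.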

freeradius allow user access according to billing plan

I have a FreeRADIUS server installed on CentOS 6, with MySQL and a Mikrotik as the controller.
I want to restrict users' internet access according to a billing plan, e.g. 1 hour of internet per 3 days. After 3 days, the same username should again get 1 hour for the next 3 days.
Please suggest which modifications I need to make in the RADIUS configuration and which parameters I should send.
A Session-Timeout attribute in the RADIUS reply sets the maximum time a session may take. This does not account for other rules, such as a maximum time over a day or a week; you need more logic for that than an attribute in your RADIUS reply.
Stateful storage is needed to keep track of the time a user has already used. RADIUS accounting is sufficient for this purpose. For example, when storing accounting data in MySQL, you can query the session time already used within a period to calculate a new Session-Timeout for the upcoming session.
You can use radclient to disconnect sessions on the Mikrotik.
FreeRADIUS has modules for this purpose: sqlcounter and counter. The documentation covers implementation examples.
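A sketch of the calculation such a counter performs (Python, with hypothetical names; in FreeRADIUS the used time would come from the radacct table, e.g. by summing acctsessiontime for the username since the start of the current 3-day period):

```python
def remaining_session_seconds(used_seconds_in_period, allowance_seconds=3600):
    """Remaining Session-Timeout for the current billing period
    (e.g. 1 hour of internet per 3-day window). When nothing is left,
    the user should be rejected rather than given a zero-length session."""
    return max(allowance_seconds - used_seconds_in_period, 0)

# The user already consumed 45 minutes of the 1-hour allowance,
# so the Access-Accept should carry Session-Timeout = 900:
print(remaining_session_seconds(2700))  # 900
print(remaining_session_seconds(4000))  # 0 -> reject the Access-Request
```

This is exactly the logic the sqlcounter module automates: it runs a SQL query against the accounting data, compares the result to a configured limit, and sets the reply attribute (or rejects) accordingly.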

What's the point of a timestamp in OAuth if a Nonce can only be used one time?

I had at first misinterpreted the timestamp implementation of OAuth, thinking it meant that a timestamp not within 30 seconds of the current time would be denied. It turned out this was wrong for a few reasons, including the fact that we could not guarantee that each system clock was in sync down to the minutes and seconds, regardless of time zone. Then I read it again to get more clarity:
"Unless otherwise specified by the Service Provider, the timestamp is
expressed in the number of seconds since January 1, 1970 00:00:00 GMT.
The timestamp value MUST be a positive integer and MUST be equal or
greater than the timestamp used in previous requests."
source: http://oauth.net/core/1.0/#nonce
Meaning the timestamps are only compared in relation to previous requests from the same source, not in comparison to my server system clock.
Then I read a more detailed description here: http://hueniverse.com/2008/10/beginners-guide-to-oauth-part-iii-security-architecture/
To prevent compromised requests from being used again (replayed),
OAuth uses a nonce and timestamp. The term nonce means ‘number used
once’ and is a unique and usually random string that is meant to
uniquely identify each signed request. By having a unique identifier
for each request, the Service Provider is able to prevent requests
from being used more than once. This means the Consumer generates a
unique string for each request sent to the Service Provider, and the
Service Provider keeps track of all the nonces used to prevent them
from being used a second time. Since the nonce value is included in
the signature, it cannot be changed by an attacker without knowing the
shared secret.
Using nonces can be very costly for Service Providers as they demand
persistent storage of all nonce values received, ever. To make
implementations easier, OAuth adds a timestamp value to each request
which allows the Service Provider to only keep nonce values for a
limited time. When a request comes in with a timestamp that is older
than the retained time frame, it is rejected as the Service Provider
no longer has nonces from that time period. It is safe to assume that
a request sent after the allowed time limit is a replay attack. OAuth
provides a general mechanism for implementing timestamps but leaves
the actual implementation up to each Service Provider (an area many
believe should be revisited by the specification). From a security
standpoint, the real nonce is the combination of the timestamp value
and nonce string. Only together they provide a perpetual unique value
that can never be used again by an attacker.
The reason I am confused: if the nonce is only used once, why would the Service Provider ever reject based on the timestamp? "Service Provider no longer has nonces from that time period" is confusing to me; it sounds as if a nonce can be re-used as long as it is within 30 seconds of the last time it was used.
So can anyone clear this up for me? What is the point of the timestamp if the nonce is single-use and I am not comparing the timestamp against my own system clock (which obviously would not be reliable)? It makes sense that the timestamps are only relative to each other, but with the unique-nonce requirement it seems irrelevant.
The timestamp is used to allow the server to optimize its storage of nonces. Basically, consider the real nonce to be the combination of the timestamp and the random string. By having a separate timestamp component, the server can implement a time-based restriction using a short window (say, 15 minutes) and limit the amount of storage it needs. Without timestamps, the server would need infinite storage to keep every nonce ever used.
Let's say you decide to allow up to 15 minutes of time difference between your clock and the client's, and are keeping track of the nonce values in a database table. The unique key for the table is going to be a combination of 'client identifier', 'access token', 'nonce', and 'timestamp'. When a new request comes in, check that the timestamp is within 15 minutes of your clock, then look up that combination in your table. If found, reject the call; otherwise add it to your table and return the requested resource. Every time you add a new nonce to the table, delete any record for that 'client identifier' and 'access token' combination with a timestamp older than 15 minutes.
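The bookkeeping described above can be sketched like this (hypothetical names; the clock is injected so the window logic is easy to demonstrate):

```python
import time

class NonceStore:
    """Track (client, token, nonce, timestamp) tuples inside a sliding
    time window, so nonce storage stays bounded."""

    def __init__(self, window_seconds=900, clock=time.time):
        self._window = window_seconds
        self._clock = clock
        self._seen = set()  # {(client_id, access_token, nonce, timestamp)}

    def accept(self, client_id, access_token, nonce, timestamp):
        now = self._clock()
        if abs(now - timestamp) > self._window:
            return False                      # too old (or too far ahead): reject
        key = (client_id, access_token, nonce, timestamp)
        if key in self._seen:
            return False                      # replay of an already-used nonce
        self._seen.add(key)
        # Prune entries older than the window so storage stays small.
        self._seen = {k for k in self._seen if now - k[3] <= self._window}
        return True

store = NonceStore(window_seconds=900, clock=lambda: 1000.0)
print(store.accept("c1", "t1", "abc", 950))   # True  - first use
print(store.accept("c1", "t1", "abc", 950))   # False - replayed
print(store.accept("c1", "t1", "xyz", 50))    # False - outside the 15-min window
```

Note how the timestamp does double duty: it rejects stale requests outright, and it lets the pruning step forget old nonces without opening a replay hole.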
OK, after enough pondering I believe I have cracked this one.
They want me to always know the timestamp of the last successful request that was made, so that any timestamp coming in prior to that will be ignored.
Also, the nonce must be unique, but I am only going to store nonces up to a certain date range. If a timestamp is more than so many hours old, its nonce will be dropped and could then be used again. However, because the last used timestamp is also stored, an old request cannot be replayed even if its nonce now counts as unique, because the timestamp on that request would be outdated.
However, this only works because of the signature. If they changed the timestamp or the nonce on a request, the signature would no longer match and the request would be denied (the timestamp and the nonce are both part of the signature creation, and they do not have the signing key).
Phew!
If OAuth used just the timestamp, it'd be relatively easy for an attacker to guess what the next timestamp would be and inject their own request into the process. They'd just have to use "previous timestamp + 1".
By using the nonce, which is generated in a cryptographically secure manner (hopefully), the attacker can't simply inject TS+1, because they won't have the proper nonce to authenticate themselves.
Think of it as a secure door lock that requires both a keycard and a pin code. You can steal the keycard, but still can't get through the door because you don't know the pin.
Can't comment yet, so posting as an answer. Reply to #tofutim's comment:
Indeed, if we insist that the timestamp value of each new request has to be greater than the timestamps of all previous requests, there seems to be little point in the nonce:
Replay attack is prevented, since the provider would reject the message with timestamp equal to the previous one
Yes, the next timestamp is easy for an attacker to guess - just use timestamp + 1 - but the attacker still has no way to tamper with the timestamp parameter, since all parameters are signed using the consumer's secret (and token secret)
However, reading the OAuth 1.0a spec reveals that
Unless otherwise specified by the Service Provider, the timestamp is expressed in the number of seconds since January 1, 1970 00:00:00 GMT. The timestamp value MUST be a positive integer and MUST be equal or greater than the timestamp used in previous requests.
The Consumer SHALL then generate a Nonce value that is unique for all requests with that timestamp
So nonces are used to prevent replay attack when you send multiple requests with the same timestamp.
Why allow requests with the same timestamp? Consider the case where you want to send multiple requests to independent resources in parallel, to finish the processing faster. Conceptually, the server handles each request one by one. You don't know in which order the requests will arrive, since it depends on many things such as the OS, the network the packets travel over, the server logic, and so on.
If you send requests with increasing timestamps, there is still a possibility that the request with the higher timestamp will be handled first, after which all the requests with lower timestamps will fail. Instead, you can send requests with equal timestamps and different nonces.
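A small sketch of that pattern: several parallel requests sharing one timestamp, each carrying its own cryptographically random nonce:

```python
import secrets
import time

def signed_request_params(timestamp):
    """Per the OAuth 1.0a text quoted above: several requests may share a
    timestamp as long as each carries a nonce unique for that timestamp."""
    return {
        "oauth_timestamp": str(timestamp),
        "oauth_nonce": secrets.token_hex(16),  # 32 hex chars of CSPRNG output
    }

ts = int(time.time())
batch = [signed_request_params(ts) for _ in range(5)]
nonces = {p["oauth_nonce"] for p in batch}
print(len(nonces))  # 5 - one timestamp, five distinct nonces
```

In a real client these parameters would also be fed into the signature base string, so neither the timestamp nor the nonce can be altered in transit.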
It is reasonable to assume one could attempt to crack the nonce by brute force. A timestamp reduces the chance that such an attempt succeeds.
