What does "offset" mean in Angular Auth OIDC config values? - oauth-2.0

In my project, there is a configuration value for OAuth that I do not understand:
auth: {
    maxIdTokenIatOffsetAllowedInSeconds: 600
},
Based on the document I am reading: https://nice-hill-002425310.azurestaticapps.net/docs/documentation/configuration
For maxIdTokenIatOffsetAllowedInSeconds it says:
The amount of offset allowed between the server creating the token, and the client app receiving the id_token.
What does the offset mean in this case? Is it like a timing unit?
I am assuming it means that each user can only receive one token every 600 seconds?
Can someone explain what the offset means, and what maxIdTokenIatOffsetAllowedInSeconds does to the token?

The docs you linked to are specifically for the angular-auth-oidc-client library, so hopefully that's what you're using. There, maxIdTokenIatOffsetAllowedInSeconds determines how much clock skew is allowed between the issuing server and the consuming browser: a value of 600 seconds means the two clocks can differ by up to 10 minutes and the token will still be considered valid.
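To make the skew check concrete, here is a minimal sketch (in Python, not the library's actual code) of how an id_token's iat (issued-at) claim is typically validated against an allowed offset; the function name is mine:

import time

def iat_within_allowed_offset(iat: int, max_offset_seconds: int = 600) -> bool:
    # Accept the token if the issuing server's clock and ours differ
    # by no more than max_offset_seconds in either direction.
    return abs(time.time() - iat) <= max_offset_seconds

With maxIdTokenIatOffsetAllowedInSeconds: 600, a token issued by a server whose clock is up to 10 minutes ahead of or behind the client's still validates.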
However, today I came across an issue where any value higher than 299 caused my token to be considered expired. I looked back through the changelog and found a recent-ish PR that added this check, along with a new configuration value that allows you to ignore it (disableIdTokenValidation).

Related

Increase access token time in GCP UI or via python API

I am using a VM on GCP and would like to increase the access token lifetime. By default it is 1 hour.
I am using the command below to generate a token:
query = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
response = requests.get(query,headers={"Metadata-Flavor": "Google"}).json()
It generates a token in the following format, where 'expires_in' is the number of seconds remaining before the token expires:
{'access_token': 'ya29.c.Ko8BEAjJKLI1bUQBiIj0zZz5hw3TlLjyCoXxKtyslbEnyRj9eUWO0sVdW3512f64ynOoi6laZNnPV
O23nELV5fYhk2epYodI1kXXXXXXXXXXX', 'expires_in': 3035, 'token_type': 'Bearer'}
How can I increase the expires_in parameter, either through the UI, a configuration file, or the Python API, so that the token expires after a defined time?
The reason I need a long-lived token is that I start a TPU from the Google VM using the TPU start API; the TPU turns on and my machine learning model is trained on it. The models are quite complex and take more than 10 or 20 hours in many scenarios. After the computation is over, I want the TPU to shut itself down, but at that point I get an error saying the token cannot be authenticated because it has expired.
Is there any example or way to increase the token expiry time, or some other way to get rid of this problem?
AFAIK, you can't do that with the metadata server. However, it's possible to generate a short-lived token and extend its lifetime up to 12 hours.
Have a look at the Service Account Credentials API. Note that you need to update the organisation policy to extend the default (60 min) lifetime.
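For reference, here is a hedged sketch of calling that API's generateAccessToken method with an extended lifetime. The service account email and caller token are placeholders; the caller needs permission to impersonate the target account (e.g. roles/iam.serviceAccountTokenCreator), and lifetimes beyond 1 hour require the organisation policy change mentioned above:

import requests

SA_EMAIL = "my-sa@my-project.iam.gserviceaccount.com"  # placeholder
CALLER_TOKEN = "ya29...."  # placeholder: a valid access token for the caller

url = ("https://iamcredentials.googleapis.com/v1/"
       f"projects/-/serviceAccounts/{SA_EMAIL}:generateAccessToken")
body = {
    "scope": ["https://www.googleapis.com/auth/cloud-platform"],
    "lifetime": "43200s",  # 12 hours, the extended maximum
}
resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {CALLER_TOKEN}"})
print(resp.json())  # expected to contain accessToken and expireTime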

410 Gone error in MSGraph Delta API for OneDrive

I am using the /delta OneDrive APIs from Graph to sync files & folders for all the users in my organization.
According to the documentation:
There may be cases when the service can't provide a list of changes for a given token (for example, if a client tries to reuse an old token after being disconnected for a long time, or if server state has changed and a new token is required). In these cases, the service will return an HTTP 410 Gone error
The documentation gives no exact time frame for when a delta token becomes too old or expires.
Is there a particular time frame after which the token is unusable for a drive, so that we'll get the 410 error?
There is no defined time to live (TTL) for delta tokens, nor is age the only factor in determining whether a token is invalid; substantial changes to the tenant and/or drive can invalidate it as well.
So long as your code is set up to handle a possible 410, you shouldn't see much impact from this. My general guidance would be to optimize the "full resync" by comparing file metadata and only pulling or pushing files that have changed (i.e. compare name, path, size, dates, etc.).
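As an illustration, here is a rough Python sketch of a delta loop that restarts a full sync when it sees a 410; the token handling and item comparison are placeholders, not Graph SDK code:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer GRAPH_TOKEN"}  # placeholder token

def sync_drive(delta_link=None):
    url = delta_link or f"{GRAPH}/me/drive/root/delta"
    while url:
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code == 410:
            # The token is too old or server state changed: full resync.
            return sync_drive(delta_link=None)
        data = resp.json()
        for item in data.get("value", []):
            pass  # compare name/path/size/dates; act only on real changes
        url = data.get("@odata.nextLink")
        if "@odata.deltaLink" in data:
            return data["@odata.deltaLink"]  # persist for the next sync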

SyncStateNotFound error: how to fix or avoid?

I am using Microsoft Graph API delta queries for downloading some information (messages, contacts, events) regularly. But sometimes I get this error:
{
  "error": {
    "code": "SyncStateNotFound",
    "innerError": {
      "date": "2018-06-01T06:31:24",
      "request-id": "47e918a9-ce5b-42b4-8a86-12b96c93121a"
    },
    "message": "The sync state generation is not found; generation=605;[highest=841][841][839][840]."
  }
}
I can't provide steps to reproduce it because I don't know how; it happens sometimes in the production environment.
So I have some questions:
What is a generation in the Microsoft Graph API? Is there any available documentation about it? I didn't find anything useful on the Internet.
Why do delta links expire? Do they expire with time, or after some number of uses? Can I save my delta link in my database and use it for syncing again in, e.g., a year?
How can I avoid delta link expiration? Are there any lifehacks?
What should I do when I get this issue? A full resync and getting a new delta link?
Is it a bug or a feature?
Every time you sync, a new sync token is generated. We store the current sync token along with the two previous ones. This helps us in cases where we advance the sync on the server side, but something happens transmitting the data to the client so they don't get the new token value. In such cases, we can "fallback" to the previous sync token so that the client doesn't have to resync everything. But these three stored tokens change with each sync - the oldest one gets dropped and we advance. In your case, you are passing us a delta token that is around 230 generations old. That token is long gone.
Another thing to consider is that an "inactive" sync token will hang around for about 90 days, at which point we consider it stale, pour gas on it, and set it on fire (not really).
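The generation scheme described above can be modeled with a toy sketch like this (my own illustration, not service code): the server keeps the current sync token plus the two previous ones, so a client that missed one handoff can still resume, while anything older raises the error from the question:

from collections import deque

class SyncState:
    def __init__(self):
        self.generations = deque(maxlen=3)  # oldest token drops off automatically
        self.counter = 0

    def advance(self):
        # Each sync produces a new token and retires the oldest of the three.
        self.counter += 1
        token = f"gen-{self.counter}"
        self.generations.append(token)
        return token

    def resume(self, token):
        if token not in self.generations:
            raise LookupError("SyncStateNotFound")  # client must do a full resync
        return token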

SurveyMonkey Long Lived Access Token Lifespan

I am working on a project that requires an automated SSIS package to connect to the SurveyMonkey data store via API to incrementally download survey results for the day or a specified time period, for custom reporting and low-scoring task assignment.
Via OAuth I can collect a long-lived access token, but due to the automated and indefinite nature of my project's lifespan, I cannot manually initiate OAuth2 token refreshes or complete manual re-authentication cycles.
Is there another method to automatically export this data on a scheduled request?
Additionally, for clarification: for how long is a long-lived access token valid? 60 days?
Additionally, for clarification for how long is a long lived access token
valid? 60 days?
Miles from surveymonkey.com support responded to me with a great answer. I hope it can help someone down the line.
Hi Rob,
Currently our tokens should not expire - this is not guaranteed and may change in future, but we will send out an update well ahead of time if this does ever change. The token you receive on completion of OAuth lets you know how long the token will last for without user intervention; currently it returns 'null' in the 'expires_in' field.
There is no other automated way to schedule the data to be exported currently, however it sounds like our current setup should suit your needs.
In addition to Miles's reply, it is very straightforward to pull diffs from SurveyMonkey using modified dates. We keep a "last sync" timestamp per survey in our database and update it after each successful data pull.
Use the REST API directly, or (if you're using PHP) try https://github.com/oori/php-surveymonkey. We run it in production.
*Note: you're actually interested in setting the start_modified_date option for the "getRespondentList" function, but in general, see the API docs; the modified-date filter is available in more functions.
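For example, an incremental pull with the legacy v2 API might look roughly like this in Python; the endpoint shape, field names, and credentials here are assumptions based on the answer above, not verified documentation:

import requests

LAST_SYNC = "2015-01-01 00:00:00"  # loaded from our database, per survey

resp = requests.post(
    "https://api.surveymonkey.net/v2/surveys/get_respondent_list",
    params={"api_key": "API_KEY"},                     # placeholder
    headers={"Authorization": "Bearer ACCESS_TOKEN"},  # placeholder
    json={
        "survey_id": "1234567890",                     # placeholder
        "start_modified_date": LAST_SYNC,              # only changed respondents
    },
)
respondents = resp.json().get("data", {}).get("respondents", [])
# After a successful pull, advance LAST_SYNC to the current time.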

What's the point of a timestamp in OAuth if a Nonce can only be used one time?

At first I misinterpreted OAuth's timestamp implementation to mean that a timestamp not within 30 seconds of the current time would be denied. It turned out this was wrong for a few reasons, including the fact that we could not guarantee that each system clock was in sync closely enough, down to the minutes and seconds, regardless of time zone. Then I read it again to get more clarity:
"Unless otherwise specified by the Service Provider, the timestamp is
expressed in the number of seconds since January 1, 1970 00:00:00 GMT.
The timestamp value MUST be a positive integer and MUST be equal or
greater than the timestamp used in previous requests."
source: http://oauth.net/core/1.0/#nonce
Meaning the timestamps are only compared in relation to previous requests from the same source, not in comparison to my server system clock.
Then I read a more detailed description here: http://hueniverse.com/2008/10/beginners-guide-to-oauth-part-iii-security-architecture/
To prevent compromised requests from being used again (replayed), OAuth uses a nonce and timestamp. The term nonce means ‘number used once’ and is a unique and usually random string that is meant to uniquely identify each signed request. By having a unique identifier for each request, the Service Provider is able to prevent requests from being used more than once. This means the Consumer generates a unique string for each request sent to the Service Provider, and the Service Provider keeps track of all the nonces used to prevent them from being used a second time. Since the nonce value is included in the signature, it cannot be changed by an attacker without knowing the shared secret.
Using nonces can be very costly for Service Providers as they demand persistent storage of all nonce values received, ever. To make implementations easier, OAuth adds a timestamp value to each request which allows the Service Provider to only keep nonce values for a limited time. When a request comes in with a timestamp that is older than the retained time frame, it is rejected as the Service Provider no longer has nonces from that time period. It is safe to assume that a request sent after the allowed time limit is a replay attack. OAuth provides a general mechanism for implementing timestamps but leaves the actual implementation up to each Service Provider (an area many believe should be revisited by the specification). From a security standpoint, the real nonce is the combination of the timestamp value and nonce string. Only together they provide a perpetual unique value that can never be used again by an attacker.
The reason I am confused is if the Nonce is only used once, why would the Service Provider ever reject based on timestamp? "Service Provider no longer has nonces from that time period" is confusing to me and sounds as if a nonce can be re-used as long as it is within 30 seconds of the last time it was used.
So can anyone clear this up for me? What is the point of the timestamp if the nonce is a one time use and I am not comparing the timestamp against my own system clock (because that obviously would not be reliable). It makes sense that the timestamps will only be relative to each other, but with the unique nonce requirement it seems irrelevant.
The timestamp is used to allow the server to optimize its storage of nonces. Basically, consider the real nonce to be the combination of the timestamp and the random string. But by having a separate timestamp component, the server can implement a time-based restriction using a short window (say, 15 minutes) and limit the amount of storage it needs. Without timestamps, the server would need infinite storage to keep every nonce ever used.
Let's say you decide to allow up to 15 minutes time difference between your clock and the client's and are keeping track of the nonce values in a database table. The unique key for the table is going to be a combination of 'client identifier', 'access token', 'nonce', and 'timestamp'. When a new request comes in, check that the timestamp is within 15 minutes of your clock then lookup that combination in your table. If found, reject the call, otherwise add that to your table and return the requested resource. Every time you add a new nonce to the table, delete any record for that 'client identifier' and 'access token' combination with timestamp older than 15 minutes.
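That scheme is easy to sketch; here is a minimal Python illustration of the table described above (the names are mine, and a real implementation would use a database rather than an in-memory set):

import time

WINDOW = 15 * 60  # allowed clock difference, in seconds
seen = set()      # stands in for the database table described above

def check_request(client_id, access_token, nonce, timestamp):
    now = time.time()
    if abs(now - timestamp) > WINDOW:
        return False  # stale timestamp: outside the allowed window
    key = (client_id, access_token, nonce, timestamp)
    if key in seen:
        return False  # exact tuple already used: reject as a replay
    seen.add(key)
    # Prune entries older than the window; they can never pass the first check.
    for old in [k for k in seen if k[3] < now - WINDOW]:
        seen.discard(old)
    return True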
OK, after enough pondering I believe I have cracked this one.
They want me to always know the timestamp of the last successful request that was made so that if any timestamp comes in prior to that it will be ignored.
Also, the nonce must be unique, but I am only going to store nonces up to a certain date range; if a timestamp is more than so many hours old, its nonce will be dropped and could then be used again. However, because the last-used timestamp is also stored, an attacker cannot re-use an old request even if its nonce is now considered unique, because the timestamp on that request would be outdated.
However, this only works because of the signature. If they changed the timestamp or the nonce on a request, the signature would no longer match the request and it would be denied (the timestamp and nonce are both part of the signature creation, and they do not have the signing key).
Phew!
If OAuth used just the timestamp, it'd be relatively easy for an attacker to guess what the next timestamp would be and inject their own request into the process. They'd just have to use "previous timestamp + 1".
By using the nonce, which is generated in a cryptographically secure manner (hopefully), the attacker can't simply inject TS+1, because they won't have the proper nonce to authenticate themselves.
Think of it as a secure door lock that requires both a keycard and a pin code. You can steal the keycard, but still can't get through the door because you don't know the pin.
Can't comment yet, so posting as an answer. This is a reply to @tofutim's comment.
Indeed, if we insist that the timestamp value of a new request has to be greater than the timestamps of all previous requests, there seems to be little point in the nonce:
A replay attack is prevented, since the provider would reject a message whose timestamp equals a previous one.
Yes, the next timestamp is easy for an attacker to guess - just use timestamp + 1 - but the attacker still has no way to tamper with the timestamp parameter, since all parameters are signed using the consumer's secret (and token secret).
However, reading the OAuth 1.0a spec reveals that
Unless otherwise specified by the Service Provider, the timestamp is expressed in the number of seconds since January 1, 1970 00:00:00 GMT. The timestamp value MUST be a positive integer and MUST be equal or greater than the timestamp used in previous requests.
The Consumer SHALL then generate a Nonce value that is unique for all requests with that timestamp
So nonces are used to prevent replay attacks when you send multiple requests with the same timestamp.
Why allow requests with the same timestamp at all? Consider the case where you want to send multiple requests to independent resources in parallel, to finish the processing faster. Conceptually, the server handles each request one by one. You don't know in which order the requests will arrive, since that depends on many things such as the OS, the network the packets travel through, the server logic, and so on.
If you send requests with increasing timestamps, there's still a possibility that the request with the higher timestamp will be handled first, and then all requests with lower timestamps will fail. Instead, you can send requests with equal timestamps and different nonces.
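A small sketch of that pattern, using Python's secrets module for the nonces (the parameter names follow the OAuth 1.0a spec; the rest is illustrative):

import secrets
import time

ts = int(time.time())
# Four parallel calls sharing one timestamp, each with a unique nonce.
parallel_params = [
    {"oauth_timestamp": ts, "oauth_nonce": secrets.token_hex(8)}
    for _ in range(4)
]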
It is reasonable to assume one could attempt to crack the nonce with brute force. A timestamp reduces the chance someone might succeed.
