Increase access token lifetime in GCP UI or via Python API - OAuth

I am using a VM on GCP and would like to increase the access token lifetime; by default it is 1 hour.
I am using the code below to generate a token:
import requests

query = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
response = requests.get(query, headers={"Metadata-Flavor": "Google"}).json()
It generates a token in the following format, where 'expires_in' is the remaining time, in seconds, before the token expires.
{'access_token': 'ya29.c.Ko8BEAjJKLI1bUQBiIj0zZz5hw3TlLjyCoXxKtyslbEnyRj9eUWO0sVdW3512f64ynOoi6laZNnPV
O23nELV5fYhk2epYodI1kXXXXXXXXXXX', 'expires_in': 3035, 'token_type': 'Bearer'}
How can I increase the expires_in value, either through the UI, a configuration file, or the Python API, so that the token expires after a time I define?
The reason I need a long-lived token is that I start a TPU from the Google VM using the TPU start API, and my machine learning model then trains on the TPU. The models are quite complex and in many scenarios take more than 10 or 20 hours. After the computation on the TPU is over, I want the TPU to shut itself down, but at that point I get an error saying the token cannot be authenticated because it has expired.
Is there any example / way to increase the token expiry time, or some other way to get rid of this problem?

AFAIK, you can't do that with the metadata server. However, it is possible to generate a short-lived token and extend its lifetime to up to 12 hours.
Have a look at the Service Account Credentials API. Note that you need to update the organization policy to extend the default (60 min) maximum lifetime.
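Concretely, the flow is: use the (short-lived) metadata token to call the Service Account Credentials API's generateAccessToken method, requesting a longer lifetime. A minimal sketch, assuming you know the attached service account's email and that the org policy constraint constraints/iam.allowServiceAccountCredentialLifetimeExtension has been updated to allow lifetimes beyond 1 hour:

```python
# Sketch: mint a token with an extended lifetime via the Service Account
# Credentials API. The service account email and the 12h lifetime below
# are assumptions for illustration.
import requests

METADATA_TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                      "instance/service-accounts/default/token")

def build_token_request(sa_email, lifetime_seconds=43200):
    """Build the generateAccessToken URL and JSON body (12h by default)."""
    url = ("https://iamcredentials.googleapis.com/v1/"
           f"projects/-/serviceAccounts/{sa_email}:generateAccessToken")
    body = {
        "scope": ["https://www.googleapis.com/auth/cloud-platform"],
        "lifetime": f"{lifetime_seconds}s",
    }
    return url, body

def mint_long_lived_token(sa_email):
    # Authorize the call with the short-lived token from the metadata server.
    meta = requests.get(METADATA_TOKEN_URL,
                        headers={"Metadata-Flavor": "Google"}).json()
    url, body = build_token_request(sa_email)
    resp = requests.post(
        url, json=body,
        headers={"Authorization": f"Bearer {meta['access_token']}"})
    resp.raise_for_status()
    return resp.json()["accessToken"]  # usable for up to the requested lifetime
```

The service account also needs permission to impersonate itself (roles/iam.serviceAccountTokenCreator) for this call to succeed.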


What does "offset" mean in Angular Auth OIDC config values?

In my project, there is a configuration value for OAuth that I do not understand:
auth: {
  maxIdTokenIatOffsetAllowedInSeconds: 600
},
Based on the document I am reading: https://nice-hill-002425310.azurestaticapps.net/docs/documentation/configuration
For maxIdTokenIatOffsetAllowedInSeconds, it says:
The amount of offset allowed between the server creating the token, and the client app receiving the id_token.
What does the offset mean in this case? Is it a unit of time?
I am assuming it means that each user can only receive one token every 600 seconds?
Can someone explain what the offset means, and what maxIdTokenIatOffsetAllowedInSeconds does to the token?
The docs you linked to are specifically for the angular-auth-oidc-client library, so hopefully that's what you're using. In that case, maxIdTokenIatOffsetAllowedInSeconds determines how much clock skew is allowed between the server issuing the token and the browser consuming it: 600 seconds means the two clocks can differ by up to 10 minutes and the token will still be considered valid.
However, today I came across an issue where any value higher than 299 caused my token to be considered expired. Looking back through the changelog, I found a recent-ish PR that added this check along with a new configuration value that lets you ignore it (disableIdTokenValidation).
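In other words, the setting bounds how far the id_token's iat (issued-at) claim may drift from the client's clock. An illustrative sketch of that check (not the library's actual code):

```python
# Sketch of a clock-skew check like maxIdTokenIatOffsetAllowedInSeconds:
# the token's iat claim may differ from the local clock by at most the offset.
import time

def iat_within_offset(iat, max_offset_seconds=600, now=None):
    """Return True if the token's issued-at time is within the allowed skew."""
    now = time.time() if now is None else now
    return abs(now - iat) <= max_offset_seconds
```

So with an offset of 600, a token issued 500 seconds "in the past" (or future, from the client's point of view) is accepted, while one 601 seconds off is rejected.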

410 Gone error in MS Graph delta API for OneDrive

I am using the /delta OneDrive APIs from Microsoft Graph to sync files and folders for all the users in my organization.
According to the documentation:
There may be cases when the service can't provide a list of changes for a given token (for example, if a client tries to reuse an old token after being disconnected for a long time, or if server state has changed and a new token is required). In these cases, the service will return an HTTP 410 Gone error
The documentation gives no exact time frame after which the delta token is too old or expired.
Is there a particular time frame after which the token becomes unusable for a drive, so that we will get the 410 error?
There is no defined time to live (TTL) for delta tokens, nor is age the only factor in determining whether a token is invalid; substantial changes to the tenant and/or drive can invalidate it as well.
So long as your code is set up to handle a possible 410, you shouldn't see much impact from this. My general guidance would be to optimize the "full resync" by comparing file metadata and only pulling or pushing files that have changed (i.e. compare name, path, size, dates, etc.).
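A sketch of what "handle a possible 410" can look like, assuming the Graph /me/drive/root/delta endpoint; per the OneDrive docs, the 410 response carries a Location header with a fresh delta URL to resync from, and restarting from scratch is the safe fallback:

```python
# Sketch: delta sync loop that recovers from 410 Gone.
import requests

DELTA_URL = "https://graph.microsoft.com/v1.0/me/drive/root/delta"

def resync_url(headers, fallback=DELTA_URL):
    """On 410, prefer the Location header the service hands back;
    otherwise fall back to a full resync from the base delta URL."""
    return headers.get("Location", fallback)

def sync_changes(token, handle_change, delta_link=None):
    url = delta_link or DELTA_URL
    auth = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=auth)
        if resp.status_code == 410:
            url = resync_url(resp.headers)  # token too old: restart the sync
            continue
        resp.raise_for_status()
        page = resp.json()
        for item in page.get("value", []):
            handle_change(item)  # e.g. compare name/path/size/dates here
        # nextLink = more pages now; deltaLink = save for the next run
        url = page.get("@odata.nextLink")
        delta_link = page.get("@odata.deltaLink", delta_link)
    return delta_link  # persist this for the next incremental sync
```

Here handle_change is your callback implementing the metadata comparison described above; the persisted deltaLink is what keeps subsequent syncs incremental.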

WSO2 token expires

WSO2 Identity Server v5.1.0, with the integrated LDAP disabled, using a read-only LDAP for authentication.
When authenticating a user with WSO2 IS via the /oauth2/token API, WSO2 initially returns the following output:
{
  "access_token": "fa738bd8c50d4506cf2c3566ed86adb8",
  "refresh_token": "9b2d346cc05f827f4cab99bc9c90401a",
  "scope": "openid",
  "token_type": "Bearer",
  "expires_in": 3600
}
When accessing the API again one second later, it reports expires_in as 3300.
So my question is: why is 300 deducted when accessing the API for the first time?
Please check this value in identity.xml:
<OAuth> --> <TimestampSkew>300</TimestampSkew>
The default value is 300. When calculating the expiry time, the timestamp skew is also deducted to keep the client on the safe side (network delays, etc.). You can change the value as required.
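The arithmetic behind the observed value, using the defaults from identity.xml:

```python
# Why the first response shows 3300 instead of 3600:
access_token_validity = 3600  # <AccessTokenDefaultValidityPeriod>
timestamp_skew = 300          # <TimestampSkew>, deducted for safety
expires_in = access_token_validity - timestamp_skew
print(expires_in)  # 3300
```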
From WSO2 docs:
Configuring the token expiration time
User access tokens have a fixed expiration time, which is set to 60 minutes by default. Before deploying the API Manager to users, extend the default expiration time by editing the <AccessTokenDefaultValidityPeriod> element in <PRODUCT_HOME>/repository/conf/identity.xml.
Also take the time stamp skew into account when configuring the expiration time. The time stamp skew is used to manage small time gaps in the system clocks of different servers. For example, let's say you have two Key Managers and you generate a token from the first one and authenticate with the other. If the second server's clock runs 300 seconds ahead, you can configure a 300s time stamp skew in the first server. When the first Key Manager generates a token (e.g., with the default life span, which is 3600 seconds), the time stamp skew is deducted from the token's life span. The new life span is 3300 seconds and the first server calls the second server after 3200 seconds.
You configure the time stamp skew using the <TimestampSkew> element in <PRODUCT_HOME>/repository/conf/identity.xml.
Tip: Ideally, the time stamp skew should not be larger than the token's life span. We recommend you to set it to zero if the nodes in your cluster are synchronized. Also, note that when the API Gateway cache is enabled (it is enabled by default), even after a token expires, it will still be available in the cache for consumers until the cache expires in approximately 15 minutes.
If I understand your query correctly, try changing the value of <TimestampSkew>300</TimestampSkew> found in IS_HOME\repository\conf\identity\identity.xml to <TimestampSkew>0</TimestampSkew>.

SurveyMonkey Long Lived Access Token Lifespan

I am working on a project that requires an automated SSIS package to connect to the SurveyMonkey data store via the API and incrementally download survey results for the day or a specified time period, for custom reporting and low-scoring task assignment.
Via OAuth I can collect a long-lived access token, but due to the automated and open-ended nature of my project's lifespan, I cannot manually initiate OAuth2 token refreshes or complete manual re-authentication cycles.
Is there another method to automatically export this data on a scheduled request?
Additionally, for clarification: how long is a long-lived access token valid? 60 days?
Miles from surveymonkey.com support responded with a great answer. I hope it can help someone down the line.
Hi Rob,
Currently our tokens should not expire. This is not guaranteed and may change in the future, but we will send out an update well ahead of time if it ever does. The token you receive on completion of OAuth lets you know how long it will last without user intervention; currently it returns 'null' in the 'expires_in' field.
There is no other automated way to schedule the data to be exported currently; however, it sounds like our current setup should suit your needs.
In addition to Miles's reply: it is very straightforward to pull diffs from SurveyMonkey using modified dates. We keep a "last sync" timestamp per survey in our database and update it after each successful data pull.
Use the REST API directly, or (if you're using PHP) try https://github.com/oori/php-surveymonkey. We run it in production.
*Note: you're actually interested in setting the start_modified_date option for the getRespondentList function, but in general, see the API docs; the modified-date filter is available in more functions.
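A minimal sketch of such an incremental pull against the v3 REST API, assuming its start_modified_at filter on /surveys (the getRespondentList call mentioned above belongs to the older v2 API and used start_modified_date; check the docs of whichever version you target):

```python
# Sketch: incremental pull using a per-survey "last sync" timestamp.
# Endpoint and parameter names follow the v3 API (verify against the docs).
import requests

API_BASE = "https://api.surveymonkey.com/v3"

def modified_surveys_params(last_sync_iso):
    """Ask only for surveys modified since the stored last-sync time."""
    return {"start_modified_at": last_sync_iso}

def fetch_modified_surveys(access_token, last_sync_iso):
    resp = requests.get(
        f"{API_BASE}/surveys",
        headers={"Authorization": f"Bearer {access_token}"},
        params=modified_surveys_params(last_sync_iso),
    )
    resp.raise_for_status()
    return resp.json().get("data", [])
```

After each successful pull, write the current timestamp back to your database as the new "last sync" value for that survey.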

How to synchronize a token (string value) between application nodes?

We have a Spring 3 + REST application in which we use a token to identify the source of each incoming request. This token is kept in memory (in a HashMap).
When we run multiple instances of our app (deployed on multiple Tomcat instances on different machines), how can we share/sync this token between the different app nodes?
Our only requirement is to sync this token among the nodes.
I browsed around and found a few products such as Redis, Memcached, and ZooKeeper, but I am not able to decide which one to select.
Any kind of suggestion/comment is helpful.
Regards,
Pramod
I've never used ZooKeeper, so I cannot say anything about it. Both Redis (a database) and Memcached (a cache) will work for you; which one is better depends on how you use tokens.
Choose Redis if:
tokens may be stored for a long time
there is no need to expire tokens
you want to handle more tokens than can be stored in memory
you want to replicate tokens to other servers, so if one goes down another can still serve them
tokens have to survive a server restart
Choose memcached if:
tokens are valid only for a certain amount of time and should expire
it is acceptable that, when the amount of tokens exceeds memory capacity, the least recently used tokens are evicted
all tokens can be stored in memory
there is no need to replicate tokens to other servers
tokens don't have to survive a server restart
you want to use the Spring Cache abstraction
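As an illustration of the Redis option, here is a minimal sketch of replacing the in-memory HashMap with a shared Redis key space (shown in Python with redis-py for brevity; from Spring the same pattern is available via Spring Data Redis). The key naming, host, and TTL are assumptions:

```python
# Sketch: shared token store in Redis instead of a per-node HashMap.
# Every app node reads/writes the same keys, and Redis expires them itself.

def token_key(source_id):
    """Key naming convention (assumption): one key per request source."""
    return f"token:{source_id}"

def get_client():
    import redis  # redis-py: pip install redis
    # Host/port are illustrative; point this at your shared Redis instance.
    return redis.Redis(host="redis.internal", port=6379, decode_responses=True)

def store_token(client, source_id, token, ttl_seconds=3600):
    # SETEX writes the value with a TTL, visible to all app nodes at once.
    client.setex(token_key(source_id), ttl_seconds, token)

def lookup_token(client, source_id):
    # Returns None if the token expired or was never stored.
    return client.get(token_key(source_id))
```

Because every node talks to the same Redis instance, a token stored by one Tomcat node is immediately visible to the others, and expiry is enforced centrally rather than per JVM.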
