I am implementing 'Sign in with Google' (Google Sign-In) on a web app of mine. The Google API allows a nonce to be used to prevent replay attacks, but does not state the maximum (or minimum) length of the nonce.
Looking at the data-nonce attribute
from this doc: https://developers.google.com/identity/gsi/web/reference/html-reference#data-nonce
Any help would be great
I sent a message to my contacts at Google and am waiting to hear their response. The first part of this answer was my own research.
That being said, I started digging around in the RFC for OAuth, and the only mention I found worthwhile is the following. (My experience with the Google OAuth team says that they like to follow the RFC guidelines, as they are the accepted industry standard.)
15.5.2. Nonce Implementation Notes
The nonce parameter value needs to include per-session state and be unguessable to attackers. One method to achieve this for Web Server Clients is to store a cryptographically random value as an HttpOnly session cookie and use a cryptographic hash of the value as the nonce parameter. In that case, the nonce in the returned ID Token is compared to the hash of the session cookie to detect ID Token replay by third parties. A related method applicable to JavaScript Clients is to store the cryptographically random value in HTML5 local storage and use a cryptographic hash of this value.
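For illustration, the approach the spec describes might look roughly like this; a minimal sketch in a Rails-style controller, where the cookie name, the `cookies` helper, and the choice of SHA-256 are my assumptions, not anything Google mandates:

```ruby
require "securerandom"
require "digest"

# Keep a random per-session secret in an HttpOnly cookie and send its
# hash as the nonce value (assumed names; Rails-style cookies helper).
raw_secret = SecureRandom.hex(32)
cookies[:gsi_nonce_secret] = { value: raw_secret, httponly: true, secure: true }
nonce_for_data_nonce = Digest::SHA256.hexdigest(raw_secret)

# When the ID token comes back, detect replay by comparing its nonce
# claim against the hash of the stored cookie value:
# id_token_nonce == Digest::SHA256.hexdigest(cookies[:gsi_nonce_secret])
```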
The fact that they suggest storing it in a session cookie would then put the max limit at, at the very least, the max size of a session cookie, which is roughly 4 KB per cookie in most browsers.
However, I seem to remember something that allowed you to split a cookie value across multiple cookies. That would again lead me back to your assumption, which is that this is probably going to be a limitation applied by each OAuth server.
I think we still need to wait to hear back from Google.
Paraphrased response from Google:
Google does not explicitly limit nonce size. One idea would be to constrain it by the maximum supported JWT size. However, this may vary by browser, device, and networking infrastructure.
They have also updated the documentation linked above to reflect that.
Related
I am working with the eBay API using OAuth on my current Meteor project app.
There is a section of the app where I can create an eBay account profile, and assign custom values to the account (such as nick-naming it, etc.). This is where I initiate the OAuth sign-in redirect process.
My question is about the 'state' parameter in the token requests. I understand that it is for helping prevent CSRF, but do I HAVE to use it that way? 'state' does seem to be optional after all.
Let's say I wanted to pass another value into the request call such as the string 'eBay Seller', and expect that the same exact string be returned in the response. I want to use that value to help my app determine which account to assign the returned tokens to (based on which account profile initiated the redirect link).
Is 'state' a valid place to pass in a variable that I expect to be returned exactly as sent? I considered using Session variables to handle this scenario, but quickly realized that this would not work, since the OAuth process takes me outside of my project's domain.
Does OAuth support passing variables that are expected to be returned as sent? Is sending my variable as 'state' allowed, or even recommended (or absolutely not recommended)? Is there a better way to achieve what I want to do that does not involve updating database values?
Thank you!
You can send what you want as state. You should try to make sure it's not guessable though, to mitigate against CSRF attacks.
If you want to return useful information like 'ebay seller', then include something for CSRF (e.g. a hash of the session ID) and the text 'ebay seller', and delimit them, e.g.
2CF24DBA5FB0A30E26E83B2AC5B9E29E1B161E5C1FA7425E73043362938B9824|ebay seller
Now you have the best of both worlds: useful state info + CSRF protection.
Your redirect endpoint logic can check that the hash of the session ID matches and also confirm the account type from the initial request.
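A minimal sketch of that composite state; the method and variable names are just for illustration:

```ruby
require "digest"

# Build a state value that carries both a CSRF check and a human-useful label.
def build_state(session_id, label)
  "#{Digest::SHA256.hexdigest(session_id)}|#{label}"
end

# On the redirect back, verify the CSRF part and recover the label.
def parse_state(returned_state, session_id)
  csrf_part, label = returned_state.split("|", 2)
  return nil unless csrf_part == Digest::SHA256.hexdigest(session_id)
  label # e.g. "ebay seller" - tells you which account profile started the flow
end
```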
We are securing our web API using OAuth. Every request comes with an OAuth token in the header. To validate the token, we use the identity provider's public key. All works well.
I have a question. I believe this is not the right (or secure) way, but I don't know why.
Instead of validating the token with the public key every time, why can't we validate it once, store it in a cache (with emailId as the key), and for all subsequent requests compare the incoming token with the one stored in the cache?
Thanks in advance.
That's all fine and most Resource Servers would do exactly this. Typically one would calculate and store the hash of the access token for storage optimization reasons.
Note that you can do this safely assuming there is some lifetime you can extract from the token, and you don't keep the (hash of the) token beyond that lifetime.
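As a rough sketch of that idea, where an in-process Hash stands in for whatever cache you actually use and `token_exp` is the expiry you extracted when first validating the token:

```ruby
require "digest"

TOKEN_CACHE = {} # { SHA-256 of token => expiry time }

# Remember a token that has already been validated against the public key.
def remember_valid(token, token_exp)
  TOKEN_CACHE[Digest::SHA256.hexdigest(token)] = token_exp
end

# On later requests, accept the token from the cache only while it is unexpired.
def cached_valid?(token)
  expiry = TOKEN_CACHE[Digest::SHA256.hexdigest(token)]
  !expiry.nil? && Time.now < expiry
end
```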
I think it depends on the character of the access token. If the token has a fixed life time that cannot change and its validity is verified just by checking its cryptographic signature (something like a JSON Web Token), then you can safely cache the verification results (if it brings you some speed advantage).
But access tokens are often revocable, and then it's necessary to validate them at the authorization server. The endpoint for access token info and verification was not originally part of the OAuth 2.0 spec, but it's now standardized in an RFC as the "introspection endpoint" - https://www.rfc-editor.org/rfc/rfc7662
Still, if there are many requests coming in, even revocable tokens may be safe to cache for a short period of time (a few seconds). But it depends on the character of your application.
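If you do need to check revocation, a hedged sketch of calling an RFC 7662 introspection endpoint with a few-second cache might look like this; the endpoint URL and client credentials are placeholders:

```ruby
require "net/http"
require "json"
require "uri"

INTROSPECT_URI = URI("https://auth.example.com/oauth2/introspect") # placeholder
CACHE_TTL = 5 # seconds; short enough that revocation still takes effect quickly
INTROSPECTION_CACHE = {} # { token => [active?, checked_at] }

def token_active?(token)
  active, checked_at = INTROSPECTION_CACHE[token]
  return active unless checked_at.nil? || Time.now - checked_at > CACHE_TTL

  request = Net::HTTP::Post.new(INTROSPECT_URI)
  request.basic_auth("resource-server-id", "resource-server-secret") # placeholders
  request.set_form_data("token" => token)
  response = Net::HTTP.start(INTROSPECT_URI.host, INTROSPECT_URI.port, use_ssl: true) do |http|
    http.request(request)
  end
  active = JSON.parse(response.body)["active"] == true
  INTROSPECTION_CACHE[token] = [active, Time.now]
  active
end
```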
I am developing an OAuth 2.0 server and I've stumbled upon this question.
Let's suppose a scenario where my tokens are set to expire within one hour. In this timeframe, some client goes through the implicit flow fifty times using the same client_id and the same redirect_uri. Basically the same everything.
Should I give it the same accessToken generated on the first request on the subsequent ones until it expires or should I issue a new accessToken on every request?
The benefit of sending the same token is that I won't leave stale and unused tokens for a client on the server, minimizing the window for an attacker trying to guess a valid token.
I know that I should rate-limit things and I am doing it, but in the case of a large botnet attack from thousands of different machines, some limits won't take effect immediately.
However, I am not sure about the downsides of this solution and that's why I came here. Is it a valid solution?
I would rather say - no.
Reasons:
You should NEVER store access tokens in plain text on the Authorization Server side. Access tokens are credentials and should be stored hashed. Salting might not be necessary since they are generated strings anyway. See OAuth RFC point 10.3.
Depending on how you handle subsequent requests - an attacker who knows that a certain resource owner is using your service could repeat requests for the used client_id, and that way the attacker would be able to impersonate the resource owner. If you really return the same token, then at least ensure that you authenticate the resource owner every time.
What about the "state" parameter? Will you consider requests to be the "same" if the state parameter is different? If not, then a botnet attack will simply use a different state every time and force you to issue new tokens.
As an addition - generally, defending against a botnet attack via application logic is very hard. The server exposing your AS to the internet should take care of that. At the application layer you should make sure it does not go down from small-bandwidth attacks.
You can return the same access_token if it is still valid, there's no issue with that. The only downside may be in the fact that you use the Implicit flow and thus repeatedly send the - same, valid - access token in a URL fragment which is considered less secure than using e.g. the Authorization Code flow.
As a rule of thumb, never reuse keys; this will bring additional security to the designed system in case a key is captured.
You can send a different access token each time it is requested after proper authentication, and also send a refresh token along with the access token.
Once the access token expires, you should inform the user, and the user should request a new access token by providing the one-time-use refresh token previously issued to them, skipping the need for re-authentication; you should then provide a new access token and a new refresh token.
To resist attacks with fake refresh tokens, you should blacklist them along with their originating IP after a few warnings.
PS: Never use predictable tokens. At least make brute-force attacks extremely difficult by using totally random, long alphanumeric strings. I would suggest bin2hex(openssl_random_pseudo_bytes(512)) if you are using PHP.
I'm building a simple api with Rails API, and want to make sure I'm on the right track here. I'm using devise to handle logins, and decided to go with Devise's token_authenticatable option, which generates an API key that you need to send with each request.
I'm pairing the API with a Backbone/Marionette front end and am generally wondering how I should handle sessions. My first thought was to just store the API key in local storage or a cookie and retrieve it on page load, but something about storing the API key that way bothered me from a security standpoint. Wouldn't it be easy to grab the API key, either by looking in local storage/the cookie or by sniffing any request that goes through, and use it to impersonate that user indefinitely? I currently reset the API key on each login, but even that seems frequent - any time you log in on any device, you'd be logged out on every other one, which is kind of a pain. If I could drop this reset I feel like it would improve things from a usability standpoint.
I may be totally wrong here (and hope I am), can anyone explain whether authenticating this way is reliably secure, and if not what a good alternative would be? Overall, I'm looking for a way I can securely keep users 'signed in' to API access without frequently forcing re-auth.
token_authenticatable is vulnerable to timing attacks, which are very well explained in this blog post. These attacks were the reason token_authenticatable was removed from Devise 3.1. See the plataformatec blog post for more info.
To have the most secure token authentication mechanism, the token:
Must be sent via HTTPS.
Must be random, of cryptographic strength.
Must be securely compared.
Must not be stored directly in the database. Only a hash of the token can be stored there. (Remember, token = password. We don't store passwords in plain text in the db, right?)
Should expire according to some logic.
If you forego some of these points in favour of usability you'll end up with a mechanism that is not as secure as it could be. It's as simple as that. You should be safe enough if you satisfy the first three requirements and restrict access to your database though.
Expanding and explaining my answer:
Use HTTPS. This is definitely the most important point because it deals with sniffers.
If you don't use HTTPS, then a lot can go wrong. For example:
To securely transmit the user's credentials (username/email/password), you would have to use digest authentication but that just doesn't cut it these days since salted hashes can be brute forced.
In Rails 3, cookies are only shrouded by Base64 encoding, so they can be fairly easily revealed. See Decoding Rails Session Cookies for more info.
Since Rails 4 though, the cookie store is encrypted so data is both digitally verified and unreadable to an attacker. Cookies should be secure as long as your secret_key_base is not leaked.
Generate your token with:
SecureRandom.hex only if you are on Ruby 2.5+.
The gem sysrandom if you are on an older Ruby.
For an explanation on why this is necessary, I suggest reading the sysrandom's README and the blog post How to Generate Secure Random Numbers in Various Programming Languages.
Find the user record using the user's ID, email or some other attribute. Then, compare that user's token with the request's token with Devise.secure_compare(user.auth_token, params[:auth_token]) (see the sketch after this list).
If you are on Rails 4.2.1+ you can also use ActiveSupport::SecurityUtils.secure_compare.
Do not find the user record with a Rails finder like User.find_by(auth_token: params[:auth_token]). This is vulnerable to timing attacks!
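Putting those points together, a sketch of the lookup-and-compare step might look like this; the column and parameter names (auth_token, user_email) are assumptions for illustration:

```ruby
require "active_support/security_utils"

# Look the user up by a non-secret attribute, then compare tokens in
# constant time. Never do User.find_by(auth_token: ...) directly.
def user_from_token(params)
  user = User.find_by(email: params[:user_email])
  return nil unless user && params[:auth_token]
  match = ActiveSupport::SecurityUtils.secure_compare(user.auth_token.to_s, params[:auth_token].to_s)
  match ? user : nil
end
```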
If you are going to have several applications/sessions at the same time per user, then you have two options:
Store the unencrypted token in the database so it can be shared among devices. This is a bad practice, but I guess you can do it in the name of UX (and if you trust your employees with DB access).
Store as many encrypted tokens per user as you want to allow current sessions. So if you want to allow 2 sessions on 2 different devices, keep 2 distinct token hashes in the database. This option is a little less straightforward to implement but it's definitely safer. It also has the upside of allowing you to provide your users the option to end current active sessions in specific devices by revoking their tokens (just like GitHub and Facebook do).
There should be some kind of mechanism that causes the token to expire. When implementing this mechanism take into account the trade-off between UX and security.
Google expires a token if it has not been used for six months.
Facebook expires a token if it has not been used for two months:
Native mobile apps using Facebook's SDKs will get long-lived access tokens, good for about 60 days. These tokens will be refreshed once per day when the person using your app makes a request to Facebook's servers. If no requests are made, the token will expire after about 60 days and the person will have to go through the login flow again to get a new token.
Upgrade to Rails 4 to use its encrypted cookie store. If you can't, then encrypt the cookie store yourself, like suggested here. There would absolutely be no problem in storing an authentication token in an encrypted cookie store.
You should also have a contingency plan, for example, a rake task to reset a subset of tokens or every single token in the database.
To get you started, you could check out this gist (by one of the authors of Devise) on how to implement token authentication with Devise. Finally, the Railscast on securing an API should be helpful.
You can try to use Rails 4 with your API; it provides more security. Also use Devise 3.1.0.rc.
In Rails 4.0, several features have been extracted into gems.
ActiveRecord::SessionStore
Action Caching
Page Caching
Russian Doll-caching through key-based expiration with automatic dependency management of nested templates.
http://blog.envylabs.com/post/41711428227/rails-4-security-for-session-cookies
Devise 3.1.0.rc runs on both Rails 3.2 and Rails 4.0.
http://blog.plataformatec.com.br/2013/08/devise-3-1-now-with-more-secure-defaults/
Devise deprecates TokenAuthenticatable in 3.1.0.rc, but you can build your own TokenAuthenticatable method to address the security issue. It's more reliable and secure.
For token and session storage, you can go through http://ruby.railstutorial.org/chapters/sign-in-sign-out and http://blog.bigbinary.com/2013/03/19/cookies-on-rails.html for more understanding.
Lastly, you should look at encryption and decryption issues like "Unable to decrypt stored encrypted data" for additional security.
I don't see it mentioned anywhere in the 2.0 spec. Is a nonce not used by OAuth 2, and if not, how does it prevent replay attacks?
The 1.0 spec states:
3.3. Nonce and Timestamp
The timestamp value MUST be a positive integer. Unless otherwise specified by the server's documentation, the timestamp is expressed in the number of seconds since January 1, 1970 00:00:00 GMT.
A nonce is a random string, uniquely generated by the client to allow the server to verify that a request has never been made before and helps prevent replay attacks when requests are made over a non-secure channel. The nonce value MUST be unique across all requests with the same timestamp, client credentials, and token combinations.
To avoid the need to retain an infinite number of nonce values for future checks, servers MAY choose to restrict the time period after which a request with an old timestamp is rejected. Note that this restriction implies a level of synchronization between the client's and server's clocks. Servers applying such a restriction MAY provide a way for the client to sync with the server's clock; alternatively, both systems could synchronize with a trusted time service. Details of clock synchronization strategies are beyond the scope of this specification.
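For illustration only, a server-side check along the lines the 1.0 spec describes might look like this; the constants and the in-memory store are made up for the sketch:

```ruby
MAX_CLOCK_SKEW = 300 # seconds; requests with older timestamps are rejected
SEEN_NONCES = {}     # { [client_key, token, timestamp, nonce] => true }

def replayed?(client_key, token, timestamp, nonce)
  now = Time.now.to_i
  return true if (now - timestamp.to_i).abs > MAX_CLOCK_SKEW # too old or too far in the future
  key = [client_key, token, timestamp.to_i, nonce]
  return true if SEEN_NONCES.key?(key)                       # nonce already used for this combination
  SEEN_NONCES[key] = true
  SEEN_NONCES.delete_if { |(_, _, ts, _), _| now - ts > MAX_CLOCK_SKEW } # keep the store bounded
  false
end
```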
This is captured in a separate spec. See OAuth 2.0 Threat Model and Security Considerations for details/answers :)