I am new to SAML and I've recently been asked to implement an SSO SP using Spring Security SAML Extension.
I was able to implement the entire SSO Flow and it seems to be working correctly but I just want to understand how secure SAML is.
Can attackers sniff the SAML Response, take out the signatureValue and public certificate and reuse them to make another SSO request? (This is without considering the other assertion attributes like time, etc)
I hope someone can enlighten me.
Cheers.
You need the private key in order to generate a new signature. The public key is only used for verification; the private key is used for signature generation. I would suggest reading one of the many articles available on the internet explaining PKI and how it works. There are other forms of attack your scenario might suggest (such as replaying the captured response), which is why you could encrypt the response or even use an out-of-band SAML profile such as artifact resolution. Of course this all depends on your IdP and whether or not it supports it.
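To make the asymmetry concrete, here is a minimal sketch (plain JDK, not the SAML extension itself, and the names are illustrative) showing that producing a signatureValue requires the private key, while the public certificate in the response can only verify it:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // The IdP holds the private key; the SP (and any eavesdropper) only ever sees the public key.
        KeyPair idpKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] assertion = "<Assertion>...original...</Assertion>".getBytes(StandardCharsets.UTF_8);

        // Signing requires the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(idpKeys.getPrivate());
        signer.update(assertion);
        byte[] signatureValue = signer.sign();

        // Verification only needs the public key. Reusing the sniffed signatureValue
        // with a different (forged) assertion fails verification.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(idpKeys.getPublic());
        verifier.update("<Assertion>...forged...</Assertion>".getBytes(StandardCharsets.UTF_8));
        System.out.println("Forged assertion verifies? " + verifier.verify(signatureValue)); // false
    }
}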
I have familiarity with OAuth 2.0 / OpenID Connect but am new to WebAuthn. I am trying to understand how a scenario using those OAuth flows and connections would work using WebAuthn. I thought that by mapping concepts from OAuth to WebAuthn I would be able to better understand the concepts.
I think that, similar to how in the OAuth implicit grant flow a client may receive an id_token and access_token, in WebAuthn a client may receive a credential object from the Authenticator using navigator.credentials.create().
The part I do not understand is how this credential can reliably be consumed by downstream services. In OAuth a client or server may send "access_tokens", and the receiving servers may request the public keys from the authorities to validate that the token hasn't been tampered with, is not expired, has the correct audience, etc. This relies on the authorities having a publicly available /.well-known endpoint with the public keys.
However, because the keys are specific to each authenticator rather than being a single shared public key, I think it is not possible for them to be discoverable.
This is where I don't understand how credentials could be consumed by services. I thought the client would have to send the public key along WITH the authenticator data and client data, but that is three pieces of information and feels awkward. Sending a single access_token actually seems cleaner.
I created a graphic to explain visually.
(It may have technical inaccuracies, but hopefully the larger point is made clearer)
https://excalidraw.com/#json=fIacaTAOUQ9GVgsrJMOPr,yYDVJsmuXos0GfX_Y4fLRQ
Here are the 3 questions embedded in the image:
What data does the client need to send to the server in order for the server to use the data? (Similar to sending access_token)
How would the server get the public key to decrypt the data?
Which piece of data is appropriate / standardized to use as the stable user id?
As someone else mentioned - while there are a lot of commonalities between how WebAuthn and something like OpenID Connect work, those commonalities aren't really useful for understanding how WebAuthn works - they are better to explore after you understand WebAuthn.
A WebAuthn relying party does not have its own cryptographic keys or secrets or persistent configuration - it just has a relying party identifier, which is typically the web origin. The client (browser and/or platform) mediates between the relying party and authenticators, mostly protecting user privacy, handling consent, and providing phishing protection.
The relying party will create a new credential (e.g. key pair) with the authenticator of a user's choosing, be it a cell phone or a physical security key fob in their pocket. The response is the public key of a newly created key pair on the authenticator. That public key is saved against the user account by the RP.
In a future authentication, the authentication request results in a response signed by the corresponding private key, which the RP verifies with the stored public key. The private portion is never meant to leave the authenticator - at least not without cryptographic protections.
This does pair well with something like OpenID Connect. Registration is normally per web domain, which means that a lot of manual registrations (and potentially management, recovery, and other IAM-type activities) could be necessary. With OpenID Connect, you can centralize the authentication of several applications at a single point, and with it centralize all WebAuthn credential management.
I thought that by mapping concepts from OAuth to WebAuthn I would be able to better understand the concepts.
This seems to be working against you - you're trying to pattern match WebAuthn onto a solution for a different kind of problem (access delegation). Overloaded terminology around "authentication" doesn't help, but the WebAuthn specification does make things a bit clearer when it describes what it means by "Relying Party":
Note: While the term Relying Party is also often used in other contexts (e.g., X.509 and OAuth), an entity acting as a Relying Party in one context is not necessarily a Relying Party in other contexts. In this specification, the term WebAuthn Relying Party is often shortened to be just Relying Party, and explicitly refers to a Relying Party in the WebAuthn context. Note that in any concrete instantiation a WebAuthn context may be embedded in a broader overall context, e.g., one based on OAuth.
Concretely: in your OAuth 2.0 diagram, WebAuthn is used during step 2 "User enters credentials"; the rest of it doesn't change. Passing the WebAuthn credentials to other servers is not how it's meant to be used - that's what OAuth is for.
To clarify one other question, "how would the server get the public key to decrypt data?" - understand that WebAuthn doesn't encrypt anything. Some data (JS ArrayBuffers) from the authenticator response is typically base64 encoded, but otherwise the response is often passed to the server unaltered as JSON. The server uses the public key to verify the signature; that key is either seen for the first time during registration, or retrieved from the database (where it is stored against a user account) during authentication.
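To illustrate just the signature check (not a complete WebAuthn verification - it omits the clientDataJSON checks, RP ID hash, flags and signature counter, and it assumes the COSE public key saved at registration has already been converted to X.509/SPKI form), a rough server-side sketch:

import java.security.KeyFactory;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class WebAuthnVerifySketch {
    // Per the spec, the signed data for an assertion is authenticatorData || SHA-256(clientDataJSON).
    static boolean verifyAssertion(byte[] storedSpkiPublicKey,   // saved at registration time
                                   byte[] authenticatorData,
                                   byte[] clientDataJson,
                                   byte[] signature) throws Exception {
        PublicKey publicKey = KeyFactory.getInstance("EC")       // ES256 credential assumed
                .generatePublic(new X509EncodedKeySpec(storedSpkiPublicKey));

        byte[] clientDataHash = MessageDigest.getInstance("SHA-256").digest(clientDataJson);

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(publicKey);
        verifier.update(authenticatorData);
        verifier.update(clientDataHash);
        return verifier.verify(signature);
    }
}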
EDIT: Added a picture for a clearer understanding of how WebAuthn works, since it has nothing to do with OAuth2 / OpenID.
(source: https://passwordless.id/protocols/webauthn/1_introduction)
Interestingly enough, what I aim to do with Passwordless.ID is a free public identity provider that uses WebAuthn and is compatible with OAuth2/OpenID.
Here is the demo of such a "Sign in" button working with OAuth2/OpenID:
https://passwordless-id.github.io/demo/
Please note that this is an early preview, still in development and somewhat lacking in documentation. Nevertheless, it might be useful as a working example.
That said, I sense some confusion in the question. So please let me emphasize that OAuth2 and WebAuthN are two completely distinct and unrelated protocols.
WebAuthN is a protocol to authenticate a user device. It is "Hey user, please sign me this challenge with your device to prove it's you"
OAuth2 is a protocol to authorize access to [part of] an API. It is "Hey API, please grant me permission to do this and that on behalf of the user".
OpenID builds on OAuth2 to basically say "Hey API, please allow me to read the user's standardized profile!".
WebAuthN is not a replacement for OAuth2; they are 100% independent things. OAuth2 is for authorization (granting permissions) and is unrelated to how the user actually authenticates on the given system. It could be with username/password, it could be with an SMS OTP ...and it could be with WebAuthN.
There is a lot of good information in the other answers and comments, which I encourage you to read. I thought it would be better, though, to consolidate it in a single post that directly responds to the original question.
How does WebAuthN allow dependent web API's to access public key for decrypting credential without having to send the key?
There were problems with the question:
I used the word "decrypt", but this was wrong. The data sent is signed, not encrypted, so the key is not used to decrypt but to verify the signature.
I was asking how a part of the OAuth process could be done using WebAuthN; however, this was a misunderstanding. WebAuthN is not intended to solve this part of the process, so the question is less relevant and doesn't make sense to answer directly.
Others have posted that WebAuthN can be used WITH OAuth, so downstream systems can still receive JWTs and verify signatures as normal. How these two protocols are paired is out of scope here.
What data does the client need to send to the server in order for the server to use the data?
@Rafe answered: "table with user_id, credential_id, public_key and signature_counter"
See: https://www.w3.org/TR/webauthn-2/#authenticatormakecredential
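As a rough illustration (the field names are mine, not mandated by the spec), the relying party might persist something like this per credential:

// Roughly what a relying party stores for each registered WebAuthn credential.
// The public key later verifies assertion signatures; the counter helps detect cloned authenticators.
public record StoredCredential(
        String userId,          // your own stable user identifier
        byte[] credentialId,    // rawId returned by navigator.credentials.create()
        byte[] publicKey,       // COSE (or converted SPKI) public key from registration
        long signatureCounter   // last signCount value seen from the authenticator
) { }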
How would server get the public key to decrypt data?
Again, "decrypt" is the wrong word. The server is not decrypting, only verifying the signature.
Also, the word "server" has multiple meanings depending on context, and it wasn't clarified in the question.
WebAuthN: the server that acts as the Relying Party in the WebAuthN context verifies the signature during authentication requests. However, the "server" in my question was intended to mean the downstream APIs, which would not be part of WebAuthN.
OAuth: as explained by others, these API servers could still use OAuth and request the public key from the provider for verification, and the token contains the necessary IDs and scopes/permissions. (This likely means existing JWT middlewares can be reused.)
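For illustration only - a rough sketch of how such a downstream API could verify an RS256 token against the provider's published keys, here using the Nimbus JOSE+JWT library (the JWKS URL and the claim checks are placeholders):

import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jose.jwk.RSAKey;
import com.nimbusds.jwt.SignedJWT;

import java.net.URL;
import java.util.Date;

public class DownstreamJwtCheckSketch {
    static boolean isValid(String accessToken) throws Exception {
        SignedJWT jwt = SignedJWT.parse(accessToken);

        // Fetch the provider's public keys from its published JWKS endpoint
        // (advertised via the /.well-known configuration); cache this in real code.
        JWKSet jwks = JWKSet.load(new URL("https://issuer.example.com/.well-known/jwks.json"));
        RSAKey key = (RSAKey) jwks.getKeyByKeyId(jwt.getHeader().getKeyID());

        boolean signatureOk = jwt.verify(new RSASSAVerifier(key.toRSAPublicKey()));
        boolean notExpired = jwt.getJWTClaimsSet().getExpirationTime().after(new Date());
        // A real middleware would also check issuer, audience and scopes.
        return signatureOk && notExpired;
    }
}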
Which piece of data is appropriate / standardized to use as the stable user id?
For WebAuthN the user object requires { id, name, displayName }. However, the spec intentionally does not try to standardize how the ID may be propagated to downstream systems. That is up to the developer.
See: https://www.w3.org/TR/webauthn-2/#dictdef-publickeycredentialuserentity
For OAuth / OpenID Connect:
sub: REQUIRED. Subject Identifier. A locally unique and never reassigned identifier within the Issuer for the End-User
See: https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse
Hopefully I didn't make too many technical inaccuracies. 😬
I've read a lot about the different flows (authorization code, implicit, hybrid and some extensions such as PKCE). Now I'm on the authorization code flow with PKCE.
PKCE ensures the party that initiates the flow is the same party that exchanges the authorization code for an access token. That is nice and OK.
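(For reference, a minimal sketch of that binding as defined by RFC 7636 - the client derives an S256 code_challenge from a random code_verifier and later proves possession of the verifier at the token exchange; the names here are illustrative:)

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class PkceSketch {
    public static void main(String[] args) throws Exception {
        // 1. The client creates a high-entropy code_verifier and keeps it locally.
        byte[] random = new byte[32];
        new SecureRandom().nextBytes(random);
        String codeVerifier = Base64.getUrlEncoder().withoutPadding().encodeToString(random);

        // 2. The S256 code_challenge is sent with the authorization request.
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));
        String codeChallenge = Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
        System.out.println("code_challenge=" + codeChallenge + "&code_challenge_method=S256");

        // 3. When redeeming the authorization code, the client sends the code_verifier;
        //    the authorization server recomputes the hash and compares it to the stored challenge.
    }
}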
When using this flow without a client_secret (which is recommended for SPA/JavaScript applications) there is no guarantee that the client is the known/original client. So the 'consent' the user gave is of no value. uhh?
I am working on a native client (a public downloadable binary). A secret cannot be considered confidential when baked into the binary; it can be decompiled, for example.
Now I'm in doubt. Which is better: bake the secret into the binary so that there is some extra layer of assurance that the client is the known client, or stop asking for 'consent' and give the same client_id to the whole world, relying only on the user credentials?
Or is there something wrong with my story?
Very good question, and it made me realise a gap in my understanding. It is the role of the redirect URI to deal with this risk. In the web / https case the only hack that could work would be to edit the user's hosts file. In the native case it is less perfect, and your question is covered in the article below. Generally our best bet is to follow recommendations / standards - but they have plenty of problems! https://web-in-security.blogspot.com/2017/01/pkce-what-cannot-be-protected.html?m=1
For others reading this: since then I've read a lot more.
Client impersonation is not easily fixable.
RFC8252 seems to be the most applicable article with recommendations for native apps - https://www.rfc-editor.org/rfc/rfc8252
"Claimed ‘https’ scheme" is mentioned as the best solution (IOS, Android and maybe UWP apps).
Since I'm working on a native Windows, non-UWP application I can't use this. As far as I can see the "Web Authentication Broker based on the app SID" is possible for my situation.
The other method is to accept the client as not known/identified and ask for 'consent' every time the client would access personal data.
We are trying to split Authorization and Authentication into two separate services. Both will use Identity Server 4. We may someday add additional external Authentication providers. I believe Federated Gateway is the term (http://docs.identityserver.io/en/release/topics/federation_gateway.html?highlight=Federation).
My research so far indicates we are able to set up Authorization as an External Provider and set [LocalLoginEnabled] to false. This works fine for web apps, since it redirects along the traditional flows. Our requirement is to have both web-based and client apps (Windows and mobile) calling our solution. This would need the Implicit or Resource Owner (password) flows.
Looking for guidance on the best way to set this up. I'm tempted to write a custom endpoint API to relay the authentication to the Authentication instance.
QUESTION:
How can I achieve "password flow" between two ID4 instances (Authorization + Authentication)?
Thanks in advance!
After much research and little guidance, I decided to take a leap and just created a new API endpoint used only for Resource Owner / Password grant types. This API simply validates that the necessary information is present (grant type, secret, user/password, etc.) and then relays it to the Authentication instance. There may be more "elegant" ways of doing this, but this one seems to be working.
Hope this might help someone.
Although my original answer worked, it was not the best way to accomplish my end result. Instead of creating a new endpoint, I am able to inject my own handling of password grants by implementing IResourceOwnerPasswordValidator. I can then have a single endpoint for all authorization. This solution is more "natural" and falls in line with the intended architecture.
The IResourceOwnerPasswordValidator interface declares just one method...
public Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
A much more elegant solution.
I want to secure my application with JWT. This application is only accessed by other server applications that know the secret key beforehand. I do not need to add token generation, since the key is already known to both applications. I tried to find some samples for this, but all the examples are complicated (I'm new to Spring Security) and, moreover, they do not include anything simple that would fit my use case (known secret key and algorithm, so no provider and no storing of the token is needed).
Basically what I want is to decode the token sent by the fellow server, check the secret key, check the sender, and check the time (the fellow server will always generate a new token, so if that token is stolen it will become invalid in a small amount of time).
I've thought of implementing this with a custom filter (or interceptor) plus this library, and removing Spring Security entirely since I can't find any use for it. But I would prefer to use Spring Security in order to have it available for any future needs, and in general to achieve what I want by doing it the Spring way.
The JWTFilter from JHipster may be a good start!
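For reference, a minimal sketch of such a filter using the jjwt library (the header name, secret handling and error handling are illustrative; JHipster's JWTFilter/TokenProvider do essentially this plus populating the Spring SecurityContext):

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.crypto.SecretKey;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Rejects requests whose "Authorization: Bearer <jwt>" token is not signed with
// the pre-shared secret, has expired, or was not issued by the expected sender.
public class SharedSecretJwtFilter extends OncePerRequestFilter {

    private final SecretKey key;
    private final String expectedIssuer;

    public SharedSecretJwtFilter(String sharedSecret, String expectedIssuer) {
        this.key = Keys.hmacShaKeyFor(sharedSecret.getBytes(StandardCharsets.UTF_8)); // >= 256-bit secret for HS256
        this.expectedIssuer = expectedIssuer;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String header = request.getHeader("Authorization");
        if (header == null || !header.startsWith("Bearer ")) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        try {
            Claims claims = Jwts.parserBuilder()
                    .setSigningKey(key)                  // checks the shared secret
                    .requireIssuer(expectedIssuer)       // checks the sender
                    .build()
                    .parseClaimsJws(header.substring(7)) // also rejects expired or tampered tokens
                    .getBody();
            // Optionally populate the Spring SecurityContext from the claims here.
            chain.doFilter(request, response);
        } catch (JwtException e) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }
}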
I have developed a 'REST-like' XML API that I wish to expose for consumption by third-party web applications. I'm now looking to implement a security model for the exchange of data between a third-party application and the 'REST-like' XML API. I would appreciate suggestions for a suitable asymmetric encryption model.
If you want encryption, why not just use SSL to encrypt the connection rather than encrypting the response data? If 128-bit SSL isn't sufficient, then you'll either need to integrate some existing PKI infrastructure using an external, trusted authority, or develop a key distribution/sharing infrastructure yourself and issue your public key and a suitable private key/identifier to your API consumers. Choose one of the cryptography providers in System.Security.Cryptography that supports public/private key exchange.
HTTPS uses asymmetric key encryption in its handshake. It is a well-known protocol and easy to implement.
It protects against third-party intrusion into your communication.
All you need to implement on top of it is authentication - to make sure your user is known to you.
A common approach is to provide users with a key that needs to be sent with every request.
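A minimal sketch of a consumer sending such a key with every request over HTTPS (the header name and URL are just placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiKeyClientSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("API_KEY"); // issued to the consumer out of band

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders")) // TLS protects the key in transit
                .header("X-Api-Key", apiKey)                       // sent with every request
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}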
The most common approach is to implement the OAuth protocol. This is what the OpenSocial providers use to check authorization, with 2-legged and/or 3-legged OAuth.
Just do a Google search and you will find a lot of implementations.