I have developed a 'REST-like' XML API that I wish to expose for consumption by third-party web applications. I'm now looking to implement a security model for the exchange of data between a third-party application and the 'REST-like' XML API. I would appreciate suggestions for a suitable asymmetric encryption model.
If you want encryption why not just use SSL to encrypt the connection rather than encrypting the response data? If 128-bit SSL isn't sufficient, then you'll either need to integrate some existing PKI infrastructure using an external, trusted authority or develop a key distribution/sharing infrastructure yourself and issue your public key and a suitable private key/identifier to your API consumers. Choose one of the cryptography providers in System.Security.Cryptography that supports public/private key exchange.
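For illustration only, here is a minimal sketch of that public/private key sign-and-verify pattern. It uses Node's crypto module in TypeScript rather than System.Security.Cryptography, and the payload and key handling are simplified assumptions:

```ts
import { generateKeyPairSync, sign, verify } from "crypto";

// Illustrative only: in practice the consumer would hold the private key you
// issued and your API would hold the matching public key (or vice versa).
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

const payload = Buffer.from("<order id='42'/>");              // the XML body being exchanged
const signature = sign("sha256", payload, privateKey);        // done by the sender
const isValid = verify("sha256", payload, publicKey, signature); // done by the receiver
```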
HTTPS uses asymmetric key encryption. It is a well-known protocol and easy to implement.
It protects your communication against third-party intrusion.
All you need to implement on top of it is authentication - to make sure the user is known to you.
A common approach is to provide each user with a key that must be sent with every request.
The most common option is to implement the OAuth protocol. This is what the OpenSocial providers use, checking authorization with 2-legged and/or 3-legged OAuth.
A quick Google search will turn up plenty of implementations.
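As a rough sketch of the per-request key idea (the header name and URL are made up for the example):

```ts
// Each consumer sends the key it was issued with every HTTPS request;
// the server looks the key up and rejects unknown or revoked keys.
const res = await fetch("https://api.example.com/v1/orders", {
  headers: { "X-Api-Key": "the-key-issued-to-this-consumer" },
});
```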
I have familiarity with OAuth 2.0 / OpenID Connect but am new to WebAuthn. I am trying to understand how a scenario using those OAuth flows and connections would work using WebAuthn. I thought that by mapping concepts from OAuth to WebAuthn I would be better able to understand the concepts.
I think that, similar to how in the OAuth implicit grant flow a client may receive an id_token and access_token, in WebAuthn a client may receive a credential object from the Authenticator using navigator.credentials.create().
The part I do not understand is how this credential can reliably be consumed by downstream services. In OAuth a client or server may send "access_tokens", and the receiving servers may request the public keys from the authorities to validate that the token hasn't been tampered with, is not expired, has the correct audience, etc. This relies on the authorities having a publicly available /.well-known endpoint with the public keys.
However, because the keys are specific to each authenticator rather than a single shared public key, I don't think it is possible to make them discoverable in the same way.
This is where I don't understand how credentials could be consumed by services. I thought the client would have to send the public key along WITH the authenticator and client data, but that is 3 pieces of information and feels awkward. Sending a single access_token actually seems cleaner.
I created a graphic to explain visually.
(It may have technical inaccuracies, but hopefully the larger point is made clearer)
https://excalidraw.com/#json=fIacaTAOUQ9GVgsrJMOPr,yYDVJsmuXos0GfX_Y4fLRQ
Here are the 3 questions embedded in the image:
What data does the client need to send to the server in order for the server to use the data? (Similar to sending access_token)
How would the server get the public key to decrypt the data?
Which piece of data is appropriate / standardized to use as the stable user id?
As someone else mentioned - while there are a lot of commonalities between how WebAuthn and something like OpenID Connect work, they aren't really useful for understanding how WebAuthn works - they are better to explore after you understand WebAuthn.
A WebAuthn relying party does not have its own cryptographic keys or secrets or persistent configuration - it just has a relying party identifier, which is typically the web origin. The client (browser and/or platform) mediates between the relying party and authenticators, mostly protecting user privacy, obtaining consent, and providing phishing protection.
The relying party will create a new credential (i.e. a key pair) with the authenticator of the user's choosing, be it a cell phone or a physical security key fob in their pocket. The response contains the public key of the newly created key pair on the authenticator. That public key is saved against the user account by the RP.
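A minimal browser-side sketch of that registration step (the challenge would normally come from the RP's server, and the identifiers here are placeholders):

```ts
// Registration: ask the authenticator to create a new key pair for this RP.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example RP", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-123"), // opaque, stable user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
  },
}) as PublicKeyCredential;

// credential.response carries the attestation object containing the new
// public key, which the RP stores against the user account.
```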
In a future authentication, the authentication request results in a response signed by the credential's private key, which the RP verifies with that stored public key. The private portion is never meant to leave the authenticator - at least not without cryptographic protections.
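And a matching sketch of a later authentication, assuming the RP already has the credential ID on file:

```ts
// Authentication: the server sends a fresh challenge and the credential IDs
// it has on file; the authenticator signs with the private key on the device.
const storedCredentialId = new Uint8Array([/* bytes of the stored credential ID */]);

const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    allowCredentials: [{ type: "public-key", id: storedCredentialId }],
  },
}) as PublicKeyCredential;

// assertion.response contains authenticatorData, clientDataJSON, and a
// signature that the RP verifies with the stored public key.
```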
This does pair well with something like OpenID Connect. The registration is normally per web domain, which means that there could be a lot of manual registrations (and potentially management, recovery, and other IAM-type activities) necessary. With OpenID Connect, you can centralize the authentication of several applications at a single point, and with it centralize all WebAuthn credential management.
I thought that by mapping concepts from OAuth to WebAuthn I would be better able to understand the concepts.
This seems to be working against you - you're trying to pattern match WebAuthn onto a solution for a different kind of problem (access delegation). Overloaded terminology around "authentication" doesn't help, but the WebAuthn specification does make things a bit more clear when it describes what it means with "Relying Party":
Note: While the term Relying Party is also often used in other contexts (e.g., X.509 and OAuth), an entity acting as a Relying Party in one context is not necessarily a Relying Party in other contexts. In this specification, the term WebAuthn Relying Party is often shortened to be just Relying Party, and explicitly refers to a Relying Party in the WebAuthn context. Note that in any concrete instantiation a WebAuthn context may be embedded in a broader overall context, e.g., one based on OAuth.
Concretely: in your OAuth 2.0 diagram, WebAuthn is used during step 2 "User enters credentials"; the rest of it doesn't change. Passing the WebAuthn credentials to other servers is not how it's meant to be used - that's what OAuth is for.
To clarify one other question, "how would the server get the public key to decrypt the data?" - understand that WebAuthn doesn't encrypt anything. Some data (JS ArrayBuffers) from the authenticator response is typically base64-encoded, but otherwise the response is often passed to the server unaltered as JSON. The server uses the public key to verify the signature; the key is either seen for the first time during registration or retrieved from the database (belonging to a user account) during authentication.
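To make the "verify, not decrypt" point concrete, here is a rough server-side sketch (challenge/origin checks and the COSE-to-PEM key conversion are omitted; publicKeyPem is assumed to be the stored credential key):

```ts
import { createHash, createVerify } from "crypto";

// The authenticator signs authenticatorData || SHA-256(clientDataJSON);
// the server re-builds that byte string and checks the signature.
function verifyAssertion(
  publicKeyPem: string,
  authenticatorData: Buffer,
  clientDataJSON: Buffer,
  signature: Buffer,
): boolean {
  const clientDataHash = createHash("sha256").update(clientDataJSON).digest();
  const signedData = Buffer.concat([authenticatorData, clientDataHash]);
  return createVerify("sha256").update(signedData).verify(publicKeyPem, signature);
}
```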
EDIT: Added picture for a clearer understanding of how webauthn works, since it has nothing to do with OAuth2 / OpenID.
(source: https://passwordless.id/protocols/webauthn/1_introduction)
Interestingly enough, what I aim to build with Passwordless.ID is a free public identity provider using WebAuthn and compatible with OAuth2/OpenID.
Here is the demo of such a "Sign in" button working with OAuth2/OpenID:
https://passwordless-id.github.io/demo/
Please note that this is an early preview, still in development and somewhat lacking in documentation. Nevertheless, it might be useful as a working example.
That said, I sense some confusion in the question. So please let me emphasize that OAuth2 and WebAuthN are two completely distinct and unrelated protocols.
WebAuthN is a protocol to authenticate a user device. It is "Hey user, please sign me this challenge with your device to prove it's you"
OAuth2 is a protocol to authorize access to [part of] an API. It is "Hey API, please grant me permission to do this and that on behalf of the user".
OpenID builds on OAuth2 to basically say "Hey API, please allow me to read the user's standardized profile!".
WebauthN is not a replacement for OAuth2, they are 100% independent things. OAuth2 is to authorize (grant permissions) and is unrelated to how the user actually authenticates on the given system. It could be with username/password, it could be with SMS OTP ...and it could be with WebauthN.
There is a lot of good information in the other answers and comments, which I encourage you to read. However, I thought it would be better to consolidate it in a single post that directly responds to the question from the OP.
How does WebAuthn allow dependent web APIs to access the public key for decrypting the credential without having to send the key?
There were problems with the question:
I used the word "decrypt", but this was wrong. The data sent is signed, not encrypted, so the key is not used to decrypt but to verify the signature.
I was asking how a part of the OAuth process could be done using WebAuthn; however, this was a misunderstanding. WebAuthn is not intended to solve that part of the process, so the question is less relevant and doesn't make sense to answer directly.
Others have posted that WebAuthn can be used WITH OAuth, so downstream systems can still receive JWTs and verify signatures as normal. How these two protocols are paired is out of scope.
What data does the client need to send to the server in order for the server to use the data?
@Rafe answered: "table with user_id, credential_id, public_key and signature_counter"
See: https://www.w3.org/TR/webauthn-2/#authenticatormakecredential
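As a sketch, the stored record could look something like this (field names are illustrative, not mandated by the spec):

```ts
interface StoredCredential {
  userId: string;           // the RP's stable user handle
  credentialId: string;     // base64url-encoded credential ID from the authenticator
  publicKey: string;        // public key returned at registration (COSE, or converted to PEM)
  signatureCounter: number; // helps detect cloned authenticators
}
```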
How would the server get the public key to decrypt the data?
Again, decrypt is the wrong word. The server is not decrypting, only verifying the signature.
Also, the word server has multiple meanings depending on context, and it wasn't clarified in the question.
WebAuthn: the server acting as the Relying Party in the WebAuthn context verifies the signature during authentication requests. However, "server" in the question was intended to mean the downstream APIs, which would not be part of WebAuthn.
OAuth: As explained by others, these API servers could still use OAuth and request the public key from the provider for verification, and the token contains the necessary IDs and scopes/permissions. (This likely means existing JWT middleware can be reused.)
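For example, a downstream API might validate such a token against the provider's published keys roughly like this (the jose library, URLs, and claim values here are illustrative assumptions):

```ts
import { createRemoteJWKSet, jwtVerify } from "jose";

// Keys are fetched from the provider's JWKS endpoint and cached by jose.
const jwks = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json"),
);

async function validateAccessToken(token: string) {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://auth.example.com",
    audience: "my-api",
  });
  return payload.sub; // stable user id carried in the "sub" claim
}
```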
Which piece of data is appropriate / standardized to use as the stable user id?
For WebAuthn the user object requires { id, name, displayName }. However, it intentionally does not try to standardize how the ID is propagated to downstream systems. That is up to the developer.
See: https://www.w3.org/TR/webauthn-2/#dictdef-publickeycredentialuserentity
For OAuth / OpenID Connect
sub: REQUIRED. Subject Identifier. A locally unique and never reassigned identifier within the Issuer for the End-User
See: https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse
Hopefully I didn't make too many technical inaccuracies. 😬
I'm developing my own OAuth2 + OpenID Connect implementation. I am a bit confused about how to handle OAuth flows for native (specifically, mobile) clients. So far, I am seeing that I need to use the Authorization Code flow. However, based on my research, there are some details that seem to contradict each other (at least based on my current understanding).
First, standard practice seems to say that mobile apps are not inherently private and, as such, standard flows that make use of a back channel should not be used. As a workaround, the PKCE extension can be used (while utilizing the built-in device browser, as opposed to a web view, so the tokens and sensitive information are less likely to be leaked).
However, the protocol's Dynamic Client Registration specification also mentions that mobile apps should use this method of client registration to get a valid client ID and client secret... But why would we do this when an earlier section established that mobile applications are indeed public clients and can't be trusted with confidential information like a client secret (which we are getting by using this DCR mechanism)...
So, what am I not understanding? These two things seem to contradict one another. One claims mobile apps are public and shouldn't be trusted with a secret. Yet, in the recommended DCR mechanism, we assign them the secret we just established they can't be trusted with.
Thanks.
A bit late, but I hope it helps. Part of the OAuth 2.0 protocol is two components: the client_id and the client secret. The client and server must agree on those two values outside the protocol, i.e. before the protocol starts. Usually the process is as follows: the client communicates with the Authorization Server using an out-of-band communication channel to get these values and be registered at the server. There are two ways this client registration can happen: statically and dynamically. Static means the client_id and secret do not change, i.e. the client gets them once when registering with the server. Dynamic client registration refers to the process of registering a client_id every time the client wants to use the protocol, i.e. a client secret will be generated for it each time (also via out-of-band communication).
Now, Why use dynamic registration?
Dynamic client registration is better at managing clients across replicated authorization servers. The original OAuth use cases revolved around single-location APIs, such as those from companies providing web services. These APIs require specialized clients to talk to them, and those clients will need to talk to only a single API provider. In these cases, it doesn't seem unreasonable to expect client developers to put in the effort to register their client with the API, because there's only one provider.
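For reference, a dynamic registration call (RFC 7591) is roughly a POST of client metadata. The endpoint and values below are placeholders, with token_endpoint_auth_method set to "none" for a public client:

```ts
// Sketch of a dynamic client registration request for a mobile (public) client.
const response = await fetch("https://auth.example.com/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    client_name: "Example Mobile App",
    redirect_uris: ["com.example.app:/callback"],
    grant_types: ["authorization_code"],
    token_endpoint_auth_method: "none", // public client: no client secret in use
  }),
});
const { client_id } = await response.json();
```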
Does Dynamic Client registration offer any security advantages?
No, both are vulnerable if used with a JavaScript or native mobile client (a JavaScript client can be inspected, and mobile apps can be decompiled). Hence, both of them require PKCE as an extra layer of security.
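For completeness, here is a minimal sketch of the PKCE values a public client generates (per RFC 7636); only the challenge travels with the authorization request, and the verifier is revealed at the token request:

```ts
import { randomBytes, createHash } from "crypto";

// base64url encoding without padding, as PKCE requires.
function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

const codeVerifier = base64url(randomBytes(32));        // stays on the device
const codeChallenge = base64url(
  createHash("sha256").update(codeVerifier).digest(),   // S256 challenge
);
// Authorization request: ...&code_challenge=<codeChallenge>&code_challenge_method=S256
// Token request:         ...&code_verifier=<codeVerifier>
```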
When I use local validation instead of introspection in my project with OpenIddict,
usually the auth server and the resource server share one symmetric encryption key. However, when I use more than one resource server, I would like to use more than one symmetric encryption key (each resource server should have its own encryption key). Is there any way to achieve this?
Thanks for your help,
Nicolai
This is not currently supported.
It may be supported in a future version, but it has important design considerations: e.g. the JWT compact format doesn't support embedding multiple content encryption keys, which prevents issuing encrypted JWTs to different recipients/resource servers.
If you don't want to share the secret with the resource servers (e.g. if you need to support 3rd party resource servers) you can use introspection.
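An introspection call (RFC 7662) from a resource server looks roughly like this sketch; the endpoint path and credentials are placeholders, not OpenIddict-specific values:

```ts
// The resource server asks the authorization server whether a token is active.
async function introspect(token: string) {
  const res = await fetch("https://auth.example.com/connect/introspect", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization:
        "Basic " + Buffer.from("resource-server:its-secret").toString("base64"),
    },
    body: new URLSearchParams({ token }),
  });
  return res.json(); // e.g. { active: true, sub: "...", scope: "...", exp: ... }
}
```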
But introspection has another drawback: I found no way to cache the introspection response in the OpenIddict validator. I don't think it is a good idea to have a round trip to the auth server on every API request; this could slow down high-traffic API servers. Not sure if there is a simple solution for caching the response...
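If a short staleness window is acceptable, one could cache results around the introspection call. A naive sketch (reusing the introspect helper above; this is not an OpenIddict feature):

```ts
// Very simple in-memory cache keyed by token, with a short TTL.
const cache = new Map<string, { result: unknown; expires: number }>();

async function introspectCached(token: string, ttlMs = 60_000) {
  const hit = cache.get(token);
  if (hit && hit.expires > Date.now()) return hit.result;
  const result = await introspect(token);
  cache.set(token, { result, expires: Date.now() + ttlMs });
  return result;
}
```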
I have a Ruby on Rails application with a database that stores sensitive user information (hashed with Devise). I now need to pass some of this sensitive information to another internal service on another server that needs it to make calls to third party APIs, so it needs a way to request that information from the RoR app.
What's the best approach to something like this? My first intuition was to grant an internal API key that would grant access to all sensitive information in the DB (via a private endpoint), the same way developer keys give access to a subset of API endpoints. Is this secure enough as long as I hash the API key? What's the best approach to passing sensitive information around through internal services?
Private APIs
My first intuition was to grant an internal API key that would grant access to all sensitive information in the DB (via a private endpoint), the same way developer keys give access to a subset of API endpoints
Well, private endpoints or private APIs don't really exist if all that protects them is an API key. From a web app you only need to look at the HTML source code to find the API keys. On mobile devices you can see how easy it is to reverse engineer API keys in this series of articles about Mobile API Security Techniques. While the articles are in the context of mobile devices, some of the techniques used are also valid for other types of APIs. I hope you can see now how someone could grab the API key and abuse the API you are trying to secure.
Now even if you don't expose the API key in a mobile app or web app, the API is still discoverable, especially if the endpoint is served by the same API used for the other public endpoints. This is made even easier when you declare in robots.txt that bots should not access some of the endpoints, because that is the first place hackers look when trying to enumerate attack vectors into your APIs.
Possible Solutions
Private API Solution
What's the best approach to something like this? My first intuition was to grant an internal API key that would grant access to all sensitive information in the DB (via a private endpoint)
In order to have a private API, the server hosting it needs to be protected by a firewall and locked down to the other internal server consuming it, with certificate pinning and maybe also by IP address. To be able to properly secure and lock down the internal server hosting the supposedly private API, it MUST not serve any public requests.
Certificate Pinning:
Pinning effectively removes the "conference of trust". An application which pins a certificate or public key no longer needs to depend on others - such as DNS or CAs - when making security decisions relating to a peer's identity. For those familiar with SSH, you should realize that public key pinning is nearly identical to SSH's StrictHostKeyChecking option. SSH had it right the entire time, and the rest of the world is beginning to realize the virtues of directly identifying a host or service by its public key.
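As an illustration of the pinning idea between internal services (shown in Node/TypeScript rather than Rails; the URL and pinned fingerprint are placeholders):

```ts
import https from "https";
import tls from "tls";
import { createHash } from "crypto";

const EXPECTED_SPKI_SHA256 = "base64-encoded-sha256-of-the-pinned-public-key";

const req = https.request("https://internal-api.example.local/secrets", {
  checkServerIdentity: (host, cert) => {
    // Keep the standard hostname check, then add the pin check on top.
    const hostError = tls.checkServerIdentity(host, cert);
    if (hostError) return hostError;

    const spkiHash = createHash("sha256").update(cert.pubkey).digest("base64");
    if (spkiHash !== EXPECTED_SPKI_SHA256) {
      return new Error("Certificate pinning failure: unexpected public key");
    }
  },
});
req.end();
```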
Database Direct Access Solution
What's the best approach to passing sensitive information around through internal services?
Personally I would prefer to access the database directly from the other server and have the database software itself configured to only accept requests from specific internal servers, for specific users, with the least privileges possible to perform the action they need. Additionally, you would employ the firewall lock-down and use certificate pinning between the internal servers.
Conclusion
No matter what solution you choose, place your database with the sensitive data on a server that only hosts that database and is very well locked down within your internal network.
Anyone needing to access that sensitive data MUST have only read privileges for that specific database table.
I am in the process of rewriting some very outdated .NET 2.0 SOAP web services for my company, so I am rewriting them as RESTful MVC3 services. This would simplify the usage of our services for our client base (over 500 clients using our current SOAP services), who are on multiple platforms and languages.
I am looking for a BETTER method of authorization for the RESTful services, than what the previous developer used for our .NET 2.0 SOAP web services (he basically just had the client pass in a GUID as a parameter and matched it in code behind).
I have looked into OAuth and I want to use it, HOWEVER, I have been told by my superiors that this method is TOO complicated for the "level" of clients that connect to our services, and they want me to find another, simpler way for them to connect but still have authorization. Most of our clients have BASIC to no knowledge of programming (either we helped them get their connection set up OR they hired some kid to do it for them). This is another reason the superiors want a different method: we can't have all 500+ clients (plus 5-10 new clients a day) asking for help on how to implement OAuth.
So, is there another way to secure the MVC3 services other than passing a preset GUID?
I have looked into using Windows Authentication on the services site, but is this really logical for 500+ clients to use?
Is there an easy and secure method of authorizing multiple users on multiple platforms to use the MVC3 RESTful services that an end client can implement very easily?
Thanks.
If you don't want anything too complicated, have a look at Basic HTTP Authentication. If you use it over SSL then it should be safe enough and also easy enough for your clients to implement. The Twitter API actually used this up until a few months ago, when it switched to OAuth.
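For the clients, that boils down to sending one header over HTTPS; a sketch (placeholder URL and credentials, shown in TypeScript rather than .NET):

```ts
// Basic auth is just "username:password" base64-encoded in a header.
const credentials = Buffer.from("client-username:client-password").toString("base64");

const res = await fetch("https://api.example.com/orders", {
  headers: { Authorization: `Basic ${credentials}` },
});
```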
You want to distinguish between authentication and authorization. What you are looking for is authentication, and indeed, as Caps suggests, the easiest way may be to use HTTP Basic authentication along with SSL to make sure the password is not compromised.
You could look into other means of authentication, e.g. DIGEST, or something more advanced using ADFS or SAML (ADFS could be compelling since you're in .NET). Have a look at OpenID Connect too - it is strongly backed by Google and has great library support.
Once you are done with that, you may want to consider authorization - if you need it that is - to control what a given client can do on a given resource / item / record. For that you can use claims-based authorization as provided in the .NET framework or if you need finer-grained authorization, look into XACML.
OAuth wouldn't really solve your issue since OAuth is about delegation of authorization i.e. I let Twitter write to my Facebook account on my behalf.
HTH