Wrapped private key with libfido2?

I am currently working my way into libfido2 and trying to figure out how to use wrapped private keys with it.
Yubico's FAQ says that the YubiKey 5 supports an unlimited number of key pairs for FIDO U2F, but promises space for only 25 resident keys for FIDO2.
Does "FIDO2" mean that resident keys are used and that FIDO2 cannot be used with (external) wrapped private keys?
If this is the case, does libfido2 offer any possibility to work with FIDO U2F and wrapped keys instead?
If so, how does libfido2 need to be configured to do this? How can I provide the library with the appropriate protected private key? At least in "fido2-assert" I don't see a way to do this when I want to create an assertion on the client.
(The tool accepts four specific parameters (description here), and the only one that I understand could carry the private key is the "credential id". But the name makes me doubt whether my request is possible with this parameter.)
I am grateful for any answer!
EDIT: In the meantime I found this link to some Solo Keys developer pages describing how it works on Solo Keys. It seems the private key is calculated on the fly; in this case the credential id would act as a seed for the calculation.

FIDO2 encompasses both WebAuthn (browser API) and CTAP2 (USB/Bluetooth/NFC APIs for externally connected authenticators). CTAP2 supports both client-side and server-side credentials, and specifies how backwards compatibility with U2F/CTAP1 authenticators works. Since you're working with libfido2, the CTAP documentation might be useful to understand what it does under the hood.
Client-side discoverable credentials (previously known as resident keys) are used for usernameless flows where no Credential IDs are specified during authentication. These keys are generated randomly and require storage space on the authenticator. Server-side credentials (non-resident keys) are represented as Credential IDs. Which type of credential is created is requested during the registration process, but both FIDO2 standards default to server-side credentials if none is specified. U2F only supported server-side credentials.
For external authenticators with limited storage space, server-side credentials are typically wrapped private keys encrypted by a single 'master' key stored in the authenticator. Since the entire state is stored outside the authenticator, this allows practically infinite keys to be generated even with limited storage space. But it does mean that the Credential ID generated during registration must be stored on the server and offered back to the authenticator later in order to generate an assertion. In WebAuthn these Credential ID(s) are typically presented after the user is identified (e.g. via username and password) in the allowCredentials argument; CTAP2 calls this the allowList.
With the terminology now (hopefully) clarified, yes libfido2 supports both types of credentials according to the assert example:
Asks <device> for a FIDO2 assertion corresponding to [cred_id],
which may be omitted for resident keys. The obtained assertion
is verified using <pubkey>.
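To make this concrete, here is a minimal sketch of how a stored Credential ID can be offered back to libfido2 when requesting an assertion. The device path, RP ID, client data hash, and credential bytes are all placeholder assumptions, not working values:

```c
#include <fido.h>
#include <stdio.h>

int main(void) {
    /* Placeholders: in practice the client data hash is the SHA-256
     * of the client data, and cred_id is whatever the server stored
     * at registration time. */
    const unsigned char cdh[32] = {0};
    const unsigned char cred_id[] = {0xde, 0xad, 0xbe, 0xef};
    int r;

    fido_init(0);

    fido_dev_t *dev = fido_dev_new();
    if (dev == NULL || fido_dev_open(dev, "/dev/hidraw0") != FIDO_OK) {
        fprintf(stderr, "could not open device\n");
        return 1;
    }

    fido_assert_t *assert = fido_assert_new();
    fido_assert_set_rp(assert, "example.com");
    fido_assert_set_clientdata_hash(assert, cdh, sizeof(cdh));

    /* The wrapped private key travels inside the credential id;
     * offering it in the allow list is what lets the authenticator
     * unwrap and use it. Omit this call for discoverable keys. */
    fido_assert_allow_cred(assert, cred_id, sizeof(cred_id));

    r = fido_dev_get_assert(dev, assert, NULL); /* NULL: no PIN */
    if (r != FIDO_OK)
        fprintf(stderr, "fido_dev_get_assert: %s\n", fido_strerr(r));

    fido_assert_free(&assert);
    fido_dev_close(dev);
    fido_dev_free(&dev);
    return r == FIDO_OK ? 0 : 1;
}
```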

Related

Returning client secret in plain text from GET registration request

I was going through the OpenID Connect Dynamic Client Registration specification. Section 4.3 lists the response for a client read request in which the client secret is displayed in plain text.
While obviously the secret needs to be returned in plain text when registering the client, having to return it in plain text on later read requests implies that the secret value itself needs to be stored (likely encrypted) instead of a salted hash of the client secret.
Since client id and secret are basically the same as username/password, I'm wondering why the spec requires returning the secret in plain text in this response, which basically goes against best practices in password storage.
Passwords are a special kind of secret which are often memorized by users. Since users often re-use passwords, it is important not only to hash a password (to protect against reversing it) but also to salt it (to prevent rainbow tables from being used). Secrets such as the client_secret are usually generated from a random source and used only once. Someone who gains access to the database can therefore steal the secret and impersonate the client, but it won't have value elsewhere.
The client secret needs to be available when a client is configured. If you are for example provisioning multiple instances of a service, you might want to dynamically obtain the client configuration including the secret when you are deploying the application.
To recap: there is a different risk model. The secret is assumed to be random and used only once, whereas passwords are often reused. The secret is supposed to contain enough entropy to protect against a brute-force attack; passwords are often shorter or drawn from a dictionary.
There is also a use case for making the secret available many times without needing to change already provisioned clients.
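As a rough illustration of the entropy point, a client_secret is typically minted from a CSPRNG rather than chosen by a human. A sketch (using OpenSSL purely as an example; not taken from the spec):

```c
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

/* Mint a client_secret with 256 bits of entropy. Because the value is
 * random and single-purpose, storing it encrypted at rest (rather than
 * hashed) is the usual trade-off when the registration API has to
 * return it on later read requests. */
int main(void) {
    unsigned char raw[32];
    unsigned char b64[45]; /* 4 * ceil(32 / 3) bytes plus NUL */

    if (RAND_bytes(raw, sizeof(raw)) != 1) {
        fprintf(stderr, "RAND_bytes failed\n");
        return 1;
    }
    EVP_EncodeBlock(b64, raw, sizeof(raw));
    printf("client_secret: %s\n", b64);
    return 0;
}
```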

What is the purpose of signingKey and encryptionKey settings

This question is related to this one, but I decided to separate it.
I have signingKey and encryptionKey settings in my Grails Spring SAML config. What is their purpose? I read the spring-security-saml doc and the Grails SAML plugin doc, but it's still a little bit unclear. Could anyone explain their practical usage?
You can think of them as:
signingKey = outbound requests
encryptionKey = inbound data
i.e. the key you will use to sign outbound requests (e.g. AuthnRequest) vs. the key that your integration partners will use to encrypt assertions that are destined for you.
In the SAML metadata standard they are allowed to be the same value or you can specify different keys for each:
2.4.1.1 The <KeyDescriptor> element provides information about the cryptographic key(s) that an entity uses to sign data or receive encrypted keys
use [Optional] Optional attribute specifying the purpose of the key being described. Values are drawn from the KeyTypes enumeration, and consist of the values encryption and signing
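In metadata this shows up as two <KeyDescriptor> elements. A hypothetical SP metadata excerpt (certificate contents elided) might look like:

```xml
<!-- Hypothetical excerpt: the signing certificate corresponds to your
     signingKey; partners use the encryption certificate to encrypt
     assertions destined for you. Both descriptors may carry the same
     certificate. -->
<md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <md:KeyDescriptor use="signing">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data><ds:X509Certificate><!-- signing cert --></ds:X509Certificate></ds:X509Data>
    </ds:KeyInfo>
  </md:KeyDescriptor>
  <md:KeyDescriptor use="encryption">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data><ds:X509Certificate><!-- encryption cert --></ds:X509Certificate></ds:X509Data>
    </ds:KeyInfo>
  </md:KeyDescriptor>
</md:SPSSODescriptor>
```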

How to identify provider for oauth2 redirect_uri callback?

I'm trying to understand how to properly identify which provider a returning authorization request was initiated by. I see four approaches:
Use provider specific redirect_uri callback URIs. /oauth2/<provider-name>/callback etc.
Encode provider id/name in state parameter somehow
Store a pending provider id/name in the web session
Try to verify response with all used providers
I've read parts of the OAuth2 spec but I can't find anything discussing it. Looking at other client implementations it seems as provider specific URIs is the most common solution. Am I missing something?
Clients are often not multi-tenant and are tightly integrated with a single Authorization Server, so there's no need to store a provider identifier because there's only a single fixed one. That may be the reason why there's no obvious solution.
Multi-provider clients like yours should store the provider identifier as part of the state. This is because the state should be protected, and the provider-specific redirect_uri is not: one could play an access token obtained from provider A against the callback for provider B and thus defeat the purpose of a provider-specific callback.
state can be protected either by reference (pointing to server-side state or an encrypted cookie) or by value (a self-contained encrypted structured value in the state parameter), and thus can be a safe mechanism to store the provider identifier.
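As an illustration of the state-by-value idea, here is a sketch that binds the provider identifier and a nonce into an integrity-protected state value. The layout and names are my own assumptions (nothing here is mandated by the OAuth2 spec), and OpenSSL's HMAC is used for the protection; the callback handler would recompute the HMAC before trusting the provider field:

```c
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical layout: state = "<provider>.<nonce hex>.<mac hex>". */
static void to_hex(const unsigned char *in, size_t n, char *out) {
    for (size_t i = 0; i < n; i++)
        sprintf(out + 2 * i, "%02x", in[i]);
    out[2 * n] = '\0';
}

int main(void) {
    const unsigned char key[] = "server-side-secret"; /* placeholder */
    const char *provider = "providerA";

    unsigned char nonce[16];
    if (RAND_bytes(nonce, sizeof(nonce)) != 1)
        return 1;

    char nonce_hex[33], payload[128];
    to_hex(nonce, sizeof(nonce), nonce_hex);
    snprintf(payload, sizeof(payload), "%s.%s", provider, nonce_hex);

    unsigned char mac[32]; unsigned int mac_len;
    HMAC(EVP_sha256(), key, sizeof(key) - 1,
         (const unsigned char *)payload, strlen(payload), mac, &mac_len);

    char mac_hex[65];
    to_hex(mac, mac_len, mac_hex);
    printf("state=%s.%s\n", payload, mac_hex);
    return 0;
}
```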

Building RESTful API with MVC for an iPhone app - How to secure it?

I'm going to be writing the services for an iPhone app being built by a third party vendor.
I'll be using ASP.NET MVC to accept posts and also return JSON formatted data.
My question is, how do you secure it?
Just using an API key perhaps? Would that be enough to ensure that only data from the iPhone apps is allowed to hit the specified services?
I'm sort of struggling with the same concepts myself. I think the first thing is to do HTTPS only, so that it's starting out more secure than not.
Next, it depends on how you're going to do authentication. If all you need is an API key (to track which entity is accessing the data), that should be fine. If you also want to track user information, you'll need some way to ensure that specific API keys can only access specific types of records, based on a join somewhere.
I'm looking at doing forms auth on my app, and using an auth cookie. Fortunately ASP.NET on IIS can do a lot of that heavy lifting for you.
Example time: (I'm sure I'll need to add more to this, but while I'm at work it gives something to gnaw on)
Forms auth:
Send a pair (or more) of fields in a form body. This is POST through and through. No amount of non-reversible hashing can make this secure by itself. To secure it you must either always be behind a firewall from all intruding eyes (yeah right), or you must be over HTTPS. Simple enough.
Basic auth:
Send a base64-encoded string of "username:password" over the wire as part of the header. Note that base64 is to security as a screen door is to a submarine. You do not want it going over an unsecured channel. HTTPS is required.
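To see why HTTPS is required here, note that the header is trivially decodable. A sketch of building it (placeholder credentials; OpenSSL's encoder is used just for the base64 step):

```c
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

/* Build a Basic auth header. Base64 is an encoding, not encryption:
 * anyone who can read the wire can decode it, hence HTTPS. */
int main(void) {
    const char *user = "alice", *pass = "s3cret"; /* placeholders */
    char pair[256];
    snprintf(pair, sizeof(pair), "%s:%s", user, pass);

    unsigned char b64[((sizeof(pair) + 2) / 3) * 4 + 1];
    EVP_EncodeBlock(b64, (const unsigned char *)pair, (int)strlen(pair));
    printf("Authorization: Basic %s\n", b64);
    return 0;
}
```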
API key:
This says that an app is supposedly XYZ. This should be private. This has nothing to do with users. Preferably, at the time the API key is requested, a public key is shared with the API grantor, allowing the API key to be encrypted in transit, thus ensuring that it stays private but still proves the source is who they claim to be. This can get complicated, but because there is an application process and because the key won't change for a given vendor, this can be done over HTTP. This does not mean per-user; it means per-developing-company-that-uses-your-api.
So for an app accessing your data, when you want to make sure it's an authorized app, you can negotiate using private keys for signing at runtime. This ensures that you're talking to the app you want to talk to. But remember, this does not mean that the user is who they say they are.
HOWEVER.
What you can do is use the API key and the associated public/private keys to encrypt the username and password before sending them over the wire using HTTP. This is very similar to how HTTPS works, but you're only encrypting the sensitive part of the message.
But to let a user track their information, you're going to have to assign a per-user token at login. So let them log in, send the credentials over the wire using the appropriate scheme, then return some unique identifier representing the user back to the app, and have the app send that identifier every time it performs user-specific tasks (generally all the time).
The way you send it over the wire is to tell the client to set a cookie; every httpClient implementation I've seen knows that when it makes a request to the server, it sends back all cookies the server has set that are still valid. It just happens for you. So on the server you set a cookie on your response containing whatever information you need for communicating with the client.
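A sketch of that token issuance step (the cookie name and attributes are illustrative; ASP.NET forms auth does the equivalent of this for you):

```c
#include <openssl/rand.h>
#include <stdio.h>

/* Issue an opaque per-user token after a successful login and hand it
 * back as a cookie. The server would remember token -> user in its
 * session store. */
int main(void) {
    unsigned char token[16];
    char token_hex[33];

    if (RAND_bytes(token, sizeof(token)) != 1)
        return 1;
    for (size_t i = 0; i < sizeof(token); i++)
        sprintf(token_hex + 2 * i, "%02x", token[i]);
    token_hex[32] = '\0';

    printf("Set-Cookie: auth=%s; Secure; HttpOnly\r\n", token_hex);
    return 0;
}
```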
HTH, ask me more questions so we can refine this further.
One option would be to use forms authentication and use the authentication cookie. Also, make sure all the service calls are being sent over SSL.

single sign on between Vbulletin and rails applications

We have a lot of users on a vBulletin forum. Now I want to write a few more apps in Rails for the same user base. Until now, all authentication and session management has been handled by vBulletin. What is the best way to provide SSO for my users, both on vBulletin and in the Rails apps I am writing?
I am working on a single sign-on process with vBulletin and a custom-made application. I can log in at vB using cookies and access everything, but when I try to send a "Private Message" it says:
"
You have turned off private messages. You may not send private messages until you turn them on by editing your options.
"
Are all the permissions set in the "datasource" table?
Thanks
Ideally your two sites are subdomains of a common domain (e.g. forum.example.com and rails.example.com), or share the same domain (www.example.com). One of the sites would be the primary authenticator and set a cookie (for .example.com in the case of the common parent domain [notice the . before example.com], or www.example.com in the case of the shared domain, so that both applications can access it), where the cookie contains:
the user ID
a salt (random value calculated at login time), and
a SHA-2 signature computed over the triplet (user ID + salt + a shared secret key), where the shared secret key is a secret string known by both sites.
Each site would be able to retrieve the user ID and salt from the cookie, then use the shared secret key (known only by the two applications) to calculate a SHA-2 signature that must match the SHA-2 signature stored in the cookie.
If the SHA-2 signatures match then you can assume that the user is authenticated, otherwise force the user to log in again.
The cookie must be destroyed when logging off.
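Here is a sketch of the check each site would perform. All values are placeholders; it uses HMAC-SHA-256 over the user ID and salt with the shared secret as the key, which serves the same goal as a bare SHA-2 over the concatenation while also avoiding length-extension pitfalls:

```c
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <stdio.h>
#include <string.h>

/* Verify the SSO cookie described above. In practice user_id, salt and
 * sig_from_cookie are parsed out of the cookie set by the primary
 * authenticator at login time. */
int main(void) {
    const unsigned char secret[] = "shared-secret-known-to-both-apps";
    const char *user_id = "42";                 /* from cookie */
    const char *salt = "f3a9c1d2e4b5a6f7";      /* from cookie, hex */
    unsigned char sig_from_cookie[32] = {0};    /* from cookie */

    char msg[128];
    snprintf(msg, sizeof(msg), "%s:%s", user_id, salt);

    unsigned char expected[32]; unsigned int len;
    HMAC(EVP_sha256(), secret, sizeof(secret) - 1,
         (const unsigned char *)msg, strlen(msg), expected, &len);

    /* Constant-time comparison to avoid timing side channels. */
    if (CRYPTO_memcmp(expected, sig_from_cookie, sizeof(expected)) == 0)
        printf("cookie is authentic, user %s is logged in\n", user_id);
    else
        printf("signature mismatch, force a fresh login\n");
    return 0;
}
```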
The small print
To protect against session hijacking, all requests made over the two sites should be encrypted over SSL (use HTTPS). If this is not possible, a hash based on the client's IP address as well as browser type and version (User-Agent) should probably be calculated at login time and also stored in the cookie, then re-checked against the client's IP address and user agent before serving each request. The hash-based approach is security through obscurity and can be fooled; moreover, a user accessing the internet from behind a pool of proxies or using TOR may be kicked out by your system every time a different proxy or exit node (with a different IP address) forwards a request.
