Disable any cryptography in Windows Identity Foundation (WIF)

I am only interested in federation between corporate web services.
So, how can I disable all encryption and signing in the active scenario?

Encryption is not a problem. Simply don't enable encryption on the STS side.
For signatures, there are two sides:
STS side. If your STS is AD FS 2.0, then it will always sign any token it gives out. (Or at least, my experiments did not reveal any way to remove all token-signing certificates.)
You may be able to create a custom STS, e.g., based on WIF, to generate unsigned tokens.
RP side. Given that this question is tagged 'WIF', your web service is apparently implemented using WIF.
When receiving a signed token, your WIF-based web service needs to trust this AD FS instance. (Or at least, I haven't found a way to have WIF trust just any token, without checking the token signature.)

Why would you want to disable signing? (I'm not even sure it is possible.) It would defeat the entire purpose of using claims-based identity. As marnix suggested, token encryption is optional.

Related

Which OAuth 2.0 flow to use?

I have read RFC 6749 and https://auth0.com/docs/authorization/which-oauth-2-0-flow-should-i-use, but I couldn't find the flow that exactly fits my use case.
There will be a native app (essentially a GUI) that spins up a daemon on the end user's device. The daemon will call internal APIs hosted by the backend. The internal APIs don't need to verify the user's identity, but it would be preferable if the device's identity could be verified to some extent. There will be an OAuth authorization server in the backend to handle the logic, but I can't identify the correct flow to use for this case.
Originally I thought this was a good fit for the client credentials grant type, but then I realized that this might be a public client, and client credentials is supposed to be used only for confidential clients.
I then came across the authorization code with PKCE flow. This seems to be the recommended flow for native apps, but it doesn't make much sense to me: it involves redirects and user interaction, yet the APIs being called are internal and the user shouldn't know about any of this back-channel machinery. Also, the resource owner should be the same as the client in this case, i.e. the machine rather than the user.
So which flow should I use?
Thanks a lot for the help!
Client Credentials feels like the standard option but there are a few variations:
SINGLE CLIENT SECRET
This is not a good option since anyone who captures a message in transit can access data for any user.
CLIENT SECRET PER USER
Using Dynamic Client Registration might be an option, where each instance of the app gets its own client ID and secret, linked to a user.
The daemon then uses these credentials, and if the secret is somehow captured in transit it only impacts one user.
STRONGER CLIENT SECRETS
The client credentials grant can also be used with stronger secrets, such as a client assertion, which can be useful if you want to avoid sending the actual secret.
This type of solution would involve generating a key pair per user when they authenticate, then storing the private key on the device, e.g. in the keychain.
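As a concrete illustration of the client assertion variant, here is a minimal Python sketch (assuming the PyJWT and requests libraries; the token endpoint and client ID are placeholders, not values from the question). The daemon signs a short-lived JWT with its per-device private key and presents it to the token endpoint instead of a plain secret:

    # Client credentials grant with a signed client assertion (RFC 7523).
    # Endpoint, client ID and key handling are illustrative placeholders.
    import time
    import uuid

    import jwt       # PyJWT
    import requests

    TOKEN_URL = "https://auth.example.com/oauth/token"   # assumed authorization server
    CLIENT_ID = "device-12345"                           # per-device/per-user client ID

    def build_client_assertion(private_key_pem: str) -> str:
        """Create a short-lived JWT signed with the device's private key."""
        now = int(time.time())
        claims = {
            "iss": CLIENT_ID,          # issuer and subject are the client itself
            "sub": CLIENT_ID,
            "aud": TOKEN_URL,          # audience is the token endpoint
            "iat": now,
            "exp": now + 300,          # five minutes is plenty
            "jti": str(uuid.uuid4()),  # unique ID so the assertion cannot be replayed
        }
        return jwt.encode(claims, private_key_pem, algorithm="RS256")

    def fetch_token(private_key_pem: str) -> dict:
        """Exchange the client assertion for an access token."""
        response = requests.post(
            TOKEN_URL,
            data={
                "grant_type": "client_credentials",
                "client_id": CLIENT_ID,
                "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
                "client_assertion": build_client_assertion(private_key_pem),
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # access_token, token_type, expires_in, ...

Because only the public key ever needs to be registered with the authorization server, a captured request does not expose a reusable secret.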

Access Token JWT validation

Is it really necessary to validate the access token by verifying its signature with the public key? I am asking this question in relation to Azure AD. My understanding is that the JWT can be validated to make sure it was indeed signed by Azure AD. If so, is there an Azure AD endpoint where I can submit the token and get back whether it was signed by Azure AD? All the articles on the internet explain the manual way of grabbing the public key from the Azure AD endpoint and then doing the verification steps ourselves. Is there an automated way to validate access tokens?
It would be great if someone could also clarify whether it's standard practice for APIs to validate access tokens before servicing a request.
It is considered a security best practice for APIs to validate JWT access tokens on every request. This approach is fast and scales to a microservices architecture, where Service A can forward the access token to Service B and so on.
The result can be termed a 'zero trust architecture', since both calls from internet clients and clients running within the back end involve digital verification before the API logic runs.
You are right that a certain amount of plumbing code is needed to verify JWTs. This is typically coded once in a filter and then you don't need to revisit it. For some examples in various technologies, see the Curity API Guides.
I can confirm that this approach works fine for Azure AD - please post back if you run into any specific problems.
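To make the "code it once in a filter" idea concrete, here is a minimal sketch in Python using the PyJWT library against the tenant's JWKS endpoint. The tenant ID, audience and issuer below are placeholders; the exact values depend on your app registration and on whether you receive v1.0 or v2.0 tokens:

    # Per-request validation of an Azure AD access token via the JWKS endpoint.
    # Tenant ID, audience and issuer are placeholders.
    import jwt
    from jwt import PyJWKClient

    TENANT_ID = "your-tenant-id"
    AUDIENCE = "api://your-api-client-id"
    JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
    ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"

    jwks_client = PyJWKClient(JWKS_URL)  # caches downloaded signing keys

    def validate(access_token: str) -> dict:
        """Verify signature, issuer, audience and expiry; return the token's claims."""
        signing_key = jwks_client.get_signing_key_from_jwt(access_token)
        return jwt.decode(
            access_token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )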
Some Authorization Servers also support token validation via OAuth Introspection, but Azure AD does not support this currently.
Introspection is most commonly used with opaque access tokens (also unsupported by Azure). See the Phantom Token Approach for further details on this pattern.
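For authorization servers that do support it, a standard RFC 7662 introspection call is a simple form POST; the sketch below uses placeholder endpoint and API credentials (and, as noted above, this is not an option with Azure AD today):

    # Generic OAuth 2.0 token introspection (RFC 7662). The endpoint and the
    # API's own credentials are placeholders.
    import requests

    INTROSPECTION_URL = "https://idp.example.com/oauth/introspect"

    def introspect(token: str) -> dict:
        response = requests.post(
            INTROSPECTION_URL,
            data={"token": token},
            auth=("api-client-id", "api-client-secret"),  # the API authenticates itself
            timeout=10,
        )
        response.raise_for_status()
        result = response.json()
        if not result.get("active"):
            raise ValueError("token is expired, revoked, or otherwise invalid")
        return result  # scope, sub, exp, ... for active tokens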
Yes, you should validate it every time. The reason to use JWTs is to authorize requests. If you don't care who sent the request, and it doesn't matter whether it was an attacker or your customer, then don't use JWTs and OAuth. If you do care, you have to be sure the JWT was not tampered with, so the signature has to be checked.

What are the different ways to authenticate two servers securely, apart from OAuth and SAML?

Background:
I want to integrate SSO between my existing application and my client's application, and the client's IdP does not support the SAML or OAuth standards.
Problem Statement:
I'm looking for a custom solution where, if the client application simply opens a link to my application, my application can recognise that user/client and automatically log them in.
I've done a good amount of research before asking this question here.
My findings so far:
Some custom techniques I found that I could use to auto-login into my system:
A JWT token handed from the client's application to mine (a sketch follows below)
A certificate
Simple encryption using a shared key or key pair, e.g. AES or RSA
Are there any other secure custom options available?
And which option should one choose in this kind of situation?
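To illustrate the JWT option from the list above, here is a hedged sketch (Python with the PyJWT library; the names, URLs and shared secret are illustrative and would be agreed out of band): the client's system signs a short-lived token identifying the user and appends it to a link into the target application, which verifies the token before creating a session.

    # Auto-login handoff with a signed, short-lived JWT. All names are placeholders.
    import time

    import jwt  # PyJWT

    SHARED_SECRET = "exchanged-out-of-band"   # or use RS256 with the sender's public key
    SSO_LOGIN_URL = "https://myapp.example.com/sso/login"

    # Sender (the client's application): build the auto-login link.
    def build_sso_link(user_email: str) -> str:
        now = int(time.time())
        token = jwt.encode(
            {"sub": user_email, "iss": "client-app", "aud": "my-app",
             "iat": now, "exp": now + 60},     # keep the token short-lived
            SHARED_SECRET,
            algorithm="HS256",
        )
        return f"{SSO_LOGIN_URL}?token={token}"

    # Receiver (my application): verify the token and start a session.
    def handle_sso_login(token: str) -> str:
        claims = jwt.decode(
            token, SHARED_SECRET, algorithms=["HS256"],
            audience="my-app", issuer="client-app",
        )
        return claims["sub"]  # the user to log in; create a session for them here

In practice you would also include a one-time identifier (jti) and reject reuse, so a captured link cannot be replayed.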

What, if any, are the downsides to using the resource owner password credentials flow when the app and API are first party?

I'm just starting to learn about Identity Server, OAuth 2, and OpenID Connect. While doing so I've spent some time looking over the different OAuth flows and their applications. I understand the risks of using the resource owner password credentials flow when the client is third party or not trusted. However, I haven't been able to find much on its use when the client (a mobile app) and API are trusted, first-party components. What are the potential risks of using this flow in that scenario? If you could point to specific security vulnerabilities, that would be very helpful.
Thanks!
If you are talking about exactly the following...
Your own Mobile App (using trusted libraries)
Collects user credentials (as if they were logging in to your website, assuming you have one)
Sends them over TLS to your Auth server
Returns the normal token response if correct
Then I would argue there is no security penalty; at the very least, it is no worse than using username/password authentication in the first place. (A minimal sketch of such a token request follows below.)
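For reference, the token request in that scenario is just the standard password grant over TLS; a minimal Python sketch (with a placeholder token endpoint and client ID) looks like this:

    # Resource owner password credentials grant, sent by the first-party app.
    # The token endpoint and client_id are placeholders.
    import requests

    TOKEN_URL = "https://auth.example.com/oauth/token"

    def password_grant(username: str, password: str) -> dict:
        response = requests.post(
            TOKEN_URL,
            data={
                "grant_type": "password",
                "username": username,
                "password": password,
                "client_id": "first-party-mobile-app",
                "scope": "api",
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # access_token, refresh_token, expires_in, ...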
However, there is a wider problem with mobile authentication of this nature.
There is no way to tell that it's your app sending the requests; this applies to all OAuth 2 flows (even if you use a more secure flow, a user can simply pull apart the mobile app and extract the credentials).
There are some features from both Google and Apple that attempt to fix this problem, but I'm not sure how mature or secure they are at the moment; it might be worth looking into them.
So you are relying on the user not to get tricked into installing a fake app; however, that falls under social engineering, and it applies to all OAuth 2 flows.

OAuth 2 for native applications - what is the difference between public and confidential client types?

I'm trying to implement an OAuth 2 provider for a web service and then build a native application on top of it. I also want to give third-party developers access to the API.
I have already read the OAuth 2 specification and can't choose the right flow. I want to authenticate both CLI and GUI apps.
First of all, we have two client types - public and confidential. Of course both GUI and CLI apps will be public. But what is the difference between these two types? And in that case, why do I need a client_secret at all, if I can get an access token without it just by changing the client type?
I tried to look at the API implementations of popular services like GitHub, but they use HTTP Basic Auth, and I'm not sure that is a good idea.
Is there any particular difference? Does one improve security over the other?
As to the difference between public and confidential clients, see http://tutorials.jenkov.com/oauth2/client-types.html which says:
A confidential client is an application that is capable of keeping a client password confidential to the world. This client password is assigned to the client app by the authorization server. This password is used to identify the client to the authorization server, to avoid fraud. An example of a confidential client could be a web app, where no one but the administrator can get access to the server, and see the client password.
A public client is an application that is not capable of keeping a client password confidential. For instance, a mobile phone application or a desktop application that has the client password embedded inside it. Such an application could get cracked, and this could reveal the password. The same is true for a JavaScript application running in the user's browser. The user could use a JavaScript debugger to look into the application, and see the client password.
Confidential clients are more secure than public clients, but you may not always be able to use confidential clients because of constraints on the environment that they run in (e.g. native apps, in-browser clients).
HansZ's answer is a good starting point in that it clarifies the difference between a public and a confidential client application: the ability to keep the client secret a secret.
But it doesn't answer the question: what OAuth2 profile should I use for which use cases? To answer this critical question, we need to dig a bit deeper into the issue.
For confidential applications, the client secret is supplied out of band (OOB), typically by configuration (e.g. in a properties file). For browser-based and mobile applications, there really isn't any opportunity to perform such configuration and, thus, these are considered public applications.
So far, so good. But I disagree that this makes such apps unable to accept or store refresh tokens. In fact, the redirect URI used by SPAs and mobile apps is typically localhost and, thus, 100% equivalent to receiving the tokens directly from the token server in response to a Resource Owner Password Credentials (ROPC) grant.
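For context, the localhost redirect works by having the native app listen on a loopback port and capture the authorization code when the browser is redirected back. A bare-bones Python sketch follows (endpoints and client_id are placeholders, and state/PKCE are omitted for brevity, though a real app should use both):

    # Minimal loopback-redirect capture for a native app's authorization code flow.
    import webbrowser
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlencode, urlparse

    AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"  # assumed endpoint
    REDIRECT_URI = "http://127.0.0.1:8400/callback"

    class CallbackHandler(BaseHTTPRequestHandler):
        code = None

        def do_GET(self):
            # Pull the authorization code out of the redirect's query string.
            params = parse_qs(urlparse(self.path).query)
            CallbackHandler.code = params.get("code", [None])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"You can close this window now.")

    def get_authorization_code() -> str:
        query = urlencode({
            "response_type": "code",
            "client_id": "native-app",
            "redirect_uri": REDIRECT_URI,
            "scope": "api",
        })
        webbrowser.open(f"{AUTHORIZE_URL}?{query}")
        server = HTTPServer(("127.0.0.1", 8400), CallbackHandler)
        server.handle_request()  # serve exactly one request: the redirect back
        return CallbackHandler.code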
Many writers point out, sometimes correctly, that OAuth2 doesn't actually do authentication. In fact, as stated in the OAuth2 RFC 6749, both the ROPC and Client Credentials (CC) grants are required to perform authentication. See Section 4.3 and Section 4.4.
However, the statement is true for the Authorization Code and Implicit grants. But how does authentication actually work for those flows?
Typically, the user enters her username and password into a browser form, which is posted to the authentication server, which sets a cookie for its domain. Sorry, but even in 2019, cookies are the state of the authentication art. Why? Because cookies are how browser applications maintain state. There's nothing wrong with them, and browser cookie storage is reasonably secure (domain protected, JS apps can't get at "http only" cookies, and the "secure" flag requires TLS/SSL). Cookies allow login forms to be presented only on the first authorization request. After that, the current identity is re-used (until the session has expired).
Ok, then what is different between the above and ROPC? Not much. The difference is where the login form comes from. In an SPA, the app is known to be from the TLS/SSL-authenticated server. So this is all but identical to having the form rendered directly by the server. Either way, you trust the site via TLS/SSL. For a mobile app, the form is known to be from the app developer via the app signature (apps from Google Play, the Apple App Store, etc. are signed). So, again, there is a trust mechanism similar to TLS/SSL (no better, no worse; it depends on the store, CA, trusted root distributions, etc.).
In both scenarios, a token is returned to prevent the application from having to resend the password with every request (which is why HTTP Basic authentication is bad).
In both scenarios, the authentication server MUST be hardened against the onslaught of attacks that any Internet-facing login server is subjected to. Authorization servers don't have this problem as much, because they delegate authentication. However, the OAuth2 password and client_credentials profiles both serve as de facto authentication servers and, thus, really need to be tough.
Why would you prefer ROPC over an HTML form? Non-interactive cases, such as a CLI, are a common use case. Most CLIs can be considered confidential and, thus, should have both a client_id and a client_secret. Note: if running on a shared OS instance, you should write your CLI to pull the client secret and password from a file or, at least, from standard input, to keep secrets and passwords from showing up in process listings!
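A small Python sketch of that advice (names and paths are illustrative): the CLI reads the client secret from the environment or a permission-restricted file and prompts for the password on standard input, so neither shows up in process listings or shell history.

    # CLI-friendly credential handling for an ROPC request on a shared host.
    import getpass
    import os

    import requests

    TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder

    def cli_login() -> dict:
        client_secret = os.environ.get("MYCLI_CLIENT_SECRET")
        if client_secret is None:
            # Fall back to a file readable only by the current user (chmod 600).
            with open(os.path.expanduser("~/.mycli/client_secret")) as f:
                client_secret = f.read().strip()

        username = os.environ.get("MYCLI_USERNAME") or input("Username: ")
        password = getpass.getpass("Password: ")   # never passed as a CLI argument

        response = requests.post(
            TOKEN_URL,
            data={
                "grant_type": "password",
                "username": username,
                "password": password,
                "client_id": "my-cli",
                "client_secret": client_secret,
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()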
Native apps and SPAs are another good use case, in my opinion, because these apps require tokens to pass to REST services. However, if these apps require cookies for authentication as well, then you probably want to use the Authorization Code or Implicit flows and delegate authentication to a regular web login server.
Likewise, if users are not authenticated in the same domain as the resource server, you really need to use Authorization Code or Implicit grant types. It is up to the authorization server how the user must authenticate.
If two-factor authentication is in use, things get tricky. I haven't crossed this particular bridge yet myself. But I have seen cases, like Atlassian, that can use an API key to allow access to accounts that normally require a second factor beyond the password.
Note: even when you host an HTML login page on the server, you need to take care that it is not wrapped either by an IFRAME in the browser or by some WebView component in a native application (which may be able to set hooks to see the username and password you type in; that is how password managers work, by the way). That is another topic, falling under "login server hardening", but the answers all involve clients respecting web security conventions and, thus, a certain level of trust in applications.
A couple of final thoughts:
If a refresh token is securely delivered to the application, via any flow type, it can be safely stored in the browser/native local storage. Browsers and mobile devices protect this storage reasonably well. It is, of course, less secure than storing refresh tokens only in memory, so maybe not for banking applications... But a great many apps have very long-lived sessions (weeks), and this is how it's done.
Do not use client secrets for public apps. They will only give you a false sense of security. Client secrets are appropriate only when a secure OOB mechanism exists to deliver the secret and it is stored securely (e.g. locked down OS permissions).
