I am aware of how to respond to an authentication challenge; with NTLM authentication, for example, there are three options:
Provide authentication credentials.
Attempt to continue without credentials.
Cancel the authentication request.
But I just want to hear some thoughts here: when we go with the first option, providing authentication credentials, we pass the username and password in a URLCredential. Is there any possibility of the credentials leaking? Is it secure to pass them this way? What is happening behind the scenes, and how does Apple's networking API send the credentials to the server?
Yes, we can set policies like the server domain, failure count, etc., but from a security point of view is it safe, for instance against a man-in-the-middle (MITM) attack or anything else?
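For reference, here is a minimal sketch (assuming a URLSession task delegate; the user name and password are placeholders) of what the three options look like in code:

import Foundation

class NTLMChallengeDelegate: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession,
                    task: URLSessionTask,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodNTLM else {
            // Not an NTLM challenge: let the system handle it.
            completionHandler(.performDefaultHandling, nil)
            return
        }
        if challenge.previousFailureCount == 0 {
            // Option 1: provide authentication credentials.
            let credential = URLCredential(user: "myUser",
                                           password: "myPassword",
                                           persistence: .forSession)
            completionHandler(.useCredential, credential)
        } else {
            // Option 3: cancel the authentication request after repeated failures.
            // Option 2 ("attempt to continue without credentials") would roughly map to
            // calling completionHandler(.performDefaultHandling, nil) instead.
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}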
Maybe the way I posted my question was not clear, but I was looking at this more from the application credential security point of view with NTLM authentication. After a lot of Googling I found out how NTLM works, and it's pretty interesting to see that the client doesn't share the password with the server. Here are the steps:
The client makes a request to the server.
The server needs to validate the user, but there is no identity yet, so the server generates a 16-byte random number called the challenge and sends it to the client.
The client hashes this challenge with the user's password and returns it to the server; this is called the response. It also includes the username in plain text and the challenge that was sent to the client.
The server sends everything to the domain controller, which uses the username to retrieve the hash of the user's password from the Security Account Manager database and hashes the challenge with it.
The domain controller compares its result with the client's response and reports back to the server; if they are identical, authentication is successful, otherwise it fails.
So the interesting part here is that the networking API doesn't share the password with the server, which means it is quite secure.
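To make that point concrete, here is a conceptual sketch of challenge-response. It deliberately uses HMAC-SHA256 from CryptoKit instead of NTLM's real primitives (MD4/DES, or HMAC-MD5 in NTLMv2), so it only illustrates that a keyed hash of the challenge, not the password, crosses the wire:

import CryptoKit
import Foundation

// Server side: generate a 16-byte random challenge.
let challenge = Data((0..<16).map { _ in UInt8.random(in: .min ... .max) })

// Client side: derive a key from the password (illustrative only) and hash the challenge with it.
let passwordKey = SymmetricKey(data: Data("secretPassword".utf8))
let response = HMAC<SHA256>.authenticationCode(for: challenge, using: passwordKey)

// Only the response (plus the username and the challenge) is sent back -- never the password.
// The domain controller recomputes the same value from its stored password hash and compares.
let isValid = HMAC<SHA256>.isValidAuthenticationCode(response,
                                                     authenticating: challenge,
                                                     using: passwordKey)
print(isValid)   // true -> authentication succeeds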
I hope this helps others.
There are multiple types of challenges, and the answer to your question depends on what type of challenge you're talking about. Each challenge has a protection space, which basically tells what type of challenge you're responding to.
To answer your question for the most common protection spaces:
Basic password-based authentication (NSURLAuthenticationMethodHTTPBasic): The credential you pass is sent in cleartext to the server (HTTP) or encrypted by the session key (HTTPS).
Digest authentication (NSURLAuthenticationMethodHTTPDigest): The credential you pass is cryptographically hashed with a nonce provided by the server, and only the resulting hashed token gets sent over the network.
NTLM authentication (NSURLAuthenticationMethodNTLM): The credential you pass is cryptographically hashed with a nonce sent by the server, and only the resulting hashed token gets sent over the network.
Client Certificate authentication (NSURLAuthenticationMethodClientCertificate): The certificate is sent to the server, but not the private key data. The client uses the private key to sign the prior TLS handshake data as a means of letting the server verify that the client really does have the private key associated with that cert.
Server certificate validation (NSURLAuthenticationMethodServerTrust): If you pass a certificate obtained from the server, you MUST validate it first, or else you effectively reduce the level of security to that of HTTP (i.e. any server can send any cert and you'll be saying to trust that cert when talking to the server).
The list above covers the most common protection spaces. Kerberos is its own animal, and I don't know anything at all about how that works. And there's also the "Form" protection space, which is just a placeholder for custom authentication that you can use in various parts of your app's code, but is not actually supported in any meaningful way.
It is worth noting that Basic, Digest, and NTLM authentication provide no protection against man-in-the-middle attacks if the attacker can alter data in transit, because the authentication token provided does not depend on the rest of the request in any way. Thus, these are really suitable only for use over an encrypted channel (HTTPS).
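On the server trust point in particular, here is a minimal sketch (a simplified illustration, not a hardened implementation; a real app might add certificate pinning) of validating the trust object before accepting it:

import Foundation
import Security

class TrustValidatingDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let serverTrust = challenge.protectionSpace.serverTrust else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // Evaluate the certificate chain before wrapping it in a credential.
        var error: CFError?
        if SecTrustEvaluateWithError(serverTrust, &error) {
            completionHandler(.useCredential, URLCredential(trust: serverTrust))
        } else {
            // Accepting an unvalidated cert would reduce HTTPS to roughly the security level of HTTP.
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}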
I'm writing an application that uses keycloak as its user authentication service. I have normal users, who log in to keycloak from the frontend (web browsers), and service users, who log in from the backend (PHP on IIS). However, when I log in from the backend, keycloak uses HS256 as its signature algorithm for the access token, and thus rejects it for further communication because RS256 is set in the realm and client settings. To get around this issue, I would like to "pretend to be the frontend" to get RS256 signed access tokens for my service users.
For security reasons, I cannot give the HS256 key to the application server, as it's symmetrical and too many people can access the server's code.
I am currently debugging the issue using the same user/pw/client id/grant type both on the frontend and the backend, so that cannot be the issue.
So far I have tried these with no luck:
copying the user agent
copying every single HTTP header (Host, Accept, Content-Type, User-Agent, Accept-Encoding, Connection, even Content-Length is the same as the form data is the same)
double checking if the keycloak login is successful or not - it is, it's just that it uses the wrong signature algorithm
So how does keycloak determine which algorithm to sign tokens with? If it's different from version to version, where should I look in keycloak's code for the answer?
EDIT: clarification of the login flow and of the reasons why the backend handles it.
If a user logs in, this is what happens:
client --[login data]--> keycloak server
keycloak server --[access and refresh token with direct token granting]--> client
client --[access token]--> app server
(app server validates access token)
app server --[data]--> client
But on some occasions the fifth step's data is the list of users that exist in my realm. The problem with this is that Keycloak requires one to have the view-users role to list users, which only exists in the master realm, so I cannot use the logged-in user's token to retrieve it.
For this case, I created a special service user in the master realm that has the view-users role, and gets the data like this:
client --[asks for list of users]--> app server
app server --[login data of service user]--> keycloak server
keycloak server --[access token with direct granting]-->app server
app server --[access token]--> keycloak server's get user list API endpoint
(app server filters detailed user data to just a list of usernames)
app server --[list of users]--> client
This makes the list of usernames effectively public, but all other data remains hidden from the clients - and for security/privacy reasons I want to keep it this way, so I can't just put the service user's login data in a JS variable on the frontend.
In the latter list, step 4 is the one that fails, as step 3 returns an HS256-signed access token. In the former list, step 2 correctly returns an RS256-signed access token.
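For concreteness, step 4 of this second flow is a call like the following (shown in Swift for consistency with the rest of this page even though the backend is PHP; host, realm, and the bearer token are placeholders, and older Keycloak versions prefix the path with /auth):

import Foundation

// Keycloak Admin REST API: list the users of a realm, authorized by the service user's access token.
var listRequest = URLRequest(url: URL(string: "https://keycloak.example.com/admin/realms/myrealm/users")!)
listRequest.setValue("Bearer <access token from step 3>", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: listRequest) { data, _, _ in
    // Returns a JSON array of user representations; the app server keeps only the usernames.
    if let data = data { print(String(decoding: data, as: UTF8.self)) }
}.resume()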
Thank you for the clarification. If I may, I will answer your question maybe differently than expected. While you focus on the token signature algorithm, I think there are either mistakes within your OAuth2 flows regarding their usage, or you are facing some misunderstanding.
The fact that both the backend and frontend use "Direct Access Granting", which refers to the OAuth2 Resource Owner Password Credentials Grant, is either a false claim or a mistake in your architecture.
As stated by Keycloak's own documentation (and also, slightly differently, in the official OAuth 2.0 references):
Resource Owner Password Credentials Grant (Direct Access Grants) ... is used by REST clients that want to obtain a token on behalf of a user. It is one HTTP POST request that contains the credentials of the user as well as the id of the client and the client's secret (if it is a confidential client). The user's credentials are sent within form parameters. The HTTP response contains identity, access, and refresh tokens.
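That flow boils down to a single POST against the realm's token endpoint. A minimal sketch (placeholders throughout, shown in Swift for consistency with the rest of this page; the endpoint path may carry an /auth prefix on older Keycloak versions):

import Foundation

// Direct Access Grant (Resource Owner Password Credentials): one POST carrying the user's credentials.
var request = URLRequest(url: URL(string: "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token")!)
request.httpMethod = "POST"
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpBody = [
    "grant_type=password",
    "client_id=my-client",
    "client_secret=my-secret",   // only if the client is confidential
    "username=some-user",
    "password=some-password"
].joined(separator: "&").data(using: .utf8)

URLSession.shared.dataTask(with: request) { data, _, _ in
    // The JSON response contains the identity, access, and refresh tokens.
    if let data = data { print(String(decoding: data, as: UTF8.self)) }
}.resume()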
As far as I can see the application(s) and use case(s) you've described do NOT need this flow.
My proposal
Instead, what I'd expect in your case for flow (1) is the Authorization Code flow ...
assuming that "Client" refers to normal users in a browser (redirected to Keycloak authentication from your front app),
and assuming you do not actually need the ID and access tokens back in your client unless you have a valid reason, since the flows allowing that are considered legacy/deprecated and are no longer recommended. In that case, we would be speaking of the Implicit Flow (and the Password Grant flow is also discouraged now).
So I think that the presented exchange (the first sequence, points 1 to 5 in your post) is invalid at some point.
For the second flow (backend -> list users), I'd propose two modifications:
Allow users to ask the front-end application for the list of users; the front end will in turn ask the backend to return it. The backend, having a service account on a client with the view-users role, will be able to get the required data:
Client (logged) --> Request list.users to FRONTEND app --> Get list.users from BACKEND app
(<--> Keycloak Server)
<----------------------------------------- Return data.
Use the Client Credentials Grant (flow) for the backend <> Keycloak exchanges in this use case (see the sketch below). The app will have a service account to which you can assign specific scopes and roles. It will not work on behalf of any user (even though you could retrieve the original requester another way!), but it will do its work in a perfectly safe manner and keep things simple. You can even define a specific client for these exchanges that would be bearer-only.
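A minimal sketch of that Client Credentials token request (again placeholder values, shown in Swift for consistency; no user credentials are involved, only the client's own):

import Foundation

// Client Credentials Grant: the backend authenticates as itself via its service account.
var tokenRequest = URLRequest(url: URL(string: "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token")!)
tokenRequest.httpMethod = "POST"
tokenRequest.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
tokenRequest.httpBody = "grant_type=client_credentials&client_id=backend-service&client_secret=backend-secret"
    .data(using: .utf8)

URLSession.shared.dataTask(with: tokenRequest) { data, _, _ in
    // The resulting access_token is what the backend sends as "Authorization: Bearer <token>"
    // when it calls the admin users endpoint.
    if let data = data { print(String(decoding: data, as: UTF8.self)) }
}.resume()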
After all, if you go that way you don't have to worry about token signatures or anything like that; that is handled automatically according to the scheme, flow, and parties involved. I believe that by using the flows incorrectly you end up having to deal with tricky token issues. In my view that is the root cause, and addressing it will be more helpful than focusing on the signature problem. What do you think?
Did I miss something or am I completely wrong...?
You tell me.
I know that certificates sent by the server can't be faked (there are still MD5 collisions, but they are costly), but what about faking the client?
In a man-in-the-middle attack:
Can't we tell the server that we are the legitimate client, take data from that server, manipulate it, and then encrypt it again with the legitimate client's public key? How can the client be sure that the data really came from the server?
In theory, can we inject any data into the response sent by the server to the client?
How are you authenticating the client? SSL client certificates? Or some application level system (cookies etc)?
Here's what SSL does in a nutshell:
Negotiates a Diffie-Hellman shared session key between the two parties
Has the server sign the session key and send the result to the client. Once the client verifies this, the client knows there is no MITM, and the server is who they say they are.
If client certificates are enabled, has the client sign the session key and send the signature to the server. The server now knows there is no MITM and the client is who they say they are.
Encrypts all data in both directions using the shared session key
Typically when you use SSL you won't use client certificates. Strictly speaking, the server does not know if the connection is MITM'd. However, most clients will disconnect if the server certificate is bad. The server assumes that if the client pushes forward with the connection, there is no MITM. Even if Mallory, doing the MITM, chooses not to propagate the disconnect from the client, he has no new information now; all he's done is connected to the server himself. Without intercepting the client's session cookie or other authentication information (which is only sent by the client after verifying the connection is secure) the MITM is useless.
So in short, as long as one end or the other verifies the certificate of the other end before initiating any high-level communication of sensitive information, SSL is secure in both directions.
You're right -- without secure certificate authentication on the client and server there is an opening for a man in the middle attack.
SSL can be "secure both ways" if you use mutual authentication, also called two-way SSL.
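For illustration, this is roughly how a client presents its side of two-way SSL with URLSession; loading the SecIdentity (for example from a PKCS#12 file via SecPKCS12Import) is left out, and the names are placeholders:

import Foundation
import Security

class MutualTLSDelegate: NSObject, URLSessionDelegate {
    // A client identity (certificate + private key) loaded elsewhere, e.g. from a .p12 bundle.
    let clientIdentity: SecIdentity

    init(identity: SecIdentity) {
        self.clientIdentity = identity
    }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // Only the certificate is sent to the server; the private key stays local and is used
        // to sign handshake data, which is how the server verifies the client.
        let credential = URLCredential(identity: clientIdentity,
                                       certificates: nil,
                                       persistence: .forSession)
        completionHandler(.useCredential, credential)
    }
}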
I am using the implicit client in the identity server; on the other hand, there is a native Android app.
My security concerns are:
1- App reverse engineering: if the attacker gets access to the client_id, redirect_uri, and/or response_type, he will be able to mimic the login request; by doing this he is impersonating the original client.
2- Man in the middle: those values (client_id, ...) are sent to the identity server through the HTTPS URI, which is not encrypted, so why not hide them in the header?
3- The browser will resend the token in the URI, revealing it to the man in the middle, if the redirect URI is not oob or localhost; the browser's default behavior is to redirect to the Location, so can we force developers to use oob when they register a client?
You could say: oh no, the app reads the token and closes the browser so fast, before the browser sends the request.
Can we really rely on the app's speed in closing the browser? This sounds squishy.
Which of these are legitimate concerns and which are not, and how do we solve the legitimate ones?
About point 1: how is Google protecting its services like Google Maps? The client quota is vital and it has to be very secure, right?
Edit
If we pass the client_id in the header to encrypt it, we will violate the HTTP 1.1 spec and the OAuth2 spec; and still we haven't gained much, because the client_id resides inside the handset, and with a little reverse engineering you can get it.
Regarding point number 3:
The token response after a successful authentication will be something like this:
HTTP/1.1 302 Found
Location: http://example.com/cb#access_token=2YotnFZFEjr1zCsicMWpAA
&state=xyz&token_type=example&expires_in=3600
The user agent will redirect to the URL provided in the Location header. Here there is no need to worry about MITM attacks, because the access token is included in the URL hash fragment, and hash fragments are not sent in HTTP request messages; in other words, the hash fragment will not leave the client machine.
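As an illustration, the client reads that token locally from the fragment of the redirect URL; a sketch (using the example callback from above):

import Foundation

// The fragment never leaves the client machine: it is parsed locally from the redirect URL.
let callback = URL(string: "http://example.com/cb#access_token=2YotnFZFEjr1zCsicMWpAA&state=xyz&token_type=example&expires_in=3600")!

var accessToken: String?
if let fragment = URLComponents(url: callback, resolvingAgainstBaseURL: false)?.fragment {
    // Parse the fragment like a query string: key=value pairs joined by "&".
    for pair in fragment.split(separator: "&") {
        let parts = pair.split(separator: "=", maxSplits: 1)
        if parts.count == 2 && parts[0] == "access_token" {
            accessToken = String(parts[1])
        }
    }
}
print(accessToken ?? "no token")   // "2YotnFZFEjr1zCsicMWpAA"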
Yes, but the token service will only return the user to the pre-registered callback URI for the client.
2 & 3. You should use HTTPS for most everything on the web these days.
I'm currently working on an OpenID server/client for demonstration purposes and I'm struggling to understand the following specification.
http://openid.net/specs/openid-connect-core-1_0.html#ImplicitAuthRequest
1) The clientApp sends a request (AJAX) to the serverApp in order to obtain a session id.
2) The clientApp sends an authentication request (AJAX) to the serverApp with
{
response_type : "id_token",
scope: "openid profile",
client_id: "clientApp",
redirect_uri : "clientAppAddress/redirecturi",
state: ???,
nonce: ???
}
There are no optional fields for grant_type, username and password (as in RFC6749: Access Token Request). How can I transmit the credentials?
Moreover, I don't understand the concept behind "state" and "nonce". The specification says that the nonce value "needs to include per-session state and be unguessable to attackers. One method to achieve this for Web Server Clients is to store a cryptographically random value as an HttpOnly session cookie and use a cryptographic hash of the value as the nonce parameter.", whereas state is used to mitigate CSRF/XSRF "by cryptographically binding the value of this parameter with a browser cookie". What is the difference between them and how do they increase security? Would I use the hash value of the session id (stored in an HttpOnly cookie, and transmitted to the client in the first request) for both of them?
The actual method of authenticating the user, thus transporting credentials is not part of the OpenID Connect specification. The OpenID Connect specification merely tells you how to transport information about the authentication event and the user to a peer. The means of user authentication is independent of that.
The state parameter is there to correlate request and response and to share context between request and response. One of the things that you would typically associate with the state is the URL that the user is trying to access, so that after a successful authentication response you can redirect to that.
The nonce parameter is to prevent replay attacks since that value should be cached.
Together they are used to prevent Cross Site Request Forgery where an attacker got hold of the id_token and tries to use it against the RP to impersonate the user in the attacker's browser.
It would be better to use values for state and nonce that are not directly derived from the session_id, since you may want to restart authentication from the same session, and nonce replay prevention would then block you from reusing it (and could not distinguish between you and an attacker). Also, state should be non-guessable, so it should not be the same as a value previously used in the same session.
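A minimal sketch of generating independent, unguessable values for state and nonce (simply fresh random bytes per authentication request, rather than anything derived from the session id):

import Foundation
import Security

// A fresh, cryptographically random, URL-safe value for each authentication request.
func randomURLSafeValue(byteCount: Int = 32) -> String {
    var bytes = [UInt8](repeating: 0, count: byteCount)
    let status = SecRandomCopyBytes(kSecRandomDefault, byteCount, &bytes)
    precondition(status == errSecSuccess, "unable to generate random bytes")
    return Data(bytes).base64EncodedString()
        .replacingOccurrences(of: "+", with: "-")
        .replacingOccurrences(of: "/", with: "_")
        .replacingOccurrences(of: "=", with: "")
}

// Distinct values: state is bound to the browser session to prevent CSRF,
// while nonce is echoed back inside the id_token to detect replay.
let state = randomURLSafeValue()
let nonce = randomURLSafeValue()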
I am very new to authentication/SSL, especially combined with iOS. My question is: if I am sending a username and password to, let's say, https://server.com/login.php, do I need to hash the password in the iOS client, or can I post the plain text of the password and then hash it before it is stored in the DB?
Hash them beforehand for sure!
Sure you're using https, but it's a best practice to never send sensitive information in plain text if you can help it.
Sending a hash or the cleartext password is essentially the same issue.
If an attacker can capture data from the connection between client and server, he can later authenticate himself, impersonating the client, by sending either the hash or the cleartext password to the server.
The key point is to use HTTPS, which encrypts communication between client and server so that an attacker cannot intercept the data (hash or password) the client is sending to the service.
That said, hashing the password is good practice on the client so that no one will learn the real user password (and even if the client stores the hash in memory for usability, you should never store cleartext passwords anywhere).
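Following that reasoning, a minimal sketch of the iOS side would simply send the credentials in the body of an HTTPS POST (the URL is the one from the question; field names and values are placeholders) and leave password hashing, with a proper algorithm such as bcrypt or Argon2, to the server:

import Foundation

// Credentials go in the HTTPS POST body; TLS protects them in transit.
var request = URLRequest(url: URL(string: "https://server.com/login.php")!)
request.httpMethod = "POST"
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

var form = URLComponents()
form.queryItems = [
    URLQueryItem(name: "username", value: "alice"),
    URLQueryItem(name: "password", value: "correct horse battery staple")
]
request.httpBody = form.percentEncodedQuery?.data(using: .utf8)

URLSession.shared.dataTask(with: request) { _, response, _ in
    // Server-side, login.php should verify against a stored bcrypt/Argon2 hash, never a cleartext password.
    print((response as? HTTPURLResponse)?.statusCode ?? -1)
}.resume()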