How safe is it to use LOGON32_LOGON_NETWORK_CLEARTEXT?
We have the following scenario:
Web server A is using Win32 LogonUser. Then it needs to invoke an asmx method on server B.
If the logon type used is LOGON32_LOGON_INTERACTIVE, it works well. However, the customer rejects this because it requires interactive access.
If we use LOGON32_LOGON_NETWORK, the token cannot be delegated to the remote server and we get a 401 (as expected, according to MSDN).
Attempting to use DuplicateToken to "upgrade" the token to an interactive one fails. That attempt was based on this article, which states:
"When you request an interactive logon, LogonUser returns a primary
token that allows you to create processes while impersonating. When
you request a network logon, LogonUser returns an impersonation token
that can be used to access local resources, but not to create
processes. If required, you can convert an impersonation token to a
primary token by calling the Win32 DuplicateToken function."
But it seems that if we use LOGON32_LOGON_NETWORK_CLEARTEXT, as suggested in this old thread, delegation works. But how safe is it to use? According to MSDN:
"This logon type preserves the name and password in the authentication
package, which allows the server to make connections to other network
servers while impersonating the client. A server can accept plaintext
credentials from a client, call LogonUser, verify that the user can
access the system across the network, and still communicate with other
servers."
Are the credentials used in this way visible to sniffers in any way? (We're using Windows Integrated security, sometimes with SSL, but not always.)
Please advise.
I had the same question, and though I haven't found a definitive answer I've done some investigating and reading between the lines, and this is my conclusion (corrections welcome):
The ideal/safest use case is if your code looks roughly like this (Win32, C-style; AuthenticateToRemoteServer stands in for whatever your remote call is):

    HANDLE token = NULL;
    BOOL success = LogonUser(username, domain, password,
                             LOGON32_LOGON_NETWORK_CLEARTEXT,
                             LOGON32_PROVIDER_DEFAULT, &token);
    if (success) {
        ImpersonateLoggedOnUser(token);                  // StartImpersonation
        remoteConnection = AuthenticateToRemoteServer(); // placeholder for your remote call
        RevertToSelf();                                  // StopImpersonation
        CloseHandle(token);
        // continue to use remoteConnection
    }
The plaintext credentials associated with the LogonUser session will be destroyed when you close its handle (I haven't found a reference for this, but it doesn't make sense to me that they wouldn't be). So for the lifetime of the token there was a copy of the user's credentials, and it was used to authenticate to the remote server. But your application already had the credentials in memory in plaintext (in the variables username, domain and password), so this doesn't really present an additional security risk.
Any authentication with a remote server that uses Windows authentication will use NTLM or Kerberos, and neither protocol sends the credentials on the wire, so that's not a concern. I can't say for sure what would happen if the remote server asked for Basic authentication, but I think it's more likely that the request would fail than that your credentials would be sent over.
If you need to keep the token around longer, the documentation does state that the credentials are stored in plaintext (somewhere). I took a dump of a test process and wasn't able to find them in the dump file, so I don't know if that means that they are stored in kernel memory or what. I would be a little worried if I had to keep this token around for a long time.
Related
Good day! I am trying to implement my own authorization server following the OAuth2 standards. Reading the specification for the authorization code flow, a third-party application requesting API access needs an authorization code from the authorization server, which it then exchanges for an access token. My question is: once I generate an authorization code on my authorization server, where, conceptually, do I store it so that when a client app requests to exchange it for an access token, I can verify that the authorization code is valid?
You can store the code anywhere you want - in your server's memory (as an object in a map), in a database, or in any other safe storage. If your server is a single application (with a single process's memory), you can store the codes in memory, as long as you don't mind losing them during application restarts. But if you want to run multiple instances of your application (e.g. in Kubernetes), or the server is composed of multiple applications, you will need some external storage (a database, Hazelcast, Redis).
With the code, you will need to keep metadata such as the client_id, its validity period, and PKCE attributes (code_challenge_method, code_challenge). When you receive a request to your token endpoint wanting to exchange the code for tokens, you need to find the code in your storage, compare the relevant metadata (client_id, PKCE code_verifier, client_secret) and issue tokens.
You should also keep the code with a timestamp saying when the tokens were issued, and you should be able to find which tokens were issued from that code, because if you receive another /token exchange request with the same code, you should invalidate all the tokens issued from it - the code was probably stolen.
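To make this concrete, here is a minimal in-memory sketch of such an authorization-code store (Python; every name is illustrative, and a real deployment would keep the store in a database or Redis as described above):

    import base64
    import hashlib
    import secrets
    import time

    AUTH_CODE_TTL = 60          # authorization codes should be short-lived
    auth_codes = {}             # code -> metadata captured at authorization time
    tokens_issued_from = {}     # code -> [access tokens], for revocation on reuse
    revoked_tokens = set()

    def issue_auth_code(client_id, redirect_uri, code_challenge, code_challenge_method):
        code = secrets.token_urlsafe(32)
        auth_codes[code] = {
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "code_challenge": code_challenge,
            "code_challenge_method": code_challenge_method,
            "expires_at": time.time() + AUTH_CODE_TTL,
            "used": False,
        }
        return code

    def _pkce_ok(code_verifier, entry):
        if entry["code_challenge_method"] == "S256":
            digest = hashlib.sha256(code_verifier.encode()).digest()
            challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
            return challenge == entry["code_challenge"]
        return code_verifier == entry["code_challenge"]      # "plain" method

    def exchange_code(code, client_id, redirect_uri, code_verifier):
        entry = auth_codes.get(code)
        if entry is None or time.time() > entry["expires_at"]:
            return None                                       # unknown or expired code
        if entry["used"]:
            # Same code presented twice: likely stolen, revoke everything issued from it.
            revoked_tokens.update(tokens_issued_from.pop(code, []))
            return None
        if entry["client_id"] != client_id or entry["redirect_uri"] != redirect_uri:
            return None
        if not _pkce_ok(code_verifier, entry):
            return None
        entry["used"] = True
        access_token = secrets.token_urlsafe(32)
        tokens_issued_from[code] = [access_token]
        return access_token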
It's good to read OAuth2 Security RFC for all the considerations.
You can create a global map data structure, map the client_id to the auth codes, and delete them after the access token is exchanged. This is a very simple and valid solution as long as it is properly implemented and the auth codes are deleted correctly.
Since the exchange happens immediately, you don't need to worry about the heap filling up: each auth code is created and deleted within a very short period of time, freeing space. Say 1,000 users log in every minute; a data structure of 1,000 elements is very acceptable in most cases, assuming the exchange times out after one minute (which should be the case).
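A compact sketch of that map-based approach (Python; names are illustrative, and a lock is included because most servers handle requests concurrently):

    import secrets
    import threading
    import time

    CODE_TTL = 60                 # one-minute lifetime, as suggested above
    _lock = threading.Lock()
    codes_by_client = {}          # client_id -> (code, issued_at)

    def issue_code(client_id):
        code = secrets.token_urlsafe(32)
        with _lock:
            codes_by_client[client_id] = (code, time.time())
        return code

    def redeem_code(client_id, code):
        with _lock:
            entry = codes_by_client.pop(client_id, None)   # deleted on first use
        if entry is None:
            return False
        stored_code, issued_at = entry
        return stored_code == code and (time.time() - issued_at) <= CODE_TTL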
I have a client-side app that uses Auth0 for accessing the different APIs on the server. But now I want to add another app, a single-page app (I'm going to use VueJS), and this app would ideally be open without a user having to sign in; it's like a demo with reduced functionality. I basically just want to check that the user is not a robot, so that I don't expose my API in those cases.
My ideas so far:
- Somehow use recaptcha and auth0 altogether.
- Then have a new server that would validate that calls are made only to allowed endpoints (this is not the focus of my question), so that even if the auth is somehow compromised, it doesn't leave the real server open to all types of calls.
- Pass the call to the server along with the bearer token, just as if I was doing it with my other old client app.
Is this viable? Right now I'm forcing the user to validate, which is more of a UX (user experience) concern, but I'd like a way to avoid that. I'm aware that I can't do this with Auth0 alone (see this post from Auth0), so I was expecting a mix of the things I mentioned.
EDIT:
I'm sticking to validating in both cases, but I'm still interested in opinions on this for future reference.
In the end, given the very concept of how Auth0 works, that idea is not possible, so my approach was the following:
Give a temporarily authenticated (Auth0) visitor a token with a restricted access level, then pass the request to a new middle server. The idea is to encrypt the real ids, so the frontend thinks it's requesting project A123456etc when it will in fact be decrypted by the middle server to project 456y-etc; given a whitelist, the middle server decides whether to pass the request, along with the token, to the final server. The final server has measures to reduce XSS and DDoS threats.
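For future readers, a rough sketch of that middle-server idea; the choice of the cryptography library's Fernet and all names here are my own assumptions, not part of the actual setup:

    from cryptography.fernet import Fernet, InvalidToken

    FERNET_KEY = Fernet.generate_key()   # in practice: load from configuration
    fernet = Fernet(FERNET_KEY)
    ALLOWED_PROJECTS = {"456y-etc"}      # whitelist of real ids the demo may touch

    def to_public_id(real_project_id):
        # Opaque, encrypted id handed out to the frontend.
        return fernet.encrypt(real_project_id.encode()).decode()

    def resolve_and_authorize(public_id):
        # Decrypt the opaque id and apply the whitelist; None means "reject".
        try:
            real_id = fernet.decrypt(public_id.encode()).decode()
        except InvalidToken:
            return None
        return real_id if real_id in ALLOWED_PROJECTS else None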
Anyway, if a better solution comes along I will change the accepted answer.
You could do a mix: use reCAPTCHA for the open public, then on the server side analyse the incoming user request (you can already try to build a human-made digital fingerprint just to differentiate it from a robot-generated one), and have the server (more of a middle server) make the call to your API (and this server has a limited access surface).
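For reference, the server-side check of the reCAPTCHA token could look roughly like this (a minimal Python sketch using the requests library; the helper name and configuration are assumptions, not part of the original answer):

    import requests

    RECAPTCHA_SECRET = "your-recaptcha-secret"   # placeholder, load from config in practice

    def verify_recaptcha(client_token):
        # Ask Google's siteverify endpoint whether the token the browser produced is genuine.
        resp = requests.post(
            "https://www.google.com/recaptcha/api/siteverify",
            data={"secret": RECAPTCHA_SECRET, "response": client_token},
            timeout=5,
        )
        return resp.json().get("success", False)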
What we normally do in these situations (if I understood your issue correctly) is to create two different endpoints: one working with the token, and another one receiving the reCAPTCHA token and validating it with Google's servers.
Both endpoints end up calling the same code, but this way you can add an extra layer of functionality to the 'public' endpoint to ensure that only public features are being requested (if that cannot be guaranteed just by restricting the interface).
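A sketch of that two-endpoint layout, assuming a Flask app purely for illustration; validate_bearer_token is a placeholder for your existing Auth0 validation, and verify_recaptcha is the helper from the previous sketch:

    from flask import Flask, abort, jsonify, request
    # verify_recaptcha: reuse the helper shown in the earlier sketch.

    app = Flask(__name__)

    def get_projects(public_only):
        # Shared business logic used by both endpoints; restricted items are
        # stripped out for public callers.
        projects = [{"id": "demo", "public": True}, {"id": "internal", "public": False}]
        return [p for p in projects if p["public"] or not public_only]

    def validate_bearer_token(header_value):
        # Placeholder: plug in your existing Auth0 / JWT validation here.
        return header_value.startswith("Bearer ")

    @app.route("/api/projects")
    def private_projects():
        if not validate_bearer_token(request.headers.get("Authorization", "")):
            abort(401)
        return jsonify(get_projects(public_only=False))

    @app.route("/public-api/projects")
    def public_projects():
        if not verify_recaptcha(request.headers.get("X-Recaptcha-Token", "")):
            abort(403)
        return jsonify(get_projects(public_only=True))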
I am developing an OAuth2 server and I've stumbled upon this question.
Let's suppose a scenario where my tokens are set to expire within one hour. Within this timeframe, some client goes through the implicit flow fifty times using the same client_id and the same redirect_uri - basically the same everything.
Should I give it the same accessToken generated on the first request for the subsequent ones until it expires, or should I issue a new accessToken on every request?
The benefit of sending the same token is that I won't leave stale and unused tokens of a client on the server, minimizing the window for an attacker trying to guess a valid token.
I know that I should rate-limit things and I am doing it, but in the case of a large botnet attack from thousands of different machines, some limits won't take effect immediately.
However, I am not sure about the downsides of this solution and that's why I came here. Is it a valid solution?
I would rather say - no.
Reasons:
You should NEVER store access tokens in plain text on the Authorization Server side. Access tokens are credentials and should be stored hashed. Salting might not be necessary since they are generated strings anyway. See OAuth RFC section 10.3 (and the sketch after this list).
Depending on how you handle subsequent requests, an attacker who knows that a certain resource owner is using your service could repeat requests for the client_id that was used, and that way impersonate the resource owner. If you really return the same token, then at least ensure that you authenticate the resource owner every time.
What about the "state" parameter? Will you consider requests to be the "same" if the state parameter is different? If not, then a botnet attack will simply use a different state every time and force you to issue new tokens.
As an addition - generally, defending against a botnet attack via application logic is very hard. The server exposing your AS to the internet should take care of that. At the application layer you should make sure it does not go down from small-bandwidth attacks.
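To illustrate the first reason (hashed token storage), here is a minimal Python sketch; all names are illustrative and the dict stands in for the Authorization Server's token store:

    import hashlib
    import secrets
    import time

    token_store = {}   # sha256(token) -> {"sub": ..., "client_id": ..., "expires_at": ...}

    def _fingerprint(token):
        return hashlib.sha256(token.encode()).hexdigest()

    def issue_access_token(subject, client_id, ttl=3600):
        token = secrets.token_urlsafe(32)          # high-entropy random string, so no salt needed
        token_store[_fingerprint(token)] = {
            "sub": subject,
            "client_id": client_id,
            "expires_at": time.time() + ttl,
        }
        return token                                # plaintext is returned once, never stored

    def introspect(token):
        entry = token_store.get(_fingerprint(token))
        if entry is None or time.time() > entry["expires_at"]:
            return None
        return entry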
You can return the same access_token if it is still valid; there's no issue with that. The only downside may be that you use the Implicit flow and thus repeatedly send the same, still-valid access token in a URL fragment, which is considered less secure than using e.g. the Authorization Code flow.
As a rule of thumb, never reuse keys; this adds security to the designed system in case a key is captured.
You can send a different access token each time one is requested after proper authentication, and also send a refresh token along with the access token.
Once the access token expires, you should inform the user, and the user should request a new access token by providing the one-time-use refresh token previously issued to them, skipping the need for re-authentication; you then provide a new access token and refresh token.
To resist attacks with fake refresh tokens, you should blacklist them along with their originating IP after a few warnings.
PS: Never use predictable tokens. At least make brute-force attacks extremely difficult by using totally random, long alphanumeric strings. I would suggest bin2hex(openssl_random_pseudo_bytes(512)) if you are using PHP.
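A small Python sketch of that flow, with secrets.token_hex playing the role of bin2hex(openssl_random_pseudo_bytes(...)); the names and structure are my own, not a prescribed implementation:

    import secrets

    ACCESS_TTL = 3600
    refresh_tokens = {}        # refresh_token -> {"sub": ..., "used": False}
    blacklist = set()

    def new_token():
        return secrets.token_hex(64)   # long, unpredictable, CSPRNG-generated string

    def issue_pair(subject):
        access, refresh = new_token(), new_token()
        refresh_tokens[refresh] = {"sub": subject, "used": False}
        return {"access_token": access, "expires_in": ACCESS_TTL, "refresh_token": refresh}

    def refresh(refresh_token):
        entry = refresh_tokens.get(refresh_token)
        if entry is None or refresh_token in blacklist:
            return None                          # fake or already-blacklisted token
        if entry["used"]:
            blacklist.add(refresh_token)         # reuse detected: blacklist it
            return None
        entry["used"] = True                     # one-time use
        return issue_pair(entry["sub"])          # rotate: new access + refresh token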
I use OAuth 2 for authorization and need to implement it in a load balanced cluster. I've considered a few approaches, but it seems there is no way around a centralized approach. Here are my thoughts:
1. Balancing using source IP
Caching the tokens on one server and balancing by IP would be ideal; however, the IP cannot be assumed to be static. So when the same user tries to access services that require authorization from another IP with a valid token, it will fail, because the token is not cached on that machine. Also, other devices logged in as this user will not reach the same machine.
2. Balancing using a load balancing cookie
Also not really an option, since it cannot be assumed that every client implements cookie storage.
3. Balancing using the Authorization header
Balancing by hashing the Authorization: Bearer token header is problematic, because the first request (the one requesting the authorization token) has no Authorization header yet; thus, the following requests might not hit the same instance.
My current approach is to use a central Redis instance for authorization token storage.
Is there an option left, where a centralized approach can be avoided?
I think you still have two options to consider.
One is to balance by session ID. Application servers can usually be configured to manage sessions either via a cookie or via a GET parameter added to every link, so this does not necessarily require cookie storage. Additionally, there are very few HTTP clients left that still do not implement cookie storage, so you may want to reconsider item 2 of your list.
The other one is using self-contained tokens, e.g. JSON Web Tokens (JWT) with signatures (JWS). Validation of self-contained tokens may not need a central database: each server instance can check token signatures on its own and extract the authorization details from the token itself. However, if you need support for revoking tokens, you may still need a central database to store at least a blacklist of revoked tokens.
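To illustrate the second option, here is a minimal self-contained-token sketch using only the Python standard library (a real system would more likely use an established JWT/JWS library; every name here is illustrative):

    import base64
    import hashlib
    import hmac
    import json
    import secrets
    import time

    SECRET = b"shared-secret-known-to-every-instance"   # placeholder
    revoked_ids = set()   # the only central state, and only needed if you support revocation

    def issue(subject, ttl=3600):
        claims = {"sub": subject, "exp": time.time() + ttl, "jti": secrets.token_hex(8)}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode())
        sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        return body.decode() + "." + sig

    def validate(token):
        try:
            body, sig = token.rsplit(".", 1)
        except ValueError:
            return None
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None                          # bad signature
        claims = json.loads(base64.urlsafe_b64decode(body.encode()))
        if time.time() > claims["exp"] or claims["jti"] in revoked_ids:
            return None                          # expired or revoked
        return claims                            # valid locally, no central lookup needed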
Though I cannot provide you with a full-fledged solution, I hope this gives you some ideas.
My team is coding a web app which includes a server and a client. I think it's obviously not advisable to send the user's uid and password to the server with every request from the client.
I am looking for a good way to deal with this, maybe something like OAuth. Is there an efficient approach?
For example, a user with username lyj and password 123456 requests login from my client app; the server should check whether this is permissible, and after a successful login the client can send more requests to get other resources from the server.
My problem is: apart from the userid and password, is there a way for the server and client to make sure who this user is? Is there any suggestion for transmitting an access token between server and client?
Without much information on your platform and technologies I can only attempt a generic answer. There are several ways you can generate a token, depending on how you want to use it. MD5 is a well-established algorithm and you can use it to generate an auth token from something like the username and email. Remember that you cannot reverse an MD5 hash, so to do any kind of verification you will have to recreate the string from the original parameters and then perform a comparison. If you want a representation that you can reverse, you can look at something like Base64 encoding.
Both MD5 and Base64 are easily available as libraries in any back end you may be using.
* UPDATE
Looking at your comments that you are working with a stateless client, here is a possible approach to using tokens.
The client performs login for the first time (preferably over HTTPS).
The server performs validation, generates a token using MD5 (or any other hash of your choice) over (username + email + ip_address + time_stamp), and sends it back to the client.
The server creates a new session for this client in a database table using userID, ip_address and time_stamp.
The client passes this token back with any future request.
When the client passes the token, the server retrieves the session from the database, regenerates the MD5 hash, and compares it with the token the client sent. If they match, you are good.
You can also use the time-stamp value as a validity window for your tokens so they are not valid forever. Also, it is practically impossible to recreate this token unless someone can create the same MD5 hash at the same time, down to the millisecond.
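A compact Python sketch of the scheme in the steps above; the dict stands in for the database table, and MD5 is kept only because the answer uses it (an HMAC such as HMAC-SHA256 would be a common alternative):

    import hashlib
    import hmac
    import time

    sessions = {}            # userID -> {"email": ..., "ip_address": ..., "time_stamp": ...}
    TOKEN_VALIDITY = 3600    # validity window in seconds

    def make_token(username, email, ip_address, time_stamp):
        raw = f"{username}{email}{ip_address}{time_stamp}"
        return hashlib.md5(raw.encode()).hexdigest()

    def login(username, email, password, ip_address):
        # ...credential validation against your user store goes here...
        time_stamp = time.time()
        sessions[username] = {"email": email, "ip_address": ip_address, "time_stamp": time_stamp}
        return make_token(username, email, ip_address, time_stamp)

    def check_token(username, token, ip_address):
        session = sessions.get(username)
        if session is None or session["ip_address"] != ip_address:
            return False
        if time.time() - session["time_stamp"] > TOKEN_VALIDITY:
            return False                         # outside the validity window
        expected = make_token(username, session["email"],
                              session["ip_address"], session["time_stamp"])
        return hmac.compare_digest(token, expected)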
Modern web application containers have session-tracking functionality built in. Of course, there is always the option of cookies. It's up to you what to implement...