How to validate different JWTs if I have multiple valid public keys, each corresponding to a token? - spring-security

I am using Spring Security, and I created a JWT checker. So far it checks only the expiration time.
The problem is that I expose two different APIs, and the callers use different public/private key pairs to generate the JWTs I validate, which means I have two different public keys, each needed to check the JWTs that correspond to it.
What would be the best way to keep a single function that checks tokens but also switches between public keys?
Example:
public class JwtControlValid {

    @Value("${public.key.A}") // from application.properties
    private String pubKeyA;

    @Value("${public.key.B}")
    private String pubKeyB;

    private RSAPublicKey getPubKey() {
        try {
            KeyFactory kf = KeyFactory.getInstance("RSA");
            // if (caller A)
            X509EncodedKeySpec keySpecX509 = new X509EncodedKeySpec(Base64.getDecoder().decode(pubKeyA));
            // if (caller B)
            // X509EncodedKeySpec keySpecX509 = new X509EncodedKeySpec(Base64.getDecoder().decode(pubKeyB));
            return (RSAPublicKey) kf.generatePublic(keySpecX509);
        } catch (Exception e) {
            // ... (error handling omitted)
        }
and another function that uses the first one to return true or false:
public boolean isJwtValid(String jwt) {
    try {
        Jwts.parser().setSigningKey(getPubKey()).parseClaimsJws(jwt);
        return true;
    } catch (Exception e) {
        return false;
    }
}
Both public keys are RSA public keys.
Is there a clean way to switch between public keys depending on the JWT that I receive?

The problem is that I expose two different APIs, and the callers use different public/private key pairs to generate the JWTs I validate, which means I have two different public keys, each needed to check the JWTs that correspond to it.
Yes, this is a problem, and it is not standard. Having a separate public key for each client scales badly and is in general not supported.
Out of the box, Spring supports trusting a single asymmetric key but not multiple, because this is not standard, and I do not recommend this approach.
If you want to support multiple keys, you should instead be using JWKs, which are a set of keys exposed by the issuing service.
What this basically means is that the issuer (the server/service that issues your tokens) has an endpoint from which all resource servers can fetch the public keys needed to verify tokens.
The advantages are that you can rotate keys, you can check against multiple JWKs, and it is scalable and secure.
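For illustration, a minimal sketch (assuming your issuer exposes a JWK Set endpoint; the URL below is a placeholder) of building a Nimbus-based decoder from that endpoint:

import org.springframework.context.annotation.Bean;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;

@Bean
JwtDecoder jwtDecoder() {
    // Hypothetical JWKS URL; the decoder fetches and caches the issuer's public keys,
    // so key rotation does not require redeploying the resource server.
    return NimbusJwtDecoder.withJwkSetUri("https://issuer.example.com/.well-known/jwks.json").build();
}

Spring Boot can also auto-configure the same kind of decoder from the spring.security.oauth2.resourceserver.jwt.jwk-set-uri property.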
If you still want to go down your path, which I highly do not recommend, then you can for example implement two JwtAuthenticationProvider instances.
Then you can resolve which one to use by implementing an AuthenticationManagerResolver.
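As a rough sketch of that fallback (assuming the two callers can be told apart by request path; the paths, constructor wiring and class name are made up for illustration):

import java.security.interfaces.RSAPublicKey;
import java.util.Collections;
import jakarta.servlet.http.HttpServletRequest; // javax.servlet on Spring Security 5.x
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.AuthenticationManagerResolver;
import org.springframework.security.authentication.ProviderManager;
import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationProvider;

public class PerApiAuthenticationManagerResolver implements AuthenticationManagerResolver<HttpServletRequest> {

    private final AuthenticationManager managerA;
    private final AuthenticationManager managerB;

    public PerApiAuthenticationManagerResolver(RSAPublicKey pubKeyA, RSAPublicKey pubKeyB) {
        // One JwtAuthenticationProvider per trusted public key.
        this.managerA = new ProviderManager(Collections.singletonList(
                new JwtAuthenticationProvider(NimbusJwtDecoder.withPublicKey(pubKeyA).build())));
        this.managerB = new ProviderManager(Collections.singletonList(
                new JwtAuthenticationProvider(NimbusJwtDecoder.withPublicKey(pubKeyB).build())));
    }

    @Override
    public AuthenticationManager resolve(HttpServletRequest request) {
        // Placeholder rule: choose the key based on which API is being called.
        return request.getRequestURI().startsWith("/api-a") ? managerA : managerB;
    }
}

You would then register it with http.oauth2ResourceServer(oauth2 -> oauth2.authenticationManagerResolver(new PerApiAuthenticationManagerResolver(pubKeyA, pubKeyB))), so the decoder (and thus the public key) is picked per request instead of inside a hand-rolled isJwtValid method.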
I would highly recommend that you read about the entire JWT flow in the official Spring Security docs and use Nimbus, which is included in Spring Security, instead of JJWT, which is just an extra, unneeded dependency.
I just want to add: writing custom security is never a good idea, and all it takes is one bug in your custom security code for your entire security to be useless. That is why there are standards, RFCs, and established flows: to avoid custom security solutions.

Related

Designing safe and efficient API for item state updates via events

Recently I've been working on a simple state-tracking system; its main purpose is to persist updates, sent periodically from a mobile client, in a relational database for further analysis/presentation.
The mobile client uses JWTs issued by AAD to authenticate against our APIs. I need to find a way to verify whether a user has permission to send an update for a certain Item (at the moment only its creator should be able to do that).
We assume that those updates could be sent by a lot of clients at small intervals (15-30 seconds). We will only have one Item in the active state per user.
The backend application is based on Spring Boot and uses Spring Security with the MS AAD starter and Spring Data JPA.
Obviously we could just do the following:
User_1 creates Item_1
User_1 sends an Update for Item_1
Item has an owner_ID field; before inserting an Update we simply check whether Item_1.owner_ID = User_1.ID, which means we need to fetch the original Item before every insert.
I was wondering if there is a more elegant approach to solving this kind of problem. Should we just use some kind of caching solution to keep allowed ID pairs, e.g. {User_1, Item_1}?
WHERE clause
You can include it as a condition in your WHERE clause. For example, if you are updating record X you might have started with:
UPDATE table_name SET column1 = value1 WHERE id = X
However, you can instead do:
UPDATE table_name SET column1 = value1 WHERE id = X AND owner_id = Y
If the owner isn't Y, then the value won't get updated. You can introduce a method in your Spring Data repository that looks up the Spring Security value:
#Query("UPDATE table_name SET column1 = ?value1 WHERE id = ?id AND owner_id = ?#{principal.ownerId}")
public int updateValueById(String value1, String id);
where principal is whatever is returned from Authentication#getPrincipal.
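Note that, as far as I know, the ?#{principal.ownerId} SpEL expression only resolves if the spring-security-data module is on the classpath and a SecurityEvaluationContextExtension bean is registered; a minimal sketch:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.data.repository.query.SecurityEvaluationContextExtension;

@Configuration
public class SecurityDataConfig {

    // Exposes the current Authentication to Spring Data query SpEL, e.g. ?#{principal.ownerId}
    @Bean
    public SecurityEvaluationContextExtension securityEvaluationContextExtension() {
        return new SecurityEvaluationContextExtension();
    }
}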
Cache
You are correct that technically a cache would prevent the first database call, but it would introduce other complexities. Keeping a cache fresh is enough of a challenge that I would try it only when it's obvious that introducing the complexity of a cache brings the required, observed performance gains.
@PostAuthorize
Alternatively, you can make the extra call and use the framework to simplify the boilerplate. For example, you can use the @PostAuthorize annotation, like so, in your controller:
@PutMapping("/updatevalue")
@Transactional
@PostAuthorize("returnObject?.ownerId == authentication.principal.ownerId")
public MyWidget update(String value1, String id) {
    MyWidget widget = this.repository.findById(id).orElseThrow();
    widget.setColumn1(value1);
    return widget;
}
With this arrangement, Spring Security will check the return value's ownerId against the logged-in user. If it fails, then the transaction will be rolled back, and the changes won't make it into the database.
For this to work, ensure that Spring's transaction interceptor is placed before Spring Security's post authorize interceptor like so:
@EnableMethodSecurity
@EnableTransactionManagement(order = -1)
The downside to this solution is that there are still the same two DB calls. I like it because it allows the framework to enforce the authorization rule. To learn more, take a look at this sample application that follows this pattern.

Using RSA in reverse to decrypt a licence code: encrypt with private key, decrypt with public key

I want to encrypt some values pertaining to a licence code with a secret private key; then, when the code is entered in the user's app install, it will be decrypted with the public key (stored with the app) to view the encoded data and ensure it was only created by me.
The trouble is it seems that you encrypt with the public key and decrypt with the private key, which is the reverse of what I want.
It's also worth mentioning that the library I'm using called SwiftyRSA only supports encrypting with the public key, and doesn't like it when I use the private key instead. I believe this is because it's being saved to the keychain with kSecAttrKeyClassPublic, because that's what it's expecting, and that causes things to fail.
I have read that the keys are technically interchangeable, but it seems I can't get it to work in my case. Is this because the public key has a smaller exponent? Is there a way to get the public key to be as "long" as the private key using ssh-keygen, and therefore be able to swap them around? If not, how could I proceed?
The keys aren't always interchangeable (e.g. RSA private keys with CRT parameters), and it is pretty likely that the encryption procedure doesn't protect the key against side-channel attacks. You should not use private keys to encrypt, period.
You could use signatures with message recovery if you're really careful.
Otherwise, if you have enough space, you could of course always sign-then-encrypt your license. For this to work (without additional AES encryption), your encryption key pair would have to be quite a bit larger than your signing key, though.
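To make the sign-and-verify route concrete, here is a minimal sketch using the standard java.security API (Java rather than SwiftyRSA, purely for illustration; the licence payload and keys are placeholders):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class LicenceSigningSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder key pair; in practice the private key stays with the vendor
        // and only the public key ships inside the app.
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] licence = "user=alice;expires=2030-01-01".getBytes(StandardCharsets.UTF_8); // hypothetical payload

        // Vendor side: sign the licence data with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(licence);
        byte[] signature = signer.sign();

        // App side: verify licence + signature with the bundled public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(licence);
        System.out.println("licence authentic: " + verifier.verify(signature));
    }
}

The licence stays readable by anyone, but only the holder of the private key can produce a signature that verifies, which is exactly the "created only by me" guarantee the question asks for.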

Given a public key of type CKK_EC, is it possible to find the matching private key using C_FindObjects?

I have a serialized EC public key - its CKA_EC_PARAMS and CKA_EC_POINT. There's a matching private key on my token. Is there any way to find it?
With an RSA key, I can do a FindObjects with CKA_KEY_TYPE=CKK_PRIVATE_KEY and CKA_MODULUS=. Is there a way to do the same thing with EC keys? According to the PKCS#11 spec, CKA_EC_POINT isn't an attribute for EC Private Keys.
I have a token with support for EC at hand, and it seems that the only way to associate the private and public keys is through the CKA_ID value. There is no attribute available to test the key value directly.
Actually, even in the case of RSA, that is the basic standard method to associate a private and a public key: they ought to be created with identical CKA_ID values (that's what the Netscape browser originally did, and everyone copied that).
There are even some buggy PKCS#11 implementations that won't allow you to read the CKA_MODULUS value of an RSA private key (this is definitively a bug, since the spec explicitly says this value ought to always be public, but it's just one of many bad things frequently happening with PKCS#11). With them, CKA_ID is the only way even for RSA.

Web Service Contributing ID Disambiguation

I work with a Web Service API that can pump through a generic type of Results that all offer certain basic information, most notably a unique ID. That unique ID tends to be--but is not required to be--a UUID defined by the sender, which is not always the same person (but IDs are unique across the system).
Fundamentally, the API results in something along the lines of this (written in Java, but the language should be irrelevant), where only the base interface represents common details:
interface Result
{
    String getId();
}
class Result1 implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public OtherType1 getField1() { /* ... */ }
    public OtherType2 getField2() { /* ... */ }
}
class Result2 implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public OtherType3 getField3() { /* ... */ }
}
It's important to note that each Result type may represent a completely different kind of information. Some of it cannot be correlated with other Results, and some of it can, whether or not they have identical types (e.g., Result1 may be able to be correlated with Result2, and therefore vice versa, but some ResultX may exist that cannot be correlated because it represents different information).
We are currently implementing a system that receives some of those Results and correlates them where possible, which generates a different Result object that is a container of what it correlated together:
class ContainerResult implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public Collection<Result> getResults() { return containedResultsList; }
    public OtherType4 getField4() { /* ... */ }
}
class IdContainerResult implements Result
{
    public String getId() { return uniqueValueForInstance; }
    public Collection<String> getIds() { return containedIdsList; }
    public OtherType4 getField4() { /* ... */ }
}
These are two containers, which present different use cases. The first, ContainerResult, allows someone to receive the correlated details as well as the actual complete, correlated data. The second, IdContainerResult, sacrifices the complete listing in favor of bandwidth by only sending the associated IDs. The system doing the correlating is not necessarily the same as the client, and the client can receive Results that those IDs would represent, which is intended to allow them to show correlations on their system by simply receiving the IDs.
Now, my problem may be non-obvious to some, and it may be obvious to others: if I send only the ID as part of the IdContainerResult, then how does the client know how to match the Result on their end if they do not have a single ID-store? The types of data that are actually represented by each Result implementation lend themselves to being segregated when they cannot be correlated, which means that a single ID-store is unlikely in most situations without forcing a memory or storage burden.
The current solution that we have come up with entails creating a new type of ID, we'll call it TypedId, which combines the XML Namespace and XML Name from each Result with the Result's ID.
My main problem with that solution is that it requires either maintaining a mutable collection of types that is updated as they are discovered, or prior knowledge of all types so that the ID can be properly associated on any client's system. Unfortunately, I cannot come up with a better solution, but the current solution feels wrong.
Has anyone faced a similar situation where they want associate generic Results with its original type, particularly with the limitations of WSDLs in mind, and solved it in a cleaner way?
Here's my suggestion:
You want to have "the client know how to match the Result on their end". So include in your response an extra discriminator field called "RequestType", a String.
You want to avoid "maintaining a mutable collection of types that is updated as they are discovered, or prior knowledge of all types so that the ID can be properly associated on any client's system". Obviously, each client request call DOES know what area of processing the Result will relate to. So you can have the client pass the "RequestType" string in as part of the request. As long as the RequestType is a unique string for each different type of client request, your service can process and correlate it without hard-coding any knowledge.
Here's one possible example of Java classes for the request and response messages (i.e. not the actual service endpoint):
interface Request {
    String getId();
    String getRequestType();
    // anything else ...
}
interface Result {
    String getId();
    String getRequestType();
}
class Result1 implements Result {
    public String getId() { return uniqueValueForInstance; }
    public String getRequestType() { return requestType; }
    public OtherType1 getField1() { /* ... */ }
    public OtherType2 getField2() { /* ... */ }
}
class Result2 implements Result {
    public String getId() { return uniqueValueForInstance; }
    public String getRequestType() { return requestType; }
    public OtherType3 getField3() { /* ... */ }
}
Here's the gotcha. The two suggestions above (the RequestType discriminator and having the client pass it in) do not give a completely dynamic solution. You want your service to be able to return a flexible record structure relating to each different request. You have the following options:
4A) In XSD, declare Result as a singular strongly-typed variant record type, and in WSDL return Result from a single service endpoint and single operation. The XSD will still need to hardcode the values for the discriminator element when declaring variant record structure.
4B) In XSD, declare multiple strongly-typed unique types Result1, Result2, etc. for each possible client request. In WSDL, have multiple uniquely named operations to return each one of these. These operations can be across one or many service endpoints - or even across multiple WSDLs. While this avoids hard-coding the request type as a specific field per se, it is not actually a generic client-independent solution, because you are still explicitly hard-coding to discriminate each request type by creating a unique name for each result type and each operation. So any apparent dynamism is a mirage.
4C) In XSD, define a flexible generic data structure that is not variant but has plenty of generically named fields able to handle all possible results required. Example fields could be "stringField1", "stringField2", "integerField1", "dateField1058", etc., i.e. use extremely weak typing and put the burden on the client to magically know what data is in each field. This option may be very generic, but it is usually considered terrible practice. It is inelegant, pretty unreadable, error-prone and has limitations/assumptions built in anyway - how do you know you have enough generic fields included? In your case, (4A) is probably the best option.
4D) Use flexible XSD schema design tactics - type substitutability and use of "any" element. See http://www.xfront.com/ExtensibleContentModels.html.
4E) Use the @Produces @SomeQualifier annotations against your own factory class method which creates a high-level type. This tells CDI to always use this method to construct the specified bean type and qualifier. Your factory method can have fancy logic to decide which specific low-level type to construct upon each call. @SomeQualifier can have additional parameters to give guidance towards selecting the type, potentially reducing the number of qualifiers to just one.
If you use (4D) you will have a flexible service endpoint design that can deal with changing requirements quite effectively. BUT your service implementation still needs to implement the flexible behaviour to decide which results fields to return for each request. Fact is, if you have a logical requirement for varying data structures, your code must know how to process these data structures for each separate request, so must depend on some form of RequestType / unique operation names to discriminate. Any goal of completely dynamic processing (without adapting to each client's needs for results data) is over-ambitious.

WinAPI -> CryptoAPI -> RSA, encrypt with private, decrypt with public

Good day.
I need to teach the Windows CryptoAPI to encrypt a message with the private (not public) part of the key, and decrypt it with the public part. This is necessary to give users information that they can read, but can't change.
How it works now:
I get the context
CryptAcquireContext(@Prov, PAnsiChar(containerName), nil, PROV_RSA_FULL, 0)
generate a key pair
CryptGenKey(Prov, CALG_RSA_KEYX, CRYPT_EXPORTABLE, @key)
Encrypt (and the problem is here: "key" is a key pair, and the function uses its public part);
CryptEncrypt(key, 0, true, 0, @res[1], @strLen, buffSize)
Decrypt (the same problem here: it uses the private part of the key)
CryptDecrypt(key, 0, true, 0, @res[1], @buffSize)
Thank you for your attention / help.
Update
Yes, I could use a digital signature and other methods...
The problem is that I need to encrypt one database field and make sure that no one but me can change it. It will be possible to read this field only with the help of my program (until someone decompiles it and gets the public key). This could be done with a symmetric key and digital signatures, but then I would need to create another field and store another key, and so on...
I do hope that we can somehow teach the WinAPI to do what I want. I know that I can do this with RSA, and I hope that the WinAPI somehow supports this feature.
Strictly speaking, when "signing" a message:
the person with the private key decrypts the hash with their private key.
they then send that "decrypted" hash along with the message.
the receiver then encrypts the signature with the public key
If the "encrypted" hash matches the hash of the original message, you know the message has not been altered, and was sent by the person with the private key. The following pseudo-code represents the signing algorithm:
//Person with private key generating message and signature
originalHash = GenerateHashOfMessage(message);
signature = RsaDecrypt(originalHash, privateKey);
//Receiver validating signed message
hash = GenerateHashOfMessage(message);
originalHash = RsaEncrypt(signature, publicKey);
messageValid = (hash == originalHash);
This same mechanism can be used to accomplish what you want. Except you don't care about hashes, you just want to encrypt some (small) amount of data:
//Person with private key
cipherText = RsaDecrypt(plainText, privateKey);
//Person with public key
plainText = RsaEncrypt(cipherText, publicKey);
I'll leave the CryptoAPI calls as an exercise, since I'm still trying to figure out Microsoft's Crypto API.
Encrypting data with the private key and decrypting it with the public key isn't supported because anyone with the "published" public key could decrypt it. What's the value in encrypting it then?
If you want to verify that data hasn't been changed, you will want to sign the data instead. Signing encrypts a hash of the data with the private key. Look at the signing functions.
You may be able to trick out the signing functions to do what you want. I've done this with other implementations, but I haven't tried with the Microsoft CryptoAPI.
Also, note that with RSA encryption, the plain text message cannot be longer than the key. So, if you are using a 2048 bit key, you can only encrypt a message body of up to 256 bytes (minus a few for overhead).
Consider using asymmetric encryption just to pass a symmetric key, and use the symmetric key to encrypt and decrypt any size data.
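A minimal sketch of that hybrid approach (in Java rather than CryptoAPI, purely to illustrate the pattern; the key sizes and sample payload are arbitrary):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class HybridEncryptionSketch {
    public static void main(String[] args) throws Exception {
        KeyPair rsa = KeyPairGenerator.getInstance("RSA").generateKeyPair(); // placeholder key pair

        // 1. Encrypt the payload (any size) with a freshly generated symmetric key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey aesKey = keyGen.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        byte[] cipherText = aes.doFinal("any amount of data".getBytes(StandardCharsets.UTF_8));

        // 2. RSA-encrypt (wrap) only the small AES key, which fits well under the RSA size limit.
        Cipher rsaWrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsaWrap.init(Cipher.WRAP_MODE, rsa.getPublic());
        byte[] wrappedKey = rsaWrap.wrap(aesKey);

        // Transmit wrappedKey + iv + cipherText; the recipient unwraps the AES key with the
        // RSA private key and then decrypts the payload.
        System.out.println("wrapped key: " + wrappedKey.length + " bytes, ciphertext: " + cipherText.length + " bytes");
    }
}

Note this shows the usual direction (encrypt the symmetric key to the public key, decrypt with the private key); for the poster's read-but-not-modify requirement, signing as described above remains the standard tool.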
Update
You may be able to use the CryptSignHash() function for this. Normally, this is used to "sign" a hash, but you can put any data you want into the hash:
Set the hash value in the hash object by using the HP_HASHVAL value of
the dwParam parameter in CryptSetHashParam.
You might be limited to so many bytes if the input is expected to be a SHA1 hash value.
Alternatively, you may wish to consider using OpenSSL. If I recall correctly, it's pretty straightforward to use its RSA signing functions to encrypt with the private key.
Also, I accomplished the same thing using the old (freeware) version of SecureBlackbox. You may be able to find the old free version, but it's not Unicode-friendly, so you'll have some conversion to do if you're using a newer Delphi. I've done this in the past also, so it's not too difficult.
You may also consider trying out the current SecureBlackbox and purchase it if it works for you.
Otherwise, as you stated, sign it to detect tampering, and encrypt it with a symmetric key that only the program knows in order to obfuscate it.
If they crack your code, anything's fair game anyway.
