Convert LockBox 2 ciphertext to LockBox 3 ciphertext - Delphi

Is there a way I can convert my LockBox 2 ciphertext to LockBox 3 ciphertext?
We are migrating our application, built on Delphi 2007, to Delphi XE2. We used the LockBox 2 RSA encryption algorithm in Delphi 2007, and we intend to use LockBox 3 in Delphi XE2 to support Unicode data. Since the ciphertext generated by the two differs (XE2 supports Unicode data), we face a problem, so we would like to convert the ciphertext generated by LockBox 2 to LockBox 3 somehow.

Since your ciphertext is by definition unrecognizable, there is no easy way to tell whether the underlying plaintext data was ANSI or Unicode, so you likely need to manage a new associated property.
It obviously depends on the layout of your application, where this data is stored, and how the clients are going to be upgraded, but there could be a new version flag of some sort associated with the stored ciphertext. If it's in a local table, say, add a new column such as PlainTextVersion and set it to some value to flag that the ciphertext was saved from Unicode plaintext. When reading ciphertext and this new field doesn't match the Unicode flag, you could upgrade the ciphertext by decrypting, re-encrypting from Unicode plaintext, re-saving the ciphertext, and setting the new flag (or simply put off the ciphertext version upgrade until the plaintext has changed and needs to be updated anyway).
Or, better yet, auto-upgrade all current ciphertext at one time if feasible.
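A minimal sketch of that upgrade-on-read pattern, in Python-style pseudocode; lockbox2_decrypt, lockbox3_encrypt, and the record layout are hypothetical stand-ins for whatever your application actually uses:

PLAINTEXT_VERSION_UNICODE = 2  # hypothetical value stored in the new PlainTextVersion column

def read_secret(record):
    if record.plaintext_version == PLAINTEXT_VERSION_UNICODE:
        # Already re-encrypted with LockBox 3 from Unicode plaintext.
        return lockbox3_decrypt(record.ciphertext)
    # Legacy row: decrypt with LockBox 2 (ANSI plaintext)...
    plain = lockbox2_decrypt(record.ciphertext)
    # ...then re-encrypt with LockBox 3 and flag the row as upgraded.
    record.ciphertext = lockbox3_encrypt(plain)
    record.plaintext_version = PLAINTEXT_VERSION_UNICODE
    record.save()
    return plain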

To convert, it would be easiest to use Lockbox 2 to decrypt your ciphertext and Lockbox 3 to re-encrypt it.
The reason is that, from what I can tell, Lockbox 2 stuffed up the implementation of the RSA block type 2 padding, which means that Lockbox 2's RSA encryption is not compatible with anybody else's RSA decryption.
Lockbox 2's RSA encryption pads out the message incorrectly as follows (found by placing a breakpoint and inspecting the memory at biBlock.Fi.IntBuf.pBuf):
message-bytes 0x00 random-padding-bytes 0x02 0x00
e.g. 'test' was padded to:
$01C883AC 74 65 73 74 00 D4 50 50 test..PP
$01C883B4 A7 B0 E5 51 7A 4C C2 BC ...QzL..
$01C883BC 8C B8 69 8A 97 DF AA 1D ..i.....
$01C883C4 78 67 1E 0E 8B AB 02 00 xg......
But it should be padded out to (e.g. look at this worked example):
0x00 0x02 random-padding-bytes 0x00 message-bytes
Lockbox 2 isn't just storing the bytes in reverse (otherwise the message "test" would also be reversed), nor reversing 32-bit little-endian words (otherwise the 02 00 would be swapped too). Everything works so long as you use Lockbox 2 for both encryption and decryption.
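For comparison, here is a small sketch in Python of the standard PKCS#1 v1.5 block type 2 layout next to the layout observed above for Lockbox 2 (os.urandom stands in for whatever RNG is actually used; this only illustrates byte layout, not a full RSA implementation):

import os

def pkcs1_v15_type2_pad(message: bytes, key_len: int) -> bytes:
    # Standard layout: 00 02 <at least 8 nonzero random bytes> 00 <message>
    pad_len = key_len - len(message) - 3
    if pad_len < 8:
        raise ValueError("message too long for this key size")
    padding = bytes(b or 1 for b in os.urandom(pad_len))  # padding bytes must be nonzero
    return b"\x00\x02" + padding + b"\x00" + message

def lockbox2_observed_pad(message: bytes, key_len: int) -> bytes:
    # Layout seen in the memory dump above: <message> 00 <random bytes> 02 00
    pad_len = key_len - len(message) - 3
    return message + b"\x00" + os.urandom(pad_len) + b"\x02\x00"

# 'test' with a 256-bit modulus gives a 32-byte block, as in the dump above.
print(lockbox2_observed_pad(b"test", 32).hex(" "))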
I also noticed another bug: Lockbox 2 calls e.RandomSimplePrime() to generate the public exponent e, but it generates an even number, which is a fairly noteworthy bug in RandomSimplePrime(), eh? I only looked at Lockbox 2.07. Lockbox 3 was a complete rewrite, so it won't have these bugs.

Related

Little Endian vs. Big Endian architectures

I have a question arising from a disagreement with a university professor about endianness. I found no way to settle it and find the right answer other than asking and opening a discussion with the Stack Overflow community.
Let's say we have the number 0x11FF1 defined as an integer, for example in C++: int num = 0x11FF1. I say that the number will be represented in memory on a little-endian machine as:
addr[0] is f1 addr[1] is 1f addr[2] is 01 addr[3] is 00
in binary : 1111 0001 0001 1111 0000 0001 0000 0000
as the compiler considers 0x11ff1 as 0x00011ff1 and treats 00 as the 1st byte, 01 as the 2nd byte, and so on. For big endian I believe it will look like:
addr[0] is 00 addr[1] is 01 addr[2] is 1f addr[3] is f1
in binary : 0000 0000 0000 0001 0001 1111 1111 0001
but he has another opinion. He says (his images are not reproduced here):
Little Endian: [image]
Big Endian: [image]
Actually, I don't see anything logical in his representation, so I hope the developers here can resolve this disagreement. Thanks in advance.
Your hex and binary numbers are correct.
Your (professor's?) French image for little-endian makes no sense at all; none of the 3 representations is consistent with either of the other 2.
73713 is 0x11ff1 in hex, so there aren't any 0xFF bytes (binary 11111111).
In 32-bit little-endian, the bytes are F1 1F 01 00 in order of increasing memory address.
You can get that by taking pairs of hex digits (bytes / octets) from the low end of the full hex value, then fill with zeros once you've consumed the value.
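You can check this quickly with, for example, Python's struct module:

import struct

num = 0x11FF1                      # 73713
little = struct.pack("<I", num)    # bytes in order of increasing address on a little-endian machine
big = struct.pack(">I", num)       # same value laid out big-endian

print(little.hex(" "))             # f1 1f 01 00
print(big.hex(" "))                # 00 01 1f f1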
It looks like they maybe padded the wrong side of the hex value with 0s to zero-extend to 32 bits as 0x11ff1000, not 0x00011ff1. Note these are full hex values of the whole number, not an attempt to break it down into separate hex bytes in any order.
But the hex and binary don't match each other; their binary ends with an all-ones byte, so it has FF as the high byte, not the 3rd byte. I didn't check if that matches their hex in PDP (mixed) endian.
They broke up their hex column into 4 byte-sized groups, which would seem to indicate that it's showing bytes in memory order. But that column is the same between their big- and little-endian images, so apparently that's not what they're doing, and they really did just extend it to 32 bits by left shifting (padding with low zeros instead of high zeros).
Also, the binary fields in the big- and little-endian images aren't the reverse of each other. To flip from big to little endian, you reverse the order of the bytes within the integer, keeping each byte value the same (like x86 bswap). Their 11111111 (FF) byte is 2nd in their big-endian version, but last in little-endian.
TL;DR: unfortunately, nothing about those images makes any sense that I can see.

ELF32 binary, little endian or not?

I know that printf("%08x") shows 4 octets of the stack (410484e4 for instance). Let's say that this value corresponds to the beginning of a char array (called tab). What would be the value of tab[0]: would it be 41 ('A' in ASCII) or e4 ('ä')?
Thank you
p.s: the executable I'm working on is an ELF 32 binary

What kind of padding does Rails OpenSSL::Cipher use for AES-CBC-256?

What padding scheme does OpenSSL::Cipher use when padding blocks for encryption? The documentation is vague.
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/openssl/rdoc/OpenSSL/Cipher.html#method-i-padding-3D
I will need to use the encrypted data with a different language. I'm aware there are many types of padding:
https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding
Your first link advises to
See EVP_CIPHER_CTX_set_padding for further information.
This page indicates (emphasis mine) that:
If padding is enabled (the default) then EVP_EncryptFinal_ex() encrypts the "final" data, that is any data that remains in a partial block. It uses standard block padding (aka PKCS padding). The encrypted final data is written to out which should have sufficient space for one cipher block. The number of bytes written is placed in outl. After this function is called the encryption operation is finished and no further calls to EVP_EncryptUpdate() should be made.
That page also includes a link to additional information that you may find helpful.
PKCS#7/PKCS#5 padding is pretty common for CBC mode. PKCS#5 is identical to PKCS#7, but PKCS#5 refers only to a 64-bit (8-byte) block size, so for AES-256 it is PKCS#7.
from en.wikipedia.org/wiki/Padding_(cryptography)#PKCS7
01
02 02
03 03 03
04 04 04 04
05 05 05 05 05
etc.
If your message size is a multiple of 16 (the block size of AES), then one more block filled with 16 bytes of value 16 (0x10) is added.
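A small sketch of PKCS#7 padding and unpadding for the 16-byte AES block size, in Python, to make the rule concrete:

BLOCK = 16  # AES block size in bytes

def pkcs7_pad(data: bytes) -> bytes:
    # Pad length is 1..16; a full extra block is added when the length is already a multiple of 16.
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS#7 padding")
    return data[:-n]

assert pkcs7_pad(b"A" * 12).endswith(b"\x04\x04\x04\x04")
assert len(pkcs7_pad(b"A" * 16)) == 32   # full block of 0x10 bytes appended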
You can confirm what padding is being used by decrypting a message with NoPadding set in your decryption method. That will pass through any padding as if it was part of the actual message. Have a look at the last block's worth of bytes from the message. That will tell you what type of padding the sender is using. Then set your decryption function to expect that type of padding.

iOS Base64 Lib that prevents CRLF

I'm having trouble decoding/encoding a base64 string because of the CRLF in it.
I've tried this lib Base64.h and this one NSData+Base64.h, but neither handles the CRLF well.
Has anyone had this problem before?
Does anyone have advice on how to avoid these CRLFs? I think Android's Java lib is replacing them with a '0', am I correct?
public static final int CRLF = 4;
Base64 encodes using 64 characters, namely 'A-Za-z0-9+/', with a possible trailing '=' to indicate a length that is not a multiple of 3. CR+LF may be used as a line separator; generally, decode each line separately.
See Wikipedia Base64 for more information on CR+LF variants.
"+vqbiP7s3oe7/puJ8v2a3fOYnf3vmpap"
decoded is:
"FA FA 9B 88 FE EC DE 87 BB FE 9B 89 F2 FD 9A DD F3 98 9D FD EF 9A 96 A9"
The last character is not 0.
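If you just need the line breaks out of the way before decoding, stripping CR/LF first is usually enough. A minimal Python sketch of the idea (the wrapped input below is simply the example string above broken across hypothetical CRLF line breaks):

import base64

encoded = "+vqbiP7s\r\n3oe7/puJ\r\n8v2a3fOYnf3vmpap"

# Remove the line separators, then decode the remaining Base64 characters.
clean = encoded.replace("\r", "").replace("\n", "")
data = base64.b64decode(clean)

print(data.hex(" ").upper())
# FA FA 9B 88 FE EC DE 87 BB FE 9B 89 F2 FD 9A DD F3 98 9D FD EF 9A 96 A9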

AES decryption in iOS: PKCS5 padding and CBC

I am implementing for iOS some decryption code for a message originating on a server over which I have no control. A previous implementation on another platform documents the decryption requirements AES256, specifies the key and the initialization vector, and also says:
* Cipher Mode: CBC
* Padding: PKCS5Padding
The options for creation of a CCCryptor object include only kCCOptionPKCS7Padding and kCCOptionECBMode, noting that CBC is the default. From what I understand about padding for encryption, I don't understand how one might use both; I thought they were mutually exclusive. In creating a CCCryptor for the decryption, I have tried using both 0 for the options and kCCOptionPKCS7Padding, but both give me gibberish after decryption.
I have compared the dump of this decryption with a dump of the decoded byte buffer on the other platform and confirmed that they really are different. So there is something significantly different in this implementation; I just don't know what, and I don't have a clue as to how to get a handle on it. The platforms are different enough that it is difficult to infer much from the previous implementation, and of course its author has since departed.
Any guesses what else could be incompatible or how to troubleshoot this thing?
PKCS#5 padding and PKCS#7 padding are practically the same (adding bytes 01, or 02 02, or 03 03 03, etc., up to the block size of the algorithm, 16 bytes in this case). Officially, PKCS#5 padding should only be used for 8-byte blocks, but in many runtimes the two can be interchanged without issue. Padding only affects the end of the message, so if all you get is gibberish, it's not the padding. ECB is a block mode of operation (one that should not be used to encrypt data that can be distinguished from random numbers): it too requires padding, so the two are not mutually exclusive.
Finally, if you just perform decryption (without MACing or other forms of integrity control) and you return the result of the unpadding operation to the server (i.e., whether decryption failed), your plaintext data is not safe because of padding oracle attacks.
First, you can worry about the padding later. Providing 0 as you have done means AES CBC with no padding, and with that configuration you should see your message just fine, albeit potentially with some padding bytes on the end. So that leaves:
You're not loading the key correctly.
You're not loading the IV correctly.
You're not loading the data correctly.
The server is doing something you don't expect.
To debug this, you need to isolate your system. You can do this by implementing a loopback test where you both encrypt and then decrypt the data to make sure you're loading everything correctly. But that can be misleading. Even if you do something wrong (e.g., loading the key backwards), you could still be able to decrypt what you've encrypted because you're doing it exactly the same wrong way on both sides.
So you need to test against Known Answer Tests (KATs). You can look up the official KATs on the AES wikipedia entry. But it just so happens that I have posted another answer here on SO that we can use.
Given this input:
KEY: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f
IV: 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
PLAIN TEXT: encrypt me
CIPHER TEXT: 338d2a9e28208cad84c457eb9bd91c81
Verify with a third-party program that you can decrypt the cipher text and get the plain text.
$ echo -n "encrypt me" > to_encrypt
$ openssl enc -in to_encrypt -out encrypted -e -aes-256-cbc \
> -K 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f \
> -iv 0000000000000000
$ hexdump -C encrypted
00000000 33 8d 2a 9e 28 20 8c ad 84 c4 57 eb 9b d9 1c 81 |3.*.( ....W.....|
00000010
$ openssl enc -in encrypted -out plain_text -d -aes-256-cbc \
> -K 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f \
> -iv 0000000000000000
$ hexdump -C plain_text
00000000 65 6e 63 72 79 70 74 20 6d 65 |encrypt me|
0000000a
So now try to decrypt this known-answer test in your program. Be sure to enable PKCS7 padding, because that's what I used in this example. As an exercise, decrypt it with no padding and see that the result is the same, except you have padding bytes after the "encrypt me" text.
Implementing the KAT is a big step. It says that your implementation is correct, but your assumptions about the server's behavior are wrong. And then it's time to start questioning those assumptions...
(And P.S., those options you mentioned are not mutually exclusive. ECB means no IV, and CBC means you have an IV. No relation to padding.)
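As one more cross-check, here is a minimal sketch of decrypting the same known-answer test with Python's cryptography package (an assumption of this example; the key, IV, and ciphertext are the values given above):

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = bytes(range(32))        # 00 01 02 ... 1f
iv = bytes(16)                # all-zero IV
ciphertext = bytes.fromhex("338d2a9e28208cad84c457eb9bd91c81")

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
padded = decryptor.update(ciphertext) + decryptor.finalize()

unpadder = padding.PKCS7(128).unpadder()    # 128-bit block size
plain = unpadder.update(padded) + unpadder.finalize()

print(plain)                  # expected: b'encrypt me'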
OK, I know I said it's an exercise, but I want to prove that even if you encrypt with padding and decrypt without padding, you do not get garbage. So, given the KAT that used PKCS7 padding, we decrypt it with the no-padding option and get a readable message followed by 06 bytes used as padding.
$ openssl enc -in encrypted -out plain_text -d -aes-256-cbc \
-K 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f \
-iv 0000000000000000 -nopad
$ hexdump -C plain_text
00000000 65 6e 63 72 79 70 74 20 6d 65 06 06 06 06 06 06 |encrypt me......|
00000010
$
Paul,
The PKCS#5 padding is needed to identify the padding in the decrypted data. For CBC, the input buffer must be a multiple of the cipher block size (16 for AES). For that reason, the buffer to be encrypted is extended with additional bytes. Note that after encryption the original size of the data is lost; PKCS#5 padding allows that size to be recovered. This is done by filling the extended portion of the buffer with repeated bytes whose value equals the padding size. For example, if your cleartext buffer was 12 bytes, you need to add 4 more bytes to make it a multiple of 16 (if the data was already 16 bytes, you add 16 more to make it 32), and you fill those 4 bytes with 0x04 to conform to PKCS#5 padding. When you decrypt, simply look at the last byte of the decrypted data and subtract that number from the length of the decrypted buffer.
What you are doing is padding with '0's. Although you seem to be happy with the results, you will get a surprise when your original data ends in one or more '0's.
It turns out that the explanation for what I was experiencing was embarrassingly simple: I misinterpreted something I read in the previous implementation to imply that it was using a 256-bit key, but in fact it was using a 128-bit key. Make that change and all of a sudden what was obscure becomes cleartext. :-)
Passing 0 for the options argument, to invoke CBC, was in fact correct. What the reference to PKCS5 padding in the previous implementation meant is still mysterious, but that doesn't matter, because what I've got now works.
Thanks for the shot, indiv.
