BlackBerry Decryption - BadPaddingException

I have successfully encrypted data in BlackBerry in AES format. In order to verify my result, I am trying to implement decryption in BlackBerry using the following method:
private static byte[] decrypt( byte[] keyData, byte[] ciphertext ) throws CryptoException, IOException
{
    // First, create the AESKey again.
    AESKey key = new AESKey( keyData );
    // Now, create the decryptor engine.
    AESDecryptorEngine engine = new AESDecryptorEngine( key );
    // Since we cannot guarantee that the data will be of an equal block length
    // we want to use a padding engine (PKCS5 in this case).
    PKCS5UnformatterEngine uengine = new PKCS5UnformatterEngine( engine );
    // Create the BlockDecryptor to hide the decryption details away.
    ByteArrayInputStream input = new ByteArrayInputStream( ciphertext );
    BlockDecryptor decryptor = new BlockDecryptor( uengine, input );
    // Now, read in the data. Remember that the last 20 bytes represent
    // the SHA1 hash of the decrypted data.
    byte[] temp = new byte[ 100 ];
    DataBuffer buffer = new DataBuffer();
    for( ;; ) {
        int bytesRead = decryptor.read( temp );
        buffer.write( temp, 0, bytesRead );
        if( bytesRead < 100 ) {
            // We ran out of data.
            break;
        }
    }
    byte[] plaintextAndHash = buffer.getArray();
    int plaintextLength = plaintextAndHash.length - SHA1Digest.DIGEST_LENGTH;
    byte[] plaintext = new byte[ plaintextLength ];
    byte[] hash = new byte[ SHA1Digest.DIGEST_LENGTH ];
    System.arraycopy( plaintextAndHash, 0, plaintext, 0, plaintextLength );
    System.arraycopy( plaintextAndHash, plaintextLength, hash, 0, SHA1Digest.DIGEST_LENGTH );
    // Now, hash the plaintext and compare against the hash
    // that we found in the decrypted data.
    SHA1Digest digest = new SHA1Digest();
    digest.update( plaintext );
    byte[] hash2 = digest.getDigest();
    if( !Arrays.equals( hash, hash2 )) {
        throw new RuntimeException();
    }
    return plaintext;
}
I get a "BadPaddingException" thrown at the following line:
int bytesRead = decryptor.read( temp );
Can anybody please help?

I think the problem might be in this block:
for( ;; ) {
    int bytesRead = decryptor.read( temp );
    buffer.write( temp, 0, bytesRead );
    if( bytesRead < 100 ) {
        // We ran out of data.
        break;
    }
}
When read returns -1 (no more data), you still write that result to the buffer, and the exit condition is wrong as well. Compare it to the equivalent block in the CryptoDemo sample project:
for( ;; ) {
    int bytesRead = decryptor.read( temp );
    if( bytesRead <= 0 )
    {
        // We have run out of information to read, bail out of loop
        break;
    }
    db.write( temp, 0, bytesRead );
}
Also there are a few points you should be careful about, even if they are not causing the error:
AESDecryptorEngine engine = new AESDecryptorEngine( key );
If you read the docs for this constructor, it says:
"Creates an instance of the AESEncryptorEngine class given the AES key
with a default block length of 16 bytes."
But in the previous line, when you create the key, you are doing this:
AESKey key = new AESKey( keyData );
Which, according to the docs, "Creates the longest key possible from existing data", BUT "only the first 128 bits of the array are used". So no matter how long your keyData is, you will always end up with a 128-bit key, the shortest of the three available sizes (128, 192, 256).
Instead, you could explicitly select the key length. For instance, to use AES-256:
AESKey key = new AESKey( keyData, 0, 256 );                    // key length in BITS
AESDecryptorEngine engine = new AESDecryptorEngine( key, 32 ); // key length in BYTES
Finally, even if you get this working, you should be aware that directly deriving the key from the password (which might be of arbitrary size) is not secure. You could use PKCS5KDF2PseudoRandomSource to derive a stronger key from the key material (the password), instead of just using PKCS5 for padding.
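A rough sketch of that idea (assuming PKCS5KDF2PseudoRandomSource takes the password bytes, a salt and an iteration count, and that getBytes() fills the output buffer; check the exact signatures in the net.rim.device.api.crypto docs before relying on this):
// Sketch: derive a 256-bit AES key from a password with PBKDF2 instead of
// using the raw password bytes as key material. 'password' and 'salt' are
// assumed to come from elsewhere in your application.
byte[] passwordBytes = password.getBytes( "UTF-8" );
byte[] salt = loadOrCreateSalt();                       // hypothetical helper
PKCS5KDF2PseudoRandomSource kdf =
    new PKCS5KDF2PseudoRandomSource( passwordBytes, salt, 10000 );
byte[] keyData = new byte[ 32 ];                        // 256 bits of derived key material
kdf.getBytes( keyData );
AESKey key = new AESKey( keyData, 0, 256 );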

Your encrypted data should be correctly padded to the block size (16 bytes).
Try to decrypt the data without padding, and check whether the tail bytes correspond to PKCS#5 padding (for instance, if 5 bytes of padding were needed, the plaintext should end with 0x05 0x05 0x05 0x05 0x05).
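A quick way to do that check, reusing the classes from the question (only the hex dump at the end is new; it simply prints the last block so you can eyeball the padding bytes):
// Decrypt with the raw AES engine only -- no PKCS5UnformatterEngine -- and
// inspect the final block for 0x05 0x05 ... style padding.
AESKey key = new AESKey( keyData );
AESDecryptorEngine engine = new AESDecryptorEngine( key );
BlockDecryptor decryptor = new BlockDecryptor( engine, new ByteArrayInputStream( ciphertext ) );
DataBuffer buffer = new DataBuffer();
byte[] temp = new byte[ 100 ];
int bytesRead;
while( ( bytesRead = decryptor.read( temp ) ) > 0 ) {
    buffer.write( temp, 0, bytesRead );
}
byte[] plain = buffer.getArray();
// If k bytes of PKCS5 padding were applied, the last k bytes should all equal k.
for( int i = plain.length - 16; i < plain.length; i++ ) {
    System.out.print( Integer.toHexString( plain[ i ] & 0xFF ) + " " );
}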

The problem is that any input with the correct block size will decrypt; the catch is that it will most likely decrypt to random-looking garbage, and random-looking garbage is rarely compatible with the PKCS#7 padding scheme, hence the exception.
I say "problem" because this exception may be thrown if the key data is invalid, if the wrong padding or block mode was used, or simply if the input data was garbled somewhere along the way. The best way to debug this is to make 100% sure that the algorithms match and that the binary input parameters (including any defaults supplied by the API) match precisely on both sides.

writing to flash memory dspic33e

I have some questions regarding the flash memory with a dspic33ep512mu810.
I'm aware of how it should be done:
set all the registers for address, latches, etc., then run the unlock sequence to start the write procedure, or call the builtin functions.
But I find that there are some small differences between what I'm experiencing and what is in the DOC.
1) Writing the flash in WORD mode. In the DOC it is pretty straightforward; the following is the example code from the DOC:
int varWord1L = 0xXXXX;
int varWord1H = 0x00XX;
int varWord2L = 0xXXXX;
int varWord2H = 0x00XX;
int TargetWriteAddressL; // bits<15:0>
int TargetWriteAddressH; // bits<22:16>
NVMCON = 0x4001; // Set WREN and word program mode
TBLPAG = 0xFA; // write latch upper address
NVMADR = TargetWriteAddressL; // set target write address
NVMADRU = TargetWriteAddressH;
__builtin_tblwtl(0,varWord1L); // load write latches
__builtin_tblwth(0,varWord1H);
__builtin_tblwtl(0x2,varWord2L);
__builtin_tblwth(0x2,varWord2H);
__builtin_disi(5); // Disable interrupts for NVM unlock sequence
__builtin_write_NVM(); // initiate write
while(NVMCONbits.WR == 1);
But that code doesn't work for every address I want to write to. I found a fix for writing one WORD, but I can't write 2 WORDs where I want. I store everything in the aux memory, so the upper address (NVMADRU) is always 0x7F for me; NVMADR is the address I can change. What I'm seeing is that if the target address modulo 4 is not 0, then I have to put my value in the last 2 latches, otherwise I have to put the value in the first latches.
If the address modulo 4 is not zero, it doesn't behave like the doc code above: the value that ends up at the address is whatever is in the second set of latches.
I fixed it for writing only one word at a time like this:
if(Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
Why am I seeing this behavior?
2) I also want to write a full row. That also doesn't seem to work for me, and I don't know why, because I'm doing what is in the DOC.
I tried a simple row-write and at the end I just read back the first 3 or 4 elements I wrote, to see if it worked:
NVMCON = 0x4002; //set for row programming
TBLPAG = 0x00FA; //set address for the write latches
NVMADRU = 0x007F; //upper address of the aux memory
NVMADR = 0xE7FA;
int latchoffset;
latchoffset = 0;
__builtin_tblwtl(latchoffset, 0);
__builtin_tblwth(latchoffset, 0); //current = 0, available = 1
latchoffset+=2;
__builtin_tblwtl(latchoffset, 1);
__builtin_tblwth(latchoffset, 1); //current = 0, available = 1
latchoffset+=2;
.
. all the way to 127(I know I could have done it in a loop)
.
__builtin_tblwtl(latchoffset, 127);
__builtin_tblwth(latchoffset, 127);
INTCON2bits.GIE = 0; //stop interrupt
__builtin_write_NVM();
while(NVMCONbits.WR == 1);
INTCON2bits.GIE = 1; //start interrupt
int testaddress;
testaddress = 0xE7FA;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
What I see is that the value stored at address 0xE7FA is 125, at 0xE7FC it is 126, and at 0xE7FE it is 127; the rest are all 0xFFFF.
Why is it taking only the last 3 latches and writing them to the first 3 addresses?
Thanks in advance for your help, people.
The dsPIC33 program memory space is treated as 24 bits wide; it is more appropriate to think of each address of the program memory as a lower and upper word, with the upper byte of the upper word being unimplemented. (dsPIC33EPXXX datasheet)
There is a phantom byte every two program words.
Your code:
if(Address % 4)
{
    __builtin_tblwtl(0, 0xFFFF);
    __builtin_tblwth(0, 0x00FF);
    __builtin_tblwtl(2, ValueL);
    __builtin_tblwth(2, ValueH);
}
else
{
    __builtin_tblwtl(0, ValueL);
    __builtin_tblwth(0, ValueH);
    __builtin_tblwtl(2, 0xFFFF);
    __builtin_tblwth(2, 0x00FF);
}
...will be fine for writing a bootloader that generates its values from a valid Intel HEX file, but it doesn't make storing data structures simple, because the phantom byte is not taken into account.
If you create a uint32_t variable and look at the compiled HEX file, you'll notice that it actually occupies the least significant words of two 24-bit program words. That is, the 32-bit value is placed into a 64-bit range, but only 48 of those 64 bits are programmable; the rest are phantom bytes (or zeros). That leaves three programmable bytes per program word (per pair of addresses).
What I tend to do when writing data is keep everything 32-bit aligned and do the same thing the compiler does.
Writing:
UINT32 value = ....;
:
__builtin_tblwtl(0, value.word.word_L); // least significant word of 32-bit value placed here
__builtin_tblwth(0, 0x00); // phantom byte + unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word of 32-bit value placed here
__builtin_tblwth(2, 0x00); // phantom byte + unused byte
Reading:
UINT32 *value
:
value->word.word_L = __builtin_tblrdl(offset);
value->word.word_H = __builtin_tblrdl(offset+2);
UINT32 structure:
typedef union _UINT32 {
    uint32_t val32;
    struct {
        uint16_t word_L;
        uint16_t word_H;
    } word;
    uint8_t bytes[4];
} UINT32;
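Put together, a single word-mode write of one 32-bit value could look roughly like this (a sketch stitching the fragments above together; the register names and builtins follow the code in the question, and the target address is assumed to be 32-bit aligned, i.e. address % 4 == 0):
void write_uint32(uint16_t addrUpper, uint16_t addrLower, UINT32 value)
{
    NVMCON = 0x4001;                         /* WREN + word program mode      */
    TBLPAG = 0xFA;                           /* write latch page              */
    NVMADRU = addrUpper;                     /* e.g. 0x7F for aux memory      */
    NVMADR  = addrLower;                     /* 32-bit aligned target address */

    __builtin_tblwtl(0, value.word.word_L);  /* lower program word            */
    __builtin_tblwth(0, 0x00);               /* phantom/unused byte           */
    __builtin_tblwtl(2, value.word.word_H);  /* upper program word            */
    __builtin_tblwth(2, 0x00);               /* phantom/unused byte           */

    __builtin_disi(5);                       /* lock out interrupts           */
    __builtin_write_NVM();                   /* start the write               */
    while (NVMCONbits.WR == 1);              /* wait for completion           */
}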

Parsing cbor stream

I'm trying to parse a CBOR stream using tinyCBOR. The goal is to write generic parsing code for the map type (because I don't know how many keys are in the CBOR stream or what they are), not to treat it like JSON. I just want to get values using a key, but to get a value by key I have to know the key.
I am able to parse a value by passing the key to this function:
cbor_value_map_find_value(&main_value,"Age",&map_value);
but a few things are still not clear to me.
What sequence should I follow to get the keys and values from a CBOR stream?
For example, the following is my data in map format:
{"Roll_number": 7, "Age": 24, "Name": "USER"}
Here is the binary format from cbor.me:
A3 # map(3)
6B # text(11)
526F6C6C5F6E756D626572 # "Roll_number"
07 # unsigned(7)
63 # text(3)
416765 # "Age"
18 18 # unsigned(24)
64 # text(4)
4E616D65 # "Name"
64 # text(4)
55534552 # "USER"
1. How do I get a key from the stream, like Roll_number or Age? (Getting keys and values sequentially is also fine.)
2. After getting the Roll_number value, how can I jump to the next element ("Age") to get its key and value?
3. How do I identify that I have reached the end of the stream and there is no more data?
Any snippet showing how to parse, and which sequence of functions to follow, would help.
Any help is appreciated.
Thanks!!!
The example code is pretty helpful for understanding the API. To iterate over the keys and values of a map, you call cbor_value_enter_container, then cbor_value_advance until cbor_value_at_end returns true (as long as there are no nested maps or arrays you want to look inside). For example:
// Assumes `input` is the byte array holding the encoded CBOR data.
CborParser parser;
CborValue it, map;
CborError err;
int val;

cbor_parser_init(input, sizeof(input), 0, &parser, &it);
if (!cbor_value_is_map(&it)) {
    return 1;
}
err = cbor_value_enter_container(&it, &map);
if (err) return 1;
while (!cbor_value_at_end(&map)) {
    // get the key. Remember, keys don't have to be strings.
    if (!cbor_value_is_text_string(&map)) {
        return 1;
    }
    char *buf;
    size_t n;
    // Note: this also advances to the value
    err = cbor_value_dup_text_string(&map, &buf, &n, &map);
    if (err) return 1;
    printf("Key: '%*s'\n", (int)n-1, buf);
    if (strncmp(buf, "Age", n-1) == 0) {
        if (cbor_value_is_integer(&map)) {
            // Found the expected key and value type
            err = cbor_value_get_int(&map, &val);
            if (err) return 1;
            printf("age: %d\n", val);
        }
        // note: can't break here, have to keep going until the end if you want
        // `it` to still be valid.
    }
    free(buf);
    err = cbor_value_advance(&map);
    if (err) return 1;
}
err = cbor_value_leave_container(&it, &map);
if (err) return 1;
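If you want it fully generic (unknown keys and value types), you can dispatch on cbor_value_get_type() inside the same loop. A rough sketch, handling only integer and text-string values (other CBOR types would need their own branches):
while (!cbor_value_at_end(&map)) {
    /* key: only text keys are printed here; anything else is skipped */
    if (cbor_value_is_text_string(&map)) {
        char *key; size_t klen;
        if (cbor_value_dup_text_string(&map, &key, &klen, &map)) return 1;
        printf("key: %s\n", key);
        free(key);
    } else {
        if (cbor_value_advance(&map)) return 1;   /* skip the key */
    }
    /* value: branch on its runtime type */
    switch (cbor_value_get_type(&map)) {
    case CborIntegerType: {
        int v;
        if (cbor_value_get_int(&map, &v)) return 1;
        printf("  int value: %d\n", v);
        break;
    }
    case CborTextStringType: {
        char *s; size_t slen;
        if (cbor_value_dup_text_string(&map, &s, &slen, &map)) return 1;
        printf("  text value: %s\n", s);
        free(s);
        continue;   /* dup_text_string already advanced past the value */
    }
    default:
        break;
    }
    if (cbor_value_advance(&map)) return 1;       /* move on to the next key */
}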

Zebra Printer - Not Printing PNG Stream *Provided my own answer*

I think I'm very close to getting this to print, but it still isn't printing. There is no exception thrown and it does seem to be reaching the Zebra printer, but nothing comes out. It's a long shot, as I think most people are in the same position I am and know little about it. Any help anyone can give, no matter how small, will be welcomed; I'm losing the will to live.
using (var response = request.GetResponse())
{
    using (var responseStream = response.GetResponseStream())
    {
        using (var stream = new MemoryStream())
        {
            if (responseStream == null)
            {
                return;
            }
            responseStream.CopyTo(stream);
            stream.Position = 0;
            using (var zipout = ZipFile.Read(stream))
            {
                using (var ms = new MemoryStream())
                {
                    foreach (var e in zipout.Where(e => e.FileName.Contains(".png")))
                    {
                        e.Extract(ms);
                    }
                    if (ms.Length <= 0)
                    {
                        return;
                    }
                    var binaryData = ms.ToArray();
                    byte[] compressedFileData;
                    // Compress the data using the LZ77 algorithm.
                    using (var outStream = new MemoryStream())
                    {
                        using (var compress = new DeflateStream(outStream, CompressionMode.Compress, true))
                        {
                            compress.Write(binaryData, 0, binaryData.Length);
                            compress.Flush();
                            compress.Close();
                        }
                        compressedFileData = outStream.ToArray();
                    }
                    // Encode the compressed data using the MIME Base64 algorithm.
                    var base64 = Convert.ToBase64String(compressedFileData);
                    // Calculate a CRC across the encoded data.
                    var crc = Calc(Convert.FromBase64String(base64));
                    // Add a unique header to differentiate the new format from the existing ASCII hexadecimal encoding.
                    var finalData = string.Format(":Z64:{0}:{1}", base64, crc);
                    var zplToSend = "~DYR:LOGO,P,P," + finalData.Length + ",," + finalData;
                    const string PrintImage = "^XA^FO0,0^IMR:LOGO.PNG^FS^XZ";
                    try
                    {
                        var client = new System.Net.Sockets.TcpClient();
                        client.Connect(IpAddress, Port);
                        var writer = new StreamWriter(client.GetStream(), Encoding.UTF8);
                        writer.Write(zplToSend);
                        writer.Flush();
                        writer.Write(PrintImage);
                        writer.Close();
                        client.Close();
                    }
                    catch (Exception ex)
                    {
                        // Catch Exception
                    }
                }
            }
        }
    }
}
private static ushort Calc(byte[] data)
{
    ushort wCrc = 0;
    for (var i = 0; i < data.Length; i++)
    {
        wCrc ^= (ushort)(data[i] << 8);
        for (var j = 0; j < 8; j++)
        {
            if ((wCrc & 0x8000) != 0)
            {
                wCrc = (ushort)((wCrc << 1) ^ 0x1021);
            }
            else
            {
                wCrc <<= 1;
            }
        }
    }
    return wCrc;
}
The following code is working for me. The issue was the commands; these are very, very important! An overview of the commands I have used is below; more can be found here.
PrintImage
^XA
Start Format Description The ^XA command is used at the beginning of ZPL II code. It is the opening bracket and indicates the start of a new label format. This command is substituted with a single ASCII control character STX (control-B, hexadecimal 02). Format ^XA Comments Valid ZPL II format requires that label formats should start with the ^XA command and end with the ^XZ command.
^FO
Field Origin Description The ^FO command sets a field origin, relative to the label home (^LH) position. ^FO sets the upper-left corner of the field area by defining points along the x-axis and y-axis independent of the rotation. Format ^FOx,y,z
x = x-axis location (in dots). Accepted Values: 0 to 32000. Default Value: 0
y = y-axis location (in dots). Accepted Values: 0 to 32000. Default Value: 0
z = justification. The z parameter is only supported in firmware versions V60.14.x, V50.14.x, or later. Accepted Values: 0 = left justification, 1 = right justification, 2 = auto justification (script dependent). Default Value: last accepted ^FW value or ^FW default
^IM
Image Move Description The ^IM command performs a direct move of an image from storage area into the bitmap. The command is identical to the ^XG command (Recall Graphic), except there are no sizing parameters. Format ^IMd:o.x
d = location of stored object Accepted Values: R:, E:, B:, and A: Default Value: search priority
o = object name Accepted Values: 1 to 8 alphanumeric characters Default Value: if a name is not specified, UNKNOWN is used
x = extension Fixed Value: .GRF, .PNG
^FS
Field Separator Description The ^FS command denotes the end of the field definition. Alternatively, ^FS command can also be issued as a single ASCII control code SI (Control-O, hexadecimal 0F). Format ^FS
^XZ
End Format Description The ^XZ command is the ending (closing) bracket. It indicates the end of a label format. When this command is received, a label prints. This command can also be issued as a single ASCII control character ETX (Control-C, hexadecimal 03). Format ^XZ Comments Label formats must start with the ^XA command and end with the ^XZ command to be in valid ZPL II format.
zplToSend
^MN
Media Tracking Description This command specifies the media type being used and the black mark offset in dots. This bulleted list shows the types of media associated with this command:
Continuous Media – this media has no physical characteristic (such as a web, notch, perforation, black mark) to separate labels. Label length is determined by the ^LL command.
Continuous Media, variable length – same as Continuous Media, but if portions of the printed label fall outside of the defined label length, the label size will automatically be extended to contain them. This label length extension applies only to the current label. Note that ^MNV still requires the use of the ^LL command to define the initial desired label length.
Non-continuous Media – this media has some type of physical characteristic (such as web, notch, perforation, black mark) to separate the labels.
Format ^MNa,b
a = media being used Accepted Values: N = continuous media Y = non-continuous media web sensing d, e W = non-continuous media web sensing d, e M = non-continuous media mark sensing A = auto-detects the type of media during calibration d, f V = continuous media, variable length g Default Value: a value must be entered or the command is ignored
b = black mark offset in dots This sets the expected location of the media mark relative to the point of separation between documents. If set to 0, the media mark is expected to be found at the point of separation. (i.e., the perforation, cut point, etc.) All values are listed in dots. This parameter is ignored unless the a parameter is set to M. If this parameter is missing, the default value is used. Accepted Values: -80 to 283 for direct-thermal only printers -240 to 566 for 600 dpi printers -75 to 283 for KR403 printers -120 to 283 for all other printers Default Value: 0
~DY
Download Objects Description The ~DY command downloads to the printer graphic objects or fonts in any supported format. This command can be used in place of ~DG for more saving and loading options. ~DY is the preferred command to download TrueType fonts on printers with firmware later than X.13. It is faster than ~DU. The ~DY command also supports downloading wireless certificate files. Format ~DYd:f,b,x,t,w,data
Note
When using certificate files, your printer supports:
- Using Privacy Enhanced Mail (PEM) formatted certificate files.
- Using the client certificate and private key as two files, each downloaded separately.
- Using exportable PAC files for EAP-FAST.
- Zebra recommends using Linear sty
d = file location .NRD and .PAC files reside on E: in firmware versions V60.15.x, V50.15.x, or later. Accepted Values: R:, E:, B:, and A: Default Value: R:
f = file name Accepted Values: 1 to 8 alphanumeric characters Default Value: if a name is not specified, UNKNOWN is used
b = format downloaded in data field .TTE and .TTF are only supported in firmware versions V60.14.x, V50.14.x, or later. Accepted Values: A = uncompressed (ZB64, ASCII) B = uncompressed (.TTE, .TTF, binary) C = AR-compressed (used only by Zebra’s BAR-ONE® v5) P = portable network graphic (.PNG) - ZB64 encoded Default Value: a value must be specified
clearDownLabel
^ID
Description The ^ID command deletes objects, graphics, fonts, and stored formats from storage areas. Objects can be deleted selectively or in groups. This command can be used within a printing format to delete objects before saving new ones, or in a stand-alone format to delete objects.
The image name and extension support the use of the asterisk (*) as a wild card. This allows you to easily delete a selected groups of objects. Format ^IDd:o.x
d = location of stored object Accepted Values: R:, E:, B:, and A: Default Value: R:
o = object name Accepted Values: any 1 to 8 character name Default Value: if a name is not specified, UNKNOWN is used
x = extension Accepted Values: any extension conforming to Zebra conventions
Default Value: .GRF
const string PrintImage = "^XA^FO0,0,0^IME:LOGO.PNG^FS^XZ";
var zplImageData = string.Empty;
using (var response = request.GetResponse())
{
    using (var responseStream = response.GetResponseStream())
    {
        using (var stream = new MemoryStream())
        {
            if (responseStream == null)
            {
                return;
            }
            responseStream.CopyTo(stream);
            stream.Position = 0;
            using (var zipout = ZipFile.Read(stream))
            {
                using (var ms = new MemoryStream())
                {
                    foreach (var e in zipout.Where(e => e.FileName.Contains(".png")))
                    {
                        e.Extract(ms);
                    }
                    if (ms.Length <= 0)
                    {
                        return;
                    }
                    var binaryData = ms.ToArray();
                    foreach (var b in binaryData)
                    {
                        var hexRep = string.Format("{0:X}", b);
                        if (hexRep.Length == 1)
                        {
                            hexRep = "0" + hexRep;
                        }
                        zplImageData += hexRep;
                    }
                    var zplToSend = "^XA" + "^FO0,0,0" + "^MNN" + "~DYE:LOGO,P,P," + binaryData.Length + ",," + zplImageData + "^XZ";
                    var label = GenerateStreamFromString(zplToSend);
                    var client = new System.Net.Sockets.TcpClient();
                    client.Connect(IpAddress, Port);
                    label.CopyTo(client.GetStream());
                    label.Flush();
                    client.Close();
                    var cmd = GenerateStreamFromString(PrintImage);
                    var client2 = new System.Net.Sockets.TcpClient();
                    client2.Connect(IpAddress, Port);
                    cmd.CopyTo(client2.GetStream());
                    cmd.Flush();
                    client2.Close();
                    var clearDownLabel = GenerateStreamFromString("^XA^IDR:LOGO.PNG^FS^XZ");
                    var client3 = new System.Net.Sockets.TcpClient();
                    client3.Connect(IpAddress, Port);
                    clearDownLabel.CopyTo(client3.GetStream());
                    clearDownLabel.Flush();
                    client3.Close();
                }
            }
        }
    }
}
}
Easy once you know how.
Zebra ZPL logo example in base64
Python3
import crcmod
import base64
crc16 = crcmod.predefined.mkCrcFun('xmodem')
s = hex(crc16(ZPL_LOGO.encode()))[2:]
print (f"crc16: {s}")
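For completeness, here is how the ZB64 payload that this CRC is computed over could be assembled in Python, mirroring the :Z64: construction attempted in the C# code earlier in this thread (a sketch: the input file name is made up, and whether the printer accepts it still depends on the command details discussed above):
import base64
import zlib
import crcmod

crc16 = crcmod.predefined.mkCrcFun('xmodem')

with open('logo.png', 'rb') as f:          # hypothetical input file
    png = f.read()

compressed = zlib.compress(png)            # deflate/LZ77, as in the C# example
b64 = base64.b64encode(compressed).decode('ascii')
crc = crc16(b64.encode('ascii'))           # CRC over the Base64 text, as above

payload = f':Z64:{b64}:{crc:04x}'
zpl = f'~DYE:LOGO,P,P,{len(payload)},,{payload}'
print(zpl[:80])                            # preview the start of the command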
Poorly documented, to say the least.

Is there a public key initialization API with point compression?

I am tumbling around with Crypto++ and cannot find an answer to this specific question. Here is sample source code (partial):
AutoSeededRandomPool prng;
//Generate a private key
ECDSA<ECP, CryptoPP::SHA256>::PrivateKey privateKey;
privateKey.Initialize(prng, CryptoPP::ASN1::secp256r1());
// Generate publicKey
ECDSA<ECP, CryptoPP::SHA256>::PublicKey publicKey;
privateKey.MakePublicKey(publicKey);
// Extract Component values
Integer p = privateKey.GetGroupParameters().GetCurve().GetField().GetModulus();
Integer a = privateKey.GetGroupParameters().GetCurve().GetA();
Integer b = privateKey.GetGroupParameters().GetCurve().GetB();
Integer Gx = privateKey.GetGroupParameters().GetSubgroupGenerator().x;
Integer Gy = privateKey.GetGroupParameters().GetSubgroupGenerator().y;
Integer n = privateKey.GetGroupParameters().GetSubgroupOrder();
Integer h = privateKey.GetGroupParameters().GetCofactor();
Integer Qx = publicKey.GetPublicElement().x;
Integer Qy = publicKey.GetPublicElement().y;
Integer x = privateKey.GetPrivateExponent();
// Construct Point element;
ECP curve(p,a,b);
ECP::Point G(Gx,Gy);
ECP::Point Q(Qx,Qy);
//Build publicKey using elements (no point compression)
ECDSA<ECP, CryptoPP::SHA256>::PublicKey GeneratedPublicKey;
GeneratedPublicKey.Initialize(curve,G,n,Q);
assert(GeneratedPublicKey.Validate(prng, 3));
//Build publicKey using elements (with point compression)?
This way, I can generate the publicKey from its component values. However, I cannot make it work with point compression, which means I don't have the Qy value. Is there a way to do it? The Initialize method has two overloads, but none of them covers the point compression case.
My question is specifically about Crypto++ and "PublicKey.Initialize(curve,G,n,Q)". In my current project I cannot transfer the whole publicKey; I am forced to specify the domain parameters as an index value and can only transfer the Qx value. So I need to initialize the publicKey with something like "PublicKey.Initialize(curve,G,n,Q)", but I cannot find such an initialization API that handles point compression.
So, this is not about "how to do point compression" but "is there a way to initialize the public key without having the Qy value?"
How do I construct an ECDSA publicKey using only the x value (point compression)?
x is the private exponent. The public key is a point on the curve, and it does not use the private exponent.
To get the public key, take the private exponent and raise your base point to it. That is, Q = G^x (in elliptic-curve additive notation, Q = x*G).
If you want to set the private exponent on a private key or decryptor, then set the domain parameters (i.e., DL_GroupParameters_EC< ECP > or DL_GroupParameters_EC< EC2M >) and then call SetPrivateExponent(x);.
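In Crypto++ terms, a sketch of that using the names from the question (ExponentiateBase and the Initialize overload taking group parameters plus a point are part of the DL_* interfaces, but verify them against your library version):
// Rebuild the public element from the private exponent x: Q = G^x (i.e. Q = x*G
// in additive notation), then initialize a public key from it.
const DL_GroupParameters_EC<ECP>& params = privateKey.GetGroupParameters();
Integer x = privateKey.GetPrivateExponent();
ECP::Point Q = params.ExponentiateBase(x);

ECDSA<ECP, CryptoPP::SHA256>::PublicKey rebuiltKey;
rebuiltKey.Initialize(params, Q);
assert(rebuiltKey.Validate(prng, 3));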
Have you reviewed your previous question, How can I recover compressed y value from sender? The community took the time to provide you with an answer and sample code, but you did not acknowledge or follow up.
I think owlstead said it best here:
Why would we care answer you if you are not inclined to accept answers
or even follow up to them? Your questions are all right, but the way
you treat the community is terrible.
"Is there a way to initialize public key without having Qy value?"
Yes, there is. Here is a Crypto++ example:
#include <string>
#include <iostream>
#include <cryptopp/cryptlib.h>
#include <cryptopp/ecp.h>
#include <cryptopp/eccrypto.h>
#include <cryptopp/hex.h>
#include <cryptopp/oids.h>
#include <cryptopp/osrng.h>
using namespace CryptoPP;
using std::cout;
using std::endl;
int main()
{
    OID curve = ASN1::secp256r1();
    ECDH<ECP>::Domain domain(curve);
    SecByteBlock privKey(domain.PrivateKeyLength());
    SecByteBlock pubKey(domain.PublicKeyLength());
    AutoSeededRandomPool prng;
    domain.GenerateKeyPair(prng, privKey, pubKey);

    // Convert public key to string representation
    std::string pub_str;
    HexEncoder encoder;
    encoder.Attach( new StringSink(pub_str) );
    encoder.Put( pubKey.data(), pubKey.size() );
    encoder.MessageEnd();

    // Uncompressed point - first byte '04' in front of the string.
    std::cout << "Uncompressed public key (point) " << pub_str << endl;

    // Extract x value from the point
    std::string public_point_x = pub_str.substr(2, 64);
    // Compressed - '02' byte in front of the string.
    // (Note: strictly the prefix is '02' or '03' depending on the parity of y.)
    public_point_x = "02" + public_point_x;
    std::cout << "Compressed public key (point) " << public_point_x << endl;

    // ----- reconstruct point from compressed point/value.
    StringSource ss(public_point_x, true, new HexDecoder);
    ECP::Point point;
    domain.GetGroupParameters().GetCurve().DecodePoint(point, ss, ss.MaxRetrievable());
    cout << "Result after decompression X: " << std::hex << point.x << endl;
    cout << "Result after decompression Y: " << std::hex << point.y << endl;
    return 0;
}
I hope this answers your question. I was using ECDH, but it should work equally well with the ECDSA class.
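To feed the decompressed point back into an ECDSA public key, something like the following should work (a sketch; AccessGroupParameters() and SetPublicElement() are the generic DL_* accessors, so double-check them against your Crypto++ version):
// 'point' is the ECP::Point recovered by DecodePoint() above.
ECDSA<ECP, SHA256>::PublicKey publicKey;
publicKey.AccessGroupParameters().Initialize(ASN1::secp256r1());
publicKey.SetPublicElement(point);
if (!publicKey.Validate(prng, 3))           // prng as declared in main() above
    return 1;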

Is it possible to have zlib read from and write to the same memory buffer?

I have a character buffer that I would like to compress in place. Right now I have it set up so there are two buffers and zlib's deflate reads from the input buffer and writes to the output buffer. Then I have to change the input buffer pointer to point to the output buffer and free the old input buffer. This seems like an unnecessary amount of allocation. Since zlib is compressing, the next_out pointer should always lag behind the next_in pointer. Anyway, I can't find enough documentation to verify this and was hoping someone had some experience with this. Thanks for your time!
It can be done, with some care. The routine below does it. Not all data is compressible, so you have to handle the case where the output data catches up with the input data. It takes a lot of incompressible data, but it can happen (see comments in code), in which case you have to allocate a buffer to temporarily hold the remaining input.
/* Compress buf[0..len-1] in place into buf[0..*max-1]. *max must be greater
than or equal to len. Return Z_OK on success, Z_BUF_ERROR if *max is not
enough output space, Z_MEM_ERROR if there is not enough memory, or
Z_STREAM_ERROR if *strm is corrupted (e.g. if it wasn't initialized or if it
was inadvertently written over). If Z_OK is returned, *max is set to the
actual size of the output. If Z_BUF_ERROR is returned, then *max is
unchanged and buf[] is filled with *max bytes of uncompressed data (which is
not all of it, but as much as would fit).
Incompressible data will require more output space than len, so max should
be sufficiently greater than len to handle that case in order to avoid a
Z_BUF_ERROR. To assure that there is enough output space, max should be
greater than or equal to the result of deflateBound(strm, len).
strm is a deflate stream structure that has already been successfully
initialized by deflateInit() or deflateInit2(). That structure can be
reused across multiple calls to deflate_inplace(). This avoids unnecessary
memory allocations and deallocations from the repeated use of deflateInit()
and deflateEnd(). */
int deflate_inplace(z_stream *strm, unsigned char *buf, unsigned len,
                    unsigned *max)
{
    int ret;                    /* return code from deflate functions */
    unsigned have;              /* number of bytes in temp[] */
    unsigned char *hold;        /* allocated buffer to hold input data */
    unsigned char temp[11];     /* must be large enough to hold zlib or gzip
                                   header (if any) and one more byte -- 11
                                   works for the worst case here, but if gzip
                                   encoding is used and a deflateSetHeader()
                                   call is inserted in this code after the
                                   deflateReset(), then the 11 needs to be
                                   increased to accommodate the resulting gzip
                                   header size plus one */

    /* initialize deflate stream and point to the input data */
    ret = deflateReset(strm);
    if (ret != Z_OK)
        return ret;
    strm->next_in = buf;
    strm->avail_in = len;

    /* kick start the process with a temporary output buffer -- this allows
       deflate to consume a large chunk of input data in order to make room for
       output data there */
    if (*max < len)
        *max = len;
    strm->next_out = temp;
    strm->avail_out = sizeof(temp) > *max ? *max : sizeof(temp);
    ret = deflate(strm, Z_FINISH);
    if (ret == Z_STREAM_ERROR)
        return ret;

    /* if we can, copy the temporary output data to the consumed portion of the
       input buffer, and then continue to write up to the start of the consumed
       input for as long as possible */
    have = strm->next_out - temp;
    if (have <= (strm->avail_in ? len - strm->avail_in : *max)) {
        memcpy(buf, temp, have);
        strm->next_out = buf + have;
        have = 0;
        while (ret == Z_OK) {
            strm->avail_out = strm->avail_in ? strm->next_in - strm->next_out :
                                               (buf + *max) - strm->next_out;
            ret = deflate(strm, Z_FINISH);
        }
        if (ret != Z_BUF_ERROR || strm->avail_in == 0) {
            *max = strm->next_out - buf;
            return ret == Z_STREAM_END ? Z_OK : ret;
        }
    }

    /* the output caught up with the input due to insufficiently compressible
       data -- copy the remaining input data into an allocated buffer and
       complete the compression from there to the now empty input buffer (this
       will only occur for long incompressible streams, more than ~20 MB for
       the default deflate memLevel of 8, or when *max is too small and less
       than the length of the header plus one byte) */
    hold = strm->zalloc(strm->opaque, strm->avail_in, 1);
    if (hold == Z_NULL)
        return Z_MEM_ERROR;
    memcpy(hold, strm->next_in, strm->avail_in);
    strm->next_in = hold;
    if (have) {
        memcpy(buf, temp, have);
        strm->next_out = buf + have;
    }
    strm->avail_out = (buf + *max) - strm->next_out;
    ret = deflate(strm, Z_FINISH);
    strm->zfree(strm->opaque, hold);
    *max = strm->next_out - buf;
    return ret == Z_OK ? Z_BUF_ERROR : (ret == Z_STREAM_END ? Z_OK : ret);
}
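A minimal usage sketch (per the comment block above, the stream must already be set up with deflateInit() or deflateInit2(); the data/data_len names here are made up, and sizing the buffer with deflateBound() avoids Z_BUF_ERROR on incompressible input):
z_stream strm;
strm.zalloc = Z_NULL;                      /* let zlib pick its default allocator */
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;
if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
    return -1;

unsigned len = data_len;                   /* number of valid bytes in data */
unsigned max = deflateBound(&strm, len);   /* worst-case compressed size */
unsigned char *buf = realloc(data, max);   /* grow the buffer for the worst case */
if (buf == NULL)
    return -1;

int ret = deflate_inplace(&strm, buf, len, &max);
if (ret == Z_OK)
    printf("compressed %u -> %u bytes in place\n", len, max);

deflateEnd(&strm);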
