ECDSA signature length - elliptic-curve

What will the signature length be for a 256-bit EC key with the ECDSA algorithm?
I want to validate the signature length for such a key. It would be great if somebody could help me with an example EC key set.

It depends on how you encode the signature. This is the code segment from OpenSSL that measures the maximum length of an ECDSA signature in DER format.
/** ECDSA_size
 * returns the maximum length of the DER encoded signature
 * \param eckey pointer to a EC_KEY object
 * \return numbers of bytes required for the DER encoded signature
 */
int ECDSA_size(const EC_KEY *r)
{
    int ret, i;
    ASN1_INTEGER bs;
    BIGNUM *order = NULL;
    unsigned char buf[4];
    const EC_GROUP *group;

    if (r == NULL)
        return 0;
    group = EC_KEY_get0_group(r);
    if (group == NULL)
        return 0;

    if ((order = BN_new()) == NULL)
        return 0;
    if (!EC_GROUP_get_order(group, order, NULL))
    {
        BN_clear_free(order);
        return 0;
    }

    i = BN_num_bits(order);
    bs.length = (i + 7) / 8;
    bs.data = buf;
    bs.type = V_ASN1_INTEGER;
    /* If the top bit is set the asn1 encoding is 1 larger. */
    buf[0] = 0xff;

    i = i2d_ASN1_INTEGER(&bs, NULL);
    i += i; /* r and s */
    ret = ASN1_object_size(1, i, V_ASN1_SEQUENCE);
    BN_clear_free(order);
    return(ret);
}
Calling this function with an EC_KEY on the prime256v1 curve as the parameter,
sig_len = ECDSA_size(eckey);
gives sig_len = 72. So you need at most 72 bytes for a DER-encoded ECDSA signature using a 256-bit EC key; the actual signature can be a byte or two shorter when the top bits of r and s happen to be clear.
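For intuition, here is a minimal back-of-the-envelope sketch (plain arithmetic, not OpenSSL) of where the 72 comes from; it assumes short-form DER lengths, which always holds for a 256-bit group order:

#include <cstdio>

int main()
{
    const int orderBytes = (256 + 7) / 8;  // 32 bytes each for r and s
    const int intContent = orderBytes + 1; // +1 leading 0x00 byte if the top bit is set
    const int intEncoded = 2 + intContent; // INTEGER tag + 1-byte length + content = 35
    const int seqContent = 2 * intEncoded; // r and s together = 70
    const int seqEncoded = 2 + seqContent; // SEQUENCE tag + 1-byte length + content = 72
    std::printf("max DER-encoded ECDSA signature: %d bytes\n", seqEncoded);
    return 0;
}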

Related

How can I generate checksum code in Dart?

I want to use the PayMaya EMV Merchant Presented QR Code Specification for Payment Systems. Everything is fine except the CRC; I don't understand how to generate this code.
The specification says the following, but I still can't understand how to generate it:
The checksum shall be calculated according to [ISO/IEC 13239] using the polynomial '1021' (hex) and initial value 'FFFF' (hex). The data over which the checksum is calculated shall cover all data objects, including their ID, Length and Value, to be included in the QR Code, in their respective order, as well as the ID and Length of the CRC itself (but excluding its Value).
Following the calculation of the checksum, the resulting 2-byte hexadecimal value shall be encoded as a 4-character Alphanumeric Special value by converting each nibble to an Alphanumeric Special character.
Example: a CRC with a two-byte hexadecimal value of '007B' is included in the QR Code as "6304007B".
This converts a string to its UTF-8 representation as a sequence of bytes and returns the 16-bit Cyclic Redundancy Check of those bytes (CRC-16/CCITT-FALSE, i.e. polynomial 0x1021 with initial value 0xFFFF, as the spec requires).
import 'dart:convert';
import 'dart:typed_data';

int crc16_CCITT_FALSE(String data) {
  int initial = 0xFFFF;    // initial value
  int polynomial = 0x1021; // 0001 0000 0010 0001 (x^16 + x^12 + x^5 + 1)
  Uint8List bytes = Uint8List.fromList(utf8.encode(data));
  for (var b in bytes) {
    for (int i = 0; i < 8; i++) {
      bool bit = ((b >> (7 - i) & 1) == 1);
      bool c15 = ((initial >> 15 & 1) == 1);
      initial <<= 1;
      if (c15 ^ bit) initial ^= polynomial;
    }
  }
  return initial & 0xffff;
}
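For cross-checking the Dart function above from another language, here is a minimal C++ sketch of the same bit-by-bit CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF, no reflection, no final XOR); for the standard test string "123456789" it prints the well-known check value 0x29B1:

#include <cstdint>
#include <cstdio>
#include <cstring>

uint16_t crc16_ccitt_false(const uint8_t* data, std::size_t len)
{
    uint16_t crc = 0xFFFF;                   // initial value from the spec
    for (std::size_t i = 0; i < len; i++)
    {
        crc ^= (uint16_t)(data[i] << 8);     // feed the next byte into the top of the register
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;                              // no final XOR for this variant
}

int main()
{
    const char* check = "123456789";
    std::printf("0x%04X\n", crc16_ccitt_false((const uint8_t*)check, std::strlen(check))); // 0x29B1
    return 0;
}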
The CRC for ISO/IEC 13239 is CRC-16/ISO-HDLC, per the notes in that catalog. This code implements that CRC and prints the check value 0x906e:
import 'dart:typed_data';

int crc16ISOHDLC(Uint8List bytes) {
  int crc = 0xffff;
  for (var b in bytes) {
    crc ^= b;
    for (int i = 0; i < 8; i++) {
      crc = (crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1;
    }
  }
  return crc ^ 0xffff;
}

void main() {
  Uint8List msg = Uint8List.fromList(
      [0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39]); // "123456789"
  print("0x" + crc16ISOHDLC(msg).toRadixString(16));
}

Is it possible to remove padding at the end from CRC checksum

For example, I calculated the CRC checksum of a 1024 KB file, and the file includes 22 KB of zero padding at the end.
Given the checksum of the full 1024 KB and the size of the zero padding, is it possible to calculate the checksum of the file without the padding, i.e. in the above case the checksum of the first 1002 KB of the file? The assumption is that we don't recalculate the checksum from scratch but reuse the checksum already computed over the entire padded file.
After a normal CRC is calculated, the CRC can be "reverse cycled" backwards past the trailing zeroes. Rather than actually reverse cycling the CRC bit by bit, a carryless multiply can be used:
new crc = (crc · (pow(2, -1 - reverse_distance) % poly)) % poly
where · is a carryless multiply and reverse_distance is the number of trailing zero bits to back out. The -1 accounts for the cyclic period of the CRC: for CRC32, the period is 2^32 - 1 = 0xffffffff.
By generating a table of pow(2, -1 - (i*8)) % poly for i = 1 to n, the time complexity can be reduced to O(1): one table lookup followed by one carryless multiply modulo the polynomial (32 iterations). A sketch of this table variant follows the example code below.
Example code for a 32-byte message with 14 data bytes and 18 zero bytes, where the new CRC is to be stored at msg[14..17]. After the new bytes are stored in the message, a normal CRC calculation over the shortened message will be zero. The example code doesn't use a table, and the time complexity is O(log2(n)) for the pow(2, -1 - (n*8)) % poly calculation.
#include <stdio.h>

typedef unsigned char uint8_t;
typedef unsigned int  uint32_t;

static uint32_t crctbl[256];

void GenTbl(void)                       /* generate crc table */
{
    uint32_t crc;
    uint32_t c;
    uint32_t i;
    for(c = 0; c < 0x100; c++){
        crc = c<<24;
        for(i = 0; i < 8; i++)
            crc = (crc<<1)^((0-(crc>>31))&0x04c11db7);
        crctbl[c] = crc;
    }
}

uint32_t GenCrc(uint8_t * bfr, size_t size) /* generate crc */
{
    uint32_t crc = 0u;
    while(size--)
        crc = (crc<<8)^crctbl[(crc>>24)^*bfr++];
    return(crc);
}

/* carryless multiply modulo crc */
uint32_t MpyModCrc(uint32_t a, uint32_t b)  /* (a*b)%crc */
{
    uint32_t pd = 0;
    uint32_t i;
    for(i = 0; i < 32; i++){
        pd = (pd<<1)^((0-(pd>>31))&0x04c11db7u);
        pd ^= (0-(b>>31))&a;
        b <<= 1;
    }
    return pd;
}

/* exponentiate by repeated squaring modulo crc */
uint32_t PowModCrc(uint32_t p)              /* pow(2,p)%crc */
{
    uint32_t prd = 0x1u;                    /* current product */
    uint32_t sqr = 0x2u;                    /* current square */
    while(p){
        if(p&1)
            prd = MpyModCrc(prd, sqr);
        sqr = MpyModCrc(sqr, sqr);
        p >>= 1;
    }
    return prd;
}

/* message 14 data, 18 zeroes */
/* parities = crc cycled backwards 18 bytes */
int main()
{
    uint32_t pmr;
    uint32_t crc;
    uint32_t par;
    uint8_t msg[32] = {0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,
                       0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x00,0x00,
                       0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                       0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00};
    GenTbl();                           /* generate crc table */
    pmr = PowModCrc(-1-(18*8));         /* pmr = pow(2,-1-18*8)%crc */
    crc = GenCrc(msg, 32);              /* generate crc including 18 zeroes */
    par = MpyModCrc(crc, pmr);          /* par = (crc*pmr)%crc = new crc */
    crc = GenCrc(msg, 14);              /* generate crc for shortened msg */
    printf("%08x %08x\n", par, crc);    /* crc == par */
    msg[14] = (uint8_t)(par>>24);       /* store parities in msg */
    msg[15] = (uint8_t)(par>>16);
    msg[16] = (uint8_t)(par>> 8);
    msg[17] = (uint8_t)(par>> 0);
    crc = GenCrc(msg, 18);              /* crc == 0 */
    printf("%08x\n", crc);
    return 0;
}
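As a follow-up to the table idea mentioned above, here is a minimal sketch of the O(1) variant. It is my own illustration, not part of the original example: it reuses MpyModCrc and PowModCrc from the example code, and MAXZERO is just an assumed upper bound on how many trailing zero bytes you may need to strip.

#include <stdint.h>

/* routines from the example code above */
uint32_t MpyModCrc(uint32_t a, uint32_t b);
uint32_t PowModCrc(uint32_t p);

#define MAXZERO 4096                /* assumed upper bound on zero bytes to strip */

static uint32_t zerotbl[MAXZERO + 1];

/* zerotbl[i] = pow(2,-1-(i*8))%poly, the multiplier that backs out i zero bytes */
void GenZeroTbl(void)
{
    uint32_t i;
    for (i = 1; i <= MAXZERO; i++)
        zerotbl[i] = PowModCrc(0xffffffffu - i*8);  /* same value as -1-(i*8) cast to uint32_t */
}

/* crc of a message ending in i zero bytes -> crc of the message without them */
uint32_t CrcRemoveZeroBytes(uint32_t crc, uint32_t i)
{
    return MpyModCrc(crc, zerotbl[i]);
}

With the table in place, stripping the 18 zero bytes from the example is just CrcRemoveZeroBytes(GenCrc(msg, 32), 18), which matches GenCrc(msg, 14).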
Sure. Look at this answer for code that undoes the trailing zeros, crc32_remove_zeros().

Manual CBC encryption handling with Crypto++

I am trying to play around with manual encryption in CBC mode while still using Crypto++, just to learn whether I can do it manually.
The CBC algorithm is (AFAIK):
Presume we have n plaintext blocks K[1]...K[n]
0. cipher = empty;
1. xor(IV, K1) -> t1
2. encrypt(t1) -> r1
3. cipher += r1
4. xor (r1, K2) -> t2
5. encrypt(t2) -> r2
6. cipher += r2
7. xor(r2, K3)->t3
8. ...
So I tried to implement it with Crypto++. I have a text file containing alphanumeric characters only. Test 1 reads the file chunk by chunk (16 bytes each), encrypts each chunk using CBC mode manually, and concatenates the ciphertext. Test 2 uses Crypto++'s built-in CBC mode.
Test 1
char* key;
char* iv;
//Iterate in K[n] array of n blocks
BSIZE = 16;

std::string vectorToString(vector<char> v){
    string s = "";
    for (int i = 0; i < v.size(); i++){
        s += v[i];   // append; indexing an empty string here would be out of range
    }
    return s;
}

vector<char> xor( vector<char> s1, vector<char> s2, int len){
    vector<char> r;
    for (int i = 0; i < len; i++){
        int u = s1[i] ^ s2[i];
        r.push_back(u);
    }
    return r;
}

vector<char> byteToVector(byte *b, int len){
    vector<char> v;
    for (int i = 0; i < len; i++){
        v.push_back(b[i]);
    }
    return v;
}

string cbc_manual(byte [n]){
    int i = 0;
    //Open a file and read from it, buffer size = 16
    // , equal to DEFAULT_BLOCK_SIZE
    std::ifstream fin(fileName, std::ios::binary | std::ios::in);
    const int BSIZE = 16;
    vector<char> encryptBefore;
    //This function will return cpc
    string cpc = "";
    while (!fin.eof()){
        char buffer[BSIZE];
        //Read a chunk of file
        fin.read(buffer, BSIZE);
        int sb = sizeof(buffer);
        if (i == 0){
            encryptBefore = byteToVector((byte*) iv, BSIZE);
        }
        //If i == 0, xor IV with current buffer
        //else, xor encryptBefore with current buffer
        vector<char> t1 = xor(encryptBefore, byteToVector((byte*) buffer, BSIZE), BSIZE);
        //After xored, encrypt the xor result, it will be current step cipher
        string r1 = encrypt(vectorToString(t1), BSIZE).c_str();
        cpc += r1;
        const char* end = r1.c_str();
        encryptBefore = stringToVector(r1);
        i++;
    }
    return cpc;
}
This is my encrypt() function; because we encrypt only one block at a time, I use ECB (?) mode:
string encrypt(string s, int size){
    ECB_Mode< AES >::Encryption e;
    e.SetKey(key, size);
    string cipher;
    StringSource ss1(s, true,
        new StreamTransformationFilter(e,
            new StringSink(cipher)
        ) // StreamTransformationFilter
    ); // StringSource
    return cipher;
}
And this is the 100% Crypto++ solution:
Test 2
encryptCBC(char * plain){
    CBC_Mode < AES >::Encryption encryption(key, sizeof(key), iv);
    StreamTransformationFilter encryptor(encryption, NULL);
    for (size_t j = 0; j < plain.size(); j++)
        encryptor.Put((byte)plain[j]);
    encryptor.MessageEnd();
    size_t ready = encryptor.MaxRetrievable();
    string cipher(ready, 0x00);
    encryptor.Get((byte*)&cipher[0], cipher.size());
}
The results of Test 1 and Test 2 are different. In fact, the ciphertext from Test 1 contains the result of Test 2. Example:
Test 1's result: aaa[....]bbb[....]ccc[...]...
Test 2's (Crypto++ built-in CBC) result: aaabbbccc...
I know the xor() function may cause a problem related to "sameChar ^ sameChar = 0", but is there any problem related to the algorithm in my code?
This is my Test 2.1, after jww's first suggestion.
static string auto_cbc2(string plain, long size){
    CBC_Mode< AES >::Encryption e;
    e.SetKeyWithIV(key, sizeof(key), iv, sizeof(iv));
    string cipherText;
    CryptoPP::StringSource ss(plain, true,
        new CryptoPP::StreamTransformationFilter(e,
            new CryptoPP::StringSink(cipherText),
            BlockPaddingSchemeDef::NO_PADDING
        ) // StreamTransformationFilter
    ); // StringSource
    return cipherText;
}
It throws an error:
Unhandled exception at 0x7407A6F2 in AES-CRPP.exe: Microsoft C++
exception: CryptoPP::InvalidDataFormat at memory location 0x00EFEA74
I only get this error when using BlockPaddingSchemeDef::NO_PADDING; if I remove it or use BlockPaddingSchemeDef::DEFAULT_PADDING, I get no error. :?
StringSource ss1(s, true,
    new StreamTransformationFilter(e,
        new StringSink(cipher)));
This uses PKCS padding by default. It takes a 16-byte input and produces a 32-byte output due to padding. You should do one of two things.
First, you can use BlockPaddingScheme::NO_PADDING. Note that with NO_PADDING the plaintext length must be a multiple of the block size, otherwise the filter throws. Something like:
StringSource ss1(s, true,
    new StreamTransformationFilter(e,
        new StringSink(cipher),
        BlockPaddingScheme::NO_PADDING));
Second, you can process blocks manually, 16 bytes at a time. Something like:
AES::Encryption encryptor(key, keySize);

byte ibuff[<some size>] = ...;
byte obuff[<some size>];

ASSERT(<some size> % AES::BLOCKSIZE == 0);
unsigned int BLOCKS = <some size>/AES::BLOCKSIZE;

for (unsigned int i=0; i<BLOCKS; i++)
{
    encryptor.ProcessBlock(&ibuff[i*16], &obuff[i*16]);
    // Do the CBC XOR thing...
}
You may be able to call ProcessAndXorBlock from the BlockCipher base class and do it in one shot.
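To make the second option concrete, here is a minimal sketch of manual CBC on top of Crypto++'s low-level AES::Encryption block interface. This is my own illustration rather than part of the original answer; it assumes the key length is a valid AES key size and that the plaintext has already been padded to a multiple of AES::BLOCKSIZE.

#include "cryptopp/aes.h"   // header path is an assumption; it may be just "aes.h" in your setup
#include <cstring>

using namespace CryptoPP;

void cbcEncryptManual(const byte* key, size_t keySize, const byte* iv,
                      const byte* plain, byte* cipher, size_t size)
{
    AES::Encryption encryptor(key, keySize);

    byte chain[AES::BLOCKSIZE];
    std::memcpy(chain, iv, AES::BLOCKSIZE);              // r[0] = IV

    for (size_t off = 0; off < size; off += AES::BLOCKSIZE)
    {
        byte block[AES::BLOCKSIZE];
        for (unsigned int j = 0; j < AES::BLOCKSIZE; j++) // t[i] = K[i] XOR r[i-1]
            block[j] = plain[off + j] ^ chain[j];

        encryptor.ProcessBlock(block, cipher + off);      // r[i] = encrypt(t[i])

        std::memcpy(chain, cipher + off, AES::BLOCKSIZE); // chain the ciphertext forward
    }
}

Feeding the same pre-padded data through CBC_Mode<AES>::Encryption with the same key and IV should produce identical bytes, which is a convenient way to check the manual loop.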

Why would SecKeyEncrypt return paramErr (-50) for input strings longer than 246 bytes?

I am using SecKeyEncrypt with a JSON-formatted string as input. If I pass SecKeyEncrypt a plainTextLength of less than 246, it works. If I pass it a length of 246 or more, it fails with return value paramErr (-50).
In case it's a matter of the string itself, here is an example of what I might send to SecKeyEncrypt:
{"handle":"music-list","sym_key":"MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALeaEO7ZrjgOFGLBzBHZtQuzH2GNDYMLWP+fIFNu5Y+59C6HECY+jt0yOXXom2mzp/WYYI/9G+Ig8OD6YiKv2nMCAwEAAQ==","app_id":"xgfdt.LibraryTestApp","api_key":"7e080f74de3625b90dd293fc8be560a5cdfafc08"}
The 245th character is '0'.
The ONLY input that changes between the working and failing cases is the plainTextLength. SecKeyGetBlockSize() is returning 256 to me, so I expected any input up to 256 characters long to work.
Here is my encrypt method:
+ (NSData*)encrypt:(NSString*)data usingPublicKeyWithTag:(NSString*)tag
{
    OSStatus status = noErr;
    size_t cipherBufferSize;
    uint8_t *cipherBuffer;

    // [cipherBufferSize]
    size_t dataSize = 246;//[data lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
    const uint8_t* textData = [[data dataUsingEncoding:NSUTF8StringEncoding] bytes];

    SecKeyRef publicKey = [Encryption copyPublicKeyForTag:tag];
    NSAssert(publicKey, @"The public key being referenced by tag must have been stored in the keychain before attempting to encrypt data using it!");

    // Allocate a buffer
    cipherBufferSize = SecKeyGetBlockSize(publicKey);
    // this value will not get modified, whereas cipherBufferSize may.
    const size_t fullCipherBufferSize = cipherBufferSize;
    cipherBuffer = malloc(cipherBufferSize);

    NSMutableData* accumulatedEncryptedData = [NSMutableData dataWithCapacity:0];

    // Error handling

    for (int ii = 0; ii*fullCipherBufferSize < dataSize; ii++) {
        const uint8_t* dataToEncrypt = (textData+(ii*fullCipherBufferSize));
        const size_t subsize = (((ii+1)*fullCipherBufferSize) > dataSize) ? fullCipherBufferSize-(((ii+1)*fullCipherBufferSize) - dataSize) : fullCipherBufferSize;

        // Encrypt using the public key.
        status = SecKeyEncrypt( publicKey,
                                kSecPaddingPKCS1,
                                dataToEncrypt,
                                subsize,
                                cipherBuffer,
                                &cipherBufferSize
                                );

        [accumulatedEncryptedData appendBytes:cipherBuffer length:cipherBufferSize];
    }

    if (publicKey) CFRelease(publicKey);
    free(cipherBuffer);

    return accumulatedEncryptedData;
}
From the documentation:
plainTextLen
Length in bytes of the data in the plainText buffer. This must be less than or equal to the value returned by the SecKeyGetBlockSize function. When PKCS1 padding is performed, the maximum length of data that can be encrypted is 11 bytes less than the value returned by the SecKeyGetBlockSize function (secKeyGetBlockSize() - 11).
(emphasis mine)
You're using PKCS1 padding. So if the block size is 256, you can only encrypt up to 245 bytes at a time.
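To make that concrete, here is a minimal sketch (my own, using the same deprecated SecKeyEncrypt API as the question; the function and variable names are hypothetical) that chunks the plaintext at SecKeyGetBlockSize(key) - 11 bytes and resets the output-length argument on every call, since SecKeyEncrypt overwrites it with the number of bytes actually written:

#include <Security/Security.h>
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> encryptChunkedPKCS1(SecKeyRef publicKey,
                                         const uint8_t* plain, size_t plainLen)
{
    const size_t blockSize = SecKeyGetBlockSize(publicKey); // e.g. 256 for a 2048-bit key
    const size_t maxChunk  = blockSize - 11;                // PKCS#1 v1.5 overhead -> 245

    std::vector<uint8_t> out;
    std::vector<uint8_t> buffer(blockSize);

    for (size_t off = 0; off < plainLen; off += maxChunk) {
        const size_t chunk = std::min(maxChunk, plainLen - off);
        size_t cipherLen = buffer.size();   // reset each call; SecKeyEncrypt rewrites it

        OSStatus status = SecKeyEncrypt(publicKey, kSecPaddingPKCS1,
                                        plain + off, chunk,
                                        buffer.data(), &cipherLen);
        if (status != errSecSuccess)
            return std::vector<uint8_t>();  // handle the error as appropriate

        out.insert(out.end(), buffer.begin(), buffer.begin() + cipherLen);
    }
    return out;
}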

How to calculate binary checksum?

I'm working on a hardware-communication app in which I send data to or request data from an external device. I have the request-data part done.
And I just found out I could use some help calculating the checksum.
A packet is created as NSMutableData, then converted into a byte array before being sent out.
A packet looks like this:
0x1E 0x2D 0x2F DATA checksum
I'm thinking I could convert the hex into binary and calculate it byte by byte, but I don't know if that's a good idea. Please let me know if this is the only way to do it, or whether there are built-in functions I don't know about.
Any suggestions will be appreciated.
BTW, I just found the code below (Java) in someone else's post, and I'll try to make it work in my app. If I can, I'll share it. Still, any suggestions will be appreciated.
package org.example.checksum;

public class InternetChecksum {

    /**
     * Calculate the Internet Checksum of a buffer (RFC 1071 - http://www.faqs.org/rfcs/rfc1071.html)
     * Algorithm is
     * 1) apply a 16-bit 1's complement sum over all octets (adjacent 8-bit pairs [A,B], final odd length is [A,0])
     * 2) apply 1's complement to this final sum
     *
     * Notes:
     * 1's complement is bitwise NOT of positive value.
     * Ensure that any carry bits are added back to avoid off-by-one errors
     *
     * @param buf The message
     * @return The checksum
     */
    public long calculateChecksum(byte[] buf) {
        int length = buf.length;
        int i = 0;

        long sum = 0;
        long data;

        // Handle all pairs
        while (length > 1) {
            // Corrected to include @Andy's edits and various comments on Stack Overflow
            data = (((buf[i] << 8) & 0xFF00) | ((buf[i + 1]) & 0xFF));
            sum += data;
            // 1's complement carry bit correction in 16-bits (detecting sign extension)
            if ((sum & 0xFFFF0000) > 0) {
                sum = sum & 0xFFFF;
                sum += 1;
            }

            i += 2;
            length -= 2;
        }

        // Handle remaining byte in odd length buffers
        if (length > 0) {
            // Corrected to include @Andy's edits and various comments on Stack Overflow
            sum += (buf[i] << 8 & 0xFF00);
            // 1's complement carry bit correction in 16-bits (detecting sign extension)
            if ((sum & 0xFFFF0000) > 0) {
                sum = sum & 0xFFFF;
                sum += 1;
            }
        }

        // Final 1's complement value correction to 16-bits
        sum = ~sum;
        sum = sum & 0xFFFF;
        return sum;
    }
}
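If you port this to your app, a convenient sanity check is the worked example from RFC 1071 itself: the bytes 00 01 f2 03 f4 f5 f6 f7 should give a checksum of 0x220D. A minimal C++ sketch of the same fold-the-carries algorithm:

#include <cstdint>
#include <cstdio>
#include <vector>

// RFC 1071 Internet checksum: 16-bit one's-complement sum of big-endian byte pairs,
// with carries folded back in, then complemented.
uint16_t internetChecksum(const std::vector<uint8_t>& buf)
{
    uint32_t sum = 0;
    std::size_t i = 0;
    while (i + 1 < buf.size()) {
        sum += (uint32_t)(buf[i] << 8) | buf[i + 1];
        i += 2;
    }
    if (i < buf.size())                      // odd trailing byte is paired with zero
        sum += (uint32_t)(buf[i] << 8);
    while (sum >> 16)                        // fold carries back into the low 16 bits
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}

int main()
{
    std::vector<uint8_t> example = {0x00, 0x01, 0xf2, 0x03, 0xf4, 0xf5, 0xf6, 0xf7};
    std::printf("0x%04X\n", internetChecksum(example)); // prints 0x220D
    return 0;
}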
When I posted this question a year ago, I was still quite new to Objective-C. It turned out to be very easy to do.
The way you calculate a checksum depends on how the checksum is defined in your communication protocol. In my case, the checksum is simply the sum of all the preceding bytes, i.e. the data you want to send.
So if I have an NSMutableData *cmd holding five bytes:
0x10 0x14 0xE1 0xA4 0x32
the checksum is the last byte of 0x10+0x14+0xE1+0xA4+0x32.
The sum is 0x01DB, so the checksum is 0xDB.
Code:
//i is the length of cmd
- (Byte)CalcCheckSum:(Byte)i data:(NSMutableData *)cmd
{
    Byte *cmdByte = (Byte *)malloc(i);
    memcpy(cmdByte, [cmd bytes], i);

    Byte local_cs = 0;
    int j = 0;
    while (i > 0) {
        local_cs += cmdByte[j];
        i--;
        j++;
    }
    free(cmdByte);              // release the temporary copy
    local_cs = local_cs & 0xff;
    return local_cs;
}
To use it:
Byte checkSum = [self CalcCheckSum:[command length] data:command];
Hope it helps.
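For cross-checking the protocol bytes outside Objective-C, here is a minimal C++ sketch of the same add-everything-and-keep-the-low-byte checksum; the packet bytes below are the ones from the example above:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Sum every byte of the packet and keep the low 8 bits (i.e. the sum modulo 256).
uint8_t sumChecksum(const uint8_t* data, std::size_t len)
{
    uint8_t cs = 0;
    for (std::size_t i = 0; i < len; i++)
        cs = (uint8_t)(cs + data[i]);   // 8-bit arithmetic wraps, so this is mod 256
    return cs;
}

int main()
{
    const uint8_t cmd[] = {0x10, 0x14, 0xE1, 0xA4, 0x32};
    std::printf("checksum: 0x%02X\n", sumChecksum(cmd, sizeof cmd)); // prints 0xDB
    return 0;
}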
