Converting InputStream data to different datatypes - ios

I have been working with InputStreams in Objective-C and seem to have taken a wrong turn in processing the received data.
I am receiving chunks of bytes, which are read and converted to datatypes such as integers, floats, doubles, etc.
So far my process is like this:
readBuffer = (uint8_t *) malloc(4);
memset(readBuffer, 0, 4);
while (length < byteLength) {
    length = [inputStream read:readBuffer maxLength:4];
}
[something fourByteUint8ToLong:readBuffer];
Now, in order to convert the 4 bytes to a long:
- (long) fourByteUint8ToLong:(uint8_t *) buffer
{
    long temp = 0;
    temp |= buffer[0] & 0xFF;
    temp <<= 8;
    temp |= buffer[1] & 0xFF;
    temp <<= 8;
    temp |= buffer[2] & 0xFF;
    temp <<= 8;
    temp |= buffer[3] & 0xFF;
    return temp;
}
Is there not an easier way to handle this using the Objective-C classes?
If so, how? For example 8 bytes -> double, 4 bytes -> float.
Thanks in advance.

Problem solved by using the CoreFoundation byte-order function CFConvertFloat64SwappedToHost:
uint8_t * buffer;
buffer = (uint8_t *) malloc(8);
// ... fill buffer with the 8 received bytes (big-endian order) ...
double tempDouble = CFConvertFloat64SwappedToHost(*((CFSwappedFloat64*)buffer));
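For reference, here is a minimal sketch of both conversions with the CoreFoundation byte-order functions. It assumes the sender transmits the values in big-endian (network) order, and uses memcpy to sidestep alignment and aliasing concerns:

#include <CoreFoundation/CoreFoundation.h>
#include <string.h>

// 4 received bytes (big-endian) -> float
float floatFromBigEndianBytes(const uint8_t *bytes) {
    CFSwappedFloat32 swapped;
    memcpy(&swapped, bytes, sizeof(swapped));
    return CFConvertFloat32SwappedToHost(swapped);
}

// 8 received bytes (big-endian) -> double
double doubleFromBigEndianBytes(const uint8_t *bytes) {
    CFSwappedFloat64 swapped;
    memcpy(&swapped, bytes, sizeof(swapped));
    return CFConvertFloat64SwappedToHost(swapped);
}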

Related

How can I generate check sum code in dart?

I want to use the PayMaya EMV Merchant Presented QR Code Specification for Payment Systems. Everything is good except the CRC; I don't understand how to generate this code.
This is all the spec says about it, but I still can't work out how to generate it:
The checksum shall be calculated according to [ISO/IEC 13239] using the polynomial '1021' (hex) and initial value 'FFFF' (hex). The data over which the checksum is calculated shall cover all data objects, including their ID, Length and Value, to be included in the QR Code, in their respective order, as well as the ID and Length of the CRC itself (but excluding its Value).
Following the calculation of the checksum, the resulting 2-byte hexadecimal value shall be encoded as a 4-character Alphanumeric Special value by converting each nibble to an Alphanumeric Special character.
Example: a CRC with a two-byte hexadecimal value of '007B' is included in the QR Code as "6304007B".
This converts a string to its UTF-8 representation as a sequence of bytes and computes the 16-bit Cyclic Redundancy Check of those bytes (CRC-16/CCITT-FALSE):
import 'dart:convert';
import 'dart:typed_data';

int crc16_CCITT_FALSE(String data) {
  int initial = 0xFFFF;    // initial value
  int polynomial = 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
  Uint8List bytes = Uint8List.fromList(utf8.encode(data));
  for (var b in bytes) {
    for (int i = 0; i < 8; i++) {
      bool bit = ((b >> (7 - i)) & 1) == 1;
      bool c15 = ((initial >> 15) & 1) == 1;
      initial <<= 1;
      if (c15 ^ bit) initial ^= polynomial;
    }
  }
  return initial & 0xffff;
}
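For completeness, a small usage sketch of how the spec's rule is applied (the payload string below is a made-up placeholder, not a real EMV payload): the CRC is computed over the data objects plus the CRC object's own ID and length ("6304"), and the 2-byte result is appended as four uppercase hex characters:

void main() {
  String payloadWithoutCrc = "...your EMV data objects..."; // hypothetical payload
  String toChecksum = payloadWithoutCrc + "6304"; // include the CRC ID and length per the spec
  String crc = crc16_CCITT_FALSE(toChecksum)
      .toRadixString(16)
      .toUpperCase()
      .padLeft(4, '0');
  print(toChecksum + crc); // a CRC value of 0x007B would be appended as "007B"
}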
The CRC for ISO/IEC 13239 is CRC-16/ISO-HDLC, per the notes in that catalog. This implements that CRC and prints the check value 0x906e:
import 'dart:typed_data';

int crc16ISOHDLC(Uint8List bytes) {
  int crc = 0xffff;
  for (var b in bytes) {
    crc ^= b;
    for (int i = 0; i < 8; i++)
      crc = (crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1;
  }
  return crc ^ 0xffff;
}

void main() {
  Uint8List msg = Uint8List.fromList([0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39]);
  print("0x" + crc16ISOHDLC(msg).toRadixString(16));
}

Manual CBC encryption handling with Crypto++

I am trying to play around with manual encryption in CBC mode while still using Crypto++, just to see whether I can do it manually.
The CBC algorithm is (AFAIK):
Presume we have n blocks K[1]...K[n]:
0. cipher = empty
1. xor(IV, K[1]) -> t1
2. encrypt(t1) -> r1
3. cipher += r1
4. xor(r1, K[2]) -> t2
5. encrypt(t2) -> r2
6. cipher += r2
7. xor(r2, K[3]) -> t3
8. ...
So I tried to implement it with Crypto++. I have a text file with alphanumeric characters only. Test 1 reads the file chunk by chunk (16 bytes), encrypts each chunk manually in CBC fashion, and concatenates the ciphertext. Test 2 uses Crypto++'s built-in CBC mode.
Test 1
char* key;
char* iv;
//Iterate in K[n] array of n blocks
const int BSIZE = 16;

std::string vectorToString(vector<char> v){
    string s = "";
    for (int i = 0; i < v.size(); i++){
        s += v[i];
    }
    return s;
}

// "xor" is a reserved token in C++, so the helper needs another name
vector<char> xorBlocks(vector<char> s1, vector<char> s2, int len){
    vector<char> r;
    for (int i = 0; i < len; i++){
        int u = s1[i] ^ s2[i];
        r.push_back(u);
    }
    return r;
}

vector<char> byteToVector(byte *b, int len){
    vector<char> v;
    for (int i = 0; i < len; i++){
        v.push_back(b[i]);
    }
    return v;
}

string cbc_manual(string fileName){
    int i = 0;
    //Open a file and read from it, buffer size = 16,
    //equal to DEFAULT_BLOCK_SIZE
    std::ifstream fin(fileName, std::ios::binary | std::ios::in);
    const int BSIZE = 16;
    vector<char> encryptBefore;
    //This function will return cpc
    string cpc = "";
    while (!fin.eof()){
        char buffer[BSIZE];
        //Read a chunk of file
        fin.read(buffer, BSIZE);
        int sb = sizeof(buffer);
        //If i == 0, xor IV with the current buffer,
        //else xor encryptBefore with the current buffer
        if (i == 0){
            encryptBefore = byteToVector((byte*) iv, BSIZE);
        }
        vector<char> t1 = xorBlocks(encryptBefore, byteToVector((byte*) buffer, BSIZE), BSIZE);
        //After the xor, encrypt the result; it becomes this step's cipher block
        string r1 = encrypt(vectorToString(t1), BSIZE);
        cpc += r1;
        const char* end = r1.c_str();
        encryptBefore = byteToVector((byte*) r1.data(), BSIZE);
        i++;
    }
    return cpc;
}
This is my encrypt() function; because we only encrypt a single block at a time, I use ECB (?) mode:
string encrypt(string s, int size){
    ECB_Mode< AES >::Encryption e;
    e.SetKey((byte*) key, size);

    string cipher;
    StringSource ss1(s, true,
        new StreamTransformationFilter(e,
            new StringSink(cipher)
        ) // StreamTransformationFilter
    ); // StringSource
    return cipher;
}
And this is the 100% Crypto++-made solution:
Test 2
string encryptCBC(string plain){
    CBC_Mode< AES >::Encryption encryption((byte*) key, AES::DEFAULT_KEYLENGTH, (byte*) iv);
    StreamTransformationFilter encryptor(encryption, NULL);
    for (size_t j = 0; j < plain.size(); j++)
        encryptor.Put((byte) plain[j]);
    encryptor.MessageEnd();

    size_t ready = encryptor.MaxRetrievable();
    string cipher(ready, 0x00);
    encryptor.Get((byte*) &cipher[0], cipher.size());
    return cipher;
}
The results of Test 1 and Test 2 are different. In fact, the ciphertext from Test 1 contains the result of Test 2. Example:
Test 1's result: aaa[....]bbb[....]ccc[....]...
Test 2's (Crypto++ built-in CBC) result: aaabbbccc...
I know the xor function may cause a problem related to "sameChar ^ sameChar = 0", but is there any problem related to the algorithm in my code?
This is my Test 2.1, after applying jww's first suggestion:
static string auto_cbc2(string plain, long size){
    CBC_Mode< AES >::Encryption e;
    e.SetKeyWithIV(key, sizeof(key), iv, sizeof(iv));

    string cipherText;
    CryptoPP::StringSource ss(plain, true,
        new CryptoPP::StreamTransformationFilter(e,
            new CryptoPP::StringSink(cipherText),
            BlockPaddingSchemeDef::NO_PADDING
        ) // StreamTransformationFilter
    ); // StringSource
    return cipherText;
}
It throws an error:
Unhandled exception at 0x7407A6F2 in AES-CRPP.exe: Microsoft C++
exception: CryptoPP::InvalidDataFormat at memory location 0x00EFEA74
I only get this error when using BlockPaddingSchemeDef::NO_PADDING; with BlockPaddingSchemeDef removed, or with BlockPaddingSchemeDef::DEFAULT_PADDING, I get no error. :?
StringSource ss1(s, true,
    new StreamTransformationFilter(e,
        new StringSink(cipher)));
This uses PKCS padding by default. It takes a 16-byte input and produces a 32-byte output due to padding. You should do one of two things.
First, you can use StreamTransformationFilter::NO_PADDING. Something like:
StringSource ss1(s, true,
    new StreamTransformationFilter(e,
        new StringSink(cipher),
        StreamTransformationFilter::NO_PADDING));
Second, you can process blocks manually, 16 bytes at a time. Something like:
AES::Encryption encryptor(key, keySize);

byte ibuff[<some size>] = ...;
byte obuff[<some size>];

ASSERT(<some size> % AES::BLOCKSIZE == 0);
unsigned int BLOCKS = <some size>/AES::BLOCKSIZE;
for (unsigned int i=0; i<BLOCKS; i++)
{
    encryptor.ProcessBlock(&ibuff[i*16], &obuff[i*16]);
    // Do the CBC XOR thing...
}
You may be able to call ProcessAndXorBlock from the BlockCipher base class and do it in one shot.
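For illustration, here is a minimal sketch of that manual xor-then-encrypt loop built on ProcessBlock (the function name is made up, no padding is applied, and the input length is assumed to be a multiple of AES::BLOCKSIZE):

#include "cryptopp/aes.h"
#include <cstring>
#include <string>

// XOR each plaintext block with the previous ciphertext block (the IV for the
// first block), then encrypt the result with the raw AES block cipher.
std::string cbcEncryptManual(const unsigned char* key, size_t keyLen,
                             const unsigned char* iv,
                             const unsigned char* input, size_t length)
{
    using CryptoPP::AES;
    AES::Encryption encryptor(key, keyLen);

    unsigned char chain[AES::BLOCKSIZE];
    std::memcpy(chain, iv, AES::BLOCKSIZE);

    std::string cipher;
    for (size_t off = 0; off < length; off += AES::BLOCKSIZE)
    {
        unsigned char xored[AES::BLOCKSIZE];
        for (int j = 0; j < AES::BLOCKSIZE; j++)
            xored[j] = input[off + j] ^ chain[j];

        encryptor.ProcessBlock(xored, chain); // chain now holds this ciphertext block
        cipher.append(reinterpret_cast<const char*>(chain), AES::BLOCKSIZE);
    }
    return cipher;
}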

How to minimize the length of the base64 string from nsdata of image?

I convert an image to NSData, and the NSData to a base64 string, using
NSData *imagedata = UIImageJPEGRepresentation(imageView.image, 0.1f);
NSString *c = [NSString base64StringFromData:imagedata];
The function for the string conversion:
+ (NSString*)base64forData:(NSData*)theData {
    const uint8_t* input = (const uint8_t*)[theData bytes];
    NSInteger length = [theData length];

    static char table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";

    NSMutableData* data = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
    uint8_t* output = (uint8_t*)data.mutableBytes;

    NSInteger i;
    for (i = 0; i < length; i += 3) {
        NSInteger value = 0;
        NSInteger j;
        for (j = i; j < (i + 3); j++) {
            value <<= 8;
            if (j < length) {
                value |= (0xFF & input[j]);
            }
        }

        NSInteger theIndex = (i / 3) * 4;
        output[theIndex + 0] = table[(value >> 18) & 0x3F];
        output[theIndex + 1] = table[(value >> 12) & 0x3F];
        output[theIndex + 2] = (i + 1) < length ? table[(value >> 6) & 0x3F] : '=';
        output[theIndex + 3] = (i + 2) < length ? table[(value >> 0) & 0x3F] : '=';
    }

    return [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
}
but the resulting base64 string is too long; its length is above 300,000. That is, for
int len = c.length;
the value of len is above 300,000.
The image is 3 to 4 MB. In fact I already compress the image with a quality of 0.1f:
NSData *imagedata = UIImageJPEGRepresentation(iivv.image, 0.1f);
How can I minimize the length? Is there any other code for base64 conversion from NSData?
Base64 will always have larger space requirements than the original data, because it does not use all the bits in one byte. This is done intentionally in order to make sure that higher bits do not cause problems when being handed from one system to another. So in effect it trades space for transmission safety.
It is called Base64 because it only uses 6 bits (2^6 = 64) per output byte, therefore effectively taking up 4 bytes where the original data only had 3. Or put another way: the size will increase by roughly 33%.
The Base64 encoder of course does not care what the bytes you feed into it represent, so you are free to compress the heck out of your data first, as long as the result is still a format the receiver understands (e.g. create a PNG or JPG out of uncompressed image data), and then encode that as Base64.
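As an illustration (the 800x600 target size, the 0.5f quality, and the use of NSData's built-in base64EncodedStringWithOptions: are assumptions, not part of the original question), shrinking the image before encoding reduces the Base64 output far more than lowering the JPEG quality alone:

#import <UIKit/UIKit.h>

// Downscale the image, re-encode it as JPEG, then Base64-encode the result.
NSString *base64ForImage(UIImage *image) {
    CGSize target = CGSizeMake(800, 600); // hypothetical target size
    UIGraphicsBeginImageContextWithOptions(target, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *jpeg = UIImageJPEGRepresentation(scaled, 0.5f);
    return [jpeg base64EncodedStringWithOptions:0]; // built into NSData since iOS 7
}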

get 32 bit number in ios

How do I get a 32-bit number in Objective-C when a byte array is passed to it, similarly to Java where
ByteBuffer bb = ByteBuffer.wrap(truncation);
return bb.getInt();
where truncation is the byte array.
It returns a 32-bit number. Is this possible in Objective-C?
If the number is encoded in little-endian within the buffer, then use:
int32_t getInt32LE(const uint8_t *buffer)
{
    int32_t value = 0;
    unsigned length = 4;
    while (length > 0)
    {
        value <<= 8;
        value |= buffer[--length];
    }
    return value;
}
If the number is encoded in big-endian within the buffer, then use:
int32_t getInt32BE(const uint8_t *buffer)
{
    int32_t value = 0;
    for (unsigned i = 0; i < 4; i++)
    {
        value <<= 8;
        value |= *buffer++;
    }
    return value;
}
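For example (variable names here are illustrative), if the bytes arrive in an NSData, the big-endian variant can be called like this:

// assuming 'truncation' is an NSData holding at least 4 big-endian bytes
int32_t value = getInt32BE((const uint8_t *) truncation.bytes);
NSLog(@"%d", value);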
UPDATE: If you are using data created on the same host, then endianness is not an issue, in which case you can use a union as a bridge between the buffer and the integer, which avoids some unpleasant casting:
union
{
    uint8_t b[sizeof(int32_t)];
    int32_t i;
} u;
memcpy(u.b, buffer, sizeof(u.b));
// value is u.i
Depending on the endianness:
uint32_t n = b0 << 24 | b1 << 16 | b2 << 8 | b3;
or
uint32_t n = b3 << 24 | b2 << 16 | b1 << 8 | b0;
Not sure if you just want to read 4 bytes and assign that value to an integer. In that case:
int32_t number;
memcpy(&number, truncation, sizeof(number));
About endianness:
From your question it was clear (to me) that the bytes were already ordered correctly. However, if you have to re-order the bytes, use ntohl() after memcpy():
number = ntohl(number);

Converting aligned array uint8[8] to double

I am facing a bit of a challenge trying to convert an aligned uint8[8] array to a double.
It was particularly easy to convert uint8[4] to a long with bit operations, but I understand that the double can get messy in terms of the sign bit?
In Java I simply use ByteBuffer.wrap(bytes).getDouble(), but I assume it is not that easy in C.
I tried to implement this code, but the last line gives the errors "Expression is not assignable" and "Shift count >= width of type":
long tempHigh = 0;
long tempLow = 0;
double sum = 0;
tempHigh |= buffer[0] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[1] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[2] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[3] & 0xFF;
tempLow |= buffer[4] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[5] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[6] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[7] & 0xFF;
sum |= ((tempHigh & 0xFFFF) <<= 32) + (tempLow & 0xFFFF);
How can this procedure be done correctly or just resolve the error i have made?
Thanks in advance.
double is a floating-point type; it doesn't support bitwise operations such as |.
You could do something like:
double sum;
memcpy(&sum, buffer, sizeof(sum));
But be aware of endianness issues.
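For example, a minimal sketch assuming the 8 bytes arrive in big-endian (network) order and the host is little-endian, as on iOS devices:

#include <stdint.h>
#include <string.h>

double doubleFromBigEndian8(const uint8_t buffer[8])
{
    uint8_t reversed[8];
    for (int i = 0; i < 8; i++)
        reversed[i] = buffer[7 - i]; // swap the byte order for a little-endian host

    double value;
    memcpy(&value, reversed, sizeof(value));
    return value;
}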
The portable way to do it is to read out the sign, exponent, and mantissa values into integer variables with bitwise arithmetic, then call ldexp to apply the exponent.
OK, here's some code. Beware it might have mismatched parentheses or off-by-one errors.
#include <math.h> // for ldexp

unsigned char x[8]; // your input; code assumes little endian
long long mantissa = ((((((x[6]%16)*256LL + x[5])*256 + x[4])*256 + x[3])*256 + x[2])*256 + x[1])*256 + x[0];
int exp = x[7]%128*16 + x[6]/16 - 1023;
int sign = 1-x[7]/128*2;
double y = sign*ldexp(0x1p52 + mantissa, exp-52);
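As a quick sanity check (assuming a little-endian host, which matches iOS devices and the simulator), the reconstruction can be compared against the raw bytes of a known double:

#include <math.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double original = -12345.6789; // arbitrary test value
    unsigned char x[8];
    memcpy(x, &original, sizeof(x)); // little-endian byte image of the double

    long long mantissa = ((((((x[6]%16)*256LL + x[5])*256 + x[4])*256 + x[3])*256 + x[2])*256 + x[1])*256 + x[0];
    int exp = x[7]%128*16 + x[6]/16 - 1023;
    int sign = 1 - x[7]/128*2;
    double y = sign*ldexp(0x1p52 + mantissa, exp-52);

    printf("%.10f %.10f\n", original, y); // both values should print the same
    return 0;
}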
How about a union? Write to the long part as you have, then the double is automagically correct. Something like this:
union
{
    double sum;
    struct
    {
        uint32_t tempHigh; // 32-bit halves, so the struct is the same size as a double
        uint32_t tempLow;
    } v;
} u;

u.v.tempHigh = 0;
u.v.tempLow = 0;
u.v.tempHigh |= buffer[0] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[1] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[2] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[3] & 0xFF;
u.v.tempLow |= buffer[4] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[5] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[6] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[7] & 0xFF;
printf("%f", u.sum);
