Convert double to binary representation - dart

I'm using bit shift operators on ints to convert them to a binary representation, like this:
import 'dart:typed_data';

String toBinary(int i) {
  var bytes = Uint8List(8);
  bytes[0] = i >> 56;
  bytes[1] = i >> 48;
  bytes[2] = i >> 40;
  bytes[3] = i >> 32;
  bytes[4] = i >> 24;
  bytes[5] = i >> 16;
  bytes[6] = i >> 8;
  bytes[7] = i;
  return String.fromCharCodes(bytes);
}
Now I need to do the same thing for doubles, but double does not define bit shift operators. However, since doubles are also represented in 64 bits, is there a way to convert them to a binary format?

First of all, Dart already provides a ByteData class, so in this case you can avoid the bit shift operations and instead do:
var byteData = ByteData(8);
byteData.setUint64(0, 256);
var bytes = byteData.buffer.asUint8List();
which will produce the same byte list.
Given that, you can use the setFloat64 method on ByteData to store a double and then get its binary representation in the same way.
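For example, putting that together for a double might look like this (the sample value 3.14 and the default big-endian byte order are just for illustration):
import 'dart:typed_data';

String doubleToBinary(double d) {
  var byteData = ByteData(8);
  byteData.setFloat64(0, d); // IEEE 754 double, big-endian by default
  var bytes = byteData.buffer.asUint8List();
  return String.fromCharCodes(bytes); // same trick as the int version above
}

void main() {
  print(doubleToBinary(3.14).codeUnits); // [64, 9, 30, 184, 81, 235, 133, 31]
}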

Related

How to convert Uint8List to decimal number in Dart?

I have a Uint8List data list, for example:
Uint8List uintList = Uint8List.fromList([10, 1]);
How can I convert these numbers to a decimal number?
int decimalValue = ??? // in this case 265
Mees' answer is the correct general method, and it's good to understand how to do bitwise operations manually.
However, Dart does have a ByteData class that has various functions to help parse byte data for you (e.g. getInt16, getUint16). In your case, you can do:
Uint8List uintList = Uint8List.fromList([10, 1]);
int decimalValue = ByteData.view(uintList.buffer).getInt16(0, Endian.little);
print(decimalValue); // Prints: 266.
From what I understand of your question, you want decimalValue to be an integer where the least significant byte is (decimal)10, and the byte after that to be 1. This would result in the value 1 * 256 + 10 = 266. If you meant the bytes the other way around, it would be 10 * 256 + 1 = 2560 + 1 = 2561.
I don't actually have any experience with Dart, but I assume code similar to this would work:
int decimalValue = 0;
for (int i = 0; i < uintList.length; i++) {
  decimalValue = decimalValue << 8; // shift everything one byte to the left
  decimalValue = decimalValue | uintList[i]; // bitwise or operation
}
If it doesn't produce the number you want it to, you might have to iterate through the loop backwards instead, which requires changing one line of code:
for (int i = uintList.length-1; i >= 0; i--) {
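Putting that together, a complete little-endian version might look like this (a minimal sketch along the lines described above):
import 'dart:typed_data';

int bytesToIntLE(Uint8List bytes) {
  int decimalValue = 0;
  // walk from the most significant byte (last) to the least significant (first)
  for (int i = bytes.length - 1; i >= 0; i--) {
    decimalValue = (decimalValue << 8) | bytes[i];
  }
  return decimalValue;
}

void main() {
  print(bytesToIntLE(Uint8List.fromList([10, 1]))); // 266
}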

How to calculate 16 bit checksum value using javascript big endian format

I want to calculate the 16-bit checksum value in big-endian format for Bluetooth data transmission. I passed the data to send, 123456, as per the documentation and got the checksum 100219, but in the documentation the checksum value is 0219. How can I calculate it like that?
I am following Calculating a 16 bit checksum? but I do not get the expected output from my JavaScript implementation:
// ASCII only
function stringToBytes(string) {
  var array = new Uint8Array(string.length);
  for (var i = 0, l = string.length; i < l; i++) {
    array[i] = string.charCodeAt(i);
  }
  return array.buffer;
}

datalength = dec2hex(msg.length);
megPassedToChecksum = 'ABAB' + datalength + msg;
var bytes = stringToBytes(megPassedToChecksum);
var byteArray = new Uint8Array(bytes);
var checksum = new Uint16Array();
checksum = 0;
val = [];
length = byteArray.length;
var even_length = length - (length % 2); // Round down to multiple of 2
for (var i = 0; i < even_length; i += 2) {
  var val = byteArray[i] + 256 * byteArray[i + 1];
  checksum += val;
}
if (i < length) {
  checksum += byteArray[i];
}
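The hex values 100219 and 0219 above differ only in the extra high digits, which suggests the running sum is never reduced to 16 bits; masking with 0xFFFF (or folding the carries back in, depending on the checksum definition) keeps only the low 16 bits. Here is that idea sketched in Dart (the sample bytes are made up):
import 'dart:typed_data';

int checksum16(Uint8List bytes) {
  int sum = 0;
  for (var i = 0; i + 1 < bytes.length; i += 2) {
    sum += bytes[i] + 256 * bytes[i + 1]; // same per-pair sum as the JavaScript above
  }
  if (bytes.length.isOdd) sum += bytes[bytes.length - 1]; // trailing odd byte
  return sum & 0xFFFF; // keep only the low 16 bits
}

void main() {
  // made-up sample bytes, just to show the masking step
  var sample = Uint8List.fromList([0xAB, 0xAB, 0x31, 0x32, 0x33]);
  print(checksum16(sample).toRadixString(16));
}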

How can I generate check sum code in dart?

I want to use the PayMaya EMV Merchant Presented QR Code Specification for Payment Systems. Everything is good except the CRC; I don't understand how to generate this code.
This is all that exists about it, but I still can't understand how to generate it:
The checksum shall be calculated according to [ISO/IEC 13239] using the polynomial '1021' (hex) and initial value 'FFFF' (hex). The data over which the checksum is calculated shall cover all data objects, including their ID, Length and Value, to be included in the QR Code, in their respective order, as well as the ID and Length of the CRC itself (but excluding its Value).
Following the calculation of the checksum, the resulting 2-byte hexadecimal value shall be encoded as a 4-character Alphanumeric Special value by converting each nibble to an Alphanumeric Special character.
Example: a CRC with a two-byte hexadecimal value of '007B' is included in the QR Code as "6304007B".
This converts a string to its UTF-8 representation as a sequence of bytes and returns the 16-bit Cyclic Redundancy Check of those bytes (CRC-16/CCITT-FALSE):
import 'dart:convert';
import 'dart:typed_data';

int crc16_CCITT_FALSE(String data) {
  int initial = 0xFFFF; // initial value
  int polynomial = 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
  Uint8List bytes = Uint8List.fromList(utf8.encode(data));
  for (var b in bytes) {
    for (int i = 0; i < 8; i++) {
      bool bit = ((b >> (7 - i) & 1) == 1);
      bool c15 = ((initial >> 15 & 1) == 1);
      initial <<= 1;
      if (c15 ^ bit) initial ^= polynomial;
    }
  }
  return initial & 0xffff;
}
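Per the quoted spec, the CRC is calculated over the payload text ending with the CRC's own ID and length ("6304"), and the two-byte result is then appended as four hex characters. A rough usage sketch (the payload string here is a placeholder, not a real EMV QR payload):
void main() {
  // placeholder payload; a real EMV QR string carries all the other data objects before "6304"
  String payload = '...6304';
  int crc = crc16_CCITT_FALSE(payload);
  String crcField = crc.toRadixString(16).toUpperCase().padLeft(4, '0');
  print(payload + crcField); // e.g. a value of 0x007B would yield "...6304007B"
}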
The CRC for ISO/IEC 13239 is CRC-16/ISO-HDLC, per the notes in the CRC catalog. This implements that CRC and prints the check value 0x906e for the standard "123456789" test message:
import 'dart:typed_data';

int crc16ISOHDLC(Uint8List bytes) {
  int crc = 0xffff;
  for (var b in bytes) {
    crc ^= b;
    for (int i = 0; i < 8; i++) {
      crc = (crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1;
    }
  }
  return crc ^ 0xffff;
}

void main() {
  Uint8List msg = Uint8List.fromList([0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39]);
  print("0x" + crc16ISOHDLC(msg).toRadixString(16));
}

How do I convert bitmap format of a UIImage?

I need to convert my bitmap from the normal camera format of kCVPixelFormatType_32BGRA to the kCVPixelFormatType_24RGB format so it can be consumed by a 3rd party library.
How can this be done?
My C# code looks like this, in an attempt to do the conversion directly on the byte data:
byte[] sourceBytes = UIImageTransformations.BytesFromImage(sourceImage);

// final output is to be RGB
byte[] finalBytes = new byte[(int)(sourceBytes.Length * .75)];

int length = sourceBytes.Length;
int finalByte = 0;
for (int i = 0; i < length; i += 4)
{
    byte blue = sourceBytes[i];
    byte green = sourceBytes[i + 1];
    byte red = sourceBytes[i + 2];
    finalBytes[finalByte] = red;
    finalBytes[finalByte + 1] = green;
    finalBytes[finalByte + 2] = blue;
    finalByte += 3;
}

UIImage finalImage = UIImageTransformations.ImageFromBytes(finalBytes);
However, I'm finding that my sourceBytes length is not always divisible by 4, which doesn't make any sense to me.

Convert binary two's complement data into integer in objective-c

I have some binary data (two's complement) coming from an accelerometer and I need to convert it to an integer. Is there a standard library function which does this, or do I need to write my own code?
For example: I receive an NSData object from the accelerometer, which when converted to hex looks like this:
C0088001803F
Which is a concatenation of 3 blocks of 2-byte data:
x = C008
y = 8001
z = 803F
Focussing on the x-axis only:
hex = C008
decimal = 49160
binary = 1100000000001000
two's complement = -16376
Is there a standard function for converting from C008 in two's complement directly to -16376?
Thank you.
Something like:
const int8_t* bytes = (const int8_t*) [nsDataObject bytes];

// Combine each big-endian byte pair, then shift up and back down
// so the 16-bit value is sign-extended into the 32-bit int.
int32_t x = (bytes[0] << 8) + (0x0FF & bytes[1]);
x = x << 16;
x = x >> 16;

int32_t y = (bytes[2] << 8) + (0x0FF & bytes[3]);
y = y << 16;
y = y >> 16;

int32_t z = (bytes[4] << 8) + (0x0FF & bytes[5]);
z = z << 16;
z = z >> 16;
This assumes that the values really are "big-endian" as suggested in the question.
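For completeness, the same big-endian, sign-extending read can also be done in Dart with ByteData.getInt16, which defaults to big-endian (a minimal sketch using the sample bytes from the question):
import 'dart:typed_data';

void main() {
  // C0 08 80 01 80 3F from the question, read as three big-endian signed 16-bit values
  var data = Uint8List.fromList([0xC0, 0x08, 0x80, 0x01, 0x80, 0x3F]);
  var view = ByteData.sublistView(data);
  print(view.getInt16(0)); // -16376 (x)
  print(view.getInt16(2)); // -32767 (y)
  print(view.getInt16(4)); // -32705 (z)
}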
