Zero-fill right shift in Swift - iOS

(byte) ((val & 0xff00) >>> 8);
This is the Java code. I want to convert this code to Swift, but there is no >>> operator in Swift. How can I do a zero-fill right shift in Swift?

If you use the truncatingBitPattern initializer of the integer types to extract a byte, then you don't have to mask the value, and it does not matter whether the shift operator fills with zeros or ones (which depends on whether the source type is unsigned or signed). Choose Int8 or UInt8 depending on whether the byte should be interpreted as a signed or an unsigned number.
let value = 0xABCD
let signedByte = Int8(truncatingBitPattern: value >> 8)
print(signedByte) // -85
let unsignedByte = UInt8(truncatingBitPattern: value >> 8)
print(unsignedByte) // 171
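Note that in Swift 4 and later this initializer was renamed; the same extraction with the current API would look like this (a minimal sketch):
let value = 0xABCD
let signedByte = Int8(truncatingIfNeeded: value >> 8)    // -85
let unsignedByte = UInt8(truncatingIfNeeded: value >> 8) // 171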

Operator >> in Swift is zero-fill (for unsigned integers):
The bit-shifting behavior for unsigned integers is as follows:
Existing bits are moved to the left or right by the requested number of places.
Any bits that are moved beyond the bounds of the integer’s storage are discarded.
Zeros are inserted in the spaces left behind after the original bits are moved to the left or right.
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html#//apple_ref/doc/uid/TP40014097-CH27-ID29
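So if the value already lives in an unsigned type, the plain >> behaves like Java's >>>. A small sketch (the sample value is just for illustration):
let bits: UInt16 = 0xC008
let shifted = bits >> 8   // 0x00C0 — the vacated high bits are filled with zeros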

You don't need a zero-fill shift in this case because a byte is only 8 bits.
The code you have is the same as
(byte) (((val & 0xFF00) >> 8) & 0xFF)
or
(byte) ((val & 0xFF00) >> 8)
or
(byte) (val >> 8)
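The same simplification carries over to Swift: assuming val is an Int holding the value, all three forms reduce to a single truncating conversion (a sketch):
let val = 0xC008
let byte = UInt8(truncatingIfNeeded: val >> 8)   // 0xC0 — masking with 0xFF00 first is unnecessary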

A zero-fill right shift operator doesn't exist in Swift/Objective-C, unfortunately. As an alternative/workaround:
// java
// let's say we want to zero-fill right shift 4 bits
int num = -333;
num >>>= 4; // num: 268435435
// objc (assuming a 32-bit NSInteger)
NSInteger num = -333;
num >>= 1;
if (num < 0) num ^= NSIntegerMin;
num >>= 3; // num: 268435435
// swift (assume we are dealing with 32 bit integer)
var num: Int32 = -333
num >>= 1
if num < 0 {
    num ^= Int32.min
}
num >>= 3 // num: 268435435
Essentially get rid of the sign bit when negative.
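Another workaround that avoids touching the sign bit by hand is to reinterpret the value as unsigned, shift, and reinterpret back; a sketch for the same 32-bit example:
var num: Int32 = -333
num = Int32(bitPattern: UInt32(bitPattern: num) >> 4)   // num: 268435435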

Related

How to Calculate CRC Starting at Last Byte

I'm trying to implement a CRC-CCITT calculator in VHDL. I was able to initially do that; however, I recently found out that data is delivered starting at the least-significant byte. In my code, data is transmitted 7 bytes at a time through a frame. So let's say we have the following data: 123456789 in ASCII or 313233343536373839 in hex. The data would be transmitted as such (with the following CRC):
-- First frame of data
RxFrame.Data <= (
    1 => x"39", -- LSB
    2 => x"38",
    3 => x"37",
    4 => x"36",
    5 => x"35",
    6 => x"34",
    7 => x"33"
);
-- Second/last frame of data
RxFrame.Data <= (
    1 => x"32",
    2 => x"31", -- MSB
    3 => xx, -- "xx" means irrelevant data, not part of CRC calculation.
    4 => xx, -- This occurs only in the last frame, when it is specified in
    5 => xx, -- byte 0 which bytes contain data
    6 => xx,
    7 => xx
);
-- Calculated CRC should be 0x31C3
Another example with data 0x4376669A1CFC048321313233343536373839 and its correct CRC is shown below:
-- First incoming frame of data
RxFrame.Data <= (
    1 => x"39", -- LSB
    2 => x"38",
    3 => x"37",
    4 => x"36",
    5 => x"35",
    6 => x"34",
    7 => x"33"
);
-- Second incoming frame of data
RxFrame.Data <= (
    1 => x"32",
    2 => x"31",
    3 => x"21",
    4 => x"83",
    5 => x"04",
    6 => x"FC",
    7 => x"1C"
);
-- Third/last incoming frame of data
RxFrame.Data <= (
    1 => x"9A",
    2 => x"66",
    3 => x"76",
    4 => x"43", -- MSB
    5 => xx, -- Irrelevant data, specified in byte 0
    6 => xx,
    7 => xx
);
-- Calculated CRC should be 0x2848
Is there a concept I'm missing? Is there a way to calculate the CRC with the data being received in reverse order? I am implementing this for CANopen SDO block protocols. Thanks!
CRC calculation algorithm to verify SDO block transfer from CANopen standard
Example code to generate a CRC16 with the bytes read in reverse (last byte first), using a function to do a carryless multiply modulo the CRC polynomial. An explanation follows.
#include <stdio.h>
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
#define POLY (0x1021u)
/* carryless multiply modulo crc polynomial */
uint16_t MpyModPoly(uint16_t a, uint16_t b) /* (a*b)%poly */
{
    uint16_t pd = 0;
    uint16_t i;
    for(i = 0; i < 16; i++){
        /* assumes twos complement */
        pd = (pd<<1)^((0-(pd>>15))&POLY);
        pd ^= (0-(b>>15))&a;
        b <<= 1;
    }
    return pd;
}
/* generate crc in reverse byte order */
uint16_t Crc16R(uint8_t * b, size_t sz)
{
    uint8_t *e = b + sz;     /* end of bfr ptr */
    uint16_t crc = 0u;       /* crc */
    uint16_t pdm = 0x100u;   /* padding multiplier */
    while(e > b){            /* generate crc */
        pdm = MpyModPoly(0x100, pdm);
        crc ^= MpyModPoly(*--e, pdm);
    }
    return(crc);
}
/* msg will be processed in reverse order */
static uint8_t msg[] = {0x43,0x76,0x66,0x9A,0x1C,0xFC,0x04,0x83,
                        0x21,0x31,0x32,0x33,0x34,0x35,0x36,0x37,
                        0x38,0x39};
int main()
{
    uint16_t crc;
    crc = Crc16R(msg, sizeof(msg));
    printf("%04x\n", crc);
    return 0;
}
Example code using X86 xmm pclmulqdq and psrlq, to emulate a 16 bit by 16 bit hardware (VHDL) carryless multiply:
/* __m128i is an intrinsic for X86 128 bit xmm register */
static __m128i poly = {.m128i_u32[0] = 0x00011021u}; /* poly */
static __m128i invpoly = {.m128i_u32[0] = 0x00008898u}; /* 2^31 / poly */
/* carryless multiply modulo crc polynomial */
/* using xmm pclmulqdq and psrlq */
uint16_t MpyModPoly(uint16_t a, uint16_t b)
{
    __m128i ma, mb, mp, mt;
    ma.m128i_u64[0] = a;
    mb.m128i_u64[0] = b;
    mp = _mm_clmulepi64_si128(ma, mb, 0x00);      /* mp = a*b */
    mt = _mm_srli_epi64(mp, 16);                  /* mt = mp>>16 */
    mt = _mm_clmulepi64_si128(mt, invpoly, 0x00); /* mt = mt*ipoly */
    mt = _mm_srli_epi64(mt, 15);                  /* mt = mt>>15 = (a*b)/poly */
    mt = _mm_clmulepi64_si128(mt, poly, 0x00);    /* mt = mt*poly */
    return mp.m128i_u16[0] ^ mt.m128i_u16[0];     /* ret mp^mt */
}
/* external code to generate invpoly */
#define POLY (0x11021u)
static __m128i invpoly; /* 2^31 / poly */
void GenMPoly(void) /* generate __m128i invpoly */
{
    uint32_t N = 0x10000u;   /* numerator = x^16 */
    uint32_t Q = 0;          /* quotient = 0 */
    for(size_t i = 0; i <= 15; i++){ /* 31 - 16 = 15 */
        Q <<= 1;
        if(N&0x10000u){
            Q |= 1;
            N ^= POLY;
        }
        N <<= 1;
    }
    invpoly.m128i_u16[0] = Q;
}
Explanation: consider the data as separate strings of ever increasing length, padded with zeroes at the end. For the first few bytes of your example, the logic would calculate
CRC = CRC16({39})
CRC ^= CRC16({38 00})
CRC ^= CRC16({37 00 00})
CRC ^= CRC16({36 00 00 00})
...
To speed up this calculation, rather than actually padding with n zero bytes, you can do a carryless multiply of a CRC by 2^(n·8) modulo POLY, where POLY is the 17-bit polynomial used for CRC16 (the exponents below are in hex, so 08, 10, 18 mean 8, 16, 24):
CRC = CRC16({39})
CRC ^= (CRC16({38}) · (2^08 % POLY)) % POLY
CRC ^= (CRC16({37}) · (2^10 % POLY)) % POLY
CRC ^= (CRC16({36}) · (2^18 % POLY)) % POLY
...
A carryless multiply modulo POLY is equivalent to what CRC16 does, so this translates into pseudo code (all values in hex, 2^8 = 100)
CRC = 0
PDM = 100 ;padding multiplier
PDM = (100 · PDM) % POLY ;main loop (2 lines per byte)
CRC ^= ( 39 · PDM) % POLY
PDM = (100 · PDM) % POLY
CRC ^= ( 38 · PDM) % POLY
PDM = (100 · PDM) % POLY
CRC ^= ( 37 · PDM) % POLY
PDM = (100 · PDM) % POLY
CRC ^= ( 36 · PDM) % POLY
...
Implementing (A · B) % POLY is based on binary math:
(A · B) % POLY = (A · B) ^ (((A · B) / POLY) · POLY)
Where multiply is carryless (XOR instead of add) and divide is borrowless (XOR instead of subtract). Since the divide is borrowless, and most significant term of POLY is x^16, the quotient
Q = (A · B) / POLY
only depends on the upper 16 bits of (A · B). Dividing by POLY uses multiplication by the 16 bit constant IPOLY = (2^31)/POLY followed by a right shift:
Q = (A · B) / POLY = (((A · B) >> 16) · IPOLY) >> 15
The process uses a 16 bit by 16 bit carryless multiply, producing a 31 bit product.
POLY = 0x11021u ; CRC polynomial (17 bit)
IPOLY = 0x08898u ; 2^31 / POLY
; generated by external software
MpyModPoly(A, B)
{
    MP = A · B         ; MP = A · B
    MT = MP >> 16      ; MT = MP >> 16
    MT = MT · IPOLY    ; MT = MT · IPOLY
    MT = MT >> 15      ; MT = (A · B) / POLY
    MT = MT · POLY     ; MT = ((A · B) / POLY) · POLY
    return MP xor MT   ; (A·B) ^ (((A · B) / POLY) · POLY)
}
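For reference, a software version of the same MpyModPoly can be written without pclmulqdq by doing the carryless multiply bit by bit. A Swift sketch (the POLY and IPOLY values are taken from above; this is not meant as a drop-in for the VHDL):
// carryless (XOR) multiply of small operands, result up to 32 bits
func clmul(_ a: UInt32, _ b: UInt32) -> UInt32 {
    var p: UInt32 = 0
    var a = a, b = b
    while b != 0 {
        if b & 1 != 0 { p ^= a }   // add (XOR) a shifted copy of a for each set bit of b
        a <<= 1
        b >>= 1
    }
    return p
}

let POLY: UInt32  = 0x11021   // CRC-16/CCITT polynomial, 17 bits
let IPOLY: UInt32 = 0x08898   // 2^31 / POLY (carryless division)

func mpyModPoly(_ a: UInt16, _ b: UInt16) -> UInt16 {
    let mp = clmul(UInt32(a), UInt32(b))    // MP = A·B, up to 31 bits
    var mt = clmul(mp >> 16, IPOLY) >> 15   // MT = (A·B) / POLY
    mt = clmul(mt, POLY)                    // MT = ((A·B) / POLY)·POLY
    return UInt16(truncatingIfNeeded: mp ^ mt)
}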
A hardware-based carryless multiply would look something like this 4-bit · 4-bit example.
p[] = [a3 a2 a1 a0] · [b3 b2 b1 b0]
p[] is a 7-bit product generated with 7 parallel circuits.
The multiply time would be the worst-case propagation time, which is for p3 (the term with the most inputs).
p6 = a3&b3
p5 = a3&b2 ^ a2&b3
p4 = a3&b1 ^ a2&b2 ^ a1&b3
p3 = a3&b0 ^ a2&b1 ^ a1&b2 ^ a0&b3
p2 = a2&b0 ^ a1&b1 ^ a0&b2
p1 = a1&b0 ^ a0&b1
p0 = a0&b0
If the xor gates available only have 2 bit inputs, the logic can
be split up. For example:
p3 = (a3&b0 ^ a2&b1) ^ (a1&b2 ^ a0&b3)
I don't know if your VHDL toolset includes a library for carryless multiply. For a 16 bit by 16 bit multiply resulting in a 31 bit product (p30 to p00), p15 is the XOR of 16 AND terms (computed in parallel), which can be combined with a tree-like structure: 8 XORs in parallel feeding 4 XORs in parallel, feeding 2 XORs in parallel, feeding a single XOR. So the propagation time would be one AND plus four XOR delays.
Here is an example in C that you can adapt. Since you mentioned VHDL, this is a bit-wise implementation suitable for casting into gates and flip-flops. However, if cycles are more precious to you than memory and gates, then there is also a byte-wise table-driven version that would run in 1/8 the number of cycles.
What this does is the inverse of what is done in a normal CRC calculation. It then applies the same-length input of zeros with a normal CRC to get what the normal CRC would have been on that input. Running the zeros through takes the same number of cycles as the inverse CRC, i.e. O(n) where n is the size of the input. If that latency is too large, it can be reduced to O(log n) cycles with some investment in gates.
#include <stddef.h>
// Update crc with the CRC-16/XMODEM of n zero bytes. (This can be done in
// O(log n) time or cycles instead of O(n), with a little more effort.)
static unsigned crc16x_zeros_bit(unsigned crc, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (int k = 0; k < 8; k++)
            crc = crc & 0x8000 ? (crc << 1) ^ 0x1021 : crc << 1;
    return crc & 0xffff;
}
// Update crc with the CRC-16/XMODEM of the len bytes at mem in reverse. If mem
// is NULL, then return the initial value for the CRC. When done,
// crc16x_zeros_bit() must be used to apply the total length of zero bytes, in
// order to get what the CRC would have been if it were calculated on the bytes
// fed in the opposite order.
static unsigned crc16x_inverse_bit(unsigned crc, void const *mem, size_t len) {
    unsigned char const *data = mem;
    if (data == NULL)
        return 0;
    crc &= 0xffff;
    for (size_t i = 0; i < len; i++) {
        for (int k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0x8810 : crc >> 1;
        crc ^= (unsigned)data[i] << 8;
    }
    return crc;
}
#include <stdio.h>
int main(void) {
    // Do framed example.
    unsigned crc = crc16x_inverse_bit(0, NULL, 0);
    crc = crc16x_inverse_bit(crc, (void const *)"9876543", 7);
    crc = crc16x_inverse_bit(crc, (void const *)"21", 2);
    crc = crc16x_zeros_bit(crc, 9);
    printf("%04x\n", crc);

    // Do another one.
    crc = crc16x_inverse_bit(0, NULL, 0);
    crc = crc16x_inverse_bit(crc, (void const *)"9876543", 7);
    crc = crc16x_inverse_bit(crc, (void const *)"21!\x83\x04\xfc\x1c", 7);
    crc = crc16x_inverse_bit(crc, (void const *)"\x9a" "fvC", 4);
    crc = crc16x_zeros_bit(crc, 18);
    printf("%04x\n", crc);
    return 0;
}
Here is the O(log n) version of crc16x_zeros_bit():
// Return a(x) multiplied by b(x) modulo p(x), where p(x) is the CRC
// polynomial. For speed, a cannot be zero.
static inline unsigned multmodp(unsigned a, unsigned b) {
    unsigned p = 0;
    for (;;) {
        if (a & 1) {
            p ^= b;
            if (a == 1)
                break;
        }
        a >>= 1;
        b = b & 0x8000 ? (b << 1) ^ 0x1021 : b << 1;
    }
    return p & 0xffff;
}
// Return x^(8n) modulo p(x).
static unsigned x2nmodp(size_t n) {
    unsigned p = 1;          // x^0 == 1
    unsigned q = 0x10;       // x^2^2
    while (n) {
        q = multmodp(q, q);  // x^2^k mod p(x), k = 3,4,...
        if (n & 1)
            p = multmodp(q, p);
        n >>= 1;
    }
    return p;
}
// Update crc with the CRC-16/XMODEM of n zero bytes.
static unsigned crc16x_zeros_bit(unsigned crc, size_t n) {
    return multmodp(x2nmodp(n), crc);
}

How can I generate a checksum code in Dart?

I want to use the PayMaya EMV Merchant Presented QR Code Specification for Payment Systems. Everything works except the CRC; I don't understand how to generate it. This is all the specification says about it, but I still can't understand how to generate the value:
The checksum shall be calculated according to [ISO/IEC 13239] using the polynomial '1021' (hex) and initial value 'FFFF' (hex). The data over which the checksum is calculated shall cover all data objects, including their ID, Length and Value, to be included in the QR Code, in their respective order, as well as the ID and Length of the CRC itself (but excluding its Value).
Following the calculation of the checksum, the resulting 2-byte hexadecimal value shall be encoded as a 4-character Alphanumeric Special value by converting each nibble to an Alphanumeric Special character.
Example: a CRC with a two-byte hexadecimal value of '007B' is included in the QR Code as "6304007B".
This converts a string to its UTF-8 representation as a sequence of bytes, and computes the 16-bit Cyclic Redundancy Check of those bytes (CRC-16/CCITT-FALSE).
import 'dart:convert';
import 'dart:typed_data';

int crc16_CCITT_FALSE(String data) {
  int initial = 0xFFFF;      // initial value
  int polynomial = 0x1021;   // 0001 0000 0010 0001 (0, 5, 12)
  Uint8List bytes = Uint8List.fromList(utf8.encode(data));
  for (var b in bytes) {
    for (int i = 0; i < 8; i++) {
      bool bit = ((b >> (7 - i) & 1) == 1);
      bool c15 = ((initial >> 15 & 1) == 1);
      initial <<= 1;
      if (c15 ^ bit) initial ^= polynomial;
    }
  }
  return initial & 0xffff;
}
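For an iOS/Swift reader, the same CRC-16/CCITT-FALSE together with the "6304" + 4-hex-character step described in the specification might look like this (a sketch; the payload string is a hypothetical placeholder):
import Foundation

func crc16CCITTFalse(_ bytes: [UInt8]) -> UInt16 {
    var crc: UInt16 = 0xFFFF                  // initial value
    for b in bytes {
        crc ^= UInt16(b) << 8                 // feed the byte into the high bits
        for _ in 0..<8 {
            crc = (crc & 0x8000) != 0 ? (crc << 1) ^ 0x1021 : crc << 1
        }
    }
    return crc
}

// The CRC is computed over all data objects plus the CRC's own ID and Length ("6304"),
// then its value is appended as 4 uppercase hex characters.
let payload = "<data objects>" + "6304"       // hypothetical payload
let crc = crc16CCITTFalse(Array(payload.utf8))
let qrString = payload + String(format: "%04X", crc)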
The CRC for ISO/IEC 13239 is this CRC-16/ISO-HDLC, per the notes in that catalog. This implements that CRC and prints the check value 0x906e:
import 'dart:typed_data';
int crc16ISOHDLC(Uint8List bytes) {
  int crc = 0xffff;
  for (var b in bytes) {
    crc ^= b;
    for (int i = 0; i < 8; i++)
      crc = (crc & 1) != 0 ? (crc >> 1) ^ 0x8408 : crc >> 1;
  }
  return crc ^ 0xffff;
}

void main() {
  Uint8List msg = Uint8List.fromList([0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39]);
  print("0x" + crc16ISOHDLC(msg).toRadixString(16));
}

Is there a direct way to get a unique value representing an RGB color in OpenCV C++

My image is an RGB image. I want to get a unique value (such as a Unicode value) to represent the RGB color value of a certain pixel. For example, if the pixel's red channel = 23, green channel = 200 and blue channel = 45, this RGB color could be represented by 232765. I wish there were a direct OpenCV C++ function to get such a value from a pixel. Note that this value should be unique for that RGB value.
I want something like this, and I know this is not correct.
uniqueColorForPixel_i_j=(matImage.at<Vec3b>(i,j)).getUniqueColor();
I hope something could be done if we can get the Scalar value of a pixel. And just as RNG can generate a random Scalar RGB value from a number, can we do the inverse...
Just a small code sample to show how to pass a Vec3b directly to the function, as well as an alternative shift-and approach.
The code is based on this answer.
UPDATE
I added also a simple struct BGR, that will handle more easily the conversion between Vec3b and unsigned.
UPDATE 2
The code in your question:
uniqueColorForPixel_i_j=(matImage.at<Vec3b>(i,j)).getUniqueColor();
doesn't work because you're trying to call the method getUniqueColor() on a Vec3b, which doesn't have this method. You should instead pass the Vec3b as the argument of unsigned getUniqueColor(const Vec3b& v);.
The code should clarify this:
#include <opencv2/opencv.hpp>
using namespace cv;

unsigned getUniqueColor_v1(const Vec3b& v)
{
    return ((v[2] & 0xff) << 16) + ((v[1] & 0xff) << 8) + (v[0] & 0xff);
}

unsigned getUniqueColor_v2(const Vec3b& v)
{
    return 0x00ffffff & *((unsigned*)(v.val));
}

struct BGR
{
    Vec3b v;
    unsigned u;

    BGR(const Vec3b& v_) : v(v_) {
        u = ((v[2] & 0xff) << 16) + ((v[1] & 0xff) << 8) + (v[0] & 0xff);
    }
    BGR(unsigned u_) : u(u_) {
        v[0] = uchar(u & 0xff);
        v[1] = uchar((u >> 8) & 0xff);
        v[2] = uchar((u >> 16) & 0xff);
    }
};

int main()
{
    Vec3b v(45, 200, 23);
    unsigned col1 = getUniqueColor_v1(v);
    unsigned col2 = getUniqueColor_v2(v);
    unsigned col3 = BGR(v).u;
    // col1 == col2 == col3
    //
    // hex: 0x0017c82d
    // dec: 1558573

    Vec3b v2 = BGR(col3).v;
    // v2 == v

    //////////////////////////////
    // Taking values from a mat
    //////////////////////////////

    // Just two 10x10 green mats
    Mat mat1(10, 10, CV_8UC3);
    mat1.setTo(Vec3b(0, 255, 0));
    Mat3b mat2(10, 10, Vec3b(0, 255, 0));

    int row = 2;
    int col = 3;

    unsigned u1 = getUniqueColor_v1(mat1.at<Vec3b>(row, col));
    unsigned u2 = BGR(mat1.at<Vec3b>(row, col)).u;
    unsigned u3 = getUniqueColor_v1(mat2(row, col));
    unsigned u4 = BGR(mat2(row, col)).u;
    // u1 == u2 == u3 == u4

    return 0;
}
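The pack/unpack itself is just shifts and masks and is not OpenCV-specific; for instance, the same round trip in Swift with the sample values above (a sketch):
func packBGR(_ b: UInt8, _ g: UInt8, _ r: UInt8) -> UInt32 {
    return UInt32(r) << 16 | UInt32(g) << 8 | UInt32(b)
}

func unpackBGR(_ u: UInt32) -> (b: UInt8, g: UInt8, r: UInt8) {
    return (UInt8(u & 0xFF), UInt8((u >> 8) & 0xFF), UInt8((u >> 16) & 0xFF))
}

let u = packBGR(45, 200, 23)   // 0x0017C82D == 1558573
let bgr = unpackBGR(u)         // (b: 45, g: 200, r: 23)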

Convert binary two's complement data into an integer in Objective-C

I have some binary data (two's complement) coming from an accelerometer and I need to convert it to an integer. Is there a standard library function which does this, or do I need to write my own code?
For example: I receive an NSData object from the accelerometer, which when converted to hex looks like this:
C0088001803F
Which is a concatenation of 3 blocks of 2-byte data:
x = C008
y = 8001
z = 803F
Focussing on the x-axis only:
hex = C008
decimal = 49160
binary = 1100000000001000
two's complement = -16376
Is there a standard function for converting from C008 in two's complement directly to -16376?
Thank you.
Something like:
const int8_t* bytes = (const int8_t*) [nsDataObject bytes];
int32_t x = (bytes[0] << 8) + (0x0FF & bytes[1]);
x = x << 16;
x = x >> 16;
int32_t y = (bytes[2] << 8) + (0x0FF & bytes[3]);
y = y << 16;
y = y >> 16;
int32_t z = (bytes[4] << 8) + (0x0FF & bytes[5]);
z = z << 16;
z = z >> 16;
This assumes that the values really are "big-endian" as suggested in the question.
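In Swift, the same conversion can be done by assembling the 16-bit value and reinterpreting its bit pattern as signed; a sketch using the bytes from the question (big-endian, as above):
let data: [UInt8] = [0xC0, 0x08, 0x80, 0x01, 0x80, 0x3F]           // x, y, z
let x = Int16(bitPattern: UInt16(data[0]) << 8 | UInt16(data[1]))  // -16376
let y = Int16(bitPattern: UInt16(data[2]) << 8 | UInt16(data[3]))  // -32767
let z = Int16(bitPattern: UInt16(data[4]) << 8 | UInt16(data[5]))  // -32705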

How to calculate binary checksum?

I'm working on an app that communicates with external hardware: it sends data to and requests data from the device. I have the data-request part done.
And I just found out I could use some help calculating the checksum.
A package is created as NSMutableData, then it is converted into a byte array before being sent out.
A package looks like this:
0x1E 0x2D 0x2F DATA checksum
I'm thinking I can convert the hex into binary and calculate the bytes one by one, but I don't know if that's a good idea. Please let me know if this is the only way to do it, or whether there are built-in functions I don't know about.
Any suggestions will be appreciated.
BTW, I just found the code below in another post; I'll try to make it work in my app. If I can, I'll share it with you. Still, any suggestions will be appreciated.
package org.example.checksum;
public class InternetChecksum {

    /**
     * Calculate the Internet Checksum of a buffer (RFC 1071 - http://www.faqs.org/rfcs/rfc1071.html)
     * Algorithm is
     * 1) apply a 16-bit 1's complement sum over all octets (adjacent 8-bit pairs [A,B], final odd length is [A,0])
     * 2) apply 1's complement to this final sum
     *
     * Notes:
     * 1's complement is bitwise NOT of positive value.
     * Ensure that any carry bits are added back to avoid off-by-one errors
     *
     * @param buf The message
     * @return The checksum
     */
    public long calculateChecksum(byte[] buf) {
        int length = buf.length;
        int i = 0;
        long sum = 0;
        long data;

        // Handle all pairs
        while (length > 1) {
            // Corrected to include @Andy's edits and various comments on Stack Overflow
            data = (((buf[i] << 8) & 0xFF00) | ((buf[i + 1]) & 0xFF));
            sum += data;
            // 1's complement carry bit correction in 16-bits (detecting sign extension)
            if ((sum & 0xFFFF0000) > 0) {
                sum = sum & 0xFFFF;
                sum += 1;
            }
            i += 2;
            length -= 2;
        }

        // Handle remaining byte in odd length buffers
        if (length > 0) {
            // Corrected to include @Andy's edits and various comments on Stack Overflow
            sum += (buf[i] << 8 & 0xFF00);
            // 1's complement carry bit correction in 16-bits (detecting sign extension)
            if ((sum & 0xFFFF0000) > 0) {
                sum = sum & 0xFFFF;
                sum += 1;
            }
        }

        // Final 1's complement value correction to 16-bits
        sum = ~sum;
        sum = sum & 0xFFFF;
        return sum;
    }
}
When I posted this question a year ago, I was still quite new to Objective-C. It turned out to be something very easy to do.
The way you calculate the checksum depends on how the checksum is defined in your communication protocol. In my case, the checksum is just the sum of all the previous bytes sent, i.e. the data you want to send.
So if I have an NSMutableData *cmd that has five bytes:
0x10 0x14 0xE1 0xA4 0x32
the checksum is the low byte of 0x10+0x14+0xE1+0xA4+0x32.
The sum is 0x01DB, so the checksum is 0xDB.
Code:
//i is the length of cmd
- (Byte)CalcCheckSum:(Byte)i data:(NSMutableData *)cmd
{
    Byte *cmdByte = (Byte *)malloc(i);
    memcpy(cmdByte, [cmd bytes], i);
    Byte local_cs = 0;
    int j = 0;
    while (i > 0) {
        local_cs += cmdByte[j];
        i--;
        j++;
    }
    free(cmdByte);   // don't leak the temporary buffer
    local_cs = local_cs & 0xff;
    return local_cs;
}
To use it:
Byte checkSum = [self CalcCheckSum:[command length] data:command];
Hope it helps.
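For completeness, the same sum-of-bytes checksum is a one-liner in Swift; a sketch assuming the command bytes are held in a Data value:
import Foundation

func checksum(_ cmd: Data) -> UInt8 {
    // add all bytes with wrapping arithmetic, which keeps only the low 8 bits
    return cmd.reduce(0 as UInt8) { $0 &+ $1 }
}

let command = Data([0x10, 0x14, 0xE1, 0xA4, 0x32])
print(String(format: "0x%02X", checksum(command)))   // 0xDB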
