How do I create an FCS for PPP packets? - checksum

I am trying to create a software simulation on an Ubuntu GNU/Linux machine which will work like PPPoE. I would like this simulator to take outgoing packets, strip off the Ethernet header, insert the PPP framing bytes (7E, FF, 03, 00, and 21), and place the IP-layer information in the PPP packet. I am having trouble with the FCS that goes after the data. From what I can tell, the cell modem I am using has a 2-byte FCS using the CRC16-CCITT method. I have found several pieces of software that will calculate this checksum, but none of them produce what is coming out of the serial line (I have a serial line "sniffer" that shows me everything being sent to the modem).
I have been looking into the source of pppd and the Linux kernel itself, and I can see that both of them have a method of adding an FCS to the data. It seems quite difficult to reimplement, as I have no experience in kernel hacking. Can someone come up with a simple way (preferably in Python) of calculating an FCS that matches the one that the kernel produces?
Thanks.
P.S. If anyone wants, I can add a sample of the data that is being sent to the serial modem.

I used the simple Python library crcmod.
import crcmod  # pip3 install crcmod

fcsData = "A0 19 03 61 DC"
fcsData = ''.join(fcsData.split(' '))
print(fcsData)

crc16 = crcmod.mkCrcFun(0x11021, rev=True, initCrc=0x0000, xorOut=0xFFFF)
print(hex(crc16(bytes.fromhex(fcsData))))
fcs = hex(crc16(bytes.fromhex(fcsData)))
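If the goal is to build the frame that actually goes out on the wire, note that RFC 1662 appends the two FCS octets least significant byte first. A minimal sketch reusing the crc16 function above (the append_fcs helper is just an illustration, not part of crcmod):
def append_fcs(frame: bytes) -> bytes:
    # Append the 2-byte FCS, least significant octet first (RFC 1662)
    fcs = crc16(frame)
    return frame + bytes([fcs & 0xFF, (fcs >> 8) & 0xFF])

framed = append_fcs(bytes.fromhex(fcsData))
print(framed.hex())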

I recently did something like this while testing code to kill a PPP connection.
This worked for me:
# RFC 1662 Appendix C
def mkfcstab():
    P = 0x8408

    def valiter():
        for b in range(256):
            v = b
            i = 8
            while i:
                v = (v >> 1) ^ P if v & 1 else v >> 1
                i -= 1
            yield v & 0xFFFF

    return tuple(valiter())

fcstab = mkfcstab()

PPPINITFCS16 = 0xffff  # Initial FCS value
PPPGOODFCS16 = 0xf0b8  # Good final FCS value

def pppfcs16(fcs, bytelist):
    for b in bytelist:
        fcs = (fcs >> 8) ^ fcstab[(fcs ^ b) & 0xff]
    return fcs
To get the value:
fcs = pppfcs16(PPPINITFCS16, (ord(c) for c in frame)) ^ 0xFFFF
and swap the bytes (I used chr((fcs & 0xFF00) >> 8), chr(fcs & 0x00FF))
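A quick self-check, following RFC 1662 Appendix C: after the complemented FCS is appended (low byte first, as in the RFC's own tryfcs16()), running pppfcs16 over the whole frame should leave the constant PPPGOODFCS16. A small sketch in Python 3, where iterating a bytes object already yields integers:
data = bytes.fromhex("A0190361DC")  # example payload
fcs = pppfcs16(PPPINITFCS16, data) ^ 0xFFFF
framed = data + bytes([fcs & 0xFF, (fcs >> 8) & 0xFF])
print(hex(pppfcs16(PPPINITFCS16, framed)))  # expect 0xf0b8, i.e. PPPGOODFCS16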

Got this from mbed.org PPP-Blinky:
// http://www.sunshine2k.de/coding/javascript/crc/crc_js.html - Correctly calculates
// the 16-bit FCS (crc) on our frames (Choose CRC16_CCITT_FALSE)
int crc;

void crcReset()
{
    crc = 0xffff;   // crc restart
}

void crcDo(int x)   // cumulative crc
{
    for (int i = 0; i < 8; i++) {
        crc = ((crc & 1) ^ (x & 1)) ? (crc >> 1) ^ 0x8408 : crc >> 1;   // crc calculator
        x >>= 1;
    }
}

int crcBuf(char *buf, int size)   // crc on an entire block of memory
{
    crcReset();
    for (int i = 0; i < size; i++) crcDo(*buf++);
    return crc;
}
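For cross-checking against the Python versions above, here is a rough bit-by-bit port of crcBuf (the crc_buf name and the port are my own, not from PPP-Blinky). Its raw result should equal pppfcs16(PPPINITFCS16, data) from the RFC 1662 code, and XORing it with 0xFFFF should match the crcmod output:
def crc_buf(data: bytes) -> int:
    # Same 0x8408 polynomial, processed one bit at a time as in crcDo()
    crc = 0xFFFF
    for x in data:
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if (crc ^ x) & 1 else crc >> 1
            x >>= 1
    return crc

print(hex(crc_buf(bytes.fromhex("A0190361DC"))))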

Related

How would one create a bitwise rotation function in Dart?

I'm in the process of creating a cryptography package for Dart (https://pub.dev/packages/steel_crypt). Right now, most of what I've done is either exposed from PointyCastle or simple-ish algorithms where bitwise rotations are unnecessary or replaceable by >> and <<.
However, as I move toward more complicated cryptography, which I can handle mathematically, I'm unsure how to implement bitwise rotation in Dart with maximum efficiency. Because of the nature of cryptography, speed is emphasized and uncompromising: I need the absolute fastest implementation.
I've ported a method of bitwise rotation from Java. I'm pretty sure this is correct, but unsure of the efficiency and readability:
My tested implementation is below:
int INT_BITS = 64; // Dart ints are 64 bit

static int leftRotate(int n, int d) {
  // In n << d, the last d bits are 0.
  // To put the first d bits of n at the end,
  // bitwise-or n << d with n >> (INT_BITS - d).
  return (n << d) | (n >> (INT_BITS - d));
}

static int rightRotate(int n, int d) {
  // In n >> d, the first d bits are 0.
  // To put the last d bits of n at the front,
  // bitwise-or n >> d with n << (INT_BITS - d).
  return (n >> d) | (n << (INT_BITS - d));
}
EDIT (for clarity): Dart has no unsigned shift, meaning that >> is an arithmetic (signed) right shift, which bears more significance than I might have thought. It poses a challenge that other languages don't in terms of devising an answer. The accepted answer below explains this and also shows the correct method of bitwise rotation.
As pointed out, Dart has no >>> (unsigned right shift) operator, so you have to rely on the signed shift operator.
In that case,
int rotateLeft(int n, int count) {
  const bitCount = 64; // make it 32 for JavaScript compilation.
  assert(count >= 0 && count < bitCount);
  if (count == 0) return n;
  return (n << count) |
      ((n >= 0) ? n >> (bitCount - count) : ~(~n >> (bitCount - count)));
}
should work.
This code only works for the native VM. When compiling to JavaScript, numbers are doubles, and bitwise operations are only done on 32-bit numbers.
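If you want test vectors to validate a Dart implementation, a small reference rotate in Python may help (Python ints are unbounded, so the result has to be masked back to 64 bits explicitly; rotate_left64 is just an illustrative helper):
MASK64 = (1 << 64) - 1

def rotate_left64(n, count):
    # Treat n as an unsigned 64-bit value and rotate left by count bits
    n &= MASK64
    count %= 64
    return ((n << count) | (n >> (64 - count))) & MASK64

# The sign-bit pattern rotated left by 1 wraps around to bit 0
assert rotate_left64(0x8000000000000000, 1) == 1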

SSE/AVX: Choose from two __m256 float vectors based on per-element min and max absolute value

I am looking for an efficient AVX (AVX512) implementation of
// Given
float u[8];
float v[8];
// Compute
float a[8];
float b[8];
// Such that
for ( int i = 0; i < 8; ++i )
{
    a[i] = fabs(u[i]) >= fabs(v[i]) ? u[i] : v[i];
    b[i] = fabs(u[i]) <  fabs(v[i]) ? u[i] : v[i];
}
I.e., I need to select element-wise into a from u and v based on mask, and into b based on !mask, where mask = (fabs(u) >= fabs(v)) element-wise.
I had this exact same problem just the other day. The solution I came up with (using AVX only) was:
// take the absolute value of u and v
__m256 sign_bit = _mm256_set1_ps(-0.0f);
__m256 u_abs = _mm256_andnot_ps(sign_bit, u);
__m256 v_abs = _mm256_andnot_ps(sign_bit, v);
// get a mask indicating the indices for which abs(u[i]) >= abs(v[i])
__m256 u_ge_v = _mm256_cmp_ps(u_abs, v_abs, _CMP_GE_OS);
// use the mask to select the appropriate elements into a and b, flipping the argument
// order for b to invert the sense of the mask
__m256 a = _mm256_blendv_ps(v, u, u_ge_v);
__m256 b = _mm256_blendv_ps(u, v, u_ge_v);
The AVX512 equivalent would be:
// take the absolute value of u and v
__m512 sign_bit = _mm512_set1_ps(-0.0f);
__m512 u_abs = _mm512_andnot_ps(sign_bit, u);
__m512 v_abs = _mm512_andnot_ps(sign_bit, v);
// get a mask indicating the indices for which abs(u[i]) >= abs(v[i])
__mmask16 u_ge_v = _mm512_cmp_ps_mask(u_abs, v_abs, _CMP_GE_OS);
// use the mask to select the appropriate elements into a and b, flipping the argument
// order for b to invert the sense of the mask
__m512 a = _mm512_mask_blend_ps(u_ge_v, v, u);
__m512 b = _mm512_mask_blend_ps(u_ge_v, u, v);
As Peter Cordes suggested in the comments above, there are other approaches as well like taking the absolute value followed by a min/max and then reinserting the sign bit, but I couldn't find anything that was shorter/lower latency than this sequence of instructions.
Actually, there is another approach using AVX512DQ's VRANGEPS via the _mm512_range_ps() intrinsic. Intel's intrinsic guide describes it as follows:
Calculate the max, min, absolute max, or absolute min (depending on control in imm8) for packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst. imm8[1:0] specifies the operation control: 00 = min, 01 = max, 10 = absolute max, 11 = absolute min. imm8[3:2] specifies the sign control: 00 = sign from a, 01 = sign from compare result, 10 = clear sign bit, 11 = set sign bit.
Note that there appears to be a typo in the above; actually imm8[1:0] == 10 is "absolute min" and imm8[1:0] == 11 is "absolute max" if you look at the details of the per-element operation:
CASE opCtl[1:0] OF
    0: tmp[31:0] := (src1[31:0] <= src2[31:0]) ? src1[31:0] : src2[31:0]
    1: tmp[31:0] := (src1[31:0] <= src2[31:0]) ? src2[31:0] : src1[31:0]
    2: tmp[31:0] := (ABS(src1[31:0]) <= ABS(src2[31:0])) ? src1[31:0] : src2[31:0]
    3: tmp[31:0] := (ABS(src1[31:0]) <= ABS(src2[31:0])) ? src2[31:0] : src1[31:0]
ESAC
CASE signSelCtl[1:0] OF
    0: dst[31:0] := (src1[31] << 31) OR (tmp[30:0])
    1: dst[31:0] := tmp[63:0]
    2: dst[31:0] := (0 << 31) OR (tmp[30:0])
    3: dst[31:0] := (1 << 31) OR (tmp[30:0])
ESAC
RETURN dst
So you can get the same result with just two instructions:
auto a = _mm512_range_ps(v, u, 0x7); // 0b0111 = sign from compare result, absolute max
auto b = _mm512_range_ps(v, u, 0x6); // 0b0110 = sign from compare result, absolute min
The argument order (v, u) is a bit unintuitive, but it's needed in order to get the same behavior that you described in the OP in the event that the elements have equal absolute value (namely, that the value from u is passed through to a, and v goes to b).
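To convince yourself which imm8 values select which behavior, here is a scalar Python model of one lane, mirroring the pseudocode above (range_op is a hypothetical helper and ignores -0.0 and NaN details):
def range_op(src1, src2, imm8):
    # Scalar model of a single VRANGEPS lane
    op_ctl, sign_sel = imm8 & 0x3, (imm8 >> 2) & 0x3
    if op_ctl == 0:
        tmp = src1 if src1 <= src2 else src2                 # min
    elif op_ctl == 1:
        tmp = src2 if src1 <= src2 else src1                 # max
    elif op_ctl == 2:
        tmp = src1 if abs(src1) <= abs(src2) else src2       # absolute min
    else:
        tmp = src2 if abs(src1) <= abs(src2) else src1       # absolute max
    if sign_sel == 0:
        return abs(tmp) if src1 >= 0 else -abs(tmp)          # sign from src1
    if sign_sel == 1:
        return tmp                                           # sign from compare result
    return abs(tmp) if sign_sel == 2 else -abs(tmp)          # clear / set sign bit

u, v = 3.0, -5.0
print(range_op(v, u, 0x7), range_op(v, u, 0x6))  # expect -5.0 (abs max) and 3.0 (abs min)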
On Skylake and Ice Lake Xeon platforms (probably on any of the Xeons that have dual FMA units?), VRANGEPS has a throughput of 2 per clock, so the two operations can issue and execute simultaneously, with a latency of 4 cycles. This is only a modest latency improvement over the original approach, but the throughput is better and it requires fewer instructions/uops/instruction-cache space.
clang does a pretty reasonable job of auto-vectorizing it with -ffast-math and the necessary __restrict qualifiers: https://godbolt.org/z/NMvN1u. It ANDs both inputs to take their absolute values, compares once, and then uses vblendvps twice on the original inputs with the same mask but the sources in the opposite order, to get the min and the max.
That's pretty much what I was thinking before checking what compilers did, and looking at their output to firm up the details I hadn't thought through yet. I don't see anything more clever than that. I don't think we can avoid taking abs() of both inputs separately; there's no cmpps compare predicate that compares magnitudes and ignores the sign bit.
// untested: I *might* have reversed min/max, but I think this is right.
#include <immintrin.h>

// returns min_abs
__m256 minmax_abs(__m256 u, __m256 v, __m256 *max_result) {
    const __m256 signbits = _mm256_set1_ps(-0.0f);
    __m256 abs_u = _mm256_andnot_ps(signbits, u);
    __m256 abs_v = _mm256_andnot_ps(signbits, v);  // strip the sign bit
    __m256 maxabs_is_v = _mm256_cmp_ps(abs_u, abs_v, _CMP_LT_OS);  // u < v
    *max_result = _mm256_blendv_ps(v, u, maxabs_is_v);
    return _mm256_blendv_ps(u, v, maxabs_is_v);
}
You'd do the same thing with AVX512 except you compare into a mask instead of another vector.
// returns min_abs
__m512 minmax_abs512(__m512 u, __m512 v, __m512 *max_result) {
    const __m512 absmask = _mm512_castsi512_ps(_mm512_set1_epi32(0x7fffffff));
    __m512 abs_u = _mm512_and_ps(absmask, u);
    __m512 abs_v = _mm512_and_ps(absmask, v);  // strip the sign bit
    __mmask16 maxabs_is_v = _mm512_cmp_ps_mask(abs_u, abs_v, _CMP_LT_OS);  // u < v
    *max_result = _mm512_mask_blend_ps(maxabs_is_v, v, u);
    return _mm512_mask_blend_ps(maxabs_is_v, u, v);
}
Clang compiles the return statement in an interesting way (Godbolt):
.LCPI2_0:
.long 2147483647 # 0x7fffffff
minmax_abs512(float __vector(16), float __vector(16), float __vector(16)*): # #minmax_abs512(float __vector(16), float __vector(16), float __vector(16)*)
vbroadcastss zmm2, dword ptr [rip + .LCPI2_0]
vandps zmm3, zmm0, zmm2
vandps zmm2, zmm1, zmm2
vcmpltps k1, zmm3, zmm2
vblendmps zmm2 {k1}, zmm1, zmm0
vmovaps zmmword ptr [rdi], zmm2 ## store the blend result
vmovaps zmm0 {k1}, zmm1 ## interesting choice: blend merge-masking
ret
Instead of using another vblendmps, clang notices that zmm0 already has one of the blend inputs, and uses merge-masking with a regular vector vmovaps. This has zero advantage on Skylake-AVX512 for 512-bit vblendmps (both are single-uop instructions for port 0 or 5), but if Agner Fog's instruction tables are right, vblendmps x/y/zmm only ever runs on port 0 or 5, while a masked 256-bit or 128-bit vmovaps x/ymm{k}, x/ymm can run on any of p0/p1/p5.
Both are single-uop / single-cycle-latency instructions, unlike AVX2 vblendvps based on a mask vector, which is 2 uops (so AVX512 is an advantage even for 256-bit vectors). Unfortunately, none of gcc, clang, or ICC turn the _mm256_cmp_ps into _mm256_cmp_ps_mask and optimize the AVX2 intrinsics to AVX512 instructions when compiling with -march=skylake-avx512.
s/512/256/ to make a version of minmax_abs512 that uses AVX512 for 256-bit vectors.
Gcc goes even further, and does the questionable "optimization" of
vmovaps zmm2, zmm1 # tmp118, v
vmovaps zmm2{k1}, zmm0 # tmp118, tmp114, tmp118, u
instead of using one blend instruction. (I keep thinking I'm seeing a store followed by a masked store, but no, neither compiler is blending that way).

code snippet lack of understanding

I am wrestling with an Objective-C code snippet that I need to convert.
One of the functions is as follows:
+ (float)calcTemp:(NSData *)data {
    char scratchVal[data.length];
    [data getBytes:&scratchVal length:data.length];
    UInt16 temp;
    temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);
    return (float)temp;
}
This line I just can't seem to grasp:
temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);
I know it's probably a novice question (I am a noob), but if someone could explain that line to me I would greatly appreciate it, in particular the address reference and what the operators are doing.
In the code snippet I also don't see why they call the getBytes:length: method on data, since the result doesn't seem to be used. But mainly, I'm just trying to understand the line that I pointed out.
The line
temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);
is creating an unsigned 16-bit integer value from two bytes originating in scratchVal. A single & in this context is not the address operator but bitwise AND. So the lower byte of temp is set from the first byte contained in scratchVal, and the upper byte of temp is set by left-shifting the second byte contained in scratchVal. The two resulting numbers are joined together using bitwise OR |. To avoid sign extension or other unwanted bits the masks 0xff and 0xff00 are used to ensure all undesirables are zero.
Presented visually, if scratchVal contains the bits aaaaaaaa bbbbbbbb in the first two bytes then temp will end up as an unsigned integer with the bit pattern bbbbbbbbaaaaaaaa.
The second question asked why they're calling -getBytes:length:. The line
[data getBytes:&scratchVal length:data.length];
reads the bytes from data into the scratchVal temporary buffer.
In response to the question in the comment
why it is needed to left shift the bits to concatenate them
A simple assignment won't work. Assuming again that scratchVal is a char buffer containing the bits aaaaaaaa bbbbbbbb, the code
temp = scratchVal[0];
would make temp equal to the UInt16 equivalent of the bits aaaaaaaa. You can't use addition because the result will be whatever value comes from adding the two bytes together (aaaaaaaa + bbbbbbbb).
Using real numbers as an example, suppose the first two bytes of scratchVal are equal to 0x7f 0x7f.
temp = scratchVal[0] + scratchVal[1];
Turns out to be 0x7f + 0x7f = 0xfe which is not the purpose of this code.
Building the value using OR can be better understood by breaking it down into steps.
The first part of the expression is scratchVal[0] & 0xff = 0x7f & 0xff = 0x7f
The second part is (scratchVal[1] << 8) & 0xff00 = (0x7f << 8) & 0xff00 = 0x7f00 & 0xff00 = 0x7f00
The final result in this case is 0x7f | 0x7f00 = 0x7f7f.
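For comparison, the same reconstruction in Python: the expression is just a little-endian unsigned 16-bit read of the first two bytes.
import struct

data = bytes([0x7F, 0x7F])
temp = (data[0] & 0xFF) | ((data[1] << 8) & 0xFF00)
assert temp == struct.unpack('<H', data[:2])[0] == 0x7F7F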

Warning: Signed shift result (0x1F0000000) requires 34 bits to represent, but 'int' only has 32 bits

After compiling the reMail project with no errors, one of the remaining warnings is:
remail-iphone/sqlite3/sqlite3.c:18703:15: Signed shift result
(0x1F0000000) requires 34 bits to represent, but 'int' only has 32
bits
i.e. (0x1f<<28) in the following code:
if (!(a&0x80))
{
  a &= (0x1f<<28)|(0x7f<<14)|(0x7f);
  b &= (0x7f<<14)|(0x7f);
  b = b<<7;
  a |= b;
  s = s>>11;
  *v = ((u64)s)<<32 | a;
  return 7;
}
What's the proper way to kill this warning for iOS (32-bit)?
reMail for iPhone seems to be using an old version of SQLite (3.6.15). If I'm not mistaken, the following commit should fix exactly this problem: http://www.sqlite.org/src/info/587109c81a9cf479?sbs=0
if (!(a&0x80))
{
  /* assert( ((0xFF<<28)|(0x7f<<14)|(0x7f))==0xf01fc07f ); */
  a &= 0xf01fc07f;
  b &= (0x7f<<14)|(0x7f);
  b = b<<7;
  a |= b;
  s = s>>11;
  *v = ((u64)s)<<32 | a;
  return 7;
}
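To convince yourself that the literal 0xf01fc07f really is the value the original expression produced after truncation to 32 bits (and that the 0xFF variant in the assert comment gives the same result), a quick check in Python:
print(hex(((0x1f << 28) | (0x7f << 14) | 0x7f) & 0xFFFFFFFF))  # 0xf01fc07f
print(hex(((0xff << 28) | (0x7f << 14) | 0x7f) & 0xFFFFFFFF))  # 0xf01fc07f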
However, there might be other code sections where this problem occurs. The mentioned link shows two instances in util.c, but since sqlite3.c is "an amalgamation of many separate C source files from SQLite", you may find additional occurrences.
Maybe reMail would work with a recent version of SQLite, too...

PGMidi changing pitch sendBytes example

I've been trying for two days to send a MIDI signal. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb};
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at the events with a MIDI tool I see a minimum of -8192 and a maximum of +8064, while I want to get -8192 and +8191.
Sorry if the question is simple.
Pitch bend data is offset to avoid any sign bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that, something like this Python code:
def EncodePitchBend(value):
    ''' return a 2-tuple containing (msb, lsb) '''
    if (value < -8192) or (value > 8191):
        raise ValueError
    value += 8192
    return (((value >> 7) & 0x7F), (value & 0x7f))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as #bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is at 8192 (0x2000) so I edited my answer to offset pitchValue by 8192.
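Going the other way, for example to check what your MIDI monitor shows, a hypothetical inverse of the function above:
def DecodePitchBend(msb, lsb):
    # Rebuild the signed pitch value from the two 7-bit data bytes
    return (((msb & 0x7F) << 7) | (lsb & 0x7F)) - 8192

assert DecodePitchBend(*EncodePitchBend(8191)) == 8191
assert DecodePitchBend(*EncodePitchBend(-8192)) == -8192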
