I am wrestling with a C code snippet I need to convert.
One of the functions is as follows:
+(float) calcTemp:(NSData *)data {
    char scratchVal[data.length];
    [data getBytes:&scratchVal length:data.length];
    UInt16 temp;
    temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);
    return (float)temp;
}
This line I just can't seem to grasp:
temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);
I know it's probably a novice question (but I am a noob), so if someone could explain that line to me I would greatly appreciate it, in particular the thing that looks like an address reference and how the operators are being used.
In the code snippet I also don't see why they call the getBytes:length: method on data, since it doesn't seem to be used. But mainly I'm just trying to understand the line I pointed out.
The line
temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);
is creating an unsigned 16-bit integer value from two bytes in scratchVal. A single & in this context is not the address-of operator but bitwise AND. The lower byte of temp is set from the first byte in scratchVal, and the upper byte of temp is set by left-shifting the second byte. The two results are then combined with bitwise OR |. The masks 0xff and 0xff00 are there to guard against sign extension and make sure the unwanted bits are all zero.
Presented visually, if scratchVal contains the bits aaaaaaaa bbbbbbbb in the first two bytes then temp will end up as an unsigned integer with the bit pattern bbbbbbbbaaaaaaaa.
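For reference, here is a minimal standalone version of that idea in plain C (the byte values are my own example, not from the original code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned char scratchVal[2] = { 0x34, 0x12 };   /* aaaaaaaa = 0x34, bbbbbbbb = 0x12 */
    uint16_t temp;

    /* the first byte fills bits 0-7, the second byte is shifted up into bits 8-15 */
    temp = (scratchVal[0] & 0xff) | ((scratchVal[1] << 8) & 0xff00);

    printf("0x%04x\n", (unsigned)temp);   /* prints 0x1234 */
    return 0;
}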
The second question asked why they're calling -getBytes:length:. The line
[data getBytes:&scratchVal length:data.length];
reads the bytes from data into the scratchVal temporary buffer.
In response to the question in the comment
why it is needed to left shift the bits to concatenate them
A simple assignment won't work. Assuming again that scratchVal is a char buffer containing the bits aaaaaaaa bbbbbbbb, the code
temp = scratchVal[0];
would make temp equal to the UInt16 equivalent of the bits aaaaaaaa. You can't use addition because the result will be whatever value comes from adding the two bytes together (aaaaaaaa + bbbbbbbb).
Using real numbers as an example, suppose the first two bytes of scratchVal are equal to 0x7f 0x7f.
temp = scratchVal[0] + scratchVal[1];
turns out to be 0x7f + 0x7f = 0xfe, which is not what this code is meant to produce.
Building the value using OR can be better understood by breaking it down into steps.
The first part of the expression is scratchVal[0] & 0xff = 0x7f & 0xff = 0x7f
The second part is (scratchVal[1] << 8) & 0xff00 = (0x7f << 8) & 0xff00 = 0x7f00 & 0xff00 = 0x7f00
The final result in this case is 0x7f | 0x7f00 = 0x7f7f.
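If you want to see those intermediate values for yourself, here is a quick throwaway check in C (my own example, not part of the original code):

#include <stdio.h>

int main(void)
{
    unsigned char lo = 0x7f, hi = 0x7f;

    printf("0x%04x\n", lo & 0xff);                            /* 0x007f */
    printf("0x%04x\n", (hi << 8) & 0xff00);                   /* 0x7f00 */
    printf("0x%04x\n", (lo & 0xff) | ((hi << 8) & 0xff00));   /* 0x7f7f */
    return 0;
}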
Related
I have a specific requirement to convert a stream of bytes into a character encoding that happens to be 6-bits per character.
Here's an example:
Input: 0x50 0x11 0xa0
Character Table:
010100 T
000001 A
000110 F
100000 SPACE
Output: "TAF "
Logically I can understand how this works:
Taking 0x50 0x11 0xa0 and showing as binary:
01010000 00010001 10100000
Which is "TAF ".
What's the best way to do this programmatically (pseudo code or C++)? Thank you!
Well, every 3 bytes, you end up with four characters. So for one thing, you need to work out what to do if the input isn't a multiple of three bytes. (Does it have padding of some kind, like base64?)
Then I'd probably take each 3 bytes in turn. In C#, which is close enough to pseudo-code for C :)
for (int i = 0; i < array.Length; i += 3)
{
    // Top 6 bits of byte i
    int value1 = array[i] >> 2;
    // Bottom 2 bits of byte i, top 4 bits of byte i+1
    int value2 = ((array[i] & 0x3) << 4) | (array[i + 1] >> 4);
    // Bottom 4 bits of byte i+1, top 2 bits of byte i+2
    int value3 = ((array[i + 1] & 0xf) << 2) | (array[i + 2] >> 6);
    // Bottom 6 bits of byte i+2
    int value4 = array[i + 2] & 0x3f;

    // Now use value1...value4, e.g. putting them into a char array.
    // You'll need to decode from the 6-bit number (0-63) to the character.
}
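To turn the 6-bit values into characters you could use a 64-entry lookup table. Here is a C sketch of that last step; only the four codes from the question's table are filled in, everything else is a placeholder, and the function name is my own:

#include <stdio.h>

static char decode6(unsigned int v)
{
    static char table[64];
    static int init = 0;
    if (!init) {
        for (int i = 0; i < 64; i++) table[i] = '?';   /* unknown codes */
        table[024] = 'T';   /* 010100 */
        table[001] = 'A';   /* 000001 */
        table[006] = 'F';   /* 000110 */
        table[040] = ' ';   /* 100000 */
        init = 1;
    }
    return table[v & 0x3f];
}

int main(void)
{
    unsigned char input[] = { 0x50, 0x11, 0xa0 };
    for (int i = 0; i + 2 < (int)sizeof input; i += 3) {
        putchar(decode6(input[i] >> 2));                                     /* top 6 bits of byte i */
        putchar(decode6(((input[i] & 0x3) << 4) | (input[i + 1] >> 4)));     /* next 6 bits */
        putchar(decode6(((input[i + 1] & 0xf) << 2) | (input[i + 2] >> 6))); /* next 6 bits */
        putchar(decode6(input[i + 2] & 0x3f));                               /* bottom 6 bits of byte i+2 */
    }
    putchar('\n');   /* prints "TAF " */
    return 0;
}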
Just in case someone is interested, here is another variant that extracts 6-bit numbers from the stream as soon as they appear in it. That is, results can be obtained even if fewer than 3 bytes have been read so far, which is useful for unpadded streams.
The code carries state between iterations in the variable n, which stores the number of bits left over in the accumulator a from the previous byte.
int n = 0;
unsigned char a = 0;
unsigned char b = 0;

while (read_byte(&b)) {
    // save (6 - n) most significant bits of the input byte to the proper
    // position in the accumulator
    a |= (b >> (n + 2)) & (077 >> n);
    store_6bit(a);
    a = 0;
    // save the remaining least significant bits of the input byte to the
    // proper position in the accumulator
    a |= (b << (4 - n)) & ((077 << (4 - n)) & 077);
    if (n == 4) {
        store_6bit(a);
        a = 0;
    }
    n = (n + 2) % 6;
}
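read_byte and store_6bit are left undefined in that answer; if you want to try the loop out, a pair of stub implementations could look like this (the names and test data are assumptions on my part, feeding in the 0x50 0x11 0xa0 example and just printing each 6-bit value in octal):

#include <stdio.h>

static const unsigned char input[] = { 0x50, 0x11, 0xa0 };
static unsigned int pos = 0;

/* stub: hand out the next input byte, return 0 when the stream runs out */
static int read_byte(unsigned char *out)
{
    if (pos >= sizeof input) return 0;
    *out = input[pos++];
    return 1;
}

/* stub: just print each extracted 6-bit value; the example prints 24 01 06 40 */
static void store_6bit(unsigned char v)
{
    printf("%02o ", (unsigned)v);
}

Drop the while loop above into main() after these definitions and it will walk through the example bytes.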
This is my first attempt at DCPU; I'm checking the machine code generated by DCPU-16 assembly.
I am using this emulator : http://dcpu.ru/
I am trying to compare code generated by
SET A, 0x1E
SET A, 0x1F
code generated is as follow :
fc01
7c01 001f
I don't get why operand size changes between those two values
That emulator appears to be using the next version of the DCPU-16 spec, which specifies that the short (same-word) literal form of operand a permits values from 0xFFFF (-1) to 0x1E (30). This means that to get any literal value outside this range the assembler has to use the next-word literal form, which makes the instruction one word longer.
0x1F (dec:31) is no longer a short literal (values -1 to 30), so it has to be read as a "next word" argument.
The opcodes are thus:
SET A, 0x1E
SET = 00001
A = 00000
1E = 111111 (short literal encoding: 0x21 + 0x1E = 0x3F)
op = 1111110000000001 = fc01
SET A, 0x1F
SET = 00001
A = 00000
NW = 011111
op = 0111110000000001 = 7c01 + 001f
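If it helps to see it in code, packing those fields is just shifts and ORs. A small C sketch of my own, using the layout from the breakdown above (aaaaaa bbbbb ooooo) and the short-literal encoding 0x21 + value:

#include <stdio.h>
#include <stdint.h>

/* pack one DCPU-16 instruction word: aaaaaa bbbbb ooooo */
static uint16_t pack(unsigned a, unsigned b, unsigned o)
{
    return (uint16_t)((a << 10) | (b << 5) | o);
}

int main(void)
{
    /* SET A, 0x1E: 0x1E fits in a short literal, encoded as 0x21 + 0x1E = 0x3F */
    printf("%04x\n", (unsigned)pack(0x21 + 0x1E, 0x00, 0x01));            /* fc01 */

    /* SET A, 0x1F: too big, so operand a becomes "next word" (code 0x1F)
       and the literal 0x001F is emitted as a second word */
    printf("%04x %04x\n", (unsigned)pack(0x1F, 0x00, 0x01), 0x001Fu);     /* 7c01 001f */
    return 0;
}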
I've been trying for two days now to send a MIDI signal. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb};
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at the events with a MIDI monitoring tool I see a minimum of -8192 and a maximum of +8064. I want to get -8192 and +8191.
Sorry if the question is simple.
Pitch bend data is offset to avoid any sign bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that, something like this Python code:
def EncodePitchBend(value):
    ''' return a 2-tuple containing (msb, lsb) '''
    if (value < -8192) or (value > 8191):
        raise ValueError
    value += 8192
    return (((value >> 7) & 0x7F), (value & 0x7f))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as @bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is sent as 8192 (0x2000), so I edited my answer to offset pitchValue by 8192.
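Putting the two answers together as plain C, roughly (the helper name and the sanity checks are my own illustration, not part of either answer):

#include <assert.h>
#include <stdint.h>

/* split a pitch-bend value in the range -8192..8191 into two 7-bit data bytes */
static void encode_pitch_bend(int value, uint8_t *msb, uint8_t *lsb)
{
    assert(value >= -8192 && value <= 8191);
    value += 8192;                       /* offset so the range becomes 0..16383 */
    *msb = (uint8_t)((value >> 7) & 0x7F);
    *lsb = (uint8_t)(value & 0x7F);
}

int main(void)
{
    uint8_t msb, lsb;

    encode_pitch_bend(-8192, &msb, &lsb);   /* minimum: msb = 0x00, lsb = 0x00 */
    assert(msb == 0x00 && lsb == 0x00);

    encode_pitch_bend(8191, &msb, &lsb);    /* maximum: msb = 0x7F, lsb = 0x7F */
    assert(msb == 0x7F && lsb == 0x7F);

    encode_pitch_bend(0, &msb, &lsb);       /* center: msb = 0x40, lsb = 0x00 */
    assert(msb == 0x40 && lsb == 0x00);
    return 0;
}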
I am trying to create a software simulation on an Ubuntu GNU/Linux machine which will work like PPPoE. I would like this simulator to take outgoing packets, strip off the ethernet header, insert the PPP flags (7E, FF, 03, 00, and 21) and place the IP layer information in the PPP packet. I am having trouble with the FCS that goes after the data. From what I can tell, the cell modem I am using has a 2 byte FCS using the CRC16-CCITT method. I have found several pieces of software that will calculate this checksum, but none of them produce what is coming out the serial line (I have a serial line "sniffer" that shows me everything the modem is being sent).
I have been looking into the source of pppd and the linux kernel itself, and I can see that both of them have a method of adding an FCS to the data. It seems quite difficult to implement, as I have no experience in kernel hacking. Can someone come up with a simple way (preferably in Python) of calculating an FCS that matches the one that the kernel produces?
Thanks.
P.S. If anyone wants, I can add a sample of the data output I am getting to the serial modem.
I used the simple Python library crcmod.
import crcmod #pip3 install crcmod
fcsData = "A0 19 03 61 DC"
fcsData=''.join(fcsData.split(' '))
print(fcsData)
crc16 = crcmod.mkCrcFun(0x11021, rev=True,initCrc=0x0000, xorOut=0xFFFF)
print(hex(crc16(bytes.fromhex(fcsData))))
fcs=hex(crc16(bytes.fromhex(fcsData)))
I recently did something like this while testing code to kill a PPP connection.
This worked for me:
# RFC 1662 Appendix C
def mkfcstab():
    P = 0x8408

    def valiter():
        for b in range(256):
            v = b
            i = 8
            while i:
                v = (v >> 1) ^ P if v & 1 else v >> 1
                i -= 1
            yield v & 0xFFFF

    return tuple(valiter())
fcstab = mkfcstab()
PPPINITFCS16 = 0xffff # Initial FCS value
PPPGOODFCS16 = 0xf0b8 # Good final FCS value
def pppfcs16(fcs, bytelist):
    for b in bytelist:
        fcs = (fcs >> 8) ^ fcstab[(fcs ^ b) & 0xff]
    return fcs
To get the value:
fcs = pppfcs16(PPPINITFCS16, (ord(c) for c in frame)) ^ 0xFFFF
and swap the bytes (I used chr((fcs & 0xFF00) >> 8), chr(fcs & 0x00FF))
Got this from mbed.org PPP-Blinky:
// http://www.sunshine2k.de/coding/javascript/crc/crc_js.html - Correctly calculates
// the 16-bit FCS (crc) on our frames (Choose CRC16_CCITT_FALSE)
int crc;

void crcReset()
{
    crc = 0xffff;   // crc restart
}

void crcDo(int x)   // cumulative crc
{
    for (int i = 0; i < 8; i++) {
        crc = ((crc & 1) ^ (x & 1)) ? (crc >> 1) ^ 0x8408 : crc >> 1;   // crc calculator
        x >>= 1;
    }
}

int crcBuf(char *buf, int size)   // crc on an entire block of memory
{
    crcReset();
    for (int i = 0; i < size; i++)
        crcDo(*buf++);
    return crc;
}
Let's say I have this bit field value: 10101001
How would I test whether any other value differs from it in n bits, without considering the positions?
Example:
10101001
10101011 --> 1 bit different
10101001
10111001 --> 1 bit different
10101001
01101001 --> 2 bits different
10101001
00101011 --> 2 bits different
I need to make a lot of these comparisons, so I'm primarily looking for performance, but any hint is very welcome.
Take the XOR of the two fields and do a population count of the result.
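In C with GCC or Clang that really is a one-liner, assuming one of those compilers is available (other toolchains need their own intrinsic or one of the loops below):

/* number of bits in which a and b differ */
int bit_difference(unsigned int a, unsigned int b)
{
    return __builtin_popcount(a ^ b);
}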
If you XOR the 2 values together, you are left only with the bits that are different.
You then only need to count the bits that are still 1, and you have your answer.
In C:

unsigned char val1 = 12;
unsigned char val2 = 123;
unsigned char xored = val1 ^ val2;
int i;
int numBits = 0;

for (i = 0; i < 8; i++)
{
    if (xored & 1) numBits++;
    xored >>= 1;
}
although there are probably faster ways to count the bits in a byte
(you could, for instance, use a lookup table with 256 entries)
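A sketch of that lookup-table idea in C (my own example, reusing the values from the loop above): build the 256-entry table once, then each byte comparison is a single index.

#include <stdio.h>

static unsigned char popcount8[256];

/* fill the table once: popcount8[i] = number of 1 bits in i */
static void init_popcount8(void)
{
    for (int i = 0; i < 256; i++)
        popcount8[i] = (unsigned char)((i & 1) + popcount8[i >> 1]);
}

int main(void)
{
    init_popcount8();

    unsigned char val1 = 12, val2 = 123;
    printf("%d\n", popcount8[val1 ^ val2]);   /* 12 ^ 123 = 0x77, so this prints 6 */
    return 0;
}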
Just like everybody else said, use XOR to determine what's different and then use one of these algorithms to count.
This gets the bit difference between the values and counts the bits three at a time:
public static int BitDifference(int a, int b) {
    int cnt = 0, bits = a ^ b;
    while (bits != 0) {
        // 0xE994 packs the bit counts of 0..7 (0,1,1,2,1,2,2,3) as 2-bit fields
        cnt += (0xE994 >> ((bits & 7) << 1)) & 3;
        bits >>= 3;
    }
    return cnt;
}
XOR the numbers, then the problem becomes a matter of counting the 1s in the result.
In Java:
Integer.bitCount(a ^ b)
Comparison is performed with XOR, as others already answered.
Counting can be performed in several ways:
shift left and addition.
lookup in a table.
logic formulas that you can find with Karnaugh maps.