How to decode this DBC definition of CANbus? - can-bus

I wrote a .DBC file decoder which works very well.
But when I add a new hardware DBC, my code does not decode the signals of the following message correctly. Here is the DBC portion:
BO_ 2566852816 ERROR_INFO: 8 Vector__XXX
 SG_ Slot4_Error_Reserved : 62|7@0+ (1,0) [0|127] "" Vector__XXX
 SG_ Slot3_Error_Reserved : 46|7@0+ (1,0) [0|127] "" Vector__XXX
 SG_ Slot2_Error_Reserved : 30|7@0+ (1,0) [0|127] "" Vector__XXX
 SG_ Slot1_Error_Reserved : 14|7@0+ (1,0) [0|127] "" Vector__XXX
 SG_ Slot4_Error_State : 49|3@0+ (1,0) [0|7] "#" Vector__XXX
 SG_ Slot3_Error_State : 33|3@0+ (1,0) [0|7] "#" Vector__XXX
 SG_ Slot2_Error_State : 17|3@0+ (1,0) [0|7] "#" Vector__XXX
 SG_ Slot4_Error_Id : 55|6@0+ (1,0) [0|63] "#" Vector__XXX
 SG_ Slot3_Error_Id : 39|6@0+ (1,0) [0|63] "#" Vector__XXX
 SG_ Slot2_Error_Id : 23|6@0+ (1,0) [0|63] "#" Vector__XXX
 SG_ Slot1_Error_State : 1|3@0+ (1,0) [0|7] "#" Vector__XXX
 SG_ Slot1_Error_Id : 7|6@0+ (1,0) [0|63] "#" Vector__XXX
Here are the bytes of the ERROR_INFO frame I receive:
04 00 08 00 0D 00 10 00
BMS master decodes it like so, which looks fine to me:
Signal                  Decoded value
Slot2_Error_Reserved    0
Slot3_Error_Id          3
Slot4_Error_State       0
Slot3_Error_Reserved    0
Slot2_Error_State       0
Slot4_Error_Id          4
Slot4_Error_Reserved    0
Slot2_Error_Id          2
Slot1_Error_Reserved    0
Slot3_Error_State       2
Slot1_Error_Id          1
Slot1_Error_State       0
If we focus on the definition of Slot1_Error_State (the 1|3@0+ part), this means:
start bit is 1, i.e. the second bit;
length is 3 bits;
0 means big-endian (1 would have meant little-endian);
the + means unsigned.
As per my understanding of the DBC format, starting at bit 1 for 3 bits is nonsense, yet the decoded values prove me wrong.
I tried to decode the values by hand and succeeded, as follows:
As you can see in my drawing, the IDs and states I got are identical to what bmsMaster found.
But I am pulling my hair out trying to understand how this relates to the definition in the DBC.
Can anyone explain, step by step, how to apply the two rules I mentioned at the beginning of the question?

The confusion comes from the fact that your signals are defined as big-endian (Motorola byte order).
I often get confused myself when big- and little-endian signals are mixed.
If you visualize Slot1_Error_State, it actually starts at the least significant bits of Byte 0 (bit 1, then bit 0) and ends at the most significant bit of Byte 1:
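The walk order can be made concrete in code. In DBC big-endian numbering, bit 7 is the MSB of byte 0, bit 0 its LSB, bit 15 the MSB of byte 1, and so on; the start bit names the signal's most significant bit, and successive bits count downwards, wrapping to the MSB of the next byte. A small Python sketch (the helper name is mine, not a library function) reproduces the BMS master's values from the frame above:

```python
def extract_motorola(data: bytes, start_bit: int, length: int) -> int:
    """Read a big-endian (Motorola) DBC signal: start_bit is the signal's MSB."""
    byte_i, bit_i = start_bit // 8, start_bit % 8
    value = 0
    for _ in range(length):
        value = (value << 1) | ((data[byte_i] >> bit_i) & 1)
        if bit_i == 0:                     # byte exhausted: jump to MSB of next byte
            byte_i, bit_i = byte_i + 1, 7
        else:
            bit_i -= 1
    return value

frame = bytes.fromhex("04 00 08 00 0D 00 10 00")
print(extract_motorola(frame, 7, 6))   # Slot1_Error_Id    -> 1
print(extract_motorola(frame, 1, 3))   # Slot1_Error_State -> 0 (bits 1, 0, then 15)
print(extract_motorola(frame, 33, 3))  # Slot3_Error_State -> 2 (bits 33, 32, then 47)
print(extract_motorola(frame, 55, 6))  # Slot4_Error_Id    -> 4
```

So "start at bit 1 for 3 bits" is not nonsense: the signal occupies bits 1, 0 and 15, straddling the byte boundary.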

Related

Ada - how to explicitly pack a bit-field record type?

Please consider the following experimental Ada program which attempts to create a 32-bit record with well defined bit fields, create one and output it to a file stream...
with System;
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Streams.Stream_Io; use Ada.Streams.Stream_Io;
procedure Main is

   type Bit is mod (2 ** 1);
   type Opcode_Number is mod (2 ** 4);
   type Condition_Number is mod (2 ** 4);
   type Operand is mod (2 ** 9);

   type RAM_Register is
      record
         Opcode : Opcode_Number;
         Z      : Bit;
         C      : Bit;
         R      : Bit;
         I      : Bit;
         Cond   : Condition_Number;
         Rsvd_1 : Bit;
         Rsvd_2 : Bit;
         Dest   : Operand;
         Src    : Operand;
      end record;

   for RAM_Register use
      record
         Opcode at 0 range 28 .. 31;
         Z      at 0 range 27 .. 27;
         C      at 0 range 26 .. 26;
         R      at 0 range 25 .. 25;
         I      at 0 range 24 .. 24;
         Cond   at 0 range 20 .. 23;
         Rsvd_1 at 0 range 19 .. 19;
         Rsvd_2 at 0 range 18 .. 18;
         Dest   at 0 range 9 .. 17;
         Src    at 0 range 0 .. 8;
      end record;

   for RAM_Register'Size use 32;
   for RAM_Register'Bit_Order use System.High_Order_First;
   -- Ada 2012 language reference 'full_type_declaration'
   -- (page 758, margin number 8/3) for RAM_Register
   pragma Atomic (RAM_Register);

   --                              3         2         1         0
   --                             10987654321098765432109876543210
   --                             OOOOzcriCONDrrDDDDDDDDDsssssssss
   X : RAM_Register := (2#1000#,
                        2#1#,
                        2#1#,
                        2#1#,
                        2#1#,
                        2#1000#,
                        2#1#,
                        2#1#,
                        2#100000001#,
                        2#100000001#);

   The_File   : Ada.Streams.Stream_IO.File_Type;
   The_Stream : Ada.Streams.Stream_IO.Stream_Access;

begin
   begin
      Open (The_File, Out_File, "test.dat");
   exception
      when others =>
         Create (The_File, Out_File, "test.dat");
   end;
   The_Stream := Stream (The_File);
   RAM_Register'Write (The_Stream, X);
   Close (The_File);
end Main;
I used the info here: https://rosettacode.org/wiki/Object_serialization#Ada and here: https://en.wikibooks.org/wiki/Ada_Programming/Attributes/%27Bit_Order (the very last example) to create the above.
Running the code and examining the output with xxd -g1 test.dat gives the following 12 bytes of output...
00000000: 08 01 01 01 01 08 01 01 01 01 01 01 ............
QUESTION:
How can this 32-bit record be written to, or read from, a stream as exactly 32 bits, observing all bit-field positions? Imagine I were communicating with a microcontroller over an RS-232 port: each bit must be exactly in the right place at the right time. The "for RAM_Register use record ..." syntax seems to have had no effect on how 'Write arranges its output.
If I provide my own 'Read and 'Write implementations, doesn't that directly contradict the "for RAM_Register use record ..." code?
You will probably have to convert the instance to an unsigned integer (via an unchecked conversion) and then write that unsigned integer to the stream. The default implementation of 'Write ignores the representation clause (see also RM 13.13.2 (9/3)):
For composite types, the Write or Read attribute for each component is called in canonical order, [...]
So, add
with Interfaces; use Interfaces;
with Ada.Unchecked_Conversion;
and define RAM_Register as
type RAM_Register is
   record
      Opcode : Opcode_Number;
      Z      : Bit;
      C      : Bit;
      R      : Bit;
      I      : Bit;
      Cond   : Condition_Number;
      Rsvd_1 : Bit;
      Rsvd_2 : Bit;
      Dest   : Operand;
      Src    : Operand;
   end record with Atomic;

procedure Write
  (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
   Item   : RAM_Register);

for RAM_Register'Write use Write;

for RAM_Register use
   record
      Opcode at 0 range 28 .. 31;
      Z      at 0 range 27 .. 27;
      C      at 0 range 26 .. 26;
      R      at 0 range 25 .. 25;
      I      at 0 range 24 .. 24;
      Cond   at 0 range 20 .. 23;
      Rsvd_1 at 0 range 19 .. 19;
      Rsvd_2 at 0 range 18 .. 18;
      Dest   at 0 range 9 .. 17;
      Src    at 0 range 0 .. 8;
   end record;

for RAM_Register'Size use 32;
for RAM_Register'Bit_Order use System.High_Order_First;

-----------
-- Write --
-----------

procedure Write
  (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
   Item   : RAM_Register)
is
   function To_Unsigned_32 is
     new Ada.Unchecked_Conversion (RAM_Register, Unsigned_32);
   U32 : Unsigned_32 := To_Unsigned_32 (Item);
begin
   Unsigned_32'Write (Stream, U32);
end Write;
This yields
$ xxd -g1 test.dat
00000000: 01 03 8e 8f ....
Note: the bit order may have been reversed, as I had to comment out the aspect specification for RAM_Register'Bit_Order use System.High_Order_First;
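As a sanity check on that output, the expected 32-bit word can be assembled by hand from the representation clause; the following Python sketch is independent of the Ada code:

```python
import struct

# Assemble the 32-bit word from the representation clause
# (Opcode at bits 28..31, Z at 27, ..., Src at bits 0..8),
# using the field values given for X in the question.
opcode, z, c, r, i = 0b1000, 1, 1, 1, 1
cond, rsvd_1, rsvd_2 = 0b1000, 1, 1
dest = src = 0b100000001

word = ((opcode << 28) | (z << 27) | (c << 26) | (r << 25) | (i << 24)
        | (cond << 20) | (rsvd_1 << 19) | (rsvd_2 << 18) | (dest << 9) | src)

print(f"{word:08x}")                     # 8f8e0301
print(struct.pack("<I", word).hex(" "))  # 01 03 8e 8f, matching the xxd output
```

So the 4 bytes written by the custom 'Write are exactly the packed register, stored little-endian by the x86 host.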

Lua find operand in a string

I have a Lua string like "382+323", "32x291" or "94-23"; how can I check for and return the position of the operator?
I found that string.find(s, "[+-x]") did not work. Any ideas?
th> str = '5+3'
th> string.find(str, '[+-x]')
1 1
th> string.find(str, '[+x-]')
2 2
[+-x] is a pattern that matches 1 character in the range between "+" and "x".
When you want to use the dash as a literal character and not as the range metacharacter, you should put it at the start or end of the character class.
print("Type an arithmetic expression, such as 382 x 3 / 15")
expr = io.read()
i = -1
while i do
  -- Find the next operator, starting from the position of the previous one.
  -- The symbols + and - are magic characters in Lua patterns,
  -- so you have to use % to escape each one.
  -- find returns the indices of s where the occurrence starts and ends;
  -- here we are using just the start index.
  i = expr:find("[%+x%-/]", i + 1)
  if i then
    print("Operator", expr:sub(i, i), "at position", i)
  end
end
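The same dash-placement rule applies in regular-expression character classes, not just Lua patterns, so the pitfall is easy to reproduce in Python:

```python
import re

# [+-x] is a RANGE from '+' (0x2B) to 'x' (0x78), so the digit '5' matches it.
print(re.search(r"[+-x]", "5+3").start())   # 0 ('5' falls inside the +..x range)
# [+x-] puts the dash last, so it matches only the literal characters + x -
print(re.search(r"[+x-]", "5+3").start())   # 1 (the actual operator)
print(re.search(r"[+x-]", "94-23").start()) # 2
```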

How to set 5 bits to value 3 at bit offset 387 in a byte data sequence?

I need to set some bits in a ByteData at a position counted in bits.
How can I do this?
Eg.
var byteData = new ByteData(1024);
var bitData = new BitData(byteData);
// Offset in bits: 387
// Number of bits: 5
// Value: 3
bitData.setBits(387, 5, 3);
Yes, it is quite complicated. I don't know Dart, but these are the general steps you need to take. I will label each variable with a letter, and also use a more complicated example to show you what happens when the bits overflow a byte boundary.
1. Construct the BitData object with a ByteData object (A)
2. Call setBits(offset (B), bits (C), value (D));
I will use example values of:
A: 11111111 11111111 11111111 11111111
B: 7
C: 10
D: 00000000 11111111
3. Rather than using an integer with a fixed length of bits, you could
use another ByteData object (D) containing the bits you want to write.
Also create a mask (E) containing the significant bits.
e.g.
A: 11111111 11111111 11111111 11111111
D: 00000000 11111111
E: 00000011 11111111 (2^C - 1)
4. As an extra bonus step, we can make sure the insignificant
bits are really zero by ANDing with the bitmask.
D = D & E
D 00000000 11111111
E 00000011 11111111
5. Make sure D and E contain at least one full zero byte since we want
to shift them.
D 00000000 00000000 11111111
E 00000000 00000011 11111111
6. Work out these two integer values:
F = The extra bit offset for the start byte: B mod 8 (e.g. 7)
G = The insignificant bits: size(D) - C (e.g. 14)
7. H = G-F which should not be negative here. (e.g. 14-7 = 7)
8. Shift both D and E left by H bits.
D 00000000 01111111 10000000
E 00000001 11111111 10000000
9. Work out first byte number (J) floor(B / 8) e.g. 0
10. Read the value of A at this index out and let this be K
K = 11111111 11111111 11111111
11. AND the current (K) with NOT E to set zeros for the new bits.
Then you can OR the new bits over the top.
L = (K & !E) | D
K & !E = 11111110 00000000 01111111
L = 11111110 01111111 11111111
12. Write L to the same place you read it from.
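The steps above boil down to a masked read-modify-write. A minimal Python sketch of the same idea (the helper name is mine; it writes one bit at a time, numbering bits MSB-first within each byte as the example does, and makes no attempt at efficiency):

```python
def set_bits(data: bytearray, offset: int, length: int, value: int) -> None:
    """Write `length` bits of `value` at bit `offset` (bit 0 = MSB of byte 0)."""
    value &= (1 << length) - 1                 # step 4: drop insignificant bits
    for i in range(length):                    # write the value MSB-first
        bit = (value >> (length - 1 - i)) & 1
        pos = offset + i
        byte_i, bit_i = pos // 8, 7 - (pos % 8)
        data[byte_i] = (data[byte_i] & ~(1 << bit_i)) | (bit << bit_i)

# The worked example: A = all ones, B = 7, C = 10, D = 255.
buf = bytearray([0xFF, 0xFF, 0xFF, 0xFF])
set_bits(buf, 7, 10, 0b0011111111)
print(" ".join(f"{b:08b}" for b in buf))   # 11111110 01111111 11111111 11111111
```

The first three bytes match the value L computed in step 11.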
There is no BitData class, so you'll have to do some of the bit-pushing yourself.
Find the corresponding byte offset, read in some bytes, mask out the existing bits and set the new ones at the correct bit offset, then write it back.
The real complexity comes when you need to store more bits than you can read/write in a single operation.
For endianness, if you are treating the memory as a sequence of bits with arbitrary width, I'd go for little-endian. Endianness only really makes sense for full-sized (2^n-bit, n > 3) integers. A 5 bit integer as the one you are storing can't have any endianness, and a 37 bit integer also won't have any natural way of expressing an endianness.
You can try something like this code (which can definitely be optimized more):
import "dart:typed_data";

void setBitData(ByteBuffer buffer, int offset, int length, int value) {
  assert(value < (1 << length));
  assert(offset + length < buffer.lengthInBytes * 8);
  int byteOffset = offset >> 3;
  int bitOffset = offset & 7;
  if (length + bitOffset <= 32) {
    ByteData data = new ByteData.view(buffer);
    // Can update it in one read/modify/write operation.
    int mask = ((1 << length) - 1) << bitOffset;
    // Note: Endianness.LITTLE_ENDIAN is spelled Endian.little in current Dart.
    int bits = data.getUint32(byteOffset, Endianness.LITTLE_ENDIAN);
    bits = (bits & ~mask) | (value << bitOffset);
    data.setUint32(byteOffset, bits, Endianness.LITTLE_ENDIAN);
    return;
  }
  // Split the value into chunks of no more than 32 bits, aligned.
  do {
    int bits = (length > 32 ? 32 : length) - bitOffset;
    setBitData(buffer, offset, bits, value & ((1 << bits) - 1));
    offset += bits;
    length -= bits;
    value >>= bits;
    bitOffset = 0;
  } while (length > 0);
}
Example use:
main() {
  var b = new Uint8List(32);
  setBitData(b.buffer, 3, 8, 255);
  print(b.map((v) => v.toRadixString(16)));
  setBitData(b.buffer, 13, 6 * 4, 0xffffff);
  print(b.map((v) => v.toRadixString(16)));
  setBitData(b.buffer, 47, 21 * 4, 0xaaaaaaaaaaaaaaaaaaaaa);
  print(b.map((v) => v.toRadixString(16)));
}

How to take in numbers 0-255 from a file with no delimiters

I have a plaintext file that contains only numerical digits (no spaces, commas, newlines, etc.), encoding a sequence of n numbers that range from 0 to 255. I want to read it in and store these values in an array.
Example
Let's say we have this sequence in the file:
581060100962552569
I want to take it in like this, where in.read is the file input stream, tempArray is a local array of at most 3 variables that is wiped every time something is stored in endArray, which is where I want the final values to go:
in.read tempArray endArray
5 [5][ ][ ] [] //It reads in "5", sees that for any single-digit number X, "5X" is less than or equal to 255, and continues
8 [5][8][ ] [58] //It reads in "8", realizes that there's no number X that could make "58X" smaller than or equal to "255", so it stores "58" in endArray
1 [1][ ][ ] [58] //It wipes tempArray and reads the next value into it, repeating the logic of the first step
0 [1][0][ ] [58] //It realizes that all single-digit numbers X guarantee that "10X" is less than or equal to "255", so it continues
6 [1][0][6] [58][106] //It reads "6" and adds "106" to the endArray
0 [0][ ][ ] [58][106] //It wipes tempArray and stores the next value in it
1 [0][1][ ] [58][106]
0 [0][1][0] [58][106][10] //Even though all single-digit numbers X guarantee that "010X" is less than or equal to "255", tempArray is full, so it stores its contents in endArray as "10".
0 [0][ ][ ] [58][106][10]
9 [0][9][ ] [58][106][10]
6 [0][9][6] [58][106][10][96] //Not only can "96" not have another number appended to it, but tempArray is full
2 [2][ ][ ] [58][106][10][96]
5 [2][5][ ] [58][106][10][96] //There are numbers that can be appended to "25" to make a number less than or equal to "255", so continue
5 [2][5][5] [58][106][10][96][255] //"5" can be appended to "25" and still be less than or equal to "255", so it stores it in tempArray, finds tempArray is full, so it stores tempArray's values in endArray as "255"
2 [2][ ][ ] [58][106][10][96][255]
5 [2][5][ ] [58][106][10][96][255] //There are numbers that can be appended to "25" to make a number less than or equal to "255", so continue
6 [6][ ][ ] [58][106][10][96][255][25] //It sees that appending "6" to "25" would make a number larger than 255, so it stores "25" in endArray and remembers "6" in tempArray
9 [6][9][ ] [58][106][10][96][255][25][69] //It reads "9"; the file ends here, so it stores "69" in endArray
Does anyone know how I can accomplish this behavior? Please try to keep your answers in pseudocode, so they can be translated to many programming languages.
I would not use the temp array for holding the intermediate numbers: for the CPU, numbers are stored in binary format, while you are reading decimal digits.
Something like this could solve your problem:
array = []
accumulator = 0
count = 0
while not EOF:
    n = readDigit()
    if accumulator*10 + n > 255 or count == 2:
        array.push(accumulator)
        accumulator = n
        count = 0
    else:
        accumulator = accumulator*10 + n
        count = count + 1
array.push(accumulator)  # flush the last number at EOF
The results are appended to the array called array.
Edit: Thanks to DeanOC for noticing the missing counter. But DeanOC's solution initializes the counter for the first iteration to 0 instead of 1.
antiguru's response is nearly there.
The main problem is that it doesn't take into consideration that the numbers can only have 3 digits. This modification should work for you.
array = []
accumulator = 0
digitCounter = 0
while not EOF:
    n = readDigit()
    if accumulator*10 + n > 255 or digitCounter == 3:
        array.push(accumulator)
        accumulator = n
        digitCounter = 1
    else:
        accumulator = accumulator*10 + n
        digitCounter = digitCounter + 1
array.push(accumulator)  # flush the last number at EOF
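Here is a runnable Python translation of this pseudocode; note the explicit flush of the last accumulated number at end of input, which is easy to forget:

```python
def pack_digits(digits: str) -> list[int]:
    """Greedily split a digit string into numbers of at most 3 digits, each <= 255."""
    values, accumulator, count = [], 0, 0
    for ch in digits:
        n = int(ch)
        if accumulator * 10 + n > 255 or count == 3:
            values.append(accumulator)
            accumulator, count = n, 1
        else:
            accumulator = accumulator * 10 + n
            count += 1
    values.append(accumulator)  # flush the final number at end of input
    return values

print(pack_digits("581060100962552569"))  # [58, 106, 10, 96, 255, 25, 69]
```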

Reading a Shapefile with ColdFusion

I am trying to read a binary file and parse the bytes. I have the white paper spec on shapefiles, so I know how to parse the file; however, I cannot seem to find the correct ColdFusion functions for reading bytes and deciding what to do with them.
<cffile action="READBINARY"
file="mypath/www/_Dev/tl_2009_25_place.shp"
variable="infile" >
PDF file with the spec: http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf
For example I have the spec:
Position  Field         Value        Type     Order
Byte 0    File Code     9994         Integer  Big
Byte 4    Unused        0            Integer  Big
Byte 8    Unused        0            Integer  Big
Byte 12   Unused        0            Integer  Big
Byte 16   Unused        0            Integer  Big
Byte 20   Unused        0            Integer  Big
Byte 24   File Length   File Length  Integer  Big
Byte 28   Version       1000         Integer  Little
Byte 32   Shape Type    Shape Type   Integer  Little
Byte 36   Bounding Box  Xmin         Double   Little
Byte 44   Bounding Box  Ymin         Double   Little
Byte 52   Bounding Box  Xmax         Double   Little
Byte 60   Bounding Box  Ymax         Double   Little
Byte 68*  Bounding Box  Zmin         Double   Little
Byte 76*  Bounding Box  Zmax         Double   Little
Byte 84*  Bounding Box  Mmin         Double   Little
Byte 92*  Bounding Box  Mmax         Double   Little
If this were just a flat text file, I would use the mid() function to read my positions.
Can this be done in ColdFusion, and which functions can achieve my goal?
I found this function inside FarStream.as at http://code.google.com/p/vanrijkom-flashlibs/wiki/SHP which is an ActionScript 3 file, but it represents the kind of task I need to do.
private function readHeader(e: ProgressEvent): void {
    // check header:
    if (! (   readByte() == 0x46
           && readByte() == 0x41
           && readByte() == 0x52
          ))
    {
        dispatchEvent(new IOErrorEvent
            ( IOErrorEvent.IO_ERROR
            , false, false
            , "File is not FAR formatted")
            );
        close();
        return;
    }
    // version:
    vMajor = readByte();
    vMinor = readByte();
    if (vMajor > VMAJOR) {
        dispatchEvent(new IOErrorEvent
            ( IOErrorEvent.IO_ERROR
            , false, false
            , "Unsupported archive version (v." + vMajor + "." + vMinor + ")")
            );
        close();
        return;
    }
    // table size:
    tableSize = readUnsignedInt();
    // done processing header:
    gotHeader = true;
}
And here is the final solution
<cfset shapeFile = createObject("java","com.bbn.openmap.layer.shape.ShapeFile").init('/www/_Dev/tl_2009_25_place.shp')>
<cfdump var="#shapeFile.getFileLength()#">
<cffile action="READBINARY" file="mypath/www/_Dev/tl_2009_25_place.shp" variable="infile" >
<cfset shapeFile = createObject("java","com.bbn.openmap.layer.shape.ShapeFile").init(infile)>
<cfdump var="#shapeFile#">
Maybe something like this?
