wrong DM1 multipacket - j1939

I have gathered the CAN data of a Scania G380 truck using an STM32 MCU.
There is a problem with DM1 faults. According to J1939-73, when the DM1 data length is more than 8 bytes it would be packetized by TP.DT (PGN 0xEBFF) announced by a TP.CM (PGN 0xECFF), but I am faced with this data, packetized in a strange manner:
18ECFF00 DATA: FF FF 7D 7D FD FF FF FF
18EBFF00 DATA: FF FF 7D 7D 3C FF FF FF
18EBFF00 DATA: FF FF 7D 7D FD FF FF FF
18EBFF00 DATA: FF FF 7D 7D 3C FF FF FF
18EBFF00 DATA: FF FF 7D 7D FD FF FF FF
It seems it doesn't follow the protocol.
Another strange problem is that DM1 faults were broadcast in single packets repeatedly instead of being packetized in the TP.DT PGN. For example, I have this log:
18FECA27 DATA: 00 17 09 07 34 22 74 7D TIME: 425447
18FECA10 DATA: 2F 21 43 3C 37 43 06 55 TIME: 425474
18FECA2F DATA: D1 FF 1F FF FF FF FF FF TIME: 425594
18FECA0B DATA: 38 00 FF FF FF FF 00 00 TIME: 425626
18FECA00 DATA: 00 FB 00 FB 3F FC FF FF TIME: 425634
Could anyone help me, please?

18ECFF00 DATA: FF FF 7D 7D FD FF FF FF
18EBFF00 DATA: FF FF 7D 7D 3C FF FF FF
18EBFF00 DATA: FF FF 7D 7D FD FF FF FF
18EBFF00 DATA: FF FF 7D 7D 3C FF FF FF
18EBFF00 DATA: FF FF 7D 7D FD FF FF FF
This looks like some garbage/default values. I think Scania is not using DM1 messages to report DTCs over CAN bus.
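The multipacket framing the question refers to, as an added example that is not part of the original posts: a J1939 identifier decoder and a BAM reassembler that checks the TP.CM control byte and the TP.DT sequence numbers, run on the frames from the question and on a well-formed broadcast. The byte layouts follow the usual J1939-21/-73 definitions, 8-byte frames are assumed, and `GOOD_FRAMES` and all function names are constructed for illustration.

```python
import math

# The page's capture: one TP.CM frame, then four TP.DT frames, all from SA 0x00.
PAGE_FRAMES = [(0x18ECFF00, bytes.fromhex("FFFF7D7DFDFFFFFF"))] + [
    (0x18EBFF00, bytes.fromhex(h)) for h in
    ("FFFF7D7D3CFFFFFF", "FFFF7D7DFDFFFFFF", "FFFF7D7D3CFFFFFF", "FFFF7D7DFDFFFFFF")]
# Constructed: a well-formed BAM carrying a 10-byte DM1 with two DTCs.
GOOD_FRAMES = [(0x18ECFF00, bytes.fromhex("200A0002FFCAFE00")),
               (0x18EBFF00, bytes.fromhex("0104FF640001016E")),
               (0x18EBFF00, bytes.fromhex("02000002FFFFFFFF"))]

def decode_id(can_id):
    """Split a 29-bit identifier into (priority, pgn, destination, source)."""
    prio, pf = (can_id >> 26) & 0x7, (can_id >> 16) & 0xFF
    ps, sa = (can_id >> 8) & 0xFF, can_id & 0xFF
    pgn = (can_id >> 8) & 0x3FFFF
    if pf < 240:                       # PDU1: PS is a destination address
        return prio, pgn & 0x3FF00, ps, sa
    return prio, pgn, 0xFF, sa         # PDU2: PS is part of the PGN

def decode_dm1(payload):
    """DM1 payload: 2 lamp bytes, then 4-byte DTCs -> list of (SPN, FMI, OC)."""
    out = []
    for i in range(2, len(payload) - 3, 4):
        b = payload[i:i + 4]
        spn = b[0] | (b[1] << 8) | ((b[2] >> 5) << 16)
        out.append((spn, b[2] & 0x1F, b[3] & 0x7F))
    return out

def reassemble_bam(frames):
    """Rebuild BAM broadcasts from (can_id, data) pairs -> (messages, errors)."""
    sessions, messages, errors = {}, [], []
    for can_id, data in frames:
        _, pgn, _, sa = decode_id(can_id)
        if pgn == 0xEC00:              # TP.CM: byte 1 is the control byte
            size, count = data[1] | (data[2] << 8), data[3]
            if data[0] != 0x20:
                errors.append(f"SA {sa:02X}: TP.CM control byte {data[0]:02X}, not BAM (20)")
            elif not 9 <= size <= 1785 or count != math.ceil(size / 7):
                errors.append(f"SA {sa:02X}: BAM size {size} does not match {count} packets")
            else:
                target = data[5] | (data[6] << 8) | (data[7] << 16)
                sessions[sa] = {"pgn": target, "size": size, "count": count, "buf": b""}
        elif pgn == 0xEB00:            # TP.DT: byte 1 is the sequence number
            s = sessions.pop(sa, None)
            if s is None or data[0] != len(s["buf"]) // 7 + 1:
                errors.append(f"SA {sa:02X}: TP.DT sequence {data[0]} has no matching BAM state")
                continue
            s["buf"] += data[1:8]
            if data[0] == s["count"]:
                messages.append((sa, s["pgn"], s["buf"][:s["size"]]))
            else:
                sessions[sa] = s
    return messages, errors
```

On `PAGE_FRAMES` the reassembler returns no message and one error per frame, because the first data byte is FF in every frame rather than a BAM control byte followed by sequence numbers 1, 2, 3, ...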

Related

lua parsing after more than one byte

How can I parse a binary after more than one byte?
For example:
56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 EF BF BD 00 EF BF BD 1F 56 30 30 31 01 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 24 56 30 30 31 04 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 13 00 00 00 56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 24 00 4D EF BF
I want to parse this after 0x56 0x30 0x30 0x31. How can I do this? Before every new 0x56 0x30 0x30 0x31 the old packet (string) should end.
Like this:
56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 EF BF BD 00 EF BF BD 1F
56 30 30 31 01 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 24
56 30 30 31 04 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 13 00 00 00
56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 24 00 4D EF BF
I already did something similar for parsing after one byte into a table. But I cannot change it into my new problem.
That's the code for parsing after one byte, 0x7E.
function print_table(tab)
    print("Table:")
    for key, value in pairs(tab) do
        io.write(string.format("%02X ", value))
    end
    print("\n")
end
local function read_file(path, callback)
    local file = io.open(path, "rb")
    if not file then
        return nil
    end
    local t = {}
    repeat
        local str = file:read(4 * 1024)
        for c in (str or ''):gmatch('.') do
            if c:byte() == 0x7E then
                callback(t) -- function print_table
                t = {}
            else
                table.insert(t, c:byte())
            end
        end
    until not str
    file:close()
    return t
end
local result = {}
function add_to_table_of_tables(t)
    table.insert(result, t)
end
local fileContent = read_file("file.dat", print_table)
It is important that 56 30 30 31 is written first in the string.
Thank you for your help!
I also need it to work with reading my input from a file.
I am reading my file in like this:
local function read_file(path) -- function read_file
    local file = io.open(path, "rb") -- r read mode and b binary mode
    if not file then return nil end
    local content = file:read "*all" -- *all reads the whole file
    file:close()
    return content
end
You can use a gsub to replace the target substring with a single char unique to the input string; I will use \n for this example. After that you can use gmatch where it selects a run of chars that are not the substituted char.
local input = [[56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 EF BF BD 00 EF BF BD 1F 56 30 30 31 01 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 24 56 30 30 31 04 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 13 00 00 00 56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 24 00 4D EF BF]]
local pattern = "([^\n]+)"
local rowPrefix = "56 30 30 31"
input = input:gsub(rowPrefix, "\n")
for row in input:gmatch(pattern) do
    print(rowPrefix .. row)
end
Output:
56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 EF BF BD 00 EF BF BD 1F
56 30 30 31 01 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 24
56 30 30 31 04 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 13 00 00 00
56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 24 00 4D EF BF
Resources for more information:
Programming in Lua: 20.1 – Pattern-Matching Functions
Lua 5.3 Reference Manual: string.gmatch
Lua 5.3 Reference Manual: string.gsub
Adapt this code:
S=[[56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 EF BF BD 00 EF BF BD 1F 56 30 30 31 01 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 24 56 30 30 31 04 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 02 45 13 00 00 00 56 30 30 31 07 00 00 00 EF BF BD EF BF BD 2A 5C EF BF BD 03 01 45 7E 12 02 24 00 4D EF BF]]
H=[[56 30 30 31]]
E="\n"
S=S:gsub(H,E..H)
S:gsub(E.."([^"..E.."]+)",print)

Parsing/converting Nokia "Smart Feature OS" backup .ib files?

I have already written a bit on this in https://superuser.com/questions/1389657/backup-access-sms-on-nokia-3310-3g-2017-from-linux-pc ; basically, I'm trying to back up SMS messages on a Nokia 3310 3G onto a Ubuntu 18.04 PC; note that hardware system-on-chip and OS differs by version for a Nokia 3310 (2017):
System on chip / Operating system:
MediaTek MT6260 / Nokia Series 30+ (2G)
Spreadtrum SC7701B / Java-powered Smart Feature OS (3G)
Spreadtrum SC9820A / Yun OS (4G, CMCC)
I have the 3G, so I have a "Smart Feature OS" which apparently is a version of KaiOS (Is there any difference between KaiOS and 'Smart Feature OS'? : KaiOS), which apparently (KaiOS – A Smartphone Operating System | Hacker News) is a fork of Firefox OS.
When you hit Menu > Storage > Create backup (https://www.nokia.com/phones/en_int/support/nokia-3310-3g-user-guide/create-a-backup) on this phone, it generates a folder with files in it, named like this:
$ tree All-backup_01-01-2019_20-18-54
All-backup_01-01-2019_20-18-54
├── ibphone_head.in
├── phonebook.ib
└── sms.ib
I've never heard of .ib files before, and I was hoping someone here knew what they are. A quick look suggests IB File Extension - Open .IB File (InterBase Database), however I tried using these .ib files with http://fbexport.sourceforge.net/fbexport.php "Tool for exporting and importing data with Firebird and InterBase databases", and I get:
Engine Code : 335544323
Engine Message :
file ./All-backup_01-01-2019_20-18-54/phonebook.ib is not a valid database
So, that's not it.
Here is a hexdump of ibphone_head.in - looks like there is no personal identifying info here:
$ hexdump -C All-backup_01-01-2019_20-18-54/ibphone_head.in
00000000 00 00 00 00 41 00 6c 00 6c 00 2d 00 62 00 61 00 |....A.l.l.-.b.a.|
00000010 63 00 6b 00 75 00 70 00 5f 00 30 00 31 00 2d 00 |c.k.u.p._.0.1.-.|
00000020 30 00 31 00 2d 00 32 00 30 00 31 00 39 00 5f 00 |0.1.-.2.0.1.9._.|
00000030 32 00 30 00 2d 00 31 00 38 00 2d 00 35 00 34 00 |2.0.-.1.8.-.5.4.|
00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000100 00 00 00 00 45 00 3a 00 5c 00 42 00 61 00 63 00 |....E.:.\.B.a.c.|
00000110 6b 00 75 00 70 00 73 00 5c 00 41 00 6c 00 6c 00 |k.u.p.s.\.A.l.l.|
00000120 2d 00 62 00 61 00 63 00 6b 00 75 00 70 00 5f 00 |-.b.a.c.k.u.p._.|
00000130 30 00 31 00 2d 00 30 00 31 00 2d 00 32 00 30 00 |0.1.-.0.1.-.2.0.|
00000140 31 00 39 00 5f 00 32 00 30 00 2d 00 31 00 38 00 |1.9._.2.0.-.1.8.|
00000150 2d 00 35 00 34 00 00 00 00 00 00 00 00 00 00 00 |-.5.4...........|
00000160 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 00 00 00 00 30 30 30 31 2e 30 30 30 30 33 00 00 |....0001.00003..|
00000210 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000230 00 00 00 00 00 00 6d 6d 69 6b 65 79 62 61 63 6b |......mmikeyback|
00000240 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000250 00 00 00 00 00 00 03 00 00 00 00 00 b0 cd 09 00 |................|
00000260 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000270 00 00 00 00 00 00 00 00 f3 dd 00 00 7e 2f 00 00 |............~/..|
00000280 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000002c4
So, it seems that strings are encoded with 2 bytes, "wide char"; and for the most part, ibphone_head.in seems to just encode the name of its containing/parent folder, All-backup_01-01-2019_20-18-54.
Here is a hexdump of phone book, where I've anonymized names to AAAAAAAAA and BBB:
$ hexdump -C -n 1900 All-backup_01-01-2019_20-18-54/phonebook.ib
00000000 70 00 68 00 6f 00 6e 00 65 00 62 00 6f 00 6f 00 |p.h.o.n.e.b.o.o.|
00000010 6b 00 2e 00 69 00 62 00 00 00 00 00 00 00 00 00 |k...i.b.........|
00000020 00 00 00 00 01 00 00 00 30 f8 04 00 62 01 00 00 |........0...b...|
00000030 62 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |b...............|
00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000240 00 00 00 00 98 03 00 00 01 00 00 00 ff ff ff ff |................|
00000250 ff ff ff ff 01 00 01 00 00 00 00 00 00 00 00 00 |................|
00000260 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000360 d9 d4 37 46 00 00 00 00 00 00 01 02 00 00 04 01 |..7F............|
00000370 07 12 80 88 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000380 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000003b0 09 00 41 00 41 00 41 00 41 00 41 00 41 00 41 00 |..A.A.A.A.A.A.A.|
000003c0 41 00 41 00 00 00 00 00 00 00 00 00 00 00 00 00 |A.A.............|
000003d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000005e0 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff |................|
000005f0 ff ff ff ff 98 03 00 00 01 00 00 00 ff ff ff ff |................|
00000600 ff ff ff ff 04 00 01 00 00 00 00 00 00 00 00 00 |................|
00000610 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000710 d9 d4 37 46 00 00 00 00 00 00 01 02 00 00 06 11 |..7F............|
00000720 83 29 23 13 58 f9 00 00 00 00 00 00 00 00 00 00 |.)#.X...........|
00000730 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000760 03 00 42 00 42 00 42 00 00 00 00 00 |..B.B.B.....|
0000076c
Here it seems d9 d4 37 46 is a delimiter for entries, and entries seem to be 0x3b0 = 944 bytes apart; cannot tell where the actual phone number is stored, though.
I won't be pasting the hexdump contents of sms.ib, because I fear to reveal more personal info than I intend to.
However, maybe what has been posted already, will help someone see if this is an already established file format? In any case, I'd like to convert the contents of these files to plain text ...
The phonebook.ib is a proprietary file format. I did some analysis and got to the point where I am able to extract names and phone numbers from it.
Header is 580 bytes and doesn't seem to contain anything interesting.
Entries appear to begin with the entry length, encoded as 3 4-bit decimal nibbles followed by another digit (version number maybe?).
In my sample file all entries were 940 bytes and therefore had 94 03 as their first two bytes. Other field entries I identified at different offsets:
entry+0x12a [1 byte]: Number of bytes representing phone number.
entry+0x12b [1 byte]: Seems like a two-digit decimal coded flag. If high nibble is set (e.g. 10) then phone number begins with +.
entry+0x12c: Phone number, decimal coded. For example, 123456 would appear as 21 43 65. Special digits:
a is *
b is #
f is ignored (seen as a last digit when the number of digits is not even).
entry+0x16c [1 byte]: Name length.
entry+0x16e: Name in UTF-16 (i.e. 2 bytes per char).
Taking your snippet as an example:
0x36e is the phone length, 4 bytes.
0x36f is the extra flag, high nibble is 0 so no + prefix.
0x370 is where the phone number begins, 07 12 80 88 translates to 70210888.
A simple reference parser can be found at my repo: https://github.com/yossigo/phonebook_ib_export
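The field layout described in this answer, as an added example (not the code from the linked repository): a parser that reads the number and name of each entry and stops when a length byte points outside the record. The 944-byte entry stride is an assumption taken from the spacing in the question's dump, and `sample_entry`, `SAMPLE` and the other names are constructed for illustration.

```python
HEADER_LEN = 0x244        # 580-byte file header, per the answer
ENTRY_STRIDE = 944        # assumption: entry spacing seen in the question's dump
MAX_NUMBER_BYTES = 0x40   # space between entry+0x12c and entry+0x16c
DIGITS = "0123456789*#???"  # a is *, b is #; c..e are not documented on the page

def decode_number(raw, flag):
    """Decode the nibble-swapped decimal phone number field."""
    digits = []
    for byte in raw:
        for nibble in (byte & 0x0F, byte >> 4):   # low nibble holds the first digit
            if nibble != 0xF:                     # f pads an odd digit count
                digits.append(DIGITS[nibble])
    return ("+" if flag >> 4 else "") + "".join(digits)

def parse_phonebook(buf, stride=ENTRY_STRIDE):
    """Return [(name, number)] for every entry that fits inside buf."""
    contacts = []
    for off in range(HEADER_LEN, len(buf) - 0x16e, stride):
        num_len, flag, name_len = buf[off + 0x12a], buf[off + 0x12b], buf[off + 0x16c]
        name_end = off + 0x16e + 2 * name_len
        if num_len > MAX_NUMBER_BYTES or name_end > len(buf):
            break             # length bytes point outside the record: stop parsing
        number = decode_number(buf[off + 0x12c:off + 0x12c + num_len], flag)
        name = buf[off + 0x16e:name_end].decode("utf-16-le", errors="replace")
        contacts.append((name, number))
    return contacts

def sample_entry(number_bytes, flag, name):
    """Build one constructed entry (test data, not taken from a real backup)."""
    entry = bytearray(ENTRY_STRIDE)
    entry[0x12a], entry[0x12b] = len(number_bytes), flag
    entry[0x12c:0x12c + len(number_bytes)] = number_bytes
    entry[0x16c] = len(name)
    entry[0x16e:0x16e + 2 * len(name)] = name.encode("utf-16-le")
    return bytes(entry)

# Constructed file: empty header, the answer's worked number, and a "+" number.
SAMPLE = (bytes(HEADER_LEN)
          + sample_entry(bytes.fromhex("07128088"), 0x01, "AAAAAAAAA")
          + sample_entry(bytes.fromhex("214ab5f7"), 0x11, "Bob"))
```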
I was about to make the opposite script to Yossi's to convert .vcf files back into the .ib format when I found a workaround to back up and restore all contacts to and from the vcf file format with the Nokia 3310 3G:
BACKUP CONTACTS TO VCF FILE: in your contacts, click the left button to share, select all of them using the option in the left button menu, send via Bluetooth to any device, you can then retrieve the vcf file from the other Bluetooth device OR from the /vCard/ folder at the root of your phone (sd card if one is plugged)
RESTORE CONTACTS FROM VCF FILE: copy the file anywhere on the phone or sd card, unplug the phone and use the embedded file browser to find the file, then follow Nokia’s engineer logic by not using the Open button but instead click on the left button and use the option Save vCard

Use Wireshark to analyze raw stream without frame separator

My wifi sniff device can output data to a raw file. But it may begin with the middle of a frame, and each frame starts right after another. A pcap file must contain packet headers, which I don't have. So I tried to discard the half complete frame at the beginning of the file, and put the rest into a pcap file with one packet. Then wireshark can analyze the first frame, even with wrong packet size.
My question is how to make Wireshark analyze the remaining frames?
Edit: This is a sample pcap with 2 frames, but without the second packet header
00000000 D4 C3 B2 A1 02 00 04 00 00 00 00 00 00 00 00 00 ................
00000010 FF FF 00 00 69 00 00 00 05 00 00 00 00 00 00 00 ....i...........
00000020 80 00 00 00 80 00 00 00 08 02 00 00 01 00 5E 00 ..............^.
00000030 00 FC E8 94 F6 3C 5F 40 20 68 9D 9A 4B D7 70 73 .....<_@ h..K.ps
00000040 AA AA 03 00 00 00 08 00 46 00 00 20 38 F8 00 00 ........F.. 8...
00000050 01 02 48 D5 C0 A8 01 66 E0 00 00 FC 94 04 00 00 ..H....f........
00000060 16 00 09 03 E0 00 00 FC 08 02 00 00 01 00 5E 7F ..............^.
00000070 FF FA E8 94 F6 3C 5F 40 20 68 9D 9A 4B D7 F0 75 .....<_@ h..K..u
00000080 AA AA 03 00 00 00 08 00 46 00 00 20 38 F9 00 00 ........F.. 8...
00000090 01 02 39 D6 C0 A8 01 66 EF FF FF FA 94 04 00 00 ..9....f........
000000A0 16 00 FA 04 EF FF FF FA ........
My question is how to make Wireshark analyze the remaining frames?
Detect the beginnings and ends of frames in your bit sequence, and put each frame into a separate record in the pcap file.
If there's nothing in the bit sequence to allow your software to determine where one frame ends and another frame begins, there's nothing in the bit sequence to allow Wireshark to do so, so if you want to have Wireshark analyze frames past the first frame, you are FORCED to ensure that there's something in the bit stream to determine frame boundaries, and you might as well have your software break the bit stream into frames.
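One way to carry out this advice on the question's own sample, as an added example that is not part of the original posts: the stream is cut at each frame's end and a pcap is written with one record header per frame. The splitter assumes every frame is a plain 802.11 data frame (24-byte header) carrying LLC/SNAP and IPv4 with no FCS, which holds for the two frames in the sample but not for captures in general; the function names are illustrative.

```python
import struct

LINKTYPE_IEEE802_11 = 105      # 0x69, the link type in the sample's global header
LLC_SNAP_IPV4 = bytes.fromhex("aaaa030000000800")
# The 128 payload bytes of the single record in the question's sample pcap.
STREAM = bytes.fromhex(
    "08 02 00 00 01 00 5E 00"
    "00 FC E8 94 F6 3C 5F 40 20 68 9D 9A 4B D7 70 73"
    "AA AA 03 00 00 00 08 00 46 00 00 20 38 F8 00 00"
    "01 02 48 D5 C0 A8 01 66 E0 00 00 FC 94 04 00 00"
    "16 00 09 03 E0 00 00 FC 08 02 00 00 01 00 5E 7F"
    "FF FA E8 94 F6 3C 5F 40 20 68 9D 9A 4B D7 F0 75"
    "AA AA 03 00 00 00 08 00 46 00 00 20 38 F9 00 00"
    "01 02 39 D6 C0 A8 01 66 EF FF FF FA 94 04 00 00"
    "16 00 FA 04 EF FF FF FA")

def split_frames(stream):
    """Cut a back-to-back stream into frames using each frame's IPv4 total length."""
    frames, off = [], 0
    while off + 36 <= len(stream):
        is_plain_data = (stream[off] & 0xFC) == 0x08     # type=data, subtype=0
        if not is_plain_data or stream[off + 24:off + 32] != LLC_SNAP_IPV4:
            break                                        # unknown frame shape: stop
        ip_total, = struct.unpack_from(">H", stream, off + 34)
        end = off + 32 + ip_total                        # 24 MAC + 8 LLC/SNAP + IP
        if ip_total < 20 or end > len(stream):
            break                                        # bad length or cut-off frame
        frames.append(stream[off:end])
        off = end
    return frames

def build_pcap(frames, linktype=LINKTYPE_IEEE802_11):
    """Write a little-endian pcap with one record header per frame."""
    out = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 0xFFFF, linktype)
    for n, frame in enumerate(frames):
        # No capture times are available, so records get placeholder timestamps.
        out += struct.pack("<IIII", n, 0, len(frame), len(frame)) + frame
    return out
```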

Why execution of a portion of code loaded from an external file is not halted by DEP?

I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things being stripped off for the sake of concision and clarity.
A binary file whose content is described below
HEX DUMP:
55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90
is loaded into memory and executed using the following method snippet
var
  MySrcArray,
  MyDestArray: array [1 .. 15] of Byte;
  // ...
  MyBuffer: Pointer;
  TheProc: procedure;
  SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
begin
  // Initialization of MySrcArray with random Bytes and display here ...
  // Instructions of loading of the binary file into MyBuffer using merely **GetMem** here ...
  @SortIt := MyBuffer;
  try
    SortIt(@MySrcArray, @MyDestArray, 15);
    // Display of MyDestArray (The outcome of the processing !)
  except
    // Invalid code error handling
  end;
  // Cleaning code here ...
end;
works like a charm on my box.
My Question:
How come it works without using VirtualAlloc and/or VirtualProtect?
I'm assuming you are asking why this works without being stopped by Data Execution Prevention? For 32-bit programs DEP is opt-in by default, meaning that the application must explicitly enable it.
If you change the DEP setting to "Turn on DEP for all programs and services except those I select" then your application will trigger a DEP warning and crash.
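Whether an executable has opted in can be read from the file itself: a PE image declares DEP compatibility through the NX_COMPAT bit of the DllCharacteristics field in its optional header, which the linker sets. The check below is an added example, not code from the original posts; `sample_pe` builds a constructed header-only image and all function names are illustrative.

```python
import struct

IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x0100
PE32_MAGIC = 0x10B                  # 32-bit image; 0x20B marks PE32+ (64-bit)

def dep_opt_in(image):
    """Report whether a PE image marks itself as DEP-compatible (NX_COMPAT)."""
    if len(image) < 0x40 or image[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    pe_off, = struct.unpack_from("<I", image, 0x3C)           # e_lfanew
    if pe_off + 24 > len(image) or image[pe_off:pe_off + 4] != b"PE\0\0":
        raise ValueError("PE signature not found")
    opt_size, = struct.unpack_from("<H", image, pe_off + 20)  # SizeOfOptionalHeader
    opt = pe_off + 24                                         # after signature + COFF header
    if opt_size < 72 or opt + 72 > len(image):
        raise ValueError("optional header too short for DllCharacteristics")
    magic, = struct.unpack_from("<H", image, opt)
    dll_chars, = struct.unpack_from("<H", image, opt + 70)    # same offset in PE32 and PE32+
    return {"pe32": magic == PE32_MAGIC,
            "nx_compat": bool(dll_chars & IMAGE_DLLCHARACTERISTICS_NX_COMPAT)}

def sample_pe(dll_chars, magic=PE32_MAGIC):
    """Constructed header-only image for the demonstration (not a runnable EXE)."""
    img = bytearray(0x40 + 24 + 0xE0)
    img[:2] = b"MZ"
    struct.pack_into("<I", img, 0x3C, 0x40)                   # e_lfanew -> 0x40
    img[0x40:0x44] = b"PE\0\0"
    struct.pack_into("<H", img, 0x44, 0x14C)                  # Machine: i386
    struct.pack_into("<H", img, 0x40 + 20, 0xE0)              # SizeOfOptionalHeader
    struct.pack_into("<H", img, 0x40 + 24, magic)
    struct.pack_into("<H", img, 0x40 + 24 + 70, dll_chars)
    return bytes(img)
```

A 32-bit image for which `nx_compat` is false runs without DEP under the default opt-in setting unless the process enables it at run time.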

Exporting raw alpha with ImageMagick

I need to extract raw alpha from images in order to pass this to an application for use as an opacity mask. The expected format is 8 byte unsigned ints per pixel. How can I do this with ImageMagick? I have tried convert image.png image.a but the .a file does not seem to have the correct data.
What is the best way to extract the alpha with ImageMagick? Ideally, this would work with any input image format that supports alpha or transparency.
Try this:
$ convert -size 16x16 xc:none -draw "stroke black fill red circle 8,8 4,4" circle_on_transparent_bg.png
$ convert circle_on_transparent_bg.png -channel A -separate -depth 8 gray:image_alpha.raw
$ od -t x1 image_alpha.raw
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000040 00 00 00 00 00 01 20 89 be 8a 20 01 00 00 00 00
0000060 00 00 00 00 10 ff ff ff ff ff ff ff 12 00 00 00
0000100 00 00 00 13 ff ff ff ff ff ff ff ff ff 11 00 00
0000120 00 00 01 ff ff ff ff ff ff ff ff ff ff ff 01 00
0000140 00 00 21 ff ff ff ff ff ff ff ff ff ff ff 1f 00
0000160 00 00 8d ff ff ff ff ff ff ff ff ff ff ff 89 00
0000200 00 00 c0 ff ff ff ff ff ff ff ff ff ff ff ff 00
0000220 00 00 8b ff ff ff ff ff ff ff ff ff ff ff 93 00
0000240 00 00 21 ff ff ff ff ff ff ff ff ff ff ff 24 00
0000260 00 00 01 ff ff ff ff ff ff ff ff ff ff ff 01 00
0000300 00 00 00 10 ff ff ff ff ff ff ff ff ff 11 00 00
0000320 00 00 00 00 12 ff ff ff ff ff ff ff 10 00 00 00
0000340 00 00 00 00 00 01 23 91 c4 8e 22 01 00 00 00 00
0000360 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0000400
Try this:
convert image.png -channel A -separate image_alpha.png
Well, it's quite straightforward: you take alpha channel and save it to another file. Script outputs with 1-channel png (8 byte per pixel).

Resources