I want to differentiate between Rx and Tx CAN messages from a DBC, but I can't find any difference when I open it in Notepad. Does anyone have an idea whether they can be identified in a DBC?
The DBC Format is described here and here.
The sender is specified on the same line as the message definition like so:
BO_ <CAN-ID> <MessageName>: <MessageLength> <SendingNode>
While the receiver is specified on the signal line:
SG_ <SignalName> [M|m<MultiplexerIdentifier>] : <StartBit>|<Length>@<Endianness><Signed> (<Factor>,<Offset>) [<Min>|<Max>] "[Unit]" [ReceivingNodes]
So for the following definition, IO is the sending node while DBG is the receiving node (in other words, this is a Tx message for IO and an Rx message for DBG):
BO_ 500 IO_DEBUG: 4 IO
SG_ IO_DEBUG_test_unsigned : 0|8@1+ (1,0) [0|0] "" DBG
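If you want to do the split programmatically, here is a small sketch using the third-party Python package cantools (my assumption, not something the DBC format requires; example.dbc and the node name are placeholders):

import cantools

# Senders come from the BO_ line, receivers from the SG_ lines.
db = cantools.database.load_file('example.dbc')  # placeholder file name
node = 'IO'

for msg in db.messages:
    receivers = {r for sig in msg.signals for r in sig.receivers}
    if node in msg.senders:
        print(f'Tx for {node}: {msg.name}')
    elif node in receivers:
        print(f'Rx for {node}: {msg.name}')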
I'm trying to build a LittleFS file system binary on my PC and flash it to my WeMos D1 Mini Pro (16MB) ESP8266.
I used the following code on the ESP to determine the block size and total bytes:
LittleFS.begin()
FSInfo info;
LittleFS.info(info);
Serial.print("LittleFS block size:");
Serial.println(info.blockSize);
Serial.print("LittleFS total bytes:");
Serial.println(info.totalBytes);
This gave me 8192 and 14655488 respectively. 14655488 / 8192 = 1789, so I used 1789 as the block count in the Python below:
from littlefs import LittleFS

fs = LittleFS(block_size=8192, block_count=1789)

with open('index.html', 'rb') as f:
    data = f.read()

with fs.open('/index.html', 'w') as fh:
    fh.write(data)

with open('fs.bin', 'wb') as fh:
    fh.write(fs.context.buffer)
This creates a 14655488 bytes .bin file.
I then looked in boards.txt and found these lines:
d1_mini_pro.menu.eesz.16M14M=16MB (FS:14MB OTA:~1019KB)
d1_mini_pro.menu.eesz.16M14M.build.flash_size=16M
d1_mini_pro.menu.eesz.16M14M.build.flash_size_bytes=0x1000000
d1_mini_pro.menu.eesz.16M14M.build.flash_ld=eagle.flash.16m14m.ld
d1_mini_pro.menu.eesz.16M14M.build.spiffs_pagesize=256
d1_mini_pro.menu.eesz.16M14M.upload.maximum_size=1044464
d1_mini_pro.menu.eesz.16M14M.build.rfcal_addr=0xFFC000
d1_mini_pro.menu.eesz.16M14M.build.spiffs_start=0x200000
d1_mini_pro.menu.eesz.16M14M.build.spiffs_end=0xFFA000
d1_mini_pro.menu.eesz.16M14M.build.spiffs_blocksize=8192
This confirms the block size and gives the SPIFFS (but LittleFS is equivalent here, right?) start address as 0x200000.
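As a quick Python sanity check, those values are self-consistent:

spiffs_start = 0x200000
spiffs_end = 0xFFA000
block_size = 8192

print(spiffs_end - spiffs_start)                  # 14655488, matches totalBytes
print((spiffs_end - spiffs_start) // block_size)  # 1789 blocks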
Checking these Arduino bits, I get:
FS_PHYS_ADDR: 2097152 (0x200000)
FS_PHYS_SIZE: 14655488
FS_PHYS_PAGE: 256
FS_PHYS_BLOCK: 8192
So then I used:
python upload.py --chip esp8266 --port COM6 --baud 460800 write_flash 0x200000 fs.bin
which outputs:
esptool.py v2.8
Serial port COM6
Connecting....
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: ec:fa:bc:6e:19:90
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 460800
Changed.
Configuring flash size...
Auto-detected Flash size: 16MB
Compressed 14655488 bytes to 215596...
Writing at 0x00234000... (100 %)
Wrote 14655488 bytes (215596 compressed) at 0x00200000 in 56.7 seconds (effective 2067.0 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...
However, when I then use code like the below
Dir root = LittleFS.openDir("/");
while (root.next())
{
    Serial.print(root.fileName());
}
I get nothing, and
LittleFS.exists("/index.html")
returns false.
What am I doing wrong, or how do I debug this?
I'm uploading my firmware (not the filesystem) via Visual Studio Code, and the board configuration I'm using is
"xtal=80,vt=flash,exception=legacy,ssl=all,eesz=16M14M,ip=lm2f,dbg=Disabled,lvl=None____,wipe=none,baud=921600"
If I open the bin in a hex editor, I can see some JavaScript from inside the HTML file.
If I do this:
uint32_t b;
ESP.flashRead(0x006C2800 + 0x200000, &b, 1);
Serial.println(b);
then it returns 115 (0x73), so it looks like the binary has flashed successfully. That leaves me with either the binary being flashed in the wrong place, or it being corrupted/invalid...
I haven't fixed this exact problem, but I have achieved what I want.
This tool:
C:\Users\Andrew Bullock\AppData\Local\Arduino15\packages\esp8266\tools\mklittlefs\2.5.0-4-fe5bb56\mklittlefs.exe
exists (instructions here: https://github.com/earlephilhower/mklittlefs) and works. So I can only assume that the Python wrapper, or the version of LittleFS it uses, is somehow incompatible or broken.
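For reference, a minimal invocation looks something like this (flags as documented in the mklittlefs README; data is a placeholder directory containing index.html, and the size and offset are the ones from the question):

mklittlefs.exe -c data -b 8192 -p 256 -s 14655488 fs.bin
python upload.py --chip esp8266 --port COM6 --baud 460800 write_flash 0x200000 fs.bin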
The only problem I can see here is that the block_count you gave in your Python script is wrong. block_count actually refers to the pageSize of the filesystem, which holds the number of bytes every page is going to hold; in ESP LittleFS the pageSize is 256.
So your object initialisation should be like this:
fs = LittleFS(block_size=8192, block_count=259)
I've recently been using DTrace to analyze my iOS app.
Everything goes well except when I try to use the built-in variable stackdepth.
I read the documentation here, which introduces the built-in variable stackdepth.
So I wrote some D code:
pid$target:::entry
{
    self->entry_times[probefunc] = timestamp;
}

pid$target:::return
{
    printf("-----------------------------------\n");
    this->delta_time = timestamp - self->entry_times[probefunc];
    printf("%s\n", probefunc);
    printf("stackdepth %d\n", stackdepth);
    printf("%d---%d\n", this->delta_time, epid);
    ustack();
    printf("-----------------------------------\n");
}
And I ran it with sudo dtrace -s temp.d -c ./simple.out. The ustack() output looks fine, but stackdepth always appears as 0.
I tried it both on my iOS app and on a simple C program.
So does anybody know what's going on?
And how do I get the stack depth when the probe fires?
You want to use ustackdepth -- the user-land stack depth.
The stackdepth variable refers to the kernel thread stack depth; the ustackdepth variable refers to the user-land thread stack depth. When the traced program is executing in user-land, stackdepth will (should!) always be 0.
ustackdepth is calculated using the same logic as is used to walk the user-land stack as with ustack() (just as stackdepth and stack() use similar logic for the kernel stack).
This seems like a bug in the Mac / iOS implementation of DTrace to me.
However, since you're already probing every function entry and return, you could just keep a new variable self->depth, incrementing it in the :::entry probe and decrementing it in the :::return probe, as in the sketch below. This doesn't work quite right against optimized code, because any tail-call-optimized functions may look like they enter but never return. To solve that, you can turn off optimizations.
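A minimal sketch of that counter (self-> variables start out at 0, so no explicit initialisation is needed):

pid$target:::entry
{
    self->depth++;
}

pid$target:::return
{
    printf("%s returned at depth %d\n", probefunc, self->depth);
    self->depth--;
}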
Also, because what you're doing looks a lot like this, I thought maybe you would be interested in the -F option:
Coalesce trace output by identifying function entry and return.
Function entry probe reports are indented and their output is prefixed
with ->. Function return probe reports are unindented and their output
is prefixed with <-.
The normal script to use with -F is something like:
pid$target::some_function:entry { self->trace = 1 }
pid$target:::entry /self->trace/ {}
pid$target:::return /self->trace/ {}
pid$target::some_function:return { self->trace = 0 }
Where some_function is the function whose execution you want to be printed. The output shows a textual call graph for that execution:
-> some_function
-> another_function
-> malloc
<- malloc
<- another_function
-> yet_another_function
-> strcmp
<- strcmp
-> malloc
<- malloc
<- yet_another_function
<- some_function
$ readelf -s /lib/i386-linux-gnu/libc-2.13.so
Below is a line from the command output:
Num: Value Size Type Bind Vis Ndx Name
2261: 00040130 20136 FUNC GLOBAL DEFAULT 12 vfprintf@@GLIBC_2.0
Could somebody kindly tell me why the size of vfprintf is so big?
Thanks.
Because vfprintf is where the real work is done. printf, fprintf and vprintf just wrap around it.
sprintf, snprintf, vsprintf and vsnprintf create a string stream and pass it to vfprintf.
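To illustrate the wrapping (a sketch of the idea only, not glibc's actual source; my_printf is a made-up name):

#include <stdarg.h>
#include <stdio.h>

/* Sketch: how a printf-family wrapper can delegate all the real
 * formatting work to vfprintf. Not glibc's actual source. */
int my_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    int n = vfprintf(stdout, fmt, ap);
    va_end(ap);
    return n;
}

All the format parsing and conversion machinery lives in vfprintf itself, which is why its size dwarfs that of the wrappers.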
I need to parse very large log files (>1 GB, <5 GB); actually, I need to strip the data into objects so I can store them in a DB. The log file is sequential (no line breaks), like:
TIMESTAMP=20090101000000;PARAM1=Value11;PARAM2=Value21;PARAM3=Value31;TIMESTAMP=20090101000100;PARAM1=Value11;PARAM2=Value21;PARAM3=Value31;TIMESTAMP=20090101000152;PARAM1=Value11;PARAM2=Value21;PARAM3=Value31;...
I need to strip this into the table:
TIMESTAMP | PARAM1 | PARAM2 | PARAM3
The process needs to be as fast as possible. I'm considering using Perl, but any suggestions using C/C++ would be really welcome. Any ideas?
Best regards,
Arthur
Write a prototype in Perl and compare its performance against how fast you can read data off of the storage medium. My guess is that you'll be I/O bound, which means that using C won't offer a performance boost.
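A rough way to measure that I/O baseline in Perl (big.log is a placeholder name): read the file in large chunks and do nothing with them:

use strict;
use warnings;
use Time::HiRes qw(time);

my $start = time;
open my $fh, '<', 'big.log' or die $!;  # placeholder file name
my $buf;
1 while sysread($fh, $buf, 1 << 20);    # 1 MB chunks, data discarded
printf "raw read took %.2fs\n", time - $start;

Whatever your parser adds on top of that time is its real cost.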
This presentation about the use of Python generators blew my mind:
http://www.dabeaz.com/generators-uk/
David M. Beazley shows how to process multi-gigabyte log files by basically defining a generator for each processing step. The generators are then 'plugged' into each other until you have some simple utility functions:
lines = lines_from_dir("access-log*", "www")
log = apache_log(lines)
for r in log:
    print r
which can then be used for all sorts of querying:
stat404 = set(r['request'] for r in log
              if r['status'] == 404)

large = (r for r in log
         if r['bytes'] > 1000000)

for r in large:
    print r['request'], r['bytes']
He also shows that performance compares well to the performance of standard unix tools like grep, find etc.
Of course, this being Python, it's much easier to understand and, most importantly, easier to customise or adapt to different problem sets than Perl or awk scripts.
(The code examples above are copied from the presentation slides.)
Lex handles this sort of thing amazingly well.
But really, use AWK. Its performance is not bad, even compared with Perl, etc. Of course Map/Reduce would work quite well, but what about the overhead of splitting the file into appropriate chunks?
Try AWK
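For this record format, a sketch of a starting point (assumes a POSIX-ish awk; input.log is a placeholder name). Setting RS=';' makes every KEY=value pair its own record:

awk -v RS=';' -F'=' '
    NF < 2            { next }                              # skip empty trailing pieces
    $1 == "TIMESTAMP" { if (NR > 1) print ""; printf "%s", $2; next }
                      { printf " | %s", $2 }
    END               { print "" }
' input.log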
The key won't be the language, because the problem is I/O bound, so pick the language that you feel most comfortable with.
The key is how it is coded. You'll be fine as long as you don't load the whole file into memory -- load chunks at a time, and save the data in chunks; that will be more efficient.
Java has a PushbackInputStream that may make this easier to code. The idea is that you guess how much to read, and if you read too little, push the data back and read a larger chunk.
Then when you've read too much, process the data, push back the remaining bit, and continue to the next iteration of the loop.
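The same chunk-and-carry idea sketched in Python (big.log is a placeholder; the regex matches the record format from the question):

import re

# One complete record, in the format shown in the question.
RECORD = re.compile(r'TIMESTAMP=(\d{14});PARAM1=([^;]+);PARAM2=([^;]+);PARAM3=([^;]+);')

def records(path, chunk_size=1 << 20):
    buf = ''
    with open(path) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf += chunk
            last_end = 0
            for m in RECORD.finditer(buf):
                last_end = m.end()
                yield m.groups()
            buf = buf[last_end:]  # carry any partial record into the next chunk

for ts, p1, p2, p3 in records('big.log'):  # placeholder file name
    pass  # e.g. build the object / insert the row into the DB here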
Something like this should work.
use strict;
use warnings;

my $filename = shift @ARGV;
open my $io, '<', $filename or die "Can't open $filename";

my ($match_buf, $read_buf, $count);
while (($count = sysread($io, $read_buf, 1024, 0)) != 0) {
    $match_buf .= $read_buf;    # accumulate, since records can straddle reads
    while ($match_buf =~ s{TIMESTAMP=(\d{14});PARAM1=([^;]+);PARAM2=([^;]+);PARAM3=([^;]+);}{}) {
        my ($timestamp, @params) = ($1, $2, $3, $4);
        print $timestamp . "\n";
        last unless $timestamp;
    }
}
This is easily handled in Perl, Awk, or C. Here's a start on a version in C for you:
#include <stdio.h>
#include <string.h>
#include <err.h>

int
main(int argc, char **argv)
{
    const char *filename = "noeol.txt";
    FILE *f;
    char buffer[1024], *s, *p;
    char line[1024];
    size_t n;

    if ((f = fopen(filename, "r")) == NULL)
        err(1, "cannot open %s", filename);

    while (!feof(f)) {
        n = fread(buffer, 1, sizeof buffer, f);
        if (n == 0) {
            if (ferror(f))
                err(1, "error reading %s", filename);
            else
                continue;
        }
        /* Split the buffer on ';' and print each field on its own line,
         * indenting everything except the TIMESTAMP fields. */
        for (s = p = buffer; p - buffer < n; p++) {
            if (*p == ';') {
                *p = '\0';
                strncpy(line, s, p - s + 1);
                s = p + 1;
                if (strncmp("TIMESTAMP", line, 9) != 0)
                    printf("\t");
                printf("%s\n", line);
            }
        }
    }
    fclose(f);
    return 0;
}
Sounds like a job for sed:
sed -e 's/;\?[A-Z0-9]*=/|/g' -e 's/\(^|\)\|\(;$\)//g' < input > output
You might want to take a look at Hadoop (java) or Hadoop Streaming (runs Map/Reduce jobs with any executable or script).
If you code your own solution, you will probably benefit from reading larger chunks of data from the file and processing them in batches (rather than using, say, readline()), looking for the delimiter that marks the end of each record. With this approach, you need to be mindful that you may not have retrieved a complete final record, so some logic is required to handle that.
I don't know what performance benefit you'd realize, since I haven't tested it, but I've leveraged similar techniques with success.
I know this is an exotic language and may not be the best solution, but when I have ad hoc data, I consider PADS.