I'm trying to debug a program that is using the Z3 API, and I'm wondering if there's a way, either from within the API or by giving Z3 a command, to print the current logical context, ideally as if it had been read in from an SMT-LIB file.
This question from 7 years ago seemed to indicate that there would be a way to do this, but I couldn't find it in the API docs.
Part of my motivation is that I'm trying to debug whether my program is slow because it's creating an SMT problem that's hard to solve, or whether the slowdown is elsewhere. Being able to view the current context as an SMT-LIB file, and run it in Z3 on the command line, would make this easier.
It's not quite clear what you mean by "logical context." If you mean all the assertions the user has given to the solver, then the command:
(get-assertions)
will return it as an S-expression-like list; see Section 4.2.4 of http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
But this doesn't sound useful for your purposes; after all, it is going to return precisely everything you yourself have asserted.
If you're looking for a dump of all the learned lemmas, internal assertions the solver created, etc., I'm afraid there's no way to do that from SMTLib. You probably can't do that using the programmatic API either. (Though this needs to be checked.) It would only be possible by actually modifying the source code of z3 itself (which is open source) and putting in relevant debug traces. But that would require a lot of study of the internals of z3, and it is unlikely to help unless you're intimately knowledgeable about the z3 code base itself.
I find that running z3 -v:10 can sometimes provide diagnostic info; if you see it repeatedly printing something, it's a good indication that something has gone wrong in that area. But again, what it prints and what it exactly means is guesswork unless you study the source code itself.
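That said, if all you want is to dump what you yourself have asserted so you can replay it from the command line, the programmatic API can do that. Here is a minimal sketch using the Python bindings (z3py); the variables and constraints are just placeholders, and Solver.to_smt2() serializes the currently asserted constraints as SMT-LIB 2:

from z3 import Solver, Int

s = Solver()
x, y = Int('x'), Int('y')
s.add(x > 0, y > 0, x + y > 5)

# Write the asserted context out as an SMT-LIB 2 file, then replay it
# on the command line with:  z3 dump.smt2
with open('dump.smt2', 'w') as f:
    f.write(s.to_smt2())

Timing that command-line run against your program should tell you whether the slowdown is in the solver or elsewhere.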
I have been struggling a lot to get this to work.
Can someone provide an example, with any Lua API, of two scripts that pass a message back and forth?
I have tried Oil, lua-ipc and zeromq, but I have run into several missing-library issues.
The ultimate goal is to pass a vector of numbers from one Lua process to another Lua process (with a different version of Lua) without going through disk.
Here is a similar example in Python of IPC in a single file. Something similar in Lua would be extremely helpful.
I really need an example, as my knowledge of pipes and UDP/TCP is not strong.
The equivalent would be to use luasocket. These examples come very close to the Python example given; here, socket:receive() is used for the framing.
https://github.com/diegonehab/luasocket/blob/master/samples/listener.lua
https://github.com/diegonehab/luasocket/blob/master/samples/talker.lua
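For reference, here is the same listener/talker pattern sketched in Python (the port and the payload are arbitrary); the linked Lua samples have exactly this structure, with socket:receive() providing the newline framing on the Lua side:

# listener.py: accept one TCP connection and read newline-framed
# messages, mirroring luasocket's listener.lua.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 8080))
srv.listen(1)
conn, _ = srv.accept()
for line in conn.makefile('r'):   # one message per newline
    print('received:', line.strip())

# talker.py: connect and send a newline-terminated message,
# mirroring talker.lua (here, a vector of numbers).
#
#   import socket
#   c = socket.create_connection(('127.0.0.1', 8080))
#   c.sendall(b'1.5 2.5 3.5\n')

Because the two ends share only a TCP socket and a plain-text framing convention, it doesn't matter that the two processes run different Lua versions.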
ConQAT's documentation claims it can do clone detection on COBOL code, but I can't find an appropriate block in the list of Included blocks.
The only one that could be considered is StatementCloneAnalysis, but it would get confused by the line numbers that precede each line:
016300******************************************************************0058
Interesting tool. I took a quick look, and it seems to me that a simple fix might be to pre-process the COBOL source to overwrite columns 1 through 6 with spaces and trim everything after column 72.
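A minimal sketch of that pre-processing step in Python (the file names are illustrative):

# Blank out the sequence number area (columns 1-6) and drop everything
# past column 72, so only the code area (columns 7-72) reaches the tool.
def strip_reference_format(line):
    line = line.rstrip('\n')
    return ' ' * 6 + line[6:72]

with open('PROGRAM.CBL') as src, open('PROGRAM.stripped.cbl', 'w') as out:
    for line in src:
        out.write(strip_reference_format(line) + '\n')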
After poking around for a while I came across the NextToken scanner definition file for COBOL. It looks like it will "happily" pick up tokens from the sequence number area as well as after column 72. The tokenizer appears to deal only with COBOL source code after it has gone through the library processing phase of a compile (i.e. after compiler directives such as COPY/REPLACE have been processed). COPY/REPLACE were specified as keywords, but I really don't see how this tokenizer would deal with them properly, particularly where pseudo-text is involved.
If working with an IBM COBOL compiler, you can specify the MDECK option on a compile to generate a suitable source file for analysis. I am not familiar with other vendors, so I cannot comment further on how to generate a post-text-manipulation source deck.
The level of clone detection ConQAT provides for COBOL appears to be very limited relative to other languages (e.g. Java). I suspect you will have to put in a lot of hours to get anything more than trivial clone detection out of it for COBOL programs. However, this could be a very useful project given the heavy use of cut/paste coding in typical COBOL programs (COBOL programmers often make a joke out of it: only one COBOL program has ever been written, the rest are just modified copies of it). I wish you well.
Given that ConQAT deals with COBOL badly, you might look at our CloneDR tool.
It has a version that works explicitly with IBM Enterprise COBOL, using a precise parser, and it handles all that sequence number nonsense correctly. (It will even read the COBOL code in its native EBCDIC, meaning a literal string containing an ASCII newline character doesn't break the parser.)
[If your COBOL isn't IBM COBOL, this won't help you, but otherwise you won't "have to put a lot of hours to get anything"].
We think the AST-based detection technique detects better clones more accurately than ConQAT's token-based detection. The site explains why in detail, and shows sample COBOL clones detected by CloneDR.
Specific to the OP, who appears to be working in Japan: as a bonus, CloneDR handles Japanese character sets because it is implemented on top of an underlying tool infrastructure that is Unicode- and Shift-JIS-enabled. We haven't had a lot of experience with Japanese COBOL, so there might be a remaining glitch, for example with G literals containing Japanese characters.
I need to add some capabilities to a very complex, multi-layered makefile (lots of include files, lots of targets, lots of variables) that was written by someone else who, of course, is no longer with the company. There are some folks here who understand some of it, but not the whole thing.
Is there a Make debugger where I can single-step through it, or some equivalent way to follow the execution through the many files and targets that a compile flows through? Or is there some other way to get a handle on how this thing works?
GNU Make has various options for printing out some debugging information while running. -d will print out a whole lot of debugging information; perhaps too much, and perhaps not what you need, but you might try sifting through that. There are also options like -n for doing a dry run, -p for printing the database of rules, -w for printing out which directory you're in to help track down issues in complicated recursive makes, and --warn-undefined-variables for tracking down certain problems with variables.
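One more trick that helps on large, multi-layered makefiles: a throwaway pattern rule for dumping any variable's value. This is a sketch to add to the top-level makefile, not a built-in feature; note that the recipe line must start with a tab:

# Usage: make print-SOMEVAR
# Prints SOMEVAR's value after all includes have been processed,
# plus where it came from (file, command line, environment, ...).
print-%:
	@echo '$* = $($*) (from $(origin $*))'

Running make print-CFLAGS, for instance, shows what CFLAGS ends up as once every included fragment has had its say, which is often the fastest way to trace a variable through a layered build.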
If none of that works, you might want to try Remake, which claims to be a patched GNU make with a debugger. I haven't used it, but from the documentation, it looks like it might help you out, and it has some more advice on trying to track down bugs in Makefiles.
You might want to look at this article: http://oreilly.com/catalog/make3/book/ch12.pdf (the chapter "Debugging Makefiles" from O'Reilly's "Managing Projects with GNU Make", Third Edition).
My company's proprietary software generates a log file that is much easier to use if it is parsed. The log parser we all use was written by another employee as a side project, and it has horrible performance.
These log files can grow to tens of megabytes very quickly, and the parser we currently use has issues if a log file is bigger than 1 megabyte.
So, I want to write a program that can parse this massive amount of text in the shortest amount of time possible. We use Windows exclusively, so running on Windows is a must. Our current implementation runs on a local web server, and I'm convinced that running it as an application would have to be faster.
All suggestions will be helpful. Thanks.
EDIT: My ultimate goal is to parse the text and display it in a much more user-friendly manner, with colors and such. Can you do this with Perl and Python? I know you can do this with Java and C++. So it will function like Notepad: you open a log file, but the screen displays the user-friendly format instead of the raw file.
EDIT: So, I can't choose a single best answer; the consensus was to choose a language that can best display what I'm going for, and then write the parser in that. Also, using ANTLR will probably make this process much easier. I changed the original question, since I guess I didn't ask what I was really looking for. Thanks everyone!
Hmmm, "go with what you know" was a good answer. Perl was designed for this sort of thing (but imo is well suited for simple parsing, but I'd personally avoid it for complex projects).
If it gets even a little complex, why not use a proper syntax and grammar set-up?
Lex & Yacc (or Flex & Bison) spring to mind, but personally I would always reach for Antlr
Define various "words" in terms of patterns (syntax), and rules to combine those words (grammar) and Antlr will spit out a program to parse your input (you can have the program in Java, C, C++ and more (you are worried about parse time, so choose a compiled language, of course)).
I personally find it tedious to hand-craft parsers, and even more tedious to debug them, but AntlrWorks is a lovely IDE which really makes it a piece of cake ...
(The original answer included an AntlrWorks screenshot; the pane at the bottom of it is defining a grammar rule.)
If you mess up your grammar rules, you will be informed. This is not the case with hand-crafted parsers, where you just scratch your body part and wonder about the "strange results"...
Check it out. Even if you think your project is trivial now, it may well grow. And if you have any interest in parsing you do owe it to yourself to at least be familiar with lex/yacc, but especially Antlr(Works)
You should use the language that YOU know... Unless you have so much time available to complete the project that you can also spend the time learning a new language.
I would suggest using Python or Perl. Parsing large text files with regular expressions is really fast.
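For instance, a minimal sketch in Python; the "TIMESTAMP LEVEL MESSAGE" line format and the file name are placeholders for whatever your log actually looks like:

import re

# Hypothetical log line: "2009-01-01 12:00:00 ERROR something broke"
LINE = re.compile(r'^(\S+ \S+) +(ERROR|WARN|INFO) +(.*)$')

with open('app.log') as f:
    for line in f:                  # streams the file, so memory stays flat
        m = LINE.match(line)
        if m:
            timestamp, level, message = m.groups()
            print(level, message)   # stand-in for the colored display

A compiled regex plus line-by-line iteration keeps this fast and flat in memory even when the file runs to tens of megabytes.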
Whatever language your coworker used.
(I could tell you that any macro assembler will let you write code that would rip through your data, but seriously, are you going to spend months writing assembly just to save a few seconds of CPU time? Rewriting a program is fun but it's not practical.)
Whip out your profiler, point it at your horribly performing log parser, and fix the performance problems. If it's a common language, there will be people here who can help.
"Parse this massive amount of text in the shortest time possible."
Consider the PADS Project from AT&T. It's a special-purpose language, compatible with C, that's designed exactly for high-speed parsing of log files and other ad hoc data formats. There's even a feature where it can try to learn your log format from examples, although I don't know if that has hit production yet. The people behind the project are really smart, and it's had a big impact within the phone company. PADS gives very high performance on data streams that produce gigabytes. Joe Bob says check it out.
If "massive text in the shortest time possible", Perl and Python are not the answer. But if you need to whip up something not too slow, and it's OK to take longer, Perl and Python could be OK. Tems of megabytes is not actually that big.
I've used both Python and Perl. Perl is a more natural fit for this but can be hard to maintain. Python will do it just as well and is easier to read. Go for Python.
I believe Perl is considered a good choice for parsing text.
Maybe a finished product such as the MS LogParser (usage podcast here) would do what you need, and it's free.
Perl is good for text processing.
A number of very good text processing programs have been written in Perl. Ack (a grep replacement) is one.
Sounds like a job for Perl, much as I don't particularly care for it as a language myself. ActivePerl is a reasonable distribution of Perl for Windows.
I'd suggest Perl. It was practically built for parsing log files. As for output I agree with ghostdog74, HTML is the way to go. Perl has dozens of modules that allow you to build and/or template HTML.
I'd parse out the data using regular expressions, then use Template::Toolkit (on CPAN) to create nice pages using HTML and CSS templates.
C/C++ or Java...
For C/C++ I have a snippet that might help you:
FILE *f = fopen(file, "rb");
if (f == NULL) {
    return DBDEMON_OPEN_ERROR; /* open failed */
}

/* Check the fscanf return value rather than looping on feof: feof only
   becomes true after a read fails, so a feof-controlled loop would
   process one garbage record at end of file. (Field widths on the %s
   conversions would also be needed to guard against buffer overruns.) */
for (int i = 0; fscanf(f, "%u %s %s %c\n", &db[i].id, db[i].name,
                       db[i].uid, &db[i].priviledge) == 4; i++) {
    db_size++;
}
fclose(f);
This reads a file with the following format:
int string string char
1 SOMETHING ANYTHING Z
into a struct defined as follows:
typedef struct {
    unsigned int id;
    char name[DBDEMON_NAME_MAXSIZE];
    char uid[DBDEMON_UID_MAXSIZE];
    char priviledge;
} DATABASE;
Use fscanf with care; since the sizes of the destination buffers are not checked, malformed input can result in errors.
But I think this is pretty efficient.