If I only run (check-sat), the result is reported as unsat. But if I then run (get-model), no result is reported and an error comes out right away. Is there any way to get the verdict without triggering the error?
As Patrick commented, it's really hard to decipher what you're asking. Please provide some code snippets to show what you get and what you hope to achieve.
Having said that, I'll take a wild guess that you are in a situation where the problem is unsat, i.e., (check-sat) returns an unsat response. Yet your script has (get-model) on the next line, which of course errors out since there's no model. And you'd like a way to avoid the error message.
This is a known limitation of the SMTLib command language: you cannot programmatically inspect the output of commands, unfortunately. The usual way to solve this sort of problem is to have a "live" connection to the solver (typically in the form of a text pipe), read a line after issuing (check-sat), and continue programmatically depending on the response. This is how most systems built on top of SMT solvers work.
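For instance, a minimal sketch of that pipe approach in Java might look like the following; the z3 -in invocation and the toy constraints are just placeholders for your own setup:

    import java.io.*;

    public class Z3Pipe {
        public static void main(String[] args) throws IOException {
            // Start z3 in interactive mode, reading SMTLib from stdin.
            Process z3 = new ProcessBuilder("z3", "-in")
                             .redirectErrorStream(true)
                             .start();
            PrintWriter toZ3 = new PrintWriter(
                    new OutputStreamWriter(z3.getOutputStream()), true);
            BufferedReader fromZ3 = new BufferedReader(
                    new InputStreamReader(z3.getInputStream()));

            toZ3.println("(declare-const x Int)");
            toZ3.println("(assert (and (> x 0) (< x 0)))"); // deliberately unsat
            toZ3.println("(check-sat)");

            String verdict = fromZ3.readLine();  // "sat", "unsat", or "unknown"
            if ("sat".equals(verdict)) {
                toZ3.println("(get-model)");     // only ask when it's safe
                // ... read the model lines here ...
            } else {
                System.out.println("solver said: " + verdict);
            }
            toZ3.println("(exit)");
        }
    }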
Alternatively, you can use higher-level APIs in other languages (C/C++/Java/Python/Haskell..) and program z3 that way; and use the facilities of the host language to direct that interaction. This requires you to learn another interface on top of SMTLib, but for serious uses of this technology, you probably don't want to limit yourself to a pure SMTLib interface anyhow.
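As an illustration of that route, here's roughly what the check-before-get-model pattern looks like with the z3 Java bindings; the constraints are made up just to force an unsat answer:

    import com.microsoft.z3.*;

    public class CheckFirst {
        public static void main(String[] args) {
            try (Context ctx = new Context()) {
                Solver s = ctx.mkSolver();
                IntExpr x = ctx.mkIntConst("x");
                // Deliberately contradictory, so check() comes back unsat.
                s.add(ctx.mkGt(x, ctx.mkInt(0)), ctx.mkLt(x, ctx.mkInt(0)));

                Status result = s.check();
                if (result == Status.SATISFIABLE) {
                    System.out.println(s.getModel()); // only valid after sat
                } else {
                    System.out.println("no model available: " + result);
                }
            }
        }
    }

The point is that the host language inspects the Status value and branches on it, which is exactly what plain SMTLib scripts can't do.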
Also see this answer for a related discussion: Executing get-model or unsat-core depending on solver's decision
Hope this addresses your issue, though it's hard to tell from your question whether this is what you were getting at.
Consider the following smt2 file generated with the help of KLEE.
I am trying to evaluate it using z3. However, z3 hangs forever. Specifically, when the formula is UNSAT, z3 runs forever and does not produce any result.
Is the formula size too big?
Is there any issue with using the AUFBV logic?
May I get some suggestions to improve z3's performance?
Each assert statement has some common subexpressions. Is it possible to improve z3's performance by solving the subexpressions separately?
This is going to be impossible to answer, as the SMT-lib file you've linked is undecipherable for a non-KLEE user. I recommend asking the KLEE folks directly, using your original program that gave rise to this. Failing that, try to reduce the smt2 file to a minimum and see if you can at least hand-annotate it to see what it's trying to do.
Regarding your question about common subexpressions: you have to experiment to find out. But the way most such solvers are constructed, they /will/ discover common subexpressions themselves and reuse lemmas about them automatically as they convert your input to an internal representation. So it'd surprise me if doing this by hand helped in any significant way, unless the input is really massive. (The example you linked isn't really that big, so I doubt that's an issue.)
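If you want to convince yourself of that sharing, here's a small experiment with the z3 Java bindings. It relies on the (assumed) fact that z3 hash-conses terms within a context, so structurally identical terms map to one internal node:

    import com.microsoft.z3.*;

    public class SharingDemo {
        public static void main(String[] args) {
            try (Context ctx = new Context()) {
                IntExpr x = ctx.mkIntConst("x");
                IntExpr y = ctx.mkIntConst("y");
                // Build "the same" subexpression twice.
                var a = ctx.mkAdd(x, y);
                var b = ctx.mkAdd(x, y);
                // Both handles should point at one shared node in z3's term DAG.
                System.out.println(a.equals(b)); // expected to print: true
            }
        }
    }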
I know the parseSMTLIB2File Java API ignores certain commands in an SMT2 file. However, is there a way around that? I am generating smt2 files and using parseSMTLIB2File and solver.check() to parse and solve the constraints.
Now, I would like to use the unsat cores from the solver for some computation. I know I could probably do it using stdin and stdout (here). However, this would be very inefficient for running the algorithms. Further, it is also not ideal to change the entire code base to switch every constraint generation to the Z3 Java APIs.
The native C++ interface handles options and (tracked) assertions well. Hence, is there any way around this? How can I do this programmatically and efficiently?
Do the other C/C++/Python parseSMTLIB2File APIs behave the same way as Java's, or might they read in additional things?
is there a way around it?
No. parseSMTLIB2File is not a complete interface to the solver, and it's not intended to be. The only options are to switch to the full API interface, or to the full text interface by emitting .smt2 files and passing those to Z3. The latter can be done via pipes instead of actual files, and many users are happy with the performance of that.
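If you do go the API route, note that you don't necessarily have to redo your constraint generation: parseSMTLIB2File hands back the parsed assertions, which you can then assert yourself with tracking labels to obtain cores. A rough sketch with the Java bindings; the file name and label scheme are invented:

    import com.microsoft.z3.*;

    public class CoresFromFile {
        public static void main(String[] args) {
            try (Context ctx = new Context()) {
                // Returns the parsed assertions; commands like (check-sat)
                // or (get-unsat-core) in the file are ignored.
                BoolExpr[] assertions =
                    ctx.parseSMTLIB2File("constraints.smt2", null, null, null, null);

                Solver s = ctx.mkSolver();
                for (int i = 0; i < assertions.length; i++) {
                    // Track each assertion under a fresh boolean label
                    // so it can show up in the core.
                    s.assertAndTrack(assertions[i], ctx.mkBoolConst("a" + i));
                }
                if (s.check() == Status.UNSATISFIABLE) {
                    for (BoolExpr label : s.getUnsatCore()) {
                        System.out.println(label); // labels of the core assertions
                    }
                }
            }
        }
    }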
I found this question here in which the OP asks for a way to profile an ANTLR grammar.
However, the answer is somewhat unsatisfying, as it is limited to grammars without actions and, even more importantly, it is an automated profiling that will (as I see it) use the default constructor of the generated lexer/parser to construct it.
I need to profile a grammar, that does contain actions and that has to be constructed using a custom constructor. Therefore I'd need to be able to instantiate the lexer + parser myself and then profile it.
I was unable to find any information on this topic. I know there is a profiler for IntelliJ, but it works quite similarly to the one described in the linked question's answer (maybe it's even the same).
Does anyone know how I can profile my grammar with these special needs? I don't need any fancy GUI. I'd be satisfied if the result got printed to the console or something like that.
To wrap it up: I'm searching for either a tool or a hint on how to write some code that lets me profile my ANTLR grammar (with self-instantiated lexer/parser).
Btw, my target language is Java, so I guess the profiler has to be in Java as well.
A good start is calling Parser.setProfile(true) and examining what you get from Parser.getParseInfo() after a parse run. I haven't yet looked closely at what's provided in detail by the profiling result, but it's on the todo list for my vscode extension for ANTLR4 to provide profiling info for grammars to help improve them.
A hint for getting from the decision info to a specific rule: there's a decision number, which is an index into ATN.decisionToState. The DecisionState instance you get from this is an ATNState descendant, which allows you to read ATNState.ruleIndex from it. The rule index can then be used with your parser's ruleNames property to find the name of that rule. The value is also what is used for the rule's enum entry.
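Putting those two paragraphs together, a self-instantiated profiling run could look roughly like this; MyLexer, MyParser, startRule, and the constructor arguments are placeholders for your generated classes:

    import org.antlr.v4.runtime.*;
    import org.antlr.v4.runtime.atn.*;

    public class ProfileGrammar {
        public static void main(String[] args) throws Exception {
            CharStream input = CharStreams.fromFileName("input.txt");
            MyLexer lexer = new MyLexer(input);          // your own construction
            CommonTokenStream tokens = new CommonTokenStream(lexer);
            MyParser parser = new MyParser(tokens /*, custom ctor args */);

            parser.setProfile(true);                     // must be set before parsing
            parser.startRule();                          // your entry rule

            for (DecisionInfo di : parser.getParseInfo().getDecisionInfo()) {
                if (di.timeInPrediction == 0) continue;  // skip decisions never hit
                DecisionState state = parser.getATN().decisionToState.get(di.decision);
                String rule = parser.getRuleNames()[state.ruleIndex];
                System.out.printf("%-20s decision %3d: %8.3f ms prediction, %d SLL lookahead ops%n",
                    rule, di.decision, di.timeInPrediction / 1e6, di.SLL_TotalLook);
            }
        }
    }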
When writing proofs I noticed that Agda's auto proof search frequently wouldn't find solutions that seem obvious to me. Unfortunately, coming up with a small example that illustrates the problem seems to be hard, so I'll try to describe the most common patterns instead.
I forgot to add -m to the hole to make Agda look at the module scope. Can I make that flag the default? What downsides would that have?
Often the current hole can be filled by a parameter of the function I am about to implement. Even when adding -m, Agda will not consider function parameters or symbols introduced in let or where clauses though. Is there something wrong with simply trying all of them?
When viewing a goal, symbols introduced in let or where clauses are not even displayed. Why?
What other habits can make using auto more effective?
Agda's auto proof search is hardwired into the compiler. That makes it fast, but limits the amount of customization you can do. One alternative approach would be to implement a similar proof search procedure using Agda's reflection mechanism. With the recently beefed-up version of reflection using the TC monad, you no longer need to implement your own unification procedure.
Carlos Tomé has been working on reimplementing these ideas (check out his code: https://github.com/carlostome/AutoInAgda). He's been working on several versions that try to use information from the context, print debugging info, etc. Hope this helps!
This question is intended to be a discussion of people's personal opinions in handling user input.
This portion of the project that I am working on handles user input in a manner similar to an IRC chat. For instance, there are set commands and whatnot, for chatting, executing actions, etc.
Now, I have several options to choose from for parsing this input. I could go with regular expressions, I could parse it directly (i.e., a large switch statement with all supported commands, simply checking the first x characters of the user input), or I could even go crazy and add in a parser similar to Flex/Bison implementations. One other option I was considering was defining all commands in an XML file, to separate them from the code implementation.
So, what are the thoughts of the community?
I'd go with a nice mixed bag of all.
Obviously you'll have to sanitize the input. Make sure there's no nasty stuff there; depending on where the input is going, prevent SQL injection, XSS, CSRF, etc.
But when your input is clean, you could go with a regexp that catches the ones intended as commands and captures all the necessary submatches (command parameters, etc.), and then have some sort of dispatcher-switch statement or similar.
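For example, something along these lines; the command syntax and handler stubs are invented for illustration:

    import java.util.regex.*;

    public class CommandDispatcher {
        // Hypothetical IRC-style syntax: "/command arg1 arg2 ..."
        private static final Pattern COMMAND =
                Pattern.compile("^/(\\w+)(?:\\s+(.*))?$");

        public static void handle(String input) {
            Matcher m = COMMAND.matcher(input.trim());
            if (!m.matches()) {
                broadcast(input);                  // plain chat, no command
                return;
            }
            String cmd  = m.group(1).toLowerCase();
            String args = m.group(2) == null ? "" : m.group(2);
            switch (cmd) {                         // the dispatcher-switch
                case "msg": sendPrivate(args); break;
                case "me":  emote(args);       break;
                default:    reply("unknown command: " + cmd);
            }
        }

        // Stubs standing in for the real chat back end.
        private static void broadcast(String s)   { System.out.println("chat: " + s); }
        private static void sendPrivate(String s) { System.out.println("msg: "  + s); }
        private static void emote(String s)       { System.out.println("me: "   + s); }
        private static void reply(String s)       { System.out.println(s); }
    }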
There really is no cover-all best practice here, apart from always, always, and quadruple-always making sure user input is sanitized. Apart from that, go with what seems to fit your case best.
Obviously, there are those who say that if you've got a problem and you're thinking of using regexps to solve said problem, you've got two problems. But used cautiously, they're the best thing ever. Just remember that regexp monsters can lead to really poor readability really quickly.