Assertion violation in MVS with Dafny, but it is verified in rise4fun - dafny

https://rise4fun.com/Dafny/ZkKN
This assertion is not verified by Dafny 2.3.0 under MVS, but it is verified in rise4fun (with a warning about triggers, of course). Under MVS it results in "Verification inconclusive".
Moreover, https://rise4fun.com/Dafny/Um6t does not print "hello" (it does not run) in rise4fun. There should be some error, since there is no "assertion violation".
Can anyone help?

Your program verifies when I add the -arith:2 flag, which adds symbolic synonyms for the arithmetic symbols and allows them to be used in triggers.
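For example, if you run Dafny from the command line, the invocation would look something like this (the file name is just illustrative):
dafny -arith:2 YourProgram.dfy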
Edit:
A more general answer is that your problem uses nonlinear arithmetic, which is in general undecidable. There are some tips on how to handle it in the FAQ at https://github.com/dafny-lang/dafny/wiki/FAQ; I don't have much experience with Dafny and nonlinear arithmetic myself, however.
I don't know why your file worked before, but to investigate you could print the SMT encoding Dafny feeds to Z3 (see "dafny output as SMT file") and compare the different versions; if there's no difference there, maybe the difference is between Z3 versions.
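One way to get that encoding, assuming Dafny forwards Boogie's proverLog option (the file name is illustrative), is something like:
dafny -proverLog:encoding.smt2 YourProgram.dfy
The resulting .smt2 file can then be run through z3 on the command line for comparison.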
Maybe there's a way to encode your problem differently that works more stably across different solver versions, assuming there's no bug in any of the tools.

Related

Performance Visibility for Z3

Is there a way to see what Z3 is doing under the hood? I would like to be able to see the steps it is taking, how long they take, how many steps there are, etc. I'm checking the equivalence of floating-point addition/multiplication hardware designs against Z3's built-in floating-point addition/multiplication. It is taking quite a bit longer than expected, and it would be helpful to see what exactly it's doing in the process.
You can run it with higher verbosity:
z3 -v:10
This will print a lot of diagnostic info. But the output is unlikely to be readable/understandable unless you're really familiar with the source code. Of course, being an open source project, you can always study the code itself: https://github.com/Z3Prover/z3
If you're especially interested in e-matching and quantifiers, you can use the Axiom Profiler (GitHub source, research paper) to analyse a Z3 trace. The profiler shows instantiation graphs, tries to explain instantiations (which term triggered, which equalities were involved), and can even detect and explain matching loops.
For this, run z3 trace=true and open the generated trace file in the profiler. Note that the tool is a research prototype: I've noticed some bugs, it seems to work more reliably on Windows than on *nix, and it might not always support the latest Z3 versions.
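For example (the input file name is illustrative; as far as I know the trace goes to z3.log by default, and the trace_file_name parameter changes that):
z3 trace=true input.smt2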

Print current logical context as an SMT-LIB file in Z3

I'm trying to debug a program that is using the Z3 API, and I'm wondering if there's a way, either from within the API or by giving Z3 a command, to print the current logical context, hopefully as if it had been read in an SMT-LIB file.
This question from 7 years ago seemed to indicate that there would be a way to do this, but I couldn't find it in the API docs.
Part of my motivation is that I'm trying to debug whether my program is slow because it's creating an SMT problem that's hard to solve, or whether the slowdown is elsewhere. Being able to view the current context as an SMT-LIB file, and run it in Z3 on the command line, would make this easier.
It's not quite clear what you mean by "logical context." If you mean all the assertions the user has given to the solver, then the command:
(get-assertions)
will return them as an S-expression-like list; see Section 4.2.4 of http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
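A minimal sketch (the declarations are just illustrative; per the standard, some solvers want the :produce-assertions option set first):
(set-option :produce-assertions true)
(declare-const x Int)
(assert (> x 0))
(assert (< x 10))
(get-assertions)
; prints something like ((> x 0) (< x 10))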
But this doesn't sound useful for your purposes; after all it is going to return precisely everything you yourself have asserted.
If you're looking for a dump of all the learned lemmas, internal assertions the solver created, etc., I'm afraid there's no way to do that from SMTLib. You probably can't do it using the programmatic API either. (Though this needs to be checked.) That would only be possible by actually modifying the source code of z3 itself (which is open source) and putting in relevant debug traces. But that would require a lot of study of the internals of z3 and would be unlikely to help unless you're intimately familiar with the z3 code base itself.
I find that running z3 -v:10 can sometimes provide diagnostic info; if you see it repeatedly printing something, that's a good indication that something has gone wrong in that area. But again, what it prints and what exactly it means is guesswork unless you study the source code itself.

Bug? Changing order of assertions affects satisfiability

After changing the order of assertions in an unsat query, it becomes sat.
The query structure is:
definitions1
assertions1
definitions2
bad_assertions
check-sat
I sort bad_assertions with Python's sorted function, and this makes the unsat query sat.
Z3 versions 4.0, 4.1; Ubuntu 12.04
Unfortunately, the queries are quite large, which makes them difficult to debug, so I can provide any additional info if needed.
Here are the originally unsat query with the lines marked for mixing, and a simple Python script to mix lines in the query.
I managed to reproduce the problem reported in your question. Both examples are satisfiable. The script that produces unsat is exposing a bug in the datatype theory. I fixed the bug, and the fix will be available in Z3 4.2. Since this is a soundness bug, we will release version 4.2 very soon. In the meantime, you can work around the bug by using the option RELEVANCY=0 on the command line.
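With the Z3 4.0/4.1-era command-line syntax, that would look something like the following, if I recall the option passing correctly (the file name is illustrative):
z3 -smt2 RELEVANCY=0 query.smt2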
From your description it sounds like a bug; sat/unsat should of course not depend on ordering.
If packaging up a repro is difficult, then one way to help us debug the problem, once you have confidence in what triggers the bug, is to use "open_log()" to dump a trace of all interactions with Z3. You should use "open_log" before other calls to Z3. We can then replay the log without your sources.
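A minimal sketch of what that looks like with the C API (the log file name is illustrative; as far as I know the other language bindings expose a similar open_log entry point):

// Hedged sketch: open the interaction log before any other Z3 call,
// then build and check assertions as usual; every API call is recorded
// in the log so it can be replayed later without the original sources.
#include <z3.h>

int main(void) {
    Z3_open_log("z3_interactions.log");  /* must precede all other Z3 calls */
    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);
    /* ... create solvers, assert formulas, and call check here ... */
    Z3_close_log();
    Z3_del_context(ctx);
    return 0;
}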

different run time for the same code in Z3

I used the fixedpoint engine of z3, and I found that the running time of the fixedpoint is always different. Have you met the same problem? Why does this happen?
That sounds unexpected if you get large variance for the same code starting from the same state. Variance can arise if you start from different states (that is, if you have made miscellaneous calls to Z3, whether over the text API or the programmatic API, between the rounds); otherwise Z3 should not exhibit hugely non-deterministic behavior. Non-deterministic behavior may arise from bugs, so it would be appreciated if you could describe more precisely the scenario that is exercising it.

Z3 C-API gets stuck

I am using Z3 C-API to solve some problems that contain constraints in nonlinear integer arithmetic (NIA). I am using z3 4.0 for linux.
When I run my program that calls the Z3 C-API, it gets stuck on some problems and does not produce any answer - sat, unsat or unknown. It gets stuck inside the Z3_check_and_get_model() call.
I tried running it for more than 10 mins but didn't get any answer.
The binary keeps doing some work and the CPU usage is almost 100%.
I thought that this was happening because of the nonlinear constraints, but when I tried the same examples on Z3-SMT online and also with the z3 Linux binary using the -smt2 option, it quickly gives me a sat solution with a model.
The problems, although in the domain of NIA, are relatively simple.
I am not sure why this is happening.
Am I missing something here - perhaps some configuration setting that I need to provide to get the same performance as the z3 binary with -smt2 input? Or is this a bug?

Resources