SMT-LIB Benchmarks - z3

I would like to benchmark some SMT solvers and the SMT-LIB Benchmark repository [1,2] seems to be a good place to start.
However, the link has been down for at least a few days. Does anyone know another place where I can find these benchmarks?
[1] http://www.smtlib.org/
[2] http://smtexec.org/exec/smtlib-portal-benchmarks.php
EDIT:
The benchmarks are now here:
[1] http://smtlib.cs.uiowa.edu/benchmarks.shtml

The server that hosts the SMTLIB benchmarks has broken down and is currently being repaired. According to information I got from Cesare Tinelli, it should be back online some time this week.

The server is now at: http://smtlib.cs.uiowa.edu/ and the benchmarks are now at: http://smtlib.cs.uiowa.edu/benchmarks.shtml. I edited the question to avoid confusion.

Related

Performance Visibility for Z3

Is there a way to see what Z3 is doing under the hood? I would like to be able to see the steps it is taking, how long they take, how many steps there are, etc. I'm checking the equality of floating-point addition/multiplication hardware designs against Z3's built-in floating-point addition/multiplication. It is taking quite a bit longer than expected, and it would be helpful to see exactly what it's doing in the process.
You can run it with higher verbosity:
z3 -v:10
This will print a lot of diagnostic info. But the output is unlikely to be readable/understandable unless you're really familiar with the source code. Of course, being an open source project, you can always study the code itself: https://github.com/Z3Prover/z3
If you're especially interested in e-matching and quantifiers, you can use the Axiom Profiler (GitHub source, research paper) to analyse a Z3 trace. The profiler shows instantiation graphs, tries to explain instantiations (which term triggered, which equalities were involved), and can even detect and explain matching loops.
For this, run z3 trace=true, and open the generated trace file in the profiler. Note that the tool is a research prototype: I've noticed some bugs, it seems to work more reliably on Windows than on *nix, and it might not always support the latest Z3 versions.

Unsatisfiable Assumptions in Z3?

According to the SMTLib doc here, we can check sat using check-sat-assuming and then one can determine the unsatisfiable assumptions using get-unsat-assumptions.
Reflecting that on Z3's Java API, I can see the checkAssuming API doing the same thing as check-sat-assuming, but I can't seem to find anything similar to get-unsat-assumptions; all I can find is the getUnsatCore API.
So my question is: is there any way to get the unsat assumptions in Z3 using the Java API?
Much appreciated!
Looks like an oversight in the Java API. You might want to file a ticket (or better yet a pull-request) at their github site: https://github.com/Z3Prover/z3/issues
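As a data point, at the SMT-LIB level the flow looks like this (a sketch; the option and command names are from the SMT-LIB standard, and I believe current Z3 accepts them in smt2 input). Note also that when you call checkAssuming, Z3's unsat core is exactly the subset of failing assumptions, so getUnsatCore may already give you what you want:

```smt2
(set-option :produce-unsat-assumptions true)
(declare-const p Bool)
(declare-const q Bool)
(assert (=> p false))          ; p alone forces a contradiction
(check-sat-assuming (p q))     ; unsat
(get-unsat-assumptions)        ; expect a subset containing p
```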

Are there any papers describing why flex is faster than lex?

flex is called the "fast" lexical analyzer, but I cannot find any document that explains why it is faster than lex. flex has a manual, but it focuses on usage rather than internals. Could any experts in this field help? Either an explanation of flex's performance improvements or a link to one is welcome.
This answer is from Vern Paxson, who has allowed it to be shared here.
Alas, this would take quite a bit of time to sketch in any sort of
useful detail, as there are a number of techniques that contribute to
its performance. I wrote a paper about it a loooong time ago (mid
80s!) but don't have a copy of it. Evidently you can buy it from:
http://www.ntis.gov/search/product.aspx?ABBR=DE85000703
Sorry not to be of more help ...
To add to Vern's statement, flex does a much better job of table compression, providing several different space/time tradeoffs, and its inner loop is also considerably faster than lex's.
According to a (Usenet?) paper by Van Jacobson in the 1980s, lex was largely written by an AT&T intern. VJ described how its inner loop could be reduced from several dozen instructions to about three.
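The inner loop VJ was talking about is the classic table-driven DFA step. Here is a minimal sketch in Python (the character classes, table layout, and function names are made up for illustration; flex's real tables are compressed and far more elaborate):

```python
# Toy transition table for C-style identifiers: [A-Za-z_][A-Za-z0-9_]*
# Real flex tables are compressed; this dict is purely illustrative of
# the "one table lookup per input character" inner loop.
START, IDENT, DEAD = 0, 1, 2
ACCEPTING = {IDENT}

def classify(ch):
    """Map a character to a (hypothetical) equivalence class."""
    if ch.isalpha() or ch == "_":
        return "letter"
    if ch.isdigit():
        return "digit"
    return "other"

TABLE = {
    (START, "letter"): IDENT,
    (IDENT, "letter"): IDENT,
    (IDENT, "digit"): IDENT,
}

def longest_match(text):
    """Return the length of the longest identifier prefix of text."""
    state, last_accept = START, 0
    for i, ch in enumerate(text):
        # The whole inner loop is one table lookup per character --
        # this is the part VJ described reducing to a few instructions.
        state = TABLE.get((state, classify(ch)), DEAD)
        if state == DEAD:
            break
        if state in ACCEPTING:
            last_accept = i + 1
    return last_accept
```

The speed argument is that everything per character reduces to a table index and a state assignment; lex's original loop did far more work per character.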
Vern Paxson wrote flex for what he described at the time as the fastest data acquisition applications in the world. Not sure if I should go into more detail here.
I had the privilege of helping Vern with the 8-bit version, as I was working on compilers that had to scan Kanji and Katakana at the time.
I'm not so sure flex is that much faster than the AT&T version of lex. The two programs were developed independently, and to avoid confusion with the official version, the authors of flex probably chose a slightly different name. They may have intended to generate faster scanners, which is also suggested by the couple of options that trade space for time; likewise, they justify making %option yylineno (and a few other features) optional by the speed of the generated scanner.
Whether the slight differences in speed for such scanners are still relevant is debatable. I couldn't find any official statement on the choice of name either, so I guess you'd have to ask the original authors Jef Poskanzer and/or Vern Paxson. If you find them and get an answer, then please let us know here. History of software is interesting and you can still get the answer first hand.

Concise description of the Lua vm?

I've skimmed Programming in Lua, I've looked at the Lua Reference.
However, they both tell me what a function does, but not how.
When reading SICP, I got the feeling of "ah, here's the computational model underlying Scheme"; I'm trying to get the same sense for Lua -- i.e. a concise description of its VM, a "how" rather than a "what".
Does anyone know of a good document (besides the C source) describing this?
You might want to read the No-Frills Intro to Lua 5(.1) VM Instructions (pick a link, click on the Docs tab, choose English -> Go).
I don't remember exactly where I've seen it, but I remember reading that Lua's authors specifically discourage end-users from getting into too much detail on the VM; I think they want it to be as much of an implementation detail as possible.
Besides the already mentioned A No-Frills Introduction to Lua 5.1 VM Instructions, you may be interested in this excellent post by Mike Pall on how to read the Lua source.
Also see related Lua-Users Wiki page.
See http://www.lua.org/source/5.1/lopcodes.h.html . The list starts at OP_MOVE.
The computational model underlying Lua is pretty much the same as the computational model underlying Scheme, except that the central data structure is not the cons cell; it's the mutable hash table. (At least until you get into metaprogramming with metatables.) Otherwise all the familiar stuff is there: nested first-class functions with mutable local variables (let-bound variables in Scheme), and so on.
It's not clear to me that you'd get much from a study of the VM. I did some hacking on the VM a while back and it's a lot like any other register-oriented VM, although maybe a bit cleaner. Only a handful of instructions are Lua-specific.
If you're curious about the metatables, the semantics is described clearly, if somewhat verbosely, in Section 2.8 of the reference manual for Lua 5.1. If you look at the VM code in src/lvm.c you'll see almost exactly that logic implemented in C (e.g., the internal Arith function). The VM instructions are specialized for the common cases, but it's all terribly straightforward; nothing clever is involved.
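From my reading of src/lvm.c, that Arith fallback boils down to something like this (simplified pseudocode, not the actual source):

```
Arith(a, b, op):
    if a and b are numbers (after string-to-number coercion):
        return op(a, b)            -- fast path; the VM specializes this
    h = metamethod(a, event(op))   -- e.g. __add for OP_ADD
    if h is nil:
        h = metamethod(b, event(op))
    if h is nil:
        raise arithmetic error
    return h(a, b)
```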
For years I've been wanting a more formal specification of Lua's computational model, but my tastes run more toward formal semantics...
I've found The Implementation of Lua 5.1 very useful for understanding what Lua is actually doing.
It explains the hashing techniques, garbage collection and some other bits and pieces.
Another great paper is The Implementation of Lua 5.0, which describes the design and motivation of various key systems in the VM. I found reading it a great way to parse and understand what I was seeing in the C code.
I am surprised you refer to the C source for the VM, as this is protected by lua.org and Tecgraf/PUC-Rio in Brazil, especially as the language is used for real business and commercial applications in a number of countries. The paper The Implementation of Lua contains as much detail about the VM as it is permitted to include, but the structure of the VM is proprietary. It is worth noting that versions 5.0 and 5' were commissioned by IBM in Europe for use on customer mainframes, and their register-based version has a VM which accepts the IBM-defined format of intermediate instructions.

What is the best way to learn Erlang?

Other than specific projects (although those are welcome as well)...
What tools, books, articles, and other resources should I have at my desk to help me learn Erlang?
Also, are there mobile runtimes for Erlang?
Please point me in the right direction.
Note: yes, I have visited Erlang and Wikipedia, but I'd like to hear some reliable, experienced opinions.
I'm a month-or-so into learning and the guides I'm enjoying most are:
The Erlang Site's Getting Started with Erlang Guide
Joe Armstrong's book Programming Erlang: Software for a Concurrent World (thoroughly recommended)
And I have on order: O'Reilly's Erlang Programming which has had some really positive reviews and sounds like a good companion to Joe Armstrong's book (covering many of the same topics in greater depth, possibly with more "real world" examples)
I think you can dive into the Getting Started guide straight away and it will certainly give you a feel for functional programming and then concurrency.
If you're in London this June there is the Erlang Factory conference which looks really good.
While I remember, these are two good presentations taking you through Erlang and its uses:
Thinking in Erlang
Functions + Messages + Concurrency = Erlang
Finally, you can follow my learning experiences on my blog (joelhughes.co.uk/blog); my step-by-step adjustment of FizzBuzz from python/ruby/php to Erlang might give you a good flavour (sorry about the shameless self-promotion).
I have to say learning Erlang is currently one of my greatest pleasures, there is something very satisfying about it!
For beginners, the "Learn You Some Erlang" guide is supremely awesome. It is as yet incomplete, but provides a lot even with the little that is there.
It also has an RSS so you can be informed when (if?) it is updated.
I found the best thing to do to learn Erlang was reading Joe's thesis:
http://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf
and then writing something I enjoyed; for me it was an IAX2 server.
What I can recommend you is not to browse the Wings3d source code.
(I did it and it was a waste of time similar as when I tried to read the Quake2 sources :-p)
I have both Erlang Programming and Software for a Concurrent World; both are excellent. I might almost say Erlang Programming is better: it shows a lot more about using OTP (the Erlang libraries), but I was also a little more comfortable with the language when I was reading it, so that's what I was looking for.
The Getting Started with Erlang Guide is also pretty good.
Definitely you should give writing a simple server a try. That's one of the areas where Erlang really shines and there's plenty of documentation and tutorials around message passing and the gen_server module.
-- edit
Also, you can run Erlang on ARM based mobile devices (ARMv5+) for sure, you could ask on erlang-questions for other architectures. Check out http://wiki.trapexit.org/index.php/Cross_compiling for the basics of getting started with cross-compiling.