References for Z3 - how does it work (internal theory)?

I am interested in reading the internal theory behind Z3. Specifically, I want to read how the Z3 SMT solver works, and how it is able to find counterexamples for an incorrect model. I would like to be able to manually work out a trace for some very simple example.
However, all Z3 references seem to be either about how to code with it, or very high-level descriptions of its algorithm. I am unable to find a description of the algorithms used. Is this information not made public by Microsoft?
Could anyone suggest references (papers/books) that give a comprehensive insight into Z3's theory and inner workings?

My personal opinion is that the best reference to start with is Kroening and Strichman's Decision Procedures book. (Make sure to get the 2nd edition, as it has good updates!) It covers almost all topics of interest to a reasonable depth, and has many references at the back for you to follow up on. The book also has a website, http://www.decision-procedures.org, with extra readings, slides, and project ideas.
Another book of interest in this field is Bradley and Manna's The Calculus of Computation. While this book isn't specific to SAT/SMT, it covers many of the same topics and shows how these ideas play out in the realm of program verification. Also see http://theory.stanford.edu/~arbrad/pivc/index.html for the associated software/tools.
Of course, neither of these books is specific to z3, so you won't find anything detailed about how z3 itself is constructed in them. For programming z3 and some of the theory behind it, the "tutorial" paper by Bjørner, de Moura, Nachmanson, and Wintersteiger is a great read.
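If it helps to see concretely what "finding a counterexample" looks like before you dive into the theory, here is a minimal sketch using Z3's Python API (z3py); the property being checked is just an illustrative assumption, not something taken from the references above:

    # Ask z3 whether the (false) claim "x + y > 0 implies x > 0" is valid by
    # checking satisfiability of its negation; any satisfying model z3 returns
    # is a counterexample to the claim.
    from z3 import Ints, Implies, Not, Solver, sat

    x, y = Ints('x y')
    claim = Implies(x + y > 0, x > 0)

    s = Solver()
    s.add(Not(claim))
    if s.check() == sat:
        print("counterexample:", s.model())   # e.g. [x = -1, y = 2]
    else:
        print("claim is valid")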
Once you go through these, I suggest reading individual papers by the developers, depending on where your interests are:
Bjørner: https://www.microsoft.com/en-us/research/people/nbjorner/publications/
de Moura: https://www.microsoft.com/en-us/research/people/leonardo/publications/
Wintersteiger: https://www.microsoft.com/en-us/research/people/cwinter/publications/
Nachmanson: https://www.microsoft.com/en-us/research/people/levnach/publications/
And there is of course a plethora of resources on the internet: papers, presentations, slide decks, etc. Feel free to ask specific questions directly in this forum; for questions that are truly specific to z3 internals, you can use its discussions forum.
Note: Regarding the differences between the editions of Kroening and Strichman's book, here's what the authors have to say:
The first edition of this book was adopted as a textbook in courses worldwide. It was published in 2008 and the field now called SMT was then in its infancy, without the standard terminology and canonic algorithms it has now; this second edition reflects these changes. It brings forward the DPLL(T) framework. It also expands the SAT chapter with modern SAT heuristics, and includes a new section about incremental satisfiability, and the related Constraint Satisfaction Problem (CSP). The chapter about quantifiers was expanded with a new section about general quantification using E-matching and a section about Effectively Propositional Reasoning (EPR). The book also includes a new chapter on the application of SMT in industrial software engineering and in computational biology, coauthored by Nikolaj Bjørner and Leonardo de Moura, and Hillel Kugler, respectively.

Does it make sense to interrogate structured data using NLP?

I know this question may not be suitable for SO, but please let it stay here for a while. The last time one of my questions was moved to Cross Validated, it froze: no more views or feedback.
I came across a question that does not make much sense to me: how can IFC models be interrogated via NLP? Consider IFC models as semantically rich structured data. IFC defines an EXPRESS-based entity-relationship model consisting of entities organized into an object-based inheritance hierarchy. Examples of entities include building elements, geometry, and basic constructs.
How could NLP be used for this kind of data? I don't see how NLP is relevant at all.
In general, I would suggest that using NLP techniques to "interrogate" already (quite formally) structured data like EXPRESS would be overkill at best and a time / maintenance sinkhole at worst. In general, the strengths of NLP (human language ambiguity resolution, coreference resolution, text summarization, textual entailment, etc.) are wholly unnecessary when you already have such an unambiguous encoding as this. If anything, you could imagine translating this schema directly into a Prolog application for direct logic queries, etc. (which is quite a different direction than NLP).
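To make that concrete, here is a minimal sketch of what a direct structured query looks like over a hypothetical, hand-written stand-in for a few IFC-style entities (a real model would come from an IFC toolkit rather than hard-coded dictionaries); no NLP is needed because the schema already tells you exactly where everything lives:

    # Hypothetical, simplified stand-in for IFC-style entities.
    entities = [
        {"type": "IfcWall",   "id": "W1", "height_mm": 2700, "storey": "Level 1"},
        {"type": "IfcWall",   "id": "W2", "height_mm": 3000, "storey": "Level 2"},
        {"type": "IfcWindow", "id": "N1", "height_mm": 1200, "storey": "Level 1"},
    ]

    # "Which walls on Level 1 are taller than 2500 mm?" -- a plain structured query.
    tall_walls = [e["id"] for e in entities
                  if e["type"] == "IfcWall"
                  and e["storey"] == "Level 1"
                  and e["height_mm"] > 2500]
    print(tall_walls)   # ['W1']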
I did some searches to try to find the references you may have been referring to. The only item I found was Extending Building Information Models Semiautomatically Using Semantic Natural Language Processing Techniques:
... the authors propose a new method for extending the IFC schema to incorporate CC-related information, in an objective and semiautomated manner. The method utilizes semantic natural language processing techniques and machine learning techniques to extract concepts from documents that are related to CC [compliance checking] (e.g., building codes) and match the extracted concepts to concepts in the IFC class hierarchy.
So in this example, at least, the authors are not "interrogating" the IFC schema with NLP, but rather using it to augment existing schemas with additional information extracted from human-readable text. This makes much more sense. If you want to post the actual URL or reference that contains the "NLP interrogation" phrase, I should be able to comment more specifically.
Edit:
The project grant abstract you referenced does not contain much in the way of details, but it does include this sentence:
... The information embedded in the parametric 3D model is intended for facility or workplace management using appropriate software. However, this information also has the potential, when combined with IoT sensors and cognitive computing, to be utilised by healthcare professionals in Ambient Assisted Living (AAL) environments. This project will examine how as-constructed BIM models of healthcare facilities can be interrogated via natural language processing to support AAL. ...
I can only speculate on the following reason for possibly using an NLP framework for this purpose:
While BIM models include Industry Foundation Classes (IFCs) and aecXML, there are many dozens of other formats, many of them proprietary. Some are CAD-integrated and others are standalone. Rather than pay for many proprietary licenses (some of these enterprise products are quite expensive), and/or spend the time to develop proper structured query behavior for the various diverse file format specifications (which may not be publicly available in proprietary cases), the authors have chosen a more automated, general solution to extract the content they are looking for (which I assume must be textual or textual tags in nearly all cases). This would almost be akin to a search engine "scraping" websites and looking for key words or phrases and synonyms to them, etc. The upside is they don't have to explicitly code against all the different possible BIM file formats to get good coverage, nor pay out large sums of money. The downside is they open up new issues and considerations that come with NLP, including training, validation, supervision, etc. And NLP will never have the same level of accuracy you could obtain from a true structured query against a known schema.

Spell checker that uses a language model

I am looking for a spell checker that can use a language model.
I know there are many good spell checkers, such as Hunspell; however, as far as I can see, it does not take context into account, so it is only a token-based spell checker.
For example:
I lick eating banana
Here, at the token level, there are no misspellings at all; every word is correct, but the sentence has no meaning. A "smart" spell checker would recognize that "lick" is a correctly written word, but that the author probably meant "like", which would make the sentence meaningful.
I have a bunch of correctly written sentences in a specific domain, and I want to train a "smart" spell checker on them to learn a language model, so that it would recognize that even though "lick" is written correctly, the author meant "like".
I don't see that Hunspell has such a feature. Can you suggest another spell checker that can do this?
See "The Design of a Proofreading Software Service" by Raphael Mudge. He describes both the data sources (Wikipedia, blogs etc) and the algorithm (basically comparing probabilities) of his approach. The source of this system, After the Deadline, is available, but it's not actively maintained anymore.
One way to do this is via a character-based language model (rather than a word-based n-gram model). See my answer to Figuring out where to add punctuation in bad user generated content?. The problem you're describing is different, but you can apply a similar solution. And, as I noted there, the LingPipe tutorial is a pretty straightforward way of developing a proof-of-concept implementation.
One important difference: to capture more context, you may want to train a larger n-gram model than the one I recommended for punctuation restoration, maybe 15-30 characters. You'll have to experiment a little there.
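To make the character-based idea concrete, here is a minimal sketch of an add-one-smoothed character n-gram model used to rank a candidate correction in context; the toy corpus, the candidate sentences, and the context length are assumptions for illustration only:

    # Train character n-gram counts on in-domain sentences, then score
    # candidate sentences; the contextually plausible candidate should win.
    from collections import Counter
    from math import log

    N = 5  # characters per n-gram; tune as suggested above

    def train(sentences, n=N):
        grams, contexts = Counter(), Counter()
        for s in sentences:
            s = "^" * (n - 1) + s.lower() + "$"
            for i in range(len(s) - n + 1):
                grams[s[i:i + n]] += 1
                contexts[s[i:i + n - 1]] += 1
        return grams, contexts

    def score(sentence, grams, contexts, n=N):
        s = "^" * (n - 1) + sentence.lower() + "$"
        logp = 0.0
        for i in range(len(s) - n + 1):
            num = grams[s[i:i + n]] + 1            # add-one smoothing
            den = contexts[s[i:i + n - 1]] + 30    # rough alphabet size
            logp += log(num / den)
        return logp

    corpus = ["i like eating bananas", "we like eating apples"]  # toy in-domain text
    grams, contexts = train(corpus)
    for cand in ["I lick eating banana", "I like eating banana"]:
        print(cand, score(cand, grams, contexts))
    # "I like eating banana" should get the higher (less negative) score.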

How can I select a FAQ entry from a user's natural-language inquiry?

I am working on an app where the user submits a series of questions. These questions are freeform text, but are based on a specific product, so I have a general understanding of the context. I have a FAQ listing, and I need to try to match the user's question to a question in the FAQ.
My language is Delphi. My general approach is to throw out small "garbage words" (a, an, the, is, of, by, etc.), run a stemming program over the remaining words to get their root forms, and then try to match as many of those root words as possible.
Is there a better approach? I have thought about some type of natural language processing, but I am afraid that I would be looking at years of development, rather than a week or two.
You don't need to invent a new way of doing this; it's all been done before. What you need is called a FAQ finder, introduced by Hammond et al. in 1995 ("FAQ Finder: A Case-Based Approach to Knowledge Navigation", 11th Conference on Artificial Intelligence for Applications).
AI Magazine published a paper by some of the same authors, evaluating their implementation: Burke et al., "Question Answering from Frequently Asked Question Files: Experiences with the FAQ FINDER System", 1997. It describes how the system works in two stages:
First, they use Smart, an information-retrieval system, to generate an initial set of candidate questions based on the user's input. It looks like it works similarly to what you described, stemming all the words and omitting anything on the stop list of short words.
Next, the candidates are scored against the user's query according to statistical similarity, semantic similarity, and coverage. (Read the paper for details.) Scoring semantic similarity relies on WordNet, which groups English words into sets of distinct concepts. The FAQ finder reviewed in the paper was designed to cover all Usenet FAQs; since your domain is smaller, it might be feasible for you to apply more domain knowledge than the basics that WordNet provides.
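For a rough feel of that first retrieval stage, here is a sketch of stop-word removal, stemming, and scoring FAQ entries by overlap of stems. You mentioned Delphi; Python is used here only to keep the illustration short, and the stop list, FAQ entries, and scoring function are assumptions rather than exactly what the paper does:

    # Score FAQ entries by how many stemmed, non-stop-word tokens they share
    # with the user's question, and return the best-scoring entry.
    from nltk.stem import PorterStemmer   # any stemmer would do

    STOP = {"a", "an", "the", "is", "of", "by", "to", "how", "do", "i", "my"}
    stem = PorterStemmer().stem

    def tokens(text):
        words = (w.strip("?.,!'").lower() for w in text.split())
        return {stem(w) for w in words if w and w not in STOP}

    faq = [
        "How do I reset my password?",
        "How do I change the billing address?",
    ]

    def best_match(user_question):
        q = tokens(user_question)
        # A real system would also weight rare terms (TF-IDF) and add the
        # semantic-similarity and coverage scores described in the paper.
        return max(faq, key=lambda entry: len(q & tokens(entry)))

    print(best_match("password reset isn't working"))   # -> the password entry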
Not sure if this solution is precisely what you're looking for, but if you're looking to parse natural language, you could use the Link-Grammar Parser.
Thankfully, I've translated this for use with Delphi (complete with a demo), which you can download (free and 100% open source) from this page on my blog.
In addition to your stemming approach, I suggest that you are going to need to look into one or more of the following:
Recognize important pairs or phrases (two or more words). For example, if your domain is a technical field, certain pairs of words should automatically be treated as a single term rather than as individual words, because the pair means something special (in programming, "linked list", "serial port", etc. carry more meaning as pairs than the individual words do).
A large list of synonyms ("turn" == "rotate", "open" == "access", etc.). A short sketch of both ideas follows below.
I would also be tempted to tear apart open-source "search engine" software, in whatever language it is written, and see what general techniques it uses.
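As a quick illustration of the pair/phrase and synonym suggestions above, here is a sketch with a hand-built phrase list and synonym table (both are assumptions; in practice you would mine frequent pairs and synonyms from your own FAQ corpus):

    # Replace words with canonical synonyms, then glue known word pairs
    # together so they are matched as a single term.
    PHRASES = {("serial", "port"), ("linked", "list")}
    SYNONYMS = {"turn": "rotate", "open": "access"}

    def normalize(words):
        words = [SYNONYMS.get(w, w) for w in words]
        out, i = [], 0
        while i < len(words):
            if i + 1 < len(words) and (words[i], words[i + 1]) in PHRASES:
                out.append(words[i] + "_" + words[i + 1])   # keep the pair together
                i += 2
            else:
                out.append(words[i])
                i += 1
        return out

    print(normalize("open the serial port".split()))
    # ['access', 'the', 'serial_port']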

Heuristic Approaches to Finding Main Content

I'm wondering if anybody could point me toward academic papers or related implementations of heuristic approaches to finding the real "meat" content of a particular web page.
Obviously this is not a trivial task, since the problem description is so vague, but I think that we all have a general understanding about what is meant by the primary content of a page.
For example, it may include the story text for a news article, but might not include any navigational elements, legal disclaimers, related story teasers, comments, etc. Article titles, dates, author names, and other metadata fall in the grey category.
I imagine that the application value of such an approach is large, and would expect Google to be using it in some way in their search algorithm, so it would appear to me that this subject has been treated by academics in the past.
Any references?
One way to look at this would be as an information extraction problem.
As such, one high-level algorithm would be to collect multiple examples of the same page type and deduce parsing (or extraction) rules for the parts of the page which are different (these are likely to be the main content). The intuition is that common boilerplate (header, footer, etc.) and ads will eventually appear on multiple examples of those web pages, so by training on a few of them, you can quickly start to reliably identify this boilerplate/additional code and subsequently ignore it. It's not foolproof, but this is also the basis of web-scraping technologies, both commercial and academic, such as RoadRunner (a toy sketch of the idea appears after the citations below):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.21.8672&rep=rep1&type=pdf
The citation is:
Valter Crescenzi, Giansalvatore Mecca, Paolo Merialdo: RoadRunner: Towards Automatic Data Extraction from Large Web Sites. VLDB 2001: 109-118
There's also a well-cited survey of extraction technologies:
Alberto H. F. Laender, Berthier A. Ribeiro-Neto, Altigran S. da Silva, Juliana S. Teixeira: A Brief Survey of Web Data Extraction Tools. ACM SIGMOD Record, v.31 n.2, June 2002 [doi:10.1145/565117.565137]
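As a very rough illustration of the intuition described above (blocks that repeat across pages of the same template are boilerplate; what remains is likely the main content), here is a sketch; the toy pages are assumptions, and real systems such as RoadRunner are far more sophisticated:

    # Count how often each text block appears across the sample pages; blocks
    # that show up on a majority of pages are treated as boilerplate and dropped.
    from collections import Counter

    def blocks(page_text):
        return [b.strip() for b in page_text.split("\n") if b.strip()]

    pages = [
        "Acme News\nStory A body text...\nCopyright Acme\nAbout | Contact",
        "Acme News\nStory B body text...\nCopyright Acme\nAbout | Contact",
        "Acme News\nStory C body text...\nCopyright Acme\nAbout | Contact",
    ]

    freq = Counter(b for p in pages for b in set(blocks(p)))
    threshold = len(pages) // 2 + 1     # appears on a majority of the pages

    def main_content(page_text):
        return [b for b in blocks(page_text) if freq[b] < threshold]

    print(main_content(pages[0]))       # ['Story A body text...']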

How to evaluate a device in relation to theories such as "senses (visual, auditory, haptic) and cognition (short-term and long-term memory)"?

How can I evaluate a computerized device or a software application in the HCI field in relation to theories such as "senses (visual, auditory, haptic) and cognition (short-term and long-term memory)", based on the context in which the device is used? Any help or advice is appreciated.
My guess would be that the senses part would be covered by:
how pleasing the device/software is;
how realistic the application feels;
and, following from that:
immersion.
Virtual reality is a big thing in the HCI world. Firefighters, pilots, the army, etc. use virtual worlds for more and more of their training; it is important for them to actually feel like they are there, so that they react more naturally.
What I can think of for short-term and long-term cognition:
menu sizes.
categorization.
how many clicks it takes to do X.
These all help a user remember how to achieve task X and where it is located in the software. (I guess that's all long-term...)
I hope this inspires you a bit. Go to http://scholar.google.com/ and find some papers on these subjects; even if you can't find a paper that discusses the evaluation techniques themselves, at the very least these papers will explain how they evaluate whatever they are testing.
Hint: if you are studying at a university, it has usually already paid for full access to the papers. Access Google Scholar from a computer at the university, or use a VPN to connect through your university. Direct links to the papers are located to the right of the search results. As a bonus, you can configure Scholar to add a link with the BibTeX information!
The first result I got was a chapter of a book on user interfaces, which is about testing the user interface. Happy hunting!
There is a set of heuristics that, at least to my knowledge, has collectively become an industry-standard method for evaluating interfaces. They were developed by Jakob Nielsen and can be found here:
http://www.useit.com/papers/heuristic/heuristic_list.html
Usually, when someone says they are performing a "Heuristic Evaluation" or an "Expert Evaluation", it means they are basing it on these 10 criteria. It's possible your professor is looking for something along these lines. I had a very similar experience in two courses I took recently, where I had to write papers evaluating several interfaces against Nielsen's heuristics.
A couple other useful links:
http://www.westendweb.com/usability/02heuristics.htm
http://www.morebusiness.com/getting_started/website/d913059671.brc
http://www.stcsig.org/usability/topics/articles/he-checklist.html
Hope this helps, good luck!
