I am new to SMT solvers. I would like to know how I could encode a simple TSP problem with 4 or 6 nodes. I am confused about how to set up my constraints using the Z3Py API. Any kind of hint or help would be highly appreciated.
You can phrase your TSP problem as an ILP problem. The question is now how to encode the TSP as an ILP. There are two well known answers: Miller–Tucker–Zemlin and Dantzig–Fulkerson–Johnson.
The basic idea is as follows: Say we have n cities. Let us denote by d_{ij} the distance between cities i and j, and by x_{ij} the boolean variable (0 or 1) indicating whether the tour contains the edge from i to j. Then finding the shortest tour means
minimize sum_{i,j} x_{ij} d_{ij}
such that the x_{ij} describe a cycle. The following two conditions give us one or more cycles:
sum_{j} x_{ij} = 1 for all i (exactly one outgoing edge per city)
sum_{i} x_{ij} = 1 for all j (exactly one incoming edge per city)
Now we have to exclude the case that the solution comprises multiple cycles. One option is to add the exponentially many Dantzig–Fulkerson–Johnson conditions:
sum_{i in S} sum_{j in S} x_{ij} < |S| for all nonempty proper subsets S of {1, ..., n}
Note that if our solution contains two cycles, then taking S to be the vertex set of one of them makes the x_{ij}-sum equal to |S|, violating the condition. On the other hand, if there is only one cycle, the x_{ij}-sum never reaches |S|: e.g., if you remove one vertex from {1, ..., n}, the number of remaining edges is n-2, but |S| = n-1.
Of course, an exponential number of constraints is not what we want, so we look for a cleverer way to exclude the subcycle cases. And this is where Miller–Tucker–Zemlin comes in.
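For reference, MTZ introduces an auxiliary variable u_i per city (except city 1) and the O(n^2) conditions

u_i - u_j + n * x_{ij} <= n - 1 for all i != j with i, j in {2, ..., n}

which are satisfiable exactly when no cycle avoids city 1, replacing the exponentially many subset conditions. Here is a minimal Z3Py sketch of the whole encoding; the 4-city distance matrix and the cost bound are made up for illustration, and since Z3 has no built-in optimization you would tighten the bound and re-solve to approach the optimum:

from z3 import *

n = 4
dist = [[0, 2, 9, 10],      # made-up distances
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]

x = [[Int('x_%d_%d' % (i, j)) for j in range(n)] for i in range(n)]
u = [Int('u_%d' % i) for i in range(n)]

s = Solver()
for i in range(n):
    s.add(x[i][i] == 0)                               # no self-loops
    for j in range(n):
        s.add(x[i][j] >= 0, x[i][j] <= 1)             # x_ij is 0/1
    s.add(Sum(x[i]) == 1)                             # one outgoing edge
    s.add(Sum([x[j][i] for j in range(n)]) == 1)      # one incoming edge

# Miller-Tucker-Zemlin subtour elimination (0-based: all cities except 0)
for i in range(1, n):
    for j in range(1, n):
        if i != j:
            s.add(u[i] - u[j] + n * x[i][j] <= n - 1)

# Bound the tour cost; tighten the bound and re-solve to optimize.
cost = Sum([x[i][j] * dist[i][j] for i in range(n) for j in range(n)])
s.add(cost <= 25)                                     # made-up bound
if s.check() == sat:
    print(s.model())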
A different approach would be to simply ignore the subcycle problem, compute a solution and check whether the solution comprises subcycles. If it does, exclude the solution by adding it as a lazy constraint and repeat until you get a single-cycle solution. The keyword here is lazy constraint.
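A sketch of that lazy-constraint loop, reusing x and s from the sketch above but with the MTZ block left out (find_subcycle is a hypothetical helper, not a Z3 API):

def find_subcycle(m, x, n):
    # Follow the successor edges in the model, starting at city 0,
    # and return the cycle that is closed.
    cycle, i = [0], 0
    while True:
        j = next(k for k in range(n) if m.eval(x[i][k]).as_long() == 1)
        if j == 0:
            return cycle
        cycle.append(j)
        i = j

while s.check() == sat:
    cycle = find_subcycle(s.model(), x, n)
    if len(cycle) == n:              # one cycle through all cities: done
        print(cycle)
        break
    # Exclude this subtour: its vertex set cannot keep all of its edges.
    s.add(Sum([x[i][j] for i in cycle for j in cycle]) <= len(cycle) - 1)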
There is a nice sample that may be useful for you:
http://z3.codeplex.com/SourceControl/changeset/view/1235b3ea24d9#examples/python/hamiltonian/hamiltonian.py
Related
This question is related to my previous question:
Is it possible to get a legit range info when using a SMT constraint with Z3
So it seems that "efficiently" finding the maximum range info is not feasible, given typical 32-bit vectors and so on. On the other hand, I am wondering whether it is feasible to find a certain "sub-maximum" range, which hopefully can be computed more efficiently. We may also want a certain "safety" guarantee: every element in the sub-maximum range must satisfy the constraint, while there may exist other solutions outside the range that satisfy the constraint as well.
I am currently exploring whether model counting techniques could make sense in this setting. Any thoughts would be appreciated very much. Thanks.
General case
This is not just a question of efficiency. Consider a problem where you have two variables a and b, and a single constraint:
a != b
What's the range of b? (maximum or otherwise?)
You can say all values are legitimate. But that would be wrong, as obviously the choice of a impacts the choice of b. The more variables you have around, the more complicated the problem will become. I don't think the problem is even well defined in this case, so searching for a solution (efficient or otherwise) doesn't make much sense.
Single variable assumption
Having said that, I think you can come up with a solution if you assume there's precisely one variable in the system. (Or, alternatively, if you fix all the other variables to some predefined constants.) If you're willing to go down this path, then you can implement a binary search algorithm to find a reasonably sized range by repeatedly checking satisfiability of the quantified formula
Exists([b], And(b >= minBound, b <= maxBound, Not(constraints)))
Once you get unsat for this, you have your range. As long as you get sat, you can adjust minBound/maxBound to search within smaller ranges. In the worst case, this can turn into a linear walk, but you can cut down the search by making sure you shrink the range by a significant amount in each step. That step size could be a parameter to the whole search, depending on how large you want your intervals to be. It is a trade-off between finding a maximal range and how long you want to spend in the search. Of course, if you cut down too much, you can miss a big interval, but that's the cost of efficiency.
Example 1 (good case): There's a single constraint that says b != 5. Then your search will be quick, and depending on which branch you go down, you'll find either [0, 4] or [6, 255], assuming 8-bit words.
Example 2 (bad case): There's a single constraint that says b is even. Then your search will exhibit worst-case behavior, and if your cut-down size is 1, you may iterate 255 times before you settle on [0, 0], assuming z3 gives you the maximum odd number in each call.
I hope that illustrates the point. In general, though, I'd assume you'd be closer to the "good case" for practical applications and even if your cut-down size is minimal you can most likely converge in a few iterations. Of course, this entirely depends on your problem domain, but I'd expect it to hold for software analysis in general.
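To make the single-variable search concrete, here is a minimal Z3Py sketch that binary-searches the largest safe range anchored at 0 (the 8-bit width and the function name are illustrative choices; since b is the only free variable, a plain satisfiability check of the negated constraint plays the role of the existential query above):

from z3 import *

def largest_safe_prefix(constraint, bits=8):
    # Find the largest hi such that every value in [0, hi] satisfies
    # constraint(b); returns None if even [0, 0] contains a violation.
    b = BitVec('b', bits)
    lo, hi, best = 0, 2**bits - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        s = Solver()
        s.add(ULE(b, mid), Not(constraint(b)))  # counterexample in [0, mid]?
        if s.check() == unsat:                  # no: [0, mid] is all safe
            best, lo = mid, mid + 1
        else:                                   # yes: shrink the range
            hi = mid - 1
    return (0, best) if best is not None else None

print(largest_safe_prefix(lambda b: b != 5))    # -> (0, 4)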
I'm pretty new to Z3, but I think my problem could be solved with it.
I have two variables A and B and two patterns like this:
pattern_1: 1010x11x
pattern_2: x0x01111
where 1 and 0 are the bits one and zero, and x (don't care) can be either 0 or 1.
I would like to use Z3Py to check whether A matching pattern_1 and B matching pattern_2 can be true at the same time.
In this case, if A = 10101111 and B = 10101111, then A and B can be true at the same time.
Can anyone help me with this? Is it possible to resolve this with Z3Py?
Revised answer after clarification
Here's one way you could represent those constraints. There is an operation called Extract that can be applied to bit-vector terms. It is defined as follows:
def Extract(high, low, a):
    """Create a Z3 bit-vector extraction expression."""
where high is the high bit to be extracted, low is the low bit to be extracted, and a is the bitvector. This function represents the bits of a between high and low, inclusive.
Using the Extract function you can constrain each bit of whatever term you want to check so that it matches the pattern. For example, if the seventh bit of D must be a 1, then you can write s.add(Extract(7, 7, D) == 1). Repeat this for each bit in a pattern that isn't an x.
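Putting it together, here is a small Z3Py sketch of the whole check (the match helper is an illustrative name; the leftmost character of a pattern is taken to be the most significant bit, and the A == B constraint encodes the "at the same time" reading suggested by the example):

from z3 import *

def match(bv, pattern):
    # Constrain each non-x bit of the vector to its pattern bit.
    n = len(pattern)
    conds = []
    for i, c in enumerate(pattern):
        if c != 'x':
            bit = n - 1 - i                     # bit position in the vector
            conds.append(Extract(bit, bit, bv) == int(c))
    return And(conds)

A = BitVec('A', 8)
B = BitVec('B', 8)
s = Solver()
s.add(match(A, '1010x11x'), match(B, 'x0x01111'), A == B)
if s.check() == sat:
    m = s.model()
    print('{:08b}'.format(m[A].as_long()))      # prints 10101111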
I'm currently trying to implement the Viterbi algorithm in Python, more specifically the version presented in an online course.
As it stands, the algorithm is presented as follows:
given a sentence with K tokens, we have to generate K tags.
We assume that tag_{-1} = tag_{-2} = '*'; then, for k going from 1 to K,
we set the tag for token k as follows:
tag(WORD_k) = argmax(p(k-1, tag_k-2, tag_k-1) * e( word_k, tag_k) * q(tag_k, tag_k-1, tag_k-1))
From my understanding this is straightforward, because the p parameters are already calculated at each step (we go forward from 1, and we already know p0), and the max over the e and q parameters can be calculated in one iteration through the tags (since we can't come up with 2 different tags, we basically have to find the tag T for which the q * e product is maximal, and return that). This saves a lot of time, since we are almost at linear time in big-O terms, instead of the exponential complexity we would get if we iterated through all possible word/tag combinations.
Am I getting the core of the algorithm right, or am I missing something?
Thanks in advance
since we can't come up with 2 different tags, we basically have to
find the tag T for which the q * e product is maximal, and return that
Yeah, sounds about right. q is the trigram (transition) probability and e is the emission probability. As you said, e is unchanged between the different paths at each stage, so the max depends only on the other two factors.
Each tag sequence should start with two asterisks at positions -2 and -1, so the first assumption is correct.

Let pi(k, u, v) be the maximum probability of any tag sequence whose tags at positions k-1 and k are u and v. Based on what we just said about the beginning asterisks, the base case is

pi(0, '*', '*') = 1

You had two errors in the general case though. The emission probability is a conditional, and in the trigram q the tag tag_{k-1} is repeated twice, so the formula given is incorrect. The correct recurrence is

pi(k, u, v) = max over w of [ pi(k-1, w, u) * q(v | w, u) * e(word_k | v) ]
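A short Python sketch of the corrected recurrence (q, e, and the tag set are assumed to be given; the final STOP transition and the backtrace along the back-pointers are omitted for brevity):

def viterbi(words, tags, q, e):
    # pi[(k, u, v)] = max probability of any tag sequence whose tags at
    # positions k-1 and k are u and v.
    # q(v, w, u) = p(v | w, u), e(x, v) = p(x | v).
    K = len(words)
    S = lambda k: ['*'] if k <= 0 else tags      # allowed tags at position k
    pi, bp = {(0, '*', '*'): 1.0}, {}
    for k in range(1, K + 1):
        for u in S(k - 1):
            for v in S(k):
                # e does not depend on the previous tag w, so the max
                # ranges over pi * q only -- one pass through the tag set
                w = max(S(k - 2), key=lambda w: pi[(k - 1, w, u)] * q(v, w, u))
                pi[(k, u, v)] = pi[(k - 1, w, u)] * q(v, w, u) * e(words[k - 1], v)
                bp[(k, u, v)] = w
    return pi, bp    # follow bp back from the best (u, v) at k = K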
I'm using the Levenshtein distance algorithm to filter through some text in order to determine the best matching result for the purpose of text field auto-completion (and top 5 best results).
Currently, I have an array of strings, and apply the algorithm to each one in an attempt to determine how close of a match it is to the text which was typed by the user. The problem is that I'm not too sure how to interpret the values outputted by the algorithm to effectively rank the results as expected.
For example: (Text typed = "nvmb")
Result: "game" ; levenshtein distance = 3 (best match)
Result: "number the stars" ; levenshtein distance = 13 (second best match)
This technically makes sense; the second result needs many more 'edits' because of its length. The problem is that the second result is logically and visually a much closer match than the first one. It's almost as if I should ignore any characters beyond the length of the typed text.
Any ideas on how I could achieve this?
Levenshtein distance itself is good for correcting a query, not for auto-completion.
I can propose an alternative solution:
First, store your strings in a prefix tree (trie) instead of an array, so you have no need to analyze all of them.
Second, given the user input, enumerate the stored prefixes within a fixed distance of it and suggest completions for each.
Your example: text typed = "nvmb"
For distance 0 there are no completions.
Enumerate prefixes at distance 1: only "numb" will have some completions.
Another example: text typed = "gamb"
For distance 0 you have exactly one completion, "gambling"; make it the first suggestion, and continue to collect 4 more.
For distance 1 you will get "game" and some completions for it.
Of course, this approach sometimes gives more than 5 results, but you can order them by another criterion that does not depend on the current query.
I think it is more efficient because you can typically limit the distance to at most two, i.e., check on the order of 1000*n prefixes, where n is the length of the input, which is usually less than the number of stored strings.
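A minimal Python sketch of that idea: a trie plus a depth-first walk that carries one Levenshtein DP row per node; once a stored prefix is within the distance budget, everything below it is suggested as a completion (names are illustrative):

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def completions(node, prefix, out):
    # Collect every stored word in this subtree.
    if node.is_word:
        out.append(prefix)
    for ch, child in node.children.items():
        completions(child, prefix + ch, out)

def suggest(root, query, max_dist):
    # Suggest stored words having a prefix within max_dist edits of query.
    out = []
    def walk(node, prefix, row):
        if row[-1] <= max_dist:             # prefix close enough to query:
            completions(node, prefix, out)  # everything below completes it
            return
        if min(row) > max_dist:             # no extension can recover: prune
            return
        for ch, child in node.children.items():
            # standard Levenshtein row update for one more trie character
            new = [row[0] + 1]
            for i, qc in enumerate(query, 1):
                new.append(min(new[i - 1] + 1,            # insertion
                               row[i] + 1,                # deletion
                               row[i - 1] + (qc != ch)))  # substitution
            walk(child, prefix + ch, new)
    walk(root, '', list(range(len(query) + 1)))
    return out

root = TrieNode()
for w in ['number the stars', 'game', 'gambling']:
    insert(root, w)
print(suggest(root, 'nvmb', 1))   # -> ['number the stars']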
The Levenshtein distance corresponds to the number of single-character insertions, deletions and substitutions in an optimal global pairwise alignment of two sequences if the gap and mismatch costs are all 1.
The Needleman-Wunsch DP algorithm will find such an alignment, in addition to its score (it's essentially the same DP algorithm as the one used to calculate the Levenshtein distance, but with the option to weight gaps, and mismatches between any given pair of characters, arbitrarily).

But there are more general models of alignment that allow reduced penalties for gaps at the start or the end (and reduced penalties for contiguous blocks of gaps, which may also be useful here, although it doesn't directly answer the question). At one extreme, you have local alignment, which is where you pay no penalty at all for gaps at the ends -- this is computed by the Smith-Waterman DP algorithm.

I think what you want here is in-between: you want to penalise gaps at the start of both the query and test strings, and gaps in the test string at the end, but not gaps in the query string at the end. That way, trailing mismatches cost nothing, and the costs will look like:
Query: nvmb
Costs: 0100000000000000 = 1 in total
Against: number the stars
Query: nvmb
Costs: 1101 = 3 in total
Against: game
Query: number the stars
Costs: 0100111111111111 = 13 in total
Against: nvmb
Query: ber star
Costs: 1110001111100000 = 8 in total
Against: number the stars
Query: some numbor
Costs: 111110000100000000000 = 6 in total
Against: number the stars
(In fact you might want to give trailing mismatches a small nonzero penalty, so that an exact match is always preferred to a prefix-only match.)
The Algorithm
Suppose the query string A has length n, and the string B that you are testing against has length m. Let d[i][j] be the DP table value at (i, j) -- that is, the cost of an optimal alignment of the length-i prefix of A with the length-j prefix of B. If you go with a zero penalty for trailing mismatches, you only need to modify the NW algorithm in a very simple way: instead of calculating and returning the DP table value d[n][m], you just need to calculate the table as before, and find the minimum of any d[n][j], for 0 <= j <= m. This corresponds to the best match of the query string against any prefix of the test string.
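A small Python sketch of that modification; the costs are 1 for every gap and mismatch, exactly as in plain Levenshtein, and only the last line differs from the standard algorithm:

def prefix_edit_distance(query, test):
    n, m = len(query), len(test)
    # d[i][j] = cost of aligning query[:i] with test[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,     # gap in test
                          d[i][j - 1] + 1,     # gap in query
                          d[i - 1][j - 1] + (query[i - 1] != test[j - 1]))
    # Instead of returning d[n][m], take the best alignment of the whole
    # query against any prefix of the test string.
    return min(d[n])

print(prefix_edit_distance('nvmb', 'number the stars'))  # -> 1
print(prefix_edit_distance('nvmb', 'game'))              # -> 3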
I saw in a previous post from last August that Z3 did not support optimizations.
However it also stated that the developers are planning to add such support.
I could not find anything in the source to suggest this has happened.
Can anyone tell me if my assumption that there is no support is correct or was it added but I somehow missed it?
Thanks,
Omer
If your optimization has an integer valued objective function, one approach that works reasonably well is to run a binary search for the optimal value. Suppose you're solving the set of constraints C(x,y,z), maximizing the objective function f(x,y,z).
Find an arbitrary solution (x0, y0, z0) to C(x,y,z).
Compute f0 = f(x0, y0, z0). This will be your first lower bound.
As long as you don't know any upper bound on the objective value, try to solve the constraints C(x,y,z) ∧ f(x,y,z) > 2 * L, where L is your best lower bound (initially f0, then whatever better value you found; this doubling assumes L is positive).
Once you have both an upper and a lower bound, apply binary search: solve C(x,y,z) ∧ 2 * f(x,y,z) > (U + L). If the formula is satisfiable, you can compute a new lower bound from the model. If it is unsatisfiable, (U + L) / 2 is a new upper bound.
Step 3 will not terminate if your problem does not admit a maximum, so you may want to bound the search if you are not sure one exists.
You should of course use push and pop to solve the succession of problems incrementally. You'll additionally need the ability to extract models for intermediate steps and to evaluate f on them.
We have used this approach in our work on Kaplan with reasonable success.
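For reference, here is a minimal Z3Py sketch of this scheme (the function name is illustrative, not a Z3 API; step 3 gallops by doubling an increment rather than the bound itself, so that non-positive lower bounds are handled too):

from z3 import *

def maximize(solver, f):
    # Binary-search the maximum of the integer term f subject to the
    # constraints already in the solver.  Assumes a maximum exists.
    if solver.check() != sat:
        return None
    lo = solver.model().eval(f, model_completion=True).as_long()
    step = 1
    while True:                            # find an upper bound (step 3)
        solver.push()
        solver.add(f > lo + step)
        if solver.check() == sat:
            lo = solver.model().eval(f, model_completion=True).as_long()
            solver.pop()
            step *= 2
        else:
            solver.pop()
            hi = lo + step                 # now lo <= max f <= hi
            break
    while lo < hi:                         # binary search (step 4)
        mid = (lo + hi) // 2
        solver.push()
        solver.add(f > mid)
        if solver.check() == sat:
            lo = solver.model().eval(f, model_completion=True).as_long()
        else:
            hi = mid
        solver.pop()
    return lo

x, y = Ints('x y')
s = Solver()
s.add(x < 10, y < 7)
print(maximize(s, x + y))   # -> 15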
Z3 currently does not support optimization. This is on the TODO list, but it has not been implemented yet. The following slide decks describe the approach that will be used in Z3:
Exact nonlinear optimization on demand
Computation in Real Closed Infinitesimal and Transcendental Extensions of the Rationals
The library for computing with infinitesimals has already been implemented, and is available in the unstable (work-in-progress) branch, and online at rise4fun.