Let's say you have a Boolean rule/expression like so
(A OR B) AND (D OR E) AND F
You want to convert it into as many AND-only expressions as possible, like so
A AND D AND F
A AND E AND F
B AND D AND F
B AND E AND F
You are just expanding out the ORs so it becomes
(A AND D AND F) OR (A AND E AND F) OR (...)
Is there a property in Boolean algebra that would do this?
Take a look at DeMorgan's theorem. The link points to a document relating to electronic gates, but the theory remains the same.
It says that any logical binary expression remains unchanged if we
Change all variables to their complements.
Change all AND operations to ORs.
Change all OR operations to ANDs.
Take the complement of the entire expression.
(quoting from the above linked document)
Your example is exploiting the distributivity of AND over OR, as shown here.
All you need to do is apply that successively. For example, using x*(y+z) = (x*y)+(x*z) (where * denotes AND and + denotes OR):
0. (A + B) * (D + E) * F
1. Applying it to the first two brackets gives ((A+B)*D) + ((A+B)*E)
2. Applying it inside each bracket gives (A*D + B*D) + (A*E + B*E)
3. So now you have ((A*D + B*D) + (A*E + B*E)) * F
4. Applying the law again gives (A*D + B*D)*F + (A*E + B*E)*F
5. Applying it one more time gives A*D*F + B*D*F + A*E*F + B*E*F. QED
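If you want to carry this out programmatically, the repeated application of distributivity amounts to picking one literal from each OR-bracket in every possible combination. Here is a minimal Python sketch of that idea (the lists-of-lists representation is my own, not from the question):

from itertools import product

def to_dnf(clauses):
    # clauses is a product of sums: [['A', 'B'], ['D', 'E'], ['F']]
    # stands for (A+B)(D+E)F. Picking one literal per clause, in every
    # combination, yields the AND-terms of the disjunctive normal form.
    return [list(term) for term in product(*clauses)]

print(to_dnf([['A', 'B'], ['D', 'E'], ['F']]))
# [['A', 'D', 'F'], ['A', 'E', 'F'], ['B', 'D', 'F'], ['B', 'E', 'F']]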
You may be interested in reading about Karnaugh maps. They are a tool for simplifying Boolean expressions, but you could use them to determine all of the individual expressions as well. I'm not sure how you might generalize this into an algorithm you could write a program for, though.
You might be interested in Conjunctive Normal Form or its brother, Disjunctive Normal Form.
As far as I know, Boolean algebra cannot be built with only the AND and OR operations.
If you have only these two operations, you cannot express NOT.
You can, however, convert any expression to one over a functionally complete set of Boolean operations.
Here are some complete sets:
AND and NOT
OR and NOT
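For example, given AND and NOT you can recover OR via De Morgan's law: A OR B = NOT(NOT A AND NOT B).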
Assuming you can use the NOT operation, you can rewrite any Boolean expression with only ANDs or only ORs. In your case:
(A OR B) AND (D OR E) AND F
I tend to use engineering shorthand for the above and write:
AND as a product (. or nothing);
OR as a sum (+); and
NOT as a single quote (').
So:
(A+B)(D+E)F
The analogy to arithmetic is actually quite useful for factoring terms.
By De Morgan's Law:
(A+B) => (A'B')'
So you can rewrite your expression as:
(A+B)(D+E)F
(A'B')'(D'E')'F
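If in doubt, you can verify the De Morgan rewrite by brute force over all truth assignments. A quick Python check (my own sketch, not part of the original answer):

from itertools import product

def original(a, b, d, e, f):
    return (a or b) and (d or e) and f

def rewritten(a, b, d, e, f):
    # (A'B')'(D'E')'F
    return (not (not a and not b)) and (not (not d and not e)) and f

assert all(original(*v) == rewritten(*v)
           for v in product([False, True], repeat=5))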
We are trying to change the way Maxima translates multiplication when converting to TeX.
By default Maxima emits a thin space: \,
We changed this to our own LaTeX macro that looks like a space but preserves the semantic meaning, which makes it easier to convert the LaTeX back to Maxima.
:lisp (setf (get 'mtimes 'texsym) '("\\invisibletimes "));
However, we have one problem, and that is when we turn simplification off. We use this for generating steps in the explanation of a solution. For example:
tex1(block([simp: false], 2*3));
Of course, when multiplying numbers we want an explicit multiplication sign (\cdot).
So we would like it that if both arguments of the multiplication are numbers, that we then have a \cdot when translating to tex.
Is that possible?
Yes. If there is a function named by the TEX property, that function is called to process the expression. The function named by TEX takes 3 arguments: an expression whose operator is the one to which the TEX property is attached, stuff to the left, and stuff to the right. It returns a list of strings, which are the bits of TeX to be output.
You can say :lisp (trace tex-mtimes) to see how that works. You can see the functions attached to MTIMES or other operators by saying :lisp (symbol-plist 'mtimes) or in general :lisp (symbol-plist 'mfoo) for another MFOO operator.
So if you replace TEX-MTIMES (by :lisp (setf (get 'mtimes 'tex) 'my-tex-mtimes)) by some other function, then you can control the output to a greater extent. Here is an outline of a suitable function for your purpose:
(defun my-tex-mtimes (e l r)
  (if $simp
      (tex-nary e l r)                    ;; punt to default handler
      (tex-mtimes-special-case e l r)))
You can make TEX-MTIMES-SPECIAL-CASE as complicated as you want. I assume that you can carry out the Lisp programming for that. The simplest thing to try, perhaps a point of departure for further efforts, is to just temporarily replace TEXSYM with \cdot. Something like:
(defun tex-mtimes-special-case (e l r)
  ;; prog2 evaluates all of its forms but returns the value of the second,
  ;; so texsym is swapped to \cdot, the expression is rendered, and the
  ;; previous texsym is restored afterward.
  (let ((prev-texsym (get 'mtimes 'texsym)))
    (prog2 (setf (get 'mtimes 'texsym) (list "\\cdot "))
        (tex-nary e l r)
      (setf (get 'mtimes 'texsym) prev-texsym))))
On Wikipedia it says that using call/cc you can implement the amb operator for nondeterministic choice. My question is: how would you implement the amb operator in a language where the only support for continuations is writing in continuation-passing style, as in Erlang?
If you can encode the constraints for what constitutes a successful solution or choice as guards, list comprehensions can be used to generate solutions. For example, the list comprehension documentation shows an example of solving Pythagorean triples, which is a problem frequently solved using amb (see for example exercise 4.35 of SICP, 2nd edition). Here's the more efficient solution, pyth1/1, shown on the list comprehensions page:
pyth1(N) ->
    [ {A,B,C} ||
        A <- lists:seq(1,N-2),
        B <- lists:seq(A+1,N-1),
        C <- lists:seq(B+1,N),
        A+B+C =< N,
        A*A+B*B == C*C
    ].
One important aspect of amb is efficiently searching the solution space, which is done here by generating possible values for A, B, and C with lists:seq/2 and then constraining and testing those values with guards. Note that the page also shows a less efficient solution named pyth/1 where A, B, and C are all generated identically using lists:seq(1,N); that approach generates all permutations but is slower than pyth1/1 (for example, on my machine, pyth(50) is 5-6x slower than pyth1(50)).
If your constraints can't be expressed as guards, you can use pattern matching and try/catch to deal with failing solutions. For example, here's the same algorithm in pyth/1 rewritten as regular functions triples/1 and the recursive triples/5:
-module(pyth).
-export([triples/1]).

triples(N) ->
    triples(1,1,1,N,[]).

triples(N,N,N,N,Acc) ->
    lists:reverse(Acc);
triples(N,N,C,N,Acc) ->
    triples(1,1,C+1,N,Acc);
triples(N,B,C,N,Acc) ->
    triples(1,B+1,C,N,Acc);
triples(A,B,C,N,Acc) ->
    NewAcc = try
                 true = A+B+C =< N,
                 true = A*A+B*B == C*C,
                 [{A,B,C}|Acc]
             catch
                 error:{badmatch,false} ->
                     Acc
             end,
    triples(A+1,B,C,N,NewAcc).
We're using pattern matching for two purposes:
In the function heads, to control values of A, B and C with respect to N and to know when we're finished
In the body of the final clause of triples/5, to assert that conditions A+B+C =< N and A*A+B*B == C*C match true
If both conditions match true in the final clause of triples/5, we insert the solution into our accumulator list, but if either fails to match, we catch the badmatch error and keep the original accumulator value.
Calling triples/1 yields the same result as the list comprehension approach used in pyth/1, but it's also half the speed of pyth/1. Even so, with this approach any constraint could be encoded as a normal function and tested for success within the try/catch expression.
I'm seeing rather surprising behavior from ctx-solver-simplify, where the order of parameters to z3.And() seems to matter, using the latest version of Z3 from the master branch of https://git01.codeplex.com/z3 (89c1785b):
#!/usr/bin/python
import z3
a, b = z3.Bools('a b')
p = z3.Not(a)
q = z3.Or(a, b)
for e in z3.And(p, q), z3.And(q, p):
    print e, '->', z3.Tactic('ctx-solver-simplify')(e)
produces:
And(Not(a), Or(a, b)) -> [[Not(a), Or(a, b)]]
And(Or(a, b), Not(a)) -> [[b, Not(a)]]
Is this a bug in Z3?
No, this is not a bug. The tactic ctx-solver-simplify is very expensive and inherently asymmetric; the order in which the sub-formulas are visited affects the final result. This tactic is implemented in the file src/smt/tactic/ctx_solver_simplify_tactic.cpp. The code is quite readable. Note that it uses an SMT solver (m_solver) and makes several calls to m_solver.check() while traversing the input formula. Each one of these calls can be quite expensive. For particular problem domains, we can implement a version of this tactic that is even more expensive and avoids the asymmetry described in your question.
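As a sanity check (not from the original answer), you can ask Z3 itself to prove that the two syntactically different outputs are logically equivalent:

import z3
a, b = z3.Bools('a b')
# And(Not(a), Or(a, b)) vs. the simplified And(b, Not(a))
z3.prove(z3.And(z3.Not(a), z3.Or(a, b)) == z3.And(b, z3.Not(a)))
# prints "proved"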
EDIT:
You may also consider the tactic ctx-simplify; it is cheaper than ctx-solver-simplify, and it is symmetric. The tactic ctx-simplify essentially applies rules such as:
A \/ F[A] ==> A \/ F[false]
x != c \/ F[x] ==> F[c]
where F[A] is a formula that may contain A. It is cheaper than ctx-solver-simplify because it does not invoke an SMT solver while it is traversing the formula. Here is an example using this tactic (also available online):
from z3 import *

a, b = Bools('a b')
p = Not(a)
q = Or(a, b)
c = Bool('c')
t = Then('simplify', 'propagate-values', 'ctx-simplify')
for e in Or(c, And(p, q)), Or(c, And(q, p)):
    print e, '->', t(e)
Regarding human-readability, this was never a goal when implementing any tactic. Please tell us if the tactic above is not good enough for your purposes.
I think it will be better to write a custom tactic, because there are other tradeoffs when you essentially ask for canonicity. Z3 does not have any tactics that convert formulas into a canonical form, so if you want something that always produces the same answer for formulas that are ground equivalent, you will have to write a new normalizer that ensures this.
The ctx_solver_simplify_tactic furthermore makes some tradeoffs when simplifying formulas: it avoids simplifying the same sub-formula multiple times. If it did, it could increase the size of the resulting formula significantly (exponentially).
Is there a (simple) way, within a parsing expression grammar (PEG), to express an "unordered sequence"? A rule such as
Rule <- A B C
requires A, B and C to match in order. A rule such as
Rule <- (A B C) / (B C A) / (C A B) / (A C B) / (C B A) / (B A C)
allows them to match in any order (which is what we want), but it is cumbersome and impractical with more terms in the sequence.
Is the only solution to use a syntactically looser rule such as
Rule <- (A / B / C){3}
and semantically check that each rule matches only once?
The fact that, e.g., Relax NG Compact Syntax has an "unordered list" operator to parse XML makes me suspect that there is no obvious solution.
Last question: do you think the addition of such an operator would bring ambiguity to PEG?
Grammar rules express precisely the sequence of forms that you want, regardless of the parsing engine (e.g., PEG, LALR, LL(k), ...) you choose.
The only way to express, with BNF rules alone, that you want all possible orderings of a fixed set of items is the big ugly rule you proposed.
The standard solution is to simply define:
rule <- (A | B | C)*
(or whatever syntax your parser generator accepts for lists) and semantically check that exactly 3 forms are provided and that they are unique.
Often people building parser generators add special "extended BNF" notations to describe special circumstances; you gave an example using {3} as special syntax, implying that you only wanted "3 of", under the assumption that the parser generator accepts this notation and does the appropriate enforcement. One can imagine an extension notation {unique} to describe your situation; I've never seen a parser generator that implemented that idea, though the semantic check below is easy to write by hand.
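Here is a minimal Python sketch of that "loose rule plus semantic check" approach, assuming your parser hands you the list of rule names that (A / B / C)* actually matched (the helper name is hypothetical):

def check_unordered(matched, required):
    # matched, e.g. ['B', 'A', 'C'], is what the loose repetition consumed;
    # accept only if every required rule appears exactly once.
    return len(matched) == len(required) and set(matched) == set(required)

assert check_unordered(['B', 'A', 'C'], ['A', 'B', 'C'])
assert not check_unordered(['A', 'A', 'C'], ['A', 'B', 'C'])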
I want to test whether two languages have a string in common. Both of these languages are from a subset of regular languages described below and I only need to know whether there exists a string in both languages, not produce an example string.
The language is specified by a glob-like string like
/foo/**/bar/*.baz
where ** matches 0 or more characters, and * matches zero or more characters that are not /, and all other characters are literal.
Any ideas?
EDIT:
I implemented something that seems to perform well, but I have yet to attempt a correctness proof. You can see the source and unit tests.
Build FAs A and B for both languages, and construct the "intersection FA" AnB. If AnB has at least one accepting state accessible from the start state, then there is a word that is in both languages.
Constructing AnB could be tricky, but I'm sure there are FA textbooks that cover it. The approach I would take is:
The states of AnB are the Cartesian product of the states of A and B. A state in AnB is written (a, b), where a is a state in A and b is a state in B.
A transition (a, b) ->r (c, d) (meaning, there is a transition from (a, b) to (c, d) on symbol r) exists iff a ->r c is a transition in A, and b ->r d is a transition in B.
(a, b) is a start state in AnB iff a and b are start states in A and B respectively.
(a, b) is an accepting state in AnB iff each is an accepting state in its respective FA.
This is all off the top of my head, and hence completely unproven!
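Here is a rough Python sketch of that product construction, assuming epsilon-free NFAs in a simple dict representation (the representation and names are mine, not from any library):

from collections import deque

def intersection_nonempty(A, B):
    # A and B are dicts with keys 'start', 'accept' (a set), and
    # 'delta', mapping (state, symbol) -> set of successor states.
    # BFS over pairs of states; reaching a pair of accepting states
    # witnesses a word accepted by both automata.
    symbols = ({s for (_, s) in A['delta']} &
               {s for (_, s) in B['delta']})
    start = (A['start'], B['start'])
    seen, queue = {start}, deque([start])
    while queue:
        a, b = queue.popleft()
        if a in A['accept'] and b in B['accept']:
            return True
        for sym in symbols:
            for na in A['delta'].get((a, sym), ()):
                for nb in B['delta'].get((b, sym), ()):
                    if (na, nb) not in seen:
                        seen.add((na, nb))
                        queue.append((na, nb))
    return False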
I just did a quick search, and this problem is decidable (i.e., it can be done), but I don't know of any good algorithms for it. One solution is:
Convert both regular expressions to NFAs A and B
Create an NFA, C, that represents the intersection of A and B.
Now try every string of length 0 up to the number of states in C and see if C accepts it (if a longer string is accepted, it must repeat a state at some point, so a shorter accepted string exists as well).
I know this might be a little hard to follow, but this is the only way I know how.