Testing intersection of two regular languages - parsing

I want to test whether two languages have a string in common. Both languages come from the subset of regular languages described below, and I only need to know whether a string exists in both languages, not produce an example string.
The language is specified by a glob-like string like
/foo/**/bar/*.baz
where ** matches zero or more characters, * matches zero or more characters that are not /, and all other characters are literal.
Any ideas?
thanks,
mike
EDIT:
I implemented something that seems to perform well, but I have yet to attempt a correctness proof. You can see the source and unit tests.
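A minimal sketch of the kind of glob-to-regex translation involved, assuming exactly the rules stated above (illustrative only, not the actual posted source):

import re

def glob_to_regex(glob):
    # Translate the glob syntax described above into a regex:
    #   **  ->  .*       (zero or more of any character)
    #   *   ->  [^/]*    (zero or more characters that are not /)
    #   everything else is matched literally.
    out = []
    i = 0
    while i < len(glob):
        if glob.startswith("**", i):
            out.append(".*")
            i += 2
        elif glob[i] == "*":
            out.append("[^/]*")
            i += 1
        else:
            out.append(re.escape(glob[i]))
            i += 1
    return "".join(out)

# glob_to_regex("/foo/**/bar/*.baz")  ->  "/foo/.*/bar/[^/]*\.baz"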

Build FAs A and B for both languages, and construct the "intersection FA" AnB. If AnB has at least one accepting state accessible from the start state, then there is a word that is in both languages.
Constructing AnB could be tricky, but I'm sure there are FA textbooks that cover it. The approach I would take is:
The state set of AnB is the Cartesian product of the state sets of A and B. A state in AnB is written (a, b), where a is a state in A and b is a state in B.
A transition (a, b) ->r (c, d) (meaning, there is a transition from (a, b) to (c, d) on symbol r) exists iff a ->r c is a transition in A, and b ->r d is a transition in B.
(a, b) is a start state in AnB iff a and b are start states in A and B respectively.
(a, b) is an accepting state in AnB iff each is an accepting state in its respective FA.
This is all off the top of my head, and hence completely unproven!
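For what it's worth, here is a rough Python sketch of that product construction for DFAs, built lazily and combined with the reachability check (the (start, accepting, delta) encoding is my own choice, not any particular library's):

from collections import deque

def intersection_nonempty(A, B):
    # Each DFA is (start, accepting_states, delta), with delta mapping
    # (state, symbol) -> state.  Explore the product AnB breadth-first
    # and report whether an accepting pair (a, b) is reachable.
    (sa, Fa, da), (sb, Fb, db) = A, B
    alphabet = {x for (_, x) in da} & {x for (_, x) in db}
    seen = {(sa, sb)}
    queue = deque(seen)
    while queue:
        a, b = queue.popleft()
        if a in Fa and b in Fb:
            return True                      # a common word exists
        for x in alphabet:
            if (a, x) in da and (b, x) in db:
                nxt = (da[(a, x)], db[(b, x)])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False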

I just did a quick search and this problem is decidable (i.e., it can be done), but I don't know of any good algorithms for it. One solution is:
Convert both regular expressions to NFAs A and B
Create an NFA, C, that represents the intersection of A and B.
Now try every string of length 0 up to the number of states in C and see if C accepts it (any longer accepted string must repeat a state at some point, so it could be shortened to one within that bound).
I know this might be a little hard to follow, but it is the only way I know.
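To make that bound concrete, here is a naive sketch of the enumeration (exponential in the number of states, so only viable for tiny automata; accepts is assumed to be whatever NFA-acceptance test you already have for C):

from itertools import product

def has_common_word(accepts, alphabet, num_states):
    # Pigeonhole: if C accepts anything at all, it accepts some word
    # no longer than its number of states, so trying all short words
    # is a complete (if slow) emptiness test.
    for length in range(num_states + 1):
        for letters in product(alphabet, repeat=length):
            if accepts("".join(letters)):
                return True
    return False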

Rules to convert a CFG to a NDPA?

I have to define an FA by using this grammar:
S -> aSb
S -> c
S -> dA
A -> Sd
How do I handle the first rule and the last one?
For the second one I think I have to create another state (the final one) and link S to this new state. For the third one I think I have to create the state A and link it to S on reading d.
There are algorithms you can use to get a PDA from a CFG: look into top-down and bottom-up parsers, for instance. What I think of as the usual proof that PDAs accept languages generated by CFGs, and vice versa, uses such a construction.
An alternative is to understand the language generated by the grammar, and to design a PDA for it directly. This is less mechanical but has the potential to yield a more concise PDA. If you want to go this route, we can first simplify the grammar by recognizing that the nonterminal A can safely be replaced by the RHS of the only production for it:
S -> aSb
S -> c
S -> dSd // removed A -> Sd and replaced here
How does this grammar work?
You have c in the middle by the 2nd production;
You have matching ds on the left and right of the c;
You have as on the left matching bs on the right of c.
A PDA should work as follows:
Read as and ds until you see a c. Push everything on the stack as you go. When you see a c, go to the next state, but don't push the c.
Read bs and ds, popping as and ds from the stack, until:
The topmost stack symbol doesn't match input; crash.
You run out of input with symbols still on the stack; crash.
You run out of stack symbols with input remaining; crash.
You run out of stack and input simultaneously; accept.
Here's a transition table:
q    s        x    q'    s'
------------------------------------
q0   a, d, Z  a    q0    aa, ad, aZ
q0   a, d, Z  d    q0    da, dd, dZ
q0   a, d, Z  c    q1    a, d, Z
q1   a        b    q1    -
q1   d        d    q1    -
If we accept in q1 by empty stack, these transitions are enough. If we want to accept by empty stack or accepting state, we could add a transition like f(q1, Z, -) = (q2, Z) and make q2 accepting; the PDA would transition there nondeterministically and would crash unless the input were also exhausted.
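As a sanity check, here is a small deterministic simulation of that PDA in Python (my own sketch; the bottom marker Z is replaced by an empty list, so "accept by empty stack in q1" becomes "input and list both exhausted"):

def accepts(w):
    # q0: push a's and d's; on c, move to q1 without pushing.
    stack = []
    i = 0
    while i < len(w) and w[i] in "ad":
        stack.append(w[i])
        i += 1
    if i == len(w) or w[i] != "c":
        return False                 # no c where expected: crash
    i += 1                           # consume the c, now in q1
    # q1: b pops a, d pops d.
    for ch in w[i:]:
        if not stack:
            return False             # input left, stack empty: crash
        top = stack.pop()
        if not ((ch == "b" and top == "a") or (ch == "d" and top == "d")):
            return False             # mismatch: crash
    return not stack                 # accept iff both ran out together

# accepts("adcdb") -> True,  accepts("adcda") -> False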

Polish to infix notation

Let's say we have an expression in prefix notation, or(1) and A B or(2) or(3) C D E (where A, B, C, D, E are boolean values, and the ors are numbered for convenience), that we want to convert to infix notation. In principle I have two ways to evaluate it:
(1) start at or(3) C D, then or(2), then and, then or(1)
(2) start at and A B, then check or(3), then or(2), and lastly or(1)
In other words:
(1) Evaluate starting from the rightmost operator.
(2) Evaluate starting from the leftmost operator that has all its operands as direct neighbors.
Both evaluations yield (A and B) or C or D or E.
Which evaluation sequence is correct?
Will these two evaluations ever give different result for the same prefix record?
http://www.cs.man.ac.uk/~pjj/cs212/fix.html recommends the first method.
You will get the same result regardless of the order, so it is up to you.
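Both orders are really just two traversals of the same unique expression tree, which is why they must agree. A small recursive sketch (my own illustration, assuming binary and/or operators and single-letter operands):

def prefix_to_infix(tokens):
    # Consume one prefix expression from the front of the token list.
    token = tokens.pop(0)
    if token in ("and", "or"):
        left = prefix_to_infix(tokens)
        right = prefix_to_infix(tokens)
        return "(" + left + " " + token + " " + right + ")"
    return token

# prefix_to_infix("or and A B or or C D E".split())
#   -> "((A and B) or ((C or D) or E))"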

Automata and Formal Languages

Showing that the reverse of a regular language L is also regular
I am confused as to how to approach this question; I've been stuck for hours. For a word x, we use x^r to denote its reverse. For a language L, we use L^r to denote {x^r : x in L}. Show that if L is regular then so is L^r.
If L is regular, then there exists some regular grammar which generates it, and it can always be represented as either a left-regular grammar or a right-regular grammar. Let's assume it's a left-regular grammar G_l (the proof for a right-regular grammar is analogous).
This grammar has productions of two types; the terminating-type:
A -> a, where A is non-terminal and a is either a terminal or empty string (epsilon)
or the chaining type:
B -> Ca, where B, C are non-terminals and a is a terminal
When we apply reversal to a regular language, we essentially also apply it to the tails of the productions (since the heads are just single non-terminals); this is proved below. So we get a new grammar G_r, with productions:
A -> a, where A is non-terminal and a is either a terminal or empty string (epsilon)
B -> aC, where B, C are non-terminals and a is a terminal
But hey, it's a right-regular grammar! So the language it accepts is also regular.
There is one thing left to do: show that reversing the tails actually does what it's supposed to. We can prove it quite simply:
If L contains \epsilon, then there is a production 'S -> \epsilon' in G_l. Since we don't touch productions like that, it's also present in G_r.
If L contains a, a word composed of a single terminal, the argument is similar to the above.
If L contains aZ, where a is a terminal and Z is a word of the language obtained by chopping the first terminal off the words of L, then L^r contains (Z^r)a, because of the change to the chaining productions. Z's language is also regular, since it can be constructed by dropping the first "level" of left-regular productions from G_l, which leaves us with a regular grammar.
I hope it helped. There's also an arguably easier way of doing that by reversing edges of the relevant finite automata and changing accepting and entry states a bit.
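That automaton-based argument is short enough to sketch in code (my own encoding: delta maps (state, symbol) to a set of successor states, and the reversed machine is allowed a set of start states):

def reverse_nfa(delta, start, accepting):
    # Flip every edge; the old accepting states become the start
    # states, and the old start state becomes the sole accepting one.
    rdelta = {}
    for (p, sym), targets in delta.items():
        for q in targets:
            rdelta.setdefault((q, sym), set()).add(p)
    return rdelta, set(accepting), {start}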

What exactly is the 'pumping length' in the Pumping lemma?

I'm trying to understand what this 'magical' number 'n' is that is used in every application of the Pumping lemma. After hours of research on the subject, I came to the following website: http://elvis.rowan.edu/~nlt/TheoryNotes/PumpingLemma.pdf
It states
n is the longest a string can be without having a loop. The biggest n can be is s, though it might be smaller for some particular language.
From what I understand, if there is a language L then the pumping length of L is the number of states in the finite state automaton that recognizes L. Is this true?
If it is, then what exactly does the last line above mean by "though it might be smaller for some particular language"? It's a complete mess in my head. Could somebody clear it up, please?
A state doesn't recognise a language. A DFA recognises a language by accepting exactly the set of words in the language and no others. A DFA has many states.
If there is a regular language L, which can be modelled by the Pumping Lemma, it will have a property n.
For a DFA with s states, in order for it to accept L, s must be >= n.
The last line merely states that for some languages s is greater than n, rather than equal to it.
This is probably more suited for https://cstheory.stackexchange.com/ or https://cs.stackexchange.com/ (not quite sure of the value of both myself).
Note: trivially, not all DFAs with sufficiently many states will accept the language. Also, the fact that a language passes the pumping lemma doesn't mean it's regular (but failing it means it definitely isn't).
Note also that the wording switches between FA and DFA. This is a bit lax, but because NFAs have the same power as DFAs and DFAs are easier to write and understand, DFAs are used for the proof.
Edit: I'll give an example of a regular language, so you can get an idea of u, v, w, z and n. Then we'll discuss s.
L = xy^kz, k >= 2 (i.e. xyyz, xyyyz, xyyyyz)
u = xy
v = y
w = z
n = 4
Hence:
length 3: xyz (i = 0); not in L, but shorter than n
length 4: xyyz (i = 1)
length 5: xyyyz (i = 2)
etc.
Hence, it's modelled by the Pumping Lemma. A DFA would need at least 4 states. So let's think of one.
-> State 1: x
State 1:
    -> State 2: y
State 2:
    -> State 3: y
State 3:
    -> State 3: y
    -> State 4: z
State 4:
    Accepting state
    Terminating state
4 states, as expected. It's not possible to do it with fewer, as expected from n = 4. In this case the example is quite simple, so I don't think you need more than 4 states (but that would be okay if it were needed).
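For concreteness, here is that DFA as a quick Python simulation (my own sketch; the implicit start state is made explicit and missing transitions go to a dead state):

DELTA = {
    ("start", "x"): "s1",
    ("s1", "y"): "s2",
    ("s2", "y"): "s3",
    ("s3", "y"): "s3",   # the loop that gets pumped
    ("s3", "z"): "s4",
}

def accepts(w):
    state = "start"
    for ch in w:
        state = DELTA.get((state, ch), "dead")
    return state == "s4"

# accepts("xyz") -> False; accepts("xyyz"), accepts("xyyyyz") -> True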
I think one possibility is when you have an FA with an unreachable state. The FA has s states, but one is unreachable, so every string of L is accepted along a path visiting at most s - 1 = n states, and hence n < s.

refactoring boolean equation

Let's say you have a Boolean rule/expression like so
(A OR B) AND (D OR E) AND F
You want to convert it into as many AND-only expressions as possible, like so
A AND D AND F
A AND E AND F
B AND D AND F
B AND E AND F
You are just reducing the ORs, so it becomes
(A AND D AND F) OR (A AND E AND F) OR (...)
Is there a property in Boolean algebra that would do this?
Take a look at DeMorgan's theorem. The link points to a document relating to electronic gates, but the theory remains the same.
It says that any logical binary expression remains unchanged if we
Change all variables to their complements.
Change all AND operations to ORs.
Change all OR operations to ANDs.
Take the complement of the entire expression.
(quoting from the above linked document)
Your example is exploiting the distributivity of AND over OR, as shown here.
All you need to do is apply that successively. For example, using x*(y+z) = (x*y)+(x*z) (where * denotes AND and + denotes OR):
0. (A + B) * (D + E) * F
1. Applying it to the first two brackets gives ((A+B)*D) + ((A+B)*E)
2. Applying it inside each bracket gives (A*D + B*D) + (A*E + B*E)
3. So now you have ((A*D + B*D) + (A*E + B*E)) * F
4. Applying the law again gives (A*D + B*D)*F + (A*E + B*E)*F
5. One more application gives A*D*F + B*D*F + A*E*F + B*E*F, QED
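Since each pure-AND term picks exactly one literal from each OR-bracket, the whole expansion is a Cartesian product. A tiny sketch using that observation (clauses encoded as lists of literals, my own convention):

from itertools import product

clauses = [["A", "B"], ["D", "E"], ["F"]]   # (A+B) * (D+E) * F

terms = [" AND ".join(combo) for combo in product(*clauses)]
print(" OR ".join("(" + t + ")" for t in terms))
# (A AND D AND F) OR (A AND E AND F) OR (B AND D AND F) OR (B AND E AND F)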
You may be interested in reading about Karnaugh maps. They are a tool for simplifying boolean expressions, but you could use them to determine all of the individual expressions as well. I'm not sure how you might generalize this into an algorithm you could write a program for though.
You might be interested in conjunctive normal form or its sibling, disjunctive normal form.
As far as I know, Boolean algebra cannot be built with only the AND and OR operations: if you have just these two, you cannot express NOT. You can, however, convert any expression into one over a functionally complete set of operations. Here are some complete sets:
AND and NOT
OR and NOT
Assuming you can use the NOT operation, you can rewrite any Boolean expression with only ANDs or only ORs. In your case:
(A OR B) AND (D OR E) AND F
I tend to use engineering shorthand for the above and write:
AND as a product (. or nothing);
OR as a sum (+); and
NOT as a single quote (').
So:
(A+B)(D+E)F
The analogy with arithmetic is actually quite useful for factoring terms.
By De Morgan's Law:
(A+B) => (A'B')'
So you can rewrite your expression as:
(A+B)(D+E)F
(A'B')'(D'E')'F
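If you want to convince yourself of the De Morgan step, an exhaustive two-variable check is only a few lines:

for A in (False, True):
    for B in (False, True):
        # A + B  ==  (A'B')'
        assert (A or B) == (not ((not A) and (not B)))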
