Big-O and Big Omega of a function - Homework - analysis

So I have been given a function, and I'll change the function since it is homework, and I want to learn HOW to do this instead of being told what the answer is.
Using the definitions of big-Oh and Ω, find the upper and lower bounds for
the following expressions. Be sure to state appropriate values for c and k.
c_1*3^n + c_2*n^4, where the constants are positive integers.
Now, I understand from class how to determine whether a function f(n) ∈ O(g(n)) or f(n) ∈ Ω(g(n)).
What I don't understand is how to determine the g(n) if all you have is f(n). I hope that makes sense!
Edit: I'm sure you could brute-force it and plug in a bunch of functions for g(n), but that isn't really what I want if there is a better approach.
Edit2: We can't use the limit methods for this, they want us to use the basic definitions somehow.
Edit 3: Here are the definitions we have been given:
Here is what I have for Big O:
For T(n) a non-negatively valued function, T(n) is in set O(f(n))
if there exist two positive constants c and k such that T(n)<=c*f(n)
for all n > k.
And for Ω:
For T(n) a non-negatively valued function, T(n) is in set Ω(g(n)) if there exist two positive constants c and k such that T(n) >= c*g(n) for all n > k

The intuition is that f ∈ O(g) means g is, up to a constant, at least as big as f, and f ∈ Ω(g) means g is, up to a constant, at most as big as f. In my answer, I won't be too precise/picky about how to choose the constants.
First to warm up, you should convince yourself that
f ∈ O(f) and f ∈ Ω(f). (let c=1, k=1 in the definitions).
If f ∈ O(g), then g ∈ Ω(f) and vice versa. (if you find constants (c,k) for one, then (1/c, k) are the constants you need for the other)
If f ∈ O(g), then f ∈ O(P*g) and Q*f ∈ O(g) for any positive constants P, Q. This means that multiplying functions by positive constants doesn't matter. Similarly for Ω.
If f ∈ O(g) and f ∈ O(h), then f ∈ O(MIN(g,h)).
If f ∈ Ω(g) and f ∈ Ω(h), then f ∈ Ω(MAX(g,h)).
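As a concrete instance of the second rule above: n ∈ O(n²) with (c, k) = (1, 1), so n² ∈ Ω(n) with the same constants (1/1, 1) = (1, 1).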
When you are faced with trying to find O or Ω of f+g, you normally would guess O(f) or O(g) or Ω(f) or Ω(g).
In your case of 3^n + n^4, we know 3^n ∈ O(3^n), n^4 ∈ O(n^4), and 3^n + n^4 ∈ O(3^n + n^4). But we want to do better. We want to prove 3^n + n^4 ∈ O(3^n + 3^n) = O(3^n). We can do this if we can show n^4 ∈ O(3^n).
We should do exactly as the definition says we should do: show there are (c,k) such that for all n>k
n^4 ≤ c3^n
4log(n) ≤ log(c) + nlog(3)
4log(n) - nlog(3) ≤ log(c)
One way of showing that this c always exists is with calculus: show that 4log(n) - nlog(3) is eventually a decreasing function. Its derivative is 4/n - log(3), which is negative for sufficiently large n. Therefore, for sufficiently large n, 4log(n) - nlog(3) is decreasing, so beyond that point it is bounded above by its value there; take k to be that point and log(c) to be that value. Therefore n^4 ∈ O(3^n), and 3^n + n^4 ∈ O(3^n + 3^n) = O(3^n).
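For a concrete pair of constants (just to make the argument tangible): c = 1 and k = 7 already work, since 8^4 = 4096 ≤ 6561 = 3^8 and 4log(n) - nlog(3) keeps decreasing from there on.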
Because 3^n + n^4 ≥ 1*3^n, 3^n + n^4 ∈ Ω(3^n). To illustrate that constants don't matter, let's use the c_1 and c_2 you had: c_1*3^n + c_2*n^4. Let d := min(c_1, c_2). Then
c_1*3^n + c_2*n^4 ≥ d(3^n + n^4) ≥ d*3^n
So c_1*3^n + c_2*n^4 ∈ Ω(3^n). Similarly, for O(3^n): let d := max(c_1, c_2). Then for sufficiently large n,
c_1*3^n + c_2*n^4 ≤ d(3^n + n^4) ≤ d(c*3^n) = (dc)*3^n
We know this c exists because 3^n + n^4 ∈ O(3^n). Therefore c_1*3^n + c_2*n^4 ∈ O(3^n).
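If you want to sanity-check these bounds numerically, here is a quick Python sketch (my own, not part of the proof; c1 = 3 and c2 = 5 are arbitrary stand-ins for the positive constants in the problem):
# Check d_lo*3^n <= c_1*3^n + c_2*n^4 <= (d_hi*c)*3^n for all tested n > k.
c1, c2 = 3, 5                          # arbitrary sample values for c_1, c_2
d_lo, d_hi = min(c1, c2), max(c1, c2)  # the d's used in the Omega and O arguments
c, k = 2, 7                            # n^4 <= 3^n for n > 7, so 3^n + n^4 <= 2*3^n there

for n in range(k + 1, 200):
    f = c1 * 3**n + c2 * n**4
    assert d_lo * 3**n <= f <= d_hi * c * 3**n
print("bounds hold for n = 8 .. 199")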
Not sure if I answered sufficiently but hope it helps.

Related

How do I define a type whose definition is implicit in the typing?

Say I want to define the "diagonal of a type":
Σ[ x ∈ A ] Σ[ y ∈ A ] x ≡ y
In my mind, it should be the type of equality in A. If I try with
data Diag (A : Set) : Σ[ x ∈ A ] Σ[ y ∈ A ] x ≡ y
It complains that Diag is "defined but not accompanied by a definition". The point is, shouldn't it be already defined?
I suspect behind this doubt there is a big misunderstanding of how types work in Agda. I come from a course in MLTT, and there I can derive something like
whose canonical elements are of known shape.

How are the equational reasoning operators used in practice?

The Agda Standard Library exports some operators that allow you to write proofs in a manner similar to what you would do on paper, or how it is taught in the Haskell community. While you can write "conventional" Agda proofs in a somewhat systematic manner by refining a Goal using with abstractions, rewrites or helper lemmas as needed, it isn't really clear to me how a proof using the equality reasoning primitives "comes to be".
That is, while you can find examples about what these proofs look like when they are finished and type-check here and there, these already worked examples don't show you how they are developed in a systematic step-by-step (maybe hole-driven) manner.
How is it done in practice? Do people "refactor" an already existing proof? Do you try to "burn the candle from both sides" by starting with the left-hand and right-hand sides of the initial Goal and a hole in the middle?
Furthermore, the Agda documentation states that if the equality reasoning primitives are in scope, "then Auto will do equality reasoning using these constructs". What does that mean?
I would appreciate it if someone could point me in the right direction, or even post an example of how they develop these kinds of proofs step-by-step, what questions they ask themselves as they go through it, where they put the holes and so on. Thanks!
I think it would be more helpful for you to look at the definitions behind equational reasoning for the identity type here: Equational Reasoning. The main point is that it is just a nicer way to build chains of transitivity, letting the user see the actual expressions in the code rather than the proof evidence, which is not that easy to read.
The way I go about building a proof using equational reasoning for any setoid is the following, using natural numbers as the example:
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
data ℕ : Set where
  zero : ℕ
  succ : ℕ → ℕ

_+_ : ℕ → ℕ → ℕ
m + zero   = m
m + succ n = succ (m + n)
Let's take commutativity as an example.
This is how I start with the goals.
comm+ : ∀ m n → m + n ≡ n + m
comm+ m zero     = {!!}
comm+ m (succ n) =
  begin
    succ (m + n)
  ≡⟨ {!!} ⟩
    succ n + m
  ∎
Now I can see the original expression and the goal, and the proof I still owe sits in the hole between the ⟨ ⟩ brackets.
I work only on the expressions, leaving the proof objects untouched, and add the steps I think should work.
comm+ : ∀ m n → m + n ≡ n + m
comm+ m zero     = {!!}
comm+ m (succ n) =
  begin
    succ (m + n)
  ≡⟨ {!!} ⟩
    succ (n + m)
  ≡⟨ {!!} ⟩
    succ n + m
  ∎
Once I think I have a proof, I work on the proof objects that justify my steps.
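For instance, in the version above the first hole now has type succ (m + n) ≡ succ (n + m), which the recursive call comm+ m n discharges via cong succ; the second hole, succ (n + m) ≡ succ n + m, and the zero case both still need a small auxiliary lemma about how _+_ computes on its left argument, since the _+_ defined here recurses on its right argument.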
Regarding the auto-tactic, you should not bother with that, in my opinion. It hasn't been worked on for a while.

Can all context free grammars be converted to NFA/DFA?

I've seen this post about how to convert context free grammar to a DFA:
Automata theory : Conversion of a Context free grammar to a DFA
However, just wondering: can all context-free grammars be converted to a DFA/NFA? What about context-free grammars that cannot be expressed as a regular expression? E.g. S → (S) | ()
Thanks!
Only regular languages can be converted to a DFA, and not all CFGs represent regular languages, including the one in the question.
So the answer is "no".
NFAs are no more expressive than DFAs, so the above statement would still be true if you replaced DFA with NFA.
A CFG represents a regular language if it is right- or left-linear. But the mere fact that a CFG is not left- or right-linear proves nothing. For example, S→a | a S a happens to generate the same language as S→a | S a a.
Yes ... if the F in "DFA" is replaced by I to get "DIA" (a deterministic automaton that is allowed to have infinitely many states), but no ... for a DFA itself; and I will show how this works for your example at the end. In fact, all languages have DIA's whose state diagrams reside on a single Universal State Diagram as sub-diagrams thereof.
Consider your example, but rewrite it as S → u S v, S → w. This grammar, like all grammars, is algebraically a system of inequations over a certain partially ordered algebra. In particular, it can be rewritten as
S ⊇ {u}S{v}, S ⊇ {w},
or equivalently as
S ⊇ {u}S{v} ∪ {w}.
The object identified by the grammar is the least solution to the system. Since the system is a fixed-point system S ⊇ f(S) = {u}S{v} ∪ {w}, the least solution may also be described as the least fixed point, and it is denoted μx f(x) = μx({u}x{v} ∪ {w}).
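To make "least solution" concrete: iterating f from the empty set gives ∅, then f(∅) = {w}, then f({w}) = {w, uwv}, then {w, uwv, uuwvv}, and so on; the union of all these stages is {uⁿ w vⁿ : n ≥ 0}, and that is the least fixed point the grammar identifies.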
The ordering relation, for this algebra here, is subset ordering y ⊆ x ⇔ x ⊇ y. The operations include a product AB ≡ { ab: a ∈ A, b ∈ B }, defined element-wise (where, component-wise, the product is word concatenation, with ab being the concatenation of a and b). The product has {1} as an identity, where 1 denotes the empty word. Both word concatenation and product satisfy the fundamental properties
(xy)z = x(yz) [Associativity]
and
xe = x = ex [Identity property]
with the respective identities e = 1 (for concatenation) or e = {1} (for set product). The algebra is called a Monoid.
The simplest and most direct monoid formed from the elements X = {u,v,w} is the Free Monoid X* = {u,v,w}*, which is equivalently described as the set of all words of finite length (including the empty word, 1, of length 0) formed from u, v and w. It is possible to frame the question in terms of more general monoids, but (as the literature usually does) I will restrict it to free monoids.
The family of languages over X is one and the same as the family 𝔓M of subsets A ⊆ M of the monoid M = X*; the defining condition being A ∈ 𝔓M ⇔ A ⊆ M. Other distinguished subfamilies exist, such as the families ℜM ⊆ ℭM ⊆ 𝔗M ⊆ 𝔓M, respectively, of rational, context-free and Turing (or recursively enumerable) languages. The second of these, ℭM, which is what your question is concerned with, consists of the languages given by context-free grammars, each identified as the least fixed point solution of the corresponding fixed-point system of inequations.
Over 𝔓M, one can define the left-quotient operation v\A = { w ∈ M: vw ∈ A }, for each word v ∈ M and subset A ∈ 𝔓M. Because M = X* is a free monoid, it can be decomposed uniquely into left-quotients on the individual elements of X, by the properties 1\A = A, and (vw)\A = w\(v\A).
Correspondingly, one can define a state transition on each x ∈ X by x: A → x\A, treating each subset A ∈ 𝔓M as a state. Together, 𝔓M comprises the state set of the Universal State Diagram over M. Because M = X* is a free monoid, every element of M is either of the form xw for some x ∈ X and w ∈ X*, or is the empty word 1. The decomposition is unique: xw ≠ 1 for any x ∈ X or w ∈ X* and xw = x'w' for x, x' ∈ X and w, w' ∈ X*, only if x = x' and w = w'. Therefore, every A ∈ 𝔓M decomposes uniquely into a partition in a manner analogous to Taylor's Theorem as
A = A₀ ∪ ⋃_{x∈X} {x} x\A,
where A₀ ≡ A ∩ {1} is either {1} if 1 ∈ A or is ∅ if 1 ∉ A. The states for which A₀ = {1} may be regarded as the Final States in the Universal State Diagram.
The analogy to Taylor's Theorem is not too far-removed, since the left-quotient satisfies an analogue of the Product Rule
x\(AB) = (x\A) B ∪ A₀ (x\B)
so it is also denoted as a partial derivative x\A = ∂A/∂x: the Brzozowski Derivative, so that the decomposition rule could just as well be written as:
A = A₀ ∪ ⋃_{x∈X} {x} ∂A/∂x.
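As a quick sanity check with the example grammar: for S = μx({u}x{v} ∪ {w}) we have S₀ = ∅ (the empty word is not in S), ∂S/∂u = S{v} (stripping the leading u from uⁿ w vⁿ leaves uⁿ⁻¹ w vⁿ), ∂S/∂w = {1}, and ∂S/∂v = ∅; plugging these into the decomposition gives back exactly S = {u}S{v} ∪ {w}.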
What you actually have is an infinite fixed-point system of inequations
A ⊇ A₀ ∪ ⋃_{x∈X} {x} ∂A/∂x for all A ∈ 𝔓M,
with variables A ∈ 𝔓M ranging over all of 𝔓M, whose right-hand sides are all right-linear in the variables. The sets, themselves, are the least fixed point solution to their own system (and to all closed subsystems of the universal system that contain that set as a variable).
Choosing different states as start states yields the different DIA's contained within it. Every minimal DIA (and every minimal DFA) of every language over X is contained in it.
In particular, in this diagram, you can consider the largest subdiagram accessible from a specific state A ∈ 𝔓M. All the states that can be accessed from A are left-quotients by words in M. So, together they comprise a family δA ≡ { v\A: v ∈ M }. The subdiagram consisting only of these states gives you the minimal DIA for the language A, where A, itself, is treated as the start state of the DIA.
If δA is finite, then the I is an F and it's actually a DFA - and that's what you're looking for. Which states in 𝔓M have DIA that are actually DFA's? The regular ones - the ones in the subfamily ℜM ⊆ 𝔓M. This is the case when M = X* is a free monoid. I'm not totally sure if this can also be proven for non-free monoids (like X* × Y*, whose rational subsets ℜ(X* × Y*) are one and the same as what are known as rational transductions) ... because of the reliance on the Taylor's Formula decomposition. There is still something like a Taylor's Theorem, but the decompositions are not necessarily partitions or unique, any longer.
For larger subfamilies of 𝔓M, the DIA are necessarily infinite; but their transitions may possess a sufficient degree of symmetry to allow both the states and transition rules to be wrapped up more succinctly. Correspondingly, one can distinguish different families of DIA by what symmetry properties they possess.
For your example, X = {u,v,w} and M = {u,v,w}*. The subset identified by your grammar is S = {uⁿ w vⁿ: n = 0, 1, 2, ...}. We can define the following sets
S(n) = S {vⁿ}, T(n) = {vⁿ}, for n = 0, 1, 2, ...
The sub-diagram of states accessible from S consists of all the states
δS = { S(n): n = 0, 1, 2, ... } ∪ { T(n): n = 0, 1, 2, ... } ∪ { ∅ }
The state transitions are the following
u: S(n) → S(n+1)
v: T(n+1) → T(n)
w: S(n) → T(n)
with x: A → ∅ in all other cases for x ∈ {u,v,w} and A ∈ δS. The sole final state is T(0).
As you can see, the DIA is infinite and is not a DFA at all. If you were to draw out the diagram, you would see an infinite ladder, with S = S(0) as the start state and T(0) = {1} as the final state, all the u transitions climbing up a rung, all the v transitions coming down a rung, and the w transitions crossing over on a rung.
The symmetry is captured by factoring the state set into
δS = {S,T}×{0,1,2,3,⋯} ∪ {∅}
with S(n) rewritten as (S,n) and T(n) as (T,n). This includes a finite set of states Q = {S,T} for a finite state "control" and a set of states D = {0,1,2,3,⋯} for a "device"; as well as the empty set ∅ for the fail state. That device is none other than a counter, and this DIA is just a one-counter automaton in disguise.
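To make the "one-counter automaton in disguise" point concrete, here is a small Python sketch (my own illustration, not from any reference) of the DIA above, with S(n) and T(n) encoded as pairs and ∅ as a fail state:
FAIL = None  # stands for the empty-set state

def step(state, x):
    # One transition of the DIA described above.
    if state is FAIL:
        return FAIL
    phase, n = state
    if phase == "S" and x == "u":            # u: S(n) -> S(n+1)
        return ("S", n + 1)
    if phase == "S" and x == "w":            # w: S(n) -> T(n)
        return ("T", n)
    if phase == "T" and x == "v" and n > 0:  # v: T(n+1) -> T(n)
        return ("T", n - 1)
    return FAIL                              # all other transitions fall into ∅

def accepts(word):
    state = ("S", 0)                         # start state S = S(0)
    for x in word:
        state = step(state, x)
    return state == ("T", 0)                 # sole final state T(0) = {1}

# accepts("w") and accepts("uuwvv") are True; accepts("uwvv") is False.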
All of the classical automata models posed in the literature have a similar form, when expressed as DIA. They contain a state set Q×D ∪ {∅} that includes a finite set Q for the "finite state control" and a (generally infinite) state set D for the device, along with the fail state ∅. The restrictions or constraints on the device correspond to what types of symmetries are contained in the underlying DIA. A deterministic PDA, with two stack symbols {a,b} for instance, has a device state set D = {a,b}* (consisting of all stack words), and an underlying DIA that has the form of an infinite binary tree with copies of Q residing at each node.
You can best see this by writing out and graphing the DIA for the Dyck language, which is given by the grammar D₂ → b D₂ d D₂, D₂ → p D₂ q D₂, D₂ → 1 as a language over X = {b,d,p,q} and subset of M = X* = {b,d,p,q}*; i.e. as the least-fixed point D₂ = μx ({b}x{d}x ∪ {p}x{q}x ∪ {1}).
Every subset A ∈ ℭM can be expressed in terms of a subset A' ∈ ℜM[b,d,p,q] of the free extension of the monoid M by indeterminates {b,d,p,q}, by carrying out insertions of {b,d,p,q} in suitable places in A, such that applying the identities {bd} = {1} = {pq}, {bq} = ∅ = {pd}, and xy = yx for x ∈ M and y ∈ {b,d,p,q} to A' yields A, itself.
This result (known, but unpublished since the 1990's and published only in 2022) is the algebraic form of the Chomsky-Schützenberger Theorem and is true for all monoids M. For instance, it holds for the non-free monoid M = X* × Y*, where the corresponding family ℭ(X* × Y*) comprise the push-down transductions from X to Y (or "simple syntax directed translations"; aka yacc-like grammars).
So, there is also something like a DFA even for these classes of DIA; provided you include transition arrows for {b,d,p,q}. For your example, A = μx({u}x{v} ∪ {w}), you have A' = {b}{up,qv,w}*{d} and you can easily write down the corresponding DFA. That automaton is just the one-counter machine, itself, with "b" interpreted as "start up at count 0", "d" as "check for count 0 and finish", "p" as "add one to the count" and "q" as "check for count greater than 0 and subtract 1". With respect to the algebraic rules given for {b,d,p,q}, A' is not just a representation of A, it actually is A: A' = A.

LL(1) Parsing -- First(A) with Recursive First Alternatives

How would I apply the FIRST() rule on a production such as :
A -> AAb | Ab | s
where A is a non-terminal, and b,s are terminals.
FIRST(A) of alternatives 1 & 2 would be A again, so wouldn't that end in infinitely many applications of FIRST, since I need a terminal to build the FIRST set?
To compute FIRST sets, you typically perform a fixed-point iteration. That is, you start off with a small set of values, then iteratively recompute FIRST sets until the sets converge.
In this case, you would start off by noting that the production A → s means that FIRST(A) must contain {s}. So initially you set FIRST(A) = {s}.
Now, you iterate across each production of A and update FIRST based on the knowledge of the FIRST sets you've computed so far. For example, the rule
A → AAb
means that you should update FIRST(A) to include all elements of FIRST(AAb). This causes no change to FIRST(A). You then visit
A → Ab
You again update FIRST(A) to include FIRST(Ab), which is again a no-op. Finally, you visit
A → s
And since FIRST(A) already contains s, this causes no change.
Since nothing changed on this iteration, you would end up with FIRST(A) = {s}, which is indeed correct because any derivation starting at A ultimately will produce an s as its first character.
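If it helps to see the iteration spelled out, here is a rough Python sketch of it (my own illustration, not from the slides), for grammars without ε-productions such as this one; the dictionary encoding of the grammar is just an assumption for the example:
# FIRST sets by fixed-point iteration, for a grammar with no epsilon-productions.
# Nonterminals are the dict keys; every other symbol is treated as a terminal.
grammar = {"A": [["A", "A", "b"], ["A", "b"], ["s"]]}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                       # repeat until no FIRST set grows
        changed = False
        for nt, productions in grammar.items():
            for prod in productions:
                head = prod[0]
                # FIRST of the leading symbol; with no epsilon-productions
                # we never need to look past the first symbol.
                add = first[head] if head in grammar else {head}
                if not add.issubset(first[nt]):
                    first[nt] |= add
                    changed = True
    return first

print(first_sets(grammar))               # {'A': {'s'}}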
For more information, you might find these lecture slides useful (here's part two). They describe in detail how top-down parsing works and how to iteratively compute FIRST sets.
Hope this helps!
My teaching notes are in Spanish, but the algorithms are in English. This is one way to calculate FIRST:
for each a ∈ Σ do
    F(a) := {a}
for each A ∈ N do
    if A → ε ∈ P then
        F(A) := {ε}
    else
        F(A) := ∅
repeat
    for each A ∈ N do
        F'(A) := F(A)
    for each A → X1 X2 ... Xn ∈ P do
        if n > 0 then
            F(A) := F(A) ∪ F'(X1) ⋅k F'(X2) ⋅k ... ⋅k F'(Xn)
until F(A) = F'(A) for all A ∈ N
FIRSTk(X) := F(X) for all X ∈ (Σ ∪ N)
Σ is the alphabet (terminals), N is the set of non-terminals, P is the set of productions (rules), ε is the null string, and ⋅k is concatenation trimmed to k places. Note that ∅ ⋅k x = ∅, and that concatenating two sets produces the concatenation of the elements in the Cartesian product.
The easiest way to calculate FIRST sets by hand is by using one table per algorithm iteration.
F(A)   = ∅
F'(A)  = F(A) ⋅1 F(A) ⋅1 F(b) ∪ F(A) ⋅1 F(b) ∪ F(s)
F'(A)  = ∅ ⋅1 ∅ ⋅1 {b} ∪ ∅ ⋅1 {b} ∪ {s}
F'(A)  = ∅ ∪ ∅ ∪ {s}
F'(A)  = {s}
F''(A) = F'(A) ⋅1 F'(A) ⋅1 F'(b) ∪ F'(A) ⋅1 F'(b) ∪ F'(s)
F''(A) = {s} ⋅1 {s} ⋅1 {b} ∪ {s} ⋅1 {b} ∪ {s}
F''(A) = {s} ∪ {s} ∪ {s}
F''(A) = {s}
And we're done, because F' = F'', so FIRST = F'', and FIRST(A) = {s}.
Your grammar rule has left recursion, as you already realized, and LL parsers are not able to parse grammars with left recursion.
So you need to eliminate the left recursion first; then you should be able to compute the FIRST set for the rule.

What are the parsing algorithms for programmed grammars?

I want to know what parsing algorithms are used for parsing programmed grammars. Any links, blogs or anything where I can read about programmed grammars and their parsing algorithms, other than IEEE research papers?
I think it's explained well in The power of programmed grammars with graphs from various classes:
A context-free grammar is specified as a quadruple G = (N, T, S, P), where N is a finite non-empty set called the nonterminal alphabet, T is a finite non-empty set called the terminal alphabet (N ∩ T = ∅), S ∈ N is the start symbol, and P is a finite subset of N × (N ∪ T)∗ called the set of rules. Rules are also named as productions.
A programmed grammar (without appearance checking) is a six-tuple G = (N, T, S, Lab, P, PG), where N, T and S are specified as in a context-free grammar, Lab is an alphabet (of labels), P is a finite set of context-free rules called the set of core productions, and PG is a finite set of triples r = (q, p, σ), where q ∈ Lab is the label of r, p ∈ P is a context-free production called the core production of r, and σ is a subset of Lab termed the success field of r. The elements of PG are called the rules of G.
The language L(G) generated by a programmed grammar G specified as above is defined as the set of all words w ∈ T∗ such that there is a derivation
S = w0 ⇒r1 w1 ⇒r2 w2 ⇒r3 ... ⇒rk wk = w,
where k ≥ 1 and, for 1 ≤ i ≤ k, wi−1 = w′ Ai w″ and wi = w′ vi w″ for some words w′, w″ ∈ (N ∪ T)∗, ri = (qi, Ai → vi, σi) and, for i < k, qi+1 ∈ σi.
Excuse the lack of LaTeX.
In a similar way that Ogden's Lemma is stronger than the Pumping Lemma (because of the markings), programmed grammars are more powerful than plain context-free grammars because of these labellings and success fields.
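To see what the labels and success fields buy you, here is a small programmed grammar (my own illustrative example, not taken from the paper) for the non-context-free language {aⁿbⁿcⁿ : n ≥ 1}, written as triples (label, core production, success field):
(1, S → ABC, {2, 5})
(2, A → aA, {3})
(3, B → bB, {4})
(4, C → cC, {2, 5})
(5, A → a, {6})
(6, B → b, {7})
(7, C → c, ∅)
Each core production on its own is an ordinary context-free rule, but after rule 1 the success fields force rules 2–4 (or 5–7) to be applied as a block, so A, B and C are rewritten in lock-step and the three letter counts stay equal.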

Resources