How would you draw the NFA for the language x*? - automata

I am learning about automata, and we are now covering NFAs. My professor told us to write an NFA for some language x*. As I understand it, * means that the empty input should also be accepted, so when I made the NFA, the start state was also a final state. My professor told me this is wrong because it wouldn't make any sense to build an automaton that accepts on no input. Am I in the wrong here?
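For concreteness, here is a rough sketch of the automaton in question (Python, with illustrative names only): the start state is also the single accepting state, with a self-loop on x, so the empty string is accepted.

# A one-state automaton for x*: the start state q0 is also the accepting state,
# so the empty string is accepted, and each 'x' loops back to q0.
# (State and function names here are illustrative only.)
START = "q0"
ACCEPTING = {"q0"}
TRANSITIONS = {("q0", "x"): "q0"}   # self-loop on 'x'

def accepts(word):
    state = START
    for symbol in word:
        if (state, symbol) not in TRANSITIONS:
            return False            # no transition on this symbol: reject
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

for w in ["", "x", "xxx", "xy"]:
    print(repr(w), accepts(w))      # '', 'x', 'xxx' accepted; 'xy' rejected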

Related

What is the point of the 4 grammars specified in Chomsky hierarchy?

I'm currently studying compilers and am on the topic of "Chomsky Hierarchy and the 4 languages." But it beats me what the practical purpose of all this is.
It'd be great if I could see real-life examples of where the 4 grammars (Unrestricted, CSG, CFG, Regular Grammar) come into play.
I found online that the Chomsky hierarchy, along with the 4 grammars, is used to evaluate proposals within cognitive science, but this goes way over my head. It'd be great if someone could break it down for me, thanks a lot!
There is no practical value. That's the whole point.
Let me try to break that down a bit. It's useful to remember that Chomsky is a linguist --someone who studies human languages-- and that he was writing in the late 1950s when computational theory was not as well-developed as it is today. (To put it mildly.) His goal was to find a mathematical model which could provide some insights into the mechanisms by which human beings generate and understand sentences, and he took as his starting point a particular simple model of sentence generation.
In this model, a grammar is a function F which transforms an arbitrary sequence of symbols over some alphabet into another sequence of symbols over the same alphabet. F is defined by a finite set of pairs (called productions) α → β. We then say that ζ ∈ F(ω) if the definition of F contains some pair α → β such that α is a substring of ω and ζ is the result of substituting a single instance of α in ω with β.
That's not very interesting in and of itself; we make that into a full language by starting with some designated starting sequence, normally represented as the single symbol S, and repeatedly apply F as many times as is necessary. (In all interesting grammars, the set so constructed is infinite, so it cannot actually be constructed. But we can imagine proceeding from the starting point until we find the sentence we wanted to generate.)
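As a rough illustration (my own sketch, using plain string operations rather than any formal notation), a single application of F can be computed by trying every production at every position of the string:

# A rough sketch (my own code) of one application of F: try every production
# alpha -> beta at every position of omega, and collect every string obtainable
# by replacing a single occurrence of alpha with beta.
def one_step(productions, omega):
    results = set()
    for alpha, beta in productions:
        start = omega.find(alpha)
        while start != -1:
            results.add(omega[:start] + beta + omega[start + len(alpha):])
            start = omega.find(alpha, start + 1)
    return results

# Example: S -> aSb | ab generates the strings a^n b^n.
productions = [("S", "aSb"), ("S", "ab")]
print(one_step(productions, "S"))     # {'aSb', 'ab'}
print(one_step(productions, "aSb"))   # {'aaSbb', 'aabb'}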
The problem with this model is that it can be used to describe an arbitrary Turing Machine. Or, if you like, an arbitrary computer program, although the equivalence is easier to see with a Turing Machine. In other words, it is at least theoretically possible to construct a finite grammar which will recognise strings consisting of the description of a Turing Machine (i.e., a program written in some programming language) followed by an input and an output only if the Turing Machine applied to the input would produce the output. In other words, there exists (in the mathematical sense) a grammar of this form which is computationally equivalent to a general purpose computer.
Unfortunately, that's not actually very useful if our goal is to understand sentences, because there is actually no algorithm for running computers backwards. The best we can do as a general solution is to enumerate all possible inputs and run the program on each of them until we find the output we hoped for. And that doesn't actually work because there is no limit to the amount of time the program might take to produce an output and no way to even know if the program will eventually come to an end. (This is called the "halting problem".) So we might get stuck on some possible input, and we'll never know if some other input might have produced the desired output.
As a result, we cannot tell whether the provided input was "grammatical", that is, whether it conformed to the grammar provided. And that's not just the case with the particular grammar we built to emulate Turing Machines. It means that we have no confidence that we can recognise sentences from any arbitrary grammar, and even if we stumble upon an answer, we have no way to limit how much time it might take to get there.
Clearly, this is not how human beings understand each other. So if it is to serve a practical purpose, we must restrict the possible grammars in some way to make them computationally feasible.
On the other end of the spectrum, a lot was known about finite-state machines. A finite-state machine is a Turing Machine without a tape; that is to say, it is simply a finite collection of states. In each state, the machine reads a single input symbol and uses it to decide what the next state will be. It turns out that finite-state machines can be modelled using a grammar (as above) restricted to very simple productions, each of which is either of the form A → a B or A → a, where a is a symbol from the "terminal alphabet" (that is, a word) and A and B are single grammatical symbols. These grammars are called "regular grammars" and they are computationally equivalent to what mathematicians call "regular expressions" (which are a small subset of what is recognised by "regex" libraries, but that's a whole other discussion).
Regular grammars are easy to parse. All that is needed is to trace through the state machine, so it can be done without backtracking in time proportional to the length of the input. But regular grammars are far too weak to be able to represent human language, or even most computer languages. As a simple example, algebraic expressions with parentheses cannot be recognised with a regular grammar (or with a finite-state machine) because there is no way to count the parenthesis depth; the finite-state machine has no memory at all (other than knowing which state it is in, and there are only a finite number of states).
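To make the first point concrete, here is a small sketch (my own example, not from the answer): a finite-state machine processed in one left-to-right pass with no backtracking, so the running time is clearly proportional to the length of the input.

# A small finite-state machine that recognises strings of the form a*b*.
# Recognition is a single left-to-right pass with no backtracking.
TRANSITIONS = {
    ("A", "a"): "A",   # still reading a's
    ("A", "b"): "B",   # switch to reading b's
    ("B", "b"): "B",   # keep reading b's
}
START, ACCEPTING = "A", {"A", "B"}

def run(word):
    state = START
    for symbol in word:
        state = TRANSITIONS.get((state, symbol))
        if state is None:
            return False            # e.g. an 'a' appearing after a 'b'
    return state in ACCEPTING

print(run("aaabbb"), run("abab"))   # True False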
So unrestricted grammars are too powerful to parse and regular grammars are too weak to be useful. (Useful for complex parsing problems, that is. There are certain applications for regular expressions, but parsing complete computer programs is not one of them.)
The next step, then, was to try to find a restriction on grammars which was still powerful enough to represent human language without being so powerful that parsing became impossible.
That, finally, is the origin of Chomsky's hierarchy. Between the two extremes described above (type 0 and type 3 grammars), Chomsky proposed two possible intermediate restrictions -- type 1 and type 2 grammars -- and proved a number of important properties about each of them.
While this work turned out to be fundamental in the development of formal language theory, it cannot really be said to have answered the question Chomsky started with. Type 2 grammars -- context-free grammars -- are indeed computationally tractable; they can be parsed with simple algorithms in polynomial time, and can represent a large number of useful languages. But they are still too weak to represent human language. In particular, context-free grammars cannot represent a language as simple as "all strings which contain two instances of the same substring". Type 1 grammars -- context-sensitive grammars -- can probably represent any useful language, and are not quite as unruly as unrestricted grammars, but they are still too powerful to parse. (Since the derivation steps in a context-sensitive grammar never get shorter, it is possible to enumerate all possible derivations from a starting point in order by length, which means that you can decide whether a sentence is generated by the grammar without running into the halting problem. But that's as good as it gets; that procedure takes exponential time and is not remotely feasible for non-trivial inputs.)
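As one concrete illustration of "parsed with simple algorithms in polynomial time" (a sketch of the standard CYK algorithm, not something from the original answer), here is a recogniser for context-free grammars in Chomsky normal form; the three nested loops give the O(n^3) running time.

# A compact CYK recogniser (a sketch).  The grammar must be in Chomsky normal
# form: every rule is either A -> B C or A -> a.
def cyk(word, binary_rules, terminal_rules, start="S"):
    n = len(word)
    if n == 0:
        return False                       # this CNF form cannot derive eps
    # table[i][j] holds the nonterminals that derive word[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {A for (A, a) in terminal_rules if a == ch}
    for length in range(2, n + 1):         # length of the span
        for i in range(n - length + 1):    # start of the span
            for split in range(1, length): # where the span is cut in two
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for (A, B, C) in binary_rules:
                    if B in left and C in right:
                        table[i][length - 1].add(A)
    return start in table[0][n - 1]

# The language a^n b^n (n >= 1) in Chomsky normal form:
# S -> A B | A T,  T -> S B,  A -> a,  B -> b
binary = [("S", "A", "B"), ("S", "A", "T"), ("T", "S", "B")]
terminal = [("A", "a"), ("B", "b")]
print(cyk("aabb", binary, terminal), cyk("aab", binary, terminal))   # True False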
In the six decades since Chomsky published his seminal papers, a lot of work has been done to try to find useful intermediate restrictions between type 1 and type 2 grammars. And there has been a lot of useful study into algorithms for parsing context-free languages, which is of enormous utility in building compilers. All of this builds on the crucial work done by Chomsky and the other computational theorists whose work he built on -- Markov, Turing, Church and Kleene, just to name a few worthy of study. But Chomsky's original project remains unsolved.
So if your goal is to build a simple parser for a programming language, the Chomsky hierarchy is probably just an interesting footnote. But if you are interested in the academic study of formal language theory, there are still lots of interesting unsolved problems to work on.

Neural Networks For Generating New Programming Language Grammars

I have recently had the need to create an ANTLR language grammar for the purpose of a transpiler (converting one scripting language to another). It occurs to me that Google Translate does a pretty good job translating natural language. We have all manner of recurrent neural network models and LSTMs, and GPT-2 is generating grammatically correct text.
Question: Is there a model sufficient to train on grammar/code example combinations for the purpose of then outputting a new grammar file given an arbitrary example source-code?
I doubt any such model exists.
The main issue is that languages are generated from their grammars, and it is next to impossible to convert back, due to the infinite number of parse trees (combinations) available for various source codes.
So in your case, say you train on Python code (1,000 sample programs): the target grammar is the same for every training example, so the model will always generate the same grammar irrespective of the example source code.
If you use training samples from a number of languages, the model still can't generate the grammar, since there are infinitely many possibilities.
Your example of Google Translate works for real-life translation because small errors are acceptable, but those models don't rely on generating the underlying grammar for each language. There are some tools that can translate between programming languages, for example, but they don't generate the grammar; they work from a grammar that is already given.
Update
How to learn grammar from code.
After comparing this to some NLP concepts, I have a list of issues that may arise and ways to counter them.
Dealing with variable names, coding structures and tokens.
For understanding the grammar, we'll have to break the code down to its bare minimum form. This means understanding what each and every term in the code means. Have a look at the example below.
The already simple expression is reduced to a parse tree. We can see that the tree breaks down the expression and tags each number as a factor. This is really important to get rid of the human element of the code (such as variable names, etc.) and dive into the actual grammar. In NLP this concept is known as Part-of-Speech tagging. You'll have to develop your own method to do the tagging, but it's easy given that you know the grammar for the language.
Understanding the relations
For this, you can tokenize the reduced code and train a model based on the output you are looking for. In case you want to write code, make use of an n-gram model or an LSTM, like this example. The model will learn the grammar, but extracting it is not a simple task. You'll have to run separate code to try and extract all the possible relations learned by the model.
Example
Code snippet
# Sample code
int a = 1 + 2;
cout<<a;
Tags
# Sample tags and tokens
int a = 1 + 2 ;
[int] [variable] [operator] [factor] [expr] [factor] [end]
Leaving the operator, expr and keyword tokens as they are shouldn't matter if there is enough data present, but they will become a part of the learned grammar.
This is a sample to help understand my idea. You can improve on this by having a deeper look at the Theory of Computation and understanding the working of the automata and the different grammars.
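To make the tagging and relation-counting steps a bit more concrete, here is a minimal sketch (my own code; the tag set mirrors the example above but the rules are simplified, so '+' comes out as [operator] here rather than [expr], and a real system would use a proper lexer for the language being learned).

import re
from collections import Counter

KEYWORDS = {"int", "float", "cout", "return"}

def tag(token):
    # map a raw token to a coarse tag, stripping out the "human element"
    if token in KEYWORDS:
        return "[" + token + "]"
    if re.fullmatch(r"\d+", token):
        return "[factor]"
    if re.fullmatch(r"[A-Za-z_]\w*", token):
        return "[variable]"
    if token == ";":
        return "[end]"
    return "[operator]"

def tag_line(line):
    tokens = re.findall(r"\w+|[^\s\w]", line)
    return [tag(t) for t in tokens]

def tag_bigrams(lines):
    # counts of adjacent tag pairs: the "relations" a model would learn
    counts = Counter()
    for line in lines:
        tags = tag_line(line)
        counts.update(zip(tags, tags[1:]))
    return counts

print(tag_line("int a = 1 + 2;"))
# ['[int]', '[variable]', '[operator]', '[factor]', '[operator]', '[factor]', '[end]']
print(tag_bigrams(["int a = 1 + 2;", "int b = a + 3;"]).most_common(3))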
What you're describing is 'just' learning the structure of Context-Free Grammars.
I'm not sure if this approach will actually work for your case, but it's a long-standing problem in NLP: grammar induction for Context-Free Grammars. An example introduction to tackling this problem using statistical learning methods can be found in Charniak's Statistical Language Learning.
Note that what I described is about CFGs in general, but you might want to check induction for LL grammars, because parser generators mostly use these types of grammars.
I know nothing about ANTLR, but there are pretty good examples of translating natural language e.g. into valid SQL requests: http://nlpprogress.com/english/semantic_parsing.html#sql-parsing.

Understanding and Writing Parsers

I'm writing a program that requires me to create my first real, somewhat complicated parser. I would like to understand what parsing algorithms exist, as well as how to create a "grammar". So my question(s) are as follows:
1) How does one create a formal grammar that a parser can understand? What are the basic components of a grammar?
2) What parsing algorithms exist, and what kind of input does each excel at parsing?
3) In light of the broad nature of the questions above, what are some good references I can read through to understand the answer to questions 1 and 2?
I'm looking for more of a broad overview with the keywords/topic areas I need so I can look into the details myself. Thanks everybody!
You generally write a context-free grammar G that describes a certain formal language L (e.g. the set of all syntactically valid C programs) which is simply a set of strings over a certain alphabet (think of all well-formed C programs; or of all well-formed HTML documents; or of all well-formed MARKDOWN posts; all of these are sets of finite strings over certain subsets of the ASCII character set). After that you come up with a parser for the given grammar---that is, an algorithm that, given a string w, decides whether the string w can be derived by the grammar G. (For example, the grammar of the C11 language describes the set of all well-formed C programs.)
Some types of grammars admit simple-to-implement parsers. One family of grammars that is often used in practice is the LL grammars. A special subset of LL grammars, called the LL(1) grammars, has parsers that run in linear time (linear in the length of the string we're parsing).
There are more general parsing algorithms---most notably the Earley parser and the CYK algorithm---that take as input a string w and a grammar G and decide in time O(|w|^3) whether the string w is derivable by the grammar G. (Notice how cool this is: the algorithm takes the grammar as an argument. But I don't think this is used in practice.)
I implemented the Earley parser in Java some time ago. If you're interested, the code is available on GitHub.
For a concrete example of the whole process, consider the language of all balanced strings of parentheses: (), (()), ((()))()(())(), etc. We can describe them with the following context-free grammar:
S -> (S) | SS | eps
where eps denotes the empty string. For example, we can derive the string (())() as follows: S => SS => (S)S => ((S))S => (())S => (())(S) => (())(). We can easily implement a parser for this grammar (left as an exercise :-).
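One sketch of the exercise (mine, not part of the original answer): because S -> SS makes the grammar ambiguous, it helps to first rewrite it into the equivalent form S -> ( S ) S | eps, which generates the same language and can be followed directly by a recursive-descent recogniser, with one function per nonterminal and a single token of lookahead.

def balanced(s):
    pos = 0

    def parse_S():
        nonlocal pos
        # S -> ( S ) S  when the lookahead is '(' ;  S -> eps otherwise
        if pos < len(s) and s[pos] == "(":
            pos += 1                       # consume '('
            parse_S()
            if pos >= len(s) or s[pos] != ")":
                raise SyntaxError("expected ')' at position %d" % pos)
            pos += 1                       # consume ')'
            parse_S()

    try:
        parse_S()
        return pos == len(s)               # the whole input must be consumed
    except SyntaxError:
        return False

for w in ["", "()", "(())()", "(()", ")("]:
    print(repr(w), balanced(w))            # True True True False False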
A very good reference is the so-called dragon book: Compilers: Principles, Techniques, and Tools by Aho et al. It covers all the essential topics. Another good reference is the classic book Introduction to Automata Theory, Languages, and Computation by Hopcroft et al.

How to check Natural Language Sentence Structure validity using parser in java?

I am working on a project in which there is a part where I will have to input a sentence to check whether it is a valid sentence or not.
For example, if I give the input as "I am working at home", then the output will give me "Valid Sentence" where if I give the input as "I working home am at", it will give me "Invalid Sentence".
I have searched for natural language parsing methods (NLP tools such as the Stanford Parser), but it would be helpful if someone could guide me through some Java examples for this kind of problem.
Thanks in advance for any help.
Whether you use parse trees or not, you will need to use a Markov process to check validity. The features can be word sequences, part-of-speech tag sequences, parse tree segments (i.e. production rules and their extensions), etc. For these, you would use a tokenizer, a POS tagger and a natural language parser, respectively.
The validity check will also be a probabilistic score, not an absolute truth. All (or almost all) natural language parsers are statistical. Which means they require training data. These parsers use context-free grammars or mildly context-sensitive grammars such as CCG or TAG, which are among the best computational approximations of natural language grammars.
Essentially, the model will tell you how likely is it for a feature to appear in a valid sentence after a certain sequence of features has already been seen. That is, it will allow you to compute probabilities of the form P("at"|"am working") and P("at"|"home am"). The former should have a higher probability than the latter. You will need to experimentally determine how high a probability should be in order for a sentence to be considered as valid.
As qqlihq commented, these are under the broad definition of language models. For sentence validity, however, you will usually not need to measure perplexity. The conditional probability measures should suffice.
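As a toy illustration of that kind of score (my own sketch: it uses plain bigram counts with add-alpha smoothing rather than the two-word histories, trained taggers and parsers described above, and the tiny corpus is made up):

from collections import Counter

corpus = [
    "i am working at home",
    "i am working at the office",
    "she is working at home",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words[:-1])            # counts of each conditioning word
    bigrams.update(zip(words, words[1:]))

def p(word, prev, alpha=0.1, vocab_size=20):
    # P(word | prev) with add-alpha smoothing so unseen pairs get a small score
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

def score(sentence):
    words = sentence.split()
    result = 1.0
    for prev, word in zip(words, words[1:]):
        result *= p(word, prev)
    return result

print(score("i am working at home"))   # comparatively high
print(score("i working home am at"))   # several orders of magnitude lower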

Finding interrogative sentences

I need to detect whether a given sentence has interrogative characteristics. Is there any available gem (RoR) that implements this with NLP libraries, or any current state-of-the-art implementation?
In general, NLP code is written in Python or Java.
For this problem, though, I think you could hack together something that's basically: does it start with a W-word or 'Are', and end with a question mark? More advanced, you could take a few thousand question sentences, create features similar to the quick hack (i.e. first word, final character) and then train a machine learning model on it. You'd probably do this in Python the quickest, and then you could write the model interpreter in Ruby (the interpreter is easy).
Or you could just write a simple Naive Bayes classifier!
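A sketch of that quick hack in Python (my own code; the word list and feature choices are only illustrative, and a more robust version would train a classifier such as naive Bayes on a few thousand labelled sentences using these same features):

QUESTION_STARTERS = {
    "what", "when", "where", "who", "whom", "whose", "why", "which", "how",
    "is", "are", "am", "was", "were", "do", "does", "did",
    "can", "could", "will", "would", "should",
}

def looks_interrogative(sentence):
    # two cheap features: the first word and the final character
    words = sentence.strip().lower().split()
    if not words:
        return False
    starts_like_question = words[0].strip("?!.,") in QUESTION_STARTERS
    ends_with_qmark = sentence.rstrip().endswith("?")
    return starts_like_question or ends_with_qmark

print(looks_interrogative("Where is the station?"))        # True
print(looks_interrogative("I need to detect questions."))  # False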
