Latin language dependency parser

I know about the StanfordNLP parser, but it does not cater for Latin. Does anyone know of a dependency parser for Latin, please?

Unfortunately, we do not distribute any models for Latin at this time, but you can train your own.
There are instructions for training your own model here:
https://stanfordnlp.github.io/CoreNLP/depparse.html
That page also links to where Latin training data can be found.
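For reference, here is a rough sketch of what that training run might look like, driven from Python. This is only an illustration: all file names are placeholders, the treebank must be in the CoNLL dependency format the tool expects, and you need pre-trained Latin word embeddings; the flags are the ones documented on the depparse page.

```python
# Hypothetical sketch: training the CoreNLP neural dependency parser on Latin
# data. Every file path below is a placeholder you must supply yourself.
import subprocess

subprocess.run([
    "java", "-Xmx8g", "-cp", "stanford-corenlp.jar",   # path to your CoreNLP jar
    "edu.stanford.nlp.parser.nndep.DependencyParser",
    "-trainFile", "latin-train.conll",   # Latin dependency treebank (training split)
    "-devFile",   "latin-dev.conll",     # development split for evaluation
    "-embedFile", "latin-vectors.txt",   # pre-trained Latin word embeddings
    "-model",     "latin-depparse.model.txt.gz",
], check=True)
```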

Related

Arabic text summarization using BERT (AraBert)

Among the models that summarize Arabic text, I chose AraBert.
After installing the necessary tools and libraries, I want to know how to use this model for summarization, in case some of you have already used it.
What steps or instructions must I follow? I am still a beginner in this field.
Could you please guide me?
https://huggingface.co/ahmeddbahaa/AraBART-finetuned-ar
You can't use AraBert for summarization, because summarization is an advanced NLU (Natural Language Understanding) task, so you would need a large corpus, like the one used to train BERT for English.
You can use AraBert for NER, sentiment analysis, and question answering.
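If you want to try the AraBART checkpoint linked in the question, a minimal sketch using the Hugging Face transformers summarization pipeline might look like the following (this assumes transformers and a PyTorch backend are installed; output quality depends entirely on that checkpoint):

```python
# Minimal sketch: abstractive summarization with the AraBART checkpoint
# linked in the question. Generation parameters are illustrative defaults.
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/AraBART-finetuned-ar")

text = "..."  # your Arabic article text goes here
result = summarizer(text, max_length=128, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```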

How to convert from Stanford Universal Dependencies to Phrase Grammar?

In my application I am using Stanford CoreNLP to parse English text into a graph data structure (Universal Dependencies).
After some modifications of the graph I need to generate natural language output, for which I am using SimpleNLG: https://github.com/simplenlg/simplenlg
However, SimpleNLG uses a phrase grammar.
Therefore, in order to use SimpleNLG successfully for natural language generation, I need to convert from Universal Dependencies to a phrase grammar.
What is the easiest way of achieving this?
So far I have only come across this article on the topic:
http://delivery.acm.org/10.1145/1080000/1072147/p14-xia.pdf
I am hoping that someone might have some more practical code examples to share on this issue?
Phrase-structure trees contain more information than dependency trees and therefore you cannot deterministically convert dependency trees to phrase-structure trees.
But if you are using CoreNLP to parse the sentences, take a look at the parse annotator. Unlike the dependency parser, this parser also outputs phrase-structure trees, so you can use this annotator to directly parse your sentences to phrase-structure trees.
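As a minimal sketch of that suggestion, here is how you might request phrase-structure trees through the parse annotator from Python via Stanza's CoreNLP client (this assumes CoreNLP is installed and CORENLP_HOME points at it; in Java you would request the same annotators through the StanfordCoreNLP properties):

```python
# Sketch: getting phrase-structure (constituency) trees from the CoreNLP
# "parse" annotator instead of dependency graphs.
from stanza.server import CoreNLPClient

with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "parse"],
                   timeout=30000, memory="4G") as client:
    ann = client.annotate("The quick brown fox jumps over the lazy dog.")
    tree = ann.sentence[0].parseTree   # a phrase-structure tree, not a dependency graph
    print(tree)
```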

Choose the best NLP parsers

I want to analyze a sentence using a context-free grammar for NLP tasks.
I want to know which parsers, such as the Stanford Parser, MaltParser, and so on, would be best.
What are the advantages and disadvantages of constituency (syntactic) parsing versus dependency parsing in those parsers?
What library support do they offer for programming languages such as Java, PHP, and so on?
I recommend the Stanford parser. It comes with a powerful Java library. You should check this site for more information about the Stanford parser: http://stanfordnlp.github.io/CoreNLP/. You can also run some demos at http://nlp.stanford.edu:8080/corenlp/
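If you prefer Python over Java, a minimal dependency-parsing sketch with Stanza (the Stanford NLP group's Python package, not mentioned in the answer above) could look like this:

```python
# Sketch: dependency parsing with Stanza's English pipeline.
import stanza

stanza.download("en")   # fetch the English models on first run
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

doc = nlp("The Stanford parser comes with a full Java library.")
for word in doc.sentences[0].words:
    # each word points at its syntactic head and carries a dependency relation
    print(word.id, word.text, "->", word.head, word.deprel)
```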

Packrat parsing vs. LALR parsing

A lot of websites state that packrat parsers can parse input in linear time.
So at first glance they may be faster than an LALR parser constructed by tools such as yacc or bison.
I wanted to know whether the performance of packrat parsers is better or worse than that of LALR parsers when tested with common input (like programming language source files) rather than with theoretical inputs.
Can anyone explain the main differences between the two approaches?
Thanks!
I'm not an expert at packrat parsing, but you can learn more at Parsing expression grammar on Wikipedia.
I haven't dug into it so I'll assume the linear-time characterization of packrat parsing is correct.
L(AL)R parsers are linear time parsers too. So in theory, neither packrat nor L(AL)R parsers are "faster".
What matters, in practice, of course, is implementation. L(AL)R state transitions can be executed in very few machine instructions ("look token code up in vector, get next state and action") so they can be extremely fast in practice. By "compiling" L(AL)R parsing to machine code, you can end up with lightning fast parsers, as shown by this 1986 Tom Pennello paper on Very Fast LR parsing. (Machines are now 20 years faster than when he wrote the paper!).
If packrat parsers are storing/caching results as they go, they may be linear time, but I'd guess the constant overhead would be pretty high, and then L(AL)R parsers in practice would be much faster. From what I hear, the yacc and Bison implementations are pretty good.
If you care about the answer, read the basic technical papers closely; if you really care, then implement one of each and check out the overhead constants. My money is strongly on L(AL)R.
An observation: most language front-ends don't spend most of their time "parsing"; rather, they spend a lot of time in lexical analysis. Optimize that (your bio says you are), and the parser speed won't matter much.
(I used to build LALR parser generators and corresponding parsers. I don't do that anymore; instead I use GLR parsers, which are linear time in practice but handle arbitrary context-free grammars. I give up some performance, but I can [and do, see bio] build dozens of parsers for many languages without a lot of trouble.)
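To make the "look token up, get next state and action" step concrete, here is a toy table-driven LR parse loop for the grammar E -> E '+' 'n' | 'n'. The tables are hand-built for illustration, not generated by yacc or bison:

```python
# Toy LR parser: hand-built ACTION/GOTO tables for E -> E '+' 'n' | 'n'.
ACTION = {
    (0, "n"): ("shift", 1),
    (1, "+"): ("reduce", "E", 1), (1, "$"): ("reduce", "E", 1),
    (2, "+"): ("shift", 3),       (2, "$"): ("accept",),
    (3, "n"): ("shift", 4),
    (4, "+"): ("reduce", "E", 3), (4, "$"): ("reduce", "E", 3),
}
GOTO = {(0, "E"): 2}

def parse(tokens):
    stack, i = [0], 0                     # stack of LR states; input cursor
    tokens = tokens + ["$"]               # end-of-input marker
    while True:
        action = ACTION.get((stack[-1], tokens[i]))
        if action is None:
            raise SyntaxError(f"unexpected {tokens[i]!r}")
        if action[0] == "shift":          # consume the token, push the next state
            stack.append(action[1]); i += 1
        elif action[0] == "reduce":       # pop one state per RHS symbol, goto on LHS
            lhs, rhs_len = action[1], action[2]
            del stack[len(stack) - rhs_len:]
            stack.append(GOTO[(stack[-1], lhs)])
        else:                             # accept
            return True

print(parse(["n", "+", "n"]))             # True
```

Each step is a couple of table lookups; a generated parser does the same thing with packed arrays, which is why the per-token cost can be so small.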
I am the author of LRSTAR, an open-source LR(k) parser generator. Because people have shown interest in it, I have put the product back online here: LRSTAR.
I have studied the speed of LALR parsers and DFA lexers for many years. Tom Pennello's paper is very interesting, but is more of an academic exercise than a real-world solution for compilers. However, if all you want is a pattern recognizer, then it may be the perfect solution for you.
The problem is that real-world compilers usually need to do more than pattern recognition, such as symbol-table look-up for incoming symbols, error recovery, provide an expecting list (statement completion information), and build an abstract-syntax tree while parsing.
In 1989, I compared the parsing speed of LRSTAR parsers to that of "yacc" and found that they were twice the speed of "yacc" parsers. LRSTAR parsers use the ideas published in the paper "Optimization of Parser Tables for Portable Compilers".
For lexer (lexical analysis) speed I discovered in 2009 that "re2c" was generating the fastest lexers, about twice the speed of those generated by "flex". I was rewriting the LRSTAR lexer generator section at that time and found a way to make lexers that are almost as fast as "re2c" and much smaller. However, I prefer the table-driven lexers that LRSTAR generates, because they are almost as fast and the code compiles much quicker.
BTW, compiler front-ends generated by LRSTAR can process source code at a speed of 2,400,000 lines per second or faster. The lexers generated by LRSTAR can process 30,000,000 tokens per second. The testing computer was a 3.5 GHz machine (from 2010).
[2015/02/15] Here is the 1986 Tom Pennello paper on Very Fast LR parsing:
http://www.genesishistory.org/content/ProfPapers/VF-LRParsing.pdf
I know this is an old post, but I stumbled on this paper a month or so ago: https://www.mercurylang.org/documentation/papers/packrat.pdf and happened to see this post today.
The watered-down version of what the paper says: packrat memoisation is a mixed blessing. The best results can be achieved if you have some heuristics for how often a given rule is going to match. Essentially, it only makes sense to memoise rules that have two properties: (1) few elements, (2) very common.
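As a tiny illustration of selective memoisation, here is a sketch of a PEG-style recursive-descent parser in which only one small, frequently used rule is memoised, in the spirit of the paper's heuristic (the grammar and rule names are made up for the example):

```python
# Sketch: packrat-style memoisation applied to a single rule.
from functools import lru_cache

text = "1+2+3"

@lru_cache(maxsize=None)           # the packrat memo table: position -> result
def number(pos):
    """number <- [0-9]  (small and very common: worth memoising)"""
    return pos + 1 if pos < len(text) and text[pos].isdigit() else None

def expr(pos):
    """expr <- number ('+' number)*  (larger rule: left unmemoised)"""
    pos = number(pos)
    while pos is not None and pos < len(text) and text[pos] == "+":
        nxt = number(pos + 1)
        if nxt is None:
            break
        pos = nxt
    return pos

print(expr(0) == len(text))        # True: the whole input matched
```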
Performance is mostly a matter of language design. For each language, there will be an approach, technology, or parser generator that fits best.
I can't prove it without more thought, but I think that nothing can beat a top-down recursive-descent parser in which the semantics drive the parser and the parser drives the lexer, performance-wise. It would also be among the most versatile and easiest to maintain of the implementations.

Resources for writing a recursive descent parser by hand

I want to write a recursive descent parser by hand, and I'm looking for good resources on how to structure it, the algorithms involved, etc.
There is a good tutorial on CodeProject called "Compiler Patterns"; these days you can even just Google "compiler patterns".
http://www.codeproject.com/Articles/286121/Compiler-Patterns
The article covers most aspects of building a simple compiler (the back-end, the BNF, and the patterns used to implement the various BNF rules), but it is not very heavy on theory, or even on why a recursive descent compiler works to convert language input into code.
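To give a feel for the structure such tutorials teach, here is a minimal hand-written recursive descent parser (really a parser-evaluator) for arithmetic expressions; the grammar and class names are my own illustration, not taken from the article:

```python
# Sketch: recursive descent parsing with one function per grammar rule.
class Parser:
    def __init__(self, text):
        self.text = text.replace(" ", "")
        self.pos = 0

    def peek(self):
        return self.text[self.pos] if self.pos < len(self.text) else None

    def eat(self, ch):
        if self.peek() != ch:
            raise SyntaxError(f"expected {ch!r} at position {self.pos}")
        self.pos += 1

    def expr(self):                       # expr := term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.peek(); self.eat(op)
            value = value + self.term() if op == "+" else value - self.term()
        return value

    def term(self):                       # term := factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.peek(); self.eat(op)
            value = value * self.factor() if op == "*" else value / self.factor()
        return value

    def factor(self):                     # factor := NUMBER | '(' expr ')'
        if self.peek() == "(":
            self.eat("("); value = self.expr(); self.eat(")")
            return value
        start = self.pos
        while self.peek() is not None and self.peek().isdigit():
            self.pos += 1
        if start == self.pos:
            raise SyntaxError(f"expected a number at position {self.pos}")
        return int(self.text[start:self.pos])

print(Parser("2*(3+4)-5").expr())         # 9
```

The key pattern is one function per grammar rule, with each function consuming input and calling the functions for the rules it references.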
I can suggest "Crafting a Compiler" by Charles N. Fischer and Richard J. LeBlanc.
Edit: here is the updated edition: http://www.amazon.com/Crafting-Compiler-Charles-N-Fischer/dp/0136067050
