Arabic text summarization using BERT (AraBERT)

Among the models that can summarize Arabic text, I chose AraBERT.
After installing the necessary tools and libraries,
I would like to know how to use this model for summarization, if some of you have already used it.
What steps or instructions should I follow? I am still a beginner in this field.
Could you please guide me?
https://huggingface.co/ahmeddbahaa/AraBART-finetuned-ar

You can't use AraBERT for summarization, because summarization is an advanced NLU (Natural Language Understanding) task, so you would need a large corpus like the one used to train BERT for English.
You can use AraBERT for NER, sentiment analysis and question answering.
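For completeness: if the AraBART checkpoint linked in the question really is a seq2seq model fine-tuned for Arabic summarization (its name suggests so, but check the model card), a minimal sketch with the Hugging Face transformers summarization pipeline would look roughly like this; the input text and generation lengths are placeholders:

    from transformers import pipeline

    # Assumption: the linked checkpoint is a seq2seq model usable with the
    # generic summarization pipeline; verify this on the model card first.
    summarizer = pipeline("summarization", model="ahmeddbahaa/AraBART-finetuned-ar")

    arabic_text = "..."  # the Arabic article you want to summarize goes here

    result = summarizer(arabic_text, max_length=128, min_length=20, do_sample=False)
    print(result[0]["summary_text"])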

Related

Learning word alignment from nltk

I have a parallel corpus for English-German. Is there a way to extract a word alignment table from this corpus using NLTK? I don't know whether nltk.align is supposed to do this; I can't figure it out from the documentation.
Look at the source of the modules in the nltk.translate package (previously known as nltk.align); you'll find descriptions of the available algorithms and references to the research literature that explains them in more detail.
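For example, here is a minimal sketch of training IBM Model 1 from nltk.translate on a toy German-English bitext; the sentence pairs are placeholders, and with a real parallel corpus you would build the AlignedSent list from your files:

    from nltk.translate import AlignedSent, IBMModel1

    # Toy parallel corpus: each AlignedSent pairs a German sentence with its
    # English translation.
    bitext = [
        AlignedSent(['klein', 'ist', 'das', 'haus'], ['the', 'house', 'is', 'small']),
        AlignedSent(['das', 'haus', 'ist', 'ja', 'gross'], ['the', 'house', 'is', 'big']),
        AlignedSent(['das', 'buch', 'ist', 'klein'], ['the', 'book', 'is', 'small']),
    ]

    # Train IBM Model 1 with 5 EM iterations.
    ibm1 = IBMModel1(bitext, 5)

    # Learned word translation probabilities ...
    print(ibm1.translation_table['haus']['house'])

    # ... and the most likely alignment for each sentence pair.
    print(bitext[0].words, bitext[0].mots, bitext[0].alignment)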

Research papers classification on the basis of title of the research paper

Dear all, I am working on a project in which I have to categorize research papers into their appropriate fields using the titles of the papers. For example, if the phrase "computer network" occurs somewhere in the title, then the paper should be tagged as related to the concept "computer network". I have 3 million titles of research papers, so I want to know how I should start. I have tried to use TF-IDF but could not get useful results. Does someone know of a library to do this task easily? Kindly suggest one. I shall be thankful.
If you don't know the categories in advance, then it's not classification but clustering. Basically, you need to do the following:
Select an algorithm.
Select and extract features.
Apply the algorithm to the features.
Quite simple. You only need to choose the combination of algorithm and features that fits your case best.
When talking about clustering, there are several popular choices. K-means is considered one of the best and has an enormous number of implementations, even in libraries not specialized in ML. Another popular choice is the Expectation-Maximization (EM) algorithm. Both of them, however, require an initial guess about the number of classes. If you can't predict the number of classes even approximately, other algorithms, such as hierarchical clustering or DBSCAN, may work better for you (see the discussion here).
As for features, the words themselves normally work fine for clustering by topic. Just tokenize your text, then normalize and vectorize the words (see this if you don't know what it all means).
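As a concrete illustration, a minimal scikit-learn sketch of this pipeline (TF-IDF vectorization followed by k-means) could look like the following; the titles and the number of clusters are placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Placeholder titles; in practice this would be the 3 million paper titles.
    titles = [
        "A survey of computer network protocols",
        "Routing algorithms in wireless networks",
        "Convolutional neural networks for vision",
        "Deep learning for image classification",
    ]

    # Tokenize, normalize and vectorize the titles in one step.
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    features = vectorizer.fit_transform(titles)

    # Cluster into a guessed number of topics (2 here, purely for illustration).
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = kmeans.fit_predict(features)

    for title, label in zip(titles, labels):
        print(label, title)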
Some useful links:
Clustering text documents using k-means
NLTK clustering package
Statistical Machine Learning for Text Classification with scikit-learn and NLTK
Note: all links in this answer are about Python, since it has really powerful and convenient tools for this kind of task, but if you prefer another language, you will most probably be able to find similar libraries for it too.
For Python, I would recommend NLTK (Natural Language Toolkit), as it has some great tools for converting your raw documents into features you can feed to a machine learning algorithm. To start out, you can try a simple word frequency model (bag of words) and later move on to more complex feature extraction methods (string kernels). You can start by using SVMs (Support Vector Machines) to classify the data with LibSVM (the best SVM package).
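If the categories are known up front, here is a rough sketch of the bag-of-words plus SVM route, shown with scikit-learn's LinearSVC rather than LibSVM directly; the labelled titles are made-up placeholders:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny made-up training set of (title, category) pairs.
    train_titles = [
        "A survey of computer network protocols",
        "Routing algorithms in wireless networks",
        "Convolutional neural networks for vision",
        "Deep learning for image classification",
    ]
    train_labels = ["networks", "networks", "vision", "vision"]

    # Bag-of-words features fed into a linear SVM classifier.
    model = make_pipeline(CountVectorizer(), LinearSVC())
    model.fit(train_titles, train_labels)

    print(model.predict(["congestion control in computer networks"]))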
Since you do not know the number of categories in advance, you could use a tool called OntoGen. The tool basically takes a set of texts, does some text mining, and tries to discover clusters of documents. It is a semi-supervised tool, so you must guide the process a little, but it does wonders. The final product of the process is an ontology of topics.
I encourage you to give it a try.

Keyword extraction from short Dutch texts

I would like to extract keywords from short Dutch texts. Is there an API for this, or some library I could use?
In case those are not available for Dutch, any tips on how to extract keywords myself are also appreciated. I already tried running the texts through a part-of-speech tagger and a lemmatizer, but from then on I find it quite difficult to extract decent keywords. TF-IDF is not useful since the texts are too short to get good results.
I prefer Java, but implementations in any other language are also very welcome.
Here is my video series on text mining with RapidMiner. It shows how to easily get the TF-IDF and more:
http://vancouverdata.blogspot.ca/2010/11/text-analytics-with-rapidminer-loading.html

OpenNLP vs Stanford NLP tools vs Berkeley

Hi, the aim is to parse a sizeable corpus like Wikipedia to generate the most probable parse trees and to perform named entity recognition. Which is the best library to achieve this in terms of performance and accuracy? Has anyone used more than one of the above libraries?
In my experiments I use the Stanford tagger, but it really depends on the quality of your Wikipedia articles. Here you will find a comparison of different part-of-speech tagging implementations: PoS on aclweb.
I'm currently using the Enju HPSG parser, which seems to be better than the others.
Refer to this paper: http://nlp.stanford.edu/pubs/lrecstanforddeps_final_final.pdf

Naive Bayes for topic detection using the "Bag of Words" approach

I am trying to implement a naive Bayes approach to find the topic of a given document or stream of words. Is there a naive Bayes implementation that I might be able to look up for this?
Also, I am trying to improve my dictionary as I go along. Initially, I have a bunch of words that map to topics (hard-coded). Depending on the occurrences of words other than the ones already mapped, I want to add them to the mappings, thereby learning about new words that map to a topic and also adjusting the probabilities of words.
How should I go about doing this? Is my approach the right one?
Which programming language would be best suited for the implementation?
Existing Implementations of Naive Bayes
You would probably be better off just using one of the existing packages that support document classification using naive Bayes, e.g.:
Python - To do this using the Python-based Natural Language Toolkit (NLTK), see the Document Classification section in the freely available NLTK book (a minimal sketch follows this list).
Ruby - If Ruby is more of your thing, you can use the Classifier gem. Here's sample code that detects whether Family Guy quotes are funny or not-funny.
Perl - Perl has the Algorithm::NaiveBayes module, complete with a sample usage snippet in the package synopsis.
C# - C# programmers can use nBayes. The project's home page has sample code for a simple spam/not-spam classifier.
Java - Java folks have Classifier4J. You can see a training and scoring code snippet here.
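For the Python/NLTK option above, a minimal sketch of training a bag-of-words naive Bayes topic classifier might look like this; the documents and topics are made-up placeholders:

    import nltk

    # Tiny made-up training set of (text, topic) pairs.
    train_data = [
        ("the striker scored a late goal in the match", "sports"),
        ("the team won the championship final", "sports"),
        ("the central bank raised interest rates", "finance"),
        ("stock markets fell after the earnings report", "finance"),
    ]

    def bag_of_words(text):
        # NLTK expects each document as a dict of feature name -> value.
        return {word: True for word in text.lower().split()}

    train_set = [(bag_of_words(text), topic) for text, topic in train_data]
    classifier = nltk.NaiveBayesClassifier.train(train_set)

    # Classify a new document and inspect the most informative words.
    print(classifier.classify(bag_of_words("the goalkeeper saved the penalty")))
    classifier.show_most_informative_features(5)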
Bootstrapping Classification from Keywords
It sounds like you want to start with a set of keywords that are known to cue for certain topics and then use those keywords to bootstrap a classifier.
This is a reasonably clever idea. Take a look at the paper Text Classification by Bootstrapping with Keywords, EM and Shrinkage by McCallum and Nigam (1999). By following this approach, they were able to improve classification accuracy from the 45% they got using hard-coded keywords alone to 66% using a bootstrapped naive Bayes classifier. For their data, the latter is close to human levels of agreement, since people agreed with each other about document labels 72% of the time.
