Using BERT on a domain-specific corpus

I am planning to use BERT on my domain-specific corpus. Should I retrain BERT from scratch (pre-training), or can I do fine-tuning instead? How can I add my new vocabulary if I need to do fine-tuning? Thanks!
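For context, the common middle ground is to keep the pre-trained weights and continue training on the domain corpus rather than pre-training from scratch; new vocabulary can usually be added to the tokenizer first. A minimal sketch with the Hugging Face transformers library (the domain terms are made-up examples):

    from transformers import BertTokenizer, BertForMaskedLM

    # Load the pre-trained tokenizer and model (masked-LM head, for continued pre-training).
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")

    # Hypothetical domain-specific terms -- replace with vocabulary mined from your corpus.
    new_tokens = ["myelofibrosis", "ruxolitinib"]
    tokenizer.add_tokens(new_tokens)

    # Grow the embedding matrix so the new tokens get (randomly initialised) vectors,
    # which are then learned while training on the domain corpus.
    model.resize_token_embeddings(len(tokenizer))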

Related

Is word2vec a generalization or a memorization algorithm?

I need to know whether word2vec is a generalization algorithm, like most ML algorithms, or a memorization algorithm like KNN.
Since there are two types of algorithms, model-based and memory-based, which category does word2vec fall into when it is used for most_similar_items?
Let me define generalization as the ability of a model that has completed training to predict effectively across a whole range of inputs, including inputs that were not part of training. From that perspective, Word2Vec cannot predict words that are not part of the training dataset, because it has never trained on their contexts and so never created embeddings for them. To qualify as a generalization method, it would need to be able to predict on an input that was not part of the training dataset.
The Word2Vec model maintains a dictionary mapping each known word to its corresponding embedding/vector; in short, it cannot predict on unknown words. This is one of the important differences between the fastText model and Word2Vec.
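To see the difference concretely, here is a small sketch with gensim (the toy corpus is made up): Word2Vec has no entry for an out-of-vocabulary word, while fastText can still compose a vector from character n-grams.

    from gensim.models import Word2Vec, FastText

    sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]  # toy corpus

    w2v = Word2Vec(sentences, vector_size=10, min_count=1)
    ft = FastText(sentences, vector_size=10, min_count=1)

    print("cats" in w2v.wv.key_to_index)  # False: never seen during training
    # w2v.wv["cats"] would raise a KeyError -- no dictionary entry exists.

    print(ft.wv["cats"][:3])  # fastText assembles a vector from character n-grams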

How to perform incremental training on a large dataset using the (scikit-learn) AdaBoost classifier?

I have a large training dataset, so in order to fit it into the AdaBoost classifier I would like to train incrementally.
Just as xgb has a parameter called xgb_model that lets a trained XGBoost model continue fitting on new data, I am looking for such a parameter in the AdaBoost classifier.
Currently I am trying to call the fit function iteratively to train the model, but it seems my classifier does not keep the previous weights. How can I solve this?
It's not possible out-of-the-box. sklearn supports incremental/online training in some estimators, but not AdaBoostClassifier.
The estimators that support incremental training are listed in the scikit-learn documentation and have a special method named partial_fit().
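If a boosted model is not a hard requirement, one workaround is to switch to one of those estimators. A minimal sketch with SGDClassifier; load_batches() here is a hypothetical stand-in for however you chunk your large dataset:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier()        # linear classifier trained with SGD; implements partial_fit()
    classes = np.array([0, 1])   # every class label must be declared on the first call

    for X_batch, y_batch in load_batches():  # hypothetical chunked data loader
        clf.partial_fit(X_batch, y_batch, classes=classes)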
See gaenari, an incremental decision tree written in C++. It supports:
insert(csv), update(), and rebuild()
report()

How to use doc2vec embeddings as an input to a neural network

I'm trying to slowly begin working on a Twitter recommender system as part of a project, which requires me to use some form of deep learning. My goal is to recommend other tweets based on the topical content of a tweet with unlabelled data.
I have pre-processed my data and trained a few variations of models in doc2vec to get both word embeddings and document embeddings. But my issue is that I feel a little lost with where to go from here. I've read that doc2vec can be used as an input to a deeper neural network for training such as an LSTM or even a CNN.
Could anyone help me understand how these document embeddings (and word embeddings; I trained the model in DM mode) are used as input, and what the purpose of the neural net would be in this case? Is it for clustering? I understand the question is a little open-ended, but I'm quite new to all this and any help would be appreciated.
If you have trained a d-dimensional doc2vec vector for each document, that vector becomes the input for that particular tweet. With n documents, this gives an n × d matrix, which can then be fed to a neural network. Note, however, that LSTM and CNN models are used for supervised learning problems (where you have labelled data).
If you don't have labelled data, then go for unsupervised learning; clustering comes under this. You can run different clustering algorithms and recommend based on the resulting clusters.
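As a rough sketch of that unsupervised route, the trained document vectors can be pulled out as the n × d matrix and handed to a clustering algorithm (the toy tweets below are made up):

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.cluster import KMeans

    tweets = [["new", "phone", "review"],
              ["election", "results", "tonight"],
              ["phone", "camera", "test"]]
    docs = [TaggedDocument(words, [i]) for i, words in enumerate(tweets)]

    model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20, dm=1)  # dm=1 is DM mode

    X = model.dv.vectors                                  # the n x d document-embedding matrix
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)

    # To recommend, suggest tweets from the same cluster, or use
    # model.dv.most_similar(0) for a direct nearest-neighbour lookup.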

Can gensim's document similarity be used for supervised classification?

Gensim has a document similarity feature: given a query document, it outputs the similarity of that document to every document in its index.
Can this be used as an "approximate" version of supervised classification?
I know gensim's word2vec uses deep learning; is that involved in the step above?
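What the question describes amounts to nearest-neighbour classification: label a query document with the label of its most similar indexed document. A rough sketch with gensim's TF-IDF similarity index (the corpus and labels are made-up examples; no deep learning is involved in this particular step):

    from gensim import corpora, models, similarities

    train_texts = [["good", "service"], ["terrible", "food"], ["great", "value"]]
    labels = ["pos", "neg", "pos"]  # hypothetical labels for the indexed documents

    dictionary = corpora.Dictionary(train_texts)
    corpus = [dictionary.doc2bow(t) for t in train_texts]
    tfidf = models.TfidfModel(corpus)
    index = similarities.MatrixSimilarity(tfidf[corpus], num_features=len(dictionary))

    query = dictionary.doc2bow(["good", "food"])
    sims = index[tfidf[query]]    # similarity of the query to every indexed document
    print(labels[sims.argmax()])  # 1-NN "classification" by cosine similarity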

How do I update a trained model (weka.classifiers.functions.MultilayerPerceptron) with new training data in Weka?

I would like to load a model I trained before and then update this model with new training data. But I found this task hard to accomplish.
I have learnt from Weka Wiki that
Classifiers implementing the weka.classifiers.UpdateableClassifier interface can be trained incrementally.
However, the regression model I trained uses the weka.classifiers.functions.MultilayerPerceptron classifier, which does not implement UpdateableClassifier.
I then checked the Weka API, and it turns out that no regression classifier implements UpdateableClassifier.
How can I train a regression model in Weka, and then update the model later with new training data after loading the model?
I have some data mining experience with Weka as well as with scikit-learn and R, and updatable regression models do not exist in Weka or scikit-learn as far as I know. Some R libraries do support updating regression models (take a look at this linear regression update function, for example: http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html), so if you are free to switch data mining tools this might help you out.
If you need to stick to Weka, then I'm afraid you would probably need to implement such a model yourself, but since I'm not a complete Weka expert, please check with the folks on the Weka mailing list (http://weka.wikispaces.com/Weka+Mailing+List).
The SGD classifier implementation in Weka supports multiple loss functions. Among them are two loss functions meant for linear regression, viz. the epsilon-insensitive and Huber loss functions.
Therefore one can train a linear regression with SGD, as long as either of these two loss functions is used to minimize the training error.
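For comparison, scikit-learn's SGDRegressor exposes the same two regression losses and also supports incremental updates via partial_fit(), so an equivalent setup outside Weka would look roughly like this (load_batches() is again a hypothetical stand-in for batches of new training data):

    from sklearn.linear_model import SGDRegressor

    # Linear regression trained by SGD; both losses below mirror the Weka options.
    reg = SGDRegressor(loss="epsilon_insensitive")  # or loss="huber"

    for X_batch, y_batch in load_batches():  # hypothetical chunked data loader
        reg.partial_fit(X_batch, y_batch)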
