Text classification with BERT and PyTorch Lightning

I am currently working on multi-label text classification with BERT and PyTorch Lightning. I am new to machine learning and am confused about how to train my model on AWS.
My questions: which Accelerated Computing (Amazon EC2) instance should I use, given that I have a large dataset with 377 labels? And will I be able to run my code through Jupyter notebooks with libraries like torch? Thank you for your time.
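For context, a minimal sketch of what such a model can look like (assuming the Hugging Face transformers library; the model name, learning rate, and batch keys are illustrative):

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from transformers import BertModel

NUM_LABELS = 377  # matches the label count mentioned above

class BertMultiLabelClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size, NUM_LABELS)
        # BCEWithLogitsLoss scores each label independently, which is
        # what multi-label (as opposed to multi-class) classification needs.
        self.criterion = nn.BCEWithLogitsLoss()

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(outputs.pooler_output)

    def training_step(self, batch, batch_idx):
        logits = self(batch["input_ids"], batch["attention_mask"])
        loss = self.criterion(logits, batch["labels"].float())
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=2e-5)
```

Since torch, transformers, and pytorch_lightning are all pip-installable, code like this runs fine inside Jupyter notebooks on a GPU-backed EC2 instance.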

Related

How can we stack an XGBoost model with dense neural networks

I am a newbie in ML and was trying to solve a multi-class classification problem. I used XGBoost to reduce the log loss, and I also tried a dense neural network, which also seems to work well. Is there a way I can stack these two models to further reduce the log loss?
You can do it with Apple's coremltools.
Take your XGBoost model and convert it to an MLModel using the converter.
Create a pipeline model combining that model with any neural networks.
I'm sure there are other tools that support pipelines, but you will need to convert the XGBoost model to some other format either way.
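A sketch of the conversion step (the xgboost converter ships with coremltools, though its exact keyword arguments differ between releases; the tiny training set below is just for illustration):

```python
import coremltools
import xgboost as xgb

# Train a toy booster standing in for your real model.
dtrain = xgb.DMatrix([[0.0, 1.0], [1.0, 0.0]], label=[0, 1])
xgb_model = xgb.train({"objective": "binary:logistic"}, dtrain,
                      num_boost_round=2)

# Convert the booster to a Core ML model and save it; this model can
# then be wired into a Core ML pipeline alongside a neural network.
mlmodel = coremltools.converters.xgboost.convert(xgb_model)
mlmodel.save("xgboost_stage.mlmodel")
```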

TensorFlow - saving a model trained on a PC for use in mobile TensorFlow or OpenCV

Currently I'm still working on an age-classification project using a convolutional neural network built in TensorFlow. How, and in what format, do I save the state of the model trained on my PC so that I can use it with TensorFlow on my mobile app, or even OpenCV's tiny-dnn (dnn_modern)? I'm not sure the checkpoint file would work in OpenCV's tiny-dnn.
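For the TensorFlow half of the question, one common route (a sketch assuming TensorFlow 2.x; the tiny Sequential model stands in for the real age classifier, and this does not cover the tiny-dnn side) is to export a SavedModel and convert it with TensorFlow Lite:

```python
import tensorflow as tf

# Placeholder network; replace with your trained age classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 age buckets
])

# Export the full graph + weights as a SavedModel directory.
tf.saved_model.save(model, "export/age_classifier")

# Convert the SavedModel to a .tflite file for use on mobile.
converter = tf.lite.TFLiteConverter.from_saved_model("export/age_classifier")
tflite_model = converter.convert()

with open("age_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```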

How intensive is training a machine learning algorithm?

I'd like to make an app using iOS's new CoreML framework that does image recognition. To do so I'd probably have to train my own model, and I'm wondering exactly how much data and compute power that would require. Is it something I could feasibly accomplish on a dual-core i5 MacBook Pro using Google Images for source data, or would it be much more involved?
It depends on what sort of images you want to train your model to recognize.
What is often done is fine-tuning an existing model. You take a pretrained version of Inception-v3 (let's say) and then replace the final layer with your own. You train this last layer on your own images.
You still need a fair number of training images (a few hundred per category, but more is better), and you can do this on your MacBook Pro in anywhere from 30 minutes to a few hours.
TensorFlow comes with a script that makes it really easy to do this. Keras has a great blog post on how to do this. I used the TensorFlow script to re-train Inception-v3 to tell my two cats apart, from 50 or so images of each cat.
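The Keras route looks roughly like this (a sketch: the two-class head matches the two-cats example above, and the training call is left commented out since the data is your own):

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Freeze a pretrained Inception-v3 backbone and train only a new
# classification head on your own images (fine-tuning the last layer).
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # keep the pretrained weights fixed

model = models.Sequential([
    base,
    layers.Dense(2, activation="softmax"),  # e.g. two cats -> two classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # your own data here
```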
If you want to train from scratch you probably want to do this in the cloud using AWS, Google's Cloud ML Engine, or something easy like FloydHub.

Is it possible to use Caffe only for classification, without any training?

Some users might see this as an opinion-based question, but if you look closely, I am trying to explore the use of Caffe purely as a testing platform, as opposed to its currently popular use as a training platform.
Background:
I have installed all dependencies using Jetpack 2.0 on Nvidia TK1.
I have installed caffe and its dependencies successfully.
The MNIST example is working fine.
Task:
I have been given a convnet with all standard layers (not an open-source model).
The network weights, bias values, etc. are available after training. The training has not been done via Caffe (it is a pretrained network).
The weights and biases are all in the form of MATLAB matrices (actually in a .txt file, but I can easily write code to turn them into matrices).
I CANNOT train this network with Caffe and must use the given weights and bias values ONLY for classification.
I have my own dataset in the form of 32x32 pixel images.
Issue:
In all tutorials, details are given on how to deploy and train a network, and then use the generated .proto and .caffemodel files to validate and classify. Is it possible to implement this network in Caffe and directly use my weights/biases and training set to classify images? What are the available options here? I am completely new to Caffe, so be kind. Thank you for the help!
The only real issue here is:
How do you initialize a Caffe net from weights stored in text files?
I assume you have a 'deploy.prototxt' describing the net's architecture (layer types, connectivity, filter sizes, etc.). The only remaining issue is how to set the internal weights of caffe.Net to pre-defined values saved as text files.
You can get access to caffe.Net internals; see the net surgery tutorial on how this can be done in Python.
Once you are able to set the weights according to your text file, you can net.save(...) the new weights into a binary caffemodel file to be used from now on. You do not have to train the net if you already have trained weights, and you can use it for generating predictions ("test").
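A net-surgery sketch of that idea (layer and file names below are placeholders for your own net):

```python
import numpy as np
import caffe

# Load the architecture only; weights start uninitialized.
net = caffe.Net("deploy.prototxt", caffe.TEST)

# Parse the text files and copy values into the layer blobs.
# params[layer][0] holds the weights, params[layer][1] the biases.
weights = np.loadtxt("conv1_weights.txt").reshape(
    net.params["conv1"][0].data.shape)
biases = np.loadtxt("conv1_biases.txt").reshape(
    net.params["conv1"][1].data.shape)

net.params["conv1"][0].data[...] = weights
net.params["conv1"][1].data[...] = biases
# ...repeat for the remaining layers...

# Persist as a binary .caffemodel for reuse.
net.save("pretrained_from_matlab.caffemodel")
```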

How can we train and test a neural network with the UNB ISCX benchmark dataset?

I have tried the KDD dataset with my neural net and now I want to extend this to the ISCX dataset. Part of this dataset contains labelled HTTP DoS attacks that replicate real network traffic, but I couldn't figure out how to convert these records into numeric neural-network inputs to train and test my net to classify these intrusion vectors.
Any pointers appreciated.
I haven't worked with this dataset, but if you have sufficient information about the features and the values each feature can take, you can create an .arff file quickly and then use WEKA very easily.
You could use many applications, but user-friendly ones such as WEKA's GUI can handle discrete and non-numeric features very easily, and can help you start working with your dataset as fast as possible.
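For instance, a minimal ARFF file can be written by hand (the feature names and values below are illustrative placeholders, not the real ISCX schema):

```python
# Records parsed from the raw traffic: (duration, bytes, protocol, label).
records = [
    (120, 4500, "http", "dos"),
    (3, 60, "http", "normal"),
]

with open("iscx.arff", "w") as f:
    # Header: relation name plus one @ATTRIBUTE line per feature;
    # nominal features list their allowed values in braces.
    f.write("@RELATION iscx\n\n")
    f.write("@ATTRIBUTE duration NUMERIC\n")
    f.write("@ATTRIBUTE bytes NUMERIC\n")
    f.write("@ATTRIBUTE protocol {http,tcp,udp}\n")
    f.write("@ATTRIBUTE class {dos,normal}\n\n")
    f.write("@DATA\n")
    for duration, nbytes, proto, label in records:
        f.write(f"{duration},{nbytes},{proto},{label}\n")
```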
