How to train an ML model? [duplicate] - machine-learning

This question already has an answer here:
How to create & train a neural model to use for Core ML [closed]
(1 answer)
Closed 5 years ago.
As you may know, Apple introduced Core ML for iOS 11 at this year's WWDC. This framework makes use of an already trained ML model in a specific format; you can convert your source model if it doesn't match that format. Apple also makes some already trained ML models available here for download and direct integration.
On the other hand, they also mentioned at WWDC 2017 that you can train a model using tools such as Caffe or Keras.
I'd like to train a model for a more specific purpose than the ones Apple already provides, which look quite generic. But I'm not an ML expert and I'd appreciate a starting point for this.
Where can I find models that I can train? And then, how can I train them? I've been looking for posts or tutorials on this without success. I read some posts like this one, but it doesn't provide the guidelines I need.

You will find pretrained (CNN) Keras models documented here: https://keras.io/applications/
The Keras blog is a good starting point too: https://blog.keras.io/category/tutorials.html
And these are very good tutorials: http://machinelearningmastery.com/start-here/
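For instance, here is a minimal sketch of fine-tuning one of those pretrained Keras applications on your own classes (transfer learning). The choice of MobileNetV2, the "data/train" directory layout (one subfolder per class), the class count, image size, and epochs are all placeholder assumptions, not anything prescribed by the links above:

```python
# Hedged sketch: transfer learning with a pretrained Keras application.
# "data/train" (one subfolder per class) and num_classes are assumptions.
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2

num_classes = 5
base = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained features frozen at first

model = keras.Sequential([
    base,
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

train_data = keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="categorical")

model.fit(train_data, epochs=5)
model.save("my_classifier.h5")  # this file can then be converted for Core ML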

Related

How to know if an image dataset is learnable/trainable?

I did some research; most of the responses only answered the question of how images get learned, and the answer is through learning hidden features. But I wonder how to determine whether a set of images can be learned by a machine learning model, such as a DNN. I understand that the convolutional layers of current DNNs can learn from images and extract only the useful features, but is there any study proposing a metric or an evaluation of the dataset itself, so that we can say a dataset is learnable once some condition is satisfied?
I tried searching for papers on this, but I did not find any useful answers to the question.

ML.NET ImageClassification incremental learning

Does the ML.NET image classification trainer support incremental learning? If yes, can anybody show me an example or point me to a topic to read about it?
If by incremental learning you're referring to taking the weights from a trained model and using them as the starting point to continue training, that's not supported at the moment. Technically, though, you're not starting from scratch with the image classification trainer, since it uses transfer learning to train only the last layer of the chosen pretrained image classification network; but incremental learning on your own trained model is currently not supported. I would suggest posting an issue in the repo requesting this feature, so others who may also want it are able to upvote / comment on it.
https://github.com/dotnet/machinelearning/issues/new/choose

Fine-tuning GPT-2/3 on new data [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 1 year ago.
I'm trying to wrap my head around training OpenAI's language models on new data sets. Is there anyone here with experience in that regard?
My idea is to feed either GPT-2 or GPT-3 (I don't have API access to 3, though) a textbook, train it on that text, and be able to "discuss" the content of the book with the language model afterwards. I don't think I'd have to change any of the hyperparameters; I just need more data in the model.
Is it possible??
Thanks a lot for any (also conceptual) help!
Presently there is no way to fine-tune GPT-3 as we can do with GPT-2 or GPT-Neo / NeoX. This is because the model is kept on OpenAI's servers and requests have to be made via the API. A Hacker News post says that fine-tuning GPT-3 is planned or under development.
Having said that, OpenAI provides an Answers API for GPT-3, to which you can supply context documents (up to 200 files / 1 GB). That API could then be used as a way to discuss the content with the model.
EDIT:
OpenAI has recently introduced a fine-tuning beta.
https://beta.openai.com/docs/guides/fine-tuning
Thus the best answer to this question is to follow the description at that link.
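As a rough illustration, here is a minimal sketch using the pre-1.0 openai Python package as it worked during that beta. The file name train.jsonl, the prompt/completion JSONL format, and the base model "curie" are assumptions taken from the beta docs and may well have changed in newer SDK releases:

```python
# Hedged sketch: upload data and start a fine-tune with the pre-1.0 `openai`
# Python package (class/method names may differ in newer SDK versions).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Training data is JSONL, one {"prompt": ..., "completion": ...} object per line.
# "train.jsonl" is a hypothetical file name.
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# Start a fine-tune job on an assumed base model ("curie").
job = openai.FineTune.create(training_file=upload.id, model="curie")
print(job.id)
```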
You can definitely retrain GPT-2. Are you only looking to train it for language generation purposes, or do you have a specific downstream task to which you would like to adapt GPT-2?
Both these tasks are possible and not too difficult. If you want to train the model for language generation, i.e. have it generate text on a particular topic, you can train the model exactly as it was trained during the pre-training phase. This means training it on a next-token prediction task with a cross-entropy loss function. As long as you have a dataset and decent compute power, this is not too hard to implement.
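For instance, here is a minimal hedged sketch of that pre-training-style objective using the Hugging Face transformers library. The file "book.txt", the block size, batch size, learning rate, and epoch count are all placeholder assumptions:

```python
# Hedged sketch: continue training GPT-2 on your own text with the
# next-token-prediction (cross-entropy) objective, via Hugging Face transformers.
# "book.txt" and all hyperparameters below are placeholder assumptions.
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Tokenize the book and split it into fixed-length blocks.
text = open("book.txt", encoding="utf-8").read()
ids = tokenizer(text, return_tensors="pt").input_ids[0]
block_size = 512
blocks = [ids[i:i + block_size] for i in range(0, len(ids) - block_size, block_size)]

loader = DataLoader(blocks, batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):
    for batch in loader:
        # Passing labels=input_ids makes the model compute the shifted
        # next-token cross-entropy loss internally.
        outputs = model(input_ids=batch, labels=batch)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("gpt2-finetuned-book")
```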
When you say, 'discuss' the content of the book, it seems to me that you are looking for a dialogue model/chatbot. Chatbots are trained in a different way and if you are indeed looking for a dialogue model, you can look at DialoGPT and other models. They can be trained to become task-oriented dialog agents.

Options for in-cloud deep learning iOS

Are there any in-cloud deep-learning solutions that make data predictions?
For example, the user writes some text into a text field and the algorithm (deep learning code) should suggest one of 8 categories based on the input.
If it suggests the wrong variant, the user may select the correct one, and the algorithm should improve itself in real time without a new app release. The learned model should also be shared between users.
Or another example:
The user writes some text into a field, and the algorithm improves that text based on the training input.
Are there any solutions for that available right now on iOS?
Which is the best in terms of price/value?
Update: Core ML is not an option because it doesn't share the model between users and requires an app release to update the model.
It seems to me that what you are looking for can be covered by Core ML, starting in iOS 11.
Here are two links to the WWDC talks:
Introducing Core ML
Core ML in depth
The first step is for you to build a Core ML model.
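For example, here is a minimal hedged sketch of converting a model already trained elsewhere into the Core ML format with the coremltools Python package. The model file name and the class labels are placeholder assumptions, and the call shown is the older Keras converter; newer coremltools releases use coremltools.convert instead:

```python
# Hedged sketch: convert a trained Keras classifier to a .mlmodel file
# with coremltools (older Keras converter API; newer releases use coremltools.convert).
import coremltools

# "text_classifier.h5" and the label list are hypothetical placeholders.
mlmodel = coremltools.converters.keras.convert(
    "text_classifier.h5",
    class_labels=["cat1", "cat2", "cat3", "cat4",
                  "cat5", "cat6", "cat7", "cat8"],
)
mlmodel.save("TextClassifier.mlmodel")
```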
Hope this helps

Binary Classification Model training in CoreML

I have just started exploring Core ML and was wondering if there is a way to train a binary classification model with it.
Please point me to any references or examples, as I am an ML noob.
Core ML doesn't offer any APIs for building or training models. It works with models you've already trained elsewhere (Keras, Caffe, etc.) to perform whatever prediction or classification task you built the model for. See Apple's Core ML docs for info on how to convert a model for use with Core ML.
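As a rough illustration of that route, here is a minimal sketch of a binary classifier trained in Keras, which could afterwards be converted with coremltools as Apple's docs describe. The feature dimension and the random training data are placeholder assumptions:

```python
# Hedged sketch: a tiny binary classifier trained in Keras, which could
# afterwards be converted to Core ML with coremltools.
# The feature dimension and the random data are placeholder assumptions.
import numpy as np
from tensorflow import keras

num_features = 20
x_train = np.random.rand(1000, num_features)
y_train = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(num_features,)),
    keras.layers.Dense(1, activation="sigmoid"),  # single sigmoid output for binary classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)

model.save("binary_classifier.h5")  # can then be converted with coremltools
```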
As of macOS 10.14 (Mojave), Apple does offer building and training of models via Create ML. In Xcode you can train models in various ways.
I don't believe it currently supports a binary classifier directly, but if you can build an image set of X and NOT X you could emulate one.
Apple's Docs: https://developer.apple.com/documentation/create_ml/creating_an_image_classifier_model
