I'm trying to perform a complicated function approximation in TensorFlow with several layers. The function is going to be trained on a lot of generated data, so I want to be able to generate the data at runtime simply because of the sheer quantity needed. I decided to try using an Estimator with a ModelFnOps, but now that I'm writing the training loop I can't find any documentation on using something like eval(feed_dict=my_feed_dict) as shown here. The only thing I've found so far is calling fit() on the Estimator, but that requires passing the entire data set (unless I've misunderstood the purpose of that function). Is there any way to feed in single examples or batches within a loop to train an Estimator?
You can feed in your data via an input function. This input function is a first-class function that gets passed to the estimator (or the eval/train/predict methods).
You can also make use of the dataset API to create data feeders and iterators and return the feeder operations in your input functions.
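A minimal sketch of that pattern, assuming the tf.estimator / tf.data style (my_model_fn and the generated toy data below are placeholders, not from the question):

import numpy as np
import tensorflow as tf

def sample_generator():
    # Generate (features, target) pairs at runtime instead of materializing the whole set.
    while True:
        x = np.random.uniform(-1.0, 1.0, size=(3,)).astype(np.float32)
        y = np.array([np.sin(x).sum()], dtype=np.float32)  # toy target function
        yield x, y

def train_input_fn():
    ds = tf.data.Dataset.from_generator(
        sample_generator,
        output_types=(tf.float32, tf.float32),
        output_shapes=((3,), (1,)))
    return ds.batch(32)

estimator = tf.estimator.Estimator(model_fn=my_model_fn)  # my_model_fn is assumed to exist
estimator.train(input_fn=train_input_fn, steps=10000)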
I have a binary classification problem I'm trying to tackle in Keras. To start, I was following the usual MNIST example, using softmax as the activation function in my output layer.
However, in my problem the two classes are highly imbalanced (one appears ~10 times more often than the other). Even more critically, they are asymmetric in the way they may be confused.
Mistaking an A for a B is way less severe than mistaking a B for an A. Just like a caveman trying to classify animals into pets and predators: mistaking a pet for a predator is no big deal, but the other way round will be lethal.
So my question is: how would I model something like this with Keras?
thanks a lot
A non-exhaustive list of things you could do:
Generate a balanced data set using data augmentations. If the data are images, you can add image augmentations in a custom data generator that will output balanced amounts of data from each class per batch and save the results to a new data set. If the data are tabular, you can use a library like imbalanced-learn to perform over/under sampling.
As @Daniel said, you can use class_weights during training (in the fit method) so that mistakes on the important class are penalized more. See this tutorial: Classification on imbalanced data. The same idea can be implemented with a custom loss function, with or without class_weights, during training (see the sketch below).
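A minimal sketch of the class_weight route, assuming the ~10:1 imbalance from the question (the network, weights and toy data below are illustrative, not a recommendation):

import numpy as np
from tensorflow import keras

# Class 1 is assumed ~10x rarer than class 0, so mistakes on it are penalized ~10x more.
class_weight = {0: 1.0, 1: 10.0}

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(20,)),
    keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

X = np.random.rand(1000, 20).astype('float32')       # illustrative features
y = (np.random.rand(1000) < 0.1).astype('float32')   # ~10% positives, i.e. imbalanced labels
model.fit(X, y, epochs=5, batch_size=64, class_weight=class_weight)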
A module's parameters get changed during training; they are what is learned during training of a neural network. But what is a buffer, and is it learned during neural network training?
The PyTorch doc for the register_buffer() method reads:
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the persistent state.
As you already observed, model parameters are learned and updated using SGD during the training process.
However, sometimes there are other quantities that are part of a model's "state" and should be
- saved as part of state_dict.
- moved to cuda() or cpu() with the rest of the model's parameters.
- cast to float/half/double with the rest of the model's parameters.
Registering these "arguments" as the model's buffers allows PyTorch to track them and save them like regular parameters, but prevents PyTorch from updating them with the SGD mechanism.
An example of a buffer can be found in the _BatchNorm module, where running_mean, running_var and num_batches_tracked are registered as buffers and updated by accumulating statistics of the data forwarded through the layer. This is in contrast to the weight and bias parameters, which learn an affine transformation of the data using regular SGD optimization.
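A minimal sketch of the same idea in a custom module (the module and its update rule are illustrative, not the actual _BatchNorm code):

import torch
import torch.nn as nn

class RunningMeanScaler(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))               # learned by the optimizer
        self.register_buffer('running_mean', torch.zeros(dim))   # tracked state, no gradients

    def forward(self, x):
        if self.training:
            # The buffer is updated by accumulating statistics, not by SGD.
            self.running_mean = 0.9 * self.running_mean + 0.1 * x.mean(dim=0).detach()
        return (x - self.running_mean) * self.scale

m = RunningMeanScaler(4)
print(list(m.state_dict().keys()))           # ['scale', 'running_mean'] -> both are saved
print([n for n, _ in m.named_parameters()])  # ['scale'] -> only this is optimized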
Both parameters and buffers are things you create for a module (nn.Module).
Say you have a linear layer nn.Linear. You already have weight and bias parameters. But if you need a new parameter you use register_parameter() to register a new named parameter that is a tensor.
When you register a new parameter it will appear inside the module.parameters() iterator, but when you register a buffer it will not.
The difference:
Buffers are named tensors that are not updated by gradients at every step, unlike parameters.
For buffers, you write your own update logic (it is entirely up to you).
The good thing is that when you save the model, all parameters and buffers are saved, and when you move the model to or from a CUDA device, the parameters and buffers move along with it.
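A minimal sketch of the two registration calls on an existing layer (the extra names below are made up for illustration):

import torch
import torch.nn as nn

layer = nn.Linear(3, 2)
layer.register_parameter('my_extra_weight', nn.Parameter(torch.randn(2)))  # learnable
layer.register_buffer('my_counter', torch.zeros(1))                        # state only

print([name for name, _ in layer.named_parameters()])  # ['weight', 'bias', 'my_extra_weight']
print([name for name, _ in layer.named_buffers()])     # ['my_counter']
print(list(layer.state_dict().keys()))                 # parameters and buffers are both saved
# layer.cuda() / layer.float() would move and cast the buffer together with the parameters.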
I am using the k-means clustering technique from a video, but I do not understand why we use the .fit method in k-means clustering.
kmeans = KMeans(n_clusters=5, random_state=0)
kmeans.fit(X)  # why do we use this fit method here?
kmeans is your defined model.
To train our model, we call kmeans.fit() here.
The argument in kmeans.fit(argument) is the data set that needs to be clustered.
After calling the fit() function, our model is ready.
We then get the labels for those clusters using
data_labels = kmeans.labels_
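Putting the pieces together, a minimal runnable sketch (the data here are random and purely illustrative):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)                   # illustrative data set
kmeans = KMeans(n_clusters=5, random_state=0)
kmeans.fit(X)                                # runs the clustering algorithm on X
data_labels = kmeans.labels_                 # cluster index assigned to each row of X
centers = kmeans.cluster_centers_            # coordinates of the 5 centroids
new_labels = kmeans.predict(np.random.rand(10, 2))  # assign new points to the nearest centroid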
Because the sklearn people early on decided that everything should have fit(X, y) and predict(X) functions. And it likely is not going to change, because of backwards compatibility...
It does not make a whole lot of sense for clustering, which does not use y (it defaults to None and is ignored). And there is no real use case where you would want to drop-in replace a classifier with a clustering algorithm, either.
Nevertheless, you'll need to run the algorithm at some point. It is an anti-pattern to do this in a constructor (so KMeans(n_clusters=5, data=X) is a no-no), so you will have to invoke some method. You may as well call it fit, which is at least an apt name for optimization-based methods such as k-means.
You could, however, simply use the method k_means(X, n_clusters=5) instead of using the class. Then it would be a single line (see the source code of fit for an example).
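For comparison, a sketch of the function-style call (per the scikit-learn docs it returns the centroids, the labels and the inertia):

import numpy as np
from sklearn.cluster import k_means

X = np.random.rand(100, 2)                   # illustrative data set
centroids, labels, inertia = k_means(X, n_clusters=5, random_state=0)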
I wrote a program which contains an algorithm called distributed randomized gradient descent (DRGD). There are some internal variables in the algorithm that are used to calculate the step lengths. The training algorithms I need are much more complex than DRGD, so there will be more internal variables. If we preserve these variables, we can pause training to test the model, and then resume training again.
It is possible to save the state of the trainer and resume training by calling the .save_states() and .load_states() methods on the Trainer class when training with MXNet Gluon.
Here is an example:
trainer = gluon.Trainer(net.collect_params(), 'adam')   # optimizer state is held by the trainer
trainer.save_states('training.states')                  # write the trainer/optimizer state to disk
trainer.load_states('training.states')                  # restore it later to resume training
If you want to store some data across multiple devices (GPUs or machines) you can use KVStore. Here is the tutorial on how to use it.
Please note, that KVStore is considered to be quite an advanced feature, and should be used with care.
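A minimal sketch of KVStore usage, roughly following the linked tutorial (the key and shapes are illustrative):

import mxnet as mx

kv = mx.kv.create('local')              # 'dist_sync' / 'dist_async' for multiple machines
shape = (2, 3)
kv.init(3, mx.nd.ones(shape) * 2)       # key 3 holds a (2, 3) array
kv.push(3, mx.nd.ones(shape) * 8)       # push new values (aggregated across devices)
out = mx.nd.zeros(shape)
kv.pull(3, out=out)                     # read the current value back
print(out.asnumpy())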
I am not sure, but it could be that what you call a "Trainer" in the MXNet world may actually be called an "Optimizer". So, please consider reading this API page as well.
How is train_on_batch() different from fit()? In what cases should we use train_on_batch()?
For this question, there is a simple answer from the primary author:
With fit_generator, you can use a generator for the validation data as
well. In general, I would recommend using fit_generator, but using
train_on_batch works fine too. These methods only exist for the sake of
convenience in different use cases, there is no "correct" method.
train_on_batch allows you to expressly update weights based on a collection of samples you provide, without regard to any fixed batch size. You would use this in cases where that is what you want: to train on an explicit collection of samples. You could use that approach to maintain your own iteration over multiple batches of a traditional training set, but allowing fit or fit_generator to iterate over batches for you is likely simpler.
One case when it might be nice to use train_on_batch is for updating a pre-trained model on a single new batch of samples. Suppose you've already trained and deployed a model, and sometime later you've received a new set of training samples previously never used. You could use train_on_batch to directly update the existing model only on those samples. Other methods can do this too, but it is rather explicit to use train_on_batch for this case.
Apart from special cases like this (either where you have some pedagogical reason to maintain your own cursor across different training batches, or else for some type of semi-online training update on a special batch), it is probably better to just always use fit (for data that fits in memory) or fit_generator (for streaming batches of data as a generator).
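A hedged sketch of that "update an already deployed model on a fresh batch" case (the file path, shapes and data below are placeholders):

import numpy as np
from tensorflow import keras

model = keras.models.load_model('deployed_model.h5')    # hypothetical saved model

new_X = np.random.rand(64, 20).astype('float32')        # the newly received samples
new_y = np.random.randint(0, 2, size=(64, 1)).astype('float32')

loss = model.train_on_batch(new_X, new_y)               # one gradient update on exactly this batch
model.save('deployed_model.h5')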
train_on_batch() gives you greater control of the state of the LSTM, for example, when using a stateful LSTM and controlling calls to model.reset_states() is needed. You may have multi-series data and need to reset the state after each series, which you can do with train_on_batch(), but if you used .fit() then the network would be trained on all the series of data without resetting the state. There's no right or wrong, it depends on what data you're using, and how you want the network to behave.
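A minimal sketch of that stateful case (the series, shapes and architecture are illustrative):

import numpy as np
from tensorflow import keras

batch_size, timesteps, features = 1, 10, 3
model = keras.Sequential([
    keras.layers.LSTM(16, stateful=True,
                      batch_input_shape=(batch_size, timesteps, features)),
    keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')

series_list = [np.random.rand(50, timesteps, features) for _ in range(3)]   # 3 independent series
targets_list = [np.random.rand(50, 1) for _ in range(3)]

for series, targets in zip(series_list, targets_list):
    for i in range(0, len(series), batch_size):
        model.train_on_batch(series[i:i + batch_size], targets[i:i + batch_size])
    model.reset_states()   # reset the LSTM state between independent series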
train_on_batch can also give a performance increase over fit and fit_generator if you're using large data sets and don't have easily serializable data (like high-rank numpy arrays) to write to TFRecords.
In this case you can save the arrays as numpy files and load smaller subsets of them (traina.npy, trainb.npy, etc.) into memory when the whole set won't fit. You can then use tf.data.Dataset.from_tensor_slices and call train_on_batch on your sub-dataset, then load another subset and call train_on_batch again, and so on. In this way you've trained on your entire set and can control exactly how much of your data set, and which parts, train your model. You can then define your own epochs, batch sizes, etc. with simple loops and functions that grab data from your data set.
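A rough sketch of that loop, assuming the .npy subsets mentioned above exist on disk (the label file names and the tiny model are made up; plain numpy slicing is used to keep it short):

import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(20,))])
model.compile(optimizer='adam', loss='mse')

batch_size = 128
for part in ['traina.npy', 'trainb.npy']:                    # each file is one subset that fits in memory
    X_part = np.load(part)                                   # assumed shape (n, 20)
    y_part = np.load(part.replace('.npy', '_labels.npy'))    # hypothetical matching label files
    for i in range(0, len(X_part), batch_size):
        model.train_on_batch(X_part[i:i + batch_size], y_part[i:i + batch_size])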
Indeed @nbro's answer helps. Just to add a few more scenarios: say you are training a seq2seq model or a large network with one or more encoders. We can create custom training loops using train_on_batch and use a part of our data to validate the encoder directly without using callbacks. Writing callbacks for a complex validation process could be difficult. There are several cases where we wish to train on a batch.
Regards,
Karthick
From Keras - Model training APIs:
fit: Trains the model for a fixed number of epochs (iterations on a dataset).
train_on_batch: Runs a single gradient update on a single batch of data.
We can use it for GANs, where we update the discriminator and the generator using one batch of the training data set at a time. I saw Jason Brownlee use train_on_batch in his tutorial How to Develop a 1D Generative Adversarial Network From Scratch in Keras.
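A rough sketch of that per-batch GAN update (the tiny models and random "real" data below are illustrative, not the tutorial's architecture):

import numpy as np
from tensorflow import keras

latent_dim, data_dim, half_batch = 8, 4, 64
X_train = np.random.rand(1000, data_dim).astype('float32')   # stand-in for real samples

discriminator = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(data_dim,)),
    keras.layers.Dense(1, activation='sigmoid')])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

generator = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(latent_dim,)),
    keras.layers.Dense(data_dim)])

discriminator.trainable = False                 # freeze it inside the stacked model only
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

for step in range(100):
    idx = np.random.randint(0, len(X_train), half_batch)
    noise = np.random.normal(0, 1, (half_batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(X_train[idx], np.ones((half_batch, 1)))    # real batch -> label 1
    discriminator.train_on_batch(fake, np.zeros((half_batch, 1)))           # fake batch -> label 0
    noise = np.random.normal(0, 1, (2 * half_batch, latent_dim))
    gan.train_on_batch(noise, np.ones((2 * half_batch, 1)))                 # generator wants fakes called real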
Tip for quick search: press Ctrl+F and type the term you want to find in the search box (train_on_batch, for example).